OpenAI has built a system for watermarking text created by ChatGPT, along with a tool to detect those watermarks, and has had it ready for about a year. But the company is divided internally over whether to release it, because doing so could hurt its profits.
The company found the method to be "99.9% effective" at detecting AI-generated text when enough of it was available, and in a survey it commissioned, "a quarter of respondents globally supported AI detection tools." But OpenAI appears worried that watermarking would alienate its users: in a survey of ChatGPT users, nearly 30% said they would use the software less if watermarks were implemented.
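OpenAI has not disclosed how its watermark works, but statistical text watermarks of this general kind typically bias the model's token sampling toward a pseudorandom "green list" derived from preceding tokens, and a detector later tests for that bias. A minimal toy sketch of the idea (every name and parameter here is illustrative, not OpenAI's actual scheme):

```python
import hashlib
import random


def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Seed a PRNG from the previous token so the detector, which sees only
    # the text, can recompute exactly the same "green" subset of the vocabulary.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))


def generate_watermarked(vocab: list[str], length: int, seed: int = 0) -> list[str]:
    # Toy "model": picks tokens uniformly, but only from the green list of the
    # previous token. A real model would merely upweight green tokens.
    rng = random.Random(seed)
    tokens = [rng.choice(vocab)]
    for _ in range(length - 1):
        tokens.append(rng.choice(sorted(green_list(tokens[-1], vocab))))
    return tokens


def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    # Detector: recompute each green list and measure how often it was hit.
    # Unwatermarked text hits it about half the time; watermarked text far more.
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev, vocab))
    return hits / (len(tokens) - 1)
```

With enough tokens the gap is statistically unambiguous, which is consistent with high detection rates on long passages; it also shows why paraphrasing attacks work, since rewording replaces tokens and pushes the green fraction back toward chance.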
Some employees raised other concerns: simple tricks could strip the watermarks, such as round-tripping text through Google Translate or having ChatGPT add emojis and then deleting them. Still, employees believe the method itself is effective. Given user dissatisfaction, however, some of the alternative approaches suggested "may be less controversial among users, but unproven." (WSJ)