OpenAI, Creator Of Popular ChatGPT AI Tool, Can’t Tell If Something Was Written By AI
OpenAI recently discontinued its tool designed to distinguish between human- and AI-authored writing, explaining that the tool’s accuracy was unsatisfactory. The wind-down took effect on July 20, 2023, as stated in an updated blog post by the company. OpenAI is currently exploring more effective techniques for determining text provenance while working on mechanisms to help users identify AI-generated audio or visual content. However, details of these mechanisms have not yet been released.
The company disclosed that the classifier was not particularly successful at identifying AI-generated text, and acknowledged that it could mistakenly label human-written text as AI-created. OpenAI had previously suggested that the classifier’s accuracy might improve as it was trained on more data.
The rise of OpenAI’s ChatGPT, which quickly became one of the fastest-growing apps in history, stirred concerns among various groups. Educators, in particular, were apprehensive about students relying on the AI to complete their assignments. Citing issues surrounding accuracy, safety, and potential academic dishonesty, New York City schools prohibited the use of ChatGPT on campus. Furthermore, the potential for AI-generated misinformation has raised eyebrows, as studies indicate that AI-written content, such as tweets, may be more persuasive than human-written content.
Amid these concerns, OpenAI’s trust and safety leader recently stepped down. This comes at a time when the Federal Trade Commission is scrutinizing OpenAI’s methods of information and data verification. The company has declined to comment beyond its blog post. As the world grapples with a flood of AI-generated content, it appears that even pioneers of generative AI like OpenAI are still searching for effective ways to manage the situation.