OpenAI, the research company behind ChatGPT, has released a new tool that it says can help distinguish between text written by AI and text written by humans.
On Tuesday, the company said it was releasing a first version of the tool and that it hoped to gather feedback and share improved methods in the future.
ChatGPT’s creators cautioned that AI-written text cannot always be reliably detected. Still, the company believes effective classifiers can help identify automated misinformation campaigns, AI chatbots passed off as humans, and the use of AI tools for academic dishonesty.
The launch of the “AI Text Classifier” follows a week of discussion at schools and colleges over concerns that ChatGPT’s capacity to write just about anything on command could fuel academic dishonesty and hinder learning.
Millions of people, including teenagers and college students, started experimenting with ChatGPT after it was made available for free on the OpenAI website on November 30. While many found harmless ways to use it, the ease with which it could answer take-home test questions and help with other assignments alarmed some teachers.
By the time the new school term began, major public school districts in New York City, Los Angeles, and elsewhere had begun to prohibit its use in classrooms and on school devices.
The longer the text, the better the tool is at determining whether it was written by an AI or a human. Type in any text, whether a college admissions essay or a literary analysis of Ralph Ellison’s “Invisible Man,” and the tool will categorize it as “very unlikely, unlikely, unclear if it is, possibly, or likely” AI-generated.
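For illustration only, here is a minimal sketch of how a classifier’s likelihood score might be mapped to the tool’s five labels. The function name, the minimum-length check, and the thresholds are all assumptions invented for this example; OpenAI has not been described here as publishing its internal cutoffs.

```python
# Hypothetical sketch: mapping an AI-likelihood score (0.0 to 1.0) to the
# five verdict labels the tool reports. Thresholds are illustrative
# assumptions, not OpenAI's published values.

MIN_CHARS = 1000  # assumption: very short texts are too noisy to judge


def label_ai_likelihood(text: str, score: float) -> str:
    """Return one of the five verdict labels for a given score."""
    if len(text) < MIN_CHARS:
        return "text too short to classify reliably"
    if score < 0.10:
        return "very unlikely AI-generated"
    if score < 0.45:
        return "unlikely AI-generated"
    if score < 0.90:
        return "unclear if it is AI-generated"
    if score < 0.98:
        return "possibly AI-generated"
    return "likely AI-generated"


# Example: a long essay that the model scores at 0.95
essay = "..." * 400  # stand-in for a 1,200-character essay
print(label_ai_likelihood(essay, 0.95))  # -> "possibly AI-generated"
```

The banded design reflects the article’s point: the tool hedges its verdicts rather than issuing a binary AI-or-human answer, and it declines to judge texts that are too short.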
But it is hard to figure out how the tool arrives at a result, much like ChatGPT itself, which was trained on a vast trove of digitized books, newspapers, and online writings yet often confidently spits out falsehoods.