OpenAI, the company behind DALL-E and ChatGPT, has released a free tool intended to "distinguish between text written by a human and text written by AIs." In its announcement, the company states that the classifier is "not fully reliable" and "should not be used as a primary decision-making tool," but says it can help determine whether someone is trying to pass off AI-generated text as human-written.
Using the classifier requires a free OpenAI account, but the tool itself is straightforward: paste text into a box, press a button, and it rates the text as very unlikely, unlikely, unclear if it is, possibly, or likely to have been generated by AI.
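The five-label verdict described above can be sketched as a simple thresholding step over an AI-probability score. This is an illustrative assumption only: OpenAI has not published the tool's internals, and the cutoff values below are invented for the sketch, not taken from the announcement.

```python
def label_text(ai_probability: float) -> str:
    """Map a model's AI-probability score (0.0-1.0) to one of the
    five verdict labels the classifier displays.

    The threshold values are hypothetical placeholders, not
    OpenAI's actual cutoffs.
    """
    if ai_probability < 0.10:
        return "very unlikely AI-generated"
    elif ai_probability < 0.45:
        return "unlikely AI-generated"
    elif ai_probability < 0.90:
        return "unclear if it is AI-generated"
    elif ai_probability < 0.98:
        return "possibly AI-generated"
    else:
        return "likely AI-generated"
```

In a design like this, the wide "unclear" band reflects the company's own warning that the classifier is often uncertain.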
According to OpenAI's press release, the model that powers the tool was trained on "pairs of human-written text and AI-written text on the same topic."
However, the company offers several caveats about using the tool, listing the following limitations above the text box:
- A minimum of 1,000 characters (roughly 150–250 words) is required.
- The classifier isn’t always correct; it’s capable of mislabeling both text produced by AI and text written by humans.
- AI-generated text can easily be edited to evade the classifier.
- Because the classifier was trained primarily on English content written by adults, it is likely to make mistakes on text written by children and on non-English text.
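The first restriction above, the 1,000-character minimum, is the only one a caller could enforce mechanically before submitting text. A minimal pre-check sketch, assuming nothing about the tool beyond the stated limit:

```python
MIN_CHARS = 1000  # minimum input length stated by OpenAI


def check_input(text: str) -> None:
    """Reject submissions shorter than the classifier's required minimum.

    Raises ValueError for text under MIN_CHARS characters; otherwise
    returns None and the text can be pasted into the tool.
    """
    if len(text) < MIN_CHARS:
        raise ValueError(
            f"Input is {len(text)} characters; at least {MIN_CHARS} "
            "are required (roughly 150-250 words)."
        )
```

The other limitations (mislabeling, evasion by editing, weaker performance on non-English or child-written text) are properties of the model itself and cannot be screened for this way.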
The company also warns that the classifier will sometimes "incorrectly but confidently" label human-written text as AI-generated, particularly when it differs substantially from the training data. Clearly, the classifier remains very much a work in progress.
OpenAI also says that in its evaluations the tool outperformed its previous AI-text detector, correctly labelling AI-written text as "possibly AI-written" 26% of the time while incorrectly flagging human-written text as AI-written 9% of the time.
OpenAI is not the first to build a tool for detecting ChatGPT-generated text; shortly after the chatbot rose to prominence, tools such as GPTZero, created by a student named Edward Tian to "detect AI plagiarism," appeared.
With this detection tool, OpenAI is focused particularly on the education sector. According to its press release, "identifying AI-written text has been an important point of discussion among educators," as different schools have either banned or embraced ChatGPT. The company says it is "working with educators in the US" to learn how ChatGPT is being used in classrooms, and it is also seeking feedback from anyone with experience in education.