One of the major problems most educational institutions face is students cheating by using ChatGPT for their assignments and tests. Although there are many tools available online to detect AI-generated content, none of them is reliably accurate. Now, according to the WSJ, OpenAI has developed watermarks that can aid in detecting AI-generated content.
Highlights:
- OpenAI is testing a new feature to implement watermarks in ChatGPT-generated text.
- Watermarks may result in a loss in the quality of content generated by ChatGPT.
- Nearly 30% of surveyed users said they would use the software less if watermarking was implemented.
What Are Watermarks, and How Is OpenAI Using Them?
Watermarks are digital signatures embedded in data that help determine whether it is AI-generated.
In simple terms, companies can generate text following specific statistical patterns that cannot be noticed by humans, but if you run the text through a detection tool, it can identify the text as AI-generated. It is like leaving a trail of footprints while committing a crime.
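To make the idea concrete, here is a toy sketch of one published watermarking scheme (a "green list" approach in the style of academic work on LLM watermarks — an illustration only, since OpenAI has not disclosed its actual method): a shared secret rule pseudo-randomly marks part of the vocabulary as "green" at each step, the generator favours green tokens, and the detector simply counts how often they appear.

```python
import hashlib

GREEN_FRACTION = 0.5  # assumed share of the vocabulary marked "green" per step

def is_green(prev_token: str, token: str) -> bool:
    """Shared secret rule: hash the (previous token, candidate) pair and
    mark roughly half of all candidates as "green". Deterministic, so the
    generator and the detector always agree on which tokens are green."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def green_fraction(tokens: list[str]) -> float:
    """Detector: fraction of tokens that landed on the green list.
    Watermarked text scores well above the ~0.5 expected by chance."""
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)
```

A generator that consistently prefers green tokens produces text scoring near 1.0, while ordinary human text hovers around 0.5 — that statistical gap is what makes detection possible without the pattern being visible to a reader.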
There have been requests from many educational institutions from all over the world to introduce something like this for ChatGPT-generated content.
But is it not possible to detect AI-generated text without imprinting watermarks in the data? The answer is yes, it is possible, but such a tool would have very low accuracy. OpenAI launched an AI-written-text detector named AI Classifier last year; however, due to backlash from users over its very low accuracy, it was shut down.
Challenges For ChatGPT’s New Tool
Earlier, there were rumours that if watermarks were implemented, there would be a loss in the quality of content generated by these large language models. This is because, to implement watermarks, the company must change the way each token is generated so that the output forms a detectable pattern.
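The per-token change can be sketched as adding a small bias to the scores of "green-listed" tokens before sampling — again a hypothetical illustration of the general technique, with toy values, not OpenAI's implementation:

```python
import hashlib
import math
import random

# Toy vocabulary and bias strength -- illustrative values only.
VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "mat", "fast", "slow"]
BIAS = 2.0  # logit boost for green-listed tokens; higher = easier to detect

def is_green(prev_token: str, token: str) -> bool:
    """Shared secret rule marking roughly half the vocabulary as green."""
    return hashlib.sha256(f"{prev_token}|{token}".encode()).digest()[0] < 128

def sample_next(prev_token: str, logits: dict, rng: random.Random) -> str:
    """Sample the next token after boosting green tokens' logits.
    `logits` stands in for the raw scores a real language model would emit."""
    boosted = {tok: score + (BIAS if is_green(prev_token, tok) else 0.0)
               for tok, score in logits.items()}
    total = sum(math.exp(s) for s in boosted.values())
    r = rng.random()
    cumulative = 0.0
    for tok, score in boosted.items():
        cumulative += math.exp(score) / total
        if r <= cumulative:
            return tok
    return tok  # floating-point edge case: fall back to the last token
```

The bias skews the sampling distribution slightly toward green tokens, which is exactly why quality was a concern: a larger `BIAS` makes the watermark easier to detect but distorts the model's natural word choices more.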
According to some unconfirmed reports, OpenAI developed a watermarking system two years ago and has been running tests for about a year to determine whether the watermarks degrade content quality. The researchers found that they do not affect quality, which means the system is just a click away from release.
Nevertheless, there are now two other problems the company might have to face.
The first is: will the public accept this system? According to surveys conducted by OpenAI, nearly 30% of respondents said they would use ChatGPT less if it deployed watermarks and a rival didn't.
This would heavily affect the company's business: many users might shift to another LLM, and the system could prove very controversial.
We have already discussed how roughly half of GPT's coding answers were wrong.
Watermarks can help detect plagiarism even when a user applies minor circumvention techniques such as paraphrasing. The problem is that detection may fail if the user asks another LLM to rewrite the sentences, because the tool can only detect ChatGPT's content, not content produced by other companies' LLMs.
Another problem that may occur is false positives. There have been multiple news reports that some of the AI-detector tools available online right now are labelling human-written content as AI-generated, which has enraged a lot of people.
In several instances, hardworking and talented people have been scolded by their employers for supposedly using AI-generated content. Check out this LinkedIn post, where a tech content writer found her well-written content flagged as 60% plagiarized, which made her feel that these AI-detector tools stereotype humans as poor writers.
This can have a lasting impact on their careers, as employers may treat it as misconduct or form a poor view of them. No matter how high-tech a company's tool may be, some fraction of false-positive cases is inevitable.
Conclusion
Apparently, there have been recurring discussions at OpenAI's headquarters about whether to implement this system. It is up to the company to decide whether the risks outweigh the rewards. Stay tuned to our website for all the latest updates on this issue.