The world is relying more on Artificial Intelligence every day. With so many advancements, it is becoming increasingly difficult to differentiate between AI-generated and human-created content. And now a new study confirms this.
Highlights:
- In a large transnational study by CISPA, humans failed to recognize AI-generated content and labelled it human-created.
- Participants from Germany, the US, and China took part in the online survey.
- The researchers warn that AI-generated texts and audio files will be used for social-engineering attacks like phishing.
Humans Can’t Distinguish AI Content from Real Content
A study conducted by researchers from the CISPA Helmholtz Center for Information Security has found that AI-generated images, texts and audio files are so convincing that people are no longer able to distinguish them from human-generated content.
The study was conducted via an online survey with around 3,000 participants from Germany, China, and the USA. The researchers analyzed 2,609 complete data sets, distributed evenly across the three countries. Let us take a deeper look at the study.
People can achieve a lot using fake images, text or audio. Professor Dr. Thorsten Holz explains the risks that are associated with this in his view:
“Artificially generated content can be misused in many ways. We have important elections coming up this year, such as the elections to the EU Parliament or the presidential election in the USA. Machine-generated media can be used very easily to influence political opinion. I see this as a major threat to our democracy.”
Dr. Thorsten Holz
Researchers have previously proposed various automatic strategies for detecting artificial media, but this is the first study to examine how well humans perform the task. That is what gives this paper its significance.
The most striking finding of this research is that AI-generated content is no longer distinguishable from human work. Whether it is news articles, blog posts, artwork, or cloned voices, new systems have reached a level of sophistication where their output is nearly indistinguishable from human-created content.
“We are already at the point where it is difficult, although not yet impossible, for people to tell whether something is real or AI-generated. And this applies to all types of media: text, audio, and images. We were surprised that there are very few factors that can be used to explain whether humans are better at recognizing AI-generated media or not. Even across different age groups and factors such as educational background, political attitudes or media literacy, the differences are not very significant.”
Dr. Thorsten Holz
The results indicate that media made with AI is already so convincing that most participants were effectively guessing when asked to classify it as human- or machine-generated.
A Warning for The Future
As artificially generated content becomes more common, it becomes harder to trust what we read, watch, or listen to. This year alone, several deepfakes of celebrities have trended on social media, tarnishing their public image. Instances like this may become more frequent in the future.
Recently, Grok 2 on X added the ability to generate AI images, making it possible to create fake photos of celebrities and politicians.
Like any tool, this technology can be extremely harmful if it lands in the hands of people with malicious intentions.
An additional concern is that AI can replace humans in activities such as painting or writing, threatening the livelihoods of many artists. In some cases, such tools have blatantly copied others' work without giving the necessary credit.
Conclusion
Since we can no longer reliably tell which content is AI-generated and which is human-created, the study concludes that we should focus less on avoiding such content and more on finding ways to live with it.