OpenAI recently unveiled its latest innovation, an AI search tool called SearchGPT. While showcasing the prototype, the company's own demo video contained a glaring goof-up that instantly became the talk of the town.
Highlights:
- SearchGPT provided incorrect and misleading information in the demo video; a case of hallucination.
- Industry experts are pointing out the challenges the company will face in the future.
- People are debating the reliability of the model for important tasks.
How Did SearchGPT Give Wrong Answers?
SearchGPT is designed to answer users' queries directly with up-to-date information gathered from the internet, along with links to authentic sources. The product is currently in the prototype stage and has been rolled out to a small group of users; if it works well, it will then be integrated into ChatGPT.
As OpenAI put it in its announcement: “Getting answers on the web can take a lot of effort, often requiring multiple attempts to get relevant results. We believe that by enhancing the conversational capabilities of our models with real-time information from the web, finding what you’re looking for can be faster and easier.”
However, something unusual happened in the demo video included in that article. In the video, a user types “music festivals in Boone North Carolina in August” into the query box, and the model returns a list of festivals happening in North Carolina.
The first result is An Appalachian Summer Festival, listed as running from July 29th to August 16th of this year. It even linked to the festival's official website as a source for people to refer to.
However, when a journalist from The Atlantic called the festival's organizers, they said the festival actually started on June 29th and that the final concert would be on July 27th. The festival's own website showed the same dates.
The dates SearchGPT provided, July 29th to August 16th, 2024, are actually the days the box office would remain closed. It may seem like a small error, but it is exactly the kind of mistake regular users will not appreciate.
This appears to be a case of hallucination, which occurs when ML models generate outputs that are grammatically correct and coherent but factually wrong or nonsensical. It typically happens when the training data is flawed or the model has not been trained properly.
After this mistake was uncovered, people did more fact-checking and quickly found that roughly half of the information provided in the demo was not true. The errors include incorrect links to a local website and the mention of a festival in Swannanoa, a town two hours away from Boone and closer to Asheville, which is not a Boone event at all.
This sparked a debate on social media. People were disappointed with the goof-up; in cases like this, the model is expected to answer “None that I could find, but here are the closest matches” instead of hallucinating.
As AI becomes a bigger part of our daily lives, the information it gives us must be accurate. Misleading information can have far-reaching consequences, especially when users rely on these systems for educational or professional purposes.
This is not the first time something like this has happened with OpenAI. When ChatGPT launched back in November 2022, the model quickly began responding to some queries with sexist and racist remarks, which enraged a lot of people.
Conclusion
Incidents like these remind us that there is still a long way to go before AI systems can be fully trusted to provide accurate and reliable information consistently. Sometimes they may cite the wrong sources or hallucinate information outright. We hope OpenAI addresses these errors before the official launch!