Articles by FavTutor

SearchGPT Just Made a Mistake And OpenAI Even Shared It

by Geethanjali Pedamallu
July 31, 2024
OpenAI SearchGPT Error in Prototype

We all know about OpenAI’s latest innovation, an AI tool for search called SearchGPT. While unveiling the prototype version, OpenAI’s own demo video contained a glaring error, which instantly became the talk of the town.

Highlights:

  • SearchGPT provided incorrect and misleading information in its demo video, a case of hallucination.
  • Industry experts are pointing out the challenges the company will face in the future.
  • People are debating the reliability of the model for important tasks.

How Did SearchGPT Give Wrong Answers?

SearchGPT is designed to directly answer the user’s queries with the latest information gathered from the internet, along with links to authentic sources. The product is currently in the prototype stage and has been released to a small group of users; if it works well, it will then be integrated into ChatGPT.

“Getting answers on the web can take a lot of effort, often requiring multiple attempts to get relevant results. We believe that by enhancing the conversational capabilities of our models with real-time information from the web, finding what you’re looking for can be faster and easier.”

However, in the demo video OpenAI included in its announcement, something unusual happened. A user types “music festivals in Boone North Carolina in August” into the query box, and the model returns a list of festivals happening in North Carolina.

SearchGPT Demo Error

The first result is An Appalachian Summer Festival, listed as running from July 29th to August 16th of this year. SearchGPT even linked to the festival’s official website as a source for people to refer to.

However, when a journalist from The Atlantic called the festival’s organizers, they said the festival actually started on June 29th and the final concert would be on July 27th. The festival’s own website showed the same dates.

The dates SearchGPT provided, July 29th to August 16th, 2024, are actually the days the box office would remain closed. It is a small error, but one that regular users will not appreciate.

This appears to be a case of hallucination, which happens when ML models generate outputs that are grammatically coherent but factually wrong or nonsensical. Hallucinations typically stem from flawed training data or from the model producing plausible-sounding text without properly grounding it in its sources.
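The fact-checking that caught this error was done by hand, but the underlying idea can be sketched in code. The snippet below is a hypothetical, minimal grounding check (not anything OpenAI actually uses): it extracts “Month day” dates from a model’s answer and verifies that each one appears verbatim in the cited source text.

```python
import re

# Hypothetical grounding check: does every date a model claims in its
# answer actually appear in the source text it cites? This is an
# illustrative sketch only; real fact-checking systems are far more robust.

MONTHS = ("January|February|March|April|May|June|"
          "July|August|September|October|November|December")
DATE_PATTERN = re.compile(rf"(?:{MONTHS}) \d{{1,2}}")

def dates_grounded(answer: str, source_text: str) -> bool:
    """Return True if every 'Month day' date in the answer
    also appears verbatim in the cited source text."""
    claimed = DATE_PATTERN.findall(answer)
    return all(date in source_text for date in claimed)
```

Applied to this incident, an answer claiming “July 29 to August 16” checked against a source page that only lists “June 29” and “July 27” would fail the check, flagging the response for review instead of presenting it as fact.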

After this silly mistake was uncovered, people did more fact-checking and quickly found that half of the information in the demo was inaccurate. The errors included incorrect links to a local website and the mention of a festival in Swannanoa, which is two hours away from Boone, closer to Asheville.

This sparked a debate on social media. People seemed disappointed with the goof-up; in cases like this, the model should ideally answer “None that I could find, but here are the closest matches” instead of hallucinating.

As AI becomes a bigger part of our daily lives, the information it gives us must be accurate. Misleading information can have far-reaching consequences, especially when users rely on these systems for educational or professional purposes.

This is not the first time OpenAI has faced such issues. When ChatGPT launched back in November 2022, the model was found responding to some queries with sexist and racist remarks, which enraged many people.

Conclusion

Incidents like these remind us that there is still a long way to go before AI systems can be fully trusted to provide accurate and reliable information consistently. Sometimes, they may end up citing the wrong sources or hallucinating information. We hope OpenAI takes care of this error before launching it officially!

Geethanjali Pedamallu

Hi, I am P S Geethanjali, a college student learning something new every day about what's happening in the world of Artificial Intelligence and Machine Learning. I'm passionate about exploring the latest AI technologies and how they solve real-world problems. In my free time, you will find me reading books or listening to songs for relaxation.
