July 26th, 2024

OpenAI just announced a new search tool. Its demo already got something wrong

OpenAI's new search tool, SearchGPT, aims to improve internet navigation but showed inaccuracies during its demo, misrepresenting festival dates. It's a prototype, with plans for future enhancements and user testing.


OpenAI has introduced a new search tool called SearchGPT, designed to enhance how users navigate the internet. However, during its demonstration, the tool displayed inaccuracies, notably misrepresenting the dates of An Appalachian Summer Festival in Boone, North Carolina. While the tool correctly identified some festivals, it incorrectly stated that the festival would occur from July 29 to August 16, when, in fact, it had already started on June 29 and was scheduled to end on July 27. OpenAI acknowledged that this is an initial prototype and plans to improve it.

Although SearchGPT is not yet publicly available, users can join a waitlist for testing. The tool aims to provide in-line citations and links to external sources, with future plans to integrate search features into ChatGPT.

Despite the potential of AI to transform search capabilities, the demonstration highlights ongoing issues with generative AI models, which often produce incorrect or fabricated information, a phenomenon known as "hallucination." This raises concerns about the reliability of AI-generated content and its impact on web traffic for original sources. The challenges faced by AI search engines are not new, as previous attempts by companies like Google have also resulted in significant errors. The cycle of tech companies releasing innovative products followed by public criticism for inaccuracies continues, emphasizing the need for evidence-based improvements in AI technology.

11 comments
By @shmeeed - 4 months
It seems like just about any time an AI company releases such a video, they're bound to make the same mistake of not cross-checking their own demo.

I don't get it. I mean, how hard can it be? These are billion-dollar ventures. For god's sake, at least have an intern fact-check your press releases for 15 minutes before publishing them.

On the other hand, these repeated basic mistakes certainly help in keeping expectations in check. But I doubt that's the goal of the demo...

By @Kon-Peki - 4 months
Even if the demo was 100% correct, I don't even understand why it would have been impressive. "Traditional" search engines will give you this answer instantly. Isn't the point to do this stuff better? To do things a normal search engine can't do?

Yesterday my coworker was talking about using Gemini and wanted to show me how neat it was. So he typed a question into Google. The search results came back instantly with the correct information as the #1 link. Five seconds later, the Gemini box displayed the correct information. What the hell is so impressive about that?

By @drzzhan - 4 months
Of course it would get something wrong. Everything is put into a probabilistic space and then gets pulled out. Basically, you're asking to draw a deterministic result from a non-delta distribution. I wonder why they manage to make OCR work well in their model but then suck at pulling links and quotes.
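
To illustrate that point, here is a minimal sketch, assuming a toy vocabulary and standard softmax sampling with temperature (nothing here is OpenAI's actual decoding code): even when one answer scores highest, sampling from the distribution can surface a different token on different runs.

```python
import numpy as np

# Toy illustration: decoding is sampling from a probability distribution,
# not reading off a single stored fact. Vocabulary and logits are made up.
vocab = ["June 29", "July 27", "July 29", "August 16"]
logits = np.array([2.0, 1.5, 1.8, 0.9])  # hypothetical model scores

def sample(logits, temperature=1.0, rng=None):
    """Draw one token index from softmax(logits / temperature)."""
    if rng is None:
        rng = np.random.default_rng()
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
for _ in range(5):
    print(vocab[sample(logits, temperature=0.8, rng=rng)])
# Different runs (or seeds) can print different dates, even though the
# highest-scoring token is most likely -- the distribution is not a delta.
```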
By @SpicyLemonZest - 4 months
I don't agree that the demo got this wrong. It's not a hallucination: it's pulling the date range from the festival's ticketing page (https://appsummer.org/tickets/), which mentions a July 29 – August 16 date range. Perhaps a superintelligence should be smart enough to mention the other dates, and to understand that the "Closed" dates are fake and the festival's not really happening in that interval, but this seems like perfectly reasonable output for the moderately intelligent search tool OpenAI is advertising.
By @TowerTall - 4 months
By @chucke1992 - 4 months
I still believe that the best approach is contextual search embedded everywhere - I think Microsoft's approach (and now Apple's) is a way forward.
By @nyxtom - 4 months
Imagine actually thinking that you can search for something and you are always going to get a correct answer on the internet. At least with LLMs you can fine-tune, or at least pick different models you want to use and communicate with them at will. It's not 100%, but the alternative is a crap ton of research and verification into topics I don't really have time for. Can't tell you how many times now it's been useful to use AI as a research aid in prototyping. It has vastly improved my iteration times, especially on things I would normally spend weeks working through tutorials for.
By @Slyfox33 - 4 months
Shocker.
By @Havoc - 4 months
tbf this feels more like an expectations issue.

People need to learn to use LLMs for what they are - useful but fallible and prone to hallucinations.

Until we have a technical solution for that, it's the people that will need to adapt.