OpenAI just announced a new search tool. Its demo already got something wrong
OpenAI's new search tool, SearchGPT, aims to improve internet navigation but showed inaccuracies during its demo, misrepresenting festival dates. It's a prototype, with plans for future enhancements and user testing.
OpenAI has introduced a new search tool called SearchGPT, designed to enhance how users navigate the internet. During its demonstration, however, the tool displayed inaccuracies, notably misrepresenting the dates of An Appalachian Summer Festival in Boone, North Carolina. While it correctly identified some festivals, it stated that this one would run from July 29 to August 16, when in fact it had started on June 29 and concluded on July 27. OpenAI acknowledged that this is an initial prototype and plans to improve it. SearchGPT is not yet publicly available, but users can join a waitlist for testing. The tool aims to provide in-line citations and links to external sources, and OpenAI plans to eventually integrate search features into ChatGPT.

Despite the potential of AI to transform search, the demonstration highlights a persistent problem with generative AI models: they often produce incorrect or fabricated information, a phenomenon known as "hallucination." This raises concerns about the reliability of AI-generated content and its impact on web traffic to original sources. These challenges are not new; previous attempts by companies like Google have also resulted in significant errors. The cycle of tech companies releasing innovative products and then facing public criticism for inaccuracies continues, underscoring the need for evidence-based improvements in AI technology.
Related
ChatGPT is hallucinating fake links to its news partners' biggest investigations
ChatGPT by OpenAI generates fake URLs for major news sites, failing to link to correct articles despite promises. Journalists express concerns over reliability and demand transparency due to hallucinated URLs.
Google Researchers Publish Paper About How AI Is Ruining the Internet
Google researchers warn about generative AI's negative impact on the internet, creating fake content blurring authenticity. Misuse includes manipulating human likeness, falsifying evidence, and influencing public opinion for profit. AI integration raises concerns.
GenAI does not Think nor Understand
GenAI excels in language processing but struggles with logic-based tasks. An example reveals inconsistencies, prompting caution in relying on it. PartyRock is recommended for testing language models effectively.
ChatGPT Isn't 'Hallucinating'–It's Bullshitting – Scientific American
AI chatbots like ChatGPT can generate false information, termed as "bullshitting" by authors to clarify responsibility and prevent misconceptions. Accurate terminology is crucial for understanding AI technology's impact.
Google Researchers Publish Paper About How AI Is Ruining the Internet
Google researchers warn that generative AI contributes to the spread of fake content, complicating the distinction between truth and deception, and potentially undermining public understanding and accountability in digital information.
I don't get it. I mean, how hard can it be? These are billion dollar ventures. For god's sake, at least have an intern fact check your press releases for 15 minutes before publishing them.
On the other hand, these repeated basic mistakes certainly help in keeping expectations in check. But I doubt that's the goal of the demo...
Yesterday my coworker was talking about using Gemini and wanted to show me how neat it was. So he typed a question into Google. The search results came back instantly with the correct information as the #1 link. 5 seconds later, the Gemini box displayed the correct information. What the hell is so impressive about that?
People need to learn to use LLMs for what they are - useful but fallible and prone to hallucinations.
Until we have a technical solution for that, it's people who will need to adapt.