AI search engine study finds incorrect citations in over 60% of queries; Grok 3 was wrong 94% of the time
Nearly 25% of Americans use AI search engines, yet a study found these tools answered over 60% of queries incorrectly, often misattributing sources and ignoring publishers' content-access preferences, raising concerns about misinformation and publisher credibility.
AI search engines are increasingly popular, with nearly 25% of Americans using them instead of traditional search engines. However, a study by the Tow Center for Digital Journalism reveals significant problems with how these generative search tools cite news content. The researchers assessed eight AI search engines and found that they often fail to accurately retrieve and cite original articles, answering over 60% of queries incorrectly. Premium models, contrary to expectations, gave confidently incorrect answers more often than free versions. Many chatbots ignored publishers' Robots Exclusion Protocol preferences, crawling content they had been asked not to access. The tools also frequently misattributed sources, citing syndicated versions instead of original articles, which undermines the credibility of both the AI systems and the news publishers. The study further found that chatbots often fabricated URLs, making it harder for users to verify information. These findings raise concerns about the impact on news publishers' traffic and revenue, as well as the potential for misinformation to spread given the authoritative tone of AI responses, and point to a pressing need for better citation practices and respect for content ownership in the development of AI search technologies.
- Nearly 25% of Americans use AI search engines instead of traditional ones.
- Over 60% of responses from AI search tools were found to be incorrect.
- Premium AI models provided more confidently incorrect answers than free versions.
- Many chatbots ignored publishers' preferences for content access, leading to unauthorized citations (a minimal robots.txt check is sketched after this list).
- AI tools often misattributed sources and fabricated URLs, complicating information verification.
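To make "content access preferences" concrete: under the Robots Exclusion Protocol, a publisher lists allow/disallow rules in its robots.txt file, and a well-behaved crawler checks those rules against its own user-agent token before fetching a page. The sketch below uses Python's standard-library urllib.robotparser; the site URL and the "ExampleNewsBot" token are placeholders for illustration, not crawlers or publishers named in the study.

```python
from urllib.robotparser import RobotFileParser

# Placeholder crawler token and site; neither comes from the Tow Center study.
USER_AGENT = "ExampleNewsBot"
SITE = "https://example.com"

def may_fetch(url: str) -> bool:
    """Check the site's robots.txt before fetching, per the Robots Exclusion Protocol."""
    parser = RobotFileParser()
    parser.set_url(f"{SITE}/robots.txt")
    parser.read()  # download and parse the site's robots.txt
    return parser.can_fetch(USER_AGENT, url)

if __name__ == "__main__":
    article = f"{SITE}/2025/03/some-article"
    if may_fetch(article):
        print("robots.txt permits fetching", article)
    else:
        print("robots.txt disallows fetching", article, "- a compliant crawler skips it")
```

The study's complaint is that several chatbots skipped this check (or ignored its result) and retrieved publisher content anyway.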
Related
Chatbots Are Primed to Warp Reality
The integration of AI chatbots raises concerns about misinformation and manipulation, particularly in political contexts, as they can mislead users and implant false memories despite efforts to improve accuracy.
Washington Post Leverages 'AI' to Undermine History and Make Search Less Useful
The Washington Post's new AI assistant has raised concerns over search result quality, basic functionality, and the potential compromise of journalistic integrity, reflecting broader trends in media automation and cost-cutting.
AI means the end of internet search as we've known it
AI is transforming internet search from keyword-based queries to conversational interactions, with Google’s AI Overviews providing detailed answers, raising concerns for publishers about traffic loss and information accuracy.
AI Distortion is new threat to trusted information
Deborah Turness, CEO of BBC News, warns that AI distortion threatens trusted information, with research showing significant inaccuracies in AI-generated content. Collaboration is needed to address these challenges and maintain public trust.
AI summaries turn real news into nonsense, BBC finds
The BBC's research revealed that over 51% of AI-generated news summaries contained significant inaccuracies, with Gemini performing the worst. The study emphasizes the need for responsible AI use to maintain public trust.