ChatGPT is hallucinating fake links to its news partners' biggest investigations
ChatGPT by OpenAI generates fake URLs for major news sites, failing to link to correct articles despite promises. Journalists express concerns over reliability and demand transparency due to hallucinated URLs.
ChatGPT, developed by OpenAI, has been found to generate fake URLs leading to broken links for at least 10 major news publications with licensing deals. These include The Associated Press, The Wall Street Journal, and The Atlantic. Despite promises of linking to partner websites, ChatGPT has failed to direct users to the correct articles, including Pulitzer Prize-winning stories. OpenAI acknowledged that the citation features promised in licensing contracts are still under development. While some improvements have been made to make links more prominent, ChatGPT continues to struggle with accurately citing sources. Journalists and newsrooms have expressed concerns about the reliability of ChatGPT as a search tool, with some demanding more transparency from OpenAI. The issue of hallucinated URLs has been observed across various languages and publications, raising questions about the accuracy and integrity of the information provided by ChatGPT. Despite ongoing efforts to enhance the linking experience, ChatGPT's ability to direct users to the correct articles remains a significant challenge.
Related
OpenAI and Anthropic are ignoring robots.txt
Two AI startups, OpenAI and Anthropic, are reported to be disregarding robots.txt rules, allowing them to scrape web content despite claiming to respect such regulations. TollBit analytics revealed this behavior, raising concerns about data misuse.
ChatGPT is biased against resumes with credentials that imply a disability
Researchers at the University of Washington found bias in ChatGPT, an AI tool for resume ranking, against disability-related credentials. Customizing the tool reduced bias, emphasizing the importance of addressing biases in AI systems for fair outcomes.
The Death of the Junior Developer – Steve Yegge
The blog discusses AI models like ChatGPT impacting junior developers in law, writing, editing, and programming. Senior professionals benefit from AI assistants like GPT-4o, Gemini, and Claude 3 Opus, enhancing efficiency and productivity in Chat Oriented Programming (CHOP).
Show HN: Chrome extension that brings Claude Artifacts for ChatGPT
The GitHub URL provides details on "Artifacts for ChatGPT," covering functionality, inspiration, and future plans. Installation guidance is available, with additional support offered upon request.
AI can beat real university students in exams, study suggests
A study from the University of Reading reveals AI outperforms real students in exams. AI-generated answers scored higher, raising concerns about cheating. Researchers urge educators to address AI's impact on assessments.
The 'of partner websites' bit seems a bit shoehorned in to try to prove some point. It hallucinates URLs for any site, and I don't think the partnerships are relevant here.
/s in case it’s not obvious
There's no "hallucinating" and no "faking". A computer program is generating faulty links, period.
I touch on this from my own experience: https://youtu.be/cs5cbxDClbM?si=IQIFAD38cVzLCs55&t=486
Basically, if you have the actual "factual" information, use it directly instead of hoping the LLM will accurately extract it and use it as part of a function call. In this case they already know what the accurate URLs are; they should just use them directly.
ChatGPT is not simply the gpt-4o model; it's the model plus a system prompt, tools, and a virtual environment running Python.
I built my own app, and when using it I don't get links to partners, because I don't mention any such thing in the system prompt.
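The commenter's approach can be sketched as follows: keep the canonical URL inside the structured tool result and attach it to the answer deterministically, rather than asking the model to reproduce it in generated text. This is a minimal illustration, not OpenAI's actual pipeline; the function and field names are assumptions.

```python
# Sketch: the model writes only the prose; citations are rendered
# directly from the retrieval tool's structured results, so the URL
# can never be hallucinated by the language model.

def render_answer(model_text: str, sources: list[dict]) -> str:
    """Append citations taken verbatim from tool results."""
    citations = "\n".join(
        f"[{i + 1}] {s['title']}: {s['url']}" for i, s in enumerate(sources)
    )
    return f"{model_text}\n\nSources:\n{citations}"

sources = [
    {"title": "Example investigation", "url": "https://example.com/story"},
]
print(render_answer("Here is a summary of the reporting.", sources))
```

The key design choice is that the URL string never passes through the model's token sampler, so it comes out byte-for-byte identical to what the retrieval step returned.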
1. Except for all the "tech"-columnists pumping out marketing gibberish
Assuming the lawyers for the newspapers were smart enough to retain the right to serve the news without "AI" (i.e., they did not agree to funnel users to ChatGPT via their own websites), the newspapers become the only authoritative sources, now that web search indexes are no longer publicly searchable and queries for specific resources have been replaced with "prompts" to word-soup generators.
With the absurd limits on the number of search results, and now this "AI" nonsense, I've been preparing to transition away from the popular web search engines toward searching only the websites of selected publishers, i.e., authoritative sources. Cut out the middleman. I do all searching from the command line and build mixed SERPs over time, similar to the "metasearch" concept, but using site-specific search for non-commercial content instead of web search engines. It has been working well for me.
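The workflow described above can be sketched as fanning a query out to per-publisher site-search URLs and merging the results yourself. The URL templates below are illustrative placeholders; real publishers' search endpoints vary and would need to be checked individually.

```python
from urllib.parse import quote_plus

# Hypothetical site-search URL templates, one per trusted publisher.
# Actual endpoints differ per site and must be verified before use.
SITE_SEARCH = {
    "apnews.com": "https://apnews.com/search?q={q}",
    "theatlantic.com": "https://www.theatlantic.com/search/?q={q}",
}

def build_queries(query: str) -> list[str]:
    """Return one site-specific search URL per configured publisher."""
    q = quote_plus(query)  # URL-encode spaces and special characters
    return [template.format(q=q) for template in SITE_SEARCH.values()]

for url in build_queries("pulitzer investigation"):
    print(url)
```

From here, a command-line "mixed SERP" is just fetching each URL and concatenating the result lists, with no general-purpose search engine in the middle.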