June 28th, 2024

ChatGPT is hallucinating fake links to its news partners' biggest investigations

ChatGPT, developed by OpenAI, generates fake URLs for major news sites, failing to link to the correct articles despite OpenAI's promises to its licensing partners. Journalists are questioning its reliability and demanding transparency over the hallucinated URLs.


ChatGPT, developed by OpenAI, has been found to generate fake URLs leading to broken links for at least 10 major news publications with licensing deals, including The Associated Press, The Wall Street Journal, and The Atlantic. Despite promises to link to partner websites, ChatGPT has failed to direct users to the correct articles, including Pulitzer Prize-winning stories. OpenAI acknowledged that the citation features promised in licensing contracts are still under development. While some improvements have made links more prominent, ChatGPT continues to struggle to cite sources accurately. Journalists and newsrooms have expressed concerns about ChatGPT's reliability as a search tool, with some demanding more transparency from OpenAI. Hallucinated URLs have been observed across multiple languages and publications, raising questions about the accuracy and integrity of the information ChatGPT provides.

14 comments
By @sixhobbits - 4 months
I like a bit of ChatGPT bashing, same as the next guy, but this seems a bit unfair. The partnerships have been signed, but as far as I know OpenAI has made no indication that the implementation is done or even started yet, so this story is basically "ChatGPT hallucinates URLs", which is a pretty well-known issue.

The 'of partner websites' bit seems a bit shoehorned in to try to prove some point. It hallucinates URLs for any site, and I don't think the partnerships are relevant here.

By @barbariangrunge - 4 months
I think the bigger issue is trust. The chatbot is no longer trying to return objective information. "OpenAI" is letting companies pay to have it promote their content—is that accurate?
By @netruk44 - 4 months
I wonder how difficult it would be to make a website that, instead of serving 404s, uses a model to do a semantic search on the nonexistent URL and redirect to an existing URL that most closely matches the "intent" of the invalid one.
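As a toy sketch of that idea: the snippet below matches a nonexistent path against a site's real paths, using plain string similarity from the standard library's difflib as a stand-in for a real semantic/embedding search. The paths are invented for illustration.

```python
from difflib import SequenceMatcher

# Hypothetical list of real article paths on the site (made up for this sketch).
KNOWN_PATHS = [
    "/2023/04/fentanyl-smuggling-investigation",
    "/2022/11/water-crisis-pulitzer-series",
    "/2024/01/ai-licensing-deals-explained",
]

def closest_existing_path(bad_path: str) -> str:
    """Return the known path most similar to the requested (404) path.

    String similarity stands in for an embedding-based semantic search;
    the matching idea — recover the "intent" of the broken URL — is the same.
    """
    return max(KNOWN_PATHS,
               key=lambda p: SequenceMatcher(None, bad_path, p).ratio())
```

A request for a hallucinated path like `/2023/fentanyl-investigation` would then redirect to the real investigation page instead of a 404.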
By @Kiro - 4 months
I very seldom experience hallucinations and I'm a really heavy user of ChatGPT. Can someone give me a prompt that is likely to generate hallucinations?
By @redeux - 4 months
Clearly those companies should use ChatGPT to generate those URLs. That way when ChatGPT guesses what the URL should be, it’ll be correct!

/s in case it’s not obvious

By @chrisjj - 4 months
> ChatGPT is hallucinating fake links

There's no "hallucinating" and no "faking". A computer program is generating faulty links, period.

By @Edmond - 4 months
This seems to be a result of relying on the LLM to accurately extract information that needs to be exact.

I touch on this from my own experience: https://youtu.be/cs5cbxDClbM?si=IQIFAD38cVzLCs55&t=486

Basically, if you have the actual "factual" information, use it directly instead of hoping the LLM will accurately extract it and use it as part of a function call. In this case they already know what the accurate URLs are: just use them.
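A minimal sketch of that approach, with a made-up article index: the model supplies only the summary text, while the link is appended verbatim from stored data, so it cannot be hallucinated.

```python
# Hypothetical index of partner articles keyed by an internal ID;
# the ID, title, and URL here are invented for illustration.
ARTICLE_INDEX = {
    "ap-001": {
        "title": "Example AP investigation",
        "url": "https://apnews.com/article/example",
    },
}

def cite(article_id: str, model_summary: str) -> str:
    """Append the stored canonical URL rather than letting the model generate one."""
    article = ARTICLE_INDEX[article_id]
    return f"{model_summary}\n\nSource: {article['title']} ({article['url']})"
```

The design choice is simply that URLs flow from the database to the output, never through the model.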

By @maxbaines - 4 months
It's easy to think of the models in this context, rather than the application built on top of the model, in this case ChatGPT.

ChatGPT is not simply the gpt-4o model; it's the model, a system prompt, tools, and a virtual environment running Python.

I built my own app; when using it I don't get links to partners, because I don't mention such a thing in the system prompt.

By @klyrs - 4 months
It's a special kind of hell, taking a hallucinogen and finding yourself in a white buttondown and slacks, writing ad copy.
By @1vuio0pswjnm7 - 4 months
Perhaps ChatGPT's inherent flaws will drive www users to use newspaper websites instead. This could be a win for journalism, which seems to be the adversary (not the partner^1) of Silicon Valley.

1. Except for all the "tech"-columnists pumping out marketing gibberish

Assuming the lawyers for the newspapers were smart enough to retain the right to serve the news without "AI", i.e., they did not agree to funnel users to ChatGPT via their own websites, they become the only authoritative sources, what with www search engine indexes no longer publicly searchable, queries for specific resources having been replaced with "prompts" to word soup generators.

With the absurd limits on the number of search results and now this "AI" nonsense, I've been preparing to transition away from the popular www search engines toward searching only websites of selected publishers, i.e., authoritative sources. Cut out the middleman. I do all searching from the command line and create mixed SERPs over time, similar to the "metasearch" concept but using site-specific search for non-commercial content instead of www search engines. It has been working well for me.

By @LetsGetTechnicl - 4 months
LLMs?? Hallucinating? Color me shocked!!
By @localfirst - 4 months
We seem to be exiting the hype phase
By @curtis3389 - 4 months
It drives me nuts how OpenAI has misleadingly marketed their products, and the tech community at large has failed to clarify what their products really are. This article is so unsurprising to the point of being boring. Of course an LLM generates broken links. Why wouldn't it?