September 1st, 2024

Have we stopped to think about what LLMs model?

Recent discussions critique claims that large language models understand language, emphasizing their limitations in capturing human linguistic complexities. The authors warn against deploying LLMs in critical sectors without proper regulation.

Recent discussions surrounding large language models (LLMs) highlight a critical examination of what these technologies actually represent in terms of language and cognition. A peer-reviewed paper by Abeba Birhane and Marek McGann critiques the prevalent claims that LLMs can "understand" language, arguing that such assertions misinterpret the nature of human linguistic capabilities. The authors emphasize that LLMs, built on vast datasets and statistical techniques, do not replicate the complexities of human language, which is inherently social, contextual, and embodied.

They argue that LLMs rest on flawed assumptions about language completeness and data representation, and fail to capture the nuances of human interaction, such as emotional context and social participation. The paper warns that treating LLMs as language-understanding machines can lead to misguided policies and harmful social consequences. The authors also express concern over the deployment of LLMs in critical sectors like education and healthcare without adequate testing and regulation, highlighting the risks of misinformation and unreliability.

While the AI industry continues to promote the economic benefits of LLMs, the authors call for a more cautious and skeptical approach to their development and application.

- The paper critiques the exaggerated claims about LLMs' understanding of language.

- LLMs are based on flawed assumptions about language and data representation.

- Human language is complex and cannot be fully captured by LLMs.

- There are significant risks in deploying LLMs in critical sectors without proper regulation.

- A more cautious approach to LLM development is advocated by researchers.

8 comments
By @jusssi - 6 months
"Nothing is risked by ChatGPT when it is prompted and generates text."

This is just not true. If it generates too much BS, it risks getting shut down.

By @FrustratedMonky - 6 months
Is prompt engineering really "psychology"? Convincing the AI to do what you want, just like you might "prompt" a human to do something.

Like in the short story "Lena" (2021-01-04) by qntm:

https://qntm.org/mmacevedo

In the story, the role of the LLM's weights is played by a brain scan.

But it's the same situation: people could use multiple copies of the AI, but each time they would have to "talk it into" doing what they wanted.

By @throw310822 - 6 months
"Language models" certainly feels a misnomer. The fact that their inputs and outputs are (actually were, originally) language, doesn't mean that what they model is language. The proof is that there is an infinite number of perfectly correct linguistic productions (think "colorless green ideas sleep furiously") that are not generated by language models exactly because their point is not modelling language but some approximation of the human mind. The fact that the models are trained through language and use language to communicate is purely incidental.
By @royal__ - 6 months
Sure, language is more than text. It's complex, messy, and ever-changing. But that's exactly why language models are so phenomenal; they can extract patterns from these complex systems.
By @edoardo-schnell - 6 months
As much as I would like to agree with the "AI models do not understand, they just predict the next token" line, I feel the author of the research does not use valid arguments. Language is more than text? Fine, I could turn on the webcam and integrate the video stream into the calculations. Stomping your feet and crying about slurs in the models won't make your argument valid.
By @mewpmewp2 - 6 months
LLMs could technically consume any type of sensory data, although maybe it is not right to call them specifically language models then. But they can be multimodal and could be fed data similar to what people consume.
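
(A minimal sketch of how that could work, with hypothetical shapes and module names rather than any particular model's architecture: non-text input is projected into the same embedding space as text tokens and concatenated into one sequence for the transformer to attend over.)

```python
# Sketch: turn an image into "visual tokens" and splice them into a text-token
# sequence. Shapes and vocabulary size are illustrative, not from any real model.
import torch
import torch.nn as nn

d_model = 768                      # embedding width shared by text and vision
patch = 16                         # 16x16 pixel patches

patch_embed = nn.Conv2d(3, d_model, kernel_size=patch, stride=patch)
text_embed = nn.Embedding(50257, d_model)

image = torch.randn(1, 3, 224, 224)          # dummy RGB image
text_ids = torch.randint(0, 50257, (1, 12))  # dummy text token ids

visual_tokens = patch_embed(image).flatten(2).transpose(1, 2)  # (1, 196, 768)
text_tokens = text_embed(text_ids)                             # (1, 12, 768)

# One interleaved sequence a transformer can attend over end to end.
sequence = torch.cat([visual_tokens, text_tokens], dim=1)      # (1, 208, 768)
```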

In addition, I don't think it makes sense to compare this to building bridges or pharma.

I don't think ChatGPT is more likely to harm a person with misinformation than just plain Google or YouTube would.

In fact, I believe existing search and recommendation algorithms are more likely to lead you down the misinfo rabbit hole.

At least ChatGPT is, to an extent, biased toward trying to stay objective, rather than leading people down rabbit holes toward fringe content.

By @bundtlake - 6 months
They model the rhetoric, semantics, and ideas of some of the most unhinged and immature denizens of the internet.

The stolen data in the training sets is full of content from online communities you would shudder to be forced to experience, and books you'd refuse to read.

It's why they'll suddenly suggest you put glue in pizza sauce, or why they read in a soulless, overly verbose "m'lady" tone.

More data made them less useful but better at fooling people with a superficial interest in them, and that demographic is so large it affords these companies leverage in funding rounds.

Markets truly are irrational.