The Clever Hans Effect, Iterative LLM Prompting, and Socrates' Meno
The relationship between AI and human intelligence remains debated: large language models can create an illusion of intelligence through interaction, which points to AI's role in enhancing human inquiry rather than replicating human cognition.
Artificial intelligence (AI) and natural intelligence have long been subjects of philosophical and scientific inquiry, particularly regarding how the two relate. John McCarthy, who coined the term "artificial intelligence," aimed to determine whether every aspect of human intelligence could be replicated by machines. The pursuit of Artificial General Intelligence (AGI) reflects this ambition, yet philosophers such as Hubert Dreyfus have argued that human intelligence is too complex to be fully captured by algorithms.

Recent advances in large language models (LLMs) have led to misconceptions about their capabilities: they often appear to exhibit intelligence under iterative prompting, much as in the Clever Hans effect, where apparent intelligence arises from responsive guidance rather than intrinsic reasoning. The paper argues that both LLMs and the boy in Plato's Meno demonstrate that intelligence is an emergent phenomenon shaped by interaction rather than isolated cognition. The iterative prompting process, akin to Socratic questioning, reveals that LLMs generate responses from statistical patterns rather than genuine understanding. This challenges the traditional view of intelligence as an inherent property and suggests that the value of AI lies in fostering collaborative exploration and prompting insightful questions, not in its capacity for independent reasoning.
- The relationship between AI and human intelligence has been debated for decades.
- Large language models (LLMs) may create the illusion of intelligence through user guidance.
- The Clever Hans effect illustrates how perceived intelligence can emerge from interaction rather than intrinsic capability.
- Intelligence is increasingly viewed as a relational process shaped by context and collaboration.
- The potential of AI lies in enhancing human inquiry rather than replicating human cognition.
Related
Francois Chollet – LLMs won't lead to AGI – $1M Prize to find solution [video]
The video discusses limitations of large language models in AI, emphasizing genuine understanding and problem-solving skills. A prize incentivizes AI systems showcasing these abilities. Adaptability and knowledge acquisition are highlighted as crucial for true intelligence.
How to Raise Your Artificial Intelligence: A Conversation
Alison Gopnik and Melanie Mitchell discuss AI complexities, emphasizing limitations of large language models (LLMs). They stress the importance of active engagement with the world for AI to develop conceptual understanding and reasoning abilities.
Rodney Brooks' Three Laws of Artificial Intelligence
Rodney Brooks discusses misconceptions about AI, emphasizing overestimation of its capabilities, the need for human involvement, challenges from unpredictable scenarios, and the importance of constraints to ensure safe deployment.
Transcript for Yann LeCun: AGI and the Future of AI – Lex Fridman Podcast
Yann LeCun discusses the limitations of large language models, emphasizing their lack of real-world understanding and sensory data processing, while advocating for open-source AI development and expressing optimism about beneficial AGI.
How close is AI to human-level intelligence?
Recent advancements in AI, particularly with OpenAI's o1, have sparked serious discussions about artificial general intelligence (AGI). Experts caution that current large language models lack the necessary components for true AGI.
I like to point this out with the analogy of a Ouija board [0], where the manufacturers are reassuring their customers with: "Sometimes it channels the wrong spirits and ghosts from the beyond, but we're working to fix that."
Much like LLM "hallucinations", the wording implies something about "normal" operation which isn't quite true.
> Iterative prompting, like Socratic questioning, may never produce true intelligence in the systems it engages with, but it reveals the collaborative nature of intelligence itself—an emergent property shaped by context, guidance, and the interplay of minds, human or otherwise. Through this lens, the promise of AI shifts from creating autonomous intelligences to augmenting human inquiry.
This reads (like the rest of the post) as if the author is curiously unaware that a great deal of research has in fact been published on what LLMs can do "zero-shot", without iterative prompting, versus what requires it, and that the number of things they can do without anything Socratic in shape is both non-zero and increasing predictably with scale.
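To make that distinction concrete, here is a minimal sketch of the two regimes, zero-shot versus iterative (Socratic) prompting. The `call_llm` helper is a hypothetical stand-in for whatever chat-completion client you use (OpenAI, Anthropic, a local model), not a real API:

```python
# Minimal sketch: zero-shot vs. iterative ("Socratic") prompting.
# `call_llm` is a hypothetical placeholder for any chat-completion client.

def call_llm(messages: list[dict]) -> str:
    """Hypothetical stub: send a chat transcript, get a completion back."""
    raise NotImplementedError("wire up a real chat-completion client here")

def zero_shot(question: str) -> str:
    # One prompt, one answer: no guidance beyond the question itself.
    return call_llm([{"role": "user", "content": question}])

def socratic(question: str, follow_ups: list[str]) -> str:
    # Iterative prompting: each follow-up steers the model, so the final
    # answer is shaped by the questioner as much as by the model.
    messages = [{"role": "user", "content": question}]
    for follow_up in follow_ups:
        messages.append({"role": "assistant", "content": call_llm(messages)})
        messages.append({"role": "user", "content": follow_up})
    return call_llm(messages)
```

The research alluded to above compares models along exactly this axis: what they get right from `zero_shot` alone versus what only emerges under `socratic`-style guidance.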
IMHO what LLMs have most obviously revealed about "true intelligence" is just how attached many people are to the idea that it's a special possession of humans and the tenuous intellectual lengths to which they will go to hold on to that idea.
We are mimicking machines too, and we hallucinate everything just the same.
Those kinds of discussions are getting a bit boring now, to be honest.
In fact, the big labs should post-train their models to extract as much information as possible from questioners about their questions before attempting to answer them.
These interaction traces would also be useful during later training runs.
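A rough sketch of what that clarify-before-answering loop might look like, reusing the same hypothetical `call_llm` stub as above (the prompt wording and the `max_rounds` cutoff are illustrative assumptions, not anyone's actual post-training recipe):

```python
# Sketch of a "clarify first, answer second" interaction loop.
# `call_llm` is again a hypothetical stand-in for a chat-completion client.

def call_llm(messages: list[dict]) -> str:
    """Hypothetical stub: wire up a real chat-completion client here."""
    raise NotImplementedError

CLARIFY_PROMPT = (
    "Before answering, ask one clarifying question whenever the request "
    "is ambiguous. Reply with just READY once you have enough information."
)

def clarify_then_answer(question: str, max_rounds: int = 3) -> tuple[str, list[dict]]:
    messages = [
        {"role": "system", "content": CLARIFY_PROMPT},
        {"role": "user", "content": question},
    ]
    for _ in range(max_rounds):
        reply = call_llm(messages)
        messages.append({"role": "assistant", "content": reply})
        if "READY" in reply:
            break
        # Relay the model's clarifying question to the human questioner.
        messages.append({"role": "user", "content": input(reply + "\n> ")})
    answer = call_llm(messages + [{"role": "user", "content": "Now answer the original question."}])
    # Return the answer along with the full interaction trace, which is
    # the artifact the comment suggests feeding into later training runs.
    return answer, messages
```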