December 21st, 2024

AI Is the Black Mirror

Philosopher Shannon Vallor argues that AI reflects human intelligence rather than mimicking it. She warns against equating AI with human reasoning, emphasizing the need to preserve confidence in human cognition when facing global challenges.


The article discusses the complex relationship between artificial intelligence (AI) and human cognition, emphasizing the misconceptions surrounding AI's capabilities. Philosopher Shannon Vallor argues that AI should not be viewed as a mind but rather as a mirror reflecting human intelligence. This perspective challenges the notion that AI can think or possess emotions like humans. Vallor warns against the dangers of equating AI with human reasoning, as it diminishes our understanding of human thought processes and could lead to a loss of agency. She critiques the tech industry's portrayal of humans as mere machines, which undermines our unique cognitive abilities. Vallor's insights highlight the need to rebuild confidence in human reasoning, especially in addressing global challenges like climate change and democracy. The article also touches on the debate surrounding artificial general intelligence (AGI) and the implications of redefining human intelligence in relation to AI. Vallor remains skeptical about claims that AI systems are developing cognitive abilities akin to human minds, asserting that current AI lacks the experiential foundation necessary for true thinking.

- AI should be viewed as a reflection of human intelligence, not as a mind.

- Equating AI with human reasoning can undermine our understanding of human cognition.

- Vallor emphasizes the importance of maintaining confidence in human reasoning to tackle global issues.

- The portrayal of humans as machines by the tech industry is problematic and reductive.

- Current AI lacks the experiential foundation necessary for true cognitive abilities.

AI: What people are saying
The comments reflect a range of perspectives on the nature of AI and its relationship to human intelligence.
  • Many commenters agree that AI, particularly LLMs, lacks an "inner life" and true understanding, functioning instead as a reflection of human input.
  • Some argue that advancements in AI could lead to it developing a form of reasoning or consciousness similar to humans.
  • Critics express skepticism about the claims that AI understands language or thought in the same way humans do, pointing to noticeable differences in output quality.
  • There is a discussion about the implications of AI creating original content and how it interacts with human knowledge.
  • Several commenters call for more substantial arguments regarding the uniqueness of human cognition compared to AI capabilities.
11 comments
By @uxhacker - 4 months
By @whakim - 4 months
I understand the point being made - that LLMs lack any "inner life" and that by ignoring this aspect of what makes us human we've really moved the goalposts on what counts as AGI. However, I don't think mirrors and LLMs are all that similar except in the very abstract sense of an LLM as a mirror to humanity (what does that even mean, practically speaking?). I also don't feel that the author adequately addressed the philosophical zombie in the room - even if LLMs are just stochastic parrots, if their output were totally indistinguishable from a human's, would it matter?
By @kazinator - 4 months
> With ChatGPT, the output you see is a reflection of human intelligence, our creative preferences, our coding expertise, our voices—whatever we put in.

Sure, but it's a reflection of a large amount of human intelligence, from many individuals, almost instantly available (in a certain form) to one individual.

By @Vecr - 4 months
Where are the actual arguments? She states that AI is a mirror, and yeah, you put stuff in and you get stuff out, but who thinks otherwise?

There are interesting ways to argue for humans being special, but I read the entire article and unless I missed something important there's nothing like that there.

By @mebutnotme - 4 months
The argument here seems to be that AI can't become a mind because it does not experience. There is a counterargument, though: the way we access our past experiences is via the neural pathways we lay down during those experiences, and with the neural networks AIs now have, we have given them those same pathways, just in a different way.

At present I don't think it is yet at the same point, but when the AI can adjust those pathways, add more at compute time (infinite-memory-like tech), and is allowed to 'think' about those pathways, then I can see it reaching or surpassing our level of philosophical thought.

By @jeisc - 4 months
The first things that we humans made were weapons, and since then everything we make is considered first for its potential value as a defensive/offensive weapon. AI will never experience pleasure or pain, so it has no motivation for propagation or domination; it will always only magnify the human who pushed the Enter button on the prompt. The ultimate prompt: "Find a way to eliminate human suffering without eliminating humans."
By @scotty79 - 4 months
It's weird how I drifted away from this article after only a few paragraphs, as if it were AI slop.
By @kaielvin - 4 months
Agentic AI is starting to create original content, building on existing content (from humanity) and on its own senses (it now has hearing and sight). This content is flooding the internet, so any new knowledge being acquired now comes from humanity+AI, if not purely from AI (the likes of AlphaZero learn on their own, without human input). Maybe AI is a mirror, but it looks into it and sees itself.
By @silisili - 4 months
> we understand language in much the same way as these large language models

Yeah, gonna need proof on that one.

First, LLM slop is uncannily easy to pick out in comments vs human thought.

Second, there's no prompt you can give a human that will generate an absolutely nonsensical response or a cancellation of the request.

If anything, it feels like it doesn't actually understand language at all, and just craps out what it thinks looks like language. Which is exactly what it does, in fact, sometimes to fanfare.

By @tzury - 4 months
AI is the next phase in our evolution, a path chosen by natural selection.

This is my opinion, my view, and how I have set my life to embrace it and immerse myself in it.

I actually wrote a piece about it a day ago.

https://blog.tarab.ai/p/evolution-mi-and-the-forgotten-human

Sorry for the “self promotion”, but it’s a direct relation to the topic.