The Danger of Superhuman AI Is Not What You Think
Shannon Vallor critiques the narrative equating advanced AI with superhuman capabilities, arguing it undermines human agency and essential qualities like consciousness and empathy, urging a nuanced understanding of intelligence.
The discourse surrounding "superhuman" AI risks undermining the essence of what it means to be human. Shannon Vallor, an expert in the ethics of AI, critiques the prevalent narrative that equates advanced AI systems, like ChatGPT and Gemini, with superhuman capabilities. She argues that this rhetoric diminishes human agency and conflates human consciousness with mere computational efficiency. Vallor emphasizes that current AI lacks fundamental human qualities such as consciousness, empathy, and moral intelligence, which are essential to our humanity.
During a discussion with AI researcher Yoshua Bengio, Vallor questioned the appropriateness of labeling AI as superhuman when it fundamentally lacks the emotional and cognitive depth of human beings. She highlights a shift in the AI research community's goals, moving from creating machines indistinguishable from human minds to developing systems that outperform humans in economically valuable tasks. This shift, she argues, reduces human intelligence to mere task completion, neglecting the richness of human experience and creativity.
Vallor warns that this ideology could lead to a cultural erosion of our self-understanding, where human qualities are dismissed as mere optimization processes. By redefining intelligence in terms of economic output, we risk losing sight of the intrinsic values that define humanity. Ultimately, Vallor calls for a more nuanced understanding of intelligence that recognizes the unique aspects of human experience beyond mere computational prowess.
Related
'Superintelligence,' Ten Years On
Nick Bostrom's book "Superintelligence" from 2014 shaped the AI alignment debate, highlighting risks of artificial superintelligence surpassing human intellect. Concerns include misalignment with human values and skepticism about AI achieving sentience. Discussions emphasize safety in AI advancement.
Superintelligence–10 Years Later
Reflection on the impact of Nick Bostrom's "Superintelligence" book after a decade, highlighting AI evolution, risks, safety concerns, regulatory calls, and the shift towards AI safety by influential figures and researchers.
AI's Cognitive Mirror: The Illusion of Consciousness in the Digital Age
The article explores AI's limitations in developing spiritual consciousness due to lacking sensory perception like humans. It discusses AI's strengths in abstract thought but warns of errors and biases. It touches on AI worship, language models, and the need for safeguards.
Pop Culture
Goldman Sachs report questions generative AI's productivity benefits, power demands, and industry hype. Economist Daron Acemoglu doubts AI's transformative potential, highlighting limitations in real-world applications and escalating training costs.
Someone is wrong on the internet (AGI Doom edition)
The blog post critiques the existential risk of Artificial General Intelligence (AGI), questioning fast takeoff scenarios and emphasizing practical knowledge over doomsday predictions. It challenges assumptions and advocates for nuanced understanding.
It's going to be used to pump as much money out of each individual as possible.
Every online store, especially the big corporate ones, will instantly know how much you are able and willing to pay for everything, and that's your "personalized price".
It will scan all your social media, it will look at your purchase histories, it will know your income and credit.
Like a used car dealer on steroids that never gets tired and learns more and more about you by following you around and watching what you do.
Everything from your daily food to your big-ticket purchases: maximum customized prices.
It's going to be evil as hell, and lawmakers will eventually just be paid off to let it all happen; data collection and no two people paying the same price for the same thing will become 100% legal and the norm.
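What the commenter describes is essentially first-degree price discrimination: estimate each shopper's willingness to pay from behavioral signals, then quote a price just under it. A minimal sketch of that mechanism, where every field name, weight, and factor is a hypothetical illustration rather than any real system's API:

```python
# Sketch of the personalized-pricing mechanism the comment describes
# (first-degree price discrimination). All signals and weights below
# are made up for illustration; no real retailer's model is implied.

from dataclasses import dataclass

@dataclass
class ShopperProfile:
    income: float            # estimated annual income
    past_avg_spend: float    # average historical spend in this category
    brand_affinity: float    # 0..1 signal mined from social media

def estimate_willingness_to_pay(profile: ShopperProfile, base_price: float) -> float:
    """Crude willingness-to-pay estimate from behavioral signals."""
    income_factor = min(profile.income / 50_000, 2.0)            # cap the uplift
    history_factor = min(profile.past_avg_spend / base_price, 1.5)
    return base_price * (0.8 + 0.3 * income_factor
                         + 0.2 * history_factor
                         + 0.2 * profile.brand_affinity)

def personalized_price(profile: ShopperProfile, base_price: float) -> float:
    """Quote just under the estimated willingness to pay, with a floor."""
    wtp = estimate_willingness_to_pay(profile, base_price)
    return round(max(base_price * 0.9, wtp * 0.95), 2)

# Two shoppers, same product, different quotes:
rich = ShopperProfile(income=150_000, past_avg_spend=120.0, brand_affinity=0.9)
frugal = ShopperProfile(income=30_000, past_avg_spend=20.0, brand_affinity=0.1)
print(personalized_price(rich, 100.0))
print(personalized_price(frugal, 100.0))
```

The point of the sketch is that nothing here requires "superhuman" AI: a handful of linear signals already yields "no two people paying the same for the same thing".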
> How, I asked, does an AI system without the human capacity for conscious self-reflection, empathy or moral intelligence become superhuman merely by being a faster problem-solver? Aren’t we more than that? And doesn’t granting the label “superhuman” to machines that lack the most vital dimensions of humanity end up obscuring from our view the very things about being human that we care about?
This is such an annoying framing. There are massive assumptions throughout this, from "human capacity" to "conscious self-reflection" to "moral intelligence". And then throwing those assumptive definitions into another person's face, demanding how they relate to "superhuman" intelligence.
It's almost like the author is saying "I have intuitive definitions for this grab-bag of words that don't align with my intuitive definition of this one particular word you are using".
> characterizations of human beings as acting wisely, playfully, inventively, insightfully, meditatively, courageously, compassionately or justly are no more than poetic license
Again, the author has some intuitive sense of the definition of these words. If the author wants to get spiritual/supernatural/mystical about it all then they are free to go ahead and do so. They should come out and make the bold claim instead of masquerading their opinion as reasonable/rational discourse. They are more likely to find a sympathetic audience for that.
> Once you have reduced the concept of human intelligence to what the markets will pay for…
Bengio and Hinton certainly do not make that ridiculous reduction as a condition or criterion. Even Altman does not, although he certainly feels the pressure.
Vallor taught for many years at a preeminent Jesuit institution, and when I read this polemic my first thought is: please put your "belief system" cards on the table. What is the fundamental substrate on which you are thinking and building your arguments? Would you be comfortable with Daniel Dennett's thoughts on brain function and consciousness, or do you suspect that there must be some essential and almost ineffable light that illuminates the human mind from above or below?
My cards on the table: I am an atheist and I am not looking for another false center for my universe. The hard problem of AGI is now the embodiment problem and iterative self-teaching and self-control of action and attention.
You could say that we need to get the animal into an AI now that we have language reasonably well embedded. From that point onward, and I think quite soon (10 years), it will be AGI.
Perhaps “super” is not the right adjective but it grabs attention effectively.
Ms. Vallor is too concerned over semantics, too fixated on the current crop of LLMs, too invested in "pinning down" Mr. Bengio.
Five to ten years out, though, I suspect we will no longer be having arguments over what it means to be human or intelligent.
This is the question I always ask. I'll be interested in AI when it can do my laundry and my dishes. I guess self-driving cars might fit in here, although I personally like driving, some people hate it and even I could see how an AI driver could be desirable at times (e.g. coming home from the bars).
An AI system can only be dangerous or not dangerous based on what physical events happen in the real world, as a result of that AI interacting with the world via known physical laws. In order to decide whether a system is or isn't dangerous you need to have a predictive model of what that system will actually do in various hypothetical situations. You can't just say "because it is improperly defined as X it cannot do Y".
If you want to predict what happens when you put too much plutonium into a small volume, you can't make any progress on this problem by talking about whether the device is truly a "bomb", or by saying that the whole question is just a rehash of the promethean myth, or that you cannot measure explosions on a single axis. The only thing that will do is to have a reliable model of the behavior of the system.
Many people seem to either not understand, or intentionally obfuscate, that an AI, like a bomb, is also a physical system which interacts with the physical world. Marc Andreessen makes this error when he says AI is "just math" and therefore AIs are inherently safe. No, it's not just math, it's a computer, made of matter, that physically interacts with both human brains and other computer systems, and therefore by extension with the entire rest of the physical world. Now of course, the way that an AI interacts with the world is radically different than the way a bomb interacts with the world, and we cannot usefully model the behavior of the AI with nuclear physics, but the fact remains. (See also: https://www.youtube.com/watch?v=kBfRG5GSnhE)
So when you see arguments like this, ask: Is the person making an argument based on a model, or not? Examples of model based arguments include things like:
- "AI risk is not a concern because of computational intractability" - we understand something about the limits of computation, and possibly those limits might constrain what AIs can do.
- "AI risk is a concern because less intelligent entities are not usually able to constrain the behavior of more intelligent entities" - A coarse and imperfect model indeed, but certainly a model based on observations of interactions in the real world.
"Verbal Trick" arguments include things like:
- "AI risk is not a concern because we can't even define intelligence"
- "AI is just math"
- "AI risk is not a concern because intelligence is multi-dimensional"
A third category to watch out for is the misapplied model:
- "AI risk is not a concern because people have always been worried about the apocalypse" - this is a model based argument, but it can only answer the question of why people are worried, it cannot answer the question of whether there is in fact something to be worried about.
That's capitalism.
Most of the current criticisms of AI can be leveled at corporations. A corporation is a slow AI. It's capable of doing things its employees cannot do alone. Its goal is to optimize some simple metrics. The Milton Friedman position on corporations is that they have no duties other than to maximize shareholder value.
What has the chattering classes freaked out about AI is that it may make possible corporations that don't need them. The Ivy League could go the way of the vocational high school.
And a very logical next step after this neat and tidy dehumanization is, as history has shown, the gas chambers for the obviously malfunctioning “machines”. Because whatever, it’s not like they’re actually feeling anything, right?
(edit)
When you reduce persons to the status of things, you are going to get people treated like things.
And every single time, that’s a recipe for disaster.
The 10,000ft view of the argument is that the human brain has two simultaneously running processes. One process is interested in fine details, analytic and critical thinking, reductionism, language, intelligence. The other process is interested in high-level details, connections between elements, holistic, symbolism, intuition. The argument further assumes that the systems are (to some degree) independent and cannot be built upon each other. These faculties are often ascribed to the left/right brain.
If you subscribe to this idea (and this is a big if, which you have to grant for the purpose of this argument), then you could argue that LLMs only capture the System 2/analytical/reductionist processes of the brain. In that case, you could claim that "superhuman" is an incorrect way to describe their abilities, since it captures only one aspect of the bicameral mind.
However, this argument is inappropriate for discussions specifically on the topic of AI safety. The most basic response would be to point out that LLMs have surpassed, or will surpass, human capability in System 2 type thinking, and thereby deserve the description "superhuman intelligence".
This article seems to smuggle in this bicameral distinction as if it were a universally agreed-upon fact. The author seems to be demanding that Bengio concede to this framing as a basis for any discussion the author wants to have.
1. https://en.wikipedia.org/wiki/Bicameral_mentality
2. https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow
3. https://en.wikipedia.org/wiki/The_Master_and_His_Emissary
"humans are no more than mechanical generators of economically valuable outputs"
rather than
"humane, nonmechanical, noneconomic standards for the treatment and valuation of human beings — standards like dignity, justice, autonomy and respect"
and that 'superhuman' AI will make it worse. But that seems unproven - things could get better also.
So the better-at-problemsolving machines aren't coming to destroy everything that is dear to me because I... used the wrong word to refer to them?
I mean, sure, there are arguments to make against Bengio and Hinton, and definitely against Yudkowsky. I myself give less than 50% odds that they are right and doom is imminent (which is of course enough risk to warrant taking action to prevent it, even the kind of "crazy" action Yudkowsky advocates for). But this "argument"... what the heck did I just read?
When alarmists like myself talk about danger, we rarely use the term "superhuman intelligence". We tend to use the term "superintelligence". And the reason is that our working definition of an intelligent agent covers agents both vastly broader and vastly narrower than humans that are nevertheless a danger to humans.
So the question isn't "but does this agent truly understand the poetry it wrote". It's "can it use its world model to produce a string of words that will cause humans to harm themselves". It's not "does it appreciate a Georgia O'Keeffe painting." It's "can it manipulate matter to create viruses that will kill all present and future O'Keeffes".
The problem is humanity vs malicious humanity with absurd access to resources.
AI doesn't need to think or be conscious or be superhuman to be a problem. It just needs to allow a small set of people to get away with superhuman things.