August 3rd, 2024

The Danger of Superhuman AI Is Not What You Think

Shannon Vallor critiques the narrative equating advanced AI with superhuman capabilities, arguing that it diminishes human agency and overlooks essential qualities like consciousness and empathy, and urging a more nuanced understanding of intelligence.

According to Shannon Vallor, an expert in the ethics of AI, the discourse surrounding "superhuman" AI risks undermining the essence of what it means to be human. Vallor critiques the prevalent narrative that equates advanced AI systems, like ChatGPT and Gemini, with superhuman capabilities. She argues that this rhetoric diminishes human agency and conflates human consciousness with mere computational efficiency. Vallor emphasizes that current AI lacks fundamental human qualities such as consciousness, empathy, and moral intelligence, which are essential to our humanity.

During a discussion with AI researcher Yoshua Bengio, Vallor questioned the appropriateness of labeling AI as superhuman when it fundamentally lacks the emotional and cognitive depth of human beings. She highlights a shift in the AI research community's goals, moving from creating machines indistinguishable from human minds to developing systems that outperform humans in economically valuable tasks. This shift, she argues, reduces human intelligence to mere task completion, neglecting the richness of human experience and creativity.

Vallor warns that this ideology could lead to a cultural erosion of our self-understanding, where human qualities are dismissed as mere optimization processes. By redefining intelligence in terms of economic output, we risk losing sight of the intrinsic values that define humanity. Ultimately, Vallor calls for a more nuanced understanding of intelligence that recognizes the unique aspects of human experience beyond mere computational prowess.

Related

'Superintelligence,' Ten Years On

Nick Bostrom's book "Superintelligence" from 2014 shaped the AI alignment debate, highlighting risks of artificial superintelligence surpassing human intellect. Concerns include misalignment with human values and skepticism about AI achieving sentience. Discussions emphasize safety in AI advancement.

Superintelligence–10 Years Later

Reflection on the impact of Nick Bostrom's "Superintelligence" book after a decade, highlighting AI evolution, risks, safety concerns, regulatory calls, and the shift towards AI safety by influential figures and researchers.

AI's Cognitive Mirror: The Illusion of Consciousness in the Digital Age

The article explores AI's limitations in developing spiritual consciousness due to lacking sensory perception like humans. It discusses AI's strengths in abstract thought but warns of errors and biases. It touches on AI worship, language models, and the need for safeguards.

Pop Culture

Goldman Sachs report questions generative AI's productivity benefits, power demands, and industry hype. Economist Daron Acemoglu doubts AI's transformative potential, highlighting limitations in real-world applications and escalating training costs.

Someone is wrong on the internet (AGI Doom edition)

The blog post critiques the existential risk of Artificial General Intelligence (AGI), questioning fast takeoff scenarios and emphasizing practical knowledge over doomsday predictions. It challenges assumptions and advocates for nuanced understanding.

21 comments
By @ck2 - 2 months
Superhuman AI won't be used to improve quality of life, just like usenet wasn't the day after someone figured out you could put ads on it

It's going to be used to pump as much money out of each individual as possible

Every online store, big corporations especially, will instantly work out how much you are able and willing to pay for everything, and that's your "personalized price"

It will scan all your social media, it will look at your purchase histories, it will know your income and credit

Like a used car dealer on steroids that never gets tired and learns more and more about you by following you around and watching what you do

Everything from your daily food to your big-ticket purchases: maximum customized prices

It's going to be evil as hell, and lawmakers will eventually just be paid off to let it all happen. Data collection and no two people paying the same price for the same thing will become 100% legal and the norm

By @zoogeny - 2 months
This article really feels like so much nitpicking. I hate this kind of semantic debate.

> How, I asked, does an AI system without the human capacity for conscious self-reflection, empathy or moral intelligence become superhuman merely by being a faster problem-solver? Aren’t we more than that? And doesn’t granting the label “superhuman” to machines that lack the most vital dimensions of humanity end up obscuring from our view the very things about being human that we care about?

This is such an annoying framing. There are massive assumptions throughout, from "human capacity" to "conscious self-reflection" to "moral intelligence". And then those assumptive definitions get thrown in another person's face, with a demand to explain how they relate to "superhuman" intelligence.

It's almost like the author is saying "I have intuitive definitions for this grab-bag of words that don't align with my intuitive definition of this one particular word you are using".

> characterizations of human beings as acting wisely, playfully, inventively, insightfully, meditatively, courageously, compassionately or justly are no more than poetic license

Again, the author has some intuitive sense of the definition of these words. If the author wants to get spiritual/supernatural/mystical about it all then they are free to go ahead and do so. They should come out and make the bold claim instead of masquerading their opinion as reasonable/rational discourse. They are more likely to find a sympathetic audience for that.

By @robwwilliams - 2 months
Vallor performs her own bait-and-switch: she characterizes many who do not share her view as driven by a venal economic imperative. She writes:

> Once you have reduced the concept of human intelligence to what the markets will pay for…

Bengio and Hinton certainly do not make that ridiculous reduction as a condition or criterion. Even Altman does not, although he certainly feels the pressure.

Vallor taught for many years at a preeminent Jesuit institution, and when I read this polemic my first thought was: please put your “belief system” cards on the table. What is the fundamental substrate on which you are thinking and building your arguments? Would you be comfortable with Daniel Dennett’s thoughts on brain function and consciousness, or do you suspect that there must be some essential and almost ineffable light that illuminates the human mind from above or below?

My cards on the table: I am an atheist and I am not looking for another false center for my universe. The hard problem of AGI is now the embodiment problem and iterative self-teaching and self-control of action and attention.

You could say that we need to get the animal into an AI now that we have language reasonably well embedded. From that point onward, and I think quite soon (10 years), it will be AGI.

Perhaps “super” is not the right adjective but it grabs attention effectively.

By @htk - 2 months
She talks about "The struggle against this reductive and cynical ideology..." when discussing humans in factories, but falls into the same weak kind of argument by painting AI with a broad brush as "mindless mechanical remixers and regurgitators".

By @JKCalhoun - 2 months
The author and Yoshua Bengio talked past one another.

Ms. Vallor is too concerned over semantics, too fixated on the current crop of LLMs, too invested in "pinning down" Mr. Bengio.

Five to ten years out, though, I suspect we will no longer be having arguments over what it means to be human or intelligent.

By @SoftTalker - 2 months
“What if, instead of replacing humane vocations in media, design and the arts with mindless mechanical remixers and regurgitators of culture like ChatGPT, we asked AI developers to help us with the most meaningless tasks in our lives?”

This is the question I always ask. I'll be interested in AI when it can do my laundry and my dishes. I guess self-driving cars might fit in here; although I personally like driving, some people hate it, and even I can see how an AI driver could be desirable at times (e.g. coming home from the bars).

By @emtel - 2 months
So many arguments along these lines run afoul of a very simple rule: You can't make predictions about the future by playing word games!

An AI system can only be dangerous or not dangerous based on what physical events happen in the real world, as a result of that AI interacting with the world via known physical laws. In order to decide whether a system is or isn't dangerous you need to have a predictive model of what that system will actually do in various hypothetical situations. You can't just say "because it is improperly defined as X it cannot do Y".

If you want to predict what happens when you put too much plutonium into a small volume, you can't make any progress on this problem by talking about whether the device is truly a "bomb", or by saying that the whole question is just a rehash of the promethean myth, or that you cannot measure explosions on a single axis. The only thing that will do is to have a reliable model of the behavior of the system.

Many people seem to either not understand, or intentionally obfuscate, that an AI, like a bomb, is also a physical system which interacts with the physical world. Marc Andreessen makes this error when he says AI is "just math" and therefore AIs are inherently safe. No, it's not just math, it's a computer, made of matter, that physically interacts with both human brains and other computer systems, and therefore by extension with the entire rest of the physical world. Now of course, the way that an AI interacts with the world is radically different than the way a bomb interacts with the world, and we cannot usefully model the behavior of the AI with nuclear physics, but the fact remains. (See also: https://www.youtube.com/watch?v=kBfRG5GSnhE)

So when you see arguments like this, ask: Is the person making an argument based on a model, or not? Examples of model based arguments include things like:

- "AI risk is not a concern because of computational intractability" - we understand something about the limits of computation, and possibly those limits might constrain what AIs can do.

- "AI risk is a concern because less intelligent entities are not usually able to constrain the behavior of more intelligent entities" - A coarse and imperfect model indeed, but certainly a model based on observations of interactions in the real world.

"Verbal Trick" arguments include things like:

- "AI risk is not a concern because we can't even define intelligence"

- "AI is just math"

- "AI risk is not a concern because intelligence is multi-dimensional"

A third category to watch out for is the misapplied model:

- "AI risk is not a concern because people have always been worried about the apocalypse" - this is a model based argument, but it can only answer the question of why people are worried, it cannot answer the question of whether there is in fact something to be worried about.

By @goethes_kind - 2 months
Could we not one day come up with an AI that is more empathetic than the average human?

By @batch12 - 2 months
There is no artificial intelligence. These models aren't sentient. However, sentience isn't required to be destructive. We have a hard time fighting viruses and bacteria. Even something comparatively simple like MS Blaster did a fair amount of damage. If someone figures out a good way to weaponize the technology, we'll all have some bad days.

By @Animats - 2 months
"Once you have reduced the concept of human intelligence to what the markets will pay for, then suddenly, all it takes to build an intelligent machine — even a superhuman one — is to make something that generates economically valuable outputs at a rate and average quality that exceeds your own economic output. Anything else is irrelevant."

That's capitalism.

Most of the current criticisms of AI can be leveled at corporations. A corporation is a slow AI. It's capable of doing things its employees cannot do alone. Its goal is to optimize some simple metrics. The Milton Friedman position on corporations is that they have no duties other than to maximize shareholder value.

What has the chattering classes freaked out about AI is that it may make possible corporations that don't need them. The Ivy League could go the way of the vocational high school.

By @hprotagonist - 2 months
> As the ideology behind this bait-and-switch leaks into the wider culture, it slowly corrodes our own self-understanding. If you try to point out, in a large lecture or online forum on AI, that ChatGPT does not experience and cannot think about the things that correspond to the words and sentences it produces — that it is only a mathematical generator of expected language patterns — chances are that someone will respond, in a completely serious manner: “But so are we.”

And a very logical next step after this neat and tidy dehumanization is, as history has shown, the gas chambers for the obviously malfunctioning “machines”. Because whatever, it’s not like they’re actually feeling anything, right?

(edit)

When you reduce persons to the status of things, you are going to get people treated like things.

And every single time, that’s a recipe for disaster.

By @zoogeny - 2 months
Among the more mystically inclined there is a new metaphysic of consciousness that is emerging. It is based on the Bicameral Mentality [1] posited by Jaynes in 1976. It is strongly related to the System 1/System 2 theory of the psychologist Kahneman in his 2011 book Thinking, Fast and Slow [2]. Ian McGilchrist has spilled a lot of ink trying to formalize this idea, including in his 2009 work The Master and His Emissary [3].

The 10,000ft view of the argument is that the human brain has two simultaneously running processes. One process is interested in fine details, analytic and critical thinking, reductionism, language, intelligence. The other process is interested in the high-level view, connections between elements, holistic thinking, symbolism, intuition. The argument further assumes that the systems are (to some degree) independent and cannot be built upon each other. These faculties are often ascribed to the left/right brain.

If you subscribe to this idea (and this is a big if which you have to grant for the purpose of this argument), then you could argue that LLMs only capture System 2/analytical/reductionist processes of the brain. In that case, you could claim that "superhuman" is an incorrect way to describe their abilities since it only captures 1 aspect of the bicameral mind.

However, this argument is inappropriate for discussions specifically on the topic of AI safety. The most basic response would be to point out that LLMs have surpassed or will surpass human capability in System 2-type thinking and thereby deserve the description "superhuman intelligence".

This article seems to smuggle in this bicameral distinction as if it were a universally agreed-upon fact. The author seems to be demanding that Bengio concede to this framing as a basis for any discussion the author wants to have.

1. https://en.wikipedia.org/wiki/Bicameral_mentality

2. https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow

3. https://en.wikipedia.org/wiki/The_Master_and_His_Emissary

By @tim333 - 2 months
Her argument, which is a bit vague, kind of seems to be that at the moment there's a problem with seeing

"humans are no more than mechanical generators of economically valuable outputs"

rather than

"humane, nonmechanical, noneconomic standards for the treatment and valuation of human beings — standards like dignity, justice, autonomy and respect"

and that 'superhuman' AI will make it worse. But that seems unproven - things could get better also.

By @akomtu - 2 months
Superhuman doesn't mean divine. It can be superhuman in its ability to control and oppress. The danger is that AI will be devilishly creative and wrap the majority of humanity in a logically perfect ideology that equates humans with machines, and once people fully internalize that ideology, they'll complete their spiritual self-destruction. And those few wise enough to find holes in the AI ideology will be quickly isolated.

By @SonOfLilit - 2 months
I guess I can sigh in relief: the Superhuman AI that Yoshua Bengio and Geoffrey Hinton are warning us might drive humanity towards extinction, the way we did to most apes and many tribal civilizations, is no danger after all, because you see, to be Human is not about winning at war or economy, not about being good at solving a vast array of tasks, it's about playing with your kids.

So the better-at-problemsolving machines aren't coming to destroy everything that is dear to me because I... used the wrong word to refer to them?

I mean, sure, there are arguments to make against Bengio and Hinton, definitely against Yudkowsky, and I myself give less than a 50% chance that they are right and doom is imminent (which is of course enough risk to warrant taking action to prevent it, even the kind of "crazy" action Yudkowsky advocates for), but this "argument"... what the heck did I just read.

By @lazyeye - 2 months
Completely misses the point and focuses on labels and definitions, which are irrelevant. The danger in AI will almost certainly come from a direction no one was expecting: a kid constructs a lethal virus, a military event is triggered with catastrophic consequences, etc.

By @fifticon - 2 months
I see real threats in this. It resembles other 'questionable' bargains that we humans have given in to before. We have come to accept that we must work and produce capital to be allowed to live, currently partly to enrich various billionaires (the wealth we produce already allows us to buy dozens of big TVs we don't need, and we have shops full of junk we don't need; clearly we produce more than we need to live, for questionable reasons). We can no longer live as stone age hunters (I'm not saying that is a better fate, but we no longer have that choice). It is a very real threat that other "things you will have to accept" will arrive on the ship of AI. Sigh.

By @afthonos - 2 months
There’s a lot of good stuff in this article, but it nevertheless misses the point (as did Yoshua Bengio as described, to be clear).

When alarmists like myself talk about danger, we rarely use the term “superhuman intelligence”. We tend to use the term “superintelligence”. And the reason is that our working definition of an intelligent agent includes agents both vastly broader and vastly narrower than humans that are nevertheless a danger to humans.

So the question isn’t “but does this agent truly understand the poetry it wrote”. It’s “can it use its world model to produce a string of words that will cause humans to harm themselves”. It’s not “does it appreciate a Georgia O’Keeffe painting.” It’s “can it manipulate matter to create viruses that will kill all present and future O’Keeffes”.

By @jfyi - 2 months
This is just another framing of the problem as AI vs humanity. It comes off preachy and frankly misses the boat as much as claiming AI is sentient.

The problem is humanity vs malicious humanity with absurd access to resources.

AI doesn't need to think or be conscious or be superhuman to be a problem. It just needs to allow a small set of people to get away with superhuman things.

By @lukeschlather - 2 months
The incoherent ramblings of LLMs often remind me of children or people with neurodegenerative diseases. If we want to preserve our humanity as this article suggests, I don't think we can take it for granted that current AIs lack the same building blocks as our consciousness. The author rails against the definition of "can perform economically valuable tasks as well as the average human", but I feel like this is a direct response to the author's refusal to consider that LLMs might be feeling. If we want to keep our humanity we have to be open to the possibility that if it can eloquently describe a strong emotion, it actually might be feeling that emotion and not simply "predicting the next token." But the author seems to think that it is obvious fact that state-of-the-art LLMs do not feel anything, and I'm not sure it's even a falsifiable question.