July 4th, 2024

Superintelligence–10 Years Later

A reflection on the impact of Nick Bostrom's book "Superintelligence" a decade after its release, covering AI's evolution, risks, safety concerns, calls for regulation, and the shift towards AI safety among influential figures and researchers.

In a reflection on the impact of Nick Bostrom's book "Superintelligence" ten years after its release, Conrad Gray discusses the evolution of artificial intelligence (AI) and its implications. The book sparked discussions of AI risk and safety, with influential figures like Elon Musk endorsing its importance. Although critics downplayed the risks, recent advances in AI, such as ChatGPT, have brought AI safety to the forefront. Concerns about the misuse of AI models have led to calls for regulation and safety measures, with countries passing laws and organizations like the AI Safety Institute emerging. Notable AI researchers, including Geoffrey Hinton, have shifted their focus to AI safety, recognizing the potential dangers posed by advanced AI systems. As the race towards Artificial General Intelligence (AGI) continues, addressing the control and alignment problems becomes crucial to ensuring that superintelligent AI shares human values. Current challenges with AI systems, such as deception and poor interpretability, highlight the complexity of building safe AI, and the piece stresses the urgency of addressing these issues as the world moves towards potentially achieving superintelligence.

Related

Some Thoughts on AI Alignment: Using AI to Control AI

The GitHub post discusses AI alignment and control, proposing Helper models to regulate AI behavior. These models monitor and manage the primary AI to prevent harmful actions, emphasizing external oversight and addressing implementation challenges; a minimal sketch of the pattern follows.
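
To make the oversight idea concrete, here is a minimal, hypothetical sketch: a Helper model screens every output of the primary model before it takes effect. The function names and the keyword-based check below are placeholder assumptions for illustration, not APIs or logic from the linked repository.

```python
# Hypothetical sketch of the "Helper model" oversight pattern described above.
# All names here are illustrative placeholders, not code from the linked post.

def query_primary_model(prompt: str) -> str:
    """Stand-in for the primary AI: returns a proposed response/action."""
    return f"(proposed response to: {prompt})"

def query_helper_model(proposed: str) -> bool:
    """Stand-in for the Helper model: approves or vetoes the proposal."""
    red_flags = ("delete all", "exfiltrate", "disable oversight")
    return not any(flag in proposed.lower() for flag in red_flags)

def overseen_respond(prompt: str) -> str:
    """The primary model never acts directly; the Helper screens each output."""
    proposed = query_primary_model(prompt)
    if query_helper_model(proposed):
        return proposed
    return "[blocked by helper model]"

if __name__ == "__main__":
    print(overseen_respond("summarize this document"))
```

The design choice being illustrated is that the veto lives outside the primary model: even a simple external filter changes the trust model, since the primary AI's outputs are never executed unreviewed.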

Moonshots, Malice, and Mitigations

The article discusses rapid AI advancements by OpenAI with Transformer models like GPT-4 and Sora, emphasizing the alignment of AI with human values, moonshot concepts, societal impacts, and ideologies like Whatever Accelerationism.

Anthropic CEO on Being an Underdog, AI Safety, and Economic Inequality

Anthropic's CEO, Dario Amodei, emphasizes AI progress, safety, and economic equality. The company's advanced AI system, Claude 3.5 Sonnet, competes with OpenAI's models, with a focus on public benefit and multiple safety measures. Amodei also discusses government regulation and funding for AI development.

Ray Kurzweil is (still, somehow) excited about humans merging with machines

Ray Kurzweil discusses merging humans with AI, foreseeing a future in which advanced technology surpasses human abilities. Critics question the vision's feasibility and raise concerns about equity, societal disruption, and ethics, arguing that Kurzweil's techno-optimism overlooks societal complexities.

'Superintelligence,' Ten Years On

Nick Bostrom's 2014 book "Superintelligence" shaped the AI alignment debate, highlighting the risks of artificial superintelligence surpassing human intellect. Concerns include misalignment with human values and skepticism about AI achieving sentience; the discussion emphasizes safety in AI advancement.

16 comments
By @zxcb1 - 3 months
Norbert Wiener was ahead of his time in recognizing the potential danger of emergent intelligent machines. I believe he was even further ahead in recognizing that the first artificial intelligences had already begun to emerge. He was correct in identifying the corporations and bureaus that he called "machines of flesh and blood" as the first intelligent machines.

https://en.wikipedia.org/wiki/Possible_Minds

By @satvikpendem - 3 months
I remember reading Bostrom's work in 2014 and raving about it to others while no one really understood what I was so interested in. Well, now everyone is talking about this topic. One of my favorite analogies in the book goes something like this: imagine a worm wriggling in the ground. It has no conception of the god-like beings that inhabit the world, living in cities, having all sorts of goals, doing all sorts of jobs. It literally does not have the brain power to comprehend what is happening.

Now imagine we are the worm.

By @sireat - 3 months
I just want to point out that the paperclip problem was already present in a short story from 1959 by the Soviet sci-fi writer Anatoly Dneprov, "Crabs on the Island":

https://archive.org/stream/AnatolyDneprovCrabsOnTheIsland/An...

I am sure there are even earlier examples - but the above is a nice short read.

By @throwerofstone - 3 months
The author states that AI safety is very important, that many experts think it is very important, and that even governments consider it very important, but there is no mention of why it is important or what "safe" AI even looks like. Am I so out of the loop that what this concept entails is obvious and requires no explanation, or am I overlooking something here?

By @bluetomcat - 3 months
This is one of the most delusional and speculative books I've ever read. The author comes up with elaborate analytical models resting on slippery, loosely defined terms: clever algebra, totally disconnected from technological reality. It's the kind of stuff VP execs and Bill Gates like to read, and one of the reasons for the current bubble.

By @n4r9 - 3 months
The author claims that we are "between third and fifth point" in the following list:

>i Safety alarmists are proved wrong

>ii Clear relationship between AI intelligence and safety/reliability

>iii Large and growing industries with vested interests in robotics and machine intelligence.

>iv A promising new technique in artificial intelligence, which is tremendously exciting to those who have participated in or followed the research.

>v The enactment of some safety rituals, whatever helps demonstrate that the participants are ethical and responsible (but nothing that significantly impedes the forward charge).

>vi A careful evaluation of seed AI in a sandbox environment, showing that it is behaving cooperatively and showing good judgment.

Have we really gone past the first point? After decades of R&D, driverless cars are still not as safe as humans in all conditions. We have yet to see the impact of generative AI on the intellectual development of software engineers, or to what extent it will exacerbate the "enshittification" of software. There's compelling evidence that nation states are trusting AI to identify "likely" terrorists who are then indiscriminately bombed.

By @AndrewKemendo - 3 months
I’ve been working on this issue for a while and the conclusion I have come to is:

We’re not going to see actual movement on managing AI risk until there is the equivalent of a Hiroshima, Three Mile Island, or Chernobyl from a self-improving system that has no human in the loop.

Not enough people actually believe ASI is possible and harmful to create a movement that will stop the people pursuing it, who either don’t care or don’t believe it’s going to be harmful.

It would have been impossible to have a nuclear weapons ban prior to World War II because (1) almost nobody knew about it, and (2) nobody would have actually believed it could be that bad.

The question you should ask is: if someone does make it, on any timeline, is there any possible counter at that point?

By @Borrible - 3 months
Perfect is the enemy of good, so why vote for a lesser good?

Humans are so existentially biased and self-centred!

And they are always forgetting that they wouldn't even be there if others hadn't made room for them, from the Great Oxygenation Event to the K–Pg extinction event.

Be generous!

"Man is something that shall be overcome. What have you done to overcome him?"

Friedrich Nietzsche

By @alangou - 3 months
There is the exemplary manga Blame!, in which intelligent machines have run amok and turned the whole Solar System into one dystopian megastructure, and the safeguard protocols, like an immune system gone awry, have wiped out the majority of humans. The hero is an agent tasked with reviving humanity, which requires him to embark on a centuries-long journey through the megastructure, something like Journey to the West meets the Odyssey.

https://en.wikipedia.org/wiki/Blame!

By @_xerces_ - 3 months
It is as ridiculous as this: https://en.wikipedia.org/wiki/Roko%27s_basilisk

The main danger is from people losing their jobs and the societal upheaval that could arise from that. AI isn't going to take over the world by itself and turn humanity into slaves, destroy us (unless we put it in charge of weapons and it fucks up) or harvest our blood for the iron.

By @CuriouslyC - 3 months
I love how people think that because we are getting very good at efficiently encoding human intelligence, we must be very close to creating superintelligence, and that our progress on creating superintelligence will somehow resemble the rate of progress on the simpler problem of encoding existing intelligence.

By @amai - 3 months
10 years later, the critique of "Superintelligence" is still valid: https://www.astromaier.de/post/2015-06-07-superintelligence-...

By @navane - 3 months
AI safety is fear mongering to shut up the Luddites

The AI systems we have now (Stable Diffusion, ChatGPT) are technical advancements that allow inferior but cheaper production of artistic content. They are not a step closer to death-by-paperclips; they are merely another step in big capital's automation of production, hoarding more wealth in a smaller group.

The closest thing to a real AI safety issue is the unsupervised execution of laws by ML.

By @_wire_ - 3 months
This thread is a testament to the Dunning-Kruger effect.

Every observation listed here can be generalized to a case of worry that AI is stupid, selfish and/or mean. Unsurprisingly, just like what's feared about people!

Epic confusion of the topic of cybernetics with "AI".

In colloquial parlance, "AI" must be quoted to remind us that the topic is hopelessly ambiguous, where every use demands explicit clarification via references to avoid abject confusion. This comment is about confusion, not about "AI".

"Thou shall not make machines in the likeness of the human mind."

Too late.

Weizenbaum's ELIZA showed the bar to likeness is low.

Television similarly demonstrates a low bar, but interestingly doesn't arouse viewers' suspicions about what all those little people are doing inside when you change the channel.

I find it helpful, when considering the implications of "AI", to note the distinction between life and computing machinery: these are profoundly different dynamics; life is endogenous, mechanical computers are exogenous. We don't know how or why life emerges, but we do know how and why computers occur, because we make them. That computers are an emergent aspect of life may be part of the conundrum of the former and therefore a mystery, but we design and control computers, to the extent that it can be said we design or control anything. So if you choose to diminish or contest the importance of design in the outcomes of applied computing, you challenge the importance of volition in all affairs. That might be fair, but apropos Descartes' testament to mind: if you debase yourself, you debase all your conclusions, so it is best to treat confusion and fear about the implications of applied computing as a study of your own limits.

There's a single enormous and obvious hazard of "AI" in this era: that we imbue imitations of human responses with humanity. The bar is low for a convincing imitation and transformer technology demonstrates surprisingly high levels of imitation. This is conducive to confusion, which is becoming rampant.

The start of a responsible orientation to rampant confusion is to formally contextualize it and impose a schedule of hygiene on fakery.

The great hazard of centralized fakery (e.g. radio and television) is a trap for the mind.

We are living in the aftermath of a 500-year campaign of commercial slavery. When people are confused, they can be ensnared, trapped, and enslaved. The hazard of "AI" is not the will of sentient machines burying the world in manufactured effluvia (we've achieved this already!); it's the continuation of commercial slavery by turning people into robots.

This thread reads like it's being created by bots; cascading hallucinations.

Well, this bot is throwing down the gauntlet: Prove you're human, y'all!