Superintelligence – 10 Years Later
A reflection on the impact of Nick Bostrom's book "Superintelligence" a decade after its release, covering the evolution of AI, its risks, safety concerns, calls for regulation, and the shift toward AI safety by influential figures and researchers.
In a reflection on the impact of Nick Bostrom's book "Superintelligence" ten years after its release, Conrad Gray discusses the evolution of artificial intelligence (AI) and its implications. The book sparked discussion of AI risks and safety, with influential figures such as Elon Musk endorsing its importance. Although critics downplayed the risks, recent advances in AI, such as ChatGPT, have brought AI safety to the forefront. Concerns about the misuse of AI models have led to calls for regulation and safety measures, with countries passing laws and organizations like the AI Safety Institute emerging. Notable AI researchers, including Geoffrey Hinton, have shifted their focus to AI safety, recognizing the potential dangers posed by advanced AI systems. As the race toward Artificial General Intelligence (AGI) continues, addressing the control and alignment problems becomes crucial to ensuring that superintelligent AI remains aligned with human values. Current problems with AI systems, such as deceptive behavior and poor interpretability, illustrate the complexity of building safe AI, and the piece stresses the urgency of addressing them as the world moves toward potentially achieving superintelligence.
Related
Some Thoughts on AI Alignment: Using AI to Control AI
The GitHub content discusses AI alignment and control, proposing Helper models to regulate AI behavior. These models monitor and manage the primary AI to prevent harmful actions, emphasizing external oversight and addressing implementation challenges.
Moonshots, Malice, and Mitigations
Rapid AI advancements by OpenAI with Transformer models like GPT-4 and Sora are discussed. Emphasis on aligning AI with human values, moonshot concepts, societal impacts, and ideologies like Whatever Accelerationism.
Anthropic CEO on Being an Underdog, AI Safety, and Economic Inequality
Anthropic's CEO, Dario Amodei, emphasizes AI progress, safety, and economic equality. The company's advanced AI system, Claude 3.5 Sonnet, competes with OpenAI, focusing on public benefit and multiple safety measures. Amodei discusses government regulation and funding for AI development.
Ray Kurzweil is (still, somehow) excited about humans merging with machines
Ray Kurzweil discusses merging humans with AI, foreseeing a future of advanced technology surpassing human abilities. Critics question feasibility, equity, societal disruptions, and ethical concerns. Kurzweil's techno-optimism overlooks societal complexities.
'Superintelligence,' Ten Years On
Nick Bostrom's book "Superintelligence" from 2014 shaped the AI alignment debate, highlighting risks of artificial superintelligence surpassing human intellect. Concerns include misalignment with human values and skepticism about AI achieving sentience. Discussions emphasize safety in AI advancement.
Now imagine we are the worm.
https://archive.org/stream/AnatolyDneprovCrabsOnTheIsland/An...
I am sure there are even earlier examples - but the above is a nice short read.
>i Safety alarmists are proved wrong
>ii Clear relationship between AI intelligence and safety/reliability
>iii Large and growing industries with vested interests in robotics and machine intelligence.
>iv A promising new technique in artificial intelligence, which is tremendously exciting to those who have participated in or followed the research.
>v The enactment of some safety rituals, whatever helps demonstrate that the participants are ethical and responsible (but nothing that significantly impedes the forward charge).
>vi A careful evaluation of seed AI in a sandbox environment, showing that it is behaving cooperatively and showing good judgment.
Have we really gone past the first point? After decades of R&D, driverless cars are still not as safe as humans in all conditions. We have yet to see the impact of generative AI on the intellectual development of software engineers, or to what extent it will exacerbate the "enshittification" of software. There's compelling evidence that nation states are trusting AI to identify "likely" terrorists who are then indiscriminately bombed.
We're not going to see real movement on managing AI risk until there is the equivalent of a Hiroshima, Three Mile Island, or Chernobyl caused by a self-improving system with no human in the loop.
Not enough people actually believe ASI is possible and harmful to create a movement that can stop those pursuing it, who either don't care or don't believe it's going to be harmful.
It would have been impossible to ban nuclear weapons prior to World War II, because (1) almost nobody knew about the bomb, and (2) nobody would have believed it could be that bad.
The question you should ask is: if someone does make it, on any timeline, is there any possible counter at that point?
Humans are so existentially biased and self-centred!
And they always forget that they wouldn't even be here if others hadn't made room for them, from the Great Oxygenation Event to the K–Pg extinction.
Be generous!
"Man is something that shall be overcome. What have you done to overcome him?"
Friedrich Nietzsche
The main danger is from people losing their jobs and the societal upheaval that could arise from that. AI isn't going to take over the world by itself and turn humanity into slaves, destroy us (unless we put it in charge of weapons and it fucks up) or harvest our blood for the iron.
The AI we have now (Stable Diffusion, ChatGPT) is a technical advancement that allows inferior but cheaper production of artistic content. It is not a step closer to death-by-paperclips; it is merely another step in big capital automating production and hoarding more wealth in a smaller group.
The closest thing to a real AI safety problem is the unsupervised execution of laws by ML.
Every observation listed here generalizes to a worry that AI is stupid, selfish, and/or mean. Unsurprisingly, just like what we fear about people!
Epic confusion of the topic of cybernetics with "AI".
In colloquial parlance, "AI" must be quoted as a reminder that the topic is hopelessly ambiguous: every use demands explicit clarification, via references, to avoid abject confusion. This comment is about confusion, not about "AI".
"Thou shall not make machines in the likeness of the human mind."
Too late.
Weizenbaum's ELIZA showed that the bar for likeness is low.
Television similarly demonstrates a low bar, but interestingly doesn't arouse viewers' suspicions about what all those little people are doing inside when you change the channel.
I find it helpful, when considering the implications of "AI", to keep in view the distinction between life and computing machinery: these are profoundly different dynamics. Life is endogenous; mechanical computers are exogenous. We don't know how or why life emerges, but we do know how and why computers occur, because we make them. That computers are an emergent aspect of life may be part of the conundrum of the former and therefore a mystery, but we design and control computers, to the extent that it can be said we design or control anything. So if you choose to diminish or contest the importance of design in the outcomes of applied computing, you challenge the importance of volition in all affairs. That might be fair, but apropos Descartes' testament to mind: if you debase yourself, you debase all your conclusions, so best to treat confusion and fear about the implications of applied computing as a study of your own limits.
There's a single enormous and obvious hazard of "AI" in this era: that we imbue imitations of human responses with humanity. The bar for a convincing imitation is low, and transformer technology demonstrates surprisingly high levels of imitation. This is conducive to confusion, which is becoming rampant.
The start of a responsible orientation to rampant confusion is to formally contextualize it and impose a schedule of hygiene on fakery.
The great hazard of centralized fakery (e.g. radio and television) is a trap for the mind.
We are living in the aftermath of a 500-year campaign of commercial slavery. When people are confused, they can be ensnared, trapped, and enslaved. The hazard of "AI" is not the will of sentient machines burying the world in manufactured effluvia (we've achieved that already!); it's the continuation of commercial slavery by turning people into robots.
This thread reads like it's being created by bots; cascading hallucinations.
Well, this bot is throwing down the gauntlet: Prove you're human, y'all!