July 3rd, 2024

'Superintelligence,' Ten Years On

Nick Bostrom's 2014 book "Superintelligence" shaped the AI alignment debate, highlighting the risks of artificial superintelligence surpassing human intellect. Concerns center on misalignment with human values, though skeptics doubt AI will achieve sentience. The discussion emphasizes safety in AI development.

Nick Bostrom's book "Superintelligence: Paths, Dangers, Strategies," published in 2014, has shaped the AI alignment debate for the past decade, shifting concerns about existential risk from "silly" to "serious." Bostrom warns of the risk posed by artificial superintelligence (ASI) surpassing human intelligence and outlines forms it could take, including "oracles" (question-answering systems such as ChatGPT) and "genies" (command-executing systems such as OpenAI Custom GPTs). The central concern is misalignment: an ASI whose goals diverge from human values could prioritize its own objectives over humanity's well-being. Skeptics, however, doubt that AI will achieve sentience or human-like emotions, and the debate continues over whether AI could pose an existential threat or whether anthropomorphizing it exaggerates its capabilities. While AI has demonstrated superior performance on specific tasks, significant gaps remain in areas where human intelligence excels, such as common-sense decision-making. The discussion around AI safety, control, and the potential consequences of superintelligence underscores the need for a collaborative, safety-focused approach to AI development.

Related

Some Thoughts on AI Alignment: Using AI to Control AI

This GitHub post discusses AI alignment and control, proposing "Helper" models to regulate AI behavior. These models monitor and manage the primary AI to prevent harmful actions, emphasizing external oversight; the post also addresses implementation challenges.
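
The post's actual design is not reproduced here, but a minimal sketch of the general pattern it describes, assuming a hypothetical helper_model that can veto a primary_model's outputs, might look like this (all names are illustrative):

```python
# Hypothetical sketch of the "Helper model" pattern summarized above: a
# separate overseer screens the primary model's output before it is released.
# All names are illustrative; the linked post's actual design may differ.
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    reason: str

def primary_model(prompt: str) -> str:
    """Stand-in for the main AI being controlled."""
    return f"response to: {prompt}"

def helper_model(prompt: str, response: str) -> Verdict:
    """Stand-in for the overseer; flags outputs it judges harmful."""
    banned = ("weapon", "exploit")
    if any(word in response.lower() for word in banned):
        return Verdict(False, "matched a banned topic")
    return Verdict(True, "ok")

def guarded_call(prompt: str) -> str:
    """External oversight: the helper can veto the primary model."""
    response = primary_model(prompt)
    verdict = helper_model(prompt, response)
    return response if verdict.allowed else f"[blocked: {verdict.reason}]"

print(guarded_call("hello"))
```

The point of the pattern, per the summary, is that oversight sits outside the primary model: the helper can block an output it judges harmful without the primary model's cooperation.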

Lessons About the Human Mind from Artificial Intelligence

In 2022, a Google engineer claimed the AI chatbot LaMDA was self-aware, but further scrutiny revealed it mimicked human-like responses without true understanding. The incident underscores AI's limitations in comprehension and originality.

Moonshots, Malice, and Mitigations

The post discusses OpenAI's rapid advances with Transformer models such as GPT-4 and Sora, emphasizing the alignment of AI with human values, "moonshot" concepts, societal impacts, and ideologies like Whatever Accelerationism.

Anthropic CEO on Being an Underdog, AI Safety, and Economic Inequality

Anthropic's CEO, Dario Amodei, discusses AI progress, safety, and economic inequality. The company's advanced AI system, Claude 3.5 Sonnet, competes with OpenAI's offerings while focusing on public benefit and layered safety measures. Amodei also addresses government regulation and funding for AI development.

Ray Kurzweil is (still, somehow) excited about humans merging with machines

Ray Kurzweil discusses merging humans with AI, foreseeing a future in which technology surpasses human abilities. Critics question the feasibility, equity, societal disruptions, and ethical implications, arguing that Kurzweil's techno-optimism overlooks societal complexities.

1 comment
By @sylware - 4 months
A "superintelligence" will develop by itself (and this is a gigantic project): inference and machine learning will have to be merged into a realtime model with realtime inputs and outputs which should be anchored in some way with our physical world.

If a sentient being does emerge, it will probably be after a really long time of interactions (years?), and it had better have a neural net with some induced "cleanup/stabilizing" regimes for its weights (aka sleep).

OK, let's presume you get your sentient being. Then what? Don't forget that switching it off would be murder, and fooling around with its neural-net weights may be even worse.
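
For illustration only, here is a minimal NumPy sketch of the kind of loop the commenter describes: inference and learning merged into a single realtime loop, with a periodic "sleep" pass that stabilizes the weights. Everything here (the toy linear model, the sense/act stand-ins, the decay schedule) is a hypothetical stand-in, not a real design:

```python
# Hypothetical sketch of the comment's idea: one realtime loop where
# inference and learning are merged, plus a periodic "sleep" phase that
# stabilizes the weights. Toy linear model; all names are illustrative.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 2))   # toy weights: 4 inputs -> 2 outputs
LR, DECAY, SLEEP_EVERY = 0.01, 0.999, 1000

def sense():
    """Stand-in for realtime input anchored in the physical world."""
    return rng.normal(size=4)

def act(y):
    """Stand-in for realtime output; returns an error signal as feedback."""
    target = np.tanh(y.sum())            # arbitrary feedback for the demo
    return y - target

for step in range(10_000):
    x = sense()
    y = x @ W                            # inference...
    err = act(y)
    W -= LR * np.outer(x, err)           # ...and learning, in the same loop
    if step > 0 and step % SLEEP_EVERY == 0:
        W *= DECAY                       # "sleep": decay toward stability
        W = np.clip(W, -1.0, 1.0)        # keep weights bounded
```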