AI existential risk probabilities are too unreliable to inform policy
Governments struggle to assess AI existential risks due to unreliable forecasts and lack of consensus among experts. Policymakers must critically evaluate risk estimates before making decisions that could impact stakeholders.
Governments face challenges in assessing the existential risks posed by artificial intelligence (AI) due to the lack of consensus and reliability in risk probability estimates. The AI safety community often relies on speculative forecasts to inform policy, but these estimates are too unreliable for effective decision-making. The authors argue that while some risks can be quantified through inductive, deductive, or subjective methods, the unique nature of AI x-risk makes each approach problematic. Inductive estimates lack a suitable reference class, since human extinction from AI is unprecedented. Deductive models fall short because technological progress and governance are too complex to model from first principles, and subjective probabilities often reflect personal judgment rather than grounded analysis.
The authors highlight the variability in risk estimates, citing a forecasting tournament where AI experts provided a wide range of probabilities for AI extinction by 2100, indicating a lack of consensus even among knowledgeable individuals. This inconsistency raises concerns about the legitimacy of using such forecasts in public policy, especially when the costs of potential regulations could disproportionately affect stakeholders. The essay emphasizes the need for policymakers to critically evaluate the justification behind risk estimates and to avoid basing decisions on speculative forecasts that lack empirical support. Ultimately, while AI x-risk forecasting may serve academic purposes, its application in public policy requires a more robust foundation to ensure informed and legitimate governance.
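The aggregation problem is easy to illustrate. Below is a minimal Python sketch using made-up probability estimates (not the actual tournament figures): the headline number a policymaker would quote varies several-fold depending on the pooling rule alone, before any question of whether the underlying judgments are sound.

```python
import math
import statistics

# Hypothetical spread of expert estimates for P(AI-caused extinction by 2100).
# These values are illustrative only, not the real tournament data.
estimates = [0.0001, 0.001, 0.02, 0.05, 0.10, 0.30]

mean_p = statistics.mean(estimates)      # simple average of probabilities
median_p = statistics.median(estimates)  # middle of the distribution

# Geometric mean of odds: another common way to pool probability forecasts.
odds = [p / (1 - p) for p in estimates]
pooled_odds = math.exp(statistics.mean(math.log(o) for o in odds))
geo_p = pooled_odds / (1 + pooled_odds)

print(f"mean:                 {mean_p:.4f}")
print(f"median:               {median_p:.4f}")
print(f"geometric mean of odds: {geo_p:.4f}")
```

With these illustrative inputs the mean comes out near 8%, the median at 3.5%, and the geometric mean of odds a bit over 1%; this kind of method-dependence is part of what the essay argues makes such numbers a weak basis for policy.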
Related
'Superintelligence,' Ten Years On
Nick Bostrom's book "Superintelligence" from 2014 shaped the AI alignment debate, highlighting risks of artificial superintelligence surpassing human intellect. Concerns include misalignment with human values and skepticism about AI achieving sentience. Discussions emphasize safety in AI advancement.
Superintelligence–10 Years Later
Reflection on the impact of Nick Bostrom's "Superintelligence" book after a decade, highlighting AI evolution, risks, safety concerns, regulatory calls, and the shift towards AI safety by influential figures and researchers.
Someone is wrong on the internet (AGI Doom edition)
The blog post critiques the existential risk of Artificial General Intelligence (AGI), questioning fast takeoff scenarios and emphasizing practical knowledge over doomsday predictions. It challenges assumptions and advocates for nuanced understanding.
All the existential risk, none of the economic impact. That's a shitty trade
Despite high expectations, AI advancements have not significantly impacted productivity or profits, yet creating highly intelligent entities may still pose existential threats, urging careful monitoring and management of AI's implications.
But why is that our default? I could just as well say we can't trust the estimates of p(survival), therefore we should fall back on a default action of not developing advanced AI.
I posted my specific proposal here: https://olegtrott.substack.com/p/recursion-in-ai-is-scary-bu...
Unlike previous ideas, it's implementable (once we have AGI-level language models) and works around the fact that much data on the Internet is false. I should probably call it Volition Extrapolated by Truthful Language Models.
I'm not sure about the point when robots can build better robots and their own power sources; maybe worry about that when the time comes.