July 27th, 2024

AI existential risk probabilities are too unreliable to inform policy

Governments struggle to assess AI existential risks due to unreliable forecasts and a lack of consensus among experts. Policymakers must critically evaluate risk estimates before making decisions that could impose significant costs on stakeholders.

Governments face challenges in assessing the existential risks posed by artificial intelligence (AI) because risk probability estimates lack both consensus and reliability. The AI safety community often relies on speculative forecasts to inform policy, but the authors argue these estimates are too unreliable for effective decision-making. While some risks can be quantified through inductive, deductive, or subjective methods, the unique nature of AI x-risk makes each approach problematic: inductive estimates lack a suitable reference class, since human extinction from AI is unprecedented; deductive models cannot capture the complexities of technological progress and governance; and subjective probabilities often reflect personal judgment rather than grounded analysis.
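
To make the reference-class problem concrete, the sketch below applies one standard inductive approach, Laplace's rule of succession, to a few hypothetical reference classes (none of these classes or counts come from the article). With zero observed occurrences, the implied probability still swings by orders of magnitude depending purely on which reference class is chosen, which is exactly why the authors consider inductive estimates ungrounded here.

```python
def laplace_rule(occurrences: int, trials: int) -> float:
    # Laplace's rule of succession: P(event on the next trial) = (s + 1) / (n + 2)
    return (occurrences + 1) / (trials + 2)

# Hypothetical reference classes, each with zero observed extinction events.
# The point is that the choice of class, not the data, drives the estimate.
reference_classes = {
    "years of modern AI research (~70)": 70,
    "years of human existence (~300,000)": 300_000,
    "prior transformative technologies (~20)": 20,
}

for name, trials in reference_classes.items():
    print(f"{name}: {laplace_rule(0, trials):.1e}")
```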

The authors highlight the variability in risk estimates, citing a forecasting tournament where AI experts provided a wide range of probabilities for AI extinction by 2100, indicating a lack of consensus even among knowledgeable individuals. This inconsistency raises concerns about the legitimacy of using such forecasts in public policy, especially when the costs of potential regulations could disproportionately affect stakeholders. The essay emphasizes the need for policymakers to critically evaluate the justification behind risk estimates and to avoid basing decisions on speculative forecasts that lack empirical support. Ultimately, while AI x-risk forecasting may serve academic purposes, its application in public policy requires a more robust foundation to ensure informed and legitimate governance.
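
The width of that range also matters mechanically: the single number a policymaker ends up with depends heavily on how the individual forecasts are pooled. The sketch below uses hypothetical estimates (not figures from the article or the tournament) to show two common aggregation rules, the arithmetic mean of probabilities and the geometric mean of odds, disagreeing by more than a factor of six on the same inputs.

```python
import math

# Hypothetical expert estimates of P(extinction from AI by 2100), spanning
# several orders of magnitude, loosely mirroring the kind of spread the
# tournament reported. These numbers are illustrative only.
estimates = [0.0001, 0.001, 0.01, 0.05, 0.10, 0.30]

def arithmetic_mean(ps):
    return sum(ps) / len(ps)

def geometric_mean_of_odds(ps):
    # Pool in odds space, then convert the pooled odds back to a probability.
    log_odds = [math.log(p / (1 - p)) for p in ps]
    pooled_odds = math.exp(sum(log_odds) / len(log_odds))
    return pooled_odds / (1 + pooled_odds)

print(f"arithmetic mean of probabilities: {arithmetic_mean(estimates):.3f}")     # ~0.077
print(f"geometric mean of odds:           {geometric_mean_of_odds(estimates):.3f}")  # ~0.012
```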

9 comments
By @DennisP - 4 months
This article says that we can't trust the estimates of p(doom), therefore we should take no action. But it assumes that "no action" means spending many billions of dollars to develop advanced AI.

But why is that our default? I could just as well say we can't trust the estimates of p(survival), therefore we should fall back on a default action of not developing advanced AI.

By @trott - 4 months
A lot of people are worried about aligning superintelligent, self-improving AI. But I think it will be easier than aligning current AI, for the same reason that it's easier to explain what you want to a human than it is to train a dog.

I posted my specific proposal here: https://olegtrott.substack.com/p/recursion-in-ai-is-scary-bu...

Unlike previous ideas, it's implementable (once we have AGI-level language models) and works around the fact that much data on the Internet is false. I should probably call it Volition Extrapolated by Truthful Language Models.

By @TideAd - 4 months
The authors are basically asking for the alignment problem to be well-defined and easy to model. I sympathize. Unfortunately the alignment problem is famously difficult to conceptualize in its entirety. It’s like 20 different, difficult, counterintuitive subproblems, and it’s the combined weight of all of them that makes up the risk. Of course the probabilities are all over the place. It’ll remain tricky to model right up until we make a superintelligence, and if we don’t get that right, it’ll be way too late for government policy to help.

By @greenthrow - 4 months
The purpose of hyping up the existential risks of purely theoretical AGI is to distract us from the actual problems involved with LLMs today and the much more likely problems they will cause soon.

By @tim333 - 4 months
At this point, while we can still pull the plug, I think governments should just keep an eye on what's going on. And maybe avoid connecting unstable AI networks to nuclear missile systems, to reduce the risk of Terminator 2.

I'm not sure about when the robots can build better robots and their own power sources - maybe worry about that when the time comes.

By @jahewson - 4 months
It’s really nice to see data being used to back up an argument in this space. There’s too much sci-fi out there.

By @aoeusnth1 - 4 months
We can’t rely on any estimates for p(the plane crashes), therefore everyone should get in a plane.

By @amelius - 4 months
A nonzero probability should be enough because the consequences are infinitely bad.
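
A minimal sketch of the expected-value arithmetic behind this comment (all figures hypothetical): once the assumed loss is unbounded, any nonzero probability makes the expected loss exceed any finite prevention cost, so the comparison is decided by the assumed magnitude rather than by the probability estimate.

```python
def expected_loss(p: float, loss: float) -> float:
    # Expected loss from an event with probability p and loss magnitude `loss`.
    return p * loss

prevention_cost = 1e11          # hypothetical: $100B spent on mitigation
loss_if_doom = float("inf")     # "infinitely bad" consequences, as the comment assumes

for p in (1e-9, 1e-6, 1e-3):
    # Prints True for every p > 0, regardless of how p was estimated.
    print(p, expected_loss(p, loss_if_doom) > prevention_cost)
```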