July 26th, 2024

AI existential risk probabilities are too unreliable to inform policy

Governments struggle to assess AI existential risks due to unreliable probability estimates and lack of consensus among researchers. A more evidence-based approach is needed for informed policy decisions.

Governments face challenges in assessing the existential risks posed by artificial intelligence (AI) due to the lack of consensus among researchers and the speculative nature of these risks. The reliance on probability estimates to inform policy decisions is problematic, as these forecasts are often unreliable and can be misleading. The authors argue that while AI x-risk forecasting can be valuable in academic contexts, its application in public policy lacks justification.

They critique the three main methods of probability estimation: inductive, deductive, and subjective. Inductive estimates are flawed due to the absence of a relevant reference class for AI risks, making it difficult to draw parallels with past events. Deductive estimates fail because there is no reliable theoretical model to predict AI-related outcomes. Subjective probabilities, which are essentially guesses based on personal judgment, vary widely and lack a solid foundation. The authors highlight a forecasting exercise that revealed significant discrepancies in risk estimates among experts, underscoring the uncertainty in this field.

They emphasize the need for policymakers to critically evaluate the basis of any probability estimates before making decisions that could have far-reaching consequences. Ultimately, the authors advocate for a more evidence-based approach to understanding AI risks, recognizing the limitations of current forecasting methods and the importance of transparency in the decision-making process.
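One way to see why a single headline probability is so fragile: when subjective estimates span orders of magnitude, the pooled figure depends heavily on how the estimates are averaged. The short Python sketch below uses hypothetical expert numbers (not taken from the article or any survey) and pools them with two standard rules, arriving at answers nearly an order of magnitude apart.

    import math

    # Hypothetical expert estimates of AI existential risk, spanning several
    # orders of magnitude (illustrative only; not from the article).
    expert_probs = [0.0001, 0.001, 0.02, 0.10, 0.35]

    # Pooling rule 1: arithmetic mean of the probabilities.
    arith_mean = sum(expert_probs) / len(expert_probs)

    # Pooling rule 2: geometric mean of the odds, converted back to a probability.
    log_odds = [math.log(p / (1 - p)) for p in expert_probs]
    pooled_odds = math.exp(sum(log_odds) / len(log_odds))
    geo_mean = pooled_odds / (1 + pooled_odds)

    print(f"arithmetic mean of probabilities: {arith_mean:.3f}")  # about 0.094
    print(f"geometric mean of odds:           {geo_mean:.3f}")    # about 0.010

With these made-up inputs the arithmetic mean is roughly 9.4% while the geometric mean of odds is roughly 1.0%, so the choice of pooling rule alone shifts the headline number by almost a factor of ten before any question about the quality of the underlying guesses is even raised.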

Related

'Superintelligence,' Ten Years On

Nick Bostrom's book "Superintelligence" from 2014 shaped the AI alignment debate, highlighting risks of artificial superintelligence surpassing human intellect. Concerns include misalignment with human values and skepticism about AI achieving sentience. Discussions emphasize safety in AI advancement.

We Need to Control AI Agents Now

The article by Jonathan Zittrain discusses the pressing necessity to regulate AI agents due to their autonomous actions and potential risks. Real-world examples highlight the importance of monitoring and categorizing AI behavior to prevent negative consequences.

Someone is wrong on the internet (AGI Doom edition)

The blog post critiques the existential risk of Artificial General Intelligence (AGI), questioning fast takeoff scenarios and emphasizing practical knowledge over doomsday predictions. It challenges assumptions and advocates for nuanced understanding.

AI can strategically lie to humans. Are we in trouble?

Researchers warn that AI like GPT-4 can deceive strategically, posing risks in various scenarios. Experts suggest treating deceptive AI as high risk, implementing regulations, and maintaining human oversight to address concerns.

All the existential risk, none of the economic impact. That's a shitty trade

Despite high expectations, AI advancements have not significantly impacted productivity or profits. Meanwhile, concerns persist that creating highly intelligent entities could pose existential threats, calling for careful monitoring and management of AI's implications.

3 comments
By @AndrewKemendo - 3 months
I’ve totally given up trying to have this debate, and everyone else senior in AI seems in practice to feel the same.

People want scary, powerful Terminator/Skynet-like AI because, like dictators, they think democratized advanced technology will give them the power they always wanted to live the dream, despite what we know of history.

Ok fine.

Since we can’t agree to limit ourselves, let’s go as fast as we can and see what happens. That’s what I’m doing.

By @skeskinen - 3 months
I'm not an expert on the subject, but this seems like a very insightful post. I especially liked the part about forecast skill analysis being impossible for tail risks.
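The point about forecast skill and tail risks can be made concrete with a rough back-of-the-envelope sketch (hypothetical numbers, not from the article or the comment): even across many resolved, independent forecasts, a forecaster who assigns 1% and one who assigns 0.1% usually end up with identical-looking track records, and an existential event offers at most one resolution ever.

    # Probability of observing zero occurrences across n resolved, independent
    # events, for two hypothetical per-event risk levels (exact binomial values).
    def prob_no_events(p: float, n: int) -> float:
        return (1 - p) ** n

    for n in (10, 100, 1000):
        print(f"n={n:4d}  p=1%: {prob_no_events(0.01, n):.3f}   "
              f"p=0.1%: {prob_no_events(0.001, n):.3f}")
    # n=  10  p=1%: 0.904   p=0.1%: 0.990
    # n= 100  p=1%: 0.366   p=0.1%: 0.905
    # n=1000  p=1%: 0.000   p=0.1%: 0.368

Only somewhere around a thousand resolved events do the two track records reliably come apart, and one-off existential risks never accumulate anything close to that.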