September 13th, 2024

OpenAI acknowledges new models increase risk of misuse to create bioweapons

OpenAI's new o1 models pose a medium risk for misuse in creating biological weapons, prompting calls for regulatory measures. The models will be cautiously released to paid subscribers and programmers.


OpenAI has acknowledged that its latest AI models, referred to as o1, significantly increase the risk of misuse for creating biological weapons. The company highlighted that these models possess advanced reasoning and problem-solving capabilities, which it describes as a crucial step towards artificial general intelligence (AGI). OpenAI's system card rated the risk of the models being used for chemical, biological, radiological, and nuclear (CBRN) weapons as "medium," the highest risk classification it has assigned to any of its models to date. Experts, including AI scientist Yoshua Bengio, emphasized the urgency of regulatory measures such as California's proposed SB 1047, which aims to mitigate the risks associated with advanced AI technologies. OpenAI's Chief Technology Officer, Mira Murati, said the company is exercising caution in releasing the o1 models, which will be available to paid ChatGPT subscribers and to programmers via an API. Despite the increased capabilities, OpenAI claims that the new models perform better on safety metrics than previous versions and do not pose risks beyond what is already possible with existing technologies.

- OpenAI's new o1 models increase the risk of misuse for bioweapons.

- The models are rated "medium risk" for CBRN weapon development.

- Experts call for urgent regulatory measures to address AI risks.

- OpenAI is cautious in releasing the models, prioritizing safety.

- The o1 models will be accessible to paid subscribers and programmers.

8 comments
By @sgillen - 5 months
This feels like a marketing move to me: claim your model is so powerful that it's dangerous, and people will be more interested in using it.
By @RcouF1uZ4gsC - 5 months
Is the risk higher than with any chemistry or microbiology grad student?
By @bradyriddle - 5 months
Is it just way easier/more accessible to make a bioweapon than I think it is? Serious question. I've seen this pop up a few times.
By @tahoeskibum - 5 months
In other news, Gutenberg's ghost acknowledges that the printing press increases the risk of misuse to create weapons, misinformation...
By @kylehotchkiss - 5 months
Can't we make the crawler just a little smarter, then, so it avoids chemistry websites?
By @rafaelmn - 5 months
Given how shit GPT is at programming, and the amount of training data available in that domain, I highly doubt it would be any more useful in this area than a Google search.
By @bormaj - 5 months
Can't read the article because of the paywall, so this may or may not be related...

Is it fair to say that the real liability here is a dataset mapping protein/molecule structures to outcomes/effects? Hypothetically, the government could always require OpenAI to filter responses to queries with malicious intent. But if the underlying corpus is available, what's stopping a bad actor from training another model to do the same thing?

I guess the question I'm asking here is what risk is unique to their model if not the data it was trained on?

By @goles - 5 months