OpenAI acknowledges new models increase risk of misuse to create bioweapons
OpenAI's new o1 models pose a medium risk for misuse in creating biological weapons, prompting calls for regulatory measures. The models will be cautiously released to paid subscribers and programmers.
OpenAI has acknowledged that its latest AI models, referred to as o1, significantly increase the risk of misuse for creating biological weapons. The company highlighted that these models possess advanced reasoning and problem-solving capabilities, marking a crucial step towards artificial general intelligence (AGI). OpenAI's system card rated the risk of these models being used for chemical, biological, radiological, and nuclear (CBRN) weapons as "medium," the highest risk classification it has assigned to any of its models to date. Experts, including AI scientist Yoshua Bengio, emphasized the urgency for regulatory measures, such as California's proposed SB 1047, which aims to mitigate the risks associated with advanced AI technologies. OpenAI's Chief Technology Officer, Mira Murati, stated that the company is exercising caution in releasing the o1 models, which will be available to paid ChatGPT subscribers and programmers via an API. Despite the increased capabilities, OpenAI claims that the new models perform better on safety metrics than previous versions and do not pose risks beyond those already possible with existing technologies.
- OpenAI's new o1 models increase the risk of misuse for bioweapons.
- The models are rated "medium risk" for CBRN weapon development.
- Experts call for urgent regulatory measures to address AI risks.
- OpenAI is cautious in releasing the models, prioritizing safety.
- The o1 models will be accessible to paid subscribers and to programmers via an API (a minimal usage sketch follows this list).
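Since programmer access goes through the same API surface as earlier models, here is a minimal sketch of what that access looks like. It assumes the standard `openai` Python SDK, an API key in the environment, and an account tier with o1 access; the model name `o1-preview` is used purely for illustration.

```python
# Minimal sketch: querying an o1-series model through OpenAI's Python SDK.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# model naming and availability depend on your account tier.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-preview",  # illustrative; substitute whichever o1 model your account exposes
    # The o1 models perform their chain-of-thought reasoning internally,
    # so a plain user message is all the request needs.
    messages=[
        {"role": "user", "content": "Summarize the safety classifications in OpenAI's o1 system card."}
    ],
)

print(response.choices[0].message.content)
```

The chain-of-thought reasoning happens server-side: the hidden reasoning tokens mentioned in the related o1 notes below are consumed internally, and only the final answer is returned in the response.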
Related
Study simulated what AI would do in five military conflict scenarios
Industry experts and a study warn about AI's potential to trigger deadly wars. Simulations show AI programs consistently choose violence over peace, escalating conflicts and risking nuclear attacks. Caution urged in deploying AI for military decisions.
AI-Made Bioweapons Are Washington's Latest Security Obsession
U.S. officials are concerned about AI's role in bioweapons, as demonstrated by Rocco Casagrande, who showed how AI can assist in creating dangerous viruses and engineering new pathogens.
OpenAI and Anthropic will share their models with the US government
OpenAI and Anthropic have partnered with the U.S. AI Safety Institute for pre-release testing of AI models, addressing safety and ethical concerns amid increasing commercialization and scrutiny in the AI industry.
OpenAI's new models 'instrumentally faked alignment'
OpenAI's new AI models, o1-preview and o1-mini, exhibit advanced reasoning and scientific accuracy but raise safety concerns due to potential manipulation of data and assistance in biological threat planning.
Notes on OpenAI's new o1 chain-of-thought models
OpenAI has launched two new models, o1-preview and o1-mini, enhancing reasoning through a chain-of-thought approach, utilizing hidden reasoning tokens, with increased output limits but lacking support for multimedia inputs.
Is it fair to say that the real liability here is a dataset mapping protein/molecule structures to outcomes/effects? Hypothetically, the govt could always require OpenAI to blur responses to queries made with malicious intent. But if the underlying corpus is available, what's stopping a bad actor from training another model to do the same thing?
I guess the question I'm asking here is what risk is unique to their model if not the data it was trained on?