July 12th, 2024

OpenAI promised to make its AI safe. Employees say it 'failed' its first test

OpenAI faces criticism for rushing the safety testing of its GPT-4 Omni model, a move critics read as a shift toward profit over safety. The episode raises concerns about the effectiveness of self-regulation and about relying on voluntary commitments to mitigate AI risk. Leadership changes reflect ongoing safety challenges.

OpenAI employees say the company failed its first test of its own safety commitments by rushing the safety evaluation of the GPT-4 Omni model. In their view, the episode reflects a cultural shift at OpenAI toward prioritizing commercial interests over public safety, at odds with its nonprofit origins, and it raises broader questions about the effectiveness of self-regulation by tech companies and the government's reliance on voluntary commitments to guard against AI risks. Despite internal complaints and resignations, OpenAI defended its safety process and its commitment to thorough testing, pointing to its preparedness initiative, which aims to address catastrophic risks from advanced AI systems through evidence-based work. Leadership changes and internal restructuring reflect the company's ongoing struggle to balance innovation with safety protocols, and the incident underscores how complex it is to ensure AI safety and how much testing procedures need continuous improvement to mitigate potential harms.

7 comments
By @benjismith - 3 months
It doesn't sound like they "failed" any actual safety test, but rather that they rushed their safety tests, thereby "failing" (in the eyes of many people) to conduct sufficiently rigorous tests.

Now that the 4o model has been out in the wild for 2 months, have there been any claims of serious safety failures? The article doesn't seem to imply any such thing.

By @southernplaces7 - 3 months
If anything, the entire narrative around AI "safety" is essentially a propaganda win for OpenAI and other major players. It lets them keep pandering to legislators in the push for "safety regulations" that are in reality an attempt to seal off corporate AI behind a competition-killing walled garden of new laws.

Current AI is nowhere near anything resembling an all-consuming AGI monster. The reality is so far from this that it's laughable. Apart from the sheer scale at which visual and text sludge can now be produced, the uses of current AI are not much different from the human-created spam and visual sludge that, until recently, was made mostly by minimally paid third-world content mill writers.

I'd love to read a specifically enumerated list of other real dangers.

By @daft_pink - 3 months
It’s a chatbot. Why are we being so dramatic about safety?

By @dingosity - 3 months
"The previously unreported incident sheds light on the changing culture at OpenAI, where company leaders including CEO Sam Altman have been accused of prioritizing commercial interests over public safety..."

I don't know how true this is, but the idea that a commercial entity in the modern era would prioritize public safety over commercial interests is pretty laughable (thinking of Boeing and Waymo most recently).

By @bongodongobob - 3 months
Making AI safe is impossible and stupid. If you want chatbots, use some NLP and a state machine.
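
For what it's worth, here is a minimal sketch of the deterministic alternative this comment gestures at: a chatbot built as a plain keyword-driven state machine, with no model in the loop. All of the states, prompts, and keywords below are hypothetical, invented purely to illustrate the idea; they come from neither the article nor the thread.

```python
# Minimal state-machine chatbot sketch: deterministic keyword-driven
# transitions, no language model involved. All names are illustrative.

STATES = {
    "start": {
        "prompt": "Hi! Do you want to check an order or file a return?",
        "transitions": {"order": "order", "return": "return"},
    },
    "order": {
        "prompt": "Please enter your order number.",
        "transitions": {},  # terminal state in this sketch
    },
    "return": {
        "prompt": "Please enter the item you want to return.",
        "transitions": {},  # terminal state in this sketch
    },
}


def step(state: str, user_input: str) -> str:
    """Move to the next state based on keywords found in the user's input."""
    for keyword, next_state in STATES[state]["transitions"].items():
        if keyword in user_input.lower():
            return next_state
    return state  # unrecognized input: stay put and re-prompt


if __name__ == "__main__":
    state = "start"
    while True:
        print(STATES[state]["prompt"])
        if not STATES[state]["transitions"]:
            break  # reached a terminal state
        state = step(state, input("> "))
```

The appeal of this design, as the comment implies, is that its behavior is fully enumerable and auditable; the trade-off is that it handles nothing outside the transitions you wrote.
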
By @olgeni - 3 months
This is funny because it says "President Biden’s strategy" as if they actually believed it :D