July 16th, 2024

OpenAI illegally barred staff from airing safety risks, whistleblowers say

Whistleblowers at OpenAI allege the company restricted employees from reporting safety risks, leading to a complaint to the SEC. OpenAI made changes in response, but concerns persist over employees' rights.


Whistleblowers at OpenAI have filed a complaint with the Securities and Exchange Commission, alleging that the company illegally prevented employees from disclosing safety risks associated with its technology. According to the complaint, OpenAI enforced overly restrictive agreements that hindered employees from raising concerns with federal regulators: the agreements reportedly required employees to waive their rights to whistleblower compensation and to seek the company's prior consent before disclosing information to authorities. An OpenAI spokesperson stated that changes have been made to address these concerns. The whistleblowers' letter highlights the potential chilling effect on employees' rights to report violations and calls for regulatory action to ensure compliance with federal law. The SEC has acknowledged the complaint, but it remains unclear whether an investigation has been opened. The whistleblowers urge swift action against these agreements, emphasizing employees' role in safeguarding against the potential dangers posed by AI technology.

Related

A Hacker Stole OpenAI Secrets, Raising Fears That China Could, Too

A hacker breached OpenAI's internal messaging systems, accessing A.I. technology details but not code. The incident raised concerns about foreign theft. OpenAI responded by enhancing security measures and exploring regulatory frameworks.

A Hacker Stole OpenAI Secrets, Raising Fears That China Could, Too

A hacker breached OpenAI's internal messaging systems, accessing discussions on A.I. tech. No code was compromised. The incident sparked internal debates on security and A.I. risks amid global competition.

OpenAI was hacked; year-old breach wasn't reported to the public

Hackers breached OpenAI's internal messaging systems, exposing AI technology details, raising national security concerns. OpenAI enhanced security measures, dismissed a manager, and established a Safety and Security Committee to address the breach.

Ex-OpenAI staff call for "right to warn" about AI risks without retaliation

A group of former employees of AI companies advocates for allowing staff to voice concerns about AI risks without retaliation. They emphasize risks like inequality and loss of control, calling for transparency and a "right to warn."

Whistleblowers accuse OpenAI of 'illegally restrictive' NDAs

Whistleblowers accuse OpenAI of illegally restricting employees' communications with regulators, including inhibiting the reporting of violations and requiring waivers of whistleblower rights. OpenAI has not responded. Senator Grassley's office confirms receipt of the letter, emphasizing whistleblower protection. CEO Sam Altman acknowledges the need for policy revisions amid ongoing debates over transparency and accountability in the AI industry.

22 comments
By @infecto - 4 months
These agreements will most likely be ironed out.

What I'm more interested in is the constant pressure around "safety risks" without anything that feels tangible to me so far. I believe there is indeed risk in using models that could be biased, but I don't believe that is a new problem. I still don't think we are at risk from a runaway AGI that is going to destroy us.

By @_fat_santa - 4 months
I'm really, really not a fan of the constant talk about "safety". My issue is that it never actually points to anything tangible; anytime I read about safety, it's used in a roundabout, generic way. There's so much handwaving about the issue, but every time I've tried to dig into just what the hell "safety" means, it either refers back to itself (i.e., "AI safety is about safety") or makes some vague reference to an LLM telling a mean joke.
By @sirolimus - 4 months
I'm using Mistral now; OpenAI is a dying corporation in my opinion. All AI will be, and should be, open source and run at home.
By @neilv - 4 months
Recent: OpenAI whistleblowers ask SEC to investigate alleged restrictive NDAs (reuters.com) | 76 points by JumpCrisscross 2 days ago | 17 comments | https://news.ycombinator.com/item?id=40959851
By @Ragnarork - 4 months
> In a statement, Hannah Wong, a spokesperson for OpenAI said, “Our whistleblower policy protects employees’ rights to make protected disclosures. Additionally, we believe rigorous debate about this technology is essential and have already made important changes to our departure process to remove nondisparagement terms.”

How can corporate communications always play that card: "Our policy on X is very good, and we believe X is very important, so that's why we're making changes right now to things that were blatantly in contradiction with X, changes we wouldn't have made if this hadn't made it to the press"?

By @jrochkind1 - 4 months
The headline made me think specific "safety risks" would be mentioned, but there do not seem to be any. I am not sure what justifies the phrase "safety risks", "safety" especially. It makes us think of, like, safety to humanity from AI or whatever, but the thing the article focuses on most is SEC-related; I'm not sure whether the word "safety" is meant to refer to securities/financial matters or to something else.

(Note: I am personally a pretty anti-AI person. Honestly, I went to the article hoping to get more ammunition, wondering what the "safety risks" employees were worried about were, and was disappointed not to get info on that; unclear if it was a thing or not.)

By @keepamovin - 4 months
Moves and countermoves. I like the brief consideration raised by this post about all the ways that a shiny, new successful company may be attacked by its competitors surreptitiously, through the media, using lawfare by proxy, and so on. Such unadmirable deviousness!
By @batmansmk - 4 months
I'm starting to feel this is all marketing. Pretend it's dangerous, so as to imply it's beyond what we imagine. Because on our end, in the day-to-day reality of a B2B product, finding use cases for the limited OpenAI models we have access to is far from trivial.
By @z3sRzPP3 - 4 months
Very short article, still concerning, but damn I'd like more info.
By @1vuio0pswjnm7 - 4 months
"OpenAI made staff sign employee agreements that required them to waive their federal rights to whistleblower compensation, the letter said. These agreements also required OpenAI staff to get prior consent from the company if they wished to disclose information to federal authorities. OpenAI did not create exemptions in its employee nondisparagement clauses for disclosing securities violations to the SEC."

Who drafted this employee agreement? Did not a single OpenAI employee think to have it reviewed by a lawyer before signing? Or perhaps someone did but was too frightened to tell OpenAI, "Your employee agreement needs to be fixed."

This company continues to come across as naive and amateurish. Perhaps because it was never intended to become a commercial entity.

The other day I saw a Tesla with the license plate "OPENAI". No doubt there were also license plates that said "FTX".

By @mrcwinn - 4 months
Fear not: the guy who’s daily driving a Koenigsegg supercar has your best interests at heart.
By @farceSpherule - 4 months
OMG! I am totally shocked that an arrogant egomaniac did something like this. Shocked, I tell you.
By @helsinkiandrew - 4 months
Matt Levine pitched this as a great idea for a lucrative job in his Bloomberg column (where "hedge fund" can be replaced with any company):

> Take a job at a hedge fund.

> Get handed an employment agreement on the first day that says “you agree not to disclose any of our secrets unless required by law.”

> Sign. Take the agreement home with you.

> Circle that sentence in red marker, write “$$$$$!!!!!” next to it and send it to the SEC.

> The SEC extracts a $10 million fine.

> They give you $3 million.

> You can keep your job! Why not; it’s illegal to retaliate against whistleblowers. Or, you know, get a new one and do it again.

https://www.bloomberg.com/opinion/articles/2024-07-15/openai... (archive: https://archive.ph/SWLh0)

By @say_it_as_it_is - 4 months
What kind of people are gravitating toward AI safety roles -- activists? How does an organization avoid activists mucking up its models when a disproportionate number of applicants are activists? I guess the answer is to actively recruit for the position rather than accept applications.