OpenAI illegally barred staff from airing safety risks, whistleblowers say
Whistleblowers at OpenAI allege the company restricted employees from reporting safety risks, leading to a complaint to the SEC. OpenAI made changes in response, but concerns persist over employees' rights.
Whistleblowers at OpenAI have raised concerns about the company's alleged illegal practice of preventing employees from disclosing safety risks associated with its technology. The whistleblowers filed a complaint with the Securities and Exchange Commission, claiming that OpenAI enforced overly restrictive agreements that hindered employees from raising concerns to federal regulators. These agreements reportedly required employees to waive their rights to whistleblower compensation and to seek prior consent before disclosing information to authorities. OpenAI's spokesperson stated that changes have been made to address these concerns. The whistleblowers' letter highlights the potential chilling effect on employees' rights to report violations and calls for regulatory action to ensure compliance with federal laws. The SEC has acknowledged the complaint, but it remains unclear whether an investigation has been initiated. The whistleblowers urge swift action to address these agreements, emphasizing the role employees play in safeguarding against potential dangers posed by AI technology.
Related
A Hacker Stole OpenAI Secrets, Raising Fears That China Could, Too
A hacker breached OpenAI's internal messaging systems, accessing A.I. technology details but not code. The incident raised concerns about foreign theft. OpenAI responded by enhancing security measures and exploring regulatory frameworks.
A Hacker Stole OpenAI Secrets, Raising Fears That China Could, Too
A hacker breached OpenAI's internal messaging systems, accessing discussions on A.I. tech. No code was compromised. The incident sparked internal debates on security and A.I. risks amid global competition.
OpenAI was hacked; year-old breach wasn't reported to the public
Hackers breached OpenAI's internal messaging systems, exposing AI technology details, raising national security concerns. OpenAI enhanced security measures, dismissed a manager, and established a Safety and Security Committee to address the breach.
Ex-OpenAI staff call for "right to warn" about AI risks without retaliation
A group of former AI experts advocate for AI companies to allow employees to voice concerns without retaliation. They emphasize AI risks like inequality and loss of control, calling for transparency and a "right to warn."
Whistleblowers accuse OpenAI of 'illegally restrictive' NDAs
Whistleblowers accuse OpenAI of illegal communication restrictions with regulators, including inhibiting reporting of violations and waiving whistleblower rights. OpenAI has not responded. Senator Grassley's office confirms the letter, emphasizing whistleblower protection. CEO Altman acknowledges the need for policy revisions amid ongoing transparency and accountability debates in the AI industry.
What I am more interested in is the constant emphasis on "safety risks" without anything that feels tangible to me so far. I believe there is indeed risk in using models that could be biased, but I don't believe that is a new problem. I still don't think we are at risk from a runaway AGI that is going to destroy us.
How can corporate communication always play that card: "Our policy on X is very good, and we believe X is very important, so that's why we're making changes right now to things that were blatantly in contradiction with X, that we wouldn't have made if this didn't make it to the press"?
(Note: I am personally a pretty anti-AI person, honestly. I went to the article hoping to get more ammunition, wondering what "safety risks" the employees were worried about; I was disappointed not to get info on that, so it's unclear if it was a real thing or not.)
Who drafted this employee agreement? Did not a single OpenAI employee think to have it reviewed by a lawyer before signing? Or perhaps someone did but was too frightened to tell OpenAI, "Your employee agreement needs to be fixed."
This company continues to come across as naive and amateur. Perhaps because it was never intended to become a commercial entity.
The other day I saw a Tesla with the license plate "OPENAI". No doubt there were also license plates that said "FTX".
> Take a job at a hedge fund.
> Get handed an employment agreement on the first day that says “you agree not to disclose any of our secrets unless required by law.”
> Sign. Take the agreement home with you.
> Circle that sentence in red marker, write “$$$$$!!!!!” next to it and send it to the SEC.
> The SEC extracts a $10 million fine.
> They give you $3 million.
> You can keep your job! Why not; it’s illegal to retaliate against whistleblowers. Or, you know, get a new one and do it again.
https://www.bloomberg.com/opinion/articles/2024-07-15/openai... (archive: https://archive.ph/SWLh0)