July 13th, 2024

Ex-OpenAI staff call for "right to warn" about AI risks without retaliation

A group of former OpenAI and Google DeepMind employees is advocating for AI companies to let staff voice concerns without retaliation. They emphasize AI risks such as entrenched inequality and the loss of control over autonomous systems, calling for transparency and a "right to warn."

Read original article

A group of former OpenAI and Google DeepMind employees has published an open letter advocating that AI companies allow employees to raise concerns about AI risks without facing retaliation. The letter emphasizes potential harms of AI, such as the entrenchment of existing inequalities and the loss of control over autonomous systems, and calls for greater transparency and for obligations on AI companies to share information with governments and civil society. It outlines four principles for AI companies to commit to, including not enforcing agreements that prohibit criticism and fostering a culture of open criticism. The call for a "right to warn" follows concerns about OpenAI's restrictive non-disclosure agreements for departing employees. The letter has garnered support from prominent AI experts, underscoring the need for transparency, oversight, and protection for employees who speak out about potential AI risks.

Related

AI Companies Need to Be Regulated: Open Letter

An open letter from MacStories to the U.S. Congress and the European Parliament calls for AI companies to be regulated over concerns about unethical practices, stressing the need for transparency and protections for content creators.

A Hacker Stole OpenAI Secrets, Raising Fears That China Could, Too

A hacker breached OpenAI's internal messaging systems and accessed details about its A.I. technology, though not the code itself. The incident raised fears of theft by foreign actors; OpenAI responded by strengthening security measures and exploring regulatory frameworks.

A Hacker Stole OpenAI Secrets, Raising Fears That China Could, Too

A hacker breached OpenAI's internal messaging systems and accessed details about its A.I. technology, though not the code itself. The breach raised national security concerns, prompting internal debate over security practices and calls for tighter controls on A.I. labs.

A Hacker Stole OpenAI Secrets, Raising Fears That China Could, Too

A hacker breached OpenAI's internal messaging systems and accessed employee discussions of its A.I. technology; no code was compromised. The incident sparked internal debate over security and A.I. risks amid global competition.

OpenAI promised to make its AI safe. Employees say it 'failed' its first test

OpenAI faces criticism from employees who say it rushed safety testing of its GPT-4o model, signaling a shift toward prioritizing products over safety. The episode raises doubts about the effectiveness of self-regulation and about relying on voluntary commitments to mitigate AI risk, while leadership changes reflect ongoing safety challenges.
