US lawmakers send a letter to OpenAI requesting government access
US lawmakers have urged OpenAI to enhance safety standards, allocate resources for AI safety research, and allow pre-deployment testing, following whistleblower allegations and concerns about AI risks and accountability.
US lawmakers, including Senate Democrats and one independent, have sent a letter to OpenAI CEO Sam Altman addressing concerns about the company's safety standards and its treatment of whistleblowers. The letter, reported by The Washington Post, asks OpenAI to commit to making its next foundation model available for pre-deployment testing by U.S. government agencies. Lawmakers also seek assurances that OpenAI will allocate 20% of its computing resources to AI safety research and implement measures to prevent theft of its AI products by malicious actors or foreign adversaries.

This scrutiny follows whistleblower allegations of inadequate safety protocols for the GPT-4 Omni model and claims of retaliation against employees who raised safety concerns. Amid these issues, Microsoft and Apple recently withdrew from OpenAI's board, despite Microsoft's significant investment in the company.

Additionally, former OpenAI employee William Saunders expressed concern about the potential existential risks posed by the company's future AI developments, comparing its trajectory to the Titanic disaster. He emphasized the right of AI-sector employees to alert the public to dangers associated with rapid advances in artificial intelligence. The situation highlights ongoing regulatory challenges facing OpenAI and the broader AI industry amid increasing calls for accountability and transparency.
Related
Former OpenAI employee quit to avoid 'working for the Titanic of AI'
A former OpenAI employee raised concerns about the company's direction, likening it to the Titanic. Departures, lawsuits, and founding rival companies highlight challenges in balancing innovation and safety in AI development.
OpenAI promised to make its AI safe. Employees say it 'failed' its first test
OpenAI faces criticism for failing its first safety test on the GPT-4 Omni model, which employees say signals a shift toward profit over safety. Concerns are raised about the effectiveness of self-regulation and reliance on voluntary commitments for AI risk mitigation. Leadership changes reflect ongoing safety challenges.
Ex-OpenAI staff call for "right to warn" about AI risks without retaliation
A group of former OpenAI staff and other AI experts advocates for AI companies to allow employees to voice concerns without retaliation. They emphasize AI risks such as inequality and loss of control, calling for transparency and a "right to warn."
Whistleblowers accuse OpenAI of 'illegally restrictive' NDAs
Whistleblowers accuse OpenAI of illegally restricting communications with regulators, including inhibiting the reporting of violations and requiring employees to waive whistleblower rights. OpenAI has not responded. Senator Grassley's office confirms the letter, emphasizing whistleblower protection. CEO Altman acknowledges the need for policy revisions amid ongoing transparency and accountability debates in the AI industry.
OpenAI illegally barred staff from airing safety risks, whistleblowers say
Whistleblowers at OpenAI allege the company restricted employees from reporting safety risks, leading to a complaint to the SEC. OpenAI made changes in response, but concerns persist over employees' rights.