August 30th, 2024

OpenAI and Anthropic will share their models with the US government

OpenAI and Anthropic have partnered with the U.S. AI Safety Institute for pre-release testing of AI models, addressing safety and ethical concerns amid increasing commercialization and scrutiny in the AI industry.

OpenAI and Anthropic have reached an agreement with the U.S. AI Safety Institute to allow testing and evaluation of their new AI models before and after public release. This collaboration comes amid growing concerns regarding safety and ethics in the AI industry, particularly as it becomes increasingly commercialized. The U.S. AI Safety Institute, part of the National Institute of Standards and Technology, was established following an executive order from the Biden administration aimed at enhancing safety assessments and addressing equity and civil rights in AI. OpenAI's CEO, Sam Altman, expressed support for the agreement, highlighting the importance of pre-release testing. The partnership aims to facilitate research on evaluating AI capabilities and safety risks, as well as developing methods to mitigate these risks. Both companies have faced scrutiny over their rapid advancements and the potential lack of oversight in the AI sector. Recent developments include OpenAI's plans to raise significant funding and California's legislative efforts to implement mandatory safety testing for certain AI models. The agreement is seen as a step towards responsible AI development and addressing the ethical concerns raised by industry experts.

- OpenAI and Anthropic will allow the U.S. AI Safety Institute to test their models before public release.

- The collaboration aims to enhance safety assessments and address ethical concerns in AI.

- The U.S. AI Safety Institute was established following an executive order from the Biden administration.

- OpenAI is reportedly seeking to raise funding that could value the company at over $100 billion.

- California lawmakers are considering mandatory safety testing for certain AI models.

Related

Anthropic CEO on Being an Underdog, AI Safety, and Economic Inequality

Anthropic's CEO, Dario Amodei, discusses AI progress, safety, and economic inequality. The company's most advanced model, Claude 3.5 Sonnet, competes with OpenAI's offerings, and Anthropic emphasizes its public-benefit mission and layered safety measures. Amodei also weighs in on government regulation and funding for AI development.

OpenAI promised to make its AI safe. Employees say it 'failed' its first test

OpenAI faces criticism for rushing safety testing of its GPT-4 Omni model, which employees say signals a shift toward prioritizing profit over safety. The episode raises concerns about the effectiveness of self-regulation and the reliance on voluntary commitments to mitigate AI risk. Leadership changes reflect ongoing safety challenges.

AI companies promised to self-regulate one year ago. What's changed?

A year after AI companies including Amazon, Google, and Microsoft made voluntary commitments to the White House on safe AI development, progress includes red-teaming exercises, watermarking, collaboration with outside experts, and information sharing, but transparency and accountability remain limited. Encryption and bug bounty programs have strengthened security, yet independent verification and further action are still needed to establish AI safety and trust.

Biden Administration Announces New AI Actions, Receives Major Voluntary AI Commitment

The Biden-Harris Administration announced new AI actions, including a commitment from Apple, safety guidelines, a government AI Talent Surge, and nearly $100 million in funding to promote responsible AI innovation.

US lawmakers send a letter to OpenAI requesting government access

US lawmakers have urged OpenAI to enhance safety standards, allocate resources for AI safety research, and allow pre-deployment testing, following whistleblower allegations and concerns about AI risks and accountability.

1 comment
By @andriesm - 8 months
"requiring new safety assessments, equity and civil rights guidance"

I think everyone is probably on board with safety and civil rights, but why is EQUITY a specific requirement the government wants to push into all AIs?

Why is this OK, considering that equity, as opposed to equality, is a term with a very specific academic and political definition? It is quite divisive, almost certainly not something that most people will agree on, and one that I personally find hugely sinister.

A widely used definition of "Equity" from the first page of Google:

"Whereas equality means providing the same to all, equity means recognizing that we do not all start from the same place and must acknowledge and make adjustments to imbalances."

Basically it means trying to play favorites in order to compensate for unfairness in society instead of treating everyone equally.

Seems pretty disgusting to me, although I'm sure many people think the opposite.

What should be clear is that this is NOT a universally accepted value shared by all, so why is it a cornerstone of AI evaluation pushed by the government?