June 26th, 2024

Anthropic: Expanding Access to Claude for Government

Anthropic expands access to its Claude 3 Haiku and Claude 3 Sonnet models for government users via the AWS Marketplace, emphasizing responsible AI deployment and tailored service agreements to enhance citizen services and policymaking.

Anthropic, a company focused on building reliable AI systems, is expanding access to its AI models Claude 3 Haiku and Claude 3 Sonnet for government users through Amazon Web Services (AWS). These models are now available in the AWS Marketplace for the US Intelligence Community and in AWS GovCloud. The applications of Claude for government agencies include improving citizen services, streamlining document review, enhancing policymaking with data-driven insights, and creating realistic training scenarios. The company is adapting its service agreements to meet the unique needs of government users, including crafting contractual exceptions to enable beneficial uses by selected government agencies. Anthropic emphasizes responsible AI deployment and is committed to working with governments to ensure safe and effective AI policies. They have collaborated with organizations like the UK Artificial Intelligence Safety Institute to conduct pre-deployment testing. By making AI tools available to government users, Anthropic aims to transform how elected governments serve their constituents and promote peace and security.

Related

Claude 3.5 Sonnet

Claude 3.5 Sonnet, the latest in the model family, excels in customer support, coding, and humor comprehension. It introduces Artifacts on Claude.ai for real-time interactions, prioritizing safety and privacy. Future plans include Claude 3.5 Haiku and Opus, emphasizing user feedback for continuous improvement.

OpenAI and Anthropic are ignoring robots.txt

Two AI startups, OpenAI and Anthropic, are reported to be disregarding robots.txt rules, scraping web content despite claiming to respect the convention. Analytics from TollBit revealed this behavior, raising concerns about data misuse.
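
For context, robots.txt is a voluntary convention rather than an enforced rule: a compliant crawler fetches the file and checks whether its user agent may access a path before scraping. A minimal sketch of that check, using Python's standard-library urllib.robotparser (the site URL and crawler names are illustrative):

    import urllib.robotparser

    # Fetch and parse the site's robots.txt (illustrative URL).
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    # A well-behaved crawler performs this check before every fetch;
    # skipping it is the behavior the article describes.
    for agent in ("GPTBot", "ClaudeBot"):
        print(agent, rp.can_fetch(agent, "https://example.com/article"))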

Apple Wasn't Interested in AI Partnership with Meta Due to Privacy Concerns

Apple declined an AI partnership with Meta due to privacy concerns, opting for OpenAI's ChatGPT integration into iOS. Apple emphasizes user choice and privacy in AI partnerships, exploring collaborations with Google and Anthropic for diverse AI models.

Hackers 'jailbreak' powerful AI models in global effort to highlight flaws

Hackers exploit vulnerabilities in AI models from OpenAI, Google, and xAI, sharing harmful content. Ethical hackers challenge AI security, prompting the rise of LLM security start-ups amid global regulatory concerns. Collaboration is key to addressing evolving AI threats.

Anthropic: Collaborate with Claude on Projects

Claude.ai introduces Projects feature for Pro and Team users to organize chats, enhance collaboration, and create artifacts like code snippets. North Highland reports productivity gains. Future updates prioritize user-friendly enhancements.

12 comments
By @alach11 - 5 months
There's no doubt that LLMs massively expand the ability of agencies like the NSA to perform large-scale surveillance at a higher quality. I wonder if Anthropic (or other LLM providers) ever push back or restrict these kinds of use cases? Or is that too risky for them?
By @noodlesUK - 5 months
I can imagine that for many government tasks, there would be a need for a reduced-censorship version of the AI model. It's pretty easy to run into the guardrails on ChatGPT and friends when you talk about violence or other spicy topics.

This raises the question of what level of censorship reduction to apply. Should government employees be allowed to, e.g., war-game a mass murder with an AI? What about discussing how to erode civil rights?

By @ryanackley - 5 months
I find all of the virtue signalling from AI companies exhausting.
By @nameless101 - 5 months
So, basically all "confidential" information, if you are a subject "of interest", will be in the cloud and used to train models that can spit it out again. And the models will confabulate stories about you.

They can call themselves "sonnet", "bard", "open", and a whole plethora of other positive things. What remains is that they are heading in the direction of Palantir, and the rest is just marketing.

By @andrepd - 5 months
> Claude offers a wide range of potential applications for government agencies, both in the present and looking toward the future. Government agencies can use Claude to provide improved citizen services, streamline document review and preparation, enhance policymaking with data-driven insights, and create realistic training scenarios. In the near future, AI could assist in disaster response coordination, enhance public health initiatives, or optimize energy grids for sustainability. Used responsibly, AI has the potential to transform how elected governments serve their constituents and promote peace and security.

> For example, we have crafted a set of contractual exceptions to our general Usage Policy that are carefully calibrated to enable beneficial uses by carefully selected government agencies. These allow Claude to be used for legally authorized foreign intelligence analysis, such as combating human trafficking, identifying covert influence or sabotage campaigns, and providing warning in advance of potential military activities, opening a window for diplomacy to prevent or deter them.

Sometimes I wonder if this is cynicism or if they actually drank their own Kool-Aid.

By @tootie - 5 months
Is the announcement just that they're on the AWS Marketplace for GovCloud? Do people ever actually make use of the AWS Marketplace? It just seems like a way to skirt procurement.
By @potwinkle - 5 months
I wonder if they really intend to control the ethics of Sonnet's use in government or if it's just a nice thing to say.
By @bionhoward - 5 months
Meanwhile, the best models with sensible OSI-approved licenses are from China.

What are the security implications if American corpos like Google DeepMind, Microsoft GitHub, Anthropic and “Open”AI have explicitly anticompetitive / noncommercial licenses for greed/fear, so the only models people can use without fear of legal repercussions are Chinese?

Surely, Capitalism wouldn’t lead us to make a tremendous unforced error at societal scale?

Every AI is a sleeper agent risk if nobody has the balls and / or capacity to verify their inputs. Guess who wrote about that? https://arxiv.org/abs/2401.05566

By @danlitt - 5 months
Is there really anyone who thinks this is a good idea? AI systems routinely spit out false information. Why would a system like that be anywhere near a Government?

Perhaps (optimistically) this is just a credibility-grab from Anthropic, with no basis in fact.

By @localfirst - 5 months
Going forward, be very, very wary of inputting sensitive information into Anthropic or OpenAI products, especially if you work for a foreign government or corporation.

Listen to Edward Snowden. This guy is not fucking around.