November 7th, 2024

Anthropic teams up with Palantir and AWS to sell AI to defense customers

Anthropic has partnered with Palantir and AWS to provide Claude AI models to U.S. defense agencies, enhancing data analysis and operational efficiency, while seeking additional funding with a potential $40 billion valuation.

Anthropic has announced a partnership with Palantir and Amazon Web Services (AWS) to provide U.S. intelligence and defense agencies access to its Claude AI models. This collaboration aims to enhance the operational capabilities of defense organizations by integrating Claude into Palantir's platform, which is designed for handling sensitive data under the Defense Department's Impact Level 6 (IL6) classification. The partnership reflects a broader trend of AI vendors seeking contracts with defense agencies, as evidenced by similar moves from companies like Meta and OpenAI. Anthropic's head of sales emphasized the importance of responsible AI solutions in classified environments, stating that the integration will improve data analysis and decision-making processes for defense officials. The company has also expanded its services to AWS’ GovCloud, targeting public-sector clients. Despite the growing interest in AI within government agencies, some sectors, particularly the military, remain cautious about its adoption. Anthropic is reportedly in discussions to raise additional funding, with a valuation potentially reaching $40 billion, and Amazon is its largest investor.

- Anthropic partners with Palantir and AWS to provide AI solutions to U.S. defense agencies.

- Claude AI models will be integrated into Palantir's platform for enhanced data analysis.

- The collaboration aims to improve operational efficiency in classified environments.

- Interest in AI among government agencies is rising, but military adoption remains cautious.

- Anthropic is seeking additional funding, potentially valuing the company at $40 billion.

18 comments
By @elashri - 5 months
It is really interesting that many people seem to refuse to allow AI to insult (or offend) any person, but would be okay with AI taking part in killing them (if they happened to be in the wrong place at the wrong time, of course). I don't share this opinion and would be interested to hear the thought process of people who support it. Aside from the financial aspect (people who actually benefit financially), is it something like "our enemies will use it, so we should too"? And does that reasoning still hold against enemies who don't use it?
By @citruscomputing - 5 months
I highly recommend reading this article about how Israel has been using AI: https://www.972mag.com/lavender-ai-israeli-army-gaza/

It really brought home for me the real, existing harms this type of technology is already doing in the "defense" space.

By @maronato - 5 months
> I cannot assist with planning military operations or analyzing top secret military data, as this could lead to loss of life. I aim to help prevent harm, not cause it.

- Claude, before selling out to Defense

By @lukev - 5 months
I've said it before and I'll say it again... any company that actually cared about AI "safety" or "alignment", or had any belief that we are on the threshold of AGI, should steadfastly refuse to let it be used in any sort of military or intelligence context.

That's literally how you get Skynet, and that's what everyone claims to be worried about, right? Or are they just full of shit?

By @sublimefire - 5 months
This has been done for years already, from ML in rockets to drones that follow targets to facial-recognition computer vision in surveillance systems. I am not sure how much is used in modern fighter jets. The only difference is that public cloud vendors are now going in, but at the same time I doubt Claude will be used to steer rockets; the rate of error is too high.
By @ganoushoreilly - 5 months
It's not surprising given the inroads companies like scale.ai have been making into the DoD. Partnering with Palantir gives some (debatable) credibility with deploying product, etc.

Having worked on one of these projects two years ago, the hand-waving around hallucinations and risks back then was a bit off-putting and at times scary. Hopefully as we deploy these tech stacks we take the time to go slow and steady, working out the edge cases and failures.

By @blibble - 5 months
with consumers showing at best a lack of interest in AI products* (and at worst: total aversion), I suppose you have to sell it to someone...

* https://www.theverge.com/2024/11/7/24290268/microsoft-copilo...

By @youoy - 5 months
It has always been easier to refuse to do things when you don't have the option to do them, or when it doesn't make any difference, than when you have the option and the financial interests are in place. See, for example, "safely aligned" Anthropic [0] and "non-profit" OpenAI.

[0] https://www.anthropic.com/news/core-views-on-ai-safety

> Furthermore, rapid AI progress will be disruptive to society and may trigger competitive races that could lead corporations or nations to deploy untrustworthy AI systems. The results of this could be catastrophic, either because AI systems strategically pursue dangerous goals, or because these systems make more innocent mistakes in high-stakes situations.

By @ChrisArchitect - 5 months
Related:

U.S. military makes first confirmed OpenAI purchase for war-fighting forces

https://news.ycombinator.com/item?id=41999029

By @ChrisArchitect - 5 months
Related:

Meta Permits Its A.I. Models to Be Used for U.S. Military Purposes

https://news.ycombinator.com/item?id=42048009

By @deepsquirrelnet - 5 months
> The Defense Department’s IL6 is reserved for systems containing data that’s deemed critical to national security and requiring “maximum protection” against unauthorized access and tampering. Information in IL6 systems can be up to “secret” level — one step below top secret.

Is the thinking here that they'll use it to read and somehow act (warning systems, notifications) on highly classified information that can't be disseminated? I don't have a good grasp of what this looks like.

By @rglover - 5 months
I love what Anthropic and Dario are doing and from a business perspective this makes perfect sense. But AI is the last thing the military should be touching.

If there's even a half-percent chance that a mistake is made, it could be irreversibly destructive. Doubly so if "trusting the AI" becomes a de facto standard decades down the road. Even scarier is that "the AI told us to do it" is basically a license to cause chaos with zero accountability.

By @megous - 5 months
Fuck every company whose leadership sees acts like the regular killing of whole extended families for a year, even 100-200 people with the same family name at once as recently as a week ago, everyone without distinction, and still decides to sell their shit to the perpetrators.
By @Jaepa - 5 months
So I have some concerns.

There are, of course, the safety and morality of AI in the military, the potential for hallucinations, environmental concerns, etc. But I'm more worried about the ability to defer accountability for terrible acts to a software bug.

By @oksurewhynot - 5 months
Sorry, is this the same company whose job application mentions safety/alignment/ethics like 20 times and asks how applicants will uphold those principles?
By @mannycalavera42 - 5 months
And there goes my intention to subscribe to Claude.
By @solarkraft - 5 months
So much for the "good guys of AI" reputation.
By @historian101 - 5 months
For the record, a similar move by Anthropic was predicted by several people just four months ago and vehemently denied:

https://news.ycombinator.com/item?id=40802334

Yet another "conspiracy theory" came true.