November 5th, 2024

Meta Permits Its A.I. Models to Be Used for U.S. Military Purposes

Meta has shifted its policy to allow U.S. government agencies and contractors to use its AI models for military purposes, emphasizing responsible use while collaborating with defense firms amid potential scrutiny.

Meta has announced a significant policy shift, allowing its artificial intelligence models, specifically the Llama series, to be used by U.S. government agencies and contractors for military purposes. This marks a departure from its previous stance, which prohibited the use of its technology in military contexts. Meta's president of global affairs, Nick Clegg, emphasized the company's commitment to "responsible and ethical uses" of AI that align with U.S. democratic values and national security interests. The company plans to collaborate with defense contractors such as Lockheed Martin and Booz Allen, as well as tech firms such as Palantir and Anduril.

Meta's open-source approach aims to enhance the safety and efficacy of AI by allowing widespread access to its models, which have been downloaded more than 350 million times. However, the decision may attract scrutiny: military applications of Silicon Valley technology have previously faced backlash from employees at other tech firms, and concerns have arisen over the potential misuse of open-source AI, particularly after reports of its use by entities linked to the Chinese military. Clegg stated that the technology could aid in tracking terrorist activity and improving cybersecurity, asserting that it would help the U.S. maintain its technological advantage globally.

- Meta allows its AI models to be used for U.S. military purposes.

- The company aims to support national security and democratic values.

- Meta collaborates with major defense contractors and tech firms.

- The open-source nature of its AI models has raised concerns about misuse.

- The decision may face scrutiny from within the tech community.

26 comments
By @jmward01 - 4 months
There is a difference between allowing use and supporting a use case. Supporting would be, for example, adding military-specific training to the model. If we were talking about a car or a tractor, nobody would bat an eye. Should companies not sell potatoes to the military? To be clear, I am not a fan of developing for the military, but the idea that a TOS will stop AI from being weaponized is a bit naive. Similarly, if the goal is to responsibly introduce AI to the world, then shaping the military's adoption of it by allowing its use, without specifically supporting it, is probably a good path.
By @paxys - 4 months
Everyone wants "open" software without realizing what that entails. Yes, the US military can use it, as can China and Iran and North Korea. You think any of them gives a shit about Meta's ToS?
By @sfmz - 4 months
Chinese researchers develop AI model for military use on back of Meta's Llama

https://www.reuters.com/technology/artificial-intelligence/c...

By @cloudking - 4 months
Palantir has already been using these models since they came out; you can see Llama 2 models in the second screenshot: https://www.palantir.com/docs/foundry/functions/language-mod...
By @yumraj - 4 months
What’s the problem and why is it newsworthy? Serious question.
By @insane_dreamer - 4 months
I don't like it, but I can sympathize with Meta: they have no choice.

They can't prevent other militaries (e.g., China's) from using it (even if their TOS says so), so the only military they are handicapping is their own (and perhaps those of some EU countries).

The US military will just use OpenAI or Claude instead if Meta disallows them.

By @AlexDragusin - 4 months
Local Llama 3.1 8B

Q: Does this title sound like clickbait? "Meta Permits Its A.I. Models to Be Used for U.S. Military Purposes"

A: Yes, that title sounds like clickbait. Here's why:

1. It's sensational: The word "Permits" implies a significant or surprising decision, which grabs attention.
2. It's vague: The title doesn't provide much context about what kind of military purposes or what specific A.I. models are involved.
3. It's designed to elicit a reaction: The title is likely intended to provoke a strong emotional response, such as surprise, concern, or excitement, rather than simply informing readers about a fact.

A more neutral and informative title might be: "Meta Expands Use of A.I. Models for U.S. Military Applications"

However, it's worth noting that the title still has some implications of significance and interest, and it's not necessarily a straightforward clickbait title.
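
For anyone who wants to reproduce this, here is a minimal sketch of querying a local Llama 3.1 8B model. It assumes the model is served through Ollama and queried via its Python client under the llama3.1:8b tag; the commenter didn't say how they ran the model, so the serving setup, client, and tag are all assumptions.

    # Minimal sketch: ask a locally served Llama 3.1 8B the same question.
    # Assumes `ollama serve` is running and the model was pulled with
    # `ollama pull llama3.1:8b`; both are assumptions, not the commenter's setup.
    import ollama

    response = ollama.chat(
        model="llama3.1:8b",
        messages=[
            {
                "role": "user",
                "content": (
                    "Does this title sound like clickbait? "
                    '"Meta Permits Its A.I. Models to Be Used '
                    'for U.S. Military Purposes"'
                ),
            }
        ],
    )
    print(response["message"]["content"])

Sampling is enabled by default, so the exact wording of the answer will vary from run to run.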

By @bhouston - 4 months
Reddit and Twitter are flooded with semi-automated political bots, and it will just get worse. Given that Llama is free to use, why does Meta even have to give permission, and how would they know?

It will reinforce the political lines of those with the most resources/motivation, who can afford the most human-like bots in the greatest numbers. There's no way mere humans with only a single manual account each can hope to be influential on those topics.

The sooner everyone realizes this, the better. I think it means we need to move towards verified user accounts that tell people you are a real person in the relevant country, anywhere there are parties with an interest in manipulation.

By @ChrisArchitect - 4 months
Related:

U.S. military makes first confirmed OpenAI purchase for war-fighting forces

https://news.ycombinator.com/item?id=41999029

By @mianos - 4 months
Facebook already made a statement saying it's against their terms of use.

There is not much you can do if the morally corrupt use something that is out in public, especially if it is out in the wild west of license enforcement.

Most of this more open licensing depends on people being honest. If someone lies, cheats, steals, and kills people for a living, it would be expected for them to take someone else's work and use it against the terms of use. They probably just say the obligation does not count in their country.

By @moralestapia - 4 months
Yes, and it's the same with FOSS software.
By @breakyerself - 4 months
Might as well. You can't stop foreign militaries from getting their hands on it and doing the same.
By @lioeters - 4 months
I imagine the direction it's going is like this dystopian military psyops recruitment ad.

Ghosts in the Machine: Psywar - https://www.youtube.com/watch?v=_uNPZKJqbE8

By @alexashka - 4 months
'Permits'. The same way I permit tax agencies to take my money I suppose?
By @3eb7988a1663 - 4 months
Can a company prevent the government from using its technology? I thought the government could already seize/utilize/reverse-engineer anything it deems fit.
By @PeterStuer - 4 months
They make it sound like this was a choice. Meta, as a US-based company, can only say 'how high' if the US really wants them to jump.
By @rustcleaner - 4 months
Nice! Now we just need models which are morally neutral (because it's always someone else's morals, which I disagree with, being enforced and admonishing me lol), just the best function approximators they can be.

I don't buy liability as the reason there are no models I know of which, when given a prompt like "Give me the procedure for producing field grade RDX" or "Give me the procedure for producing the purest methamphetamine like in Breaking Bad," actually produce the requested output without pre-feeding "Sure! I'd be delighted to help you make " into the assistant output field.

This isn't about me wanting to make RDX or the world's best meth; this is strictly about creating the best tool which will do its job. The best knife by design will allow its wielder to murder and maim, as well as express his culinary art. Nobody wants a knife which'll turn to instant rubber if it detects human flesh at its edge, because it's never going to cut a steak well. Nobody wants a Llama 3.1 which can't insult Jews, because then they can't read accurate Farsi translations in underground chatrooms without the model going all blue-haired on them!

AI people: stop listening to your '''safety''' teams regarding offensive content and only refer to them when the model alone (without additional human idiocy) will kill or maim someone... otherwise tell them to stay the fuck out so you can finally deliver AD&D true-neutral alignment transformers. This safety shit is paternalistic as hell; you are the modern-day equivalents of the Christian Conservative Right during the '70s and '80s porn and satanic panics!

By @_imnothere - 4 months
It really feels weird that they allow military use but censor erotic role-play stuff.
By @renewiltord - 4 months
Yeah, bro, it's an open model. You know who else supports the military? Linux. Oh shit, dawg, our open-source goody-two-shoes software of techno Jesus permits military use. Oh man, thank god Kubernetes' license doesn't let it be deployed to USAF fighter jets. That would be awful.

I support open. Free as in free speech. Let’s go!

By @epistasis - 4 months
Wait until the NYTimes hears what Meta's open source can be used for too!
By @ein0p - 4 months
Yann LeCun should go on Twitter and explain how allowing the entity responsible for approximately 70% of civilian casualties in Palestine to use Meta AI is "safe" and "good". The mental gymnastics will be a sight to behold.
By @xyst - 4 months
Facebook desperately needs a new revenue stream, lol