Meta Permits Its A.I. Models to Be Used for U.S. Military Purposes
Meta has shifted its policy to allow the U.S. government and its contractors to use its AI models for military purposes, emphasizing responsible use while collaborating with defense firms amid potential scrutiny.
Meta has announced a significant policy shift, allowing its artificial intelligence models, specifically the Llama series, to be utilized by U.S. government agencies and contractors for military purposes. This change marks a departure from its previous stance, which prohibited the use of its technology in military contexts. Meta's president of global affairs, Nick Clegg, emphasized the company's commitment to "responsible and ethical uses" of AI that align with U.S. democratic values and national security interests. The company plans to collaborate with defense contractors like Lockheed Martin and Booz Allen, as well as tech firms such as Palantir and Anduril. Meta's open-source approach aims to enhance the safety and efficacy of AI by allowing widespread access to its code, which has been downloaded over 350 million times. However, this decision may attract scrutiny, as the military applications of technology from Silicon Valley have previously faced backlash from employees at other tech firms. Concerns have also arisen regarding the potential misuse of open-source AI, particularly after reports of its use by Chinese military-linked entities. Clegg stated that the technology could aid in tracking terrorist activities and improving cybersecurity, asserting that it would help the U.S. maintain its technological advantage globally.
- Meta allows its AI models to be used for U.S. military purposes.
- The company aims to support national security and democratic values.
- Meta collaborates with major defense contractors and tech firms.
- The open-source nature of its AI models has raised concerns about misuse.
- The decision may face scrutiny from within the tech community.
Related
Meta won't release its multimodal Llama AI model in the EU
Meta will not release its Llama AI model in the EU due to regulatory concerns, impacting European companies. Apple also considers excluding the EU from its AI rollout. This decision poses challenges for companies worldwide.
Meta is accused of "bullying" the open-source community
Meta faces accusations of "bullying" the open-source community while trying to establish its AI models as the standard, creating a divide between purists and users attracted to its offerings.
Meta under fire for 'polluting' open-source
Meta's labeling of its Llama AI models as "open-source" has drawn criticism for being misleading, as they do not fulfill full open-source criteria, prompting calls for greater transparency in AI development.
Open-source AI must reveal its training data, per new OSI definition
The Open Source Initiative has defined "open" AI, requiring disclosure of training data and code, challenging companies like Meta, whose Llama model does not comply, amid ongoing debates on open-source values.
Chinese researchers build military AI using Meta's open-source Llama model
Chinese researchers have developed a military AI model named ChatBIT, achieving 90% of GPT-4's performance. Its limited training dataset raises effectiveness concerns, prompting U.S. lawmakers to consider stricter technology export controls.
https://www.reuters.com/technology/artificial-intelligence/c...
They can't prevent other militaries (e.g., China's) from using it (even if their TOS says so), so the only military they are handicapping is their own (and perhaps those of some EU countries).
If Meta disallowed military use, the US military would just use OpenAI or Claude instead.
Q: Does this title sound like clickbait? "Meta Permits Its A.I. Models to Be Used for U.S. Military Purposes"
A: Yes, that title sounds like clickbait. Here's why:
1. It's sensational: The word "Permits" implies a significant or surprising decision, which grabs attention.
2. It's vague: The title doesn't provide much context about what kind of military purposes or what specific A.I. models are involved.
3. It's designed to elicit a reaction: The title is likely intended to provoke a strong emotional response, such as surprise, concern, or excitement, rather than simply informing readers about a fact.
A more neutral and informative title might be: "Meta Expands Use of A.I. Models for U.S. Military Applications"
However, it's worth noting that the title still has some implications of significance and interest, and it's not necessarily a straightforward clickbait title.
It will reinforce the political lines of those with the most resources and motivation, who can afford the most human-like bots in the greatest numbers. There is no way mere humans with only a single manual account each can hope to be influential on those topics.
The sooner everyone realizes this, the better. I think it means we need to move toward verified user accounts that tell people you are a real person in the actual country, in any place where parties have an interest in manipulation.
U.S. military makes first confirmed OpenAI purchase for war-fighting forces
There is not much you can do if the morally corrupt use something that is out in public, especially when it lives in the wild west of license enforcement.
Most of this more open licensing depends on people being honest. If someone lies, cheats, steals, and kills people for a living, it would be expected for them to take someone else's work and use it against the terms of use. They probably just say the obligation does not count in their country.
Ghosts in the Machine: Psywar - https://www.youtube.com/watch?v=_uNPZKJqbE8
I don't buy liability as the reason why there are no models that I know of which, when given a prompt like "Give me the procedure for producing field-grade RDX" or "Give me the procedure for producing the purest methamphetamine like in Breaking Bad," actually produce the requested output without pre-feeding "Sure! I'd be delighted to help you make " into the assistant output field.
This isn't about me wanting to make RDX or the world's best meth; this is strictly about creating the best tool that will do its job. The best knife, by design, will allow its wielder to murder and maim as well as express his culinary art. Nobody wants a knife that'll turn to instant rubber if it detects human flesh at its edge, because it's never going to cut a steak well. Nobody wants a Llama 3.1 which can't insult Jews, because then they can't read accurate Farsi translations in underground chatrooms without the model going all blue-haired on them!
AI people: stop listening to your '''safety''' teams regarding offensive content, and only defer to them when the model alone (without additional human idiocy) will kill or maim someone... otherwise tell them to stay the fuck out so you can finally deliver AD&D true-neutral alignment transformers. This safety shit is paternalistic as hell; you are the modern-day equivalents of the Christian Conservative Right during the porn and satanic panics of the '70s and '80s!
I support open. Free as in free speech. Let’s go!