Open Source AI Is the Path Forward
Mark Zuckerberg discusses the significance of open source AI, introducing Llama 3.1 405B as a frontier-level model. He collaborates with Amazon and Nvidia to support developers, emphasizing customization, transparency, and safety in AI development for a democratized and secure technological future.
In a recent article by Mark Zuckerberg, the Founder and CEO of Meta, he discusses the importance of open source AI in the development of advanced technologies. Drawing parallels to the evolution of Unix and Linux, Zuckerberg emphasizes the benefits of open source models like Llama in AI development. He announces the release of Llama 3.1 405B, the first frontier-level open source AI model, along with collaborations with companies like Amazon and Nvidia to support developers. Zuckerberg highlights the advantages of open source AI for developers, Meta, and the world. He argues that open source AI allows for customization, control, data protection, cost efficiency, and long-term ecosystem development. Zuckerberg also addresses concerns about safety, stating that open source AI models are more transparent and subject to scrutiny, potentially making them safer than closed models. Overall, Zuckerberg advocates for open source AI as a means to democratize access to AI benefits, prevent monopolies, and ensure a safer technological future.
Related
Not all 'open source' AI models are open: here's a ranking
Researchers found large language models claiming to be open source restrict access. Debate on AI model openness continues, with concerns over "open-washing" by tech giants. EU's AI Act may exempt open source models. Transparency and reproducibility are crucial for AI innovation.
Meta starts testing user-created AI chatbots on Instagram
Meta CEO Mark Zuckerberg announced testing user-created AI chatbots on Instagram via Meta AI studio. Creators will label AI characters clearly. Initial U.S. tests involve 50 creators, aiming for wider use by August. Meta promotes AI for engaging interactions.
Zuckerberg disses closed-source AI competitors as trying to 'create God'
Mark Zuckerberg advocates for open-source AI tools over closed-source competitors focusing on a single AI entity. Meta plans AI Studio tests for Instagram avatars and envisions AI integration in tech products.
Regulation Alone Will Not Save Us from Big Tech
The article addresses challenges of Big Tech monopolies in AI, advocating for user-owned, open-source AI to prioritize well-being over profit. Polosukhin suggests a shift to open source AI for a diverse, accountable ecosystem.
Meta won't release its multimodal Llama AI model in the EU
Meta will not release its Llama AI model in the EU due to regulatory concerns, impacting European companies. Apple also considers excluding the EU from its AI rollout. This decision poses challenges for companies worldwide.
- Many commenters question the definition of "open source," arguing that Llama 3.1 is more accurately described as "open weights" due to restrictions on data access and usage.
- Concerns about the potential misuse of open-source AI in harmful applications, especially in the context of geopolitical tensions, are prevalent.
- Some express skepticism about Meta's motivations, suggesting that the move may be a strategic play to undermine competitors rather than a genuine commitment to open-source principles.
- There is a call for transparency regarding the training data and algorithms used, with many asserting that true open-source AI should include this information.
- Several commenters highlight the disparity in access to the necessary resources (like hardware and data) for effectively utilizing these models, questioning the democratization of AI.
Llama 3.1 - https://news.ycombinator.com/item?id=41046540 - July 2024 (114 comments)
$279mm in 1957 dollars is about $3.2bn today [2]. A public cluster of GPUs provided for free to American universities, companies and non-profits might not be a bad idea.
[1] https://en.m.wikipedia.org/wiki/Heavy_Press_Program
[2] https://data.bls.gov/cgi-bin/cpicalc.pl?cost1=279&year1=1957...
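As a rough back-of-the-envelope check of that adjustment (the CPI values below are approximate assumptions; the BLS calculator linked in [2] is the authoritative source):

```python
# Rough inflation adjustment for the Heavy Press Program figure.
# CPI values are approximate assumptions, not pulled from the BLS calculator.
cpi_1957 = 28.1      # assumed average CPI-U for 1957
cpi_2024 = 313.0     # assumed CPI-U for mid-2024

cost_1957 = 279e6    # $279 million in 1957 dollars
cost_today = cost_1957 * (cpi_2024 / cpi_1957)

print(f"~${cost_today / 1e9:.1f} billion today")  # ~ $3.1bn with these values,
                                                  # in line with the ~$3.2bn above
```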
I find the language around "open source AI" to be confusing. With "open source" there's usually "source" to open, right? As in, there is human legible code that can be read and modified by the user? If so, then how can current ML models be open source? They're very large matrices that are, for the most part, inscrutable to the user. They seem akin to binaries, which, yes, can be modified by the user, but are extremely obscured to the user, and require enormous effort to understand and effectively modify.
"Open source" code is not just code that isn't executed remotely over an API, and it seems like maybe its being conflated with that here?
- No more vendor lock-in
- Instead of just wrapping proprietary API endpoints, developers can now integrate AI deeply into their products in a very cost-effective and performant way
- A price race to the bottom, with near-instant LLM responses at very low prices, is on the horizon
As a founder, it feels like a very exciting time to build a startup as your product automatically becomes better, cheaper, and more scalable with every major AI advancement. This leads to a powerful flywheel effect: https://www.kadoa.com/blog/ai-flywheel
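To make the "no vendor lock-in" point concrete, here is a minimal local-inference sketch with Hugging Face transformers (the model ID, which is gated behind Meta's license acceptance, and the generation settings are assumptions for illustration; hardware requirements are substantial even for the smallest variant):

```python
# Minimal sketch: run an open-weights Llama checkpoint locally instead of
# calling a proprietary API. Assumes transformers + torch are installed and
# access to the gated meta-llama repo has been granted.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # assumed repo id
    device_map="auto",
)

out = generator(
    "Explain in one paragraph why open model weights matter to startups.",
    max_new_tokens=128,
)
print(out[0]["generated_text"])
```

The same few lines work against any other open-weights checkpoint, which is exactly the lock-in point being made above.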
This is not altruism, although it's still great for devs and startups. All of FB's GPU investment is primarily for new AI products: "friends", recommendations, and selling ads.
https://www.joelonsoftware.com/2002/06/12/strategy-letter-v/
Someone can correct me here but AFAIK we don't even know which datasets are used to train these models, so why should we even use "open" to describe Llama? This is more similar to freeware than to an open-source project.
[1] https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/202...
This means they need content that will grab attention, and creating open source models that allow anyone to create any content on their own becomes good for Meta. The users of the models can post it to their Instagram/FB/Threads account.
Releasing an open model also releases Meta from the burden of having to police the content the model generates, once the open source community fine-tunes the models.
Overall, this is a sound business move for Meta - the post doesn't really talk about the true benefit, instead moralizing about open source.
But I have strong doubts they (or any other company) actually believe what they are saying.
Here is the reality:
- Facebook is spending untold billions on GPU hardware.
- Facebook is arguing in favor of open sourcing the models that they spent billions of dollars to generate, for free...?
It follows that companies with much smaller resources (money) will not be able to match what Facebook is doing. Seems like an attempt to kill off the competition (specifically, smaller organizations) before they can take root.
Bravo! While I don't agree with Zuck's views and actions on many fronts, on this occasion I think he and the AI folks at Meta deserve our praise and gratitude. With this release, they have brought the cost of pretraining a frontier 400B+ parameter model to ZERO for pretty much everyone -- well, everyone except Meta's key competitors.[a] THANK YOU ZUCK.
Meanwhile, the business-minded people at Meta surely won't mind if the release of these frontier models to the public happens to completely mess up the AI plans of competitors like OpenAI/Microsoft, Google, Anthropic, etc. Come to think of it, the negative impact on such competitors was likely a key motivation for releasing the new models.
---
[a] The license is not open to the handful of companies worldwide which have more than 700M users.
Step 1. Chick-Fil-A releases a grass-fed beef burger to spite other fast-food joints, calls it "the vegan burger"
Step 2. A couple of outraged vegans show up in the comments, pointing out that beef, even grass-fed beef, isn't vegan
Step 3. Fast food enthusiasts push back: it's unreasonable to want companies to abide by this restrictive definition of "vegan". Clearly this burger is a gamechanger and the definition needs to adapt to the times.
Step 4. Goto Step 2 in an infinite loop
I.e., the more important thing - the more "free" thing - is the licensing now.
E.g., I play around with different image diffusion models like Stable Diffusion and specific fine-tuned variations for ControlNet or LoRA that I plug into ComfyUI.
But I can't use it at work because of the licensing. I have to use InvokeAI instead of ComfyUI if I want to be careful, and only very specific image diffusion models without the latest and greatest fine-tuning. As others have said - the weights themselves are rather inscrutable. So we're building on more abstract shapes now.
But the key open thing is making sure (1) the tools to modify the weights are open and permissive (ComfyUI, related scripts or parts of both the training and deployment) and (2) the underlying weights of the base models and the tools to recreate them have MIT or other generous licensing. As well as the fine-tuned variants for specific tasks.
It's not going to be the naive construction in the future where you take a base model and as company A you produce company A's fine tuned model and you're done.
It's going to be a tree of fine-tuned models as a node-based editor like ComfyUI already shows and that whole tree has to be open if we're to keep the same hacker spirit where anyone can tinker with it and also at some point make money off of it. Or go free software the whole way (i.e., LGPL or equivalent the whole tree of tools).
In that sense unfortunately Llama has a ways to go to be truly open: https://news.ycombinator.com/item?id=36816395
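To make the "tree of fine-tunes" idea above concrete, here is roughly what stacking a LoRA on an open base diffusion model looks like with the diffusers tooling (a sketch; the base model ID is an example and the LoRA repo is a hypothetical placeholder, and the licensing of both is what decides whether the result is usable at work):

```python
# Sketch: load an open base diffusion model and apply a fine-tuned LoRA on top.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # example base model
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical LoRA repo; in practice this is one node in the tree of fine-tunes.
pipe.load_lora_weights("some-org/some-style-lora")

image = pipe("a watercolor sketch of a lighthouse at dusk").images[0]
image.save("lighthouse.png")
```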
Ok, first of all, has this really worked? AI moderators still can't capture the mass of obvious spam/bots on all their platforms, Threads included. Second, AI detection doesn't work, and with how much better the systems are getting, it probably never will, unless you keep the best models for yourself, and it is clear from the rest of the note that it's not Zuck's intention to do so.
> As long as everyone has access to similar generations of models – which open source promotes – then governments and institutions with more compute resources will be able to check bad actors with less compute.
This just doesn't make sense. How are you going to prevent AI spam, AI deepfakes from causing harm with more compute? What are you gonna do with more compute about nonconsensual deepfakes? People are already using AI to bypass identity verification on your social media networks, and pump out loads of spam.
This also has the important effect of neutralizing the critique of US Government AI regulation because it will democratize "frontier" models and make enforcement nearly impossible. Thank you, Zuck, this is an important and historic move.
It also opens up the market to a lot more entry in the area of "ancillary services to support the effective use of frontier models" (including safety-oriented concerns), which should really be the larger market segment.
Is the vision here to treat LLM-based AI as a "public good", akin to a utility provider in a civilized country (taxpayer funded, govt maintained, not-for-profit)?
I think we could arguably call this "open source" when all the infra blueprints, scripts and configs are freely available for anyone to try and duplicate the state-of-the-art (resource and grokking requirements notwithstanding).
The whole thing is interesting, but this part strikes me as potentially anticompetitive reasoning. I wonder what the lines are that they have to avoid crossing here?
Which open-source license has such restrictions and clauses?
The HPC domain (data and compute intensive applications that typically need vector, parallel or other such architectures) has been around for the longest time, but has been confined to academic / government tasks.
LLM's with their famous "matrix multiply" at their very core are basically demolishing an ossified frontier where a few commercial entities (Intel, Microsoft, Apple, Google, Samsung etc) have defined for decades what computing looks like for most people.
Assuming that the genie is out of the bottle, the question is: what is the shape of end-user devices that are optimally designed to use compute intensive open source algorithms? The "AI PC" is already a marketing gimmick, but could it be that Linux desktops and smartphones will suddenly be "AI natives"?
For sure it's a transformational period and the landscape T+10 yrs could be drastically different...
I think it's interesting to think about this question of open source, benefits, risk, and even competition, without all of the baggage that Meta brings.
I agree with the FTC, that the benefits of open-weight models are significant for competition. The challenge is in distinguishing between good competition and bad competition.
Some kind of competition can harm consumers and critical public goods, including democracy itself. For example, competing for people's scarce attention or for their food buying, with increasingly optimized and addictive innovations. Or competition to build the most powerful biological weapons.
Other kinds of competition can massively accelerate valuable innovation. The FTC must navigate a tricky balance here — leaning into competition that serves consumers and the broader public, while being careful about what kind of competition it is accelerating that could cause significant risk and harm.
It's also obviously not just "big tech" that cares about the risks behind open-weight foundation models. Many people have written about these risks even before it became a subject of major tech investment. (In other words, A16Z's framing is often rather misleading.) There are many non-big tech actors who are very concerned about current and potential negative impacts of open-weight foundation models.
One approach which can provide the best of both worlds is, for cases where there are significant potential risks, to ensure that there is at least some period of time during which weights are not provided openly, in order to learn a bit about the potential implications of new models.
Longer-term, there may be a line where models are too risky to share openly, and it may be unclear what that line is. In that case, it's important that we have governance systems for such decisions that are not just profit-driven, and which can help us continue to get the best of all worlds. (Plug: my organization, the AI & Democracy Foundation; https://ai-dem.org/; is working to develop such systems and hiring.)
But this is really positive stuff and it’s nice to view my time there through the lens of such a change for the better.
Keep up the good work on this folks.
Time to start thinking about opening up a little on the training data.
Dead internet theory is very much happening in real time, and I dread what's about to come since the world has collectively decided to lose their minds with this AI crap. And people on this site are unironically excited about this garbage that is indistinguishable from spam getting more and more popular. What a fucking joke
Not the usual nation-state rhetoric, but something that justifies that closed source leads to better user-experience and fewer security and privacy issues.
An ecosystem that benefits vendors, customers, and the makers of close source?
Are there historical analogies other than Microsoft Windows or Apple iPhone / iOS?
But they still have 70 thousand people (a small country) doing _something_. What are they doing? Updating Facebook UI? Not really, the UI hasn't been updated, and you don't need 70 thousand people to do that. Stuff like React and Llama? Good, I guess, we'll see how they make use of Llama in a couple of years. Spellcheck for posts maybe?
This is a very important concern in Health Care because of HIPAA compliance. You can't just send your data over the wire to someone's proprietary API. You would at least need to de-identify your data. This can be a tricky task, especially with unstructured text.
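As a toy illustration of why de-identifying unstructured text is hard, here is a minimal regex-based scrubber (hypothetical patterns only; HIPAA Safe Harbor de-identification covers 18 identifier categories, and regexes alone routinely miss names, addresses, and record numbers buried in free text):

```python
# Toy PHI scrubber: catches a few obvious identifier formats and shows
# how much a naive approach misses.
import re

PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "date":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

note = "Pt John Doe, DOB 03/14/1962, called from 555-867-5309 re: labs."
print(scrub(note))
# -> "Pt John Doe, DOB [DATE], called from [PHONE] re: labs."
#    The patient's name slips straight through, which is exactly the problem.
```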
---
Some observations:
* The model is much better at trajectory correcting and putting out a chain of tangential thoughts than other frontier models like Sonnet or GPT-4o. Usually, these models are limited to outputting "one thought", no matter how verbose that thought might be.
* I remember in Dec of 2022 telling famous "tier 1" VCs that frontier models would eventually be like databases: extremely hard to build, but the best ones will eventually be open and win as it's too important to too many large players. I remember the confidence in their ridicule at the time but it seems increasingly more likely that this will be true.
Okay then Mark. Replace "modern AI models" with "social media" and repeat this statement with a straight face.
It's a bit buggy but it is fun.
Disclaimer: I am the author of L2E
On a more serious note, I don't really buy his arguments about safety. First, widespread AI does not reduce unintentional harm but increases it, because the rate of accidents compounds. Second, the chance of success for threat actors will increase, because of the asymmetric advantage of gaining access to all open information while hiding their own information. But there is no reversing this at this point; I'll enjoy it while it lasts, AGI will come sooner or later anyway.
1. Software: this is all Pytorch/HF, so completely open-source. This is total parity between what corporates have and what the public has.
2. Model weights: Meta and a few other orgs release open models - as opposed to OpenAI's closed models. So, ok, we have something to work with.
3. Data: to actually do anything useful you need tons of data. This is beyond the reach of the ordinary man, setting aside the legality issues.
4. Hardware: GPUs, which are extremely expensive. Not just that, even if you have the top dollars, you have to go stand in a queue and wait for O(months), since mega-corporates have gotten there before you.
For Inference, you need 1,2 and 4. For training (or fine-tuning), you need all of these. With newer and larger models like the latest Llama, 4 is truly beyond the reach of ordinary entities.
This is NOTHING like open source, where a random guy can edit/recompile/deploy software on a commodity computer. With LLMs, data and hardware are in the equation, and the playing field is completely stacked. This thread has a bunch of people discussing nuances of 1 and 2, but this bike-shedding only hides the basic point: control of LLMs is for mega-corps, not for individuals.
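A rough sense of the scale behind point 4, for the 405B model specifically (back-of-the-envelope, assuming 16-bit weights and ignoring activations and KV cache):

```python
# Back-of-the-envelope memory math for serving Llama 3.1 405B.
# Assumes 2 bytes per parameter (bf16); real deployments also need memory
# for activations and KV cache, so this is a lower bound.
params = 405e9
bytes_per_param = 2
gpu_memory_gb = 80           # e.g. one 80 GB accelerator

weights_gb = params * bytes_per_param / 1e9   # ~810 GB just for the weights
gpus_needed = weights_gb / gpu_memory_gb      # ~10+ accelerators minimum

print(f"weights: ~{weights_gb:.0f} GB -> at least {gpus_needed:.0f} x 80 GB GPUs")
```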
Open-source code in the past was fantastic because the West had a monopoly on CPUs and computers. Sharing and contributing was amazing, while ensuring that tyrants couldn't use this tech to harm people, simply because they didn't have the hardware to run it.
But now, things are different. China is advancing in chip technology, and Russia is using open-source AI to harm people at scale today, with auto-targeting drones being just the start. The Red Sea conflict, etc.
And somehow, Zuckerberg keeps finding ways to mess up people's lives, despite having the best intentions.
Right now you can build a semi-autonomous drone with AI to kill people for ~$500-700. The Western world will still use safe and secure commercial models, while a new axis of evil will use models based on Meta's or any other open-source release to do whatever harm they can imagine, without a hint of control.
Take this particular model: fine-tune it on all the research a state-level actor can gather, and use it to help develop a nuclear bomb, killer drone swarms, etc. Once the knowledge is public, these models can serve as a base that gives expert-level knowledge to anyone who wants it, uncensored - especially a government that wants to destroy a peaceful order for whatever reason.
Open weights (and open inference code) is NOT open source, but just some weak open washing marketing.
The model that comes closest to being TRULY open is AI2’s OLMo. See their blog post on their approach:
https://blog.allenai.org/hello-olmo-a-truly-open-llm-43f7e73...
I think the only thing they’re not open about is how they’ve curated/censored their “Dolma” training data set, as I don’t think they explicitly share each decision made or the original uncensored dataset:
https://blog.allenai.org/dolma-3-trillion-tokens-open-llm-co...
By the way, OSI is working on defining open source for AI. They post weekly updates to their blog. Example:
https://opensource.org/blog/open-source-ai-definition-weekly...
I was thinking today about Musk, Zuckerberg and Altman. Each claims that the next version of their big LLMs will be the best.
For some reason it reminded me of one apocryphal cause of WW1, which was that the kings of Europe were locked in a kind of ego driven contest. It made me think about the Nation State as a technology. In some sense, the kings were employing the new technology which was clearly going to be the basis for the future political order. And they were pitting their own implementation of this new technology against the other kings.
I feel we are seeing a similar clash of kings playing out. The claims that this is all just business or some larger claim about the good of humanity seem secondary to the ego stakes of the major players. And when it was about who built the biggest rocket, it felt less dangerous.
It breaks my heart just a little bit. I feel sympathy in some sense for the AIs we will create, especially if they do reach the level of AGI. As another tortured analogy, it is like a bunch of competitive parents forcing their children into adversarial relationships to satisfy the parents' egos.
however, the "open-source" narrative is being pushed a bit too much like descriptive ML models were called "AI", or applied statistics "data science". with reinforced examples such as this, we start to lose the original meaning of the term.
the current approach of startups or small players "open-sourcing" their platforms and tools as a means to promote network effect works but is harmful in the long run.
you will find examples of terraform and red hat happening, and a very segmented market. if you want the true spirit of open-source, there must be a way to replicate the weights through access to training data and code. whether one could afford millions of GPU hours or not, real innovation would come from remixing the internals, and not just fine-tuning existing stuff.
i understand that this is not realistically going to ever happen, but don't perform deceptive marketing at the same time.
*I reserve the right to remove this praise if they abuse this open source model position in the future.
With the new model, I am seeing a lot about how open source they are and how they can be built upon. Is it now completely open source, or similar to their last models?
My impression is that AI, if done correctly, will be the new way to build APIs over large data sets and information. It can't write code unless you want to dump billions of dollars into a solution with millions of dollars of operational costs. As it stands, it loses context too quickly to do advanced human tasks. BUT this is where it is great at assembling data and information. You know what is great at assembling data and information? APIs.
Think of it this way: if we can make it faster, and it trains on a company's data lake, it could be used to return information faster than a nested micro-service architecture that is just a spiderweb of dependencies.
Because AI loses context, simple API requests could actually be more efficient.
Also, are there any "IP" rights attached at all to a bunch of numbers coming out of a formula that someone else calculated for you? (edit: after all, a "model" is just a matrix of numbers coming out of running a training algorithm that is not owned by Meta over training data that is not owned by Meta.)
Meta imposes a notification duty AND a request for another license (no mention of the details of these) for applications of their model with a large number of users. This is against the spirit of open source. (In practical terms it is not a show stopper since you can easily switch models, although they all have subtly different behaviours and quality levels.)
> Third, a key difference between Meta and closed model providers is that selling access to AI models isn’t our business model. That means openly releasing Llama doesn’t undercut our revenue, sustainability, or ability to invest in research like it does for closed providers. (This is one reason several closed providers consistently lobby governments against open source.)
Maybe this is a strategic play to hurt other AI companies that depend on this business model?
Having run many red teams recently as I build out promptfoo's red teaming featureset [0], I've noticed the Llama models punch above their weight in terms of accuracy when it comes to safety. People hate excessive guardrails and Llama seems to thread the needle.
Very bullish on open source.
Does anyone have details on exactly what this means or where/how this metric gets derived?
We mostly don’t all want or need the hardware to run these AIs ourselves, all the time. But, when we do, we need lots of it for a little while.
This is what Holochain was born to do. We can rent massive capacity when we need it, or earn money renting ours when we don’t.
All running cryptographically trusted software at Internet scale, without the knowledge or authorization of commercial or government “do-gooders”.
Exciting times!
Still huge props to them for doing what they do.
Mostly unrelated to the correctness of the article, but this feels like a bad argument. AFAIK, Anthropic/OpenAI/Google are not having issues with their weights being leaked (are they?). Why is it that Meta's model weights are?
Llama 3.1 Official Launch
By giving away higher and higher quality models, they undermine the potential return on investment for startups who seek money to train their own. Thus investment in foundation model building stops and they control the ecosystem.
- Open training data (this is very big)
- Open training algorithms (does it include infrastructure code?)
- Open weights (result of previous two)
- Open runtime algorithm
- We need to control our own destiny and not get locked into a closed vendor.
- We need to protect our data.
- We want to invest in the ecosystem that's going to be the standard for the long term.
Thank you Meta for being the bright light of ethical guidance for us all.
We don't get the data or training code. The small runtime framework is open source but that's of little use as it's largely fixed in implementation due to the weights. Yes, we can fine tune, but that is akin to modifying video games - we can do it, but there's only so much you can do within reasonable effort, and no one would call most video games 'open source'*.
It's freeware, and Meta's strategy is much more akin to the strategy Microsoft used with Internet Explorer to capture the web browser market. No one was saying "God bless Microsoft" for trying to capture the browser market with IE. Nothing wrong with Meta's strategy, just don't call it open source.
*Weights are data, and so is the video/audio output of a video game. If we gave away that output for free, we wouldn't call the video game open source - which is essentially what the myriad freeware games do.
Can't wait to see how the landscape will look in 2027 and beyond.
The actual problem is running these models. Very few companies can afford the hardware to run these models privately. If you run them in the cloud, then I don't see any potential financial gain for any company to fine-tune these huge models just to catch up with OpenAI or Anthropic, when you can probably get a much better deal by fine-tuning the closed-source models.
Also this point:
> We need to protect our data. Many organizations handle sensitive data that they need to secure and can’t send to closed models over cloud APIs.
First, it's ironic that Meta is talking about privacy. Second, most companies will run these models in the cloud anyway. You can run OpenAI via Azure Enterprise and Anthropic on AWS Bedrock.
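For reference, "Anthropic on AWS Bedrock" from code looks roughly like this (a sketch; the model ID and request body schema are assumptions that should be checked against the Bedrock documentation):

```python
# Sketch: calling a hosted Anthropic model through AWS Bedrock with boto3.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "anthropic_version": "bedrock-2023-05-31",   # assumed schema version
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Classify this support ticket."}],
}

response = client.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # assumed model id
    body=json.dumps(body),
)
print(json.loads(response["body"].read()))
```

The data still leaves your environment either way; the question is whose cloud contract governs it.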
Llama 3 Training System: 19.2 exaFLOPS total, split across two clusters of 9.6 exaFLOPS each; each cluster comprises 24K GPUs (2 x 12K) delivering 400+ TFLOPS per GPU.
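A quick sanity check of those figures (pure arithmetic on the numbers quoted above):

```python
# Sanity check on the quoted Llama 3 cluster figures.
gpus_per_cluster = 24_000    # 2 x 12K GPUs
tflops_per_gpu = 400         # "400+ TFLOPS/GPU"
clusters = 2

per_cluster_exaflops = gpus_per_cluster * tflops_per_gpu * 1e12 / 1e18
total_exaflops = per_cluster_exaflops * clusters
print(f"{per_cluster_exaflops:.1f} exaFLOPS/cluster, {total_exaflops:.1f} total")
# -> 9.6 exaFLOPS/cluster, 19.2 total
```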
If he really wants to replicate Linux's success against proprietary Unices, he needs to release Llama with some kind of GPL equivalent, that forces everyone to play the open source game.
They provide their model, with weights and code, as "source available", and it looks like they allow for commercial use until a 700M monthly active user cap is surpassed. They also don't allow you to train other AI models with their model:
""" ... v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof). ... """
Has anyone tried that?
I hate how the moment it's too late will be, by design, closed doors.
> This is one reason several closed providers consistently lobby governments against open source.
Is this substantially true? I've noticed a tendency of those who support the general arguments in this post to conflate the beliefs of people concerned about AI existential risk, some of whom work at the leading AI labs, with the position of the labs themselves. In most cases I've seen, the AI labs (especially OpenAI) have lobbied against any additional regulation on AI, including with SB1047[1] and the EU AI Act[2]. Can anyone provide an example of this in the context of actual legislation?
> On this front, open source should be significantly safer since the systems are more transparent and can be widely scrutinized. Historically, open source software has been more secure for this reason.
This may be true if we could actually understand what was happening in neural networks, or train them to consistently avoid unwanted behaviors. As things are, the public weights are simply inscrutable black boxes, and the existence of jailbreaks and other strange LLM behaviors shows that we don't understand how our training processes create models' emergent behaviors. The capabilities of these models and their influence are growing faster than our understanding of them and our ability to steer them to behave precisely how we want, and that will only get harder as the models get more powerful.
> At this point, the balance of power will be critical to AI safety. I think it will be better to live in a world where AI is widely deployed so that larger actors can check the power of smaller bad actors.
This paragraph ignores the concept of offense/defense balance. It's much easier to cause a pandemic than to stop one, and cyberattacks, while not as bad as pandemics, seem to also favor the attacker (this one is contingent on how much AI tools can improve our ability to write secure code). At the extreme, it would clearly be bad if everyone had access to an anti-matter weapon large enough to destroy the Earth; at some level of capability, we have to limit the commands an advanced AI will follow from an arbitrary person.
That said, I'm unsure if limiting public weights at this time would be good regulation. They do seem to have some benefits in increasing research around alignment/interpretability, and I don't know if I buy the argument that public weights are significantly more dangerous from a "misaligned ASI" perspective than many competing closed companies. I also don't buy the view of some in the leading labs that we'll likely have "human level" systems by the end of the decade; it seems possible but unlikely. But I worry that Zuckerberg's vision of the future does not adequately guard against downside risks, and is not compatible with the way the technology will actually develop.
[1] https://thebulletin.org/2024/06/california-ai-bill-becomes-a...
Only the big players can afford to push go, and FB would love to see OpenAI’s code so they can point it to their proprietary user data.
So about all the bots and sock puppets on social media..
Claude is supposed to be better, but it is also even more locked down than ChatGPT.
Word will let me write a manifest for a new Nazi party, but Claude is so locked down that it won't find a cartoon in a picture and Gemini... well.
If AIs are not to harm society, they need to enable us to think in new ways.
And you can't even try it without an FB/IG account.
Zuck will never change.
Why do people keep mislabeling this as Open Source? The whole point of calling something Open Source is that the "magic sauce" of how to build something is publicly available, so I could build it myself if I have the means. But without the training data publicly available, could I train Llama 3.1 if I had the means? No wonder Zuckerberg doesn't start with defining what Open Source actually means, as then the blogpost would have lost all meaning from the get-go.
Just call it "Open Model" or something. As it stands right now, the meaning of Open Source is being diluted by all these companies pretending to doing one thing, while actually doing something else.
I initially got very exciting seeing the title and the domain, but hopelessly sad after reading through the article and realizing they're still trying to pass their artifacts off as Open Source projects.
This is hard to disagree with.
Not that anyone would go buy 100,000 H100s to train their own Llama, but words matter. Definitions matter.
https://raw.githubusercontent.com/meta-llama/llama-models/ma...
> 2. Additional Commercial Terms. If, on the Llama 3.1 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
The definition of free software (and open source, for that matter) is well-established. The same definition applies to all programs, whether they are "AI" or not. In any case, if a program was built by training against a dataset, the whole dataset is part of the source code.
Llama is distributed in binary form, and it was built based on a secret dataset. Referring to it as "open source" is not ignorance, it's malice.
1. Meta pushed engineering wages higher across the industry.
2. They promote high performing engineers very quickly. There are engineers making 7 figures there with just a few years experience.
3. They have open sourced the most important frameworks: React and Pytorch
This company is a guiding light forcing the hand of other large corporations. Mark Zuckerberg is a hero, and has done a fantastic job
I don't see open source being able to compete with the cutting-edge proprietary models. There's just not enough money. GPT-5 will take an estimated $1.2 billion to train. MS and OpenAI are already talking about building a $100 billion training data center.
How can you compete with that if your plan is to give away the training result for free?
Because they sold the resultant code and systems built on it for money... this is the gold miner saying that all shovels and jeans should be free.
Am I happy Facebook open sources some of their code? Sure, I think it's good for everyone. Do I think they're talking out of both sides of their mouth? Absolutely.
Let me know when Facebook opens up the entirety of their Ad and Tracking platforms and we can start talking about how it's silly for companies to keep software closed.
I can say with 100% confidence if Facebook were selling their AI advances instead of selling the output it produces, they wouldn't be advocating for everyone else to open source their stacks.
* You can't use them for any purpose. For example, the license prohibits using these models to train other models.
* You can't meaningfully modify them given there is almost no information available about the training data, how they were trained, or how the training data was processed.
As such, the model itself is not available under an open source license and the AI does not comply with the "open source AI" definition by OSI.
It's an utter disgrace for Meta to write such a blogpost patting themselves on the back while lying about how open these models are.