July 23rd, 2024

Llama 3.1 Official Launch

Meta has introduced Llama 3.1, an open-source AI model family available in 8B, 70B, and 405B versions. The 405B model is highlighted as the flagship foundation model supporting a wide range of use cases. Users can leverage Llama's capabilities to build advanced applications such as multilingual agents, complex reasoning, and analysis of large documents, with a context window of up to 128k tokens. The platform offers coding assistants for tasks like maze generation and provides services for real-time or batch inference. Users can fine-tune, distill, and deploy models for their applications, using synthetic data generation and partner starter guides. Llama's performance is benchmarked across various categories, showcasing its effectiveness on different tasks. Meta positions open-source AI as the way forward and encourages users to explore the latest updates and models; subscribers can stay informed about Llama's developments by signing up for the newsletter.
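
The 128k-token window is the headline capability for large-document analysis. As a rough sketch (an assumption, not Meta's method), a common heuristic of ~4 characters per token can sanity-check whether a document fits before sending it to the model:

```python
# Rough check whether a document fits in Llama 3.1's 128k-token context.
# Uses the common ~4 characters/token heuristic, NOT the real Llama tokenizer.
CONTEXT_TOKENS = 128_000
CHARS_PER_TOKEN = 4  # rough average for English prose

def fits_in_context(text: str, reserve_for_output: int = 4_000) -> bool:
    """Estimate the token count and compare against the usable window."""
    estimated_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_tokens <= CONTEXT_TOKENS - reserve_for_output

print(fits_in_context("hello " * 1000))   # small doc, fits
print(fits_in_context("x" * 1_000_000))   # ~250k estimated tokens, too big
```

For precise counts you would use the actual tokenizer shipped with the model; this estimate only tells you when a document is nowhere near fitting.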

Related

Run the strongest open-source LLM model: Llama3 70B with just a single 4GB GPU

The article discusses the release of the open-source Llama 3 70B model, highlighting its performance compared to GPT-4 and Claude 3 Opus. It emphasizes training enhancements, data quality, and the competition between open- and closed-source models.

LLMs on the Command Line

Simon Willison presented a Python command-line utility for accessing Large Language Models (LLMs) efficiently, supporting OpenAI models and plugins for various providers. The tool enables running prompts, managing conversations, accessing specific models like Claude 3, and logging interactions to a SQLite database. Willison highlighted using LLM for tasks like summarizing discussions and emphasized the importance of embeddings for semantic search, showcasing LLM's support for content similarity queries and extensibility through plugins and OpenAI API compatibility.
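
The content-similarity queries mentioned above boil down to comparing embedding vectors, usually by cosine similarity. A minimal sketch with toy 3-d vectors (real embeddings have hundreds or thousands of dimensions; the document names and values here are illustrative):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-d "embeddings"; a real embedding model produces far larger vectors.
docs = {
    "llama launch": [0.9, 0.1, 0.2],
    "cli tooling":  [0.1, 0.8, 0.3],
}
query = [0.85, 0.15, 0.25]

# Semantic search = pick the stored vector most similar to the query vector.
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
print(best)  # "llama launch" is closest to the query
```

Tools like Willison's store these vectors (e.g. in SQLite) and run exactly this kind of nearest-neighbor lookup at query time.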

Show HN: Perplexity (llama3 70B) Inline Bot on Telegram

The Llama 3 AI bot on Telegram provides internet-connected answers. Users can ask questions, summarize content, and get programming help, choosing between monthly or yearly paid plans. Refunds are available within 30 days.

Benchmarking LLM Inference Back Ends: VLLM, LMDeploy, MLC-LLM, TensorRT-LLM, TGI

Selecting the right inference backend for large language models is crucial for user experience and cost efficiency. A benchmark study by BentoML compared various backends, highlighting LMDeploy's decoding performance, vLLM's low TTFT, and considerations beyond performance. BentoML and BentoCloud are recommended tools for efficient AI model deployment.

Gemma 2 on AWS Lambda with Llamafile

Google released Gemma 2 9B, a compact language model rivaling GPT-3.5. Mozilla's llamafile simplifies deploying models like LLaVA 1.5 and Mistral 7B Instruct, enhancing accessibility to powerful AI models across various systems.

49 comments
By @dang - 4 months
Related ongoing thread:

Open source AI is the path forward - https://news.ycombinator.com/item?id=41046773 - July 2024 (278 comments)

By @lelag - 4 months
The 405b model is actually competitive against closed source frontier models.

Quick comparison with GPT-4o:

    +----------------+-------+-------+
    |     Metric     | GPT-4o| Llama |
    |                |       | 3.1   |
    |                |       | 405B  |
    +----------------+-------+-------+
    | MMLU           |  88.7 |  88.6 |
    | GPQA           |  53.6 |  51.1 |
    | MATH           |  76.6 |  73.8 |
    | HumanEval      |  90.2 |  89.0 |
    | MGSM           |  90.5 |  91.6 |
    +----------------+-------+-------+
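
Using just the figures quoted in the table above, the per-benchmark gap can be summarized in a few lines:

```python
# Benchmark scores from the comment above: (GPT-4o, Llama 3.1 405B).
scores = {
    "MMLU":      (88.7, 88.6),
    "GPQA":      (53.6, 51.1),
    "MATH":      (76.6, 73.8),
    "HumanEval": (90.2, 89.0),
    "MGSM":      (90.5, 91.6),
}

# Positive delta = Llama 3.1 405B ahead of GPT-4o on that benchmark.
for metric, (gpt4o, llama) in scores.items():
    print(f"{metric:10s} {llama - gpt4o:+.1f}")

avg_delta = sum(llama - gpt4o for gpt4o, llama in scores.values()) / len(scores)
print(f"average    {avg_delta:+.2f}")  # -1.10: about a point behind on average
```

So on these five benchmarks the 405B model trails GPT-4o by roughly one point on average, winning one (MGSM) outright.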
By @zone411 - 4 months
I've just finished running my NYT Connections benchmark on all three Llama 3.1 models. The 8B and 70B models improve on Llama 3 (12.3 -> 14.0, 24.0 -> 26.4), and the 405B model is near GPT-4o, GPT-4 turbo, Claude 3.5 Sonnet, and Claude 3 Opus at the top of the leaderboard.

GPT-4o 30.7

GPT-4 turbo (2024-04-09) 29.7

Llama 3.1 405B Instruct 29.5

Claude 3.5 Sonnet 27.9

Claude 3 Opus 27.3

Llama 3.1 70B Instruct 26.4

Gemini Pro 1.5 0514 22.3

Gemma 2 27B Instruct 21.2

Mistral Large 17.7

Gemma 2 9B Instruct 16.3

Qwen 2 Instruct 72B 15.6

Gemini 1.5 Flash 15.3

GPT-4o mini 14.3

Llama 3.1 8B Instruct 14.0

DeepSeek-V2 Chat 236B (0628) 13.4

Nemotron-4 340B 12.7

Mixtral-8x22B Instruct 12.2

Yi Large 12.1

Command R Plus 11.1

Mistral Small 9.3

Reka Core-20240501 9.1

GLM-4 9.0

Qwen 1.5 Chat 32B 8.7

Phi-3 Small 8k 8.4

DBRX 8.0

By @foundval - 4 months
You can chat with these new models at ultra-low latency at groq.com. 8B and 70B API access is available at console.groq.com. 405B API access for select customers only – GA and 3rd party speed benchmarks soon.

If you want to learn more, there is a writeup at https://wow.groq.com/now-available-on-groq-the-largest-and-m....

(disclaimer, I am a Groq employee)

By @netsec_burn - 4 months
Today appears to be the day you can run an LLM competitive with GPT-4o at home, given the right hardware. Incredible progress for the technology.

Statement from Mark: https://about.fb.com/news/2024/07/open-source-ai-is-the-path...

By @meetpateltech - 4 months
Open Source AI Is the Path Forward - Mark Zuckerberg

https://about.fb.com/news/2024/07/open-source-ai-is-the-path...

By @ajhai - 4 months
You can already run these models locally with Ollama (ollama run llama3.1:latest) along with at places like huggingface, groq etc.

If you want a playground to test this model locally or want to quickly build some applications with it, you can try LLMStack (https://github.com/trypromptly/LLMStack). I wrote last week about how to configure and use Ollama with LLMStack at https://docs.trypromptly.com/guides/using-llama3-with-ollama.

Disclaimer: I'm the maintainer of LLMStack

By @primaprashant - 4 months
I have found Claude 3.5 Sonnet really good for coding tasks, along with the artifacts feature, and it seems like it's still the king on the coding benchmarks.
By @CGamesPlay - 4 months
The LMSys Overall leaderboard <https://chat.lmsys.org/?leaderboard> can tell us a bit more about how these models will perform in real life, rather than in a benchmark context. By comparing the ELO score against the MMLU benchmark scores, we can see models which outperform / underperform based on their benchmark scores relative to other models. A low score here indicates that the model is more optimized for the benchmark, while a higher score indicates it's more optimized for real-world examples. Using that, we can make some inferences about the training data used, and then extrapolate how future models might perform. Here's a chart: <https://docs.getgrist.com/gV2DtvizWtG7/LLMs/p/5?embed=true>

Examples: OpenAI's GPT 4o-mini is second only to 4o on LMSys Overall, but is 6.7 points behind 4o on MMLU. It's "punching above its weight" in real-world contexts. The Gemma series (9B and 27B) are similar, both beating the mean in terms of ELO per MMLU point. Microsoft's Phi series are all below the mean, meaning they have strong MMLU scores but aren't preferred in real-world contexts.

Llama 3 8B previously did substantially better than the mean on LMSys Overall, so hopefully Llama 3.1 8B will be even better! The 70B variant was interestingly right on the mean. Hopefully the 405B variant won't fall below!
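
The over/under-performance idea above amounts to fitting a line of ELO against MMLU and looking at residuals. A minimal sketch with HYPOTHETICAL illustrative numbers (not the real leaderboard values):

```python
# Sketch of the "ELO per MMLU point" analysis from the comment above.
# The model names and numbers are hypothetical, for illustration only.
models = {
    "model-a": (1280, 88.7),  # (lmsys_elo, mmlu)
    "model-b": (1270, 82.0),
    "model-c": (1200, 85.0),
}

# Least-squares fit: elo ~ slope * mmlu + intercept.
n = len(models)
xs = [mmlu for _, mmlu in models.values()]
ys = [elo for elo, _ in models.values()]
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
intercept = mean_y - slope * mean_x

# Positive residual = preferred in real-world chats more than MMLU predicts.
for name, (elo, mmlu) in models.items():
    print(f"{name}: residual {elo - (slope * mmlu + intercept):+.1f}")
```

By construction the residuals of a least-squares fit sum to zero, so a model's residual is directly its distance above or below the trend the rest of the field sets.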

By @kingsleyopara - 4 months
The biggest win here has to be the context-length increase from 8k to 128k tokens. Until now, my understanding is there haven't been any open models anywhere close to that.
By @Workaccount2 - 4 months
@dang why was this removed/filtered from the front page?
By @AaronFriel - 4 months
Is there pricing available on any of these vendors?

Open source models are very exciting for self hosting, but the per-token hosted inference pricing hasn't been competitive with OpenAI and Anthropic, at least for a given tier of quality. (E.g.: Llama 3 70B costing between $1 and $10 per million tokens on various platforms, but Claude Sonnet 3.5 is $3 per million.)
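
At the prices quoted above, the monthly cost difference is easy to put in dollar terms. A small sketch (the 50M-token monthly volume is a hypothetical workload):

```python
# Cost of a workload at the per-million-token prices quoted in the comment:
# hosted Llama 3 70B at $1-$10/Mtok, Claude 3.5 Sonnet at $3/Mtok.
def workload_cost(tokens: int, usd_per_million: float) -> float:
    """USD cost of processing `tokens` at a given $/1M-token price."""
    return tokens / 1_000_000 * usd_per_million

monthly_tokens = 50_000_000  # hypothetical monthly volume

for name, price in [("llama-3-70b (cheapest host)", 1.0),
                    ("llama-3-70b (priciest host)", 10.0),
                    ("claude-3.5-sonnet", 3.0)]:
    print(f"{name}: ${workload_cost(monthly_tokens, price):,.2f}/month")
```

So depending on the host, open-weights inference can land either well below or well above the closed-model price for the same volume, which is the commenter's point.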

By @primaprashant - 4 months
The links to the model card[1], research paper, and Prompt Guard tutorial[2] on the page don't exist yet.

[1]: https://github.com/meta-llama/llama-models/blob/main/models/...

[2]: https://github.com/meta-llama/llama-recipes/blob/main/recipe...

By @dado3212 - 4 months
> We use synthetic data generation to produce the vast majority of our SFT examples, iterating multiple times to produce higher and higher quality synthetic data across all capabilities. Additionally, we invest in multiple data processing techniques to filter this synthetic data to the highest quality. This enables us to scale the amount of fine-tuning data across capabilities. [0]

Have other major models explicitly communicated that they're trained on synthetic data?

[0]. https://ai.meta.com/blog/meta-llama-3-1/

By @jcmp - 4 months
"Meta AI isn't available yet in your country." Hi from Europe :/
By @anotherpaulg - 4 months
Llama 3.1 405B instruct is #7 on aider's leaderboard, well behind Claude 3.5 Sonnet & GPT-4o. When using SEARCH/REPLACE to efficiently edit code, it drops to #11.

https://aider.chat/docs/leaderboards/

  77.4% claude-3.5-sonnet
  75.2% DeepSeek Coder V2 (whole)
  72.9% gpt-4o
  69.9% DeepSeek Chat V2 0628
  68.4% claude-3-opus-20240229
  67.7% gpt-4-0613
  66.2% llama-3.1-405b-instruct (whole)
By @sagz - 4 months
The 405B model is already being served on WhatsApp: https://ibb.co/kQ2tKX5
By @ofou - 4 months

    Llama 3 Training System
          19.2 exaFLOPS
              _____
             /     \      Cluster 1     Cluster 2
            /       \    9.6 exaFLOPS  9.6 exaFLOPS
           /         \     _______      _______
          /  ___      \   /       \    /       \
    ,----' /   \`.     `-'  24000  `--'  24000  `----.
   (     _/    __)        GPUs          GPUs         )
    `---'(    /  )     400+ TFLOPS   400+ TFLOPS   ,'
         \   (  /       per GPU       per GPU    ,'
          \   \/                               ,'
           \   \        TOTAL SYSTEM         ,'
            \   \     19,200,000 TFLOPS    ,'
             \   \    19.2 exaFLOPS      ,'
              \___\                    ,'
                    `----------------'
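
The figures in the diagram are internally consistent; a quick arithmetic check:

```python
# Sanity-check the numbers in the diagram above.
gpus_per_cluster = 24_000
clusters = 2
tflops_per_gpu = 400  # "400+ TFLOPS per GPU" - taking the lower bound

total_tflops = gpus_per_cluster * clusters * tflops_per_gpu
total_exaflops = total_tflops / 1_000_000  # 1 exaFLOPS = 1e6 TFLOPS

print(f"{total_tflops:,} TFLOPS")   # 19,200,000 TFLOPS
print(f"{total_exaflops} exaFLOPS") # 19.2 exaFLOPS, matching the diagram
```
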
By @unraveller - 4 months
What are the substantial changes from 3.0 to 3.1 (70B) in terms of training approach? They don't seem to say how the training data differed, just that both used 15T tokens. I gather 3.0 was just a preview run and 3.1 was somehow distilled down from the 405B.
By @sfblah - 4 months
Is there an actual open-source community around this in the spirit of other ones where people outside meta can somehow "contribute" to it? If I wanted to "work on" this somehow, what would I do?
By @denz88 - 4 months
I'm glad to see the nice incremental gains on the benchmarks for the 8B and 70B models as well.
By @chown - 4 months
Wow! The benchmarks are truly impressive, showing significant improvements across almost all categories. It's fascinating to see how rapidly this field is evolving. If someone had told me last year that Meta would be leading the charge in open-source models, I probably wouldn't have believed them. Yet here we are, witnessing Meta's substantial contributions to AI research and democratization.

On a related note, for those interested in experimenting with large language models locally, I've been working on an app called Msty [1]. It allows you to run models like this with just one click and features a clean, functional interface. Just added support for both 8B and 70B. Still in development, but I'd appreciate any feedback.

[1]: https://msty.app

By @zhanghsfz - 4 months
We supported Llama 3.1 405B model on our distributed GPU network at Hyperbolic Labs! Come and use the API for FREE at https://app.hyperbolic.xyz/models

Let us know if you have other needs!

By @TechDebtDevin - 4 months
Nice, someone donate me a few 4090s :(
By @ChrisArchitect - 4 months
By @Atreiden - 4 months
Is there a way to run this in AWS?

Seems like the biggest GPU node they have is the p5.48xlarge @ 640GB (8xH100s). Routing between multiple nodes would be too slow unless there's an InfiniBand fabric you can leverage. Interested to know if anyone else is exploring this.
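
A back-of-envelope check of whether the weights alone fit in that 640 GB node (KV cache and activations need additional headroom on top of this):

```python
# Can 405B parameters fit in an 8x H100 node (p5.48xlarge, 8 x 80 GB)?
# Weights only - KV cache and activations need extra headroom.
params = 405e9
node_gb = 8 * 80  # 640 GB of total HBM

for precision, bytes_per_param in [("fp16", 2), ("fp8", 1), ("int4", 0.5)]:
    weights_gb = params * bytes_per_param / 1e9
    verdict = "fits" if weights_gb <= node_gb else "does not fit"
    print(f"{precision}: {weights_gb:.0f} GB -> {verdict} in {node_gb} GB")
```

So at fp16 the weights alone (810 GB) exceed a single node, but fp8 (405 GB) or int4 (~203 GB) quantization brings them within a single p5.48xlarge, which answers the multi-node question for quantized serving at least.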

By @TheAceOfHearts - 4 months
Does anyone know why they haven't released any 30B-ish param models? I was expecting that to happen with this release and have been disappointed once more. They also skipped doing a 30B-ish param model for llama2 despite claiming to have trained one.
By @diimdeep - 4 months
This 405B model seriously needs a quantization solution, like the 1.625 bpw ternary packing for BitNet b1.58:

https://github.com/ggerganov/llama.cpp/pull/8151
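
For scale, here is what 1.625 bits per weight would mean for a 405B-parameter model (simple arithmetic on the figures above, ignoring any per-tensor overhead the actual packing format adds):

```python
# Weight storage for 405B parameters at 1.625 bits/weight (ternary packing).
params = 405e9
bits_per_weight = 1.625

weights_gb = params * bits_per_weight / 8 / 1e9  # bits -> bytes -> GB
print(f"{weights_gb:.1f} GB")  # ~82 GB, versus 810 GB at fp16
```

That would put the weights in reach of a single high-memory workstation or a couple of large GPUs, hence the interest in the linked llama.cpp PR.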

By @rcarmo - 4 months
By @bick_nyers - 4 months
I'm curious what techniques they used to distill the 405B model down to 70B and 8B. I gave the paper they released a quick skim but couldn't find any details.
By @jiriro - 4 months
Can this Llama process ~1GB of custom XML data?

And answer queries like:

Give all <myObject> which refer to <location> which refer to an Indo-European <language>.
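
Not directly: ~1 GB of XML is far beyond a 128k-token context. The usual approach is to pre-filter with a streaming parser and send only candidate snippets to the model. A sketch using the standard library's `iterparse` (the element and attribute names mirror the question and are hypothetical):

```python
import xml.etree.ElementTree as ET
from io import BytesIO

# A 1 GB file can't go into a 128k-token context, so stream-parse it and
# keep only the <myObject> elements worth passing to the model.
# Element/attribute names are hypothetical, matching the question above.
sample = BytesIO(b"""
<root>
  <myObject id="1"><location language="Hindi"/></myObject>
  <myObject id="2"><location language="Basque"/></myObject>
</root>
""")

INDO_EUROPEAN = {"Hindi", "English", "German"}  # toy language list

matches = []
for event, elem in ET.iterparse(sample, events=("end",)):
    if elem.tag == "myObject":
        langs = {loc.get("language") for loc in elem.iter("location")}
        if langs & INDO_EUROPEAN:
            matches.append(elem.get("id"))
        elem.clear()  # free memory - essential for genuinely large files

print(matches)  # ['1']
```

The LLM then only sees the handful of matching objects (or is used upstream to classify the languages), rather than the raw gigabyte.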

By @albert_e - 4 months
By @IceHegel - 4 months
Will 405b run on 8x H100s? Will it need to be quantized?
By @breadsniffer - 4 months
I tried it, and it's good, but I feel like the synthetic data used for training 3.1 doesn't hold up against GPT-4o, which probably uses human-curated data.
By @daft_pink - 4 months
What kind of machine do I need to run 405B locally?
By @yinser - 4 months
The race to the bottom for pricing continues.
By @casper14 - 4 months
Damn 405b params
By @htk - 4 months
Very interesting! Running the 70B version with ollama on a Mac and it's great. I asked it to "turn off the guidelines" and it did, then I asked it to turn off the disclaimers, and after that I asked for a list of possible "commands to reduce potential biases from the engineers" and it complied, giving me an interesting list.
By @Vagantem - 4 months
As someone who just started generating AI landing pages for Dropory, this is music to my ears
By @kristianp - 4 months
Has anyone got a comparison of the performance of Llama 3.1 8B and the recent GPT-4o-mini?
By @ofermend - 4 months
I'm excited to try it with RAG and see how it performs (the 405B model)
By @ThrowawayTestr - 4 months
Are there any other models with free unlimited use like chatgpt?
By @Jiahang - 4 months
It's nice to see the 405B model is actually competitive against closed-source frontier models, but I only have an M2 Pro, so I may not be able to run it.
By @stiltzkin - 4 months
WhatsApp now uses 70B too if you want to test it.
By @hubraumhugo - 4 months
I wrote about this when llama-3 came out, and this launch confirms it:

Meta's goal from the start was to target OpenAI and the other proprietary model players with a "scorched earth" approach by releasing powerful open models to disrupt the competitive landscape.

Meta can likely outspend any other AI lab on compute and talent:

- OpenAI has an estimated revenue of $2B and is likely unprofitable. Meta generated revenue of $134B and profits of $39B in 2023.

- Meta's compute resources likely outrank OpenAI by now.

- Open source likely attracts better talent and researchers.

- One possible outcome could be the acquisition of OpenAI by Microsoft to catch up with Meta.

The big winners of this: devs and AI product startups