July 8th, 2024

Meta AI develops compact language model for mobile devices

Meta AI introduces MobileLLM, a compact language model that challenges the assumption that effective AI models must be large. With under 1 billion parameters, it outperforms previous models of similar size by 2.7% to 4.3% on benchmark tasks. MobileLLM's innovations include prioritizing model depth over width, embedding sharing, grouped-query attention, and weight-sharing techniques. The 350-million-parameter version matches a much larger model's accuracy on specific tasks, hinting at compact models' potential for efficiency. While the models are not publicly available, Meta has open-sourced the pre-training code, promoting research toward sustainable AI models for personal devices.

Read original article

Meta AI has introduced MobileLLM, a compact language model tailored for mobile devices and other resource-constrained platforms. The work challenges the conventional belief that effective AI models must be large: the research team focused on models with fewer than 1 billion parameters, a small fraction of the size of models like GPT-4. Key innovations in MobileLLM include prioritizing model depth over width, sharing the input and output embeddings, using grouped-query attention, and applying a block-wise weight-sharing technique that reuses weights across adjacent transformer blocks. These design choices enabled MobileLLM to outperform previous state-of-the-art models of comparable size by 2.7% to 4.3% on benchmark tasks. The 350-million-parameter version showed accuracy comparable to a much larger model on specific tasks, suggesting that compact models can deliver similar functionality with far fewer computational resources. While MobileLLM is not yet publicly available, Meta has open-sourced the pre-training code for further research. The work marks a step toward more efficient and sustainable AI, potentially paving the way for advanced AI features that run directly on personal devices.
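
For a concrete picture, here is a minimal PyTorch sketch of two of those ideas: grouped-query attention and input/output embedding sharing. All dimensions and class names are illustrative assumptions, not Meta's actual configuration.

```python
# Minimal sketch of grouped-query attention (fewer key/value heads than
# query heads) and input/output embedding sharing. Illustrative only;
# dimensions are NOT MobileLLM's real configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupedQueryAttention(nn.Module):
    def __init__(self, dim=512, n_q_heads=8, n_kv_heads=2):
        super().__init__()
        assert n_q_heads % n_kv_heads == 0
        self.head_dim = dim // n_q_heads
        self.n_q, self.n_kv = n_q_heads, n_kv_heads
        self.wq = nn.Linear(dim, n_q_heads * self.head_dim, bias=False)
        self.wk = nn.Linear(dim, n_kv_heads * self.head_dim, bias=False)
        self.wv = nn.Linear(dim, n_kv_heads * self.head_dim, bias=False)
        self.wo = nn.Linear(n_q_heads * self.head_dim, dim, bias=False)

    def forward(self, x):
        B, T, _ = x.shape
        q = self.wq(x).view(B, T, self.n_q, self.head_dim).transpose(1, 2)
        k = self.wk(x).view(B, T, self.n_kv, self.head_dim).transpose(1, 2)
        v = self.wv(x).view(B, T, self.n_kv, self.head_dim).transpose(1, 2)
        # Each group of query heads shares one key/value head, shrinking
        # the KV cache, the main memory cost at inference time.
        k = k.repeat_interleave(self.n_q // self.n_kv, dim=1)
        v = v.repeat_interleave(self.n_q // self.n_kv, dim=1)
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.wo(out.transpose(1, 2).reshape(B, T, -1))

class TinyLM(nn.Module):
    def __init__(self, vocab=32000, dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.attn = GroupedQueryAttention(dim)
        self.lm_head = nn.Linear(dim, vocab, bias=False)
        # Embedding sharing: reuse the input embedding matrix as the
        # output projection, saving vocab * dim parameters.
        self.lm_head.weight = self.embed.weight

    def forward(self, tokens):
        return self.lm_head(self.attn(self.embed(tokens)))
```

Parameter savings matter more at this scale than at 7B+: in a sub-billion model, the embedding tables and KV cache are a large share of the total footprint.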

Related

Optimizing AI Inference at Character.ai

Character.AI optimizes LLM inference to serve more than 20,000 queries per second globally. Innovations such as multi-query attention and int8 quantization have reduced its serving costs by a factor of 33 since late 2022.
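
The int8 quantization mentioned there is easy to illustrate in isolation. Below is a generic sketch of per-channel symmetric int8 weight quantization; it is the standard technique the write-up names, not Character.AI's actual implementation.

```python
# Generic per-channel symmetric int8 weight quantization sketch.
# Not Character.AI's code; a standard illustration of the technique.
import torch

def quantize_int8(w: torch.Tensor):
    # One scale per output channel, chosen so the max |value| maps to 127.
    scale = (w.abs().amax(dim=1, keepdim=True) / 127.0).clamp(min=1e-8)
    q = torch.clamp((w / scale).round(), -128, 127).to(torch.int8)
    return q, scale

def int8_linear(x: torch.Tensor, q: torch.Tensor, scale: torch.Tensor):
    # Dequantize-on-the-fly matmul; real kernels keep the math in int8.
    return x @ (q.float() * scale).t()

w = torch.randn(256, 512)      # fp32 weights: (out_features, in_features)
q, scale = quantize_int8(w)    # int8 storage is ~4x smaller than fp32
x = torch.randn(4, 512)
err = (int8_linear(x, q, scale) - x @ w.t()).abs().max()
print(f"max abs error vs fp32: {err.item():.4f}")
```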

Researchers upend AI status quo by eliminating matrix multiplication in LLMs

Researchers have built language models that eliminate matrix multiplication entirely, improving efficiency. The MatMul-free method reduces power consumption and cost, challenging the assumption that matrix multiplication is necessary for high-performing models.
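
The core trick can be sketched in a few lines: if weights are constrained to {-1, 0, +1}, as in the BitNet-style ternary models this work builds on, a matrix-vector product reduces to signed accumulation. The code below illustrates the idea; it is not the paper's implementation.

```python
# Sketch of the MatMul-free idea: ternary weights turn a "matrix
# multiply" into additions and subtractions. Illustration only.
import torch

def ternarize(w: torch.Tensor):
    # BitNet-style ternarization: scale by the mean magnitude, then
    # round each entry to the nearest of {-1, 0, +1}.
    gamma = w.abs().mean().clamp(min=1e-8)
    return (w / gamma).round().clamp(-1, 1), gamma

def ternary_matvec(x: torch.Tensor, w_t: torch.Tensor, gamma: torch.Tensor):
    # Signed accumulation: add inputs where the weight is +1, subtract
    # where it is -1. No multiplications in the inner loop.
    out = torch.empty(w_t.shape[0])
    for i, row in enumerate(w_t):
        out[i] = x[row == 1].sum() - x[row == -1].sum()
    return gamma * out

w = torch.randn(64, 128)
w_t, gamma = ternarize(w)
x = torch.randn(128)
approx = ternary_matvec(x, w_t, gamma)  # multiplication-free path
exact = w @ x                           # dense fp32 reference
print((approx - exact).abs().mean())    # ternary is a lossy approximation
```

The efficiency claim comes from hardware: additions are far cheaper than multiplications in both silicon area and energy, which is why this approach targets power consumption rather than just FLOP counts.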

Meta Large Language Model Compiler

Large language models (LLMs) are widely used in software engineering but remain underused for code optimization. Meta's Large Language Model Compiler (LLM Compiler) targets this gap: trained on LLVM-IR and assembly code tokens, it is designed to understand compiler intermediate representations and optimize code effectively.
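
Meta published LLM Compiler checkpoints on Hugging Face; a minimal sketch of loading one with transformers might look like the following. The model ID and the raw-LLVM-IR prompt style are assumptions based on the release announcement, not verified details.

```python
# Hedged sketch of querying Meta's LLM Compiler via Hugging Face
# transformers. Model ID and prompt format are ASSUMPTIONS, not
# verified details from the release.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/llm-compiler-7b"  # assumed Hugging Face model ID
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The model is trained on LLVM-IR, so the prompt is IR text rather than
# natural language. A toy function to continue/optimize:
ir = """define i32 @add(i32 %a, i32 %b) {
  %c = add i32 %a, %b
  ret i32 %c
}"""
inputs = tok(ir, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```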

Benchmarking LLM Inference Back Ends: VLLM, LMDeploy, MLC-LLM, TensorRT-LLM, TGI

Selecting the right inference backend for large language models is crucial for both user experience and cost efficiency. A benchmark study by BentoML compared several backends, highlighting LMDeploy's strong decoding throughput and vLLM's low time to first token (TTFT), and noted that factors beyond raw performance also matter when choosing a backend. BentoML and BentoCloud are recommended as tools for efficient AI model deployment.
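
The two headline metrics, TTFT and decoding throughput, can be measured against any OpenAI-compatible streaming endpoint. The sketch below is a generic harness with a placeholder URL and model name; it is not BentoML's benchmark code.

```python
# Generic TTFT / decode-throughput harness for an OpenAI-compatible
# streaming endpoint. URL and model name are placeholders.
import time
import requests

def measure(url: str, model: str, prompt: str):
    t0 = time.perf_counter()
    ttft, n_tokens = None, 0
    with requests.post(
        f"{url}/v1/completions",
        json={"model": model, "prompt": prompt,
              "max_tokens": 256, "stream": True},
        stream=True,
    ) as resp:
        for line in resp.iter_lines():
            # Skip blanks, non-data SSE lines, and the [DONE] sentinel.
            if not line or not line.startswith(b"data: ") or line.endswith(b"[DONE]"):
                continue
            if ttft is None:
                ttft = time.perf_counter() - t0  # time to first token
            n_tokens += 1                        # one streamed chunk ~ one token
    total = time.perf_counter() - t0
    return ttft, n_tokens / (total - ttft)       # tokens/sec during decode

ttft, tps = measure("http://localhost:8000", "my-model", "Hello, world")
print(f"TTFT: {ttft:.3f}s, decode throughput: {tps:.1f} tok/s")
```

TTFT dominates perceived responsiveness in chat UIs, while decode throughput dominates cost per generated token, which is why a single backend rarely wins on both.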

Gemma 2 on AWS Lambda with Llamafile

Google released Gemma 2 9B, a compact language model rivaling GPT-3.5. Mozilla's llamafile packages models such as LLaVA 1.5 and Mistral 7B Instruct into single self-contained executables, making powerful models easy to run across operating systems.
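
Since llamafile exposes an OpenAI-compatible HTTP API (on localhost:8080 by default), a running Gemma 2 llamafile, whether local or behind an AWS Lambda function URL, can be queried in a few lines. The endpoint and model name below are placeholder assumptions, not details from the linked post.

```python
# Querying a running llamafile through its OpenAI-compatible API.
# Endpoint and model name are placeholder assumptions.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",  # or your Lambda function URL
    json={
        "model": "gemma-2-9b-it",  # placeholder model name
        "messages": [{"role": "user", "content": "Say hi in five words."}],
        "max_tokens": 32,
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```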

1 comment
By @hdhshdhshdjd - 3 months
It’s going to be really interesting to see Meta and Apple battle out who can make an on-device model work better in their respective headsets and wearables.