January 25th, 2025

DeepSeek R1 Runs at 200 Tokens per Second on Raspberry Pi

The open-source DeepSeek R1 model runs at 200 tokens per second on a Raspberry Pi, outperforms some leading models, has raised concerns among major AI companies, and is available for local applications.

The open-source DeepSeek R1 model has been successfully tested running at 200 tokens per second on a Raspberry Pi without internet connectivity. The model tested is a smaller, distilled variant of DeepSeek R1, which is positioned against OpenAI's o1-class models and reportedly outperforms both GPT-4o and Claude 3.5 Sonnet. The 7-billion-parameter distilled models show significant performance improvements over older models, while the 14-billion-parameter model competes well with OpenAI's offerings. Other tests on a Raspberry Pi 5 with 16GB of RAM have shown that larger models can also run, albeit at much lower speeds of 0.5 to 2.0 tokens per second. The DeepSeek model is available online in both free and paid versions, and its release has raised concerns among major AI companies such as Meta. The model's capabilities suggest a shift in the AI landscape, prompting competitors to reassess their strategies. Users are encouraged to explore DeepSeek for local AI applications, with various resources available for installation and setup.
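
As a concrete illustration of what a local setup can look like, the sketch below queries a distilled DeepSeek R1 model served by Ollama on the same machine and prints the generation speed reported by the server. This is a minimal example rather than the exact setup from the original article: it assumes Ollama is installed and running, and that a distilled model tag such as deepseek-r1:7b has already been pulled.

```python
# Minimal sketch: query a locally running DeepSeek R1 distilled model via
# Ollama's REST API and report the observed generation speed.
# Assumes Ollama is installed and `ollama pull deepseek-r1:7b` has been run.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "deepseek-r1:7b"  # distilled 7B variant; swap in 1.5b or 14b as needed

payload = {
    "model": MODEL,
    "prompt": "Explain in one sentence why smaller distilled models run faster.",
    "stream": False,  # return the full response as a single JSON object
}

resp = requests.post(OLLAMA_URL, json=payload, timeout=600)
resp.raise_for_status()
data = resp.json()

print(data["response"])

# Ollama reports eval_count (tokens generated) and eval_duration (nanoseconds),
# which gives a rough tokens-per-second figure for the local hardware.
if "eval_count" in data and "eval_duration" in data:
    tps = data["eval_count"] / (data["eval_duration"] / 1e9)
    print(f"Generation speed: {tps:.1f} tokens/sec")
```

Because the speed figure comes from the server's own response metadata, the printed tokens-per-second number reflects whatever hardware the model is actually running on, whether that is a Raspberry Pi or a desktop GPU.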

- DeepSeek R1 runs at 200 tokens per second on a Raspberry Pi, outperforming some leading models.

- The model is open-source and available for both free and paid use.

- Larger models can run on Raspberry Pi 5, but at significantly lower speeds.

- The success of DeepSeek is causing concern among major AI companies, indicating a shift in the competitive landscape.

- Users are encouraged to utilize DeepSeek for local AI applications, with resources for installation readily available.
