October 30th, 2024

DeepSeek v2.5 – open-source LLM comparable to GPT-4o, but 95% less expensive

DeepSeek launched DeepSeek-V2.5, an advanced open-source model with a 128K context length, excelling in math and coding tasks, and offering competitive API pricing for developers.

DeepSeek has launched DeepSeek-V2.5, an advanced model that integrates general and coding capabilities, featuring an upgraded API and web interface. This version boasts a remarkable 128K context length and is available for free access. DeepSeek-V2.5 has achieved top rankings in major large model leaderboards, placing in the top three in AlignBench, surpassing GPT-4, and closely competing with GPT-4-Turbo. It also ranks highly in MT-Bench, rivaling LLaMA3-70B and outperforming Mixtral 8x22B. The model specializes in math, coding, and reasoning tasks, and is open-source, making it accessible for various applications. The pricing for the API is set at $0.14 per million input tokens and $0.28 per million output tokens, positioning it as a cost-effective solution. DeepSeek aims to redefine possibilities in AI with this new release, emphasizing its capabilities in handling complex tasks efficiently.

- DeepSeek-V2.5 integrates general and coding capabilities with a 128K context length.

- It ranks in the top three of AlignBench, outperforming GPT-4.

- The model specializes in math, coding, and reasoning tasks.

- API pricing is competitive at $0.14 per million input tokens and $0.28 per million output tokens.

- DeepSeek is an open-source model, enhancing accessibility for developers.
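The quoted per-token rates are easy to sandbox. A minimal sketch of the cost arithmetic — only the $0.14/$0.28 per-million rates come from the article; the helper function and the example token counts are made up for illustration:

```python
# Estimate DeepSeek-V2.5 API cost from the article's quoted rates.
# Rates are USD per million tokens; the example token counts are hypothetical.
INPUT_RATE = 0.14   # $ per 1M input tokens
OUTPUT_RATE = 0.28  # $ per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one request."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# Example: a 4K-token prompt producing a 1K-token completion.
cost = estimate_cost(4_000, 1_000)
print(f"${cost:.6f}")  # roughly $0.00084
```

At these rates even a long 128K-token prompt costs under two cents of input, which is the basis of the "95% less expensive" framing in the title.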

21 comments
By @joshhart - 4 months
The benchmarks compare it favorably to GPT-4-turbo but not GPT-4o. The latest versions of GPT-4o are much higher in quality than GPT-4-turbo. The HN title here does not reflect what the article is saying.

That said, the conclusion that it's a good model for cheap is true. I'd just be hesitant to say it's a great model.

By @viraptor - 4 months
Why say comparable when gpt4o is not included in the comparison table? (Neither is the interesting Sonnet 3.5)

Here's an Aider leaderboard with the interesting models included: https://aider.chat/docs/leaderboards/ Strangely, v2.5 is below the old v2 Coder. Maybe we can count on v2.5 Coder being released then?

By @shamanic - 4 months
In my experience, DeepSeek is my favourite model to use for coding tasks. It is not as smart an assistant as 4o or Sonnet, but it has outstanding task adhesion, its code quality is consistently top notch, and it is never lazy. Unlike GPT-4o or the new Sonnet (yuck), it doesn't try to be too smart for its own good, which actually makes it easier to work with on projects. The main downside is that it has a problem with looping, where it gets some concept stuck in its context and refuses to move on from it. However, if you remember the old GPT-4 (pre-Turbo) days, this is really not a problem: just start a new chat.

By @uxhacker - 4 months
It’s interesting to see a Chinese LLM like DeepSeek enter the global stage, particularly given the backdrop of concerns over data security with other Chinese-owned platforms, like TikTok. The key question here is: if DeepSeek becomes widely adopted, will we see a similar wave of scrutiny over data privacy?

With TikTok, concerns arose partly because of its reach and the vast amount of personal information it collects. An LLM like DeepSeek would arguably have even more potential to gather sensitive data, especially as these models can learn from and remember interaction patterns, potentially accessing or “training” on sensitive information users might input without thinking.

The challenge is that we’re not yet certain how much data DeepSeek would retain and where it would be stored. For countries already wary of data leaving their borders or being accessible to foreign governments, we could see restrictions or monitoring mechanisms placed on similar LLMs—especially if companies start using these models in environments where proprietary information is involved.

In short, if DeepSeek or similar Chinese LLMs gain traction, it’s quite likely they’ll face the same level of scrutiny (or more) that we’ve seen with apps like TikTok.

By @jyap - 4 months
This 236B model came out around September 6th.

DeepSeek-V2.5 is an upgraded version that combines DeepSeek-V2-Chat and DeepSeek-Coder-V2-Instruct.

From: https://huggingface.co/deepseek-ai/DeepSeek-V2.5

By @TZubiri - 4 months
https://www.youtube.com/watch?v=OW-reOkee1Y (sorry for the shitty source)

A word of advice on advertising low-cost alternatives.

'The weaknesses make your low cost believable. [..] If you launched Ryan Air and you said we are as good as British Airways but we are half the price, people would go "it does not make sense"'

By @khanan - 4 months
Did you try asking it whether Winnie the Pooh looks like the president of China?

By @zone411 - 4 months
In my NYT Connections benchmark, it hasn't performed well: https://github.com/lechmazur/nyt-connections/ (see the table).

By @DrPhish - 4 months
I run it at home at q8 on my dual Epyc server. I find it to be quite good, especially when you host it locally and are able to tweak all the settings to get the kind of results you need for a particular task.

By @gdevenyi - 4 months
What does open source mean here? Where's the code? The weights?

By @patrickhogan1 - 4 months
It’s cheaper, but where do you get the initial free credits? It seems most models get such a boost and lock-in from the initial free credits.

By @nextworddev - 4 months
Where are the servers hosted, and is there any proof that the data doesn’t cross overseas to China?

By @Alifatisk - 4 months
Oh wow, it almost beats Claude 3 Opus!

By @ziofill - 4 months
What about comparisons to Claude 3.5? Sneaky.

By @BoNour - 4 months
Not bad for a 250B model; it would be more impressive if, with more fine-tuning, it matched the performance of GPT-4.

By @evil_yam - 4 months
Open model, not open-source model.

By @nprateem - 4 months
As in significantly worse than..?

By @Giorgi - 4 months
In what world is this "comparable"? Looks like another Chinese ChatGPT "alternative" that is crap.

By @yieldcrv - 4 months
tl;dr not even close to closed-source text-only models, and a lightyear behind the other 3 senses these multimodal ones have had for a year

just a personal benchmark I follow, the UX on locally run stuff has diverged vastly

By @bionhoward - 4 months
Sadly it’s just as useless as OpenAI's models, because the terms of use read: “3.6 You will not use the Services for the following improper purposes: 4) Using the Services to develop other products and services that are in competition with the Services (unless such restrictions are illegal under relevant legal norms).”

For the billionth time, there are zero products and services which are NOT in competition with general intelligence. Therefore, this kind of clause simply begs for malicious compliance…go use something else.