February 5th, 2025

Gemini 2.0 is now available to everyone

Google has launched Gemini 2.0, featuring the Flash model for all users, a Pro model for coding, and a cost-efficient Flash-Lite model, all with enhanced safety measures and ongoing updates.


Google has announced the general availability of its Gemini 2.0 model, which includes several updates aimed at enhancing performance and usability for developers and users. The Gemini 2.0 Flash model, known for its efficiency and low latency, is now accessible to all users via the Gemini app and API in Google AI Studio and Vertex AI. This model supports multimodal input and has a context window of 1 million tokens. Additionally, an experimental version of Gemini 2.0 Pro has been released, which is optimized for coding tasks and complex prompts, featuring a larger context window of 2 million tokens. The new Gemini 2.0 Flash-Lite model is also introduced, designed to be cost-efficient while maintaining quality. All models are built with safety measures, including reinforcement learning techniques to improve response accuracy and security against potential cyber threats. Google emphasizes its commitment to ongoing improvements and updates for the Gemini 2.0 family of models.
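Since the Flash model is reachable through the Gemini API, a minimal request can be assembled as below. This is an illustrative sketch only: the `v1beta` endpoint path and the `contents`/`parts` payload shape follow Google's public REST interface at the time of writing, but the helper function and `YOUR_API_KEY` placeholder are our own, and the current API reference should be checked before relying on either.

```python
import json

# Base URL for the Gemini REST API (v1beta at the time of writing).
API_ROOT = "https://generativelanguage.googleapis.com/v1beta"

def build_generate_request(model: str, prompt: str, api_key: str):
    """Assemble the URL and JSON body for a generateContent call."""
    url = f"{API_ROOT}/models/{model}:generateContent?key={api_key}"
    body = {"contents": [{"parts": [{"text": prompt}]}]}
    return url, json.dumps(body)

url, body = build_generate_request("gemini-2.0-flash", "Hello", "YOUR_API_KEY")
```

POSTing that body to the URL (with `Content-Type: application/json`) is all the "API access" amounts to; the same model IDs also work through the official client libraries.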

- Gemini 2.0 Flash is now generally available for all users and developers.

- The Gemini 2.0 Pro model is optimized for coding and complex prompts, featuring a 2 million token context window.

- Gemini 2.0 Flash-Lite is introduced as a cost-efficient model with improved quality.

- All models incorporate safety measures to enhance response accuracy and mitigate security risks.

- Google plans to continue updating and improving the Gemini 2.0 models.

AI: What people are saying
The comments on the launch of Google Gemini 2.0 reveal a mix of excitement and frustration among users.
  • Users express confusion over the various models and their availability, particularly regarding the "Gemini Advanced" and subscription details.
  • Many commenters highlight the competitive pricing of Gemini 2.0 models compared to other AI offerings, noting cost-effectiveness.
  • There are mixed reviews on the performance of Gemini 2.0, with some praising its capabilities while others criticize its limitations and quality compared to competitors.
  • Concerns about the naming conventions of the models and the clarity of their functionalities are frequently mentioned.
  • Users are eager for more information on features like coding performance and video processing capabilities.
53 comments
By @singhrac - 2 months
What is the model I get at gemini.google.com (i.e. through my Workspace subscription)? It says "Gemini Advanced" but there are no other details. No model selection option.

I find the lack of clarity very frustrating. If I want to try Google's "best" model, should I be purchasing something? AI Studio seems focused on building an LLM wrapper app, but I just want something to answer my questions.

Edit: what I've learned through Googling: (1) if you search "is gemini advanced included with workspace" you get an AI Overview answer that seems to be incorrect, since they now include Gemini Advanced (?) with every Workspace subscription. (2) A page exists telling you to buy the add-on (Gemini for Google Workspace), but clicking on it says it is no longer available because of the above. (3) gemini.google.com says "Gemini Advanced" (no idea which model) at the top, but gemini.google.com/advanced redirects me to what I have deduced is the consumer site (?), which tells me that Gemini Advanced is another $20/month.

The problem, Google PMs if you're reading this, is that the gemini.google.com page does not have ANY information about what is going on. What model is this? What are the limits? Do I get access to "Deep Research"? Does this subscription give me something in aistudio? What about code artifacts? The settings option tells me I can change to dark mode (thanks!).

Edit 2: I decided to use aistudio.google.com since it has a dropdown for me on my workspace plan.

By @mohsen1 - 2 months
> available via the Gemini API in Google AI Studio and Vertex AI.

> Gemini 2.0, 2.0 Pro and 2.0 Pro Experimental, Gemini 2.0 Flash, Gemini 2.0 Flash Lite

3 different ways of accessing the API, more than 5 different but extremely similarly named models. Benchmarks only comparing to their own models.

Can't be more "Googley"!

By @pmayrgundter - 2 months
I tried voice chat. It's very good, except for the politics

We started talking about my plans for the day, and I said I was making chili. G asked if I have a recipe or if I needed one. I said, I started with Obama's recipe many years ago and have worked on it from there.

G gave me a form response that it can't talk politics.

Oh, I'm not talking politics, I'm talking chili.

G then repeated form response and tried to change conversation, and as long as I didn't use the O word, we were allowed to proceed. Phew

By @leetharris - 2 months
These names are unbelievably bad. Flash, Flash-Lite? How do these AI companies keep doing this?

Sonnet 3.5 v2

o3-mini-high

Gemini Flash-Lite

It's like a competition to see who can make the goofiest naming conventions.

Regarding model quality, we experiment with Google models constantly at Rev and they are consistently the worst of all the major players. They always benchmark well and consistently fail in real tasks. If this is just a small update to the gemini-exp-1206 model, then I think they will still be in last place.

By @silvajoao - 2 months
Try out the new models at https://aistudio.google.com.

It's a great way to experiment with all the Gemini models that are also available via the API.

If you haven't yet, try also Live mode at https://aistudio.google.com/live.

You can have a live conversation with Gemini and have the model see the world via your phone camera (or see your desktop via screenshare on the web), and talk about it. It's quite a cool experience! It made me feel the joy of programming and using computers that I had felt so many times before.

By @serjester - 2 months
For anyone parsing PDFs, this is a game changer in terms of cost; I wrote a blog post about it [1]. I think a lot of people were nervous about pricing since the beta was released, and although it's slightly more expensive than 1.5 Flash, it is still incredibly cost-effective. Looking forward to benchmarking the Lite version as well.

[1] https://www.sergey.fyi/articles/gemini-flash-2

By @simonw - 2 months
I upgraded my llm-gemini plugin to handle this, and shared the results of my "Generate an SVG of a pelican riding a bicycle" benchmark here: https://simonwillison.net/2025/Feb/5/gemini-2/

The pricing is interesting: Gemini 2.0 Flash-Lite is 7.5c/million input tokens and 30c/million output tokens - half the price of OpenAI's GPT-4o mini (15c/60c).

Gemini 2.0 Flash isn't much more: 10c/million for text/image input, 70c/million for audio input, 40c/million for output. Again, cheaper than GPT-4o mini.
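These per-token prices are easy to sanity-check with a little arithmetic. A sketch, with the rates hard-coded from the figures quoted above (USD per million tokens, and of course subject to change by the vendors):

```python
# Prices in USD per million tokens (input, output), as quoted above.
PRICES = {
    "gemini-2.0-flash-lite": (0.075, 0.30),
    "gpt-4o-mini":           (0.15,  0.60),
}

def cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Total cost in USD for a given token usage."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A hypothetical 10M-input / 2M-output monthly workload:
lite = cost("gemini-2.0-flash-lite", 10_000_000, 2_000_000)  # 0.75 + 0.60 = 1.35
mini = cost("gpt-4o-mini", 10_000_000, 2_000_000)            # 1.50 + 1.20 = 2.70
```

With both input and output rates at exactly half of GPT-4o mini's, Flash-Lite comes out at half the total cost for any mix of input and output tokens.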

By @jbarrow - 2 months
I've been very impressed by Gemini 2.0 Flash for multimodal tasks, including object detection and localization[1], plus document tasks. But the 15 requests per minute limit was a severe limiter while it was experimental. I'm really excited to be able to actually _do_ things with the model.

In my experience, I'd reach for Gemini 2.0 Flash over 4o in a lot of multimodal/document use cases. Especially given the differences in price ($0.10/million input and $0.40/million output versus $2.50/million input and $10.00/million output).

That being said, Qwen2.5 VL 72B and 7B seem even better at document image tasks and localization.

[1] https://notes.penpusher.app/Misc/Google+Gemini+101+-+Object+...

By @starchild3001 - 2 months
I use all the top-of-the-line models every day. Not for coding, but for general "cognitive" tasks like research, thinking, analysis, writing, etc. What Google calls Gemini Pro 2.0 has been my favorite model for the past couple of months. I think o1/4o come pretty close. Those are roughly equals, with a slight preference for Gemini. Claude has clearly fallen behind. DeepSeek is intriguing; it occasionally excels where others won't. For consistency's sake, Gemini Pro 2.0 is amazing.

I highly recommend using it via https://aistudio.google.com/. The Gemini app has some additional bells and whistles, but for some reason quality isn't always on par with AI Studio. The Gemini app also seems to have more filters; it is more shy about answering controversial topics. Just some general impressions.

By @gwern - 2 months
2.0 Pro Experimental seems like the big news here?

> Today, we’re releasing an experimental version of Gemini 2.0 Pro that responds to that feedback. It has the strongest coding performance and ability to handle complex prompts, with better understanding and reasoning of world knowledge, than any model we’ve released so far. It comes with our largest context window at 2 million tokens, which enables it to comprehensively analyze and understand vast amounts of information, as well as the ability to call tools like Google Search and code execution.

By @Ninjinka - 2 months
Pricing is CRAZY.

Audio input is $0.70 per million tokens on 2.0 Flash, $0.075 for 2.0 Flash-Lite and 1.5 Flash.

For gpt-4o-mini-audio-preview, it's $10 per million tokens of audio input.
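Taking the figures in this comment at face value, the gap is simple to quantify (rates hard-coded from the comment, USD per million audio input tokens; actual vendor pricing may change):

```python
# USD per million audio input tokens, as quoted above.
GEMINI_2_FLASH_AUDIO = 0.70
GPT_4O_MINI_AUDIO_PREVIEW = 10.00

# gpt-4o-mini-audio-preview costs roughly 14x as much per audio input token.
ratio = GPT_4O_MINI_AUDIO_PREVIEW / GEMINI_2_FLASH_AUDIO
```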

By @butlike - 2 months
Flash is back, baby.

Next release should be called Gemini Macromedia

By @leonidasv - 2 months
That 1M-token context window alone is going to kill a lot of RAG use cases. Crazy to see how we went from 4K-token context windows (GPT-3.5, 2023) to 1M in less than two years.
By @esafak - 2 months
Benchmarks or it didn't happen. Anything better than https://lmarena.ai/?leaderboard?

My experience with the Gemini 1.5 models has been positive. I think Google has caught up.

By @msuvakov - 2 months
Gemini 2.0 works great with large context. A few hours ago, I posted a ShowHN about parsing an entire book in a single prompt. The goal was to extract characters, relationships, and descriptions that could then be used for image generation:

https://news.ycombinator.com/item?id=42946317

By @mtaras - 2 months
Updates to the Gemini models will always be exciting to me because of how generous the free API tier is; I barely run into limits for personal use. The huge context window is a big advantage for personal projects, too.
By @staticman2 - 2 months
I have a fun query in AI Studio where I pasted an 800,000-token Wuxia martial arts novel and asked it worldbuilding questions.

1.5 Pro and the old 2.0 Flash Experimental generated responses in AI Studio, but the new 2.0 models respond with blank answers.

I wonder if it's timing out, or if some newer censorship model is preventing 2.0 from answering my query. The novel is PG-13 at most, but references to "bronze-skinned southern barbarians", "courtesans", "drugs", "demonic sects", and murder could, I guess, set it off.

By @sho_hn - 2 months
Anyone have a take on how the coding performance (quality and speed) of the 2.0 Pro Experimental compares to o3-mini-high?

The 2 million token window sure feels exciting.

By @bionhoward - 2 months
I always get to, “You may not use the Services to develop models that compete with the Services (e.g., Gemini API or Google AI Studio).” [1] and exit

- [1] https://ai.google.dev/gemini-api/terms

By @andrewstuart - 2 months
I’ve tried Gemini many times and never found it to be useful at all compared to OpenAI and Claude.
By @EVa5I7bHFq9mnYK - 2 months
I am dreaming about an aggregator site where one can select any model - openai, gemini, claude, llama, qwen... and pay API rate + some profit margin for any query. Without having to register with each AI provider, and without sharing my PII with them.
By @bwb - 2 months
I am blown away by how bad this is compared to the competition. What is Google doing? I asked it for something simple like 6 results and it gave me 3, not to mention hallucinated data on queries that ChatGPT and others handle fine.
By @crowcroft - 2 months
Worth noting that with 2.0 they're now offering free search tool use for 1,500 queries per day.

Their search costs 7x Perplexity Sonar's, but I imagine a lot of people will start with Google given they can get a pretty decent amount of search for free now.

By @CSMastermind - 2 months
Is it still the case that it doesn't really support video input?

As in I have a video file I want to send it to the model and get a response about it. Not their 'live stream' or whatever functionality.

By @m_ppp - 2 months
I'm interested to know how well video processing works here. Ran into some problems when I was using vertex to serve longer youtube videos.
By @mmanfrin - 2 months
It sure is cool that people who joined Google's Pixel Pass continue to be unable to give them money to access Advanced.
By @tmaly - 2 months
I am on the iOS app and I see Gemini 2.0 and Gemini 1.5 as options in the drop-down. I am on the free tier.
By @barrenko - 2 months
If you're Google and you're reading, please offer finetuning on multi-part dialogue.
By @SuperHeavy256 - 2 months
it sucks btw. I tried scheduling an event in google calendar through gemini, and it got the date wrong, the time wrong, and the timezone wrong. it set an event that's supposed to be tomorrow to next year.
By @dontseethefnord - 2 months
Does it still say that water isn’t frozen at 0 degrees Fahrenheit?
By @ryao - 2 months
Google should release the weights under a MIT license like Deepseek.
By @yogthos - 2 months
I wish the blog mentioned whether they backported DeepSeek ideas into their model to make it more efficient.
By @eamag - 2 months
where can I try coding with it? Is it available in cursor/copilot?
By @Alifatisk - 2 months
Exciting news to see these models released in the Gemini app. I wish my preference for which model to default to were saved across sessions.

How many tokens can gemini.google.com handle as input? How large is the context window before it forgets? A quick search said it's a 128k-token window, but that applies to Gemini 1.5 Pro; how is it now?

My assumption is that "Gemini 2.0 Flash Thinking Experimental" is just "Gemini 2.0 Flash" with reasoning, and "Gemini 2.0 Flash Thinking Experimental with apps" is just "Gemini 2.0 Flash Thinking Experimental" with access to the web and Google's other services, right? So sticking to "Gemini 2.0 Flash Thinking Experimental with apps" should be the optimal choice.

Is there any reason why Gemini 1.5 Flash is still an option? It feels like it should be removed unless it does something better than the others.

I have difficulty understanding which tasks each variant of the Gemini model is best suited for. Looking at aistudio.google.com, they have already updated the available models.

Is "Gemini 2.0 Flash Thinking Experimental" on gemini.google.com just "gemini-exp-1206", or is it the same as "Gemini 2.0 Flash Thinking Experimental" on aistudio.google.com?

I have a note in my notes app where I rank every LLM on instruction following and math; to this day, I've had difficulty knowing where to place each Gemini model. I know there is a little popup when you hover over each model that tries to explain what it does and which tasks it is best suited for, but these explanations have been very vague to me. And I haven't even started on the Gemini Advanced series, or whatever I should call it.

The available models on AI Studio are now:

- Gemini 2.0 Flash (gemini-2.0-flash)

- Gemini 2.0 Flash Lite Preview (gemini-2.0-flash-lite-preview-02-05)

- Gemini 2.0 Pro Experimental (gemini-2.0-pro-exp-02-05)

- Gemini 2.0 Flash Thinking Experimental (gemini-2.0-flash-thinking-exp-01-21)

If I had to sort these from most likely to fulfill my need to least likely, then it would probably be:

gemini-2.0-flash-thinking-exp-01-21 > gemini-2.0-pro-exp-02-05 > gemini-2.0-flash-lite-preview-02-05 > gemini-2.0-flash

Why? Because AI Studio describes gemini-2.0-flash-thinking-exp-01-21 as able to tackle the most complex and difficult tasks, while gemini-2.0-pro-exp-02-05 and gemini-2.0-flash-lite-preview-02-05 only differ in how much context they can handle.

So with that out of the way, how does gemini-2.0-flash-thinking-exp-01-21 compare against o3-mini, Qwen 2.5 Max, Kimi k1.5, DeepSeek R1, DeepSeek V3, and Sonnet 3.5?

My current list of benchmarks includes artificialanalysis.ai, lmarena.ai, livebench.ai, and aider.chat's polyglot benchmark, but still, the whole Gemini suite is difficult to reason about and sort out.

I feel like this trend of having many different models with the same name but different suffixes is starting to become an obstacle to my mental model.

By @DimuP - 2 months
Finally, it's here!
By @adammarples - 2 months
how many "r"'s in strrawberry

stumped it

By @karaterobot - 2 months
> Gemini 2.0 is now being forced on everyone.
By @foresto - 2 months
Not to be confused with Project Gemini.

https://geminiprotocol.net/

By @darthapple76 - 2 months
is this AGI yet?
By @denysvitali - 2 months
When will they release Gemini 2.0 Pro Max?
By @gallerdude - 2 months
Is there really no standalone app, like ChatGPT/Claude/DeepSeek, available yet for Gemini?
By @mistrial9 - 2 months
Why does no one mention that you must log in with a Google account, with all of the record keeping, cross-correlation, and third-party access implied there?
By @user3939382 - 2 months
I wonder how common this is, but my interest in this product is 0 simply because my level of trust and feeling of goodwill for Google almost couldn’t be lower.
By @butz - 2 months
Does "everyone" here means "users with google accounts"?
By @rzz3 - 2 months
It’s funny, I’ve never actually used Gemini and, though this may be incorrect, I automatically assume it’s awful. I assume it’s awful because the AI summaries at the top of Google Search are so awful, and that’s made me never give Google AI a chance.
By @sylware - 2 months
This is a lie, since I don't have a Google account and can no longer search on Google since noscript/basic (X)HTML browser interop was broken a few weeks ago.
By @lowmagnet - 2 months
Here's me not using Gemini because the only use case I have for the old Assistant is setting a timer, and there are reports that Gemini is randomly incapable of setting one.