October 24th, 2024

Complete the Un-Completable: The State of AI Completion in JetBrains IDEs

JetBrains has improved AI code completion in its IDEs with both local and cloud-based methods, introducing a reworked pipeline in the 2024.2 release that improves speed, accuracy, and the overall user experience based on user feedback.


JetBrains has made significant advancements in AI code completion within its IDEs, enhancing the coding experience for developers. The company offers two primary methods for AI code completion: Local Full Line Code Completion, which runs directly on the user's machine for quick, context-aware suggestions, and cloud-based AI code completion, which uses cloud resources for more complex tasks.

Recent updates have improved the user experience, with metrics showing millions of completions daily and a high acceptance rate. The 2024.2 release introduced a reworked cloud completion pipeline that uses in-house large language models (LLMs) tailored for code completion, resulting in faster and more accurate suggestions. Key improvements include highlighting of suggestions for better readability, partial acceptance of suggestions, and enhanced project awareness for more relevant code blocks.

User feedback has been instrumental in shaping these updates, and JetBrains plans to continue refining both local and cloud completions, expanding language support, and improving the overall user experience.

- JetBrains has enhanced AI code completion in its IDEs, focusing on speed and accuracy.

- The 2024.2 release features a new cloud completion pipeline and in-house LLMs.

- Improvements include better readability of suggestions and partial acceptance options.

- User feedback has significantly influenced the development of these features.

- Future updates will focus on expanding language support and refining user experience.

5 comments
By @heroprotagonist - 7 months
I am very hopeful and have enough faith in JetBrains to believe that they will ensure that any added capabilities for completion will be available to ALL AI plugins on their marketplace.

I would be hugely discouraged if I ever discovered that part of their push to get developers using _their_ AI models and service was to lock out any competition from being able to offer an alternative.

There's been a lag between some of the fancy features enabled by VSCode-based tools and plugins like Cursor, Void, etc., and their equivalents becoming available in JetBrains IDEs, due to the completion limitations.

I tried those tools. But I love my PyCharm IDE. When Continue made a configurable plugin that would let me hook it into three different LLMs at once (for different contexts of completion), I decided to stick with PyCharm instead of investing the effort to adapt to VSCode.

Those plugins are only going to improve if additional capabilities in this area are exposed by the IDE. This will be fantastic, but it will mean that JetBrains' own features and AI service will be competing with its plugin ecosystem, which includes paid plugins with their associated lock-in models, and open plugins that let users choose whatever AI model works best for their use case.

I have faith that JetBrains will continue doing the right thing here until I see evidence otherwise. For any other company, I would be a bit concerned to see what could be perceived as a conflict of interest between themselves and their users. But JetBrains is smart enough to know that developer satisfaction is the primary goal that drives sales, and that limiting ecosystem capabilities to gain an advantage for its own service would be contrary to that effort.

By @solardev - 7 months
Rather than "acceptance", I wonder if there's a better metric like "committed to git" (harder to measure, of course) or maybe "correctly finishes a code block from this standardized test suite".

I'll often accept a suggestion just to test it, seeing if the IDE pops up an error a few seconds later. Or else running it in dev and seeing if it actually works. Probably 50% of the time I'll still end up significantly modifying the code or rewriting it from scratch.

By @pizza - 7 months
What I would realllly like is AI autocomplete that is somehow sensitive to my mental map of the code I'm writing. There's too often a temptation when writing something brand new to get to a semi working proof of concept as fast as possible with an LLM, but it's now like 1000 lines of code that I really don't understand.

Previously I used to obey a rule that if I ever use LLM generated code, I have to manually type it in - so that I myself have the familiarity of every symbol in the code. This helped considerably reduce that problem. But I stopped doing that out of laziness.

By @tandr - 7 months
It would be really nice if they allowed integration with local/same-network completion backends (say, ollama)... How many companies explicitly prohibit any and all of these tools because of the "our code cannot leave the internal network" rule (which, tbh, does make sense in a lot of cases)?
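The local-only setup described above can already be sketched against Ollama's HTTP API. This is a minimal illustration, assuming a default Ollama install listening on localhost:11434 and a locally pulled code model; the model name and token limit are illustrative, not anything JetBrains ships:

```python
import json
import urllib.request

# Sketch only: assumes a default local Ollama server; code never leaves the machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_completion_request(prefix: str, model: str = "codellama:7b-code") -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": prefix,
        "stream": False,                  # return one JSON object instead of a stream
        "options": {"num_predict": 64},   # cap the completion length
    }

def complete(prefix: str) -> str:
    """POST the prompt to the local Ollama server and return its completion text."""
    payload = json.dumps(build_completion_request(prefix)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    print(complete("def fibonacci(n):"))
```

An IDE plugin doing this would call the same endpoint on every completion trigger, so the only network hop is to a machine the company controls.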
By @viraptor - 7 months
Off topic, but I love that they cared enough to update the font to add the "iJ" ligature to make "IntelliJ" in the text match the logo. Just that tiny extra touch.