October 29th, 2024

How I write code using Cursor

Cursor is a coding tool built on a fork of Visual Studio Code that enhances productivity with features like tab completion and inline editing. Despite some limitations, it positively impacts coding workflows and experimentation.


Cursor is a coding tool that integrates Large Language Model (LLM) features into a fork of Visual Studio Code, aimed at enhancing productivity for experienced developers. Tom Yedwab, who has extensive coding experience, shares his insights on whether Cursor is a valuable tool or just a trend. He highlights its key features, including tab completion, inline editing, a chat sidebar, and a composer for cross-file refactoring. The tab completion feature stands out for its ability to suggest code completions and navigate edits efficiently, significantly reducing the time spent on repetitive tasks. However, Yedwab notes some limitations, such as occasional incorrect suggestions and the need for better adherence to coding styles. He also discusses the .cursorrules file, which can help the LLM understand coding standards within a project. Yedwab emphasizes that Cursor has changed his coding workflow, making him less reliant on external libraries and more willing to experiment with unfamiliar languages. He appreciates the tool's ability to facilitate quick iterations and prototyping, allowing him to focus on high-level problem-solving rather than getting bogged down in boilerplate code. Overall, while Cursor has its drawbacks, Yedwab finds it a beneficial addition to his coding toolkit.

- Cursor integrates LLM features into Visual Studio Code to enhance coding productivity.

- Key features include tab completion, inline editing, a chat sidebar, and a composer for cross-file refactoring.

- The tab completion feature is particularly effective for reducing repetitive coding tasks.

- Yedwab notes limitations in suggestion accuracy and adherence to coding styles.

- The tool has transformed his workflow, reducing reliance on external libraries and encouraging experimentation with new languages.

AI: What people are saying
The comments on the article about Cursor reveal a mix of opinions regarding its effectiveness and impact on coding practices.
  • Many users report significant productivity boosts when using Cursor, particularly for boilerplate code and repetitive tasks.
  • Critics argue that AI tools like Cursor can produce average or suboptimal code, especially in complex scenarios, and may lead to a lack of critical thinking.
  • Some users express concerns about the potential for AI-generated code to create maintenance challenges in large codebases.
  • There is a divide between those who embrace AI tools for enhancing their workflow and those who feel it diminishes their coding skills and enjoyment.
  • Concerns about data privacy and the implications of using AI tools in corporate environments are also prevalent among users.
70 comments
By @CrendKing - 6 months
I've been using AI to solve isolated problems, mainly as a replacement for a search engine, specifically for programming. I'm still not convinced by the "write a whole block of code for me" type of use case. Here are my arguments against the videos from the article.

1. Snake case to camelCase. Even without AI we can already complete these tasks easily. VSCode itself has a "Transform to Camel Case" command for selections. It is nice that the AI can figure out which text to transform based on context, but it's not too impressive. I could select one ":", use "Select All Occurrences", press left, then Ctrl+Shift+Left to select all the keys.
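
To illustrate how mechanical that transformation is, here is a rough Python sketch (hypothetical, not from the comment; the regex assumes the snake_case names appear as keys before a colon, as in the article's example):

    import re

    def snake_to_camel(name: str) -> str:
        """Convert an under_score identifier to camelCase."""
        head, *rest = name.split("_")
        return head + "".join(part.capitalize() for part in rest)

    def convert_keys(source: str) -> str:
        """Rewrite every snake_case identifier that appears before a ':'."""
        return re.sub(
            r"\b([a-z]+(?:_[a-z0-9]+)+)\b(?=\s*:)",
            lambda m: snake_to_camel(m.group(1)),
            source,
        )

    print(convert_keys("user_name: str\ncreated_at: datetime"))
    # userName: str
    # createdAt: datetime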

2. Generate boilerplate from documentation. Boilerplate is tedious, but not really time-consuming. How many of you spend 90% of your time writing boilerplate instead of the core logic of the project? If a language/framework (Java used to be one, not sure about now) requires me to spend that much time on boilerplate, that's a language to be ditched/fixed.

3. Turn a problem description into a block of concurrency code. Unlike the boilerplate, this code is more complicated. If I already know the area, I don't need AI's help to begin with. If I don't, how can I trust the generated code to be correct? It could miss a corner case that my question didn't specify and that I don't yet know exists. In the end, I still need to spend time learning Python concurrency, and then I'll be writing the same code myself in no time.

In summary, my experience with AI is that if the question is easy (e.g. it's easy to find the exact same question on StackOverflow), the answer is highly accurate. But if it is a unique question, the accuracy drops quickly. And it is the latter case that we spend most of our time on.

By @cynicalpeace - 6 months
It's amazing how many of my colleagues don't use Cursor simply because they haven't taken the 10 minutes to set it up.

It's amazing how many naysayers there are about Cursor. There are many here and they obviously don't use Cursor. I know this because they point out pitfalls that Cursor barely runs into, and their criticism is not about Cursor, but about AI code in general.

Some examples:

"I tried to create a TODO app entirely with AI prompts" - Cursor doesn't work like that. It lets you take the wheel at any moment because it's embedded in your IDE.

"AI is only good for reformatting or boilerplate" - I copy over my boilerplate. I use Cursor for brand new features.

"Sonnet is same as old-timey google" - lol Google never generated code for you in your IDE, instantly, in the proper place (usually).

"the constantly changing suggested completions seem really distracting" - You don't need to use the suggested completions. I barely do. I mostly use the chat.

"IDEs like cursor make you feel less competent" - This is perhaps the strongest argument, since my quarrel is simply philosophical. If you're executing better, you're being more competent. But yes some muscles atrophy.

"My problem with all AI code assistants is usually the context" - In Cursor you can pass in the context or let it index/search your codebase for the context.

You all need to open your minds. I understand change is hard, but this change is WAY better.

Cursor is a tool, and like any tool you need to know how to use it. Start with the chat. Start by learning when/what context you need to pass into chat. Learn when Cmd+K is better. Learn when to use Composer.

By @friggeri - 6 months
I recently started using Cursor for all my typescript/react personal projects and the increase in productivity has been staggering. Not only has it helped me execute way faster, similar to the OP I also find that it prevents me from getting sidetracked by premature abstraction/optimization/refactoring.

I recently went from an idea for a casual word game (aka Wordle) to a fully polished product in about 2 hours, which would have taken me 4 or 5 times that if I hadn't used Cursor. I estimate that 90% of the time was spent thinking about the product, directing the AI, and testing, and about 10% of the time actually coding.

By @mherrmann - 6 months
In my experience, Cursor writes average code. This makes sense, if you think about it. The AI was trained on all the code that is publicly available. This code is average by definition.

I'm below average in a lot of programming languages and tools. Cursor is extremely useful there because I don't have to spend tens of minutes looking up APIs or language syntax.

On the other hand, in areas I know more about, I feel that I can still write better code than Cursor. This applies to general programming as well. So even if Cursor knows exactly how to write the syntax and which function to invoke, I often find the higher-level code structure it creates sub-optimal.

Overall, Cursor is an extremely useful tool. It will be interesting to see whether it will be able to crawl out of the primordial soup of averages.

By @benreesman - 6 months
I’m doing an experiment in this in real time: I’ve got a bunch of top-flight junior folks, all former Jane and Google and Galois and shit, but all like 24.

I’ve also been logging every interaction with an LLM and the exit status of the build on every mtime of every language mode file and all the metadata: I can easily plot when I lean on the thing and when I came out ahead, I can tag diffs that broke CI. I’m measuring it.

My conclusion is that I value LLMs for coding in exactly the same way that the kids do: you have to break Google in order for me to give a fuck about Sonnet.

LLMs seem like magic unless you remember when search worked.

By @ciconia - 6 months
Watching the videos in the article, the constantly changing suggested completions seem really distracting.

Personally, I find this kind of workflow totally counter-productive. My own programming workflow is ~90% mental work / doing sketches with pen & paper, and ~10% writing the code. When I do sit down to write the code, I know already what I want to write, don't need suggestions.

By @taldo - 6 months
Cursor has been an enabler for unfamiliar corners of development. Mind you, it's not a foolproof tool that writes correct code on the first try or anything close to that.

I've been in compilers, storage, and data backends for 15ish years, and had to do a little project that required recording audio clips in a browser and sending them over a websocket. Cursor helped me do it in about 5 minutes, while it would've taken at least 30 min of googling to find the relevant keywords like MediaStream and MediaRecorder, learn enough to whip something up, fail, then try to fix it until it worked.

Then I had to switch to streaming audio in near-realtime... here it wasn't as good: it tried sending segments of MediaRecorder audio which are not suitable for streaming (because of media file headers and stuff). But a bit of Googling, finding out about Web Audio APIs and Audio Worklet, and a bit of prompting, and it basically wrote something that almost worked. Sure it had some concurrency bugs like reading from the same buffer that it's overwriting in another thread. But that's why we're checking the generated code, right?

By @vishal-padia - 6 months
In the article, you mentioned that you've been writing code for 36 years, so don't IDEs like Cursor make you feel less competent? Meaning, I loved the process of scratching my head over a problem and then coming to a solution, but now we have AI agents solving the problems and optimizing code, which takes the fun out of it.
By @myflash13 - 6 months
All of the examples given in the article are contrived, textbook-style examples. Real world projects are far more messy. I want someone to talk about their flow with Cursor on a mature codebase in production with lots of interlaced components and abstractions.

I have a feeling that blindly building things with AI will actually lead to incomprehensible monstrous codebases that are impossible to maintain over the long run.

Read “Programming as Theory Building” by Peter Naur. Programming is 80% theory-in-the-mind and only about 20% actual code.

Here's an actual example of a task I have at work right now that AI is almost useless in helping me solve. "I'm working with 4 different bank APIs, and I need to simplify the current request and data model so that data stored in disparate sources are unified into one SQL table called 'transactions'". AI can't even begin to understand this request, let alone refactor the codebase to solve it. The end result should have fewer lines of code, not more, and it requires a careful understanding of multiple APIs and careful data modelling design and mapping where a single mistake could result in real financial damage.
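
To make the shape of that task concrete, here is a rough, hypothetical sketch of the kind of unification layer being described (bank names and payload fields are invented for illustration; the real task involves four APIs and far more care around amounts, signs, and idempotency):

    from dataclasses import dataclass
    from datetime import date
    from decimal import Decimal
    from typing import Any, Callable

    @dataclass
    class Transaction:
        """Unified row for the single 'transactions' SQL table."""
        bank: str
        external_id: str
        booked_on: date
        amount: Decimal          # positive = credit, negative = debit
        currency: str
        description: str

    # One small adapter per bank API, each mapping that API's payload
    # shape onto the unified model. Field names here are invented.
    def from_bank_a(raw: dict[str, Any]) -> Transaction:
        return Transaction(
            bank="bank_a",
            external_id=str(raw["txn_id"]),
            booked_on=date.fromisoformat(raw["value_date"]),
            amount=Decimal(str(raw["amount"])),
            currency=raw["ccy"],
            description=raw.get("narrative", ""),
        )

    def from_bank_b(raw: dict[str, Any]) -> Transaction:
        return Transaction(
            bank="bank_b",
            external_id=str(raw["id"]),
            booked_on=date.fromisoformat(raw["bookingDate"]),
            amount=Decimal(str(raw["transactionAmount"]["amount"])),
            currency=raw["transactionAmount"]["currency"],
            description=raw.get("remittanceInformation", ""),
        )

    ADAPTERS: dict[str, Callable[[dict[str, Any]], Transaction]] = {
        "bank_a": from_bank_a,
        "bank_b": from_bank_b,
    }

The comment's point stands: the hard part is deciding on this mapping across four real APIs and getting the data modelling right, not typing the adapters.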

By @ertucetin - 6 months
I also found myself feeling a bit dumb after using Copilot for some time. It felt like I didn’t have to know the API, and it just auto-completed for me. Then I realized I was starting to forget everything and disabled Copilot. Now, when I need something, I ask ChatGPT (like searching on Stack Overflow).
By @mattxxx - 6 months
Here are a few Cursor perks:

  1. Auto-complete makes me type ~20% faster (I type 100+ WPM)
  2. Composer can work across a few files simultaneously to update something (e.g. updating a chrome extension's manifest while proposing a code change)
  3. Write something that you know _exactly_ how it should work but are too lazy to author it yourself (e.g. Write a function that takes 2 lists of string and pair-wise matches the most similar. Allow me to pass the similarity function as a parameter. Use openai embedding distance to find most similar pairings between these two results)
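
A minimal sketch of what that third kind of request might produce (hypothetical code, not Cursor's actual output), with the similarity function passed in as a parameter; an embedding-based similarity, e.g. cosine distance over OpenAI embeddings, could be plugged in where the toy one is:

    from typing import Callable, List, Tuple

    def match_most_similar(
        left: List[str],
        right: List[str],
        similarity: Callable[[str, str], float],
    ) -> List[Tuple[str, str, float]]:
        """Greedily pair each string in `left` with its most similar
        unused string in `right`, using the supplied similarity function."""
        pairs: List[Tuple[str, str, float]] = []
        remaining = list(right)
        for item in left:
            if not remaining:
                break
            best = max(remaining, key=lambda candidate: similarity(item, candidate))
            pairs.append((item, best, similarity(item, best)))
            remaining.remove(best)
        return pairs

    # Toy similarity function; swap in an embedding-distance helper instead.
    def shared_prefix_len(a: str, b: str) -> float:
        n = 0
        for x, y in zip(a, b):
            if x != y:
                break
            n += 1
        return float(n)

    print(match_most_similar(["apple pie", "carrot"], ["apples", "car"], shared_prefix_len))
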
By @elashri - 6 months
My problem with all AI code assistants is usually the context. I am not sure how Cursor fares in this regard, but I always struggle to feed the model enough of the code project for it to be useful to me on a level beyond line-by-line suggestions (which Copilot does anyway). I don't have experience with Cursor or Cody (another alternative) and how they tackle this problem by using embeddings (which I suppose have similar context limits).
By @abricq - 6 months
This last month I decided to try the JetBrains equivalent of Cursor, for their IDEs (https://www.jetbrains.com/ai/). It's a plugin well integrated into the code editor that you can easily summon.

I work in Rust and I had to start working with several new libraries this month. One example is `proptest-rs`, a Rust property-testing library that defines a whole new grammar for defining tests. I am 100% sure that I spent much less time getting on-boarded with the library's best practices and usages. I just quickly went through their book (to learn the vocabulary) and asked the AI to generate the code itself. I was very surprised that it did not make any mistakes, considering the sort of weird custom grammar of the lib. I will at least keep trying it for another month.

By @easyKL - 6 months
Recent interview with the Cursor developers: https://lexfridman.com/cursor-team
By @qwertox - 6 months
I tried Cursor, and while I was extremely surprised by its ability to do multiline edits in the middle of lines, I could not get myself to accept how aggressive it was in trying to autocomplete/auto-edit segments of code while I was just typing.

It was like a second person in the editor with a mind of its own, constantly touching my code even when it should have left it alone. I found myself undoing stuff it made all the time.

By @gaploid - 6 months
Using ChatGPT and AI assistants over the past year, here are my best use cases:

- Generating wrappers and simple CRUD APIs on top of database tables, provided only with a DDL of the tables.

- Optimizing SQL queries and schemas, especially for less familiar SQL dialects—extremely effective.

- Generating Swagger comments for API methods. A joy.

- Re-creating classes or components based on similar classes, especially with Next.js, where the component mechanics often make this necessary.

- Creating utility methods for data conversion or mapping between different formats or structures.

- Assisting with CSS and the intricacies of HTML for styling.

- GPT4 o1 is significantly better at handling more complex scenarios in creation and refactoring.

Current challenges based on my experience:

- LLMs lack critical thinking; they tend to accommodate the user's input even if the question is flawed or lacks a valid answer.

- There’s a substantial lack of context in most cases. LLMs should integrate deeper with data sampling capabilities or, ideally, support real-time debugging context.

- Challenging to use in large projects due to limited awareness of project structure and dependencies.

By @HarHarVeryFunny - 6 months
Interesting to hear the perspective of an experienced developer using what seems to be the SOTA coding assistant of Cursor/Claude.

I thought his "Changes to my workflow" section was the most interesting, coupled with the fact that coding productivity (churning out lines of code) was not something he found to be a benefit. However, IMO, the workflow changes he found beneficial seem to be a bit questionable in terms of desirability...

1) Having LLM write support libraries/functions from scratch rather than rely on external libraries seems a bit of a double-edged sword. It's good to minimize dependencies and not be affected by changes to external libraries, but OTOH there's probably a lot of thought and debugging that has been put into those external libraries, as well as support for features you may not need today but may tomorrow. Is it really preferable to have the LLM reinvent the wheel using untested code it's written channeling internet sources?

2) Avoiding functions (couched as excessive abstractions) in favor of having the LLM generate repeated copies of the same code seems like a poor idea, and will affect code readability, debugging and maintenance whereby a bugfix in one section is not guaranteed to be replicated in other copies of the same code.

3) Less hesitancy to use unfamiliar frameworks and libraries is a plus in terms of rapid prototyping, as well as coming up to speed with a new framework, but at the same time is a liability since the quality of LLM-generated code is only as good as the person reviewing it for correctness and vulnerabilities. If you are having the LLM generate code using a framework you are not familiar with, then you are at its mercy as to quality, same as if you cut and pasted some code from the internet without understanding it.

I'm not sure we've yet arrived at the best use of "AI" for developer productivity - while it can be used for everything and anything, just as ChatGPT can be asked anything, some uses are going to leverage the best of the underlying technology, while others are going to fall prey to its weaknesses and fundamental limitations.

By @kobe_bryant - 6 months
Just once I'd like to see an article like this from someone who's not currently working on an AI tool (some sort of Khan Academy tutor in this case)
By @baudpunk - 6 months
I recommend that naysayers for technologies like Cursor watch the documentary Jurassic Punk. When comparing the current AI landscape to the era of computer graphics emerging in film, the parallels are pretty staggering to me.

There is a very vocal old guard who are stubborn about ditching their 10,000+ hours master-level expertise to start from zero and adapt to the new paradigm. There is a lot of skepticism. There are a lot of people who take pride in how hard coding should be, and the blood and sweat they've invested.

If you look at AI from 10,000 feet, I think what you'll see is not AGI ruining the world, but rather LLMs limited by regression, eventually training on their own hallucinations, but good enough in their current state to be amazing tools. I think that Cursor, and products like it, are to coding what Photoshop was to artists. There are still people creating oil paintings, but the industry — and the profits — are driven by artists using Photoshop.

Cursor makes coders more efficient, and therefore more profitable, and anyone NOT using Cursor in a hiring pool of people who ARE using it will be left holding the short straw.

If you are an expert level software engineer, you will recognize where Cursor's output is bad, and you will be able to rapidly remediate. That still makes you more valuable and more efficient. If you're an expert level software engineer, and you don't use Cursor, you will be much slower, and it is just going to reduce your value more and more over time.

By @epolanski - 6 months
Some uses I have: e.g. a notepad in Cursor with a predefined set of files and a prompt, for instance to implement Storybook stories and documentation for components I use. I give it the current file I'm working on, and it will generate new files and update documentation files. Similarly for E2E or unit tests, it often suggests cases I would not have thought of.

The people that are negative about these things, because they need to review it, seem to be missing the massive amount of time saved imho.

Many users point to the fact that they spend most of their time thinking; I'm glad for them, but most of the time I spend is gluing APIs, boilerplate, and refactoring, and on those aspects Cursor helps tremendously.

The biggest killer feature that I get from similar tools (I ditched Copilot recently in favor of it) is that they allow me to stay focused and in the flow longer.

I have a tendency to phase out when tasks get too boring or repetitive, or to get stressed out when I can't come up with a solution. Similarly, going on a search engine to find an answer would often put me in a long loop of looking for answers deeply buried in a very long article (you need to help SEO after all, don't you?), and then it would be more likely that I would get distracted by messages on my company chat or social media.

I can easily say that Cursor has made me more productive than I was one year ago.

I feel like the criticism many have comes from the wrong expectation that these tools will do the work for you, whereas they are more about easing the boring and sometimes the hard parts.

By @pama - 6 months
What are the closest Emacs packages and flows for something similar to Cursor? Is the ability to use a tab in this way something that can be simply instructed or finetuned?
By @OtomotO - 6 months
"For example, suppose I have a block of code with variable names in under_score notation that I want to convert to camelCase."

Oh my, oh my... How have I done this all these years before "AI" was a th- hype?

I did it without wasting even a fraction of the CO2 needed for these toys.

"AI" has some usecases, granted. But selling it as the holy grail and again sh•tting on the environment is getting more and more ridiculous by the day.

Humanity, even the smarter part, truly deserves what is coming.

Apes on a space rock

By @socketcluster - 6 months
I think AI is still nowhere near where it needs to be to provide business value in software development.

As an experiment, some time ago, I tried to build a TODO app entirely with AI prompts. I used a special serverless platform on the backend to store the data so that it would persist between page refreshes. I uploaded the platform's frontend components README file to the AI as part of the input.

Anyway, what happened is that it was able to create the TODO app quickly; it was mostly right after the first prompt and the app was storing and loading the TODOs on the server. Then I started asking for small changes like 'Add a delete button to the TODOs'; it got that right. Impressive!

All the code fit in a single file so I kept copying the new code and starting a new prompt to ask for changes... But eventually, in trying to turn it into a real product, it started to break things that it had fixed before and it started to feel like a game of whac-a-mole. Fixing one thing broke another and it often broke the same thing multiple times... I tried to keep conversations longer instead of starting a new one each iteration but the results were the same.

By @anonyfox - 6 months
Personal observation from a heavy LLM codegen user:

The sweet spot seems to be bootstrapping something new from scratch and get all the boilerplate done in seconds. This is probably also where the hype comes from, feels like magic.

But the issue is that once it gets slightly more complicated, things break apart and run into a dead end quickly. For example, yesterday I wanted to build a simple CLI tool in Go (which is outstandingly friendly to LLM codegen as a language + stdlib) that acts as a simple reverse proxy and (re-)starts the original thing in the background on file changes.

AI was able to knock out _something_ immediately that indeed compiled, only it didn't actually work as intended. After lots of iterations back and forth (Claude mostly) the code ballooned in size to figure out what could be the issue, adding all kinds of useless crap that kindof-looks-helpful-but-isn't. After an hour I gave up and went through the whole code manually (a few hundred lines, single file) and spotted the issue immediately (holding a mutex lock that gets released with `defer` doesn't play well with a recursive function call). After pointing that out, the LLM was able to fix it and produced a version that finally worked - still with tons of crap and useless complexity everywhere. And that's a simple, straightforward coding task that can be accomplished in a single file and only a few hundred lines, greenfield style. And all my Claude chat tokens for the day got burned on this, only for me to have to dig in myself at the end.
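
The bug described has a close Python analogue (the original was Go, with the mutex released via `defer`); a minimal sketch with invented names: a non-reentrant lock held across a recursive call deadlocks on the second acquisition.

    import threading

    lock = threading.Lock()  # non-reentrant, like Go's sync.Mutex

    def restart_on_change(depth: int = 0) -> None:
        with lock:                      # released only when this frame exits,
            print("restarting", depth)  # roughly like Go's `defer mu.Unlock()`
            if depth < 1:
                restart_on_change(depth + 1)  # deadlock: lock is still held here

    # restart_on_change()  # would hang; threading.RLock() or restructuring fixes it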

LLMs are great at producing things in small limited scopes (especially boilerplate-y stuff) or refactoring something that already exists, when they have enough context and essentially don't really think about a problem but merely change linguistic details (rewrite text to a different format, ultimately) - it's a large LANGUAGE model after all.

But full blown autonomous app building? Only if you do something that has been done exactly thousands of times before and is simple to begin with. There is lots of business value in that, though. Most programmers at companies don't do rocket science or novel things at all. It won't build any actual novelty - the ideal case would be building an "X for Y" (like Uber for cat-sitting), but never an initial X.

My personal productivity has gone through the roof since GPT-4/Cursor, but I guess I know how/when to use it properly. And developer demand will surge when the wave of LLM-coded startups get their funding and realize the codebase cannot be extended anymore with LLMs due to complexity and the raw amount of garbage in there.

By @artdigital - 6 months
Is Cursor still sending everything you do to their own servers? Last time I looked into it, that’s what was happening which made it an absolute no-go for corporate use.

Using Cody currently with our company enterprise API key

By @knuckleheads - 6 months
I use and am happy with Cursor and Chatgpt. Cursor will write out the syntax for me if I let it, sometimes wrong, sometimes right, but enough to keep a flow going. If there are larger questions, I just flip over to chatgpt and try and suss out what is going on there. Super helpful with tailwind and react, speaking as a dba and backend systems person.
By @cryptica - 6 months
> Subsequently, but very infrequently, I will accept a totally different completion and the previously-declined suggestion will quietly be applied as well.

This sounds like a nightmare.

I think the biggest problem with AI at the moment is that it incorrectly assumes that coding is the difficult part of developing software, but it's actually the easiest part. Debugging broken code is a lot harder and more time consuming than writing new code; especially if it's code that someone else wrote. Also, architecting a system which is robust and resilient to requirement changes is much more challenging than coding.

It boggles the mind that many developers who hate reading and debugging their team members' code love spending hours reading and debugging AI-generated code. AI is literally an amalgamation of other people's code.

By @Hrun0 - 6 months
I have tried both Cursor and Cline (formerly Claude Dev), and I don't see the incredible performance boost from using Cursor that a lot of people describe.
By @komali2 - 6 months
I am the least knowledgeable user of emacs ever, however just some notes as I read this

> For example, suppose I have a block of code with variable names in under_score notation that I want to convert to camelCase. It is sufficient to rename one instance of one variable, and then tab through all the lines that should be updated, including the other related variables.

For me that would be :%s/camel_case/camelCase/gc then yyyyyynyyyy as I confirm each change. Or if it's across a project, then put cursor on word, SPC p % (M-x projectile-replace-regexp), RET on the word, camelCase, yyyynyyynynn as it brings me through all files to confirm. There's probably even better smarter ways than that, I learn "just enough to get it done" and move on, to my continual detriment.

> Many times it will suggest imports when I add a dependency in Python or Go.

I can write a function in python or typescript and if I define a type like:

    function(post: Po
and wait a half second, I'll be given a dropdown of options that include the Post type from Prisma. If I navigate to that (C-J) and press RET, it'll auto-complete the type, and then add the import to the top, either as a new line or included with the import object if something's already being imported from Prisma. The same works for Python, haven't tried other. I'm guessing this functionality comes from LSP? Actually not sure lol. Like I said, least knowledgeable emacs user.

As for boilerplate, my understanding is most emacs people have a bunch of snippets they use, or they use snippet libraries for common languages like tsx or whatever. I don't know how to use these so I don't.

I still intend to try Cursor, since my boss thinks it might improve productivity and thus wants me to see if that's true, and I don't want to be some kind of technophobe. However, I remain skeptical that these built-in tools are any more useful than me just quickly opening a ChatGPT window in my browser and pasting some code in, with the downside of me losing all my bindings.

My spacemacs config: https://github.com/komali2/Configs/blob/master/emacs/.spacem...

By @blixt - 6 months
I have the opposite experience with Chat and Compose. I use the latter much more. The "intelligence level" of the Chat is pretty poor and like the author says, starts with pointless code blocks, and you often end up with AI slop after a few minutes of back and forth.

Meanwhile, the Compose mode gives code changes a good shot either in one file or multiple, and you can easily direct it towards specific files. I do wish it could be a bit smarter about which files it looks at since unless you tell it about that file you have types in, it'll happily reimplement types it doesn't know of. And another big issue with Compose mode is that the product is just not really complete (as can be seen by how different the 3 UXes of applying edits are). It has reverted previous edits for me, even if they were saved on disk and the UI was in a fresh state (and even their "checkout" functionality lost the content).

The Cmd+K "edit these lines" mode has the most reliable behavior since it's such a self-contained problem where the implementation uses the least amount of tricks to make the LLM faster. But obviously it's also the least powerful.

I think it's great that companies are trying to figure this out but it's also clear that this problem isn't solved. There is so much to do around how the model gets context about your code, how it learns about your codebase over time (.cursorrules is just a crutch), and a LOT to do about how edits to code are applied when 95% of the output of the model is the old code and you just want those new lines of code applied. (On that last one, there are many ways to reduce output from the LLM but they're all problematic – Anthropic's Fast Edit feature is great here because it can rewrite the file super fast, but if I understand correctly it's way too expensive).

By @xlii - 6 months
Ok, so… I’m looking at the Python example with HTTP Endpoints and I already have questions.

Why are both functions inlined, and why is FastAPI used at all? I'm also not seeing any network bindings. Is it bound to localhost (I doubt it), or does it immediately bind to all interfaces?

That's three seconds of thought from looking at Python code, a language I know only well enough to write small and buggy utilities (yet Python is widely popular, so LLMs have a ton of data to learn from). I know it's only a demo, but this video strengthens my feeling about a drop in critical thinking and a rise of McDonald's productivity.

McDonald's is not bad: it's great when you are somewhere you don't know and not feeling well. They have the same menu almost everywhere and it's like 99% safe due to process standardization. Plus they can get you satiated in 20 minutes or less. It's still the type of food that you can't really live on for long; and if you do, there will be consequences.

The silver lining is that it will most likely cut off from the field the people who hate the domain but love the money, as that's exactly the problem it automates away.

By @christkv - 6 months
I have not tried Cursor; I'm using a combination of Copilot and ChatGPT o1. Is Cursor considered a better solution? I find it takes the tedium out of work and allows me to focus on the important bits. Right now I'm working on a Flutter app and it's great to be able to quickly try out different ideas, iterating quicker towards a final design than before LLMs.
By @nephy - 6 months
I find LLMs to be much worse than the junior engineers I work with. I have tried Copilot and I always end up disabling it because it's often wrong and annoying. Just read the docs for the things you are using, folks. We don't need to burn the rainforest for autocomplete.
By @johnisgood - 6 months
Cursor is unusable for many as long as https://github.com/getcursor/cursor/issues/598 is not fixed.

I would like to use it, but I literally cannot because of this bug!

By @sumshelf - 6 months
I've been using Cursor and Claude 3.5 Sonnet for about two months. My typical workflow with Cursor involves the following steps: I ask Cursor to generate the code. Then, I have it review its own code. Next, I ask it to improve some code based on the review. Sometimes I ask it to explain the code to me. Finally, I make any necessary manual adjustments. The only downside is that I tend to use up my fast request limit pretty quickly.
By @hum3hum3 - 6 months
OK, I have been writing code for 50 years but can only use Cursor for home use. From my experience, I echo the author's comments. You do have to be careful that larger suggestions make sense, but the syntax will be right. It is just faster.
By @greenie_beans - 6 months
i had an interesting experience with cursor. i use it everyday btw

i had two csvs. one had an ISBN column, the other had isbn10, isbn13, and isbn. i tried to tell it to write python code that would merge these two sheets by finding the matching isbns. didn't work very well. it was trying to do pandas then i tried to get it to use pure python. it took what felt like an hour of back and forth with terrible results.

in a new chat, i told it that i wanted to know different algorithms for solving the problem. once we weighed all of the options, it wrote perfect python code. took like 5 minutes. like duh use a hashmap, why was that so hard?
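
For reference, the hashmap approach it landed on looks roughly like this (a hypothetical sketch; column names are assumed, with ISBN on one side and isbn10/isbn13/isbn on the other):

    import csv

    def merge_by_isbn(path_a: str, path_b: str, out_path: str) -> None:
        """Merge two CSVs on ISBN by indexing one of them in a dict (hashmap)."""
        # Index the second file by every ISBN-ish column it has.
        index: dict[str, dict] = {}
        with open(path_b, newline="") as f:
            reader_b = csv.DictReader(f)
            b_fields = list(reader_b.fieldnames or [])
            for row in reader_b:
                for col in ("isbn10", "isbn13", "isbn"):
                    key = (row.get(col) or "").replace("-", "").strip()
                    if key:
                        index[key] = row

        # Stream the first file and append the matching row's columns (prefixed).
        with open(path_a, newline="") as f, open(out_path, "w", newline="") as out:
            reader_a = csv.DictReader(f)
            out_fields = list(reader_a.fieldnames or []) + [f"b_{c}" for c in b_fields]
            writer = csv.DictWriter(out, fieldnames=out_fields)
            writer.writeheader()
            for row in reader_a:
                key = (row.get("ISBN") or "").replace("-", "").strip()
                match = index.get(key, {})
                writer.writerow({**row, **{f"b_{c}": match.get(c, "") for c in b_fields}})

As the anecdote suggests, the win came from choosing the approach (a dict lookup instead of fuzzy pandas merging), not from typing the code.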

By @DeathArrow - 6 months
From what I see, it's not better than GitHub Copilot for my use case. I work in large code bases, where the code in one file depends on the code in other files. Copilot seems to only be aware of code in the current file, unless I write very long prompts to direct it to specifically look in some other files.

So for making changes to existing code, Copilot isn't helpful, and neither, it seems, is Cursor.

I can use Copilot to write some new methods to parse strings or split strings or convert to/from JSON or make HTTP calls. But anything that involves using or changing existing code doesn't yield good results.

By @wanderingmind - 6 months
As an aside, do we really need a new IDE for an AI copilot? What makes Cursor better than, say, Cody, which is just an extension for VSCode and lets me stay with the mature IDE that has all the bells and whistles I need?
By @Fizzadar - 6 months
I've yet to switch to Cursor as my main editor (Sublime still wins on performance by miles, plus no distracting tab anything), but I do hop in when stamping out boilerplate and repetitive code, for which it is great; still, it's only a minor performance bump.

I have also used Cursor to write Kubernetes client code with great success, because the API space is so large it doesn't fit into my head that well (I'm not often writing such code these days), so that has been incredibly helpful.

So it’s not revolutionising my workflow but certainly a useful tool in some situations.

By @poulpy123 - 6 months
I recently tried GitHub Copilot again, and I was much more convinced of its use. I'm using it mainly as a better autocompletion tool, and I sometimes use the chat for discovering/understanding libraries and errors, because Google is now too polluted by SEO spam.

I'm not trying to generate programs with it, I find it still far too weak for it.

However, while I believe the current models can't really reach it, there is nothing in my eyes that prevents creating an AI good enough for that.

By @adamontherun - 6 months
I've been working with Cursor for a few months. I learned quickly to stay away from the hype around creating entire features using Composer. Found a ton of value in using it to work with context from the codebase as well as external documentation.

Shared my 8 pro tips in this post towards the bottom https://betaacid.co/blog/cursor-dethrones-copilot

By @imetatroll - 6 months
Take the video of "building an HTTP REST API". On the one hand it is lightning fast to create something basic. On the other hand it misses so many details... proper error responses that the frontend - presumably there is one - can use, translations, setting up the db, infra, deploy, testing, etc. There is so much more to getting something ready for the real world.

As a learning tool and for one offs AI can be very nice.

By @niemandhier - 6 months
I had a horrible experience with a number of AI tools for infra-as-code.

My theory is that in that case it's hard to predict what to do from context, and the libraries are at the same time hyper-specialized and similar.

Example: creating a node and attaching a volume using Ansible looks similar for different cloud providers, but there are subtle differences in how to specify the location, etc.

By @ainiriand - 6 months
Could someone experienced enough give some insights into how it works with big monorepos and/or legacy code?
By @marviel - 6 months
Good Taste, Management / Communication Skills, Code Review Ability.

If you have these skills, the productivity gains from tools like Cursor are insane.

If you lack any of these, it makes sense that you don't get the hype; you're missing a critical piece of the new development paradigm, and should work on that.

By @foreigner - 6 months
So the Cursor AI can make edits to the .cursorrules file that's meant to control it? Hmm...
By @kristopolous - 6 months
I really honestly don't get the hype. VSCode with some Copilot plugins seems to do effectively 100% of what this fork does, and last time I scrolled through Cursor's 1000+ open GitHub issues, they didn't seem responsive or to know how to fix things.
By @amunozo - 6 months
I've been using Copilot in VSCode as I have it free as a student. I wanted to try Cursor, but money is tight. Are they that much different? If so, what makes Cursor so special?
By @e2e4 - 6 months
Any thoughts on how Cursor compares to Cline (claude.dev) and Aider?
By @delackner - 6 months
For me, all these tools suffer from the same basic question, how can I put proprietary private source into a tool notorious for siphoning up and copying everything you say to it?
By @JadoJodo - 6 months
I’m surprised more people don’t use Supermaven. Its completions are quite a bit faster than Cursor and it also integrates into IDEs (Jetbrains), not just editors.
By @pratibha_simone - 6 months
Many pro comments above, but I like it as a coding newbie!
By @kleiba - 6 months
Slightly OT, but perhaps someone can help me out: what's a tried & tested setup to integrate a locally running AI coding assistant into Emacs?
By @swang - 6 months
Trying to remember: was Cursor the one that took YC money and basically reskinned some open source project, or the one whose project was stolen, or neither..?
By @dankwizard - 6 months
Why is 60 - 70% of my screen whitespace on this website (Or maybe more accurately... light pink space?)

If cursor made those margins, humans 1 cursor 0

By @snozolli - 6 months
I don't know if the author will read these comments, but there's a missing word:

the architecture of I am building.

By @KronisLV - 6 months
Honestly, seems like a cool tool and I could see myself using something like it instead of just my current GitHub Copilot subscription.

Sure, you probably don't want to blindly copy or accept suggested changes, but when the tools work, they're like a pretty good autocomplete for various snippets and I guess quite a bit more in the case of Cursor.

If that helps you focus on problem solving and lets the tooling, language and boilerplate get out of the way a little bit more, all the better! For what it's worth, I'm probably sticking with JetBrains IDEs for the foreseeable future, since they have a lot of useful features and are what I'm used to (with VS Code for various bits of scripting, configuration etc.).

By @vegapulse - 6 months
Cursor is awesome. Have been using it for a month now, greatly improved efficiency.
By @szemy2 - 6 months
Can someone please elaborate how Cursor is different to Copilot?
By @wruza - 6 months
Parallelize a task, write a server that exposes. It’s not code.

Show code.

By @vineyardlabs - 6 months
Related question: for those who do any kind of programming for fun on the side, how do you feel about using tools like Cursor for those projects? Is it a cool productivity enhancer that allows you to focus less on the code and more on the end product, or does it suck the fun out of it for you?

I work in an environment right now where feeding proprietary code/docs into 3rd party hosted LLMs is a hard no-go, and we don't have any great locally hosted solution set up yet, so I haven't really taken the dive into actively writing code with LLM assistance. I feel like I should practice this skill, but the idea of using a tool like Cursor on personal projects just seems so antithetical to the point that I can't bring myself to actually do it.

By @rco8786 - 6 months
There's a lot of people in these comments who are talking about the state of off-the-shelf LLMs for writing code, and they are missing the point of this article. The article is about Cursor, the IDE.

It is much, much more than a ChatGPT wrapper. I'd encourage everyone to give it a shot with the free trial. If you're already a VSCode user, it only takes a minute to set up with the exact same devenv you already have.

Cursor has single-handedly changed the way I think about AI and its capabilities/potential. It's truly best-in-class and by a wide margin. I have no affiliation with Cursor, I'm just blown away by how good it is.

By @234120987654 - 6 months
Wow, I did not expect to see such negativity in this thread. Most of them read to me like the "Dropbox is just an FTP"-narrative. Yes, you and your pride can do most of these things in 0.3ms and better, but so will 1 million more people now.

You can do most of the things the author showed with your carefully set-up IDE and magic tricks, but that's not the point. I don't want to spend a lifetime setting these things up only for them to break when moving to another language.

Also, where the tab-completion shines for me in Cursor is exactly the edge case where it knows when _not_ to change things. In the camel casing example, if one of them were already camel cased, it would know not to touch it.

For the chat and editing, I've gotten a pretty good sense as to when I can expect the model to give me a correct completion (all required info in context or something relatively generic). For everything else I will just sit down and do it myself, because I can always _choose_ to do so. Just use it for when it suits you and don't for when it doesn't. That's it.

There's just so many cases where Cursor has been an incredible help and productivity boost. I suspect that the complainers either haven't used it at all or dismissed it too quickly.

By @Giorgi - 6 months
That article reads just like a half-a--ed ChatGPT prompt designed to shill Cursor, because it became irrelevant after the introduction of the Canvas tool.