March 12th, 2025

I use Cursor daily - here's how I avoid the garbage parts

The article reviews the author's experience with Cursor, an AI coding tool, highlighting the need for a .cursorrules file, context provision, manual review, and caution with complex tasks.

The article discusses the author's experience using Cursor, an AI tool for coding, highlighting both its benefits and limitations. The author emphasizes the importance of creating a .cursorrules file to streamline the coding process and improve AI output. They suggest keeping the rules minimal and gradually building upon them, as excessive input can lead to poor results. The author advises providing context to the AI by referencing existing code and using specific files to enhance its performance. Additionally, they recommend being cautious with AI-generated code, advocating for manual review and refactoring to ensure quality. The author notes that while AI can assist in coding, especially when one is mentally fatigued, it may not always produce optimal results, particularly with complex tasks or bug fixes. They encourage developers, especially juniors, to experiment with AI tools while being mindful of their own coding skills. Ultimately, the effectiveness of AI in coding may vary based on the specific project and technology stack used.

- Creating a .cursorrules file can significantly enhance the coding experience with Cursor (an example follows this list).

- Providing context and referencing existing code improves AI output quality.

- Manual review and refactoring of AI-generated code are essential for maintaining code integrity.

- Caution is advised when using AI for complex tasks or bug fixes.

- The effectiveness of AI tools can vary based on the project and technology stack.
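
For readers who haven't set one up, a .cursorrules file is plain text at the project root that gets prepended to the AI's instructions. A minimal, hypothetical starting point (the specific rules below are illustrative, not taken from the article) might look like:

    # What this project is
    A TypeScript/Next.js web app for invoicing.

    # Rules
    - Keep functions small and pure; prefer early returns.
    - Use nullish coalescing (??) rather than || for defaults.
    - Do not modify files beyond those referenced in the prompt.
    - Ask before adding new dependencies.

Per the article's advice, start roughly this small and add a rule only when the AI repeats a mistake.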

AI: What people are saying
The comments reflect a mix of experiences and concerns regarding the Cursor AI coding tool.
  • Many users find Cursor effective for small tasks but struggle with larger codebases, leading to coherence issues.
  • There are concerns about Cursor's business model creating a conflict of interest, as it may prioritize profit over user experience.
  • Users express frustration with the tool's reliance on context management, which can lead to inefficiencies and errors.
  • Some commenters highlight the potential negative impact on junior engineers' coding skills due to over-reliance on AI tools.
  • Alternatives like Cline and Windsurf are mentioned as preferable options by some users.
47 comments
By @walthamstow - about 2 months
Eng leadership at my place are pushing Cursor pretty hard. It's great for banging out small tickets and improving the product incrementally kaizen-style, but it falls down with anything heavy.

I think it's weakening junior engineers' reasoning and coding abilities as they become reliant on it without having lived for long, or at all, in the before times. I think it may be doing the same to me too.

Personally, and quietly, I have a major concern about the conflict of interest of Cursor deciding which files to add to context then charging you for the size of the context.

As with so many products, it's cheap to start with, you become dependent on it, then one day it's not cheap and you're fucked.

By @laborcontract - about 2 months
Cursor's current business model produces a fundamental conflict between the well-being of the user and the financial well-being of the company. We're starting to see these cracks form as LLM providers are relying on scaling through inference-time compute.

Cursor has been trying to do things to reduce the costs of inference, especially through context pruning. For instance, if you "attach" files to a conversation, Cursor no longer stuffs the code from those files into the prompt. Instead, it'll run function calls to open those files and read bits and pieces of the code until the model feels it has enough information. This seems like a perfectly reasonable strategy until you realize you cannot do the same thing with reasoning models, if you're limiting the reasoning to just the initial prompt.

If you prune out context from the initial prompt, instead of reasoning on richer context, the LLM reasons only on the prompt itself (with no access to the attached files). After the thinking process, Cursor runs function calls to retrieve more context, which entirely defeats the point of "thinking" and induces the model to create incoherent plans and speculative edits in its thinking process, thus explaining Claude's bizarre over-editing behavior. I suspect this is why so many Cursor users are complaining about Claude 3.7.
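
A rough sketch of the two prompt-building strategies being contrasted (an illustrative reconstruction in TypeScript; the message shapes and the read_file tool are assumptions, not Cursor's actual internals):

    type Message = { role: "system" | "user"; content: string };

    // Strategy 1: stuff attached file contents into the prompt, so a reasoning
    // model "thinks" over the real code before planning any edits.
    function buildStuffedPrompt(task: string, files: Map<string, string>): Message[] {
      const context = [...files]
        .map(([path, code]) => `// ${path}\n${code}`)
        .join("\n\n");
      return [
        { role: "system", content: "You are a coding assistant." },
        { role: "user", content: `${context}\n\nTask: ${task}` },
      ];
    }

    // Strategy 2 (the pruning described above): send only file *names* and let
    // the model pull code in later via tool calls. Cheaper per request, but any
    // reasoning done on this initial prompt happens before the model has seen
    // a single line of the attached files.
    function buildPrunedPrompt(task: string, files: Map<string, string>): Message[] {
      const listing = [...files.keys()].join(", ");
      return [
        { role: "system", content: "Call read_file(path) to open attached files." },
        { role: "user", content: `Attached files: ${listing}\n\nTask: ${task}` },
      ];
    }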

On top of this, Cursor has every incentive to keep the thinking effort for both o3-mini and Claude 3.7 to the very minimum so as to reduce server load.

Cursor is being hailed as one of the greatest SaaS growth stories, but their $20/mo all-you-can-eat business model puts them in such a bad place.

By @cyprx - about 2 months
I had been using Cursor for a month until one day my house had no internet; then I realized that I had started forgetting how to write code properly.
By @jillesvangurp - about 2 months
The UX of tools like these is largely constrained by how good they are with constructing a complete context of what you are trying to do. Micromanaging context can be frustrating.

I played with aider a few days ago. Pretty frustrating experience. It kept telling me to "add files" that are in the damn directory that I opened it in. "Add them yourself" was my response. Didn't work; it couldn't do it somehow. Probably once you dial that in, it starts working better. But I had a rough time with it creating commits with broken code, not picking up manual file changes, etc. It all felt a bit flaky and brittle. Half the problem seems to be simple cache coherence issues and me having to tell it things that it should be figuring out by itself.

The model quality seems less important than the plumbing to get the full context to the AI. And since large context windows are expensive, a lot of these tools are cutting corners all the time.

I think that's a short-term problem. Not cutting those corners is valuable enough that a logical end state is tools that don't cut them and cost a bit more. Just load the whole project. Yes, it will make every question cost $2-3 or something like that. That's expensive now, but if it drops by 20x we won't care.

Basically, large models that support huge context windows of millions/tens of millions of tokens cost something like the price of a small car and use a lot of energy. That's OK. Lots of people own small cars, because they are kind of useful. AIs that have a complete, detailed context of all your code, requirements, intentions, etc. will be able to do a much better job than ones that have to guess all of that from a few lines of text. That would be useful. And valuable to a lot of people.

Nvidia is rich because they have insane margins on their GPUs. They cost a fraction of what they sell them for. That means that price will crash over time. So, I'm optimistic that a lot of this stuff will improve rapidly.

By @2sk21 - about 2 months
I read this point in the article with bafflement:

"Learn when a problem is best solved manually."

Sure, but how? This is like the vacuous advice for investors: buy low and sell high.

By @Amekedl - about 2 months
Compounding the opinions of other commenters, I feel that using Cursor is a bad idea. It's a closed-source SaaS, and with these components involved, service quality can do wild swings on a daily basis, not something I'm particularly keen on.
By @blainm - about 2 months
I've found tools like Cursor useful for prototyping and MVP development. However, as the codebase grows, they struggle. It's likely due to larger files or an increased number of them filling up the context window, leading to coherence issues. What once gave you a speed boost now starts to work against you. In such cases, manually selecting relevant files or snippets from them yields better results, but at that point it's not much different from using the web interface to something like Claude.
By @kevingadd - about 2 months
> Like mine will keep forgetting about nullish coalescing (??) in JS, and even after I fix it up it will revert my change in its future changes. So of course I put that rule in and it won't happen again.

I'm surprised that this sort of pattern - you fix a bug and the AI undoes your fix - is common enough for the author to call it out. I would have assumed the model wouldn't be aggressively editing existing working code like that.
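
For anyone who hasn't hit the distinction, the behavior that rule protects against takes two lines of plain TypeScript to show (unrelated to Cursor itself):

    const retries: number | null = 0;
    const viaOr = retries || 3;      // 3 -- || treats the legitimate 0 as missing
    const viaNullish = retries ?? 3; // 0 -- ?? falls back only on null/undefined

A model that "simplifies" ?? back to || quietly reintroduces exactly this class of bug.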

By @DeathArrow - about 2 months
Apart from the fact that it chews fast requests like there's no tomorrow, I dislike how it makes changes I didn't ask for. And if I ask it to undo what it did without being asked, it goes on and breaks more code.

In my test application I had a service which checked the cache, then asked the repository if no data was in the cache, then used external APIs to fetch some data, combined it, and updated the DB and the cache.

I asked Cursor to change using DateTime type to using Unix timestamp. It did the changes but it also removed cache checks and calling external APIs, so my web app relied just on the data in DB. When asked to add back what it removed, it broke functionality in other parts of the application.

And that is with a small simple app.
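
For reference, the service described reads roughly like this (a hypothetical reconstruction with invented names, not the commenter's actual code):

    // Cache -> repository/DB -> external APIs, in that order.
    type Data = { key: string; fetchedAt: number }; // Unix timestamp, per the requested change

    interface Cache {
      get(key: string): Data | undefined;
      set(key: string, value: Data): void;
    }

    interface Repository {
      find(key: string): Promise<Data | undefined>;
      save(value: Data): Promise<void>;
    }

    async function getData(key: string, cache: Cache, repo: Repository): Promise<Data> {
      const cached = cache.get(key); // 1. check the cache
      if (cached) return cached;

      const stored = await repo.find(key); // 2. fall back to the repository
      if (stored) {
        cache.set(key, stored);
        return stored;
      }

      const fresh = await fetchFromExternalApis(key); // 3. last resort: external APIs
      await repo.save(fresh); // update the DB...
      cache.set(key, fresh);  // ...and the cache
      return fresh;
    }

    async function fetchFromExternalApis(key: string): Promise<Data> {
      return { key, fetchedAt: Math.floor(Date.now() / 1000) };
    }

A change from DateTime to Unix timestamps should touch little more than the Data type; the complaint is that steps 1 and 3 vanished along with it.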

By @torginus - about 2 months
I have been a religious Cursor + Sonnet user for like the past half a year, and maybe I'm an idiot, but I don't like this agentic workflow at all.

What worked for me is having it generate functions, classes, ranging from tens of lines of code to low hundreds. That way I could quickly iterate on its output and check if it's actually what I wanted.

It created a prompt-check-prompt iterative workflow where I could make progress quite fast and be reasonably certain of getting what I wanted. Sometimes it required fiddling with manually including files in the context, but that was a sacrifice I was willing to make and if I messed up, I could quickly try again.

With these agentic workflows, and thinking models I'm at a loss.

To take advantage of them, you need very long and detailed prompts, they take a long time to generate and drop huge chunks of code on your head. What it generates is usually wrong due to the combination of sloppy or ambiguous requirements by me, model weaknesses, and agent issues. So I need to take a good chunk of time to actually understand what it made, and fix it.

The iteration time is longer and I have less control over what it's doing, which means I spend many minutes crafting elaborate prompts, reading the convoluted and large output, figuring out what's wrong with it, and either fixing it by hand or modifying my prompt, rinse and repeat.

TLDR: Agents and reasoning models generate 10x as much code, that you have to spend 10x time reviewing and 10x as much time crafting a good prompt.

In theory it would come out as a wash, in practice, it's worse since the super-productive tight AI iteration cycle is gone.

Overall I haven't found these thinking models to be that good for coding, other than the initial project setup and scaffolding.

By @yard2010 - about 2 months
How can I stop Cursor from sending .env files with secrets as plain text? Nothing I tried from the docs works.
By @rhodescolossus - about 2 months
I've tried Cursor a couple of times but my complaint is always the same: why fork VS Code when all this functionality could just be an extension, the same as Copilot does?

Some VS Code extensions don't work, you need to redo all your configuration, add all your workspaces... and the gain vs Copilot is not that high.

By @DaveMcMartin - about 2 months
For those of you who, like me, use Neovim, you can achieve "cursor at home" by using a plugin like Avante.nvim or CodeCompanion. You can configure it to suit your preferences.

Just sharing this because I think some might find it useful.

By @timothygold - about 2 months
I made a quick prototype to demonstrate what I think AI code assistance should be:

https://github.com/hibernatus-hacker/ai-hedgehog

This is a simple code assistant that doesn't get in your way and makes sure you are coding (not losing your ability to program).

You configure a Replicate API token... install the tool and point it at your code base.

When you save a file, it asks the LLM for advice and feedback on the file as a "senior developer".

Run this alongside your favorite editor to get feedback from an LLM as you work (on open-source code; nothing you don't want third parties to see).

You are still programming and using your brain but you have some feedback when you save files.

The feedback is less computationally expensive and less fraught with difficulty than actually getting code from LLMs, so it should work with much less powerful models.

It would be nice if there was a search built in so it could search for useful documentation for you.

By @gregwebs - about 2 months
AI blows me away when asked to write greenfield code. It can get a complex task using hundreds of lines of code right on the first try or perhaps it needs a second try on the prompt and an additional tweak of the output code.

As things move from prototype to production ready the productivity starts to become a wash for me.

AI doesn’t do a good job organizing the code and keeping it DRY. Then it’s not easy for it to make those refactorings later. AI is good at writing code that isn’t inherently defective but if there is complexity in the code it will introduce bugs in its changes.

I use Continue for small additions and tab completions and Claude for large changes. The tab completions are a small productivity boost.

Nice to see these tips- I will start experimenting with prompts to produce better code.

By @mrlowlevel - about 2 months
Do any of these tools use the rich information from the AST to pull in context? Coupled with semantic search for entry points into the AST, it feels like you could do a lot…
By @stared - about 2 months
Other useful things I've discovered:

- Push for DRY principles ("make code concise," "ensure good design").

- Swap models strategically; sometimes it's beneficial to design with one model and implement with another. For example, use DeepSeek R1 for planning and Claude 3.5 (or 3.7) for execution. GPT-4.5 excels at solving complex problems that other models struggle with, but it's expensive.

- Insist on proper typing; clear, well-typed code improves autocompletion and static analysis.

- Certain models, particularly Claude 3.7, overly favor nested conditionals and defensive programming. They frequently introduce nullable arguments or union types unnecessarily. To mitigate this, keep function signatures as simple and clean as possible, and validate inputs once at the entry point rather than repeatedly in deeper layers (see the sketch after this list).

- Emphasize proper exception handling. Some models (again, notably Claude 3.7) have a habit of wrapping everything in extensive try/catch blocks, resulting in nested and hard-to-debug code reminiscent of legacy JavaScript, where undefined values silently pass through multiple abstraction layers. Allowing code to fail explicitly is a blessing for debugging purposes; masking errors is like replacing a fuse with a nail.
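
The last two points compress into one sketch (illustrative TypeScript with invented names, not from the comment): validate once at the boundary, pass non-nullable types down, and let genuine failures throw where they happen.

    import { readFileSync } from "node:fs";

    // The style being warned against: nullable arguments plus a blanket
    // try/catch, so a broken config surfaces layers away as a silent default.
    function getPortDefensive(config?: { port?: number } | null): number {
      try {
        return config?.port ?? 3000;
      } catch {
        return 3000;
      }
    }

    // Validate once at the entry point; everything below works with clean types.
    type Config = { port: number };

    function loadConfig(path: string): Config {
      const raw = JSON.parse(readFileSync(path, "utf8")); // throws loudly on a bad file
      if (typeof raw.port !== "number") throw new Error(`invalid port in ${path}`);
      return { port: raw.port };
    }

    function startServer(config: Config): void { // no null checks needed down here
      console.log(`listening on ${config.port}`);
    }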

By @mattwad - about 2 months
Everyone posting should note their language/stack. It's pretty likely that Cursor doesn't work equally well for every language. I'm working on a Next.js/Typescript/Solidity monorepo with multiple apps and packages and it handles pretty much anything I throw at it. I know I can squeeze more out because I have only been really using it heavily for the past month or so.
By @atoav - about 2 months
I have been using AI coding for a while now and the issues I keep having is the following:

- LLM keeps forgetting/omitting parts of the code

- LLM keeps changing unrelated parts of the code

- LLM does not output correctly typed code (with Rust this can feel like throwing mud at a wall and seeing what sticks; in the end you're faster on your own)

- LLM flip-flops back and forth between two equally wrong answers when asked about a problem that is particularly hard (from the perspective of the LLM) to answer

In the end, the main thing any AI coding tool will have to solve is how to get the human in front of the LLM to trust that the output does what it should without breaking other things.

But of course LLMs are already crazy good at what they do. I just wonder how people who have no idea what they are doing will be able to handle that power.

By @ookblah - about 2 months
parts of the article are spot on. after the magic has worn off i find it's best to literally treat it like another person. would you blindly merge code from someone else or huge swaths of features? no. i have to review every single piece of code, because later on when there's a bug or new feature you have to have that understanding.

another huge thing for me has been to scaffold a complex feature just to see what it would do. just start out with literal garbage and an idea and as long as it works you can start to see if something is going to pan out or not. then tear it down and do it again with those new assumptions you learned. keep doing it until you have a clear direction.

or sometimes my brain just needs to take a break and i'll work on boilerplate stuff that i've been meaning to do or small refactors.

By @divan - about 2 months
How does the current state of Cursor's agentic workflow compare to Windsurf Editor?

I've been using Windsurf since it was released, and back then, it was so ahead of Cursor it's not even funny. Windsurf feels like it's trained on good programming practices (checking usage of a function in other parts of the project for consistency, double-checking for errors after changes are made, etc.). It's also surprisingly fast (it can "search" a 5k-file codebase in, like, 2 seconds). It even asked me once to copy and paste output from Chrome DevTools because it suspected that my interpretation of the result was not accurate (and it was right).

The only thing I truly wish is to have the same experience with locally running models. Perhaps Mac Studio 512GB will deliver :)

By @austin-cheney - about 2 months
With so many caveats, so many exceptions, so many rules, so much manual validation, and so little trust why bother? It sounds like an employee pending termination.
By @hakaneskici - about 2 months
What is your mental model when coding with Cursor?

When I code with AI assistance, I "think" differently and have noticed that I have more memory bandwidth to think about the big picture rather than the details.

With AI assistance, I can keep the entire program logic in my head; otherwise I have to do expensive context switching between the main components of the program/system.

How are you "thinking" when typing prompts vs typing actual code?

By @trash_cat - about 2 months
> And then at the top of the file, just write some text about what the project is about. If you have a particular file structure and way of organising code that is great to put in as well.

By asking the AI to generate a context.md file, you get an automatically structured overview of the project, including its purpose, file organization, and key components. This makes it easier to onboard new contributors, including other LLMs.
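
A hypothetical context.md along those lines (contents invented for illustration) can be very short:

    # Project context
    Invoicing web app: Next.js frontend, Node API, Postgres.

    ## Layout
    - apps/web     - customer-facing UI
    - apps/api     - REST API; all business logic lives here
    - packages/db  - schema and query helpers

    ## Conventions
    Strict TypeScript, no default exports, tests sit next to their source files.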

By @factsaresacred - about 2 months
Too bad they removed the ability to use Chat (rebranded as Ask) with your own API keys in version 0.47. Now every feature requires a subscription.

Natural for Cursor to nudge users towards their paid plans, but why provide the ability to use your own API keys in the first place if you're going to make them useless later?

By @HugoDias - about 2 months
I saw this post on the first page a few minutes ago (published 5 hours ago), but it quickly dropped to the 5th page. Given its comments and points, that seems odd. I had to search to find it again. Any idea why?
By @flippyhead - about 2 months
Note that the latest update (0.47.x) made this useful change:

Rules: Allow nested .cursor/rules directories and improved UX to make it clearer when rules are being applied.

This has made things a lot easier in my monorepos.
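
In a monorepo, a nested layout of this kind (paths are hypothetical) lets scoped rules apply only where they're relevant:

    repo/
      .cursor/rules/             <- org-wide rules, apply everywhere
      apps/web/.cursor/rules/    <- frontend-only rules (React conventions, etc.)
      apps/api/.cursor/rules/    <- backend-only rules (SQL, error handling, etc.)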

By @ThinkBeat - about 2 months
What programming languages do you primarily use? I feel that knowing what programming languages an LLM is best at is valuable but often not directly apparent.
By @TheAnkurTyagi - about 2 months
Nice. Also, you can use project-specific structure and markdown files to ensure the AI organizes content correctly for your use case. We are using it on 800k lines of Golang and it works well. https://getstream.io/blog/cursor-ai-large-projects/
By @pestkranker - about 2 months
Is there an equivalent to cursorrules and copilot-instructions for the Jetbrains IDEs (Rider) + GitHub Copilot extension?
By @askonomm - about 2 months
So, if I liked being a manager more than a developer, I'd use Cursor, and lean in entirely on AI?
By @ZeroTalent - about 2 months
Cursor is not SOTA. It's just popular. Look into alternatives: Cline, Augment Code, etc.
By @eleumik - about 2 months
How much is the price of a kilogram of code in the US?
By @flipgimble - about 2 months
Cursor overwrites the “code” command line shortcut/alias that’s normally set by VS Code. It does this on every update, with no setting to disable the behavior. There are numerous forum threads asking about manual solutions. This seems like a deliberately anti-user feature meant to get their usage numbers up at all costs. This small thing makes me not trust that the decision-making process at Cursor won’t sell me out as a user.
By @dimgl - about 2 months
The new Cursor update (0.47) is cursed. They got rid of codebase searching (WTF?) and the agent is noticeably worse, even when using Sonnet 3.5.

I'm really shocked, actually. This might push me to look at competitors.

By @timothygold - about 2 months
I tried Cursor for a day or two and then asked for a refund... here's why:

* It has terrible support for Elixir (my fav language) because the models are only really trained on Python.

* Terrible clunky interface... it would be nice if you didn't have to click around and do modifier Ctrl+Y stuff ALL the time.

* The code generated is still riddled with errors or naff (apart from boilerplate)... so I am still *prompt engineering* the crap out of it... which I'm good at, but I can prompt engineer using phind.com...

* The fact that the code is largely broken the first time, and that they still haven't really fixed the context window problem, means you have to copy and paste error codes back into it... defeating the purpose of an integrated IDE imo.

* The free demo mode stops working after generating one function... if I had been given more time to evaluate it fully I would never have signed up. I signed up to see if it was any good.. which it isn't.

By @quotz - about 2 months
Cline is much better
By @kobe_bryant - about 2 months
I was trying to figure out what he does and his website proudly states at the very top “ No templates, no no-code, no AI slop - just great sites built to grow.”. interesting!
By @DiabloD3 - about 2 months
I'm sorry, but isn't Cursor just an editor? Maybe an editor shouldn't actually have garbage parts to avoid?

Why not just use an editor that is focused on coding, and then just not use an LLM at all? Less fighting the tooling, more getting your job done with less long term landmines.

There are a lot of editors, and many of them even have native or semi-native LLM support now. Pick one.

Edit: Also, side note, why are so many people running their LLMs in the cloud? All the cutting edge models are open weight licensed, and run locally. You don't need to depend on some corporation that will inevitably rug-pull you.

Like, a 7900XTX runs you about $1000. You probably already own a GPU that cost more in your gaming rig.

By @hannah_creator - about 2 months
Very useful!!!
By @r_singh - about 2 months
Just use Cline, it beats Cursor hollow — saves me like hours per day