February 10th, 2025

You are using Cursor AI incorrectly

Geoffrey Huntley highlights misconceptions about Cursor AI among engineers, advocating for a structured approach and a "standard library" of prompts to enhance accuracy and automate development tasks effectively.

Geoffrey Huntley discusses the common misconceptions and misuses of Cursor AI among software engineers. He emphasizes that many users treat Cursor as a simple search engine or IDE rather than leveraging its capabilities as an autonomous agent. Key issues include under-specifying prompts, making low-level requests, and misunderstanding the potential of Cursor Rules.

Huntley suggests that users build a "standard library" of prompting rules to enhance their interactions with Cursor. He works through an example of creating a WordPress plugin that uses Guzzle, and stresses the importance of adhering to coding standards. He also urges users to engage with Cursor actively by asking it to write and update rules based on what it learns. Doing so improves the accuracy of the AI's responses and automates routine tasks, such as adding license headers and committing changes to source control. Huntley concludes that a well-structured approach to using Cursor leads to more successful outcomes in software development.
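To make the "standard library" idea concrete, here is a minimal sketch of what one such rule might look like. Cursor reads project rules from .mdc files under .cursor/rules/, and the frontmatter fields below come from that format; the file name (say, .cursor/rules/license-headers.mdc), the glob, and the rule text are illustrative assumptions rather than examples taken from Huntley's post:

```
---
description: Enforce the project's license header on PHP source files
globs: **/*.php
alwaysApply: false
---

- Every PHP file must start with the project's SPDX license header:

  /*
   * Copyright (c) Example Co. (hypothetical project)
   * SPDX-License-Identifier: MIT
   */

- If the header is missing, insert it as the very first lines of the file.
- Never duplicate the header if one is already present.
```

Because rules like this are committed alongside the code, every future prompt in the project inherits them, which is what turns one-off corrections into reusable context.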

- Many engineers misuse Cursor by treating it like a search engine or IDE.

- Building a "standard library" of prompting rules can enhance interactions with Cursor.

- Users should actively engage with Cursor to improve its accuracy and functionality.

- Automating tasks like adding license headers and committing changes can streamline development (see the sketch after this list).

- Understanding and utilizing Cursor Rules is crucial for maximizing its potential.
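Following on from the automation bullet above, committing changes can itself be encoded as a rule. The sketch below reuses the same .cursor/rules/ layout; the summary mentions committing to source control but not Huntley's exact wording, so the rule text and the Conventional Commits convention shown here are assumptions:

```
---
description: Commit completed work to source control
globs: **/*
alwaysApply: true
---

- After finishing a requested change and confirming it builds and passes tests,
  stage only the files you modified and commit them.
- Use Conventional Commits messages, for example:

  git add src/Plugin/HttpClient.php
  git commit -m "feat(plugin): add Guzzle-based HTTP client"

- Never commit secrets, vendor directories, or generated artifacts.
```

A rule like this is the kind of automation the summary describes: once it exists, the agent performs the bookkeeping steps without being asked each time.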

5 comments
By @proc0 - 3 months
Here's the crux of why AI is not yet useful for more than simple projects: it cannot actually know when something is correct or incorrect, so there is no guarantee that anything is implemented properly. To clarify: a human junior engineer might not have the knowledge, but they KNOW they don't have it, AND they know when they have the right answer. A junior engineer can check their results and verify with near-100% certainty whether something works.

With Cursor I keep running into suggestions that create bugs. Even a junior dev knows how to check their solution to see whether it actually works. The article says to build a "stdlib" of rules for things that go wrong so they stop recurring, but I would expect that to blow past the max tokens very quickly and make things exceedingly harder to troubleshoot. My guess is that once inference is practically free (in computation), we can throw 1000 agents at a single application to reach the level of agency where checking your answer is the obvious step, and the result is as reliable as a human's.

By @typs - 3 months
+1 on using Cursor rules. In general I feel that Cursor definitely has a fairly steep learning curve, but it's a great product if you can climb it.

By @foobarbecue - 3 months
Gosh this sounds like such a miserable way to work -- spending all that time trying to teach your tooling to be reasonable.

By @JohnFen - 3 months
That sounds like at least as much work as just doing it yourself to begin with. What's the benefit here?