How I write code using Cursor
Cursor is a coding tool that enhances productivity in Visual Studio Code with features like tab completion and inline editing. Despite some limitations, it positively impacts coding workflows and experimentation.
Cursor is a coding tool that integrates Large Language Model (LLM) features into a fork of Visual Studio Code, aimed at enhancing productivity for experienced developers. Tom Yedwab, who has extensive coding experience, shares his insights on whether Cursor is a valuable tool or just a trend. He highlights its key features, including tab completion, inline editing, a chat sidebar, and a composer for cross-file refactoring. The tab completion feature stands out for its ability to suggest code completions and navigate edits efficiently, significantly reducing the time spent on repetitive tasks. However, Yedwab notes some limitations, such as occasional incorrect suggestions and the need for better adherence to coding styles. He also discusses the .cursorrules file, which can help the LLM understand coding standards within a project. Yedwab emphasizes that Cursor has changed his coding workflow, making him less reliant on external libraries and more willing to experiment with unfamiliar languages. He appreciates the tool's ability to facilitate quick iterations and prototyping, allowing him to focus on high-level problem-solving rather than getting bogged down in boilerplate code. Overall, while Cursor has its drawbacks, Yedwab finds it a beneficial addition to his coding toolkit.
- Cursor integrates LLM features into Visual Studio Code to enhance coding productivity.
- Key features include tab completion, inline editing, and a chat sidebar for code refactoring.
- The tab completion feature is particularly effective for reducing repetitive coding tasks.
- Yedwab notes limitations in suggestion accuracy and adherence to coding styles.
- The tool has transformed his workflow, reducing reliance on external libraries and encouraging experimentation with new languages.
Related
Cursor – The AI Code Editor
Cursor is an AI-powered code editor that enhances developer productivity through predictive editing, natural language coding, and a focus on privacy, receiving positive feedback for its efficiency and user experience.
Code Smarter, Not Harder: Developing with Cursor and Claude Sonnet
Cursor is an AI-powered code editor that enhances software development by integrating with the Claude Sonnet 3.5 model, allowing users to generate code and reference various sources for context.
Cursorcasts: Learn how to code with AI using Cursor
Cursor is an AI-driven platform for beginners to learn coding, offering free screencasts, features like autocomplete, code editing, and project sharing, with sign-in options for personalized experiences.
Lex Fridman with Cursor Team: Future of Programming with AI
Aman Sanger and his team discussed their AI-assisted code editor, Cursor, on the Lex Fridman Podcast, covering features, AI's impact on programming, and future challenges in the field.
Copilot vs. Cursor vs. Cody vs. Supermaven vs. Aider
Vincent Schmalbach prefers Cursor over GitHub Copilot for its effective code modification and autocomplete features, while Aider serves command-line users. Sourcegraph Cody is less reliable for code modifications.
- Many users report significant productivity boosts when using Cursor, particularly for boilerplate code and repetitive tasks.
- Critics argue that AI tools like Cursor can produce average or suboptimal code, especially in complex scenarios, and may lead to a lack of critical thinking.
- Some users express concerns about the potential for AI-generated code to create maintenance challenges in large codebases.
- There is a divide between those who embrace AI tools for enhancing their workflow and those who feel it diminishes their coding skills and enjoyment.
- Concerns about data privacy and the implications of using AI tools in corporate environments are also prevalent among users.
1. Snake case to camelCase. Even without AI we can already complete these tasks easily. VSCode itself has a "Transform to Camel Case" command for the selection. It is nice that the AI can figure out which text to transform based on context, but it's not too impressive. I could select one ":", use "Select All Occurrences", press left, then Ctrl+Shift+Left to select all the keys.
2. Generate boilerplate from documentation. Boilerplate is tedious, but not really time-consuming. How many of you spend 90% of your time writing boilerplate instead of the core logic of the project? If a language/framework (Java used to be one, not sure about now) requires me to spend that much time on boilerplate, that's a language to be ditched/fixed.
3. Turn a problem description into a block of concurrency code. Unlike boilerplate, this code is more complicated. If I already know the area, I don't need the AI's help to begin with. If I don't know it, how can I trust the generated code to be correct? It could miss a corner case that my question didn't specify and that I don't yet know exists myself (a sketch after this comment shows the kind of thing I mean). In the end, I still need to spend time learning Python concurrency, and then I'll be writing the same code myself in no time.
In summary, my experience with AI is that if the question is easy (e.g. it's easy to find the exact same question on StackOverflow), the answer is highly accurate. But if it is a unique question, the accuracy drops quickly, and it is the latter kind of question we spend most of our time on.
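To make point 3 concrete, here is a minimal Python sketch of the kind of corner case an LLM-generated snippet can silently miss (my own illustration, not code from the thread): exceptions raised inside ThreadPoolExecutor workers stay hidden unless each future's .result() is checked.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url: str) -> str:
    # Stand-in for real I/O; raise to simulate a failure.
    raise RuntimeError(f"failed to fetch {url}")

urls = ["https://example.com/a", "https://example.com/b"]

with ThreadPoolExecutor(max_workers=4) as pool:
    # Corner case: exceptions raised in the workers are stored on the
    # futures and never surface unless .result() is called on each one.
    futures = [pool.submit(fetch, u) for u in urls]

for f in futures:
    try:
        f.result()  # without this loop the failures pass silently
    except RuntimeError as exc:
        print("worker failed:", exc)
```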
It's amazing how many naysayers there are about Cursor. There are many here and they obviously don't use Cursor. I know this because they point out pitfalls that Cursor barely runs into, and their criticism is not about Cursor, but about AI code in general.
Some examples:
"I tried to create a TODO app entirely with AI prompts" - Cursor doesn't work like that. It lets you take the wheel at any moment because it's embedded in your IDE.
"AI is only good for reformatting or boilerplate" - I copy over my boilerplate. I use Cursor for brand new features.
"Sonnet is same as old-timey google" - lol Google never generated code for you in your IDE, instantly, in the proper place (usually).
"the constantly changing suggested completions seem really distracting" - You don't need to use the suggested completions. I barely do. I mostly use the chat.
"IDEs like cursor make you feel less competent" - This is perhaps the strongest argument, since my quarrel is simply philosophical. If you're executing better, you're being more competent. But yes some muscles atrophy.
"My problem with all AI code assistants is usually the context" - In Cursor you can pass in the context or let it index/search your codebase for the context.
You all need to open your minds. I understand change is hard, but this change is WAY better.
Cursor is a tool, and like any tool you need to know how to use it. Start with the chat. Start by learning when/what context you need to pass into chat. Learn when Cmd+K is better. Learn when to use Composer.
I recently went from an idea for a casual word game (aka wordle) to a fully polished product in about 2h, which would have taken me 4 or 5 times that if I hadn't used Cursor. I estimate that 90% of the time was spent thinking about the product, directing the AI, and testing, and about 10% actually coding.
I'm below average in a lot of programming languages and tools. Cursor is extremely useful there because I don't have to spend tens of minutes looking up APIs or language syntax.
On the other hand, in areas I know more about, I feel that I can still write better code than Cursor. This applies to general programming as well. So even if Cursor knows exactly how to write the syntax and which function to invoke, I often find the higher-level code structure it creates sub-optimal.
Overall, Cursor is an extremely useful tool. It will be interesting to see whether it will be able to crawl out of the primordial soup of averages.
I've also been logging every interaction with an LLM, plus the exit status of the build at every mtime of every language-mode file, and all the metadata: I can easily plot when I lean on the thing and when I came out ahead, and I can tag diffs that broke CI. I'm measuring it.
My conclusion is that I value LLMs for coding in exactly the same way that the kids do: you have to break Google in order for me to give a fuck about Sonnet.
LLMs seem like magic unless you remember when search worked.
Personally, I find this kind of workflow totally counter-productive. My own programming workflow is ~90% mental work / doing sketches with pen & paper, and ~10% writing the code. When I do sit down to write the code, I know already what I want to write, don't need suggestions.
I've been in compilers, storage, and data backends for 15ish years, and had to do a little project that required recording audio clips in a browser and sending them over a websocket. Cursor helped me do it in about 5 minutes, while it would've taken at least 30 min of googling to find the relevant keywords like MediaStream and MediaRecorder, learn enough to whip something up, fail, then try to fix it until it worked.
Then I had to switch to streaming audio in near-realtime... here it wasn't as good: it tried sending segments of MediaRecorder audio, which are not suitable for streaming (because of media file headers and such). But a bit of Googling, finding out about the Web Audio API and AudioWorklet, and a bit of prompting, and it basically wrote something that almost worked. Sure, it had some concurrency bugs, like reading from the same buffer that it's overwriting in another thread. But that's why we're checking the generated code, right?
I have a feeling that blindly building things with AI will actually lead to incomprehensible monstrous codebases that are impossible to maintain over the long run.
Read “Programming as Theory Building” by Peter Naur. Programming is 80% theory-in-the-mind and only about 20% actual code.
Here's an actual example of a task I have at work right now that AI is almost useless in helping me solve. "I'm working with 4 different bank APIs, and I need to simplify the current request and data model so that data stored in disparate sources are unified into one SQL table called 'transactions'". AI can't even begin to understand this request, let alone refactor the codebase to solve it. The end result should have fewer lines of code, not more, and it requires a careful understanding of multiple APIs and careful data modelling design and mapping where a single mistake could result in real financial damage.
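For a sense of the shape of that task, here is a hypothetical sketch of mine (not the commenter's code; bank names, fields, and payload formats are invented): the hard part is the careful per-bank mapping into one model, which is exactly where a silent mistake would cost real money.

```python
from dataclasses import dataclass
from datetime import date
from decimal import Decimal

@dataclass
class Transaction:
    """One row of the unified 'transactions' table."""
    bank: str
    external_id: str
    posted_on: date
    amount: Decimal  # signed amount in the account currency
    description: str

def from_bank_a(payload: dict) -> Transaction:
    # Hypothetical: bank A reports amounts in cents and ISO date strings.
    return Transaction(
        bank="bank_a",
        external_id=payload["txn_id"],
        posted_on=date.fromisoformat(payload["posted"]),
        amount=Decimal(payload["amount_cents"]) / 100,
        description=payload["memo"],
    )

# ...one small, carefully reviewed mapper per bank; the table schema stays fixed.
```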
1. Auto-complete makes me type ~20% faster (I type 100+ WPM)
2. Composer can work across a few files simultaneously to update something (e.g. updating a chrome extension's manifest while proposing a code change)
3. Write something that you know _exactly_ how it should work but are too lazy to author yourself (e.g. write a function that takes two lists of strings and pair-wise matches the most similar; allow me to pass the similarity function as a parameter; use OpenAI embedding distance to find the most similar pairings between the two lists).
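A minimal sketch of the function described in point 3, assuming the caller supplies the similarity function (an OpenAI-embedding-based one could be plugged in instead); the greedy matching and the helper are my own illustration, not Cursor output.

```python
from typing import Callable

def match_most_similar(
    left: list[str],
    right: list[str],
    similarity: Callable[[str, str], float],
) -> list[tuple[str, str]]:
    """Greedily pair each string in `left` with its most similar unused string in `right`."""
    remaining = list(right)
    pairs: list[tuple[str, str]] = []
    for a in left:
        if not remaining:
            break
        best = max(remaining, key=lambda b: similarity(a, b))
        pairs.append((a, best))
        remaining.remove(best)
    return pairs

# Any similarity function works; an embedding-distance one would go here instead.
def char_overlap(a: str, b: str) -> float:
    return len(set(a) & set(b)) / max(len(set(a) | set(b)), 1)

print(match_most_similar(["cat", "dog"], ["doge", "catalog"], char_overlap))
```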
I work in Rust, and I had to start working with several new libraries this month. One example is `proptest-rs`, a Rust property-testing library that defines a whole new grammar for writing the tests. I am 100% sure that I spent much less time getting on-boarded with the library's best practices and usage. I just quickly went through their book (to learn the vocabulary) and asked the AI to generate the code itself. I was very surprised that it did not make any mistakes, considering the library's weird custom grammar. I will keep trying it for at least another few months.
It was like having a second person in the editor with a mind of its own, constantly touching my code even when it should have left it alone. I found myself undoing the changes it made all the time.
- Generating wrappers and simple CRUD APIs on top of database tables, provided only with a DDL of the tables.
- Optimizing SQL queries and schemas, especially for less familiar SQL dialects—extremely effective.
- Generating Swagger comments for API methods. A joy.
- Re-creating classes or components based on similar classes, especially with Next.js, where the component mechanics often make this necessary.
- Creating utility methods for data conversion or mapping between different formats or structures.
- Assisting with CSS and the intricacies of HTML for styling.
- GPT4 o1 is significantly better at handling more complex scenarios in creation and refactoring.
Current challenges based on my experience:
- LLMs lack critical thinking; they tend to accommodate the user's input even if the question is flawed or lacks a valid answer.
- There’s a substantial lack of context in most cases. LLMs should integrate deeper with data sampling capabilities or, ideally, support real-time debugging context.
- Challenging to use in large projects due to limited awareness of project structure and dependencies.
I thought his "Changes to my workflow" section was the most interesting, coupled with the fact that coding productivity (churning out lines of code) was not something he found to be a benefit. However, IMO, the workflow changes he found beneficial seem a bit questionable in terms of desirability...
1) Having LLM write support libraries/functions from scratch rather than rely on external libraries seems a bit of a double-edged sword. It's good to minimize dependencies and not be affected by changes to external libraries, but OTOH there's probably a lot of thought and debugging that has been put into those external libraries, as well as support for features you may not need today but may tomorrow. Is it really preferable to have the LLM reinvent the wheel using untested code it's written channeling internet sources?
2) Avoiding functions (couched as excessive abstractions) in favor of having the LLM generate repeated copies of the same code seems like a poor idea, and will affect code readability, debugging and maintenance whereby a bugfix in one section is not guaranteed to be replicated in other copies of the same code.
3) Less hesitancy to use unfamiliar frameworks and libraries is a plus in terms of rapid prototyping, as well as coming up to speed with a new framework, but at the same time it is a liability, since the quality of LLM-generated code is only as good as the person reviewing it for correctness and vulnerabilities. If you are having the LLM generate code using a framework you are not familiar with, then you are at its mercy as to quality, same as if you cut and pasted some code from the internet without understanding it.
I'm not sure we've yet arrived at the best use of "AI" for developer productivity - while it can be used for everything and anything, just as ChatGPT can be asked anything, some uses are going to leverage the best of the underlying technology, while others are going to fall prey to its weaknesses and fundamental limitations.
There is a very vocal old guard who are stubborn about ditching their 10,000+ hours master-level expertise to start from zero and adapt to the new paradigm. There is a lot of skepticism. There are a lot of people who take pride in how hard coding should be, and the blood and sweat they've invested.
If you look at AI from 10,000 feet, I think what you'll see is not AGI ruining the world, but rather LLMs limited by regression, eventually training on their own hallucinations, but good enough in their current state to be amazing tools. I think that Cursor, and products like it, are to coding what Photoshop was to artists. There are still people creating oil paintings, but the industry — and the profits — are driven by artists using Photoshop.
Cursor makes coders more efficient, and therefore more profitable, and anyone NOT using Cursor in a hiring pool of people who ARE using it will be left holding the short straw.
If you are an expert level software engineer, you will recognize where Cursor's output is bad, and you will be able to rapidly remediate. That still makes you more valuable and more efficient. If you're an expert level software engineer, and you don't use Cursor, you will be much slower, and it is just going to reduce your value more and more over time.
The people who are negative about these things because they need to review the output seem to be missing the massive amount of time saved, imho.
Many users point to the fact that they spend most of their time thinking; I'm glad for them. Most of my time is spent gluing APIs, boilerplate, and refactoring, and on those aspects Cursor helps tremendously.
The biggest killer feature that I get from similar tools (I ditched Copilot recently in favor of it) is that they allow me to stay focused and in the flow longer.
I have a tendency to phase out when tasks get too boring or repetitive, or to get stressed out when I can't come up with a solution. Similarly, going to a search engine for an answer would often put me in a long loop of looking for answers deeply buried in a very long article (you need to help SEO after all, don't you?), and then it would be more likely that I'd get distracted by messages on my company chat or social media.
I can easily say that Cursor has made me more productive than I was one year ago.
I feel like the criticism many have comes from the wrong expectation that these tools will do the work for you, whereas they are more about easing the boring and sometimes the hard parts.
Oh my, oh my... How have I done this all these years before "AI" was a th- hype?
I did it without wasting even a fraction of the CO2 needed for these toys.
"AI" has some usecases, granted. But selling it as the holy grail and again sh•tting on the environment is getting more and more ridiculous by the day.
Humanity, even the smarter part, truly deserves what is coming.
Apes on a space rock
As an experiment, some time ago, I tried to build a TODO app entirely with AI prompts. I used a special serverless platform on the backend to store the data so that it would persist between page refreshes. I uploaded the platform's frontend components README file to the AI as part of the input.
Anyway, what happened is that it was able to create the TODO app quickly; it was mostly right after the first prompt and the app was storing and loading the TODOs on the server. Then I started asking for small changes like 'Add a delete button to the TODOs'; it got that right. Impressive!
All the code fit in a single file so I kept copying the new code and starting a new prompt to ask for changes... But eventually, in trying to turn it into a real product, it started to break things that it had fixed before and it started to feel like a game of whac-a-mole. Fixing one thing broke another and it often broke the same thing multiple times... I tried to keep conversations longer instead of starting a new one each iteration but the results were the same.
The sweet spot seems to be bootstrapping something new from scratch and getting all the boilerplate done in seconds. This is probably also where the hype comes from; it feels like magic.
But the issue is that once it gets slightly more complicated, things break apart and you run into a dead end quickly. For example, yesterday I wanted to build a simple CLI tool in Go (which, as a language plus stdlib, is outstandingly friendly to LLM codegen) that acts as a simple reverse proxy and (re-)starts the original thing in the background on file changes.
The AI was able to knock out _something_ immediately that indeed compiled, only it didn't actually work as intended. After lots of back-and-forth iterations (Claude mostly), the code ballooned in size while trying to figure out what the issue could be, adding all kinds of useless crap that kind-of-looks-helpful-but-isn't. After an hour I gave up, went through the whole code manually (a few hundred lines, single file), and spotted the issue immediately (holding a mutex lock that gets released with `defer` doesn't play well with a recursive function call). After pointing that out, the LLM was able to fix it and produced a version that finally worked - still with tons of crap and useless complexity everywhere. And that's a simple, straightforward coding task that can be accomplished in a single file in only a few hundred lines, greenfield style. And all my Claude chat tokens for the day got burned on this, only for me to have to dig in myself at the end.
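The original Go code isn't shown, but the bug class is easy to reproduce. Here is the analogous pattern sketched in Python: a non-reentrant lock held for the whole call (like a deferred mutex unlock in Go) deadlocks the moment the function recurses.

```python
import threading

lock = threading.Lock()  # non-reentrant, like Go's sync.Mutex

def restart(depth: int) -> None:
    with lock:  # released only when this call returns, like `defer mu.Unlock()`
        if depth > 0:
            # The recursive call tries to take the lock the caller still
            # holds, so the thread blocks on itself forever.
            restart(depth - 1)

# restart(1)  # uncomment to observe the hang
```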
LLMs are great at producing things in small, limited scopes (especially boilerplate-y stuff) or refactoring something that already exists, when they have enough context and essentially don't really think about a problem but merely change linguistic details (ultimately rewriting text to a different format) - it's a large LANGUAGE model after all.
But full-blown autonomous app building? Only if you are doing something that has been done exactly like that thousands of times before and is simple to begin with. There is lots of business value in that, though. Most programmers at companies don't do rocket science or novel things at all. It won't build any actual novelty - the ideal case is building an X for Y (like Uber for Catsitting), but never an initial X.
My personal productivity has gone through the roof since GPT-4/Cursor, though, but I guess I know how and when to use it properly. And developer demand will surge when the wave of LLM-coded startups get their funding and realize the codebase cannot be extended with LLMs anymore due to the complexity and the raw amount of garbage in there.
Using Cody currently with our company enterprise API key
This sounds like a nightmare.
I think the biggest problem with AI at the moment is that it incorrectly assumes that coding is the difficult part of developing software, but it's actually the easiest part. Debugging broken code is a lot harder and more time consuming than writing new code; especially if it's code that someone else wrote. Also, architecting a system which is robust and resilient to requirement changes is much more challenging than coding.
It boggles the mind that many developers who hate reading and debugging their team members' code love spending hours reading and debugging AI-generated code. AI is literally an amalgamation of other people's code.
> For example, suppose I have a block of code with variable names in under_score notation that I want to convert to camelCase. It is sufficient to rename one instance of one variable, and then tab through all the lines that should be updated, including the other related variables.
For me that would be :%s/camel_case/camelCase/gc, then yyyyyynyyyy as I confirm each change. Or if it's across a project, put the cursor on the word, SPC p % (M-x projectile-replace-regexp), RET on the word, camelCase, yyyynyyynynn as it brings me through all files to confirm. There are probably even better, smarter ways than that; I learn "just enough to get it done" and move on, to my continual detriment.
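For a one-off outside the editor, the same rename is also a one-liner; a generic Python sketch (not from the post):

```python
import re

def to_camel(name: str) -> str:
    # user_id -> userId, created_at_ts -> createdAtTs
    return re.sub(r"_([a-z0-9])", lambda m: m.group(1).upper(), name)

print(to_camel("user_id"), to_camel("created_at_ts"))
```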
> Many times it will suggest imports when I add a dependency in Python or Go.
I can write a function in python or typescript and if I define a type like:
function(post: Po
and wait half a second, I'll be given a dropdown of options that includes the Post type from Prisma. If I navigate to it (C-J) and press RET, it'll auto-complete the type and then add the import to the top, either as a new line or included with the existing import if something's already being imported from Prisma. The same works for Python; I haven't tried others. I'm guessing this functionality comes from LSP? Actually not sure, lol. Like I said, least knowledgeable emacs user. As for boilerplate, my understanding is most emacs people have a bunch of snippets they use, or they use snippet libraries for common languages like tsx or whatever. I don't know how to use these, so I don't.
I still intend to try Cursor, since my boss thinks it might improve productivity and thus wants me to see if that's true, and I don't want to be some kind of technophobe. However, I remain skeptical that these built-in tools are any more useful than me just quickly opening a ChatGPT window in my browser and pasting some code in, with the downside of losing all my bindings.
My spacemacs config: https://github.com/komali2/Configs/blob/master/emacs/.spacem...
Meanwhile, the Composer mode gives code changes a good shot, either in one file or multiple, and you can easily direct it towards specific files. I do wish it could be a bit smarter about which files it looks at, since unless you tell it about the file you keep your types in, it'll happily reimplement types it doesn't know of. Another big issue with Composer mode is that the product is just not really complete (as can be seen from how different the three UXes for applying edits are). It has reverted previous edits for me, even if they were saved on disk and the UI was in a fresh state (and even their "checkout" functionality lost the content).
The Cmd+K "edit these lines" mode has the most reliable behavior since it's such a self-contained problem where the implementation uses the least amount of tricks to make the LLM faster. But obviously it's also the least powerful.
I think it's great that companies are trying to figure this out but it's also clear that this problem isn't solved. There is so much to do around how the model gets context about your code, how it learns about your codebase over time (.cursorrules is just a crutch), and a LOT to do about how edits to code are applied when 95% of the output of the model is the old code and you just want those new lines of code applied. (On that last one, there are many ways to reduce output from the LLM but they're all problematic – Anthropic's Fast Edit feature is great here because it can rewrite the file super fast, but if I understand correctly it's way too expensive).
Why are both functions inlined, and why is FastAPI used at all? I'm also not seeing any network bindings. Is it bound to localhost (I doubt it), or does it immediately bind to all interfaces?
It's a 3-second thought from looking at Python code, a language I know only well enough to write small and buggy utilities (yet Python is widely popular, so LLMs have a ton of data to learn from). I know it's only a demo, but this video strengthens my feeling about a drop in critical thinking and a rise of McDonald's productivity.
McDonald's is not bad: it's great when you are somewhere you don't know and not feeling well. They have the same menu almost everywhere, and it's like 99% safe due to process standardization. Plus they can get you satiated in 20 minutes or less. It's still the type of food that you can't really feed on for long; and if you do, there will be consequences.
The silver lining is that it will most likely cut off from the field all the people who hate the domain but love the money, since that is exactly the kind of problem it automates away.
I would like to use it, but I literally cannot because of this bug!
I had two CSVs. One had an ISBN column; the other had isbn10, isbn13, and isbn columns. I tried to tell it to write Python code that would merge these two sheets by finding the matching ISBNs. It didn't work very well. It was trying to do pandas, then I tried to get it to use pure Python. It took what felt like an hour of back and forth with terrible results.
In a new chat, I told it that I wanted to know different algorithms for solving the problem. Once we weighed all of the options, it wrote perfect Python code. Took like 5 minutes. Like, duh, use a hashmap; why was that so hard?
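The hashmap version really is short; here is a sketch in pure Python with the csv module, assuming hypothetical file names and column names like the ones described:

```python
import csv

# Index every known ISBN (isbn10, isbn13, isbn) of the first sheet to its row.
index = {}
with open("books_detailed.csv", newline="") as f:  # hypothetical filename
    for row in csv.DictReader(f):
        for col in ("isbn10", "isbn13", "isbn"):
            if row.get(col):
                index[row[col]] = row

# Merge: look up each row of the second sheet by its single ISBN column.
merged = []
with open("books_simple.csv", newline="") as f:  # hypothetical filename
    for row in csv.DictReader(f):
        match = index.get(row.get("ISBN", ""))
        if match:
            merged.append({**row, **match})

print(len(merged), "rows matched")
```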
So for making changes to existing code, Copilot isn't helpful, and neither, it seems, is Cursor.
I can use Copilot to write some new methods to parse strings or split strings or convert to/from JSON or make HTTP calls. But anything that involves using or changing existing code doesn't yield good results.
I have also used Cursor to write Kubernetes client code with great success, because the API surface is so large it doesn't fit into my head that well (I'm not often writing such code these days), so that has been incredibly helpful.
So it’s not revolutionising my workflow but certainly a useful tool in some situations.
I'm not trying to generate programs with it; I find it still far too weak for that.
However, and while I believe the current models can't really reach it, there is nothing in my eyes that prevents creating an AI good enough for that.
Shared my 8 pro tips in this post towards the bottom https://betaacid.co/blog/cursor-dethrones-copilot
As a learning tool and for one-offs, AI can be very nice.
My theory is that in those cases it's hard to predict what to do from context, and the libraries are at the same time hyper-specialized and similar.
Example: creating a node and attaching a volume using Ansible looks similar for different cloud providers, but there are subtle differences in how to specify the location, etc.
If you have these skills, the productivity gains from tools like Cursor are insane.
If you lack any of these, it makes sense that you don't get the hype; you're missing a critical piece of the new development paradigm, and should work on that.
If cursor made those margins, humans 1 cursor 0
the architecture of what I am building.
Sure, you probably don't want to blindly copy or accept suggested changes, but when the tools work, they're like a pretty good autocomplete for various snippets and I guess quite a bit more in the case of Cursor.
If that helps you focus on problem solving and lets the tooling, language and boilerplate get out of the way a little bit more, all the better! For what it's worth, I'm probably sticking with JetBrains IDEs for the foreseeable future, since they have a lot of useful features and are what I'm used to (with VS Code for various bits of scripting, configuration etc.).
Show code.
I work in an environment right now where feeding proprietary code/docs into 3rd party hosted LLMs is a hard no-go, and we don't have any great locally hosted solution set up yet, so I haven't really taken the dive into actively writing code with LLM assistance. I feel like I should practice this skill, but the idea of using a tool like Cursor on personal projects just seems so antithetical to the point that I can't bring myself to actually do it.
It is much, much more than a ChatGPT wrapper. I'd encourage everyone to give it a shot with the free trial. If you're already a VSCode user, it only takes a minute to set up with the exact same devenv you already have.
Cursor has single-handedly changed the way I think about AI and its capabilities/potential. It's truly best-in-class and by a wide margin. I have no affiliation with Cursor, I'm just blown away by how good it is.
You can do most of the things the author showed with a carefully set-up IDE and magic tricks, but that's not the point. I don't want to spend a lifetime setting these things up only for them to break when I move to another language.
Also, where the tab-completion shines for me in Cursor is exactly the edge case where it knows when _not_ to change things. In the camel-casing example, if one of the names were already camelCased, it would know not to touch it.
For the chat and editing, I've gotten a pretty good sense as to when I can expect the model to give me a correct completion (all required info in context or something relatively generic). For everything else I will just sit down and do it myself, because I can always _choose_ to do so. Just use it for when it suits you and don't for when it doesn't. That's it.
There's just so many cases where Cursor has been an incredible help and productivity boost. I suspect that the complainers either haven't used it at all or dismissed it too quickly.