January 17th, 2025

AI Coding Assistant Is Gaslighting You – The Hidden Cost of Uncertainty

AI coding assistants are unpredictable, complicating developers' decision-making. Simple prompting may be more effective than autonomous agents. Improvements should focus on clarity and complementing human expertise while acknowledging limitations.

The article discusses the unpredictability and cognitive burden of AI coding assistants, focusing on a review of the tool Devin. While these tools can be powerful, their inconsistent performance creates uncertainty for developers, who struggle to decide when to use them. The author calls this the "slot machine problem": every interaction is a gamble, because a tool that excels at one task may fail at a nearly identical one. That unpredictability disrupts the normal path to mastery in software development, where developers rely on building deterministic, transferable knowledge.

The article also criticizes the marketing of AI assistants as "colleagues," arguing that their actual capabilities do not map onto traditional software engineering roles. Developers may instead find more success with simple, focused prompting of language models, which offers greater control and predictability, as sketched below.

To improve AI coding assistants, the article proposes several strategies: capability clustering, confidence signaling, bounded autonomy, and pattern recognition. Ultimately, AI coding assistance should complement human developers' strengths while being transparent about its limitations, reducing the cognitive overhead of deciding when to reach for these tools.
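
As a rough illustration of the "simple, focused prompting" the article favors, here is a minimal sketch, assuming the OpenAI Python SDK. The model name, prompts, and helper function are illustrative, not from the article: one tightly scoped request, one response, and the developer reviewing the output before anything touches the codebase.

    # Minimal single-turn prompting: no tool use, no multi-step planning.
    # Assumes the OpenAI Python SDK; model name and prompts are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def focused_prompt(task: str, code_context: str) -> str:
        """Send one narrowly scoped request and return the raw answer."""
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Answer only the question asked. Do not speculate."},
                {"role": "user",
                 "content": f"{task}\n\nRelevant code:\n{code_context}"},
            ],
        )
        return response.choices[0].message.content

The predictability comes from the scope: the model is asked for text, not for actions, so a bad answer costs one review rather than an unwound series of edits.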

- AI coding assistants exhibit unpredictable performance, creating uncertainty for developers.

- The "slot machine problem" complicates decision-making on when to use AI tools.

- Simple prompting of language models is often more effective than autonomous agents.

- Proposed improvements include clearer capability definitions and confidence indicators (a sketch of such a gate follows this list).

- The goal is to create tools that complement human expertise while acknowledging limitations.
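
Of the proposed strategies, bounded autonomy and confidence signaling combine naturally into a gating pattern. The sketch below is one possible shape for it, not anything specified in the article: the names and the threshold are invented, and it assumes a confidence estimate already exists, which is exactly what the comments below question.

    AUTO_APPLY_THRESHOLD = 0.9  # invented cutoff; choosing it is the hard part

    def handle_suggestion(suggestion: str, confidence: float) -> str:
        """Act autonomously only inside a bounded, high-confidence region."""
        if confidence >= AUTO_APPLY_THRESHOLD:
            # Bounded autonomy: the tool may proceed on its own here.
            return f"auto-applied (confidence {confidence:.2f})"
        # Confidence signaling: below the cutoff, surface both the suggestion
        # and the uncertainty instead of silently gambling on an edit.
        return f"needs human review (confidence {confidence:.2f}):\n{suggestion}"

    print(handle_suggestion("rename variable `tmp` to `buffer`", 0.95))
    print(handle_suggestion("rewrite the auth middleware", 0.40))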

2 comments
By @Terr_ - 4 months
> Confidence Signaling: AI assistants could provide clear, reliable indicators of their confidence

I'm not sure this is possible: In most LLM applications we humans are perceiving the (un-)confidence from the lines given to a fictional robotic story character. Simply change the name of the character to Zapp Brannigan and it becomes a confident expert on everything.

What we want is the confidence of the author that their character is proposing good ideas that will work... However, the only confidence the real-world LLM can offer is "how much trouble did I have finding a next word that seems to fit."
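
To make the comment's distinction concrete: the "trouble finding a next word" signal is observable as per-token log-probabilities. A minimal sketch, assuming the OpenAI Python SDK's logprobs option (model name and prompt are placeholders); note that this measures fluency of the wording, not correctness of the idea behind it.

    # Per-token log-probabilities: the model's only native "confidence."
    # Assumes the OpenAI Python SDK with logprobs enabled.
    import math
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": "Is this migration script safe?"}],
        logprobs=True,
    )

    token_logprobs = [t.logprob for t in response.choices[0].logprobs.content]
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    print(f"mean token probability: {math.exp(avg_logprob):.3f}")
    # A high value only means the wording was easy to predict -- which is
    # exactly why a confidently phrased wrong answer scores just as well.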