Code review antipatterns
Code reviews can improve quality but may be misused through antipatterns like excessive nitpicking and conflicting demands, leading to frustration. Constructive feedback and collaboration are essential for success.
Code review is a common practice intended to enhance code quality and developer skills. However, it can also be misused through various "antipatterns" that obstruct progress and frustrate developers. These include behaviors such as "The Death of a Thousand Round Trips," where a reviewer continuously points out minor issues without fully reading the code, leading to endless revisions. "The Ransom Note" involves holding a patch hostage until the developer completes unrelated tasks. "The Double Team" sees two reviewers making conflicting demands, causing confusion and delays. Other antipatterns include vague criticisms that force developers to guess solutions, prioritizing trivial issues over significant ones, and demanding justifications for previously agreed-upon designs. The article emphasizes that these behaviors stem from a misuse of authority, where reviewers pursue personal agendas rather than focusing on improving the code. It encourages reviewers to be constructive, minimize unnecessary delays, and maintain a collaborative spirit to avoid creating a toxic environment that can hinder project success.
- Code review can enhance quality but may be misused through various antipatterns.
- Common antipatterns include excessive nitpicking, conflicting demands, and vague criticisms.
- Misuse of authority in code reviews can lead to frustration and hinder progress.
- Reviewers should focus on constructive feedback and collaboration.
- Maintaining a positive review culture is essential for project success.
Related
Code Reviews Do Find Bugs
Code reviews are effective in finding bugs, despite past doubts. Research shows reviewers identify defects, with small code chunks being most efficient. Reviews aid learning, improve maintainability, and promote knowledge sharing.
Sonar is destroying my job and it's driving me to despair
A developer, Chris Hatton, voices frustration with SonarQube's rigid rules impacting Kotlin coding practices. Suggestions include more user flexibility, rule discussions, and improved communication for better code quality tool effectiveness.
Beyond Clean Code
The article explores software optimization and "clean code," emphasizing readability versus performance. It critiques the belief that clean code equals bad code, highlighting the balance needed in software development.
Features I'd like to see in future IDEs
Proposed improvements for IDEs include queryable expressions for debugging, coloration of comments, embedding images, a personal code vault for snippets, and commit masks to prevent accidental code commits.
Good Refactoring vs. Bad Refactoring
Refactoring is vital for code quality, enhancing readability and maintainability. However, poor practices can lead to complexity and bugs. Incremental changes, thorough understanding, and testing are essential for success.
> In your first code review passes, pick small and simple nits. Variable names are a bit unclear; comments have typos in.
> Wait for the developer to fix those, and then drop your bombshell: there’s a much more fundamental problem with the patch, that needs a chunk of it to be completely rewritten – which means throwing away a lot of the nitpick fixes you’ve already made the developer do in that part of the patch.
I've done this more than once, and felt pretty bad about it each time. The problem is that I'm pretty good at spotting the little issues while I'm reading through some code. But I'm not as good at identifying the larger design issues until I'm thinking about it later.
Focus all your effort on catching errors that a linter and auto-formatter could catch. Shrug when questioned about architectural decisions; what really matters is that this field was PascalCase when it should be camelCase.
When questioned why this isn't caught automatically by the build process, you have a few stock responses (Bonus: These may well be true!):
- Linters are too slow, and a fast development experience is crucial
- Linting the entire existing code base would effectively destroy "git blame" as a useful tool.
- Different teams can't agree on whether to column-align wrapped function arguments, so they've agreed to disagree on this.
So instead of automating away the mental overhead of checking for spaces vs tabs, or LF vs CRLF, make sure you go over the PR with your own linter and nitpick every violation, even the suggestion-level ones, before you even consider working out the validity of the submission.
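For what it's worth, the stock objections above all have standard mechanical fixes. A minimal sketch of a CI lint gate follows; the tool names (black, flake8) and the branch name are placeholders for whatever your stack actually uses:

```shell
# Sketch of a CI lint gate that takes style nitpicks out of human review.
# black/flake8 and origin/main are placeholders; substitute your own tools.
set -eu

lint_gate() {
  # --check makes the formatter fail the build without rewriting files.
  black --check . && flake8 .
}

# "Linters are too slow": lint only the files this branch actually touches.
changed_files() {
  git diff --name-only origin/main...HEAD -- '*.py'
}

# "It would destroy git blame": land the one-off reformat as a single
# commit, record its hash in .git-blame-ignore-revs, and tell git to skip
# it when assigning blame (supported since git 2.23):
#   git config blame.ignoreRevsFile .git-blame-ignore-revs
echo "lint gate configured"
```

The `blame.ignoreRevsFile` setting in particular means a whole-codebase reformat need not be the blame-destroying event it is often assumed to be.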
The Ransom Note
> This particular patch seems especially important to the developer who submitted it. (Maybe they said that outright, as part of their argument trying to persuade you to take the patch. Or maybe you just read between the lines.)
> But this patch isn’t especially vital to you – so you’re in a position of power! Now you can hold the change they need hostage until they do lots of extra tangentially related work, which doesn’t really need to be in the same commit, but which is important to you.
The reviewer outright denies this occurs, and it occurs every time.
For a commit changing the documentation, he will then want the entire documentation redone, including pages that aren't being changed.
And will refuse to approve it until every issue he's brought up has been addressed.
I've just stopped asking them for PR reviews, and ask other teams to review my PRs.
In my case, another engineer who was gunning for my position as tech lead started using several of the tactics listed in OP's post to delay and bog down my pull requests (where previously we had kept a good relationship), while getting their own PRs rubber-stamped by a junior on our team while I was asleep (America vs. European hours).
Management noticed, permanently soured on me, and I was never able to recover from the damage done. I didn't fully realize what was happening until it was too late. I was laid off less than a month later and the other engineer was promoted. It was an extremely valuable lesson, though, and I got a better job within a month anyway; it's something I keep a close eye on now.
FWIW:
1. Use a formatting tool and a linter in the build chain = zero formatting bullshit nitpicks.
2. Ask questions wherever possible rather than criticising. It's kinder and also less embarrassing when you thought you saw a bug and it wasn't.
3. This is the reviewer's chance to stay up to date with the codebase.
4. This is the reviewee's chance to share blame for mistakes. Nobody who reviewed the code can crap on them for a bug discovered later, and people who couldn't be bothered to review can't complain either.
5. Make positive comments whenever you can honestly do so - just reduces the stress and fosters goodwill.
6. People who behave like arseholes in PRs are usually the kind you don't want in your team. i.e. it's a way of detecting such people - see how they use a bit of power.
It is easy to find nits. Most people will notice a spelling error out of the corner of their eye and zoom in on it. (I'm a terrible speller and I do this all the time). It is much harder to notice that the code is wrong and has bugs. I've passed many code reviews with a few nits to fix and push, only the next day did I realize there were serious problems with the code.
> The Ransom Note
A developer submits a patch that's especially important to them. When you point out that additional changes are needed elsewhere to make the component stay logically consistent, you get told that those changes are unrelated to the problem being solved (their problem being solved).
> The Guessing Game
A developer submits a substantial patch. The contents of the patch indicate that the submitter has not understood the design of the code they're modifying. When the reviewer communicates this back to the submitter, the submitter expects the reviewer to drop what they're working on and instead design the feature the submitter needs.
> The Priority Inversion
A large patch has been submitted with a poor explanation of the underlying design. Properly reviewing whether the patch makes sense takes time. The submitter (publicly) complains that they (verifiably) aren't getting responses to their code reviews and that this is holding back feature X.
> The Catch-22
There's a tradeoff to be made, and the decision made by the submitter is far off the mark.
---
The problem usually isn't picky reviewers or sloppy submitters, it's the lack of clear contribution guidelines.
For one-off reviews, set the expectation: the developer is asking for feedback, not permission. They don't need to justify to the reviewer why they didn't incorporate all the changes. Don't give people pointless authority over others. This will solve most of the problems with submissions.
For more structured/important reviews, have a third-party moderator decide on disputed code changes. If you can't have this, then insist on more than one reviewer and state you'll only make changes where the reviewers have consensus. This often solves style issues.
Changes I would like to see, but have not seen:
Reviewers should not ask questions - they should state concerns. So no "Why didn't you do this via X?". Instead, say "I think doing it via X instead of Y is better because ... ".
And definitely banned: "Why did you solve it this way?"
I'm sure they exist in the working world, but 5 years in across 4 different companies I've yet to encounter it (beyond what I've tried to provide to others in my code reviews on their PRs).
Just because your code forge has a code review feature doesn't mean that all your code review behavior must be tracked in it.
A lot of these have some natural root causes. These apply pretty generally to both reviewer and reviewee.
- not enough context (why is this change needed, why is the existing code this way)
- not enough shared direction (we should do this thing, we've already decided we're not doing this thing)
- misaligned expectations of time (review this 1000-line PR for me; I have a full-time job)
- hidden background (why can't we just... / we can't implement that because of this historical precedent)
- different personality types / disorders (ASD and other things can mean nitpicks / technical issues are just as "important" as larger issues)
- tooling problems (no way to defer submitting just the big things and hold back on the nits)
- the code is a proxy for things which are part of the identity of the writer (tabs vs spaces)
===
The main things that I've seen that seem to help avoid these things are:
0. when possible establish some sort of rapport above and beyond just the review
1. assume positive intent. assume nothing else
2. communicate expectations / constraints
3. communicate appropriate background.
4. don't treat all communication the same, different reviewers sometimes need different approaches.
5. remember that the review is of the code, not the person (this applies to both parties)
One analogous challenge with code reviews occurs when the reviewer can't quickly see something like a (a) goal and (b) outline. Without these two, it can make it hard to "navigate" what kind of feedback would be helpful.
a. In many cases, the reviewer and reviewee might think they have the same goal, but this is often subtly or glaringly wrong. Even people in the same organization can serve different masters.
b. Different software cultures have different ideas of planning: outlines, design documents, whatever. Some cultures wing it. Some people keep this kind of thing in their head, but it is hard to quickly share your head with someone else. Without some notion of the contours of the problem, how can you properly assess the code?
Every decent IDE has an option for setting up some kind of configuration for how code should be named and structured.
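One editor-agnostic way to pin such conventions down is an EditorConfig file checked into the repository root; most IDEs honor it natively or via a plugin. A minimal sketch (the specific values are illustrative, not recommendations; naming rules still need a language-specific linter, which EditorConfig does not cover):

```ini
# .editorconfig — honored by most editors/IDEs, natively or via plugin
root = true

[*]
charset = utf-8
end_of_line = lf
indent_style = space
indent_size = 4
trim_trailing_whitespace = true
insert_final_newline = true
```

Because the file travels with the repository, it settles whitespace and line-ending questions once, instead of per-reviewer.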
I think code reviews should be somewhat pedantic; they should be done with quality in mind and with an awareness of how the code may change in the future.
The list of patterns here is valid, but if they tend to happen more than feels comfortable for the team, look at the culture of how you work.
When you write the code together, a lot of these don't even come up, because you discussed everything already. Sometimes after pair programming, no code review is even needed!
This kind of dysfunction happens more often on teams with incompetent people in control but sometimes temporarily happens on otherwise high functioning teams when multiple people are fighting over philosophical control of the project.
But maybe it would only work for peers on the same project.
The more knotted the mismatch between the actual hierarchy and the perceived hierarchy, the more of this kind of behavior you see. The more people feel respected, the less they have to force behavior from others to make up for the feeling of a lack of respect.
Particularly toxic are places where a local tyrant believes they deserve a lot more respect than they actually do (e.g. expert beginnerism).
On my own, though, I'm certainly guilty of the thousand round trips. I tend to mentally group nits and not spell out every one that I find.
The Nitpicker's Gambit: The reviewer fixates on trivial style issues like whitespace, bracket placement, variable naming conventions etc. They make the developer conform perfectly to their preferred style, even if it's not specified in the team's coding guidelines.
The Silent Treatment: The reviewer provides no feedback at all for a long time after the review is requested, but does respond just often enough to keep the review "active". The author has to ping several times to get any response.
The Tunnel Vision: The reviewer only looks at the specific lines changed, without considering the broader context of the code. They suggest changes that are locally valid but deliberately inconsistent with the overall design or architecture.
The Ad Hominem: The reviewer makes snarky comments about the code in a way that implies the author is inexperienced and/or incompetent without directly saying so (and opening themselves to accusations of meanness).
The Philosophical Debate: The reviewer gets into a long back-and-forth debate in the review comments about a matter of opinion like whether inheritance or composition is better. The actual issue at hand gets lost in the abstract discussion.
If you work in an environment where people lord code review over you in any way then you’re on a hiding to nowhere. That applies equally if you expect reviewers to not, you know, review your code.
I see this discussion of “nitpicks” constantly. If it’s a tiny thing that brings the code in line with the existing codebase then just do it rather than fighting it. If there’s some ambiguity then find a rule as a team and apply it.
I have no idea how people work in environments where you’re not all working together.
Not to be confused with the PR being "too big" because it is not atomic; that's a different topic, something to teach the more junior teammates.