Sonar is destroying my job and it's driving me to despair
A developer, Chris Hatton, voices frustration with SonarQube's rigid rules and their impact on Kotlin coding practices. Suggested remedies include giving users more flexibility to override rules, open discussion of rule validity, and better communication to make code quality tools more effective.
The article discusses a developer's frustration with SonarQube, a code quality tool, and its negative impact on their job. The developer, Chris Hatton, expresses concerns about Sonar's rigid rules affecting their Kotlin coding practices. They suggest improvements such as allowing users more flexibility to override rules and to discuss whether rules are valid. Other users in the Sonar Community offer insights and solutions, emphasizing constructive feedback and communication between developers and management. They acknowledge the difficulty of balancing code quality with practicality and suggest changes to Sonar's approach to address these issues. The discussion highlights the need for better understanding and collaboration between developers and stakeholders to improve the user experience and effectiveness of code quality tools like SonarQube.
Related
This superior (sic) is what a negative productivity employee looks like.
The only workaround I've found is to create a new function, fill it full of many useless no-op lines, and write a test for that function, just to bump the percentages back up. This is often harder than it sounds, because the linter will block many types of useless no-op code. We then remove the code as part of another ticket.
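For illustration, a minimal Kotlin sketch of that workaround (the function and test names are invented here): a function whose lines do nothing useful, plus a trivial JUnit test whose only job is to execute them and nudge the coverage percentage back over the gate. As the comment notes, a linter may well reject filler lines like these as useless code.

```kotlin
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test

// Deliberately pointless "coverage padding" function: every line is a no-op
// in disguise, existing only to be executed by the test below.
fun padCoverage(input: Int): Int {
    var result = input
    result += 1   // filler line
    result -= 1   // filler line
    return result
}

class PadCoverageTest {
    @Test
    fun `padCoverage returns its input unchanged`() {
        // Running this marks the lines of padCoverage as covered, bumping
        // the project's coverage percentage back above the gate threshold.
        assertEquals(42, padCoverage(42))
    }
}
```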
And Sonar is far from alone in this. JIRA is the most glaring example I can think of. Growing companies implement cargo-culted tools without understanding their own needs and requirements, and let themselves drift into templates or "best practices" that are neither relevant nor beneficial to their actual operations, resulting in an accumulation of frustrations whose impact on the work and the teams is acknowledged only far too late.
You need to put care not only into your tools themselves, but into how they are perceived by both your customers and their primary users (who may have very different, if not opposed, perspectives on how and why to use them): pricing, documentation, use cases...
This is especially complex when your tool answers a regulatory requirement, because it is very often received as a constraining, oppressive "solution" rather than an enabling one: it may be comfortable for you as a seller, and comfortable for your customer, but it can also count against the sale in the eyes of your customer's users, and that will shape their thinking when they become purchasing agents themselves.
I am not saying it's snake oil, but honestly, given how I've seen it being used, it's not far off.
# noqa: F401
- We support hold-the-line: we only lint on diffs, so you can refactor as you go. Gradual adoption.
- Use existing configs: use the standard OSS tools you know. Trunk Check runs them with standardized rules and output format.
- Better config management: define config within each repo, and still share configs across the org by defining your own plugin repos.
- Better ignores: you can define line-level and project-level ignores in the repo (sketched below).
- Still have nightly reporting: we let you run nightly on all changes and report them, to track code base health and catch high-risk vulnerabilities and issues. There's a web app to view everything.
Try it and let me know how it goes. https://docs.trunk.io/check/usage
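For context, the line-level ignores mentioned above are written as source comments. A hypothetical Kotlin example follows, assuming a trunk-ignore(linter/rule) comment form; the linter (detekt) and rule (MagicNumber) named here are only illustrative, so check the linked docs for the exact syntax your setup expects.

```kotlin
// Hypothetical line-level ignore expressed as a source comment.
// The linter and rule names below are placeholders for illustration only.
fun retryDelayMillis(attempt: Int): Long {
    // trunk-ignore(detekt/MagicNumber)
    return 250L * attempt
}
```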
Initially, people always come out of the woodwork insisting that the gate requirements must be hard blockers and that we can just hand-wave away the issues OP listed by tweaking the project rules. I always fight them, insisting that teams should be the owners and that, to gain quick adoption, it should just be treated as one more tool for PR reviewers. Eventually, people back off and come to accept that Sonar can be really helpful, but that at the end of the day the developers should be trusted to make the right call for the situation. It's not like we aren't still requiring code reviews. I feel for OP, but it's not Sonar's fault the tool is being used for evil instead of good.
The last time I implemented SonarCloud, I ran an anonymous survey to get people's opinions. For the most part, people liked the feedback Sonar provided. More junior engineers and more senior engineers liked it the most; mid-level engineers, not so much. The juniors liked getting quick feedback before asking for code reviews. The more senior engineers, who spend a lot of their time doing PR reviews, liked that it handled more of the generic stuff so they could focus on the business logic or other aspects of the PR. It's just another tool in the toolbox.
However, I saw it causing similar turd-polishing behaviour: sensible code having to be changed because it exceeded some inflexible metric, any kind of code movement causing existing issues to be reported as "new", false positives due to incomplete language feature support, etc.