Be Suspicious of Success
The article argues that software which appears to work should be treated with skepticism, since success can hide bugs; it advocates testing both success and error scenarios and recommends resources for further learning in algorithms.
The article discusses the principle of "Be Suspicious of Success" (BSOS) in software development, emphasizing that successful software often contains hidden bugs. It references Leslie Lamport's insights on model checking, suggesting that a lack of errors in verification processes should raise suspicion. The author argues that code may appear to work for the wrong reasons, leading to potential failures in the future. Verification methods, while useful, cannot fully explain why code succeeds, making it essential to adopt practices like test-driven development and the "make it work, make it break" approach. The article also highlights the importance of testing both "happy paths" (successful scenarios) and "sad paths" (error handling scenarios), noting that many failures in distributed systems stem from trivial mistakes in error handling. The author concludes by recommending a blog focused on computer science algorithms, which provides valuable insights for programmers.
- Successful software may be buggy and should be approached with skepticism.
- Verification methods can indicate success but do not explain it.
- Testing should encompass both happy and sad paths to ensure robustness.
- Many system failures arise from errors in error handling mechanisms.
- The article recommends resources for further learning in computer science algorithms.
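The happy-path/sad-path distinction can be sketched with a small hypothetical helper (`parse_port` and its inputs are illustrative, not taken from the article): the happy path checks that valid input succeeds, while the sad paths check that invalid input fails loudly rather than being silently accepted.

```python
# Illustrative sketch of testing both paths; parse_port is a
# hypothetical helper, not something from the article.
def parse_port(s: str) -> int:
    port = int(s)  # raises ValueError on non-numeric input
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

# Happy path: well-formed input succeeds.
assert parse_port("8080") == 8080

# Sad paths: malformed input must fail loudly, not pass silently.
for bad in ["-1", "0", "70000", "not-a-port"]:
    try:
        parse_port(bad)
    except ValueError:
        pass  # expected
    else:
        raise AssertionError(f"accepted invalid port {bad!r}")

print("happy and sad paths both covered")
```

The loop over bad inputs is the part most suites skip, and it is exactly where the article says distributed-systems failures tend to hide.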
Related
Programmers Should Never Trust Anyone, Not Even Themselves
Programmers are warned to stay cautious and skeptical in software development. Abstractions simplify but can fail, requiring verification and testing to mitigate risks and improve coding reliability and skills.
On Building Systems That Will Fail (1991)
The Turing Lecture Paper by Fernando J. Corbató discusses the inevitability of failures in ambitious systems, citing examples and challenges in handling mistakes. It highlights the impact of continuous change in the computer field.
You've only added two lines – why did that take two days
The article highlights that in software development, the number of lines of code does not reflect effort. Effective bug fixing requires thorough investigation, understanding context, and proper testing to prevent recurring issues.
In the Labyrinth of Unknown Unknowns
The article highlights challenges in software testing, particularly "unknown unknowns," advocating for Property-Based Testing and advanced platforms to autonomously identify bugs and improve testing efficiency, preventing software failures.
Practices of Reliable Software Design
The article outlines eight practices for reliable software design, emphasizing off-the-shelf solutions, cost-effectiveness, quick production deployment, simple data structures, and performance monitoring to enhance efficiency and reliability.
- Many commenters emphasize the importance of thorough testing, including edge cases and mutation testing, to ensure software reliability.
- There is skepticism about the notion that software is "good" simply because it works, with concerns about hidden bugs and the implications of relying on software that may not be thoroughly vetted.
- Some discuss the role of code coverage tools and test-driven development as essential practices in improving software quality.
- Several comments highlight the tension between software marketing and actual reliability, suggesting that popular software may often be more buggy due to prioritization of features over thorough testing.
- There is a recognition that testing can only show the presence of bugs, not their absence, and that static analysis and type systems may play a crucial role in future software development.
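The mutation testing several commenters mention can be illustrated by hand: a mutation tool mechanically flips an operator and reruns the suite, and a "surviving" mutant exposes a missing test. A minimal hand-rolled sketch (real tools such as mutmut generate mutants automatically; these names are illustrative):

```python
# Hand-rolled illustration of the idea behind mutation testing;
# real tools (e.g. mutmut) generate mutants automatically.
def is_adult(age):
    return age >= 18

def is_adult_mutant(age):
    return age > 18  # mutant: >= flipped to >

def weak_suite(fn):
    # No boundary case -- both versions pass, so the mutant "survives".
    return fn(20) is True and fn(10) is False

def strong_suite(fn):
    # Adding the boundary case at 18 "kills" the mutant.
    return weak_suite(fn) and fn(18) is True

assert weak_suite(is_adult) and weak_suite(is_adult_mutant)
assert strong_suite(is_adult) and not strong_suite(is_adult_mutant)
```

A surviving mutant is evidence that the tests pass for the wrong reasons, which is precisely the failure mode the article warns about.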
I never measure coverage percentage as a goal, I don't even bother turning it on in CI, but I do use it locally as part of my regular debugging and hardening workflow. Strongly recommend doing this if you haven't before.
I'm spoiled in that the golang+vscode integration works really well and can highlight executed code in my editor in a fast cycle; if you're using different tools, it might be harder to try out and benefit from it.
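For a rough taste of the same workflow without editor integration, Python's stdlib `trace` module can count which lines a call actually executes (a crude stand-in for the Go + VS Code highlighting described above; `clamp` is an illustrative function, not from the thread):

```python
# Count executed lines with the stdlib trace module -- a crude
# stand-in for editor-integrated coverage highlighting.
import trace

def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(clamp, 15, 0, 10)  # exercises only the x > hi branch

counts = tracer.results().counts   # {(filename, lineno): hit count}
hit_lines = sorted(line for (_, line) in counts)
print(hit_lines)  # the "return lo" line never appears
```

Inspecting which branches never ran during a debugging session is the point; the percentage itself is incidental.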
This reminds me of the recent discussion of gettiers[1]. That article focused on Gettier bugs, but this passage discusses what you might call Gettier features.
Something that's gotten me before is Python's willingness to interpret a comma as a tuple. So instead of:
my_event.set()
I wrote:
my_event,set()
Which was syntactically correct, equivalent to:
_ = (my_event, set())
The auto formatter does insert a space though, which helps. Maybe it could be made to transform it as I did above; that would make it screamingly obvious.
One mechanism to verify that is by running a mutation-testing [0] tool. They are available for many languages; mutmut [1] is a great example for Python.
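The pitfall is easy to reproduce: the comma builds a tuple and discards it, so the typo is a syntactically valid, silent no-op.

```python
import threading

my_event = threading.Event()

# Intended: my_event.set()
# Typo: comma instead of dot. Python parses this as the tuple
# (my_event, set()) and discards it; Event.set() never runs.
my_event,set()

print(my_event.is_set())  # False -- the event was never set
```

A linter that flags unused expression statements catches this class of bug; the type checker alone does not, since the tuple expression is well-typed.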
Type systems and various forms of static analysis are going to increasingly shape the future of software development, I think. Large software systems especially become practically impossible to work with and impossible to verify and test without types.
Me: Hmmm.
Managers, a week later: We’re starting everyone on a 50% on-call rotation because there’s so many bugs that the business is on fire.
Anyway, now I get upset and ask them to define “works”, which… they haven’t been able to do yet.
If someone else wrote the code, your model of why it works being wrong doesn't mean anything is wrong other than your understanding.
Sometimes even if you wrote something that works and your own model is wrong, you don't necessarily have to fix anything: just learn the real reason the code works, go "oh", and leave it. :) (Revise some documentation and write some tests based on the new understanding.)
Is that actually desirable? This article articulates my exact gut feeling.
To be precise, it’s one of the big reasons, but it’s far from the only reason to write the test first.