Programmers Should Never Trust Anyone, Not Even Themselves
Programmers are warned to stay cautious and skeptical in software development. Abstractions simplify but can fail, requiring verification and testing to mitigate risks and improve coding reliability and skills.
Programmers are advised to be paranoid and never fully trust their code or even themselves due to the inherent complexity and uncertainty in software development. The concept of abstractions, which simplify complex systems, is highlighted as a crucial tool for programmers. However, it is emphasized that abstractions can be "leaky" and may fail, leading to performance issues or unexpected behavior. The importance of verifying information, testing assumptions, and being aware of "unknown unknowns" is stressed to mitigate risks in coding. The article suggests a "trust, but verify" approach, advocating for a healthy dose of skepticism and continuous learning to navigate the challenges of software development successfully. Balancing the need for efficient problem-solving with thorough understanding and verification processes is crucial for programmers to write reliable code and grow as skilled professionals in the field.
Related
Laziness is the source of Innovation and Creativity
Laziness can spur innovation in programming by encouraging efficiency and problem-solving. Embracing laziness responsibly can lead to creative and efficient solutions, promoting a balance between productivity and creativity.
The software world is destroying itself (2018)
The software development industry faces sustainability challenges like application size growth and performance issues. Emphasizing efficient coding, it urges reevaluation of practices for quality improvement and environmental impact reduction.
Getting 100% code coverage doesn't eliminate bugs
Achieving 100% code coverage doesn't ensure bug-free software. A blog post illustrates this with a critical bug missed despite full coverage, leading to a rocket explosion. It suggests alternative approaches and a 20% coverage minimum.
Misconceptions about loops in C
The paper emphasizes loop analysis in program tools, addressing challenges during transition to production. Late-discovered bugs stress the need for accurate analysis. Examples and references aid developers in improving software verification.
A Bunch of Programming Advice I'd Give to Myself 15 Years Ago
Marcus offers programming advice emphasizing prompt issue resolution, balancing speed and quality, improving tool proficiency, deep bug investigations, leveraging version control, seeking feedback, and enhancing team collaboration for optimal problem-solving.
So, good article with a misleading title. Don't be paranoid.
[0] I don't consider 100% test coverage as anywhere near close enough for that.
1. type checking, data marshaling, sanity checks, and object signatures
2. user rate-limits and quota enforcement for access, actions, and API interfaces
3. expected runtime limit-check with watchdog timers (every thread has a time limit check and a failure-mode handler; see the sketch after this list)
4. controlled runtime periodic restarts (prevents slow leaks from shared libs, or python pinning all your cores because reasons etc.)
5. regression tests on boundary conditions become the system auditor post-deployment
6. disable multi-core support in favor of n core-bound instances of programs consuming the same queue/channel (there is a long explanation why this makes sense for our use-cases)
7. Documentation is often out of date, but if the v.r.x.y API is still permuting on x or y, then avoid the project like old fish left in the hot sun. Bloat is one thing, but chaotic interfaces are a huge warning sign to avoid the chaos.
8. The "small modular programs that do one thing well" advice from the *nix crowd also makes absolute sense for large infrastructure. Sure a monolith will be easier in the beginning, but no one person can keep track of millions of lines of commits.
9. Never trust the user (including yourself), and automate as much as possible.
10. "Dead man's switch" that temporarily locks interfaces if certain rules are violated (i.e. host health, code health, or unexpected reboot in a colo.)
As a side note, assuming one could cover the ecosystem of library changes in a large monolith is silly.
Good code in my opinion, is something so reliable you don't have to touch it again for 5 years. Such designs should not require human maintenance to remain operational.
There is a strange beauty in simple efficient designs. Rather than staring at something that obviously forgot its original purpose:
https://en.wikipedia.org/wiki/File:Giant_Knife_1.jpg
https://en.wikipedia.org/wiki/Second-system_effect
Good luck, and have a wonderful day =3
This is true, but incomplete. All Unicode encodings take linear time to index by character, not just UTF-8.
That's because a single character (grapheme) can consist of multiple code points. UTF-32 allows random access to code points in constant time, but not to characters.
UTF-16 has variable-length code points, so it is the same as UTF-8 in that regard.
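A quick Python illustration of the distinction (the combining-accent and emoji examples are mine, not from the comment):

```python
# A user-perceived character ("grapheme cluster") can span multiple code points,
# so even fixed-width UTF-32 only gives constant-time access to code points.
cafe = "cafe\u0301"          # "café" spelled with a combining acute accent
print(len(cafe))             # 5 code points, though it renders as 4 characters

# In UTF-16, code points outside the Basic Multilingual Plane need two code
# units (a surrogate pair), so code-point indexing is not constant-time either.
rocket = "\U0001F680"                         # one code point...
print(len(rocket.encode("utf-16-le")) // 2)   # ...but 2 UTF-16 code units

# UTF-32 is fixed-width per code point: 4 bytes each.
print(len(cafe.encode("utf-32-le")) // 4)     # 5 code units for 5 code points
```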
Abstractions, in the mathematical sense, always hold (unless there is a flaw in the definition itself). Axioms in any sense are always going to throw a wrench in things. Thank Gödel. But that shouldn't mean we cannot make progress.
Do the work, show your proof! Think hard!
Although sometimes all you need are a few unit tests.
The key is to develop the wisdom to know when unit tests aren't sufficient for the task at hand.
"If you feel very smart after writing a particularly intricate piece of code, it's time to rewrite it to be more clear."
Speaking of not trusting yourself.
> Read more documentation than just the bare minimum you need
I wish I had practiced this earlier; I'd be as good and as quick as some of my brilliant colleagues.
As somebody who primarily lives on the testing side of the house, I've definitely run into cases where the developer promises that their unit tests will make a new feature less buggy, then about 5 minutes later I either find a mistake in the test or I find a bug in something that the developer didn't think to test at all.
I've also seen instances where tests are written too early, using a data structure that gets changed in development, and then causes churn in the unit tests since now they have to be fixed too.
I've generally come to think that unit tests should be used to baseline something after it ships, but aren't that useful before that point (and could even be a waste of time if they take a long time to write). I don't think I'll ever be able to convince anybody at my company about this though lol
The "A Python script should be able to run on any machine with a Python interpreter." remark is amusing. Recently ended up installing a whole new distribution version just to get Python 3.11 and the new library versions that the script I was running depended upon.
Giving a program finish and polish does have this kind of unlimited depth, but it is critical to remember that it comes after the initial coding, which is always rough and full of gaps, since that is where things start. And then we make many choices. Hearing people commit to strong typing because docs are always out of date is another chuckle. Maybe if the docs were kept right from the start, perhaps with some automatically generated reference pages, there wouldn't be such frequent problems getting types right in the first place? To each their own, but strong-typing hype is just one currently popular method among many. Strong typing has its value, but like every other methodology it cannot be absolutely trusted to save programmers from error.
As more and more stuff™ moves from hardware into software, for totally understandable reasons, the amount of software that could ruin human lives keeps growing. This calls for higher standards across the whole field of software engineering.
The post could also talk about yet another desirable paranoia, namely "Am I building the right thing?" - talking to customers/clients/stakeholders/users is arguably one of the most important steps when seeking reaffirmation and checking one's own work. Nothing worse than people who think they understand a technical problem, but it's not the one the customer wants solved...
Yep. Can confirm, the best programmers are often paranoid. I guess over time your brain becomes more logical and you start to notice inconsistencies everywhere around you... Then before you know it, it feels like you're living on planet of the apes.
Demonstrating that language and colloquial "logic" are also abstractions.
> It’s turtles all the way down.
Memes and catch phrases are abstractions.
> These layers of abstractions go down until we hit our most basic axioms about logic and reality.
Reality too is an abstraction. Luckily all humans run Faith, and it runs invisibly, otherwise I suspect we'd have not made it this far. Though, it now seems like that which saved us may now take us down (climate change, nuclear weapons, other/unknown).
> Trust, but verify.
Haha...of course, just use logic and critical thinking!
If the test doesn't do that - if it still passes even if you revert the implementation - then the test isn't doing its job.
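For illustration, here is a minimal hypothetical sketch of a test that can never fail, because it mocks away the very function it claims to test; reverting the implementation is effectively a manual mutation test, and a test like this would never notice. The function and values are made up:

```python
from unittest.mock import patch

def apply_discount(price, rate):
    return price * (1 - rate)

def test_apply_discount_proves_nothing():
    # The function under test is patched away, so the assertion checks the
    # mock, not the real code -- this test keeps passing even if the
    # implementation is reverted or broken.
    with patch(f"{__name__}.apply_discount", return_value=80.0):
        assert apply_discount(100.0, 0.2) == 80.0
```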
Somehow, taking into account the state of our industry, yes. But this is not an absolute truth.
I mean, we do have the theoretical frameworks and even tools to come up with solutions that allow us to prove that code is correct. It's just that mapping this know-how onto "how to deal with the expected flow rate feature" is very uncommon.
perfectly readable, as it's Markdown on regular GitHub
Also performing a critical self review and ensuring you remain skeptical
Nope. In reality the money doesn't exist. Amazing how many people think that somewhere a cartload of money is physically moving every month when they get paid. When was the last time you deposited anything in a bank? The abstraction is even higher than that. It's just numbers in a computer system. The abstractions work because banks are "too big to fail".
If only :)
Far too often I find myself working with tests that patch one too many implementation details, putting me in a refactoring pickle
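A hypothetical sketch of that pickle: the test below pins an internal helper by name, so a harmless refactor (renaming or inlining _fetch_rows) breaks the test even though the public behaviour of get_report is unchanged. All names here are illustrative:

```python
from unittest.mock import patch

# Production code: get_report() is the public behaviour; _fetch_rows() is an
# internal detail that a refactor might rename or inline.
def _fetch_rows(db):
    return db.query("SELECT * FROM sales")

def get_report(db):
    return {"total": sum(row["amount"] for row in _fetch_rows(db))}

# Over-specified test: it patches the private helper by name, coupling the
# test to the current implementation rather than to observable behaviour.
def test_get_report_overpatched():
    with patch(f"{__name__}._fetch_rows",
               return_value=[{"amount": 5}, {"amount": 7}]):
        assert get_report(db=None) == {"total": 12}
```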
That's the point of code reviews.
Trust me; I'm an expert on never trusting myself.