July 1st, 2024

Programmers Should Never Trust Anyone, Not Even Themselves

Programmers are warned to stay cautious and skeptical in software development. Abstractions simplify but can fail, requiring verification and testing to mitigate risks and improve coding reliability and skills.


Programmers are advised to be paranoid and never fully trust their code or even themselves due to the inherent complexity and uncertainty in software development. The concept of abstractions, which simplify complex systems, is highlighted as a crucial tool for programmers. However, it is emphasized that abstractions can be "leaky" and may fail, leading to performance issues or unexpected behavior. The importance of verifying information, testing assumptions, and being aware of "unknown unknowns" is stressed to mitigate risks in coding. The article suggests a "trust, but verify" approach, advocating for a healthy dose of skepticism and continuous learning to navigate the challenges of software development successfully. Balancing the need for efficient problem-solving with thorough understanding and verification processes is crucial for programmers to write reliable code and grow as skilled professionals in the field.

31 comments
By @kryptiskt - 4 months
I think "trust, but verify" (as mentioned in the article) is a much more useful motto than "never trust anyone". The latter isn't a useful attitude; if you took it seriously, you would have to carefully check or rewrite everything from the ground up. And then you'd either have to trust the hardware anyway or enroll in a course on VLSI design. "Trust, but verify" is much more practicable, at least if you don't feel the need to verify absolutely everything[0], but are content with doing spot checks of all the features.

So, good article with a misleading title. Don't be paranoid.

[0] I don't consider 100% test coverage as anywhere near close enough for that.

By @Joel_Mckay - 4 months
After many years, I settled on a constraint based design philosophy:

1. type checking, data marshaling, sanity checks, and object signatures

2. user rate-limits and quota enforcement for access, actions, and API interfaces

3. expected runtime limit-check with watchdog timers (every thread has a time limit check, and failure mode handler)

4. controlled runtime periodic restarts (prevents slow leaks from shared libs, or python pinning all your cores because reasons etc.)

5. regression testing boundary conditions becomes the system auditor post-deployment

6. disable multi-core support in favor of n core-bound instances of programs consuming the same queue/channel (there is a long explanation why this makes sense for our use-cases)

7. Documentation is often out of date, but if the v.r.x.y API is still permuting on x or y, then avoid the project like old fish left in the hot sun. Bloat is one thing, but chaotic interfaces are a huge warning sign to avoid the chaos.

8. The "small modular programs that do one thing well" advice from the *nix crowd also makes absolute sense for large infrastructure. Sure a monolith will be easier in the beginning, but no one person can keep track of millions of lines of commits.

9. Never trust the user (including yourself), and automate as much as possible.

10. "Dead man's switch" that temporarily locks interfaces if certain rules are violated (i.e. host health, code health, or unexpected reboot in a colo.)
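Item 3 in the list above (expected-runtime limit checks with watchdog timers and a failure-mode handler) can be sketched roughly as follows. This is a minimal illustration only; `run_with_watchdog` and its arguments are hypothetical names, not taken from the original comment:

```python
import threading
import time

def run_with_watchdog(target, timeout_s, on_timeout):
    """Run `target` in a worker thread; invoke `on_timeout` if it overruns."""
    worker = threading.Thread(target=target, daemon=True)
    worker.start()
    worker.join(timeout_s)          # wait up to the time budget
    if worker.is_alive():
        on_timeout()                # failure-mode handler: log, lock interface, restart...
        return False
    return True

# Usage: a task that sleeps past its budget trips the watchdog.
tripped = []
ok = run_with_watchdog(lambda: time.sleep(0.2), 0.05, lambda: tripped.append("timeout"))
assert ok is False and tripped == ["timeout"]
```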

As a side note, assuming one could cover the ecosystem of library changes in a large monolith is silly.

Good code, in my opinion, is something so reliable you don't have to touch it again for 5 years. Such designs should not require human maintenance to remain operational.

There is a strange beauty in simple efficient designs. Rather than staring at something that obviously forgot its original purpose:

https://en.wikipedia.org/wiki/File:Giant_Knife_1.jpg

https://en.wikipedia.org/wiki/Second-system_effect

Good luck, and have a wonderful day =3

By @bruce511 - 4 months
>> Random access of a character in a text buffer could take constant time (for ASCII) or linear time (for UTF-8) depending on the character encoding

This is true, but incomplete. All Unicode encodings take linear time, not just UTF-8.

That's because a character can contain multiple code points. UTF-32 allows random access of code points in constant time, but not characters.

UTF-16 has variable-length code points, so it is the same as UTF-8 in that regard.
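As a rough illustration of the linear-time cost, here is a sketch of finding the n-th code point in a UTF-8 byte string by scanning for leading bytes. The helper name is hypothetical, and (per the comment's own caveat) it indexes code points, not full multi-code-point characters:

```python
def nth_char_utf8(data: bytes, n: int) -> str:
    """Return the n-th code point of UTF-8 `data` via a linear scan."""
    count = -1
    start = 0
    for i, b in enumerate(data):
        if b & 0xC0 != 0x80:        # leading byte (not a 0b10xxxxxx continuation)
            count += 1
            if count == n:
                start = i           # remember where the n-th code point begins
            elif count == n + 1:
                return data[start:i].decode("utf-8")
    if count == n:                  # n-th code point runs to the end of the buffer
        return data[start:].decode("utf-8")
    raise IndexError(n)

s = "héllo✓".encode("utf-8")
assert nth_char_utf8(s, 1) == "é"   # 2-byte code point
assert nth_char_utf8(s, 5) == "✓"   # 3-byte code point
```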

By @agentultra - 4 months
To me, this is the argument for formal verification. I don't want to hear a hand-waving explanation that this algorithm will always complete. If the algorithm is sufficiently complex I want proof. Otherwise, why would I believe you?

Abstractions, in the mathematical sense, always hold (unless there is a flaw in the definition itself). Axioms, in any sense, are always going to throw a wrench in things. Thank Gödel. But that shouldn't mean we cannot make progress.

Do the work, show your proof! Think hard!

Although sometimes all you need are a few unit tests.

The key is to develop the wisdom to know when unit tests aren't sufficient for the task at hand.

By @nottorp - 4 months
My favourite advice to give out is:

"If you feel very smart after writing a particularly intricate piece of code, it's time to rewrite it to be more clear."

Speaking of not trusting yourself.

By @pcwelder - 4 months
High quality article with some new advice.

> Read more documentation than just the bare minimum you need

I wish I had practiced this earlier; I'd be as good and quick as some of my brilliant colleagues.

By @tm11zz - 4 months
This is a really useful mindset as a programmer, but it backfires in real life, as it makes you an anxious person.
By @atribecalledqst - 4 months
> Failing tests indicate the presence of bugs, but passing tests do not promise their absence.

As somebody who primarily lives on the testing side of the house, I've definitely run into cases where the developer promises that their unit tests will make a new feature less buggy, then about 5 minutes later I either find a mistake in the test or I find a bug in something that the developer didn't think to test at all.

I've also seen instances where tests are written too early, using a data structure that gets changed in development, and then causes churn in the unit tests since now they have to be fixed too.

I've generally come to think that unit tests should be used to baseline something after it ships, but aren't that useful before that point (and could even be a waste of time if they take a long time to write). I don't think I'll ever be able to convince anybody at my company about this though lol
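The quoted point — passing tests do not promise the absence of bugs — can be illustrated with a minimal, hypothetical example: the test passes, yet a bug survives on an input nobody thought to test:

```python
def mean(xs):
    """Arithmetic mean. Hypothetical example; note the untested edge case."""
    return sum(xs) / len(xs)    # bug: raises ZeroDivisionError on an empty list

def test_mean():
    assert mean([1, 2, 3]) == 2  # passes, yet mean([]) still crashes

test_mean()
```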

By @m0llusk - 4 months
Also from 8 days ago https://news.ycombinator.com/item?id=40764826 and 9 days ago https://news.ycombinator.com/item?id=40760885.

The "A Python script should be able to run on any machine with a Python interpreter." remark is amusing. Recently ended up installing a whole new distribution version just to get Python 3.11 and the new library versions that the script I was running depended upon.

Giving a program finish and polish does have this kind of unlimited depth, but it is critical to remember that it comes after the initial coding, which is always rough with many gaps, since that is always where things start. And then we make many choices. Hearing people commit to strong typing because docs are always out of date is another chuckle. Maybe if the docs were kept right from the start, perhaps with some use of automatically generated reference pages, there wouldn't be such frequent problems getting types right in the first place? To each their own, but strong typing hype is just another currently popular method among many. Strong typing has its value, but like every other methodology it cannot be absolutely trusted to save programmers from error.

By @vzaliva - 4 months
As someone who routinely formally verifies code, I can confirm that you should not trust yourself. I keep finding bugs and hidden assumptions in the most trivial code or even in code covered with unit tests. In my opinion, formal verification is the only way to truly trust some code.
By @atoav - 4 months
As an electronics guy, this is really ingrained. Not only could you easily waste days on a problem if you assume things rather than check them, but in some cases you might also get a painful experience, or, depending on what you're working on, that mistake might even be your last: burn down a house, kill others, or what not. Checking your priors is one thing; ensuring your stuff fails gracefully when they are abnormal is another. Meanwhile, most software won't even handle a network disconnect gracefully.

As more and more stuff™ is moving from hardware into software, for totally understandable reasons, the amount of software that could absolutely ruin human lives is growing. This calls for higher standards across the whole field of software engineering.

By @jll29 - 4 months
This is a good post, and I agree with the "paranoid programmer" attitude being useful.

The post could also talk about yet another desirable paranoia, namely "Am I building the right thing?" - so talking to customers/clients/stakeholders/users is arguably one of the most important steps when seeking re-affirmation and controlling one's work. Nothing worse than people who think they understand a technical problem, but it's not the one that the customer wants solved...

By @cryptica - 4 months
> Programmers Should Never Trust Anyone, Not Even Themselves

Yep. Can confirm, the best programmers are often paranoid. I guess over time your brain becomes more logical and you start to notice inconsistencies everywhere around you... Then before you know it, it feels like you're living on planet of the apes.

By @mistermann - 4 months
> So if abstractions can be problematic, then should we try to understand a topic without abstractions (to know cars as they really are)? No. When you dig beneath abstractions, you just find more abstractions.

Demonstrating that language and colloquial "logic" are also abstractions.

> It’s turtles all the way down.

Memes and catch phrases are abstractions.

> These layers of abstractions go down until we hit our most basic axioms about logic and reality.

Reality too is an abstraction. Luckily all humans run Faith, and it runs invisibly, otherwise I suspect we'd have not made it this far. Though, it now seems like that which saved us may now take us down (climate change, nuclear weapons, other/unknown).

> Trust, but verify.

Haha...of course, just use logic and critical thinking!

By @megamix - 4 months
There's some leaky reasoning about the abstraction hierarchy, I think. Beyond, or up until, a certain layer, I know that I can trust the process. I'm fine baking; I trust the physical process.
By @simonw - 4 months
This is why I try to have my code commits bundle tests along with any corresponding implementation changes. The job of the test is to PROVE that the updated implementation code did what it was supposed to do - both now and into the future.

If the test doesn't do that - if it still passes even if you revert the implementation - then the test isn't doing its job.
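A minimal sketch of that idea, using a hypothetical `slugify` change (names and behavior invented for illustration): the test committed alongside the change pins the new behavior, so reverting the implementation makes it fail:

```python
def slugify(title: str) -> str:
    """New implementation: lowercases, replaces non-alphanumerics with '-',
    and (the new behavior) collapses runs of hyphens."""
    s = "".join(c.lower() if c.isalnum() else "-" for c in title)
    while "--" in s:
        s = s.replace("--", "-")
    return s.strip("-")

def test_slugify_collapses_hyphens():
    # Would fail against the old (reverted) implementation that kept "--".
    assert slugify("Hello -- World!") == "hello-world"

test_slugify_collapses_hyphens()
```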

By @ramesh31 - 4 months
You do need a certain level of self confidence to actually get anything done though, and not get lost in analysis paralysis. The mantra goes "strong opinions, loosely held". Be prepared to vigorously defend your position, while simultaneously being willing to immediately toss it aside in the face of overwhelming evidence.
By @BD103 - 4 months
I love the link to the article on time dilation in this article, fascinating! (https://pilotswhoaskwhy.com/2021/03/14/gnss-vs-time-dilation...)
By @keepworking - 4 months
I think this is a kind of "need for cognitive closure". "Never trust anyone" is different from "everyone can be wrong". The former keeps us from moving forward, but accepting that everyone can be wrong lets us move forward.
By @leecommamichael - 4 months
What does it do to the psyche, to not be able to trust anyone?
By @psychoslave - 4 months
> verifying code correctness is impossible

Somehow, taking into account the state of our industry, yes. But this is not an absolute truth.

I mean, we do have the theoretical frameworks and even tools to come up with solutions that make it possible to prove that code is correct. It's just that mapping this "know how" to the "how to deal with the expected flow rate feature" is very uncommon.

By @cgannett - 4 months
page is crashing for me so I figure I can share this: https://github.com/carbon-steel/carbon-steel.github.io/blob/...

perfectly readable, as it's Markdown on regular GitHub

By @dwighttk - 4 months
The radical trust of turning on a computer
By @NoPicklez - 4 months
"Trust but verify"

Also performing a critical self review and ensuring you remain skeptical

By @throwitaway222 - 4 months
-Programmers- > People
By @mewpmewp2 - 4 months
But then you say "trust, but verify" in the post.
By @globular-toast - 4 months
> In reality, the bank does not just store the money we deposit. It loans away/invests most of the money that people deposit. Our money does not sit idle in a large pile in a vault.

Nope. In reality the money doesn't exist. Amazing how many people think that somewhere a cartload of money is physically moving every month when they get paid. When was the last time you deposited anything in a bank? The abstraction is even higher than that. It's just numbers in a computer system. The abstractions work because banks are "too big to fail".

By @mulmboy - 4 months
> Failing tests indicate the presence of bugs, but passing tests do not promise their absence.

If only :)

Far too often I find myself working with tests that patch one too many implementation details, putting me in a refactoring pickle
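A minimal sketch of the kind of over-patched test being described, with hypothetical names: the test mocks a private helper by its dotted name, so a harmless rename during refactoring breaks the test even though the public behavior is unchanged:

```python
from unittest import mock

def _fetch():
    """Private helper (hypothetical); imagine it hits a database."""
    return [1, 2, 3]

def build():
    """Public function under test."""
    return sum(_fetch())

def test_build_overmocked():
    # Over-coupled: patches the private helper by name. Rename _fetch()
    # during a refactor and this test breaks, even if build() still works.
    with mock.patch(f"{__name__}._fetch", return_value=[10]):
        assert build() == 10

test_build_overmocked()
```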

By @lenerdenator - 4 months
You sort of have to trust yourself, but verify it against what others expect.

That's the point of code reviews.

Trust me; I'm an expert on never trusting myself.