July 19th, 2024

Never Update Anything

The article critiques frequent software updates, citing time constraints, confusion over update types, and companies pushing paid upgrades. It highlights challenges and issues caused by updates, questioning their necessity.

Never Update Anything

The article argues against the common practice of frequent software updates, highlighting the time-consuming nature of keeping up with updates and the potential for breaking changes. It discusses the challenges of differentiating between types of updates, such as major releases, security fixes, bug patches, and feature updates. The author suggests that the industry's approach to updates can be problematic, with examples like software companies discontinuing support for older versions to push users towards paid upgrades. The article emphasizes the burden of managing updates for individuals and organizations, pointing out instances where updates have caused issues or forced unwanted changes. It concludes by underscoring the time and effort required to stay updated while balancing other responsibilities, questioning the necessity and impact of constant software updates in a busy and demanding world.

40 comments

By @SoftTalker - 4 months
"In my eyes it could be pretty nice to have a framework version that's supported for 10-20 years and is so stable that it can be used with little to no changes for the entire expected lifetime of a system."

This is what applications used to be like, before the web and internet hit and regular or even push updating became easy.

It was simply so difficult and expensive to provide updates once the software was in the customer's hands that it was done intentionally and infrequently. For the most part, you bought software, installed it, and used it. That was it. It never changed, unless you bought it again in a newer version.

By @KronisLV - 4 months
Oh hey, I was wondering why the VPS suddenly had a load average of over 100 and restarted Docker since the containers were struggling; now I know why (the site should be back for a bit). That won't necessarily fix it; I might need to migrate the blog over to something else, with a proper cache, alongside actually writing better articles in the future.

I don't think the article itself holds up that well; it's just that updates are often a massive pain, one that you have to deal with somehow regardless. Realistically, LTS versions of OS distros and technologies that don't change often will lessen the pain, but not eliminate it entirely.

And even then, you'll still need to deal with breaking changes when you're eventually forced to upgrade across major releases (e.g. JDK 8 to something newer after EOL) or to migrate once a technology dies altogether (e.g. AngularJS).

It's not like people will backport fixes for anything indefinitely either.

By @pron - 4 months
Because of this, in the JDK we've adopted a model we call "tip & tail". The idea is that there are multiple release trains, but they're done in such a way that 1/ different release trains target different audiences and 2/ the model is cheap to maintain -- cheaper, in fact, than many others, especially that of a single release train.

The idea is to realise that there are two different classes of consumers who want different things, and rather than try to find a compromise that would not fully satisfy either group (and turns out to be more expensive to boot), we offer multiple release trains for different people.

One release train, called the tip, contains new features and performance enhancements in addition to bug fixes and security patches. Applications that are still evolving can benefit from new features and enhancements and have the resources to adopt them (by definition, or else they wouldn't be able to use the new features).

Then there are multiple "tail" release trains aimed at applications that are not interested in new features because they don't evolve much anymore (they're "legacy"). These applications value stability over everything else, which is why only security patches and fixes to the most severe bugs are backported to them. This also makes maintaining them cheap, because security patches and major bugs are not common. We fork off a new tail release train from the tip every once in a while (currently, every 2 years).

Some tail users may want to benefit from performance improvements and are willing to take the stability risk involved in having them backported, but they can obviously live without them, because they have so far. If their absence were painful enough to justify increasing their resources, they could invest once in migrating to a newer tail. Nevertheless, we do offer a "tail with performance enhancements" release train in special circumstances (if there's sufficient demand) -- for pay.

The challenge is getting people to understand this. Many want a particular enhancement they personally need backported, because they think that a "patch" with a significant enhancement is safer than a new feature release. They've yet to internalise that what matters isn't what a version is called (we don't use semantic versioning because we think it is unhelpful and necessarily misleading), but that there's an inherent tension between enhancements and stability. You can get more of one or the other, but not both.
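
A minimal sketch of the policy in Python (the train names and change categories are my own illustration, not actual JDK tooling):

    # Sketch of the "tip & tail" backport policy described above.
    # Train names and change categories are illustrative only.
    TIP = "tip"                      # the current feature release train
    TAILS = ["21-tail", "17-tail"]   # long-lived stability trains (example names)

    def trains_for(change_kind: str) -> list[str]:
        """Return the release trains a given kind of change lands in."""
        if change_kind in ("feature", "performance"):
            return [TIP]              # new work goes to the tip only
        if change_kind in ("security", "severe-bug"):
            return [TIP] + TAILS      # backported to every maintained tail
        return [TIP]                  # routine fixes stay on the tip

    print(trains_for("security"))     # ['tip', '21-tail', '17-tail']
    print(trains_for("feature"))      # ['tip']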

By @palata - 4 months
What the article points to is that most updates are bad updates. We teach people that they should accept all updates for security reasons, but really they should only accept security updates.

But they can't, because this is not a possibility that is given to them. All updates are put together, and we as an industry suck at even knowing if our change is backward compatible or not (which is actually some kind of incompetence).
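
For illustration, here's what that could look like if update metadata carried a machine-readable type (a hypothetical feed sketched in Python; apt's separate security pocket is about the closest real-world equivalent):

    # Hypothetical update feed where every entry is tagged with its type.
    # The complaint above is that real-world updates rarely arrive like this.
    updates = [
        {"name": "libfoo", "version": "1.2.4", "type": "security"},
        {"name": "barapp", "version": "3.0.0", "type": "feature"},
        {"name": "bazlib", "version": "2.1.1", "type": "bugfix"},
    ]

    # Accept only the security updates, as users arguably should.
    for u in (u for u in updates if u["type"] == "security"):
        print(f"would install {u['name']} {u['version']}")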

And of course it's hard, because users are not competent enough to distinguish good software from bad software, so they follow what the marketing tells them. Meaning that even if you made good software with fewer shiny features but actual stability, users would go for the worse software of the competitor, because it has the latest damn AI buzzword.

Sometimes I feel like software is pretty much doomed: it won't get any better. But one thing I try to teach people is this: do not add software to things that work, EVER. You don't want a connected fridge, a connected light bulb or a connected vacuum-cleaner-camera-robot. You don't need it; it's completely superfluous.

Also, for things that actually matter, much of the time you don't want them either. Electronic voting is an example I have in mind: it's much easier to hack a computer from anywhere in the world than to hack millions of pieces of paper.

By @hitpointdrew - 4 months
"Never Update Anything"

Author proceeds to add two updates to the article. Epic troll.

By @UniverseHacker - 4 months
I pretty much agree: most systems don't need updating. I've seen and set up OpenBSD servers that ran for a decade without issues, never getting updates. I currently run some production web services on Debian where I do updates every 3 years or so, and no issues.

Leaving something that works well alone is a good strategy. Most of the cars on the road are controlled by ECUs that have never had, and never will have, any type of updates, and that is a good thing. Vehicles that can get remote updates, like Teslas, are going to be much less reliable than ones not connected to anything, with a single extensively tested final version.

An OS that is fundamentally secure by design, and then locked down to do nothing non-essential, doesn't really need updates unless, e.g., it is a public-facing web server and the open public-facing service/port has a known remote vulnerability, which is pretty rare.

By @schiffern - 4 months
By @mikewarot - 4 months
When I was the sole IT guy for a small consulting company, once I got everything working, I never updated it.

We used Microsoft Office 2000 for 12 years. Never had to retrain people, deal with the weird ribbon toolbar, etc.

It's only the deranged use of OSs with ambient authority that gums up what would otherwise be stable systems.

By @aspyct - 4 months
There's an alternative solution: update everything, but limit your dependencies.

Example: for my (personal) projects, I only use whatever is available in the Debian repositories. If it's not in there, it's not on my dependency list.

Then enable unattended upgrades, and forget about all that mess.
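
One way to enforce that rule mechanically, as a Python sketch (assumes a Debian/Ubuntu host; the package names are just examples):

    # Verify that every dependency is packaged in the Debian repositories,
    # per the rule "if it's not in there, it's not on my dependency list".
    import subprocess

    deps = ["python3-flask", "python3-requests", "nginx"]  # example list

    for pkg in deps:
        # `apt-cache policy` prints a "Candidate:" line for known packages.
        out = subprocess.run(["apt-cache", "policy", pkg],
                             capture_output=True, text=True).stdout
        print(f"{pkg}: {'OK' if 'Candidate:' in out else 'NOT PACKAGED'}")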

By @Groxx - 4 months
To jump on a related article since it's linked and comments are now closed: https://blog.kronis.dev/articles/stable-software-release-sys...

The 2021/2022/2023/2024 version-numbering schemes are for applications, not libraries, because applications are essentially not ever semver-stable.

That's perfectly reasonable for them. They don't need semver. People don't build against jetbrains-2024.1; they just update their stuff when JetBrains breaks something they use (which can happen at literally any time, just ask plugin devs)... because these are fundamentally unstable products that don't care about actual stability. They just do an okay job, call it Done™, and developers on their APIs are forced to deal with it. Users don't care 99%+ of the time because the UI doesn't change, and that is honestly good enough in nearly all cases.

That isn't following semver, which is why they don't follow semver. Which is fine (because they control their ecosystem with an iron fist). It's a completely different relationship with people looking at that number.

For applications, I totally agree. Year-number your releases, it's much more useful for your customers (end-users) who care about if their habits are going to be interrupted and possibly how old it is. But don't do it with libraries, it has next to nothing to do with library customers (developers) who are looking for mechanical stability.
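
To make the distinction concrete, this is the mechanical check that semver promises library consumers and that year-numbering can't (a minimal Python sketch, ignoring pre-release tags):

    # Under semver, an upgrade within the same major version is presumed safe;
    # a major bump is a signal to stop and review. A year-based version like
    # "2024.1" carries no such signal.
    def compatible(installed: str, candidate: str) -> bool:
        inst = [int(p) for p in installed.split(".")]
        cand = [int(p) for p in candidate.split(".")]
        return cand[0] == inst[0] and cand >= inst  # same major, not older

    print(compatible("1.4.2", "1.9.0"))   # True  - minor bump, take it
    print(compatible("1.4.2", "2.0.0"))   # False - major bump, review first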

By @Plasmoid - 4 months
I was working at a place that delivered on-prem software. One customer asked us: "We like the features of version N but we're running N-1. Can you backport them so we don't have to upgrade?" I replied that we'd already done that; it was called version N.

By @knallfrosch - 4 months
We're being paid to migrate our hardware boxes programmatically to Windows 10 IoT LTSC so that new boxes ship with 10+ years of security. We're still supporting some XP devices (not connected to the internet). So to anyone depending on us: You're welcome.

But let me tell you something: Long-Term Support software mostly doesn't pay well, and it's not fun either. Meanwhile some Google clown is being paid 200k to fuck up Fitbit or rewrite Wallet for the 5th time in the newest language.

So yeah. I'd love to have stable, reliable dependencies while I'm mucking around with the newest language du jour. But you see how that doesn't work, right?

By @eXpl0it3r - 4 months
Even as a developer not focused on web dev, this sounds pretty bad: it only works if everyone in your dependency tree (from OS to language to libraries) decides to make the same switch, and even then you'll be stuck with outdated ways of doing things.

Who wants to continue maintaining C++03 code bases without all the C++11/14/17/20 features? Who wants to continue using .NET Framework, when all the advances are made in .NET? Who wants to be stuck with libraries full of vulnerabilities and who accepts the risk?

Not really addressed is the issue of developers switching jobs/projects every few years. Nobody is sticking around long enough to amass the knowledge needed to ensure maintenance of any larger code base.

Which is either caused by, or the cause of, companies also not committing themselves for any longer period of time. If the company expects people to leave within two years and doesn't put in the monetary and non-monetary effort to retain people, why should devs consider anything beyond the current sprint?

By @walterbell - 4 months
> And lastly, choose boring technology. Only use software that you're sure will be supported in 10 years. Only use software, which has very few "unknown unknowns". Only use software, where the development pace has slowed down to a reasonable degree.

Perl has been stable for a couple of decades.

By @cupantae - 4 months
I’ve supported enterprise software for various big companies and I can tell you that most decision makers for DCs agree with this sentiment.

EMC had a system called Target Code, which was typically the last patch in the second-to-last family, but only after it had been in use for some months and/or by a percentage of the customer install base. It was common sense and customers loved it. You don't want your storage to go down because of unexpected changes.

Dell tried to change that to “latest is target” and customers weren't convinced. Account managers sheepishly carried on an imitation of the old, better system. Somehow, from a PR point of view, it's easier to cause new problems than to let the known ones occur.

By @palata - 4 months
> My experience shows that oftentimes these new and supposedly backwards compatible features still break old functionality.

Well, that's the first issue: downright malpractice. Developers should learn how to know (and test) whether a change is a major one or not.

The current situation is that developers mostly go "YOLO" with semantic versioning and then complain that it doesn't work. Of course it doesn't work if we do it wrong.

By @cogman10 - 4 months
I find it pretty funny that immediately on the first click of this article I was greeted with an internal server error.

By @1vuio0pswjnm7 - 4 months
Avoid software that will need constant updates. Because that is a signal it is defective to begin with, or expected to be broken soon.

For example, I avoid graphical commercial OSes and large graphical web browsers. Especially mobile OSes and "apps".

Avoidance does not have to be 100% to be useful. If it defeats reliance on such software then it pays for itself, so to speak. IMHO.

The notion of allowing RCE/OTA for "updates" is allegedly motivated by the best of intentions.

But these companies are not known for their honesty. Nor for benevolence.

And let's be honest: allowing remote access to some company will not be utilised 100% for the computer owner's benefit. For the companies remotely installing and automatically running code on other people's computers, surveillance has commercial value. Allowing remote access makes surveillance easier. A cakewalk.

By @AaronFriel - 4 months
A feature I've wanted for ages in every OS package manager (Windows, apt, yum, apk, etc.), every language package manager (npm, PyPI, etc.), and so on: update, but filter out anything less than one day, one week, or one month old. And it applies here, too.

Now, some software effectively does this risk mitigation for you. Windows, macOS, and browsers all do this very effectively. Maybe only the most cautious enterprises delay these updates by a day.

But even billion-dollar corporations don't do a great job of rolling out updates incrementally. This especially applies now that tools exist to automatically scan for dependency updates (the list of these is too long to name): don't tell me about an update that's only a day old; that's too risky for my taste.

So for the OS and libraries for my production software? I'm OK sitting a week or a month behind; let the hobbyists and the rest of the world test that for me. Just give me that option, please.
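
For one ecosystem this is easy to prototype. A Python sketch against PyPI's public JSON API (it crudely picks the most recently uploaded release older than the cutoff, rather than doing proper version ordering):

    # Pick the newest release of a package that is at least MIN_AGE_DAYS old.
    import json, urllib.request
    from datetime import datetime, timedelta, timezone

    MIN_AGE_DAYS = 7
    pkg = "requests"  # example package

    with urllib.request.urlopen(f"https://pypi.org/pypi/{pkg}/json") as resp:
        releases = json.load(resp)["releases"]

    cutoff = datetime.now(timezone.utc) - timedelta(days=MIN_AGE_DAYS)
    old_enough = []
    for version, files in releases.items():
        if not files:
            continue  # some versions have no uploaded files
        uploaded = datetime.fromisoformat(
            files[0]["upload_time_iso_8601"].replace("Z", "+00:00"))
        if uploaded <= cutoff:
            old_enough.append((uploaded, version))

    print(max(old_enough)[1])  # newest qualifying release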

By @userbinator - 4 months
There is also another type of update: security updates that don't actually matter in the environment that the software is used in. The question of whether the "new features" are for or against the user is another point to ponder.

By @hcarvalhoalves - 4 months
Extreme viewpoint, but agree strongly. Big reason why working in Common Lisp brings a smile to my face - it’s a standard, quicklisp works, ffi works, etc. I can run code and follow instructions written DECADES ago, it just damn works.

By @tonymet - 4 months
Our industry could use a risk-assessment scanner for updates, similar to "npm audit", that measures the delta between versions and gives a risk indicator based on a number of parameters.

The issue with changelogs is that they are an honor system, and they don't objectively assess the risk of the update.

Comparing changes in the symbol table and binary size could give a reasonable red/yellow/green indicator of the risk of an update. Over time, you could train a classifier to give even more granularity and confidence.
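
A crude version of that indicator for a shared library, sketched in Python (the thresholds are arbitrary illustrations, not a trained classifier):

    # Diff the dynamic symbol tables (via `nm -D`) and compare file sizes.
    import os, subprocess

    def symbols(path: str) -> set[str]:
        out = subprocess.run(["nm", "-D", "--defined-only", path],
                             capture_output=True, text=True).stdout
        return {line.split()[-1] for line in out.splitlines() if line.strip()}

    def risk(old: str, new: str) -> str:
        removed = symbols(old) - symbols(new)  # removed symbols = likely breaking
        growth = abs(os.path.getsize(new) - os.path.getsize(old)) / os.path.getsize(old)
        if removed:
            return "red"      # public API disappeared
        if growth > 0.10:
            return "yellow"   # >10% size change suggests substantial new code
        return "green"

    print(risk("libfoo.so.1.2.0", "libfoo.so.1.3.0"))  # example paths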

By @nickthegreek - 4 months
Previously (November 4, 2021 — 319 points, 281 comments): https://news.ycombinator.com/item?id=29106159

By @albertP - 4 months
Heh. I've always run everything one or two versions behind the latest (for my personal laptop, not servers). That means mainly the OS (e.g., macOS), but as long as I can avoid automatic updates, I do so.

I believe the chances of ending up with a bricked laptop because of a bad update are higher than the chances of getting malware from running one or two versions behind the latest one.

By @msoad - 4 months
Kinda ironic that the article itself was updated.

By @exe34 - 4 months
> Not only that, but put anything and everything you will ever need within the standard library or one or two large additional libraries.

You can definitely do that with Python today: assemble a large group of packages that cover a large fraction of what people need to do, and maintain that as the one or two big packages. Nobody's stopping you.

By @CooCooCaCha - 4 months
I disagree, keep things constantly updated (within reason).

Most companies I've worked for have the attitude of the author, they treat updates as an evil that they're forced to do occasionally (for whatever reason) and, as a result, their updates are more painful than they need to be. It's a self-fulfilling prophecy.

By @apantel - 4 months
No no no, it’s “never update anything and don’t expose your machine to the internet”. Winning strategy right there.
By @gtirloni - 4 months
The world is not static, and software these days is very interconnected. Dreams of not updating only work in an unchanging world. Sadly, such a world has yet to be found.
By @binary132 - 4 months
I had started to think I was the only one saying this.

By @mike741 - 4 months
Urgent updates can be necessary every once in a while, but they should be recognized as technical failures on the part of the developers. Failure can be forgiven, but only so many times. The comments saying "what about X update that had this feature I need?" are missing the point entirely. Instead, ask yourself about all of the updates you've applied without even looking at the patch notes, because there are just too many updates and not enough time. Instead of blaming the producers for creating a black-box relationship with the consumers, we blame the consumer and blindly tell them to "just update." That's what needs to change. It's a bit similar to opaque ToS issues.

By @cpncrunch - 4 months
A reasonable strategy is to wait a week after release before applying an update, unless it's a zero day fix.
By @neontomo - 4 months
The React module-bloat example is not a fair one; the recommended way to start a React project isn't create-react-app, and other methods are more streamlined. But then again, the deprecation of create-react-app perhaps proves the point that updates create problems.

By @dzonga - 4 months
Java over Golang? lol. Golang has literally been version-stable for over a decade now.

By @kkfx - 4 months
Or use NixOS/Guix System instead of living in the stone age of containers...

By @k_roy - 4 months
> In my eyes it could be pretty nice to have a framework version that's supported for 10-20 years and is so stable that it can be used with little to no changes for the entire expected lifetime of a system.

Yeah, me too. I also would like a few million bucks in the bank.

It's not that projects wouldn't want to set this goal; it's that the goal is so unrealistic.

By @ricksunny - 4 months
The 'Skip this Update [pro]' button example (Docker Desktop) just made me facepalm, and it helped me internalize that I'm not a luddite about technology; I'm a luddite about the collectives of people (not the individual people...(!)) who feel compelled to craft these dark business patterns.
By @ghawk1ns - 4 months
There is humor in the fact that this blog post itself has 2 updates.

By @spyspy - 4 months
Kinda weird to see Java over Go, when the former is basically an entirely new language from what it was 10 years ago and the latter has made it an explicit goal to never break older versions and (almost) never change the core language.

By @msiemens - 4 months
I know this is a rather long tangent and not the main point of the article, but regarding "Docker Swarm over Kubernetes": I've had a ton of bad experiences at my employer running a production Swarm cluster. Among them:

- Docker Swarm and Docker Compose use different parsers for `docker-compose.yaml` files, which may lead to the same file working with Compose but not with Swarm ([1]).

- A Docker network only supports up to 128 joined containers (at least when using Swarm). This is due to the default address space for a Docker network being a /24 network (which the documentation only mentions in passing). But Docker Swarm may not always show an error message indicating that it's a network problem; sometimes services would just stay in the "New" state forever without any indication of what's wrong (see e.g. [2]). (A workaround sketch follows this list.)

- When looking up a service name, Docker Swarm will use the IP from the first network (sorted lexically) where the service name exists. In a multi-tenant setup, where a lot of services are connected to an ingress network (e.g. Traefik), this may lead to a service connecting to a container from a different network than expected. The only solution is to always append the network name to the service name (e.g. service.customer-network; see [3]).

- For some reason I still haven't been able to figure out, the cluster will sometimes just break. The leader loses its connection to the other manager nodes, which in turn do NOT elect a new leader. The only solution is to force-recreate the whole cluster and then redeploy all workloads (see [4]).
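
The subnet limit in the second item at least has a straightforward workaround: declare the overlay network with a larger address space up front. A Python sketch shelling out to the docker CLI (the network name and subnet are examples; run on a Swarm manager):

    # Create the overlay network with an explicit subnet larger than the
    # default /24, so more containers can join it.
    import subprocess

    subprocess.run([
        "docker", "network", "create",
        "--driver", "overlay",
        "--subnet", "10.20.0.0/22",   # ~1022 usable addresses instead of 254
        "tenant-net",
    ], check=True)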

Sure, our use case is somewhat special (running a cluster used by a lot of tenants), and we were able to find workarounds (some dirtier than others) for most of our issues with Docker Swarm. But what annoys me is that for almost all of the issues we had, there was a GitHub ticket that hadn't gotten any official response for years. And in many cases, the reporters just give up waiting and migrate to K8s out of despair or frustration. Just a few quotes from the linked issues:

> We, too, started out with Docker Swarm and quickly saw all our production clusters crashing every few days because of this bug. […] This was well over two years (!) ago. This was when I made the hard decision to migrate to K3s. We never looked back.

> We recently entirely gave up on Docker Swarm. Our new cluster runs on Kubernetes, and we've written scripts and templates for ourselves to reduce the network-stack management complexities to a manageable level for us. […] In our opinion, Docker Swarm is not a production-ready containerization environment and never will be. […] Years of waiting and hoping have proved fruitless, and we finally had to go to something reliable (albeit harder to deal with).

> IMO, Docker Swarm is just not ready for prime-time as an enterprise-grade cluster/container approach. The fact that it is possible to trivially (through no apparent fault of your own) have your management cluster suddenly go brainless is an outrage. And "fixing" the problem by recreating your management cluster is NOT a FIX! It's a forced recreation of your entire enterprise almost from scratch. This should never need to happen. But if you run Docker Swarm long enough, it WILL happen to you. And you WILL plunge into a Hell the scope of which is precisely defined by the size and scope of your containerization empire. In our case, this was half a night in Hell. […] This event was the last straw for us. Moving to Kubernetes. Good luck to you hardy souls staying on Docker Swarm!

Sorry if this seems like Docker Swarm bashing. K8s has its own issues, for sure! But at least there is a big community to turn to for help if things go sideways.

[1]: https://github.com/docker/cli/issues/2527
[2]: https://github.com/moby/moby/issues/37338
[3]: https://github.com/docker/compose/issues/8561#issuecomment-1...
[4]: https://github.com/moby/moby/issues/34384