Never Update Anything
The article critiques frequent software updates, citing time constraints, confusion over update types, and companies pushing paid upgrades. It highlights challenges and issues caused by updates, questioning their necessity.
The article argues against the common practice of frequent software updates, highlighting the time-consuming nature of keeping up with updates and the potential for breaking changes. It discusses the challenges of differentiating between types of updates, such as major releases, security fixes, bug patches, and feature updates. The author suggests that the industry's approach to updates can be problematic, with examples like software companies discontinuing support for older versions to push users towards paid upgrades. The article emphasizes the burden of managing updates for individuals and organizations, pointing out instances where updates have caused issues or forced unwanted changes. It concludes by underscoring the time and effort required to stay updated while balancing other responsibilities, questioning the necessity and impact of constant software updates in a busy and demanding world.
Related
Is 2024 the year of Windows on the Desktop?
In 2024, the author reviews Windows 11, highlighting challenges like limited hardware support, lack of installation control, manual driver searches, slow updates, and UI lag, and compares it unfavorably with Linux distributions.
The software world is destroying itself (2018)
The software development industry faces sustainability challenges like application size growth and performance issues. Emphasizing efficient coding, it urges reevaluation of practices for quality improvement and environmental impact reduction.
Your Company's Problem is Hiding in Plain Sight - High Work-In-Progress (WIP)
The article explores the negative effects of high Work-In-Progress (WIP) in software development, drawing parallels to a WWII story. It highlights signs of high WIP and advocates for a balance between productivity and rest.
The IT Industry is a disaster (2018)
The IT industry faces challenges in IoT and software reliability. Concerns include device trustworthiness, complex systems, and security flaws. Criticisms target coding practices and standards organizations, and the article proposes accountability and recognition of skill.
Ubuntu Security Updates Are a Confusing Mess
The article delves into Ubuntu security update complexities, emphasizing Tomcat vulnerability issues. It discusses patch availability discrepancies across LTS versions, Canonical's support limitations, and considerations of switching to Debian for more consistent security fixes.
This is what applications used to be like, before the web and internet hit and regular or even push updating became easy.
It was simply so difficult and expensive to provide updates once the software was in the customer's hands that it was done intentionally and infrequently. For the most part, you bought software, installed it, and used it. That was it. It never changed, unless you bought it again in a newer version.
I don't think the article itself holds up that well; it's just that updates are often a massive pain, one that you have to deal with somehow regardless. Realistically, LTS versions of OS distros and technologies that don't change often will lessen the pain, but not eliminate it entirely.
And even then, you'll still need to deal with breaking changes when you're forced to upgrade across major releases (e.g. JDK 8 to something newer after EOL) or to migrate once a technology dies altogether (e.g. AngularJS).
It's not like people will backport fixes for anything indefinitely either.
The idea is to realise that there are two different classes of consumers who want different things, and rather than try to find a compromise that would not fully satisfy either group (and turns out to be more expensive to boot), we offer multiple release trains for different people.
One release train, called the tip, contains new features and performance enhancements in addition to bug fixes and security patches. Applications that are still evolving can benefit from new features and enhancements and have the resources to adopt them (by definition, or else they wouldn't be able to use the new features).
Then there are multiple "tail" release trains aimed at applications that are not interested in new features because they don't evolve much anymore (they're "legacy"). These applications value stability over everything else, which is why only security patches and fixes to the most severe bugs are backported to them. This also makes maintaining them cheap, because security patches and major bugs are not common. We fork off a new tail release train from the tip every once in a while (currently, every 2 years).
Some tail users may want to benefit from performance improvements and are willing to take the stability risk involved in having them backported, but they can obviously live without them because they have so far. If their absence were painful enough to justify increasing their resources, they could invest in a one-time migration to a newer tail. Nevertheless, we do offer a "tail with performance enhancements" release train in special circumstances (if there's sufficient demand) -- for pay.
The challenge is getting people to understand this. Many want a particular enhancement they personally need backported, because they think that a "patch" containing a significant enhancement is safer than a new feature release. They've yet to internalise that what matters isn't what a version is called (we don't use semantic versioning because we think it is unhelpful and necessarily misleading), but that there's an inherent tension between enhancements and stability. You can get more of one or the other, but not both.
But they can't, because that option isn't given to them. All updates are bundled together, and we as an industry suck at even knowing whether our change is backward compatible or not (which is actually some kind of incompetence).
And of course it's hard, because users are not competent enough to distinguish good software from bad software; they follow what the marketing tells them. Meaning that even if you made good software with fewer shiny features but actual stability, users would go for the competitor's worse software, because it has the latest damn AI buzzword.
Sometimes I feel like software is pretty much doomed: it won't get any better. But one thing I try to teach people is this: do not add software to things that work, EVER. You don't want a connected fridge, a connected light bulb or a connected vacuum-cleaner-camera-robot. You don't need it; it's completely superfluous.
And even for things that actually matter, you often don't want it either. Electronic voting is an example I have in mind: it's much easier to hack a computer from anywhere in the world than to hack millions of pieces of paper.
The author proceeds to add two updates to the article. Epic troll.
Leaving alone something that works well is a good strategy. Most of the cars on the road are controlled by ECUs that have never had, and never will have, any type of update, and that is a good thing. Vehicles that can get remote updates, like Teslas, are going to be much less reliable than ones that aren't connected to anything and run a single, extensively tested final version.
An OS that is fundamentally secure by design, and then locked down to not do anything non-essential, doesn't really need updates unless, e.g. it is a public facing web server, and the open public facing service/port has a known remote vulnerability, which is pretty rare.
We used Microsoft Office 2000 for 12 years. Never had to retrain people, deal with the weird ribbon toolbar, etc.
It's only the deranged use of OSs with ambient authority that gums up what would otherwise be stable systems.
Example: for my (personal) projects, I only use whatever is available in the Debian repositories. If it's not in there, it's not on my dependency list.
Then enable unattended upgrades, and forget about all that mess.
The 2021/2022/2023/2024 version-numbering schemes are for applications, not libraries, because applications are essentially never semver-stable.
That's perfectly reasonable for them. They don't need semver. People don't build against jetbrains-2024.1, they just update their stuff when JetBrains breaks something they use (which can happen at literally any time, just ask plugin devs)... because they're fundamentally unstable products and they don't care about actual stability, they just do an okay job and call it Done™ and developers on their APIs are forced to deal with it. Users don't care 99%+ of the time because the UI doesn't change and that is honestly good enough in nearly all cases.
That isn't following semver, which is why they don't follow semver. Which is fine (because they control their ecosystem with an iron fist). It's a completely different relationship with people looking at that number.
For applications, I totally agree. Year-number your releases; it's much more useful for your customers (end users), who care about whether their habits are going to be interrupted and possibly how old the release is. But don't do it with libraries: it has next to nothing to do with library customers (developers), who are looking for mechanical stability.
But let me tell you something: Long-Term Support software mostly doesn't pay well, and it's not fun either. Meanwhile some Google clown is being paid 200k to fuck up Fitbit or rewrite Wallet for the 5th time in the newest language.
So yeah. I'd love to have stable, reliable dependencies while I'm mucking around with the newest language du jour. But you see how that doesn't work, right?
Who wants to continue maintaining C++03 code bases without all the C++11/14/17/20 features? Who wants to continue using .NET Framework, when all the advances are made in .NET? Who wants to be stuck with libraries full of vulnerabilities and who accepts the risk?
Not really addressed is the issue of developers switching jobs/projects every few years. Nobody is sticking around long enough to amass the knowledge needed to ensure maintenance of any larger code base.
Which is either caused by, or the cause of, companies also not committing themselves for any longer period of time. If the company expects people to leave within two years and doesn't put in the monetary and non-monetary effort to retain people, why should devs consider anything longer than the current sprint?
Perl has been stable for a couple of decades.
EMC had a system called Target Code, which was typically the last patch in the second-last family, but only after it had been in use for some months and/or by a certain percentage of the customer install base. It was common sense and customers loved it. You don’t want your storage to go down for unexpected changes.
Dell tried to change that to “latest is target” and customers weren’t convinced. Account managers sheepishly carried on an imitation of the old, better system. Somehow, from a PR point of view, it’s easier to cause new problems than to let the known ones occur.
Well, that's the first issue: downright malpractice. Developers should learn how to know (and test) whether a change is major or not.
The current situation is that developers mostly go "YOLO" with semantic versioning and then complain that it doesn't work. Of course it doesn't work if we do it wrong.
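As a rough illustration of what "knowing (and testing) whether it is a major change" could look like, here is a minimal Python sketch that snapshots a library's public API surface and diffs it against a committed baseline; the baseline workflow and the definition of "breaking" are simplifying assumptions, not an established tool.

```python
# Sketch: record a module's public callables and their signatures, then
# treat any removed name or changed signature as a breaking (major) change.
import inspect

def api_surface(module) -> dict[str, str]:
    """Map every public callable's name to its signature string."""
    surface = {}
    for name, obj in vars(module).items():
        if name.startswith("_") or not callable(obj):
            continue
        try:
            surface[name] = str(inspect.signature(obj))
        except (TypeError, ValueError):  # extension objects without signatures
            surface[name] = "<unknown>"
    return surface

def breaking_changes(old: dict[str, str], new: dict[str, str]) -> list[str]:
    """Names that disappeared or whose signatures changed."""
    return [name for name in old if name not in new or old[name] != new[name]]
```

Typical use would be to dump `api_surface(mypackage)` (where `mypackage` is your own library) to a JSON file at release time and fail CI whenever `breaking_changes(baseline, current)` is non-empty, forcing a conscious decision about the version bump.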
For example, I avoid graphical commercial OSes and large graphical web browsers. Especially mobile OSes and "apps".
Avoidance does not have to be 100% to be useful. If it defeats reliance on such software then it pays for itself, so to speak. IMHO.
The notion of allowing RCE/OTA for "updates" might allegedly be motivated by the best of intentions.
But these companies are not known for their honesty. Nor for benevolence.
And let's be honest, allowing remote access to some company will not be utilised 100% for the computer owner's benefit. For the companies remotely installing and automatically running code on other people's computers, surveillance has commercial value. Allowing remote access makes surveillance easier. A cakewalk.
Now, for some software, this risk mitigation is effectively done for you. Windows, macOS, and browsers all do this very effectively. Maybe only the most cautious enterprises delay these updates by a day.
But even billion-dollar corporations don't do a great job of rolling out updates incrementally. This especially applies now that tools exist to automatically scan for dependency updates (the list of these is too long to name). Don't tell me about an update that's only a day old; that's too risky for my taste.
So for OS and libraries for my production software? I'm OK sitting a week or a month behind, let the hobbyists and the rest of the world test that for me. Just give me that option, please.
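To make the "sit a week or a month behind" idea concrete, here is a small Python sketch that picks the newest PyPI release of a package that has been public for at least a given number of days; the cooling-off window and the use of PyPI's JSON API are just one possible way to implement this, not what the commenter actually uses.

```python
# Sketch: resolve a dependency to the newest release that is at least
# `min_age_days` old, letting "the rest of the world" test fresher ones.
import json
import urllib.request
from datetime import datetime, timedelta, timezone

def newest_settled_release(package: str, min_age_days: int = 7) -> str | None:
    with urllib.request.urlopen(f"https://pypi.org/pypi/{package}/json") as resp:
        releases = json.load(resp)["releases"]
    cutoff = datetime.now(timezone.utc) - timedelta(days=min_age_days)
    settled = {}
    for version, files in releases.items():
        if not files:
            continue  # yanked or empty releases have no files
        uploaded = max(
            datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
            for f in files
        )
        if uploaded <= cutoff:
            settled[version] = uploaded
    # Newest (by upload time) among the releases old enough to be "settled"
    return max(settled, key=settled.get) if settled else None

# Example: print(newest_settled_release("requests", min_age_days=30))
```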
The issue with changelogs is that they are an honor system, and they don't objectively assess the risk of the update.
Comparing changes in the symbol table and binary size could give a reasonable red/yellow/green indicator of the risk of an update. Over time, you could train a classifier to give even more granularity and confidence.
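A minimal sketch of that idea for a shared library, assuming binutils' `nm` is on the PATH; the 10% size threshold and the red/yellow/green mapping are invented for illustration, and a real classifier would need far more signal.

```python
# Sketch: diff exported symbols and file size between two builds of a shared
# library and bucket the update into a coarse risk level.
import os
import subprocess

def exported_symbols(path: str) -> set[str]:
    """Dynamic symbols defined by a shared object, via `nm -D --defined-only`."""
    out = subprocess.run(["nm", "-D", "--defined-only", path],
                         capture_output=True, text=True, check=True).stdout
    return {line.split()[-1] for line in out.splitlines() if line.strip()}

def update_risk(old_lib: str, new_lib: str) -> str:
    old_syms, new_syms = exported_symbols(old_lib), exported_symbols(new_lib)
    removed = old_syms - new_syms                      # callers may break
    added = new_syms - old_syms                        # new surface, usually benign
    size_delta = abs(os.path.getsize(new_lib) - os.path.getsize(old_lib))
    grew_a_lot = size_delta > 0.10 * os.path.getsize(old_lib)
    if removed:
        return "red"      # something disappeared: treat as breaking
    if added or grew_a_lot:
        return "yellow"   # new or noticeably bigger: probably new behaviour
    return "green"        # same surface, similar size: likely a pure patch

# Example: print(update_risk("libfoo.so.1.2.0", "libfoo.so.1.2.1"))
```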
I believe the chances of having a bricked laptop because of a bad update are higher than the chances of getting malware from running one or two versions behind the latest one.
You can definitely do that with Python today: assemble a large group of packages that cover a large fraction of what people need to do, and maintain that as the one or two big packages. Nobody's stopping you.
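For what it's worth, such a curated bundle can be as simple as a meta-package that ships no code of its own and only pins a vetted, mutually compatible dependency set. A sketch with setuptools; the package name and the pinned versions are purely illustrative.

```python
# setup.py for a hypothetical meta-package: its only job is to pin a
# curated set of dependencies that are known to work together.
from setuptools import setup

setup(
    name="curated-stack",            # illustrative name
    version="2024.1",                # year-based, per the discussion above
    description="Curated, mutually compatible dependency set",
    install_requires=[
        "requests==2.32.3",          # example pins, not recommendations
        "numpy==1.26.4",
        "sqlalchemy==2.0.30",
    ],
    packages=[],                     # ships no code of its own
)
```

Projects then depend on the meta-package alone and inherit one consistent set of versions.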
Most companies I've worked for have the attitude of the author, they treat updates as an evil that they're forced to do occasionally (for whatever reason) and, as a result, their updates are more painful than they need to be. It's a self-fulfilling prophecy.
Yeah, me too. I also would like a few million bucks in the bank.
It's naive to think that every project wouldn't want to set this goal, simply because it's so unrealistic.
- Docker Swarm and Docker Compose use different parsers for `docker-compose.yaml` files, which may lead to the same file working with Compose but not with Swarm ([1]).
- A Docker network only supports up to 128 joined containers (at least when using Swarm). This is due to the default address space for a Docker network being a /24 (which the documentation only mentions in passing). But Docker Swarm may not always show an error message indicating that it's a network problem; sometimes services just stay in the "New" state forever without any indication of what's wrong (see e.g. [2]; see the sketch after this list for one workaround).
- When looking up a service name, Docker Swarm will use the IP from the first network (sorted lexically) where the service name exists. In a multi-tenant setup, where a lot of services are connected to an ingress network (i.e. Traefik), this may lead to a service connecting to a container from a different network than expected. The only solution is to always append the network name to the service name (e.g. service.customer-network; see [3]).
- For some reason I still haven't been able to figure out, the cluster will sometimes just break. The leader loses its connection to the other manager nodes, which in turn do NOT elect a new leader. The only solution is to force-recreate the whole cluster and then redeploy all workloads (see [4]).
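Regarding the /24 address-pool limit above: one common way to sidestep it is to pre-create the overlay network yourself with a larger subnet instead of letting Swarm allocate one from the default pool. A sketch using the Docker SDK for Python, run against a manager node; the network name and subnet are made up for the example, not what this cluster actually uses.

```python
# Sketch: create a Swarm-scoped overlay network with a /16 so the default
# /24 address pool is no longer the limiting factor.
import docker

client = docker.from_env()

ipam = docker.types.IPAMConfig(
    pool_configs=[docker.types.IPAMPool(subnet="10.42.0.0/16")]  # ~65k addresses
)

client.networks.create(
    "tenant-ingress",     # hypothetical network name
    driver="overlay",
    scope="swarm",
    attachable=True,
    ipam=ipam,
)
```

Stacks can then reference it as an external network in their compose files.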
Sure, our use case is somewhat special (running a cluster used by a lot of tenants), and we were able to find workarounds (some dirtier than others) for most of our issues with Docker Swarm. But what annoys me is that for almost all of the issues we had, there was a GitHub ticket that didn't get any official response for years. And in many cases, the reporters just give up waiting and migrate to K8s out of despair or frustration. Just a few quotes from the linked issues:
> We, too, started out with Docker Swarm and quickly saw all our production clusters crashing every few days because of this bug. […] This was well over two years (!) ago. This was when I made the hard decision to migrate to K3s. We never looked back.
> We recently entirely gave up on Docker Swarm. Our new cluster runs on Kubernetes, and we've written scripts and templates for ourselves to reduce the network-stack management complexities to a manageable level for us. […] In our opinion, Docker Swarm is not a production-ready containerization environment and never will be. […] Years of waiting and hoping have proved fruitless, and we finally had to go to something reliable (albeit harder to deal with).
> IMO, Docker Swarm is just not ready for prime-time as an enterprise-grade cluster/container approach. The fact that it is possible to trivially (through no apparent fault of your own) have your management cluster suddenly go brainless is an outrage. And "fixing" the problem by recreating your management cluster is NOT a FIX! It's a forced recreation of your entire enterprise almost from scratch. This should never need to happen. But if you run Docker Swarm long enough, it WILL happen to you. And you WILL plunge into a Hell the scope of which is precisely defined by the size and scope of your containerization empire. In our case, this was half a night in Hell. […] This event was the last straw for us. Moving to Kubernetes. Good luck to you hardy souls staying on Docker Swarm!
Sorry if this seems like Docker Swarm bashing. K8s has its own issues, for sure! But at least there is a big community to turn to for help if things go sideways.
[1]: https://github.com/docker/cli/issues/2527
[2]: https://github.com/moby/moby/issues/37338
[3]: https://github.com/docker/compose/issues/8561#issuecomment-1...
[4]: https://github.com/moby/moby/issues/34384