The Software Crisis
The software crisis, a term coined in 1968, names the challenge of managing software complexity. Despite decades of advances the problems persist, which the author argues calls for responsible software construction, user agency, and sustainable development practices built on constrained abstractions and user empowerment.
The term "software crisis" was coined in 1968 during the first NATO Software Engineering conference, highlighting the challenges of managing the increasing complexity of software development alongside hardware advancements. Edsger Dijkstra emphasized the struggle caused by powerful machines outpacing our organizational methods. Despite advancements in programming practices, the industry still faces issues of proliferating abstraction layers, accelerated release cycles, and a lack of control for users. The software crisis extends beyond developers to all software users, emphasizing the need for responsible software construction and user agency. Efforts to address the crisis include advocating for shallower, composable programming models and raising awareness through movements like Handmade and Permacomputing. The solution lies in constraining abstraction layers, preserving information flow, and empowering users, signaling a shift towards more sustainable software development practices.
* I do not argue against abstractions, but against the unrestricted application of them.
* I do not advocate for a reversion to more constrained platforms as a solution.
* I do not advocate for users becoming "more technical" in a "suck it up" fashion.
The key to understanding the software crisis is the pair of curves for "mastery of a platform" and "growth/release cycles". Over the past 40+ years we have seen these curves diverge in all but a few sectors. We did not address the crisis when the curves were still close together, but the second-best time is now.
As for folks calling this clickbait, it is the first in my log, and reflects my thoughts on the situation we find ourselves in as developers. The sentiments are mirrored, in various forms, around multiple communities, some of them based in counterculture.
I do want to deliver some part of the solution to these problems, so I do intend on following up on "I'll show you how". I am a single entity, so give me time and grace.
Projects running over-budget
Projects running over-time
Software was very inefficient
Software was of low quality
Software often did not meet requirements
Projects were unmanageable and code difficult to maintain
Software was never delivered
Now take the word "software" out and ask how many human endeavours have one or all of these problems? And then, how much software is actually pretty great? We tend to see only the failures and the flaws; success is just a baseline that we completely ignore even as it continuously improves. When we press the power button on our computer and it gets to a desktop, the machine has already run through hundreds of abstractions. Sitting at that desktop, it is already the most complicated machine we will interact with all day. This happens billions of times a day, all across the world, and mostly flawlessly. And that's just one tiny example.
In agile software development, on the other hand, technical competence usually ends at the lowest tier. A scrum team has folks on it who make software, and that's it. Above them, the many scrum masters and business analysts have probably never coded much; the first actual boss in the hierarchy does mostly secretarial and managerial work and will hardly look at code at all.
Point is, it's not just that software development is done in ticket-sized portions, which does not invite philosophical considerations about the number of abstraction layers one builds and maintains. It's that software developers don't even have a seat at the table(*); they get babysat by scrum masters, make compromises during code review, are discouraged from thinking beyond the ticket, and are then of course not usually promoted into leadership roles to which they would bring their technical competence.
It appears therefore that any movements to bring awareness to the "software crisis" will be relegated to hobbyists, as the article states at the end: to "Handmade, Permacomputing, and various retro-computing circles".
(*) I partly blame Hollywood and their incessant humiliation of software/IT people, while creating endless leading roles for doctors and lawyers, effortlessly weaving their complicated terminologies into fascinating storylines, which is apparently not possible to do for us? Maybe the scriptwriting AIs can come up with something here soon.
Rich Hickey said something along the lines of "A novice juggler may be able to juggle two or three balls, but the best juggler in the world can only juggle maybe nine balls. There isn't an order of magnitude difference in human ability; we hit a ceiling quickly." If we are to surpass those limits, we have no choice but to abstract.
Of course there may be bad abstractions, or too many abstractions in a given case, which is what I think the author is mad at. But that's an important distinction.
This part is also plainly false:
> It is no longer easy to build software, and nothing comes with a manual.
Making software has never been easier or better-documented.
It is very easy if you know the right tools for the right job, but information about these is suppressed, so you never hear about them.
What the vast majority of people think the tech tooling landscape looks like and what it actually looks like are very different. The tools we know about are mostly horrible. They try to be silver bullets but they're really not good for anything... Yet they're the most popular tools. I think it's about capital influence, as hinted by the article.
For example, with the tool I'm using now, I made a video showing how to build a relatively complex marketplace app (5+ pages) in 3 hours, with login, access control, schema validation, and complex filtered views, from scratch, using only a browser and without downloading any software (serverless). The whole app is less than 700 lines of HTML markup and 12 lines of JavaScript code. The video got like 10 views.
GUIs are where this all falls apart as they are literal islands that don’t communicate with each other in a composable manner.
I’ve been experimenting with some GUI-meets-shell-pipeline ideas with a tool I’ve been working on called guish.
https://github.com/williamcotton/guish
I’m curious to know if anyone knows of any similar tools or approaches to composable GUIs!
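For anyone picturing what "composable" means here: shell pipelines compose because every stage speaks the same interface, a stream of lines, which GUIs generally lack. A minimal Python sketch of that interface (illustrative only, not how guish itself is implemented):

```python
# Each stage consumes an iterable of lines and yields transformed lines,
# so stages snap together the way `grep | sort` does in a shell.

def grep(pattern):
    """Return a stage that keeps only lines containing `pattern`."""
    def stage(lines):
        return (line for line in lines if pattern in line)
    return stage

def to_upper(lines):
    """A stage that uppercases every line."""
    return (line.upper() for line in lines)

def pipeline(source, *stages):
    """Thread `source` through each stage in order, like `a | b | c`."""
    for stage in stages:
        source = stage(source)
    return list(source)

lines = ["error: disk full", "ok: saved", "error: timeout"]
result = pipeline(lines, grep("error"), to_upper)
```

Because every stage shares the same input/output shape, any GUI element that produces or consumes a line stream could slot into such a chain, which is the property the comment is after.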
It's just an intro for clickbait.
> It's a nice coincidence when they do.
> It's catastrophic when they don't.
Well, generally, that’s not my experience. Most software out there is not critical. Many a bloated, crappy webapp might end up badly doing what the user expects while sucking an irrelevantly large amount of resources all day, with erratic bugs showing up here and there; yes, all true.
But that is not as critical as the software that handles your pacemaker or your space rocket.
Most software can afford to be crap because most projects serve human whims, for which a lack of quality is at worst a bit of frustration, not devastation or a death sentence. What's more, I would guess that most software developers out there are neither working for Silicon-Valley-like money nor paying their bills with the kind of software project they would love to build out of passion. So most software that hits the market is produced under miserable, exogenous incentives. Who would expect anything other than crap as the output of such a process?
However, we have a project management crisis, which is not only limited to software, where people in charge of planning are distanced from people in charge of the delivery. And we don't seem to be able to bridge the gap. Agile, Scrum, whatever are indicators of this gap where "gurus" take all of us as fools and we are not able to produce anything better ourselves.
Commoditization of software development also contributes to this mess, because people of all skill levels can find a way to partake in the activity, with results of varying success. This is neither good nor bad; it is just the nature of ease of entry into the field. It is not much different from the food business, where we have both Michelin-star restaurants and the McDonald's of the world, both of which have customers. But we don't say we have a restaurant crisis.
https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a...
https://en.wikipedia.org/wiki/Tower_of_Babel
https://en.wikipedia.org/wiki/Hierarchy
https://en.wikipedia.org/wiki/Abstraction
https://en.wikipedia.org/wiki/Abstraction_(computer_science)
(Compare and contrast!)
I have found this not to be the case.
Very often, an inaccurate mental model is the ideal user state, and it's my job to afford and reinforce it.
But that's just me, and my experience. I'm sure there's a ton of folks that have found better ways.
I started my career in the era of 8-bit microcomputers. Yes, it was great to know the entire stack from the processor up through the GUI. But I would never want to go back to those days. Development was too slow and too limited.
We are in a golden era of software development.
A whole lot of, say, Bay Area software developers have genuine comfort in their salaries and compensation.
Defeat and acceptance don't come into this, for most organizations: they face little to no accountability for security problems or other system defects, so... comfort for the developers.
Sure, I learned with DOS and Turbo Pascal and it was wonderful, but if you ask my teachers, who learned with machine code and microcontrollers, they worried that computers had become too abstract and that kids these days have little chance to learn the true details.
For some reason, my eyes cannot cope with white text on black backgrounds, so I usually switch to reader mode for articles like this. But here that option doesn't exist, for some reason?
If you’re going to advocate that we change, it might start with recognition of the value we have and the effort it took to realize it. The flaws can only be resolved insofar as the solutions don’t dilute the gift.
It's unconstrained side effects and dependencies, resulting in an increase in complexity, that seem to cause the major issues and have to be managed.
The real problem, of course, is the human capacity to comprehend (or not) the entirety of the system or subsystem by modeling it correctly in the brain.
The real problem with abstractions is when they are implemented poorly, or have side-effects, or just plain bugs. In other words, we will always be at the mercy of human-produced software.
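The "unconstrained side effects" point above can be made concrete in a few lines. A minimal Python sketch (hypothetical functions, for illustration): the first version mutates shared state invisibly, so its result depends on call history; the second makes the dependency explicit, which is the kind of constraint that keeps complexity manageable.

```python
_total = 0  # hidden shared state

def add_hidden(x):
    """Looks like addition, but silently accumulates into a global:
    nothing in the signature warns the caller about this side effect."""
    global _total
    _total += x
    return _total

def add_explicit(total, x):
    """Pure: same inputs always give the same output, nothing hidden."""
    return total + x

first = add_hidden(1)          # 1
second = add_hidden(1)         # 2: same arguments, different result
pure = add_explicit(2, 3)      # 5, every time
```

The hidden version is exactly the abstraction that "leaks": it can only be understood by modeling state outside the function, which is the comprehension burden the comment describes.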
In the case of Scrum, Scrum is implemented because it gives managers and stakeholders some semblance of observability and control over the software development process. Granted, Scrum shops are merely cosplaying at implementing a rigorous, controlled methodology, but if you take that semblance away you will have angry, frustrated decision makers who resent the software teams for being opaque and unwilling to commit to budgets or schedules.
In the case of abstractions... maybe there are a bunch of junior devs who just can't learn the complexities of SQL and need an ORM layer in order to reckon with the database. (I worked at a software shop where the most senior dev was like this; part of the reason I was brought on board was because I knew how to munge the PL/SQL scripts that formed part of their ETL process.) Maybe one part needs to be completely decoupled from another part in order to allow, for example, easy swapping of storage backends, or the creation of mocks for testing. Maybe some architect or staff SE is just empire-building, but at any rate they're way above your pay grade so the thing will be built their way, with their favorite toolkit of abstractions and advocating for the use of fewer abstractions will get you nowhere.
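The ORM trade-off described above can be shown in miniature. A sketch using Python's built-in sqlite3 with a hypothetical users table (a real ORM does far more, but the shape of the abstraction is the same): the raw SQL is explicit about what the database does; the wrapper hides it behind a friendlier call, which is exactly what a dev uncomfortable with SQL reaches for.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("ada",))

# Raw SQL: full control, full responsibility.
row = conn.execute("SELECT name FROM users WHERE id = ?", (1,)).fetchone()

# A thin ORM-style wrapper: less to know, and less visible.
def find_user(conn, user_id):
    cur = conn.execute("SELECT name FROM users WHERE id = ?", (user_id,))
    r = cur.fetchone()
    return {"id": user_id, "name": r[0]} if r else None

user = find_user(conn, 1)
```

Both paths run the same query; the wrapper only decides who has to know that.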
If you're working on a team of John Carmacks, great! You will be able to design and build a refined jewel of a software system that contains just the right amount of abstraction to enable your team of Carmacks to maintain and extend it easily while still running smoothly on a 386. Unfortunately, most software teams are not like that, most customers are not like that, so the systems they build will develop the abstractions needed to adjust to the peculiarities of that team.
Something as big and complex as the internet, which covers technologies from email to fiber, is held together by layered abstractions.
Also, software has gotten incredibly better since the 70s. We've built so much better tooling (and I believe tooling can help close the gap on growing complexity). When I was learning how to program, I had to double check I had every semicolon in the right place before hitting compile. I simply don't have to do that anymore. I can even get an entire API auto-completed using something like copilot on VSCode.
Nonetheless, a very thought-provoking article. Thank you for sharing!
We're often just hiding some mechanical details when in truth we should be searching for and codifying fundamental ontologies about the given domain.
It's hard because at the same time we can't yet be users because the thing does not yet exist, but yet we can't really know what we must build without a user to inform us. We can create some imaginary notions with various creative explorations, but images can often deceive.
I do believe the tools most used for software development are fundamentally terrible at modelling these ontologies. They are really little more than telling the computer to do A, then do B, and so have never really abstracted much at all.
We've found better ways to organize things over the years, and reduce incidental complexity too. We continue to chip away at small problems year after year. But, there's "no silver bullet," to give us a 10x improvement.
Believe I agree with the piece that we have too many layers building up lately. Would like to see several of them squashed. :-D
Sure, layers of abstraction are leaky and have their issues, but I don’t want to write a hipster language in a hipster editor. If you enjoy that, great.
Also, it’s easy to look at the past with rose-tinted glasses. Modern software is a bloated mess, but still a million times more productive to build with.
I can't find a definition of the title term, "Software Crisis," anywhere in the post.
Is it "...[the] growing complexity of software..."?
It's difficult to reason about something with no definition.
Uh-huh, all hail to the coming of the layer police.
Ends with "Things can be better. I'll show you how." - @author - Maybe it would have been better to under-promise and over-deliver instead?
----
But that got me asking whether there might indeed be a software crisis, and yes, I think there is a crisis of sorts, at least on the personal level. Maybe for others too. It's not the structural one the author proposes. It's that the software landscape is so vast and chaotic. There's so much going on that it's almost impossible to know where to focus. FOMO, I suppose: too much to do and not enough time.
So many clever people, so much energy, doing all kinds of amazing things. For many different reasons, some good, some not. A lot of it looks, to coin a phrase, like fractal duplication, e.g. yet another JS framework, yet another game engine, yet another bullshit SaaS, just because. Seems inherent redundancy is built into the systems.
Good times, I suppose.
The software crisis, if there is one, is caused by complexity. Complexity is the single enemy of a software developer. I would say that reducing complexity is the whole purpose of the software engineering field. I have many small hobby projects where I am the sole developer, and I still struggle with complexity sometimes... I've tried many languages and programming paradigms and still haven't found one that actually "solves" complexity. I am convinced, for now, that the only solution is developer discipline and, guess... good abstractions.
Because complexity doesn't necessarily come from abstractions. In fact, it's the exact opposite: the only way we know to make things "look" simpler, so that we can make sense of them, is to abstract the problem away! Do you need to know how the network works to send an HTTP request?? No!!! You barely need to know HTTP; you just call something like "fetch url" or click a link in a browser and you're done. This is not something we do because we are stuck at some local maximum. Whatever you do to "hide" complexity that is not crucial to solving a certain problem will be called an "abstraction" of the problem, or a "model" if you will. Abstractions always have downsides, of course, but those are vastly offset by the benefits. I can write "fetch url" and be done, but if something goes wrong, I may need to actually have a basic understanding of what that's doing: is the URL syntax wrong, the domain down, the network down, the internet down? Is there a lack of authorization?? You may need to dig a bit, but 99% of the time you don't: so you still get the benefit of doing in one line what is actually a really complex sequence of actions, all done behind the layers of abstraction that the people who came before you created to make your life easier.
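That "dig a bit" step can itself be sketched. A minimal, illustrative Python classifier for the failure modes just listed (a hypothetical helper; the exceptions are constructed by hand to mimic what `urllib.request.urlopen` would raise, so no network access is needed):

```python
import urllib.error
import urllib.parse

def classify_fetch_error(url, exc):
    """Peel one abstraction layer off a failed fetch: was the URL
    malformed, the server refusing us, or the network itself failing?"""
    if not urllib.parse.urlparse(url).scheme:
        return "bad URL syntax"
    # HTTPError subclasses URLError, so check the more specific type first.
    if isinstance(exc, urllib.error.HTTPError) and exc.code in (401, 403):
        return "lacking authorization"
    if isinstance(exc, urllib.error.URLError):
        return "network or DNS problem"
    return "unknown failure"

# Simulated failures, shaped like the ones urlopen raises:
auth = classify_fetch_error(
    "http://example.com/private",
    urllib.error.HTTPError("http://example.com/private", 401,
                           "Unauthorized", {}, None),
)
dns = classify_fetch_error("http://down.invalid/",
                           urllib.error.URLError("dns failure"))
syntax = classify_fetch_error("not a url at all",
                              ValueError("unknown url type"))
```

The point stands either way: 99% of the time "fetch url" just works, and this sort of layer-peeling is reserved for the remaining 1%.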
> Various efforts have been made to address pieces of the software crisis, but they all follow the same pattern of "abstract it away"
Of course they do. Abstracting away is the very definition of addressing complexity. I believe what the author is actually trying to say is that some of the abstractions we have come up with are not the best abstractions yet. I would agree with that because as hardware evolves, what the best abstraction is for dealing with it also should evolve, but it hardly does so. That's why we end up with a mismatch between our lower level abstractions (Assembly, C) and the modern hardware we write software for. Most of the time this doesn't matter because most of us are writing software on a much higher level, where differences between hardware are either irrelevant or so far below the layers we're operating on as to be completely out of sight (which is absolutely wonderful... having to write code specific for certain hardware is the last thing you want if all you're doing is writing a web app, as 90% of developers do), but sure, sometimes it does.
> We lament the easy access to fundamental features of a machine, like graphics and sound. It is no longer easy to build software, and nothing comes with a manual.
I have trouble taking this seriously. Is the author a developer? If so, how can you not know just how wonderful we have it these days?? We can access graphics and sound even from a web app running in a browser!! Any desktop toolkit has easy-to-use APIs for that... we even have things like game engines that will let you easily access the GPU to render extremely complex 3D visualisations... which, most of the time, work on most operating systems in use without you having to worry about it.
Just a couple of decades ago, you would indeed have had to buy a manual for the specific hardware you were targeting in order to talk to a sound board, but those days are long gone for most of us (people in the embedded software world are the only ones still doing that sort of thing).
If you think it's hard to build software today, I can only assume you have not built anything even 10 years ago. And the thing is: it's easy because the "hard problems" are mostly abstracted away and you don't even need to know they exist! Memory allocation?? No worries, use one of a million languages that come with a GC... and even if you want the most performant code, just use Rust; you still don't need to worry (but you can if you must!!! Go with Zig if you really want to know where your bytes go). Emit sound? Tell me which toolkit doesn't come with that ready out of the box?? Efficient hash table? Every language has one in its standard library. Hot code reloading so you can quickly iterate?? Well, are you using Lisp? If so, you've had that since the early 80s; otherwise, perhaps try Smalltalk, or even Java (use jdb, which lets you "redefine" each class on the go, or the IntelliJ debugger: just rebuild while it's running, especially if you also use DCEVM, which makes the JVM more capable in this regard), or Dart/Flutter, which has it out of the box.
Almost any other problem you may come across: either your language makes it easy for you already, or you can use a library for it, which is as easy to install as typing "getme a lib". Not to mention that if you don't know how to do something, ask an AI; it will tell you exactly how to do it, with code samples and a full explanation, in most circumstances (if you haven't tried in the last year or so, try again, AI is getting scary good).
Now, back to what the problem actually is: how do we create abstractions that are only as complex as they need to be, and how many "layers" of abstraction are ideal? What programming model is ideal for a given problem?? Sometimes OOP, sometimes FP, sometimes LP... but how do we determine it? How do we make software that any competent developer can come along and start modifying with a minimum of fuss? These are the real problems we should be solving.
> Programming models, user interfaces, and foundational hardware can, and must, be shallow and composable. We must, as a profession, give agency to the users of the tools we produce.
This is the part where I almost agree with the author. I don't really see why there should be a limit on how "shallow" our layers of abstraction should be, because that seems to me to limit how complex a problem you can address... I believe the current limit is the human brain's ability to make sense of things, so perhaps there is a shallowness limit, but perhaps in the future we may be able to break that barrier.
Finally, about user agency: well, welcome to the FOSS movement :D that's what it's all about!!