July 18th, 2024

Post-Architecture: Premature Abstraction Is the Root of All Evil

The article explores Post-Architecture in software development, cautioning against premature abstraction. It promotes simplicity, procedural programming, and selective abstraction for maintainable and efficient code.


The article discusses the concept of Post-Architecture, arguing that premature abstraction is the main threat to maintainable software. While abstraction can be a useful tool, applying it too early hinders maintainability, so the author advocates simplicity and a procedural programming approach, avoiding unnecessary abstractions. The article contrasts object-oriented programming with Post-Architecture, stressing the importance of choosing the right level of abstraction, and touches on functional programming, recommending a restrained functional style that sticks to the basics and uses pure functions. The author concludes that starting with the basics of procedural programming and expanding abstractions gradually, based on necessity, is the key to successful (post-)architecture: prioritize simplicity, avoid unnecessary complexity, and choose abstractions carefully to keep code maintainable and efficient.

AI: What people are saying
The comments on the article about Post-Architecture in software development discuss various perspectives on abstraction and simplicity in coding.
  • Many agree on the importance of avoiding premature abstraction and emphasize starting with simple, procedural code.
  • Some suggest co-locating code to avoid unnecessary abstractions and only abstracting when patterns emerge.
  • There is a debate on the role of Object-Oriented Programming (OOP) versus procedural and functional programming, with some defending OOP for domain modeling and others criticizing it for leading to complex code.
  • Several comments highlight the need for flexibility and the ability to adapt to changing requirements, rather than rigid upfront design.
  • References to established principles and laws, such as Gall’s law and Martin Fowler's "Speculative Generality," are used to support arguments against premature abstraction.
28 comments
By @recursivedoubts - 3 months
My general take (and w/ the caveat that every system is different) is as follows:

- procedural code to enter into the system (and perhaps that's all you need)

- object oriented code for domain modeling

- functional code for data structure transformations & some light custom control flow implementation (but not too much)

I like the imperative shell, functional core pattern quite a bit, and focusing on data structures is great advice as well. The anti-OO trend in the industry has been richly earned by the OO architecture astronauts[1], but the idea of gathering a data structure and the operations on that data structure in a single place, with data hiding, is a good one, particularly for domain modeling.
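A minimal sketch of the "functional core, imperative shell" pattern mentioned here, assuming invented stand-ins (`load_order`, `charge_customer`) for the real I/O:

```python
# Hypothetical sketch of "functional core, imperative shell".
# load_order and charge_customer are invented stand-ins for real I/O.

def load_order(order_id: str) -> tuple[float, int]:
    """Stand-in for a database read."""
    return 100.0, 3

def charge_customer(order_id: str, amount: float) -> None:
    """Stand-in for a payment API call."""
    print(f"charged {amount} for {order_id}")

def apply_discount(order_total: float, loyalty_years: int) -> float:
    """Functional core: a pure decision over plain data, trivially testable."""
    rate = min(0.05 * loyalty_years, 0.25)  # cap the discount at 25%
    return round(order_total * (1 - rate), 2)

def checkout(order_id: str) -> None:
    """Imperative shell: I/O at the edges, the pure core in the middle."""
    total, years = load_order(order_id)
    charge_customer(order_id, apply_discount(total, years))
```

The core can be unit-tested with no mocks; only the thin shell touches the outside world.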

In general I think we are maturing as an industry, recognizing that various approaches have their strengths and weaknesses and a good software engineer can mix and match them when building a successful software project.

There is no silver bullet. If only someone had told us that years ago!

[1] - https://www.joelonsoftware.com/2001/04/21/dont-let-architect...

By @sevensor - 3 months
Although I agree with the recommendations, I cringe at the definition of abstraction. In a sane world, abstraction doesn't mean defining classes so much as it means identifying important unifying concepts. DRYing your code by moving a method to a common base class isn't abstraction in any important way, it's just adding a level of indirection. In fact, I'd argue that this example is the opposite of abstraction: it's concretion. Now every subclass relies implicitly on a particular implementation of that shared method. Not that doing this is never useful, but it's a mistake to call it abstraction when it's nothing of the sort. No wonder people complain that their abstractions leak.
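A small sketch of the distinction this comment draws, with invented class names: hoisting a method to a base class adds indirection and couples every subclass to one concrete implementation, whereas naming the unifying concept is the actual abstraction.

```python
from typing import Protocol

class Exporter:
    # "DRYed" shared method: not an abstraction, just indirection.
    # Every subclass is now implicitly coupled to this concrete format.
    def serialize(self, record: dict) -> str:
        return ",".join(f"{k}={v}" for k, v in record.items())

class CsvExporter(Exporter):
    pass  # silently depends on the comma-joined format above

class LogExporter(Exporter):
    pass  # ditto; change the base method and both "leak"

# Naming the unifying concept, by contrast, is abstraction:
class Serializer(Protocol):
    def serialize(self, record: dict) -> str: ...
```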
By @jaynate - 3 months
Gall’s law: “A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.”

Your theory of premature architecture reinforces Gall’s law.

This is from the book Systemantics: How systems work and especially how they fail (1977).

https://en.wikipedia.org/wiki/Systemantics

By @Zambyte - 3 months
For anyone who wants to do a deep dive into effective abstractions, I highly recommend SICP. The full book[0] and lectures[1] are available online for free. You don't have to know Scheme to follow along.

[0] https://mitp-content-server.mit.edu/books/content/sectbyfn/b...

[1] https://ocw.mit.edu/courses/6-001-structure-and-interpretati...

By @t43562 - 3 months
I like to start with a fairly unambitious bit of procedural code and gradually introduce abstractions when it starts to get complicated or repetitious.

Straight code becomes functions; occasionally a group of functions cries out to become a class.

In C++ this is a huge effort - change hurts more there. In Python it's much less painful, and I end up with a program that is some imperfect composite of object orientation and functions. Next week, when I want it to do more, I move it further down the road to structure.

I also like keeping side effects in their own ghettos and extracting everything else out of those if possible but I'm not a big functional programming person - it's just about testing. Testing things with side effects is a pain.
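A minimal sketch of the testing payoff this comment describes, with invented names: the side-effecting layer stays thin, and everything extracted from it is pure and needs no setup to test.

```python
import sys

def summarize(lines: list[str]) -> str:
    """The extracted, pure part: no side effects, so tests need no setup."""
    words = sum(len(line.split()) for line in lines)
    return f"{len(lines)} lines, {words} words"

def main() -> None:
    """The side-effect 'ghetto': a thin layer that only reads and prints."""
    print(summarize(sys.stdin.read().splitlines()))
```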

By @stretch1414 - 3 months
Wonderful write-up. One way I really try to avoid premature abstractions is co-locating code wherever it is used. If you don't try to put everything in a shared lib or utils, you keep the surface area small. Putting a function in the same file (or same folder) makes it clear that it is meant to be used in that folder only. You have to be diligent about your imports, though, and be sure you don't have some crazy relative paths.

Then, if you find yourself with that same function in lots of places in your code, you might have stumbled upon an abstraction. That's when you put it in the shared lib or utils folder. But maybe not - maybe that abstraction should stay in a nested folder because it is specifically used for a subset of problems. Again, that's to avoid over-abstracting. If you are only using it for 3 use cases that are all within the same parent folder path (just different sub-folders), then only co-locate it as far up in the file tree as is absolutely necessary to keep the import simple. It requires due diligence, but the compartmentalization of the folder structure feels elegant in its simplicity.
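The folder discipline described above might look like this hypothetical layout (all file names invented):

```
src/
  billing/
    invoice.py      # helper used only by billing stays here
    totals.py
  reports/
    shared.py       # hoisted only this far: used by both sub-folders below
    monthly/
      render.py
    weekly/
      render.py
  lib/              # promote a helper here only once it shows up
                    # across unrelated parent folders
```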
By @aliasxneo - 3 months
I'm not "formally" trained in software engineering and am primarily self-taught. This area, in particular, has been confusing to me over the years, especially after consuming so many contradictory blog posts.

I tried to model DDD in a recent Golang project and am mostly unhappy with the result. Perhaps in my eagerness, I fell into the trap of premature abstraction, but there's not anything in particular that I can point to that might lead to that conclusion. It just feels like the overall code base is more challenging to read and has a ton of indirection. The feeling is made worse when I consider how much extra time I spent trying to be authentic in implementing it.

Now, I'm starting a new project, and I'm left in this uncertain state, not knowing what to do. Is there a fine balance? How is it achieved? I appreciate the author's attempt at bringing clarity, but I honestly walked away feeling even more confident that I don't understand how this all works out.

By @OhMeadhbh - 3 months
I think there's plenty of good advice in this post, though the OP doesn't talk as much about the evils of premature abstraction as one might like. Still, they do talk about how to avoid it using reasonable programming guidelines.

In the talk about data structures, I was reminded of Fred Brooks' quote from MMM: "Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowcharts; they'll be obvious." Several people have translated it for a modern audience as something like "Show me your code and conceal your data structures, and I shall continue to be mystified. Show me your data structures, and I won't usually need your code; it'll be obvious."

Several years ago I was happy to work with several people with an interest in philosophical history. We whiled away the hours thinking about whether these quotes represented something of the tension between Heraclitus ("You cannot step into the same river twice") and Plato (everything is form and substance). So... I think the observation about the alternating utility of form and function is an old one.

By @resters - 3 months
A well chosen abstraction dramatically simplifies code and understanding, and a poorly chosen abstraction has the opposite effect.

When building a system some choices about abstractions can be made early, before the problem domain is fully understood. Sometimes they stand the test of time, other times they need to be reworked. Being aware of this and mindful of the importance of good abstractions is key to good system design.

By @BobbyTables2 - 3 months
Premature Abstraction is a common problem, affecting developers of all ages. Most are too embarrassed to talk about it.

Fortunately, there are non-invasive therapies that can reduce the frequency of occurrence.

By @yellowapple - 3 months
> /// BAD: This mutates `input`, which may be unexpected by the caller.

> [...]

> /// GOOD: `input` is preserved and a new object is created for the output.

Neither of these is good or bad without knowing the context and understanding the tradeoffs. In particular, sometimes you want to mutate an existing object instead of duplicating it, especially if it's a big object that takes a while to copy.
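The tradeoff might be sketched like this (invented names, not the article's actual example): the mutating version avoids a copy, the pure version keeps the caller's data intact.

```python
def normalize_in_place(scores: list[float]) -> None:
    """Mutates `scores`: no copy, cheap for large lists, but the
    caller's data changes underneath them."""
    top = max(scores)
    for i, s in enumerate(scores):
        scores[i] = s / top

def normalized(scores: list[float]) -> list[float]:
    """Preserves `scores` and returns a fresh list: safer for callers,
    at the cost of an extra allocation and copy."""
    top = max(scores)
    return [s / top for s in scores]
```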

By @aswerty - 3 months
> Post-Architecture is a method of defining architecture incrementally, rather than designing it upfront

For anyone else wondering what it means.

I'm going to be honest: almost all architecture I've seen out in the wild has followed a more incremental approach. But then again, nowhere I've worked has separated the architecture and coding roles.

By @phkahler - 3 months
>> Often, an abstraction doesn’t truly hide the data structures underneath, but it is bound by the limitations of the initial data structure(s) used to implement it. You want to refactor and use a new data structure? Chances are you need a new abstraction.

Data structures are abstractions :-)

By @MisterBastahrd - 3 months
There's no greater joy in life than jumping through an abstract object, an object interface, and a factory method only to find out that the factory only services one object.
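The indirection tour being mocked, compressed into one hypothetical sketch (all names invented):

```python
from abc import ABC, abstractmethod

class PaymentProcessor(ABC):              # the abstract object
    @abstractmethod
    def charge(self, cents: int) -> str: ...

class StripeProcessor(PaymentProcessor):  # the one and only implementation
    def charge(self, cents: int) -> str:
        return f"charged {cents} via stripe"

class ProcessorFactory:                   # the factory method...
    def create(self, kind: str) -> PaymentProcessor:
        if kind == "stripe":              # ...which only ever services one object
            return StripeProcessor()
        raise ValueError(kind)
```
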
By @ChrisMarshallNY - 3 months
In my experience, I've always found the devil to be in the [late] details.

I have learned (the hard way) that, no matter how far I go down the rabbit hole of architecture, I am never a match for Reality.

I. Just. Can't. Plan. For. Everything.

I've learned to embrace the suck, so to speak. I admit that I don't know how things will turn out, once I get into the bush, so I try to design flexible architectures.

Flexibility sometimes comes as abstractions; forming natural "pivot points," but it can also come from leaving some stuff to be "dealt with later," and segmenting the architecture well enough to allow these to be truly autonomous. That can mean a modular architecture, with whitebox "APIs," between components.

People are correct in saying that OO can lead to insane codeballs, but I find that this is usually because someone thought they had something that would work for everything, and slammed into some devilish details, down the road, and didn't want to undo the work they already did.

I have learned to be willing to throw out weeks worth of work, if I determine the architecture was a bad bet. It happens a lot less, these days, than it used to. Hurts like hell, but I've found that it is often beneficial to do stuff I don't want to do. A couple of years ago, I threw out an almost-complete app, because it was too far off the beam. The rewrite works great, and has been shipping since January.

Anyway, I have my experience, and it often seems to be very different from that of others. I tend to be the one going back into my code, months or years after I wrote it, so I've learned to leave the code the way I like it - not what someone else says it should be like, or whatever lets me tick off a square in Buzzword Bingo.

My stuff works, ships, and lasts (sometimes, for decades).

By @feoren - 3 months
I know HN doesn't like quibbles about site design, but I'm literally having difficulty reading the article due to the font size being forced to be at least 1.3vw. Zooming out doesn't decrease the font size! Downvote if this is boring, but (a) I've never seen a site that did that before, so it's just notable from a "Daily WTF" kind of perspective, and (b) just in case the submitter is on HN: it's actually preventing me from reading the content (without changing it in DevTools anyway).
By @scotty79 - 3 months
I would love a language that has this gradual, evolutionary abstracting as a core concern - one that makes it easy, where you can start from the simplest imperative code and abstract it as the need arises.

For example, a language that requires a "this." or "self." prefix is not such a language, because you can't easily turn a script or a function into a method of some object.

By @m3kw9 - 3 months
Here’s the scenario: a hot-shot intern comes in and calls a meeting to propose generics so things can be done “easier”. He does a good job presenting it and its value, and the dumbass team lead OKs it. Fast forward one week, and everyone is complaining behind his back about how much pain it is to use.
By @agentultra - 3 months
This seems more like premature indirection than abstraction.

There are other design “paradigms,” such as denotational design. You start with and build from abstractions.

By @sesm - 3 months
Premature abstraction and premature optimization are just instances of the Heffalump Trap pattern.
By @epgui - 3 months
The discussion of procedural code doesn't make sense to me, because it seems to mix together some orthogonal concepts.

Procedural is not the opposite of object-oriented (nor is it particularly contrasting); idiomatic OOP is procedural to a large degree. Effective functional programming happens when you ditch the procedural approach in favour of a more declarative approach.
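The contrast this comment draws could be sketched as follows (a trivial invented example): the procedural version spells out the steps, the declarative version states the result.

```python
orders = [("widget", 3), ("gadget", 0), ("sprocket", 5)]

# Procedural: a sequence of steps mutating an accumulator.
shipped = []
for name, qty in orders:
    if qty > 0:
        shipped.append(name)

# Declarative/functional: state *what* you want, not how to loop.
shipped_fp = [name for name, qty in orders if qty > 0]
```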

By @k__ - 3 months
Isn't that mostly a question of composability?
By @cryptica - 3 months
Though I agree with the point about not creating objects/instances where a pure function will get the job done, I disagree with the general stance against OOP. I think OOP is absolutely essential to simplicity. FP tends to lead to too many indirections, with data traversing too many conceptual boundaries, and (if used dogmatically) tends to encourage low cohesion. I want high cohesion and loose coupling. Some degree of co-location of state and logic is important, since that affects the cohesion and coupling of my modules.

The key to good OOP is to aim to pass only simple primitives or simple cloned objects as arguments to methods/functions. 'Spooky action at a distance' is really the only major issue with OOP, and it can easily be solved by simple pass-by-value function signatures. So it's not a fundamental issue with OOP itself: OOP doesn't demand pass-by-reference. Alan Kay emphasized messaging, which leans toward 'pass by value'; a message is information, not an object. We shouldn't throw out the baby with the bathwater.

When I catch a taxi, do I have to provide the taxi driver with a jerrycan full of petrol and a steering wheel? No. I just give the taxi driver the message of where I want to go. The taxi driver is responsible for the state of his car. I give him a message, not objects.

If I have to give a taxi driver a jerrycan full of petrol, that's the definition of a leaky abstraction... Possibly literally in this case.
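The taxi analogy, as a minimal sketch (invented names): the caller passes an immutable message, and the driver's state stays the driver's.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RideRequest:
    """A message: plain, immutable information, not a bundle of objects."""
    destination: str

class TaxiDriver:
    def __init__(self) -> None:
        self._fuel_litres = 40.0   # the driver's state, hidden from callers

    def take(self, request: RideRequest) -> str:
        self._fuel_litres -= 5.0   # only the driver touches the car
        return f"driving to {request.destination}"
```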

That said, I generally agree with this article. That's why I tend to write everything in 1 file at the beginning and wait for the file size to become a problem before breaking things up.

There are many different ways to slice things up and if you don't have a complete understanding of your business domain and possible future requirement changes, there is no way you will come up with the best abstractions and it's going to cost you dearly in the medium and long term.

A lot of developers throw their hands up and say things like "We cannot anticipate future requirement changes"... Well of course not - not on day 1 of your new system! You shouldn't be creating complex abstractions from the beginning, when you haven't fully absorbed the problem domain; you're locking yourself into anti-patterns and lots of busy-work by forcing yourself to constantly re-imagine your flawed original vision. It's easier to come up with a good vision for the future from scratch, without conceptual baggage - otherwise you're just seeding bias into the project. Once you have absorbed the domain, you will see that you CAN predict many possible requirement changes, and it will impact your architecture positively.

Coming up with good abstractions is really difficult. It's not about intelligence because even top engineers working in big tech struggle with it. Most of the working code we see is spaghetti.

By @ImHereToVote - 3 months
This is nothing to be ashamed of. We have all been there. I mean, I haven't.
By @commandlinefan - 3 months
> The real problem with the class Foo above is that it is utterly and entirely unnecessary

I see this sentiment a _lot_ in anti-OO rants, and the problem is that the ranter is missing the point of OO _entirely_. Hard to fault them, since missing the point of OO entirely is pretty common, but... if you're creating classes as dumb data wrappers and reflexively creating getters and setters for all of your private variables, then yes, what you're doing _is_ utterly and entirely unnecessary - but you're not doing object-oriented design at all. The idea, all the way back to the creation of OO, was to expose actions and hide data. If you're adding a lot of syntax just to turn around and expose your data, you're just doing procedural programming with a redundant syntax.
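A sketch of the contrast (invented names, not the article's `Foo`): the first class merely wraps data; the second exposes an action and keeps its invariant inside.

```python
# The getter/setter wrapper: data exposed, no behavior.
class AnemicThermostat:
    def __init__(self) -> None:
        self._target = 20

    def get_target(self) -> int:
        return self._target

    def set_target(self, t: int) -> None:
        self._target = t  # procedural programming with extra syntax

# Exposing actions and hiding data instead:
class Thermostat:
    MAX = 30

    def __init__(self) -> None:
        self._target = 20

    def nudge_warmer(self, by: int = 1) -> None:
        self._target = min(self._target + by, self.MAX)  # invariant lives inside

    def should_heat(self, room_temp: int) -> bool:
        return room_temp < self._target
```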

By @gspencley - 3 months
I've been a software engineer professionally for 25 years and have been coding more like 30-35. There is a fundamental principle here that I agree with, and it centers on a code smell that Martin Fowler termed "Speculative Generality" in his book "Refactoring."

Speculative Generality is when you don't know what will have to change in the future and so you abstract literally everything and make as many things "generic" as you possibly can in the chance that one of those generic abstractions may prove useful. The result is a confusing mess of unnecessary abstractions that adds complexity.

However, yet again I find myself staring at a reactionary post. If developers get themselves into trouble through speculative generality, then the answer is clearly "Primitive Obsession" (another code smell identified in "Refactoring") right?

Primitive Obsession is the polar opposite of abstraction. It dispenses with the introduction of high-level APIs that make working with code intuitive and instead insists on working with native primitive types directly. Primitive Obsession often comes from a well-meaning initiative to not "abstract prematurely." Why create a "Money" class when you can just store your currency figure in an integer? Why create a "PersonName" class when you can just pass strings around? If your language supports classes and functions, why create a class to group common logical operations around a single data structure when you can instead introduce functions - even if they take more parameters and could lead to other problems such as "Shotgun Surgery"?
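The "Money" example above might be sketched like this (a hypothetical minimal version): a bare integer happily adds USD to EUR, while a tiny class turns that silent bug into a loud error.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Money:
    cents: int        # integer cents: no float rounding surprises
    currency: str

    def __add__(self, other: "Money") -> "Money":
        if self.currency != other.currency:
            raise ValueError("cannot add mixed currencies")
        return Money(self.cents + other.cents, self.currency)
```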

This is not to say that the author is wrong or that one should embrace "premature abstraction." Only that I see a lot of reactionary thinking in software engineering. Most paradigms that we have today were developed in order to solve a very real problem around complexity at the time. Without understanding what that complexity was, historically, you are doomed to repeat the mistakes that the thinkers at the time were trying to address.

And of course, those iterations introduced new problems. Premature Abstraction IS a "foot gun." What software engineers need to remember is that the point of Design Patterns, the point of Abstractions, the point of High-Level languages and API design is to SIMPLIFY.

One term we hear a lot, that I have been on the war path against for the past decade or two is "over engineering." As engineers, part of our jobs is to find the simplest solution to a given problem. If, in your inappropriate use of a given design pattern or abstraction, you end up making something unnecessarily complicated, you did not "over engineer" it. You engaged in BAD engineering.

When it comes to abstractions, like anything else, the key is to gain the experience needed to understand a) why abstractions are useful and b) when abstractions can introduce complexity, and then to apply that to a prediction of what will likely benefit from abstraction because it will be very difficult to change later.

All software changes. That's the nature of software and why software exists in the first place. Change is the strength of software but also a source of complexity. The challenge of writing code comes from change management: being able to identify which areas of your code are going to be very difficult to change later, and finding strategies for facilitating that change.

Premature Abstraction throws abstractions at everything, even things that are unlikely to change, without recognizing that doing so makes the code more complex, not less. Primitive Obsession says "we can always abstract this later if we need to," when in some situations that will prove impossible (e.g. integrating with and coupling to a third-party vendor - a form of "vendor lock-in" through code that is often seen).

/stream-of-consciousness-thoughts-on-article

By @TurboHaskal - 3 months
It seems to me the author just wants multiple dispatch.
By @jimmaswell - 3 months
Fine blog post overall, but the author fell to premature abstraction themselves in declaring that little Foo class bad. It's entirely too generalized for me to say anything negative about at all. Depending on the context, a tiny class like that could be completely sensible or utterly unnecessary.