July 26th, 2024

Fear of over-engineering has killed engineering altogether

The article critiques the tech industry's focus on speed over engineering rigor, advocating for "Napkin Math" and Fermi problems to improve decision-making and project outcomes through basic calculations.

Read original article

In recent years, engineering in tech has faced criticism for being overshadowed by a focus on rapid shipping and iteration, often at the expense of thorough planning and optimization. This shift, influenced by the Agile Manifesto and Lean Startup principles, has led to a culture where developers prioritize speed over engineering rigor. The author argues that this trend has swung too far, neglecting the value of sound engineering practices, particularly in addressing linear problems related to time, space, and cost.

The article emphasizes the importance of "Napkin Math" and Fermi problems as tools for making informed decisions in software development. By estimating parameters such as processing time, memory usage, and financial feasibility, developers can avoid costly mistakes and streamline their projects. The author illustrates this approach through the development of a project called fika, detailing calculations made to assess user needs, storage requirements, and cost management.
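To make the flavor of that approach concrete, here is a small napkin-math sketch in Python. The user counts, item sizes, and prices are illustrative assumptions for a hypothetical feed-style app, not figures taken from the article.

```python
# Hypothetical napkin math for a feed-style app (illustrative numbers only).
USERS = 10_000                  # assumed active users
ITEMS_PER_USER_PER_DAY = 50     # assumed new items stored per user per day
BYTES_PER_ITEM = 2_000          # assumed average item size (~2 KB)
PRICE_PER_GB_MONTH = 0.023      # rough object-storage price, USD per GB-month

daily_bytes = USERS * ITEMS_PER_USER_PER_DAY * BYTES_PER_ITEM
monthly_gb = daily_bytes * 30 / 1e9

print(f"~{daily_bytes / 1e9:.1f} GB/day, ~{monthly_gb:.0f} GB/month")
print(f"~${monthly_gb * PRICE_PER_GB_MONTH:.2f}/month in storage")
```

Even at this level of precision, the answer (roughly a gigabyte a day and well under a dollar a month of storage) is enough to rule some architectures in and others out, which is the kind of insight the author is after.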

Through practical examples, the author demonstrates how basic calculations can lead to significant insights, ultimately guiding architectural decisions and improving project outcomes. The piece concludes by encouraging developers to embrace simple mathematical assessments as a means of enhancing their engineering practices, suggesting that such efforts can yield substantial benefits without the pitfalls of over-engineering.

Related

The software world is destroying itself (2018)

The software development industry faces sustainability challenges like application size growth and performance issues. Emphasizing efficient coding, it urges reevaluation of practices for quality improvement and environmental impact reduction.

A dev's thoughts on developer productivity (2022)

The article delves into developer productivity, emphasizing understanding code creation, "developer hertz" for iteration frequency, flow state impact, team dynamics, and scaling challenges. It advocates for nuanced productivity approaches valuing creativity.

Engineering Principles for Building Financial Systems

The article delves into engineering principles for financial systems, highlighting accuracy, auditability, and timeliness in records. It stresses immutability, granularity, and idempotency. Best practices involve integers for amounts, detailed currency handling, and consistent rounding.

Htmx: Simplicity in an Age of Complicated Solutions

Erik Heemskerk discusses the pursuit of a 'silver bullet' technology in software development, emphasizing simplicity over complexity. He critiques over-engineering in front-end development, highlighting trade-offs in code solutions for better user experiences.

Don't Let Architecture Astronauts Scare You

Joel Spolsky critiques "Architecture Astronauts" for prioritizing high-level abstractions over practical software solutions, urging developers to focus on user needs and tangible outcomes rather than complex architectural concepts.

25 comments
By @klodolph - 3 months
> Before the 2000s, academics ruled computer science. They tried to understand what "engineering" meant for programming. They borrowed practices from other fields. Like dividing architects from practitioners and managing waterfall projects with rigorous planning.

I don’t think this is anywhere close to an accurate account of history.

If you look at the history of “waterfall model” then you find Royce (1970), and if you dig back farther you find Bennington (1956). From their writings, it sounds like people understood how bad the waterfall model was, even back then. The waterfall model primarily shows up as an example of what to avoid.

> Developers must ship, ship, and ship, but, sir, please don’t bother them; let them cook!

My explanation for this is that corporations are just really bad at building incentives for long-term thinking. The developers who ship, ship, and ship get promoted and move up, and now they’re part of the leadership culture at the company. The right incentives are not in place because the right incentives are too difficult—we want nice, easy-to-measure metrics to judge employee performance. Shipping features is a nice metric, and if your features move the needle on other metrics (engagement), then so much the better. You get retained, you get promoted, because you gave management a nice little present full of data on why you’re a good employee, wrapped up with a bow.

The reason that managers want nice metrics is because they want to avoid being blamed. Managers want to avoid being blamed for the wrong decision more than they want to make the right decision.

The way to counteract it is to cultivate trust. With trust, you can work on other things besides avoiding blame. When you’re working on other things besides avoiding blame, you can take the long-term view. When you take the long-term view, you can advocate for employees that fix problems and give them resources.

By @skeeter2020 - 3 months
This opinion piece paints a pretty limited perspective as the de facto state, which I don't think is really true. From my perspective, programming (in the vast majority of situations) is neither engineering nor computer science. The creations are not particularly complex, at least not in their initial manifestations, where something like formal verification would help. Even the assembly patterns are not unique; the differences can only be determined through some form of build-test cycle. There are a lot more non-traditional developers in the world, by which I mean not comp sci or engineering (or uni) grads, which is maybe what the author interprets as "YOLO". I think that on the whole this is a really good thing, and even as a formally trained student in the area I don't agree we're in some sort of engineering desert because of the academics that came before us.
By @csours - 3 months
The big problem is "How do I connect the money to the work". In large corporations, this becomes project -> plan -> work. The project gets a budget based on the plan, then you do the work based on the plan.

The problem is the link between plan and work. As you work, you learn. That is the primary activity of software development. Learning is a menace to planning. As you learn, you have to replan, but your budget was based on the original project plan.

You can talk about engineering and culture and whatever you want, but if you're working for money, the problem remains of connecting the work to money and the money to the work.

I'm reminded of the Oxygen Catastrophe - https://en.wikipedia.org/wiki/Great_Oxidation_Event - we need oxygen to live, but it also kills.

By @ChrisMarshallNY - 3 months
There are ways to do "JiT" engineering, and evolutionary design.

However, they generally rely on the practitioner being both skilled, and experienced.

Since the tech industry is obsessed with hiring inexperienced, minimally-skilled devs, it's unlikely to end well.

By @mfer - 3 months
It's more than the fear of over-engineering.

For example, with startups the time to market, pivots, and not owning your decisions long term (which often happens) leads people to move fast and not consider consequences.

It's about goals and following the money. If a bridge fails, there's significant legal liability, guilt over lost lives, and more. If software doesn't scale, it can be rewritten; if a company is hacked and customer info gets out, there's a marketing black eye. It's different.

I say this as a classically trained engineer who thinks more engineering needs to be layered into software development. We need to justify it to the business.

By @asdefghyk - 3 months
It's OK to move fast if the cost of failure is very small.

I do not understand why the Crowdstrike change was not tested appropriately, and why this problem was not found in testing. My company has an automated test suite that takes some time (several hours) to run, along with manual tests, before any software is released. If it's a risky change, it needs to be reviewed by another developer. If it is an emergency production change, the testing is much less; however, the change is still reviewed by an experienced developer and still manually tested by a tester. The regression tests are not run...

By @simonw - 3 months
The title of this piece is almost unrelated to the content. The post itself is about using napkin-math to estimate things like how much disk space will be needed for a feature.
By @hintymad - 3 months
Maybe fear killed engineering. Even in a decently managed company, I can see so much engineering effort getting stalled, compromised, bastardized, or killed. Like your manager asks you to get sign-offs from 10 different orgs. Like you want to do a POC, yet somehow an infra team demands that you integrate with their system even though your targeted users didn't give a shit. Like you just want to submit a workflow to Temporal Cloud, yet a product manager demands that you create an abstraction because he is afraid of X and Y. Like you just want to serve 5TB of data, and you get sucked into this: https://www.youtube.com/watch?v=3t6L-FlfeaI.

I'm not sure how companies battle this kind of fear. It looks like Amazon's Working Backwards, Netflix's Freedom and Responsibility, and Uber's Let Builders Build can more or less counter such fear, but ultimately it's about finding the right people, who have a good sense of product and who strive to make progress.

By @asdefghyk - 3 months
Also, the Crowdstrike incident could have been largely avoided if an incremental release to customers had been used, rather than releasing to the whole world at once.

For example, release the change to some group of customers, say 5,000; if that's OK, release it to another, larger group of customers.

There was no planning for a failed update. There should have been a mechanism for auto rollback if problems were encountered. To me this is 101, very basic.
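A minimal sketch of what such a health-gated, incremental rollout might look like; the wave sizes, threshold, and function names here are all hypothetical, not anyone's actual release process.

```python
import random

# Hypothetical staged rollout with an auto-rollback gate (illustrative only).
WAVES = [5_000, 50_000, 500_000]   # assumed wave sizes before full release
CRASH_FREE_THRESHOLD = 0.999       # assumed health bar for continuing

def deploy_wave(size: int) -> float:
    """Stand-in for a real deployment; returns the observed crash-free rate."""
    return 1.0 - random.random() * 0.002   # simulated fleet telemetry

def staged_rollout() -> bool:
    released = 0
    for size in WAVES:
        crash_free = deploy_wave(size)
        if crash_free < CRASH_FREE_THRESHOLD:
            print(f"halting after {released + size:,} hosts; rolling back")
            return False                   # the auto-rollback path
        released += size
    print(f"all waves healthy ({released:,} hosts); releasing to the rest")
    return True

staged_rollout()
```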

By @ethbr1 - 3 months
Nice perspective! And I appreciate that an actual use case of what the author is recommending is attached.

In my experience, a lot of ills in software come from the working set of facts around a particular problem becoming too large for one person to hold.

Then you get Healthcare.gov v1 -- everyone proceeds on incorrect assumptions about what their partners are doing/building, and the resulting system is broken.

As a salve to that problem, napkin-math upper/lower bounds estimation can be incredibly useful.

Especially because in system design "the exact number" is usually less important than "the largest/smallest likely number" and "rate of growth/reduction".

Simplifying things that aren't useful to know in detail (e.g. the exact numbers for author's users) leaves time/mental space for things that are (e.g. if it makes sense to outsource a particular component to SaaS).
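As a rough illustration of that bounds-style estimate, the sketch below compares a smallest-likely and a largest-likely daily load; every number in it is hypothetical.

```python
# Hypothetical upper/lower bound estimate for daily request volume (illustrative only).
low_users, high_users = 1_000, 50_000   # assumed plausible range of active users
low_reqs, high_reqs = 20, 200           # assumed requests per user per day

lower_bound = low_users * low_reqs      # smallest likely daily load
upper_bound = high_users * high_reqs    # largest likely daily load

print(f"daily requests: {lower_bound:,} to {upper_bound:,}")
print(f"rough peak rps (upper bound / 86,400 s, x10 burst): {upper_bound / 86_400 * 10:.0f}")
```

If even the upper bound fits comfortably on one modest server, the exact number inside the range stops mattering for the architecture decision.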

By @zokier - 3 months
Calling 90s software development overly rigorous and academic is certainly an interesting take. It was also the era when PHP and Perl ruled the world, and C was still the domain of cowboy coders instead of language lawyers.

It's only in the 00s that any sort of methodology (even if it's agile) started to get wider recognition, and academic languages like Haskell sparked interest. The 00s were also the peak era for architecture astronauts; JavaEE and C++ Boost, for example, were almost completely 00s products.

The counterreaction to that was the rise of low-ceremony stuff like Ruby on Rails, or HTML5 toning down the W3C's overwrought stuff (XHTML), and now the pendulum has been swinging back, with TypeScript and Rust as examples.

By @FrustratedMonky - 3 months
Complexity is going to exist.

I've seen a lot of projects that try to 'make it simple', engineers go around saying "KISS". Hyper focused on simplifying everything. But they failed.

They only realize later that by simplifying, they have just shoved the complexity into some corner, and never dealt with it head on, and it just corrupts everything.

It's like cleaning house, and you just shove it all in the closet. Does it mean you are really neat? Is your life really simple?

It's like squeezing a water balloon. The complexity is going to bubble out and break somewhere. But you aren't in control of how it breaks.

So, just acknowledge that not everything can be 'simple' and deal with complexity.

By @tekla - 3 months
I was unaware that software devs encompassed all of engineering.
By @kkfx - 3 months
IMVHO the problem lies in "specialization": in the past, most technicians in any field had a certain general culture and comprehension of the world; these days they are so "specialized" as to be unable to see the big picture even at CHILDISH levels.

As a result, no matter the skills, if you do not know the world you are working in, it's only by chance that you design something good for that world. Developers MUST KNOW the big picture.

By @davedx - 3 months
Clickbait title, IMO
By @simpaticoder - 3 months
Software is an inherently chaotic space. There are roughly 2^80,000 possible states for a ~10KB program. Computer science is not engineering, just as physics is not mechanical engineering. Both CS and physics have the privilege of working within tiny imaginary systems. Engineers do not. Humans are currently engaged in a privately funded search through an effectively infinite space for the patterns, methods, visualizations, and rules of thumb that can produce a binary that meets human-expressible boundary conditions. It is natural that some humans quail at this seemingly impossible, slow, arduous journey, and they reject engineering. It is also natural that some humans cling so tightly to a concrete approach that they cannot absorb new models. In a very real sense, as humans select software methods, software methods select humans.
By @bjornsing - 3 months
I’m not sure it’s fear of over-engineering. The biggest difference during my career has been the switch to the “saas model” where software is never done and there is no clear line between development and operations.
By @renewiltord - 3 months
Rigorously plan your own company. I won’t. Then we’ll just meet each other in the market. If your thing is so good, people will buy your thing.
By @readthenotes1 - 3 months
Time/space/money and similar triads leave off one of the more important dimensions: quality. It's the iron tetrahedron.
By @taneq - 3 months
Not in any actual engineering field. There it's being killed the traditional way, by 'do what you did last time' and 'meh it'll be fine'.
By @iancmceachern - 3 months
It's common on HN, and in things that are posted here, to think of or refer to Comp Sci and devs as the whole of engineering.

This title should be:

"Fear of iver-engineering has killed software engineering altogether"

By @ravenstine - 3 months
> Before the 2000s [...] [i]t was bad. Very bad. Projects were always late, too complex, and the engineers were not motivated by their work.

Though I've understood this to be true, it's not a problem unique to that era.

What was perhaps more unique to that era was that there was less room for bad software, and software businesses were more directly impacted by bad software.

I would argue that there's little to no objective evidence that the industry was actually made better by Agile-inspired methodologies. If anything, methodologies served as a means to distribute blame and, incidentally, allow bad software to continue to be written.

This phenomenon probably wouldn't have ended well if it weren't for hardware picking up the slack and the ever decreasing standards users have for their software. Today, everyone I know expects the apps and websites they use to be broken in some way. I know that every single effing day I run into bugs I consider totally unacceptable and baffling. No, I'm not making that up. I'm serious when I say that I run into bad software every day. Yet we've normalized bad software, which begs the question of what these artificial methodologies like SCrUM are actually for.

> To make things worse, engineers took Donald Knuth’s quote “Premature optimization is the root of all evil” and conveniently reinterpreted it as “just ship whatever and fix it later… or not."

People should stop listening to people like Knuth and "Uncle" Bob Martin as gods of programming bestowing commandments unto us.

> I do think the pendulum has gone too far, and there is a sweet spot of engineering practices that are actually very useful. This is the realm of Napkin Math and Fermi Problems.

> Developers must ship, ship, and ship, but, sir, please don’t bother them; let them cook!

I don't think it's a pendulum. This phenomenon is real, but I've just as often seen teams of developers ruled by inner circles of "geniuses" who either never ship anything that valuable themselves or only ship horribly convoluted code meant to "support" the rest of the peon developers.

These issues are less a reaction to something someone like Knuth said and more to do with businesses and teams that make software failing to understand what competence in software engineering actually means. Sure, there's subjectivity to how competence is defined in that domain, but I'll just say that I don't consider either YOLO or geniuses to be a part of that.

> Fermi problems and Napkin Math [...]

I honestly don't get what the author is trying to achieve with the rest of the article. Perhaps that engineers trying to do actual engineering should use math to approach problems? I guess that makes sense as a response to YOLO programming, but effectively just telling people to not YOLO it really doesn't address the organizational problems that prevent actual competent engineering from taking place. People didn't forget to use math; they're disincentivized from doing so because most companies reward "shipping" and big egos.

By @tomohawk - 3 months
This seems like it's aiming at something, but missing.

My take as an engineer (not a PE, but have the degree) is that engineering mindset is quite a bit different than computer science mindset, which is quite a bit different than technician mindset.

Each has their strengths and weaknesses. Engineering is pragmatically applying science. Computer science has more of a theoretical bent to it - less pragmatic. Technicians tend to jump in and get stuff done.

Especially for major work, I'll do paper designs and models. The engineers tend to get it, but the computer scientists tend to argue about optimal or theoretical cases, while the technicians are already banging something out that may or may not solve the problem.

More recently (past 5-10 years), I've seen a notable lack of understanding from new programmers about how to do a design. I'm currently watching a sibling team code themselves into a corner due to lack of adequate design. They'll figure it out in a few months, but they're "making progress" now and have no time to waste.