August 3rd, 2024

"We ran out of columns" – The best, worst codebase

The author reflects on a chaotic codebase, highlighting challenges with a legacy database and a mix of programming languages. Despite flaws, it fostered creativity and problem-solving, leading to notable improvements.

The author reflects on their experience working with a complex and chaotic codebase, which they describe as both the best and worst they have encountered. The legacy system's database, particularly the merchants table, exemplified the challenges faced, as it ran out of columns, leading to the creation of a second table, Merchants2. The database was central to the system's operations, with various tables interconnected, including a notably simple SequenceKey table used for generating IDs. The login system relied on a manually maintained calendar table, and the employees table was reset daily, causing temporary access issues.

The codebase itself was a mix of VB and C#, with numerous JavaScript frameworks integrated, often with custom modifications. The author highlights the chaotic nature of the code, including a shipping application with a convoluted structure and a history of unresolved bugs. Despite its flaws, the environment fostered creativity and problem-solving, as developers carved out their own solutions within the mess. A standout improvement was made by a senior developer who optimized a slow merchant search page, showcasing the potential for innovation in a disorganized system.

Ultimately, the author expresses nostalgia for the direct connection to users and the fast feedback loop that characterized their work in this imperfect codebase, contrasting it with more structured but less engaging environments encountered later in their career.

AI: What people are saying
The comments reflect a shared experience of dealing with chaotic and legacy codebases, highlighting both frustrations and unexpected benefits.
  • Many commenters recount their experiences with poorly organized code, often lacking version control and documentation.
  • Despite the challenges, some found creativity and problem-solving opportunities within these messy systems.
  • Several stories illustrate the consequences of legacy systems, including bugs and inefficiencies that arise from outdated practices.
  • There is a sense of nostalgia and camaraderie among developers who have faced similar struggles in their early careers.
  • Some comments suggest that while messy code can be frustrating, it can also foster innovation and adaptability in problem-solving.
79 comments
By @hleszek - 2 months
When I started at my first company, they had a very complex VB application running at dozens of customer sites around the country, each with its own particular needs, of course. There were a LOT of global variables (with seemingly random four-uppercase-letter names) controlling everything.

At some point, the application had some bugs which would not appear when it was run in debug mode in Visual Studio. The solution was obvious: install Visual Studio on site for each customer and teach the users to run the app in debug mode from Visual Studio. I don't know how they convinced the users to do this, or how they managed the licensing, but it was done.

What happened next was even worse.

There was no version control, of course. The code lived on a shared disk on the company's local network, copied into multiple folders, each holding its own version, with no particular logic to the naming: V1, V2, V2.3, V2a, V2_customer_name, V2_customer_name_fix, ...

After that, when a customer had a problem, the programmer went on site to debug and modified the code there. If the bug impacted other customers, we had to dispatch someone to each of them to copy/edit the code. But if the problem was minor, the code was just modified on site, and probably saved to the shared folder in some new folder.

What happened next was to be expected: there was no consensus on what the final version was, each customer running a slightly different one, some still carrying bugs that had been fixed for others years before.

By @Nexialist - 2 months
My worst codebase story:

In my first real job, I worked for a company that maintained a large legacy product programmed in a combination of COBOL and Java.

In order to work on the Java side of the product, you checked out individual files from source control to work on, which 'locked' the files and prevented other developers from checking out the same files. This functionality was not part of our actual source control system, but was instead accomplished with a series of csh shell scripts you could run after ssh'ing into our development server.

Each of our customers had a 'master' jar file that represented the actual final compiled product (a jar file is really a zip archive, which bundles together the compiled Java class files).

Once you had finished implementing your code changes, you ran another set of scripts which found the master jar file for each customer, unzipped it, copied the compiled files from your local machine into it, and zipped it back up again. Finally, the source control lock was released.

This means, effectively, that the codebase was never compiled as a whole at any point in the process; instead, we just manually patched the jar files over time with individually compiled class files.
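
For the curious, the patching step amounts to something like the sketch below - Python standing in for the original csh scripts, with invented paths and names:

    import shutil
    import tempfile
    import zipfile

    def patch_jar(master_jar: str, compiled_dir: str) -> None:
        # A .jar is just a .zip: unpack the customer's master jar,
        # overlay the freshly compiled .class files, and zip it back up.
        # No full build of the codebase happens anywhere in this flow.
        with tempfile.TemporaryDirectory() as tmp:
            with zipfile.ZipFile(master_jar) as jar:
                jar.extractall(tmp)
            shutil.copytree(compiled_dir, tmp, dirs_exist_ok=True)
            base = master_jar.removesuffix(".jar")
            shutil.make_archive(base, "zip", tmp)  # creates base + ".zip"
            shutil.move(base + ".zip", master_jar)

    # e.g. patch_jar("customers/acme/master.jar", "build/classes")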

Over the years, small errors in the process allowed a huge amount of inconsistencies to creep into the codebase. Race conditions would allow two developers to lock the same file at once, or a developer would change a class that was a dependency of some other code that somebody else was changing. Sometimes code changes would make it into some of the customer jar files, but not others. Nobody knew why.

It took a small team two years to migrate the entire codebase to git with proper CI, and a huge chunk of that time was reproducing a version of the codebase that actually compiled properly as a whole. After the project was finished, I resigned.

By @dmd - 2 months
Probably some of the worst code I ever worked on was a 12k+ line single file Perl script for dealing with Human Genome Project data, at Bristol-Myers Squibb, in the late 1990s.

The primary author of it didn't know about arrays. I'm not sure if he didn't know about them being something that had already been invented, or whether he just didn't know Perl supported them, but either way, he reimplemented them himself on top of scalars (strings), using $foo and $foo_offsets. For example, $foo might be "romemcintoshgranny smithdelicious" and $foo_offsets = "000004012024", where he assumes the offsets are 3 digits each. And then he loops through slices (how does he know about slices, but not arrays?) of $foo_offsets to get the locations for $foo.
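
If that is hard to picture, here is a minimal sketch (in Python rather than the original Perl) of the scheme; the apple names come from the comment above, and the function is invented for illustration:

    values = "romemcintoshgranny smithdelicious"
    offsets = "000004012024"  # zero-padded, fixed-width 3-digit offsets

    def get_item(i, width=3):
        # Slice the i-th fixed-width offset out of the offsets string.
        start = int(offsets[i * width:(i + 1) * width])
        # The item ends where the next one begins, or at the end of values.
        nxt = offsets[(i + 1) * width:(i + 2) * width]
        end = int(nxt) if nxt else len(values)
        return values[start:end]

    print(get_item(1))  # mcintosh
    print(get_item(3))  # delicious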

By the time I was done refactoring that 12k+ was down to about 200 ... and it still passed all the tests and ran analyses identically.

By @MikePlacid - 2 months
Oh, the shipping! Now, from the customer’s point of view.

My youngest worked in a furniture chain over the summer. The store got sent a big, heavy furniture set from the central warehouse, which it actually didn't want. So they sent it back. The problem was that the system didn't allow them to say: please don't send this again. And the non-natural intelligence at the central base decided to send the set again. When my youngest started working there, they were loading this set to ship back for the seventh time.

Why 'loading'? Because no one could find a way in the program to send this ill-fated set back on the same truck that brought it. No one except my youngest, that is. He managed to find the combination of keys and checkboxes that allowed them not to unload the unwanted set and to ship it back on the same truck - and he immediately got a raise.

I suspect the set is still traveling. But now they only load-unload it at the central warehouse.

By @codeptualize - 2 months
What an amazing read: funny and nostalgic, capturing the duality of the perfect mess that still offers so much opportunity and room to make progress, with somehow all of it just chugging along.

I feel a lot of this is the difference between theory and practice. Sure, each of these things is bad, but a lot of them were probably the right choice at the time, and in a way most companies, and even most projects running for many years, end up with similar quirky processes, messes, and hacks.

It's the war stories of what used to be, often only told by the code and the database as the creators have long left.

When I look at our couple-year-old startup, we have some of these things: prototypes that just keep functioning and only get more attention when they break, manual processes that sort of make sense but also don't, integrations with systems that were built in the early 2000s (it works, but it ain't pretty). We fix many, but I'm sure some will survive way too long.

As software engineers, especially online, we like to discuss the ideal way to do things, best practices, testing strategies, redundancies, the whole nine yards. When time is of the essence and stakes are high, it all goes out of the window and you gotta just make it work in whatever way is possible to fight another day.

By @grishka - 2 months
The worst codebase I've ever worked on was the Telegram client for Android. I mean just look at this file, it's so huge GitHub gives up rendering it: https://github.com/DrKLO/Telegram/blob/master/TMessagesProj/...

Or this one, it implements EVERYTHING for rendering and interacting with a message in a single class, all non-service message types, all drawn manually on a canvas "for performance", and all input processed manually too: https://github.com/DrKLO/Telegram/blob/master/TMessagesProj/...

I even had the permission from the main developer to refactor some of it, but I never got around to actually doing it.

It's a miracle this app works at all.

By @lemme_tell_ya - 2 months
My first day at my first full-time programming gig, I was asked to look at some reporting job that had been failing to run for months. I logged in, found the error logs, found that the script needed a bit more memory assigned to it (just a tweak in the php.ini), and let the team lead know it should run fine that night. He was shocked: "Dude, if you just fixed that report you probably just got a promotion, no one has been able to figure that out for months." He was joking about the promotion, but my boss was just as shocked. I'd realize later that most of the other people on the dev team didn't like Linux and wanted to rewrite everything in .NET and move everything to Windows, so no one even tried with anything related to any of the Linux machines.

By @cmpalmer52 - 2 months
I once had a project to turn a customer's Excel VBA application into a real application (I used ASP.NET). He had been hacking on this Excel spreadsheet for something like 15 years. At one point I printed the VBA code because it was hard to navigate, and it came to 250+ pages, printed rather compactly.

The worst part wasn't the code itself (although it was bad), but the fact that there were so many abandoned paths in it, and there would be three or more different versions of crucial functions with the same name (or sometimes a different name but doing the same thing) in different places, sometimes all being called by different paths in the workflow. Or not being called at all.

And it was very math heavy (calculating natural gas pressures and flow rates through different sized pipes and fittings, and sizing regulators to meet certain loads). Think Excel formulas on cells that referenced 15-20 other cells, each of which was a formula of its own that referenced other pages and cells, some of which were filled by VBA. And that isn't even counting the VBA code full of brute-force solvers for multi-variable equations that used heuristics he'd worked out by trial and error (if the delta is over 1.5, add 5 to this variable, otherwise subtract 3, but if the delta was less than 0.5, add 1, and so on - it eventually converged or found no solution, but a binary solver did the same thing, only faster and easier).
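
For comparison, the binary (bisection) solver mentioned at the end is only a few lines; a sketch, assuming the function is continuous and changes sign over the bracketing interval:

    def bisect_solve(f, lo, hi, tol=1e-9):
        # Classic bisection: repeatedly halve the interval that brackets
        # the root, instead of nudging variables by hand-tuned amounts.
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if f(lo) * f(mid) <= 0:
                hi = mid  # the sign change is in the lower half
            else:
                lo = mid  # the sign change is in the upper half
        return (lo + hi) / 2

    # e.g. solve x^2 - 2 = 0 on [0, 2]:
    print(bisect_solve(lambda x: x * x - 2, 0.0, 2.0))  # ~1.4142135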

It took me and a junior developer several months, during which, of course, multiple change requests were going through to make it multiuser and secure.

Both my nightmare project and one that I’m very proud of once it was completed.

By @agentultra - 2 months
The ending is pure gold. Some of the best times in my career were working on a codebase for an application serving folks I knew on a first name basis and had had lunch with.

I could talk through pain points they were having, we’d come up with a solution together, I’d hack up a quick prototype and launch it just to them to try out. We’d tweak it over a couple of weeks and when it was good I’d launch it to all customers.

Messy, ugly code base. But it worked well because it wasn’t over-managed. Just developers doing development. Amazing what happens when you get out of the way of smart, talented people and let them do their work.

By @RadiozRadioz - 2 months
I built a system as horrible as this. One of my many terrible inventions was as follows:

Originally our company only did business in one marketplace (the UK). When we started doing business in multiple marketplaces, it came time to modify the system to cope with more than one. Our system assumed, _everywhere_, that there was only ever one marketplace. I had a plan: I would copy-paste the system, make an "international" version that supported many marketplaces, then transition over our original marketplace to the international version and shut down the old system. This way, everyone could keep working on the original marketplace like normal, they'd get the new marketplaces on the new system, and we'd do a clean cutover once ready.

It started out quite well. I got the international version working and onboarded our first new marketplace. The business was very happy, there was lots of opportunity in the new marketplaces. They asked that I delay the cutover from the old system and focus on developing things for these new marketplaces. After all, the old system still works for our original marketplace, we can move it over once our work is done on the new marketplaces. I said yes, of course!

It's now 5 years later, and it turns out there's a lot to do in those other marketplaces. To say that the system is split-brained is an understatement. When training new hires, we teach them the difference between a "product" and an "international product". When either system does something, it double-checks and cross-references with the other system via a colourful assortment of HTTP APIs. There are various database tables that we "froze" in the old system and continued in the new one, so you can tell whether an ID refers to the new system or the old one by whether it is above or below the magic number we froze at. We have overarching BI dashboards that query both systems to produce results, and they have to heavily manipulate the old system's data to fit the new multi-marketplace model, along with any other changes we made. Both systems are extremely tightly coupled: the old one is a ball and chain to the new one, but the new one is so tightly integrated into it that there's no hope of separating them now.

I've learned to say no since then.

By @gamepsys - 2 months
> All that remained were ragtag interns and junior developers.

For many people, their first job in software engineering involves the worst codebase they will ever deal with professionally, for this reason. The first job hires a lot of people with little or no experience. As soon as someone gains some experience, they can move on to better paying jobs, where there are better developers with better standards.

By @Twirrim - 2 months
Back about 15 years ago, I worked for a web hosting company that provided some sysadmin consultation services. Customer paid us, and I would take a look.

I had one customer who came back with the same request, slightly differently worded, every single month, and every single month I'd say the same thing. They had this site they were running that was essentially a Yellow Pages type site. They had a large set of companies with contact details, each with multiple business categories associated with it. You'd choose a category, and they'd return a list of matching companies.

The problem was the site was really slow. I took a quick look around and saw that all the time was lost querying the database. Taking a quick look at the schema, I discovered that their approach to categorisation was a TEXT column containing semicolon-separated 4-character strings. Each 4-character string mapped to a business category.

So when someone wanted to load up, say, all pest control companies, it would check the category mapping table, get the 4-character string, and then go to the companies table and do:

    SELECT * FROM companies WHERE categories LIKE "%PEST%"
So on every load of the site's main page type, it did a full-text search over the category field for every single record in the companies table.

I guess that's probably okay for a developer without real-world-scale data and real-world traffic counts to worry about. But they had lots of data in the database, and that category field could have dozens of categories against a company. As soon as they had more than about 4-5 simultaneous customers, performance started tanking.

I could never get them to accept that they needed to rethink the database schema. One month they were bleating about how it was possible that Google could search across a much larger amount of data, much faster. They really didn't like my answer, which amounted to "by having a sane database schema". All they were willing to do was pay over the odds for our most powerful server at the time, which had enough capacity to hold the entire database in memory.
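
A sketch of the kind of schema rethink being suggested (table and column names are hypothetical): the category lookup becomes an indexed equality match instead of a LIKE scan over every row.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE companies (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE company_categories (
            company_id    INTEGER REFERENCES companies(id),
            category_code TEXT,              -- e.g. 'PEST'
            PRIMARY KEY (company_id, category_code)
        );
        CREATE INDEX idx_by_category ON company_categories (category_code);
    """)
    # Indexed equality lookup instead of LIKE '%PEST%' over a TEXT blob:
    rows = db.execute("""
        SELECT c.id, c.name
        FROM companies c
        JOIN company_categories cc ON cc.company_id = c.id
        WHERE cc.category_code = 'PEST'
    """).fetchall()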

By @dave7 - 2 months
For online poker, one of the current database tools runs into this issue - PokerTracker 4 (PT4).

These tracker databases are usually used to generate a HUD - numerical readouts surrounding each player shown on the table. PT allows custom HUDs to be saved/exported/shared, and there is a cottage industry building and selling such HUDs. These HUDs can often bundle hundreds of custom stats - basically a subset of SQL queries; a simple example would be "times_raised / opportunities_to_raise WHERE position = 'button'", that sort of thing.

For performance reasons these custom stats are cached, which obviously makes some sense. However, each cached stat creates a new column in the custom_cache table of PT4's current database, a PostgreSQL 9.0 backend. If you play mostly Heads-Up SNG or the 3-handed Spins, it's actually quite easy to go over the (IIRC) 4096 column limit there by purchasing and importing one too many fancy HU HUD packages!

This completely borks PT4; it can no longer open - and players can no longer make money! In my previous work, I supported many a player and fixed this "too many columns" error numerous times (by deleting enough custom HUDs from their account until PT4 could start up correctly once more). Funny to see this pop up elsewhere!

By @tryauuum - 2 months
Would be great to work in such a company as a Linux guru.

There are so many entangled services and machines that you feel like Indiana Jones. You ssh into a machine and feel century-old dust beneath your footsteps. And you never know what you will find. Maybe a service which holds the company together. Maybe a CPU-eating hog which hasn't done anything useful for the last 3 years.

I don't enjoy writing new code much. But in such an environment, even with my limited skills, I can make decent improvements, especially from a security point of view. Feels great.

By @vstollen - 2 months
This made me think of my first job. I was the sole developer on a project because the old developer left. Nothing was documented and nobody knew why things were designed the way they were.

We had no code reviews, no design docs, no tests, nothing. We made the changes the way we thought they were right and would git pull them onto the production server.

After I struggled to get productive for the first four months, my manager went on a four-week Christmas vacation. In a moment of frustration, I seized the opportunity and rewrote the whole project from scratch. I don’t remember if my manager ever noticed, but that was the moment I finally got productive.

By @PreInternet01 - 2 months
> I miss that direct connection. The fast feedback. The lack of making grand plans.

There's no date on this article, but it feels "prior to the MongoDB-is-webscale memes" and thus slightly outdated?

But, hey, I get where they're coming from. Personally, I used to be very much schema-first, make sure the data makes sense before even thinking about coding. Carefully deciding whether to use an INT data type where a BYTE would do.

Then, it turned out that large swathes of my beautiful, perfect schemas remained unoccupied, while some clusters were heavily abused to store completely unrelated stuff.

These days, my go-to solution is SQLite with two fields (well, three, if you count the implicit ROWID, which is invaluable for paging!): ID and Data, the latter being a JSONB blob.

Then, some indexes specified by `json_extract` expressions, some clever NULL coalescing in the consuming code, resulting in a generally-better experience than before...
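
A minimal sketch of that layout, with illustrative table and field names (SQLite supports indexes over json_extract expressions; JSON stored as plain text behaves the same for this purpose):

    import sqlite3

    db = sqlite3.connect(":memory:")
    # Two columns: an ID and a JSON blob (plus the implicit ROWID).
    db.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, data TEXT)")
    # An expression index so lookups on a JSON field avoid a full scan.
    db.execute("CREATE INDEX idx_docs_status ON docs (json_extract(data, '$.status'))")
    db.execute("INSERT INTO docs (data) VALUES (?)",
               ('{"status": "sold", "price": 20750}',))
    rows = db.execute(
        "SELECT id, json_extract(data, '$.price') FROM docs "
        "WHERE json_extract(data, '$.status') = 'sold'").fetchall()
    print(rows)  # [(1, 20750)]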

By @collinmanderson - 2 months
I’m seeing a lot of comments about terrible code bases, but I do think there’s something really beautiful here, something like “worse is better”:

> This may sound like a mess to you. But it was remarkably enjoyable to work in. Gone were the concerns of code duplication. Gone were the concerns of consistency. Gone were the concerns of extensibility. Code was written to serve a use, to touch as little of the area around it as possible, and to be easily replaceable. Our code was decoupled, because coupling it was simply harder.

By @jkestner - 2 months
Early on in my career, at a large company, I encountered someone who took “codebase” a little too literally. At the time every department had their own developers, sometimes just employees who had an aptitude for computers.

This one guy established himself by making an Access database for their core business, and when the web became a thing, built a customer site. But not on it—in it. He simply served ASP pages directly from the database, inserting dynamic content in queries. When I was asked to help improve their terrible design, I was forced to untangle that unholy mess of queries, ASP (new to me) and HTML. It was easiest to write all the HTML and insert their ASP right before I sent the files back (because I wasn’t given access to their DB/web server). Thinking “I could do better than this” got me into programming.

He was a Microsoft-everything head. He finally went too far when he presented a new web interface starring a Clippy-like parrot built on Microsoft's DirectX avatar API. The executives were unimpressed, and then I noted that 20% of our customers couldn't use his site. (I probably still had a "best viewed with IE" badge on the main site, lol)

By @nicopappl - 2 months
Wow, this is exactly how I felt about my first job as well. The old codebase no one wants to touch but that somehow works. The quite nice later-on additions. Even the "build a pipe-separated string using reflection and a 150-class hierarchy" rings a bell.

The old codebase was a hairy ball of Scala using a very outdated version of a (now) infamous actor framework - before they figured out that untyped messages rather undercut one of the major selling points of Scala.

The code was readable, but the authors had this strange idea that every little piece of logic should be part of its own "actor". An actor is pretty much equivalent to a class, and each one had its own little file. With this many classes, each with a very specialized purpose, you ended up with 90-character identifier names.

To understand what would be a single function in a normal codebase, you had to dive through half a dozen files across several repositories to piece together the logic. Generally, at the end, you'd find that most of the code was about passing around a value, and there was this one file where actual logic was applied to it.

It wasn't that awful. The real problem was that it was impossible to change anything: no documentation, no tests, no bug tracking, not even any PR process, laconic commit messages, no wiki pages. And the actor framework made it very difficult to add tests. But later developers managed it pretty well: they drew fences around the old services, with an HTTP API to communicate with them. And all later additions were part of different services that were very cleanly and consistently designed.

By @mrighele - 2 months
> Now the story I heard at the time was that once upon a time SQL Server didn't support auto-incrementing ids. This was the accepted, correct answer.

At a company where I used to work, they heard the same rumor, so instead of using identity columns or sequences, they kept a table with a number of "available" ids (one row per id). Whenever a unique id was needed, the table would be locked, and an id selected and marked as used. If there were no ids available, more ids would be added and then one used. A scheduled job would remove ids marked as used from time to time. Note that there was a single "sequence table", shared among all of the entities.
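
A sketch of that pre-allocated id pool (table and column names are guesses, and the original locked the whole table around each allocation):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE id_pool (id INTEGER PRIMARY KEY, used INTEGER DEFAULT 0)")

    def next_id(batch=100):
        # Grab the first unused id; SQLite's write lock stands in for
        # the original's whole-table lock in this toy version.
        row = db.execute("SELECT id FROM id_pool WHERE used = 0 LIMIT 1").fetchone()
        if row is None:  # pool exhausted: top it up, then retry
            (top,) = db.execute("SELECT COALESCE(MAX(id), 0) FROM id_pool").fetchone()
            db.executemany("INSERT INTO id_pool (id) VALUES (?)",
                           [(top + i,) for i in range(1, batch + 1)])
            row = db.execute("SELECT id FROM id_pool WHERE used = 0 LIMIT 1").fetchone()
        db.execute("UPDATE id_pool SET used = 1 WHERE id = ?", (row[0],))
        return row[0]

    print(next_id(), next_id())  # 1 2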

That was not even the weirdest part. That id was unique, but NOT the primary key of the entity, only part of it.

The structure of the database was fairly hierarchical, so you had for example a table CUSTOMER in 1-to-many relation with a USER table, with a 1-to-many relation with an ADDRESS table.

While the primary key of the CUSTOMER table was a single CUSTOMER_ID column, the primary key of the USER table was (CUSTOMER_ID, USER_ID), and the primary key of the ADDRESS table was (CUSTOMER_ID, USER_ID, ADDRESS_ID). There were tables with 5 or 6 columns in their primary key.

By @surfingdino - 2 months
I had the dubious pleasure of working with similar codebases and devs. I'll remember one of those guys forever, because whenever he wanted to work on a new branch he would clone the repo, make changes to the master branch, and push the code to a new repo numbered repo0001, repo0002, ... He refused to change his ways, because "I have a PhD so you are wrong".

Another WTF moment was the realisation that MS SQL Server does not support a BOOLEAN type. That made porting code fun.

By @jbkkd - 2 months
Sounds awfully like my first job, with the addition of not having _any_ sort of tests - functional, integration, or unit. Nothing.

A few months in, when I approached the CTO and asked if I could start writing a test framework, he deemed it a waste of time and said "by the time you'd commit the test, it would go out of date and you'd need to rewrite it".

Naturally, the build would break about 5 times a week.

Boeing was a major customer of this system, so when shit hit the fan at Boeing a while ago, I wasn't surprised.

By @bdcravens - 2 months
A couple of weeks ago I had a flat on the way to the airport. It was a total blowout, and my car doesn't include a spare. We were already over budget on our trip, so I had the car towed to the closest tire shop and had them put on the cheapest tire that could get me to the airport. I know I'll need to replace other tires, as it's an AWD, and I know it's not a tire I really want. I made a calculated choice to make that a problem for future me due to the time crunch I was under.

Programming is a lot like this.

By @Waterluvian - 2 months
The first line really hits me hard. There's something so incredibly freeing about being a kid and doing stuff like coding. There are simply no expectations. Even the smallest project felt like such an achievement. But now I code professionally and I don't know how to turn off engineering brain. I don't know how to be okay doing something poorly, on my own terms.

By @compiler-guy - 2 months
It's hard for those who came into the discipline in the past twenty years to realize just how much things have changed around version control and building.

Joel Spolsky released the "Joel Test" for determining whether the software team you were interviewing with had good practices in 2000. One of the requirements was that they used version control. Not all that many teams actually did, especially during the dotcom craze.

Today, passing the Joel Test is table stakes, and it's the rare shop that doesn't. But it took years for that sort of thing to become ubiquitous.

https://www.joelonsoftware.com/2000/08/09/the-joel-test-12-s...

By @ChilledTonic - 2 months
Man, I remember my Gilfoyle. We had an employee who would do one-off programs for anyone with a use case - but ours wiped his computer before giving it back, so frequently we'd get tickets for software we'd never heard of that was somehow mission critical, and we'd have a few days to spin up a new version on the spot.

Probably some of the most fun I've ever had writing software was making Gilfoyle-2s.

By @Snacklive - 2 months
This hits close to home. At my current job we have a similar experience. We are building an Android app, and the codebase is probably the same age as me (25). It started as a Windows Phone app that was later ported to Android; you can easily find files and segments of the codebase that were auto-generated by some C#-to-Java code converter.

In the codebase itself you can see the evolution of code styles: older views and the core of the system are written in a very old and very stateful manner, while newer parts of the application use modern architecture practices. We have two local databases that load configuration into the app for our different clients and their requirements, and a global state is loaded at all times to check which business logic to follow.

I'm a junior, a few years into Android programming, and while it's sometimes frustrating to deal with some nasty bug because a random variable is updated for reasons only God knows, I think the experience it's giving me is something I'm going to appreciate years down the road.

By @throwaway93982 - 2 months
When it comes to building things, the outcome is dictated by the constraints - but not in the way most people realize.

Constraints affect outcomes in (at least) three ways:

  - by absolute limitation (you cannot violate these)
  - by path of least resistance (the constraint makes X easier)
  - by strategy (creative use of knowledge & skills leads to novel solutions)
---

There is an injured person on top of a mountain, and they need medical attention. You need to go up the mountain, get them, and bring them down.

You only have so much strength/endurance, so nothing you do will ever be faster than what your body is capable of. You need to get up and down in less than two hours. You need to carry an injured person. The mountain has two slopes: a long gradual one, and a short steep one.

Most people would climb the long gradual slope, because it's the path of least resistance. But it will take 4x as long to climb up and down it. Climbing straight up the steep slope would be incredibly tiring, and unsafe to bring someone down.

You can, however, zig-zag up and down the steep hill. It will take more time than going straight up, but less than the long way; you will be less tired, and it's safer to bring someone down the hill.

---

Constraints can be good and bad. Good use of constraints can allow you to get something done effectively. Bad use of constraints leads to failure. So it's important to have someone who can utilize those constraints effectively.

Vision, leadership, and wisdom are more important than the people, skills, materials, or time involved. The former determine the quality of the outcome more than the latter.

By @MBCook - 2 months
This reminds me of my first job at a very small shop. Here’s two stories:

The calendar table struck a chord. We had one for holidays. One system needed to know when they were in order to calculate pay dates. Except once a year it would "run out", and someone would have to go add the next year's worth after a bad calculation was spotted.

The second time I was told to do it, I put in 10 years worth. The company didn’t survive long enough to need more.

My first “big” project was actually that pay date code. Every once in a while the server would hang, and people had deduced the date calculation was the problem. But no one knew why. And it wasn’t frequent enough to be worth taking time from the other two programmers. But I was new and thus “spare”.

After not being able to find any errors, I brute-forced it: I ran that calculation for every day of the year for every pay type (weekly, monthly, bi-monthly, etc.) and it quickly got locked in an infinite loop. Armed with the "bad" data that triggered it, it was easy to fix and to provide tests proving it would never happen again. I don't remember what the bug was, exactly.

By @bubblebeard - 2 months
This reminds me of my first internship. The company I worked at backed everything up on CDs. I was tasked with writing a new service for their intranet that indexed the content of all the CDs, so staff members could look up which CD contained a certain file.

I wrote a little program to scan the CDs as I inserted them into my computer, index the data into a database I had created, and then label the CD.

It wasn’t exactly exciting work but I still miss those days sometimes, everything was new and unexplored.

By @omoikane - 2 months
This codebase sounds like a haunted graveyard[1], where everyone just fixes their local corner of things and avoids the risk of untangling the existing mess.

Not needing to conform to some company-wide standard is probably really pleasant while it lasts, but every such effort adds to the haunted graveyard, and the lack of consistency will eventually come back to bite whoever is still around.

[1] https://www.usenix.org/sites/default/files/conference/protec...

By @sanktanglia - 2 months
I'm currently deep in rewriting a 10+ year old C# monolith with thousands of lines of extra code that I was able to throw away, because most of it was written before optional arguments existed, so they made overloads for every permutation of arguments for every framework function.

By @EFruit - 2 months
Oh, those kinds of columns. I thought we were talking text columns, and I was about to relate.

I work at a small business. Despite computer software being about the literal opposite of our business (plants), the founder built an entire suite of interconnected tools that runs off MS BASIC for Xenix, on a single HP machine running SCO OpenServer. The machine has so many customizations, self-scheduling cron/at jobs, odd nooks for files, weird tweaked programs, and special conventions that if a server with a dedicated hostname qualifies as a pet (as opposed to cattle), I'd be THIS THING'S pet.

The system handled EVERYTHING. Accounting, payroll, pesticide management, inventory, attendance, business contacts, shipping label printing... all out of a bunch of terminal menus (which are actually text files with control codes that get `cat`ed out).

But by God, the most horrifying part of it all are those BASIC files. They're IMPENETRABLE.

Firstly, I don't believe this version of BASIC supports named functions or subroutines. At all. But that's fine. MS BASIC being what it is, the interpreter can only deal with a certain number of characters per logical line, and that includes data definitions.

This version of BASIC (like so many others) includes its own serialization format and record/file access scheme. You declare the layout of the data file you want, open that file, and BASIC will handle (most of) the rest.

So when the founder started hitting the internal line limit while defining the data file's fields, he would cut the names of the fields down to fit more on that one line. Over time `30 AS EMPLOYEENAME` became `30ASEMPLNAME`, which became `30ASEMNAME` which became `30ASAF(1)`.

Every cent we transact and every employee's timecard still flows through this old system, some staff even using bona fide Wyse terminals. To reiterate: this man was, first and foremost, a farmer. His contraption is terrifying, but it commands immense respect. It's lasted 30-some years of continuous tweaking and refining, and we have yet to replicate even half of its functionality. (Though there are other organizational issues making that difficult.)

On a personal note, aside from the calcified codebase and occasional spelling errors, it's a stellar business application. It's fast, mostly coherent, and keyboard-driven in such a way that experienced employees can navigate it faster than the terminal can refresh. We've been working for years to replace it, but at the same time, there's a lot our newfangled Angular+PHP+MySQL replacement could learn from it.

By @curiouscavalier - 2 months
This reminds me of working with a company that provided market-wide pricing data for a particular commodity across the US. They were the de facto aggregator of pricing for the sector and at the time I worked for one of their larger customers. We requested they add another vendor’s pricing in a particular region and received a response along the lines of “Sure, as soon as we can figure out how to add another entry. We’re at the maximum row count on the system.”

Needless to say, it gave my team a few days of flabbergast and speculation about what system they must have built on to hit a row limit at only 5 digits. And yet, for a non-tech industry, it was mostly working.

By @airstrike - 2 months
This is glorious. I can imagine people could write versions of this for every industry out there and they would all be equally fun to read.
By @EGreg - 2 months
There were a couple of times I was convinced that basic methods were as good as or better than highfalutin ones, or that got me thinking about solutions hewing closer to the basic method.

In 2014 or so, someone at a company told me that they just didn't do branches and merged everything into trunk. I agree that long-lived branches are bad, but no branches at all? Sure enough, this was OK - meaning branches could be kept and synced by a few people outside the main repo, but forks were never pushed to the repo.

Using table columns for data works, as long as you're building for an actual, specific business vertical and not just a general-purpose framework.

You never know when you’ll want to put an index on the data, and databases already have built-in mechanisms to select a subset of the fields, etc. You avoid all kinds of joins. If you need to extend the table, just start a new table with the same synthetic ID.

I would say, in tables where you don't even want global locks (e.g. sharded tables), just don't have an auto-incrementing ID. Instead, try a random ID or a UUID: insert it, then use it across tables.

In short, what was described here is actually good design.

By @dakiol - 2 months
Honestly, I like to work on such systems because:

- there is so much stuff to improve. There’s nothing better than the feeling of improving things with code (or by removing it)

- it's back to school again and anything goes. You can implement features in any way you want because of the constraints the system imposes. Granted, sometimes it's painful to add functionality

- there's usually no room for subjective topics like clean code and architecture. The most important thing with these systems is correctness (and that is usually an objective topic)

- nobody can blame you for something that doesn’t work. It’s always the fault of the legacy system

I wouldn’t recommend working on such systems to junior engineers, though.

I don't really like working on "perfect" codebases where everyone follows the same pattern, with linters, where if something breaks, it's because of your shitty code (because the codebase is "clean"). It's very frustrating and limiting.

By @alexpotato - 2 months
And weird code can lead to crazy outages.

I put together a thread of all the wild incidents I've seen over my many years of working as a FinTech SRE:

https://x.com/alexpotato/status/1215876962809339904

By @thayne - 2 months
I've seen something very similar to the SequenceKey table. But there wasn't just one of them; it was a common pattern. And each had 2 rows and 2 columns.

The reason we had it is that we had master-master replication, but without requiring the peer to acknowledge before committing a transaction. To avoid inconsistencies, we preferred one server for even ids and the other for odd ids. But when writing a new record, an auto-generated sequence would just give the next id without regard to which "side" the request was on. So we had a table to keep track of the next id to use for each side, where we incremented the next id by 2 each time a new id was allocated.
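
A sketch of that parity scheme with invented names (in the real system the increment ran inside the allocating transaction on each master):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE next_id (side TEXT PRIMARY KEY, value INTEGER)")
    db.executemany("INSERT INTO next_id VALUES (?, ?)",
                   [("odd", 1), ("even", 2)])

    def allocate(side):
        # Each side steps by 2, so "odd" and "even" ids can never collide.
        (value,) = db.execute(
            "SELECT value FROM next_id WHERE side = ?", (side,)).fetchone()
        db.execute("UPDATE next_id SET value = value + 2 WHERE side = ?", (side,))
        return value

    print(allocate("odd"), allocate("odd"), allocate("even"))  # 1 3 2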

It was a little weird, but it worked fairly well. Although we eventually changed the architecture and removed the need for those tables.

By @jancsika - 2 months
Oh man, this article reminds me of an article that was a parody of some horrid business logic.

Something like this: a "genius" programmer was somehow, for some reason, using svn commits as a method dispatcher. Commit ids were sprinkled throughout the codebase. A new hire broke the entire system by adding comments, because comments weren't compatible with the bespoke svn method dispatcher.

Does anybody remember this article? I really want to read it again.

By @gouggoug - 2 months
I worked many years with an open source eCommerce platform called Magento[0] which, at the time, used something called the "Entity Attribute Value" (or EAV) system[1].

One particularity of the EAV system is that you end up having tables with hundreds (and growing) of columns. It made Magento itself extremely hard to work with and optimize. I hope they have moved away from this model since.

To be fair, this was before nosql databases were a thing.

[0]: https://en.wikipedia.org/wiki/Magento

[1]: https://en.wikipedia.org/wiki/Entity%E2%80%93attribute%E2%80...

By @philjohn - 2 months
I can second the sequence key table being there because auto-increment wasn't available on some databases. I ran into the same thing some years back at a company whose software dated back to the late '60s.

By @cowsandmilk - 2 months
He acts like the sequence key is odd, but it's quite normal in the database world.

https://www.postgresql.org/docs/current/sql-createsequence.h...

By @tbm57 - 2 months
Working on something like that would drive me absolutely batty. I am happy you were able to find your zen in the middle of that chaos. This post truly speaks to the human condition.

By @brunoarueira - 2 months
Not long ago, I worked on 4 projects which were literally copies of the first one, with parts changed for each customer and its internal users according to their use of it. So the main problem was that a bug found in one project existed in the other 3 as well, and I had to fix the same bug four times!

Codebases like this one, or the OP's, are good for learning how not to do certain things.

By @rendall - 2 months
I'm glad OP was able to keep a good sense of humor about it. Discoveries like the nested classes of empty methods have sent me into teeth-gnashing fury, followed by darkness and despair. One such experience is why I would rather change careers if my only option were to build on the Salesforce platform, for instance.

By @rk06 - 2 months
In my first job, I was part of an offshore ops team that maintained a lot of code no one (onshore or offshore) wanted to maintain. But like all ops code, it was business critical.

This included a project called IefParser; its job was to parse incoming data files and put the data into databases. It was a serious project. Like, really serious. The input format came with a full manual, including a section highlighting the changes from previous versions. I have not seen that level of detail before or since.

And it was also very old - written a year or two after I was born, and never rewritten since. So the software side of things was less serious and more hacky.

The core code was written in C. The database was Oracle. The code was driven by a Perl batch script triggered via cron job. And all of that ran on IBM AIX (Unix). Yes, not Windows or Linux - it ran on Unix.

The sheer gap between the meticulous software requirements and the actual software was mind-boggling.

Some fun facts:

- The C code could not be compiled on Windows. You needed to log in to the dev server via PuTTY and run a makefile to do it.

- The Perl code was not checked in to the repository. It was also different for each environment.

- The Unix version had a vi editor which, for some reason, didn't tell you whether you were in edit mode or command mode. WTF! And no other editor existed there. Apparently I was the only one in India who had bothered to learn vi, as no one else in India could reliably edit files.

- The cron job schedule was also not checked in. After a "great learning opportunity", I decided it indeed needed to be checked in.

- I once had the horror of fixing bugs in the Perl script and the cron job. Apparently global variables are the state of the art in Perl.

- The cron job always failed on New Year's Day, because the batch script used MMDD-formatted folders for temp file storage and was unable to understand that 0101 comes after 1231. I volunteered to track down the issue and fix it for good. M3 shot it down, saying "we don't want to take risks on such a mission-critical service".

By @arnorhs - 2 months
I really love these kinds of stories. Does anybody know if there's a collection of similar stories/code bases anywhere?

I guess there is The Daily WTF, but that's mostly bugs. Probably good enough, though.

By @odyssey7 - 2 months
It’s a cute story, it made me laugh.

If this story reminds you of the codebase you work on, don’t let its romanticism deceive you into thinking that suffering is somehow actually good.

By @deathanatos - 2 months
Two early databases I worked on.

The first contained monetary values. These were split over two columns: a decimal column holding the magnitude of the value, and a string column containing an ISO currency code. Sounds good so far, right? Well, I learned much later (after, of course, having relied on the data) that the currency code column had only been added after expanding into Europe … but not before expanding into Canada. So when it was added, there had been mixed USD/CAD values, but no currency code column to distinguish them. When the column was added, they just defaulted everything to USD. So a USD value could actually be CAD - you "just" needed to parse the address column to find out.

Another one was a pair of Postgres DBs. To provide "redundancy" in case of an outage, there were two such databases. But no Postgres replication strategy was used between them; rather, IIRC, the client did the replication. There was no formal specification of the consensus logic - if it could even be said to have such logic; I think it was just "try both, hope for the best". Effectively, this was a rather poorly designed multi-master setup. They'd noticed some of the values hadn't replicated properly and wanted to know how bad it was: could I find the places where the databases disagreed?

I didn't know the term "split brain" at the time (that would have helped!), but that's what this setup was in. What made pairing the data worse is that, while every column containing text was a varchar, IIRC the character set of the database was just "latin1". The client ran on Windows, and it was just shipping the values from the Windows API "A" functions directly to the database. Windows has two sets of APIs for, like, everything involving a string: an "A" version and a "W" version. "W" is supposed to be Unicode¹, but "A" is "the computer's locale", which is nearly never latin1. Worse, the company had some usage on machines set to, say, the Russian locale, or the Greek one. So every string value in the database was, effectively, in a different character set, and nowhere was it specified which. The assumption was that the same bytes would always get shipped back to the same client, or something? That wasn't always the case, and if you opened a client and poked around enough, you'd find mojibake easily. Now remember, we were trying to find mismatched/unreplicated rows - and some rows were mismatched in character encoding only: the values in the two DBs were logically the same, just encoded differently. (Their machines' Python setup was also broken, because Python was ridiculously out of date - I'm talking 2.x where the x was too old; this was before the problems of Python 3 were relevant. Everything in the company was C++, so this didn't matter much to the older hands there, but… God, a working Python would have made working with character set issues so much easier.)

¹IIRC, it's best described as "nearly UTF-16"

By @ddgflorida - 2 months
In one of my early career jobs, we had applications that were so buggy we had 1 or 2 developers dedicated to logging into customers' databases and fixing data.
By @EGreg - 2 months
I just wrote an answer regarding data denormalization:

https://stackoverflow.com/a/78831591/467460

It is from the point of view of an app developer, not a DBA, but should be relevant to most people on HN.

By @trte9343r4 - 2 months
Slicing columns into multiple tables is a fairly common type of sharding - sort of the SQL way to do a columnar store.
By @prudentpomelo - 2 months
It's honestly relieving to read all these stories, because I feel like I am in the middle of this right now. Our legacy product is a mishmash of Perl and PHP that was probably started in the early 2000s. Which I wouldn't feel bad about supporting, if it weren't for the new system that has been bolted on top of it. The previous team that started the migration is all gone, and now we are halfway between two systems that we both have to support. On top of that, my manager is afraid to say no to our eccentric CEO, so we are flooded with new feature requests but can never get anything done, because we spend so much time chasing down problems in the old system. Is there any sane way out of this? I just feel like I am white-knuckling it through every day.

By @deterministic - 2 months
I loved reading this article. It made me laugh and cry and rage-scream all at the same time. It is a true miracle that most things work most of the time. Don't look behind the curtains, folks! It might give you nightmares.
By @xpressvideoz - 2 months
Elasticsearch and OpenSearch have a similar issue. They have a soft limit of 1000 on the number of "searchable" fields (called "mappings") a table can have (by the way, a table is called an "index" in their terminology - how confusing!). We ran into this problem because we tried to jam the logs from every single microservice into a single table. Each microservice had a different log format, so after combining them all, the number of distinct fields surged. We did it this way because the operations team wanted to maintain a single table in the name of simplicity, a reason I can't completely fathom to this day. We were surprised because we had thought Elasticsearch and OpenSearch were some kind of magic box that could somehow ingest all the data we put in and still be perfectly performant. After that incident, we introduced a common log format that applies to every microservice.

By @greenthrow - 2 months
This is obviously a work of fiction. Not because there isn't bad code similar to this out there, but because the way it is told and many of the details don't add up, and it has all the hallmarks of fabrication.
By @summerlight - 2 months
My favorite, obligatory reference whenever we discuss the horrors of legacy codebases: https://news.ycombinator.com/item?id=18442941

By @interactivecode - 2 months
Sure, it might be a mess, but at least it's purpose-built. I love that kind of performance gain. Honestly, most companies die before purpose-built code like that becomes a problem.

By @trustno2 - 2 months
You ship your org chart.

If you have a messy app, you have a messy organisation.

By @jabart - 2 months
This codebase sold for $4.3 billion. That table was cursed.

By @qxmat - 2 months
Jira's now-discontinued server version had a sequence table to stop you from sharding it. It also made disaster recovery from a hard shutdown awful. I have nothing good to say about Atlassian.

By @wyclif - 2 months
Jimmy Miller, the drummer and record producer? Anyway, that's who I thought of when I first started reading this.

By @peteforde - 2 months
Man, this post is gold. I think we all have our own version of the best worst codebase.

My take on this story was a backend "deal capture" system for an emissions trading desk about 25 years ago. The application was written in classic ASP, T-SQL and VB6 COM components. Oh, and VBScript. It only ran in Internet Explorer 6.

The heart of the machine was a massive, unholy stored procedure. It was about 35kb, and what it did was generate a very long string which was - you guessed it - a gigantic SQL statement with countless JOINs across many temporary tables.

The business logic was deemed sophisticated enough that it was decided we needed a workflow engine, and some genius decided that we should use an expensive, proprietary, enterprise-y tool which used Microsoft Exchange Server as its datastore and operating environment. This was as terrible an idea as it sounds, because the "API" for talking to this engine was single-threaded and massively bottlenecked by whatever made Exchange Server suck. Most significantly, it could only be properly accessed by one very specific version of a DLL, requiring that a server setup follow an agonizingly specific software installation procedure that was scientifically proven to be impossible for IT to execute, because they would always jump to the end and install the latest service packs, and suddenly the DLL we needed would be impossible to access without wiping the machine and starting over. This is what people meant by the term "DLL Hell".

The most interesting aspect of this system was the client UX. This was 1999, and Ajax was literally not a thing yet. However, the precursor to XMLHttpRequest was an IE-only thing called ActiveXObject. You could use VBScript to wire up form elements to be updated by an ActiveXObject in the way we'd consider familiar today, except that instead of a form GET/POST, your ActiveXObject would talk to a VB6 COM+ DLL running inside Microsoft Transaction Server. Looking back on it now, the whole thing was simultaneously an abomination and years ahead of its time. The security profile would likely give a modern developer night terrors; I'm pretty sure that if you were on a page served by an IIS instance that had COM+ registered, your ActiveXObject could call methods on that object as long as the VBScript had the UUID that identified the COM+ object. End of story. Just wow.

The final detail my memory retains is that the room full of chain-smoking women who operated this system did not use mice. They entered data almost exclusively through the number pad, furiously slamming the tab key to move to the next field. They had developed intense muscle memory for the 200-300 fields on this form. They needed repeatability, or it would literally break them. Thing is, there were many cases where they didn't want to fill fields in the order they were defined in the HTML, so over time about 90% of those form elements gained inline event handlers - again, written in VBScript - which replaced the element's built-in tab key handling with whatever was deemed the correct next step in their insanely specific process. If we accidentally broke their expectations, there would be hell to pay.

That is, we would log in to the live server with VNC and live-edit the changes into the codebase while they were on the phone over the remote screen-share connection.

There was no source control, and if there were backups, I'm dubious that they would have been restorable.

By @recursive - 2 months
In SQL Server, you're likely to run into the 8000-byte row size limit before the column count limit. Ask me how I know.

By @nnurmanov - 2 months
I know about the limits on the maximum number of columns per table. The other day I was thinking about the best table layout for AirTable-like systems, and I finally decided to use relational tables with JSON fields. E.g. a row is a few fixed fields (id, name, the "who" fields) and one JSON field where I can put any number of additional fields. Hopefully this gives the best of both worlds and performs well.
By @mg - 2 months
That is why people these days tend to use a single JSON blob instead of multiple columns. And because it is so popular, SQLite and other databases are building better and better JSON support into the engine.

I wonder if better support for EAV tables would solve this more cleanly.

If one could do "SELECT price,color,year FROM cars! WHERE status='sold'" and the "!" would indicate that cars is an EAV table ...

    entity attribute value
    1      price     20750
    1      color     red
    1      year      2010
    1      status    sold
... and the result of the query would be ...

    20750 red 2010
That would solve most of the use cases where I have seen people use JSON instead.
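In standard SQL today, the "!" would have to expand to a pivot via conditional aggregation, something like this (table name invented):

    SELECT
        MAX(CASE WHEN attribute = 'price' THEN value END) AS price,
        MAX(CASE WHEN attribute = 'color' THEN value END) AS color,
        MAX(CASE WHEN attribute = 'year'  THEN value END) AS year
    FROM cars_eav
    GROUP BY entity
    HAVING MAX(CASE WHEN attribute = 'status' THEN value END) = 'sold';

which is exactly the boilerplate the proposed syntax would hide.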
By @telgareith - 2 months
First real job: stayed for 10 years, 5 years too long, and put VB6 COM objects into the ASP.NET codebase.
By @codetrotter - 2 months
> went by the name Munch

How do you pronounce that?

Was it like the word munch in “munching on some snacks”?

Or like the name of the painter Edvard Munch? https://www.nrk.no/kultur/nrk-endrer-munch-uttale-1.539667 (note: this link is in Norwegian)

By @nikodunk - 2 months
What a beautiful, humorously written, bitter-sweet piece of writing!
By @Nurbek-F - 2 months
Databases must not be a place to store any logic
By @20after4 - 2 months
When I was still in college studying Computer Science (~2001) I got a part-time job at a medical claims clearing house doing VB and Java programming. My task was to rewrite in Java the ancient and archaic system which read in medical claims uploaded by providers and translated them into one of several formats, depending on which insurance provider we were sending them to. The inputs were big batches of data in one of two formats, "ASC X12"¹ or "NSF"², and the outputs were some idiosyncratic version of those two, or a third option which was just ASCII text, laid out (monospace) so that it would line up when you dumped the file directly to a printer loaded with pre-printed UB-92³ forms.

None of this is particularly interesting, unless you have a fascination with archaic file formats and/or an interest in historical and highly idiosyncratic government bureaucracy.

The really interesting (horrifying) thing about the job, though, was the state of the VB6 code I was asked to translate into well-structured and performant Java. There were some hilarious, nightmare-inducing subroutines like "BunchOfIfs" and "BigSelect"; each of these monsters was several thousand lines long and consisted of exactly what you'd expect: huge branching structures with absolutely no reasonable organization or design. I'm not even exaggerating to say it was just the accumulated cruft of 10 years of dirty hacks tacked on by the cheapest coders they could find, where the only standard was "if it works, it ships". Literally the worst procedural code you can imagine, with zero factoring or modularization, and plenty of code copied, pasted, and then modified to do something slightly different from the three other versions of similar code elsewhere in the project.

Somehow, after a year of part-time work (20 hours a week, between classes) I managed to produce a working system that could translate claims from any one of the weird formats into any other, including the 5 or 6 variants of said formats, each of which violated the spec in a unique way to satisfy the strange requirements of some particular insurance company. The Java version was less than 10% the size (in lines of code) of the old system, ran 30x faster, and produced correct output.

Still to this day it's objectively the most difficult, painstaking, excruciating, but also probably the best, most impressive work I've done. And it's the least I've ever been paid for writing code.

Oh, and I forgot to mention: nothing was in source control, and there were several variants of the codebase, each modified slightly to do a similar-but-different task and then left to drift further and further from the codebase it was copied from.

1. https://en.wikipedia.org/wiki/ASC_X12

2. https://manuals.momed.com/edb_pdf/NSF%20(National%20Standard...

3. https://owcprx.dol.gov/static/UB-92.pdf

By @mystified5016 - 2 months
In my very first programming job, I got hired on right at the end of a major refactor that had been going on for the better part of a year. I was a hacker kid who started programming less than a year prior. My first task at this company was to merge and resolve all conflicts from the refactored branch back into mainline. No one checked what I did. As long as it compiled on my machine and passed the CI test (just one), it got merged and released into production.

Shockingly, this project was notorious for regressions. We were on a weekly(!) update cycle, and we constantly had bugs reappear that had been solved months prior.

This was 2010 or so, and we were using SVN because the project lead didn't trust or understand git. He also didn't understand SVN. The solution to our constant regressions was pretty simple. Instead of merging branches into trunk, we would delete the master branch every week and create a new one out of all of the finished develop branches.

Surprising absolutely nobody apart from that project manager, this plan was a spectacular failure.

He also stole code out of my personal git repos from before I worked there, and bragged about how smart he was for stealing my code. So, y'know, just a general idiot and asshat.

By @jiggawatts - 2 months
In case anyone is curious about what the proper solution to some of these problems is in modern SQL Server:

1) Many columns can happen with per-customer customisations to a shared table. The common way is to have a "customcolumns" table with "ID,ColumnName,ColumnValue" linked to the base table that has an "ID" key, but SQL Server also supports this natively now with Sparse Columns (see the sketch after this list): https://learn.microsoft.com/en-us/sql/relational-databases/t...

2) Shared identity or globally sequential numbers have a built-in schema object type now, SEQUENCE (also sketched below): https://learn.microsoft.com/en-us/sql/t-sql/statements/creat...

3) Manually populated calendar tables are actually a common schema design pattern, especially in manufacturing, shipping, and OLAP reporting systems. This is not that bizarre, it's just a bit weird that it would break logins! These tables can let you define all sorts of things such as international holidays, manufacturing calendars, tax years, finance reporting schedules, etc...

4) Dropping and recreating tables is also common, albeit usually done in a transaction. The fancy way to do this is with partition switching, where a new table is populated from whatever (external data, business logic, etc...) and then instantly swapped for the original without any long-running operations like truncate & insert would. See: https://pragmaticworks.com/blog/table-partitioning-in-sql-se...

5) Delayed reporting replicas ("here was a copy of the database. Data in this copy was about 10 minutes out of date.") is also a common pattern. At this point, the blog author is just complaining about the realities of business databases. Typically you'd have a bunch of read only replicas with different delays: Synchronous for HA failover, Asynchronous for DR failover and real-time reporting, and deliberately delayed on a schedule ETL copies for "point in time" consistent reporting. These would typically be done at 3am or something to minimise inconsistencies in the data. The modern way to do this is a SQL Server Always On readable secondary: https://learn.microsoft.com/en-us/sql/database-engine/availa...

6) "The main codebase I worked in was half VB, half C#." this is also surprisingly common. It's often not worth rewriting old code, there's not enough return on the invested money. These days there are automated conversion tools for VB to C#, such as: https://marketplace.visualstudio.com/items?itemName=SharpDev...

7) The Gilfoyle character is honestly the only thing that stands out here as an actual problem, not just a typical thing that happens in large enterprise bespoke software development.
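A minimal sketch of points 1 and 2, assuming a current SQL Server version (all names invented):

    -- 1) Sparse columns: per-customer extras cost almost nothing when NULL,
    --    and an optional column set exposes them together as one XML value.
    CREATE TABLE dbo.Merchants (
        Id              int NOT NULL PRIMARY KEY,
        Name            nvarchar(200) NOT NULL,
        CustomTaxCode   varchar(20) SPARSE NULL,
        CustomRegionTag varchar(50) SPARSE NULL,
        AllCustomFields xml COLUMN_SET FOR ALL_SPARSE_COLUMNS
    );

    -- 2) A SEQUENCE replaces a hand-rolled single-row key table:
    CREATE SEQUENCE dbo.GlobalId AS bigint START WITH 1 INCREMENT BY 1;
    SELECT NEXT VALUE FOR dbo.GlobalId;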

By @100pctremote - 2 months
"Merchants2 was the solution."
By @m463 - 2 months
"When you can't tie a knot, tie a lot"