"We ran out of columns" – The best, worst codebase
The author reflects on a chaotic codebase, highlighting challenges with a legacy database and a mix of programming languages. Despite flaws, it fostered creativity and problem-solving, leading to notable improvements.
The author reflects on their experience working with a complex and chaotic codebase, which they describe as both the best and worst they have encountered. The legacy system's database, particularly the merchants table, exemplified the challenges faced, as it ran out of columns, leading to the creation of a second table, Merchants2. The database was central to the system's operations, with various tables interconnected, including a notably simple SequenceKey table used for generating IDs. The login system relied on a manually maintained calendar table, and the employees table was reset daily, causing temporary access issues.
The codebase itself was a mix of VB and C#, with numerous JavaScript frameworks integrated, often with custom modifications. The author highlights the chaotic nature of the code, including a shipping application with a convoluted structure and a history of unresolved bugs. Despite its flaws, the environment fostered creativity and problem-solving, as developers carved out their own solutions within the mess. A standout improvement was made by a senior developer who optimized a slow merchant search page, showcasing the potential for innovation in a disorganized system.
Ultimately, the author expresses nostalgia for the direct connection to users and the fast feedback loop that characterized their work in this imperfect codebase, contrasting it with more structured but less engaging environments encountered later in their career.
Related
The saddest "Just Ship It" story ever (2020)
A developer shares a cautionary tale about the importance of releasing imperfect products promptly. Delaying a project led to missed opportunities, contrasting with a competitor's successful approach of shipping and updating continuously.
Ask HN: Business logic is slowing us down
Developers face challenges balancing internal stakeholder demands and external user needs, often spending time on code maintenance. Recognizing this work is crucial for enabling focus on new feature development.
Ask HN: I can't grok front end development
An experienced software engineer with over 10 years in backend systems is exploring frontend development, finding modern frameworks complex. They seek resources to improve their frontend skills and user experience understanding.
- Many commenters recount their experiences with poorly organized code, often lacking version control and documentation.
- Despite the challenges, some found creativity and problem-solving opportunities within these messy systems.
- Several stories illustrate the consequences of legacy systems, including bugs and inefficiencies that arise from outdated practices.
- There is a sense of nostalgia and camaraderie among developers who have faced similar struggles in their early careers.
- Some comments suggest that while messy code can be frustrating, it can also foster innovation and adaptability in problem-solving.
At some point, the application had some bugs which did not appear when the application was run in debug mode in Visual Studio. The solution was obvious: install Visual Studio on site for each customer and teach the users to run the app in debug mode from Visual Studio. I don't know how they convinced the users to do this or how they managed the licensing, but it was done.
What happened next was even worse.
There was no version control, of course; the code was available on a shared disk on the company's local network, copied into multiple folders, each holding its own version with no particular logic to it either: V1, V2, V2.3, V2a, V2_customer_name, V2_customer_name_fix, ...
After that, when there was a problem for a customer, the programmer went there to debug and modified the code on site. If the bug/problem was impacting other customers, we had to dispatch some guys for each customer to go copy/edit the code for all of them. But if the problem was minor, it was just modified there, and probably saved on the shared folder in some new folder.
What happened next was to be expected: there was no consensus on what was the final version, each customer having slightly different versions, with some still having bugs fixed years before for others.
In my first real job, I worked for a company that maintained a large legacy product programmed in a combination of COBOL and Java.
In order to work on the Java side of the product, you checked out individual files from source control to work on, which 'locked' the files and prevented other developers from checking out the same files. This functionality was not part of our actual source control system, but was instead accomplished with a series of csh shell scripts you could run after ssh'ing into our development server.
Each of our customers had a 'master' jar file that represented the actual final compiled product (a jar file is really a zip file archive, which bundles together the resulting compiled java class files).
Once you had finished implementing your code changes, you ran another set of scripts which found the master jar file for each customer, unzipped it, copied the compiled files from your local machine into it, and zipped it back up again. Finally, the source control lock was released.
This meant, effectively, that the codebase was never compiled as a whole at any point in the process; instead, we just manually patched the jar file over time with individually compiled class files.
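For what it's worth, the whole "unzip, overlay classes, zip back up" step boils down to something like this (a minimal Python sketch; the real thing was csh scripts run over ssh, and the paths and names here are invented):

```python
import zipfile
from pathlib import Path

def patch_jar(master_jar: Path, compiled_dir: Path) -> None:
    """Overlay freshly compiled .class files onto a customer's 'master' jar.

    Mirrors the old process: copy through every entry we are not replacing,
    then add the newly compiled classes, and swap the result into place.
    """
    new_classes = {p.relative_to(compiled_dir).as_posix(): p
                   for p in compiled_dir.rglob("*.class")}
    patched = master_jar.with_name(master_jar.name + ".patched")

    with zipfile.ZipFile(master_jar) as src, \
         zipfile.ZipFile(patched, "w", zipfile.ZIP_DEFLATED) as dst:
        for entry in src.namelist():
            if entry not in new_classes:
                dst.writestr(entry, src.read(entry))   # keep the old entry as-is
        for name, path in new_classes.items():
            dst.write(path, arcname=name)              # overlay the new build

    patched.replace(master_jar)  # patch the shared 'master' jar in place

# One jar per customer, patched in a loop (hypothetical layout):
# for jar in Path("/shared/customers").glob("*/master.jar"):
#     patch_jar(jar, Path("build/classes"))
```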
Over the years, small errors in the process allowed a huge amount of inconsistencies to creep into the codebase. Race conditions would allow two developers to lock the same file at once, or a developer would change a class that was a dependency of some other code that somebody else was changing. Sometimes code changes would make it into some of the customer jar files, but not others. Nobody knew why.
It took a small team two years to migrate the entire codebase to git with proper CI, and a huge chunk of that time was reproducing a version of the codebase that actually compiled properly as a whole. After the project was finished, I resigned.
The primary author of it didn't know about arrays. I'm not sure if he didn't know about them being something that had already been invented, or whether he just didn't know Perl supported them, but either way, he reimplemented them himself on top of scalars (strings), using $foo and $foo_offsets. For example, $foo might be "romemcintoshgranny smithdelicious" and $foo_offsets = "000004012024", where he assumes the offsets are 3 digits each. And then he loops through slices (how does he know about slices, but not arrays?) of $foo_offsets to get the locations for $foo.
By the time I was done refactoring that 12k+ was down to about 200 ... and it still passed all the tests and ran analyses identically.
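The original was Perl, but the pattern translates directly; here is a rough Python rendering of the hand-rolled "array" next to what the language already provides (values taken from the comment above):

```python
# The reinvented array: all values concatenated into one string, plus a
# second string of fixed-width (3-digit) offsets pointing into the first.
foo = "romemcintoshgranny smithdelicious"
foo_offsets = "000004012024"

def nth(values: str, offsets: str, i: int) -> str:
    start = int(offsets[i * 3:(i + 1) * 3])
    next_off = offsets[(i + 1) * 3:(i + 2) * 3]        # empty for the last item
    end = int(next_off) if next_off else len(values)
    return values[start:end]

print([nth(foo, foo_offsets, i) for i in range(len(foo_offsets) // 3)])
# ['rome', 'mcintosh', 'granny smith', 'delicious']

# What an actual list looks like:
apples = ["rome", "mcintosh", "granny smith", "delicious"]
```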
My youngest worked in a furniture chain over the summer. And they got sent a big, heavy furniture set from the central warehouse, which the store actually didn't want. So, they sent it back. The problem was that the system didn't allow them to say: please, don't send this again. And the non-natural intelligence at the central base decided to send this set again. When my youngest started working - they were loading this set back for the seventh time.
Why 'loading'? Because no one could find a way in the program to send this ill-fated set back on the same truck that brought it. No one, except my youngest, that is. He managed to find the combination of keys and checkboxes that allowed them not to unload the unwanted set and ship it back on the same truck - and he immediately got a raise.
I suspect the set is still traveling. But now they only load-unload it at the central warehouse.
I feel a lot of this is the difference between theory and practice. Sure each of these things are bad, but probably a lot might have been the right choice at the time, and in a way, most companies, even most projects running for many years, end with similar quirky processes, messes, and hacks.
It's the war stories of what used to be, often only told by the code and the database as the creators have long left.
When I look at our couple-year-old startup, we have some of these things. Prototypes that just keep functioning and only get more attention when they break, manual processes that sort of make sense but also don't, integrations with systems that were built in the early 2000s (it works, but it ain't pretty). We fix many, but I'm sure some will survive way too long.
As software engineers, especially online, we like to discuss the ideal way to do things, best practices, testing strategies, redundancies, the whole nine yards. When time is of the essence and stakes are high, it all goes out of the window and you gotta just make it work in whatever way is possible to fight another day.
Or this one, it implements EVERYTHING for rendering and interacting with a message in a single class, all non-service message types, all drawn manually on a canvas "for performance", and all input processed manually too: https://github.com/DrKLO/Telegram/blob/master/TMessagesProj/...
I even had the permission from the main developer to refactor some of it, but I never got around to actually doing it.
It's a miracle this app works at all.
The worst part wasn’t the code itself (although it was bad), but the fact that there were so many abandoned paths in it, and there would be three or more different versions of crucial functions with the same name (or sometimes a different name but doing the same thing) in different places, sometimes all being called by different paths in the workflow. Or not being called at all.
And it was very math heavy (calculating natural gas pressures and flow rates through different sized pipes and fittings and sizing regulators to meet certain loads). Think Excel formulas on cells that referenced 15-20 other cells, each of which was a formula of its own that referenced other pages and cells, some of which were filled by VBA. And that wasn’t even counting the VBA code full of brute force solvers for multi-variable equations that used heuristics he’d worked out by trial and error (if the delta is over 1.5, add 5 to this variable, otherwise subtract 3, but if the delta was less than 0.5, add 1, and so on - it eventually converged or found no solution, but a binary solver did the same thing, only faster and easier).
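For contrast, the "binary solver" mentioned above is just bisection: bracket the answer and halve the interval until it converges, with no hand-tuned nudging constants. A minimal sketch (the gas-flow function is a stand-in, not the spreadsheet's actual formula):

```python
def bisect(f, lo: float, hi: float, tol: float = 1e-6, max_iter: int = 100) -> float:
    """Find x in [lo, hi] with f(x) close to 0, assuming f(lo) and f(hi) bracket a root."""
    f_lo = f(lo)
    if f_lo * f(hi) > 0:
        raise ValueError("f(lo) and f(hi) must have opposite signs")
    for _ in range(max_iter):
        mid = (lo + hi) / 2
        f_mid = f(mid)
        if abs(f_mid) < tol:
            return mid
        if f_lo * f_mid < 0:
            hi = mid                      # root lies in the lower half
        else:
            lo, f_lo = mid, f_mid         # root lies in the upper half
    return (lo + hi) / 2

# e.g. size a pipe diameter d so that pressure_drop(d) hits a target (hypothetical):
# diameter = bisect(lambda d: pressure_drop(d) - target, 0.01, 0.5)
```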
It took me and a junior developer several months, during which, of course, multiple change requests were going through to make it multiuser and secure.
Both my nightmare project and one that I’m very proud of once it was completed.
I could talk through pain points they were having, we’d come up with a solution together, I’d hack up a quick prototype and launch it just to them to try out. We’d tweak it over a couple of weeks and when it was good I’d launch it to all customers.
Messy, ugly code base. But it worked well because it wasn’t over-managed. Just developers doing development. Amazing what happens when you get out of the way of smart, talented people and let them do their work.
Originally our company only did business in one marketplace (the UK). When we started doing business in multiple marketplaces, it came time to modify the system to cope with more than one. Our system assumed, _everywhere_, that there was only ever one marketplace. I had a plan: I would copy-paste the system, make an "international" version that supported many marketplaces, then transition over our original marketplace to the international version and shut down the old system. This way, everyone could keep working on the original marketplace like normal, they'd get the new marketplaces on the new system, and we'd do a clean cutover once ready.
It started out quite well. I got the international version working and onboarded our first new marketplace. The business was very happy, there was lots of opportunity in the new marketplaces. They asked that I delay the cutover from the old system and focus on developing things for these new marketplaces. After all, the old system still works for our original marketplace, we can move it over once our work is done on the new marketplaces. I said yes, of course!
It's now 5 years later, it turns out there's a lot to do in those other marketplaces. To say that the system is split-brained is an understatement. When training new hires, we teach them the difference between a "product" and an "international product". When either system does something, it double checks and cross-references with the other system via a colourful assortment of HTTP APIs. There are a variety of database tables that we "froze" in the old system and continued in the new system, so you could tell that an ID referred to something in the new system or the old system if it was above or below the magic number we froze it at. We have overarching BI dashboards that query both systems to produce results, and they have to heavily manipulate the old system's data to fit with the new multi-marketplace model, and any other changes we made. Both systems are extremely tightly coupled, the old one is a ball and chain to the new one, but the new one is so tightly integrated into it that there's no hope of separating them now.
I've learned to say no since then.
For many people, their first job in software engineering is the worst codebase they will deal with professionally, for this reason: the first job hires a lot of people with little or no experience. As soon as someone gains some experience, they can move on to better paying jobs, where there are better developers with better standards.
I had one customer who came back with the same request, slightly differently worded, every single month, and every single month I'd say the same thing. They had this site they were running that was essentially a Yellow Pages type site. They had a large set of companies with contact details, each with multiple business categories associated with it. You'd choose a category, and they'd return a list of matching companies.
The problem was the site was really slow. I took a quick look around, and saw that all the time was lost querying the database. Taking a quick look at the schema I discovered that their approach to categorisation was to have a TEXT column, with semicolon separated 4 character strings in it. Each 4 character string mapped to a business category.
So when someone wanted to load up, say, all pest control companies, it would check the category mapping table, get the 4 character string, and then go to the companies table and do:
SELECT * FROM companies WHERE categories LIKE "%PEST%"
So on each page load of the main page type the site was there to provide, it did a full text search over the category field for every single record in the company table. I guess that's probably okay for the developer without real world scale data, and real world traffic counts to worry about. But they had lots of data in the database, and that category field could have dozens of categories against a company. As soon as they had more than about 4-5 simultaneous customers, performance started tanking.
I could never get them to accept that they needed to rethink the database schema. One month they were bleating about how is it possible that Google can manage to do such a search across a much larger amount of data, much faster. They really didn't like my answer that amounted to "By having a sane database schema". All they were willing to do was pay over the odds for our most powerful server at the time, which had enough capacity to hold the entire database in memory.
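For the record, the "sane database schema" being argued for is just a junction table plus an index, so the category lookup becomes an indexed join instead of a LIKE scan over packed strings. A minimal sketch, using SQLite purely for illustration (the original system's engine isn't named):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE companies (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE categories (code TEXT PRIMARY KEY, label TEXT);
    -- One row per (company, category) instead of 'PEST;PLMB;...' packed in a TEXT column.
    CREATE TABLE company_categories (
        company_id INTEGER REFERENCES companies(id),
        category   TEXT    REFERENCES categories(code),
        PRIMARY KEY (company_id, category)
    );
    CREATE INDEX idx_cc_category ON company_categories(category);
""")

# All pest control companies, via the index rather than a full table scan:
rows = conn.execute("""
    SELECT c.*
    FROM companies c
    JOIN company_categories cc ON cc.company_id = c.id
    WHERE cc.category = ?
""", ("PEST",)).fetchall()
```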
These tracker databases are usually used to generate a HUD - numerical readouts surrounding each player shown on the table. PT allows for custom HUDs to be saved/exported/shared, and there is a cottage industry building and selling such HUDS. These HUDs can often bundle hundreds of custom stats - basically a subset of SQL queries, a simple example would be "times_raised / opportunities_to_raise WHERE position= 'button'", that sort of thing.
For performance reasons these custom stats are cached, which obviously makes some sense. However, each cached stat creates a new column in the custom_cache table of PT4's current database, a PostgreSQL 9.0 backend. If you play mostly Heads-Up SNG or the 3-handed Spins, it's actually quite easy to go over the (IIRC) 4096 column limit there by purchasing and importing one too many fancy HU HUD packages!
This completely borks PT4, it can no longer open - and players can no longer make money! In my previous work, I've supported many a player and fixed this "too many columns" error numerous times (by deleting enough custom HUDs from their account until the PT4 can start up correctly once more). Funny to see this pop up elsewhere!
There are so many entangled services and machines that you feel like Indiana Jones. You ssh into a machine and feel century-old dust beneath your footsteps. And you never know what you will find. Maybe a service which holds the company together. Maybe a CPU-eating hog which hasn't done anything useful in the last 3 years.
I don't enjoy writing new code much. But in such an environment, even with my limited skills, I can make decent improvements, especially from a security point of view. Feels great.
We had no code reviews, no design docs, no tests, nothing. We made the changes the way we thought they were right and would git pull them onto the production server.
After I struggled to get productive for the first four months, my manager went on a four-week Christmas vacation. In a moment of frustration, I seized the opportunity and rewrote the whole project from scratch. I don’t remember if my manager ever noticed, but that was the moment I finally got productive.
There's no date on this article, but it feels "prior to the MongoDB-is-webscale memes" and thus slightly outdated?
But, hey, I get where they're coming from. Personally, I used to be very much schema-first, make sure the data makes sense before even thinking about coding. Carefully deciding whether to use an INT data type where a BYTE would do.
Then, it turned out that large swathes of my beautiful, perfect schemas remained unoccupied, while some clusters were heavily abused to store completely unrelated stuff.
These days, my go-to solution is SQLite with two fields (well, three, if you count the implicit ROWID, which is invaluable for paging!): ID and Data, the latter being a JSONB blob.
Then, some indexes specified by `json_extract` expressions, some clever NULL coalescing in the consuming code, resulting in a generally-better experience than before...
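Concretely, that pattern looks something like the following; a sketch only (SQLite's json_extract works on plain JSON text, and on JSONB in 3.45+), with invented field names:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (ID INTEGER PRIMARY KEY, Data TEXT);  -- Data holds a JSON document
    -- Expression index: lookups filtering on $.sku can use it instead of scanning.
    CREATE INDEX idx_items_sku ON items (json_extract(Data, '$.sku'));
""")

conn.execute("INSERT INTO items (Data) VALUES (?)",
             (json.dumps({"sku": "A-100", "qty": 3, "notes": "whatever fits"}),))

row = conn.execute(
    "SELECT ID, json_extract(Data, '$.qty') FROM items "
    "WHERE json_extract(Data, '$.sku') = ?",
    ("A-100",),
).fetchone()

# ROWID/ID-based paging, as mentioned above:
page = conn.execute(
    "SELECT ID, Data FROM items WHERE ID > ? ORDER BY ID LIMIT 50", (0,)
).fetchall()
```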
> This may sound like a mess to you. But it was remarkably enjoyable to work in. Gone were the concerns of code duplication. Gone were the concerns of consistency. Gone were the concerns of extensibility. Code was written to serve a use, to touch as little of the area around it as possible, and to be easily replaceable. Our code was decoupled, because coupling it was simply harder.
This one guy established himself by making an Access database for their core business, and when the web became a thing, built a customer site. But not on it—in it. He simply served ASP pages directly from the database, inserting dynamic content in queries. When I was asked to help improve their terrible design, I was forced to untangle that unholy mess of queries, ASP (new to me) and HTML. It was easiest to write all the HTML and insert their ASP right before I sent the files back (because I wasn’t given access to their DB/web server). Thinking “I could do better than this” got me into programming.
He was a Microsoft-everything head. Finally went too far when he presented a new web interface starring a Clippy-like parrot using Microsoft’s DirectX avatar API. The executives were unimpressed and then I noted that 20% of our customers couldn’t use his site. (I probably still had a “best viewed with IE” badge on the main site, lol)
The old codebase was a hairy ball of Scala using a very outdated version of a (now) infamous actor framework, from before they figured out that untyped messages kinda left out one of the major selling points of Scala.
The code was readable, but the authors had this strange idea that every little piece of logic should be part of its own "Actor". An actor is pretty much equivalent to a class, and each one of them had their own little file. With this many classes, with very specialized purposes, you ended up with 90 character identifier names.
To understand what would be a single function in a normal code base, you would have to dive through half a dozen files through several repositories to piece together the logic. Generally, at the end, you find that most of the code is about passing around a value, and there is this one file where there is actual logic applied to the value.
It wasn't that awful. The thing was that it was impossible to change anything: no documentation, no tests, no bug tracking, not even a PR process, laconic commit messages, no Wiki pages. And the actor framework made it very difficult to add tests. But later developers did manage it pretty well: they drew fences around the old services, with an HTTP API to communicate with them. And all later additions were part of different services that were very cleanly and consistently designed.
At a company where I used to work, they had heard the same rumor, so instead of using identity columns or sequences, they kept a table with a number of ids "available" (one row per id). Whenever a unique id was needed, the table would be locked, an id selected and marked as used. If there were no ids available, more ids would be added and then one used. A scheduled job would remove ids marked as used from time to time. Note that there was a single "sequence table", shared among all of the entities.
That was not even the weirdest part. That id was unique, but NOT the primary key of the entity, only part of it.
The structure of the database was fairly hierarchical, so you had for example a table CUSTOMER in 1-to-many relation with a USER table, with a 1-to-many relation with an ADDRESS table.
While the primary key of the CUSTOMER table was a single CUSTOMER_ID column, the primary key of the USER table was (CUSTOMER_ID, USER_ID), and the primary key of the ADDRESS table was (CUSTOMER_ID, USER_ID, ADDRESS_ID). There were tables with 5 or 6 columns in their primary key.
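For concreteness, the hierarchy described above amounts to primary keys that grow by one column per level, something like this (SQLite DDL used purely as an illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY
    );
    CREATE TABLE user (
        customer_id INTEGER REFERENCES customer(customer_id),
        user_id     INTEGER,
        PRIMARY KEY (customer_id, user_id)          -- parent's key plus one column
    );
    CREATE TABLE address (
        customer_id INTEGER,
        user_id     INTEGER,
        address_id  INTEGER,
        PRIMARY KEY (customer_id, user_id, address_id),
        FOREIGN KEY (customer_id, user_id) REFERENCES user(customer_id, user_id)
    );
""")
```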
Another WTF moment was the realisation that MS SQL Server does not support a BOOLEAN type. That made porting the code fun.
A few months in, when I approached the CTO and asked if I could start writing a test framework, he deemed it a waste of time and said "by the time you'd commit the test, it would go out of date and you'd need to rewrite it".
Naturally, the build would break about 5 times a week.
Boeing was a major customer of this system, so when shit hit the fan at Boeing a while ago, I wasn't surprised.
Programming is a lot like this.
Joel Spolsky released the "Joel Test" for determining if the software team you were interviewing had good practices in 2000. One of the requirements was that they used version control. Not all that many teams actually did. Especially during the dotcom craze.
Today, passing the Joel Test is table stakes, and it's the rare shop that doesn't. But it took years for that sort of thing to become ubiquitous.
https://www.joelonsoftware.com/2000/08/09/the-joel-test-12-s...
Probably some of the most fun I've ever had writing software was making Gilfoyle-2s.
In the codebase itself you can see the evolution of code styles: the older views and the core of the system are written in a very old and very stateful manner, while newer parts of the application use modern architecture practices. We have two local databases that load configuration into the app for our different clients and their requirements, and a global state is loaded at all times to check which business logic is the correct one to follow.
I'm a junior, a few years into Android programming, and while it's sometimes frustrating having to deal with some nasty bug because a random variable is updated for reasons only God knows, I think the experience it's giving me is something I'm going to appreciate years down the road.
Constraints affect outcomes in (at least) three ways:
- by absolute limitation (you cannot violate these)
- by path of least resistance (the constraint makes X easier)
- by strategy (creative use of knowledge & skills leads to novel solutions)
---
There is an injured person on top of a mountain, and they need medical attention. You need to go up the mountain, get them, and bring them down.
You only have so much strength/endurance, so nothing you do will ever be faster than what your body is capable of. You need to get up and down in less than two hours. You need to carry an injured person. The mountain has two slopes: a long gradual one, and a short steep one.
Most people would climb the long gradual slope, because it's the path of least resistance. But it will take 4x as long to climb up and down it. Climbing straight up the steep slope would be incredibly tiring, and unsafe to bring someone down.
You can, however, zig-zag up and down the steep hill. It will take more time than going straight up, but faster than the long way, you will be less tired, and it's safer to bring someone down hill.
---
Constraints can be good and bad. Good use of constraints can allow you to get something done effectively. Bad use of constraints leads to failure. So it's important to have someone who can utilize those constraints effectively.
Vision, leadership, and wisdom are more important than the people, skills, materials, or time involved. The former determines the quality of the outcome more than the latter.
The calendar table struck a chord. We had one for holidays. One system needed to know when they were to calculate pay dates. Except once a year it would “run out” and someone would have to go add in the next year’s worth after a bad calculation was spotted.
The second time I was told to do it, I put in 10 years worth. The company didn’t survive long enough to need more.
My first “big” project was actually that pay date code. Every once in a while the server would hang, and people had deduced the date calculation was the problem. But no one knew why. And it wasn’t frequent enough to be worth taking time from the other two programmers. But I was new and thus “spare”.
After not being able to find any errors, I brute forced it. I ran that calculation for every day of the year for every pay type (weekly, monthly, bi-monthly, etc) and it quickly got locked in an infinite loop. Armed with the “bad” data that triggered it, it was easy to solve and provide tests to prove it would never happen again. I don’t remember what it was, exactly.
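That brute-force sweep is easy to reproduce in any language; roughly (the next_pay_date function here is a stand-in for the real calculation):

```python
from datetime import date, timedelta

PAY_TYPES = ["weekly", "monthly", "bi-monthly"]  # whatever pay types the system supported

def sweep_year(next_pay_date, year: int) -> None:
    """Run the pay-date calculation for every day of the year and every pay type.

    If the calculation locks up, the last line printed identifies the exact
    input that triggered the infinite loop.
    """
    day = date(year, 1, 1)
    while day.year == year:
        for pay_type in PAY_TYPES:
            print(f"testing {day.isoformat()} {pay_type}", flush=True)
            next_pay_date(day, pay_type)
        day += timedelta(days=1)

# sweep_year(next_pay_date, 2005)
```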
I wrote a little program to scan the CDs as I inserted them into my computer, indexing the data into a database I had created, and then labelling the CD.
It wasn’t exactly exciting work but I still miss those days sometimes, everything was new and unexplored.
Not needing to conform to some company-wide standard was probably really pleasant while it lasted, but every such effort adds to the haunted graveyard [1], and the lack of consistency will eventually come back to bite whoever is still around.
[1] https://www.usenix.org/sites/default/files/conference/protec...
I work at a small business. Despite computer software being about the literal opposite of our business (plants), the founder built an entire suite of interconnected tools that runs off MS BASIC for Xenix, on a single HP machine running SCO OpenServer. The machine has so many customizations, self-scheduling cron/at jobs, odd nooks for files, weird tweaked programs, and special conventions that if a server with a dedicated hostname qualifies as a pet (as opposed to cattle), I'd be THIS THING'S pet.
The system handled EVERYTHING. Accounting, payroll, pesticide management, inventory, attendance, business contacts, shipping label printing... all out of a bunch of terminal menus (which are actually text files with control codes that get `cat`ed out).
But by God, the most horrifying part of it all are those BASIC files. They're IMPENETRABLE.
Firstly, I don't believe this version of BASIC supports named functions or subroutines. At all. But that's fine. MS BASIC being what it is, the interpreter only can deal with a certain number of characters per logical line, and that includes data definitions.
This version of BASIC (like so many others) includes its own serialization format and record/file access scheme. You declare the layout of the data file you want, open that file, and BASIC will handle (most of) the rest.
So when the founder started hitting the internal line limit while defining the data file's fields, he would cut the names of the fields down to fit more on that one line. Over time `30 AS EMPLOYEENAME` became `30ASEMPLNAME`, which became `30ASEMNAME` which became `30ASAF(1)`.
Every cent we transact, and every employee's timecards still flow through this old system, some even using bona fide Wyse terminals. To reiterate, this man was, first and foremost, a farmer. His contraption is terrifying, but commands immense respect. It's lasted 30-some years with continuous tweaking and refining, and we still have yet to replicate even half of its functionality. (Though there are other organizational issues that are making that difficult.)
On a personal note, aside from the calcified codebase and occasional spelling errors, it's a stellar business application. It's fast, mostly coherent, and keyboard-driven in such a way that experienced employees can navigate it faster than the terminal can refresh. We've been working for years to replace it, but at the same time, there's a lot our newfangled Angular+PHP+MySQL replacement could learn from it.
Needless to say it gave my team a few days of flabbergast and speculation on what system they must have built on to hit a row limit at only 5 digits. And yet for a non-tech industry it was mostly working.
In 2014 or so, someone at a company told me that they just don’t do branches, and merge everything into trunk. I agree that long-lived branches are bad, but no branches at all? Sure enough, this was OK — meaning branches could be kept and synced by a few people outside the main repo, but forks are never pushed to the repo.
Using table columns for data, as long as you’re building for an actual specific business vertical and not just a general-purpose framework.
You never know when you’ll want to put an index on the data, and databases already have built-in mechanisms to select a subset of the fields, etc. You avoid all kinds of joins. If you need to extend the table, just start a new table with the same synthetic ID.
I would say, in tables where you don’t want to even have global locks (eg sharded tables), just don’t have an autoincrementing ID. Instead, try a random ID or a UUID, and insert it, then use it across tables.
In short, what was described here is actually good design.
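A minimal sketch of that random-ID approach, with invented table and column names (SQLite again, just for illustration):

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- No autoincrement, so no global lock or shared sequence is needed when sharding.
    CREATE TABLE orders      (order_id TEXT PRIMARY KEY, placed_at TEXT);
    CREATE TABLE order_lines (order_id TEXT, line_no INTEGER, sku TEXT,
                              PRIMARY KEY (order_id, line_no));
""")

order_id = str(uuid.uuid4())   # minted by the client, not by the database
conn.execute("INSERT INTO orders VALUES (?, datetime('now'))", (order_id,))
conn.execute("INSERT INTO order_lines VALUES (?, 1, 'SKU-123')", (order_id,))
```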
- there is so much stuff to improve. There’s nothing better than the feeling of improving things with code (or by removing it)
- it’s back to school again and anything goes. You can implement features in any way you want because of the constraints the system imposes. Granted, sometimes it’s painful to add functionality
- there’s usually no room for subjective topics like clean code and architecture. The most important thing with these systems is correctness (and this is usually an objective topic)
- nobody can blame you for something that doesn’t work. It’s always the fault of the legacy system
I wouldn’t recommend working on such systems to junior engineers, though.
I don’t really like to work on “perfect” codebases where everyone follows the same pattern, with linters, where if something breaks it’s because of your shitty code (because the codebase is “clean”). It’s very frustrating and limiting.
I put together a thread of all of the wild incidents I've seen over my many years of working as a FinTech SRE:
The reason we had it is we had master-master replication, but without requiring the peer to acknowledge before committing a transaction. To avoid inconsistencies, we preferred one server for even ids and the other for odd ids. But when writing a new record, an autogenerating sequence would just give the next id without regard to what "side" the request was on. So we had a table to keep track of the next id to use for each side, where we incremented the next id by 2 each time a new id was allocated.
It was a little weird, but it worked fairly well. Although we eventually changed the architecture and removed the need for those tables.
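Stripped of the database plumbing, the scheme is just "each side owns one parity and steps by two", which guarantees the two masters never mint the same id; a toy illustration:

```python
class ParityIdAllocator:
    """Hand out ids for one side of a two-master setup.

    Side 0 issues even ids, side 1 issues odd ids, and every allocation
    advances by 2 (the real system kept the 'next id' per side in a table).
    """

    def __init__(self, side: int, start: int = 0):
        assert side in (0, 1)
        # Round start up to the first id with this side's parity.
        self.next_id = start if start % 2 == side else start + 1

    def allocate(self) -> int:
        current = self.next_id
        self.next_id += 2
        return current

even_side = ParityIdAllocator(side=0)
odd_side = ParityIdAllocator(side=1)
print(even_side.allocate(), even_side.allocate())  # 0 2
print(odd_side.allocate(), odd_side.allocate())    # 1 3
```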
Something like this: a "genius" programmer was somehow, for some reason, using svn commits as a method dispatcher. Commit ids were sprinkled throughout the codebase. A new hire broke the entire system by adding comments, and comments weren't compatible with the bespoke svn method dispatcher.
Does anybody remember this article? I really want to read it again.
One particularity of the EAV [1] system is that you end up with tables having hundreds of columns (and growing). It made Magento [0] itself extremely hard to work with and optimize. I hope they have moved away from this model since.
To be fair, this was before nosql databases were a thing.
[0]: https://en.wikipedia.org/wiki/Magento
[1]: https://en.wikipedia.org/wiki/Entity%E2%80%93attribute%E2%80...
https://www.postgresql.org/docs/current/sql-createsequence.h...
Codebases like this, or the one from the OP, are cool for learning how not to do certain things.
This included a project called IefParser, whose job was to parse incoming data files and put the data into databases. It was a serious project. Like, really serious. The software's input format came with a full manual, with a section highlighting the changes from previous versions. I have not seen that level of detail before or since.
And it was also very old: written about a year or two after I was born, and never rewritten since. So the software side of things was less serious and more hacky.
Core code was written in C. The database used was Oracle. And the code was driven by Perl batch scripts triggered via cron jobs. And all that ran on IBM AIX (Unix). Yes, not Windows or Linux. It ran on Unix.
The sheer difference between the software requirements, which were meticulously specified, and the software itself was mind boggling.
Some fun facts:
- The C code could not be compiled on Windows. You needed to log in to the dev server via PuTTY and run the makefile to do it.
- The Perl code was not checked into the repository. The Perl code was also different for each environment.
- The Unix version had a vi editor which for some reason didn't tell you whether you were in edit mode or command mode. WTF! Yes, no other editor existed. Apparently I was the only one in India who bothered to learn vi, as no one else in India could reliably edit files.
- The cron job schedule was also not checked in. After a "great learning opportunity", I decided to check that in.
- I once had the horror of fixing bugs in the Perl and the cron job. Apparently global variables are the state of the art in Perl.
- The cron job always failed on New Year's Day because the batch script used an MMDD-formatted folder for temp file storage, and was unable to understand that 0101 comes after 1231 (see the sketch below). I volunteered to track down the issue and fix it for good. M3 shot it down, saying, "we don't want to take risks on such a mission critical service".
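The New Year failure is easy to reconstruct: compared as strings, an MMDD folder name for January 1st sorts before December 31st, while a YYYYMMDD name sorts correctly across the boundary (a tiny reconstruction, not the actual Perl):

```python
from datetime import date, timedelta

dec_31 = date(2023, 12, 31)
jan_01 = dec_31 + timedelta(days=1)

# MMDD folder names, as the batch script used them:
print(jan_01.strftime("%m%d") > dec_31.strftime("%m%d"))      # False: "0101" < "1231"

# Folding the year in fixes the ordering across New Year's Day:
print(jan_01.strftime("%Y%m%d") > dec_31.strftime("%Y%m%d"))  # True: "20240101" > "20231231"
```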
I guess there is dailywtf but that's mostly bugs. Probably good enough though
If this story reminds you of the codebase you work on, don’t let its romanticism deceive you into thinking that suffering is somehow actually good.
The first contained monetary values. These were split over two columns: a decimal column holding the magnitude of the value, and a string column containing an ISO currency code. Sounds good so far, right? Well, I learned much later (after, of course, having relied on the data) that the currency code column had only been added after expanding into Europe … but not before expanding into Canada. So when it had been added, there had been mixed USD/CAD values, but no currency code column to distinguish them. But when the column was added, they just defaulted it all to USD. So any USD value could be CAD — you "just" needed to parse the address column to find out.
Another one was a pair of Postgres DBs. To provide "redundancy" in case of an outage, there were two such databases. But no sort of Postgres replication strategy was used between them, rather, IIRC, the client did the replication. There was no formal specification of the consensus logic — if it could even be said to have such logic; I think it was just "try both, hope for the best". Effectively, this is a rather poorly described multi-master setup. They'd noticed some of the values hadn't replicated properly, and wanted to know how bad it was; could I find places where the databases disagreed?
I didn't know the term "split brain" at the time (that would have helped!), but that's what this setup was in. What made pairing the data worse is that, while any column containing text was a varchar, IIRC the character set of the database was just "latin1". The client ran on Windows, and it was just shipping the values from the Windows API "A" functions directly to the database. So Windows has two sets of APIs for like … everything with a string: an "A" version and a "W" version. "W" is supposed to be Unicode¹, but "A" is "the computer's locale", which is nearly never latin1. Worse, the company had some usage on machines that were set to, say, the Russian locale, or the Greek locale. So every string value in the database was, effectively, in a different character set, and nowhere was it specified which. The assumption was that the same bytes would always get shipped back to the same client, or something? It wasn't always the case, and if you opened a client and poked around enough, you'd find mojibake easily enough.
Now remember we're trying to find mismatched/unreplicated rows? Some rows were mismatched in character encoding only: the values in the two DBs were technically the same, just encoded differently. (Their machines' Python setup was also broken, because Python was ridiculously out of date. I'm talking 2.x where the x was too old; this was before the problems of Python 3 were relevant. Everything in the company was C++, so this didn't matter much to the older hands there, but … god, a working Python would have made working with character set issues so much easier.)
¹IIRC, it's best described as "nearly UTF-16"
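In practice, answering "is this row actually different, or just encoded differently?" meant decoding each side's bytes with its plausible code page before comparing. A rough sketch of the idea (the code pages listed are guesses for illustration, not a record of what was deployed):

```python
# Candidate Windows "A"-API code pages the various clients might have used.
CANDIDATE_CODEPAGES = ["cp1252", "cp1251", "cp1253"]  # Western, Cyrillic, Greek

def decodings(raw: bytes) -> set[str]:
    """All plausible text interpretations of a raw value stored as 'latin1' bytes."""
    out = set()
    for cp in CANDIDATE_CODEPAGES:
        try:
            out.add(raw.decode(cp))
        except UnicodeDecodeError:
            pass
    return out

def same_text(a: bytes, b: bytes) -> bool:
    """True if the two values could be the same text in some candidate code page,
    i.e. the rows differ in encoding only, not in content."""
    return a == b or bool(decodings(a) & decodings(b))
```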
https://stackoverflow.com/a/78831591/467460
It is from the point of view of an app developer, not a DBA, but should be relevant to most people on HN.
If you have a messy app, you have a messy organisation.
My take on this story was a backend "deal capture" system for an emissions trading desk about 25 years ago. The application was written in classic ASP, T-SQL and VB6 COM components. Oh, and VBScript. It only ran in Internet Explorer 6.
The heart of the machine was a massive, unholy stored procedure. It was about 35kb, and what it did was generate a very long string which was - you guessed it - a gigantic SQL statement with countless JOINs across many temporary tables.
The business logic was deemed sophisticated enough that it was decided we needed a workflow engine and some genius decided that we should use an expensive proprietary enterprise-y tool which used Microsoft Exchange Server as its datastore and operating environment. This was as terrible an idea as it sounds, because the "API" to talk to this engine was single-threaded and massively bottlenecked by whatever made Exchange Server suck. Most significantly, it also could only be properly accessed by a very specific version of a DLL, requiring that a server setup follow an agonizingly specific software installation procedure that was scientifically proven to be impossible for IT to execute because they would always jump to the end and install the latest version of the service packs and suddenly the DLL we needed would be impossible to access without wiping the machine and starting over. This is what people meant when you hear the term "DLL Hell".
The most interesting aspect of this system was the client UX. This was 1999, and Ajax was literally not a thing yet. However, the precursor to XmlHttpRequest was an IE-only thing called ActiveXObject. You could use VBScript to wire up form elements to be updated by an ActiveXObject in the way we'd consider familiar today, except that instead of FORM GET/POST, your ActiveXObject would talk to a VB6 COM+ DLL running inside of Microsoft Transaction Server. Looking back on it now, the whole thing was simultaneously an abomination and years ahead of its time. The security profile would likely give a modern developer night terrors; I'm pretty sure that if you were on a page that was being served by an IIS instance that had COM+ registered, your ActiveXObject could call methods on that object so long as the VBScript had the UUID that identified the COM+ object. End of story. Just wow.
Final detail that my memory retains is that the room full of chain smoking women who operated this system did not use mice. That is, they were entering data almost exclusively through the number pad and furiously slamming the tab key to move to the next field. They would develop intense muscle memory for the 200-300 fields on this form. They needed repeatability or it would literally break them. Thing is, there were many instances where they didn't want to fill fields in the order they are defined in the HTML, so over time about 90% of those form elements gained in-line event handlers, again, written in VBScript, which replaced the built-in tab key handling functionality of the element with whatever was deemed the correct next step in their insanely specific process. If we accidentally broke their expectation, there would be hell to pay.
That is, we would log in to the live server with VNC and live-edit the changes into the codebase while they were on the phone over the remote screen share connection.
There was no source control, and if there were backups, I'm dubious that they would have been restorable.
I wonder if better support for EAV tables would solve this issue.
If one could do "SELECT price,color,year FROM cars! WHERE status='sold'" and the "!" would indicate that cars is an EAV table ...
entity attribute value
1 price 20750
1 color red
1 year 2010
1 status sold
... and the result of the query would be ... 20750 red 2010
That would solve most of the use cases where I have seen people use JSON instead.
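Absent such syntax, the usual workaround is a conditional-aggregation pivot over the EAV rows, which produces the same one-row answer (sketched here with SQLite through Python):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cars (entity INTEGER, attribute TEXT, value TEXT);
    INSERT INTO cars VALUES
        (1, 'price', '20750'), (1, 'color', 'red'),
        (1, 'year', '2010'),   (1, 'status', 'sold');
""")

row = conn.execute("""
    SELECT entity,
           MAX(CASE WHEN attribute = 'price' THEN value END) AS price,
           MAX(CASE WHEN attribute = 'color' THEN value END) AS color,
           MAX(CASE WHEN attribute = 'year'  THEN value END) AS year
    FROM cars
    GROUP BY entity
    HAVING MAX(CASE WHEN attribute = 'status' THEN value END) = 'sold'
""").fetchone()

print(row)  # (1, '20750', 'red', '2010')
```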
How do you pronounce that? Was it like the word munch in “munching on some snacks”?
Or like the name of the painter Edvard Munch? https://www.nrk.no/kultur/nrk-endrer-munch-uttale-1.539667 (note: this link is in Norwegian)
None of this is particularly interesting, unless you have a fascination with archaic file formats and/or an interest in historical and highly idiosyncratic government bureaucracy.
The really interesting (horrifying) thing about the job, though, was the state of the VB6 code which I was asked to translate into well structured and performant Java. There were some really hilarious and nightmare-inducing subroutines like "BunchOfIfs" and "BigSelect"; each of these monsters was several thousand lines long and consisted of exactly what you'd expect. Huge branching structures with absolutely no reasonable organization or design. I'm not even exaggerating to say it was just the accumulated cruft of 10 years of dirty hacks tacked on by the cheapest coders they could find, where the only standard was: if it works, it ships. Literally the worst procedural code you can imagine, with zero factoring or modularization, plenty of duplicated code copied and pasted then modified to do something slightly different than the 3 other versions of similar code elsewhere in the project.
Somehow, after a year of part-time work (20 hours a week, between classes) I managed to produce a working system to translate claims from any one of the weird formats into any other one of the weird formats, including 5 or 6 variants of said formats, each one which violated the spec in a unique way in order to satisfy the strange requirements of some particular insurance company. The Java version was less than 10% the size (lines of code) of the old system, ran 30x faster and produced correct output.
Still to this day it's objectively the most difficult, painstaking, excruciating, but also probably the best, most impressive work I've done. And it's the least I've ever been paid for writing code.
Oh and I forgot to mention, nothing was in source control and there were several variants of the codebase that had been modified slightly to do a similar but different task and then just continued to drift away from the codebase it was copied from.
1. https://en.wikipedia.org/wiki/ASC_X12 2. https://manuals.momed.com/edb_pdf/NSF%20(National%20Standard... 3. https://owcprx.dol.gov/static/UB-92.pdf
Shockingly, this project was notorious for regressions. We were on a weekly(!) update cycle, and we constantly had bugs reappear that had been solved months prior.
This was 2010 or so, and we were using SVN because the project lead didn't trust or understand git. He also didn't understand SVN. The solution to our constant regressions was pretty simple. Instead of merging branches into trunk, we would delete the master branch every week and create a new one out of all of the finished develop branches.
Surprising absolutely nobody apart from that project manager, this plan was a spectacular failure.
He also stole code out of my personal git repos from before I worked there, and bragged about how smart he was for stealing my code. So, y'know, just a general idiot and asshat.
1) Many columns can happen with per-customer customisations to a shared table. The common way is to have a "customcolumns" table with "ID,ColumnName,ColumnValue" linked to the base table that has an "ID" key, but SQL Server also supports this natively now with Sparse Columns: https://learn.microsoft.com/en-us/sql/relational-databases/t...
2) Shared identity or globally sequential numbers have a built-in schema object type now: https://learn.microsoft.com/en-us/sql/t-sql/statements/creat...
3) Manually populated calendar tables are actually a common schema design pattern, especially in manufacturing, shipping, and OLAP reporting systems (see the sketch after this list). This is not that bizarre, it's just a bit weird that it would break logins! These tables can let you define all sorts of things such as international holidays, manufacturing calendars, tax years, finance reporting schedules, etc...
4) Dropping and recreating tables is also common, albeit usually done in a transaction. The fancy way to do this is with partition switching, where a new table is populated from whatever (external data, business logic, etc...) and then instantly swapped for the original without any long-running operations like truncate & insert would. See: https://pragmaticworks.com/blog/table-partitioning-in-sql-se...
5) Delayed reporting replicas ("here was a copy of the database. Data in this copy was about 10 minutes out of date.") is also a common pattern. At this point, the blog author is just complaining about the realities of business databases. Typically you'd have a bunch of read only replicas with different delays: Synchronous for HA failover, Asynchronous for DR failover and real-time reporting, and deliberately delayed on a schedule ETL copies for "point in time" consistent reporting. These would typically be done at 3am or something to minimise inconsistencies in the data. The modern way to do this is a SQL Server Always On readable secondary: https://learn.microsoft.com/en-us/sql/database-engine/availa...
6) "The main codebase I worked in was half VB, half C#." this is also surprisingly common. It's often not worth rewriting old code, there's not enough return on the invested money. These days there are automated conversion tools for VB to C#, such as: https://marketplace.visualstudio.com/items?itemName=SharpDev...
7) The Gilfoyle character is honestly the only thing that stands out here as an actual problem, not just a typical thing that happens in large enterprise bespoke software development.
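To make point 3 above concrete, a calendar (date dimension) table is just pre-populated rows, one per day, carrying whatever business attributes matter; a minimal sketch with invented columns, using SQLite in place of SQL Server:

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE calendar (
        day            TEXT PRIMARY KEY,   -- ISO date
        year           INTEGER,
        month          INTEGER,
        is_weekend     INTEGER,
        is_holiday     INTEGER DEFAULT 0,  -- maintained by hand or from a feed
        fiscal_quarter TEXT
    )
""")

start, years = date(2024, 1, 1), 10        # populate well past "next year"
d = start
while d < date(start.year + years, 1, 1):
    conn.execute(
        "INSERT INTO calendar (day, year, month, is_weekend, fiscal_quarter) "
        "VALUES (?, ?, ?, ?, ?)",
        (d.isoformat(), d.year, d.month, int(d.weekday() >= 5),
         f"FY{d.year}-Q{(d.month - 1) // 3 + 1}"),
    )
    d += timedelta(days=1)
```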