June 28th, 2024

Our great database migration

Shepherd, an insurance pricing company, migrated from SQLite to Postgres to boost performance and scalability for its pricing engine, "Alchemist." The process involved code changes, adopting the Neon database, and optimizing performance post-migration.

The article describes the database migration undertaken by Shepherd, a company specializing in insurance pricing, from SQLite to Postgres to improve performance and scalability. The pricing engine, named "Alchemist," had to handle increasingly complex pricing models, and the decision to migrate was driven by the company's expanding operations and plans to introduce new insurance products in 2024. The migration involved selecting Neon as the database provider, changing code to fit a serverless architecture, and automating data-handling processes to improve the developer experience. After hitting latency issues post-migration, the team optimized performance by keeping the database server close to the application server, adding caching, and parallelizing functions. The article also emphasizes maintaining data transparency, ensuring compatibility with existing systems, and improving developer efficiency throughout the migration.
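
The last of those optimizations, parallelizing functions, is easy to picture with a small sketch. The Python below is purely illustrative and not Shepherd's code; the asyncpg pool, table names, and queries are all assumptions.

```python
import asyncio
import asyncpg

async def price_submission(pool: asyncpg.Pool, class_code: str, state: str) -> dict:
    async def fetch(query: str, *args):
        async with pool.acquire() as conn:
            return await conn.fetch(query, *args)

    # The two lookups are independent, so issue them concurrently instead of
    # paying one network round trip after another.
    base_rates, state_factors = await asyncio.gather(
        fetch("SELECT * FROM base_rates WHERE class_code = $1", class_code),
        fetch("SELECT * FROM state_factors WHERE state = $1", state),
    )
    return {"base_rates": base_rates, "state_factors": state_factors}

async def main() -> None:
    pool = await asyncpg.create_pool("postgresql://user:pass@host/db")
    try:
        print(await price_submission(pool, "91580", "CA"))
    finally:
        await pool.close()

# asyncio.run(main())
```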

Related

Schema changes and the Postgres lock queue

Schema changes in Postgres can cause downtime due to locking issues. Tools like pgroll help manage migrations by handling lock acquisition failures, preventing application unavailability. Setting lock_timeout on DDL statements is crucial for smooth schema changes.
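
To make the lock_timeout point concrete, here is a minimal sketch using psycopg2; the connection string, table, and column are hypothetical.

```python
import psycopg2

conn = psycopg2.connect("postgresql://localhost/app")
with conn, conn.cursor() as cur:
    # Fail fast instead of queueing behind a long-running query and
    # blocking every reader and writer that arrives after us.
    cur.execute("SET lock_timeout = '2s'")
    cur.execute("ALTER TABLE policies ADD COLUMN effective_date date")
```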

Postgres Schema Changes and Locking

Schema changes in Postgres can cause downtime by locking out reads and writes. Migration tools help mitigate issues. Breakages during migrations can affect client apps or lock database objects, leading to unavailability. Long queries with DDL statements can block operations. Setting lock_timeout on DDL statements can prevent queuing. Tools like pgroll offer backoff and retry strategies for lock acquisition failures. Understanding schema changes and DDL impact helps ensure smoother migrations and less downtime.
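
A rough sketch of the backoff-and-retry idea, in the spirit of what pgroll does rather than its actual implementation; the DSN and DDL statement are placeholders.

```python
import time
import psycopg2
from psycopg2 import errors

def run_ddl_with_retry(dsn: str, ddl: str, attempts: int = 5) -> None:
    delay = 0.5
    for _ in range(attempts):
        conn = psycopg2.connect(dsn)
        try:
            with conn, conn.cursor() as cur:
                cur.execute("SET lock_timeout = '2s'")
                cur.execute(ddl)
                return
        except errors.LockNotAvailable:
            # Could not get the lock in time; back off so we don't sit in
            # the lock queue blocking other traffic, then try again.
            time.sleep(delay)
            delay *= 2
        finally:
            conn.close()
    raise RuntimeError(f"gave up after {attempts} attempts: {ddl}")

# run_ddl_with_retry("postgresql://localhost/app",
#                    "ALTER TABLE policies ADD COLUMN effective_date date")
```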

Supabase (YC S20) Is Hiring Postgres SREs

Supabase seeks a Site Reliability Engineer to manage Postgres databases remotely. Responsibilities include enhancing reliability, ensuring high availability, and optimizing performance. Ideal candidates possess multi-tenant database experience, Postgres tools proficiency, and AWS deployment skills. Benefits include remote work, equity, health coverage, and tech allowance.

PostgreSQL Statistics, Indexes, and Pareto Data Distributions

Close's Dialer system faced challenges due to data growth affecting performance. Adjusting PostgreSQL statistics targets and separating datasets improved performance. Tips include managing dead rows and optimizing indexes for efficient operation.
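
A minimal sketch of the statistics-target adjustment, with a hypothetical table and column standing in for Close's schema.

```python
import psycopg2

conn = psycopg2.connect("postgresql://localhost/app")
with conn, conn.cursor() as cur:
    # Sample more values for a heavily skewed column so the planner's
    # histogram captures the long tail, then refresh the statistics.
    cur.execute("ALTER TABLE calls ALTER COLUMN lead_id SET STATISTICS 1000")
    cur.execute("ANALYZE calls")
```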

Serving a billion web requests with boring code

The author shares insights from redesigning the Medicare Plan Compare website for the US government, focusing on stability and simplicity using technologies like Postgres, Golang, and React. Collaboration and dedication were key to success.

23 comments
By @morgante - 4 months
> bundling an 80MB+ SQLite file to our codebase slowed down the entire Github repository and hindered us from considering more robust hosting platforms

This seems like a decent reason to stop committing the database to GitHub, but not a reason to move off SQLite.

If you have a small, read-only workload, SQLite is very hard to beat. You can embed it ~everywhere without any network latency.

I'm not sure why they wouldn't just switch to uploading it to S3. Heck, if you really want a vendor involved that's basically what https://turso.tech/ has productized.
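
A minimal sketch of that S3 approach, with a made-up bucket and key: download the SQLite file once at startup and open it read-only, so every query stays a local file read.

```python
import sqlite3
import boto3

# Fetch the latest pricing database at startup (bucket and key are hypothetical).
s3 = boto3.client("s3")
s3.download_file("pricing-artifacts", "alchemist/pricing.db", "/tmp/pricing.db")

# Open read-only: no network hop per query, no write locking to worry about.
conn = sqlite3.connect("file:/tmp/pricing.db?mode=ro", uri=True)
print(conn.execute("SELECT count(*) FROM sqlite_master").fetchone())
```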

By @dcmatt - 4 months
"Overall, this migration proved to be a massive success" but their metrics shows this migration resulted in, on average, slower response times. Wouldn't this suggest the migration was not successful. Postgres can be insanely fast, and given the volume of data this post suggests, it baffles me that the performance is so bad.
By @cmnzs - 4 months
What a bizarre article… performance ended up being worse, so how can that be considered a resounding success? Doesn’t seem like it’s a slam-dunk case for using Neon.
By @simonw - 4 months
Lots of comments about the drop in performance. No matter how well you tune PostgreSQL over the network, it's going to have trouble coming close to the performance you can get from a read-only 80MB SQLite file.

They didn't make this change for performance reasons.

By @hobobaggins - 4 months
If most queries take ~ 1s on a relatively small 80MB dataset, then it sounds to me like they really needed to run EXPLAIN on their most complex queries and then tune their indexes to match.

They could have probably stayed with SQLite, in fact, because most likely it's a serious indexing problem, and then found a better way to distribute the 80MB file rather than committing it to Github. (Although there are worse ideas, esp with LFS)
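
A minimal sketch of that EXPLAIN-then-index loop in SQLite; the table, columns, and query are hypothetical, not from the article.

```python
import sqlite3

conn = sqlite3.connect("pricing.db")
query = "SELECT rate FROM rate_table WHERE class_code = ? AND state = ?"

# "SCAN rate_table" in the output means a full table scan;
# "SEARCH ... USING INDEX" is what we want to see.
for row in conn.execute("EXPLAIN QUERY PLAN " + query, ("91580", "CA")):
    print(row)

# If the plan shows a scan, add a matching index and re-run the EXPLAIN.
conn.execute(
    "CREATE INDEX IF NOT EXISTS idx_rate_lookup ON rate_table (class_code, state)"
)
```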

By @skeeter2020 - 4 months
I don't see any mention of the data size or volume of transactions. Also, your API response times were worse after you finished and optimized, and that's a success? Or are you comparing historical SQLite vs new PostgreSQL? I kinda see this more as a rewrite than a database migration (which I'm going through now, from SQL Server to PostgreSQL).
By @willsmith72 - 4 months
> 79.15% of our pricing operations averaged 1 second or less response time

These numbers are thrown out there like they're supposed to be impressive. They must be doing some really complex stuff to justify that. For a web server to have a p79 of 1 second is generally terrible.

> 79.01% to average 2 seconds or less

And after the migration it gets FAR worse.

I get that it's a finance product, but from what they wrote it doesn't seem like a large dataset. How is this the best performance they're getting?

Also a migration where your p79 (p-anything) doubled is a gigantic failure in my books.

I guess latency really mustn't be critical to their product.

By @chrisandchris - 4 months
> Ensure database is in same region as application server

People tend to forget that using The Cloud (tm) still means there's copper between a database server and an application server, and that physics still applies.

By @shrubble - 4 months
If it is a read-only database, I don't fully understand where all the latency is coming from. Is it complex SQL queries?
By @ed_elliott_asc - 4 months
This post is 100% marketing: “oh, we had so few customers that SQLite was great, but now we need Postgres.” Ignore it.
By @kwillets - 4 months
The latency before/after histograms unfortunately use different scales, but it appears that, e.g., the under-200ms bucket is only a few percentage points smaller after the change, maybe 38 before and 33 after.

What I'm curious about is whether Neon can run pg locally on the app server. The company's SaaS model doesn't seem to support that, but it looks technically doable, particularly with a read-only workload.

By @cpursley - 4 months
If they had started with Elixir and Postgres from the get-go, all this could have been avoided, including the async pains. Said another way: don’t write your backend in JS and just use Postgres.
By @apithowaway - 4 months
Where is the CTO or senior technical leader in this? The team seems to be trying hard and keeping the lights on, but honestly there are several red flags here. I’m especially skeptical about the painful and complex manual process that is now 1-click. I want to hope they succeed, but this sounds awfully naive.
By @banish-m4 - 4 months
PSA: If you're running a business and some databases store vital customer or financial data, consider EnterpriseDB (EDB). It funds Postgres and can be used almost like Oracle DBMS. And definitely send encrypted differential backups to Tarsnap for really important data.
By @hipadev23 - 4 months
Shepherd raised $13.5M earlier this year. Imagine being an investor in this company and seeing this post. They seriously wrote a lengthy post publicizing their struggles with an 80MB database and running some queries. The entire technical team at this company needs to be jettisoned.

These are the sort of technical struggles a high school student learning programming encounters. Not a well-funded series A startup. This is absolutely bonkers.

By @pantsforbirds - 4 months
I wonder if DuckDB with parquet storage on S3 (or equivalent) would have been a nice drop-in replacement. Plus DuckDB probably would have done quite well in the ETL pipeline.
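
A rough sketch of that idea with the DuckDB Python client; the bucket path is made up, and region/credential setup varies by DuckDB version (newer releases prefer CREATE SECRET over SET).

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs; LOAD httpfs;")
con.execute("SET s3_region = 'us-east-1'")

# Query Parquet files straight off S3; no database server to run or migrate.
rows = con.execute(
    "SELECT class_code, avg(rate) "
    "FROM read_parquet('s3://pricing-artifacts/rates/*.parquet') "
    "GROUP BY class_code"
).fetchall()
print(rows[:5])
```
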
By @zitterbewegung - 4 months
Not to be negative, but it seems like many tech posts like this are thinly veiled hiring/recruitment blog posts.
By @pm2222 - 4 months
Does the SQLite Java lib bundle native support for many platforms, which jacks up the app size?
By @pocketarc - 4 months
> Furthermore, bundling an 80MB+ SQLite file to our codebase slowed down the entire Github repository and hindered us from considering more robust hosting platforms.

It's... an 80MB database. It couldn't be smaller. There are local apps that have DBs bigger than that. There is no scale issue here.

And... it's committed to GitHub instead of just living somewhere. And they switched to Neon.

To me, this screams "we don't know backend and we refuse to learn".

To their credit, I will say this: They clearly were in a situation like: "we have no backend, we have nowhere to store a DB, but we need to store this data, what do we do?" and someone came up with "store it in git and that way it's deployed and available to the app". That's... clever. Even if terrible.

By @zie - 4 months
It's more complicated and slower but it's still a "success". LOL.
By @sgt101 - 4 months
You. Were. Running. An. Insurance. Company. On. SQLite?

What?

What possessed them?