October 6th, 2024

The Importance of Local Development

Local development is crucial for efficient software development, enhancing both productivity and developer satisfaction. Challenges like running Twitter locally underscore the need for better configurations, while modern tools make it simpler to create effective environments.


Local development plays a vital role in software development by facilitating faster iteration, improved debugging, and consistency between local and production environments. A positive developer experience (DX) enhances productivity, reduces cognitive load, and boosts developer satisfaction, which in turn leads to higher code quality, quicker onboarding, and better talent retention. The article references a discussion involving Elon Musk, highlighting the challenges of running Twitter locally compared to Facebook, which can be fully operated on a laptop. This illustrates the importance of a well-configured local development environment. The author notes that while individual developers may find it easier to manage bespoke setups, the concept of DX-debt—similar to technical debt—deserves more attention. Modern tools like Draft, Skaffold, Tilt, and Garden have made it simpler to create local clusters or pseudo-local environments, making it easier than ever to enhance developer experience. The author invites readers to consider the significance of local development in their own work.

- Local development is essential for efficient software development.

- A good developer experience leads to increased productivity and satisfaction.

- The challenges of running complex systems like Twitter locally highlight the need for better configurations.

- DX-debt is an important but often overlooked concept in software development.

- Modern tools facilitate the creation of effective local development environments.

25 comments
By @benreesman - 7 months
There are lots of reasons to want a robust local development environment, but a paved path or sometimes even the possibility of "consistency between environments" is not one of them dear sir or madame.

The JankStack in which one piles Python environment jank on top of Ubuntu/Darwin jank, and piles Docker jank on top of the previous, and piles Docker Compose jank on the previous, until you finally arrive at Jank-As-A-Service via something like ECS or EKS gives a terrifyingly comforting illusion of such with roughly the risk profile of speedballing hard drugs while simultaneously free climbing El Capitan. It has the nice ancillary benefit of subsidizing some combination of mega-yachts and private space programs, so that's cool I guess.

Sooner or later you're going to link CUDA, or glibc, or some other thing that just doesn't play like that. And then you are capital-F fucked if you didn't invest early on in some heavy-metal hermetic shit and some gazed-into-a-Palantir foresight around feature flags.

By @cletus - 7 months
I live by this principle: If it takes you more than 30 seconds to test a change, you're going to have a huge productivity drain.

That means writing a test should be easy and running it should be fast (including any compilation steps). As soon as something takes more than 30 seconds, you've lost a lot of people. They've switched tabs. They're on HN or reddit or they've pulled out their phone.

You've broken the flow.

Some people can work effectively in an environment where it takes 1-10+ minutes to build something and then you run enough code to test a bunch of changes at once. You might even have multiple clients open at once and switch between them. This doesn't suit me and it doesn't suit a lot of people I've worked with.

Where does local storage fit in? Any test you write will probably need data. You don't want to mock out your data storage. You just want to use the same API and have it be backed by a hash map (or whatever) and have it easy to populate that with data.

Once you have that, local data for something interactive like a website becomes a natural extension.
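A minimal sketch of that idea in TypeScript (the interface and names here are hypothetical, just to show the shape of a hash-map-backed store behind the same API):

```typescript
// Hypothetical storage interface; in practice it would mirror whatever
// API the production storage client already exposes.
interface KeyValueStore {
  get(key: string): Promise<string | undefined>;
  put(key: string, value: string): Promise<void>;
}

// Local/test implementation backed by a plain Map instead of a real database.
class InMemoryStore implements KeyValueStore {
  private data = new Map<string, string>();

  async get(key: string): Promise<string | undefined> {
    return this.data.get(key);
  }

  async put(key: string, value: string): Promise<void> {
    this.data.set(key, value);
  }
}

// Populating test data becomes trivial:
//   const store = new InMemoryStore();
//   await store.put("user:1", JSON.stringify({ name: "Ada" }));
```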

By @sovietmudkipz - 7 months
At my work senior leads and architects always shoot down being able to develop locally, especially when devs bring up wanting the capability. The critique is some variation of “it would be impossible to run all services locally” or “it would be too expensive.”

So we develop the code and deploy into QA, often breaking things. Cycle times are measured in hours to days. Breaking QA is sometimes a fireable offense. Lol.

The leads and architects are correct in my case; it would be impossible and too expensive to do. This is because our services are built on hundreds of bespoke manual configurations made along the way to production. Discovering these and pulling them into code/automation would be a months- or year-long project in itself.

That said there are ways of developing locally without running everything locally. Pull in the code of the service you want to work on locally and just point to QA for its dependencies. Most times it takes some finagling of code but it usually works.
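As a rough illustration of that pattern (the service names, env variables, and URLs below are made up), dependency endpoints can be driven by configuration, so the one service you are editing runs locally while everything else stays in QA:

```typescript
// Hypothetical config: the service under development runs locally, while its
// dependencies default to the shared QA environment unless overridden.
const deps = {
  orders: process.env.ORDERS_URL ?? "https://orders.qa.example.internal",
  billing: process.env.BILLING_URL ?? "https://billing.qa.example.internal",
};

export async function fetchOrder(id: string) {
  // Same code path locally and in QA; only the base URL differs.
  const res = await fetch(`${deps.orders}/orders/${encodeURIComponent(id)}`);
  if (!res.ok) throw new Error(`orders service returned ${res.status}`);
  return res.json();
}
```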

Even if everything was running locally, often generating usable data is the biggest barrier. In QA thousands of hands have generated partly useable data. It’s a hard problem to solve since I don’t want to have to know about data requirements for every service.

By @austin-cheney - 7 months
The article does not provide numbers. The reason local development is important is speed, like the article says, but the article supremely undersells this. We could easily be talking about several orders of magnitude of performance improvement.

When I first joined travel company Orbitz they had a build that took just over 90 minutes. Of course it was a Java-first environment, so if I needed to test a small one-line change to the UI (that had absolutely nothing to do with Java) I still had to wait 90 minutes. So, I just planned on doing nothing all day.

In my personal software I start crying if my builds take longer than 12 seconds. The difference is that I really enjoyed getting paid to watch YouTube for 90-minute stretches 5 or 6 times a day. It wasn't my time. It was their time. With my own software, though, it absolutely is my time.

Small improvements to trivial items are cost savings that add up. It's like a flash flood. A raindrop isn't going to drown you. A bunch of raindrops are still insignificant, but when there are too many raindrops you are underwater.

Just think about how floods work, because it's still just tiny raindrops. One increase does almost nothing on its own, but it enables other things to occur more frequently. When that happens everywhere you have a performance flood. Suddenly you can test automate several hundred features of your application end-to-end in less than 10 seconds. When that does occur it changes people's behavior and their perceptions of risk, because now all options are discoverable in a nearly cost-free manner for everyone. You will never ever get to experience that freedom if you are drowning in dependency and framework stupidity.

By @simonw - 7 months
I go back and forth on this.

On the one hand, maintaining local development environments that work reliably across a larger team of developers is a HUGE cost in terms of time and effort. Developers are amazing at breaking their environments in unexpected ways. It's common for teams to lose many hours a week to problems related to this.

So I love the idea of cloud-based development environments where, if something breaks, a developer can click a button on a website, wait a few minutes and have a fresh working environment ready to go again.

But… modern laptops are getting SO fast now. It's increasingly difficult to provide a cloud-based environment that's remotely as performant as an M2 or M3 laptop.

By @plaguuuuuu - 7 months
Local development is extremely common in my area because the tooling is so good (dotnet).

However the most productive I've been was on a team where local development, even on a single service, was eschewed for the most part. We had tests that would cover 90+% of the codebase - more importantly nearly 100% of what was worth testing - and would routinely deploy things into staging from master that had never even been executed as a whole app, let alone run in anger and tested with live dependencies. The coverage was good enough that everything actually held together really well.

I'd never shipped anything that efficiently before or since; it totally changed my view of TDD from a time-consuming but safe and conservative regime to something that, in my experience, dramatically speeds up iteration.

The only thing that started gnawing at my brain (while admittedly operating a far smaller constellation of distributed services than, say, Facebook) was that there is no way of unit testing (or even statically verifying with TLA+ or something) the wider-scale structure of services, at least that I'm aware of. At some point I might knock something together involving specs and code generation but I dunno.

By @Dansvidania - 7 months
Good local dev-env is vital for productivity, but even more so for dev satisfaction and engagement.

I have been arguing with myself on what a good dev-env is for as long as I have been working in enterprise software dev which by now is about 10 years.

All I know is that attempts to reproduce prod via tilt and similar have not been successful (I have witnessed 3 attempts). The promise of "one dev env for all teams" quickly falls apart.

The main problems with this are IMO:

1. There is a false sense of encapsulation of complexity: you don't need to know how the components of non-owned services work, until something breaks (and break it will), and then you really do.

2. #1 + docker + k8s + kafka + graphql etc. make complexity seem very cheap.

3. Add a minimum of deadline pressure on teams, and quickly they will stop caring about keeping the dev-env images up-to-date and/or working.

I would rather have intimate familiarity with what my services depend on, which is much easier to get by running these dependencies somewhat manually, close to natively. You can be sure that your colleagues will come knocking if complexity is not respected.

But this seems hard to package as a product...

By @komali2 - 7 months
If you can't build and run the full environment locally, even in a diminished state, how can you debug the production stack? I'm always advocating for ridiculously fleshed out readmes that describe in detail how to run every part of our stack locally, how to deploy it, and how it's running on the prod servers. If I die I want one of the juniors the next day able to report on how to do anything I do.
By @bsnnkv - 7 months
Whenever I see a flake.nix in a project I'm more likely to contribute because I know I'm not going to get stuck in dependency hell before I can even test a single change.

Reproducible builds are what I've found to be the most important part of any local dev experience; standing up local databases, message queues etc required for whatever is being developed has always been relatively simple in comparison.

By @jawns - 7 months
Local development works great ... until it doesn't. And then getting it to scale past a certain point is excruciating.

My favorite argument against local development, however, is that isolation is a bug, not a feature.

When I want to show another developer what I've built, or get help debugging an issue, I don't want to have to call them over to my laptop or do a screensharing session. I want them to have access to the same machine that I have access to, with the same data and configuration, and having cloud-based dev boxes enables that.

By @CharlieDigital - 7 months
For this reason, I strongly recommend Google Firebase.

The local emulator suite is one of the best I've seen[0] (would love to see others). Powerful and easy to set up[1].

It includes a top notch emulator for auth which gives you a full SSO flow as if going through Google's OAuth flow and makes it easy to get otherwise complicated auth flows nailed down. The database and Functions runtime emulators are excellent and make it easy to prototype and ship. Comes with a Pub/Sub emulator to boot for more complex async processes. You can export the emulator state to disk so you can share it or bring it back up into the same state.
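For context, pointing a web client at the emulator suite is a few calls in the Firebase JS SDK. This is only a sketch: the project id and API key are placeholders, and the ports are the usual defaults that should be checked against firebase.json:

```typescript
import { initializeApp } from "firebase/app";
import { getAuth, connectAuthEmulator } from "firebase/auth";
import { getFirestore, connectFirestoreEmulator } from "firebase/firestore";
import { getFunctions, connectFunctionsEmulator } from "firebase/functions";

// Placeholder config for emulator-only local development.
const app = initializeApp({ projectId: "demo-local", apiKey: "fake-api-key" });

// Route each SDK at the local emulators instead of production services.
connectAuthEmulator(getAuth(app), "http://127.0.0.1:9099");
connectFirestoreEmulator(getFirestore(app), "127.0.0.1", 8080);
connectFunctionsEmulator(getFunctions(app), "127.0.0.1", 5001);
```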

If you need to interface with relational DBs, you just use a Pg or MySQL container.

Really phenomenal and would love to find other stacks with a similarly solid emulator suite. It's a strong recommend from me for any teams that value speed because it really allows much, much faster iteration.

Edit: Dear GCP team, please, please - never kill this thing.

[0] https://firebase.google.com/docs/emulator-suite

[1] https://github.com/CharlieDigital/coderev/blob/main/web/util...

By @tmountain - 7 months
My team uses Supabase as our primary “backend” (API layer, auth, subscriptions, queuing, etc). It runs beautifully locally via the Supabase cli. We use migrations to sync the local DB to our production DB. Our app is written in NextJS. I can go from a fresh install of MacOS or Linux to running our app in less than 15 minutes. This has given us a huge advantage when testing, onboarding, and debugging weird issues in the production app.
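As a hedged sketch of what that looks like in code (the env variable names follow a common Next.js convention and the table is hypothetical), the application code is identical locally and in production; only the URL and key handed to the client change:

```typescript
import { createClient } from "@supabase/supabase-js";

// Locally, `supabase start` prints an API URL and anon key; in production
// these come from the hosted project's settings. The app code doesn't care.
const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,     // e.g. a localhost URL when running the CLI
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY! // anon key printed by `supabase start`
);

// Same query against the locally migrated/seeded DB or the production one.
const { data, error } = await supabase.from("profiles").select("*").limit(10);
if (error) throw error;
console.log(data);
```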
By @bob1029 - 7 months
I'm fighting with a variant of this right now - Attempting to build a multiplayer game with dedicated servers & master server authority. I've got 4 computers involved to run the full dev iteration (2 for 1v1 clients, 1 for master server and 1 for dedicated server host). The clients I am running in my local LAN with both servers in AWS.

Attempting to do all of this on the local machine will mostly work, but it fails to exercise a lot of the networking concerns (public IP detection, port assignment, etc.), and weird edges crop up as latency grows beyond 0ms. It also makes it impossible to test with other players on other LANs without reaching for complicated networking setups that add even more confusion when things go wrong.

I could write a bunch of bandaid "if editor attached" code throughout, but I also like the idea that I am testing the final thing on the ~final hardware and there isn't going to be any weird dragon fight after this.

By @garydevenay - 7 months
Not having a local development env is a total productivity burner for me.
By @wiradikusuma - 7 months
Thanks to Docker, whenever I start a new project or team, I always ensure that everything can be run locally (the DB, Redis, services, website, and mobile app). But it's hard to be disciplined, especially when reproducing bugs, so developers usually end up using a "shared" test server.

Also, these days, I use Cloudflare more and more. They're very affordable, and deployment is a breeze for the simplest cases. But local development seems to be an afterthought. I built a service that uses some of their dev offerings. Some work locally (using Miniflare), and some can only work remotely (dev environment in your Cloudflare account). Imagine when you need both kinds of offerings!
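For what it's worth, the pieces that do run locally are the plain Worker handlers; a minimal module-syntax Worker like the sketch below runs under `wrangler dev` (Miniflare), while bindings to offerings without local emulation are where development has to fall back to the remote environment:

```typescript
// Minimal Cloudflare Worker in module syntax; `wrangler dev` can serve this
// locally via Miniflare. Bindings declared in wrangler.toml are where the
// local-vs-remote-only split described above starts to matter.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    return new Response(`hello from ${url.pathname} (local dev)`);
  },
};
```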

By @Pxtl - 7 months
Imho this is a problem with many DevOps pipelines - we use Azure DevOps and the inability to run the Azure pipeline yaml files locally means I end up just writing all our deployment as PowerShell scripts and the DO pipeline just calls them.

Local deployment is not negotiable.

And I still come back to Spolsky's "Joel Test": there must be one command, with no extra steps, that you run to get a running version of the software on your local developer machine.

By @rcxdude - 7 months
I do think most of the comments here are missing the core point, which isn't so much about whether your development environment is running on your laptop or in the cloud, but more that it's possible to stand up your own environment with a functional version of your product. It's distressingly common that this isn't possible, and you can only test and integrate your code in a single shared development environment, or worse, in production, and neither of these is actually reproducible.
By @ehaikawaii - 7 months
If you have a local development environment that actually works, you have already done the hardest piece:

Identified every single thing needed to stand up a functional version of your app and made the rails to do so in a repeatable fashion.

You should do that anyway, and whether the dev environment is true local or runs using some special k8s hooks to some dev clusters or what have you is immaterial.

By @Ozzie_osman - 7 months
Agreed. I've seen this referred to as inner loop (local dev environment and work flow) and outer loop (deployment, production, etc). A team needs to optimize both loops, though each should have slightly different priorities in terms of speed, safety mechanics, etc.
By @FpUser - 7 months
>"Nowadays, there are options such as Draft, Skaffold, Tilt or Garden to spin-up entire clusters locally, or provide pseudo-local development environments. It’s never been easier to provide a great developer experience. What do you think?"

I think fuck it. I run my own company where I develop software for clients and the last thing I need is for my environment and tools to be controlled / selected by some corporate imbecile.

By @anonzzzies - 7 months
We went remote-only dev years ago; definitely never going back. It's such a pleasure to always have the same stuff everywhere, no matter what and on any device.
By @keybored - 7 months
[deleted]
By @keepamovin - 7 months
I concur with this. Developing BrowserBox locally is useful for many reasons.

Stress testing the application by stretching it like a rubber sheet to wrap around as many different operating systems as possible is a useful way to iron out various bugs that affect more than one system but may not have been triggered in an easier development process.

Running the application locally is also a way that many people first download and try it, so ensuring a reasonable experience on a laptop is quite important. Iterating on front-end code with a TLS certificate from mkcert provides access to all the JavaScript APIs you’d expect to see on the public internet under TLS.
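A small sketch of the mkcert part (file paths and port are only examples): a local HTTPS server using a mkcert-issued certificate, so that secure-context-only browser APIs behave as they do on the public internet:

```typescript
import { readFileSync } from "node:fs";
import { createServer } from "node:https";

// Certificate and key generated locally, e.g. with `mkcert localhost`
// (the file names below are what mkcert typically produces; adjust as needed).
const options = {
  key: readFileSync("./localhost-key.pem"),
  cert: readFileSync("./localhost.pem"),
};

// Serving over HTTPS on localhost gives the page a secure context, so the
// JavaScript APIs gated behind TLS work the same as in production.
createServer(options, (req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("hello over local TLS\n");
}).listen(8443);
```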

Running the browser back-end on different OS and architecture targets is a good way to control for or “fuzz against” the quirks you might see in interactions between the operating system, file system layout, the behavior of common Linux-style command line utilities, and the creation of different users and running processes as them. Many of these things have slightly different behaviors across different operating systems, and stretching BrowserBox across those various targets has been one of the strong methods for building stability over time.

My respect for Bash as the lingua franca of scripting languages has grown immensely during this time, and I’ve felt validated by the Unix-style approach where commonly available tools are combined to handle much of the OS-level work. Adaptations are made as needed for different targets, while a lot of the business logic is handled in Node.js. Essentially, this approach uses Bash and Linux philosophy as the glue that sticks the Node.js application layer to the substrate of the target operating system. I made this choice a long time ago, and I’m very satisfied with it. I increasingly feel that it’s the validated approach because new features requiring interaction with the operating system, such as extensions or audio, have been well-supported by this design choice for building the remote browser isolation application.

An alternative approach might be to stick everything into first-class programming languages, seeking a Node library for each requirement and wrapping anything not directly supported in a C binding or wrapper. But I’ve never found that practical. Node is fantastic for the evented part of the application, allowing data to move around quickly. However, there are many touchpoints between the application and the operating system, which are useful to track or leverage. These include benefits like security isolation from user space, permissions and access control, and process isolation. The concept of a multi-user application integrated with a Unix-style multi-user server environment has been advantageous. The abundance of stable, well-established tools that perform many useful tasks in Unix, and targets that can run those tools, has been immensely helpful.

On the front end, the philosophy is to stay at the bleeding edge of the latest web features but only to use them once they become generally available across all major browsers—a discipline that is easier to maintain these days as browser vendors more frequently roll out major, useful features. There’s also a policy of keeping top-level dependencies to a minimum. Essentially, the focus is on the Node runtime, some WebRTC and WebSocket libraries, and HTTP server components. Most of the Node-side dependencies are actually sub-dependencies and not top-level. A lot of dependencies are at the operating system level, allowing us to benefit from the security and stability maintained by multiple package ecosystems. I think this is a sound approach.

Porting everything to a Windows PowerShell-type environment was a fascinating exercise. For the front end, having virtually no dependencies except some in-house tools fosters faster iteration, reduces the risk of breaking changes from external sources, and minimizes security risks from frequently updated libraries with thousands of users and contributors. Some of the ways we’ve approached security by design and stability by design include adopting a development process that is local-first but local-first across a range of different targets.

By @bagels - 7 months
You can't run Facebook on a laptop either.