June 25th, 2024

Local First, Forever

Local-first software emphasizes storing data on the user's device, syncing with the internet only occasionally. The benefits include user control over data, but challenges remain, such as syncing between devices and the risk that the company behind the software disappears. The article recommends commodity cloud file-syncing services like Dropbox, with CRDTs for conflict handling, for simplicity and reliability.

Read original article
Local First, Forever

The article discusses the concept of local-first software, which prioritizes keeping data on the user's device while occasionally syncing with the internet. The author highlights the benefits of local-first software for end users but points out challenges around syncing data between devices and the risk that the company providing the software goes out of business. The proposed solution is to use widely available cloud-based file-syncing services like Dropbox to ensure continuous syncing and data accessibility. The article explores several approaches to implementing syncing, including using Conflict-free Replicated Data Types (CRDTs) to handle conflicts efficiently. It emphasizes the simplicity and reliability of basic file-sync services for local-first applications, even though they lack advanced features found in custom solutions. The conclusion is that while basic file-sync services may not offer real-time syncing, they provide a practical and reliable solution for casual sync needs, ensuring data availability across devices.

64 comments
By @wim - 4 months
I think an important requirement for making the "forever" aspect of local-first possible is to make the backend sync server available for local self-hosting.

For example, we're building a local-first multiplayer "IDE for tasks and notes" [1] where simply syncing flat files won't work well for certain features we want to offer like real-time collaboration, permission controls and so on.

In our case we'll simply allow users to "eject" at any time: they can save their "workspace.zip" (which contains all state serialized into flat files), download a "server.exe/.bin", and switch to self-hosting the backend if they want (or vice versa).

[1] https://thymer.com/

By @braden-lk - 4 months
I love local first, but I struggle with how to monetize truly local-first applications. I know that's not everyone's favorite topic, but I've got bills to pay and payroll to make. Our product is about 80% local-first, with live collaboration and image hosting needing a server. I plan to change that eventually, but I worry that jailbreaking the app in this way will lead to financial troubles.

Obsidian's model seems nice: base app is free, and then payment for the networked portions like sync+publish. However, there's little data available on how well this works and how big of a TAM you need to make it sustainable. Or if it's even possible without an enterprise revenue channel.

For those interested in building robust local-first + collaborative apps, I've been using Yjs for a few years now and have overall really enjoyed it. Multi-master collaboration also poses some stimulating technical and design challenges if you're looking for new frontiers beyond the traditional client-server model.

By @fearthetelomere - 4 months
A bit of an aside, but CRDTs are not always the best approach to solving the local-first distributed consistency problem. For the specific given example of syncing files it might make sense, but I'm starting to see CRDTs used in places they don't need to be.

Where is your ground truth? How collaborative is a given resource? How are merge conflicts (or any overlapping interactions) handled? Depending on your answers, CRDTs might be the wrong tool.

Please don't forget about straightforward replicated state machines. They can be very easy to reason about and scale, although they require bespoke implementations. A centralized server can validate and enforce business logic, solve merge conflicts, etc. Figma uses a centralized server because their ground truth may not be local.[1]

If you try a decentralized state machine approach the implementation is undoubtedly going to be more complex and difficult to maintain. However, depending on your data and interaction patterns, they still might be the better choice over CRDTs.

It could be argued that even for this example, two local-first clients editing the same file should not be automatically merged with a CRDT. One could make the case that the slower client should rename their file (fork it), merge any conflicts, or overwrite the file altogether. A centralized server could enforce these rules and further propagate state changes after resolution.
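
As a minimal sketch of that centralized alternative (all names here are hypothetical, not from any particular product), the server can validate each write against the version it was based on and force the slower client to fork, merge, or overwrite explicitly:

  // Minimal sketch of server-side conflict enforcement (names invented).
  type Doc = { id: string; version: number; body: string };

  type SaveResult =
    | { ok: true; doc: Doc }
    | { ok: false; reason: "conflict"; current: Doc };

  const store = new Map<string, Doc>();

  // The server is the ground truth: a write based on a stale version is
  // rejected, and the client must fork, merge, or explicitly overwrite.
  function save(id: string, baseVersion: number, body: string): SaveResult {
    const current = store.get(id);
    if (current && current.version !== baseVersion) {
      return { ok: false, reason: "conflict", current };
    }
    const doc: Doc = { id, version: (current?.version ?? 0) + 1, body };
    store.set(id, doc);
    return { ok: true, doc };
  }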

[1] https://www.figma.com/blog/how-figmas-multiplayer-technology...

By @SushiHippie - 4 months
https://remotestorage.io/ was a protocol intended for this.

IIRC the vision was that all applications could implement this, and you could provide each application with your remotestorage URL, which you could self-host.

I looked into this some time ago, as I was fed up with WebDAV being the only viable open protocol for file shares/synchronization (especially after hosting my own NextCloud instance, which OOMed because the XML blobs it wanted to create as a response for a large folder used too much memory), and I found it through this gist [0], which was a statement about Flock [1] shutting down.

It looks like a cool and not that complex protocol, but all the implementations seem to be unmaintained.

And the official JavaScript client [2] ironically seems to be used mostly to access Google Drive or Dropbox (a rough usage sketch follows the links below).

Remotestorage also has an internet draft https://datatracker.ietf.org/doc/draft-dejong-remotestorage/ which is relatively easy to understand and not very long.

[0] https://gist.github.com/rhodey/873ae9d527d8d2a38213

[1] https://github.com/signalapp/Flock

[2] https://github.com/remotestorage/remotestorage.js
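
For flavor, here is roughly what basic remotestorage.js [2] usage looks like; this is a sketch based on its documentation, so treat the details as approximate:

  import RemoteStorage from "remotestoragejs";

  async function demo() {
    const remoteStorage = new RemoteStorage();

    // Ask for read/write access to an app-scoped subtree of the user's storage.
    remoteStorage.access.claim("notes", "rw");

    // Keep a local copy so the app keeps working offline.
    remoteStorage.caching.enable("/notes/");

    const client = remoteStorage.scope("/notes/");

    // Data is just path + MIME type + body; whichever server the user
    // connected (self-hosted or not) syncs it in the background.
    await client.storeFile("text/plain", "hello.txt", "Hello, remote storage!");
    const file = await client.getFile("hello.txt");
    console.log(file.data);
  }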

By @spectaclepiece - 4 months
Local first is the first software trend in a long time that has gotten me really excited. The aspect of co-located data and logic is what's most interesting for me for two reasons:

1. Easier to develop - Sync layer handles all the tricky stuff, no need for translation layer between server and client.

2. Better user experience - Immediate UI feedback, no network dependency etc.

I suspect there will be a major shift within the next year or two, when a local-first framework with a developer experience similar to Nuxt or Next comes about. The Rails of local first.

I can't recommend enough the localfirst.fm podcast which has been a great introduction to the people and projects in the space: https://www.localfirst.fm/

By @klabb3 - 4 months
> Dropbox! Well, not necessarily Dropbox, but any cloud-based file-syncing solution. iCloud Drive, OneDrive, Google Drive, Syncthing, etc.

> It’s perfect — many people already have it. There are multiple implementations, so if Microsoft or Apple go out of business, people can always switch to alternatives. File syncing is a commodity.

This doesn’t work for collaborative software. It’s also highly questionable for realtime software like chat. That’s a solution looking for a problem.

There is exciting movement in the space but imo people focus too much on CRDTs, seemingly in the hopes of optimal solutions to narrow problems.

What we need is easy-to-use identity management, collaborative features without vendor lock in and most importantly, a model that supports small-medium sized businesses that want to build apps while making a living.

By @poisonborz - 4 months
I expect that one mild catastrophe, the kind that is bound to occur every now and then, just large enough to disrupt networks for a few weeks or months on a continental scale, would make everyone realize how foolish the whole cloud-first idea was, and it would be left in the dust along with its decade or so of work and proponents.
By @joemaller1 - 4 months
Dropbox, OneDrive and others are dangerous because they default to cloud-first. To "save disk space", they upload your files and provide a proxy/placeholder for your actual content.

If something happens to the provider, or they decide they don't like you or your files, your data is gone. Worse than gone, because you still have the empty proxies -- the husks of your files.

I personally know of more than one instance where seemingly innocuous data triggered some automated system at Dropbox and the user was locked out of their files without recourse.

If you're using cloud storage, make *absolutely certain* you have it set to download all files. If your cloud storage exceeds the drive space of a laptop (small businesses, etc), get a cheap dedicated PC and a big drive, then set up at least one dedicated cloud mirror.

Local-first cloud storage is great, but the potential for catastrophic data-loss is not even remotely close to zero.

By @arendtio - 4 months
In 2016, I built a PWA that can synchronize using two different backends: AWS (if the user doesn't care where the data is saved) or WebDAV (in my case, a Nextcloud instance). Sadly, I built it prototype-style and didn't take the time to fix/rebuild things properly.

But I have used this app every week since, and one of the lessons is that operations-based files grow pretty quickly. If you want to keep sync times short and bandwidth usage to a minimum, you have to consider how to keep read and write volumes to a minimum too. I use localStorage for the client-side copy, and reaching the 5 MB quota isn't that hard either. These things can be solved, but you have to consider them during the design phase.

So yes, it's cool stuff, but the story isn't over with using Automerge and op-based files.
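
One common mitigation for that growth, sketched below with invented names and shapes, is snapshot-plus-tail compaction: persist a folded snapshot and only the operations since, so neither the synced file nor the localStorage copy grows without bound:

  // Hypothetical op log with snapshot compaction to bound storage growth.
  type Op = { seq: number; key: string; value: unknown };
  type State = Record<string, unknown>;

  interface Persisted {
    snapshot: State;    // state folded up to snapshotSeq
    snapshotSeq: number;
    tail: Op[];         // only the ops after the snapshot are kept
  }

  const MAX_TAIL = 500; // compact once the tail grows past this

  function append(p: Persisted, op: Op): Persisted {
    const tail = [...p.tail, op];
    if (tail.length <= MAX_TAIL) return { ...p, tail };
    // Fold the tail into a new snapshot and drop the old ops locally;
    // peers that still need those ops must fetch them before compaction.
    const snapshot = tail.reduce((s, o) => ({ ...s, [o.key]: o.value }), p.snapshot);
    return { snapshot, snapshotSeq: tail[tail.length - 1].seq, tail: [] };
  }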

By @staticelf - 4 months
I have always liked the idea of local first. The problem with it, though, is that it almost always sucks, or isn't that important, or both.

At least for myself, I barely use any local-first software. The software I do use that is local in any important sense of the word is basically local-only software. I realize this every time I lose connection on my phone: it becomes little more than a pretty bad camera compared to my Sony.

I live in a country where I have good 3G speeds pretty much everywhere, so internet connectivity is never an issue, not even on moving things like trains or boats. The very few times I have been flying I simply don't do any work, because it's usually uncomfortable anyway.

This is the main reason I don't really care about local-first and have been diving into Phoenix LiveView for the last couple of weeks. The productivity boost I get and the cool realtime apps it empowers me to build are more important to me than the dream of making local-first web apps. A realtime demo of things updating with multiplayer functionality is a far easier sell than "look, the app works even when I turn on flight mode". And honestly, like 99% of the time, it is also more useful.

I have done local-first web apps before, and it is always such a pain, because syncing is a near-impossible problem to solve. What happens if you and someone else made changes to the same things two hours or more ago? Who even remembers which value is correct? How do you display the diffs?

No matter what you do, you probably need to implement some kind of diffing functionality, and you need to implement revisions, because the other guy will complain that his changes were overwritten, and so on. There are just so many issues that are very hard to solve and require so much work that it isn't worth it unless you are a large team with a lot of resources. You end up with a complicated mess of code that is like git but worse in every way.

It's easier to simply say the app doesn't work offline, because we are rarely offline and no one will pay for the effort required. Unfortunately.

By @jswny - 4 months
The thing about local-first syncing options like this is that they mostly do not work on mobile. For example, iPhones cannot handle Dropbox syncing random text files in the background as a regular filesystem for an app to deal with.

Not saying that's not the iPhone's fault, but I doubt any of this works on that platform.

By @armincerf - 4 months
"syncing doesn’t work without a server"

I don't think this is true. Granted, there are some big challenges to transferring data between devices without a central server, but there are several projects like https://dxos.org/ which use p2p, and there's also https://ditto.live/ which uses Bluetooth/Wi-Fi Direct for cases where all users are in the same room or on the same local network (imagine wanting to play chess with a friend sitting in a different row on a plane without wifi; I was in this situation recently and was pretty surprised that I couldn't find anything on the App Store that could do this!)

Of course, most of the time it's better to have a server, because p2p still has a lot of difficulties, and often having a central 'source of truth' is worth the costs that come with a server-based architecture. So IMO things like https://electric-sql.com/ or https://www.triplit.dev/ or the upcoming https://zerosync.dev/ will be far better choices for anyone wanting to build a local-first app used by many users.

By @milansuk - 4 months
> But file syncing is a “dumb” protocol. You can’t “hook” into sync events, or update notifications, or conflict resolution. There isn’t much API; you just save files and they get synced. In case of conflict, best case, you get two files. Worst — you get only one :)

Sync services haven't evolved much. I guess a service that provided lower-level APIs and different data structures (CRDTs, etc.) would be a hacker's dream. E2EE would be nice, too.

And if they closed the shop, I would have all the files on my devices.
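
Even on top of today's "dumb" protocol, an app can hook conflict handling itself by watching for the conflict files the service produces. A sketch (Dropbox's exact conflict-copy naming varies, and mergeDocs stands in for an app-specific merge, e.g. a CRDT merge):

  import { readdirSync, readFileSync, unlinkSync, writeFileSync } from "node:fs";
  import { join } from "node:path";

  // Dropbox names duplicates roughly like
  // "notes (X's conflicted copy 2024-06-25).json".
  const CONFLICT = /^(.*) \(.*conflicted copy.*\)(\.[^.]+)?$/;

  // App-specific merge of two divergent versions; hypothetical here.
  declare function mergeDocs(a: string, b: string): string;

  function resolveConflicts(dir: string): void {
    for (const name of readdirSync(dir)) {
      const m = CONFLICT.exec(name);
      if (!m) continue;
      const original = join(dir, m[1] + (m[2] ?? ""));
      const copy = join(dir, name);
      // Merge the two divergent versions back into a single file.
      const merged = mergeDocs(readFileSync(original, "utf8"), readFileSync(copy, "utf8"));
      writeFileSync(original, merged);
      unlinkSync(copy);
    }
  }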

By @ngrilly - 4 months
I've been dreaming of Apple Notes and Obsidian doing what the author suggests. The approach seems similar to Delta Lake's consistency model, which uses object storage like S3 and yet allows concurrent writers and readers: https://jack-vanlightly.com/analyses/2024/4/29/understanding....
By @red_trumpet - 4 months
> If you set out to build a local-first application that users have complete control and ownership over, you need something to solve data sync.

> Dropbox and other file-sync services, while very basic, offer enough to implement it in a simple but working way.

That's how I use KeePassXC. I put the .kdbx file in Seafile, and have it on all my devices. Works like a charm.

By @mathnmusic - 4 months
One improvement to the first "super-naive" approach is to break the state down into a whole hierarchy of files rather than a single file. This helps reduce (but not eliminate) conflicts when multiple clients are making changes to different parts of the state.
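
A minimal sketch of that layout (file names and shapes invented): each entity gets its own file keyed by a stable ID, so two clients editing different entities never touch the same file, and only same-entity edits can still collide.

  import { mkdirSync, writeFileSync } from "node:fs";
  import { join } from "node:path";

  // Instead of one big state.json, store one small file per entity.
  type Task = { id: string; title: string; done: boolean };

  function saveTask(root: string, task: Task): void {
    const dir = join(root, "tasks");
    mkdirSync(dir, { recursive: true });
    // Concurrent edits to different tasks sync without conflict.
    writeFileSync(join(dir, `${task.id}.json`), JSON.stringify(task, null, 2));
  }
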
By @refset - 4 months
Along similar lines of "just use your preferred cloud-based file-syncing solution", see: https://github.com/filipesilva/fdb - the author spoke about it recently [0]. The neat thing about this general approach is that it pushes all multi-user permission problems to the file-syncing service, using the regular directory-level ACLs and UX.

[0] "FDB - a reactive database environment for your files" https://www.youtube.com/watch?v=EvAFEC6n7NI

By @ubermonkey - 4 months
100% on board with this.

It's vexing how many tools just assume always-on connectivity. I don't want a tasks-and-notes tool that I need to run in a browser. I want that data local, and I may want to sync it, but it should work fine (other than sync) without the Internet.

This is also true for virtually every other data tool I use.

By @cben - 4 months
I wish apps that open a document for editing would embed some kind of p2p "signalling" (WebRTC or otherwise) in or near the file to bootstrap more real-time collaboration. That could be magic, because it'd kick in IFF the file happens to be shared (by Dropbox or similar) AND somebody else has it open for editing at the same time...
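
One conceivable shape for this, as a purely hypothetical sketch: the editor writes a sidecar file containing a WebRTC offer next to the document; if the folder is synced and a collaborator has the file open, their editor writes back an answer and a direct channel forms. Here writeSyncedFile is an invented stand-in for "write into the synced folder", and a real exchange would also need ICE candidates:

  // Hypothetical: let the file-sync service carry the WebRTC handshake,
  // so no dedicated signalling server is needed.
  declare function writeSyncedFile(path: string, contents: string): Promise<void>;

  async function publishOffer(docPath: string): Promise<RTCPeerConnection> {
    const pc = new RTCPeerConnection();
    pc.createDataChannel("edits"); // CRDT ops would travel over this channel
    await pc.setLocalDescription(await pc.createOffer());
    // A peer syncing this folder answers via e.g. "<doc>.presence-answer.json".
    await writeSyncedFile(`${docPath}.presence.json`,
      JSON.stringify({ offer: pc.localDescription }));
    return pc;
  }
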
By @olavfosse - 4 months
People are always hating on the cursors; I think they're fun.
By @throwaway346434 - 4 months
Just use RDF/knowledge graphs. Yes, easier said than done. But you own your claims on your facts. It's interoperable. You then need a toolchain for trust/provenance when mixing local or remote data.
By @cranium - 4 months
I love the local-first design, but you have to understand that conflicts are inevitable. With local-first, you choose Availability and Partition tolerance over Consistency, and slapping a CRDT on it does not solve every consistency problem. Think Git merge conflicts: is there an algorithm to resolve them every time?

However, I like the abstractions of CRDTs, and libs like Automerge can solve most of the problems. If you must handle all types of files, just be prepared to ask the user to resolve some conflicts by hand.
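
To make that concrete, here is roughly what an Automerge merge of two replicas that diverged offline looks like, based on its documented API (treat the details as approximate):

  import * as Automerge from "@automerge/automerge";

  type Doc = { notes: string[] };

  // Two replicas diverge while offline...
  const base = Automerge.change(Automerge.init<Doc>(), d => { d.notes = []; });
  const alice = Automerge.change(Automerge.clone(base), d => { d.notes.push("from Alice"); });
  const bob = Automerge.change(Automerge.clone(base), d => { d.notes.push("from Bob"); });

  // ...and merge when they meet. Commuting edits (two list inserts) merge
  // cleanly; overwrites of the same field pick one winner, and the app must
  // decide whether to surface the loser to the user.
  const merged = Automerge.merge(alice, bob);
  console.log(merged.notes); // ["from Alice", "from Bob"] (deterministic order)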

By @surrTurr - 4 months
PouchDB is a great local-first DB with optional sync for JavaScript: https://pouchdb.com/
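
The core pattern, per PouchDB's documented API: all reads and writes go to an on-device database, and replication against any CouchDB-compatible endpoint is optional and live (the server URL below is a placeholder):

  import PouchDB from "pouchdb";

  const db = new PouchDB("todos"); // on-device database; works fully offline

  async function demo() {
    await db.put({ _id: new Date().toISOString(), title: "buy milk", done: false });

    // Optional continuous, retrying sync with a CouchDB-compatible server.
    db.sync("https://couch.example.com/todos", { live: true, retry: true })
      .on("change", info => console.log("synced", info.direction))
      .on("error", err => console.error("sync error", err));
  }
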
By @poisonborz - 4 months
With this topic I think there should be a bigger thought framework at play. What about file formats? User data and settings import/export? Telemetry (including the useful kind)? How should monetization/pro features be added? There are good answers to these, but the views are scattered. The rallying calls are too narrowly scoped: local-first, self-hosted, open source, "fair software". The software industry is in need of a new GNU Manifesto.
By @kkfx - 4 months
There is another important and too often ignored issue: software availability.

Let's say one day Beancount (my preferred personal finance software) disappears; well, so far I could switch to Ledger or hledger, and the switch demands a bit of rg/sed work, but it's doable. If, say, Firefly disappeared, I would still have my data, but migrating it to something else would be a nightmare. Of course such events are slow: if the upstream suddenly disappears, the local software still works, but after some time it will break due to environmental changes around it.

With classic FLOSS tools that's a limited problem: tools are simple, without many dependencies, and they are normally developed by a large, spread-out community. Modern tools tend to be the opposite, with a gazillion deps, often in https://xkcd.com/2347/ mode.

My digital life is almost entirely in Emacs. The chances that Emacs disappears are objectively low, and even if it happened, despite the very big codebase there aren't many easy-to-break deps. BUT if I went the modern path, say, instead of org-attaching most of my files I put them in Paperless, and instead of org-mode note links I used Dokuwiki or something else, I would have far more chances of something breaking, and even though I own everything, my workflow would quickly cease to exist. Recovery would be VERY hard. Yes, Paperless in the end stores files on a filesystem and I can browse them manually, and Zim uses essentially the same markup as Dokuwiki so I could import the wiki, but all links would be broken, and there is no quick text tweak I could apply to reconstruct http links to the filesystem. With org-attach I can, even though it uses a cache-like tree that isn't really human-readable.

Anyway, to have personal guarantees of ownership of our digital life, local-first and sync are not the only main points. The corollary is that we need the old desktop model, "an OS like a single program, indefinitely extensible", to be safe: it is more fragile at the small-potatoes level, but much more resilient in the long run.

By @samuelstros - 4 months
A universal sync engine could be "files as zipped repositories."

A repository as a file is self-contained, tracks changes by itself, and is therefore free from vendor lock-in. Here is my draft RFC: https://docs.google.com/document/d/1sma0kYRlmr4TavZGa4EFiNZA...

By @hs86 - 4 months
How well does Obsidian Sync's conflict resolution work compared to Dropbox's? Dropbox now supports folder selection for 3rd-party apps via the iOS Files app [1], and I wonder how well that stacks up against Obsidian's native sync.

[1] https://social.blach.io/@textastic/112519168798881056

By @rco8786 - 4 months
I’ve been pondering doing something like this with SQLite. The primary db is local/embedded on the user’s machine and use something like https://github.com/rqlite/rqlite to sync on the backend.

It also means it would be fairly trivial to allow users/orgs to host their own “backend” as well.
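
A rough sketch of the push side, using rqlite's HTTP data API (it accepts a JSON array of parameterized SQL statements; the endpoint and port are its documented defaults, but treat the details as approximate):

  // Local SQLite remains the primary store; when online, changed rows are
  // pushed to a self-hostable rqlite node over plain HTTP.
  async function pushTodo(title: string, done: boolean): Promise<void> {
    const res = await fetch("http://localhost:4001/db/execute", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      // rqlite takes a JSON array of parameterized statements.
      body: JSON.stringify([
        ["INSERT OR REPLACE INTO todos(title, done) VALUES(?, ?)", title, done ? 1 : 0],
      ]),
    });
    if (!res.ok) throw new Error(`rqlite push failed: ${res.status}`);
  }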

By @mav3ri3k - 4 months
Using the theory of patches would better complement the current approach. Integrating an SCM such as https://pijul.org, or at least its underlying tech, would allow for better conflict resolution. Transferring patches should also allow for more efficient use of IO.
By @jdvh - 4 months
Both "everything in the cloud" and "everything local" have their obvious technical advantages, and I think they are mostly well understood. What really drives the swing of the pendulum are the business incentives.

Is the goal to sell mainframes? Then tell customers that thin clients powered by a mainframe allow for easy collaboration, centralized backups and administration, and lower total cost of ownership.

Do you want recurring SaaS revenue? Then tell customers that they don't want the hassle of maintaining a complicated server architecture, that security updates mean servers need constant maintenance, and that integrating with many 3rd party SaaS apps makes cloud hosting the logical choice.

We're currently working on a Local First (and E2EE) app that syncs with CRDTs. The server has been reduced to a single Go executable that more or less broadcasts the mutation messages to the different clients when they come online. The tech is very cool and it's what we think makes the most sense for the user. But what we've also realized is that by architecting our software like this we have torpedoed our business model. Nobody is going to pay $25 per user per seat per month when it's obvious that the app runs locally and not that much is happening on the server side.

Local First, Forever is good for the user. Open data formats are good for the user. Being able to self-host is good for the user. But I suspect it will be very difficult to make software like this profitably. Adobe's stock went 20x after they adopted a per seat subscription model. This Local First trend, if it is here to stay (and I hope it will be) might destroy a lot of SaaS business models.

By @wiseowise - 4 months
Absolutely love the cursors.
By @mike_hearn - 4 months
I'm pretty sure I'm one of the only people in the world who has actually built an app that works in this exact way, and shipped it, and supported it. It was called Lighthouse and it was for organizing crowdfunds using Bitcoin smart contracts, so it didn't only have the fun of syncing state via Dropbox+friends but also via a P2P network.

Here's what I learned by doing that:

1. Firstly - and this is kinda obvious but often left unarticulated - this pattern more or less requires a desktop app. Most developers no longer have any experience of making these. In particular, distribution is harder than on the web. That experience is what eventually inspired me to make Conveyor, my current product, which makes deploying desktop apps waaaaay easier (see my bio for a link) and in particular lets you have web style updates (we call them "aggressive updates"), where the app updates synchronously on launch if possible.

2. Why do you need aggressive updates? Because otherwise you have to support the entire version matrix of every version you ever released interacting with every other version. That's very hard to test and keep working. If you can keep your users roughly up to date, it gets a lot simpler and tech debt grows less fast. There are no update engines except the one in Conveyor that offers synchronous updates, and Lighthouse predated Conveyor, so I had to roll my own update engine. Really a PITA.

3. Users didn't understand/like the file-sharing pattern. Users don't like anything non-standard that they aren't used to, but they especially didn't like this particular pattern. Top feature request: please make a server. All that server was doing was acting as a little Dropbox-like thing specialized for this app, but users much preferred it, even in the cryptocurrency/blockchain world where everyone pretends to want decentralized apps.

4. It splits your userbase (one reason they don't like it). If some users use DropBox and others use Google Drive and others use OneDrive, well, now everyone needs to have three different drive accounts and apps installed.

5. Users expect to be able to make state changes that are reflected immediately on other people's screens e.g. when working together on the phone. Drive apps aren't optimized for this and often buffer writes for many minutes.

You don't really need this pattern anyway. If you want to make an app that works well then programmer time is your biggest cost, so you need a business model to fund that and at that point you may as well throw in a server too. Lighthouse was funded by a grant so didn't have that issue.

Re: business models. You can't actually just sell people software once and let them use it forever anymore, that's a completely dead business model. It worked OK in a world where people bought software on a CD and upgraded their OS at most every five years. In that world you could sell one version with time-limited support for one year, because the support costs would tend to come right at the start when users were getting set up and learning the app. Plus, the expectation was that if you encountered a bug you just had to suck it up and work around it for a couple of years until you bought the app again.

In a world where everything is constantly changing and regressing in weird ways, and where people will get upset if something breaks and you tell them to purchase the product again, you cannot charge once for a program and let people keep it forever. They won't just shrug and say, oh OK, I upgraded my OS and now my $500 app is broken, guess I need to spend another $300 to upgrade to the latest version. They will demand you maintain a stream of free backported bugfixes forever, which you cannot afford to do. So you have to give them always the latest versions of things, which means a subscription.

Sorry, I know people don't want to hear that, but it's the nature of a world where people know software can be updated instantly and at will. Expectations changed, business models changed to meet them.

By @basic_banana - 4 months
Glad to see more discussion about local-first, but there hasn't been a good business model for local-first products, which might make the tech ecosystem unsustainable.
By @willhackett - 4 months
Loved the realtime cursors on a post talking about CRDTs.
By @immibis - 4 months
We are going to reinvent an ad hoc, informally-specified, bug-ridden, slow implementation of half of Usenet.
By @amai - 4 months
Isn't git the best example of a local-first approach? So why reinvent the wheel?
By @42lux - 4 months
I like what LobeChat does: everything in IndexedDB, with WebRTC for P2P sync.
By @uvu - 4 months
Not gonna lie, the mouse icons moving around are quite annoying.
By @endisneigh - 4 months
I’ll go against the grain and say that local first isn’t really necessary for most apps and spending time on it is a distraction from presumably more fundamental product problems.

Talking about such things is like catnip on here though.

By @pietro72ohboy - 4 months
What is going on with the multiple stray mouse cursors? The site scrolls with a considerable lag and the mouse cursors are outright annoying.
By @sagebird - 4 months
IO Monads solve this by passing the world around.

Yeah, the best world for multiple users is a database right?

So it would seem that if apps have stores to buy things, one of those things should be a store?

If you could buy a database place then you could dispense tickets. Share the tickets with friends as photos. Then your local-first app can take the tickets to the places your friends shared with you. Some of them go offline, fine.

I don't like the experience of having to set up a Dropbox externally from apps. Why shouldn't a database place be a one-off purchase, like an item in a game?

By @Kalanos - 4 months
Sync services are sources of data corruption. Local OR remote, not both.
By @dgroshev - 4 months
(this should probably be a post)

CRDTs and local-first are ideas that have been perpetually in the hype cycle for the last decade or so, starting around the time Riak CRDTs became a thing and continuing all the way to today.

Niki's post is a perfect illustration: CRDTs offer this "magical" experience that seems perfectly good until you try building a product with them. Then it becomes a nightmare of tradeoffs:

- state-based or operation-based? do you trim state? how?

- is it truly a CRDT (no conflicts possible), a structure with explicit conflict detection, or last-writer-wins/arbitrary choice in a trench coat that will always lose data? Case in point: Automerge uses pseudo-random conflict resolution, so Niki's solution will drop data if it's added without a direct causal link between the edits (a minimal sketch of this follows the list). To learn this, you have to go to the "under the hood" section in the Automerge docs and read about the merge rules very attentively. It might be acceptable for a particular use case, but very few people would even read that far!

- what is the worst case complexity? Case in point: yjs offers an interface that looks very much like JSON, but "arrays" are actually linked lists underneath, which makes it easy to accidentally become quadratic.

- how do you surface conflicts and/or lost data to the user in the interface? What are the user expectations?

- how do you add/remove replication nodes? What if they're offline at the time of removal? What if they come online after getting removed?

- what's user experience like for nodes with spotty connection and relatively constrained resources, like mobile phones? Do they have to sync everything after coming online before being able to commit changes?

- what's the authoritative node for side effects like email or push notifications?

- how do you handle data migrations as software gets updated? What about two nodes having wildly different software versions?

- how should search work on constrained devices, unless every device has the full copy of the entire state?
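
(To make the Automerge point above concrete, here is a minimal sketch per its documented merge rules: concurrent writes to the same field merge without any error, one value wins arbitrarily, and the loser is visible only if the app explicitly asks.)

  import * as Automerge from "@automerge/automerge";

  type Doc = { title?: string };

  // Two replicas set the same field with no causal link between the edits.
  const base = Automerge.init<Doc>();
  const a = Automerge.change(Automerge.clone(base), d => { d.title = "draft A"; });
  const b = Automerge.change(Automerge.clone(base), d => { d.title = "draft B"; });

  const merged = Automerge.merge(a, b);
  console.log(merged.title);
  // Either "draft A" or "draft B": the winner depends on actor IDs, not on
  // anything the user did. The losing value survives only in conflict metadata:
  console.log(Automerge.getConflicts(merged, "title"));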

Those tradeoffs infect the entire system, top to bottom, from basic data structures (a CRDT "array" can be many different things with different behaviour) to storage to auth to networking to UI. Because of that, they can't be abstracted away — or more precisely, they can be pretend-abstracted for marketing purposes, until the reality of the problem becomes apparent in production.

From Muse [1] to Linear [2], everyone eventually hits the problems above and has to either abandon features (no need to have an authoritative email log if there are no email notifications), subset data and gradually move from local first to very elaborate caching, or introduce federation of some sort that gravitates towards centralisation anyway (very few people want to run their own persistent nodes).

I think this complexity, essential for local-first in practice, is important to contextualise both Niki's post and the original talk (which mostly brushed over it).

[1]: https://museapp.com/podcast/78-local-first-one-year-later/

[2]: https://www.youtube.com/watch?v=Wo2m3jaJixU

By @chrisjj - 4 months
Reading this on phone. Disappointed not to see a dozen other fingers scrolling the page :)
By @koliber - 4 months
First there were mainframes with thin dumb clients. Classic client-server.

Then came thick clients, where some processing happened on the local terminal and some on the server.

Technology advanced far enough to install programs locally and they ran only locally. The 80's and 90's were quite the decades.

Someone had a bright idea to build server-based applications that could be accessed via a dumb web browser.

That went on for a while, and people realized that things would be better if we did some processing in JavaScript.

Now the push continues to do more and more computation locally.

Can't wait for the next cycle.

By @boomlinde - 4 months
Fat chance I'll read any of that with five mouse cursors flying around for no obvious reason.
By @dewey - 4 months
Tonsky blog post HN comment boilerplates:

- He writes about UX but there's no dark mode!

- The cursors are really distracting!

- The yellow background color makes it unreadable!

- Some comment about the actual content of the post

By @voidUpdate - 4 months
It's somewhat amusing that every time this blog comes up on the front page, 50% of the comments are about the pointers. I guess it's a good way to generate activity around the post haha
By @pecheny - 4 months
To get rid of pointers, you can add

  tonsky.me##div.pointers
to your uBlock Origin's custom filters list.
By @outime - 4 months
For someone who always complains about design choices, it's quite ironic that he ended up putting mouse cursors flying around, obstructing your view and constantly distracting. At the Blacksmith's House...
By @olavfosse - 4 months
People are always hating on the cursors; I think it's fun.