gRPC: 5 Years Later, Is It Still Worth It?
The author reflects on five years of using gRPC at Torq, highlighting its benefits, improvements in the ecosystem, challenges with gRPC-web, and the emergence of alternatives like connectrpc.
The article reflects on the author's experience with gRPC over the past five years since joining Torq. Initially, the team decided against using OpenAPI/Swagger with Go due to past challenges, opting instead for gRPC and Protobuf. This decision proved beneficial, as it facilitated backward compatibility, enforced coding standards, and reduced discrepancies between client and server code. The author highlights improvements in the gRPC ecosystem, particularly through buf.build, which has enhanced tooling and developer experience. The introduction of the Buf Schema Registry (BSR) has simplified dependency management and ensured compatibility across API versions. The article also discusses the challenges of using gRPC-web in frontend development, including debugging difficulties and the need for a backend proxy. However, alternatives like connectrpc have emerged, offering better support for TypeScript clients and caching. The author concludes that gRPC remains their preferred communication protocol, emphasizing its advantages in modern software development.
- The author reflects positively on the decision to use gRPC and Protobuf over OpenAPI/Swagger.
- The gRPC ecosystem has improved significantly, particularly with tools from buf.build.
- The Buf Schema Registry simplifies dependency management and ensures API compatibility.
- gRPC-web presents challenges in frontend development, but alternatives like connectrpc are available.
- The author reaffirms gRPC as their preferred communication protocol for future projects.
Related
gRPC: The Bad Parts
gRPC, a powerful RPC framework, faces challenges like a steep learning curve, compatibility issues, and lack of standardized JSON mapping. Despite drawbacks, improvements like HTTP/3 support and new tools aim to enhance user experience and address shortcomings for future success.
Serving a billion web requests with boring code
The author shares insights from redesigning the Medicare Plan Compare website for the US government, focusing on stability and simplicity using technologies like Postgres, Golang, and React. Collaboration and dedication were key to success.
Eight Years of GraphQL
A critique of GraphQL's viability after 8 years raised concerns about security, performance, and maintainability. Emphasized the importance of persisted queries and understanding trade-offs before adoption.
Parsing Protobuf Definitions with Tree-sitter
The article discusses using Tree-sitter to parse Protocol Buffers definitions, addressing limitations of existing tools and providing a practical guide for developers to enhance workflows in software development.
gRPC: The Ugly Parts
The article highlights gRPC's drawbacks, including complex generated code, integration issues in Go, performance impacts from runtime reflection, lack of enforced requirements in proto3, and limited adoption due to a steep learning curve.
Having the libraries generate relatively optimized message parsers and server implementations, and just throwing middlewares around them, with easy support for deprecating fields, enums, and a bunch of other goodies - it has all been a huge help and a productivity gain. So much can be done just by understanding the gRPC config settings and throwing some bog-standard middlewares around things.
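For instance, a middleware here is just a unary server interceptor; a minimal Go sketch (logging only, service registration elided):

```go
package main

import (
	"context"
	"log"
	"net"
	"time"

	"google.golang.org/grpc"
)

// loggingInterceptor is a bog-standard middleware: it times every unary RPC
// and logs the full method name and the outcome.
func loggingInterceptor(
	ctx context.Context,
	req interface{},
	info *grpc.UnaryServerInfo,
	handler grpc.UnaryHandler,
) (interface{}, error) {
	start := time.Now()
	resp, err := handler(ctx, req) // call the real handler
	log.Printf("method=%s duration=%s err=%v", info.FullMethod, time.Since(start), err)
	return resp, err
}

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatal(err)
	}
	// Chain as many of these middlewares as needed.
	srv := grpc.NewServer(grpc.ChainUnaryInterceptor(loggingInterceptor))
	// Register generated services here, e.g. pb.RegisterFooServer(srv, &impl{})
	// (hypothetical generated name), then serve.
	log.Fatal(srv.Serve(lis))
}
```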
Couldn't you just make protoc part of your project's git repo?
> This approach ensured that everyone on the team was using identical versions of these tools throughout their development process.
While this does enforce using the same version during development, it also introduces possible differences between code built during development and code built for production (if those are separate processes, as they should be in mature software).
Ideally, production builds should be hermetic and their inputs should only come from the committed source code and tools, not externally hosted ones that evolve independently.
Granted, with a tool as stable (and with such strictly defined interfaces) as protoc, perhaps the risk is minimal, but IMO this isn't a generalizable architecture.
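One way to get both properties is to commit the tool itself and route all codegen through it; a sketch, assuming a hypothetical vendored layout under third_party/protoc:

```go
// Package api holds the service definitions; code generation is pinned to a
// protoc binary committed under third_party/protoc (hypothetical layout), so
// development and production builds run the identical tool.
package api

//go:generate ../third_party/protoc/bin/protoc --go_out=. --go-grpc_out=. service.proto
```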
Early on in my career I worked on several projects that used the WS-* "stack", and generally it was a very good experience. Once we had a project that was split between two subcontractor teams, each working on their portion of a system (a .NET application and a J2EE server) with an API based on a common WSDL spec. The two teams (dozens of engineers on each side) worked independently for about a year; after that, they tried to run the two subsystems together, and it was really cool to see the parts "just click". There were some minor issues (like one side expecting UTC timestamps while the other was sending localized time), but they took very little time to fix. The fact that the two teams were not really talking to each other, were using different languages and libraries, relied on some manual testing through SoapUI and some mocks, and yet the whole thing ran on the first attempt was very, very impressive!
WS-* was heavily criticized at the time: the standards and data formats were convoluted, the tooling beyond .NET and JVM was almost non-existent, Sun and Microsoft were not following the standards in their implementations and cared more about the interop with each other than about being standard-compliant. So, ultimately REST and JSON pushed the whole thing away. But I'm really happy to see people trying to replicate what was great about Web Services without making old mistakes, and I wish everyone involved all the best.
Which brings me to my actual question. Since software development history repeats / rhymes with itself every decade or two, I now wonder whether XML Web Services were the first iteration of this formula. Was there another popular technology in the 70s, 80s, or 90s that had people describe an RPC contract and then generate client and server glue code from it?
I know that both COM and CORBA used an IDL to describe APIs, but I don't remember any code generation being involved.
And then web browsers never implemented any of those capabilities.
gRPC-web's roadmap is still a huge pile of workarounds they intend to build. These shouldn't be necessary! https://github.com/grpc/grpc-web/blob/master/doc/roadmap.md
Instead of giving us HTTP push, everyone said: oh, we haven't figured out how to use it well for content delivery, and we've never let anyone else use it for anything else. So it got canned.
HTTP trailers also seem to lack browser support, AFAIK.
Why the web pours so much into HTML, JS, and CSS but utterly neglects HTTP - to the degree that grpc-web will probably end up tunneling HTTP over WebTransport - is so cursed.
1. I really like gRPC and protos for the codegen capabilities. APIs I've worked on that use gRPC have always been really easy to extend. I am always tempted to hand-roll http.Handle("/foo", func(w http.ResponseWriter, req *http.Request) { ... }), but gRPC is even easier than this.
2. grpc-web never worked well for me. It's hard to debug: in the browser's inspector you just have serialized protos instead of JSON, and developers struggle with that. Few people know about protoc's `--decode` and `--decode_raw` options. (This comes up a lot when working with protos; you have binary or base64-encoded protos but just want key/value pairs. I ended up adding a command to our CLI that does this for you.)
I also thought the client side of the equation was a little too bloated. webpack and friends never tree-shook out code we didn't call, and as a result, the client bundle was pretty giant for how simple an app we had. There are also too many protoc plugins for the frontend, and I feel like whenever a team goes looking for one, they pick the wrong one. I am sure I have picked the wrong one multiple times. After many attempts at my last job, I found the sane TypeScript one. But at my current job, a team started using gRPC and picked the other one, which caused them a lot of pain.
3. grpc-gateway works pretty well, though. Like grpc-web, it suffers from promised HTTP features never being implemented, so it can't cover all of gRPC. (At its core, gRPC is bidirectional streaming; unary, server-streaming, and client-streaming RPCs are just restricted forms of it. The web really only handles unary and server streaming, and grpc-gateway doesn't remove those limitations.)
But overall, I like it a lot for REST APIs. If I were building a new REST API from scratch today, it would be gRPC + grpc-gateway. I like protos for specifying how the API works, and grpc-gateway turns it into a Swagger file that normal developers can understand. No complaints whatsoever with any of it. (buf is unnecessary, and I feel like they just PR'd themselves into the documentation to sound like it's required, but honestly if it helps people, good for them. I just have a hand-crafted protoc invocation that works perfectly.)
4. For plain server-to-server communication, you'd expect gRPC to work fine, but you learn that there are middleboxes that still don't support HTTP/2. One problem we have is that our CLI uses gRPC to talk to our server. Customers self-host all of this, and often work at companies that break gRPC because their middleboxes don't support HTTP/2. (I'll point out here that HTTP/3 is the current version of HTTP.) We have Zscaler at work, and this mostly affects our internal customers. (We got acquired and had these conditions added 8 years into the development cycle, so we didn't anticipate them, obviously.) But if we were starting over today, I'd use grpc-gateway over HTTP/1.1 instead of gRPC over HTTP/2. The API would adjust accordingly; I wouldn't have bidirectional RPCs, but RPCs that simulate them: something like a create-session RPC, then a unary call to add another message to the session, and a unary call that returns when a message is ready (sketched after this list). It sucks, but that's all HTTP/1.1 in the browser really offers, and that's the maximum web-compatibility level that works in Corporate America these days.
5. Some details are really confusing and opaque to end users trying to debug things. Someone sets up a proxy, connects to dns:///example.com, and the proxy doesn't work properly. That's because gRPC resolves example.com and dials the returned IP addresses, setting :authority to the IP address rather than the hostname. You have to use passthrough:///example.com to have the HTTP machinery make an HTTP request for example.com/foopb/Foo.Method. Maybe this is Go-specific, but it always confuses people. A few too many features are available out of the box that, again, work great on networks you control but poorly on networks your employer controls.
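To illustrate point 5, a rough Go sketch of the two target schemes (host and port are placeholders, plaintext credentials just for brevity):

```go
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// dns:///: grpc-go resolves example.com itself, dials the returned IPs,
	// and uses them for :authority - a hostname-keyed proxy never sees the name.
	resolved, err := grpc.NewClient("dns:///example.com:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer resolved.Close()

	// passthrough:///: the name is handed to the HTTP machinery unresolved,
	// so requests go out for example.com/foopb/Foo.Method and proxies behave.
	passthrough, err := grpc.NewClient("passthrough:///example.com:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer passthrough.Close()
}
```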
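And here is roughly what the session-style workaround from point 4 could look like as a proto sketch - service and message names are all hypothetical:

```proto
syntax = "proto3";

package chat.v1;

// Hypothetical service: a bidirectional conversation simulated with unary
// RPCs so it survives HTTP/1.1-only middleboxes.
service Chat {
  rpc CreateSession(CreateSessionRequest) returns (CreateSessionResponse);
  rpc SendMessage(SendMessageRequest) returns (SendMessageResponse);
  // Long-poll: returns once a message is ready for the session.
  rpc ReceiveMessage(ReceiveMessageRequest) returns (ReceiveMessageResponse);
}

message CreateSessionRequest {}
message CreateSessionResponse { string session_id = 1; }
message SendMessageRequest  { string session_id = 1; string body = 2; }
message SendMessageResponse {}
message ReceiveMessageRequest  { string session_id = 1; }
message ReceiveMessageResponse { string body = 1; }
```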
I can't stand gRPC. It's such a Google-developed product and protocol that trying to use it in a simpler system (i.e. everyone else's) is frustrating at best and infuriating at worst. Everything is custom and different from what you expect to deal with, when at its core it is still just HTTP.
Something like Twirp (https://github.com/twitchtv/twirp) is so much better. Use existing transports and protocols, then everything else Just Works.
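To make that concrete: a Twirp method is reachable with a plain HTTP POST and a JSON body, as in this Go sketch (host and service are hypothetical):

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

func main() {
	// A Twirp route is just POST /twirp/<package>.<Service>/<Method> with an
	// ordinary JSON (or binary proto) body - no HTTP/2, trailers, or special
	// clients required. Host and service below are hypothetical.
	resp, err := http.Post(
		"https://api.example.com/twirp/echo.v1.Echo/Say",
		"application/json",
		strings.NewReader(`{"message": "hello"}`),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```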
And sure, if you want to get a job at one of these places, learn their technology - but once even that company stops using it, where are you?
The ideal API supports `/api/call.json` and `/api/call.csv` and `/api/call.arrow` as well as `/api/call.grpc`, and the only thing that differs between these is the serializer (which is standardized and very well tested).
If the app is buggy, I want to (1) check what API calls it's making and (2) be able to run those API calls myself, which is why I (ideally) need a text-based (human-readable) format.
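A rough Go sketch of that shape - handler, route names, and the Row type are all hypothetical - where only the serializer varies per route:

```go
package main

import (
	"encoding/csv"
	"encoding/json"
	"net/http"
	"path"
	"strconv"
)

// Row is a hypothetical record type standing in for the real payload.
type Row struct {
	Name  string `json:"name"`
	Count int    `json:"count"`
}

// handleCall serves identical data under /api/call.json and /api/call.csv;
// the extension selects the serializer and nothing else.
func handleCall(w http.ResponseWriter, r *http.Request) {
	rows := []Row{{Name: "alpha", Count: 1}, {Name: "beta", Count: 2}}

	switch path.Ext(r.URL.Path) {
	case ".json":
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(rows)
	case ".csv":
		w.Header().Set("Content-Type", "text/csv")
		cw := csv.NewWriter(w)
		for _, row := range rows {
			cw.Write([]string{row.Name, strconv.Itoa(row.Count)})
		}
		cw.Flush()
	default:
		http.Error(w, "unsupported format", http.StatusNotAcceptable)
	}
}

func main() {
	http.HandleFunc("/api/call.json", handleCall)
	http.HandleFunc("/api/call.csv", handleCall)
	http.ListenAndServe(":8080", nil)
}
```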