gRPC: The Ugly Parts
The article highlights gRPC's drawbacks, including complex generated code, integration issues in Go, performance impacts from runtime reflection, lack of enforced requirements in proto3, and limited adoption due to a steep learning curve.
The article "gRPC: The Ugly Parts" by Kevin McDonald discusses the less favorable aspects of gRPC, a popular tool for microservices. While gRPC offers efficiency and performance, it has several drawbacks. One major issue is the complexity and verbosity of the code generated from protobuf definitions, which can hinder readability and maintainability. Language-specific quirks also complicate integration, particularly in Go, where gRPC's design diverges from standard HTTP practices. The generated code often relies on runtime reflection, which can slow down performance, and there are calls for better optimization in the standard protobuf compiler. The removal of required fields in proto3 has led to flexibility but also a lack of enforced requirements, which can be problematic. The steep learning curve associated with gRPC and protobuf, coupled with resistance from backend developers, has limited its adoption in web development. Additionally, concerns about Google's long-term commitment to gRPC and protobuf, along with the immature ecosystem lacking essential features like a package manager, further complicate its use. The article emphasizes the need for improved tooling and community support to enhance the gRPC experience.
- gRPC's generated code is often complex and difficult to navigate.
- Language-specific quirks can hinder integration, particularly in Go.
- The reliance on runtime reflection in generated code can impact performance.
- The removal of required fields in proto3 offers flexibility but lacks enforcement.
- The steep learning curve and limited ecosystem maturity hinder broader adoption.
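The proto3 point above is worth making concrete: since `required` was removed, generated code happily accepts zero values, and any "required" semantics must be enforced by hand. A minimal sketch, using a hypothetical hand-written stand-in for a protoc-generated struct:

```go
package main

import (
	"errors"
	"fmt"
)

// User stands in for a hypothetical protoc-generated message struct.
// In proto3 every scalar field is effectively optional; an unset string
// decodes as "", with no error from the generated code.
type User struct {
	Id    string
	Email string
}

// Validate enforces "required" semantics by hand, since proto3 dropped
// the required keyword and nothing in the generated code will do this.
func Validate(u *User) error {
	if u.Id == "" {
		return errors.New("user.id is required")
	}
	if u.Email == "" {
		return errors.New("user.email is required")
	}
	return nil
}

func main() {
	// Email is unset: this compiles and would decode fine off the wire;
	// the manual check is the only guard.
	fmt.Println(Validate(&User{Id: "42"}))
}
```

In practice this kind of check is often generated from annotations by third-party tools rather than written by hand, which is part of the tooling-maturity complaint.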
Related
gRPC: The Bad Parts
gRPC, a powerful RPC framework, faces challenges like a steep learning curve, compatibility issues, and lack of standardized JSON mapping. Despite drawbacks, improvements like HTTP/3 support and new tools aim to enhance user experience and address shortcomings for future success.
Serving a billion web requests with boring code
The author shares insights from redesigning the Medicare Plan Compare website for the US government, focusing on stability and simplicity using technologies like Postgres, Golang, and React. Collaboration and dedication were key to success.
Eight Years of GraphQL
A critique of GraphQL's viability after 8 years raised concerns about security, performance, and maintainability. Emphasized the importance of persisted queries and understanding trade-offs before adoption.
Web Crap Has Taken Control
The article critiques React's dominance in web development, citing complexity, performance issues, and excessive dependencies. It questions industry reliance on React, package management challenges, and suggests reevaluating development tools for efficiency.
An unordered list of things I miss in Go
The blog post highlights features missing in Go, such as ordered maps, default arguments, and improved nullability, suggesting these enhancements could benefit the language despite requiring significant revisions.
- Criticism of protobuf's limitations, such as lack of generics and support for custom types, which complicates code generation and flexibility.
- Discussion on the relationship between gRPC and protobuf, emphasizing their different governance and the tight coupling in usage.
- Concerns about front-end support for gRPC, with many users finding it inadequate for modern web development needs.
- Debate over the readability and maintainability of generated code, with some arguing it should not be a concern if handled correctly.
- General skepticism about the long-term commitment of Google to these projects, despite their deep integration into Google's infrastructure.
Google could not function as a company without protobuf. It is ingrained deeply into every inch of their stack.
Likewise, gRPC is the main public-facing interface for GCP. It's not going anywhere.
IMO, readability of generated code is largely a non-concern for the vast majority of use cases. If anything, it's more a criticism of the codegen tool. The same goes for the complaints about the HTTP server used with Go.
However, I totally agree with the criticisms of the enum naming conventions. It's an abomination and super leaky, made worse by the fact that it's part of the official(?) style guide: https://protobuf.dev/programming-guides/style/#enums
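To illustrate the leak being complained about: the style guide asks that every enum value be prefixed with the enum's own name (with an `_UNSPECIFIED` zero value), and Go codegen then prepends the enum type name again. A hand-written sketch of what protoc-gen-go emits for a simple enum:

```go
package main

import "fmt"

// Hand-written stand-in for what protoc-gen-go emits for:
//
//	enum Status {
//	  STATUS_UNSPECIFIED = 0;
//	  STATUS_ACTIVE = 1;
//	}
//
// The style guide's mandatory value prefix combines with Go's
// EnumName_VALUE naming scheme to produce doubly-prefixed identifiers.
type Status int32

const (
	Status_STATUS_UNSPECIFIED Status = 0
	Status_STATUS_ACTIVE      Status = 1
)

func main() {
	s := Status_STATUS_ACTIVE // "Status" appears twice in every use site
	fmt.Println(s)
}
```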
This kind of implies that the generated code is being checked into a repo.
While that works, it's not the way things are done at Google, where protobuf codegen happens at build time, and generated code is placed in a temporary build/dist directory.
Either way, you shouldn't need to do any maintenance on protobuf-generated code, whether you add it to a repo or use it as a temporary build artifact.
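A minimal sketch of that build-time flow, assuming `protoc` and the Go plugins are on `PATH`; the file and directory names are illustrative, not canonical:

```shell
# Regenerate into a throwaway build directory instead of committing output.
# build/gen can be added to .gitignore and recreated on every build.
mkdir -p build/gen
protoc --go_out=build/gen --go-grpc_out=build/gen api.proto
```

The same command typically lives behind a Makefile target or a `//go:generate` directive so every developer and CI run produces the code the same way.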
It's actually OK that not everything needs to accommodate every single environment and situation. I'd personally like to see some _more_ RPC mechanisms for service-to-service use that don't need to accommodate lowest-common-denominator browser HTTP requirements; there are plenty of situations where no browser is involved.
- While nearly all fields are forced to be optional/nullable, lists and maps can only be empty, not null.
- No generics (leads to more repetition in some cases).
- Existing custom types are not supported. The well-known types (WKT) thus require hardcoded codegen (a mistake, IMO). This limits flexibility or requires much more manual code. For example, if a codebase uses an Instant type (instead of the standard library's DateTime) to represent UTC time, there is no built-in way to automate that mapping, even though Instant could map to the same over-the-wire format as DateTime (which has hardcoded support). If that kind of extension were supported, even specific cases like mapping a collection of timestamps to a double-delta-encoded byte array on the wire could be handled. None of this would require changes to the underlying wire format, just more flexible codegen.
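The missing-hook problem described in the last bullet can be sketched in Go. `Instant` here is the hypothetical application type from the comment, and `WireTimestamp` mirrors the seconds-plus-nanos wire shape of `google.protobuf.Timestamp`; since protoc offers no way to map a custom type onto that shape, the conversion must be hand-written at every boundary:

```go
package main

import (
	"fmt"
	"time"
)

// Instant is a hypothetical application-specific UTC time type, standing in
// for the comment's example of a codebase that avoids the standard type.
type Instant struct{ UnixNanos int64 }

// WireTimestamp mirrors the wire shape of google.protobuf.Timestamp
// (seconds since epoch + nanosecond remainder). Codegen can't be told to
// target Instant, so these adapters are written and maintained by hand.
type WireTimestamp struct {
	Seconds int64
	Nanos   int32
}

func (i Instant) ToWire() WireTimestamp {
	return WireTimestamp{Seconds: i.UnixNanos / 1e9, Nanos: int32(i.UnixNanos % 1e9)}
}

func FromWire(w WireTimestamp) Instant {
	return Instant{UnixNanos: w.Seconds*1e9 + int64(w.Nanos)}
}

func main() {
	now := Instant{UnixNanos: time.Now().UnixNano()}
	fmt.Println(FromWire(now.ToWire()) == now) // round-trips losslessly
}
```

With pluggable codegen, these adapters could be emitted once from an annotation in the `.proto` file instead of being repeated at every call site.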
Who wants to use one tech stack for microservices and an entirely different one for the frontend? Better to just use the same one everywhere.
But the majority of these criticisms seem really superficial to me. Who cares that the API is inconsistent between languages? Some languages may be better suited for certain implementation styles to begin with.
Also, regarding the reflection API: I was under the impression that codegenned protobuf code serializes directly and doesn't use reflection? Thrift worked that way, so maybe I'm confused.
Protobuf is a Google-internal project, with opaque governance and no conference.
I find it striking that the one has such tight dependencies on the other. Indeed the article is mostly about protobuf.
The UX drives the shape of data it needs from back-ends.
Any UI work I do is interactive and fast. Anything that hinders that, like code generation, is a drag. Adding a suitable new endpoint or evolving data shapes coming from existing endpoints happens instantaneously in my dev environment.
I value that flexibility over tightly controlled specs. I might eventually add types when I'm reasonably sure I got the data shapes right.
I don't quite see how that is possible. ServeHTTP() seems like the most general HTTP interface. How could implementing that interface for a protocol built on top of HTTP result in a performance degradation!? If that is indeed the case, it would seem to imply a flaw in Go's standard http library, not gRPC.
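For reference, grpc-go's `*grpc.Server` does implement `http.Handler`, but upstream documents that path as experimental and slower than serving on gRPC's own listener: routed through `net/http`, the server can't apply the transport-level tuning of grpc-go's internal HTTP/2 stack. A minimal sketch of the slower path (certificate file names are placeholders; not tested here because it needs the grpc module and TLS material):

```go
package main

import (
	"log"
	"net/http"

	"google.golang.org/grpc"
)

func main() {
	// *grpc.Server satisfies http.Handler via its ServeHTTP method, so it
	// can be handed to the standard library's HTTP server directly.
	gs := grpc.NewServer()
	// Register services on gs here.

	// gRPC requires HTTP/2, which net/http negotiates via TLS ("h2");
	// plaintext would need an h2c wrapper (elided in this sketch).
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", gs))
}
```

The faster path is `gs.Serve(lis)` on a `net.Listener`, which is what the performance complaint compares against.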
This is the most important part, the codegen products for many languages are pretty slow.