September 3rd, 2024

gRPC: The Ugly Parts

The article highlights gRPC's drawbacks, including complex generated code, integration issues in Go, performance impacts from runtime reflection, lack of enforced requirements in proto3, and limited adoption due to a steep learning curve.

The article "gRPC: The Ugly Parts" by Kevin McDonald discusses the less favorable aspects of gRPC, a popular tool for microservices. While gRPC offers efficiency and performance, it has several drawbacks. One major issue is the complexity and verbosity of the code generated from protobuf definitions, which can hinder readability and maintainability. Language-specific quirks also complicate integration, particularly in Go, where gRPC's design diverges from standard HTTP practices. The generated code often relies on runtime reflection, which can slow down performance, and there are calls for better optimization in the standard protobuf compiler. The removal of required fields in proto3 has led to flexibility but also a lack of enforced requirements, which can be problematic. The steep learning curve associated with gRPC and protobuf, coupled with resistance from backend developers, has limited its adoption in web development. Additionally, concerns about Google's long-term commitment to gRPC and protobuf, along with the immature ecosystem lacking essential features like a package manager, further complicate its use. The article emphasizes the need for improved tooling and community support to enhance the gRPC experience.

- gRPC's generated code is often complex and difficult to navigate.

- Language-specific quirks can hinder integration, particularly in Go.

- The reliance on runtime reflection in generated code can impact performance.

- The removal of required fields in proto3 offers flexibility but lacks enforcement (see the sketch after this list).

- The steep learning curve and limited ecosystem maturity hinder broader adoption.
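
To make the proto3 point concrete, here is a minimal sketch in Go (assuming the google.golang.org/protobuf module; a well-known type stands in for an app-specific message) of why "required" can't be enforced at decode time:

    package main

    import (
        "fmt"

        "google.golang.org/protobuf/proto"
        "google.golang.org/protobuf/types/known/timestamppb"
    )

    func main() {
        // An empty wire payload unmarshals without error: proto3 has no
        // notion of a required field, so every field silently takes its
        // zero value and the application must validate presence itself.
        var ts timestamppb.Timestamp
        if err := proto.Unmarshal(nil, &ts); err != nil {
            panic(err)
        }
        fmt.Println(ts.GetSeconds(), ts.GetNanos()) // prints: 0 0
    }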

AI: What people are saying
The comments reflect a range of opinions on gRPC and protobuf, with several recurring themes.
  • Criticism of protobuf's limitations, such as lack of generics and support for custom types, which complicates code generation and flexibility.
  • Discussion on the relationship between gRPC and protobuf, emphasizing their different governance and the tight coupling in usage.
  • Concerns about front-end support for gRPC, with many users finding it inadequate for modern web development needs.
  • Debate over the readability and maintainability of generated code, with some arguing it should not be a concern if handled correctly.
  • General skepticism about the long-term commitment of Google to these projects, despite their deep integration into Google's infrastructure.
16 comments
By @cbarrick - 3 months
> There’s always a lingering question about Google’s long-term commitment to gRPC and protobuf. Will they continue to invest in these open-source projects, or could they pull the plug if priorities shift?

Google could not function as a company without protobuf. It is ingrained deeply into every inch of their stack.

Likewise, gRPC is the main public-facing interface for GCP. It's not going anywhere.

By @kyle787 - 3 months
Originally, I was going to complain that this is more of a critique of the gRPC ecosystem than of the protocol itself.

IMO, readability of generated code is largely a non-concern for the vast majority of use cases. And if anything, it's more a criticism of the codegen tool than of gRPC. Same with the complaints around the HTTP server used with Go.

However, I totally agree with the criticisms of the enum naming conventions. It's an abomination and super leaky, made worse by the fact that it's part of the official(?) style guide: https://protobuf.dev/programming-guides/style/#enums
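
For anyone who hasn't hit this, a sketch of how the prefixing rule leaks into generated Go (FooBar is a made-up enum; the identifier shape matches protoc-gen-go's output):

    package pb

    // Given a proto3 enum that follows the style guide:
    //
    //   enum FooBar {
    //     FOO_BAR_UNSPECIFIED = 0;
    //     FOO_BAR_FIRST_VALUE = 1;
    //   }
    //
    // protoc-gen-go emits Go identifiers that repeat the enum name twice,
    // because the proto value names must already carry the enum-name prefix:
    type FooBar int32

    const (
        FooBar_FOO_BAR_UNSPECIFIED FooBar = 0
        FooBar_FOO_BAR_FIRST_VALUE FooBar = 1
    )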

By @danans - 3 months
> Even though it’s not meant to be hand-edited, this can impact code readability and maintainability

This kind of implies that the generated code is being checked into a repo.

While that works, it's not the way things are done at Google, where protobuf codegen happens at build time, and generated code is placed in a temporary build/dist directory.

Either way, you shouldn't need to do any maintenance on protobuf-generated code, whether you add it to a repo or use it as a temporary build artifact.
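
Outside Google, a common way to approximate that build-time flavor in Go is a go:generate hook. A sketch, where api.proto and the gen/ output directory are assumptions:

    package service

    // Run via `go generate ./...`; output lands in gen/, which can be
    // git-ignored rather than checked in. Assumes api.proto sits next to
    // this file and the protoc-gen-go / protoc-gen-go-grpc plugins are
    // on PATH.
    //go:generate protoc --go_out=gen --go-grpc_out=gen api.proto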

By @denysvitali - 3 months
Wow, what a nice article! Every point of it matches my experience (mostly positive) and buffrs [1] is a tool I wasn't aware of. Thanks for sharing this article!

[1]: https://github.com/helsing-ai/buffrs?tab=readme-ov-file

By @FridgeSeal - 3 months
I 100% agree with the enum rules, the frustrating lack of required, but I do disagree with the “oh no FE couldn’t use it out of the box”.

It’s actually ok that not everything need accomodate every single environment and situation. I’d personally like to see some _more_ RPC mechanisms for service-to-service that don’t need to accommodate the lowest-common-denominator-browser-http-requirements, there’s plenty of situations where there’s no browser involved.

By @zigzag312 - 3 months
Some more bad parts related to protobuf:

- While nearly all fields are forced to be optional/nullable, lists and maps can only be empty, not null.

- No generics (leads to more repetition in some cases).

- Custom existing types are not supported, so the well-known types (WKT) require hardcoded codegen (a mistake, IMO). This limits flexibility or requires much more manual code. For example, if I have a codebase that uses an Instant type (instead of DateTime from the standard library) to represent UTC time, there is no built-in way to automate that mapping, even though it could map to the same over-the-wire format as DateTime (which has hardcoded support). If that kind of extension were supported, even specific cases like mapping a collection of timestamps to a double-delta-encoded byte array on the wire could be handled, without any change to the underlying wire format (just more flexible codegen). A sketch of the resulting glue code follows below.
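
To make that concrete in Go terms, a minimal sketch of the hand-written glue such a codebase ends up with (Instant is the commenter's hypothetical domain type; timestamppb is the real WKT package):

    package timeglue

    import (
        "time"

        "google.golang.org/protobuf/types/known/timestamppb"
    )

    // Instant is a hypothetical domain type for UTC time, standing in for
    // any custom type the codegen can't be taught about.
    type Instant struct{ t time.Time }

    // Because generated code only knows the hardcoded WKT mapping
    // (google.protobuf.Timestamp <-> time.Time in Go), every message
    // boundary needs hand-written conversions like these instead of a
    // codegen hook:
    func instantToProto(i Instant) *timestamppb.Timestamp {
        return timestamppb.New(i.t)
    }

    func instantFromProto(ts *timestamppb.Timestamp) Instant {
        return Instant{t: ts.AsTime().UTC()}
    }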

By @lopkeny12ko - 3 months
The criticisms the author levies against protobuf are unfair. Inside Google, all source code lives in a monorepo, and depending on other protobuf files is a matter of sharing them as a Bazel library; it is trivial. There is no need for a package manager because there is nothing for one to solve.

By @nprateem - 3 months
I've tried to use gRPC several times over the years, but the lack of front-end support just always kills it for me. It'd be such a killer feature to have all that gRPC offers plus support for JS (or an easy way to deploy grpc-web that doesn't have loads of gotchas), but every time I look I realise it's not going to work. I've been surprised how little that situation changed over the 5 years or so I was tracking the project. I don't even consider it any more.

Who wants to use one tech stack for microservices and an entirely different one for the frontend? Better to just use the same one everywhere.

By @zer0-c00l - 3 months
There's a definite UX problem imo with having to manage protobuf synchronization between repos.

But the majority of these criticisms seem really superficial to me. Who cares that the API is inconsistent between languages? Some languages may be better suited for certain implementation styles to begin with.

Also, regarding the reflection API: I was under the impression that the codegenned protobuf code serializes directly and doesn't use reflection? Thrift worked that way, so maybe I'm confused.
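
For context, a minimal sketch of how marshalling is invoked in the current google.golang.org/protobuf (v2) Go API: the entry point is the package-level proto.Marshal, which reaches the encoder through the message's ProtoReflect() view (the runtime keeps a table-driven fast path behind that interface), rather than through a generated per-message Marshal() method:

    package main

    import (
        "fmt"

        "google.golang.org/protobuf/proto"
        "google.golang.org/protobuf/types/known/timestamppb"
    )

    func main() {
        msg := timestamppb.Now() // any generated message works the same way
        // proto.Marshal takes the proto.Message interface and dispatches via
        // msg.ProtoReflect(); the v2 API exposes no generated Marshal()
        // method on the message itself.
        b, err := proto.Marshal(msg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("encoded %d bytes\n", len(b))
    }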

By @bboreham - 3 months
Whilst gRPC is nearly always used together with protobuf, I think it’s important to note they are different projects. gRPC is a CNCF project with open governance. gRPC people show up at industry conferences like KubeCon and run their own conference.

Protobuf is a Google-internal project, with opaque governance and no conference.

I find it striking that the one has such tight dependencies on the other. Indeed the article is mostly about protobuf.

By @rollulus - 3 months
The remark about reflection in the Go implementation surprised me. I always treated the generated code as a black box, but occasionally saw its diffs, and assumed the byte blobs were clever generated marshalling code. If not, what are they used for?
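
For what it's worth, those blobs appear to be the file's serialized descriptor rather than marshalling code. An abridged sketch of current protoc-gen-go output, assuming a file named api.proto:

    package pb

    // Abridged from typical protoc-gen-go output: the blob is the
    // wire-encoded google.protobuf.FileDescriptorProto for the source file,
    // parsed once at init time to build protobuf reflection metadata and
    // registry entries. It is not hand-unrolled marshalling code.
    var file_api_proto_rawDesc = []byte{
        0x0a, 0x09, 0x61, 0x70, 0x69, 0x2e, 0x70, 0x72, // field 1 (name): "api.proto"
        0x6f, 0x74, 0x6f,
        // ... rest of the serialized descriptor elided ...
    }
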
By @beders - 3 months
One of the reasons I'd disregard gRPC for front-end development is my belief that data exchange between front and back-ends should be driven by the front-end.

The UX drives the shape of data it needs from back-ends.

Any UI work I do is interactive and fast. Anything that hinders that, like code generation, is a drag. Adding a suitable new endpoint or evolving data shapes coming from existing endpoints happens instantaneously in my dev environment.

I value that flexibility over tightly controlled specs. I might eventually add types when I'm reasonably sure I got the data shapes right.

By @alex_smart - 3 months
>They eventually added a ServeHTTP() interface to grpc-go as an experimental way to use the HTTP server from the Go standard library but using that method results in a significant loss of performance.

I don't quite see how that is possible. ServeHTTP() seems like the most general HTTP interface there is. How could implementing that interface for a protocol built on top of HTTP result in a performance degradation!? If that is indeed the case, it would seem to imply a flaw in Go's standard HTTP library, not gRPC.
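
For reference, the two routes in question, as a sketch against grpc-go's public API (certificate paths are placeholders):

    package main

    import (
        "log"
        "net"
        "net/http"

        "google.golang.org/grpc"
    )

    func main() {
        s := grpc.NewServer()

        // Route 1: gRPC's own HTTP/2 transport, the usual fast path.
        lis, err := net.Listen("tcp", ":50051")
        if err != nil {
            log.Fatal(err)
        }
        go func() { log.Fatal(s.Serve(lis)) }()

        // Route 2 (experimental): *grpc.Server implements http.Handler, so
        // it can be mounted on net/http's TLS/HTTP-2 server; grpc-go
        // documents this path as slower. cert.pem/key.pem are placeholders.
        log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", s))
    }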

By @Redoubts - 3 months
> The generated code isn’t even that fast

This is the most important part: the generated code for many languages is pretty slow.

By @dboreham - 3 months
The one useful purpose protobuf serves: if you run across someone who is enthusiastic about it, then you know to never trust anything that person says.