Three ways to think about Go channels
Channels in Golang are locked, buffered queues for message passing. They integrate with goroutines, select blocks, and more, offering efficient concurrency. Understanding their role and benefits is crucial for Golang developers.
The article discusses three key aspects of channels in Golang. Firstly, channels are described as locked, buffered queues, with senders adding to the queue and receivers reading from it. Secondly, channels are part of a broader ecosystem of concurrency primitives in Golang, including goroutines, select blocks, timeouts, tickers, and wait groups. Lastly, the article delves into message passing through channels, emphasizing the efficiency and convenience of channels over user-implemented queues thanks to Golang's runtime. It highlights the importance of understanding channels as a queue abstraction, their integration with other concurrency primitives, and their efficiency compared to alternative concurrency options, and it concludes by noting the usefulness of these abstractions in different contexts and inviting readers to engage with the Dolt team for further discussions on Golang performance and related topics.
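To make the queue view concrete, here is a minimal sketch (not from the article; the names are illustrative) of a buffered channel acting as a bounded queue, with a select block adding a timeout:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// A buffered channel behaves like a bounded queue: sends block once
	// the buffer is full, receives block when it is empty.
	jobs := make(chan int, 2)

	go func() {
		for i := 1; i <= 3; i++ {
			jobs <- i // sender side of the queue
		}
		close(jobs)
	}()

	for {
		select {
		case j, ok := <-jobs:
			if !ok {
				return // channel closed and drained
			}
			fmt.Println("got", j)
		case <-time.After(time.Second):
			fmt.Println("timed out waiting for work")
			return
		}
	}
}
```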
At least old-school syntax like “GOTO foobar_step_23” and “LABEL foobar_step_23” is grepable.
I greatly prefer programs that “color” functions sync/async, and that use small state machines to coordinate shared state only when necessary.
Go technically supports this, but it doesn’t seem idiomatic (unlike Rust, the Go compiler won’t help with data races, and unlike C++, people don’t assume the language is a giant foot-gun).
Golang needs a better way to manage channels. Naming them and waiting on them would simplify a lot of crusty stuff. Naming is becoming possible: goroutines can already carry pprof labels (which are even inherited between goroutines!), so just adding pprof labels to stack traces would help a lot.
But unfortunately, Go creators are allergic to anything that brings thread-local variables closer.
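For what it's worth, the pprof-label mechanism mentioned above already exists in runtime/pprof; a minimal sketch (the label names are made up):

```go
package main

import (
	"context"
	"runtime/pprof"
)

func main() {
	ctx := context.Background()
	// Labels set via pprof.Do are attached to the current goroutine and
	// inherited by goroutines started while the function runs.
	pprof.Do(ctx, pprof.Labels("worker", "ingest"), func(ctx context.Context) {
		done := make(chan struct{})
		go func() { // this goroutine inherits the "worker=ingest" label
			defer close(done)
			// ... do labelled work, visible in CPU and goroutine profiles ...
		}()
		<-done
	})
}
```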
I think the point of Pike's quote is that when a goroutine receives a pointer from a channel, it takes ownership of the values the pointer references and the other goroutines give up that ownership. This is a discipline Go programmers should follow, not a rule enforced by the language.
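A minimal sketch of that discipline (the names are illustrative): once the pointer is sent, the sender stops touching it, and nothing but convention enforces this.

```go
package main

import "fmt"

type Job struct {
	ID     int
	Result string
}

func main() {
	ch := make(chan *Job)

	go func() {
		j := &Job{ID: 1}
		ch <- j
		// By convention the sender gives up ownership here and must not
		// read or write *j anymore; the language does not check this.
	}()

	j := <-ch // the receiver now "owns" the Job and may mutate it freely
	j.Result = "done"
	fmt.Println(j.ID, j.Result)
}
```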
I can recommend writing a utility function which accepts a set of anonymous functions and a concurrency factor. I've since extended it with a version that accepts a rate limiter, jitter factor, and retry count. This handles most cases where I need concurrency (batches) in a simple and safe way.
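A minimal sketch of such a utility, under my own naming and without the rate-limiter/jitter/retry extensions the commenter mentions: a semaphore channel caps concurrency and a WaitGroup waits for everything to finish.

```go
package main

import (
	"fmt"
	"sync"
)

// RunConcurrently runs each task, at most limit at a time.
// Hypothetical helper; error handling, rate limiting, jitter,
// and retries are omitted.
func RunConcurrently(limit int, tasks ...func()) {
	sem := make(chan struct{}, limit) // counting semaphore
	var wg sync.WaitGroup
	for _, task := range tasks {
		wg.Add(1)
		sem <- struct{}{} // acquire a slot (blocks when the limit is reached)
		go func(t func()) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			t()
		}(task)
	}
	wg.Wait()
}

func main() {
	RunConcurrently(2,
		func() { fmt.Println("a") },
		func() { fmt.Println("b") },
		func() { fmt.Println("c") },
	)
}
```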
As for uncovered topics or a part 2: long-running Go channels can be a nightmare.
You need to implement some kind of observability for them.
You have to find a way to stop them, start them again, upgrade them, and even version the payload, to keep your channels and goroutines manageable.
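One common way to get that stop/restart handle is to pair the consuming goroutine with a context; a minimal sketch, not from the comment:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// consume drains the channel until the context is cancelled, giving the
// owner an explicit handle to stop (and later restart) the goroutine.
func consume(ctx context.Context, in <-chan int) {
	for {
		select {
		case <-ctx.Done():
			fmt.Println("consumer stopped:", ctx.Err())
			return
		case v, ok := <-in:
			if !ok {
				return // producer closed the channel
			}
			fmt.Println("processed", v)
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	in := make(chan int)
	go consume(ctx, in)
	in <- 1
	in <- 2
	cancel() // stop the long-running consumer
	time.Sleep(10 * time.Millisecond)
}
```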
A common problem with channel overuse is so-called goroutine leaks. They happen more often than most devs think, especially when library writers start goroutines in init() to maintain a cache or run some background cleanup job. It's good to scan all the packages you use for such surprises.
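A minimal example of the leak pattern (hypothetical names): a worker blocked forever on a send that nobody will ever receive.

```go
package main

import "fmt"

// lookup starts a worker that sends its result on an unbuffered channel.
// If the caller never reads the result (e.g. it returned early on a
// timeout), the worker stays blocked on the send forever: a goroutine leak.
func lookup() <-chan string {
	out := make(chan string) // unbuffered: the send blocks until received
	go func() {
		out <- "result" // leaks if no one ever reads from out
	}()
	return out
}

func main() {
	_ = lookup() // result discarded; the goroutine inside lookup leaks
	fmt.Println("main moves on, the leaked goroutine stays blocked")
	// A buffered channel (make(chan string, 1)) or a context-aware select
	// in the worker would let it exit instead of leaking.
}
```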
You might also run into concepts like "durable execution" or "workflow" engines down the road.
I recently wrote a simple function that maps out tasks concurrently and can be canceled by a context.WithCancel or when a task fails. The things that cancel the task mapper need to coordinate very carefully on both sides of the channel so that channels are closed and publishers stop sending in the right sequence. The number of switches/cancels/signals quickly explodes around the coordination if you get too cute about how you do it (e.g. reading from the error channel to stop the work).
Frankly, I'm still not sure I got it right [1]. And this is probably the most unsettling part: rereading the code, I can't possibly remember the cancellation semantics and ordering of the short mess I created. Now I'm wondering whether mutexes would have made for more understandable code.
[1] https://gist.github.com/pnegahdar/1783f0a4e03dc9a3da43478994...
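The coordination described above (cancel on context, cancel on first failure) is roughly what golang.org/x/sync/errgroup packages up; a minimal sketch, not the commenter's gist:

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// mapTasks runs every task concurrently; the derived context is cancelled
// as soon as any task returns an error, so the others can stop early.
func mapTasks(ctx context.Context, tasks []func(context.Context) error) error {
	g, ctx := errgroup.WithContext(ctx)
	for _, task := range tasks {
		t := task
		g.Go(func() error { return t(ctx) })
	}
	return g.Wait() // first non-nil error, after all goroutines return
}

func main() {
	err := mapTasks(context.Background(), []func(context.Context) error{
		func(ctx context.Context) error { fmt.Println("ok"); return nil },
		func(ctx context.Context) error { return fmt.Errorf("boom") },
	})
	fmt.Println("result:", err)
}
```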
Related
Interface Upgrades in Go (2014)
The article delves into Go's interface upgrades, showcasing their role in encapsulation and decoupling. It emphasizes optimizing performance through wider interface casting, with examples from io and net/http libraries. It warns about complexities and advises cautious usage.
Timeliness without datagrams using QUIC
The debate between TCP and UDP for internet applications emphasizes reliability and timeliness. UDP suits real-time scenarios like video streaming, while QUIC with congestion control mechanisms ensures efficient media delivery.
Atomic Operations Composition in Go
The article discusses atomic operations composition in Go, crucial for predictable results in concurrent programming without locks. Examples show both reliable and unpredictable outcomes, cautioning about atomics' limitations compared to mutexes.