React has "Context", SwiftUI has "@Environment", Emacs Lisp has dynamic scope (so I heard). C# has AsyncLocal, Node.js has AsyncLocalStorage.
This is one of those ideas that at first seem really wrong (isn't it just a global variable in disguise?) but is actually very useful and can result in cleaner code with fewer globals or fewer superfluous function arguments. Imagine passing a logger like this, or feature flags. Or imagine setting "debug = True" before a function and having it apply to everything down the call stack (but not in other threads/async contexts).
Implicit context (properly integrated into the type system) is something I would consider in any new language. And it might also be a solution here (although I would say such a "clever" and unusual feature would be against the goals of Go).
We have DB sharding, so the DB layer needs to figure out which shard to choose. It does that by grabbing the user/tenant ID from the context and picking the right shard. Without contexts, this would be way harder—unless we wanted to break architecture rules, like exposing domain logic to DB details, and it would generally just clutter the code (passing tenant ID and shard IDs everywhere). Instead, we just use the "current request context" from the standard lib that can be passed around freely between modules, with various bits extracted from it as needed.
What are the alternatives, though? Syntax sugar for retrieving variables from some sort of goroutine-local storage? Not good; we want things to be explicit. Force everyone to roll their own context-like interfaces, since a standard lib's implementation can't generalize well for all situations? That's exactly why contexts were introduced: nobody wanted to deal with mismatched custom implementations from different libs. Split it into separate "data context" and "cancellation context"? Okay, now we're passing around two variables instead of one in every function call. DI to the rescue? You can hide userID/tenantID with clever dependency injection, and that's what we did before we introduced contexts to our codebase, but it meant allocating an individual dependency tree for each request (i.e. we embedded userID/tenantID and other request details inside request-specific service instances, to hide them from the domain layer and simplify domain logic), and that stressed the GC.
https://news.ycombinator.com/item?id=11240681 (March 2016)
* the Lisp that HN is written in
In my experience, these "thread-local" implicit contexts are a pain, for several reasons. First of all, they make refactoring harder: things like moving part of the computation to a thread pool, making part of the computation lazy, calling something which ends up modifying the implicit context behind your back without you knowing, etc. All of that means you have to manually save and restore the implicit context (inheritance doesn't help when the thread doing the work is not under your control). And for that, you have to know which implicit contexts exist (and how to save and restore them), which leads to my second point: they make the code harder to understand and debug. You have to know and understand each and every implicit context which might affect code you're calling (or code called by code you're calling, and so on). As proponents of another programming language would say, explicit is better than implicit.
I would like to think that somebody better at type systems than me could provide a way to encode it into one that doesn't require typing out the dynamic names and types on every single function, but can instead infer them based on what other functions are called therein. But even assuming you had that, I'm not sure how many of the (very real) issues you describe it would ameliorate.
I think for golang the answer is probably "no, that sort of powerful but dangerous feature is not what we're going for here" ... and yet when used sufficiently sparingly in other languages, I've found it incredibly helpful.
Trade-offs all the way down as ever.
https://odin-lang.org/docs/overview/#implicit-context-system
Emacs Lisp retains dynamic scope, but it hasn't been the default for some time now, in line with the other Lisps that remain in use. Dynamic scope is one of the greatest features of the Lisp family, and it's sad to see it missing almost everywhere else - where, as you noted, it's being reinvented, but poorly, because it's not a first-class language feature.
On that note, the most common case of dynamic scope, one almost everyone is familiar with, is environment variables. That's what they're for. Since most devs these days are not familiar with the idea of dynamic scope, the industry has accumulated a lot of peculiar practices and footguns around environment variables, all stemming from misunderstanding what they are for.
> This is one of those ideas that at first seem really wrong (isn't it just a global variable in disguise?)
It's not. It's about scoping a value to the call stack. Correctly used, rebinding a value to a dynamic variable should only be visible to the block doing the rebinding, and everything below it on the call stack at runtime.
> Implicit context (properly integrated into the type system) is something I would consider in any new language.
That's the problem I believe is currently unsolved, and possibly unsolvable in the overall programming paradigm we work under. One of the main practical benefits of dynamic scope is that place X can set up some value for place Z down on the call stack, while keeping everything in between X and Z oblivious of this fact. Now, this is trivial in dynamically typed language, but it goes against the principles behind statically-typed languages, which all hate implicit things.
(FWIW, I love types, but I also hate having to be explicit about irrelevant things. Since whether something is relevant or not isn't just a property of code, but also a property of a specific programmer at specific time and place, we're in a bit of a pickle. A shorter name for "stuff that's relevant or not depending on what you're doing at the moment" is cross-cutting concerns, and we still suck at managing them.)
https://www.gnu.org/software/emacs/manual/html_node/elisp/Va...
> By default, the local bindings that Emacs creates are dynamic bindings. Such a binding has dynamic scope, meaning that any part of the program can potentially access the variable binding. It also has dynamic extent, meaning that the binding lasts only while the binding construct (such as the body of a let form) is being executed.
It’s also not really germane to the GP’s comment, as they’re just talking about dynamic scoping being available, which it will almost certainly always be (because it’s useful).
But many statically typed languages allow throwing exceptions of any type. Contexts can be similar: "try catch" becomes "with value", "throw" becomes "get".
In Raku, `@foo` is a list (well, it does Positional anyway), while `@*foo` is a different variable that is additionally dynamically scoped. It's idiomatic to see `$*db` as a database handle to save passing it around explicitly, env vars are in `%*ENV`, things like that. It's nice to have the additional explicit reminder whenever you're dealing with a dynamic variable, in a way the language checks for you and yells at you for forgetting.

I would prefer to kick more of the complex things I do with types back to compile time, but a lot of static checks are there. More to the point, Raku's type system is quite expressive at runtime (that's what you get when you copy Common Lisp's homework, after all) and helpful for moving orthogonal concerns out into discrete, manageable things that feel like types to use, even if what they're doing is just a runtime branch that lives in the function signature. Doing stuff via subset types or roles or coercion types means whatever you do plays nicely with polymorphic dispatch, method resolution order, pattern matching, what have you.
In fact, I just wrote a little entirely type-level... thing? to clean up the body of an HTTP handler. It lifts everything into a role mix-in pipeline that runs from the database straight on through to live reloading of client-side elements. Processing sensor readings for textual display, generating HTML, customizing where and when the client fetches the next live update: it's all just the same pipeline applying roles to the raw values from the DB with the same infix operator (which just wraps a builtin non-associative operator to be left-associative, to free myself from all the parentheses).

Not getting bogged down in managing types all the time frees you up to do things like this when it's most impactful, or at least that's what I tell myself whenever I step on a rake I should have remembered was there.
¹ Or times where Raku bubbles types up to the end-user, like the help messages autogenerated from the type signature of MAIN. I often write "useless" type declarations such as `subset Domain-or-IP;` which match anything² so that the help message says --host[=Domain-or-IP] instead of --host[=Str] or whatever.

² Well, except junctions, whose current implementation I consider somewhat of a misstep, since they're not fundamentally also a list plus a context. It's a whole thing. In any case, this acts at the level of the type hierarchy that you want anyway.
Those who forget monads are doomed to reinvent dozens of limited single-purpose variants of them as language features.
It’s reasonable, I think, to want the dynamic scope but not the control-flow capabilities of monads, and in a language with mutability that might even be a better choice. (Then again, maybe not—SwiftUI is founded on Swift’s result builders, and those seem pretty much like monads by another name to me.) And I don’t think anybody likes writing the boilerplate you need to layer a dozen MonadReaders or -States on each other and then compose meaningful MonadMyLibraries out of them.
Finally, there’s the question of strong typing. You do want the whole thing to be strongly typed, but you don’t want the caller to write the entire dependency tree of the callee, or perhaps even to know it. Yet the caller may want to declare a type for itself. Allowing type signatures to be partly specified and partly inferred is not a common feature, and in general development seems to be backing away from large-scale type inference of this sort due to issues with compile errors. Not breaking ABI when the dependencies change (perhaps through default values of some sort) is a more difficult problem still.
(Note the last part can be repeated word for word for checked exceptions/typed errors. Those are also, as far as I’m aware, largely unsolved—and no, Rust doesn’t do much here except make the problem more apparent.)
Furthermore, in Go threads are spun up at process start, not at request time, so thread-local storage carries a leak risk or a cleanup cost. Contexts are all releasable after their processing ends.
I've grown to be a huge fan of Go for servers and context is one reason. That said, I agree with a lot of the critique and would love to see an in-language solution, but thread-local ain't it.
https://medium.com/androiddevelopers/under-the-hood-of-jetpa...
I'm still not sure how I feel about it. While more annoying, I think I'd rather see it than just have magic under the hood.
Explicitly passing data and lexical scoping is better for understandability.
Personally I've used the (ugly) Python contextvars for:
- SQS message ID, to allow extending message visibility from any place in the code
- scoped logging context in logstruct (structlog killer in development :D)
I no longer remember what I used Clojure dynvars for, probably something dumb.
That being said, I don't believe that "active" objects like DB connection/session/transaction are good candidates for a context var value. Programmers need to learn to push side effects up the stack instead. Flask-SQLAlchemy is not correct here.
Even Flask's request object being context-scoped is a bad thing since it is usually not a problem to do all the dispatching in the view.
Nevertheless: just having it, be it implicit or explicit, beats having to implement it yourself.
This is such a bad take.
ctx.Value is incredibly useful for passing around context of api calls. We use it a lot, especially for logging such context values as locales, ids, client info, etc. We then use these context values when calling other services as headers so they gain the context around the original call too. Loggers in all services pluck out values from the context automatically when a log entry is created. It's a fantastic system and serves us well. e.g.
log.WithContext(ctx).Errorf("....", err)

`ctx.Value` is an `any -> any` kv store that does not come with any documentation or type checking for which keys and values should be available. It's quick and dirty, but in a large code base it can be quite tricky to check whether you are passing too many values down the chain, or too few, and to handle the failure cases.
What if you just use a custom struct with all the fields you may need to be defined inside? Then at least all the field types are properly defined and documented. You can also use multiple custom "context" structs in different call paths, or even compose them if there are overlapping fields.
The docs https://pkg.go.dev/context#Context suggest a way to make it type-safe (use an unexported key type and provide getter/setter). Seems fine to me.
> What if you just use a custom struct with all the fields you may need to be defined inside?
Can't seamlessly cross module boundaries.
On a similar note, this is also why I highly dislike struct tags. They're string magic that should be used sparingly, yet we've integrated them into data parsing, validation, type definitions and who knows what else just to avoid a bit of verbosity.
It did not have to be this way; this is a shortcoming of Go itself. Generic interfaces make things a bit better, but the Go designers chose that dumb typing in the first place. The std lib itself is full of `interface{}` use.
context itself is an afterthought: people were building thread-unsafe, leaky code on top of http requests with no good way to scope variables that would scale concurrently.
I remember the web session lib back then, for instance - a hack.
ctx.Value is made for per-goroutine-scoped data; that's the whole point.
If it is an antipattern well, it is an antipattern designed by go designers themselves.
People who have takes like this have likely never zoomed out enough to understand how their software delivery ultimately affects the business. And if you haven't stopped to think about that you might have a bad time when it's your business.
As I understand it, they propose to pass the data explicitly, like a struct with fields for all possible request-scoped data.
I personally don't like context for value passing either, as it is easy to abuse in a way that it becomes part of the API: the callee is expecting something from the caller but there is no static check that makes sure it happens. Something like passing an argument in a dictionary instead of using parameters.
However, for "optional" data whose presence is not required for the behavior of the call, it should be fine. That sort of discipline has to be enforced on the human level, unfortunately.
So basically context.Context, except it can't propagate through third party libraries?
func CancellableOp(done chan error /* , args... */) {
	for {
		// ...
		// cancellable code:
		select {
		case <-something:
			// ...
		case err := <-done:
			// log error or whatever
		}
	}
}
Some compare the context "virus" to the async virus in languages that bolt an async runtime on top of sync syntax - but the main difference is that you can compose context-aware code with context-oblivious code (by passing context.Background()), and vice versa, with no problems. E.g. here's a context-aware wrapper for the standard `io.Reader` that is completely compatible with `io.Reader`:

type ioContextReader struct {
	io.Reader
	ctx context.Context
}
func (rc ioContextReader) Read(p []byte) (n int, err error) {
	done := make(chan struct{})
	go func() {
		n, err = rc.Reader.Read(p)
		close(done)
	}()
	select {
	case <-rc.ctx.Done():
		return 0, rc.ctx.Err()
	case <-done:
		return n, err
	}
}
func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	rc := ioContextReader{Reader: os.Stdin, ctx: ctx}
	// we can use rc in io.Copy as it is an io.Reader
	_, err := io.Copy(os.Stdout, rc)
	if err != nil {
		log.Println(err)
	}
}
For io.ReadCloser, we could call the `Close()` method when the context exits, or even better, use `context.AfterFunc(ctx, rc.Close)`. Contexts definitely have flaws - verbosity being the one I hate the most - but having them behave as ordinary values, just like errors, makes context-aware code more understandable and flexible.
And just like with errors, doing cancellation automatically makes code more prone to mistakes: when you forget the "on-cancel" code, your code gets cancelled but doesn't clean up after itself. With explicit contexts, forgetting to select on `ctx.Done()` means your code doesn't get cancelled at all, which makes the bug more obvious.
What will go wrong if one stores a Context in a struct?
I've done so for a specific use case, and did not notice any issues.
ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
reader := ioContextReader{Reader: r, ctx: ctx}
...
ctx, cancel = context.WithTimeout(ctx, 1*time.Second)
ctx = context.WithValue(ctx, "hello", "world")
...
func(ctx context.Context) {
	reader.Read(p) // does not time out after one second, does not contain hello/world.
	...
}(ctx)

1) You're calling Read() directly and don't need to use functions that strictly accept io.Reader - then just implement ReadContext:
func (rc ioContextReader) ReadContext(ctx context.Context, p []byte) (n int, err error) {
	done := make(chan struct{})
	go func() {
		n, err = rc.Reader.Read(p)
		close(done)
	}()
	select {
	case <-ctx.Done():
		return 0, ctx.Err()
	case <-done:
		return n, err
	}
}
Otherwise, just wrap the ioContextReader with another ioContextReader:

reader = ioContextReader{Reader: reader, ctx: ctx}

type ioContextReadCloser struct {
	io.ReadCloser
	ctx context.Context
	ch  chan *readReq
}
type readReq struct {
	p   []byte
	n   *int
	err *error
	m   sync.Mutex
}
func NewIoContextReadCloser(ctx context.Context, rc io.ReadCloser) *ioContextReadCloser {
	rcc := &ioContextReadCloser{
		ReadCloser: rc,
		ctx:        ctx,
		ch:         make(chan *readReq),
	}
	go rcc.readLoop()
	return rcc
}
func (rcc *ioContextReadCloser) readLoop() {
	for {
		select {
		case <-rcc.ctx.Done():
			return
		case req := <-rcc.ch:
			*req.n, *req.err = rcc.ReadCloser.Read(req.p)
			if *req.err != nil {
				req.m.Unlock()
				return
			}
			req.m.Unlock()
		}
	}
}
func (rcc *ioContextReadCloser) Read(p []byte) (n int, err error) {
	req := &readReq{p: p, n: &n, err: &err}
	req.m.Lock() // use plain mutex as signalling for efficiency
	select {
	case <-rcc.ctx.Done():
		return 0, rcc.ctx.Err()
	case rcc.ch <- req:
	}
	req.m.Lock() // wait for readLoop to unlock
	return n, err
}
Again, this is not to say this is the right way, only that it is possible, and it does not require any of the shenanigans that e.g. Python needs when mixing sync & async code, or even different async libraries.

As others have already mentioned, there won't be a Go 2. Besides, I really don't want another verbose method for cancellation; error handling is already bad enough.
It's funny, it really was just using strings as keys until quite recently, and obviously there were collisions and there was no way to "protect" a key/value, etc.
Now the convention is to use a key with a private type, so no more collisions. The value you get is still untyped and needs to be cast, though. Also, many older libraries still use strings.
I kind of do wish we had goroutine local storage though :) Passing down the context of the request everywhere is ugly.
I've seen plenty of issues in Java codebases where there was an assumption some item was in the Thread Local storage (e.g. to add some context to a log statement or metric) and it just wasn't there (mostly because code switched to a different thread, sometimes due to a "refactor" where stuff was renamed in one place but not in another).
What if for the hypothetical Go 2 we add an implicit context for each goroutine. You'd probably need to call a builtin, say `getctx()` to get it.
The context would be inherited by all go routines automatically. If you wanted to change the context then you'd use another builtin `setctx()` say.
This would have the usefulness of the current context without having to pass it down the call chain everywhere.
The cognitive load is two builtins, getctx() and setctx(). It would probably be quite easy to implement too - just stuff a context.Context in the G.
Really? Even years later in 2025, this never ended up being true. Unless your definition of 'general purpose' specifically excludes anything UI-related, like on desktop, web or mobile, or AI-related.
I know it was written in 2017, but reading it now in 2025 and seeing the author compare it to Python of all languages in the context of its supposed 'general purpose'-ness is just laughable. Even Flutter doesn't support Go; granted, that seems like a very deliberate decision to justify Dart's existence.
Link to previous discussion: https://news.ycombinator.com/item?id=14958989
> https://golang.org/doc/faq#What_is_the_purpose_of_the_projec...: "By its design, Go proposes an approach for the construction of system software on multicore machines."
> That page points to https://talks.golang.org/2012/splash.article for "A much more expansive answer to this question". That article states:
> "Go is a programming language designed by Google to help solve Google's problems [...] More than most general-purpose programming languages, Go was designed to address a set of software engineering issues that we had been exposed to in the construction of large server software."
By that definition no language is general purpose. There is no language today that excels in GUI (desktop/mobile), web development, AI, cloud infrastructure, and all the other stuff like systems and embedded... and all at the same time.
For instance I have never seen or heard of a successful Python desktop app (or mobile for that matter).
n, err := r.Read(context.TODO(), p)

> put a bullet in my head, please.

Manually passing around a context everywhere sounds about as palatable as manually checking every return for error.
Context should go away for Go 2 - https://news.ycombinator.com/item?id=14951753 - Aug 2017 (40 comments)
What a nice attitude.
I was unsuccessful in conveying the same message at my previous company (apart from the getting-fired part). All around the codebase you'd see functions with official arguments and unofficial ones via ctx that would panic everything if you forgot one was used 3 layers down (not kidding). The only use case I've seen so far that is not terrible for context values is having an opentelemetry layer, as it makes things transparent: as a caller you don't have to give a damn how the telemetry is operated under the hood.
Okay... so they dodged the thing I thought was going to be interesting: how would you solve passing state? E.g. if I write a middleware for net/http, I have to duplicate the entire http.Request and add my value to it.
Yeah, okay. I tried to find reasons you'd want to use this feature and ultimately found that I really, really dislike it.