slog.Info("hello, world", "user", os.Getenv("USER"))
It's a little magical that "user" is a key. So what if you have multiple key-value pairs? Arguably it's most likely going to be obvious which are the keys, but having every other argument be a key and the rest values seems a little clumsy.

I really like Python's approach, where you can write user="value"; it makes things a bit clearer.
logger.Info("failed to fetch URL",
// Structured context as strongly typed Field values.
zap.String("url", url),
zap.Int("attempt", 3),
zap.Duration("backoff", time.Second),
)
They also have zap.Error(err), which generates the "error" key by convention.

For example: https://pkg.go.dev/golang.org/x/exp/slog#example-Group
sugar.Infow("failed to fetch URL",
"url", "http://example.com",
"attempt", 3,
"backoff", time.Second,
)
Really, slog presents mostly the same two APIs as Zap, with a single entrypoint.

You... add them afterwards? It's really just a plist (https://www.gnu.org/software/emacs/manual/html_node/elisp/Pr...), which is hardly novel. The method takes any number of parameters and pairs them up as key, value.
Or if you really hate yourself, you use LogAttrs with explicitly constructed Attr objects.
> I really like Pythons approach where you can have user="value" it makes things a bit more clear.
The trouble is, Go doesn't have keyword parameters so that doesn't work.
I don’t mind it. You can use LogAttrs if you want to be explicit.
Although I do wonder if there’s anything tricky with the type system that is preventing something like this from being supported: https://go.dev/play/p/_YV7sYdnZ5V
The reason it doesn’t use maps is that maps are significantly slower; TFA has an entire section on performance.
However if you prefer that interface and don’t mind the performance hit, nothing precludes writing your own Logger (it’s just a façade over the Handler interface) and taking maps.
slog.Info("hello, world", slog.String("user", os.Getenv("USER")))
[1]: https://pkg.go.dev/log/slog#hdr-Attrs_and_Values

use slog::info;
...
info!("hello, world"; "user" => std::env::var("USER"));

For better or for worse, Go doesn’t have that.
It would definitely not be better from the point of view of
> We wanted slog to be fast.
json_build_object ( VARIADIC "any" ) → json
jsonb_build_object ( VARIADIC "any" ) → jsonb
Builds a JSON object out of a variadic argument list.
By convention, the argument list consists of alternating keys and values.
Key arguments are coerced to text; value arguments are converted as per to_json or to_jsonb.
json_build_object('foo', 1, 2, row(3,'bar')) → {"foo" : 1, "2" : {"f1":3,"f2":"bar"}}

json_build_object(
  'foo', 1,
  2, row(3, 'bar')
)

I’ve been working on a side project to bring something like the DataDog log explorer to the local development environment. The prototype I made has already been extremely helpful in debugging issues in a very complex async ball of Rails code. Does something like that sound useful to other folks? Does it already exist and I just can’t find it?
Very much so!
One piece I'd like to have is a good output format from my program. Right now I have stuff outputting logs in JSON format to stderr/local file (then picked up by the Opentelemetry OTLP collector, Datadog agent, AWS CloudWatch, whatever) and the traces/spans actually sent from my process to the collector over the network. It baffles me why the traces/spans are done in that way rather than placed in a local file (and for compressibility, ideally the same file as the logs). The local file method would make it easier to have simple local-focused tools, would lower the requirements for outputting these records (no need to have async+http client set up, just the writer thread), and would better handle the collector being in a bad mood at the moment.
That's the most important piece, but if getting that done requires inventing a new format (it seems to?!?), there are some other details I'd like it to do well. You should be able to see all the stuff associated with each log entry's matching span, which means that spans and attributes should be recorded on open, not just on close. (Attributes can also be updated mid-span; the Rust tracing library allows this. OpenTelemetry notably only supports complete spans.) It could be made more efficient by interning the keys. And some care is needed around how rotation is handled, how timestamps are handled, etc.
The prototype I built is a web application that creates websocket connections, and if those connections receive messages that are JSON, log lines are added. Columns are built dynamically as log messages arrive, and then you can pick which columns to render in the table. If you're curious here's the code, including a screenshot: https://github.com/corytheboyd-smartsheet/json-log-explorer
With websockets, it's very easy to use websocketd (http://websocketd.com), which will watch input files for new lines, and write them verbatim as websocket messages to listeners (the web app).
To make the idea real, I'd want to figure out how to avoid requiring the user to run websocketd out of band, but watching good ol' files is dead simple, and very easy to add to most code (add a new log sink, use an existing log file, etc.)
If you are concerned about the cost of the log processor, Loki again has your back by being very easy and lightweight to deploy, giving you the same tools in dev as in prod.
Logs are data.
> With many structured logging packages to choose from, large programs will often end up including more than one through their dependencies. The main program might have to configure each of these logging packages so that the log output is consistent: it all goes to the same place, in the same format. By including structured logging in the standard library, we can provide a common framework that all the other structured logging packages can share.
This is IMO the right way of doing it. Provide an interface with simple defaults, usable out of the box. Those who need more can use a library that builds towards the interface.
So when evaluating any library, you can ask "How well does this integrate with interfaces in the standard library?". Discovering that some functionality is just a "Fooer" that pieces well together with existing stuff is calming. Not only do you already know how to "Foo", you also get a hidden stability benefit: There's an implied API surface contract here.
This is in stark contrast to the "builds on top of" approach, where you end up with competing, idiosyncratic interfaces. This is often necessary, but there is always an implied risk in terms of maintenance and compatibility.
It’s a lot of effort when all you want is to log everything to STDOUT, in JSON, but you have to choose one of half a dozen logging libraries that all behave extremely differently.
Good job Go though for being opinionated but rational.
[1] https://www.elastic.co/guide/en/ecs-logging/overview/current... [2] https://opentelemetry.io/docs/specs/otel/logs/data-model/
Step one would be sum types, so that only the valid value space can be represented (return value or error, but not both and not neither).
Just because Go made opinionated design decisions around their error handling a decade ago when developing the language doesn't mean that there's not practical room for improvement as the language is widely in production and shortcomings in its error handling have been found.
The number of hacks I've seen over the years to try and solve the "wait, where did this error originate" problem in Go are legion, with little standardization.
And no, using Errorf with '%w' to wrap error messages along the stack isn't exactly an elegant solution.
Even if they want to keep the core error behavior as it is for compatibility, providing a core library way of wrapping with stacktraces would be a very useful next step, particularly given the most popular package doing that previously is now unmaintained.
I don’t really see the enum thing happening. Is the lack of enums a real problem? Theoretically it would be convenient, but I can’t say that I see bugs caused by its absence.
The addition of enums would move all these runtime checks to compile time.
Handle(context.Context, Record) error
This method has access to the context, which means you can get the logging handler to extract values from the context. Instead of storing the logger on the context, you can extract the traceId etc. values from the context and log those.

It's a little bit involved to write a whole logger from scratch, but you can 'wrap' the existing logger handlers and include values from the context relatively easily.
There are examples in this project (which aims to help solve your usecase): https://github.com/zknill/slogmw
https://github.com/indexsupply/x/blob/main/wslog/slog_test.g...
> As the call to LogAttrs shows, you can pass a context.Context to some log functions so a handler can extract context information like trace IDs.
I’m not a fan of slog’s syntax, but the convenience of having it in the stdlib trumps that, for me.
It’s really designed to be the minimum necessary package to allow interop (via handlers) and a baseline of standalone usability (via loggers). The stdlib only provides a text and a JSON handler, not even a no-op handler, which I think is sorely needed, or a multi handler, which I think would make a lot of sense.
But nothing precludes you publishing a messagetemplates handler, or whatever else you may want.
It's easy to get started with log/slog and one of the built in handlers, but as soon as you want to change something the library design pushes you towards implementing an entire handler.
For example, if I want the built in JSON format, but with a different formatting of the Time field, that's not easy to do. It's not obvious how to change the built in handler.
I wrote slogmw[1] to solve this problem. It's a set of middleware and examples that make it easy to make small changes to the built in handlers without having to write a whole new handler from scratch.
Yes, it annoyed me to no end. But OTOH I think it may be wise of them to see what the ecosystem finds and provide more convenience later. After all, it's std we're talking about, and this takes time to get right.
I'm personally missing:
- A goddamn default no-op/nil logger that can be declared before being initialized in e.g. structs.
- Customization to TextHandler for changing format easily, and omit keys (AIUI ReplaceAttr cannot omit stuff like `time="..."` on each line), which is critical since CLI screen real estate is incredibly sparse.
- (Controversial) but I would like opinionated logger initialization guidance for package authors, so that you get consistency across the ecosystem. Doesn't have to be exactly one way, but say.. two ways? E.g. a package-global and a struct-initialized version? Right now, people are even confusingly wondering if they should accept slog.Handler or *slog.Logger.
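In the meantime, a stand-in for the missing no-op handler is a real handler pointed at io.Discard; a sketch:

```go
package main

import (
	"io"
	"log/slog"
)

// newNopLogger returns a logger that formats records but writes them to
// io.Discard. A true no-op handler (whose Enabled method always returns
// false) would skip the formatting cost as well, but the stdlib doesn't
// ship one.
func newNopLogger() *slog.Logger {
	return slog.New(slog.NewTextHandler(io.Discard, nil))
}

func main() {
	nop := newNopLogger()
	nop.Info("this goes nowhere")
}
```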
I have to admit, the `log.InfoContext(ctx,...` style of redundancy that permeates the standard lib at this point is really gross, especially given that the most common use case for go is going to have contexts everywhere.
So,
slog.Info("failed to frob", "thing", GetThing(1))
Still calls GetThing(1) even when the log level is greater than Info. The only solution right now is to test the log level before making the logging call. It would be amazing if language designers could make the arguments late-bound, or use aspect-oriented programming approaches to protect each logging call site.

LOG(INFO) << WowExpensiveFunction();
This is safe when the log level is set to WARN or higher. For the same reasons, LOG_EVERY_N and LOG_FIRST_N in the same library are pretty cheap.

It's not the level of type safety that I expect from a strongly typed language.
What it comes down to is that zap special-cases things like slice-of-int, slice-of-string, and slice-of-timestamp, slog doesn't, and the benchmark includes all those special cases. I question whether your typical log statement includes slices. A fairer benchmark would be just scalar types, and the zap & slog optimizations there look pretty similar.
https://github.com/uber-go/zap/blob/fd37f1f613a87773fc30f719...
https://github.com/uber-go/zap/blob/fd37f1f613a87773fc30f719...
I've got a few packages that accept a basic logger interface, eg:
type debugLogger interface {
Debugf(format string, args ...any)
}
type MyThing struct {
logger debugLogger
}
func New(logger debugLogger) *MyThing {
return &MyThing{logger}
}
I'd love to switch to slog but I'll have to v2 these packages now.

It's like when you're trying to internationalize: you want to emit as constant a string as reasonably practical, so that it can be straightforwardly matched and substituted into a different language. Except in this case that different language is regexes being used to turn the thing into a SQL statement to fix the mess (or whatever).
So much easier to say "stop trying to Sprintf your logs, just add the values as key-value pairs at the end of the function call."
log.Printf("failed to frob %s: %s", thing, error)
then moving from that to:

slog.Error("failed to frob", "thing", thing, "error", error)

isn't terribly difficult, and will make log analysis dramatically easier.

log.Printf("failed to frob %s: %s", thing, error)
Wouldn't you just want to use slog.Error("failed to frob", thing, error)
That keeps the _value_ of `thing` as the key and the _value_ of `error` as the value, which would be more in line with your first example.

If you want to use slog, I can imagine setting up the logger with a handler specific to the CLI output in the CLI tool, and a JSON or text structured handler in the webservice app.
The question is, do you actually want structured logging in the CLI app? Yes, you probably want to print something out, but is it _structured_ in the sense that slog expects? Or is it just some output?
If it's not really structured, then you probably want some other interface/library that better represents the logging you want to do. Slog will push you towards structured key-value pairs, and you might find yourself fighting against this in the CLI app.
We need to make JSON viewers the default way we look into log files, rather than going without tooling.
It's so much easier to emit JSON automatically and ship it to other systems out of the box.
Linux logging should do that too, not just containers in k8s.
The prevailing solution is SLF4J, which is a facade that can be implemented by any number of backends, e.g. Logback.
There is logging in the stdlib (java.util.logging), but it's the less common choice, for whatever reason.
And then I heard "... with a chainsaw. It's a chainsaw mill" and realized I may have misunderstood the context.