(It also does each of these fetches in a separate goroutine, which leads me to believe it's really designed to be a proxy in front of a bunch of microservices, not an API server for a relational database. Even in that case, I'm not convinced it's an entirely perfect design: for a large result set, you're probably going to pop the circuit breaker on that backend when you fire 1000 requests at it in parallel at the exact same instant. Because our "microservice" was Postgres, we very quickly determined where to set our max database connection limit; Postgres is particularly picky about not letting you open 1000 connections to it.
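To illustrate the problem, here's a minimal sketch of that fan-out pattern with the obvious fix: one goroutine per slice element, but gated by a buffered-channel semaphore so only a bounded number of fetches are in flight at once. The names (`fanOut`, `fetch`) and the peak-tracking are mine for illustration, not anything from the library in question.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// fanOut launches one goroutine per item but caps how many run at once,
// using a buffered channel as a semaphore. It returns the peak number of
// simultaneously in-flight fetches so the cap is observable. fetch is a
// stand-in for the per-element backend call.
func fanOut(items, limit int, fetch func(id int)) int64 {
	sem := make(chan struct{}, limit)
	var wg sync.WaitGroup
	var inFlight, peak int64

	for i := 0; i < items; i++ {
		wg.Add(1)
		sem <- struct{}{} // blocks once `limit` fetches are in flight
		go func(id int) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot

			n := atomic.AddInt64(&inFlight, 1)
			for { // record peak observed concurrency
				p := atomic.LoadInt64(&peak)
				if n <= p || atomic.CompareAndSwapInt64(&peak, p, n) {
					break
				}
			}
			fetch(id)
			atomic.AddInt64(&inFlight, -1)
		}(i)
	}
	wg.Wait()
	return peak
}

func main() {
	// 1000 items, but never more than 10 concurrent "requests".
	peak := fanOut(1000, 10, func(id int) {})
	fmt.Println("peak in-flight fetches:", peak)
}
```

Without the `sem` channel, all 1000 goroutines hit the backend in the same instant, which is exactly how you trip a circuit breaker or exhaust a Postgres connection pool.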
I had the pleasure of reading the generated code and noticing the goroutine-per-slice-element design when our code wrapped the entire request in a database transaction. Transactions aren't thread-safe, so multiple goroutines ended up consuming bytes out of the same network buffer in parallel, which produced very obvious breakage as the protocol failed to decode. Fun stuff! I'll point out that if you are a Go library, you should not assume the code you're calling is thread-safe... but they assumed the opposite. Random, unasked-for parallelism is why I will always prefer dumb RPCs to a query language on top of a query language. Sometimes you really want to be explicit rather than implicit, even if being explicit is kind of boring.)