Err, no, this is a misconception. All network IO in Go is async under the hood - there is no sync network IO in Go (as sync IO would block an entire OS thread). There is an internal registry mapping blocked file descriptors to goroutines: when a kernel IO function returns EAGAIN, the runtime puts the file descriptor + goroutine info into the registry and the goroutine yields back to the scheduler. The scheduler periodically polls the registry (via epoll, kqueue or IOCP, depending on the platform) to mark goroutines that were waiting on IO as runnable again. The scheduler is, therefore, essentially a multithreaded variation on a standard "event loop" - the only difference is that "callbacks" (continuations of a goroutine) can run on any of M threads rather than just one.
From a Go programmer's perspective, this looks like "blocking a thread", but because goroutines are lightweight compared to actual OS threads, it behaves similarly resource-wise to callback-based async IO. (Although yes, nginx is likely optimised to free per-connection buffers sooner than Go reclaims goroutine stacks, and so can save some memory. Exactly how much is up to benchmarking to find out.)
Basically, the only difference between Go and e.g. a libev-based application, as far as IO is concerned, is syntax - the event loop is still there, just hidden from the programmer's point of view.
Note that this doesn't mean you shouldn't put nginx in front of Go to serve static files - nginx is likely more optimised for the job than Go's file server, might handle client bugs a little better, is more easily configurable (e.g. you can enable a lightweight file cache in just a few settings), you don't have to mess around with capabilities to get your application listening on port 80 as a non-root user, and so on and so forth.