Build time with 1.8.3:
real 0m7.533s
user 0m36.913s
sys 0m2.856s
Build time with 1.9:
real 0m6.830s
user 0m35.082s
sys 0m2.384s
Binary size:
1.8.3: 19929736 bytes
1.9: 20004424 bytes
So... looks like the multi-threaded compilation indeed delivers better build times, but the binary size has increased slightly.[1] You can git-clone and try yourself: https://github.com/gravitational/teleport
Furthermore, when I see a second run that's faster than the first one, I immediately wonder if it's the cache being cold for the first run and warm for the second.
While I have your attention, https://zedshaw.com/archive/programmers-need-to-learn-statis... is worth reading.
PSA: There is no reason to behave like this and this is an incredible way to alienate a bunch of people. You either offend people directly with the murder implication or they don't take you seriously because you sound like you're throwing such an extended temper tantrum that you managed to write it all in a blog.
How, concretely, should I go about doing this particular analysis of compile time for one project? How many times should I run the build for each of the two compilers, and what should I do with the results so I can: 1. draw a conclusion, and 2. come up with fair numbers for how they compare?
I would hope someone could teach this hopefully simple and very concrete thing to the HN crowd, and I do hope the answer is not "go learn statistics".
Not by default. You have to set CGO_ENABLED=0 to statically link libc.
func testServer(t *testing.T, port int) {
	t.Helper() // new in Go 1.9: hides this function from failure reports
	// ...do stuff, producing err...
	if err != nil {
		t.Fatalf("failed to start server: %+v", err)
	}
}
similarly you can have func assertMapEquals(t *testing.T, a, b map[string]int)
The new t.Helper() lets you hide such helper functions from the test failure's stack trace (where t.Fatal is actually called), making test errors more readable.
https://en.wikipedia.org/wiki/Type_safety
> Type enforcement can be static, catching potential errors at compile time, or dynamic, associating type information with values at run-time and consulting them as needed to detect imminent errors, or a combination of both.
interface{} is type-checked at runtime. It's type-safe because you can't, e.g., fish an integer out of an interface{} value that represents a string. The runtime won't allow it.
You can either extract the string, or the runtime will panic if you insist on extracting anything else. Unless you use the unsafe package, in which case you explicitly opt out of type safety.
"Mutex is now more fair."
Source: https://golang.org/doc/go1.9#sync
Does anyone know what that means?
I googled a little bit and found some good info; I guess I had forgotten some of the concepts of mutex fairness/unfairness. I found a very nice explanation on cs.stackexchange:
"My understanding is that most popular implementations of a mutex (e.g. std::mutex in C++) do not guarantee fairness -- that is, they do not guarantee that in instances of contention, the lock will be acquired by threads in the order that they called lock(). In fact, it is even possible (although hopefully uncommon) that in cases of high contention, some of the threads waiting to acquire the mutex might never acquire it."
Source: https://cs.stackexchange.com/questions/70125/why-are-most-mu...
With that computer science clarification, I think the comment "Mutex is now more fair" and the detailed description "Unfair wait time is now limited to 1ms" make a lot more sense.
Great improvement I think! It's one of those things that you don't notice until you have a bug, but it's really nice to never get that bug in the first place. =)
> Mutex fairness.
> Mutex can be in 2 modes of operations: normal and starvation.
> In normal mode waiters are queued in FIFO order, but a woken up waiter
> does not own the mutex and competes with new arriving goroutines over
> the ownership. New arriving goroutines have an advantage -- they are
> already running on CPU and there can be lots of them, so a woken up
> waiter has good chances of losing. In such case it is queued at front
> of the wait queue. If a waiter fails to acquire the mutex for more than 1ms,
> it switches mutex to the starvation mode.
> In starvation mode ownership of the mutex is directly handed off from
> the unlocking goroutine to the waiter at the front of the queue.
> New arriving goroutines don't try to acquire the mutex even if it appears
> to be unlocked, and don't try to spin. Instead they queue themselves at
> the tail of the wait queue.
> If a waiter receives ownership of the mutex and sees that either
> (1) it is the last waiter in the queue, or (2) it waited for less than 1 ms,
> it switches mutex back to normal operation mode.
> Normal mode has considerably better performance as a goroutine can acquire
> a mutex several times in a row even if there are blocked waiters.
> Starvation mode is important to prevent pathological cases of tail latency.
1. The runtime/pprof package now includes symbol information
2. Concurrent Map
3. Profiler Labels
4. database/sql reuse of cached statements
5. The os package now uses the internal runtime poller for file I/O.
!! that wasn't a thing until now?
See https://golang.org/doc/go1.9#database/sql
Go 1.9 adds reuse of statement handles created off an ephemeral transaction too. But all the other statement handle cases have been cached from day 1.
I see there's (more) parallel compilation in 1.9 - so that should improve elapsed time (but not reduce cpu time) of compilation.
Would be nice to know if 1.9 is (still) on track to catch up to / pass 1.4.
[1] https://dave.cheney.net/2016/11/19/go-1-8-toolchain-improvem...
Go 1.4: around 2.1s
Go 1.9: around 2.5s
So within 20% of 1.4, not bad. That's on an old MacBook Air, dual core 1.7 GHz i7, 8GB ram.
And of course the binary performance and GC pause times w/ 1.9 will be much better.
Here's the raw times: https://pastebin.com/ULDHPmVu
Two awesome things:
It was super-easy and fast to download and compile Go 1.4
It was completely painless to compile my old code with Go 1.9
I fucking love how nice it is to work with the Go ecosystem. <3
The reality is that the Linux kernel conflates processes and threads in its userspace APIs.
Locking goroutines to OS threads is a solution that works, but it sucks: it defeats the niceties of Go's N:M model. Still, it's the only way; if you use those broken system-call APIs, you should know better.
So basically it's designed for cases like 'I have N goroutines and each one owns 1/N keys'?
With those constraints, why not create a small map for each goroutine at that point, and merge the maps afterwards?
fatal error: concurrent map writes
I did a big rewrite of a current project, and clearly it was badly designed :(
// file tree.go
type T = YourConcreteType
type TreeNode struct {
	Value T
}
// rest of tree implementation
Then you can just copy the file and replace YourConcreteType at the top, and voila!
Seems simpler to use than the unicode hack here: https://www.reddit.com/r/rust/comments/5penft/parallelizing_...