You probably can't beat that in user space, especially if you want to preempt threads yourself. You'd have to check after every step, or profile your own process or something like that. And indeed, Go's scheduler is cooperative.
But then, why can't you get the performance of Goroutines with OS threads? Is it just because of legacy issues? Or does it only work with cooperative threading, which requires language support?
One thing I'm missing from that article is how the cooperativeness is implemented. I think in Go (and in Java's Project Loom), you have "normal code", but then deep down in network and IO functions, you have magic "yield" instructions. So all the layers above can pretend they are running on regular threads, and you avoid the "colored function problem", but you get runtime behavior similar to coroutines. Which only works if really every blocking IO is modified to include yielding behavior. If you call a blocking OS function, I assume something bad will happen.
It hasn't been cooperative for a few versions now; the scheduler became preemptive in Go 1.14. And before that there were yield points in every function prologue (as well as in all IO primitives), so there were relatively few situations where explicit cooperation was necessary.
> Without knowing too much assembly, I would assume any modern processor would make a context switch a one instruction affair.
Any context switch (into the kernel) is expensive, and way more than a single instruction. The kernel also has a ton of stuff to do; it's not just "pick the thread to run". You have to restore the instruction and stack pointers, but you may also have to restore FPU/SSE/AVX state (AVX-512 alone is over 2KB of state), trap state, and so on.
Kernel-level context switching costs on the order of 10x what userland context switching does: https://eli.thegreenplace.net/2018/measuring-context-switchi...
> LOAD THREAD
There is no "load thread" instruction.
Since cooperation was mostly unnecessary anyway, do you know why it was changed to preemptive, or what the specific cases were that preemptive scheduling resolves?
More here: https://inside.java/2020/08/07/loom-performance/
As to why it's hard for the OS to allow that many threads: the OS would need to keep thread stacks small and resizable, and that is hard to do if you don't know the specifics of how the language uses the stack. For example, to accommodate low-level languages that allow pointers into the stack, you would either need to manipulate virtual memory to keep addresses valid (which only works at page granularity), or introduce split stacks, which would require a new kind of ABI known to compilers (and would probably cost performance).
Why are OS threads scarce? The OS allocates thread stacks lazily. Given a kernel stack of ~8KB (two pages) and a user stack of ~4KB, one could spawn 100k threads with just over 1GB. A userspace runtime will let you bring that number down, but at that scale of concurrency it is unlikely to matter much.
It can't be a single instruction, since the details of what a "context" contains depends on the OS and ABI. For example on Linux, the signal mask is a part of the OS thread context (but usually not user thread contexts) which requires a syscall to retrieve it from kernel memory before saving it in the context.
The reason user threads are so much faster than OS threads is precisely that a switch can be reduced to a handful of instructions, without caring about all the details that OS threads need to care about.
> Which only works if really every blocking IO is modified to include yielding behavior. If you call a blocking OS function, I assume something bad will happen.
That's exactly what Go does, they introduce yield points into function prologues and i/o ops. You don't have direct FFI calls in Go so it's not as big of an issue. It's roughly the same problem as GC safepoints in multithreaded interpreters that support FFI.
You get an interrupt, then the kernel needs to load its own context (the tables it needs to access), then the kernel needs to do the expensive switch.
In user space you have a lot less context. The actual switching is pretty much the cost of a function call. If you need preemption that's a different story and mostly depends on what facilities are available for that. Inserting preemption checks is a little hacky (hello Go ;) ) but what can you do.
EDIT: It's worthwhile noting there's indirect costs like caches being stale. Lightweight/green threads will often work on shared data structures so the caches are more likely to have something useful in them. They may even share the code.
That has been the historic assumption; it has been proven wrong by every benchmark.
Consider TechEmpower[0] for raw stack performance: runtime-level threads outperform blocking OS threads, since OS threads were designed to be mapped onto physical cores.
Going far beyond the core count is very expensive and inefficient.
Creating one thread for every request (Apache + PHP) will exhaust the hardware after a few thousand qps.
A runtime can indeed have millions of those “lightweight threads” without killing your machine, since it multiplexes them over a pool of OS threads and taps into IO events to efficiently switch or resume contexts. This is by far faster.
[0] https://www.techempower.com/benchmarks/#section=data-r20&hw=...
PHP installations more realistically use nginx and FastCGI. This is not one thread per request and it’s also a better design than hosting your entire server and every user request in the same process; that’s just asking for security issues.
Spawning millions of unnecessary user-space threads because they are supposed to be more efficient than kernel threads is rarely the best solution to any problem imho.
Reduced instruction set (complexity) is a hallmark of modern processor designs, not the other way around.
You might want to read about what is involved in a task switch (either a "thread" with the same memory mapping, or a "process"), but it is not something that can reasonably be carried out in one instruction.
> A newly minted goroutine is given a few kilobytes
a line later
> It is practical to create hundreds of thousands of goroutines in the same address space
So it's not practical to create 100s of Ks of goroutines. It's possible, sure, but the GBs of memory overhead you incur by actually creating that many means that for any practical problem you are going to want to stick to a few thousand goroutines. I can almost guarantee you have something better to do with those GBs of memory than store goroutine stacks.
Asking the scheduler to handle 100s of Ks of goroutines is not a great idea in my experience either.
You lost me in a couple places:
1) "GBs of memory overhead" being a lot. A rule of thumb I've seen in a datacenter situation is that (iirc) 1 hyperthread and 6 GiB of RAM are roughly equivalent in cost. (I'm sure it varies over processor/RAM generations, so you should probably check this on your platform rather than take my word for it.) I think most engineers are way too stingy with RAM. It often makes sense to use more of it to reduce CPU, and to just spend it on developer convenience. Additionally, often one goroutine matches up to one incoming or outgoing socket connection (see below). How much RAM are you spending per connection on socket buffers? Probably a lot more than a few kilobytes...
2) The idea that you target a certain number of goroutines. They model some activity, often a connection or request. I don't target a certain number of those; I target filling the machine. (Either the most constrained resource of CPU/RAM/SSD/disk/network if you're the only thing running there, or a decent chunk of it with Kubernetes or whatever, bin-packing to use all dimensions of the machine as best as possible.) Unless the goroutines' work is exclusively CPU-bound, of course, then you want them to match the number of CPUs available, so thousands is too much already.
We have HTTP ingress that needs ~100 cores but could theoretically all fit in 1GB. We have k/v stores that need only 16 cores but would like 500GB. And we have data points at most places in-between. We can't give the ingress 600GB instead, and we can't give the k/v stores 100 cores. So the fact they're financially interchangeable is meaningless for capacity planning.
Arguably, for most code and especially in a GCd language, using less memory and less CPU go hand-in-hand.
Hey! That's Java's argument!
Also there is a ton of nuance here like overcommitted pages and large address spaces which mitigate some of those downsides.
And Go actually does a pretty good job of scheduling hundreds of thousands of threads. 6 months ago I did some fairly abusive high-thread-count experiments, solving problems in silly ways that required all N goroutines to participate, and I didn’t see much perf falloff on my laptop until I got to 1.5-2 million goroutines.
This article (which I have not read but just skimmed) made me search for a simple example, and I landed at "A Tour of Go - Goroutines"[0]
That is one of the cleanest examples I've ever seen on this topic, and it shows just how well integrated they are in the language.
The least serious of these are type errors, like forgetting to await an async function; these can be caught with a type checker (although that means running a type checker in your CI and having annotations for all of your dependencies).
The most serious are the ones where someone calls a sync or CPU-heavy function (directly or transitively) and starves the event loop, causing timeouts in unrelated endpoints and eventually bringing down the entire application (load shedding can help mitigate this somewhat). Go dodges these problems by not having sync functions at all (everything is async under the covers), and its parallelism means CPU-bound workloads don’t block the whole event loop.
>"Go run-time scheduler multiplexes goroutines onto threads and when a thread blocks, the run-time moves the blocked goroutines to another runnable kernel thread to achieve the highest efficiency possible."
Why would the Go run-time move the blocked goroutines to another runnable kernel thread? If a goroutine is currently blocked it won't be schedulable regardless, no?
There's also a subtler benefit. Each user thread has a context, e.g. its local run queue. Now, if a thread blocks, the others need to help it out and steal its work. Go improves on that with a nice handover system, so no random stealing is necessary. Further, by taking the context off blocked threads, it keeps all the tasks more centralised. There are at most a few tens of processors on any common machine, but there could be thousands of threads; it's better to tie runqueues to the former rather than the latter.
There have been various implementations of M:N threads for some time now. The concept is simple, but the devil is in the details.