That said, with no offense intended to mirman, I'd really hesitate before using this for anything serious enough to reach that scale in the first place. Gevent already visibly pushes Python to its limits (and occasionally a bit beyond); tacking preemption onto an environment that fundamentally doesn't expect it would scare me another notch.
There is a version in the history that used greenlet instead of gevent, which was potentially a bit less delicate, but it required wrapping the main file and didn't work with time.sleep, and I didn't feel it was worth writing my own locks, semaphores, mutexes, pipes, and whatnot.
http://hackage.haskell.org/packages/archive/forkable-monad/0...
Also, this doesn't reimplement parts of gevent; it implements some things on top of, and by means of, gevent.
Version negative one? I don't think I've ever seen that before. Usually the very earliest versions of software are numbered something like 0.0.1.
The disillusionment caused by having so many options for non-parallel "concurrency" in Python is, I believe, feeding the high defection rate from Python to Go.
Many of the people complaining about this issue don't have a demonstrated problem and could try any simple approach first (assuming the point isn't just to slam Python in favor of something else from the beginning).
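A minimal sketch of one such simple approach (my example, not from the thread): for I/O-bound work, the stdlib thread pool often suffices before reaching for gevent or an interpreter switch. `fetch()` here is a hypothetical stand-in for a blocking network call.

```python
# Simple-approach-first sketch: overlapping I/O waits with a stdlib thread pool.
from concurrent.futures import ThreadPoolExecutor
import time

def fetch(n):
    time.sleep(0.1)  # stands in for a blocking network call; releases the GIL
    return n * n

start = time.time()
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(fetch, range(10)))
elapsed = time.time() - start

print(results)
print(elapsed)  # well under 1.0s: the ten waits overlap instead of serializing
```

If this turns out to be fast enough, there was never a concurrency problem to defect over.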
Yes, you absolutely can use all your cores by combining multiprocessing, gevent, and custom C code. But debugging that stack is a level of hell I will never return to, ever.
Different kind of user, but... just sayin'.
If I'd had the option to switch us to Stackless easily, and could guarantee it was as fast, worked with all the libraries, and was as stable, I probably wouldn't have written this. I imagine a lot of people are in the same boat, where switching interpreters isn't really an option.
This isn't true concurrency. Scaling to 20 million requests per second over 40 cores on a single machine is true concurrency.
[1] http://en.wikipedia.org/wiki/Concurrency_(computer_science)