No, that is not the case. To understand why, look at an async library like https://github.com/aio-libs/aiohttp and trace what it actually calls all the way down under the hood.
If it were as simple as adding `asyncio.sleep(0)`, that library would have been much easier to write. :P
Just look at the code you posted at the end: it actually runs faster synchronously, without `asyncio.sleep(0)`. The sleep is what happens asynchronously, not the print statements, so all you're doing is introducing delay.
Similarly, the Django ORM DB calls in the other examples are all still happening synchronously. You're just adding a delay that causes them to complete in an inconsistent order.
In other words, what is really needed is an ORM that would let you write:

    source = await Source.objects.get(id=source_id)
    await source.update()
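For what it's worth, newer Django versions (4.1+) did grow async queryset methods along these lines (`aget()` and friends). Absent that, one stdlib-only bridge is to push each blocking call onto a thread so the event loop stays free. A sketch, with a hypothetical `fetch_source` standing in for the blocking ORM call:

```python
import asyncio
import time

def fetch_source(source_id):
    # Hypothetical stand-in for a blocking ORM call such as
    # Source.objects.get(id=source_id).
    time.sleep(0.1)
    return {"id": source_id}

async def main():
    # Run three blocking "queries" concurrently by shunting each onto
    # the default thread pool via asyncio.to_thread (Python 3.9+).
    start = time.monotonic()
    rows = await asyncio.gather(
        *(asyncio.to_thread(fetch_source, i) for i in range(3))
    )
    elapsed = time.monotonic() - start
    return rows, elapsed

rows, elapsed = asyncio.run(main())
print([r["id"] for r in rows])  # -> [0, 1, 2]
print(elapsed < 0.3)            # concurrent, not 3 x 0.1s serial
```

This doesn't make the ORM itself async, but it does stop a slow query from stalling the loop.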
This leads to a sort of infectious need to make everything async, since even a single non-cooperating coroutine can bring the whole show to a halt. It's essentially the red-vs-blue function problem in Python. There is a nice alternative, though: gevent. gevent monkey-patches all the standard library functions that would block (e.g. reading from a socket), attaching an implicit await to them. If the author had used gevent, the example Django code would actually work as expected: the code would execute until the database connection was written to or read from, and then immediately yield.
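The "one non-cooperating coroutine stalls everyone" point is easy to demonstrate with stdlib asyncio alone. A sketch where a plain `time.sleep()` (standing in for any blocking call that never yields) holds up an otherwise fast coroutine:

```python
import asyncio
import time

async def hog():
    # time.sleep() is a plain blocking call: it never yields to the
    # event loop, so every other coroutine stalls while it runs.
    time.sleep(0.2)

async def polite():
    # Cooperative sleep: yields to the loop, but can't run until the
    # hog gives up the thread.
    await asyncio.sleep(0.05)

async def main():
    start = time.monotonic()
    await asyncio.gather(hog(), polite())
    return time.monotonic() - start

elapsed = asyncio.run(main())
# polite() alone needs ~0.05s, but the blocking hog() drags the whole
# run out past 0.2s.
print(elapsed >= 0.2)  # -> True
```

gevent's monkey patching works by replacing calls like `time.sleep` and socket reads with versions that yield to its event loop, which is exactly what the un-patched `time.sleep` above fails to do.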
Either way, async IO is still a somewhat tricky thing to understand and get right. In my case, it took writing a non-blocking event loop with epoll to really grok what was going on under the hood of something like asyncio.
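A toy version of that exercise fits in a few lines with the stdlib `selectors` module (which wraps epoll on Linux). This is a sketch, not a real event loop; the socket pair is just a self-contained stand-in for a network connection:

```python
import selectors
import socket

def echo_once():
    # Register one end of a socket pair with the default selector
    # (epoll on Linux) and dispatch readiness events once.
    sel = selectors.DefaultSelector()
    a, b = socket.socketpair()
    a.setblocking(False)
    b.setblocking(False)
    sel.register(b, selectors.EVENT_READ)

    a.sendall(b"ping")

    results = []
    # select() parks here until a registered fd is readable -- this is
    # the suspension point that asyncio's `await` hides from you.
    for key, _events in sel.select(timeout=1):
        results.append(key.fileobj.recv(4))

    sel.unregister(b)
    a.close()
    b.close()
    return results

print(echo_once())  # -> [b'ping']
```

A real loop wraps that `select()` call in a `while True`, keeps a ready queue of callbacks, and resumes the coroutine that was waiting on each fd.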
    import requests
    from multiprocessing import Pool

    def fetch_things():
        pool = Pool()  # defaults to number of CPUs
        urls = ['https://example.com/api/object/1',
                'https://example.com/api/object/2',
                'https://example.com/api/object/3']
        return pool.map(requests.get, urls)

    print(fetch_things())
Output (because those URLs are nonsense...): [<Response [404]>, <Response [404]>, <Response [404]>]
It's just as easy to do with threading: just switch that "from multiprocessing import Pool" to "from multiprocessing.dummy import Pool".

$ python quicktest.py
['http://www.google.com', 'http://news.bbc.co.uk', 'http://news.ycombinator.com', 'http://www.cnn.com', 'http://www.foxnews.com', 'http://www.msnbc.com']
[<Response [200]>, <Response [200]>, <Response [200]>, <Response [200]>, <Response [200]>, <Response [200]>]
Serial: 1.23853206635
[<Response [200]>, <Response [200]>, <Response [200]>, <Response [200]>, <Response [200]>, <Response [200]>]
Multiprocess: 0.912357807159
[<Response [200]>, <Response [200]>, <Response [200]>, <Response [200]>, <Response [200]>, <Response [200]>]
Multithreaded: 0.708998918533
edit: Here's the code:

    import requests
    import time
    from multiprocessing import Pool
    from multiprocessing.dummy import Pool as ThreadPool

    session = requests.Session()
    urllist = ['http://www.google.com',
               'http://news.bbc.co.uk',
               'http://news.ycombinator.com',
               'http://www.cnn.com',
               'http://www.foxnews.com',
               'http://www.msnbc.com']

    # Warm up?
    responses = []
    for url in urllist:
        responses.append(session.get(url))
    print(urllist)

    start = time.time()
    responses = []
    for url in urllist:
        responses.append(session.get(url))
    print(responses)
    print("Serial: {}".format(time.time() - start))

    start = time.time()
    pool = Pool()
    responses = pool.map(requests.get, urllist)
    print(responses)
    print("Multiprocess: {}".format(time.time() - start))

    start = time.time()
    pool = ThreadPool()
    responses = pool.map(requests.get, urllist)
    print(responses)
    print("Multithreaded: {}".format(time.time() - start))

This is certainly one of the cases where you should just do whatever is simplest (to _you_, the programmer). The first step is always to optimize for cognitive overhead, i.e. make the code easy to reason about. Only after that (and relatively rarely) is it necessary to optimize for the different bottlenecks in your code.
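On modern Python the same thread-pool pattern is usually written with `concurrent.futures`. A sketch with a hypothetical `fetch` function standing in for `requests.get`, so it stays self-contained and needs no network:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Hypothetical stand-in for requests.get(url); a real version would
    # do blocking network I/O here, which is where threads pay off.
    return "fetched " + url

urls = ["http://a", "http://b", "http://c"]

# map() fans the calls out across worker threads and returns results
# in input order, just like pool.map above.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(fetch, urls))

print(results)  # -> ['fetched http://a', 'fetched http://b', 'fetched http://c']
```

The `with` block also handles shutdown, which the bare `multiprocessing` pools above leak.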
It feels very much like it was just kind of thrown in to keep up with the trends, without any thought as to whether it made sense or whether it was the most "Pythonic" way of implementing it.
Agreed with the author's sentiment of feeling stupid.
ps: for the author, my only theory about the 0s sleep is that coroutines aren't preempted like threads; they use cooperative concurrency. Unless a coroutine explicitly says "OK, I agree to pause now and let others do something," the interpreter will evaluate all of its instructions until completion. My 2 cents.
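That theory is right, and easy to check: with an explicit `await asyncio.sleep(0)` at each step, two coroutines interleave; without it, the first would run to completion before the second ever started. A minimal stdlib sketch:

```python
import asyncio

order = []

async def worker(name):
    for i in range(2):
        order.append((name, i))
        # Explicit yield point: hands control back to the event loop
        # so the other worker gets a turn.
        await asyncio.sleep(0)

async def main():
    await asyncio.gather(worker("a"), worker("b"))

asyncio.run(main())
print(order)  # -> [('a', 0), ('b', 0), ('a', 1), ('b', 1)]
```

Drop the `await asyncio.sleep(0)` line and the output becomes `[('a', 0), ('a', 1), ('b', 0), ('b', 1)]`: nothing preempts a coroutine between its yield points.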
and it's about NOT understanding AsyncIO.