And I think this assumption is what's killing us. Async != parallel for a start, and parallel IO is not guaranteed to be fast.
If you write a function:
    async Task<ImageFile> LoadFile(string path)
    {
        var f = await load_file(path);
        return new ImageFile(f);
    }
And someone comes along and makes it into a batch operation:

    async Task<List<ImageFile>> LoadFiles(List<string> paths)
    {
        var results = new List<ImageFile>();
        foreach (var path in paths)
        {
            var f = await load_file(path);
            results.Add(new ImageFile(f));
        }
        return results;
    }
and calls it with 2 files instead of 1, you won't notice anything. Over time 2 becomes 10, and 10 becomes 500. You're now at the mercy of whatever is running your tasks. If each await yields back to an event loop, every iteration costs at least one loop turn before it can proceed, so you've now introduced 500 loops' worth of latency.

In case you say "but that's bad code": well yes, it is. But it's also very common. While reading up for this reply, I found a stackoverflow post [0] with exactly this problem.
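To be fair, the batch version doesn't have to serialize the awaits. A minimal sketch of the concurrent alternative using Task.WhenAll, which starts every load before awaiting any of them (the ImageFile type and load_file stub here are placeholders invented just to make it compile):

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Threading.Tasks;

    public class ImageFile
    {
        public byte[] Data;
        public ImageFile(byte[] data) { Data = data; }
    }

    public class Demo
    {
        // Stub standing in for the real IO call in the original snippet.
        static Task<byte[]> load_file(string path) =>
            Task.FromResult(new byte[] { 1, 2, 3 });

        public static async Task<ImageFile> LoadFile(string path)
        {
            var f = await load_file(path);
            return new ImageFile(f);
        }

        // Kick off every load up front, then await them all at once,
        // so the per-iteration event-loop latency isn't paid N times over.
        public static async Task<List<ImageFile>> LoadFiles(List<string> paths)
        {
            var tasks = paths.Select(LoadFile);
            var results = await Task.WhenAll(tasks);
            return results.ToList();
        }

        public static async Task Main()
        {
            var images = await LoadFiles(new List<string> { "a.png", "b.png" });
            Console.WriteLine(images.Count); // 2
        }
    }

Whether the concurrent version is actually faster still depends on the IO underneath (500 simultaneous reads against one spinning disk can easily be slower than sequential ones), which is the original point: async != parallel, and parallel IO is not guaranteed to be fast.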
[0] https://stackoverflow.com/questions/5061761/is-it-possible-t...