Here is a diagram: https://www.satake-usa.com/what-is-optical-sorting.html
Bonus slow motion footage of their processing machines: https://imgur.com/gallery/IK5zKkO
I work at an AI consultancy [1] that helps companies use deep neural nets under these high-throughput, low-latency conditions. It's an interesting challenge, and the performance that can be squeezed out of modern hardware is indeed impressive.
Musk needs something like 99.9999% accuracy at near-zero latency over several hours of operation. Judging from driving my car, I think Tesla is currently at maybe 99.995%. The last 0.005% shows up as phantom braking etc. It's actually a very hard nut to crack, and I don't expect them to achieve full self-driving in all conditions for another 10-15 years. The edge cases are just too many.
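To see why those last decimal places matter, here's a back-of-envelope sketch. The decision rate is an assumption for illustration (10 decisions per second); the accuracy figures are the ones from the comment above:

```python
# Back-of-envelope: why the last decimal places of accuracy matter.
# Assumed rate (illustrative only): 10 driving decisions per second,
# sustained over a one-hour drive.
decisions_per_hour = 10 * 3600  # 36,000 decisions

for accuracy in (0.99995, 0.999999):
    error_rate = 1 - accuracy
    errors_per_hour = decisions_per_hour * error_rate
    print(f"{accuracy:.6%} accurate -> {errors_per_hour:.3f} errors/hour")
```

Under these assumptions, 99.995% still means roughly 1.8 errors every hour, while 99.9999% is about one error per 28 hours of driving.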
I like the trash idea though (or a QA robot at a factory etc).
[edit] Actually, stupid question. I assume it's more about throughput than fps, i.e. being able to process lots of streams on the same machine, for instance for mass analysis of CCTV streams.
As such, the point isn't that you can detect objects at >N fps, but rather that object detection shouldn't take more than X% of the time per cycle, so that the overall cycle can run at a given rate.
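The per-cycle budget idea above can be made concrete with a quick calculation. The cycle rate and the detector's share of the cycle are made-up numbers for illustration:

```python
# Hypothetical per-cycle latency budget: the detector must fit inside
# a fixed fraction of each pipeline cycle (numbers are illustrative).
cycle_rate_hz = 30        # overall pipeline cycle rate
detector_share = 0.25     # fraction of each cycle allotted to detection

cycle_budget_ms = 1000 / cycle_rate_hz
detector_budget_ms = cycle_budget_ms * detector_share
print(f"cycle: {cycle_budget_ms:.1f} ms, detector budget: {detector_budget_ms:.1f} ms")
```

So at 30 Hz with a quarter of the cycle reserved for detection, the detector gets about 8.3 ms per frame, even though the stream itself is nowhere near 120 fps.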
I'm sure military and sports applications are obvious too.
Human reaction times are much slower than that. In fact, for some things it can take a whole second: https://www.visualexpert.com/Resources/reactiontime.html
Maybe racing sports involve shorter reaction times, but I'd frankly be surprised if it were anything under 100ms.
10fps for your average drive should be more than enough
On-device (e.g. mobile phone) processing with battery usage that respects the user. Supporting older hardware and smaller models as well.
Of course the above aren't cases where the stream itself is 100+fps, but broader general benefits. For a 100+fps stream... well, there are many things that move fast. Imagine you wanted a robot that tracks or catches a fly before it takes off. Flies have a reaction time of about 5ms (200Hz); that's why they're hard for us to catch! Expand and apply the same concept to other things that are fast, or that happen very quickly.
Not yet.
Google's MediaPipe object detector (which is one of the most optimised mobile solutions around) can do "26fps on an Adreno 650 mobile GPU"[1].
The Adreno 650 is the GPU in the Snapdragon 865, i.e. the current high-end SoC used by most non-Apple phones [2]. This gives roughly the same performance as an iPhone 11 [3].
[1] https://google.github.io/mediapipe/solutions/objectron.html
[2] https://www.tweaktown.com/news/69097/qualcomm-adreno-650-gpu...
[3] https://www.tomsguide.com/news/snapdragon-865-benchmarks
Can anyone comment on how often this is a problem and if this problem is truly fundamental to Python? Could it be solved in a Python 3.x release?
But there are a few more things that can be said about this. Python threads are real OS threads, but because of the GIL only one of them executes Python bytecode at a time, so for CPU-bound work they're effectively just a construct for structuring programs. The selling point is that you can share variables and data between threads without interpreter-level corruption (though you may still want locks for compound read-modify-write operations). But even with that advantage, you're relying on Python to switch between threads on its own, and that can easily slow things down. If you're willing to drop that model and go for better performance while still using a single process and sharing variables, the asyncio module lets you control exactly where the main Python process moves between points of code flow.
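A minimal sketch of that cooperative model with asyncio (the task names and delays are made up): control only changes hands at `await` points, so the shared list can be mutated without locks.

```python
import asyncio

# Cooperative scheduling sketch: each coroutine yields control
# explicitly at `await`, so there is no preemptive switching.
results = []

async def worker(name, delay):
    await asyncio.sleep(delay)  # control returns to the event loop here
    results.append(name)        # safe: only one coroutine runs at a time

async def main():
    # Both workers share `results` without locks; switches happen
    # only at await points, never in the middle of a statement.
    await asyncio.gather(worker("slow", 0.02), worker("fast", 0.01))

asyncio.run(main())
print(results)  # the shorter delay finishes first: ['fast', 'slow']
```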
However, if you really want traditional multiple processes, just use the multiprocessing module. It actually launches multiple Python processes and links them together. Its API mirrors threading, so there isn't much code change for that part. But because it's no longer a single process - and no longer bound by the GIL - you can't share data between the processes as easily. With multiprocessing, you'll need slightly more complex data structures (like a multiprocessing manager namespace) to share that data. It's not that hard, but it requires a bit of planning ahead of time.