Also, these guys could offer support for running these models on private cloud servers, for customers with privacy requirements.
Er, Nvidia itself has an official Docker wrapper that lets containers interface with the host GPU, optimized for the deep-learning use case: https://github.com/NVIDIA/nvidia-docker
Training models is one thing that can be commoditized, as with this API, but building models and selecting features without breaking the rules of statistics is another story, and that's the true bottleneck for deep learning. It can't be automated as easily.
I agree that building models is still definitely a big challenge, but the tooling and knowledge are getting better every day. Either way, our goal with Algorithmia is to create a channel for people to make their models available, and to create an incentive for people to put in the effort to train really solid, useful models.
It is not the final solution for containerized GPU applications.
The real challenge is doing this on 100+ GPUs and leveraging multitenancy for an additional 100X+ economy of scale. We're actively working on it, and in my experience, this seems like a classic scheduling area where different domains will want to do it differently. However, even there, it'll end up as something like "plug in a new user-level Mesos scheduler", and Nvidia is working on exactly that.
I'll wait for someone at Baidu or the Titan lab to blow up those numbers by another 100-1000X ;-)
Edit: If this sounds like a cool problem, we're leveraging GPU cloud computing and visual graph analytics for event analysis (e.g., a core tool for teams in enterprise security). We would love help, especially on cloud infrastructure or on connecting the ecosystem together! Contact build@graphistry.com and we'll figure something out :)
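To make the scheduling problem concrete, here's a toy first-fit sketch. All job names and numbers are made up, and real multi-tenant GPU schedulers also have to handle fairness, locality, preemption, and so on:

```python
# Toy sketch of multi-tenant GPU scheduling: pack jobs with known memory
# demands onto a pool of identical GPUs, first-fit. Purely illustrative.

def schedule(jobs, gpu_mem, n_gpus):
    """First-fit assignment of (job_name, mem_needed) pairs onto n_gpus GPUs,
    each with gpu_mem memory. Returns {job_name: gpu_index or None}."""
    free = [gpu_mem] * n_gpus
    placement = {}
    for name, mem in jobs:
        for g in range(n_gpus):
            if free[g] >= mem:
                free[g] -= mem
                placement[name] = g
                break
        else:
            placement[name] = None  # no GPU can host this job right now
    return placement

jobs = [("train_a", 6), ("train_b", 5), ("infer_c", 3), ("train_d", 9)]
result = schedule(jobs, gpu_mem=12, n_gpus=2)
# -> {"train_a": 0, "train_b": 0, "infer_c": 1, "train_d": 1}
```

Even this trivial heuristic shows why different domains want different policies: a latency-sensitive inference job might prefer an empty GPU over the first one that fits.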
You can run multiple containers on the same GPU with nvidia-docker; it's exactly the same as running multiple processes (without Docker) on the same GPU.
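For the curious, a minimal sketch of what that looks like. The `nvidia_docker_cmd` helper below is hypothetical (not part of nvidia-docker itself); `NV_GPU` is the environment variable the nvidia-docker wrapper uses to restrict which host GPUs a container sees. The sketch only builds the command lines, since actually running them needs a GPU host:

```python
# Hypothetical helper that builds `nvidia-docker run` invocations.
# NV_GPU=0 pins a container to host GPU 0; two containers pinned to the
# same GPU share it just like two ordinary processes would.

def nvidia_docker_cmd(image, command, gpus=None):
    """Return (env_vars, argv) for an `nvidia-docker run` invocation."""
    env = {}
    if gpus is not None:
        # Comma-separated GPU indices, per the nvidia-docker documentation.
        env["NV_GPU"] = ",".join(str(g) for g in gpus)
    argv = ["nvidia-docker", "run", "--rm", image] + list(command)
    return env, argv

# Two containers sharing GPU 0 -- analogous to two processes on one GPU:
env_a, cmd_a = nvidia_docker_cmd("nvidia/cuda", ["nvidia-smi"], gpus=[0])
env_b, cmd_b = nvidia_docker_cmd("nvidia/cuda", ["python", "train.py"], gpus=[0])
```

On a real GPU box you'd pass `env` and `argv` to something like `subprocess.run`; both containers then contend for the same GPU's memory and compute, with no isolation beyond what the driver provides for ordinary processes.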
Also, much more minor grievance but I really dislike websites that don't work on my 15" laptop, what's going on here? http://i.imgur.com/q13lCLK.png
https://21.co/learn/deep-learning-aws/
Disclaimer: I work for 21.
Unless users of this service feed back whether the answers it gave were correct, I don't see how using it would help train their model.
Happy to be corrected by someone with a better understanding of the space.