The first few thoughts I had on seeing this: 1) Of course this is by DeepMind! Why would I think anything different. (I love the "basic" research they are doing on NNs & Deep Learning, and am always excited to see a new paper by them).
2) I would love to see more investment into this kind of basic ML research. (By that I don't mean "easy", but addressing the fundamentals of how to approach different types / classes of problems). A lot of where the DeepMind guys seem to be finding these big wins is in combining "classic" AI / CS techniques with Deep Learning / Optimization.
Examples (and I'm a novice at deep learning, so someone PLEASE PLEASE correct me if I'm wrong):
- AlphaGo: take a technique like tree search for playing a game, and combine it with deep networks for the tricky bit of evaluating board positions.
- Deep reinforcement learning: Q-learning and other reinforcement techniques have been around for a while, but they adapted them to a deep neural net architecture.
- Neural Turing Machines: took a classical model of computation and made it differentiable, allowing a neural net to "learn" algorithms like sorting.
- Differentiable Neural Computers: figured out how to add and address external memory in a differentiable way, allowing a neural net to solve problems like path finding on a graph.
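To make the NTM/DNC "differentiable memory" bit concrete: the addressing trick is essentially soft attention over memory rows instead of a hard index lookup. Here's a minimal sketch in plain Python (the memory contents, key, and sharpness value are made up for illustration; real models learn them):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb + 1e-8)

def content_read(memory, key, beta=10.0):
    # Attention weights: cosine similarity of the key to every memory row,
    # sharpened by beta and normalized. Every step is smooth, so gradients
    # flow through the "address" as well as the value.
    weights = softmax([beta * cosine(row, key) for row in memory])
    # The read value is a weighted blend of rows -- a soft lookup, not an index.
    value = [sum(w * row[i] for w, row in zip(weights, memory))
             for i in range(len(memory[0]))]
    return value, weights

memory = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
value, weights = content_read(memory, key=[1.0, 0.05])
# The weights concentrate on the first row, since it best matches the key.
```

Because the read is a blend rather than a pick, the net can learn *where* to read by gradient descent, which is the whole trick.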
Where I think a lot of cool stuff is going to continue to come from is by revisiting classic techniques, and figuring out how they can be adapted to a differentiable / optimizable architecture. Or taking a classic problem and finding an efficient way to evaluate the "goodness" of an answer that lends itself to being used in an optimization problem. Again, not saying it is easy, but I wonder how much "low hanging fruit" there is in revisiting classic algorithms and GOFAI techniques, and asking "can I use this in a neural net, or adapt this to be differentiable, so that I can learn or optimize the tricky bits?"
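One recurring recipe for that kind of adaptation is to replace a discrete choice (an argmax, a branch, a hard index) with a temperature-controlled softmax blend, which approaches the hard choice as the temperature shrinks but stays differentiable throughout. A toy sketch, with arbitrary made-up values and scores:

```python
import math

def soft_select(values, scores, temperature=1.0):
    # Softmax over scores (max-subtracted for numerical stability).
    m = max(scores)
    exps = [math.exp((s - m) / temperature) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Blend instead of pick: gradients flow into every score.
    return sum(w * v for w, v in zip(weights, values))

# Low temperature approximates a hard argmax pick;
# high temperature approaches a uniform average.
hard = soft_select([10.0, 20.0, 30.0], [0.1, 0.2, 5.0], temperature=0.1)
soft = soft_select([10.0, 20.0, 30.0], [0.1, 0.2, 5.0], temperature=100.0)
```

At `temperature=0.1` the result is essentially 30.0 (the highest-scoring value); at `temperature=100.0` it sits near the mean of all three.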
I'm sure I'm glossing over a lot / missing the point of a lot of it - like I said, just a noob who's super excited about this stuff :-)
Basically, this was about building neural networks based on propositional logic (e.g. Prolog-style statements), which was how some traditional expert systems were built.
Unfortunately, there wasn't a video of the presentation, and I can't find the slides anywhere.
If you're based around London, the London Machine Learning meetups are always worth attending!
It seems like it would be very powerful to harness neural nets for dealing with fuzzy inputs (images, natural language, spoken words), while adding a propositional rule base they can consult for whatever the actual task is once that input has been dealt with.
On that note, if it were learning a rule base, it might also really help with getting insight into what your model is doing - if you could somehow introspect on / view the rules it learned at a higher level than "when these neurons fire, we do this to the output".
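As a toy version of that idea: if the perception side outputs probabilities over symbols, a rule can "fire" with the probability of its premises (here a product t-norm, which assumes the premises are independent), so the rule base stays soft and differentiable while each rule remains human-readable. All symbol and rule names below are made up for illustration:

```python
def soft_and(premise_probs):
    # Product t-norm: probability that all premises hold (independence assumed).
    p = 1.0
    for prob in premise_probs:
        p *= prob
    return p

# What a perception net (e.g. an image classifier) might output:
symbol_probs = {"cat": 0.9, "indoors": 0.8, "dog": 0.05}

# A human-readable rule: cat AND indoors -> "feed the cat".
rule_premises = ["cat", "indoors"]
fire_strength = soft_and([symbol_probs[s] for s in rule_premises])
```

The nice property is exactly the introspection one: you can print the rule and its firing strength, rather than staring at neuron activations.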
(I wouldn't be surprised, of course, to find one day that the DeepMind guys are already on top of it, and come up with a "Neural Warren Abstract Machine" at some point.)