Hats off to the authors for this achievement; it's no small feat and something that has been attempted for years. But IMHO it's time the field moved on from chasing matrix accelerators and focused on the real advantages of event-based computing: asynchronous, low-latency, event-based signal processing.
Even for small-network tasks, training spiking networks has been non-trivial. This paper provides a way to get exact gradients, which likely means faster optimisation than surrogate gradients or other approximation methods for SNNs.
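For context on the contrast: surrogate-gradient training replaces the undefined derivative of the spike non-linearity with a smooth stand-in, which is exactly the approximation that an exact-gradient method avoids. A minimal sketch in plain NumPy (SuperSpike-style fast-sigmoid pseudo-derivative; the function names and the `beta` value are illustrative, not from the paper):

```python
import numpy as np

def heaviside(v):
    """Spike non-linearity: the neuron fires when the (shifted) membrane
    potential v crosses zero."""
    return (v > 0.0).astype(np.float64)

def surrogate_grad(v, beta=10.0):
    """Smooth stand-in for the derivative of the Heaviside step.

    The true derivative is zero almost everywhere (and undefined at 0),
    so surrogate-gradient training backpropagates through this
    pseudo-derivative instead. It peaks at v == 0 and decays with |v|."""
    return 1.0 / (beta * np.abs(v) + 1.0) ** 2

v = np.array([-1.0, 0.0, 1.0])
spikes = heaviside(v)       # forward pass: only the last neuron fires
grads = surrogate_grad(v)   # backward pass: largest at the threshold
```

The exact-gradient approach in the paper dispenses with `surrogate_grad` entirely by differentiating through the spike times themselves.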
Personally I think that way too many resources have been wasted on trying to make better deep networks with spikes. In my opinion it is much more promising to apply spiking networks to problems that are inherently event-based.
Having a functional backpropagation algorithm such as the one provided can help with that, obviously.
There you get the full dose of hype for neuromorphic computing, but without any critical reflection (naturally, since it’s a press release advertising a product).
Unfortunately I am not aware of any literature that provides a critical review of neuromorphic computing. You have to read between the lines of the research papers to find out that the field has failed to live up to the promise of lower-energy deep learning (which was a misguided promise from the outset, IMHO).
https://electronicvisions.github.io/hbp-sp9-guidebook/pm/pm_...
Personally I think SNNs are a very exciting research field, both from a neuroscience and a computer science angle. The work we are discussing here is deeply impressive for its rigour, and it addresses an important problem in spiking network research.
Whether spiking networks will provide lower-energy deep learning is a totally different question.
I have many ideas and questions regarding your paper:
- How do you adjust weights between different spikes?
- Do you use or implement a kind of wavelet for wave propagation, for example for spike interference?
- What neuromorphic hardware can I buy to run your code / the SNN?
=)
- We only consider one kind of model system in this paper, but the method would work for any kind of hybrid dynamical system, so it applies to other physical substrates too (a lot of exciting work to do there).
- We used to sell a neuromorphic hardware system, Spikey, for ~3000 Euro (basically at cost), and we've recently completed a similar project; we also provide access to remote users via the ebrains collaboratory (https://ebrains.eu/service/collaboratory/). There are a number of commercial offerings in the works (SynSense, Innatera). You can also buy SpiNNaker boards or access them via ebrains. Loihi and TrueNorth either don't sell or are pretty expensive, but they have "research agreements" in place.
Current neuromorphic hardware is not easily accessible, but you can simulate spiking neural networks. Check out, e.g., https://brian2.readthedocs.io/en/stable/ or Nengo.ai
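If you just want a feel for spiking dynamics before reaching for a full simulator like Brian2 or Nengo, a leaky integrate-and-fire neuron is a few lines of NumPy. A sketch with forward-Euler integration (the parameter values are arbitrary choices for illustration, not defaults from any library):

```python
import numpy as np

def simulate_lif(i_in, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
    """Euler-integrate a leaky integrate-and-fire neuron.

    Membrane dynamics: dv/dt = (-v + i_in) / tau.
    When v crosses v_th, record a spike and reset v to v_reset.
    Returns the membrane trace and the spike times (seconds)."""
    v, trace, spike_times = 0.0, [], []
    for step, i in enumerate(i_in):
        v += dt * (-v + i) / tau
        if v >= v_th:
            spike_times.append(step * dt)
            v = v_reset
        trace.append(v)
    return np.array(trace), spike_times

# A constant supra-threshold input current produces regular spiking.
trace, spike_times = simulate_lif(np.full(2000, 1.5))
```

The event-handling branch (threshold crossing plus reset) is exactly what makes SNNs a hybrid dynamical system, and it is the part that proper simulators treat far more carefully than this fixed-step sketch.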
Also, will you release your method as code?
My aim is to release the method as part of Norse https://github.com/norse/norse. There is some subtlety involved in implementing it for a given integration scheme, though. The event-based simulator underlying the paper will also be released in due time.
See also our tutorial on neuron parameter optimization to understand how it's useful for machine learning: https://github.com/norse/notebooks#level-intermediate
There's also a great book on the topic by Gerstner available online: https://neuronaldynamics.epfl.ch/
Disclaimer: I'm a co-author of the library Norse
Regarding the target audience, it's actually not entirely clear to me. This work lies at the intersection of computational neuroscience and deep learning, which isn't a huge set of people. So I think your question is highly relevant, and we (as researchers) have a lot of work ahead of us to explain why this is interesting and important.