For example, "Almost everybody accepts that the brain does a tremendous amount of analog processing. The controversy lies in whether there is anything digital about it." [1]
[1]: http://www.rle.mit.edu/acbs/pdfpublications/journal_papers/a... (page 30 of the PDF, labelled 1630)
Hardware implementations of ANNs, such as might be designed around these FTJ-based artificial synapses, would have some fixed hyperparameters, and thus would be pseudo-specialized. This disadvantage could potentially be more than compensated for by a dramatic learning speedup and power-usage reduction. Transistors are highly scaled and low power, but it takes a lot of them, and a lot of time, to simulate each neural unit.
On a separate note, the best-performing software ANNs don't emulate spike-timing-dependent plasticity (STDP), which is believed to be the primary learning mechanism of the human brain. Instead, they use variations of backpropagation and gradient descent, which is almost certainly not how the human brain learns. How the two compare across tasks remains to be fully understood. Most likely, they will have different strengths and weaknesses, making each useful in its own right.
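For readers unfamiliar with STDP, here is a minimal sketch of the classic pair-based rule: a synapse is strengthened when the presynaptic spike precedes the postsynaptic one, and weakened otherwise. The time constant and amplitudes below are illustrative placeholders, not values from any particular paper.

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.05, a_minus=0.055, tau=20.0):
    """Pair-based STDP. dt = t_post - t_pre (ms).
    dt > 0: pre fired before post -> potentiation.
    dt < 0: post fired before pre -> depression.
    Parameters are illustrative, not from a specific model."""
    if dt > 0:
        dw = a_plus * np.exp(-dt / tau)    # causal pairing strengthens
    else:
        dw = -a_minus * np.exp(dt / tau)   # anti-causal pairing weakens
    return float(np.clip(w + dw, 0.0, 1.0))  # keep weight bounded

w = 0.5
w = stdp_update(w, dt=5.0)    # weight increases
w = stdp_update(w, dt=-5.0)   # weight decreases
```

Note the contrast with backprop: the update depends only on locally observable spike times, not on a global error signal propagated backward through the network.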
The possibility space between relatively simple, insufficiently general unsupervised/clustering approaches and rigid SGD schemes is large, and probably contains the brain's true inference engine. Personally, I am excited by some of the ideas brought forward in this Bengio paper: https://arxiv.org/pdf/1602.05179.pdf
The properties of the memristors can be controlled electronically, and by having programmable and persistent resistance to current flow, they play the part of biological synapses.
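The appeal of this arrangement is that a read across a memristor crossbar computes a vector-matrix product "for free": Ohm's law gives each device's current as conductance times voltage, and Kirchhoff's current law sums the column currents. A toy model, with made-up conductance values purely for illustration:

```python
import numpy as np

# Each memristor's programmed, persistent conductance G[i][j]
# plays the synaptic weight. Rows are input lines, columns are
# output lines. Values below are illustrative only.
G = np.array([[1.0e-6, 5.0e-6],
              [2.0e-6, 1.0e-6],
              [4.0e-6, 3.0e-6]])   # conductances in siemens

V = np.array([0.2, 0.0, 0.1])     # read voltages on the input rows

# Physics does the multiply-accumulate: I_j = sum_i V_i * G[i][j]
I = V @ G                          # output currents on the columns
```

In silicon this sum happens in a single analog step, which is where the speed and power advantages over simulating each synapse with many transistors come from.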
This entire scheme relies on our gradually improving insight into how biological brains work. As these systems evolve to higher levels of complexity, they become a practical research basis for understanding the human brain.
I can imagine how it might emerge in this particular implementation: the electric current follows the path of least resistance through the circuit, thereby preventing adjacent neurons from reaching criticality. This mechanism never occurred to me before reading this article, though. Is anyone aware of any prior art on this topic?
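Functionally, that mechanism would amount to a winner-take-all form of lateral inhibition: whichever neuron draws the current first suppresses its neighbors for that step. A rough sketch of the behavior (the threshold and reset value are my own illustrative choices, not from the article):

```python
import numpy as np

def winner_take_all(potentials, threshold=1.0):
    """Sketch of lateral inhibition: the neuron with the highest
    membrane potential fires if it crosses threshold, and its
    competitors are reset so they cannot reach criticality in the
    same step. Threshold and reset are illustrative assumptions."""
    winner = int(np.argmax(potentials))
    if potentials[winner] < threshold:
        return None, potentials               # nobody fires this step
    return winner, np.zeros_like(potentials)  # winner resets the rest

winner, p = winner_take_all(np.array([0.4, 1.2, 0.9]))
# neuron 1 fires; the neighboring potentials are reset to zero
```

Whether current steering in a physical crossbar actually implements something this clean is exactly the prior-art question.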
Nevertheless, an STDP-type learning rule can inspire interesting applications. One of my co-advisors authored an article [1] which demonstrates completely unsupervised classification on the MNIST challenge in a crossbar environment, achieving 93% accuracy. Nothing like state-of-the-art CNNs etc., but considering this was done without labels, that's pretty impressive.
[1]: http://www.ief.u-psud.fr/~querlioz/PDF/Querlioz_PIEEE2015.pd...