I'm not convinced it goes in that direction yet, though. Neural networks are loosely biologically inspired to begin with, and the idea of the primate visual system as a deep feedforward network predates the recent machine learning advances by many years.
If that's true, it undermines his entire thesis. What's missing is an example of how a concept from machine learning let us conceptualize something genuinely new in neuroscience, rather than just describe a process we already had a vague intuition for (which is still obviously useful).
FWIW, and I'm a little biased here, I would argue that it's (high-level, vague) concepts from neuroscience that have been driving machine learning. There are ways we behave and learn that we've been trying to emulate in machines. Someday the influence will swing back the other way, but not yet.
_Because we do not understand the brain very well we are constantly tempted to use the latest technology as a model for trying to understand it. In my childhood we were always assured that the brain was a telephone switchboard. (‘What else could it be?’) I was amused to see that Sherrington, the great British neuroscientist, thought that the brain worked like a telegraph system. Freud often compared the brain to hydraulic and electro-magnetic systems. Leibniz compared it to a mill, and I am told some of the ancient Greeks thought the brain functions like a catapult. At present, obviously, the metaphor is the digital computer._ (John Searle, Minds, Brains and Science, 44)
Artificial neural networks do not teach us about biological neural networks - or 'Neuronal Networks', a term a neuroscientist friend reluctantly uses for contradistinction. We don't need Google's cat research; we need Hubel and Wiesel's cat research.
Let's see: Cheap reference to Kant, check. Vague parallel to the Sapir-Whorf hypothesis, check.
The 'intriguing' mapping that involves 3 ML terms is desperate.
This article appearing on the front page of HN shows how delusional some of today's ML lovers are with respect to neuroscience, the discipline that actually studies human brains.
In general, I think a neuroscientist would be a distraction to any ML team. I don't mean to say that neuroscience is what drives ML insight, but if asked to pick which field influences the other most, my choice is clear.
Take the idea, "We see with our brains, not with our eyes" as a criticism of the "naive view" that sense data / fabrications are neutral, that they are just "out there", and it is only when they come into contact with the mind that the mind infuses the raw sense data with desire and aversion. The idea that we are just passive observers of phenomena.
Thanissaro Bhikkhu critiques this idea from the Buddhist perspective:
"040920 Disenchantment & Dispassion \ \ Thanissaro Bhikkhu \ \ Dhamma Talks" https://www.youtube.com/watch?v=k8M-_Msav1Q
He says that on the contrary, desire and aversion are involved a priori in the formation of the fabrications (sense data).
So this is not a new idea. It is a very old idea. The idea that the technology of ML can confirm this particular critique of the naive view is novel (although I'm not convinced it is wise to draw conclusions about the mind in this way, just as I'm not convinced it is wise to draw conclusions about the way evolution operates based on artificial life simulations).
1. The brain is like a neural network (which is purely logical) in the sense of ML.
2. Human brains cannot be explained by purely logical things.
The author also uses "concept," a technical term in computational learning theory with a specific meaning, as if it meant "intuition." How would you even present an "intuition" to a neural network? This distinction is swept under the rug. Not to mention all the recent work showing how easily neural networks are fooled by small adversarial perturbations to their inputs.
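The fragility is easy to demonstrate even without a real network. Here is a minimal sketch of the adversarial-perturbation idea: a tiny linear classifier (standing in for a neural network) flips its prediction under a small, targeted nudge, FGSM-style. All the numbers are illustrative, not drawn from any real model or study.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 100
w = rng.normal(size=dim)                 # fixed "trained" weights
x = 0.05 * np.sign(w)                    # input weakly aligned with w
x += rng.normal(scale=0.01, size=dim)    # plus a bit of ordinary noise

def predict(v):
    """Classify by the sign of the linear score."""
    return 1 if w @ v > 0 else 0

# Targeted step: nudge every coordinate against the weights' signs.
# Each coordinate moves by only 0.1, but the effect accumulates
# across all 100 dimensions and flips the decision.
eps = 0.1
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))
```

Random noise of the same per-coordinate magnitude would almost never flip the decision; the fragility comes from aligning the perturbation with the weights, which is what gradient-based attacks on real networks exploit at scale.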
There are many grains of salt required for a useful discussion on neural networks. Instead of taking something we have no understanding of and making grand philosophical claims, we should be using the tools we have to understand that thing.
As for "S-W is not correct", that's interesting - arguments countering it are not known to me.
"""
These findings suggest that the mastery of the English subjunctive is probably quite tangential to counterfactual reasoning in Chinese. In short, the present research yielded no support for the Sapir-Whorf hypothesis.
"""
Every serious study of S-W turns up the same result: no evidence. Now, there is -minute- evidence that languages with very short number words let students memorize number sequences more easily---the students literally have less information (in terms of phonemes) to memorize. This sort of effect is actually pretty prevalent, but it is not really what most people mean when they discuss S-W.
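A back-of-the-envelope illustration of that "short number words" effect. The syllable counts below are my own tallies: Mandarin digit names are each one syllable, while English has "seven" and "zero" at two.

```python
english  = {"zero": 2, "one": 1, "two": 1, "three": 1, "four": 1,
            "five": 1, "six": 1, "seven": 2, "eight": 1, "nine": 1}
mandarin = {"ling": 1, "yi": 1, "er": 1, "san": 1, "si": 1,
            "wu": 1, "liu": 1, "qi": 1, "ba": 1, "jiu": 1}

avg_en = sum(english.values()) / len(english)
avg_zh = sum(mandarin.values()) / len(mandarin)

# Rehearsing a 7-digit number costs ~8.4 syllables on average in
# English vs exactly 7 in Mandarin - less phonemic material to hold
# in the phonological loop.
print(avg_en * 7, avg_zh * 7)
```

The gap is real but tiny, and it's a memory-bandwidth effect, not the "language shapes thought" claim that S-W is usually taken to make.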
Also, the Himba "study" about green is pretty much debunked. Get a high-quality monitor with good ambient lighting, then ask some colleagues to find the differently-colored green square. They'll do so just fine, and quite quickly!
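If you want to try that demo yourself, here's a sketch of the oddball stimulus: a ring of identical green squares with one slightly different. The RGB values are made up for illustration - they are not the actual stimuli from the Himba experiment.

```python
base_green = (0, 160, 60)
odd_green  = (0, 170, 60)   # a slightly lighter green

def rgb_distance(a, b):
    """Euclidean distance in raw RGB space - a crude proxy for
    perceptual difference (real studies use CIELAB / Delta E)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

ring = [base_green] * 11 + [odd_green]   # 12 squares, one odd
odd_index = max(range(len(ring)),
                key=lambda i: rgb_distance(ring[i], base_green))
print(odd_index, rgb_distance(base_green, odd_green))
```

Render those twelve squares on a decent display and most viewers pick out the odd one in a second or two, regardless of what color vocabulary their language has.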
[1] http://www.sciencedirect.com/science/article/pii/00100277839...