It's still flight, even if it's not done like a bird. Just because nature does it one way doesn't mean it's the only way.
(On a side note, multilayer perceptrons aren't all that different from how neurons work, hence the term "artificial neural network". But they also have a purely mathematical/statistical grounding. The divide between the two isn't clear-cut; the whole point of mathematics is to model the world.)
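To make that side note concrete: mathematically, a multilayer perceptron is nothing but compositions of weighted sums and nonlinearities, which is also a cartoon of a neuron integrating inputs and firing. A minimal sketch in Python (my own illustration, not anyone's reference implementation):

    import numpy as np

    def layer(x, W, b):
        # One "layer of neurons": a weighted sum of inputs plus a bias
        # (crudely, dendritic integration), squashed by a nonlinearity
        # (crudely, a firing rate).
        return np.tanh(W @ x + b)

    rng = np.random.default_rng(0)
    x = rng.normal(size=4)                              # input "stimulus"
    h = layer(x, rng.normal(size=(8, 4)), np.zeros(8))  # hidden layer
    y = layer(h, rng.normal(size=(2, 8)), np.zeros(2))  # output layer
    print(y)

Read it either way: as statistics (a parametric function fit to data) or as a biology-flavored metaphor. The math doesn't care, which is rather the point.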
Nobody knows how neurons actually work: http://www.newyorker.com/online/blogs/newsdesk/2012/11/ibm-b.... We are missing vital pieces of information needed to understand that. Show me your accurate C. elegans simulation and I will start to believe you have something.
Perhaps in a hundred years the argument will run like this: for several hundred years, inventors tried to build an AI by creating artificial contraptions while ignoring how biology worked, inspired by a historically fallacious anecdote about how inventors had only tried to fly by building contraptions with flapping wings. It was only when they figured out that evolution, massively parallel mutation and selection, is actually necessary that they managed to build an AI.
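Whatever one makes of that scenario, "massively parallel mutation and selection" is at least easy to state precisely. A toy sketch of the loop (my own illustration, nothing more; it evolves a bit-string toward all ones):

    import random

    def fitness(genome):                # toy objective: count the 1-bits
        return sum(genome)

    def mutate(genome, rate=0.05):      # flip each bit with small probability
        return [bit ^ (random.random() < rate) for bit in genome]

    # 100 random genomes of 50 bits each
    population = [[random.randint(0, 1) for _ in range(50)] for _ in range(100)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        survivors = population[:50]     # selection: keep the fitter half
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(50)]  # mutation refills the rest
    print(max(fitness(g) for g in population))  # climbs toward 50

The hard part in the parent's scenario isn't this loop, which is trivial, but whether any fitness function short of an actual environment gets you intelligence.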
If you think they are insufficiently accurate, submit a pull request.
To quote Jeff Hawkins: "This kind of ends-justify-the-means interpretation of functionalism leads AI researchers astray. As Searle showed with the Chinese Room, behavioral equivalence is not enough. Since intelligence is an internal property of a brain, we have to look inside the brain to understand what intelligence is. In our investigations of the brain, and especially the neocortex, we will need to be careful in figuring out which details are just superfluous "frozen accidents" of our evolutionary past; undoubtedly, many Rube Goldberg–style processes are mixed in with the important features. But as we'll soon see, there is an underlying elegance of great power, one that surpasses our best computers, waiting to be extracted from these neural circuits.
...
For half a century we've been bringing the full force of our species' considerable cleverness to trying to program intelligence into computers. In the process we've come up with word processors, databases, video games, the Internet, mobile phones, and convincing computer-animated dinosaurs. But intelligent machines still aren't anywhere in the picture. To succeed, we will need to crib heavily from nature's engine of intelligence, the neocortex. We have to extract intelligence from within the brain. No other road will get us there."
As someone with a strong background in biology who took several AI classes at an Ivy League school, I found that all of my CS professors had a disdain for anything to do with biology. The influence of these esteemed professors and the institutions they perpetuate is what's been holding the field back. It's time people recognize it.
The Chinese Room experiment doesn't show only that. It also shows how important the inter-relationships between the component parts of a system are.
We're reducing the Chinese Room to the man inside and the objects he is using, such as a lookup table. But what we're missing is the complex pattern among the answers: the structure and mutual integration that exists in their web of relations.
If we could reduce a system to its parts, our brains would be just a bag of neurons, not a complex network. We'd reach the conclusion that brains can't possibly have consciousness, on the grounds that there is no "consciousness neuron" to be found in there. But consciousness emerges from the inter-relations of neurons, and the Chinese Room can understand Chinese on account of its complex inner structure, which models the complexity of the language itself.
Honestly, I imagine we'd learn more from philosophers helping to spec out what a sentient mind actually is than from biologists trying to explain imperfect implementations of the mechanisms of thought.
It will deliver on all of the failed promises of past AI techniques: creative machines that actually understand language and the world around them. The "hard" AI problems of vision and commonsense reasoning will become "easy". You won't need to program into a computer the logic that all people have hands or that eyes and noses are on faces. Machines will gain this experience as they learn about our world, just like their biological equivalents: children.
Here's some more food for thought from Jeff Hawkins:
"John Searle, an influential philosophy professor at the University of California at Berkeley, was at that time saying that computers were not, and could not be, intelligent. To prove it, in 1980 he came up with a thought experiment called the Chinese Room. It goes like this:
Suppose you have a room with a slot in one wall, and inside is an English-speaking person sitting at a desk. He has a big book of instructions and all the pencils and scratch paper he could ever need. Flipping through the book, he sees that the instructions, written in English, dictate ways to manipulate, sort, and compare Chinese characters. Mind you, the directions say nothing about the meanings of the Chinese characters; they only deal with how the characters are to be copied, erased, reordered, transcribed, and so forth.
Someone outside the room slips a piece of paper through the slot. On it is written a story and questions about the story, all in Chinese. The man inside doesn't speak or read a word of Chinese, but he picks up the paper and goes to work with the rulebook. He toils and toils, rotely following the instructions in the book. At times the instructions tell him to write characters on scrap paper, and at other times to move and erase characters. Applying rule after rule, writing and erasing characters, the man works until the book's instructions tell him he is done. When he is finished at last he has written a new page of characters, which unbeknownst to him are the answers to the questions. The book tells him to pass his paper back through the slot. He does it, and wonders what this whole tedious exercise has been about.
Outside, a Chinese speaker reads the page. The answers are all correct, she notes—even insightful. If she is asked whether those answers came from an intelligent mind that had understood the story, she will definitely say yes. But can she be right? Who understood the story? It wasn't the fellow inside, certainly; he is ignorant of Chinese and has no idea what the story was about. It wasn't the book, which is just, well, a book, sitting inertly on the writing desk amid piles of paper. So where did the understanding occur? Searle's answer is that no understanding did occur; it was just a bunch of mindless page flipping and pencil scratching. And now the bait-and-switch: the Chinese Room is exactly analogous to a digital computer. The person is the CPU, mindlessly executing instructions, the book is the software program feeding instructions to the CPU, and the scratch paper is the memory. Thus, no matter how cleverly a computer is designed to simulate intelligence by producing the same behavior as a human, it has no understanding and it is not intelligent. (Searle made it clear he didn't know what intelligence is; he was only saying that whatever it is, computers don't have it.)
This argument created a huge row among philosophers and AI pundits. It spawned hundreds of articles, plus more than a little vitriol and bad blood. AI defenders came up with dozens of counterarguments to Searle, such as claiming that although none of the room's component parts understood Chinese, the entire room as a whole did, or that the person in the room really did understand Chinese, but just didn't know it. As for me, I think Searle had it right. When I thought through the Chinese Room argument and when I thought about how computers worked, I didn't see understanding happening anywhere. I was convinced we needed to understand what "understanding" is, a way to define it that would make it clear when a system was intelligent and when it wasn't, when it understands Chinese and when it doesn't. Its behavior doesn't tell us this.
A human doesn't need to "do" anything to understand a story. I can read a story quietly, and although I have no overt behavior my understanding and comprehension are clear, at least to me. You, on the other hand, cannot tell from my quiet behavior whether I understand the story or not, or even if I know the language the story is written in. You might later ask me questions to see if I did, but my understanding occurred when I read the story, not just when I answer your questions. A thesis of this book is that understanding cannot be measured by external behavior; as we'll see in the coming chapters, it is instead an internal metric of how the brain remembers things and uses its memories to make predictions. The Chinese Room, Deep Blue, and most computer programs don't have anything akin to this. They don't understand what they are doing. The only way we can judge whether a computer is intelligent is by its output, or behavior."
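For what it's worth, Searle's mapping in the passage above (person = CPU, book = software, scratch paper = memory) translates directly into code. A toy sketch (mine; the rulebook here is hypothetical and absurdly small, but the point survives at any size):

    # person = the loop below, book = RULEBOOK, scratch paper = `scratch`
    RULEBOOK = {
        "你好吗": "我很好",        # "how are you" -> "I am fine"
        "你是谁": "我是一个房间",  # "who are you" -> "I am a room"
    }

    def room(message):
        scratch = []                      # the scratch paper
        for pattern, answer in RULEBOOK.items():
            if pattern in message:        # compare characters, meaning-free
                scratch.append(answer)    # copy characters, meaning-free
        return "".join(scratch) or "请再说一遍"  # "please say that again"

    print(room("你好吗?"))  # prints 我很好, with no understanding anywhere in sight

Nothing in the loop knows what the symbols mean, which is exactly Searle's point; whether scaling the rulebook up changes anything is exactly the dispute.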
Second, despite running into it time and again over the years, Searle's Chinese Room argument still does not much impress me. It seems clear to me that the setup just hides the difficulty and complexity of understanding in the magical lookup table of the book. Since you've probably encountered this sort of response, as well as the analogy from the Chinese Room back to the human brain itself, I'm curious what you find useful and compelling in Searle's argument.
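To put a rough number on how much that table hides: a book answering every possible question by lookup needs an entry per possible input, so it grows exponentially with input length. Back-of-the-envelope in Python, with assumed figures (mine, purely for illustration): a 3,000-character inventory and questions of up to 20 characters:

    chars, max_len = 3000, 20
    entries = sum(chars ** n for n in range(1, max_len + 1))
    print(f"{entries:.2e}")  # ~3.49e+69 entries, beyond any physical book

Any physically realizable Room has to compress that table, and the compression, i.e. the structure shared across answers, is where the interesting part of "understanding" would live.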
I remain interested in biological approaches to cognition and the potential for insights from brain modelling, but I don't see how it's useful to disparage mathematical and statistical approaches, especially without concrete feats to back up the criticism.
Here it is from the horse's mouth: http://youtu.be/15sh05wrQ6Y#t=16m34s