It looks to me, a layman, that the only approach that holds any water is the first one. But then again, it mostly looks like people are implementing software based on a flawed understanding of cognitive functions and basically hoping that something magic happens. How can a scattershot approach like this ever produce anything even remotely resembling human intelligence?
You're also crucially missing the possibility that someone comes up with an intelligent algorithm that doesn't mirror the human brain in much detail, but still manages to outperform it. Think of flight: inventing flapping machines didn't turn out to be very useful, but we figured out a workaround that was far more efficient. The most interesting (terrifying?) AI research is along these lines.
Are we really so special? Apart from being first (that we know of)? The history of science is a history of iconoclasm against anthropocentrism!
Of course, copying what we know works seems like a reasonable starting point.
Secondly, who said it has to be done within a few decades? If it is done within centuries or millennia, it is still done. "Infeasible" would mean it can't be done, ever.
Personally, I tend towards 50 years off... as it has been for a long time (and probably will continue to be) <- this is a joke. I'm saying it's a looong way off.
Brain-as-a-computer, in my opinion, should be the default position for these discussions. Why? Consider the old and tired brain-made-of-matter argument. There's no reason to think there's something magical or supernatural inside the brain, so treat it as an organized collection of atoms doing cool stuff. The default position cannot be magic; it has to be something that can be disproved or ruled out.
Some parts of it seem to work, as far as we know, in a (suspiciously) algorithmic way; in other words, a highly abstract step-by-step chain of actions can be identified for a given part of the brain.
Why not start with the crazy assumption that the whole brain acts as a computer (the theoretical concept), and then identify which parts of it fail the analogy? The key word here is 'fail': it should not mean 'too complex for any computer we have built', nor 'we don't know any algorithm that does that'. It should mean that there are parts that inherently cannot be modelled under any circumstances, in the sense of the formal definition of an algorithm. Only if some part is discovered to break the analogy should you question whether the analogy in question is apt.
Neurons are a bit more general than our typical logic gates, but at a high level they have a lot in common.
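To make the comparison concrete, here is a minimal sketch of a McCulloch-Pitts-style threshold unit (the function names are mine, purely for illustration): with suitable weights and thresholds, a single unit behaves like a classic logic gate.

```python
# A threshold unit: fire (1) if the weighted sum of inputs
# reaches the threshold, otherwise stay silent (0).
def neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# With the right parameters, one unit reproduces AND and OR.
def AND(a, b):
    return neuron([a, b], [1, 1], threshold=2)

def OR(a, b):
    return neuron([a, b], [1, 1], threshold=1)

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(0, 1), OR(0, 0))    # 1 0
```

Real neurons are far richer than this (timing, inhibition, plasticity), which is the "a bit more general" part; the sketch only shows the overlap.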
Also, airplanes do imitate birds in a few fundamental ways.
There is another approach, what I would call the Airplane approach, since it is to the brain what airplanes are to birds. That is, to base machine intelligence on a new kind of mathematical logic that hasn't been invented yet.
As others have suggested, this is demonstrably wrong, trivially so.
Chess computers are better than humans, but I wouldn't trust them to manage the electricity grid. What if there was an equivalent quality of computer specialized for every significant area of society - electricity grid, packet routing, high speed trading, etc etc.
Approximate number of human neurons: 1.0e11
Approximate number of synapses in a human brain: 1.0e14
These are big numbers, but not impossibly big numbers. There are different kinds of neurons, and signals on synapses are not simply binary. However, even with these complications, the hardware needed to reach these scales isn't hard to imagine.
Transistors in Xbox One: 1.0e09
Brains are biological computers, so they suffer from very slow switching speeds at the neural level. Neurons run in parallel, but they are not fast:
Approximate neural switching speed: 1.0e03/sec
Even if all of the synapses could sustain this rate in parallel (they can't), and even if the whole brain were 100% occupied with a single task (it isn't), the brain absolutely cannot compute faster than:
Speed * synapses (brain ops per second): 1.0e17/sec
For comparison, the fastest bitcoin hardware I see is advertised to operate at the following speed:
Minerscube 15 (hashes per second): 1.5e12/sec
And a regular GPU is capable of simple instructions that run at the following speed:
AMD Radeon HD 6990 32-bit instructions: 2.6e12/sec
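The back-of-the-envelope arithmetic above is easy to check in a few lines (the figures are the rough ones quoted here, not precise measurements):

```python
# Rough upper bound on brain "ops": every synapse switching at full speed.
synapses = 1.0e14      # approximate synapses in a human brain
switch_rate = 1.0e3    # approximate neural switching speed, per second
brain_ops = synapses * switch_rate
print(f"brain ops/sec upper bound: {brain_ops:.1e}")  # 1.0e+17

# Compare against the GPU figure quoted above.
gpu_ops = 2.6e12       # AMD Radeon HD 6990, 32-bit instructions/sec
print(f"GPUs needed to match the bound: {brain_ops / gpu_ops:.0f}")
```

On these numbers, tens of thousands of such GPUs would match the upper bound, which is a lot of hardware but not an unimaginable amount.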
From this we can see that hardware is catching up with the raw computing ability of the human brain.

Now consider the problem of programming a brain. It isn't necessary to program every synapse. The brain learns, and essentially programs itself. To see why this must be true, consider the programming that we are born with:
Bits of information in the human genome: 1.0e10
This is far less than the number of synapses that we have. Therefore, the brain must program itself, somehow.

Now, to address the argument about evolution taking millions of years. First, we can evolve programs much faster than nature can evolve humans. There have been, perhaps, 100 million generations of humans. Even if it takes six seconds of computing time to run an evolutionary computation for a single generation, it will take no more than 20 years to run over 100 million generations.
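The generation arithmetic checks out; here is the quick sketch (six seconds per simulated generation is the assumed figure from above, not a measurement):

```python
generations = 100_000_000        # rough count of human generations
seconds_per_gen = 6              # assumed compute time per simulated generation
seconds_per_year = 60 * 60 * 24 * 365
years = generations * seconds_per_gen / seconds_per_year
print(f"{years:.1f} years")      # ~19 years, under the 20-year bound
```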
Brute force evolution isn't the only way to build strong AI. A program can exhibit behavior that we don't anticipate. I've written simple programs that beat me easily at games such as Othello or Freecell.
Finally, once machines get smart enough to design other machines there may be a rapid acceleration of progress in this area as we employ them in designing subsequent generations.
I feel that strong AI may pose a significant risk to humans; consequently, we should proceed with caution. Here is a thought experiment. If a chimpanzee could be taught to drive, would you trust it to pick your kids up from school? What sort of value judgements would it make in the case of an impending emergency? Would you let an elephant babysit for you? Even if it was much "smarter" than a normal elephant?
Strong AI will not be like us. It will learn and develop without a human body; it will not interact with the world and society as we do, and it may end up being very foreign to us. Will it be sociopathic? Or will it be like whales: intelligent, but mysterious, perhaps spending all its time singing AI songs to other AIs.