Arguing over which definition is "true" is misguided; definitions can't really be true or false, only more or less useful, and they have no direct impact on reality either way.
...typed the person into his quantum-mechanically-mediated instantaneous conversation with ten thousand of the best-educated technologists on Earth.
You're going to have trouble convincing me that Google doesn't make me more intelligent. You're also going to have trouble convincing me that, say, cheap printing and cheap food -- both products of the industrial revolution, and without which I might be a subsistence farmer -- do not make me more intelligent. I wouldn't know quantum mechanics without them, after all, so I wouldn't understand how computers work.
And if you strive to define "intelligence" such that human intelligence has not been improved by technology -- which can be done; I'm clearly less intelligent than my Cro-Magnon ancestors by many criteria -- you are going to have a hard time convincing me that the future will be any different. If technology hasn't worked to improve "intelligence" before, why should it do so later?
Edit: it's called the Flynn effect, and if this phone had decent copy and paste, I'd give you the Wikipedia link, which you are now forced to find on your own. But at least I can rest reasonably assured that your general intelligence is sufficient for that task ;-).
Are you referring to HN?
We don't expect the singularity to improve the intelligence of transistors. Perhaps it is unreasonable to expect that previous intelligence explosions of a collective being massively improved the intelligence of the humans comprising it.
I bet it led to a significant increase in the average IQ. It may not have been a Singularity with a capital S, but it certainly did reflect back on its creators.
We're seeing it now, however.
There are already theories about how to build universal AI algorithms (check out Marcus Hutter's stuff at http://www.hutter1.net/). These theories are quite an accomplishment, no doubt. But they boil the problem down to one involving sequence prediction. Let's say that particular form of the problem is now "solved". OK now what sequences would you like to present to the algorithm for training? Ones from the environment? Great, but those will take time to collect. Simulated ones? It takes a lot of knowledge to generate good ones.
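(AIXI itself is incomputable, so any runnable illustration is necessarily a vastly weaker stand-in. As a toy sketch of what "learning by sequence prediction" means in the simplest possible case, here is a bigram frequency predictor; all function names here are illustrative, not from Hutter's work.)

```python
from collections import Counter, defaultdict

def train_bigram_predictor(sequence):
    """Count, for each symbol, which symbol tends to follow it."""
    counts = defaultdict(Counter)
    for current, following in zip(sequence, sequence[1:]):
        counts[current][following] += 1
    return counts

def predict_next(counts, symbol):
    """Predict the most frequently observed successor of `symbol`."""
    if symbol not in counts:
        return None  # no training data for this context
    return counts[symbol].most_common(1)[0][0]

# Train on a toy sequence and predict a continuation.
model = train_bigram_predictor("abcabcabc")
print(predict_next(model, "a"))  # → 'b'
```

The point of the toy: even a trivial predictor needs training sequences before it can predict anything, which is exactly the bottleneck described above — the "solved" theory still leaves open where good sequences come from.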
For these reasons, I suspect the Singularity is safe from us for a while.
With AIXI and its variants, since they are universal predictors, it doesn't really matter which sequences you supply so long as they have some connection to the environment. Heck, just give your algorithm access to an email server or something.
Then my dog will find the bone faster than AIXI. See what I mean?
If you look at intelligence in that light, we have gone through a huge transformation with the printing press, the industrial revolution, and now the internet revolution.
Basically, if you think of each human brain as a small component that can try out new ideas and then spread them... well, that massive human-spanning brain is undergoing massive ongoing upgrades in memory capacity and speed of communication.
If we think human-level AI is possible, then superhuman AI would also be possible. Note, though, that while not infinite, human cognition is complete; that is, a human or group of humans is limited only by time and resources. No matter how much time it has, a dog will never stumble upon quantum mechanics. But given enough time, a human being can construct an expert knowledge repository, even though he himself is unable to understand some of the intermediary steps or details.
Now, this is where the Singularity takes a turn toward religion. Some people believe "it" to be a future AI with enough physical means to be indistinguishable from a god. Not just a god, but one single God.
I'm always amazed by such blatantly anthropo-glorifying statements, which present doubtful claims like these as obvious facts:
- that a dog's brain has much less processing power. The sheer size difference is largely offset by a difference in efficiency (the human brain being the less efficient one).
- that quantum mechanics is a super-achievement, replication of which is a mark of higher intelligence. To me, the ugliness of QM borders on a shame for human intelligence.
- The difference in efficiency should tell you that the power spent is _worth_ something, like astonishingly superior abstraction capabilities and the ability to speak and write.
- The ugliness (read: complexity) of QM could just as easily be interpreted as a tribute. Whether you like it or not, it is the best we have. To paraphrase HHG2G: if it's ugly, then it's the universe that got it wrong, not us. (I'm partially joking, mostly because I don't suspect you have a firm enough grasp of QM, let alone QFT, to judge its aesthetic beauty.)
I mean, I'm all for a little humbling perspective. We are a race of people descended from stupider people unto apehood (and before that, lizards!) that are stuck on a rock in a universe fantastically larger than us.
That doesn't mean we should be Debbie Downers.
Uh, what? How does OAuth have anything to do with the singularity?
The technological singularity, Skynet-style, is possibly just a crutch for human v1.0, whereas v2.0 may simply skip this step.
And who gives The Republic serious consideration for the present? ...Oh, I see, you're speaking in Ann Arbor tomorrow; of course you do, just like that "The Republic Is America and The Republic Is Awesome" lecture that is taught in full seriousness in the Yale online courses... Now it makes sense: the delusions of apparently omnipotent wealth.
So much high school lovin tonight, what's up pups?!