Our intelligence could just as well be a complicated mesh of a huge number of 'rules' that evolved over a long period in response to the many difficult situations that threatened our ancestors' survival. These rules are also what we call intuition, innate knowledge, natural tendencies, and reflex responses.
Having said that, we have not ruled out the possibility of intelligence (albeit of a form different from ours) arising as an emergent phenomenon from simple rules. But if we were to replicate human intelligence on silicon, I'm more inclined to believe that we'll have to 'manually' encode a huge number of situation-specific rules, e.g. "when you see a wriggly thing, run away."
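To make that concrete, here's a toy sketch of what "manually encoded situation-specific rules" could look like in code. Everything here (the rule table, the stimuli, the `react` function) is invented for illustration, not a claim about how such a system would actually be built:

```python
# Toy sketch (all names and rules are hypothetical): intelligence as a
# large lookup table of hand-coded, situation-specific reflex rules.

REFLEX_RULES = {
    "wriggly thing": "run away",
    "loud noise": "freeze",
    "sweet smell": "approach",
}

def react(stimulus: str) -> str:
    """Return the hard-coded reflex for a stimulus, with a cautious default."""
    return REFLEX_RULES.get(stimulus, "observe cautiously")
```

The point of the sketch is only that each rule must be written in by hand; a real system on this model would need an enormous number of such entries.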
I urge you to watch Andrew Ng's talk that I linked to in the post, and read On Intelligence (http://www.amazon.com/On-Intelligence-Jeff-Hawkins/dp/080507...) by Jeff Hawkins, a book that totally changed the way I look at intelligent behavior.
So yes, a general-purpose learning algorithm, using the correct paradigm, would learn to think as powerfully as we do. And it would do so in a way that its programmers could never predict.
In the same vein, I would say that Google search results are an emergent phenomenon, albeit not quite as interesting as general-purpose intelligence: it's intractable to predict what Google will return for certain queries, even if we know all of its rules. Keep in mind that there are degrees of emergence; it's not black and white. (On the other hand, I don't think Google's algorithm is as "simple" as it originally was, but that's a discussion for another time.)
I surmise that that's the "nature" part, and what emerges is the "nurture" part. But these are all just shots in the dark. Passing our eventual understanding on to faster machines with better retention seems suicidal.
http://www.amazon.com/dp/0738201421
I'm not sure we can solve AI in an 8-bit cellular automaton, but the general concept of intelligence emerging rather than being designed certainly rings true with human evolution.
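The emergence-from-simple-rules idea is easy to demonstrate with an elementary (one-dimensional, binary) cellular automaton. The sketch below uses Rule 110, a standard example whose update rule fits in a single byte yet produces patterns complex enough that the rule has been proven Turing-complete; the function names and parameters are my own, just for illustration:

```python
# Elementary cellular automaton sketch: each cell's next state is a pure
# function of its 3-cell neighborhood, and the whole rule is one byte.

def step(cells, rule=110):
    """Apply one update of an elementary CA with wrap-around edges."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # index 0..7
        out.append((rule >> neighborhood) & 1)              # look up rule bit
    return out

def run(width=64, steps=30, rule=110):
    """Evolve from a single live cell and return all generations."""
    cells = [0] * width
    cells[width // 2] = 1
    rows = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        rows.append(cells)
    return rows

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Nothing in the eight rule bits hints at the intricate, long-lived structures the printout shows; in that narrow sense the behavior is emergent rather than designed, which is the analogy to evolution being drawn above.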