There may be philosophical (i.e., fundamental) challenges to AGI. Consider, for example, Gödel's incompleteness theorems, though Scott Aaronson argues these do not matter much in practice (see, e.g., his YouTube talk "How Much Math Is Knowable?"). There also seem to be limits on computing the behavior of chaotic systems, where tiny errors in initial conditions grow until prediction fails. And in general, verifying physical theories has required carrying out actual physical experiments: even a fully capable reasoning model cannot settle empirical questions by "pondering" alone.
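The point about chaotic systems can be made concrete. As a minimal sketch (using the standard logistic map, an illustration not drawn from the text above), a perturbation in the twelfth decimal place of the starting point destroys long-horizon agreement between two otherwise identical simulations:

```python
# Hypothetical illustration: the logistic map x_{n+1} = r * x_n * (1 - x_n)
# is chaotic at r = 4. Two runs whose starting points differ by 1e-12 stay
# close at first, then decorrelate entirely, so long-horizon prediction
# fails even though the governing rule is known exactly.

def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-12)  # perturb the 12th decimal place

early_gap = abs(a[5] - b[5])                                # still tiny
late_gap = max(abs(x - y) for x, y in zip(a[40:], b[40:]))  # order one
print(f"gap after 5 steps: {early_gap:.2e}; worst gap past step 40: {late_gap:.2e}")
```

The divergence here is roughly exponential (the map's Lyapunov exponent at r = 4 is ln 2), so each extra step of prediction costs about one more bit of initial-condition precision, which no amount of reasoning can conjure.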
It’s also easy to forget that “reason is the slave of the passions” (Hume): much of what we regard as intelligence is explicitly tied to other, baser (or more elevated) parts of the human experience.