I think until we know the answer to this, we can't make predictions about how to build true AGI.
Rarely, actually.
More generally, humans use all kinds of inference, where the problem at hand is intertwined with everything else occupying the person's mental load. Giving a topic full attention and finding a path through pure deduction about a circumscribed subject is a rarity, even if you consider only those situations that require any conscious attention at all to perform some action before moving on.
For humans, it is emergent. But when we reason about reason, we invent special sauce.
If we build our theories of reason into our models, they achieve the strengths and limitations of our models.
If we don't, we're limited by the pace of evolution, because we don't have enough connections in our graph.
So I think we'll have something immediately more useful if we embed ALU special instructions into a neural network.
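To make that concrete, here is a minimal sketch (my own illustration, not the commenter's design, and roughly in the spirit of published "neural arithmetic unit" ideas): a layer where a learned softmax gate mixes the exact results of hard-wired arithmetic ops, rather than asking the network to approximate arithmetic with weights. All names, shapes, and the choice of three ops are assumptions.

```python
import math
import random

def softmax(zs):
    # Numerically stable softmax over a list of logits.
    m = max(zs)
    es = [math.exp(z - m) for z in zs]
    s = sum(es)
    return [e / s for e in es]

class NeuralALU:
    """Hypothetical layer: a differentiable gate over exact ALU ops."""

    def __init__(self, seed=0):
        rng = random.Random(seed)
        # Gate logits over the exact ops; training would learn these.
        self.gate_logits = [rng.gauss(0, 1) for _ in range(3)]

    def forward(self, a, b):
        ops = [a + b, a - b, a * b]       # exact ALU results, not learned approximations
        gate = softmax(self.gate_logits)  # soft, differentiable op selection
        return sum(g * o for g, o in zip(gate, ops))

alu = NeuralALU()
out = alu.forward(3.0, 4.0)
# The output is a convex mixture of 7, -1, and 12, so it must lie in [-1, 12].
assert -1.0 <= out <= 12.0
```

The point of the sketch is the division of labor: the gradient flows through the gate (pattern matching picks the operation), while the arithmetic itself is exact, the way an ALU instruction would be.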
However, humans have the ability to reason about things (whether most people use this ability is a different question). So then we must ask the question: is this ability just a more advanced form of probabilistic pattern matching, or is it a different architecture altogether? Will current AI models be able to develop this ability, or will we need new models?
Nope. Most humans fall into various traps, such as pattern recognition, confirmation bias, and many others, instead of relying on deductive analysis. Even scientists fail at being rigorous.
Just our visual object recognition is immensely powerful and far beyond any current AI. A simple task like walking to the fridge requires a ton of pattern recognition and spatial reasoning. Recognizing people's moods and predicting their behavior is also incredibly involved imo.
I've said this many times, but perhaps we should focus on achieving dog-level intelligence first before we start worrying about human-level AGI.