Such as all the functional relationships between concepts that end up being modeled, i.e. "understood" and applicable. Those complex relationships are what is learned in order to predict complex phenomena, like real conversations and text, covering about every sort of concept or experience that people have.
Deep learning architectures don't just capture associations, correlations, conditional probabilities, Markov chains, etc. They learn whatever functional relationships are present in the data.
(Technically, neural-network-style models are "universal approximators": given enough parameters, data, and computation, they can approximate any function to arbitrary accuracy.)
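A minimal sketch of that approximation claim, using a one-hidden-layer network with random tanh features and the output layer solved by least squares (so no iterative training is needed; all sizes and the target function are illustrative choices, not anyone's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Target function to approximate on [-pi, pi]
x = np.linspace(-np.pi, np.pi, 400).reshape(-1, 1)
y = np.sin(x)

# Hidden layer: 50 tanh units with fixed random weights and biases
W = rng.normal(size=(1, 50))
b = rng.normal(size=(1, 50))
H = np.tanh(x @ W + b)          # hidden activations, shape (400, 50)

# Solve the output weights in closed form by least squares
v, *_ = np.linalg.lstsq(H, y, rcond=None)
y_hat = H @ v

mse = float(np.mean((y - y_hat) ** 2))
print(f"MSE approximating sin(x): {mse:.6f}")
```

Even this crude setup drives the error on sin(x) close to zero; adding more hidden units tightens the fit, which is the universal-approximation property in miniature.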
Your neurons and your mind/knowledge have exactly the same relationship.
Simple learning algorithms can learn complex algorithms. Saying that all they can do is the simple algorithm is very misleading.
It would be like saying logic circuits can only do logic: ANDs, ORs, NOTs. But not realizing that those primitives include the ability to perform every possible algorithm.
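To make that concrete, here is a toy sketch composing XOR and a half-adder purely out of AND, OR, and NOT (the gate decompositions are standard; the function names are just for this example):

```python
# The three primitive gates, on bits 0/1
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def XOR(a, b):
    # a XOR b == (a OR b) AND NOT(a AND b)
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    # One-bit addition: sum and carry, built only from the primitives
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```

Chain enough of these and you get full adders, then arithmetic units, then a whole computer: the "only logic" primitives are computationally universal.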
World model: This is what prediction is based on. That's what models are for.
Sense of consequences: prediction of those consequences, obviously.
Desire for self preservation: prediction; avoiding world states predicted to be detrimental to achieving one's goals.
Goal setting: prediction; predicting which subgoals steer the world towards achieving one's supergoal(s).
Reward-driven behavior: fundamentally interwoven with prediction. Not only is it all about predicting which behaviors are rewarded; the reward, or lack thereof, is then used to update the agent's model to make better predictions.
There's even a theory of cognition that all motor control is based on prediction: the brain first predicts a desired state of the world, and the nervous system then controls the muscles to fulfill that prediction!
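The reward-driven updating described above can be sketched as a minimal prediction-error loop, in the style of Rescorla-Wagner / temporal-difference learning (the learning rate and reward values here are arbitrary illustrative assumptions):

```python
alpha = 0.1          # learning rate (illustrative choice)
true_reward = 1.0    # reward the environment actually delivers
V = 0.0              # agent's current prediction of the reward

for step in range(100):
    error = true_reward - V   # reward prediction error
    V += alpha * error        # nudge the prediction toward what happened

print(f"learned prediction: {V:.4f}")
```

The agent's estimate converges toward the actual reward, with the prediction error itself doing all the work of the update, which is the sense in which reward learning is prediction all the way down.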