Sure they do. Humans rely on tons of audio and video before they can even read (or walk).
The same applies to reading. Our brains already come equipped with many of the foundational capabilities reading requires, and we only _learn_ part of what is necessary for the skill of reading.
Unlike computer models, brains are not a tabula rasa, so they don't need the same amount of input to learn.
>Let us consider the GPT-3 model with P = 175 billion parameters as an example. This model was trained on T = 300 billion tokens. On n = 1024 A100 GPUs using batch-size 1536, we achieve X = 140 teraFLOP/s per GPU. As a result, the time required to train this model is 34 days.
https://arxiv.org/pdf/2104.04473.pdf
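For context, the 34 days follows from the paper's own rule of thumb (if I'm reading it right): end-to-end training time ≈ 8TP/(nX), where the factor of 8 covers the forward pass, backward pass, and activation recomputation. Rough sketch:

```python
# Back-of-envelope check of the quoted numbers, using the paper's
# approximation: end-to-end time ~= 8*T*P / (n*X).
P = 175e9        # parameters
T = 300e9        # training tokens
n = 1024         # A100 GPUs
X = 140e12       # achieved FLOP/s per GPU

total_flops = 8 * T * P              # ~4.2e23 FLOPs end to end
seconds = total_flops / (n * X)      # aggregate throughput is n*X FLOP/s
print(f"{total_flops:.1e} FLOPs, {seconds / 86400:.0f} days")  # -> ~34 days
```

So the training run works out to roughly 4.2e23 FLOPs.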
I'm not sure expressing brain capacity in FLOPs makes much sense, but if it can be expressed that way, I'm fairly confident the FLOPs a typical human spends on learning come in well below that.