An infinitely fast computer wouldn't meaningfully change the "expensive training vs. fast, static inference" workflow that neural networks have always been developed around (except in the most brute-force "retrain on the entire world, every single nanosecond" sense).
The brain is supremely efficient at what the brain has evolved to do. It's almost tautological: if it weren't, it wouldn't have evolved that way.
Silicon comes from an alien land and is emulating. Even with the best algorithms, there has to be a limit on how efficient a computer-based intelligence can be without changing how the chips work.
You could spin it around and say: well, computers are better at many things than humans, and there is no way you could get a biological brain to be as good for the same amount of power (e.g. a Raspberry Pi can do calculations our brains couldn't possibly do).
Many of these threads make the binary mistake of asking: can these systems be compared, or are they fundamentally different? It's a bit of both, almost certainly.
This echoes an extremely naive view of evolution.
There are many phenotypes in the living world which have evolved, but for which there is no reason to believe the phenotype is either (a) supremely efficient or (b) under selection pressure (the two are obviously related).
Evolution has no tautology. Brains do not evolve to be supremely efficient, just like humans do not evolve to be supremely efficient.
What exists today is that which has survived, for whatever reason. It's not even possible to say something as apparently simplistic as "the only purpose evolution respects is leaving behind more copies", because that ignores (a) group selection and (b) changing ecosystems that favor plasticity in the long run.
> Evolution has no tautology. Brains do not evolve to be supremely efficient, just like humans do not evolve to be supremely efficient.
> What exists today is that which has survived, for whatever reason. It's not even possible to say something as apparently simplistic as "the only purpose evolution respects is leaving behind more copies" because that ignores (a) group selection (b) changing ecosystems that favor plasticity in the long run.
A prime example of this is our legs: they would be much more efficient if the knees pointed backwards. They are not the most efficient design, but simply good enough.
Not really: evolution doesn't guarantee the brain will be supremely efficient. It just guarantees that it will be efficient ENOUGH.
https://en.m.wikipedia.org/wiki/Catastrophic_interference
In practice this requires full retraining at every step to integrate new knowledge. I think we have some partial solutions, like learning to select between finetunings, but those break down when a task needs to cut across them.
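For anyone who hasn't seen catastrophic interference up close, here's a toy numpy sketch (tasks, numbers, and names all illustrative, not anyone's real setup): plain gradient descent on task B overwrites the solution to task A, even though a single weight vector could fit both tasks at once.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse(w, X, y):
    """Mean squared error of a linear model."""
    return float(np.mean((X @ w - y) ** 2))

def train(w, X, y, lr=0.1, steps=500):
    """Plain full-batch gradient descent on the MSE objective."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w = w - lr * grad
    return w

# Two tasks over the same 2-feature input. Note w = [2, -3]
# would solve BOTH tasks simultaneously -- capacity isn't the issue.
X = rng.normal(size=(200, 2))
y_a = 2.0 * X[:, 0]    # task A depends only on feature 0
y_b = -3.0 * X[:, 1]   # task B depends only on feature 1

w = train(np.zeros(2), X, y_a)
loss_a_before = mse(w, X, y_a)   # task A learned: loss near zero

w = train(w, X, y_b)             # train only on task B, no rehearsal of A
loss_a_after = mse(w, X, y_a)    # task A is forgotten: loss blows up

print(f"task A loss before B: {loss_a_before:.6f}, after B: {loss_a_after:.2f}")
```

Nothing about the B objective rewards keeping the feature-0 weight, so gradient descent quietly drives it to zero. Interleaving or rehearsing both tasks avoids this, which is exactly why the naive fix is "retrain on everything."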
The human brain doesn't seem to suffer from catastrophic interference to nearly the same degree, independent of its computational efficiency, though there are possibly related phenomena, like developmental stages that may never be able to take place if they are delayed.
The primary difference, and likely the reason that brains are unreasonably effective, is the specifics of the architecture and internal representations (in the rigorous, information-theoretic sense) of their computational systems. The brain is not quite analog, but it uses analog means. It's not quite digital, but it does process via abstractions.
You can still reasonably call the brain a "computer" if you let the word shed its historical baggage and its close association with binary operations on transistors. You can, because the brain uses internal structures to process inputs and emit outputs. But as I said above, it takes a generalized interpretation of the word to start to see where and how the two fields of study might be unified.