> Otherwise they are simply two completely different objects.
That's where you're wrong. Both objects reflect the same mathematical operations in their structure.
Even if those were inscrutable alien artifacts to you, even if you knew nothing about who constructed them, how, or why? If you studied them, you would still be able to see the similarities laid bare.
Their inputs align, their outputs align. And if you dug deep enough? You would find that there are components in them that correspond to the same mathematical operations - even if the two are nothing alike in how exactly they implement them.
LLMs and human brains are "inscrutable alien artifacts" to us. Both were created by inhuman optimization pressures. Both have to be studied to find out how they function. It's obvious, though, that their inputs align and their outputs align. And the more you dig into the internals?
I recommend taking a look at Anthropic's papers on SAEs - sparse autoencoders. It's a method that essentially takes the population coding hypothesis and runs with it: it attempts to crack the neural code an LLM uses internally and pry interpretable features out of it. There are no "grandmother neurons" in there, so you need elaborate methods to examine what kinds of representations an LLM can learn to recognize and use in its functioning.
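To make the idea concrete, here's a minimal SAE sketch in PyTorch. Everything in it - the dimensions, the coefficients, the random stand-in data - is illustrative, not Anthropic's actual setup. The core of the technique is just this: train an overcomplete dictionary to reconstruct a model's internal activations under an L1 sparsity penalty, so each learned feature ends up firing for something narrow and, hopefully, interpretable.

```python
# Minimal sparse autoencoder sketch. All dimensions and hyperparameters
# are illustrative assumptions, not Anthropic's published configuration.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        # Overcomplete dictionary: many more features than model dimensions,
        # so individual features can specialize instead of staying polysemantic.
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, activations: torch.Tensor):
        # ReLU keeps feature activations non-negative and mostly zero.
        features = torch.relu(self.encoder(activations))
        reconstruction = self.decoder(features)
        return reconstruction, features

def sae_loss(reconstruction, activations, features, l1_coeff=1e-3):
    # Reconstruction term: the dictionary must still explain the activations.
    mse = ((reconstruction - activations) ** 2).mean()
    # L1 penalty: pushes most features to zero, forcing a sparse code.
    sparsity = features.abs().sum(dim=-1).mean()
    return mse + l1_coeff * sparsity

d_model, d_features = 512, 8192
sae = SparseAutoencoder(d_model, d_features)
optimizer = torch.optim.Adam(sae.parameters(), lr=1e-4)
# Stand-in for activations captured from an LLM's residual stream.
activations = torch.randn(256, d_model)
for _ in range(100):
    reconstruction, features = sae(activations)
    loss = sae_loss(reconstruction, activations, features)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The sparsity penalty is what does the real work here: without it, the autoencoder would just learn an arbitrary basis for the activation space, and nothing would force individual features to be interpretable.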
Anthropic's work is notable because they have not only managed to extract features that map to some amazingly high-level concepts, but also proven causality - interfering with the neuron populations mapped out by the SAE changes the LLM's behavior in predictable ways.
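The intervention side is simple in principle. A hedged sketch, reusing the hypothetical SparseAutoencoder above: clamp one feature to a fixed value, decode back into activation space, and splice the result into the forward pass (in practice via a hook on the relevant layer). This is, roughly, the trick behind Anthropic's "Golden Gate Claude" demo, where pinning a bridge-related feature high made the model steer every conversation toward the Golden Gate Bridge.

```python
import torch

def steer(sae, activations: torch.Tensor, feature_idx: int, strength: float):
    # Encode the captured activations into the sparse feature basis.
    features = torch.relu(sae.encoder(activations))
    # Clamp a single feature, overriding whatever value the model "wanted".
    # feature_idx is hypothetical - finding which index corresponds to which
    # concept is the hard, empirical part of the work.
    features[..., feature_idx] = strength
    # Decode back into model space; a forward hook would then substitute
    # this tensor for the layer's original output.
    return sae.decoder(features)
```

The point of the demo isn't the code, which is trivial; it's that the behavioral change is predictable from the feature's interpretation, which is what makes the SAE features causal handles rather than just correlates.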