If cognition magically exists outside of math and science, then sure, all bets are off.
We don't even know whether the equations describing the flow of water in a river always have well-behaved solutions - this is one of the Millennium Prize Problems. And we've known the partial differential equations that govern that system since the 1840s.
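For reference, these are the incompressible Navier-Stokes equations in their standard textbook form (the Millennium Prize Problem asks whether smooth solutions always exist in three dimensions):

```latex
% Incompressible Navier-Stokes: momentum balance plus incompressibility.
% u = velocity field, p = pressure, \rho = density,
% \nu = kinematic viscosity, f = external body forces (e.g. gravity).
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu \nabla^{2}\mathbf{u} + \mathbf{f},
\qquad
\nabla \cdot \mathbf{u} = 0
```

The problem statement has been known for nearly two centuries; what remains open is whether its solutions always stay smooth.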
We are far, far away from even being able to write down anything resembling a mathematical description of cognition, let alone being able to say whether the solutions to that description are in the class of Lebesgue-integrable functions.
There was, past tense, no reason to believe cognition could be represented as a mathematical function. LLMs with RLHF are forcing us to question that assumption. I would agree that we are a long way from a rigorous mathematical definition of human thought, but in the meantime that doesn't reduce the utility of approximate solutions.
The Navier-Stokes equations are a set of partial differential equations - they are the problem statement. Given some initial and boundary conditions, we can find (approximate or exact) solutions, which are functions. But we don't know that these solutions are always Lebesgue integrable, and if they are not, neural nets will not be able to approximate them.
This is just a simple example from well-understood physics showing that we have no guarantee neural nets will always be able to give approximate descriptions of reality.
"Neural networks are universal approximators" is a fairly meaningless sound bite. It just means that given enough parameters and/or the right activation function, a neural network, which is itself a function, can approximate other functions. But "enough" and "right" are doing a lot of work here, and in practice the answer to "how good is the approximation?" can be "not very".
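A minimal sketch of what "enough parameters" means in practice: a one-hidden-layer network with fixed random tanh features and a least-squares output layer, fitting f(x) = sin(2πx) on [0, 1]. All names and constants here are illustrative choices, not anything from the discussion above.

```python
# Fit sin(2*pi*x) with random tanh features; only the output layer is
# trained (via least squares). The point: approximation quality depends
# heavily on the hidden width - the theorem alone guarantees nothing
# at any fixed size.
import numpy as np

x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x)

def max_fit_error(width, seed=0):
    """Worst-case error on the grid for a hidden layer of `width` units."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, 10.0, size=width)   # fixed random input weights
    b = rng.normal(0.0, 10.0, size=width)   # fixed random biases
    H = np.tanh(np.outer(x, w) + b)         # (200, width) hidden activations
    coef, *_ = np.linalg.lstsq(H, y, rcond=None)  # train output layer only
    return float(np.max(np.abs(H @ coef - y)))

for width in (2, 10, 100):
    print(f"width={width:4d}  max error={max_fit_error(width):.4f}")
```

Wider layers drive the error down on this toy target, but that says nothing about whether the function you actually care about is in the class the theorem covers, or how many parameters "enough" turns out to be.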
A lot of people who argue that cognition is special to biological systems seem to base the argument on our inability to accurately model the detailed behavior of neurons. And yet kids regularly build universal computers out of stuff in Minecraft. It seems strange to imagine the response characteristics of low-level components of a system determine whether it can be conscious.
But GP specifically says neural nets should be able to do it because they are universal approximators (of Lebesgue-integrable functions).
I'm saying this is clearly a nonsense argument, because there are much simpler physical processes than cognition where the answers are not Lebesgue-integrable functions, so we have no guarantee that neural networks will be able to approximate the answers.
For cognition we don't even know the problem statement, and maybe the answers are not functions over the real numbers at all, but graphs or matrices or Markov chains or what have you. Then having universal approximators of functions over the real numbers is useless.
Consciousness cannot be accounted for in physical terms. For consciousness is absolutely fundamental. It cannot be accounted for in terms of anything else.
-- Erwin Schrödinger
- Carl Sagan
Sagan, while he did a little useful work on planetary science early in his career, quickly descended into (self-promotional) pseudo-science with his fanciful search for 'extra-terrestrial intelligence'. So it's apposite that you bring him up (even if the quote you cite misses the mark as a reply to a philosophical statement), because his belief in such an 'ET' intelligence was as much a fantasy as the belief in the possibility of creating an artificial intelligence.
We don't know whether physics is the fundamental substrate of being, and given Agrippa's trilemma we can't know.