> However, we can enumerate all the things that we create can do.
Not really, no. Even before AI, "Turing Complete" makes things extremely hard to enumerate; see the Busy Beaver numbers for how small a system can be and still be outside our ability to fully comprehend — needing up-arrow notation because exponentials aren't big enough is always good for a laugh.
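To give a sense of the scale involved, here is a toy sketch (my own, not from any library) of Knuth's up-arrow notation; even tiny arguments blow past anything an exponential can comfortably express:

```python
def up_arrow(a, n, b):
    """Knuth's a (up-arrow^n) b: n=1 is plain exponentiation, and each
    extra arrow iterates the previous operation. Values explode almost
    immediately, so this is only safe for very small inputs."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# 2↑3 = 2**3 = 8;  2↑↑3 = 2**(2**2) = 16;  2↑↑4 = 2**16 = 65536;
# 3↑↑↑2 = 3↑↑3 = 3**27 = 7625597484987 — already astronomical.
```

Three arrows with single-digit arguments is already far beyond what a machine can evaluate, which is roughly why Busy Beaver values need this notation at all.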
Your example of the Busy Beaver numbers, which was a recent and interesting read for me, is a good example of what I was trying to point out. We have a definition, and even if we cannot enumerate each number, we can discuss and think about them in a rational way. At the moment I am quite interested in Computer Algebra Systems (of which there are a variety), and I find it interesting just how limited these systems are and just how difficult it is to program into them the capabilities that humans use to solve various problems. The various discussions have been quite enlightening.
Mathematics is an interesting subject, and I think it shows up the intractability of ever reaching that highly feared singularity.
All artificial computing systems are limited in ways we are not. Your "Turing Machine" example is one such case, with the Halting Problem being the classic example.
I think that far too often, we fail to recognise that what we create is not that great. We often stand in awe of the things we make without comprehending that these things are a very poor reflection of what is around us and what we ourselves are.
Every time some hype comes about these artificial stupidity systems, I look at my youngest granddaughter and see in her, capabilities that far exceed anything that we have created. Even my old buck of a goat demonstrates capabilities far, far in excess of anything we have created in all of our computational systems.
As I have said elsewhere here, we have to be careful that we do not cede control of our lives to systems that we think are more than they really are - systems that are limited, fragile and prone to failure.
You appear to be asserting that humans can tell if a loop will end, when that loop is defined so that if it halts it doesn't, and if it doesn't it does.
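A toy sketch of that construction (with "halts" simplified to "returns True" so the code actually runs to completion): whatever a claimed decider predicts about the constructed program, the program consults the decider about itself and does the opposite.

```python
def build_counterexample(claimed_decider):
    """Given any claimed decider, construct a program that asks the
    decider about itself and then does the opposite of the prediction."""
    def paradox():
        return not claimed_decider(paradox)
    return paradox

# No matter what a decider answers, it is wrong about its own paradox:
optimist = lambda prog: True    # claims every program returns True
pessimist = lambda prog: False  # claims none do

p = build_counterexample(optimist)
q = build_counterexample(pessimist)
# optimist predicts True for p, but p() is False;
# pessimist predicts False for q, but q() is True.
```

The same diagonal trick, with "returns True" replaced by "halts", is the standard proof that no general halting decider can exist — for humans or machines armed only with a fixed procedure.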
> Even my old buck of a goat demonstrates capabilities far, far in excess of anything we have created in all of our computational systems.
How so?
Not saying this is necessarily false — GPT-3 is about as complex as the brain of a rodent, so it wouldn't exactly be surprising, even though the LLM only does text and completely different AIs do other things — but still, what exactly do goats do that's "far in excess"?
We can determine, by looking at certain problems (the Halting Problem is one such example), what the outcome will be without actually having to execute the code. The Halting Problem is one of the simpler problems that cannot be solved by computational means, which includes all artificial computational systems.
You ask [How so?] about my comment on my goat. I would suggest that to understand this you need to go and observe what happens in the environment with such beasts, whether it be a cow in a local paddock or a pet dog or cat. Take time to observe the interactions that occur and think about how little [training] is involved here.
Watch the children around you, take some serious time to observe them in their interactions, and then think about how we program our various artificial stupidity systems: we are still at the caveman stage in our computational systems. We have barely discovered fire, so to speak.
As for [GPT-3 is about as complex as the brain of a rodent], I don't think GPT-3 has even reached the intelligence of a single bacterial cell.
I would like you to try the following: Using your index finger on your left hand, touch the tip of your nose.
Now think about this: How did you do that very simple task? When did you learn and how did you learn to do that simple task?
If you think about it carefully, the task that I described is incredibly complex.
Now what would be required to get an artificial stupidity system (AS) to do the same task? What programming do we need to do to achieve this task? What programming was done to you to achieve that same task?
When you start asking questions like this, it becomes very clear that all of our computational systems (including all of the AS systems) are incredibly simple and not at all comparable to what we find within ourselves.
We can build very useful tools that we can use to good purpose. But no tool is ever more than a tool for us.
I suppose what concerns me about our current state of affairs is that we are far too impressed by our caveman antics. There is not a single industrial system built by mankind that comes close to the integrated control and manufacturing systems found in a single living cell. None of our communication systems come close to what is found in the various control/communication systems found in even the simplest of chordate organisms.
It was very obvious, 40-odd years ago during my engineering undergraduate days, just how fragile much of our technological base was then. It is far more fragile today, and yet we appear to be enamoured of our [current technological prowess], which is actually far more fragile than it was 40 years ago.