This seems like the logical fallacy of "begging the question" since it is far from apparent to me that they are "more capable than us at the majority of tasks."
There's a lot of stuff we consider to be "common sense". Sometimes those things are used to criticise AI, and sometimes they're used to criticise other humans for not knowing them, but it's a category we don't even think about until we notice its absence.
As for the things not considered common sense: playing chess (beats all humans), speaking and reading foreign languages (more languages than I can name, each to a higher standard than my second language), creating art (even if it regularly makes the common-sense mistake of getting the number of fingers and limbs wrong, it's still better, not just faster, than most humans), arithmetic (a Raspberry Pi Zero can do it faster than all humans combined), symbolic maths, flying planes…
A dev conference I was at recently had someone demonstrate how they hooked up their WhatsApp voice calls to speech recognition, speech synthesis trained on their own voice, and an LLM. The criticism from the people who got the AI replies was not "you're using an AI" (he had to actively demonstrate his use of AI to conversation partners who didn't believe him) but "you can't have listened to my message, you replied too quickly to have even played it all back."
We make machines that are stronger, faster, and have much finer motor control than we do, but only as individual abilities. No machine we have created has the general dexterity that we have.
Every computational system can be analysed in fine detail to determine the limits that we have built into them. It may take an enormous amount of time and effort to do so, but we can do it. No computational system that we have built is able to exceed the limited programming we place in it.
There is an enormous amount of hype around the current generation (and future generations) of these systems, but all of them are limited to the abilities that we have programmed into them. They are in all essentials completely stupid (in the worst possible way: non-sentient, non-intelligent).
Every logic error that we have made in building these systems is hidden in that code. One day, those errors will come back and bite us, but there is nothing intelligent or sentient in these systems. It is our errors, for which we are responsible, that will cause those problems.
We can use them as adjuncts to our sentience and intelligence - but all they are are tools, never anything more.
However, if we cede control to these systems, we are ceding control to something that is no better than fire (a good servant, a horrendous master). Over forty years, I have seen hype far too often convince humans to cede control to the systems other humans have made, and the result has been various levels of chaos.
If anything, what we need to be careful of is how humans use these systems against other humans. This is the perennial problem that we face as we build new technology.
> However, we can enumerate all the things that we create can do.
Not really, no. Even before AI, "Turing complete" makes things extremely hard to enumerate; see Busy Beaver numbers for how small a system can be and still be beyond our ability to fully comprehend — needing up-arrow notation because exponentials aren't big enough is always good for a laugh.
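To make that concrete, here's a toy sketch (my own illustration, not from the comment): the Busy Beaver question for just 2 states and 2 symbols can still be settled by brute force, enumerating all ~20,000 machines and running each one to find the longest-running halter, S(2) = 6 steps. The 5-state case took decades of collective effort, and reported 6-state lower bounds already need up-arrow notation.

```python
from itertools import product

# Brute-force S(2): the most steps a halting 2-state, 2-symbol
# Turing machine can take before entering the halt state H.
def run(tm, limit=100):
    """tm maps (state, symbol) -> (write, move, next_state)."""
    tape, pos, state, steps = {}, 0, "A", 0
    while state != "H" and steps < limit:
        write, move, state = tm[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        steps += 1
    return steps if state == "H" else None  # None = didn't halt in time

# Each of the 4 transition slots: write 0/1, move left/right, go to A, B, or H.
entries = list(product((0, 1), (-1, 1), ("A", "B", "H")))
slots = [("A", 0), ("A", 1), ("B", 0), ("B", 1)]

best = 0
for choice in product(entries, repeat=4):  # 12^4 = 20736 machines
    steps = run(dict(zip(slots, choice)))
    if steps is not None:
        best = max(best, steps)

print(best)  # prints 6, the known value of S(2)
```

The cutoff of 100 steps is safe only because S(2) = 6 is already known; for larger state counts there is no computable cutoff at all, which is exactly the enumeration problem the comment points at.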
The answer is that we have programmed these systems to do what we require. They cannot exceed that programming, but they fail because of errors that we have placed in these systems.
All of the tasks that you have mentioned have been programmed that way. It has taken human ingenuity to work out how to do this programming. The end result is a machine (non-sentient, non-intelligent) that is doing what we require.
If you look at game playing, a system was created to play Go and won, yet that same system fails to win against humans under many circumstances. The literature is there, just not publicised for all the world to see; a result of keeping the hype in play.
If you look at speech recognition, these systems still fail when we humans work against them, and yet we humans can still recognise what the machines fail at.
Just keep in mind that a tractor can move a greater amount of material than a human can, but it is still only a tool. A plane can travel faster and fly higher than a human can, but it is still only a tool.
We use these systems to augment our abilities and yet they are all limited in so many ways that we are not.
The upshot is that we can do amazing things with the things we create, but none of those things exist without us and all those things fail without us.
The successful Go AIs were programmed to learn; we still can't program a decent Go AI using rules humans come up with.
> The literature is there
Do you have a link? Two Minute Papers just had a video about an AI systematically finding ways to confound other AIs, but I thought we'd passed the point where the best Go AI could be so manipulated by humans…