I appreciate your sentiment but can't agree with it. What I mean is, if I had the resources not to have to work for 10 years, I'd give myself greater than a 50% chance of building an AGI. So I don't understand why the world is taking so long to do it.
The flip side is that these narrow use cases progressed so quickly that we have to worry about stuff like deep fakes now.
Something's not right here.
As a programmer, I feel that what went wrong is that we invested too much in profit-driven endeavors, basically stuff that's mainstream. To be blunt, the academic side of me doesn't care about use cases. I care about theory, formalism, abstraction, reproducibility: basically, the scientific method. From that perspective, all AI is equivalent: it takes input, searches a giant solution space using its learned context as clues, and returns the closest solution it can find in the time given. It's an executable piping data around. The rest is hand-waving.
And given that, the stuff that AI is doing now is orders of magnitude more complex than running a Roomba. But a robot vacuum actually helps people.
To answer your question, a k-NN classifier could solve this if the user reshapes the image data into a different coordinate system where the data can be partitioned (all inference ultimately comes down to partitioning):
https://en.wikipedia.org/wiki/Change_of_basis
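To make that concrete, here's a minimal sketch of the mechanic with made-up toy data: two classes that lie on concentric rings. In raw (x, y) coordinates there's no single axis that splits them, but after re-expressing every point in polar coordinates, the radius alone partitions the classes, and a plain k-NN handles it. The data, function names, and k=3 are all my own illustrative choices, not anything canonical:

```python
import math

def to_polar(p):
    # change of basis: (x, y) -> (radius, angle)
    x, y = p
    return (math.hypot(x, y), math.atan2(y, x))

def knn_predict(train, query, k=3):
    # plain k-nearest-neighbours vote by squared Euclidean distance
    ranked = sorted(train, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], query)))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

# Toy data: 12 points at radius 1 (class 0) interleaved with 12 at radius 3 (class 1).
train = []
for i in range(12):
    theta = 2 * math.pi * i / 12
    train.append(((math.cos(theta), math.sin(theta)), 0))
    train.append(((3 * math.cos(theta), 3 * math.sin(theta)), 1))

# Reshape every training point into the polar coordinate system, where the
# radius axis alone separates the two classes.
train_polar = [(to_polar(p), label) for p, label in train]

query = (0.5, -0.8)  # radius ~0.94, so it should land in class 0
print(knn_predict(train_polar, to_polar(query)))  # → 0
```

The k-NN itself is unchanged; all the work is in choosing the coordinate system so that the partition becomes a simple one.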
Tensors are about reshaping data into a coordinate system where relationships become obvious, like going from rectangular to polar coordinates, or using a Fourier transform:
https://en.wikipedia.org/wiki/Tensor
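The Fourier case is the cleanest example of what I mean by a coordinate change making relationships obvious. Here's a toy sketch (my own parameters: a 5-cycle sine over 64 samples, a hand-rolled DFT): sample-by-sample the signal looks like 64 unrelated numbers, but in the frequency basis the whole structure collapses into one dominant coefficient.

```python
import cmath
import math

# A 5-cycles-per-window sine, sampled 64 times. In the time basis, the
# structure is smeared across all 64 samples.
N = 64
signal = [math.sin(2 * math.pi * 5 * n / N) for n in range(N)]

def dft(x):
    # Discrete Fourier transform: re-express the samples in the frequency basis.
    n_samples = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / n_samples)
                for n in range(n_samples))
            for k in range(n_samples)]

spectrum = dft(signal)

# In the frequency basis, one coefficient dominates: the 5-cycle component.
peak = max(range(N // 2), key=lambda k: abs(spectrum[k]))
print(peak)  # → 5
```

Same data, no information added or removed, just a change of basis; the "relationship" (it's a single pure tone) only becomes trivially visible in the second coordinate system.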
My frustration with all of this is the same one I have with physics or any other evolving discipline. The lingo obfuscates the fundamental abstractions, creating artificial barriers to entry.
Edit: I should add a disclaimer here that my friend and I worked on a video game for like 11 years. I'm no expert in AI; I'm just acutely sensitive to how the realities of the workaday world waste immeasurable potential at scale.