It is impressive and very unintuitive just how far that can get you, but it's not reductive to use that label. That's what it is on a fundamental level, and aligning your usage with that reality makes it more effective.
But these endless claims that, because they're "just" predicting tokens, something follows about their computational power rest on flawed assumptions.
Given that they are Turing complete when you put a loop around them, that claim is objectively false.
The last time I had this discussion with people, I pointed out how LLMs consistently and completely fail at applying grammar production rules (obviously you have them apply the rules to words rather than single letters, so you aren't fighting the tokenization).
LLMs do some amazing stuff but at the end of the day:
1) They're just language models. While many things can be described with languages, there are some things that idea doesn't capture: namely, languages that aren't modeled, which is the whole point of a Turing machine.
2) They're not human, and the value is always going to come from human socialization.
They absolutely are. It's trivial to test and verify: you can tell one to act as a suitably small Turing machine and give it instructions for manipulating the conversation as "the tape".
Anything else would be absolutely astounding given how simple it is to implement a minimal 2-state 3-symbol Turing machine.
> Assuming the model can even follow your instructions the output is probabilistic so in the limit you can guarantee failure.
The output is deterministic if you set the temperature to zero, at which point it is absolutely trivial to verify the correct output for each of the possible states of a minimal Turing machine.
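To make "simple to implement" concrete, here's a minimal Turing-machine simulator you could check an LLM's trace against. The transition table is an illustrative 2-state / 3-symbol example (not any specific published machine), and every step is a single table lookup, exactly the kind of rule you can instruct a model to apply with the conversation as "the tape":

```python
from collections import defaultdict

# Illustrative 2-state / 3-symbol transition table:
# (state, symbol read) -> (symbol to write, head move, next state)
DELTA = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (2, +1, "B"),
    ("A", 2): (0, +1, "B"),
    ("B", 0): (2, -1, "A"),
    ("B", 1): (0, -1, "A"),
    ("B", 2): (1, -1, "A"),
}

def run(steps, state="A"):
    tape = defaultdict(int)  # blank symbol is 0; tape unbounded both ways
    head = 0
    trace = []
    for _ in range(steps):
        write, move, state = DELTA[(state, tape[head])]
        tape[head] = write
        head += move
        trace.append((state, head, dict(tape)))
    return trace

# Print each configuration; with a deterministic LLM (temperature zero)
# you can compare its output against this step by step.
for state, head, tape in run(6):
    print(state, head, tape)
```

With this particular table the machine happens to return to an all-blank tape after six steps, which makes verifying each intermediate configuration especially easy.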
So the argument goes: LLMs were trained to predict the next token, and the most general solution to do this successfully is by encoding real understanding of the semantics.
It's even crazier that some people believe that humans "evolved" intelligence just by nature selecting the genes which were best at propagating.
Clearly, human intelligence is the product of a higher being designing it.
/s
There's a branch of AI research I was briefly working in 15 years ago, based on that premise: Genetic algorithms/programming.
So I'd argue humans were (and are continuously being) designed, in a way.
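The premise is easy to demonstrate in miniature. Here's a toy genetic algorithm that "designs" a 20-bit genome using nothing but random mutation plus selection pressure; the names and parameters are illustrative, not taken from any particular piece of that research:

```python
import random

random.seed(0)  # deterministic run for reproducibility

TARGET = [1] * 20  # arbitrary goal the environment "selects" for

def fitness(genome):
    # Number of positions that already match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit independently with probability `rate`.
    return [1 - g if random.random() < rate else g for g in genome]

# Random starting population: 30 random 20-bit genomes.
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
initial_best = max(map(fitness, population))

for _ in range(200):
    # Selection: the fitter half survives; mutated copies refill the rest.
    population.sort(key=fitness, reverse=True)
    survivors = population[: len(population) // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

final_best = fitness(max(population, key=fitness))
print(initial_best, "->", final_best)
```

Nobody specifies a design anywhere in this loop, yet the population reliably climbs toward the target; because the best genome always survives unchanged, fitness can only improve over generations.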
Sure, I would agree with that wording.
In the same way, neural networks which are trained to do a task could be said to be "designed" to do something.
In my view, there's a big difference in what the training data is for a neural network, and what the neural network is "designed" for.
We can train a network using word completion examples, with the intent of designing it for intelligence.
I could also argue that the word "design" has a connotation strictly opposing emergent behaviour like evolution, as in the intelligent design "theory". So not the best word to use perhaps.
And in your example: just because we've made a system that exhibits some degree of emergent behaviour, we can't assume it can "design" intelligence the way evolution did, and on a much, much shorter timeline, no less.