There's no sense in which symbolic AI is at the end of its life. If you pay close attention, you'll see that LLM research is trying to do all the things symbolic AI is good at, the major examples being reasoning and planning over world models.
And as nextos says in the sibling comment, most of the recent successes of LLMs on tasks that go beyond language generation, e.g. solving math olympiad problems, are the result of combining LLMs with symbolic verifiers.
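To make that pattern concrete, here's a minimal sketch of the generate-and-verify loop, with SymPy standing in for the symbolic side. The propose_candidates stub is hypothetical, a placeholder for whatever LLM you'd actually sample from; systems like AlphaProof use a proof assistant (Lean) as the verifier instead.

    import sympy as sp

    x = sp.Symbol("x")
    equation = sp.Eq(x**2 - 5*x + 6, 0)  # the "problem statement"

    def propose_candidates():
        # Stand-in for sampling an LLM; in a real system these strings
        # would come from the model's output. (Hypothetical stub.)
        return ["1", "2", "3", "x"]

    def verify(candidate: str) -> bool:
        # The symbolic side: parse the claim and check it exactly,
        # rather than trusting the generator.
        try:
            value = sp.sympify(candidate)
            return sp.simplify(equation.lhs.subs(x, value)) == 0
        except (sp.SympifyError, TypeError):
            return False

    accepted = [c for c in propose_candidates() if verify(c)]
    print(accepted)  # -> ['2', '3']

The point is that acceptance comes from the verifier's exact check, not from the generator's confidence, which is exactly the symbolic contribution to those hybrid systems.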
>> While ML is cracking open entirely new fields - and might go all the way to AGI, the way it's going now.
I don't agree. Everything that neural nets do today (speech recognition, object identification in images, machine translation, language generation, program synthesis, game playing, protein folding, research automation; every single thing, really) is a task that comes from the depths of AI history.

There's a big discussion to be had about why those tasks are "AI" tasks in the first place, and what they have to do with "intelligence" in the broader sense (cats are intelligent, for example, but they can't generate any sort of text), but that discussion is constantly postponed as we all breathlessly run up the hill that neural nets are climbing. When we get to the top and find it was the wrong hill to climb, maybe we'll have that discussion at last; or maybe the entire industry, academia in tow, will run after the Next Big Thing in AI™ all over again. But cracking open new fields? Nah. Not really.
AGI is not going to happen any time soon, though. We have no idea what we're doing when it comes to reproducing intelligence; that much is clear.