Some people in the "symbolic AI" camp didn't take the loss well, so they pivoted to "ML is not real AI and it needs a symbolic component to be real AI", which is the neurosymbolic garbage.
This work isn't exactly that, and I do think it can amount to something useful, but the justification for it reeks of something similar.
I think you're conflating several different things under "neurosymbolic AI". There is a NeSy symposium, and I happen to have met many of the people there; they are not GOFAI ideologues. Rather, they recognise the obvious limitations of neural nets (e.g. they're crap at deduction, though great at induction) and look for ways to address them. Most of that crowd also has a predominantly statistical ML / neural nets background, with symbolic AI as an afterthought.
I don't think I've ever heard anyone say that "ML is not real AI" and I mainly move in symbolic AI circles. I would check my sources, if I were you.
Anyway, honestly, this is 2026; there is no sensible reason to be polarised about symbolic vs. statistical AI (or whatever distinction anyone wants to make). An analogy I like to make: a jetliner is a flying machine, and a helicopter is a flying machine. Each has its advantages and disadvantages, but flying machines are far too useful to give up on either kind for ideological reasons. The practical benefits overwhelmingly outweigh any ideological concerns (e.g. "jets bad" or "propellers bad").
And just to be clear, symbolic AI is still in rude health: automated theorem proving, planning and scheduling, program verification and model checking, constraint satisfaction, discrete optimisation, SAT solving, all those are fields where symbolic approaches are dominant, and where neural nets have not made significant inroads in many decades; nor are they likely to, not any more than symbolic approaches are likely to make any inroads in e.g. machine vision, or speech recognition. And that's just fine: lots of tools, lots of problems solved.
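To make the SAT-solving point concrete, here is a toy DPLL-style solver, a minimal illustrative sketch only (real solvers like MiniSat add clause learning, heuristics, and much more); the clause encoding and function name are my own choices, not anything from the thread:

```python
def dpll(clauses, assignment=None):
    """Toy DPLL SAT solver. Clauses are lists of ints, DIMACS-style:
    a positive int means the variable is true, negative means false."""
    if assignment is None:
        assignment = {}
    # Simplify the clause set under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied, drop it
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None  # clause falsified: this branch fails
        simplified.append(rest)
    if not simplified:
        return assignment  # every clause satisfied
    # Unit propagation: a one-literal clause forces that literal.
    for clause in simplified:
        if len(clause) == 1:
            l = clause[0]
            return dpll(clauses, {**assignment, abs(l): l > 0})
    # Otherwise branch on the first unassigned variable.
    v = abs(simplified[0][0])
    return (dpll(clauses, {**assignment, v: True})
            or dpll(clauses, {**assignment, v: False}))

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
model = dpll([[1, 2], [-1, 3], [-2, -3]])
```

Purely deductive search like this, with no learning or statistics anywhere, is the kind of thing symbolic methods still do far better than neural nets.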
One is near the end of its potential while the other is only picking up steam.
In many ways, the space ML dominates now is the space of "all the things symbolic approaches suck ass at". Which is a very wide space with many desirable things in it.
There's no sense in which symbolic AI is at the end of its life, and if you pay close attention you'll see that LLMs are trying to do all the things that symbolic AI is good at: major examples being reasoning and planning from world models.
And as nextos says in the sibling comment, most of the recent successes of LLMs on tasks that go beyond language generation, e.g. solving Math Olympiad problems, are the result of combining LLMs with symbolic verifiers.
>> While ML is cracking open entirely new fields - and might go all the way to AGI, the way it's going now.
I don't agree. Everything that neural nets do today (speech recognition, object identification in images, machine translation, language generation, program synthesis, game playing, protein folding, research automation, every single thing really) is a task that comes from the depths of AI history. There's a big discussion to be had about why those tasks are "AI" tasks in the first place and what they have to do with "intelligence" in the broader sense (e.g. cats are intelligent but they can't generate any sort of text), but this discussion is constantly postponed as we all breathlessly run up the hill that neural nets are climbing. When we get to the top and find it was the wrong hill to climb, maybe we'll have that discussion at last; or maybe the entire industry, academia in tow, will run after the Next Big Thing in AI™ all over again. But cracking open new fields? Nah. Not really.
AGI is not going to happen any time soon though. We have no idea what we're doing in terms of reproducing intelligence, that much is clear.
Many of the much-celebrated successes of LLMs rely on these symbolic components.