Two things that stand out:
- The knowledge incorporation results (47% vs 46.3% with GPT-4.1 data, both much higher than the small-model baseline) show the model does discover better training formats, not just more data. Though the catastrophic forgetting problem remains unsolved, and it's not completely clear whether data diversity is improved.
- The computational overhead is brutal - 30-45 seconds per reward evaluation makes this impractical for most use cases. But for high-value document processing where you really need optimal retention, it could be worth it.
The restriction to tasks with explicit evaluation metrics is the main limitation. You need ground truth Q&A pairs or test cases to compute rewards. Still, for domains like technical documentation or educational content where you can generate evaluations, this could significantly improve how we process new information.
Feels like an important step toward models that can adapt their own learning strategies, even if we're not quite at the "continuously self-improving agent" stage yet.
"NEAT/HyperNEAT" (Neuroevolution of Augmented Topologies) [0]
I'm no ML practitioner, but as I understand it, the primary difference between NEAT and what is described in this paper is that while NEAT evolves the topology of the network, this paper seems to evolve the weights.
Seems like two approaches trying to solve the same problem -- one evolving the network structure, and the other the weights.
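To make the weight-evolution side concrete, here's a toy sketch (not the paper's actual method; the network, fitness function, and hyperparameters are all made up): a (1+λ) evolutionary loop that mutates the weights of a fixed-topology network and keeps the best candidate, with no gradient step anywhere.

```python
import random

random.seed(0)

def forward(weights, x):
    """Fixed topology: a single linear neuron, y = w0*x + w1."""
    return weights[0] * x + weights[1]

def fitness(weights):
    """Negative squared error against a toy target function y = 3x + 1."""
    data = [(x, 3 * x + 1) for x in range(-5, 6)]
    return -sum((forward(weights, x) - y) ** 2 for x, y in data)

def evolve_weights(generations=200, lam=10, sigma=0.3):
    parent = [random.uniform(-1, 1), random.uniform(-1, 1)]
    for _ in range(generations):
        # Mutate weights only; the topology never changes (unlike NEAT,
        # which would also add/remove nodes and connections).
        children = [[w + random.gauss(0, sigma) for w in parent]
                    for _ in range(lam)]
        parent = max(children + [parent], key=fitness)
    return parent

w = evolve_weights()
print(w)  # should drift toward [3.0, 1.0]
```

NEAT would run a similar loop, but its mutation operator could also grow the network itself.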
Those 2 friends are quite possibly the most intelligent people I've ever met, and they were very convinced that RL and evolutionary algorithms were the path forward in ML.
[0] https://en.wikipedia.org/wiki/Neuroevolution_of_augmenting_t...
SethBling's MarI/O - Machine Learning for Video Games
Finding a good scoring algorithm is hard, as it's so easy for a GA to cheat...
Source: experience
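A made-up example of the kind of cheat meant here: suppose you score a genome by how many adjacent pairs are in non-decreasing order, hoping the GA evolves nicely sorted, varied sequences. A genome of all-identical values also gets a perfect score, and the GA will happily exploit that loophole instead of doing what you intended (using `a < b` would close it).

```python
import random

random.seed(1)

def fitness(genome):
    """Intended to reward 'sorted, ascending' genomes -- but because the
    comparison is <=, a constant genome like [5,5,5,...] also maxes it."""
    return sum(1 for a, b in zip(genome, genome[1:]) if a <= b)

def mutate(genome):
    g = list(genome)
    g[random.randrange(len(g))] = random.randrange(10)
    return g

def evolve(length=8, pop_size=30, generations=300):
    pop = [[random.randrange(10) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))  # near-max score, often via repeated values
```

The scoring function got exactly what it asked for, just not what its author wanted.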
"when assessed by Claude 3.5 Sonnet’s production-grade RM, our unsupervised assistant policy wins 60% of head-to-head comparisons against the policy trained with the human-supervised RM." So now the models can even post-train the new models better than a human can
Unsupervised Elicitation of Language Models - https://news.ycombinator.com/item?id=44276041
I'm sure this is something the big labs are trying, but from the outside, as a user of LLMs, it feels like people don't talk about this very much. Instead, the focus right now is on better training (e.g. reinforcement learning), with the assumption that anything not learned during training will be stuffed into the context somehow as needed. But from a naive perspective, the lack of learning from experience after training seems like the biggest thing standing between us and AGI.
Many people here are right: compute, collapse, forgetting, whatever.
The only "real" way to do this would be:
1. Train a model
2. Get new data
3. Retrain the model in full on the old data + new data
4. Repeat
5. Even then, you still have no guarantee on the "time" aspect
But CL as a field basically has zero answers on how to do this in a true sense. It's crazy hard because the "solutions" are hypocritical in many ways.
We need to expand the model's representation space while keeping the previous representation space nearly the same?
Basically, you need to modify it without changing it.
Most annoying is that even the smallest of natural brains do this easily. I have a long-winded theory, but basically it boils down to this: AI likely needs to "sleep" or rest somehow.
LoRA paper: https://arxiv.org/abs/2106.09685
1. Preventing collapse -> the model gets "full": https://arxiv.org/pdf/1612.00796
2. Forgetting causes better generalization: https://arxiv.org/abs/2307.01163
3. Unknown paper that connects the two -- allow a "forgetting" model that improves generalization over time. I tried for a long time to build this, but it's a bit difficult.
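The idea in that first link (EWC, elastic weight consolidation) can be sketched numerically -- these scalar parameters and Fisher values are invented for illustration, not taken from the paper: the new task's loss gets a quadratic penalty anchoring each parameter to its old value, weighted by how important that parameter was for the old task.

```python
# EWC-style penalty, scalar toy sketch.
# theta_old: parameters after learning task A
# fisher:    per-parameter importance (diagonal Fisher information)
theta_old = [1.0, -2.0, 0.5]
fisher    = [9.0,  0.1, 1.0]   # param 0 matters a lot for task A, param 1 barely
lam = 1.0                      # how strongly to protect old knowledge

def ewc_penalty(theta):
    return 0.5 * lam * sum(f * (t - t0) ** 2
                           for f, t, t0 in zip(fisher, theta, theta_old))

def total_loss(theta, task_b_loss):
    # "Modify without changing": task B's loss pulls theta somewhere new,
    # while the penalty keeps task-A-critical parameters near their old values.
    return task_b_loss(theta) + ewc_penalty(theta)

# Moving the unimportant parameter by 2.0 is cheap; moving the important one is not.
print(ewc_penalty([1.0, 0.0, 0.5]))   # -> 0.2
print(ewc_penalty([3.0, -2.0, 0.5]))  # -> 18.0
```

Which is also why the model eventually gets "full": as more parameters become important to protect, there's less room left to move.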
Fun implication is that if true this implies AGI will need "breaks" and likely need to consume non task content of high variety much like a person does.
I completely agree that figuring out a safe way to continually train feels like the biggest blocker to AGI
Hypothetically (and perhaps more plausibly), a continually learning model that adapts to the context of a particular org / company / codebase / etc., could even be desirable.
There are tons of benchmarks around this you can easily run with 1 gpu.
It's a compute problem only in the sense that the only known way to do it is to retrain the model from scratch at every step.
If you solve CL with a CNN you just created AGI.
I'd love to see a full circle of hypernetworks, with both models continuously updated through generated LoRAs, the hypernetwork updated to accommodate the new model state. You'd need a meta-hypernetwork to apply LoRAs to the hypernetwork, and then you could effectively have continuous learning.
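For reference, the patch such a hypernetwork would be generating is just LoRA's low-rank additive update, W_eff = W + (alpha/r) * B A, with B of shape d x r and A of shape r x k for small rank r. A minimal pure-Python sketch with toy sizes and made-up values:

```python
def matmul(X, Y):
    """Plain nested-list matrix multiply."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def apply_lora(W, A, B, alpha):
    """Return W + (alpha / r) * B @ A, leaving the frozen W untouched."""
    r = len(A)              # rank = number of rows of A
    scale = alpha / r
    delta = matmul(B, A)    # (d x r) @ (r x k) -> (d x k)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Frozen 2x2 base weights, rank-1 adapter.
W = [[1.0, 0.0],
     [0.0, 1.0]]
B = [[1.0],           # d x r = 2 x 1
     [2.0]]
A = [[0.5, 0.5]]      # r x k = 1 x 2
print(apply_lora(W, A, B, alpha=1.0))
# -> [[1.5, 0.5], [1.0, 2.0]]
```

The "meta" step in the comment above would mean generating B and A for the hypernetwork's own weight matrices too.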
The learning and inference process are entirely separate, which is very confusing to people familiar with traditional notions of human intelligence. For humans, learning things and applying that knowledge in the real world is one integrated feedback process. Not so with LLMs, we train them, deploy them, and discard them for a new model that has "learned" slightly more. For an LLM, inference is the end of learning.
Probably the biggest misconception out there about AI. If you think LLMs are learning, it's easy to fantasize that AGI is right around the corner.
In any case, this is a far cry from what I was discussing. At best, this shows an ability for LLMs to "learn" within the context window, which should already be somewhat obvious (that's what the attention mechanism does). There is no global knowledge base or weight updates. Not until the content gets published, rescraped, and trained into the next version. This does demonstrate a learning feedback loop, albeit one that takes months or years, driven by external forces - the company that trains it. But it's way too slow to be considered intelligent, and it can't learn on its own without help.
A system that truly learned, ie incorporated empirical data from its environment into its model of the world, would need to do this in millisecond time frames. Single celled organisms can do this. Where you at AGI?
"Forgetting correctly" is something most human brains are exceptionally good at, too. I wonder how that works...
The other thing is that the brain down-values and prunes paths we don't use and strengthens ones we do. This is why something you've not done in a while might need a refresher before you can do it right again.
This is often associated with learning tools like Anki and such, but the real world is all about encountering things at certain frequencies (day/night cycles, seasons, places you visit, people you see... everything, really).
I'm wondering if there may be some sort of inverse to spaced repetition (SR), maybe?
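One toy way to picture that inverse (purely my own framing, not any established model): each memory trace decays exponentially, and every encounter adds a fresh boost, so things met at a high enough frequency stay strong while everything else fades on its own.

```python
import math

def trace_strength(encounter_times, now, boost=1.0, half_life=7.0):
    """Sum of exponentially decaying boosts from past encounters (times in days)."""
    decay = math.log(2) / half_life
    return sum(boost * math.exp(-decay * (now - t))
               for t in encounter_times if t <= now)

# Something seen daily vs. something seen once, evaluated a month later.
daily   = trace_strength(list(range(0, 30)), now=30)
monthly = trace_strength([0], now=30)
print(daily, monthly)  # the frequently encountered trace is far stronger
```

Spaced repetition schedules reviews to fight the decay; the "inverse" would be letting the decay do its job on everything you stop encountering.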
They don't just "forget"; that information can come back at a later time if you continue to train.
So basically, any time a model is trained you need to check its entire memory, not just a small part.
2028 is pretty much tomorrow… fascinating insight