Better HN
0 points · fulafel · 1y ago · 0 comments
No.
belter · 1y ago
Is this paper wrong? — https://arxiv.org/abs/2311.09807
simonw · 1y ago
It shows that if you deliberately train LLMs on their own output in a loop you get problems. That's not what synthetic data training does.
belter · 1y ago
I understand and appreciate your clarification. However, couldn't some synthetic data strategies, if misapplied, come to resemble that feedback-loop scenario and thus risk model collapse?
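The degenerate loop discussed above can be sketched with a toy simulation (an illustration added here, not from the thread or the paper): fit a simple model, a Gaussian, to a sample, draw the next generation's "training data" entirely from the fitted model, and repeat. Because no fresh real data ever enters the loop, the fitted spread performs a multiplicative random walk with slight negative drift and collapses toward zero over many generations.

```python
import random
import statistics

def recursive_fit_demo(generations=2000, n=100, seed=0):
    """Toy model collapse: each generation's data is sampled from a
    Gaussian fitted to the previous generation's samples. The fitted
    spread shrinks toward zero because no new real data enters the loop."""
    rng = random.Random(seed)
    data = [rng.gauss(0.0, 1.0) for _ in range(n)]  # generation 0: "real" data
    spread = [statistics.pstdev(data)]
    for _ in range(generations):
        mu = statistics.fmean(data)      # "train" the model (max-likelihood fit)
        sigma = statistics.pstdev(data)
        # "generate" the next training set purely from the model's own output
        data = [rng.gauss(mu, sigma) for _ in range(n)]
        spread.append(statistics.pstdev(data))
    return spread

spread = recursive_fit_demo()
print(spread[0], spread[-1])  # the spread collapses across generations
```

This only demonstrates the closed-loop failure mode; pipelines that mix generated data with fresh real data, or filter and curate the generated samples, are a different setup from this pathological case.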