It's as if what I wrote implies "all other things being equal", just like any technical claim.
All other things were not equal: the architectures were tweaked, the human data set is still not exhausted, and more money and energy were thrown at their performance, since it's a pre-IPO game with huge VC stakes.
We've already seen a plateau nonetheless, compared to the earlier release-over-release performance improvements. Even the claim of "without any backward movement every 3-4 months for 2 years now" is hardly defensible. Many saw a backward movement with GPT 4.1 vs 4.0, and similar issues with 4.5, for example. Even if those are isolated cases, they're hardly the 2 to 3.5 to 4.0 gains.
And no, there are absolutely no "rigorous methods of filtering and curation" that can separate the avalanche of AI slop from useful human output - at least not without shrinking the pool of usable training data. The problem, after all, is not just to tell AI from human with automated curation (that's already impossible); the problem is to have enough valuable new human output, which becomes nearly a losing game as all the "human" domains previously useful as training input (from code to papers) are tarnished by AI output.