On your first line -- is it clear that's a good thing? Massive "it depends".
Sadly, enterprise FizzBuzz style is wildly more successful than Ghostty style.
Put another way, a gem of code versus the mass of mess. It's amazing new models aren't worse. And now most of that human interaction is with vibe coders.
LLMs trained by the crowd risk being medianizers, or rather, mediocritizers.
One need look no further than "Absolutely!" to see this in play -- user selection shapes the corpus, and the corpus shapes the model. Suddenly content everywhere is "Little houses, all alike."
On your second line -- I couldn't agree more strongly.
ANTHROP\C has been sitting inside high-performance white-collar industries with top builders; that signal is priceless compared to feedback farms in Kenya.
Bet on models that see spiky, pointy mastery at play.