Any considerable bump in model capability craters my willingness to tolerate the ineptitude of less capable models. And I'm far from being alone in this.
Ever wondered why those stupid "they secretly nerfed the model!" myths persist? Why users report that the model "got dumber" even when benchmarks stay consistent — even when you're on the inference side yourself and know with certainty that they're being served the same inference over the exact same weights, on the same hardware, quantized the same way?
Because user demands rise over time, always.
Users get a new flashy model, and it impresses them. It can do things the old model couldn't. Then they push it, and learn its limitations and quirks as they use it. And then it feels like it "got dumber" - because they got more aggressive about using it, and got better at spotting all the ways it was always dumb.
It's a treadmill: you pretty much have to keep improving the models just to stay ahead of user expectations.