As a developer who uses LLMs, I haven't seen any evidence that LLMs, or "AI" more broadly, are improving exponentially. Yet I see a lot of people holding a near-religious belief that this is happening or will happen because... actually, I don't know why. Because Moore's Law was a thing, maybe?
In my experience, LLMs aren't even improving linearly in practical use at this point: I personally see Claude 3.7 and 4.0 as regressions from 3.5. They may score better on artificial benchmarks, but I find them less likely to produce useful work.