Obviously it's hard to measure this objectively, but I can't imagine having done this pre-AI with zero downtime and having replaced those SaaS applications in that timeframe.
(Not the exact same chart but similar idea, I guess it's sort of a meme: https://imgur.com/a/YrNGYOR)
So I looked at the most recent CC release notes on GitHub, and the majority look like this:
Fixed /clear not resetting the terminal tab title after a conversation
Fixed session title chip from /rename disappearing while a permission or other dialog is active
Fixed agent panel below the prompt being hidden when subagents are running (regression in 2.1.122)
Fixed external-editor handoff (Ctrl+G) blanking the conversation history above the prompt
Fixed /context dumping its rendered ASCII visualization grid into the conversation, wasting ~1.6k tokens per call
Fixed OAuth refresh race after wake-from-sleep that could log out all running sessions
Fixed 1-hour prompt cache TTL being silently downgraded to 5 minutes
Fixed cache-miss warning appearing spuriously after /clear or compaction when changing /effort or /model
I'd be extremely interested to know what percentage of these were just fixes for last week's Claude-Code-written PR that no human ever set eyes on.

But hey, all that churn looks great on the charts circulating on social media as free advertising for their flagship product (and, consequently, the company's valuation), so never mind, LGTM!