This is going to be interesting.
At least for coding, there's little correlation between token spend and the quality (and impact) of the resulting AI suggestion.
This is fine when inference prices are capped (e.g. via a monthly subscription plan or self-hosting), but otherwise it quickly distorts the incentives between provider and user.
It still seems like OpenAI has no moat, and neither does anyone else: the only reasonable way to use the coding slot machines is going to be via open-source models on inference-optimized hardware.
Still better than the secret lobotomization they were doing on subscription-plan models, though.