In that scenario the case for the rented-hardware model is even weaker - if you're already going to have a gaming rig, you're only paying a bit extra on top for a GPU with more RAM, not the full cost of the rig.
The comparison then is the extra cost of using a 24GB GPU over a standard gaming rig GPU (12GB? 8GB?) versus the cost of renting the GPU whenever you need it.
I could either spend $20 a month for my Cursor license.
Or
Spend $2k+ upfront to build a machine to run models locally, plus pay the electricity cost and the time to set up both the machine and the software.
You said this was in the context of a gaming rig. You're not spending an extra $2k on your gaming rig to run models locally.
If you're building a dedicated LLM machine OR you're using less compute than you are paying the provider for, then, yup - $20/m is cheaper.
When you start using the model more, or if you're already building a gaming rig, then it's going to be cheaper to self-host.
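A rough way to see where that crossover sits is to compare the incremental cost of the bigger GPU (plus its electricity) against the hourly rental rate. The sketch below is purely illustrative - the GPU premium, rental rate, power draw, and electricity price are all assumptions I'm plugging in, not figures from this thread:

```python
# Illustrative break-even sketch: extra cost of a 24GB GPU over a
# standard gaming-rig GPU, versus renting a 24GB GPU on demand.
# All numbers below are assumptions, not real quotes.

def breakeven_hours(extra_gpu_cost: float,
                    rental_rate_per_hour: float,
                    power_kw: float = 0.35,
                    electricity_per_kwh: float = 0.15) -> float:
    """Hours of use at which buying the bigger GPU beats renting.

    Solves for hours in:
        extra_gpu_cost + hours * power_kw * electricity_per_kwh
            == hours * rental_rate_per_hour
    """
    net_saving_per_hour = rental_rate_per_hour - power_kw * electricity_per_kwh
    return extra_gpu_cost / net_saving_per_hour

# e.g. ~$600 premium for the bigger card vs. a ~$0.50/hr rented GPU:
hours = breakeven_hours(extra_gpu_cost=600, rental_rate_per_hour=0.50)
print(f"break-even after ~{hours:.0f} hours of use")
```

Under those made-up numbers the crossover lands somewhere around 1,300-1,400 hours - which is exactly the point being argued: light users never get there, heavy users or people building the rig anyway get there quickly.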
So again, the economics don’t really make sense except in specific edge cases or for folks who don’t want to pay vendors. Also, please don’t use italics - I don’t know why, but every time I see them used it’s always a silly comment.