Well, in the first years of AI, no, it wasn't: nobody was using the models yet.
But at some point, if you want to make money, you have to provide a service to users, ideally hundreds of millions of them.
So you can think of training as your CI+TEST_ENV cost and inference as the cost of running your PROD deployments.
Generally, in traditional IT infra, PROD >> CI+TEST_ENV (on the order of 10-100 to 1).
The ratio might be quite different for LLMs, but any SUCCESSFUL model will still see inference costs exceed training costs at some point in time.
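To make the crossover point concrete, here's a back-of-envelope sketch. Every number below is a made-up assumption for illustration (training cost, per-query cost, query volume), not data about any real model:

```python
# Hypothetical numbers, purely illustrative: when does cumulative
# inference spend overtake a one-time training run?
TRAINING_COST = 100e6      # one-time training run, $100M (assumed)
COST_PER_QUERY = 0.002     # $ per inference query (assumed)
QUERIES_PER_DAY = 200e6    # hundreds of millions of users (assumed)

daily_inference_cost = COST_PER_QUERY * QUERIES_PER_DAY
days_to_parity = TRAINING_COST / daily_inference_cost

print(f"Daily inference spend: ${daily_inference_cost:,.0f}")
print(f"Inference matches training spend after {days_to_parity:.0f} days")
```

With these assumptions, inference outspends training in well under a year, and everything after that point is dominated by serving costs, which is the PROD >> CI+TEST_ENV dynamic above.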