AI model providers offer a range of consumption models aimed at different types of use. I work at a large company, and we're experimenting with different ways of offering LLMs to users based on varying compliance and business needs. People using all-you-can-eat products like NotebookLM, Gemini, and ChatGPT use them much more on average and for a wider variety of tasks, and there is a significant gap between low, normal, and high users.
People using an interface to a metered API, which offers a more narrowly defined LLM experience, consume fewer resources and perform more tightly scoped tasks.
The cost works out to be similar, and satisfaction is about the same in both groups.