What I’m saying is that a smaller model outperformed on my prompt, which I imagine is complex enough to cover most consumers’ needs. Not to mention, with the right setup, it’d be easy to forward only the prompts that actually need the foundation models through OpenRouter and get charged API token prices per use.
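
For what it’s worth, a minimal sketch of that kind of routing, assuming a simple prompt-length heuristic as the stand-in for “needs a foundation model” (the model slugs, local endpoint, and threshold are all placeholder assumptions; OpenRouter does expose an OpenAI-compatible chat-completions endpoint):

```python
import os

# OpenRouter's OpenAI-compatible chat completions endpoint.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def pick_route(prompt: str, threshold: int = 500) -> dict:
    """Decide whether a prompt goes to a foundation model via OpenRouter
    or stays on a local model server. The length check is a toy heuristic;
    swap in whatever complexity signal you actually trust."""
    if len(prompt) > threshold:
        # Pay per-token API prices only when the big model is needed.
        return {
            "url": OPENROUTER_URL,
            "model": "openai/gpt-4o",  # example slug, not a recommendation
            "headers": {
                "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}"
            },
        }
    # Everything else hits a local OpenAI-compatible server (URL is illustrative).
    return {
        "url": "http://localhost:11434/v1/chat/completions",
        "model": "local-model",
    }
```

Since OpenRouter speaks the same request format as a local OpenAI-compatible server, the rest of the pipeline doesn’t have to care which route was picked.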