> we know they are not the only ones out there to use and it is better than none.
Oh, my apologies. It's not fair to compare shoddy upstarts like OpenAI and X to DeepSeek or Qwen or GLM or Kimi or MiniMax for not releasing their best model weights as soon as they're out. I'm just not appreciating everything OSS-120b and Grok 2.5 have to offer the homelabbing community.
> Google isn't hostile to local inference
I didn't include Google among them. "all three" meant X, OpenAI and Anthropic.
> You literally get nothing for defending Anthropic
My quote wasn't "Anthropic at least doesn't lie," full stop, was it? They don't lie about being a frontier local-model provider.
OpenAI hasn't released a genuinely boundary-pushing model since GPT-2. That's a contemptible record for a trillion-dollar research organization, say what you will about Anthropic's corporate policy.