If Anthropic actually cared about humans, they would have the best customer support (staffed by humans, for humans) and communications team (again, staffed by humans, for humans).
Since both of these are merely on par with Silicon Valley standards (somewhere between mediocre and atrociously bad), Anthropic cannot and should not be trusted with anything to do with AI, because whatever they do will not benefit humanity.
I know Anthropic support is slow from firsthand experience, but it has to be pretty difficult to scale support 10-80x per year. Even more so when you have a long tail of very low-revenue usage in the form of $20/month subscriptions.
I don't get it. None of the hyperscalers have human support teams at scale, because it's obviously infeasible. Why, just because it would be nice, do we take leave of the requirement that something actually be possible before demanding it?
Are you picturing them running a lottery for who’s allowed to use it, or an auction?
And with the loss of scale economies, it would have to be much more expensive.
So you end up charging, what, $10,000/month and only making it available to the very wealthy?
I don't see how this game plan is better for humans, and I'm honestly not being snarky. Have you thought through how your proposed limits would work? Am I missing something?
Very humanitarian
I can imagine scaling may be difficult, but that should be a temporary problem.
As a side note, where does that billion-user number come from? Claude has 10 million users.