FWIW I have two(!!) close friends working for Anthropic, one for nearly two years and one for about 4 months.
Both of them tell me that this is not just marketing, that the company actually is ethical and safety-conscious everywhere, and that this was the most surprising part about joining Anthropic for them. They insist the culture is actually genuine, which is practically unicorn-rare in corporate America.
We've all worked at FAANG companies, so I know where they're coming from; this got me to drop my cynicism for once, and I plan on interviewing with them soon. Hopefully I can answer this question for myself.
From the outside, I find Anthropic's hyperbolic marketing to be an indication that they are basically the same as every other Bay Area tech startup - more or less nice folks who are primarily concerned with money and status. That's not a condemnation, but I reject all the "do no evil" fanfare as conveniently self-serving.
If the leadership doesn't bend, it might get replaced. It's annoying. I think Claude is, at the moment, the best AI assistant by far.
This isn't remotely true in my experience. The senior folks I know at Meta, for example, pretty much concede they're ersatz drug dealers.
Google was "don't be evil" until they had to choose between that and making the money. The culture has to be not only professed but tested.
Certainly most of us know we are just in it for the money, and the soul-grinding profit machine will continue to grind souls for profit regardless of what we want.
So that's why it is surprising to me when my (fairly senior) grizzled ex-FAANG friends, who share the same view, start waxing poetic about Anthropic being different and genuine. I think "maybe it is" and decide to interview. IDK, I guess some part of me wants to believe that nice things can exist.
It only showed they were marginally more ethical than OpenAI and xAI, which isn't saying much.
It's unfair to sweep provision of methods to the military under a "respect the service" catch-all justification.
Two things can simultaneously be true: (1) individuals serving in the military are making sacrifices (in terms of pay, family life, personal safety) that deserve respect and (2) the military as a political institution will amorally deploy whatever capabilities it has access to, to achieve political aims.
There's a reason the US stopped offensive chemical and biological warfare research and tactical nuclear device production -- effective capabilities will be used if they exist.
[1] "Unless Its Governance Changes, Anthropic Is Untrustworthy" https://anthropic.ml/
So much so that I worry they won't be Machiavellian enough to survive. Hope I am wrong.
Anthropic is emphatically not safe. None of the AI labs with customers (i.e., excluding a few small nonprofits whose revenue comes from donations) are anything like safe -- because of extinction risk. The famous positive regard that Anthropic employees have for their organization's mission means almost nothing because there have been hundreds of quite destructive cults and political parties whose members believed that theirs is the most ethical and benign organization ever.
The best thing you can say about Anthropic is that if you have to support some AI lab by becoming a customer, investor, or employee, it is slightly less dangerous for the world to support Anthropic than OpenAI, although IMHO (and I admit I am in a minority on this among extinction-risk activists) it is slightly less dangerous to support Google DeepMind or Mistral than Anthropic.
All four organizations I mentioned should be shut down tomorrow with their assets returned to shareholders.
The current crop of services provided by the leading AI labs are IMHO positive on net in their effect on people and society, but the leading AI labs are spending a large fraction of the hundreds of billions of dollars they've received from investors on creating more powerful models, and they might succeed in their goal of creating models that are much more powerful than the ones they have now, which is when most of the danger would manifest.
The leaders of all of the leading AI labs have the ambition of completely transforming society and the world through AI.
I wonder what Anthropic tries to achieve by spreading such blatant lies with their bot accounts. I'm definitely not buying anything from a company so morally corrupt to smear the competition while claiming to be somehow "ethical". And I'm not talking just about this thread, it's a recurring pattern on Reddit.