https://www.sellerscommerce.com/blog/saas-statistics/
Salesforce has had your client list, the amount each deal is worth, the status of the deal, which of your employees are working on it, their bill rates, etc., for years.
Zoom/Gong/Microsoft Teams knows every conversation you have with a client if you turn transcriptions on.
Your email provider gets your company email in plain text.
Slack has all of your interoffice communications.
Atlassian gets exactly what you are working on, who's working on it, and the status of every task.
AWS/GCP/Azure know everything about your infrastructure.
BTW, Amazon is one of the most paranoid companies about confidentiality you can imagine (a former employer of mine). They use Microsoft Office and Slack (they were moving away from Chime before I left), and the internal consulting division uses Salesforce.
Why the moral panic about Anthropic? I doubt very seriously they are going to start - in my company's case - a cloud consulting division.
Any company that doesn’t have an enterprise contract with Anthropic and uses Claude Code is an idiot.
But if you really want to have that warm and fuzzy, you can always use Claude Code via an AWS account and Bedrock hosted Anthropic models. I assure you that AWS (former employer) is not using your data when you use Claude with Bedrock/Anthropic to train their models. Amazon may be evil. But they are not stupid.
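For reference, this is roughly what the Bedrock route looks like in practice. A minimal sketch, assuming you've already enabled access to the Anthropic models in your AWS account's Bedrock console; the profile name and region are placeholders:

```shell
# Sketch: point Claude Code at Bedrock instead of Anthropic's API.
# Assumes Bedrock access to the Anthropic models is enabled in this account.
export CLAUDE_CODE_USE_BEDROCK=1        # documented Claude Code switch for Bedrock
export AWS_REGION=us-east-1             # a region where the models are offered (placeholder)
export AWS_PROFILE=my-bedrock-profile   # placeholder; any standard AWS credential source works
claude                                  # runs as usual, billed through AWS
```

Traffic then stays inside your AWS account's data-handling terms rather than Anthropic's consumer terms.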
I've had a long history of managing my digital privacy, and even I've been quite lax with this. It's just so easy to dump stuff into the black box. I try to use zero-data-retention (ZDR) endpoints when I can via OpenRouter for certain tasks.
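Concretely, OpenRouter's provider-routing preferences include a documented `data_collection` field you can set per request. A minimal sketch of building such a request body (the model slug and message are placeholders, not a recommendation):

```python
import json

def zdr_chat_payload(model: str, user_message: str) -> dict:
    """Build an OpenRouter chat-completions payload restricted to
    providers that do not retain prompt data."""
    return {
        "model": model,  # placeholder slug
        "messages": [{"role": "user", "content": user_message}],
        # Documented provider-routing preference: skip providers
        # that collect/retain prompts.
        "provider": {"data_collection": "deny"},
    }

payload = zdr_chat_payload("anthropic/claude-sonnet-4", "hello")
print(json.dumps(payload, indent=2))
```

You'd POST that to the usual chat-completions endpoint; OpenRouter then only routes to providers it classifies as non-retaining.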
Google's policies regarding data collection on paying customers are shady as well. From what I understand, they train on all data from all paying customers unless you turn "Gemini Apps Activity" off - which completely disables your ability to save chats. They obviously couple these two settings to collect as much data as possible. They allegedly do not train on temporary chats, but the UX for those is annoying and requires many more button clicks.
Ultimately I just treat any endpoint as a public record at this point. If I wouldn’t be happy letting the world see it, I don’t attach it. Welp.
My company uses Github Copilot. We have a very specific enterprise agreement that states that data does go to Microsoft's servers where it gets processed in an ephemeral environment and wiped after 3 months.
I'm guessing Anthropic has something similar in their agreements. Now, if you have some proof that Anthropic is stealing highly confidential information and/or trade secrets, that'd be good to see - but also, whoever is throwing that kind of information into an off-premises, non-airgapped model is just asking for a data leak.
This is anecdotal, but once when I worked for a well-known NGO, as an experiment I created a decoy document - stored only in a certain company's cloud offering - outlining what our positions would supposedly be for a meeting with representatives from a certain country[2].
We in fact were taking very different positions, and using different points to support those positions.
The delegation was visibly shaken, surprised, and I daresay upset to find themselves completely unprepared for our meeting - they basically refused to engage in dialogue (the entire purpose of the meeting) and ran home to ask their overlords in the embassy what to say.
I am doubtful that they deployed zero-day malware onto our network - I suspect they had an insider at the company whose cloud offering I used to create the false-flag document.
Sometimes it surprises me, as someone who got my education in tech by reading Slashdot comments and researching the terms I did not understand, how trusting this generation of hackers is.
[1] https://en.wikipedia.org/wiki/Saudi_infiltration_of_Twitter [2] I won't say who other than "not KSA"