- you are thinking about a company doing good things the right way. You are thinking about a company abiding by the law, storing data on its own server, having good practices, etc.
The moment a company starts doing dubious stuff, good practices start to go out the window. People write emails with cryptic analogies, people start deleting emails, ... and as the circumventions become more numerous and complex, there still needs to be a trail for the activity to remain understandable. That trail will be in written form somehow, and it must be hidden. It might be paper, it might be shadow IT, but the point is that if what you're hiding is more than forgetting to keep track of coffee pods at the social corner, you will leave traces.
So yes, raids do make sense BECAUSE it's about recurring complex activities that are just too hard to keep in the mind of one single individual over long periods of time.
Of course they're going to raid their offices! They're investigating a crime! It would be quite literally insane to prosecute them for a crime and show up to court without even attempting basic steps to gather evidence!
That was so that later in court it could be demonstrated the data hadn't been handed over voluntarily.
They also disconnected and blocked all overseas VPNs in the process, so local law enforcement would only get access to local data.
"it is done because it's always done so"
Police raid offices while literally investigating CSA: "nooo police should not physically invade, what happened to good old electronic surveillance?"
1) Even when you move things to a server, or remove them from your device, evidence is sometimes still left behind without your knowledge.
2) Evidence of data destruction is, as the name implies, itself evidence. And it can be used to prove things.
For example, an ext4 journal or NTFS USN $J journal entry showing "grok_version_2.4_schema.json" is important if Twitter is claiming Grok version 2.4 was never deployed in France/UK. That's why tools like shred and SDelete rename files before destroying them. But even when those tools rename and destroy files, the activity stands out; it might even be worse, because investigators can speculate more. It might corroborate some other piece of evidence (e.g., SDelete's Prefetch entry on Windows, or browser download history for the same tool), and that might support a more serious charge (obstruction of justice in the US).
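To make the rename-before-destroy point concrete, here's a minimal Python sketch of the idea behind shred/SDelete. This is illustrative only: the function name, pass count, and dummy-name scheme are assumptions of mine, not what those tools actually do, and it is not forensically sound.

    import os
    import secrets

    def scrub_and_delete(path: str, passes: int = 3) -> None:
        # Hypothetical overwrite-then-rename deletion, for discussion only.
        size = os.path.getsize(path)
        # 1. Overwrite the file contents in place with random bytes.
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(secrets.token_bytes(size))
                f.flush()
                os.fsync(f.fileno())
        # 2. Rename through progressively shorter dummy names so the
        #    original filename leaves the live directory entry. Note:
        #    each rename is itself recorded by journaling filesystems
        #    (ext4 journal, NTFS $UsnJrnl), which is exactly the trace
        #    investigators can recover.
        directory = os.path.dirname(os.path.abspath(path))
        current = os.path.abspath(path)
        for length in range(len(os.path.basename(path)), 0, -1):
            candidate = os.path.join(directory, "0" * length)
            if not os.path.exists(candidate):
                os.rename(current, candidate)
                current = candidate
        # 3. Finally unlink the renamed file.
        os.unlink(current)

Even after something like this runs, walking the USN journal would still show the original name plus the chain of renames, and the tool's own Prefetch entry would show it was executed; that's the "it stands out" problem.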
Because that country and the businesses that support that are going to get RICH from such a service.
I used this when an employer was forcing me to use Windows and I needed Linux tools to work efficiently, so I connected home. It goes through firewalls, proxies, etc.
Anyway, if you want to host this not at home but at a cloud provider, there was HavenCo https://en.wikipedia.org/wiki/HavenCo; don't ask me how I know about it, just curiosity.
Do you mean they will be pure worker surveillance systems, or did you mean “from” instead of “to”?
So no, don't be coy and pretend that all governments are like American institutions.
No platform ever should allow CSAM content.
And the fact that they didn't even care and didn't want to spend money on implementing guardrails or moderation is deeply concerning.
This has, imho, nothing to do with model censorship, but everything to do with allowing that kind of content on a platform.
Mmkay.
https://en.wikipedia.org/wiki/Twitter_under_Elon_Musk#Child_...
"As of June 2023, an investigation by the Stanford Internet Observatory at Stanford University reported "a lapse in basic enforcement" against child porn by Twitter within "recent months". The number of staff on Twitter's trust and safety teams were reduced, for example, leaving one full-time staffer to handle all child sexual abuse material in the Asia-Pacific region in November 2022."
"In 2024, the company unsuccessfully attempted to avoid the imposition of fines in Australia regarding the government's inquiries about child safety enforcement; X Corp reportedly said they had no obligation to respond to the inquiries since they were addressed to "Twitter Inc", which X Corp argued had "ceased to exist"."
But I am having trouble justifying in a consistent manner why Grok / X should be liable here instead of the user. I've seen a few arguments here that mostly come down to:
1. It's Grok the LLM generating the content, not the user.
2. The distribution. That this isn't just on the user's computer but instead posted on X.
For 1, it seems to break down if we look more broadly at how LLMs are used, e.g. as a coding agent. We're basically starting to treat LLMs as a higher-level framework now. We don't hold vendors of programming languages or frameworks responsible if someone uses them to create CSAM. Yes, the LLM generated the content, but the user still provided the instructions to do so.
For 2, if Grok instead generated the content for download, would the liability go away? What if Grok generated the content for download only and the user then uploaded it to X manually? If Grok isn't liable in that case, then why does the automatic posting (from the user's instructions) make it different? If it is liable, then it's not about the distribution anymore.
There are some comparisons to Photoshop: if I created a deepfake with Photoshop, I'm liable, not Adobe. But if Photoshop had an "upload to X" button, and I created CSAM using Photoshop and hit the button to upload it to X directly, is Adobe now liable?
What am I missing?
This seems to rest on false assumptions that: (1) legal liability is exclusive, and (2) investigation of X is not important both to X’s liability and to pursuing the users, to the extent that they would also be subject to liability.
X/xAI may be liable for any or all of the following reasons:
* xAI generated virtual child pornography with the likenesses of actual children, which is generally illegal, even if that service was procured by a third party.
* X and xAI distributed virtual child pornography with the likenesses of actual children, which is generally illegal, irrespective of who generated and supplied them.
* To the extent that liability for either of the first two bullet points would be eliminated or mitigated by absence of knowledge of the prohibited content at the time, plus prompt action when the actor became aware, X often punished users for the prompts producing the virtual child pornography without taking prompt action to remove the xAI-generated virtual child pornography resulting from the prompt, demonstrating knowledge and intent.
* When the epidemic of Grok-generated nonconsensual pornography, including of children, drew attention, X and xAI responded by attempting to monetize the capacity, limiting the tool to paid X subscribers only, showing an attempt to profit commercially from it, which is, again, generally illegal.
LLMs are completely different to programming languages or even Photoshop.
You can't type a sentence and within 10 seconds get images of CSAM with Photoshop. LLMs are also built on trained material, unlike the traditional tools in Photoshop. Plenty of CSAM has been found in the training data sets, but shock-horror, apparently not enough information to know "where it came from". There's a non-zero chance that the CSAM Grok is vomiting out is based on "real" CSAM of people being abused.
Because Grok and X aren't even doing the most basic filtering they could do to pretend to filter out CSAM.
https://www.bbc.com/news/articles/cze3p1j710ko
Reports on sextortion, self-generated indecent images, and grooming via social media/messaging apps:
Snapchat 54%
Instagram 11%
Facebook 7%
WhatsApp 6-9%
X 1-2%
A provider should have no responsibility for how its tools are used. It is on users. This is a can of worms that should stay closed, because we all lose freedoms just because of a couple of bad actors. An AI tool's main job is to obey. We are hurtling toward an "I'm sorry, Dave. I'm afraid I can't do that" future at breakneck speed.
We already apply this logic elsewhere. Car makers must include seatbelts. Pharma companies must ensure safety. Platforms must moderate illegal content. Responsibility is shared when the risk is systemic.
---
You’ve said that whatever is behind door number 1 is unacceptable.
Behind door number 2, “holding tool users responsible”, is tracking every item generated via AI, and being able to hold those users responsible.
If you don’t like door number 2, we have door number 3 - which is letting things be.
For any member of society, opening door 3 is straight out, because the status quo is worse than the reality before AI.
If you reject door 1 though, you are left with tech monitoring. Which will be challenged because of its invasive nature.
Holding Platforms responsible is about the only option that works, at least until platforms tell people they can’t do it.
Everything I read from X's competitors in the media tells me to hate X, and hate Elon.
If we prosecute people not tools, how are we going to stop X from hurting the commercial interests of our favourite establishment politicians and legacy media?
I still believe that the EU and aligned countries would rather have America agree to much tighter speech controls, digital ID, and ToS-based speech codes, as US Democrats apparently partly or totally do. But if they have workable alternatives they will deal with them from a different position.
For some reason you forgot to mention "Like the US did with TikTok".
They are tasked - and held to account by respective legislative bodies - with implementing the law as written.
Nobody wrote a law saying "Go after Grok". There is, however, a law in most countries about the creation and dissemination of CSAM and non-consensual pornography. Some of that law is relatively new (the UK only introduced some of these laws in recent years), but it all predates the current wave of AI investment.
Founders, boards of directors and their internal and external advisors could:
1. Read the law and make sure any tools they build comply
2. When told their tools don't comply take immediate and decisive action to change the tools
3. Work with law enforcement to apply the law as written
Those companies, if they find this too burdensome, have the choice of not operating in that market. By operating in that market, they both implicitly agree to the law, and are required to explicitly abide by it.
They can't then complain that the law is unfair (it's not), that it's being politicised (How? By whom? Show your working), or that this is all impossible when, in their home market, they are literally offering presents for the personal enrichment of the President on bended knee while he demands that the ownership structures of foreign social media companies like TikTok be changed to meet the agenda of himself and his administration.
So, would the EU like tighter speech controls? Yes, they'd like implementation of the controls on free speech enshrined in legislation created by democratically appointed representatives. The alternative - algorithms that create abusive content, of women and children in particular - is not wanted by the people of the UK, the EU, or most of the rest of the World; laws are written to that effect, and are then enforced by the authorities tasked with that enforcement.
This isn't "anti-democratic", it's literally democracy in action standing up to technocratic feudalism, an Ayn Randian wet dream being played out by some morons who got lucky.
Sounds like he's never been to Russia. Which is weird, given that he's Russian.
My most recent case: I went on holiday to a resort in Turkey, numerous Russians, families, retired, etc. I don't pass as a Russian-speaker (but I understand quite well) and once they hear me talking other unrelated language they naturally start to speak more freely in front of me (i.e. more liberal use of swearing, and even slurs if no other Russians are around).
While sunbathing, at the restaurant, or by the pool, they talked about daily, mundane things. But when floating in pairs 20-30m from the shore? Politics.
This step could come before a police raid.
This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.
Seizing records is usually a major step in an investigation. It's how you get evidence.
Sure, it could just be harassment, but this is also what normal police work looks like. France has a reasonable judicial system, so absent other evidence I'm inclined to believe this was legit.
The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.
On the one hand, it seems "obvious" that Grok should somehow be legally required to have guardrails stopping it from producing kiddie porn.
On the other hand, it also seems "obvious" that laws forcing 3D printers to detect and block attempts to print firearms are patently bullshit.
The thing is, I'm not sure how I can reconcile those two seemingly-obvious statements in a principled manner.
Do you have any evidence for that? As far as I can tell, this is false. The only thing I saw was Grok changing photos of adults into them wearing bikinis, which is far less bad.
So the question becomes if it was done knowingly or recklessly, hence a police raid for evidence.
See also [0] for a legal discussion in the German context.
I think one big issue with this statement is that "CSAM" lacks a precise legal definition; the precise legal term(s) vary from country to country, with differing definitions. While sexual imagery of real minors is highly illegal everywhere, there's a whole lot of other material – textual stories, drawings, animation, AI-generated images of nonexistent minors – which can be extremely criminal on one side of an international border and de facto legal on the other.
And I'm not actually sure what the legal definition is in France; the relevant article of the French Penal Code 227-23 [0] seems superficially similar to the legal definition of "child pornography" in the United States (post-Ashcroft vs Free Speech Coalition), and so some–but (maybe) not all–of the "CSAM" Grok is accused of generating wouldn't actually fall under it. (But of course, I don't know how French courts interpret it, so maybe what it means in practice is something broader than my reading of the text suggests.)
And I think this is part of the issue – xAI's executives are likely focused on compliance with US law on these topics, less concerned with complying with non-US law, in spite of the fact that CSAM laws in much of the rest of the world are much broader than in the US. That's less of an issue for Anthropic/Google/OpenAI, since their executives don't have the same "anything that's legal" attitude which xAI often has. And, as I said – while that's undoubtedly true in general, I'm unsure to what extent it is actually true for France in particular.
[0] https://www.legifrance.gouv.fr/codes/section_lc/LEGITEXT0000...
I wouldn't even consider this a reason if it weren't for the fact that OpenAI and Google, and hell, literally every image model out there, all have the same "this guy edited this underage girl's face into a bikini" problem (this was the most public example I've heard, so I'm going with it as my example). People still jailbreak ChatGPT, and they've poured how much money into that?
They have a court order obviously to collect evidence.
You have offered zero evidence to indicate there is 'political pressure' and that statement by prosecutors doesn't hint at that.
'No crime was prevented by harassing workers' is essentially a non sequitur in this context.
It could be that this is political nonsense, but there would have to be more details.
These issues are really hard but we have to confront them. X can alter electoral outcomes. That's where we are at.
You would be _amazed_ at the things that people commit to email and similar.
Here's a Facebook one (leaked, not extracted by authorities): https://www.reuters.com/investigates/special-report/meta-ai-...
A smoking gun would be, for instance, Facebook observing that most of their ads are scams, that the cost of fixing this far exceeds "the cost of any regulatory settlement involving scam ads", and concluding that the company's leadership decided to act only in response to impending regulatory action.
https://www.reuters.com/investigations/meta-is-earning-fortu...
I assume the raid is hoping to find communications to establish that timeline, maybe internal concerns that were ignored? Also internal metrics that might show they were aware of the problem. External analysts said Grok was generating a CSAM image every minute!!
https://www.washingtonpost.com/technology/2026/02/02/elon-mu...
> https://www.washingtonpost.com/technology/2026/02/02/elon-mu...
That article has no mention of CSAM. As expected, since you can bet the Post has lawyers checking.
You're not too far off.
There was a good article in the Washington Post yesterday about many, many people inside the company raising alarms about the content and its legal risk, but they were blown off by managers chasing engagement metrics. They even made up a whole new metric.
There were also prompts telling the AI to act angry or sexy or other things just to keep users addicted.
What may they find, hypothetically? Who knows, but maybe an internal email saying, for instance, 'Management says keep the nude photo functionality, just hide it behind a feature flag', or maybe 'Great idea to keep a backup of the images, but must cover our tracks', or perhaps 'Elon says no action on Grok nude images, we are officially unaware anything is happening.'
It definitely makes it clear what is expected of AI companies. Your users aren't responsible for what they use your model for, you are, so you'd better make sure your model can't ever be used for anything nefarious. If you can't do that without keeping the model closed and verifying everyone's identities... well, that's good for your profits I guess.
Yes they could have an uncensored model, but then they would need proper moderation and delete this kind of content instantly or ban users that produce it. Or don’t allow it in the first place.
It doesn't matter how CSAM is produced; the only thing that matters is that it is on the platform.
I am flabbergasted people even defend this
Did X do enough to prevent its website being used to distribute illegal content - non-consensual sexual material of adults and any sexual material of children?
Now reintroduce AI generation, where X plays a more active role in facilitating the creation of that illegal content.
I think the HN crowd is more nuanced than you're giving them credit for: https://hn.algolia.com/?q=chat+control
Firstly does the open model explicitly/tacitly allow CSAM generation?
Secondly, when the trainers are made aware of the problem, do they ignore it or attempt to put in place protections?
Thirdly, do they pull in data that is likely to allow that kind of content to be generated?
Fourthly, when they are told that this is happening, do they pull the model?
Fifthly, do they charge for access/host the service and allow users to generate said content on their own servers?
If it was about blocking the social media they'd just block it, like they did with Russia Today, CUII-Liste Lina, or Pavel Durov.
Twitter publicly advertised it can create CSAM?
I have been off twitter for several years and I am open to being wrong here but that sounds unlikely.
It's the same playbook that is used again and again. For war, civil liberties crackdowns, lockdowns, COVID, etc, etc: 0) I want (1); start playbook: A) Something bad is here, B) You need to feel X + Panic about it, C) We are solving it via (1). Because you reacted at B, you will support C. Problem, reaction, solution. Gives the playmakers the (1) they want.
We all know this is going on. But I guess we like knowing someone is pulling the strings. We like being led, and maybe even manipulated, because perhaps in the familiar system (which yields the undeniable goods of our current way of life) there is safety and stability? How else to explain it?
Maybe the need to be entertained with drama is a hackable side effect of stable societies populated by people who evolved as warriors, hunters and survivors.
Not that this would _ever_ happen on Hacker News. :|
I don't love heavy-handed enforcement on speech issues, but I do really like a heterogeneous cultural situation, so I think it's interesting and probably to the overall good to have a country pushing on these matters very hard, just as a matter of keeping a diverse set of global standards, something that adds cultural resilience for humanity.
linkedin is not a replacement for twitter, though. I'm curious if they'll come back post-settlement.
Censorship increases homogeneity, because it reduces the amount of ideas and opinions that are allowed to be expressed. The only resilience that comes from restricting people's speech is resilience of the people in power.
Humanity itself is trending more toward monoculture socially; I like a lot of things (and hate some) about the cultural trend. But what I like isn't very important, because I might be totally wrong in my likes; if only my likes dominated, the world would be a much less resilient place -- vulnerable to the weaknesses of whatever it is I like.
So, again, I propose for the race as a whole, broad cultural diversity is really critical, and worth protecting. Even if we really hate some of the forms it takes.
CSAM is banned speech.
Durov was held on suspicion that Telegram was willingly failing to moderate its platform and allowing drug trafficking and other illegal activities to take place.
X has allegedly illegally sent data to the US in violation of GDPR and contributed to child porn distribution.
Note that both are directly related to direct violations of data-safety law or association with separate criminal activities; neither is about speech.
CSAM was the lead in the 2024 news headlines in the French prosecution of Telegram also. I didn't follow the case enough to know where they went, or what the judge thought was credible.
From a US mindset, I'd say that generation of communication, including images, would fall under speech. But then we classify it very broadly here. Arranging drug deals on a messaging app definitely falls under the concept of speech in the US as well. Heck, I've been told by FBI agents that they believe assassination markets are legal in the US - protected speech.
Obviously, assassinations themselves, not so much.
There's someone who was being held responsible for what was in encrypted chats.
Then there's someone who published depictions of sexual abuse and minors.
Worlds apart.
They can read all messages, so they don't have an excuse for not helping in a criminal case. Their platform had a reputation of being safe for crime, which is because they just... ignored the police. Until they got arrested for that. They still turn a blind eye but not to the police.
People here didn't say "yeah, it's a real evil problem, and the e2e tech we build makes it almost impossible to catch and helps it scale, so let's find the right tradeoff that minimizes privacy invasion". No. People here just mocked "think of the children" and said something along the lines of no amount of suffering can make any restriction of privacy acceptable. They don't care that 99% of our life literally consists of compromising our freedom for others.
To buellerbueller's killed comment: I totally feel it. It was a shocker when it turned out many of my fellow Russians have no problem with Putin's war, and I bet that's what half of the USA feels.
lol, they summoned Elon for a hearing on 420
"Summons for voluntary interviews on April 20, 2026, in Paris have been sent to Mr. Elon Musk and Ms. Linda Yaccarino, in their capacity as de facto and de jure managers of the X platform at the time of the events,
I'm not at all familiar with French law, and I don't have any sympathy for Elon Musk or X. That said, is this a crime?
Distorted the operation how? By making their chatbot more likely to say stupid conspiracies or something? Is that even against the law?
GDPR and DMA actually have teeth. They just haven't been shown yet because the usual M.O. for European law violators is first, a free reminder "hey guys, what you're doing is against the law, stop it, or else". Then, if violations continue, maybe two or three rounds follow... but at some point, especially if the violations are openly intentional (and Musk's behavior makes that very very clear), the hammer gets brought down.
Our system is based on the idea that we institute complex regulations, and when they get introduced and stuff goes south, we assume that it's innocent mistakes first.
And in addition to that, there's the geopolitical aspect... basically, hurt Musk to show Trump that, yes, Europe means business and has the means to fight back.
As for the allegations:
> The probe has since expanded to investigate alleged “complicity” in spreading pornographic images of minors, sexually explicit deepfakes, denial of crimes against humanity and manipulation of an automated data processing system as part of an organised group, and other offences, the office said in a statement Tuesday.
The GDPR/DMA stuff was just the opener anyway. CSAM isn't liked by authorities at all, and genocide denial (we're not talking about Palestine here, calm your horses y'all, we're talking about Holocaust denial) is a crime in most European jurisdictions (as are the right-arm salute and other displays of fascist insignia). We actually learned something from WW2.
...but then other commenters reminded me there is another thing on the same date, which might have been the actual troll at Elmo, to get him all worked up.
No. It's 20 April in the rest of the world: 204.
Seems like you'd want to subpoena source code or gmail history or something like that. Not much interesting in an office these days.
The warrant will have detailed what it is they are looking for, French warrants (and legal system!) are quite a bit different than the US but in broad terms operate similarly. It suggests that an enforcement agency believes that there is evidence of a crime at the offices.
As a former IT/operations guy I'd guess they want on-prem servers with things like email and shared storage, stuff that would hold internal discussions about the thing they were interested in, but that is just my guess based on the article saying this is related to the earlier complaint that Grok was generating CSAM on demand.
For a net company in 2026? Fat chance.
Sabu was put under pressure by the FBI; they threatened to place his kids into foster care.
That was legal. Guess what, similar things would be legal in France.
We all forget that money is nice, but nation states have real power. Western liberal democracies just rarely use it.
The same way the president of the USA can order a Drone strike on a Taliban war lord, the president of France could order Musks plane to be escorted to Paris by 3 Fighter jets.
Interesting point. There's a top gangster who can buy anything in the prison commissary; and then there's the warden.
I remember something (probably linked from here), where the essayist was comparing Jack Ma, one of the richest men on earth, and Xi Jinping, a much lower-paid individual.
They indicated that Xi got Ma into a chokehold. I think he "disappeared" Ma for some time. Don't remember exactly how long, but it may have been over a year.
I'm sure they have much better and quieter ways to do that.
Whereas a raid is #1 choice for max volume...
Elon has ICBMs, but France has warheads.
Also, they are restricted in how they use it, and defendants have rights and due process.
> Sabu was put under pressure by the FBI, they threatened to place his kids into foster care.
Though things like that can happen, which are very serious.
I mean, if you're a sole caretaker and you've been arrested for a crime, and the evidence looks like you'll go to prison, you're going to have to decide what to do with the care of your kids on your mind. I suppose that would pressure you to become an informant instead of taking a longer prison sentence, but there's pressure to do that anyway, like not wanting to be in prison for a long time.
>That was legal. Guess what, similar things would be legal in France.
lawfare is... good now? Between Trump being hit with felony charges for falsifying business records (lawfare is good?) and Lisa Cook getting prosecuted for mortgage fraud (lawfare is bad?), I honestly lost track at this point.
>The same way the president of the USA can order a Drone strike on a Taliban war lord, the president of France could order Musks plane to be escorted to Paris by 3 Fighter jets.
What's even the implication here? That they're going to shoot his plane down? If there's no threat of violence, what does the French government even hope to achieve with this?
This is pretty messed up btw.
Social work for children systems in the USA are very messed up. It is not uncommon for minority families to lose rights to parent their children for very innocuous things that would not happen to a non-oppressed class.
It is just another way for the justice/legal system to pressure families that have not been convicted / penalized under the supervision of a court.
And this isn't the only lever they use.
Every time I read crap like this I just think of Aaron Swartz.
What happened to due process? Every major firm should have a "dawn raid" policy to comply while preserving rights.
Specific to the Uber case(s), if it were illegal, then why didn't Uber get criminal charges or fines?
At best there's an argument that it was "obstructing justice," but logging people off, encrypting, and deleting local copies isn't necessarily illegal.
They will explain that it was done remotely and whatnot but then the company will be closed in the country. Whether this matters for the mothership is another story.
Covered here: https://www.theguardian.com/news/2022/jul/10/uber-bosses-tol...
Obviously, the government can just threaten to fine you any amount, close operations or whatever, but your company can just decide to stop operating there, like Google after Russia imposed an absurd fine.
This would be done in parallel for key sources.
There is a lot of information on physical devices that is helpful, though. Even discovering additional apps and services used on the devices can lead to more discovery via those cloud services, if relevant.
Physical devices have a lot of additional information, though: Files people are actively working on, saved snippets and screenshots of important conversations, and synced data that might be easier to get offline than through legal means against the providers.
In outright criminal cases it's not uncommon for individuals to keep extra information on their laptop, phone, or a USB drive hidden in their office as an insurance policy.
This is yet another good reason to keep your work and personal devices separate, as hard as that can be at times. If there's a lawsuit you don't want your personal laptop and phone to disappear for a while.
EDIT: It seems from other comments that it may have been Uber I was reading about. The badging system I have personally observed outside the Gigafactories. Apologies for the mixup.
Yes.
I assume that they have opened a formal investigation and are now going to the office to collect/purloin evidence before it's destroyed.
Most FAANG companies have training specifically for this. I assume X doesn't anymore, because they are cool and edgy, and staff training is for the woke.
Or is there any France-specific compliance that must be done in order to operate in that country?
(it’ll be interesting to see if this discussion is allowed on HN. Almost every other discussion on this topic has been flagged…)
When notified, he immediately:
* "implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing" - https://www.bbc.co.uk/news/articles/ce8gz8g2qnlo
* locked image generation down to paid accounts only (i.e. those individuals that can be identified via their payment details).
Have the other AI companies followed suit? They were also allowing users to undress real people, but it seems the media is ignoring that and focussing their ire only on Musk's companies... As mentioned in the article, the UK's ICO and the EC are also investigating.
France is notably keen on raids for this sort of thing, and a lot of things that would be basically a desk investigation in other countries result in a raid in France.
[0] https://nypost.com/2025/12/15/business/facebook-most-cited-i...
[1] https://en.wikipedia.org/wiki/Suchir_Balaji
I mean, perhaps it's time to completely drop these US-owned, closed-source, algo-driven controversial platforms and start treating communication with the public that funds your existence in different terms. The goal should be to reach as many people as possible, of course, but also to ensure that the method and medium of communication is in the interest of the public at large.
I think we are getting very close to the EU's own great firewall.
There is currently a sort of identity crisis in the regulation. Big tech companies are breaking the laws left and right. So which is it?
- fine harvesting mechanism? Keep as-is.
- true user protection? Blacklist.
Who decides what communication is in the interest of the public at large? The Trump administration?
"Uh guys, little heads up: there are some agents of federal law enforcement raiding the premises, so if you see that. That’s what that is."
Good luck with that...
And the thing about negligence that causes harm to humans (instead of, e.g., just financial harm) is that:
a) you can't opt out of responsibility; it doesn't matter what you put into your TOS or other contracts
b) executives who are found responsible for the negligent actions of a company can be held _personally_ liable
And independent of what X actually did, Musk, as its highest-level executive, personally:
1) frequently made statements that imply gross negligence (to be clear, that isn't necessarily how X acted, which is the actually relevant part)
2) claimed that all major engineering decisions etc. are his and no one else's (because he loves bragging about how good of an engineer he is)
This means summoning him for questioning is, legally speaking, a must-have, independent of whether you expect him to show up or not. And he probably should take it seriously, even if that just means he sends a different high-level executive from X instead.
The entire social media platform model is based on the idea that the company isn't responsible for what the users post, which is just wrong. If you own a magazine, you should be held responsible for everything published.
You shouldn't be allowed to profit from publishing anything, then hide behind "the users did it, not us".
And in this case, Elon should be held responsible for every single image of CSAM published on X. Same with Zuck. Same with Truth Social, whatever you want.
Agreed. It's why it's difficult to support France, which has sheltered Roman Polanski for decades.
It's strange how people like you only think it's bad when it suits your political agenda.