Also, I remember reading this guy has close ties to Anthropic. Also, I find it suspicious how he came to prominence out of nowhere. Like Big Tech and the establishment are propping podcasts of controlled narrative/opposition. I don't buy any of it.
Really Anthropic doesn't seem to be fighting for anyone but a narrow subset of people.
So who cares, none of the big AI providers are particularly ethical. Pick your poison as your conscience and needs allow.
After that, he became well-known to the general public through his Sarah Paine podcasts (which are excellent).
He was first funded by FTX
SBF was on Patel's podcast in July 2022, and FTX unraveled in November 2022. Hmm.
https://www.dwarkesh.com/p/sbf
> I flew to the Bahamas to interview Sam Bankman-Fried, the CEO of FTX! He talks about FTX’s plan to infiltrate traditional finance, giving $100m this year to AI + pandemic risk, scaling slowly + hiring A-players, and much more.
And that was right in the middle of FTX being publicly accused by many prominent people.
April 29, 2022 https://x.com/AlderLaneEggs/status/1520023221294145536
June 20, 2022 https://x.com/MartyBent/status/1538645746655936519
This creates a dangerous dynamic. AI can generate targets that a human operator might not be able to justify manually, and when something goes wrong the blame can always be shifted to the system, such as the recent incident where roughly 180 children were killed due to faulty targeting.
Israel’s way of fighting this war looks more like pure destruction than a conventional military campaign, and AI systems like this are very easy to abuse in that context. At this point it’s clear that even the U.S. is willing to eliminate targets even when the collateral damage includes the person’s family or neighbors. I don’t think that would have been acceptable in previous administrations. Israel has lowered the bar.
That may be why Anthropic moved early to denounce this kind of usage, even though they had previously partnered with the Department of War.
Now let’s look at the statements made by Anthropic and Hegseth:
https://www.anthropic.com/news/where-stand-department-war
https://x.com/SecWar/status/2027507717469049070
From Anthropic’s own statement, we hear that they have actually been quite closely partnered. In Hegseth’s tweet we see:
“Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.”
This shows that Anthropic is still currently being actively used by the Department of War.
My view is that Anthropic and its investors eventually realized that the American war machine will use their technology in reckless ways, and that this will certainly create a massive PR disaster or, in an ideal world, even legal consequences. That realization likely pushed them to adopt what they now frame as a more “humanitarian” position.
Sometimes people succeed without earning it, and what matters is what they do with the success afterwards. I'd say Dwarkesh earned it, but got lucky and caught the right waves, and has surfed the hell out of his success. He's had consistently well informed, level headed takes, and has engaged the field with insight and honest curiosity.
When I see people surf like that, I applaud it. There's nothing grifty or shady, he's just had a great series of excellent opportunities and has played them for everything they're worth. Once he had a few billionaires on, that was all the social cachet he needed to continue attracting guests and high level researchers and other figures in AI.
Also, somewhat spitefully, I find it funny that he has multiple roommates.
> The whole background of this AI conversation is that we’re in a race with China, and we have to win. But what is the reason we want America to win the AI race? It’s because we want to make sure free open societies can defend themselves. We don’t want the winner of the AI race to be a government which operates on the principle that there is no such thing as a truly private company or a private citizen.
In the US currently, there are private citizens, and there are 'not-the-1%' citizens, for whom a Kavanaugh stop is legal, whose voter information may be (or may have already been) seized by the DoJ or FBI, who may be tracked by out-of-state or federal agents on ALPRs with no warrant, for any reason, and whose biometrics may be added to a database of potential domestic terrorists for attending a legal protest.
Or maybe your tax money will just be used to blow up unidentified boaters or bomb girls' schools and homes, and you'll get no say in whether that's the case, because the elected body that exists to issue a declaration of war (or not) on your behalf has abdicated that power to a cabinet of unelected white nationalists.
But go off about how we're such a better country that believes in freedom and goodness.
It’s easy to point to China as a place where freedom of speech isn’t present, but try asking members of the current administration or even Supreme Court judges who won the 2020 election and see what kind of responses you get. That alone says a lot about the current state of things.
Freedom of speech and regard for the facts are independent concerns. People absolutely have the right to call out lies about the 2020 election and have repeatedly done so.
More like the past 200 years. America has never been the "good guys", and it is only Americans who seem to think they ever were.
-- signed, rest of the world :|
Better than China as a global model? Still, yes, probably. Potentially. Depends on how the next few years go.
Even if America fails, I’d argue a global republic is a brighter potential future than a global dictatorship.
Just like being a billionaire (or, super-wealthy, if you will), you don't get to be a superpower by doing good things.
China and the US can both be bad, and they're both going to use AI for mass internal and external surveillance and weapon targeting.
The idea that anyone would be better off with China supplanting the US is asinine. This is the same government that committed the Tiananmen square massacre and still doesn't acknowledge that anything happened.
ICE asking for a list of social media profiles of its detractors doesn't sound like "without fear of jail or shunning or anything like that" to me. Through data mining and third parties, the local PD has a dossier on me based on what I write here that would come up if I did something to get their attention. That has a chilling effect on what I say on here in public.
[1] https://www.washingtonpost.com/national-security/2026/02/13/...
[2] https://www.nytimes.com/2026/02/13/technology/dhs-anti-ice-s...
People have also been detained with intention to be deported for their views about Palestine, with online comments being part of how they're chosen for targeting:
[3] https://www.columbiaspectator.com/news/2026/01/28/federal-go...
There was also someone jailed for a month for quoting Trump's own words about a school shooting, "we have to get over it", in the context of Charlie Kirk's death, along with many other noted instances of retaliation against online comments around that incident:
[4] https://www.cnn.com/2025/12/17/politics/retired-cop-jailed-o...
> But within 20 years, 99% of the workforce in the military, the government, and the private sector will be AIs. This includes the soldiers (by which I mean the robot armies), the superhumanly intelligent advisors and engineers, the police, you name it.
Autonomous weaponry is one of the few ways that a fascist state could reasonably maintain violent control over a large and hostile populace.
I guarantee Trump would rather have perfectly obedient killbots than critically thinking soldiers, or even just the 5 murderous assholes required to oversee tasking for 1000 semi-autonomous police drones.
The least plausible part is the private sector, which just doesn't work that way.
1. Democracy and freedom worldwide
2. Economic access+prosperity with Asia
3. Pro-American sentiment
(Not in order of importance, which shifts constantly)
I think assuming China would beat the US in conventional war if they reach 'AGI' first is a stretch; even if this actually grants them a force advantage, it's not like the US can no longer reach AGI. The risk is really more that if they reach 'AGI' and subsequently a force advantage, they would no longer be deterred and would more decisively move on Taiwan next year. Taiwan is key to [1] and [2] above.
You _could_ argue that this is a flaw in the constitution, and that none of the above should be legal, and that people who support those things should be restricted in their speech or ability to hold office. This was the status quo in politics for a while! These things have all existed for a long time but this seems particularly targeted at Trump, who was famously banned from most social media platforms for years.
There are a lot of democracies (most of the EU for example) that take this stance on freedoms and will even overturn elections to prevent those who support those policies. The question is really 'does doing that protect freedom and democracy or infringe it?'
As for the second paragraph, this is just a lie: Congress has not abdicated any type of war powers to the Cabinet. There has not been any type of declaration of war, and if Congress wanted to stop the DoD, they very much could, and in fact came very close to doing so. If your Congress representative did not represent your interests (in this case voted nay), you can call or email them and their office, or vote them out.
> better country that believes in freedom and goodness
I think you're letting your strong feelings here cloud your judgement, you can hold all of these opinions above without needing to fellate China, which is objectively worse on freedoms than the US. It's also important not to conflate "believes in freedom" with "perfectly meets my line of freedom."
The current amount of horsepower on the hoof is a rounding error, but before mechanized farming and war-fighting, these distinctions were the difference.
If we consider the capacity of technology to act as a force multiplier, it is reasonable to assume that current and future AI-assisted fighting forces can achieve more with less traditional materiel and with fewer personnel.
Drones are an especially likely way that these many AIs will become embodied and diversify, in which case I don’t think the percentages are so far-fetched.
https://www.bbc.com/news/articles/c62662gzlp8o
> Further ahead in the future, it wants its machines to be programmed to travel autonomously to a location, carry out its task - such as watching out for advancing enemy soldiers and engaging them if necessary - and then return to base after a certain time.
> “Preface to the highest stakes negotiations in history.”
Like come on. The Cuban missile crisis, for starters? Bro needs to calm tf down.
The part of the Pentagon that did this is, to put it politely, not the part that's good at planning.
The problem with democracy is that it can easily become a revolving door wherein capital holders can choose which candidates are allowed to approach the door.
I think democracy works well when the monetary system is constrained, for example backed by gold or another scarce asset, because that creates a better separation between money and state, which in turn leaves big companies less incentive to corrupt the revolving door for financial advantage.
In a monetary system where the government can create an unlimited amount of money, the incentive to corrupt the government and political process keeps increasing.
I think democracy with a soft fiat money system is probably the most dangerous system, because any moral objection can be filtered out of the running, as we saw happen with Anthropic and the Department of War. Clearly it's the weapon manufacturers running that department behind the scenes; they have a huge financial interest in doing so. The Department of War is the bread and butter of weapon manufacturers and defense contractors.
I've reached similar conclusions about the problems with democracy but always struggle with any potential solutions. Looking at the world, I don't see any viable alternative forms of government, just slight variations that still suffer from the same core problems, only to a lesser degree.
I haven't seen this much hype and hopium since the dot-com boom. The whole OpenAI -> Anthropic saga just reeks of the same evolution as Viant/Scient.
Look, we have an amazing tool, but it has some fundamental shortcomings that the industry seems to want to bury its head in the sand about. The moment the hype dies and we get to engineering and practical implementations, a lot is going to change. Does it have the potential to displace a lot of our current industry? Why yes it does. Agents can force the web open (have you ever tried to get all your Amazon purchase history?), can kill dark patterns (go cancel this service for me), and crush wedge services (how many things are shimmed into Salesforce that should really be stand-alone apps?). And the valuable engagement is going to be by PEOPLE; good UI and good user experiences are gonna be what sells (this will hit internet advertising hard for middlemen like Google and Facebook).
The notion that 99% of the workforce and military will be AIs isn't "copium", it's grounds for absolute terror. One of two things will be true:
1. The AIs will be controlled by the Epstein class, who will then have no use for most of humanity, either as workers or soldiers.
2. Or the AIs will be controlled by the AIs themselves, which also seems worrisome.
Really, any situation where 99% of the workforce and military are AIs should be deeply concerning, for reasons that should be obvious to any student of history or evolution.
And, sure, maybe we won't get there in our lifetimes. But if we did, I wouldn't expect an automatic utopia.
The GP is saying that it’s a major over-extrapolation of the current progress.
You seem to be assuming we will get there instead of expecting the cracks will become more and more obvious.
Who is learning this for the first time only now? Even just restricting ourselves to the current administration, look at how many times Trump has directed punitive actions against private entities! Look at his actions against law firms like Perkins Coie or Covington & Burling. This is not something that just arose out of nowhere with Anthropic.
A teenager, probably. Not everyone is 100 years old.
I mean... isn't that pretty much the way the current administration behaves in general? If the answer to this question is "yes", and the US executive does not in fact share the values of the author about free and open society, then the rest of the article is kinda moot (except the point that we should be talking about these things now, and encouraging congress to act).
This administration believes that they don't need to treat all businesses equally under the law, and can use strong-arm intimidation tactics to get what they want. That is the problem.
I remember thinking about this - basically AGI - decades ago, and it was always obvious to me that if you created such a thing there'd come a day when the MIB would be ringing the doorbell.
At least the AI push has saved the human race from an uncomfortable obsession with cryptocurrency.
I speculate we'll discover there's very few unambiguously ethical uses of AI, much less for military applications. Them's the breaks.
As for whether code written with Claude Code should be so considered - if it’s just code that is subject to human review, I would argue that this use shouldn’t be a supply chain risk. But with Claude Code PR Review and similar products, the chance that an AI product (not limiting to Anthropic here) could own a load-bearing part of the lifecycle of a critical piece of code becomes much larger, and deserves scrutiny.
Because you can't designate a company an SCR just because you don't like the contract you signed with them.
What Hegseth/Trump want to do is not just stop Anthropic models from being used by any military supplier pursuant to goods/services they are providing to the military, but rather say that if you do business with the military then you must not use Anthropic at all, even if that usage is entirely unrelated to your military contracts.
It is also common corporate doctrine to use a subsidiary for government contracting to avoid having to evidence that a commercial vendor is utilized for government, so this won't even be 'annoying' for contractors.
ITAR and compliance frameworks (e.g. FedRAMP and CMMC) already mandate this for any non-US company, yet AWS commercial still has offerings in other countries and from non-US vendors, Palantir still has an IG business, etc.
The lawfare part of it is that to coerce an individual or a company, governments are willing to abuse their power. The Biden administration did it when pressuring social media companies to censor content. The Trump administration is doing it to a much greater extent with things like ordering every government agency to stop using Anthropic and by labeling them a supply chain risk.
The ideological part of it is when Defense Sec Hegseth and Trump and AI Czar / PayPal Mafia member David Sacks repeatedly attack Anthropic as “woke”, and it is clear they’re undermining this company from their government positions based on Anthropic’s speech (first amendment violation). This obviously is part of why they attacked Anthropic in such a public way.
And the corruption part of it is OpenAI’s leaders being big supporters of the MAGA movement and the Trump administration. Greg Brockman, president of OpenAI, is the biggest donor ever to the MAGA PAC. Why did Hegseth grant a contract to OpenAI after banning Anthropic, even though OpenAI has the same red lines in their agreement (what Sam Altman claimed)? It’s because of the corruption - give Trump and his family/friends money, and you’ll get something back.
The fight against these types of government abuse has ALWAYS been happening. But the abuse is much more in the open today, and much larger in scale than ever before. Scandals like Watergate would not even make the news today. And that is what the public should be waking up to and focusing on. We need to rethink our political system significantly and add a lot more protections against the kind of things the Trump 2.0 administration has done.
> Our future civilization will run on AI labor. And as much as the government’s actions here piss me off, in a way I’m glad this episode happened - because it gives us the opportunity to think through some extremely important questions about who this future workforce will be accountable and aligned to, and who gets to determine that.
I stopped reading there because this is a pointless exercise.[1]
This isn’t a roundtable. You are not even at the table. There isn’t some “thankfully time to discuss this...”—you are just out.
The Machine doesn’t need your labor? You are out. No norms. No discussions.
You either try to forcefully take control of the situation or you see yourself get discarded.
(I am here just assuming all the AI Maximalist (doom maximalist in this context, Trump and all) premises for the sake of the argument.)
[1] I did read the last paragraphs and the tenor is the same. “We must make laws and norms through our political system”… just like with nuclear bombs, of all things.
AI is just computers doing things we typically associate with human intelligence, and having a conversation with a computer that effectively passes the Turing test is definitely AI. If LLMs aren't AI, then AI isn't a useful term. (Though agreed that LLMs aren't AGI, which I assume is what you're thinking of.)
Wikipedia's list of AI applications: https://en.wikipedia.org/wiki/Artificial_intelligence#Applic...
There’s a similar thing with transhumanist “enhancement” or “life extension” stuff. When it actually works we call it medicine. Statistically one of the most powerful life extension techs ever developed was the cardiac bypass, which would have been sci-fi in 1900.
I’ve been using stuff like Claude Code and personally feel comfortable calling this stuff AI. Is it AGI? I don’t think so, but then again I’m not totally sure what that is. Am I AGI? I’m not universally able to handle all forms of cognition well and I can’t self modify much, so I’m not sure either. I’m not even sure if AGI is a well formed concept.
Intelligence is a pretty broad concept too. My pet rabbit is intelligent. Plants are intelligent. Bacteria are intelligent. Anything that can run an OODA loop, learn, adapt, and move toward a goal function is intelligent. By that definition some computer systems have been AI for decades. They’re just getting better.
I think there’s intelligence all around us. We just don’t get the wow factor from it unless it talks.
I personally would prefer "AI" to be "AGI" but there's no point fighting the way people use language (see: every damned pedantic comment about English usage ever!! :-)
But beyond the pedantry and authority appeals, I think keeping the term AI distinct from AGI is just useful, so it can be an umbrella term for all the human-like, smart-ish things computers do. And so its Wikipedia page doesn't have to be re-written.