The Pentagon Just Blacklisted Anthropic. Your AI Provider Could Be Next.
Dario Amodei told CBS News that “disagreeing with the government is the most American thing in the world.” Twenty-four hours later, every federal agency was ordered to stop using Claude, and Pete Hegseth designated Anthropic a supply-chain risk to national security. That label has historically been reserved for adversary nations; now it is being applied to an American AI company because it wouldn’t let the Pentagon use its model for mass surveillance and autonomous weapons without human oversight.
And Claude shot to #2 on the App Store. Downloads broke all-time records. Free user signups tripled.
Nobody saw this escalation coming
Anthropic had a $200 million Pentagon contract signed back in July 2025. Two non-negotiable terms: no mass domestic surveillance of Americans, no fully autonomous lethal weapons. The Pentagon wanted the ability to use Claude for “all lawful purposes” without restrictions. Anthropic said no. Hegseth called them “sanctimonious.” Trump called them “radical left, woke.”
And then Sam Altman did something nobody expected. He posted on X that OpenAI shares the exact same red lines, announced an OpenAI-Pentagon deal that includes the same safeguards that got Anthropic blacklisted, and essentially told the Defense Department that its position was unreasonable without quite saying those words out loud. So either those safeguards are reasonable (Altman’s position) or two of the three major AI providers are now national security threats (Hegseth’s position).
Your workflow is not a weapons system
Nobody reading this is building autonomous weapons. But your access to an AI model can vanish overnight because of someone else’s political fight.
That should bother you
If you’re a consultant who uses Claude to draft client proposals, or a recruiter running candidate outreach through an AI tool, or a small business owner automating customer responses, the Pentagon’s beef with Dario Amodei has nothing to do with you. And yet your tools become collateral damage in someone else’s war. Government agencies running Claude now have six months to switch providers. But the precedent extends further, because any company doing defense work (and that’s a huge chunk of enterprise America) is now pressured to drop Anthropic entirely.
I wrote about this a few days ago when Anthropic revealed that Chinese labs ran 16 million queries to mine Claude’s capabilities. The argument then was about data security. The argument now is about availability. And both point in the same direction.
When you build your entire daily workflow around a single AI provider’s cloud service, you are betting that nothing goes wrong between that company and its regulators, its government, its investors, or its competitors. That is a long list of failure points for something you depend on every Tuesday at 2pm. Anthropic is fighting the Pentagon. OpenAI is getting sued by half the internet. Google is under antitrust pressure. Every major provider is one political dispute away from some version of what happened Friday.
Swap the model, keep working
dassi runs in your browser with your API keys. If Anthropic gets blacklisted tomorrow, you switch to GPT or Gemini and keep working. If OpenAI does something you do not like, you switch to Claude. Because the AI is not the product. The browser agent is the product, and the AI is a swappable component underneath.
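The swappable-component idea is simple enough to sketch in a few lines of TypeScript. Everything below is illustrative, not dassi's actual code: `ChatProvider`, `runTask`, and the stubbed providers are hypothetical names, and a real implementation would call each vendor's API with your own key instead of returning canned strings.

```typescript
// A minimal sketch of "the AI is a swappable component" (hypothetical
// names; not dassi's real implementation). The agent codes against one
// interface, and vendors are interchangeable entries in a table.
interface ChatProvider {
  name: string;
  complete(prompt: string): Promise<string>;
}

// Each entry would wrap a vendor's API using your key; stubbed here
// so the sketch stays self-contained and runnable.
const providers: Record<string, ChatProvider> = {
  claude: { name: "claude", complete: async (p) => `[claude] ${p}` },
  gpt:    { name: "gpt",    complete: async (p) => `[gpt] ${p}` },
  gemini: { name: "gemini", complete: async (p) => `[gemini] ${p}` },
};

// The workflow only knows the interface. If one vendor is blacklisted
// tomorrow, switching is a one-word config change, not a rewrite.
async function runTask(providerKey: string, prompt: string): Promise<string> {
  const provider = providers[providerKey];
  if (!provider) throw new Error(`unknown provider: ${providerKey}`);
  return provider.complete(prompt);
}
```

The design point is the indirection itself: because nothing upstream of `runTask` mentions a vendor, business continuity becomes a configuration question rather than a migration project.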
This is not some abstract privacy argument anymore. A week ago, BYOK was mainly a data sovereignty play. After Friday, it is also a business continuity play. The Pentagon just demonstrated that AI providers can lose access to their entire customer base overnight, for reasons that have absolutely nothing to do with the quality of the model, the satisfaction of the user, or anything the user did wrong. And if that can happen to a $200 million defense contract, it can happen to your $20 monthly subscription.
Amodei picked his hill
I respect it. Refusing $200 million because you won’t build mass surveillance tools is not nothing. And the App Store numbers suggest millions of people feel the same way.
But respect and reliance are different things. You can admire Anthropic’s backbone and still recognize that hitching your entire workflow to any single provider is a crap strategy when the ground shifts this fast. Flexibility is not about distrusting your AI provider. It’s about not needing to.
dassi’s free, and it doesn’t care which company is in the Pentagon’s crosshairs this week.