Pentagon Labels Anthropic Supply Chain Risk

The 10-second story

The Pentagon has officially blacklisted Anthropic as a supply-chain risk after the AI company refused military demands for control over its models, including use in autonomous weapons and mass surveillance. When its £160 million contract collapsed, the Department of Defense switched to OpenAI, which accepted the terms but then watched ChatGPT uninstalls surge 295%.

Why it matters

This marks the first time a major AI provider has been formally designated a supply-chain risk by a Western military, creating a precedent that could ripple through procurement decisions across NATO allies including the UK. The fallout demonstrates how AI companies now face a stark choice between military contracts and public trust, with OpenAI’s user exodus showing that accepting defence work carries real commercial costs. For UK businesses relying on these AI tools, this split creates new risks around vendor stability, data security, and potential access restrictions if geopolitical tensions escalate.

AI companies are now splitting into military-aligned and civilian-focused camps, forcing businesses to consider which side their vendors fall on.

What this means for your business

  • Vendor due diligence becomes more complex as AI companies’ military ties now directly affect their public reputation and user base stability
  • Data sovereignty concerns intensify since military-aligned AI providers may face different oversight and access requirements from defence agencies
  • Vendor lock-in arguments weaken when a provider can lose significant user trust overnight, so the cost of switching between AI tools matters less than it once did
Read the full story on TechCrunch

Ready to explore what automation could do for your business?

No hard sell, no commitment. Just a clear conversation about what is possible.