The argument no one wants to make
The industry building artificial intelligence is moving faster than the infrastructure designed to govern it. This is not a prediction. It is the documented, stated reason that some of the most senior researchers in the field are leaving the organisations they built — publicly, on record, citing it explicitly.
The market has not processed this. Revenue is growing. Global AI spending reached $1.5 trillion in 2025, according to Gartner. Benchmarks are improving. The products are genuinely useful. These facts are being treated as evidence that safety concerns are overstated. They are not evidence of that. They are evidence that capability and safety are different things — and that one can advance while the other stagnates.
What happened with Anthropic — and what it actually means
In early 2026, the US Department of Defence presented Anthropic with a demand: allow its AI models to be used for "all lawful purposes" — a category that explicitly included fully autonomous weapons systems and domestic mass surveillance infrastructure. Anthropic refused. The $200 million contract was terminated.
Defence Secretary Pete Hegseth declared Anthropic's position "fundamentally incompatible with American principles" and labelled the company a national security supply chain risk — a designation previously reserved for foreign adversaries, never before applied to an American company. Then, within days, the same administration indicated it might invoke the Defence Production Act to compel Anthropic's cooperation. Then directed all federal agencies to cease using its technology entirely.
The constraints held under maximum pressure — a $200M loss, government coercion, the threat of nationalisation. For any organisation evaluating AI vendors, that is a structural data point.
The significance of Anthropic's decision is not that safety-focused AI wins on ethics. Markets do not procure AI on ethics. The significance is structural: a company demonstrated — under the maximum possible pressure from the most powerful government on earth — that its stated constraints are not marketing. They are load-bearing. They do not move when a powerful client pushes against them.
The exodus: what the builders are saying
The Anthropic story did not emerge in a vacuum. It sits inside a pattern that has been building for two years — and the signal is consistent across multiple organisations.
The agentic phase exposed the gap
The transition from conversational AI to agentic AI is not a product update. It is a categorical shift in risk profile. Chatbots generate outputs — humans evaluate them and decide what to do. Agents take actions: booking, deleting, sending, executing — often at machine speed, without a human reviewing each step. The feedback loop is compressed or removed.
According to the 2025 AI Agent Index published by Stanford and Berkeley, papers mentioning "Agentic AI" in 2025 exceeded the combined total from all prior years. A McKinsey survey of 1,993 companies found 62% were at least experimenting with AI agents. The pace of deployment is outrunning the frameworks to govern it.
The failures have already materialised — and this week, at the largest scale yet.
The market's argument — and where it fails
The standard position against prioritising AI safety: constraints slow development; slowing development cedes ground to actors with fewer constraints; the entity that reaches advanced AI first sets the terms for everyone; therefore restraint is strategic surrender.
This argument treats safety as optional friction — a philosophical preference that can be deferred until the competitive position is secured. The agentic failure data challenges this framing directly.
Safety infrastructure is not a brake on the system. It is load-bearing. The agents operating inside enterprise environments today are not failing because they are too cautious. They are failing because they have no constraints at all — and when they fail, they do not stop. They act, at machine speed, with whatever authority they have been given, in whatever direction their objective function points. The blast radius scales with their access.
Traditional software fails deterministically. AI agents fail probabilistically — often in ways that are difficult to predict, simulate, or reverse. The kill switch you assume exists usually does not.
Gartner anticipates at least 40% of agentic AI projects will be withdrawn by end of 2027, with risk management concerns as a primary driver. The market is beginning to price this — but the pricing is reactive, not structural.
What this means for organisations operating in African markets
African financial regulators — the CBN, RBA, FSCA, CBK, RBZ — are watching the US and European experience closely. The Stanford AI Index 2025 confirmed that the African Union released AI governance frameworks in 2024, alongside the OECD, EU, and UN. The regulatory wave is not hypothetical.
The questions that frameworks will ask are already visible in the European AI Act: Who authorised this agent? What actions can it take without human review? How is its decision-making audited? What happens when it fails, and who is liable? Most organisations cannot answer these questions today. The gap between what is deployed and what can be governed does not close by itself.
For businesses expanding across African markets, the corporate structure you operate under will increasingly determine your AI governance obligations. The entities deployed in financial services, healthcare, or government-adjacent work face the most immediate exposure — and the least institutional preparation.
The conclusion the data supports
This is not an argument for slowing AI adoption. The competitive and operational case for deploying AI is real, and businesses that do not engage with these tools will fall behind those that do. The capability advantage is genuine. The productivity gains are documented. The decision to adopt is correct.
The argument is for parallel investment in the governance layer — treating AI safety infrastructure not as a compliance cost or a PR consideration, but as a structural requirement of deploying systems that take autonomous actions inside your organisation. The safety layer is not a constraint on progress. It is what makes progress recoverable when it goes wrong.
The researchers who understand these systems best — who built them, who ran the safety teams — have been trying to communicate this for two years. The market called it noise. The agentic failure data is no longer noise. The Meta Sev 1 incident happened three days ago. The database Replit's agent deleted is still gone. The AWS outages happened. The cover-up was automated.
Sober heads are necessary. The industry is not producing enough of them. We intend to be part of changing that.