Most Companies Know Their AI Spend. Almost None Know Their AI Readiness.

I keep having the same conversation. A peer calls to talk through their AI strategy. They’ve got executive sponsorship, a funded program, a vendor shortlist. Ten minutes in, I ask a simple question about decision rights or model governance, and the line goes quiet.

It’s not that these are bad teams. They’re sharp people working hard problems. But they’ve skipped something fundamental: an honest assessment of whether the organization is ready for what they’re trying to do.

The numbers tell the same story at scale. AI investment is rising fast, but organizational conversion remains weak: enterprise AI spending is projected to reach $644 billion in 2025, yet 42% of companies report scrapping most of their AI initiatives (BCG), and only 39% see meaningful EBIT impact (McKinsey). The capital is flowing. The returns are not.

Call it readiness debt. Organizations are scaling AI ambition faster than their capacity to absorb it. They invest in capability without investing in the management system required to convert that capability into value. The gap is never the technology. It’s readiness. And it shows up the same way everywhere.


Five Questions Most Leadership Teams Can’t Answer

I’ve started using five diagnostic questions when someone asks me to look at a stalled AI program. They’re simple. Almost no one can answer all five cleanly.

[Figure: Five readiness dimensions, with the typical enterprise readiness gap marked. Operating Model: decision rights. Governance: deployment speed. Architecture: lab to workflow. Measurement: balance sheet linkage. Workforce: human-agent design.]

1. Who owns AI decisions, and how do they get made?

Most companies have a declared AI lead. Few have documented decision rights that people actually follow. The CTO thinks they own model selection. The CDO thinks they own data access. The business unit thinks they own use case prioritization. When I ask “who decides whether a model goes to production?” in a room of ten leaders, I usually get four different answers.

2. Can you deploy a model without a compliance firefight?

If your governance architecture operates at quarterly-review speed, it will bottleneck every deployment. I hear this from colleagues in financial services constantly, but it’s not just a regulated-industry problem. Any organization that treats AI governance as a gate rather than a continuous process will struggle to move models from pilot to production.

3. Is your AI stack connected to business workflows, or sitting in a lab?

The gap between a working prototype and a system that changes how people work is where most AI value gets stranded. A friend running AI at a major insurer told me they had 40 models in notebooks and three in production. The issue was never model quality. It was that no one had mapped the capability stack to the actual processes where decisions get made.

4. Can you tie AI investment to balance sheet impact?

If reporting stops at “models deployed” or “hours saved,” you cannot make the case for continued investment at board level. The teams that sustain funding are the ones that establish financial linkage between AI capabilities and revenue, cost, or risk outcomes. Everything else is a science fair.

5. Are your teams designed for human-agent collaboration?

The roles, skills, and reporting structures built for traditional software do not transfer to AI-augmented work. I’ve watched organizations try to staff AI programs with the same team shapes they use for application development. It does not work. Role evolution toward human-agent collaboration requires new competency models, not just new headcount.

Start With Where You Are, Not Where You Want to Be

The teams that succeed, both mine and the ones I hear about from peers, always start by being honest about where they are, not where the vendor pitch says they should be.

This is harder than it sounds. There’s organizational pressure to present a confident posture. Nobody wants to stand up in a steering committee and say “we’re at stage one.” But the alternative is building on assumptions that collapse when reality hits.

The readiness curve has four stages. Each one has a characteristic failure mode.

1. Fragmented: siloed pilots, no coordination. Fails at ownership.

2. Experimenting: pockets of capability, ad-hoc governance. Fails at scaling.

3. Operationalizing: AI in workflows, defined ownership. Fails at measurement.

4. Scaling with Discipline: feedback loops; improvement is continuous.

Most organizations I talk to believe they’re Operationalizing. Most are actually somewhere between Fragmented and Experimenting. The gap between perception and reality is where programs stall.

I created an assessment tool to make this diagnostic concrete. It covers 25 questions across five dimensions: Data, Process, Talent, Governance, and Organizational Maturity. It produces a radar chart showing exactly where the gaps are, assigns a maturity tier with specific recommendations, and takes about 10 minutes to complete. No login, no email capture. Just an honest diagnostic.
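
For readers who want the mechanics, here is a minimal sketch of how a diagnostic like this can turn answers into per-dimension scores and a maturity tier. The dimension and stage names come from this post; everything else, the 1-to-5 answer scale, the normalization, and the numeric tier cutoffs, is an illustrative assumption, not the tool’s actual logic.

```typescript
// Minimal scoring sketch for a five-dimension readiness diagnostic.
// Dimension and stage names mirror this post; the answer scale and
// tier cutoffs are illustrative assumptions, not the real tool's logic.

type Dimension =
  | "Data"
  | "Process"
  | "Talent"
  | "Governance"
  | "Organizational Maturity";

interface Answer {
  dimension: Dimension;
  score: number; // 1 (weak) to 5 (strong), one per question
}

// Average each dimension's answers onto a 0-100 scale. These five
// values are what a radar chart would plot, one per axis.
function dimensionScores(answers: Answer[]): Map<Dimension, number> {
  const totals = new Map<Dimension, { sum: number; n: number }>();
  for (const a of answers) {
    const t = totals.get(a.dimension) ?? { sum: 0, n: 0 };
    totals.set(a.dimension, { sum: t.sum + a.score, n: t.n + 1 });
  }
  const scores = new Map<Dimension, number>();
  for (const [dim, { sum, n }] of totals) {
    scores.set(dim, (sum / (n * 5)) * 100); // normalize to 0-100
  }
  return scores;
}

// Map the overall average to one of the four readiness stages.
// The stage names are from the article; the cutoffs are assumptions.
function maturityTier(scores: Map<Dimension, number>): string {
  const values = [...scores.values()];
  const overall = values.reduce((acc, v) => acc + v, 0) / values.length;
  if (overall < 40) return "Fragmented";
  if (overall < 60) return "Experimenting";
  if (overall < 80) return "Operationalizing";
  return "Scaling with Discipline";
}

// Usage: const tier = maturityTier(dimensionScores(answers));
```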

I built it because I kept watching the same readiness gaps derail programs that had everything else going for them. Good teams, good models, real business problems. But no honest inventory of whether the organization was ready.

The next advantage in enterprise AI will not come from access to better models. It will come from knowing whether your organization is actually ready to use them well.

Take the Assessment

Find out where your organization actually stands: 25 questions, 5 dimensions, about 10 minutes, no sign-up.

Take the Assessment →

The assessment tells you where you stand. The playbook shows you what to do about it. Together they give you a starting point that’s grounded in your actual situation, not a generic maturity model that assumes you’re further along than you are.

The scorecard is shareable. Bring it to your next leadership conversation. It’s easier to align on a plan when everyone is looking at the same honest baseline.

In regulated environments, readiness is not a side concern. It is the difference between pilots that impress and systems that endure. I work on these problems at Deutsche Bank, and the pattern holds everywhere I look. If you’re working through similar challenges, I’d welcome the conversation.