Matt Hudson
AI
Apr 1, 2026 · Updated Apr 9, 2026

Your AI Strategy Is Burning Cash, Not Building Leverage

Matt Hudson, Chief Financial Officer

Over the past year, I’ve had the same conversation with CFOs, CIOs, and operators at dozens of companies.

Everyone feels the same pressure: Are we doing enough with AI? Are we innovative enough? Are our teams moving fast enough? Boards are asking. CEOs are asking. Employees are experimenting. Every leadership team is trying to turn experimentation into an advantage.

From a distance, the story looks encouraging. McKinsey says that 90% of enterprises report deploying AI in at least one function. AI appears to be everywhere. Knowledge workers are using tools like Claude, ChatGPT, and others to draft, summarize, and code. SaaS vendors are racing to bolt agents and copilots onto every workflow.

But when I ask a simple follow-up question—“How are you measuring the success of all this AI?”—the most common answer is still the wrong one: tokens consumed.

Not output. Not impact. Not time saved or revenue created. Just spend.

Yikes.

We’re in a phase of the cycle where exploration is healthy and expected. You give teams latitude to try new tools and see what sticks. That’s the right instinct. But without a strategy, that exploration quietly hardens into architecture—and architecture determines economics.

As such, many businesses are drifting toward an AI architecture that is fragmented, expensive, and fundamentally misaligned with how value is created.

From my seat, I see that we have a unique opportunity to change the trajectory and prevent foreseeable pitfalls:

Pitfall #1: The productivity tax

AI is supposed to remove friction from work, but in fragmented environments, it simply redistributes it.

As AI capabilities get embedded into email, docs, CRM, finance tools, HR systems, and ticketing platforms, employees accumulate agents. Each one is helpful in isolation. Taken together, they create a new layer of complexity.

To get anything meaningful done, people still have to switch between tools, reexplain context to each assistant, and ultimately copy and paste data from one system to another.

If a knowledge worker spends even one hour per day searching for information, jumping between systems, or reentering context, that’s roughly 33 working days per year. At an average salary of $150,000, that’s about $19,000 in lost productivity per employee annually—a 12% productivity tax for a single hour of friction per day.

Now extend that across hundreds or thousands of employees.
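The arithmetic above is easy to sanity-check. Here is a minimal sketch, assuming an 8-hour workday and roughly 260 working days per year (figures the article implies but doesn't state):

```python
# Back-of-the-envelope math behind the "productivity tax" figures.
# Assumptions (not stated in the article): 8-hour workday, 260 working days/year.
HOURS_PER_DAY = 8
WORKING_DAYS_PER_YEAR = 260

def productivity_tax(salary: float, friction_hours_per_day: float = 1.0) -> dict:
    """Annual cost of daily friction for a single employee."""
    lost_hours = friction_hours_per_day * WORKING_DAYS_PER_YEAR
    tax_rate = friction_hours_per_day / HOURS_PER_DAY      # fraction of the day lost
    return {
        "lost_days_per_year": lost_hours / HOURS_PER_DAY,  # 32.5, i.e. "roughly 33"
        "annual_cost_usd": salary * tax_rate,              # $18,750, i.e. "about $19,000"
        "tax_rate": tax_rate,                              # 12.5%, i.e. the "12% tax"
    }

per_employee = productivity_tax(salary=150_000)
# Scaled across 1,000 employees, one hour of friction a day costs ~$18.75M per year.
org_cost = 1_000 * per_employee["annual_cost_usd"]
```

The point of writing it down: the cost scales linearly with headcount, so the same friction that looks tolerable for one employee becomes an eight-figure line item for a large organization.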

Each new assistant becomes another boundary where work has to be translated. Each new “copilot” is another UI to manage and another context that doesn’t carry over.

This looks like higher software spend layered on top of largely unchanged productivity. The tools are working, just not together.

Pitfall #2: Lock-in and sprawl

The second foreseeable pitfall isn’t “too many tools.” It’s ending up in a world where your data, workflows, and agents all live inside someone else’s stack—and none of it truly talks to each other.

Bottom-up, vendor-by-vendor AI adoption scatters your data across AI-enabled walled gardens. Each system has its own schema, embeddings, and view of your business. Interoperability is rare, and vendors are incentivized to keep it that way: When your AI runs on their rails, your switching costs go up and your leverage goes down. Over time, your AI strategy starts to look a lot like their product roadmap.

When systems don’t share context, employees become the integration layer:

  • Searching across multiple tools for the same information

  • Reconciling conflicting versions of reality

  • Reexplaining objectives and constraints to each assistant

  • Manually moving content between applications

The result isn’t just technical debt. It’s a structural lock-in with painful migrations, rigid vendor relationships, and an AI stack that’s hard to reconfigure as strategy evolves. On top of that, every AI-enabled application comes with its own control surface that has separate permission models, audit logs, data handling terms, model usage policies, and integration requirements. Bleh.

Individually, each vendor can pass review. But collectively, they multiply complexity: a simple question like “Who had access to this data, through which tools, under which policies?” can take days to answer.

Net effect: AI spend rises, but flexibility, visibility, and control all go down. I’ll wager that in most cases the benefits won’t outweigh the costs.

Pitfall #3: ROI that doesn’t compound

The third pitfall isn’t that AI doesn’t work. It’s that most AI value today is local, not systemic. You get pockets of improvement—but they don’t add up to an AI-enabled business.

As more tools ship their own built-in agents, you end up with a swarm of narrow assistants that don’t really talk to each other—or only do so through slow, brittle proxies like APIs or MCP. Each agent has its own context, its own rules, its own protocol. Value gets trapped inside tools.

The result: Context doesn’t flow, and ROI doesn’t compound.

  • Marketing drafts faster, but those insights don’t automatically shape sales plays or product decisions.

  • Sales refines outreach, but none of that learning feeds directly into success, finance, or strategy.

  • Support resolves certain tickets more quickly, but the patterns don’t reliably inform roadmap, pricing, or risk.

You’re improving individual jobs to be done, but you’re not improving how the whole system thinks and operates.

I’m also skeptical that “one mega-super-agent that does everything” is where most real workflows will land. A more realistic pattern looks a lot like how organizations already work:

  • Many specialist agents, each with specific context, access, and protocols—like domain experts with clear roles and permissions.

  • One or a few orchestration agents that understand broader objectives and know how to coordinate those specialists, much like management assembling the right people, in the right sequence, to deliver an outcome.

That model can absolutely create compounding value, but only if the orchestration layer has rich, shared context and can move that context fluidly between agents and systems. When each agent is locked in a tool silo, orchestration becomes duct tape, not leverage.
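The specialist-plus-orchestrator pattern above can be sketched in a few lines. This is a purely illustrative toy, not a real agent framework; every name is invented, and a real orchestration layer would also handle permissions, memory, and tool access:

```python
# Hypothetical sketch of an orchestration layer coordinating specialist
# agents through shared context. All names are illustrative, not a real API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SharedContext:
    """Context that flows between agents instead of being re-explained to each."""
    facts: dict = field(default_factory=dict)

# A specialist agent: a narrow job that reads and writes the shared context.
SpecialistAgent = Callable[[SharedContext], None]

def support_agent(ctx: SharedContext) -> None:
    # Specialist: surfaces a pattern from resolved tickets.
    ctx.facts["top_ticket_pattern"] = "login failures after SSO change"

def product_agent(ctx: SharedContext) -> None:
    # Because context is shared, support's finding shapes the roadmap note
    # without anyone copying and pasting between tools.
    pattern = ctx.facts.get("top_ticket_pattern", "unknown")
    ctx.facts["roadmap_note"] = f"Prioritize fix for: {pattern}"

def orchestrate(objective: str, specialists: list[SpecialistAgent]) -> SharedContext:
    """The 'one brain': sequences specialists over a single shared context."""
    ctx = SharedContext(facts={"objective": objective})
    for agent in specialists:
        agent(ctx)
    return ctx

result = orchestrate("reduce churn", [support_agent, product_agent])
```

The design choice worth noticing is that value compounds only because both agents operate on the same `SharedContext`; give each agent its own siloed state and you are back to employees acting as the integration layer.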

A simple example that’s top of mind for me: Someone recently walked me through a beautifully automated workflow built with the latest AI tools. The only problem? That exact workflow had already been automated with traditional code years ago. We had just rebuilt a solved problem with a shinier, more expensive stack.

So now we’re paying more for a job that was already done.

When peers talk to me about collapsing gross margins in software, this is what I think about. It’s not that AI can’t deliver extraordinary ROI; it’s that we’re still in the experimentation phase. We’re layering new tools and agents on top of existing systems, without yet matching the cost of those workflows to the value they create.

Until organizations design for compounding—shared data, shared context, orchestrated agents—the default outcome is predictable: more AI line items, more activity, and only marginal improvements to unit economics and operating leverage.

What’s next?

It’s healthy to let teams experiment with AI. Bottom-up exploration is how people learn, how real use cases emerge, and how you discover unexpected value. But experimentation on its own isn’t a strategy. If we’re not deliberate, we end up with a maze of disconnected tools—agents that don’t talk to one another, overlapping features, and pockets of usage that never add up to real business impact.

The opportunity now is to steer that experimentation toward a platform strategy: one that we know can work across the entire business, not one that quietly creates new silos.

A platform like that should:

  1. Work where your people already work: It should plug into the systems your teams already live in—email, documents, CRM, project management, knowledge bases—rather than asking them to adopt yet another interface. The more it fits into existing workflows, the more adoption and impact you get without adding friction.

  2. Connect many agents through one brain: Most people won’t use a single super-agent for everything. They’ll use many specialized agents for different jobs. The key is having one coordinating layer that routes tasks, shares context, and enforces policies across those agents so they behave like a coherent system instead of a pile of tools.

  3. Centralize governance and observability: Policies, permissions, logging, and compliance can’t be scattered across dozens of vendors. A unified platform gives you one place to set guardrails, track usage, monitor risk, and ensure you’re scaling AI safely without drowning in oversight.

  4. Unify data and context, not just interfaces: It’s not enough to put a chat box on top of your tools. The platform needs to understand your customers, your content, and your workflows—and make that context securely available wherever work happens. That’s how you move from isolated AI features to compounding intelligence across the business.

  5. Make the economics visible and controllable: Leaders should be able to ask: Where is AI actually saving time? Where is it creating revenue? Where is it just adding cost? A platform model lets you see those answers clearly and direct investment toward the highest-impact use cases, instead of chasing one-off feature releases.
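Points 3 and 5 above can be made concrete with a small sketch: one central ledger where AI usage, cost, and time saved are recorded, so the economic questions have answers. This is a hypothetical illustration (class and field names invented; the $72/hour rate is simply $150,000 over ~2,080 working hours):

```python
# Hypothetical central ledger for AI usage: a single place to answer
# "where is AI saving time, and what does it cost?" (all names invented).
from collections import defaultdict

class AIUsageLedger:
    def __init__(self) -> None:
        self._entries: list[dict] = []

    def record(self, team: str, use_case: str, cost_usd: float,
               hours_saved: float) -> None:
        """Log one unit of AI work with its cost and estimated time saved."""
        self._entries.append({"team": team, "use_case": use_case,
                              "cost_usd": cost_usd, "hours_saved": hours_saved})

    def roi_by_use_case(self, hourly_rate: float = 72.0) -> dict:
        """Net value per use case: hours saved valued at hourly_rate, minus cost."""
        totals: dict = defaultdict(float)
        for e in self._entries:
            totals[e["use_case"]] += e["hours_saved"] * hourly_rate - e["cost_usd"]
        return dict(totals)

ledger = AIUsageLedger()
ledger.record("support", "ticket triage", cost_usd=500.0, hours_saved=40.0)
ledger.record("marketing", "draft copy", cost_usd=2_000.0, hours_saved=10.0)
# roi_by_use_case() now shows triage creating value and drafting losing money,
# which is exactly the visibility "tokens consumed" can never provide.
```

The ledger is trivial; the discipline it represents is not. Measuring in net value per use case, rather than spend, is what lets leaders redirect investment toward what compounds.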

Economically, I expect this to be a journey. In the near term, as we experiment more aggressively, you may see pressure on gross margin and opex as we invest in new capabilities and absorb the costs of learning. But over time, a true platform should drive us back to strong margins: lowering the cost to serve each customer, automating low-value work, and making each AI investment reusable across teams and workflows.

The companies that win won’t be the ones that simply spend the most on AI. They’ll be the ones that turn exploration into a coherent architecture—using a platform to connect agents, data, and workflows, and measuring success in outcomes rather than tokens consumed. That’s how AI stops being a line item in the budget and starts becoming a driver of durable enterprise value.


Making the case for an AI platform

Enterprises don’t need more AI tools. They need the infrastructure that allows AI to scale and compound value across the business.

To explore what this shift means for CIOs, CTOs, and AI leaders, download our two-page brief on the move from point tools to an enterprise AI platform.