Agentic AI Won’t Fail Because of AI.
It’ll Fail Because of Your Data.
The three infrastructure layers nobody wants to fix.
Everyone’s talking about AI agents.
The boardroom conversation has shifted from “should we explore AI?” to “when are we deploying agents?” The reasoning engines are powerful. The orchestration frameworks are maturing. The protocol layer — MCP, the Model Context Protocol — is consolidating under the Linux Foundation with backing from Anthropic, OpenAI, and Block. Gartner projects 40% of enterprise applications will embed task-specific AI agents by end of 2026, up from less than 5% in 2025.
And yet.
42% of AI initiatives failed in 2025. Up from 17% the year before. 78% of enterprises have agent pilots running. Only 14% have reached production scale.
That’s not a model problem. It’s not a prompt engineering problem. It’s not even a talent problem.
It’s a data infrastructure problem — and it sits three layers below the agent.
I’ve spent 15 years working with enterprise data in asset-intensive industries — manufacturing, energy, utilities, defence. The pattern I’m seeing now is one I’ve watched play out before: a new technology arrives promising transformation, and enterprises discover that the prerequisite isn’t the technology itself. It’s the plumbing underneath.
Here’s what that plumbing looks like, layer by layer.
Layer 1: Your systems can’t keep up with how agents think
An AI agent operates in continuous loops — observe, reason, act, observe again. It needs to query a system, evaluate the result, trigger an action in a second system, and verify the outcome in a third. All within a single reasoning cycle.
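That loop can be sketched in a few lines. This is an illustration, not a real framework: the system names, the stubbed `observe` and `act` helpers, and the threshold are all hypothetical stand-ins for MES/EAM/ERP client calls.

```python
# A minimal observe-reason-act-verify cycle. All calls are stubs standing in
# for real MES/EAM/ERP clients; the names and values are hypothetical.
def observe(system: str, query: str) -> dict:
    """Query a backend system and return its current state (stubbed)."""
    return {"system": system, "query": query, "value": 42}

def act(system: str, command: str) -> dict:
    """Trigger an action in a backend system (stubbed)."""
    return {"system": system, "command": command, "status": "accepted"}

def agent_cycle():
    """One reasoning cycle spanning three systems."""
    reading = observe("MES", "line_3_quality")       # observe
    if reading["value"] > 40:                        # reason: deviation?
        result = act("EAM", "raise_work_order")      # act
        check = observe("ERP", "work_order_status")  # observe again / verify
        return result["status"], check
    return "no_action", None

status, check = agent_cycle()
```

The point of the sketch: every branch of the reasoning depends on a live response from a different backend, which is exactly what batch-oriented systems cannot provide.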
Most manufacturing backends don’t support this.
Consider what happens when an agent tries to execute a seemingly simple workflow: detect a deviation in production quality, check the maintenance history of the equipment involved, verify spare part availability, and raise a work order. That’s four systems — MES, EAM, ERP inventory, and the work order module — each needing to respond within the agent’s reasoning window.
Legacy platforms — SAP ECC, AS/400, older Maximo instances — operate in batch cycles. No real-time triggers. No event listeners. No execution endpoints. The agent can observe, but it can’t act.
Then there’s latency. Legacy systems average 3.1 seconds per API response. Agentic operations need 0.4 seconds. That’s not a tuning problem — it’s a fundamental architectural mismatch. An agent completing a multi-step workflow across three or four systems accumulates latency that breaks the reasoning loop entirely.
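The arithmetic is worth making explicit. The latency figures are the ones quoted above; the four-system workflow and the per-cycle budget framing are an illustration.

```python
LEGACY_LATENCY_S = 3.1   # average legacy API response, per the figure above
TARGET_LATENCY_S = 0.4   # what an agentic operation needs, per the figure above
SYSTEMS_IN_WORKFLOW = 4  # MES, EAM, ERP inventory, work order module

# End-to-end latency the agent actually accumulates vs what its loop can absorb.
accumulated = LEGACY_LATENCY_S * SYSTEMS_IN_WORKFLOW
budget = TARGET_LATENCY_S * SYSTEMS_IN_WORKFLOW

print(f"accumulated: {accumulated:.1f}s vs budget: {budget:.1f}s "
      f"({accumulated / budget:.1f}x over)")
```

A sequential four-call workflow lands at 12.4 seconds against a 1.6-second budget, which is why no amount of endpoint tuning closes the gap.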
And the documentation gap makes it worse. 75% of enterprise APIs have drift between their documented specification and what actually runs in production. For a human developer, that’s an annoyance — you learn the workarounds. For an autonomous agent making decisions at speed, an undocumented response means a failed call, a retry loop, or worse — a confidently wrong action triggered against your production ERP.
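One defensive pattern against drift, sketched here with stdlib-only Python: validate every response against the documented contract before the agent is allowed to act on it, and fail closed on any mismatch rather than retrying blindly. The endpoint, field names, and contract are hypothetical.

```python
# Documented contract for a hypothetical work-order endpoint:
# every response must carry these fields with these types.
DOCUMENTED_CONTRACT = {"order_id": str, "status": str, "created_at": str}

def validate_response(payload: dict) -> list[str]:
    """Return drift findings; an empty list means the response matches the spec."""
    findings = []
    for field, expected_type in DOCUMENTED_CONTRACT.items():
        if field not in payload:
            findings.append(f"missing documented field: {field}")
        elif not isinstance(payload[field], expected_type):
            findings.append(f"{field}: expected {expected_type.__name__}, "
                            f"got {type(payload[field]).__name__}")
    return findings

# A drifted production response: status silently became a numeric code,
# created_at was dropped entirely.
drifted = {"order_id": "WO-1042", "status": 200}
problems = validate_response(drifted)

# Fail closed: surface the drift instead of letting the agent act on it.
safe_to_act = not problems
```

The human workaround ("you learn which fields to ignore") becomes an explicit, machine-checkable gate in front of the agent.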
53% of executives say legacy integration failures directly derailed their AI initiatives. Gartner projects over 40% of agentic AI projects will be abandoned by 2027. Not because the AI failed. Because enterprises are forcing modern autonomy onto decades-old infrastructure.
Layer 2: You have terabytes of logs — and almost none of it is useful to an agent
This is the layer nobody’s talking about, and I think it’s the real sleeper issue.
Every manufacturer has logs. Terabytes of them. Application logs, system logs, error logs, audit trails. They’re designed for one purpose: debugging. An engineer queries them when something breaks.
But agents don’t need debug logs. Agents need business event streams — structured records that capture what happened, why it happened, and what the outcome was.
There’s a meaningful architectural distinction here. An application log tells you that an API call was made at 14:32:07 and returned a 200 status code. A business event tells you that a purchase order was created for Vendor X, triggered by a reorder point threshold breach on Part Y, approved by the procurement desk based on the vendor’s lead time history, and the expected delivery is 14 days.
The first is useful for an engineer troubleshooting a failed transaction. The second is useful for an agent reasoning about whether to approve the next order, flag the vendor for review, or suggest an alternative source because that vendor’s last three deliveries were late.
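The contrast is easiest to see side by side. A sketch using the purchase-order example above; the field names and schema are hypothetical, the structure is the point.

```python
import json
from dataclasses import dataclass, asdict

# What an application log gives you: a string an engineer can grep.
app_log_line = "2025-06-14T14:32:07Z POST /api/v1/purchase-orders 200 118ms"

# What an agent needs: the what, the why, and the outcome, as structured data.
@dataclass
class BusinessEvent:
    event_type: str  # what happened
    trigger: str     # why it happened
    subject: dict    # what it happened to
    outcome: dict    # what the result was

event = BusinessEvent(
    event_type="purchase_order.created",
    trigger="reorder_point_breach",
    subject={"vendor": "Vendor X", "part": "Part Y"},
    outcome={"approved_by": "procurement_desk", "expected_delivery_days": 14},
)

# Serialised, this is a record an agent can query and reason over.
payload = json.dumps(asdict(event))
```

The log line answers "did the call succeed?"; the event answers "should the next order be approved?". Only the second supports autonomous reasoning.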
Most manufacturers have invested heavily in the first category — observability stacks, log aggregation, APM tools, maybe a SIEM. Almost none have built the second. The result is a paradox: organisations are drowning in data about how their systems behave, but have almost no structured data about what their business actually did and why.
We’ve built event architectures for enterprise platforms — structured business events with intent, context, and outcome captured at the point of action, streamed into analytical stores designed for agent queries. The difference is night and day. An agent with access to structured business events can reason about patterns, anomalies, and decisions. An agent with access to application logs can tell you that the server is up.
Until this layer is fixed, an agent deployed into your manufacturing environment is reasoning in the dark. It can call your APIs, but it can’t understand the context of what it’s looking at.
Layer 3: The data your agent is reading is wrong — and it doesn’t know that
This is the layer that connects everything, and it’s the one I’ve lived inside for 15 years.
MCP is consolidating as the standard protocol for connecting agents to enterprise data. It’s real infrastructure — not vaporware. The specification is maturing, security is being hardened, and the ecosystem is growing fast.
But here’s what nobody wants to say out loud: MCP connecting to ungoverned data just accelerates garbage.
Consider what lives inside a typical manufacturing ERP. A single spare part — a bearing, a gasket, a valve — might exist under four different descriptions, three different manufacturer part numbers, and two conflicting unit-of-measure entries. A bearing housing assembly might be catalogued as “BRG HSG ASSY 6205-2RS,” “BEARING, DEEP GROOVE, 25MM, SEALED,” “HOUSING ASSY — PUMP SIDE,” and a fourth record with just a vendor part number and no description at all. All four are the same part.
A human maintenance planner learns to navigate this over years. They know which descriptions map to which physical parts. They know that Vendor A’s part number cross-references to Vendor B’s. They carry an entire disambiguation layer in their head.
An agent doesn’t.
Deploy an agent on top of this data, and you get confident, fast, wrong decisions. Duplicate purchase orders. Incorrect inventory positions. Maintenance work orders referencing the wrong specification. Predictive maintenance models trained on conflicting equipment hierarchies. Each decision made at machine speed, compounding before anyone notices.
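The bearing example above can be made concrete. The descriptions are taken from the example; the record IDs, vendor part numbers, and the golden-record cross-reference are hypothetical, standing in for a governed master-data layer.

```python
# The four catalogue records for the same physical part, per the example above.
records = [
    {"id": "MAT-001", "desc": "BRG HSG ASSY 6205-2RS", "vendor_pn": "A-6205"},
    {"id": "MAT-002", "desc": "BEARING, DEEP GROOVE, 25MM, SEALED", "vendor_pn": "A-6205"},
    {"id": "MAT-003", "desc": "HOUSING ASSY - PUMP SIDE", "vendor_pn": "B-88410"},
    {"id": "MAT-004", "desc": "", "vendor_pn": "B-88410"},  # vendor PN only
]

# What an agent with no disambiguation layer sees: four distinct parts.
naive_parts = {r["desc"] for r in records}

# The planner's mental cross-reference, made explicit as governed master data:
# every vendor part number resolves to one golden record (IDs hypothetical).
GOLDEN_RECORD = {"A-6205": "PART-6205-2RS", "B-88410": "PART-6205-2RS"}

resolved = {GOLDEN_RECORD[r["vendor_pn"]] for r in records}
```

Without the cross-reference the agent counts four parts and orders stock for each; with it, all four records collapse to one. That mapping is exactly the disambiguation layer the planner carries in their head, and it has to exist as data before an agent can use it.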
56% of organisations cite poor data quality as a major obstacle to AI adoption. And McKinsey’s recent research lands the point precisely: for every $1 spent developing an AI model, you need to spend $3 on the surrounding infrastructure and change management. The enterprise conversation has the ratio inverted. Most of the investment goes into the model. Almost none goes into the data the model will reason against.
The data quality problem isn’t new. What’s new is that agentic AI makes the consequences immediate and automated. When a human encounters dirty data, they slow down, cross-reference, ask a colleague. When an agent encounters dirty data, it acts — because that’s what agents do.
What “agentic-ready” actually means
The enterprise conversation about AI agents is backwards.
The question isn’t “which agent framework should we adopt?” or “how do we fine-tune the model for our domain?” Those are important questions — but they’re Layer 4 questions. They assume Layers 1 through 3 are solved.
Being agentic-ready means something specific:
Your systems can respond in real-time, with documented, reliable APIs that behave the way their specifications say they do. Your business events are captured as structured, queryable records — not buried in application logs designed for debugging. And your master data is governed at the point of entry — not cleaned up after the pollution has already spread through every downstream system.
The organisations that will succeed with agentic AI in the next two years aren’t the ones with the best models or the most sophisticated prompt engineering. They’re the ones that did the unglamorous infrastructure work first — and did it fast enough to matter.
Manufacturing is at an inflection point: 77% adoption and climbing, with agents being embedded into production workflows across predictive maintenance, procurement, and quality. The ones who get the plumbing right will pull ahead. The ones who bolt agents onto broken data will join the 40% that Gartner says won’t make it.
The question isn’t whether your enterprise needs AI agents.
The question is whether your data infrastructure is ready for an agent to reason against it.
Fix the plumbing. Then deploy the agent.

