The Layer SAP and IBM Cannot Build From Inside

Why the platforms cannot solve this from inside.

A manufacturing CIO showed me a remediation proposal for his SAP Material Master two years ago.

1.8 million records. Eleven plants. Eighteen months. Eight figures.

He signed it.

Last quarter, his successor — three CIOs later — showed me a near-identical proposal. Same SI. Same scope. Same eleven plants. Same eight figures.

The records had drifted back. Not partially. Substantially. Back to where they were before the first engagement, before the second, before any of it.

This is not a story about negligence. The first project was executed competently. So was the second. So will the third one be, when it arrives in 2027.

It is a story about an architectural position the platforms are in — one they cannot exit from inside their own products.

The MRO data quality problem on SAP and IBM Maximo will not be solved by SAP or IBM.

Not because they have not tried.

Because they architecturally cannot.

This is not a critique. It is a recognition of a structural position they are in — and a question about who builds the layer they architecturally cannot build.

The Schemas That Won the Enterprise

SAP’s Material Master traces back to R/2 in the early 1980s. IBM Maximo’s asset and item schemas to 1985.

These were excellent designs for the problems of their era. MRO data volumes measured in tens of thousands of records. Classification meaning a four-digit commodity code. “Data quality” not yet a discipline because the volumes did not yet demand one.

These schemas did not fail.

They succeeded so completely that they became impossible to change.

That distinction is the entire thesis of this piece.

The Material Master is load-bearing infrastructure for the global manufacturing economy. Maximo’s asset and item models underpin maintenance operations at virtually every major utility, mining company, oil and gas operator, defense contractor, and aerospace manufacturer in the world.

Billions of transactions per year run through these primitives.

That is the credit line. Without it, the rest of the argument reads as critique. With it, it reads as architectural diagnosis — which is what it actually is.

The Customization Layer That Locked It In

SAP customers run, on average, hundreds of Z-tables and thousands of customizations against the Material Master. Custom fields. Custom classification structures. Custom approval workflows.

Maximo customers have customized item attributes, asset specifications, classification hierarchies, work order data models. The customizations vary by industry. Utilities customize differently than oil and gas. Oil and gas customizes differently than aerospace.

But every customer of meaningful size has built on the schema.

This was not a mistake. This was the platform’s promise. Extensibility was the differentiator that won the category. SAP and IBM invited customers to build on the foundation, and customers did, because that is what enterprise software is supposed to allow.

But every customization is now a constraint on refactoring.

SAP cannot redesign the Material Master without breaking thirty years of customer code. IBM cannot redesign Maximo’s classification model without breaking the customizations of every utility and refinery and mine that built on it.

The platforms are architecturally captive to their own success.

This is the innovator’s dilemma at the schema layer, not the market layer. Christensen wrote about new entrants disrupting incumbents commercially. The MRO data version is harder, because schemas do not pivot.

The platforms can see the next wave clearly. They have white papers and analyst briefings and product roadmaps about it.

They still cannot move. Because their installed base is the constraint that defines them.

No platform of SAP’s or IBM’s scale can refactor the foundational schemas of its primary product. That is not a strategic choice. That is a fact of installed-base architecture.

The Bolt-On Pattern Is the Only Move

The platforms have responded the only way they architecturally can. By adding layers above the schema rather than refactoring it.

SAP MDG launched in 2009 — the right move given the constraint, because it is a governance layer above the Material Master rather than a redesign of it. IBM’s Asset Information Management for Maximo follows the same pattern. Oracle Fusion Cloud SCM’s MDM module, the same.

SAP Datasphere. Watsonx for Maximo. Joule. The recent wave of “AI-native” announcements.

Every one of them is a layer on a layer on the original schema.

The bolt-on pattern is not a failure of imagination. It is the only architecturally available move when the underlying schema cannot be touched.

Credit where it is due. MDG is a sophisticated piece of engineering. So is AIM. They do what they can do given the foundation they are sitting on. The teams behind them are building real software solving real problems.

But the consequence is unavoidable.

Every layer inherits the primitives underneath. MDG can enforce workflow on Material Master, but it cannot fix the fact that Material Master was not designed for semantic deduplication, attribute completeness rules, manufacturer cross-references, or classification taxonomies with the depth that modern asset-intensive operations now require.

The governance is real. The foundation it governs is the foundation it always was.
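To make the gap concrete, here is a minimal illustrative sketch of what semantic deduplication requires. The part numbers, descriptions, and the trivial synonym map are hypothetical, not any vendor's data model; the point is only that a schema limited to short free-text description fields can support byte-level comparison, while catching duplicates requires a normalization layer the schema never anticipated.

```python
# Illustrative sketch: two records describing the same part.
# An exact string comparison (all a description field supports) misses
# the match; token normalization with a synonym map catches it.
import re

# Hypothetical abbreviation expansions; real MRO dictionaries run to thousands.
SYNONYMS = {"BRG": "BEARING"}

def normalize(description: str) -> frozenset:
    """Reduce a free-text MRO description to a canonical token set."""
    tokens = re.findall(r"[A-Z0-9]+", description.upper())
    return frozenset(SYNONYMS.get(t, t) for t in tokens)

a = "BEARING, BALL, SKF 6205-2RS"
b = "SKF 6205 2RS BALL BRG"

print(a == b)                        # False: exact match fails
print(normalize(a) == normalize(b))  # True: semantic match succeeds
```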

Which brings us to the AI moment.

Every “AI-native” announcement from the platform vendors in the last eighteen months is an agent layer on a governance layer on a 1980s schema.

The agents are good. The governance is good.

The schema is the schema.

AI does not fix structural issues. It amplifies them. The cleaner the foundation, the better AI gets. The more compromised the foundation, the more confidently wrong AI becomes.

Right now, the most ambitious AI initiatives in enterprise software are running on the most structurally compromised data foundations in enterprise software.

That is not a criticism of anyone’s intentions. That is the situation.

Every CIO running an AI initiative on top of SAP MRO data or Maximo asset data is discovering the same wall. The AI is good. The agents work. The orchestration is sound. And the underlying records were never designed for what is now being asked of them.

What Comes Next

The next wave of MRO data infrastructure will not come from refactoring the platforms.

It architecturally cannot.

The schemas that won the enterprise are the schemas that won the enterprise. They will continue to be — for decades more — because the installed base demands it and because no realistic alternative exists at the platform layer.

The next wave will come from a layer built natively for the problem the platforms inherited but cannot solve. A layer designed from the ground up for semantic deduplication, prevention-first governance, multi-master federation, AI-grade provenance, and the kind of taxonomic depth that modern MRO operations actually require.

Designed without the constraint of compatibility with a 1980s schema. Because that constraint is precisely what blocks the work.
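What "prevention-first" means can be sketched in a few lines. The class names, required attributes, and record format below are hypothetical, invented for illustration; the design point is that an incomplete record is rejected at creation time, rather than admitted and remediated years later.

```python
# Illustrative sketch of prevention-first governance: validate a
# candidate record against class-specific completeness rules before
# it is ever created. Rules and fields are hypothetical examples.
REQUIRED_ATTRIBUTES = {
    "BEARING": {"manufacturer", "mfr_part_number", "bore_mm", "type"},
    "VALVE": {"manufacturer", "mfr_part_number", "size_in", "pressure_class"},
}

def validate_at_entry(record: dict) -> list[str]:
    """Return attributes missing for the record's class; empty means admissible."""
    required = REQUIRED_ATTRIBUTES.get(record.get("class", ""), set())
    return sorted(required - record.keys())

candidate = {"class": "BEARING", "manufacturer": "SKF",
             "mfr_part_number": "6205-2RS"}
missing = validate_at_entry(candidate)
print(missing)  # ['bore_mm', 'type'] — record blocked until complete
```

A remediation project runs this logic once, after the fact; a prevention-first layer runs it on every create and change, so the records cannot drift back.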

That layer will not be built inside SAP or IBM.

The same customization-base constraint that prevents schema refactoring also constrains internal greenfield builds. Any new internal module has to interoperate with the existing schema, and the moment it does, it inherits the primitives anyway.

This is not a hypothesis. It is an observation about every internal modernization initiative in enterprise software for the past twenty years.

It will be built — and is being built — outside the platforms. By infrastructure companies whose entire architecture is designed around the MRO data problem rather than around installed-base compatibility.

The interesting question is not whether such a layer is needed.

It is. Every CIO running an AI initiative on legacy MRO data is discovering it the hard way.

The question is what the relationship looks like between that layer and the platforms it complements.

History suggests three patterns.

Deep partnership — the Salesforce-Snowflake model. Two infrastructure layers operating independently but designed to interoperate cleanly.

Deep integration — the SAP-Concur or IBM-Red Hat model. A specialty capability acquired but operated semi-independently because its DNA is too different from the parent.

Absorption — the specialty layer acquired and folded entirely into the platform’s roadmap, becoming the foundation for the next generation of the parent product.

All three patterns have happened in adjacent enterprise categories.

None has happened yet in MRO master data governance.

The pattern is consistent across enterprise software history. Specialty depth at scale rarely emerges inside platforms. It emerges adjacent to them, and is then absorbed.

It will. The platforms will need owned depth in this layer — not partnered, owned — once the AI initiatives running on top of their schemas hit the structural wall they are now approaching.

That is the conversation worth having.

Not whether the platforms have failed at MRO data quality. They have not. They are simply in an architectural position that prevents them from solving it from inside.

The question is how the layer that solves it gets built. Who builds it.

And which platform owns it when the structural wall is reached.

This is the first article in a three-part series examining the structural reasons the MRO master data problem has persisted for two decades despite billions in spend, repeated remediation cycles, and now a wave of AI investment.

About the Author

Raghu Vishwanath

Raghu Vishwanath is Managing Partner at Bluemind Solutions. He has spent fifteen years building MRO master data infrastructure for asset-intensive industries.