When AI Meets Three Decades of MRO Knowledge

Same records. Different intelligence.

March 17, 2026

In the last article, we fed three MRO records into generic AI and watched it fail.

Same bearing. Same valve. Same gasket. Ninety-plus percent confidence on output that was fabricated, hallucinated, or inferred without disclosure.

Same records. Different result.

The Bearing — Again

Part Number:    6205-BEARING
Description:    6205 2RS BEARING
Manufacturer:   [blank]
MPN:            [blank]

Same record. Same blank fields. Same compressed description created during a 2 AM emergency five years ago.

Domain-grounded AI output:

Classification:       BEARING, BALL, RADIAL
Confidence:           HIGH
Extracted Attributes:
  Series:             6205         [extracted]
  Seal Type:          Rubber Seal  [extracted - from "2RS"]
  Bearing Type:       Deep Groove  [inferred - standard for 6200 series]
Flagged:
  Manufacturer:       NOT EXTRACTED — not present in description
  MPN:                NOT EXTRACTED — not present in description
  Bore Diameter:      INFERRED ONLY — 25mm is standard for 6205
                      but cannot be verified from description alone
Note:                 "2RS" is a widely used seal designation
                      indicating rubber seal on both sides.
                      Equivalent designations: 2RS1 (SKF legacy),
                      2RSH (SKF current), 2RSR (FAG), DDU (NSK),
                      LLU (NTN). Manufacturer identification
                      required for precise cross-reference.

The classification is the same. The difference is everything else.

What the generic model presented as extracted fact, domain-grounded AI separates into what it knows and what it doesn’t. “2RS” means rubber seal — that’s in the text. The 6200 series is standard deep groove — that’s domain knowledge applied transparently. The bore diameter is probably 25mm — but it’s flagged as inference, not presented as measurement.

The manufacturer field isn’t filled with a guess. It’s left blank with an explanation of why: the description doesn’t contain enough information to determine the manufacturer, and here are the sealing designation variations across manufacturers that would help identify it.

No hallucinated part number. No fabricated manufacturer. The AI told you what it knew, what it inferred, and what it couldn’t determine.

The Valve — Resolution

Part Number:    VLV-4812
Description:    GATE VALVE 4IN 150# CS FLGD
Manufacturer:   [blank]
MPN:            [blank]

Domain-grounded AI output:

Classification:       VALVE, GATE
Confidence:           HIGH
Extracted Attributes:
  Size:               4 inches     [extracted — "4IN"]
  Pressure Class:     ANSI Class 150  [extracted — "150#"]
  Body Material:      Carbon Steel [extracted — "CS"]
  End Connection:     Flanged      [extracted — "FLGD"]
Flagged:
  Bore Type:          NOT EXTRACTED — not specified in description.
                      Gate valves are available in full bore and
                      reduced bore configurations. This is a
                      safety-relevant attribute that must be
                      verified from engineering documentation.
  Manufacturer:       NOT EXTRACTED — not present in description
  MPN:                NOT EXTRACTED — not present in description
  Material Grade:     NOT EXTRACTED — "CS" indicates carbon steel
                      but does not specify grade (A216 WCB, A105,
                      etc.). Grade determines temperature and
                      pressure service limits.

Every attribute that was in the description is extracted. Every attribute that wasn’t is flagged as missing — with a specific explanation of why it matters. The bore type isn’t guessed. The material grade isn’t assumed. The manufacturer isn’t fabricated.

The generic model gave you a complete-looking record with hidden fictions. Domain-grounded AI gives you an honest record with visible gaps — and tells you exactly what needs to be resolved and why.

For a safety-critical valve, “I don’t know the bore type, and here’s why it matters” is infinitely more valuable than “Full Bore” with a 94% confidence score.

The Gasket — Honesty

Part Number:    GSK-0091
Description:    GASKET FLANGE 4IN RF 150#
Manufacturer:   [blank]
MPN:            [blank]

Domain-grounded AI output: 

Classification:       GASKET, FLANGE
Confidence:           MEDIUM
Extracted Attributes:
  Size:               4 inches        [extracted — "4IN"]
  Flange Face:        Raised Face     [extracted — "RF"]
  Pressure Class:     ANSI Class 150  [extracted — "150#"]
Flagged:
  Gasket Type:        NOT EXTRACTED — description does not specify
                      spiral wound, sheet, ring joint, or other type.
                      Gasket type determines pressure/temperature
                      suitability and must be verified.
  Material:           NOT EXTRACTED — no material specified. For 150#
                      RF service, common materials include graphite/SS
                      spiral wound (ASME B16.20) and compressed fiber
                      sheet, but selection depends on service conditions
                      (temperature, media, cycling).
  Thickness:          NOT EXTRACTED — not present in description
  Manufacturer:       NOT EXTRACTED
  MPN:                NOT EXTRACTED
Confidence Note:      Classification as GASKET, FLANGE is certain.
                      However, insufficient attributes for procurement
                      specification. This record requires human review
                      to determine gasket type, material, and service
                      conditions.

The generic model filled every blank and returned 91% confidence. Domain-grounded AI extracted three attributes, flagged five as missing, and gave you a medium confidence score with a clear explanation: this record can be classified but cannot be fully specified from the description alone.

The gasket’s material compatibility depends on what’s flowing through the pipe, at what temperature, under what pressure cycling. That information isn’t in the part description. It never was. An AI that claims to know it is fabricating.

An AI that tells you it can’t know it — and tells you specifically what additional information is needed — is an AI you can trust.

How Humans and AI Work Together

Here’s what changes when AI understands its own limitations.

Before: a maintenance data researcher receives a spreadsheet of 10,000 records. Every record requires the same process — search the description, identify the part, look up the manufacturer catalog, verify specifications, classify per taxonomy, extract attributes, fill the template. The work is research-intensive, repetitive, and slow. A skilled researcher processes 40-60 records per day. At that rate, 10,000 records take months.

After: the AI processes the same 10,000 records. More than 60% — the records with clear descriptions, identifiable part numbers, and sufficient specification data — are classified and attributed without human intervention. The AI is confident because the evidence supports confidence. These records pass through automated validation that checks physical consistency, cross-references known catalogs, and verifies taxonomy compliance.

The remaining records arrive on the researcher’s screen differently than before. They aren’t raw — they’re pre-analyzed. The AI has already classified them to the best of its ability, extracted what it could, and flagged specifically what it couldn’t determine and why.

A researcher’s morning looks different.

Instead of: “Here are 60 records. Research each one from scratch.”

It becomes: “Here are 15 records the AI couldn’t fully resolve. For each one, here’s what the AI determined, here’s what it couldn’t, and here’s why it needs your judgment.”

The researcher isn’t doing research anymore. They’re adjudicating. Confirming the AI’s work where it’s right. Correcting it where it’s close. Providing the expertise where the AI reached the limit of what text analysis can determine.

The same researcher who processed 40-60 records per day now resolves 200-400 — because the nature of the work has changed. The AI handles volume. The human handles judgment.
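
In code, that routing is a simple triage rule: confident, unflagged records flow to the automated path, and everything else goes to the human queue carrying the AI's analysis with it. A minimal sketch, with confidence labels and flag lists that are illustrative assumptions rather than the product's actual schema:

```python
# Sketch of confidence-based triage: confident records flow to automated
# validation; flagged or uncertain records go to a human adjudication queue.
# Record fields and thresholds are illustrative assumptions.

def triage(records):
    automated, human_queue = [], []
    for rec in records:
        if rec["confidence"] == "HIGH" and not rec["flags"]:
            automated.append(rec)  # automated validation path
        else:
            # Humans adjudicate: they see what was determined and why the
            # AI stopped, instead of researching from scratch.
            human_queue.append(rec)
    return automated, human_queue

records = [
    {"id": "VLV-4812", "confidence": "HIGH",
     "flags": ["bore type not specified"]},
    {"id": "BRG-1001", "confidence": "HIGH", "flags": []},
    {"id": "GSK-0091", "confidence": "MEDIUM",
     "flags": ["gasket type", "material"]},
]
auto, queue = triage(records)
```

Note that high classification confidence alone is not enough: a valve classified with high confidence but a flagged safety-relevant attribute still routes to a human.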

The Economics

The traditional approach to MRO data quality is labor-intensive by design. Every record touches a human. Classification, attribute extraction, manufacturer verification, taxonomy mapping — each step requires trained personnel working through records one at a time. For a catalog of 200,000 records, you’re looking at months of work by a team of specialists. The cost runs into millions.

AI changes the cost structure fundamentally — but only if the AI is accurate enough to trust.

When more than 60% of records process through automated classification and extraction without human intervention, the economics shift. The remaining records — the ones that need human expertise — arrive pre-analyzed, reducing the time per record from research to adjudication. What used to take months now takes weeks. What used to require large-scale specialist teams now costs a fraction of the traditional approach.
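
The arithmetic behind that shift is easy to check. A back-of-envelope sketch using the article's rough figures (midpoints of the 40-60 and 200-400 records-per-day ranges, 60% automation), purely for illustration:

```python
# Back-of-envelope timeline comparison; all rates are the article's
# rough figures, used here only for illustration.
TOTAL = 10_000

# Before: every record researched from scratch at ~50/day (midpoint of 40-60).
days_before = TOTAL / 50            # 200 working days

# After: >60% auto-processed; the remainder adjudicated at ~300/day
# (midpoint of 200-400), because records arrive pre-analyzed.
human_share = int(TOTAL * 0.4)      # 4,000 records for human review
days_after = human_share / 300      # roughly 13 working days
```

Months to weeks, exactly as the economics above describe, and the gap widens as the automation rate rises.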

But here’s what makes the economics real, not theoretical: the validation layer. AI that produces output you have to re-check manually hasn’t saved you anything. It’s just moved the work from “research” to “quality assurance” — which is equally labor-intensive. The economics only work when the automated output is trustworthy enough to flow through without human review.

That requires two things: accuracy that’s verifiable, and honesty about limitations. When the AI tells you “I’m confident” and it’s right 95%+ of the time, you can trust the automated path. When it tells you “I need help” and it’s right about needing help, you can focus human effort where it matters.

That’s the economic shift: not replacing humans with AI, but transforming what humans do — from research to judgment.

Why Domain Expertise Is the Moat

AI models improve quarterly. Processing power increases. Costs decrease. Within a few years, every vendor in the MRO space will claim AI-powered data quality. Many already do.

But AI without domain context will keep producing the same confident errors from the first article in this series. The generic model will keep hallucinating manufacturer part numbers. Keep fabricating bore types. Keep presenting inferred data as extracted fact.

Because the AI isn’t the differentiator. The knowledge behind the AI is the differentiator.

Three decades of knowing what makes a valve specification safety-critical. Why two part numbers are functionally equivalent. How a bearing’s sealing designation varies across manufacturers. What “CS” means in one context and what it means in another. Which attributes are cosmetic and which are life-safety.

This knowledge doesn’t live in a model. It lives in the people who build the model. Who tell it what matters. Who define the rules it validates against. Who determine when confidence is warranted and when it isn’t.

AI is the accelerant. Domain expertise is the fuel.

Without the fuel, you just have a very fast way to make the same mistakes.

One question remains: how does this AI handle the records where even it isn’t sure? That’s the next article.

About the Author

Raghu Vishwanath

Raghu Vishwanath is Managing Partner at Bluemind Solutions and serves as CTO at KeyZane, a financial inclusion platform live in Central and West Africa. In more than 30 years of software engineering, he has built AI systems that know the difference between what they can determine and what they can’t — because in MRO, that difference is the whole game.