What the GP Sees That the Surgeon Misses

Diagnosis before procedure. The right question before the brilliant answer.

February 8, 2026

The patient arrives with a persistent rash.

The dermatologist examines it carefully. Considers the pattern, the texture, the distribution. Autoimmune possibilities come to mind. A biopsy is scheduled. Bloodwork ordered.

The general practitioner asks a different question.

“Changed anything recently? New detergent? Different soap?”

New laundry detergent. Three weeks ago. Switch back, rash clears.

The dermatologist wasn’t wrong. The dermatologist was narrow.

The Specialist’s Brilliance

Let’s be clear: specialists are valuable.

The surgeon who has performed a thousand hip replacements sees things a generalist never could. The tax attorney who lives inside the code catches nuances that a business lawyer would miss. The machine learning engineer who has tuned a hundred models knows which parameters matter before running experiments.

Depth creates perception. The specialist sees more detail within their domain than any generalist could hope to match.

This is genuine expertise. It took years to develop. It commands a premium.

None of that is changing.

The Specialist’s Blind Spot

But depth creates something else too.

Peripheral blindness.

The deeper you go into a domain, the more the domain becomes your frame. Everything starts to look like a problem you’re equipped to solve. The dermatologist sees skin pathology. The surgeon sees surgical candidates. The database architect sees schema problems.

This isn’t a character flaw. It’s a structural consequence of specialization.

You were trained to see deeply within boundaries. You were not trained to question whether the boundaries are right.

The dermatologist sees the rash. The GP sees the patient’s life.

Same person in the room. Different questions being asked.

Diagnosis Before Procedure

Here’s what gets lost in the celebration of expertise: the procedure is not the hard part.

The hard part is knowing which procedure. Or whether any procedure at all.

Diagnosis precedes treatment. The question precedes the answer. Get the diagnosis wrong, and the most brilliant procedure in the world solves the wrong problem.

The GP’s value isn’t knowing more. It’s knowing which question to ask.

Is this rash autoimmune? Allergic? Contact dermatitis? Stress-related? Something in the environment? The GP holds all of these possibilities simultaneously, looking for the pattern that fits.

The specialist holds one possibility deeply.

Both are necessary. But we’ve spent a century building systems that overproduce specialists and undervalue the diagnostic function.

A Brilliant Answer to the Wrong Question

You’ve seen this in your own work.

The engineering team that builds exactly what was specified — and misses what was actually needed. The architecture that optimizes for the wrong constraint. The feature that solves a problem nobody has.

Brilliant execution. Wrong target.

A brilliant answer to the wrong question isn’t just unhelpful. It’s worse than no answer. It consumes resources. It creates false confidence. It makes the real problem harder to see because everyone believes it’s been solved.

The specialist’s risk isn’t incompetence. It’s misdirection.

They’re so good at answering that nobody stops to question whether they’re answering the right thing.

AI Accelerates the Surgeon

Now add AI to the picture.

AI is making specialists faster. Dramatically faster.

Robotic surgery. Automated image analysis. Code generation. Legal document review. Financial modeling. The tasks that defined specialist value — the procedures — are being augmented, accelerated, and in some cases automated.

The surgeon’s hands are becoming more precise. The radiologist’s pattern recognition is being enhanced. The developer’s implementation speed is multiplying.

Within domains, AI is a force multiplier for specialists.

But AI doesn’t make specialists see wider. It makes them execute faster within the same boundaries.

The dermatologist with AI can analyze more images, cross-reference more conditions, process more patients. The dermatologist with AI still sees dermatology problems.

AI Can’t See the Whole Patient

Diagnosis is different.

Diagnosis requires integrating signals across domains. It requires context that doesn’t fit in structured data. It requires noticing the hesitation before the patient answers, the detail that doesn’t match the story, the pattern that only emerges when you’re not looking for anything specific.

The GP asks: “Changed anything recently?”

That question doesn’t come from a decision tree. It comes from having seen thousands of patients across dozens of problem types and developed an instinct for where the real issue hides.

AI can process inputs. It cannot wonder.

AI can match patterns within training data. It cannot ask whether the framing is wrong.

AI can accelerate the procedure. It cannot question whether the procedure should happen.

Platform Engineering as Diagnostic Work

Most engagements arrive with a specification.

The client knows what they want. They’ve written it down. They’re ready to discuss timelines and costs.

The specification is the symptom.

Our job — the part that doesn’t show up in statements of work — is diagnosis. What’s actually needed here? Is the specification pointing at the real problem? Or is it a solution that made sense when someone wrote it, based on assumptions that may not hold?

We’ve watched specifications that seemed clear turn out to be solutions to the wrong problem. We’ve seen architectures designed for requirements that weren’t the actual requirements. We’ve inherited platforms that optimized beautifully for constraints that didn’t matter.

The specialist builds what’s specified. The diagnostic mind asks whether the specification is right.

This is uncomfortable. Clients don’t always want their assumptions questioned. They came for execution, not inquiry.

But execution without diagnosis is the dermatologist scheduling a biopsy when the answer is sitting in the laundry room. Technically sophisticated. Potentially useless. Expensive either way.

The Seeing That Remains Human

The surgeon’s hands are being augmented. Robotic precision exceeds human steadiness.

The specialist’s eyes — within the domain — are being augmented. AI spots patterns humans miss.

The GP’s eyes are not being augmented. Not in the way that matters.

The ability to see the whole patient. To integrate across specialties. To notice that the rash presenting in dermatology traces back to a detergent in the laundry room. To ask the question that reframes everything.

This seeing requires something AI doesn’t have: the experience of being a generalist in a world of specialists. The pattern recognition that comes from not having a domain. The peripheral vision that specialization deliberately sacrifices.

The diagnostic function is becoming the scarce resource.

What This Means

The specialists aren’t going away. We need them.

But the balance is shifting.

When specialist execution is abundant — amplified by AI, multiplied by tools — the premium moves to the diagnostic layer. The judgment that precedes execution. The question that precedes the answer.

Which specialist should we call?

Should we call any of them?

What’s actually going on here?

These questions don’t come from depth. They come from breadth. From having seen enough problems across enough domains to recognize the shape of things.

The GP’s way of seeing isn’t a lesser form of medicine. It’s a different function entirely.

We’ve been undervaluing it for decades.

That’s becoming expensive.

About the Author

Raghu Vishwanath

Raghu Vishwanath is Managing Partner at Bluemind Solutions and serves as CTO at KeyZane, a financial inclusion platform live in Central and West Africa. Across 30+ years in software engineering and technical leadership, he has watched the terms of specialization change — and learned that the only sustainable expertise is the willingness to build it again.