Healthcare AI Will Be Won by Verticals. The Recipe Has Been Around for a Decade.

The four steps nobody wants to build — and why the ones who do win
I spent four years building the middleware between hospitals and every system that needed to talk to them. Claims clearinghouses, EHR integrations, HL7 pipes, FHIR servers, lab results flowing into intake workflows, ADT feeds triggering billing. The invisible plumbing that makes healthcare data move.
I did not enjoy most of it. But I learned something that's taken the AI industry a decade to rediscover.
Healthcare AI doesn't have a model problem. It has a foundation problem.
The Palantir Playbook (That Nobody Talks About)
Palantir is a strange company to admire. They have a reputation for opacity, government contracts, and making enterprise software that requires an anthropology degree to understand. But they've figured something out that most software companies — and almost every AI startup — get completely wrong.
Here's how Palantir actually works, stripped of the mythology:
Step 1: Send a forward-deployed engineer. Not a salesperson. Not a solutions engineer who gives demos. A real engineer who goes to the customer site and stays there. Embeds. Learns the operational reality — what data exists, where it lives, how clean it is, what the workflows actually look like vs. how they're described in the RFP.
Step 2: Build an ontology. Before any dashboards, before any "AI features," before any user interface: define what things are in this domain. What is a patient in this context? What is an encounter? What is an adverse event? What is a claim? What relationships exist between these objects? The ontology is not a data dictionary. It's a coherent model of the domain that everything downstream can be built on top of.
Step 3: Do the integrations. Now that you understand the domain and have a model for it, pull the data in. Clean it, map it, normalize it. This is unglamorous. It takes longer than anyone wants it to take. It is also the entire ballgame.
Step 4: Build AI tooling on solid ground. With clean data flowing through a coherent ontology, you can build things that actually work. Anomaly detection. Predictive models. Search. Now agents. Because the foundation holds.
Most companies skip straight to step 4. Some of them get to step 3 eventually. Very few ever build a real ontology. Almost none start where the recipe starts: with the forward-deployed engineer.
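To make step 2 concrete, here's a minimal sketch of an ontology as code for the objects named above. The classes, fields, and the single invariant are illustrative assumptions, not Palantir's actual ontology or any real clinical model:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative domain objects; a real clinical ontology is far richer.
@dataclass
class Patient:
    mrn: str          # medical record number
    birth_date: date

@dataclass
class Encounter:
    encounter_id: str
    patient: Patient  # an encounter cannot exist without a patient
    encounter_date: date

@dataclass
class Claim:
    claim_id: str
    encounter: Encounter  # claims hang off encounters, not patients directly
    icd10_codes: list[str] = field(default_factory=list)

    def __post_init__(self):
        # An invariant the ontology enforces for everything downstream:
        # a claim must carry at least one diagnosis code.
        if not self.icd10_codes:
            raise ValueError("a claim must carry at least one diagnosis code")
```

The point isn't the dataclasses themselves. It's that relationships (a claim belongs to an encounter, which belongs to a patient) and invariants live in one shared model instead of being re-derived by every feature that touches the data.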
Why "AI for Healthcare" Is Not a Company
Here's what the current generation of horizontal healthcare AI companies is actually building: a natural language interface on top of a mess.
The EHR data is unstructured. Physician notes are free text that varies wildly by specialty, institution, and individual. Lab values are in different units across systems. Medications are spelled seventeen different ways. Diagnoses are coded in ICD-10 but the codes are entered inconsistently, sometimes used as billing proxies rather than clinical truth.
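Here's a sketch of what even the simplest normalization layer implies, using a glucose result and a handful of medication aliases. The conversion factors are standard chemistry, but the hand-written tables are toy stand-ins for real vocabularies like UCUM and RxNorm:

```python
# Toy conversion table for one analyte. Real pipelines map units via UCUM.
GLUCOSE_TO_MG_PER_DL = {
    "mg/dL": 1.0,
    "g/L": 100.0,      # 1 g/L = 1000 mg/L = 100 mg/dL
    "mmol/L": 18.02,   # molar mass of glucose is ~180.2 g/mol
}

# Toy alias table. In practice this is RxNorm mapping, not string lookups.
MED_ALIASES = {
    "hctz": "hydrochlorothiazide",
    "hydrochlorthiazide": "hydrochlorothiazide",  # common misspelling
    "asa": "aspirin",
}

def normalize_glucose(value: float, unit: str) -> float:
    """Convert a glucose result to mg/dL regardless of the source system's unit."""
    return value * GLUCOSE_TO_MG_PER_DL[unit]

def normalize_med(name: str) -> str:
    """Map a free-text medication name to a canonical form."""
    key = name.strip().lower()
    return MED_ALIASES.get(key, key)
```

Multiply this by every analyte, every drug, and every source system, and you have the integration layer that has to exist before any model sees the data.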
A general-purpose LLM has no idea what any of this means at the clinical level. It can parse the text — it's remarkably good at that. But parsing text and reasoning about clinical significance are different things. And "clinical significance" isn't a universal concept. It means something different for a dermatologist than it does for an oncologist.
This is why generic healthcare AI copilots keep producing outputs that clinicians don't trust. It's not the model's fault. The model is working exactly as designed. The problem is that the inputs are garbage, and there's no ontology telling the model what relationships to respect.
You can't fix that with a better prompt. You can't fix it with RAG on top of the same messy data. You fix it by doing the Palantir steps in order.
Why the Winners Are All Doing the Same Thing
Rex Woodbury recently wrote about the healthcare companies that are actually getting traction, and a pattern jumps out immediately.
AI scribes — Freed, Ambience, Deepscribe — aren't "AI for healthcare." They're AI for a specific workflow (clinical documentation) in a specific context (the physician encounter). They embedded with clinicians. They built an ontology of what a SOAP note is, what goes in which field, how a dermatology note differs from a psychiatry note. They did the integration work to get outputs into the EHR in a format that complies with billing requirements. Then they built the AI layer.
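As a sketch of what "an ontology of what a SOAP note is" might look like in code: the four sections are the standard SOAP structure, but the per-specialty required-finding rules below are invented for illustration, not any vendor's actual logic:

```python
from dataclasses import dataclass

@dataclass
class SoapNote:
    # The four canonical SOAP sections.
    subjective: str
    objective: str
    assessment: str
    plan: str
    specialty: str

# Illustrative specialty-specific checks; real scribes encode far more structure.
REQUIRED_OBJECTIVE_TERMS = {
    "dermatology": ["lesion"],        # a derm note documents skin findings
    "psychiatry": ["mental status"],  # a psych note documents a mental status exam
}

def validate(note: SoapNote) -> list[str]:
    """Return a list of problems; empty means the note passes its specialty's checks."""
    problems = []
    for term in REQUIRED_OBJECTIVE_TERMS.get(note.specialty, []):
        if term not in note.objective.lower():
            problems.append(f"missing required finding: {term}")
    return problems
```

Encoding "a dermatology note differs from a psychiatry note" as rules like these is exactly the kind of domain knowledge a generic model doesn't have and a forward-deployed team accumulates.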
Honeydew is doing the same thing for dermatology access. 100 million Americans have chronic skin conditions; nine out of ten don't get treatment because the access problem is so severe. Honeydew isn't trying to build AI for all of healthcare. They built an ontology for dermatological care. They understand the specific data, the specific triage logic, the specific treatment protocols. The AI on top of that foundation is actually useful because the foundation is solid.
Grow Therapy and Headway are doing it in behavioral health. They didn't start with AI. They started by going deep on the operational nightmare that is being a private practice therapist: credentialing, billing, insurance claims, patient matching. They built the platform around those workflows first. The AI tooling is extending something that already had structural integrity.
This is not a coincidence. Every healthcare AI company that's achieving real scale followed the Palantir recipe, whether they called it that or not.
The PHI Moat Nobody Talks About
There's a structural reason why vertical beats horizontal in healthcare that goes beyond just product quality: medical records are private.
The general-purpose LLM cannot train on your patient data. It never will. ChatGPT does not know what a clinical note from your cardiology unit looks like. It does not have access to your payer mix, your billing patterns, your population's comorbidity profile.
This means the general model will always be starting from scratch on the domain-specific details that matter most. The vertical player who has embedded with a specific care setting, built the ontology, done the integrations, and trained (or fine-tuned) on domain-specific data has a moat that no frontier model can erode just by getting bigger.
In most software markets, scale advantages go to whoever has the most data overall. In healthcare, the data isn't portable and the privacy constraints are real. Scale advantages go to whoever goes deepest in a specific domain. This inverts the usual tech playbook and it's why Palantir-style forward deployment actually wins.
What This Means If You're Building
If you're building healthcare AI, here's the checklist:
Do you have a forward-deployed engineer with the customer? Not a customer success manager. Not a technical account manager. An engineer who is sitting in the clinical workflow, watching what actually happens, learning where the data breaks down and what the edge cases look like. If you don't have this, everything you build is a guess.
Have you built an ontology for your specific domain? Not a data model — an ontology. What are the objects? What are their relationships? What invariants hold? A dermatology ontology looks nothing like an oncology ontology. If you're trying to build for "healthcare broadly," you don't have an ontology. You have wishful thinking.
Have you done the integrations? This is the part everybody wants to skip. The HL7 feeds, the FHIR servers, the EHR write-backs, the billing code mappings. This work is slow, annoying, full of edge cases, and completely non-negotiable. The companies winning in healthcare have all paid this toll. The companies failing are the ones who thought they could build AI on top of a demo dataset and worry about the real integrations later.
Are you building AI on solid ground? If you've done the first three steps, you're ready. Your AI features will work because they have coherent inputs. Your agents will be reliable because they understand the domain they're operating in. Your outputs will be trustworthy because the data model enforces the relationships that matter clinically.
Skip the steps, and you have a copilot that impresses in demos and frustrates in production.
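The integration toll is easiest to appreciate up close. Below is a minimal, illustrative parse of an HL7 v2 ADT message. Real pipelines use a proper library and handle escaping, field repetition, and Z-segments; even this toy shows how positional and brittle the format is:

```python
# A hand-rolled HL7 v2 parse, for illustration only. Segments are separated by
# carriage returns, fields by pipes, and components within a field by carets.
SAMPLE_ADT = (
    "MSH|^~\\&|SENDER|HOSP|RECEIVER|BILLING|202403050830||ADT^A01|MSG001|P|2.5\r"
    "PID|1||MRN12345^^^HOSP||DOE^JANE||19800101|F\r"
)

def parse_hl7(message: str) -> dict:
    """Index segments by type; each segment is a list of pipe-delimited fields."""
    segments = {}
    for raw in filter(None, message.split("\r")):
        fields = raw.split("|")
        segments.setdefault(fields[0], []).append(fields)
    return segments

segs = parse_hl7(SAMPLE_ADT)
# PID-3 is the patient identifier list; the first caret-delimited component
# is the ID itself. Everything here is positional: off by one field and you
# bill the wrong patient.
patient_id = segs["PID"][0][3].split("^")[0]
```

None of this is hard individually. It's the thousands of variations across senders, versions, and site-specific conventions that make the work slow and non-negotiable.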
Why I'm Watching Verticals Win and Most of the Market Miss It
When I built Bridge Connector, we were essentially doing the Palantir recipe without the branding. We sent engineers into health systems. We built ontologies for specific integration scenarios — an ADT feed connecting to a billing system looks nothing like a lab result feed connecting to a care management platform. We did tens of thousands of integration mappings. We got the data flowing cleanly.
We didn't call any of that "AI" because the tooling wasn't there yet. But the foundation was. What I know now, having watched the current generation of healthcare AI companies, is that the ones who skip that foundation work are building on sand.
The market is large enough that sand-based products can raise money, hire teams, and run for years. Eventually the clinicians stop using them. The hospital contract doesn't renew. The payer pilot doesn't expand. The data from the product doesn't actually match the data in the chart.
The vertical specialists who do the foundation work will look slower at first. Narrower. Less ambitious. "Just dermatology?" "Just behavioral health scribes?"
Then one day they're the operating system for their specialty. And no horizontal platform can displace them, because the integration depth and domain ontology are the product.
The Palantir recipe is more than a decade old. It's still the only way to build durable enterprise software in complex domains. Healthcare AI is just relearning it, slowly and expensively.
Build for cardiologists. Not for healthcare.
