Healthcare Workers Are Striking Over AI. Here's the Bigger Problem Nobody's Protesting.
The Kaiser strike raised real fears about AI in healthcare. But the deeper risk isn't job displacement—it's that clinical AI reads from records that are already broken.
On March 18, about 2,400 Kaiser Permanente mental health professionals walked off the job in Northern California, joined in sympathy by more than 23,000 Kaiser nurses. Together, the striking workers care for roughly 4.6 million patients in the Bay Area, Central Valley, and Sacramento regions. The flashpoint: concerns that AI is coming for clinical roles.
Kaiser says the fear is unfounded. "AI does not replace human assessment, and it does not make care decisions." That response is probably true, at least right now.
But that response doesn't touch the real problem, the one nobody is protesting.
What the debate got wrong
The public argument about healthcare AI follows a predictable track. Will AI replace nurses? Will it violate patient privacy? Can we trust something that wasn't trained by a clinician? These are reasonable questions. They're also not the sharpest ones.
Here's the thing about AI systems in healthcare: they don't think. They read. They synthesize. They summarize. They route. And what they read from is the existing medical record—a document stack that has been accumulating inconsistencies, outdated entries, and fragmented information since long before any AI touched it.
The accuracy problem in healthcare AI isn't primarily a model problem. It's a source document problem.
Enter Microsoft Copilot Health
The same week Kaiser workers walked out, Microsoft announced Copilot Health: an AI health companion that reads from records across more than 50,000 US hospitals and provider organizations via HealthEx and TEFCA. It connects to 50-plus wearable devices. Microsoft is already handling 50 million consumer health questions per day across its products.
This is genuinely useful infrastructure. People need help making sense of their health data. They show up to appointments with incomplete context, forget half their questions, and stare at test results they don't understand. Copilot Health addresses a real gap.
But scale changes the risk profile. When one system reads from 50,000 hospital networks, the quality of what those networks have on file becomes a system-level question, not just an individual care question.
A stale allergy entry in one patient's chart is a personal risk. The same category of error across millions of records, processed by a system designed to surface actionable insights, is something different.
The record quality problem clinicians already know about
Doctors aren't surprised by this. In a Doximity survey published in March 2026, 94% of physicians said they had adopted AI tools or were interested in doing so (Healthcare Dive). More than half are already using AI in practice.
But more than 70% cited accuracy and reliability as their top concern. Only 8% said AI decision-making policies were clear and understood at their organizations.
That last number deserves more attention than it gets. These are physicians who are actively using AI tools, in organizations that have deployed them—and most can't explain what those tools are doing or why. The explanation isn't that clinicians are being difficult. The explanation is that accuracy is genuinely hard to evaluate when the inputs themselves are unreliable.
We've written before about how ECRI named AI the top patient safety risk in healthcare—and how the underlying concern wasn't the AI model itself, but what the model was reading from. The Doximity numbers confirm that practicing physicians are arriving at the same place independently.
What grounded AI actually means in this context
The standard response to hallucination concerns in clinical AI is: "We use retrieval-augmented generation. The system is grounded in real records." This is correct and also insufficient.
Grounding guarantees that the AI will read from your actual documents. It says nothing about whether those documents are accurate, current, or internally consistent.
A RAG system reading from a contradictory chart doesn't hallucinate. It faithfully reports the contradiction, or worse, resolves it based on recency or confidence thresholds in ways the clinician can't see. The answer it produces is grounded and wrong.
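To make that concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the ChartEntry fields, the two conflicting allergy entries, and the retrieve and resolve_by_recency helpers are hypothetical, not any real EHR schema or vendor API. The point it shows is narrow: every step is faithfully grounded in the record, and the answer can still be wrong if the newer entry happens to be the erroneous one.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical chart entries; field names are illustrative, not a real EHR schema.
@dataclass
class ChartEntry:
    field: str
    value: str
    recorded: date
    source: str

chart = [
    ChartEntry("penicillin_allergy", "yes", date(2014, 3, 2), "intake form"),
    ChartEntry("penicillin_allergy", "no",  date(2023, 8, 19), "urgent-care visit"),
]

def retrieve(chart, field):
    """'Grounding' step: return every entry that matches the question."""
    return [e for e in chart if e.field == field]

def resolve_by_recency(entries):
    """Naive resolution policy: trust whichever entry was written last.
    The contradiction disappears from the answer, invisibly to the clinician."""
    return max(entries, key=lambda e: e.recorded)

entries = retrieve(chart, "penicillin_allergy")
answer = resolve_by_recency(entries)

# Every token of this answer is "grounded" in the real record...
print(f"Penicillin allergy: {answer.value} (per {answer.source}, {answer.recorded})")
# ...but whether it is correct depends entirely on which entry was wrong,
# a question the retrieval step never asked.
```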
When consumer health AI gains access to medical records through wearables and health platforms, the conversation tends to focus on privacy. That's the visible risk. The less visible risk is that the AI reads old medication lists, outdated diagnoses, and conditions that were never properly closed in the record, and treats all of it as current truth.
The missing layer
The healthcare AI conversation needs a layer it currently lacks: systems that maintain the source record, not just read from it.
Detecting contradictions across documents. Flagging entries that haven't been reviewed in years. Identifying which information is structurally stale versus clinically current. Triggering review workflows when a patient-facing AI system pulls from a record section that was last touched before a significant care event.
This isn't a futuristic feature. It's the category of work that makes grounded AI safe enough to trust at scale. In a single-patient context, a thoughtful clinician catches these things manually. At the scale Microsoft is building toward, that manual review isn't possible.
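What might that kind of maintenance pass look like? The sketch below is a toy illustration under stated assumptions, not anyone's production system: it walks hypothetical chart entries, surfaces contradictions and stale fields, and emits review tasks for a human instead of letting a downstream model resolve them silently. All field names, thresholds, and helper functions are invented for the example.

```python
from collections import defaultdict
from datetime import date, timedelta

# Hypothetical chart entries as plain dicts; values and sources are made up.
chart = [
    {"field": "penicillin_allergy", "value": "yes", "recorded": date(2014, 3, 2), "source": "intake form"},
    {"field": "penicillin_allergy", "value": "no", "recorded": date(2023, 8, 19), "source": "urgent-care visit"},
    {"field": "primary_diagnosis", "value": "hypertension", "recorded": date(2016, 1, 11), "source": "annual physical"},
]

STALE_AFTER = timedelta(days=3 * 365)  # illustrative review window, not a clinical standard

def find_contradictions(chart):
    """Group entries by field and flag any field whose recorded values disagree."""
    by_field = defaultdict(list)
    for entry in chart:
        by_field[entry["field"]].append(entry)
    return {f: es for f, es in by_field.items() if len({e["value"] for e in es}) > 1}

def find_stale(chart, today=date(2026, 3, 20)):  # date pinned so the example is reproducible
    """Flag entries that haven't been touched inside the review window."""
    return [e for e in chart if today - e["recorded"] > STALE_AFTER]

def review_queue(chart):
    """Emit human-review tasks rather than resolving conflicts automatically."""
    tasks = []
    for field, entries in find_contradictions(chart).items():
        tasks.append(("contradiction", field, [e["source"] for e in entries]))
    for entry in find_stale(chart):
        tasks.append(("stale", entry["field"], entry["source"]))
    return tasks

for task in review_queue(chart):
    print(task)
```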
The knowledge maintenance layer, the part that ensures what AI reads is actually worth reading, is where the work needs to happen. Mojar AI builds for this problem: contradiction detection, stale-document remediation, feedback-loop correction when AI surfaces a wrong answer and someone flags it. In healthcare, that's not an operational nicety. It's patient safety infrastructure.
The real question coming out of this week
Kaiser workers were asking a reasonable question: what happens to clinical relationships when AI intermediates them? That question deserves a serious answer.
But there's a harder question underneath it. Not whether AI should read the chart. Whether the chart is trustworthy enough to read.
More than 25,000 workers walked out over one concern. Nobody organized a strike around the other one. That doesn't mean it's smaller.
Frequently Asked Questions
Why did Kaiser healthcare workers strike over AI?
About 2,400 Kaiser mental health professionals walked out on March 18, 2026, over concerns that AI could eventually replace therapists. More than 23,000 nurses joined in sympathy. Kaiser says it does not currently use AI for therapy and that AI will not replace human clinical judgment or make care decisions.
What is Microsoft Copilot Health?
Microsoft Copilot Health is a new AI health companion that reads from records across 50,000+ US hospitals and provider organizations via HealthEx and TEFCA, and connects to over 50 wearable devices. Microsoft already handles 50 million consumer health questions per day across its products.
What do physicians say is the biggest problem with healthcare AI?
According to a Doximity survey, over 70% of physicians cite accuracy and reliability as a top barrier to AI adoption. Only 8% say AI decision-making policies are clear and understood at their organizations—meaning even enthusiastic adopters can't fully explain what the systems are doing.