Healthcare AI Has a Trust Infrastructure Problem
The share of Americans who trust AI in healthcare fell from 52% to 44% in two years. The fix isn't better messaging. It's source-grounded, auditable AI, and most deployments don't have it.
Healthcare AI is expanding fast. OpenAI launched ChatGPT Health in late 2025. Anthropic followed with a healthcare-specific Claude offering in January 2026. Insurance companies are running AI for prior authorization. Hospitals are piloting ambient scribes, diagnostic aids, and clinical decision support tools across hundreds of workflows.
Public trust is moving in the opposite direction.
The numbers are going the wrong way
According to the first wave of Reach3 Insights and Rival Technologies' 2026 Digital Health Trends study, just 44% of Americans trust the use of AI in healthcare today — down from 52% in 2024. More striking: 53% now report negative sentiment toward AI's integration into healthcare overall (Reach3 Insights/Rival Technologies).
This isn't a static baseline. It's a trend moving against adoption, not with it.
Only 14% of Americans currently use AI for health or wellness purposes. Among that group, trust is high — 88% say they trust AI in healthcare. Among everyone else: 38% — a 50-point divide that tells you something important about why skepticism persists.
People who experience healthcare AI firsthand tend to trust it. People who haven't, and who are watching from the outside, are less convinced. That gap doesn't close with better marketing.
Why the skepticism isn't irrational
In February 2026, a study published in Nature Medicine tested ChatGPT Health on 60 clinical scenarios drawn from real emergency cases. The chatbot under-triaged 51.6% of those cases: it recommended delayed care in situations where urgent action was needed (NBC News).
This was research, not live deployment. OpenAI states explicitly that ChatGPT Health is "not intended for diagnosis or treatment." But the context matters: over 40 million people globally use ChatGPT for health questions, and nearly 2 million weekly messages are about insurance, per OpenAI's own data. Intended use and actual use are not the same thing.
What's uncomfortable is that this failure mode wasn't caught before public rollout. A product used by tens of millions for health guidance needed a third-party study to surface a significant safety gap.
That's not primarily a model problem. It's a testing and transparency problem.
This is landing in a healthcare system already running on low trust
The STAT analysis published today places the AI numbers in a broader context. General trust in healthcare has been declining for years: trust in physicians and hospitals fell more than 30 percentage points between 2020 and 2024, from 72% to 40%, according to a national survey of 443,000 U.S. adults (Angus Reid panel).
For Black, Latino, and Indigenous communities, that collapse layers onto medical mistrust with much deeper roots: a legacy of harm and exclusion that predates AI by decades.
When healthcare AI enters that environment — often without patient awareness of how it works, used in insurance denial workflows, making consequential recommendations with opaque reasoning — it doesn't arrive as a neutral tool. It arrives as another system making decisions without clear accountability. The trust deficit isn't AI's alone to fix. But AI deployments that lack transparency make it harder to fix.
What "trust infrastructure" actually means
Healthcare providers and health systems generally understand that AI systems need to be secure, HIPAA-compliant, and bias-tested. Those are necessary. They're not sufficient.
The more difficult requirement is that AI systems need to be explainable about what they know and where they learned it. When a clinician gets an AI-assisted recommendation, they need to know which protocol, which formulary version, which clinical guideline informed it. When a patient uses a health system's AI assistant to check coverage, that answer needs to come from the actual current plan documents — not a training dataset from 18 months ago.
Most healthcare AI deployments can't do this. The model runs. An answer comes back. The sourcing is a black box.
ECRI named AI the No. 1 patient safety risk for 2026, with knowledge accuracy as the central concern. HIMSS26 closed with a frank acknowledgment that healthcare AI deployment lacks standardized guardrails. These aren't fringe concerns — they're coming from the institutions that run healthcare AI procurement and safety programs.
The pattern is consistent: capability is scaling. The infrastructure to verify what AI systems know is not.
What source-grounded AI looks like in practice
Source-grounded AI isn't a new category. It's RAG — retrieval-augmented generation — applied to institutional documents: clinical protocols, formularies, compliance policies, patient education materials, payor contracts.
When an AI system retrieves answers from governed documentation, every response comes with a citation: "Based on Section 4.2 of your current formulary, updated January 2026." That's auditable. Compliance teams can verify it. Clinicians can check the source. When an answer is wrong, there's a trail to investigate — and correct.
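For illustration, here's a minimal sketch of that retrieval-plus-citation shape in code. The document names, sections, effective dates, and keyword-overlap scoring are all hypothetical stand-ins (a production system would use embedding-based retrieval over a governed document store); the point is the output, where every answer carries a citation a compliance team can audit.

```python
# Sketch: retrieval with source attribution over governed documents.
# All document names, sections, and content below are hypothetical examples;
# the keyword-overlap scoring stands in for embedding-based retrieval.

from dataclasses import dataclass
from datetime import date


@dataclass
class Passage:
    doc_id: str      # governing document, e.g. a formulary or protocol
    section: str     # section identifier used in the citation
    effective: date  # version date the passage was published under
    text: str


KNOWLEDGE_BASE = [
    Passage("formulary-2026", "4.2", date(2026, 1, 15),
            "Generic statins are covered at Tier 2 and do not require prior authorization."),
    Passage("post-op-pain-protocol", "3.1", date(2025, 11, 3),
            "Default post-operative dosing is acetaminophen 650 mg every 6 hours."),
]


def retrieve(question: str, k: int = 1) -> list[Passage]:
    """Rank passages by naive keyword overlap with the question."""
    terms = set(question.lower().split())
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda p: len(terms & set(p.text.lower().split())),
                    reverse=True)
    return ranked[:k]


def answer_with_citation(question: str) -> dict:
    """Pair the grounding passage with a citation a compliance team can verify."""
    top = retrieve(question)[0]
    return {
        "grounding_text": top.text,
        "citation": f"{top.doc_id}, Section {top.section} "
                    f"(effective {top.effective.isoformat()})",
    }


print(answer_with_citation("Do generic statins require prior authorization"))
```

The payload format is an assumption; what matters is the guarantee it represents: the answer can be traced to a specific section of a specific document version.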
Keeping that documentation accurate is the second half of the problem. A health system deploying AI on its knowledge base needs contradiction detection — if the current protocol says one thing and an older policy says another, the system should surface that conflict before the AI does. It needs the ability to update documentation without IT tickets ("Update the post-op pain protocol with the new dosing guidelines"). It needs version control so you know what the AI was reading when it gave a specific answer last Thursday.
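Here's an equally rough sketch of the contradiction-detection half, again with hypothetical documents and dosing values. The exact-text comparison stands in for semantic matching, and the real work in production is the review workflow behind the flag; the code only shows how a conflict between a current protocol and a stale policy gets surfaced with enough version metadata to act on.

```python
# Sketch: surfacing contradictions between a current protocol and a stale policy.
# Topic keys, documents, and dosing values are hypothetical; a production system
# would use semantic matching and route flags into a human review queue.

from dataclasses import dataclass
from datetime import date
from itertools import combinations


@dataclass
class PolicyPassage:
    doc_id: str
    topic: str       # normalized topic key the passages are grouped on
    effective: date  # version date, used to tell current from stale
    text: str


PASSAGES = [
    PolicyPassage("post-op-pain-protocol-v3", "post-op-pain-dosing", date(2026, 1, 10),
                  "Acetaminophen 1000 mg every 8 hours for the first 48 hours."),
    PolicyPassage("nursing-handbook-2024", "post-op-pain-dosing", date(2024, 6, 1),
                  "Acetaminophen 650 mg every 6 hours for the first 48 hours."),
]


def find_conflicts(passages: list[PolicyPassage]) -> list[dict]:
    """Flag pairs of passages on the same topic whose guidance disagrees."""
    conflicts = []
    for a, b in combinations(passages, 2):
        if a.topic == b.topic and a.text != b.text:
            newer, older = sorted((a, b), key=lambda p: p.effective, reverse=True)
            conflicts.append({
                "topic": a.topic,
                "current": f"{newer.doc_id} ({newer.effective}): {newer.text}",
                "stale": f"{older.doc_id} ({older.effective}): {older.text}",
                "action": "hold AI answers on this topic until an owner resolves the conflict",
            })
    return conflicts


for conflict in find_conflicts(PASSAGES):
    print(conflict)
```

The version metadata does double duty here: the same effective dates that separate current guidance from stale guidance are what let you reconstruct which passage the AI was reading when it gave a specific answer last Thursday.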
Ambient clinical AI is already generating new documentation governance problems — AI-generated notes entering medical records without the same review rigor as human-written ones. The documentation layer is where healthcare AI trust either gets built or quietly falls apart.
Mojar AI is built for this layer: RAG on governed documents, source attribution on every answer, contradiction detection across the knowledge base, and a management interface that keeps content current as policies and protocols change. The application in healthcare is operational and compliance documentation — not clinical diagnosis.
What to watch
Three things worth tracking over the next year.
Procurement standards are quietly evolving. Health systems and payors that make source auditability an explicit requirement — not just security compliance, but verifiable documentation of what the AI reads — will start separating from those that don't. The ChatGPT Health study gives procurement teams a legitimate question to put to every AI vendor: can you show us exactly what your system is drawing on when it generates an answer?
Regulation is moving in the same direction. The White House AI framework, California's pending healthcare AI legislation, and emerging CMS guidance are all pushing toward requiring clearer provenance controls on AI-generated outputs in clinical and administrative contexts. Source-grounded systems are better positioned for what's coming than black-box deployments.
And trust may stay stubbornly low even as adoption grows, if the underlying problems don't get solved. Among the 14% of Americans with firsthand experience, 88% trust healthcare AI. That's a signal worth building on. But it only works if the next wave of users encounters AI that can explain itself, that operates on accurate documentation, and that gives compliance teams something to stand behind.
Adoption without accountability has a well-established failure mode. Healthcare has already seen it play out once this year.
Frequently Asked Questions
Why is trust in healthcare AI declining?
Trust in healthcare AI fell from 52% to 44% of Americans between 2024 and 2026, according to Reach3 Insights and Rival Technologies. The drop follows a string of safety concerns, including a Nature Medicine study showing ChatGPT Health incorrectly triaged half of medical emergency test cases, alongside growing awareness that many healthcare AI systems operate without clear source attribution or audit trails.
Do people who use healthcare AI trust it more?
Among the 14% of Americans who currently use AI for health or wellness, 88% trust AI in healthcare. Among those who don't use it, that number is 38% — a 50-point gap, per Reach3 Insights and Rival Technologies' 2026 Digital Health Trends research.
What does trust infrastructure mean for healthcare AI?
Trust infrastructure refers to the systems that make AI behavior verifiable: source-grounded retrieval that cites which documents informed each answer, contradiction detection across clinical and policy documentation, version control on knowledge bases, and audit trails that compliance teams can inspect. Without these, healthcare AI can't be held accountable.