HIMSS26 Closed With a Confession: Healthcare AI Has 'Few Guardrails'
At HIMSS26, federal officials admitted healthcare AI has 'few guardrails' while 1,300+ AI devices run in hospitals. Here's what that gap means in practice.
The Veradigm VP had just come off a panel at the biggest healthcare IT conference on the calendar. Her summary of the regulatory environment for clinical AI: "Few guardrails."
That was Monday, March 10. HIMSS26 ran through today. By the time it wrapped, Epic had launched AI agents reaching 85% of US hospitals, Oracle had gone generally available with AI that both generates and reads clinical notes, and the FDA had confirmed it has cleared more than 1,300 AI medical devices since 1995 — with submissions spiking sharply in recent years. The official position from the people responsible for overseeing all of it: we're working on it.
That gap is the story HIMSS26 didn't tell about itself.
Deployment Is Ahead of Governance
This isn't a doom take. The deployment numbers are real, and so are the efficiency gains driving them. Oracle's AI deployment at Billings Clinic saved more than 200,000 physician hours (Oracle, HIMSS26 announcement). Epic's Agent Factory extends AI workflows across a hospital system that covers more than 85% of US patients. No health system CFO is looking at those numbers and choosing to slow down. The economics are too clear.
But ECRI — the independent patient safety organization whose annual list directly shapes accreditation and JCAHO conversations — put "Navigating the AI Diagnostic Dilemma" at #1 for 2026. That's the first time AI has ever topped their list. Their diagnosis of the core problem is brief and exact: "AI models are only as reliable as their training data."
Hold that next to what federal officials said on stage at HIMSS26, and there it is: an on-record admission that the deployment curve and the regulatory curve have diverged. Not projected to diverge. Already have.
Three Layers of the Same Week
What Deployed
Epic's Agent Factory, announced Day 1, extends AI agent orchestration across a system that touches the majority of US hospital patients. Oracle went GA with clinical AI that does something clinically consequential: it reads prior patient notes to inform what it generates, not just a blank-slate model producing outputs. The feedback loop this creates is exactly what our earlier piece on Oracle's clinical notes loop examined — and the GA announcement means it's now production infrastructure, not a pilot.
The FDA's clearance record tells the broader story. More than 1,300 AI medical devices cleared since 1995, per MedTech Dive's tracker — and the submission rate has been climbing steeply. These aren't experimental tools under controlled observation. They're running in hospitals today.
What ECRI Said
ECRI's #1 safety concern isn't about AI hallucinating in the abstract. It's about what happens downstream when clinicians accept AI-generated outputs without validation. David Lareau, CEO of Medicomp, named the failure mode directly at an Oracle panel on March 11: "When AI-generated outputs are accepted without validation, unsupported diagnoses, incomplete clinical context and subtle inaccuracies can become part of the permanent record."
That mechanism has a name — automation bias — and healthcare already has a long literature on it from earlier decision-support tools. The difference now is scale: at HIMSS26, the AI subject to automation bias is operating across most of the US hospital market. Our Epic and ECRI coverage from earlier this week examined the knowledge accuracy problem in Epic's specific implementation. This week's ECRI finding makes clear it isn't an Epic-specific issue. It's structural.
What Regulators Said
The HHS deputy chief AI officer, Arman Sharma, was clear about where things stand: the department has been working to align AI projects across its agencies and released an RFI in December asking the industry how regulations might need to change. That RFI received nearly 450 comments (regulations.gov). No framework has been published in response.
Sharma's summary, from the stage at HIMSS26: "We recognize that this is a space that is very volatile. It's moving very quickly. It's very important that the industry gets clear signals from the agency about what we care about." Note the framing: it's important that the industry gets signals. An aspiration, not a status report.
The FDA's Jared Seehafer said the agency is trying to set policy that works in 2026 and will also adapt as the technology evolves. That's an accurate description of a hard problem. It's not a framework. The conference closed with the regulatory picture exactly where it opened: in progress.
Tina Joros of Veradigm summarized the week's regulatory reality plainly: "It remains a very complex environment, with few guardrails for the use of AI in healthcare, and still a lot of work to do to really create an environment that is going to produce safe, reliable artificial intelligence for clinical use." (Healthcare Dive)
What Healthcare Organizations Need to Do Right Now
Here's the structural problem HIMSS26 surfaced: regulators are focused on governing AI outputs. Nobody has published standards for AI inputs — the clinical policies, protocols, and reference documentation that AI reads before it generates anything. ECRI identified the reliability problem at exactly that layer. The Medicomp CEO described what happens when that layer fails: permanent record errors.
Federal standards aren't coming in Q1. Colorado's AI Act takes effect June 30 — the first state with broad AI guardrails — and it will create patchwork compliance complexity for systems operating across state lines. But it's not a clinical accuracy standard either.
In the absence of federal guidance, the defensible position is concrete: if your organization is deploying any of the 1,300+ FDA-cleared AI tools, or Epic's Agent Factory, or Oracle's clinical note AI, the quality and currency of what those systems read is patient safety infrastructure. Not IT hygiene. Patient safety. The clinical policies, protocols, and documentation those models use as their source of truth need audit trails, contradiction detection, and controlled update processes. The Medicomp CEO's warning about permanent record errors isn't a technology failure — it's a knowledge governance failure. And right now, that's the organization's problem to solve.
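To make that prescription concrete, here is a minimal sketch of what "audit trails, contradiction detection, and controlled update processes" could look like at the software level. This is illustrative only: the names (`PolicyRegistry`, `find_contradictions`) and the keyword-pair contradiction check are our own simplifying assumptions, not any vendor's actual implementation, and a production system would need far more rigor than a substring match.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PolicyVersion:
    text: str
    author: str
    approved_by: str
    timestamp: str

class PolicyRegistry:
    """Hypothetical registry for clinical policy documents.

    Every update is appended, never overwritten, so there is a full
    audit trail of exactly what text an AI system could have read
    at any point in time.
    """

    def __init__(self):
        self._versions: dict[str, list[PolicyVersion]] = {}

    def update(self, policy_id: str, text: str, author: str, approved_by: str):
        # Controlled update: a new version only becomes the current
        # source of truth with a named approver attached.
        if not approved_by:
            raise ValueError("updates require a named approver")
        version = PolicyVersion(
            text=text,
            author=author,
            approved_by=approved_by,
            timestamp=datetime.now(timezone.utc).isoformat(),
        )
        self._versions.setdefault(policy_id, []).append(version)

    def current(self, policy_id: str) -> PolicyVersion:
        return self._versions[policy_id][-1]

    def audit_trail(self, policy_id: str) -> list[PolicyVersion]:
        return list(self._versions[policy_id])

def find_contradictions(registry, policy_ids, negation_pairs):
    """Naive contradiction flagging: surface pairs of policies where
    one contains a term and another contains its listed negation
    (e.g. "recommended" vs. "contraindicated") for human review."""
    flags = []
    for a_term, b_term in negation_pairs:
        holders_a = [p for p in policy_ids
                     if a_term in registry.current(p).text.lower()]
        holders_b = [p for p in policy_ids
                     if b_term in registry.current(p).text.lower()]
        for pa in holders_a:
            for pb in holders_b:
                if pa != pb:
                    flags.append((pa, pb, a_term, b_term))
    return flags
```

The design choice the sketch encodes is the article's point: the documents an AI reads are treated as versioned, approved safety artifacts, not as files anyone can silently edit. A real deployment would replace the keyword check with clinical review workflows, but even this toy version makes "who changed the protocol, when, and with whose sign-off" answerable.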
What to Watch After the Conference
HHS has 450 RFI comments to process with no published timeline for response. Colorado's AI Act effective June 30 will be the first real test case for state-level clinical AI enforcement — watch for guidance that establishes accuracy standards by default. ECRI's #1 ranking means this enters accreditation conversations in the coming months; when it shows up on a JCAHO survey, the "we're still working on governance" answer becomes harder to give.
The conference is over. The gap is documented. The clock is running.