Fitbit's AI Coach Can Read Your Medical Records. The Problem Isn't Who's Watching — It's What It's Reading.
Google's Fitbit will link full medical records to its AI coach in April. Everyone's worried about privacy. The quieter risk is that those records might be wrong.
Starting in April 2026, Fitbit users in the US can link their lab results, prescription medications, and doctor visit history directly to the app's AI coach — powered by Google's Gemini. The announcement came Tuesday at The Check Up, Google's annual health event, where the company framed it as a safety improvement: "When your coach understands your medical history, its guidance becomes safer, more relevant and more personalized" (Google).
The coverage since has been almost entirely about privacy. Who can access your records. What Google will do with the data. Whether this crosses a line.
Those are real questions. But there's a quieter one that almost nobody is asking, and it may actually matter more:
What if the records are wrong?
Our Take: The Dangerous Assumption Is That the Records Are Accurate
The privacy debate assumes the medical records flowing into Fitbit's AI are a faithful representation of your health status. They often aren't.
Medical record systems — EHRs — are the product of years of fragmented inputs. Your GP enters a prescription. Your cardiologist's office adds a different medication. A hospital stay creates a third record that may or may not sync with either. Lab values accumulate without timestamps prominently surfaced. Diagnoses from one provider contradict notes from another. Allergies get recorded inconsistently across systems.
The AI isn't designed to adjudicate these conflicts. It reads the documents it's given.
This isn't a Google engineering failure or a Gemini problem. It's a knowledge-quality problem that lives in the underlying data, and it follows those records into every recommendation the AI generates.
The Records Your AI Coach Will Read Are Not What You Think They Are
Medication reconciliation is one of the most studied problems in healthcare informatics. The numbers are uncomfortable. Research published in JAMIA — the Journal of the American Medical Informatics Association — has found discrepancy rates of 40-60% in medication lists at the point of care. Patients get prescriptions from multiple providers, fill them at different pharmacies, stop taking medications without formally telling anyone, and start supplements or OTC drugs that never make it into any record at all.
This is known. It's why trained nurses spend time on medication reconciliation at hospital admission. The manual check exists precisely because the record can't be trusted without it.
When that record becomes the primary input for an AI health coach — one that synthesizes it automatically, without a human reconciliation step — the known problem doesn't go away. It just moves downstream into the AI's recommendations.
The same applies to lab results. A glucose reading from 18 months ago sits in your chart looking exactly like a reading from last Tuesday. An AI system that doesn't flag temporal context will treat both with equal confidence. If your values have shifted since then, the advice could be calibrated to a version of your health that no longer exists.
And it applies to conflicting diagnoses. If your records contain a diagnosis from one provider that another never confirmed, or a clinical impression that was later revised, the AI will read all of it — and try to reason across it as if it's a coherent picture.
The Model Won't Hallucinate. That's Almost the Problem.
The more reliable AI systems become at grounding responses in source documents, the more document quality matters. A model that hallucinates can at least be called out for fabrication. A model that faithfully reads flawed records sounds authoritative while being wrong — which is harder to catch, especially for someone without medical training.
Google's framing — that grounding the AI in your clinical history makes it "safer" — is correct in theory. Grounded AI is generally safer than ungrounded AI. But "grounded" means anchored to your documents, not anchored to your actual current health status. Those are not the same thing, and the gap between them is where the risk lives.
This isn't hypothetical. HIMSS26 surfaced the same issue at the enterprise level: federal officials acknowledged that healthcare AI operates with "few guardrails," even as AI-powered systems run across thousands of hospital workflows. The documentation problem underlies most of it. Epic's agentic rollout across 85% of US healthcare raised the same question nobody wanted to answer: who is checking the accuracy of what the AI reads?
At the consumer level, the stakes are different — nobody's ordering surgery based on Fitbit's coaching. But confident, personalized-sounding health advice built on outdated inputs is not a trivial concern. People act on it.
What a Responsible Approach Looks Like
The integration isn't inherently wrong. A coach that knows you have hypertension and a sulfa allergy is more useful than one that doesn't.
The gap is in what happens to the records before the AI reads them.
A responsible approach would flag staleness — this lab value is 18 months old, treat it as provisional — surface conflicts where two medication records contradict each other, and prompt the user to reconcile rather than silently collapse competing data sources into one confident answer.
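None of this requires exotic technology. As a rough sketch of that pre-processing step (the record fields, status labels, and the one-year staleness threshold here are invented for illustration, not drawn from any real EHR schema or clinical standard), the logic might look like:

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=365)  # illustrative cutoff, not a clinical standard

def preprocess(records, today=None):
    """Annotate records with staleness and conflict flags instead of
    silently collapsing them into one confident answer."""
    today = today or date.today()
    flags = []

    # Staleness: old values are surfaced as provisional, not treated as current
    for rec in records:
        if today - rec["recorded"] > STALE_AFTER:
            flags.append(
                f"STALE: {rec['kind']} '{rec['value']}' is from "
                f"{rec['recorded']}; treat as provisional."
            )

    # Conflicts: the same medication reported with different statuses
    # by different sources should prompt reconciliation, not be merged
    by_med = {}
    for rec in records:
        if rec["kind"] == "medication":
            by_med.setdefault(rec["value"], set()).add(rec["status"])
    for med, statuses in by_med.items():
        if len(statuses) > 1:
            flags.append(
                f"CONFLICT: '{med}' has contradictory statuses across "
                f"sources; ask the user to reconcile."
            )
    return flags

records = [
    {"kind": "lab", "value": "glucose 6.1 mmol/L",
     "recorded": date(2024, 1, 10), "source": "hospital"},
    {"kind": "medication", "value": "lisinopril", "status": "active",
     "recorded": date(2025, 6, 2), "source": "GP"},
    {"kind": "medication", "value": "lisinopril", "status": "discontinued",
     "recorded": date(2025, 9, 14), "source": "cardiologist"},
]
for flag in preprocess(records, today=date(2025, 11, 20)):
    print(flag)
```

The point of the sketch is the shape of the output: warnings that travel with the data, so the downstream AI (or the user) sees the uncertainty instead of a flattened, confident record.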
Platforms built around RAG and document intelligence already work with these problems at the enterprise level — contradiction detection across documents, automated staleness alerts, source attribution that preserves provenance rather than flattening all inputs into one confident answer. The architecture exists. The question is whether it gets applied to consumer health AI before the harm cases accumulate, or after.
That's a design choice. Right now, it looks like it's being made by default rather than by intention.
The Closer
The Fitbit announcement is a bellwether. Every healthcare AI system is a knowledge management problem wearing a machine learning costume. The AI is only as safe as the data it reads — and medical records, for structural reasons that predate AI entirely, are not uniformly accurate, current, or internally consistent.
Privacy is worth fighting about. But the question of what happens after the AI has permission to read those records — specifically, what it does when those records are wrong — is the one that's going to determine whether this goes well.
Nobody is asking it yet. That's usually when it matters most.
Frequently Asked Questions
Is it safe to connect my medical records to Fitbit's AI coach?

The safety question is more complex than it appears. The AI won't hallucinate — it will read your actual records. But if those records contain stale medication lists, outdated lab values, or conflicting diagnoses from different providers, the AI may generate confident-sounding guidance built on inaccurate inputs. Safety depends as much on record quality as on model quality.
Why are the medication lists in medical records so often wrong?

Medication reconciliation is notoriously difficult. Patients receive prescriptions from multiple providers, fill them at different pharmacies, and stop medications without formally notifying anyone. Studies published in JAMIA have found discrepancy rates of 40-60% in medication lists at the point of care — a problem that doesn't disappear when those lists are fed into an AI system.
Will the AI know when a record is outdated?

The model doesn't flag uncertainty the way a clinician might. A lab result from 18 months ago looks the same as one from last week. An AI health coach reading it will reason from that value as if it's current. The result is not hallucination — it's confident reasoning from stale inputs, which may be harder to catch precisely because it sounds authoritative.
How is this different from the privacy concerns?

Privacy concerns focus on who can access your records. Document quality concerns focus on what happens after the AI reads them. Both matter, but most coverage is treating the access problem as the only problem. In practice, a bad recommendation from an AI reading your outdated records can affect your health today — without any third party ever seeing the data.
What safeguards would a responsible integration need?

At minimum: contradiction detection across data sources, staleness indicators so AI systems treat old values as provisional rather than authoritative, and ongoing reconciliation processes. The challenge is that medical records weren't designed to be machine-read at the speed and confidence AI operates at.