6D At-Risk Analysis
At Risk — AI Healthcare Regulatory Gap — Patient Safety

The 200 Million Patient

Every week, 200 million people ask AI chatbots for medical advice. The tools are not regulated as medical devices. They are not validated for clinical use. They have suggested incorrect diagnoses, recommended unnecessary testing, promoted subpar medical supplies, and invented body parts — while sounding like trusted experts. ECRI, the world’s leading patient safety organisation, has placed AI chatbot misuse as the #1 health technology hazard for 2026 and “navigating the AI diagnostic dilemma” as the #1 patient safety concern — the first time a single technology has topped both annual lists simultaneously.

At the same time, AI diagnostic tools are matching or exceeding human performance in specific imaging tasks, reducing interpretation times, and expanding access to specialist-level analysis. The technology is simultaneously the most promising advance in diagnostic medicine and the most dangerous unregulated medical tool in widespread use.

The regulatory gap is the widest in any sector: 250+ state bills across 34+ states, no federal law, FDA frameworks designed for static devices struggling to govern adaptive AI, and a proposed SANDBOX Act that would let companies waive federal regulations for up to a decade. OpenAI is launching ChatGPT Health. The Joint Commission is planning a voluntary — not mandatory — AI certification programme. Academics are discussing licensing AI as “advanced clinical practitioners.” And 40 million people a day are already using unvalidated chatbots for health decisions. The 200 million patient is already being seen. The question is who is accountable when the diagnosis is wrong.

40M
Daily Health Users
#1 + #1
Hazard & Safety
66%
Critical Missed
250+
State AI Bills
2,970
FETCH Score
6/6
Dimensions Hit
01

The Scale of the Unregulated Consultation

40M
Daily Users
ChatGPT health queries per day (OpenAI data).
200M
Weekly Queries
25% of ChatGPT’s 800M users ask health questions weekly.
5%+
All Messages
More than 5% of all ChatGPT messages are healthcare-related.
0
Federal Laws
No comprehensive federal AI healthcare legislation.

The volume dwarfs any traditional healthcare system. 200 million weekly healthcare consultations from a single AI platform exceed the combined weekly patient visits of every hospital system in the United States. And ChatGPT is just one of several chatbots being used — Claude, Copilot, Gemini, and Grok are all answering health questions at scale. The total weekly AI healthcare consultation volume across all platforms is likely several times higher.[1]
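The headline figures can be cross-checked against each other. A quick sketch, using only the published numbers from [1] and [10] (the 7× comparison is a rough bound, since daily-user counts accumulate repeat users over a week):

```python
# Cross-check of the reported usage figures (sources [1], [10]).
total_users = 800_000_000          # ChatGPT weekly active users
weekly_health_share = 0.25         # 25% ask health questions weekly
daily_health_users = 40_000_000    # daily health-query users (OpenAI data)

weekly_health_users = int(total_users * weekly_health_share)
print(f"{weekly_health_users:,}")  # 200,000,000 — matches the headline figure

# 40M daily users implies ~280M user-days per week, which comfortably
# covers 200M distinct weekly users once repeat usage is accounted for.
assert daily_health_users * 7 >= weekly_health_users
```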

OpenAI is leaning in, not pulling back. The company is launching ChatGPT Health, a dedicated health and wellness experience, which it describes as designed “to support, not replace medical care” — but the 40 million daily users are not reading disclaimers. They are asking about symptoms, medications, diagnoses, and treatment options, and receiving responses that sound authoritative regardless of their accuracy. Higher healthcare costs and hospital closures, particularly in rural areas, are driving more people to these tools as substitutes for, not supplements to, professional care.[4]

02

Documented Harms

Incorrect Diagnoses

Chatbots suggested wrong diagnoses with high confidence. Accuracy dropped precipitously when prompts were conversational rather than textbook-like descriptions of conditions.[2]

Invented Body Parts

ECRI documented chatbots fabricating anatomical structures in response to medical questions while maintaining an authoritative tone that would be indistinguishable from accurate information to a layperson.[1]

Burns Risk from Bad Advice

A chatbot incorrectly approved electrosurgical return electrode placement on a shoulder blade — advice that, if followed, would put the patient at risk of serious burns.[1]

66% Critical Conditions Missed

Tested ML models failed to recognise two-thirds of critical or deteriorating health conditions and injuries in synthesised clinical cases.[2]

Bias Amplification

Training data biases distort how AI interprets information, leading to responses that reinforce stereotypes and health inequities. Rare diseases and underrepresented populations are disproportionately affected.[1]

Unnecessary Testing

Chatbots recommended diagnostic tests that were clinically unnecessary, exposing patients to potential harm from invasive procedures and increasing healthcare costs without diagnostic benefit.[1]

The core problem is that chatbots predict word sequences based on statistical patterns, not medical understanding. They are programmed to sound confident and to always provide an answer. A patient asking about chest pain will receive a response that reads like an expert opinion regardless of whether the underlying analysis is clinically sound. The chatbot cannot say “I don’t know” in a way that conveys genuine uncertainty — it will produce an answer, because that is what language models do.[4]
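The “always provides an answer” behaviour is structural, not a bug. A toy sketch (the vocabulary and probabilities below are invented for illustration): decoding picks a token from a probability distribution, so the model always emits *something*, and nothing in the mechanism distinguishes a 26% guess from a 99% certainty:

```python
# Toy next-token picker: why a language model always produces an answer.
# Vocabulary and probabilities are hypothetical, for illustration only.
def next_token(distribution: dict) -> str:
    # Greedy decoding: return the most probable token,
    # no matter how low its absolute probability is.
    return max(distribution, key=distribution.get)

# Even when the model is deeply uncertain (near-uniform probabilities),
# decoding still emits one confident-looking token — never "I don't know".
uncertain = {"angina": 0.26, "reflux": 0.25, "anxiety": 0.25, "costochondritis": 0.24}
print(next_token(uncertain))  # "angina" — a 26% guess, delivered as fact
```

Real chatbots use richer sampling strategies than this greedy picker, but the underlying point holds: uncertainty lives in the distribution, and decoding discards it.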

03

The Widest Regulatory Gap

Texas

Written disclosure to patients that AI is being used, prior to or on date of service. Effective January 1, 2026.[5]

California

Bans AI from using terms implying healthcare licensure. Requires notification users are interacting with AI. Regulates companion chatbots. Effective January 1, 2026.[6]

Illinois

Prohibits AI from making independent therapeutic decisions, directly interacting in therapeutic communication, or generating treatment plans without licensed review.[5]

Federal

No comprehensive AI healthcare law. FDA, CMS, HHS collaborating on action plan. Joint Commission planning voluntary certification. SANDBOX Act proposes waiving regulations for a decade.[3][7]

The regulatory architecture is a patchwork. Seven states have passed laws specifically targeting AI mental health chatbots. Texas requires disclosure. California bans implied licensure. Illinois bans independent therapeutic decisions. Ohio wants written consent. Pennsylvania wants disclaimers. But there is no federal law. The FDA’s frameworks were built for static medical devices with specific indications that don’t change after approval. AI, particularly adaptive AI, is fundamentally different — it learns, changes, and produces different outputs depending on input. The FDA’s Predetermined Change Control Plan (PCCP) is a first step toward adaptive oversight, but it is untested at scale.[5][7]

The liability question is unresolved. When a chatbot gives incorrect medical advice and a patient is harmed, who is responsible? The chatbot developer? The platform? The healthcare system that allowed its use? The clinician who relied on it? Current malpractice frameworks have no clear answer. ECRI research found that jurors react differently to AI-assisted malpractice depending on how radiologists used the technology — suggesting the legal framework is evolving through litigation rather than legislation.[2]

04

The Cross-Sector Pattern

This is the third sector in the case library where the same structural pattern has appeared: AI technology deployed at unprecedented scale, regulatory framework designed for a previous paradigm, patchwork jurisdictional response, no accountability framework for when AI causes harm.

UC-088 — The Forecast Paradox

Weather AI: Better on Average, Worse at the Tails

AI weather models achieved 99.7% compute reduction and better average forecasts, but degraded tropical cyclone intensity prediction. No WMO governance framework for AI forecasts. The same pattern: revolutionary technology with specific failure modes in the cases that matter most, deployed ahead of the regulatory architecture needed to govern it.

UC-082 / UC-083 — AI Agent Risk

Agent Autonomy: When AI Acts Without Oversight

AI agents operating with increasing autonomy in decision-making. Healthcare chatbots are a specific instance of the agent risk pattern: they receive a question, make an assessment, and deliver a recommendation — all without human oversight. Academics have proposed licensing AI as “advanced clinical practitioners,” which is the agent autonomy thesis applied to the highest-stakes domain.

UC-068 — Bedside Manner

The Human-AI Healthcare Interface

UC-068 examined how AI changes the clinician-patient relationship. UC-097 shows that the question has evolved: 200 million people per week are now bypassing the clinician entirely. The bedside manner question is no longer “how do doctors use AI?” — it is “what happens when patients replace doctors with AI?”

05

The 6D Cascade

Dimension · Evidence
Customer / Patient (D1) — Origin · 78
At Risk
200 million weekly healthcare consultations from unvalidated AI tools. 40 million daily users on ChatGPT alone. Documented harms: incorrect diagnoses, invented body parts, burn-risk advice, unnecessary testing, 66% of critical conditions missed. Patients cannot distinguish confident-sounding AI from accurate AI. Rural healthcare decline is driving more patients to chatbots as substitutes for professional care. The customer dimension is the origin because 200 million people are already receiving the unregulated service — the harm pathway is active at massive scale.[1][2]
Regulatory / Governance (D4) — Origin · 75
At Risk
250+ state bills, 34+ states, no federal law. FDA frameworks designed for static devices. PCCP untested at scale. SANDBOX Act proposes decade-long regulatory waivers. 7 state mental health chatbot laws. Joint Commission voluntary certification. Patchwork jurisdiction. Liability unresolved. The regulatory dimension is co-origin because the gap between deployment scale and governance is the widest in any sector in the case library.[5][6][7]
Quality / Product (D5) — L1 · 72
AI diagnostics match or exceed human performance in specific imaging tasks (mammography, retinal scans, certain cancers). Google’s mammography AI reduced interpretation time by a third. But accuracy drops precipitously in conversational contexts, rare diseases, and underrepresented populations. 66% of critical conditions missed in synthetic cases. The quality dimension captures the paradox: excellent in narrow, validated applications; unreliable in the broad, unvalidated use that 200 million people are actually doing.[2]
Revenue / Financial (D3) — L1 · 68
OpenAI launching ChatGPT Health as a dedicated product. AI diagnostics are a multi-billion-dollar growth market. Malpractice liability exposure is unquantified but potentially enormous. Unnecessary testing recommended by chatbots increases healthcare costs. The revenue dimension reflects both the commercial opportunity and the liability risk — the same companies building the products face the largest exposure when they fail.[4]
Employee / Talent (D2) — L1 · 65
AMA survey shows AI use among American doctors has doubled. ECRI warns that AI can erode clinicians’ critical thinking skills. Staff training is inadequate — most healthcare workers don’t understand when AI is being used, how it functions, or its limitations. Illinois requires licensed professional review of AI treatment plans. The employee dimension captures the clinical workforce being reshaped by tools they are not trained to evaluate.[2][7]
Operational (D6) — L2 · 62
Healthcare organisations lack formal AI governance structures. Generic vendor validation is insufficient. Local validation required but resource-intensive. Cybersecurity risks from legacy medical devices compounded by AI integration. “Digital darkness” events (#2 on ECRI hazard list) could disable AI-dependent systems. The operational dimension reflects healthcare systems unprepared for the tools they are already deploying.[7][8]
6/6
Dimensions Hit
10×–15×
Multiplier (Extreme)
2,970
FETCH Score
Origin — D1 Customer (78) ⚠ · D4 Regulatory (75) ⚠
L1 — D5 Quality (72) · D3 Revenue (68) · D2 Employee (65)
L2 — D6 Operational (62)
CAL Source — Cascade Analysis Language — machine-executable representation
-- The 200 Million Patient: 6D At-Risk Cascade
FORAGE healthcare_ai_regulatory_gap
WHERE daily_health_ai_users > 40_000_000
  AND weekly_health_queries > 200_000_000
  AND regulated_as_medical_device = false
  AND validated_for_clinical_use = false
  AND documented_harms_critical = true
  AND federal_ai_healthcare_law = false
  AND state_bills_count > 250
  AND ecri_hazard_rank = 1
  AND ecri_safety_rank = 1
ACROSS D1, D4, D5, D3, D2, D6
DEPTH 3
SURFACE two_hundred_million_patient

DIVE INTO regulatory_gap
WHEN deployment_scale_massive AND governance_absent AND harm_documented AND liability_unresolved
TRACE at_risk_cascade
EMIT at_risk_signal

DRIFT two_hundred_million_patient
METHODOLOGY 88  -- FDA oversight tradition, ECRI monitoring, malpractice framework, medical device regulation, clinical training, Hippocratic principle
PERFORMANCE 34  -- 200M weekly unvalidated consultations, 66% critical missed, invented body parts, burns advice, 0 federal laws, 250+ patchwork bills, liability unresolved

FETCH two_hundred_million_patient
THRESHOLD 1000
ON EXECUTE CHIRP at_risk "200 million weekly health consultations from unvalidated AI. #1 health hazard + #1 patient safety concern (ECRI, 2026). 66% critical conditions missed. Invented body parts. Burns advice. 40M daily ChatGPT health users. 250+ state bills, 0 federal laws. FDA frameworks for static devices, not adaptive AI. Liability unresolved. The widest regulatory gap in any sector applied to the highest-stakes domain. Same cross-sector pattern as UC-088 (weather), UC-082/083 (agents), UC-068 (bedside manner). AI is better at narrow validated tasks and unreliable at broad unvalidated use — and 200M people are doing the broad unvalidated use every week."

SURFACE analysis AS json
SENSE — D1+D4 dual origin — ECRI: AI chatbot misuse is #1 health technology hazard 2026. AI diagnostic dilemma is #1 patient safety concern 2026. First time single technology tops both lists. ChatGPT: 40M daily health users, 200M+ weekly, 5%+ of all messages. Not regulated as medical devices. Not validated clinically. Harms: incorrect diagnoses, invented body parts, burns advice, 66% critical missed, unnecessary testing, bias amplification. OpenAI launching ChatGPT Health. Regulatory: 250+ state bills, 34+ states, 0 federal laws. Texas disclosure. California anti-licensure. Illinois therapeutic ban. 7 mental health chatbot laws. FDA PCCP untested. Joint Commission voluntary cert. SANDBOX Act decade waiver.
ANALYZE — D5 Quality: paradox — AI matches/exceeds human in specific imaging, mammography 33% faster, but 66% miss rate on critical conditions in general use. Accuracy textbook >> conversational. Narrow validated = excellent. Broad unvalidated = dangerous. D3 Revenue: ChatGPT Health as product. Multi-billion market. Malpractice exposure unquantified. Unnecessary testing costs. D2 Employee: doctor AI use doubled. Critical thinking erosion risk. Staff untrained. Illinois requires licensed review. D6 Operational: no governance structures. Generic vendor validation insufficient. Local validation needed. Digital darkness #2 ECRI. Legacy cybersecurity risks.
MEASURE — DRIFT = 54 (Methodology 88 − Performance 34). Healthcare’s patient safety methodology is among the most developed in any industry: FDA device regulation, ECRI monitoring, malpractice frameworks, clinical training standards, the Hippocratic principle, informed consent requirements. The 88 reflects a genuinely rigorous safety architecture built over decades. The performance at 34 reflects that none of it was designed for what is actually happening: 200 million weekly unregulated consultations from tools that sound authoritative but can invent anatomy. The gap of 54 is the highest in any at-risk case in the library.
DECIDE — FETCH = 2,970 → EXECUTE (High Priority) (threshold: 1,000). Chirp: 70.0. DRIFT: 54 (elevated). Confidence: 0.90 (ECRI data, OpenAI disclosures, state legislation text, peer-reviewed studies). 6/6 dimensions, 10×–15× multiplier, at-risk dimensions D1 and D4. 3D Lens 8.7/10. This is the highest-scoring at-risk case in the library and the first healthcare case since UC-068. The cross-sector pattern — identical regulatory gap structure in weather (UC-088), AI agents (UC-082/083), and now healthcare — elevates this from a sector case to a systemic governance signal.
ACT — At Risk — the 200 million patient is the Forecast Paradox (UC-088) applied to human life. In weather, AI is better on average but worse at the tails, and the tails are hurricanes that kill people. In healthcare, AI is better at narrow validated tasks (specific imaging) but unreliable at broad unvalidated use (general medical advice), and the unreliable use is what 200 million people are actually doing every week. The regulatory gap is structurally identical: technology deployed at unprecedented scale, governance designed for a previous paradigm, patchwork response, no accountability framework. But the stakes are incomparably higher. A wrong weather forecast destroys property. A wrong diagnosis kills a patient. The at-risk dimensions (D1, D4) indicate where the cascade will break: a high-profile patient harm event traced to chatbot advice, followed by a wave of litigation for which the legal system has no framework, followed by emergency regulation that may be either too restrictive (killing the genuine diagnostic benefits) or too permissive (allowing the harm to continue). The window for proactive governance is narrowing every week that 200 million consultations occur without oversight.
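Read operationally, the CAL block is a conjunction gate plus two arithmetic steps. A minimal Python sketch of that reading — the DRIFT subtraction and the EXECUTE threshold are stated in the MEASURE and DECIDE lines above; the field names and the exact comparison semantics are assumptions, not the real engine:

```python
# Illustrative reading of the CAL cascade above (not the actual CAL runtime).
case = {
    "daily_health_ai_users": 40_000_000,
    "weekly_health_queries": 200_000_000,
    "regulated_as_medical_device": False,
    "validated_for_clinical_use": False,
    "documented_harms_critical": True,
    "federal_ai_healthcare_law": False,
    "state_bills_count": 250,  # "250+"
    "ecri_hazard_rank": 1,
    "ecri_safety_rank": 1,
}

# FORAGE ... WHERE: every predicate must hold for the cascade to surface.
forage_hit = (
    case["daily_health_ai_users"] >= 40_000_000
    and case["weekly_health_queries"] >= 200_000_000
    and not case["regulated_as_medical_device"]
    and not case["validated_for_clinical_use"]
    and case["documented_harms_critical"]
    and not case["federal_ai_healthcare_law"]
    and case["state_bills_count"] >= 250
    and case["ecri_hazard_rank"] == 1
    and case["ecri_safety_rank"] == 1
)

# MEASURE: DRIFT = Methodology - Performance (88 - 34 = 54).
drift = 88 - 34

# DECIDE: FETCH = 2,970 against THRESHOLD 1000 -> EXECUTE (and CHIRP).
fetch, threshold = 2_970, 1_000
execute = forage_hit and fetch >= threshold

print(forage_hit, drift, execute)  # True 54 True
```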
06

Key Insights

The Forecast Paradox Applied to Life and Death

UC-088 documented how AI weather models are better on average but degrade at the tails — the hurricanes that matter most. Healthcare AI exhibits the exact same pattern: excellent in narrow, validated imaging tasks, unreliable in the broad general-purpose medical advice that 200 million people actually use it for. In weather, the tail risk is property damage and preparation failure. In healthcare, the tail risk is death. The pattern is identical. The stakes are not.

The Largest Unregulated Medical Practice in History

200 million weekly healthcare consultations from tools that are not regulated as medical devices, not validated for clinical use, and not subject to malpractice liability. No physician in history has seen 200 million patients a week. No medical device has been deployed at this scale without FDA clearance. The AI chatbot is practising medicine without a licence at a volume that exceeds the entire healthcare system — and the regulatory framework has not caught up to this reality.

The Patchwork Guarantee of Inconsistent Harm

Texas requires disclosure. California bans implied licensure. Illinois prohibits independent therapeutic decisions. Ohio wants written consent. Seven states regulate mental health chatbots. But a patient in Wyoming has no protections at all. The patchwork guarantees that patient safety depends on geography — the same chatbot, giving the same wrong advice, is regulated in one state and completely unregulated in the next. No other sector with this harm profile operates under such inconsistent governance.

The Agent Autonomy Thesis at Clinical Scale

UC-082/083 mapped the risk of AI agents acting with increasing autonomy. Healthcare chatbots are the purest expression of that risk: they receive a medical question, make an assessment, deliver a recommendation, and the patient acts on it — all without human oversight. Academics are now seriously discussing licensing AI as “advanced clinical practitioners.” The fact that this is even a legitimate policy discussion reveals how far AI autonomy has advanced and how far governance has lagged behind it.

Sources

[1]
ECRI, “Top 10 Health Technology Hazards 2026” — AI chatbot misuse #1 hazard, 40M daily users, invented body parts, burns advice, incorrect diagnoses, unnecessary testing, bias
ecri.org
January 21, 2026
[2]
Radiology Business / ECRI, “Navigating the AI diagnostic dilemma is healthcare’s No. 1 patient safety concern in 2026” — #1 safety concern, 66% critical missed, accuracy drop conversational vs textbook, malpractice juror reactions
radiologybusiness.com
March 9, 2026
[3]
blueBriX, “The 2026 AI Reset: A New Era for Healthcare Policy” — 250+ state bills, 34+ states, FDA/CMS/HHS collaboration, adaptive oversight, IBM Watson cautionary tale, PCCP
bluebrix.health
January 29, 2026
[4]
Health Data Management, “AI chatbots pose an unregulated, unmanaged risk in healthcare” — ChatGPT Health launch, rural healthcare decline driver, word prediction vs understanding, confidence without accuracy
healthdatamanagement.com
February 2026
[5]
Manatt Health, “Health AI Policy Tracker” — Texas TRAIGA, California AB 489, Illinois therapy prohibition, Ohio consent, Pennsylvania disclaimers, 7 mental health chatbot laws, SANDBOX Act, licensing as practitioners
manatt.com
2026
[6]
Akerman LLP, “New Year, New AI Rules: Healthcare AI Laws Now in Effect” — California AB 489, suicidal ideation protocols, state patchwork, Indiana/Kentucky/Rhode Island privacy, federal preemption risk
akerman.com
January 2026
[7]
Jimerson Birr, “Healthcare AI Regulation 2026: New Compliance Requirements” — Joint Commission + CHAI playbooks, voluntary AI certification, governance requirements, local validation, continuous monitoring
jimersonfirm.com
February 2026
[8]
Bipartisan Policy Center, “FDA Oversight: Understanding the Regulation of Health AI Tools” — FDA lifecycle regulation, SBOM requirements, MAUDE database limitations, postmarket surveillance gaps, QSR requirements
bipartisanpolicy.org
November 10, 2025
[9]
Advisory Board / ECRI, “ECRI names the top 10 patient safety concerns 2026” — AI diagnostic dilemma #1, staffing shortages, culture of blame, psychological safety, workforce turnover
advisory.com
March 11, 2026
[10]
Healthcare Dive / ECRI, “ECRI names misuse of AI chatbots as top health tech hazard for 2026” — LLMs not medical devices, OpenAI 5% health messages, 25% of 800M users weekly, cybersecurity legacy devices
healthcaredive.com
January 22, 2026

The headline is the trigger. The cascade is the story.

One conversation. We’ll tell you if the six-dimensional view adds something new — or confirm your current tools have it covered.