Ontic (Beta)
Tier 2 — Industry Standard · industry oracle

Healthcare — AI Governance Landscape

Publisher

Ontic Labs

Version

v1

Last verified

February 15, 2026

Frameworks

21st Century Cures Act (information blocking), 42 CFR Part 2 (substance use confidentiality), ACA, CAP accreditation standards, CLIA (42 CFR 493), CMS Conditions of Participation, CMS Conditions of Participation (42 CFR 483), CMS/HHS regulations, Controlled substance prescribing rules, Duty to warn/protect statutes, EMA guidelines, EMTALA, ERISA, EU MDR/IVDR, Elder abuse reporting statutes, FDA 21 CFR Parts 11, 50, 56, 312, 812, 820, FDA AI/ML SaMD guidance, FDA IVD regulations, FTC Health Breach Notification Rule, HIPAA

Industries

healthcare

Healthcare - Overview

Physician AI adoption jumped from 38% to 66% in one year, and drug discovery drew $3.3B in venture capital. But governance lags at 35%, and HIPAA, HITECH, and the CMS Conditions of Participation are non-negotiable. When a clinical decision traces back to a model, the evidentiary chain must be complete.

Physician AI adoption jumped from 38% to 66% in one year. The clinical value is real. The governance floor is strong: HIPAA, HITECH, and the CMS Conditions of Participation create dense regulatory coverage. But that floor was built for human clinical judgment, not model-assisted decisions. When a clinical decision support system influences a treatment plan and the outcome is adverse, malpractice discovery asks for the evidence chain: what data entered the model, what the model recommended, and what the clinician did with it. CMS audit requirements ask the same question in a different format. EHR vendors do not provide that chain, and compliance teams cannot reconstruct it after the fact. The evidentiary bridge between model output and clinical action is the gap that remains open.
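
To make that bridge concrete, here is a minimal sketch of what such an evidence record could capture, in Python. The schema, field names, and hashing choice are illustrative assumptions, not a reference to any EHR vendor API or Ontic product interface.

    # Minimal sketch of an evidence-chain record for a model-assisted
    # clinical decision. All field names are illustrative, not a vendor schema.
    import hashlib
    import json
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class ClinicalAIEvidenceRecord:
        patient_ref: str       # opaque internal reference, not raw PHI
        input_digest: str      # hash of the data that entered the model
        model_id: str          # model name and version
        model_output: str      # what the model recommended
        clinician_action: str  # what the clinician did with it
        recorded_at: str       # UTC timestamp

    def make_record(patient_ref, model_inputs, model_id, output, action):
        digest = hashlib.sha256(
            json.dumps(model_inputs, sort_keys=True).encode()
        ).hexdigest()
        return ClinicalAIEvidenceRecord(
            patient_ref=patient_ref,
            input_digest=digest,
            model_id=model_id,
            model_output=output,
            clinician_action=action,
            recorded_at=datetime.now(timezone.utc).isoformat(),
        )

Hashing the model inputs rather than storing them keeps raw PHI out of the audit trail while still letting an auditor verify, byte for byte, which data the model saw.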

This industry includes 8 segments in the Ontic governance matrix, spanning risk categories from Category 1 — Assistive through Category 2 — Regulated Decision-Making. AI adoption index: 5/5.

Healthcare - Regulatory Landscape

The healthcare sector is subject to 43 regulatory frameworks and standards across its segments:

  • 21st Century Cures Act (information blocking)
  • 42 CFR Part 2 (substance use confidentiality)
  • ACA
  • CAP accreditation standards
  • CLIA (42 CFR 493)
  • CMS Conditions of Participation
  • CMS Conditions of Participation (42 CFR 483)
  • CMS/HHS regulations
  • Controlled substance prescribing rules
  • Duty to warn/protect statutes
  • EMA guidelines
  • EMTALA
  • ERISA
  • EU MDR/IVDR
  • Elder abuse reporting statutes
  • FDA 21 CFR Parts 11, 50, 56, 312, 812, 820
  • FDA AI/ML SaMD guidance
  • FDA IVD regulations
  • FTC Health Breach Notification Rule
  • HIPAA
  • HIPAA Privacy & Security Rules
  • HIPAA Privacy/Security/Breach Rules
  • HITECH Act
  • ICH E6(R2) GCP
  • Interstate Medical Licensure Compact
  • LDT oversight framework
  • MHPAEA
  • MLR requirements
  • No Surprises Act
  • OBRA 87
  • Ryan Haight Act (DEA)
  • Stark Law / Anti-Kickback
  • State clinical lab licensing
  • State external review rules
  • State health privacy laws
  • State insurance codes
  • State licensing boards
  • State medical practice acts
  • State mental health parity laws
  • State ombudsman programs
  • State pharmacy laws
  • State survey agency authority
  • State telehealth parity laws

The specific frameworks that apply depend on the segment and scale of deployment. Cross-industry frameworks (GDPR, ISO 27001, EU AI Act) may apply in addition to sector-specific regulation.

Healthcare — Digital Health Startup

Risk Category: Category 1 — Assistive
Scale: SMB
Applicable Frameworks: HIPAA Privacy & Security Rules, FTC Health Breach Notification Rule, State health privacy laws, 21st Century Cures Act (information blocking)

HIPAA does not have a startup exemption for AI-generated patient content.

The Governance Challenge

Digital health startups ship AI-powered patient FAQs, clinical note summarization, and eligibility explanations. Product velocity is high. HIPAA Privacy and Security Rules apply to every AI interaction that touches PHI — regardless of company size or funding stage. The FTC Health Breach Notification Rule adds a second layer. Most digital health companies discover the governance gap only when a patient complaint or a payer audit surfaces it.

Regulatory Application

HIPAA Privacy and Security Rules govern PHI in AI systems without exception. FTC Health Breach Notification Rule applies to non-covered entities handling health data. State health privacy laws (e.g., Washington MHMD, California CMIA) add jurisdiction-specific requirements. 21st Century Cures Act information blocking rules constrain how AI systems handle health records.
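
One way to honor that rule at startup velocity is a hard gate in front of every model call. The sketch below shows the shape of such a gate; the regex patterns are a deliberately crude illustration, and a real deployment would need a reviewed de-identification pipeline and a business associate agreement, not four regexes.

    # Illustrative pre-send guard: block obvious identifiers before text
    # reaches a third-party model. A regex pass is a floor, not a HIPAA control.
    import re

    PHI_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
        "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    }

    def phi_findings(text: str) -> list[str]:
        # Return the names of every pattern that matches.
        return [name for name, pat in PHI_PATTERNS.items() if pat.search(text)]

    def guard_or_raise(text: str) -> str:
        found = phi_findings(text)
        if found:
            raise ValueError(
                f"possible PHI detected ({', '.join(found)}); route to human review"
            )
        return text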

AI Deployment Environments

  • Studio: Patient FAQ drafting | Internal clinical note summarization | Research literature triage
  • Refinery: Patient-facing content guardrails | Simple eligibility / benefit explanation checks

Typical deployment path: Studio → Refinery

Evidence

  • Physician AI adoption jumped from 38% to 66% in one year
  • FTC Health Breach Notification Rule enforcement expanded in 2024
  • Digital health VC funding remains robust despite regulatory tightening

Healthcare Provider — Hospital System

Risk Category: Category 2 — Regulated Decision-Making
Scale: Mid-Market
Applicable Frameworks: HIPAA Privacy/Security/Breach Rules, HITECH Act, CMS Conditions of Participation, State medical practice acts, Stark Law / Anti-Kickback, EMTALA

CMS does not have a separate standard for AI-generated patient explanations. The existing standard applies.

The Governance Challenge

Hospital systems deploy AI for staff productivity, scheduling, patient communications, billing code validation, and prior authorization narratives. HIPAA Privacy/Security/Breach Rules, the HITECH Act, and CMS Conditions of Participation apply to every AI-generated output touching patient data or clinical workflows. When an AI-generated patient explanation contains an error — wrong copay, inaccurate diagnosis description, misleading treatment option — the liability falls on the hospital system, not on the model. Existing compliance workflows were built for human-authored content at human speed.
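
A concrete mitigation is to gate release of the draft on agreement with the system of record. The sketch below checks that every dollar figure quoted in an AI-drafted explanation appears in the benefits record; the function names and the benefits_of_record lookup are hypothetical stand-ins, not a hospital system's actual API.

    # Sketch of a release gate: dollar amounts quoted in an AI-drafted patient
    # explanation must match the system of record before the text goes out.
    import re

    def quoted_dollar_amounts(text: str) -> set[str]:
        return set(re.findall(r"\$\s?(\d+(?:\.\d{2})?)", text))

    def release_gate(draft: str, benefits_of_record: dict) -> bool:
        # Build the set of attested figures in both "40" and "40.00" forms.
        allowed = {f"{v:.2f}" for v in benefits_of_record.values()}
        allowed |= {str(int(v)) for v in benefits_of_record.values()}
        # Every quoted figure must be attested; otherwise hold for review.
        return quoted_dollar_amounts(draft) <= allowed

    print(release_gate("Your copay for this visit is $40.",
                       {"specialist_copay": 40.0}))   # True
    print(release_gate("Your copay for this visit is $25.",
                       {"specialist_copay": 40.0}))   # False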

Regulatory Application

HIPAA Privacy and Security Rules govern PHI in every AI workflow. HITECH Act breach notification requirements apply to AI-related PHI exposure. CMS Conditions of Participation apply to AI-generated clinical and administrative outputs. State medical practice acts govern AI-assisted clinical documentation. Stark Law and Anti-Kickback Statute apply to AI-generated referral communications. EMTALA obligations apply to AI-assisted triage.

AI Deployment Environments

  • Studio: Staff productivity tools | Scheduling and admin copilots | Draft patient communications
  • Refinery: Patient-facing explanations | Billing code validation context | Prior-authorization narratives
  • Clean Room: Clinical incident review bundles | High-risk determination narratives with chain-of-custody

Typical deployment path: Refinery → Clean Room

Evidence

  • AMA survey data show physician AI use rising from 38% in 2023 to 66% in 2024
  • CMS Conditions of Participation apply to AI-generated outputs without exception
  • Malpractice discovery beginning to examine AI contribution to clinical decisions
  • Prior authorization AI errors are the fastest-growing patient complaint category

Healthcare — Pharma & Device

Risk Category: Category 2 — Regulated Decision-Making
Scale: Enterprise
Applicable Frameworks: FDA 21 CFR Parts 11, 50, 56, 312, 812, 820, FDA AI/ML SaMD guidance, ICH E6(R2) GCP, EU MDR/IVDR, EMA guidelines, HIPAA, State pharmacy laws

FDA 21 CFR Part 11 traceability requirements do not exempt AI-generated submission narratives.

The Governance Challenge

Pharma and device companies deploy AI for R&D summarization, protocol design assistance, internal safety memos, adverse event report drafting, and label compliance checking. FDA 21 CFR Part 11 requires that electronic records in regulatory submissions be traceable, attributable, and reproducible. ICH E6(R2) GCP requirements apply to AI-assisted clinical trial documentation. When an AI-generated adverse event narrative or label claim flows into an FDA submission, the provenance chain must meet the same GxP standard as human-authored work.
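
One way to make "traceable, attributable, and reproducible" mechanically checkable is a hash-chained log, sketched below under assumed names. To be clear, a hash chain alone is not Part 11 compliance, which also involves validated systems, access controls, and electronic signature requirements; this only illustrates the tamper-evidence property.

    # Illustrative hash chain for AI-assisted submission narratives: each entry
    # commits to its predecessor, so after-the-fact edits break verification.
    import hashlib
    import json
    from datetime import datetime, timezone

    class NarrativeLog:
        def __init__(self):
            self.entries = []
            self._prev = "0" * 64  # genesis value

        def append(self, author: str, model_id: str, narrative: str):
            entry = {
                "author": author,      # attributable: who signed off
                "model_id": model_id,  # traceable: which model assisted
                "narrative": narrative,
                "at": datetime.now(timezone.utc).isoformat(),
                "prev": self._prev,
            }
            entry["digest"] = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
            self._prev = entry["digest"]
            self.entries.append(entry)

        def verify(self) -> bool:
            prev = "0" * 64
            for e in self.entries:
                body = {k: v for k, v in e.items() if k != "digest"}
                digest = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()
                ).hexdigest()
                if e["prev"] != prev or e["digest"] != digest:
                    return False
                prev = e["digest"]
            return True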

Regulatory Application

FDA 21 CFR Parts 11, 50, 56, 312, 812, and 820 create a comprehensive traceability framework. FDA AI/ML SaMD guidance applies to diagnostic and decision-support AI. ICH E6(R2) GCP requires full documentation of clinical trial processes including AI-assisted ones. EU MDR/IVDR and EMA guidelines add cross-border requirements. HIPAA applies to patient data in AI research workflows. State pharmacy laws add jurisdiction-specific constraints.

AI Deployment Environments

  • Studio: R&D and protocol summarization | Internal safety memo drafting
  • Refinery: Adverse event report drafting | Label and IFU compliance checking
  • Clean Room: Clinical trial reporting (FDA) | Diagnostic / decision-support AI governance

Typical deployment path: Refinery → Clean Room

Evidence

  • $5.6B invested in AI-driven biopharma in 2024, up from $1.8B in 2023 (Silicon Valley Bank)
  • FDA AI/ML SaMD guidance expanding in scope
  • Part 11 compliance gaps are the most common FDA warning letter trigger for electronic records
  • ICH E6(R2) requirements apply to AI-assisted trial documentation

Healthcare — Telehealth Platform

Risk Category: Category 2 — Regulated Decision-Making
Scale: Mid-Market
Applicable Frameworks: State medical practice acts, Ryan Haight Act (DEA), Interstate Medical Licensure Compact, HIPAA, State telehealth parity laws, Controlled substance prescribing rules

AI-generated care instructions cross state lines. The medical practice act in each state applies separately.

The Governance Challenge

Telehealth platforms deploy AI for clinician productivity, patient intake summarization, scheduling, patient-facing care instructions, and prescription-related communications. Multi-state operations create compound regulatory exposure — each state's medical practice act applies independently. The Ryan Haight Act (DEA) governs AI-involved prescription workflows. HIPAA applies to every AI interaction with PHI. When an AI-generated care instruction contains an error and the patient is in a different state than the clinician, the liability and jurisdictional questions compound.
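
In code terms, the governing rule set is the union over both jurisdictions, not either one alone. The sketch below shows the shape of that lookup; the two-state rule table is entirely hypothetical and stands in for requirements that would be sourced from each state's medical practice act.

    # Sketch: when patient and clinician sit in different states, an
    # AI-generated care instruction must satisfy both jurisdictions at once.
    # The rule table is hypothetical, not sourced from any statute.
    STATE_RULES = {
        "CA": {"verbal_consent_documented", "telehealth_modality_disclosed"},
        "TX": {"written_consent_on_file", "telehealth_modality_disclosed"},
    }

    def applicable_rules(patient_state: str, clinician_state: str) -> set[str]:
        # Each state's medical practice act applies independently,
        # so the requirements compose as a union.
        return (STATE_RULES.get(patient_state, set())
                | STATE_RULES.get(clinician_state, set()))

    print(sorted(applicable_rules("CA", "TX")))
    # ['telehealth_modality_disclosed', 'verbal_consent_documented',
    #  'written_consent_on_file']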

Regulatory Application

State medical practice acts govern AI-assisted clinical outputs in each jurisdiction independently. Ryan Haight Act (DEA) restricts AI involvement in controlled substance prescribing workflows. Interstate Medical Licensure Compact simplifies licensing but does not harmonize AI governance. HIPAA applies to all PHI in AI systems. State telehealth parity laws create jurisdiction-specific reimbursement and documentation requirements.

AI Deployment Environments

  • Studio: Clinician productivity tools | Patient intake summarization | Scheduling assist
  • Refinery: Patient-facing care instructions | Prescription-related communication governance | Consent and disclosure enforcement
  • Clean Room: State medical board compliance evidence | DEA prescribing audit trails | Cross-state licensure documentation

Typical deployment path: Refinery → Clean Room

Evidence

  • Telehealth AI adoption accelerating post-pandemic
  • Ryan Haight Act enforcement expanding to AI-involved prescribing workflows
  • State medical board AI guidance proliferating — no two states identical
  • Cross-state AI governance liability is emerging case law

Healthcare — Clinical Labs & Diagnostics

Risk Category: Category 2 — Regulated Decision-Making
Scale: Mid-Market
Applicable Frameworks: CLIA (42 CFR 493), CAP accreditation standards, FDA IVD regulations, State clinical lab licensing, HIPAA, LDT oversight framework

CLIA does not distinguish between a pathologist's interpretation and the model that assisted it.

The Governance Challenge

Clinical labs deploy AI for report drafting assistance, protocol summarization, QC documentation, patient-facing result explanations, and critical value communication. CLIA (42 CFR 493) and CAP accreditation standards govern laboratory operations with detailed documentation requirements. FDA IVD regulations apply to AI-assisted diagnostic tools. When an AI-generated patient result explanation contains an error — wrong reference range context, misleading critical value communication — the lab carries the liability.
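
A release gate can catch exactly that class of error by checking the draft against the lab's own reference table before anything reaches the patient. The sketch below is illustrative; the analyte name, range, and matching logic are assumptions, not CLIA or CAP requirements.

    # Sketch: before an AI-drafted result explanation is released, confirm
    # the flag it communicates matches the lab's reference range table.
    REFERENCE_RANGES = {"potassium_mmol_l": (3.5, 5.2)}  # illustrative range

    def expected_flag(analyte: str, value: float) -> str:
        low, high = REFERENCE_RANGES[analyte]
        if value < low:
            return "low"
        if value > high:
            return "high"
        return "normal"

    def explanation_consistent(analyte: str, value: float, draft: str) -> bool:
        # The draft must state the same flag the range table implies.
        return expected_flag(analyte, value) in draft.lower()

    print(explanation_consistent("potassium_mmol_l", 5.9,
                                 "Your potassium result is high."))    # True
    print(explanation_consistent("potassium_mmol_l", 5.9,
                                 "Your potassium result is normal."))  # False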

Regulatory Application

CLIA (42 CFR 493) governs laboratory AI outputs with detailed documentation requirements. CAP accreditation standards add proficiency testing and QC requirements for AI-assisted workflows. FDA IVD regulations apply to AI diagnostic tools, with the LDT oversight framework still evolving. State clinical lab licensing adds jurisdiction-specific requirements. HIPAA applies to patient data in all AI workflows.

AI Deployment Environments

  • Studio: Lab report drafting assist | Protocol summarization | QC documentation
  • Refinery: Patient-facing result explanations | Reference range context | Critical value communication governance
  • Clean Room: CLIA/CAP inspection evidence bundles | FDA IVD submission documentation

Typical deployment path: Refinery → Clean Room

Evidence

  • CLIA documentation requirements are among the most granular in healthcare
  • FDA LDT oversight framework expanding to cover AI-assisted diagnostics
  • CAP inspection findings for AI documentation gaps increasing
  • AI-assisted pathology is one of the fastest-growing clinical AI segments

Healthcare — Health Plans & Payers

Risk Category: Category 2 — Regulated Decision-Making
Scale: Mid-Market-Enterprise
Applicable Frameworks: ACA, ERISA, CMS/HHS regulations, State insurance codes, HIPAA, MLR requirements, No Surprises Act, State external review rules

An AI-generated claims denial is still a denial. The member's appeal rights are identical.

The Governance Challenge

Health plans deploy AI for benefit analysis, network adequacy summaries, member-facing coverage explanations, EOB narratives, claims determination rationales, and appeal response drafting. ACA, ERISA, CMS/HHS regulations, and state insurance codes govern every member-facing output. The No Surprises Act adds specific disclosure requirements. When an AI-generated denial letter omits a required appeal right or mischaracterizes a coverage exclusion, the plan faces CMS audit, state DOI examination, or member litigation.
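
That failure mode is checkable before the letter leaves the building. The sketch below scans a draft denial for required appeal language; the phrase list is an illustrative placeholder, since the actual required language comes from ERISA, CMS rules, and the applicable state insurance code.

    # Sketch: gate an AI-drafted denial letter on the presence of required
    # appeal-rights language. The phrase list is an illustrative placeholder.
    REQUIRED_PHRASES = [
        "right to appeal",
        "external review",
        "180 days",  # assumed appeal window for this hypothetical plan type
    ]

    def missing_disclosures(letter: str) -> list[str]:
        text = letter.lower()
        return [p for p in REQUIRED_PHRASES if p not in text]

    draft = ("Your claim was denied. You have the right to appeal "
             "this decision within 180 days.")
    print(missing_disclosures(draft))  # ['external review'] -> hold for review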

Regulatory Application

ACA governs essential health benefit coverage explanations. ERISA governs self-funded plan communications. CMS/HHS regulations apply to Medicare and Medicaid managed care AI outputs. State insurance codes govern every member communication. No Surprises Act requires specific disclosure language. MLR requirements constrain administrative cost allocation for AI governance. State external review rules apply to AI-influenced adverse determinations.

AI Deployment Environments

  • Studio: Internal benefit analysis | Network adequacy summaries | Provider communication drafting
  • Refinery: Member-facing coverage explanations | EOB and claims determination narratives | Appeal response governance
  • Clean Room: CMS audit evidence packages | State DOI examination files | MLR compliance documentation

Typical deployment path: Refinery → Clean Room

Evidence

  • CMS Medicare Advantage audits and related enforcement actions have produced aggregate penalties and recoveries in the hundreds of millions of dollars in recent years
  • No Surprises Act enforcement expanding to AI-generated billing communications
  • State DOI AI examination guidance proliferating
  • Claims denial appeal rates increasing with AI-assisted determinations

Healthcare — Long-Term Care & SNFs

Risk Category: Category 2 — Regulated Decision-Making
Scale: Mid-Market
Applicable Frameworks: CMS Conditions of Participation (42 CFR 483), OBRA 87, State survey agency authority, State ombudsman programs, HIPAA, Elder abuse reporting statutes

CMS survey deficiencies for documentation failures apply to AI-generated care plans identically.

The Governance Challenge

Long-term care facilities deploy AI for staff scheduling, care plan drafting, family communication, resident care explanations, discharge planning narratives, and quality measure documentation. CMS Conditions of Participation (42 CFR 483) govern SNF operations with the most detailed documentation requirements in healthcare. OBRA 87 resident rights apply to AI-generated communications. State survey agencies and ombudsman programs investigate AI-related complaints. When an AI-generated care plan omits a required element, the facility faces survey deficiency — the most consequential regulatory event in long-term care.
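
Because the required elements of a comprehensive care plan are enumerable, the omission check can run before the draft ever enters the record. The sketch below uses an invented element list as shorthand; the authoritative list lives in 42 CFR 483.21 and facility policy.

    # Sketch: hold an AI-drafted care plan that is missing required elements,
    # since an omission is exactly what a state survey would cite.
    REQUIRED_ELEMENTS = [
        "measurable_goals",
        "interventions",
        "discharge_plan",
        "resident_preferences",
    ]

    def incomplete_elements(care_plan: dict) -> list[str]:
        # Missing or empty fields both count as omissions.
        return [e for e in REQUIRED_ELEMENTS if not care_plan.get(e)]

    plan = {"measurable_goals": "...", "interventions": "...",
            "discharge_plan": ""}
    print(incomplete_elements(plan))
    # ['discharge_plan', 'resident_preferences'] -> hold before filing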

Regulatory Application

CMS Conditions of Participation (42 CFR 483) govern AI-generated documentation with granular requirements. OBRA 87 resident rights apply to AI-generated communications. State survey agency authority extends to AI-assisted care planning. State ombudsman programs investigate AI-related complaints. HIPAA applies to all PHI in AI workflows. Elder abuse reporting statutes apply to AI systems monitoring resident safety.

AI Deployment Environments

  • Studio: Staff scheduling assist | Care plan drafting | Family communication templates
  • Refinery: Resident-facing care explanations | Discharge planning narratives | Quality measure documentation
  • Clean Room: CMS survey readiness packages | State ombudsman inquiry response files | Incident investigation bundles

Typical deployment path: Refinery → Clean Room

Evidence

  • CMS survey deficiencies for documentation failures are the most common citation category
  • State ombudsman AI complaints are a new and growing category
  • Long-term care staffing shortages driving AI adoption faster than governance
  • Remediation costs for serious deficiencies often fall in the $50K-$300K range when penalties, consultants, and corrective actions are included

Healthcare — Mental & Behavioral Health

Risk Category: Category 2 — Regulated Decision-Making
Scale: Mid-Market
Applicable Frameworks: 42 CFR Part 2 (substance use confidentiality), MHPAEA, State mental health parity laws, Duty to warn/protect statutes, HIPAA, State licensing boards

42 CFR Part 2 confidentiality is absolute. AI systems that process substance use data are not exempt.

The Governance Challenge

Mental and behavioral health providers deploy AI for clinician note assistance, treatment plan drafting, session summarization, patient-facing psychoeducation, and crisis protocol communication. 42 CFR Part 2 imposes the strictest confidentiality requirements in healthcare — substance use disorder records cannot be disclosed without specific written consent, even to other treating providers. MHPAEA parity requirements apply to AI-assisted coverage determinations. Duty-to-warn statutes create affirmative obligations that AI systems must not undermine. When an AI system inadvertently discloses Part 2-protected information or generates a crisis communication that fails to meet duty-to-warn standards, the consequences are both legal and clinical.
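
The consent requirement is mechanically enforceable at the point where records are assembled into model context. The sketch below shows one shape for that gate; the field names and consent model are assumptions, and the details of the 2024 Part 2 final rule would drive a real implementation.

    # Sketch: Part 2-protected content never enters a model context without
    # a current written consent on file. Field names are illustrative.
    from datetime import date

    def part2_consent_current(consents: list[dict], recipient: str) -> bool:
        # A general HIPAA authorization is not enough; the consent must
        # cover this recipient and must not have expired.
        return any(
            c["recipient"] == recipient and c["expires"] >= date.today()
            for c in consents
        )

    def build_model_context(record: dict, recipient: str) -> dict:
        if record.get("part2_protected") and not part2_consent_current(
            record.get("consents", []), recipient
        ):
            # Strip substance use content entirely rather than risk disclosure.
            return {k: v for k, v in record.items()
                    if k not in ("sud_notes", "consents")}
        return record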

Regulatory Application

42 CFR Part 2 governs substance use disorder records with confidentiality requirements stricter than HIPAA. MHPAEA requires parity in AI-assisted mental health coverage determinations. State mental health parity laws add jurisdiction-specific requirements. Duty-to-warn/protect statutes create affirmative obligations that AI-generated communications must not compromise. HIPAA applies to all PHI. State licensing boards govern AI-assisted clinical documentation.

AI Deployment Environments

  • Studio: Clinician note assist | Treatment plan drafting | Session summarization
  • Refinery: Patient-facing psychoeducation | Informed consent governance | Crisis protocol communication
  • Clean Room: 42 CFR Part 2 compliance evidence | Parity violation investigation files | Duty-to-warn documentation

Typical deployment path: Refinery → Clean Room

Evidence

  • Following the 2024 final rule, Part 2 violations are subject to HIPAA-level civil monetary penalties (up to roughly $2M per year in some tiers) and potential criminal liability in egregious cases
  • MHPAEA parity enforcement expanding to AI-assisted coverage decisions
  • Duty-to-warn liability for AI-generated crisis communications is emerging case law
  • Mental health AI adoption accelerating due to provider shortages