Tier 2 — Industry Standard | Industry Oracle

Cross-Sector — AI Governance Landscape

Publisher: Ontic Labs

Version: v1

Last verified: February 15, 2026


Industry: Cross-Sector

Cross-Sector - Overview

30% of enterprises have deployed AI. 21% have governance. 40% of AI spend is shadow AI -- tools adopted by employees without IT oversight or policy coverage. The gap here is structural, not sectoral. Horizontal use cases -- research copilots, document drafting, meeting summarization, knowledge search -- cross every regulatory boundary but inherit none of the industry-specific governance frameworks. A single enterprise deployment may simultaneously touch HIPAA patient data, GLBA financial records, FERPA student information, and attorney-client privileged material. No industry-specific governance framework covers that intersection. The 42% abandonment rate for enterprise AI projects is partially a governance failure: without a horizontal authority model, organizations cannot scale what they cannot control. Shadow AI is not a technology problem. It is a governance vacuum.

This industry includes 3 segments in the Ontic governance matrix, spanning risk categories from Category 1 — Assistive through Category 6 — Control Plane. AI adoption index: 4/5.

Cross-Sector - Regulatory Landscape

The cross-sector segments are collectively subject to the following regulatory frameworks and standards, consolidated across segments with applicability qualifiers noted:

  • All sector-specific frameworks applicable to operating segments
  • Board-level fiduciary obligations regarding AI risk
  • EU AI Act (scope depends on EU operations and the use case)
  • State privacy laws (CCPA/CPRA, etc.)
  • ISO/IEC 42001 (AI management systems)
  • Internal IP and confidentiality policies
  • NIST AI RMF
  • SEC disclosure requirements (if public)
  • Sector-specific AI guidance (OCC, FDA, etc.)
  • State AI legislation (e.g., CO SB 24-205, TX TRAIGA, IL HB 3773, CA FEHA AI rules)

The specific frameworks that apply depend on the segment and scale of deployment. Cross-industry frameworks (GDPR, ISO 27001, EU AI Act) may apply in addition to sector-specific regulation.

Cross-Sector - Horizontal -- Any Knowledge Work

Risk Category: Category 1 — Assistive
Scale: SMB-Enterprise
Applicable Frameworks: Varies by sector; state privacy laws (CCPA/CPRA, etc.); EU AI Act (if applicable); internal IP and confidentiality policies

Shadow AI crosses every regulatory boundary but inherits none of the governance.

The Governance Challenge

40% of enterprise AI spend is shadow AI — tools adopted by employees without IT oversight or policy coverage. Research copilots, document drafting, meeting summarization, and knowledge search all touch data that may include HIPAA-protected records, GLBA financial information, or attorney-client privileged material. No horizontal governance framework exists to cover these intersections.

Regulatory Application

Regulatory exposure varies by sector but accumulates horizontally. CCPA/CPRA applies to personal data in any AI workflow. GLBA applies if financial records are processed. HIPAA applies if health information enters the context window. EU AI Act obligations attach based on the system's risk classification, not on the industry deploying it. The absence of an AI-specific horizontal framework does not reduce exposure — it increases it.
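
To make "accumulates horizontally" concrete, the sketch below computes the union of regimes triggered by each data category a single workflow touches. This is a minimal Python illustration; the category-to-framework mapping and the applicable_frameworks function are simplifying assumptions for exposition, not legal guidance or an Ontic API.

# Illustrative sketch: exposure accumulates per data category present in a
# workflow, not per industry. Mappings are simplified; not legal guidance.
FRAMEWORKS_BY_DATA_CATEGORY = {
    "personal_data": {"CCPA/CPRA"},
    "financial_records": {"GLBA"},
    "health_information": {"HIPAA"},
    "student_records": {"FERPA"},
    "privileged_material": {"attorney-client privilege rules"},
}

def applicable_frameworks(data_categories, eu_in_scope=False):
    """Union of regimes triggered by every data category in one workflow."""
    exposure = set()
    for category in data_categories:
        exposure |= FRAMEWORKS_BY_DATA_CATEGORY.get(category, set())
    if eu_in_scope:
        exposure.add("EU AI Act")
    return exposure

# A single meeting-summarization workflow touching three data categories
# inherits all of their regimes at once, plus the EU AI Act -- the
# horizontal accumulation described above.
print(sorted(applicable_frameworks(
    {"personal_data", "health_information", "privileged_material"},
    eu_in_scope=True,
)))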

AI Deployment Environments

  • Studio: Internal research copilots | Draft emails and documents | Meeting and call summarization
  • Refinery: Internal policy / playbook generation with guardrails | Knowledge base curation with source tracking

Typical deployment path: Studio → Refinery

Evidence

  • 40% of enterprise AI spend is shadow AI (McKinsey 2024)
  • 42% of enterprise AI projects are abandoned, partially due to governance failures
  • 30% of enterprises have deployed AI; 21% have governance frameworks

Cross-Sector - Enterprise -- Technology & Data Teams

Risk Category: Category 6 — Control Plane
Scale: Mid-Market-Enterprise
Applicable Frameworks: EU AI Act (if EU operations); NIST AI RMF; ISO/IEC 42001 (AI management systems); sector-specific AI guidance (OCC, FDA, etc.); state AI legislation (e.g., CO SB 24-205, TX TRAIGA, IL HB 3773, CA FEHA AI rules)

40% of AI spend is shadow AI. The governance gap is not a policy problem. It is a visibility problem.

The Governance Challenge

Enterprise technology and data teams deploy AI for model catalog summarization, policy draft assistance, runbook and SOP drafting, model usage policy enforcement, approval workflow narratives, and change control documentation. EU AI Act (if EU operations), NIST AI RMF, and ISO/IEC 42001 impose enterprise-wide AI governance requirements. State AI legislation is proliferating — Colorado, Texas, Illinois, and California each impose a different framework. The challenge is not writing an AI policy. It is enforcing one across every model, every team, and every use case — including the 40% of AI spend that IT does not control.
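
One way to frame the visibility problem is as a reconciliation task: compare observed AI tool usage against the approved catalog and measure what falls outside it. A minimal Python sketch under that framing; the tool names, usage counts, and data sources (e.g., SaaS expense records) are hypothetical illustrations, not real telemetry.

# Illustrative sketch: shadow AI as the set difference between observed
# usage and the approved catalog. Tool names and counts are hypothetical.
approved_catalog = {"internal-copilot", "doc-drafter", "meeting-notes-ai"}

# e.g., reconstructed from SaaS expense reports or network egress telemetry
observed_usage = {
    "internal-copilot": 120,   # tool -> active users
    "doc-drafter": 85,
    "freemium-chatbot": 240,   # never reviewed by IT
    "browser-ai-plugin": 60,   # never reviewed by IT
}

shadow = {t: n for t, n in observed_usage.items() if t not in approved_catalog}
coverage = 1 - sum(shadow.values()) / sum(observed_usage.values())

print(f"Shadow tools: {sorted(shadow)}")
print(f"Share of usage under governance: {coverage:.0%}")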

Regulatory Application

EU AI Act imposes enterprise-wide AI governance with risk classification and conformity assessment requirements. NIST AI RMF provides the U.S. governance framework. ISO/IEC 42001 establishes AI management system standards. State AI legislation (Colorado SB 24-205, Texas TRAIGA, Illinois HB 3773, California FEHA AI rules) adds jurisdiction-specific requirements. Sector-specific guidance (OCC, FDA, etc.) applies based on the enterprise's industry.
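
As a rough illustration of how these regimes stack per deployment, the sketch below accumulates obligations across every jurisdiction a system touches. The obligation strings and mapping are loose paraphrases of the statutes named above, chosen for exposition; none of this is a compliance checklist.

# Illustrative sketch: obligations accumulate per jurisdiction touched.
# The mapping loosely paraphrases the statutes named above; not a checklist.
OBLIGATIONS = {
    "EU": ["EU AI Act risk classification", "conformity assessment"],
    "US-federal": ["NIST AI RMF profile"],
    "CO": ["CO SB 24-205 impact assessment"],
    "TX": ["TX TRAIGA compliance review"],
    "IL": ["IL HB 3773 employment-AI notice"],
    "CA": ["CA FEHA automated-decision rules"],
}

def obligations_for(jurisdictions, iso42001_certified=False):
    """Collect obligations across every jurisdiction a deployment touches."""
    required = [o for j in jurisdictions for o in OBLIGATIONS.get(j, [])]
    if not iso42001_certified:
        required.append("stand up an ISO/IEC 42001 AI management system")
    return required

# One deployment reachable from four jurisdictions inherits all four regimes.
for item in obligations_for(["EU", "US-federal", "CO", "IL"]):
    print("-", item)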

AI Deployment Environments

  • Studio: Model catalog summarization | Policy draft assistants | Runbook and SOP drafting
  • Refinery: Model usage policy enforcement | Approval workflow narratives | Change control documentation
  • Clean Room: Full AI governance evidence store | Board and regulator briefing packs on AI risk

Typical deployment path: Refinery → Clean Room

Evidence

  • Analyst estimates suggest around 40% of enterprise AI spend is shadow AI, outside formal IT oversight
  • 30% deployed; 21% governed — structural gap
  • EU AI Act compliance deadlines approaching with significant penalties
  • ISO/IEC 42001 certification emerging as enterprise AI governance standard

Cross-Sector - Enterprise -- Regulated Conglomerate

Risk Category: Category 6 — Control Plane
Scale: Enterprise
Applicable Frameworks: All sector-specific frameworks applicable to operating segments; EU AI Act; NIST AI RMF; ISO/IEC 42001; board-level fiduciary obligations regarding AI risk; SEC disclosure requirements (if public)

When the board asks about AI risk across six operating segments, the answer must be evidence — not assurance.

The Governance Challenge

Regulated conglomerates deploy AI across multiple operating segments — each with its own regulatory framework, risk profile, and AI maturity. All sector-specific governance frameworks apply to their respective segments. EU AI Act imposes enterprise-wide obligations regardless of segment structure. Board-level fiduciary obligations require AI risk governance at the enterprise level. SEC disclosure requirements apply to AI risk in public company filings. The challenge is providing the board with a consolidated view of AI risk and governance across segments that operate under different regulatory regimes, use different AI tools, and are at different governance maturity levels.

Regulatory Application

All sector-specific regulatory frameworks apply to their respective operating segments. EU AI Act imposes enterprise-wide governance obligations. NIST AI RMF provides the enterprise governance framework. ISO/IEC 42001 establishes AI management system standards. Board-level fiduciary obligations require AI risk governance. SEC disclosure requirements (if public) apply to material AI risk in annual filings.
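
The consolidated board view can be thought of as a rollup over segment-level governance records. A minimal sketch of that shape in Python; every segment, field, and value below is hypothetical, and a real control plane would draw these records from audited evidence stores rather than literals.

# Illustrative sketch: a board-level rollup built from segment-level
# governance records. All segments, fields, and values are hypothetical.
from dataclasses import dataclass

@dataclass
class SegmentReport:
    name: str
    frameworks: tuple           # segment-specific regimes in scope
    governance_maturity: int    # 1 (ad hoc) .. 5 (audited control plane)
    open_ai_risks: int

segments = [
    SegmentReport("banking", ("OCC guidance", "GLBA"), 4, 3),
    SegmentReport("medical-devices", ("FDA guidance", "HIPAA"), 3, 5),
    SegmentReport("retail", ("CCPA/CPRA",), 2, 8),
]

# Enterprise-wide obligations apply on top of every segment regime.
enterprise_frameworks = {"EU AI Act", "NIST AI RMF", "ISO/IEC 42001"}

board_view = {
    "segments": len(segments),
    "weakest_maturity": min(s.governance_maturity for s in segments),
    "total_open_ai_risks": sum(s.open_ai_risks for s in segments),
    "regimes_in_scope": sorted(
        enterprise_frameworks | {f for s in segments for f in s.frameworks}
    ),
}
print(board_view)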

AI Deployment Environments

  • Studio: Group-wide policy drafting | Training and awareness content
  • Refinery: Segment-specific AI usage standards | Cross-entity exception handling documentation
  • Clean Room: Enterprise AI control plane with consolidated audit trail across Studio, Refinery, and Clean Room

Typical deployment path: Clean Room (enterprise) + Refinery (segment-level) + Studio (team-level)

Evidence

  • Board-level AI risk governance is emerging as a fiduciary obligation
  • SEC disclosure requirements for AI risk in public company filings are increasing
  • EU AI Act enterprise-wide obligations apply regardless of segment structure
  • Consolidated AI governance evidence is the #1 gap in conglomerate risk reporting