Ontic (Beta)

74 Industries. One Governance Model.

Ontic curates regulatory Encyclopedias for your segment — across federal rules, state AI acts, supervisory guidance, standards, and physics-level constraints — and keeps them versioned as a single governed knowledge base.

Load your segment once; the Oracle blends your policies with the curated spine, then enforces evidence-backed answers every time your teams or agents query it.

How it works, end‑to‑end

1. 74 curated top-level segments (e.g., regional banking, hospital systems, defense subcontractors, energy utilities, regional law) each ship with a pre-mapped regulatory spine.

2. Your internal policies, playbooks, SOPs, and contracts are ingested and versioned against that spine as a YAML “Customer Encyclopedia.”

3. The Runtime Oracle loads only what each prompt scope needs (jurisdiction, product, channel, risk class), detects missing required state, and rebuilds on change while using a TTL cache for repeat scopes.

4. The Ontology → Gate layer enforces “no output without evidence,” blocking or flagging when the Encyclopedia can’t support a safe, compliant answer.

How Encyclopedias work

Curated Top-Level (74 segments)

Ontic ships 74 Regulatory Encyclopedias that pre-map the core obligations, standards, and guidance for each segment — SOX 404 to internal controls over financial reporting, Federal Reserve SR 11‑7 to model-risk governance, NERC CIP‑015 to internal network security monitoring, Colorado’s SB24‑205 AI Act to state-level AI duties, and more. Each segment label is bound to the relevant supervisory letters, handbooks, and AI laws so you’re not building mappings from scratch.

banking_regional → SR 11‑7 model risk guidance + FFIEC IT handbooks + applicable state AI Acts

energy_utility_transmission → NERC CIP‑015 Internal Network Security Monitoring + FERC cybersecurity directives

Customer Encyclopedia (your blend)

Your Customer Encyclopedia is a versioned YAML artifact that merges Ontic’s curated spine with your internal artifacts — policies, playbooks, runbooks, risk registers, and model documents — into machine-enforceable state.

  • required_state: the evidence and approvals that must exist for a given prompt scope (e.g., model risk assessments, fairness testing, audit trails).
  • sources: links to statutes, guidance, standards, internal policies, and model cards that satisfy each requirement.
  • missing_action: instructions on what to do when required state is absent: flag_human, fail_closed, or restricted_template.
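
A rough sketch of how the configured missing_action could be dispatched at answer time follows; the function name, return shapes, and messages are assumptions for illustration, not Ontic's actual API.

# Hypothetical dispatch of the three missing_action values listed above.
def handle_missing_state(missing_action, missing_items):
    """Decide what happens when required_state items are absent for a prompt scope."""
    if missing_action == "fail_closed":
        return {"answer": None, "log": f"blocked; missing {missing_items}"}
    if missing_action == "restricted_template":
        return {"answer": "[restricted template response]", "log": f"restricted; missing {missing_items}"}
    # flag_human: no answer is released until a reviewer clears the gap
    return {"answer": None, "escalate_to": "human_review", "log": f"flagged; missing {missing_items}"}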

Runtime Oracle (prompt-scope engine)

At runtime, every prompt is evaluated through the Oracle, which computes the minimum regulatory and policy state required for that specific request and context.

  • Resolves scope: segment, jurisdiction, product, channel, and risk level.
  • Loads the relevant Encyclopedia slice into a TTL cache; changes to laws or internal policies trigger selective rebuilds.
  • Computes “required vs. present”: which evidence exists, which is missing, and what the configured missing_action demands.
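
A minimal sketch of that loop, under stated assumptions: the scope key, slice shape, TTL value, and function names below are illustrative only, not the actual Runtime Oracle interface.

import time

# Hypothetical Encyclopedia slices keyed by prompt scope (segment, jurisdiction, product, risk).
ENCYCLOPEDIA_SLICES = {
    ("banking_regional", "colorado", "credit", "high"): {
        "required_state": ["model_risk_assessment", "fairness_evidence", "audit_trail"],
        "missing_action": "flag_human",
    },
}

_cache = {}                # scope -> (slice, expiry timestamp)
CACHE_TTL_SECONDS = 3600   # repeat scopes reuse the cached slice until it expires

def load_slice(scope):
    """Return the Encyclopedia slice for a scope, reusing a TTL-cached copy when still fresh."""
    cached = _cache.get(scope)
    if cached and cached[1] > time.time():
        return cached[0]
    slice_ = ENCYCLOPEDIA_SLICES[scope]   # in practice: a selective rebuild from versioned sources
    _cache[scope] = (slice_, time.time() + CACHE_TTL_SECONDS)
    return slice_

def evaluate(scope, present_evidence):
    """Compute "required vs. present" and the configured action when evidence is missing."""
    slice_ = load_slice(scope)
    missing = [item for item in slice_["required_state"] if item not in present_evidence]
    return {"missing": missing, "action": slice_["missing_action"] if missing else "permit"}

Dropping a scope's _cache entry when a law or internal policy changes would give the selective-rebuild behavior described above.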

Ontology → Gate (evidence enforcement)

The Ontology expresses your governance model — how regulations, controls, and evidence types relate — and the Gate enforces it at the point of answer.

  • Permit with cited evidence — links back to regulations, guidance, and internal policy.
  • Permit with warnings — emerging AI regulation applies but is still in rollout.
  • Flag human review — key evidence is missing or regulatory interpretation is ambiguous.
  • Fail closed and log — policy requires strict non-disclosure without a complete evidence chain.
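
The four outcomes can be pictured as a single decision function; the names and flags below are assumptions for the sketch, since the real Gate presumably consumes richer ontology relationships than a boolean or two.

def gate_decision(missing_evidence, missing_action, emerging_regulation_in_rollout=False):
    """Map an evidence check to one of the four Gate outcomes described above."""
    if not missing_evidence:
        if emerging_regulation_in_rollout:
            return "permit_with_warnings"       # applicable AI regulation still in rollout
        return "permit_with_cited_evidence"     # citations link back to regulations and policy
    if missing_action == "fail_closed":
        return "fail_closed_and_log"            # strict non-disclosure without a complete evidence chain
    return "flag_human_review"                  # missing or ambiguous evidence goes to a person

For example, gate_decision([], "flag_human", emerging_regulation_in_rollout=True) returns permit_with_warnings, while any non-empty missing_evidence list blocks or escalates.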

Example: regional bank Encyclopedia

For a regional bank using high-risk AI and quantitative models, Ontic binds the Encyclopedia to the core supervisory and AI-law obligations.

Knowledge Base (curated spine)

Segment: financialservices_banking_regional

  • SOX 404 internal control requirements for financial reporting.
  • FFIEC IT Handbooks for information security, development, and operations controls.
  • BSA/AML obligations around monitoring, reporting, and suspicious activity.
  • Colorado SB24‑205 “Consumer Protections for Artificial Intelligence Act” when your customers or operations fall under Colorado jurisdiction.

Illustrative Encyclopedia (YAML)

segment: financialservices_banking_regional
jurisdictions:
  - federal
  - colorado
knowledge_base:
  - sox_404
  - ffiec_it_handbook
  - bsa_aml_program
  - co_sb24_205_ai_act
required_state:
  - model_risk_assessment
  - fairness_evidence
  - bsa_aml_monitoring_evidence
  - audit_trail
missing_action: flag_human | fail_closed

  • model_risk_assessment: satisfies SR 11‑7 expectations on model development, validation, and governance.
  • fairness_evidence: tied to state AI laws like Colorado’s AI Act, which aim to prevent algorithmic discrimination in high-risk AI systems.
  • audit_trail: ensures traceability of decisions and model changes, aligning with supervisory guidance on governance and documentation.

Oracle (runtime behavior)

Prompt: “Generate a customer-facing explanation of our Colorado banking credit model decision for this declined applicant.”

The Oracle:

  • Detects scope: regional banking, Colorado, high-risk AI (credit decision), customer-facing communication.
  • Loads federal model risk guidance (SR 11‑7), FFIEC expectations, BSA/AML context, and Colorado SB24‑205 AI Act duties.
  • Checks Encyclopedia state: is there a current model risk assessment, fairness analysis, explanation templates, and an audit trail?
  • If any required state is missing, it triggers the configured missing_action — flagging a human or failing closed.

Gate (enforcement)

  • Blocks customer disclosure if BSA/AML or model-governance evidence is absent.
  • Allows a response only when the Encyclopedia evidences that Colorado AI Act transparency and discrimination-risk requirements are met.
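
Putting the declined-applicant example together as a toy check (the evidence values are invented to show the escalation branch; this is not a real configuration):

# Required state from the illustrative Encyclopedia above.
required_state = ["model_risk_assessment", "fairness_evidence",
                  "bsa_aml_monitoring_evidence", "audit_trail"]

# Hypothetical evidence actually on file for this prompt scope.
present_evidence = {"model_risk_assessment", "audit_trail"}

missing = [item for item in required_state if item not in present_evidence]
if missing:
    # missing_action: flag_human | fail_closed, per the Encyclopedia above
    print("Gate outcome: flag_human_review; missing:", missing)
else:
    print("Gate outcome: permit_with_cited_evidence")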

Global to Local Regulatory Landscape

Ontic honors regulatory requirements across jurisdictions by structuring them as curated, versioned sources in the Oracle. Regulations → rules → prompts → guarded outputs.

Global Frameworks (Cross-Jurisdictional)

Baseline standards ingestible as universal Oracle sources.

EU AI Act (Europe, extraterritorial)

  • Requires: risk assessments, conformity docs, human oversight, transparency logs
  • Ontic: Clean Room defaults to high-risk controls; audit trails for conformity evidence

NIST AI RMF 1.0 (US, voluntary global)

  • Requires: impact assessments, bias monitoring, lifecycle governance
  • Ontic: Core Oracle taxonomy; Gate enforces measurement thresholds

ISO/IEC 42001 (international standard)

  • Requires: policies, risk treatment, controls certification
  • Ontic: Refinery/Studio certification baseline; versioned control mappings

Regional Regulations

Europe

EU AI Act dominates (27 countries + EEA); UK AI Framework (pro-innovation, sector bills 2026); Switzerland aligns.

North America

US patchwork (CO AI Act, CA SB1047, NY LL144); Canada AIDA (high-impact transparency).

Asia-Pacific

China GenAI Measures (algorithm registration); South Korea Basic AI Act; Singapore Model Framework.

Latin America

Brazil AI Bill of Rights (risk-based).

Local / Sector-Specific (US States + Agency Rules)

US states lead with binding laws; agencies fill gaps.

Colorado

SB24-205 (first comprehensive; impact assessments Feb 2026)

Deployer obligations → Gate blocks non-compliant outputs

California

SB1047 (frontier models); CCPA/CPRA AI clauses

Large model safety + consumer AI rights

NYC / Illinois

LL144 AEDT (hiring AI); BIPA AI biometrics

Employment + biometric high-risk

Federal US

OMB M-24-10 (gov AI); NIST RMF

FedRAMP-ready environments

Canada

AIDA (high-impact mitigation)

Similar to EU high-risk

Sector examples (from the 74-segment matrix)

  • Banking: SR 11‑7 model risk → Oracle cites FFIEC AI guidance
  • Healthcare: FDA AI/ML SaMD → Clean Room evidentiary chain
  • Defense: DoD AI Principles + CMMC AI → Air-gapped Appliance

How Ontic Ingests & Enforces

1. Oracle Ingestion: regulations as versioned YAML/JSON, pulled from official sources via API (EUR-Lex, NIST, RegTech feeds).

2. Ontology Generation: rules derive automatically (e.g., high_risk → fail_closed + audit_trail).

3. Prompt Contract: system prompts embed jurisdiction/sector rules (deterministic compile).

4. Gate Enforcement: runtime checks block violations (e.g., "no output without EU AI Act Article 13 documentation").

End-to-end governance: a banking output in Germany inherits EU AI Act + BaFin rules; a US hospital gets FDA + HIPAA AI clauses. Update once in Oracle → propagates everywhere.
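
A sketch of steps 1 and 2 under stated assumptions: the regulation record, its field names (risk_class, obligations), and the derived rule shapes are invented for illustration, and PyYAML stands in for whatever parser Ontic actually uses.

import yaml  # PyYAML, assumed here purely for the sketch

# Step 1 (Oracle Ingestion): a hypothetical versioned regulation source.
REGULATION_SOURCE = """
source: co_sb24_205_ai_act
version: "2024-05"
risk_class: high_risk
obligations: [impact_assessment, consumer_notice, audit_trail]
"""

def derive_rules(regulation):
    """Step 2 (Ontology Generation): derive enforcement rules from a parsed record."""
    rules = []
    if regulation["risk_class"] == "high_risk":
        # e.g., high_risk -> fail_closed + audit_trail, as in the step above
        rules.append({"on_missing_evidence": "fail_closed", "require": "audit_trail"})
    rules.extend({"require": obligation} for obligation in regulation["obligations"])
    return rules

print(derive_rules(yaml.safe_load(REGULATION_SOURCE)))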

Coverage: representative segments

How Encyclopedias look across high-value segments. The full 74-segment matrix is available as a CSV.

Banking – regional

Sources

SR 11‑7; FFIEC IT Handbooks; BSA/AML; Colorado SB24‑205

Gate enforcement

Blocks credit, deposit, or marketing outputs without model risk documentation, fairness testing, and BSA/AML monitoring evidence.

Banking – digital / fintech

Sources

SR 11‑7; FFIEC outsourcing; CFPB rules; state AI and consumer protection laws

Gate enforcement

Enforces evidence for explainability, algorithmic discrimination controls, and consumer disclosures.

Hospital system

Sources

FDA SaMD; CMS prior-auth; state health privacy and AI restrictions

Gate enforcement

Blocks clinical decision-support outputs without linkage to approved indications and safety evidence.

Payer / health plan

Sources

CMS regulations; state insurance and utilization rules; AI prior-auth policy

Gate enforcement

Requires evidence of coverage policies and model fairness assessments for benefit decisions.

Defense subcontractor

Sources

CMMC 2.0; NIST SP 800‑171 and 800‑53

Gate enforcement

Blocks AI-driven handling of controlled data unless CMMC and 800‑171 evidence is present.

Defense prime

Sources

DFARS; CMMC Level 2/3; NIST 800‑171 and 800‑172

Gate enforcement

Enforces fail closed when prompts would send covered defense information without enclave evidence.

Energy – transmission

Sources

NERC CIP‑015 INSM; NERC CIP standards; FERC cybersecurity

Gate enforcement

Blocks changes affecting BES operations unless internal network monitoring and anomaly detection evidence exists.

Energy – retail

Sources

NERC CIP; state utility commission rules; privacy and AI regulations

Gate enforcement

Enforces evidence-backed controls on AI for demand response, billing, or disconnection decisions.

Legal – regional firm

Sources

ABA ethics opinions; FRCP e-discovery; local bar AI guidance

Gate enforcement

Blocks drafting or disclosure that would violate confidentiality or AI-related ethics duties.

Legal – e-discovery vendor

Sources

FRCP e-discovery framework; model ESI and privilege protocols; client SLAs

Gate enforcement

Requires evidence of defensible process before allowing AI-generated review summaries.

High-risk employer (multi-state)

Sources

Colorado SB24‑205; anti-discrimination laws; EEOC guidance

Gate enforcement

Enforces impact assessment, transparency, and bias controls before AI hiring or promotion decisions.

Consumer credit / lending

Sources

Federal fair-credit laws; SR 11‑7; state AI and consumer protection laws

Gate enforcement

Blocks adverse action notices unless model documentation, fairness evidence, and audit trails are present.

Full 74-segment matrix available on request. Browse all segments →

Benefits

Pre-curated 74 segments

Launch in your segment on Day Zero with a curated regulatory spine; you’re not starting from a blank sheet or generic control library.

Prompt-scope aware

A prompt like "Colorado banking disclosure" automatically loads Colorado’s AI Act duties and your banking Encyclopedia slice into scope.

Human-in-loop by design

When the Oracle detects an Encyclopedia gap — missing model risk assessment, no fairness testing, incomplete BSA/AML evidence — it escalates to humans instead of letting the model improvise.

TTL efficiency

Repeat prompt scopes reuse a cached Encyclopedia slice, while legal updates or policy changes trigger targeted rebuilds.

Find your segment

See how Ontic maps regulatory obligations for your industry. Or check your risk profile in two minutes.