When an ADAS decision results in injury, NHTSA and litigation discovery will demand the model's decision chain. It must be reconstructable.
Automotive OEMs deploy AI across internal safety analysis, warranty issue summarization, recall analysis governance, regulatory submission drafting, ADAS systems, and autonomous driving stacks, all under NHTSA safety reporting and recall obligations (49 CFR 573, 576, 577, 579). The governance gap in automotive is only 5 points, but the consequences are measured in fatalities. When an ADAS or autonomous system makes a decision that results in injury, litigation discovery will subpoena the model's decision chain: the specific input state, model version, and output that produced the action. In most current architectures, that chain is not fully reconstructable.
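To make the decision chain concrete: the three parts a subpoena would target are the input state, the model version, and the output. The following is a minimal, hedged Python sketch of what one reconstructable decision record could look like; `record_decision` and every field name are hypothetical illustrations, not Ontic's actual schema.

```python
import hashlib
import json
import time

def record_decision(input_state: dict, model_version: str, output: dict) -> dict:
    """Capture one ADAS decision as a reconstructable evidence record.

    Hypothetical sketch: field names and structure are illustrative,
    not a vendor schema.
    """
    record = {
        "timestamp_utc": time.time(),
        "model_version": model_version,  # exact build that produced the action
        "input_state": input_state,      # sensor/feature snapshot at decision time
        "output": output,                # the action the system actually took
    }
    # A content hash over the record makes later tampering detectable:
    # any change to the captured fields changes the digest.
    serialized = json.dumps(record, sort_keys=True).encode()
    record["content_sha256"] = hashlib.sha256(serialized).hexdigest()
    return record

# Example: the record needed to reproduce one braking decision.
evidence = record_decision(
    input_state={"ego_speed_mps": 24.6, "lead_gap_m": 11.2, "ttc_s": 1.8},
    model_version="planner-2.4.1+build.9f3c",
    output={"action": "brake", "deceleration_mps2": 6.0},
)
```

The point of the sketch is that reconstructability is a logging decision made before the incident, not a forensic exercise after it.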
What Ontic Does Here
Ontic's Clean Room produces NHTSA-defensible reporting, safety-critical system governance evidence, and FMEA chain of custody with full provenance. The Refinery enforces recall analysis governance and regulatory submission accuracy as deterministic guardrails. The chain of custody extends from model decision through safety analysis to regulatory filing — every link documented, every link provable.
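One common way to make "every link provable" concrete is a hash chain: each custody record commits to the digest of the record before it, so altering or deleting any link breaks verification. The sketch below is a generic illustration of that technique under assumed names (`append_link`, `verify`, the stage labels), not Ontic's actual chain-of-custody format.

```python
import hashlib
import json

GENESIS = "0" * 64

def append_link(chain: list, stage: str, payload: dict) -> list:
    """Append one custody link; each link commits to the hash of the
    previous one, so a break anywhere in the chain is detectable."""
    prev_hash = chain[-1]["link_sha256"] if chain else GENESIS
    link = {"stage": stage, "payload": payload, "prev_sha256": prev_hash}
    link["link_sha256"] = hashlib.sha256(
        json.dumps(link, sort_keys=True).encode()
    ).hexdigest()
    chain.append(link)
    return chain

def verify(chain: list) -> bool:
    """Recompute every digest; True only if no link was altered or dropped."""
    prev = GENESIS
    for link in chain:
        body = {k: v for k, v in link.items() if k != "link_sha256"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if link["prev_sha256"] != prev or digest != link["link_sha256"]:
            return False
        prev = link["link_sha256"]
    return True

# Model decision -> safety analysis -> regulatory filing, each link documented.
chain: list = []
append_link(chain, "model_decision", {"decision_id": "d-001"})
append_link(chain, "safety_analysis", {"fmea_ref": "FMEA-114", "analyst": "j.doe"})
append_link(chain, "regulatory_filing", {"form": "49 CFR 573 defect report"})
assert verify(chain)
```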
Recommended Deployment
Studio (assists judgment)
- Internal safety analysis drafts
- Warranty issue summarization

Refinery (enforces authority)
- Recall analysis governance
- Regulatory submission drafting

Clean Room (enforces defensibility) ★ Start here
- NHTSA-defensible reporting
- Safety-critical system governance
- FMEA chain of custody

Expansion path: Clean Room (primary); Refinery for non-safety operations.
Regulatory Context
NHTSA (49 CFR 573, 576, 577, 579) governs safety reporting and recall obligations for AI-assisted vehicles. TREAD Act requires early warning reporting for AI-related safety signals. FMVSS applies to AI-assisted vehicle systems. EPA emissions regulations apply to AI-optimized powertrains. State lemon laws apply to AI-related vehicle defects. EU type-approval (WVTA) requires AI system governance for European markets. UNECE regulations govern autonomous driving AI internationally.
Common Objections
"We have a functional safety team that governs ADAS. They use ISO 26262."
ISO 26262 governs deterministic safety-critical software. Probabilistic AI systems operate outside the deterministic verification framework — they produce different outputs for similar inputs by design. The functional safety team provides the governance standard. Ontic provides the runtime evidence infrastructure that captures what the AI actually did at the decision point, not what the safety case predicted it would do.
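One way to picture runtime evidence capture at the decision point: a thin wrapper around the decision function that records each real invocation's inputs, model version, and output as it happens, regardless of what the safety case predicted. This Python sketch is purely illustrative; `capture_runtime_evidence`, `plan_maneuver`, and the record fields are hypothetical, and a production sink would be an append-only store rather than `print`.

```python
import functools
import json
import time

def capture_runtime_evidence(model_version: str, sink):
    """Wrap an AI decision function so every invocation is recorded as it
    actually happened. 'sink' is any callable that persists a record."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            output = fn(*args, **kwargs)
            sink(json.dumps({
                "timestamp_utc": time.time(),
                "model_version": model_version,
                "function": fn.__name__,
                "inputs": {"args": args, "kwargs": kwargs},
                "output": output,
            }, default=str))
            return output
        return wrapper
    return decorator

@capture_runtime_evidence("planner-2.4.1", sink=print)
def plan_maneuver(ego_speed_mps: float, lead_gap_m: float) -> str:
    # Stand-in for the real probabilistic planner.
    return "brake" if lead_gap_m / max(ego_speed_mps, 0.1) < 2.0 else "hold"

plan_maneuver(24.6, 11.2)  # emits an evidence record, then returns "brake"
```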
Evidence
- Roughly 200 million vehicles globally have shipped with Mobileye EyeQ technology
- NHTSA Standing General Order requires ADS crash reporting
- Autonomous vehicle litigation discovery is establishing AI evidence requirements
- EU AI Act classifies autonomous vehicle AI as high-risk
Questions to Consider
- Can the OEM reconstruct the complete decision chain for any individual ADAS or AV decision at the time of an incident?
- If NHTSA subpoenaed the AI's input state, model version, and output for a specific event, could it be produced?
- Has functional safety governance been extended beyond deterministic ISO 26262 to cover probabilistic AI?
Primary Buyer
VP Safety / Chief Technology Officer / General Counsel
Deal Size
Enterprise ($150K+ ACV)
Implementation
High — Months with dedicated team
Start With
Clean Room
Ready to see how Ontic works for automotive OEMs?