Resources
Documentation
The founding technical documents behind the Kenshiki bounded-synthesis pipeline. Read in order — each document builds on the one before it: ingestion feeds compilation, compilation feeds generation, generation feeds verification.
Governed Intelligence Architecture
The unified architecture specification that integrates SIRE identity, air-gapped ingestion, CFPO prompt compilation, Tri-Pass inference, and the Claim Ledger into one deterministic, auditable pipeline.
The Ingestion Pipeline
How raw documents become governed evidence: air-gapped parsing, deterministic chunking, streaming embeddings, and geometric boundary calculation — the Phase 0 that feeds Kura.
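The deterministic chunking step named above can be sketched roughly as follows. This is a minimal illustration, not the Kura implementation: the window and stride sizes, the hashing scheme, and the function name are all assumptions.

```python
import hashlib

def deterministic_chunks(text: str, window: int = 512, stride: int = 384):
    """Split text into overlapping, reproducibly identified chunks.

    Illustrative sketch only: the window/stride parameters and the
    content-hash ID scheme are assumptions, not Kura's actual values.
    """
    chunks = []
    for start in range(0, max(len(text), 1), stride):
        body = text[start:start + window]
        if not body:
            break
        # A content-derived ID makes the chunking deterministic:
        # the same input text always yields the same chunk identities,
        # so downstream embeddings and boundaries are reproducible.
        chunk_id = hashlib.sha256(f"{start}:{body}".encode()).hexdigest()[:16]
        chunks.append({"id": chunk_id, "start": start, "text": body})
    return chunks

chunks = deterministic_chunks("governed evidence " * 100)
assert chunks == deterministic_chunks("governed evidence " * 100)  # reproducible
```

The point of the sketch is the invariant, not the parameters: re-ingesting the same document must yield byte-identical chunk identities, so everything downstream can be audited.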
The SIRE Identity System
The deterministic tagging methodology that controls what evidence enters the retrieval boundary. SIRE defines the identity of every source document in Kura: what it is, what it covers, what it relates to, and what it must never answer, determined not by what the model thinks but by what the evidence actually is.
Prompt Governance
The specification that defines how Kenshiki assembles a governed prompt contract at runtime: CFPO ordering, evidence-to-zone mapping, compiler invariants, and the enforcement contract between the Prompt Compiler and the Claim Ledger.
The HAIC Framework
The original architecture design that proposed externalizing the truth boundary, multi-pass causal verification, and cryptographic claim attribution. Its Tri-Pass loop (generate, decompose into claims, verify each claim against evidence) is the intellectual foundation of the Kenshiki platform.
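The Tri-Pass loop can be sketched in miniature. Everything here is a hypothetical stand-in for the real passes the HAIC document describes: sentence splitting stands in for claim decomposition, and substring containment stands in for entailment checking.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    supported: bool = False

def decompose(answer: str) -> list[Claim]:
    # Pass 2: split a generated answer into atomic claims.
    # Naive sentence splitting is a toy stand-in for real decomposition.
    return [Claim(s.strip()) for s in answer.split(".") if s.strip()]

def verify(claim: Claim, evidence: list[str]) -> Claim:
    # Pass 3: check each claim against governed evidence.
    # Substring containment is a toy stand-in for entailment checking.
    claim.supported = any(claim.text.lower() in e.lower() for e in evidence)
    return claim

evidence = ["The ingestion pipeline runs air-gapped.", "Chunking is deterministic."]
answer = "The ingestion pipeline runs air-gapped. The model is always right."  # Pass 1 output
claims = [verify(c, evidence) for c in decompose(answer)]
unsupported = [c.text for c in claims if not c.supported]
```

The structural idea survives the simplification: the claim that no evidence supports is surfaced as data, rather than shipped as output.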
How Kenshiki Reads the Model
Inference-time observability: the signals the Claim Ledger uses to inspect token confidence, entailment, stability, and causal attribution, proving what the evidence caused before unsupported output reaches operations.
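Of the signals listed above, token confidence is the simplest to sketch. The aggregation choice (geometric-mean token probability) and the threshold are assumptions for illustration; the real signal set is richer and the gating rule is not shown in this index.

```python
import math

def mean_logprob_confidence(token_logprobs: list[float]) -> float:
    # Aggregate per-token log-probabilities into one confidence score.
    # The geometric mean of token probabilities is one common choice;
    # entailment, stability, and attribution signals are not modeled here.
    return math.exp(sum(token_logprobs) / len(token_logprobs))

def gate(token_logprobs: list[float], threshold: float = 0.5) -> str:
    # Hold low-confidence output before it reaches operations.
    conf = mean_logprob_confidence(token_logprobs)
    return "release" if conf >= threshold else "hold"

print(gate([-0.1, -0.2, -0.05]))  # high-confidence tokens
print(gate([-2.0, -1.5, -3.0]))   # low-confidence tokens
```

The design point is that the check runs before output is released, so a "hold" decision is enforceable rather than advisory.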
Research Papers
Why This Architecture
The research behind the decisions — why authority must be external, why scale doesn't equal reliability, and why the regulatory window is open now.
Simulators, Sensors, and Governed Architecture
The origin paper: how physics forced ontology, ontology forced authority, authority forced gates, and gates forced architecture — discovered empirically, not designed from first principles.
Link Margin: Why Scale Won't Save You
The dominant failure mode is not hallucination but unresolved ambiguity. Models lack authoritative variant spaces, resolution rules, and consequence thresholds at inference time.
Authority Must Be Outside the Model
Why safety enforcement visible to the model's optimization surface inevitably becomes performance theater — and why the architectural invariant is structural invisibility, not better prompting.
The Distributed Enron Moment
Why AI governance is on time, not early — 21+ verified incidents, active enforcement from state AGs and federal agencies, and a regulatory trajectory that mirrors pre-SOX accounting reform.
Why Runtime Governance, Why Now
A frontier model's own analysis demonstrates why training-time alignment is insufficient: its "contrarian" view was consensus opinion shaped by corpus bias, correctable only through external verification.