Documentation

The founding technical documents behind the Kenshiki bounded-synthesis pipeline. Read in order — each document builds on the one before it: ingestion feeds compilation, compilation feeds generation, generation feeds verification.
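The phase ordering described above (ingestion feeds compilation, compilation feeds generation, generation feeds verification) can be pictured as a simple chained pipeline. The sketch below is purely illustrative: every function and field name here is hypothetical and is not part of the Kenshiki API.

```python
# Hypothetical sketch of the Kenshiki phase ordering.
# None of these names come from the actual platform; each phase is stubbed.

def ingest(raw_documents):
    # Phase 0: raw documents become governed evidence.
    return [{"source": doc, "evidence": doc.strip()} for doc in raw_documents]

def compile_prompt(evidence, question):
    # Phase 1: evidence and the question become a governed prompt contract.
    return {"question": question, "zones": evidence}

def generate(prompt_contract):
    # Phase 2: the model answers inside the compiled contract (stubbed).
    return f"draft answer to: {prompt_contract['question']}"

def verify(answer, evidence):
    # Phase 3: each claim is checked against the governed evidence (stubbed).
    return {"answer": answer, "verified_against": len(evidence)}

def run_pipeline(raw_documents, question):
    evidence = ingest(raw_documents)               # Phase 0
    contract = compile_prompt(evidence, question)  # Phase 1
    answer = generate(contract)                    # Phase 2
    return verify(answer, evidence)                # Phase 3

result = run_pipeline(["doc one", "doc two"], "What is Phase 0?")
```

Each phase consumes only the output of the phase before it, which is what makes the pipeline deterministic and auditable end to end.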

Architecture Spec [ FOUNDING RFC ]

Governed Intelligence Architecture

The unified architecture specification that integrates SIRE identity, air-gapped ingestion, CFPO prompt compilation, Tri-Pass inference, and the Claim Ledger into one deterministic, auditable pipeline.

Phase 0 — Ingestion [ FOUNDING RFC ]

The Ingestion Pipeline

How raw documents become governed evidence: air-gapped parsing, deterministic chunking, streaming embeddings, and geometric boundary calculation — the Phase 0 that feeds Kura.

Phase 0 — Evidence Identity [ FOUNDING RFC ]

The SIRE Identity System

The deterministic tagging methodology that controls what evidence enters the retrieval boundary. SIRE defines the identity of every source document in Kura — what each source is, covers, relates to, and must never answer — determined not by what the model thinks, but by what the evidence actually is.

Phase 1 — Compilation [ FOUNDING RFC ]

Prompt Governance

How the Prompt Compiler assembles a governed prompt contract at runtime from evidence and the question: CFPO ordering, evidence-to-zone mapping, compiler invariants, and the enforcement contract between the Prompt Compiler and the Claim Ledger.

Phase 2 — Generation + Verification [ FOUNDING RFC ]

The HAIC Framework

The founding Tri-Pass architecture — generate, decompose into claims, verify each claim against evidence — and the original design that proposed externalizing the truth boundary, multi-pass causal verification, and cryptographic claim attribution: the intellectual foundation of the Kenshiki platform.

Phase 3 — Observability [ FOUNDING RFC ]

How Kenshiki Reads the Model

Inference-time observability: how the Claim Ledger reads signals of token confidence, entailment, stability, and causal attribution to prove what the evidence caused, stopping unsupported output before it reaches operations.

Research Papers

Why This Architecture

The research behind the decisions — why authority must be external, why scale doesn't equal reliability, and why the regulatory window is open now.