Kenshiki

No evidence, no emission.

Fluency used to mean something. Now it does not. We built the control plane that makes fluency trustworthy again.

When fluency breaks, everything looks right until it isn't.

AI systems don't fail loudly. They fail convincingly. The same models that produce useful answers can fabricate case law, hallucinate intelligence, and leak sensitive data—without any visible signal that something is wrong. The problem is not that they fail. It's that you can't tell when they do.

Curated from the AI Incident Database and Kenshiki operational baselines for high-stakes systems.

1,425+

Documented AI failures in the AI Incident Database

36%

Impacting vulnerable populations

Zero

Margin for error in the decisions that matter

More parameters don't fix the problem.

If fluency is no longer a signal of truth, making a model more fluent doesn't restore that signal. It just makes the answers more convincing.

Scaling increases capability, but it does not introduce grounding. A larger model can still fabricate, still omit, still overgeneralize — only with greater confidence and fewer visible cracks.

This is why the problem doesn't go away with better models. It gets harder to detect. The system becomes more useful, but less interrogable.

Certainty does not come from making the model smarter. It comes from constraining what the model is allowed to say based on what can be proven.

Kenshiki does not treat the model as an authority. It treats it as a synthesizer. Every answer is bounded by governed evidence inside your trust boundary, and every claim must be supported before it is allowed to exist.

This is not a filter on top of a model. It replaces the assumption that the model can be trusted.

You don't have to take this on faith.

Prove an Answer

Where Kenshiki fits

However you use AI today, Kenshiki sits around it — making sure every answer holds up before you act on it.

Workshop /01

Use it with the models you already have

Bring your own model or use GPT, Claude, or others. Kenshiki sits around the model — pulling in real sources, constraining what it can say, and checking every answer before it reaches you.

Learn more →

Refinery /02

Run it inside your environment

Run the same system inside your AWS environment or your own infrastructure. Your data stays with you, and every answer is still checked against real evidence before it's used.

Learn more →

Clean Room /03

Use it where nothing can leave

For environments that can't connect to anything else. Kenshiki runs entirely inside — model, data, and verification — so every answer is still grounded and accountable.

Learn more →

Two APIs. One contract.

Define what's real in Kura. Ask Kadai for answers that can be proven from it.

01 Store what counts as real · Kura Index
02 Enter the system · Prompt Sanitizer
03 Constrain what gets asked · Prompt Compiler
04 Get a grounded answer · Kadai Inference
05 Prove what the system is willing to say · Claim Ledger
06 Keep priors from leaking past proof · Boundary Gate

Store what counts as real

Kura Index

Kura is the evidence store. You POST source material into Kura, and the system preserves provenance, structure, and retrieval boundaries so every downstream answer can be traced back to something real.
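
A minimal sketch of what ingestion could look like over HTTP. The endpoint path, field names, and provenance schema here are illustrative assumptions, not the documented Kura API.

```python
import requests

# Hypothetical ingestion call: the endpoint and payload shape are
# assumptions for illustration, not the documented Kura API.
resp = requests.post(
    "https://kura.example.com/v1/documents",
    headers={"Authorization": "Bearer <token>"},
    json={
        "content": "Q3 revenue was $4.2M, per the audited filing.",
        "provenance": {
            "source": "10-Q filing, 2024-11-01",  # where the evidence came from
            "uri": "https://example.com/filings/q3.pdf",
        },
        "retrieval_boundary": "finance-team",  # who may retrieve it downstream
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # e.g. an evidence ID the Claim Ledger can cite later
```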

Enter the system

Prompt Sanitizer

The Prompt Sanitizer is the secure entry point where every request enters Kenshiki. It establishes who is asking and what evidence they can access, and it binds that identity through the entire pipeline via OpenFGA relationship-based access control (ReBAC).
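
OpenFGA is a real system with a documented Check API; how Kenshiki wires it into the pipeline is not shown here, so the glue below is an assumption. The sketch asks whether the requesting identity may read a given evidence object before any retrieval happens.

```python
import requests

FGA_URL = "http://localhost:8080"  # assumed local OpenFGA deployment
STORE_ID = "<your-store-id>"       # your OpenFGA store ID

def can_read_evidence(user_id: str, evidence_id: str) -> bool:
    # OpenFGA's standard Check endpoint: is this (user, relation, object)
    # tuple permitted by the authorization model?
    resp = requests.post(
        f"{FGA_URL}/stores/{STORE_ID}/check",
        json={
            "tuple_key": {
                "user": f"user:{user_id}",
                "relation": "reader",
                "object": f"evidence:{evidence_id}",
            }
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["allowed"]
```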

Constrain what gets asked

Prompt Compiler

The Prompt Compiler turns a loose prompt into a disciplined query. Before the model sees anything, the system narrows the question to what can actually be answered from evidence instead of letting the model improvise.
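
One way to picture the compiler's output is as a structured query object rather than free text. Everything below — the class name, fields, and example values — is a hypothetical illustration of the idea, not Kenshiki's schema.

```python
from dataclasses import dataclass, field

@dataclass
class CompiledQuery:
    # Hypothetical shape for a compiled query; illustrates the idea that
    # the model only ever sees a question scoped to available evidence.
    question: str              # the narrowed, answerable question
    evidence_scope: list[str]  # evidence collections the asker may use
    out_of_scope: list[str] = field(default_factory=list)  # parts dropped

# A loose prompt like "What do we know about Q3, and where is the market
# going?" might compile to:
compiled = CompiledQuery(
    question="What does the evidence say about Q3 revenue?",
    evidence_scope=["finance-team"],
    out_of_scope=["market forecasting: no governing evidence available"],
)
```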

Get a grounded answer

Kadai Inference

Kadai is the reasoning API. You query Kadai and get back an answer grounded in the evidence available to the system. Kadai does not act as the authority. It synthesizes across what Kura contains and what the Claim Ledger can support.
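
A minimal sketch of a query, under the same caveat as above: the endpoint, request fields, and response shape are illustrative assumptions, not the documented Kadai API.

```python
import requests

# Hypothetical query call against Kadai; shapes are assumptions.
resp = requests.post(
    "https://kadai.example.com/v1/answers",
    headers={"Authorization": "Bearer <token>"},
    json={"question": "What was Q3 revenue?"},
    timeout=60,
)
resp.raise_for_status()
answer = resp.json()
print(answer["text"])    # the synthesized answer
print(answer["claims"])  # per-claim evidence links, per the Claim Ledger
```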

Prove what the system is willing to say

Claim Ledger

The Claim Ledger breaks the answer into claims, checks those claims against the evidence, and records what is supported, what is unsupported, and what evidence is missing. Unsupported claims do not get through.
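
Conceptually, a ledger entry pairs each extracted claim with a verdict and its supporting evidence. The structure below is an illustrative assumption, not Kenshiki's record format.

```python
# Hypothetical ledger record for one answer; field names are illustrative.
ledger_entry = {
    "answer_id": "ans_123",
    "claims": [
        {
            "text": "Q3 revenue was $4.2M.",
            "verdict": "supported",
            "evidence": ["ev_q3_filing"],  # traceable back into Kura
        },
        {
            "text": "Revenue will double next year.",
            "verdict": "unsupported",      # stripped before the answer ships
            "evidence": [],
        },
        {
            "text": "Gross margin improved.",
            "verdict": "missing_evidence", # flagged: no source either way
            "evidence": [],
        },
    ],
}
```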

Keep priors from leaking past proof

Boundary Gate

The Boundary Gate keeps model priors from slipping past the evidence layer unchecked. It is the final separation between fluent generation and claims the system is actually willing to let reach a user.
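
As a sketch, the gate's contract reduces to a simple rule applied to the ledger: only claims with supporting evidence survive. The function below is a toy illustration of that rule, not the product's implementation.

```python
def gate(ledger_entry: dict) -> list[str]:
    # Toy version of the gate's contract: a claim reaches the user only
    # if the Claim Ledger found evidence for it. Everything else is held back.
    return [
        c["text"] for c in ledger_entry["claims"] if c["verdict"] == "supported"
    ]

# Using the hypothetical ledger entry shape from the Claim Ledger sketch:
entry = {
    "claims": [
        {"text": "Q3 revenue was $4.2M.", "verdict": "supported"},
        {"text": "Revenue will double next year.", "verdict": "unsupported"},
    ]
}
print(gate(entry))  # -> ['Q3 revenue was $4.2M.']
```

The point of the toy is the asymmetry: fluent generation can propose anything, but only evidence-backed claims cross the boundary.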