Kenshiki

Private deployment. Stronger proof.

Refinery

Run the same Kenshiki system in a private environment — without the public-model boundary.

Refinery is Kenshiki's private deployment tier. It keeps the full Kenshiki control plane but removes the public-model boundary. Refinery can run as a shared or dedicated instance in Kenshiki-managed AWS, inside a customer VPC or GovCloud account, or on premises when a full air gap is not required. In every mode, the answer path stays inside a private runtime: prompt compilation, retrieval, generation, claim evaluation, and output gating all happen under controlled infrastructure.

Without this: you can move AI into a private environment and still get answers no one can defend. Data residency solves where the model runs. It does not solve whether the output holds up.

Today

Your team moved AI into a private environment to satisfy data-residency, confidentiality, or procurement requirements. The model is no longer public, but the output is still just fluent prose that humans have to interpret and defend manually. When challenged, you can say where it ran. You still cannot show why a specific claim should be trusted.

With Refinery

The request now runs through Kenshiki inside a private deployment. The prompt is compiled, evidence is retrieved from governed sources, a private inference engine generates a proposal, and the Claim Ledger checks that proposal against evidence and local telemetry before assigning an output state.

How Refinery works

Refinery runs the same bounded-synthesis pipeline as Workshop, but moves generation into a private deployment. The prompt is compiled, governed evidence is retrieved, a private inference engine produces a proposal, and the Claim Ledger uses source checks plus local telemetry to decide what is allowed to leave.
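The pipeline above can be sketched end to end. This is a minimal illustration only: the function names, the toy retrieval logic, and the envelope fields are assumptions for the sketch, not Kenshiki's actual API.

```python
# Hypothetical sketch of the bounded-synthesis pipeline: compile the prompt,
# retrieve governed evidence, generate a proposal on a private engine, and
# gate the output. All names here are illustrative assumptions.

def compile_prompt(question: str) -> str:
    """Stand-in for the Prompt Compiler: normalize the request."""
    return question.strip().lower()

def retrieve_evidence(prompt: str, corpus: dict[str, str]) -> list[str]:
    """Stand-in for governed retrieval: pull passages whose key matches."""
    return [text for key, text in corpus.items() if key in prompt]

def generate_proposal(prompt: str, evidence: list[str]) -> str:
    """Stand-in for the private inference engine."""
    return f"Answer to '{prompt}' drawing on {len(evidence)} source(s)."

def gate(evidence: list[str]) -> str:
    """Stand-in for the Claim Ledger: assign an output state before release."""
    return "AUTHORIZED" if evidence else "BLOCKED"

def answer(question: str, corpus: dict[str, str]) -> dict:
    """Run the full pipeline and return a gated result envelope."""
    prompt = compile_prompt(question)
    evidence = retrieve_evidence(prompt, corpus)
    proposal = generate_proposal(prompt, evidence)
    return {"proposal": proposal, "state": gate(evidence), "evidence": evidence}
```

The point of the shape is that nothing leaves without passing through `gate`: the proposal is generated, then evaluated, and only the state-tagged envelope is returned to the caller.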

[Diagram: the answer path runs inside the Kenshiki control plane with a signed envelope and chain of custody; your data stays outside Kenshiki]

Output states

AUTHORIZED: Claims sufficiently supported by evidence
PARTIAL: Evidence exists but coverage is incomplete
REQUIRES_SPEC: Question needs a tighter scope or is missing a required detail
NARRATIVE_ONLY: Descriptive but not decision-grade
BLOCKED: Policy or evidence conditions not met
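The five states above form a closed contract, so a caller can branch on them mechanically. The sketch below shows one plausible way a gate could map claim support onto these states; the `Claim` shape and the decision thresholds are assumptions for illustration, not Kenshiki's actual evaluation logic.

```python
from dataclasses import dataclass
from enum import Enum

class OutputState(Enum):
    """The five output states of the Kenshiki answer contract."""
    AUTHORIZED = "AUTHORIZED"
    PARTIAL = "PARTIAL"
    REQUIRES_SPEC = "REQUIRES_SPEC"
    NARRATIVE_ONLY = "NARRATIVE_ONLY"
    BLOCKED = "BLOCKED"

@dataclass
class Claim:
    text: str
    supported: bool  # did source checks confirm this claim?

def evaluate(claims: list[Claim], policy_ok: bool) -> OutputState:
    """Toy gate: map policy status and claim support onto an output state."""
    if not policy_ok:
        return OutputState.BLOCKED          # policy conditions not met
    if not claims:
        return OutputState.REQUIRES_SPEC    # nothing checkable was extracted
    supported = sum(c.supported for c in claims)
    if supported == len(claims):
        return OutputState.AUTHORIZED       # every claim is backed by evidence
    if supported > 0:
        return OutputState.PARTIAL          # evidence exists, coverage incomplete
    return OutputState.NARRATIVE_ONLY       # fluent prose, no evidentiary support
```

A consuming workflow would then treat anything other than `AUTHORIZED` as requiring human attention or a narrower question.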

What Refinery is

A private deployment of the full Kenshiki stack. It uses the same Prompt Compiler, retrieval, Claim Ledger, and output-state contract as Workshop, but generation happens on a private inference engine instead of a public endpoint.

  • Private deployment tier for production workflows
  • No public model API in the critical path
  • Same bounded-synthesis contract as Workshop

The Kenshiki contract

Same contract. Private runtime.

Refinery runs the same Kura/Kadai contract as the rest of the platform. Kura defines what counts as real. Kadai is the answer contract the caller sees. The difference in Refinery is that the backing inference runtime is private: no public model API in the critical path, and no release without Claim Ledger evaluation.
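To make the caller-facing side of this concrete, here is a minimal sketch of what a Kadai-style answer envelope might carry: the answer text, its output state, the evidence it relied on, and a digest supporting chain of custody. The field names and the hashing scheme are assumptions for illustration; the real contract and signing mechanism are not described here.

```python
import hashlib
import json
from dataclasses import dataclass, field

# Illustrative sketch of a caller-facing answer envelope. Field names and
# the digest scheme are assumptions, not Kenshiki's actual Kadai contract.

@dataclass
class AnswerEnvelope:
    text: str                                   # the generated answer
    state: str                                  # one of the five output states
    evidence_refs: list[str] = field(default_factory=list)

    def digest(self) -> str:
        """Toy stand-in for a signed envelope: a stable hash of the payload,
        so any later tampering with text, state, or evidence is detectable."""
        payload = json.dumps(
            {"text": self.text, "state": self.state, "evidence": self.evidence_refs},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```

Because the runtime is private, the same envelope shape is produced whether generation happens in a managed account, a customer VPC, or on premises; only the backing inference engine changes.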

  • Same Kura/Kadai contract as Workshop
  • Generation moves into a private runtime
  • Claim Ledger evaluates both source support and local model telemetry

Who this is for

The Platform Team

deploys AI into private infrastructure while preserving data control, enterprise integration, and repeatable proof of what the system relied on.

The Reviewer

receives an answer that is already classified, traceable, and fit for use in production workflows — not raw model prose that still has to be defended by hand.