Kenshiki

Use public models without blind trust

Workshop

Public models. Governed outputs.

Workshop wraps external models — GPT, Claude, OpenRouter endpoints — in the full Kenshiki control plane. It is built for the moment when a fluent answer sounds right, but you hesitate. Kenshiki turns that hesitation into system behavior instead of a personal guess. The model still generates the language. Kenshiki controls what it sees, evaluates what comes back, and decides what is allowed to reach a user. Workshop does not control what happens inside the external model during generation. It controls everything around that boundary.

Without this: the external model answers directly, and the burden of deciding whether it is usable falls on the person reading it. You discover problems only after someone relies on them.

Today

Your team calls a public model API. The model returns fluent text. Someone reads it, decides it sounds right, and passes it along. When that output is questioned — in a review, an audit, a legal proceeding — no one can show what it was based on.

With Workshop

Your team routes through Kenshiki instead. The same public model still generates the language, but inside a governed loop. The prompt is compiled, evidence is retrieved, claims are checked, and the response gets an explicit state before it reaches anyone.

How Workshop works

A user question enters the Kenshiki pipeline before the public model sees it. The prompt is compiled, evidence is retrieved, and bounded context is sent to the external model. The response comes back as a proposal, is decomposed into claims, checked against evidence, and assigned a state before it reaches anyone.
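As a sketch, the loop above can be modeled as a single governed call. Everything here is illustrative: the names (`govern`, `GovernedResponse`), the keyword-overlap retrieval, and the sentence-level claim check are stand-ins for Kenshiki's actual components, not its API.

```python
from dataclasses import dataclass

@dataclass
class GovernedResponse:
    text: str
    state: str          # e.g. "AUTHORIZED", "PARTIAL", "BLOCKED"
    supported: list     # claims traceable to evidence
    unsupported: list   # claims with no evidence behind them

def govern(question, evidence_store, call_model):
    # 1. Compile the prompt and retrieve evidence before the model sees anything.
    #    (Toy retrieval: keyword overlap stands in for the real retrieval engine.)
    evidence = [e for e in evidence_store
                if any(w in e for w in question.lower().split())]
    prompt = ("Answer using only this evidence:\n"
              + "\n".join(evidence) + f"\n\nQ: {question}")

    # 2. The external model generates. This is the only step Workshop
    #    does not control; it controls everything around it.
    proposal = call_model(prompt)

    # 3. Decompose the proposal into claims (toy version: sentences)
    #    and check each claim against the retrieved evidence.
    claims = [c.strip() for c in proposal.split(".") if c.strip()]
    supported = [c for c in claims
                 if any(c.lower() in e.lower() or e.lower() in c.lower()
                        for e in evidence)]
    unsupported = [c for c in claims if c not in supported]

    # 4. Assign an explicit output state before anything reaches a user.
    if not claims:
        state = "BLOCKED"
    elif not unsupported:
        state = "AUTHORIZED"
    elif supported:
        state = "PARTIAL"
    else:
        state = "NARRATIVE_ONLY"
    return GovernedResponse(proposal, state, supported, unsupported)
```

The point of the shape, not the toy internals: the model only ever sees the compiled, bounded prompt, and nothing it returns is passed through without a state.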

[Diagram legend: Kenshiki control plane · Signed envelope · Chain of custody; Your data · Outside Kenshiki]

Output states

  • AUTHORIZED: Claims sufficiently supported by evidence
  • PARTIAL: Evidence exists but coverage is incomplete
  • REQUIRES_SPEC: Question is underspecified
  • NARRATIVE_ONLY: Descriptive but not decision-grade
  • BLOCKED: Policy or evidence conditions not met
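Because the set of states is small and closed, a consuming application can treat it as an enum and route on it. A minimal sketch; the enum and the `present` policy are assumptions about how a client might react to each state, not Kenshiki behavior.

```python
from enum import Enum

class OutputState(Enum):
    AUTHORIZED = "claims sufficiently supported by evidence"
    PARTIAL = "evidence exists but coverage is incomplete"
    REQUIRES_SPEC = "question is underspecified"
    NARRATIVE_ONLY = "descriptive but not decision-grade"
    BLOCKED = "policy or evidence conditions not met"

def present(state: OutputState, text: str) -> str:
    """Decide what, if anything, reaches the user (illustrative policy)."""
    if state is OutputState.AUTHORIZED:
        return text                                   # decision-grade: pass through
    if state in (OutputState.PARTIAL, OutputState.NARRATIVE_ONLY):
        return f"[{state.name}] {text}"               # visible, but flagged
    return f"[{state.name}] withheld: {state.value}"  # REQUIRES_SPEC / BLOCKED
```

The useful property is that "sounds right" never appears as a branch: every response arrives already labeled, and the client only decides how to display each label.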

What Workshop is

The full Kenshiki bounded-synthesis pipeline with a public model at the generation layer. The Prompt Compiler, retrieval engine, coverage tracker, Claim Ledger, and output-state assignment all run inside Kenshiki. The external model sits at the terminal end as the renderer, receiving only the constrained context Kenshiki provides.

  • Use GPT, Claude, or another public endpoint as the generation layer
  • The full Kenshiki control plane runs in front of and behind that model
  • Answers are evaluated before they reach a user
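One consequence of the model-as-renderer design is that the generation layer is swappable. A sketch, assuming the control plane treats any endpoint as a plain callable; the `make_renderer` helper and its stub body are hypothetical.

```python
from typing import Callable

def make_renderer(provider: str) -> Callable[[str], str]:
    """Wrap a public endpoint as a callable the control plane can invoke.

    The stub below stands in for a real HTTP client; a real wrapper would
    POST the constrained context to the provider's API. Either way, the
    model never sees anything outside that context.
    """
    def render(constrained_context: str) -> str:
        return f"{provider} rendered {len(constrained_context)} chars of context"
    return render

# Swapping GPT for Claude (or any OpenRouter endpoint) changes nothing
# about governance, only which renderer sits at the terminal end.
renderers = {name: make_renderer(name) for name in ("gpt", "claude")}
```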

The Kenshiki contract

Two APIs. One contract.

Workshop runs inside the same Kura/Kadai contract as the rest of the platform. You define what counts as real in Kura. Kadai returns answers bounded by that evidence. The difference in Workshop is that the generation layer is a public model endpoint instead of a self-hosted one.

  • Kura defines the evidence boundary
  • Kadai returns an answer bounded by that evidence
  • The public model acts as a renderer inside the contract, not as the authority
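The division of labor above can be shown in code. `Kura` and `Kadai` here are toy classes invented for illustration (the real interfaces are not shown on this page), but they capture the contract: one side defines the evidence boundary, the other answers inside it.

```python
class Kura:
    """Defines what counts as real: the evidence boundary."""
    def __init__(self):
        self.evidence: list[str] = []

    def register(self, doc: str) -> None:
        self.evidence.append(doc)

class Kadai:
    """Returns answers bounded by Kura's evidence. The renderer is
    interchangeable: a public endpoint or a self-hosted model."""
    def __init__(self, kura: Kura, renderer):
        self.kura = kura
        self.renderer = renderer

    def ask(self, question: str) -> tuple[str, str]:
        context = "\n".join(self.kura.evidence)
        answer = self.renderer(f"{context}\n\nQ: {question}")
        # Toy boundary check: the answer must be traceable to evidence.
        grounded = any(answer.lower() in e.lower() or e.lower() in answer.lower()
                       for e in self.kura.evidence)
        return answer, ("AUTHORIZED" if grounded else "NARRATIVE_ONLY")
```

Note where the public model sits: inside `ask`, as a function the contract calls, never as the thing that decides what the answer's state is.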

Who this is for

The Team Already Using Public Models

Already integrating GPT, Claude, or similar APIs into real workflows, but unable to justify what those systems produce under review, audit, or challenge.

The Decision-Maker

Receives an answer only after Kenshiki has checked what supports it, what is missing, and whether it is allowed to leave the system.