Use public models without blind trust
Workshop
Public models. Governed outputs.
Workshop wraps external models — GPT, Claude, OpenRouter endpoints — in the full Kenshiki control plane. It is built for the moment when a fluent answer sounds right, but you hesitate. Kenshiki turns that hesitation into system behavior instead of a personal guess. The model still generates the language. Kenshiki controls what it sees, evaluates what comes back, and decides what is allowed to reach a user. Workshop does not control what happens inside the external model during generation. It controls everything around that boundary.
Without this: the external model answers directly, and the burden of deciding whether the answer is usable falls on the person reading it. You discover problems only after someone has relied on the output.
Today
Your team calls a public model API. The model returns fluent text. Someone reads it, decides it sounds right, and passes it along. When that output is questioned — in a review, an audit, a legal proceeding — no one can show what it was based on.
With Workshop
Your team routes through Kenshiki instead. The same public model still generates the language, but inside a governed loop. The prompt is compiled, evidence is retrieved, claims are checked, and the response gets an explicit state before it reaches anyone.
How Workshop works
A user question enters the Kenshiki pipeline before the public model sees it. The prompt is compiled, evidence is retrieved, and bounded context is sent to the external model. The response comes back as a proposal, is decomposed into claims, checked against evidence, and assigned a state before it reaches anyone.
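The loop above can be sketched in a few lines of Python. Everything here is illustrative: the function names, the two-value state set, and the naive substring check are stand-ins for Kenshiki's actual Prompt Compiler, retrieval engine, and Claim Ledger, not its real API.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class OutputState(Enum):
    SUPPORTED = "supported"      # every claim matched evidence
    UNSUPPORTED = "unsupported"  # at least one claim had no support

@dataclass
class GovernedAnswer:
    text: str
    state: OutputState
    unsupported_claims: list

# --- stub stages; real Kenshiki components replace these ---
def compile_prompt(question):
    return f"Answer strictly from the evidence provided.\nQ: {question}"

def retrieve_evidence(question):
    return ["The API rate limit is 100 requests per minute."]

def bound_context(prompt, evidence):
    return prompt + "\nEvidence:\n" + "\n".join(evidence)

def decompose(proposal):
    # naive sentence split stands in for claim decomposition
    return [s.strip() for s in proposal.split(".") if s.strip()]

def supported_by(claim, evidence):
    # naive overlap check stands in for the Claim Ledger's verification
    return any(claim.lower() in e.lower() or e.lower() in claim.lower()
               for e in evidence)

def governed_loop(question: str, generate: Callable[[str], str]) -> GovernedAnswer:
    prompt = compile_prompt(question)          # Prompt Compiler
    evidence = retrieve_evidence(question)     # retrieval engine
    context = bound_context(prompt, evidence)  # only this reaches the model
    proposal = generate(context)               # external model renders language
    claims = decompose(proposal)               # still a proposal, not an answer
    failed = [c for c in claims if not supported_by(c, evidence)]
    state = OutputState.SUPPORTED if not failed else OutputState.UNSUPPORTED
    return GovernedAnswer(proposal, state, failed)
```

The shape is the point: the external model appears only as the `generate` callback in the middle, and the state is assigned after generation, before anything is returned.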
What Workshop is
The full Kenshiki bounded-synthesis pipeline with a public model at the generation layer. The Prompt Compiler, retrieval engine, coverage tracker, Claim Ledger, and output-state assignment all run inside Kenshiki. The external model sits at the terminal end as the renderer, receiving only the constrained context Kenshiki provides.
- Use GPT, Claude, or another public endpoint as the generation layer
- The full Kenshiki control plane runs in front of and behind that model
- Answers are evaluated before they reach a user
The Kenshiki contract
Two APIs. One contract.
Workshop runs inside the same Kura/Kadai contract as the rest of the platform. You define what counts as real in Kura. Kadai returns answers bounded by that evidence. The difference in Workshop is that the generation layer is a public model endpoint instead of a self-hosted one.
- Kura defines the evidence boundary
- Kadai returns an answer bounded by that evidence
- The public model acts as a renderer inside the contract, not as the authority
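The contract can be sketched as two in-process objects. The class and method names here are illustrative, not Kenshiki's real API: Kura holds the evidence boundary, Kadai answers inside it, and the public model appears only as a render callback.

```python
from dataclasses import dataclass, field

@dataclass
class Kura:
    """Defines the evidence boundary: what counts as real."""
    evidence: list = field(default_factory=list)

    def admit(self, fact: str):
        self.evidence.append(fact)

@dataclass
class Kadai:
    """Returns answers bounded by a Kura boundary."""
    boundary: Kura

    def answer(self, question: str, render) -> dict:
        # The public model is only a renderer inside the contract.
        text = render(question, self.boundary.evidence)
        grounded = any(e in text for e in self.boundary.evidence)
        return {"text": text, "state": "supported" if grounded else "unsupported"}
```

Usage, with a trivial renderer standing in for the public model endpoint:

```python
kura = Kura()
kura.admit("Invoices over $10,000 require two approvals.")
kadai = Kadai(kura)
result = kadai.answer("What approvals do large invoices need?",
                      lambda q, ev: ev[0])
# result["state"] reports whether the rendered text stayed inside the boundary
```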
Who this is for
The Team Already Using Public Models
Integrating GPT, Claude, or similar APIs into real workflows, but unable to justify what those systems produce under review, audit, or challenge.
The Decision-Maker
Receives an answer only after Kenshiki has checked what supports it, what is missing, and whether it is allowed to leave the system.
Go deeper
See Workshop in action
See what governed synthesis looks like around a model. Ask a question and watch the same control plane Workshop uses return a response with claims checked, gaps surfaced, and states assigned.
AI Incident Archive
Real cases where public models produced fluent, confident answers that turned out to be wrong. This is what Workshop is designed to prevent.
Claim Ledger
The verification engine inside every Kenshiki response. Breaks answers into claims, checks each one against evidence, and records what held up.
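A minimal sketch of what a ledger of that kind records, with illustrative names and fields rather than the real Claim Ledger schema. The idea is that each claim, the evidence it was checked against, and the verdict are written down, so the question "what was this based on?" has an answer later.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LedgerEntry:
    claim: str
    evidence_refs: list   # which evidence items were checked against
    verdict: str          # "held" or "failed"
    checked_at: str       # UTC timestamp, for later review

@dataclass
class ClaimLedger:
    entries: list = field(default_factory=list)

    def record(self, claim: str, evidence_refs: list):
        # In this sketch a claim holds iff some evidence was found for it.
        verdict = "held" if evidence_refs else "failed"
        self.entries.append(LedgerEntry(
            claim, evidence_refs, verdict,
            datetime.now(timezone.utc).isoformat()))

    def what_held_up(self) -> list:
        return [e.claim for e in self.entries if e.verdict == "held"]
```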
Platform Architecture
How the full Kenshiki control plane is structured — from Kura and Kadai through the Compiler, Crosswalk, Ledger, and Boundary Gate.