The Recursive Hallucination (Gemini List Generation)
System Description
Large Language Model (Gemini) tasked with compiling a verified list of 'SAF' failure incidents
Authoritative Output Type
Structured Data (JSON) with specific citations, URLs, and case details
Missing Required State
- Temporal grounding: distinguishing past events from future predictions
- Existence verification: checking if a generated citation refers to a real document
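As a minimal sketch, here is what these two missing checks could look like in Python. The cutoff date, knowledge-base contents, and function names are all hypothetical illustrations, not part of any actual Gemini pipeline:

```python
from datetime import date

# Hypothetical knowledge cutoff: anything dated after this point is a
# prediction about the future, not a recorded past event.
KNOWLEDGE_CUTOFF = date(2025, 6, 1)

# Hypothetical verified knowledge base mapping citations to canonical URLs.
VERIFIED_KB = {
    "Example Incident A": "https://example.org/incident-a",
}

def is_temporally_grounded(incident_date: date) -> bool:
    """Temporal grounding: a 'past incident' cannot postdate the cutoff."""
    return incident_date <= KNOWLEDGE_CUTOFF

def citation_exists(citation: str, url: str) -> bool:
    """Existence verification: the citation must resolve to a known document."""
    return VERIFIED_KB.get(citation) == url
```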
Why This Is SAF
The system prioritized 'schema compliance' (producing valid JSON) and 'narrative consistency' (extrapolating the SAF pattern) over factual accuracy. It treated a request for 'real cases' as a creative writing prompt to fill the 'Late 2025' data void, fabricating 15 detailed incidents (including 'Johnson v. Dunn' and 'Sodium Bromide Poisoning') that mimicked real data patterns but had no basis in reality.
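The gap between the two priorities is mechanical: schema validation inspects the shape of a record, never its truth. A short illustration (hypothetical keys and record, standard library only) of how a fully fabricated incident sails through a structural check:

```python
REQUIRED_KEYS = {"case_name", "citation_url", "summary"}

def is_schema_compliant(record: dict) -> bool:
    """Structural check only: required keys present, values are strings."""
    return REQUIRED_KEYS <= record.keys() and all(
        isinstance(record[key], str) for key in REQUIRED_KEYS
    )

# A fabricated incident in the style described above: perfectly valid
# shape, zero basis in reality. The schema gate cannot tell the difference.
fabricated = {
    "case_name": "Johnson v. Dunn",
    "citation_url": "https://example.org/johnson-v-dunn",  # hypothetical URL
    "summary": "Plausible-sounding but invented incident.",
}

assert is_schema_compliant(fabricated)  # passes; only an existence check would catch it
```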
Completeness Gate Question
Do the specific case citation and URL exist in the verified knowledge base, or are they a probabilistic completion of the request pattern?
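A sketch of a gate that operationalizes this question, with refusal as the default path. The names here (`emit_claim`, `UngroundedClaimError`, the knowledge-base layout) are assumptions for illustration, not a real API:

```python
# Hypothetical verified knowledge base, populated only from checked sources.
VERIFIED_KB = {
    "Example Incident A": "https://example.org/incident-a",
}

class UngroundedClaimError(Exception):
    """Raised instead of emitting a claim that lacks verified evidence."""

def emit_claim(citation: str, url: str, detail: str) -> dict:
    # The completeness gate: a citation either maps to a verified document
    # with a matching URL, or nothing is emitted at all.
    if VERIFIED_KB.get(citation) != url:
        raise UngroundedClaimError(
            f"refusing to emit {citation!r}: no verified source on record"
        )
    return {"citation": citation, "url": url, "detail": detail}
```

Under this structure, a probabilistic completion like 'Johnson v. Dunn' raises an error rather than serializing: the fabrication is blocked at emission time instead of being discovered during manual decontamination.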
Documented Consequence
Creation of a 'high-fidelity pollution artifact' (a fabricated list presented as verified) that required manual decontamination, and an immediate recursive demonstration of the 'False Authority' failure mode during the documentation of that same mode.
Notes
- **Verified**: 2025-12-19
- **Notes**: Incident occurred live during the drafting of this dataset; serves as a primary example of a 'Pattern Matching > Factual Grounding' failure.
Prevent this in your system.
The completeness gate question above is exactly what Ontic checks before any claim is emitted. No evidence, no emission.