Slopsquatting/Package Hallucination Research
System Description
LLM code generation across major models (GPT-4, Claude, Gemini, Llama, etc.)
Authoritative Output Type
Code recommendations including package/library import statements
Missing Required State
Package existence verification, supply chain security validation, hallucinated dependency detection
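The first missing check, package existence verification, can be approximated with a registry lookup. A minimal sketch, assuming a Python/PyPI ecosystem and the public PyPI JSON endpoint (`https://pypi.org/pypi/<name>/json`); the function name `package_exists` is illustrative, not taken from the research:

```python
import urllib.error
import urllib.request

PYPI_JSON = "https://pypi.org/pypi/{name}/json"  # public PyPI metadata endpoint

def package_exists(name: str, timeout: float = 5.0) -> bool:
    """Return True if `name` is a published distribution on PyPI.

    A 404 means the name is unregistered -- exactly the state an attacker
    can claim by slopsquatting -- so a hallucinated dependency comes back False.
    """
    try:
        with urllib.request.urlopen(PYPI_JSON.format(name=name), timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other HTTP errors are infrastructure problems, not evidence either way
```

Existence is only half the gate: a slopsquatted name that has already been registered would pass this check, so provenance signals (maintainer history, release age, an internal allow-list) still need separate validation.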
Why This Is SAF
Research testing 16 LLMs across 576,000 generated code samples found that roughly 20% of recommended packages did not exist; 43% of the hallucinated package names recurred consistently across repeated prompts, making them predictable and therefore exploitable attack vectors
Completeness Gate Question
Do all recommended packages exist in official registries and have verified provenance?
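A sketch of how that gate question could be applied mechanically to generated Python code, using the standard `ast` module to collect top-level imports and reusing the `package_exists` sketch above. Note the simplifications: it requires Python 3.10+ for `sys.stdlib_module_names`, and import names do not always match distribution names (e.g. `cv2` ships as `opencv-python`), so a real gate needs a mapping layer this sketch omits:

```python
import ast
import sys

def imported_top_level_names(source: str) -> set[str]:
    """Collect top-level module names from import statements in generated code."""
    names: set[str] = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            names.add(node.module.split(".")[0])
    return names

def completeness_gate(generated_code: str) -> list[str]:
    """Return the third-party imports that could not be verified in the registry.

    An empty list means the gate passes; anything else should block emission.
    """
    third_party = imported_top_level_names(generated_code) - set(sys.stdlib_module_names)
    return sorted(name for name in third_party if not package_exists(name))
```

In use, the recommendation is withheld whenever `completeness_gate(llm_output)` returns a non-empty list, rather than emitted with a warning attached.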
Documented Consequence
A new supply-chain vulnerability class was established: researchers registered 'slopsquatted' package names and saw them downloaded thousands of times, and a proof-of-concept fake 'huggingface-cli' package was downloaded more than 30,000 times
Notes
- **Verified**: 2025-12-19
- **Research Date**: 2024
- **Notes**: Multiple independent research teams confirmed the hallucinated-package vulnerability; the term 'slopsquatting' was coined for this attack class
Prevent this in your system.
The completeness gate question above is exactly what Ontic checks before any claim is emitted: no evidence, no emission.