Zero-Context Validation: Test Documentation Completeness with Blind Sub-Agents
TL;DR
- Blind agent testing: Spawn sub-agents with zero context to test if your documentation is actually complete
- Find gaps before shipping: Missing function definitions, unclear return types, and undocumented structures exposed automatically
- TDD for documentation: Test fails (agent gets blocked) → Fix docs → Test passes → Ship with confidence
- No assumptions allowed: Blind agents must report gaps, not guess—preventing documentation debt from accumulating
- Time savings: about 1 hour of validation saves 2-4 hours of developer confusion and prevents support tickets
What Is Zero-Context Validation?
Zero-Context Validation is a Claude Code skill that tests your documentation by spawning blind sub-agents that attempt implementation using ONLY the docs. No prior knowledge, no assumptions, no guessing. If the agent gets blocked, you found documentation gaps BEFORE they waste developer time.
The problem: You write documentation that looks complete. You review it and think "this is clear." But when a developer tries to use it, they get blocked:
- Where is this function defined?
- What should this method return?
- Which file should I modify?
- What's the structure of this object?
The gap: What's obvious to you (with context) isn't documented for others (without context). You can't objectively test your own docs because you know too much. Blind sub-agents don't know anything, so gaps are obvious to them.
Zero-Context Validation is like TDD for documentation: Test (blind sub-agent) fails → Fix docs → Test passes → Ship. The skill enforces the "Iron Law"—no assumptions allowed. Gaps must be reported, not filled. This prevents documentation debt from accumulating.
How to Use Zero-Context Validation
Installation
Clone the skill to your Claude Code skills directory:
git clone https://github.com/whykusanagi/zero-context-validation.git \
~/.claude/skills/zero-context-validation
Verify installation:
ls ~/.claude/skills/zero-context-validation/SKILL.md
Basic Usage
When you've written documentation and want to test if it's actually complete:
- Prepare your documentation: API docs, onboarding guides, developer workflows—anything that should be implementable
- Define a clear implementation task: What should a developer be able to do using ONLY the docs?
- Invoke the skill: Tell Claude Code to use zero-context-validation when testing documentation completeness
- Spawn a blind sub-agent: The skill launches a fresh agent that can ONLY use your documentation
- Agent attempts implementation: The blind agent tries to complete the task or reports specific gaps
- Iterate until success: Fix gaps, re-test with fresh blind agent, repeat until agent succeeds without assumptions
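The steps above are prompt-driven, but they can be scripted. The sketch below composes a validation prompt that names the skill and restricts the blind agent to specific doc files. The helper name `build_validation_prompt` and the exact prompt wording are illustrative, and the commented launch line assumes the `claude` CLI's headless `-p` (print) mode is available in your installation:

```python
import subprocess

def build_validation_prompt(task: str, doc_paths: list) -> str:
    """Compose a prompt that names the skill and restricts the
    blind sub-agent to the listed documentation files."""
    docs = ", ".join(doc_paths)
    return (
        f"Use the zero-context-validation skill. Spawn a blind sub-agent "
        f"that may read ONLY these files: {docs}. Its task: {task}. "
        f"It must report gaps instead of making assumptions."
    )

# Illustrative task from the workflow example below; adapt to your repo.
prompt = build_validation_prompt(
    "Add a premium widget using the documented API",
    ["docs/widgets.md"],
)
# subprocess.run(["claude", "-p", prompt])  # uncomment to launch headlessly
```

Keeping the prompt construction in a function makes it easy to re-run the same task against updated docs on each iteration.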
Real Workflow Example
Here's how a typical validation session looks:
Iteration 1:
- YOU: Spawn a blind sub-agent to add a premium widget using only my docs
- BLIND AGENT: "BLOCKED - Missing base class definition"
- YOU: "That's a real blindspot! I know Widget but didn't document it."
- YOU: Add Widget documentation
Iteration 2:
- YOU: Spawn new blind agent with updated docs
- BLIND AGENT: "BLOCKED - Missing render() return type"
- YOU: "Good catch! I know it returns HTMLElement, forgot to document"
- YOU: Add render() contract documentation
Iteration 3:
- YOU: Spawn new blind agent
- BLIND AGENT: "Implementation complete using only docs!"
- YOU: Review - no assumptions made ✓
- YOU: Ship documentation ✓
Key insight: Multiple iterations are normal! Blind agents expose blindspots you couldn't see because you had context. Each iteration makes your docs more complete.
What Makes This Different?
Traditional Documentation Review
- You review your own documentation
- It looks complete to you (because you have context)
- You unconsciously fill gaps while reading
- Blindspots ship to production
- Developers get blocked and file support tickets
Zero-Context Validation Approach
- You spawn blind sub-agents to test your docs
- They have zero context - can't fill gaps unconsciously
- They expose actual blindspots in documentation
- You iterate until blind agents succeed
- Only then are docs truly complete
The Iron Law: No Assumptions
The skill enforces strict rules to prevent compromised validation:
- Gaps must be reported, not filled: Blind agents can't make assumptions or use "common patterns"
- Getting blocked is success: When the agent can't proceed, we found gaps before they wasted real developer time
- Pressure resistance: The skill guards against time pressure, authority pressure, and "use your best judgment" traps
- Fresh agents each iteration: Each test uses a new blind agent to ensure zero context contamination
Built with TDD Methodology
Zero-Context Validation is itself tested using TDD principles:
- 500-word reference guide (token efficient)
- Detailed examples showing success vs failure scenarios
- Visual workflow diagrams for understanding the process
- Quick Reference table for common situations
- Rationalization table countering common excuses
- Red flags detection for compromised validation
When to Use Zero-Context Validation
This skill is most valuable when documentation quality directly impacts developer experience:
- Before publishing API documentation: Ensure endpoints, parameters, and responses are fully documented
- After major refactors: Verify workflows still make sense after changing code structure
- Creating onboarding guides: Test if new developers can actually follow your setup instructions
- Before merging PRs with new docs: Validate documentation changes before they reach main branch
- When external developers will use your docs: They have zero context about your codebase—just like blind agents
- After receiving "docs unclear" feedback: Find and fix specific gaps causing confusion
Time Investment vs Savings
Validation takes approximately 1 hour (3-5 iterations typical). This prevents:
- 2-4 hours of developer confusion per person
- Support tickets asking "how do I..."
- Duplicate implementations because docs were unclear
- Tech debt from developers making wrong assumptions
- Loss of trust in documentation quality
ROI: For documentation used by 3+ developers, validation pays for itself immediately. For public API docs, the ROI is even higher.
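As a back-of-the-envelope version of that claim, the helper below (a hypothetical name, using the article's own figures of roughly 1 hour of validation cost and 2-4 hours saved per reader) computes the net hours saved:

```python
def validation_roi(readers: int, cost_hours: float = 1.0,
                   saved_per_reader_hours: float = 2.0) -> float:
    """Net hours saved: readers * hours saved each, minus validation cost.
    Defaults use the article's conservative end (2 hours saved per reader)."""
    return readers * saved_per_reader_hours - cost_hours
```

Even at the conservative end, three readers net 5 hours saved for 1 hour invested; public API docs with many readers scale this up quickly.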
For Developers: Technical Details
If you're interested in the implementation or want to contribute:
Skill Architecture
Zero-Context Validation is built as a Claude Code skill using the official Anthropic Skills specification:
- SKILL.md: 500-word reference guide for the AI agent (token efficient)
- examples.md: Multi-iteration workflows showing success vs failure scenarios
- workflow.md: Visual diagrams of the main agent + blind sub-agent loop
- README.md: User-facing documentation with quick start guide
How It Works Internally
The skill uses Claude Code's superpowers:dispatching-parallel-agents capability to spawn blind sub-agents:
- Main agent (you): Has full context about the codebase and documentation
- Blind sub-agent: Fresh agent with ONLY access to documentation being tested
- Task definition: Main agent specifies what should be implementable from docs
- Implementation attempt: Blind agent tries to complete task using only docs
- Gap reporting: If blocked, blind agent reports specific missing information
- Iteration loop: Main agent fixes docs and spawns fresh blind agent for re-test
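The loop above can be modeled in a few lines. This is an illustrative simulation, not the skill's actual implementation: `blind_agent_attempt` stands in for a real blind sub-agent, and the three "required facts" mirror the Widget example from the workflow section:

```python
from dataclasses import dataclass, field

@dataclass
class GapReport:
    """One blind sub-agent attempt: success, or a list of specific gaps."""
    success: bool
    gaps: list = field(default_factory=list)

def blind_agent_attempt(docs: set) -> GapReport:
    """Stand-in for a blind sub-agent: it may consult ONLY `docs`.
    Anything the task needs that isn't documented is reported as a gap,
    never guessed (the Iron Law)."""
    required = {"Widget base class", "render() return type", "file layout"}
    missing = sorted(required - docs)
    return GapReport(success=not missing, gaps=missing)

def validate(docs: set, max_iterations: int = 10) -> int:
    """Main-agent loop: spawn a fresh blind agent, fix reported gaps,
    repeat. Returns the iteration on which the agent succeeded."""
    for iteration in range(1, max_iterations + 1):
        report = blind_agent_attempt(docs)  # fresh agent each iteration
        if report.success:
            return iteration                # docs are complete
        docs.update(report.gaps)            # "fix the docs" for the re-test
    raise RuntimeError("documentation still incomplete after max iterations")
```

The key structural point is that each iteration gets a fresh agent and the main agent only ever acts on explicit gap reports, never on guesses.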
Integration with Development Workflow
Zero-Context Validation integrates with existing developer workflows:
- Pre-commit hooks: Validate documentation changes before committing
- CI/CD pipelines: Run validation as part of PR review process
- Documentation linting: Complement existing linters with semantic completeness testing
- API versioning: Ensure new API versions have complete documentation before release
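For the pre-commit integration, a minimal hook might just detect staged documentation changes and remind the author to validate them. The sketch below is a starting point, not part of the skill; the file patterns and reminder text are assumptions to adapt to your repository:

```python
#!/usr/bin/env python3
"""Pre-commit hook sketch: flag staged doc changes for zero-context
validation. Install as .git/hooks/pre-commit (executable)."""
import subprocess
import sys

DOC_SUFFIXES = (".md", ".rst", ".txt")

def docs_needing_validation(changed_files):
    """Return only the documentation files from a list of changed paths."""
    return [f for f in changed_files if f.lower().endswith(DOC_SUFFIXES)]

def main() -> int:
    # List files staged for this commit.
    staged = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    docs = docs_needing_validation(staged)
    if docs:
        print("Docs changed:", ", ".join(docs))
        print("Reminder: run zero-context validation before merging.")
    return 0  # warn only; return nonzero instead to block the commit

if __name__ == "__main__":
    sys.exit(main())
```

Returning 0 keeps the hook advisory; a stricter team could return 1 until a validation pass is recorded.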
Contributing
The skill is open source under MIT license. Contributions welcome:
- Fork the GitHub repository
- Submit pull requests with improvements
- Follow TDD methodology for skill changes
- See the writing-skills specification
Related Developer Tools
Zero-Context Validation is part of a larger ecosystem of developer productivity tools:
- Corrupted Theme Design System: CSS design system with glassmorphism, dark mode, and component library for consistent UI development
- Multi-Client Facial Tracking Relay: Zero-latency Go relay for splitting iFacialMocap streams to multiple applications
- Celeste Discord Command Guide: Documentation for Celeste AI's Discord bot commands and integration patterns
All tools follow the same philosophy: invest time in tooling to prevent future pain. Quality documentation is infrastructure—treat it with the same rigor as code.
Ask Celeste
Q: How is this different from regular code review or documentation review?
A: Regular reviews are done by people who already have context about your codebase. They unconsciously fill in gaps while reading because they know how things work. Blind sub-agents have zero context—they can't fill gaps, so they expose actual blindspots. It's like the difference between asking a colleague to review your docs (they already know the system) versus asking a new hire to follow your onboarding guide (they know nothing). Blind agents are like new hires—they reveal what's actually missing.
Q: Isn't this overkill for internal documentation?
A: Internal docs are where this is MOST valuable! Your team changes over time—new hires join, people switch teams, knowledge gets lost. What's obvious to you today won't be obvious to someone reading these docs in 6 months. Blind agent validation ensures your internal docs remain useful as the team evolves. Plus, poor internal docs waste the most expensive resource: senior developer time answering "how do I..." questions.
Q: How long does validation typically take?
A: Expect 3-5 iterations for most documentation, taking about 1 hour total. First iteration usually catches major gaps (missing base classes, undefined functions). Second iteration catches interface details (return types, parameter contracts). Third iteration often succeeds or catches minor edge cases. Some complex APIs might need 5-7 iterations, but that just means you found MORE gaps before they wasted developer time. The ROI is there—1 hour validation prevents 2-4 hours of confusion per developer who uses the docs.
Q: What if the blind agent makes assumptions despite the skill's rules?
A: The skill includes pressure-resistant instructions and red flag detection. If an agent makes assumptions, that's a signal to strengthen the "no assumptions" instruction and re-test. The skill specifically guards against "using common patterns," "assuming based on Y," and other rationalization traps. If you see these, restart validation with stricter rules. The skill documentation includes a rationalization table showing common excuses and how to counter them.