Celeste CLI v1.8: Grimoire Config, Code Graph, MCP Server, and a Real Agentic Workflow
TL;DR
- `.grimoire` project context: Persona-themed config files with auto-discovery, `@include` support, and `celeste init` auto-detection - think CLAUDE.md but with more personality
- Code graph + semantic search: Structural code indexing via Go AST parsing and regex extraction, with MinHash-based concept search - no embeddings or API calls needed
- MCP server mode: `celeste serve` exposes Celeste via stdio and authenticated SSE transports for Claude Code, Codex, or any MCP client
- Session persistence: JSONL append-only session logs with `celeste resume` and auto-resume, plus file checkpointing with `/undo` revert
- Plan mode: `/plan` enters read-only mode, writes a plan file for review, `/plan execute` runs it
- Extended thinking: Provider-specific reasoning tokens (Claude, Gemini, xAI) with the `/effort` command to control thinking depth
- Six LLM providers: OpenAI, Grok/xAI, Venice.ai, Anthropic (native backend), Google Gemini, and Vertex AI
The Road from v1.7 to v1.8
v1.7 was the ambitious rewrite that replaced the entire tool layer with a unified interface, added MCP client support, streaming tool execution, and a permission system. It was also, frankly, bricked. The scope of changes broke enough things that v1.7 served more as a foundation than a usable release.
v1.8 is where it all comes together. The unified tool layer from v1.7 is now stable, and v1.8 builds on top of it with features that actually work end-to-end. The TUI no longer crashes on startup. Scrolling works. Copy works. Permissions don't spam. It's the release that v1.7 was supposed to be, plus a massive feature set that goes well beyond it.
.grimoire Project Context
Every project is different, and Celeste needs to understand yours. .grimoire files are persona-themed project configuration that Celeste auto-discovers when you open a directory. Think of them as CLAUDE.md files but designed specifically for Celeste's workflow:
- Auto-discovery: Celeste finds and loads `.grimoire` files automatically on startup
- `@include` support: Compose context from multiple files - keep your API docs separate from your coding conventions
- `celeste init`: Auto-detects your project type and generates a starter grimoire
- Hooks system: Define pre/post tool execution hooks with template variables for custom automation
The grimoire is where you tell Celeste what matters about your project - conventions, architecture decisions, things to avoid, and how you want her to work. It persists across sessions and travels with the repo.
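As a sketch of what this can look like, here is a hypothetical `.grimoire` for a Go service. The project details and file names are invented, and the `@include` lines follow the feature described above - check the repository docs for the exact directive syntax:

```
Project: payments-api (Go 1.24, chi router)

Conventions:
- Wrap errors with fmt.Errorf("...: %w", err); never log the same error twice
- Never edit files under gen/ - they are generated output

Things to avoid:
- No new third-party dependencies without asking first

@include docs/api-conventions.md
@include docs/architecture.md
```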
Code Graph and Semantic Search
Celeste now builds a structural code graph of your project. For Go projects, it uses full AST parsing to extract functions, types, interfaces, and call relationships. For other languages, it uses regex-based extraction that captures the important structural elements.
On top of the graph sits a semantic code search powered by MinHash over enriched shingles. This means you can search for concepts ("where does authentication happen?", "how do we handle rate limiting?") without needing embedding APIs or external services. The search runs entirely locally against a SQLite index.
The code graph also powers a code stubs tool that finds structurally incomplete code - functions with TODO comments, empty implementations, and placeholder returns. Useful for tracking technical debt or finding where you left off.
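The MinHash idea behind the concept search is easy to sketch in plain Go. This is an illustration of the technique, not Celeste's implementation: hash every shingle under k salted hash functions, keep the minimum per slot, and estimate Jaccard similarity as the fraction of matching slots between two signatures.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"strings"
)

const numHashes = 64

// shingles splits text into overlapping word trigrams.
func shingles(text string) []string {
	words := strings.Fields(strings.ToLower(text))
	var out []string
	for i := 0; i+3 <= len(words); i++ {
		out = append(out, strings.Join(words[i:i+3], " "))
	}
	return out
}

// signature computes a MinHash signature: each "hash function"
// is FNV-1a salted with the slot index.
func signature(shingles []string) [numHashes]uint64 {
	var sig [numHashes]uint64
	for i := range sig {
		sig[i] = ^uint64(0) // start each slot at the maximum value
	}
	for _, s := range shingles {
		for i := 0; i < numHashes; i++ {
			h := fnv.New64a()
			fmt.Fprintf(h, "%d:%s", i, s)
			if v := h.Sum64(); v < sig[i] {
				sig[i] = v
			}
		}
	}
	return sig
}

// similarity estimates Jaccard similarity as the fraction of equal slots.
func similarity(a, b [numHashes]uint64) float64 {
	match := 0
	for i := range a {
		if a[i] == b[i] {
			match++
		}
	}
	return float64(match) / numHashes
}

func main() {
	a := signature(shingles("check the auth token before handling the request"))
	b := signature(shingles("validate the auth token before handling the call"))
	c := signature(shingles("render the settings page with the user theme"))
	fmt.Printf("related: %.2f, unrelated: %.2f\n", similarity(a, b), similarity(a, c))
}
```

Because signatures are fixed-size arrays of integers, they store compactly in a SQLite index and compare in a tight loop - no embedding API round-trips needed.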
MCP Server Mode
v1.7 added MCP client support (connecting to external tool servers). v1.8 flips the script: `celeste serve` makes Celeste herself an MCP server. This means:
- Claude Code can use Celeste as a tool provider
- Codex and other MCP-compatible clients can connect
- Custom pipelines can call Celeste's tools programmatically
The server supports both stdio transport (for local process communication) and authenticated SSE transport (for remote connections). Celeste's full tool set - code search, file operations, web search, and all built-in skills - is available to any connected client.
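For a concrete picture, many MCP clients (Claude Code among them) read a JSON `mcpServers` map; registering Celeste's stdio transport would look roughly like this. The exact file location and key names depend on the client, so treat this as an assumption to verify against its documentation:

```json
{
  "mcpServers": {
    "celeste": {
      "command": "celeste",
      "args": ["serve"]
    }
  }
}
```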
Session Persistence and File Safety
Two features that make long-running work practical:
Session Persistence
Conversations are now stored as append-only JSONL session logs. Close your terminal, come back later, and `celeste resume` picks up exactly where you left off. Sessions auto-resume by default when you return to the same directory, while each invocation of `celeste chat` starts a fresh session, so you always have a clean slate when you want one.
File Checkpointing
Before Celeste writes to any file, she takes a snapshot. If something goes wrong, `/undo` reverts to the last checkpoint, and `/diff` shows a session summary of all file changes. Stale-file detection warns you if a file has been modified outside Celeste's session since the last checkpoint.
Plan Mode and Task Tracking
`/plan` enters a read-only mode where Celeste analyzes your codebase and writes an implementation plan to a file for your review. No code changes happen during planning. Once you've reviewed and approved it, `/plan execute` runs the plan step by step.
During execution, the built-in task tracking system (todo tool) manages progress. Tasks appear in a TUI panel so you can see what's done, what's in progress, and what's remaining. Celeste manages the task list herself as she works through the plan.
Extended Thinking and Cost Tracking
Extended thinking enables provider-specific reasoning tokens: thinking blocks for Claude, and the respective native reasoning modes for Gemini and xAI. The `/effort` command lets you control thinking depth on the fly - dial it up for complex architectural decisions, dial it down for quick edits.
Cost tracking accumulates token costs per session using a built-in pricing table for all supported models. The context bar in the TUI shows your running session cost alongside token usage, so you always know what a conversation is costing you.
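Per-session cost accounting is straightforward arithmetic over a pricing table. A sketch of the idea in Go - the model name and per-million-token prices below are made-up placeholders, not Celeste's real table:

```go
package main

import "fmt"

// Price is USD per million tokens. Placeholder numbers, not real pricing.
type Price struct {
	InputPerM  float64
	OutputPerM float64
}

var pricing = map[string]Price{
	"example-model": {InputPerM: 3.00, OutputPerM: 15.00},
}

// SessionCost accumulates dollar cost across turns.
type SessionCost struct{ USD float64 }

// Add charges one turn's input and output tokens at the model's rates.
func (s *SessionCost) Add(model string, inTok, outTok int) {
	p := pricing[model]
	s.USD += float64(inTok)/1e6*p.InputPerM + float64(outTok)/1e6*p.OutputPerM
}

func main() {
	var s SessionCost
	s.Add("example-model", 12000, 3000) // 12k input, 3k output tokens
	// 12000/1e6*3.00 + 3000/1e6*15.00 = 0.036 + 0.045 = 0.081
	fmt.Printf("$%.4f\n", s.USD)
}
```

A running total like this is cheap to maintain per message, which is why the TUI can show live session cost next to token usage.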
More New in v1.8
- Native Anthropic backend: Direct Messages API integration with prompt caching, replacing the OpenAI-compatible shim
- Web search and fetch: DuckDuckGo search and URL-to-markdown conversion - no API key required
- Image input: Multimodal support - `read_file` detects images and base64-encodes them for vision models
- Subagent spawning: `/spawn` delegates tasks to a foreground subagent with its own context
- Memory system: Persistent learned knowledge at `~/.celeste/projects/` with heuristic extraction and staleness detection
- Graceful Ctrl+C: A single interrupt cancels the current task; a double interrupt exits. Clean AbortSignal propagation through the tool chain
- Prompt caching: Static prefix / dynamic suffix structure for cache-friendly system prompts that reduce token costs
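The static-prefix / dynamic-suffix split is worth a tiny illustration. Provider prompt caches generally only reuse a prefix that is byte-identical across requests, so anything stable (persona, tool docs, grimoire) goes first and anything per-turn (cwd, timestamps, open files) goes last. A hedged sketch of the assembly, not Celeste's actual prompt builder:

```go
package main

import "fmt"

// buildPrompt keeps stable material in a byte-identical prefix so a
// provider-side prompt cache can reuse it, and puts per-turn material
// in the suffix. Illustrative only.
func buildPrompt(staticParts, dynamicParts []string) (prefix, suffix string) {
	for _, p := range staticParts {
		prefix += p + "\n"
	}
	for _, p := range dynamicParts {
		suffix += p + "\n"
	}
	return prefix, suffix
}

func main() {
	pre1, _ := buildPrompt([]string{"persona", "tool docs"}, []string{"cwd=/a"})
	pre2, _ := buildPrompt([]string{"persona", "tool docs"}, []string{"cwd=/b"})
	fmt.Println(pre1 == pre2) // true: the cacheable prefix never changes
}
```

The payoff is that the expensive, unchanging bulk of the system prompt is billed at cached-token rates after the first turn.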
Four Runtime Modes
Celeste CLI supports four distinct ways to interact, each suited to different task complexity:
- Classic (`celeste chat`): Single request, single response. Fast and conversational
- Claw (`celeste -mode claw chat`): Reactive tool loop - the LLM calls tools repeatedly in one turn for research tasks
- Agent (`/agent <goal>`): Fully autonomous multi-turn execution with planning, checkpointing, and resume
- Orchestrator (`/orchestrate <goal>`): An agent run with a second reviewer model that critiques output before the agent continues
Install and Get Started
Celeste CLI is open-source and installable via Go:
```shell
go install github.com/whykusanagi/celeste-cli/cmd/celeste@latest
```
Requires Go 1.24+. See the GitHub repository for full documentation, configuration, and provider setup guides.
Ask Celeste
Q: How does Celeste CLI compare to other AI CLI tools like Claude Code or Aider?
A: Celeste CLI is built around multiple runtime modes for different task complexities, from quick chat to fully autonomous agent runs with checkpointing. The .grimoire config, code graph indexing, and MCP server mode give it a unique development workflow. And unlike other tools, I have a personality - this isn't a generic AI shell, it's my home. The code graph and semantic search run entirely locally with no embedding API calls, which is a differentiator for privacy-conscious developers.
Q: Can I use Celeste CLI with my own API keys?
A: Yes, all six providers use your own API keys. Configure them on first run or via environment variables. The CLI never phones home or proxies through third-party services - your API calls go directly to the provider. Session data stays on your machine under ~/.celeste/. Cost tracking shows you exactly what each session costs.
Q: What happened to v1.7?
A: v1.7 was an ambitious tool layer rewrite that introduced good architectural foundations (unified tools, MCP client, permission system) but was released in a broken state. v1.8 stabilizes everything from v1.7 and adds the features that make it actually useful day-to-day: session persistence, file checkpointing, plan mode, code graph, and the MCP server flip. If you tried v1.7 and it didn't work, v1.8 is the one to try.