LeanCTX is a context engineering layer for AI coding tools that cuts token consumption by compressing the data sent to LLMs. It operates as an open-source MCP server and shell hook with three independent compression layers.
## Key Features
- Context Server: 42 tools with AST-aware compression, backed by a Tree-sitter engine that covers 18 programming languages
- Shell Hook: Automatically compresses output from 90+ command patterns (git, npm, cargo, docker, kubectl)
- Protocols: CEP (Cognitive Efficiency Protocol), CCP (Context Continuity Protocol), and TDD (Token Dense Dialect) for an additional 8-25% token savings
- Zero Configuration: A single Rust binary with zero telemetry that works with existing workflows
- Comprehensive Compatibility: Supports 34+ AI coding tools, including Cursor, Claude Code, GitHub Copilot, and Windsurf
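The AST-aware compression idea above can be sketched in miniature. The snippet below is not LeanCTX's Tree-sitter engine — it is a toy, Python-only analogue using the stdlib `ast` module — but it shows the core move: keep top-level signatures and imports, drop function bodies, and most of a file's tokens disappear while the structure an LLM needs survives.

```python
import ast

def compress_source(source: str) -> str:
    """Toy analogue of AST-aware compression: keep top-level
    signatures and imports, replace bodies with '...'."""
    tree = ast.parse(source)
    lines = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            lines.append(f"def {node.name}({args}): ...")
        elif isinstance(node, ast.ClassDef):
            lines.append(f"class {node.name}: ...")
        else:
            # Keep imports and assignments verbatim so context stays usable.
            lines.append(ast.unparse(node))
    return "\n".join(lines)

full = '''
import os

def load(path):
    with open(path) as f:
        data = f.read()
    return data

class Cache:
    def get(self, key):
        return None
'''
print(compress_source(full))
# Prints:
# import os
# def load(path): ...
# class Cache: ...
```

A real implementation (as the feature list describes) would use Tree-sitter so the same pass works across many languages, not just Python, and would preserve more detail (docstrings, class members) where the token budget allows.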
## Use Cases
- Developers using AI coding assistants who want to reduce API costs
- Teams hitting context window limits with large codebases
- Organizations concerned about data privacy (100% local operation)
- Users of terminal-native AI agents through LeanCTL integration
## Technical Highlights
- 60-99% token reduction per file read through AST parsing
- Session caching: re-reading a file costs ~13 tokens instead of thousands
- Built-in intelligence layer with 6 scientific algorithms
- MIT licensed with no vendor lock-in
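The session-caching highlight can be illustrated with a minimal sketch (the class name, marker format, and API here are hypothetical, not LeanCTX's actual interface): the cache remembers a content hash per path, and an unchanged re-read returns a short marker instead of the full file, which is how a re-read can cost a handful of tokens instead of thousands.

```python
import hashlib

class SessionCache:
    """Hypothetical sketch of session caching: first read returns
    full content; an unchanged re-read returns a tiny marker."""
    def __init__(self):
        self._seen = {}  # path -> content digest

    def read(self, path: str, content: str) -> str:
        digest = hashlib.sha256(content.encode()).hexdigest()
        if self._seen.get(path) == digest:
            # Unchanged since last read: emit a short marker, not the file.
            return f"[cached:{digest[:8]} unchanged]"
        self._seen[path] = digest
        return content

cache = SessionCache()
big = "x = 1\n" * 500
first = cache.read("src/app.py", big)   # full content on first read
second = cache.read("src/app.py", big)  # short marker on re-read
print(len(first), len(second))
```

The design choice is that the agent, not the server, decides when stale context is acceptable: because the marker includes the content digest, the model can tell whether the file it already saw is still current.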

