Me smash text, make small. Feed URL, file, diff, log. Get Wenyan.
```json
{
  "mcpServers": {
    "caveman": {
      "command": "npx",
      "args": ["-y", "@standardbeagle/caveman-mcp"],
      "env": { "LLM_API_KEY": "sk-or-v1-..." }
    }
  }
}
```
| Tool | Input |
|---|---|
| condense_url | Any URL: webpage, GitHub repo/PR, YouTube, arXiv, HN, Reddit, RSS |
| condense_file | pdf, docx, xlsx, pptx, md, html, png/jpg (vision), mp3/wav (whisper) |
| condense_git | Unified diff, git log, git blame, GitHub PR URL |
| condense_log | Stack traces and error logs (Go, Python, JS, Java, Rust) |
| condense_text | Raw text → Wenyan, no fetching |
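Tools speak normal MCP. A `condense_text` call is a standard `tools/call` request; the `text` value here is just an example, and `skip_llm` is the mechanical-only switch described below:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "condense_text",
    "arguments": {
      "text": "The meeting was basically just a very long status update.",
      "skip_llm": true
    }
  }
}
```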
```
input
 → mechanical: drop articles, fillers, redundancy
 → LLM Wenyan: classical Chinese ultra-compression
 → output: compressed text + ratio + method
```
Works with any OpenAI-compatible API. Default: OpenRouter + claude-haiku-4-5.
Set `LLM_BASE_URL` + `LLM_MODEL` to use any provider. Set `skip_llm: true` for mechanical-only (zero API cost).
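Same config as above, pointed at a different provider. The endpoint and model values here are placeholders; swap in your own:

```json
{
  "mcpServers": {
    "caveman": {
      "command": "npx",
      "args": ["-y", "@standardbeagle/caveman-mcp"],
      "env": {
        "LLM_API_KEY": "sk-...",
        "LLM_BASE_URL": "https://api.openai.com/v1",
        "LLM_MODEL": "gpt-4o-mini"
      }
    }
  }
}
```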