# 🪨 caveman-mcp

Me smash text, make small. Feed URL, file, diff, log. Get Wenyan.

- **65–80%** typical compression
- **5** tool types
- **2-pass** pipeline

## Install

```json
{
  "mcpServers": {
    "caveman": {
      "command": "npx",
      "args": ["-y", "@standardbeagle/caveman-mcp"],
      "env": { "LLM_API_KEY": "sk-or-v1-..." }
    }
  }
}
```

## Tools

| Tool | Input |
|------|-------|
| `condense_url` | Any URL — webpage, GitHub repo/PR, YouTube, arXiv, HN, Reddit, RSS |
| `condense_file` | pdf, docx, xlsx, pptx, md, html, png/jpg (vision), mp3/wav (whisper) |
| `condense_git` | Unified diff, git log, git blame, GitHub PR URL |
| `condense_log` | Stack traces and error logs (Go, Python, JS, Java, Rust) |
| `condense_text` | Raw text → Wenyan, no fetching |

## Pipeline

```
input
  → mechanical: drop articles, fillers, redundancy
  → LLM Wenyan: classical Chinese ultra-compression
  → output: compressed text + ratio + method
```
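The first pass above can be sketched in a few lines. This is a hypothetical illustration of the mechanical stage only; the actual word lists and rules ship inside caveman-mcp, so everything below (the `ARTICLES`/`FILLERS` sets, the dedup rule) is an assumption for demonstration.

```python
# Illustrative sketch of a "mechanical" compression pass (NOT the real
# caveman-mcp implementation): drop articles and filler words, then
# collapse immediately repeated words as trivial redundancy removal.
ARTICLES = {"a", "an", "the"}
FILLERS = {"basically", "actually", "really", "very", "just", "quite"}

def mechanical_pass(text: str) -> str:
    """Return text with articles, fillers, and adjacent duplicates removed."""
    words = text.split()
    # Drop articles and fillers (punctuation-insensitive comparison)
    kept = [w for w in words if w.lower().strip(".,!?") not in ARTICLES | FILLERS]
    # Collapse immediate duplicate words
    deduped = [w for i, w in enumerate(kept)
               if i == 0 or w.lower() != kept[i - 1].lower()]
    return " ".join(deduped)

before = "The cache is basically just a very simple map of keys."
after = mechanical_pass(before)
print(after)
print(f"{1 - len(after) / len(before):.0%} smaller")
```

The second pass then hands this pre-shrunk text to the LLM for Wenyan compression, so fewer tokens reach the paid API call.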

## Any LLM endpoint

Works with any OpenAI-compatible API. Default: OpenRouter + claude-haiku-4-5.
Set `LLM_BASE_URL` + `LLM_MODEL` to use any provider. Set `skip_llm: true` for mechanical-only compression (zero API cost).
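For example, pointing at a different provider is just two extra `env` entries. The base URL and model values below are illustrative assumptions, not defaults; only the `LLM_BASE_URL` and `LLM_MODEL` variable names come from this README.

```json
{
  "mcpServers": {
    "caveman": {
      "command": "npx",
      "args": ["-y", "@standardbeagle/caveman-mcp"],
      "env": {
        "LLM_API_KEY": "sk-...",
        "LLM_BASE_URL": "https://api.openai.com/v1",
        "LLM_MODEL": "gpt-4o-mini"
      }
    }
  }
}
```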

GitHub · npm · MIT License