
# Token Optimizer (Code Compressor)

**CLI tool name:** `token_optimizer`

Compresses code or text to significantly reduce LLM token consumption. The tool auto-detects the programming language (Python, JS/TS, Go, Rust, Java, C/C++, Shell) and strips comments and redundant whitespace while preserving syntax.
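The core idea can be sketched in a few lines. This is a deliberately naive illustration (not the tool's actual implementation): it drops full-line comments and blank lines and trims trailing whitespace, which is roughly the "compress" behavior described above.

```python
def compress_python(source: str) -> str:
    """Naive sketch of comment/whitespace stripping for Python source.

    Assumption: full-line comments only. A real implementation must
    tokenize the source so a '#' inside a string literal is not
    mistaken for a comment.
    """
    out = []
    for line in source.splitlines():
        stripped = line.rstrip()  # trim trailing whitespace
        if not stripped or stripped.lstrip().startswith("#"):
            continue  # skip blank lines and full-line comments
        out.append(stripped)
    return "\n".join(out)


code = "def foo(x):\n    # double it\n    return x * 2\n"
print(compress_python(code))  # → "def foo(x):\n    return x * 2"
```

Indentation is preserved because Python's syntax depends on it; only trailing whitespace is trimmed.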

## Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `content` | string | yes | The raw source code or text data. |
| `mode` | enum | no | `"compress"` (default), `"summarize"`, or `"both"`. Summarize extracts structural outlines such as class schemas. |

## Example output

```json
{
  "language_detected": "python",
  "original_tokens_est": 2045,
  "output_tokens_est": 512,
  "compression_pct": "75.0%",
  "output": "def foo(x):\n  return x * 2\nclass Client:\n  def __init__(self):\n    self.db = None"
}
```
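The numbers in the example are internally consistent: (2045 − 512) / 2045 ≈ 75.0%. A minimal sketch of how such figures could be derived, assuming a common ~4-characters-per-token heuristic (an assumption for illustration; the tool's actual token estimator is not documented here):

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic (assumption): ~4 characters per token
    # for English prose and code. Real tokenizers vary.
    return max(1, len(text) // 4)


def compression_pct(original_tokens: int, output_tokens: int) -> str:
    # Percentage of tokens saved, formatted like the example output.
    saved = (original_tokens - output_tokens) / original_tokens * 100
    return f"{saved:.1f}%"


print(compression_pct(2045, 512))  # → "75.0%", matching the example
```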

## How to use it

Example prompt:

> Compress the file `src/utils.py` using `'both'` mode so I get its structural summary without burning my token limit.

> **Tip:** Combine with `context_pruner` to further reduce context overhead when handing the agent an entire repository structure.