Ship · March 17, 2026 · 4 min read

Stop Leaking PII: Introducing Prompt Firewall for AI Agents

A new tool to sanitize outbound prompts before leaving your machine. Acts as a proxy for AI clients, using deterministic PII filters and local LLM scanning.

Today we're releasing a major update to AgenticStore: the Prompt Firewall. Unlike our existing MCP tools, the firewall acts as a proxy for clients like Claude Code, allowing you to inspect, sanitize, and optimize outbound prompts before they leave your machine.

The Prompt Firewall in action, intercepting and filtering prompts.

The Problem We Are Solving

AI coding assistants like Claude Code and Cursor run deep within your environments. They read files, inspect configurations, and gather immense context to generate valuable code. However, occasionally sensitive PII, API keys, or confidential project architectures can leak into the outbound prompts sent to third-party LLMs.

Existing security models rely on post-fetch filtering or on hoping the model provider won't train on your data. Prompt Firewall moves the trust boundary to where it belongs: locally, on your host network.

How it Works

The firewall operates as an HTTP proxy that intercepts requests heading to OpenAI, Anthropic, or any LLM API endpoint.
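To make the interception model concrete, here is a minimal sketch of the proxy pattern using Python's standard library. Everything in it is illustrative, not the firewall's actual implementation: the upstream URL, the handler class, and the passthrough `sanitize` hook are all assumptions standing in for the real filtering pipeline.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

UPSTREAM = "https://api.anthropic.com"  # assumed upstream for this sketch


def sanitize(body: bytes) -> bytes:
    """Placeholder hook: the real firewall would apply deterministic
    PII filters and optional local LLM scanning here."""
    return body


class FirewallProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the outbound prompt payload from the local client.
        length = int(self.headers.get("Content-Length", 0))
        raw = self.rfile.read(length)

        # Inspect and mask the payload before anything leaves the host.
        clean = sanitize(raw)

        # Forward the sanitized request to the real API endpoint
        # and relay the response back to the client unchanged.
        upstream = Request(
            UPSTREAM + self.path,
            data=clean,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urlopen(upstream) as resp:
            payload = resp.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)


def run(port: int = 8080) -> None:
    """Start the proxy; the AI client is then pointed at 127.0.0.1:port."""
    HTTPServer(("127.0.0.1", port), FirewallProxy).serve_forever()
```

Because the client talks only to the local proxy, every outbound prompt passes through `sanitize` before reaching the network.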

  • Deterministic PII Filters: Employs pattern matching to catch and mask predictable secrets like AWS keys, private tokens, and social security numbers.
  • Local LLM Scanning: Hook it up to an optional local LLM to contextually scan prompt payloads and optimize them for clarity, directly lowering your external token footprint.
  • Full Audits: It provides a detailed audit log so enterprise security teams can inspect outbound flows natively.
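The deterministic layer described above can be approximated with plain regular expressions. The sketch below is illustrative only: the patterns, the `[REDACTED:…]` mask format, and the audit-trail shape are assumptions for this example, not the firewall's shipped rule set.

```python
import re

# Illustrative patterns; a real rule set would be far broader.
PII_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
}


def mask_pii(text: str) -> tuple[str, list[str]]:
    """Mask every match and return the masked text plus an audit trail
    recording which rules fired (the matched values are never logged)."""
    findings = []
    for name, pattern in PII_PATTERNS.items():
        text, count = pattern.subn(f"[REDACTED:{name}]", text)
        if count:
            findings.append(f"{name} x{count}")
    return text, findings
```

For example, `mask_pii("key=AKIAABCDEFGHIJKLMNOP ssn=123-45-6789")` returns the prompt with both values replaced by redaction markers, plus an audit list naming the two rules that fired, which is the kind of record the audit log can expose to security teams.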

Agentic Store Architecture

Agentic Store updated architecture diagram with Prompt Firewall.

Stay secure without sacrificing the immense leverage of AI coding tools. Try Prompt Firewall today.
