# Data Path and AI Architecture
## Bridge Town is an MCP server, not an LLM service

Bridge Town does not invoke server-side language models. It does not proxy your prompts, send your model code to AI providers, or run hidden inference on your data.
Bridge Town is a Model Context Protocol (MCP) server. It exposes tools that an AI agent can call — tools like create_model, patch_model, query_data, and queue_run. The intelligence that decides which tools to call and what to write comes from your agent, running on your own model provider account.
## How the data path works

```text
You (natural language)
  ↓
Your AI agent (Claude, Claude Code, Codex, opencode, kimicode, …)
  ↓  MCP tool calls (structured JSON)
Bridge Town MCP server
  ↓
PostgreSQL · Gitea · S3 · DuckDB · Docker sandbox
```

- You describe what you want in natural language to your AI agent.
- Your agent’s language model (running under your model provider account) decides which Bridge Town MCP tools to call and with what parameters.
- Bridge Town receives the structured tool call, executes it, and returns a structured result.
- Bridge Town never sees your prompts, your conversation history, or your agent’s reasoning.
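To make the boundary concrete, the sketch below shows the shape of a structured tool call as MCP defines it: a JSON-RPC 2.0 request with method `tools/call`. The tool name `patch_model` comes from this page; the argument values are hypothetical and only illustrate that parameters, not prompts, cross the wire.

```python
import json

# Sketch of the only thing Bridge Town sees: a structured MCP tool call.
# MCP uses JSON-RPC 2.0; "tools/call" is the standard method name.
# The tool name is from this page; the argument values are hypothetical.
tool_call = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "patch_model",
        "arguments": {
            "model": "churn_forecast",  # hypothetical model name
            "patch": "--- a/model.py\n+++ b/model.py\n...",
        },
    },
}

# Note what is absent: no prompt text, no conversation history,
# no chain-of-thought. Only the structured parameters above are sent.
payload = json.dumps(tool_call)
print(payload)
```

The agent's reasoning about *which* tool to call and *what* arguments to pass happens entirely on your model provider's side; by the time the request reaches Bridge Town it has been reduced to this structured form.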
## What Bridge Town receives and stores

| Category | Examples | Stored? |
|---|---|---|
| MCP tool inputs | Model names, file paths, patch content, query strings | Yes — CloudWatch audit log |
| Model files and version history | Python source committed to Gitea | Yes — versioned project storage |
| Sandboxed execution outputs | stdout/stderr from queue_run | Yes — S3, retained per policy |
| Data-source snapshots | CSV/Parquet ingested via import_data or GSheet sync | Yes — S3, per-tenant namespace |
| Query results | DuckDB SQL output from query_data | No — computed on demand, not persisted |
| Audit and security events | Tool invocations, auth events, errors | Yes — CloudWatch |
| Configured credentials | Google OAuth refresh tokens (AES-256-GCM encrypted) | Yes — encrypted at rest |
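The first table row can be illustrated with a hypothetical audit record. The field names below are illustrative only, not Bridge Town's actual CloudWatch schema; the point is what gets recorded (tool inputs) versus what never exists server-side (prompts, reasoning) and what is computed but not persisted (query results).

```python
from datetime import datetime, timezone

# Hypothetical shape of an audit-log entry for one tool invocation.
# Field names are illustrative, not Bridge Town's actual schema.
audit_event = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "tenant_id": "tenant-1234",  # hypothetical tenant identifier
    "tool": "query_data",
    "inputs": {"sql": "SELECT count(*) FROM orders"},  # tool inputs are logged
    "result_persisted": False,  # DuckDB query results are computed on demand
}
print(audit_event["tool"])
```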
## What Bridge Town does NOT receive

- The natural-language instructions or questions you type in your agent.
- Your agent’s chain-of-thought or intermediate reasoning.
- Any data processed by your model provider on their infrastructure.
## Which agents are supported

Bridge Town has been tested with:
- Claude (claude.ai) — OAuth connection, no token needed
- Claude Code — Streamable HTTP + bearer token
- Claude Desktop — `mcp-remote` bridge + bearer token
- Codex — Streamable HTTP + bearer token via self-hosted marketplace plugin
Any MCP-compatible client that supports Streamable HTTP transport and bearer-token authentication can connect. Bridge Town is model-agnostic: the MCP tool surface is identical regardless of which language model your agent uses.
Clients that Bridge Town has not tested are expected to work if they implement the MCP Streamable HTTP transport, but are not officially supported. Do not claim compatibility guarantees for untested clients in customer-facing material.
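As a concrete illustration, a Streamable-HTTP client generally needs only two pieces of configuration: the server URL and a bearer token. The fragment below follows the `mcpServers` convention used by several MCP clients; the URL and token are placeholders, not Bridge Town's actual endpoint, and the exact key names vary by client, so check your client's documentation.

```json
{
  "mcpServers": {
    "bridge-town": {
      "type": "http",
      "url": "https://bridgetown.example.com/mcp",
      "headers": {
        "Authorization": "Bearer <YOUR_TOKEN>"
      }
    }
  }
}
```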
## Implications for security and compliance

Because Bridge Town does not intermediate LLM inference:
- No LLM data-processing agreements are required with Bridge Town. You negotiate directly with your model provider (Anthropic, OpenAI, Google, etc.).
- Prompt confidentiality is governed by your model provider’s terms, not Bridge Town’s.
- Bridge Town’s data processing covers only the stored artifacts listed in the table above. These are all subject to tenant isolation (PostgreSQL RLS) and the retention policy in the Privacy Policy.
## Future server-side LLM features

If any future Bridge Town feature requires server-side LLM invocation (e.g. automated analysis pipelines, embedded summarisation), that feature must be:
- Documented separately and clearly distinguished from the current no-server-side-LLM architecture.
- Gated behind an explicit opt-in.
- Accompanied by a new data-processing disclosure in the Privacy Policy before shipping.
A blocker bead under the security label must be filed before any such feature ships. Do not amend this document to cover undisclosed server-side LLM features; update the Privacy Policy first.