
Data Path and AI Architecture

Bridge Town is an MCP server, not an LLM service


Bridge Town does not invoke server-side language models. It does not proxy your prompts, send your model code to AI providers, or run hidden inference on your data.

Bridge Town is a Model Context Protocol (MCP) server. It exposes tools that an AI agent can call — tools like create_model, patch_model, query_data, and queue_run. The intelligence that decides which tools to call and what to write comes from your agent, running on your own model provider account.

```text
You (natural language)
        ↓
Your AI agent (Claude, Claude Code, Codex, opencode, kimicode, …)
        ↓  MCP tool calls (structured JSON)
Bridge Town MCP server
        ↓
PostgreSQL · Gitea · S3 · DuckDB · Docker sandbox
```
  1. You describe what you want in natural language to your AI agent.
  2. Your agent’s language model (running under your model provider account) decides which Bridge Town MCP tools to call and with what parameters.
  3. Bridge Town receives the structured tool call, executes it, and returns a structured result.
  4. Bridge Town never sees your prompts, your conversation history, or your agent’s reasoning.
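To make the boundary concrete, here is a sketch of what a single MCP `tools/call` request looks like by the time it reaches the server. The tool name follows the MCP JSON-RPC framing; the model name, file path, and patch content in `arguments` are hypothetical examples, not real Bridge Town data. The point is that only this structured payload crosses the wire — the conversation that led the agent to make the call stays on the client side.

```python
import json

# A hypothetical MCP "tools/call" request as Bridge Town would receive it.
# Everything the server sees is in this structure; the user's prompt and the
# agent's reasoning are never part of the payload.
tool_call = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "patch_model",
        "arguments": {
            "model": "churn_forecast",       # hypothetical model name
            "path": "src/features.py",       # hypothetical file path
            "patch": "--- a/src/features.py\n+++ b/src/features.py\n...",
        },
    },
}

# Serialize as it would appear on the wire (Streamable HTTP body).
wire_payload = json.dumps(tool_call)
print(sorted(tool_call["params"]["arguments"]))
```

Per the storage table below, the `arguments` of a call like this land in the CloudWatch audit log, which is why tool inputs are classified as stored data even though prompts are not.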
| Category | Examples | Stored? |
| --- | --- | --- |
| MCP tool inputs | Model names, file paths, patch content, query strings | Yes — CloudWatch audit log |
| Model files and version history | Python source committed to Gitea | Yes — versioned project storage |
| Sandboxed execution outputs | stdout/stderr from queue_run | Yes — S3, retained per policy |
| Data-source snapshots | CSV/Parquet ingested via import_data or GSheet sync | Yes — S3, per-tenant namespace |
| Query results | DuckDB SQL output from query_data | No — computed on demand, not persisted |
| Audit and security events | Tool invocations, auth events, errors | Yes — CloudWatch |
| Configured credentials | Google OAuth refresh tokens (AES-256-GCM encrypted) | Yes — encrypted at rest |
Bridge Town never receives or stores:

  • The natural-language instructions or questions you type in your agent.
  • Your agent’s chain-of-thought or intermediate reasoning.
  • Any data processed by your model provider on their infrastructure.

Bridge Town has been tested with:

  • Claude (claude.ai) — OAuth connection, no token needed
  • Claude Code — Streamable HTTP + bearer token
  • Claude Desktop — mcp-remote bridge + bearer token
  • Codex — Streamable HTTP + bearer token via self-hosted marketplace plugin

Any MCP-compatible client that supports Streamable HTTP transport and bearer-token authentication can connect. Bridge Town is model-agnostic: the MCP tool surface is identical regardless of which language model your agent uses.
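As an illustration of such a connection, a Claude Desktop entry using the mcp-remote bridge might look like the following. The server URL and token are placeholders — substitute your actual Bridge Town endpoint and bearer token; this is a sketch of the general shape, not a copy-paste-ready configuration.

```json
{
  "mcpServers": {
    "bridge-town": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://your-bridge-town-host/mcp",
        "--header",
        "Authorization: Bearer YOUR_TOKEN"
      ]
    }
  }
}
```

Clients with native Streamable HTTP support (such as Claude Code) can connect directly with the same URL and bearer token, without the mcp-remote bridge.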

Clients that Bridge Town has not tested are expected to work if they implement the MCP Streamable HTTP transport, but are not officially supported. Do not claim compatibility guarantees for untested clients in customer-facing material.

Because Bridge Town does not intermediate LLM inference:

  • No LLM data-processing agreements are required with Bridge Town. You negotiate directly with your model provider (Anthropic, OpenAI, Google, etc.).
  • Prompt confidentiality is governed by your model provider’s terms, not Bridge Town’s.
  • Bridge Town’s data processing covers only the stored artifacts listed in the table above. These are all subject to tenant isolation (PostgreSQL RLS) and the retention policy in the Privacy Policy.

If any future Bridge Town feature requires server-side LLM invocation (e.g. automated analysis pipelines, embedded summarisation), that feature must be:

  1. Documented separately and clearly distinguished from the current no-server-side-LLM architecture.
  2. Gated behind an explicit opt-in.
  3. Accompanied by a new data-processing disclosure in the Privacy Policy before shipping.

A blocker bead under the security label must be filed before any such feature ships. Do not amend this document to cover undisclosed server-side LLM features; update the Privacy Policy first.