# Sandbox Execution
## Overview

When you call `run` (with `scope='project'` or `scope='model'`), Bridge Town executes your Python code inside an isolated Docker container and returns the results inline. The sandbox blocks network access and constrains filesystem access to mounted runtime paths only.
## Security constraints

| Constraint | Value |
|---|---|
| Network access | `--network none` — no outbound or inbound connections |
| Root filesystem | Read-only container root filesystem |
| Mounted paths | `/repo` (read-only), `/data` (read-only), `/outputs` (writable tmpfs), `/tmp` (writable tmpfs), `/upstream` (writable tmpfs, project runs only) |
| Timeout | 5 minutes (hard cap) |
| Memory | Capped per container |
| Packages | Standard library + `numpy`, `pandas`, `openpyxl` pre-installed |
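Within these constraints, an entrypoint can still do useful work with the pre-installed packages: compute in memory, write results to `/outputs/`, and print to stdout. The sketch below is a hypothetical `run.py`; the file name `forecast.json` and the revenue figures are illustrative, not part of the API.

```python
# Hypothetical run.py: no network, read-only root, but /outputs is writable
# and pandas is pre-installed. All values below are illustrative.
import json
from pathlib import Path

import pandas as pd

def build_forecast() -> dict:
    """Aggregate quarterly revenue from an in-memory frame."""
    df = pd.DataFrame({
        "month": ["jan", "feb", "mar"],
        "revenue": [400_000, 380_000, 420_000],
    })
    return {"q1": int(df["revenue"].sum())}

if __name__ == "__main__":
    result = build_forecast()
    out_dir = Path("/outputs")       # writable tmpfs inside the sandbox
    if out_dir.is_dir():             # absent when run outside the sandbox
        (out_dir / "forecast.json").write_text(json.dumps(result))
    print(result)                    # captured as stdout
```

Anything written under `/outputs/` is collected after the run; anything printed is returned (capped) in the inline response.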
## Execution flow

1. **Pull code** — the project archive is fetched at the specified commit and mounted read-only at `/repo/`
2. **Mount data** — connected data snapshots are mounted read-only at `/data/`
3. **Prepare writable scratch/output paths** — `/outputs/` and `/tmp/` are provided as writable tmpfs mounts; for `run(scope='project')` calls, `/upstream/` is also mounted as a writable tmpfs so pipeline models can exchange intermediate results
4. **Run** — the sandbox executes `run.py` (for `scope='project'`) or `model/<name>.py` (for `scope='model'`) inside the container
5. **Capture** — stdout, stderr, and files written to `/outputs/` are collected
6. **Record** — a `ModelRun` record is written with status, duration, and results
7. **Return** — all terminal results are returned inline to the MCP client
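Step 3's `/upstream/` mount can be exercised with a small helper pair, sketched below. The helper names, the JSON layout, and the `base` parameter are assumptions for illustration, not part of the sandbox API; only the `/upstream` path itself comes from the table above.

```python
# Sketch of exchanging intermediate results over the /upstream tmpfs
# (project runs only). Helper names and file layout are assumptions.
import json
from pathlib import Path

UPSTREAM = Path("/upstream")

def publish(name: str, payload: dict, base: Path = UPSTREAM) -> None:
    """An upstream model writes a JSON intermediate for later models."""
    (base / f"{name}.json").write_text(json.dumps(payload))

def consume(name: str, base: Path = UPSTREAM) -> dict:
    """A downstream model in the same project run reads it back."""
    return json.loads((base / f"{name}.json").read_text())
```

Because `/upstream/` is a tmpfs, anything published there lives only for the duration of the run; durable results still belong in `/outputs/`.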
## Output

`run(scope='project')` and `run(scope='model')` return results synchronously:

```json
{
  "run_id": "uuid",
  "project": "my-project",
  "branch": "main",
  "commit_sha": "abc123",
  "status": "success",
  "exit_code": 0,
  "stdout": "...",
  "stderr": "...",
  "stdout_truncated": false,
  "stderr_truncated": false,
  "outputs": {"forecast.json": {"q1": 1200000}},
  "duration_seconds": 3.2,
  "data_snapshot_ref": "s3://..."
}
```

`stdout` and `stderr` are capped at 4 KB each in the inline response. When you need one named output from a completed run, call `get_run_output` with the returned `run_id` and `output_name`; it returns that output inline up to 10 MiB. Use `get_run` when you need the full run envelope or status details.
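One way a client might consume this envelope is to flag truncated logs for a follow-up `get_run` call. The `summarize` helper below is a hypothetical client-side convenience, not part of the Bridge Town API; it only reads fields shown in the envelope above.

```python
# Sketch: post-process the inline run envelope. `summarize` is a
# hypothetical client-side helper, not part of the Bridge Town API.

def summarize(result: dict) -> str:
    """One-line status, noting when the 4 KB inline log caps were hit."""
    line = (f"{result['project']}@{result['commit_sha']}: "
            f"{result['status']} in {result['duration_seconds']}s")
    if result["stdout_truncated"] or result["stderr_truncated"]:
        line += " (logs truncated; fetch the full run via get_run)"
    return line
```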
## Synchronous vs. asynchronous execution

**Primary path — synchronous (`run` with `mode='sync'`):**

`run(scope='project', mode='sync')` executes the project’s `run.py` entrypoint and waits for completion, returning all results inline. `run(scope='model', mode='sync')` runs a single `model/<name>.py` directly. No follow-up `get_run` call is needed.
**Background path — asynchronous (`run` with `mode='async'` / `get_run` / `list_runs`):**

`run(mode='async')` dispatches execution via Celery and returns a `run_id` immediately, without waiting for the container to finish. Poll `get_run` until the status reaches a terminal state (`success`, `failed`, `timed_out`, or `cancelled`). Use this path when you need to queue many runs concurrently or want to submit work without blocking.
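The polling loop can be sketched as follows; `get_run` here is any callable that wraps the real tool call (the wrapper, interval, and timeout values are assumptions).

```python
# Sketch of polling an async run to a terminal state. `get_run` is any
# callable wrapping the real get_run tool call (an assumption here).
import time

TERMINAL = {"success", "failed", "timed_out", "cancelled"}

def wait_for_run(get_run, run_id: str, interval: float = 2.0,
                 timeout: float = 600.0) -> dict:
    """Poll until the run finishes, or raise after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        run = get_run(run_id)
        if run["status"] in TERMINAL:
            return run
        time.sleep(interval)
    raise TimeoutError(f"run {run_id} still not terminal after {timeout}s")
```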
To review run history or locate a previous run’s `run_id`, use `list_runs` — it returns run summaries for a project, ordered most-recent-first, with optional status filtering. Pass the `run_id` from `list_runs` to `get_run_output` for one named output, or to `get_run` for the full run envelope.
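Because summaries arrive most-recent-first, finding the newest failed run is a single scan. The helper below is an assumption for illustration; it relies only on summaries carrying a `status` field and being in the documented order.

```python
# Sketch: scan list_runs summaries (assumed to carry a "status" field,
# ordered most-recent-first) for the newest failed run.
from typing import Optional

def latest_failed(runs: list) -> Optional[dict]:
    """Return the most recent summary with status 'failed', if any."""
    return next((r for r in runs if r["status"] == "failed"), None)
```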
## Related guides

- Multi-Model Pipelines — chaining models with `PIPELINE` and `/upstream` transport