# Data Integrations
This page is the source of truth for how data gets into Bridge Town. It
reconciles the connectors named on the marketing site against what is actually
implemented in the platform today, and gives agents and users a clear
recommended path for every common “connect my X” request.
| Source | Status | How to bring it in |
|---|---|---|
| CSV / Excel files (.csv, .xlsx, .xls, ≤100 MB) | Live | upload_data |
| Google Sheets (manual or scheduled snapshot) | Live | connect_google_sheet |
| Databases & data warehouses (Snowflake, BigQuery, Postgres, Redshift, etc.) | Live, via export | Export to CSV/Excel/Parquet, then upload_data — or land in a Google Sheet and connect_google_sheet. There is no live JDBC/ODBC connector. |
| Salesforce, HubSpot, Stripe | Planned | Today: export the relevant report/dataset to CSV and upload_data. |
| Ramp / Brex, Rippling, Deel | Planned | Today: export the relevant report to CSV and upload_data. |
If a connector is not in the Live rows above, treat it as planned and use the export-then-upload path until it ships.
## How Bridge Town thinks about data

Bridge Town’s execution model is snapshot-based, not live-connected:
- Data is captured into Bridge Town as an immutable Parquet snapshot on S3 ({tenant_id}/{project_name}/uploads/{source_name}.parquet).
- Models read from that snapshot via DuckDB at run time.
- New uploads create new snapshots; older snapshots stay reproducible.
This means there is no “live database connector” in the product today —
even Google Sheets goes through a snapshot import via connect_google_sheet.
DuckDB then runs queries in-process against the snapshot. No external data
warehouse is required, and nothing in the model run reaches out to the source
system at execution time. See
Data Sources & Snapshots for the
full mental model.
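For illustration, the snapshot key layout above can be written as a tiny helper. This only restates the path template shown in the bullet list; it is not a Bridge Town API:

```python
def snapshot_key(tenant_id: str, project_name: str, source_name: str) -> str:
    """S3 object key for a source's Parquet snapshot, following the
    {tenant_id}/{project_name}/uploads/{source_name}.parquet layout."""
    return f"{tenant_id}/{project_name}/uploads/{source_name}.parquet"

print(snapshot_key("acme", "q2-forecast", "actuals"))
# → acme/q2-forecast/uploads/actuals.parquet
```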
## What works today

### CSV and Excel uploads

The upload_data tool ingests .csv, .xlsx, and .xls files up to
100 MB and converts them to Parquet for query and modelling.
```json
{
  "name": "upload_data",
  "arguments": {
    "project_name": "q2-forecast",
    "source_name": "actuals",
    "filename": "actuals_q1.csv",
    "file_content": "<base64-encoded file bytes>"
  }
}
```

Schema is inferred automatically with best-effort type coercion. Re-uploading
to the same source_name creates a new snapshot — earlier model runs keep
pointing at the snapshot they ran against. See
upload_data for the full parameter list.
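The file_content field carries the raw file bytes, base64-encoded. A minimal Python sketch of building that payload from a local file — the payload shape is taken from the example above; how the call is delivered to the MCP server depends on your client, so that part is not shown:

```python
import base64

def build_upload_call(project_name: str, source_name: str, path: str) -> dict:
    """Package a local CSV/Excel file as an upload_data tool-call payload.

    Argument names mirror the upload_data example above; file_content is
    the file's raw bytes, base64-encoded.
    """
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return {
        "name": "upload_data",
        "arguments": {
            "project_name": project_name,
            "source_name": source_name,
            "filename": path.rsplit("/", 1)[-1],
            "file_content": encoded,
        },
    }
```

Re-running this with the same source_name produces a new snapshot rather than overwriting the old one.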
### Google Sheets

The connect_google_sheet tool reads selected tabs from a Google Sheet,
converts each tab to CSV, and stores it as a Parquet snapshot. OAuth must be
authorised once per Google account in the web app
(app.bridgetown.builders → Data Sources →
Connect Google Sheet) before the tool can run.
```json
{
  "name": "connect_google_sheet",
  "arguments": {
    "sheet_url": "https://docs.google.com/spreadsheets/d/ABC123/edit",
    "project_name": "q2-forecast",
    "tab_names": ["Sales", "Costs"],
    "schedule_interval_minutes": 1440
  }
}
```

Pass schedule_interval_minutes to refresh the snapshot on a schedule (common
values: 60 for hourly, 1440 for daily, 10080 for weekly). See the full
Google Sheets Integration guide for the end-to-end workflow, including
write-back.
### Querying with DuckDB

Once data is uploaded or linked, query it with query_data:
```sql
-- Uploaded file (source_name = "actuals"):
SELECT region, SUM(revenue) AS total
FROM actuals
GROUP BY region
ORDER BY total DESC;
```

```sql
-- Google Sheet snapshot (source = "budget", tab = "Q1 Revenue"):
SELECT * FROM budget_Q1_Revenue LIMIT 10;
```

DuckDB runs queries in-process inside the MCP server — no external data
warehouse is required. Use list_data_sources first to discover exact table
names and schemas.
## Bringing database and warehouse data in today

Bridge Town does not ship a live JDBC/ODBC connector for Snowflake, BigQuery, Postgres, Redshift, MySQL, or other warehouses. Two patterns work well today:
### Option 1 — Export to file, then upload_data

For most warehouse data, the simplest path is to export the relevant
table or query result to CSV/Excel/Parquet and call upload_data. Examples:

| Warehouse | One-time export pattern |
|---|---|
| Snowflake | COPY INTO @stage/file.csv FROM (SELECT ... ); then download from the stage and upload_data |
| BigQuery | EXPORT DATA OPTIONS(uri='gs://…/data-*.csv', format='CSV') AS (SELECT ...), then download and upload_data |
| Postgres / Redshift | \copy (SELECT ...) TO 'data.csv' CSV HEADER, then upload_data |
| dbt / Airflow / Fivetran outputs | Land the model output as a CSV/Parquet in object storage or a shared drive, then upload_data |
For recurring loads, automate the export from your orchestrator (Airflow,
dbt Cloud, GitHub Actions, etc.) and have the final step call upload_data
with a stable source_name. Each run produces a new snapshot.
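One way to script that final step, assuming a Postgres source and psql available to the orchestrator. This is a sketch, not a Bridge Town API: the payload shape follows the upload_data example earlier on this page, and sending the payload to the MCP server is left to your client:

```python
import base64
import subprocess

def export_postgres_csv(query: str, out_path: str) -> None:
    """Export a query result to CSV via psql's \\copy.

    Assumes psql is on PATH and connection details come from the
    standard PG* environment variables (PGHOST, PGDATABASE, ...).
    """
    subprocess.run(
        ["psql", "-v", "ON_ERROR_STOP=1",
         "-c", f"\\copy ({query}) TO '{out_path}' CSV HEADER"],
        check=True,
    )

def upload_call_for(project_name: str, source_name: str, csv_path: str) -> dict:
    """Wrap the exported CSV in an upload_data payload. Keeping source_name
    stable means every scheduled run lands as a new snapshot of one source."""
    with open(csv_path, "rb") as f:
        content = base64.b64encode(f.read()).decode("ascii")
    return {
        "name": "upload_data",
        "arguments": {
            "project_name": project_name,
            "source_name": source_name,  # stable across runs
            "filename": csv_path.rsplit("/", 1)[-1],
            "file_content": content,
        },
    }

if __name__ == "__main__":
    export_postgres_csv("SELECT * FROM revenue_actuals", "/tmp/revenue_actuals.csv")
    call = upload_call_for("q2-forecast", "warehouse_revenue_actuals",
                           "/tmp/revenue_actuals.csv")
    # ...deliver `call` to the Bridge Town MCP server via your client here.
```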
### Option 2 — Land the warehouse data in a Google Sheet first

When a warehouse view is small enough to fit in Google Sheets and your team already publishes scorecards there, you can:
- Use a warehouse-to-Sheets connector (e.g. Google Sheets BigQuery connector, Connected Sheets, Census, Hightouch, Coefficient) to push the query result into a sheet your team owns.
- Connect that sheet with connect_google_sheet and set schedule_interval_minutes so Bridge Town re-snapshots on its own cadence.
This gives you a refresh cadence without writing your own export pipeline, at the cost of an extra hop through Google Sheets.
## Planned connectors

The marketing site lists business-system connectors as planned. None of these are live today. Until they ship, agents and users should fall back to the export-then-upload path.
| Connector | Status | Recommended path today |
|---|---|---|
| Salesforce | Planned | Export the report (Reports → “Export”) to CSV, then upload_data. |
| HubSpot | Planned | Export the list/report to CSV, then upload_data. |
| Stripe | Planned | Export from the Stripe Dashboard (Payments / Balance / Reports), then upload_data. |
| Ramp / Brex | Planned | Export the transactions or spend report to CSV, then upload_data. |
| Rippling | Planned | Export the headcount/payroll report to CSV, then upload_data. |
| Deel | Planned | Export the contractor/payment report to CSV, then upload_data. |
| Live database / warehouse JDBC connector | Planned | Use Option 1 or Option 2 above. |
If a customer needs a connector that is not yet live, capture the request at integrations@bridgetown.builders or support@bridgetown.builders — that is the input we use to pick what to build next.
## Agent playbook: answering “connect my X”

When a user asks an agent to “connect Salesforce”, “pull from our warehouse”,
“hook up Stripe”, or similar, do not invent a tool call. There is no
connect_salesforce, connect_warehouse, connect_stripe, etc. Instead,
follow this script:
- Acknowledge the request and name the current state. Confirm the system is on the roadmap but not live, and explain the snapshot model — Bridge Town runs models against captured snapshots, not live source systems.
- Offer the supported path now.
  - Direct DB / warehouse → ask the user to export the query/table to CSV, Excel, or Parquet (≤100 MB) and use upload_data.
  - Salesforce / HubSpot / Stripe / Ramp / Brex / Rippling / Deel → ask the user to export the relevant report to CSV from that system’s UI and use upload_data.
  - If the data already lives in a Google Sheet, prefer connect_google_sheet with a refresh schedule.
- Help structure the upload. Suggest a clear source_name (e.g. salesforce_pipeline, stripe_payments, warehouse_revenue_actuals), and confirm with list_data_sources afterwards so downstream tools can discover the schema.
- Capture the connector demand. If the user is blocked because the manual path is too painful, suggest emailing integrations@bridgetown.builders so it feeds the roadmap.
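The routing in the playbook above can be sketched as a small lookup. The system names and recommended paths come straight from this page; nothing here is a real Bridge Town API:

```python
# Map a requested system to the supported ingestion path today.
# Tool names (upload_data, connect_google_sheet) are the ones this page documents.
EXPORT_THEN_UPLOAD = "export to CSV/Excel/Parquet (≤100 MB), then upload_data"
SHEET_SNAPSHOT = "connect_google_sheet with schedule_interval_minutes"

ROUTES = {
    "salesforce": EXPORT_THEN_UPLOAD,
    "hubspot": EXPORT_THEN_UPLOAD,
    "stripe": EXPORT_THEN_UPLOAD,
    "ramp": EXPORT_THEN_UPLOAD,
    "brex": EXPORT_THEN_UPLOAD,
    "rippling": EXPORT_THEN_UPLOAD,
    "deel": EXPORT_THEN_UPLOAD,
    "snowflake": EXPORT_THEN_UPLOAD,
    "bigquery": EXPORT_THEN_UPLOAD,
    "postgres": EXPORT_THEN_UPLOAD,
    "redshift": EXPORT_THEN_UPLOAD,
    "google sheets": SHEET_SNAPSHOT,
}

def recommended_path(system: str) -> str:
    """Return the supported path for a 'connect my X' request.
    Unknown systems fall back to export-then-upload, per this page."""
    return ROUTES.get(system.strip().lower(), EXPORT_THEN_UPLOAD)
```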
### Example agent reply

Bridge Town doesn’t have a live Salesforce connector yet — that’s on our roadmap but not shipped. The fastest way to model your pipeline today is to export the Salesforce report you care about (Reports → Export → CSV) and upload it as a data source. I can call upload_data with source_name salesforce_pipeline once you have the file. After that, we can query it with query_data and use it in your forecast model. If you want this automated, you can also push the same data into a Google Sheet from Salesforce and I’ll connect the sheet on a daily refresh.
### Example agent reply (warehouse)

There isn’t a direct Snowflake connector yet — Bridge Town runs models against captured snapshots rather than live database connections. For now the cleanest path is: run your SELECT in Snowflake, export the result to CSV (or Parquet ≤100 MB), and I’ll call upload_data with that file and a source_name of warehouse_revenue_actuals. If this is a recurring load, your orchestrator (Airflow, dbt, GitHub Actions) can do the export and call upload_data on a schedule. Want me to outline that pipeline?
## When to escalate to support

If a user requires:

- A connector that does not yet exist and cannot work via export/upload
- Help wiring an export pipeline from a warehouse or BI tool to upload_data
- Larger-than-100 MB datasets, partitioned warehouse loads, or live-DB read patterns
…direct them to Bridge Town Services or support@bridgetown.builders. The services team can scope a one-off integration or migration without inventing tool calls that do not exist.
## Related references

| Page | Why it’s useful |
|---|---|
| Data Sources & Snapshots | Conceptual model: snapshots, immutability, DuckDB-in-process. |
| Google Sheets Integration | End-to-end walkthrough: OAuth, import, query, write-back, scheduled refresh. |
| upload_data | Tool reference for CSV/Excel ingestion. |
| connect_google_sheet | Tool reference for Sheets snapshot import. |
| list_data_sources | Discover the tables and schemas of every source attached to a project. |
| query_data | Run read-only DuckDB SQL against uploaded files and Sheet snapshots. |
| Bridge Town Services · support@bridgetown.builders | Hands-on help for migrations and bespoke integration pipelines. |