
# Data Integrations

This page is the source of truth for how data gets into Bridge Town. It reconciles the connectors named on the marketing site against what is actually implemented in the platform today, and gives agents and users a clear recommended path for every common “connect my X” request.

| Source | Status | How to bring it in |
| --- | --- | --- |
| CSV / Excel files (.csv, .xlsx, .xls, ≤100 MB) | Live | upload_data |
| Google Sheets (manual or scheduled snapshot) | Live | connect_google_sheet |
| Databases & data warehouses (Snowflake, BigQuery, Postgres, Redshift, etc.) | Live, via export | Export to CSV/Excel/Parquet, then upload_data — or land in a Google Sheet and connect_google_sheet. There is no live JDBC/ODBC connector. |
| Salesforce, HubSpot, Stripe | Planned | Today: export the relevant report/dataset to CSV and upload_data. |
| Ramp / Brex, Rippling, Deel | Planned | Today: export the relevant report to CSV and upload_data. |

If a connector is not in the Live rows above, treat it as planned and use the export-then-upload path until it ships.

Bridge Town’s execution model is snapshot-based, not live-connected:

  1. Data is captured into Bridge Town as an immutable Parquet snapshot on S3 ({tenant_id}/{project_name}/uploads/{source_name}.parquet).
  2. Models read from that snapshot via DuckDB at run time.
  3. New uploads create new snapshots; older snapshots stay reproducible.

This means there is no “live database connector” in the product today — even Google Sheets goes through a snapshot import via connect_google_sheet. DuckDB then runs queries in-process against the snapshot. No external data warehouse is required, and nothing in the model run reaches out to the source system at execution time. See Data Sources & Snapshots for the full mental model.
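For intuition, the following DuckDB sketch shows roughly what a model run does at execution time. The bucket, tenant ID, and project values are purely illustrative placeholders following the path template above; the real runtime resolves the snapshot path for you.

```sql
-- Illustrative only: reads an immutable Parquet snapshot directly.
-- Path follows {tenant_id}/{project_name}/uploads/{source_name}.parquet;
-- the bucket and IDs here are made-up placeholder values.
SELECT region, SUM(revenue) AS total
FROM read_parquet('s3://example-bucket/acme/q2-forecast/uploads/actuals.parquet')
GROUP BY region
ORDER BY total DESC;
```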

The upload_data tool ingests .csv, .xlsx, and .xls files up to 100 MB and converts them to Parquet for query and modelling.

```json
{
  "name": "upload_data",
  "arguments": {
    "project_name": "q2-forecast",
    "source_name": "actuals",
    "filename": "actuals_q1.csv",
    "file_content": "<base64-encoded file bytes>"
  }
}
```

Schema is inferred automatically with best-effort type coercion. Re-uploading to the same source_name creates a new snapshot — earlier model runs keep pointing at the snapshot they ran against. See upload_data for the full parameter list.
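For example, uploading a refreshed file to the same source_name creates the new snapshot without disturbing earlier runs. A sketch, with a hypothetical Q2 filename:

```json
{
  "name": "upload_data",
  "arguments": {
    "project_name": "q2-forecast",
    "source_name": "actuals",
    "filename": "actuals_q2.csv",
    "file_content": "<base64-encoded file bytes>"
  }
}
```

Queries against the actuals source now read the new snapshot; runs that executed before the re-upload keep pointing at the one they ran against.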

The connect_google_sheet tool reads selected tabs from a Google Sheet, converts each tab to CSV, and stores it as a Parquet snapshot. OAuth must be authorised once per Google account in the web app (app.bridgetown.builders → Data Sources → Connect Google Sheet) before the tool can run.

```json
{
  "name": "connect_google_sheet",
  "arguments": {
    "sheet_url": "https://docs.google.com/spreadsheets/d/ABC123/edit",
    "project_name": "q2-forecast",
    "tab_names": ["Sales", "Costs"],
    "schedule_interval_minutes": 1440
  }
}
```

Pass schedule_interval_minutes to refresh the snapshot on a schedule (60 = hourly, 1440 = daily, 10080 = weekly). See the full Google Sheets Integration guide for the end-to-end workflow including write-back.

Once data is uploaded or linked, query it with query_data:

```sql
-- Uploaded file (source_name = "actuals"):
SELECT region, SUM(revenue) AS total
FROM actuals
GROUP BY region
ORDER BY total DESC;

-- Google Sheet snapshot (source = "budget", tab = "Q1 Revenue"):
SELECT * FROM budget_Q1_Revenue LIMIT 10;
```

DuckDB runs queries in-process inside the MCP server — no external data warehouse is required. Use list_data_sources first to discover exact table names and schemas.
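A minimal discovery call looks like this; project_name is the only argument assumed here, so check the list_data_sources reference for the full parameter list:

```json
{
  "name": "list_data_sources",
  "arguments": {
    "project_name": "q2-forecast"
  }
}
```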

## Bringing database and warehouse data in today


Bridge Town does not ship a live JDBC/ODBC connector for Snowflake, BigQuery, Postgres, Redshift, MySQL, or other warehouses. Two patterns work well today:

### Option 1 — Export to file, then upload_data


For most warehouse data, the simplest path is to export the relevant table or query result to CSV/Excel/Parquet and call upload_data.

Examples:

| Warehouse | One-time export pattern |
| --- | --- |
| Snowflake | COPY INTO @stage/file.csv FROM (SELECT ... );, then download from the stage and upload_data |
| BigQuery | EXPORT DATA OPTIONS(uri='gs://…/data-*.csv', format='CSV') AS (SELECT ...), then download and upload_data |
| Postgres / Redshift | \copy (SELECT ...) TO 'data.csv' CSV HEADER, then upload_data |
| dbt / Airflow / Fivetran outputs | Land the model output as a CSV/Parquet in object storage or a shared drive, then upload_data |
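As a fuller sketch of the Snowflake row above (the stage name, schema, and query are placeholders, not values the product prescribes):

```sql
-- Unload a query result to a named stage as a single CSV with a header row.
COPY INTO @my_stage/revenue_actuals.csv
FROM (SELECT region, revenue FROM analytics.revenue_actuals)
FILE_FORMAT = (TYPE = CSV COMPRESSION = NONE)
HEADER = TRUE
SINGLE = TRUE;

-- Download it locally with SnowSQL, then pass the file to upload_data:
-- GET @my_stage/revenue_actuals.csv file:///tmp/;
```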

For recurring loads, automate the export from your orchestrator (Airflow, dbt Cloud, GitHub Actions, etc.) and have the final step call upload_data with a stable source_name. Each run produces a new snapshot.
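That final step is just another upload_data call with a stable source_name; a sketch, with a date-stamped filename chosen for illustration:

```json
{
  "name": "upload_data",
  "arguments": {
    "project_name": "q2-forecast",
    "source_name": "warehouse_revenue_actuals",
    "filename": "revenue_actuals_2025-06-30.csv",
    "file_content": "<base64-encoded export bytes>"
  }
}
```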

### Option 2 — Land the warehouse data in a Google Sheet first


When a warehouse view is small enough to fit in Google Sheets and your team already publishes scorecards there, you can:

  1. Use a warehouse-to-Sheets connector (e.g. Google Sheets BigQuery connector, Connected Sheets, Census, Hightouch, Coefficient) to push the query result into a sheet your team owns.
  2. Connect that sheet with connect_google_sheet and set schedule_interval_minutes so Bridge Town re-snapshots on its own cadence.

This gives you a refresh cadence without writing your own export pipeline, at the cost of an extra hop through Google Sheets.
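The connect step in this pattern is the same connect_google_sheet call shown earlier, pointed at the scorecard sheet and set to a daily refresh. The sheet URL and tab name below are placeholders:

```json
{
  "name": "connect_google_sheet",
  "arguments": {
    "sheet_url": "https://docs.google.com/spreadsheets/d/XYZ789/edit",
    "project_name": "q2-forecast",
    "tab_names": ["Revenue Scorecard"],
    "schedule_interval_minutes": 1440
  }
}
```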

The marketing site lists business-system connectors as planned. None of these are live today. Until they ship, agents and users should fall back to the export-then-upload path.

| Connector | Status | Recommended path today |
| --- | --- | --- |
| Salesforce | Planned | Export the report (Reports → “Export”) to CSV, then upload_data. |
| HubSpot | Planned | Export the list/report to CSV, then upload_data. |
| Stripe | Planned | Export from the Stripe Dashboard (Payments / Balance / Reports), then upload_data. |
| Ramp / Brex | Planned | Export the transactions or spend report to CSV, then upload_data. |
| Rippling | Planned | Export the headcount/payroll report to CSV, then upload_data. |
| Deel | Planned | Export the contractor/payment report to CSV, then upload_data. |
| Live database / warehouse JDBC connector | Planned | Use Option 1 or Option 2 above. |

If a customer needs a connector that is not yet live, capture the request at integrations@bridgetown.builders or support@bridgetown.builders — that is the input we use to pick what to build next.

## Agent playbook: answering “connect my X”


When a user asks an agent to “connect Salesforce”, “pull from our warehouse”, “hook up Stripe”, or similar, do not invent a tool call. There is no connect_salesforce, connect_warehouse, connect_stripe, etc. Instead, follow this script:

  1. Acknowledge the request and name the current state. Confirm the system is on the roadmap but not live, and explain the snapshot model — Bridge Town runs models against captured snapshots, not live source systems.
  2. Offer the supported path now.
    • Direct DB / warehouse → ask the user to export the query/table to CSV, Excel, or Parquet (≤100 MB) and use upload_data.
    • Salesforce / HubSpot / Stripe / Ramp / Brex / Rippling / Deel → ask the user to export the relevant report to CSV from that system’s UI and use upload_data.
    • If the data already lives in a Google Sheet, prefer connect_google_sheet with a refresh schedule.
  3. Help structure the upload. Suggest a clear source_name (e.g. salesforce_pipeline, stripe_payments, warehouse_revenue_actuals), and confirm with list_data_sources afterwards so downstream tools can discover the schema (see the sketch after this list).
  4. Capture the connector demand. If the user is blocked because the manual path is too painful, suggest emailing integrations@bridgetown.builders so it feeds the roadmap.
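For instance, once the user hands over a Salesforce export, step 3 boils down to a single call (the filename here is hypothetical):

```json
{
  "name": "upload_data",
  "arguments": {
    "project_name": "q2-forecast",
    "source_name": "salesforce_pipeline",
    "filename": "pipeline_export.csv",
    "file_content": "<base64-encoded file bytes>"
  }
}
```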

Example response (Salesforce request):

> Bridge Town doesn’t have a live Salesforce connector yet — that’s on our roadmap but not shipped. The fastest way to model your pipeline today is to export the Salesforce report you care about (Reports → Export → CSV) and upload it as a data source. I can call upload_data with source_name salesforce_pipeline once you have the file. After that, we can query it with query_data and use it in your forecast model. If you want this automated, you can also push the same data into a Google Sheet from Salesforce and I’ll connect the sheet on a daily refresh.

Example response (warehouse request):

> There isn’t a direct Snowflake connector yet — Bridge Town runs models against captured snapshots rather than live database connections. For now the cleanest path is: run your SELECT in Snowflake, export the result to CSV (or Parquet ≤100 MB), and I’ll call upload_data with that file and a source_name of warehouse_revenue_actuals. If this is a recurring load, your orchestrator (Airflow, dbt, GitHub Actions) can do the export and call upload_data on a schedule. Want me to outline that pipeline?

If a user requires:

  • A connector that does not yet exist and cannot work via export/upload
  • Help wiring an export pipeline from a warehouse or BI tool to upload_data
  • Larger-than-100 MB datasets, partitioned warehouse loads, or live-DB read patterns

…direct them to Bridge Town Services or support@bridgetown.builders. The services team can scope a one-off integration or migration without inventing tool calls that do not exist.

| Page | Why it’s useful |
| --- | --- |
| Data Sources & Snapshots | Conceptual model: snapshots, immutability, DuckDB-in-process. |
| Google Sheets Integration | End-to-end walkthrough: OAuth, import, query, write-back, scheduled refresh. |
| upload_data | Tool reference for CSV/Excel ingestion. |
| connect_google_sheet | Tool reference for Sheets snapshot import. |
| list_data_sources | Discover the tables and schemas of every source attached to a project. |
| query_data | Run read-only DuckDB SQL against uploaded files and Sheet snapshots. |
| Bridge Town Services · support@bridgetown.builders | Hands-on help for migrations and bespoke integration pipelines. |