Most large spreadsheets contain material errors.
Code was always the better way to build financial models. The learning curve made it impossible. Until now.
LLMs translate fluently between English and Python in both directions. The builder's learning curve is gone. The reviewer's learning curve is gone. The substrate underneath your model can finally be code.
Every senior modeler has had the same 2am thought.
Forty tabs. A sign error you missed before the board meeting. A formula in one cell silently depending on a hardcode four sheets away. We've all thought there must be a better way. That just changed.
- JPMorgan's London Whale: traced to a copy-paste error in a VaR model.
- TransAlta: a bad sort, no version control, real-money trades.
- Reinhart-Rogoff: an Excel range error that shaped global austerity policy.
Two learning curves. Both load-bearing. Neither resolved.
Code was always better on the dimensions that matter for serious modeling. But getting there required both sides of the workflow to be fluent in code. Builders weren't. Reviewers weren't. Stalemate.
The substrate properties spreadsheets can't fake.
- Tests that run on every change.
- Version control with diffs that mean something.
- Modularity — last quarter's DCF is a function you import. (A sketch follows this list.)
- Reproducibility — same inputs, same outputs, forever.
- Real Monte Carlo. 10,000 paths in a second.
- Composability — models that call other models.
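Here is what that substrate looks like in miniature. Everything below is a sketch: the file name, function, and numbers are illustrative, not Bridge Town's actual output.

```python
# dcf.py -- a DCF as an importable, testable function (illustrative sketch).

def dcf_value(cash_flows: list[float], discount_rate: float) -> float:
    """Present value of a series of annual cash flows."""
    return sum(
        cf / (1 + discount_rate) ** year
        for year, cf in enumerate(cash_flows, start=1)
    )

def test_dcf_matches_hand_calculation():
    # One period at 10%: 100 / 1.1 = 90.909...
    assert abs(dcf_value([100.0], 0.10) - 90.909091) < 1e-4

def test_dcf_is_reproducible():
    # Same inputs, same outputs -- no hidden state, no stale cells.
    assert dcf_value([100.0] * 3, 0.10) == dcf_value([100.0] * 3, 0.10)
```

Next quarter's model imports dcf_value, and the tests travel with it.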
Both sides had to be fluent. Neither was.
- Builders don't write Python. A year of part-time learning, minimum.
- Reviewers — CFOs, partners, IC members — can't audit code.
- A model nobody senior can audit is a model nobody senior approves.
- So the workflow stayed in the substrate the reviewer could read: the grid.
Anaplan, Pigment, Causal — the best response possible before LLMs.
The third-generation FP&A platforms tried to solve the same problem. They invented proprietary modeling languages because they couldn't expose actual code. The compromise inherited the spreadsheet's weaknesses and added a new learning curve, this time for a smaller, proprietary language.
The LLM is the bridge. Both learning curves are gone at once.
The builder describes the model in English. The system generates clean, tested, idiomatic Python. The reviewer hovers any cell. The system explains the formula, its provenance, its assumptions, and the test that validates it. Both sides work in their native language.
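Concretely, and purely as an illustration (the prompt and the generated code below are invented, not captured output):

```python
# Builder types: "Project revenue for five years from a $10M base,
# growing 8% a year." The system might respond with something like:

def project_revenue(base: float = 10_000_000.0,
                    growth: float = 0.08,
                    years: int = 5) -> list[float]:
    """Annual revenue under constant percentage growth."""
    return [base * (1 + growth) ** year for year in range(1, years + 1)]

def test_year_one_revenue():
    # Year 1: 10,000,000 * 1.08 = 10,800,000.
    assert abs(project_revenue()[0] - 10_800_000.0) < 1e-6
```

The reviewer never has to read it. Hovering the year-one cell surfaces the English, the growth assumption, and the test.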
"But Excel has Claude too." Why move?
Microsoft and Anthropic are putting LLMs inside Excel. That makes Excel a better Excel — not a better discipline. The things code gives you are properties of the artifact, not the editing experience.
Git-diffable
A .py file shows you what changed, line by line, across every revision. A .xlsx file does not. Diffs are the foundation of any real review workflow.
Composable
Last year's LBO function is one import line into this year's deal model. Excel models can't import each other meaningfully.
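A sketch of the shape, with hypothetical module and function names:

```python
# deals/atlas_2024/model.py -- last year's deal (all names hypothetical).
def lbo_equity_multiple(entry_ev: float, exit_ev: float,
                        debt: float, debt_paydown: float) -> float:
    """Money multiple on equity across the hold period."""
    entry_equity = entry_ev - debt
    exit_equity = exit_ev - (debt - debt_paydown)
    return exit_equity / entry_equity

# deals/beacon_2025/model.py -- this year's deal: one import, full reuse.
# from deals.atlas_2024.model import lbo_equity_multiple

base_case = lbo_equity_multiple(entry_ev=900.0, exit_ev=1_150.0,
                                debt=540.0, debt_paydown=200.0)   # 2.25x
```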
Testable in CI
Real continuous integration runs your tests on every change. The Excel test ecosystem is thin, and gets thinner the more macros you add.
Agent-ready
Autonomous finance agents work better with code than with cell references. As agents become part of the workflow, the substrate that supports them wins.
Transparent
Code is plain text. Excel hides logic behind values. When a regulator or auditor asks what the model does, "show them the source" actually means something.
Open and portable
Your model lives in plain text, in version control you own, in a language used by millions of engineers. No vendor sits between you and the artifact.
Things that have been impossible in Excel for thirty years.
Six capabilities, free the moment the substrate is code. Each one is the kind of thing a senior modeler has wanted for a decade and worked around for a decade.
Hover any cell. Plain English.
The reviewer never sees code unless they want to. The LLM explains the formula, its inputs, its assumptions, and links to the test that validates it.
Diff two versions
See exactly what changed between the model that went to the board and the one that came back.
Unit-test the WACC
Assert that the cost of capital is between 4% and 15%. Catch the error before the model lands.
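In pytest style, that guardrail is a few lines. The wacc function here is a stand-in for the model's own; names and numbers are illustrative.

```python
# test_wacc.py -- runs automatically on every change.

def wacc(equity: float, debt: float, cost_equity: float,
         cost_debt: float, tax_rate: float) -> float:
    """Weighted average cost of capital, with after-tax cost of debt."""
    total = equity + debt
    return ((equity / total) * cost_equity
            + (debt / total) * cost_debt * (1 - tax_rate))

def test_wacc_is_in_a_sane_range():
    rate = wacc(equity=700.0, debt=300.0,
                cost_equity=0.11, cost_debt=0.06, tax_rate=0.25)
    assert 0.04 <= rate <= 0.15, f"WACC {rate:.2%} outside the 4%-15% guardrail"
```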
10,000 Monte Carlo runs
Real probabilistic analysis on a laptop, in seconds. No add-in. No waiting.
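A sketch with numpy, under invented distribution assumptions (normally distributed annual growth, 8% drift, 20% vol):

```python
import numpy as np

rng = np.random.default_rng(seed=42)           # seeded: the run reproduces
n_paths, years, base = 10_000, 5, 10_000_000.0

growth = rng.normal(loc=0.08, scale=0.20, size=(n_paths, years))
paths = base * np.cumprod(1.0 + growth, axis=1)    # 10,000 x 5 revenue paths

p5, p50, p95 = np.percentile(paths[:, -1], [5, 50, 95])
print(f"Year-5 revenue: P5 {p5:,.0f}  P50 {p50:,.0f}  P95 {p95:,.0f}")
```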
Code-review the budget
Pull-request workflow. Approvals. Comments on assumptions. The audit trail is automatic.
Reproduce last year
Check out the v2024-Q4 tag. Run it. Get exactly the same numbers. Forever.
The migration is a non-event.
You don't have to learn Python. Your reviewers don't have to learn Python. Your counterparties never see the code at all.
Type what the model should do.
Bridge Town writes the code, runs the tests, and renders the grid view. You see what you've always seen — a model.
Upload your existing spreadsheet.
Bridge Town translates it into a clean, tested, versioned code model — line by line, with provenance you can audit.
Counterparties get a .xlsx.
The seller, the lender, the auditor, the lawyer — they receive a spreadsheet that looks exactly like what they expect.
The substrate of financial modeling is changing.
Bridge Town is a development environment for financial models that are code, with an LLM bridge for everyone who shouldn't have to read it.