fin123.dev

Run spreadsheet models you can trust.

AI built in. Audit built in. No black box.

The spreadsheet became the operating system of business. fin123 makes it governable.

Build in the grid. Mark Assumptions and Outputs directly in the spreadsheet surface your team already understands.

Run saved model states. Save immutable Versions, execute explicit Scenarios, and review historical Runs without mutating current work.

Diff runs, not just spreadsheets. Compare Scenarios, Run Sets, and Sensitivity Cases with persisted Results and Audit evidence.

Connect governed data directly in the grid. =DATA() brings databases, APIs, files, and approved internal datasets into the model through one consistent formula interface.

AI is just another part of the model. =AI() executes only as an approved formula step during Run.

Capture how your firm works. Approved Methods turn institutional playbooks into governed execution logic for AI formulas.

Your spreadsheet can remember how your team thinks. Model Memory turns institutional judgment into executable context for AI formulas.

Prove every result. Every Run preserves the Version, Scenario, inputs, outputs, and execution evidence.

Replay made simple. Re-run historical Runs with preserved inputs, outputs, and execution evidence.

fin123 root spreadsheet surface with model grid and results panel
One surface: spreadsheet first, Results beside it, Audit behind every Run.

A control plane for spreadsheet models

Model -> Version -> Scenario -> Run -> Results -> Audit

Traditional spreadsheets recalculate cells. fin123 runs saved model states.

Save a Version. Pick a Scenario. Run the model. Review Results. Open Audit.

Analysts still work in the grid, but the system can now run, compare, schedule, replay, and audit the model without rewriting it in Python or JavaScript.
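The Model -> Version -> Scenario -> Run contract can be sketched as plain data. The sketch below is purely illustrative; every name and field is an assumption for the example, not fin123's actual schema:

```python
from dataclasses import dataclass

# Hypothetical sketch of the Model -> Version -> Scenario -> Run contract.
# All names and fields are illustrative, not fin123's real schema.

@dataclass(frozen=True)
class Version:
    model_version_id: str
    assumptions: dict          # Assumption cell -> value at save time

@dataclass(frozen=True)
class Scenario:
    scenario_id: str
    overrides: dict            # Assumption cells this Scenario varies

@dataclass(frozen=True)
class Run:
    run_id: str
    model_version_id: str
    scenario_id: str
    inputs: dict
    outputs: dict              # persisted Results, never recomputed later

def execute(version: Version, scenario: Scenario, run_id: str) -> Run:
    """Run a saved Version under a Scenario; the mutable grid is never touched."""
    inputs = {**version.assumptions, **scenario.overrides}
    # Stand-in for real evaluation of marked Output cells:
    outputs = {"B12": sum(v for v in inputs.values() if isinstance(v, (int, float)))}
    return Run(run_id, version.model_version_id, scenario.scenario_id, inputs, outputs)
```

The point of the sketch is the shape of the motion: execution takes an immutable saved state plus a named Scenario, and what it returns is itself an immutable record.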

=DATA() brings governed external data into the spreadsheet using one familiar formula interface for databases, files, APIs, and approved data sources.

There is no File menu because a fin123 model is not a document. It is an executable system.

The product surface

The root surface is the product. No separate console is required for normal modeling.

There is no File menu and no Print menu because fin123 is not a document editor. It is a system for selecting, running, restoring, and auditing model states.

fin123 spreadsheet model surface
1 / Model Surface

The grid is the primary object.

Mark cells as Assumptions or Outputs, save model versions, select Scenarios, Run, and inspect Results without leaving the spreadsheet surface.

The abstraction is not open, print, and close. It is select state, execute, compare, audit, restore.

Assumptions · Outputs · Scenario: Base · Save Version · Run
fin123 scenario and results surface
2 / Results

Run produces a stable Results surface.

Results are based on marked Outputs and persisted Run artifacts. Run All creates a Run Set. Run Sensitivity creates temporary Cases inside a Run Set without saving them as Scenarios.

Charts and visualizations can help, but they are secondary to the modeling motion.

Run Results · Run Set · Sensitivity Cases · Open Audit
fin123 AI-assisted Formula definition modal
3 / AI-assisted Formula

AI-assisted formulas are approved compute steps.

Type =AI() into a cell. The approved definition lives behind the cell, and it may reference an approved Method: a versioned institutional methodology for how the formula should reason, validate output, and preserve evidence during a Run.

Review Formula · Preview Output · Approve · Validated output · Method evidence
fin123 Audit modal with run and AI-assisted Formula evidence
4 / Audit

Audit proves the Run.

Audit opens from Results. It shows the immutable Run or Run Set record, model version, Scenario, Outputs, AI-assisted Formula definition version, Method version and hash where present, input snapshot, prompt snapshot, validated output, validation status, and replay evidence.

model_version_id · scenario_id · Method hash · typed output · replay from artifact

What changed in the model

Assumptions and Outputs

Assumptions are cells that may vary by Scenario. Outputs are cells measured after a Run.

  • Cell-first marking in the formula bar and context menu.
  • Formula cells are blocked as Assumptions.
  • Results use marked Outputs only.
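The marking rules above can be sketched in a few lines. This is an illustrative approximation under assumed names, not fin123's implementation:

```python
# Hypothetical sketch of cell-first marking rules; names are illustrative.

def is_formula(value) -> bool:
    """A cell holds a formula if its text starts with '='."""
    return isinstance(value, str) and value.startswith("=")

def mark_assumption(grid: dict, cell: str, marks: dict) -> dict:
    """Mark a cell as an Assumption; formula cells are rejected."""
    if is_formula(grid.get(cell)):
        raise ValueError(f"{cell} holds a formula and cannot be an Assumption")
    return {**marks, cell: "assumption"}
```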

Scenarios and Results

Base is the default state. Other Scenarios are named variations of Base.

  • Create Base Scenario from current Assumption values.
  • Save Bull, Bear, and other Scenario states manually.
  • Run and Run All persist Results from immutable Run and Run Set artifacts.

Sensitivity

Run Sensitivity explores Assumption ranges and measures marked Outputs through controlled execution, not spreadsheet recalculation.

  • Ranges are defined at the Assumption cell.
  • Cases are temporary execution rows, not saved Scenarios by default.
  • Run Set rows are authoritative Results snapshots.
  • AI-assisted Formula execution is blocked during Sensitivity for now.
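The expansion of Assumption ranges into temporary Cases can be sketched as a cross product. The function name and input shape below are assumptions for illustration:

```python
from itertools import product

# Hypothetical sketch: expand Assumption ranges into temporary Sensitivity
# Cases. Cases are execution rows inside a Run Set, not saved Scenarios.

def sensitivity_cases(ranges: dict) -> list:
    """ranges maps an Assumption cell to the list of values to sweep."""
    cells = sorted(ranges)
    return [dict(zip(cells, combo)) for combo in product(*(ranges[c] for c in cells))]
```

Two swept cells with two values each yield four Cases, each a complete set of Assumption overrides for one controlled execution.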

Runs, Run Sets, and Audit

Every execution writes immutable evidence.

  • Every Run has a run_id and model_version_id.
  • A Run Set is an ordered group of Runs or Sensitivity Cases.
  • Child Runs support Audit evidence.
  • Replay reads stored artifacts. It does not recompute.
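The replay guarantee is worth making concrete: replay returns the stored artifact verbatim and never re-executes the model. A minimal sketch, with hypothetical names throughout:

```python
# Illustrative sketch of immutable Run evidence; not fin123's storage layer.

ARTIFACTS = {}  # run_id -> immutable evidence record

def record_run(run_id: str, model_version_id: str, outputs: dict) -> None:
    """Write-once: a run_id can never be overwritten."""
    if run_id in ARTIFACTS:
        raise ValueError("Run artifacts are immutable")
    ARTIFACTS[run_id] = {
        "run_id": run_id,
        "model_version_id": model_version_id,
        "outputs": dict(outputs),
    }

def replay(run_id: str) -> dict:
    """Read the stored artifact; no recomputation happens here."""
    return ARTIFACTS[run_id]
```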

AI lives in the spreadsheet as a formula

An AI-assisted formula is part of the model: authored by the analyst, approved like model logic, and executed only during a Run.

=AI() sits in a normal worksheet cell. The formula definition lives behind it, and the definition can reference an approved Method: a versioned institutional methodology that defines how the formula should reason, validate output, and preserve evidence during a Run.

Methods stay behind the formula. Analysts still work in the grid. Results receive only validated typed output. Audit shows the Method version, hash, prompt snapshot, inputs, raw output where applicable, and validation evidence.

Model Memory adds the missing layer: approved institutional context attached to the model. It can inform AI-assisted formulas during execution, while Audit records exactly what context was supplied and Run Diff shows when that context changed.

same_store_sales_guidance
Cell B12: =AI()

Behind the cell:
- approved Formula definition
- approved Method version
- compiled prompt snapshot
- validated typed output
- Audit evidence

Inputs:
- prior quarter SSS
- management guidance text
- recent comp commentary
- macro / consumer read-through
- historical seasonality

Run output:
Forecast SSS:      0.035
Forecast revenue:  1035
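The "validated typed output" step can be sketched as a schema gate: raw model output reaches Results only if every field matches the approved type. The schema shape and function name are assumptions for the example:

```python
# Hypothetical sketch: AI-assisted formula output is accepted into Results
# only after it validates against the approved typed schema.

def validate_output(raw: dict, schema: dict) -> dict:
    """schema maps a field name to its required Python type."""
    validated = {}
    for name, typ in schema.items():
        if name not in raw or not isinstance(raw[name], typ):
            raise ValueError(f"validation failed for {name}")
        validated[name] = raw[name]
    return validated
```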

Run later

Scheduled execution is the same Run model, just delayed. It references a saved model version and selected Scenarios. It never runs the mutable grid.

fin123 Run later modal showing saved version, selected Scenario, and time
Run later

Run this saved version at a chosen time.

The modal is a delayed Run confirmation: saved version, selected Scenario, and execution time. Unsaved changes are not included.

Run later means: run this saved version with these Scenarios at this time. When it triggers, fin123 creates child Runs, a Run Set, Results, and Audit artifacts using the same execution path as interactive Run.

  • References immutable model_version_id.
  • Uses selected scenario_ids.
  • Executes approved AI-assisted Formula definitions only.
  • Records scheduled_run_id on Run Set and child Runs.
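The scheduling contract above can be sketched as data plus a trigger that reuses one execution path. Everything here is an illustrative assumption, not fin123's scheduler:

```python
from datetime import datetime, timezone

# Hypothetical sketch of a "Run later" request: it references an immutable
# model_version_id and scenario_ids, never the mutable grid.

def schedule_run(model_version_id: str, scenario_ids: list, run_at: datetime) -> dict:
    return {
        "scheduled_run_id": f"sched-{model_version_id}-{int(run_at.timestamp())}",
        "model_version_id": model_version_id,
        "scenario_ids": list(scenario_ids),
        "run_at": run_at.isoformat(),
    }

def trigger(scheduled: dict, execute) -> list:
    """At trigger time, create one child Run per Scenario using the same
    execution path as interactive Run (passed in as `execute`)."""
    return [execute(scheduled["model_version_id"], sid)
            for sid in scheduled["scenario_ids"]]
```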

How it works

The product contract is Model -> Version -> Scenario -> Run -> Results -> Audit.

01
Model
Declare Assumptions, Outputs, formulas, and Methods in the grid.
02
Version
Save an immutable model state before execution.
03
Scenario
Choose Base, Bull, Bear, or controlled Assumption ranges.
04
Run
Run one Scenario, all Scenarios as a Run Set, Sensitivity Cases, or Run later.
05
Results
Inspect persisted outputs from the immutable Run artifact.
06
Audit
Prove Version, Scenario, Methods, Model Memory, lineage, and replay evidence.
governed_data_evidence
Fund
  -> Pod
      -> Data Entitlement
          -> Data Binding
              -> Frozen Binding Ref
                  -> Snapshot Metadata
                      -> Immutable Snapshot Blob
                          -> Run Snapshot Ref
                              -> Safe Data Evidence
                                  -> Audit / Replay / Run Diff
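The lineage diagram above is an ordered chain, and evidence questions amount to walking a segment of it. A minimal sketch that mirrors the diagram's labels; the function is a hypothetical illustration:

```python
# Illustrative sketch: the governed-data lineage as an ordered chain,
# walked from Fund down to the Audit layer. Labels mirror the diagram.

LINEAGE = [
    "Fund", "Pod", "Data Entitlement", "Data Binding", "Frozen Binding Ref",
    "Snapshot Metadata", "Immutable Snapshot Blob", "Run Snapshot Ref",
    "Safe Data Evidence", "Audit / Replay / Run Diff",
]

def evidence_path(start: str, end: str) -> list:
    """Return the chain segment from start to end, inclusive."""
    i, j = LINEAGE.index(start), LINEAGE.index(end)
    return LINEAGE[i:j + 1]
```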

Get started

Open the hosted app: app.fin123.dev

Want the walkthrough? Email reckoningmachines@gmail.com

Quants already have replay culture: pipelines, snapshots, backtests, immutable datasets, and simulation frameworks. Spreadsheet-native investors usually have copied tabs, exports, emailed files, overwritten assumptions, and disappearing provenance. fin123 makes that work replayable and auditable without forcing analysts out of the grid.

YAP remembers how the team thinks. fin123 executes what the team approved.

YAP institutional conversation surface
YAP is a separate Reckoning Machines prototype for pod-native institutional conversation. Over time, selected thesis state or qualitative context can be promoted into fin123 Model Memory as approved execution context. YAP does not mutate Runs.