Ledger frames NFL betting analysis as a 4-phase probabilistic pipeline — context aggregation, parameter estimation, Monte Carlo simulation, and edge analysis — composed into a single prompt you paste into your own ChatGPT Plus or Claude session.
Ledger is in Labs — it does not run on the Brainboot runtime yet. Instead, it's a curated prompt that encodes the full 4-brain architecture, including an embedded Monte Carlo simulator in Python. You fill out the form below with a game you want to analyze, copy the generated prompt, and paste it into Claude or ChatGPT Plus.
This means zero cost to you beyond your existing ChatGPT or Claude subscription — Brainboot doesn't charge anything for this page, and we don't see your queries or the results. It's also the most honest way to validate the architecture before we build the production runtime: if the outputs are useful to you, the full circuit is worth building.
Takes two team names and uses web search to auto-populate every data vector: live lines from multiple books, injury reports, last 5 games for each team, head-to-head history weighted by personnel continuity, advanced EPA stats, weather, stadium, rest differential, and primetime flags. Outputs a fully-sourced GameState with every claim citable. No manual data entry required.
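The GameState the Aggregator produces can be pictured as a typed record. The sketch below is a hypothetical schema for illustration only — the field names and types are assumptions, not Ledger's actual output format:

```python
from dataclasses import dataclass, field

@dataclass
class GameState:
    # Illustrative fields; Ledger's real GameState carries many more vectors
    # (EPA stats, referee tendencies, line movement, etc.).
    home_team: str
    away_team: str
    spread: float                 # home-team spread at the sharpest book
    total: float                  # posted over/under
    injuries: list = field(default_factory=list)
    rest_differential: int = 0    # home rest days minus away rest days
    is_primetime: bool = False
    sources: list = field(default_factory=list)  # citation URLs per claim

gs = GameState("Chiefs", "Bills", spread=-2.5, total=47.5)
```

Every field in the real output is paired with a source citation, which is what makes the GameState auditable rather than a bundle of unverifiable numbers.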
Consumes the GameState and runs three sub-steps: (a) parameter estimation with Bayesian regression-to-mean and continuity-weighted H2H, (b) a 30,000-run drive-level Monte Carlo simulation in pure Python, and (c) edge analysis with Kelly sizing. Refuses to recommend any bet where the lower bound of the 95% CI on the edge crosses zero. Every bet gets its own falsifier.
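The drive-level simulation in sub-step (b) can be sketched minimally as follows. The points-per-drive means, drives-per-game count, and noise scale here are illustrative assumptions, not Ledger's fitted parameters:

```python
import random
import statistics

def simulate_game(home_ppd, away_ppd, drives=11, sd=0.9):
    """One simulated game: each team's score is the sum of per-drive points.
    Per-drive points are drawn from a Gaussian, floored at zero."""
    home = sum(max(0.0, random.gauss(home_ppd, sd)) for _ in range(drives))
    away = sum(max(0.0, random.gauss(away_ppd, sd)) for _ in range(drives))
    return home, away

def run_sims(home_ppd=2.3, away_ppd=2.0, n=30_000, seed=42):
    """Run n simulated games; return home win probability and margin stats."""
    random.seed(seed)
    margins = [h - a for h, a in
               (simulate_game(home_ppd, away_ppd) for _ in range(n))]
    p_home = sum(m > 0 for m in margins) / n
    return p_home, statistics.mean(margins), statistics.stdev(margins)

p_home, mean_margin, sd_margin = run_sims()
```

The point of simulating at the drive level rather than sampling final scores directly is that the margin and total distributions emerge from the same underlying draws, so derived markets (first-half totals, alternate spreads) stay internally consistent.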
Tell Ledger which two teams are playing. The Aggregator brain does the rest — it uses web search to pull Pinnacle + US book lines, public money %, reverse line movement, nfelo power ratings, DVOA, PFF grades, injuries, weather, referee crew tendencies, and H2H history, all automatically. You don't paste anything.
The total amount you're treating as your betting roll — not what's in your checking account. Ledger uses this to compute each recommended bet's exact dollar stake via Kelly sizing.
Kelly / 4 — industry standard
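Quarter-Kelly ("Kelly / 4") converts an estimated edge into a dollar stake. A minimal sketch, with the function name and example numbers as illustrative assumptions:

```python
def kelly_stake(bankroll, p_win, american_odds, fraction=0.25):
    """Fractional Kelly stake for a bet at American odds.
    fraction=0.25 is quarter-Kelly, Ledger's default."""
    # Convert American odds to net decimal odds b (profit per $1 staked).
    b = american_odds / 100 if american_odds > 0 else 100 / abs(american_odds)
    edge = p_win * b - (1 - p_win)   # expected profit per $1 staked
    f = max(0.0, edge / b)           # full-Kelly fraction; never negative
    return round(bankroll * f * fraction, 2)

# $1,000 bankroll, 55% win probability at -110:
kelly_stake(1000, 0.55, -110)  # → 13.75
```

Betting a quarter of full Kelly trades a little growth rate for much lower variance and protection against overestimated win probabilities, which is why it is the conservative default.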
Enable web search + code execution in your Claude or ChatGPT session before pasting. Claude Sonnet 4.6 has both on by default; ChatGPT Plus users may need to toggle on Browsing and Data Analysis. Web search powers the Aggregator brain; code execution runs the 4-model Python ensemble in Phase 2. If either is missing, Ledger tells you up front and falls back to analytical approximations.
Paste the copied prompt into a new chat in either Claude or ChatGPT Plus. The prompt is self-contained — the model will announce each phase as it runs, search the web for current data, and produce a final Bet Card at the end.
Home/away win probabilities, expected margin and total with standard deviations, first-half totals, and a list of recommended bets (or an honest “no bet”) with Kelly sizing at every step.
Because you see every phase, you see the exact parameters the simulator used, the historical weighting it applied, and the invariants it enforced. No black box.
Every recommendation includes a specific condition that would invalidate it. If that condition arises before kickoff, you already know to pull the bet.
Most games don't have edge. Ledger will tell you when the market is efficient and refuse to manufacture a pick for the sake of giving you something to act on.
Most sports betting AI tools optimize the one lever that matters least — pick accuracy — and sell you a black box that says “bet this.” Ledger optimizes all five levers: picks, line shopping, bankroll sizing, tilt avoidance, and closing line value. Even with worse picks than a competitor, Ledger's discipline stack produces better realized ROI because the other four levers are where the compounding happens.
Every output is fully transparent. You see the distributions the simulator used, the historical weighting, the injury adjustments, and the confidence interval on the edge. If the lower bound of the confidence interval crosses zero, Ledger refuses to recommend the bet — even if the central estimate looks good. That's the invariant layer working.
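The refusal invariant can be expressed as a one-line check on the confidence interval. This is a hypothetical sketch using the normal approximation to the binomial for the simulated win probability; the function names and numbers are illustrative:

```python
import math

def edge_ci(p_hat, n_sims, breakeven, z=1.96):
    """95% CI on the edge (p_hat - breakeven) from n_sims Monte Carlo runs.
    breakeven is the implied win probability of the offered price,
    e.g. 110/210 ≈ 0.5238 at -110."""
    se = math.sqrt(p_hat * (1 - p_hat) / n_sims)
    edge = p_hat - breakeven
    return edge - z * se, edge + z * se

def recommend(p_hat, n_sims, breakeven):
    lo, _ = edge_ci(p_hat, n_sims, breakeven)
    # Refuse whenever the lower bound crosses zero, even if the
    # central estimate of the edge is positive.
    return "bet" if lo > 0 else "no bet"
```

For example, a simulated 54.5% win probability against a -110 breakeven of ~52.4% clears the check at 30,000 runs, while 52.7% does not — the central edge is positive in both cases, but only the first survives the lower-bound test.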
The test with friends is the whole point. If the outputs feel materially smarter than the picks they're getting elsewhere, we know the architecture is worth building into a real runtime with scheduled circuits and API-driven data ingestion. If it doesn't, we learned something cheap.
No promises. No win rate guarantees. No hype. Just a framework that tries to be honest about probability under uncertainty, and refuses to bet when the honesty says you shouldn't.