Predicting This Year’s Oscar Nominations Using Just Math

On January 17, 2026, a mathematical model by Ben Zauzmer translated awards-season signals into probabilities for this year’s Oscar nominations, offering a ranked likelihood of who will be invited to the March ceremony. The algorithm blends historical precedent with current-season honors — weighing guild wins, critics prizes and major precursor nods — to produce odds for each category. Early results show Paul Thomas Anderson’s One Battle After Another and a clutch of other titles leading many races, but the model is explicit that dominance in precursors implies probability, not certainty. This is the 13th annual run of the formula and its outputs are intended to identify the likeliest nominees, not to declare winners.

Key takeaways

  • One Battle After Another sits atop the best-picture leaderboard, with Sinners, Hamnet, Marty Supreme and Frankenstein also positioned as very likely nominees.
  • The model has historically performed well: since the Academy expanded to 10 best-picture slots, an average of 9 out of the model’s top 10 candidates have ultimately been nominated.
  • The director and acting races show clear front-runners: Paul Thomas Anderson and Ryan Coogler are virtual locks for directing nods, and the best-actor top five includes Michael B. Jordan and Leonardo DiCaprio.
  • It Was Just an Accident is estimated at about a 2-in-3 chance for best picture; Bugonia sits just over the 50 percent mark, while no other title reaches a one-in-three probability.
  • Best Supporting Actor shows a cluster race: Benicio del Toro and Sean Penn are tied at 89.1 percent, with Paul Mescal close behind; del Toro and Penn trail Stellan Skarsgård by 6.1 percentage points.
  • BAFTA and the Writers Guild scheduled their nominations after the Academy this year, removing two usual predictors and increasing uncertainty in screenplay categories.
  • The model flags several categories where small shifts in late-season voting or a single surprise precursor could alter the final nominee list.

Background

Oscarmetrics-style prediction models aggregate awards-season data — critics group results, guild nominations and wins, major festival recognition, and earlier Oscar voting patterns — and weight each input by its historical predictive value for a given Academy category. Ben Zauzmer’s iteration has been run annually for 13 years; it is tuned to how different precursors have correlated with Academy behavior since the Academy expanded best picture to 10 nominees. That calibration is why some early-season leaders often translate into high nomination probabilities, even if they don’t guarantee a trophy.
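The aggregation described above can be sketched as a simple weighted-signal model. This is an illustrative reconstruction only: the weights, signal names, and logistic form below are assumptions for the sake of the example, not Zauzmer's actual (unpublished) formula.

```python
import math

# Hypothetical per-category weights: each precursor signal is scaled by
# its historical correlation with Academy nominations (illustrative values).
PRECURSOR_WEIGHTS = {
    "pga_nomination": 2.1,   # Producers Guild nod
    "dga_nomination": 1.8,   # Directors Guild nod
    "sag_ensemble":   1.4,   # Screen Actors Guild ensemble nod
    "critics_wins":   0.6,   # per major critics-group win
}
BIAS = -3.0  # baseline log-odds for a film with no precursor support

def nomination_probability(signals: dict) -> float:
    """Combine precursor signals (0/1 flags or counts) into a nomination
    probability via a logistic model: weight each input, sum the log-odds,
    then squash into a 0-1 probability."""
    log_odds = BIAS + sum(PRECURSOR_WEIGHTS[k] * v for k, v in signals.items())
    return 1.0 / (1.0 + math.exp(-log_odds))

# A film with PGA, DGA, and SAG ensemble nods plus two critics wins:
p = nomination_probability({"pga_nomination": 1, "dga_nomination": 1,
                            "sag_ensemble": 1, "critics_wins": 2})
print(f"{p:.1%}")  # a heavy precursor haul yields a very high probability
```

The key design point is that the weights differ by category, which is why the same guild win can be near-decisive in one race and only suggestive in another.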

This year’s awards calendar has notable quirks. The Directors Guild, Producers Guild and Screen Actors Guild produced key early signals that boosted films such as One Battle After Another and Sinners. Conversely, two traditional bellwethers — BAFTA and the Writers Guild — announced their nomination windows after the Academy’s timeline, removing two data streams the model normally relies on and widening error margins, particularly in screenplay races. The algorithm therefore assigns larger uncertainty bands where those predictors would normally refine probabilities.
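The effect of a missing predictor on the error margins can be illustrated directly: when a precursor's outcome is unknown, the model must cover both scenarios, and the plausible range of the final probability widens. The weights below are hypothetical, continuing the logistic sketch rather than the model's actual calculation.

```python
import math

def prob(log_odds: float) -> float:
    """Convert summed log-odds into a probability."""
    return 1.0 / (1.0 + math.exp(-log_odds))

base = 0.8           # log-odds contributed by the precursors we do have
bafta_weight = 1.2   # hypothetical weight of a BAFTA nomination

# If BAFTA had already announced (say, with a nomination), the estimate
# collapses to a single point:
known = prob(base + bafta_weight)

# With BAFTA still pending, the estimate spans both possible outcomes,
# which is the wider uncertainty band the text describes:
low, high = prob(base), prob(base + bafta_weight)
print(f"with BAFTA known: {known:.1%}; pending: {low:.1%} to {high:.1%}")
```

The gap between `low` and `high` is largest for films near the nomination cutoff, which is exactly where the screenplay races sit this cycle.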

Main event: what the model shows now

Best Picture: The model places One Battle After Another clearly ahead and rates Sinners, Hamnet, Marty Supreme and Frankenstein as strong follow-ons; Sentimental Value and Train Dreams are also likely to be included. Outside that core group, It Was Just an Accident is modeled at roughly a two-thirds chance and Bugonia just over 50 percent, while no other title approaches a one-in-three probability, implying that at least one surprise would be required to change the expected quintet.

Best Director: Paul Thomas Anderson and Ryan Coogler are essentially locks for nominations, according to the math, with former winner Chloé Zhao also a strong candidate. A midtier quartet sits between roughly 34 percent and 63 percent odds; the Directors Guild favored Josh Safdie and Guillermo del Toro, but the model slightly favors Jafar Panahi as an outside threat despite complications around his legal status.

Acting categories: The Best Actor list shows strong consensus around Michael B. Jordan, Leonardo DiCaprio, Timothée Chalamet, Wagner Moura and Ethan Hawke. Potential upsets on the nominations list could benefit Joel Edgerton or Jesse Plemons. Best Actress is led by Golden Globe winners Jessie Buckley and Rose Byrne, followed by Chase Infiniti and Emma Stone, with Renate Reinsve’s prospects dampened by a low Screen Actors Guild presence that opened space for Amanda Seyfried or Kate Hudson.

Supporting categories and screenplays: The supporting actor field features a tight cluster rather than a single dominant name: Benicio del Toro and Sean Penn are tied at 89.1 percent, Paul Mescal is nearly neck-and-neck, and Stellan Skarsgård leads the group by about 6.1 points. Supporting Actress leans on Teyana Taylor and Amy Madigan as early favorites, alongside Ariana Grande and Wunmi Mosaku. Among the screenplay races, the adapted pool has six titles above 60 percent, meaning at least one strong contender will be disappointed by the five available slots.

Analysis & implications

Probability, not inevitability, is the model’s core premise. Historically dominant early-season performers have lost on Oscar night, and the model’s output reflects that nuance: it offers likelihoods rooted in multi-year correlations rather than declarative picks. For voters and observers, the table of odds can reveal where consensus is forming and where late campaigning or a single influential precursor could still matter.

The absence of BAFTA and the Writers Guild as timely predictors increases variance most in screenplay and some acting categories. Those organizations often act as final validators for English-language and writer-driven films; with them sidelined this cycle, the model’s confidence intervals widen and small shifts in guild or critics results could flip marginal races.

International and foreign-language films remain an area where standard precursor weighting can undercount momentum. The Secret Agent's strong foreign-language showing has not translated into screenplay honors in the precursors the model emphasizes, which lowers its projected odds; a late reassessment by voters, or a recent trend the weighting has not yet captured, could still favor such films in the final tally.

For studios and campaigns, the practical takeaway is tactical: films with probabilities clustered near nomination cutoffs should invest disproportionately in late-season visibility and screening efforts, because the model shows those margins matter. Conversely, titles with probabilities north of the mid-80s can prioritize awards-night positioning and tactful campaigning for final voters.

Model metric                                         Value
Annual runs of this model                            13
Historic hit rate for top-10 best-picture picks      ~9 of 10 (since expansion to 10 nominees)
Supporting Actor: Benicio del Toro / Sean Penn       89.1%
Supporting Actor: Stellan Skarsgård's relative lead  +6.1 percentage points
Best Picture: It Was Just an Accident                ~66%
Best Picture: Bugonia                                >50%

These figures are drawn from the model’s current run and illustrate why the algorithm favors certain films: consistent precursor wins and high-weighted guild recognition translate into high nomination probabilities. The table is a snapshot; late awards or changing ballots will modify these numbers between now and nominations day.

Reactions & quotes

Model author Ben Zauzmer framed the output as probability-driven insight rather than prediction of winners. His approach is meant to highlight where voting momentum currently resides and where surprises remain plausible.

“Early-season dominance increases odds, but the Academy has repeatedly shown it can disagree with precursors — this model quantifies that gap.”

Ben Zauzmer (model author)

Industry observers note that guild endorsements retain disproportionate influence on director and producing races; when guilds align, the model’s probabilities shorten and become more robust.

“A Producers Guild or Directors Guild nod still moves the needle in a way that critics prizes rarely do.”

Awards-season strategist (industry analyst)

Unconfirmed

  • Reports suggesting a specific legal outcome for Jafar Panahi remain fluid and have not been independently verified for their timing or effect on voters.
  • Whether recent precursor losses for The Secret Agent indicate enduring voter resistance to international screenplays is uncertain and may reflect a late-shifting trend the model has not fully captured yet.

Bottom line

The math assigns high nomination probabilities to a core group of films — led by One Battle After Another, Sinners, Hamnet, Marty Supreme and Frankenstein — but it also underscores how narrow margins and the absence of traditional predictors can reshape several categories. Where the model shows cluster races, a single late precursor or campaign surge could flip outcomes; where it shows clear leads, those titles have likely done the heavy lifting in the awards season so far.

Readers should view these odds as a map of current voting momentum: useful for anticipating the likely nominee slate, but not a substitute for the Academy’s final vote. Between now and nominations day, watch for final-guild announcements, last-minute awards and campaign moves — each has the potential to change these probabilities materially.
