A Playoff Odds Check Supplement

Yesterday, I tested how well our playoff odds have predicted eventual playoff teams. Today, I’m going to slice the data a few more ways to get a more robust look at what our odds do well and where their edge over other models shrinks. It will be number- and picture-heavy, word-light. Without further ado, let’s get started.

A discussion with Tom Tango got me wondering why our Depth Charts-based odds do so well relative to other systems early in the season. Their advantage is strongest at the beginning of the year and fades as the season goes on. For all charts in this article that are based on days into a season, I’ve excluded the 2020 season for obvious reasons. Here are the mean absolute errors for each of the three systems over the first 60 days of the season:

What’s driving that early outperformance? In essence, it comes down to one thing: the projection-based model is willing to give teams high or low probabilities of making the postseason right away. Our season-to-date stats model is hesitant to do that, and the coin flip model obviously can’t do it. One way to see this is to measure the percentage of teams that each system moves to the extremes of the distribution (either less than 5% or more than 95% to make the playoffs) on each day of the season.
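If you’d like to run that check yourself, it’s a quick computation. Here’s a minimal sketch in Python; the dataframe layout and column names (day, team, playoff_prob) are my own stand-ins, not the actual structure of our data:

    import pandas as pd

    # One row per team per day, holding that day's playoff odds from a single
    # system. The values here are made up purely for illustration.
    odds = pd.DataFrame({
        "day": [1, 1, 1, 2, 2, 2],
        "team": ["NYY", "DET", "HOU", "NYY", "DET", "HOU"],
        "playoff_prob": [0.96, 0.03, 0.55, 0.97, 0.02, 0.60],
    })

    # Flag teams pushed to the extremes: below 5% or above 95%.
    odds["extreme"] = (odds["playoff_prob"] < 0.05) | (odds["playoff_prob"] > 0.95)

    # Share of teams at the extremes on each day of the season.
    print(odds.groupby("day")["extreme"].mean())

Here’s that percentage for each system, by day of season: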

Why does this matter? If you’re judging based on mean absolute error, making extreme predictions that turn out to be right is a huge tailwind. If you predict something as 50% likely, you’ll have an error of 0.5 no matter what. The further you predict from the center of the distribution, the more chance you have to reduce your error.
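To make that arithmetic concrete, here’s the absolute error a single forecast earns under each possible outcome, in a tiny Python sketch:

    # Absolute error of a probability forecast: |prediction - outcome|,
    # where the outcome is 1 (made the playoffs) or 0 (missed).
    for pred in (0.50, 0.95):
        err_if_in = abs(pred - 1)   # the team makes the playoffs
        err_if_out = abs(pred - 0)  # the team misses
        print(f"predict {pred:.0%}: error {err_if_in:.2f} if in, {err_if_out:.2f} if out")

A 50% call is locked into an error of 0.50 either way; a 95% call earns a tiny 0.05 when it’s right and a whopping 0.95 when it’s wrong.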

Of course, that only works if you get it right. If you simply randomly predicted either 5% or 95% chances without any information about the teams involved, you’d do just as poorly as predicting 50% for everything. Making extreme picks when you have information that suggests they’re likely to be right is the name of the game.
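A quick simulation bears that out. To be clear, everything below is an assumption of mine for illustration (uniformly distributed true probabilities, and an “informed” rule that simply knows which side of 50% a team’s true probability falls on); it isn’t how any of our models actually work:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # True playoff probabilities and the outcomes they generate.
    true_p = rng.uniform(0, 1, n)
    made_playoffs = rng.uniform(0, 1, n) < true_p

    always_half = np.full(n, 0.5)
    # Uninformed extreme picks: a coin flip between 5% and 95%.
    random_extreme = rng.choice([0.05, 0.95], n)
    # Informed extreme picks: pick the extreme on the correct side of 50%.
    informed_extreme = np.where(true_p < 0.5, 0.05, 0.95)

    for name, pred in [("always 50%", always_half),
                       ("random extremes", random_extreme),
                       ("informed extremes", informed_extreme)]:
        print(f"{name}: MAE {np.abs(pred - made_playoffs).mean():.3f}")

The uninformed extreme picks come out right around 0.5, no better than saying 50% for everyone, while the picks backed by real signal land far lower.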

By and large, our model does that. Take, for example, the teams we gave a 5% or lower chance of making the postseason at some point in the first 60 days of a season. From 2014-19, those teams made the playoffs 3.5% of the time. We’ve been too aggressive in writing these teams off! We gave them an aggregate 1.3% chance. But the other models have been far too forgiving, to their detriment.
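That kind of check (an aggregate forecast versus the observed frequency) is simple to run. A minimal sketch in Python, with a hypothetical dataframe standing in for the real data:

    import pandas as pd

    # One row per team per day over the first 60 days: that day's playoff odds
    # and whether the team ultimately made the postseason. Values are made up.
    df = pd.DataFrame({
        "playoff_prob": [0.01, 0.04, 0.02, 0.65, 0.03, 0.98],
        "made_playoffs": [0, 0, 1, 1, 0, 1],
    })

    # Teams the model had all but written off on a given day.
    long_shots = df[df["playoff_prob"] <= 0.05]

    predicted = long_shots["playoff_prob"].mean()   # the aggregate forecast
    observed = long_shots["made_playoffs"].mean()   # how often they made it anyway
    print(f"predicted {predicted:.1%}, observed {observed:.1%}")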

Here is the chart Tango was driving at, the one that led me to the rest of this discussion. On each of the first 60 days of the season, I took all the teams to which the FG Projections model assigned a slim chance of making the playoffs (less than 5%), then compared the playoff odds that each of the three models gave those teams. As you can see, it takes the season-to-date and coin flip models a long time to catch up to the reality on the ground:

That’s where we’re strongest: writing off bad teams right away rather than waiting a few months to reach the same conclusion. By season’s end, the advantage is far less pronounced. The same is true in the opposite direction, for teams with 95% or greater playoff odds on a given day. These teams have made the playoffs 96.6% of the time in real life:

In a nutshell, that’s what makes our projections work. Late in the season, they’re still better than the season-to-date and coin flip models, but by a smaller margin, and for two reasons. First, season-to-date stats and projections tend to converge. Second, the games that have already been played take on ever-increasing importance, and that’s data all three models have equal access to.

Another question: how do our forecasts look in each month? First, here are our day one projections, bucketed into groups of 5 percentage points:

That graph looks pretty rough. Why? Well, there simply aren’t enough observations. Take the 55%-60% playoff odds bucket. From 2014-19, exactly two teams fell into that category: the soon-to-be-Guardians in 2015 and the Mets in ’17. They both missed the playoffs. The overall predictive power is still quite strong, as we saw in the discussions of MAE. Here’s the data in table form:

Day Zero Playoff Odds vs. Observed
Our Odds   Observed   Count
    1.7%       5.3%      38
    7.1%      14.3%      14
   12.2%      38.5%      13
   17.2%      28.6%      14
   22.8%      20.0%      10
   27.5%      26.7%      15
   32.6%      18.2%      11
   36.6%      50.0%       4
   42.5%      57.1%       7
   47.8%      42.9%       7
   53.1%      25.0%       4
   57.3%       0.0%       2
   62.3%      16.7%       6
   67.5%      50.0%       4
   71.6%      80.0%       5
   78.4%     100.0%       3
   82.6%     100.0%       4
   88.2%      80.0%       5
   93.1%      57.1%       7
   96.1%     100.0%       7
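If you’d like to build a table like this from scratch, the recipe is short: bucket the odds into 5-percentage-point bins, then compare each bucket’s average forecast to the share of its teams that actually made the playoffs. A minimal sketch with simulated stand-in data (the real numbers above obviously come from our historical odds, not from this toy generator):

    import numpy as np
    import pandas as pd

    # Simulated stand-in data: day-one odds and outcomes for 180 team-seasons.
    rng = np.random.default_rng(1)
    df = pd.DataFrame({"playoff_prob": rng.uniform(0, 1, 180)})
    df["made_playoffs"] = rng.uniform(0, 1, 180) < df["playoff_prob"]

    # Cut the odds into 5-percentage-point buckets.
    bins = np.arange(0, 1.05, 0.05)
    df["bucket"] = pd.cut(df["playoff_prob"], bins, include_lowest=True)

    # Average forecast, observed playoff rate, and team count per bucket.
    table = df.groupby("bucket", observed=True).agg(
        our_odds=("playoff_prob", "mean"),
        observed=("made_playoffs", "mean"),
        count=("made_playoffs", "size"),
    )
    print(table)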

From there, I made a chart for each month. March and April:

May:

June:

July:

August:

September and October:

If you wanted to, you could read into these. I find it amusing, for example, that the September and October chart suggests the model is too high on low-odds teams in September, exactly the opposite of the problem that raised ire this year. I’d caution against that kind of close reading, though. There aren’t enough observations in any of these groups to say much, and they’re serially correlated as well: a team with 5% playoff odds on May 7 probably has 5% playoff odds on May 8, and so on. These charts are more useful for thinking about the problem in broad strokes than anything else.

There are plenty of other ways to slice this data up, but I think I’ve hit the high points of the analysis between yesterday’s and today’s looks. Our model is good but certainly not perfect. Its best trait is its willingness to move the needle far to the left or right early in the season, based on the caliber of players on a given team. The later in the year we get, the better every model will be, whether you’re using differential equations or flipping coins.

None of this means that we won’t attempt to further improve our projections. Being better than a season-to-date model is good, but you could always do better. If you’re wondering whether our odds have any bearing on reality, though, rest assured: they’ve done pretty well at figuring out whether teams are good or bad, and they provide signal well in excess of the actual standings starting from day one.





Ben is a writer at FanGraphs. He can be found on Twitter @_Ben_Clemens.

15 Comments
Cameron
2 years ago

Cool deep dive. One thing is confusing me though. Why don’t the coin flip and season-to-date methods coincide at 0 games into the season? Am I missing something in how the season-to-date method works?

MorboTheAnnihilator
2 years ago
Reply to Ben Clemens

You mentioned it in the last article.

mikejunt
2 years ago
Reply to Ben Clemens

I’d like to see a full season chart of the first two graphs.

I’d also like to see the version that does playoff series with those, so we can see if/when/to what degree season-to-date ever approaches FG projections in terms of actual series outcomes. There’s a lot of folks who disagree with how the FG projections have only risen up to about .550 on the Giants and don’t consider them a .600 team. I suspect FG’s right, but there’s a particularly big gap between season-to-date and projections with the Giants, and it’d be interesting to see this analysis carried through to help people consume those during the postseason.

Cameron
2 years ago
Reply to Ben Clemens

Gotcha thanks!