The 2020 ZiPS Projection Wrap-up, Part I: The Teams

While there’s still a bit of baseball left to be played, this is always the time of the year when I dissect the current season’s ZiPS projections. Baseball history is not so long that we suffer from a surfeit of data, and another season wrapped means more for ZiPS to work with. ZiPS is mature enough at this point that (sadly) the major sources of systematic error have been largely ironed out, but that doesn’t mean that the model doesn’t learn new things from the results.

2020 was a highly unusual season (for very unfortunate reasons); its shortness will hopefully provide us some insight into baseball played in a truncated format. In terms of projections, I tend to have a conservative bent, and I like to be very careful about making sure I know which things have predictive value before I integrate them into the myriad models that make up the various ZiPS projections. A lot of my assumptions going into this season required far more guesswork than usual; I had no idea how teams would actually use prospects in a shorter season, what the injury rates would look like once we brought COVID-19 into the mix, or if we would even complete a 60-game slate.

In light of the risks involved, I kept player totals in the playing time model lower than I would have in a normal season, but I had little clarity about what the league’s COVID-19 case rate would be over the course of the season. Even the way-smarter-than-me epidemiologists didn’t know, and I, alas, didn’t major in mathemagical science. With more volatility in projected roster construction, the ZiPS standings came with larger error bars than I’d expect over a “normal” 60-game season, but I wasn’t really sure whether that was right.

In the end, the strangest thing to me was just how normal everything turned out being. After an inauspicious start to the season — with testing delays the first weekend of summer camp, early outbreaks on the Marlins and Cardinals, and poor team communication as to just what the rules were — I wasn’t optimistic. But ultimately, 28 of the 30 teams played all 60 games, and the two teams that didn’t, the Tigers and Cardinals, were ready and able to play their missing games had they been needed to decide the standings.

Let’s start with a table of the basic win totals vs. what reality handed down.

ZiPS 2020 Projected Wins vs. Actual

| Team | ZiPS | Actual | Difference |
| --- | --- | --- | --- |
| Miami Marlins | 25 | 31 | 6 |
| Seattle Mariners | 22 | 27 | 5 |
| San Diego Padres | 32 | 37 | 5 |
| Tampa Bay Rays | 35 | 40 | 5 |
| Toronto Blue Jays | 27 | 32 | 5 |
| Baltimore Orioles | 20 | 25 | 5 |
| Los Angeles Dodgers | 38 | 43 | 5 |
| Chicago White Sox | 31 | 35 | 4 |
| San Francisco Giants | 26 | 29 | 3 |
| Oakland A’s | 33 | 36 | 3 |
| Atlanta Braves | 33 | 35 | 2 |
| Chicago Cubs | 32 | 34 | 2 |
| Cleveland Indians | 34 | 35 | 1 |
| Minnesota Twins | 35 | 36 | 1 |
| Philadelphia Phillies | 28 | 28 | 0 |
| Detroit Tigers | 23 | 23 | 0 |
| Kansas City Royals | 26 | 26 | 0 |
| Colorado Rockies | 26 | 26 | 0 |
| Cincinnati Reds | 31 | 31 | 0 |
| St. Louis Cardinals | 31 | 30 | -1 |
| Milwaukee Brewers | 31 | 29 | -2 |
| Los Angeles Angels | 30 | 26 | -4 |
| New York Yankees | 37 | 33 | -4 |
| Arizona Diamondbacks | 30 | 25 | -5 |
| New York Mets | 31 | 26 | -5 |
| Texas Rangers | 28 | 22 | -6 |
| Boston Red Sox | 30 | 24 | -6 |
| Houston Astros | 35 | 29 | -6 |
| Pittsburgh Pirates | 26 | 19 | -7 |
| Washington Nationals | 34 | 26 | -8 |

The average miss in win totals was 3.5 wins per 60 games, slightly better than the 3.7-win average miss I projected going into the season. Solid start! But is 3.5 good or bad? Those can be loaded terms when it comes to evaluating accuracy, so I always like to weigh ZiPS against the Vegas lines. Disclaimer: this was the first season since online betting became a thing in which I placed no bets. While I’m not one to judge others, I personally felt a little queasy gambling on team results given the backdrop of this year.
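
If you want to check that 3.5-win figure, it’s just the mean absolute difference between the projected and actual win columns. Here’s a minimal sketch in Python using the totals from the table above.

```python
# Sketch: the "average miss" is the mean absolute difference between projected
# and actual wins, using the (ZiPS, actual) pairs from the table above.
wins = {
    "Marlins": (25, 31), "Mariners": (22, 27), "Padres": (32, 37),
    "Rays": (35, 40), "Blue Jays": (27, 32), "Orioles": (20, 25),
    "Dodgers": (38, 43), "White Sox": (31, 35), "Giants": (26, 29),
    "Athletics": (33, 36), "Braves": (33, 35), "Cubs": (32, 34),
    "Indians": (34, 35), "Twins": (35, 36), "Phillies": (28, 28),
    "Tigers": (23, 23), "Royals": (26, 26), "Rockies": (26, 26),
    "Reds": (31, 31), "Cardinals": (31, 30), "Brewers": (31, 29),
    "Angels": (30, 26), "Yankees": (37, 33), "Diamondbacks": (30, 25),
    "Mets": (31, 26), "Rangers": (28, 22), "Red Sox": (30, 24),
    "Astros": (35, 29), "Pirates": (26, 19), "Nationals": (34, 26),
}

mean_abs_error = sum(abs(actual - zips) for zips, actual in wins.values()) / len(wins)
print(f"Average miss: {mean_abs_error:.1f} wins per 60 games")  # -> 3.5
```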

For the Vegas lines, I used the over/unders posted the same day I ran the projections; the table below compares them to what ZiPS foresaw, along with each hypothetical pick and the actual result.

ZiPS 2020 Projected Wins vs. Vegas

| Team | ZiPS | Vegas Over/Under | ZiPS Pick | Actual |
| --- | --- | --- | --- | --- |
| Philadelphia Phillies | 28 | 31.0 | UNDER | 28 |
| Seattle Mariners | 22 | 24.5 | UNDER | 27 |
| Los Angeles Angels | 30 | 32.0 | UNDER | 26 |
| Texas Rangers | 28 | 29.5 | UNDER | 22 |
| San Diego Padres | 32 | 30.5 | OVER | 37 |
| Arizona Diamondbacks | 30 | 31.5 | UNDER | 25 |
| Chicago White Sox | 31 | 32.5 | UNDER | 35 |
| San Francisco Giants | 26 | 24.5 | OVER | 29 |
| Detroit Tigers | 23 | 21.5 | OVER | 23 |
| Kansas City Royals | 26 | 24.5 | OVER | 26 |
| Colorado Rockies | 26 | 27.5 | UNDER | 26 |
| Washington Nationals | 34 | 33.0 | OVER | 26 |
| Tampa Bay Rays | 35 | 34.0 | OVER | 40 |
| New York Mets | 31 | 32.0 | UNDER | 26 |
| Toronto Blue Jays | 27 | 28.0 | UNDER | 32 |
| Cincinnati Reds | 31 | 32.0 | UNDER | 31 |
| Pittsburgh Pirates | 26 | 25.5 | OVER | 19 |
| Miami Marlins | 25 | 24.5 | OVER | 31 |
| Boston Red Sox | 30 | 30.5 | UNDER | 24 |
| Baltimore Orioles | 20 | 20.5 | UNDER | 25 |
| Los Angeles Dodgers | 38 | 38.5 | UNDER | 43 |
| New York Yankees | 37 | 37.5 | UNDER | 33 |
| Oakland A’s | 33 | 33.5 | UNDER | 36 |
| Milwaukee Brewers | 31 | 30.5 | OVER | 29 |
| Atlanta Braves | 33 | 33.5 | UNDER | 35 |
| Cleveland Indians | 34 | 33.5 | OVER | 35 |
| Minnesota Twins | 35 | 34.5 | OVER | 36 |
| St. Louis Cardinals | 31 | 31.5 | UNDER | 30 |
| Houston Astros | 35 | 35.0 | PUSH | 29 |
| Chicago Cubs | 32 | 32.0 | PUSH | 34 |

If you had bet the over every time ZiPS projected more wins than Vegas and the under every time the opposite was true, how would you have done? With the Astros and Cubs off the board (ZiPS matched Vegas exactly) and most sportsbooks refunding the Cardinals and Tigers bets because of their shortened seasons, that leaves 26 bets, and you would have been correct on 16 of them. If you had only bet when ZiPS disagreed with Vegas by more than a win, you would have cashed eight of 10. Had the sportsbook not canceled the Tigers bet (Detroit had already clinched the over before its season ended), that would have been nine of 11.

Unsurprisingly, the closer ZiPS was to Vegas, the more ZiPS vs. Vegas became a coin flip; on the 12 teams separated from the line by only half a win entering the season, ZiPS went 5-6, with the Cardinals bet among the refunds.
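
For anyone who wants to retrace that record, here’s a minimal sketch of the tally. It assumes the `wins` dictionary of (ZiPS, actual) pairs from the earlier snippet, takes the lines from the table above, and treats the Tigers and Cardinals bets as refunded; restricting to gaps of more than a win is just a filter on the same data.

```python
# Sketch: tally the "bet with ZiPS" record described above. Lines come from the
# second table; pushes (ZiPS equals the line) and the refunded Tigers/Cardinals
# bets are skipped. Assumes the `wins` dict of (ZiPS, actual) pairs defined earlier.
vegas = {
    "Phillies": 31.0, "Mariners": 24.5, "Angels": 32.0, "Rangers": 29.5,
    "Padres": 30.5, "Diamondbacks": 31.5, "White Sox": 32.5, "Giants": 24.5,
    "Tigers": 21.5, "Royals": 24.5, "Rockies": 27.5, "Nationals": 33.0,
    "Rays": 34.0, "Mets": 32.0, "Blue Jays": 28.0, "Reds": 32.0,
    "Pirates": 25.5, "Marlins": 24.5, "Red Sox": 30.5, "Orioles": 20.5,
    "Dodgers": 38.5, "Yankees": 37.5, "Athletics": 33.5, "Brewers": 30.5,
    "Braves": 33.5, "Indians": 33.5, "Twins": 34.5, "Cardinals": 31.5,
    "Astros": 35.0, "Cubs": 32.0,
}
refunded = {"Tigers", "Cardinals"}  # 58-game seasons; most books voided these bets

correct = wrong = 0
for team, line in vegas.items():
    zips, actual = wins[team]
    if zips == line or team in refunded:
        continue                       # push or refund: no bet settles
    pick_over = zips > line            # bet the over when ZiPS beats the line, the under otherwise
    hit = actual > line if pick_over else actual < line
    correct += hit
    wrong += not hit

print(correct, wrong)  # -> 16 10
```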

Another way to look at team results is where each team’s actual win total fell in its projected distribution. ZiPS projected the Oakland Athletics to win 33 games; they actually won 36, which would have been the 70th-percentile outcome in the A’s preseason distribution of win totals. Let’s wash-rinse-repeat for the other 29 teams as well! For the Cardinals and Tigers, the percentile is based on a 58-game season.
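
ZiPS’ actual distributions aren’t reproduced here, so as a rough illustration of the percentile idea, here’s a sketch that uses a made-up simulated distribution of A’s win totals; the normal(33, 4.5) parameters are placeholders, not the real ZiPS inputs.

```python
import numpy as np

# Sketch: where does an actual win total fall in a projected distribution?
# `sim_wins` is a stand-in for a preseason distribution of A's win totals;
# the normal(33, 4.5) parameters are illustrative assumptions, not ZiPS values.
rng = np.random.default_rng(seed=2020)
sim_wins = np.clip(np.round(rng.normal(loc=33, scale=4.5, size=100_000)), 0, 60)

actual = 36
percentile = (sim_wins < actual).mean() * 100  # share of simulated seasons below the actual total
print(f"Roughly the {percentile:.0f}th percentile outcome")
```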

ZiPS 2020 Projected Wins vs. Actual Percentile

| Team | ZiPS | Actual | Actual Percentile |
| --- | --- | --- | --- |
| Toronto Blue Jays | 27 | 32 | 87% |
| Miami Marlins | 25 | 31 | 86% |
| Baltimore Orioles | 20 | 25 | 85% |
| Seattle Mariners | 22 | 27 | 85% |
| Tampa Bay Rays | 35 | 40 | 82% |
| Los Angeles Dodgers | 38 | 43 | 81% |
| San Diego Padres | 32 | 37 | 80% |
| Chicago White Sox | 31 | 35 | 77% |
| San Francisco Giants | 26 | 29 | 76% |
| Oakland A’s | 33 | 36 | 70% |
| Chicago Cubs | 32 | 34 | 65% |
| Atlanta Braves | 33 | 35 | 63% |
| Cleveland Indians | 34 | 35 | 57% |
| Minnesota Twins | 35 | 36 | 56% |
| Detroit Tigers | 23 | 23 | 55% |
| Kansas City Royals | 26 | 26 | 50% |
| Colorado Rockies | 26 | 26 | 50% |
| Philadelphia Phillies | 28 | 28 | 50% |
| Cincinnati Reds | 31 | 31 | 50% |
| St. Louis Cardinals | 31 | 30 | 46% |
| Milwaukee Brewers | 31 | 29 | 37% |
| New York Yankees | 37 | 33 | 25% |
| Los Angeles Angels | 30 | 26 | 21% |
| New York Mets | 31 | 26 | 15% |
| Arizona Diamondbacks | 30 | 25 | 15% |
| Texas Rangers | 28 | 22 | 14% |
| Houston Astros | 35 | 29 | 12% |
| Boston Red Sox | 30 | 24 | 10% |
| Pittsburgh Pirates | 26 | 19 | 8% |
| Washington Nationals | 34 | 26 | 5% |

In terms of beating expectations, the Blue Jays came out on top. While they didn’t have the largest positive separation from ZiPS (the Marlins did, at six wins), ZiPS saw the Marlins as the more volatile team thanks to their large surplus of young, uncertain pitching, so Toronto’s five-win beat ranked higher on a percentile basis. And while ZiPS was more positive than most when it came to projecting the playoff probability of basement-dwellers like the Orioles and Mariners — 8% and 15%, respectively — both teams still outperformed what ZiPS expected.

On the negative side, ZiPS did not foresee the degree to which the Pirates fell apart, and though the loss of Stephen Strasburg hurt the Nats by a win or two, the non-Max Scherzer portion of the rotation and a number of positions (1B, 3B, RF, CF) were far worse than the computer projected.

ZiPS 2020 Projection Buckets

| Percentile | Teams |
| --- | --- |
| 80th+ | 7 |
| 60th-80th | 5 |
| 40th-60th | 8 |
| 20th-40th | 3 |
| Below 20th | 7 |

Ideally, you would want six teams in each bucket. I normally track Brier scores, but with a short season leaving me little to compare to, they’re not terribly useful here.
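
As a quick illustration of that calibration check, here’s a sketch that tallies the buckets straight from the percentile column above, using the same 20-point cutoffs as the table.

```python
from collections import Counter

# Sketch: tally teams per 20-percentile bucket using the "Actual Percentile"
# column from the table above. With 30 teams, perfectly calibrated projections
# would put about six teams in each bucket on average.
percentiles = [87, 86, 85, 85, 82, 81, 80, 77, 76, 70, 65, 63, 57, 56, 55,
               50, 50, 50, 50, 46, 37, 25, 21, 15, 15, 14, 12, 10, 8, 5]

buckets = Counter(min(p // 20, 4) for p in percentiles)  # 0 = below 20th, ..., 4 = 80th+
labels = ["Below 20th", "20th-40th", "40th-60th", "60th-80th", "80th+"]
for i, label in enumerate(labels):
    print(f"{label}: {buckets[i]}")
```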

Coming up in Part II, we’ll look at the sources for team projection errors (player quality vs. roster composition) and which parts of which teams were the most responsible for the misses.

Dan Szymborski is a senior writer for FanGraphs and the developer of the ZiPS projection system. He was a writer for ESPN.com from 2010-2018, a regular guest on a number of radio shows and podcasts, and a voting BBWAA member. He also maintains a terrible Twitter account at @DSzymborski.
