Trying to predict how a reliever will perform from one season to the next can be a pretty frustrating exercise. Some amount of uncertainty surrounds all players, but being a pitcher makes things a little more difficult, and being a pitcher who is often asked to throw with max effort on little to no rest complicates things further still. And even after we move past those factors, we’re faced with a smaller sample of outcomes for bullpen arms. A 60-inning season is a complete season for a reliever: considerably fewer innings than a starter throws, and many fewer batters faced than the number of plate appearances most starting position players make. Trying to predict reliever performance in half a season is even more difficult; attempting to put a value on relievers in a potentially condensed, shorter season becomes quite challenging.
Consider that last season, there were 158 qualified relievers with at least 48 innings pitched. Ken Giles produced 1.9 WAR, ranking 10th in baseball among his bullpen brethren. Brett Martin ranked 60th among relievers with a 0.8 WAR and Matt Albers ranked 130th as a replacement-level reliever. Now, let’s cut those seasons in half. Giles still ranks 10th with just under a win, but he’s now closer to Matt Albers in half a season than he was to Brett Martin in a full season. It is considerably harder to tell, in terms of results, the difference between a good and bad reliever under those constraints. This is further complicated by the fact that the smaller the sample size, the less likely that the results will match the actual performance.
I separated pitchers into three groups from last season: pitchers with at least 100 innings, qualified relievers, and pitchers with at least 20, but fewer than 40, innings on the season. Then I ran correlations between WPA, which shows how the actual results on the field mattered to the team, and ERA, FIP, and WAR, which measure performance in different ways.
| Group | ERA | FIP | WAR |
| --- | --- | --- | --- |
| Starters (min. 100 IP) | 0.61 | 0.56 | 0.77 |
| Relievers (min. 48 IP) | 0.40 | 0.38 | 0.54 |
| Pitchers (20-40 IP) | 0.24 | 0.23 | 0.47 |
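For readers who want to reproduce this kind of comparison, the core calculation is just a Pearson correlation between WPA and each metric within an innings bucket. This is a minimal sketch; the grouping shown in comments is illustrative structure, not the study’s actual data.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# For each group (starters, qualified relievers, 20-40 IP pitchers), you
# would collect per-pitcher values and run, e.g.:
# r_war = pearson([p.war for p in group], [p.wpa for p in group])
```

Note that ERA and FIP run the other way from WPA (lower is better), so their raw correlations come out negative; the table reports magnitudes.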
With starting pitchers, we see a good relationship between all three metrics (i.e. the better or worse a pitcher pitched, the better or worse the result was for the team). When we drop the innings requirement down and look at relievers, the relationship gets weaker (i.e. the results vary more, with the pitcher’s performance showing a little more randomness). When we drop the innings requirement down further, we see the relationship grow even weaker. That last set is most representative of the number of innings we might see from relievers in a shortened season.
The problem grows even worse when we consider the possibility of a condensed season. If a standard one-win reliever pitches 60 innings and puts up a solid ERA and FIP over the course of a full season, that reliever might be projected to pitch 30 innings and put up 0.5 WAR in half a season. A condensed schedule actually reduces the innings total further. In a 186-day season, a reliever will, on average, pitch one inning every three days and get into 37% of games. In a 93-day season with 81 games, we’d see 30 appearances, still amounting to 37% of games. However, if there is a 93-day season with teams playing 100 games, we still see 30 appearances, but those appearances only amount to 30% of games. A condensed schedule might knock out almost 20% of a reliever’s appearances relative to a normal, complete schedule.
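The schedule arithmetic above can be sketched in a few lines, using the appearance counts from the text (roughly one relief outing every three days):

```python
def pct_of_games(appearances, games):
    """Share of a team's games a reliever appears in, as a rounded percent."""
    return round(100 * appearances / games)

full = pct_of_games(60, 162)       # normal 186-day, 162-game season
half = pct_of_games(30, 81)        # 93-day, 81-game season
condensed = pct_of_games(30, 100)  # same 93 days squeezed into 100 games

print(full, half, condensed)       # 37 37 30

# Relative drop in share of games from a normal to a condensed schedule:
drop = 1 - condensed / full
print(f"{drop:.0%}")               # 19%, i.e. "almost 20%"
```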
Over the winter, there were seven free agent contracts totaling more than $10 million for relievers, with another eight deals paying at least $5 million. Some of those teams might be experiencing buyer’s remorse on those deals given the increased volatility.
The counter to the reduction in percentage of games is the potential for the overall quality of pitching to decline. If more pitchers are forced into service by a condensed schedule, those extra pitchers aren’t likely to be as good, causing the overall level of pitching to decline. In an easy example, let’s say we have a good relief pitcher who gives up three runs per nine innings and pitches 60 innings. The average pitcher gives up 4.5 runs per nine innings and over 60 innings, the good relief pitcher saves about 10 runs more than average. If the good relief pitcher pitches only 50 innings and gives up three runs per nine innings, he still saves 10 runs over the average pitcher if the average pitcher gives up 4.8 runs per nine innings instead of 4.5 like before. It’s possible that a good reliever will be just as valuable even pitching in a lower percentage of games if the overall quality of pitching decreases.
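The runs-saved example above reduces to a one-line formula: runs prevented relative to an average pitcher over a given number of innings. A quick sketch:

```python
def runs_saved(ip, ra9, league_ra9):
    """Runs saved vs. average: (league RA/9 - pitcher RA/9) * IP / 9."""
    return (league_ra9 - ra9) * ip / 9

# Full workload against a normal league: 60 IP at 3.00 RA/9 vs. a 4.50 average.
print(runs_saved(60, 3.0, 4.5))   # 10.0 runs saved

# Reduced workload, but a thinner pitching pool raises the league average.
print(runs_saved(50, 3.0, 4.8))   # 10.0 runs saved
```

The two scenarios come out identical, which is the point: fewer innings can be offset by a weaker average against which the good reliever is measured.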
In addition, it isn’t just relievers who become more volatile with a shorter season. As Dan Szymborski noted, a shorter season compresses everyone’s playoff odds. That means a team’s record in one-run games could be more important than it is over the course of a long season, when it is more likely to even out. Plus, with more teams holding a realistic shot at the playoffs, there are more likely to be important late-season innings where relievers become especially valuable.
Having good relievers might actually be more important in a shortened season, though there is still the issue of figuring out which relievers are going to be good. Ultimately, if a team decided that a relief pitcher was a good investment in the offseason, there isn’t much, baseball-wise, that should change their thinking now. Teams give up good prospects every trade deadline for relievers despite those relievers pitching for just two months of the regular season. The current situation isn’t significantly different from the trade deadline, particularly if this season leads to an expanded playoff structure.
Craig Edwards can be found on Twitter @craigjedwards.