This is Matt Swartz's third piece as part of his July residency at FanGraphs. A former contributor to FanGraphs and the Hardball Times — and a current contributor to MLB Trade Rumors — Swartz also works as a consultant to a Major League team. You can find him on Twitter here. Read the work of all our residents here.
In this series of articles, I analyze the average cost per WAR on the free-agent market and look back at previously discovered market inefficiencies to see how they have changed over time. In doing this analysis, however, it is important to ensure that any assumptions I make have theoretical and empirical backing — including perhaps the largest such assumption, namely the linearity of Dollars per WAR on the free-agent market. Does a four-win player earn twice as much as a two-win one? Some analysts have argued that, due to scarcity, a 4-WAR player could earn more than twice as much, although I have shown in the past why I believe this is unlikely. Today, I will confirm that linearity is still a fair assumption to make.
First, it's useful to discuss the economic implications in theory. The question of linearity comes down to how easy it is to replace a four-win player on the free-agent market, and whether teams would be better off going after two 2-WAR players instead. If they would, teams would bid up the price of 2-WAR players and drive down the price of 4-WAR players as they got smarter over time, until both methods of acquiring 4 WAR cost the same. Perhaps, however, teams cannot upgrade at enough positions to enable this kind of arbitrage. As analysis I've performed in the past has revealed, teams in practice have many options: nearly every team has a lineup spot, a rotation spot, and a couple of bullpen spots open in any given offseason. Many have more, and teams can also make trades to open up room for upgrades if desired.
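To make the arbitrage logic concrete, here's a minimal sketch with invented prices (these are illustrative figures, not actual market rates):

```python
# Invented, illustrative prices -- not actual market figures.
price_2_war = 14.0  # $M/year for a 2-WAR free agent ($7M per WAR)
price_4_war = 36.0  # $M/year for a 4-WAR free agent ($9M per WAR)

# A team needing 4 WAR can sign one star...
cost_one_star = price_4_war
# ...or fill two open roster spots with mid-tier players.
cost_two_mid = 2 * price_2_war

# In this hypothetical market, the two-player route saves $8M for the
# same 4 WAR. Teams exploiting that gap would bid up 2-WAR salaries and
# bid down 4-WAR salaries until the two routes cost about the same.
print(cost_one_star - cost_two_mid)  # 8.0
```

That equalizing pressure is exactly what a linear price per WAR reflects — but it only operates if teams actually have the roster spots to fill.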
None of this means that no team would ever choose to pursue more 2-WAR players in lieu of going after big names. Individual teams are bound to have different assessments of replacement level, both for their own rosters and for the market in general. A team that believed its internal replacement level was high would be more inclined to go after big-name players and fill the remaining spots from within. Alternatively, a team that believed replacement level was much lower than the market suggests would spread its spending across multiple players to avoid having to fill a vacancy with such a poor one.
As mentioned, my previous findings suggested that Dollars per WAR was linear. To see if this is still true, I split the market into three periods — 2006-09, 2010-13, and 2014-16 — and looked at the cost per WAR, using the framework discussed in my previous article, within different salary ranges (net of the league minimum). This does lead to some sample-size issues in places.
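Before the results, here's a minimal sketch of the bucketing calculation, with an invented toy data set standing in for the actual free-agent contract data (the real calculation also accounts for incentives, multi-year deals, and the other adjustments described in the previous article, and is computed separately for each period):

```python
import pandas as pd

# Toy stand-in for the free-agent data: one row per player-season, with
# net AAV (AAV minus the league minimum, in $M) and the WAR produced.
# All values here are invented for illustration.
fa = pd.DataFrame({
    "net_aav": [1.5, 4.0, 8.0, 12.0, 18.0, 25.0],
    "war":     [0.1, 0.5, 1.2, 1.8, 1.6, 2.4],
})

# Salary buckets matching the table below.
bins = [0, 2, 5, 10, 15, 20, float("inf")]
labels = ["$0-2M", "$2-5M", "$5-10M", "$10-15M", "$15-20M", "$20M+"]
fa["bucket"] = pd.cut(fa["net_aav"], bins=bins, labels=labels)

# $/WAR in each bucket = total net dollars spent / total WAR produced.
grouped = fa.groupby("bucket", observed=True)
print(grouped["net_aav"].sum() / grouped["war"].sum())
```

Here is the relevant table: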
Dollars per WAR, by Salary Range (figures in millions of dollars per WAR)

| Net AAV Range | 2006-09 | 2010-13 | 2014-16 |
|---------------|---------|---------|---------|
| $0-2 million | $3.3 | $2.7 | $26.5 |
| $2-5 million | $5.3 | $5.7 | $13.1 |
| $5-10 million | $5.9 | $5.7 | $7.5 |
| $10-15 million | $5.4 | $7.6 | $7.2 |
| $15-20 million | $5.6 | $7.6 | $11.6 |
| $20+ million | $4.9 | $7.4 | $10.3 |
| Overall | $5.4 | $6.5 | $9.0 |
And here’s that data rendered into visual form:
![](http://www.fangraphs.com/blogs/wp-content/uploads/2017/07/Swartz-Chart-2-1.png)
As you can see, the dollar amounts per win stay generally close to the overall averages for each time period. Early numbers did show some non-linearity at the very low end of the market (under $2 million net AAV), but that was probably related to measurement error. Such deals are often one-year contracts with sizable incentives that are poorly reported. They also overwhelmingly go to players just above 0 WAR, which makes them highly vulnerable to measurement error in WAR itself if replacement level isn't pegged correctly. A slightly higher estimate of replacement level could lead to a much higher $/WAR figure in this range.
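A quick back-of-the-envelope calculation shows how sensitive this bucket is (the numbers are invented for illustration):

```python
# Invented example: a fringe player on a $1M (net) one-year deal.
net_salary = 1.0  # $M above the league minimum

# If WAR credits him with 0.2 wins above replacement:
print(net_salary / 0.2)  # 5.0 -> $5.0M per WAR

# If replacement level is actually 0.1 wins higher, he produced only
# 0.1 WAR, and the same contract implies double the price:
print(net_salary / 0.1)  # 10.0 -> $10.0M per WAR
```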
When collecting data on more recent deals, I was probably less likely to miss incentives, and there is actually a large over-correction: $/WAR is very high in the lowest salary bucket for 2014-16. Overall, I think it is best to focus on deals more than $2 million above the league minimum. You will see that this issue led me to restrict much of the subsequent analysis to deals in excess of that amount.
But once we get past that first row, we can see strong evidence of linearity in all ranges. The most recent years (2014-16) do show a somewhat higher cost per WAR in the high-salary ranges, but since they do in the low-salary ranges as well, I suspect this is just noise, and I am comfortable using a linear framework for Dollars per WAR in subsequent articles. The jump in $/WAR at high salary levels in the 2014-16 column is probably also a function of small sample sizes: there are just 80 and 74 player-seasons, respectively, in the top two salary groupings for 2014-16.
Any non-linearity in cost per WAR would severely complicate the analysis of the free-agent market. I would certainly welcome this complexity if it were warranted, but I think the evidence and theory both clearly point to linearity making far more sense.
In my next article, I will explain how draft-pick cost is calculated in the Dollars per WAR framework, and the importance of the discount rate in doing so. Once that piece is finished, the framework will be defined clearly enough that we can begin looking at the evolution of market inefficiencies.