An Early Look at the Price of a Win This Off-Season

Over the last few years, we have analyzed nearly every notable contract signed in Major League Baseball, and one of the tools we have used regularly is a pricing model that we often refer to as $/WAR. Basically, this calculation takes the expected production from a player over the life of the contract he just signed, along with the total cost of the deal, and divides the price by the production. The result is an estimate of the price paid for the expected production, and it gives us an idea of what teams are paying for projected wins in baseball’s closest thing to a free market.
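
For anyone who wants the arithmetic spelled out, here’s a minimal sketch in Python; the function name is mine, and the Hunter Pence figures come from the table below:

```python
def dollars_per_war(total_dollars, projected_war):
    """Price paid per projected win: total contract dollars divided by expected WAR."""
    return total_dollars / projected_war

# Hunter Pence: five years, $90 million, 11.5 projected WAR over the deal
print(f"${dollars_per_war(90_000_000, 11.5):,.0f} per win")  # ~$7,826,087
```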

To be clear, FanGraphs didn’t invent this calculation, and this isn’t an idea specific to us. Doug Pappas was doing similar calculations a decade ago using a method he called Marginal Payroll and Marginal Wins. Nate Silver also wrote about the marginal value of a win during his time at Baseball Prospectus, and Tom Tango has been calculating $/WAR for contracts for years on his blog. Over the last few years, plenty of others have written about the price of a win in MLB, and there are multiple methods to perform this kind of calculation.

Even here at FanGraphs, we’ve published differing methods for calculating the price of a win. Matt Swartz wrote a pair of articles on the site last year explaining his model, and his estimates come out a bit higher than what I’ve calculated. More recently, Lewie Pollis wrote a piece at Beyond the Boxscore suggesting that his methodology places the cost of a win at around $7 million, even higher than Matt’s estimates.

As with any model, there are different assumptions one has to make along the way that will lead to different results. Matt’s and Lewie’s models calculate the price of a win retrospectively, using the actual performance of the players after they signed their contracts, while I’ve always calculated the cost of a win based on a projection of what the player was expected to do when he signed the contract. Using actual past performance has some advantages, and that kind of model probably comes closer to answering the question of what teams ended up paying for wins in a given season, but that isn’t necessarily the question I’m most interested in answering.

After all, teams do not know what players are going to do in the future when they sign them. Every contract is based on a projection of future performance, and those projections include uncertainty around the expectation. Uncertainty has a cost of its own, and I think it’s more helpful, when discussing the market price of a win, to calculate the price a team is paying for an uncertain expectation of future value than to calculate what they actually paid in retrospect once we know what the player did. Teams don’t have that benefit, and they have to make decisions based on forecasts.

And we can only talk about the players on the free agent market in terms of forecasted production. So, perhaps it would be more correct to call this model $/projected WAR, while Matt and Lewie have calculated $/actual WAR. The fact that free agents have consistently underachieved their projections is interesting, and it suggests that maybe the forecasting systems we’re using systematically overrate free agents. Or, perhaps, as Matt has suggested, the team that lets a free agent leave knows something the other 29 teams do not, and that information correctly lowers its own forecast, while the other teams, lacking that information, make no such adjustment.

Or this could all just be another example of the Winner’s Curse. I think the difference between the projected value and the actual value produced by team-switching free agents is a topic worth exploring further. But I’ll stop short of agreeing that we should be modeling $/WAR based on actual performance after the fact, since at that point we’ve moved away from using the information available at the time of the decision and toward information that couldn’t have been known when the contract had to be signed. For the purposes of establishing an opportunity cost by looking at the current market rate for wins, I think forecasted WAR is more helpful than waiting until the players have taken the field and given us the data to calculate their actual $/WAR in retrospect. After all, teams are buying forecasts, not guarantees.

So, let’s go ahead and look at the data from the first dozen or so contracts that are helping to shape the current market price of a win. Because usage for players signed to reserve or bullpen roles is much more difficult to forecast than playing time for regulars, we’re going to exclude bench players and relievers for now; we’ll look at the price of those types of players later, since the market for bench and bullpen players is very different from the market for starters and regular position players.

Also, as a note, we’re including players who signed deals before free agency officially began, so Tim Lincecum and Hunter Pence are in the calculation even though they never technically became free agents. The price the Giants paid to keep them off the market still helps set the going rate for other players, so their contracts count among our first dozen signings. To the table.

| Player | Years | Dollars | Total Projected WAR | 2014 Projected WAR | $/WAR |
| --- | --- | --- | --- | --- | --- |
| Hunter Pence | 5 | $90,000,000 | 11.5 | 3.3 | $7,826,087 |
| Brian McCann | 5 | $85,000,000 | 12.9 | 3.6 | $6,589,147 |
| Jhonny Peralta | 4 | $52,000,000 | 8.2 | 2.8 | $6,341,463 |
| Tim Lincecum | 2 | $35,000,000 | 3.5 | 2.0 | $10,000,000 |
| Jason Vargas | 4 | $32,000,000 | 5.0 | 2.0 | $6,400,000 |
| Carlos Ruiz | 3 | $26,000,000 | 7.5 | 3.0 | $3,466,667 |
| Tim Hudson | 2 | $23,000,000 | 2.7 | 1.6 | $8,518,519 |
| Marlon Byrd | 2 | $16,000,000 | 0.9 | 0.7 | $17,777,778 |
| David Murphy | 2 | $12,000,000 | 4.3 | 2.4 | $2,790,698 |
| Dan Haren | 1 | $10,000,000 | 3.0 | 3.0 | $3,333,333 |
| Josh Johnson | 1 | $8,000,000 | 2.6 | 2.6 | $3,076,923 |
| Chris Young | 1 | $7,250,000 | 1.4 | 1.4 | $5,178,571 |
| Total | 32 | $396,250,000 | 63.5 | 28.4 | $6,240,157 |

And a chart, for those who are more visual.

[Chart: 2014 $/WAR]

The WAR projections are based on Steamer’s forecasts and the expected playing time from our depth charts, and then for each year beyond 2014, each player was simply forecast for a half WAR decrease per season from their baseline. Using a single aging curve for all players of all ages is a very rough assumption that is certainly wrong, but even changing the aging assumptions slightly for each player won’t really change the conclusion. For ease of explanation, we’ll just go with a half WAR decrease per season for all free agents. You are certainly free to replace that aging assumption with your own and re-do the numbers to see what the results would be with more complicated aging assumptions.
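
Here’s a minimal Python sketch of that calculation, using a few rows from the table above; it reproduces those players’ totals, and the decline parameter is there so you can plug in your own aging assumption:

```python
def total_projected_war(war_2014, years, annual_decline=0.5):
    """Sum projected WAR over the contract, declining by `annual_decline` each season after 2014."""
    return sum(war_2014 - annual_decline * i for i in range(years))

contracts = {  # player: (years, total dollars, 2014 projected WAR), from the table above
    "Hunter Pence": (5, 90_000_000, 3.3),
    "Jason Vargas": (4, 32_000_000, 2.0),
    "Marlon Byrd": (2, 16_000_000, 0.7),
}

for player, (years, dollars, war_2014) in contracts.items():
    war = total_projected_war(war_2014, years)
    print(f"{player}: {war:.1f} projected WAR, ${dollars / war:,.0f} per win")

# Try annual_decline=0.7 (or anything else) to see how much a steeper aging
# curve moves the results.
```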

As you can see, Steamer kind of hates Marlon Byrd, so he stands out as something of an outlier on both the table and the graph. I’m sure the Phillies won’t agree with a forecast that has Byrd as a below average player in 2014 and essentially useless in 2015, and Byrd’s inconsistent history probably gives you enough wiggle room to project almost anything you want for him. I think we can say that the Phillies are optimistic and Steamer isn’t, and that’s probably all we can say.

The other 11 players fit fairly evenly around the overall average, though of course some have come cheaper than others. David Murphy, Dan Haren, and Josh Johnson are all hanging out around $3 million per forecasted WAR, and all have been lauded as good buy-low opportunities for their signing teams. Meanwhile, Lincecum is at $10 million per win, as the Giants paid a much larger premium to keep their bounce-back candidate, and they look to have paid on the higher side to re-sign Pence and add Tim Hudson as well.

But overall, the early market price of a (projected) WAR is just a hair over $6 million, coming in at $6.2 million based on these 12 contracts. This number is a good deal higher than the $5 million per win we were using as a rough guide last winter, which isn’t surprising given that MLB teams are currently enjoying the fruits of the new national television contracts. However, even without that new money flowing in, $/WAR was going to be higher this winter simply because of the change we made to our replacement level baseline back in March.

When we unified our replacement level with Baseball-Reference so that both sites were handing out the same number of WAR per season, we ended up cutting down on the number of WAR we were allocating by about 14% per season. The $5 million per win estimate from last off-season was based on the old replacement level, and with fewer WAR being handed out under our calculations now, the price of each win is naturally going to be higher. If we had adjusted the replacement level baseline prior to last off-season, we would have likely calculated the cost of a win at around $5.5 million instead of the $5 million that we often referred to.

So, really, $6.2 million — so far, it has to be stressed, as this could easily change as more contracts get signed — isn’t quite as big of a jump as it might appear. In fact, it’s pretty close to the ~10% or so annual inflation we’ve seen in free agency over the last decade. While it’s possible that we’re still going to see some contracts that just blow the doors off of expectations, so far, this winter has been pretty normal for an MLB winter in terms of inflation.
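
As a rough back-of-the-envelope check, growing last winter’s restated ~$5.5 million per win by that ~10% historical inflation rate lands in the same neighborhood as this winter’s early figure:

```python
# ~$5.5M per win last winter (on the current replacement level), inflated ~10%
print(f"${5.5e6 * 1.10:,.0f}")  # $6,050,000, versus the $6.24M observed so far
```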

Again, these are all rough estimates. In general, I prefer not to even use the decimal point, and I’ll likely keep referring to the market price of a win as around $6 million, because there are so many rough assumptions in this model that these are definitely not precise calculations. Different projections will give different results. Changing the aging curves would tweak things. Adding in relief pitchers would drive the price up substantially, since the forecasts think most relief pitchers are hardly worth anything.

Just like there’s no single way to calculate WAR, there’s never going to be one way to estimate the market price of a win. This is the way I do it, and I like that it lets us compare free agent signings ahead of time on a common scale. If we accept the idea that teams are not buying “power hitters” or “innings eaters”, but are instead just buying wins in various packages, then it’s useful to know what other teams are paying for the wins they’re projecting to add this winter.

And by and large, it doesn’t look like MLB has gone off the rails just yet. Maybe when Cano signs, he’ll blow this whole calculation up, and at the end of the winter, we’ll have seen the large inflation that was expected based on the new television money. So far, though, prices have gone up about the same way they go up every winter, and a good amount of the early free agent contracts look downright reasonable.

Dave is the Managing Editor of FanGraphs.

101 Comments
Dave from Pittsburgh
10 years ago

The average goes down to $5.6M/win if you ignore the Marlon Byrd signing.

TheBirds
10 years ago

6.24 comes from $396.25M/63.5.

Subtracting Byrd’s contract, we get…

(396.25 - 16) / (63.5 - 0.9) = 6.07

Please double check yourself so you don’t mislead people.

will sanchez
10 years ago
Reply to  TheBirds

Sorry, but you got the answer wrong… since we’re taking Byrd’s contract off, both the dollars and the WAR come off. Please try again.

AK7007
10 years ago

And it would go up if you were to remove Murphy, Haren, and Johnson. What are you trying to prove? It’s an average; some will be above it and some below. Removing the high and low ones is obviously going to move the average.

JimNYC
10 years ago
Reply to  AK7007

I think that the reason he’s saying it should be removed is that any contract for a player projected to create negligible WAR isn’t really helpful for the analysis — a 0 WAR player still has value to a team, but just by using $/WAR you’d get an infinite amount and skew the average.

brendan
10 years ago
Reply to  JimNYC

Does a 0 WAR player really have value? I think that, by definition, 0 WAR players are freely available as minor league free agents. Such players should not be valued by teams.

Peter Litman
10 years ago
Reply to  AK7007

It isn’t unusual in this sort of analysis to throw out the high and the low and average the rest of the data points, lest the extremes have too much impact on the average. Looking at the median (the value that splits the data points into half above and half below) can also help in that respect.

Paul Wilson
10 years ago
Reply to  Peter Litman

difficult at n=12