Over the course of the articles that have comprised my FanGraphs Residency, I have updated my analysis of the free-agent market, which I last researched over three years ago. The vast majority of my new findings suggest that teams have gotten smarter about spending in line with true player talent, all while spending roughly the same share of league revenue as before.
Perhaps my biggest finding is that the OPP Premium has declined. Teams used to receive significantly less WAR for signing other teams’ players than for re-signing their own, and this seemed largely related to the private information teams had about their own players. As teams have become more aware of this phenomenon, the evidence suggests they have grown more careful, driving up the price of their own players while becoming more reluctant to sign players from other teams.
This is especially true for pitchers, who used to carry the largest OPP Premium. Hitters appear to have actually increased their OPP Premium, but that is probably more a reflection of a handful of expensive players who did not pan out than of teams collectively getting sloppier about signing hitters.
In my previous articles in this series, I have looked at trends in free-agent spending over time and, specifically, have reviewed more recent data to see whether market inefficiencies I discovered in earlier work have disappeared. In this piece, I will review the findings on pitchers from my 2013 Hardball Times Annual article. There, I discovered that teams tended to overvalue old-school statistics that did not translate to actual value: wins for starting pitchers and saves for relief pitchers. I also noticed that free-agent pitchers with strong peripheral statistics (usually those with good FIPs) were often undercompensated, suggesting that not all teams realized the importance of peripherals in projecting future performance. Much of this seems to have been corrected by the market over the years, although a handful of players have created some noise.
In this series of articles, I have analyzed the changes in the free-agent market since I last did public analysis on the topic over three years ago. I have found that teams are no longer overpaying by as much for “Other People’s Players” or for relievers. In my 2013 Hardball Times Annual article, I found a number of other player types for which teams over- or underpaid relative to value, and those are the players I will review in my next two articles. In today’s article, I will focus on hitters.
Teams were already pretty smart about spending relative to value on hitters when I looked at free-agent spending in that piece. The main discovery I made about position players, however, was that defense and baserunning tended to be undercompensated by the free-agent market. When I began researching that article, I suspected that teams would overpay for power hitters, but I found this was not true once I controlled for position group (which I lump roughly into the defense-first positions of catcher, second base, third base, and shortstop, and the offense-first positions of first base, outfield, and designated hitter).
This is Matt Swartz’ sixth piece as part of his July residency at FanGraphs. A former contributor to FanGraphs and the Hardball Times — and current contributor to MLB Trade Rumors — Swartz also works as a consultant to a Major League team. You can find him on Twitter here. Read the work of all our residents here.
Unlike findings about statistical persistency or the physics of batted balls, any discovery about Major League teams’ propensities to spend is based on something less than an inviolable law. As I showed in my previous article about the decline in OPP Premium, teams wising up to an inefficient spending pattern can adjust their behavior in a way that collectively eliminates it.
A related finding from my earlier work on cost per WAR is that players get paid very different amounts per WAR by position. I remained agnostic about whether this was evidence of irrational spending patterns or simply a feature of how competitive the market is for each position.
Because teams have become smarter about their free-agent contracts, I decided to revisit this pattern to see if anything had changed. To do so, I looked at positional cost-per-WAR figures for 2006-11 (pre-discovery) and 2012-16 (post-discovery), a split that roughly lines up with my first public work on this topic.
Although there’s some evidence of teams spending on free agents based on outdated valuation methods, there’s also notable evidence that competitiveness for different positions in free agency plays a role in spending on those positions. When evaluating the numbers, I separated the “defense-first” positions (catcher, second base, third base, and shortstop) from the “offense-first” positions (first base, outfield, and designated hitter). The key feature of the offense-first positions is that many players can easily move between them and often do, so a team with a player under contract at the same position as a potential free agent could still safely bid on that free agent, knowing that one of the two could shift to another position. The high cost per WAR of center fielders contradicts the idea that teams were undervaluing players at important defensive positions, because center field is certainly a crucial spot on the diamond; but the inferior center fielder can easily move to left or right field if a team wants two of them under contract. The common thread among the high cost-per-WAR positions is positional flexibility rather than defensive importance.
Pitchers can also be moved around as needed. A great ace can slide to the No. 2 slot in a rotation if another ace becomes available as a free agent, and a solid closer can become a setup man. The price per WAR for pitchers is accordingly higher than for the defense-first positions, for which the market is more often limited.
This is Matt Swartz’ fifth piece as part of his July residency at FanGraphs.
When I tell people about my side career as a baseball analyst, they frequently ask what research I’m most proud of. The answer? The work I did establishing that teams receive less WAR per dollar when signing free agents away from other teams than when re-signing their own players.
My clearest and most thorough analysis of this topic came in the 2012 Hardball Times Annual. The results were initially met with strong skepticism when I published a post on the topic at Baseball Prospectus back in 2010. It took a couple years of evidence before I was able to persuade the sabermetric community that the effect was real and, more importantly, that it arose because teams re-signing their own players had better information about them.
My 2012 Hardball Times Annual article tested and confirmed that this held true for a variety of players. Traded MLB players and traded minor-league prospects both tended to underperform their projections when compared to untraded players.
What’s the significance of this discovery? Generally speaking, it means that an “average” player who reaches free agency is overvalued by his projections relative to another “average” player who doesn’t. So much of sabermetric analysis involves looking at free agents; suddenly, I had research indicating that such analysis was based on a biased sample. The results immediately colored every potential free-agent signing. With every free agent I encountered afterward, I began asking myself: “Is there some reason this player’s original team let him go?”
This is Matt Swartz’ fourth piece as part of his July residency at FanGraphs.
The most distinct feature of my approach to calculating the cost per WAR on the free-agent market is my inclusion of the draft-pick-based costs of signing free agents, in addition to the more obvious monetary costs. This requires more assumptions than a simple focus on dollars spent, but it provides a more robust estimate of what teams give up when they dip into the free-agent market. It also requires a logical economic framework built on opportunity costs, including an estimate of the foregone draft picks a club could have received had it not re-signed its own players.
The gap between my actual estimates of the cost per WAR and the same calculation absent draft-pick compensation is not trivial. While it is normally only around 7%, it reached as high as 20% in 2015. Of course, with the new CBA lowering draft-pick compensation, this difference is likely to shrink, making this part of the analysis somewhat less important. However, it remains essential to consider changes in draft-pick compensation to understand changes in cost per WAR over time. What may appear, in some years, like a collective decision by clubs to spend more aggressively in the free-agent market is frequently just a product of a lower opportunity cost of foregone draft picks, leading teams to pump more dollars into free-agent contracts.
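The netting just described reduces to simple accounting. Here is a minimal sketch of it; the function name and all numbers are hypothetical illustrations, not the article’s actual inputs:

```python
# Sketch of cost per WAR including draft-pick opportunity costs.
# All names and figures here are illustrative assumptions.
def cost_per_war(salary_paid: float,
                 pick_value_lost: float,
                 pick_bonus_saved: float,
                 war_produced: float) -> float:
    """Dollars (in $M) per WAR, net of draft-pick effects.

    salary_paid      -- money paid to the free agent
    pick_value_lost  -- discounted value of WAR from a forfeited pick
    pick_bonus_saved -- signing bonus the team avoids paying that pick
    war_produced     -- WAR the free agent delivers
    """
    net_cost = salary_paid + pick_value_lost - pick_bonus_saved
    return net_cost / war_produced

# A hypothetical signing: $40M salary, a forfeited pick worth $6M in
# discounted WAR value, $2M of draft bonus money saved, 5 WAR produced.
print(cost_per_war(40.0, 6.0, 2.0, 5.0))  # 8.8 ($M per WAR)
```

Ignoring the pick terms here would yield $8.0M per WAR instead of $8.8M, which is the kind of gap the paragraph above describes.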
The biggest challenge in using this framework is determining the appropriate discount rate. This isn’t easy to pin down and can vary from team to team and over time as well. This article won’t settle on a perfect number; it’s almost certain that a better estimate of the discount rate would require a more detailed analysis of trades and other decisions teams make when valuing player performance at different points in the future. It’s also difficult to use this approach to determine whether the discount rate teams use has changed, because the method of estimating it is noisy enough that the estimate varies over time within a very large range. Still, the approach is worth understanding.
In this article, I attempt to present that approach. Before I begin, one note: some of what follows is rather technical. I feel much of it is necessary, though, to establish the entirety of my methodology before moving on, in later posts, to actual illustrative cases.
The simpler part of this analysis is the draft-pick bonus money saved. While it pales in comparison to the WAR lost by missing out on draft picks, a full picture does require netting out how much a team saves by not paying bonuses on those picks. For this series, I’ve used a slightly more sophisticated nonlinear approach to estimating bonuses than in my previous work, basically assuming that bonuses paid to draftees decline exponentially with pick number in the same way that the WAR those picks produce does. I’ve also found better estimates of the specific slot values for draftees by pick, leading to a better overall estimate.
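The shape assumption above can be sketched as follows. The decay rate and scale constants are invented for illustration; the only point being made is that if bonus and WAR share the same exponential decay in pick number, their ratio is constant across picks:

```python
import math

# Illustrative sketch: both pick WAR and pick bonus assumed to decay
# exponentially at the same rate k. Constants are made up, not the
# article's fitted values.
A_WAR, A_BONUS, K = 10.0, 5.0, 0.03

def pick_war(pick: int) -> float:
    """Assumed expected WAR from a given draft pick number."""
    return A_WAR * math.exp(-K * pick)

def pick_bonus(pick: int) -> float:
    """Assumed signing bonus for that pick, same decay rate."""
    return A_BONUS * math.exp(-K * pick)

# The bonus-to-WAR ratio is the same at pick 1 and pick 30:
print(pick_bonus(1) / pick_war(1), pick_bonus(30) / pick_war(30))
```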
Of course, the larger issue is valuing the picks themselves. While the average pick surrendered has been roughly the 30th overall, this has varied significantly and has at times been higher in the past. It will also certainly be lower in the future under the new CBA rules. To estimate the value of picks, I continue to use the Draft Pick WAR Calculator developed by Sky Andrecheck back in 2009. While the precise outputs have possibly drifted over time, they probably haven’t changed much, and Andrecheck’s model is certainly the best publicly available one.
In addition to this estimate of the WAR produced by players according to their draft slot, I’ve also found in my own research that prospects tend to debut roughly three years after being drafted. A player’s WAR therefore tends to accrue to the team that drafted him from three to nine years after the draft, after which he becomes a free agent. That’s roughly equivalent in value to all of the WAR accruing exactly six years after the draft, so that’s what I use in my estimate. I also had to net out the actual salaries that successful draft picks eventually receive through arbitration, which knocks down the net value of the WAR created by about 20%.
I decided to continue using a 10% discount rate (meaning that teams value the ability to obtain WAR about 10% less for each additional year into the future). This is still my best guess at how teams value draft picks. At that rate, WAR arriving six years out is worth only about 56% of what it would be worth if it all came right away. And since teams pay out roughly 20% of market value through arbitration in the later years, they value the pick’s WAR about 20% less than that.
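Putting the last two paragraphs together, the arithmetic under these stated assumptions (10% discount rate, all WAR treated as arriving six years after the draft, about 20% of market value returned via arbitration salaries) looks like this:

```python
# Sketch of the draft-pick valuation arithmetic described above.
# The constants restate the article's assumptions; the dollar figure
# in the example is invented for illustration.
DISCOUNT_RATE = 0.10
YEARS_TO_VALUE = 6   # years 3-9 of accrual collapsed to year 6
ARB_SHARE = 0.20     # share of market value paid back in arbitration

def discounted_pick_value(immediate_war_value: float) -> float:
    """Present value of a pick's WAR, net of arbitration salaries."""
    discount_factor = 1 / (1 + DISCOUNT_RATE) ** YEARS_TO_VALUE  # ~0.56
    return immediate_war_value * discount_factor * (1 - ARB_SHARE)

# Example: a pick whose WAR would be worth $10M if delivered today
# is valued at roughly $4.5M under these assumptions.
print(round(discounted_pick_value(10.0), 2))
```

Note that 1/1.10^6 ≈ 0.56, which is where the 56% figure above comes from; applying the 20% arbitration haircut on top of that leaves about 45% of the immediate value.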
In the three tables below, I’ve split the free-agent market data I have available into three time periods: 2006-09, 2010-13, and 2014-16. I’ve looked at all players who earned salaries at least $2 million in excess of the league minimum and compared the cost per WAR for those free agents with and without draft-pick compensation attached.
This is Matt Swartz’ third piece as part of his July residency at FanGraphs.
In this series of articles, I analyze the average cost per WAR on the free-agent market, and I also look back at previously discovered market inefficiencies to see how they have changed over time. In doing this analysis, however, it is important to ensure that any assumptions I make have theoretical and empirical backing, including perhaps the largest such assumption: the linearity of cost per WAR on the free-agent market. Does a four-win player earn twice as much as a two-win one? Some analysts have argued that, due to scarcity, a 4-WAR player could earn more than twice as much, although I have shown in the past why I believe this is unlikely. Today, I will confirm that linearity is still a fair assumption to make.
First, it’s useful to discuss the economic implications in theory. The question of linearity comes down to how easy it is to replace a four-win player on the free-agent market, and whether teams would be better off going after two 2-WAR players instead. If so, teams would drive up the price of 2-WAR players and drive down the price of 4-WAR players as they got smarter over time, until both methods of acquiring 4 WAR cost the same. Perhaps, however, teams cannot upgrade at enough positions to enable this kind of arbitrage. As analysis I’ve performed in the past reveals, a team has many options in practice: nearly every team has a lineup spot, a rotation spot, and a couple of bullpen spots open in any given offseason. Many have more, and teams also have the option of making trades to open room for upgrades if desired.
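The arbitrage condition above is simple arithmetic: once the market has adjusted, one 4-WAR player and two 2-WAR players cost the same total amount. A quick sketch with an invented, illustrative price:

```python
# Under a linear market, acquiring 4 WAR costs the same whether it
# comes as one 4-WAR player or two 2-WAR players.
DOLLARS_PER_WAR = 8.0  # $M per WAR, hypothetical rate for illustration

one_star = 4 * DOLLARS_PER_WAR         # one 4-WAR player
two_solid = 2 * (2 * DOLLARS_PER_WAR)  # two 2-WAR players
print(one_star, two_solid)  # 32.0 32.0 -- identical under linearity
```

Any persistent gap between those two totals would be the arbitrage opportunity that smart teams compete away.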
None of this is to say that no team would ever choose to go after more 2-WAR players in lieu of big names. Individual teams are bound to have different assessments of replacement level, both for their own roster and for the market in general. A team that believed it had a high internal replacement level would be more inclined to go after big-name players and fill the remaining spots with its own high-replacement-level players. Alternatively, a team that believed replacement level was much lower than the market assumed would spread its spending across multiple players to avoid filling a vacancy with such a poor player.
As mentioned, my previous findings suggested that Dollars per WAR was linear. To see if this is still true, I split the market into three periods — 2006-09, 2010-13, and 2014-16 — and looked at the cost per WAR, using the framework discussed in my previous article, across different salary ranges (net of the league minimum). This does lead to some sample-size issues, but here is the relevant table:
And here’s that data rendered into visual form:
As you can see, the dollar amounts per win stay generally close to the overall averages for each time period. Early numbers did show some non-linearity at the very low-dollar end of the market (under $2 million net AAV), but that was probably related to measurement error. Such deals are often one-year contracts with sizable incentives that are poorly reported. They also overwhelmingly go to players just above 0 WAR, and are therefore highly sensitive to measurement error in WAR itself if replacement level is set incorrectly. A slightly higher estimate of replacement level could lead to a much higher $/WAR estimate in this range.
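To see how fragile the $/WAR estimate is in that range, consider a hypothetical low-end deal (all numbers invented for illustration): a small shift in where replacement level is set swings the implied price dramatically.

```python
# Illustration of replacement-level sensitivity for low-salary deals.
# Hypothetical deal: $1.5M above the league minimum, credited 0.3 WAR.
net_salary = 1.5      # $M above league minimum
measured_war = 0.3

# Shifting replacement level down by 0.2 wins credits the player with
# 0.5 WAR instead of 0.3, cutting the implied price per WAR sharply.
for shift in (0.0, 0.2):
    war = measured_war + shift
    print(f"WAR={war:.1f}: ${net_salary / war:.1f}M per WAR")
```

A 0.2-win change in replacement level moves the implied cost from $5.0M to $3.0M per WAR here, while the same shift would barely move the estimate for a 4-WAR player, which is why the higher salary buckets are much more trustworthy.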
I was probably less likely to miss incentives when collecting data on more recent deals, and there is actually a large over-correction: $/WAR is very high in the lowest salary bucket for 2014-16. Overall, I think it is best to focus on deals more than $2 million above the league minimum, and you will see that this issue led me to restrict much of the subsequent analysis to deals in excess of that amount.
But once we get past that first row, we can see strong evidence of linearity in all ranges. The most recent years (2014-16) do show a somewhat higher cost per WAR in the high-salary ranges, but since they also do in the low-salary ranges, I suspect this is just noise, and I am comfortable applying a linear Dollars-per-WAR framework in subsequent articles. The jump in $/WAR at the highest salary levels (in the last column) is probably also a function of small sample sizes; there are just 80 and 74 player-seasons, respectively, in the top two salary groupings for 2014-16.
Any non-linearity in cost per WAR would severely complicate the analysis of the free-agent market. I would certainly welcome this complexity if it were warranted, but I think the evidence and theory both clearly point to linearity making far more sense.
In my next article, I will explain the calculation of draft-pick cost in the Dollars per WAR framework, and the importance of the discount rate in doing so. Once that piece is finished, the framework will be defined clearly enough that we can begin looking at the evolution of market inefficiencies.
This is Matt Swartz’ second piece as part of his July residency at FanGraphs.
I first began estimating the average cost per WAR on the free-agent market after the 2009 season, but I have not done so since my three-part series at the end of the 2013 season, leaving three extra seasons during which the market for free agents has evolved. In the first piece of my residency, I discussed the labor implications of using this framework. Many of my subsequent pieces in this series will look at which types of players are undervalued or overvalued by the free-agent market.
But first, this piece will explain how I actually calculate average value, the reference point for whether players are undervalued or overvalued. It is also the appropriate reference point when considering the opportunity cost of any number of other baseball moves. For example, when a team is considering the value of acquiring a young player who will produce a large volume of team-controlled WAR, the reference point for valuing him is the cost of acquiring that amount of WAR on the free-agent market. This is an important concept for roster construction.
This is Matt Swartz’ first piece as part of his July residency at FanGraphs.
I’m excited to begin my FanGraphs Residency this month, during which I’ll present an updated analysis of the Dollars per WAR estimates that I’ve used for a long time. I’ve written about the Dollars per WAR framework for analyzing the free-agent market for nearly a decade now, most recently in a three-part series at the Hardball Times using data through the 2013 season. In that collection of posts, I established the important definition of Dollars per WAR that I will use throughout this series of articles — namely, the average cost of acquiring one win above replacement on the free-agent market.
Since I’ve written about this, however, there has been progressively minded, labor-sympathetic pushback against this framework that I feel is important to address, because if the criticism were fair, it would cast a long shadow across all of the analysis in the coming articles. Fortunately, I believe this criticism is misguided, even if you accept the value system its proponents generally espouse.
I will remain agnostic on the value system behind these criticisms and simply explain why I think this type of analysis does not line up with an anti-labor view at all. I will admit up front that I consult for a Major League team and therefore, when working for them, I represent the interests of that employer. What I say in these articles, however, represents only my own views. In general, I’m writing from my perspective as a frequent contributor on this topic (one predating that good fortune) and as an economist, not as a team employee representing ownership nor as a former Department of Labor employee.
I’d like to address two well-written and well-argued articles that I believe characterize some of the labor-related concerns: one by Mike Bates asking whether statheads are pro-ownership, and another by Michael Baumann framing a series of team-friendly contracts as inherently bad and unfair. What I’d like to consider here is the implicit suggestion made by both authors that, when teams individually target lower cost-per-WAR players, this serves only to drive down the price of higher cost-per-WAR players without also driving up the prices of the lower cost-per-WAR players themselves. That seems very unlikely to be true, given some of the increased prices for lower cost-per-WAR categories of players that I find in later pieces in this series.
People sometimes ask what initially got me interested in economics. The truth is that one of the first things that caught my attention was an application of supply-and-demand graphs that explained the war on drugs. What seemed like a set of policies with unpredictable effects actually had some very predictable — and undesired — consequences. An article applying these concepts to Major League Baseball’s war on performance-enhancing drugs was one I was destined to write. I’ll start off by running through the basics of supply and demand for illegal drugs, show the concepts I found so fascinating years ago, and then show you how well they apply to what MLB is trying to do with PEDs, and with Biogenesis in particular. I understand that drugs are a somewhat sensitive topic, and I have no interest in preaching any normative points of view. I will instead trust that readers can think of my commentary as descriptive, and not assume any agenda. I’ll also be peppering in references to The Wire throughout, because I’m definitely never going to get to do it again.