Historical Position Adjustments

Here on FanGraphs there have been a lot of posts on the relative value of different positions. I agree with the idea that position adjustments should be based on relative defensive skill and player scarcity (the infield positions draw talent from a smaller pool, since left-handers can’t play there), as opposed to the inverse of batting production.

Using my TotalZone defensive data (based on Retrosheet play-by-play files), I looked at defensive differentials by decade, from the 1950’s (actually starting with 1953) to the 2000’s. I looked at players who played two different positions in the same year and aggregated all such player-seasons by decade. Grouping by decade increases the sample size. I’m also limiting my sample by looking only at players who played multiple positions in one season, but this is necessary to avoid distortion caused by aging – such as a player who was a shortstop in 1961 and a third baseman in 1968. I’m leaving out catchers and first basemen and focusing on the relative value of outfielders, second and third basemen, and shortstops.

For the most recent time period, I use Tango Tiger’s position adjustments, which are also used here on FanGraphs in the player valuation sections. They are:

SS +7.5
2B, 3B, CF +2.5
LF, RF –7.5

The TotalZone data is reasonably close to this. Center fielders are 8.7 runs per full season better than corner outfielders. Shortstops are 4.7 runs better than third basemen and 4.2 runs better than second basemen. Second basemen are 1.4 runs better than third basemen. The results are not identical, but close enough that I don’t see value in arguing about them. You could make 2B +3 and 3B +2, and still be in balance, but it’s only half a run.

Tango’s adjustments show an average gap of 8.3 runs between the three infield positions and the three outfield positions. TotalZone shows only a 3.2 run difference when looking at players who played both infield and outfield in the same season. Are we overvaluing infielders?

There are two problems here. One is handedness: all players can potentially play the outfield, but only right-handed throwers play the infield. In addition, movement between the positions is almost entirely one-way. Teams have no trouble taking an infielder and asking him to play the outfield. Some examples off the top of my head are Jerry Hairston, Willie Bloomquist, Ryan Freel, and Chone Figgins. How many outfielders are sent to play second, third, and short? Very, very few, mostly in emergency situations, such as when multiple players get hurt or the game goes deep into extra innings. I’m not forgetting Juan Rivera’s second base appearance last year, but I doubt the Angels plan on him doing it again.

I think that in the case of infielders vs. outfielders it is appropriate to look at the relative offense at the positions. I don’t think this is appropriate when comparing center fielders to left fielders: if center fielders outhit left fielders and out-field them as well, then they are better players, period, and we should not artificially set the positions as equal in value. For 2000 to 2008, outfielders hit better than infielders to the tune of 11.3 runs per season. Average this with the observed defensive difference and we get 7.3 runs, just one off from what Tango’s adjustments imply. For previous decades, I will use the average of the offensive and defensive adjustments between the infield and outfield groups. Within the groups, my adjustments will be based entirely on the defensive differences.
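The blending step is just a simple average of the two gaps. As a quick sketch in Python (the 11.3 and 3.2 figures are the 2000–2008 numbers from above):

```python
# Blend the offensive and defensive infield/outfield gaps, in runs per full
# season, by simple averaging as described above.
def blended_gap(offense_gap, defense_gap):
    """Average the outfielders' batting edge with the infielders' fielding edge."""
    return (offense_gap + defense_gap) / 2

# 2000-2008: outfielders out-hit infielders by 11.3 runs; infielders
# out-fielded outfielders by 3.2 runs.
print(blended_gap(11.3, 3.2))  # about 7.25, reported above as 7.3
```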

1990’s:

The gap between infield and outfield is very similar: 12.7 on offense, 3.5 on defense, for an average of 8.1. Center fielders were 10.6 runs better than corner outfielders. So far, it looks pretty similar. Shortstops, however, were 7.6 runs better than third basemen and 6.3 runs better than second basemen. There was only a 0.2 gap between second and third, in favor of the second basemen. For this decade, I’ll change things a bit by giving more credit to the shortstops. The 1990s adjustments:

SS +9
2B +2
3B +1.5
CF +2.5
RF, LF –7.5

The 1980’s

Here we have a larger gap between infield and outfield, 15.8 on offense, 5.1 on defense, for a 10.5 average. There is only a 7.5 run gap between corner outfielders and center, and this should not be a surprise. In recent years we’ve seen Adam Dunn, Manny Ramirez, Raul Ibanez, Josh Willingham, Carlos Lee, Hideki Matsui, Pat Burrell, and Jack Cust play left field. That lumbering group represents more than 25% of starting left fielders. In the 1980’s we had Rickey!, Tim Raines, Vince Coleman, Willie Wilson, and Gary Redus playing left field. I’m sure there were some terrible left fielders back then as well, and guys like Carl Crawford, Eric Byrnes, and Dave Roberts buck the trend, but I think that on average the left fielder of 1985 was faster than his 2005 counterpart.

Shortstops were 6.6 runs better than third basemen and 3.3 better than second basemen. Second basemen were 4.7 runs ahead of third basemen.

For the 1980’s I use these adjustments:

SS +9, 2B +5, 3B +1
CF +0, RF, LF –7.5

The 1970’s

The infield/outfield gap keeps getting bigger as we go back in time; for this decade it’s 20.2 on offense and 8.6 on defense. Center fielders had only a 5.6 run advantage on the corners. Second basemen were 7.6 runs worse than shortstops, but 3.4 runs better than third basemen. So if we add that up, shortstops must have been 10 or more runs better than third basemen, right?

If only it were that easy. In fact, players who played both third and short in the 1970’s were 1.1 runs worse as third basemen. Sometimes the pieces of this data puzzle do not fit very well together. In every other decade, shortstops were at least 4.7 runs better than third basemen, and at least 6.6 runs better excluding the 2000’s. For the 1970’s the other pieces – shortstop to 2B, 2B to 3B – show the normal pattern. Chalk this one up to a fluke, and I’ll try to make the adjustments make sense.
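The mismatch is easy to see by chaining the pairwise gaps; a small sketch with the 1970’s figures from above:

```python
# 1970s pairwise defensive gaps from the text, in runs per full season.
ss_over_2b = 7.6   # shortstops were 7.6 runs better than second basemen
b2_over_3b = 3.4   # second basemen were 3.4 runs better than third basemen

# If the pairwise gaps were transitive, the shortstop/third-base gap would be
# their sum -- about 11 runs.
implied_ss_over_3b = ss_over_2b + b2_over_3b

# But the players who actually played both positions imply roughly -1.1,
# i.e. they graded out slightly *worse* relative to third basemen.
observed_ss_over_3b = -1.1
print(implied_ss_over_3b - observed_ss_over_3b)  # a discrepancy of about 12 runs
```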

The adjustments:

SS +10
2B, 3B +4
CF –2
LF, RF –8

The 1960’s

The infield/outfield gap was the same as the 1970’s on offense, 20.2 runs, and 3.7 runs on defense. Center fielders were 8.7 runs ahead of corner outfielders. Shortstops had a 7.2 run advantage on third and 5.1 on second. 2B and 3B were essentially even with a 0.2 run difference (2B slightly better). Since those two were equal, I kept them as equals in the adjustments, and averaged their gap with shortstop to make the shortstops 6 runs better than other infielders.

The adjustments:

SS +10
2B, 3B +4
CF +0
RF, LF –9

And finally the 1950’s

Outfielders out-hit infielders by 19.9 runs, while infielders had an 8.7 run defensive advantage. Center fielders were 7.2 runs better than corner outfielders. Shortstops were 7.5 runs ahead of third base, and 5.3 runs ahead of second. Those last two figures are similar to the 1960’s, but since second basemen had a 3 run advantage on third basemen, I changed their relative value a bit in the adjustments:

SS +10
2B +5
3B +2
CF –1
LF, RF –8
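For reference, here are the blended infield/outfield gaps for every decade, computed from the offensive and defensive figures quoted throughout this post (a sketch; rounding may differ slightly from the text):

```python
# Infield/outfield gaps by decade (runs per full season), as quoted in the
# text: (outfielders' offensive edge, infielders' defensive edge).
gaps = {
    "1950s": (19.9, 8.7),
    "1960s": (20.2, 3.7),
    "1970s": (20.2, 8.6),
    "1980s": (15.8, 5.1),
    "1990s": (12.7, 3.5),
    "2000s": (11.3, 3.2),
}
for decade, (offense, defense) in gaps.items():
    blended = (offense + defense) / 2  # the simple average used above
    print(f"{decade}: {blended:.2f} runs")
```

The widening of the gap going back in time is easy to read off this table.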

Using these position adjustments will lead to the conclusion that not all positions are created equal. Sometimes we might find that center fielders hit as well as or better than corner outfielders, or that shortstops hit as well as second basemen. Their defensive value is still superior, and in such situations we’ll find that one position has a better collection of baseball talent than another. Most likely you’ll find that teams recognize this, and pay the positions differently as well.





9 Comments
Jeff Petersen
15 years ago

While I think it’s a great concept, I wonder if your decade selection is a bit arbitrary and possibly damaging. Consider a third baseman in 1979 and 1980 (say, Ron Cey, in his age-31 and age-32 seasons). Regardless of his offensive prowess, his defense is likely to remain fairly consistent between the two years. However, he’s going to be affected both by players who were more prominent in the early part of the 70s (when he didn’t have much playing time) and the late 80s (likewise). But your selection means that, from one season to the next in the prime of his career, he dropped in value? Doesn’t make sense to me.

It seems like it’d be better to calculate the positional values by year. The game’s a-changing all the time, after all. And if that raises concerns about small sample size, you could probably do three- or five-year rolling averages.

Brian Cartwright
15 years ago
Reply to  Jeff Petersen

Last month at StatSpeak, I posted wOBAs and BRAAs for each position in each year of the Retrosheet era, using Marcel for a rolling mean.
http://statspeak.net/2008/12/batting-runs-above-position-1.html
Using these values, you can use Marcel to calculate a player’s “True Talent Level” each year, compared to the expected values for each season with the same weights.

Greg Smith
10 years ago

You really think you can calculate a player’s “True Talent Level”? I find this hard to believe no matter how you calculate it. Everything I’ve read reeks of so much hopefulness, desperation, and “weighting” (“smoothing”) when it never works out. BUT, the lemmings who fawn to the elite never cease to praise the outcome. Am I the only one who sees this?