# Statcast’s Outs Above Average and UZR

Given the relative novelty of Statcast data, it remains unclear for the moment just how useful the information produced by it can and will be. As with any new metric or collection of metrics, it’s necessary to establish baselines for success. How good is an average exit velocity of 90 mph? What does a 10-degree launch angle mean for a hitter? How does sprint speed translate to stolen bases or defensive ability?

In an effort to begin answering such questions, the Statcast team has rolled out a few different metrics over the past few years that attempt to translate some of the raw material into more familiar terms. Hit probability uses launch angle and exit velocity to determine the likelihood that a batted ball will drop safely. Another metric, xwOBA, takes that idea a step further, using batted-ball data to estimate what a player should be hitting.

Another example is Outs Above Average. In the case of OAA, the Statcast team has accounted for all the balls that are hit into the outfield, determined how often catches are made based on a fielder’s distance from the ball, and then distilled those numbers down to find what an average outfielder would do. The final result: a single number above or below average.

At Reddit, mysterious user 903124 has published research showing that the year-to-year reliability of Outs Above Average has been considerably higher than that of the Range component of UZR. The user was kind enough (or foolish enough) to create a Twitter account to reproduce some graphs of his results, which are shown below. A subsequent tweet in the thread shows left field.

For those who can’t see the charts and would prefer not to open up a new window, what you’d see here is that, for the group selected, the r-squared is much higher for Outs Above Average than for the Range component of UZR. If Statcast could produce something that is much more reliable and much more accurate than the Range component of UZR, that would be a pretty significant breakthrough and win for Statcast. It could potentially improve the way WAR is calculated and provide a better measure of a player’s talent and results on the field.

I wanted to retest some of this myself using slightly different parameters, but before getting to that, allow me to explain a few ways in which Outs Above Average differs from UZR.

• Outs Above Average does not differentiate based on position. We know that center fielders are generally going to have the best range, but Outs Above Average, based on Catch Probability, goes across all outfielders. If you look at the charts above, you’ll see that center fielders nearly always receive positive figures. In some ways, that is a good thing: it helps deduce which players might be best suited for center field given how good they are at tracking down the ball. Yet, this is different from UZR, which compares averages by position. If you look at the leaderboards for UZR, the values average much closer to zero. If you include all players, the average is zero.
• Outs Above Average is not runs above average. In UZR, the outs are already converted to runs. Of course, not all outs are the same. If a fielder is very good at going back on the ball or at getting to balls in the gap, those outs are going to be worth considerably more than outs saved by running in toward the infield. UZR includes these calculations, while Outs Above Average does not.
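To make the outs-versus-runs distinction concrete, here is a rough sketch of the bookkeeping involved in converting a catch into run value. All run values and the `runs_saved` helper below are assumed for illustration only; they are not UZR's actual weights or method.

```python
# Illustrative conversion of outs to runs (NOT UZR's actual weights).
# A catch prevents a hit whose run value depends on where the ball lands:
# a ball in the gap that would go for extra bases is worth more to catch
# than a shallow fly that would fall for a single.
RUN_VALUE_OF_HIT = {"shallow": 0.45, "gap": 0.75}  # assumed values
RUN_VALUE_OF_OUT = -0.27                           # assumed value

def runs_saved(ball_type: str, catch_prob: float, caught: bool) -> float:
    """Run credit relative to the average fielder for one play."""
    # Value of turning this ball into an out instead of a hit.
    swing = RUN_VALUE_OF_HIT[ball_type] - RUN_VALUE_OF_OUT
    expected = catch_prob              # average fielder's chance of an out
    actual = 1.0 if caught else 0.0
    return (actual - expected) * swing

# A gap catch at 30% catch probability is worth more than a shallow
# catch at the same probability, even though both add exactly one out.
print(runs_saved("gap", 0.30, True))      # 0.7 * 1.02 = 0.714
print(runs_saved("shallow", 0.30, True))  # 0.7 * 0.72 = 0.504
```

A metric that counts only outs treats those two plays identically; a runs-based metric like UZR does not.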

Given the differences between what the two defensive numbers calculate, I thought I would run some numbers of my own. I am a fan of bigger sample sizes, so I made two changes to the work done above: I increased the minimum threshold for qualification to 600 outfield innings and 150 chances, and I eliminated positional distinctions, putting all outfielders into one bucket. With those changes made, 61 outfielders met the criteria in both 2016 and 2017, the only seasons for which we have Outs Above Average. Like the researcher before me, I scaled the numbers to 1,000 innings.
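For readers who want to replicate the filtering and scaling, a minimal pandas sketch might look like the following. The column names and the toy data are assumptions for illustration; the real inputs would come from the Statcast and FanGraphs leaderboards.

```python
import pandas as pd

# Toy data standing in for the real leaderboards (values are made up).
df = pd.DataFrame({
    "player":  ["A", "A", "B", "B", "C"],
    "season":  [2016, 2017, 2016, 2017, 2016],
    "innings": [900, 950, 1100, 1050, 700],
    "chances": [200, 210, 260, 250, 160],
    "oaa":     [5, 4, -3, -2, 1],
})

# Qualification thresholds from the article: 600 innings and 150 chances.
qualified = df[(df["innings"] >= 600) & (df["chances"] >= 150)].copy()

# Scale raw Outs Above Average to a per-1,000-innings rate.
qualified["oaa_per_1000"] = qualified["oaa"] / qualified["innings"] * 1000

# Keep only players who qualified in both 2016 and 2017.
both = qualified.groupby("player").filter(
    lambda g: {2016, 2017} <= set(g["season"])
)
print(sorted(both["player"].unique()))  # player C falls out: one season only
```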

This is pretty close to what we would have expected based on the previous research. There is a pretty strong correlation between years. For those of you wondering about aging, there were roughly 35 fewer outs above average from the same group moving from 2016 to 2017, which is about half a play per 1,000 innings. Here’s the same graph, except using Defensive Runs Saved without the arm component.

Here we find a similar result to the one above. As for potential effects of aging here, this group was about two runs above average in 2016 and just one run above average in 2017.

Now let’s get to the range portion of UZR.

That, as you can see, does not feature the same sort of correlation as the other two metrics — although, by increasing the minimum number of innings, one does arrive at a higher r-squared than the previous research. As for the influence of aging here, these players went from roughly one run above average to about half a run above average the following season. For what it is worth, the standard deviation for RngR/1000 was around eight and six in 2016 and 2017, respectively, compared to 11 and 12 for DRS-ARM/1000 and 10 and 9 for OAA/1000.
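The year-to-year r-squared and standard deviation figures quoted throughout are straightforward to compute once the paired per-1,000-innings rates are in hand. A short sketch with illustrative numbers (not the article's actual data):

```python
import numpy as np

# Hypothetical per-1,000-innings rates for the same six players in
# consecutive seasons; the real sample here had 61 outfielders.
y2016 = np.array([8.0, -2.5, 3.1, 0.0, -5.2, 6.4])
y2017 = np.array([6.5, -1.0, 2.2, 1.1, -4.0, 5.0])

# Year-to-year r-squared: square the Pearson correlation coefficient.
r = np.corrcoef(y2016, y2017)[0, 1]
r_squared = r ** 2

# Spread of each season's rates, as compared across metrics above.
sd_2016 = y2016.std(ddof=1)
sd_2017 = y2017.std(ddof=1)
print(round(r_squared, 3), round(sd_2016, 1), round(sd_2017, 1))
```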

This certainly doesn’t look great for RngR and UZR. I was curious, though, about the position-specific adjustments that are included in UZR, but aren’t included in OAA. Because of general sample-size issues, Mitchel Lichtman has stressed using three years of data before drawing firm conclusions regarding a player’s defense. Here, we’re dealing with just about one season, so we can expect the relationship to be low. Nevertheless, it seems quite low compared to several alternatives.

The graph below shows a player’s defensive value overall, which is UZR including arm plus positional adjustment. I’ve expressed the numbers here per 600 plate appearances for my ease in using the data.

In this case, we arrive at something that features roughly the same year-to-year correlation as Outs Above Average. Since DRS had a stronger correlation than RngR in the graphs above, I thought adding the positional adjustment to DRS might show an even stronger relationship than the one we see for defense in the last graph. That proved false, however, as the r-squared for that calculation also ended up at .46. For those who don’t trust defensive numbers, keep in mind that the r-squared for offensive runs above average per 600 plate appearances for this same group of players was .21. Defense had a higher correlation than wOBA, wRC+, slugging percentage, and on-base percentage, and was roughly equivalent to walk percentage and ISO. Much of this relationship is going to be due to positional factors, but positional factors are a very important part of determining a player’s value, and overall defensive value is pretty consistent year to year.

Outs Above Average is very promising, and the strong year-to-year correlation shows that the MLBAM group is on to something. Eventually, OAA could be a very good tool for evaluating outfield defense. To provide a real comparison to UZR and DRS, however, we do need OAA to go a bit further. I’d like to see those outs converted to runs, as not all outs are the same. I’d also like to see some sort of positional averages to separate center fielders from corner outfielders, so we have a better idea of the value we are getting by position. Outs Above Average is a very good start, and I look forward to seeing more years of data and the expansion of the metric in order to evaluate it more properly on the same terms as our most commonly used metrics.

Update: I neglected to mention that UZR was tweaked in 2017 and hasn’t yet been updated for 2016. This could be a contributing cause to the lower correlation for 2016 to 2017. Once 2016 is updated to use the same methodology, I’ll follow up with those results.

Craig Edwards can be found on Twitter @craigjedwards.