Ultimate Zone Ratings (UZR), provided by Mitchel Lichtman, are now available for the 2018 season! These will update weekly as usual.
In addition to the new 2018 data, the 2012 through 2017 data have been updated. You might recall that in 2017, there were some changes to the UZR methodology that were to be backported to 2012 – 2016. This has now happened. Here is a brief refresher on what those changes were.
– UZR now uses hit timer data (hang time) rather than hit type designations, which is an improvement on the methodology and thus the results.
– The methodology has also changed a little so that UZR accounts for some of the noise associated with imperfect data. The net result of this change is that extreme UZR’s, which were likely caused, at least to some extent, by noise in the data rather than by extreme performance, will be slightly ‘dampened.’ We think that these new values, while very close to the old ones in most cases, more accurately reflect the actual performance of the players in question.
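The article doesn’t publish the dampening formula, but the general idea is regression toward the mean: shrink an observed value toward a neutral baseline in proportion to how noisy the estimate is. A minimal sketch, with an invented `stabilization` constant and weights that are illustrative only, not UZR’s actual method:

```python
# Hypothetical sketch of noise "dampening": shrink an observed UZR toward
# the league baseline (0 runs) in proportion to how noisy the estimate is.
# The stabilization constant here is invented for illustration.

def dampen_uzr(observed_uzr: float, chances: int, stabilization: int = 400) -> float:
    """Regress an observed UZR toward 0 based on sample size.

    `stabilization` is a made-up constant: the number of fielding chances
    at which we would trust the observation and the prior equally.
    """
    weight = chances / (chances + stabilization)
    return weight * observed_uzr

# An extreme +25-run season on 350 chances gets pulled in noticeably...
print(round(dampen_uzr(25.0, 350), 1))   # 11.7
# ...while a modest +5-run season barely moves in absolute terms.
print(round(dampen_uzr(5.0, 350), 1))    # 2.3
```

Note how the shrinkage hits extreme values hardest in absolute terms, which matches the pattern in the tables below: elite and disastrous seasons move the most, while ordinary seasons barely change.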
Below you will find the changes of 5 or more runs in each season:
David Appelman is the creator of FanGraphs.
This is pretty fascinating. The overall picture on two of the players we would have previously considered candidates for best defensive player in the game (Hamilton & Kiermaier) changes substantially.
Yeah, these seem like some significant differences for some guys. KK had his 2016 UZR essentially cut in half.
Can someone smarter (or at least who has not been up since 4am) explain if this is a big deal or not?
What do you mean by a big deal?
So here’s my take: I’ve been noticing for a while that we hadn’t been getting as many “extreme” UZR values in 2017. Some people had interpreted that as a decline in athleticism for a bunch of players, and I had assumed it was because positioning was making elite defenders less valuable. Both may be true (and I think the extreme defense values actually dipped in 2016 too), but what it looks like is that the dip was at least partially related to changing methodology.
In short, if we have an image in our heads of someone as providing exceptionally good value with the glove, we should probably temper that a bit until we can check their FG page to see the updated results. We shouldn’t think that “glove-first” players like Pillar and Heyward were quite as valuable as their old FG pages made them look. Similarly, guys who were famously disasters in the outfield (like Matt Kemp) are really just regular old disasters (although we should check FG to verify).
Also interesting: Adam Eaton’s and Charlie Blackmon’s work in center field looks better under this new measurement.
At the surface level this represents a move from a cruder to a finer-grained understanding of the challenge presented to the fielder. Rather than using hit type buckets like flyball, liner, grounder, fliner (or whatever exactly was used), the system is now using hang time data, which gives a better sense of what the fielder had to deal with.
Off the top of my head, this probably helps remove the effects of fielder positioning a bit (remember back in the day when Brett Lawrie was posting insane fielding numbers, but it was due to him playing where the 2B usually would? Those extreme shift data were removed a while back, but smaller shift or positioning effects were never really accounted for). This doesn’t totally remove those effects (UZR doesn’t know exactly where a fielder was positioned), but it provides better data for the model’s understanding of the challenge presented to the fielder.
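As a toy illustration of why hang time is more informative than hit-type labels: two balls a stringer might both call “liners” can present very different challenges, and hang time separates them directly. The thresholds below are invented for illustration, not UZR’s actual cutoffs:

```python
# Hypothetical sketch: bucketing batted balls by hang time (seconds)
# instead of by a coarse hit-type label. Thresholds are invented.

def difficulty_bucket(hang_time_s: float) -> str:
    """Classify a batted ball by hang time, finer-grained than
    grounder/liner/fliner/flyball designations."""
    if hang_time_s < 1.5:
        return "very hard"   # low screamer, almost no reaction time
    elif hang_time_s < 3.0:
        return "hard"        # liner-ish, some reaction time
    elif hang_time_s < 4.5:
        return "routine"     # ordinary fly, fielder can settle under it
    else:
        return "easy"        # high fly, plenty of time to range

# Two balls that might both be logged as "liners" get separated:
print(difficulty_bucket(1.2))   # very hard
print(difficulty_bucket(2.8))   # hard
```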
The new range estimates for fielders *should* be improved. For players who appeared to ‘get worse,’ like KK: he’s still elite, just normal elite rather than video-game-numbers elite.
That explains why there are general differences between the new and old UZR (as does this article). But the question remains: why are guys like Billy Hamilton and Kiermaier seeing consistently reduced UZR numbers?
It’s a good question – the added granularity of these data may just help reduce the extremes: elite defenders are seeing reduced numbers while poor defenders are seeing improved ones. It also may be helping with understanding defense in COL’s OF, as those players tend to post uniformly negative numbers despite playing solid defense when on other teams.
Hard to say without some QA on the old method. Could be that there was some bias towards those who have incredible range (and vice versa on the low end). This could’ve been true if the same person was analyzing all home games for a particular team.
I wish they had performed and published duplicates and replicates to determine the amount of accuracy/bias that existed in the dataset.
It seems to have ironed out the magnitude of the extremes, but the ordering has stayed roughly the same. Kiermaier is still the best overall CF in 2015 and 2016. However, instead of being 0.64 WAR better than Pagan in 2015, he is now considered to have only been 0.29 WAR better than him.
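For context on the WAR figures above: FanGraphs converts runs to wins using a runs-per-win factor of roughly 10 (the exact factor varies with each season’s run environment). A quick sketch of the arithmetic, with the constant assumed rather than season-specific:

```python
# Illustration: converting a runs gap between two players into a WAR gap,
# assuming roughly 10 runs per win (the real factor varies by season).

RUNS_PER_WIN = 10.0  # assumed round number for illustration

def war_gap_from_runs(runs_gap: float) -> float:
    """WAR difference implied by a difference measured in runs."""
    return runs_gap / RUNS_PER_WIN

# A ~6.4-run gap corresponds to about 0.64 WAR...
print(war_gap_from_runs(6.4))   # 0.64
# ...so trimming it to ~2.9 runs leaves roughly 0.29 WAR.
print(war_gap_from_runs(2.9))   # 0.29
```

So a UZR revision of a few runs can move a player’s WAR by a few tenths of a win without changing how he ranks against his peers.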
yeah, there are a lot of big name defenders on the negative end of these changes … Hamilton and Kiermaier. also Heyward, Machado, and Simmons.