Which Defenders Make the Plays They are Supposed To?
Defensive statistics have been open to debate since they were first created, and the back and forth will probably continue for years to come, even with new technologies offering the promise of better data. One limitation of assigning individual players defensive values is positioning. A player’s coaches may have him completely out of position for a seemingly routine play, and zone-based metrics will downgrade him for not making it. While it may be impossible to know the correct position before each play, the chances of a defender making a play given his starting position can be estimated with Inside Edge’s fielding data. Using their Plays Made information, I will add another stat to the defensive mix: Plays Made Ratio.
The concept is fairly simple. Inside Edge provides FanGraphs with the number of plays a defender should make given a range of possible chances. Inside Edge watches each play multiple times and grades the difficulty of the play. Here is their explanation for how they collect the data.
Inside Edge’s baseball experts include many former professional and college players. Every play is carefully reviewed, often more than once. It is not uncommon for IE scouts to review certain plays together in order to reach a consensus on the defensive play rating. IE also performs a thorough post game scrubbing process before the data is made official.
At FanGraphs, the fielding data is displayed on each player’s page as seen here with Jason Heyward as an example:
Most of the fielding data falls into two categories. The zero percentage plays are just that, impossible plays, and make up 23.2% of all the balls in play. Balls in this bucket are never caught and always have a 0% value. The other major range is the Routine Plays or the 90% to 100% bin. Defenders make outs on 97.9% of these plays, which make up 64.0% of all the plays in the field; the 2.1% which aren’t made are mostly errors. In total, 87.2% of all plays are graded out as either automatic hits or outs; it is the final ~13% which really determine if a defender is above or below average.
Between almost always and never, four categories remain. Even though each category has a defined range, like 40% to 60%, the average rate of plays made is not exactly in the middle of each range. Here are the actual percentages of plays made in each of the four ranges, with a quick sketch of the calculation after the table.
| Range | Actual Percentage |
| --- | --- |
| 1% to 10% | 6% |
| 10% to 40% | 29% |
| 40% to 60% | 58% |
| 60% to 90% | 81% |
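To make the arithmetic concrete, here is a minimal sketch of how bucket rates like those could be computed from play-level data. The DataFrame and its column names are hypothetical stand-ins for however the Inside Edge feed is actually structured, and the tiny sample is just for illustration.

```python
import pandas as pd

# Hypothetical play-level data: one row per ball in play, with the Inside Edge
# difficulty bucket and whether the fielder converted it into an out.
plays = pd.DataFrame({
    "bucket": ["1-10%", "10-40%", "10-40%", "40-60%", "60-90%", "60-90%"],
    "out":    [0,       1,        0,        1,        1,        0],
})

# League-average conversion rate in each bucket: outs recorded divided by chances seen.
bucket_rates = plays.groupby("bucket")["out"].mean()
print(bucket_rates)
```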
With these league-average values and each individual player’s chance counts, a ratio of plays made to the league-average expectation can be calculated. To put it on the same scale as stats like FIP- and wRC+, I set Plays Made Ratio on a 100 scale, where a value like 125 is 25% better than the league average. Here is the long-form formula, with Jason Heyward’s value worked out as an example.
Plays Made Ratio = ((Plays made from 1% to 90%) / ((1% to 10% chances * .063) + (10% to 40% chances * .289) + (40% to 60% chances * .576) + (60% to 90% chances * .805))) * 100
Heyward’s Plays Made Ratio = ((1+10+9+26)/((14*.063)+(16*.289)+(9*.576)+(27*.805)))*100
Heyward’s Plays Made Ratio = (46/32.4)*100
Heyward’s Plays Made Ratio = 142
Heyward had a heck of a season. Of the 66 playable balls hit to him, an average defender would have turned only about 32 into outs. Heyward got to 46 of them, 42% better than the league average. He has consistently posted above-average values, with a 133 in 2012 and a 125 in 2013.
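In code, the whole calculation is only a few lines. This is a sketch using the league-average rates from the formula above and Heyward’s 2014 chance counts; the function name and data layout are mine, not anything official from Inside Edge or FanGraphs.

```python
# League-average conversion rates for the four non-trivial buckets (from the formula above).
LEAGUE_RATES = {"1-10%": 0.063, "10-40%": 0.289, "40-60%": 0.576, "60-90%": 0.805}

def plays_made_ratio(plays_made, chances):
    """Plays made in the 1%-90% buckets versus the number an average defender
    would be expected to make on the same chances, scaled so 100 = league average."""
    expected = sum(chances[bucket] * rate for bucket, rate in LEAGUE_RATES.items())
    return 100 * plays_made / expected

# Jason Heyward, 2014: 46 plays made on 66 non-trivial chances.
heyward = {"1-10%": 14, "10-40%": 16, "40-60%": 9, "60-90%": 27}
print(round(plays_made_ratio(46, heyward)))  # ~142
```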
Now, on to whether and when the data stabilizes and becomes predictive. The stabilization point (r = .50) seems to be around 70 of these plays. The main issue I am having is a lack of data: I have two matched-pair seasons (’12 to ’13 and ’13 to ’14) to work with. If I start dividing the data up by individual position, the numbers seem reasonable, but I am looking at most at 60 matched seasons (2 sets of data * 30 teams). Don’t take the 70-play value to heart, as I could easily see it change once more data becomes available.
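For anyone who wants to poke at the stabilization question themselves, the rough shape of the check is sketched below: pair each player’s PMR in back-to-back seasons, filter by a minimum number of chances, and look for the threshold where the year-to-year correlation reaches r = .50. The DataFrame and its columns are hypothetical placeholders; this shows the shape of the test rather than the exact one I ran.

```python
import pandas as pd

# Hypothetical matched-pair data: one row per player with PMR and non-trivial
# chances in consecutive seasons ('12-'13 or '13-'14).
pairs = pd.DataFrame(columns=["pmr_y1", "chances_y1", "pmr_y2", "chances_y2"], dtype=float)

def year_to_year_r(df, min_chances):
    """Correlation of year-1 and year-2 PMR for players with at least
    min_chances chances in both seasons."""
    sample = df[(df["chances_y1"] >= min_chances) & (df["chances_y2"] >= min_chances)]
    return sample["pmr_y1"].corr(sample["pmr_y2"])

# Scan thresholds to see roughly where the correlation crosses .50.
for threshold in range(10, 150, 10):
    print(threshold, year_to_year_r(pairs, threshold))
```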
Additionally, some positions get more chances than others. For reference, here are the maximum numbers of possible plays made at each position in 2014:
C: 96
1B: 63
2B: 124
3B: 130
SS: 133
RF: 67
CF: 54
LF: 70
Outfielders have fewer opportunities to show off their fielding abilities than infielders and their Plays Made Ratio may take a bit longer to stabilize. Keep in mind, though, that a play made in the outfield has a larger run impact than a play made in the infield — there’s a greater chance for an extra base hit or more runner advancement if an outfielder doesn’t make a play — so fewer opportunities doesn’t automatically equal a lower defensive value.
Now, remember that this stat only looks at how often a fielder made the play given where he was positioned on the field. The team could be playing its outfielders back to prevent a double, or its infielders in for a bunt, which could put the defender out of position. Additionally, it doesn’t look at the final results of the play (at least for now). If Sir Dive Alot is playing in the outfield and loves to try to catch every ball hit his way, he will get to a few extra flyballs by diving all the time, but the ones he doesn’t reach will get past him for more doubles and triples. Also, an outfielder could be good at making plays coming in but not going back; balls which fall in over his head are more damaging than those which drop for shallow singles. While his Plays Made Ratio may be high, the number of runs he saves, as seen by UZR or Defensive Runs Saved, may be lower by comparison.
Here are three recent FanGraphs articles on defensive metrics and how Plays Made Ratio (PMR) fits in with their conclusions.
An Attempted Defense of Colby Rasmus’ Defense by Mike Petriello
As recently as 2013, saying he was a league-average defender in center would be selling him short. That year, UZR/150 scored him a 15.2. DRS and raw UZR both said 11. That’s quite good. But last season, he wasn’t league-average, or really anything close to it. That UZR/150 plummeted to -15.3 — a 30-run difference — although since he only played in 104 games, we can use raw UZR, which put him at -9.1. DRS saw it similarly, dropping him to -7.
It’s fair to say the change in defensive ranking cost Rasmus two or more wins above replacement, which, if paired with a league-average bat, is the difference between replacement level and a capable starter.
….
In fact, if you didn’t know better, you might say Rasmus was positioning himself shallower, which would make it easier to come in on the shorter ball while making himself more vulnerable to the deep one.
| Season | PMR | Chances |
| --- | --- | --- |
| 2012 | 81 | 45 |
| 2013 | 109 | 29 |
| 2014 | 74 | 41 |
| Overall | 87 | 115 |
Comparison: Rasmus gets to a below-average number of balls compared to the average defender, particularly deep balls. Those deep hits will likely go for extra bases and be more damaging than a few extra singles falling in front of him.
On Nick Markakis and Defensive Metrics by Dave Cameron
And this is why there’s such a stark gap between the evaluation of Markakis’ defensive value between the advanced metrics and the Fans Scouting Report. As Jeff noted back in February, the fans have consistently rated Markakis’ glove work far higher than the numbers have, putting him nearly 15 runs better than what UZR suggests. If you add 15 runs of defensive value to his ledger, 4/$45M for Markakis is perfectly rational.
…..
The Braves are essentially betting that UZR and DRS are so systematically wrong that they add no real information even after 12,000 innings of data collection.
| Season | PMR | Chances |
| --- | --- | --- |
| 2012 | 101 | 21 |
| 2013 | 117 | 34 |
| 2014 | 126 | 62 |
| Overall | 119 | 117 |
Comparison: The PMR values are nearly in line with his Fans Scouting Report marks. Could it be that Markakis is out of position to make common plays for a right fielder? The team could have had him shading toward center field to help Adam Jones. Maybe the Braves think they can position Markakis better and get above-average outfield defense out of him.
How Good of a Defender is Adam Eaton? by Jeff Sullivan
Let’s look only at defenders last season with at least 900 innings. On a per-1000 basis, no one had a bigger DRS/UZR gap than Eaton did, at roughly 15 runs. No one had a bigger difference in the same direction, and no one had a bigger difference in the opposite direction. So you could say there was no greater source of disagreement than Adam Eaton, at least as far as defensive metrics are concerned. That seems like the sort of thing that ought to be at least quickly investigated.
…
We can try to make some use of Inside Edge data. This time I’m isolating 2014 — the Inside Edge processes have changed each year, and I don’t want to blend inconsistencies. Eaton just made 312 putouts. Based on the batted balls he saw, as evaluated by Inside Edge, he would’ve been expected to make about 310 putouts.
| Season | PMR | Chances |
| --- | --- | --- |
| 2012 | 119 | 3 |
| 2013 | 139 | 3 |
| 2014 | 117 | 46 |
| Overall | 119 | 52 |
Comments: After removing all the sure-thing and impossible plays from 2014, Eaton has produced like an above-average fielder, but the sample is still too small to make much of a call on his defense.
Plays Made Ratio doesn’t settle any of the above debates on its own. What it does is give a person another piece of information with which to make a more informed decision.
For reference, here are all the players by position from 2012 to 2014 with their Plays Made Ratio for each season, along with their total plays made and number of chances.
Determining a player’s defensive value has been and always will be a struggle. By using Inside Edge’s information on plays made or not made by defenders, a person can get an idea of whether a player is making the plays they are supposed to, independent of positioning.
While it seems like a good way to go about it, I found vastly different league-average values for each bin by position. You can’t use the same average values for each position.
OK, I will look into it.
For each position, Average PMR = sum(actual)/sum(estimated)
Pos: Avg PMR
C: 1.0588
1B: 0.9807
2B: 0.9466
3B: 0.9410
SS: 0.8837
LF: 0.9945
CF: 0.9869
RF: 1.0655
That’s using the league-average actual percentages for each range of plays, though, as opposed to position-specific percentages. I’m wondering, however, if the results would be much different, if at all. Jason Heyward’s 142 would be scaled back to 133 when compared to just RFs, by the way.
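If position-specific baselines like those hold up, re-centering a raw PMR is just a division by the position’s average. Here is a quick sketch using the averages posted above; the numbers are the commenter's, and the helper function is only mine for illustration.

```python
# Average PMR by position, as posted above (1.00 would be the overall league average).
POSITION_AVG_PMR = {
    "C": 1.0588, "1B": 0.9807, "2B": 0.9466, "3B": 0.9410,
    "SS": 0.8837, "LF": 0.9945, "CF": 0.9869, "RF": 1.0655,
}

def position_adjusted_pmr(raw_pmr, position):
    """Rescale a raw PMR so that 100 is average for the given position."""
    return raw_pmr / POSITION_AVG_PMR[position]

# Heyward's 2014 raw PMR of 142, compared only to other right fielders.
print(round(position_adjusted_pmr(142, "RF")))  # ~133
```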