Where the Fans and the Numbers Disagree

It’s pretty convenient and informative to have two different and legitimate versions of something. For example, there are a bunch of different baseball stat projection systems. Most of the time, they’re going to agree with one another, because they’re founded on more or less the same information and in the end all projection systems project based on recent performance. But it can be fascinating to identify players or teams where projection systems don’t see figurative eye to figurative eye. For example, for this coming season, Steamer doesn’t love the Reds’ starting rotation. ZiPS, meanwhile, is considerably more optimistic. It’s of interest to examine the reasons why that might be.

Along the same lines, we can turn our attention to player defense. FanGraphs keeps track of both UZR and Defensive Runs Saved, and those are two different methodologies, yielding two different sets of results. On several occasions, people have compared and contrasted the two. But FanGraphs also keeps track of the results of the annual Fan Scouting Reports. To my knowledge, less has been attempted with that data, and I thought it could be fun to see how fan opinion compares to stat opinion.

In case you don’t know, the idea behind the Fan Scouting Report is this: fans watch a lot of baseball, especially baseball involving their favorite teams. So they should be able to give some evaluation of the defensive abilities of their teams’ players, and this information becomes more valuable as the number of ballots grows. In the end, players can be given an overall defensive rating, and then players at different positions can be directly compared. In theory, it should match up well with FanGraphs’ Defense rating, which is UZR plus the positional adjustment. They’re different approaches to trying to get to the same destination.

Now, there are some potential issues. The Fan Scouting Reports can have pretty small sample sizes. The people filling them out are different for every team. And many of the people filling them out are likely to be familiar with advanced defensive metrics, which can bias them in certain directions, even if subconsciously. We don’t have perfect data, but we have data we can play with, and if we expect that the Fan Scouting Report and Defense rating will largely agree with one another, then, well, where don’t they agree?

I decided to look at the window from 2011 to 2013, and I identified all players who put in at least 1,000 innings in the field. I then removed the catchers from the sample, because our Defense ratings for catchers are probably incomplete. Catcher defense is a little mystifying. For each player, I calculated Defense per 1,000 innings. I made some necessary adjustments for part-time DHs. I then plotted that against their overall ratings from the Fan Scouting Reports. Not surprisingly, there’s a lot of agreement:

[Figure: Defense per 1,000 innings, 2011-2013, plotted against Fan Scouting Report overall rating]
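If you wanted to rebuild the sample yourself, here’s a minimal sketch of the steps described above. The input file and column names (`defense_2011_2013.csv`, `innings`, `def_runs`, `position`, `fsr_overall`) are my inventions for illustration, not anything FanGraphs actually exports:

```python
import pandas as pd

# Hypothetical input: one row per player, with 2011-2013 totals for
# innings in the field, Defense runs (UZR plus positional adjustment),
# primary position, and the Fan Scouting Report overall rating.
players = pd.read_csv("defense_2011_2013.csv")

# At least 1,000 innings in the field, and no catchers, since the
# Defense ratings for catchers are probably incomplete.
sample = players[(players["innings"] >= 1000) & (players["position"] != "C")].copy()

# Convert to a rate stat: Defense runs per 1,000 innings.
sample["def_per_1000"] = sample["def_runs"] / sample["innings"] * 1000
```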

The linear relationship is unmistakable, and it would be weird if it weren’t there. Generally, a worse defender is a worse defender by both systems, and a better defender is a better defender by both systems. Using the best-fit equation, one can calculate an “expected” Def/1000: the Defense rating, per 1,000 innings, we’d expect based just on the Fan Scouting Report results. You can then compare the actual Def/1000 to the expected Def/1000. The pool includes 318 players, and for 38 of them (12%), the two systems differ by at least ten runs per 1,000 innings. The greatest individual difference is about 19 runs. The average difference is about five runs, and the median difference is about four runs.
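For the curious, here’s a rough sketch of that calculation, continuing from the hypothetical `sample` table above. This is just one way to do it, not anyone’s actual code:

```python
import numpy as np

# Best-fit line through the scatter: Def/1000 as a linear
# function of the Fan Scouting Report overall rating.
slope, intercept = np.polyfit(sample["fsr_overall"], sample["def_per_1000"], 1)

# Expected Def/1000: the Defense rating we'd predict
# based only on what the fans said.
sample["exp_def_per_1000"] = slope * sample["fsr_overall"] + intercept

# Negative difference: the fans like the player more than the numbers do.
sample["difference"] = sample["def_per_1000"] - sample["exp_def_per_1000"]

# How many players differ by at least ten runs per 1,000 innings?
print((sample["difference"].abs() >= 10).sum(), "of", len(sample))

# The extremes at either end, as in the tables below.
fans_liked_more = sample.nsmallest(10, "difference")
numbers_liked_more = sample.nlargest(10, "difference")
```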

Right here, you can look at a spreadsheet for all 318 players. That’ll show you both Def/1000 and expected Def/1000, so you can look up your favorite regular or semi-regular. But what about the extremes? There might as well be a couple of tables in this post, since the point is to identify the players with the greatest disagreements. First, the ten players the fans have liked most relative to the numbers:

| Player | Innings | FSR Overall | Def/1000 | expDef/1000 | Difference |
| --- | --- | --- | --- | --- | --- |
| Eric Hosmer | 3807 | 57 | -14.5 | 0.3 | -14.8 |
| Nick Markakis | 3715 | 64 | -10.9 | 3.6 | -14.5 |
| Carlos Gonzalez | 3046 | 77 | -4.4 | 9.6 | -14.0 |
| Carlos Beltran | 3466 | 59 | -11.7 | 1.2 | -12.9 |
| Eduardo Nunez | 1709 | 41 | -19.5 | -7.1 | -12.3 |
| Kosuke Fukudome | 1262 | 59 | -11.1 | 1.2 | -12.3 |
| Matt Kemp | 2867 | 63 | -8.8 | 3.1 | -11.9 |
| Albert Pujols | 2633 | 64 | -8.3 | 3.6 | -11.9 |
| Domonic Brown | 2039 | 48 | -15.6 | -3.9 | -11.7 |
| Shin-Soo Choo | 3428 | 56 | -11.3 | -0.2 | -11.1 |

Leading the way is Eric Hosmer. Hosmer, by DRS, has been a below-average defensive first baseman. UZR has thought even less of him over the same period of time. Yet the Fan Scouting Report has pegged him as being above-average, rubbing shoulders with Brandon Belt and Adrian Gonzalez. Hosmer’s defensive numbers picked up in 2013, but the fans thought he was equally good in 2011.

Right behind Hosmer is Nick Markakis. The fans have him slipping — he’s gone from a 79 overall rating to 71 to 64 to 58. Still, the advanced numbers suggest the trend began earlier, and they haven’t liked Markakis much since 2008.

Then there’s Carlos Gonzalez. The fans have loved him. The numbers have liked him all right in 2011 and 2013, but they didn’t like him in 2012, and the fans have rated him as highly as Carlos Gomez and Peter Bourjos. It’s possible that there’s some kind of Coors Field park effect that’s making Gonzalez’s advanced numbers look worse. Alternatively, he could just be a little overrated. We don’t actually know who’s right!

You can go over the rest of the table yourself. How about the other side? Here are the ten players the numbers have liked most relative to the fans:

| Player | Innings | FSR Overall | Def/1000 | expDef/1000 | Difference |
| --- | --- | --- | --- | --- | --- |
| Juan Uribe | 1904 | 61 | 20.7 | 2.2 | 18.6 |
| Luis Valbuena | 1552 | 50 | 14.7 | -2.9 | 17.6 |
| Craig Gentry | 1665 | 69 | 22.0 | 5.9 | 16.1 |
| Lorenzo Cain | 1441 | 65 | 19.2 | 4.0 | 15.1 |
| Nick Punto | 1298 | 55 | 14.2 | -0.6 | 14.8 |
| Jarrod Dyson | 1271 | 50 | 11.7 | -2.9 | 14.7 |
| Jhonny Peralta | 3479 | 52 | 12.4 | -2.0 | 14.4 |
| Casey McGehee | 1954 | 35 | 3.9 | -9.9 | 13.9 |
| Miguel Tejada | 1083 | 37 | 4.2 | -9.0 | 13.2 |
| Pete Kozma | 1254 | 56 | 13.0 | -0.2 | 13.2 |

And there’s our headliner, in Juan Uribe. You look at Uribe, and you wouldn’t have any reason to believe he’d be an elite-level defender. The fans have seen him as being fine, even above-average, but the numbers have fallen head-over-heels in love with his performance. This is the greatest difference in baseball, and defense is a big reason why Uribe’s player page puts his 2013 season at 5.1 WAR. DRS and UZR are both rather fond of the guy.

Luis Valbuena shows up in second, and I don’t really have anything to say about him, except that, again, DRS and UZR both suggest good things about his work at third base. With fewer than 1,600 innings in the sample, this could be sample-size noise. Or it could be a real thing, I don’t know. I can’t imagine having a strong opinion about Luis Valbuena.

Then there’s Craig Gentry. Fans have liked him a lot, ranking him around Darwin Barney, A.J. Pollock, and Jack Hannahan. The metrics put him second in all of baseball, between Andrelton Simmons and Manny Machado, and DRS and UZR say the exact same things. The stats think Gentry has been an elite-level defender in the outfield; the fans have thought, hey, that guy’s pretty good. Pollock, incidentally, would show up right after Kozma if the table included an 11th player.

And again, you can review the rest. Jhonny Peralta remains an interesting case. UZR has liked him a lot. DRS has thought he’s been fine. The fans have thought he’s been mediocre. A lot of other people around the game would agree with the fans, and based on his free-agent contract, the Cardinals don’t agree with the fans. The quality of Peralta’s defense is controversial, which means Peralta’s controversial, but then if the Cardinals believe something, you might want to stop and think about that.

I realize I haven’t dug into why the fans and the numbers might disagree on certain players. I mean, disagreement in some places is inevitable, just because they’re different systems with different inputs. There are probably some sample-size issues. It’s important to recognize that we don’t know which side is right, in the case of a disagreement. The numbers are objective, but the fans might also see some things that the numbers just don’t. It’s interesting enough that there are disagreements, sometimes significant ones, and I would think one would have the greatest confidence in defensive numbers that match up across the board. So, when the fans and the numbers disagree, perhaps that’s cause for additional consideration. Maybe the fans are actually full of crap, but given the well-known issues with the statistical data, they probably shouldn’t be ignored.

Jeff made Lookout Landing a thing, but he does not still write there about the Mariners. He does write here, sometimes about the Mariners, but usually not.
