Fans Scouting Report, Part 2

Following up David’s announcement, let me give you the lowdown on a project that is near and dear to my heart. You know how saberists and statheads are accused of being all about the numbers, and ignoring the human component? This project is the antithesis of that. This is all about the human component.

The idea behind it goes all the way back to the mid-1980s, when Bill James, in his Baseball Abstracts, asked his fans to rate each player by position, 1-30. He compiled their results, and they became the rankings in at least one of his annuals. I was also sick and tired of the media telling us fans how good or bad newly traded fielders were when, invariably, what they said did not match up with reality. Really, Kaz Matsui was such a good-fielding shortstop that he could displace the young Jose Reyes? I believed that stuff each time, even though we kept getting confirmation that it wasn't true.

Put the two things together (James' crowdsourcing plus distrust of the media's objectivity), and you get The Scouting Report, By the Fans, For the Fans, which I've run for the last eight years. I can't tell you how incredibly insightful you guys are. Well, individually, you aren't. Indeed, individually, you are as useless as I am, or any other individual. That's just the way it is. The power comes when you get just a few of you guys together. That coalescing is where the real brains of the operation lie. All I do is provide the brawn to bring you guys together under one voice (and remove any obvious party-crashers).

The end result is that you have a bunch of Giants fans evaluate Giants players, and a bunch of Rangers fans evaluate Rangers players, and if a Giants fan is interested in the fielding talent traits of Elvis Andrus or Vladimir Guerrero, all he has to do is ask 20 or 50 Rangers fans. He does that asking simply by looking at the results of the Fans Scouting Report.

And, the power of crowdsourcing is really making waves. A few years ago, just days before Opening Day, I started collecting your views on the depth chart of your team (expected games played, expected innings). Once again, rather than try to aggregate from 30 media sources on the latest depth for the 30 teams, I instead pooled the power you guys provide. And you continue to impress me with how much, collectively, you know. Fangraphs last year expanded on that idea even more, by including forecasts of performance as well.

I've done here-and-there crowdsourcing on contracts, and Fangraphs has really expanded on that idea too this year. I've crowdsourced favorite movies and most outstanding players, among other things. Really, the point is that, individually, I would never listen to any single person (myself included, because, after all, who am I?), but collectively, the group trumps anything and anyone out there. It seems like a paradox.

Anyway, so here we are. I've provided to Fangraphs the results for the 2009 and 2010 seasons, and am working on preparing the other six seasons, back to 2003. The fans are asked to evaluate seven traits, with the idea of capturing the entire spectrum of fielding talent on display (from the crack of the bat to the last out). Furthermore, by focusing on the particulars, it removes from the fan the impulse to give an overall evaluation. When I compile the ballots, I weight the traits a certain way to produce that overall evaluation. The end result is a score from 0-100, with 50 as the average. One standard deviation is 20, meaning that for each trait, 16% of players will exceed a score of 70, and the same number will be worse than 30. Furthermore, I convert those scores to runs, so they are directly comparable to MGL's UZR and Dewan's plus/minus.
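The scoring math above is just a normal distribution at work. Here is a minimal sketch of that idea; note that the trait names and weights below are placeholders for illustration (the actual traits and weighting used in the Fans Scouting Report are not spelled out in this post):

```python
from statistics import NormalDist

# Hypothetical trait weights -- NOT the actual FSR weighting.
# They just need to sum to 1.
WEIGHTS = {
    "trait_1": 0.20,
    "trait_2": 0.15,
    "trait_3": 0.15,
    "trait_4": 0.15,
    "trait_5": 0.10,
    "trait_6": 0.15,
    "trait_7": 0.10,
}

def overall_score(trait_scores):
    """Weighted average of the seven trait scores, each already on the
    0-100 scale with mean 50 and standard deviation 20."""
    return sum(WEIGHTS[t] * s for t, s in trait_scores.items())

# A perfectly average fielder (50 on every trait) scores 50 overall.
avg_player = {t: 50 for t in WEIGHTS}
print(overall_score(avg_player))  # 50.0

# With mean 50 and SD 20, a score of 70 is exactly one standard
# deviation above average, so about 16% of players exceed it.
share_above_70 = 1 - NormalDist(mu=50, sigma=20).cdf(70)
print(round(share_above_70, 3))  # ~0.159
```

By symmetry, the same calculation gives roughly 16% of players below 30, matching the figures quoted above.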

The pinnacle of sabermetrics is the convergence of performance analysis and scouting observations. And I think that the voice you guys provide, as a group, is part of that convergence.

Comment (13 years ago):

The problem is that I know who the numbers say are good fielders. I don't see how I can be objective.