It’s been a while since we posted anything comprehensive and transparent about how we draw our conclusions about prospects. Player evaluation and development are changing very quickly in the game, and those changes influence how we think about prospects here at FanGraphs, enough to merit a fresh primer before we start publishing this offseason’s organizational lists. In addition to teeing up the lists, this post is meant to act as a central hub that answers commonly asked questions about prospects and how they’re evaluated, specifically for readers who want to start swimming in the deep end of the prospect pool. As we continue to refine our thinking and methodology, we will update this document, which will live in The Essentials section of the Prospects Coverage landing page. Feel free to direct any applicable correspondence to firstname.lastname@example.org. Common queries sent our way may find their way onto this page.
What information drives your opinions on prospects?
We see a lot of players ourselves. We talk to scouts from amateur, pro, and international departments about players they’ve seen. We talk to in-office analysts, front-office executives, and people in player development. We also use publicly available data we think is relevant. Some combination of these things fuels each player’s evaluation.
What are some of your shortcomings as far as information is concerned?
Increasingly, teams are using proprietary data as part of the player-evaluation process. TrackMan and Yakkertech aid evaluations of many different components of pitching and hitting, high-speed video from Edgertronic cameras allows clubs to better understand and alter hitting and pitching mechanics, and Motus sleeves and Rapsodo units are used in pitch engineering. The mere existence and demonstrable efficacy of this stuff has altered the way we project players, but we don’t have access to the data generated by these devices across the entire population of prospects.
What is FV?
FV stands for Future Value, and it’s the way we distill each player’s scouting evaluation into a single expression. Broadly stated, Future Value is a grade on the 20-80 scale that maps to anticipated annual WAR production during the player’s first six years of service. But there’s also quite a bit of nuance underlying that definition, so let’s break down its components.
The 20-80 scale is used by scouts and team analysts to evaluate prospects’ individual tools, as well as their entire future projection. The center of the scale (50) represents major-league average, with each whole grade away from 50 representing a standard deviation away from it. The industry also uses grades of 45 and 55 as a means of assessment because there are so many players, fastballs, throwing arms, etc. hovering right around average that it’s necessary to be more granular at that point on the talent curve. It’s rare for a scout to use grades like 65 or 35 because that level of visual precision on those parts of the curve isn’t all that feasible. Analysts are more likely to use those grades since they’re looking at a TrackMan readout or some other objective measure, and that is literally where someone’s exit velos or curveball spin might sit on the curve.
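Because the scale is defined in standard deviations around a major-league-average midpoint of 50, the conversion is easy to express in code. Here is a minimal sketch of that definition; the function name, the snap-to-5 rounding, and the hard caps are our illustration, not an official FanGraphs formula.

```python
# Sketch: convert a measure's z-score (standard deviations from the MLB
# average) to a grade on the 20-80 scale. The helper name and rounding
# choices here are our own illustration of the scale's definition.

def z_to_grade(z: float) -> int:
    """50 is average; each 10 points is one standard deviation.
    Grades are reported in the 5-point increments the industry uses,
    and the scale is capped at 20 and 80."""
    grade = 50 + 10 * z
    grade = round(grade / 5) * 5          # snap to the nearest half-grade
    return max(20, min(80, int(grade)))   # clamp to the ends of the scale
```

An exactly average tool grades out as a 50, a tool one standard deviation above average is a 60, and anything beyond three standard deviations in either direction just pins at 20 or 80.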
Here is fresh math from the 2018 season regarding WAR distribution of hitters mapped to the 20-80 scale. Eric calculated it twice, first by using all hitters with 200 plate appearances (which produced a sample of 355 players, or approximately every team’s starting lineup and its three most used bench players) and then the best 255 players from that sample (effectively, the starters). The WAR bands created by these two samples are very similar to the ones you’ve seen on this site before.
| FV | Role | Annual WAR |
|----|------|------------|
| 30 | Up & Down | < -0.1 |
| 40 | Bench Player | 0.0 to 0.7 |
| 45 | Low-End Regular/Platoon | 0.8 to 1.5 |
| 50 | Avg Everyday Player | 1.6 to 2.4 |
| 55 | Above-Avg Regular | 2.5 to 3.3 |
| 60 | All-Star | 3.4 to 4.9 |
| 70 | Top 10 Overall | 5.0 to 7.0 |
| 80 | Top 5 Overall | > 7.0 |
Pitching is a little more complicated because pitching standards and roles are changing. Velocity and strikeout rates are higher than ever. Starting-pitcher innings are lower than ever. Whatever the case, here’s how 2018’s starters shake out based on WAR distribution. For this, Eric’s first pass included all starters who threw at least 50 innings (which ended up being 180 pitchers, essentially starters No. 1 through 6 on each team) then the top 150 pitchers from that sample (all 30 teams’ starting rotations).
| FV | Role | Annual WAR |
|----|------|------------|
| 30 | Up & Down | < -0.1 |
| 40 | Backend starter, FIP typically close to 5.00 | 0.0 to 0.9 |
| 45 | #4/5 starter, FIP approx 4.20 | 1.0 to 1.7 |
| 50 | #4 starter, approx 4.00 FIP, at times worse but then with lots of innings | 1.8 to 2.5 |
| 55 | #3/4 starter, approx 3.70 FIP along with about 160 IP | 2.6 to 3.4 |
| 60 | #3 starter, 3.30 FIP, volume approaching 200 innings | 3.5 to 4.9 |
| 70 | #2 starter, FIP under 3.00, about 200 IP | 5.0 to 7.0 |
| 80 | #1 starter; top 1-3 arms in baseball; an “ace” if he does it several years in a row | > 7.0 |
The pitcher and position-player WAR curves are similar enough that, especially when we consider WAR’s margin for error, it’s reasonable to round and combine these two tables pretty much exactly the way Kiley did back in 2014.
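Since the two curves round and combine into essentially one set of bands, the FV-to-WAR mapping can be sketched as a simple lookup. The band edges below come straight from the hitter table; the function name and the decision to treat hitters and pitchers with one table are our illustration, not FanGraphs code.

```python
# Sketch: map a projected average annual WAR (over the first six service
# years) to an FV grade, using the bands from the tables above. The
# helper name and the single merged table are our own illustration.

def war_to_fv(annual_war: float) -> int:
    if annual_war > 7.0:          # top five overall
        return 80
    bands = [                     # (FV, lower bound of the band, inclusive)
        (70, 5.0),
        (60, 3.4),
        (55, 2.5),
        (50, 1.6),
        (45, 0.8),
        (40, 0.0),
    ]
    for fv, lower in bands:
        if annual_war >= lower:
            return fv
    return 30                     # below replacement: the Up & Down bucket
```

So a projection of roughly two wins a year comes out as a 50, an average everyday player, while five-plus wins a year lands in top-ten-overall territory.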
Relievers are punished in any WAR-based measure because they throw fewer innings. The best relievers on the planet typically yield about 3 WAR, while really excellent relievers are 1.5-2.0 WAR players. Some clubs do consider elite relievers to be 80s and think WAR-only analysis of relievers ignores too much of the leverage component of their jobs. This is probably true, but we also think relief usage is about to change in a way that both accentuates that component and changes the volume of innings they throw. For now, single-inning middle-relief types get a 40 FV from us, we slap 45 FVs on arms we think will be dominant bullpen pieces (Seranthony Dominguez and A.J. Minter are two examples from recent years), and anything more than that means we think they’re elite, but most elite relievers were starters as prospects.
Please note we are not projecting peak seasons here but rather the average annual WAR over the player’s first six years of big league employment. Sonny Gray peaked at 3.8 WAR in 2015 (a strong 60) but averaged 2.2 annual WAR (50) over his first six years, and we think the latter is a better representation of his profile because it accounts not only for talent but also the ability to stay healthy and perform consistently. Matt Duffy posted a 4.4 WAR campaign in 2015 but has averaged just shy of two wins per season, and again we think that’s more representative of his true talent.
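The distinction between peak and average is simple arithmetic, but it is the crux of how FV is assigned, so here is a tiny sketch. The season-by-season WAR figures below are hypothetical, chosen only so the six-year average works out to the roughly 2.2 annual WAR cited for Sonny Gray; they are not his actual season lines.

```python
# Sketch: why we grade on the six-year average rather than the peak.
# These per-season WAR values are hypothetical, invented for illustration.
seasons = [1.3, 2.0, 3.8, 1.5, 2.6, 2.0]     # six hypothetical service years

peak = max(seasons)                           # 3.8: reads like a 60 on its own
average = sum(seasons) / len(seasons)         # 2.2: a 50 per the bands above
```

The single 3.8-win season looks like an All-Star year, but the 2.2-win average, which folds in health and consistency, is what the FV grade describes.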
Why the six-year bound?
As just stated, we think basing rankings on a multi-year projection for a player is more useful than a single-year peak. But we also want our projections to be bound by some amount of time because, in the absence of such a constraint, we’re just projecting career WAR. In that case, we’d have to consider who might still be playing into their late 30s and 40s and what kind of players they would be at that time. That seems ridiculous. We love Bartolo Colon and LaTroy Hawkins and Jamie Moyer — and cherish big leaguers who stick around forever — but it’s a fool’s errand to look at a high-school prospect and try to decide if he’ll play into his 40s. The six-year interval we’ve chosen lines up with the six years of service time players are forced to accrue before they can enjoy a free(ish) market for their abilities. During that time, players are being re-evaluated for free agency by means vastly different from those used to evaluate prospects, different enough that it makes sense to view them as separate processes.
A finite scope also forces us to be more specific about defensive projections. It’s easy to look at Vladimir Guerrero Jr. and say that he’ll eventually outgrow third base and move to first, but it’s more useful to be reasonably specific about when.
Keep in mind that, for some older prospects, the six-year window will encompass the start of their decline phase, which is part of why our rankings skew young.
Do you treat upper-level minor leaguers and low-level teenagers the same way as far as FV is concerned?
No. For many reasons, we value proximity to the big leagues and attempt to bake it into FV. If there are two minor leaguers who are exactly the same in every way, and one is in Double-A and the other is in the GCL, it makes sense that the player closer to the big leagues would rank higher than the Rookie-level player. There are some 50 FV players in our rankings who we think have a chance to be 60s or better in the big leagues, but the risk/proximity aspect of their profile needs to be captured in FV somewhere. The amount we deduct for things like risk due to injury or player demographics is subjective, and we don’t have a guideline to share for deducting FV when someone has a surgery or gets busted for PEDs, but we’re okay with that because so much of this process is already subjective.
What are some other areas where you think there’s wiggle room to argue with your rankings?
We think you can make strong arguments to rank players within the same FV tier in a lot of different ways, especially as we move down the scale. Lots of the 40 FV prospects on our lists are risky young players who might be really good one day but might also be nothing, and several of our 40 FV players are near-ready back-end starters. Some teams would, if given the chance, take the lottery ticket instead of the low-ceiling starter. Rebuilding teams with bad farm systems would probably want the lottery ticket. Contending teams with thin pitching staffs would probably rather have the low-ceiling arm. In short, teams behave logically, but we don’t think that logic is uniform across baseball, and it’s fine if it isn’t uniform across the prospect-ranking landscape, either.