An Alternative Hall of Fame Rating System
by Craig Edwards
January 6, 2016

The Major League Baseball Hall of Fame is a lot like the game itself: wondrous, fascinating and great in scope. The voting process for the Hall of Fame, meanwhile, resembles the umpiring aspect of the game: even though the arbiters typically perform their job well enough, their failures receive considerable attention. Nor is it particularly easy to determine who should be in charge of the different aspects of gatekeeping. Predicting who will get into the Hall of Fame using statistical measures has become much more difficult in recent years due to changes in rules, changes in the electorate, and confusion about steroids. Analyzing who should get into the Hall of Fame statistically is also fraught with difficulty, but perhaps presents a clearer process. This is my attempt.

Jay Jaffe has been the standard-bearer for Hall of Fame analysis over the last decade, with most of his work appearing at Sports Illustrated. Inventor of the JAWS system, he designed a great metric — one which appears on Baseball-Reference — to compare Hall of Fame candidacies. JAWS takes a player’s career bWAR (that is, WAR as calculated by the methodology employed by Baseball Reference) and averages it with the sum of the player’s seven highest bWAR seasons, meant to represent a player’s peak. Jaffe then compares every player in the Hall of Fame to those who might gain election in order to provide a basis for each player’s candidacy. Jaffe’s work is fantastic, and while I don’t claim to have improved on JAWS, I’d like to introduce an alternative method of combining a player’s peak with his overall value for comparison to Hall of Famers. JAWS will be discussed below, not because it is full of flaws, but because it provides the basic framework for creating a method of evaluating players for the Hall of Fame. The first, most noticeable departure from Jaffe’s system is that this one uses fWAR (that is, FanGraphs WAR) instead of bWAR.
While many people use one or both metrics and each has its own group of devotees, I have always been partial to fWAR when evaluating players, even before my time writing at FanGraphs. A simple solution would be to repeat Jaffe’s exact methodology using fWAR, but in creating a metric from scratch (sort of), we can look for alternate ways of looking at the Hall of Fame. To capture a player’s peak, I wanted to include all productive seasons — for our purposes, anything above 2.0 WAR, the mark which represents an “average” year — and then weight those seasons by emphasizing great ones. I originally created a points system wherein a two-win season would be worth one point; a four-win season would be worth three points (1+2); a six-win season, six points (1+2+3); an eight-win season, 10 points (1+2+3+4); and a 10-win season, 15 points (1+2+3+4+5). However, once averaged with a player’s WAR, that method made a 7.8 WAR season worth just 6.9 points, and discounting a great season by nearly a win did not seem to embody the spirit of the exercise. So I created a sub-category between seven and eight wins worth eight points. For the points side of the scale, later to be averaged with overall WAR, we see the following distribution:

HOF Points System
Season      Points
10+ WAR     15
8-10 WAR    10
7-8 WAR     8
6-7 WAR     6
4-6 WAR     3
2-4 WAR     1

To provide some perspective on the weighting, consider: in baseball history, 22 position players have combined for 52 10-WAR seasons. Among those players eligible for election (i.e. not Mike Trout), only Barry Bonds, Fred Dunlap (DOB: 5/21/1859), and Norm Cash (second-best season: 5.2 WAR) are not in the Hall of Fame. A single 10-WAR season is a greater predictor of the Hall of Fame (18 of 21) than joining the 500 Home Run Club (16 of 21). Twenty-seven position players have at least three eight-win seasons, and all who have appeared on a ballot, with the exception of Barry Bonds, are in the Hall of Fame.
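The points schedule above is easy to express as a small function. The sketch below is my own illustration rather than anything from the article; in particular, assigning an exact boundary value (say, 8.0 WAR) to the higher bucket is an assumption the article does not specify.

```python
def season_points(war):
    """Points for a single season under the proposed schedule.

    Buckets follow the article's table; treating exact boundary
    values (e.g. 8.0 WAR) as the higher bucket is an assumption.
    """
    if war >= 10:
        return 15
    if war >= 8:
        return 10
    if war >= 7:
        return 8
    if war >= 6:
        return 6
    if war >= 4:
        return 3
    if war >= 2:
        return 1
    return 0  # at or below an average season: no points

# The 7-8 win sub-category fixes the 7.8-WAR case from the text:
# under the original schedule it earned 6 points; here it earns 8.
print(season_points(7.8))  # 8
```

Averaging points with WAR then recovers the numbers in the text: a 7.8-WAR season averages to (8 + 7.8) / 2 = 7.9, rather than the 6.9 the original schedule produced.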
Forty-one players have at least two eight-win seasons and appeared on a Hall of Fame ballot, and only Bonds, Benny Kauff, Snuffy Stirnweiss (who played more than 100 games just five times), Dick Allen, and John Olerud are not in the Hall of Fame. Half of the 48 players with a single eight-win season are in the Hall of Fame. For all position players who have appeared on the ballot, if you knew only that a player had at least one eight-win season and guessed whether he was in the Hall of Fame, you would be right roughly two-thirds of the time (60 of 91), with the percentage going up if Jeff Bagwell and Mike Piazza gain induction. To see how those point totals end up in the rating system, and with JAWS provided as a comparison, here is how individual seasons are scored by both systems, provided the seasons are among the best seven years of a player’s career:

HOF Points and JAWS: Player’s Top-Seven Seasons
WAR    JAWS    New Rating
10     10      12.5
9      9       9.5
8      8       9.0
7      7       7.5
6      6       6.0
5      5       4.0
4      4       3.5
3      3       2.0
2      2       1.5
1      1       0.5

As I endeavored to do, the very best seasons are given more weight, while seasons under six WAR are given slightly less weight than under the JAWS system, provided those sub-six-win seasons are among the best of a player’s career. Here’s one place where my proposed system diverges from Jaffe’s slightly, however: if a player put up those very good seasons outside of his best seven years, they are discounted by JAWS. Here, however, they remain the same. Regard:

HOF Points and JAWS: Seasons Not in Player’s Top Seven
WAR    JAWS    New Rating
10     5.0     12.5
9      4.5     9.5
8      4.0     9.0
7      3.5     7.5
6      3.0     6.0
5      2.5     4.0
4      2.0     3.5
3      1.5     2.0
2      1.0     1.5
1      0.5     0.5

To provide an extreme example: Babe Ruth has multiple 10-win seasons that are worth less in JAWS than the 5.4 WAR Cal Ripken put up in 1985.
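The difference between the two tables comes down to a pair of one-line weightings: JAWS credits a season at full value inside the seven-year peak and at half value outside it, while the system here always averages the season’s WAR with its points. A minimal sketch (the function names are mine):

```python
def jaws_season_value(war, in_top_seven):
    # JAWS: full credit inside the seven-best-season peak, half outside
    # (outside the peak, a season counts only toward career WAR, which
    # makes up half of the JAWS average)
    return war if in_top_seven else war / 2.0

def new_rating_season_value(war, points):
    # Proposed system: average the season's WAR with its points,
    # regardless of where the season falls in a player's career
    return (war + points) / 2.0

# The Ruth/Ripken example: a 10-WAR season outside the top seven
print(jaws_season_value(10, in_top_seven=False))  # 5.0, less than Ripken's 5.4
print(new_rating_season_value(10, 15))            # 12.5 either way
```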
This does not have a great impact on determining inclusion in the Hall of Fame for one of the greatest players of all time, but it does illustrate the discount JAWS applies to very good seasons outside of a player’s seven best. To provide a more concrete, relevant example, consider this: both Matt Williams and Edgar Martinez produced seven seasons of between four and six wins, over which Williams totaled 33.0 WAR and Martinez, 37.9. As those were Williams’ seven best seasons, they’re worth 33 points under JAWS (if using fWAR), but because Edgar Martinez had three other seasons above 6.0 WAR, his seven seasons are worth only 30.3 under JAWS. Under the system outlined here, Martinez’s seven seasons between four and six wins are worth a similar 29.5, but instead of trailing Williams by nearly three points, he is two and a half points ahead of Williams’ 27.0 total. Players receive a bonus for great individual seasons under this system, but they also do better relative to other players than in JAWS by having more good seasons, which end up being discounted less than they would under JAWS. Fundamentally, there are not going to be many major differences between JAWS and this system. Both use the same WAR framework, and both account for a player’s peak and high productivity. Whether this is a useful addition that helps forward the Hall of Fame discussion or does more to confuse than to aid is in the eye of the beholder. A new method can provide a helpful, different perspective, just as looking at both bWAR and fWAR can help gauge the quality of players. The list below contains the top 25 position players in baseball history using this method. The columns include the player’s Hall of Fame points, WAR, and the average of the two.
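The Williams/Martinez comparison reduces to simple arithmetic: each four-to-six-win season is worth three points, and a group of seasons contributes the average of its total points and total WAR. Plugging in the article’s figures (the variable names are mine):

```python
# Seven seasons of 4-6 WAR each earn 3 points under the schedule above
points = 7 * 3  # 21

williams_war = 33.0  # Williams' seven-best-season WAR total (article's figure)
martinez_war = 37.9  # Martinez's seven 4-6 win seasons (article's figure)

williams_rating = (points + williams_war) / 2
martinez_rating = (points + martinez_war) / 2

print(williams_rating)  # 27.0
print(martinez_rating)  # 29.45, the ~29.5 quoted in the article
```

Martinez’s extra WAR within the same point bucket is what flips the comparison: JAWS halves the seasons pushed out of his top seven, while this system credits them in full.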
HOF Rating Leaders: Position Players
Player              Points   WAR     HOF Rating
Babe Ruth           182      168.4   175.2
Barry Bonds         173      164.4   168.7
Willie Mays         161      149.9   155.5
Ty Cobb             141      149.3   145.2
Rogers Hornsby      146      130.3   138.2
Honus Wagner        137      138.1   137.6
Ted Williams        144      130.4   137.2
Hank Aaron          133      136.3   134.7
Tris Speaker        127      130.6   128.8
Stan Musial         120      126.8   123.4
Lou Gehrig          128      116.3   122.2
Eddie Collins       109      120.5   114.8
Alex Rodriguez      106      114.1   110.1
Mickey Mantle       106      112.3   109.2
Mike Schmidt        103      106.5   104.8
Jimmie Foxx         102      101.8   101.9
Mel Ott             91       110.5   100.8
Rickey Henderson    86       106.3   96.2
Nap Lajoie          89       102.2   95.6
Frank Robinson      82       104.0   93.0
Eddie Mathews       83       96.1    89.6
Joe Morgan          79       98.8    88.9
Albert Pujols       87       90.5    88.8
Wade Boggs          79       88.3    83.7
Carl Yastrzemski    69       94.8    81.9

One of the most valuable tools of the JAWS system is taking the average Hall of Famer by position for comparison to current candidates. The chart below shows the average and median Hall of Famer by position, as well as those players who were voted in by the writers. The JAWS positional average (for bWAR) is also included.

HOF Rating Positional Averages
Position (HOF, BBWAA)   HOF AVG   HOF MEDIAN   BBWAA AVG   BBWAA MEDIAN   JAWS POS AVG
Catcher (13, 8)         38.8      37.9         47.7        46.0           43.1
First Base (19, 10)     58.4      57.0         65.8        57.1           54.2
Second Base (20, 11)    59.8      52.8         77.1        65.4           56.9
Third Base (13, 7)      57.3      52.6         71.9        75.3           55.0
Shortstop (21, 11)      55.0      52.5         62.0        57.8           54.7
Left Field (19, 10)     55.7      49.7         63.6        51.1           53.3
Center Field (18, 8)    64.3      47.9         94.8        93.1           57.2
Right Field (24, 13)    63.1      51.5         85.0        71.8           58.1
Outfield (61, 31)       61.2      49.5         80.6        64.4           56.3
OVERALL (147, 78)       57.4      50.8         71.4        58.6           54.6

Writers have been much harder on a player’s candidacy than the Hall of Fame overall, especially in center field. As for this year’s top candidates, here is where they fit, along with their positional standing.
2016 HOF Ballot and HOF Rating
Player             HOF Rating   HOF AVG   HOF MEDIAN   BBWAA AVG   BBWAA MEDIAN
Barry Bonds        168.7        55.7      49.7         63.6        51.1
Jeff Bagwell       71.6         58.4      57.0         65.8        57.1
Ken Griffey Jr.    70.4         64.3      47.9         94.8        93.1
Jim Edmonds        58.8         64.3      47.9         94.8        93.1
Mark McGwire       57.7         58.4      57.0         65.8        57.1
Larry Walker       56.4         63.1      51.5         85.0        71.8
Edgar Martinez     54.8         57.3      52.6         71.9        75.3
Mike Piazza        54.8         38.8      37.9         47.7        46.0
Tim Raines         54.2         55.7      49.7         63.6        51.1
Gary Sheffield     51.6         55.7      49.7         63.6        51.1
Alan Trammell      50.4         55.0      52.5         62.0        57.8
Sammy Sosa         50.1         63.1      51.5         85.0        71.8
Jeff Kent          44.1         59.8      52.8         77.1        65.4
Fred McGriff       44.0         58.4      57.0         65.8        57.1
Nomar Garciaparra  38.2         55.0      52.5         62.0        57.8
Jason Kendall      29.4         38.8      37.9         47.7        46.0

Based on this system, Bonds, Bagwell, Griffey Jr. and Piazza are the no-doubters. In the next tier of borderline-should-be-ins, one finds McGwire, Edmonds, Walker, Martinez, and Raines. In the tier below that — in this case, of borderline-likely-outs — one finds Trammell, Sheffield, and Sosa, followed by the should-be-outs Kent, McGriff, Garciaparra, and Kendall. No system is going to be able to fully explain the Hall of Fame voting process, but in the case of several candidates like Raines and Trammell, it does help explain why they have had a tougher time gaining traction. Raines is still among the candidates who should be in the Hall of Fame, but here he finds himself behind other candidates who are further from election. There is a great peak, and there is longevity, but there is also a middle ground between the two: remaining highly productive in the middle portion of a player’s career. This system attempts to bridge that divide and provide more perspective on the statistical side of a Hall of Fame candidacy.