When we created the FanGraphs Power Rankings this year, we didn’t know how they would play out. At the outset, there was scorn over the Indians’ ranking. As the season wore on, that changed to scorn about the Rockies’ ranking. By the end of the season, though, things seemed to work out pretty well. Eight of the top nine teams — with the Red Sox being the one exception — reached the postseason. That in and of itself is not a justification for the Rankings, mind you, but it seemed to show that we were on the right track.
Being on the right track doesn’t necessarily mean that things can’t improve, however. So what I’ve done is take a smattering of the more constructively critical comments from this season and compile them — lightly edited for typos — below, broken up by date of article. After you read the selected comments, there are a few polls for your voting pleasure.
Before you read the comments though, I ask that you take a minute and refresh your memory on the Rankings’ methodology, which you can find here.
From May 2nd:
May 2, 2011 at 3:44 pm
I guess in some measure of support for The Rhino, I like the idea of these rankings, but the methodology is very flawed. I like balancing previous success with realistic future expectations, but FAN% is a very poor tool for these purposes. In the case of the Twins, which The Rhino was quick to observe as overrated, the .537 FAN% reflects a team that doesn’t even exist on the field anymore. The Astros’ FAN% at .370 (.050 lower than any other team in the field) is absurd, and they will be #30 by this methodology for the entire season. A better system would use ZiPS(R)%, Marcel%, Tango% to balance FAN% (if not completely replace it).
May 2, 2011 at 4:08 pm
I really like this idea overall, but I agree with others that Fan% is probably not the best way to balance out WAR%. Perhaps Zips (ROS) or some equivalent would be preferable.
May 2, 2011 at 5:11 pm
PS, some credit should also be given for overall record, and especially for the team’s performance during the most recent time period. These rankings won’t have much credibility if, in the middle of August, a team that accumulated a bunch of WAR but then got hit by some major injuries is still floating near the top for weeks on end.
May 2, 2011 at 6:02 pm
To help alleviate some of the concerns people have about the weighting… A square-root rule could be used to assign the weighting, or credibility, of the games already played. For example, a team that has played 26 games would receive 40% of its weight from WAR% (= sqrt(26/162)) and 60% from the preseason fan rankings. This accelerates the weight given to in-season performance without going bonkers. Love the idea of this. I actually like the current weighting structure more; just trying to throw out some alternatives to help others in the comment section who want the Indians higher.
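For the curious, here is a quick sketch of the square-root rule described above. This is illustrative Python, not the actual Rankings code; the function names and the 162-game constant are my own.

```python
import math

SEASON_GAMES = 162

def war_weight(games_played: int) -> float:
    """Square-root credibility weight for in-season WAR%."""
    return math.sqrt(games_played / SEASON_GAMES)

def blended_pct(war_pct: float, fan_pct: float, games_played: int) -> float:
    """Blend in-season WAR% with the preseason FAN% projection."""
    w = war_weight(games_played)
    return w * war_pct + (1 - w) * fan_pct

# A team 26 games in gets ~40% weight on WAR% and ~60% on FAN%,
# matching the commenter's example.
print(round(war_weight(26), 3))                   # 0.401
print(round(blended_pct(0.600, 0.500, 26), 3))    # 0.54
```

Note how the square root front-loads the in-season weight: a sixth of the season played already earns WAR% two-fifths of the blend.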
May 3, 2011 at 9:24 pm
Very cool idea for rankings, certainly preferable to the standard power ranking, wherein they list the teams in order of records, with few differences.
Any thought about some kind of nonlinear function for the weights? I feel like a linear function underweights the WAR% – for example, I feel like halfway through the year, WAR% should be something closer to ~75% than 50%.
From May 9th:
May 9, 2011 at 4:35 pm
It isn’t that the fan standings should be replaced with ZiPS, it is that what we thought 2 months ago is not a proper way of evaluating how good a team is.
It’d be like combining the November election results with the Gallup polls so we didn’t overrate one of the candidates based on how they were doing today. Things change over time.
Baseball has a large portion of luck involved in the results, and I’d hope that FanGraphs would find a way to rate a team based both on its actual results and its expected results by stripping luck out of its performance, instead of using some irrelevant rankings from before the season started to help keep teams grounded.
There also MUST be some sort of evaluation of how a team’s roster is made up. The Yankees are in a better situation than their performance suggests if they trade for Felix Hernandez. It cannot be power rankings without thought, or it is no better than the rest.
From May 16th:
May 16, 2011 at 8:32 pm
To be clear, I’m generally a fan of this ranking method. I like how it adjusts the %’s throughout the season, and doesn’t overreact. However, I do feel like there was something missing, and a comment above illuminated that, I think. The missing element is “banked wins”. I know, I know: WAR% covers that, right?
Almost, but if WAR% is out of whack with the actual win total to date, that team IS more likely to make the playoffs than these rankings give it credit for. Something like: (% of Season Played * Current Real Winning %) + (% Remaining * [Existing Calculation]). That would weight games that have already happened appropriately and use your existing calculation to estimate winning % going forward, thus arriving at (and eventually converging on) an estimated end-of-year W-L record.
Perhaps I’m wrong, but I believe this calculation would better embody the spirit of this exercise.
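The banked-wins blend described above is easy to prototype. The helper below is my sketch of the commenter’s formula, not anything from the actual Rankings; the inputs in the example are hypothetical.

```python
def banked_wins_pct(games_played: int, actual_pct: float,
                    existing_calc_pct: float, season_games: int = 162) -> float:
    """Weight already-played games by actual winning percentage and
    remaining games by the existing blended (WAR%/FAN%) estimate."""
    played = games_played / season_games
    return played * actual_pct + (1 - played) * existing_calc_pct

# A .550 team at the halfway point, with a .500 blended estimate
# for the rest of the way:
print(round(banked_wins_pct(81, 0.550, 0.500), 3))  # 0.525
```

As the season winds down, the second term shrinks, so the number converges on the team’s real winning percentage, which is the “banking” the commenter is after.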
From May 30th:
May 31, 2011 at 3:09 am
Sorry for the long post but I have one suggestion that I think would improve the rankings and eliminate many complaints:
Give WAR% a weight of: (% of season played)*2 instead of
(% of season played)*1 (giving WAR% double the weight it has now)
This is very similar to how Football Outsiders phases out the preseason projections for their DVOA rankings. This method completely eliminates the projections by the halfway point of the season. This makes more sense to me because the projections don’t reflect injuries (Mauer/Twins), and teams are often very different from their projection by this point. FO and FG are both great, but I think FO’s method makes more sense.
Giving WAR% more weight would move the Indians up and the Twins down, but not too far. I agree that it doesn’t make much sense to have them at 20 and 21, even if it’s only May 31st. Using WAR % *2 would hopefully put the Indians in the 10-15 range, and the Twins in the 25-30 range.
I would love to see the results with this tweak, and I think a lot of FG readers would agree that this method makes more sense. I think it would make the rankings more credible and the best out there much like FO’s DVOA rankings. Anyone else agree? Thanks
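The doubled-weight scheme above is also simple to state in code. This is my sketch of the commenter’s suggestion, with the cap at 1.0 implied by the projections being “completely eliminated” at the halfway point:

```python
def war_weight_doubled(games_played: int, season_games: int = 162) -> float:
    """WAR% weight = 2 * (fraction of season played), capped at 1.0,
    so the preseason projection is fully phased out by midseason."""
    return min(1.0, 2 * games_played / season_games)

print(round(war_weight_doubled(40), 3))  # 0.494 (vs. 0.247 under the current scheme)
print(war_weight_doubled(81))            # 1.0: projections gone at the halfway mark
```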
From June 13th:
June 13, 2011 at 4:52 pm
Because these rankings are meant to dig deeper than win-loss record. We also got outscored while splitting the series against the Tigers.
The fairly pessimistic preseason expectations for the team are affecting the ranking, but also our current team fWAR is pretty low. However, our pythag win% is right around .500, with rWAR more or less agreeing (we have 14.3 rWAR).
With the current UZR samples being so unreliable, I wonder if they could be usefully regressed to zero based on the amount of season played (as MGL suggests in his UZR primer). Or, perhaps pythag could be incorporated into these rankings as an additional factor.
From June 27th:
matt w says:
June 28, 2011 at 1:38 pm
Here’s an attempt at a constructive criticism: Even if Cleveland isn’t the second-best team in the division, and Minnesota isn’t the first, there’s something wrong with their rankings. At this point in the season, should the system really rank the team with the worst WAR% ahead of one whose WAR% is over .500?
I’d like to hear more discussion of the methodology, how it should be arrived at, and how much preseason performance should be weighted and whether more recent performance should be weighted more as well; but I think the current formula pretty clearly overweights the preseason poll.
(For the record, I don’t have a dog in this fight; I’m a Pirates fan, and I’m not going to complain about a formula that ranks them 28th instead of 25th.)
From July 4th:
matt w says:
July 4, 2011 at 7:37 pm
I’m not going to complain about any specific rankings anymore (and I’m definitely not going to argue that my particular team hasn’t been somewhat lucky), but aren’t the WAR winning percentages rather Lake Wobegonish? I haven’t run the math, but I’m pretty sure that they average out to something well above .500.
From July 11th:
July 11, 2011 at 10:35 pm
I did some power rankings by ranking teams on WAR%, winning percentage against teams above .500, an average of SOS and expected winning percentage, and overall winning percentage, then averaged them all into one stat. I gave each stat a weight: since WAR% is the most important, it’s worth 40%; the average of SOS and expected winning percentage is worth 30%; winning percentage against teams above .500 is worth 20%; and overall winning percentage is worth 10%. These are the rankings:
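That composite boils down to one weighted sum. The sketch below uses the commenter’s stated weights; the example inputs are hypothetical, not any real team’s numbers.

```python
def composite_score(war_pct: float, sos_exp_avg: float,
                    pct_vs_winning: float, win_pct: float) -> float:
    """Weighted composite: WAR% 40%, SOS/expected-W% average 30%,
    W% vs. teams over .500 20%, overall W% 10%."""
    return (0.40 * war_pct + 0.30 * sos_exp_avg
            + 0.20 * pct_vs_winning + 0.10 * win_pct)

# Hypothetical team: strong WAR%, middling schedule, weak vs. winners.
print(round(composite_score(0.580, 0.520, 0.450, 0.550), 3))  # 0.533
```

Because the weights sum to 1.0, the output stays on the familiar winning-percentage scale, which makes ranking the 30 scores straightforward.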
From August 1st:
August 1, 2011 at 5:22 pm
@Paul that may be the way it is, but it’s a little asinine, don’t you think? If these rankings are meant to measure ONLY performance so far this season, then fan% should be 0 from week 1. If they’re meant to predict future performance, then the contributions of a player acquired through trade should come with him. Subtract out the contributions of whoever that player is replacing if you must, but the fact of the matter is Beltran will spend the rest of the year playing RF in SF, Pence will spend the rest of the year in PHI, and Jimenez won’t be pitching for COL in another game this year.
I can understand why you wouldn’t necessarily want to just go adding and subtracting WAR willy nilly, but unless you want the rankings to be a straight WAR leaderboard, you need to have some subjective element, and pre-season rankings are not nearly good enough.
From August 8th:
Jay Gloab says:
August 9, 2011 at 9:26 pm
Elsewhere I suggested the idea of a fully objective power ranking based on a weighted average, with each game weighted by its ordinal number during the season, i.e. the first game of the season is given a weight of 1, the second is weighted 2, and so forth to whatever game the team is on.
I would have done it simply by winning percentage, but WAR% works too for a kind of “second order” power ranking.
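The ordinal weighting idea above works out to a weighted average where game n carries weight n. Here is a minimal sketch of that calculation; the function name and the toy win/loss sequence are mine.

```python
def ordinal_weighted_pct(results: list) -> float:
    """Weighted winning percentage where the nth game of the season
    carries weight n, so recent games count for more.
    `results` is a sequence of 1 (win) / 0 (loss) in season order."""
    weights = range(1, len(results) + 1)
    total = sum(weights)
    return sum(w * r for w, r in zip(weights, results)) / total

# A team that lost its first two games but won its last three:
# weights 1+2 on the losses, 3+4+5 on the wins -> 12/15.
print(ordinal_weighted_pct([0, 0, 1, 1, 1]))  # 0.8
```

Swapping the 0/1 results for per-game WAR shares would give the “second order” version the commenter mentions.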
From September 26th:
September 26, 2011 at 11:06 pm
One thing that has bothered me about the power rankings is that they do not reflect the best team NOW. By using WAR, which is closely tied to wins, we should not be surprised that the teams with the most WAR get into the playoffs. A thought for next year: what if it were the cumulative WAR of players on the active roster, instead of accumulated WAR by the team? So when the Phils trade for Hunter Pence, they get his WAR in their power rankings. Or when Clay Buchholz goes down for the year, the Red Sox lose his WAR. Obviously, there are flaws with this system as well, but I think it would be an interesting take (at least for the second half of the season).
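The active-roster idea above could be prototyped along these lines. Everything here is hypothetical — the player names, WAR figures, and data layout are mine, not FanGraphs data:

```python
# Hypothetical data: season WAR to date for players the team has used,
# plus WAR brought along by a midseason trade acquisition.
team_war = {"Player A": 3.1, "Player B": 2.4, "Player C": 0.8}
acquired_war = {"Trade Acquisition": 2.0}  # WAR earned with former team
active_roster = {"Player A", "Player C", "Trade Acquisition"}

def active_roster_war(team_war: dict, acquired_war: dict,
                      active_roster: set) -> float:
    """Sum WAR only for players currently on the active roster,
    crediting trade acquisitions with the WAR they brought along
    and dropping injured or departed players entirely."""
    total = sum(war for p, war in team_war.items() if p in active_roster)
    total += sum(war for p, war in acquired_war.items() if p in active_roster)
    return total

# Player B (injured/departed) drops out; the acquisition's WAR comes along.
print(round(active_roster_war(team_war, acquired_war, active_roster), 1))  # 5.9
```

The tricky part, as the commenter concedes, is everything this glosses over: how to value the replacement for an injured player, and whether a player’s old-team WAR really predicts his new-team value.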
Okay, now that we’re done rehashing why everyone thinks I’m an idiot, we’ll finish with some polls. I didn’t include a poll for FAN% versus an objective system because I already know the answer to that one. I realize it wasn’t a popular choice, and we will look at other options for regressing the in-season standings next season.
Aside from that, we still want your input, so let’s get to the polls. I tried to make these as comprehensive as I could, but by all means please elaborate in the comments below on these questions — particularly on the “‘X’ days” in the second question — and anything else you would like to see changed (or not changed, I suppose). Finally, I would like to thank you for taking the time to read Power Rankings this year, and for caring enough to make suggestions.