The Meaning of the Standings So Far: Adding On
by Jeff Sullivan
June 9, 2015

Monday, I wrote a post, building off of another post. Now this is a post, building off of Monday’s post. To review, this is said post, wherein I examined the relationship between early-season team performance and rest-of-season team performance. How much might the current standings tell you? Going back 10 years, and choosing an appropriate date: Right, this has already been published. As has the following plot:

As far as predicting the rest of the season was concerned, I observed a far stronger correlation with preseason team projections than with early-season team records. Which isn’t to say that the projections did great, but they did a hell of a lot better than actual wins and losses through roughly two months. Right there, you’re given an important reminder never to dismiss what the projections are saying. Or, were saying, as it were, in this case.

After I published the article, though, I realized I could’ve taken another step, and it was pointed out by a few people in response. While I looked at early-season wins and losses, I could’ve easily also looked at early-season Pythagorean wins and losses, based on run differential. That could help to eliminate some of the noise. After all, many of us recognize that run differential is a bit more meaningful than record, and you never want to penalize too much for luck. So this following plot is the same as the first plot, except instead of actual winning percentage on the x-axis, we’ve got Pythagorean winning percentage.

How well might this predict the remaining four months of the year? Does this give us a real step forward? No, it doesn’t. Not really. Relative to the first graph, you do see a slightly stronger relationship. And, relative to the first graph, you do see a slightly steeper slope. But this still falls well short of the predictive power of preseason team projections. Erasing some noise didn’t make enough of a difference.
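For reference, the Pythagorean record used above is derived from runs scored and runs allowed. Here’s a minimal sketch using the classic Bill James formula with an exponent of 2 — note that FanGraphs and others use variants with different exponents (e.g. PythagenPat), so treat the fixed exponent as a simplifying assumption:

```python
def pythagorean_win_pct(runs_scored: float, runs_allowed: float,
                        exponent: float = 2.0) -> float:
    """Expected winning percentage implied by run differential.

    Classic Bill James formula; exponent=2 is the simple version,
    not the dynamic exponent some sites use.
    """
    rs = runs_scored ** exponent
    ra = runs_allowed ** exponent
    return rs / (rs + ra)

# Example: a team that has scored 300 runs and allowed 260
print(round(pythagorean_win_pct(300, 260), 3))  # 0.571
```

A team outscoring its opponents by that margin "should" be a .571 team, whatever its actual record says.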
You still shouldn’t put all that much stock in early run differential. Not if it’s in disagreement with what was expected, statistically, earlier on. To keep consistent with Monday’s article, here’s a glance at the extremes. Again, this is through June 7. I identified the 25 biggest under- and over-achievers, comparing early Pythagorean record to projected record.

25 biggest early-season over-achievers

Pythagorean Win% through 6/7: .597
Projected Win% through 6/7: .486
Actual Win% after 6/7: .499

25 biggest early-season under-achievers

Pythagorean Win% through 6/7: .396
Projected Win% through 6/7: .511
Actual Win% after 6/7: .502

You see the same thing as before. The over-achieving teams have tended to strongly regress. The under-achieving teams have tended to strongly regress, in the other direction. Doesn’t make that much difference whether you’re looking at actual wins and losses or theoretical wins and losses, based on run differential. Ultimately, you’re still better off more heavily weighing the projections. Two months of team performance isn’t irrelevant, but it’s far from the whole picture of everything you need to know. Each team is made up of players, and all those players have longer performance histories.

It’s somewhere around here that I run into the limitations of my own mathematical ability. While my job is to be a baseball analyst, I’m mostly untrained in the art of advanced statistics, and when it comes to that sort of analysis I’m a total fraud. Other people could dig into these numbers even more deeply, but I’m content to stop at “trust the projections more, and the early performance less.” I think even these simple statistics can get the right point across.
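One way to make "weigh the projections more heavily" concrete is a shrinkage blend, where the observed record is treated as if the projection were worth some number of extra games. This is my own illustration, not the article’s method, and the `regression_games` weight of 70 is a made-up assumption purely for demonstration:

```python
def blended_win_pct(observed_pct: float, games_played: int,
                    projected_pct: float, regression_games: int = 70) -> float:
    """Shrink an observed winning percentage toward a projection.

    regression_games is a hypothetical weight: the projection counts
    as that many games' worth of evidence alongside the real games.
    """
    total = games_played + regression_games
    return (observed_pct * games_played + projected_pct * regression_games) / total

# A .597 Pythagorean team through ~57 games, projected at .486,
# lands a lot closer to the projection than to its hot start.
print(round(blended_win_pct(0.597, 57, 0.486), 3))  # 0.536
```

Notice the blended estimate (.536) sits closer to the projection (.486) than the raw .597 does — qualitatively the same regression the over-achiever group showed above, though the specific weight here is invented.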
To put it all together, here’s a table, on forecasting the last four months:

Approach                    R^2     Slope
Early Actual Record         0.13    0.37
Early Pythagorean Record    0.15    0.43
Preseason Projection        0.27    0.93

At some point, it stands to reason you do become better off trusting the season-to-date performance, but that point is nowhere in June. For every individual team, there might be a player or two where you think the projections are just wrong, because of some change in true talent, but those aren’t actually easy to spot, nor do we know when they’re sustainable, and they also don’t make huge differences. Projections might seem too simplistic, but actual performance creates this illusion of meaning. Over smaller samples, we can’t help but underestimate the influence of noise.

There are two steps I can’t take. Here, I went from looking at actual early record to Pythagorean early record. It would be even better to look at BaseRuns early record, but that doesn’t exist. Because BaseRuns strips out more noise, I assume it would be more predictive than the run-differential stuff. Maybe even almost as predictive as preseason team projections. But then, that’s the other step I can’t take: I can’t access historical in-season projections, updated for more recent events, because those also don’t exist. This whole time, I’ve been comparing updated statistics against projections missing two months of data. The projections we have now are better, because we have updated numbers and updated depth charts, taking into consideration injuries and transactions. At this point, Pythagorean record is less predictive than preseason projections. My assumption is that BaseRuns record is less predictive than updated projections. That part, I can’t prove, but even the older projections clearly have worth.

It’s not real fun to keep writing “projections” over and over. I know it makes a good number of you tune out. And that’s fine — I might tune me out, too. But there’s a reason we rely on the things.
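The R^2 and Slope columns come from ordinary least-squares fits of rest-of-season winning percentage on each early-season measure. A self-contained sketch of that computation — the data below are toy numbers for illustration, not the article’s team samples:

```python
def slope_and_r_squared(xs, ys):
    """Ordinary least-squares slope and R^2 for paired samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)          # variance of x (unnormalized)
    syy = sum((y - mean_y) ** 2 for y in ys)          # variance of y (unnormalized)
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(xs, ys))                # covariance (unnormalized)
    slope = sxy / sxx
    r_squared = (sxy * sxy) / (sxx * syy)             # squared correlation
    return slope, r_squared

# Toy example: early win% vs. rest-of-season win%
early = [0.40, 0.50, 0.60]
rest = [0.46, 0.50, 0.54]
slope, r2 = slope_and_r_squared(early, rest)
print(round(slope, 2), round(r2, 2))  # 0.4 1.0
```

The toy data are perfectly linear, hence R^2 of 1.0; in the real samples above, a slope near 0.4 with R^2 around 0.13–0.15 is exactly the "teams regress more than halfway back" pattern the article describes.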
They’re pretty good, and baseball is always lying to us. Believe me, I’m not invested. I don’t have my own projection system, and I think it would be equally interesting if the projections didn’t hold up. But here we are. You’ve seen the information. Make of it what you will.