How 2017 Compares to the Steroid Era: Part I
The 2017 season has seen offensive levels rise to a height unmatched in major-league baseball for quite some time. Overall this year, teams are averaging 4.65 runs per game, the highest mark since 2007 — though not quite the five runs per game teams averaged in 1999 and 2000. Most of the offensive increase can be traced to a juiced ball. There’s also been a lot of talk about the role of a fly-ball revolution of some sort or another in the establishment of a new league-wide seasonal home-run record.
An increase in PED use has now been raised as an issue, as well. MLB has administered both PED testing and PED-related suspensions since 2004; both have existed in the minors since 2001. Even with those measures in place, however, power continues to be associated with steroid use, and unfounded rumors have hounded the authors of every breakout season over the last decade. With the rise of power in recent years, the whole league is under suspicion. But how similar is this version of the league to the one now known as the “steroid era”? Let’s take a look at what the latter actually looked like and how it compares to now.
Our split tools are very expansive going back to 2002. This is convenient because 2002 was the last season that lacked PED testing of any kind. It might not have been quite the height of that period now regarded as the “steroid era” — that was probably 1999 and 2000 — but the league looked quite different before testing and suspensions were instituted.
Notably, scoring looked a whole lot like it does now. For some perspective, here are a few general statistics that illustrate the similarities, and a handful of differences, between the two seasons.
Season | R/PG | HR | BB% | K% | GB% | HR/FB | ISO | BABIP | AVG | OBP | SLG | wOBA |
---|---|---|---|---|---|---|---|---|---|---|---|---|
2002 | 4.62 | 5059 | 8.7% | 16.8% | 43.5% | 10.7% | 0.155 | 0.293 | 0.261 | 0.331 | 0.417 | 0.326 |
2017 | 4.65 | 5882 | 8.5% | 21.6% | 44.1% | 13.7% | 0.171 | 0.299 | 0.255 | 0.324 | 0.426 | 0.321 |
In terms of run-scoring, the 2002 and 2017 seasons are almost identical. The manner in which those runs have been produced, however, has changed. This season has featured many more homers, for example, and while the ground-ball rates are similar between the two campaigns, the number of homers per fly ball is much higher in 2017. The walk rates are almost identical, while strikeouts have increased markedly since 2002. As a result, both batting average and on-base percentage have declined considerably — a substantial enough difference for the batters of 2002 to have actually recorded a higher wOBA despite scoring slightly fewer runs.
The starkest contrasts occur with strikeout rate and HR/FB ratio, however. More fly balls are leaving the ballpark this year. The two most logical explanations are those invoked above: the juiced ball and an effort among certain batters to get more loft with their swings. (I would recommend reading the links in the opening paragraph for more information on these issues.)
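To put a rough number on how much work the HR/FB jump is doing, here is a back-of-envelope sketch in Python. This is my own illustration, not anything from the FanGraphs leaderboards: it takes the HR totals and HR/FB rates from the table above and asks what 2002’s implied fly-ball total would have produced at 2017’s HR/FB rate.

```python
# Back-of-envelope only: how much of the home-run gap could the jump in
# HR/FB explain by itself? Counts and rates come from the table above;
# holding fly-ball totals constant across seasons is a deliberate
# simplification (it ignores PA, park, and batted-ball-mix differences).

hr_2002, hr_fb_2002 = 5059, 0.107   # 2002 HR total and HR/FB rate
hr_2017, hr_fb_2017 = 5882, 0.137   # 2017 HR total and HR/FB rate

# Implied league-wide fly-ball total in 2002, from HR = FB * (HR/FB)
fb_2002 = hr_2002 / hr_fb_2002

# Hypothetical: the same 2002 fly balls leaving at 2017's HR/FB rate
hr_2002_at_2017_rate = fb_2002 * hr_fb_2017

print(f"Implied 2002 fly balls: {fb_2002:,.0f}")
print(f"2002 HR at 2017's HR/FB rate: {hr_2002_at_2017_rate:,.0f} "
      f"vs. actual 2017 HR: {hr_2017:,}")
```

Under those crude assumptions, the HR/FB jump alone is more than enough to cover the gap in home-run totals, which points in the same direction as the juiced-ball and swing-change explanations above.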
What about the demographics of the league? By looking at performance by position, it might be possible to identify a systemic difference between the 2002 and 2017 seasons. The table below shows the output by position in 2002 and 2017, sorted by wRC+.
Pos | Season | PA | HR | WAR | wRC+ |
---|---|---|---|---|---|
1B | 2017 | 19997 | 933 | 69.9 | 118 |
1B | 2002 | 20766 | 780 | 71.8 | 117 |
RF | 2002 | 20842 | 733 | 86.8 | 115 |
LF | 2002 | 20922 | 733 | 81.9 | 113 |
RF | 2017 | 20096 | 778 | 65.9 | 108 |
DH | 2002 | 9622 | 332 | 13.1 | 108 |
3B | 2017 | 19800 | 742 | 80.0 | 103 |
CF | 2017 | 19979 | 595 | 88.0 | 102 |
2B | 2017 | 20066 | 541 | 72.1 | 99 |
CF | 2002 | 21300 | 573 | 83.9 | 99 |
LF | 2017 | 19879 | 659 | 43.7 | 98 |
DH | 2017 | 9884 | 382 | -3.9 | 95 |
3B | 2002 | 20614 | 594 | 63.7 | 95 |
SS | 2017 | 19552 | 516 | 70.9 | 92 |
C | 2017 | 18517 | 602 | 63.7 | 90 |
2B | 2002 | 20967 | 352 | 56.0 | 90 |
SS | 2002 | 20786 | 423 | 67.5 | 89 |
C | 2002 | 19178 | 408 | 47.9 | 83 |
In both seasons, first base reigns supreme. That’s probably not a surprise. Eyeballing the 2002 numbers, we see traditional power sources like the corner-outfield spots and designated hitter up near the top, with weaker-hitting positions like catcher, shortstop, and second base down at the bottom. In 2017, everything is bunched more toward the middle. To provide a bit more clarity, we can see how the positions have changed in a more direct comparison.
Pos | 2017 wRC+ | 2002 wRC+ | Change |
---|---|---|---|
C | 90 | 83 | 7 |
1B | 118 | 117 | 1 |
2B | 99 | 90 | 9 |
SS | 92 | 89 | 3 |
3B | 103 | 95 | 8 |
RF | 108 | 115 | -7 |
CF | 102 | 99 | 3 |
LF | 98 | 113 | -15 |
DH | 95 | 108 | -13 |
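For anyone who wants to rebuild this comparison themselves, a minimal pandas sketch follows. The frame layout and column names are my own invention, seeded with the wRC+ values from the tables above; it is not any particular FanGraphs export format.

```python
import pandas as pd

# wRC+ by position for each season, copied from the tables above;
# the column names here are my own, not an export format.
wrc = pd.DataFrame({
    "Pos":      ["C", "1B", "2B", "SS", "3B", "RF", "CF", "LF", "DH"],
    "wrc_2002": [ 83, 117,  90,  89,  95, 115,  99, 113, 108],
    "wrc_2017": [ 90, 118,  99,  92, 103, 108, 102,  98,  95],
})

# Because wRC+ is already league- and park-adjusted (100 = league average),
# a simple difference is a meaningful cross-season comparison.
wrc["change"] = wrc["wrc_2017"] - wrc["wrc_2002"]

print(wrc.sort_values("change", ascending=False).to_string(index=False))
```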
Teams still seem able to find those big-hitting first basemen, but in the corner outfield, despite what Giancarlo Stanton and Aaron Judge have been able to do, hitters aren’t providing nearly as much offense as they used to. Relative to the rest of the league, we see an increase at defense-first positions like catcher and shortstop, with second base and third base also moving up the ladder. Now let’s compare just homers.
Pos | 2017 % of HR | 2002 % of HR | Change |
---|---|---|---|
C | 10.5% | 8.3% | 2.2% |
1B | 16.2% | 15.8% | 0.4% |
2B | 9.4% | 7.1% | 2.3% |
SS | 9.0% | 8.6% | 0.4% |
3B | 12.9% | 12.1% | 0.9% |
RF | 13.5% | 14.9% | -1.3% |
CF | 10.4% | 11.6% | -1.3% |
LF | 11.5% | 14.9% | -3.4% |
DH | 6.6% | 6.7% | -0.1% |
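The shares in that table are just each position’s home runs divided by the positional total for the season. Here is a sketch of the computation, again with my own column names and the HR counts from the position table above:

```python
import pandas as pd

# HR by position for each season, from the position table above;
# the frame layout and names are my own.
hr = pd.DataFrame({
    "Pos":     ["C", "1B", "2B", "SS", "3B", "RF", "CF", "LF", "DH"],
    "hr_2002": [408, 780, 352, 423, 594, 733, 573, 733, 332],
    "hr_2017": [602, 933, 541, 516, 742, 778, 595, 659, 382],
})

# Each position's share of that season's positional home-run total
for season in ("2002", "2017"):
    col = f"hr_{season}"
    hr[f"share_{season}"] = hr[col] / hr[col].sum()

hr["change"] = hr["share_2017"] - hr["share_2002"]
print(hr.round(3).to_string(index=False))
```

Swapping the HR columns for WAR totals produces the WAR-share table further down in exactly the same way.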
The trend remains here, as well, with outfielders accounting for fewer homers relative to their infield counterparts. Why? It could be that, across the league, teams are putting an emphasis on offense at the expense of defense. It’s possible that shifts have allowed clubs to deploy less talented defenders at premium positions. It could be random.
We can’t say for sure, but the WAR totals mirror the change in offense.
Pos | 2017 % of WAR | 2002 % of WAR | Change |
---|---|---|---|
C | 11.6% | 8.4% | 3.2% |
1B | 12.7% | 12.5% | 0.2% |
2B | 13.1% | 9.8% | 3.3% |
SS | 12.9% | 11.8% | 1.1% |
3B | 14.5% | 11.1% | 3.4% |
RF | 12.0% | 15.2% | -3.2% |
CF | 16.0% | 14.7% | 1.3% |
LF | 7.9% | 14.3% | -6.4% |
DH | -0.7% | 2.3% | -3.0% |
If there were rampant PED use in the same way there was during the steroid era, we might expect to see the same types of wide gulfs that we saw in 2002. Instead, we see a leveling out. If we assume that PED use is one of the main causes of increased power, the data suggest that absolutely everyone is on PEDs and that they’re all using those PEDs to power up in such a manner that corner outfielders no longer have a big advantage over second or third basemen. While we commonly talk about the PED era affecting everyone, the era was still one of extremes.
Consider that, in 2017, the top-30 home-run hitters account for roughly 18% of all home runs, while the middle-100 players (with at least 300 PA) account for 27% of them. Back in 2002, the top-30 home-run hitters — led by Alex Rodriguez (with 57) and Jim Thome (52) — made up 22% of the home-run total, with the middle-100 players accounting for 24%. Things were more extreme in 2002, which suggests that the changes in 2017 are more likely a product of something affecting the entire league as opposed to one subset of players.
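That concentration comparison reduces to sorting a list of player home-run totals and summing slices. A quick sketch of one way to compute it follows; the player totals here are synthetic stand-ins, and “middle 100” is read as the middle 100 by home-run rank among players with 300-plus PA.

```python
# Concentration of home runs: what share belongs to the top 30 hitters
# versus the middle 100? The player list here is synthetic; in practice
# it would come from a season's batting totals for qualifying hitters.
import random

random.seed(17)
# Synthetic stand-in: ~400 hitters with 300+ PA and plausible HR totals
hr_totals = sorted((random.randint(1, 45) for _ in range(400)), reverse=True)

league_total = sum(hr_totals)
top_30_share = sum(hr_totals[:30]) / league_total

mid_start = (len(hr_totals) - 100) // 2          # middle 100 by HR rank
middle_100_share = sum(hr_totals[mid_start:mid_start + 100]) / league_total

print(f"Top-30 share:     {top_30_share:.1%}")
print(f"Middle-100 share: {middle_100_share:.1%}")
```

A flatter league shows up as a smaller top-30 share and a larger middle-100 share, which is the direction 2017 has moved relative to 2002.

We will look further into this potential effect, examining player age as well as the role pitching has played, in Part II.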
Craig Edwards can be found on Twitter @craigjedwards.
I am not entirely convinced that those “leveling out” numbers are particularly significant. It seems quite plausible to me that the “steroid era” was, like the current era, actually caused by a juiced ball.
This is entirely possible. Fifteen years have passed; in the world of sports, that is practically a lifetime. I’d argue that athletes today are stronger, faster, and more explosive than most or all 2002 athletes, regardless of PED usage. The science of sport, nutrition, and recovery has advanced enormously.
BP did a lot of analysis back around 2002-2004 suggesting that the primary factor was neither the ball nor the players but the rash of new ballparks that were a lot smaller than the parks they replaced. Park dimensions have been adjusted a few times in the interim, and more clubs have sought to get closer to average since park factors entered general understanding; in the 1990s, no such concept existed in numerical form.
The final assessment at the time, I believe, was that about 60% of the increase was due to changes in park dimensions, with the remaining 40% unexplained: it could have been changes in player ability/strength (e.g., PEDs), changes in approach (more hitters trying to hit home runs), changes to the ball (which they had no data to analyze), or random variance.