Do Clean Innings Matter?
by Ben Clemens
November 12, 2019

If you watched any baseball at all this postseason, the topic of using starters as relievers probably came up. The Nationals used the tactic frequently, and the Cardinals, Dodgers, Braves, Yankees, Twins, Astros, and A’s all had at least one pitcher who was primarily a starter appear in relief. And when those pitchers came in, the same concern was always raised. “Hey,” the concern roughly goes, “this team should put the starter in at the beginning of the inning to put him in the best position to succeed.” Teams mostly stick to this advice.

But I’ve never been one to take rules like this for granted. After all, plenty of other baseball aphorisms turned out to be nothing but high-minded nonsense. Bat your best hitter third. Bunt runners into scoring position. Focus on batting average. The list goes on.

Some of those are nonsense. But there’s a kernel of logic to using only starters with a clean start to an inning. Starters are creatures of habit, with complex pre-game routines designed to get them to peak readiness just in time for the start of a game. Take them out of this environment, ask them to get ready on a moment’s notice, and they might not be completely up to speed when they step on the mound.

It’s not that starters can’t handle having runners on base, in other words. After all, starters pitch with runners on base all the time. Instead, the issue is that with runners on base, the first batter a pitcher faces is sure to be important. Could it be that starters simply aren’t throwing at 100% for the first batter they face when they’re coming into the game in relief?

To test this theory out, I looked at every instance of a starter pitching in relief in the playoffs over the last five years. It’s inherently a limited sample, but starters pitch in relief in the regular season in very different circumstances, and I deemed them different enough that the playoff data would be more relevant.
I bundled mid-inning appearances and start-of-inning appearances together; after all, my theory is that insufficient warmups cause pitchers to underperform against the first batter, regardless of the base/out state. All told, since 2015, pitchers who appeared primarily as starters in a given year have made 151 appearances in relief in that year’s playoffs. They vary in their skill level and appearance length, from Max Scherzer’s inning of perfect relief against the Dodgers in 2019 to Chi Chi González facing eight batters in the 2015 ALDS. If you want it broken down, there’s not much of a trend; here are the appearance and outcome numbers by year:

Starters As Relievers by Year
Year   Appearances   First Batter wOBA   Subsequent wOBA
2019   32            .288                .317
2018   42            .291                .267
2017   38            .371                .247
2016   13            .256                .387
2015   26            .311                .361

So, are they worse? As a first pass, I’d have to say no. Against the first batter of those 151 appearances, pitchers allowed a .311 wOBA. The same pitchers then allowed an average wOBA of .299 the rest of the way. Of note, that .299 wOBA weights each pitcher by the number of relief appearances, out of 151, that he made. Clayton Kershaw, for example, made four of those 151 appearances, so his non-first-batter wOBA counted 4/151 toward the overall rest-of-appearance wOBA. Each pitcher was handled in this fashion, regardless of how many batters he faced the rest of the way.

Right off the top, that doesn’t appear significant. A 10-point difference in wOBA just doesn’t mean much over 150 plate appearances. For that difference to amount to even one standard deviation, you’d need a sample of around 2,500 plate appearances, and we simply aren’t getting to anything that large when looking only at the playoffs. But we can pull more data to see if we’re missing something. We can learn something meaningful about strikeout rate more quickly than we can for wOBA, for example.
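The weighting scheme and the sample-size claim above can both be sketched in a few lines. The pitcher labels and per-pitcher wOBA figures below are hypothetical stand-ins, not the actual data; only the weighting logic mirrors the method in the text.

```python
# A sketch of the appearance-weighted "rest of the way" wOBA described above.
# The pitchers and wOBA figures are hypothetical stand-ins.
appearances = [
    # (pitcher, relief appearances, wOBA allowed after the first batter)
    ("Pitcher A", 4, 0.290),
    ("Pitcher B", 3, 0.250),
    ("Pitcher C", 1, 0.400),
]

total_appearances = sum(n for _, n, _ in appearances)  # stands in for 151
weighted_woba = sum(n * woba for _, n, woba in appearances) / total_appearances
print(weighted_woba)

# Back-of-envelope check on the sample-size claim: assuming a per-PA wOBA
# standard deviation of roughly 0.50 (an assumption, not stated in the text),
# a 10-point (.010) gap equals one standard error only once
# n = (0.50 / 0.010) ** 2 = 2,500 plate appearances.
required_pa = (0.50 / 0.010) ** 2
print(required_pa)
```

The per-PA standard deviation of roughly 0.50 is a common rule-of-thumb figure; with a different assumed spread, the required sample scales with its square.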
If starters aren’t quite ready when they enter, it stands to reason that they might strike out fewer of the batters they face while they’re still getting their sea legs. Walk rate is much the same; you don’t need a gargantuan sample to learn at least a little bit about walk rate.

Unfortunately, the data are inconclusive. Starters struck out 25.8% of their initial batters faced, and went on to strike out 26.1% of subsequent batters, using the same weights as before. That’s about as close to no difference as it gets. On the other hand, starters walked or hit 12.6% of initial batters, and only 8.5% of subsequent batters (ignoring intentional walks). So maybe there’s something there, but it looks small. Starters have a higher walk rate against the first batter they face when entering the game in relief; not a statistically significant amount (at either the 10% or 5% level), but enough that I decided to do at least one more test before packing it in.

Before I get to that last test, however, let’s address some methodological shortcomings. First, this isn’t an experiment. We don’t get to bump every single starter into relief in an otherwise perfectly controlled environment and see how it affects them. Teams are no dummies — they might be using the starters best suited for relief strategically, obscuring a real effect.

Even ignoring that, problems abound. Not every sample is created equal; in 2016, for example, Clayton Kershaw had a 0% strikeout rate when facing the first batter of a relief appearance and a 100% strikeout rate against all other batters faced in relief. Now, did he face only two batters? Yes. As I said, the sample size can be a problem. But merely using each pitcher’s regular-season strikeout rate isn’t a great solution. It’s important to compare apples to apples as much as possible, and so comparing strikeout and walk rates accrued only in relief appearances feels right to me.
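The walk-rate comparison above can be checked with a standard two-proportion z-test. A sketch follows, with one big caveat: the subsequent-batter sample size isn’t given in the text, so the 600 PA used here (and the implied counts) are hypothetical stand-ins chosen only to illustrate the calculation.

```python
import math

# Two-proportion z-test sketch for the walk/HBP rates quoted above.
# First batters: 151 PA at 12.6% -> roughly 19 walks/HBP.
# Subsequent batters: sample size not given; 600 PA at 8.5% is assumed.
x1, n1 = 19, 151   # first batters: walks + HBP, plate appearances
x2, n2 = 51, 600   # subsequent batters (hypothetical counts)

p1, p2 = x1 / n1, x2 / n2
pooled = (x1 + x2) / (n1 + n2)
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
# Two-sided p-value from the normal CDF
p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

print(f"z = {z:.2f}, p = {p_value:.3f}")
```

Under these assumed sample sizes, the p-value comes out above 0.10, consistent with the article’s statement that the gap isn’t significant at the 10% level.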
The rates could also be regressed to the mean, and in fact I regressed strikeout and walk rates to see if that mattered; it merely flattened out each pitcher’s stats and didn’t affect the outcome. So yes, there are problems with the methods of this study. I couldn’t find any I was more comfortable with, though, so here we are, checking one more thing.

Opposition matters; facing Bryce Harper is more difficult than facing Jose Lobaton. Pitching talent matters; being Chris Sale is better than being Jonathon Niese. Platoon advantage matters; a lefty-lefty matchup is a situation where a pitcher should dominate, while a lefty-righty matchup might be time to simply survive. To account for all of that, I projected a wOBA for each of the 151 plate appearances. I used regular season stats to provide a baseline for each player, plus generic platoon advantages; this part of the method could be improved on, should someone have the inclination and time.

In any case, using this method, here are some broad things we know about the first appearances of starters used in relief. First, they’re better than your average pitcher: they allowed an aggregate .304 wOBA in the relevant regular seasons, comfortably better than league average. Second, their opposition was also excellent; the hitters had a .349 wOBA in the regular season, significantly above average. These starters were coming in to face dangerous hitters. Third, they had the platoon advantage 63% of the time — their managers tried to get them into the game at key moments to take advantage of their handedness.

Taking those three things into account, the pitchers should have allowed a .327 wOBA by my estimation. If you recall from above, they allowed only a .311 wOBA. Wait, are starters better on the first batter they face? Probably not! That’s the danger of working with small samples like this; the results are far from conclusive.
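One simple way to build a per-matchup projection in the spirit described above is the additive approximation: batter wOBA plus pitcher wOBA minus league wOBA, nudged by a generic platoon adjustment. The article doesn’t specify its exact model, so the league-average figure and the platoon swing below are assumptions, not the author’s actual parameters.

```python
# Sketch of an additive matchup wOBA projection. The model choice,
# league-average wOBA, and platoon adjustment are all assumptions.
LEAGUE_WOBA = 0.320   # assumed league-average wOBA
PLATOON_EDGE = 0.010  # assumed generic platoon swing, in wOBA points

def projected_woba(batter_woba, pitcher_woba, pitcher_has_platoon_edge):
    """Batter + pitcher - league, shifted by a generic platoon adjustment."""
    base = batter_woba + pitcher_woba - LEAGUE_WOBA
    return base - PLATOON_EDGE if pitcher_has_platoon_edge else base + PLATOON_EDGE

# Aggregate figures from the article: .349 hitters, .304 pitchers,
# and the pitcher holding the platoon advantage 63% of the time.
avg = (0.63 * projected_woba(0.349, 0.304, True)
       + 0.37 * projected_woba(0.349, 0.304, False))
print(f"{avg:.3f}")
```

With these assumed parameters the aggregate lands around .330, in the neighborhood of the article’s .327 estimate; the gap comes entirely from the guessed league average and platoon size.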
In fact, none of the data around the first batter of a starter’s relief appearance mean much of anything. It’s a lot of noise, essentially. As so often happens when I go looking to quantify conventional wisdom, there’s no real conclusion to draw here. Should you bring starters in for a clean inning to minimize the danger of the first batter? Maybe! But also maybe not. Without more data, it’s hard for us to say.

There isn’t an obvious, measurable difference between the first batter of a starter-in-relief appearance and subsequent batters in that appearance. Does this mean the effect doesn’t exist? Most certainly not. I didn’t find evidence for it, but I definitely didn’t find evidence of its absence.

More than anything, I’d say trust the pitchers and trust the granular data. If a pitcher says he’s feeling unprepared and would prefer to start the inning off clean, let him. If his velo and spin rate are down, let him warm up more. But I doubt there’s a universal reason to only let a starter begin an inning, and nothing I found today contradicts that.