Explaining the Command Disconnect

We only remember the most exciting plays. Diving catches, game-winning hits, high-pressure situations — these are the things that fill our memories and our imaginations. But baseball isn’t always exciting — often it’s far from it. I love baseball, but many of its events are boring. Headlines the following morning never read, “Roy Halladay throws a two-seam fastball for a called strike to open the third inning.”

Home runs are some of the most exciting events in baseball. Everyone — even the most apathetic towards baseball — can appreciate a baseball that’s hit really hard and really far. But home runs are, compared to more mundane baseball events, pretty rare. During the 2011 regular season, batters hit 4,552 homers. That seems like a lot. But considering that about 700,000 pitches were thrown, home runs make up less than 1% of all pitches. It seems pretty likely, then, that our impressions about what is important in baseball are disproportionately affected by home runs.

Not surprisingly, most homers come on pitches that are thrown down the middle:

So if you want to avoid giving up a home run, then announcers and conventional wisdom are absolutely right: don’t throw the ball down the middle. But given the spectacular nature of home runs, we tend to generalize this lesson to many other facets of pitching.

But what about the hundreds of thousands of pitches that weren’t put into play at all? (Here, “put into play” includes home runs.)

We can measure the quality of a pitch using linear weights, a tool that measures the change in run expectancy for a given event. The aforementioned home run obviously increases run expectancy — by an average of about 1.4 runs. But what about less-notable events, like balls and strikes? These affect run expectancy, too, just to a much smaller degree.

We calculate change in run expectancy (RE) with the equation: change in RE = final RE – initial RE + runs scored on event. Because run expectancy is lower in pitcher-favorable counts (0-2, 1-2, etc.), a single — or any hit — has a higher run value when the hit comes during a pitcher-favorable count. But you don’t even need a hit. A change from a 0-0 count to a 1-0 count also increases run expectancy, just by a small amount. We can use this method to look at all pitches with a unified metric.
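To make the formula concrete, here is a minimal sketch of that calculation in Python. The run-expectancy-by-count values are illustrative placeholders, not the actual linear weights behind the numbers in this article, and the table only covers counts — an event that ends the plate appearance (a hit, an out, a home run) would use base-out run expectancy for its final state instead.

```python
# Minimal sketch: run value of a single pitch as the change in run expectancy (RE).
# The RE-by-count numbers below are illustrative placeholders, not the actual
# linear-weight values used in the article.

# Hypothetical run expectancy (runs above average for the rest of the plate
# appearance), indexed by (balls, strikes).
RE_BY_COUNT = {
    (0, 0): 0.000,
    (1, 0): 0.032,
    (0, 1): -0.044,
    (1, 1): -0.015,
    (0, 2): -0.106,
    (1, 2): -0.082,
    (2, 2): -0.039,
    (3, 2): 0.060,
}

def pitch_run_value(initial_count, final_count, runs_scored=0.0):
    """change in RE = final RE - initial RE + runs scored on the event."""
    return RE_BY_COUNT[final_count] - RE_BY_COUNT[initial_count] + runs_scored

# A called strike on 0-0 nudges run expectancy down (good for the pitcher)...
print(pitch_run_value((0, 0), (0, 1)))   # ~ -0.044

# ...while a ball on 0-0 nudges it up (good for the batter).
print(pitch_run_value((0, 0), (1, 0)))   # ~ +0.032
```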

As we discussed before, the above graph only covers a small portion of the events in baseball. While less than 1% of all pitches are home runs, just 19% are put into play. About 81% of pitches are a ball, a strike or a foul ball. On these pitches, run value is what you’d expect:

The pitches that are most likely to be home runs — pitches thrown down the middle — are also the pitches with the lowest run values when not put into play, because they are most likely to be strikes. Keep in mind that negative run values are good for pitchers. If we compare the run values of pitches not put into play with those of pitches that are put into play, we find a nice juxtaposition:

 

On pitches down the middle, the balls that are put into play have, on average, about twice the magnitude of run value as pitches that aren’t put in play. That means for the two to come into equilibrium, you would need about a third of pitches put into play and two-thirds not put into play. But as discussed earlier, far fewer than a third of pitches are put into play. This means that, on average, pitches thrown down the middle are good for the pitcher, not the batter.
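As a quick sanity check on that arithmetic, here is a sketch of the weighted average. The two run values are made-up numbers chosen only to preserve the roughly 2:1 magnitude ratio described above; nothing else about them comes from the article.

```python
# Back-of-the-envelope check of the break-even point described above.
in_play_rv = 0.04        # hypothetical average run value when a middle pitch is put in play (bad for the pitcher)
not_in_play_rv = -0.02   # hypothetical average run value otherwise (good for the pitcher)

def net_run_value(in_play_rate):
    """Expected run value of a pitch down the middle at a given in-play rate."""
    return in_play_rate * in_play_rv + (1 - in_play_rate) * not_in_play_rv

# With a 2:1 magnitude ratio, the two cancel when a third of pitches are put into play.
print(net_run_value(1 / 3))   # ~ 0.0

# At the league-wide rate of roughly 19% in play, the average middle pitch
# comes out negative, i.e. good for the pitcher.
print(net_run_value(0.19))    # < 0
```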

Merging everything together, we can see this visually:

Remember that everything with a negative run value is a positive event for a pitcher. In this graph, I split up run value by batter handedness. The two sides don’t match exactly because lefties and righties have different called strike zones, which I don’t draw here to avoid cluttering the graph.

There is still so much more to look into. Last week I wrote about whether better pitchers have better command. Despite looking at the data in many different ways, I was unable to find any large differences in command between the best pitchers and everyone else.

One of the metrics I used was the percentage of pitches thrown to the horizontal borders of the strike zone. I defined a border pitch as one that was within half a foot of these horizontal borders. If we plot the pitches, they look like this:

There’s some overlap here because I accounted for the fact that a left-handed batter’s strike zone is shifted about a fifth of a foot left compared to a right-handed batter (from the catcher’s perspective), as found by Mike Fast. This measure of command came out of conventional baseball wisdom: Hit the corners, stay out of the middle of the plate, and you’ll be successful. By this definition, fewer than half of all pitches are border pitches.
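For readers who want to reproduce the classification, here is a rough sketch of the border-pitch definition. The zone half-width and the variable name px (horizontal pitch location in feet from the center of the plate, catcher’s view) are my assumptions; only the half-foot margin and the roughly fifth-of-a-foot lefty shift come from the text above.

```python
# A minimal sketch of the border-pitch classification described above.
ZONE_HALF_WIDTH = 0.83   # assumed distance from plate center to each horizontal zone edge, in feet
LEFTY_SHIFT = -0.2       # lefty zone shifted about a fifth of a foot left (catcher's view), per Mike Fast
BORDER_MARGIN = 0.5      # "within half a foot" of either horizontal edge

def is_border_pitch(px, batter_hand, margin=BORDER_MARGIN):
    """Return True if the pitch is within `margin` feet of either horizontal zone edge."""
    center = LEFTY_SHIFT if batter_hand == "L" else 0.0
    left_edge = center - ZONE_HALF_WIDTH
    right_edge = center + ZONE_HALF_WIDTH
    return abs(px - left_edge) <= margin or abs(px - right_edge) <= margin

# A pitch near the outer edge to a right-handed batter counts as a border pitch...
print(is_border_pitch(0.9, "R"))   # True
# ...while a pitch right down the middle does not.
print(is_border_pitch(0.0, "R"))   # False
```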

The metric implicitly assumes that border pitches are better than other pitches. But it turns out that this isn’t really true. The average run value of a border pitch is nearly identical to the average run value of a non-border pitch. Of course, that run value is distributed differently: the BABIP on border pitches is .279, while the BABIP on non-border pitches is .297. But this seems to be balanced out by the fact that border pitches are in the zone significantly less often.

I wondered if the metric was too inclusive, so I altered it so that only pitches within a quarter of a foot of the strike zone’s edges were considered border pitches. As some of you suggested in the comments on my last post, I also made the groups more granular and split all pitchers into four groups (quartiles), again based on FIP. But despite these changes, the results were basically the same: when ahead in the count, the best pitchers threw more border pitches; when behind, they threw fewer. Either way, the difference is very small.
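The quartile split itself is straightforward. A sketch of that grouping step with pandas might look like the following; the DataFrame and its column names are hypothetical stand-ins, not the data used here.

```python
# A rough sketch of splitting pitchers into FIP quartiles and comparing
# border-pitch rates. The data and column names are placeholders.
import pandas as pd

pitchers = pd.DataFrame({
    "fip":        [2.8, 3.1, 3.6, 3.9, 4.2, 4.6, 5.0, 5.4],
    "border_pct": [0.44, 0.46, 0.45, 0.43, 0.44, 0.45, 0.46, 0.44],
})

# Bin pitchers into four FIP quartiles, then average border-pitch rate per bin.
pitchers["fip_quartile"] = pd.qcut(pitchers["fip"], q=4, labels=["Q1", "Q2", "Q3", "Q4"])
print(pitchers.groupby("fip_quartile", observed=True)["border_pct"].mean())
```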

It seems that the importance of not throwing pitches down the middle is overblown. On average, pitches thrown down the middle are actually pretty effective. The obsession with pitches thrown down the middle probably can be traced back in part to home runs. They disproportionately affect our perceptions because they’re so exciting. When they do happen, we need to explain the why behind them. According to the theory of cognitive dissonance, when reality and our mental attitudes diverge, we experience a mental tension. And we often resolve this tension through rationalizations.

Home runs are pretty random, but that’s hard to accept. Because of that, we rationalize their occurrences through a host of factors, chief among them pitch location. Throwing the ball down the middle is bad if the batter puts the ball in play — but that doesn’t happen often. And that might be why the command difference between the best pitchers and everyone else is much smaller than we might expect.





RMR
12 years ago

It should be noted that pitches are not independent events. Hitters are not purely reactionary, but rather have some basis in expectation. That expectation is driven by many things, including the pitcher’s repertoire, the count and the base-out situation. From an overall analysis point of view, I assume these come out in the wash to some degree. However, we should be very careful about inferring the logic/value of making a given pitch based on these data.

The RE of a fastball down the middle with the bases loaded, no outs and a 1-2 count (batter likely to swing) is presumably quite different than an identical pitch with the bases empty, 2 outs and a 3-0 count (batter likely to take).