The Doomed Search for a Perfect Way To Interpret Exit Velocity Data

Last year, I took a long look at the predictive power of rookie exit velocity. One of the things I learned was that for rookies with at least 200 balls in play, wRC+ was less predictive of their future performance than max exit velocity. That blew my mind. Knowing just one measurement, the velocity of a player’s hardest-hit ball, was more useful than knowing about their overall performance through their entire rookie season. Exit velocity matters a lot, as does how you interpret the data.
Since the rollout of Statcast in 2015, we’ve been introduced to three general ways of thinking about exit velocity, along with half a dozen individual variations. Depending on the context, we might read about a player’s average exit velocity, their maximum exit velocity, their hard-hit rate, or any number of exit velocity percentiles. For a while now, I’ve been wondering which one of these methods is most useful. Could there be one exit velocity metric to rule them all?
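The metrics above are all simple summaries of the same underlying list of batted-ball speeds. As a rough sketch (the function name and data are hypothetical; the 95 mph hard-hit cutoff is Statcast's published definition), they might be computed like this:

```python
from statistics import mean, quantiles

def ev_summaries(exit_velos):
    """Summarize a player's exit velocities (mph) the common ways.

    `exit_velos` is a hypothetical list of one player's batted-ball
    speeds; needs at least two values for the percentile calculation.
    """
    return {
        "avg_ev": mean(exit_velos),
        "max_ev": max(exit_velos),
        # Statcast defines a hard-hit ball as 95+ mph off the bat
        "hard_hit_rate": sum(v >= 95 for v in exit_velos) / len(exit_velos),
        # 90th percentile, the cutoff teams reportedly favor
        "ev_90th": quantiles(exit_velos, n=10)[-1],
    }
```

Each summary compresses hundreds of batted balls into one number, which is exactly why they can disagree about who hits the ball hard.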
I have to imagine that at some point in the last several years, the R&D department of each major league team has asked itself that exact same question. In each big league city, someone much smarter than I am did the math and wrote up the results in a report that now rests comfortably in a proprietary database with a catchy name. The rest of us just have to make do with rumors and innuendo suggesting that teams most often value something akin to 90th-percentile exit velocity. To my knowledge, no one in the public sphere has made a comprehensive survey, and I wanted to look into the matter for myself.
