The State of Analytics Within MLB by Dave Cameron January 18, 2016

This post was written by Adam Guttridge and David Ogren, the co-founders of NEIFI Analytics, an outfit which consults for Major League teams. Guttridge began his MLB career in 2005 as an intern with the Colorado Rockies, and most recently worked as Manager of Baseball Research and Development for the Milwaukee Brewers until the summer of 2015, when he helped launch NEIFI. As part of their current project, they tweet from @NEIFIco and maintain a blog at their site as well.

Much has been made lately of "title inflation": the presence of multiple personnel above the role of assistant GM within MLB front offices. That's a symptom of something larger, rather than a phenomenon unto itself. Front offices are growing, and most rapidly through analytics-centric staff of some type. Critically, though, they're mostly proceeding with a classically hierarchical (vertical) structure. As a result, specialization is now at an all-time high.

Forty years ago, a general manager's duties extended to such routine tasks as preparing operating budgets and arbitration presentations. In 2016, those may be functions of people who report to people who report to the GM (and perhaps the GMs themselves report to yet another non-owner above them). Even just 10 years ago, well into post-Moneyball times, general managers would occupy barstools in the open lobbies of winter-meetings hotels and talk trades over drinks. That's simply no longer the case, as the mechanics of trades (and evaluations) have grown more sophisticated.

Certainly, not all organizations have taken the vertical route. But many (or most) have, and that has several effects, as one might imagine. As organizations rely more and more upon analysis, the people actually creating and executing that analysis tend to sit relatively low on the chain.
Typically, the person developing your pitching metrics or your interpretations of defensive data — in other words, some of the most critical judgments your organization makes — sits beneath the structure which existed prior to the "analytics boom" of the past few years, since seniority plays a large role in hierarchies. Often, then, the analysts in these departments have a deeper understanding of the concepts and data than the people they report to, and it is incumbent upon those analysts to make their conclusions palatable for decision-makers who aren't as comfortable with the technical aspects of the work.

As a consequence of this expanding vertical structure, the question of "is this actually valid work or not?" is more difficult to test internally, unless the organization assigns multiple analysts to double-check each other's work and doesn't force one to report to the other. Upper management may have the impression they're successfully implementing sabermetric tools — sub-metrics, projections, valuations, etc. — due to the presence of analytics staff. They may or may not be, and where those managing the analytics staff lack technical backgrounds, some in leadership roles may have no firm reference point by which to judge the work's quality. So while organizations at large may be relying on "analytics" more than ever, the team-to-team disparities in quality can be just as wide as ever, although the laggards are likely more content with their adoption of analytics than previously.

A small number of organizations have managed a very flat approach. There are GMs, AGMs, and Directors of Baseball Operations who have worked neck-deep in analytical development themselves. Not coincidentally, these organizations tend to create environments where analysis is not a department within the house; analysis of baseball is the house itself.
Consequently, virtually all management personnel are not only familiar with the fine details of, for example, how and why the organization distributes credit for called strikes between the catcher and pitcher (which of course helps determine the value of players generally), but are expected to question it, share ideas about it, or participate in its further development.

Crucially, though, it's not that these flat organizations are utilizing less scouting information than the vertical ones; it's that they're treating baseball analysis as the enterprise through which all matters flow, rather than as something some people do in one department among other departments.

The nexus between what are classically termed scouting and sabermetrics is growing ever tighter. Perhaps, and hopefully, that's out of the ultimate recognition that they're the same thing: analysis of baseball and ballplayers. Many scouts in the stands today know what DRS stands for, to say nothing of WAR. The first thing (and perhaps best thing) young analysts are eager to do is pick up intricacies and wisdom from scouts. A tension still exists, but it's far less palpable today than it was even a couple of years ago.

There's some question as to how closely teams would want these two departments to cooperate, however. While there's an obvious benefit to each side learning from the other, and to raising the education level of all participants, there's also something to be said for independence; too monolithic a union between the two fields risks losing diversity of thought and independence of observation.

Technology is at the epicenter of this nexus. In tomorrow's follow-up, we'll discuss how many of the new tools — Trackman, Statcast, wearable tech devices, etc. — are playing a role.

For teams building departments, the duties of analytics can remain highly conflated and ambiguous.
Speaking in generalities, there are three fields within sabermetrics, each with some overlap with the others:

— Data scientists: those who manage and manipulate data for the purpose of divining relationships, seeking evidence, and optimizing behavior. Pitch-sequencing studies, defensive-positioning strategies, and catcher/pitcher stolen-base credit allocation are some examples of projects they might work on.

— Developers: those with the skills to create and manage the systems and structures which display, warehouse, and take in information. For example: internal databases, scouting applications that combine reports with data, and heat maps that coaches can use in their game preparation.

— Predictive analysts: those who model information and learned intelligence for the sake of judgment and expectations. Player projection systems, player development theory, and player performance metrics would fall under this category.

Perhaps a valid analogy is a Formula 1 race team: there's a driver, a mechanic, and a designer/engineer. They understand plenty about each other's jobs, but expecting one to execute the role of another entirely would be suboptimal at best and perhaps a disaster. (And certainly, in hiring, viewing each of them as interchangeable "car people" would be a very poor idea.)

The role of the general manager has also shifted. In more horizontal organizations, the evaluative work tends to reach the GM as a finished product, which the GM then executes. The GM's value added, in this case, is the degree to which he's able to extract value from competing GMs; negotiation and executive skills are of utmost importance at that level. In the more vertical organizations, the role of the GM is still far more concerned with evaluation itself, and the GM can set himself up as the sole decision-maker, with the rest of the staff serving a support role. And, of course, GMs and organizations can act differently depending on the context of the decision.
If we’re talking about a 19-year-old, we gravitate more towards the wisdom of the scouts. If we’re concerned with a 28-year-old with seven pro seasons and five in the major leagues, our answers are perhaps better founded in the performance record. If it’s a 24-year-old whose apparent abilities have never quite matched his performance results, we seek evidence to reconcile where our expectation should lie. In all cases, medical and perhaps other information is considered, and in some cases it’s of great weight. It’s equally obvious as it is critical: the above is eminently ripe to move from the GM’s head to a more transparent evaluative framework. That’s because player-to-player talent differences are qualitatively small, and the amount of information relevant to their ranking is very large. The essence of sabermetric improvement is moving from fragmentary evaluative processes and decisions to systematic ones. Again, the horizontal organizations are in situations where the General Manager is the executor, rather than the evaluator. That’s what we’ve seen take place in the world of finance over the past 15+ years, and for the exact same reasons, to the point where financial evaluations are themselves now highly automated, even despite the need for reliance upon different information sources situation-to-situation, often including subjective/unstructured information, which is synthesized within a matter of comprehensive evaluation). A number of teams have begun to move systematically. The Tampa Bay Rays have been a systematic organization for years, and they’re now gaining some company. Those organizations are reaping enormous benefits, at least in decision-making power if not yet in on-field wins. Organizations which have grown vertically may not gain at nearly the same pace as their competitors, even if they’ve added more staff. The structures built within them will likely need time to calcify before the need for even further change becomes apparent.