
Situational thinking in football - How can data help?

Wed 23 September 2015

What is the current state of data-driven football research? Where can we improve? I've written before about smugness and overconfidence in sports analytics. It's a real problem. But we also know quite a bit. As an exercise, I thought I'd break down open areas of research into categories and identify where we have room to grow (and I'm sure a lot of this could apply to other sports as well).

As I see it, there are three major categories of open research. Obviously, these categories are not totally independent, nor are they exhaustive, but they give us a potentially useful rubric for thinking about how to frame our questions.

Team-level

Possibly the most well-developed and mature area, research here involves questions like 'who is more likely to win', 'who has the best defense', and 'who is better or worse at drafting'. The prevalence of this kind of research is driven by sports betting and by widely available (and easily manageable) data aggregated to the team level. We're pretty good at predicting the winners of games, and we can argue about the nuances of the other questions, but these seem like questions with essentially "knowable" answers given the current state of data.

Player-level

All of those team-level questions could probably be better answered with improved player-level data. After all, what are teams except aggregated, interacting individuals? Most research in this area is still somewhat rudimentary. There are serious data availability and data quality issues. We don't have a good way to compare players across positions (some will argue about this point). We don't have reliable data for college players. And we are notoriously bad at predicting how the careers of draft picks will turn out.

Pro-level evaluation is going to improve somewhat as data from motion-tracking systems like Zebra's are more widely adopted, but it's almost certain that the data will remain proprietary, owned by the teams and the league. Further, making use of motion data has its own unique challenges, as many SportVU analysts can tell you. However, having more data doesn't tell you what questions to ask, and Zebra can't solve the problem of evaluating college players.

Situational decision-making

By far the most immature area of research involves in-game decision-making. Unfortunately, most of the yet-to-be-conducted research in this area will have effects that cascade through both team-level and player-level research. We think we know a lot about fourth downs, but most of our knowledge about fourth downs is biased by the fact that a) teams don't go for it on fourth very often, b) teams that do go for it on fourth don't go for it randomly, and c) we don't know much about the plays being called, the defense being faced, the specific personnel on the field, and so on. In fact, the data get very sparse the more specific you make the game situation. Fourth and three from the eleven facing Cover 2 with 21 personnel? Without looking, I'm guessing that situation has occurred fewer than 15 times in the modern game. Restrict it to run or pass? You're probably looking at fewer than 7 plays. With less than a minute remaining? Oops, you're probably looking at a single play (if that) now, as the sketch below illustrates.
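
To make the sample collapse concrete, here's a minimal sketch of stacking those filters, assuming a hypothetical charted play-by-play table; the file name and every column name (down, distance, yardline, coverage, personnel, and so on) are illustrative, not from any real public dataset.

```python
import pandas as pd

# Hypothetical charted play-by-play table; the file and column names
# are illustrative, not from any real public data source.
plays = pd.read_csv("charted_plays.csv")

filters = [
    ("4th down",            plays["down"] == 4),
    ("and three",           plays["distance"] == 3),
    ("from the eleven",     plays["yardline"] == 11),
    ("facing Cover 2",      plays["coverage"] == "Cover 2"),
    ("with 21 personnel",   plays["personnel"] == "21"),
    ("run plays only",      plays["play_type"] == "run"),
    ("under a minute left", plays["seconds_remaining"] < 60),
]

# Apply each condition in turn and watch the sample size shrink.
mask = pd.Series(True, index=plays.index)
for label, condition in filters:
    mask &= condition
    print(f"{label:>22}: {mask.sum()} plays remain")
```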

This is compounded by the fact that some formations are only run once or twice a game (a small-sample problem), and that many teams have similar plays with different names. Charting organizations like Pro Football Focus and TruMedia are collecting data on these things, but most of it won't be available publicly. Motion-tracking data will help with a lot of this, too, as we'll be able to build models that group plays together, regardless of team, based on the motion of position players, but we're a long way from that.
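
For what that grouping might look like, here's a rough sketch that clusters plays by player movement, assuming we already had per-frame (x, y) tracking for each play resampled to a common frame count; the array shapes, the feature construction, and the choice of k-means are all my assumptions, not anything from a real tracking pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

def play_features(tracks: np.ndarray) -> np.ndarray:
    """tracks: an (n_players, n_frames, 2) array of (x, y) positions for
    one play. Flattening it gives a single vector, so plays with similar
    player motion land near each other in feature space."""
    return tracks.reshape(-1)

def cluster_plays(plays: list[np.ndarray], n_clusters: int = 25) -> np.ndarray:
    # Every play must be resampled to the same player count and frame
    # count beforehand so the feature vectors have equal length.
    X = np.stack([play_features(p) for p in plays])
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
```

Plays assigned the same cluster label would be candidates for "the same play under different names," regardless of which team ran them.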

Unfortunately, situational awareness and the nitty-gritty specifics are what so-called "football people" often like to use to criticize the analytics crowd. "Your model can't account for Peyton Manning," they'll say. And they'll be half right. Most models can't account for Peyton Manning or the fact that the other team's strong safety injured his hamstring last week. These things probably do matter to some extent. However, they're also half wrong. We can't predict specific, rare outcomes with a high degree of certainty. It's not possible in most sciences, and certainly not in football.

It also bears thinking about how this would actually help teams make decisions. Suppose we knew a lot more about specific situations. If a coach is deciding whether to go for a two-point conversion, he's interested in more than the league-wide average; he'd rather know how that conversion rate is affected by certain personnel packages. To answer that, we need a training data set with "true" labels/measurements for those situations.
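
As an illustration, here's one way to estimate package-specific conversion rates without being fooled by tiny samples: shrink each package's raw rate toward the league average. The shrinkage scheme is a standard empirical-Bayes trick, but the table, its column names, and the pseudo-count are hypothetical choices of mine, not anything from a particular team or study.

```python
import pandas as pd

# Hypothetical table of two-point attempts: one row per attempt, with the
# offensive personnel package and whether the attempt converted (0/1).
attempts = pd.read_csv("two_point_attempts.csv")

league_rate = attempts["converted"].mean()

# Raw per-package rates are noisy because each package appears rarely, so
# shrink each toward the league average. The pseudo-count m is a tuning
# choice, not a published value.
m = 20
by_package = attempts.groupby("personnel")["converted"].agg(["sum", "count"])
by_package["shrunk_rate"] = (
    (by_package["sum"] + m * league_rate) / (by_package["count"] + m)
)
print(by_package.sort_values("count", ascending=False))
```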

This is only knowable to an extent, though. Predictive models that are going to be used in non-contrived settings should only be trained using data that the model "would" know about if it were trying to predict in real time. This is a problem of information leakage rather than overfitting: if we were to hypothetically know that two-point conversions succeed 40% of the time, but can adjust that to 35% if facing an all-out blitz, we're basically no better off than not knowing that, because we can't know in a real-time game situation whether the other team is going to all-out blitz. The models should mirror the information available to the human decision-makers as much as possible.
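
In code, that discipline amounts to drawing a hard line between pre-snap and post-snap features and training only on the former. A minimal sketch, again with hypothetical column names:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

attempts = pd.read_csv("two_point_attempts.csv")

# Known before the snap, so fair game for a real-time model.
pre_snap = ["personnel", "yardline", "score_diff", "seconds_remaining"]

# Only known after the snap (did the defense blitz? what coverage?).
# Fine for after-the-fact analysis, but leakage in a live model.
post_snap = ["defense_blitzed", "coverage"]

X = pd.get_dummies(attempts[pre_snap])  # deliberately excludes post_snap
y = attempts["converted"]

model = LogisticRegression(max_iter=1000).fit(X, y)
```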

This is not to say that we shouldn't be studying these questions -- we should! There are lots of unanswered questions surrounding strategy and situations. I think there's been something of a divide between football stats people and football "tape" people, and that we could overcome a lot of this with more constructive dialog. But that's going to require some humility on both sides and a shared desire to know more about the sport rather than to prove the other side wrong.
