One of the core tenets of basketball statistical analysis is the use of per minute stats. Compared to per game stats, per minute stats are far more valuable for evaluating individual players, because they put players with varying playing time on the same level. Using per game stats, starters will always dwarf bench players simply because they have more time to accumulate numbers. Per minute stats, on the other hand, let us compare players independent of minutes, allowing for a more even-handed approach to player evaluation.
Recently a debate has come up over the validity and usefulness of per minute stats. I’ve quoted the main parts below, but even abbreviated it’s a long read. If you have the time, I suggest reading it now so the rest of this article will make more sense. For those short on time, here’s a quicker summary:
Hollinger & Kubatko: “Hey per minute stats are a great way to evaluate players! In fact we’ve done a few studies and it seems that a player’s per minute stats increase slightly when they get more minutes. At the worst we can conclude that they should stay relatively the same.”
FreeDarko: “Per minute stats won’t stay the same if a player gets more minutes, because there is a division between greater and lesser players. A player that only gets 10-25 minutes per game is playing against lesser caliber players. Hence when that player sees an increase in playing time, he’s playing against steeper competition, so his stats should decrease.”
Tom Ziller: “That’s not true. Here is every 10-25 minute player in the last 10 years that saw an increase in minutes. Most of them (70%) saw an increase in per-minute production. To rule out this data simply reflecting young players getting better as they age, I looked at 8+ year vets, and about the same ratio of players increased (69%).”
Brian M.: “Tom, the problem with all this data is a causality vs. correlation issue. It’s possible that these players saw more minutes first then improved. But it’s also possible that these players improved first which allowed their coach to play them more minutes.”
Brian’s case is a good one. To use an analogy, imagine I come across a person who calls himself Merlin Appleseed. He claims that just by touching apples he can magically make them taste better. He opens up a box of apples saying that he never touched any of them. He picks out 10, and imbues them with his magic. He asks me to taste each of them. I find all of them to be delicious. He says “here’s the same box I got my apples from. Now I want you to take 10 at random while blindfolded. You can compare them to my magic apples. I bet mine taste better.” I do just as he asks, and indeed my random set of apples are less tasty than his. So does Merlin Appleseed have magical power?
Maybe. Unfortunately this test wouldn’t be able to confirm or deny his magical power. Since Merlin gets to choose his apples, he might be selecting the best ones! To test Merlin’s abilities I would need some way to gauge how good his apples are expected to taste. One way to do this would be to find comparable apples with the same color, size, blemishes, etc. Then I can compare the taste of his apples to my apples. If Merlin has the magical powers he claims, then his apples will taste better than mine.
Similarly with Tom’s study, Brian is saying that by selecting players who have seen an increase in minutes we might be choosing the best apples. This is because players who improve on a per minute basis could be given more playing time by their coaches. Therefore to show whether or not these players have improved, I need to find how good they’re expected to be. Then I can compare their actual performance to their expected performance. If FreeDarko’s theory is true, that role players should decrease their per minute production with more minutes, then they should perform worse than their expected values.
To separate the control group from the test group, I’ll use only players with an even-numbered age for the control and players with an odd-numbered age for the test group. Since this study is intended for role players, as Ziller defined them, I limited my control group to player seasons where (a rough code sketch of this filter follows the list):
* The player age was an even number.
* The player appeared in 41 games or more.
* The season was 1981 or later.
* The player averaged 10-25 mpg.
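For anyone who wants to follow along, here’s a minimal sketch of that filter in Python/pandas. The file name and the column names (age, games, year, mpg, per) are placeholders for whatever season-level data set you happen to use.

```python
import pandas as pd

# Hypothetical season-level data set; file and column names are placeholders.
seasons = pd.read_csv("player_seasons.csv")

# Control group: even age, 41+ games, 1981 or later, 10-25 mpg.
control = seasons[
    (seasons["age"] % 2 == 0)
    & (seasons["games"] >= 41)
    & (seasons["year"] >= 1981)
    & (seasons["mpg"].between(10, 25))
]
```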
Now I can calculate the expected production of the players in my control group by looking at per minute production (PER) against playing time (mpg).
Just as expected, the graph tends to go from the bottom left (low production, low minutes) to the top right (high production, high minutes). That is, players who receive more minutes are more productive. From the 1840 player-seasons in my data, I’m able to calculate the expected PER based on mpg (PER = .2158*mpg + 8.2941). So if a player averaged 10 mpg, you would expect his PER to be about 10.45. This equation is represented by the red line on the graph.
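The expected-PER equation is nothing more than an ordinary least-squares line fit to the control group. Continuing the sketch above (same placeholder column names), the fit and the 10 mpg example look roughly like this:

```python
import numpy as np

# Fit PER as a linear function of minutes per game for the control group.
# With the data described in the article this works out to roughly
# PER = 0.2158 * mpg + 8.2941.
slope, intercept = np.polyfit(control["mpg"], control["per"], 1)

def expected_per(mpg):
    """Expected PER for a given minutes-per-game figure."""
    return slope * mpg + intercept

print(round(expected_per(10), 2))  # about 10.45 with the coefficients above
```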
Now that our control group is defined, I need to create the test group. Again this group was defined by Ziller as role players who saw an increase in minutes. I selected player seasons where (again, a code sketch follows the list):
* The player’s age was an odd number.
* The player appeared in 41 games or more.
* The season was 1981 or later.
* The player averaged 10-25 mpg the year before.
* The player increased his mpg by 5+ from the year before.
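Because the test group is defined by a year-over-year jump in minutes, each season has to be paired with the same player’s season from the year before. A rough sketch, again using the placeholder columns (plus a hypothetical player_id):

```python
# Last season's minutes, shifted forward one year so they line up with
# the current season for the same player.
prev = seasons[["player_id", "year", "mpg"]].rename(columns={"mpg": "prev_mpg"})
prev = prev.assign(year=prev["year"] + 1)

pairs = seasons.merge(prev, on=["player_id", "year"])

# Test group: odd age, 41+ games, 1981 or later, 10-25 mpg last year,
# and an increase of at least 5 mpg.
test = pairs[
    (pairs["age"] % 2 == 1)
    & (pairs["games"] >= 41)
    & (pairs["year"] >= 1981)
    & (pairs["prev_mpg"].between(10, 25))
    & (pairs["mpg"] - pairs["prev_mpg"] >= 5)
]
```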
Since I have the expected values based on mpg, all that is left is to compare each test player’s actual production to what the control group predicts. In our test group 185 players did better than their expected PER, while 177 did worse. On average each player gained 0.17 PER. That’s a tiny gain, not nearly enough to show that players increase production with more minutes, but it clearly shows they didn’t decline: they at least matched the predicted PER.
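In code, the comparison is a simple subtraction against the control-group line (this assumes the test frame and the expected_per function from the earlier sketches):

```python
# Compare each test player's actual PER to the PER the control-group
# line predicts for his new minutes load.
test = test.copy()
test["expected_per"] = expected_per(test["mpg"])
test["diff"] = test["per"] - test["expected_per"]

print((test["diff"] > 0).sum())       # players who beat expectations (185 in the article)
print((test["diff"] < 0).sum())       # players who fell short (177 in the article)
print(round(test["diff"].mean(), 2))  # average gain, about +0.17 PER
```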
Another way to see how our prediction did is to calculate the regression (trendline) of the test group and compare it to the expected equation. The red line in the graph below shows the PER vs. mpg regression for our control group.
* Control: PER = .2158*mpg + 8.2941
* Test: PER = .2185*mpg + 8.3917
The test group, which has both the higher slope and the higher y-intercept, will slightly outperform the control group, but not by much. Even a player who saw 40 mpg would project to only about a .20 increase in PER, which is negligible. In other words, the test group has neither exceeded nor fallen short of our expectations; it has simply met them.
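The trendline comparison is the same kind of fit applied to the test group; the gap between the two lines at any given minutes load is then a one-liner (again assuming the frames from the earlier sketches):

```python
# Regression line for the test group, and the gap to the control line at 40 mpg.
t_slope, t_intercept = np.polyfit(test["mpg"], test["per"], 1)

gap_at_40 = (t_slope * 40 + t_intercept) - (slope * 40 + intercept)
print(round(gap_at_40, 2))  # roughly +0.2 PER with the coefficients quoted above
```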
In the end, what does this prove? Specifically, this study removes the selection problem in the earlier data: the possibility that the role players who saw extra minutes were simply the ones who had already improved. It debunks the idea that there is some kind of divide in per minute stats, where the per minute numbers of high-minute players are a better representation of actual talent than those of players who see only a few minutes per game. Combined with the past works of Hollinger, Kubatko, and Ziller, among others, it makes an even stronger statement: players who receive 10 or more minutes per game are likely to keep the same per minute stats no matter how much their playing time increases. Therefore per minute stats remain far superior to per game stats for comparing and evaluating players.
EXTRAS:
- “It’s a pretty simple concept, but one that has largely escaped most NBA front offices: The idea that what a player does on a per-minute basis is far more important than his per-game stats. The latter tend to be influenced more by playing time than by quality of play, yet remain the most common metric of player performance.” — John Hollinger
- The great thing about this study is that I can perform it again, this time using the “odd” aged players as the control and the “even” aged players as the test group. This time the prediction equation was PER = .2039*mpg + 8.4439. And again our test players slightly outperformed the average. This time 192 did better than their expected PER, while only 161 did worse. On average each player gained 0.23 PER.
- This article doesn’t mean that every player that has good per minute stats should see more playing time. It’s very clear that basketball stats don’t capture a player’s total ability. A player that does well on a per minute basis may have other flaws, such as poor defense, which prevent him from contributing more. This also isn’t an endorsement for any single per minute ranking system, like PER, WOW, etc. There are flaws in each of these in addition to being unable to account for attributes not captured in box scores.
- Summary of the events that led to this article.
Back in 2005, I wrote an article outlining some of the pioneers in per minute research.
In the 2002 Pro Basketball Prospectus John Hollinger asked and answered the question “Do players do better with more minutes?” For every Washington player, Hollinger looked at each game and separated the stats based on whether or not he played more than 15 minutes. He found that when players played more than 15 minutes, they performed significantly better than when they played less. To check his work, he used a control group of 10 random players, and each one of those improved significantly as well.
The knock on Hollinger’s study is the small sample size, containing fewer than 25 guys from only one season. Enter Justin Kubatko, the site administrator of the NBA’s best historical stat page, www.basketball-reference.com. Earlier this week Justin decided to re-examine the theory using a bigger sample size. Taking players from 1978-2004, he identified 465 that played at least a half season and saw a 50% increase in minutes the year after. Three out of four players saw an increase in their numbers as they gained more minutes, although the average increase was small (+1.5 PER).
Two independent studies have shown that NBA players get better when they get more minutes. A conservative interpretation is that per-minute numbers are universal regardless of playing time. So if a player averages 18 points per 40 minutes, he’ll do about that regardless of how many minutes he plays. A more liberal summary would say that underused players will see an improvement in their per-minute numbers if given more court time. A player that only averages 20 minutes a game is likely to be a little bit better if given 35. So the straight dope is per minute stats are a fantastic way to evaluate NBA players.
Recently, this research was questioned by the writers of freedarko.
The problem with this line of reasoning is that it assumes the homogeneity of court time. It assumes that if a player scored 20 points in 20 minutes, he would also score 40 points in 40 minutes. That there will be systematic differences between these two situations is almost too obvious to point out. It’s the difference between sharing the ball with Jordan Farmar while being guarded by Kenny Thomas, and sharing the ball with Kobe Bryant while being guarded by Ron Artest.
Insofar as the problem here is one of rotation, small-scale adjustments in minutes played shouldn’t create major distortions (it isn’t unrealistic to think that if Tim Duncan played 5 extra minutes per game, his per-minute production, as influenced by the level of defense he’d face, would basically be the same). But when PER catapults bench players into the starting five (or vice-versa), be on the look-out for inflation. Call this the Silverbird-Shoals Hypothesis, or the THEOREM OF INTERTEMPORAL HETEROGENEITY (TOIH).
Enter Sactown Royalty’s Tom Ziller, to refute Free Darko’s theory.
Shoals and Silverbird are arguing that because low-minutes high-PER guys typically play against fellow bench players, their PER is higher than it would be if they played starter minutes. They aren’t arguing (as some surmised) that PER is useless, just that it is prone to inflation. The argument, from seemingly everyone on the ‘anti per-minute statistics’ side, is that if you increase a player’s minutes, his efficiency will suffer.
There’s a problem with this oft-repeated claim: It’s not true.
Thanks to the data-collection efforts of Ballhype’s own Jason Gurney, I’m going to try to ensure this claim never gets stated as fact ever again. Using seasons from 1997-98 to the present, we identified all players who played at least 45 games in two consecutive seasons and who saw their minutes per game increase by at least five minutes from the first season to the second. The players must have played between 10 and 25 minutes per game in the first season, to ensure we were not dealing with either folks who went from none-to-some playing time or superstar candidates who took over an offense and thus got a minutes boost. This is aimed at role players whose role becomes more prominent — exactly the candidate FD’s Theorem of Intertemporal Heterogeneity implies will suffer from increased minutes.
Since I seem to express myself more clearly via Photoshop, here is the result of our mini-study.
No, increased minutes do not seem to lead to decreased efficiency. In fact, the data indicates increased minutes lead to… increased efficiency. More than 70% of the players in the study (there were 251 in total) saw their PER (which is, by definition, a per-minute summary statistic) increase with the increase in minutes. Players whose minutes per game increased by five saw an average change of +1.38 in their PER. The correlation between increased minutes and change in PER in this data set was +0.20.
One step further: Players who had at least five years of experience including their first season in this study and got the requisite 5-minute increase (106 such players) saw an average change of +1.26 in their PER. It’s not just young kids who happen to be improving and getting more minutes all at the same time — vets who get more minutes typically see their per-minute production rise. A full 67% of these players saw positive changes in PER with the increased minutes. (And this answers one of Carter’s concerns with existing studies.) Let’s bump this up to players who had at least eight years of experience going into their minutes increase; we had 52 such cases. The average change in PER: +1.31. Of these players, 69% saw their PER increase with more minutes.
Case closed, right? Well, not if Brian M. has something to say about it.
Imagine we wanted to test the relationship between duration of exercise and reports of fatigue. We have two experimental conditions, one group jogs for 10 minutes and the other for 30 minutes. We predict that the group that jogs 30 minutes will report more fatigue.
But we must assign people to the two groups randomly in order for the data to have any bearing on the hypothesis. If we systematically assign people who are in better shape to the 30 minute jogging condition, we may find that in fact, if anything, people report less fatigue with longer durations of exercise. But the study is flawed in a fundamental way and so the data don’t tell us much of anything. At most what the results of this poor experiment tell us is that the effect of exercise duration on reported fatigue is not so strong that it overrides the differences in health between the two groups. But that is a really limited conclusion, especially if we don’t even have means to quantify how much the two groups differed in health to begin with.