Statistical Analysis. Humor. Knicks.

Thursday, August 21, 2014

Do Players’ Per-Minute Stats Decline When Given More Minutes? No!

Poster Caleb put the following link in a comment to the Isiah Thomas report card, and I think it is interesting enough to get its own thread.

Here is the link, which is a piece written by tziller for BallHype, exploring the results of players who are “promoted” from the bench to starting.

Thanks to the data-collection efforts of Ballhype’s own Jason Gurney, I’m going to try to ensure this claim never gets stated as fact ever again. Using seasons from 1997-98 to the present, we identified all players who played at least 45 games in two consecutive seasons and who saw their minutes per game increase by at least five minutes from the first season to the second. The players must have played between 10 and 25 minutes per game in the first season, to ensure we were not dealing with either folks who went from none-to-some playing time or superstar candidates who took over an offense and thus got a minutes boost. This is aimed at role players whose role becomes more prominent — exactly the candidate FD’s Theorem of Intertemporal Heterogeneity implies will suffer from increased minutes.

[Graphic not shown.]

No, increased minutes do not seem to lead to decreased efficiency. In fact, the data indicates increased minutes lead to… increased efficiency. More than 70% of the players in the study (there were 251 in total) saw their PER (which is, by definition, a per-minute summary statistic) increase with the increase in minutes. Players whose minutes per game increased by five saw an average change of +1.38 in their PER. The correlation between increased minutes and change in PER in this data set was +0.20.
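The study’s two headline numbers (the share of improvers and the correlation between change in minutes and change in PER) are straightforward to compute. A minimal Python sketch, using a small invented data set in place of the real 251 player-season pairs:

```python
# Toy illustration of the BallHype-style computation. The (delta_minutes,
# delta_per) pairs below are invented stand-ins, NOT the real 251-player data.
pairs = [(5.2, 1.1), (6.0, -0.4), (8.3, 2.0), (5.5, 0.7), (7.1, 1.6),
         (9.0, -1.2), (5.8, 0.9), (6.6, 2.4)]

# Share of players whose PER rose along with their minutes.
improved = sum(1 for _, dp in pairs if dp > 0)
share_improved = improved / len(pairs)

# Pearson correlation between change in minutes and change in PER.
n = len(pairs)
mx = sum(dm for dm, _ in pairs) / n
my = sum(dp for _, dp in pairs) / n
cov = sum((dm - mx) * (dp - my) for dm, dp in pairs) / n
sx = (sum((dm - mx) ** 2 for dm, _ in pairs) / n) ** 0.5
sy = (sum((dp - my) ** 2 for _, dp in pairs) / n) ** 0.5
r = cov / (sx * sy)

print(f"{share_improved:.0%} improved, r = {r:+.2f}")
```

Run on the real player-season pairs, this kind of computation is what produced the 70%-improved figure and the +0.20 correlation quoted above.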

It is a fascinating piece, and I think it’s well worth a read.

36 comments on “Do Players’ Per-Minute Stats Decline When Given More Minutes? No!”

  1. Mr. Black

    Me no understand. Me used think me smart but me no understand lines and charts with numbers. Me think Paul Milsap good player me think he be still good player if he play more. Me did understand one part and me do agree. Malik Rose sucks.

  2. Brian M

    I don’t think this study is terribly informative. I posted a more elaborate comment at ballhype, but in brief, there is almost certainly a severe confound in the data, both within and across seasons: coaches give players more minutes when they play better. So the trivial and most likely explanation of the trend in the data is better play => more minutes => better efficiency with more minutes. If a minutes played => worse efficiency effect does exist, it’s entirely possible that it’s swamped out of the data by the minutes/ability confound.

    To conclude something about the effect of minutes played on efficiency, the confound must be controlled for somehow. The really surefire way to do this would be to force coaches to assign minutes according to the output from a random number generator, but of course that’s not terribly feasible.

  3. Caleb

    “coaches give players more minutes when they play better”

    Aren’t cause/effect questions a similar limitation on any stat? No reason to single this one out.

    In any case, it’s not an ironclad link between performance and playing time. Coaches try untested players who don’t pan out, players are forced into extra minutes because of injuries, etc.

    In a decent-size sample the true impact still shows.

    But I agree with silverbird’s comment on the ballhype site, that it would be a stronger study if looked at all players whose minutes CHANGED, not just those who got MORE minutes.

  4. Brian M

    Mike, I don’t think so. The argument for the minutes/ability confound isn’t dependent on a player’s baseline level of ability. If a player with a PER of 8 becomes a player with a PER of 12 the following season, all else being equal we should still expect his minutes to rise. Likewise, if his PER drops from 12 to 8, we should still expect his minutes to take a hit, all else being equal.

    None of this is to say that per minute stats are bad. I think they’re great. The possibility that there is a negative relationship b/t minutes played and efficiency (all else equal) is not a bad thing for per minute stats because the primary use of per minute stats is not to project how a player will perform given X minutes. The reason we use per minute stats is just to make players’ statistics more directly comparable.

  5. Mike K. (KnickerBlogger)

    Brian, I understand – I’m just curious how you would test such a thing without buying a team and setting the minutes/roster any way you wish. Even if you go to in-season data, how could you rule out that it was the player’s improvement that earned the extra playing time?

    I can’t say I buy this “explanation” of coaches realizing that their players are much better and giving them playing time accordingly. It seems especially implausible when we’re looking at 8-year vets. In other words, if 70% of all players improve their per-minute stats when given 5+ minutes, and you’re saying that this is due to improvement, then we should be able to isolate players that improve. We know that across the board in every sport younger players improve, and older ones decline. So if this theory is correct, then by looking at just older players we should see less of an improvement. But Tom’s results? Nearly the same percentage (69%) saw an improvement.

    To me that’s pretty damning evidence against coaches giving more minutes to those who are improving, because 70% of 28-30 year olds are not getting better.

  6. Ted Nelson

    I thought the author of the piece was very careful to point out that there was a question of causation, and the thesis was not “Do Players’ Per-Minute Stats Decline When Given More Minutes? No!” but rather “Do Players’ Per-Minute Stats Decline When Given More Minutes? We don’t know, but there is no evidence suggesting they do.”

  7. Brian M

    Mike, in the ballhype data they specifically selected players whose minutes increased some minimum amount across two consecutive seasons. If I understand your point correctly, you’re saying that older players tend to decline rather than improve. That’s true of course, but it doesn’t immediately come to bear on the ballhype data I think. This is because they didn’t look at older players in general, they looked specifically at older players who did see an increase in minutes. So it’s quite possible that this group of older players they looked at were specifically selected such that they tended to have improved relative to their prior season. In other words, it’s not necessarily a random sample of older players, but plausibly a biased sample.

  8. Brian M

    Ted– the problem is that the question of causation really is central to how the data comes to bear on the claim that efficiency drops with minutes. Without settling the issue of causation the data do not really say one thing or another about the hypothesis under scrutiny.

  9. Mr. Black

    I would be much more interested in the impact that a lucrative long-term deal has on a player’s PER. I think Boris Diaw would actually be a great case study. There has been a good amount of anecdotal evidence to suggest players have great seasons during what is called a “contract year.” (See Larry Hughes and Boris Diaw.) Does the contract have some impact on production?

    I think this would be very hard to quantify. I haven’t researched this at all, but if memory serves, a lot of lucrative free agent deals involve a player moving via sign-and-trade or straight free agency. (See Rashard Lewis.) Is it fair to compare the stats of a player on a new team to the stats he put up on a different team? Arguably the new team will have a very different system and certainly a different set of teammates. For example, Larry Hughes had a career year in Washington, during his contract year. In Washington, he benefited from having two strong scorers around him (Arenas and Jamison). Because those players played at his wings, Hughes being a 2 with Arenas and Jamison at the 1 and 3, it was hard for teams to double Hughes by leaving Arenas or Jamison. Washington played a wide-open offensive style that did not at all rely on post scoring. Hence, more backcourt touches.

    Hughes then signed a big deal with the Cavs. The Cavs have a huge talent at the 3 (James) but a zero, at least on offense, at the 1 (Snow). Now Hughes lacks an offensive weapon at the 1 to keep defenses honest. He also moved to a team that played a less open offense; the Cavs rely on James to create most of the offense, which was not the case in Washington. The end result is a season in which Hughes does not come close to duplicating his final year in Washington. But does that mean that a player “dogs” it after the fat contract? I’m not sure.

    Certainly, we have the Jerome Jameses of the world, for whom retirement begins shortly after the big contract is signed. The same could be said of players like Ike Austin. But those players had mediocre careers, then a short burst during a contract year, then went back to normal production. And both Ike and Jerome, like Larry, left their old teams to get the big deal.

    On the flip side is Kobe Bryant. Bryant has improved his play following the lucrative extension. However, Bryant did not change teams. Steve Nash not only changed teams but also improved significantly after getting the big deal.

    I would like to see an article that uses stats to explore the “fat contract” phenomenon. Is Boris Diaw the latest example of declining returns on a big extension? He stayed with his team, and he didn’t come near the PER of his contract year.

    Any of you stat heads want to give this a try? If you do, please use small words and lots of pictures. Me no good understand numbers on line chart.

  10. dave crockett

    Ted wins the candy bar on this one I think. What we can say with a fair degree of certainty is that there’s no reason to think performance declines with additional minutes.

  11. Frank

    Hi all- this stats thing is a bit beyond me but all I can say is that this is a classic case of why retrospective analyses are not trustworthy at all. You cannot possibly go back and try to take out all the confounders and so the data get swamped by hundreds of factors that were never looked at. An example in the health care community is the case of giving hormone replacement to women after menopause. Many retrospective studies showed that it had all kinds of benefits, but when a true prospective study was done, it had clearly a negative impact on the various endpoints they looked at. So the ballhype analysis holds very little weight in my mind.

    That being said, all I can say about this from an intuitive standpoint is yet another analogy: in baseball, it is general knowledge that starters must have at least 3 pitches in order to last 6-7 innings. Relievers like Mariano Rivera only have to have 1 great pitch to get through their 1 inning. Does that mean that if Mariano pitched 9 innings that he would have won every game he ever pitched with his career ERA of 2.33? Obviously not. If Mariano pitched 2 innings every game he might still be great. But once you start going to 3 and 4 innings/game, the other team starts to catch up to your tricks and adjusts. And who knows then whether Mariano could similarly make the adjustment. That is why I think extrapolating stats from 22 min/game to 40 min/game is a useless exercise. It leads you to draft players like Stromile Swift in your fantasy draft hoping their 8 points, 4 rebounds, and 2 blocks in 12 minutes will suddenly turn into 24, 12, and 6. (that obviously hasn’t happened for good ol’ Stromile).

    I am also still waiting for Ike Diogu to have those ridiculous stats that Hollinger predicted.

    If you are truly that good and have that great an impact on the game, the other team WILL adjust, target your weakness, and shut you down, unless your name is Kobe, MJ, Garnett, etc. And if your name were Kobe/MJ etc. you would never be playing just 22 min/game in the first place.

  12. Frank

    Well, no. He played 30 min/game this year and averaged 10 and 10. So even if we believe he will have no dropoff in efficiency between minutes 30 and 40, he’ll end up at 13.3 and 13.3.

    I don’t think he’ll ever be > 15-16 ppg.

  13. Ted Nelson

    “So the ballhype analysis holds very little weight in my mind.”
    “Without settling the issue of causation the data do not really say one thing or another about the hypothesis under scrutiny.”

    He never seems to assert that a player’s efficiency improves with minutes. He admits that the causes behind the numbers are not known. He just seems to be presenting the numbers that were found and analyzing them very briefly. Here’s a quote from the article:

    “The data is inconclusive — even if it were more conclusive, such as the previously highlighted notes — we cannot ascertain motive from these relationships. As the adage goes, correlation does not mean causation. Or in terms I prefer: We don’t know.
    I want to emphasize this again: We don’t know. We cannot look at any of this data and say ‘Increased minutes leads to increased per-minute production (aka efficiency)’ just as we cannot and should not say ‘Increased minutes leads to decreased per-minute production.’”

    Frank – The principal use of per-40-minute stats is usually to say how efficient a player was, not how good a player would be in increased minutes. The problems associated with giving, say, Lee Nailon or Stro Swift 40 mpg are well known.

    Per-minute numbers are multiplied by 40 minutes to make them more familiar to us: we’re used to looking at the numbers of starters and stars who play 40 minutes. They could easily be left per minute or multiplied by 10, 100, 1000, or whatever number. 40 is just convenient because it’s about how many minutes a good starter will play. We can say “wow, Backup PGX is really a good passer because he dished 10 ast/40, and that’s how many Jason Kidd dishes” as easily as we can say “Starting PGY isn’t much of a passer or doesn’t pass much because he only dished 3 ast/40, which is less than our starting PF.”

    The numbers don’t have to be used to determine how many minutes a guy should get, we can simply say that Player X is a fairly productive 20 mpg player, while Player Y is a rather unproductive 27 mpg player far more easily than if we had to weigh their mpg in our heads while looking at per game stats.
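Ted’s per-40 conversion is just a linear rescaling of raw totals. A one-function sketch (the 120-assists-in-480-minutes backup is a hypothetical example, not a real player line):

```python
def per40(stat_total: float, minutes_played: float) -> float:
    """Rescale a raw stat total to a per-40-minutes rate."""
    return stat_total / minutes_played * 40

# Hypothetical backup: 120 assists in 480 minutes -> 10.0 ast/40.
print(per40(120, 480))
```

Nothing about the function predicts how the player would fare in 40 actual minutes; it only makes players on different minute loads comparable, which is Ted’s point.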

    The “stats thing” is really not that complicated. You might find Dean Oliver’s Basketball on Paper, one of Hollinger’s Pro Basketball Prospectus annuals, and, although I haven’t read it (Owen would certainly recommend it), Berri’s book to be interesting and informative reads. Two important things to keep in mind, which many “anti-stats” people seem to forget, are that people are trying to use stats to analyze and predict outcomes on the court (it’s not meant to be totally abstract) and that there is a lot of debate among people who follow stats (take all the debates between Owen and others about WOW).

  14. Owen

    Ok, I post about Dave Berri all the time. So one more. His player metric, despite how it often sounds coming out of my mouth, wasn’t actually designed as a killer app for settling bar and locker room sports arguments. The purpose of his metric was actually to answer questions like the one on this thread.

    FWIW, and I don’t have the book in front of me, I do think he finds that player performance actually rises slightly as minutes increase. He has said on his blog I believe that this MAY happen because the player is performing better, as is often the case with young players. However, he definitely does not find the player performance drops as minutes increase. I think he had a big post on Mikki Moore, who had a “breakout” season last year, in which his WP48 was virtually identical to his career average.

    I also don’t think the contract year outperformance is a real thing. I am not sure if he showed that in basketball, maybe in baseball. But will check on that when I get home…

  15. Jersey J

    “That being said, all I can say about this from an intuitive standpoint is yet another analogy: in baseball, it is general knowledge that starters must have at least 3 pitches in order to last 6-7 innings. Relievers like Mariano Rivera only have to have 1 great pitch to get through their 1 inning. Does that mean that if Mariano pitched 9 innings that he would have won every game he ever pitched with his career ERA of 2.33? Obviously not. If Mariano pitched 2 innings every game he might still be great. But once you start going to 3 and 4 innings/game, the other team starts to catch up to your tricks and adjusts. And who knows then whether Mariano could similarly make the adjustment. That is why I think extrapolating stats from 22 min/game to 40 min/game is a useless exercise. It leads you to draft players like Stromile Swift in your fantasy draft hoping their 8 points, 4 rebounds, and 2 blocks in 12 minutes will suddenly turn into 24, 12, and 6. (that obviously hasn’t happened for good ol’ Stromile).”

    I agree with Frank. It’s refreshing to know someone else understands the business of sports.

  16. Brian M

    “He never seems to assert that a player’s efficiency improves with minutes. He admits that the causes behind the numbers are not known. He just seems to be presenting the numbers that were found and analyzing them very briefly.”

    Maybe it’s just me, but the overall tone of the article seemed something like this: I failed to find efficiency decreasing with minutes played, so stop saying this effect exists. But of course, the failure to detect the effect doesn’t mean it’s not there. At most what we can say from this data is that if more minutes played does tend to suppress efficiency, it’s not an effect so strong that it overturns the other factors that mediate the relationship b/t minutes and efficiency. And it would really have to be a huge effect to swamp out some of the things that determine how players actually get minutes, e.g. playing better.

    I agree with the point that people should not state that efficiency declines with minutes as if it’s a proven effect. But to demonstrate this point all you need to do is note that this effect has not been shown to exist, and you’re done. The analysis of how minutes played correlates with PER doesn’t add much to the discussion because it’s not a pure analysis of how minutes played relates to efficiency, holding all else equal.

  17. Mike K. (KnickerBlogger)

    “That being said, all I can say about this from an intuitive standpoint is yet another analogy: in baseball, it is general knowledge that starters must have at least 3 pitches in order to last 6-7 innings. Relievers like Mariano Rivera only have to have 1 great pitch to get through their 1 inning. Does that mean that if Mariano pitched 9 innings that he would have won every game he ever pitched with his career ERA of 2.33? Obviously not. If Mariano pitched 2 innings every game he might still be great. But once you start going to 3 and 4 innings/game, the other team starts to catch up to your tricks and adjusts. And who knows then whether Mariano could similarly make the adjustment.”

    There are plenty of places where intuition fails us. For instance it’s intuitive to think that flying is less safe than driving. I mean who would think that a plane weighing 150 tons and carrying 200 people 30,000 feet in the air is safer than driving in a car on the highway? Intuitively that’s just bat-sh!t crazy.

    And yet the stats bear it out. Every year on average 40,000 people in America die from car accidents, but only about 200 from airplane related accidents. So you’re more likely to die from the taxi ride to the airport than the actual flight.

    But all in all I think your analogy is poor. I think Tom was clear in his statement.

    The argument, from seemingly everyone on the ‘anti per-minute statistics’ side, is that if you increase a player’s minutes, his efficiency will suffer.

    There’s a problem with this oft-repeated claim: It’s not true.

    So in fact basketball players are not like baseball relievers. Baseball relievers’ efficiency will decline with the increase in usage. Tom Ziller looked at every qualifying player over the last 10 years who saw an increase in playing time, and his numbers show no decline in per-minute production with increased playing time.

    After reading his article, the better analogy is that basketball players are more like non-platoon hitters. If a hitter went .280/.350/.450 in 300 ABs, you’d expect him to do that in 600 ABs.

  18. Ted Nelson

    “I agree with the point that people should not state that efficiency declines with minutes as if it’s a proven effect.”

    A lot of posters were stating this as a concrete fact and implying that anyone who didn’t recognize it was crazy. The article seems intended to address those people.

    “And it would really have to be a huge effect to swamp out some of the things that determine how players actually get minutes, e.g. by playing better.”

    I agree that playing better often leads to minutes, but there are certainly players who get more minutes because their team is trying to play younger players or save money, or because a teammate is injured, suspended, leaves the team, or starts playing worse, while the guy getting more minutes didn’t actually start playing better. And there are coaches who for various reasons sit better players behind worse players.

    Even then you can still debate about whether the guy who increased his minutes did so because the coach thought he was playing better in practice. The only way to conduct a proper study would seem to be to ask every coach about minutes handed out before or right after the game.

    “The analysis of how minutes played correlates with PER doesn’t add much to the discussion because it’s not a pure analysis of how minutes played relates to efficiency, holding all else equal.”

    I think the author agrees with you. “We don’t know. We cannot look at any of this data and say ‘Increased minutes leads to increased per-minute production (aka efficiency)’ just as we cannot and should not say ‘Increased minutes leads to decreased per-minute production.’”

    He states “our theorem that increased minutes does not necessarily lead to decreased efficiency,” not that increased minutes leads to increased efficiency.

    “But to demonstrate this point all you need to do is note that this effect has not been shown to exist, and you’re done.”

    I don’t think you’re done. You’re just starting on level ground instead of going into things with a preconceived notion that increased playing time decreases efficiency. Players usually getting more minutes by playing better is another, logical, preconception.

    This whole thing seems like a vicious circle where you can never really establish causation because there is always a counter-argument. If you start to get enough evidence on one side, though, the counter-arguments start to look like big tobacco blaming diet and exercise and whatever other variables for the bad health of smokers. So, I guess in this sense even flawed studies on the subject can be useful.

  19. Caleb

    If you really want to go with a Mariano Rivera analogy, most people would say that Mariano was the most valuable Yankee, or close to it, on every world championship team. Despite being a specialist.

  20. Caleb

    re: the study, no one would call it the last word on the subject… but the results are pretty suggestive. Even if there are weaknesses, there’s NO evidence to suggest that “role” players who get more PT play any worse. (on the whole)

    They probably did a limited study because they didn’t have unlimited time. If you were doing something more comprehensive, you’d look for the correlation between playing time and PER for all players getting more than some minimal amount (say, 10 mpg) in both the “before” and “after” year.

    Among other things, you’d look for a dose-response effect, i.e.: did the change in PER (better or worse) have a relationship to the size of the increase/decrease in minutes.

    The cause/effect question (are players getting more PT only because they started playing better?) is a big confounder, so a larger sample size is important.

  21. retropkid

    Any study like this is skewed by first-year guys often not getting a lot of time, then improving, gaining a coach’s trust, and getting minutes and being successful with them.

    What can we learn about the guys who didn’t improve? Is there a common theme in that sub-set of players?

    More important, though, would be knowing what happened on defense during those increased minutes as well. Individual stats aren’t hard to increase, but did an individual make a difference in team points versus opponents’ team points while on the floor?

    In the NBA, just about every player would score 20 points a game if given 20 shots a game. Guys like Barkley were incredibly efficient in points per shot attempt. Track MJ’s career: while he still scored at the end, he needed a lot more shots to do it, so he wasn’t helping his team as much with all those misses.

    It might be interesting to run points per shot attempt up against that increased minutes analysis …and if scoring improved, how much was better shooting percentage and how much came from the foul line…though in either case, increasing points per attempt is a pretty good barometer of helping your team.

  22. Caleb

    I sort of like PER as the measure in this research, since it is semi-accepted and at least tries to incorporate a wide range of measures.

    In theory, it would be interesting to do the same analysis on different stats – TS%, points/shot, rebounds, defensive measures, etc. – to see which aspects of a player’s game tend to stay flat, and which improve or worsen with additional minutes. (I suspect you’d see players shoot and score at a higher rate as they get a more prominent role, even if their overall game doesn’t improve)…

    …but in practical terms, it’s complicated enough just to run one set of numbers (with PER, for example).

  23. retropkid

    It’s a pretty neat analysis as is — didn’t mean to suggest it wasn’t.

    If anybody was game, I do think a look at the player universe that didn’t improve would be interesting. Were they mostly centers?…were they guys who got time because a starter was injured (vs. improvement in ability)?…Bad foul shooters? Guys who couldn’t shoot threes? What did they have in common with one another?

    What factors can be teased out that over-index with the group that doesn’t improve performance with more time?

  24. Brian M

    “re: the study, no one would call it the last word on the subject… but the results are pretty suggestive. Even if there are weaknesses, there’s NO evidence to suggest that ‘role’ players who get more PT play any worse. (on the whole)”

    Imagine we are debating whether a building in a secret government location is colored red or not. Since it’s top secret we can’t just go and look for ourselves. But suppose someone sends out a spy camera that gets near the building and snaps a photo. But the photo is just a mush of nonsense because the lens was cloudy and scratched, and there were trees in the way of the shot.

    Now the guy who snapped the photo shows it to everyone and says “there is no evidence that the building is red.” Sure, the photo shows no evidence that the building is red. But this is a trivial result; the photo doesn’t really tell you much of anything because it didn’t get a clear shot of the building in the first place. So it does not help the discussion to even bring up what the photo recorded.

  25. Caleb

    “The photo is just a mush of nonsense because the lens was cloudy and scratched, and there were trees in the way of the shot… the photo doesn’t really tell you much of anything because it didn’t get a clear shot of the building in the first place.”

    I don’t see why the picture is as murky as that analogy. The study results may blur cause and effect, but as a group the players studied played significantly better when they played more minutes. If there were a strong current in the opposite direction – if players were frequently exposed when the coach gave them more burn – I don’t think you’d have such robust numbers. Of course my opinion is subject to change as new evidence comes in.

    Even a blurry picture is better than no picture. You might not be able to tell what shade of red the building is, but you can tell if it’s a house or a skyscraper.

  26. Mike K. (KnickerBlogger)

    Brian – the only problem I have with your analogy is that we do have some evidence. It may not be a picture of the house with someone holding up today’s newspaper accompanied by the negatives. But with that many player seasons not pointing to a decline in production, we’ve got someone saying “I live on this block, and I’m not sure if there’s a red house in that location or not, but none of the houses on the block are blue.”

    For it to be true that an increase in playing time means a decrease in production, then both of these must be true:
    A. Of the ~70% of players who didn’t decline, a majority improved on their own to earn the playing time.
    B. Of the ~30% of players who did decline, coaches were forced to play a majority of them due to injury/roster problems.

    That seems to be a bit of a stretch to me. I don’t see that as being analogous to a blurry picture. It’s not a clear picture, but there is a decent amount of evidence to the contrary, without anything substantial to back up the opposing theory.

  27. jon abbey

    “Every year on average 40,000 people in America die from car accidents, but only about 200 from airplane-related accidents. So you’re more likely to die from the taxi ride to the airport than the actual flight.”

    This is a totally mistaken conclusion. You need to factor in how many car rides people take versus plane trips, and if it’s more than 200 times as many (as I’d assume), then flying is in fact more dangerous.
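jon’s objection is about denominators: raw death counts say nothing about relative risk without exposure counts. A sketch with invented placeholder numbers (not real transportation statistics):

```python
# Raw death counts mislead without exposure denominators. All counts below
# are invented placeholders, NOT real transportation statistics.
car_deaths, car_trips = 40_000, 400_000_000_000
air_deaths, air_trips = 200, 10_000_000

car_rate = car_deaths / car_trips  # deaths per trip
air_rate = air_deaths / air_trips

# With these made-up denominators the per-trip ranking differs from the raw
# totals; with different denominators it wouldn't. Totals alone can't settle it.
print(f"per-trip: car {car_rate:.2e}, air {air_rate:.2e}")
```

The same denominator problem (minutes played, shot attempts) is why per-minute and per-shot stats exist in the first place.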

  28. Brian M

    The data gives us evidence of how efficiency covaries with minutes, but the problem is that the claim being tested isn’t just about how efficiency covaries with minutes. It’s specifically a claim that the factor of minutes played, in itself, has a direct negative relationship with efficiency. The existing data can’t give evidence about this claim b/c minutes are not assigned randomly, but in a systematic way.

    Imagine we wanted to test the relationship between duration of exercise and reports of fatigue. We have two experimental conditions, one group jogs for 10 minutes and the other for 30 minutes. We predict that the group that jogs 30 minutes will report more fatigue.

    But we must assign people to the two groups randomly in order for the data to have any bearing on the hypothesis. If we systematically assign people who are in better shape to the 30 minute jogging condition, we may find that in fact, if anything, people report less fatigue with longer durations of exercise. But the study is flawed in a fundamental way and so the data don’t tell us much of anything. At most what the results of this poor experiment tell us is that the effect of exercise duration on reported fatigue is not so strong that it overrides the differences in health between the two groups. But that is a really limited conclusion, especially if we don’t even have means to quantify how much the two groups differed in health to begin with.
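Brian’s jogging thought experiment can be simulated to show how confounded assignment can flip the sign of the measured effect. The fitness distribution and effect sizes below are invented purely for illustration:

```python
import random

random.seed(0)

def simulate(assign_randomly: bool, n: int = 10000) -> dict:
    """Fatigue = baseline - fitness + 0.1 * duration (minutes) + noise."""
    results = {10: [], 30: []}
    for _ in range(n):
        fitness = random.gauss(0, 5)  # latent "in shape" score
        if assign_randomly:
            duration = random.choice([10, 30])
        else:
            # Confounded assignment: fitter people get the longer jog.
            duration = 30 if fitness > 0 else 10
        fatigue = 20 - fitness + 0.1 * duration + random.gauss(0, 1)
        results[duration].append(fatigue)
    return {d: sum(v) / len(v) for d, v in results.items()}

rand = simulate(True)   # randomized groups
conf = simulate(False)  # confounded groups
print(rand[30] - rand[10], conf[30] - conf[10])
```

With random assignment the 30-minute group reports more fatigue (the true effect built into the model); with fitness-based assignment the very same underlying effect shows up reversed, which is exactly the worry about coaches assigning minutes.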

  29. NJ-n-GA

    Stats are for statisticians and mathematicians. They can help you make your argument no matter which side of the fence you are on.

    “You are more likely to be struck by lightning than bitten by a shark or stung by a bee.” That might be true, but I know one thing: if you locked me in a room with 1000 bees or dropped me in a tank of great white sharks, those odds would be reversed.

    Sports are no different than anything in life, it’s all about timing, being in the right place at the right time, and having the right people around you.

    Let’s talk sports people NOT STATS!

  30. Brian M

    Here’s a sketch of how the analysis could be improved (though it’s still not perfect). Basically the idea is to statistically control for the purported confound in the data, i.e. the effect whereby coaches give better players more minutes.

    Within a given season, you could estimate how minutes covary with PER by running a regression analysis. Since better players tend to receive more minutes, this should be a significantly positive relationship. (You could even run separate regressions for each coach, assuming the relationship b/t PER and minutes assigned is different for each coach.) The regression analysis can then basically serve as a quantitative prediction of how players are awarded more minutes as a function of increases in PER. Call this “the coaching effect.”

    Now, the key variable of interest is not how a player’s PER changes b/t 2 seasons where he sees a big increase in minutes. The variable of interest is how the player’s actual change in PER as a function of minutes played differs from the quantitative prediction of how his PER should have changed with the increase in minutes, according to the coaching effect. By subtracting the predicted change from the actual (i.e. by calculating the residual displacement of each data point from the regression line), we will have cancelled out any variance in the data attributable to how coaches dole out minutes according to performance (at least in theory).

    With this data set, the hypothesis to be tested would be that the residuals are negatively correlated with the change in minutes played (i.e. after controlling for the coaching effect, we should see that PER drops proportionately to the increase in minutes played).

    For instance, suppose player A plays 10 mpg one season and then 25 mpg the next. Suppose that the coaching effect predicts that in order to have earned 15 more mpg, his PER must have increased by 5 points. If his PER actually only increased by 2 or 3, though, then his actual rise in efficiency is lower than that predicted by the coaching effect. If a pattern like this could be established to occur with consistency, where on average actual changes in PER undershoot predicted changes in PER according to the coaching effect, we would conclude that in fact, increases in minutes do tend to decrease PER, once we take the coaching effect into account.
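A minimal sketch of the residual approach Brian describes. All player numbers below are fabricated for illustration; a real analysis would use actual season-by-season PER and minutes, and a properly specified regression:

```python
# Step 1: within-season regression of minutes on PER ("the coaching effect").
# Step 2: for players whose minutes jumped, compare the actual PER change to
# the change the coaching effect implies, and inspect the residuals.
# All numbers below are invented for illustration.

def ols(xs, ys):
    """Simple least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Season 1: (PER, minutes per game) for a league of hypothetical players.
season1 = [(8, 12), (10, 15), (12, 18), (14, 22), (16, 26), (18, 30)]
slope, intercept = ols([p for p, _ in season1], [m for _, m in season1])
per_per_minute = 1 / slope  # PER gain "required" per extra mpg, per the fit

# Promoted players: (delta_mpg, actual delta_PER) pairs -- also invented.
promoted = [(6, 0.8), (8, 1.5), (5, 0.2)]
residuals = [dper - dmpg * per_per_minute for dmpg, dper in promoted]
# Consistently negative residuals (actual PER change undershooting the
# coaching-effect prediction) would support "minutes suppress efficiency".
print(residuals)
```

This only controls for the coaching effect to the extent the regression captures it, which is Brian’s own caveat that the approach is “still not perfect.”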

Comments are closed.