Knicks acquire draft rights to Jared Jordan

NEW YORK, September 30, 2007 - New York Knickerbockers President of Basketball Operations and Head Coach Isiah Thomas announced today that the draft rights to guard Jared Jordan have been acquired from the Los Angeles Clippers in exchange for cash considerations. Jordan was originally selected by the Clippers in the second round (45th overall) of the 2007 NBA Draft.

Jordan, 6-2, 190 pounds, averaged 13.1 points and 6.9 assists in 117 career games (108 starts) at Marist College. The 22-year-old Hartford, CT, native averaged 17.2 points and 8.7 assists as a senior in 2006-07 and became the first player since Avery Johnson to lead the nation in assists in NCAA Division I play for two consecutive seasons.

As a member of the Clippers’ entry in the 2007 NBA Summer League in Las Vegas, Jordan averaged 4.2 points and 4.8 assists in five games (all starts).


Some links about Jordan:

http://www.draftexpress.com/viewprofile.php?p=1094

Jordan continues to impress here in Orlando, outplaying more highly regarded point guards such as Bobby Brown and Sean Singletary in this afternoon’s matchup. Jordan’s team is now 2-0, and it would be safe to say that his play at the point guard position has been a major reason for this. Today Jordan racked up a sparkling 7 assists against 0 turnovers, with just about every single pass he made productive in some form. Whether it was finding cutters in the halfcourt or handing out crisp, accurate open-court passes on the move, Jordan’s mark was on this game whenever he was on the court.

Everyone’s favorite draft prospect here in Orlando (at least amongst the media and staff members) had another solid outing in his final performance at the pre-draft camp, leading his team to its third straight win. Despite lacking an incredible first step, Jordan got into the paint again and again, utilizing terrific ball-handling skills, hesitation moves and crafty spins to force defensive rotations and find the open man instantaneously as soon as they freed up. He also did a nice job running the pick and roll, and pushed the ball up the floor when given the opportunity to do so. He’s an old-school player who doesn’t score a ton of points (really lacking a steady 3-point shot, a consistent pull-up mid-range jumper and a more reliable floater he can go to), but he did score some points late in the game when his team really needed a bucket.

http://sportsillustrated.cnn.com/2007/writers/the_bonus/05/24/jared.jordan/

There are questions about Jordan’s game — no ups, not strong enough or quick enough laterally for defense, an inconsistent three-point shot. But one Eastern Conference scout with whom I spoke has another take. “We’re bereft of really good point guards in the league,” he said. “We have plenty of guys who can shoot, but consider passing secondary. Jordan’s the ideal leader. He has complete control of the game, and his team is so much better when he’s out there. He looks like the real thing to me.”

http://www.nbadraft.net/admincp/profiles/jaredjordan.html

Hustler who has no problem doing the dirty work like diving on the floor for loose balls … Constantly looks to push the ball up the court to ignite the fast break … Great decision maker with the ball in his hands

Jared Jordan Interview

I think athleticism is emphasized too much in terms of how high someone can jump, and people don’t realize how quick I am until they play against me in person. I just need to keep playing and hope that I can be in a situation to prove them wrong.

Stats from Yahoo.

Pre-Draft workout from YouTube.

Balkman Hurt!

According to the Associated Press (via SI.com, via poster DS):

http://sportsillustrated.cnn.com/2007/basketball/nba/09/27/balkman.knicks.ap/

Forward Renaldo Balkman will miss at least four weeks because of a stress reaction in his right ankle, leaving the Knicks without a key reserve just days before the start of training camp.

An MRI exam performed this week also revealed a small cartilage injury in the ankle, the team announced Thursday. Balkman will be fitted for a walking boot and will be re-evaluated after resting for four weeks.

Here at KB, we wondered if Balkman would start the season at SF. Given Balkman’s rookie season & his phenomenal summer league, it seemed at least possible that the Knicks would open the year with Balkman at the 3. In this scenario, Balkman would probably have needed a strong preseason to showcase his abilities. Consequently we also hoped that Richardson would either start or see significant time at SG. However this injury seems to have derailed that opportunity. So it looks like the opening day lineup is either Crawford/Richardson, Crawford/Jeffries, or Richardson/Jeffries. Another question remains: how much does Balkman’s injury hurt his chances at starting?

Do Stats Lie?

Lately I’ve been thinking about the greatest offensive team of the last 20 years. Led by Michael Adams and Orlando Woolridge, the mighty 1991 Denver Nuggets punished opponents by scoring 119.9 points a night. That Nuggets offense just beats out the 1992 Mullin-Hardaway Warriors (118.7 pts/g) and the 1989 Chambers-K.J. Suns (118.6 pts/g). Certainly, since the 1991 Denver Nuggets scored more points per game than any team since 1987, they were the NBA’s best offense in that timespan.

Or are they? This seems to be a dubious claim. Looking at the 1991 Nuggets, none of the players were voted to the All Star team that year. There aren’t any Hall of Famers on that team. Denver went a rancid 20-62 that year. Of the three teams above, there are no champions. No Michael Jordan. No Magic Johnson. No Larry Bird. No Shaq. No Steve Nash.

How can a 20-win team be one of the great offensive teams of all time? You might say that the stats are “lying” because they’re misrepresenting what we believe to be true. But that’s not the case. The numbers are 100% accurate. If you watched every game of the last 20 years, you would not have found a team that scored more points in a season than the 1991 Nuggets. Saying the 1991 Nuggets scored the most points per game in the last 20 years is true. Saying the 1991 Nuggets are the best offensive team in the last 20 years is false. The deception is in the interpretation of the statistics, not in the stats themselves. The problem is in equating “most points per game” with “best offensive team”. The correct interpretation for “most points per game” is “most bountiful offense”, which is quite different from “best offensive team”.

Take this example: Going into the 2007 season, the Chicago Bears have a good chance to win the Super Bowl. One Vegas line has their odds at 8-1 to win it all. One of their best players is Rex Grossman, who has a fantastic 17-5 record as a starting QB.

Once you pick yourself up off the floor laughing, it’s easy to see where the fallacy is. The Bears do have a good chance to win the Super Bowl this year. Their odds to win, at least from one Vegas site, are 8-1. Rex Grossman has a 17-5 record as a starter. All these things are true. However, their QB is not the reason they’re one of the best teams in the NFL. Rex Grossman is by all accounts a bad quarterback. Carson Palmer, an All-Pro, has a winning percentage of only 55.6%. The deception is in saying that QB win percentage indicates the quality of the QB. There are better ways to judge the ability of a QB, like completion percentage, TD-INT ratio, yards per attempt, etc.

Getting back to our original example, those 1991 Nuggets scored so many points per game because they ran a very fast offense (and also a very fast defense). Denver led the league in pace, averaging 113.7 possessions per game. To show how much of an aberration this was, the league average was only 97.8, and the second fastest team was the Golden State Warriors at 103.6 possessions per game. A team can increase its points per game by simply increasing its pace. This reveals a flaw in the relationship between “points per game” and “best offense.” It’s obvious that points per game isn’t the best measure of a team’s offensive capability.

To more accurately judge which team had the best offense, you need to account for this disparity in possessions per game. Offensive efficiency, sometimes known as offensive rating, calculates how many points a team scores per possession (or more accurately 100 possessions). The importance of offensive efficiency is that it evens the playing field between the fast and slow paced teams. The 1991 Nuggets had an offensive efficiency of 105.2, which placed them 21st out of 27 teams that year. The best offensive team in 1991? The Chicago Bulls, who scored 114.9 points per 100 possessions. This was Jordan’s first championship team, and clearly they were better than the Nuggets on offense that year.
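The pace adjustment described above is simple enough to sketch in a few lines of Python. This is a naive per-game ratio for illustration only; basketball-reference estimates possessions from box-score data, so its 105.2 figure for the 1991 Nuggets differs slightly from this back-of-the-envelope number.

```python
# Offensive efficiency: points scored per 100 possessions.
# Naive estimate from season averages (real possession counts
# are estimated from box scores, so published figures differ a bit).
def offensive_efficiency(points_per_game, possessions_per_game):
    return 100.0 * points_per_game / possessions_per_game

nuggets_1991 = offensive_efficiency(119.9, 113.7)
print(round(nuggets_1991, 1))  # ~105.5 per this rough method
```

Even with the rough method, the point stands: once you divide out Denver's league-leading pace, a gaudy 119.9 points per game collapses to a below-average offense.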

In the end, stats don’t lie. They are numerical records of history. The 1991 Denver Nuggets did score 119.9 points per game. Rex Grossman had a record of 17-5 as a starter going into 2007. The problem is not in the numbers, but rather the people that use these statistics to make claims that they don’t support.


Extras:

  • For more information on points per possession, check out Dean Oliver’s excellent book: Basketball On Paper. Or read this and that.
  • During the season I keep track of offensive efficiency on the stats page. Historical offensive efficiency can be found at basketball-reference.com
  • The team with the highest offensive efficiency over the last 20 years? The 1996 Bulls at 115.8. Does this make them the best offensive team of the last 20 years? Well you might want to account for league average, but that’s a discussion for another day.
  • For more information on the 1991 Nuggets, see this link.
  • For a really good way to rate QBs, I would use DVOA.

Hysterical!

http://ballhype.com/story/nba_festivus_atlantic_edition/

Before you get all fanboy on it, it’s meant to be funny. My favorite line:

New York might be the only team in the league whose second unit could beat its first team. Lee, Robinson, Balkman… Jared Jeffries and Jerome James. Never mind.

Sometimes it’s good to laugh at all the stupid things we take seriously around here.

One More Nail In the Anti-Per Minute Argument’s Coffin?

One of the core tenets of basketball statistical analysis is the use of per minute stats. When compared to per game stats, per minute stats are highly valuable in the evaluation of individuals. This is because per minute stats put players of varying playing time on the same level. Using per game stats, starters will always dwarf bench players due to the extended time they get to accumulate various stats. Meanwhile, per-minute stats allow us to compare players independent of minutes, making for a more even approach in player evaluation.
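As a rough sketch of the idea (the point and minute totals below are made up for illustration), normalizing per-game production to a per-36-minute rate looks like this:

```python
# Per-minute (here, per-36-minute) stats put starters and bench
# players on the same footing. Hypothetical numbers for illustration.
def per36(stat_per_game, minutes_per_game):
    return 36.0 * stat_per_game / minutes_per_game

starter = per36(18.0, 36.0)  # 18 ppg in 36 mpg -> 18.0 per 36
reserve = per36(10.0, 18.0)  # 10 ppg in 18 mpg -> 20.0 per 36
print(starter, reserve)
```

By per-game scoring the starter looks far better, but per 36 minutes the reserve is actually the more productive scorer, which is exactly the distortion per-minute stats are meant to remove.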

Recently a debate has come up on the validity and usefulness of per minute stats. I’ve quoted the main parts below, but even abbreviated it’s a long read. If you have the time, I suggest reading it now so the rest of this article will make more sense. For those short on time, a quicker summary is here:

Hollinger & Kubatko: “Hey per minute stats are a great way to evaluate players! In fact we’ve done a few studies and it seems that a player’s per minute stats increase slightly when they get more minutes. At the worst we can conclude that they should stay relatively the same.”

FreeDarko: “Per minute stats won’t stay the same if a player gets more minutes, because there is a division between greater and lesser players. A player that only gets 10-25 minutes per game is playing against lesser caliber players. Hence when that player sees an increase in playing time, he’s playing against steeper competition, so his stats should decrease.”

Tom Ziller: “That’s not true. Here is every 10-25 minute player in the last 10 years that saw an increase in minutes. Most of them (70%) saw an increase in per-minute production. To discount any of this data being from young players getting better as they age, I looked at 8+ year vets, and saw that about the same ratio of players increased (69%).”

Brian M.: “Tom, the problem with all this data is a causality vs. correlation issue. It’s possible that these players saw more minutes first then improved. But it’s also possible that these players improved first which allowed their coach to play them more minutes.”

Brian’s case is a good one. To use an analogy, imagine I come across a person who calls himself Merlin Appleseed. He claims that just by touching apples he can magically make them taste better. He opens up a box of apples saying that he never touched any of them. He picks out 10, and imbues them with his magic. He asks me to taste each of them. I find all of them to be delicious. He says “here’s the same box I got my apples from. Now I want you to take 10 at random while blindfolded. You can compare them to my magic apples. I bet mine taste better.” I do just as he asks, and indeed my random set of apples are less tasty than his. So does Merlin Appleseed have magical power?

Maybe. Unfortunately this test wouldn’t be able to confirm or deny his magical power. Since Merlin gets to choose his apples, he might be selecting the best ones! To test Merlin’s abilities I would need something to gauge how good his apples are expected to taste. One way to do this would be to find comparable apples that have the same color, size, blemishes, etc. Then I can compare the taste of his apples to my apples. If Merlin has the magical powers he claims, then his apples will taste better than mine.

Similarly with Tom’s study, Brian is saying that by selecting players who have seen an increase in minutes we might be choosing the best apples. This is because players who improve on a per minute basis could be given more playing time by their coaches. Therefore to show whether or not these players have improved, I need to find how good they’re expected to be. Then I can compare their actual performance to their expected performance. If FreeDarko’s theory is true, that role players should decrease their per minute production with more minutes, then they should perform worse than their expected values.

To separate the control group from the test group, I’ll only use players with an even numbered age for the control, and odd numbered ages for the test group. Since this study is intended for role players, which was defined by Ziller, I limited my control group to player seasons where:
* The player age was an even number.
* The player appeared in 41 games or more.
* The season was 1981 or greater.
* The player averaged 10-25 mpg.

Now I can calculate the expected production of the players in my group, by looking at per minute production (PER) over playing time (mpg).

Control Group

Just as expected, the graph tends to go from the bottom left (low minutes = low production) to the top right (high minutes = high production). That is, players who receive more minutes are more productive. From the 1840 player-seasons in my data, I’m able to calculate the expected PER based on mpg (PER = .2158*mpg + 8.2941). So if a player averaged 10 mpg, you would expect his PER to be 10.45. This equation is represented by the red line on the graph.
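The expected-PER equation above boils down to a one-line helper; the coefficients are taken directly from the regression quoted in this paragraph:

```python
# Expected PER as a linear function of minutes per game,
# using the control-group regression coefficients quoted above.
def expected_per(mpg, slope=0.2158, intercept=8.2941):
    return slope * mpg + intercept

print(round(expected_per(10), 2))  # 10.45, matching the article
```

This is the baseline each test-group player is measured against: given his minutes, did he beat or fall short of what a typical player at that workload produces?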

Now that our control group is defined, I need to create the test group. Again this group was defined by Ziller as role players who saw an increase in minutes. I selected player seasons where:
* The player’s age was an odd number.
* The player appeared in 41 games or more.
* The season was 1981 or greater.
* The player averaged 10-25 mpg the year before.
* The player increased his mpg by 5+ from the year before.

Since I have the expected values based on mpg, all that is left is to compare the test group’s actual production to those expectations. In our test group, 185 players did better than their expected PER, while 177 did worse. On average, each player gained 0.17 PER. This is a tiny gain, not enough to show that players increase production with more minutes. However, it clearly shows that they didn’t decline and at least matched the predicted PER.

Another way to see how our prediction did is to calculate the regression (trendline) of this group, and compare it to the expected equation. The red line in the graph below shows the regression of PER/MPG for our control group.

Test Group

* Control: PER = .2158*mpg + 8.2941
* Test: PER = .2185*mpg + 8.3917

The test group, which has both the higher slope and y-intercept, will slightly outperform the control group. But not by much. The average player who saw 40 mpg would see a .20 increase in PER, which is negligible. In other words, the test group has neither exceeded nor fallen short of our expectations, but rather has met them.
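Plugging 40 mpg into both regression equations shows how small the gap is; a quick check using the coefficients listed above:

```python
# Evaluate both regression lines at a given mpg and take the gap.
def line(mpg, slope, intercept):
    return slope * mpg + intercept

control = line(40, 0.2158, 8.2941)  # control-group expectation
test = line(40, 0.2185, 8.3917)     # test-group regression
print(round(test - control, 2))     # 0.21 PER at 40 mpg
```

Even at a heavy 40 minutes a game, the test group's line sits only about a fifth of a PER point above the control expectation, which is well within noise.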

In the end what does this prove? Specifically, this study removes the correlation between the role player group and players that saw extra minutes due to improvement. It debunks the thought that there is some kind of division between per minute stats, where the per minute stats of high minute players are more a representation of actual talent than those of players who play few minutes per game. But combined with the past works of Hollinger, Kubatko, and Ziller, among others, it makes an overall stronger statement. Players who receive 10 or more minutes per game are likely to keep the same per minute stats no matter what the increase in playing time is. Therefore per minute stats remain far superior to per game stats in terms of comparing and evaluating players.


EXTRAS:

  • “It’s a pretty simple concept, but one that has largely escaped most NBA front offices: The idea that what a player does on a per-minute basis is far more important than his per-game stats. The latter tend to be influenced more by playing time than by quality of play, yet remain the most common metric of player performance.” — John Hollinger
  • The great thing about this study is that I can perform it again, this time using the “odd” aged players as the control and the “even” aged players as the test group. This time the prediction equation was PER = .2039*mpg + 8.4439. And again our test players slightly outperformed the average. This time 192 did better than their expected PER, while only 161 did worse. On average each player gained 0.23 PER.
  • This article doesn’t mean that every player that has good per minute stats should see more playing time. It’s very clear that basketball stats don’t capture a player’s total ability. A player that does well on a per minute basis may have other flaws, such as poor defense, which prevent him from contributing more. This also isn’t an endorsement for any single per minute ranking system, like PER, WOW, etc. There are flaws in each of these in addition to being unable to account for attributes not captured in box scores.
  • Summary of the events that led to this article.

Back in 2005, I wrote an article outlining some of the pioneers in per minute research.

In the 2002 Pro Basketball Prospectus, John Hollinger asked and answered the question “Do players do better with more minutes?” For every Washington player, Hollinger looked at each game and separated the stats based on whether or not he played more than 15 minutes. He found that when players played more than 15 minutes, they performed significantly better than when they played less. To check his work, he used a control group of 10 random players, and each one of those improved significantly as well.

The knock on Hollinger’s study is the small sample size, containing fewer than 25 guys from only one season. Enter Justin Kubatko, the site administrator of the NBA’s best historical stat page, www.basketball-reference.com. Earlier this week Justin decided to re-examine the theory using a bigger sample size. Taking players from 1978-2004, he identified 465 that played at least a half season and saw a 50% increase in minutes the year after. Three out of four players saw an increase in their numbers as they gained more minutes, although the average increase was small (+1.5 PER).

Two independent studies have shown that NBA players get better when they get more minutes. A conservative interpretation is that per-minute numbers are universal regardless of playing time. So if a player averages 18 points per 40 minutes, he’ll do about that regardless of how many minutes he plays. A more liberal summary would say that underused players will see an improvement in their per-minute numbers if given more court time. A player that only averages 20 minutes a game is likely to be a little bit better if given 35. So the straight dope is that per minute stats are a fantastic way to evaluate NBA players.

Recently, this research was questioned by the writers of freedarko.

The problem with this line of reasoning is that it assumes the homogeneity of court time. It assumes that if a player scored 20 points in 20 minutes, he would also score 40 points in 40 minutes. That there will be systematic differences between these two situations is almost too obvious to point out. It’s the difference between sharing the ball with Jordan Farmar while being guarded by Kenny Thomas, and sharing the ball with Kobe Bryant while being guarded by Ron Artest.

Insofar as the problem here is one of rotation, small-scale adjustments in minutes played shouldn’t create major distortions (it isn’t unrealistic to think that if Tim Duncan played 5 extra minutes per game, his per-minute production, as influenced by the level defense he’d face, would basically be the same). But when PER catapults bench players into the starting five (or vice-versa), be on the look-out for inflation. Call this the Silverbird-Shoals Hypothesis, or the THEOREM OF INTERTEMPORAL HETEROGENEITY (TOIH).

Enter Sactown Royalty’s Tom Ziller, to refute Free Darko’s theory.

Shoals and Silverbird are arguing that because low-minutes high-PER guys typically play against fellow bench players, their PER is higher than it would be if they played starter minutes. They aren’t arguing (as some surmised) that PER is useless, just that it is prone to inflation. The argument, from seemingly everyone on the ‘anti per-minute statistics’ side, is that if you increase a player’s minutes, his efficiency will suffer.

There’s a problem with this oft-repeated claim: It’s not true.

Thanks to the data-collection efforts of Ballhype’s own Jason Gurney, I’m going to try to ensure this claim never gets stated as fact ever again. Using seasons from 1997-98 to the present, we identified all players who played at least 45 games in two consecutive seasons and who saw their minutes per game increase by at least five minutes from the first season to the second. The players must have played between 10 and 25 minutes per game in the first season, to ensure we were not dealing with either folks who went from none-to-some playing time or superstar candidates who took over an offense and thus got a minutes boost. This is aimed at roleplayers whose role becomes more prominent — exactly the candidate FD’s Theorem of Intertemporal Heterogeneity implies will suffer from increased minutes.

Since I seem to express myself more clearly via Photoshop, here is the result of our mini-study.

No, increased minutes do not seem to lead to decreased efficiency. In fact, the data indicates increased minutes lead to… increased efficiency. More than 70% of the players in the study (there were 251 in total) saw their PER (which is, by definition, a per-minute summary statistic) increase with the increase in minutes. Players whose minutes per game increased by five saw an average change of +1.38 in their PER. The correlation between increased minutes and change in PER in this data set was +0.20.

One step further: Players who had at least five years of experience including their first season in this study and got the requisite 5-minute increase (106 such players) saw an average change of +1.26 in their PER. It’s not just young kids who happen to be improving and getting more minutes all at the same time — vets who get more minutes typically see their per-minute production rise. A full 67% of these players saw positive changes in PER with the increased minutes. (And this answers one of Carter’s concerns with existing studies.) Let’s bump this up to players who had at least eight years of experience going into their minutes increase; we had 52 such cases. The average change in PER: +1.31. Of these players, 69% saw their PER increase with more minutes.

Case closed right? Well not if Brian M. has something to say about it.

Imagine we wanted to test the relationship between duration of exercise and reports of fatigue. We have two experimental conditions, one group jogs for 10 minutes and the other for 30 minutes. We predict that the group that jogs 30 minutes will report more fatigue.

But we must assign people to the two groups randomly in order for the data to have any bearing on the hypothesis. If we systematically assign people who are in better shape to the 30-minute jogging condition, we may find that, in fact, if anything, people report less fatigue with longer durations of exercise. But the study is flawed in a fundamental way, and so the data don’t tell us much of anything. At most, what the results of this poor experiment tell us is that the effect of exercise duration on reported fatigue is not so strong that it overrides the differences in health between the two groups. But that is a really limited conclusion, especially if we don’t even have means to quantify how much the two groups differed in health to begin with.

Do Players’ Per-Minute Stats Decline When Given More Minutes? No!

Poster Caleb put the following link in a comment to the Isiah Thomas report card, and I think it is interesting enough to get its own thread.

Here is the link, which is a piece written by tziller for BallHype, exploring the results of players who are “promoted” from the bench to starting.

Thanks to the data-collection efforts of Ballhype’s own Jason Gurney, I’m going to try to ensure this claim never gets stated as fact ever again. Using seasons from 1997-98 to the present, we identified all players who played at least 45 games in two consecutive seasons and who saw their minutes per game increase by at least five minutes from the first season to the second. The players must have played between 10 and 25 minutes per game in the first season, to ensure we were not dealing with either folks who went from none-to-some playing time or superstar candidates who took over an offense and thus got a minutes boost. This is aimed at roleplayers whose role becomes more prominent — exactly the candidate FD’s Theorem of Intertemporal Heterogeneity implies will suffer from increased minutes.

[Graphic not shown.]

No, increased minutes do not seem to lead to decreased efficiency. In fact, the data indicates increased minutes lead to… increased efficiency. More than 70% of the players in the study (there were 251 in total) saw their PER (which is, by definition, a per-minute summary statistic) increase with the increase in minutes. Players whose minutes per game increased by five saw an average change of +1.38 in their PER. The correlation between increased minutes and change in PER in this data set was +0.20.

It is a fascinating piece, and I think it’s well worth a read.