Jay Bilas has taken to Twitter (@JayBilas), and he was on fire yesterday. He’s easily my favorite college basketball analyst, so this is a very welcome addition. One of his statements had to do with the RPI, otherwise known as the Ratings Percentage Index: “The RPI is a joke. I make use of KenPom.com and Sagarin, among other data. Much more reliable and better to reveal teams that can play.”
First, Ken Pomeroy is one of our favorites too; his breakdowns of offensive and defensive efficiency help us look at the game on a more granular level. There are so many teams in college basketball that unless you watch every single game by every single team (impossible), you need some way to compare them, and just using wins and losses to compare teams with different schedules is insufficient. It’s good to see someone as influential as Bilas agree.
Now, back to that RPI. Here’s a quick primer on what it is. It is composed of three things: a team’s winning percentage against Division I opponents (25%), its opponents’ winning percentage (50%), and its opponents’ opponents’ winning percentage (25%). The winning percentage is then adjusted in a rather hokey fashion for road wins versus home wins, so that winning on the road is valued more. (Ken Pomeroy has promised a post on this issue on Friday, something I will probably comment on over the weekend.)
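To make the weighting concrete, here is a minimal sketch of the basic formula, ignoring the home/road adjustment; the input numbers are made up for illustration:

```python
def rpi(wp: float, owp: float, oowp: float) -> float:
    """Ratings Percentage Index: 25% team winning percentage,
    50% opponents' winning percentage, 25% opponents' opponents'
    winning percentage."""
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# Hypothetical example: a .800 team whose opponents won 60% of
# their games, against opponents' opponents at .550
print(round(rpi(0.800, 0.600, 0.550), 4))  # 0.6375
```

Notice that half the rating is your opponents' record: a team's own results count for only a quarter of the number.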
It does not include any point differential (a 20-point win counts no more than a 1-point or overtime win) and takes no account of how a team won. Since it is based on wins and opponents’ win percentage, it’s going to put out a list that is generally correct at the extremes. However, it’s also a lot like passer rating. Just because the final results are plausible (Kansas is going to be rated ahead of UMKC, just like Peyton Manning is going to be ahead of Derek Anderson) doesn’t mean there aren’t methods that give better answers at the margins.
Anyway, just for giggles, I thought I would take the actual tournament seeds from the last four tournaments and see where there were differences between what Ken Pomeroy’s rankings showed and where the selection committee seeded the teams. I looked at just the top 12 seeds in a region. I counted it as a “Pomeroy team” if, going strictly by the end-of-regular-season Pomeroy rankings, a team would have been seeded at least two spots higher. I counted it as a “Committee team” if the Pomeroy rankings would have had a team seeded at least two spots lower (or out of the tourney entirely). Finally, many teams were “Consensus teams,” where the selection committee seeded a team within one seed line of where Pomeroy’s rankings had them. This was true of most top teams, where there was general agreement on the #1 seeds and #2 seeds.
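The sorting rule above can be sketched as a simple comparison of the two implied seed lines; the example seeds here are hypothetical, not actual teams:

```python
def classify(committee_seed: int, pomeroy_seed: int) -> str:
    """Classify a team by comparing its committee seed line to the
    seed line its Pomeroy ranking would imply."""
    diff = committee_seed - pomeroy_seed  # positive: Pomeroy likes them more
    if diff >= 2:
        return "Pomeroy team"     # Pomeroy would seed at least 2 lines higher
    if diff <= -2:
        return "Committee team"   # Pomeroy would seed at least 2 lines lower
    return "Consensus team"       # the two agree within one seed line

print(classify(7, 4))  # Pomeroy team
print(classify(3, 6))  # Committee team
print(classify(1, 2))  # Consensus team
```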
Then, I used the historical average tournament performance by seed to see whether an individual team overperformed or underperformed expectations. The results are interesting.
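The comparison is just actual tournament wins minus the historical average wins for that seed line. A sketch, with an illustrative expected-wins table (these figures are placeholders, not the actual historical averages used):

```python
# Illustrative expected tournament wins per seed line (hypothetical values)
EXPECTED_WINS_BY_SEED = {1: 3.3, 2: 2.4, 3: 1.8, 4: 1.5, 5: 1.1, 6: 1.1,
                         7: 0.9, 8: 0.7, 9: 0.6, 10: 0.6, 11: 0.5, 12: 0.5}

def wins_above_expectation(seed: int, actual_wins: int) -> float:
    """Actual tournament wins minus the historical average for that seed."""
    return actual_wins - EXPECTED_WINS_BY_SEED[seed]

# A hypothetical 5 seed that reached the Sweet 16 (2 wins)
print(round(wins_above_expectation(5, 2), 1))  # 0.9
```

Summing that figure over every team in a category gives the totals below.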
The Committee Teams, the teams that Pomeroy’s rankings say were overrated, finished a combined 11.0 wins below expectation based on their tourney seeds. There were 74 teams in this category. Two (Butler in 2010, Villanova in 2008) reached the Final Four and accounted for 6.0 wins above expectation.
The Pomeroy Teams, the teams that the Pomeroy rankings say were underrated, finished a combined 6.3 wins below expectation based on their tourney seeds. There were 42 teams in this category. None reached the Final Four.
The Consensus Teams, where the committee’s seeding agreed with Pomeroy, finished a combined 18.5 wins above expectation. There were 76 teams here, including, as I said earlier, most of the higher seeds. For the most part, the committee didn’t have a team on the top 3 seed lines that wasn’t also in the top 20 in Pomeroy.
In defense of the “Pomeroy teams,” I will say this. A fair percentage of them were #7, #8, #9 or #10 seeds that the Pomeroy rankings saw as borderline top-20 teams. Sometimes, those teams were paired against each other (Missouri-Clemson as a #7-#10 matchup last year is one example), so the fact that his rankings thought both were undervalued canceled out. Then, when Missouri won but lost to West Virginia in round two (a legitimate #2 seed in the Pomeroy rankings), it showed up as a net negative for the two teams combined. In other words, they sort of got screwed by being seeded badly against the best teams in the country. Two of the #1 overall teams in Pomeroy’s rankings won the National Championship, and all four of the past champions came from his top 3 end-of-season rankings, a pretty good batting average.
I do think that the committee should use the Pomeroy rankings as a counterbalance. Many of the teams that were overvalued by the committee (likely due to road wins, close wins, and artificially boosted schedule strength figures) ended up getting upset early. If the committee wants to set up the best possible bracket, they probably want to avoid overvaluing the wrong things. In other words, they probably don’t want to put Florida as a #3 seed (some really bad losses, lots of close wins and overtime wins, #31 in Pomeroy’s rankings).
[photo via Getty]