A few weeks ago, the NCAA hosted an analytics meeting to discuss creating a new composite measure to replace the RPI. We’ll see if any progress is made. Most of those present, however, have developed systems that are predictive in nature; Ken Pomeroy’s system, for example, is one I check frequently. And I agree with Pomeroy’s explanation of why he thinks results have to matter:
The reason this is so is that the outcome of the game has to matter. This is why we watch the game. Make the selection process, and thus the games, purely about points scored and allowed and the games become less entertaining. There is no special purpose to having one more point than your opponent. No point in managing foul trouble. No point in hoisting threes in the final minute to catch up. The contest becomes one of points accumulation. There’s a reason televised Scrabble has never hit it big.
So any selection metric still needs to properly account for wins and losses. The problem with the RPI, which I am going to demonstrate, is that the way it does so actually overemphasizes bad opponents: the teams that everyone remotely in tournament consideration beats like a drum.
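For reference, the standard RPI is a weighted blend of a team’s own winning percentage (WP), its opponents’ winning percentage (OWP), and its opponents’ opponents’ winning percentage (OOWP), weighted 25/50/25. A minimal sketch of that blend (the NCAA’s version also adjusts the winning-percentage term for home and road games, which I’m leaving out, and the team’s record below is hypothetical):

```python
def rpi(wp, owp, oowp):
    """Standard RPI blend: 25% own winning pct, 50% opponents'
    winning pct, 25% opponents' opponents' winning pct."""
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# Hypothetical team: a .750 record, but weak opponents drag OWP to .450
print(rpi(0.750, 0.450, 0.500))  # -> 0.5375
```

Note that half the weight sits on OWP, which is why the quality of a team’s opponents, including the very worst ones, matters so much to the final number.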
Using Pomeroy’s rankings and the current RPI rankings (via CBS), here are the 20 highest-ranked teams in either system that are separated by at least 8 spots between the two rankings. First, those where Pomeroy’s predictive rankings are higher on a team:
And here are the teams that had the largest difference in favor of the RPI rankings:
While Ken Pomeroy would say his system is designed to be predictive and should not be used to judge resumes, it’s not actually clear that the RPI does a better job of that. The teams in the RPI-favored group have a slightly better win percentage (.797 to .731), but that’s largely a function of scheduling. The group favored by Pomeroy has played more top teams: fifteen of them are from the top six conferences, compared to 11 on the other list. Those on the Pomeroy-favored list that are not in the power conferences come from the next tier of conferences, whereas far more low majors appear on the RPI list.
In fact, the teams on that Pomeroy list went 8-21 (and played much closer games) against teams currently on the top two seed lines in Joe Lunardi’s bracket, while the RPI list went 3-15. So I think there’s a legitimate question as to which group has “accomplished” more despite the differing rankings.
Another thing you notice, though, is how many games against the dregs of Division I the teams who are disfavored by RPI have played. The teams on the Pomeroy list have played 24 games against teams ranked 320 or lower in the RPI; those on the RPI list only 9. Those games have an outsized impact.
To illustrate that, let’s compare Indiana and Minnesota. It’s an apt comparison because they play in the same conference, have similar overall records (15-7 for Minnesota; 15-8 for Indiana), and if you’ll notice above, are virtually equal in Pomeroy’s rankings. Meanwhile, they are ranked far differently by RPI, with Minnesota at 23 and Indiana at 74.
Since they have similar overall records, that difference is almost entirely accounted for by relative strength of schedule. By the RPI, Minnesota has played a far tougher one.
But is that true? Here are the toughest ten opponents, using, again, the seeding assigned by Lunardi in his latest bracket (and otherwise using Pomeroy’s rankings for those not in the field).
Indiana has played the tougher opponent on every line. Put another way: their schedules at the top are similar, except Indiana also played Kansas and Louisville on neutral courts. How, then, does the RPI calculate Minnesota as playing a far tougher schedule?
Well, Indiana played Mississippi Valley State (3-19), Delaware State (6-18), SIU-Edwardsville (5-19), and several other small-conference teams with losing records. Minnesota, meanwhile, played one small-conference team with a currently losing record (NJIT) and a bunch of teams with slightly above-.500 records in their low- and mid-major conferences, all at home and all games Minnesota was likely to win (and did). Indiana played four non-conference games against teams currently projected in the tournament field, Minnesota only two, yet the RPI says Minnesota has played the much tougher schedule.
To show how much the very bottom of Indiana’s schedule is driving that list, I used the RPI Wizard at RPI Forecast to check what a few switches to their schedules would do. The RPI Wizard allows you to alter the result or change any opponent to see what the effect would be.
Simply flipping the three worst opponents on each schedule leads to the following changes:
Minnesota’s RPI ranking would fall from 23 to 41.
Indiana’s RPI ranking would climb from 74 to 54.
Neither of these teams was going to lose to the bottom of their schedule. But those three games played in early December account for virtually all of that large difference in the RPI. Now, is that a good measure of who has accomplished more?
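The mechanism behind that swing is easy to see in miniature. Because opponents’ winning percentage carries half the RPI weight, swapping a handful of sub-.250 opponents for merely decent ones moves the number substantially even when the team’s own record doesn’t change. A toy sketch, where the ten middling opponents are made up but the three bad records mirror Indiana’s actual opponents above (this also ignores the real RPI’s rule of excluding games against the team being rated):

```python
def owp(opponent_records):
    """Average opponents' winning percentage (the 50% RPI component),
    from a list of (wins, losses) records."""
    return sum(w / (w + l) for w, l in opponent_records) / len(opponent_records)

# Hypothetical 13-game schedule: ten .500 opponents plus three more
base = [(11, 11)] * 10
schedule_a = base + [(3, 19), (6, 18), (5, 19)]     # three sub-.250 opponents
schedule_b = base + [(12, 10), (12, 10), (12, 10)]  # three decent ones instead

print(round(owp(schedule_a), 3))
print(round(owp(schedule_b), 3))
```

Even if the team wins all 13 games either way, the OWP gap between the two schedules is about .080, worth roughly .040 of raw RPI at the 50% weight, which is enough to move a team dozens of spots in the rankings.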
And keep in mind that because of those rankings, a win over Minnesota is treated like a top 25 RPI win (hello, Florida State), while a win over Indiana is not even a top 50 win (sorry, Louisville, and probably more importantly for bubble purposes, Michigan).
So while there are many issues that could be addressed, I think an improved metric would address this flaw. You might say, “well, Indiana should schedule tougher.” There are some problems with that. One, we shouldn’t have a system that is so easy to game. Two, some of those games are based on existing athletic-department relationships and on giving smaller schools income from guarantee games. Three, the RPI excludes non-Division I games, so teams that load up on those easy wins aren’t penalized in the same fashion. Four, and related to those points: the MEAC and SWAC, the two historically black conferences, are among the lowest-rated conferences, and Indiana is dinged big-time for playing one team from each. The system currently incentivizes teams never to schedule those games non-conference. If I were a school’s scheduling person, that would be my advice to a team trying to improve its at-large chances, and I would hate giving it.
The shorter-term solution, before creating an all-new metric, is simple: eliminate the bottom-tier games from each team’s RPI score. Say every team had its three lowest-ranked wins excluded. That’s about 10% of an eventual schedule, and it would ensure the bottom 10% doesn’t have an outsized impact. Games against Division II opponents would count against those exclusion slots, closing that loophole. Teams would still have an incentive to schedule well, but they could plan for a couple of exclusion games and not be sunk when a December opponent has a truly awful year.
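One way that exclusion rule could be implemented, sketched with a hypothetical function and made-up ratings on a 0-to-1 scale (a real implementation would also need to fold the excluded games out of the OWP and OOWP terms, and treat Division II games as using up exclusion slots):

```python
def exclude_worst_wins(games, n=3):
    """games: list of (won: bool, opponent_rating: float), higher = better.
    Returns the schedule with the n wins over the lowest-rated opponents
    removed. Losses are never excluded, so a team can't benefit from
    losing to a bad team."""
    wins = sorted([g for g in games if g[0]], key=lambda g: g[1])
    losses = [g for g in games if not g[0]]
    return wins[n:] + losses

# Hypothetical six-game slate: three buy-game wins, two quality wins, one loss
season = [(True, 0.05), (True, 0.08), (True, 0.10),
          (True, 0.55), (False, 0.80), (True, 0.60)]
kept = exclude_worst_wins(season)
print(len(kept))  # -> 3: the loss plus the two better wins
```

The design choice of only ever dropping wins matters: a team still eats every loss, so the rule removes the scheduling penalty for playing (and beating) a MEAC or SWAC opponent without creating any reward for losing to one.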