BCS Computer Rankings Are Subjective, Don't Even Rate the Same Collections of Teams

The BCS computer rankings don’t even work from a uniform set of inputs. Massey and Wolfe include all 723 teams that play some form of college football. Sagarin uses FBS and FCS teams. Colley uses FBS teams and lumps FCS opponents into groupings. Anderson & Hester and Billingsley include only FBS teams. How can a team be eligible for ranking in one set of polls and not in another?

Then there’s margin of victory. The BCS doesn’t require logic or statistical validity. The only requirement seems to be excluding margin of victory. Sagarin and Massey both distinguish their own rankings from the versions they provide to the BCS, because the submitted versions are incomplete rankings they won’t fully endorse or be associated with. How are we supposed to have faith in the validity of a ranking system when its own creators don’t?

Running up the score may be objectionable, but excluding margin of victory entirely is burning down your apartment to get rid of a roach infestation. Ranking systems are charged with comparing small samples of dissimilar games; they need all the data they can get. Stanford has not played a tough schedule, but ignoring its average victory margin of 36 points distorts the ratings far more than a few teams scoring garbage points at the end of a game.
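There is an obvious middle ground between rewarding blowouts and throwing the data away: cap the margin rather than discard it. As a minimal sketch (the 21-point cap is a hypothetical choice for illustration, not anything the BCS or these systems actually use):

```python
def capped_margin(points_for, points_against, cap=21):
    """Clamp a game's scoring margin to the range [-cap, cap].

    Margins beyond the cap are worth nothing extra, so there is no
    incentive to run up the score, but a blowout still counts for
    more than a one-point escape. The cap value is hypothetical.
    """
    return max(-cap, min(cap, points_for - points_against))

# Under a 21-point cap, Stanford's 36-point average margin would count
# as 21: the dominance registers, the garbage points don't.
```

With a cap like this, the roach infestation gets treated without burning anything down.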

The point of having mathematical rankings is to be objective. Anderson & Hester justify their system with the following statement:

"Unlike the polls, these rankings do not reward teams for running up scores. Teams are rewarded for beating quality opponents, which is the object of the game. Posting large margins of victory, which is not the object of the game, is not considered."

That’s imposing a subjective judgment about the proper style of play. It would be like baseball taking to heart Ty Cobb’s assertion that hitting home runs was gauche and arbitrarily counting them as singles when calculating slugging percentage. That distorts what is meant to be an accurate, useful measurement and dispenses nonsense. It’s common sense that Stanford beating Duke 44-14 is more impressive than Wake Forest beating them 24-23. The polls should reflect that.

The BCS formula is rife with problems, and that’s before even examining the methodologies of the individual computer rankings, let alone the problems created by the human polls accounting for two-thirds of the vote. If you plan to pick the two teams that play for the national title with an “objective” formula, that formula should be simple, transparent and statistically valid. It’s not that hard to create a clear formula that accounts for wins and losses, strength of schedule and margin of victory (perhaps weighted so that it informs the rating without dominating it).
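A formula of that kind fits in a few lines. This is a hypothetical sketch, not the BCS formula or any real computer ranking: the weights, the margin cap, and the strength-of-schedule input are all invented for illustration.

```python
def rate_team(results, sos, mov_cap=21, w_wins=0.5, w_sos=0.3, w_mov=0.2):
    """Rate a team on a 0-1 scale from three transparent ingredients.

    results: list of (points_for, points_against) tuples for the season.
    sos:     strength of schedule on a 0-1 scale (assumed given).
    Weights and the margin cap are hypothetical illustrations.
    """
    wins = sum(1 for pf, pa in results if pf > pa)
    win_pct = wins / len(results)
    # Cap each game's margin so blowouts help, but only up to a point.
    capped = [max(-mov_cap, min(mov_cap, pf - pa)) for pf, pa in results]
    avg_mov = sum(capped) / len(results)
    mov_score = (avg_mov + mov_cap) / (2 * mov_cap)  # rescale to 0-1
    return w_wins * win_pct + w_sos * sos + w_mov * mov_score
```

Every input and weight is visible, so anyone can check the arithmetic: with the same schedule strength, a 44-14 win rates higher than a 24-23 win, but nothing past the cap moves the number.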

[Photo via Getty]