You’ve probably heard of the Curse of 370, and maybe you have read some of the rebuttals, such as those offered by Brian Burke of Advanced NFL Stats or Maurile Tremblay of Footballguys. For those who are unfamiliar, it concerns running back rushing attempts and the likelihood of injury or decline. The proponents of “The Curse” believe that at a very high seasonal workload (the chosen cutoff being 370 carries), running backs are more susceptible to injury or decline thereafter. The skeptics point out issues with the choice of endpoints and with the lack of statistical significance when that group is compared to other groups of running backs.
I think both sides have something to add to the discussion. I cringe at the use of the word “curse”, whether it is a Madden curse or a Super Bowl Loser’s curse or a Curse of 370. The public too often latches on to a curse and applies it blindly, without considering it fully. Would it be better for a team to rest a guy who has carried the ball 360 times going into the final week, just to avoid crossing that line? Of course not. The rebuttals are correct in certain respects: the endpoints were chosen to maximize the apparent effect, and the sample size and results do not reach statistical significance when compared to the group of backs with 344-369 carries.
On the other hand, the highest-workload backs did miss more games the next year. What standard do we want to apply when a franchise is deciding whether workload is costly? Are we going to demand that there be only a 5% chance the observed difference arose by random chance before we adapt behavior? Would you let your franchise back carry the ball 400 times just because the difference does not clear the 0.05 p-value threshold (if you knew what a p-value was)?
Unfortunately, those darn coaches and players won’t cooperate and give every starting back 25 carries every game until they get hurt or reach the end of the year, so we could have larger sample sizes. For all the legal experts out there, I think we should probably look more at a “more likely than not” standard, as opposed to a “beyond a reasonable doubt” standard. I don’t want to be unnecessarily resting my starter if he could play more without a substantial increase in injury risk, but I also don’t want to overplay him if it could cause injury. Were I making a decision as a coach or GM, I would have to lean toward the outcome I thought more likely, and not blindly insist on statistical significance.
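To make the tension above concrete, here is a minimal sketch of the kind of significance test the skeptics are invoking: a two-proportion z-test on injury rates between a high-workload group and a control group. The counts are entirely hypothetical, chosen only to illustrate how a real-looking difference (40% injured vs. 25% injured) can still fall well short of the 0.05 threshold while easily clearing a “more likely than not” reading:

```python
import math

def two_proportion_pvalue(x1, n1, x2, n2):
    """Two-sided z-test for a difference in proportions.

    x1/n1: injured / total in the high-workload group
    x2/n2: injured / total in the control group
    Returns the two-sided p-value.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal tail
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical counts: 8 of 20 high-workload backs injured (40%)
# vs. 10 of 40 control backs injured (25%)
p = two_proportion_pvalue(8, 20, 10, 40)
```

With these made-up numbers the p-value comes out around 0.23: nowhere near significance, yet the observed injury rate in the high-workload group is substantially higher. That is exactly the gap between the two standards of proof discussed above.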
I’ve attempted to add to this discussion in the past. Three years ago, I looked at injury rates based on early season rush attempt totals, and also examined injury rates the following season based on rush totals at the end of the previous year. After the 2007 season, I delved into the weekly injury reports and examined injury report appearances immediately after games with certain rush attempt amounts. The upshot of those findings, both for the start and end of the year (looking at 6-week periods) and for the 2007 season, was that serious injuries were more frequent among backs with higher rush attempt totals, but the sample sizes were too small to support definite conclusions. Last year, I wrote “Why Rush Attempts Matter and Receptions Do Not,” which set forth my theory as to why rush attempts are more important in assessing workload overuse effects (it has to do with the correlation between high rush totals and winning), and also showed that backs who posted a given rush attempt total in losses in the playoffs/week 17 were healthier the following season than those who had their high-carry games in wins.
Tomorrow, I hope to add to the discourse again. I’ll have lots of stats – or as I would prefer to call them, recorded observations of fact. I will try to present it in as interesting a way as possible, but if you are averse to a large table showing hundreds of games, several thousand carries, and hundreds of injuries and games missed, you have been warned. [photos via Getty]