Thursday, February 12, 2015

Is the new course for the 2015 Twin Cities 1 Mile going to be slower?



Nick Willis leads the 2013 TC 1 Mile

This week, Twin Cities in Motion announced that the 2015 edition of the Twin Cities 1 Mile will be run on a new course heading north on Hennepin Avenue through downtown Minneapolis, instead of the historic course down Nicollet Mall.  The motivation for this change was the construction of a new light rail line that crosses the Mall, with trains running every few minutes, far too frequently to get a full wave of runners across between them.
The old course was flat and very fast, and with its very generous prize purse, the elite wave attracted several extremely fast milers.  Because of stormy weather, the race was canceled at the last minute in 2014, but in 2013, Nick Willis set a course record of 3:56.1 for a cool $10,000 bonus, and five other runners broke four minutes. 

A new course, but the same record

In a recent interview with Minnesota running blog Down the Backstretch, TC 1 Mile race director Jeff Decker clarified that, even though the course has changed, Willis' 3:56 and Sara Hall's 4:30.8 (run in 2011) are still considered the "event records," so to earn the $10,000 record bonus, those are still the marks a runner would need to hit. 

Which brings us to the new course.  The new route up Hennepin Avenue has no turns to speak of, but it does have a noticeable uphill in the first half mile or so.  Down the Backstretch provided a handy chart comparing the elevation profiles of the old and new courses.  Can we use this to predict whether the new course will be faster or slower, and what kind of performance would be necessary to break the course record?

In fact, we can, as long as we make a few simplifications.  If we can build an idealized model of each course, we can compare their relative "fastness."  As you can see in the chart above, the old course fluctuates a bit, but it never gains or loses more than ten feet.  Because of this, I'm comfortable treating the old course as if it were perfectly flat, i.e. with no significant differences from an idealized "fast as possible" course.
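Before digging into the real numbers, here's a rough sketch in Python of the kind of comparison I have in mind.  Everything in it is an assumption for illustration: the made-up elevation figures for each course, and a rule-of-thumb penalty of roughly 12 seconds per mile for each 1% of uphill grade, with a smaller refund for downhill.  The actual profiles come from the Down the Backstretch chart, and the real analysis needs more care than this.

```python
# Rough sketch: estimate time lost to hills on an idealized course model.
# All numbers below are illustrative assumptions, not measurements from
# the Down the Backstretch elevation chart.

# Hypothetical per-segment profiles: (distance in miles, elevation change in feet)
OLD_COURSE = [(1.0, 0.0)]                  # treat Nicollet Mall as dead flat
NEW_COURSE = [(0.5, 40.0), (0.5, -10.0)]   # guess: climb early, ease off late

# Assumed rule of thumb: each 1% of uphill grade costs about 12 s/mile,
# and each 1% of downhill grade gives back roughly half of that.
UPHILL_COST_SEC_PER_PCT_MILE = 12.0
DOWNHILL_REFUND_SEC_PER_PCT_MILE = 6.0

def hill_adjustment(course):
    """Seconds added (positive) or saved (negative) versus a flat course."""
    total = 0.0
    for miles, climb_ft in course:
        grade_pct = (climb_ft / (miles * 5280.0)) * 100.0
        if grade_pct > 0:
            total += grade_pct * UPHILL_COST_SEC_PER_PCT_MILE * miles
        else:
            total += grade_pct * DOWNHILL_REFUND_SEC_PER_PCT_MILE * miles
    return total

for name, course in [("old (Nicollet)", OLD_COURSE), ("new (Hennepin)", NEW_COURSE)]:
    print(f"{name}: {hill_adjustment(course):+.1f} s vs. perfectly flat")
```

Even with these made-up numbers, you can see how a climb of a few dozen feet in the first half mile could plausibly cost a four-minute miler several seconds, which is exactly why the record bonus question matters.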

Tuesday, February 3, 2015

Is less running better for you? An in-depth look at "Dose of jogging and long-term mortality"


About once a year, a new scientific study comes out making bold claims about the harmful effects of too much exercise.  These studies get a lot of media attention and feed a strange sort of schadenfreude among the sedentary populace.  Meanwhile, other research that doesn't carry such a contrarian message is quickly brushed aside.

It's about that time again: this week, a study published in the Journal of the American College of Cardiology made the case that infrequent, slow jogging is best for health, and that too much jogging, or jogging too fast, is detrimental to the point of being just as bad as a sedentary life.  The news media, eager for an attention-grabbing storyline, highlighted the article's claim that too much jogging is as bad as being sedentary.  Cue the self-satisfied back-patting from the couch potatoes. 

The scientific paper in question was published by Peter Schnohr and other researchers at a number of hospitals in Denmark, as well as the University of Missouri-Kansas City.  Some other online commentators have raised the issue of authorship: one of the coauthors, James O'Keefe, is a cardiologist who strongly believes that endurance training is bad for your cardiovascular system, and he has authored or co-authored many of the attention-grabbing scientific papers in the past few years arguing that high-volume endurance training is harmful.  Here, though, we'll concern ourselves only with the data and their interpretation, not with any accusations of bias.  After all, if we're bringing up author bias, you'd certainly have to include me (a proponent of and participant in high-volume, high-intensity training) in the discussion.  So instead of fretting about all of this, let's jump right into the article. 

The statistics of study design

By following a very large group of Danish citizens for a twelve-year period, the authors sought to investigate the effects of jogging (let's leave the "running vs. jogging" terminology debate for another day) on your risk of death from any cause.  Because the study picked out a large group of healthy subjects, then followed them over time to observe who ended up dying before its conclusion, it was a prospective study.  This design gives the study a lot more predictive power to discover associations between lifestyle and mortality (i.e. death rate), but at the cost of making the statistics harder.

The best analogy for understanding a prospective study versus the alternative, a retrospective study, is to imagine trying to figure out what causes a running injury like IT band syndrome.  The most obvious way to investigate the causes of IT band syndrome would be to gather up a large group of runners who already have IT band syndrome, make some measurements (like impact forces during running, or hip strength, for example), and compare these measurements to an equally sized sample of healthy runners.  This design is retrospective, and though it makes it easier to find a large number of people with the condition we are interested in, you can probably see some of the problems with it.  Maybe we discover that the runners with IT band syndrome have a "hitch" in their stride when compared to the healthy runners.  Is this asymmetric stride the cause of their IT band syndrome, or is it a result of trying to avoid putting weight on the injured area? The retrospective study design is fraught with these types of problems.

A prospective study designed to investigate IT band syndrome would have to gather a large group of healthy runners, measure all of them, and then wait and see who goes on to develop IT band syndrome in a year (or any timeframe, really).  We can gather some very powerful information from this type of research, because the data grant us predictive power.  After doing our analysis, we might be able to say "runners with poor hip strength are twice as likely to get IT band syndrome," for example.  The only problem is that it's very hard to get good data in prospective studies, because the condition you are trying to study is often just not very common.  Let's say we follow 200 runners for a year, and half of them suffer an injury at some point during our study.  From other research on the frequency of running injuries, we would only expect about eight cases of IT band syndrome from our initial sample (a quick back-of-the-envelope version of that arithmetic appears after the list below).  So, to draw useful information from prospective studies, you need to do at least one of three things:

1) Have a very large sample size
2) Follow your study population for a very long time
3) Be comfortable inferring conclusions from small sample sizes
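To see why sample size matters so much, here's a quick sketch of the arithmetic behind that "about eight cases" estimate.  The 8% figure for IT band syndrome's share of running injuries is an assumption chosen to match the estimate above, not a number from any particular study.

```python
# Back-of-the-envelope: expected IT band syndrome (ITBS) cases in a small,
# one-year prospective study.  The rates below are illustrative assumptions.

n_runners = 200               # hypothetical cohort size
p_injured = 0.5               # fraction injured during the year, as in the example
p_itbs_given_injury = 0.08    # assumed share of running injuries that are ITBS

expected_cases = n_runners * p_injured * p_itbs_given_injury
print(f"Expected ITBS cases: {expected_cases:.0f}")        # about 8

# To expect ten times as many cases, you need ten times the runners
# (or roughly ten times the follow-up), hence points 1 and 2 above.
runners_for_80_cases = 80 / (p_injured * p_itbs_given_injury)
print(f"Runners needed for ~80 expected cases: {runners_for_80_cases:.0f}")   # 2000
```

Eight expected cases is not much to build conclusions on, which is why point 3, being comfortable with small numbers, so often comes into play.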

The same issues hold true for the Danish longevity study.  While a retrospective study on mortality would be considerably harder to pull off (good luck asking a dead man about his exercise habits), the prospective design would be the right choice regardless.  To achieve usable results, however, the authors of this study had to take all three of the steps above.