About once a year, a new scientific study comes out making bold claims about the harmful effects of too much exercise. These studies get a lot of media attention and feed a strange sort of schadenfreude among the sedentary populace.
Meanwhile, other research that
doesn't carry such contrarian views is quickly brushed aside.
It's about that time again: this week, a study published in the Journal of the American College of Cardiology made the case that infrequent, slow jogging is best for health, while too much jogging, or jogging too fast, is detrimental to the point of being just as bad as a sedentary life.
The news media, eager for a more attention-grabbing storyline, highlighted the article's claim that too much jogging is as bad as being sedentary. Cue the self-satisfied back-patting from the couch potatoes.
The scientific paper in question was published by Peter
Schnohr and other researchers at a number of hospitals in Denmark, as well as
the University of Missouri-Kansas City.
Some other online commentators have brought up the authorship issue: one of the coauthors, James O'Keefe, is a cardiologist who strongly believes that endurance training is bad for your cardiovascular system, and he has authored or co-authored many of the attention-grabbing scientific papers in the past few years arguing that high-volume endurance training is harmful. Here, though, we'll concern ourselves only with the data and its interpretation, not with any accusations of bias. Besides, if we're bringing up author bias, you'd certainly have to include me (a proponent of and participant in high-volume, high-intensity training) in the discussion. But instead of fretting about all of this, let's jump right into the article itself.
The statistics of study design
By following a very large group of Danish citizens
for a twelve-year period, the authors sought to investigate the effects of
jogging (let's leave the "running vs. jogging" terminology debate for
another day) on your risk of death from any cause. Because the researchers selected a large group of healthy subjects and then followed them to observe who died before the study's conclusion, this was a prospective study. This design gives the study a lot more predictive power to discover associations between lifestyle and mortality (i.e., death rate), but at the cost of making the statistics harder.
The best analogy for understanding a prospective study versus the alternative, a retrospective study, is to imagine trying to figure out what causes a running injury like IT band syndrome. The most obvious way to
investigate the causes of IT band syndrome would be to gather up a large group
of runners who already have IT band
syndrome, then make some measurements (like impact forces during running, or
hip strength, for example) and compare these measurements to an equally sized
sample of healthy runners. This design
is retrospective, and though it's easier to find a large number of people with
the condition we are interested in, you can probably see some of the problems
with this. Maybe we discover that the
runners with IT band syndrome have a "hitch" in their stride when
compared to the healthy runners. Is this
asymmetric stride the cause of their
IT band syndrome, or is it a result of trying to avoid putting weight on the
injured area? The retrospective study design is fraught with these types of
problems.
A prospective study designed to investigate IT
band syndrome would have to gather a large group of healthy runners, measure all of them, and then wait and see who
goes on to develop IT band syndrome in a year (or any timeframe, really). We can gather some very powerful information
from this type of research, because the data grant us predictive power. After doing our analysis, we might be able to
say "runners with poor hip strength are twice as likely to get IT band
syndrome," for example. The only
problem is that it's very hard to get good data in prospective studies because
often, the condition you are trying to study is just not very common. Let's say we follow 200 runners for a year,
and half of them suffer an injury at some point during our study. From other research on the frequency of
running injuries, we would only expect about eight cases of IT band syndrome from our initial sample. So, to draw useful information from
prospective studies, you need to do at least one of three things:
1) Have a very large sample size
2) Follow your study population for a very long time
3) Be comfortable inferring conclusions from small sample sizes
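To make the small-numbers problem concrete, here is a rough back-of-the-envelope sketch (in Python) of the expected case count in our hypothetical IT band syndrome study. The injury rate and the fraction of injuries that are IT band syndrome are illustrative assumptions chosen to match the rough numbers above, not figures from the Danish study or any specific injury survey.

# Rough expected case count for a hypothetical one-year prospective study
# of IT band syndrome. All rates below are illustrative assumptions.
n_runners = 200              # healthy runners enrolled at the start
p_injured_per_year = 0.5     # assumed: about half get injured within a year
p_itbs_given_injury = 0.08   # assumed: roughly 8% of running injuries are ITBS

expected_injuries = n_runners * p_injured_per_year             # about 100
expected_itbs_cases = expected_injuries * p_itbs_given_injury  # about 8

print(f"Expected injuries: {expected_injuries:.0f}")
print(f"Expected IT band syndrome cases: {expected_itbs_cases:.0f}")

With only about eight expected cases, any comparison between injured and healthy runners rests on a very thin slice of the data, which is exactly why the three options above matter.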
The same issues hold true for the Danish longevity
study. While it's immeasurably harder to do a retrospective study on mortality (good luck asking a dead man about his exercise habits), the prospective design is still the right choice. To achieve usable results, however, the
authors of this study had to take all three of the above steps.