It's about that time again: this week, a study published in the Journal of the American College of Cardiology made the case that infrequent, slow jogging is best for health, and that too much jogging, or jogging too fast, is detrimental to the point of being just as bad as a sedentary life. The news media, eager for a more attention-grabbing storyline, highlighted the article's claim that too much jogging is as bad as being sedentary. Cue the self-satisfied back-patting from the couch potatoes.
Studies like this get a lot of media attention and feed a strange sort of schadenfreude among the sedentary populace. Meanwhile, research that doesn't carry such contrarian conclusions is quickly brushed aside.
The scientific paper in question was published by Peter Schnohr and other researchers at a number of hospitals in Denmark, along with the University of Missouri-Kansas City. Other online commentators have raised the authorship issue: one of the coauthors, James O'Keefe, is a cardiologist who strongly believes that endurance training is bad for your cardiovascular system, and he has authored or co-authored many of the attention-grabbing papers of the past few years arguing that high-volume endurance training is harmful. Here, though, we'll concern ourselves only with the data and its interpretation, not with accusations of bias. If we're going to talk about author bias, you'd certainly have to include me (a proponent of, and participant in, high-volume and high-intensity training) in the discussion. But instead of fretting about all of this, let's jump right into the article itself.
The statistics of study design
By following a very large group of Danish citizens for a twelve-year period, the authors sought to investigate the effects of jogging (let's leave the "running vs. jogging" terminology debate for another day) on your risk of death from any cause. Because it picked out a large group of healthy subjects, then followed them to observe who ended up dying before the study's conclusion, this study was a prospective study. This design gives the study a lot more predictive power to discover associations between lifestyle and mortality (i.e. death rate), but at the cost of making the statistics harder.
The best way to understand a prospective study versus its alternative, the retrospective study, is to imagine trying to figure out what causes a running injury like IT band syndrome. The most obvious way to investigate the causes of IT band syndrome would be to gather up a large group of runners who already have IT band syndrome, make some measurements (impact forces during running, or hip strength, for example), and compare these measurements to an equally sized sample of healthy runners. This design is retrospective, and though it makes it easier to find a large number of people with the condition we are interested in, you can probably see some of its problems. Maybe we discover that the runners with IT band syndrome have a "hitch" in their stride when compared to the healthy runners. Is this asymmetric stride the cause of their IT band syndrome, or is it a result of trying to avoid putting weight on the injured area? The retrospective study design is fraught with these types of problems.
A prospective study designed to investigate IT band syndrome would have to gather a large group of healthy runners, measure all of them, and then wait and see who goes on to develop IT band syndrome in a year (or any timeframe, really). We can gather some very powerful information from this type of research, because the data grant us predictive power. After doing our analysis, we might be able to say "runners with poor hip strength are twice as likely to get IT band syndrome," for example. The only problem is that it's very hard to get good data in prospective studies because often, the condition you are trying to study is just not very common. Let's say we follow 200 runners for a year, and half of them suffer an injury at some point during our study. From other research on the frequency of running injuries, we would only expect about eight cases of IT band syndrome from our initial sample. So, to draw useful information from prospective studies, you need to do at least one of three things:
1) Have a very large sample size
2) Follow your study population for a very long time
3) Be comfortable inferring conclusions from small sample sizes
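That back-of-the-envelope estimate can be sketched in a few lines of Python. The cohort size and overall injury rate come from the example above; the share of running injuries attributed to IT band syndrome is my own assumption, picked to match the ballpark of roughly eight cases:

```python
# Expected IT band syndrome (ITBS) cases in a one-year prospective
# study of 200 runners. The 8% ITBS share is an assumed figure for
# illustration, not a number from the study under discussion.
n_runners = 200
injury_rate = 0.5    # half the cohort gets injured during the year
itbs_share = 0.08    # assumed fraction of running injuries that are ITBS

expected_injuries = n_runners * injury_rate      # 100 injuries
expected_itbs = expected_injuries * itbs_share   # about 8 ITBS cases
print(round(expected_itbs))  # → 8
```

Eight cases is a painfully thin basis for any statistical claim, which is exactly the bind prospective designs find themselves in.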
The same issues hold true for the Danish longevity study. While it's considerably harder to do a retrospective study on mortality (good luck asking a dead man about his exercise habits), the prospective design is still the right choice. To achieve usable results, however, the authors of this study had to take all three of the above steps.
The study's methods
Schnohr et al. used data from the Copenhagen City Heart Study, which draws from a sample of nearly 20,000 men and women. The authors limited their analysis to a smaller group which continued to be a part of the study as of a checkup period from 2001 to 2003. This narrowed the sample to about 6,300 people. After further elimination of participants who had already been diagnosed with cancer, heart disease, or stroke, the sample was whittled down to 5,048 people. Among these, 1,098 were classified as joggers, with the remainder being nonjoggers. The authors didn't provide a firm definition of how often one needed to jog to be placed in the jogger group.
To quantify whether different amounts of jogging had differing health implications, Schnohr et al. further divided up the sample of joggers based on how much jogging they did, and at what relative intensity. All of this information came from a single questionnaire sent to the participants at their checkup between 2001 and 2003—a significant flaw in the study, as it assumes the participants' running habits were more or less static for the entire twelve-year duration. The vast majority of the thousand-some joggers in the study are what I would call "recreational runners"—a strong majority did less than 2.5 hours of jogging per week, and only fifty people in the entire study ran more than four hours a week. For reference, I would consider the bare minimum time commitment for anyone who is remotely competitive to be about 4.5 hours of actual running per week. Good distance runners routinely do at least seven hours of running per week, with top champions logging fourteen or more.
By combining time spent jogging per week, the number of weekly jogging sessions, and the relative intensity of the jogging sessions (slow, average, or fast speed), the authors came up with a rubric that categorized the participants as "light," "moderate," or "strenuous" joggers. I am generally opposed to this kind of "binning" in scientific studies: artificial aggregate groups make it very easy to manufacture statistical significance by manipulating the parameters that determine who goes in which group. (A similar problem arose in a 2012 paper that attempted to show that "heelstrike-associated injuries," a totally arbitrary category, were more common in heelstrikers.)
Finally, Schnohr et al. reviewed medical records to determine how many of the participants died over the twelve-year course of the study. Then, by plugging in all of the data collected on the participants into a statistical model, the authors were able to crunch the numbers on the association between jogging and death by any cause.
The bulk comparison of joggers to nonjoggers was largely in line with what we would expect: joggers as a whole died less often than nonjoggers. But the interesting, headline-grabbing findings came when the authors broke down the deaths among joggers by category. According to their statistical model, jogging too fast, too far, or too often appeared to diminish the protective effects of exercise. In some cases, such as joggers who frequently ran at a fast pace, the risk of death appeared to be equal to that of sedentary people! Schnohr et al. even provide prescriptions on how far, how fast, and how often you should go jogging for optimal health.
I'm not going to spend much time analyzing the actual statistics in detail. I don't (yet) have the proper training to say when it's appropriate to do a Cox proportional hazards regression analysis, or in which situations the model's assumptions cease to be valid. Instead, I'd like to point out a problem that's very central to any interpretation of the results of this study: not very many joggers died!
Indeed, of the groups we are most interested in—the joggers who exercised several times a week, the joggers who used a fast pace on a frequent basis, and the joggers who logged many hours of running every week—the absolute number of deaths was quite small. Remember the fifty people who ran more than four hours a week? Well, one of them died. Even among the largest categories, the absolute number of deaths was very small. Seven of the 576 "light joggers" died during the study, eight of the 262 moderate joggers, and two of the fifty strenuous joggers. Similar single-digit deaths occurred in all of the categories we are interested in, as the chart below details.
[Forest plot of hazard ratios by jogging category, from Schnohr et al.]
And this is the problem with doing statistics on such small numbers. No matter what your advanced model spits out, your results are going to be extremely sensitive to random variation. Maybe the fifty people jogging over four hours per week were fantastically healthy, and that one poor fellow got run over by a bus? The solid data points in the forest plot can be a little misleading: the important things to look at are the confidence intervals, represented by the horizontal black lines.
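To get a feel for how sensitive one death in fifty really is, we can put a crude 95% confidence interval on that death rate. This sketch uses a simple Wilson score interval for a proportion rather than the Cox model the authors actually used, so treat it as illustration only:

```python
from math import sqrt

def wilson_ci(deaths, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion."""
    p = deaths / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# One death among the fifty highest-volume joggers:
low, high = wilson_ci(1, 50)
print(f"{low:.1%} to {high:.1%}")  # roughly 0.4% to 10.5%
```

A death rate anywhere from well under one percent to over ten percent is consistent with that single observed death, which is exactly why the confidence intervals in the forest plot are so wide.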
When looking at the chart above taking the confidence intervals into account, we can only draw a few basic conclusions:
Quantity of jogging and risk of death
- We can be 95% confident* that doing up to 2.4 hours of jogging per week lowers your risk of death compared to being sedentary
- We do not have the data to determine whether jogging 2.5 hours per week or more is more beneficial than 0-2.4 hours, less beneficial, or even detrimental. The error bars on the data are unhelpfully large; this is because only four people included in the analysis who jogged >2.5 hours per week died (a few were removed to control for smoking, alcohol intake, etc.)
*This is a little too loosey-goosey for hard-nosed statisticians; for a detailed discussion, see this article
Frequency of jogging and risk of death
- We can be 95% confident that jogging 1-3 times per week lowers your risk of death compared to being sedentary
- We do not have the data to determine how the risk of death among people who jog more than three times per week relates to the risk of death among sedentary people OR to people who jog 1-3 times per week.
Jogging pace and risk of death
- We can be 95% confident that jogging at slow or average paces lowers your risk of death compared to being sedentary
- We do not have the data to determine how the risk of death among fast joggers relates to the risk of death among slow to average joggers, OR how it relates to the risk of death among sedentary people.
An important tenet of statistics is that a nonsignificant outcome does not imply the absence of an effect. This is the core misunderstanding made in the interpretations of this article. The fact that the all-cause mortality rate in high volume/high intensity joggers was not statistically significantly different from the all-cause mortality rate in sedentary people does not mean that the risks associated with these two are equivalent. It means the data are insufficient! This is plain to see when you just look at the size of the confidence intervals.
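A toy simulation drives the point home. Suppose strenuous jogging truly halved the twelve-year death rate (both rates below are invented for illustration), and we compared the fifty strenuous joggers against the sedentary group with a simple two-proportion z-test instead of the paper's Cox regression. How often would the benefit even register as significant?

```python
import random
from math import sqrt

random.seed(42)

# Invented rates: sedentary subjects die at 10% over twelve years,
# strenuous joggers at half that. Group sizes echo the study's
# roughly 3,950 nonjoggers and 50 strenuous joggers.
n_sed, n_jog = 3950, 50
rate_sed, rate_jog = 0.10, 0.05
trials = 1000

significant = 0
for _ in range(trials):
    d_sed = sum(random.random() < rate_sed for _ in range(n_sed))
    d_jog = sum(random.random() < rate_jog for _ in range(n_jog))
    p1, p2 = d_sed / n_sed, d_jog / n_jog
    pooled = (d_sed + d_jog) / (n_sed + n_jog)
    se = sqrt(pooled * (1 - pooled) * (1 / n_sed + 1 / n_jog))
    if abs(p1 - p2) / se > 1.96:
        significant += 1

print(f"detected in {significant / trials:.0%} of trials")
```

Even with a genuine halving of risk, a fifty-person group detects the difference only a small fraction of the time; the rest of the time it produces exactly the kind of "no significant difference" result that gets misread as "no effect."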
There are surely other limitations to this study. The subjects in it were surveyed only once, at the outset, about their jogging habits. Certainly, these could change significantly over the course of twelve years! And the intensity of jogging was entirely subjective and self-reported. The pacing guidelines provided by some of the study's authors have nothing to do with the data in this study; they are drawn from other research. There's no telling what the actual physiological stress of one person's "average" pace is compared to another person's "fast" or "slow"—which is to say nothing of the selection biases that arise when you group subjects by how they describe their workout intensities!
I don't really want to focus on these, though. The core problem with this study is that it attempts to draw strong statistical conclusions from extremely weak data. Don't get me wrong; this isn't a bad study—the design and methods are quite good, and it provides valuable data. But the interpretation of the data with regards to high-volume, high-speed, and high-frequency jogging is flawed, especially the parts of that interpretation that make headlines.
As a final note, the small sample sizes in this one study do not render its data meaningless. If enough studies are conducted, the results from each study's small sample of frequent, high-volume, and high-intensity joggers can be pooled together, which does allow for more robust statistical analysis.
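As a sketch of how such pooling works, here is a minimal fixed-effect, inverse-variance meta-analysis on log hazard ratios. Every hazard ratio and confidence interval below is invented purely to show the mechanics:

```python
from math import exp, log, sqrt

# Three hypothetical small studies of high-volume joggers, each
# reporting a hazard ratio (HR) with a wide 95% CI. All invented.
studies = [
    (1.6, 0.4, 6.5),   # (HR, CI lower bound, CI upper bound)
    (0.7, 0.2, 2.6),
    (1.1, 0.3, 4.1),
]

sum_w = 0.0
sum_wx = 0.0
for hr, lo, hi in studies:
    se = (log(hi) - log(lo)) / (2 * 1.96)  # SE of log-HR, from the CI
    w = 1 / se**2                          # inverse-variance weight
    sum_w += w
    sum_wx += w * log(hr)

pooled_log = sum_wx / sum_w
pooled_se = sqrt(1 / sum_w)
print(f"pooled HR {exp(pooled_log):.2f} "
      f"(95% CI {exp(pooled_log - 1.96 * pooled_se):.2f}"
      f"-{exp(pooled_log + 1.96 * pooled_se):.2f})")
```

The pooled interval comes out noticeably tighter than any single study's, which is the whole appeal of combining small cohorts.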
Is this study a game-changer for runners? Certainly not. Recreational joggers who only make it out the door a few times a week can take heart knowing that their jogging is good for their health. But some shaky statistics done on single-digit death counts aren't nearly enough evidence to make a dedicated runner change his or her routine.
Perhaps later, I'll find the time to do a full review of the scientific literature on the interplay between strenuous exercise and long-term health. I haven't read enough studies to have a clear picture of the landscape of current research, but what I have read so far indicates that evidence to date is equivocal.
Reading through this article, I'm reminded of a quote from famous Italian coach Renato Canova, who says that top champions are a "different animal" than regular runners. In the same sense, the exercise habits of the Danes in this study are worlds away from what is expected for an athlete training to compete. While I'm certainly interested in the health benefits or risks of running, my primary goal as a coach (and as a runner myself) is not to increase lifespan. To reiterate a previous point, only fifty of the 1,098 runners in this study even approached what I would consider the bare minimum level of adequate training. Every fall, I help coach over one hundred high school boys, all of whom average far more than four hours of running per week. Serious runners aren't even on the same planet as the subjects in this study: Schnohr et al. recommend running 2-3 times a week for a total of around two hours per week of running, at a speed of eight km/h. Top runners can run twelve to fourteen times per week, for a total of two hours of running per day at sixteen km/h!
Our understanding of the impact of strenuous exercise on longevity will certainly change over time as the science improves, but one thing that won't change is this: to achieve top results as a runner, you must run long, you must run fast, and you must run frequently. To invoke Canova again, to jog for 30 or 45 minutes at twelve-minute mile pace 2-3 times per week might be something enjoyable and good for your health, but do not speak of good results in athletics with this approach.
If your goal is to live a long and healthy life, guided by science, I also have some bad news for you: before you worry about how fast and how often you go jogging, you should probably stop eating red meat, never consume more than 1-2 drinks of alcohol at a time, never sleep less than 7-8 hours per night, and severely curtail your intake of sugar. All of these have much better evidence connecting them with increased longevity.
It will take a very focused study to tease apart the health impacts of serious run training. Fortunately, bigger and better research is currently underway. I am particularly excited about the ULTRA study, an ongoing cohort study of ultramarathoners. The study includes over 1,200 active ultramarathoners, who, as a group, average forty miles per week year-round. A substantial proportion of them are sure to run a lot more than that. This cohort should provide fertile ground for research on the health benefits and/or risks of real endurance training—it's already provided great data on injury rates and overall health (so far, so good on that front!). Until I see the findings of that research, or other high-quality research on high-volume training, I won't hesitate to encourage runners to run fast, run far, and run often.