Max van der Velden
Research Manager
The prevalence of running injuries ranges from 10% to 92%, depending on the definition and subgroup studied. Several risk factors have been identified, such as no previous running experience, a high BMI, higher age, and higher weekly mileage. Runners have reported that a website might be a good tool for learning about injury reduction. The authors of this study therefore designed an online prevention program called '10 Steps 2 outrun injuries'.
This randomized controlled trial assigned participants to two groups: one received online running tips, the other did not. Running injury proportions were then compared between the groups.
The ten tips for preventing injuries were based on the literature and clinical expertise of the clinicians and researchers:
The inclusion criteria were:
The participants received a personalized code that gave them unlimited access to the website containing the tips.
To be counted as an injury, the complaint had to limit running distance, speed, duration, or frequency for seven days or three consecutive training sessions, or lead the participant to contact a health professional.
Based on an expected injury rate of 52.1%, the authors calculated that 3394 runners needed to be included for a two-sided t-test with 80% power and an alpha of 0.05.
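For those curious how such a number comes about, below is a minimal sketch of a two-proportion sample size calculation in Python (statsmodels). The 52.1% control-group risk is taken from the paper; the 5-percentage-point reduction the intervention is assumed to detect is purely illustrative, so the output will only match the authors' 3394 if their exact assumptions (and any drop-out allowance) are used.

```python
# Sketch of a sample size calculation for comparing two injury proportions.
# Assumption: the program is expected to cut the 52.1% injury risk by
# 5 percentage points; the paper's exact assumed effect is not restated here.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

p_control = 0.521        # expected injury proportion without the program
p_intervention = 0.471   # hypothetical proportion with the program

effect_size = proportion_effectsize(p_control, p_intervention)  # Cohen's h

n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,             # two-sided significance level
    power=0.80,             # desired power
    alternative="two-sided",
)
print(f"{n_per_group:.0f} runners per group, {2 * n_per_group:.0f} in total")
```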
A total of 4105 participants were included and randomized into either the intervention group or the control group. Compared with the control group, the participants in the intervention group were older, had a higher BMI, and reported fewer running-related injuries at baseline.
During follow-up, 35.5% of participants sustained an injury: 35.5% in the intervention group versus 35.4% in the control group, a statistically non-significant difference. The authors performed several subgroup analyses that we will not discuss in this review.
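To see how unsurprising this null result is, the rough two-proportion z-test below plugs in the reported percentages. The per-group sizes are an assumption (roughly half of the 4105 randomized participants each, ignoring drop-out), so the p-value is only indicative.

```python
# Rough re-check of the headline comparison: 35.5% vs 35.4% injured.
# Assumed group sizes: ~half of the 4105 randomized runners per arm.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

n_per_group = np.array([2053, 2052])   # assumed split, not reported here
injured = np.round(np.array([0.355, 0.354]) * n_per_group).astype(int)

z_stat, p_value = proportions_ztest(count=injured, nobs=n_per_group)
print(f"z = {z_stat:.2f}, p = {p_value:.2f}")  # p is nowhere near 0.05
```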
Let’s applaud these authors for performing such a huge study. One of the main issues within physiotherapy science is small sample sizes, resulting in underpowered studies with imprecise results. Although the authors probably hoped for a significant effect, these non-significant findings were nonetheless published, which is a good thing. Studies should be published based on their methods and relevance, not their outcomes.
There are a few things to discuss. Despite randomization, the groups differed at baseline on three important factors (previous injuries, BMI, and age), which could confound the results.
Another comment concerns the lack of validation of the program. Not all tips have been researched, let alone confirmed, to be effective in isolation. We need long-term prospective cohort studies that examine different factors to see which ones actually lead to injuries. Adding to this, the authors’ first tip is not to change anything if the runner has no history of running injuries. However, half of the study sample had not experienced an injury in the past 12 months, making almost all other tips irrelevant for this subgroup.
This segues nicely into the next point: compliance. Within the intervention group, only half of the participants reported having implemented at least one element of the program in their training. Sadly, we do not know whether they actually changed anything. It could very well be that the tip they ‘implemented’ was already part of their autoregulated training. It’s hard to say whether a program ‘works’ if it is not implemented well by the participants. We are all humans who find it hard to plan for the long term and to spend time and energy on things that do not seem applicable to us (here, because of the absence of a current injury). Maybe the authors could have nudged the participants a bit more to boost implementation. This would obviously have to be automated in some way, since calling more than 2000 runners to check whether they have read and implemented the tips would be a pretty shitty task for researchers.
As mentioned above, we should applaud the authors for setting up such a huge study. However, the study could have been a lot smaller. The aim of the trial was to test whether the prevention program was superior, and for that a one-sided t-test is sufficient. A two-sided t-test lowers your statistical power (meaning you need more participants) because it has to look both ways: it checks whether the intervention group did ‘better’ or ‘worse’ than the control group. One could argue that the authors wanted to be able to detect whether the intervention group did worse, but this seems implausible since they call it a prevention program, not simply a program.
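As a quick illustration of this power argument, the sketch below reuses the hypothetical 52.1% versus 47.1% effect from the earlier calculation and compares the required group size for a two-sided and a one-sided test. The exact numbers depend on the assumed effect, but at the same alpha and power the one-sided test always needs fewer participants.

```python
# Compare required group sizes for two-sided vs one-sided testing,
# using the same hypothetical 52.1% -> 47.1% effect as in the sketch above.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

effect_size = proportion_effectsize(0.521, 0.471)  # Cohen's h (assumed effect)
solver = NormalIndPower()

for alternative in ("two-sided", "larger"):  # 'larger' = one-sided superiority
    n = solver.solve_power(effect_size=effect_size, alpha=0.05,
                           power=0.80, alternative=alternative)
    print(f"{alternative:>9}: {n:.0f} runners per group")
```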
This is a fine study that adds to the knowledge bank of running injury prevention. Results might differ if compliance and implementation can be improved in future trials. However, we first need prospective long-term cohort studies to establish what the risk factors actually are, before we jump to conclusions from made-up ‘prevention’ trials.
Don’t run the risk of missing potential red flags or ending up treating runners based on a wrong diagnosis! This webinar will prevent you from committing the same mistakes many therapists fall victim to!