What’s a P-Value?
In simple terms, the p-value expresses how surprised you should be by the data, assuming there is no effect. The lower the p-value, the more incompatible the data are with your model (i.e. the assumption that there is no effect).
E.g.
Treatment A is compared to treatment B, and you assume there is no effect or no difference; in other words, you expect the null hypothesis to be correct. You perform the test and get a p-value of 0.02. That means the data you gathered are pretty surprising, given your assumption that the groups would not differ.
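To make this concrete, here is a minimal sketch of how such a p-value could come out of an independent-samples t-test. The outcome measure, group sizes, means, and spread are made-up assumptions purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Hypothetical outcome scores (e.g. pain reduction) for two treatment groups.
group_a = rng.normal(loc=5.0, scale=2.0, size=30)  # treatment A
group_b = rng.normal(loc=6.5, scale=2.0, size=30)  # treatment B

# Two-sided independent-samples t-test under the null hypothesis of equal means.
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

A p-value of, say, 0.02 here would mean: if the two treatments truly did not differ, data this extreme would occur in only about 2% of identical experiments.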
The p-value exists to protect you from randomness. If you perform a study, chances are that the effects you see are just random, or data noise, as we call it. That is why you might see noticeable differences in the mean values between groups, yet no statistically significant effect. It can also go the other way around: a study might show a non-significant result even though there is a true effect, perhaps because the sample size is simply too small.
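To see how much ‘effect’ pure noise can produce, the sketch below (a hypothetical setup) repeats the same experiment 10,000 times with no true difference between the groups; roughly 5% of runs still cross the p < 0.05 line by chance alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# 10,000 simulated experiments, 20 participants per group,
# both groups drawn from the SAME distribution: no true effect exists.
a = rng.normal(loc=0.0, scale=1.0, size=(10_000, 20))
b = rng.normal(loc=0.0, scale=1.0, size=(10_000, 20))

# One t-test per simulated experiment (row-wise).
_, p = stats.ttest_ind(a, b, axis=1)
print(f"False-positive rate: {np.mean(p < 0.05):.3f}")  # close to 0.05
```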
What influences the p-value?
P-values are influenced by a few different factors: sample size, effect size, and the type of test with its assumptions (a rough illustration follows the list).
- Sample size: the bigger the groups, the sooner small differences become statistically significant, and vice versa.
- Effect size: the bigger the effect, the sooner you get statistically significant results, even with smaller groups, and vice versa.
- Type of test: how sensitive a test is to differences depends on its assumptions, for example about the data distribution, independence of measurements, homoscedasticity, one-sided vs. two-sided testing, and between-group vs. within-group comparisons.
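The sketch below illustrates the first two bullets with statsmodels’ analytic power calculation for an independent-samples t-test; power is the probability of a significant result when an effect truly exists. The chosen effect sizes and group sizes are arbitrary examples.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.2, 0.5, 0.8):        # small / medium / large effect (Cohen's d)
    for n in (20, 80, 320):      # participants per group
        power = analysis.power(effect_size=d, nobs1=n, alpha=0.05)
        print(f"d = {d}, n = {n} per group -> power = {power:.2f}")
```

Notice the pattern: a large effect is detectable even with small groups, while a small effect needs hundreds of participants per group.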
E.g.
A huge study can find statistically significant results for even the smallest of effects, and these effects might not mean a thing. This is where clinical significance comes into play. Conversely, the original penicillin studies needed only a tiny sample, because the effect on eliminating bacteria was huge.
The p < 0.05 threshold
The threshold for statistical significance most researchers use (i.e. p < 0.05) is simply arbitrary. All things considered, it should change based on your study setup. If you really do not want false-positive results (e.g. a decision to undergo a life-threatening operation), you need a lower threshold. If you really do not want false negatives (e.g. when diagnosing aggressive tumors), you need a high-powered study and possibly a higher threshold. This illustrates the give-and-take relationship between type 1 (α) and type 2 (β) errors.
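This trade-off shows up directly in sample-size planning. As a sketch (assuming a medium effect of d = 0.5 and a target power of 80%, both arbitrary choices), lowering α demands more participants to keep β in check:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for alpha in (0.05, 0.01, 0.001):
    # Solve for the group size that keeps power at 80% (i.e. beta = 0.20).
    n = analysis.solve_power(effect_size=0.5, alpha=alpha, power=0.80)
    print(f"alpha = {alpha}: ~{n:.0f} participants per group")
```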
Do note that the p-value is derived from the data, not the theory. You cannot ‘prove’ your theory with a statistically significant effect. The only thing you can do is try to refute your theory with different studies; if it survives those attempts, your theory stands. This is falsification.
Misconceptions around the p-value
Some common misconceptions about the p-value in medical research include:
- A significant p-value means that the effect or association is large or clinically meaningful.
  - Reality: The p-value only indicates the likelihood of obtaining the observed result, or a more extreme one, under the null hypothesis. It does not provide information about the size or clinical significance of the effect or association (see the sketch after this list).
- A non-significant p-value means that there is no effect or association.
  - Reality: A non-significant p-value only suggests that the observed result is not statistically significant; it does not necessarily mean that there is no effect or association. It may be due to low statistical power or other factors such as measurement error or confounding variables.
- A p-value of 0.05 is a universal threshold for statistical significance.
  - Reality: The choice of significance level depends on the context and should be based on factors such as the study design, sample size, and the consequences of making a type I error. A lower significance level may be appropriate in some situations, such as studies with multiple comparisons or high stakes.
- A significant p-value proves causation.
  - Reality: Statistical significance only indicates the likelihood of obtaining the observed result, or a more extreme one, under the null hypothesis. It does not establish causality, which requires additional evidence from study design, biological plausibility, and other factors.
- A large sample size always leads to a significant p-value.
  - Reality: A large sample size increases the power to detect an effect or association, but it does not guarantee a significant p-value. The effect size, variability, and other factors also play a role in determining statistical significance.
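The first misconception is easy to demonstrate. In this sketch (hypothetical data), a trivial true difference of d = 0.05 becomes highly significant simply because the sample is enormous:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# 100,000 participants per group; the true difference is tiny (d = 0.05).
a = rng.normal(loc=0.00, scale=1.0, size=100_000)
b = rng.normal(loc=0.05, scale=1.0, size=100_000)

t_stat, p = stats.ttest_ind(a, b)
print(f"p = {p:.1e}")  # typically far below 0.05 despite a trivial effect
```

Whether a difference of 0.05 standard deviations matters to a patient is a question of clinical significance, which the p-value cannot answer.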