Drawing A Line In The Evidence-Based Sand

By Danny Lennon

There is a definite growth in the number of people in the health & fitness space talking about taking an ‘evidence-based’ approach to nutrition, training and health.

This can only be a good thing for the fitness industry. In fact, if more people adopted this mindset, the incidence of idiotic messaging would decline significantly.

But when we’re talking about approaches to nutrition and fitness, to what extent should we constrain possible strategies to employ?

I mean, if we want to be evidence-based, how should we approach practices that could work but lack a meta-analysis of several randomized controlled trials?

i.e. those practices in the yellow band of the graphic below.

(Graphic source: Bumbarger, B. K., Moore, J. E., & Cooper, B. R. (2013). Examining adaptations of evidence-based programs in natural contexts. The Journal of Primary Prevention, 34(3), 147-161. Retrieved from http://link.springer.com/article/10.1007%2Fs10935-013-0303-6)

So what about all the stuff in the middle of that continuum?

Should such practices be discarded?

How much evidence is enough to make a concept acceptable?

Is it one RCT? Is it a meta-analysis showing majority support? Is one meta-analysis enough? Can the thoughts of one set of authors be sufficient? Is it an overwhelming consensus in the scientific community?

Is it implausible that you could get a great result from doing something that currently lacks “definitive proof”?

Actually, let’s tackle the notion of needing “proof”.

Definitive Proof is Unscientific

That search for proof is inherently flawed. Looking for proof is not what we should be doing, because we don’t prove concepts with research; we simply move further away from what is incorrect.

In science we look for levels of evidence, not definitive proof. The distinction is important.

This is especially true in the biological sciences, which cannot be treated in the same way as, say, mathematics.

In nutritional science, knowledge is tentative rather than definitive. So any theory that we currently accept as true is simply the one that has the best explanation and evidence-base, compared to any alternative idea.

So when we talk about being evidence-based, we aren’t talking about knowing the definitive answer to everything. What we are saying is that we base our practice and recommendations on theories that have better (and/or more) evidence than alternative theories with worse (and/or less) evidence.

Evidence-Based vs. Evidence-Only

But can we reach a point where evidence-based becomes too evidence-based?

Where it’s a matter of clear evidence or nothing? No in-between. No discussion. No openness to potential novel approaches.

As Sackett et al. originally stated in relation to evidence-based medicine: “It’s about integrating individual clinical expertise and the best external evidence”.

We must be careful not to think of real-world experience, anecdotes and hypotheses as something to dismiss out of hand.

So evidence-BASED practice is not evidence-ONLY practice, which I think is summed up perfectly by this Venn diagram:

The Supply & Demand Problem

Consider how long it could take to hypothesize a concept, design a study, get it funded, carry it out, get it published, have it replicated by another group, then trialed in humans (with the process repeated), and finally have a meta-analysis or review published on the totality of the evidence base.

Seem like a quick and easy job to you?

Due to the time and expense of RCTs, in combination with the enormous number of health-related questions that are still unanswered, it seems there will always be more demand for evidence than can be supplied.

Remember, absence of evidence of a benefit is not the same thing as evidence of absence of a benefit.

So do we just sit and wait? Or do we attempt to include methods of practice that cannot currently be considered truly evidence-based?

Are there pitfalls of NOT acting because something lacks conclusive evidence of benefit?

If It Works, Does It Matter If There’s Science?

While I think there is some room for educated trial-and-error, let’s not lose the run of ourselves here. That doesn’t mean we can accept the “well, it worked, so I don’t need evidence” line from people.

I don’t care if it worked for you, I want to know WHY it worked.

That’s almost the worst and most sly form of pseudoscience: recommending something that will get a result but being dishonest with people about why it worked. I’m sure you can think of several examples.

If you don’t have an evidence-based operating system at the core of your philosophy/practice then you can run into several problems:

1) Waste of Tools/Resources

You may still get results but how much extra mental energy, time, money and lifestyle disruption has to be expended vs. an evidence-based, tested and proven approach? This was excellently laid out by Joseph Agu in this superb piece: ‘Science vs. Broscience: A Matter of Economy?’

2) Increased Potential to Cause Harm

Even worse than wasting someone’s time/effort/money is the very real potential to cause harm. Just ask the growing number of female trainees who experience amenorrhoea (loss of their period) because some idiot told them to eat 600 calories a day. Or the anti-vaccination lobbyists.

3) Not Open To Scrutiny

When someone defends their actions with “this is my method, it works, I don’t need evidence”, they become bulletproof (no pun intended) to scrutiny & challenge from competent experts or peers. I mean, how can you challenge someone on their views when they choose to take scientific evidence out of the equation?

Being open to scrutiny is extremely important in order to become better, continue to learn, evolve your ideas and be seen as credible.

Again, often we’re not arguing about whether something worked; it’s WHY it worked that we are trying to get to the bottom of.

So how do you separate the legit from the sketchy?

This can be a difficult task, but here are markers of good science, as well as some red flags that indicate a claim may not be scientifically valid:

Conclusion

I’m certainly not calling for an abandonment of evidence-based practice. EBP should remain our default setting.

But those of us in the evidence-based “scene” need to remember that this is not a dichotomy where it’s either “evidence-based” or “quackery”. Rather, it is a continuum that needs to be considered on a case-by-case basis. Sometimes we need to listen to the anecdotes of other in-the-trenches coaches and doctors who are actually getting success with people.

Obviously, anecdotal data has to be taken for what it is: an anecdote. It’s rightly at the bottom of the hierarchy of evidence.

Not everyone who uses strategies that fall outside of overwhelmingly clear evidence is a quack. But as Alan Aragon has pointed out previously, not all anecdotes are created equal.

As an example, an anecdote from, let’s say, a former tech-based businessman peddling over-priced supplements on the internet is VERY different from an anecdote from a world-class coach with a track record of involvement in academia who is highly respected by his scientific peers.

Irish comedian Dara Ó Briain put it best: “There’s a kind of notion that everyone’s opinion is equally valid… my arse!”

It doesn’t mean their strategies/conclusions will always be correct for the reasons they claim. But it’s worth having a conversation about. Such conversations generally lead to learning for all involved.

At the foundation of our approach should be evidence. But this should be something that aids us, not something that restricts and constrains us.

I realize much of this post has been talking in abstract terms. In the next post, I’ll lay out some examples of potentially useful practices that still need more research to be classed as evidence-based practice.

