Episode 179: Patterns that I’ve recognised and traits that are common to the best people in evidence-based nutrition and fitness.
Transcript:
They Avoid Absolutism
In a world where there is no shortage of gurus who resort to the powerful tactic of talking in absolutes, I have seen the opposite be an across-the-board trait among all the experts I respect and believe to have integrity. These true experts are very careful to place caveats on almost everything they say. “Facts” are only really useful when they are delivered in the appropriate context and applied to an appropriate scenario. Take any statement or question in nutrition: if you don’t provide caveats, then it truly is meaningless. For example, think about the statement “sugar is bad and should be avoided”. That means nothing. Is it always bad? In all cases? For all people? What about an elite athlete in the middle of a marathon? What about someone who has gone hypoglycaemic? What about the sugar in that kid’s apple or carton of milk? Like I said, meaningless.
And this happens all the time in nutrition, because absolutist statements make the person sound more convincing to the masses and are better at driving emotional reactions. I mean, it’s attractive to think of things in black and white, because it requires less mental legwork on our part. Once you start introducing caveats, context, individual variation, etc., then you have to start thinking about problems more deeply, you have to start tweaking and experimenting with your individual response, you have to start thinking critically. That stuff is hard. But it’s what the true best minds in the field do.
They Value Scientific Literacy
Scientific literacy is something I’ve harped on about for a while. At last year’s Sigma Nutrition Conference, I think I made the point that scientific literacy might be more important than what knowledge or facts you know right now, because scientific literacy allows you to decipher what information is good, or at least likely to be good, and what is likely to be bad or something you should be sceptical of.
So in trying to boil down a definition of scientific literacy, I go back to some documentation that relates to my time as a school science teacher. Here in Europe, the OECD has something called the PISA Framework (PISA standing for “Programme for International Student Assessment”), an international survey that aims to evaluate education systems worldwide by testing the skills and knowledge of 15-year-old students. In the most recent PISA Framework document (2015) put out (link), there is a large discussion of scientific literacy, so it’s well worth reading if you’re interested. In the framework, they define scientific literacy as “the ability to engage with science-related issues, and with the ideas of science, as a reflective citizen.”
Later it goes on to expand on that by saying:
“A scientifically literate person, therefore, is willing to engage in reasoned discourse about science and technology which requires the competencies to:
- Explain phenomena scientifically
- Evaluate and design scientific inquiry (propose ways of addressing questions scientifically)
- Interpret data and evidence scientifically – analyze and evaluate data, claims and arguments in a variety of representations and draw appropriate scientific conclusions.”
Being scientifically literate encapsulates a number of ideas, one of the most basic of which is understanding what science actually is. Despite a common idea in the general population that science is experiments being done in a lab, science isn’t some sort of activity. Science is a process. The scientific method is a very specific process that we can use to move us closer to the truth over time.
Essentially, it starts with having a question that you want answered. From there, you formulate a hypothesis that might explain the issue in question. It’s key to bear in mind that a scientific hypothesis must be falsifiable, otherwise it cannot be meaningfully tested. Once we have a hypothesis, we must determine the predictions that this hypothesis would make. From these predictions, one or more are selected to be tested, most likely based on which are best suited to being tested and have the lowest probability of producing results that are simply down to chance. Then we test these predictions, most commonly through an experiment or study. Once data has been collected, interpretations are made as to what it means. From here we can reject the initial hypothesis if the predictions are falsified, or we may refine/modify/expand the hypothesis, and the process cycles back to developing more predictions and then testing them. Over time we refine the hypothesis so it approximates closer and closer to the likely truth.
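To make that cycle a bit more concrete, here is a minimal, purely illustrative sketch of the loop written in Python. Everything in it is hypothetical (the Hypothesis class, derive_predictions, run_study, the example question are all made up for illustration); it simply mirrors the steps of question → hypothesis → predictions → test → interpret → refine, and isn’t meant to represent how real research is actually run.

```python
# Purely illustrative sketch of the cycle described above. All names here
# (Hypothesis, derive_predictions, run_study, the example question) are made
# up for illustration; real research is far messier than this loop.
import random
from dataclasses import dataclass

@dataclass
class Hypothesis:
    statement: str

def derive_predictions(hypothesis):
    # Work out what we would expect to observe if the hypothesis were true.
    return [f"If '{hypothesis.statement}' is true, we expect outcome {i}" for i in (1, 2, 3)]

def run_study(prediction):
    # Test the chosen prediction and collect data; here a random draw simply
    # stands in for whether the prediction held up or was falsified.
    return random.random() > 0.3

def scientific_method(question, hypothesis, cycles=3):
    print(f"Question: {question}")
    for cycle in range(1, cycles + 1):
        predictions = derive_predictions(hypothesis)   # derive testable predictions
        prediction = predictions[0]                    # pick the most testable one
        held_up = run_study(prediction)                # run the experiment/study
        if held_up:
            print(f"Cycle {cycle}: data consistent with the hypothesis, keep testing")
        else:
            print(f"Cycle {cycle}: prediction falsified, refine the hypothesis")
            hypothesis = Hypothesis(hypothesis.statement + " (refined)")
    # Over repeated cycles, the surviving hypothesis approximates the truth more closely.
    return hypothesis

scientific_method("Does intervention X improve outcome Y?",
                  Hypothesis("X improves Y via mechanism Z"))
```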
Science and the scientific method aren’t perfect, but they damn sure are the best tool we have for explaining why things are the way they are and why things work.
For more on concepts related to scientific literacy, see episode 165 with Kevin Hall.
They Value Good Science Over Their Own Perceived Knowledge
This point builds on the last one about good science. One of the striking things about the best evidence-based folks in the world is their desire to actually find the truth through good science: objective enquiry to try to draw sensible conclusions. They don’t care as much about how people perceive them or their knowledge. They don’t try to always sound like they know everything or that they have all the answers. In fact, very often they will talk about things they were wrong about, things they don’t know and what we still need to work out.
And so here’s the big issue: “using” science and being a good scientist are two different things. In both my conversations with Brad Dieter (link) and Ben Esgro (link) we touched on this difference. Essentially, many of the people putting out misleading information are not obvious quacks; rather, they seem to be science-based, or to be using science, because they know some fancy terms and know how to link to a study on PubMed, but really they are fooling people. I mean, the obvious quacks who say crazy stuff are easy for most people to spot (Mercola, Food Babe, etc.). But it’s the others, the ones who make it look like they are science-based, who are the problem. They point to all sorts of studies, but to go back to the previous point, they aren’t being good scientists who analyse and interpret the literature in an honest, objective manner. The best in the world don’t do that. Instead, they pride themselves on making the most honest, accurate and objective interpretations of the totality of the literature that they have come across on the topic, all the while applying the correct context and caveats as they go.
They care more about the truth and scientific honesty than they do about being seen as some all-knowing expert.
They Are Aware of What They Don’t Know
This is part of the last point I made, but I feel it deserves to be isolated because it’s so damn important. This is a real sign of an expert: they are extremely aware of just how much they don’t know. And not only that, they are often keen to point that out and make that disclaimer. Listen back to episode 98 with Dr. Lee Hamilton. He’s one of the smartest guys I know and his understanding of molecular biology is amazing. And yet, he’s very clear to point out the narrow field where he feels his expertise lies, to point to the other researchers and mentors in his field that he learns from, and to acknowledge his lack of expertise in other areas (e.g. implementing interventions with people in a coaching situation, rather than a lab). Similarly, a recent episode with Eric Trexler from UNC highlighted this same awareness of how much there still is to know. His appetite for answering more research questions was phenomenal, and it was all driven by his underlying understanding of all the things he doesn’t know yet.
The best in the world are aware of just how much they don’t know.
They Know Both Science & Experience are Important to Evidence-Based Practice
Science vs. experience is a false dichotomy. This is often lost in discussions of evidence-based practice. I wrote about this previously in a post titled “Drawing a Line in the Evidence-Based Sand”. And similarly, it was a key point made by both Spencer Nadolsky (link) and Brad Schoenfeld (link) in their episodes of the podcast. They were keen to point out that both an understanding of the current state of the scientific literature AND practical experience of using different interventions are key components of effective evidence-based practice. You can’t be an effective doctor/coach if you have no understanding of the scientific consensus on a certain topic. Similarly, you can’t effectively talk about helping people as a doctor/coach if you’ve never treated a patient or worked with a single client in your life.
People who argue reading studies is more important than in-the-trenches practical learning are wrong. People who argue that only experience matters and reading research is of no value are also wrong. Effective evidence-based practice is predicated on both.
They Care About People
To round this out, and on a different note to the science stuff discussed so far, I’ve noticed that the people who seem to be driven to learn more, spread knowledge and act with integrity are those who seem to truly care about helping people. You see it in discussions with coaches I really respect, like Alberto Nunez (link) or Andy Morgan (link). Eric Helms talked about it way back in episode 28: how his drive, in both his academic career and his coaching, comes from wanting to put something good into the world. And I’ve seen it come up over and over again. The reason all these previous points about science, objectivity, honesty, humility, etc. are important is because conducting yourself in that manner is, in my opinion, by far the most likely way you can help people and empower them, whether through better information, through coaching, or really just by being a better, more authentic person.