I want to talk about EBP from a different perspective in this post. I hope it will generate some good discussion about the role of evidence, theory, and research in driving our practice and our therapy culture. My apologies in advance for the long post, but I hope you’ll find it worth reading, thinking about, and commenting on. As a bonus, I promise not to discuss specificity, reliability, or analyses of variance, or to end another sentence with a preposition.
In a recent series of blog posts on John Barnes’ Myofascial Release, we examined that proposed evaluation and treatment scheme. During some of the discussions, a colleague posted this comment:
"Individual practitioner experiences would be level 5 evidence. That’s great…but if we are to really see if MFR is a better choice of treatment than any other intervention or just the passage of time, then we need better quality controlled trials. Even a case series in a peer reviewed journal would open the door to a feasible discussion."
While I agree with what was said, it unsettled me at the time, and I had a hard time figuring out why. The next day, I posted this response:
"I think it’s important to consider that, for MFR and most alt-med treatments, the ‘outcome evidence-only’ brand of EBP won’t get us very far – and in fact may set us up for failure.

Let’s say, for example, I publish an RCT showing that MFR produced clinically meaningful changes in an outcome measure of interest versus a competing intervention. Does that tell us ANYTHING about the truth of stored memories, fascial restrictions, or energy medicine? No. Success in the treatment DOES NOT validate the theory.

When I see my colleagues approaching alt-med treatments asking for outcome evidence, I get justifiably nervous – are they just one RCT away from believing in energy medicine? What we should be focusing on is the absolutely indefensible theory here – it’s scientific reasoning that will help us, not statistics. Let’s never forget that."