EBP, Deep Models, and Scientific Reasoning

I want to talk about EBP from a different perspective in this post. I hope it will generate some good discussion about the role of evidence, theory, and research in driving our practice and our therapy culture. My apologies in advance for the long post, but I hope you’ll find it worth reading, thinking about, and commenting on. As a bonus, I promise not to discuss specificity, reliability, or analyses of variance, or to end another sentence with a preposition.

In a recent series of blog posts on John Barnes’ Myofascial Release, we examined that proposed evaluation and treatment scheme. During the discussion, a colleague posted this comment:

"Individual practitioner experiences would be level 5 evidence. That’s
great…but if we are to really see if MFR is a better choice of
treatment than any other intervention or just the passage of time, then
we need better quality controlled trials. Even a case series in a peer
reviewed journal would open the door to a feasible discussion."

While I agree with what was said, it unsettled me at the time, and I had a hard time figuring out why. The next day, I posted this response:

"I think it’s important to consider
that, for MFR and most alt-med treatments, the "outcome evidence-only"
brand of EBP won’t get us very far – and in fact may set us up for

Let’s say for example I publish an RCT showing that MFR produced
clinically meaningful changes in an outcome measure of interest versus
a competing intervention. Does that tell us ANYTHING about the truth of
stored memories, fascial restrictions, or energy medicine? No. Success
in the treatment DOES NOT validate the theory.

When I see my colleagues approaching alt-med treatments asking for
outcome evidence, I get justifiably nervous – are they just one RCT
away from believing in energy medicine? What we should be focusing on
is the absolutely indefensible theory here – it’s scientific reasoning
that will help us here, not statistics. Let’s never forget that."

For a long time, our profession has been hamstrung by a lack of evidence to support our interventions. Whatever the theory driving our practice might be, we had no way to show each other, our colleagues, or our patients that what we were doing was effective – did it help, and what was its efficacy relative to other treatments? We have (for the most part) as a profession embraced the EBP model, and the last few years have seen a veritable explosion in outcome studies demonstrating the efficacy of many of our interventions – both by themselves and in comparison to others. This is outcomes evidence, and it is a very important part of practice.

However, there is more to evidence and to evidence-based practice than outcome studies.

Let’s do a thought experiment. Let’s say that tomorrow a large RCT is published on the treatment of thoracic back pain with JFB-MFR. As many of you know, we don’t have much quality evidence to choose from in considering options for a patient with thoracic back pain – relative to cervical or lumbar region pain. Let’s say this RCT found that the MFR treatment produced clinically-meaningful improvements in a patient-centered outcome survey and in pain rating scores. Would that give you any information at all about the truth of energy medicine, stored memories and emotions in the fascia, the applicability of quantum physics to patient care, or the validity of myofascial restrictions?

I would hope your answer here would be "NO!"

We see here that demonstrating that the intervention is effective does not do some of the hardest work that we need to do in science – it doesn’t help us formulate a scientific theory or "deep model" that we can use to guide our practice and research. At the end of that notional RCT, we are no closer to determining why the patient got better than we were before we did the study!

You see, it’s a basic theory or deep model of function that underpins it all. It drives our education. It becomes part of our therapy culture. It’s imparted to patients during treatment. We cling to it for support when we lack outcomes evidence to guide us.

The late Jules Rothstein, PT, PhD, in one of my favorite editorials, "When Thoughtfulness Dies", encouraged us not to just lob outcome studies at each other, but to develop a good theoretical base for our education, explanatory models, and treatment development. He even referred to it as a "secret weapon". I contend it’s only a secret because we don’t examine our interventions and teach our students the way other scientists do – we don’t start with a sensible explanatory model or theory. It’s at that level that JFB-MFR is dead in the water. An RCT would be a waste of time if the treatment makes no sense. That is where JFB-MFR and the alt-medicine treatments fail – at the basic level, they just aren’t consistent with human physiology. Seen in this light, NCCAM could save a lot of money by requiring a good deep model from their investigators before throwing gobs of money toward outcome studies that somehow never seem to bear much fruit. Those who recognize the role of theory in practice are not surprised at this lack of success.

We should focus as much on challenging our explanatory models and teaching in ways congruent with actual human physiology as we do on producing outcomes research, or the next generation of DPTs will be just one or two RCTs away from doing Reiki and Therapeutic Touch. Hey, if there’s "evidence to support it", it must be good, right? Can you see the problem with outcomes-only evidence?

I can think of a few deep models in the therapy culture that haven’t held up to examination but that some therapists still cling to – and these false ideas remain common in our educational programs. I’m thinking of the disc model often cited by some McKenzie advocates, and the facet joint subluxation and motion palpation examination model often cited by some in the manual therapy community. These deep models have been shown to be inaccurate, but they persist in part because some of the interventions thought to rely on them (directional preference exercise and manipulation) have good supporting outcomes evidence. These incomplete and inaccurate deep models, when kept in our therapy culture, keep us from looking for other explanatory models that still fit the outcomes evidence but are more accurate. An updated deep model opens up new avenues for treatment and research; if the old models aren’t challenged and updated, we miss those opportunities. This is how science progresses in many areas – a constant reexamination of the underlying theory in light of empirical evidence.

What do you think about the role of theory in practice?
What would you say to a colleague who practiced based more on a solid theory than on outcomes research? Could that still be considered "evidence-based" practice?

Your comments are welcome.
-Jason Silvernail DPT
