Remember when online "Purity Tests" were popular?
The point, of course, was to see whether you were "pure" or not, and to share the verdict (and your percentage) with your friends. The test was largely irrelevant to real life, and it was easy to give false answers to change your percentage, but it arguably got a lot of people to think about the larger issues raised by the questions posed.
I think EBP is in many ways similar to the purity test. Stay with me, I’ll get there.
I realize that very few of our colleagues read information about their profession on the internet, that even fewer write regularly, and that possibly fewer still think hard about the way they practice and try to line it up with evidence and the latest scientific advancements. However, many of the admittedly few people talking and writing about it online are thought leaders at least, if not some of the leading figures in our profession. That being the case, the reputation EBP gets from our thought leaders may strongly affect the way it is taught to our students and residents and the way it is promulgated through our profession.
So why am I comparing it to the purity test?
Many internet discussions of physical therapy practice have recently begun to turn towards competitions about who is more "evidence-based". I have even seen people list their percentage (the equation leading to such a number must be interesting) in their posts, presumably to convince others of their evidence-based-ness. Evidently, not only can individuals be "evidence-based" but techniques, CEU courses, and even books can be, too! Worse yet, many consider this issue not a question of percentages, but a simple question of yes or no. I mean, you’re either evidence-based or you’re not, right?
If you’re having difficulty seeing how this plays out, I’ll include a brief transcript of a discussion I’ve read recently to help you see what I’m talking about.
In the meantime, I’d like you to consider that boiling down such a complex and important movement to a simple "yes / no" or percentage does nothing to advance understanding of EBP, and it completely ignores Sackett’s original complex blending of the aspects and considerations inherent to medical practice. In short, it’s becoming a bastardization of an important scientific concept, and I believe it has the potential to permanently mar the reputation and use of the term "EBP". I’m afraid it’s going the way of "pay for performance" – sounds great in theory, but handled so poorly in practice that the term is widely discredited. (I am not in any way expressing support for "P4P" by using the example.) If we really want to better embrace science and make EBP a part of our therapy culture, we need to be careful how we integrate it, explain it, and use it in practice and in our discussions with colleagues.
Here’s the discussion I’m talking about. I don’t have permission (nor have I asked) to reproduce either the real names or the pseudonyms of those involved.
Therapist One: "Yes, there are times that I have referrals for unusual diagnoses or multiple co-morbidities OR multiple body parts to be treated (there is no easy way to convey treatment rationale with supporting literature in those situations). 46.15% of the time I don’t have solid ground using direct evidence that is easily related to the patient in front of me."
Therapist Two: "By the way, I did the same thing you did and found that about 60% of what I do can be directly supported by the evidence."
While the quotes are lifted and clearly out of context, I think you can see how the use of percentages ignores the totality of the issues regarding the balance of the various forms of evidence in practice. It would be difficult or impossible to place the percentages in ANY context that would make them seem useful or relevant. We could say the numbers might reflect the proportion of patients for whom we can apply a high-quality RCT on outcomes. However, real EBP, it must be said again, involves more than outcomes evidence. The basic science underlying physical therapy – physiology, biomechanics, motor control, psychology, and neuroscience, to name a few – can support a broad range of interventions and offers fertile ground for creating rational, defensible treatment plans. I doubt the two therapists above had no evidence at all underlying their treatment for such a significant portion of their caseload – but it’s worth asking why they might phrase it the way they did.

I have seen the critically important clinical research on patient outcomes and therapy treatments become a proxy for the concept of "evidence" in its entirety – as if the only evidence worth talking about were an RCT reflecting patient outcomes. This trend may be limited to a few thought leaders or not, but in any case it is a concern for all of us. Such "black and white" thinking is easy and convenient for those who want simple answers to simple questions. Unfortunately, integrating science and evidence in practice requires a level of reflection and understanding that is far more complex. I know that those producing these outcome studies are fully aware of this, but many people slinging those references back and forth in discussions about their EBP purity clearly do not.
Reflective, complex decision-making that integrates all sources of evidence is what we should be having serious conversations about, and it’s that thoughtfulness [PDF] that is required of a doctoring profession – not the myopic and obtuse yes or no to the question: "Are you evidence based?"
Jason Silvernail DPT
P.S. – I have previously written about other issues inherent to EBP, such as deep models, the scientific method, and intelligent theory, here. Another mention of some of these same issues in the International Journal of Osteopathic Medicine here. International Journal of Osteopathic Medicine frontpage here. Thread on SomaSimple that started this latest discussion of evidence in practice here. Evidence In Motion’s own discussion forum, MyPhysicalTherapySpace, here.