Clinical Podcast: Outcomes-Related Research with Dr. Chad Cook

This week, on the EIM Clinical Podcast, John and Jeff are joined by Dr. Chad Cook. Chad is the Director of Duke University’s DPT program as well as a clinical trialist and well-known researcher, specializing in outcomes-related research and, most recently, health services research.

Chad shares his perspective on the research he’s conducted and the papers he’s written: topics that affect PTs across the industry. He also shares some thoughts on predicting outcomes, and on how outcomes can be influenced by projecting care over longer periods of time, in the hope of preventing some of the issues patients experience.

It’s a thought-provoking discussion that will help you with your practice!

8 responses to “Clinical Podcast: Outcomes-Related Research with Dr. Chad Cook”

  1. John and Jeff:
    This was fantastic; please do more with Chad Cook!!! It is so important to know what we are measuring, what influences patient bias, and where the PT’s efforts in treatment provide the best outcomes, along with the other topics you presented from this perspective.

  2. Gentlemen, great job! Chad is a real asset to the profession.
    Chad touched on some areas that really made me think about my current clinical practice. I am pleased to see so much research done in this area, but I rarely see third-party payers in my area look at this data (i.e., outcomes). Are we putting too much stock in outcome measures?

  3. Selena Horner says:

    Hi Chad,

    There has been some research around the MCID that has defined different MCIDs for the same tool in different populations. I wish this area weren’t so confusing. When a patient is in front of a clinician, decisions may be driven by the MCID.

    There have been multiple debates online about improvement and interventions. As clinicians, we only have one snapshot: what has been provided in the clinic. Although I appreciate the complexity of discussions that bring the natural course of healing into the picture, when someone walks through our doors to receive and pay for treatment, we have to make a clinical decision right at that moment: treat or do not treat. If we decide to treat, then that is the route we can analyze in our own in-house EMRs or databases. The other aspect: even though we do not truly know that our interventions or interactions actually had an effect on the final outcome, we still need to use that interventional information when we analyze and self-reflect. If we do not, then what is the point of even doing what we do?

    Full disclosure on my next thought: I consult with FOTO. I can appreciate your comment that it isn’t possible to make a 100% comparison between companies or providers. What is possible is to risk adjust and define the amount of variance explained. Insurance companies risk adjust all the time when they determine the cost of premiums; of course, in that case the goal is to minimize financial risk by increasing premium costs for those who cost the most. I think the risk adjustment process there has improved from explaining 5% of variance to maybe 25% now. When predicting rehab outcomes, FOTO has improved its risk adjustment process and is far better at predicting a functional outcome (and I know more than 21% of variance is explained). Because FOTO’s process has been improved, the variance explained will depend on the patient being treated. The risk adjustment process is no longer a single, static process and is more individualized for each patient, for improved comparisons. And yes, initial level of disability is definitely a risk-adjusted factor. And yes, fear level is no longer included as a risk-adjusted factor in FOTO’s process. As you talked about the factors that affect comparisons, I appreciated hearing that these factors have been addressed in FOTO’s process.
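    To make the “variance explained” idea concrete, here is a minimal sketch of how an R-squared is computed for a risk adjustment model. The data are synthetic and the variable names hypothetical; this is a plain ordinary-least-squares illustration, not FOTO’s actual adjuster set or method.

    ```python
    # Minimal sketch: quantifying "variance explained" by risk adjustment.
    # Synthetic data; variable names are hypothetical, not FOTO's model.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000

    # Hypothetical patient characteristics used as risk adjusters.
    baseline_disability = rng.normal(50, 10, n)  # intake functional status
    age = rng.normal(55, 15, n)

    # Simulated discharge outcome: mostly driven by baseline, plus noise.
    outcome = 0.6 * baseline_disability - 0.1 * age + rng.normal(0, 8, n)

    def r_squared(X, y):
        """Share of outcome variance explained by a least-squares fit."""
        X = np.column_stack([np.ones(len(y)), X])  # add intercept column
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return 1 - resid.var() / y.var()

    # A crude adjuster (age only) vs. one that adds baseline disability.
    X2 = np.column_stack([age, baseline_disability])
    print(f"age only:       R^2 = {r_squared(age, outcome):.2f}")
    print(f"age + baseline: R^2 = {r_squared(X2, outcome):.2f}")
    ```

    The jump in R-squared between the two fits mirrors the kind of improvement described above (5% to maybe 25%); the unexplained remainder is why no adjuster perfectly levels the playing field.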

    An area that isn’t often measured or included when comparing outcomes is the clinic and the clinician. You mentioned soft skills. I tend to think that when explaining variance and looking at final outcomes, there also has to be a way to include clinician or even clinic factors. From the moment a patient contacts a clinic, the clinic culture comes into play with the first interaction, whether it’s the website or the person answering the phone. The way a potential patient learns about a clinic or clinician may have an effect, and there may be clinician factors that have an impact on outcomes. In my mind, to really explain variance and improve comparisons, we also need to look at clinicians and clinician factors (and somehow include them in the risk adjustment process). The tricky part: we grow and change as clinicians, which means our factors will evolve over time.

    Thanks for taking the time to discuss outcomes. It’s been a passion of mine since the early ’90s, and I always enjoy learning more in this area.
    ~selena

  4. Chad Cook says:

    Hi Selena. Thanks for the comment. I have a few long-winded thoughts related to your comments that might assist in framing my discussion on outcomes.

    I do understand the role of risk adjustment very well and have participated in a number of studies that analyze it. In one study, we looked at building a specific risk adjustment measure to level risks, and it just can’t be done perfectly (https://www.ncbi.nlm.nih.gov/pubmed/21217444). In many others, we’ve controlled for baseline values when building prognostic models, or for baseline group differences when comparing variables in observational trials. I’m not suggesting there isn’t value in risk adjustment, but I do stand by my comment that statistical controls don’t completely, perfectly level the playing field when comparing clinicians, groups, or institutions. And I’m not debating their use in industry either. In fact, I’d recommend using them to build a quota system for healthcare access (just like life insurance).

    We used FOTO data in our exceptional-responder and non-responder analyses too (N = 3K to 6K). The biggest predictors of meeting responder criteria, whether exceptional or MCID-based, were always baseline characteristics. When treatment interventions were factored in, or clinician experience, or any of the other factors we hope influence outcomes, they just didn’t influence responder classification as much as patient characteristics did. I was troubled by this initially, until I started considering what outcomes really tell us.

    Most functional and disability outcomes are proxy measures of health status. In most cases, health status is multidimensional and changes at rates that are uniquely tied to specific patients (and is heavily wedded to their own expectations, natural history, and self-report of health status). A wonderful paper by Herbert and colleagues once suggested that outcome measures measure the health status of a patient, whereas RCTs measure the effectiveness of interventions. Recently, O’Connell and Kamper opined that a common misconception in responder analyses is that the intervention is the reason someone met the responder threshold. In reality, most modeling of data suggests that baseline characteristics are more strongly associated with meeting responder criteria (e.g., the MCID) than the interventions received. This is a reflection of how MCIDs are calculated (anchor methods using patient means).
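    For readers unfamiliar with the anchor method, here is a rough sketch of the calculation on simulated data. The GROC anchor, the cut-point, and all the numbers are hypothetical assumptions, not a specific published protocol.

    ```python
    # Rough sketch of an anchor-based MCID calculation on simulated data.
    # The GROC anchor and the cut-point here are hypothetical assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 500

    # Change scores on an outcome tool (e.g., an ODI-like 0-100 scale).
    change = rng.normal(8, 12, n)

    # Global rating of change (GROC) anchor: patients self-rate whether
    # they improved. Simulated so larger changes tend to rate "improved".
    improved = (change + rng.normal(0, 10, n)) > 10

    # Anchor method: the MCID is taken from the mean change score among
    # patients who rate themselves improved -- i.e., it is built from
    # patient means, which is the point made above.
    mcid = change[improved].mean()
    print(f"anchor-based MCID estimate: {mcid:.1f} points")
    ```

    Because the threshold itself is derived from patient-reported change, it is unsurprising that baseline patient characteristics, rather than interventions, dominate models of who clears it.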

    I do want to clear something up: I’m not suggesting we dump outcome measures. We finally have people routinely using them! I will say that they aren’t a complete measure of the patient change we see in clinical practice. I’m with you on the idea that there is something intangible (that we witness) that changes in our patients and that we aren’t properly measuring. And I’m not sure what that is yet.

    My regards, Chad

  5. Julie Whitman says:

    Great job, Jeff, John, and Chad! I picked up some tips on new relevant literature and many of your comments, Chad, sure challenge my thinking! That is great as every day is a good day to learn and to be challenged. Julie

  6. Selena Horner says:

    Hi Chad,
    I am somewhat confused about the ability to predict perioperative spine infection from patient factors. Although patient characteristics play a role to some degree, it would seem that the pre-operative and post-operative procedures required both of the patient and of all staff and surgeons would be huge factors. If I knew that in your study all hospitals, staff, surgeons, and patients fully complied with exactly the same procedures, then figuring out who is at risk would seem reasonable to me.

    It is highly unlikely that statistical controls will ever be perfect. Even so, Watson excites me, and maybe in the future artificial intelligence will bring us closer to perfect. There will probably always be some level of unexplained variance – we are talking about humans, and the human mind and spirit can surprise us. What needs to happen is that whenever risk adjustment is used, we should demand to be told how much variance is explained. And then, since we know we may never see 100% explained, we need to determine what level is acceptable AND understand why. We’re going to be compared. We’re already compared. My NPI data has been used since the mid-2000s to compare my care with the care of others. My care is defined by being placed in a category. That risk adjustment process is horrible and only looks at body part, gender, maybe age, and number of visits. The payer doesn’t even understand all the metrics and believes this risk adjustment is fine. It isn’t fine. I couldn’t fix it, but I tried.

    I recall an interesting study by Deutscher. What seemed to produce better outcomes, just looking at the data, was patients really performing their home exercise programs and attending their appointments. As for the interventions, manual therapy combined with therapeutic exercise seemed to provide better outcomes than passive treatments. I happened to find that interesting.

    Another turning-point study for me was the one by Resnik that focused on expert clinicians. These clinicians had superior outcomes, and there were clinician characteristics that were important. Those characteristics had nothing to do with education or years of experience; they tended to fall into the category of “soft skills.”

    Patient-reported outcome measures actually measure a patient’s perception.

    Maybe we’re using the wrong models to really understand what plays a role in outcomes. I’ve always wondered about structural equation modeling. It would be a huge undertaking, but it just might provide interesting findings about what matters in attaining an outcome from an episode of care. I am with you: something happens – I can see scores change, sometimes even after a single appointment. I hope that one day we can figure it out so that we can positively impact as many patients as possible.

    I enjoyed your thoughts. Thank you for sharing them,
    ~selena

  7. Mark Werneke says:

    Hi Chad
    Really enjoyed the podcast and your blog comments.
    Just 2 points:
    1. The RCT is considered the design of choice because it measures intervention effectiveness. Yet the majority of RCTs measure effectiveness based on differences in patient-reported outcomes, such as the ODI and the ODI’s MCID. Therefore, RCT effectiveness results are also influenced by the individual’s health status and expectations, and they may not accurately confirm that the outcomes identified by the RCT are a direct result of the interventions received.
    2. I agree that statistical controls will never completely, perfectly level the playing field when comparing clinicians, groups, or institutions. However, it may be interesting for podcast listeners to read the paper by Gozalo, Resnik, and Silver, “Benchmarking Outpatient Rehabilitation Clinics Using Functional Status Outcomes,” Health Services Research, 2015. The authors developed models using advanced statistical analyses (i.e., hierarchical regression methods with patients nested within therapists within clinics) and demonstrated that clinic and therapist effects explained 11.6 percent of the variation in functional status beyond controlling for baseline pre-care differences. (A minimal sketch of this kind of nested model appears after this comment.)
    Thanks
    Mark
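
To illustrate the kind of nested model Mark cites, below is a minimal sketch using statsmodels’ mixed-effects API on synthetic data. The variable names, effect sizes, and variance structure are illustrative assumptions, not the actual specification from Gozalo, Resnik, and Silver.

```python
# Minimal sketch of a hierarchical model with patients nested within
# therapists within clinics, controlling for baseline (intake) status.
# Synthetic data; not the specification used by Gozalo, Resnik, and Silver.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for c in range(20):                      # 20 clinics
    clinic_effect = rng.normal(0, 2)
    for t in range(3):                   # 3 therapists per clinic
        therapist_effect = rng.normal(0, 3)
        for _ in range(15):              # 15 patients per therapist
            intake = rng.normal(50, 10)
            discharge = (10 + 0.7 * intake + clinic_effect
                         + therapist_effect + rng.normal(0, 8))
            rows.append({"clinic": c, "therapist": f"{c}-{t}",
                         "intake": intake, "discharge": discharge})
df = pd.DataFrame(rows)

# Random intercept for clinic plus a variance component for therapist
# nested within clinic; intake is the baseline ("pre-care") control.
model = smf.mixedlm("discharge ~ intake", df, groups="clinic",
                    re_formula="1",
                    vc_formula={"therapist": "0 + C(therapist)"})
result = model.fit()
print(result.summary())
```

The clinic and therapist variance components in the output, taken as a share of total variance, correspond to the “percent of variation explained by clinic and therapist effects” that the paper reports.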
