IHR Insider Volume 7 Issue 2

Message from the Director

John Steiner

Telling the Story of the Results: Interpretation of Quantitative Data

In recent columns, I have discussed several of the principles we are using to build collaborations between IHR researchers and clinical and operational leaders in the learning health system of Kaiser Permanente Colorado and the Colorado Permanente Medical Group. These six principles are:

▪ Ask the right question
▪ Convene the right team to answer the question
▪ Use the right project design
▪ Assemble the right data
▪ Apply the right analytical tools
▪ Provide the right interpretation of the findings

In this column, I’ll address the final principle – interpreting the findings of an operational data analysis or intervention evaluation.

When thinking about this, one of our IHR programmers recently proposed a helpful definition – “interpreting the data is telling the story of the results”. In other words, interpretation is the process of converting numbers to a narrative. This can be a perilous process. Social psychologists and economists have shown that when we interpret quantitative data, we are susceptible to a host of cognitive biases that operate unconsciously and often lead us to mistaken conclusions and actions. It is in our nature to interpret data in ways that reinforce the story we want to hear, rather than the story the data are trying to tell us. Here are a few examples:

Over-interpreting data
We tend to see a pattern in events, even when none exists. For example, when we track rare events over time (such as serious surgical complications or other adverse patient outcomes), we cannot help trying to connect the dots into trends. Most of these time trends are actually meaningless because the numbers are so small, but our propensity to see patterns and our sense of urgency to take action impel us to generalize from insufficient data.
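The short simulation below illustrates how easily noise masquerades as a trend. It is a hypothetical example (the clinic size and event rate are made-up numbers, not drawn from our data): it generates 24 months of a rare adverse event whose true rate never changes.

```python
import random

random.seed(7)

# Hypothetical clinic: 1,000 eligible patients per month, with a
# constant 0.2% chance of a serious adverse event per patient --
# about 2 expected events per month, and no real trend at all.
monthly_counts = [
    sum(random.random() < 0.002 for _ in range(1000))
    for _ in range(24)
]

print(monthly_counts)
```

Eyeballing the printed list, it is easy to "see" runs of good months and bad months, even though the underlying rate never moved.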

We fail to appreciate the expected variability in our measurements. When we weigh ourselves, we rejoice over a pound lost and agonize over a pound gained, even though we know deep down that the next day’s measurement is likely to negate the change. In our professional lives, we forget that member satisfaction with clinic visits or employee ratings of their work environment depend on a host of influences other than the quality of our service or the skill of our leadership. If those ratings are unfavorable, we can be tempted to launch an intervention and repeat the survey, focusing on areas where we received unfavorable ratings. This approach fails to recognize that high or low values of any measure tend to drift back toward the norm (statisticians call this “regression to the mean”). Unless we identify a comparison group, we may congratulate ourselves on improvements that are really explained by random variation in our measures, contextual changes in the environment, or simply the passage of time.
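Regression to the mean is easy to demonstrate. The sketch below (with made-up numbers, not actual survey data) gives 1,000 clinics the same true satisfaction score, surveys them twice with random noise and no intervention in between, and then looks only at the clinics that scored worst on the first survey.

```python
import random

random.seed(42)

# Every clinic has the same true score of 70; each survey adds
# independent measurement noise. Nothing changes between surveys.
true_score = 70.0
noise_sd = 10.0

first = [true_score + random.gauss(0, noise_sd) for _ in range(1000)]
second = [true_score + random.gauss(0, noise_sd) for _ in range(1000)]

# Select the clinics that looked worst on the first survey ...
worst = [i for i, s in enumerate(first) if s < 60]

# ... and compare their average scores across the two surveys.
before = sum(first[i] for i in worst) / len(worst)
after = sum(second[i] for i in worst) / len(worst)

print(f"Worst clinics, first survey:  {before:.1f}")
print(f"Same clinics, second survey: {after:.1f}")
```

The second-survey average drifts back toward 70 even though nothing changed – exactly the improvement we might wrongly credit to an intervention.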

Under-interpreting data
Occasionally we are unimpressed by findings that should inspire us to action. In a recent randomized trial in a KPCO primary care clinic, my research colleagues and I showed that a single reminder phone call or text message one day before an appointment reduced the missed appointment rate from 7.5% to 6.5%. Everyone’s first reaction, including ours, was that this wasn’t much of a change, but we all reconsidered our opinion when we projected that this intervention reminded 14 patients each week to keep appointments that they might otherwise have missed. Over the course of a year, that is over 700 more appointments kept. This example illustrates that we are prone to under-interpret findings when we don’t consider their impact on a larger scale.
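The arithmetic behind that projection is simple to check. Note that the weekly appointment volume below is an assumed figure, chosen only so that the numbers reproduce the column’s example; the clinic’s actual volume is not stated here.

```python
# Assumed clinic volume (hypothetical, chosen to match the example).
weekly_appointments = 1400

# Missed-appointment rates from the randomized trial.
baseline_missed = 0.075       # without reminders
with_reminder_missed = 0.065  # with a call or text the day before

# A 1-percentage-point drop looks small until it is scaled up.
kept_per_week = weekly_appointments * (baseline_missed - with_reminder_missed)
kept_per_year = kept_per_week * 52

print(f"Extra appointments kept per week: {kept_per_week:.0f}")
print(f"Extra appointments kept per year: {kept_per_year:.0f}")
```

Scaling a seemingly trivial 1% difference across a year of clinic volume is what turns it into more than 700 kept appointments.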

Our closely held belief that our circumstances are unique may lead us to unwarranted skepticism about the effectiveness of an intervention. We may under-interpret data when we consider adopting an intervention that has been successful at another site (for us, even another KP region). This tendency is so widespread that it has its own acronym, N-I-H, which stands not for the National Institutes of Health, but rather for “Not Invented Here”. Overcoming this problem requires innovators to assess the contextual factors that might limit dissemination, and adopters to embrace opportunities for replication.

How do we get it right? Even experienced researchers misinterpret data. Here are three suggestions for counteracting our biases and interpreting data more accurately:

Be skeptical but persuadable. All of us are skeptical of the data of others. The best of us are skeptical of our own findings, particularly those that seem too good to be true. To detect our biases, it is useful to ask, “What else could be causing these results, other than the explanation I would prefer?” At the same time, it is easy to get trapped in reflexive skepticism – to make “no” the default answer to any question. After challenging our findings through skeptical interpretation, we need to acknowledge that even solid findings are rarely conclusive, and that important decisions are almost always made in the face of uncertainty.

Encourage disagreement. Other people inevitably see different stories in data than we do. It’s important to surround ourselves with colleagues whose biases differ from ours, and to create professional environments where it is safe for others to challenge our narratives.

Less is more. In research, we regularly need to condense the findings of a five-year, multi-million dollar study into four tables and a figure for publication in a scientific journal. Too often, operational decision-makers are confronted with stacks of multi-colored charts and complex tables that obscure the story of the findings. Data analysis is inevitably interpretive, and our goal in presenting our findings should not be to prove that we have done a lot of work, but rather to get to the essence of the data, the heart of the story. 

Warmest Regards,
John F. Steiner, MD, MPH
Senior Director

HCSRN 2017 is Around the Corner!

The 2017 Health Care Systems Research Network (HCSRN) Conference will be held in San Diego, CA, March 21-23. Poster and panel submissions will be accepted until Friday, October 7. Register by December 16 to save on registration.

Studying Access to Naloxone

IHR Investigators Ingrid Binswanger, MD, MPH, and Jason Glanz, PhD, recently received an R01 grant from the National Institute on Drug Abuse (NIDA) to study the safety and impact of expanded access to naloxone, a medication that reverses the effects of an opioid overdose, in primary care practices. The study is a partnership with researchers at Denver Health. In 2015, Colorado passed legislation approving standing orders for naloxone, which allow pharmacists to provide naloxone without provider-generated prescriptions. In this new environment, teams at the IHR and Denver Health will have the opportunity to study the wide-scale adoption of this new practice and its implications.


Department News

At the 2016 Convocation of the Colorado School of Public Health in June, IHR Investigator Ted Palen, PhD, MD, MSPH, was selected for induction into Delta Omega, the public health honor society.

IHR Evaluation Project Manager Marisa Allen, PhD, recently co-presented a training for the Colorado Evaluation Network. The presentation focused on educating evaluators and researchers about a structured approach called Emergent Learning, which can be used to improve evaluation design, debrief evaluation results, and integrate multiple sources of evidence as a means to strengthen the quality of evaluation projects. Learn more about upcoming events.


IHR Publications

Check out the most recent IHR publications.