Wednesday, September 7, 2011

Doctors, Patients and Framing Bias

Which line is longer?
[Click here for a recording of the noon conference talk on framing bias.]
[Click here for a recording of Dr. Kaur's talk on bacterial meningitis.]

The last couple of posts have been about measures of treatment effect, specifically about absolute and relative differences in risk and numbers needed to treat or harm.
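To make that arithmetic concrete, here is a minimal sketch with made-up numbers (not from any particular trial) showing how the same outcome data generate three very different-sounding summaries:

```python
# Hypothetical example: assume 10% of control patients and 8% of treated
# patients die. These numbers are invented purely for illustration.
control_mortality = 0.10
treated_mortality = 0.08

arr = control_mortality - treated_mortality  # absolute risk reduction
rrr = arr / control_mortality                # relative risk reduction
nnt = 1 / arr                                # number needed to treat

print(f"Relative risk reduction: {rrr:.0%}")   # 20%  -- sounds impressive
print(f"Absolute risk reduction: {arr:.1%}")   # 2.0% -- sounds modest
print(f"Number needed to treat:  {nnt:.0f}")   # 50   -- treat 50 to prevent 1 death
```

All three lines describe exactly the same treatment effect; only the framing changes.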

Something to remember about these measures is that, while they're calculated from the same data, they often sound quite different.  You can significantly prejudice somebody's answer to a given question by the way you frame it.  For instance, consider these two questions:
  1. Would you like to spend the next three years of your life constantly underslept, trying to do seven things at once, eating mainly hospital food and struggling against insurmountable barriers to do a merely adequate job?
  2. Would you like to spend the next three years of your life at an excellent community teaching hospital working with inspiring mentors to join the next generation of excellent internists, all while working for people who really need and appreciate you?
Because you're an internal medicine resident, you know that these are in fact the same question, but you can see how the way you put it makes a difference.  In statistics, this is called "framing bias," and it is the subject of a study recently published in the Journal of General Internal Medicine.  Perneger and Agoritsas did a large survey of Swiss doctors' and patients' understanding of a hypothetical treatment.  The subjects were randomized to several groups: some got information only about the treatment's relative benefit, some got only absolute data framed in terms of survival or of mortality, some got only the number needed to treat, and some got all the available data.

They observed that how the information was presented had a substantial and significant effect on what people thought about the treatment.  For instance, while only 51.8% of doctors who only saw the data presented as absolute survival thought the treatment was likely to be better than its predecessor, that proportion rose to 93.8% among doctors who only saw the relative risk reduction.

But what was arguably most fascinating was that there was essentially no difference between the doctors' and the patients' responses.  The doctors' extensive training in interpreting quantitative analyses of treatment effects did not appear to reduce their susceptibility to framing bias at all.  This study (which I recommend reading in full) suggests that when we're given only relative measures of effect size, we are very likely to overrate the effect compared with what we would say if we could also see absolute effect sizes.
