Thursday, December 29, 2011
Grand Rounds 12/16/11
Click here to listen to a recording of Dr. Cohen's presentation, "Integrating Prevention Strategies into Clinical Care".
Monday, December 12, 2011
Likelihood Ratios
On Wednesday we talked briefly about likelihood ratios. LRs are a commonly reported metric for assessing the accuracy of a diagnostic test, and are calculated from the same data you use to generate sensitivity, specificity, and predictive value (positive and negative).
The math is simple, as you will see in a minute, but it's even easier to remember what LRs are by using the name. The LR of a diagnostic finding is the ratio of the likelihood of that finding in somebody with the disease you're looking for to the likelihood of the same finding in somebody without that disease. So, for instance, when we say that conjunctival pallor (defined as a loss of color distinction between the inner palpebral rim and the conjunctiva) has a likelihood ratio of 16.7 for a hemoglobin of <11.0 g/dL, we're saying that somebody with a hemoglobin of less than 11.0 is 16.7 times as likely to have that finding as somebody with a hemoglobin of more than 11.0. As you can see, a likelihood ratio of >1 means a sign is more consistent with the presence of a condition than its absence, and a likelihood ratio of <1 means it is more consistent with the absence of a condition than its presence.
You can always interpret a likelihood ratio this way, but LRs are calculated differently depending on whether you're interested in the presence or absence of the finding. The LR for the presence of a finding (or "+ve LR") is calculated as the true positive rate divided by the false positive rate, which is equivalent to the sensitivity divided by the complement of the specificity. The LR for the absence of a finding ("-ve LR") is calculated as the false negative rate divided by the true negative rate, which is equivalent to the complement of the sensitivity divided by the specificity.
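If it helps to see that arithmetic spelled out, here is a minimal sketch in Python of the two calculations just described. The function names and the example sensitivity and specificity figures are ours, chosen purely for illustration; they don't come from any of the studies mentioned in this post.

```python
def positive_lr(sensitivity, specificity):
    # +ve LR: true positive rate / false positive rate
    # = sensitivity / (1 - specificity)
    return sensitivity / (1 - specificity)

def negative_lr(sensitivity, specificity):
    # -ve LR: false negative rate / true negative rate
    # = (1 - sensitivity) / specificity
    return (1 - sensitivity) / specificity

# Illustrative numbers only: a hypothetical test that is
# 90% sensitive and 80% specific.
sens, spec = 0.90, 0.80
print(positive_lr(sens, spec))  # 0.90 / 0.20 = 4.5
print(negative_lr(sens, spec))  # 0.10 / 0.80 = 0.125
```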
Likelihood ratios, like sensitivity, specificity, and PPV/NPV, are diagnostic weights; they give you some sense of how seriously you should take a particular test or physical exam finding. For instance, if you know that most methods of clinically detecting hepatomegaly have positive LRs of between 1 and 2, you might not rely on those methods to diagnose this condition. Likelihood ratios are particularly useful if you happen to know, or can reasonably estimate, the pre-test probability of the condition you're looking for (i.e., its prevalence in your patient population), because they can be used to translate pre-test odds directly into post-test odds. The calculation isn't simple, and it's easiest to use a nomogram like this one (developed by David Sackett):
It's extremely simple to use. You just draw a straight line through the pre-test probability of your condition and the likelihood ratio associated with a positive test result, and it will intersect the post-test probability on the right-hand axis. Thus, if you happen to know that the prevalence of pneumonia among otherwise healthy outpatients is 1%, and that egophony has a likelihood ratio of 4.1 for bacterial pneumonia, then the probability that an otherwise healthy person with egophony has pneumonia is about 5%.
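For readers who prefer to check the nomogram against the underlying arithmetic, here is a small Python sketch of the same conversion (probability to odds, multiply by the LR, convert back to probability). The numbers are the ones used in the egophony example above; the exact answer comes out close to the value read off the nomogram.

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    # Convert the pre-test probability to odds, apply the LR,
    # then convert the post-test odds back to a probability.
    pre_test_odds = pre_test_prob / (1 - pre_test_prob)
    post_test_odds = pre_test_odds * likelihood_ratio
    return post_test_odds / (1 + post_test_odds)

# Egophony example from above: 1% pre-test probability, LR of 4.1.
print(post_test_probability(0.01, 4.1))  # ~0.04, in the same ballpark as the ~5% read off the nomogram
```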
This highlights the fact that the usefulness of an LR depends on the pre-test probability of the condition it's being used to diagnose. If the pre-test probability is very low, only a very high LR will raise the post-test probability by much; at higher pre-test probabilities, even a relatively modest LR can shift the post-test probability substantially.
We always find that reading multiple explanations of the same concept helps it stick better, so if you're interested in a concise and well-written explanation of LRs, you should check out this one on the website of the Center for Evidence-Based Medicine. Most of the LRs used in this post are taken from Steven McGee's magisterial Evidence-Based Physical Diagnosis, which we highly recommend if you're looking for a way to spend your book money.
Thursday, December 1, 2011
PPV, NPV and Troponin
If you ask most interns how to rule out NSTEMI biochemically, they will tell you to check a troponin level (plus or minus a myoglobin level) "q6 hours X3". Those 18 hours can mean the difference between admission and outpatient follow-up, which can burden patients and incur unnecessary healthcare costs.
An interesting paper by Scharnhorst et al. in the American Journal of Clinical Pathology suggests that myocardial necrosis can be ruled out much faster with modern troponin assays. In a prospective, single-center observational study, they evaluated a two-hour protocol, based on a troponin cutoff of 0.06 micrograms/L and a rise in baseline troponin of 30% or more over two hours, for diagnosing or excluding myocardial infarction. The gold standard was the overall clinical diagnosis reached by the attending cardiologist.
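Purely to make that protocol concrete, here is one way the decision rule described above might be encoded in Python. The function name, the example values, and the assumption that the cutoff and the two-hour change are combined with an "or" are ours, not the authors'; the paper's exact logic may differ.

```python
TROPONIN_CUTOFF = 0.06  # micrograms/L, the cutoff quoted above

def troponin_positive(baseline, two_hour):
    """One plausible reading of the rule: the test is positive if either
    sample is at or above the cutoff, or if troponin rises by 30% or more
    over two hours (assumption: the two criteria are combined with 'or')."""
    above_cutoff = baseline >= TROPONIN_CUTOFF or two_hour >= TROPONIN_CUTOFF
    significant_rise = baseline > 0 and (two_hour - baseline) / baseline >= 0.30
    return above_cutoff or significant_rise

# Hypothetical patient: both samples well below the cutoff and essentially flat.
print(troponin_positive(0.02, 0.021))  # False -> necrosis effectively excluded under this reading
```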
[Table: Actual numbers obtained in the study. UA = unstable angina]

They reported their results in terms of sensitivity, specificity, and positive and negative predictive value (PPV and NPV), all of which can be calculated from the table above. Sensitivity and specificity were covered on this blog recently; the positive and negative predictive values of a test are even easier to remember. The PPV is the proportion of positive results that are accurate (in this case 30/43 = 70%), and the NPV is the proportion of negative results that are accurate (in this case 94/94 = 100%). In plain English, this means that in the population studied a negative test result predicted with 100% accuracy that no myocardial necrosis had taken place, thus excluding the diagnosis of NSTEMI, whereas a positive result indicated true myocardial infarction in only 70% of cases.
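To make the definitions concrete, here is a short Python sketch that recovers the quoted figures from the 2x2 counts. The individual cell counts are inferred from the proportions given above (30 of 43 positive results true, 94 of 94 negative results true), so treat them as a reconstruction rather than a transcription of the paper's table.

```python
# Counts inferred from the proportions quoted above (a reconstruction, not the paper's table):
tp, fp = 30, 13   # 43 positive results, 30 of them true positives
tn, fn = 94, 0    # 94 negative results, all of them true negatives

sensitivity = tp / (tp + fn)   # 30/30  = 1.00
specificity = tn / (tn + fp)   # 94/107 ~ 0.88
ppv = tp / (tp + fp)           # 30/43  ~ 0.70
npv = tn / (tn + fn)           # 94/94  = 1.00
print(sensitivity, specificity, ppv, npv)
```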
The important thing to note about PPV and NPV is that they depend on the prevalence of the disease. We encountered this once before, but, to review, it's easiest to understand if you think in extremes (a quick numerical sketch follows the list below):
- If the prevalence of a condition is 100%, then the characteristics of the test are irrelevant: any positive result will be accurate.
- If the prevalence is 0%, then, similarly, it doesn't matter how good the test is: any positive result will be wrong.
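The sketch below shows the same point numerically, using Bayes' theorem written out with the standard 2x2 quantities. The sensitivity and specificity figures are ours, chosen only for illustration: holding test characteristics fixed, PPV climbs and NPV falls as prevalence rises.

```python
def ppv_npv(sensitivity, specificity, prevalence):
    # Expected proportions of each 2x2 cell in a population with this prevalence.
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Same hypothetical test (90% sensitive, 80% specific) at different prevalences:
for prev in (0.001, 0.01, 0.1, 0.5, 0.9):
    print(prev, ppv_npv(0.90, 0.80, prev))
# At very low prevalence the PPV is tiny despite a decent test;
# at very high prevalence almost any positive result is right, but the NPV falls.
```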
Finally, it's worth noting that these investigators also measured CK-MB and myoglobin and neither added anything to the diagnosis. The original paper is short, free, and very much worth checking out.