
Tuesday, May 15, 2012

Predictive Value

This week we briefly reviewed the concept of predictive value (as in "positive predictive value" and "negative predictive value.")  This has come up a few times before in noon conferences and at Journal Club, but it's worth reviewing because it's very important to interpreting diagnostic tests.

Predictive value is a measure of the accuracy of a test; specifically, it tells you how likely it is that the result you got (whether positive or negative) is actually true.  It's calculated from the same information you use to calculate sensitivity and specificity, given in the familiar two-by-two table below:



Fig 1. All the data you'll ever need to calculate test characteristics.


However, there are some crucial differences in how you calculate it and how you interpret it.  Recall that sensitivity is equal to the number of true positives (a) divided by the total number of people with the disease (a+c), and specificity is equal to the number of true negatives (d) divided by the total number of people who are healthy (b+d).  Thus, the sensitivity tells you how many people with the disease the test picks up, and the specificity tells you how many healthy people it doesn't.  Predictive values, on the other hand, tell you how likely the result is to be accurate: the positive predictive value is the number of true positives (a again) divided by the total number of positives (a+b), and the negative predictive value is the number of true negatives (d again) divided by the total number of negatives (c+d).
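For the arithmetically inclined, here is a minimal sketch of those four formulas side by side.  The cell labels a through d match the table above; the numbers in the example at the bottom are invented purely for illustration.

```python
# Test characteristics from the 2x2 table above.
# a = true positives, b = false positives, c = false negatives, d = true negatives.

def test_characteristics(a, b, c, d):
    return {
        "sensitivity": a / (a + c),  # fraction of diseased people the test picks up
        "specificity": d / (b + d),  # fraction of healthy people the test clears
        "PPV": a / (a + b),          # chance a positive result is truly positive
        "NPV": d / (c + d),          # chance a negative result is truly negative
    }

# Made-up example: 90 true positives, 10 false positives,
# 10 false negatives, 890 true negatives.
print(test_characteristics(a=90, b=10, c=10, d=890))
```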

Another important difference is that sensitivity and specificity are intrinsic properties of the test, which should be the same whoever you use it on.  By contrast, the predictive values can only be calculated for a given population, because they vary with prevalence.  To see why this is, imagine two populations, one where everyone has the disease and one where nobody does.  In the first population, any negative test (regardless of its specificity) is a false negative.  In the second, any positive test (regardless of its sensitivity) is a false positive.  There has been a question about this on every standardized test I have ever taken, so think about it for a second.
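You can see the same effect numerically with a quick sketch.  Assume a hypothetical test that is 95% sensitive and 95% specific (numbers chosen only for illustration) and watch what happens to the PPV as the disease gets rarer:

```python
# Why PPV depends on prevalence: a hypothetical test, 95% sensitive and 95% specific.
sens, spec = 0.95, 0.95

for prev in [0.50, 0.10, 0.01, 0.001]:
    # Bayes' theorem: P(disease | positive test)
    ppv = (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))
    print(f"prevalence {prev:>6.1%}: PPV = {ppv:.1%}")

# prevalence  50.0%: PPV = 95.0%
# prevalence  10.0%: PPV = 67.9%
# prevalence   1.0%: PPV = 16.1%
# prevalence   0.1%: PPV = 1.9%
```

Same test, same sensitivity and specificity, yet a positive result in a low-prevalence population is far more likely to be a false alarm.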

Thursday, March 1, 2012

Sensitivity and Salmon

This week we talked briefly about sensitivity and predictive value, and reviewed a fantastic poster by Bennett et al.  Here is the methods section, in toto:



Let's just translate this briefly.  What Bennett is saying is that he bought a dead salmon to calibrate his fMRI machine, and that, like a good scientist, he ran the entire experiment he intended to perform on humans on the salmon first, in order to control for all potential unmeasured variables.

Reminding us once again of the striking graphic efficacy of fMRI results, here's a slice from his scan:



As you can see, the salmon appears to be...ah...."mentalizing."

Bennett is to be commended for publishing this cautionary tale, which reminds us that high sensitivity is not always a good thing.  In addition, this is about the best illustration I can think of to explain why predictive value depends on prevalence.  If you're testing whether a dead fish has the capacity to perform a "mentalizing task," then all positive results are false positives, because, unless fMRI counts among its widely trumpeted virtues the power of resurrection, dead salmon don't mentalize.