Friday, April 27, 2012
Grand Rounds 4.27.12
Follow this link to hear a recording of Dr. Ergie's presentation on surgical strategies for advanced heart failure.
Friday, April 20, 2012
Grand Rounds 4.20.12
Follow this link to hear a recording of Dr. Fraga's presentation on conversion disorder.
Friday, April 13, 2012
Grand Rounds 4.13.12
Follow this link to hear a recording of Dr. Broderick-Villa's presentation on bariatric surgery.
Tuesday, April 3, 2012
Case-Control and Cohort Studies
This post is apropos of an article we recently reviewed in journal club, which used both methodologies. You can listen to a recording of the remarks we made at the time by following this link, or read on.
The first thing to understand about cohort and case-control studies is that they're both observational as opposed to experimental studies. This is easy to understand in principle, but it is stunning how often people will try to fool you (using techniques like propensity-score matching, discussed in the last section of the post on our GI Journal Club) into thinking that an observational study is actually experimental.
The second thing to understand is that the two represent very different levels of evidentiary quality, which is obvious once you understand their methodologies.
In a prospective cohort study, a group (or cohort) is identified prior to the development of the outcome(s) of interest. The presence or absence of putative risk factors (i.e. "exposures") thought to be related to the outcome are then studied in relation to the development of the outcome(s) the investigator wants to know about. A classic and famous example is the Framingham Heart Study, in which a large cohort was identified and closely studied prior to the development of heart disease, and from which many valuable associations between risk factors like smoking, cholesterol and hypertension and cardiovascular outcomes have emerged. Prospective cohort studies are considered to be the "gold standard" of observational research, in the same way that the RCT is considered the "gold standard" of experimental research.
Case-control studies, on the other hand, come at the problem from the other end (indeed, some have even proposed they be called "trohoc" studies, because they are cohort studies done backwards). A group of cases is selected in which the outcome of interest has already developed, and then they are compared to people who appear otherwise similar but do not have the outcome in order to determine the strength of association between the outcome and exposures thought to be risk factors. Some of the earliest work associating tobacco consumption with lung cancer was done through case-control studies. The primary weakness of case-control studies is that they are by definition retrospective, and therefore subject to various forms of recall bias. Case-control studies are not considered the "gold standard" of anything.
An important point to understand is that cohort studies can generate information about the incidence of a condition, which allows you to calculate the relative risk of the outcome associated with a specific exposure. However, in a case-control study the investigator decides how many subjects have the outcome, because they select the cases - so there is no "incidence," and the strength of association can only be expressed by relatively indirect methods like the odds ratio.
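To make the distinction concrete, here is a minimal sketch of the two measures computed from a 2x2 table. The numbers are made up for illustration; they are not from any study discussed above. Note how close the odds ratio comes to the relative risk when the outcome is rare - which is exactly why case-control studies of rare diseases remain useful despite their indirectness.

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk in the exposed divided by risk in the unexposed.
    Only meaningful in cohort designs, where incidence is observed."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

def odds_ratio(a, b, c, d):
    """Cross-product ratio ad/bc, where
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls.
    Valid even when the investigator fixed the number of cases."""
    return (a * d) / (b * c)

# Hypothetical cohort: 30/1000 exposed and 10/1000 unexposed develop the outcome.
rr = relative_risk(30, 1000, 10, 1000)
# The same data viewed as a case-control table (cases vs. non-cases):
orr = odds_ratio(30, 970, 10, 990)
print(rr, round(orr, 2))  # RR = 3.0; OR is close to RR because the outcome is rare
```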
Cluster Randomization
At this month's journal club, we reviewed a study which used "cluster randomization." This is a study strategy which does basically what it says on the tin: instead of being randomized individually, test subjects are randomized in clusters. In the case of the study we talked about at journal club, the unit of "clustering" was the individual ICU.
We thought it was worth going into cluster-randomization in a little detail, apropos of a very interesting piece in this month's first issue of Annals of Internal Medicine. You may recall from Dr. Bhuket's Grand Rounds on cirrhosis earlier this year that the American Association for the Study of Liver Diseases' (AASLD) guidelines on screening for hepatocellular carcinoma in cirrhotics currently recommend ultrasound examination every six months, without measuring alpha-fetoprotein.
The authors point out that this recommendation (which was graded as a level I) is based on a single cluster-randomized trial. In the analysis of their data, however, the trial's authors calculated the 95% confidence intervals of the hazard ratio they used to report their findings as though the patients had been individually randomized. This is a basic statistical mistake: patients within a cluster resemble one another, so each patient contributes less independent information than an individually randomized subject would, and ignoring this makes the confidence intervals spuriously narrow.
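The size of that mistake can be sketched with the standard "design effect" correction, DEFF = 1 + (m - 1) * ICC, where m is the average cluster size and ICC is the intracluster correlation coefficient. The numbers below are hypothetical, chosen only to show how much a naive standard error must be inflated; they are not taken from the trial in question.

```python
import math

def design_effect(cluster_size, icc):
    """Variance inflation factor for cluster randomization:
    DEFF = 1 + (m - 1) * ICC. Analyzing clustered data as if
    individually randomized understates variance by this factor."""
    return 1 + (cluster_size - 1) * icc

# Hypothetical numbers: 50 patients per ICU, a modest ICC of 0.05.
deff = design_effect(50, 0.05)
naive_se = 0.10                           # SE computed ignoring clustering
corrected_se = naive_se * math.sqrt(deff) # proper SE is sqrt(DEFF) times wider
print(round(deff, 2), round(corrected_se, 3))
```

Even a small intracluster correlation nearly doubles the standard error here, so confidence intervals computed the naive way can look far more precise than the data warrant.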
This case is instructive for all students of the medical literature, which is so complex and difficult to navigate that even an august body such as the AASLD can be misled, yet so democratic that anyone who understands basic principles of medical statistics can understand where the mistake lies, and potentially be the first to recognize it.