The Power of Observational Studies

March 2, 2012

My colleague Gary Schwitzer at HealthNewsReview.org has a post today questioning the validity of observational studies (in which epidemiological researchers look at selected variables in large populations to see whether a cause and an effect are related). He’s legitimately frustrated by constant media attention to small and poorly constructed observational studies that generate misleading headlines like “Vitamin D Cures Cancer,” only to be followed a few weeks later by “Vitamin D Causes Cancer.” “Such research CAN NOT PROVE CAUSE-AND-EFFECT,” he screams in frustration.

Alas, Schwitzer has gone overboard to make a valid point. Carefully constructed observational studies have been crucial to advancing medicine and safety. Where would drug safety be today if David Graham of the FDA hadn’t done the observational study of a quarter million Kaiser Permanente Vioxx patients that proved the pain pill caused an excess of heart attacks and strokes? And where would the campaign against smoking be without the pioneering observational studies conducted by Richard Doll and Austin Bradford Hill (and later studies by Richard Peto) that showed smoking causes lung cancer?

Retrospective observational studies may not be the gold standard of double-blind, placebo-controlled trials, or even of prospective observational studies (like the Framingham heart study). But if done properly, they are valid and sometimes crucial to learning how medical interventions work in real-world populations (not in the controlled environment of a clinical trial), and what impact environmental insults are having on human health.

Let’s not throw the baby out with the bathwater in our efforts to increase journalistic skepticism about poorly constructed observational studies.


3 Responses to The Power of Observational Studies

  1. Gary Schwitzer on March 2, 2012 at 11:20 am

    Gooz,

    Thanks for your thoughts, but I don’t think I wrote anything to diminish the importance of and contribution of observational studies – and I certainly had no such intent. I know and respect the history of and contribution of observational studies.

    You know that my work is aimed at improving the state of health journalism – and journalists who use causal language to describe observational studies are inaccurate and wrong and need help.

    We offer a primer for journalists on why the words used to describe observational studies matter – and how to choose those words more carefully.

    So, in trying to improve the quality and accuracy of journalism about observational studies, we are not in any way saying that these studies lack value.

    No baby being thrown out with the bathwater by this guy!

    Regards,

    Gary Schwitzer
    Publisher
    HealthNewsReview.org (with a team of 28 independent reviewers)

  2. Greg Pawelski on March 2, 2012 at 6:35 pm

    We tend to forget that medicine and most of its discoveries have been observational. Observational studies, which do not involve randomization but where available data are nonetheless analyzed to make treatment comparisons, have also been used to provide information on how well patients respond to treatment. Many investigators perform these types of studies by analyzing data from the Surveillance, Epidemiology and End Results (SEER) Registry.

    I can see that many journalists these days use causal language to describe observational studies, which is inaccurate and wrong, and this should be pointed out, as Gary has tried to do in improving the state of health journalism. However, when the results of observational studies turn out to be inaccurate and wrong, it is often a matter of how a particular study was designed or reported, not because observational studies are somehow less reliable as a method. One technique relies upon quantities, similarities, populations and averages; the other relies on qualities, idiosyncrasies, individualization and specifics.

    I think both Gooznews and Healthnewsreview have been invaluable resources in pointing out the various calamities of health journalism.

  3. Donald Klein on March 5, 2012 at 12:36 pm

    Both observational studies and randomized, double-blind, placebo-controlled clinical trials can lead to erroneous conclusions. Briefly, clinical trials, by randomization, can cancel out expected effects other than those produced by the experimental agent. However, those inferences are restricted by the sample definition and composition. Broadly generalizing such inferences from this particular sample, often largely selected by convenience, is problematic. Ideally the sample should be selected randomly from the population in question, but that is hardly the case. The practical solution is to rely on multiple positive replications. Unfortunately the FDA only requires two.
    Observational studies are open to both sample peculiarities and unrecognized confounds. It is true that many medical discoveries, such as aspirin, quinine, and chlorpromazine, were due to non-random observation, but they largely relied on historical controls where the expected untreated course of illness was quite clear. However, purging, bleeding, etc. were orthodox treatments that were in fact dangerous, although justified by “observation” and wrong theory. How to draw sound inferences from non-random observational studies is, to put it mildly, controversial. See any issue of the Journal of the American Statistical Association.
    Clearly skepticism is warranted regarding any novel, unreplicated finding, but insofar as journalism is in the business of building readership and advertising income, hope-generating novelty gets grabbed. I doubt that educating journalists about such methodological truisms will help much. Educating the general public, starting at about age 9, is more promising.
