Use and Misuse of “Evidence Based”

By Robert L. Moore, MD, MPH, MBA, Chief Medical Officer

“People almost invariably arrive at their beliefs not on the basis of proof but on the basis of what they find attractive.”

-Blaise Pascal

Clinicians strive to base their diagnostic and treatment practices on appropriate interpretation of scientific studies. The Evidence Based Medicine (EBM) movement has grown over the past 50 years to create frameworks for evaluating and applying such studies. The worthy goals of EBM are to avoid unnecessary interventions that can potentially harm patients and to improve the health care that we do provide.

The term “evidence based” is sometimes misused.

I recall a resident physician from UCSF commenting on a particular intervention: “There is no good evidence that this intervention works.” In this case, however, there was no strong evidence for any other intervention either, including the one used at UCSF. The resident had earlier pointed to studies that showed both a lack of statistically significant improvement with the intervention and a “trend toward benefit.”

On the other hand, a study can show definitively that an intervention has no benefit. These two situations are not equivalent: “lack of evidence of benefit” is not the same as “evidence that an intervention does not work.”

In the first case, a clinician can quite defensibly try the intervention if there is no established superior treatment. In the second case, when studies definitively show no benefit, a clinician who uses the intervention is arguably practicing substandard medicine.

Evidence based medicine was also misused several times during the Covid-19 pandemic. Transmission of the earlier coronaviruses causing SARS-1 and MERS was found to be impeded by the use of masks. Early in the Covid-19 pandemic, before studies could have been conducted showing that masks also reduced transmission of the new virus, the Centers for Disease Control and Prevention (CDC) stated that mask wearing by the public was not recommended because of the absence of evidence that masks helped. This incorrectly implied that absence of evidence of benefit meant absence of benefit, even though the prior probability, based on related viruses, suggested that a benefit was likely. When the evidence did become available and the message flipped, first to recommending masks and later to recommending highly effective masks, this fundamental reversal contributed to a loss of trust in the CDC.

In behavioral health and social science research, “evidence-based practice” is a standard requirement for programs to be funded. For example, an academic study of a new behavioral counseling technique might show that it reduces depression symptoms by 5%, from an average PHQ-9 score of 20 to 19. Technically, this is evidence based: a published study shows a benefit of the intervention. Many government grants would allow this intervention to be implemented more widely.

Implementing this “evidence-based” intervention would be a mistake, for two reasons:

First, the setting in which this academic study was done almost certainly differs from any real-world setting; “implementation science” studies such questions. In general, four replications of a behavioral intervention in different settings, all showing a similar benefit, are needed to have 95% or greater confidence that another implementation of the intervention will produce the same benefit.

Second, although the 5% reduction may be statistically significant, it is not clinically meaningful. Sadly, one often has to read scientific studies carefully to see whether a difference is clinically meaningful, as the sketch below illustrates.
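To make the distinction concrete, here is a minimal sketch in Python, using hypothetical numbers rather than any actual study, of how a large trial can yield a 1-point PHQ-9 difference that is statistically significant yet clinically trivial. The 5-point minimal clinically important difference (MCID) for the PHQ-9 is an assumed illustrative threshold, not a value taken from the article.

```python
# Minimal sketch (assumed, hypothetical data): a 1-point PHQ-9 difference
# can be statistically significant in a large simulated trial, yet fall far
# below an assumed minimal clinically important difference (MCID) of 5 points.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 2000  # large samples make tiny differences "significant"

control = rng.normal(loc=20.0, scale=5.0, size=n)  # mean PHQ-9 of 20
treated = rng.normal(loc=19.0, scale=5.0, size=n)  # mean PHQ-9 of 19

t_stat, p_value = stats.ttest_ind(control, treated)
mean_diff = control.mean() - treated.mean()

MCID = 5.0  # assumed clinically meaningful change on the PHQ-9

print(f"mean difference: {mean_diff:.2f} points, p = {p_value:.2g}")
print(f"statistically significant (p < 0.05): {p_value < 0.05}")
print(f"clinically meaningful (>= {MCID} points): {mean_diff >= MCID}")
```

With thousands of simulated patients per arm, the p-value is far below 0.05, yet the average improvement is only about one point, well short of the assumed threshold for a change a patient would actually notice.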

The term “evidence based” has also been used to disparage a person’s educational level. For example, a community health worker (CHW) with a 10th grade education may be intuitively skilled at connecting with clients and motivating behavior change, yet lose a job to a more educated and articulate applicant if the hiring manager says that the CHW does not use “evidence-based approaches,” rather than recognizing that the CHW simply has not been trained in that vocabulary. Such a statement is arguably a reflection of implicit bias against someone with less formal education.

All of these examples of misuse of “evidence based” reflect cognitive biases of one form or another. In the first case, the resident preferred one approach over another and incorrectly used the term “evidence based” to disparage one approach and prop up the other. This is sometimes called “confirmation bias” or “myside bias,” a very common and very human bias to which scientists are not immune. In the second case, observers in many Asian countries noted a cultural bias in the United States against public mask wearing, which likely played a role in the early recommendation NOT to wear masks to limit the spread of Covid-19. In the third case, confirmation bias is also at play: a researcher wants something important to come out of their research, because important findings more often lead to publication, invitations to give talks, and academic reputation.

When someone smart uses “evidence based” to promote or disparage a particular practice or treatment, our internal bias detection should move into high gear. Switch to System 2 thinking (slow thinking), and critically review the underlying evidence for statistical significance, clinical meaningfulness, and replicability.