Improving Diagnostic Judgment: A Behavioral Economic Approach (Part IV in series on Diagnostic Accuracy)

By Robert L. Moore, MD, MPH, MBA, Chief Medical Officer

“We’re blind to our blindness. We have very little idea of how little we know. We’re not designed to know how little we know.”

-Daniel Kahneman, Nobel Laureate in Economics

Regular readers of this newsletter will recall a series of lead articles on improving diagnostic accuracy (Parts I, II, and III, found on our blog). Medical schools, residencies, and continuing medical education programs have recently adopted formal training in critical thinking, including how cognitive biases can lead to mistaken diagnoses. This training applies the principles of behavioral economics, based on the pioneering work of psychologist Daniel Kahneman (summarized for a general audience in his best-known book, Thinking, Fast and Slow), to help us understand how physicians think and make mistakes.

In the January 25, 2022 JAMA, Dr. Pat Croskerry provided a succinct summary of recommendations for overcoming these biases to become a “rational diagnostician.”

  1. Establish Awareness of How Cognition Works. Understand the most common cognitive biases and the difference between type 1 (intuitive/fast) and type 2 (analytical/slow) processing.
  2. Teach and Coach Critical Thinking. Excellent coaching promotes deep learning, allowing 10-fold faster development of expertise. Understanding the mechanism of deep learning can help those of us without ready coaches to improve our mastery of complex areas of expertise. The book The Talent Code provides the best overview of this topic.
  3. Make the Work Environment More Conducive to Sound Thinking. Three main conditions interfere with analytical thinking:
    1. Psychological stress leading to anxiety and dysphoria,
    2. Sleep deprivation causing chronic fatigue, and
    3. Excessive cognitive loading (responding to a barrage of emails and tasks without time to pause and reflect).
  4. Circumvent Type 1 Distortion. Set up mental steps and processes that allow an “executive override”: pause and reflect on the possibility that our intuitive initial impression is incorrect, and evaluate possible alternative explanations or decisions. When a patient’s clinical presentation includes findings not explained by our initial, presumptive diagnosis, we pause to consider what else might explain them. For example: “Is this recurrent pharyngitis a sign of an underlying immune compromise?”
  5. Expand Individual Expertise. While routine expertise is developed with training and practice, adaptive expertise encourages flexibility and innovation in problem-solving. Adaptive expertise is fueled by curiosity; it develops when exploring the possibilities raised with type 2 thinking, and also by regularly reading journal articles or exploring topics that are unrelated to any particular patient.
  6. Promote Team Cognition. Regular conferring with colleagues on challenging diagnostic or therapeutic situations brings a collective expertise to bear, which can produce better outcomes for your patients. While synchronous consultation (for example “curbside consultation”) allows some back and forth, and is quicker, asynchronous consultation (for example using eConsult or secure email) allows time for more nuance and detail to be included and more analytic thinking and background research to be done.
  7. Mitigate Judgment and Decision-making Fatigue. Dr. Croskerry suggests the use of “cognitive forcing strategies,” like adopting clinical maxims such as “rule out worst-case scenario,” practices such as routinely documenting a differential diagnosis, or always using a pre-operative checklist.

The common feature of these approaches is that they require intentionality, derived from a sense of professionalism. It is essential for clinical leaders to find ways to nurture these habits in those on our teams.

Principles of Improving Diagnostic Accuracy (Part II of Diagnostic Accuracy Series)

By Robert L. Moore, MD, MPH, MBA, Chief Medical Officer

“There are three constants in life: change, choice and principles.” –Stephen Covey

Stephen Covey notes that actions and intent flow from principles; they are the foundation used to choose between different courses of action and to decide where to invest energy in self-improvement. The importance of principles extends to specific fields as well, including medicine.

In Part I of this series, we reviewed the extent of diagnostic inaccuracy in medicine, ranging from an error rate of 5% to 50%, depending on the nature of the patient/problem. A key contributor to this inaccuracy is our way of thinking about uncertainty: we are trained to be overconfident in the accuracy of our decisions.

Unfortunately, the other extreme, excessive concern about diagnostic uncertainty, leads less confident clinicians to order excessive laboratory and radiological tests. The Choosing Wisely campaign shines a light on scenarios where such tests are unequivocally useless, but it does not provide a framework for avoiding unnecessary testing when there is even slight uncertainty.

Last year, the American College of Physicians convened a group of experienced clinicians, teachers, and communications experts to address this challenge of diagnostic uncertainty. The product of this effort is called “Ten Principles for More Conservative, Care-full Diagnosis.” Here is a brief summary of the first five principles:

  1. Promoting Enhanced Care and Listening. Perform an appropriate and thoughtful history and physical exam. When the diagnosis is unclear, continue collecting the history and evaluating changes in the physical exam at subsequent visits to determine how the patient’s clinical course is unfolding.
  2. Understand Uncertainty. Become comfortable with it, learn how to respond to it, and how to convey it to patients.
  3. Respond Carefully to Symptoms. Balance the natural history of common symptoms (75%–80% are self-limited, resolving within 4 to 12 weeks) against potential psychological causes of symptoms (two-thirds of patients with anxiety, depression, or somatoform disorders are undiagnosed), considering both the Social Determinants of Health and the long-term effects of Adverse Childhood Experiences, which can also cause or accentuate symptoms.
  4. Maximize Continuity and Trust. Continuity of care by a primary care clinician is not only the single best predictor of patient satisfaction but also generates the trust needed to address the psychosocial issues mentioned above and to have patients trust the strategy of “watchful waiting” to observe the natural history of symptoms.
  5. Taming the Time Pressures around Patient Visits. Ensure the clinician has adequate time to listen, observe, discuss, and think. Adjust the system and the environment of care, as needed, to support this.

The derivation of these principles, which may seem self-evident, required the thought and effort of experts; trying to improve without principles to guide us is disjointed at best. As in moral philosophy, it is the application of principles that is the greater challenge. Some approaches include taking time to think about how to apply these principles, finding small self-improvements or system changes to move toward achieving them, and telling stories to help reinforce how we approach gaps.

You, as clinical leaders in your settings, have an especially important role to play in helping your clinicians learn and apply these principles. On behalf of your patients, thanks for addressing these challenges in your setting.

Diagnostic Accuracy (Part I in a series)

By Robert L. Moore, MD, MPH, MBA, Chief Medical Officer

In 2001, the Institute of Medicine (IOM) published Crossing the Quality Chasm, which defined six realms of quality:

  • Safe
  • Effective
  • Patient Centered
  • Timely
  • Efficient
  • Equitable

Most quality improvement and quality assurance activities in health care address one or more of these realms. However, one critical area is missing from both the IOM realms and most QI plans. This missing realm is at the heart of what it means to be a clinician, yet it is infrequently measured and uncomfortable to verbalize.

This realm is Accuracy, in particular the diagnostic accuracy of clinicians.

Diagnostic Inaccuracy: Studies show that between 5% and 50% of diagnoses are erroneous, depending on the type of patient/problem. The low end applies to populations where most patients are normal; the high end applies to populations where all patients have complex abnormalities. Autopsy studies and studies with “secret shopper” patients show inaccuracy rates of 10% to 20%. (Mark Graber, “The Incidence of Diagnostic Error in Medicine,” BMJ Quality & Safety, October 2013)

What is the source of this diagnostic inaccuracy?

At the core is an insufficient appreciation of uncertainty. Put another way, clinicians and scientists are often overconfident in the accuracy of their decisions. This psychological trait develops when we are trainees, as it makes us appear confident in the eyes of our patients and helps prevent us from being paralyzed by indecision. Fortunately, many medical conditions in primary care resolve on their own, so neither the clinician nor the patient ever becomes aware of the inaccuracy.

We can reduce diagnostic inaccuracy by changing the way that we think.

Daniel Kahneman (winner of the 2002 Nobel Prize in Economics) describes two ways of thinking:

  1. Fast thinking (also known as intuitive thinking or System 1 thinking)
  2. Slow thinking (also known as rational thinking or System 2 thinking)

Fast, intuitive thinking tends to be automatic, with input from emotions. In his book Thinking, Fast and Slow, Dr. Kahneman describes 12 classes of bias and 5 heuristics that can lead to irrational decisions when we think intuitively.

Slow, rational thinking is more deliberative, systematic, and logical, with an evaluation of consequences of a decision.

As we go through our everyday lives and routine practice of medicine, we use fast thinking for most decisions, so we can get through our days without being paralyzed by minor choices. When the stakes are high, or when we notice a diagnostic pattern that doesn’t quite fit, we need to transition to slow, rational thinking. To be both efficient and accurate, a clinician must know when to toggle between fast and slow thinking.

When retrospectively analyzing a serious diagnostic error, as happens in morbidity and mortality rounds, or upon becoming aware of a diagnostic error in our own practice, it is good to think explicitly about which biases or heuristics contributed to the error; doing so primes us to shift to slow thinking when it is needed.

This process is sometimes called “cognitive debiasing,” which is a fancy way of saying “learning from our mistakes”.

Good Medical Decision-Making: Much More than Applying Evidence (Diagnostic Accuracy Part III)

By Robert L. Moore, MD, MPH, MBA, Chief Medical Officer

“Medicine is best described not as a science, but as a form of flexible practical reasoning that often uses science.”

-Adam Rodman, MD

How do the best clinicians apply their knowledge?

Last year, this monthly newsletter reviewed the mental shortcuts, biases, and prior experiences that lead to poor medical decision-making, and discussed options for minimizing the degree to which these cognitive traps affect our clinical decisions. Think of this as the cognitive psychology of medical decision-making.

When mental shortcuts are minimized and reasoning is applied, we might at first blush think that the best reasoning limits itself to “Evidence-Based Medicine,” in which high-quality, prospective, placebo-controlled, double-blind, allocation-concealed studies are consistently applied to medical decisions. In reality, several sources of knowledge (sometimes conflicting with one another) are brought to bear. The study of the nature of knowledge is known as epistemology, a branch of philosophy. Within the medical realm, this is known as medical epistemology, a branch of the philosophy of medicine.

In an October 2019 grand rounds at Beth Israel Hospital, clinical professor Adam Rodman, MD, outlined a historical framework of medical epistemologies that clinicians use to decide which treatments to offer patients:

  1. Observation – This involves obtaining a careful and complete history and physical examination, reviewing lab work to categorize the disease or diseases a patient has, and recalling how similar patients or disease categories, whether directly observed or learned about, have responded to the treatments given. The earliest example is the 4,000-year-old Edwin Smith papyrus, in which an ancient Egyptian healer carefully described a series of 48 surgical cases and their treatments.
  2. Theory – Pre-scientific theories, such as the ancient Greek humoral theory of disease, dominated medical practice until the mid-nineteenth century, when they were replaced by scientific theoretical frameworks such as physiology, immunology, and biochemistry. These frameworks are used to interpret observations (such as a rising creatinine in a patient receiving a diuretic) and to make judgments based on that understanding.
  3. Experimentation/Clinical Trials – While there are scattered examples of medical experimentation before 1900, this is primarily a twentieth-century framework. It led to the Evidence-Based Medicine movement, starting in the late 1980s, which applied a hierarchy of medical trials and studies, with expert opinion at the bottom of the pyramid and meta-analyses at the top. That strict hierarchy has since given way to the current standard: grading of the available evidence, which takes many other factors into account.
  4. “Population Medicine”/Epidemiology/Biostatistics – This began in the early 1800s in Paris, where it was first called the “Numerical Method.” It involves collecting data on groups of patients and analyzing the data statistically for insights that can improve clinical decision-making for the individual patient being cared for. The most modern applications are decision rules (for example, for osteoporosis screening or genetic testing), “big data” analyses, and augmented-intelligence medical applications.

Rodman contends that whenever clinicians make treatment decisions on individual patients, we use some or all of these frameworks, even on the same patient, in the same day. The frameworks may lead to conflicting treatment options, which need to be sorted out rationally. Importantly, while the third framework is the one preferred by Evidence-Based Medicine purists, excellent real-life clinicians seamlessly integrate EBM with the other three frameworks. We need not feel guilty or inferior when we use those frameworks; they play a vital role in the decision-making of all excellent clinicians.

In the end, to the extent medicine uses science, it is the application of science to individualized treatment decisions that matters.

The medical ethicist Jose Alberto Mainetti stated it best in his work Embodiment, Pathology, and Diagnosis: “Diagnosis is not knowledge for knowledge’s sake. It is knowledge for the sake of action. Medicine exists to cure, to care, to intervene, or in limiting cases to know when not to intervene. Medicine is not a contemplative science.”

Knowing the noble history of these four epistemologies can help us balance their use thoughtfully, both in our continuing educational activities to better master them and in applying them to make therapeutic decisions that best serve our patients.