By Robert L. Moore, MD, MPH, MBA, Chief Medical Officer
“It is clear to me that AI will never replace physicians — but physicians who use AI will replace those who don’t.”
– Jesse Ehrenfeld, President of the American Medical Association
In the last two years, advances in specialized computer chips (such as NVIDIA's graphics processors) and in neural network design have allowed artificial intelligence (AI) systems to learn from the accumulated knowledge of the public internet and generate new content. The process of creating a new Large Language Model (LLM) AI system is akin to a parent guiding the mental and moral development of a child. Like infants absorbing information from the world around them, new LLMs absorb unstructured and unlabeled text, images, and sound from the internet. After this unstructured phase, like older children attending school, the LLMs are fine-tuned: they are trained for specific tasks, and boundaries of right and wrong are defined through programming and trial and error.
Human beings are needed to ensure the ethical guardrails are sound and built in, and to program additional overarching rules that make the output more accurate, useful, and innovative. The final product is analogous to an overconfident high school student with a photographic memory.
For an understandable and entertaining introduction to AI, I recommend the three-episode series on Freakonomics Radio.
When considering the myriad potential uses of generative AI in health care, it is helpful to divide the possibilities into two buckets:
- Increasing efficiency, by teaching generative AI to perform quickly and accurately the fairly repetitive tasks that currently require some amount of human mental processing.
- Improving the quality of an activity, by recognizing patterns faster or drawing on a deeper fund of knowledge than a human can.
One example of efficiency improvement is using a programmed LLM as a computer scribe for clinician visits. Several clinicians in the Partnership service area are starting to use these programs. Early adopters of this technology say it reduces the time they spend documenting patient visits by about 15%.
There are many digital scribes on the market; each Electronic Health Record (EHR) vendor seems to be building one programmed to work best with its particular EHR. Freestanding programs are more like computer dictation systems, not integrated with the EHR.
Deeper EHR integration of these computer scribes has begun with many of the larger EHR systems. Psychiatrist and technology consultant Bill Weeks expects this to progress rapidly: the EHR database will still exist in the background, but an AI assistant will replace the typing and clicking that clinicians currently find so disheartening.
Image recognition is a second early application of AI in medicine. The main medical example of using AI to improve quality is in radiology and pathology, where training on huge volumes of images is used to help radiologists and pathologists identify concerning patterns they might otherwise have missed with their own eyes. An example more relevant to primary care clinicians (and approved by the FDA and covered by Partnership) is the use of AI to interpret retinopathy screening images taken in the primary care office.
A third example of using AI to improve efficiency is creating a virtual “you” to educate your patients. For advice you find yourself repeating over and over during the day, an AI capable of copying your voice and likeness can deliver high-quality, personalized education to your patients, whether they are in the office or at home. Early pilots suggest this will both save time and improve the chances that the advice is followed. When patients are fully aware that the avatar is speaking for you, with information you have approved, your AI-generated avatar can deliver more personalized and detailed education than you would have time for in a clinical visit. Notably, many are concerned about this particular application of AI, since the same technology can be used to mimic politicians in political campaigns.
A generative AI (that is, an LLM able to generate new content) trained on a variety of high-quality medical references could help improve the quality of preventive and curative care. If integrated with your EHR and built into your computer scribe, such a system could remind you of preventive care items the patient is due for and order them with your verbal approval.
In acute care and chronic disease care, future LLMs will seamlessly speak up to point out allergies when a medication is ordered, note new studies showing better drug treatments, offer a differential diagnosis, summarize and interpret abnormal lab results, and more. It will be rather like having a very smart third-year resident in the room, taking notes and looking things up as you see your patient. It will take some training to make this happen seamlessly and accurately, but it is likely to be standard in a few short years.
Improved efficiency will help address clinician burnout. Artfully using generative AI to progressively improve the quality of care may do the same; only time will tell. Either way, some training is needed, both of the generative AI itself and of the humans who use it!