Is ChatGPT Fast Becoming ChatMD? Introducing Generative AI To Healthcare
Did you hear that ChatGPT is now qualified as ChatMD? It recently performed “at or near the passing threshold” for all three parts of the United States Medical Licensing Exam (USMLE) without any specialized training or reinforcement, demonstrating a “high level of concordance and insights in its explanations,” said the examiners. They concluded that large language models (LLMs) have the potential “to assist with medical education, and potentially, with clinical decision-making.”
I have written before about the use of AI in healthcare and the statistical analysis of large medical datasets, with AI applications ranging from assisting in the administrative work of medical professionals to managing workflow in a hospital, helping with diagnosis, and identifying more effective treatments for specific patients.
For example, Diagnostic Robotics is proactively identifying people at risk of preventable health incidents and, in addition, is reducing the time and cost of admitting a patient to a hospital; OncoHost is advancing precision medicine by predicting the medical benefit from specific cancer treatment for specific patients; Navina’s AI engine provides links between diagnoses, medications, labs, vitals, consult notes, imaging and more, allowing it to provide alerts regarding missed diagnoses, abnormal results, missing labs, and missing tests; and AEYE Health is preventing vision loss by detecting signs of diabetes in images that retinal experts deem healthy and providing a practical solution at the point-of-care.
The current boom in generative AI applications, however, is accelerating the introduction of AI-based solutions specifically tailored to healthcare and even the rate of adoption by healthcare providers.
Generative AI can help specifically in the processing, digitizing and storing of doctor-patient conversations and doctors’ notes and documentation. Abridge AI, for example, creates summaries of medical conversations from recorded audio during patient visits. The University of Kansas Health System has just made the Abridge application available to more than 1,500 practicing physicians, as it “addresses the biggest challenge facing our providers — excessive time spent on documentation including non-traditional hours,” said Dr. Gregory Ator, Chief Medical Information Officer and Head and Neck Surgeon at The University of Kansas Health System.
Another example of using generative AI to streamline time-consuming administrative tasks is DocsGPT.com, which integrates ChatGPT, “trained on healthcare-specific prose,” with the free digital fax service of the professional medical network Doximity. ChatGPT helps Doximity members draft a letter that is then reviewed by a physician before being faxed online to an insurer.
Beyond helping with healthcare administration, generative AI seems particularly suited to address two essential requirements. One is the dependence of AI systems on the availability of data, lots of data. The other is the stringent requirement for patient privacy. Generating “synthetic health data” could be the answer to both.
Synthetic health data is data that is artificially generated for research and analysis from real-world observations without revealing the source of the data, protecting the identity of patients. In December 2022, Google researchers published EHR-Safe, “a novel generative modeling framework,” demonstrating how synthetic data can retain its usefulness while meeting privacy requirements.
Syntegra is a startup using generative AI to create synthetic health data and its technology is being tested by Janssen Pharmaceutical Cos., a drug company owned by Johnson & Johnson. The synthetic data has been validated by Janssen’s data scientists against real data, and will be particularly useful for researching less common diseases, where it is harder to gather sufficient patient data, Janssen’s Sebastian Kloss told The Wall Street Journal.
Speaking about Electronic Health Records (EHR), the leading provider of such systems, EPIC, has already experimented with GPT-4, the just-released version of the Large Language Model that underlies ChatGPT. It has “shown tremendous potential for its use in healthcare” and will be used by EPIC “to help physicians and nurses spend less time at the keyboard and to help them investigate data in more conversational, easy-to-use ways.”
Similarly, Nuance Communications, a Microsoft company, announced on March 20 “a workflow-integrated, fully automated clinical documentation application that is the first to combine proven conversational and ambient AI with OpenAI's newest and most capable model, GPT-4.”
The promise of generative AI—and any kind of AI—for improving healthcare is attracting interest and investments from venture capitalists, Big Tech and healthcare incumbents. For example, Transcarent is acquiring part of AI-powered 98point6 Inc. in a healthtech deal valued at up to $100 million. “We're putting AI front and center,” Transcarent CEO Glen Tullman told Forbes, with a 98point6 chatbot that collects information via a text exchange with the patient and summarizes it for the doctor, who then takes over the chat.
The cutting-edge approach to AI also attracts other emerging technologies, even in an industry such as healthcare that has been known to be somewhat hesitant about the cutting edge. The Cleveland Clinic and IBM recently unveiled the first quantum computer delivered to the private sector and fully dedicated to healthcare and life sciences. It “will help supercharge how researchers devise techniques to overcome major health issues… potentially find new treatments for patients with diseases like cancer, Alzheimer’s and diabetes.”
Amid the general excitement and talk of promise, you hear a few dissenting voices. One belongs to Dr. Benjamin Mazer, assistant professor of pathology at Johns Hopkins University. In “The AI Doctor Will Charge You Now,” Mazer wrote: “The medical profession isn’t afraid of losing business to AI. Instead, physicians are embracing their ability to profit from it. Your AI doctor isn’t going to be free and efficient; it’s likely to be a costly generator of unnecessary care.”
And in “AI In Medicine Is Overhyped,” Visar Berisha and Julie Liss wrote: “Published reports of this technology currently paint a too-optimistic picture of its accuracy, which at times translates to sensationalized stories in the press… As researchers feed data into AI models, the models are expected to become more accurate, or at least not get worse. However, our work and the work of others has identified the opposite, where the reported accuracy in published models decreases with increasing data set size… We can prevent these issues by being more rigorous about how we validate models and how results are reported in the literature.”
The “sensationalized stories in the press” don’t do much to educate the public or to increase its confidence in healthcare AI. A December 2022 Pew Research Center survey of 11,004 U.S. adults found that only 38% say AI being used to do things like diagnose disease and recommend treatments would lead to better health outcomes for patients generally, while 33% say it would lead to worse outcomes and 27% say it wouldn’t make much difference.
The (future) machine doctor, however, is quite confident in its healthcare potential: “Chatbots… are the ultimate healthcare game-changer, turning the traditional healthcare model on its head and making it more accessible, accurate and convenient for everyone,” says ChatGPT.
Forbes