A public conversation around generative artificial intelligence (AI) in medicine is both long overdue and increasingly urgent. In health care applications, the stakes are heightened because of the intricate privacy considerations and the critical nature of decision-making. The potential positive impact is also enormous.
“AI and medicine have always had a really intimate relationship. Even the birth of artificial neural networks was inspired by how human brains work.” –Tinglong Dai, PhD
In response to this need, HBHI faculty members Tinglong Dai, PhD, and Risa Wolf, MD, sought out two of their fellow leading experts in the field of AI and health to discuss the existing and potential applications of generative AI in everyday clinical workflows and the broader implications for the future of health care.
Dr. Dai and Dr. Wolf both serve as co-chairs of the HBHI Workgroup on AI and Healthcare, and the two experts joining the conversation were Chris Callison-Burch, PhD, and Curtis Langlotz, MD, PhD.
Dr. Callison-Burch is an associate professor of computer and information science at the University of Pennsylvania, where his AI course attracts over 500 students each fall. He is currently exploring how large language models like ChatGPT can solve complex challenges, and he recently testified before Congress on the topic of generative AI and copyright law.
Dr. Langlotz is a professor of radiology, medicine, and biomedical data science at Stanford University and the director of the Stanford Center for Artificial Intelligence in Medicine and Imaging. He is also associate director of Stanford’s Institute for Human-Centered Artificial Intelligence, which oversees an imaging study library of over 8 million entries. His lab is at the forefront of using machine learning to detect diseases and reduce diagnostic errors.
Together with our moderators, they discussed how AI tools are currently being used in healthcare delivery, along with the potential future of the technology. Here are 10 promises and perils to look out for as artificial intelligence becomes increasingly integrated into the field of medicine.
The upshot: AI can help…
…cut the jargon. Patients increasingly have access to their digital health data on apps or websites, but that doesn’t mean they understand the concepts they’re seeing in those reports. AI can prepare explanations at the reading level and in the language that each patient prefers.
…serve “back office” functions. Billing and patient communications, both significant administrative workloads, are among the early applications of AI in practice. These functions are well suited to language models’ current strengths, while effective diagnostic decision support will only come to fruition once the models are trained on large amounts of health data.
…predict next events. Rather than treating a patient’s record as just a sequence of words, AI can interpret it as a sequence of medical events. For example: a patient has a clinic visit, perhaps receives a diagnosis, and perhaps has an imaging test. Based on the words associated with that visit, diagnosis, and test results, AI can use previous patterns to predict the next event and how long it will take before that event occurs.
…analyze images powerfully. The current technology’s strengths in image processing are part of why radiology is leading the way in incorporating AI in healthcare delivery. “If you look at the FDA-cleared algorithms, three quarters of them (several hundred now) are focused on imaging and I think that'll probably continue,” said Dr. Langlotz. Beyond radiology, AI can change the way healthcare is delivered, as demonstrated by an autonomous AI diabetic retinopathy detection device that Dr. Wolf has pioneered in the pediatric diabetes population.
…keep patient data private. For example, OpenAI has a partnership with Microsoft, which in turn has a relationship with the electronic health record vendor Epic, creating a closed-loop instance of this technology that is HIPAA-compliant and private in the same manner as all other patient information in the system.
…train itself. The T in ChatGPT stands for transformer, the kind of neural network the model uses. By pre-training a transformer model on huge amounts of data of many kinds, including language and images, we prepare it to tackle nearly any kind of task, a learning strategy that has proved remarkably successful.
The downside: AI can also…
…create disinformation. Because large language models generate such incredibly plausible text, it’s easy to believe what they're saying is accurate. “It’s key to understand that they're not doing any fact checking,” said Dr. Callison-Burch. “They're not retrieving documents to compare their answers against and because they're generating text word by word, just like your cell phone is doing when it's doing autocomplete, they end up hallucinating things that sound really convincing.”
…be a “yes” man. “One concern for me about ChatGPT is that it tends to agree with what you say,” said Dr. Dai. The model typically generates responses that confirm or build on what the user seems to want to hear, which could lead to a risk of misdiagnosis in a medical context. Anything that requires computation or reasoning, rather than just patterns in words or images, remains beyond the current capabilities, which has huge implications for where the technology should be used at the moment.
…provide a false sense of security. “Sometimes I draw the analogy to a self-driving car where if 99.9% of the time it's driving perfectly, that's still insufficient, because the remainder is going to cause you problems, and it will also potentially lull you into a false belief that it’s better than it is,” said Dr. Callison-Burch.
…intimidate even the experts. Although Dr. Callison-Burch has been deeply involved in the subfield of natural language processing for the last 20 years, he described the experience of testing out the capabilities of the OpenAI system as a “career existential crisis.”
“It could do these amazing things that I previously thought were all first-order research questions, things I wouldn’t even assign my own PhD students to do, and it was really a jaw-dropping moment for me,” said Dr. Callison-Burch. “So I’ve gone through the many iterations of reactions to this as I think all of us have, starting with fear and panic that my career may be obsolete and trying to find my own footing and how to use these technologies in an effective way.”