
Artificial Superintelligence and the Future of Healthcare: 8 Expert Takeaways
Featuring Tinglong Dai, Risa Wolf
As artificial intelligence rapidly advances across healthcare, a provocative question emerges: what happens when AI systems not only match but exceed human performance in medical diagnosis, treatment planning, and even empathy?
While the future of artificial superintelligence (ASI) remains speculative, the trajectory of current AI development demands serious consideration of how healthcare systems should prepare for potentially transformative technological capabilities.
In the latest installment of our popular webinar series, Conversations on the Business of Health, the Hopkins Business of Health Initiative (HBHI) convened two of the nation's leading experts to examine both the promise and perils of advancing AI in healthcare.
The featured speakers for this recent conversation included Bryant Y. Lin, co-founder and co-director of Stanford's Center for Aging Health Research and Education (CARE) and leader of Stanford's Medical Humanities and Arts program; and Girish Nadkarni, Chair of the Windreich Department of AI and Human Health at Mount Sinai and Chief AI Officer of the Mount Sinai Health System.
Risa Wolf, MD, and Tinglong Dai, PhD, the two co-chairs of HBHI’s AI and Healthcare workgroup, moderated the conversation.
“Imagine a day when machines exceed us,” said Dai. “What are the opportunities, the risks, and what must remain human?”
Here are 8 takeaways from our experts on artificial superintelligence and the future of healthcare:
1. Healthcare leaders must prepare now for AI that exceeds human performance, regardless of timeline uncertainty.
"ASI might happen in the next five years, next 20 years, next 50 years, or may never happen at all," said Dr. Lin. However, Dr. Nadkarni added that uncertainty shouldn't prevent preparation: "It behooves our larger community, who have been entrusted with this responsibility of caring for patients, to prepare as if it was already here, because in that way, you're preparing potentially for any eventuality." This proactive approach ensures human-centered care remains paramount regardless of technological developments.
2. Current healthcare systems lack the adaptive governance structures needed for superintelligent AI.
“Because we have no idea how AGI/ASI will come into being, we may not be able to put a control system in that part. The easiest thing is to place control systems on what [systems] can do,” said Dr. Lin. “We’re intercepting the input and output… but it’s challenging.”
On the issue of governance, Dr. Nadkarni also warned against creating "fixed fortifications" like the failed Maginot Line during World War II. "If we build fixed governance structures, then AI agents will simply drive around them, because their ethical values don't align to this," he explained. "That's why we need to shift from a more fixed governance structure to a more adaptive, more evidence-based, more observable governance structure with human oversight to ensure that medicine remains patient- and human-centered."
3. AI risks both de-skilling and mis-skilling healthcare professionals through automation bias.
Two critical challenges emerge as AI becomes integrated into clinical workflows. "De-skilling basically means that if you use AI too much, you lose your clinical skills," Dr. Nadkarni said, citing research showing that endoscopists who had grown accustomed to AI augmentation detected fewer polyps when later working without it. "Mis-skilling, also known as automation bias, is basically when you get information from the AI and you stop thinking critically, and you just trust the AI too much." These risks highlight why providers' clinical judgment must remain paramount in the hybrid systems of the future.
4. Healthcare infrastructure and workflows are fundamentally unprepared for advanced AI integration.
Despite technological advances, healthcare systems are in some ways woefully outdated. "I did residency around 12, 13 years back, and we used pagers then; some people use pagers now," noted Dr. Nadkarni. "The EHR systems are really not built for AI readiness. The workflows are not really built for AI readiness because they are configured with old, non-cognitive technology in mind."
This infrastructure gap represents a major barrier to effective AI adoption, especially because the U.S. is home to a dense patchwork of healthcare settings all operating in their own distinct ways. “Rolling out anything new will be a different implementation because workflows are all different,” said Dr. Lin. “As an intern, half of what you learn is medical and half is hospital-specific.”
5. The translation gap between AI research and clinical implementation creates dangerous blind spots.
A concerning disconnect exists between AI development and deployment. "We have a lot of stuff that we're doing on the research side that gets written about and put in papers and never translated [into practice]," Dr. Nadkarni observed. "On the clinical and operational side, we do a lot of things through vendors that never get researched to make sure that they are effective, safe and ethical."
This gap means healthcare systems may be running AI algorithms without understanding their safety or effectiveness, and the newest developments in AI technology may not be incorporated into practical applications.
“We need high-quality, intentionally curated, balanced datasets, not just datasets of convenience, and regulators are lagging on digital enforcement,” said Dr. Lin.
6. Medical education must evolve to reflect the new paradigm of practicing medicine in the age of AI.
“We're moving from a model of a knowledge-based profession to more of a critical thinking profession,” said Dr. Lin. “At the same time, I do think you still need that underlying knowledge to call on. You can't always just be looking things up, particularly when it comes to more challenging diagnoses. You need to know some things and have that data in your head, not just at your fingertips.”
7. AI promises to exchange knowledge for time, potentially addressing physician burnout while improving patient care.
One of the most compelling near-term benefits of AI in healthcare involves reclaiming physicians' time for human interaction. "If you think about AI as encoded knowledge, it's an arbitrage or exchange of knowledge for time," said Dr. Nadkarni. "In the case of ambient AI, giving back time to the physicians, preventing burnout, and in the case of predictive AI, giving enough time to make an actionable decision." This recovered time could restore the human elements of medicine that administrative burdens have eroded.
8. AI's scalability represents both its greatest potential and most dangerous risk.
The dual nature of AI's impact lies in its ability to reach massive populations simultaneously. "What excites me the most is the ability to scale. AI can influence hundreds of thousands, if not millions, of patients," said Dr. Nadkarni. "What scares me the most is also the ability to scale. Any mistakes, any biases that are inherent in it, also scale and can reach millions of patients."
This point underscores the critical importance of continuous monitoring and the ability to roll back problematic systems as soon as errors are detected, before widespread harm occurs.
Dr. Lin also warned that increasing reliance on AI systems could exacerbate existing inequalities. “In one day I can see a patient who is a billionaire, who has access to everything, and another patient who doesn’t even have a phone. What if you don’t have a phone? What if you don’t know how to use it? These capabilities aren’t accessible to you.”
HBHI’s popular webinar series, Conversations on the Business of Health, hosts expert discussions that engage leaders from business and academia working on the cutting edge of improving American healthcare. Register today for our next event and join us for insights you won’t find anywhere else.