HBHI is the host of 'Conversations on the Business of Health,' a series of one-hour webinars that engage leaders of business and academia about improving American health care. Here are highlights from the recent discussion about the challenges and opportunities of implementing Artificial Intelligence (AI) in healthcare delivery, featuring Avi Goldfarb, Rotman Chair in Artificial Intelligence and Healthcare and Professor of Marketing at the University of Toronto's Rotman School of Management, and Natalia Levina, Professor of Technology, Operations, and Statistics at New York University's Stern School of Business. The conversation was moderated by HBHI's Andrew Ching, Professor of Marketing and Economics at the Johns Hopkins Carey Business School with a joint appointment at the Bloomberg School of Public Health, and Tinglong Dai, Professor of Operations Management and Business Analytics at the Johns Hopkins Carey Business School with a joint appointment at the Johns Hopkins School of Nursing. This conversation was co-sponsored by the Digital Business Development Initiative at the Carey Business School.

For many years, there has been collective excitement about using artificial intelligence in healthcare delivery. Still, AI technology is often talked about as if it were science fiction and the future of healthcare. Despite the FDA's approval of more than 500 medical AI systems by July 2022, many experts question whether AI is already a reality in the medical arena or still far in the future, and how the obstacles to adopting the new technologies can be overcome.

As an economist who studies the opportunities and challenges of the digital economy, Avi Goldfarb said he is super excited about the potential for AI in healthcare. But his research shows that adoption of AI in healthcare has been slow so far. From census data he collected, Goldfarb found that less than 5% of healthcare organizations are using AI tools; job board data showed fewer than 1,000 healthcare jobs related to machine learning and AI. "There's this weird dichotomy between the juxtaposition of people excited about AI in healthcare who can see the potential, and [yet] when you look at the data on the ground, so far the impact has been minimal," Goldfarb said.

Natalia Levina, who studies the evaluation and adoption of AI in medicine, said her observations are similar. Her 2018 research showed that few medical specialties, other than diagnostic radiology, had adopted more advanced AI tools. But she believes that because people's lives and well-being are at stake, it's good to go slowly when adopting AI in the healthcare setting. "Research is moving forward, but it's not necessarily producing the type of tools we should adopt widely without caution."

The areas of healthcare in which AI adoption has taken hold, Levina said, are those where technology can do a better job than humans of analyzing large data sets and where accuracy is less important than speed. "A lot of the more promising tools are in areas where humans lack expertise, rather than where humans are already doing a great job and need to be replaced or augmented," she added. 

Meanwhile, Goldfarb suggested that the upside to a slow rollout is that "AI is operating in healthcare; it's just not in the places that we might have imagined being so exciting." For example, healthcare research—where data scientists use machine learning tools to do the stats that underlie a research paper for a clinical scientist—is one area of AI that's operating at scale.  

AI is operating in healthcare; it's just not in the places that we might have imagined being so exciting.

Still, there is a big push by vendors and researchers to sell new, shiny tools that may or may not work, even to those at the forefront of adoption. Ultimately, the last five or 10 years have shown that the impact of replacing one human process with a machine, without changing anything else, is limited. And AI is costly. "For most organizations, the juice isn't worth the squeeze, and they're not bothering," Goldfarb said. "Is doing something differently worth the risk just to make the radiologist's job incrementally better?"

Maybe not, but Goldfarb believes it's worth the effort to understand where the bigger opportunity lies, and that comes down to prediction. In his new book, "Power and Prediction: The Disruptive Economics of Artificial Intelligence," Goldfarb and his co-authors describe AI as a prediction technology that directly affects decision-making. Once you have predictions, you can reimagine workflows and think about system-level change.

On a more micro level, other obstacles prevent medical practitioners from adopting AI. Once a tool is in place, Levina suggested, individual physicians may grapple with three things: an inability to explain the tool's results, a lack of time to reconcile its second opinion with their own, and uncertainty about whether to trust the tool.

While it may take a while, an impressive technology eventually will be a game-changer for AI in healthcare. For Goldfarb, it's diagnostics. "Diagnosis is fundamentally a prediction problem, and machines are good at prediction," he said. And for now, while everyone agrees there will be a giant leap in healthcare AI, no one can predict when it will happen. Levina said, "It won't be in five years."

Go deeper on this topic with HBHI's 'Conversations on the Business of Health' webinar from Nov. 18. Watch it here.