We are particularly grateful to Drs. Danton Char and Kadija Ferryman for sharing their invaluable insights into the development and deployment of medical AI. We also thank Dr. Michael Abramoff, inventor of the first FDA-approved autonomous AI, for attending the seminar and reinforcing the insights shared by Drs. Char and Ferryman.

Below is the AI-generated summary of the December 15, 2023 seminar, with minor edits made by the AI Workgroup Leadership:

The HBHI-AI Seminar discussed the ethical challenges of implementing AI in healthcare. Dr. Danton Char of Stanford Medicine emphasized the importance of addressing ethical concerns arising from next-generation technologies such as AI, genomic testing, and mechanical circulatory support. He proposed a framework based on Stanford's implementation science model for new technologies, which emphasizes the need to identify stakeholder values and address potential conflicts. The seminar also discussed the challenges and value conflicts encountered in the implementation of a mortality prediction tool in palliative care, the ethical challenges of applying AI in healthcare, and the need for continuous monitoring and adaptation of AI models.

Summary

Ethical Challenges in AI Healthcare Implementation

The monthly HBHI-AI Seminar series discussed the ethical challenges related to implementing AI in healthcare delivery. Dr. Danton Char from Stanford Medicine was invited to speak about these challenges. Dr. Char, a pediatric cardiac anesthesiologist and empirical bioethics researcher, shared his insights on ethical concerns arising from the implementation of next-generation technologies such as AI, genomic testing, and mechanical circulatory support. The seminar also included a discussion on the changing landscape of healthcare ethics and the increasing importance of addressing ethical challenges in the context of AI.

Genomic Sequencing and Ethical Implications in Critically Ill Children

Danton discussed the implementation of genomic sequencing in the care of critically ill children and its potential benefits and ethical implications. He explored the use of AI tools to analyze genomic sequencing data and provide knowledge support to clinicians at the bedside.

Danton raised concerns about the possibility of self-fulfilling prophecies in healthcare and the ethical issues associated with AI and machine learning tools. He emphasized the importance of principles such as responsibility, equity, bias avoidance, transparency, reliability, and governability in the development of AI tools. The role of ethical assurance labs in evaluating AI tools was also discussed.

Ethical AI in Healthcare: Bias, Privacy, and Accountability

Danton discussed the ethical implications of AI in healthcare, highlighting the issue of bias and the potential perpetuation of existing inequalities. He also raised concerns about privacy and ownership of data, citing instances where AI tools have negatively impacted healthcare professionals and patients.

Danton also pointed to the challenge of accountability for mistakes, citing the example of a Starbucks AI tool that led to unstable work schedules for employees. He proposed a framework based on Stanford's implementation science model for new technologies, which emphasizes the need to identify stakeholder values and address potential conflicts. Danton also mentioned an ongoing project to develop a machine learning-based mortality prediction tool.

Ethical Challenges in Palliative Care Innovation

Danton discussed the challenges and value conflicts encountered in implementing a mortality prediction tool in palliative care. He highlighted the ethical concerns raised by its use, including pressure to publish, potential misuse of predictive models, and the impact of regulatory metrics. Danton also emphasized the importance of transparency and accountability in healthcare innovation.

Kadija then offered a perspective on Danton's work, noting his focus on core bioethical principles, expansion into other areas of ethics and stakeholder relations, interdisciplinary work, and practical guidance for implementation. She praised his emphasis on balancing ethical principles with the needs of different stakeholders.

AI Ethics in Healthcare: Challenges and Solutions

Kadija and Danton discussed the ethical challenges of using AI in healthcare. Danton acknowledged his limited knowledge of bioethics and highlighted the need for legal support and expert input. They also noted the difficulty of giving a voice to patients who lack AI knowledge and the need for rapid ethical review. Danton further noted that AI exceptionalism, while bringing more financial support, does not necessarily introduce new ethical issues, but rather amplifies existing ones. He emphasized the importance of understanding how these models work in order to build trust in their results.

Ethical Implications of AI in Healthcare

The seminar discussed the ethical implications of AI in healthcare, with a focus on regulation and reimbursement. Danton and Kadija opened the discussion, and Michael elaborated on the role of ethical frameworks in AI regulation and reimbursement. They also introduced the concept of metrics for ethics, which could potentially speed up the review process.

Catherine then raised concerns about the trade-off between predictive accuracy and fairness with respect to protected classes, highlighting the potential for discrimination in AI models. Danton acknowledged the complexity of the issue and emphasized the need for ongoing oversight and constant adjustment of AI models. The discussion concluded with Irene asking about ethical considerations related to fairness.

AI Decision-Making Challenges in Healthcare

Irene raised concerns about the need for a human in the loop when making AI decisions and the challenges of recruiting experts. Danton emphasized the importance of having an appropriate group of experts at many levels to evaluate AI tools. Marcus asked how he could contribute to the conversation by participating as a collaborator, initiator, or sponsor. The team acknowledged the challenges of identifying the necessary expertise and the need to develop professional ethics for AI in healthcare. They agreed that the AI decision-making process is still in its infancy and needs to be improved.