Below is the AI-generated summary of the October 13, 2023, seminar, with minor edits by the AI Workgroup Leadership:
Topic: Legal Liability Implications of Using Medical AI
Nicholson discussed the potential benefits and challenges of using AI in medicine, opening with the example of a woman who suffered a heart attack and permanent damage because she was not diagnosed in time. He highlighted the efficiency and triaging capabilities of the AI algorithm involved, but also noted its opacity and the fact that it was trained predominantly on male patients, which could lead to discrepancies in diagnosis between men and women. Nicholson emphasized the need for the law to be involved in regulating and improving the use of AI in medicine, including through tort liability and distributed governance. He also acknowledged, however, that the law does not have all the answers and that the future of AI in medicine remains in flux.
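The male-skew problem Nicholson described is, at bottom, a measurable performance gap. As a minimal, hypothetical sketch (simulated data, not anything presented at the seminar), the following Python snippet shows how stratifying a model's sensitivity by patient sex surfaces the kind of diagnostic discrepancy he warned about:

```python
# Hypothetical illustration: a diagnostic model that misses more events in
# female patients, as can happen when the training data skews heavily male.
import numpy as np

rng = np.random.default_rng(0)

n = 1000
sex = rng.choice(["M", "F"], size=n)          # simulated patient sex
y_true = rng.integers(0, 2, size=n)           # 1 = true cardiac event
# Simulate sex-dependent false-negative rates (illustrative numbers only).
miss_rate = np.where(sex == "M", 0.10, 0.35)
y_pred = np.where((y_true == 1) & (rng.random(n) < miss_rate), 0, y_true)

def sensitivity(truth, pred, mask):
    """True-positive rate within one subgroup."""
    positives = (truth == 1) & mask
    return (pred[positives] == 1).mean()

for group in ("M", "F"):
    print(f"sensitivity ({group}): {sensitivity(y_true, y_pred, sex == group):.2f}")
```

A model can report strong aggregate accuracy while one of these subgroup numbers lags badly, which is why stratified evaluation is the usual first check for this failure mode.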
AI Ethics and Challenges in Medicine
Nicholson discussed the potential of artificial intelligence (AI) to democratize expertise, automate drudgery, and allocate medical resources, and he highlighted the ethical questions raised by AI's role in allocating resources. He also pointed out the challenges of quality control and bias in AI systems, using the examples of an AI system that identified patients with complex care needs based on their past spending and an AI diagnostic system for diabetic retinopathy. He concluded by emphasizing variation across contexts as a persistent quality challenge for AI in medicine.
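The complex-care example turns on a subtle labeling choice: the algorithm predicted spending, not sickness. As a hypothetical sketch (simulated numbers and invented group labels, not the actual system discussed), the snippet below shows how a spending proxy under-enrolls a group that spends less at the same level of illness:

```python
# Hypothetical illustration of proxy-label bias: "future spending" stands in
# for "care needs", so a lower-spending group is under-selected.
import numpy as np

rng = np.random.default_rng(1)

n = 10_000
group = rng.choice(["A", "B"], size=n)
illness = rng.gamma(shape=2.0, scale=1.0, size=n)        # true care needs
# Suppose group B spends ~30% less than group A at the same illness level.
spending = illness * np.where(group == "A", 1.0, 0.7)

# A "model" that enrolls the top 10% of patients ranked by the spending proxy.
selected = spending >= np.quantile(spending, 0.90)

for g in ("A", "B"):
    in_group = group == g
    picked = in_group & selected
    print(f"group {g}: share enrolled = {picked.sum() / in_group.sum():.1%}, "
          f"mean illness of enrolled = {illness[picked].mean():.2f}")
```

The output makes the distortion visible: the lower-spending group is enrolled less often, and its members must be sicker than the other group's to qualify at all.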
Product Development and Data Locality Challenges
Nicholson discussed the challenges of product development and data locality. He highlighted the problem of products performing differently in different environments, sometimes failing unexpectedly. He also noted that models trained on dermatology datasets composed mostly of light-skinned patients often perform poorly on dark-skinned patients. Nicholson explained that these issues stem from contextual bias, that users often take an algorithm's quality on faith, and that the market does not address the problem efficiently. He suggested that the law can respond through regulation or liability, with providers, hospitals, or algorithm developers as potential defendants. However, Nicholson expressed doubts about the law's ability to incorporate algorithmic recommendations into the standard of care.
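The data-locality point can be made concrete with a small experiment: fit a model where it was developed, then score it unchanged at a site where the feature-outcome relationship differs. This is a hypothetical sketch on simulated data (the sites, features, and numbers are invented), not a reconstruction of any system Nicholson described:

```python
# Hypothetical illustration: a model that looks strong at its development
# hospital degrades at a deployment hospital with a different context.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

def make_site(n, weights):
    """Simulate one hospital; `weights` encode how features relate to outcomes there."""
    X = rng.normal(size=(n, 3))
    p = 1 / (1 + np.exp(-(X @ np.asarray(weights))))
    return X, (rng.random(n) < p).astype(int)

# At the deployment site the feature-outcome relationship differs
# (different scanners, coding practices, or patient mix).
X_a, y_a = make_site(5000, [1.5, -1.0, 0.0])   # development hospital
X_b, y_b = make_site(5000, [0.0, -1.0, 1.5])   # deployment hospital

model = LogisticRegression().fit(X_a[:4000], y_a[:4000])
auc_a = roc_auc_score(y_a[4000:], model.predict_proba(X_a[4000:])[:, 1])
auc_b = roc_auc_score(y_b, model.predict_proba(X_b)[:, 1])
print(f"AUROC at development site: {auc_a:.2f}")   # strong
print(f"AUROC at deployment site:  {auc_b:.2f}")   # degraded
```

This is the kind of silent degradation a purchasing hospital would miss if, as Nicholson put it, performance is taken on faith rather than validated locally.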
Regulating AI in Healthcare: Nicholson's Insights
Nicholson discussed the challenges of regulating AI in medical technology through the Food and Drug Administration (FDA). He emphasized the need for distributed governance, in addition to centralized governance, to ensure the effectiveness of AI systems at the point of care. Nicholson also highlighted the importance of continuously monitoring and adapting AI systems as patient needs and data sets change. He suggested outsourcing evaluation tasks to entities such as the FDA or the Office of the National Coordinator, which promote transparency. Nicholson and Risa (from Johns Hopkins) discussed the potential benefits of AI in healthcare and the challenges of implementing such systems, including the liability distinction between assistive AI and autonomous AI. Nicholson also cautioned that human-machine systems must be designed so that humans are not merely there to absorb liability but play a meaningful role.
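Nicholson's continuous-monitoring point is often operationalized as drift detection: compare live inputs against the distribution the model was validated on and flag divergence. A minimal sketch, assuming a single numeric feature and an illustrative (not clinically validated) alert threshold:

```python
# Hypothetical illustration of post-deployment drift monitoring on one
# input feature, using a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)

# Reference window: the input distribution the model was validated on.
reference = rng.normal(loc=0.0, scale=1.0, size=2000)

def drift_alert(live_batch, reference, alpha=0.01):
    """Return (alert, statistic) for a two-sample KS test against the reference."""
    stat, p_value = ks_2samp(live_batch, reference)
    return p_value < alpha, stat

# Month 1: patient mix unchanged. Month 6: e.g., a lab recalibrates an assay.
batches = {"month 1": rng.normal(0.0, 1.0, size=500),
           "month 6": rng.normal(0.6, 1.2, size=500)}

for name, batch in batches.items():
    alert, stat = drift_alert(batch, reference)
    print(f"{name}: KS statistic = {stat:.3f}, drift alert = {alert}")
```

In practice such checks run per feature and per site, feeding exactly the kind of distributed, point-of-care governance Nicholson described alongside centralized FDA review.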
Implications of AI in Law and Medicine
Tinglong and Nicholson discussed the implications of large language models for the law. Nicholson noted that every lawyer and law professor he knew was grappling with how these models would interact with the law. They also discussed the legal and medical implications of AI systems in medical care, with concerns raised about AI systems providing substandard care or incorrect answers. Nicholson confirmed that no physician in the United States had ever been sued for using AI in practice, though an ongoing lawsuit in California over a biased clinical algorithm could set a precedent.

Christine suggested spreading the risks of liability among all members of the patient care team to incentivize the use of AI; Nicholson expressed concern about spreading liability this way, suggesting it might not be beneficial in all cases. The discussion then shifted to AI insurance, with Tinglong sharing a link for further information. Antonio asked about the balance between compensating patients and encouraging innovation, and Nicholson noted that the liability-cap issue might become irrelevant once AI's impact is considered. Gordon, Tinglong, and Nicholson discussed how well AI bias lends itself to lawsuits, and Miaolan raised concerns about the transferability of machine learning models from one hospital to another. Nicholson concluded by expressing appreciation for the opportunity to participate, noting that these complex issues required more time than the meeting allowed.