Seminar Summary for HBHI Workgroup on AI and Healthcare (10/11/2024), Featuring Dr. Charlotte J. Haug, Executive Editor of NEJM AI

Speaker Bio: Dr. Charlotte J. Haug, MD, PhD, is the Executive Editor of NEJM AI and an International Correspondent for the New England Journal of Medicine. She is also a Senior Scientist at SINTEF Digital Health in Norway and an Adjunct Affiliate at Stanford Health Policy. Dr. Haug has a background in clinical medicine and holds an MSc in Health Services Research from Stanford University. She has played significant roles in priority setting and oversight of healthcare systems, both in Norway and internationally. From 2002 to 2015, she served as Editor-in-Chief of the Journal of the Norwegian Medical Association and was a Council Member of the Committee on Publication Ethics (COPE) from 2005 to 2015, serving as Vice-Chair from 2012 to 2015. Her work focuses on scientific publication, research ethics, and the ethical use of personal data in clinical research, with a special emphasis on the responsible and ethical application of AI in clinical medicine, particularly around privacy protection, minimizing bias, and improving care for underserved populations.

Abstract: Dr. Charlotte J. Haug’s presentation, titled “NEJM AI: Advancing Artificial Intelligence for Health,” provided an in-depth look at the challenges and opportunities in applying AI to healthcare. She discussed NEJM AI’s mission to set rigorous standards for AI tools, ensuring they meet the same evidence requirements as other medical interventions. Dr. Haug highlighted the difficulties of applying traditional research methods, such as randomized controlled trials, to AI studies and the complexities AI introduces into clinical workflows. She explained that AI has the potential to reduce diagnostic errors and improve decision-making, especially for underserved populations. Dr. Haug also explored how AI could transform clinical trials by making them more inclusive and less costly, potentially involving more diverse patient populations and generating more comprehensive data.

Summary: On October 11, 2024, the Hopkins Business of Health Initiative (HBHI) convened a seminar as part of its series on artificial intelligence (AI) and healthcare. The event featured Dr. Charlotte J. Haug, Executive Editor of NEJM AI, and was moderated by Dr. Tinglong Dai and Dr. Risa Wolf. The session brought together healthcare professionals, researchers, and experts to discuss the challenges and advancements in applying AI to healthcare, with a particular focus on how scientific journals like NEJM AI are shaping the future of AI in clinical practice.

Dr. Haug began by addressing the purpose of NEJM AI, a journal launched in July 2023 to respond to the growing demand for scientific scrutiny and guidance in the application of AI in healthcare. As she explained, the field of AI in medicine is still in its early stages. While AI has shown significant potential, much of the technology remains in the "proof of concept" phase, meaning that its full integration into clinical practice has yet to be realized. Dr. Haug emphasized that NEJM AI seeks to bridge this gap by ensuring that AI tools meet the same clinical evidence standards expected of traditional medical interventions.

The journal’s mission goes beyond merely publishing innovative research: it aims to ensure that AI applications in healthcare provide clear, measurable benefits for patients, their families, and healthcare professionals. Dr. Haug stressed that AI tools should not only be seen as cutting-edge technology but must demonstrate real improvements in patient outcomes, healthcare system efficiency, or quality of care. She cited one of the journal’s editorials calling for pre-registration of all AI-related trials, a policy that will be fully enforced starting in 2025 to maintain transparency and accountability in research.

Dr. Haug highlighted the role of NEJM AI in shaping ethical standards and fostering conversations within the AI research community. The journal has introduced guidelines on the responsible conduct of AI studies, including the importance of securing informed consent for patient data usage, even in small-scale implementation studies. This is particularly relevant as AI relies heavily on large datasets, raising privacy and ethical concerns. The journal also organizes virtual events that engage researchers in discussions about emerging topics, such as the role of large language models in healthcare, regulation of AI tools, and the involvement of patients in the AI research process.

A significant portion of Dr. Haug’s talk focused on the role of AI in clinical trials. She explained that traditional clinical trials are time-consuming, expensive, and often limited in scope, typically answering narrow questions such as, "Does this drug perform better than another?" AI, however, has the potential to make trials faster, cheaper, and more comprehensive. Dr. Haug expressed optimism that AI could help recruit more diverse patient populations, automate data collection and analysis, and expand the types of outcome measures used in trials. AI could also reduce the exclusion criteria that limit patient participation, allowing for broader and more generalizable results. However, she acknowledged that implementing AI in trials introduces its own challenges: AI algorithms evolve over time and may behave differently across settings because of differences in local data.

The conversation also covered the challenges of maintaining clinical-grade evidence for AI tools. Dr. Haug noted that the research community is still determining how to evaluate AI in a way that meets the rigorous standards of medicine, which traditionally relies on static interventions such as drugs. AI, by contrast, adapts based on the data it encounters, which complicates the process of proving its effectiveness. To address this, NEJM AI has adopted stricter criteria for publication and is working closely with statisticians to ensure that published studies meet the highest standards of evidence.

Ethical concerns, especially related to data privacy, were another key theme of the seminar. Dr. Haug, along with other panelists, discussed the importance of transparency in how patient data is collected and used. She emphasized that patients must be involved early in the research process and made aware of how their data will be used, not only during trials but in general. There is a growing risk that if patients feel their privacy is not adequately protected, they may refuse to participate in data-sharing initiatives, which would hamper the advancement of AI in healthcare. Dr. Haug warned that the loss of trust in how patient data is handled could significantly slow the progress of AI research.

Dr. Jonathan Weiner, the discussant, echoed many of Dr. Haug’s points and expanded the conversation by stressing the importance of collaboration among healthcare institutions, such as Hopkins, to tackle the hurdles AI presents. Dr. Weiner emphasized that AI must be embedded into the healthcare system and public health, not treated as a standalone tool. He noted that AI’s potential goes beyond medical diagnostics to address broader issues related to population health, community well-being, and social factors that impact healthcare outcomes. He also pointed out that AI cannot be effective without good science, context, and proper implementation.

During the Q&A, several participants raised thought-provoking questions. Dr. Peter Greene raised the issue of sponsored content in NEJM AI, questioning how it is presented and whether it undergoes any form of editorial review. Dr. Haug clarified that sponsored content is not peer-reviewed and does not go through the same editorial process as other published material. She acknowledged the concerns, explaining that while such content allows the journal to remain selective in the manuscripts it accepts, the editorial team remains open to suggestions on how to clearly differentiate this material to avoid confusion among readers.

Dr. Harold Lehmann, another participant, shifted the focus to how AI can be used to improve the process of scientific publishing itself. He asked Dr. Haug whether NEJM AI was considering ways to use AI tools to enhance the quality of journal articles and streamline the editorial process. Dr. Haug expressed interest in the idea, stating that while NEJM AI is currently exploring the use of AI in limited areas, such as podcast summaries, there is a broader opportunity for AI to revolutionize publishing practices. She emphasized, however, that AI-generated content would still need oversight to ensure accuracy and maintain high standards of quality.

An important topic that emerged from the audience questions was the need to report negative results in AI research. Rajanikanth, a participant, asked about how the journal handles negative findings, particularly when AI tools fail to perform as expected in different settings. Dr. Haug acknowledged that while it can be difficult for authors to publish studies that show their tools did not work as intended, such results are crucial for advancing the field. She pointed out that understanding why an AI tool works in one environment but not in another is key to refining the technology and improving future applications. The journal, she said, is committed to publishing these important findings and fostering a more transparent dialogue around the limitations of AI.

As the seminar came to a close, Dr. Dai thanked the speakers and participants for their insights and contributions. He reminded the audience of the next seminar in the series, featuring Dr. Mark Dredze from Johns Hopkins University’s Whiting School of Engineering, which will take place in November 2024.