PV24 Speakers

Subject to change.


Chris Garcia, MD, MS

Medical Director, Mayo Clinic


Dr. Chris Garcia is the Medical Director of the Division of Computational Pathology and Artificial Intelligence at the Mayo Clinic. He is a pathologist (AP/CP) with subspecialty training in pathology informatics and has more than 10 years of experience across industry, reference laboratories, and academic medicine. He currently focuses on developing, evaluating, and integrating AI/ML solutions into clinical operations and practice in Laboratory Medicine and Pathology.

SESSIONS

Moving beyond data bias: Integrating AI ethics in computational pathology for improved patient care
   Mon, Nov 4
   12:45 PM - 1:05 PM ET

Authors: Christopher Garcia, MD¹; Mark Zarella, PhD¹; Chhavi Chauhan, PhD²

Affiliations:
  1. Mayo Clinic, Rochester, MN
  2. Israeli Association for Ethics in Artificial Intelligence

Abstract: As artificial intelligence (AI) advances and integrates into digital and computational pathology, it is crucial to address the potential risks associated with equity and bias in these systems. Bias in AI can arise from various sources, including unrepresentative training data, flawed algorithm designs, and human biases embedded in the development process. These biases can lead to disparities in diagnostic accuracy, treatment recommendations, and ultimately, patient care. To ensure that AI models perform effectively in real-world scenarios, it is essential to design studies that accurately reflect the intended clinical use case, the characteristics of the target patient population, and the data processing methods used in clinical practice. Failure to align study design with these real-world factors can result in models that do not generalize well to their intended applications, leading to suboptimal performance and potential risks to patient care.

This roundtable discussion aims to raise awareness of equity and bias issues in AI for digital and computational pathology, demonstrate their relevance in current practice, and share rapidly evolving best practices for mitigating risks while promoting transparency. We will explore real-world examples of how equity and bias manifest in digital and computational pathology, examining the implications of these issues on patient care, research, and the overall trustworthiness of AI systems. By delving into case studies and current research, we aim to provide attendees with a comprehensive understanding of the challenges at hand.

Furthermore, we will discuss the rapidly evolving best practices for developing and deploying AI solutions responsibly, including strategies for ensuring diverse and representative training data, implementing robust validation processes, and promoting transparency in AI development. We will also highlight the importance of multidisciplinary collaboration, involving pathologists, computer scientists, ethicists, and other stakeholders in the development and evaluation of AI tools.

Attendees will gain valuable insights into the current landscape of equity and bias in AI for digital and computational pathology, learning practical approaches for mitigating risks, promoting transparency, and fostering trust in AI-assisted pathology workflows. By the end of the roundtable, attendees will be equipped with the knowledge and tools necessary to advocate for responsible AI practices in their own institutions and research endeavors.

As the field of digital and computational pathology continues to evolve, it is imperative that we proactively address the challenges of equity and bias in AI. This roundtable discussion serves as a call to action, encouraging the pathology community to engage in ongoing dialogue, collaboration, and education to ensure the responsible development and use of AI solutions. Together, we can harness the potential of AI to transform pathology while upholding the highest standards of equity, fairness, and patient care.

 

Learning Objectives

  1. Understand issues surrounding equity and bias in AI for digital and computational pathology.
  2. Learn practical approaches for mitigating risks, promoting transparency, and fostering trust in AI-assisted pathology workflows.
  3. Advocate for responsible AI practices in their own institutions and research endeavors.