PV24 Speakers

Subject to change.

Chhavi Chauhan, PhD

Director of Scientific Outreach, American Society for Investigative Pathology


Dr. Chhavi Chauhan is Director of Scientific Outreach at the American Society for Investigative Pathology and currently serves on the boards of eight mission-driven organizations in the spheres of scholarly publishing, digital pathology, AI ethics, and youth education. She is a former biomedical researcher, an expert scholarly communicator, and a sought-after mentor in the fields of scientific research, scholarly publishing, and AI ethics, and has received several awards and accolades in these domains.

SESSIONS

Moving beyond data bias: Integrating AI ethics in computational pathology for improved patient care
   Mon, Nov 4
   12:45PM - 01:05PM ET
  Regency Q

Authors: Christopher Garcia, MD¹, Mark Zarella, PhD¹, Chhavi Chauhan, PhD²

Affiliations:
  1. Mayo Clinic, Rochester, MN
  2. Israeli Association for Ethics in Artificial Intelligence

Abstract: As artificial intelligence (AI) advances and integrates into digital and computational pathology, it is crucial to address the potential risks associated with equity and bias in these systems. Bias in AI can arise from various sources, including unrepresentative training data, flawed algorithm designs, and human biases embedded in the development process. These biases can lead to disparities in diagnostic accuracy, treatment recommendations, and ultimately, patient care. To ensure that AI models perform effectively in real-world scenarios, it is essential to design studies that accurately reflect the intended clinical use case, the characteristics of the target patient population, and the data processing methods used in clinical practice. Failure to align study design with these real-world factors can result in models that do not generalize well to their intended applications, leading to suboptimal performance and potential risks to patient care.

This roundtable discussion aims to raise awareness of equity and bias issues in AI for digital and computational pathology, demonstrate their relevance in current practice, and share rapidly evolving best practices for mitigating risks while promoting transparency. We will explore real-world examples of how equity and bias manifest in digital and computational pathology, examining the implications of these issues for patient care, research, and the overall trustworthiness of AI systems. By delving into case studies and current research, we aim to provide attendees with a comprehensive understanding of the challenges at hand.

Furthermore, we will discuss the rapidly evolving best practices for developing and deploying AI solutions responsibly, including strategies for ensuring diverse and representative training data, implementing robust validation processes, and promoting transparency in AI development. We will also highlight the importance of multidisciplinary collaboration, involving pathologists, computer scientists, ethicists, and other stakeholders in the development and evaluation of AI tools.

Attendees will gain valuable insights into the current landscape of equity and bias in AI for digital and computational pathology, learning practical approaches for mitigating risks, promoting transparency, and fostering trust in AI-assisted pathology workflows. By the end of the roundtable, attendees will be equipped with the knowledge and tools necessary to advocate for responsible AI practices in their own institutions and research endeavors.

As the field of digital and computational pathology continues to evolve, it is imperative that we proactively address the challenges of equity and bias in AI. This roundtable discussion serves as a call to action, encouraging the pathology community to engage in ongoing dialogue, collaboration, and education to ensure the responsible development and use of AI solutions. Together, we can harness the potential of AI to transform pathology while upholding the highest standards of equity, fairness, and patient care.
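
A minimal, hypothetical sketch (in Python) of one practice the abstract mentions, implementing robust validation processes: auditing a model's accuracy across patient subgroups on held-out validation data and flagging groups whose performance lags behind the best-performing group, one common way bias surfaces in practice. This sketch is not part of the session materials; the function names, the tolerance threshold, and the example records are illustrative assumptions only.

# Hypothetical subgroup validation audit; illustrative only, not the authors' method.
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of (subgroup, true_label, predicted_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, y_true, y_pred in records:
        total[group] += 1
        correct[group] += int(y_true == y_pred)
    return {group: correct[group] / total[group] for group in total}

def flag_disparities(per_group, tolerance=0.05):
    """Return subgroups whose accuracy trails the best group by more than `tolerance`."""
    best = max(per_group.values())
    return {group: acc for group, acc in per_group.items() if best - acc > tolerance}

if __name__ == "__main__":
    # Hypothetical held-out validation results: (subgroup, ground truth, model prediction).
    validation = [
        ("site_A", 1, 1), ("site_A", 0, 0), ("site_A", 1, 1), ("site_A", 0, 1),
        ("site_B", 1, 0), ("site_B", 0, 0), ("site_B", 1, 0), ("site_B", 0, 0),
    ]
    per_group = subgroup_accuracy(validation)
    print("Accuracy by subgroup:", per_group)
    print("Subgroups flagged for review:", flag_disparities(per_group))

In practice the subgroup key might be acquisition site, scanner model, or patient demographic, and per-group sensitivity and specificity would typically be reported alongside overall accuracy before deployment.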

 

Learning Objectives

  1. Understand issues surrounding equity and bias in AI for digital and computational pathology.
  2. Learn practical approaches for mitigating risks, promoting transparency, and fostering trust in AI-assisted pathology workflows.
  3. Advocate for responsible AI practices in their own institutions and research endeavors.