PV25 Schedule of Events

Towards Transparent AI in Computational Pathology: Concept Learning for Clinical AI

   Tue, Oct 7
   02:40PM - 03:00PM ET

The integration of artificial intelligence (AI) into pathology represents a transformative opportunity for precision medicine, with the potential to radically improve cancer diagnosis, treatment planning, and patient outcomes. Yet, despite their high predictive performance, deep neural networks (DNNs) remain black boxes. Without transparency, AI models risk contributing to harmful care decisions, jeopardising patient safety and eroding clinician trust. As new regulations such as the EU AI Act mandate transparency and human oversight, there is an urgent need for interpretable AI systems in pathology. We address this critical gap by proposing a transparent framework based on the Concept Bottleneck Model (CBM) architecture that delivers clinically relevant and interpretable explanations for model predictions. To the best of our knowledge, this is the first application of the CBM paradigm to histopathology images. At the core of the framework is A-CEM, an attention-based concept embedding model, which extends the CBM architecture to address a key limitation, concept leakage, where CBMs exploit additional, unintended information encoded in relationships between concepts or with the target label, optimising accuracy at the cost of interpretability. A-CEM replaces the binary concept structure of traditional CBMs with spanning sets of mutually exclusive, multiclass concepts, allowing for more clinically accurate representations. This design reduces inter-concept leakage and enhances model transparency for downstream clinical tasks. We train A-CEM using a multiple instance learning (MIL) framework on 4,568 whole-slide histopathology images and survival data from The Cancer Genome Atlas (TCGA), jointly predicting intermediate clinical concepts such as tumour stage, primary site, and patient age, followed by survival outcomes. Our model achieves a survival prediction F1 score of 74%, fuzzy F1 scores (allowing +/-1 class misclassification) of 74% for tumour stage and 66% for age, and an F1 score of 93% for primary site. This interpretability is achieved with only a modest reduction in performance compared to a black-box survival model (80% F1). By enforcing concept exclusivity and learning concept-specific self-attention, A-CEM reduces inter-concept leakage by 20% and provides transparent, verifiable predictions. Without interpretable models, AI risks remaining a black-box tool that clinicians cannot trust and regulators cannot approve, limiting its potential to transform patient care. Our A-CEM framework offers a practical, scalable solution: a model that explains predictions through clinically meaningful concepts. By reducing concept leakage and enabling visual interpretability, A-CEM paves the way for safer, more transparent AI deployment in clinical environments and represents a crucial step toward clinician-in-the-loop decision-making, where AI systems act as accountable, collaborative partners.
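To make the architecture described above concrete, the sketch below illustrates the general pattern: patch features from a whole-slide image are attention-pooled per concept group, each group is predicted as a mutually exclusive multiclass head, and survival is predicted only from those concept outputs. This is a minimal illustration under assumed names and dimensions (AttentionConceptMIL, 1024-dimensional patch features, example concept group sizes), not the authors' released implementation.

    # Minimal, illustrative sketch (not the authors' implementation): an attention-based
    # concept-bottleneck model for multiple instance learning over whole-slide patch
    # features. Class names, dimensions, and concept group sizes are assumptions.
    import torch
    import torch.nn as nn

    class AttentionConceptMIL(nn.Module):
        def __init__(self, feat_dim=1024, embed_dim=256,
                     concept_sizes=(4, 20, 5), n_survival_bins=4):
            # concept_sizes: assumed groups, e.g. tumour stage (4 classes),
            # primary site (20 classes), age bin (5 classes); each group is a
            # mutually exclusive multiclass concept.
            super().__init__()
            self.proj = nn.Linear(feat_dim, embed_dim)
            # one attention-pooling branch per concept group (concept-specific attention)
            self.attn = nn.ModuleList(
                nn.Sequential(nn.Linear(embed_dim, 128), nn.Tanh(), nn.Linear(128, 1))
                for _ in concept_sizes)
            self.concept_heads = nn.ModuleList(
                nn.Linear(embed_dim, k) for k in concept_sizes)
            # the survival head sees only the concept outputs: this is the bottleneck
            self.survival_head = nn.Linear(sum(concept_sizes), n_survival_bins)

        def forward(self, bag):                          # bag: (n_patches, feat_dim)
            h = self.proj(bag)                           # (n_patches, embed_dim)
            group_probs = []
            for attn, head in zip(self.attn, self.concept_heads):
                w = torch.softmax(attn(h), dim=0)        # patch attention weights
                z = (w * h).sum(dim=0)                   # attention-pooled slide embedding
                group_probs.append(torch.softmax(head(z), dim=-1))  # exclusive classes
            concepts = torch.cat(group_probs)            # concatenated concept probabilities
            survival_logits = self.survival_head(concepts)
            return concepts, survival_logits

    model = AttentionConceptMIL()
    bag = torch.randn(500, 1024)                         # 500 patch features from one slide
    concepts, survival_logits = model(bag)

Because the survival head receives only the concatenated concept probabilities, every prediction can be traced back to, and audited through, the intermediate clinical concepts.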

 

Learning Objectives:

  1. Understand the limitations of current black-box AI models in pathology, including their lack of transparency, vulnerability to hidden biases, and inability to provide clinically meaningful explanations. Recognise how these limitations can undermine clinician trust, delay regulatory approval, and ultimately risk inappropriate or unsafe patient care. Appreciate the critical role of interpretable AI in supporting clinical decision-making, ensuring accountability, and enabling safe and ethical integration into real-world clinical environments.
  2. Explain the principles of concept learning and the concept bottleneck model (CBM) architecture, and compare this approach to other explainability techniques such as Grad-CAM, counterfactuals, and attention-based methods in terms of transparency, clinical relevance, and model accountability.
  3. Describe the A-CEM framework and its enhancements over standard CBMs, including the use of multiclass concepts and attention mechanisms, and evaluate its performance and interpretability in pan-cancer survival prediction using whole-slide histopathology images. Understand how A-CEM supports human-in-the-loop interaction through intervention policies, allowing clinicians to review and modify concept-level predictions (illustrated in the sketch following these objectives), thereby improving model accountability, safety, and trust in clinical decision-making.
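The intervention policies mentioned in objective 3 can be illustrated with a small, hypothetical helper built on the AttentionConceptMIL sketch above: a clinician replaces one predicted concept group with a verified one-hot value, and the survival head is re-run on the corrected concept vector. The group boundaries below are assumptions matching the example concept sizes.

    # Hypothetical intervention helper; depends on model and concepts from the sketch above.
    import torch

    def intervene(model, concepts, group_slices, group_idx, true_class):
        # Replace one mutually exclusive concept group with a clinician-verified
        # one-hot value, then re-predict survival from the corrected concepts.
        corrected = concepts.detach().clone()
        start, end = group_slices[group_idx]
        corrected[start:end] = 0.0
        corrected[start + true_class] = 1.0
        with torch.no_grad():
            return model.survival_head(corrected), corrected

    # Assumed group boundaries matching concept_sizes=(4, 20, 5):
    # tumour stage -> [0, 4), primary site -> [4, 24), age bin -> [24, 29)
    group_slices = [(0, 4), (4, 24), (24, 29)]
    survival_after, corrected = intervene(model, concepts, group_slices,
                                          group_idx=0, true_class=2)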

2025 Pathology Visions
