PV25 Schedule of Events
The integration of artificial intelligence (AI) into pathology represents a transformative opportunity for precision medicine, with the potential to radically improve cancer diagnosis, treatment planning, and patient outcomes. Yet, despite their high predictive performance, deep neural networks (DNNs) remain black boxes. Without transparency, AI models risk delivering harmful care, jeopardising patient safety and eroding clinician trust. As new regulations such as the EU AI Act mandate transparency and human oversight, there is an urgent need for interpretable AI systems in pathology.

We address this critical gap by proposing a transparent framework based on the concept bottleneck model (CBM) architecture that delivers clinically relevant, interpretable explanations for model predictions. To the best of our knowledge, this is the first application of the CBM paradigm to histopathology images. At the core of the framework is A-CEM, an attention-based concept embedding model that extends the CBM architecture to address a key limitation: concept leakage, in which CBMs exploit unintended information encoded in relationships between concepts, or between concepts and the target label, to optimise accuracy at the cost of interpretability. A-CEM replaces the binary concept structure of traditional CBMs with mutually exclusive, multiclass concept spanning sets, allowing more clinically accurate representations. This design reduces inter-concept leakage and enhances model transparency for downstream clinical tasks. We train A-CEM in a multiple instance learning (MIL) framework on 4,568 whole-slide histopathology images and survival data from The Cancer Genome Atlas (TCGA), jointly predicting intermediate clinical concepts, such as tumour stage, primary site, and patient age, followed by survival outcomes.

Our model achieves a survival prediction F1 score of 74%, fuzzy F1 scores (allowing +/-1 class misclassification) of 74% for tumour stage and 66% for age, and an F1 score of 93% for primary site. This interpretability comes at only a modest cost in performance relative to a black-box survival model (80% F1). By enforcing concept exclusivity and learning concept-specific self-attention, A-CEM reduces inter-concept leakage by 20% and provides transparent, verifiable predictions.

Without interpretable models, AI risks remaining a black-box tool that clinicians cannot trust and regulators cannot approve, limiting its potential to transform patient care. Our A-CEM framework offers a practical, scalable solution: a model that explains its predictions through clinically meaningful concepts. By reducing concept leakage and enabling visual interpretability, A-CEM paves the way for safer, more transparent AI deployment in clinical environments and represents a crucial step toward clinician-in-the-loop decision-making, where AI systems act as accountable, collaborative partners.
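To make the architecture described above concrete, the following is a minimal sketch of an attention-based, concept-bottleneck MIL model: one attention pooling branch per mutually exclusive multiclass concept group, with a survival head that sees only the concept predictions. It assumes pre-extracted patch embeddings per slide; all layer names, dimensions, and concept class counts are illustrative assumptions, not the authors' implementation.

```python
# Sketch of an attention-based concept-bottleneck MIL model (assumed design,
# not the authors' code). Input: pre-extracted patch embeddings for one slide.
import torch
import torch.nn as nn


class GatedAttentionPool(nn.Module):
    """Gated attention pooling over a bag of patch embeddings (MIL-style)."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.V = nn.Linear(dim, hidden)
        self.U = nn.Linear(dim, hidden)
        self.w = nn.Linear(hidden, 1)

    def forward(self, h):                       # h: (n_patches, dim)
        a = self.w(torch.tanh(self.V(h)) * torch.sigmoid(self.U(h)))
        a = torch.softmax(a, dim=0)             # attention weight per patch
        return (a * h).sum(dim=0), a            # slide embedding, attention map


class ACEMSketch(nn.Module):
    """Concept bottleneck with concept-specific attention and a survival head
    that consumes only the concept probabilities (the bottleneck)."""
    def __init__(self, dim=1024, concept_sizes=None):
        super().__init__()
        # Hypothetical multiclass concept groups; class counts are placeholders.
        self.concept_sizes = concept_sizes or {"stage": 4, "site": 10, "age_bin": 5}
        self.pools = nn.ModuleDict({c: GatedAttentionPool(dim) for c in self.concept_sizes})
        self.concept_heads = nn.ModuleDict(
            {c: nn.Linear(dim, k) for c, k in self.concept_sizes.items()}
        )
        self.survival_head = nn.Linear(sum(self.concept_sizes.values()), 2)

    def forward(self, patch_embeddings):        # (n_patches, dim)
        concept_logits, attentions, probs = {}, {}, []
        for c in self.concept_sizes:
            slide_vec, attn = self.pools[c](patch_embeddings)
            logits = self.concept_heads[c](slide_vec)
            concept_logits[c] = logits
            attentions[c] = attn
            probs.append(torch.softmax(logits, dim=-1))  # mutually exclusive classes
        survival_logits = self.survival_head(torch.cat(probs, dim=-1))
        return concept_logits, survival_logits, attentions


# Example: one slide represented as 500 patch embeddings of dimension 1024.
model = ACEMSketch()
concepts, survival, attn = model(torch.randn(500, 1024))
```

Under this kind of joint setup, training would typically combine a cross-entropy loss per concept group with a loss on the survival output, and the per-concept attention maps provide the visual, concept-level explanations referred to above.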
Learning Objectives: