PV24 Schedule of Events

Adversary-Robust Graph-Based Learning of WSIs

   Mon, Nov 4
   02:00PM - 02:20PM ET
  Regency Q

Enhancing the robustness of deep learning models against adversarial attacks is crucial, especially in critical domains like healthcare, where significant financial interests heighten the risk of such attacks. Whole slide images (WSIs) are high-resolution, digitized versions of tissue samples mounted on glass slides, scanned using sophisticated imaging equipment. The digital analysis of WSIs presents unique challenges due to their gigapixel size and multi-resolution storage format. In this work, we aim to improve the robustness of Gleason grading classification systems against adversarial attacks, addressing challenges at both the image and graph levels. We develop a novel graph-based model that uses a graph neural network (GNN) to extract features from the graph representation of WSIs. A denoising module, along with a pooling layer, is incorporated to manage the impact of adversarial attacks on the WSIs. The process concludes with a transformer module that classifies the grades of prostate cancer from the processed data. To assess the effectiveness of the proposed method, we conducted a comparative analysis using two scenarios: we first trained and tested the model, without a denoiser, on WSIs that had not been exposed to any attack; we then introduced a range of attacks at either the image or graph level and processed them through the proposed network. Performance was evaluated in terms of accuracy and kappa scores. The results show a significant improvement in cancer diagnosis accuracy, highlighting the robustness and efficiency of the proposed method in handling adversarial challenges in medical imaging.

Impact Statement: The significance of adversarial attacks, and the need to enhance the robustness of deep learning (DL) models against them, are crucial factors in establishing trust in the deployment of these technologies, particularly in sensitive domains such as healthcare. Given the significant financial incentives in the healthcare domain, adversarial attacks are an inevitable challenge that must be addressed to ensure safe and reliable use of these technologies. This research explores various adversarial attacks in cancer grading scenarios using WSIs. Medical datasets, including WSIs, can be attacked while stored in the cloud or during the training phase; we address both cases in this work. Although significant research has focused on enhancing the robustness of convolutional neural network (CNN) based models, such models cannot handle irregular, non-Euclidean data structures like graphs. GNN-based deep learning models have shown significant improvement in medical imaging, where understanding the relational information between regions or cells within a tissue is often critical for accurate diagnosis. Through a novel method, this work shows that integrating a GNN classifier with a denoiser and a transformer improves the robustness of cancer grading, allowing it to withstand different levels of adversarial attacks more effectively.

Research on cancer detection using WSIs has mainly focused on CNNs for patch-based image analysis. In this paper, we present a novel GNN-based architecture for digital pathology that stands out for its unique approach to enhancing robustness against adversarial attacks in WSIs.
This work is distinguished by two main factors: first, it is the pioneering effort specifically designed to protect models from adversarial threats targeting both image and graph representations of WSIs; second, it moves beyond traditional patch-level analysis, addressing the need for context-aware evaluation around tumor areas in WSIs. Prior research has shown graph-based methods to be more effective than patch-based approaches. The proposed model shows superior robustness to various levels of adversarial perturbations in WSIs. By generating and testing against adversarial attacks, we benchmark the robustness of the proposed architecture against leading GNN models. The findings confirm the method's capability to mitigate adversarial challenges in clinical applications. The core contributions of this study include:

  1. Evaluating the susceptibility of cutting-edge GNN models to adversarial threats in WSIs.
  2. Introducing a novel graph-based architecture designed to enhance robustness against adversarial threats in WSIs, particularly focusing on cloud-stored and graph-converted WSIs.
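
To make the pipeline concrete, the sketch below chains the stages named in the abstract: GNN feature extraction, a denoising module, a pooling layer, and a transformer classifier. It is a minimal illustration assuming PyTorch and PyTorch Geometric; the class name RobustWSIClassifier, the residual-MLP form of the denoiser, the TopKPooling choice, and all dimensions are assumptions for illustration, not the authors' exact implementation.

    import torch
    import torch.nn as nn
    from torch_geometric.nn import GCNConv, TopKPooling, global_mean_pool

    class RobustWSIClassifier(nn.Module):
        # GNN feature extractor -> denoising module -> pooling -> transformer head.
        def __init__(self, in_dim=1024, hid_dim=256, n_heads=4, n_classes=5):
            super().__init__()
            # GNN stage: extract features from the graph representation of a WSI.
            self.gnn1 = GCNConv(in_dim, hid_dim)
            self.gnn2 = GCNConv(hid_dim, hid_dim)
            # Denoising module (hypothetical form): a residual MLP intended to
            # suppress adversarial perturbations in the node embeddings.
            self.denoiser = nn.Sequential(
                nn.Linear(hid_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, hid_dim))
            # Pooling layer: keep the most informative nodes, limiting the
            # influence of perturbed ones.
            self.pool = TopKPooling(hid_dim, ratio=0.5)
            # Transformer module that classifies Gleason grades from node tokens.
            layer = nn.TransformerEncoderLayer(d_model=hid_dim, nhead=n_heads,
                                               batch_first=True)
            self.transformer = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(hid_dim, n_classes)

        def forward(self, x, edge_index, batch):
            # Assumes one WSI graph per call: x is [N, in_dim], edge_index is
            # [2, E], batch is a zero vector of length N.
            x = torch.relu(self.gnn1(x, edge_index))
            x = torch.relu(self.gnn2(x, edge_index))
            x = x + self.denoiser(x)                         # residual denoising
            x, edge_index, _, batch, _, _ = self.pool(x, edge_index, batch=batch)
            x = self.transformer(x.unsqueeze(0)).squeeze(0)  # nodes as tokens
            x = global_mean_pool(x, batch)                   # graph-level embedding
            return self.head(x)                              # grade logits

For a single WSI graph with node features x of shape [N, in_dim] and edges edge_index of shape [2, E], calling model(x, edge_index, torch.zeros(x.size(0), dtype=torch.long)) returns one score per Gleason grade.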

 

Learning Objectives

  1. Become familiar with different types of adversarial attacks on WSIs (a representative graph-level attack is sketched after this list).
  2. Understand the importance of GNNs for learning from WSIs.
  3. Be introduced to a novel robust model that pairs a denoiser with a transformer to defend against adversarial attacks.
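
As a concrete instance of the first objective, the sketch below shows one representative graph-level attack: an FGSM-style perturbation of node features. The function name, the eps values, and the use of FGSM here are illustrative assumptions; the abstract describes a broader range of image- and graph-level attacks.

    import torch
    import torch.nn.functional as F

    def fgsm_node_features(model, x, edge_index, batch, y, eps=0.01):
        # One signed-gradient step on the node features to increase the loss;
        # larger eps models a stronger attacker.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv, edge_index, batch), y)
        loss.backward()
        return (x_adv + eps * x_adv.grad.sign()).detach()

    # Robustness evaluation sketch: compare predictions on clean vs. attacked
    # graphs, then score with accuracy and kappa as in the abstract.
    # x_adv = fgsm_node_features(model, x, edge_index, batch, y, eps=0.05)
    # clean_pred = model(x, edge_index, batch).argmax(dim=-1)
    # adv_pred = model(x_adv, edge_index, batch).argmax(dim=-1)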