PV24 Speakers

Subject to change.


Fatemeh Zabihollahy, PhD

Postdoc Research Scholar, University of Toronto


Dr. Zabihollahy is an AI research scientist at the University of Toronto. Her research focuses on the development of multi-modal AI for cancer diagnosis and prognosis. Fatemeh was awarded the Carleton University medal for outstanding graduate work at the doctoral level. She was a postdoctoral researcher at Johns Hopkins University, where she studied AI models for MR-guided cancer radiation therapy. She then moved to UCLA as a project scientist to study multimodal image registration.

SESSIONS

Intra-patient Co-registration of Prostate Whole-Mount Histopathology to Magnetic Resonance Imaging
   Tue, Nov 5
   01:45PM - 02:05PM ET

Background
MRI is a useful non-invasive tool for evaluating the extent of prostate cancer (PCa). However, inter-observer agreement in the qualitative assessment of MRI is only modest for detecting and grading PCa lesions. Machine learning-based algorithms can improve PCa diagnosis; accurate labeling of PCa lesions on MRI, however, is a prerequisite for training supervised learning-based models. Whole-mount histopathology (WMHP) provides an essential reference standard for the detection and grading of PCa tumors. Once a tumor is labeled on the WMHP image, the image is cognitively aligned to T2-weighted (T2W) MRI to map the PCa boundary from WMHP to MRI for further research studies.

Objectives
Our objective was to develop a novel artificial intelligence (AI)-based approach to register WMHP to pre-surgical T2W MRI for further research studies. High-resolution ex vivo MRI was used to discover and assess the structural relationship between in vivo MRI and WMHP.

Methods
This study was performed in compliance with the United States Health Insurance Portability and Accountability Act of 1996, and the local institutional review board waived the requirement for informed consent. The study cohort included prostate multi-parametric (mp)MRI of 315 patients acquired before robotic-assisted laparoscopic radical prostatectomy performed between 2016 and 2020 on one of four 3T MRI scanners (MAGNETOM Trio, Verio, Skyra, or Prisma; Siemens, Erlangen, Germany). A genitourinary radiologist used mpMRI to delineate 3D contours of the prostate and PCa. The images were divided into 270 for training and 45 for testing.

Because there are fundamental differences in appearance between the image pairs (i.e., in vivo MRI and WMHP), high-resolution ex vivo MRI was used as an interface to facilitate spatial alignment. Each WMHP slide was compared to the 2D slices of the 3D ex vivo MRI to find its corresponding MR image.
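The slide-to-slice matching step described above could be sketched as an exhaustive similarity search over the ex vivo volume. This is an illustrative sketch only: the abstract does not state which similarity measure was used, so normalized cross-correlation is an assumption here, and the function names are hypothetical.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized 2D images
    (an assumed similarity measure, not necessarily the authors')."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def best_matching_slice(wmhp_slide, exvivo_volume):
    """Return the index of the ex vivo MRI slice most similar to the slide.

    wmhp_slide:    (H, W) image, already resampled to the MRI grid
    exvivo_volume: (K, H, W) stack of 2D ex vivo MRI slices
    """
    scores = [ncc(wmhp_slide, exvivo_volume[k])
              for k in range(exvivo_volume.shape[0])]
    return int(np.argmax(scores))
```

In practice the slide and volume would first need to share a common spatial grid, which is what the resizing and resampling step described next provides.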
In vivo and ex vivo MRIs were then resized and re-sampled to the same size and resolution. A 2D slice of the in vivo MRI with a spatial position similar to the ex vivo slice was selected to be registered to the WMHP slide.

Conventional registration methods such as VoxelMorph, which focus solely on image intensity when learning the deformation field, generate a noisy transformation domain when the image pairs have inherently dissimilar intensities. The proposed registration method instead uses the anatomy of the prostate whole gland (Pwg) as a supervision signal for deformable-field learning: an embedding layer added to the VoxelMorph architecture integrates the Pwg anatomy into the network.

To acquire Pwg shape information across imaging modalities, a target localization head was added to the pipeline to automatically detect and segment the prostate on WMHP and MRI. Regions with irrelevant information were then suppressed to accentuate the target when learning the transformation field required for registration. A simple encoder-decoder architecture was trained on a limited number of images for automated prostate contouring on WMHP slides. For prostate delineation from in vivo MRI, a 3D U-Net-based encoder-decoder architecture was employed to effectively exploit inter-slice information in the 3D image; a dense block embedded in the 3D U-Net mitigates the learning of redundant features and strengthens feature propagation inside the network.

The registration method comprised affine and deformable transformations. For the affine transformation, scaling and translation were calculated and applied to the WMHP slide to align the image pair globally. To learn the deformable field, optical flow, a related registration problem for 2D images, was estimated, yielding a dense displacement vector field between WMHP and MRI.
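Applying a dense displacement field to warp the moving image, as in the deformable step above, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the (row, col) displacement convention, linear interpolation, and boundary handling are all assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(moving, displacement):
    """Warp a 2D moving image (e.g., the affine-aligned WMHP slide)
    with a dense per-pixel displacement field.

    moving:       (H, W) image
    displacement: (2, H, W) field in (row, col) order, sampled at
                  fixed-image coordinates (an assumed convention)
    """
    h, w = moving.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # For each output pixel, sample the moving image at the displaced
    # location; bilinear interpolation, edge values extended at borders.
    coords = np.stack([rows + displacement[0], cols + displacement[1]])
    return map_coordinates(moving, coords, order=1, mode="nearest")
```

In a VoxelMorph-style model the displacement field is the network's output and the warp is implemented as a differentiable spatial-transformer layer; the sampling logic is the same as above.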
Once the displacement field was learned during the training phase, it was applied to each pixel of the WMHP slide to warp it and register it to the MRI.

The Dice similarity coefficient (DSC) was used to estimate prostate volume overlap between the registered MRI and WMHP, and the Hausdorff distance (HD) was computed between the contours of the registered image pair. All metrics were measured per patient, and the average across all patients in the dataset was reported.

Results
The registration module achieved a DSC of 0.95 ± 0.06 and an HD of 3.77 ± 0.77 pixels on the test dataset. The proposed registration method was compared to a state-of-the-art registration model, VoxelMorph, which yielded a DSC of 0.84 ± 0.05 and an HD of 5.93 ± 0.78 on the same test cohort. A Wilcoxon rank-sum test with α = 0.05 was conducted to assess the significance of the differences in DSC and HD between the registration methods. The p-values were < 0.0001 for both metrics, demonstrating the effectiveness of the proposed algorithm for intra-patient multi-modal image registration.

Conclusions
We developed a novel multi-modality registration method between pre-surgical prostate MRI and histopathology images that allows accurate mapping of PCa from WMHP to MRI. A target localization module added to the registration architecture not only eliminated the need for manual segmentation of the prostate on both MRI and histopathology images but also significantly improved registration accuracy compared to VoxelMorph.
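The two evaluation metrics can be computed from binary prostate masks roughly as follows. This sketch is illustrative: the abstract does not specify the exact contour extraction, so the Hausdorff distance is approximated here over all foreground pixel coordinates rather than an explicit boundary.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def hausdorff(mask_a, mask_b):
    """Symmetric Hausdorff distance in pixels, approximated over
    foreground coordinates (contour extraction is an assumption)."""
    pts_a = np.argwhere(mask_a)
    pts_b = np.argwhere(mask_b)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])
```

The per-patient DSC and HD lists from the two methods could then be compared with `scipy.stats.ranksums`, which implements the Wilcoxon rank-sum test reported above.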
