AI Model Effective in Detecting Prostate Cancer

Deep learning combined with radiologists' findings performed better than radiologists alone


Naoki Takahashi, MD

A deep learning model performs at the level of an abdominal radiologist in the detection of clinically significant prostate cancer on MRI, according to a study published in Radiology. The researchers hope the model can be used as an adjunct to radiologists to improve prostate cancer detection. 

Prostate cancer is the second most common cancer in men worldwide. Radiologists typically use multiparametric MRI to diagnose clinically significant prostate cancer. Results are expressed through the Prostate Imaging-Reporting and Data System version 2.1 (PI-RADS). However, lesion classification using PI-RADS has limitations.

“The interpretation of prostate MRI is difficult,” said study senior author Naoki Takahashi, MD, from the Department of Radiology at the Mayo Clinic in Rochester, MN. “More experienced radiologists tend to have higher diagnostic performance.”

Applying AI algorithms to prostate MRI has shown promise for improving cancer detection and reducing observer variability. However, a major drawback of existing AI approaches is that the lesion needs to be annotated by a radiologist or pathologist at the time of initial model development and again during model re-evaluation and retraining after clinical implementation.

“Radiologists annotate suspicious lesions at the time of interpretation, but these annotations are not routinely available, so when researchers develop a deep learning model, they have to redraw the outlines,” Dr. Takahashi said. “Additionally, researchers have to correlate imaging findings with the pathology report when preparing the dataset. If multiple lesions are present, it may not always be feasible to correlate lesions on MRI to their corresponding pathology results. Also, this is a time-consuming process.”

Listen as the study’s lead author Jason C. Cai, MD, discusses his research. 

Dr. Takahashi and colleagues developed a new type of deep learning model to predict the presence of clinically significant prostate cancer without requiring information about lesion location. They compared its performance with that of abdominal radiologists in a large group of patients without known clinically significant prostate cancer who underwent MRI at multiple sites of a single academic institution.

The researchers trained a convolutional neural network (CNN) to predict clinically significant prostate cancer from multiparametric MRI.
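The published article does not include the model's architecture or training code, but the approach can be illustrated with a minimal, hypothetical sketch: a small 3D CNN takes co-registered multiparametric MRI channels (for example, T2-weighted, ADC map, and high-b-value DWI) and outputs a single examination-level probability of clinically significant cancer, supervised only by an examination-level label rather than lesion outlines. All class, variable, and parameter names below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (NOT the authors' architecture): examination-level training of a
# 3D CNN on multiparametric MRI channels, with no lesion annotations required.
import torch
import torch.nn as nn

class ProstateMRIClassifier(nn.Module):
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.BatchNorm3d(16), nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32), nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm3d(64), nn.ReLU(inplace=True),
        )
        self.pool = nn.AdaptiveAvgPool3d(1)   # collapse to one feature vector per exam
        self.classifier = nn.Linear(64, 1)    # single logit: csPCa present vs. absent

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.pool(self.features(x)).flatten(1)
        return self.classifier(z)             # raw logit; apply sigmoid for a probability

# One hypothetical training step driven only by examination-level labels.
model = ProstateMRIClassifier(in_channels=3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.BCEWithLogitsLoss()

volumes = torch.randn(2, 3, 32, 128, 128)    # 2 exams: channels x slices x height x width
labels = torch.tensor([[1.0], [0.0]])        # 1 = clinically significant cancer on pathology

logits = model(volumes)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```

Because the label applies to the whole examination, no radiologist or pathologist has to outline individual lesions when assembling the training data, which is the burden Dr. Takahashi describes above.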


Images in a 59-year-old male patient who underwent MRI for clinical suspicion of prostate cancer (internal test set). The patient subsequently underwent prostatectomy and had a 1.5-cm prostate adenocarcinoma (Gleason score 3 + 4) in the right mid anterior to bilateral posterior inferior prostate gland. The model’s output (patient level probability) was 0.83. Only the lesion in the right lobe was highlighted by the gradient-weighted class activation map (Grad-CAM). The radiologist graded this examination as Prostate Imaging Reporting and Data System (PI-RADS) 4 for the right lobe lesion and PI-RADS 3 for the left lobe lesion. (A) T2-weighted image (representative section). (B) Apparent diffusion coefficient map (representative section, left) and high-b-value diffusion-weighted image (representative section, right). (C) T1 dynamic contrast-enhanced images (representative sections). (D) Volumetric composite of T2-weighted images (rows 1 and 2), diffusion-weighted images (rows 3 and 5), and apparent diffusion coefficient maps (row 4), with superimposed Grad-CAMs (rows 2 and 5). All images are in the transverse plane.

https://doi.org/10.1148/radiol.232635 ©RSNA 2024

Improving Cancer Detection Rates with Fewer False Positives

Among 5,735 examinations in 5,215 patients, 1,514 examinations showed clinically significant prostate cancer. On both the internal test set of 400 examinations and an external test set of 204 examinations, the deep learning model's performance in detecting clinically significant prostate cancer did not differ from that of experienced abdominal radiologists.

A combination of the deep learning model and the radiologist’s findings performed better than radiologists alone on both the internal and external test sets.

Because the output from the deep learning model does not include tumor location, the researchers used gradient-weighted class activation maps (Grad-CAM) to localize the tumors. The study showed that for true-positive examinations, Grad-CAM consistently highlighted the clinically significant prostate cancer lesions.
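As a rough illustration of how Grad-CAM recovers location from a classification-only model, the sketch below reuses the hypothetical ProstateMRIClassifier defined earlier (again, an assumption, not the study's code). Gradients of the cancer logit with respect to the last convolutional feature maps weight those maps, yielding a coarse heat map of the regions that drove the prediction.

```python
# Minimal Grad-CAM sketch for the hypothetical ProstateMRIClassifier above.
import torch
import torch.nn.functional as F

activations, gradients = {}, {}

def save_activation(module, inp, out):
    activations["value"] = out.detach()

def save_gradient(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

last_conv = model.features[-3]              # final Conv3d layer in the sketch above
last_conv.register_forward_hook(save_activation)
last_conv.register_full_backward_hook(save_gradient)

exam = torch.randn(1, 3, 32, 128, 128)      # one multiparametric MRI examination
model.zero_grad()
logit = model(exam)
logit.backward()                            # gradient of the cancer score

weights = gradients["value"].mean(dim=(2, 3, 4), keepdim=True)  # channel-wise gradient averages
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=exam.shape[2:], mode="trilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)        # normalize to [0, 1]
# `cam` can now be overlaid on the T2-weighted volume, as in the Grad-CAM overlays
# shown in the study's figure, to see which regions the model attended to.
```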

Dr. Takahashi sees the model as a potential assistant to the radiologist that can help improve diagnostic performance on MRI through increased cancer detection rates with fewer false positives.

“I do not think we can use this model as a standalone diagnostic tool,” Dr. Takahashi said. “Instead, the model’s prediction can be used as an adjunct in our decision-making process.”

The researchers have continued to expand the dataset, which now contains twice as many cases as the original study. The next step is a prospective study examining how radiologists interact with the model's predictions.

“We’d like to present the model’s output to radiologists and assess how they use it for interpretation and compare the combined performance of radiologist and model to the radiologist alone in predicting clinically significant prostate cancer,” Dr. Takahashi said.

For More Information

Read the Radiology study, “Fully Automated Deep Learning Model to Detect Clinically Significant Prostate Cancer at MRI,” and the related editorial, “AI-powered Diagnostics: Transforming Prostate Cancer Diagnosis with MRI.”

Read previous RSNA News stories about prostate imaging: