QIBA Quarterly Masthead 

 
   
 

September 2010 • Volume 2, Number 3 

In this issue: 

IN MY OPINION
Structured Reporting and Quantitative Imaging
By CHARLES E. KAHN, Jr., MD, MS
 

ANALYSIS TOOLS & TECHNIQUES
Assessing CAD Algorithms
By NICHOLAS PETRICK, PhD
 

FOCUS ON
RSNA 2010 Annual Meeting: Quantitative Imaging/Imaging Biomarkers and QIBA Meetings and Activities
 

QI/IMAGING BIOMARKERS IN THE LITERATURE
PubMed Search on Image Archives 

 


IN MY OPINION 

Structured Reporting and Quantitative Imaging
 

By CHARLES E. KAHN, Jr., MD, MS 

A clinical radiology report records the results of an imaging procedure and communicates those results to the referring physician and/or the patient. But what if the report's structure encouraged radiologists to enter more detailed, quantitative information? What if the report's design made it easier to build databases, retrieve reported information, and exchange data consistently among enterprises?

Structured reporting can help radiologists record, retrieve, and reuse the information of imaging procedure reports [1]. Structured reports ideally use meaningful, consistently ordered sections to organize their contents. Standardized language, such as terms from RSNA's RadLex® radiology lexicon, facilitates retrieval of report content by human readers and information systems.

The RSNA's Radiology Informatics Committee (RIC) has undertaken an initiative to identify and promote best practices in radiology reporting. Our mission has been to develop structured reporting templates to improve the communication of radiology results.

Since its inception in 2008, the RSNA's reporting initiative has established a consensus for the high-level structure of radiology reports. We have developed a technical approach for report templates that builds on widely accepted information standards, including RadLex, HL7, DICOM, and the Web's Extensible Markup Language (XML). Our committee has convened two well-attended workshops to engage radiology subspecialty societies, other medical professionals (including cardiologists, pathologists, and oncologists), medical informatics specialists, and radiology reporting system vendors.

The first 70 reporting templates were published during RSNA 2009. Twelve subspecialty work groups created templates across a variety of imaging modalities and organ systems, such as whole-body PET/CT for cancer staging and brain perfusion CT. Free plain-text and XML-encoded versions of these templates are available at RSNA.org/reporting. Additional templates are being developed, and their terms are being mapped to RadLex concepts.

Structured reporting makes it easier to capture and retrieve quantitative data. For masses such as lung nodules, reporting templates can include measurements in one, two, or three dimensions, and could include volume measurements. By linking each numerical value to a specific report concept, one is in effect building a data structure from which the data can later be retrieved.
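As a purely illustrative sketch, the fragment below shows how a measurement tied to a coded concept becomes retrievable data; the element names and the concept code are invented for this example and are not taken from the published templates or the RadLex vocabulary:

```python
# Illustrative sketch (not the actual RSNA template schema) of how a
# coded measurement inside an XML report becomes retrievable data.
import xml.etree.ElementTree as ET

report_xml = """
<report>
  <finding>
    <concept code="RID-XXXX" label="lung nodule"/>
    <measurement name="longest_diameter" value="8.0" unit="mm"/>
  </finding>
</report>
"""

root = ET.fromstring(report_xml)
for finding in root.iter("finding"):
    concept = finding.find("concept")
    m = finding.find("measurement")
    # Because the value is tied to a coded concept, a query can target
    # "longest diameter of a lung nodule" rather than free text.
    print(concept.get("label"), m.get("name"), m.get("value"), m.get("unit"))
```

Because the query targets the concept code rather than free text, the same retrieval works across reports regardless of how the finding was worded.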

To facilitate interoperability, we are working to extend RadLex and the National Center for Biomedical Ontology's "Units of Measurement" ontology (www.bioontology.org). This effort will allow information systems to convert units of measurement automatically using ontology-based knowledge. For example, physicians and researchers will be able to track results consistently, regardless of whether a lesion's size has been reported in centimeters, millimeters, or inches.
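A minimal sketch of the idea, with a hand-written lookup table standing in for the knowledge an information system would draw from a units ontology:

```python
# Sketch of ontology-style unit normalization: a small lookup table maps
# each reported unit to millimeters so lesion sizes can be compared over
# time. The table stands in for knowledge drawn from a units ontology.
TO_MM = {"mm": 1.0, "cm": 10.0, "in": 25.4}

def size_in_mm(value, unit):
    """Convert a reported lesion size to millimeters."""
    return value * TO_MM[unit]

# The same lesion reported three ways yields one comparable number.
for value, unit in [(8.0, "mm"), (0.8, "cm"), (0.315, "in")]:
    print(f"{value} {unit} -> {size_in_mm(value, unit):.1f} mm")
```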

Additional information about the RSNA Reporting Initiative is available at RSNA.org/reporting.

Reference:
[1] Toward Best Practices in Radiology Reporting. Radiology, 2009; 252: 852-856. Kahn, C.E. Jr., et al.

Charles E. Kahn, Jr., MD, MS, is a professor of radiology and chief of radiology informatics at the Medical College of Wisconsin in Milwaukee, an adjunct professor of computer science at the University of Wisconsin-Milwaukee, and vice-chair of RSNA's Radiology Reporting Committee. 

[BACK TO TOP] 

 


ANALYSIS TOOLS & TECHNIQUES 

Assessing CAD Algorithms
 

By NICHOLAS PETRICK, PhD 

Let's start by defining CAD, since the acronym can have various meanings, including computer-aided detection, computer-aided diagnosis, computer-aided display, and computer-assist device, among others. In this article, I'll use CAD to refer to the general class of computer-assist devices (CADs): computer algorithms used in combination with an imaging system to aid the clinician in detecting or classifying disease [1]. This class includes computer-aided detection (CADe), a computerized system that marks or highlights portions of an image that may reveal abnormalities, and computer-aided diagnosis (CADx), a computerized system that provides an assessment of disease in terms of likelihood of presence, type, stage, or other characteristics. It is important to understand that CADs don't themselves detect or diagnose disease. The premise of CADs is that the information produced by the clinician and by the computer system is somewhat complementary. The contribution of CAD stems from the ability of a clinician to sort through and use relevant CAD information as part of his or her overall clinical interpretation process [2]. A clinician interacting with the computer algorithm is fundamental to CAD and its assessment.

There are many elements to assessing a new CAD algorithm, but two types of studies are generally part of the assessment mix: (1) stand-alone performance testing and (2) reader performance testing. The former is useful in developing and ranking prototype CAD designs, but does not provide direct evidence of how a CAD will affect clinical decision making. The latter provides the necessary evidence that the CAD does aid clinical decision making. Stand-alone testing is also an effective tool for identifying subgroups of patients or disease characteristics for which a CAD algorithm has either superior or inferior performance, so both types of study should generally be used to evaluate a CAD thoroughly.
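As a hedged illustration of stand-alone testing, the sketch below scores a single synthetic CADe case against known lesion locations; the distance-based matching rule and the threshold are invented for this example, not a standard protocol:

```python
# Hedged sketch of stand-alone CADe scoring: a mark is counted as a true
# positive if it falls within a distance threshold of a truth location.
import math

def score_case(marks, truths, radius=10.0):
    """Return (true positives, false positives, number of lesions)."""
    hit = [False] * len(truths)
    fp = 0
    for mx, my in marks:
        match = None
        for i, (tx, ty) in enumerate(truths):
            if not hit[i] and math.hypot(mx - tx, my - ty) <= radius:
                match = i
                break
        if match is None:
            fp += 1
        else:
            hit[match] = True
    return sum(hit), fp, len(truths)

# One synthetic case: two true lesions, three CAD marks (one is a miss).
tp, fp, n = score_case(marks=[(10, 12), (48, 50), (90, 90)],
                       truths=[(11, 11), (50, 52)])
print(f"sensitivity={tp/n:.2f}, false positives/image={fp}")
```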

Reader performance for CAD is typically assessed using a multiple-reader, multiple-case (MRMC) study design, in which a set of clinicians (readers) drawn from a relevant population of clinicians evaluates a set of patient cases drawn from a relevant population of patients. For CAD assessment, without-CAD reading is typically used as a control: it provides a performance benchmark against which any CAD-related change can be compared [3] and serves as a control on the range of case difficulty and reader skill in the study [1]. The MRMC study design is quite general, accommodating a number of different reading protocols and statistical endpoints. One common design is the so-called "fully-crossed" protocol, in which each reader reads each case in both the with-CAD and without-CAD arms. This design is efficient in minimizing the total number of cases, but hybrid designs, such as the "doctor-patient" design in which each clinician reads only his or her own patients, can also be accommodated. The MRMC approach is often thought of as tied to receiver operating characteristic (ROC) analysis; this is a common misconception, because MRMC accommodates a variety of endpoints, including sensitivity, specificity, and location-based analyses.
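To make the fully-crossed layout concrete, here is a minimal sketch using synthetic scores, in which every reader reads every case in both arms and performance is summarized by the standard rank-based (Mann-Whitney) estimate of ROC area; the 4-reader, 60-case sizes and the score model are invented for illustration:

```python
# Minimal sketch of a fully-crossed MRMC layout: every reader scores every
# case in both arms. All scores are synthetic; auc() is the standard
# rank-based (Mann-Whitney) estimate of ROC area.
import numpy as np

rng = np.random.default_rng(0)
n_readers, n_pos, n_neg = 4, 30, 30
truth = np.r_[np.ones(n_pos), np.zeros(n_neg)]  # 1 = diseased case

def auc(scores, truth):
    """Rank-based AUC: chance a diseased case outscores a normal one."""
    ranks = scores.argsort().argsort() + 1.0
    npos, nneg = int(truth.sum()), int((truth == 0).sum())
    return (ranks[truth == 1].sum() - npos * (npos + 1) / 2) / (npos * nneg)

# Synthetic reader scores; the with-CAD arm gets a small added signal.
for arm, signal in [("without CAD", 1.0), ("with CAD", 1.3)]:
    reader_aucs = [auc(rng.normal(truth * signal, 1.0), truth)
                   for _ in range(n_readers)]
    print(f"{arm}: mean reader AUC = {np.mean(reader_aucs):.3f}")
```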

Reader variability plays a key role in CAD assessment for two main reasons: (1) differences in reader skill and (2) differences in reader aggressiveness. Large variability, not uncommon in the imaging modalities for which CAD is being developed, implies a need for large reader studies. The magnitude of the variability is usually unknown without some type of pilot data; pilot studies are therefore key to sizing MRMC studies. Fortunately, a number of statistical tools, along with software implementations, have been developed to account for the correlations and common sources of variability in MRMC data, providing estimates of mean performance along with confidence intervals that correctly reflect both reader and case variability [1, 3].
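The dedicated MRMC methods cited above [1, 3] handle these variance components analytically; as a loose, hand-rolled stand-in, the sketch below bootstraps over both readers and cases (resampling diseased and normal cases separately) to form a confidence interval for the mean AUC difference on synthetic data:

```python
# Hedged sketch of accounting for both variability sources: a bootstrap
# that resamples readers AND cases when building a confidence interval
# for the mean with-minus-without-CAD AUC difference. Dedicated MRMC
# tools [1, 3] do this analytically; this is only an illustration.
import numpy as np

rng = np.random.default_rng(1)
n_readers, n_pos, n_neg = 5, 40, 40
truth = np.r_[np.ones(n_pos), np.zeros(n_neg)]

def auc(scores, truth):
    """Rank-based (Mann-Whitney) area under the ROC curve."""
    ranks = scores.argsort().argsort() + 1.0
    npos, nneg = int(truth.sum()), int((truth == 0).sum())
    return (ranks[truth == 1].sum() - npos * (npos + 1) / 2) / (npos * nneg)

# Synthetic fully-crossed scores: readers x cases, one matrix per arm.
without = rng.normal(truth, 1.0, size=(n_readers, n_pos + n_neg))
withcad = without + rng.normal(0.25 * truth, 0.3, size=without.shape)

boots = []
for _ in range(500):
    r = rng.integers(0, n_readers, n_readers)      # resample readers
    p = rng.integers(0, n_pos, n_pos)              # resample diseased cases
    n = rng.integers(n_pos, n_pos + n_neg, n_neg)  # resample normal cases
    cases, t = np.r_[p, n], truth[np.r_[p, n]]
    boots.append(np.mean([auc(withcad[i][cases], t) - auc(without[i][cases], t)
                          for i in r]))
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"95% CI for the AUC difference: [{lo:.3f}, {hi:.3f}]")
```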

In summary, CAD assessment generally includes both stand-alone and MRMC reader performance testing, with the former used to triage underachieving CAD algorithms and to identify substandard performance within subgroups, and the latter to assess CAD's impact on clinical decision making.

References:
[1] Assessment of Medical Imaging Systems and Computer Aids: A Tutorial Review. Academic Radiology, 2007; 14(6): 723-748. Wagner, R.F., et al.
[2] Anniversary Paper: History and Status of CAD and Quantitative Image Analysis: the Role of Medical Physics and AAPM. Medical Physics, 2008; 35(12): 5799-5820. Giger, M.L., et al.
[3] Reader Studies for Validation of CAD Systems. Neural Networks, 2008; 21(2-3): 387-397. Gallas, B.D., et al.

 

Nicholas Petrick, PhD, is deputy director for the Division of Imaging and Applied Mathematics and leader of the Image Analysis Laboratory at the Center for Devices and Radiological Health, U.S. Food and Drug Administration, and is a member of the QIBA Volumetric CT Technical Committee. Dr. Petrick's research interests include quantitative imaging, computer-aided diagnosis, and the development of assessment techniques for medical imaging and computer analysis devices. 

[BACK TO TOP] 

 


FOCUS ON 

RSNA 2010: Quantitative Imaging/Imaging Biomarkers and QIBA Meetings and Activities 

MARK YOUR CALENDAR
Quantitative Imaging/Imaging Biomarkers Focus Session: Imaging Biomarkers for Clinical Care and Research
• Monday, November 29, 4:30 PM–6:00 PM 

QIBA Quantitative Committees Working Meeting
• Wednesday, December 1, 3:30 PM–5:30 PM 

The Quantitative Imaging Reading Room
Following the success of the RSNA 2009 showcase "Toward Quantitative Imaging: Reading Room of the Future," RSNA 2010 will feature "The Quantitative Imaging Reading Room." This educational showcase will provide visual and experiential exposure to quantitative imaging and biomarkers through exhibitor products that integrate quantitative analysis into the image interpretation process. Participants can learn through hands-on exhibits featuring informational posters, computer-based demonstrations, and Meet the Expert presentations scheduled throughout the week. 

[BACK TO TOP] 

 


QI/IMAGING BIOMARKERS IN THE LITERATURE 

PubMed Search on Image Archives 

Each issue of QIBA Quarterly features a link to a dynamic search in PubMed, the National Library of Medicine's interface to its MEDLINE database. This issue's link leads to a PubMed search on structured reporting in radiology. 

Take advantage of the My NCBI feature of PubMed, which allows you to save searches and results and includes an option to automatically update and e-mail results from your saved searches. My NCBI also offers features for highlighting search terms, storing an e-mail address, filtering search results, and setting LinkOut, document delivery service, and outside tool preferences. 
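For readers who prefer scripted access, NCBI also exposes PubMed programmatically through its E-utilities web service; the minimal sketch below calls the esearch endpoint (the query term mirrors this issue's topic, and parameter choices beyond db, term, and retmax are left to NCBI's documentation):

```python
# Minimal sketch of querying PubMed via NCBI E-utilities (esearch).
# The response is XML listing matching PubMed IDs; see NCBI's E-utilities
# documentation for usage policies and additional parameters.
from urllib.parse import urlencode
from urllib.request import urlopen

params = urlencode({
    "db": "pubmed",
    "term": "structured reporting radiology",
    "retmax": 20,
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
with urlopen(url) as resp:
    print(resp.read().decode()[:500])  # first part of the XML result
```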

[BACK TO TOP] 

 

QIBA MISSION: Improve the value and practicality of quantitative imaging biomarkers by reducing variability across devices, patients and time. 


QIBA CONNECTIONS 

Quantitative Imaging Biomarkers Alliance (QIBA) 

QIBA Wiki 

Contact us
Comments & suggestions welcome 


Daniel C. Sullivan, MD
RSNA Science Advisor
 

© 2010 Radiological Society of North America, Inc. QIBA Quarterly is brought to you by the Radiological Society of North America (RSNA), Department of Scientific Affairs, 820 Jorie Blvd., Oak Brook, IL 60523, 1-630-571-2670. If you wish to unsubscribe, send an e-mail to QIBA@rsna.org.