MTF Correction for Optimizing Softcopy Display of Digital Mammograms: Use of a Vision Model for Predicting Observer Performance
Elizabeth Krupinski, PhD (1); Hans Roehrig, PhD (1); Michael Engstrom, BS (1); Jeffrey Johnson, PhD (2); Jeffrey Lubin, PhD (2)
(1) University of Arizona; (2) Sarnoff Corporation
This work was supported by NIH grant R01 CA 87816-01.
Rationale
• The MTF (Modulation Transfer Function) of monitors is inferior to that of radiographic film
• The MTF is degraded (spatial resolution is lost) in both the vertical & horizontal directions, and moreover is non-isotropic
– Horizontal: by ~10–20%
– Vertical: by ~30–40%
• Over half the contrast modulation is lost at the highest spatial frequencies
• Images are thus degraded in both spatial & contrast resolution
• Maybe image processing can help!
Rationale
• Observer trials (ROC) are ideal for evaluation, but for good statistical power they
– Require many images
– Require many observers
– Often require multiple viewing conditions
– Are time-consuming
• Predictive models may help decrease the need for extended & multiple ROC trials
– Simulate effects of softcopy display parameters on image quality
– Predict effects on observer performance
JNDmetrix Model
• Developed by the Sarnoff Corporation
– Successful in military & industrial tasks
• Computational method for predicting human performance in detection, discrimination & image-quality tasks
• Based on JND (Just Noticeable Difference) measurement principles & frequency-channel vision-modeling principles
• Takes 2 input images & returns accurate, robust estimates of visual discriminability
JNDmetrix Model
[Block diagram: input images → optics → sampling → oriented responses → frequency-specific contrast pyramid → masking/gain control → transducer → distance metric (Q-norm) → JND map → JND value & probability]
JNDmetrix Model
• Optics: input images convolved with a function approximating the point-spread optics of the eye
• Image Sampling: the retinal cone mosaic is simulated by a sequence of Gaussian convolution & point-sampling operations
• Raw Luminance Image: levels converted to units of local contrast & decomposed into a Laplacian pyramid yielding 7 band-pass frequency levels
• Pyramid Levels: convolved with 8 pairs of spatially oriented filters with bandwidths derived from psychophysical data
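The contrast-pyramid stage above can be sketched as a standard Laplacian-pyramid decomposition. This is a simplified stand-in, not the Sarnoff implementation; the Gaussian sigma and the factor-of-2 downsampling are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def laplacian_pyramid(image, levels=7):
    """Split an image into band-pass levels, one per frequency band,
    mimicking the model's 7-level pyramid (simplified sketch)."""
    pyramid, current = [], image.astype(float)
    for _ in range(levels):
        blurred = gaussian_filter(current, sigma=1.0)  # low-pass
        pyramid.append(current - blurred)              # band-pass residual
        current = blurred[::2, ::2]                    # downsample by 2
    return pyramid

bands = laplacian_pyramid(np.random.rand(256, 256))
```

Each successive band holds coarser spatial frequencies at half the resolution of the previous one.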
JNDmetrix Model
• Paired Filtered Images: squared & summed, yielding a phase-independent energy response that mimics the transform in visual cortex from linear responses (simple cells) to energy responses (complex cells)
• Transducer Stage: the energy measure at each pyramid level is normalized by a value approximating the square of the frequency-specific contrast-detection threshold for that level & local luminance
JNDmetrix Model
• Normalized Levels: transformed by a sigmoid non-linearity duplicating the visual contrast-discrimination function
• Transducer Outputs: convolved with a disk-shaped kernel & averaged to account for foveal sensitivity
• Distance Metric: computed at each spatial position from the distance between vectors (m-dimensional, m = # pyramid levels × # orientations) from each image
• JND Spatial Map: represents degree of discriminability; reduced to a single value (Q-norm) representing the results
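The vector distance and the Q-norm reduction can both be sketched as Minkowski summations. The exponent 2.4 below is a common psychophysical choice and an assumption here, not a documented JNDmetrix parameter:

```python
import numpy as np

def channel_distance(v1, v2, q=2.4):
    """Distance between two m-dimensional channel-response vectors
    (m = # pyramid levels x # orientations) at one spatial position."""
    return np.sum(np.abs(v1 - v2) ** q) ** (1.0 / q)

def q_norm(jnd_map, q=2.4):
    """Reduce the JND spatial map to a single summary value."""
    flat = np.abs(np.asarray(jnd_map, dtype=float)).ravel()
    return np.sum(flat ** q) ** (1.0 / q)
```

With q = 2 the distance reduces to the familiar Euclidean norm; larger q weights the most discriminable locations more heavily.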
The Study
• Measure the monitor's horizontal & vertical MTF
• Apply an MTF correction algorithm
– Based on Reiker et al., Proc SPIE 1997;3035:355–368, but using a Wiener-filtering algorithm instead of the Laplacian-pyramid filter
– Compensates for mid- to high-frequency contrast losses
• Run a human observer (ROC) study
– Calculate area under the curve (Az)
• Run the JNDmetrix model on the images
– Calculate JNDs
• Compare human & model performance
Physical Evaluation
• Siemens monitor: 2048 × 2560; monochrome; P45 phosphor; Dome MD-5 video board; DICOM calibrated
• Luminance range: 0.8 cd/m² – 500 cd/m²
• Input to the model: each stimulus imaged on the monitor by a CCD camera to capture display effects
Block diagram of the program for automatically finding the CRT MTF from a CCD image of a single CRT line (profiles taken across the CRT line yield the vertical & horizontal MTFs):
Step 1: Input image details such as magnification, CRT pixel size and orientation of the line.
Step 2: Specify the ROI for the profiles.
Step 3: Perform a Fast Fourier Transform of the profiles and take their average.
Step 4: Correct for the finite pixel width.
Step 5: Fit a polynomial curve to obtain a normalization factor.
Step 6: Divide the average FFT by this normalization factor to obtain the MTF.
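Steps 3–6 can be sketched as follows. This is a simplified sketch: the polynomial-fit normalization of Step 5 is replaced by normalizing to the DC bin, the finite-pixel correction is modeled as a sinc aperture, and the function and parameter names are hypothetical:

```python
import numpy as np

def mtf_from_line_profiles(profiles, pixel_pitch_mm):
    """Estimate the MTF from line-spread profiles taken across a
    displayed CRT line (one profile per row of `profiles`)."""
    profiles = np.asarray(profiles, dtype=float)
    spectra = np.abs(np.fft.rfft(profiles, axis=1))  # Step 3: FFT of each profile
    avg = spectra.mean(axis=0)                       # ... and their average
    n = profiles.shape[1]
    freqs = np.fft.rfftfreq(n, d=pixel_pitch_mm)     # cycles/mm
    # Step 4: divide out the sinc response of the finite pixel aperture
    aperture = np.sinc(freqs * pixel_pitch_mm)       # np.sinc(x) = sin(pi*x)/(pi*x)
    corrected = avg / np.maximum(aperture, 1e-6)
    return freqs, corrected / corrected[0]           # normalize so MTF(0) = 1

# a synthetic Gaussian line-spread function as input:
lsf = np.exp(-0.5 * ((np.arange(64) - 32) / 2.0) ** 2)
freqs, mtf = mtf_from_line_profiles([lsf, lsf], pixel_pitch_mm=0.1)
```

A broader line-spread function produces a faster-falling MTF, which is exactly the degradation the correction algorithm compensates for.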
[Figure: MTFs obtained from the line response of a DICOM-calibrated high-performance 5M-pixel CRT with a P45 phosphor for different mean luminances (ADUs 55, 120 and 210, i.e. 8, 42 & 237 cd/m²). Vertical & horizontal MTF curves plotted vs. spatial frequency (0–4 lp/mm); Nyquist frequency: 3.47 lp/mm.]
Images
• Mammograms from the USF database
• 512 × 512 sub-images extracted
• 13 malignant & 12 benign mCa++ (microcalcification) clusters
• The mCa++ are removed using a median filter
• mCa++ added back to the 25 normals at reduced contrast levels: 75%, 50% & 25%
• Signal-present versions created by weighted superposition of the mCa++ onto the signal-absent backgrounds
• 250 total images
• Decimated to 256 × 256 (for CCD imaging)
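The weighted-superposition step can be sketched as a simple blend; the array contents and the 8-bit clipping below are illustrative assumptions, not the study's data:

```python
import numpy as np

def add_signal(background, signal, contrast):
    """Superimpose an extracted mCa++ signal onto a signal-absent
    background at a given contrast fraction (1.0, 0.75, 0.5, 0.25)."""
    blended = background.astype(float) + contrast * signal.astype(float)
    return np.clip(blended, 0, 255)  # keep within an 8-bit display range

bg = np.full((256, 256), 100.0)      # hypothetical flat background
sig = np.zeros((256, 256))
sig[100:110, 100:110] = 40.0         # hypothetical mCa++ cluster
img_75 = add_signal(bg, sig, 0.75)   # 75%-contrast version
```

Scaling only the superimposed signal leaves the background statistics identical across contrast levels, so detection difficulty is controlled by lesion contrast alone.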
Edited Images
[Figure: example sub-image with the mCa++ at the original (100%), 75%, 50%, 25% & 0% contrast levels.]
MTF Restoration
• If the MTF is known, the digital data can be processed with essentially the inverse of the display MTF(f) before being displayed:
O'(f) = O(f)/MTF(f), where O(f) is the object
• Displaying O'(f) on a monitor with MTF(f) results in an image equivalent to the digital data O(f)
• There is no degradation, and the image on the CRT display looks just like the digital data:
I(f) = O'(f)·MTF(f) = [O(f)/MTF(f)]·MTF(f) = O(f), where I(f) is the displayed image
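Because the pure inverse amplifies noise wherever MTF(f) is small, the study used a Wiener-filtering algorithm instead. A minimal frequency-domain sketch; the noise-to-signal ratio `nsr` is an assumed constant, not the study's value:

```python
import numpy as np

def wiener_mtf_restore(image, mtf, nsr=0.01):
    """Pre-compensate an image for display MTF losses.
    The pure inverse O'(f) = O(f)/MTF(f) blows up where MTF(f) -> 0;
    the Wiener-style filter H(f) = MTF/(MTF^2 + NSR) regularizes it.
    `mtf` must be sampled on the same 2-D FFT frequency grid as the image."""
    O = np.fft.fft2(image.astype(float))
    H = mtf / (mtf ** 2 + nsr)          # ~1/MTF where MTF is large
    return np.real(np.fft.ifft2(O * H))
```

With nsr = 0 and a perfect (unity) MTF, the filter reduces to the identity, matching the I(f) = O(f) result above.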
Observer Study
• 250 images
– 256 × 256 @ 5 contrast levels
• 6 radiologists
• No image processing
• Ambient lights off
• No time limits
• 2 reading sessions ~1 month apart
• Counter-balanced presentation
• Rate confidence (6-point scale)
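Az can be estimated from the 6-point confidence ratings; the sketch below uses the nonparametric (trapezoidal/Wilcoxon) estimate rather than a fitted binormal ROC curve, so it is a simplified stand-in for the study's analysis:

```python
import numpy as np

def az_trapezoidal(ratings_present, ratings_absent):
    """Nonparametric area under the ROC curve (equivalent to the
    Wilcoxon statistic) from confidence ratings; ties count 1/2."""
    pos = np.asarray(ratings_present, float)[:, None]
    neg = np.asarray(ratings_absent, float)[None, :]
    return float(np.mean((pos > neg) + 0.5 * (pos == neg)))

# hypothetical ratings on a 6-point scale (6 = definitely present):
az = az_trapezoidal([6, 5, 5, 3], [2, 1, 3, 2])
```

The pairwise comparison over all (signal-present, signal-absent) rating pairs gives Az directly, with 0.5 corresponding to chance performance.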
Human ROC Results
[Figure: mean Az (0–1) at each mCa++ contrast level (25%, 50%, 75%, 100%) for MTF-corrected vs. uncorrected images; MTF correction yields higher Az. * P < 0.05]
Model Results
[Figure: model JND values (0–14) at each contrast level (25%, 50%, 75%, 100%) for MTF-corrected vs. uncorrected images. * P < 0.05 at all four levels]
Correlation
[Figure: scatter plot of human Az (0.6–1.0) vs. model JND (7–13) for the MTF & no-MTF conditions, with linear fit; R² = 0.98]
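The R² in the scatter plot is the coefficient of determination of a least-squares line fit; a minimal sketch, where the data points are hypothetical and not the study's values:

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination (R^2) of a straight-line
    least-squares fit of y on x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# hypothetical (model JND, human Az) pairs:
r2 = r_squared([7, 9, 11, 13], [0.65, 0.75, 0.85, 0.95])
```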
Summary
• MTF compensation significantly improves detection performance
• The JNDmetrix model predicted human performance well
• High correlation between human & model results
• Future improvements to the model may include an attention component derived from eye-position data
Model Results
• The model predicted the same pattern of results as the human observers
– MTF processing yields higher performance than no processing
– At all lesion contrast levels
• The correlation between human Az and model JND is quite high