Ultra-high resolution, multi-scale, context-aware approach for detection of small cancers on mammography

While detection of malignancies on mammography has received a boost from Convolutional Neural Networks (CNNs), detection of very small cancers remains challenging. This is clinically significant: the purpose of screening mammography is early detection of cancer, making it imperative to pick up cancers while they are still very small. Mammography has the highest spatial resolution of all imaging modalities (image sizes as high as 3328 × 4096 pixels), a requirement that stems from the need to resolve the fine features of the smallest cancers on screening. Due to computational constraints, however, most state-of-the-art CNNs work on reduced-resolution images, and those that do work at higher resolutions compromise on global context and operate at a single scale. In this work, we show that resolution, scale, and image context are all important, independent factors in the detection of small masses. We therefore use a fully convolutional network, which can accept input of any size, and incorporate a systematic multi-scale, multi-resolution approach that encodes image context; we show these are critical factors in the detection of small masses. This approach improves cancer detection over the baseline model, particularly for small masses. In a single-institution, multicentre study, we report the model's performance on a diagnostic mammography dataset, a screening mammography dataset, and a curated dataset of small cancers < 1 cm in size. On this small-cancer dataset, our approach improves sensitivity from 61.53% to 87.18% at 0.3 False Positives per Image (FPI). Model and code are available.
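The multi-scale, context-aware idea in the abstract can be sketched as follows. This is a minimal illustration, not the authors' released code: it assumes a candidate location on the mammogram, a hypothetical set of window sizes, and simple block-mean pooling to bring each context window down to a common patch size, so that larger windows trade spatial resolution for global context.

```python
import numpy as np

def multiscale_patches(image, center, scales=(256, 512, 1024), out=256):
    """Extract nested context windows around `center` at several scales,
    downsampling each to a common `out` x `out` patch via block-mean pooling.
    Larger windows capture more image context at lower effective resolution.
    Each scale must be a multiple of `out`."""
    h, w = image.shape
    cy, cx = center
    patches = []
    for s in scales:
        half = s // 2
        # Clamp the window to the image bounds, then zero-pad to full size.
        y0, y1 = max(0, cy - half), min(h, cy + half)
        x0, x1 = max(0, cx - half), min(w, cx + half)
        win = np.zeros((s, s), dtype=image.dtype)
        oy, ox = y0 - (cy - half), x0 - (cx - half)
        win[oy:oy + (y1 - y0), ox:ox + (x1 - x0)] = image[y0:y1, x0:x1]
        # Block-mean downsample the s x s window to out x out.
        f = s // out
        patches.append(win.reshape(out, f, out, f).mean(axis=(1, 3)))
    return np.stack(patches)  # shape: (len(scales), out, out)
```

A downstream detector can then consume the stacked patches as channels, giving it the fine detail of the smallest window alongside the surrounding context of the larger ones.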