Lesion Segmentation from Mammogram Images using a U-Net Deep Learning Network

DOI : 10.17577/IJERTV9IS020213


Neha S. Todewale

Electronics Department,

Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded

Maharashtra, India

Abstract: Breast cancer is one of the most frequent causes of mortality among women. Efforts are ongoing to develop more effective techniques for early diagnosis and reliable results. Earlier methods require a radiologist or oncologist to examine the images for the presence of cancer, which is time-consuming. Approaches that can diagnose tumors or cancerous cells with good accuracy and without human involvement are therefore needed. In this study, a deep learning approach is used to automatically segment cancerous lesions. For the segmentation, the U-Net model, a fully convolutional network (FCN), is used. The databases used are TMC (Tata Memorial Centre) data and MIAS (Mammographic Imaging Analysis Society). This computer-aided detection improves the results by properly separating out the lesion, which helps in studying the temporal changes taking place within the lesion over time. The network gives acceptable performance on the given datasets: the validation Dice coefficients on the MIAS and TMC datasets are 0.8582 and 0.8952 respectively.

Keywords: Deep learning, U-Net, FCN, Lesion.

  1. Introduction

Breast cancer has become the most common cancer and a leading cause of death among women across the world. After skin cancer, breast cancer is the second most commonly occurring cancer, with a high mortality rate [1]. In addition to existing cases, about 2 million new cases were identified in 2018. In India too, the number of breast cancer cases is growing rapidly; it is projected to reach 1,797,900 by 2020 [2]. Among women aged 50 years or older, the rate of diagnosis is 81%, and 89% of deaths occur in this age group. The major causes behind the increasing mortality rates include poor awareness and a lack of screening facilities [3]. Not only women but also men are affected by this cancer.

    1.1 Anatomy and Literature Survey

Breast cancer typically develops in either the ducts or the lobules of the breast. Lobules are the mammary glands that produce milk, and the ducts carry the milk from the lobules to the nipple during breastfeeding. Abnormal and uncontrolled growth of cells leads to the formation of a lump or tumor. A lump can also form in the fatty tissue and fibrous tissue. The cancer may spread to other body parts via the lymph nodes, which play a role in carrying tumor cells to the rest of the body. Different cancers exhibit various types of abnormality. In the case of breast masses, it is very challenging to locate and properly segment the lesion because of the size and shape of the abnormality and the low SNR [4]. Mortality rates can be reduced if patients undergo screening tests such as mammography. However, in the case of dense breasts, false positive rates are high. Radiologists' evaluations may also fail; cases marked as negative by them were later found to have been present in the screening mammograms [5, 6]. It is therefore necessary to develop an automated system that can serve as a second opinion for the radiologist, playing an important role in the early detection of cancer and helping to reduce death rates. Although computer-aided detection systems are used alongside traditional systems, radiologists are found not to improve their performance with them, owing to low specificity [7].

The performance of medical imaging techniques has improved remarkably with developments in machine learning. The convolutional neural network (CNN) is a machine learning technique that has come to replace many segmentation algorithms [8]. Convolutional networks have existed for many years, but were mainly used for classification tasks in which a single class label is assigned to an entire image. For biomedical images, however, a separate class label must be assigned to every pixel. Ciresan et al. [9] trained a network in which a sliding window moves over the pixels and predicts a class label for each pixel separately. The drawbacks of this method are that it is slow and redundant, and that there is a trade-off between localization and contextual information. Jonathan Long et al. [10] proposed the fully convolutional network, trained end-to-end for pixelwise prediction and from supervised pre-training. The method is efficient because it avoids the complications of earlier approaches: patchwise training, used in most previous work, is slower than fully convolutional training because it operates on subimages and produces outputs for subimages instead of whole images. In this work, the U-Net network is found to perform well with a limited amount of training data. The lesions are separated out rather than only detected, which gives information about temporal changes within the lesion. Evaluations on the datasets demonstrate that deep learning approaches achieve better performance than conventional methodologies. The paper is organized as follows: the methodology section covers the datasets used in the experimentation, the method of mask creation, and the architecture of the model used; the results and discussion section analyses the segmented output masks obtained after training the model; and the concluding section discusses the future scope of this experimentation.

  2. Methodology

2.1 Dataset

The proposed method is evaluated on the following datasets:

      • Mammographic Imaging Analysis Society (MIAS)

      • Tata Memorial Centre data (TMC)

The Mammographic Imaging Analysis Society (MIAS) has generated a digital mammogram database that is publicly available for research. It contains 161 pairs of images at a resolution of 1024×1024 in the mediolateral oblique (MLO) view. All images in this dataset have a 200-micron pixel edge and are padded to a size of 1024×1024. Information on the location of abnormalities is provided for marking the ground truths. The images are annotated by background tissue (fatty, fatty-glandular and dense-glandular), class of abnormality (calcification, well-defined, spiculated, architectural distortion, asymmetry, ill-defined and normal) and severity of abnormality (benign or malignant). For truth marking, the centre pixel location and the radius of each abnormality are provided [11]. In addition, data of 69 patients from the Tata Memorial Centre (TMC) is added to the dataset. As the total number of available images is small, the images are augmented with horizontal flips, vertical flips, zoom and shear to increase the data. Figure 1(a) shows a sample image from the MIAS dataset, and Figure 1(b) an image from the TMC data.

Fig. 1. (a) MIAS image (b) TMC image

2.2 Ground truth marking and mask creation

In a deep learning approach, ground truth data is needed so that the network-generated masks can be compared with the annotated masks. Some datasets, such as DDSM and INbreast, come with ground truth masks. The datasets used here do not have annotated masks, so the masks are created manually and verified by expert radiologists. Figure 2(a) shows an image from the TMC data along with its binarized mask in Figure 2(b).

Fig. 2. (a) TMC image and (b) its manually annotated mask
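Since the MIAS annotations specify each abnormality by a centre pixel and an approximate radius, a first-pass binary mask can be generated as a filled disk before expert verification. Below is a minimal NumPy sketch of that idea; the function name, the bottom-left-origin handling, and the example values are assumptions for illustration, not the pipeline used here.

```python
import numpy as np

def disk_mask(height, width, x, y, radius):
    """Binary mask with a filled disk at an annotated lesion location.

    MIAS coordinates are given with a bottom-left origin, so the
    y value is flipped into row (top-left origin) coordinates first.
    """
    row_c = height - 1 - y                      # flip the y-axis
    rows, cols = np.ogrid[:height, :width]      # broadcastable index grids
    disk = (cols - x) ** 2 + (rows - row_c) ** 2 <= radius ** 2
    return disk.astype(np.uint8)                # 1 inside the lesion, 0 elsewhere

# Illustrative values for a 1024x1024 mammogram:
mask = disk_mask(1024, 1024, 535, 425, 197)
```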

2.3 Data Augmentation

Data augmentation increases the number of images in a dataset to obtain better generalization in the model, which needs both more data and more variation. When the dataset has only a limited number of images, more data must therefore be generated from it; this is what data augmentation does. Augmentation increases the number of training images, which improves overall network performance whenever the training images are relatively few in number. The augmentations used include horizontal and vertical flipping, adding random noise, zooming and blurring the images. Figure 3(a)-(d) shows the augmented images after flipping, zooming and blurring operations on the dataset.

Fig. 3. (a)-(d) Augmented images after flipping, zooming and blurring operations
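A minimal sketch of how such augmentation can be wired up with Keras's ImageDataGenerator (the library used for the experiments later in the paper); the transform ranges and batch size are illustrative assumptions, not the values used here. Flips, zoom and shear are built in, while random noise and blur would need a custom preprocessing_function. A shared seed keeps each augmented image aligned with its augmented mask.

```python
from keras.preprocessing.image import ImageDataGenerator

# Identical geometric transforms for images and masks (ranges illustrative).
aug_args = dict(horizontal_flip=True,
                vertical_flip=True,
                zoom_range=0.1,
                shear_range=0.1)

image_datagen = ImageDataGenerator(**aug_args)
mask_datagen = ImageDataGenerator(**aug_args)

# train_images / train_masks: arrays of shape (N, 256, 256, 1).
seed = 1
image_gen = image_datagen.flow(train_images, batch_size=8, seed=seed)
mask_gen = mask_datagen.flow(train_masks, batch_size=8, seed=seed)
train_gen = zip(image_gen, mask_gen)   # yields (image_batch, mask_batch) pairs
```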

2.4 Deep Learning and Training

Deep convolutional networks have achieved good outcomes in many visual recognition tasks. Convolutional networks had existed for a long time before that, but their outcomes had only limited success. G. E. Hinton et al. proposed a new algorithm [12] with which deep learning proved to be an effective approach for a tremendous range of pattern recognition problems. Long et al. introduced the Fully Convolutional Network (FCN) [13], a contribution that made the CNN architecture useful for dense prediction without any fully connected layers. Compared with the classical approaches used previously, such a network can generate the segmentation map for an image much faster. A challenge remains with the pooling layers, which diminish the details of objects. The fully convolutional network therefore uses an expanding/upsampling path to overcome this problem. The architecture is thus like an encoder-decoder network, where the contracting path reduces the spatial dimensions using pooling layers and the expanding path helps to recover the details of objects. The overall architecture of U-Net resembles an autoencoder, and at the end we obtain a segmentation map of the lesion [14]. An advantage of U-Net is that images of different sizes can be given as input, since it is a fully convolutional network, so complete mammogram images can be evaluated on the trained network.

2.5 U-Net Architecture

The U-Net architecture was proposed by Ronneberger et al. [15]. It can be viewed as the combination of a contracting path with several convolutional layers and an expanding path with deconvolutional layers. The network is divided into three parts:

      • The Contracting/Downsampling path

      • The Bottleneck

      • The Expanding/Upsampling path

The contracting path consists of repeated blocks of two 3×3 convolution layers with rectified linear units (ReLU) as the activation function; for downsampling, max-pooling with a stride of 2 is used. The number of feature channels doubles at every downsampling stage. The expanding path consists of upsampling of the feature map followed by a 2×2 up-convolution, which halves the number of feature channels. A 1×1 convolution maps the feature vector to the desired number of classes. The overall U-Net architecture used in our experimentation is shown in Figure 4.

      Fig. 4. The U-Net architecture
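For concreteness, the sketch below shows this structure in Keras with only two downsampling stages instead of the four in the original U-Net [15]; the filter counts (starting at 64 and doubling) follow the convention described above, and the sigmoid output matches the single lesion class and the binary cross-entropy loss used later. It is an illustrative reduction, not the exact network trained here.

```python
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D, concatenate

def conv_block(x, filters):
    """Two 3x3 convolutions with ReLU, as in each U-Net stage."""
    x = Conv2D(filters, 3, activation='relu', padding='same')(x)
    x = Conv2D(filters, 3, activation='relu', padding='same')(x)
    return x

def build_unet(input_shape=(256, 256, 1)):
    inputs = Input(input_shape)

    # Contracting path: feature channels double at every downsampling stage.
    c1 = conv_block(inputs, 64)
    p1 = MaxPooling2D(pool_size=2)(c1)      # stride-2 max pooling
    c2 = conv_block(p1, 128)
    p2 = MaxPooling2D(pool_size=2)(c2)

    # Bottleneck
    b = conv_block(p2, 256)

    # Expanding path: upsample, halve the channels, concatenate skip features.
    u2 = Conv2D(128, 2, activation='relu', padding='same')(UpSampling2D(size=2)(b))
    c3 = conv_block(concatenate([u2, c2]), 128)
    u1 = Conv2D(64, 2, activation='relu', padding='same')(UpSampling2D(size=2)(c3))
    c4 = conv_block(concatenate([u1, c1]), 64)

    # 1x1 convolution maps the feature vector to the single lesion class.
    outputs = Conv2D(1, 1, activation='sigmoid')(c4)
    return Model(inputs, outputs)
```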

As shown in Figure 5, the images of our dataset along with their generated masks are given to the network in the training phase; in the testing phase, a test image is fed to the trained network, which generates the predicted mask for that image.


      Fig. 5. The working of U-Net architecture where input test image is fed to the network and respective mask of that image is generated.
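At test time this amounts to one forward pass followed by thresholding the sigmoid output; a minimal sketch, where the 0.5 threshold is an assumed choice:

```python
import numpy as np

# test_image: a preprocessed 256x256 grayscale array scaled to [0, 1].
pred = model.predict(test_image[np.newaxis, ..., np.newaxis])   # (1, 256, 256, 1)
predicted_mask = (pred[0, ..., 0] > 0.5).astype(np.uint8)       # binarize
```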

3. Results and Discussion


    Fig. 6. Results of the experimentation performed

In this work, we took the images from the MIAS and TMC data, created their binarized masks, resized them to 256×256, augmented them to increase the data, and then fed them to our network as input. Figure 6 shows the segmentation results produced by our network. In each row, the first image is the original image, the second is the manually created mask, and the third is the mask generated by our network. In each last-column image, the lesion is segmented out properly.
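A minimal sketch of the loading, resizing and mask-binarization steps just described, assuming Pillow for image I/O (the paper does not specify the actual I/O library or the binarization threshold):

```python
import numpy as np
from PIL import Image

def load_and_resize(path, size=(256, 256)):
    """Load a mammogram as 8-bit grayscale, resize it, and scale to [0, 1]."""
    img = Image.open(path).convert('L').resize(size)
    return np.asarray(img, dtype=np.float32) / 255.0

def binarize_mask(path, size=(256, 256), threshold=127):
    """Load an annotated mask, resize it, and binarize at mid-gray (assumed)."""
    mask = np.asarray(Image.open(path).convert('L').resize(size))
    return (mask > threshold).astype(np.float32)
```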

The experimentation is performed using the Keras deep learning library, which can run on top of Theano or TensorFlow; here, TensorFlow is used as the backend. To evaluate the segmentation performance of U-Net, we use the Dice coefficient, a statistic that measures the similarity between two samples. About 20% of the training data is used for validation.

The parameters used for training are shown in Table 1.

    Table 1. Parameters used for implementation

Parameter                Value
1. Number of epochs      40
2. Steps per epoch       300
3. Loss function         Binary cross-entropy
4. Optimizer             Adam
5. GPU                   NVIDIA Quadro P5000
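A hedged sketch of how training with the Table 1 settings could be assembled in the era's standalone Keras API (fit_generator was the generator-based entry point at the time); build_unet and train_gen refer to the earlier sketches, dice_coefficient is the metric sketched after equation (1) below, and val_gen and validation_steps are assumed placeholders for the 20% validation split.

```python
from keras.optimizers import Adam

model = build_unet(input_shape=(256, 256, 1))
model.compile(optimizer=Adam(),                 # Table 1: Adam
              loss='binary_crossentropy',       # Table 1: binary cross-entropy
              metrics=[dice_coefficient])       # sketched after Eq. (1) below

model.fit_generator(train_gen,
                    steps_per_epoch=300,        # Table 1
                    epochs=40,                  # Table 1
                    validation_data=val_gen,    # ~20% of the training data
                    validation_steps=50)        # assumed value
```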

Let S and G denote the segmented result and the ground truth, respectively. The Dice coefficient is then defined as in equation (1):

Dice coefficient = 2|G ∩ S| / (|G| + |S|)        (1)
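In Keras this is typically computed on flattened tensors, with a small smoothing term (an assumed value here) so the ratio stays defined when both masks are empty; a common formulation:

```python
from keras import backend as K

def dice_coefficient(y_true, y_pred, smooth=1.0):
    """Dice = 2|G n S| / (|G| + |S|), computed on soft predictions."""
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)   # soft |G n S|
    return (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
```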

In this work, we calculated the Dice coefficients for both datasets; they are shown in Table 2.

    Table 2. Dice coefficients of used datasets

Dataset        Training Dice coeff.    Validation Dice coeff.
1. MIAS        0.9623                  0.8582
2. TMC data    0.9027                  0.8952

Figures 7(a) and 7(b) show the loss and the accuracy, in terms of the Dice coefficient, obtained on the training and validation data of both datasets combined; the red line indicates the validation data and the blue line the training data.

Fig. 7(a). Loss obtained on the training and validation data

Fig. 7(b). Accuracy, in terms of the Dice coefficient, on the training and validation data

4. Conclusion

The proposed deep learning segmentation approach is tested on images from different datasets. The main advantage of the approach presented in this paper is its uniform nature: it can be applied to different medical image segmentation tasks. Here, we segment lesions instead of only detecting them, as classical detection approaches do. The advantage of separating out the lesion is that its growth and morphological changes over time can be detected easily. Furthermore, the correlation of MLO and CC views can be established more precisely if data on the shape of the lesion's projection is available.

REFERENCES

[1] Jose Manuel Ortiz-Rodriguez, Carlos Guerrero-Mendez, Maria del Rosario Martinez-Blanco, Salvador Castro-Tapia, Mireya Moreno-Lucio, Ramon Jaramillo-Martinez, Luis Octavio Solis-Sanchez, Margarita de la Luz Martinez-Fierro, Idalia Garza-Veloz, Jose Cruz Moreira Galvan and Jorge Alberto Barrios Garcia, Breast Cancer Detection by Means of Artificial Neural Networks, IntechOpen, 2018.

[2] Meenakshi M. Pawar, Sanjay N. Talbar, Local entropy maximization based image fusion for contrast enhancement of mammogram, Journal of King Saud University - Computer and Information Sciences, 19 Feb 2018.

[3] Suhas G. Sapate, Abhishek Mahajan, Sanjay N. Talbar, Nilesh Sable, Subhash Desai, Meenakshi Thakur, Radiomics based detection and characterization of suspicious lesions on full field digital mammograms, Elsevier, 15 May 2018.

[4] Heyi Li, Dongdong Chen, William H. Nailon, Mike E. Davies, David Laurenson, Improved Breast Mass Segmentation in Mammograms with Conditional Residual U-Net, arXiv:1808.08885 [cs.CV], 27 Aug 2018.

[5] Min Sun Bae, Woo Kyung Moon, Jung Min Chang, Hye Ryoung Koo, Won Hwa Kim, Nariya Cho, Ann Yi, Bo La Yun, Su Hyun Lee, Mi Young Kim, Eun Bi Ryu, Mirinae Seo, Breast Cancer Detected with Screening US: Reasons for Nondetection at Mammography, Radiology 270, pp. 369-377, 1 Feb 2014.

[6] Solveig R. Hoff, Anne Line Abrahamsen, Jon Helge Samset, Einar Vigeland, Olbjorn Klepp, Solveig Hofvind, Breast Cancer: Missed Interval and Screening-detected Cancer at Full-Field Digital Mammography and Screen-Film Mammography - Results from a Retrospective Review, Radiology 264, pp. 378-386, 1 Aug 2012.

[7] Constance D. Lehman, Robert D. Wellman, Diana S. M. Buist, Diagnostic Accuracy of Digital Screening Mammography With and Without Computer-Aided Detection, JAMA Internal Medicine 175, p. 1828, Nov 2015.

[8] Syed Muhammad Anwar, Muhammad Majid, Adnan Qayyum, Muhammad Awais, Majdi Alnowami, Muhammad Khurram Khan, Medical Image Analysis using Convolutional Neural Networks: A Review, Journal of Medical Systems, 21 May 2019.

[9] Dan C. Ciresan, Luca M. Gambardella, Alessandro Giusti, Jurgen Schmidhuber, Deep neural networks segment neuronal membranes in electron microscopy images, NIPS, pp. 2852-2860, 2012.

[10] Jonathan Long, Evan Shelhamer, Trevor Darrell, Fully Convolutional Networks for Semantic Segmentation, CVPR, 2015.

[11] Mammographic Image Analysis Society (MIAS) database, http://peipa.essex.ac.uk/info/mias.html

[12] Geoffrey E. Hinton, Simon Osindero, Yee-Whye Teh, A Fast Learning Algorithm for Deep Belief Nets, Neural Computation, vol. 18, no. 7, pp. 1527-1554, 2006.

[13] Jonathan Long, Evan Shelhamer, Trevor Darrell, Fully convolutional networks for semantic segmentation, Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 3431-3440, 07-12 June 2015.

[14] Timothy de Moor, Alejandro Rodriguez-Ruiz, Albert Gubern-Mérida, Ritse Mann, Jonas Teuwen, Automated soft tissue lesion detection and segmentation in digital mammography using a u-net deep learning network, 8 Mar 2018.

[15] Olaf Ronneberger, Philipp Fischer, Thomas Brox, U-Net: Convolutional Networks for Biomedical Image Segmentation, Med. Image Comput. Comput.-Assist. Interv. (MICCAI), pp. 234-241, 2015.
