Strucseg – A Pipeline for Segmenting Brain Structures using Open Source Tools



R. Neela

Research Scholar, Manonmaniam Sundaranar University,

Tirunelveli, India.

Dr. R. Kalaimagal

Assistant Professor, Dept. of Computer Science, Govt. Arts College for Men (Autonomous), Chennai, India.

Abstract: Structure segmentation is often the first step in the diagnosis and treatment of various diseases. Because of the anatomical variations among brain structures and the overlap between them, segmenting brain structures is a crucial step. Though much research has been done in this area, it remains a challenging field. Using prior knowledge about the spatial relationships among structures, encoded as atlases, structures with large dissimilarities can be segmented efficiently. Multiple atlases prove better than a single atlas, especially when there are dissimilarities in the structures. In this paper, we propose a pipeline for segmenting brain structures using open source tools. We test our pipeline for segmenting brain structures in MRI scans using the public data set provided by MIDAS.

Keywords: Segmentation; brain structures; multiple atlases

  1. INTRODUCTION

    Segmenting brain tissues and structures is the primary step in the diagnosis and treatment of various brain-related disorders. MRI scans give clear and detailed images of soft tissue. Bone cannot be visualized well with MRI because bone tissue does not contain much water. For this reason, MRI and fMRI are widely used for tissue and structure segmentation. Tissue segmentation can be done using intensity thresholding, but segmenting brain structures is more complicated because a structure can comprise more than one tissue type. Various tools and techniques are available for segmenting brain structures. Among the proposed techniques, segmentation using atlases gives good results. Since there are large anatomical variations in the structures, using multiple atlases rather than a single atlas gives superior results. The accuracy of the result largely depends on the selection of atlases.

  2. MATERIALS AND METHODS

    1. Processing Pipeline

      Fig. 1. Pipeline for segmenting brain structures

    2. Intensity Inhomogeneity Correction using ANTs' N4BiasFieldCorrection

      Intensity inhomogeneity is caused mainly by RF coil imperfections. This artifact results in intensity variations within the same tissue, and these variations affect segmentation results. Various methods have been proposed for intensity inhomogeneity correction. N4 is a bias correction algorithm, a variation of the N3 algorithm, described by Tustison et al. [1]. Using the N4BiasFieldCorrection command [2], the intensity inhomogeneity of an MRI scan can be corrected.

      N4BiasFieldCorrection -i inputfile.nii.gz -o outputfile.nii.gz

      Fig. 2a. Before bias correction Fig. 2b. After bias correction

    3. Global Rigid Registration with the MNI Template Space using 3D Slicer's BRAINS

      Registration is an important preprocessing step in the analysis of images. It is the process of finding the geometric transformation that brings two images into alignment. Depending on the nature of the transformation, registration can be classified as rigid, affine, projective, curved, or non-rigid [3].
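To illustrate the rigid case: a rigid transformation in 3-D is a rotation plus a translation, and therefore preserves distances between points. A minimal NumPy sketch (the function name and parameters are illustrative; real registration tools such as BRAINS estimate these parameters by optimizing a similarity metric, rather than applying known ones):

```python
import numpy as np

def rigid_transform(points, angle_z, translation):
    """Apply a 3-D rigid transform (rotation about the z-axis plus a
    translation) to an (N, 3) array of point coordinates.
    Illustrative only: a real registration estimates these parameters."""
    c, s = np.cos(angle_z), np.sin(angle_z)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    # (p @ R.T)[i] is R applied to points[i]; then shift by the translation
    return points @ R.T + np.asarray(translation)
```

Affine registration generalizes this by also allowing scaling and shearing, while non-rigid registration allows a spatially varying deformation field.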

      Fig. 3a. MNI template image (T1, fixed). Fig. 3b. Output image after global rigid registration.

    4. Skull Stripping using the tool ROBEX

      Skull stripping removes non-brain tissue from MR brain images; many skull stripping methods have been proposed [4-9]. Skull stripping improves the performance of the registration and segmentation steps. We use the ROBEX tool proposed in [9] and downloaded from [10].

      Fig. 4a. Input image. Fig. 4b. After masking. Fig. 4c. Image mask.

    5. Non-Rigid Registration of the Subject Image with the Template Image using ANTs' ANTS and WarpImageMultiTransform Tools

      There is extensive research on the segmentation of brain structures. Several studies have shown that applying prior knowledge in the segmentation process gives better performance [11-13]; in particular, using multiple atlases gives greater accuracy when there are large variations in the structures [14]. To transfer labels from the atlas images, the target image should be aligned non-rigidly with all atlases and a deformation field should be calculated. Using this field, labels from the atlas images are propagated to the target image.

      Fig. 5a. Warped image. Fig. 5b. Final registered image.
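The atlas-selection step described in the next subsection ranks each candidate atlas by its mutual information with the test image. A minimal histogram-based sketch of that measure, assuming same-shaped NumPy arrays (the function name and bin count are illustrative; this is not the ANTs implementation):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two images of the same
    shape. Higher values indicate greater statistical dependence between
    the two intensity distributions."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()          # joint probability
    px = pxy.sum(axis=1)             # marginal of img_a
    py = pxy.sum(axis=0)             # marginal of img_b
    nz = pxy > 0                     # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())
```

To pick the top atlases, one would compute this score for every atlas-test pair and keep the highest-scoring ones.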

    6. Label Fusion Using Patch Differences

    For each voxel TXi in the test image, we have to select an optimal label from a label set (L11, L12, ..., LMN), where N is the number of atlases and M is the number of voxels. In our experiment, we select 10 atlas images: mutual information between each atlas and the test image is calculated, and the top 10 atlases are kept. Since the intensity of a voxel is influenced by its surrounding voxels, the label of a voxel is also determined by the labels of its neighbouring voxels. So, for each voxel TXi in the test image, we choose a 3×3 neighbourhood patch, and from the patches of all the segmented (registered atlas) images we select the one closest to the test patch, using Euclidean distance as the metric. The patch with the smallest distance is selected for each voxel, and the label from the selected patch is propagated to the test patch. This procedure is repeated for all the voxels in the test image.
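The patch-based fusion described above can be sketched as follows, shown in 2-D with a 3×3 patch for brevity (array and function names are illustrative; real volumes would use 3-D arrays and a 3×3×3 neighbourhood):

```python
import numpy as np

def fuse_labels(test_img, atlas_imgs, atlas_labels, radius=1):
    """Patch-based label fusion: for each voxel, take the label from the
    registered atlas whose surrounding patch has the smallest Euclidean
    distance to the test-image patch centred on that voxel."""
    pad = radius
    t = np.pad(test_img, pad, mode='edge')
    atlases = [np.pad(a, pad, mode='edge') for a in atlas_imgs]
    fused = np.zeros_like(test_img, dtype=atlas_labels[0].dtype)
    rows, cols = test_img.shape
    for i in range(rows):
        for j in range(cols):
            patch = t[i:i + 2 * pad + 1, j:j + 2 * pad + 1]
            best_d, best_lab = np.inf, 0
            for a, lab in zip(atlases, atlas_labels):
                # Euclidean distance between test patch and atlas patch
                d = np.linalg.norm(patch - a[i:i + 2 * pad + 1, j:j + 2 * pad + 1])
                if d < best_d:
                    best_d, best_lab = d, lab[i, j]
            fused[i, j] = best_lab
    return fused
```

The nested loops make the cost proportional to the number of voxels times the number of atlases, which is consistent with fusion being the slowest step of the pipeline.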

  3. RESULTS AND DISCUSSION

    All the test and atlas images used in this paper were downloaded from the MIDAS digital archive system [15], where the data are supplied already intensity inhomogeneity corrected. Preprocessing is nevertheless applied to all subject and atlas scans: intensity inhomogeneities were corrected using ANTs' N4BiasFieldCorrection, then skull stripping was performed using ROBEX. The test image is registered with a set of atlas images using ANTS, and labels are propagated from the atlas images using WarpImageMultiTransform. This step gives a set of segmented images for a test image. Finally, the labels from these segmented images are fused together to give the final segmentation. Bias field correction takes about 20 seconds per image, global registration 2 seconds per image, and skull stripping 12 seconds per image; non-rigid registration and label propagation take about 40 minutes for each image-atlas pair, and label fusion takes about 1.5 hours. All these steps are run from shell scripts, and sample commands are given below:

    >N4BiasFieldCorrection -d 3 -i h:\data\training\1003_3.nii -o h:\newdata\bias_1003_3.nii.gz -s 4

    >BRAINSFit --fixedVolume h:\newdata\bias_1003_3.nii.gz --movingVolume mnitemplate.nii.gz --outputTransform h:\newdata\rigid_1003_3.nii.gz --transformType Rigid --histogramMatch

    >ROBEX h:\newdata\rigid_1003_3.nii.gz h:\results\skullstrip\1003_3_robex.nii.gz h:\results\skullstrip\1003_3_robex_mask.nii.gz

    >ANTS 3 -m PR[h:\results\skullstrip\1003_3_robex.nii.gz,h:\results\skullstrip\1007_3_robex.nii.gz,1,4] -i 50x50x0 -t Elast[1.0] -o h:\register\skull_prnew_seg37.nii.gz

    >WarpImageMultiTransform 3 h:\results\skullstrip\1007_3_robex.nii.gz h:\register\skull_prnew_total37.nii.gz -R h:\results\skullstrip\1003_3_robex.nii.gz h:\register\skull_prnew_seg37warp.nii.gz h:\register\skull_prnew_seg37affine.txt --use-NN

    >WarpImageMultiTransform 3 h:\data\training\1007_3_glm.nii h:\register\skull_prnew_glm37.nii.gz -R h:\results\skullstrip\1003_3_robex.nii.gz h:\register\skull_prnew_seg37warp.nii.gz h:\register\skull_prnew_seg37affine.txt --use-NN

    Fig. 6a. Test image. Fig. 6b. Test image after skull stripping.


    Fig. 6c, 6d. Registered test image, its segmentation, and its labeled image (for one atlas set).


    Fig. 6e. Final segmentation of the test image after fusion.

    To measure the efficiency of our pipeline, we use the Dice coefficient, the Jaccard coefficient, and the kappa statistic, comparing the segmented image with the gold-standard image. The experiment was conducted on 15 test images. For the test image (Fig. 5a), our pipeline gives DICE = 0.954700, JACCARD = 0.913327, and KAPPA = 0.945266. The same test image gives DICE = 0.824501, JACCARD = 0.902120, and KAPPA = 0.932134 for simple majority voting, and DICE = 0.924500, JACCARD = 0.945255, and KAPPA = 0.89344 for weighted majority voting.
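The three overlap measures can be computed directly from binary masks. A minimal NumPy sketch (names are illustrative; `gold` denotes the gold-standard mask):

```python
import numpy as np

def overlap_metrics(seg, gold):
    """Dice coefficient, Jaccard coefficient, and Cohen's kappa between
    two binary masks of the same shape."""
    seg = seg.astype(bool)
    gold = gold.astype(bool)
    inter = np.logical_and(seg, gold).sum()
    union = np.logical_or(seg, gold).sum()
    dice = 2 * inter / (seg.sum() + gold.sum())
    jaccard = inter / union
    # Cohen's kappa: observed agreement corrected for chance agreement
    po = (seg == gold).mean()
    pe = seg.mean() * gold.mean() + (1 - seg.mean()) * (1 - gold.mean())
    kappa = (po - pe) / (1 - pe)
    return dice, jaccard, kappa
```

All three measures equal 1 for a perfect segmentation; Dice and Jaccard ignore true negatives, whereas kappa accounts for them via the chance-agreement term.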

  4. CONCLUSION

We have presented a pipeline for segmenting brain structures using open source tools, and compared it with majority voting and weighted majority voting. Of all the operations, only the non-rigid registration and fusion steps

[Chart: similarity measure vs. number of atlases (10, 7, 5, 4)]

Fig. 7a. Performance comparison of Strucseg, MV, and WMV.

consume the most time: each set of registrations takes 30 to 40 minutes, and the final fusion takes about 1.5 hours. Comparing our result with the gold standard takes 49,332 milliseconds. Fig. 7a and Table I show the performance of simple majority voting, weighted majority voting, and our pipeline for a single test image. Fig. 7b and Table II show the segmentation performance of our pipeline for various numbers of atlases on the same test image; 10 atlases gives the maximum accuracy. Compared to majority voting and weighted majority voting, our pipeline gives better accuracy.

TABLE I. PERFORMANCE COMPARISON OF VARIOUS ALGORITHMS

MEASURES    STRUCSEG    MV          WMV
DICE        0.9547      0.824501    0.9245
JACCARD     0.913327    0.90212     0.945255
KAPPA       0.945266    0.932134    0.89344

TABLE II. COMPARISON OF OUR SEGMENTATION PIPELINE FOR DIFFERENT NUMBERS OF ATLASES

Atlases    Total       Jaccard     Dice        False negative    False positive
10         0.895681    0.822509    0.902612    0.104319          0.09035
7          0.880107    0.802049    0.890152    0.119893          0.099571
5          0.85359     0.772619    0.871726    0.14641           0.10935
4          0.817069    0.729576    0.843647    0.182931          0.127988

Fig. 7b. Similarity measure of the pipeline for different numbers of atlases (10, 7, 5, 4).

REFERENCES

  1. Nicholas J. Tustison, Brian B. Avants, Philip A. Cook, Yuanjie Zheng, Alexander Egan, Paul A. Yushkevich, and James C. Gee, "N4ITK: Improved N3 Bias Correction", IEEE Transactions on Medical Imaging, 29(6):1310-1320, June 2010.

  2. http://stnava.github.io/ANTs/.

  3. Mani, V.R.S, and Dr.S. Arivazhagan. "Survey of Medical Image Registration." Journal of Biomedical Engineering and Technology 1.2 (2013): 8-25.

  4. Kapur T, Grimson WEL, Wells WM III, Kikinis R. Segmentation of brain tissue from magnetic resonance images. Medical Image Analysis 1996;1(2):109-127.

  5. Yoon UC, Kim JS, Kim JS, Kim IY, Kim SI. Adaptable fuzzy C-means for improved classification as a preprocessing procedure of brain parcellation. J Digit Imaging 2001;14(2):238-240.

  6. Lemieux L, Hagemann G, Krakow K, Woermann FG. Fast, accurate, and reproducible automatic segmentation of the brain in T1-weighted volume MRI data. Magnetic Resonance in Medicine 1999;42(1):127-135.

  7. Lemieux L, Hammers A, Mackinnon T, Liu RS. Automatic segmentation of the brain and intracranial cerebrospinal fluid in T1-weighted volume MRI scans of the head, and its application to serial cerebral and intracranial volumetry. Magnetic Resonance in Medicine 2003;49(5):872-884.

  8. S. Sadananthan, W. Zheng, M. Chee, and V. Zagorodnov, "Skull stripping using graph cuts", NeuroImage, vol. 49, no. 1, pp. 225-239, 2010.

  9. Iglesias JE, Liu CY, Thompson P, Tu Z: "Robust Brain Extraction Across Datasets and Comparison with Publicly Available Methods", IEEE Transactions on Medical Imaging, 30(9), 2011, 1617-1634.

  10. http://www.nitrc.org/projects/robex/

  11. Xian Fan, Yiqiang Zhan & Gerardo Hermosillo Valadez. (2009). A Comparison study of atlas based image segmentation: the advantage of multi-atlas based on shape clustering. Proc. SPIE 7259, Medical Imaging. doi:10.1117/12.814157.

  12. Aljabar, P., Heckemann, R., Hammers, A., Hajnal, J.V., & Rueckert, D. (2009). Multi-atlas based segmentation of brain images: atlas selection and its effect on accuracy. NeuroImage, 46(3), 726-739. Retrieved from http://www.doc.ic.ac.uk/~pa100/pubs/aljabarNeuroImage2009-selection.pdf.

  13. M. R. Sabuncu, B. T. T. Yeo, K. Van Leemput, B. Fischl, and P. Golland, "A generative model for image segmentation based on label fusion", IEEE Transactions on Medical Imaging, vol. 29, no. 10, Article ID 5487420, pp. 1714-1729, 2010.

  14. Rohlfing, T., Robert Brandt, Randolf Menzel, & Calvin R. Maurer, Jr. (2004). Evaluation of atlas selection strategies for atlas-based image segmentation with application to confocal microscopy images of bee brains. NeuroImage, 21(4), 1428-1442. doi: 10.1016/j.neuroimage.2003.11.010.

  15. http://placid.nlm.nih.gov/user/48

ABOUT THE AUTHORS

R. Neela is a research scholar at Manonmaniam Sundaranar University, Tirunelveli. She is currently working as an Assistant Professor in the Department of Computer Science at AVC College (Autonomous), Mannampandal, Mayiladuthurai. She has 14 years of teaching experience. Her research area is Medical Image Analysis.

Dr. R. KalaiMagal is currently working as an Assistant Professor in the Department of Computer Science at Government Arts College for Men (Autonomous), Nandanam, Chennai. She has more than 18 years of experience and is guiding 11 Ph.D. scholars and more than 5 M.Phil. students. She has written two Computer Science books, in Mobile Computing and in Data Structures and Algorithms. Her research area is Routing in Mobile Computing.
