Review of Segmentation Algorithms in Cerebellar Lesion Detection from MRI Images using Deep Learning

DOI : 10.17577/IJERTV10IS060279


H S Mohan1, Mrs. Prathibha G 2, Pujith K L3, Deepika R4, Subhashini V R5

1Department of Electronics and Communication Engineering, 2,3,4Department of Computer Science & Engineering, 1,2,3,4Navkis College of Engineering,

Karnataka, India

Abstract The brain is the central organ of the human body and controls the nervous system. In this paper, we give a brief insight into the different techniques and contributions of different researchers for the segmentation and detection of brain tumors. The MRI scan image is considered a high-quality input for experiments compared to other scans. In the future, we will develop a deep learning-based automated brain tumor detection system and compare it with existing state-of-the-art techniques for better and more accurate results.

Keywords Brain Tumor, MRI, Image Segmentation, Deep Learning.

  1. INTRODUCTION

A brain lesion is an area of injury or disease within the brain. The brain is surrounded by a bony shell called the skull. The brain is made up of three major parts: the cerebrum, the brainstem, and the cerebellum. The cerebrum is the main portion, arranged into left and right hemispheres, and executes higher functions such as vision, hearing, and reasoning. An irregular cell population is created by unregulated cell division. The human brain is the center of the nervous system and is a collection of a white mass of cells. Tumors of the brain are of two types: benign, which is not cancerous and poses no danger, and malignant, which is cancerous; it grows abnormally by multiplying cells rapidly, which can lead to the death of the patient if not detected. It is not easy to detect and identify a tumor manually. Magnetic Resonance Images (MRIs) are used to detect and identify the tumor using image processing techniques. To give a precise output, a strong segmentation method must be used. Brain tumor identification at an early stage is a challenging task, but improved techniques are now available that use various machine learning and deep learning algorithms. In recent years, the automatic identification of brain tumors has attracted great interest.

    Figure 1.1: The MRI of Normal and Tumour-Filled Brains

To detect a brain tumor in a patient, we consider data such as MRI images of the patient's brain. The problem is to identify whether or not a tumor is present in the patient's brain. It is very important to detect tumors at an early stage for the healthy life of a patient. There is much literature on detecting these kinds of brain tumors and improving detection accuracy. The segmentation, detection, and extraction of infected tumor areas from magnetic resonance (MR) images is a primary concern but a tedious and time-consuming task performed by radiologists or clinical experts, and its accuracy depends on their experience alone. The use of computer-aided technology therefore becomes necessary to overcome these limitations. The brain tumor severity can then be estimated using a Convolutional Neural Network, which gives accurate results.

Figure 1.2: a. Meningioma  b. Glioma  c. Pituitary

A variety of image-processing techniques and methods have been used for the diagnosis and treatment of a brain tumor. Segmentation is the fundamental step in image processing techniques and is used to extract the infected region of brain tissue from MRIs. Segmentation of the tumor region is an important task for cancer diagnosis, treatment, and the evaluation of treatment outcomes. A vast number of semi-automatic and automatic segmentation methods and techniques are used for tumor segmentation.

Different medical imaging techniques and methods, including X-ray, Magnetic Resonance Imaging (MRI), Ultrasound, and Computed Tomography (CT), have a great influence on the diagnosis and treatment of a patient's brain. The formation of abnormal groups of cells inside or near the brain leads to the initiation of a brain tumor. The abnormal cells disrupt the processing of the brain and affect the health of the patient. Brain imaging analysis, diagnosis, and treatment with the adopted medical imaging techniques are the main focus of research for researchers, radiologists, and clinical experts. Several advanced MRI techniques, including Diffusion Tensor Imaging (DTI), MR Spectroscopy (MRS), and Perfusion MR, are used for the analysis of brain tumors through MRI.

Given the wide range of applications of deep learning, the objective of this article is to review major deep learning concepts pertinent to brain tumor analysis (e.g., segmentation, classification, prediction, and evaluation). A review conducted by summarizing a large number of scientific contributions to the field (i.e., deep learning in brain tumor analysis) is presented in this study. A coherent taxonomy of the research landscape has also been mapped from the literature, and the major aspects of this emerging field have been discussed and analyzed. A critical discussion section on the limitations of deep learning techniques is included at the end to elaborate on open research challenges and directions for future work in this field.

The development of deep learning applications for brain tumor analysis motivated us to present a comprehensive review covering the entire field of brain tumor research, including segmentation, prediction, and classification, from both a methodology-driven and an application perspective. The review includes a large number of mostly recent research papers presenting an extensive variety of deep learning applications in brain tumor analysis, identified by querying titles and abstracts for the most relevant contributions (deep learning AND brain tumour).

In summary, the aims of this review are (a) to show the development of deep learning across the entire field of brain tumor analysis, (b) to identify open research challenges for successful deep learning methods in brain tumor tasks, and (c) to highlight successful deep learning contributions to brain tumor analysis.

  2. ALGORITHMS FOR IMAGE SEGMENTATION

    1. Fuzzy C-Means algorithm:

One of the most widely used fuzzy clustering algorithms is the Fuzzy C-Means (FCM) clustering algorithm.

Fuzzy C-means (FCM) clustering was developed by J.C. Dunn in 1973 and improved by J.C. Bezdek in 1981.

Fuzzy clustering is a form of clustering in which each data point can belong to more than one cluster. Clustering, or cluster analysis, involves assigning data points to clusters such that items in the same cluster are as similar as possible, while items belonging to different clusters are as dissimilar as possible. Clusters are identified via similarity measures, such as distance, connectivity, and intensity. Different similarity measures may be chosen based on the data or the application.

The FCM algorithm has been employed by many researchers to initially segment MR images. Even though its efficiency and convergence rate are superior, the amount of computation and the complexity are higher. This is because each data element belongs to more than one cluster, and associated with each element is a set of membership levels.

The FCM algorithm attempts to partition a finite collection of pixels into a collection of C fuzzy clusters with respect to some given criterion. Depending on the data and the application, different types of similarity measures may be used to identify classes. The algorithm is based on the minimization of the following objective function:

J = \sum_{i=1}^{N} \sum_{j=1}^{C} u_{ij}^{m} \, |x_i - c_j|^2    (1)

where J is the objective function, N is the number of pixels in the image, C is the number of clusters, u_ij is the membership table (a table of N x C entries containing the membership value of each data point for each cluster), m is a fuzziness factor (a value larger than 1), x_i is the i-th pixel in N, c_j is the j-th cluster in C, and |x_i - c_j| is the Euclidean distance between x_i and c_j. Fig. 2.1 shows the result of segmenting a sample image using Fuzzy C-means.

      Fig. 2.1. A sample image segmented using FCM.
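To make the objective function in equation (1) and the membership table concrete, the following Python sketch implements the standard FCM update rules on one-dimensional pixel intensities. It is a minimal illustration, not the authors' implementation; the parameter names (n_clusters, m, max_iter) and the choice to cluster raw intensities are assumptions made for the example.

# Minimal sketch of Fuzzy C-Means on image pixel intensities (illustrative only).
import numpy as np

def fuzzy_c_means(pixels, n_clusters=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Cluster 1-D pixel intensities into n_clusters fuzzy clusters."""
    rng = np.random.default_rng(seed)
    N = pixels.shape[0]
    # Random initial membership matrix U (N x C); each row sums to 1.
    U = rng.random((N, n_clusters))
    U /= U.sum(axis=1, keepdims=True)

    for _ in range(max_iter):
        Um = U ** m
        # Cluster centers c_j: membership-weighted mean of the pixels.
        centers = (Um.T @ pixels) / Um.sum(axis=0)
        # Distances |x_i - c_j| between every pixel and every center.
        dist = np.abs(pixels[:, None] - centers[None, :]) + 1e-10
        # Membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1)).
        U_new = 1.0 / ((dist[:, :, None] / dist[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        if np.linalg.norm(U_new - U) < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Usage: segment a grayscale MRI slice by assigning each pixel to its
# highest-membership cluster.
# img = ...  # 2-D numpy array of intensities
# centers, U = fuzzy_c_means(img.reshape(-1).astype(float), n_clusters=3)
# labels = U.argmax(axis=1).reshape(img.shape)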

2. Level-Set Method (LSM): The level-set method is a conceptual framework for using level sets as a tool for the numerical analysis of surfaces and shapes. The advantage of the level-set model is that one can perform numerical computations involving curves and surfaces on a fixed Cartesian grid without having to parameterize these objects (this is called the Eulerian approach). The level-set method also makes it very easy to follow shapes that change topology, for example when a shape splits in two, develops holes, or undergoes the reverse of these operations. All of this makes the level-set method a great tool for modeling time-varying objects, such as the inflation of an airbag or a drop of oil floating in water.

It is an implicit and important mathematical method used to detect shapes and objects that change topologically over time, based on the theory of curve and surface evolution. It was first introduced by Osher and Sethian in 1988. The evolving surface or contour is represented as the zero level set of a higher-dimensional function, usually called the level set function, so the problem is defined in one higher dimension. There are two types of level set formulations: the time-dependent level set formulation and the stationary level set formulation.

In image processing, some studies have aimed at segmentation by using partial differential equations or the calculus of variations, converting the continuous image into a discrete image to be processed with a level set method. The method can handle curves and surfaces on a fixed Cartesian grid without having to parameterize these objects during processing. However, the algorithm is slow and does not implicitly preserve the level set function as a distance function.

Let m = (x, y), where

x \in [1, X], \quad y \in [1, Y]    (2)

and X, Y are the pixel dimensions of the processed image.

m(t) = (x(t), y(t)) is a point of the brain MRI image that changes as time changes; m(t) describes a position over time, and every point m(t) lies on the zero level of the evolving surface, as in the following equation (3):

\phi(m(t), t) = 0    (3)

Here, the method depends on a level set function \phi(x, y, t) governed by a partial differential equation (PDE), and the evolution is computed by the active contour by tracking the zero level set m(t).

Figure 2 is summarized by the following equation (4):

m(t) = \{ (x, y) : \phi(x, y, t) < 0 \text{ inside the contour}, \ \phi(x, y, t) > 0 \text{ outside the contour}, \ \phi(x, y, t) = 0 \text{ on the contour} \}    (4)

The initial function at t = 0 can be calculated from the following equations, and we obtain

\phi(m(t), t) = 0    (5)

By applying the chain rule:

\nabla\phi(m(t), t) \cdot m'(t) + \frac{\partial \phi}{\partial t} = 0    (6)

F \, |\nabla\phi| + \frac{\partial \phi}{\partial t} = 0    (7)

where F is the speed of the contour in its normal direction. From the following equation (8), we determine the function \phi:

\frac{\partial \phi}{\partial t} + F \, |\nabla\phi| = 0, \qquad \phi(0, x, y) = \phi_0(x, y)    (8)

where \phi_0(x, y) is the initial contour.

To end the segmentation process and obtain the optimal solution, the speed term should be regularized by applying equation (9); when its value is close to zero, the contour lies on an object boundary, and equation (10) is applied.

g = \frac{1}{1 + |\nabla (G_\sigma * I)|^2}    (9)

G_\sigma * I denotes the brain MRI image convolved with a Gaussian kernel and is used to obtain the image gradient. The well-known level set evolution for segmentation is then:

\frac{\partial \phi}{\partial t} = g \, |\nabla\phi| \, \mathrm{div}\!\left(\frac{\nabla\phi}{|\nabla\phi|}\right)    (10)

The level set method algorithm can be summarized as follows:

Algorithm: Level set method
Input: brain MRI image output by the previous algorithm.
Output: segmented brain MRI image.

1. Read the first cluster using a loop.

2. Test whether each point is inside, outside, or on the boundary:
   2.1. if \phi(x, y, t) > 0
   2.2. if \phi(x, y, t) < 0
   2.3. if \phi(x, y, t) = 0

3. Calculate the initial function \phi_0.

4. Determine \phi by equation (8).

5. If F is on the boundary, apply equations (9) and (10).

6. Repeat all steps until all clusters are finished.

7. End.
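As a rough illustration of equations (9) and (10), the following Python sketch evolves a level set function with a simple explicit finite-difference scheme. It is a minimal sketch under stated assumptions, not the paper's implementation: the edge-indicator parameters, the time step, the iteration count, and the circular initial contour (cx, cy, r) are all placeholders chosen for the example.

# Minimal sketch of the level-set evolution of Eq. (10) with the edge
# indicator of Eq. (9); explicit finite differences, illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_indicator(image, sigma=1.5):
    """g = 1 / (1 + |grad(G_sigma * I)|^2), as in Eq. (9)."""
    smoothed = gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    return 1.0 / (1.0 + gx**2 + gy**2)

def evolve_level_set(image, phi0, n_iter=200, dt=0.25, eps=1e-8):
    """Evolve phi by  phi_t = g * |grad phi| * div(grad phi / |grad phi|)."""
    g = edge_indicator(image)
    phi = phi0.astype(float).copy()
    for _ in range(n_iter):
        gy, gx = np.gradient(phi)
        norm = np.sqrt(gx**2 + gy**2) + eps
        # Curvature term: div(grad phi / |grad phi|).
        nyy, _ = np.gradient(gy / norm)
        _, nxx = np.gradient(gx / norm)
        curvature = nxx + nyy
        phi += dt * g * norm * curvature   # small time step for stability
    return phi

# Usage: start from a signed function that is negative inside an initial
# circle and positive outside; the zero level set is the final contour.
# img = ...                      # 2-D MRI slice as a numpy array
# yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
# phi0 = np.sqrt((xx - cx)**2 + (yy - cy)**2) - r   # cx, cy, r chosen by the user
# phi = evolve_level_set(img, phi0)
# mask = phi < 0                 # segmented region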

3. K-Means Clustering Algorithm:

K-means clustering is one of the most common exploratory data analysis techniques used to get an intuition about the structure of the data. It can be defined as the task of identifying subgroups in the data such that data points in the same subgroup (cluster) are very similar, while data points in different clusters are very different. In other words, we try to find homogeneous subgroups within the data such that data points in each cluster are as similar as possible according to a similarity measure such as Euclidean distance or correlation-based distance. The decision of which similarity measure to use is application-specific.

The K-means algorithm is an iterative algorithm that tries to partition the dataset into K pre-defined, distinct, non-overlapping subgroups (clusters), where each data point belongs to only one group. It tries to make the intra-cluster data points as similar as possible while also keeping the clusters as different (far apart) as possible. It assigns data points to a cluster such that the sum of the squared distances between the data points and the cluster's centroid (the arithmetic mean of all the data points that belong to that cluster) is at a minimum. The less variation we have within clusters, the more homogeneous (similar) the data points are within the same cluster.

      Figure 2.3: visualization of clustered data

      Algorithm: K-means clustering algorithm

      Input: filtered brain MRI image.

      Output: dividing filtered brain MRI image into k-clusters by K-means algorithm.

1. Initialize the cluster centroids with k random values.

2. Assign each point to the nearest cluster center.

3. Recalculate each cluster's centroid value.

4. Check whether the centroids have changed:

   1. Yes: repeat steps 2 to 4.

   2. No: go to step 5.

      5. End.

Figure 2.3.1: Before and after applying the K-means algorithm
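The steps listed above can be written compactly in code. The following is a minimal NumPy sketch, not any particular library implementation; it assumes a grayscale image flattened to a 1-D intensity array, and the parameter names (k, max_iter) are chosen for the example.

# Minimal sketch of K-means on image pixel intensities (illustrative only).
import numpy as np

def kmeans(pixels, k=3, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: initialize centroids with k randomly chosen pixel values.
    centroids = pixels[rng.choice(pixels.shape[0], k, replace=False)]
    for _ in range(max_iter):
        # Step 2: assign each pixel to the nearest centroid.
        labels = np.argmin(np.abs(pixels[:, None] - centroids[None, :]), axis=1)
        # Step 3: recompute each centroid as the mean of its assigned pixels.
        new_centroids = np.array([pixels[labels == j].mean() if np.any(labels == j)
                                  else centroids[j] for j in range(k)])
        # Step 4: stop when the centroids no longer change.
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels

# Usage: cluster a grayscale MRI slice into k intensity groups.
# img = ...  # 2-D numpy array
# centroids, labels = kmeans(img.reshape(-1).astype(float), k=3)
# segmented = labels.reshape(img.shape)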

    4. Support Vector Machine (SVM):

The Support Vector Machine (SVM) approach is considered a good candidate due to its high generalization performance, especially when the dimension of the feature space is very high. The SVM uses the following idea: it maps the input vector x into a high-dimensional feature space Z through some non-linear mapping chosen a priori. Using images as input, an SVM gives accuracy comparable to that of a neural network with hand-designed features in a handwriting recognition task. Those training points for which the equality constraint of the separating plane is satisfied, i.e., those which end up lying on one of the hyperplanes (H1, H2) and whose removal would change the solution found, are called Support Vectors (SVs). An SVM classifier can classify the brain into tumor and non-tumor categories using T1-weighted and contrast-enhanced T1-weighted images. In our project, some of the functions used for the implementation of SVM are fitsvm(), crossval(), and kfoldloss(). The SVM method has the advantages of generalization and of working in a high-dimensional feature space, but it assumes that the data are independently and identically distributed, which is not appropriate for tasks such as segmenting medical images with irregularity and noise; it should therefore be combined with other strategies that take the spatial context of the data into account. A further benefit of such classifiers is that they are independent of the dimensionality of the feature space and the results obtained are accurate, although the training time is very high. In addition, the problem of patient-specific learning and storage should be added to the disadvantages of SVM-based strategies. We also note that SVMs do not consider negative information and therefore cannot learn from such feedback well.

Figure 2.4: The classification process of SVM

SVM is based on the optimal hyperplane for linearly separable patterns, but it can be extended to patterns that are not linearly separable by transforming the original data to map it into a new space. SVMs are based on an abstract model of learning and come with theoretical guarantees about their performance. They also have a modular design that allows one to design and apply their components separately, and they are not affected by local minima. Support vectors are the elements of the training set that would change the location of the dividing hyperplane if removed [19]; they are the critical elements of the training set. The problem of discovering the best hyperplane is an optimization problem and can be solved by optimization techniques.
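The text above refers to MATLAB functions (fitsvm(), crossval(), kfoldloss()). As a hedged illustration of the same workflow, the sketch below uses scikit-learn's SVC with cross-validation; the feature vectors, labels, and kernel settings are placeholders assumed for the example, not the paper's actual features or data.

# Illustrative SVM classification of MRI-derived feature vectors with
# scikit-learn; placeholder random data stands in for real features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one feature vector per MRI slice (e.g., intensity/texture features),
# y: labels (1 = tumor, 0 = non-tumor).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = rng.integers(0, 2, size=100)

# RBF-kernel SVM; the non-linear kernel plays the role of the mapping into
# the high-dimensional feature space Z described above.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))

# 5-fold cross-validation, analogous in spirit to crossval()/kfoldloss().
scores = cross_val_score(clf, X, y, cv=5)
print("Cross-validated accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))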

3. CONCLUSION

In this work, several methodologies were examined to describe the conventional stages of MRI image processing, and the individual segmentation approaches were analyzed. In conjunction with this, the different methodologies proposed by researchers were considered, leading to the conclusion that machine learning plays an important role in brain tumor detection and classification together with an appropriate segmentation approach. Along with this, a comparison between K-means, Fuzzy C-means, the level set method, and SVM has also been drafted.
