A Survey on Classification Methods of Brain MRI for Alzheimer’s Disease

DOI: 10.17577/IJERTV7IS050218


Mamata Lohar

Department of Electronics and Telecommunication, MMIT

Pune, India

Rashmi Patange

Department of Electronics and Telecommunication, MMIT

Pune, India

Abstract - Alzheimer's disease (AD) is the most common form of dementia. No available treatment stops or reverses the progression of the disease, which is ultimately fatal, and no technique can currently confirm an AD diagnosis with complete certainty. Patients with AD are identified through a combination of brain imaging and clinical assessment for signs of memory impairment. Automated techniques are therefore needed to detect the disease before irreversible damage occurs. There have recently been many advances in biomarkers for risk assessment, diagnosis and monitoring of disease progression, and neuroimaging combined with machine learning has been widely studied for the detection of Alzheimer's disease. Our research work focuses on automatic classification methods for the detection of Alzheimer's disease, with a primary emphasis on improving prediction accuracy, which can help practitioners detect Alzheimer's disease and its progression stages: Normal Control (NC), Mild Cognitive Impairment (MCI) and Alzheimer's disease (AD). This paper surveys recent studies directed towards semi- or fully automatic computer-aided diagnosis of AD progression. It compares the methods implemented, the classes considered, the databases used, the evaluation parameters and the results obtained, together with background on the disease.

Keywords - Alzheimer's disease (AD); Classification Techniques; Database; Feature Extraction; Magnetic Resonance Imaging (MRI); Computer Aided Diagnosis (CAD)

  1. INTRODUCTION

Abnormality detection in Magnetic Resonance (MR) brain images is a challenging task. The difficulty in brain image analysis is mainly due to the requirement of detection techniques with high accuracy and quick convergence time. The detection of abnormalities in brain images is a two-step process. Initially, the abnormal MR brain images are classified into different categories (image classification), since treatment planning varies for different types of abnormalities. Further, the abnormal portion is extracted (image segmentation) to perform volumetric analysis, which verifies the success rate of the treatment given to the patient. Conventionally, the detection process is performed manually, which is highly prone to error because it depends on human perception [27].

Dementia is a general brain disorder, of which Alzheimer's disease is the most common form; it is a progressive and fatal brain disease. It destroys brain cells, interfering with memory, thinking and behavior severely enough to affect a person's work, hobbies and social life, and it worsens over time. In its diagnosis, image pre-processing is one of the preliminary steps required to ensure high accuracy in the subsequent steps. Raw MR images normally contain many artifacts, such as intensity inhomogeneities and extra-cranial tissues, which reduce the overall accuracy. Studies typically work with grayscale cross-sectional MRI images as well as pre-processed, segmented versions of each raw image; custom normalization and pre-processing methods are applied to the unprocessed brain images to ensure consistency. The next step in the automated diagnosis process is feature extraction, the technique of extracting specific features from the pre-processed images of different abnormal categories in such a way that within-class similarity is maximized and between-class similarity is minimized. The key process in the diagnosis system is brain image classification, whose main objective is to differentiate the different abnormal brain images based on the optimal feature set. This classification gives information about the presence of abnormality in the input brain image, which is used to detect dementia and Alzheimer's disease [27]. Several conventional classifiers are available for this categorization, such as K-NN, SVM, Naïve Bayes, PCA, ICA, LDA, ANN, decision trees and fuzzy techniques, which give good results with basic feature extraction for the diagnosis of dementia and Alzheimer's disease. K-Nearest Neighbors (K-NN) compares the test sample to the k nearest points and assigns a class based on the majority class of those points. Naïve Bayes classifies a test sample based on the most probable class. Support Vector Machines (SVM) attempt to find the hyperplane which best separates the data into the respective two classes [13]. PCA is commonly used to decrease the dimensionality of images while retaining most of the information. ICA is a probabilistic and multivariate method which ensures the identification of original components. LDA is used for feature extraction and to classify samples of unknown classes based on training samples with known classes. ANN is used to improve the accuracy of the classifiers. The goal of this comparison is to determine which technique yields the best results using a standard set of image features. The results can then be applied to more efficient feature extraction over many samples, while assigning the class using the best classical classification technique.

The rest of the paper is organized as follows: the effects of AD and the role of MRI in its diagnosis are presented in Section II; a comprehensive literature survey of work towards computer-aided diagnosis of AD is presented in Section III; Section IV describes the procedure for AD MR image classification; Section V covers feature extraction and selection; Section VI presents the different classification techniques, followed by the conclusion in Section VII.

  2. ALZHEIMER'S DISEASE

    1. Alzheimer's Disease and its Symptoms

      Dementia is a general term for a group of brain disorders; it is a decline of intellectual function, medically called a decline of cognition. Alzheimer's disease is a progressive dementia caused by a progressive degeneration of brain cells, resulting in impaired memory, thinking and behavior. It is named after Alois Alzheimer, the German doctor who first described it in 1907. As Alzheimer's disease affects different areas of the brain, specific functions or abilities are lost. Memory of recent events is often the first to be affected, but as the disease progresses, long-term memory is also lost. The disease also affects many of the brain's other functions, and consequently language, attention, judgment and many other aspects of behavior are affected.

      Some change in memory is normal as we grow older, but the effects of Alzheimer's disease are more severe than simple lapses. They include difficulties with communicating, learning, thinking and reasoning, impairments severe enough to have an impact on an individual's work, social activities and family life in the early and middle stages. Some of the most common symptoms that people with Alzheimer's disease experience are (Fig. 1) [37]: taking longer for routine tasks, becoming disoriented in well-known spaces, deterioration of social skills, emotional unpredictability, persistent and frequent memory loss, and apparent loss of enthusiasm.

      Fig. 1. Symptoms of Alzheimer's Disease

    2. Role of MRI in Diagnosis of AD

    Neuroimaging techniques enable the assessment of brain changes and are therefore promising for early detection of AD. Understanding the brain of Alzheimer's and dementia patients is of great clinical importance, and MRI can help detect Alzheimer's disease at an early stage, before irreversible damage has been done. By analyzing MRI exams of healthy subjects as well as those with mild cognitive impairment (MCI) and early Alzheimer's, specific biomarkers of the disease process can be examined. Fig. 2 shows the various stages of Alzheimer's disease.

    Fig. 2. Normal (NC), MCI and AD T1 Weighted Axial Brain MR Images

    All MR images are to some degree affected by each of the parameters that determine tissue contrast (i.e., T1, T2 and proton density), but the repetition time (TR) and echo time (TE) can be adjusted to emphasize a particular type of contrast. T1-weighted images best depict the anatomy and, if contrast material is used, may also show pathologic entities; however, T2-weighted images provide the best depiction of disease, because most tissues involved in a pathologic process have a higher water content than normal tissue, and the fluid causes the affected areas to appear bright on T2-weighted images. Proton-density-weighted MR images usually depict both the anatomy and the disease entity [42]. T1-weighted MR images offer high contrast between the brain soft tissues; on the contrary, T2-weighted and proton-density images exhibit very low contrast between GM and WM, but high contrast between CSF and brain parenchyma. Fig. 3 shows a comparison of T1, PD and T2 weighting.

    Fig. 3. T1, PD and T2 Weighted Axial Brain Images

  3. LITERATURE SURVEY

    Automated brain disorder diagnosis with MR images is becoming increasingly important in the medical field. The automated diagnosis involves two major steps: (a) image classification and (b) image segmentation. Image classification is the technique of categorizing abnormal images into different groups based on some similarity measure. The accuracy of this abnormality detection technique must be significantly high, since treatment planning is based on this identification. Many research papers with different approaches to image classification are reported in the literature. TABLE I gives an extensive literature survey of the types of classifiers, the different stages of AD, the sources of publicly available databases, the extracted features and the classification results used for abnormality detection in brain images.

    TABLE I. SURVEY ON AUTOMATIC CLASSIFICATION TECHNIQUES FOR ALZHEIMER'S DISEASE DETECTION

    Author (Venue, Year) [Ref] | Classifier used | Modality | No. of images | Source of images | Features | Results

    Kajal Gulhare (IJARCSSE, 2017) [1] | Deep Neural Network (DNN) | MRI | AD+MCI+NC = 150 | OASIS | Textural features, intensity | DNN accuracy = 96.6%

    Rupali Kamathe (ICTACT, 2017) [2] | K-NN, Adaboost | MRI | AD = 26, MCI = 68, NC = 107 | OASIS | Contrast, correlation, energy, homogeneity, absolute value, information measure of correlation | Accuracy, K-NN / Adaboost (%): Abnormal vs Normal 76.92 / 87; AD vs MCI 92.31 / 100; AD vs NC 92.75 / 100; MCI vs NC 83.33 / 90.28

    Eman M. Ali (IJCA, 2016) [3] | TANNN | MRI | AD+MCI+NC = 416 | OASIS | Statistical, symmetry, texture | Accuracy on OASIS (%): DA 94.4, NN 93.6, NB 95.2, SVM 92.5, DT 96.4, KNN 96.6, TANNN 99.2

    Antonio Martínez (HPC, 2015) [4] | Logistic regression classifier | MRI, PET | NC = 469, MCI = 893, AD = 280 | ADNI | Correlation-based features; forward selection and backward elimination of features | Acc/Sen/Spe/AUC (%): NC vs AD, calibration set 87.7/84.9/90.5/94.5, test set 85.4/91.3/80/92.2; NC vs MCI, calibration set 80.2/86.2/70.4/86.4, test set 78.5/80.5/75/84.1; MCI vs AD, calibration set 83.8/47.6/94.1/83.8, test set 80/33.3/93/81.5

    Archana M (IEEE, 2014) [5] | SVM | MRI | NC = 92, MCI = 97, AD = 45 | OASIS | Structure-tensor features: orientation, anisotropy index, λ1, λ2, energy | Acc/Sen/Spe (%): Normal vs AD: orientation 76.1/71.34/72.43, anisotropy index 65.76/62.54/59.85, λ1 51.17/48.46/45.32, λ2 87.39/85.56/83.45, energy 88.67/87.65/84.87; Normal vs MCI: orientation 65.8/71.3/65.8, anisotropy index 57.1/55.1/54.8, λ1 47.3/47.1/46.3, λ2 75.8/73.6/74.4, energy 80.3/76.4/78.3; MCI vs AD: orientation 66.7/64.3/62.5, anisotropy index 53.3/52.6/53.3, λ1 43.6/42.5/40.5, λ2 75.2/68.3/70.5, energy 79.1/74.7/76.7

    Bibo Shi (IEEE, 2014) [6] | Large margin nearest neighbors (LMNN), relevant component analysis (RCA), distance-informed metric learning (DIML), K-NN | MRI | NC = 161, MCI = 104, AD = 56 | ADNI | Structural features: cortical thickness, hippocampal volume/shape, voxel tissue probability map, atrophy | Acc/Sen/Spe/PPV/NPV (%): AD vs NC: K-NN 76.67/56.33/97/94.64/81.33, RCA 81.46/70.67/92.24/85.28/86.03, LMNN 81.93/69.67/94.18/88.83/85.77, DIML 82.52/72.67/92.36/84.83/86.86; MCI vs NC: K-NN 62.63/67.9/57.36/71.95/54.6, RCA 61.23/71.54/50.91/69.15/55.9, LMNN 64.2/71.58/56.82/72.29/57.56, DIML 71.56/77.57/65.55/77.59/69.25

    Fayao Liu (IEEE, 2014) [7] | Multiple kernel learning (MKL), random Fourier features (RFF), SVM | MRI, CSF | NC = 70, MCI = 50 | ADNI | Structural features: WM, GM, CSF | Acc/Sen/Spe/MCC (%): MKL 87.06/87.89/86.68/74.57, RFF+L1 81.94/83.83/78.97/63.31, RFF+L2 85/85.49/84.28/69.41, RFF+L21 90.56/93.26/87.49/81.98

    Filipa Rodrigues (IEEE, 2014) [8] | SVM | FDG-PET | NC = 66, MCI = 109, AD = 48 | ADNI | Multi-region analysis, voxel-based analysis | Accuracy, CN/AD; CN/MCI (%): multi-region analysis: baseline 81.1±11.1; 68.5±9.5, baseline+change 83.3±9.7; 68.9±9.7, 12 months 87.4±9.8; 65.1±11.3, 12 months+change 87.8±9.1; 65.6±9.6; voxel-based analysis: baseline 84.2±10.0; 68.1±10.6, baseline+change 91.2±8.0; 69.3±10.9, 12 months 92.8±6.3; 69.7±10.6, 12 months+change 92.6±6.7; 70.2±9.0

    Helena Aidos (IEEE, 2014) [9] | SVM, KNN, Naïve Bayes | FDG-PET | MCI = 59, AD = 59 | ADNI | Voxel intensities (VI) | Highest accuracy with a lower number of features and vice versa; best results: SVM+KNN with automatic ROIs, Naïve Bayes with automatic+expert ROIs; accuracy AD vs CN 85%, MCI vs CN 65-79%

    Saima Farhan (HPC, 2014) [10] | SVM, MLP, J48 | MRI | NC = 37, AD = 48 | OASIS | Volume of WM, GM, CSF | Ensemble of classifiers: Acc 93.75%, Sen 100%, Spe 87.5%

    Andrea Rueda (IEEE, 2014) [11] | Saliency-based pattern recognition | MRI | G1: NC = 66, MCI = 20; G2: NC = 98, MCI = 28; G3: NC = 66, MCI = 70; G4: NC = 98, MCI = 100 | OASIS, MIRIAD | Intensity, orientation, contrast (18 features) | G1/G2/G3/G4: accuracy 86.05/80.16/76.47/70.2, sensitivity 85/75/87.14/70, specificity 86.36/81.63/69.7/73.47, BAC 85.68/78.32/76.28/70.23, F-measure 73.91/62.29/78.71/69.65, EER 0.86/0.79/0.79/0.69

    Qi Zhou (IEEE, 2014) [12] | SVM | MRI | NC = 59, aMCI = 6, naMCI = 56, AD = 127 | Private MSMCI | Statistical features and ranking mechanism | Accuracy 92.40%, sensitivity 84.00%, specificity 96.10%

    Carlos Cabral (IEEE, 2013) [14] | SVM, random forest (RF) | FDG-PET | NA | ADNI | Voxel intensity | Accuracy (%): RBF SVM 66.78, linear SVM 66.33, RF 64.63

    Francesco Carlo Morabito (IEEE, 2013) [15] | Wavelet transform, compressive sensing, time-frequency analysis | EEG | NC = 4, MCI = 4, AD = 4 | IRCCS | NA | Mean (standard deviation): NC 28.3 (2.9), MCI 31.8 (3.5), AD 50.6 (4.8)

    Javier Escudero (IEEE, 2013) [16] | Instance-based classifier (K-NN), logistic regression | MRI, PET | NC = 45, cMCI = 12, nMCI = 59, AD = 41 | ADNI | NA | Accuracy with MRI, PET and biochemistry (%): NC vs AD 93, nMCI vs cMCI 75, MCI-to-AD conversion 67

    G. Wiselin Jiji (IEEE, 2013) [17] | SVM, Ada-SVM | MRI | Training: AD, MCI, NC = 10; testing: AD = 20, NC = 20 | ICBM | Intensities, gradients, curvatures, tissue classification, local filters | Adaboost and Ada-SVM give superior accuracy

    Eric Westman (Springer, 2012) [18] | Multivariate analysis | MRI | NC = 255, MCI = 287, AD = 187 | ADNI | Regional volume, cortical thickness, gray matter volume | Accuracy: AD vs NC 91.50%, MCI vs AD 75.90%

    Manhua Liu (Springer, 2012) [19] | Single classifier, ensemble of low-level classifiers, multilevel classifier | MRI | NC = 229, AD = 189 | ADNI | Correlation context features | Acc/Sen/Spe (%), AUC: single 86.43/83.89/88.64, 0.928; ensemble low-level 89.7/86.89/92.11, 0.939; multilevel 92.04/90.92/92.98, 0.9518

    Mohamed Dessouky (IJCA, 2013) [20] | SVM | MRI | NC = 71, AD = 49 | OASIS | Intensity level | Acc 100%, Sen 100%, Spe 100%

    Stefano Diciotti (IEEE, 2012) [21] | SVM, Naïve Bayes | MRI | NC = 29, MCI = 30, AD = 21 | Clinical | Volume, thickness | NC vs AD: Acc 86%, Sen 82%, Spe 90%

    Zhuo Sun (IEEE, 2012) [22] | LDA, K-NN, SVM | MRI | AD = 20, NC = 20 | ADNI | Correlation-based features | Accuracy, non-scaled / scaled (%): LDA 87.1 / 87.1, K-NN 83.33 / 93.55, SVM 90.32 / 90.32

    Jayapathy Rajeesh (Asian Biomedicine, 2012) [23] | SVM | MRI | NC = 146, AD = 133 | ADNI | Textural features: entropy, variance, skewness, symmetry, mean | Cases 1-4 (%): precision 90.90/88.90/89.10/95.30, sensitivity 88.90/88.90/91.90/91.10, specificity 91.80/89.80/89.80/95.90, accuracy 90.40/89.40/90.40/93.60

    Lavneet Singh (IJREISS, 2012) [24] | SVM, KNN, Naïve Bayes, MultiBoost AB, rotation forest, VFI, J48, random forest | MRI | Normal and abnormal MR images | NA | Wavelet-based feature extraction | TP / FP / precision / accuracy (%): KNN 0.935/0.917/0.826/91.04, SVM 0.912/0.812/0.831/91.17, Naïve Bayes 0.868/0.916/0.828/86.76, MultiBoost AB 0.91/0.91/0.829/91.04, rotation forest 0.971/0.285/0.971/97.06, VFI 0.742/0.049/0.93/74.16, J48 0.96/0.314/0.958/95.98, random forest 0.91/0.271/0.97/97.01

    T. R. Sivapriya (IJRAI, 2012) [25] | Clustered Z-score least square support vector machine (CZLSSVM) | MRI | NC = 229, MCI = 397, AD = 193 | OASIS, ADNI | Cross validation | Acc 94%, Sen 96%, Spe 99%

    Nabil Belmokhtar (IJCA, 2012) [26] | Binary support vector machine | MRI | AD = 193 | OASIS | VBM analysis: mean, standard deviation; cross validation (K = 10) | Global accuracy / total processing time per SVM kernel: linear 84.9% / 178 ms, polynomial 100% / 125 ms, RBF 62.26% / 109 ms, sigmoid 7.54% / 109 ms

    Anil Rao (IEEE, 2011) [29] | SLR, SRSLR, PLR, MLDA | MRI | NC = 60, AD = 69 | NINCDS-ADRDA | Voxel-based features from segmented WM, GM | Sen/Spe/Acc (%): SLR 90.77±3.67 / 80.26±3.93 / 85.26±1.39, SRSLR 90.35±3.73 / 80.26±3.93 / 85.26±1.81, PLR 85.85±3.67 / 79.85±4.88 / 82.95±2.23, MLDA 85.10±4.38 / 79.85±4.88 / 82.95±2.23

    Daoqiang Zhang (IEEE, 2011) [30] | mLapRLS, mRLS | MRI, PET, CSF | NC = 52, MCI = 99, AD = 51 | ADNI | WM, GM, CSF | AUC: mLapRLS 98.50%, mRLS 94.60%

    Javier Escudero (IEEE, 2011) [31] | LR, SVM, RBF, C4.0 | MRI | NC = 180, MCI = 222, AD = 122 | ADNI | Filter method, forward selection | Acc (%), AUC: NC vs AD: LR 85.63, 0.919; SVM 89.17, 0.884; RBF 87.94, 0.874; C4.0 83.93, 0.833; NC vs MCI: LR 72.51, 0.803; SVM 72.65, 0.726; RBF 70.92, 0.710; C4.0 72.69, 0.725

    Dong Hye Ye (IEEE, 2011) [32] | SVM | MRI | NC = 63, cMCI = 68, ncMCI = 169, AD = 53 | ADNI | RAVENS maps as features characterizing the images | cMCI vs ncMCI, Sen/Spe/Acc (%): Embedding+LapSVM 94.1/40.8/56.1, Embedding+SVM 88.2/42/55.3, Compare+SVM 89.8/37/52.3

    Murat Seckin Ayhan (IEEE, 2010) [33] | SVM, Naïve Bayes | PET | 394 | ADNI | Correlation-based features (15,964 features) | Feature selection procedure improves the classification accuracy

    Xiaojing Long (IEEE, 2010) [34] | SVM, MDS, quick shift clustering, symmetric log-domain diffeomorphic demons | MRI | NC = 40, AD = 35 | OASIS | NA | Correctly classified (%): MDS on hippocampus 60-75, SVM on gray matter 85.6-95.6, proposed method on gray/white matter 94.67-97.33

    Jonathan H. Morra (IEEE, 2010) [35] | AdaBoost and SVM | MRI | NC = 10, MCI = 10, AD = 10 | ICBM53 | Intensity distributions, adjacency priors, mean (100 features) | Ada-SVM (left/right) vs manual SVM (left/right): precision 0.785/0.802 vs 0.364/0.755, recall 0.851/0.848 vs 0.973/0.719, R.O. 0.691/0.701 vs 0.36/0.582, S.I. 0.814/0.822 vs 0.526/0.732, Hausdorff 4.34/4.63 vs 6.05/6.83, mean 0.029/0.034 vs 0.384/0.047

    Acc = Accuracy, Sen = Sensitivity, Spe = Specificity, HC = Hippocampus, EC = Entorhinal Cortex, NC = Normal Control, MCI = Mild Cognitive Impairment, AD = Alzheimer's Disease, SVM = Support Vector Machine, KNN = K-Nearest Neighbour, ANN = Artificial Neural Network, DNN = Deep Neural Network, LDA = Linear Discriminant Analysis, PCA = Principal Component Analysis, ICA = Independent Component Analysis, OASIS = Open Access Series of Imaging Studies, ADNI = Alzheimer's Disease Neuroimaging Initiative, NINCDS-ADRDA = National Institute of Neurological and Communicative Disorders and Stroke-Alzheimer's Disease and Related Disorders Association, ICBM = International Consortium for Brain Mapping, MIRIAD = Minimal Interval Resonance Imaging in Alzheimer's Disease, GM = Gray Matter, WM = White Matter, CSF = Cerebrospinal Fluid, VI = Voxel Intensities

  4. PROCEDURE FOR CLASSIFICATION OF AD MR IMAGES

    The general procedure for classification of AD MR images is described in Fig. 4. The MR images are selected from the database; features are first extracted and then selected; the data are split for training and testing and given as input to the classifier, which assigns the images to the desired categories. The performance of the classifier is evaluated in terms of accuracy, error rate, sensitivity, specificity, AUC, etc., and the results are then validated by a clinical authority. A minimal Python sketch of this pipeline follows Fig. 4.

    Fig. 4. Procedure for Classification of AD MR Images
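    As an illustration of the pipeline in Fig. 4, the following sketch uses scikit-learn on placeholder feature vectors and labels (not data from any surveyed study) to walk through splitting, training and computing the evaluation parameters named above.

```python
# Illustrative sketch of the Fig. 4 pipeline (not the implementation of any surveyed paper).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, recall_score, roc_auc_score

# Placeholder data: one feature vector per MR image, labels 0 = NC, 1 = AD
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = rng.integers(0, 2, size=200)

# Training / testing split of the database
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# Scale the selected features and train the classifier
scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel="rbf", probability=True).fit(scaler.transform(X_tr), y_tr)

# Evaluation parameters: accuracy, sensitivity, specificity, AUC
y_pred = clf.predict(scaler.transform(X_te))
scores = clf.predict_proba(scaler.transform(X_te))[:, 1]
print("accuracy   ", accuracy_score(y_te, y_pred))
print("sensitivity", recall_score(y_te, y_pred, pos_label=1))   # recall of the AD class
print("specificity", recall_score(y_te, y_pred, pos_label=0))   # recall of the NC class
print("AUC        ", roc_auc_score(y_te, scores))
```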

  5. FEATURE EXTRACTION AND SELECTION

    One of the preliminary steps in the automated diagnosis of AD, after image pre-processing, is feature extraction, which extracts specific features from the pre-processed images of different abnormal categories. The feature extraction stage is designed to obtain a compact, non-redundant and meaningful representation of the observations, achieved by removing redundant and irrelevant information from the data. These features are used by the classifier to classify the data. It is assumed that a classifier that uses a smaller set of relevant features will provide better accuracy and require less memory, which is desirable for any real-time system and improves the computational speed of the classifier [28]. After feature extraction, feature selection is performed, in which only some of the features from the dataset are selected and used in the training process of the learning algorithm. The aim is to find the optimal subset that increases the efficiency of the learning algorithm. Feature extraction and selection aim to achieve a compact pattern representation, which also decreases measurement cost and increases classification accuracy. Consequently, the resulting classifier will be faster and will use less memory [12].

    Feature selection (FS) algorithms [41] approach dimensionality reduction by finding the best minimal subset of the original features, without transforming the data to a new set of dimensions. Feature selection also enables combining features from different data models. Potential difficulties in feature selection are (a) small sample size and (b) the choice of criterion function. Feature selection can be done using:

      1. Supervised Learning:

        In supervised learning there is a specified set of classes, and example objects are labeled with the appropriate class. The goal is to generalize from the training objects in a way that enables novel objects to be identified as belonging to one of the classes.

      2. Unsupervised Learning:

    In unsupervised feature selection the objective is less well posed, and consequently it is a much less explored area. Often the goal in unsupervised learning is to decide which objects should be grouped together; in other words, the learner forms the classes itself [37].

    Features are used as inputs to classifiers, which assign them to the class that they represent. Feature extraction makes it possible to reduce the original data by measuring certain properties of the images that carry relevant information, or features, distinguishing one pattern from another. Different types of features, such as shape-based, color-based, texture-based [38], wavelet-based [36], region-based, histogram-based and GLCM-based [38] features, are extracted from the brain image for the diagnosis of AD. Features can be selected using the filter method, the wrapper method [40], sequential forward selection and backward elimination, correlation-based methods, mutual-information-based methods and wavelet-based techniques. A brief sketch of texture-feature extraction followed by filter-based selection is given below.
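    As a rough illustration of the texture-feature route, the sketch below computes four GLCM properties per slice with scikit-image (assuming version 0.19 or newer for the graycomatrix/graycoprops names) and then applies a filter-method selection with scikit-learn; the slices and labels are random placeholders, not MRI data.

```python
# Sketch: GLCM texture features per slice, then filter-method feature selection.
# Assumes scikit-image >= 0.19 (graycomatrix / graycoprops) and scikit-learn.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.feature_selection import SelectKBest, f_classif

def glcm_features(slice_2d):
    """Contrast, correlation, energy and homogeneity of one grey-level slice."""
    span = slice_2d.max() - slice_2d.min()
    img = np.uint8(255 * (slice_2d - slice_2d.min()) / (span + 1e-8))
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return [graycoprops(glcm, p)[0, 0]
            for p in ("contrast", "correlation", "energy", "homogeneity")]

# Placeholder data: 40 random "slices" with binary labels
rng = np.random.default_rng(1)
slices = rng.random((40, 64, 64))
labels = rng.integers(0, 2, size=40)

X = np.array([glcm_features(s) for s in slices])
# Filter method: keep the k features best ranked by an ANOVA F-test
X_selected = SelectKBest(f_classif, k=2).fit_transform(X, labels)
print(X.shape, "->", X_selected.shape)
```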

  6. CLASSIFICATION TECHNIQUES

    The important process in the automated system is brain image classification. The main objective of this step is to differentiate the different abnormal brain images based on the optimal feature set. Image classification is one of the sub-categories of a pattern recognition system, in which an input image is categorized into one of the pre-defined classes. The classification is performed on the whole image rather than on individual pixels; in other words, image classification can be regarded as a between-image operation.

    This image classification technique is able to give information about the presence of abnormality in the input brain image. Broadly, image classification is divided into two subclasses: (a) binary classification and (b) multi-level classification. In a binary classification system the number of pre-defined classes is only two, so only the presence or absence of an abnormality in the brain image can be determined; the output of such systems differentiates normal images from abnormal images. Practically, this information is insufficient, since the nature of the abnormality is necessary for treatment planning. The next level is multi-level classification, in which the number of pre-defined classes is more than two. These techniques can differentiate the different types of abnormalities, which aids treatment planning. The complexity of such techniques is quite high, but these classification systems are more suitable for real-time applications [11]. Various classification methods have been used on MRI scans for the detection of Alzheimer's disease and dementia, such as K-NN [2,6,9,13,16,22,24,28], SVM [5,7,8,9,10,12,17,20,21,22,23,24,26,31,32,33,34,35], Naïve Bayes [9,21,24,28,33], PCA [20], ICA [28], LDA [20,22], ANN [27], decision trees and fuzzy techniques, which give good results with basic feature extraction for the diagnosis of dementia and Alzheimer's disease. A simple comparison of several of these classical classifiers on a common feature set is sketched below.
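    The following hedged sketch shows how such a comparison could be run with scikit-learn cross-validation on a synthetic stand-in feature set; it is illustrative only and not the protocol of any cited study.

```python
# Sketch: comparing several classical classifiers on one feature set with
# cross-validated accuracy (synthetic data used as a stand-in for MRI features).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=150, n_features=20, random_state=0)
models = {
    "K-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "Naive Bayes": GaussianNB(),
    "LDA": LinearDiscriminantAnalysis(),
    "Decision tree": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
```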

    1. K-Nearest Neighbour (K-NN)

      K-Nearest Neighbour (KNN) is a data mining algorithm with a wide range of applications in the image processing domain. There are three key elements of this approach: a set of labeled training examples, a distance measure to compute the distance between the training set examples and the test example, and the value of k; i.e., the number of nearest neighbours to the testing example. We used Euclidean and Riemannian distance measures in our work to classify the testing set examples from the three classes which can be mathematically expressed as:

      Euclidean distance = \sqrt{\sum_{i=1}^{4} (x_i - y_i)^2}  (1)

      Riemannian distance = \| \log(x_i^{-1} y_i) \|  (2)

      The k training images identified as being closest to the test image were then tallied as to which class they fell into, normal or positive for Alzheimer's disease, and the class with the most points was assigned to the test image as the classification [2,6,9,13,16,22,24,28].
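      A minimal sketch of this majority-vote rule, using the Euclidean distance of Eq. (1) on toy two-dimensional feature vectors (not real MRI features), might look as follows.

```python
# Minimal k-NN with the Euclidean distance of Eq. (1); illustrative only.
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_test, k=3):
    # Distance from the test image's feature vector to every training vector
    d = np.sqrt(((X_train - x_test) ** 2).sum(axis=1))
    nearest = np.argsort(d)[:k]                             # indices of the k closest samples
    return Counter(y_train[nearest]).most_common(1)[0][0]   # majority class among them

X_train = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y_train = np.array(["NC", "NC", "AD", "AD"])
print(knn_predict(X_train, y_train, np.array([0.85, 0.75])))   # -> "AD"
```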

    2. Support Vector Machine (SVM)

      Support vector machine (SVM) is a versatile data classification method widely used in the machine learning domain. It can be used to classify both linearly and nonlinearly separable data. The kernel trick is used to separate examples that are not linearly separable in the input space but may be separable in a higher-dimensional feature space, given a suitable mapping. We made use of the inverse multiquadratic kernel, which is defined as follows:

      1 / (\|x_i - x_j\|^2 + c)  (3)

      where c is a constant greater than zero and x_i and x_j are feature vectors from the available data [5,7,8,9,10,12,17,20,21,22,23,24,26,31,32,33,34,35].
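      A hedged sketch of an SVM using the inverse multiquadratic kernel of Eq. (3), passed to scikit-learn's SVC as a custom kernel, is shown below; the constant c and the synthetic data are illustrative assumptions.

```python
# Sketch: SVM with the inverse multiquadratic kernel of Eq. (3) as a custom kernel.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.svm import SVC

def inverse_multiquadric(X, Y, c=1.0):
    # K(xi, xj) = 1 / (||xi - xj||^2 + c), computed for all pairs of rows
    return 1.0 / (cdist(X, Y, metric="sqeuclidean") + c)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 10)), rng.normal(2, 1, (30, 10))])
y = np.array([0] * 30 + [1] * 30)   # 0 = NC, 1 = AD (placeholder labels)

clf = SVC(kernel=inverse_multiquadric).fit(X, y)
print(clf.predict(X[:3]), clf.score(X, y))
```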

    3. Naïve Bayes

      The Naïve Bayes classifier assigns a test sample to the class with the highest posterior probability. It is almost insensitive to synthetic oversampling, although the best results are observed when that technique is not applied (oversampling of 0%). In this study, kernel density estimation was also considered to obtain better estimates of the features' pdfs; however, results were slightly worse than with the usual Gaussian assumption. Naïve Bayes shows one of the best performances, achieving a balanced classification model, and it also achieves the highest AUC. It should be noted that, whereas with the full feature set no oversampling was required, the optimal case after feature selection was achieved after synthetic duplication of AD instances. The Naïve Bayes classifier naturally deals with missing values: when computing the instance likelihood it disregards any feature value that is missing [9,21,24,28,33].
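      A minimal Gaussian Naïve Bayes sketch (scikit-learn's GaussianNB on toy feature vectors; the kernel-density variant discussed above is not shown) illustrates the most-probable-class rule.

```python
# Sketch: Gaussian Naive Bayes assigning the most probable class (toy data only).
import numpy as np
from sklearn.naive_bayes import GaussianNB

X = np.array([[1.0, 2.1], [0.9, 1.8], [3.2, 4.0], [3.0, 4.2]])  # toy feature vectors
y = np.array(["NC", "NC", "AD", "AD"])

nb = GaussianNB().fit(X, y)
x_test = np.array([[2.8, 3.9]])
print(nb.predict(x_test))          # most probable class
print(nb.predict_proba(x_test))    # per-class posterior probabilities
```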

    4. Principal Component Analysis (PCA)

      PCA is known as the best data representation in the least-squares sense for classical recognition. It is commonly used to decrease the dimensionality of images while retaining most of the information. The central idea behind PCA is to find an orthonormal set of axes pointing in the directions of maximum covariance in the data. It is often used for representing facial images; the idea is to find the orthonormal basis vectors, or eigenvectors, of the covariance matrix of a set of images, with each image treated as a single point in a high-dimensional space. It is supposed that the facial images form a connected sub-region of the image space. The eigenvectors capture the most significant variations between faces and are preferred over other correlation techniques that assume every pixel in an image is of equal importance. PCA is a powerful tool for analyzing data: once these patterns have been found, the data can be compressed by reducing the number of dimensions without much loss of information [20].

      Method:
      Step 1: Get some data.
      Step 2: Subtract the mean.
      Step 3: Calculate the covariance matrix.
      Step 4: Calculate the eigenvectors and eigenvalues of the covariance matrix.
      Step 5: Choose components and form a feature vector.
      Step 6: Derive the new data set.
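      The six steps can be written out directly with NumPy, as in this illustrative sketch on random data.

```python
# The six PCA steps above, written out with NumPy (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                 # Step 1: get some data (100 samples, 5 features)
X_centered = X - X.mean(axis=0)               # Step 2: subtract the mean
C = np.cov(X_centered, rowvar=False)          # Step 3: covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)          # Step 4: eigenvalues / eigenvectors
order = np.argsort(eigvals)[::-1]             # sort components by explained variance
W = eigvecs[:, order[:2]]                     # Step 5: choose top-2 components (feature vector)
X_reduced = X_centered @ W                    # Step 6: derive the new, lower-dimensional data
print(X.shape, "->", X_reduced.shape)
```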

    5. Independent Component Analysis (ICA)

      ICA is a probabilistic and multivariate method for learning a linear transform of random vectors. The basic goal of ICA is to search for components which are as independent and non-Gaussian as possible. Its fundamental difference from classical multivariate statistical methods such as PCA and linear discriminant analysis (LDA) lies in the assumption of non-Gaussianity, which ensures the identification of the original components. ICA can be mathematically modeled as

      X = A × S  (4)

      where X is the observed data vector, A is the mixing matrix and S is the source matrix. In practice, the FastICA MATLAB toolbox is used to compute both A and S from X; the mixing matrix A is then considered in the subsequent steps of feature selection and classification [28].
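      A hedged Python analogue of this model, using scikit-learn's FastICA (assuming scikit-learn 1.1 or newer for the whiten argument) in place of the MATLAB toolbox, recovers an estimated mixing matrix and sources from synthetic mixtures.

```python
# Sketch of Eq. (4): recover independent sources with FastICA on synthetic signals.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]   # two independent source signals
A = np.array([[1.0, 0.5], [0.5, 2.0]])             # mixing matrix
X = S @ A.T                                        # observed mixtures: each sample x = A s, as in Eq. (4)

ica = FastICA(n_components=2, whiten="unit-variance", random_state=0)
S_est = ica.fit_transform(X)   # estimated sources
A_est = ica.mixing_            # estimated mixing matrix (used as features in [28])
print(A_est.shape, S_est.shape)
```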

    6. Linear Discriminant Analysis (LDA)

      LDA is used to perform feature extraction and to classify samples of unknown classes based on training samples with known classes. It obtains a linear transformation of k-dimensional samples into an m-dimensional space (m < k), so that samples belonging to the same class are close together while samples from different classes are far apart. This method maximizes the ratio of between-class variance to within-class variance in any data set, thereby guaranteeing the theoretical maximum separation in the linear sense. Since LDA seeks directions that are efficient for discrimination, it is the optimal classifier for separating classes that are Gaussian distributed and have equal covariance matrices. LDA requires a transformation matrix that, in some sense, maximizes the ratio of the between-class scatter matrix to the within-class scatter matrix [20,22].
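      A short sketch with scikit-learn's LinearDiscriminantAnalysis on toy data shows LDA used both for the k-to-m projection and for classification.

```python
# Sketch: LDA as projection (k -> m dimensions) and as a classifier (toy data).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 6)), rng.normal(1.5, 1, (40, 6))])  # toy 6-D features
y = np.array([0] * 40 + [1] * 40)                                        # two classes

lda = LinearDiscriminantAnalysis(n_components=1).fit(X, y)
X_proj = lda.transform(X)             # samples projected onto the discriminant direction
print(X_proj.shape, lda.score(X, y))  # reduced dimensionality and training accuracy
```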

    7. Artificial Neural Network (ANN)

    Artificial Neural Networks (ANN) are used to improve the accuracy of the classifiers. An ANN is dependent on the input data, and hence a wide variety of patterns is desirable for high accuracy. An ANN is a mathematical or computational model inspired by the structural and functional aspects of biological neural networks. A neural network consists of an interconnected group of artificial neurons and processes information using a connectionist approach to computation. In most cases, an ANN is an adaptive system that changes based on external or internal information flowing through the network during the learning phase. ANNs are usually used to model complex relationships between inputs and outputs or to find patterns in data [27].
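    A minimal feed-forward ANN sketch (scikit-learn's MLPClassifier on placeholder features; the layer sizes and iteration count are arbitrary illustrative choices) is given below.

```python
# Sketch: small feed-forward ANN classifier on placeholder features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 10)), rng.normal(1, 1, (50, 10))])  # placeholder features
y = np.array([0] * 50 + [1] * 50)

X_scaled = StandardScaler().fit_transform(X)    # ANNs are sensitive to feature scaling
ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
ann.fit(X_scaled, y)
print(ann.score(X_scaled, y))                   # training accuracy of the fitted network
```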

  7. CONCLUSION

Manual techniques for identifying the presence of Alzheimer's disease from brain MRI are expensive and time consuming, which motivates automatic classification and analysis for feature extraction and diagnosis. In this paper, comprehensive information about the different methods of MR image classification, such as KNN, Naïve Bayes, SVM, PCA, ICA, LDA, ANN, decision trees and fuzzy techniques, is presented. By reviewing these classification methods, we find that the surveyed classifiers are satisfactory in terms of both accuracy and computational speed and give promising results for basic feature extraction and image classification. Thus, the classical methods of classification can provide effective identification of Alzheimer's patients through MRI analysis. This work summarizes significant contributions in the field of automatic classification of brain MRI using different classification techniques. Such systems can prove helpful to radiologists and researchers in identifying AD with improved accuracy.

REFERENCES

  1. Kajal Gulhare, S. P. Shukla, L. K. Sharma, Deep Neural Network Classification Method for Alzheimer's Disease Detection, International Journal of Advanced Research in Computer Science and Software Engineering, Volume 7, Issue 6, pp. 1-4, June 2017.

  2. Rupali S. Kamathe, Kalyani R. Joshi, A Robust Optimized Feature Set Based Automatic Classification of Alzheimer's Disease from Brain MRI Images using K-NN and Adaboost, ICTACT Journal on Image and Video Processing, Volume 08, Issue 03, pp. 1665-1672, February 2017.

  3. Eman M. Ali, Ahmed F. Seddik, Mohammed H. Haggag, Automatic Detection and Classification of Alzheimer's Disease from MRI using TANN, International Journal of Computer Applications, Volume 148, No. 9, pp. 30-34, August 2016.

  4. Antonio Martínez-Torteya, Víctor Treviño, José G. Tamez-Peña, Improved Diagnostic Multimodal Biomarkers for Alzheimer's Disease and Mild Cognitive Impairment, Hindawi Publishing Corporation, BioMed Research International, Volume 2015, pp. 1-11, April 2015.

  5. Archana M, Ramakrishnan S, Detection of Alzheimer Disease in MR Images using Structure Tensor, IEEE, pp. 1043-1046, 2014.

  6. Bibo Shi, Zhewei Wang, Jundong Liu, Distance-informed Metric Learning for Alzheimer's Disease Staging, IEEE, pp. 934-937, 2014.

  7. Fayao Liu, Luping Zhou, Chunhua Shen, Jianping Yin, Multiple Kernel Learning in the Primal for Multimodal Alzheimer's Disease Classification, IEEE Journal of Biomedical and Health Informatics, Vol. 18, No. 3, pp. 984-990, May 2014.

  8. Filipa Rodrigues, Margarida Silveira, Longitudinal FDG-PET Features for the Classification of Alzheimer's Disease, IEEE, pp. 1941-1944, 2014.

  9. Helena Aidos, Joao Duarte and Ana Fred, Identifying Regions of Interest for Discriminating Alzheimer's Disease from Mild Cognitive Impairment, IEEE, pp. 21-25, 2014.

  10. Saima Farhan, Muhammad Abuzar Fahiem, Huma Tauseef, An Ensemble-of-Classifiers Based Approach for Early Diagnosis of Alzheimer's Disease: Classification Using Structural Features of Brain Images, Hindawi Publishing Corporation, Computational and Mathematical Methods in Medicine, Volume 2014, pp. 1-11, September 2014.

  11. Andrea Rueda, Fabio A. González, Eduardo Romero, Extracting Salient Brain Patterns for Imaging-Based Classification of Neurodegenerative Diseases, IEEE Transactions on Medical Imaging, Vol. 33, No. 6, pp. 1262-1274, June 2014.

  12. Qi Zhou, Mohammed Goryawala, Mercedes Cabrerizo, Jin Wang, Warren Barker, David A. Loewenstein, Ranjan Duara, and Malek Adjouadi, An Optimal Decisional Space for the Classification of Alzheimer's Disease and Mild Cognitive Impairment, IEEE Transactions on Biomedical Engineering, Vol. 61, No. 8, pp. 2245-2253, August 2014.

  13. Kyle S. Marcolini, Stephanie Gillespie, Comparing Classification Methods of MRI Brain Scans for Dementia and Alzheimer's Disease, University of Miami, IEEE, pp. 1-6, 2014.

  14. Carlos Cabral, Margarida Silveira, Classification of Alzheimer's Disease from FDG-PET Images using Favourite Class Ensembles, 35th Annual International Conference of the IEEE EMBS, Osaka, Japan, pp. 2477-2480, July 2013.

  15. Francesco Carlo Morabito, Domenico Labate, Alessia Bramanti, Fabio La Foresta, Enhanced Compressibility of EEG Signal in Alzheimer's Disease Patients, IEEE Sensors Journal, Vol. 13, No. 9, pp. 3255-3261, September 2013.

  16. Javier Escudero, John P. Zajicek, Colin Green, James Shearer, Stephen Pearson, Machine Learning-Based Method for Personalized and Cost-Effective Detection of Alzheimer's Disease, IEEE Transactions on Biomedical Engineering, Vol. 60, No. 1, pp. 164-168, January 2013.

  17. G. Wiselin Jiji, M. Rangini, Detection of Alzheimer's Disease through Automated Hippocampal Segmentation, IEEE, pp. 144-149, 2013.

  18. Eric Westman, Carlos Aguilar, J-Sebastian Muehlboeck, and Andrew Simmons, Regional Magnetic Resonance Imaging Measures for Multivariate Analysis in Alzheimer's Disease and Mild Cognitive Impairment, Springer, Brain Topogr, pp. 9-23, August 2012.

  19. Manhua Liu, Daoqiang Zhang, Pew-Thian Yap, and Dinggang Shen, Hierarchical Ensemble of Multi-level Classifiers for Diagnosis of Alzheimer's Disease, Springer, Nanjing University of Aeronautics and Astronautics, China, pp. 27-35, 2012.

  20. Mohamed M. Dessouky, Mohamed A. Elrashidy, Hatem M. Abdelkader, Selecting and Extracting Effective Features for Automated Diagnosis of Alzheimer's Disease, International Journal of Computer Applications, Volume 81, No. 4, pp. 17-28, November 2013.

  21. Stefano Diciotti, Andrea Ginestroni, Valentina Bessi, Marco Giannelli, Carlo Tessa, Laura Bracco, Mario Mascalchi, Nicola Toschi, Identification of Mild Alzheimer's Disease through Automated Classification of Structural MRI Features, 34th Annual International Conference of the IEEE EMBS, San Diego, California, USA, pp. 428-431, September 2012.

  22. Zhuo Sun, Jan A. C. Veerman, Radu S. Jasinschi, A Method for Detecting Interstructural Atrophy Correlation in MRI Brain Images, IEEE, pp. 1253-1256, 2012.

  23. Jayapathy Rajeesh, Rama Swamy Moni, Suyambumuthu Palanikumar, and Thankappan Gopalakrishnan, Discrimination of Alzheimer's Disease using Hippocampus Texture Features from MRI, Asian Biomedicine, Vol. 6, No. 1, pp. 87-94, February 2012.

  24. Lavneet Singh, Girija Chetty, Detecting the Brain Abnormalities from MRI Structural Images using Machine Learning and Pattern Recognition Tools, International Journal of Research in Engineering, IT and Social Sciences, Volume 2, Issue 11, pp. 15-30, November 2012.

  25. T. R. Sivapriya, Imputation and Classification of Missing Data Using Least Square Support Vector Machines: A New Approach in Dementia Diagnosis, International Journal of Advanced Research in Artificial Intelligence, Vol. 1, No. 4, pp. 29-34, 2012.

  26. Nabil Belmokhtar, Classification of Alzheimer's Disease from 3D Structural MRI Data, International Journal of Computer Applications, Volume 47, No. 3, pp. 41-44, June 2012.

  27. Jude Hemanth D, Computer Aided Classification and Segmentation of Abnormal Human Brain Magnetic Resonance Images Using Modified Soft Computing Techniques, Ph.D. thesis, Electronics and Communication Engineering, September 2012.

  28. Ahsan Bin Tufail, Ali Abidi, Adil Masood Siddiqui, and Muhammad Shahzad Younis, Automatic Classification of Initial Categories of Alzheimer's Disease from Structural MRI Phase Images: A Comparison of PSVM, KNN and ANN Methods, World Academy of Science, Engineering and Technology, Vol. 6, pp. 1570-1574, December 2012.

  29. Anil Rao, Ying Lee, Achim Gass, Andreas Monsch, Classification of Alzheimer's Disease from Structural MRI using Sparse Logistic Regression with Optional Spatial Regularization, 33rd Annual International Conference of the IEEE EMBS, Boston, Massachusetts, USA, pp. 4499-4502, September 2011.

  30. Daoqiang Zhang, Yaping Wang, Luping Zhou, Hong Yuan, and Dinggang Shen, Multimodal Classification of Alzheimer's Disease and Mild Cognitive Impairment, NeuroImage, 55(3), pp. 856-867, April 2011.

  31. Javier Escudero, John P. Zajicek, Emmanuel Ifeachor, Machine Learning Classification of MRI Features of Alzheimer's Disease and Mild Cognitive Impairment Subjects to Reduce the Sample Size in Clinical Trials, 33rd Annual International Conference of the IEEE EMBS, Boston, Massachusetts, USA, pp. 7957-7960, September 2011.

  32. Dong Hye Ye, Kilian M. Pohl, Christos Davatzikos, Semi-Supervised Pattern Classification: Application to Structural MRI of Alzheimer's Disease, IEEE International Workshop on Pattern Recognition in NeuroImaging, 2011.

  33. Murat Seckin Ayhan, Ryan G. Benton, Vijay V. Raghavan, Suresh Choubey, Exploitation of 3D Stereotactic Surface Projection for Automated Classification of Alzheimer's Disease according to Dementia Levels, IEEE International Conference on Bioinformatics and Biomedicine, pp. 516-519, 2010.

  34. Xiaojing Long, Chris Wyatt, An Automatic Unsupervised Classification of MR Images in Alzheimer's Disease, IEEE, pp. 2910-2917, 2010.

  35. Jonathan H. Morra, Zhuowen Tu, Liana G. Apostolova, Amity E. Green, Arthur W. Toga, and Paul M. Thompson, Comparison of AdaBoost and Support Vector Machines for Detecting Alzheimer's Disease through Automated Hippocampal Segmentation, IEEE Transactions on Medical Imaging, 29(1), pp. 30-43, January 2010.

  36. B. Al-Naami, N. Gharaibeh and A. Kheshman, Automated Detection of Alzheimer's Disease using Region Growing Technique and Artificial Neural Network, World Academy of Science, Engineering and Technology, Vol. 7, No. 5, pp. 204-208, 2013.

  37. Peter A. Freeborough and Nick C. Fox, MR Image Texture Analysis Applied to the Diagnosis and Tracking of Alzheimer's Disease, IEEE Transactions on Medical Imaging, Vol. 17, No. 3, pp. 475-479, June 1998.

  38. Robert M. Haralick, K. Shanmugam and Its Hak Dinstein, Textural Features for Image Classification, IEEE Transactions on Systems, Man, and Cybernetics, Vol. 3, No. 6, pp. 610-621, 1973.

  39. Alzheimer's Society, The progression of Alzheimer's disease and other dementias, leading the fight against dementia, alzheimers.org.uk.

  40. R. Kohavi and G.H. John, Wrappers for Feature Subset Selection, Artificial Intelligence, Vol. 97, No. 1-2, pp. 273-324, 1997.

  41. A. Jain and D. Zongker, Feature Selection: Evaluation, Application, and Small Sample Performance, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, No. 2, pp. 153-158, 1997.

  42. www.mr-trip.com
