Detection of Alzheimer’s Disease (AD) in MRI Images using Deep Learning

DOI : 10.17577/IJERTV10IS030310


Amnaya Pradhan
Department of Computer Science and Engineering
S.R.M. Institute of Science & Technology, Chennai, India

M. Eliazer
Department of Computer Science and Engineering
S.R.M. Institute of Science & Technology, Chennai, India

Jerin Gige
Department of Computer Science and Engineering
S.R.M. Institute of Science & Technology, Chennai, India

Abstract: Alzheimer's disease is an irreversible, degenerative brain disease. Every four seconds, someone in the world is diagnosed with Alzheimer's disease, and the condition is ultimately fatal, so it is crucial to catch it early. Alzheimer's disease is the leading cause of dementia. Dementia reduces reasoning ability and interpersonal coping skills, which affects people's ability to function independently. In the early stages the patient forgets recent events; as the illness progresses, they gradually forget whole events. It is therefore essential to diagnose the disease as soon as possible. This paper proposes a model that takes brain MRI sample images as input and determines whether a person has mild, moderate, or no Alzheimer's disease as output. We use the VGG19 and DenseNet169 architectures for this classification and provide a comparative analysis of which architecture shows more promising results.

Keywords: Alzheimer's, MRI images, VGG19, DenseNet.

        1. INTRODUCTION

The brain is one of the most crucial organs in the body. All the activities and responses that allow us to think are controlled and facilitated by the brain; it also underpins our emotions and memories. Alzheimer's disease is a brain dysfunction that is irreversible and progressive in nature. Someone in the world is diagnosed with Alzheimer's disease every four seconds. It advances at a slow pace and tears down memory cells, destroying an individual's thinking ability. It is a degenerative nerve disorder that leads to loss of function or even death of neurons. The average life expectancy after an Alzheimer's diagnosis is only about four to eight years. On average, 1 out of 10 people over the age of 65 is affected by this condition, but it can sometimes strike at a younger age and has been diagnosed in people in their 20s. This disease is the primary cause of dementia in older people. Dementia causes a decline in the cognitive skills used to perform daily activities, and 60-80% of dementia cases are Alzheimer's.

This ailment is associated with an accumulation of plaques and tangles in the brain, accompanied by the damage and death of brain cells. It was first noted by Dr. Alois Alzheimer, who observed a woman whose death was linked to changes in her internal brain tissue. After her death, he examined her brain and noticed the formation of various clumps, which were concluded to be the primary cause of the disease. These clumps disrupt the coordination of the brain with other body parts, so people with this disease find it challenging to perform daily activities such as driving, cooking, etc. In the early stages, the symptoms are not evident and may include difficulty recalling names, misplacing important objects, having trouble planning things, etc. The middle stage of Alzheimer's is the longest, and its symptoms may include severe mood swings, confusion, impulsivity, a short attention span, poor object recognition, etc. The last stage is the most severe.

          Fig. 1. Image representing a Healthy Brain vs. Severe AD Brain

Its most evident symptoms include being unable to communicate properly with others, being more prone to infections, poor judgment, a poor sense of direction, short-term memory loss, and visual problems. A recent survey suggests that around 50 million people worldwide have Alzheimer's. This disorder poses a massive challenge to scientists and physicians today because it is frequently not identified until patients reach the final stages of the disease, as their cognitive symptoms are often attributed to aging. Unless better treatment becomes available, the threat of this disease will continue to increase, and older people remain at high risk of being affected by it. There is currently no cure for this disease, but earlier intervention can help slow the progression of dementia. A variety of factors have been related to lowering the risk of Alzheimer's disease, including a healthy diet, physical activity, being social, protecting the head from injuries, reading, playing musical instruments, and engaging in intellectual activities; such activities can strengthen overall brain health and cognitive effectiveness.

        2. LITERATURE SURVEY

          Over the years, several researchers have used different approaches to diagnose Alzheimer's Disease. The following paragraphs give a quick insight into the works completed to date.

Suresha et al. [1] used a rectified Adam optimizer and a deep neural network to classify images into Normal, AD, and MCI. They achieved a high accuracy of 99.5% by using a Histogram of Oriented Gradients to extract features.

Lan [2] defined a new model to help doctors detect Alzheimer's. He made use of diffusion tensor images, which were used to build brain networks, and captured those details using graph-theory methods. He then applied three different algorithms to check the accuracy and suggested the better approach. The algorithms used were an SVM, a random forest, and a CNN trained with SPL features. The accuracies of the three models were 90%, 98%, and 90%, respectively; sensitivities were 92%, 96%, and 72%, while specificities were 88%, 100%, and 94%, respectively.

Zubair [3] proposed a method to detect Alzheimer's disease. He made use of a five-stage ML pipeline for detection, in which each stage had sub-stages, and multiple classifiers were applied to this pipeline. He concluded that the random forest classifier had the best performance metrics.

Khan et al. [4] made use of the random forest classifier to compare the performance of imputation and non-imputation methods. They observed that the imputation method gives 87% accuracy, while the non-imputation method gives 83% accuracy. The model further classified the subjects as demented or non-demented.

Asim et al. [5] proposed a multi-atlas approach to detect Alzheimer's disease by utilizing the unique features extracted from each atlas template and the combined characteristics of the two atlases using PCA, with an SVM used for classification. They achieved 94% accuracy for AD vs. CN, 76.5% for CN vs. MCI, and 75.5% for MCI vs. AD, and observed that the multi-atlas approach showed better results than the single-atlas approach.

Alam [6] stated in his paper that early-stage detection can prevent the spread of the disease. He made use of structural magnetic resonance imaging (MRI) brain images from the database, postulated the use of kernels for projecting the data onto the available linear space, and then applied a Support Vector Machine (SVM) to classify the data. He obtained a good accuracy of 93.85% for his classification, with high sensitivity and specificity.

Moein [7] first applied voxel-based morphometry analysis to capture some of the most crucial MRI features. He then carried out a principal component analysis on the extracted features, presented a hybrid manifold learning framework in the given subspace, and applied a label propagation model to classify subjects as normal or mild by taking a chunk of training data. The model provided a high accuracy of 93.86% for the given classification.

Kumar Lama [8] took a collaborative approach to distinguish AD from other diseases. He made use of structural MRI to identify AD among mild cognitive impairment and healthy control subjects. He used three algorithms, namely RELM, SVM, and IVM, for this segregation. In addition, a discriminative approach based on kernels was provided to tackle complex data distributions. He concluded from his classification that RELM had the best performance metrics.

Dr. Bryan [9] noted that variability among inter-site and multi-vendor measurements limited machine learning applications for cerebral blood flow imaging techniques in Alzheimer's disease. Such variations can be handled robustly by human visual interpretation, but significant advances in machine learning are needed to avoid the pitfalls of overlooked and underrepresented statistical errors.

Escudero et al. [10] proposed an ML approach using biomarkers in their paper. They tested a personalized classifier for the disease using a locally weighted learning method and biomarkers. The methodology attempts to classify the subject first and then decides which biomarker to order. They classified MCI patients who progressed to AD within a year against those who did not.

        3. PROPOSED WORK

Deep learning is known for learning a hierarchical set of representations, so that it learns low-, mid-, and high-level features. Deep neural networks can adapt to more complex datasets and, thanks to their multiple layers, generalize better to previously unseen data. Different algorithms build on deep learning's fundamental capabilities and use diverse datasets for training and testing.

Like neurons in humans, deep learning models have layers that help the model learn from and process data. These layers process the data given to them as input, and the representation is refined as it travels through the layers. After the last layer, an activation function is applied, and we obtain the predicted output of the model. This gives us the training accuracy; then, when we take another, similar dataset, the trained model can be used to predict or detect whatever we want. In simple terms, this is how deep learning works.
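As a minimal, self-contained sketch of this forward pass (a hypothetical two-layer network with made-up weights, not the architecture used in this study), the input is transformed layer by layer and a final softmax activation yields the class probabilities:

# Minimal sketch of a forward pass through stacked layers (illustrative only;
# the layer sizes and random weights are assumptions, not the study's model).
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)   # input layer -> hidden layer
W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)   # hidden layer -> output layer

x = rng.normal(size=4)              # one input sample
h = relu(x @ W1 + b1)               # the hidden layer processes the input
p = softmax(h @ W2 + b2)            # final activation gives class probabilities
print(p, p.argmax())                # the predicted class is the largest probability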

          A. Dataset

The data is taken from Kaggle, an open online dataset library; it is an open-source dataset that has not yet been used by many other research projects and studies. It contains almost 6,000 images distributed over four classes labelled Mildly Demented, Moderately Demented, Very Mildly Demented, and Non-Demented. The data is split into an 80% training set and a 20% test set: each deep learning model has two phases, training and testing, in which it predicts on the data provided to it. Both models use the same split derived from the original Kaggle dataset, divided in an 8:2 ratio of training to validation data. Both models must receive the same distribution of data so that no discrepancy arises in the comparison of their predictions from having been given different inputs.

This removes any doubt about the comparison and brings both models to the same level of inspection: they were trained and tested on exactly the same split of the dataset.
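A minimal sketch of how such an 8:2 split can be obtained with Keras' ImageDataGenerator is given below; the directory name, image size, and rescaling are assumptions for illustration, not details taken from this study.

# Sketch: load the four-class MRI folder and split it 80/20 (paths and sizes assumed).
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)

train_gen = datagen.flow_from_directory(
    "data/",                      # hypothetical folder with one sub-folder per class
    target_size=(176, 208),       # assumed image size
    batch_size=128,
    class_mode="categorical",
    subset="training",            # 80% of the images
)
val_gen = datagen.flow_from_directory(
    "data/",
    target_size=(176, 208),
    batch_size=128,
    class_mode="categorical",
    subset="validation",          # remaining 20%
)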

          1. VGG19

A. Zisserman and K. Simonyan of the University of Oxford proposed VGG19. It is a convolutional neural network model consisting of 16 convolutional layers along with three fully connected layers. It has been shown to yield excellent results even when there is a large number of classes. According to Zisserman and Simonyan's analysis, the model was able to classify a dataset of about 1,000 classes with an accuracy of 92.7 percent. VGG19 is a well-known classification model that can distinguish a wide range of classes and is used in a variety of medical research projects. It has a high level of accuracy in predicting everyday objects such as vehicles and trees, and it is now being applied on a wider scale to medical datasets to predict smaller groups, such as in breast cancer detection, macular edema detection, brain tumor detection, and so on. This is one of the reasons the same model has been used for the classification in this study. It also offers a standard method of constructing a classifier, which is helpful in most studies since it employs simple convolutional and max pooling layers in its construction.
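One common way to adapt VGG19 to a four-class problem such as this one is transfer learning with a small custom head. The sketch below (ImageNet weights, frozen convolutional base, a 256-unit dense layer) reflects assumed choices rather than the authors' exact configuration.

# Sketch: VGG19 convolutional base with a 4-class head (assumed configuration).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19

base = VGG19(weights="imagenet", include_top=False, input_shape=(176, 208, 3))
base.trainable = False            # keep the pre-trained convolutional filters fixed

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(4, activation="softmax"),   # Mild / Moderate / Non / Very Mild Demented
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])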

          2. DenseNet

In a convolutional neural network, the input image is passed through the layers of the network to obtain a predicted label as output, and the forward pass is quite straightforward.

Fig. 4. CNN

          1. Methodology

            Fig. 2. Dataset after Pre-processing

            Fig. 3. Proposed Methodology

The system architecture diagram gives a conceptual and behavioral view of the system. It shows how the database supplies the dataset and how this data is used by the project modules to train the different models.

In the architecture diagram above, we can see that the data is taken from the training dataset and provided to the models. The models are then validated against the test dataset to obtain the testing or validation accuracy. After the accuracies are compared, the diseased images are taken from the dataset; the classification performed is of four types, namely Mild Demented, Moderate Demented, Non Demented, and Very Mild Demented. The architecture diagram also shows the various modules working together in the project, how they are integrated to provide the desired output, and how they are interconnected to make the project work in unison.

Every convolutional layer, except the first, receives the feature maps produced by the preceding convolutional layers and produces a feature-map output that is forwarded to the following convolutional layers. DenseNet is a modification of the standard CNN model.

            Fig. 5. Densenet Architecture

For L layers, there are L(L+1)/2 direct connections. Every layer receives the feature maps of all preceding layers as input, and its own feature maps are passed as input to all subsequent layers.
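A simplified sketch of this connectivity (a toy dense block in Keras, not the full DenseNet169 definition) is shown below; every layer is fed the concatenation of all earlier feature maps, and its output is appended for use by all later layers.

# Sketch: simplified dense block illustrating DenseNet-style connectivity.
from tensorflow.keras import layers

def dense_block(x, num_layers=4, growth_rate=32):
    features = [x]
    for _ in range(num_layers):
        # concatenate all preceding feature maps (just the input on the first pass)
        y = features[0] if len(features) == 1 else layers.Concatenate()(features)
        y = layers.BatchNormalization()(y)
        y = layers.Activation("relu")(y)
        y = layers.Conv2D(growth_rate, 3, padding="same")(y)
        features.append(y)          # this output is reused by every subsequent layer
    return layers.Concatenate()(features)

inputs = layers.Input(shape=(176, 208, 3))      # assumed input size
outputs = dense_block(inputs)                   # L layers -> L(L+1)/2 direct connections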

Some of the advantages of DenseNet:

• The vanishing gradient problem is alleviated.

• Feature propagation is strengthened.

• Features are reused.

• The number of parameters is reduced.

        4. IMPLEMENTATION

This research compares the detection accuracy of two state-of-the-art deep learning models in detecting Alzheimer's disease in MRI images. The Keras module of TensorFlow, an open-source library for implementing deep learning models, is used to implement VGG19. Using the ImageDataGenerator function, the data was augmented and loaded into the model. Training used a batch size of 128 and 50 epochs with early stopping. Similarly, the Keras module is used to implement the DenseNet model, and the data is loaded into it via the ImageDataGenerator function. The DenseNet model is trained using batches of 128 images. Both models are trained on 3,048 MRI images spanning four classes, and both are tested on a total of 2,067 MRI images. All the work was done on Google Colab.
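A hedged sketch of this training setup, reusing the model and the data generators from the earlier sketches, is shown below; the early-stopping patience value is an assumption, while the batch size of 128 (set in the generators) and the 50 epochs follow the description above.

# Sketch: training with early stopping (patience value assumed).
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor="val_loss", patience=5, restore_best_weights=True)

history = model.fit(
    train_gen,                    # augmented training batches of 128 images
    validation_data=val_gen,
    epochs=50,
    callbacks=[early_stop],
)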

          1. Densenet169

DenseNet169 is among the major models of the DenseNet family for image classification. The model is fairly popular due to its good balance of size and accuracy. Its standard variant is a classifier over the 1,000 classes present in ImageNet.

            Fig. 6. Densenet 169 with 4 dense blocks

Based on this architecture, when an image is input, it passes through a sequence of dense blocks. After every dense block there is a transition layer (convolution and pooling), which refines the feature maps and reduces their size, and this modified version is pushed to the subsequent layers. After passing through these layers, the model classifies the image accordingly. The classification is of four types, namely Mild Demented, Moderate Demented, Non Demented, and Very Mild Demented. The model displayed an accuracy of about 87% on the train dataset and about 78% on the test dataset.
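As with VGG19, a transfer-learning style sketch for DenseNet169 with a four-class head is given below; the pooling layer and frozen ImageNet base are assumed choices, not the authors' exact code.

# Sketch: DenseNet169 base with a 4-class head (assumed configuration).
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet169

base = DenseNet169(weights="imagenet", include_top=False, input_shape=(176, 208, 3))
base.trainable = False

densenet_model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(4, activation="softmax"),   # the four dementia classes
])
densenet_model.compile(optimizer="adam",
                       loss="categorical_crossentropy",
                       metrics=["accuracy"])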

          2. VGG19

The figure below shows the architecture of the VGG19 model, with the input and output layers at either end. In between are convolutional and max pooling layers, which repeatedly apply small (3x3) filters so that every part of the image is covered. At the end, the softmax function provides the probability of the image belonging to a particular class. We map the images labelled Mild Demented as '0', Moderate Demented as '1', Non-Demented as '3', and Very Mild Demented as '4' to classify the data easily and efficiently. The formula for calculating the output size is given below.
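The formula referred to here is presumably the standard convolution output-size relation: O = (W - K + 2P)/S + 1, where W is the input width (or height), K the filter size, P the padding, and S the stride. For example, a 224-wide input with a 3x3 filter, padding 1, and stride 1 keeps its size: (224 - 3 + 2)/1 + 1 = 224.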

          Fig. 7. VGG19 Architecture

5. RESULTS AND DISCUSSION

          1. Densenet169 Model Results

The DenseNet model has shown decent accuracy in classifying the images and has produced promising training curves. It used a batch size of 128 and ran for 40 epochs. The model achieved an accuracy of about 87% on the train data and about 80% on the test data.

The model displayed an AUC of about 88% on the train data and about 82% on the test data, and the loss obtained is also fairly low. The graphs and confusion matrix are as follows:

            Fig. 8. Model loss in DenseNet169

            Fig. 9. Model accuracy in DenseNet169

            Fig. 10. Model AUC in DenseNet169

            Fig. 11. Confusion Matrix for DenseNet169

          2. VGG19 Model Results

The VGG19 model was trained on 3,048 MRI images and tested on 2,067 MRI images. It used a batch size of 128 images for training, and the number of epochs used was 50. This model demonstrated an accuracy of 88% and an AUC of 94% on the training dataset, and 82.6% and 86.7% respectively on the test dataset. Loss is a function rather than a percentage like accuracy: it is the summation of all the errors the model makes during training and during validation. One can always adjust the model's hyperparameters, such as the batch size of the training images or the number of epochs, to try to reduce the loss. Furthermore, in the confusion matrix, the numbers on the diagonal indicate how many times samples were classified correctly, while the numbers off the diagonal show how many times samples were classified incorrectly.

          Fig. 12. Model loss in VGG19

          Fig. 13. Model accuracy in VGG19

          Fig. 14. Model AUC in VGG19

Fig. 15. Confusion Matrix for VGG19

6. CONCLUSION AND FUTURE WORKS

Alzheimer's disease is the leading cause of dementia. This paper presents a prospective solution for detecting the disease at an early stage. The models used in this paper have successfully classified the images into the appropriate four classes and provided promising results; we observe that VGG19 performs better than DenseNet. Further research is required before this particular model can be implemented in clinical settings, where it could improve the standard of care against this specific disease. Knowledge about this disease should be spread among people, and they should be encouraged to get themselves examined. We are currently working on deploying this model onto a website for better practical usage. In the future, this model can also be tested on a larger dataset; in the current dataset we had only 52 and 12 images for training and testing, respectively, for the 'Moderate Demented' class. The proposed model can help doctors diagnose Alzheimer's disease more effectively and can be modified to identify other neurodegenerative diseases more automatically in the future.

REFERENCES

  1. Suresha, Halebeedu Subbaraya, and Srirangapatna Sampathkumaran Parthasarathy. "Alzheimer Disease Detection Based on Deep Neural Network with Rectified Adam Optimization Technique using MRI Analysis." 2020 Third International Conference on Advances in Electronics, Computers and Communications (ICAECC), pp. 1-6. IEEE, 2020.

  2. Deng, Lan, and Yuanjun Wang. "Hybrid diffusion tensor imaging feature-based AD classification." Journal of X-Ray Science and Technology Preprint, 2020, pp. 1-19.

3. Khan, Afreen, and Swaleha Zubair. "An Improved Multi-Modal based Machine Learning Approach for the Prognosis of Alzheimer's disease." Journal of King Saud University-Computer and Information Sciences, 2020.

4. Khan, Afreen, and Swaleha Zubair. "Usage Of Random Forest Ensemble Classifier Based Imputation And Its Potential In The Diagnosis Of Alzheimer's Disease." Int. J. Sci. Technol. Res. 8, no. 12, 2019, pp. 271-275.

5. Asim, Yousra, Basit Raza, Ahmad Kamran Malik, Saima Rathore, Lal Hussain, and Mohammad Aksam Iftikhar. "A multi-modal, multi-atlas based approach for Alzheimer detection via machine learning." International Journal of Imaging Systems and Technology 28, no. 2, 2018, pp. 113-123.

  6. Alam, Saruar, GooRak Kwon, and Alzheimer's Disease Neuroimaging Initiative. "Alzheimer disease classification using KPCA, LDA, and multikernel learning SVM." International Journal of Imaging Systems and Technology 27, no. 2, 2017, pp. 133-143.

7. Khajehnejad, Moein, Forough Habibollahi Saatlou, and Hoda Mohammadzade. "Alzheimer's disease early diagnosis using manifold-based semi-supervised learning." Brain Sciences 7, no. 8, 2017, p. 109.

8. Lama, Ramesh Kumar, Jeonghwan Gwak, Jeong-Seon Park, and Sang-Woong Lee. "Diagnosis of Alzheimer's disease based on structural MRI images using a regularized extreme learning machine and PCA features." Journal of Healthcare Engineering 2017, 2017.

9. Bryan, R. Nick. "Machine learning applied to Alzheimer disease." 2016, pp. 665-668.

10. Escudero, Javier, Emmanuel Ifeachor, John P. Zajicek, Colin Green, James Shearer, Stephen Pearson, and Alzheimer's Disease Neuroimaging Initiative. "Machine learning-based method for personalized and cost-effective detection of Alzheimer's disease." IEEE Transactions on Biomedical Engineering 60, no. 1, 2012, pp. 164-168.
