An Intelligent Advisory System Based on Image Processing to Teeth Diseases Diagnosis

DOI : 10.17577/IJERTV10IS100175


M. I. El-Alami
Dept. of Computer Science, Faculty of Specific Education, Mansoura, Egypt

M. H. Ghazy
Dept. of Fixed Prosthodontics, Faculty of Dentistry, Mansoura, Egypt

A. E. Amin
Dept. of Computer Science, Faculty of Specific Education, Mansoura, Egypt

S. A. Mousa
Dept. of Computer Science, Faculty of Specific Education, Mansoura, Egypt

Abstract:- This research focuses on an advisory system that assists in the diagnosis of teeth diseases and helps the dentist diagnose cases and provide appropriate treatment for patients. The proposed system consists of an image processing stage and an advisory system. The image processing stage uses two algorithms: the Gray-Level Co-Occurrence Matrix (GLCM) algorithm to extract features from X-ray images, and the K-Means algorithm to classify the image features, which are then stored in the database. The advisory system consists of a knowledge base, built with expert dentists, that helps extract rules from the image database, and another part responsible for matching the query rule against the knowledge base. When the proposed system was applied to several cases, the extracted therapeutic reports were very satisfactory.

Key words: Image processing, advisory system.

  1. INTRODUCTION

    Nowadays, we live in the digital age, where digital imaging has developed into a widespread technology. Digital images are used as a means of conveying pictorial information in the field of medical diagnosis. [1]

    Images are two-dimensional (2D), such as a picture or a screen display. A two-dimensional function f(I, J) denotes an image, where I and J are spatial coordinates, and the amplitude of f at any pair of coordinates (I, J) is the intensity or grey level of the picture at that point. The image is called a digital image when I, J, and the amplitude values of f are all finite, discrete quantities. [2] An image is also an array, or matrix, of square pixels (picture elements) arranged in columns and rows. [3] Many types of images, such as X-ray images, are based on a grayscale.
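    As a minimal illustration of this definition (a toy sketch assuming NumPy and 8-bit grey levels, not part of the original paper):

        import numpy as np

        # A toy 3x4 grayscale "image": a finite matrix of 8-bit pixels arranged
        # in rows and columns. Each entry is the grey level f(I, J) at (I, J).
        f = np.array([[ 12,  80, 200, 255],
                      [  0,  64, 128, 192],
                      [ 30,  90, 150, 210]], dtype=np.uint8)

        print(f.shape)   # (3, 4): 3 rows, 4 columns
        print(f[1, 2])   # 128: the grey level at spatial coordinates I=1, J=2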

  2. IMAGE PROCESSING

    Image processing is a form of signal processing whose input is an image, such as a picture or photograph; its output is either an image or a set of properties or parameters related to the image. Most image processing techniques treat the image as a 2D signal and apply standard signal-processing techniques to it. Ordinarily, image processing refers to digital image processing. [4]

    Image processing is the use of computer algorithms to perform processing on digital images. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. [5]

    Image processing entails applying mathematical operations to the image signal, including region-based and boundary-based techniques. [5] Region-based and boundary-based techniques are the two categories of techniques in shape-based retrieval: a region-based technique uses the entire shape region, whereas a boundary-based technique relies only on the border points of shapes in feature-vector extraction. [6]
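    To make the distinction concrete, the sketch below (an illustration on toy data, not the method of [6]) derives a boundary-based descriptor from the border points alone and a region-based measure from the entire shape region:

        import numpy as np

        def centroid_distance_signature(boundary):
            """Boundary-based: distances from the shape centroid to each border point."""
            boundary = np.asarray(boundary, dtype=float)   # (n, 2) border coordinates
            centroid = boundary.mean(axis=0)
            return np.linalg.norm(boundary - centroid, axis=1)

        def region_area(mask):
            """Region-based: uses the whole shape region (here, simply its pixel area)."""
            return int(np.count_nonzero(mask))

        # A small square shape: four boundary corner points and its filled mask.
        square_boundary = [(0, 0), (0, 2), (2, 2), (2, 0)]
        square_mask = np.ones((3, 3), dtype=bool)

        print(centroid_distance_signature(square_boundary))  # equal corner distances
        print(region_area(square_mask))                      # 9 pixels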

    Image retrieval techniques, which help locate images after they have been stored, are an important application of image processing.

    CBIR (content-based image retrieval) is a method that retrieves images using their visible content, sometimes referred to as features. CBIR, also known as query by image content, is used to search huge databases for images matching a query image. The flow chart in Fig.1 shows a CBIR system. CBIR's key low-level characteristics are color, texture, and shape. Based on these low-level characteristics, a feature vector is automatically extracted for each image in a CBIR system. The similarity distance between the query image's feature vector and the feature vectors in the image database is then calculated. Finally, the algorithm finds the images most comparable to the query according to their similarity values. [7]

    Fig.1. Flow chart of Content Based Image Retrieval
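    As a minimal sketch of the matching step in Fig.1 (assuming feature vectors have already been extracted, and taking Euclidean distance as the similarity measure; the names and toy values are illustrative):

        import numpy as np

        def retrieve_similar(query_fv, db_fvs, top_k=3):
            """Rank database images by similarity distance to the query feature vector."""
            query_fv = np.asarray(query_fv, dtype=float)
            db_fvs = np.asarray(db_fvs, dtype=float)      # one row per stored image
            distances = np.linalg.norm(db_fvs - query_fv, axis=1)
            ranked = np.argsort(distances)                # smallest distance = most similar
            return [(int(i), float(distances[i])) for i in ranked[:top_k]]

        # Toy database of feature vectors (e.g., contrast, homogeneity, energy, correlation).
        database = [[0.9, 0.2, 0.1, 0.5],
                    [0.1, 0.8, 0.7, 0.4],
                    [0.8, 0.3, 0.2, 0.6]]
        print(retrieve_similar([0.85, 0.25, 0.15, 0.55], database))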

  3. FEATURES EXTRACTION

    Feature extraction is the starting point of CBIR: it gathers the visual content of images for indexing and retrieval. Low-level or primitive image features may be global, such as color, texture, and shape, or domain-specific. [8]

    Feature extraction is the operation of producing features for use in selection and classification tasks. Feature selection decreases the number of features provided to the classification task: those features that are likely to help in discrimination are selected and used. [9]

    Further, each feature class is split into subclasses by the type of algorithm used for building the feature vector, and many methods of feature extraction are used, such as color features, texture features, and shape features.

    In statistical texture analysis, texture features are calculated from the statistical distribution of observed combinations of intensities at positions defined relative to each other in an image. The texture is defined not only by the grey value at a particular pixel but also by the grey values in the area surrounding it, since the brightness level at a point is influenced by the brightness levels of the points around it. The statistics are divided into first-order, second-order, and higher-order statistics according to the number of intensity points (pixels) in each combination. [8]

    3.1. Gray-Level Co-Occurrence Matrix (GLCM) algorithm

    The Gray-Level Co-occurrence Matrix (GLCM) is a technique for extracting second-order statistical texture features. [8]

    The GLCM yields 14 features, extracted and calculated using the following equations, in which $P(i,j)$ is the normalized co-occurrence matrix over $N$ grey levels, $\mu_i, \mu_j, \sigma_i, \sigma_j$ are its marginal means and standard deviations, and $P_{x+y}$ and $P_{x-y}$ are the distributions of the sum and difference of grey levels:

    • Contrast equation is:

      $Contrast = \sum_{i,j=0}^{N-1} (i-j)^2 \, P(i,j)$  (1)

      Contrast is known as the separation between the darkest and brightest regions; it is the difference between the highest and lowest values of a contiguous set of pixels. [10]

    • Correlation equation is:

      $Correlation = \sum_{i,j} \frac{(i-\mu_i)(j-\mu_j) \, P(i,j)}{\sigma_i \sigma_j}$  (2)

      Correlation is a measure of grey-tone linear dependencies in an image, specifically along the direction of the chosen displacement vector. [10]

    • Homogeneity equation is:

      $Homogeneity = \sum_{i,j=0}^{N-1} \frac{P(i,j)}{1+|i-j|}$  (3)

      Homogeneity gives information about how little change there is in an image. It is defined as the quality or state of being homogeneous. [10]

    • Energy equation is:

      $Energy = \sum_{i,j=0}^{N-1} P(i,j)^2$  (4)

      The Energy parameter is also called Uniformity. Energy is a feature that measures the smoothness of the image. [10]

    • Angular Second Moment equation is:

      $ASM = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} P^2(i,j)$  (5)

      In this feature, the image homogeneity is measured by the sum of squares of the entries of the GLCM. [8]

    • Inverse Difference Moment (IDM) equation is:

      $IDM = \sum_{i,j=0}^{N-1} \frac{P(i,j)}{1+(i-j)^2}$  (6)

      IDM measures local homogeneity. It is high when the local grey level is uniform. [8]

    • Entropy equation is:

      $Entropy = -\sum_{i=0}^{N-1} \sum_{j=0}^{N-1} P(i,j) \log P(i,j)$  (7)

      Entropy measures the amount of image information required to compress the image; it measures the loss of information or message in the transmitted signal. [8]

    • Sum of Squares (Variance) equation is:

      $Variance = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} (i-\mu)^2 \, P(i,j)$  (8)

      This feature gives a lot of weight to the elements that differ from the average value of $P(i,j)$.

    • Sum Average equation is:

      $SA = \sum_{i=0}^{2N-2} i \, P_{x+y}(i)$  (9)

    • Sum Entropy equation is:

      $SE = -\sum_{i=0}^{2N-2} P_{x+y}(i) \log\big(P_{x+y}(i)\big)$  (10)

    • Difference Entropy equation is:

      $DE = -\sum_{i=0}^{N-1} P_{x-y}(i) \log\big(P_{x-y}(i)\big)$  (11)

    • Inertia equation is:

      $Inertia = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} (i-j)^2 \, P(i,j)$  (12)

    • Cluster Shade equation is:

      $CS = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} (i+j-\mu_i-\mu_j)^3 \, P(i,j)$  (13)

    • Cluster Prominence equation is:

      $CP = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} (i+j-\mu_i-\mu_j)^4 \, P(i,j)$  (14)

    In this work, four significant features were chosen for extracting image features: Contrast, Homogeneity, Energy, and Correlation. The extracted values are arranged in a feature-vector matrix, with one row per image and one column per feature:

    $FV = \begin{bmatrix} f_{1,1} & f_{1,2} & \cdots & f_{1,n} \\ f_{2,1} & f_{2,2} & \cdots & f_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ f_{m,1} & f_{m,2} & \cdots & f_{m,n} \end{bmatrix}$

    After extraction, the features are stored in the database and classified as image features, drawing on the large body of knowledge previously stored within the database.
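    As a concrete illustration of the equations above, the following NumPy sketch (an assumed implementation for illustration, not the system's actual code) builds a normalized GLCM for a single pixel displacement and computes the four chosen features:

        import numpy as np

        def glcm(image, levels, dx=1, dy=0):
            """Normalized co-occurrence matrix P(i, j) for displacement (dy, dx)."""
            P = np.zeros((levels, levels), dtype=float)
            rows, cols = image.shape
            for r in range(rows - dy):
                for c in range(cols - dx):
                    P[image[r, c], image[r + dy, c + dx]] += 1
            return P / P.sum()

        def glcm_features(P):
            """Contrast (1), Correlation (2), Homogeneity (3), and Energy (4)."""
            n = P.shape[0]
            i, j = np.indices((n, n))
            mu_i, mu_j = (i * P).sum(), (j * P).sum()
            sd_i = np.sqrt(((i - mu_i) ** 2 * P).sum())
            sd_j = np.sqrt(((j - mu_j) ** 2 * P).sum())
            return {
                "contrast":    ((i - j) ** 2 * P).sum(),
                "correlation": ((i - mu_i) * (j - mu_j) * P).sum() / (sd_i * sd_j),
                "homogeneity": (P / (1 + np.abs(i - j))).sum(),
                "energy":      (P ** 2).sum(),
            }

        # Toy 4-level, 4x4 "image".
        img = np.array([[0, 0, 1, 1],
                        [0, 0, 1, 1],
                        [0, 2, 2, 2],
                        [2, 2, 3, 3]])
        print(glcm_features(glcm(img, levels=4)))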

  4. IMAGES CLASSIFICATION

    Classifying objects is a simple activity for humans, but it has proven to be a difficult challenge for machines. The task of image classification requires describing different images with patterns of values, converting the problem into one of pattern recognition. [11] Classification methods are used in many fields, such as data mining, signal decoding, finance, voice recognition, natural language processing, and medicine.

    Classification is the process whereby a given test sample is assigned a class on the basis of knowledge obtained by the classifier during training. [10]

    Image classification analyses the numerical characteristics of distinct image features and categorizes the data. Classification algorithms generally proceed in two stages: training and testing. In the training phase, characteristic image features are isolated and, based on these, a unique description is created for every classification category, i.e. training class. In the testing phase, these feature-space partitions are used to categorize image features. As with feature extraction, several types of classifiers may be employed for image classification: K-Nearest-Neighbor, Naive Bayes, Decision Tree, Support Vector Machine (SVM), and K-Means [13] are just a few examples. K-Means is the classifier chosen here, owing to its ease of use and the simplicity of training it on the data set.

    Classification methods assume that the image exhibits one or more features (such as spectral regions) and that each of these features belongs to one of several unique classes. The classes can be provided a priori by an analyst or grouped automatically into sets of prototype classes, where the analyst merely provides the required number of categories. [12]

    4.1. K-Means algorithm

    The K-Means algorithm, often known as the Hard C-Means algorithm, is a well-known clustering technique. This method divides an image into several clusters of pixels in the feature space, each specified by its center. Each pixel in the picture is initially assigned to the closest cluster; the new centers are then calculated from the new clusters. This process is repeated until convergence is achieved. To begin, the number of clusters K must be decided, and centroids are assumed for these clusters: random objects, or the first K items in a series, may serve as initial centroids. [14] K-Means is also the most widely used partitioning-based clustering technique. It is an unsupervised clustering algorithm: it selects the centroids, compares the data points to the centroids on the basis of their intensity and features, and calculates the distances; data points that are similar to a centroid are allocated to that centroid's cluster. New 'k' centroids are then calculated from the data points closest to each cluster, and k clusters are produced. [15]

    The following pseudocode shows the steps of the algorithm:

        Let m be the required number of clusters.
        Let Z denote the set of feature vectors (|Z| denotes the set's size).
        Let D be the associated cluster of every feature vector.
        Let sim(x, y) be the likeness function.
        Let c[1..m] be the cluster centre vectors.

        Initial:
            Let Z' = Z
            // select m random vectors to begin the clusters
            For i = 1 to m
                j = rand(|Z'|)
                c[i] = Z'[j]
                Z' = Z' - {c[i]}  // delete that vector from Z' so it cannot be selected again
            End
            // assign the first clusters
            For i = 1 to |Z|
                D[i] = argmax(j = 1 to m) { sim(Z[i], c[j]) }
            End

        Run:
            Let change = true
            While change
                change = false  // assume nothing has changed
                // recompute the cluster centres from the current assignments
                For i = 1 to m
                    mean = 0; count = 0
                    For j = 1 to |Z|
                        If D[j] == i
                            mean = mean + Z[j]
                            count = count + 1
                        End
                    End
                    c[i] = mean / count
                End
                // reassign feature vectors to the most similar cluster centre
                For i = 1 to |Z|
                    d = argmax(j = 1 to m) { sim(Z[i], c[j]) }
                    If d != D[i]
                        D[i] = d
                        change = true  // an assignment changed, so recalculate
                                       // the cluster vectors and run again
                    End
                End
            End
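    A minimal runnable rendering of this pseudocode is sketched below (an illustration that assumes the likeness function sim(x, y) is the inverse of the Euclidean distance, so the most similar centre is simply the nearest one):

        import numpy as np

        def k_means(Z, m, rng=np.random.default_rng(0)):
            """Cluster the feature vectors Z into m clusters, mirroring the pseudocode."""
            Z = np.asarray(Z, dtype=float)
            # Initial: pick m distinct random vectors as cluster centres.
            c = Z[rng.choice(len(Z), size=m, replace=False)].copy()

            def nearest(x):
                # Highest likeness = smallest Euclidean distance here.
                return int(np.argmin(np.linalg.norm(c - x, axis=1)))

            D = np.array([nearest(x) for x in Z])      # assign the first clusters
            change = True
            while change:
                change = False
                for i in range(m):                     # recompute centres from assignments
                    members = Z[D == i]
                    if len(members):
                        c[i] = members.mean(axis=0)
                for idx, x in enumerate(Z):            # reassign vectors to nearest centre
                    d = nearest(x)
                    if d != D[idx]:
                        D[idx] = d
                        change = True                  # something moved: run again
            return D, c

        # Toy run on four 2-D feature vectors (e.g., rows of the FV matrix).
        labels, centres = k_means([[0., 0.], [0., 1.], [5., 5.], [6., 5.]], m=2)
        print(labels, centres)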

  5. AIDE DENTIST SYSTEM (ADS)

    As shown in Fig.3, an Aide Dentist System (ADS) mainly consists of two stages: image processing and the advisory system.

    Fig.3. Flow chart of advisory system

    In the first stage, image processing is applied to extract the basic features from the image and to create a database of classified features. In the second stage, the advisory system draws on expert dentists to build rules from the image decisions. A knowledge base is used to store the extracted rules, as shown in Table.1.

    Table.1. Example of Extract Rules and Execute Rules

        Extract Rules:
            If    the image belongs to Cluster (Cl_n)
            And   the image is labeled by (Im_s)
            Then  the therapeutic report is (TR_s)

        Execute Rules:
            If    Im(7) belongs to Cl(8)
            Then  TR(8):
                  Diagnosis: Deep caries of upper 5
                  Treatment: Extraction and Implant of upper 5
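    The rule scheme of Table.1 can be pictured as a simple lookup, as in the sketch below; the data structures and the single entry shown are hypothetical illustrations, not the system's stored knowledge base:

        # Knowledge base: (cluster, image label) -> therapeutic report (hypothetical entry).
        knowledge_base = {
            (8, "Im7"): "TR8 - Diagnosis: Deep caries of upper 5. "
                        "Treatment: Extraction and Implant of upper 5.",
        }

        def execute_rules(cluster, image_label):
            """Match the query rule against the knowledge base (Table.1, Execute Rules)."""
            return knowledge_base.get((cluster, image_label), "No matching rule found.")

        print(execute_rules(8, "Im7"))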

  6. EXPERIMENTAL AND RESULTS

    To evaluate the Aide Dentist System (ADS), samples of X-ray images were collected, and their treatment reports were prepared with the assistance of an expert dentist, as shown in Table.2.

    Table.2. Samples of X-ray images and treatment reports

        Image          Diagnosis                                 Treatment
        (X-ray image)  Deep caries of upper 5                    Extraction and Implant of upper 5
        (X-ray image)  Impacted lower 7                          Extraction of lower 7
        (X-ray image)  Skeletal Class II maxillary protrusion    High-pull headgear to restrict maxillary growth

    The Aide Dentist System (ADS) is logged into by entering the patient's personal data, as shown in Fig.4.

    Fig.4. Login to the proposed system

    As shown in Fig.5, images are input into the ADS. Subsequently, the image processing phase extracts the image features, which are then saved in the image database; Fig.6 shows a confirmation message that the image features were stored successfully.

    Fig.5. Example of image feature extraction

    Fig.6. Confirmation message for storing image features

    As shown in Fig.7, the image features are divided into classes, so images can be retrieved easily through a query; the Aide Dentist System (ADS) confirms the classification, as shown in Fig.8.

    Fig.7. Query and classification buttons

    Fig.8. Classification confirmation message

    Then, the query image is matched against the knowledge base. If a match is found, the therapeutic report for the case is extracted, as shown in Fig.9.

    Fig.9. Therapeutic Report

  7. CONCLUSIONS

This paper presents a proposed system based on image processing to help dentists in the diagnosis of teeth diseases. The experimental results extracted from the Aide Dentist System (ADS) were very convincing and satisfactory: when the system was applied to several cases, the extracted therapeutic reports matched those of the experts in the field.

ACKNOWLEDGMENT

First and foremost, I thank the Almighty "Allah," who provided me with the power, faith, and competence to complete this thesis in the best possible manner. I express my heartfelt gratitude to the members of the supervisory committee, Prof. Mohi Eddin Ismail El-Alami, Prof. Mohamed Hammed Ghazy, and Prof. Ahmed El-Sayed Amin.

I've been trying to find words to express my thanks for their encouragement, thorough follow-up of this work, honest oversight, and close mentoring, but I can't find enough words to express my gratitude to them. I gratefully acknowledge and express my deepest thanks to my parents, brother, and sister, who have always been there for me in good times and bad.

REFERENCES

      1. C. Bhalla, S. Gupta, A Review on Splicing Image Forgery Detection Techniques, IRACST – International Journal of Computer Science and Information Technology & Security (IJCSITS), Vol. 6, No. 2, Mar-April 2016, pp. 262-271.

      2. A. Niranjana, I. Ahmed, Survey: Mood Detection in Images, International Journal of Electronic and Electrical Engineering, Volume 7, Number 10 (2014), pp. 1037-1048.

      3. Er. Nisha, Er. Lavina Maheshwari, Size Estimation of Lung Tumor by Using Image Segmentation & BPN, International Journal of Advanced Research in Computer Science and Software Engineering, Volume 5, Issue 11, November 2015, pp. 621-626.

      4. K. Radhika, P. Vishalini, Digital Image Processing: Sequence, Components and Pros, International Journal of Computer Science and Information Technologies (IJCSIT), Vol. 7 (2), 2016, pp. 922-924.

      5. S. Nasira Tabassum, Digital Computing Image Processing Network Technology, International Journal of Scientific & Engineering Research, Volume 3, Issue 11, November-2012, pp.1-5.

      6. Sonya Eini and Abdolah Chalechale, Shape Retrieval Using Smallest Rectangle Centroid Distance, International Journal of Signal Processing, Image Processing and Pattern Recognition, 2013, PP. 61-68.

      7. S Eini, A Chalechale, E Akbari, A New Fourier Shape Descriptor Using Smallest Rectangle Distance, Computer and Knowledge Engineering (ICCKE), 31 December 2012.

      8. P. Mohanaiah, P. Sathyanarayana and L. GuruKumar, Image Texture Feature Extraction Using GLCM Approach, International Journal of Scientific and Research Publications, Volume 3, Issue 5, May 2013,PP. 1-5.

      9. Manisha Lumb, Poonam Sethi, Texture Feature Extraction of RGB, HSV, YIQ and Dithered Images using GLCM, Wavelet Decomposition Techniques, International Journal of Computer Applications (0975 8887), Volume 68 No.11, April 2013, pp. 25-31 .

      10. Manisha Lumb, Poonam Sethi, A Hierarchical Model to Classify Brain Cancer using GLCM, International Journal of Science and Research (IJSR), Volume 5 Issue 8, August 2016, pp.790-974.

      11. P. Kamavisdar, S. Saluja, S. Agrawal, A Survey on Image Classification Approaches and Techniques, International Journal of Advanced Research in Computer and Communication Engineering, Vol. 2, Issue 1, January 2013, pp.1005-1009.

      12. Geeta M Arwindekar, Shubhangi Vaikole, Transform Based Method for Classification of Various Meningioma Subtype Images, International Journal of Advanced Research in Computer Science, Volume 4, No. 1, January 2013 (Special Issue), pp.42-44.

      13. Siti N. A. Hassan, Nadiah S. A. Rahman, Zaw Zaw H. S. Lei, Automatic Classification of Insects Using Color-based and shape-based descriptors, International Journal of Applied Control, Electrical and Electronics Engineering (IJACEEE) Vol 2, No.2, May 2014, pp.23-35.

      14. B. Subbiah and S. Christopher, Image Classification through integrated K- Means Algorithm, IJCSI International Journal of Computer Science Issues, Vol. 9, Issue 2, No 2, March 2012, pp.518-524.

      15. Md. Khalid Imam Rahmani, Naina Pal, Kamiya Arora, Clustering of Image Data Using K-Means and Fuzzy K-Means, (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 5, No. 7, 2014, pp. 160-163.
