Melanocytes Detection Using Convolutional Neural Network Based Approach

DOI: 10.17577/IJERTV11IS060332


Samruddhi Suryakar

M.Tech. Scholar,

Department of Computer Science & Engineering, G H Raisoni University,

Amravati, India

Prashant Adakane

Asst. Prof.

Department of Computer Science and Engineering, G H Raisoni University,

Amravati, India

Abstract: Skin cancer is among the most dangerous cancers in India, and its incidence has increased in recent years. The precise causes of this form of cancer vary with circumstance, condition, and environment. The rapid progression of melanoma skin cancer, its high cost of treatment, and its mortality rate have all heightened the need for early identification of skin cancer. Most of the time, detecting cancerous cells is a slow, manual process. This study presents an artificial skin cancer diagnosis system that combines a machine learning approach with a VGG-16 based image processing pipeline. After the dermatoscopic pictures are segmented, the characteristics of the affected skin cells are extracted with a feature extraction approach, and the extracted characteristics are classified with a deep-learning-based convolutional neural network classifier. Applied to a publicly accessible data set, the system achieved an accuracy of 89.5 percent and a training accuracy of 93.7 percent.

Keywords: Machine Learning; Convolutional Neural Network; Information Search and Retrieval; Melanoma; Feature Extraction

  1. INTRODUCTION

    In the past 10-year period, from 2008 to 2018, the annual number of melanoma cases has increased by 53%, partly due to increased UV exposure. Although melanoma is one of the most lethal types of skin cancer, a fast diagnosis can lead to a very high chance of survival.

    According to WHO statistics, the number of people affected by skin cancer will rise to almost 13.1 million by 2030. Skin cancer is a condition in which melanocytic cells in the skin grow abnormally. Malignant melanoma, a class of skin cancer, generally arises from the pigment-containing cells known as melanocytes. Melanoma is most common among non-Hispanic white males and females and accounts for approximately 75% of deaths associated with skin cancer. According to the World Cancer Report, the primary cause of melanoma is ultraviolet light exposure in people with low levels of skin pigment. The UV rays can come from the sun or other sources, and approximately 25% of malignant melanomas arise from moles. A neural network algorithm is utilized to distinguish benign from malignant lesions. The framework learns from images captured with a dermatoscopic device to decide whether a lesion is benign or malignant.

    Convolutional Neural Network (CNN) is a type of neural network which is used in signal and image processing.

    Convolutional neural networks are also used in recommender systems. CNN was chosen here because it gives high accuracy in image processing. A CNN has four basic working stages. VGG16 is a convolutional neural network (CNN) architecture that was used to win the ILSVRC (ImageNet) competition in 2014 and is considered one of the best vision model architectures to date.

    Dermatologists enter all of their information into the first layer, which serves as the input layer. The input layer shapes the information and sends it to the next layers, which subsequently transmit it to the pooling layer.

    The pooling layer pools the information using max pooling or min pooling. From the pooling layer the information is sent to the flattening layer, which converts the data to a one-dimensional vector. At that point, the information is condensed enough to be assigned to a category, either benign or malignant. This study proposes a method for automatically classifying skin lesion photos into malignant and benign melanoma using convolutional neural networks.
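As a concrete illustration of this input, convolution, pooling, flattening, and classification flow, here is a minimal sketch in TensorFlow/Keras. The layer sizes and filter counts are illustrative assumptions, not the exact configuration used in this work.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Minimal sketch of the layer flow described above: input -> convolution ->
# max pooling -> flattening -> dense classification (benign vs malignant).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),    # dermatoscopic image (assumed size)
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),            # pooling layer (max pool)
    layers.Flatten(),                       # flattening layer: 1-D vector
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # class 1 = malignant, 0 = benign
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
```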

  2. INSPIRATION

    Skin cancer is a growing problem that needs to be detected as early as feasible. Diagnosis is a manual process that takes a lot of time and money. Machine learning, however, has advanced science in the modern world and can help in a variety of ways. A convolutional neural network in particular can make it simple to detect malignant cells, which is why machine learning is used here to detect cancerous cells more rapidly and effectively.

  3. LITERATURE SURVEY

    Dermatologists diagnose skin cancer by examining photographs of cancer patients and analysing them to determine whether the patient has malignant cells. [1] When a lesion contains harmful cells, dermatologists advise treating it as malignant rather than benign melanoma. The problem with this approach is that processing a large number of patients takes a long time, and increasing the rate of identification requires a lot of personnel, which raises the cost. An emerging computerized system that automates the procedure of skin cancer diagnosis would help dermatologists and make their work simpler and faster. [2] Over the years, several procedures and approaches have been developed to diagnose skin cancer. It has been suggested to identify the boundaries of the skin lesion using a closed elastic curve methodology combined with an intensity threshold method. A lighting-correction and feature-extraction framework based on high-level intuitive features applied to skin pictures has been proposed by Robert Amelard et al. [12]. Two artificial neural network techniques, the back-propagation neural network (BNN) and the auto-associative neural network, have also been suggested. A technique based on the ABCD standard has been put forward by Ramteke et al. [8] to identify malignant skin growths; the performance-improving criterion "E" of the ABCDE rule is not used in that work. Another technique identifies severe melanoma skin malignancy by extracting particular features through a 2D wavelet transform; the resulting image is then fed as input to an artificial neural network classifier. The drawback of that procedure is that it discriminates outcomes with an accuracy of only 84 percent.

    THEORY & RELATED WORK

    A. Melanoma Cancer

    Melanoma comes from melanocytes, the melanin-generating cells that are normally present in the skin. Because most melanoma cells still produce melanin, melanoma is often brown or black in colour. Fig. 1 shows the form of melanoma skin cancer. [3]

    Fig. 1 Form of melanoma skin cancer

    Melanoma can emerge on healthy skin, or as a mole or another alteration of the skin's surface. Some birthmark moles can progress to melanoma. Melanoma can also develop in the lips, tongue, jaws, ears, eyes, and gingiva of the upper jaw. The formation of additional moles or a change in the shape of an existing mole are frequent symptoms of melanoma. In contrast to normal moles, which are typically one colour, round or oval, and smaller than 6 millimetres in diameter, melanoma possesses the following features:

    1. Has several colours

    2. Its form is irregular.

    3. Its diameter exceeds 6 mm.

    4. It itches and may bleed.

    The ABCDE list may be used to check for its shape and to differentiate between melanoma and normal moles as follows:

    1. Asymmetrical: Melanoma is not symmetrical and has an uneven form.

    2. Border: Unlike other moles, melanoma has an irregular and rough edge.

    3. Color: Melanoma often combines two or three different hues.

    4. Diameter: Melanoma differs from regular moles in that its diameter is often more than 6 millimetres.

    5. Expansion or evolution: Melanoma often develops from moles that alter in size and form over time. [3]

  4. METHODOLOGY

Currently, to check for skin cancer, a patient must undergo a screening by a dermatologist to determine whether skin disease is present. This framework lets dermatologists process different cases much more quickly. Several symptomatic checklists have been developed; one of them is the ABCDE checklist:

Asymmetry (A): One side of the afflicted, tumour-forming cell does not match the other. The weight for this factor is 1.3.

Border (B): The margins and fringe of the contaminated cells are damaged, scored, and hidden. The weight for this factor is 0.1.

Colour (C): The shade is not consistent; there are many tones of tan or dark-coloured skin patches, and the suspicious area may be accentuated with red, white, and blue patches. The weight for this factor is 0.5.

Diameter (D): The cell width is more significant when it is 6 mm or greater.

Evolution (E): The aforementioned modifications or developments demonstrate that the lesion is changing over time.
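To make the weighting concrete, the sketch below combines the checklist scores into a single weighted sum. The weights 1.3, 0.1, and 0.5 for A, B, and C come from the text above; the 0.5 weight for diameter and the 5.45 decision threshold are assumptions borrowed from the standard ABCD dermoscopy rule, not values stated in this paper.

```python
# Hedged sketch of a weighted ABCD score. The D weight (0.5) and the 5.45
# threshold are assumptions from the standard ABCD rule, not from this paper.
def total_dermoscopy_score(asymmetry, border, color, diameter):
    """Clinical sub-scores: asymmetry 0-2, border 0-8, color 1-6, diameter score."""
    return 1.3 * asymmetry + 0.1 * border + 0.5 * color + 0.5 * diameter

tds = total_dermoscopy_score(asymmetry=2, border=5, color=3, diameter=4)
label = "suspicious of melanoma" if tds > 5.45 else "likely benign"
print(f"TDS = {tds:.2f} -> {label}")
```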

  1. VGG16 Model:

    VGG16, a convolutional neural network (CNN) architecture, won the 2014 ILSVRC (ImageNet) competition and is widely regarded as one of the best vision model architectures created to date. Throughout the design, the convolutional and max-pool layers are arranged in the same consistent pattern. Two fully connected layers and a softmax serve as the output. The number 16 in VGG16 indicates that there are 16 layers with weights. The network has approximately 138 million parameters. All 16 layers of the VGG16 model are depicted in Fig. 2. [1]

    Fig. 2 Block diagram of the 16 layers of the VGG16 model

    The VGG16 model's 16 layers comprise 5 sorts of layers (convolution, ReLU, max-pooling, softmax, and fully connected). The convolutional layers take a fixed-size 224 × 224 pixel RGB input. A number of convolutional layers are used to transform the data, each with a 3 × 3 receptive field (the smallest size necessary to preserve the notions of top/bottom, left/right, and centre). A max-pooling layer comes after a group of convolutional layers, and convolution and ReLU together process the data. Some configurations also include 1 × 1 convolution filters, which can be thought of as a linear transformation of the input channels (followed by a nonlinearity). The convolution stride is fixed to a single pixel, and the spatial padding of the 3 × 3 convolution layers is set to one pixel so that the spatial resolution is preserved after convolution. Rather than relying on many hyperparameters, VGG16 consistently uses 3 × 3 filters with a stride of 1, together with the same padding and max-pool structure throughout. Spatial pooling is carried out by five max-pooling layers interspersed between groups of convolutional layers. [1]
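For reference, the same 16-layer architecture can be instantiated directly through the Keras applications module. This is a sketch of one common way to obtain the network, not necessarily how the authors built theirs.

```python
import tensorflow as tf

# Instantiate the VGG16 architecture described above: 13 convolutional
# layers in 5 blocks plus 3 fully connected layers, with a 224x224x3 input.
vgg16 = tf.keras.applications.VGG16(
    weights=None,               # or "imagenet" to load pretrained filters
    include_top=True,           # keep the fully connected layers and softmax
    input_shape=(224, 224, 3),
    classes=1000,
)
vgg16.summary()                 # lists the layers and ~138 million parameters
```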

  2. Description of the Suggested Technique:

    This method made use of the images tagged "benign" and "malignant". The images marked "other" or "unknown" were not utilised, since such categories cannot be diagnosed. Images were added to the collection based on the diagnosis label taken from each image's metadata. The dataset was divided into two groups, one containing all the hazardous (malignant) dermoscopic images and the other containing the benign dermoscopic images. For the experimental portion, the photos were chosen from the ISIC dermoscopic archive by random selection. [4] The system we present has three levels. The input layer, or first layer, is where the training data sets are placed. The input layer gathers the incoming data and gives it some weight before it proceeds to the hidden levels. To identify a pattern, the neurons of the hidden layer extract the data's characteristics.

    The output layers that choose the right classes are then created using the pattern as a foundation. Finally, binary classification is employed, which chooses between classes 1 and 0. Class 0 in our case denotes the absence of hazardous cells, whereas class 1 denotes the presence of cancerous malignant cells. Fig. 2 shows the convolutional neural network used to create our system.

  3. Flow Chart


Fig. 3 Flow chart of the proposed skin cancer detection system

STEPS OF THE SYSTEM

To determine if a particular dermoscopic picture contains cancer or not, the following steps are used:

  • Step 1: Initializing all of the system's required photos and settings.

  • Step 2: The system receives the training images as input and saves them.

  • Step 3: The system determines the prediction using a convolutional neural network.

  • Step 4: The convolutional neural network from Step 3 is trained on the stored images to produce the final model.

  • Step 5: Save the model into the system for test data prediction. [4]

  • Step 6: Assess the outcome using the accepted assessment measures, such as precision, recall, F1 score, and accuracy.

    The six steps are described in the following order:

    Step 1: Data preprocessing. The enormous size of the pictures is one of the primary challenges in computational vision, since it can produce a lot of input data. If an input photo is 70 by 70 by 3, the input feature dimension is already 14,700. If the picture is 1024 × 1024, the processing required to pass it through a deep neural network, particularly a convolutional neural network, becomes very large (depending on the number of hidden units). Each picture has three channels, the RGB channels (Red, Green, Blue). Because of limited computing capability, we must reduce each image to a single channel when reading it. The size of the image is another problem: the images in the data collection are very large in both width and height. A picture 1022 pixels wide and 767 pixels high is quite large to analyse, and registering many such photographs requires a lot of processing power, time, and memory. Accordingly, we must downsize the input images so that our system can analyse them while using less memory and graphical processing power. To address these two issues with picture reading, the images are defined so that just one colour channel is left: in our case, the original photos are converted into grayscale versions that are simpler for the CPU to handle.
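The sketch below shows one way to implement this preprocessing with OpenCV. The 70 × 70 target size is an assumption taken from the 70 × 70 × 3 example earlier in this step.

```python
import cv2  # OpenCV

def preprocess(path, size=(70, 70)):
    """Read a photo, keep one (grayscale) channel, and downsize it."""
    img = cv2.imread(path)                        # e.g. a 1022x767 colour image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # single colour channel
    small = cv2.resize(gray, size)                # reduce width and height
    return small / 255.0                          # scale pixels to [0, 1]
```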

    Step 2: Save the prepared file. The class of each preprocessed photo is also preserved in the record. Images of normal and cancerous tissue are taken from the dataset for further processing, and pictures without a class label are discarded. The prepared pictures are then sent to the convolutional neural network in the next step.
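A sketch of this step follows; the (path, label) index and file names are illustrative assumptions, and `preprocess` is the function from the Step 1 sketch.

```python
import numpy as np

# Hypothetical (path, label) index; in practice built from image metadata.
dataset_index = [("img1.jpg", "benign"), ("img2.jpg", "malignant")]

images, labels = [], []
for path, label in dataset_index:
    if label not in ("benign", "malignant"):      # discard unlabeled classes
        continue
    images.append(preprocess(path))               # from the Step 1 sketch
    labels.append(1 if label == "malignant" else 0)

np.save("images.npy", np.array(images))           # save the prepared file
np.save("labels.npy", np.array(labels))
```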

    Step 3: Feed the preprocessed data to the convolutional neural network (CNN). [4] A convolutional neural network has three different kinds of layers, listed below:

  • Convolutional layer

  • Pooling layer

  • Fully connected layer

    Convolution Layer: Here, we describe the operation with an example. Say we have a single-channel 6×6 grayscale picture, as in Fig. 4, and a 3×3 filter. First, a 3×3 patch is extracted from the 6×6 picture and multiplied element-wise by the filter. The first element of the 4×4 output is the sum of this element-wise product:

    5×1 + 0 + 2×1 + 3×1 + 5×0 + 8×1 + 2×1 + 5×0 + 6×1 = 26.

    The second element of the 4×4 output is computed by shifting the filter one unit to the right and again summing the element-wise product. In the same way, the filter is convolved over the entire image to create the 4×4 output shown in Fig. 5.

    It may be said that, in general, convolving an x × x input with a y × y filter produces an output of size (x - y + 1) × (x - y + 1):

    Fig. 4 (6×6 image with 3×3 filter)

    Fig. 5 (4×4 image after applying 3×3 filter to 6×6 image)

  • Input: x × x; filter size: y × y

  • Output: (x - y + 1) × (x - y + 1)
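The following sketch reproduces this walk-through in NumPy with arbitrary pixel values: a 6×6 single-channel image convolved with a 3×3 filter yields a (6 - 3 + 1) × (6 - 3 + 1) = 4×4 output.

```python
import numpy as np

image = np.random.randint(0, 10, size=(6, 6))   # arbitrary 6x6 grayscale patch
kernel = np.array([[1, 0, 1],
                   [1, 0, 1],
                   [1, 0, 1]])                  # example 3x3 filter

out = np.zeros((4, 4))
for i in range(4):
    for j in range(4):
        # sum of the element-wise product of the 3x3 patch and the filter
        out[i, j] = np.sum(image[i:i+3, j:j+3] * kernel)

print(out.shape)  # (4, 4)
```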

    The convolution technique has some significant drawbacks, including the reduction of the image's size. Moreover, pixels at the borders take part in far fewer convolutions than pixels at the image's centre, so border information is lost. To compensate, the picture is padded with an additional border (i.e., one pixel all around), changing the input from a 6×6 matrix to an 8×8 matrix. Convolving the 8×8 input matrix with a 3×3 filter now yields an output the size of the original picture, which is generally expressed as:

  • Input: x × x; padding: p

  • Filter size: y × y

  • Output: (x + 2p - y + 1) × (x + 2p - y + 1)

    Another significant and helpful CNN feature allows users to decrease the image size: the stride. Convolving the picture with a stride of 2, for instance, moves the filter two pixels at a time in both the horizontal and vertical directions. With stride z, the dimensions are as follows:

  • Input: x × x; padding: p

  • Stride: z

  • Filter size: y × y

  • Output: [(x + 2p - y)/z + 1] × [(x + 2p - y)/z + 1]

    After the bias is added we obtain equation (1), whose result is passed to the rectified linear unit activation function in equation (2). Here bi is the bias term, xi the input picture, and wi the filter.

    zi = bi + xi × wi (1)

    ReLU(zi) = max(0, zi) (2)
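A small helper makes the output-size formula and equation (2) checkable:

```python
def conv_output_size(x, y, p=0, z=1):
    """Side length after convolving an x*x input with a y*y filter,
    padding p, stride z: (x + 2p - y) // z + 1."""
    return (x + 2 * p - y) // z + 1

def relu(v):
    return max(0.0, v)                    # equation (2)

assert conv_output_size(6, 3) == 4        # the 6x6 image with a 3x3 filter
assert conv_output_size(6, 3, p=1) == 6   # padding restores the original size
```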

    Pooling Layers: Pooling layers are frequently used to decrease the size of the image and speed up calculation. Consider the following 4×4 matrix:

    Fig. 6 Pooling Layer

    Fig. 7 Result after applying max pooling

    For each 2×2 block, the maximum value was taken, using a filter of size 2 and a stride of 2 units. If the pooling layer receives an input of size xh × xw × xc, it produces an output of size [(xh - y)/z + 1] × [(xw - y)/z + 1] × xc.
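The sketch below applies 2×2 max pooling with stride 2 to a 4×4 input, as in Figs. 6 and 7; the pixel values are arbitrary, since Fig. 6's values are not reproduced here.

```python
import numpy as np

x = np.array([[1, 3, 2, 1],
              [4, 6, 5, 7],
              [8, 9, 4, 2],
              [3, 1, 0, 5]])    # arbitrary 4x4 input

pooled = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        # take the maximum of each non-overlapping 2x2 block
        pooled[i, j] = x[2*i:2*i+2, 2*j:2*j+2].max()

print(pooled)  # [[6. 7.]
               #  [9. 5.]]
```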

    Then, to extract more complex characteristics, convolution and pooling are applied once more. The features are then flattened to a single layer so that they can be fed to a fully connected neural network. The desired outcome, benign or malignant, is then obtained after applying the softmax shown in equation (3).

    Output = exp(zi) / Σj exp(zj) (3)
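Equation (3) can be implemented in a numerically stable way as follows:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))   # subtracting the max avoids overflow
    return e / e.sum()          # equation (3): exp(zi) / sum_j exp(zj)

print(softmax(np.array([2.0, 1.0])))  # e.g. scores for malignant vs benign
```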

    Step 4: Training. Our model is trained for at least 200 epochs, and with each epoch the system's loss drops to a certain level. After roughly 180 training epochs we no longer observed any appreciable change in the loss, so training was halted at the 200th iteration.
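In Keras, this training schedule could be sketched with an early-stopping callback, reusing the model from the earlier sketch; the patience value and file names are illustrative assumptions.

```python
import numpy as np
from tensorflow.keras.callbacks import EarlyStopping

# Hypothetical training arrays; in practice these come from Step 2's files.
train_images = np.load("images.npy")
train_labels = np.load("labels.npy")

# Train for up to 200 epochs, halting once the loss shows no appreciable
# change (around epoch 180 in the runs described above).
stopper = EarlyStopping(monitor="loss", min_delta=1e-4, patience=10)
history = model.fit(train_images, train_labels, epochs=200, callbacks=[stopper])
```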

    Step 5: Save the model. The model is preserved for upcoming testing needs and is then utilised to forecast whether a picture is benign or malignant.

    Step 6: Prediction. With the help of the final output layer, we forecast the class of the test photos. After the prediction, we assess our system using the accuracy, precision, recall, and F1 score measures.

    Here, accuracy, F1 score, recall, and precision are all calculated.

    Fig. 8 Neurons vs Accuracy

    The accuracy, loss function, and mean squared error of the suggested model are shown in Figs. 8 to 11. Fig. 8 displays accuracy against the number of neurons: accuracy increases with the number of iterations, but after 80 rounds it drops because the extra neurons have a detrimental impact on the system. Fig. 9 depicts the relationship between loss and iteration: the loss decreases as the iteration count rises. Fig. 10 again displays an accuracy graph, in which accuracy increases as the number of iterations increases.

EXPERIMENTAL SETUP

• Data Set

The ISIC Archive has accumulated over 33,907 pictures. Cancer risk can be predicted using these photos. The following metrics are used to evaluate the predictions:

Recall = True Positive / (True Positive + False Negative)

Precision = True Positive / (True Positive + False Positive)

Specificity = True Negative / (True Negative + False Positive)

F1 Score = 2 × (Precision × Recall) / (Precision + Recall)

Precision, specificity, recall, accuracy, and F1 score are used to assess the performance of the suggested model. [4] Recall, for example, measures how many of the dangerous (malignant) instances the system can identify out of all the actually dangerous instances.
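These measures can be computed directly with scikit-learn; here y_true and y_pred are example arrays standing in for the actual and predicted labels (0 = benign, 1 = malignant).

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

y_true = [0, 1, 1, 0, 1]   # actual labels (example values)
y_pred = [0, 1, 0, 0, 1]   # labels predicted by the model

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1 score :", f1_score(y_true, y_pred))
```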

Fig. 9 Iteration vs Loss.

Fig. 10 Iteration vs Accuracy.

Fig. 11 Iteration vs Mean Squared Error

The recall, precision, and F1 score results are displayed below.

Parameter | Result
Recall    | 0.84
Precision | 0.8325
F1 Score  | 0.8325

Table 1: Layer-wise output shapes and parameter counts of the VGG16 model

Type of layer             | Output shape        | Number of parameters
Conv1 (b1)                | (None, 224×224×64)  | 1,792
Conv2 (b1)                | (None, 224×224×64)  | 36,928
Pooling (max)             | (None, 112×112×64)  | 0
Conv1 (b2)                | (None, 112×112×128) | 73,856
Conv2 (b2)                | (None, 112×112×128) | 147,584
Pooling (max)             | (None, 56×56×128)   | 0
Conv1 (b3)                | (None, 56×56×256)   | 295,168
Conv2 (b3)                | (None, 56×56×256)   | 590,080
Conv3 (b3)                | (None, 56×56×256)   | 590,080
Pooling (max)             | (None, 28×28×256)   | 0
Conv1 (b4)                | (None, 28×28×512)   | 1,180,160
Conv2 (b4)                | (None, 28×28×512)   | 2,359,808
Conv3 (b4)                | (None, 28×28×512)   | 2,359,808
Pooling (max)             | (None, 14×14×512)   | 0
Conv1 (b5)                | (None, 14×14×512)   | 2,359,808
Conv2 (b5)                | (None, 14×14×512)   | 2,359,808
Conv3 (b5)                | (None, 14×14×512)   | 2,359,808
Pooling (max)             | (None, 7×7×512)     | 0
Flatten layer             | (None, 25088)       | 0
Fully connected 1 (dense) | (None, 4096)        | 102,764,544
Fully connected 2 (dense) | (None, 4096)        | 16,781,312
Dropout layer             | (None, 4096)        | 0
Dense layer (output)      | (None, 1)           | 4,097

Fig. 11 plots the mean squared error against iteration: the mean squared error decreases as the number of iterations increases. Using the ISIC archive data, we obtained the recall, precision, and F1 score shown in the table above.

RESULT

The accuracy and loss function graphs of the VGG16 model are displayed in Fig. 12. Every graph has two curves: a test curve and a train curve. Because of overfitting to the training dataset, the training curve's model accuracy (a) is always greater than that of the test curve. At the highest accuracy of 93.18 percent, the model yields a train loss of 0.2603 and a test loss of 0.1716 (b).

Fig. 12 Accuracy and Loss function

The parameter values utilised in this system are displayed in Table 1. Each of the VGG16 model's 16 layers contributes certain parameters, and the number of parameters grows as the layers get deeper: convolutional layer one of block one contains 1,792 parameters, whereas convolutional layer one of block five has 2,359,808 parameters. The 13 convolutional layers are grouped into 5 blocks, each ending with a max-pooling layer; the max-pooling layers produce no parameters. The number of parameters formed in each layer follows from the changes in the data's output shape, and the final two fully connected layers hold the most parameters. There are 134,264,641 parameters in this VGG16 system altogether, all of which are trainable.

Table 2: Summary of the VGG16 model

Layer numbers | Parameters  | Train loss | Test loss | Highest accuracy (%)
16            | 134,264,641 | 0.2603     | 0.1716    | 93.18

Table 2 shows the maximum accuracy achieved by this model (VGG16), along with the corresponding train loss and test loss and the system's overall parameter and layer counts. Train loss and test loss are computed with the system's built-in loss function, which quantifies how much information the model loses on the training and test data. After training and evaluating on a test picture set, the VGG16 system yielded a train loss of 0.2603 and a test loss of 0.1716, measured at the point of highest accuracy. The system's best accuracy is 93.18 percent.

CONCLUSION

The modern world is full of terrible illnesses, and skin cancer is one of them. The best course of action is an early diagnosis. Medical knowledge has advanced greatly in today's world, yet skin cancer used to be identified manually, which was time-consuming and expensive. With the growth of deep learning in the field of medical research, detection has become much simpler: deep learning, notably CNN, can detect skin cancer quickly, simply, and at lower expense. This study therefore recommends a CNN to identify skin cancer. We employed VGG16 convolutional neural network models in this study.

Applying the convolutional models to the dataset, the VGG16 model obtained an accuracy of 74.24 percent.

The VGG16 model produced the best outcomes in this study. The purpose of this technique was to detect skin cancer, and it will make skin cancer detection quick and simple for doctors.

Future versions of this comparison might include more sophisticated convolutional neural network models. The data on deep learning models for skin cancer obtained in this study can aid the next generation of researchers in approaching 100% accuracy in skin cancer detection. Because this research focuses on only two specific varieties, the approaches used here can be extended to other forms of skin cancer. These systems can also be applied to large datasets, which will assist in developing more precise image classification algorithms for the diagnosis of skin cancer.

REFERENCES

[1] Mohammed Rakeibul Hasan, Mohammed Ishraaf Fatemi, Mohammad Monirujjaman, Comparative Analysis of Skin Cancer (Benign vs. Malignant) Detection Using Convolutional Neural Networks, Journal of Healthcare Engineering, Hindawi, 2021. https://doi.org/10.1155/2021/5895156

[2] Spencer Shawna, Bram Hannah J., Frauendorfer Megan, Hartos Jessica L., Does the Prevalence of Skin Cancer Differ by Metropolitan Status for Males and Females in the United States?, Journal of Preventive Medicine 3, 2017, pp. 1-6. https://doi.org/10.21767/2572-5483.100019

[3] Rina Refianti, Achmad Benny Mutiara and Rachmadinna Poetri Priyandini, Classification of Melanoma Skin Cancer using Convolutional Neural Network, International Journal of Advanced Computer Science and Applications (IJACSA), 10 (3), 2019. http://dx.doi.org/10.14569/IJACSA.2019.0100353

[4] Mahamudul Hasan, Surajit Das Barman, Samia Islam, Ahmed Wasif Reza, Skin Cancer Detection Using Convolutional Neural Network, Proceedings of the 2019 5th International Conference on Computing and Artificial Intelligence (ICCAI '19), 2019.

[5] Koby Crammer and Yoram Singer, Online ranking by projecting, Neural Computation 17 (1), 2005, pp. 145-175.

[6] Deepti Sharma, Swati Srivastava, Automatically Detection of Skin Cancer by Classification of Neural Network, International Journal of Engineering and Technical Research 4 (1), 2016, pp. 15-18.

[7] L. Xu, M. Jackowski, A. Goshtasby, D. Roseman, S. Bines, C. Yu, A. Dhawan, A. Huntley, Segmentation of skin cancer images, Image and Vision Computing 17 (1), 1999, pp. 65-74. https://doi.org/10.1016/S0262-8856(98)00091-2

[8] Nilkamal S. Ramteke, Shweta V. Jain, ABCD rule based automatic computer aided skin cancer detection using MATLAB, International Journal of Computer Technology and Applications 4 (4), 2013, pp. 691-697.

[9] World Health Organization. 2019. Skin Cancer. Retrieved March 16, 2019. http://www.who.int/en/

[10] ISIC project. 2018. ISIC Archive. Retrieved March 16, 2019 from https://www.isic-archive.com

[11] R. B. Aswin, Sibi Salim, J. Abdul Jaleel, Implementation of ANN Classifier using MATLAB for Skin Cancer Detection, International Journal of Computer Science and Mobile Computing, 2013, pp. 87-94.

[12] Robert Amelard, Jeffrey Glaister, Alexander Wong, David A. Clausi, Melanoma Decision Support Using Lighting-Corrected Intuitive Feature Models, in Computer Vision Techniques for the Diagnosis of Skin Cancer, Series in BioEngineering, 2014, pp. 193-219. https://doi.org/10.1007/978-3-642-39608-3_7

[13] Stewart BW, Wild CP. 2014. World Cancer Report. Retrieved March 16, 2019 from http://publications.iarc.fr/Non-Series-Publications/World-Cncer-Reports/World-Cancer-Report-2014

[14] Cancer Research UK. 2012. Cancer World Wide – the global picture. Retrieved March 16, 2019 from http://www.cancerresearchuk.org/cancer-info/cancerstats/world/the-global-picture/
