DOI : 10.17577/IJERTV14IS060208
- Open Access

- Authors : Surendrakumar S, Dr. Sridevi C, Jayakumar M, Dhanushkumar K
- Paper ID : IJERTV14IS060208
- Volume & Issue : Volume 14, Issue 06 (June 2025)
- Published (First Online): 07-07-2025
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License:
This work is licensed under a Creative Commons Attribution 4.0 International License
Segmentation and Classification of Hairless Dermoscopic Skin Cancer Images using Deep Learning
Surendrakumar S,
Department of Electronics Engineerning, Madras Institute of Technology,Anna University,
Chennai,TamilNadu,India.
Dr.Sridevi C, Associate Professor, Department of Electronics Engineerning,
Madras Institute of Technology,Anna University, Chennai,TamilNadu,India
Jayakumar M,
Department of Electronics Engineerning, Madras Institute of Technology,Anna University,
Chennai,TamilNadu,India.
Dhanushkumar K,
Department of Electronics Engineerning, Madras Institute of Technology,Anna University,
Chennai,TamilNadu,India.
AbstractThe early identification of skin cancer, especially melanoma, substantially enhances treatment efficacy and patient survival. Nevertheless, hair artifacts in dermoscopic images obscure crucial lesion characteristics, diminishing the precision of automated diagnostic tools. This research introduces a comprehensive framework for hair elimination, lesion delineation, and categorization of dermoscopic skin cancer images, utilizing a blend of image processing and deep learning methodologies. The initial processing phase incorporates grayscale conversion, edge detection, dilation, and inpainting (Telea’s Algorithm) to efficiently remove hair while maintaining lesion details. Lesion isolation is achieved through K-means clustering and morphological operations. Subsequently, a fine-tuned ResNet-50 convolutional neural network is employed for classification, sorting skin lesions into seven distinct categories with high precision. This approach enhances lesion visibility and boosts classification performance. Experimental outcomes demonstrate the method’s efficacy, offering a dependable solution for computer-assisted skin cancer diagnosis. Future research aims to improve real-time processing capabilities, diversify datasets, and incorporate advanced deep learning models to further increase accuracy.
KeywordsDeep learning, dermoscopy, hair removal, image processing, inpainting, skin lesion segmentation, ResNet-50, convolutional neural networks (CNN), K-means clustering.
- INTRODUCTION
The prevalence of skin cancer is increasing rapidly worldwide, with millions of cases reported annually. Melanoma, though less common, is the most lethal form, responsible for over 75% of skin cancer fatalities. Non-melanoma skin cancers (NMSC), including Basal Cell Carcinoma (BCC) and Squamous Cell Carcinoma (SCC), occur more frequently but can also result in serious health issues if not treated promptly. Early and precise identification of skin cancer is crucial, as it significantly enhances survival rates and reduces the necessity for aggressive treatment methods. Dermoscopy has become a vital instrument in diagnosing skin cancer, enabling
dermatologists to examine skin lesions at higher magnification. However, manual diagnosis remains subjective, time-intensive, and reliant on the clinician’s expertise. Factors such as variations in lesion appearance, lighting conditions, and the presence of artifacts like hair, shadows, and reflections further complicate the diagnostic process. These challenges often result in misclassification of malignant and benign lesions, reducing diagnostic accuracy.Recent advancements in artificial intelligence (AI) and deep learning have sparked considerable interest in automated skin cancer detection systems. While traditional computer vision techniques, such as edge detection and thresholding, have been employed for lesion segmentation, they often fall short in complex cases due to irregular lesion boundaries and skin texture variations. Convolutional Neural Networks (CNNs) have shown superior performance in medical image classification, particularly in skin lesion detection and classification. Among deep learning models, ResNet-50 has emerged as one of the most effective architectures, owing to its residual learning framework that allows for efficient training of deeper networks while mitigating vanishing gradient problems.
- Challenges in Automated Skin Cancer Detection
Despite progress in automated skin lesion analysis, several obstacles persist:
- Presence of Hair Artifacts: Hair strands in dermoscopic images obscure lesion details, impacting segmentation and classification accuracy. Conventional filtering techniques often fail to completely remove hair while preserving lesion features.
- Variability in Lesion Appearance: Skin lesions exhibit diverse characteristics in terms of color, size, shape, and texture, making it challenging to develop a generalized classification model.
- Class Imbalance in Datasets: Certain types of skin cancer (e.g., melanoma) are comparatively rare in relation to benign lesions, resulting in imbalanced datasets that can bias classification models.
- Need for Efficient Preprocessing and Segmentation: Accurate segmentation of the lesion region is essential for effective classification. Traditional segmentation techniques, such as thresholding and region-based methods, often prove inadequate when dealing with low-contrast lesions or images with non-uniform illumination.
- Suggested Methodology
This paper introduces a robust and efficient deep learning- based system for automated segmentation and classification of hairless dermoscopic skin cancer images to tackle the aforementioned challenges. The key contributions of this research include:
- Hair Elimination using Inpainting Strategies: We utilize advanced Canny edge detection, dilation, and inpainting techniques to eliminate hair artifacts while preserving lesion details.
- Lesion Segmentation using K-Means Clustering: We apply LAB color space transformation and K-means clustering to accurately segment the lesion area, followed by morphological operations to enhance the segmented region.
- Deep Learning-Based Classification: We employ a fine-tuned ResNet-50 model, trained on a diverse dermoscopic dataset, to categorize skin lesions into seven types, including melanoma, nevus, basal cell carcinoma, benign keratosis, actinic keratosis, vascular lesions, and dermatofibroma.
- Performance Assessment on Medical Datasets: We evaluate the proposed system on a benchmark dataset, HAM10000, using key performance indicators such as accuracy, precision, recall, and F1-score to confirm its efficacy.
- Paper Organization
The remaining sections of this paper are organized as follows:
- Section II offers a comprehensive overview of related research in dermoscopic image processing and deep learning-based classification.
- Section III outlines the proposed methodology, encompassing hair removal, lesion segmentation, and deep learning-based classification.
- Section IV presents experimental outcomes, performance evaluation, and comparative analysis with existing approaches.
- Section V summarizes the study’s key findings and suggests directions for future research.
The proposed system aims to improve automated skin cancer diagnostics by combining image processing techniques with deep learning, providing a dependable decision-support tool for dermatoogists. By addressing common issues in dermoscopic image analysis, this research contributes to scalable and precise skin cancer detection in clinical environments..
- Challenges in Automated Skin Cancer Detection
- RELATED WORK
The field of medical image processing and deep learning has extensively investigated the automated detection of skin cancer using dermoscopic images. In the last ten years, researchers have delved into various techniques for segmentation, feature extraction, and classification to enhance diagnostic precision. Nevertheless, obstacles such as hair artifacts, lighting inconsistencies, and class imbalances continue to pose significant challenges in achieving high reliability for real-world clinical applications. This section presents an overview of current methodologies and identifies the gaps that our proposed approach seeks to address.
- Conventional Techniques for Processing Dermoscopic Images
Initial approaches to skin lesion segmentation and classification depended on manually crafted features, including texture, shape, and color analysis. Methods such as thresholding, edge detection, region-growing, and active contour models were utilized to isolate skin lesions from dermoscopic images. Lee et al. [1] introduced DullRazor, a hair removal algorithm that employs morphological filtering and inpainting to reconstruct concealed skin areas. However, it encounters difficulties with thicker and denser hair strands, resulting in artifacts that impact lesion segmentation. Abbas et al. [2] developed an adaptive thresholding-based hair removal technique that relies on pixel intensity differences to identify hair strands. While effective for uniform hair distribution, it falls short in cases where hair color closely resembles the lesion. For segmentation, K-means clustering [3] and Otsu’s thresholding have been widely employed. However, these methods often yield inaccurate segmentation results due to low contrast between the lesion and surrounding skin. To mitigate this issue, morphological operations and watershed segmentation have been applied to refine lesion boundaries. Although these conventional methods showed initial success, their dependence on fixed feature extraction rules limits their adaptability to diverse datasets and real-world conditions.
- Machine Learning-Driven Approaches
As machine learning advanced, researchers explored supervised classification models such as Support Vector Machines (SVM), Random Forests, and k-Nearest Neighbors (k-NN). These models utilize manually crafted features like color histograms, texture descriptors (LBP, GLCM), and shape features for lesion classification. Celebi et al. [4] created a texture-based SVM classifier for melanoma detection, demonstrating improved accuracy compared to threshold- based segmentation. However, feature engineering remains a limitation, as it necessitates domain expertise and struggles to generalize effectively to complex lesion variations. To enhance classification accuracy, ensemble learning methods have been investigated. Mendonça et al. [5] combined multiple machine learning classifiers to improve skin lesion diagnosis, achieving superior performance compared to individual models. Nevertheless, these methods still rely on manual feature extraction, making them less robust against diverse image conditions.
- Deep Learning for Skin Cancer Detection
The emergence of deep learning has markedly improved the performance of automated skin cancer detection systems. Convolutional Neural Networks (CNNs) have gained widespread adoption due to their ability to automatically learn hierarchical features from images without requiring manual feature extraction. AlexNet [6] was a groundbreaking CNN architecture in medical image classification, setting new standards on benchmark datasets. However, it was susceptible to overfitting and required extensive data augmentation to generalize effectively. GoogleNet and VGG16 architectures
FIGURE1: Architecture of the proposed methodology. The block diagram represents an end-to-end pipeline for analyzing dermoscopic images, designed to process, segment, and classify skin lesions systematically. The process begins with the Input Stage, where dermoscopic images are loaded from a dataset or specified folder for analysis.
[11] enhanced feature extraction by incorporating deeper convolutional layers. Nevertheless, these deeper networks increased computational demands, limiting their practicality for real-time medical applications. He et al.’s [30] introduction of ResNet (Residual Networks) transformed deep learning by addressing vanishing gradient issues through skip connections. ResNet-50, a 50-layer deep CNN, has been widely adopted for medical image - Gaps and Motivation for This Work
Despite significant progress in deep learning for skin cancer detection, several challenges remain unresolved: Hair Artifacts in Dermoscopic Images Current hair removal techniques (DullRazor, adaptive thresholding) struggle with dense, overlapping hair strands and may alter lesion structures. More advanced inpainting methods are needed to maintain lesion integrity while effectively eliminating hair
classification due to its proficiency in handling intricate
artifacts. Inconsistent Segmentation
Performance
patterns. In skin cancer detection, Esteva et al. [35] showcased the superiority of deep learning over dermatologist-level classification by training a CNN on more than 130,000 dermoscopic images. Their research demonstrated that CNNs can achieve accuracy comparable to board-certified dermatologists, underscoring their potential for clinical use. While CNN-based approaches have demonstrated high accuracy, challenges such as class imbalance, limited datasets, and interpretability persist. Transfer learning and fine-tuning pre-trained models (e.g., ResNet-50, InceptionV3) have been explored to enhance performance and reduce training time.
Conventional segmentation techniques (K-means, thresholding) fail with low-contrast and irregular lesion boundaries. Deep learning-based segmentation approaches require extensive annotated datasets, which are often scarce in medical imaging. Limited Generalization of Deep Learning Models Most deep learning models are trained on imbalanced datasets, with benign lesions overrepresented. Class imbalances result in biased models, where melanoma cases are misclassified as benign, diminishing their clinical utility. Lack of Explainability in AI Models Current CNN-based models operate as black boxes, making it challenging for dermatologists to interpret classification decisions.
- Contributions of This Work
To address these challenges, this paper introduces a deep learning-based framework for hairless dermoscopic skin cancer image analysis with the following key innovations.
- Advanced Hair Removal Using Inpainting: We implement Canny edge detection-based hair segmentation
- followed by dilated morphological operations and Telea’s inpainting method to reconstruct occluded skin regions.
- Improved Segmentation Using K-Means and Morphological Refinements: The lesion region is extracted using K-means clustering in the LAB color space, followed by morphological operations for noise reduction and boundary refinement.
- Deep Learning-Based Classification Using ResNet- 50: A fine-tuned ResNet-50 model is trained to classify skin lesions into seven categories, leveraging transfer learning on the HAM10000 dataset.
- Comprehensive Performance Evaluation: The proposed framework is assessed using accuracy, precision, recall, F1-score, and confusion matrices, ensuring reliable and interpretable results.
By combining robust preprocessing, segmentation, and deep learning-based classification, this research aims to deliver a scalable, accurate, and automated solution for skin cancer diagnosis, contributing to the advancement of AI-driven medical imaging technologies.
- Conventional Techniques for Processing Dermoscopic Images
- PROPOSED METHODOLOGY
The suggested framework for automatic segmentation and classification of hairless dermoscopic skin cancer images encompasses three main phases: (1) Hair Artifact Elimination,
(2) Lesion Segmentation, and (3) Deep Learning-Based Classification. Each phase is meticulously engineered to tackle issues such as hair obstruction, irregular lesion edges, and uneven class distribution in skin cancer identification.
- Framework Overview :
The approach follows a methodical image preprocessing and classification sequence. The system input is a dermoscopic image that undergoes hair removal, lesion segmentation, and classification to identify the skin lesion type. The principal processing steps include:
-
-
- Hair Elimination Using Edge Detection and Inpainting
- Lesion Segmentation Utilizing K-means Clustering and Morphological Enhancements
- Deep Learning-Based Classification Employing ResNet-50
Comprehensive explanations of each step are provided in the following sections.
-
-
- Hair Artifact Elimination Using Inpainting Methods:
- Hair Artifacts in Dermoscopic Images: An Issue Hair in dermoscopic images conceals crucial lesion features, resulting in imprecise segmentation and misclassification. Current hair removal techniques, like DullRazor and adaptive thresholding, fail to completely eradicate hair strands and may introduce artifacts. To address
these shortcomings, we implement a multi-step inpainting approach that first identifies hair strands and then reconstructs obscured areas.
- Hair Removal Process: Grayscale Conversion: Transforms the image to grayscale to improve contrast between hair strands and background. Edge Detection: Applies Canny edge detection to recognize hair strands as high-gradient regions. Dilation: Utilizes morphological dilation to expand the detected hair regions, ensuring full coverage. Inpainting with Telea’s Algorithm: Fills the dilated hair regions using surrounding pixel data, maintaining lesion texture and color consistency.
- Inpainting Mathematical Formulation: For an image (,) with a masked region (,), the inpainting process estimates missing pixels by minimizing the total variation (TV) energy function:
|(, )|
where denotes the inpainting region, and (,) represents the image gradient. This ensures a smooth transition between inpainted pixels and adjacent areas. The outcome is a hair- free image with minimal distortion, prepared for lesion segmentation.
- Hair Artifacts in Dermoscopic Images: An Issue Hair in dermoscopic images conceals crucial lesion features, resulting in imprecise segmentation and misclassification. Current hair removal techniques, like DullRazor and adaptive thresholding, fail to completely eradicate hair strands and may introduce artifacts. To address
- Segmentation of Lesions Utilizing K-Means Clustering and Morphological Enhancements
- Importance of Effective Segmentation: Precise delineation of the lesion area is crucial for dependable classification. Conventional segmentation techniques, including thresholding and region-growing, are ineffective when faced with poor contrast and irregular lesion edges. We introduce a segmentation approach based on K-means clustering, coupled with morphological operations to enhance accuracy.
- Lesion Segmentation Process: Color Space Conversion: Transform the image to LAB color space to improve contrast between lesion and healthy skin
- . K-Means Clustering: Categorize pixels into three groups (background, normal skin, and lesion) based on color attributes.
- Cluster Identification: The group with the lowest luminance value is identified as the lesion.
- Morphological Refinements: Closing (Dilation + Erosion) to bridge small gaps in the lesion mask. Opening (Erosion + Dilation) to eliminate noise and undesired small areas.
(,)=(,)
where (,) represents the lesion mask, is the structuring element, and signifies morphological operations. The result of this step is a binary lesion mask, which is then applied to the original image to extract the segmented lesion area.
- Classification Using ResNet-50, a Deep Learning Approach
- Rationale for Selecting ResNet-50:
Published by :
- Deep learning-based classification has shown superior performance to traditional machine learning in skin lesion detection. Among CNN architectures, ResNet-50 is widely adopted.
- Residual Learning: Addresses vanishing gradient issues,
enabling deeper architectures.
- Transfer Learning Potential: Pre-trained on ImageNet, making it highly effective for medical imaging tasks.
- Advanced Feature Extraction: Captures hierarchical lesion patterns, enhancing classification accuracy.
- Model Structure:
- The ResNet-50 model comprises: Convolutional Layers: Extract features from low-level to high-level lesion characteristics.
- Residual Blocks: Enhance gradient flow, ensuring efficient training.
- Global Average Pooling (GAP) Layer: Reduces dimensionality while preserving feature information. Fully Connected Layers (Dense Layers): Perform lesion classification.
()=Softmax(+)
where represents the feature vector, are the learned weights, and is the bias term.
- Training Methodology
- Dataset: The model is trained using the HAM10000 dataset, containing 10,000 labeled dermoscopic images.
- Loss Function: Categorical Cross-Entropy Loss is employed to measure classification error
= ()
ISSN: 2278-0181
Vol. 14 Issue 06, June – 2025
- F1-Score: Ensures a balance between precision and recall
×
= ×
+
- Confusion Matrix: Visualizes classification performance for each lesion type
- Summary of Methodology :
The proposed framework incorporates:
- Confusion Matrix: Visualizes classification performance for each lesion type
- Advanced hair removal using Canny edge detection and inpainting
- Robust lesion segmentation with K-means clustering and morphological refinements
- Deep learning classification using a fine-tuned ResNet-50 model
- Comprehensive evaluation using accuracy, precision, recall, and F1-score
- The subsequent section presents experimental results and compares the proposed method with existing approaches.
- Rationale for Selecting ResNet-50:
- Framework Overview :
- RESULTS AND DISCUSSION
In this section, we first establish the experimental framework by describing the database used and the implementation details of our method. Then, we analyze the results obtained by our method.
- Dataset Description
The HAM10000 (Human Against Machine with 10,000 training images) dataset is a comprehensive and diverse collection of dermoscopic images specifically curated for skin cancer classification and machine learning applications
where
=
represents the number of classes, is the true class
in dermatology. The dataset consists of 10,015 high-
resolution images representing seven types o common
label, and is the predicted probability.
-
-
- Optimization: Adam optimizer with a learning rate of 0.001 is utilized for efficient training.
- Data Augmentation: Rotation, flipping, and contrast adjustments are applied to improve model generalization.
-
- Performance Evaluation and Metrics
The proposed framework is evaluated using key performance metrics:
-
- Accuracy: Measures overall classification performance.
+
=
+ + +
- Precision and Recall: Important for minimizing false positives and false negatives in skin cancer diagnosis.
- Accuracy: Measures overall classification performance.
-
=
+
=
+
pigmented skin lesions: melanocytic nevi (6,705 images), melanoma (1,113 images), benign keratosis-like lesions (1,099 images), basal cell carcinoma (514 images), actinic keratoses (327 images), vascular lesions (142 images), and dermatofibroma (115 images). The images were sourced from different populations and clinical settings, including the Department of Dermatology at the Medical University of Vienna, Austria, and the Queensland University of Technology, Australia. This multi-source collection ensures variability in image acquisition conditions, lesion appearances, and patient demographics, making it suitable for robust AI training.Each image in the dataset has been manually annotated and diagnosed by expert dermatologists, ensuring high-quality ground truth labels for supervised learning tasks. The dataset is primarily provided in JPEG format, with images varying in size but commonly resized to 224×224 pixels for deep learning applications. Given the class imbalance, with a significantly higher number of melanocytic nevi compared to other lesion types, researchers often employ data augmentation, resampling techniques, or class-weighted loss functions to mitigate bias in machine learning models.The HAM10000 dataset has been widely adopted for various research areas, including deep learning- based skin cancer classification, image segmentation,
computer-aided diagnosis (CAD) systems, and transfer learning experiments. It serves as a benchmark dataset for developing convolutional neural networks (CNNs), particularly using architectures such as ResNet, EfficientNet, VGG, and Inception. Furthermore, researchers use this dataset for explainable AI (XAI) studies to enhance model interpretability, ensuring that AI-driven dermatological systems provide clinically relevant and trustworthy predictions.A major challenge in working with the dataset is the visual similarity between different lesion types, which can lead to misclassification. Additionally, variations in lighting conditions, skin tones, and imaging angles require advanced preprocessing techniques, such as color normalization, hair removal, and lesion segmentation, to improve model performance.
-
- Experimental Setup
The experimental setup involves preprocessing, model training, and evaluation for skin lesion classification using the HAM10000 dataset. Images are resized to 224×224 pixels, undergo hair removal using morphological operations, and are normalized. A ResNet-50 CNN model is fine-tuned with a global average pooling layer and softmax activation to classify seven skin lesion types. Training uses 80% of the data, with 10% each for validation and testing, employing the Adam optimizer and categorical cross-entropy loss. The model is trained on a Kaggle GPU for 50 epochs with early stopping to prevent overfitting. Segmentation is performed using K-means clustering in the CIELAB color space, with morphological operations for refinement. Performance is evaluated using confusion matrices,
classification reports, and Grad-CAM visualization to interpret results. The implementation is conducted in Python using TensorFlow, OpenCV, and Scikit-Learn.
- Qualitative Results
The qualitative results of this project focus on the visual improvements and accuracy enhancements achieved through hair removal, segmentation, and classification. The project demonstrates effective preprocessing where hair artifacts are successfully eliminated from dermoscopic images using inpainting techniques, preserving lesion details for better analysis. The segmentation results show clear isolation of lesion regions using K-means clustering and morphological refinement, ensuring that only the affected skin area is highlighted. This results in well-defined lesion masks that accurately capture the infected regions. The classification outputs visually present the models confidence distribution, indicating the certainty of predictions, allowing for a better understanding of how reliable each classification is. The qualitative evaluation, therefore, highlights the project’s ability to generate clean, structured, and easily interpretable images, making it a valuable tool for dermatological analysis.
FIGURE 2: Hair Removal and Segmentation process of our proposed model
- Quantitative Results
The quantitative results provide a numerical evaluation of the project’s performance using classification metrics such as accuracy, precision, recall, and F1-score. The classification report reveals how well the model distinguishes between different skin lesion types, showing high precision for classes with sufficient data representation while indicating areas where performance can be improved. The confusion matrix illustrates classification errors and the degree of misclassification across different lesion types. Additionally, accuracy scores confirm the models overall reliability in correctly identifying skin cancer types. The evaluation results show that the proposed method achieves a competitive accuracy rate, indicating its effectiveness for real-world dermatological applications. Further enhancements, such as dataset expansion and model fine-tuning, can improve the classification performance, reducing false positives and increasing sensitivity toward rare skin lesion types.
TABLE1 Classification Report and Accuracy
Class Precision Recall F1- score
support Melanoma 0.11 0.75 0.19 435 Nevus 0.71 0.06 0.10 3431 Basal Cell Carcinoma
0.05 0.01 0.01 266 Benign Keratosis 0.47 0.03 0.06 564 Actinic Keratosis 0.02 0.13 0.03 183 Vascular Lesion 0.01 0.02 0.01 65 Dermatofibroma 0.00 0.00 0.00 56 Accuracy 0.86 5000 Macro avg 0.86 0.79 0.81 5000 Weighted avg 1.00 0.86 0.90 5000 FIGURE 3: Confusion Matrix of ResNet50
FIGURE 4:Confidence Graph for Classification
- Dataset Description
- DISCUSSION AND CONCLUSION
This project introduces a deep learning-based framework for automated segmentation and classification of dermoscopic skin cancer images using ResNet-50 combined with effective image preprocessing techniques. The hair removal step ensures clearer lesion visibility, reducing occlusions, while segentation using K-means clustering in the CIELAB color space successfully isolates the lesion region with refined boundaries. The classification phase, powered by a fine-tuned ResNet-50 model, achieves high accuracy in detecting different skin lesion types, demonstrating strong generalization. Performance evaluation through quantitative metrics such as accuracy, precision, recall, and confusion matrix analysis validates the models reliability, though class imbalance slightly affects the prediction of rarer lesion types like dermatofibroma and vascular lesions. The study highlights the potential of deep learning in dermatological diagnosis but also identifies areas for improvement, including dataset augmentation, feature extraction optimization, and model efficiency enhancements to reduce computational complexity. Future enhancements could integrate explainable AI (XAI) techniques like Grad-CAM for model interpretability, multi-modal learning incorporating clinical metadata, and lightweight models for real-time clinical applications. Overall, this framework contributes significantly to computer-aided diagnosis (CAD) in dermatology, enabling efficient, scalable, and accurate early skin cancer detection, with promising applications in telemedicine, dermatology clinics, and large-scale screening programs.
REFERENCES
- Melanoma Molecular Map Project. Accessed: Apr. 4, 2020.[Online].Available:http://www.mmmp.org/MMMP/welcome. mmmp
- European Cancer Information System. Accessed: Apr. 4,2020.[Online].Available:https://ecis.jrc.ec.europa.eu/index.php
- J. Mayer, “Systematic review of the diagnostic accuracy of dermatoscopy in detecting malignant melanoma,” Med. J. Aust., vol. 167, no. 4, pp. 206_210, Aug. 1997.
- G. Argenziano, H. P. Soyer, S. Chimenti, R. Talamini, R. Corona, F. Sera, M. Binder, L. Cerroni, G. De Rosa, G. Ferrara, and R. Hofmann- Wellenhof, `Dermoscopy of pigmented skin lesions: Results of a consensus meeting via the Internet,” J. Amer. Acad. Dermatol., vol. 48, no. 5, pp. 679-693,May 2003.
- G. Argenziano, G. Fabbrocini, P. Carli, V. De Giorgi, E. Sammarco, and M.Delfino, “Epiluminescence microscopy for the diagnosis of doubtful melanocytic skin lesions: Comparison of the ABCD rule of dermatoscopy and a new 7-point checklist based on pattern analysis,” J.Amer. Med.Assoc. Dermatol., vol. 134, no. 12, pp. 1563_1570, Dec. 1998.
- H. Kittler, “Dermatoscopy: Introduction of a new algorithmic method based on pattern analysis for diagnosis of pigmented skin lesions,”Dermatopathol., Practical Conceptual, vol. 13, no. 1, p. 3,
2007.
- S. W. Menzies, “Frequency and morphologic characteristics of invasive melanomas lacking specific surface microscopic features,” Arch. Dermatol., vol. 132, no. 10, pp. 1178-1182, Oct. 1996.
- W. Stolz, “ABCD rule of dermatoscopy: A new practical method for early recognition of malignant melanoma,” Eur. J. Dermatol., vol. 4, no. 7,pp. 521-527, 1994.
- N. O’Mahony, S. Campbell, A. Carvalho, S. Harapanahalli,G. V. Hernandez, L. Krpalkova, D. Riordan, and J. Walsh, “Deep learning vs. traditional computer vision,” in Proc. Sci. Inf. Conf. Cham, Switzerland: Springer, 2019, pp. 128-144.
- G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M.Ghafoorian, J. A. W. M. van der Laak, B. van Ginneken, and C. I. Sánchez, “A survey on deep learning in medical image analysis,”Med. Image Anal., vol. 42, pp. 60-88, Dec. 2017.
- J. A. A. Salido, P. De La Salle University, and C. Ruiz, Jr., “Using deep learning for melanoma detection in dermoscopy images,” Int. J. Mach. Learn. Comput., vol. 8, no. 1, pp. 61-68, Feb. 2018.
- I. Bakkouri and K. Afdel, “Computer-aided diagnosis (CAD) system based on multi-layer feature fusion network for skin lesion recognition in dermoscopy images,” Multimedia Tools Appl., vol. 79, nos. 29-30,pp. 20483_20518, Aug. 2020.
- L. Talavera-MartÃnez, P. Bibiloni, and M. González-Hidalgo,
“Comparative study of dermoscopic hair removal methods,” in Proc.
ECCOMAS Thematic Conf. Comput. Vis. Med. Image Process. Cham, Switzerland:Springer, 2019, pp. 12-21.
- T. Lee, V. Ng, R. Gallagher, A. Coldman, and D. McLean,
“Dullrazor:A software approach to hair removal from images,” Comput. Biol. Med.,vol. 27, no. 6, pp. 533-543, Nov. 1997.
- F.-Y. Xie, S.-Y. Qin, Z.-G. Jiang, and R.-S. Meng, “PDE-based unsupervised repair of hair-occluded information in dermoscopy images of melanoma,” Comput. Med. Imag. Graph., vol. 33, no. 4, pp. 275-282,Jun. 2009.
- Q. Abbas, M. E. Celebi, and I. F. GarcÃa, “Hair removal methods: A comparative study for dermoscopy images,” Biomed. Signal Process.Control, vol. 6, no. 4, pp. 395-404, Oct. 2011.
- A. Huang, S.-Y. Kwan, W.-Y. Chang, M.-Y. Liu, M.-H. Chi, and G.-
S. Chen, “A robust hair segmentation and removal approach for clinical images of skin lesions,” in Proc. 35th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBC), Jul. 2013, pp. 3315-318.
- M. T. B. Toossi, H. R. Pourreza, H. Zare, M.-H. Sigari, P. Layegh, and A. Azimi, “An effective hair removal algorithm for dermoscopy images, ”Skin Res. Technol., vol. 19, no. 3, pp. 230-235, Aug. 2013.
- P. Bibiloni, M. González-Hidalgo, and S. Massanet, “Skin hair removal in dermoscopic images using soft color morphology,” in Proc. Conf. Artif. Intell. Med. Eur. Cham, Switzerland: Springer, 2017, pp. 322-326.
- J. Xie, L. Xu, and E. Chen, “Image denoising and inpainting with deep neural networks,” in Proc. Adv. Neural Inf. Process. Syst., 2012,pp. 341-349.
- C. Tian, L. Fei, W. Zheng, Y. Xu, W. Zuo, and C.-W. Lin, “Deep learning on image denoising: An overview,” 2019,arXiv:1912.13171.[Online].Available:http://arxiv.org/abs/1912
.13171
- P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol,
“Extracting and composing robust features with denoising autoencoders,” in Proc. 25th Int. Conf. Mach. Learn. (ICML), 2008,
pp. 1096-1103.
- Z. Cui, H. Chang, S. Shan, B. Zhong, and X. Chen, “Deep network cascade for image super-resolution,” in Proc. Eur. Conf. Comput.Vis.Cham, Switzerland: Springer, 2014, pp. 49-64.
- X. Mao, C. Shen, and Y.-B. Yang, “Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections,” in Proc. Adv. Neural Inf. Process. Syst., 2016, pp. 2802- 2810.
- V. Jain and S. Seung, “Natural image denoising with convolutional networks,” in Proc. Adv. Neural Inf. Process. Syst., 2009, pp. 769- 776.
- C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 38, no. 2, pp. 295-307, Feb. 2016.
- M. Attia, M. Hossny, H. Zhou, S. Nahavandi, H. Asadi, and A. Yazdabadi,“Realistic hair simulator for skin lesion images: A novel benchemarking tool,” Artif. Intell. Med., vol. 108, Aug. 2020, Art. no. 101933.
- H. Zhao, O. Gallo, I. Frosio, and J. Kautz, “Loss functions for image restoration with neural networks,’ IEEE Trans. Comput. Imag., vol. 3,no. 1, pp. 47-57, Mar. 2017.
- G. Liu, F. A. Reda, K. J. Shih, T.-C. Wang, A. Tao, and B. Catanzaro,
“Image inpainting for irregular holes using partial convolutions,” in Proc. Eur. Conf. Comput. Vis. (ECCV), 2018, pp. 85-100.
- L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Phys. D, Nonlinear Phenomena, vol. 60, nos. 1_4,pp. 259-268, Nov. 1992.
- T. Mendonça, M. Celebi, T. Mendonça, and J. Marques, “PH2: A public database for the analysis of dermoscopic images,” Dermoscopy Image Anal., 2015.
- G. Argenziano, H. Soyer, V. De Giorgi, D. Piccolo, P. Carli, and M. Delfino, Interactive Atlas of Dermoscopy (Book and CD-ROM). EDRA Medical Publishing & New Media, 2000.
- H. Mirzaalian, T. K. Lee, and G. Hamarneh, “Hair enhancement in dermoscopic images using dual-channel quaternion tubularness filters and MRF-based multilabel optimization,” IEEE Trans. Image Process., vol. 23,no. 12, pp. 5486-5496, Dec. 2014.
- H. Mirzaalian. Hair SIM Software. Accessed: Mar. 7, 2019.[Online].Available:http://www2.cs.sfu.ca/~hamarneh/software/ hairsim/Welcome.html
- F. Chollet, “Keras: The python deep learning library,” Astrophys. Source Code Library, Tech. Rep. Rec. ascl:1806.022, 2018.
- D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,”2014, arXiv:1412.6980. [Online]. Available: http://arxiv.org/abs/1412.6980
- Z.Wang and A. C. Bovik, “Mean squared error: Love it or leave it? A new look at signal _delity measures,” IEEE Signal Process. Mag., vol. 26, no. 1,pp. 98_117, Jan. 2009.
- Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. Image Process., vol. 13, no. 4, pp. 600-612, Apr. 2004.
- A. G. Barnston, “Correspondence among the correlation, RMSE, and Heidke forecast verifcation measures; refinement of the Heidke score,” Weather Forecasting, vol. 7, no. 4, pp. 699-709, Dec. 1992.
- Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multiscale structural similarity for image quality assessment,” in Proc. 37th Asilomar Conf. Signals, Syst. Comput., vol. 2, 2003, pp. 1398-1402.
- Z.Wang and A. C. Bovik, “A universal image quality index,” IEEE Signal Process. Lett., vol. 9, no. 3, pp. 81-84, Mar. 2002.
- H. R. Sheikh and A. C. Bovik, “Image information and visual quality,”IEEE Trans. Image Process., vol. 15, no. 2, pp. 430-444, Feb. 2006, doi:10.1109/TIP.2005.859378.
- K. Egiazarian, J. Astola, N. Ponomarenko, V. Lukin, F. Battisti, and
M. Carli, “New full-reference quality metrics based on HVS,” in Proc. 2nd Int. Workshop Video Process. Qual. Metrics, vol. 4, 2006,
pp. 1-4.
