Non-Contact Advance Method of COVID-19 Cases using Deep Neural Networks with X-Ray Images

DOI : 10.17577/IJERTCONV9IS15004


Dr. Kumar Bid

HOD, Assistant Professor in CSE Department Amruta Institute of Engineering and Management Sciences

Ms. Chandu. K

UG Scholar

Amruta Institute of Engineering and Management Sciences

Ms. Anusha S. V

UG Scholar

Amruta Institute of Engineering and Management Sciences

Ms. Ramyashree

UG Scholar

Amruta Institute of Engineering and Management Sciences

Ms. Kavana A. S

UG Scholar

Amruta Institute of Engineering and Management Sciences

Abstract:- COVID-19, which began with the reporting of pneumonia of unknown cause in Wuhan, Hubei province of China on December 31, 2019, has rapidly become a pandemic. The disease is named COVID-19 and the virus is termed SARS-CoV-2. The most common test technique currently used for COVID-19 diagnosis is the real-time reverse transcription-polymerase chain reaction (RT-PCR). Chest radiological imaging such as computed tomography (CT) and X-ray plays a vital role in the early diagnosis and treatment of this disease.

At the beginning of the pandemic, Chinese clinical centers had insufficient test kits, which also produced a high rate of false-negative results, so doctors were encouraged to make a diagnosis based only on clinical and chest CT findings [12,14]. CT is widely used for COVID-19 detection.

The novel coronavirus 2019 (COVID-19), which first appeared in Wuhan city, China in December 2019, spread rapidly around the world and became a pandemic. In this study, a new model for automatic COVID-19 detection using raw chest X-ray images is presented. The proposed model is developed to provide accurate diagnostics for binary classification (COVID vs. No-Findings) and multi-class classification (COVID vs. No-Findings vs. Pneumonia). Our model produced a classification accuracy of 98.08% for binary classes and 87.02% for multi-class cases. The DarkNet model, which serves as the classifier backbone of the you-only-look-once (YOLO) real-time object detection system, was adopted in our study. We implemented 17 convolutional layers and introduced different filtering on each layer. Our model can be employed to assist radiologists in validating their initial screening, and can also be employed via the cloud to immediately screen patients.

Keywords: COVID-19; X-ray image; deep learning; convolutional neural network (CNN); histogram-oriented gradient (HOG); watershed segmentation.

  1. INTRODUCTION

    COVID-19, which began with the reporting of pneumonia of unknown cause in Wuhan, Hubei province of China on December 31, 2019, has rapidly become a pandemic [1,2,3]. The disease is named COVID-19 and the virus is termed SARS-CoV-2. This new virus spread from Wuhan to much of China within 30 days [4]. In the United States of America [5], where the first seven cases were reported on January 20, 2020, the case count reached over 300,000 by April 5, 2020. Most coronaviruses affect animals, but they can also be transmitted to humans because of their zoonotic nature. Severe acute respiratory syndrome coronavirus (SARS-CoV) and Middle East respiratory syndrome coronavirus (MERS-CoV) have caused severe respiratory disease and death in humans [6]. The typical clinical features of COVID-19 include fever, cough, sore throat, headache, fatigue, muscle pain, and shortness of breath [7].

    The most common test technique currently used for COVID-19 diagnosis is the real-time reverse transcription-polymerase chain reaction (RT-PCR). Chest radiological imaging such as computed tomography (CT) and X-ray has a vital role in the early diagnosis and treatment of this disease [8]. Due to the low RT-PCR sensitivity of 60-70%, symptoms can still be detected by examining radiological images of patients even when negative test results are obtained [9,10]. It has been stated that CT is a sensitive method for detecting COVID-19 pneumonia and can be considered a screening tool alongside RT-PCR [11]. CT findings are observed over a long interval after the onset of symptoms, and patients usually have a normal CT in the first 0-2 days [12]. In a study of lung CT scans of patients who survived COVID-19 pneumonia, the most significant lung disease was observed ten days after the onset of symptoms [13].

  2. LITERATURE REVIEW

  1. Chowdhury et al.:

    Worked with chest X-ray images to develop a novel framework named PDCOVIDNet based on a parallel-dilated CNN. In the proposed method, the authors used dilated convolution in the parallel stack to capture and stretch the necessary features, obtaining a detection accuracy of 96.58%.

  2. Abbas et al.:

    Proposed and validated a deep convolutional neural network called decompose, transfer, and compose (DeTraC) to detect COVID-19 patients from their chest X-ray images. They proposed a decomposition mechanism to check irregularities in the dataset by investigating class boundaries, obtaining high accuracy (93.1%) and sensitivity (100%).

  3. Azemin et al.:

    Used a deep learning method based on the ResNet-101 CNN model. In their proposed method, thousands of images were used in the pre-training phase to recognize meaningful objects, and the model was retrained to detect abnormality in chest X-ray images. The accuracy of this method was only 71.9%.

  4. El-Rashidy et al.:

    Introduced a framework consisting of three layers: a patient layer, a cloud layer and a hospital layer. A set of data was collected from the patient layer using wearable sensors and a mobile app. A neural network-based deep learning model was used to detect COVID-19 from the patient X-ray images. The proposed model achieved 97.9% accuracy and 98.85% specificity.

  5. Khan et al.:

    Developed a new architecture for classifying X-ray images as COVID-19 or normal using pre-trained deep learning models such as ResNet50, VGG16, VGG19 and DenseNet121, of which VGG16 and VGG19 showed the best accuracies. The proposed model consisted of two phases, preprocessing with data augmentation followed by transfer learning, and finally showed 99.3% accuracy.

    In the model proposed by Loey et al. [35], three deep transfer models, AlexNet, GoogLeNet and ResNet18, were employed on a dataset of 307 images with four classes: COVID-19, normal, bacterial pneumonia and viral pneumonia. The research work was divided into three scenarios to reduce memory consumption and execution time. In the last scenario, the GoogLeNet deep transfer model achieved 100% testing accuracy and 99.9% validation accuracy.

  6. Minaee et al.:

    Reported a deep learning-based framework to detect COVID-19 from chest X-ray images using four fine-tuned models: ResNet18, ResNet50, SqueezeNet and DenseNet-121. The proposed method took advantage of data augmentation to create transformed versions of the COVID-19 images, which increased the number of samples, and finally achieved 98% sensitivity and 90% specificity.

  7. Sekeroglu et al.:

    Developed a model using deep learning and machine learning classifiers in which a total of 38 experiments were conducted with CNNs for the detection of COVID-19 from chest X-ray images with high accuracy. Among them, 10 experiments were performed using 5 different machine-learning algorithms, and 14 experiments were carried out with state-of-the-art pre-trained networks for transfer learning. The system demonstrated 98.50% accuracy, 99.18% specificity and 93.84% sensitivity. They concluded that the system developed with a CNN was capable of detecting COVID-19 from a limited number of images without any preprocessing and with minimized layers.

  8. Wang et al.:

    Developed a model using ResNet-101 and ResNet-151 with fusion effects to enhance their weight ratio dynamically. Classification of the chest X-ray images was carried out over three classes: normal, COVID-19 and viral pneumonia. A performance accuracy of 96.1% was achieved during the testing phase.

  9. Yoo et al.:

    Applied chest X-ray radiography (CXR) images for classification using a deep learning-based decision-tree classifier to detect COVID-19. This classifier compared three binary decision trees built on the PyTorch framework. The decision trees classified CXR images as normal or abnormal, and the third decision tree achieved an average accuracy of 95%.

  10. Sahlol et al.:

    Proposed an improved hybrid classification approach using CNNs and the marine predators algorithm for classifying COVID-19 images obtained from international cardiothoracic radiologists. The Inception CNN architecture was employed to extract features, and a swarm-based marine predators algorithm was used to select the most relevant features from the images. However, the research work did not consider any fusion approach to improve the classification and feature extraction of the COVID-19 images.

    Most of the work reported in the literature has used chest X-ray images to diagnose COVID-19, which highlights the importance of chest X-ray image analysis as an indisputable tool for doctors and radiographers. However, imbalanced data and a lack of necessary extracted features can sometimes prevent the expected classification accuracy. To overcome these limitations, this work proposes fusing features extracted by HOG and CNN and classifying them with a CNN to improve COVID-19 detection accuracy.

3. PROPOSED WORK

SYSTEM ARCHITECTURE

A system architecture is the conceptual model that defines the structure, behavior, and views of a system.

The proposed system took X-ray images as input to identify COVID-19. First, the system converted the images from RGB to grayscale and identified the region of interest (ROI) by removing the unwanted regions. The system then used two feature extractors: the histogram-oriented gradient (HOG) and a CNN. First, the HOG technique was used to extract a feature vector from the X-ray COVID-19 dataset. Then the CNN method was used to extract another feature vector from the same images. These two feature vectors were fused and used as the input to train the classification model. The number of features extracted by one technique alone was not large enough to accurately identify COVID-19, whereas the fusion approach of extracting features with two different techniques could provide a large enough number of features for accurate identification. Fusion was implemented as a concatenation of the two individual vectors.

Speckle-affected and low-quality X-ray images were used in our experiment alongside good-quality images. If training and testing were performed only with selected good-quality X-ray images in an ideal situation, the output accuracy might be higher. However, this would not represent a real-life scenario, where the image database is a mix of both good- and poor-quality images. Therefore, using images of different quality tests how well the system can react to such real-life situations.

A modified anisotropic diffusion filtering technique was employed to remove multiplicative speckle noise from the test images. The application of this technique could effectively overcome the limitations in input image quality. Next, feature extraction was carried out on the test images. Finally, the CNN classifier performed a classification of the X-ray images to identify whether each was COVID-19 or not. Figure 2 shows the basic steps of the proposed system architecture, which is also represented by Algorithm 1.

DATASET USED

The chest X-ray images of the patients were acquired and stored in a common place. The images were categorized as either COVID-19-positive or negative as a reference to evaluate the performance of the intelligent system. In this work, three standard datasets were employed to validate the system's performance.

  1. The benchmark dataset [43] used in our experimental evaluation consisted of two main categories with 819 COVID-19-positive and 1341 normal chest X-ray images.

  2. Cohen's dataset [44] contained a total of 660 images with 390 COVID-19-positive X-ray images.

  3. Another publicly available dataset [45] was used with 770 COVID-19 images and 1500 normal images.

    The databases contained various sizes of images ranging from 512 × 512 pixels to 657 × 657 pixels. The acquired images were in both grayscale and RGB formats, and the RGB images were converted to grayscale. Feature extraction methods can detect features more easily from grayscale images than from images in other formats. To convert an RGB image to grayscale, Equation (1) calculates the grayscale value (I) by forming a weighted sum of the monochrome colors red (R), green (G), and blue (B):

    I = (Wr · R) + (Wg · G) + (Wb · B)    (1)

    where Wr, Wg, and Wb are the weights of the red, green, and blue colors, respectively, with values of 0.30, 0.59, and 0.11, summing to 1.

    Furthermore, the data formats of the images included PNG and JPEG, with bit depths of 8 bits (grayscale) and 24 bits (RGB). As the image size, format and bit depth differed across the databases, all images were converted to a size of 224 × 224 pixels as 8-bit grayscale images and saved in PNG format.
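    As a concrete illustration, Equation (1) can be implemented in a few lines of NumPy (the paper performed this step in MATLAB; this Python version is an equivalent sketch):

import numpy as np

# Weights from Equation (1): 0.30, 0.59, 0.11, summing to 1.
W_R, W_G, W_B = 0.30, 0.59, 0.11

def rgb_to_gray(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to an 8-bit grayscale image."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    gray = W_R * r + W_G * g + W_B * b   # I = (Wr.R) + (Wg.G) + (Wb.B)
    return np.clip(gray, 0, 255).astype(np.uint8)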

    Algorithm 1: Proposed Algorithm for COVID-19 Detection

    Input: COVID-19 chest X-ray image dataset (D) with resized images (M).
    Output: result(i) = COVID-19 positive or normal.

    CNN feature extraction:
      Step 1: Initialize Fc; set i = 1.
      Step 2: Extract the features of each image D(i, 1...570).
      Step 3: Fc(i,1) = M(x,1) + Fc(i,1).
      Step 4: Fc = overall CNN feature vector.

    Histogram-oriented gradient (HOG) feature extraction:
      Step 1: Initialize H0 = low-pass output, H1 = band-pass output.
      Step 2: HOG(i,1) = H0(i,1) + H1(i,1).
      Step 3: HOG = overall HOG feature vector.

    Fusion and classification:
      Training feature vector V = [Fc, HOG].
      test_image = imread(img).
      Extract test feature T by repeating the extraction steps on test_image.
      result(i) = classify(V, T).
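    As an illustration, the flow of Algorithm 1 can be written as a short, runnable Python sketch. The two extractors and the k-nearest-neighbor classifier below are simplified stand-ins operating on dummy data; the paper's actual extractors are the fine-tuned VGG19 and HOG described later, and its final classifier is a CNN.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def extract_cnn_features(img):
    # Hypothetical stand-in for the fine-tuned VGG19 extractor (4096-D in the paper).
    return img.mean(axis=0)

def extract_hog_features(img):
    # Hypothetical stand-in for the HOG extractor (3780-D in the paper).
    return img.mean(axis=1)

def fused_features(img):
    # V = [Fc, HOG]: concatenate the two feature vectors (the fusion step).
    return np.concatenate([extract_cnn_features(img), extract_hog_features(img)])

# Dummy training data: 224 x 224 grayscale images, labels 1 = COVID-19, 0 = normal.
rng = np.random.default_rng(0)
train_imgs = [rng.random((224, 224)) for _ in range(8)]
train_labels = [1, 0, 1, 0, 1, 0, 1, 0]
V = np.stack([fused_features(im) for im in train_imgs])

clf = KNeighborsClassifier(n_neighbors=3).fit(V, train_labels)

test_img = rng.random((224, 224))
T = fused_features(test_img).reshape(1, -1)
print("COVID-19 positive" if clf.predict(T)[0] == 1 else "Normal")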

    Fig. Comparison of the COVID-19 and normal X-ray images

    DATA PREPROCESSING

    Image preprocessing is an important step to achieve meaningful information and accurate classification by removing noisy or deformed pixels from each image. First, the images were converted from RGB to grayscale using MATLAB and resized to 224 × 224 pixels to be made ready as input to the system.

    To eliminate superfluous text and machine annotations around the images, the region of interest (ROI) was extracted for training and testing. In order to obtain meaningful information, the ROI on the chest X-ray images was defined as an area covering mainly the lung region. First, an ROI is defined by a rectangle, and a mask is created from the rectangle. Using logical indexing, the area outside the ROI is set to zero, and the extracted portion is displayed. Figure 4 illustrates example images at different preprocessing stages. For example, unnecessary symbols (the tick mark in the normal image) or text (the letter B in the COVID-19 image) in the original images were removed at the ROI stage. As the images used in this study were collected from three different sources, they might differ in quality, size, or inherent noise. Therefore, the preprocessing approaches employed normalize all the images so that they are independent of their origin, and differences in image size are avoided.
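    A minimal sketch of this masking step with NumPy logical indexing follows; the rectangle coordinates are hypothetical and would in practice be chosen to cover the lung region:

import numpy as np

def extract_roi(gray: np.ndarray, top: int, bottom: int,
                left: int, right: int) -> np.ndarray:
    """Zero out everything outside a rectangular region of interest."""
    mask = np.zeros_like(gray, dtype=bool)
    mask[top:bottom, left:right] = True   # rectangle covering the lungs
    return np.where(mask, gray, 0)        # logical indexing: outside ROI -> 0

# Hypothetical coordinates on a 224 x 224 image.
img = np.random.randint(0, 256, (224, 224), dtype=np.uint8)
roi_img = extract_roi(img, top=20, bottom=210, left=15, right=210)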

    Fig: Proposed preprocessing stages: original image, the region of interest (ROI) image, and conversion from 24-bit RGB to grayscale.

    MODIFIED ANISOTROPIC DIFFUSION FILTERING (MADF)

    Modified anisotropic diffusion filtering (MADF) was proposed for this work in order to preserve detailed information while reducing noise and distortion in the images. This filtering technique performs better than other filtering methods owing to its capability of eliminating multiplicative speckle noise in plane regions. The proposed method uses correlation and kurtosis values of the noise to retain useful edge information. In Equation (2), Io is a noisy image comprised of speckle noise n and the original image I [27,49]. The noise part is denoted by Equation (3), where G is the noise intensity, calculated from image properties in MATLAB. The mean of the noise intensity is µ, which is calculated by Equation (4). The kurtosis k is calculated using Equation (5). The correlation between the image class and the noise class should be minimal, which is the iteration stopping condition. This speckle suppression process continues until the noise part of the image is close to Gaussian, at which point the kurtosis value should be zero. The iteration cutoff is reached when the kurtosis value falls below 0.001 (Equation (6)), indicating low speckle with good edge preservation. As soon as the correlation between the image class and the noise class is minimal, the iteration is stopped. Equation (7) calculates the correlation of image intensities (I) and Equation (8) calculates the correlation of noise intensities (G). The proposed filtering achieves the optimal result when I and G show minimum deviance.
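    The equations themselves are not reproduced in this text-only version, but the idea can be sketched as classic anisotropic (Perona-Malik) diffusion combined with the kurtosis-based stopping rule described above: iterate until the residual noise (input minus filtered image) is close to Gaussian, i.e. its kurtosis falls below 0.001. This is an illustrative approximation, not the paper's exact filter; all parameter values are assumptions.

import numpy as np
from scipy.stats import kurtosis

def diffusion_step(img, kappa=30.0, gamma=0.15):
    """One Perona-Malik diffusion step (periodic borders via np.roll)."""
    dN = np.roll(img, 1, axis=0) - img
    dS = np.roll(img, -1, axis=0) - img
    dE = np.roll(img, -1, axis=1) - img
    dW = np.roll(img, 1, axis=1) - img
    c = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping conduction
    return img + gamma * (c(dN) * dN + c(dS) * dS + c(dE) * dE + c(dW) * dW)

def madf_like_filter(noisy, max_iter=100, k_thresh=0.001):
    """Diffuse until the residual noise is approximately Gaussian."""
    filtered = noisy.astype(float)
    for _ in range(max_iter):
        filtered = diffusion_step(filtered)
        residual = noisy - filtered                    # estimated noise part
        if abs(kurtosis(residual, axis=None)) < k_thresh:
            break                                      # kurtosis ~ 0: Gaussian
    return filtered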


    FEATURE EXTRACTOR

    1. Histogram-Oriented Gradient (HOG) Feature Extractor:

      The histogram-oriented gradient (HOG) method extracts features using a selected number of histogram bins [51]. For extracting HOG features, the proposed system used a higher number of histogram bins on different regions of the images. First, the input image was scaled to 64 × 128 pixels and converted into a grayscale image. The gradient of every pixel in the image was then calculated using Equations (9) and (10).
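      A minimal sketch with scikit-image shows this step; with a 64 × 128 window, 9 orientation bins, 8 × 8 cells and 2 × 2 blocks, the descriptor has the classic 3780 dimensions, matching the HOG vector length reported later:

import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def hog_features(gray: np.ndarray) -> np.ndarray:
    """Scale to 64 x 128 (width x height) and compute the HOG descriptor."""
    scaled = resize(gray, (128, 64))   # skimage uses (rows, cols)
    return hog(scaled, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')

feat = hog_features(np.random.rand(224, 224))
print(feat.shape)   # (3780,)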

    2. CNN-Based Feature Extractor and Classification:

Image processing, particularly feature extraction using a CNN, is an important research topic in computer science [52]. An experiment was conducted using both from-scratch and pre-trained CNN models in this work. The results achieved by the scratch model were not satisfactory; however, the pre-trained model showed good performance. The pre-trained VGG19 model was fine-tuned to serve as a feature extractor for the experimental dataset used in this study. A 19-layer VGGNet was used to develop this network model. Experimental trials showed that VGG19 performed better than VGG16, the scratch model and other deep learning models, including ResNet50 and AlexNet. The VGG19 model consists of sixteen convolution layers and three fully connected layers (Figure 7). A nonlinear ReLU activation function was used to obtain the output of the convolution layers, and the convolutional part was divided by five consecutive max-pooling layers. Two convolution layers were used in each of the first and second subregions, with layer depths of 64 and 128, and four consecutive convolution layers were used in each of the remaining three subregions, with layer depths of 256, 512, and 512, respectively. Pooling layers were employed to decrease the number of learnable parameters. The last layer of the proposed VGG19 model produced the feature vector, while 1024 and 512 neurons formed the two hidden layers placed before the feature collection layer. To reduce overfitting during the implementation of the fine-tuned model, L2 regularization was employed after each fully connected layer. The CNN-based VGG19 model provides 4096 appropriate features.
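A minimal Keras sketch of this fine-tuned extractor, assuming an ImageNet-pretrained convolutional base, hidden layers of 1024 and 512 units with L2 regularization, and a 4096-unit feature layer as described above (the regularization strength is an assumption, and grayscale inputs are assumed to be replicated to three channels to match VGG19's expected input):

import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

# Pretrained convolutional base; grayscale X-rays would be stacked to 3 channels.
base = tf.keras.applications.VGG19(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))

x = layers.Flatten()(base.output)
x = layers.Dense(1024, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4))(x)   # hidden layer 1
x = layers.Dense(512, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4))(x)   # hidden layer 2
features = layers.Dense(4096, activation="relu",
                        name="feature_vector")(x)               # 4096-D features
out = layers.Dense(2, activation="softmax")(features)           # COVID-19 vs. normal

model = models.Model(base.input, out)           # fine-tuned classifier
extractor = models.Model(base.input, features)  # CNN feature extractor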

FEATURE FUSION AND CLASSIFICATION

Data fusion has been applied in several machine learning and computer vision applications [53]. In particular, feature fusion can combine more than one feature vector. The two feature extractors provide feature vectors of 1 × 4096 and 1 × 3780. The feature selection process is mathematically expressed by Equations (15)-(17) [54]. Equations (15) and (16) represent the features extracted by HOG and CNN, respectively, and the extracted feature vectors are combined by concatenation, as represented by Equation (17).

The features extracted by HOG and CNN were thus fused into a vector of 7876 features, from which 1186 score-based features were selected based on maximum entropy. When i = 1 the fusion recalls the HOG features, when i = 2 it recalls the VGG19 features, and the two are finally concatenated. For the purpose of selecting optimal features, entropy was employed using score values; the probability of the features and the entropy are defined by Equations (18) and (19). The final selected features were fed to the classifiers in order to identify COVID-19 images.
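The fusion and entropy-based selection can be sketched as follows; the histogram-based estimate of each feature's probability distribution (16 bins) is an assumption for illustration, while the feature counts follow the paper (4096 + 3780 = 7876, reduced to 1186):

import numpy as np

def entropy_scores(X: np.ndarray, bins: int = 16) -> np.ndarray:
    """Shannon entropy of each feature column across the training images."""
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        counts, _ = np.histogram(X[:, j], bins=bins)
        p = counts / counts.sum()
        p = p[p > 0]
        scores[j] = -(p * np.log2(p)).sum()
    return scores

cnn_feats = np.random.rand(100, 4096)   # dummy CNN features (100 images)
hog_feats = np.random.rand(100, 3780)   # dummy HOG features
fused = np.concatenate([cnn_feats, hog_feats], axis=1)   # 100 x 7876

scores = entropy_scores(fused)
top = np.argsort(scores)[-1186:]        # indices of the 1186 selected features
selected = fused[:, top]                # final 100 x 1186 input to the classifier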

SEGMENTATION OF THE COVID-19-AFFECTED REGION

For biomedical image segmentation, the watershed technique provides better results than other techniques, such as Fuzzy C-means (FCM). The conventional FCM algorithm suffers from weaknesses in initializing cluster centers, determining an optimal number of clusters, and sensitivity to noise [56]. The FCM segmentation method cannot detect the fractured regions in X-ray images affected by COVID-19. However, watershed segmentation is a fast, simple and intuitive method, which provides closed contours, requires low computational time and produces a complete division of the images into separated regions. Segmentation was applied for the non-trivial task of separating the fractured lung regions from the X-ray images. A watershed segmentation technique was applied to segment the fractured regions of each image owing to its relatively low computational complexity and its capability of providing high segmentation accuracy. This method separated touching objects in an X-ray image and provided a complete division. Figure 9 presents the different stages of image processing, from filtering to segmentation of significant regions in the COVID-19-affected lung X-ray images.
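A minimal watershed sketch with scikit-image follows; the intensity thresholds used to seed the markers are assumptions, and the paper's exact marker strategy may differ:

import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def watershed_segment(gray: np.ndarray) -> np.ndarray:
    """Flood the gradient image from intensity-based markers."""
    gradient = sobel(gray)                 # elevation map for flooding
    markers = np.zeros_like(gray, dtype=int)
    markers[gray < 0.2] = 1                # dark background / air regions
    markers[gray > 0.7] = 2                # bright (dense) regions
    return watershed(gradient, markers)    # closed-contour labelled regions

img = np.random.rand(224, 224)             # dummy filtered image in [0, 1]
regions = watershed_segment(img)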

    Figure: COVID-19 segmentation using the watershed technique: (a) anisotropic diffusion filtering applied; (b) the filtered image adjusted; (c) watershed RGB image; (d) fractured lung region caused by the coronavirus (COVID-19).

    EXPERIMENTAL DETAILS AND RESULTS

    1. Datasets and Overall Performance:

      To validate the framework developed for intelligent COVID-19 detection, this work used a total of 5090 chest X-ray images for training, testing and validation, as shown in Table 1, without data augmentation. In this study, the distribution of the data was stratified in order to mitigate the data imbalance issue. The validation images were taken from the training set, whereas the testing set was set aside before training.

    2. Filtering Performance:

      The proposed method used COVID-19 X-ray images with different speckles, noise levels and resolutions as test data. To work with meaningful features, information preservation and noise reduction are prerequisite conditions to fulfill. The current system used modified anisotropic diffusion filtering (MADF) at the image preprocessing stage. The performance of MADF was assessed using three evaluation metrics: signal-to-noise ratio (SNR), mean square error (MSE) and edge preservation factor (EPF) [21]. Higher values of SNR and EPF represent more noise reduction and better preservation of edge details, respectively, while a lower MSE value indicates less error between the input and filtered images. Classification models were run 10 times, and the highest values were reported for the performance metrics. The figure presents the filtering performance using the different evaluation metrics and a comparison with other techniques available in the literature. All the existing filtering techniques produced lower MSE values, indicating that the proposed technique was only slightly worse in this respect. On the other hand, the SNR and EPF values were comparatively much higher for the proposed filtering technique, demonstrating its superiority over the others.

      Fig: Confusion matrix with overall performance parameters during training


    3. Feature Extraction Performance:

      The CNN used the extracted features for training before classifying. Test features were also obtained from the test images using different pre-trained models to measure the performance of the CNN models. CNNs commonly use pre-trained models such as AlexNet, ResNet50, VGG-16 and VGG-19 to extract features from the training and test datasets. A comparison of performance among the different CNN models shows that VGG19, which was proposed in this work, achieved better accuracy and specificity than the other CNN models, although ResNet50 showed the best performance in terms of sensitivity.

      Fig: Performance measurement of different feature extraction models

    4. Classification Performance:

      This work proposed a fusion of the feature vectors obtained by the HOG and CNN techniques. This fused vector served as the final input for the training and test datasets. Figure 13 presents a comparative study of different feature extraction approaches. The performance of each individual feature extraction technique was less satisfactory than that of the fusion approach, demonstrating that the proposed approach could classify COVID-19 cases more accurately than single feature extraction approaches.

      Fig: Comparative results of individual and fusion features.

      The final classification was also performed with other popular machine-learning methods, such as the artificial neural network (ANN), support vector machine (SVM) and k-nearest neighbor (KNN), in addition to the CNN. The fused feature vectors were fed to each of these classifiers to find the best one. The CNN clearly showed the best performance, as shown in the figure.

      Fig: Comparative performance of different classifiers

      The accuracy vs. epoch curve is plotted in Figure 15a. It shows clear evidence of no overfitting, with the training and validation accuracy curves remaining very close. The learning rate started from 0.001, with a mini-batch size of 64 over 36 epochs. The loss curve depicted in Figure 15b indicated only a small loss value.

      Fig: Learning curves (a) accuracy vs. number of epochs (b) loss vs. number of epochs.

    5. Limitations and Future Work

One of the limitations of this work was the imbalance of data in the datasets used for training and testing. In general, a balanced dataset with an equal number of normal and COVID-19 X-ray images makes model building easier, and the developed model can provide better prediction accuracy. Furthermore, a classification algorithm finds it easier to learn from a balanced dataset. Naturally, in any open-source database, the number of normal images is higher than the number of COVID-19-positive images. As the images used in this study were taken from open-source databases, some imbalance in the training and testing datasets was unavoidable. However, the ratio between the number of normal and COVID-19 images was maintained at 1.57 in both the training and testing datasets in order to alleviate the data imbalance problem to some extent.

CONCLUSION

The coronavirus pandemic has stretched the healthcare systems of every country in the world to their limits as they deal with large numbers of deaths. Early detection of COVID-19 in a faster, easier, and cheaper way can help save lives and reduce the burden on healthcare professionals. Artificial intelligence can play a big role in identifying COVID-19 by applying image processing techniques to X-ray images. This work designed and developed an intelligent system for COVID-19 identification with high accuracy and minimum complexity by combining features extracted by the histogram-oriented gradient (HOG) and a convolutional neural network (CNN). Suitable feature selection and classification are vital in COVID-19 detection using chest X-ray images. Chest X-ray images were entered into the system to produce an output marking the significant lung region, which was used to identify COVID-19. The proposed feature fusion system showed a higher classification accuracy (99.49%) than the accuracies obtained using features from the individual feature extraction techniques, HOG and CNN alone. The CNN produced the best classification accuracy compared to the other classification techniques, such as ANN, KNN and SVM. Furthermore, the proposed fusion technique was validated with high accuracies using generalization and k-fold validation techniques.

REFERENCES

  1. Wu, F.; Zhao, S.; Yu, B.; Chen, Y.M.; Wang, W.; Song, Z.G.; Hu, Y.; Tao, Z.W.; Tian, J.H.; Pei, Y.Y.; et al. A new coronavirus associated with human respiratory disease in China. Nature 2020, 579, 265–269. [CrossRef]

  2. Guan, W.J.; Ni, Z.Y.; Hu, Y.; Liang, W.H.; Ou, C.Q.; He, J.X.; Liu, L.; Shan, H.; Lei, C.L.; Hui, D.S.; et al. Clinical characteristics of coronavirus disease 2019 in China. N. Engl. J. Med. 2020, 382, 1708–1720. [CrossRef]

  3. Chen, N.; Zhou, M.; Dong, X.; Qu, J.; Gong, F.; Han, Y.; Qiu, Y.; Wang, J.; Liu, Y.; Wei, Y.; et al. Epidemiological and clinical characteristics of 99 cases of 2019 novel coronavirus pneumonia in Wuhan, China: A descriptive study. Lancet 2020, 395, 507–513. [CrossRef]

  4. Wang, C.; Horby, P.W.; Hayden, F.G.; Gao, G.F. A novel coronavirus outbreak of global health concern. Lancet 2020, 395, 470–473. [CrossRef]

  5. Zhu, N.; Zhang, D.; Wang, W.; Li, X.; Yang, B.; Song, J.; Zhao, X.; Huang, B.; Shi, W.; Lu, R.; et al. A novel coronavirus from patients with pneumonia in China. N. Engl. J. Med. 2020, 382, 727–733. [CrossRef] [PubMed]

  6. Li, Q.; Guan, X.; Wu, P.; Wang, X.; Zhou, L.; Tong, Y.; Ren, R.; Leung, K.S.; Lau, E.H.; Wong, J.Y.; et al. Early transmission dynamics in Wuhan, China, of novel coronavirus-infected pneumonia. N. Engl. J. Med. 2020, 382, 1199–1207. [CrossRef]

  7. Holshue, M.L.; DeBolt, C.; Lindquist, S.; Lofy, K.H.; Wiesman, J.; Bruce, H.; Spitters, C.; Ericson, K.; Wilkerson, S.; Tural, A.; et al. First case of 2019 novel coronavirus in the United States. N. Engl. J. Med. 2020, 382, 929–936. [CrossRef] [PubMed]

  8. WHO Coronavirus Disease (COVID-19) Dashboard. Available online: https://covid19.who.int/?gclid=CjwKCAjw5p_8BRBUEiwAPpJO682JEO1UwRkSSDosfqaqGeAncQYeiEeTcnMSFJd55I0lzYlHrvi4SxoCAeUQAvD_BwE (accessed on 15 October 2020).

  9. Ledford, H.; Cyranoski, D.; Van, N.R. The UK has approved a COVID vaccine – here's what scientists now want to know. Nature 2020, 588, 205–206. [CrossRef]

  10. Anon. The COVID vaccine challenges that lie ahead. Nature 2020, 587, 522. [CrossRef]

  11. Kim, J.H.; Marks, F.; Clemens, J.D. Looking beyond COVID-19 vaccine phase 3 trials. Nat. Med. 2021, 27, 17. [CrossRef]

  12. Logunov, D.Y.; Dolzhikova, I.V.; Shcheblyakov, D.V.; Tukhvatulin, A.I.; Zubkova, O.V.; Dzharullaeva, A.S.; Kovyrshina, A.V.; Lubenets, N.L.; Grousova, D.M.; Erokhova, A.S.; et al. Safety and efficacy of an rAd26 and rAd5 vector-based heterologous prime-boost COVID-19 vaccine: An interim analysis of a randomised controlled phase 3 trial in Russia. Lancet 2021. [CrossRef]

  13. Chen, Z.; Zhang, L. Meet the challenges of mass vaccination against COVID-19. Explor. Res. Hypothesis Med. 2021, 13.

  14. Li, Y.; Shen, L. Skin lesion analysis towards melanoma detection using deep learning network. Sensors 2018, 18, 55. [CrossRef]

  15. Liao, Q.; Ding, Y.; Jiang, Z.L.; Wang, X.; Zhang, C.; Zhang, Q. Multi-task deep convolutional neural network for cancer diagnosis. Neurocomputing 2019, 348, 66–73. [CrossRef]

  16. Yoo, S.; Gujrathi, I.; Haider, M.A.; Khalvati, F. Prostate cancer detection using deep convolutional neural networks. Sci. Rep. 2019, 9, 1–10. [CrossRef]

  17. Esteva, A.; Kuprel, B.; Novoa, R.A.; Ko, J.; Swetter, S.M.; Blau, H.M.; Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017, 542, 115–118. [CrossRef]

  18. Wang, L.; Wong, A. COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. arXiv 2020, arXiv:2003.09871. [CrossRef] [PubMed]

  19. Afzal, A. Molecular diagnostic technologies for COVID-19: Limitations and challenges. J. Adv. Res. 2020. [CrossRef] [PubMed]

  20. World Health Organization. Use of Chest Imaging in COVID-19. 2020. Available online: https://www.who.int/publications/i/item/use-of-chest-imaging-in-covid-19 (accessed on 7 January 2021).
