Automatic Neonatal Pain Detection for Pediatrics using CNN

DOI : 10.17577/IJERTCONV9IS07005




Kripa Rachel Thomas1

Dept. of Computer Science & Engineering, Mangalam College of Engineering, Kottayam, India.

Poorna Sudhakaran2

Dept. of Computer Science & Engineering, Mangalam College of Engineering, Kottayam, India.

Shaldrin Emerson3

Dept. of Computer Science & Engineering, Mangalam College of Engineering, Kottayam, India.

Rahul R4

Dept. of Computer Science & Engineering, Mangalam College of Engineering, Kottayam, India.

Simy Mary Kurian5

Dept. of Computer Science & Engineering, Mangalam College of Engineering, Kottayam, India.

Abstract- Many methods exist for detecting neonatal pain, and finding the cause of pain is essential for pain management. Most existing standards fall short because they assess pain in a discontinuous and inconsistent manner, and their results depend on inter- and intra-observer variation. These drawbacks may lead to delayed intervention and over- or under-treatment. Convolutional Neural Networks (CNNs) are widely used today because of their success in medical image analysis, image recognition, and related tasks. Here we use a lightweight Convolutional Neural Network designed specifically for neonates for the effective detection of neonatal pain. The N-CNN is applied to a real-world dataset containing images of neonates experiencing pain while hospitalized in the Intensive Care Unit. The experiments show that the N-CNN classifies pain more efficiently than the existing standards.

Keywords: Convolutional Neural Network, Neonatal Pain, Pain Detection

  1. INTRODUCTION

    For many years, paediatricians believed that neonates do not have the ability to sense pain, but it was later discovered that they do. Many methods exist for detecting neonatal pain; their main drawback is that they are discontinuous and inconsistent in determining the pain of neonates. Since neonates cannot communicate properly, it is very important to determine the cause of pain for proper pain management. The current standards are discontinuous and inconsistent because they depend heavily on intra- and inter-observer variation, which gradually leads to under- or over-treatment. Overuse of medications such as morphine causes serious side effects like hypotension and intolerance. Convolutional Neural Networks are used here because of their wide and successful application in medical image analysis, object recognition, image recognition, and related tasks. CNNs provide pain-relevant features because they can learn and extract features at multiple levels of abstraction. Existing CNNs are not very efficient in determining the pain of neonates because of the number of images these networks are trained on [9]. Here we use a novel lightweight neonatal convolutional network designed specifically for determining the pain of neonates from facial expressions. The network is trained on a globally available dataset. Many existing standards fail to interpret the facial expressions of neonates due to their unique craniofacial structure and the large variations in pose and expression [10]. The experiments show that the proposed CNN is more efficient and viable in determining the facial expressions of neonates than other existing standards.

  2. RELATED WORKS

    Local Binary Pattern (LBP) is a vector-based descriptor used for detecting pain in neonates. Images are taken from the COPE dataset. The feature vector is classified into pain or no-pain using Gaussian and nearest-mean classifiers; the Gaussian classifier is best at distinguishing pain from no pain. The classifier is very fast and gives consistent results. However, because the neighbourhood is small, it cannot capture dominant features with a large-scale structure [1].
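    As a rough illustration of the basic LBP operator on a grayscale image (a generic sketch in Python, not the exact variant used in [1]):

import numpy as np

def lbp_code(patch):
    """Compute the basic 8-neighbour LBP code for a 3x3 grayscale patch."""
    center = patch[1, 1]
    # Clockwise neighbours starting at the top-left pixel.
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if n >= center else 0 for n in neighbours]  # threshold against the center pixel
    return sum(b << i for i, b in enumerate(bits))

def lbp_histogram(image):
    """Build a normalized 256-bin LBP histogram to use as a feature vector."""
    h, w = image.shape
    codes = [lbp_code(image[r - 1:r + 2, c - 1:c + 2])
             for r in range(1, h - 1) for c in range(1, w - 1)]
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

    The resulting histogram is the feature vector that a Gaussian or nearest-mean classifier would operate on.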

    An image-based technique is used for classifying pain states by comparing several texture descriptors based on Local Binary Patterns (LBP). Two novel descriptors are obtained by combining existing ones: the Elongated Binary Pattern (ELBP) and the Elongated Ternary Pattern (ELTP). ELTP, which combines the characteristics of the Local Ternary Pattern (LTP) and ELBP, is considered the best-performing descriptor. The method is affected by the unusual amount of noise in neonatal faces, and the variation in expression between classes affects sub-images rather than the whole image [2].

    The Neonatal Facial Coding System (NFCS) is used to study full-term pain behaviour in neonates, with the aim of evaluating its reliability and feasibility. The NFCS can be used at the bedside, where it shows good reliability; inter-observer reliability was found to be high for the NFCS facial pain actions, and pain could also be detected in premature infants. Good reliability was demonstrated consistently with this method, using paper-and-pen recording rather than hand-held computer recording [3].

    A pretrained Deep Convolutional Neural Network (DCNN) with transfer learning is used for neonatal pain expression recognition. The transfer learning approach helps avoid overfitting and accelerates the training procedure. To enhance the generalization ability of the DCNN, it is fine-tuned on a neonatal pain expression image dataset; fine-tuning allows the DCNN to achieve good performance. DCNN with transfer learning is a promising approach for clinical diagnosis [4].
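    A minimal Keras sketch of this style of transfer learning, assuming a generic ImageNet-pretrained backbone and a four-class pain dataset (the choice of VGG16, the layer sizes, and the training schedule are illustrative assumptions, not taken from [4]):

import tensorflow as tf
from tensorflow.keras import layers, models

# Pretrained backbone with ImageNet weights; the original classification head is dropped.
base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze the backbone for the first training phase

# New classification head for the neonatal pain classes (class count is an assumption).
model = models.Sequential([
    base,
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Phase 1: train only the new head on the neonatal images, e.g.
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# Phase 2: unfreeze the backbone and fine-tune with a small learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)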

    The basic concepts of three face classification techniques, PCA, LDA, and SVM, are compared. The main idea in PCA is to find the covariance matrix of the image dataset, with each image treated as a single point in a high-dimensional space. In LDA, images are projected into a lower-dimensional discriminant space. The SVM separates the input vector patterns into two classes using an optimal separating hyperplane. A high recognition rate was achieved with this method. Only reactions corresponding to acute pain are included in the dataset, and the study uses still photographs, which do not capture the dynamic nature of facial expressions [5].
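    A compact scikit-learn sketch of this eigenface-style PCA + SVM pipeline (placeholder data shapes and hyperparameters are assumptions, not the exact setup of [5]):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Assume X holds flattened grayscale face images (n_samples x n_pixels)
# and y holds binary pain / no-pain labels; random data stands in here.
rng = np.random.default_rng(0)
X = rng.random((200, 64 * 64))
y = rng.integers(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# PCA projects each image onto a small set of principal components ("eigenfaces"),
# and the SVM then finds a separating hyperplane in that reduced space.
clf = make_pipeline(PCA(n_components=50, whiten=True), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))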

    A combination of hand-crafted and deep-learning-based features is used for neonatal pain assessment. A pipeline assesses pain in newborns from face images: faces are detected with a face detector and aligned with an affine transformation. Local Binary Patterns (LBP) and Histograms of Oriented Gradients (HOG) serve as hand-crafted features, while VGG and MBPCNN are used for deep feature extraction. The method attains an average accuracy of 82.47% [6].

    A neonatal convolutional neural network (N-CNN) is proposed for the detection of neonatal pain. The proposed N-CNN is evaluated on an unconstrained dataset collected from 31 neonates. Compared with the current standards, the proposed network achieves higher accuracy and is considered the most viable and efficient approach [7].

    Existing methods developed for recognizing adult faces do not work well for infants because of the unique structure of neonatal faces. A dedicated engine is trained to recognize neonatal faces and to discriminate them from adult faces, and a software system is developed to annotate the image database. Experimental results show that the trained baby-face recognizer achieves a large improvement in differentiating baby faces from adult faces and works well on a dataset containing both [8].

  3. PROPOSED METHODOLOGY

    We use a novel Convolutional Neural Network for assessing the type of neonatal pain from facial expressions. Like the human brain, the machine learns from examples and forms a logic in the CNN to classify which category an image belongs to. We use pre-processed neonatal images collected from the NICU (Neonatal Intensive Care Unit) and a CNN to develop the trained model, which performs the pain classification task. The CNN architecture layers are created using the Keras library in Python. The trained model categorizes the images into four different stimuli: Pain stimulus, Rest/Cry stimulus, Air stimulus to Nose, and Friction stimulus.

    1. The Pre-processing Stage

      The neonatal image dataset must be loaded into Python data structures for pre-processing, which suppresses undesired distortions, enhances relevant features, and prepares the data for further analysis by the trained model. Since the model trains faster on small images, resizing is performed first. Histogram Equalization and Normalization are then applied to the neonatal images to clean the image data for model input.

    2. The Feature-Extraction Stage

      Feature extraction increases the accuracy of the model by extracting the features of the pre-processed neonatal images and converting them into a lower-dimensional representation without losing image characteristics. Classification of the pain states is based on this stage.

    3. The Classification Stage

      An image classification model is trained to recognize the various pain states in neonatal images, and classification is done by the dense layer of the CNN. The model outputs the probability that a neonatal image represents each of the classes it was trained on.

    4. Training Stage

      The system learns from the pre-processed neonatal images and forms a logic in the CNN to classify each image according to the pain category it belongs to. The trained model is saved and later used in the prediction stage. In a CNN, feature extraction and feature classification are merged into a single network, which improves classification efficiency and accuracy.

    5. Prediction Stage

    In this stage, the saved model automatically detects the pain state of a neonatal image supplied by the user. The saved model and the pre-processed image are loaded to predict the type of pain state. CNNs offer high accuracy for image classification and produce reliable results.

  4. MODULES

The proposed system contains the following modules:

Pre-processing images
Training
Prediction
GUI Creation

  1. Pre-processing Images

    Here the images are loaded from the dataset. After loading, the images are resized. Weighted-mean Histogram Equalization is then performed for brightness-preserving image enhancement and noise reduction. Finally, Normalization rescales the images from the 0-255 range to the 0-1 range.
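    A minimal sketch of this pre-processing step using OpenCV and NumPy (the target size, the grayscale conversion, and the use of plain histogram equalization in place of the weighted-mean variant are assumptions for illustration):

import cv2
import numpy as np

IMG_SIZE = 120  # assumed target size; the paper does not state the exact value

def preprocess_image(path):
    """Load one neonatal image, resize it, equalize its histogram, and normalize to [0, 1]."""
    image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)   # grayscale here, since equalizeHist works on single-channel images
    image = cv2.resize(image, (IMG_SIZE, IMG_SIZE))
    image = cv2.equalizeHist(image)                  # contrast enhancement (plain equalization in this sketch)
    image = image.astype(np.float32) / 255.0         # rescale 0-255 -> 0-1
    return image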

  2. Training

    In training, the pre-processed images and the previously created CNN architecture are loaded. Training is then performed; when it completes, a logic is formed in the CNN for classifying the images, and the resulting model is saved.
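    A minimal training sketch with Keras, assuming the dataset is organized into four class folders as described later in the paper (the directory name, image size, batch size, and epoch count are illustrative assumptions):

import tensorflow as tf

IMG_SIZE = (120, 120)  # assumed input size, matching the pre-processing sketch

# Load the images from four class sub-folders, e.g. dataset/pain, dataset/rest_cry, ...
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset", image_size=IMG_SIZE, batch_size=16, label_mode="categorical")
train_ds = train_ds.map(lambda x, y: (x / 255.0, y))  # rescale 0-255 -> 0-1

model = build_model()             # hypothetical helper; see the architecture sketch later in the paper
model.fit(train_ds, epochs=30)    # epoch count is an assumption
model.save("ncnn_pain_model.h5")  # the saved model is reused in the prediction stage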

  3. Prediction

In prediction, the neonatal image to be classified is loaded, resized, and pre-processed. The saved model is then loaded, and the pre-processed image is passed through it to predict the pain type.
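A corresponding prediction sketch (the class names, their ordering, and the file paths are assumptions for illustration):

import numpy as np
import tensorflow as tf
import cv2

CLASS_NAMES = ["Pain", "Rest/Cry", "Air stimulus to nose", "Friction"]  # assumed ordering

def predict_pain_state(image_path, model_path="ncnn_pain_model.h5"):
    """Pre-process a single neonatal image and return the predicted pain state."""
    image = cv2.imread(image_path)
    image = cv2.resize(image, (120, 120)).astype(np.float32) / 255.0
    model = tf.keras.models.load_model(model_path)
    probs = model.predict(image[np.newaxis, ...])[0]  # add a batch dimension
    return CLASS_NAMES[int(np.argmax(probs))]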

4. GUI Creation

The GUI is created using the tkinter package in Python. It contains a button for choosing an image, and the predicted type is displayed in the same window.
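A small tkinter sketch of such a window (the widget layout is an assumption; predict_pain_state refers to the hypothetical helper sketched above):

import tkinter as tk
from tkinter import filedialog

def choose_and_predict():
    path = filedialog.askopenfilename(title="Choose a neonatal image")
    if path:
        result_label.config(text="Predicted state: " + predict_pain_state(path))

root = tk.Tk()
root.title("Neonatal Pain Detection")
tk.Button(root, text="Choose Image", command=choose_and_predict).pack(pady=10)
result_label = tk.Label(root, text="Predicted state: -")
result_label.pack(pady=10)
root.mainloop()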

Fig.1. System Architecture (Training: Dataset → Load Dataset → Pre-process → Create CNN Architecture → Training → Save Model; Prediction: Neonatal Image → Pre-process → Prediction → Type)

The proposed model uses a novel Convolutional Neural Network (CNN) for assessing neonatal pain from facial expressions. The CNN is designed and trained end to end on a real-world dataset of neonates to form the model. The system learns from the dataset, much like the human brain, and forms a logic in the CNN to categorize a neonatal image into four different stimuli: 1. Pain stimulus, 2. Rest/Cry stimulus, 3. Air stimulus to Nose, 4. Friction stimulus. The following steps are performed inside the system:

  1. Dataset (Database)

    The dataset is collected from neonates while they are hospitalized in the Neonatal Intensive Care Unit (NICU). It is divided into four classes corresponding to the four pain states – 1. Pain stimulus, 2. Rest/Cry stimulus, 3. Air stimulus to Nose, 4. Friction stimulus – and stored in four separate folders. Each folder contains up to 50 neonatal images for training. The dataset is loaded using Python data structures.

    Pre-processing

    The loaded dataset undergoes pre-processing to reduce distortion in the images. Pre-processing involves Resizing, Histogram Equalization, and Normalization. Images captured by a camera and fed into the CNN may vary in size, so a base size is first established for all images. Each image then undergoes weighted-mean Histogram Equalization for contrast enhancement, followed by Normalization, which rescales the intensity values. After this stage, the neonatal image set is ready for training the model.

    Fig.2. Pre-processing (Neonatal Image → Resizing → Histogram Equalization → Normalization → Pre-Processed Image)

  2. CNN Architecture Creation

    The CNN is designed specifically for analysing the facial expressions of neonates, and its architecture layers are created using the Keras library in Python. The CNN is a combination of an input layer, an output layer, and hidden layers: the Convolutional layer, Batch Normalization layer, Max pooling layer, Dropout layer, and Dense layer. The convolutional layer is used for feature extraction; it extracts the features of the neonatal images and converts them into a lower dimension without losing image characteristics. The output of the convolutional layer is the input of the following Batch Normalization layer, which standardizes the inputs to a layer for each mini-batch; this stabilizes the learning process and dramatically reduces the number of training epochs. Following the Batch Normalization layer, a Max pooling layer reduces the spatial volume of the input. The Dropout layer prevents overfitting. Each neuron in the Dense layer receives input from the Dropout layer; since the Dense layer is fully connected, it can be used for classification. The final layer of the CNN is the Softmax layer, which produces the output of the system.

    Fig.3. CNN Architecture Creation (Create Input Layer → Add Convolutional Layer → Add Batch Normalisation Layer → Add Max Pooling Layer → Add Dropout Layer → Add Dense Layer → Create Softmax Layer)
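    A minimal Keras sketch of a layer stack following this description (the filter counts, kernel sizes, and input shape are illustrative assumptions; this is not the exact N-CNN of [7]):

import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(120, 120, 3), num_classes=4):
    """Small CNN mirroring the layer order described above: Conv -> BatchNorm -> MaxPool -> Dropout -> Dense -> Softmax."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),      # feature extraction
        layers.BatchNormalization(),                       # standardize activations per mini-batch
        layers.MaxPooling2D((2, 2)),                       # reduce spatial volume
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.25),                              # prevent overfitting
        layers.Flatten(),
        layers.Dense(128, activation="relu"),              # fully connected classification layer
        layers.Dense(num_classes, activation="softmax"),   # output probabilities for the four stimuli
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model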

  3. Training

    The pre-processed neonatal images are fed to the CNN for training. Based on the dataset provided, a logic is formed in the CNN to categorize each image into a pain state, and the trained model is then saved. The saved model is capable of classifying images into the four stimuli: 1. Pain stimulus, 2. Rest/Cry stimulus, 3. Air stimulus to Nose, 4. Friction stimulus. The pain assessment for each period is done using the Neonatal Infant Pain Scale (NIPS).

  4. Prediction

    In this stage, a neonatal image is provided by the user for prediction and undergoes pre-processing. The pre-processed neonatal image and the saved model are then loaded, and, based on the logic learned during training, the system predicts the pain-state category to which the image belongs.

  5. RESULT

    Here we use a convolutional neural network for assessing neonatal pain. It enables automatic detection of neonatal pain, overcomes the variation introduced by human observers, and provides more accurate results. The CNN also overcomes the discontinuous nature of the current standards. The experiments show that the proposed CNN achieves higher accuracy than other architectures such as VGG-16 and ResNet-50, suggesting that the proposed CNN is a more viable and efficient approach to neonatal pain detection.
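    A short sketch of how such accuracies can be computed on a held-out test folder with Keras (the directory name, image size, and file names are assumptions):

import tensorflow as tf

# Held-out test images organized in the same four class sub-folders as the training data.
test_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset_test", image_size=(120, 120), batch_size=16, label_mode="categorical")
test_ds = test_ds.map(lambda x, y: (x / 255.0, y))  # same 0-1 rescaling as in training

model = tf.keras.models.load_model("ncnn_pain_model.h5")
loss, accuracy = model.evaluate(test_ds)
print(f"test accuracy: {accuracy:.2f}")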

      Table.1. COMPARISON WITH N-CNN

      Test Set Count    Accuracy
                        ResNet-50    VGG-16    N-CNN
      10                0.80         0.76      0.85
      20                0.76         0.76      0.86
      30                0.75         0.76      0.85
      40                0.81         0.75      0.87
      50                0.80         0.76      0.87

      Fig.4. Performance Analysis Chart: graphical illustration of N-CNN accuracy versus ResNet-50 and VGG-16 across test set counts of 10-50, corresponding to Table 1

  6. CONCLUSION

Assessing neonatal pain correctly is very important because neonates cannot communicate properly, and accurate pain detection is a crucial step in pain management. Many existing standards for assessing neonatal pain are discontinuous and inconsistent in nature, and these methods require a large number of well-trained nurses to observe the neonates. The presented CNN is evaluated on a real-world dataset and achieves an overall accuracy of 85%. The results are encouraging and suggest that automatic recognition of neonatal pain is viable and efficient and can be used as a better alternative to the current standards.

REFERENCES

[1] M. N. Mansor and M. N. Rejab, "A computational model of the infant pain impressions with Gaussian and nearest mean classifier," in Proc. IEEE Int. Conf. Control Syst. Comput. Eng., 2013, pp. 249-253.
[2] L. Nanni, S. Brahnam, and A. Lumini, "A local approach based on a local binary patterns variant texture descriptor for classifying pain states," Expert Syst. Appl., vol. 37, no. 12, pp. 7888-7894, 2010.
[3] R. E. Grunau, T. Oberlander, L. Holsti, and M. F. Whitfield, "Bedside application of the neonatal facial coding system in pain assessment of premature infants," Pain, vol. 76, no. 3, pp. 277-286, 1998.
[4] Guanming L., Qiang Hao, K. Kong, J. Yan, and Haibi L., "Deep convolutional neural networks with transfer learning for neonatal pain expression recognition," in Proc. IEEE Int. Conf. Natural Computation, Fuzzy Systems and Knowledge Discovery, 2019, pp. 251-256.
[5] S. Brahnam, C.-F. Chuang, F. Y. Shih, and M. R. Slack, "Machine recognition and representation of neonatal facial displays of acute pain," Artif. Intell. Med., vol. 36, no. 3, pp. 211-222, 2006.
[6] L. Celona and L. Manoni, "Neonatal facial pain assessment combining hand-crafted and deep features," in Proc. Int. Conf. Image Anal. Process., 2017, pp. 197-204.
[7] G. Zamzmi, R. Paul, D. Goldgof, K. Rangachar, and Y. Sun, "Pain assessment from facial expression: Neonatal convolutional neural network (N-CNN)," in Proc. Int. Joint Conf. Neural Netw. (IJCNN), Jul. 2019, pp. 1-6.
[8] D. Wen, C. Fang, X. Ding, and T. Zhang, "Development of recognition engine for baby faces," in Proc. 20th Int. Conf. Pattern Recognit., Istanbul, Turkey, 2010, pp. 3408-3411.
[9] D. Hudson-Barr, B. Capper-Michel, S. Lambert, T. Mizell Palermo, K. Morbeto, and S. Lombardo, "Validation of the pain assessment in neonates (PAIN) scale with the neonatal infant pain scale (NIPS)," Neonatal Netw., vol. 21, no. 6, pp. 15-21, 2002.
[10] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, Sep. 2014.