Chatbot with Music and Movie Recommendation based on Mood

DOI: 10.17577/IJERTCONV8IS15025


Shivani Shivanand1, K S Pavan Kamini2, Monika Bai M N3, Ranjana Ramesh4, Sumathi H R5

1,2,3,4 UG Student, ISE, JSS Academy of Technical Education, Bangalore, India
5 Assistant Professor, ISE, JSS Academy of Technical Education, Bangalore, India

Abstract: In this era of technological advancements, music recommendation based on mood is much needed, as it helps people relieve stress by listening to soothing music suited to their mood. In this project, we have implemented a chatbot that recommends music as well as movies based on the user's mood. The objective of our application is to identify the mood expressed by the user; once the mood is identified, songs are played by the application or a list of movies is displayed in the form of a website, according to the choice made by the user and his current mood. Our proposed system is implemented as an application that runs on the user's desktop, and its main focus is to reliably determine the user's mood. Human-computer interaction (HCI) is of great importance in today's world, and one of the most popular concepts in HCI is the recognition of emotion from facial images. In this process, the frontal view of the facial images is utilized to detect the mood. Another important factor is the extraction of facial elements from the user's face. We have used the Haar Cascade algorithm to accurately detect the user's face in the live webcam feed, and the CNN algorithm to detect the emotion being expressed by the user from the facial features. Facial attributes such as the arrangement of the mouth and eyes are used to detect the mood of the user.

Keywords: Haar cascade, mood detection, mood-based recommendation, CNN

  1. INTRODUCTION

Emotion detection is an important process in our project and requires accuracy; it can be done effectively with the help of facial expressions, which are how humans understand and interpret emotion. Research shows that reading a person's facial expressions can change the interpretation of what is being spoken and can also influence how the conversation unfolds. The ability to perceive emotions is exceedingly important for communication to succeed; in a typical conversation, almost 93% of communication depends on the emotion being expressed.

In our project, the emotion detection of the user is done with the help of facial images captured through the live webcam feed. Happy, sad, angry, fear, surprised, disgust, and neutral are the seven basic emotions common to humans, and they are identified by the various expressions of the face, as depicted in Fig. 1. In this project we aim to find and implement an effective way to identify all these emotions from frontal facial images. The positioning and shape of facial features such as the eyebrows and lips are used by the application to understand and interpret the facial attributes that make up the expression, and thus the emotion being expressed by the user. Fig. 2 demonstrates how various facial features are taken into consideration to identify the emotion.

    Fig. 1. The seven basic emotions

    Fig. 2. Distinguishing features of Anger

The Chatbot module of the application makes use of AI techniques for its implementation. Our chatbot is rule-based, which is the AI methodology used to design a simple chatbot; we chose a rule-based chatbot because our application only requires simple conversation. The emotion detection module utilizes deep learning algorithms to identify the face of the user in the input image and then accurately determine the emotion displayed on the user's face. It implements two algorithms: the Haar Cascade algorithm is used to identify the user's face in each frame of the webcam feed, and the Convolutional Neural Network algorithm is used to extract the facial features and identify the user's mood.

  2. LITERATURE SURVEY

A few of the key features emphasized by the surveyed papers are:

Nikhil et al. [1] use algorithms and technologies including Haar cascade, Canny edge detection, and blob detection for emotion detection. The system captures pictures of the user, from which the mood is detected; inputs such as the face and the emotions are taken from the picture, and the system also provides a chat box for responses. The proposed system presents a new approach to building a desktop chatbot application using text and gestures. The system can hold a conversation through the chatting application and sends links, web pages, or information depending on the user's response. The system detects smiles and stress: when a smile is detected, joke pop-ups are shown on the screen, and when stress is detected, inspirational-quote pop-ups are shown. Similarly, happy songs are played when a smile is detected and inspirational songs are played when stress is detected.

Ai Thanh Ho et al. in their paper [2] introduce an Emotion-based Movie Recommender System (E-MRS), intended to address the fact that conventional user profiles do not take into account how important users' emotions are and how they affect users' choices, so recommender systems fail to capture users' constantly changing preferences. According to the paper, the objective of E-MRS is to give users a list of suggestions that are customized using a combination of collaborative filtering and content-based techniques. The user's emotions as well as his preferences are taken into account when providing a recommendation, and opinions of other similar users are also considered. The design of the proposed system, its implementation, and its evaluation procedure are also discussed. In order to relate emotions to movies, users answer a questionnaire about which movies or categories of movies they like to watch for each emotion. Furthermore, the system captures user emotions by asking them to use three colours to decorate their avatar.

In paper [3], Manish Dixit et al. propose an approach based on Harris corner points, considered the most important feature, which is improved by using the Bezier curve; it produces low-dimensional features used in image recognition. They design a model for feature extraction from face images to solve the problem of sentiment recognition in a minimal time period. To achieve this, they execute the process efficiently and logically using an improved and stable combination of the straightforwardness and cleverness of finding feature points. In this design they detect the Harris corner points on various parts of the face, and on the basis of those points the Bezier curve is formed. Using this curve, they remove less significant corner points and combine human and computer intelligence by means of the Bezier curve.

Fatima Zahra Salmam et al. in [4] apply sentiment analysis to facial expressions. The analysis is completed in three steps: face detection, feature extraction, and expression classification. They focus on two aspects: first, designing a geometric-based approach for feature extraction, in which distances on the face are computed to characterize the facial expression; second, applying an automatic supervised machine learning method, the decision tree. They use two different databases, JAFFE and COHEN, to which the decision tree algorithm is applied. They improve the accuracy using a new combination of parameters that mainly focus on the eyebrows, eyes, mouth, and nose. They achieved facial expression recognition accuracy rates of nearly 89% and 90% for the JAFFE and COHEN databases respectively.

Jae Sik Lee et al. [5] use the concept of context reasoning, wherein context data is utilized to understand the user's situation. They propose a music recommendation system that incorporates context-reasoning capability. Their system contains an Intention Module, a Mood Module and a Recommendation Module, each of which provides a unique functionality and plays a vital role in the performance of the system as a whole. Context reasoning is done by the Intention Module with the help of environmental context data to conclude whether the user is interested in listening to music. Next, the type of music deemed most appropriate to the user's context is determined by the Mood Module. Lastly, the music is recommended to the user by the Recommendation Module.

Renuka R. Londhe et al. in [6] study the recognition of facial expressions by taking into account the various properties associated with a person's face. Whenever the facial expression changes, changes can be noticed in the curvatures of the face and in features such as the nose, lips, eyebrows and mouth area, and accordingly in the intensity of the corresponding pixels of the images. These features are classified into six expressions, namely anger, disgust, fear, happy, sad and surprise, with the help of an artificial neural network. The scaled conjugate gradient back-propagation algorithm is used to train and test the two-layered feed-forward neural network, and they report a 92.2% recognition rate. They use the JAFFE database, which covers seven expressions, for the computer-based analysis.

Dolly Reney et al. in their paper [7] address the importance of face and emotion identification in the field of security and how it helps solve various challenges. In any face and emotion identification system, the database plays a major role when comparing facial attributes and Mel-frequency components of sound. A database is created for which facial characteristics are computed and stored, and various algorithms are then used to analyze the face and emotion with the help of this database. Their implementation of recognizing a person's face and the emotion being expressed uses an effective method for creating a database of facial expressions and emotions. They use the Viola-Jones algorithm for face identification, and face and emotion identification is evaluated with a KNN classifier.

Shan C et al. in paper [8] empirically evaluate facial representation based on statistical local features, Local Binary Patterns (LBP), for person-independent facial expression recognition. Various machine learning algorithms are applied on different databases for a thorough analysis, which shows that LBP features are effective and efficient for facial expression recognition. They then develop Boosted-LBP, which extracts the most discriminant LBP features; when Support Vector Machine classifiers are used with the Boosted-LBP features, the best recognition performance is achieved. Their experiments also show that the performance of LBP features is stable and robust over a useful range of low resolutions of face images, and produces favorable results on compact low-resolution video sequences recorded in a real-world environment.

Enrique Correa et al. in their paper [9] propose an artificially intelligent system whose goal is to identify emotion with the help of facial expressions. They start with three promising neural network architectures, which they customize, train, and subject to different classification tasks; the best-performing architecture is then optimized further. The final system is demonstrated with a live video application that returns the emotion expressed by the person instantaneously. The paper's main focus is neural-network-based artificially intelligent systems that recognize a person's emotion from images of his face. They also experiment with various methods from existing studies and evaluate the outcomes of the different choices in the design procedure.

Y. Lv, Z. Feng et al. in their paper [10] deal with deep learning, a newer area of machine learning, and how it can be used to classify facial images into various categories of emotion by means of Deep Neural Networks (DNN). The difficulties faced in facial expression classification are overcome with the help of Convolutional Neural Networks (CNN), which are popularly applied for this purpose. They propose a new CNN-based framework for identifying the emotional state, and the Visual Geometry Group (VGG) model is used to refine the architecture and enhance the results. Several large public databases (CK+, MUG, and RAFD) are used for testing and evaluating the proposed architecture. The results show that the CNN method effectively identifies facial expression on many public databases, thereby improving facial expression evaluation.

Xie S et al. in their paper [11] aim to solve the problem of facial expression recognition (FER) through deep comprehensive multipatches aggregation convolutional neural networks (CNNs). The methodology is based on a deep learning framework and primarily comprises two CNN branches, each serving a specific function: one extracts local features from image patches, while the other extracts holistic features from the whole expressional image. The local features capture the expressional details, while the holistic features characterize the high-level semantic information of an expression. Before classification, the local and holistic features are combined, so that expressions are represented at different scales by these two types of hierarchical features. The proposed model represents expressions more completely than many present-day methods that use only one sort of feature.

Liliana Lo Presti et al. in their paper [12] put forth a fresh idea for modeling the temporal dynamics of a sequence of facial expressions. To this end, a sequence of Face Image Descriptors (FID) is treated as the output of a Linear Time Invariant (LTI) system, and the Hankel matrix is used to represent the temporal dynamics of this sequence of descriptors. The paper introduces various strategies for computing dynamics-based representations of a sequence of FID, and reports the classification accuracy of these representations within different standard classification frameworks. Emotion recognition and pain detection are the two application domains considered for validating the proposed representations. Experiments on two publicly available benchmarks, compared against state-of-the-art approaches, show that the dynamics-based FID representation achieves competitive performance with off-the-shelf classification tools.

Carcagnì P et al. in their paper [13] carry out extensive research on how the histogram of oriented gradients (HOG) descriptor can be applied to the facial expression recognition (FER) problem, with a focus on how to use this powerful method efficiently and fully for this goal. Specifically, they stress that a correct choice of the HOG parameters can make this descriptor very well suited for characterizing the peculiarities of facial expressions. They carried out a large experimental session, organized in three stages and exploiting a consolidated algorithmic pipeline. The aim of the first experimental phase was to prove the aptness of the HOG descriptor for distinguishing the attributes of emotion, which was done through a successful comparison with the most commonly used FER frameworks.

Anagha S. Dhavalikar et al. [14] propose an automatic facial expression recognition system with three main steps: face detection, feature extraction and expression recognition. The face detection stage uses an RGB color model with lighting compensation for selecting the face, and morphological operations for retaining key facial attributes such as the eyes and mouth. The feature extraction process uses the Active Appearance Model (AAM) technique, in which various points on the face are located to form features such as the eyes, eyebrows and mouth. These points are used to generate a data file providing the necessary details about the identified model points, from which the facial expression given as input to the AAM model is detected.

  3. METHODOLOGY

The application developed in our project is called MoodBot. It is primarily a chatbot application that incorporates an emotion detection module. The emotion detection module identifies the emotion expressed by the user and is therefore essential to the application, as it enables entertainment to be provided in the form of music and movies according to the user's mood. The application consists of three main modules: Chatbot, Mood Detection and Music/Movie Recommendation. Fig. 3 illustrates the block diagram of the working of the application presented here.

As shown in Fig. 3, once the application is opened, the user's screen displays the chatbot window, which acts as the base of the application. The chatbot, named MoodBot, provides the user with three options. The first is chatting: the user can chat with the chatbot by typing a message in the textbox and clicking the send button. The second option is to click the My Mood button, upon which the application starts the emotion detection process. The last option is simply to quit the application.

    Fig. 3. Block Diagram of the Proposed system

When the user selects the My Mood option, the application starts the emotion detection process. This involves using the webcam to capture the face and passing it to the face detection process, where the face is located. It is then passed on to the emotion detection process, which analyzes the facial features and classifies the emotion into one of the seven classes.

Once the current mood of the user is detected, the application uses a pop-up window to display the identified mood and provides the user with three choices. The first is music: when this is selected, the application starts playing songs based on the user's mood. The second option is movies: when this is selected, the application opens the Movie for Your Mood website, a specially designed website that displays a list of movies appropriate for the user's current mood. The last option is to quit the application. Every time the user feels a change of mood, all he needs to do is click the My Mood button and the application will do the rest. The user can also continue to do other tasks on the computer, as the music will keep playing in the background.
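The flow above can be summarized with a minimal, hypothetical sketch. Here detect_mood, play_music and open_movie_site are illustrative stubs standing in for the modules described in the following subsections, and the URL is a placeholder; this is not the actual MoodBot code.

import webbrowser

def detect_mood():
    # Stub: the real module uses the webcam, the Haar cascade and the CNN.
    return "Happy"

def play_music(mood):
    # Stub: the real module plays songs matching the mood in the background.
    print("Playing " + mood + " songs in the background...")

def open_movie_site(mood):
    # Stub: opens the "Movie for Your Mood" website (URL is illustrative).
    webbrowser.open("http://localhost:8000/movies?mood=" + mood.lower())

def handle_my_mood():
    mood = detect_mood()
    print("Detected mood: " + mood)
    choice = input("Choose [music/movie/quit]: ").strip().lower()
    if choice == "music":
        play_music(mood)
    elif choice == "movie":
        open_movie_site(mood)

if __name__ == "__main__":
    handle_my_mood()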

    1. Artificial Intelligence

Artificial intelligence (AI) is described as simulating human intellect in machines so that they are capable of thinking like humans and imitating their actions as they are programmed to do. The term AI can be associated with any machine that exhibits attributes of a human mind such as learning and problem solving. AI is an interdisciplinary science comprising various approaches, yet the breakthroughs happening in machine learning and deep learning are creating a paradigm shift in virtually every sector of the tech industry. To date, neural networks and fuzzy logic (FL) are the most commonly and frequently used AI technologies.

    2. Chatbot

A chatbot, also known as a chatterbot, is a popular AI application that can be incorporated into and used through any major messaging application, and it simulates human conversation using voice commands, text chats, or both. Machine Learning and NLP (Natural Language Processing) are the AI technologies most often used for developing chatbot applications. The chatbot module in our proposed system implements a rule-based chatbot, which follows a list of predefined rules to answer the user's queries. Rule-based chatbots are mainly used by basic applications that are trained to answer questions according to the rules.
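As a rough illustration of this idea, the sketch below matches a user's message against a small set of predefined patterns and returns the first matching reply; the rules and replies are made up for the example and are not the actual MoodBot rule set.

import re

RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I),
     "Hello! How are you feeling today?"),
    (re.compile(r"\b(sad|tired|stressed)\b", re.I),
     "Sorry to hear that. Click 'My Mood' and I will find something soothing."),
    (re.compile(r"\b(happy|great|good)\b", re.I),
     "Glad to hear it! Want some upbeat music?"),
    (re.compile(r"\b(bye|quit)\b", re.I),
     "Goodbye! Come back whenever you need a lift."),
]

def reply(message):
    # Return the answer of the first rule whose pattern matches the message,
    # otherwise fall back to a default response.
    for pattern, answer in RULES:
        if pattern.search(message):
            return answer
    return "I'm not sure I understood that. You can chat or click 'My Mood'."

print(reply("hey there"))   # -> Hello! How are you feeling today?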

The only setback of rule-based chatbots is that they may be unable to interpret complicated conversations; they can only execute the tasks they have been programmed to do, unless the developer adds further upgrades. The future of chatbots includes equipping them with emotion AI and advanced sentiment analytics so that they can understand and interpret conversations more like human beings.

    3. Deep Learning

Deep learning belongs to a larger class of machine learning techniques based on artificial neural networks with representation learning. It is a branch of machine learning in artificial intelligence (AI) whose networks are capable of learning, even unsupervised, from data that is unstructured or unlabeled; it is also referred to as deep neural learning or deep neural networks. The main strength of deep learning algorithms lies in their learning process, which brings a high degree of intelligence to systems based on them. In deep neural networks, "deep" refers to the fact that multiple layers of processing transform the input data, be it images, speech or text, into output useful for making decisions. With deep learning, a computer model can learn to perform classification tasks directly from images, text or sound.

Deep learning models can achieve high levels of accuracy, sometimes surpassing human-level performance. A large set of labelled data and neural network architectures with many layers are used to train the models. Deep learning algorithms now achieve recognition accuracy at higher levels than ever before, and recent advances mean that deep learning outperforms humans in some tasks, such as classifying objects in images.

4. Face Detection: Haar Cascade Algorithm

Face detection is among the most basic applications of face recognition technology and plays a vital role in emotion detection; our application first needs to locate the face of the user in order to recognize the emotion being expressed. Face detection is an application of computer vision technology in which algorithms are designed and trained to locate faces or objects accurately within images. The images can be captured in real time or taken from pictures; our application gathers images of the user from each frame of the live webcam feed. Popular uses of this technology include airport security systems and locking and unlocking smartphones using face ID.

The application we have developed makes use of the Haar Cascade algorithm for the face detection process. Haar Cascade is a machine learning object and face detection algorithm whose objective is to detect objects or faces in a video or image. The algorithm builds on the notion of features presented by Paul Viola and Michael Jones in their 2001 paper "Rapid Object Detection using a Boosted Cascade of Simple Features". In this machine learning approach, a cascade function is trained with the help of many positive and negative images; it is then used to detect objects in other images, and the same concept extends to detecting faces in images.

Additionally, the face detection process uses classifiers, i.e., algorithms that decide whether a face is present (1) or not present (0) in an image. These classifiers have been trained extensively on a large number of face images to achieve high reliability. Our project makes use of the OpenCV library for Python, which provides two kinds of classifiers, Haar Cascade and LBP (Local Binary Pattern); of these two, our application uses the Haar Cascade classifier for face detection.
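A brief sketch of this step is shown below. It assumes OpenCV's bundled pre-trained frontal-face cascade file; the window name and detection parameters are illustrative rather than the exact values used in MoodBot.

import cv2

# Load the pre-trained frontal-face Haar cascade that ships with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                      # live webcam feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # cascade works on grayscale
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:                 # mark every detected face
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("MoodBot - face detection", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):      # press 'q' to stop
        break
cap.release()
cv2.destroyAllWindows()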

Initially, the algorithm requires many positive images (containing faces) and negative images (without faces) to train the classifier. Next, facial attributes are chosen from the image so they can be extracted; we first compute the Haar features. A Haar feature considers adjacent rectangular regions at a specific location in a detection window, sums up the pixel intensities in each region, and calculates the difference between these sums. Each feature is a single value obtained by subtracting the sum of pixels under the white rectangle from the sum of pixels under the black rectangle. Fig. 4 shows how the Haar features are selected and evaluated.

      Fig. 4. Haar Features

The AdaBoost method is used to select the best features out of more than 150,000 possible features, and it also trains the classifiers that use them. To do this, all the features are applied to all the training images, and the best threshold is found for each feature to classify the faces as positive or negative. The features with the lowest error rate are selected, as they best separate the face and non-face images. The final classifier is a weighted sum of these weak classifiers; they are called weak because on their own they cannot classify the image, but combined they form a strong classifier. A cascade of classifiers is used to check each possible face region, since a major part of the image is non-face region. The features are grouped into different levels of classifiers, which are applied one by one to a window. If a window fails the first level, it is discarded and the remaining features are not applied to it; if it passes, the second level of features is applied, and the process continues. When a window has passed all levels, it is concluded to be a face region, and in this way the classifier learns to distinguish between face and non-face.
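As a toy illustration of the feature computation described above, the snippet below evaluates a single two-rectangle Haar feature on a made-up 4x4 grayscale patch; real detectors evaluate many such features efficiently using integral images.

import numpy as np

# Toy 4x4 grayscale patch: dark left half, bright right half (a vertical edge).
patch = np.array([[10, 10, 200, 200],
                  [10, 10, 200, 200],
                  [10, 10, 200, 200],
                  [10, 10, 200, 200]])

white = patch[:, :2]   # "white" rectangle of the two-rectangle feature
black = patch[:, 2:]   # "black" rectangle of the two-rectangle feature

# Feature value: sum of pixels under the black rectangle minus the sum of
# pixels under the white rectangle, as described above.
feature_value = int(black.sum()) - int(white.sum())
print(feature_value)   # large value here, signalling a strong vertical edge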

5. Emotion Detection: CNN Algorithm

A Convolutional Neural Network (ConvNet/CNN) is a deep learning algorithm that takes an image as input and assigns importance (learnable weights and biases) to various aspects of the image, on the basis of which it can differentiate one image from another. This deep learning architecture is commonly used and sought after. In comparison to other classification methods, a ConvNet requires much less pre-processing: in earlier methods filters are hand-engineered, but ConvNets are capable of learning these filters and characteristics given enough training.

CNNs are computationally efficient. They use special convolution and pooling operations and perform parameter sharing, which allows CNN models to run on almost any device, making them universally appealing. When using CNNs, there is no need to hand-pick the features required for classifying images, eliminating the need for manual feature extraction: a CNN extracts features directly from the images. The relevant features are not pre-trained; they are learned while the network trains on a collection of images.

The Convolutional Neural Network is known for its performance on image data and is widely used and recommended for identifying what an image contains. The basic CNN structure is: Convolution -> Pooling -> Convolution -> Pooling -> Fully Connected Layer -> Output. Fig. 5 illustrates how a typical convolutional neural network works. Our application uses such a structure for the emotion detection process: once the Haar Cascade method has detected the presence of a face in the input image, the face region is handed over to the CNN to identify the mood expressed by the user.

In a CNN, convolution is used to create feature maps from the original data, and pooling is down-sampling, commonly in the form of max-pooling, where a region is selected and its maximum value becomes the new value for that entire region. Fully connected layers are typical neural network layers, where all nodes are fully connected.
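A minimal Keras sketch of this Convolution -> Pooling -> Convolution -> Pooling -> Fully Connected -> Output structure, for 48x48 grayscale faces and the seven emotion classes, is shown below; the filter counts and kernel sizes are illustrative and not necessarily the exact configuration used in MoodBot.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(48, 48, 1)),  # convolution
    MaxPooling2D((2, 2)),                                            # pooling
    Conv2D(64, (3, 3), activation="relu"),                           # convolution
    MaxPooling2D((2, 2)),                                            # pooling
    Flatten(),
    Dense(128, activation="relu"),       # fully connected layer
    Dense(7, activation="softmax"),      # one output per emotion class
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()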

The mood or emotional state of the user can be interpreted from his facial expressions, which are simply the positioning of the facial muscles. The application developed here is trained to interpret and distinguish among the seven fundamental emotional states: happy, sad, fear, neutral, angry, surprised, and disgust. The convolutional neural network is trained on the FER2013 dataset, and different hyper-parameters are used to refine the model.

FER2013 is used to identify and classify the emotion. It is an open-source dataset in which each face is assigned, according to the emotion depicted in the facial expression, to one of seven classes (0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral). It has two main columns, emotion and pixels; the emotion column contains a key (0-6) representing the emotion expressed in that image.

All images are made up of pixels, which are represented by numbers. Coloured images have three colour channels, red, green, and blue, and each channel is represented by a grid (a 2-dimensional array). Each cell of the grid stores a number between 0 and 255 denoting the intensity of that cell; when the three channels are aligned, the image appears as we normally perceive it.

The dataset we have used consists of images of dimension 48x48. There is only one channel, as all the images in the dataset are grayscale. The image data is extracted and arranged into a 48x48 array. To normalize the data, it is converted to unsigned integers and divided by 255, the maximum possible value of a single cell, which ensures that all the values lie between 0 and 1. Next, the Usage column is checked and the data is stored in separate lists, one used to train the network and the other to test it. To create a sequential convolutional network, we have used Keras, which runs on top of TensorFlow. The emotion detection process uses a four-level CNN framework.
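A sketch of these preprocessing steps, assuming the standard fer2013.csv layout with emotion, pixels and Usage columns, might look as follows; details such as the train/test split granularity are illustrative.

import numpy as np
import pandas as pd

data = pd.read_csv("fer2013.csv")   # assumed standard FER2013 CSV file

def row_to_image(pixel_string):
    # Turn the space-separated pixel string into a normalized 48x48 array.
    pixels = np.array(pixel_string.split(), dtype="uint8").astype("float32")
    return pixels.reshape(48, 48) / 255.0

train_rows = data[data["Usage"] == "Training"]
test_rows = data[data["Usage"] != "Training"]    # PublicTest / PrivateTest rows

# Add a trailing channel axis so each image has shape (48, 48, 1).
x_train = np.stack([row_to_image(p) for p in train_rows["pixels"]])[..., None]
y_train = train_rows["emotion"].to_numpy()       # emotion keys 0-6
x_test = np.stack([row_to_image(p) for p in test_rows["pixels"]])[..., None]
y_test = test_rows["emotion"].to_numpy()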

First, the primary expressional vector (EV) is extracted and generated by finding the various relevant facial points of importance using the CNN; the changes happening in the expression are directly connected to the EV. Each convolutional layer receives the image data as input, transforms it, and passes the result on to the next level. In this way the CNN is used to recognize the mood of the user.

Fig. 5. CNN Structure

    6. Music/Movie Recommendation

Once the emotion has been detected and classified into one of the seven categories, a pop-up box is displayed on the user's screen with the options music and movie. The user can choose between the two, and accordingly music is played or a list of movies is displayed. If the user chooses music, a song is played based on the emotion detected; the songs play one after the other until the user asks the chatbot to detect the mood again.

When the user selects the movie option, he is redirected to a movie website specially designed for this application that classifies movies based on mood. It is a simple movie website that displays, at the top alongside the website name, the user's mood for which the list of movies is suggested. Only the movies appropriate for the mood are listed on the site, and they are further divided into three categories: Latest Movies, Top Rated and Recommended. Each movie listed on the website shows the movie poster and some basic details such as the genre and rating, along with an IMDb link in case the user is interested and wants to know more about the movie.
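An illustrative mood-to-content mapping of this kind is sketched below; the folder names and the website URL are placeholders rather than the actual ones used by the application.

import os
import random
import webbrowser

MUSIC_FOLDERS = {            # one folder of songs per detected mood
    "Happy": "songs/happy", "Sad": "songs/sad", "Angry": "songs/angry",
    "Fear": "songs/calm", "Surprise": "songs/upbeat",
    "Disgust": "songs/soothing", "Neutral": "songs/neutral",
}
MOVIE_SITE = "http://localhost:8000/movies"   # "Movie for Your Mood" site (placeholder)

def recommend(mood, choice):
    # Return a shuffled playlist for the mood, or open the movie site for it.
    if choice == "music":
        folder = MUSIC_FOLDERS[mood]
        playlist = [os.path.join(folder, f) for f in os.listdir(folder)]
        random.shuffle(playlist)
        return playlist                      # the player keeps playing these
    if choice == "movie":
        webbrowser.open(MOVIE_SITE + "?mood=" + mood.lower())
        return None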

  4. CONCLUSION

Chatbots are one of the most important advancements of AI technology. Our project successfully combines this technology with the human need for entertainment in the form of music and movies. In this age of technology, such an application serves the purpose of helping people relax and relieve stress. The MoodBot application developed in our project is a simple chatbot that allows users to choose music and movies according to their mood. The application is implemented as a desktop application, so it is available to the user whenever required. When the user chooses the music option, songs appropriate to his mood are played; when the choice is movies, the application opens a specially designed website that recommends movies matching his current mood.

A future enhancement for the application could be automatic detection of the user's mood initiated when the user opens the application, thereby eliminating the need for the My Mood button. The emotion detection module could also be executed at regular intervals to check for changes in the user's emotion and, in case there is a change, ask the user to choose between music and movies.

Other enhancements could include a fully functioning website, similar to IMDb, dedicated to suggesting movies depending on the mood and previous choices of the user. Like and dislike options could also be added to the songs being played, so that the user's liked songs are played often and the disliked songs are not played.

REFERENCES

  1. G. M. Mate, Nikhil Wadekar, Rohit Chavan, Tejas Rajput, Sameer Pawar, Mood Detection with Chatbot using AI-Desktop Partner, April 2017.

  2. Ai Thanh Ho, Ilusca L. L. Menezes, Yousra Tagmouti, E-MRS: Emotion-based Movie Recommender System.

  3. Manish Dixit, Sanjay Silakari, A Hybrid Facial Feature Optimization Approach using Bezier Curve, IEEE, 2015.

4. Fatima Zahra Salmam, Abdellah Madani, Mohamed Kissi, Facial Expression Recognition using Decision Trees, 13th International Conference on Computer Graphics, Imaging and Visualization, 2016.

5. Jae Sik Lee, Jin Chun Lee, Music for My Mood: A Music Recommendation System Based on Context Reasoning, 2006.

  6. Renuka R. Londhe, Dr. Vrushshen P. Pawar, Analysis of Facial Expression and Recognition Based On Statistical Approach, May 2012.

7. Dolly Reney, Dr. Neeta Tripaathi, An Efficient Method to Face and Emotion Detection, Fifth International Conference on Communication Systems and Network Technologies, IEEE, 2015.

8. Shan C, Gong S, McOwan PW. Facial expression recognition based on local binary patterns: a comprehensive study. Image Vis Comput. 27(6):803-816 (2009).

9. Enrique Correa, Arnoud Jonker, Michaël Ozo, Rob Stolk, Emotion Recognition using Deep Convolutional Neural Networks, June 30, 2016.

10. Y. Lv, Z. Feng, and C. Xu, Facial expression recognition via deep learning. In Smart Computing (SMARTCOMP), 2014 International Conference on, pages 303-308. IEEE, 2014.

11. Xie S, Hu H, Facial expression recognition using hierarchical features with deep comprehensive multipatches aggregation convolutional neural networks. IEEE Trans Multimedia, 2018.

12. Liliana Lo Presti, Marco La Cascia, Using Hankel Matrices for Dynamics-based Facial Emotion Recognition and Pain Detection, IEEE, 2015.

13. Carcagnì P, Del Coco M, Leo M, Distante C. Facial expression recognition and histograms of oriented gradients: a comprehensive study. SpringerPlus. 4:645 (2015).

14. Anagha S. Dhavalikar and Dr. R. K. Kulkarni, Face Detection and Facial Expression Recognition System, 2014 International Conference on Electronics and Communication Systems (ICECS 2014).

15. S. Jagannatha, M. Niranjanamurthy, and P. Dayananda, Algorithm Approach: Modelling and Performance Analysis of Software System, Journal of Computational and Theoretical Nanoscience (American Scientific Publishers), December 2018, Volume 15, Issue 15, pp. 3389-3397.
