Emotion Recognition Systems and Emotion Correlation Mining

DOI : 10.17577/IJERTCONV9IS07006


Melvin K Mathew1, Surya R2, Roshan J O3

1,2,3 Students, Dept. of Computer Science & Engineering, Mangalam College of Engineering, India

Kavitha Nair R4

4 Assistant Professor, Dept. of Computer Science & Engineering, Mangalam College of Engineering, India

Abstract: Emotion is complex, individualized, subjective, and sensitive to context. Emotions are correlated rather than independent, which contributes to the complexity of individual and public emotions. Most previous works focus on recognizing emotion; here we examine why emotions go unrecognized or are wrongly recognized, and attempt to fill the gap between emotion recognition and emotion correlation mining. Two deep neural-network models are used to mine emotion correlation from emotion recognition over text. First, the neural networks are trained on the dataset; the DNNs are then used to extract features from the data, and these features are used to mine the correlation between emotions. Correlations are derived from the confusion and evolution of emotions. The result is a set of relations between emotions that gives insights such as which emotion is most likely to be confused with a particular emotion. Emotion correlation findings could provide insights for applications involving affective interaction, such as network public sentiment, social media communication, and human-computer interaction.

Keywords: Convolutional neural networks, deep neural networks, emotion correlation mining, emotion recognition.

  1. INTRODUCTION

    Emotion analysis has been attracting researchers' attention. Most past works in the artificial intelligence field center on recognizing emotion rather than on mining the reason why emotions go unrecognized or are wrongly recognized. The correlation among emotions contributes to the failure of emotion analysis. Emotion is complex, individualized, subjective, and sensitive to context; it guides decisions, prepares the body for action, and shapes ongoing behavior. Individual emotion is complex in at least the following three aspects.

    a. A stable personal value system is formed through long-term experience, so emotional responses differ between individuals even in the same context. b. Misunderstanding happens when people communicate. The comprehension of a context varies as individuals' prior backgrounds differ, and the assessment of an event becomes more reliable as more information about it is obtained; misinterpretation of the initial emotion occurs when there is a prior-knowledge gap between the sender and the receiver of a message. c. Individual emotional turbulence exists. It is influenced by momentary external negative or positive mood, and emotion changes with instant conditions even for the same event. For most people it is an everyday phenomenon that external conditions influence inner emotions; for instance, even a cheerful tweet can be upsetting when one's work performance has just been judged negatively.

    On one hand, the emotion of an individual is complex because of individualized long-term social experience, interpersonal misunderstanding, and momentary external mood. Emotions are correlated rather than independent, which adds to the complexity of individual and public emotion. Emotion correlation mining can help analyze individual and public emotions in at least the following applications: human-computer interaction, social media communication, and public sentiment analysis. The correlation among emotions also contributes to the failure of emotion recognition. Emotion correlation is described in terms of emotion confusion and emotion evolution. The confusion of emotion refers to the "distance" between emotions, while the evolution of emotion refers to how emotions change as an event propagates; misjudgment of emotion is one of the significant factors in that evolution. Analysis of the correlation among emotions caused by this complexity has received little attention in the computer science literature.

    Here, we attempt to fill the gap between emotion recognition and emotion correlation mining through natural-language text. Two deep neural-network models are used to mine emotion correlation from emotion recognition over text. First, the neural networks are trained on the dataset; the DNNs are then used to extract features from the data, and these features are used to mine the correlation between emotions. Correlations are derived from the confusion and evolution of emotions. The result is a set of relations between emotions that gives insights such as which emotion is most likely to be confused with a particular emotion. The dataset referenced in the original work is in Chinese; an alternative English dataset can be used instead.

  2. RELATED WORK

    With the growth of social network platforms, forums, and question-answering sites, an enormous number of short messages, often containing only a few words per document, are posted by online users. In these short messages, emotions are frequently embedded to convey feelings, express friendliness, and promote influence. It is therefore important to identify emotions in short messages, but the corresponding task suffers from the sparsity of the feature space embedded in the documents. [1] put forward two models, WLTM and XETM, to address this feature sparsity when recognizing emotions in short texts. The authors assessed the impact of the number of words in a term group and compared the performance with state-of-the-art baselines. To reduce the time cost of estimating parameters, they also proposed accelerated methods, fWLTM and fXETM, to generate topics and recognize emotions efficiently. The experimental results showed that the accelerated models were far less time-consuming without losing much quality, particularly the proposed fWLTM.

    Social media are used as primary conversation channels by large numbers of people each day. The content people produce in daily social-media-based micro-interactions, and the moods expressed in it, may affect the emotional states of others. In [2], the dynamics of emotional contagion were studied using a random sample of Twitter users whose activity was observed during a week of September 2014. Rather than manipulating content, the authors devised a null model that discounts some confounding factors and measured the emotional valence of the content users were exposed to before posting their own tweets. They identified two different classes of individuals: those highly susceptible and those scarcely susceptible to emotional contagion.

    [3] deals with sentence-level sentiment classification. The objective of that research was to build simple sequence models while fully exploiting linguistic resources to benefit sentiment classification.

    Multi-Task Multi-Label (MTML) learning [4] classifies both the sentiments and the topics of tweets simultaneously and incorporates each task's results from prior steps to promote and reinforce the current results iteratively. The learned class labels of one task are incorporated as predictive features of the other task. For each task, the model is trained with maximum entropy using multiple labels, to gather more information and handle class uncertainty. Furthermore, the MTML model produces probabilistic rather than binary outputs, so multi-label prediction is allowed and labels can be ranked as needed.

    Paper [5] presents a methodology for extracting small-investor sentiment from stock message boards. Five distinct classifier algorithms, coupled by a voting scheme, are found to perform well against human and statistical benchmarks. Time-series and cross-sectional aggregation of message information improves the quality of the resulting sentiment index. Empirical applications demonstrate a relationship with stock returns using visual inspection, phase-lag analysis, pattern recognition, and statistical methods.

    [6] proposed a novel approach to multimodal sentiment analysis using deep neural networks that combine visual analysis and natural language processing. Their objective differs from the standard sentiment-analysis goal of predicting whether a sentence expresses positive or negative sentiment; instead, they aim to infer the latent emotional state of the user. The work introduces a novel multimodal sentiment-analysis method based on deep learning and provides new tools for the joint analysis of images and text on social media.

    Text emotion distribution learning (EDL) [7] builds models that can predict the intensity values of a sentence across a set of emotion classes. Existing methods based on supervised learning require a large amount of well-labelled training data, which is hard to obtain because perception of fine-grained emotion intensity is inconsistent. Based on meta-learning, the authors proposed an effective approach to learning text emotion distributions from a small sample. To make the most of a small labelled dataset, they use the K-nearest semantically similar neighbors (KNNs) of each training sample to cluster the training data and train a meta-learner that can adapt to new test data with only a few samples from the clusters. The meta-learner is then fitted on the KNNs of each test sample.

  3. METHODOLOGY

    1. Proposed System

      The proposed model uses two deep neural-network models to mine emotion correlation from emotion recognition over text. Initially, the neural networks are trained on the dataset. The DNNs are then used to extract features from the data, and the extracted features are used to mine the correlation between emotions. Correlations are derived from the confusion and evolution of emotions. The result is a set of relations between emotions, giving insights such as which emotion is most likely to be confused with a particular emotion.

    2. Modules

      The proposed system contains the following modules:

      1. Data preprocessing

      2. Architecture creation

      3. Training

      4. Emotion correlation calculation

      5. Prediction

        1. Data preprocessing

          Data preprocessing involves loading the text and labeling it, followed by preprocessing of the raw data.

        2. Architecture creation

          Here, two neural network models, CNN-LSTM (M1) and CNN-LSTM-STACK (M2), are used for emotion recognition. M1 is constructed from Part I and Part II; M2 is constructed by adding an additional Part III to M1. Part I handles feature processing and includes the embedding layer and the convolution layer. Part II handles emotion calculation and includes the LSTM and dense layers. Part III implements the attention mechanism and includes the LSTM stack.

        3. Training

          In training, the preprocessed data and the created architecture are loaded, and training is performed. Once training is complete, the learned parameters encode the classification logic, and the two trained models are saved.

        4. Emotion correlation calculation

          This step begins with data preprocessing. Each model is tested with the same amount of data and both sets of results are saved. A correlation matrix is then created from the results.

        5. Prediction

        In prediction, the text is loaded, preprocessed, and tokenized. The required model is then chosen and loaded, and prediction is performed using the loaded model.

    3. System Architecture

    1. Data preprocessing

      Text preprocessing is traditionally an important step for natural language processing (NLP) tasks. It transforms text into a more digestible form so that machine learning can be performed more effectively. Data preprocessing involves loading the text and labeling it; unwanted characters are removed and a vocabulary is created, as in the sketch below.
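      A minimal preprocessing sketch using the Keras text utilities is shown below. The sample sentences, labels, vocabulary size, and sequence length are illustrative assumptions rather than values from the paper.

```python
# Minimal preprocessing sketch (assumed values; not from the paper).
import re
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

raw_texts = ["I am so happy about the result!", "This delay makes me angry."]
labels = [1, 0]  # hypothetical integer emotion labels

def clean(text):
    # Remove unwanted characters, keeping only letters, digits, and spaces.
    return re.sub(r"[^a-z0-9\s]", " ", text.lower()).strip()

texts = [clean(t) for t in raw_texts]
tokenizer = Tokenizer(num_words=20000, oov_token="<unk>")
tokenizer.fit_on_texts(texts)                    # create the vocabulary
sequences = tokenizer.texts_to_sequences(texts)  # words -> integer ids
X = pad_sequences(sequences, maxlen=100)         # fixed-length input matrix
```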

    2. Architecture creation

      Two deep neural-network architectures, CNN-LSTM (M1) and CNN-LSTM-STACK (M2), are created. The calculation process can be divided into three parts: M1 is constructed from Part I and Part II, while M2 is constructed by adding an additional Part III to M1. A sketch of both models is given after Fig. 1.

      • Part I - Feature Processing

        Part I handles feature processing and includes the embedding layer and the convolution layer. It transforms the original features into dense vector representations. Embedding involves language modeling and feature-learning techniques in which words from the vocabulary are mapped to vectors of real numbers. The convolution layer applies a convolution operation to the input and passes the result to the next layer; its final output is a vector.

      • Part II - Emotion calculation

        Part II handles emotion calculation after the feature processing of Part I. It includes an LSTM (long short-term memory) layer and a dense layer. An LSTM can process not only single data points but also entire sequences of data. A dense layer feeds all outputs from the previous layer to all of its neurons, each neuron providing one output to the next layer.

      • Part III - Attention mechanism

        Part III implements the attention mechanism and includes the LSTM stack. As the network grows deeper, the backward fine-tuning process in CNN-LSTM becomes weak and the vanishing-gradient problem occurs. This is mitigated by attaching Part III to CNN-LSTM, so that more attention can be paid to the original information.

        Fig. 1. System Architecture
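        The following Keras sketch shows one plausible realization of M1 and M2, assuming Part I maps to an embedding plus a 1-D convolution, Part II to an LSTM with a dense softmax output, and Part III to an additional stacked LSTM. Layer sizes and the number of emotion classes are illustrative assumptions, not the values used in the paper.

```python
# Illustrative sketch of the two architectures (assumed sizes).
from tensorflow.keras import layers, models

VOCAB_SIZE, NUM_EMOTIONS = 20000, 7  # assumed vocabulary and class counts

def build_m1():
    """CNN-LSTM (M1): Part I feature processing + Part II emotion calculation."""
    return models.Sequential([
        layers.Embedding(VOCAB_SIZE, 128),                  # Part I: embedding layer
        layers.Conv1D(64, 5, activation="relu"),            # Part I: convolution layer
        layers.MaxPooling1D(pool_size=2),
        layers.LSTM(64),                                    # Part II: LSTM
        layers.Dense(NUM_EMOTIONS, activation="softmax"),   # Part II: emotion scores
    ])

def build_m2():
    """CNN-LSTM-STACK (M2): M1 plus an additional Part III LSTM stack."""
    return models.Sequential([
        layers.Embedding(VOCAB_SIZE, 128),
        layers.Conv1D(64, 5, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.LSTM(64, return_sequences=True),             # Part II
        layers.LSTM(64),                                    # Part III: stacked LSTM
        layers.Dense(NUM_EMOTIONS, activation="softmax"),
    ])
```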

    3. Training

      The training step loads the preprocessed data and the architecture that was created, and then performs training. Once training is complete, the learned parameters encode the classification logic, and the two trained models are saved. A compressed sketch follows.
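      The sketch below assumes training arrays X_train and y_train prepared as in the preprocessing step; the optimizer, epoch count, and file name are assumptions made for illustration.

```python
# Training sketch (assumes X_train, y_train and build_m2 from earlier sketches).
model = build_m2()                      # or build_m1()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",  # integer emotion labels
              metrics=["accuracy"])
model.fit(X_train, y_train,
          validation_split=0.1,         # hold out part of the data for validation
          epochs=10, batch_size=64)
model.save("cnn_lstm_stack_m2.h5")      # persist the trained model for later steps
```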

    4. Emotion correlation calculation

      This step begins with data preprocessing. Each model is tested with the same amount of data and both sets of results are saved. A correlation matrix is then created from the results, as sketched below.
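      One way to derive the emotion-confusion matrix from test predictions is sketched here: the confusion matrix is row-normalised so that each row gives the rate at which a true emotion is predicted as every emotion, and the largest off-diagonal entry in a row identifies the emotion it is most easily confused with. Variable names such as X_test and y_test are assumptions.

```python
# Emotion-confusion sketch (assumes a trained model and held-out X_test, y_test).
import numpy as np
from sklearn.metrics import confusion_matrix

y_pred = model.predict(X_test).argmax(axis=1)   # predicted emotion per test text
cm = confusion_matrix(y_test, y_pred)           # rows: true emotion, cols: predicted
rates = cm / cm.sum(axis=1, keepdims=True)      # row-normalised confusion rates

for i, row in enumerate(rates):
    off_diagonal = row.copy()
    off_diagonal[i] = 0.0                       # ignore correct predictions
    j = int(off_diagonal.argmax())              # most frequently confused emotion
    print(f"emotion {i} is most often confused with emotion {j} ({row[j]:.2f})")
```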

    5. Prediction

    In prediction, the text is loaded, preprocessed, and tokenized. The required model is then chosen and loaded, and prediction is performed using the loaded model, as in the sketch below.
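    The prediction sketch reuses the tokenizer fitted during preprocessing and a model file saved during training; the file name and emotion-label list are assumed for illustration.

```python
# Prediction sketch (assumes the tokenizer from the preprocessing step).
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.sequence import pad_sequences

EMOTIONS = ["anger", "joy", "love", "sadness", "fear", "surprise", "neutral"]  # assumed order

model = load_model("cnn_lstm_stack_m2.h5")                  # saved during training
seq = tokenizer.texts_to_sequences(["what a wonderful surprise this is"])
x = pad_sequences(seq, maxlen=100)                          # same length as in training
probs = model.predict(x)[0]
print("predicted emotion:", EMOTIONS[int(probs.argmax())])
```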

    The performance of the two proposed models is compared based on accuracy. The results show that, of the two models M1 and M2, M2 achieves better performance than M1.


  4. RESULTS & PERFORMANCE ANALYSIS

    Here, we use two models, CNN-LSTM (M1) and CNN-LSTM-STACK (M2), for emotion recognition and for determining emotion correlation. The output of the models is an emotion such as love, joy, or anger, together with the relations among emotions. The experiments show that the proposed CNN-LSTM models achieve comparatively higher accuracy than emotion-recognition models such as fWLTM & fXETM and MTML. The results indicate that the proposed system performs better than the other existing systems, as summarized in Fig. 2 and Table 1; Fig. 3 and Table 2 compare M1 and M2.

    [Fig. 2 chart: accuracy of fWLTM & fXETM, MTML, and CNN-LSTM for each test set count; values as in Table 1.]

    Fig. 2. Performance Analysis Based on Accuracy

    Test set count | fWLTM & fXETM | MTML  | CNN-LSTM
    10 – 20        | 0.346         | 0.456 | 0.688
    20 – 30        | 0.418         | 0.678 | 0.792
    30 – 40        | 0.382         | 0.643 | 0.881
    40 – 50        | 0.481         | 0.712 | 0.873

    Table 1. Accuracy of different emotion recognizing models

    [Fig. 3 chart: accuracy of M1 and M2 for each test set count; values as in Table 2.]

    Fig. 3. Performance Analysis of M1 and M2

    Test set count | M1    | M2
    10 – 20        | 0.821 | 0.851
    20 – 30        | 0.761 | 0.822
    30 – 40        | 0.815 | 0.844
    40 – 50        | 0.778 | 0.821

    Table 2. Accuracy Comparison of M1 and M2

  5. FUTURE SCOPE


    Emotion correlation findings could provide insights for applications involving affective interaction, such as network public sentiment, social media communication, and human-computer interaction. This can lead to more affective interactive services and make interactions more lively. Further development could enable machines to perform human-like interactions using the acquired insights. A further possibility worth studying is predicting the next interaction in a series of interactions; by considering and comparing such predictions while reaching the final recognition result, the system is more likely to hit the right emotion in less time, improving efficiency and accuracy.


  6. CONCLUSION

Most past works in the field focus on recognizing emotion rather than on mining the reason why emotions go unrecognized or are wrongly recognized. The correlation among emotions contributes to the failure of emotion recognition. With this framework we fill the gap between emotion recognition and emotion correlation mining. Emotion correlation findings could provide insights for applications such as network public sentiment, social media communication, and human-computer interaction.

ACKNOWLEDGMENT

The authors wish to thank Principal Manoj George and Dr. Sunitha E. V., Head of the Department of Computer Science, for their guidance, valuable support, and helpful comments during the proofreading.

REFERENCES

  1. J. Pang et al., "Fast supervised topic models for short text emotion detection," IEEE Trans. Cybern., early access, Sep. 30, 2019, doi: 10.1109/TCYB.2019.2940520.

  2. E. Ferrara and Z. Yang, "Measuring emotional contagion in social media," PLoS ONE, vol. 10, no. 11, 2015, Art. no. e0142390.

  3. Q. Qian, M. Huang, J. Lei, and X. Zhu, "Linguistically regularized LSTMs for sentiment classification," 2016. [Online]. Available: arXiv:1611.03949.

  4. S. Huang, W. Peng, J. Li, and D. Lee, "Sentiment and topic analysis on social media: A multi-task multi-label classification approach," in Proc. ACM Web Sci. Conf., 2013, pp. 172-181.

  5. S. R. Das and M. Y. Chen, "Yahoo! for Amazon: Sentiment extraction from small talk on the Web," Manag. Sci., vol. 53, no. 9, pp. 1375-1388, 2007.

  6. A. Hu and S. R. Flaxman, "Multimodal sentiment analysis to explore the structure of emotions," in Proc. 24th ACM SIGKDD Int. Conf. Knowl. Disc. Data Min., 2018, pp. 350-358.

  7. Z. Zhao and X. Ma, "Text emotion distribution learning from small sample: A meta-learning approach," in Proc. Conf. Empirical Methods Nat. Lang. Process. 9th Int. Joint Conf. Nat. Lang. Process. (EMNLP-IJCNLP), 2019, pp. 3948-3958.
