DOI : https://doi.org/10.5281/zenodo.19973732
- Open Access
- Authors : Ajay Kumar Mandal, Vansh Tomar
- Paper ID : IJERTV15IS042982
- Volume & Issue : Volume 15, Issue 04, April 2026
- Published (First Online): 02-05-2026
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License:
This work is licensed under a Creative Commons Attribution 4.0 International License
A Comparative Analysis of Sequence Models for Emotion Detection in Short and Informal Text
Ajay Kumar Mandal, Vansh Tomar
Department of Computer Science and Engineering, Galgotias University
Abstract: As digital communication grows, people increasingly express what they think and feel through brief and informal writing on platforms such as social media, email, chat, and blogs. While humans can intuitively read emotions, correctly recognizing subtle feelings between the lines remains a pronounced problem for machines, especially in noisy and context-dependent communication. Most available sentiment analysis methods focus primarily on polarity classification, dividing text into positive, negative, or neutral, and are often unable to reflect fine-grained emotional states such as anger, happiness, fear, sorrow, or surprise. These restrictions limit the usefulness of emotion-aware applications in domains such as customer care, mental health monitoring, and personalized recommendation systems, where subtle feelings must be understood. To address this gap, this paper presents a comparative analysis of multiple sequence models for text-based emotion detection. In particular, a baseline model, a conventional Long Short-Term Memory (LSTM) model, and an attention-enhanced LSTM model are tested under identical experimental conditions. The analysis highlights overall classification performance as well as model behavior in short-text and emotionally shifting cases. Experimental findings show that sequence-based models outperform the baseline strategy, with the attention-enhanced LSTM placing better emphasis on emotionally important words and capturing more contextual knowledge. The findings of this work support the development of efficient, lightweight, and context-aware affective computing systems for real-world digital communication environments.
-
INTRODUCTION
Human language is of significant value in expressing ideas, intentions, and emotional states. With the rapid growth of online communication systems such as social media, instant messaging, email, and online forums, people increasingly express their feelings through text instead of face-to-face interaction. Although human beings have an instinct for understanding feelings based on experience and context, enabling machines to discern textual emotions with a high level of accuracy remains a challenging task. Emotion identification from text is therefore a major challenge for the research field of natural language processing (NLP).
In recent times, the amount of textual user-generated data has grown immensely because of the proliferation of smartphones and internet-based communication. Most of this information takes the form of short, informal, and unstructured content such as social media posts, chat messages, and online comments. This text is full of valuable emotional data that can be used in applications such as customer feedback analysis, social media monitoring, conversational agents, opinion mining, and digital mental health support systems. However, traditional sentiment analysis methods have limited scope, since they primarily emphasize the determination of polarity, i.e., positive, negative, or neutral sentiment. Such approaches are unable to capture fine-grained emotional states such as happiness, sadness, anger, fear, and surprise, which are frequently necessary to better represent emotional behavior and user intent.
Text-based emotion detection aims to identify the particular feelings expressed in a written statement. This task is especially hard with short and informal text, in which emotional cues are delicate, indistinct, and context-dependent. The same sentence can convey various feelings depending on word order, surrounding text, or situational context. In addition, written information lacks the non-verbal channels of facial expressions, body language, and voice, which people usually rely on to interpret emotions. These factors contribute greatly to the difficulty of building automatic emotion recognition systems.
The combination of Natural Language Processing techniques and sequence-based deep learning models offers an effective way to address such challenges. NLP enables the cleaning and preparation of raw textual data, while deep learning algorithms learn meaningful emotional patterns from word sequences. Recurrent architectures such as Long Short-Term Memory (LSTM) networks have been widely adopted because of their capability to learn sequential and contextual information from text. Nevertheless, current studies tend to propose or examine a single model and mainly report overall accuracy, with little analysis of how different sequence models behave on brief text and under emotion shifts.
Moreover, many deep learning-based emotion detection systems rely on extremely complicated architectures that require huge amounts of data and heavy computational resources, which are not necessarily suitable for practical or resource-constrained environments. This brings out the necessity of a balanced strategy that not only delivers good performance but also tests model behavior under varying textual circumstances while relying on small and understandable architectures.
Based on these observations, this paper presents a comparative perspective on sequence-based models for emotion detection in short and informal text. The study evaluates a baseline model, a standard LSTM model, and an attention-enhanced LSTM model under the same experimental settings. By comparing these models and analyzing their performance under short-text and emotion-drift conditions, this work tries to give insight into the strengths and drawbacks of sequence models for emotion recognition. The aim is to contribute towards developing concrete, effective, and intelligent emotion recognition systems that are applicable in real-world digital communication environments.
-
Related Work
Text-based emotion detection has attracted significant attention with the fast development of digital communication on social media platforms, online forums, and messaging applications. Early studies in this field were mainly based on lexicon-based methods, in which emotions were identified by matching words against predefined emotion dictionaries. Although these methods were simple and interpretable, they found it hard to cope with contextual ambiguity, sarcasm, and colloquialism. As a result, they were not effective when applied to real-world text, which is usually brief, dynamic, and varied.
To overcome these shortcomings, researchers then investigated supervised machine learning methods for emotion classification. These techniques learned patterns directly from labeled datasets and were shown to perform better than dictionary-based methods. However, most traditional machine learning models viewed text as a collection of single words and were unable to capture the sequential nature of language. This disadvantage limited the ability of such models to comprehend how word order and contextual relations within sentences affect emotion.
In the past few years, deep learning techniques have become the preferred method for emotion recognition tasks. Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks have become very popular because they process text as sequences rather than isolated words. By encoding contextual knowledge across longer spans of text, LSTM-based models have been found to be better at grasping emotional attachment and minor fluctuations in language. Various studies demonstrate good outcomes with LSTM and its derivatives for emotion classification across different datasets.
In addition to recurrent architectures, transformer-based models have also been introduced for emotion detection. These models use attention to capture contextual relations in both directions and often achieve high accuracy. However, their high computational requirements, massive data dependencies, and complicated structures limit practical deployment. This makes them less well adapted to lightweight applications and resource-constrained environments where interpretability and efficiency are required.
Although deep learning-based methods have made advances, current research points to a number of unanswered challenges. Many existing works assess a single model architecture and mainly report overall accuracy, with little comparative analysis between different sequence models. In addition, the majority of studies measure performance on general or well-structured text, while informal, short, user-generated content has not yet been explored sufficiently. Issues related to dataset bias, emotional overlap, and lack of cross-domain generalization also suggest that more thorough evaluation strategies are necessary.
Based on these observations, there is scope for systematic comparative research on sequence-based models under realistic conditions. In particular, a comparative analysis of baseline models, standard LSTM models, and attention-enhanced LSTM models can give greater insight into how sequence modeling decisions influence emotion detection performance in brief and informal text. This is the motivation for the comparative approach taken in the present study.
TABLE I: Comparison of Recent Emotion Detection Approaches from Text

| Author (Year) | Dataset Used | Methodology | Emotion Classes |
|---|---|---|---|
| Acheampong et al. (2020) | Twitter Dataset | LSTM-based Model | 5 |
| Tripathi et al. (2021) | Text Corpus | Recurrent Neural Network (RNN) | 6 |
| Zhang et al. (2022) | Social Media Text | LSTM | 6 |
| Basiri et al. (2023) | Multi-domain Text | BiLSTM-CNN | 7 |
| Devlin et al. (2019) | Multi-domain Text | BERT Transformer | Multiple |
Table I compares recent emotion detection approaches by dataset, methodology, and the number of emotion classes taken into account. It can be observed that LSTM and other sequence-based deep learning models, along with their variants, are popular because of their capacity to record contextual and sequential information in text. While transformer-based models perform well, their large computational cost interferes with practical use. Moreover, the vast majority of the existing literature is dedicated to individual model performance as opposed to comparative assessment, particularly with regard to short and informal text. This gap motivates a systematic comparison of sequence models under realistic text conditions.
-
Problem Definition and Research Gap
-
Problem Definition
In the contemporary digital world, much human communication takes place through short and informal texts on platforms such as social media, chat applications, online discussion forums, and email. These platforms are full of emotional information which can assist in gaining insight into user behavior, opinions, and states of mind [1], [5]. Nevertheless, recognizing emotions automatically from text remains difficult, because human language is commonly vague, unstructured, and heavily reliant on context [10].
The majority of systems used to analyze text are oriented toward sentiment polarity, categorizing text as positive, negative, or neutral [1], [2]. Although polarity-based analysis can offer a general idea of opinion, it does not reflect specific emotional states such as happiness, sadness, anger, fear, or surprise. These fine-grained emotions play a vital role in applications such as customer feedback research, social media monitoring, conversational agents, and mental health support systems, where emotional intent matters more than simple sentiment [6].
The use of text to detect emotions is further complicated by the incidence of colloquial terms, misspellings, slang, emojis, and contextual dependency [5]. The same sentence can convey various feelings depending on the sequence of words or the context in which it is written. In addition, textual information lacks the non-verbal elements, such as facial expressions, movements, and intonation, that human beings naturally use to read emotions, which makes automated emotion recognition more complicated [10].
Thus the fundamental issue addressed in this study is the need for an automated emotion detection system that can adequately handle short, sequential text data and properly categorize several emotional states. Such a system must be able to internalize contextual and temporal relationships among words while dealing with noisy real-world text. Solving this issue is a prerequisite for developing emotion-aware applications that better understand and react to human feelings in online communication environments [1], [6].
-
Research Gap
Despite rapid progress in deep learning-based emotion detection, several gaps remain. Most existing studies evaluate a single model architecture and report only overall accuracy, offering little comparative analysis of how different sequence models behave under identical conditions [2], [6]. Performance is usually measured on general or well-structured text, while short, informal, user-generated content remains under-explored. In addition, issues such as dataset bias, class imbalance, and emotional overlap between categories are rarely examined. The present study addresses this gap by systematically comparing a baseline sequence model, a standard LSTM, and an attention-enhanced LSTM on short and informal text under the same experimental settings.
-
Methodology
-
Research Design
This study adopts an experimental and comparative research design concerned with emotion detection in short and informal text using Natural Language Processing and sequence-based deep learning models. The major goal is to investigate and contrast how emotion recognition performance differs across sequence models. A supervised learning approach is taken in which multiple models are trained and tested on labeled emotion data under the same experimental settings. Instead of relying on a single architecture, the study performs a comparative evaluation of the performance trade-offs between baseline sequence models and more developed contextual models. This design ensures the research is fair and reproducible, enhances its novelty, and balances accuracy against computational cost.
-
Data Collection
The data in this study is acquired from publicly accessible emotion-labelled text sources. It consists of short textual statements marked with emotion categories such as happiness, sadness, anger, fear, and neutral states. These datasets reflect the real-life digital communication patterns that are prevalent on social media and messaging platforms. Utilising open datasets guarantees transparency and reproducibility of results. Before further processing, the data is scrutinized to eliminate incomplete, duplicate, or incompatible samples to ensure high data quality.
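The sample-filtering step described above can be sketched in a few lines of Python. The `(text, label)` tuple format and the label set are assumptions made for illustration, since the paper does not specify the dataset schema:

```python
def clean_samples(raw_samples):
    """Drop incomplete, duplicate, or incompatible samples.

    `raw_samples` is a hypothetical list of (text, label) pairs;
    the real dataset schema is not specified in the paper.
    """
    allowed = {"happy", "sad", "angry", "fear", "neutral"}
    seen = set()
    cleaned = []
    for text, label in raw_samples:
        if not text or not text.strip():   # incomplete sample
            continue
        if label not in allowed:           # incompatible label
            continue
        key = (text.strip().lower(), label)
        if key in seen:                    # duplicate sample
            continue
        seen.add(key)
        cleaned.append((text.strip(), label))
    return cleaned
```

Filtering before splitting the data also prevents duplicate samples from leaking between the training and testing sets.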
-
Data Preprocessing
Text collected from real-world sources is noisy and unstructured. To deal with this, a number of preprocessing steps are applied: converting text to lower case, deleting punctuation and special characters, removing stopwords, and segmenting sentences into words. The text is then tokenized and converted into numerical form using word indexing. Since the sequence models require inputs of equal length, the sequences are padded or truncated to a fixed length. This ensures compatibility across the different sequence models and allows an equitable comparison during training and evaluation.
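A minimal sketch of this pipeline in plain Python follows. The study itself uses Keras/NLTK utilities for these steps, so the helper names here are illustrative rather than the actual implementation (index 0 is reserved for padding, and out-of-vocabulary words also map to 0):

```python
import re

def tokenize(text):
    """Lowercase, strip punctuation/special characters, split into words.

    Stopword removal, mentioned in the text, is omitted here for brevity.
    """
    return re.sub(r"[^a-z0-9\s]", " ", text.lower()).split()

def build_vocab(texts):
    """Assign an integer index to every word; 0 is reserved for padding."""
    vocab = {}
    for text in texts:
        for word in tokenize(text):
            vocab.setdefault(word, len(vocab) + 1)
    return vocab

def encode(text, vocab, max_len):
    """Map words to indices, then pad or truncate to a fixed length."""
    ids = [vocab.get(w, 0) for w in tokenize(text)][:max_len]
    return ids + [0] * (max_len - len(ids))
```

In the Keras pipeline the same roles are played by `Tokenizer` and `pad_sequences`.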
-
Text Representation and Embedding Layer
Rather than relying on conventional feature extraction techniques, this study uses word embeddings to represent text sequences numerically. Word embeddings are dense vector representations that preserve semantic similarities between words. The embedding layer serves as a shared input representation for all sequence models used in the study. This ensures that differences in performance are due to model architecture rather than input representation, strengthening the relevance of the comparative analysis.
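Conceptually, the embedding layer is a lookup table that maps each word index to a dense vector and is trained jointly with the rest of the network. A simplified, non-trainable sketch follows; the embedding dimension of 4 is an arbitrary choice for illustration:

```python
import random

def make_embedding_matrix(vocab_size, dim, seed=0):
    """Randomly initialised lookup table; row 0 is the padding vector."""
    rng = random.Random(seed)
    table = [[0.0] * dim]  # padding index 0 maps to a zero vector
    for _ in range(vocab_size):
        table.append([rng.uniform(-0.05, 0.05) for _ in range(dim)])
    return table

def embed(sequence, table):
    """Replace each word index in a padded sequence with its vector."""
    return [table[i] for i in sequence]
```

In Keras this corresponds to a single `Embedding` layer whose weights are updated during training.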
-
Emotion Classification Models
To ensure novelty and comparative insight, multiple sequence-based models are used for emotion classication:
-
Baseline Sequence Model: A simple recurrent neural network is used as a reference point, representing performance as a sequence learner without advanced memory mechanisms.
-
Standard LSTM Model: An LSTM model is used to capture long-term dependencies and contextual relations between words within text sequences. Its gating mechanism helps reduce the vanishing gradient problem and enhances the accuracy of emotion recognition.
-
Attention-Enhanced LSTM Model: The LSTM model is coupled with an attention mechanism which enables the system to focus on words that are emotionally important in a sequence. This model is particularly applicable to short and informal texts, where key emotional cues can be sparsely distributed.
All models are trained using the same dataset, preprocessing pipeline, and evaluation metrics to ensure a fair comparison.
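The attention step in the third model can be sketched independently of any deep learning framework: the per-word hidden states produced by the LSTM are scored against a query vector, the scores are normalized with a softmax, and the weighted sum forms a context vector used for classification. The dot-product scoring below is one common variant and an assumption here, since the paper does not state which scoring function is used:

```python
import math

def softmax(scores):
    """Normalise scores into attention weights that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(hidden_states, query):
    """Weight each timestep's hidden state by its relevance to `query`.

    `hidden_states` is a list of per-word vectors (e.g. LSTM outputs);
    dot-product scoring is an illustrative choice, not necessarily the
    variant used in the paper.
    """
    scores = [sum(h_i * q_i for h_i, q_i in zip(h, query)) for h in hidden_states]
    weights = softmax(scores)
    dim = len(hidden_states[0])
    context = [sum(w * h[d] for w, h in zip(weights, hidden_states))
               for d in range(dim)]
    return context, weights
```

Words whose hidden states score higher against the query receive larger weights, which is what lets the model emphasise emotionally important words.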
-
Model Training and Evaluation
The dataset is split into training and testing sets using an 80:20 ratio. Model parameters are learned on the training set, while the testing set is used to assess generalization performance. Every model is optimized using the same optimization settings where possible. Performance is assessed using accuracy, precision, recall, and F1-score to give a detailed evaluation of emotion classification capability. Comparative results bring out the strengths and constraints of each model under short-text and informal language conditions.
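The per-class precision, recall, and F1-score used above follow the standard definitions; a plain-Python sketch is:

```python
def class_metrics(y_true, y_pred, label):
    """Precision, recall, and F1 for a single emotion class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

Reporting these per emotion class, rather than overall accuracy alone, exposes the effect of class imbalance discussed later in the results.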
-
Adaptive Response Generation
Although the main aim of this study is emotion detection, the system architecture also enables the generation of relevant responses based on the predicted emotions. Once an emotion is identified, the system can return predefined responses or recommendations in accordance with the emotional state. This component reveals the practical applicability of the suggested framework and outlines how emotion recognition models can be embedded in interactive systems. The response mechanism is rule-based and written in such a way as to be easily extended.
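Such a rule-based mechanism can be as simple as a lookup table keyed by the predicted emotion. The response strings below are hypothetical placeholders, not the ones used in the study:

```python
# Hypothetical emotion-to-response table; the paper describes the
# mechanism as rule-based but does not list the actual responses.
RESPONSES = {
    "happy":   "Glad to hear that! Anything else I can help with?",
    "sad":     "I'm sorry you're feeling down. Would you like some support resources?",
    "angry":   "I understand this is frustrating. Let me escalate your issue.",
    "fear":    "That sounds worrying. Here is some information that may help.",
    "neutral": "Thanks for your message. How can I assist you?",
}

def respond(predicted_emotion):
    """Map a predicted emotion label to a predefined response."""
    return RESPONSES.get(predicted_emotion, RESPONSES["neutral"])
```

Extending the system to new emotions then only requires adding entries to the table, which is what makes the rule-based design easy to extend.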
-
Ethical Considerations
Ethical considerations are taken into account throughout the study. The dataset employed does not contain personally identifiable data, which guarantees privacy and data security. The system is intended to be analytical and supportive, not a tool for clinical diagnosis or discriminatory judgment. Responsible use of emotion detection technology is highlighted in order to prevent misuse, prejudice, or misinterpretation of emotional predictions.
-
Experimental Setup
-
Experimental Design
The experimental design is a supervised deep learning framework for emotion classification in short and informal text [2], [6]. The primary objective is to comparatively assess the performance of several sequence-based models under the same experimental conditions. In particular, a baseline sequence model, a conventional Long Short-Term Memory (LSTM) model, and an attention-enhanced LSTM model are analyzed to understand their capacity to capture contextual and sequential emotional patterns in textual input [3], [4]. To ensure that all comparisons are equal and unbiased, training and testing of the models are performed using the same dataset, handling pipeline, and evaluation measures [2]. The dataset is separated into independent training and testing sets, and all experiments are conducted under controlled conditions to avoid variation and to ensure reproducibility of results. The overall workflow of the proposed framework, covering data collection, preprocessing, emotion recognition, and assessment, is presented in Fig. 1, which gives a clear overview of the experimental process.
Fig. 1. Proposed Context-Aware Emotion Detection Architecture
-
Dataset and Data Preparation
The data in the research is retrieved from publicly available emotion-labeled text sources [9]. It consists of brief textual statements gathered from real-world online communication, with every instance marked with a particular emotion category such as happy, sad, angry, fearful, or neutral. These samples contain the informal and situational language that is common in online communication [6]. Before model training, the dataset is thoroughly cleaned to eliminate incomplete, duplicate, or noisy entries. Preprocessing involves converting text to lower case, removing punctuation marks and other special characters, filtering out stopwords, and segmenting sentences into word sequences [5]. After preprocessing, the text is converted into numbers with the help of word indexing and padded to a predetermined sequence length to guarantee compatibility with all sequence-based models [4]. The data is then divided into training and testing sets using an 80:20 ratio.
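The 80:20 split can be implemented as a seeded shuffle followed by a slice; fixing the seed keeps the split identical across the compared models. Class-stratified splitting, not shown here, is a common refinement when classes are imbalanced:

```python
import random

def train_test_split(samples, test_ratio=0.2, seed=42):
    """Shuffle and split samples into training and testing sets (80:20)."""
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:cut], shuffled[cut:]
```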
TABLE II: Dataset Description

| Parameter | Description |
|---|---|
| Data Source | Public emotion-labeled text dataset |
| Total Samples | Approximately 10,000 |
| Emotion Classes | Happy, Sad, Angry, Fear, Neutral |
| Training Data | 80% |
| Testing Data | 20% |
-
Baseline Comparison
To assess the efficacy of the deep sequence learning models, a baseline comparison is performed against a plain recurrent neural network which has no higher-order memory or attention functions [2]. This baseline serves as a reference for how plain sequential learning performs. It is compared against the standard LSTM model, which captures long-term relations in language via gated memory units [3]. In addition, an attention-enhanced LSTM model is used to improve performance further by permitting the network to focus on emotionally relevant words within a sequence [4]. All models are trained using the same data splits and optimization parameters in order to guarantee a fair comparison.
-
Evaluation Metrics
The emotion detection models are evaluated using conventional classification metrics common to text classification studies [2], [6]. Overall accuracy measures the proportion of samples assigned to the correct classes. To provide a more specific evaluation, precision, recall, and F1-score are calculated for every emotion category. These metrics help in studying model behavior under class imbalance and give an understanding of how well the various emotions are recognized [6].
-
Implementation Environment

The experiments are performed in a Python notebook environment. Deep learning and NLP libraries such as TensorFlow, Keras, and NLTK are used for text preprocessing, model training, and emotion recognition [7], [8]. All tests are run on a standard computing system without specialized hardware, providing evidence that the suggested comparative approach is computationally efficient and appropriate for both academic and practical applications.
-
RESULTS
-
Emotion Classification Results
-
Emotion classification was performed by several sequence-based models: a baseline sequence model, a standard Long Short-Term Memory (LSTM) model, and an attention-enhanced LSTM model. All models were trained on preprocessed textual data and assessed against unseen test examples to evaluate their generalization ability [2], [6]. The findings show that sequence-based deep learning models are effective at representing the contextual and emotional patterns present in short and informal text. Both LSTM-based approaches outperformed the baseline model in emotion recognition performance, underlining the importance of sequential learning for text-based emotion detection.
-
Accuracy Performance
The overall classification accuracy of each model is summarized in Table III. The baseline model provides a reference level of performance, whereas the LSTM model shows an apparent improvement owing to its ability to capture long-term dependencies in text. The attention-enhanced LSTM scores best on accuracy, pointing to the advantage of focusing on the emotionally relevant words in a sentence. These findings validate that adding an attention mechanism further enhances emotion detection performance in short text, where important emotional cues can be scanty. Fig. 2 illustrates a comparative view of the classification accuracy achieved by the baseline model, the standard LSTM, and the attention-based LSTM. The results show a steady performance improvement as sequence-based learning and attention are applied.
Class-wise Performance Analysis
To understand the behavior of the models more closely, class-wise precision, recall, and F1-score were evaluated for each emotion category [2]. The LSTM and attention-enhanced LSTM models performed particularly well on the most common emotions
TABLE III: Overall Accuracy of Emotion Detection Models

| Model | Accuracy (%) |
|---|---|
| Baseline Model (Simple Neural Model) | 74.1 |
| Standard LSTM Model | 82.8 |
| Attention-Enhanced LSTM Model | 85.6 |
Fig. 2. Accuracy Bar Graph
such as happiness and sadness.
Performance was found to be relatively lower for emotions with fewer training samples, such as fear and anger. This tendency results from the influence of class imbalance, which is a frequent problem in emotion classification tasks. The attention-enhanced LSTM performed in a more stable way across emotion classes, implying better handling of subtle emotional cues. Fig. 3 shows the class-wise precision, recall, and F1-score returned by the attention-based LSTM model across the various emotion categories. Higher recall values for the more frequent emotions denote the model's ability to recognize the vast majority of relevant instances, while slightly lower scores for less frequent emotions indicate the influence of class imbalance.
-
Discussion of Results
The experimental results indicate that sequence-based deep learning models achieve significantly higher results than the baseline approach for emotion detection in text. The standard LSTM model enhances contextual understanding through acquisition of word order and long-term dependencies, and this is further refined by the attention-enhanced LSTM, which focuses on emotionally important words within a sequence [4]. Transformer-based models have demonstrated good performance in recent studies, although they frequently demand greater computational power and bigger datasets [3], [4]. In contrast, the proposed LSTM-based models balance accuracy and efficiency, making them practical for real-life
Fig. 3. Class-wise Precision, Recall, and F1-score
resource-constrained environments.
Fig. 4. Confusion Matrix
-
Summary of Findings
The findings affirm that emotion detection performance improves substantially when moving from baseline sequence models to evolved LSTM-based models. Among the models evaluated, the attention-enhanced LSTM obtains the best overall performance, demonstrating its capability to capture the contextual and emotional information contained in short and informal text. The classification behavior of the proposed model is also demonstrated with the help of the confusion matrix shown in Fig. 4.
-
CONCLUSION AND FUTURE WORK
-
Conclusion
This paper presented a comparative analysis of sequence-based models for emotion detection in short and informal text. A baseline sequence model, a standard LSTM, and an attention-enhanced LSTM were trained and evaluated under identical experimental conditions on a public emotion-labeled dataset. The results show that sequence-based learning clearly outperforms the baseline, with the attention-enhanced LSTM achieving the best overall accuracy (85.6%) by focusing on emotionally important words within a sequence. Class-wise analysis further revealed that performance is lower for under-represented emotions such as fear and anger, reflecting the effect of class imbalance. Overall, the study demonstrates that lightweight LSTM-based models with attention offer a practical balance between accuracy and computational efficiency for real-world digital communication environments.
-
Future Work
Although the proposed comparative framework performs well, a number of directions remain open for future research. A possible extension is the addition of further sophisticated sequence architectures, such as Bidirectional LSTM or lightweight transformer-based models, to develop contextual comprehension further and manage complicated emotional expressions in a better way. Future research can also aim at enhancing robustness in difficult conditions such as emotion drift, sarcasm, and very informal or code-mixed text. Expanding the dataset to contain multilingual and cross-domain text would allow the generalization of the models to be evaluated across various languages and cultures. Additionally, addressing class imbalance with data augmentation or more advanced sampling methods may result in more stable performance across all types of emotions. Lastly, integrating the proposed emotion detection framework into real-time web or mobile apps can illustrate its applicability in real-world settings. These future enhancements can contribute to the creation of more scalable, adaptive, and reliable emotion-sensitive systems for next-generation digital communication platforms.
References

[1] A. Acheampong, C. Wenyu, and H. Nunoo-Mensah, "Text-based emotion detection: Advances, challenges, and opportunities," Engineering Reports, vol. 2, no. 7, 2020.
[2] S. Tripathi, S. Tripathi, and S. Mishra, "Emotion classification from text using natural language processing and machine learning," IEEE Access, vol. 9, pp. 23456–23465, 2021.
[3] J. Devlin, M. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," Proceedings of NAACL-HLT, pp. 4171–4186, 2019.
[4] A. Basiri, S. Nemati, and M. Soryani, "ABCDM: An attention-based BiLSTM-CNN model for emotion detection," Applied Soft Computing, vol. 132, 2023.
[5] B. Liu and L. Zhang, "A survey on sentiment and emotion analysis in text," ACM Computing Surveys, vol. 54, no. 2, pp. 1–36, 2022.
[6] Y. Zhang, J. Wang, and X. Li, "Emotion detection from social media text using machine learning techniques," International Journal of Information Management, vol. 63, 2022.
[7] S. Bird, E. Klein, and E. Loper, Natural Language Processing with Python, O'Reilly Media, 2009.
[8] F. Pedregosa et al., "Scikit-learn: Machine learning in Python," Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.
[9] Kaggle, "Emotion dataset for NLP," [Online]. Available: https://www.kaggle.com, 2022.
[10] R. Feldman, "Sentiment analysis and emotion recognition in text using NLP," Communications of the ACM, vol. 56, no. 4, pp. 82–89, 2013.
