Learning-Based Approaches for Modeling and Classification of Human – Machine Dialog Systems

DOI : 10.17577/IJERTCONV10IS09019


Y. Rithika, II B.Tech CSE,

Sree Vidyanikethan Engineering College, Tirupati, Andhra Pradesh, India.

Mr. S. Gokul Pran, M.E. (Ph.D), Assistant Professor,

Sree Vidyanikethan Engineering College, Tirupati, Andhra Pradesh, India.

K. P. Ruchitha, II B.Tech CSE,

Sree Vidyanikethan Engineering College, Tirupati, Andhra Pradesh, India.

Abstract: The purpose of this article is to provide a comprehensive survey of learning-based human-machine dialog systems, with a focus on the various dialog models. It covers the fundamental process of establishing a dialog model, examines the features and classification of dialog models, expounds some representative models, compares the advantages and disadvantages of different dialog models, summarizes the commonly used data sets and evaluation metrics, analyzes the existing issues, and points out potential future directions for human-machine dialog systems.

Keywords: Artificial intelligence (AI), deep learning (DL), dialog model, machine learning (ML), reinforcement learning (RL), sequence-to-sequence (Seq2Seq) model.

  1. INTRODUCTION:

A dialog system is generally composed of speech recognition, natural language understanding, and speech synthesis modules (early template-based dialog models had no speech recognition or speech synthesis module), as conceptually shown in Fig. 1. The function of the speech recognition module is to convert the human speech signal into a text signal for the natural language understanding module.

Next, the natural language understanding module feeds the transformed text signal into the dialog model, recognizes the human intention, and generates the corresponding reply. Lastly, the speech synthesis module converts the text signal returned by the natural language understanding module into a speech signal and outputs it. The core of the dialog system is the dialog model, and building the dialog model is the main task of this module.
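
As a rough illustration of this pipeline, the following sketch wires the three modules together. The asr, nlu_generate_reply, and tts functions are hypothetical placeholders, not any real API.

```python
# Minimal sketch of the pipeline in Fig. 1; all three helpers are placeholders.

def asr(audio_signal: bytes) -> str:
    """Speech recognition: convert the speech signal into text (placeholder)."""
    return "hello, what is the weather today"

def nlu_generate_reply(text: str) -> str:
    """Natural language understanding + dialog model: recognize the intention
    and generate a text reply (placeholder)."""
    return "It is sunny today."

def tts(text: str) -> bytes:
    """Speech synthesis: convert the reply text back into a speech signal (placeholder)."""
    return text.encode("utf-8")

def dialog_turn(audio_signal: bytes) -> bytes:
    text_in = asr(audio_signal)          # speech -> text
    reply = nlu_generate_reply(text_in)  # text -> reply text (dialog model)
    return tts(reply)                    # reply text -> speech
```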

Research on dialog systems dates back to the middle of the last century. Early dialog systems were based on artificial rules, such as Eliza (1966) [1], Parry (1975) [2], which passed the Turing test, and Alice (2009) [3], which has won the Loebner Prize three times. With the development of speech recognition [4]-[6], speech synthesis [7], [8], natural language processing [9], [10], and information retrieval (IR) [11], [12], and especially deep learning (DL) [13]-[15] and reinforcement learning (RL) [17], data-driven models that use DL or RL have been proposed, such as IR models, generation models, RL models, and hybrid models. Many human-machine dialog products have since emerged, such as Cortana and Microsoft Xiaobing in 2014, Baidu Duer and Ali Xiaomi in 2015, Apple Siri and Google Assistant in 2016, Tencent Dingdang in 2017, and so on.

2. PROCESS OF ESTABLISHING THE DIALOG MODEL

    There are two typical methods for building a dialog model: non-data-driven and data-driven. The general process is illustrated in Fig. 2. To build a dialog model with the non-data-driven method, we should first be familiar with the business scenarios in which the dialog model will be applied, then extract the corresponding rules through business analysis, and finally integrate all the rules to build a conversation model. To build a data-driven dialog model, we first need to prepare a corpus, which is the basis of building the dialog model. There are two ways to obtain a corpus: one is to use an open corpus from the Internet directly; the other is to crawl data from the Internet.

Fig. 2. Flowchart of building a dialog model.

Second, we need to preprocess the data; the main operations include removing stop words, word segmentation (not required for English), and so on. Next, the processed data are fed into the dialog model for training, and different dialog models are selected according to different business scenarios. Finally, the trained dialog model is used for prediction: it receives the user's input and generates a response.
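
The preprocessing step can be illustrated with a minimal sketch; the stop-word list and example pairs below are purely illustrative and not from any of the corpora discussed here.

```python
import re

# Illustrative stop-word list; a real system would use a full list (e.g. from NLTK).
STOP_WORDS = {"the", "a", "an", "is", "are", "to", "of", "and"}

def preprocess(utterance: str) -> list[str]:
    """Lowercase, tokenize, and remove stop words.
    (A Chinese corpus would additionally need word segmentation.)"""
    tokens = re.findall(r"[a-z']+", utterance.lower())
    return [t for t in tokens if t not in STOP_WORDS]

pairs = [("How is the weather today?", "It is sunny."),
         ("What is your name?", "I am a chatbot.")]
training_data = [(preprocess(q), preprocess(a)) for q, a in pairs]
print(training_data)
```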

  3. CLASSIFICATION OF DIALOG MODELS

The classification of dialog models is shown in Fig. 3.

1. Dialog Model Based on Non-Data-Driven Methods:

The dialog model based on the non-data-driven method is mainly the template-based dialog model, which is realized by setting rules manually. When the user's question matches the pre-set rules, the corresponding response is triggered. The main workflow of the template-based dialog model includes receiving user input, question normalization, question query reasoning, and template processing. Receiving user input mainly means getting the text signal; question normalization replaces strings that need to be replaced in the question, such as replacing "I'm" with "I am"; question query reasoning matches the normalized question against the rules in the rule base to get the best matching result; template processing handles the special tags in the matching result to produce the final response, which is then returned to the user.
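
The following minimal sketch, with invented normalization rules and templates, illustrates the normalization, rule-matching, and template-processing steps described above.

```python
import re

# Tiny illustration of the template workflow: normalization -> rule matching -> template processing.
NORMALIZATION = {"i'm": "i am", "don't": "do not"}
RULES = [
    (re.compile(r"\bi am (.+)"), "Why do you say you are {0}?"),
    (re.compile(r"\bweather\b"), "I am afraid I cannot check the weather."),
]

def respond(user_input: str) -> str:
    text = user_input.lower()
    for short, full in NORMALIZATION.items():      # question normalization
        text = text.replace(short, full)
    for pattern, template in RULES:                # query reasoning: match the rule base
        m = pattern.search(text)
        if m:
            return template.format(*m.groups())    # template processing
    return "Please tell me more."                  # fallback response

print(respond("I'm tired"))   # -> "Why do you say you are tired?"
```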

Early dialog models were mainly template-based, such as Eliza mentioned earlier, whose template was written as a dialog script composed of keywords and the corresponding transformation rules. When there is user input, the system first checks the keywords in the input statement and selects the keyword with the highest ranking, then finds the corresponding transformation rule, and generates the response statement through the rule.

Artificial linguistic internet computer entity (Alice), a more recent template-based dialog system, is written in AI markup language (AIML) [16], an XML dialect for creating robot dialog rules. The language adopts the "stimulus-response" theory and is implemented in Java. The basic process is as follows: after receiving the user's input, the system first extracts keywords from the input statement, replaces them, and removes noise; then it matches the keywords against the rules and locates the position of the question in the template; finally, it obtains the response through the template.

Fig. 4. Traditional ML algorithms used in the dialog model (SVD is singular value decomposition).

2. Dialog Model Based on Data-Driven Methods:

The DL algorithms, as shown in Fig. 5, can transform the initial low-level feature representation into a high-level feature representation by constructing a multilayer artificial neural network that extracts and filters the input information layer by layer. This is similar to the way human neurons transmit information through neural networks, reflecting the human ability of abstract learning. The commonly used artificial neural networks are the convolutional neural network (CNN), the recurrent neural network (RNN), and the deep belief network (DBN).
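
As a minimal illustration of layer-by-layer feature extraction (not any specific model from the literature), the sketch below stacks an embedding layer and a two-layer LSTM to encode an utterance into a sentence vector; all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class UtteranceEncoder(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=128, hidden_dim=256, num_layers=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)   # low-level features
        self.rnn = nn.LSTM(embed_dim, hidden_dim, num_layers=num_layers,
                           batch_first=True)                   # higher-level features, layer by layer

    def forward(self, token_ids):            # token_ids: (batch, seq_len)
        embedded = self.embedding(token_ids)
        _, (hidden, _) = self.rnn(embedded)
        return hidden[-1]                    # last layer's hidden state as the sentence vector

encoder = UtteranceEncoder()
sentence_vec = encoder(torch.randint(0, 5000, (4, 12)))  # 4 utterances of 12 tokens
print(sentence_vec.shape)  # torch.Size([4, 256])
```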

Fig. 5. DL algorithms used in the dialog model.

RL learns, by interacting with the environment through trial and error, how to choose the best state and action in order to obtain the greatest reward. The theoretical basis of RL is the Markov decision process (MDP) [18], and the key elements of RL are the action, state, strategy, and reward.

1. Model Based on IR: The IR-based model is widely used in industrial production. The principle of the model is: first, extract the keywords or semantic representation of the question; then match its similarity against the questions in the corpus; and finally, select the response corresponding to the most similar question as the final output via a ranking algorithm. Therefore, the core problem of the IR-based model can be abstracted as a text-matching problem. The commonly used text-matching methods include: 1) text matching based on keywords, where the common schemes are term frequency-inverse document frequency (TF-IDF) and best matching (BM25); keyword-based matching, however, cannot use the semantic information of the text; 2) text matching based on shallow semantics, such as latent semantic analysis (LSA) or latent semantic indexing (LSI), which can solve the problem of synonymy at the semantic level but cannot solve the problem of polysemy and ignores word order; and 3) text semantic matching based on DL, which mainly includes representation-based and interaction-based matching methods. A minimal keyword-based retrieval sketch follows.
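
The keyword-based branch can be illustrated with a TF-IDF retrieval sketch; the toy question-answer pairs are invented for illustration, and scikit-learn is assumed to be available.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy question-answer corpus; a real system would index a large dialog corpus.
questions = ["how is the weather today",
             "what time does the store open",
             "can you recommend a movie"]
answers   = ["It is sunny and warm today.",
             "The store opens at 9 am.",
             "You could try watching Inception."]

vectorizer = TfidfVectorizer()
question_vectors = vectorizer.fit_transform(questions)   # index the corpus questions

def retrieve(user_question: str) -> str:
    query_vec = vectorizer.transform([user_question])
    scores = cosine_similarity(query_vec, question_vectors)[0]  # similarity to each question
    best = scores.argmax()                                      # top-1 as the "ranking algorithm"
    return answers[best]

print(retrieve("what's the weather like"))  # -> "It is sunny and warm today."
```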

The summary of the IR-based dialog models:

| Reference | Model name | Data sets |
| --- | --- | --- |
| Huang et al. [30] | DSSM (Deep Structured Semantic Model) | Evaluation data set |
| Wan et al. [31] | MV-LSTM (Multi-View Long Short-Term Memory) | Collected from the Yahoo community question answering system |
| Pang et al. [32] | MP (MatchPyramid) | MSRP data set and a large academic data set |
| Wu et al. [33] | SMN (Sequential Matching Network) | Ubuntu corpus and Douban conversation corpus |
| Zhou et al. [34] | DAM (Deep Attention Matching) | Ubuntu corpus and Douban conversation corpus |
| Wang et al. [35] | IRGAN (Information Retrieval with GAN) | LETOR data set, MovieLens (100K), Netflix, and InsuranceQA data set |
| Zhang et al. [36] | DUA (Deep Utterance Aggregation) | Ubuntu corpus, Douban conversation corpus, and e-commerce dialog corpus |

2. Model Based on Generation: The generation-based model normally adopts the Seq2Seq structure [19], which generally includes an encoder and a decoder: the encoder encodes the input question and extracts its semantic information; the decoder uses the extracted semantic information to generate the reply. The encoder and decoder are generally composed of RNNs, such as the LSTM [20] and the GRU [21]. The generation-based model is usually trained on short-text or question-answer corpora, and the basic model is the Seq2Seq model.
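
A minimal GRU-based Seq2Seq sketch is given below; it follows the encoder-decoder structure described above but omits attention, padding, and training, and all dimensions and token ids are illustrative.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def forward(self, src):                      # src: (batch, src_len)
        _, hidden = self.gru(self.embedding(src))
        return hidden                            # semantic summary of the question

class Decoder(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token, hidden):            # one decoding step
        output, hidden = self.gru(self.embedding(token), hidden)
        return self.out(output), hidden

encoder, decoder = Encoder(), Decoder()
question = torch.randint(0, 5000, (1, 10))       # one encoded question of 10 tokens
hidden = encoder(question)
token = torch.tensor([[1]])                      # assume id 1 is the start-of-sentence token
reply = []
for _ in range(10):                              # greedy decoding of at most 10 tokens
    logits, hidden = decoder(token, hidden)
    token = logits.argmax(dim=-1)                # next token id, shape (1, 1)
    reply.append(token.item())
print(reply)
```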

3) Model Based on RL: RL is widely used in robots [22], [23], games [24], [25], and network security [17]. In a dialog system, the action refers to generating the dialog, the state refers to the human-machine conversation so far, the strategy refers to deciding what kind of dialog to respond with according to the current state, and the reward refers to the evaluation of the outcome of the dialog. RL is generally applied to dialog state tracking and dialog strategy selection, which belong to the natural language processing module of a dialog system.
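
As a toy illustration of RL-based dialog strategy selection (not the method of any cited work), the following tabular Q-learning sketch maps coarse dialog states to candidate system actions against an invented user simulator; states, actions, and rewards are all made up for illustration.

```python
import random
from collections import defaultdict

STATES = ["greeting", "asking_info", "closing"]
ACTIONS = ["greet_back", "request_slot", "confirm", "say_goodbye"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

q_table = defaultdict(float)  # (state, action) -> expected reward

def simulate_user(state, action):
    """Hypothetical user simulator: returns (reward, next_state)."""
    good = {"greeting": "greet_back", "asking_info": "request_slot", "closing": "say_goodbye"}
    reward = 1.0 if action == good[state] else -0.1
    next_state = STATES[min(STATES.index(state) + 1, len(STATES) - 1)]
    return reward, next_state

def choose_action(state):
    if random.random() < EPSILON:                           # exploration
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])  # exploitation

for _ in range(2000):                                       # training episodes
    state = "greeting"
    for _ in range(3):
        action = choose_action(state)
        reward, next_state = simulate_user(state, action)
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, action)])
        state = next_state

print(max(ACTIONS, key=lambda a: q_table[("greeting", a)]))  # learned policy: "greet_back"
```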

4) Model Based on Hybrid: The hybrid model integrates several parts of the template-based, IR-based, generation-based, and RL-based models, together with external knowledge, so as to exert their respective advantages and improve the overall effect of the model. Such a model can give full play to the fluency and logic of the IR-based model, and can also use the generation-based model to deal with questions that have not appeared in the database. The dialog model described here includes two parts. One part is sentence modeling, which first uses a Wikipedia data set to pretrain a CNN so that the CNN can extract external knowledge; after training, this part extracts the local information corresponding to the input sequence. The other part is sequence modeling, which is an RNN model; the last hidden-layer state of this model adds the local information extracted by the previous part. In such RNN models, efficient arithmetic is also essential [37]-[39].
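
A rough sketch of this hybrid encoder idea is given below, assuming illustrative dimensions and omitting the Wikipedia pretraining of the CNN: the CNN's local features are added to the RNN's last hidden state.

```python
import torch
import torch.nn as nn

class HybridEncoder(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.cnn = nn.Conv1d(embed_dim, hidden_dim, kernel_size=3, padding=1)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):                        # (batch, seq_len)
        embedded = self.embedding(token_ids)             # (batch, seq_len, embed_dim)
        local = self.cnn(embedded.transpose(1, 2))       # (batch, hidden_dim, seq_len)
        local = local.max(dim=2).values                  # local information (max over time)
        _, hidden = self.rnn(embedded)                   # (1, batch, hidden_dim)
        return hidden[-1] + local                        # add local info to last hidden state

encoder = HybridEncoder()
print(encoder(torch.randint(0, 5000, (2, 15))).shape)    # torch.Size([2, 256])
```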

4. COMMON DATA SETS AND EVALUATION METRICS:

A. DATA:

Data sets are the basic condition of a dialog system, and high-quality data sets make the training of the model more effective. By consulting a large number of documents, we summarize some data sets commonly used in single-turn or multi-turn dialogs in English or Chinese, as shown in Table IV, and give a brief introduction to the composition of these data sets.

Cornell Movie-Dialogs Corpus [26] consists of 220,579 conversational exchanges between 10,292 pairs of movie characters, involving 9,035 characters from 617 movies. The corpus also contains the title information, role information, the actual text of each dialog, the structure of the dialog, and the original source of the dialog.

Ubuntu Dialog Corpus [27] is extracted from the Ubuntu chat logs. The data set consists of about 930,000 dialogs and 7,100,000 utterances, split into a training set, a validation set, and a test set. There are 1,000,000 examples in the training set, 50% positive and 50% negative; there are 19,560 examples in the validation set and 18,920 examples in the test set.

Douban Dialog Corpus is the dialog data collected by Wu et al. from Douban. The minimum number of turns per dialog is 3, and the average number of turns is about 6.

Short-Text Conversation Data Set consists of short texts extracted from posts and the comments under them on Sina Weibo. There are 4.8 M post-response pairs in the training set and 422 posts in the test set, each of which has about 30 responses.

Papaya Conversational Data Set consists of two parts: core data and peripheral data. The core data are manufactured to maintain a consistent personality for the chat robot, and users can modify the role information according to their own needs; the peripheral data are a collection of online resources, including scene dialogs designed to train robots.

The Reddit data set consists of cleaned-up Reddit data. In particular, the data set website provides a program for automatically generating Reddit data, which can generate millions of Reddit post-response pairs.

B. EVALUATION METRICS:

The evaluation metrics of the dialog model are divided into objective and subjective evaluation metrics. The objective evaluation metrics include those of the IR-based model and the generation-based model, and the subjective evaluation metrics include the human evaluation metrics.

1. Evaluation Metrics of the IR Model: The evaluation metrics commonly used in retrieval models are recall@k (R@k), mean average precision (MAP), and mean reciprocal rank (MRR). R@k refers to recall at a chosen cut-off k; people usually use R@1 (k equal to 1), because when building a data set there is only one correct answer for each question in the test set. MAP is a commonly used evaluation metric in the fields of object detection and text classification. Precision refers to the proportion of positive classes in the classification results, and AP refers to the average of the precision maxima as the recall varies between 0 and 1. MAP is defined as

$\mathrm{MAP}(Q) = \frac{1}{|Q|} \sum_{q \in Q} \mathrm{AP}(q)$   (2)

where Q is a collection of sample queries and |Q| represents the number of queries.

MRR [69] evaluates the performance of the retrieval model by the rank of the correct result within the retrieval results. When calculating, the reciprocal rank of the correct answer of a query is taken as its accuracy, and the accuracies of all queries are then averaged. The MRR is defined as

$\mathrm{MRR} = \frac{1}{|Q|} \sum_{i=1}^{|Q|} \frac{1}{\mathrm{rank}_i}$

where rank_i is the position of the first relevant result in the ranking for query i.
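
For concreteness, these retrieval metrics can be computed as in the following sketch, where each query is represented by its ranked list of 0/1 relevance labels (toy data, single-relevant-answer setting).

```python
def recall_at_k(ranked_labels, k):
    """R@k for a single query with exactly one relevant answer."""
    return 1.0 if 1 in ranked_labels[:k] else 0.0

def average_precision(ranked_labels):
    hits, precisions = 0, []
    for i, rel in enumerate(ranked_labels, start=1):
        if rel:
            hits += 1
            precisions.append(hits / i)          # precision at each relevant position
    return sum(precisions) / max(hits, 1)

def mean_reciprocal_rank(queries):
    """MRR = (1/|Q|) * sum over queries of 1/rank_i."""
    return sum(1.0 / (labels.index(1) + 1) for labels in queries) / len(queries)

queries = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0]]   # ranked relevance labels per query
print(recall_at_k(queries[0], 1))                                 # 0.0
print(sum(average_precision(q) for q in queries) / len(queries))  # MAP
print(mean_reciprocal_rank(queries))                              # (1/2 + 1 + 1/3) / 3
```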

2. Evaluation Metrics of the Generation Model: The evaluation metrics of the generation-based model are divided into word-overlap metrics and word-vector metrics. Word-overlap metrics reflect the quality of the generated response by counting the occurrences of some phrases in the sentences, while word-vector metrics calculate the similarity between words at the semantic level to express the similarity between sentences.

a. Word Overlap Metrics: Bilingual evaluation understudy (BLEU) was originally designed to evaluate the quality of machine translation results and is now also used in the field of dialog systems.

b. Word Vector Metrics: Greedy matching (GM) expresses the similarity of two sentences by calculating the similarity of the word embedding vectors in the two sentences.
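
The following sketch, assuming NLTK is available and using random toy word embeddings in place of trained ones, illustrates both a word-overlap metric (BLEU) and a word-vector metric (greedy matching).

```python
import numpy as np
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Word-overlap metric: BLEU between a reference reply and a generated reply.
reference = ["it", "is", "sunny", "and", "warm", "today"]
candidate = ["it", "is", "very", "sunny", "today"]
bleu = sentence_bleu([reference], candidate,
                     smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {bleu:.3f}")

# Word-vector metric: greedy matching with toy embeddings
# (random vectors here; a real system would use trained embeddings).
rng = np.random.default_rng(0)
embeddings = {w: rng.standard_normal(50) for w in set(reference + candidate)}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def greedy_match(sent_a, sent_b):
    """For each word in sent_a, take its best cosine match in sent_b, then average."""
    return np.mean([max(cosine(embeddings[a], embeddings[b]) for b in sent_b)
                    for a in sent_a])

gm = (greedy_match(reference, candidate) + greedy_match(candidate, reference)) / 2
print(f"Greedy matching: {gm:.3f}")
```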

c. Human Evaluation Metrics: The existing objective evaluation metrics reflect the relationship between questions and responses to a certain extent, but no metric solves the evaluation problem of the dialog system very well. Human evaluation metrics generally cover grammar, contextual relevance, diversity, and other aspects. The advantage of human evaluation is that it can fully reflect the real feelings of human beings and steer the design of the dialog system toward the daily conversation habits of human beings. However, human evaluation requires finding volunteers and designing questionnaires, which is time-consuming and laborious.

  5. ANALYSIS OF THE EVALUATION METRICS:

A. ANALYSIS:

If we knew the specific evaluation metric values of each model, we could compare the models directly; instead, we only count the frequency of each metric in the corresponding literature. On the one hand, the data sets used in these articles are generally different, and evaluation metrics obtained on different data sets are not comparable; on the other hand, because there is no unified evaluation metric adopted by all articles, the evaluation metrics of each article differ greatly. So it is not feasible to compare different models by the values of the evaluation metrics.

Model Based on IR: First, we can see that MRR is widely used in the articles on the IR-based model, from which we can infer that MRR is more suitable for the different data and tasks of the IR-based model. Second, the article with the most metrics is the one with three tasks, while the other articles have only one or two tasks. We also find that several articles share the same metrics because they use the same data set, as shown in Table II, and face the same task; so the evaluation metric is related to the task, and completing different tasks requires different data sets. Finally, human evaluation metrics are rarely used because the IR-based model does not need them, as the responses of this model are usually logical and fluent.

Model Based on Generation: We find that perplexity (PPL) occurs more frequently, which suggests that it is applicable to more data sets and tasks. Because the responses of the generation-based model are often not fluent and logical, many problems still need to be solved, such as those in the style column and other problems not shown. It is a challenge to solve these problems, and it is even more challenging to design a metric that evaluates models addressing them in a unified way, which leads to the phenomenon of sparse metrics.

    B. CONCLUDING REMARKS:

The evaluation metrics of both the IR-based model and the generation-based model can reflect the quality of the response to a certain extent, but there are so many different metrics that a unified evaluation of the dialog model cannot be formed.

It is one-sided to evaluate the dialog system by borrowing the evaluation metrics of other natural language processing tasks, which may mislead the training of the dialog model. Therefore, it is necessary to design automatic evaluation metrics that are highly correlated with human evaluation.

Through comparison, it is found that human evaluation metrics are more often used with the generation-based dialog model.

The data set is also an important factor affecting the evaluation results, and the many data sets of different quality can easily interfere with the evaluation of dialog models. So different tasks require standard data sets to facilitate performance comparison between models.

6. EXISTING ISSUES AND FUTURE RESEARCH DIRECTIONS:

Although dialog models have made considerable progress in recent years, several typical issues still remain to be better addressed due to the complexity of natural language processing.

Existing Typical Issues:

On the High-Quality Large Data Set: The size and quality of the data set determine the response quality of the dialog model. The larger the data set and the higher its quality, the more useful information it contains and the better the response of the dialog system will be.

On the Unified Automatic Evaluation Metrics: The existing objective evaluation metrics have no clear correlation with human evaluation [74], which cannot meet the needs of dialog model evaluation, while human evaluation metrics are time-consuming.

On the Generalization Ability of the Model: When a trained dialog model faces new business scenarios and new data sets, its generalization ability may be poor. In particular, with the continuous updating and enlargement of data sets, it is time-consuming and computationally costly to continuously retrain the dialog model.

On Better Personification: Everyone has a unique personality, which is a combination of emotions, values, character, and other factors. Existing dialog robots can generate personalized responses with emotions, but they still give people a sense of inauthenticity.

On the Reasoning Ability: The dialog products available on the market give people the feeling of not being smart. The main reason is that the existing dialog system cannot reason like human beings, which is the most critical factor restricting the development of the dialog system toward a higher level of intelligence.

Future Research Directions:

Construction of a High-Quality Large Data Set: The existing models are all data-driven, and the quality and scale of the data are very important to them. Therefore, how to build a large, and especially a high-quality, data set is an important research direction.

Establishment of a Dialog Evaluation System: The evaluation metrics designed in the future should be able to evaluate the reasonableness of a response at the different granularities of characters, words, and sentences. Kannan and Vinyals [28] try to use a GAN to evaluate the dialog system, and Lowe et al. [29] design an automatic dialog evaluation model (ADEM) on the basis of a hierarchical RNN.

Improving the Generalization Ability of the Model: There are two directions worth studying. One is to dynamically integrate the dialog system with a knowledge graph so that the dialog system can continuously learn new knowledge in the process of communicating with human beings. The other is to integrate multidomain knowledge and use transfer learning, which can reduce the errors caused by task switching and improve the generalization ability of the model.

Design of an Anthropomorphic Model: The anthropomorphic design of the dialog system is also a direction worthy of study; we need to maintain the consistency of its personality. Some existing methods control this through the training data sets and rules.

Research on Inductive Reasoning Theory: The reasoning ability of the dialog model has always been a challenging research problem. In recent years, graph neural networks have combined end-to-end learning with inductive reasoning, which is expected to solve the problem that DL cannot reason. Applying inductive reasoning theory in the field of dialog systems, so that the dialog system can also reason, is the key to achieving strong AI.

7. CONCLUSION

The dialog model is crucial to building dialog systems. A comprehensive survey of learning-based dialog models has been presented. In particular, the current development status and construction process of human-machine dialog systems are reviewed, and the typical dialog models and methods, as well as their corresponding advantages and disadvantages, are compared and analyzed. The challenging issues of the high-quality large data set, the generalization ability of the model, better personification, and the reasoning ability, together with the potential research directions, are also briefly discussed.

All-round research on learning-based dialog models is not only of great theoretical significance but also of important social value. On the one hand, research in this field will help us understand the nature of human dialog and promote the development of dialog systems and their related technologies. On the other hand, it will bring bright prospects for the collaborative development of linguistics and AI.

REFERENCES:

[1] J. Weizenbaum, ELIZA: A computer program for the study of natural language communication between man and machine, Commun. ACM, vol. 9, no. 1, pp. 36-45, 1966.

[2] K. M. Colby, Artificial Paranoia: A Computer Simulation of Paranoid Processes. New York, NY, USA: Pergamon, 1975.

[3] R. S. Wallace, The anatomy of A.L.I.C.E., in Parsing the Turing Test. Berlin, Germany: Springer, 2009, pp. 181-210.

[4] G. Hinton et al., Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups, IEEE Signal Process. Mag., vol. 29, no. 6, pp. 82-97, Nov. 2012.

[5] L. Deng et al., Recent advances in deep learning for speech research at Microsoft, in Proc. IEEE Int. Conf. Acoust., Speech Signal Process., Vancouver, BC, Canada, May 2013, pp. 8604-8608.

[6] W. Xiong et al., Toward human parity in conversational speech recognition, IEEE/ACM Trans. Audio, Speech, Lang. Process., vol. 25, no. 12, pp. 2410-2423, Dec. 2017.

[7] Y. Qian, Y. Fan, W. Hu, and F. K. Soong, On the training aspects of deep neural network (DNN) for parametric TTS synthesis, in Proc. IEEE Int. Conf. Acoust., Speech Signal Process. (ICASSP), Florence, Italy, May 2014, pp. 3829-3833.

[8] A. van den Oord et al., WaveNet: A generative model for raw audio, 2016, arXiv:1609.03499. [Online]. Available: http://arxiv.org/abs/1609.03499

[9] Y. Bengio, R. Ducharme, P. Vincent, and C. Janvin, A neural probabilistic language model, J. Mach. Learn. Res., vol. 3, pp. 1137-1155, Feb. 2003.

[10] I. Sutskever, O. Vinyals, and Q. V. Le, Sequence to sequence learning with neural networks, in Proc. Adv. Neural Inf. Process. Syst., Montreal, QC, Canada, 2014, pp. 3104-3112.

[11] G. Salton and M. J. McGill, Introduction to Modern Information Retrieval. New York, NY, USA: McGraw-Hill, 1986.

[12] A. M. Elkahky, Y. Song, and X. He, A multi-view deep learning approach for cross domain user modeling in recommendation systems, in Proc. 24th Int. Conf. World Wide Web (WWW), Florence, Italy, 2015, pp. 278-288.

[13] G. E. Hinton, S. Osindero, and Y.-W. Teh, A fast learning algorithm for deep belief nets, Neural Comput., vol. 18, no. 7, pp. 1527-1554, 2006.

[14] D. Bahdanau, K. Cho, and Y. Bengio, Neural machine translation by jointly learning to align and translate, 2014, arXiv:1409.0473. [Online]. Available: http://arxiv.org/abs/1409.0473

[15] G. Mesnil, X. He, L. Deng, and Y. Bengio, Investigation of recurrent-neural-network architectures and learning methods for spoken language understanding, in Proc. Interspeech, 2013, pp. 3771-3775.

[16] H. Zhang, R. Kishore, R. Sharman, and R. Ramesh, Agile integration modeling language (AIML): A conceptual modeling grammar for agile integrative business information systems, Decis. Support Syst., vol. 44, no. 1, pp. 266-284, Nov. 2007.

[17] Z. Ni and S. Paul, A multistage game in smart grid security: A reinforcement learning solution, IEEE Trans. Neural Netw. Learn. Syst., vol. 30, no. 9, pp. 2684-2695, Sep. 2019.

[18] E. Levin, R. Pieraccini, and W. Eckert, Using Markov decision process for learning dialogue strategies, in Proc. IEEE Int. Conf. Acoust., Speech Signal Process. (ICASSP), Seattle, WA, USA, May 1998, pp. 201-204.

[19] I. Sutskever, O. Vinyals, and Q. V. Le, Sequence to sequence learning with neural networks, 2014, arXiv:1409.3215. [Online]. Available: http://arxiv.org/abs/1409.3215

[20] S. Hochreiter and J. Schmidhuber, Long short-term memory, Neural Comput., vol. 9, no. 8, pp. 1735-1780, 1997.

[21] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, Empirical evaluation of gated recurrent neural networks on sequence modeling, in Proc. Deep Learn. Represent. Learn. Workshop (NIPS), Montreal, QC, Canada, 2014, pp. 1-9. [Online]. Available: http://arxiv.org/abs/1412.3555

[22] Z. Yang, K. Merrick, L. Jin, and H. A. Abbass, Hierarchical deep reinforcement learning for continuous action control, IEEE Trans. Neural Netw. Learn. Syst., vol. 29, no. 11, pp. 5174-5184, Nov. 2018.

[23] B. Gokce and H. L. Akin, Implementation of reinforcement learning by transferring sub-goal policies in robot navigation, in Proc. 21st Signal Process. Commun. Appl. Conf. (SIU), Haspolat, Turkey, Apr. 2013, pp. 1-4.

[24] A. Jeerige, D. Bein, and A. Verma, Comparison of deep reinforcement learning approaches for intelligent game playing, in Proc. IEEE 9th Annu. Comput. Commun. Workshop Conf. (CCWC), Las Vegas, NV, USA, Jan. 2019, pp. 0366-0371.

[25] M. Xu, H. Shi, and Y. Wang, Play games using reinforcement learning and artificial neural networks with experience replay, in Proc. IEEE/ACIS 17th Int. Conf. Comput. Inf. Sci. (ICIS), Singapore, Jun. 2018, pp. 855-859.

[26] C. Danescu-Niculescu-Mizil, M. Gamon, and S. Dumais, Mark my words!: Linguistic style accommodation in social media, in Proc. 20th Int. Conf. World Wide Web (WWW), Hyderabad, India: ACM, 2011, pp. 745-754.

[27] R. Lowe, N. Pow, I. Serban, and J. Pineau, The ubuntu dialogue corpus: A large dataset for research in unstructured multiturn dialogue systems, 2015, arXiv:1506.08909. [Online]. Available: http://arxiv.org/abs/1506.08909

[28] A. Kannan and O. Vinyals, Adversarial evaluation of dialogue models, 2017, arXiv:1701.08198. [Online]. Available: http://arxiv.org/abs/1701.08198

[29] R. Lowe, M. Noseworthy, I. V. Serban, N. Angelard-Gontier, Y. Bengio, and J. Pineau, Towards an automatic Turing test: Learning to evaluate dialogue responses, in Proc. 55th Annu. Meeting Assoc. Comput. Linguistics, Vancouver, BC, Canada, 2017, pp. 1116-1126.

[30] P.-S. Huang, X. He, J. Gao, L. Deng, A. Acero, and L. Heck, Learning deep structured semantic models for Web search using clickthrough data, in Proc. 22nd ACM Int. Conf. Inf. Knowl. Manage. (CIKM), San Francisco, CA, USA, 2013, pp. 2333-2338.

[31] S. Wan et al., A deep architecture for semantic matching with multiple positional sentence representations, in Proc. 30th AAAI Conf. Artif. Intell., Phoenix, AZ, USA, 2016, pp. 2835-2841.

[32] L. Pang et al., Text matching as image recognition, in Proc. 30th AAAI Conf. Artif. Intell., Phoenix, AZ, USA, 2016, pp. 2793-2799.

[33] Y. Wu, W. Wu, C. Xing, M. Zhou, and Z. Li, Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots, in Proc. 55th Annu. Meeting Assoc. Comput. Linguistics, Vancouver, BC, Canada, 2017, pp. 496-505.

[34] X. Zhou et al., Multi-turn response selection for chatbots with deep attention matching network, in Proc. 56th Annu. Meeting Assoc. Comput. Linguistics, Melbourne, VIC, Australia, 2018, pp. 1118-1127.

[35] J. Wang et al., IRGAN: A minimax game for unifying generative and discriminative information retrieval models, in Proc. 40th Int. ACM SIGIR Conf. Res. Develop. Inf. Retr. (SIGIR), Shinjuku City, Japan, 2017, pp. 515-524.

[36] Z. Zhang et al., Modeling multi-turn conversation with deep utterance aggregation, in Proc. 27th Int. Conf. Comput. Linguistics, Santa Fe, NM, USA, 2018, pp. 3740-3752.

[37] P. Anguraj and T. Krishnan, Design and implementation of modified BCD digit multiplier for digit-by-digit decimal multiplier, Analog Integr. Circuits Signal Process., pp. 1-12, 2021.

[38] T. Krishnan, S. Saravanan, A. S. Pillai, and P. Anguraj, Design of high-speed RCA based 2-D bypassing multiplier for FIR filter, Mater. Today Proc., Jul. 2020, doi: 10.1016/j.matpr.2020.05.803.

[39] T. Krishnan, S. Saravanan, P. Anguraj, and A. S. Pillai, Design and implementation of area efficient EAIC modulo adder, Mater. Today Proc., vol. 33, pp. 3751-3756, 2020.