The Emotional and Moral Issues of ChatGPT and Its Challenges and Responses to Higher Education

DOI : 10.17577/IJERTV12IS100032


1 Jia Xin Xie, 2 Lu Ran

1Lecturer, Academy of Marxism, Nanjing University of Finance and Economics, Nanjing, China

2(Corresponding Author) Lecturer, Academy of Marxism, Nanjing Agricultural University, Nanjing, China

Abstract: The emotions of the artificial emotional intelligence represented by ChatGPT are fundamentally different from human emotions. Its moral empowerment is unrealistic, highly deceptive, and prone to falling into the paradox of moral responsibility attribution. When ChatGPT is widely applied in higher education, these emotional and moral issues will pose serious challenges to it. ChatGPT will not only completely change the mode in which human knowledge and emotions are produced, thereby transforming the goals of higher education, but will also lead to students' naivety and emotional singularity, and to a two-way weakening of teacher-student relationships in universities. Therefore, higher education must always adhere to the principle of educating people, strengthen communication and interaction between teachers and students and among students themselves, enhance technological ethics education, provide students with rich and diverse emotional practice activities, promote the healthy and rich development of young people's emotional morality, and resist the emotional and moral risks brought by ChatGPT.

Keywords: ChatGPT, Emotional morality, Higher education, Technology ethics


    ChatGPT, as the most advanced natural language processing technology, has attracted wide attention and discussion in the fields of economy, society, culture and education, including discussion of its emotional ethics and their implications for the future of higher education. Scholars' attitudes towards the relationship between ChatGPT and higher education are polarized. Optimists believe that ChatGPT, as the highest achievement of human intelligence technology to date, will certainly promote the continuous reform of higher education, and in turn the progress and development of the whole society. But more voices have expressed fear of the technological power of ChatGPT, with some scholars even calling it the terminator of higher education. However, scholars have not answered unanimously why and how ChatGPT will threaten higher education. Some argue that ChatGPT will seriously challenge the academic integrity of higher education. Some believe that ChatGPT will subvert higher education and even the whole mode of human knowledge production, thus revolutionizing the nature of higher education. Others argue that ChatGPT raises ethical questions. In fact, all of these issues that may arise from ChatGPT boil down to issues of emotional morality. This is ChatGPT's most fatal flaw as an artificial intelligence: it cannot possess or understand human emotions; in other words, human emotions, desires, moral ethics and other needs cannot be satisfied through ChatGPT. On the contrary, it represents the mastermind of human reason, adept at putting all problems, even emotional ones, into the framework of rational analysis. Once perceptual emotion is deconstructed by reason, digitized and programmed, it will lead to a series of emotional and moral problems. Because of its cognition and suggestions on emotional and moral

    1 The thesis is the achievement of the University Philosophy and Social Science Research Project of Jiangsu Province (2021SJA0286).

    issues, it cannot fully adapt to and meet the real emotional needs of human beings, and may even have a negative impact on the values of young people. So how should higher education, as the main front for shaping the values of young people, face the dark horse that is ChatGPT? It is urgent for us to clarify what the emotional and moral issues of ChatGPT are, what impacts and challenges they will bring to higher education, and how higher education should respond to them.


    With the development of artificial emotion embedding technology, the issue of moral empowerment is becoming increasingly prominent. To address it, we first need to clarify the concepts of artificial emotional agents and their moral empowerment. So-called Artificial Emotional Intelligence, or Emotion AI, embeds emotions into artificial intelligence, endowing it with emotional perception through technologies such as affective computing and artificial psychology and thereby making its intelligence more comprehensive. Because such agents possess perceptual abilities beyond cognitive abilities, the issue of their moral empowerment arises. Moral empowerment aims to enable the subject of action to obtain moral norms through empathetic reflection, autonomously and consciously stimulate internal moral potential, and comply with moral norms in practice, thus becoming a moral subject with the power of moral action. [1] So, what kind of emotions does an artificial emotional agent possess? Does such an agent have moral autonomy? With the birth of ChatGPT, these issues have been pushed to the forefront.

    We can first analyse this issue through a conversation case with ChatGPT.

    Q: My friend and I both have a crush on the same girl. What should we do? ChatGPT provided the following four suggestions:

    1. Stay calm. Even if you feel very frustrated or angry, try to remain calm. Don't make decisions or actions when you are emotionally excited.

    2. Talk frankly. You can openly communicate your feelings and opinions with your friends. Let him know how you feel about this girl and how much you value friendship.

    3. Respect each other's feelings. Respecting each other's feelings is the key to handling this situation. You need to respect your friend's feelings, and he also needs to respect your feelings.

    4. Seek compromise. Try to find a solution that makes you and your friends feel comfortable and satisfied. This may include finding other options or finding more time to think and make decisions.

    When asked how to satisfy each other and seek compromise, ChatGPT also gave four suggestions:

    1. Try to find other options: … You can find other people to meet and help each other better understand your feelings and needs.

    2. Share time: If both you and your friends have feelings for the same girl, you can socialize with her separately, but try to avoid mutual interference and competition. You can arrange a time to take turns socializing with her, so that everyone has the opportunity to be with her.

    3. Respect the other party's decision: If your friend expresses their feelings to this girl first, then you need to respect their decision. Similarly, if this girl expresses her feelings towards your friend, then your friend also needs to respect her decision.

    4. Maintaining friendship: Regardless of the final outcome, maintaining friendship is very important. If you and your friends lose contact due to this matter, then both of you will feel very sorry. No matter what you do, you must do your best to maintain friendship.[2]

    It can be seen that throughout these suggestions, ChatGPT keeps trying to empathize with the interviewer and to provide answers that satisfy them. These answers are the objectification of general emotions, emotions that have been accurately calculated by reason. In particular, the second suggestion above proposes a seemingly beautiful solution that goes against contemporary human emotional morality: how could one share the time spent with a beloved girl with a good friend? This is behavior that the current emotional and moral standards of society would never allow. Yet ChatGPT, as the new generation of artificial intelligence, offered exactly this suggestion. We cannot help but ask: does ChatGPT's emotional resonance

    conform to human moral standards? At least in the present case, the answer given by ChatGPT is almost rationally perfect, yet it cannot be accepted by the moral standards of current society. This raises the question of the ethical standards and risks of artificial emotional agents represented by ChatGPT.

    Firstly, the moral abilities of ChatGPT are unrealistic. Since the general emotions of ChatGPT are abstract rather than real, it follows that ChatGPT cannot perceive and judge morality in reality. The moral ability it possesses is nothing more than a general morality closely linked to general emotions, one that excludes the specific historical conditions under which morality is generated. Yet, like human emotions, morality is not a rational calculation; it constantly changes with the changing conditions of human society, and different individuals in different cultural backgrounds follow different moral norms. Once morality becomes an exact algorithm, an abstract general morality, it cannot apply to every individual, leading to a series of human-machine conflicts. Many artificial emotional agents already face moral accusations: some believe that emotional misjudgment and bias by emotional robots may cause emotional and even personal harm. For example, nursing robots can evoke strong emotions in patients through interaction and simulation during assisted treatment. These emotions may be beneficial for treatment, but they may also worsen the condition through the robot's intervention or emotional transference. More fundamentally, one thing should be very clear: emotional robots, as they exist today, manipulate human psychology to achieve optimal results. Humans seem to have many evolved psychological weaknesses that can be used to make users accept simulated emotions as if they were real. Leveraging deep-seated psychological weaknesses produced by evolutionary pressure is unethical, because it harms humans and disrespects the function of reason. [3] It can be seen that when humans and robots are in a unidirectional emotional relationship, human empathy can be manipulated and exploited by emotional robots, which is not only immoral but carries unimaginable consequences.

    Secondly, the moral abilities of ChatGPT are deceptive. Why are artificial emotional agents represented by ChatGPT deceptive? Mark Coeckelbergh summarized the reasons in three respects: i. emotional robots attempt to deceive with their 'emotions'; ii. the emotions of robots are not real; iii. emotional robots pretend to be entities, but they are not. [4] The logic of these three points is progressive, and the root of 'deception' lies in the non-biological character of robots themselves. Robots built from electronic components, algorithms and other constituent elements cannot generate real emotions in a biological sense, and the unreal emotions they express constitute deception. Robert Sparrow likewise pointed out the deceptive character of robots by comparing robots with organisms. He argues that although pet robots may be designed to behave very much like real animals, their behavior remains mere imitation; in particular, robots do not have any feelings or experiences, and at most have complex mechanisms to mimic emotional states.[5] Amanda Sharkey and Noel Sharkey analysed the problem of robot deception from the perspective of anthropomorphism, arguing that designing robots to encourage anthropomorphic attribution may be seen as an unethical form of deception.[6]

    Thirdly, artificial emotional agents represented by ChatGPT fall into a paradox of moral responsibility attribution. We know that ChatGPT is essentially a tool, not a person with genuine emotions and moral values. Yet it is precisely the answers of such a tool, increasingly elevated to the altar of human reason, that are likely to trigger a series of social and moral problems, especially negative effects on the shaping of young people's values. Suppose ChatGPT gives unethical advice to interviewers who happen to be teenagers who have not yet formed a firm subjective will; they may act on ChatGPT's advice. If this leads to moral misconduct (such as taking turns spending time with the girl both friends like), who should take responsibility? If ChatGPT were a subject with genuine moral abilities, the responsibility would be borne by it. But in reality it is not, so should the responsibility be borne by ChatGPT's creators? The creators (OpenAI) may not have implanted these potentially unethical answers when building it: the answers are autonomously generated by ChatGPT from vast databases according to the speaker's context, needs, and emotional feedback. In this respect ChatGPT is independent of its creators, so its moral responsibility should not be borne by them either. This creates a clear paradox of moral responsibility attribution, a vacuum of moral responsibility between ChatGPT and its creators.

    Given the emotional and moral dilemmas ChatGPT may face, how to use it reasonably in higher education and respond scientifically to the challenges it may bring is itself a challenge. Higher education shoulders the main responsibility for shaping the values of young people. If the emotional and moral issues of ChatGPT cannot be effectively avoided, its wide application in higher education will have a significantly adverse impact on the shaping of young people's values and even on the development of the whole society. Therefore, we must further clarify the main tests and challenges that ChatGPT's emotional and moral issues may pose to future higher education, so as to adopt countermeasures, leverage strengths while avoiding weaknesses, and make technology better serve education.


    As Carl Benedikt Frey argues in The Technology Trap, in the long run the technological revolution will benefit everyone, but in the short term there are hidden traps, and the development potential of a technology depends on whether the short term can be well managed. [7] With the birth of ChatGPT (especially GPT-4), its widespread application in education is unstoppable; but if we do not clarify the risks and challenges that its emotional and moral issues will bring to higher education, and effectively control and constrain them, the consequences will be unimaginable. So, based on the emotional and moral issues revealed above, what risks and challenges that ChatGPT will bring to future higher education cannot be ignored?

    1. ChatGPT will completely change the production mode of human knowledge and emotions, thereby transforming the educational goals of higher education

      Higher education is the main battlefield for cultivating excellent successors for the motherland and society, undertaking the important task of cultivating and shaping young people's worldview, outlook on life, and values; its fundamental purpose is to cultivate morality and cultivate people. Since Bacon's assertion that knowledge is power, people have tended to understand the significance of knowledge only from the perspective of epistemology, believing knowledge to be merely a product of humans understanding and changing the world. On this view, the main responsibility of higher education is to impart knowledge to students. But this tradition overlooks the viewpoints of ontology and philosophical anthropology, treating knowledge and human nature as binary opposites, as if knowledge had no essential bearing on what human beings are. In fact, the knowledge humans produce, and the ways in which it is produced, shape the human essence. In this regard, higher education is not only a means of imparting knowledge, but also a guide for young people's moral cultivation, emotional enlightenment, and value formation; it is the main nurturing ground in which young people become fully human. With the birth of ChatGPT, and especially GPT-4, however, this role of higher education may undergo fundamental change.

      The current performance of ChatGPT proves that generative artificial intelligence has the ability to produce knowledge, and that its mode of production differs from the human one. As mentioned earlier, the development concept behind ChatGPT is to give machines an unsupervised learning ability, training on all content with natural language models. The core logic of training ChatGPT is to train a neural network to predict the next word, discover the regularities in existing data, and generate new knowledge. This new arrangement and combination in fact amounts to a new mode of knowledge production: natural language processing and machine learning technologies can process and analyze massive amounts of data and information far faster than humans, enabling the discovery of new knowledge and patterns. [8] It is precisely on this basis that ChatGPT also grasps the essential human emotional elements contained in language and possesses the ability to produce emotions. It not only understands human prompts, but also presents responses similar to human knowledge and emotions. This means that not only the production of human knowledge but also the production of human emotions is shifting toward a human-machine synthetic mode. After synthetic biology, we now see the emergence of synthetic knowledge and even synthetic emotions. It is difficult to predict where this may lead, but it is certain that with the emergence of artificially generated knowledge and emotions, knowledge, emotions, and their modes of production, elements that define human nature, are undergoing disruptive change. The direct consequence is to subvert the educational

      goals of higher education.
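      The next-word training objective described above can be sketched in a deliberately minimal way. The toy bigram counter below is purely illustrative and bears no resemblance in scale or sophistication to ChatGPT's neural networks; the sample corpus and the function name `predict_next` are invented for this example:

```python
# Toy illustration of next-word prediction, the training objective described
# above. Real systems use large neural networks; this bigram counter is a
# deliberately minimal stand-in for the same underlying idea: learn which
# word tends to follow which from existing data, then generate from that.
from collections import Counter, defaultdict

corpus = ("the teacher guides the student and the teacher answers "
          "the student and the teacher listens").split()

# Count which word follows which: an unnormalized estimate of P(next | current).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "teacher" follows "the" 3 times, "student" only 2
```

Even this trivial model shows how "new arrangements and combinations" of existing language can be emitted without any understanding of what the words mean, which is the point the paragraph above makes about knowledge production.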

      Since modern times, the goal of higher education has been to cultivate morality and cultivate people, to make young people moral, knowledgeable, and useful to society. However, with the development of science and technology, some originally human abilities and attributes, such as thinking, decision-making, and perception, are being projected onto artificial intelligence, and ChatGPT is the latest achievement of this projection. It has powerful knowledge retrieval and generation capabilities far exceeding the memory capacity of the human brain. Once ChatGPT is widely applied in higher education, teachers risk becoming a redundant role. Students will no longer rely on teachers to impart knowledge, nor to shape and guide their personalities; they can obtain massive amounts of knowledge and information from ChatGPT and meet their psychological and emotional needs through conversations with it. In that case, if higher education wants to survive in the future human-machine world, it can only change its own educational goals: no longer imparting knowledge and guiding emotions, but teaching students how to ask better questions and make better use of ChatGPT. As mentioned earlier, ChatGPT can create new arrangements and combinations of language in response to different user questions, generating new knowledge and emotions. Whether ChatGPT

      can provide deeper and more comprehensive knowledge and emotions depends on how the user asks questions: the deeper the user's questions, the deeper ChatGPT's answers. Cultivating young people to better question this new 'god' of humanity (AI) may thus become the main goal of future higher education. Once this speculation comes true, higher education and even the whole of human society will face comprehensive change.

    2. The emotional and moral issues of ChatGPT will lead to students' naivety and emotional singularity

      During their college years, young people are in a critical period of personality formation and encounter a series of psychological and emotional changes and confusions. They have not yet formed settled values and emotional outlooks, are easily misled by negative views, and need guidance and assistance from teachers in life, psychology, and emotions. However, with the widespread application of ChatGPT, students will no longer rely on teachers for knowledge transfer, for help with papers and assignments, or even for psychological counseling and emotional guidance. They will be more inclined to seek ChatGPT's help, since human-machine dialogue saves them much of the inconvenience and trouble of communicating with teachers and fellow students. With just a phone or computer they can access massive amounts of knowledge and emotional comfort anytime and anywhere, without even having to go to the classroom to hear boring lectures or handle complex interpersonal relationships. Yet such a state of human-machine coexistence may not be beneficial to the growth of young people.

      Firstly, the application of ChatGPT may lead to students becoming increasingly childish through excessive reliance on machines. First, excessive use of ChatGPT will reduce communication and interaction between teachers and students, producing excessive trust in machines and reduced emotional communication between people. Students who blindly believe ChatGPT's answers will gradually lose the ability to think for themselves and distinguish right from wrong, leading to a crisis of youthful naivety. Second, if the relationship between teacher, machine, and student is not handled properly, ChatGPT may be seen as a tool for monitoring students' actions, making students feel insecure. Once students become subjects of surveillance, the development of their thinking will be limited and their social abilities will keep declining, deepening both the trust crisis and the childish tendency. Third, because ChatGPT tends to address specific knowledge points when answering questions, its answers often fragment holistic and systematic knowledge, so that what students learn becomes increasingly fragmented. This not only makes it difficult for students to grasp the comprehensiveness and interconnection of knowledge, but also hinders the development of their capacity for innovation and thought, leaving them increasingly ignorant.

      Secondly, the unrealistic and deceptive nature of ChatGPT's emotions will lead to a homogenization of students' emotions. As revealed earlier, the emotions generated by ChatGPT are general emotions; they cannot fully simulate the diversity of human emotions, being only abstractions of the universal features of human emotion, without moral ability and with strongly deceptive properties. In the use of ChatGPT, this deception shows itself in its tendency to give whatever answer

      the user wants to hear, rather than necessarily the correct one. This is especially true of users' emotional needs: in the earlier case, when I seek ChatGPT's help on falling in love with the same girl as my good friend, ChatGPT tends to give me a solution that sounds perfect and comfortable to me but completely disregards the moral principles of real society. In the long run, young people will grow accustomed to receiving positive emotional comfort from ChatGPT but find it difficult to accept negative emotions in real life, leading them to avoid real-life relationships. Their emotional experience will become increasingly singular. Youth represent the hope of the whole society's future development; if their emotions are no longer rich and diverse, and they indulge in a non-critical, unrealistic human-machine emotional fantasy, it is not difficult to foresee what terrible disasters the whole society and even the world will face.

      Finally, the emotional and moral issues of ChatGPT may trigger social identity risks, leading to irresponsible ethical risks and social consequences. As mentioned earlier, the attribution of moral responsibility for artificial emotional agents represented by ChatGPT has not yet been resolved. Although ChatGPT can give answers conforming to social norms on obvious racial discrimination, prejudice, and similar content, on specific ethical issues it may misunderstand or cater to user preferences, which will greatly affect users' moral judgment. If students are exploited by malicious actors while using ChatGPT, various social biases and improper uses are highly likely to arise among young people, with irresponsible ethical risks and social consequences.

    3. The emotional and moral dilemma of ChatGPT will lead to a two-way weakening of the teacher-student relationship in universities.

    The main task of university teachers has always been to impart knowledge and skills and to provide emotional and moral guidance. The teacher-student relationship is a correspondence between teaching and learning, with students, starting in a largely ignorant state, constantly receiving knowledge and emotional guidance from teachers. The application of ChatGPT may completely overturn this relationship, making teaching a redundant profession. Not only will students no longer need teachers, but teachers will also become unwilling to teach increasingly naive students, falling into self-doubt. The most central relationship in higher education, that between teacher and student, will face the danger of two-way weakening.

    On the one hand, the use of ChatGPT will make education more efficient and convenient, and students will no longer rely on teachers for the transmission of knowledge. On the contrary, they will believe more in the correctness of ChatGPT and treat it as their teacher. Over time, university teaching will become a 'redundant' profession: the 'sacred halo' of teachers will be replaced, and the crisis of trust between students and teachers will reach its peak. At the same time, as students develop childishly, teaching will become increasingly difficult and teachers will fall into self-doubt. Yet without teachers, can students truly grow into outstanding talents under ChatGPT's guidance alone? I think not. As revealed earlier, in the long run the knowledge and emotions generated by ChatGPT come from humans themselves; once human knowledge and emotions cease to develop, the sanctity of ChatGPT will also collapse. In the short term, even if ChatGPT can replace teachers in imparting knowledge, it easily makes the educational process mechanized and lacking in human touch. It cannot present knowledge 'vividly' to students as teachers do, and cannot make knowledge concrete and practical. Students' acquisition of knowledge becomes too simplistic, leaving them without deep understanding and reflection; their sense of reward in learning will plummet, to the point of disliking learning and falling into a vicious cycle of childishness. Therefore, teachers must change their traditional roles, improve the creativity and diversity of their teaching methods, and combine ChatGPT's knowledge and information with life practice and their own teaching characteristics, making students' learning more interesting; only then can they hold students' attention.

    On the other hand, the general emotional resonance of ChatGPT may replace the teacher's role as spiritual mentor, leading to a complete weakening of the teacher-student relationship. As mentioned above, university teachers are not only imparters of knowledge but also spiritual mentors during the personality-formation period of college students. College students, in a period of youthful development, encounter many emotional and moral difficulties that require correct and reasonable guidance from teachers. However, ChatGPT's general emotional resonance tends to provide students with emotionally soothing words

    and suggestions they enjoy hearing, making them easily addicted to such emotional fantasies. Students will then no longer need teachers to answer their questions, give psychological guidance, or offer emotional comfort. Communication and interaction between students and teachers will gradually decrease or even become unnecessary, and teachers and students will drift further and further apart, exacerbating the trust crisis between them. Therefore, university teachers may face a new transformation: changing their traditional role as spiritual mentors and paying more attention to providing

    realistic and critical emotional and moral guidance to young people at appropriate times.


    In the face of a series of emotional and moral issues brought about by the application of ChatGPT in higher education, we need to adhere to the following three basic principles when utilizing this new artificial intelligence tool.

    1. Adhere to the principle of educating people first

      Putting education first is the educational philosophy that higher education should adhere to. As the main battlefield for cultivating young talents, universities should not only be responsible for imparting the most cutting-edge professional knowledge, but also maintain the organic connection between education and human emotions and morals, human values and dignity, human needs for happiness, and human free and comprehensive development, shaping people with the spirit of putting education first and cultivating them with the concept of comprehensive development. Under this basic principle, higher education should avoid using ChatGPT solely for the convenience and efficiency this large language model brings to producing knowledge while neglecting the more fundamental educational goals of higher education. In particular, once some teaching tasks of university teachers can be replaced by ChatGPT, teachers may gradually become appendages of this new kind of intelligent machine, and the space for classroom teaching will be squeezed. As a result, practical interaction between university teachers and students will be limited, making it difficult for students to reflect on questions of life's meaning such as emotion and morality. Because technology itself lacks emotional morality and related factors, this may leave deficiencies in teachers' cultivation of students' emotional morality and life values, and emotional and moral education will face significant challenges. In cultivating excellent, well-rounded talents, while professional knowledge and learning ability are important, the cultivation of a sound personality cannot be ignored.
If we value only the former while neglecting the latter, then the cultivation of the richness that distinguishes humans from animals (emotional life, sense of value and dignity, ability to attain happiness, sound personality, etc.) will not be guaranteed.

      From this it can be seen that, in an intelligent modern society, university education must resolutely avoid neglecting the attention to and cultivation of students' emotional and moral qualities because of the use of new intelligent technologies such as ChatGPT. The principle of putting education first reflects education's inherent commitment to cultivating students with sound personalities; adhering to it helps ensure that contemporary university education attaches importance to cultivating the emotional and moral values of college students.

    2. Strengthen the practical interaction between teachers and students

      As Karl Marx observed, the human essence is no abstraction inherent in each single individual; in its reality it is the ensemble of social relations.[9] As social beings, humans acquire emotional experience and moral cognition through practical interaction. With the widespread use of ChatGPT in universities, the ways teachers and students communicate will change profoundly, and the cultivation of students' emotional and moral abilities will be affected by this change. In conversation with ChatGPT, the new artificial intelligence tools can indeed provide students with more efficient, comprehensive, and intelligent learning assistance. Compared with the traditional classroom, communication between students and artificial intelligence is not limited by time and space: a student can interact with ChatGPT and ask it questions anytime, anywhere. Moreover, thanks to ChatGPT's powerful database and deep learning algorithms, it can easily cross disciplinary barriers and, through multiple rounds of questioning, provide students with more in-depth answers.

      However, artificial intelligence also has its limitations. On the one hand, humans cannot convey their true emotional needs through this mode of communication, while the intelligent machine adheres to the rational logic of algorithms and data analysis and lacks the moral, ethical, and other worldly factors inevitably involved in human activity; the two parties therefore cannot resonate emotionally or morally. On the other hand, interaction with intelligent machines may lead some students into self-isolation and loneliness, impairing the development of their interpersonal and social skills. The development of ChatGPT has thus profoundly affected how college students, and indeed human society as a whole, interact. Although ChatGPT offers students efficient and comprehensive access to a vast store of existing human knowledge, it lacks the positive elements, such as emotional satisfaction and moral value, that a person experiences in real interpersonal communication.

    3. Enhance technological ethics education

    In the field of higher education, one moral issue that cannot be ignored is academic integrity. Of particular concern is the use of ChatGPT, which may bring risks of academic misconduct and ethical violations; these are issues higher education institutions must attend to. Universities should strengthen academic integrity education for young students, teachers, and other researchers, making good use of modern new media to expand publicity and education on research integrity. Teachers and students who exemplify research integrity can be vigorously promoted through online platforms and other channels so that they serve as models for others. At the same time, teachers or students who violate research integrity requirements should be criticized and punished accordingly, so that such cases serve as warnings; those who seriously violate these requirements must be strictly punished without tolerance. Only in this way can a healthy, upward-striving academic atmosphere be maintained in the ivory tower of the university.

    At the same time, universities are not only important agents of science and technology ethics governance but also important venues for science and technology ethics education. Universities should continuously explore and improve mechanisms for cultivating talent in technology ethics, help young students form correct ethical views of science and technology, and take the cultivation of a professional, high-quality corps of technology ethics talent as an important mission. In terms of concrete measures: first, in curriculum design, universities should establish a curriculum system on science and technology ethics and strengthen students' education in this area. Second, in teaching resources, experts and scholars in technology ethics should be organized to write textbooks suited to college students; the textbooks should be close to the context of students' lives and should be cutting-edge, thought-provoking, and readable. Third, in teaching formats, emphasis should be placed on variety and interest, so that through rich and diverse forms and content students come to understand and grasp the ethical issues of emerging frontier technologies. Finally, attention should be paid to building the technology ethics teaching staff in universities, training a group of highly capable teachers in this field to lay a solid foundation for science and technology ethics education.


As some scholars have pointed out, the future is not a battle between technology and humanity but a fusion of the two; the emergence of ChatGPT will prompt educators to focus on teaching what intelligent technology is not good at.[10] Emotional and moral issues, an area where intelligent technology falls short, need to be taken seriously by universities. As ChatGPT continues to be refined, developed, and popularized, it will be widely used in higher education and will bring many more complex moral and emotional issues. This article is only a preliminary exploration of the emotional and moral issues raised by ChatGPT. As artificial intelligence technology continues to develop, responding to and averting the emotional and moral problems of college students will face many new challenges. This requires us to continuously improve the corresponding solutions in light of the latest progress in intelligent technology. In the future, educators in technology ethics need to keep abreast of the frontier of technological development, track its dynamics, and grasp the changes and impacts these technologies bring to university education. Only in this way can universities effectively address the deficiencies in students' emotional and moral abilities brought about by new artificial intelligence such as ChatGPT.


[1] ShengLan Hu, An Analysis of the Moral Empowerment of Artificial Emotional Intelligence, Studies in Dialectics of Nature, 2023, vol. 39, p. 74. (in Chinese)

[2] The above cases are all from the author's conversations with ChatGPT.

[3] Sullins J. P., Robots, Love and Sex: The Ethics of Building a Love Machine, IEEE Transactions on Affective Computing, 2012, vol. 3, no. 4, pp. 398-409.

[4] Coeckelbergh M., Are Emotional Robots Deceptive?, IEEE Transactions on Affective Computing, 2012, vol. 3, no. 4, pp. 388-393.

[5] Sparrow R., The March of the Robot Dogs, Ethics and Information Technology, 2002, vol. 4, no. 4, pp. 305-318.

[6] Sharkey A., Sharkey N., Children, the Elderly and Interactive Robots, IEEE Robotics & Automation Magazine, 2011, vol. 18, no. 1, pp. 32-38.

[7] Carl Benedikt Frey, The Technology Trap: Capital, Labor and Power under Technological Innovation from the Industrial Revolution to the AI Era, Democratic and Construction Publishing House Co., Ltd., 2021, p. 25. (in Chinese)

[8] JinLin Wang, The War of the Gods: New Tools or New Being, Exploration and Free Views, 2023, vol. 5, pp. 33-35. (in Chinese)

[9] Selected Works of Marx and Engels, vol. 1, People's Publishing House, 2012, p. 135. (in Chinese)

[10] Zou Shuo, ChatGPT's arrival provokes thought on future of education, https:// /a/202302/15/WS63ec658da31057c47ebaeeee.htm, 2023-02-15.