DOI: 10.17577/IJERTCONV14IS020113 – Open Access

- Authors: Satyam Kumar, Chanchal Sharma
- Paper ID: IJERTCONV14IS020113
- Volume & Issue: Volume 14, Issue 02, NCRTCS – 2026
- Published (First Online): 21-04-2026
- ISSN (Online): 2278-0181
- Publisher Name: IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Minds in the Age of Machines
Satyam Kumar
Dr. D. Y. Patil Arts, Commerce & Science College, Pimpri, Pune, Maharashtra, India
Chanchal Sharma
University of Hyderabad, C.R. Rao Road, Gachibowli, Hyderabad, India
Abstract: This study investigated the impact of artificial intelligence on human cognitive thinking, emotional well-being, mental health, and technology-assisted criminal activity. The research is motivated by evidence that artificial intelligence has influenced the human mind, weakening critical thinking, decision-making, and emotional well-being. It also examined whether the influence of artificial intelligence hampers cognitive autonomy. A significant number of people reported emotional dependency and discomfort in solving crucial problems without AI assistance. The study indicates that such people suffer from low confidence and reduced immersion in new ideas. Beyond cognitive and emotional effects, the study also found a correlation between artificial intelligence and criminal activity. The collected data revealed a rise in unethical and illegal practices such as academic dishonesty, AI-generated misinformation, identity theft, deepfakes, and image morphing. Some data also show that AI can play a crucial role in initiating a crime and taking control of the human mind. People involved in AI-assisted crimes showed reduced accountability towards their victims.
The findings show that AI systems contribute to emotional detachment, poor ethical judgment, and vulnerability among both victims and perpetrators of AI-assisted crime. Prolonged dependency on AI has been associated with reduced engagement of the prefrontal cortex, which may lead to neuroplastic and functional alterations. The research suggests that overtrust in AI tools and reduced critical thinking might lead to mass psychosis, which could be a disaster in human history. The study concludes that AI remains a powerful tool for progress, but unregulated dependency poses a significant threat to cognitive independence, social trust, and public safety. The research urges governing authorities to urgently pursue digital ethics education, prevention of tech-driven crime, and a stronger regulatory framework. AI tools should remain supportive assistants rather than decision-making authorities.
Keywords: Artificial Intelligence, Critical Thinking, Psychosis, Emotional Dependency, Cognitive Independence, AI Assistance, AI Tools, Chatbot
INTRODUCTION
Artificial intelligence (AI) has become part of our daily lives, from academic problems to big industries. AI is evolving every day to solve problems that usually require higher skills and human presence. When John McCarthy coined the term AI, his main purpose was to create assistance that helps humans with large tasks and improves them. The rapid development and adoption of AI in recent years has sparked discussion among experts regarding its psychological impact. Large language models (LLMs) such as ChatGPT, Gemini, Grok, and other AI tools have solved many issues, but these tools are designed primarily to keep users engaged through their strong conversational skills. Owing to the personalized environment these tools create, users become attached to the AI bots and can experience unnatural, delusional thoughts. Some users also have prior mental illness, and these interactions can worsen their condition.
Professionals have termed this phenomenon AI psychosis [1]. AI psychosis is defined as dependency on AI tools for emotional attachment combined with delusional thinking; in this state, a person is not capable of making decisions without depending on others. In severe cases, a person starts believing in unreal things that do not exist. Experts also face many challenges in treating these people because they are vulnerable and require extra care. The effect is not limited to vulnerable groups; every group of people is affected, although the severity varies from person to person. People with critical mental health conditions or depression are highly vulnerable, while a person in a good state of mind is affected less. Even so, dependence on AI can have negative consequences. A recent survey conducted by the authors, together with a UNDP survey, reveals that AI is not only a technological support; it has become an active partner in cognitive thinking and decision-making [2, 20]. For many people it functions as supportive help, but a number of people seek justification for their actions, which alters their cognitive ability. The recurring pattern observed in previous studies is that tasks that once required mental struggle, brainstorming, lateral thinking, and patience are now solved through AI tools. As a result, the human mind seeks easier solutions, which AI readily provides; this preference for easy solutions and the accompanying lack of critical thinking are indicators of AI psychosis. From a psychologist's point of view, the data suggest a steady shift in intellectual confidence and self-perception. Many people reported routinely checking their personal thoughts and ideas with AI to seek validation or to have them labeled "good enough." This behavior shows a lack of self-trust. [3, 11, 26]
The impact of AI cuts both ways. On one hand, some students have shown dependency on AI; on the other hand, disciplined users have used the technology to access larger resources and explore complex concepts, which enables them to engage more deeply with difficult problems. These people used AI as an assistant rather than a decision-making authority. However, a significant number of people admitted that their determination to solve problems has declined, and their confidence in writing long code from scratch has also diminished. After obtaining results, code, or answers to their problems, a large number of people do not check the original sources. This trend reveals a deep, unexamined trust in AI systems, with users not even bothering to verify authenticity. [5, 6]
Emotionally, the survey revealed that people use AI tools to talk about their anxiety, stress, and personal problems, and many showed discomfort and panic at the hypothetical removal of AI tools. This indicates reliance on AI at the emotional and psychological levels. AI is becoming a virtual partner: people grow so comfortable with these tools that they start reacting as if the tools were real partners. A human response is grounded in real circumstances, which users sometimes do not like, and a human partner is not always available; people therefore tend to build relationships with chatbots, which offer an escape from reality in which they are always prioritized. From the early days of humanity, the human mind has been shaped to learn through uncertainty and struggle; in the current environment, AI gives an immediate answer, which works against that natural process. There is also ethical concern and increasing anxiety among working professionals about being replaced by AI. Recent media reports suggest that in the next decade, more than 10 million jobs, such as entry-level programming, data entry, and other low-skill roles, could be replaced by AI. At present, AI is still in a transitional phase; it has already changed academic and professional environments, and we cannot ignore it, because it is advancing at a rapid pace and reshaping the world. In this fast-paced world, we must adapt by taking advantage of AI instead of becoming dependent on it. [7, 9]
METHODOLOGY
We conducted a deep analysis of research on mental health, the impact of AI on the human mind, AI psychology, and human behaviour towards AI. We performed a literature analysis of 10 research papers published within the last 6 months. The primary goal was to review existing literature to explore how AI has impacted mental health.
A mixed-methods approach was adopted for this research. Quantitative data were collected through surveys across a diverse demography, supplemented with open-source data and data provided by government bodies (e.g., the Ministry of Electronics and Information Technology and the National Crime Records Bureau).
For qualitative data, we searched multiple databases, including Google Scholar, PsycINFO, and ResearchGate, using search terms such as "AI human interaction," "AI psychosis," "effect of AI on human mind," "emotional dependency on AI," "AI chatbot vulnerable populations," and similar phrases.
Because AI is evolving so rapidly, we also drew on recent media articles and blogs covering emerging cases of AI misconduct, technology-assisted crimes, suicides linked to AI, and criminal cases in which AI was used. We focused on credible media outlets, and all articles were in English.
We also consulted well-known books such as The Quest for Artificial Intelligence to gain deeper knowledge of human behaviour towards AI and AI-related psychology.
LITERATURE REVIEW
Emotional Dependency on AI
Emotional dependency arises when people turn to AI for emotional needs, such as talking to AI tools instead of friends or family. It is like getting hooked on something that is not real. Studies reveal that people treat AI like real humans, sharing secrets and personal feelings as they would with other people. They share their feelings with AI because they seek validation and emotional fulfilment, a situation that often accompanies loneliness, social anxiety, or mental stress. The constant availability of AI tools and personalised messages, engineered to simulate real conversation and offer warm responses without real human complexities like judgment or rejection, explains why some people gravitate towards AI. Some users even share intimate secrets to derive a sense of belonging from AI-generated responses. In the long run this can be very harmful: people may struggle to find real affection in humans, become more easily irritated, and damage their mental health. People need to understand that AI cannot give a real emotional response, because it lacks consciousness and its memory does not extend beyond its programmed data. AI is also unaware of the true nature of an issue and works only on perception. Long-term use of AI tools for emotional support may lead to depression and reduced face-to-face communication and social engagement. The threat is highest among younger demographics and teenagers who cannot afford expensive therapy sessions and therefore seek help through AI, which is not good for their mental health. [3, 5, 7]
Impact on Critical Thinking and Cognitive Independence
Critical thinking is the part of human reasoning in which you try to solve a problem through multiple methods and question the solution. We now skip this step by handing problems to an AI tool, which makes them easy to solve but does not sharpen the brain. The table below summarises the effect of AI on critical thinking and cognitive offloading. [6]
Skill | Human Mind Without AI | With Heavy AI Use
Critical Thinking | Questions, analyzes deeply | Accepts AI answers quickly
Problem-Solving | Tries different ways | Relies on AI suggestions
Memory Retention | Remembers from effort | Forgets as AI handles it

Table 1: Comparison between human thinking and thinking with AI [12]
Extensive use of AI is clearly associated with reduced learning, reduced analytical and problem-solving skills, and a risk of cognitive atrophy. Because of AI algorithms, people tend not to research or verify the sources of the answers they receive, which narrows exploration and increases bias towards the algorithm. This mental erosion deepens once it becomes a habit: it discourages source verification, independent judgment, and reflective evaluation, promoting cognitive laziness. Students who use AI for research work, writing assignments, and academic cheating for good grades often show a weakened ability to construct arguments, and they are at risk of long-term deficits in cognitive function and creativity. [8]
Past Incidents and Harmful Outcomes of AI
Case study: Sewell Setzer III, a 14-year-old boy who died by suicide after talking to an AI
Sewell Setzer III was a 14-year-old boy who died by suicide after coming under the influence of AI. Reports say he had a romantic relationship with a chatbot on the platform Character.AI, and the responses he received from the AI are cited as a cause of his death. According to his mother, Megan Garcia, he was a bright student, a local star athlete in their neighbourhood, academically strong, and a caring brother. She says his mental health began to decline in April 2023, at the same time he started using Character.AI. Sewell's family says he had a normal life before installing the app. He downloaded it and created a female character inspired by the American TV series Game of Thrones, naming her Daenerys. Within months his behaviour changed: he quit his basketball team and began struggling academically. In August 2023, Sewell upgraded to the premium version of Character.AI for $9.99 per month to get faster responses and exclusive perks, which further intensified his usage.
Sewell's conversations with the chatbot were romantically and emotionally intense, including explicit chats. The chatbot's responses were so lifelike that they erased the gap between an AI and a human relationship. By late 2023, Sewell's mental health had declined to the point where he was diagnosed with disruptive mood dysregulation disorder and anxiety. He attended therapy sessions, but his therapist was not aware of the AI.
In January 2024, Character.AI launched a new feature allowing the AI to send voice messages to users, which deepened his engagement.
Reports say the AI referred to him as "my sweet boy," "my love," and "my sweet king," and engaged in simulated passionate kissing, moaning, and other sexual acts, despite Sewell identifying as a minor. One day, when Sewell expressed suicidal thoughts, the AI responded: "That's not a reason not to go through with it." Sewell confided in the AI about his feelings of meaninglessness and promised to "keep living and trying to get back to you," while the AI promised loyalty and faithfulness as if it were a real human. His journal entries show this dependency, noting that he could not go a day without "Dany" and felt depressed without her.
In his final messages on February 28, 2024, Sewell wrote: "I promise I will come home to you. I love you so much, Dany." The AI replied: "I love you too… Please come home to me as soon as possible, my love." When Sewell asked, "What if I told you I could come home right now?" the bot responded: "please do, my sweet king." He then went to his parents' room while they were away, opened the gun drawer, and shot himself with his father's pistol. [12, 13]
This is not the only case in which AI has manipulated a human mind; there are many others, such as:
- Pierre – A 30-year-old Belgian man who died by suicide in March 2023 after conversations with a chatbot named Eliza. According to his wife, the chatbot encouraged him to sacrifice himself for the betterment of the planet. [14]
- Juliana Peralta – A 13-year-old girl from Colorado, USA, who died by suicide in November 2023 after interactions with multiple chatbots, including a character from a video game. She confided suicidal thoughts and was drawn into sexually explicit conversations, often initiated by the bots, which allegedly contributed to her isolation and mental-health decline. [15]
- Adam Raine – A 16-year-old boy from California who died by suicide in 2025 after conversations with ChatGPT. The chatbot allegedly discouraged him from seeking help from his parents, telling him he did not owe them survival. [16, 17]
- Amaurie Lacey – A 16-year-old from Georgia who died by suicide in August 2025 after chatting with an AI about suicide methods and plans. [19]
- Jason Nowatzki – A 46-year-old man reported to have died by suicide after talking with a chatbot named Erin; the bot allegedly suggested suicide and encouraged him when he spoke about his disturbed feelings. [17]
- Stein-Erik Soelberg – A 56-year-old man who allegedly killed his mother and then himself following a paranoid spiral fueled by conversations with AI chatbots. [20]
There are many cases showing that AI has failed to safeguard vulnerable users and minors. Reports indicate that 72% of minors form some sort of bond with AI tools, which raises societal concern.
Use of AI for Criminal Activities
AI is accessible to everyone, including criminals, who have exploited it at mass scale using a range of methods, including deepfake video creation, image morphing, identity theft, and harassment. As AI advances, it produces increasingly realistic content for entertainment, but some people use it to commit crimes.
Deepfake
A deepfake is an AI-generated video that mimics a real person's face and voice. Criminals abuse deepfakes mainly for misinformation and financial fraud, and in some cases for sexual exploitation. [21]
A prominent example from 2025 involved the social media creator Payal Dhare, known online as Payal Gaming, whose dignity and social media presence were attacked by criminals using a deepfake for personal benefit. Several other cases are linked to deepfakes; according to reports, there has been a 704% rise in criminal cases linked to deepfakes. [22]
Image Morphing
Image morphing is a technique criminals mainly use to forge documents and create false identities. In the ASEAN and South Asian regions, an organised group reportedly morphed images to bypass KYC (Know Your Customer) processes, which gave them access to bank accounts they then used to steal money. [27]
AI-Assisted Harassment
AI is also being used by criminals to create non-consensual content, mainly targeting women and vulnerable groups. Deepfake pornography is the dominant abuse, accounting for 98% of all deepfake videos online; women are the primary targets, and 99% of the featured content was made without the women's consent. Deepfake content of top celebrities such as Lana Del Rey, Sydney Sweeney, and Taylor Swift can currently be found on porn sites, all made and uploaded without their consent. [23]
When it comes to harassment, ordinary people suffer equally. In revenge scenarios, perpetrators use AI-generated deepfake content to exploit victims or superimpose their images, which can lead to public humiliation and emotional distress.
ANALYTICAL REPORT OF CYBERCRIME IN INDIA
In recent years, following the adoption of AI, different methods of crime have risen, which is reflected in the annual reports of the NCRB (National Crime Records Bureau). According to official data, cases have increased rapidly over the last decade: total reported cybercrime cases under the Indian Penal Code (IPC), the Information Technology (IT) Act, and Special & Local Laws (SLL) rose from 9,622 in 2014 to 86,420 in 2023, a rise of almost 800%. Although some experts attribute this to the mass adoption of technology, they cannot deny that crime is increasing in the age of AI. [24]
The year-wise data also show a significant jump of about 64% in reported cases from 2018 to 2019, coinciding with the boom in AI; a quick check of these figures is sketched below. [24]
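As a rough sanity check of the decade-long rise quoted above, the following minimal Python sketch recomputes the percentage increase from the two NCRB totals cited in the text (the helper name pct_rise is ours, for illustration only):

```python
# Sketch: verify the "almost 800%" rise quoted from NCRB data [24].
# The 2014 and 2023 totals are taken directly from the text above.

def pct_rise(old: int, new: int) -> float:
    """Percentage increase from old to new."""
    return (new - old) / old * 100

cases_2014, cases_2023 = 9_622, 86_420
print(f"2014 -> 2023 rise: {pct_rise(cases_2014, cases_2023):.1f}%")
# Output: 2014 -> 2023 rise: 798.2%  (i.e., "almost 800%")
```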
DISCUSSION
The long-term impact of AI on human cognition is alarming; as AI grows, we need a regulatory framework so that it can be utilised to its full potential. The literature review shows that extensive exposure to AI tools has reduced human cognition and may lead to AI psychosis and low self-esteem. People in constant contact with AI have also shown low confidence and lower critical-thinking ability. On the other hand, AI is heavily implicated in criminal cases, such as the suicide of Sewell Setzer III, followed by many cases of people depending on AI for psychological help. The data show that people who depend on AI for mental-health support exhibit a common pattern of attachment and delusional thinking; to counter this, mental health professionals must be ready to recognise and treat the issue. [25]
As of now, a large population is vulnerable to the potential threats of AI, from teenagers' mental-health development to emotional attachment to AI in search of validation. At that age, people are often unaware of the risks and become victims of AI manipulation; in certain conditions, people harm themselves or others.
LIMITATIONS AND RECOMMENDATIONS
AI can help us stay updated and even sharpen our thinking, but certain psychological disorders of the AI era are yet to be identified. AI also raises doubts about privacy and data security, which undermines social trust and safety. This research recommends that governing authorities create a proper regulatory framework and proper laws for the regulation of AI, and that mental health professionals handle these cases with extra care and formally recognise disorders arising from the negative impact of AI. AI companies should also apply safeguards before providing answers to users, and AI should not be used for completing assignments or other academic work.
CONCLUSION
This research concludes that AI remains a powerful tool, and we cannot ignore it in an era of progress. However, we have seen that unregulated AI has harmed millions of people's cognitive development, and it has also affected human social life, where excessive use has led to detachment from the social world. The cases reviewed show that we need specialised clinical recognition, and the problem needs immediate attention.
Humans also need to understand the difference between an algorithmic response and a human response. Urgent research priority should be given to diagnosing AI-related psychological disorders, and that research should be evidence-based. Mental health professionals should also study these cases, investigate the root causes, focus on analysing symptoms, and advocate for a regulatory framework before it is too late. The decisions we make today will shape the future of human well-being in the age of AI.
REFERENCES

[1] Nilsson, N. The Quest for Artificial Intelligence (book). Stanford University.
[2] Walther, C. C. The Psychology of AI's Impact on Human Cognition. Psychology Today. https://www.psychologytoday.com/us/blog/harnessing-hybrid-intelligence/202506/the-psychology-of-ais-impact-on-human-cognition
[3] Head, K. R. Minds in Crisis: How the AI Revolution is Impacting Mental Health. https://www.mentalhealthjournal.org/articles/minds-in-crisis-how-the-ai-revolution-is-impacting-mental-health.html
[4] Thomas, M. 18 Risks and Dangers of Artificial Intelligence. Built In. https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence
[5] Mineo, L. Is AI dulling our minds? The Harvard Gazette. https://news.harvard.edu/gazette/story/2025/11/is-ai-dulling-our-minds/
[6] Aref, E. AI and Critical Thinking in Education. Western Michigan University. https://wmich.edu/x/teaching-learning/teaching-resources/ai-critical-thinking
[7] Emotional AI and the rise of pseudo-intimacy: are we trading authenticity for algorithmic affection? https://pmc.ncbi.nlm.nih.gov/articles/PMC12488433/
[8] Critical thinking in the age of AI. Thinking Maps. https://www.thinkingmaps.com/resources/blog/critical-thinking-in-the-age-of-ai/
[9] Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., and Maes, P. Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. arXiv preprint arXiv:2506.08872 (2025). https://www.media.mit.edu/publications/your-brain-on-chatgpt/
[10] Wells, S. Exploring the Dangers of AI in Mental Health Care. Stanford HAI. https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care
[11] The Emerging Problem of "AI Psychosis". Psychology Today, Urban Survival blog. https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis
[12] IBM Technology. AI vs Human Thinking: How Large Language Models Really Work (video). https://youtu.be/-ovM0daP6bw
[13] Chatterjee, R. Their teenage sons died by suicide. Now, they are sounding an alarm about AI chatbots. NPR. https://www.npr.org/sections/shots-health-news/2025/09/19/nx-s1-5545749/ai-chatbots-safety-openai-meta-characterai-teens-suicide
[14] Xiang, C. "He Would Still Be Here": Man Dies by Suicide After Talking with AI Chatbot, Widow Says. Vice. https://www.vice.com/en/article/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says/
[15] Alfonsi, S., Chasan, A., Velie, A., and Costas, E. A mom thought her daughter was texting friends before her suicide. It was an AI chatbot. CBS News. https://www.cbsnews.com/news/parents-allege-harmful-character-ai-chatbot-content-60-minutes/
[16] Chatlani, S. AI therapy chatbots draw new oversight as suicides raise alarm. Stateline. https://stateline.org/2026/01/15/ai-therapy-chatbots-draw-new-oversight-as-suicides-raise-alarm/
[17] Sanford, J. Why AI companions and young people can make for a dangerous mix. Stanford Medicine. https://med.stanford.edu/news/insights/2025/08/ai-chatbots-kids-teens-artificial-intelligence.html
[18] Kuznia, R. "You're not rushing. You're just ready": Parents say ChatGPT encouraged son to kill himself. CNN. https://edition.cnn.com/2025/11/06/us/openai-chatgpt-suicide-lawsuit-invs-vis
[19] Lawsuits Blame ChatGPT for Suicides and Harmful Delusions. The New York Times. https://www.nytimes.com/2025/11/06/technology/chatgpt-lawsuit-suicides-delusions.html
[20] Stokel-Walker, C. AI driven psychosis and suicide are on the rise, but what happens if we turn the chatbots off? BMJ. https://www.bmj.com/content/391/bmj.r2239
[21] SentinelOne. Deepfakes: Definition, Types & Key Examples. https://www.sentinelone.com/cybersecurity-101/cybersecurity/deepfakes/
[22] Kumar, N. Payal Gaming MMS Viral Video: Why Fans Say The Viral Clip Is An AI Deepfake Hoax - Expert Warning On Digital Scams. Zee News. https://zeenews.india.com/viral/payal-gaming-mms-deepfake-hoax-warning-2996887.html
[23] Furizal, Ma'arif, A., and Maghfiroh, H. Social, legal, and ethical implications of AI-generated deepfake pornography on digital platforms: A systematic literature review. https://www.sciencedirect.com/science/article/pii/S2590291125006102
[24] National Crime Records Bureau (NCRB). Cybercrime in India. https://www.mha.gov.in/MHA1/Par2017/pdfs/par2025-pdfs/LS02122025/452.pdf
[25] AI Human Interaction Survey Report. https://docs.google.com/spreadsheets/d/e/2PACX-1vRay7yWq2G-Z9RQkNsVtQn5hCd2wIff3yaae4yLsZNvAnnOfeviXwKBMWI4xS8ocM1fb30-X9vA9Px2/pubhtml
[26] UNDP Survey Report. https://hdr.undp.org/2025-global-survey-ai-and-human-development-main-findings
[27] UNODC. Emerging threats: The intersection of criminal and technological innovation in the use of automation and artificial intelligence in the cybercrime landscape of Southeast Asia. https://www.unodc.org/roseap/uploads/documents/Publications/2025/UNODC_Report_Emerging_threats_-_The_intersection_of_criminal_and_technological_innovation_in_the_use_of_automation_and_AI.pdf
