DOI: 10.17577/IJERTCONV14IS020034 (Open Access)

- Authors : Priyanka Mali, Aakanksha Jadhav
- Paper ID : IJERTCONV14IS020034
- Volume & Issue : Volume 14, Issue 02, NCRTCS – 2026
- Published (First Online) : 21-04-2026
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License:
This work is licensed under a Creative Commons Attribution 4.0 International License
AI as Invisible Mentor: Influence of AI Tools on Students' Career Decision-Making
Priyanka Mali
Department of Computer Science Dr. D. Y. Patil ACS College Pune, Maharashtra
Aakanksha Jadhav Department of Computer Science Dr. D. Y. Patil ACS College Pune, Maharashtra
Abstract – Artificial Intelligence tools have become deeply integrated into students' daily academic activities. From conversational assistants to intelligent search systems, these technologies help students understand complex topics, complete assignments, and explore new areas of knowledge. As interaction with AI becomes routine, an important question emerges: does this engagement remain limited to academic assistance, or does it extend into shaping students' career perspectives?
This study investigates whether AI tools function solely as learning aids or whether they subtly influence students' career decision-making processes. A structured questionnaire was administered to 150 students to examine patterns of AI usage, levels of trust, perceived dependency, and the extent to which AI affects career-related thinking. Statistical analysis, including correlation testing, was used to explore relationships between frequency of use and perceived career influence.
The findings suggest that although students rely on AI regularly for academic support and acknowledge improvements in efficiency and understanding, they do not view AI as a primary authority in career decisions. Traditional sources such as teachers, mentors, and personal reflection continue to play a stronger role in long-term choices. Overall, AI appears to act as an informational and exploratory resource rather than a decisive mentor in shaping career trajectories.
Keywords: Artificial Intelligence, Human–Computer Interaction, Career Decision-Making, Trust in AI, Algorithmic Influence, Student Behavior, Decision Support Systems
-
INTRODUCTION
Artificial Intelligence is no longer confined to research laboratories or advanced technical systems. It has quietly entered classrooms, study rooms, and mobile screens, becoming part of students' everyday academic routines. Tools such as conversational AI platforms, intelligent search engines, and content recommendation systems are now frequently used to clarify doubts, generate ideas, draft assignments, and explore unfamiliar topics. What once required extended discussions with teachers or hours of searching through textbooks can now be accessed within seconds through AI-based systems.
The rapid development of large language models and adaptive recommendation algorithms has significantly changed the way students interact with information [3]. Unlike earlier digital tools that simply displayed search results, modern AI systems interpret queries, generate contextual responses, and adapt based on user interaction patterns. As a result, students are not merely retrieving information; they are engaging in ongoing exchanges with algorithm-driven systems that learn from their behavior.
While the academic advantages of these tools are evident (faster clarification of concepts, improved efficiency, and easier access to learning resources), the broader behavioral implications remain less examined. Continuous exposure to AI-generated suggestions may gradually influence how students perceive knowledge, opportunities, and professional pathways. For instance, when certain skills, courses, or industries are repeatedly highlighted through algorithmic recommendations, students may begin to view them as more relevant or desirable. Over time, such repeated exposure can subtly shape confidence levels, interests, and perceived career possibilities.
This observation raises an important question: does Artificial Intelligence function only as an academic assistant, or does it begin to operate as an invisible mentor influencing career-related thinking? Students increasingly consult AI tools for academic guidance, skill development suggestions, and information about emerging career fields. However, it is not yet clear whether this reliance remains limited to academic tasks or extends into long-term career decision-making. The distinction between assistance and influence is subtle but meaningful. If AI is perceived as a neutral source of information, its role may remain supportive. If it is viewed as reliable and authoritative, it may gradually shape decision-making patterns.
The objective of this study is to examine the behavioral impact of AI tools on students' career decision-making processes. Specifically, the research analyzes how frequently students use AI tools, the degree of trust they place in AI-generated responses, the level of dependency they experience, and whether they perceive AI as influencing their career choices. By evaluating these dimensions together, the study seeks to understand whether AI functions primarily as a productivity tool or whether it assumes a more influential role in shaping aspirations.
Prior research has suggested that prolonged interaction with AI systems may blur the boundary between human and machine guidance, potentially fostering subtle psychological reliance [1]. In educational contexts, this possibility becomes especially relevant, as students are in a formative stage of developing professional identities and long-term goals. Yet existing literature largely focuses on AI influence in commercial, financial, or workplace decision-making environments, leaving everyday academic interactions relatively underexplored.
Addressing this gap is important. As AI systems become more personalized and embedded in educational ecosystems, understanding their psychological and behavioral impact becomes critical. Examining whether students maintain autonomy in career decisions, or whether algorithmic exposure gradually shapes their perceptions, contributes to ongoing discussions in Human–Computer Interaction and educational technology research.
This study therefore investigates the boundary between technological support and decision influence, asking whether AI remains a tool for learning or evolves into a silent guide shaping career trajectories.
-
LITERATURE REVIEW
The growing presence of Artificial Intelligence in everyday life has led researchers to examine its influence on human behavior, autonomy, and decision-making. As AI systems become more interactive and adaptive, their role extends beyond mechanical assistance toward shaping user perceptions and actions.
Recent discussions highlight that the integration of AI-powered virtual assistants into daily routines introduces complex psychological considerations. Continuous interaction with conversational systems may influence how users perceive authority, guidance, and reliability in digital environments [1]. When individuals repeatedly consult AI systems for answers, suggestions, or clarification, the boundary between tool usage and perceived mentorship can gradually narrow. This shift raises questions about dependence and autonomy, particularly among young users who are still forming academic and professional identities.
Another strand of research focuses on algorithmic recommendation systems and their feedback mechanisms. Machine learning models refine their outputs based on user engagement, creating adaptive loops that reinforce previously shown preferences [2]. Such reinforcement cycles can gradually shape exposure patterns. When certain topics, industries, or skill sets are consistently presented to users, those options may appear more prominent or viable, even if alternative pathways exist. This phenomenon has been widely studied in digital commerce and content platforms but has received comparatively less attention in educational contexts.
Within the field of education, the emergence of large language models and AI-driven tutoring tools has transformed academic support systems [3]. These technologies provide personalized explanations, generate practice material, and simulate conversational learning environments. While many studies emphasize improvements in efficiency and accessibility, fewer investigations explore whether these systems influence students' long-term aspirations or career perceptions. Most educational research treats AI interaction as transactional: a question is asked, a response is generated, and the task is completed [4], [5]. The broader developmental implications of repeated AI consultation remain underexamined.
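The reinforcement cycle described above can be illustrated with a toy simulation. This is an assumed model for illustration only, not a system from the cited literature: each time the user engages with a topic, that topic's weight increases, so engagement feeds back into future exposure.

```python
# Toy simulation of an engagement-driven recommendation loop.
# Illustrative model assumed for this sketch, not a real system:
# each click adds weight to the clicked topic, so engagement
# feeds back into what gets shown next.
import random

random.seed(42)  # fixed seed for reproducibility

topics = ["data science", "design", "finance", "teaching"]
weights = {t: 1.0 for t in topics}  # start with uniform exposure

def recommend():
    """Sample a topic proportionally to its current weight."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    for topic, w in weights.items():
        r -= w
        if r <= 0:
            return topic
    return topics[-1]  # guard against float rounding

# A simulated user who engages only with "data science".
for _ in range(500):
    shown = recommend()
    if shown == "data science":
        weights[shown] += 0.5  # reinforcement: clicks increase exposure

share = weights["data science"] / sum(weights.values())
print(f"Exposure share of 'data science' after 500 rounds: {share:.0%}")
```

Although every topic starts with equal weight, the one topic the user engages with comes to dominate what is shown, which is precisely the exposure-concentrating cycle the paragraph describes.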
Furthermore, trust plays a central role in human–AI interaction. As users perceive AI systems to be accurate, responsive, and context-aware, trust levels may increase. High trust can enhance usability and satisfaction, but it may also encourage reliance in areas beyond immediate academic assistance. Particularly in environments where AI tools are embedded in learning platforms, students may begin to consult these systems for advice regarding skills, course selection, or professional directions.
Despite the expanding body of research on AI ethics, algorithmic bias, and digital dependency, there is limited
empirical exploration of how routine academic AI usage influences students' career planning processes. Existing literature primarily concentrates on AI in professional decision environments, financial forecasting, or consumer behavior, leaving a gap in understanding its subtle behavioral influence during formative educational stages.
This study addresses that gap by examining whether frequent academic interaction with AI tools correlates with perceived career influence, trust, and dependency. By situating the investigation within educational settings, the research contributes to broader discussions on autonomy, mentorship, and algorithmic exposure in emerging digital ecosystems.
-
METHODOLOGY
-
Research Design
The study followed a descriptive and exploratory research design. The purpose was not to test a predefined behavioral theory but to understand patterns emerging from students' interaction with Artificial Intelligence tools in their academic routine. The research explored awareness, trust, dependency, and perceived influence of AI on career decision-making.
Since the topic involves perceptions and experiences rather than measurable performance outcomes, a survey-based approach was considered appropriate. The design allowed observation of behavioral tendencies across multiple dimensions instead of restricting the analysis to a single variable relationship.
-
Participants
The participants consisted of undergraduate and postgraduate students from various academic streams. The target group represented young individuals who actively use digital platforms for learning purposes.
A total of 150 valid responses were collected. Respondents belonged to different age groups and academic years, ensuring variation in exposure to AI tools. Participation was voluntary, and students completed the questionnaire through an online form.
-
Questionnaire
A structured questionnaire was developed and divided into thematic sections to gradually examine user interaction with AI systems.
Section 1: Awareness and Exposure
Questions measured familiarity with AI tools and frequency of usage in academic activities.
Section 2: Trust and Perceived Usefulness
This section evaluated whether students considered AI reliable for explanations, learning support, and clarification of concepts.
Section 3: Dependency and Psychological Reliance
Questions examined behavioral reliance on AI, including:
-
dependence on AI for understanding
-
perception of AI as a mentor-like guide
-
role in academic improvement
Section 4: Overall Impact and Career Influence
The final part measured whether repeated AI usage influenced students' confidence, interests, and career thinking.
The questionnaire consisted mainly of multiple-choice and Likert-scale questions to capture perception intensity rather than simple yes/no responses.
-
Data Collection Procedure
Data was collected using an online survey form distributed among students through academic networks and peer sharing. Respondents completed the form independently without supervision.
The collection period continued until a sufficient number of responses was obtained for meaningful pattern observation. Only complete responses were included in the analysis to maintain consistency.
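The abstract mentions correlation testing between frequency of AI use and perceived career influence. As an illustration of how such a test could be computed on Likert-coded responses, the sketch below implements Spearman's rank correlation in plain Python; the ten response pairs are hypothetical examples, not the study's actual data.

```python
# Illustrative sketch: Spearman rank correlation between AI-usage
# frequency and perceived career influence, both coded on 1-5 Likert
# scales. The response data below are hypothetical.

def rank(values):
    """Assign 1-based average ranks, handling ties."""
    indexed = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(indexed):
        j = i
        while j + 1 < len(indexed) and values[indexed[j + 1]] == values[indexed[i]]:
            j += 1  # extend over a run of tied values
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[indexed[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical responses (1 = never/none, 5 = very often/strong)
usage_freq       = [5, 4, 5, 3, 4, 2, 5, 3, 4, 1]
career_influence = [3, 2, 3, 2, 3, 1, 2, 2, 3, 1]

rho = spearman(usage_freq, career_influence)
print(f"Spearman's rho = {rho:.2f}")
```

Spearman's rho is a common choice for ordinal Likert data because it relies on ranks rather than assuming interval-scaled responses; in practice a library routine such as `scipy.stats.spearmanr` would also report a p-value.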
-
KEY FINDINGS
The responses indicate that AI tools have become a regular part of students' academic routines. A significant proportion of participants reported using AI platforms frequently for clarifying concepts, completing assignments, and exploring unfamiliar topics. This suggests that AI is viewed primarily as a convenient and accessible learning support system.
In terms of trust, most students expressed moderate to high confidence in AI-generated responses for academic purposes. They acknowledged that AI tools help simplify complex subjects and improve efficiency. However, this trust appeared to be situational rather than absolute. Students were comfortable relying on AI for explanations, but not necessarily for making major decisions.
When examining dependency, the results show a nuanced pattern. While some respondents admitted that they prefer consulting AI before approaching teachers or peers, the majority did not consider themselves fully dependent. AI was perceived more as a quick reference tool rather than a replacement for human guidance. The idea of AI functioning as a mentor received mixed responses, indicating that students distinguish between informational support and personal mentorship.
Most importantly, regarding career decision-making, the findings suggest that AI does not significantly dominate long-term choices. Although exposure to AI sometimes introduced students to new career paths or skill areas, final decisions were still influenced by personal interests, family discussions, and traditional mentors. AI appears to expand awareness but does not independently determine career direction.
Overall, the results reflect that AI plays an important supportive role in academic life, yet students maintain personal agency when it comes to defining their professional future.
-
DISCUSSION
The findings suggest that students interact with AI tools as practical academic assistants rather than authoritative decision-makers. This aligns with earlier research indicating that users show algorithm appreciation in structured tasks but hesitate to rely on AI for personal or identity-related choices [3]. In this study as well, students trusted AI explanations and used them regularly for understanding concepts, yet career decisions remained guided by human influence.
The moderate level of dependency observed among participants reflects concerns raised in behavioral studies about convenience-driven reliance on intelligent systems [1]. However, the results also support arguments that AI primarily
reinforces exploration instead of replacing personal judgement [4]. Students appeared to treat AI recommendations as starting points for thinking rather than final answers.
Another important observation is the gap between exposure and influence. While recommendation systems continuously present career-related information and skill paths [2], participants did not automatically adopt them as decisions. This indicates that awareness does not necessarily translate into commitment. Career planning continues to involve reflection, discussion, and emotional evaluation beyond algorithmic suggestions [5].
Overall, the discussion highlights a balanced relationship: AI expands possibilities and accelerates information access, but the decisive stage of career choice still remains human-centered.
-
CONCLUSION
This study set out to understand whether everyday interaction with AI tools goes beyond academic assistance and begins to shape students' career thinking. The responses show a clear pattern: AI has become a normal part of studying, but not a decision-maker in life choices.
Students rely on AI for speed, clarity, and convenience. It helps them understand topics faster, organize information, and sometimes discover new fields they were previously unaware of. Yet, when the question shifts from learning to choosing a future, the reliance weakens. Career decisions continue to involve personal interest, conversations with trusted people, and individual judgement. AI contributes to awareness, but not commitment.
Rather than acting as a mentor, AI functions more like an exploratory companion. It widens the range of possibilities and reduces uncertainty at the information stage, but the final direction still emerges from human reflection. This distinction is important because it shows that increased exposure to intelligent systems does not automatically translate into surrendering personal agency.
In simple terms, students are not handing over their future to AI. They are using it to think better about their future.
-
FUTURE SCOPE
This study focused on general student perceptions, but career decision-making is influenced by many contextual factors such as academic discipline, socio-economic background, and long-term exposure to technology. Future research can compare students from different fields or track behavioral changes over time to understand whether AI influence strengthens with prolonged usage. Qualitative interviews may also help reveal how students internally interpret AI suggestions while making real career choices.
REFERENCES
-
R. Chhatre and S. Singh, Impact of AI on Human Behaviour and Decision Making; Ethical Implications of AI, vol. 6, no. 1, 2024, ISSN: 1539-1590, E-ISSN: 2573-7104.
-
J. Byrne, The rise of deepfakes: Implications for privacy and security, Computers & Security, vol. 89, pp. 67–79, 2019.
-
J. M. Logg, J. A. Minson, and D. A. Moore, Algorithm Appreciation: People Prefer Algorithmic to Human Judgment, Organizational Behavior and Human Decision Processes, 2019.
-
E. Kasneci, K. Sessler, S. Küchemann, M. Bannert, D. Dementieva, F. Fischer, et al., ChatGPT for good? On opportunities and challenges of large language models for education, Learning and Individual Differences, 2023.
-
A. Nandi, M. Hamilton, and J. Harland, Human–AI collaboration in career guidance: Students' perceptions of AI-assisted advising, Computers & Education: Artificial Intelligence, 2025.
-
J. Shin, The effects of explainability and causability on trust in AI decision support systems, Computers in Human Behavior, vol. 119, 2021.
-
F. D. Davis, Perceived usefulness, perceived ease of use, and user acceptance of information technology, MIS Quarterly, vol. 13, no. 3, pp. 319–340, 1989.
-
B. A. Dietvorst, J. P. Simmons, and C. Massey, Algorithm aversion: People erroneously avoid algorithms after seeing them err, Journal of Experimental Psychology: General, vol. 144, no. 1, pp. 114–126, 2015.
-
P. A. Hancock et al., A meta-analysis of factors affecting trust in human-robot interaction, Human Factors, vol. 53, no. 5, pp. 517–527, 2011.
-
Digital Education Council, Global AI Student Survey Report, 2024.
