DOI : 10.17577/IJERTCONV14IS020142- Open Access

- Authors : Sagar Moreshwar Urade, Prof. Amol Bajirao Kale
- Paper ID : IJERTCONV14IS020142
- Volume & Issue : Volume 14, Issue 02, NCRTCS – 2026
- Published (First Online) : 21-04-2026
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License:
This work is licensed under a Creative Commons Attribution 4.0 International License
The Ghost in the Role: Professional Devaluation and the Psychological Toll of AI-Driven Workflows
Author: Sagar Moreshwar Urade
MAEER's MIT Arts, Commerce and Science College, Alandi (D.), India
Co-Author: Prof. Amol Bajirao Kale
MAEER's MIT Arts, Commerce and Science College, Alandi (D.), India
Abstract – The incorporation of Artificial Intelligence (AI) into modern workplaces is reshaping the essence of professional tasks, decision-making processes, and human identity. Although AI-driven systems offer promises of efficiency, precision, and the automation of mundane tasks, they also lead to unforeseen psychological and social repercussions. This research examines the phenomenon known as The Ghost in the Role, which describes the experience of professionals who, despite being formally employed, increasingly feel sidelined as AI takes over essential cognitive functions. The study investigates how AI-driven workflows contribute to the devaluation of professional roles, disruption of identity, and psychological strain among knowledge workers.
A qualitative-dominant mixed-method approach was utilized. Data were gathered through semi-structured interviews with professionals across the education, IT, finance, and media sectors, complemented by an online questionnaire assessing perceived autonomy, job satisfaction, and emotional well-being. The results indicate that the adoption of AI frequently diminishes skill utilization and decision-making authority, resulting in feelings of redundancy, role ambiguity, and a sense of lost purpose. Nevertheless, organizations that prioritize reskilling, transparency, and collaboration between humans and AI tend to experience more favourable outcomes.
This research enhances the understanding of the human costs associated with digital transformation and underscores the necessity for human-centered AI implementation. It advocates for organizational strategies such as participatory design, psychological support, and ethical governance of AI to safeguard professional dignity. The study concludes that advancements in technology must be balanced with a focus on identity, meaning, and mental health to avert the emergence of ghosts in contemporary workplaces.
Keywords: Artificial Intelligence, Workforce Psychology, Professional Identity, AI Automation, Job Devaluation, Workplace Stress.
-
INTRODUCTION
In the swiftly changing environment of the twenty-first century workplace, the incorporation of artificial intelligence (AI) into professional processes has become not only widespread but also revolutionary. From automated drafting tools and algorithmic scheduling systems to intelligent decision-support platforms, organizations across various sectors are progressively embracing AI technologies to boost productivity, enhance performance, and secure a competitive edge. Advocates contend that AI enhances human abilities, removes monotonous tasks, and enables professionals to concentrate on more complex cognitive work. Nevertheless, beneath this promise of heightened efficiency lies a significant and often overlooked consequence: the subtle yet widespread erosion of professional identity, value, and agency, referred to in this study as professional devaluation. When highly skilled individuals find their roles diminished to merely overseeing or ensuring the quality of AI systems, they may endure a psychological impact that affects their self-worth, autonomy, and sense of purpose at work.
This research explores the intersection of AI-driven workflows and professional experiences, concentrating on how automated systems affect workers' feelings of relevance and psychological health. While technological progress has historically transformed labour markets, the current surge of AI innovation is distinct in its ability to replicate cognitive tasks that were once the domain of highly trained professionals, prompting new inquiries regarding the future of work, human-machine collaboration, and the emotional landscape of employees navigating algorithmic environments.
The phrase "ghost in the role" encapsulates this emerging trend: professionals who remain nominally employed yet feel increasingly sidelined within their own job functions as AI systems take over essential responsibilities. Although these professionals may still retain their titles and carry out oversight tasks, they can experience a type of existential invisibility: a feeling of being present in name only but not in substance.
-
LITERATURE REVIEW
-
Artificial Intelligence and the Transformation of Work
The incorporation of artificial intelligence into workplace settings is regarded as one of the most significant technological changes since the advent of industrial automation. Experts from fields such as information systems, sociology, and organizational psychology concur that AI transcends being just another digital instrument; it represents a socio-technical force that fundamentally alters the execution of knowledge work. Initial research on automation primarily concentrated on manual labour; however, modern AI systems are increasingly capable of undertaking cognitive and decision-making tasks, including legal research, medical diagnosis, financial analysis, and the generation of creative content. This evolution signifies a pivotal shift from previous technological revolutions, as it encroaches upon areas traditionally linked to human expertise, judgment, and creativity.
Studies on digital transformation highlight that AI modifies not only the execution of tasks but also the structural roles within organizations. Davenport and Kirby (2016) contend that AI fosters augmentation rather than mere replacement, wherein human employees work alongside algorithms. Nevertheless, some researchers warn that narratives of augmentation frequently obscure the power disparities between humans and machines. As algorithmic systems increasingly assume the role of primary decision-makers, professionals may find themselves relegated to the status of assistants to technology, rather than maintaining their positions as independent experts. This transition has sparked heightened academic interest in the ways AI reshapes professional authority, discretion, and identity.
-
Professional Identity and Meaningful Work
Professional identity pertains to how individuals perceive themselves concerning their profession, abilities, and societal contributions. Theories regarding identity within organizational studies indicate that work transcends mere economic activity; it serves as a vital source of self-definition and dignity. Sennett (1998) and subsequent researchers on meaningful work emphasize that mastery, autonomy, and recognition are essential for maintaining a positive professional self-image.
AI-driven workflows pose challenges to these foundational elements. When algorithms undertake essential analytical or creative functions, professionals may encounter what Petriglieri (2011) describes as "identity threat": a disruption in the alignment between one's skills and one's professional role. Research involving radiologists, accountants, and journalists has uncovered apprehensions that AI diminishes years of training and implicit knowledge. Even when technology enhances objective performance, employees may feel symbolically displaced, fostering a belief that their expertise is no longer pivotal to organizational achievement.
The notion of professional devaluation arises from this conflict. Unlike conventional job loss, devaluation transpires while individuals remain employed; they continue in their roles but sense a reduction in status, influence, or distinctiveness. The discourse on "deskilling," originating from Braverman (1974), offers a valuable perspective, yet AI introduces a new variant of deskilling: one that impacts not physical dexterity but cognitive authority and interpretative judgment.
-
Psychological Impacts of Automation
A significant amount of psychological research connects alterations in job design to employee well-being. The Job Characteristics Model (Hackman & Oldham, 1976) suggests that autonomy, task significance, and skill variety are essential factors influencing motivation and satisfaction. AI systems frequently diminish these aspects by breaking work into monitoring and validation tasks. Empirical research in automated manufacturing and aviation indicates that excessive dependence on intelligent systems can lead to boredom, vigilance fatigue, and diminished situational awareness; these issues are increasingly noted in white-collar environments.
Recent studies on "algorithmic management" within gig platforms offer additional insights. Workers overseen by opaque algorithms express feelings of powerlessness, constant monitoring, and unpredictability. While these studies concentrate on platform labour, similar patterns are evident in traditional professions that implement AI dashboards and predictive analytics. The psychological effects encompass heightened stress, decreased organizational commitment, and emotional exhaustion.
The literature on burnout is particularly pertinent. Maslach and Leiter (2016) characterize burnout as occurring when there is a discrepancy between workers and six aspects of work life: control, reward, community, fairness, values, and workload. AI has the potential to disrupt multiple areas simultaneously: diminishing control over decision-making, modifying reward systems that favour technical compliance over expertise, and undermining communities of practice as human interaction is mediated through systems.
Figure 1: Organizations Prioritizing Human-Centered AI Implementation
-
Human-Machine Collaboration and Role Ambiguity
Another area of research investigates the practical interactions between humans and AI. Research in the healthcare sector indicates that clinicians utilizing diagnostic algorithms frequently encounter "epistemic ambiguity": uncertainty regarding whether the ultimate responsibility rests with the physician or the machine. Comparable issues are reported among lawyers employing predictive coding and engineers depending on generative design tools. This ambiguity has the potential to undermine professional confidence and instil a fear of being evaluated by criteria established by algorithms instead of by human colleagues.
Trust in AI emerges as a pivotal topic. While certain scholars highlight the importance of explainable AI to foster user acceptance, others argue that an overabundance of trust can be detrimental, transforming professionals into mere passive operators. The foundational theory of automation misuse and disuse proposed by Parasuraman and Riley (1997) continues to hold significance: when systems are highly competent yet flawed, users fluctuate between excessive reliance and scepticism, both of which increase cognitive strain.
The concept of the "ghost in the role" aligns with this body of literature. Employees may perceive that the "true" professional is now the algorithm, relegating them to a shadowy role. Ethnographic investigations of newsrooms employing automated content and call centres utilizing conversational bots reveal that workers often feel a sense of "existential redundancy": the belief that their presence is temporary and easily replaceable.
-
Organizational and Societal Dimensions
Organizational research indicates that the influence of AI is shaped by managerial strategies and the culture within the workplace. Companies that perceive AI as a means of empowerment and offer reskilling opportunities tend to report more favourable employee attitudes. In contrast, organizations that implement AI mainly for the purpose of cost reduction frequently encounter heightened resistance and anxiety. Effective leadership communication, involvement in system design, and transparency regarding algorithmic decision-making rules are consistently recognized as protective factors.
At the societal level, critical scholars link the devaluation driven by AI to a wider political economy. Zuboff's (2019) theory of surveillance capitalism posits that data-driven systems commodify human behavior, thereby reducing workers to mere sources of training data. Viewed from this angle, psychological distress is not merely an incidental consequence but rather a structural result of business models that emphasize efficiency at the expense of human development.
The ethical discussions further add complexity to the situation. Professional codes in fields such as medicine, law, and education are founded on the principles of human judgment and accountability. As AI increasingly becomes a co-author in decision-making processes, issues regarding responsibility, consent, and fairness emerge. These discussions shape how professionals perceive their evolving roles and whether they regard AI as a collaborator or a competitor.
-
Coping, Adaptation, and Resistance
Despite existing concerns, literature also highlights adaptive responses. Certain professionals partake in "identity reconstruction," redefining their roles as interpreters of AI rather than merely as experts. Initiatives for reskilling, communities of practice, and hybrid positions such as "AI translator" or "algorithm auditor" reveal potential avenues for preserving agency.
However, the process of adaptation is inconsistent. Research indicates that workers who belong to robust professional communities and have access to ongoing learning opportunities manage to cope more effectively than those in unstable situations. Resistance movements, ranging from journalists advocating for transparency in automated news to artists opposing generative AI, demonstrate that psychological responses are closely linked with collective action.
-
Research Gaps
While the existing body of scholarship offers significant insights, there are still several gaps that need to be addressed:
- Lived Experience: A considerable amount of research depends on surveys that assess the acceptance of technology, rather than conducting an in-depth investigation into the subjective experiences of devaluation and invisibility.
- Cross-professional Comparison: Many studies concentrate on individual sectors; there is a scarcity of comparative analysis across professions with varying traditions of expertise.
- Long-term Psychological Effects: Most research focuses on short-term responses, leaving questions regarding the chronic erosion of identity throughout one's career unanswered.
- Conceptualization of Devaluation: The concept of professional devaluation lacks a well-defined theoretical framework connecting the characteristics of AI, the organizational context, and the psychological consequences.
-
Positioning the Current Research
This research expands upon and enhances the existing body of literature by focusing on the concept of the "ghost in the role" as an analytical framework. Instead of viewing the impact of AI merely in terms of efficiency or skill enhancement, it emphasizes issues of dignity, presence, and significance. By incorporating insights from occupational psychology, the sociology of professions, and human-computer interaction, this study aims to cultivate a comprehensive understanding of how AI-driven workflows transform the internal experience of work.
-
THEORETICAL FRAMEWORK
-
Job Demands-Resources (JD-R) Model
The Job Demands-Resources model offers a significant perspective for comprehending the psychological effects of AI-driven workflows. This model posits that every profession encompasses particular job demands (such as work pressure, cognitive load, and role ambiguity) alongside job resources (including autonomy, support, and skill utilization). The well-being of employees is contingent upon the equilibrium between these two elements.
AI systems modify this equilibrium in intricate manners. On one side, automation has the potential to alleviate workload by managing repetitive tasks; conversely, it may heighten cognitive and emotional demands due to continuous monitoring of algorithmic outputs, anxiety over obsolescence, and accountability for machine errors. As professionals transition from being decision-makers to supervisors of AI, their autonomy and skill variety, both essential job resources, diminish. Consequently, the JD-R framework elucidates how AI can enhance efficiency while simultaneously escalating the risk of burnout.
Furthermore, this model elucidates the reasons behind the variation in psychological outcomes across different organizations. In environments where AI is implemented with adequate training, involvement, and supportive leadership, resources can offset the new demands. In contrast, when implementation is conducted in a top-down and non-transparent manner, demands prevail, leading to stress, disengagement, and feelings of devaluation.
-
Professional Identity Theory
Professional Identity Theory perceives occupations as sources of significance through which individuals develop a consistent understanding of "who I am." Identity is preserved when there is a continuity among:
- personal expertise
- recognized social value
- daily work activities.
AI disrupts this continuity. When algorithms undertake tasks that represent expertise (diagnosis, analysis, writing, design), the symbolic essence of the profession is jeopardized. Individuals may undergo identity dissonance: they continue to hold titles such as "engineer," "teacher," or "analyst," yet they no longer engage in the activities that historically defined those roles.
The concept of the "ghost in the role" arises directly from this theoretical framework. Professionals hold organizational roles but feel ontologically alienated, as if the genuine actor is the machine. Identity theory anticipates several reactions:
- defensive rejection of AI
- over-identification with technology
- withdrawal and cynicism
- attempts at identity reconstruction.
-
Socio-Technical Systems Perspective
Socio-technical theory posits that organizations consist of interrelated social and technical subsystems. To achieve effective performance, it is essential to optimize both aspects jointly. Numerous AI implementations tend to prioritize the technical system (accuracy, speed, and data) while overlooking the social system that encompasses skills, values, and human needs.
From this perspective, the devaluation of professional roles is not an unavoidable consequence of AI but rather a failure in design. When workflows are structured around algorithms instead of people, job roles are reduced to minimal tasks: data cleaning, exception handling, and monitoring. The theory advocates for participatory design, transparency, and the maintenance of meaningful human discretion as vital conditions for the successful adoption of AI.
-
Human-AI Interaction and Algorithmic Authority
Research in the field of human-computer interaction presents the notion of algorithmic authority, which refers to the inclination to regard machine-generated outputs as more objective than human assessments. This form of authority alters the dynamics of power within professional environments. Employees may be reluctant to question AI suggestions, even when these suggestions contradict their own expertise, resulting in diminished self-efficacy.
Consequently, trust, explainability, and control emerge as both psychological and technical factors. In situations where systems lack transparency, employees face uncertainty regarding accountability: who bears responsibility for errors, the individual or the algorithm? This lack of clarity is a significant source of anxiety highlighted in this research.
-
RESEARCH METHODOLOGY
Figure 2: Research Methodology
-
Research Design
The study utilized a predominantly qualitative mixed-method design. Given that the phenomenon of professional devaluation is highly subjective, a thorough understanding of lived experiences was crucial. Quantitative metrics were employed to complement narratives with indicators such as stress levels, job satisfaction, and perceived autonomy.
Approach:
- exploratory and interpretivist
- phenomenological orientation
- cross-professional comparison
-
Research Objectives
- To investigate the ways in which AI-driven workflows transform the roles and daily tasks of professionals.
- To delve into the experiences of professional devaluation and the concept of the "ghost in the role."
- To assess the psychological effects on identity, autonomy, and overall well-being.
- To pinpoint organizational elements that either exacerbate or mitigate negative consequences.
- To suggest human-centered approaches for the integration of AI.
-
Research Questions
- How do professionals articulate the changes in their roles following the adoption of AI?
- In what manners do they perceive a loss or transformation of their professional identity?
- What psychological impacts are linked to work mediated by AI?
- How do organizational practices shape these experiences?
- What strategies for coping and adaptation are employed?
-
Sampling
Population: Knowledge workers utilizing AI systems for essential tasks
Fields: IT, education, finance, healthcare, media
Sample size:
- 20-25 interview participants
- 100 survey respondents (to provide quantitative support)
Sampling technique: purposive and snowball sampling to identify individuals with firsthand experience of AI-driven workflows.
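The purposive-plus-snowball recruitment described above can be pictured as growing a sample from initial seed participants by following their referrals until the target size is reached. A minimal sketch, with invented participant IDs and referral chains (the actual recruitment was conducted manually, not algorithmically):

```python
def snowball_sample(seeds, referrals, target):
    """Grow a sample from purposively chosen seeds by following referrals."""
    sample = list(seeds)
    frontier = list(seeds)  # participants not yet asked for referrals
    while frontier and len(sample) < target:
        person = frontier.pop(0)
        for referred in referrals.get(person, []):
            if referred not in sample and len(sample) < target:
                sample.append(referred)
                frontier.append(referred)
    return sample

# Invented referral chains from two seed participants.
referrals = {"S1": ["R1", "R2"], "S2": ["R3"], "R1": ["R4"]}
print(snowball_sample(["S1", "S2"], referrals, 5))
```

The breadth-first ordering mirrors how each recruited participant is asked for further contacts before the chain is extended deeper.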
-
Data Collection Techniques
-
Semi-Structured Interviews
Key themes include:
- role modifications
- feelings of significance
- engagements with AI systems
- experiences of oversight and control
- career anxieties and aspirations.
Interviews lasting 45-60 minutes were recorded and transcribed.
-
Questionnaire
Standardized measures include:
- Job Autonomy Scale
- Maslach Burnout Inventory (short version)
- Perceived Devaluation Scale (adapted version)
- AI Trust and Explainability items.
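Likert-type instruments of this kind are typically scored by reverse-keying negatively worded items and averaging. A minimal sketch with hypothetical item values and reverse-keyed positions (each published instrument defines its own items and keys):

```python
def score_scale(responses, reverse_items=(), points=5):
    """Average Likert responses after reverse-keying selected item indices."""
    scored = []
    for i, r in enumerate(responses):
        if not 1 <= r <= points:
            raise ValueError(f"response {r} outside 1..{points}")
        # A reverse-keyed item maps 1 -> points, 2 -> points-1, and so on.
        scored.append(points + 1 - r if i in reverse_items else r)
    return sum(scored) / len(scored)

# One hypothetical respondent; items at indices 1 and 3 are negatively worded.
autonomy = score_scale([4, 2, 5, 1, 4], reverse_items={1, 3})
print(round(autonomy, 2))  # mean of [4, 4, 5, 5, 4] -> 4.4
```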
-
Document Analysis
Review of organizational policies, AI guidelines, and performance metrics to comprehend the structural context.
-
TOOLS AND TECHNIQUES
-
Data Collection Instruments
-
Interview Instruments
Digital Voice Recorder / Mobile Recording Applications: Employed to capture semi-structured interviews with participants following the acquisition of informed consent.
Google Meet / Zoom: Used for facilitating online interviews with professionals who are working remotely or located in various geographical areas.
Interview Protocol Document: A guide designed by the researcher that includes open-ended questions pertaining to AI usage, changes in roles, and psychological experiences.
-
Survey Platform
Google Forms: Utilized for disseminating structured questionnaires that assess job satisfaction, perceived professional devaluation, autonomy, and stress levels. The integrated response analytics helped organize the quantitative data efficiently.
-
Data Analysis Tools
-
Qualitative Analysis
NVivo / Atlas.ti (or manual thematic coding in MS Word):
- Coding interview transcripts
- Identification of themes such as identity loss, algorithmic pressure, and role ambiguity
- Categorization of narratives supporting the concept of the "ghost in the role."
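Where specialist software is unavailable, the manual coding step reduces to tallying how many transcripts touch each theme. A sketch with an invented codebook and transcript excerpts (real thematic analysis relies on a human coder's judgment, not keyword matching):

```python
from collections import Counter

# Hypothetical codebook: theme -> cues a coder might flag in a transcript.
CODEBOOK = {
    "identity loss": ["replaceable", "not needed", "invisible"],
    "algorithmic pressure": ["dashboard", "monitored", "metrics"],
    "role ambiguity": ["unclear", "who decides", "responsibility"],
}

def tally_codes(transcripts):
    """Count how many transcripts mention each theme at least once."""
    counts = Counter()
    for text in transcripts:
        lowered = text.lower()
        for theme, cues in CODEBOOK.items():
            if any(cue in lowered for cue in cues):
                counts[theme] += 1
    return counts

excerpts = [
    "I feel invisible now; the dashboard decides everything.",
    "It is unclear who decides when the model is wrong.",
]
print(tally_codes(excerpts))
```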
-
Quantitative Analysis
Microsoft Excel / SPSS:
- Descriptive statistics
- Calculation of percentages, mean scores, and graphical representation of survey results.
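The same descriptive summaries computed in Excel/SPSS can be reproduced with Python's standard library; a sketch on hypothetical 5-point responses to a single survey item (the actual dataset is not reproduced here):

```python
import statistics

# Hypothetical 5-point responses to one perceived-devaluation item.
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

mean = statistics.mean(responses)
stdev = statistics.stdev(responses)  # sample standard deviation
# Percentage agreeing, counting ratings of 4 or 5 as agreement.
pct_agree = 100 * sum(r >= 4 for r in responses) / len(responses)

print(f"mean={mean:.2f} sd={stdev:.2f} agree={pct_agree:.0f}%")
```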
-
Documentation and Writing Tools
Microsoft Word: Preparation of research reports, formatting, and referencing.
Grammarly / LanguageTool: Proofreading and language enhancement.
Mendeley / Zotero: Management of references and organization of citations.
-
AI and Digital Technologies Studied
To gain insights into participants' experiences, the research focused on widely utilized workplace AI systems such as:
- ChatGPT, Bard/Gemini, and Copilot for content generation and coding support
- Automated decision-support systems in finance and human resources
- AI-driven scheduling and analytics dashboards
- Machine learning-based customer service bots
These technologies provided the framework within which professional devaluation and psychological effects were analyzed.
-
Security and Ethical Tools
- Password-protected devices
- Encrypted storage for transcripts
- Anonymization techniques to safeguard participant identity.
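The anonymization step can be as simple as replacing participant names with stable codes before transcripts are stored; a minimal sketch (the names and "P01"-style ID scheme are invented for illustration):

```python
import re

def pseudonymize(transcript, names):
    """Replace each known participant name with a stable code like P01."""
    codes = {name: f"P{i + 1:02d}" for i, name in enumerate(names)}
    for name, code in codes.items():
        # Whole-word, case-insensitive replacement preserves possessives.
        transcript = re.sub(rf"\b{re.escape(name)}\b", code, transcript,
                            flags=re.IGNORECASE)
    return transcript

raw = "Asha said the model overruled Asha's report."
print(pseudonymize(raw, ["Asha"]))  # P01 said the model overruled P01's report.
```

Keeping the name-to-code mapping in a separate, encrypted file allows de-identification to be reversed only by the researcher.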
-
CONCLUSION
The swift incorporation of artificial intelligence into professional workflows is transforming the essence of work in ways that go well beyond mere technical efficiency. This research has explored how AI-driven systems affect not only organizational productivity but also the human experience of work, particularly regarding professional identity, value, and psychological well-being. The results emphasize that while AI provides significant advantages in terms of speed, accuracy, and the automation of routine tasks, it also leads to unintended consequences such as professional devaluation and emotional strain.
The notion of the "ghost in the role" encapsulates the lived experiences of many modern professionals who, despite being formally employed, increasingly feel marginalized within their own fields. As algorithms take over decision-making, creative, and analytical tasks, employees often find themselves relegated to the role of overseers of machine outputs. This transition undermines autonomy, reduces the application of expertise, and fosters role ambiguityconditions that are closely linked to stress, disengagement, and burnout. The study illustrates that the psychological effects are not solely a result of technology but are significantly influenced by how AI is deployed, interpreted, and managed within organizations.
The research further indicates that organizational culture is crucial. Environments that foster transparency, reskilling, and collaboration between humans and AI can convert AI into a beneficial asset rather than a potential threat. In contrast, workplaces that view AI solely as a means to reduce costs exacerbate feelings of redundancy and invisibility among their workforce. Consequently, technological advancements should be paired with ethical and human-centered design principles.
This study adds to the expanding conversation regarding the future of work by emphasizing the emotional and identity-related aspects of AI integration. It is vital to recognize professionals as more than mere operators of intelligent systems for a sustainable digital transformation. Organizations, policymakers, and educators must collaborate to establish frameworks that uphold human dignity, promote lifelong learning, and ensure meaningful engagement in an increasingly automated environment.
Future studies ought to investigate the long-term impacts of AI exposure, analyze various professional sectors, and create intervention models that harmonize algorithmic efficiency with psychological health. It is only through these comprehensive strategies that society can avert the emergence of "ghosts" in the workplace and guarantee that AI serves as a tool that enhances rather than undermines human potential.
-
REFERENCES
- Braverman, H. (1974). Labor and monopoly capital: The degradation of work in the twentieth century. Monthly Review Press.
- Davenport, T. H., & Kirby, J. (2016). Only humans need apply: Winners and losers in the age of smart machines. Harper Business.
- Hackman, J. R., & Oldham, G. R. (1976). Motivation through the design of work: Test of a theory. Organizational Behavior and Human Performance, 16(2), 250-279.
- Maslach, C., & Leiter, M. P. (2016). Burnout. In G. Fink (Ed.), Stress: Concepts, cognition, emotion, and behavior (pp. 351-357). Academic Press.
- Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230-253.
- Petriglieri, J. L. (2011). Under threat: Responses to and consequences of threats to individuals' identities. Academy of Management Review, 36(4), 641-662.
- Sennett, R. (1998). The corrosion of character: The personal consequences of work in the new capitalism. W. W. Norton.
- Zuboff, S. (2019). The age of surveillance capitalism. PublicAffairs.
- Brynjolfsson, E., & McAfee, A. (2017). Machine, platform, crowd: Harnessing our digital future. W. W. Norton.
- Brougham, D., & Haar, J. (2018). Smart technology, artificial intelligence, robotics, and algorithms: Implications for work and employment. New Technology, Work and Employment, 33(3), 239-257.
