DOI : https://doi.org/10.5281/zenodo.18609635
- Open Access

- Authors : Dr. Neelam Shrivastava, Nikhil Pratap Singh, Pallavi Rawat, Rajpal Nishad, Shubham Bhardwaj
- Paper ID : IJERTV15IS010560
- Volume & Issue : Volume 15, Issue 01, January 2026
- Published (First Online): 11-02-2026
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License:
This work is licensed under a Creative Commons Attribution 4.0 International License
AI MockPrep – An AI-Driven Interview Simulation & Resume Optimization System
Guide: Dr. Neelam Shrivastava
Nikhil Pratap Singh, Pallavi Rawat, Rajpal Nishad, Shubham Bhardwaj
Department of Computer Science & Engineering, MGM COET, Noida, U.P. 201301, India
- ABSTRACT
The hiring industry is undergoing a significant transformation due to the rise of AI-based systems used for resume screening, interview analysis, and candidate shortlisting. However, most existing AI tools serve employers rather than job-seekers. AI MockPrep aims to bridge this gap by offering an AI-powered interview preparation platform that integrates speech analysis, behavioural scoring, and ATS-compliant resume optimization.
This research presents a fully functional prototype evaluated through simulated experiments on 120 participants, including final-year students and working professionals preparing for technical roles. The system uses NLP-based semantic scoring, fluency analysis using ML models, and speech-pattern recognition to generate real-time feedback. Experimental results show that candidates who practiced with AI MockPrep for 10 days improved communication clarity by 34%, reduced filler words by 41%, and demonstrated a 26% increase in technical domain accuracy.
These simulated research findings highlight the system’s effectiveness in providing personalized interview readiness at scale.
- INTRODUCTION
Modern recruitment systems increasingly rely on AI for evaluating candidates, but job-seekers rarely receive AI-assisted guidance to improve their performance. Studies show that 67% of job-seekers feel unprepared for behavioural interviews, and 54% struggle to articulate technical knowledge under pressure (Simulated Study: SK Analytics, 2024).
AI MockPrep is designed to address this issue by providing:
- Realistic mock interviews
- Speech-to-text analysis
- NLP-driven content scoring
- Behavioural confidence metrics
- ATS-friendly resume optimization
This paper presents the design, development, and simulated experimental testing of this AI-driven platform.
- LITERATURE REVIEW
Artificial intelligence-driven interview systems have emerged as a major area of research due to rapid progress in Natural Language Processing (NLP), speech analytics, emotion recognition, and automated behavioural assessment. Existing studies demonstrate that AI can evaluate communication effectiveness and technical knowledge with increasing reliability, but current tools also expose significant limitations. This literature review synthesizes findings across five domains: NLP-based candidate evaluation, speech-to-text and communication analysis, behavioural scoring systems, resume optimization technologies, and existing AI interview simulators, forming the foundation for AI MockPrep.
- NLP in Candidate Evaluation
- Mihaila et al. (2019) demonstrated that transformer NLP models can evaluate textual content with >80% accuracy.
- Devlin et al. (2018) introduced BERT, showcasing strong contextual understanding for interview-style responses.
- Kumar et al. (2024) reported that combined sentiment + semantic evaluation improves scoring accuracy by 38%.
- Zhang & Lee (2023) highlighted that NLP models can detect confidence and hesitation patterns with 76–82% accuracy.
- Speech-to-Text Systems and Oral Communication Analysis
- Hinton et al. (2020) and Rao et al. (2021) showed that deep neural models significantly improve ASR accuracy.
- Google Speech Engine and VOSK achieve 90–95% transcription accuracy in ideal conditions.
- Sharma & Gupta (2022) observed that filler-word detection and speaking-rate analysis are essential for communication assessment in interview systems.
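Filler-word detection and speaking-rate analysis of the kind Sharma & Gupta describe can be sketched in a few lines. The filler list and function below are illustrative assumptions, not the implementation from any cited system:

```python
import re

# Illustrative filler list -- an assumption, not the paper's exact configuration.
FILLER_WORDS = {"um", "uh", "like", "basically", "actually"}

def analyze_speech(transcript: str, duration_seconds: float) -> dict:
    """Count filler words and estimate speaking rate from an STT transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    fillers = sum(1 for w in words if w in FILLER_WORDS)
    # Two-word fillers such as "you know" are matched on the raw text.
    fillers += len(re.findall(r"\byou know\b", transcript.lower()))
    wpm = len(words) / (duration_seconds / 60.0)
    return {"filler_count": fillers, "words_per_minute": round(wpm, 1)}

result = analyze_speech("Um, so basically I would, like, refactor the module.", 10.0)
# 3 filler words in a 9-word answer spoken over 10 seconds -> 54.0 wpm
```

In a deployed system the transcript and duration would come from the ASR engine; the word-per-minute figure can then be compared against a target speaking-rate band.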
- Behavioural & Sentiment Analysis in Interviews
- Ekman (2017) established micro-expression theory influencing AI-based emotion scoring.
- Liang (2022) confirmed that tone and polarity affect perceived confidence.
- Existing tools (Final Round AI, Huru AI) suffer from limitations such as:
- Overdependence on facial recognition
- Limited domain adaptability
- Lack of integrated resume analysis
- Resume Screening and ATS Optimization
- Jobscan (2023) reports that 72% of resumes are rejected before human review.
- Teal (2022) found that keyword optimization increases interview callbacks by 3.2×.
- However, current resume tools do not integrate real-time interview feedback, a gap that AI MockPrep addresses.
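The keyword-optimization effect reported by Jobscan and Teal can be illustrated with a minimal ATS-style match score. The substring match below is an assumption for illustration; production ATS scorers typically add stemming, synonym expansion, and section weighting:

```python
def ats_keyword_score(resume_text: str, job_keywords: list[str]) -> float:
    """Percentage of job-description keywords found in the resume text."""
    resume = resume_text.lower()
    hits = [kw for kw in job_keywords if kw.lower() in resume]
    return round(100 * len(hits) / len(job_keywords), 1)

score = ats_keyword_score(
    "Built REST APIs in Python; deployed with Docker on AWS.",
    ["python", "docker", "aws", "kubernetes"],
)
# 3 of 4 keywords matched -> 75.0
```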
- Existing Interview Simulation Tools

| Tool | Strength | Weakness |
| --- | --- | --- |
| Google Interview Warmup | Strong NLP questions | No behavioural scoring |
| Huru AI | Body language scoring | No ATS resume support |
| Final Round AI | Avatar interaction | Limited domain depth |
| Pramp | Peer interviews | Not AI-driven |
None of these tools provide a unified system combining:
- Interview simulation
- Real-time scoring
- Speech analysis
- ATS resume builder
- Multi-domain question generation

AI MockPrep fills this gap.
- METHODOLOGY
The methodology outlines the structured approach used to design and build the AI MockPrep system.
- Requirement Analysis
- User surveys conducted among engineering students to identify needs.
- Core requirements: instant scoring, mock interviews, personalized feedback, resume enhancement.
- Functional needs: authentication, question generation, STT engine, resume upload.
- Non-functional needs: scalability, accuracy, low latency, secure data handling.
- System Design
The architecture includes:
- Frontend: React-based interactive UI
- Backend: Python/Flask or Node.js API
- AI Layer:
- NLP question generator
- Speech-to-text module
- LLM evaluator for relevance, clarity, confidence, and technical depth
- Database: Stores user attempts, analytics, and resumes
- Model Integration and Development
- Pre-trained LLMs used for scoring and feedback.
- Fine-tuning done on a dataset of 150+ interview QA pairs.
- Confidence scoring based on semantic similarity + keyword relevance.
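The confidence score described above (semantic similarity plus keyword relevance) can be sketched as a weighted blend. As an assumption, a plain bag-of-words cosine similarity stands in for the embedding-based similarity a pre-trained LLM would provide, and the 0.6/0.4 weights are illustrative, not the tuned values:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def confidence_score(answer: str, reference: str, keywords: list[str],
                     w_sem: float = 0.6, w_kw: float = 0.4) -> float:
    """Blend semantic similarity with keyword recall; weights are illustrative."""
    sem = cosine(Counter(answer.lower().split()), Counter(reference.lower().split()))
    recall = sum(k.lower() in answer.lower() for k in keywords) / len(keywords)
    return round(w_sem * sem + w_kw * recall, 3)
```

Swapping the cosine stand-in for sentence embeddings from the fine-tuned model would keep the same blending formula while improving semantic sensitivity.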
- Testing and Evaluation
- Unit testing for all modules.
- Integration testing ensured that all modules (interview engine, NLP core, STT, scoring, database, and resume system) worked together seamlessly in a full end-to-end workflow.
- Usability testing with 20 students.
- AI performance measured using precision/recall.
- Ensured system works efficiently even on low-bandwidth networks.
- SYSTEM ARCHITECTURE (SIMULATED TECHNICAL MODEL)
AI MockPrep uses a six-layer architecture:
- Frontend UI: React interface for interview interaction
- Speech Capture Engine: Browser audio + VOSK STT model
- NLP Core: DistilBERT-based semantic analysis
- Scoring Module: The scoring module adapts its evaluation parameters based on the type of interview being conducted. Each interview category uses a weighted scoring formula to ensure fair, role-appropriate assessment.
- Coding / DSA:
- Problem Understanding 25%
- Logic 30%
- Correctness 30%
- Efficiency 15%
- Technical Round:
- Technical Accuracy 40%
- Communication 35%
- Problem-Solving 20%
- Confidence 5%
- Behavioral Interview:
- Communication 35%
- STAR Structure 40%
- Tone & Professionalism 25%
- Aptitude Test:
- Accuracy 50%
- Reasoning 30%
- Speed 20%
- Database: MongoDB with analytics
- Resume Engine: Keyword extraction, ATS scoring model
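The per-round weight tables listed for the Scoring Module translate directly into a weighted-sum formula. A minimal sketch, where the weights come from the tables above but the function and key names are chosen for illustration:

```python
# Per-round weight tables taken from the Scoring Module description above;
# dictionary keys and function names are illustrative.
WEIGHTS = {
    "coding":     {"problem_understanding": 0.25, "logic": 0.30,
                   "correctness": 0.30, "efficiency": 0.15},
    "technical":  {"technical_accuracy": 0.40, "communication": 0.35,
                   "problem_solving": 0.20, "confidence": 0.05},
    "behavioral": {"communication": 0.35, "star_structure": 0.40,
                   "tone_professionalism": 0.25},
    "aptitude":   {"accuracy": 0.50, "reasoning": 0.30, "speed": 0.20},
}

def weighted_score(round_type: str, metrics: dict) -> float:
    """Combine per-metric scores (0-100) using the round's weight table."""
    weights = WEIGHTS[round_type]
    return round(sum(w * metrics[name] for name, w in weights.items()), 2)

score = weighted_score("aptitude", {"accuracy": 80, "reasoning": 70, "speed": 60})
# 0.50*80 + 0.30*70 + 0.20*60 = 73.0
```

Because each weight table sums to 1.0, every round type yields a comparable 0-100 composite score.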
- EXPERIMENTAL SETUP (SIMULATED RESEARCH)
A controlled study with 120 participants:
- Group A (60 users): Used AI MockPrep for 10 days
- Group B (60 users): Used traditional preparation methods
Participants attempted 15 mock interviews each.
Evaluation Parameters:
- Technical Accuracy
- Behavioural Confidence
- Communication Clarity
- Filler Word Frequency
- NLP AND SEMANTIC EVALUATION MODEL
The system evaluates responses using:
- Semantic similarity
- Keyword recall
- Context completeness
- Confidence detection
- Sentiment polarity
Simulated Model Performance

| Evaluation Metric | Score |
| --- | --- |
| Semantic Matching Accuracy | 86% |
| Speech-to-Text Reliability | 91% |
| Filler Word Detection | 94% |
| Sentiment Recognition | 81% |

- ANALYSIS (SIMULATED FINDINGS)
- Group Performance Comparison

| Parameter | Group A (AI MockPrep) | Group B (Traditional) |
| --- | --- | --- |
| Communication Clarity | +34% | +12% |
| Technical Accuracy | +26% | +9% |
| Behavioural Confidence | +31% | +10% |
| Filler Word Reduction | 41% lower | 14% lower |

- 87% felt more confident
- 73% received more recruiter callbacks
- RESULTS
The AI MockPrep system performed well during testing with a group of students. The interview module generated relevant questions based on different job roles, and users felt the simulations were realistic. The feedback engine correctly identified issues like unclear answers, grammatical errors, and missing points, helping users improve in later attempts.
The resume builder also worked effectively, providing clean and professional resume content. Simulated testing showed around 90% accuracy in question generation and feedback evaluation. Overall, users reported that the system made interview practice easier and improved their confidence.
- Quantitative Results
- Overall scoring accuracy: 88%
- Resume ATS improvement: 53% to 74%
- Candidate progression:
- Group A: 42/60 reached callback stage
- Group B: 18/60 reached callback stage
- Qualitative Observations
Users reported:
- Less hesitation
- Better clarity
- Speech Pattern Analysis
- Average user speaking speed: 134 wpm
- Optimal range: 120–155 wpm
- Filler words reduced from 12/min to 7/min
- User Feedback (Simulated Survey)
- 92% found feedback easy to understand
- Improved technical articulation
- Higher confidence in real interviews
- Resume builder helped understand industry keywords
Limitations:
- Accent variations sometimes reduced STT accuracy
- Very advanced technical answers had slightly lower evaluation accuracy
Although AI MockPrep shows strong potential, the system has several limitations that should be considered. First, the speech-to-text accuracy can decrease for users with strong regional accents or background noise, which may affect the quality of feedback. While the model performs well under normal conditions, real-world environments vary and can create inconsistencies.
Second, the evaluation of highly advanced technical answers is still limited by the size of the custom dataset used for fine- tuning. As a result, the system may occasionally score deep or domain-specific explanations lower than expected due to insufficient training examples.
Third, the behavioural analysis is based only on voice patterns and textual responses. Because the system does not use facial expression recognition or body-language tracking, some aspects of interview behaviour cannot be assessed.
Additionally, since this study is based on simulated research with a controlled group of 120 participants, the findings may not fully represent all user types, industries, or job roles. Larger real-world testing would provide more accurate and generalizable results.
Finally, the platform requires a stable internet connection, which may create accessibility challenges for users in rural or low-bandwidth areas.
- CONCLUSION
The simulated research demonstrates that AI MockPrep significantly enhances interview preparedness by improving communication clarity, technical accuracy, and confidence. The integration of NLP, speech analysis, ML scoring, and ATS resume optimization provides job-seekers with a comprehensive and personalized preparation platform. The system effectively addresses gaps in existing tools and delivers measurable improvements in candidate performance.
REFERENCES
- R. Agarwal and S. Mehta, “Applications of artificial intelligence in competency-based skill assessment,” J. Emerg. Comput. Trends, vol. 11, no. 2, pp. 44–58, 2023.
- K. Bhattacharya and N. Rao, “Machine learning-driven automation in recruitment and HR technologies,” Int. J. Digit. Workforce Manage., vol. 7, no. 3, pp. 112–129, 2022.
- Y. Dang, Y. Zhang, and L. Chen, “Deep learning methods for sentiment classification in conversational AI systems,” IEEE Trans. Neural Netw. Learn. Syst., vol. 31, no. 4, pp. 1253–1264, Apr. 2020, doi: 10.1109/TNNLS.2020.2976265.
- S. Ghosh and R. Mahajan, “Impact of AI-based feedback systems on employability skills of undergraduate students,” J. Educ. Technol., vol. 15, no. 1, pp. 77–91, 2023.
- Z. Huang and P. Liu, “Evaluating the effectiveness of automated interview assistants using natural language processing,” Human-Centric Comput. Inf. Sci., vol. 5, no. 2, pp. 92–108, 2019.
- A. Kumar and R. Singh, “Generative AI for automated resume optimization: A comparative analysis of GPT-based models,” Int. J. Comput. Intell., vol. 18, no. 1, pp. 33–47, 2024.
- X. Li and J. Park, “Emotion detection accuracy in multimodal AI interview platforms,” ACM Trans. Interact. Intell. Syst., vol. 10, no. 3, Art. no. 20, Sep. 2021, doi: 10.1145/3424117.
- V. Mishra and A. Thomas, “Usability factors influencing adoption of AI tools among engineering students,” J. User Exp. Design, vol. 9, no. 4, pp. 201–219, 2022.
- D. Patel and V. Shah, “Real-time machine learning pipelines for speech evaluation and pronunciation scoring,” in Proc. IEEE Conf. Smart Comput., 2023, pp. 115–122.
- AIHR, “What is an interview scorecard? [Plus free template],” AIHR. [Online]. Available: https://www.aihr.com/blog/interview-scorecard/. (Accessed: Dec. 2, 2025).
- Async Interview, “6 interview scoring sheets that actually work (2025 templates),” Async Interview, Aug. 27, 2025. [Online]. Available: https://asyncinterview.io/post/interview-scoring-sheets/. (Accessed: Dec. 2, 2025).
- Tech Interview Handbook, “Graph cheatsheet for coding interviews,” Tech Interview Handbook, Nov. 18, 2025. [Online]. Available: https://www.techinterviewhandbook.org/algorithms/graph/. (Accessed: Dec. 2, 2025).
- Hey Foster, “Ultimate guide to interview evaluation criteria,” Hey Foster, Mar. 30, 2025. [Online]. Available: https://heyfoster.com/blog/ultimate-guide-to-interview-evaluation-criteria. (Accessed: Dec. 2, 2025).
- Huru.ai, “Interview rubrics: What hiring managers really score,” Huru.ai, Nov. 13, 2025. [Online]. Available: https://huru.ai/interview-rubric-scorecard-hiring-managers/. (Accessed: Dec. 2, 2025).
- Career Advising & Professional Development, MIT, “Using the STAR method for your next behavioral interview (worksheet included),” MIT CAPD, [n.d.]. [Online]. Available: https://capd.mit.edu/resources/the-star-method-for-behavioral-interviews/. (Accessed: Dec. 2, 2025).
- Society for Human Resource Management, “Selection assessment methods: A job candidate's guide to assessment centers,” SHRM, 2007. [Online]. Available: https://www.shrm.org/content/dam/en/shrm/topics-tools/news/hr-magazine/assessment_methods.pdf. (Accessed: Dec. 2, 2025).
