Placement Training using Machine Learning

DOI : 10.17577/IJERTV8IS060716


Aishwarya P R, Anamika, Ganga V M, Kamarunnisa K A

Department of Computer Science and Engineering, LBSITW, Trivandrum, Kerala

Abstract – This paper presents a computer-based placement training system to replace the manual training procedures adopted by institutes. As training is a year-round activity involving thousands of candidates, a need has been felt to automate the entire operation. With the help of this system, a candidate can attend a real-time mock interview. A facial expression recognition facility enriches the system with the capability of analyzing the candidate's gestures and interview ethics. The system also provides feedback that helps students improve their performance; instant feedback will in turn increase diversity and inclusion.

Key Words: Convolutional Neural Network, Deep Neural Network, Global Average Pooling, Rectified Linear Unit, Web Speech API.

  1. INTRODUCTION

    We are living in the most technology-oriented time. Day by day, technology reduces human effort; people are interested in automation and want to reduce the burden of manual tasks. In a highly competitive world, everyone wants to perform well during placements, so it is a natural need to automate the selection procedure. Knowing and understanding recruiting analytics can help win today's war for talent. In response, a vast array of human resources technology tools continues to enter the marketplace, promising a better, faster way to find quality candidates. Artificial intelligence is the most recent trend to appear in human resources technology and one of the most promising. As a subfield of artificial intelligence, machine learning is a method of data analysis that constructs analytical models automatically from large sets of training data.

      1. Problem statement

        The real-life scenario of a competitive world is that there are different types of students, and academic performance and results do not always define them or their skills. CGPA alone cannot determine whether a student will perform well in campus placements, and different companies need specialization in different sectors. Practicing in front of a mirror or appearing for an online test won't help in a real-time interview, and training candidates manually requires manpower and time. To solve this problem, the Placement Training using Machine Learning system is designed for institutes to train candidates for upcoming recruitment. The training process, enriched with data from previous recruitment drives, helps students analyze their performance and keep track of it. The parameters are selected by the institute, which has the freedom to change the criteria or select questions according to its needs.

      2. Objective

    The primary motive was to develop a system that reduces the load of the manual training process and provides an effective way to test and train candidates. So an attempt was made to find a solution that gives intelligence to a machine so that it can help provide the best training. The main objective was to understand the role of machine learning in recruitment and the benefits of using ML in placement training. When programmed appropriately, machines can be instructed to avoid specific pitfalls that humans may find hard to navigate. For example, the system can be programmed to ask identical questions of candidates regardless of their gender or age. This can result in an improvement in diversity and inclusion.

  2. METHODOLOGY

A new approach for evaluating candidates in recruitment systems is proposed. The implementation is achieved by recognizing gestures and speech commands to measure the ethics that a candidate is expected to follow in a recruitment process. The paper describes the developed human interface, designed to provide user-friendly interaction. Machine learning tools help placement team members by tracking a candidate's journey throughout the interview process and by speeding up the process of getting feedback. After the evaluation, feedback is given to candidates in order to improve their competency level and to identify their areas of improvement.

    1. System Architecture

      The system is used by the candidates, the administrator and the placement unit. The questions and answers are stored in a database. A candidate interacts with a user-friendly interface, and the system questions the candidate using predefined questions stored in the database. The domain of the model is the technical interview questions asked during placements. The system also monitors the facial expressions of the candidate and gives instant feedback. Finally, the system gives feedback about the candidate's performance in the technical domain, which gives the candidate a rough idea of the areas to be improved before a real interview.

      At first, the interface is designed using the NetBeans IDE with JavaScript and HTML. The user and admin login pages are designed, and the field values are connected to a MySQL database for inserting the respective inputs. The database is created simultaneously for storing the student details and the interview questions with their corresponding answers.

      Fig -1: Flowchart

      The interview page contains a speech-to-text module. The candidates answer each question by clicking on the microphone icon. The answers are converted to text and stored for further processing. The data obtained is then compared with the predefined answers; if correct, the score is incremented. Meanwhile, the facial expression recognition module detects the face and processes the data obtained with the help of a model that is pre-constructed from a sample dataset. This processing produces a probability for each of the expressions defined in the dataset. The score and the detected facial expression are then used to generate the feedback.
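
      The following is a minimal Python sketch of the answer-checking step, assuming the recognized text and the stored answers are available as plain strings; the normalization rule and the function names are illustrative assumptions, not the system's actual code.

```python
# Sketch of the answer-checking step described above (illustrative only).
import re

def normalize(text: str) -> str:
    """Lower-case the text and drop punctuation so minor speech-to-text
    variations do not cause a mismatch."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

def score_answers(recognized: dict, predefined: dict) -> int:
    """Compare each recognized answer with the stored answer and count matches."""
    score = 0
    for question_id, stored_answer in predefined.items():
        given = recognized.get(question_id, "")
        if normalize(given) == normalize(stored_answer):
            score += 1
    return score

# Example usage with two dummy questions.
predefined = {1: "Stack", 2: "Binary search"}
recognized = {1: "stack", 2: "linear search"}
print(score_answers(recognized, predefined))  # -> 1
```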

    2. Modules

      Placement training using machine learning comprises four modules. They are:

      1. Interface module – The interaction medium between the user and the system. This module is designed using the NetBeans IDE with JavaScript, HTML and a MySQL database. The model provides both a user login and an admin login.

        Admin Login: The admin is the one who administers the system and enters updates, and has complete access to the system. In this model, the CGPU or the placement officer is the administrator. The role includes operations such as updating or deleting a candidate and viewing candidate details; the model has UPDATE and DELETE buttons which perform the corresponding operations.

        User Login: Users should have an account to log in to the system. A registered user can log in by providing their username and password; a new user can create an account on the Register page and then utilize the services. In our model, the users are the candidates who take the technical interview. The model gives feedback to the users: the feedback comprises the numbers of correct and wrong answers attempted by the candidate, the score obtained in the interview and the answer to each question, all of which are displayed on the feedback page. The interview page includes a speech recognition medium. Questions are displayed in multiple-choice format and the candidate answers in audio form. The audio is checked against the corresponding answer in the MySQL database, and if the answer is correct, the score is incremented.
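
        A minimal sketch of this check against MySQL is shown below; the table and column names, credentials and connector usage are illustrative assumptions rather than the described system's actual schema.

```python
# Sketch of checking a spoken answer against the stored answer in MySQL.
import mysql.connector

def check_answer(question_id: int, recognized_text: str) -> bool:
    """Return True if the recognized answer matches the stored answer."""
    conn = mysql.connector.connect(
        host="localhost", user="placement", password="secret",
        database="placement_training",          # placeholder credentials/schema
    )
    try:
        cursor = conn.cursor()
        cursor.execute(
            "SELECT answer FROM questions WHERE question_id = %s", (question_id,)
        )
        row = cursor.fetchone()
        return row is not None and recognized_text.strip().lower() == row[0].strip().lower()
    finally:
        conn.close()
```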

      2. Machine learning module – Deep Neural Networks (DNNs) are models inspired by the human brain, and particularly by its ability to extract structures (patterns) from raw data. The most popular image-processing DNN structure is the CNN, which is constructed from three main processing layers: the convolutional layer, the pooling layer and the fully connected layer.

        Fig -2: Layers of processing

        This module takes input from the face recognition module and processes the data using a deep learning approach. The system uses CNN image classification, which takes an input image, processes it and classifies it into specified categories.

        In the first step, the input image is captured using a webcam and the face is detected using OpenCV in Python. Features are then extracted from the detected face image using the CNN, and for the classification task the extracted features are given to a classifier such as Logistic Regression or SVM, which predicts the recognized expression as output. The algorithm uses the open source computer vision library (OpenCV) and machine learning with Python.
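
        A minimal sketch of this feature-extraction-plus-classifier pipeline is given below; the Haar cascade detector, the MobileNetV2 feature extractor and the SVM are stand-ins for illustration, not the exact models used by the described system.

```python
# Sketch: detect the face with OpenCV, extract CNN features, classify with an SVM.
import cv2
import numpy as np
from tensorflow.keras.applications import MobileNetV2
from sklearn.svm import SVC

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
feature_extractor = MobileNetV2(weights="imagenet", include_top=False, pooling="avg")

def face_features(image_bgr):
    """Detect the largest face and return a CNN feature vector for it, or None."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])          # largest detection
    roi = cv2.resize(image_bgr[y:y + h, x:x + w], (224, 224)).astype("float32")
    return feature_extractor.predict(roi[np.newaxis] / 127.5 - 1.0)[0]

# The classifier is then fitted on features extracted from the labelled dataset:
# clf = SVC(probability=True).fit(train_features, train_labels)
```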

      3. Face recognition module – This module gives the instant probability of each emotion. Our system uses a facial expression dataset of 35,888 samples with three fields: emotion, pixels and usage. The emotion field holds one of the seven emotions that our system recognizes.

        The architecture is a region-based fully convolutional neural network composed of 9 convolution layers, ReLUs, batch normalization and global average pooling. A Region-based Fully Convolutional Network (R-FCN) is fully convolutional and shares almost all of its computation across the entire image.

        Table -1: Emotion values

        As a result, R-FCN achieves competitive results. Meanwhile, R-FCN is 2.5 to 20 times faster than previous networks and performs much better than other region-based detectors; therefore, we select R-FCN to train the model. Most recent deep learning networks use Rectified Linear Units (ReLUs) for the hidden layers. A rectified linear unit outputs 0 if the input is less than 0, and the raw input otherwise; that is, if the input is greater than 0, the output is equal to the input.

        Fig -3: Model for real time classification

        Batch normalization is a method used to normalize the inputs of each layer in order to fight the internal covariate shift problem; usually, some pre-processing is also applied to the input data before training a neural network. In a neural network, the activation function is responsible for transforming the summed weighted input of a node into the activation (output) of that node. The softmax function calculates the probability distribution of an event over n different events. In other words, it calculates the probability of each target class over all possible target classes, and the calculated probabilities are later used to determine the target class for the given inputs. The softmax function squashes the output of each unit to lie between 0 and 1, just like a sigmoid function, but it also divides each output so that the total sum of the outputs is equal to 1. The output of the softmax function is therefore equivalent to a categorical probability distribution; it tells the probability that each of the classes is true.

        Mathematically, the softmax function is σ(z)_j = e^{z_j} / Σ_{k=1}^{K} e^{z_k} for j = 1, 2, …, K, where z is the vector of inputs to the output layer (if there are 10 output units, then there are 10 elements in z) and j indexes the output units.
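
        The short worked example below illustrates the formula (it is a sketch, not the system's code): the softmax outputs lie between 0 and 1 and sum to exactly 1.

```python
# Worked example of the softmax function described above.
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    exp_z = np.exp(z - z.max())        # subtract the max for numerical stability
    return exp_z / exp_z.sum()

logits = np.array([2.0, 1.0, 0.1])     # hypothetical outputs for three classes
print(softmax(logits))                 # -> approximately [0.659 0.242 0.099]
print(softmax(logits).sum())           # -> 1.0
```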

        The final architecture is a fully convolutional neural network in which each convolution is followed by a batch normalization operation and a ReLU activation function. The last layer applies global average pooling and a softmax activation function to produce the prediction.
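
        A minimal Keras sketch of this architecture is given below: nine convolutions, each followed by batch normalization and ReLU, then global average pooling and a softmax over the seven emotions. The filter counts, pooling positions and the 28×28 input size (taken from the algorithm section) are assumptions for illustration.

```python
# Sketch of the described fully convolutional FER architecture (assumed sizes).
from tensorflow.keras import layers, models

def build_fer_model(input_shape=(28, 28, 1), num_emotions=7):
    inputs = layers.Input(shape=input_shape)
    x = inputs
    filters = [16, 16, 32, 32, 64, 64, 128, 128]
    for i, f in enumerate(filters):
        x = layers.Conv2D(f, 3, padding="same")(x)      # convolutions 1-8
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
        if i in (1, 3, 5):                              # occasional downsampling (assumed)
            x = layers.MaxPooling2D()(x)
    # Ninth convolution maps to one channel per emotion (fully convolutional head).
    x = layers.Conv2D(num_emotions, 3, padding="same")(x)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Softmax()(x)
    return models.Model(inputs, outputs)

model = build_fer_model()
model.summary()
```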

      4. Processing and feedback module – The processing module takes the data, then processes and analyzes it to produce the output. The system records all the answers obtained from the candidates as output of the speech-to-text API. The answers are compared with the set of predefined answers for the respective questions stored in the database, and the system analyzes the results and records the performance. Simultaneously, it processes the inputs from the facial expression recognition system and analyzes them to produce the output. The output after processing and analysis is sent to the feedback module.

      The feedback module is responsible for producing the output in a form suitable for users. It outputs the questions correctly answered by the candidate and the questions the candidate failed to answer correctly, along with the correct answers to the corresponding questions. This helps candidates keep track of their performance. It also has a window presenting feedback on the expressions, consisting of the probability of each expression read by the FER module. This helps candidates improve their ethics and gestures while attending a real-time interview, and constant feedback helps students improve and excel in interviews.

    3. ALGORITHM

Algorithm for facial expression recognition

Aim: Facial expression recognition
Input: Image captured by webcam
Output: P(E), the probability of each emotion
(A minimal code sketch of this loop is given after the steps.)

  1. Initialize parameters for loading data and images.

  2. Initialize hyper-parameters for bounding boxes shape.

  3. Load the model.

  4. Start video streaming.

  5. Read the frame.

  6. Extract the ROI of the face from the grayscale image, resize it to a fixed 28×28 pixels.

  7. Prepare the ROI for classification via the 9-layered CNN. The following operations are performed on each layer.

    1. The ReLU function is applied (imported from TensorFlow) and batch normalization is applied (the BatchNormalization module is imported from the Keras layers).

    2. ReLU and batch normalization are applied repeatedly on each layer.

  8. Global average pooling and the softmax function are applied on the last layer to predict the output.

      1. A tensor with dimensions h×w×d is reduced to dimensions 1×1×d by GAP. GAP layers reduce each h×w feature map to a single number by taking the average of all h×w values.

      2. The softmax function calculates the probability of each target class over all possible target classes.

  9. Construct the label text.

  10. Draw the label + probability bar on the canvas.

  11. Exit.
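
The following is a minimal sketch of the recognition loop above, assuming a trained Keras model saved as fer_model.h5 and the standard OpenCV Haar cascade face detector; the emotion label order, file name and pixel scaling are assumptions.

```python
# Sketch of the facial expression recognition loop described in the steps above.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
model = load_model("fer_model.h5")                       # step 3: load the model
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)                                # step 4: start streaming
while True:
    ok, frame = cap.read()                               # step 5: read the frame
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        roi = cv2.resize(gray[y:y + h, x:x + w], (28, 28))   # step 6: extract ROI
        roi = roi.astype("float32") / 255.0               # scaling is an assumption
        probs = model.predict(roi[np.newaxis, ..., np.newaxis])[0]  # steps 7-8
        label = EMOTIONS[int(np.argmax(probs))]           # step 9: label text
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, f"{label}: {probs.max():.2f}", (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)  # step 10
    cv2.imshow("FER", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):                 # step 11: exit
        break
cap.release()
cv2.destroyAllWindows()
```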

Algorithm for interface module

Aim: User Login Processing

Input: Username and password of user
Output: Performance evaluation feedback

  1. Create a database for storing the student details and interview questions with corresponding answers.

  2. The candidate logs in to the system using a username and password and enters the interview page.

    if a new user, then

    create an account by registering in the Register page

  3. The candidates answer each question by clicking on the microphone icon.

  4. The answers are converted to text by using speech-to-text API and stored for further processing.

  5. The data obtained is then compared with the predefined answers.

  6. If correct, the score is generated.

  7. Facial expression recognition is also done in parallel.

  8. The score and the detected facial expression are used to generate the feedback, as sketched below.
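
A minimal sketch of this feedback step follows: it combines the interview score with the averaged expression probabilities into a feedback report. The report structure and field names are illustrative assumptions, not the system's actual output format.

```python
# Sketch of assembling the candidate feedback from the score and FER output.
import numpy as np

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def build_feedback(score, total, expression_probs):
    """Summarize the candidate's answers and the expressions seen during the interview."""
    mean_probs = np.mean(expression_probs, axis=0) if expression_probs else np.zeros(len(EMOTIONS))
    return {
        "correct_answers": score,
        "wrong_answers": total - score,
        "score_percent": round(100 * score / total, 1) if total else 0.0,
        "expressions": dict(zip(EMOTIONS, np.round(mean_probs, 3).tolist())),
    }

# Example usage with dummy data: 7 of 10 answers correct, one uniform FER reading.
print(build_feedback(7, 10, [np.full(len(EMOTIONS), 1 / len(EMOTIONS))]))
```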

3. RESULT

The accuracy of the FER module was evaluated both manually and using the WEKA tool for a comparative study. The J48 classifier generates a decision tree with 18 leaf nodes and a tree size of 35, showing that the proposed system produces 66% accuracy. The corresponding confusion matrix is shown in Fig 4.

Fig -4: Confusion Matrix J48 Decision Tree

After generating the trained CNN model, test data was used to evaluate its performance. The system was tested against 40 test images and an accuracy of around 59% was obtained. The same dataset was trained and tested using the WEKA tool, and an accuracy of 66% was obtained.

Therefore, on comparative analysis, the system was found to have an approximate accuracy of 66%.

The final output of the system is the candidate's performance score and the predicted expression probabilities, which are displayed to the candidate. This helps the candidate analyze his or her performance and improve in future attempts.

4. CONCLUSIONS

This project presented the development of a machine that provides practice for technical job interviews. The candidate interacts with a system that holds predefined questions and answers in a database; the speech input is converted to text using the Web Speech API, stored, and then processed and analyzed. The feedback given to candidates helps them face technical interviews with confidence. The machine will not be biased at any point during the interview, thereby ensuring a system that provides accurate feedback. Practice makes perfect: this model allows candidates to practice mock interviews with less manpower and better performance. Immediate feedback helps candidates improve in their technical domain and learn how to face an interview with confidence. The facial expression system analyzes the candidate's expressions and behaviour during the interview. An interview conducted by the machine reduces the time considerably, to about 40% of that taken by a human interviewer, and the effort and time an interviewer spends on conducting mock interviews is likewise saved. The ultimate aim of the proposed project is to make the candidate equipped for placement.

ACKNOWLEDGEMENT

We take this opportunity to express our deep sense of gratitude and sincere thanks to all who helped us to complete the project design phase successfully. With pleasure, we express our sincere thanks to our Principal, Dr. JAYAMOHAN J, for rendering all the facilities for the completion of our project within the campus. We are deeply indebted to Professor SREELETHA S H, Head of the Department of Computer Science and Engineering, and to our project guide, Professor SMITHA VAS, Assistant Professor, Department of Computer Science and Engineering, for their excellent guidance, positive criticism and valuable comments. We express deep gratitude to all the faculty and staff members of the Department of Computer Science and Engineering for their unflinching support and encouragement throughout the course of this work. Finally, we thank our parents and friends, near and dear ones, who directly and indirectly contributed to the successful completion of our project.

REFERENCES

  1. Moechammad Sarosa, Mochammad Junus, Mariana Ulfah Hoesny, Zamah Sari and Martin Fatnuriyah, "Classification Technique of Interviewer-Bot Result using Naïve Bayes and Phrase Reinforcement Algorithms", iJET, Vol. 13, No. 2, 2018.

  2. Abir Fathallah, Lotfi Abdi and Ali Douik, "Facial Expression Recognition via Deep Learning", IEEE/ACS 14th International Conference on Computer Systems and Applications (AICCSA), 2017.

  3. Octavio Arriaga, Paul G. Plöger and Matias Valdenegro, "Real-time Convolutional Neural Networks for Emotion and Gender Classification", 20 October 2017.

  4. Marium-E-Jannat, Sayma Sultana Chowdhury and Munira Akther, "A Probabilistic Machine Learning Approach for Eligible Candidate Selection", International Journal of Computer Applications (0975-8887), Vol. 144, No. 10, June 2016.

  5. Anagha S. Dhavalikar and Dr. R. K. Kulkarni, "Face Detection and Facial Expression Recognition System", 2014 International Conference on Electronics and Communication Systems (ICECS 2014).

  6. Evanthia Faliagka, Kostas Ramantas, Athanasios Tsakalidis and Giannis Tzimas, "Application of Machine Learning Algorithms to an online Recruitment System", ICIW 2012: The Seventh International Conference on Internet and Web Applications and Services.

  7. Yingli Tian, Takeo Kanade and Jeffrey F. Cohn, "Facial Expression Recognition", January 2011.
