Online Auditioning Using Facial Expression Recognition in Real-Time

DOI: 10.17577/IJERTV11IS020136


Syed Muhammad Abdul Rafay1, Ayesha Khalique2, Muhammad Sohail3, Abdullah Masood4, Haseeb Jafar5, Taha Rushain6

1,2,3,4,5 Department of Computer Science & IT, Sir Syed University of Engineering and Technology, Karachi, Pakistan

6 Institute of Business Administration, Karachi, Pakistan

ABSTRACT

Merit is a profound issue in any field of work. When a person is given the role that best matches their skills and interests, the result can be a masterpiece. Matching people to roles is the job of casting directors, but to err is human. In this paper, we use a video emotion recognition system that may revolutionize the field of film and TV. Finding the right person for the right job has never been an easy task. We aim to develop an online casting system in which anyone who needs an artist for a role can post an ad containing the script, and people interested in that work can submit a video audition of 15 to 60 seconds.

Clips are ranked based on the applicants' skills: the video whose expressions most accurately match the required emotions, according to the emotion recognition model, is listed on top and its applicant is therefore most likely to be shortlisted.

Keywords: EMOTION RECOGNITION, FACIAL EXPRESSION RECOGNITION, REAL-TIME AUDITION

  1. INTRODUCTION

    Emotions play a major role in human interaction; in fact, human beings display emotions mostly through facial expressions. Charles Darwin was among the first scientists to propose that facial expressions are one of the most natural ways for human beings to communicate their emotions and intentions. Emotions increase the level of effective communication. In addition, facial expressions provide information about cognitive states such as interest, confusion, stress and boredom. For the future of human-computer interfaces (HCI), computers must interact with humans the way humans interact with each other; this would take robotics and human-computer interfaces to a whole new level. Facial expression/emotion recognition (FER) has attracted significant attention from computing communities over the last decade, as it is an important front-end task in many applications such as virtual reality (VR) [2, 3], augmented reality (AR) [4] and advanced driver assistance systems (ADAS) [7]. Darwin proposed that facial expressions are universal, i.e., most emotions are expressed in the same way on the human face regardless of race or culture [8]. Interpreting facial expressions can alter the interpretation of what is spoken. Emotions are responses or feelings to a particular situation or environment.

    The advancements in the field of artificial intelligence have made emotion recognition an active topic of research, and it is being implemented in various applications. In the medical field, it is applied to pain detection and the assessment of psychological distress, and it can also help physically disabled people (for example, deaf, mute or bedridden patients) convey their current feelings [10]. Artificial intelligence has also flourished in the entertainment industry through animation and computer-generated imagery (CGI); trained neural network models are used in Autodesk Maya to give an edge to technical directors and artists doing 3D animation.

    In this paper, we propose an idea that could be a major breakthrough in the entertainment industry. People often complain that an artist was not fit for a particular role; addressing this problem through an FER model takes things to another level, and to our knowledge such a system has neither been proposed nor built before. We have used a JavaScript face API that classifies the seven basic expressions: anger, disgust, fear, happiness, neutrality, sadness and surprise, which matches the classification of facial expressions proposed by Paul Ekman [9].
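
    The JavaScript face API referred to above is consistent with the open-source face-api.js library, whose expression classifier outputs probabilities for exactly these seven classes. Assuming face-api.js and its pretrained weights served from an illustrative '/models' path (both assumptions rather than details confirmed by this paper), a minimal sketch of classifying the dominant expression in a single frame could look as follows:

        import * as faceapi from 'face-api.js';

        // Minimal sketch: classify the dominant expression in one frame.
        // Assumes face-api.js with pretrained weights served from '/models'.
        async function classifyExpression(frame: HTMLImageElement | HTMLVideoElement) {
          // Load a lightweight face detector and the expression classifier.
          await faceapi.nets.tinyFaceDetector.loadFromUri('/models');
          await faceapi.nets.faceExpressionNet.loadFromUri('/models');

          // Detect the most prominent face and attach expression probabilities.
          const detection = await faceapi
            .detectSingleFace(frame, new faceapi.TinyFaceDetectorOptions())
            .withFaceExpressions();
          if (!detection) return null; // no face visible in this frame

          // detection.expressions holds one probability per class:
          // angry, disgusted, fearful, happy, neutral, sad, surprised.
          const ranked = Object.entries(detection.expressions)
            .sort(([, a], [, b]) => (b as number) - (a as number));
          const [expression, probability] = ranked[0];
          return { expression, probability: probability as number };
        }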

    A casting director can take a live audition through our web app and use real-time facial expression recognition to analyze whether an actor or actress fits the role. This will take online casting to another level globally.
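
    Building on the previous sketch, the following is a hedged illustration of how a live audition could be scored in real time: the webcam video element is sampled at a fixed interval and the probability of the expression demanded by the script is averaged over the clip, giving a score by which auditions can be ranked. The video element, targetExpression parameter, 500 ms sampling interval and 15-second default duration are illustrative assumptions, not parameters taken from the system described here.

        import * as faceapi from 'face-api.js';

        // Sketch of real-time scoring during a live audition. Assumes the
        // detector and expression models are loaded as in the previous sketch.
        function scoreLiveAudition(
          video: HTMLVideoElement,
          targetExpression: string,   // e.g. 'sad', taken from the ad's script
          durationMs = 15000,         // illustrative audition window
        ): Promise<number> {
          return new Promise((resolve) => {
            const samples: number[] = [];

            // Sample the stream roughly twice per second and record how
            // strongly the requested expression shows on the detected face.
            const timer = setInterval(async () => {
              const detection = await faceapi
                .detectSingleFace(video, new faceapi.TinyFaceDetectorOptions())
                .withFaceExpressions();
              if (detection) {
                samples.push((detection.expressions as any)[targetExpression] ?? 0);
              }
            }, 500);

            // When the audition window ends, return the average match score
            // in [0, 1]; auditions can then be ranked by this score.
            setTimeout(() => {
              clearInterval(timer);
              const avg = samples.length
                ? samples.reduce((sum, p) => sum + p, 0) / samples.length
                : 0;
              resolve(avg);
            }, durationMs);
          });
        }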

    1.1 EXISTING SYSTEM

    There are many websites and web apps for casting in the entertainment industry, covering roles in film, TV, theater, modeling, kids' content and voiceover. Their main purpose is to connect applicants with employers. Some renowned websites and applications are described below.

    • Backstage

      In this app, casting directors post their ads and talent (crew, actors, photographers) apply to them; applicants who are good enough get hired. Backstage also requires a subscription, whether quarterly, monthly or yearly, before an account can be created.

    • Star Now

    It is a talent agency app that lists open jobs for musicians, singers, actors and photographers, and people apply for those jobs to get hired. It does not require a subscription initially.

    • FirstCut

    FirstCut is an application that digitally connects aspiring actors, models and singers to production houses and casting agencies, although it still relies on the same traditional hiring method.

  2. PROPOSED SYSTEM

Unlike conventional websites and applications, our system provides real-time video emotion recognition to ease the job of casting directors. It is built as a Progressive Web App (PWA): a web application that not only runs in a browser but can also be installed on mobile phones and tablets like any other native or cross-platform app. Three types of users interact with the system (Talent, Talent Hunter and Admin); they are described after the short installability sketch below.
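
A browser treats a web app as installable once it is served over HTTPS, links a web app manifest and registers a service worker. The snippet below is a generic, minimal sketch of that registration step; the '/service-worker.js' and manifest paths are illustrative assumptions and not part of the system described in this paper.

    // Generic PWA registration sketch; the paths are illustrative.
    // The page is also expected to link a manifest, e.g.
    // <link rel="manifest" href="/manifest.webmanifest">
    if ('serviceWorker' in navigator) {
      window.addEventListener('load', async () => {
        try {
          const reg = await navigator.serviceWorker.register('/service-worker.js');
          console.log('Service worker registered with scope:', reg.scope);
        } catch (err) {
          console.error('Service worker registration failed:', err);
        }
      });
    }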

  1. Talent

    Talent can register, update their profile, deactivate their account, look for an ad, apply to an ad and give feedback.

  2. Talent Hunter

    A Talent Hunter (recruiter) can post, delete or update an ad, update their profile, deactivate their account, contact talent and give feedback.

  3. Admin

    The Admin can update, delete or modify the details of the users and has full control of the system; a hypothetical data-model sketch of these three roles is given below.
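
    Purely for illustration, the roles and their permitted actions could be modeled with a few TypeScript types such as the following; all names and fields are hypothetical and are not taken from the paper's implementation.

        // Hypothetical data model for the three roles; names are illustrative.
        type Role = 'talent' | 'talentHunter' | 'admin';

        interface UserAccount {
          id: string;
          role: Role;
          active: boolean;           // false once the account is deactivated
        }

        interface Ad {
          id: string;
          postedBy: string;          // id of the Talent Hunter who posted it
          script: string;            // script the applicants must perform
        }

        interface Audition {
          adId: string;
          talentId: string;
          videoUrl: string;          // 15 to 60 second audition clip
          expressionScore: number;   // 0..1 match score from the FER model
        }

        // Actions each role may perform, mirroring the list above.
        const permissions: Record<Role, string[]> = {
          talent: ['register', 'updateProfile', 'deactivateAccount', 'browseAds', 'applyToAd', 'giveFeedback'],
          talentHunter: ['postAd', 'updateAd', 'deleteAd', 'updateProfile', 'deactivateAccount', 'contactTalent', 'giveFeedback'],
          admin: ['updateUser', 'deleteUser', 'modifyUser'],
        };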

    Figure 1: Architecture design of TALNT

  4. UML USE CASE DIAGRAM

  5. COMPARATIVE ANALYSIS

    TALNT is compared with three existing systems (Backstage, Star Now and First Cut) on the following features: posting an ad, uploading an audition, facial expression recognition, viewing and upvoting audition videos, and real-time auditions.

  6. RESULT

    Compared with the existing systems, our system provides more unique features for its users. Some snapshots of the application are given below.

    • Login
    • Edit your profile
    • Post Jobs
    • Register
    • Facial Expression
    • Job

  7. CONCLUSION AND FUTURE WORK

In this proposed work, facial expression recognition from video is the main research topic. Video-based emotion recognition is gaining huge significance in different fields. Our proposed model goes beyond previous work by recognizing facial expressions in real-time auditions; to our knowledge, no system has yet been developed in the casting field that can capture facial expressions in real time. The study and project presented in this paper should be most helpful to the entertainment industry.

In the future, the system's accuracy can be improved by creating our own facial expression model using enhanced deep 3D convolutional neural networks. We would also like to take our web app to another level by introducing crowdsourcing, where common people will be able to upvote the audition videos of artists.

REFERENCES

[1] M. S. Bartlett, G. Littlewort, I. Fasel and J. R. Movellan, "Real Time Face Detection and Facial Expression Recognition: Development and Applications to Human Computer Interaction," 2003 Conference on Computer Vision and Pattern Recognition Workshop, 2003, pp. 53-53, doi: 10.1109/CVPRW.2003.10057.

[2] B. Houshmand and N. Mefraz Khan, "Facial Expression Recognition Under Partial Occlusion from Virtual Reality Headsets based on Transfer Learning," 2020 IEEE Sixth International Conference on Multimedia Big Data (BigMM), 2020, pp. 70-75, doi: 10.1109/BigMM50055.2020.00020.

[3] S. Hickson, N. Dufour, A. Sud, V. Kwatra and I. Essa, "Eyemotion: Classifying Facial Expressions in VR Using Eye-Tracking Cameras," 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), 2019, pp. 1626-1635, doi: 10.1109/WACV.2019.00178.

[4] C.-H. Chen, I.-J. Lee and L.-Y. Lin, "Augmented reality-based self-facial modeling to promote the emotional expression and social skills of adolescents with autism spectrum disorders," Research in Developmental Disabilities, vol. 36, pp. 396-403, 2015, doi: 10.1016/j.ridd.2014.10.015.

[5] C. Zhan, W. Li, P. Ogunbona and F. Safaei, "A Real-Time Facial Expression Recognition System for Online Games," International Journal of Computer Games Technology, vol. 2008, Article ID 542918, 2008, ISSN 1687-7047, doi: 10.1155/2008/542918.

[6] R. Sawyer, A. Smith, J. Rowe, R. Azevedo and J. Lester, "Enhancing Student Models in Game-based Learning with Facial Expression Recognition," Proceedings of the 25th Conference on User Modeling, Adaptation and Personalization (UMAP '17), Association for Computing Machinery, New York, NY, USA, 2017, pp. 192-201, doi: 10.1145/3079628.3079686.

[7] M. Jeong and B. C. Ko, "Driver's Facial Expression Recognition in Real-Time for Safe Driving," Sensors, vol. 18, no. 12, 4270, 2018, doi: 10.3390/s18124270.

[8] A. Hassouneh, A. M. Mutawa and M. Murugappan, "Development of a Real-Time Emotion Recognition System Using Facial Expressions and EEG based on machine learning and deep neural network methods," Informatics in Medicine Unlocked, vol. 20, 100372, 2020, ISSN 2352-9148, doi: 10.1016/j.imu.2020.100372.

[9] P. Ekman, "Basic Emotions." [Online]. Available: https://www.paulekman.com/wp-content/uploads/2013/07/Basic-Emotions.pdf

[10] https://www.sciencedirect.com/science/article/pii/S235291482030201X#bib13