Review on Facial Expression Based Music Player

DOI: 10.17577/IJERTCONV6IS15116


Preema J. S., Student, Alva's Institute of Engineering and Technology, Mijar, Moodbidri

Savitri H., Student, Alva's Institute of Engineering and Technology, Mijar, Moodbidri

Rajashree, Student, Alva's Institute of Engineering and Technology, Mijar, Moodbidri

Sahana M., Student, Alva's Institute of Engineering and Technology, Mijar, Moodbidri

Shruthi S. J., Assistant Professor, Alva's Institute of Engineering and Technology, Mijar, Moodbidri

Abstract:- Humans often use nonverbal cues such as hand gestures, facial expressions, and tone of voice to express feelings in interpersonal communication. The face is an important part of an individual's body and plays a major role in revealing an individual's behavior and emotional state; facial expression reflects the current state of mind of a person. Creating and managing large playlists, and selecting songs from them, is time consuming and difficult, so it would be very helpful if the music player itself selected a song according to the current mood of the user. Manually segregating a list of songs and generating a suitable playlist based on an individual's emotions is a tedious, time-consuming and labour-intensive task. An application can therefore be developed to minimize the effort of managing playlists. However, the existing algorithms in use are computationally slow and less accurate. The proposed system generates a playlist automatically from the extracted facial expression, thereby reducing the effort and time involved in performing the process manually. Facial expressions are captured using the inbuilt camera, and the captured image is passed through several stages to detect the mood or emotion of the user. We study how to automatically detect the mood of the user and present a playlist of songs suited to that mood. The proposed paper uses the Viola-Jones algorithm for face detection and a multiclass SVM (Support Vector Machine) for emotion detection.

Keywords:- Viola-Jones Algorithm, SVM, Facial Expression Recognition

  1. INTRODUCTION

    In today's world, with ever-increasing advancements in the field of multimedia and technology, various music players have been developed with features such as fast forward, reverse, variable playback speed (seek and time compression), local playback, and streaming playback with multicast streams. Although these features satisfy the user's basic requirements, the user still has to browse manually through the playlist of songs and select songs based on his current mood and behavior. Music plays a very important role in enhancing an individual's life, as it is an important medium of entertainment for music lovers and listeners and sometimes even imparts a therapeutic effect.

    Facial expressions give important clues about emotions, and computer systems based on affective interaction could play an important role in the next generation of computer vision systems. Face emotion can be used in areas such as security, entertainment and the human-machine interface (HMI). A human can express his or her emotion through the lips and eyes. This work describes the development of the Facial Expression Based Music Player, an application meant to minimize the effort users spend managing large playlists. People generally have a large number of songs in their database or playlists. To avoid the trouble of choosing, most people simply select a song at random, and some of these songs may not suit the user's current mood and may disappoint the user. The Facial Expression Based Music Player is an interactive, sophisticated and innovative mobile (Android) application that approaches music playback in a different manner.

    The application works differently from traditional software: it scans and classifies the audio files present on the device according to predefined parameters (audio features) in order to produce a set of mood-based playlists. The real-time graphical input provided to the application is then classified (facial expression recognition) to produce a mood, which is used to select the required playlist from the earlier set. The main objective of the paper is to design an efficient and accurate algorithm that generates a playlist based on the current emotional state and behavior of the user. Face detection and facial feature extraction from the image is the first step in an emotion-based music player; for face detection to work effectively, the user needs to provide an input image that is neither blurred nor tilted.

    The application makes use of the Viola-Jones algorithm for face detection and facial feature extraction. The algorithm requires low memory overhead and little computational and processing time, and it avoids the cost of additional hardware such as EEG systems or sensors [1]. Facial expressions are categorized into five types: anger, joy, surprise, sadness, and disgust. A highly accurate audio extraction technique is also proposed, which extracts significant, critical and relevant information from an audio signal, based on certain audio features, in much less time. The proposed mechanism achieves better efficiency and real-time performance than existing methodologies.
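    The paper gives no implementation details for the face detection step. The following is a minimal sketch of Viola-Jones detection, assuming OpenCV and its standard pretrained Haar cascade; the cascade file name and the camera-capture code are standard OpenCV usage assumed here, not details taken from the paper.

```python
# A minimal sketch of the Viola-Jones face detection step, assuming
# OpenCV and its pretrained Haar cascade (not specified in the paper).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(frame):
    """Return the bounding box (x, y, w, h) of the largest face, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                    # no face: ask the user to retake the image
    return max(faces, key=lambda box: box[2] * box[3])  # largest by area

# Capture a single frame from the inbuilt camera, as the paper describes.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    print(detect_face(frame))
```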

  2. LITERATURE SURVEY

    The potential ability of humans to provide input to a system in various ways has caught the attention of learners, scientists and engineers from all over the world.

    The mind has always attracted scientists seeking to understand it in a wholesome manner, and the most natural way to express emotions is through facial expressions. We humans often use nonverbal cues such as hand gestures, facial expressions, and tone of voice to express feelings in interpersonal communication. Nikhil Zaware et al. [2] stated that it is very time consuming and difficult to create and manage large playlists and to select songs from them. Their paper presents a way to automatically detect the mood of the user and generate a playlist of songs suitable for that mood: the image is captured using a webcam and passed through several stages to detect the mood or emotion of the user. The application is developed so that it can manage the content accessed by the user, analyze image properties and determine the user's mood. It also includes a facility for sorting songs based on mp3 file properties so that they can be added to the appropriate playlists according to mood.
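    As an illustration of that mp3-property-based sorting, the sketch below groups tracks by their ID3 genre tag. The mutagen library and the genre-to-mood table are assumptions introduced here for illustration; [2] does not specify either.

```python
# A hypothetical sketch of sorting songs into mood playlists from mp3
# file properties; mutagen and the genre-to-mood table are assumptions.
from collections import defaultdict
from pathlib import Path
from mutagen.easyid3 import EasyID3
from mutagen.id3 import ID3NoHeaderError

GENRE_TO_MOOD = {"Blues": "sad", "Dance": "joy", "Metal": "anger"}  # hypothetical

def build_playlists(music_dir):
    """Group .mp3 files into mood playlists using their ID3 genre tag."""
    playlists = defaultdict(list)
    for path in Path(music_dir).glob("*.mp3"):
        try:
            genre = EasyID3(str(path)).get("genre", ["Unknown"])[0]
        except ID3NoHeaderError:       # untagged file
            genre = "Unknown"
        playlists[GENRE_TO_MOOD.get(genre, "neutral")].append(path.name)
    return dict(playlists)
```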

    Various techniques and approaches have been proposed and developed to classify the human emotional state and behavior. These approaches have focused on only some of the basic emotions. For the purpose of feature recognition, facial features have been divided into two major categories: appearance-based feature extraction and geometry-based feature extraction. Geometry-based feature extraction considers only the shape or the major prominent points of important facial features such as the mouth and eyes.

    An accurate and efficient statistics-based approach for analyzing extracted facial expression features was proposed by Renuka R. Londhe [3]. The paper focused mainly on the study of changes in curvature on the face and in the intensities of the corresponding pixels of the images. A Support Vector Machine (SVM) was used to classify the extracted features into six major universal emotions: anger, disgust, fear, happiness, sadness, and surprise.
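    A minimal sketch of such a multiclass SVM classification stage is given below, assuming scikit-learn; the random vectors merely stand in for the curvature and intensity features of [3].

```python
# A minimal sketch of multiclass SVM emotion classification in the
# spirit of [3], using scikit-learn; the random vectors stand in for
# real curvature/intensity features and labels.
import numpy as np
from sklearn.svm import SVC

EMOTIONS = ["anger", "disgust", "fear", "happy", "sad", "surprise"]

rng = np.random.default_rng(0)
X_train = rng.normal(size=(120, 64))               # placeholder feature vectors
y_train = rng.integers(0, len(EMOTIONS), size=120) # placeholder labels

clf = SVC(kernel="rbf")   # SVC handles multiclass via one-vs-one internally
clf.fit(X_train, y_train)

sample = rng.normal(size=(1, 64))
print(EMOTIONS[int(clf.predict(sample)[0])])
```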

    The paper by Henal Shah et al. [4] presents an intelligent music player using sentiment or emotion analysis. Emotions are a basic part of human nature and play a vital role throughout life; they serve mutual understanding and the sharing of feelings and intentions, and are expressed verbally, through facial expressions, or through written text. The paper focuses mainly on the methodologies available for detecting human emotions for an emotion-based music player, the approaches used by available music players to detect emotions, the approach a music player should follow to detect human emotions, and why the proposed system is better suited to emotion detection. It also gives a brief idea of the system's working, playlist generation and emotion classification.

    Anukriti Dureha [5] described the manual segregation of a playlist and the annotation of songs in accordance with the current emotional state of a user as a labour-intensive and time-consuming job. Numerous algorithms have been proposed to automate this process; however, the existing algorithms are slow, increase the overall cost of the system by requiring additional hardware (e.g., EEG systems and sensors), and are less accurate. The paper presents an algorithm that automates the generation of an audio playlist based on the facial expressions of a user, saving the time and labour invested in performing the process manually. The algorithm aims at reducing the overall computational time and the cost of the designed system while increasing its accuracy. The facial expression recognition module of the proposed algorithm was validated by testing the system against user-dependent and user-independent datasets.

    The paper by Hafeez Kabini et al. [6] addressed the problem that existing methods typically handle only deliberately displayed and exaggerated expressions of prototypical emotions, even though deliberate behavior differs in visual appearance, audio profile, and timing from spontaneously occurring behavior; efforts to develop algorithms that can process naturally occurring human affective behavior have recently emerged. The paper surveyed these recent advances, discussed human emotion perception from a psychological perspective, examined the available approaches to machine understanding of human affective behavior, and discussed important issues such as the collection and availability of training and test data.

    Marsyas is a software framework for audio processing [7], written in C++. It is designed as a dataflow processing framework, with the advantages of efficiency and low memory usage. Various building blocks are available for building real-time applications for audio analysis, synthesis, segmentation, and classification, and Marsyas is widely and successfully used for such tasks.

    The MIR toolbox is a Matlab toolbox dedicated to musical feature extraction [8]. Its algorithms are decomposed into stages that the user can parameterize, and its functions are provided with a simple, adaptive syntax. The toolbox relies on the Matlab environment and therefore benefits from existing toolboxes and built-in visualization capabilities, but it suffers from memory management limitations.
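    To make the kind of feature extraction such toolboxes perform concrete, and to show how it could feed the mood-based playlist bucketing described in Section 1, here is a minimal Python sketch. The librosa library and the mood thresholds are assumptions for illustration; neither appears in the surveyed papers.

```python
# A minimal sketch of musical feature extraction feeding mood-based
# playlist bucketing; librosa and the threshold values are illustrative
# assumptions, not part of the surveyed systems.
import numpy as np
import librosa

def mood_of_track(path):
    """Assign a coarse mood label from tempo and spectral brightness."""
    y, sr = librosa.load(path, duration=30.0)      # analyze the first 30 s
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    tempo = float(np.atleast_1d(tempo)[0])         # scalar BPM across versions
    brightness = float(np.mean(librosa.feature.spectral_centroid(y=y, sr=sr)))
    if tempo > 120.0 and brightness > 2000.0:      # hypothetical cutoffs
        return "joy"
    if tempo < 80.0:
        return "sad"
    return "neutral"
```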

    K. McKay et al. [9] designed XPod, a human-activity and emotion aware mobile music player. The system employed sensors to collect information about a user's emotions and activities for music recommendation, and was based on a client/server architecture.

    Michael Lyons et al. [10] proposed a methodology for coding facial expressions with a multi-orientation, multi-resolution set of Gabor filters that were ordered topographically and aligned approximately with the face. The degree of correlation obtained was significantly high, but the overall computational complexity increased exponentially.
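    For concreteness, the sketch below builds a multi-orientation, multi-resolution Gabor filter bank with OpenCV; the kernel sizes and filter parameters are illustrative assumptions, not values from the cited work.

```python
# An illustrative multi-orientation, multi-resolution Gabor filter bank;
# the kernel sizes and filter parameters are assumptions, not values
# taken from the cited work.
import cv2
import numpy as np

def gabor_responses(gray_face):
    """Filter a grayscale face image at 3 scales x 4 orientations."""
    responses = []
    for ksize in (9, 17, 33):                          # three resolutions
        for theta in np.arange(0.0, np.pi, np.pi / 4): # four orientations
            # args: (ksize, sigma, theta, lambda, gamma, psi)
            kernel = cv2.getGaborKernel((ksize, ksize), ksize / 6.0,
                                        theta, ksize / 2.0, 0.5, 0.0)
            responses.append(cv2.filter2D(gray_face, cv2.CV_32F, kernel))
    return responses
```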

    Facial features, for the purpose of feature recognition, have been classified by Zeng et al. [11] under two broad categories: appearance-based features and geometric features. The geometric features were derived from the shape or prominent points of important facial features such as the mouth and eyes.

  3. BENEFITS AND LIMITATIONS

    1. Benefits

      1. Ease of use.

      2. Mixed mood detection.

      3. Improved accuracy.

      4. Reduced computational time.

    2. Limitations

      1. Manual selection of songs.

      2. Randomly played/shuffled songs may not match the mood of the user.

      3. Emotion-based music players currently in use are less accurate, are time consuming, and do not cover all emotions.

      4. CONCLUSION

        The aim of this paper was to explore the area of automatic facial expression recognition for the implementation of an emotion-based music player. Beginning with the psychological motivation for facial behavior analysis, this field has been extensively studied in terms of application and automation. The emotion-based music system will be of great advantage to users looking for music based on their mood and emotional behavior. It will help reduce the time spent searching for music, thereby cutting unnecessary computation and increasing the overall accuracy and efficiency of the system. The system will not only reduce physical stress but may also act as a boon for music therapy systems and assist the music therapist in treating a patient. Most media players provide a list of the songs in the user's music library and an option to select or search for a song, but this becomes an increasingly difficult task as the library grows. The proposed system will give music listeners better enjoyment by offering the most suitable song for the user's current mood. In this paper, we presented a proposed system and an approach for the automatic creation of mood-based playlists. The proposed system will reduce the user's effort in creating and managing playlists; it will not only help the user but also keep the songs systematically sorted.

      5. ACKNOWLEDGMENTS

        We would like to thank our project guide, Ms. Shruthi Shetty J., who has been especially enthusiastic in giving her valuable guidance and critical reviews. We would like to thank Dr. Manjunath Kotari, Head of Department, for his constant support. We would also like to express our gratitude to the principal and the managing trustee for sharing their wisdom with us.

      6. REFERENCES

[1] Chun-Hung Lin and Ja-Ling Wu, "Automatic Facial Feature Extraction by Genetic Algorithms," IEEE Transactions on Image Processing, vol. 8, no. 6, June 1999.

[2] Nikhil Zaware, Tejas Rajgure, Amey Bhadang, and D. D. Sakpal, "Emotion Based Music Player," International Journal of Innovative Research & Development, vol. 3, issue 3, 2014.

[3] Renuka R. Londhe and Vrushshen P. Pawar, "Analysis of Facial Expression and Recognition Based on Statistical Approach," International Journal of Soft Computing and Engineering (IJSCE), vol. 2, May 2012.

[4] Henal Shah, Tejas Magar, Purav Shah, and Kailas Devadkar, "An Intelligent Music Player Using Sentimental Analysis," International Journal of Innovative and Emerging Research in Engineering, vol. 2, issue 4, 2015.

[5] Anukriti Dureha, "An Accurate Algorithm for Generating a Music Playlist Based on Facial Expressions," International Journal of Computer Applications, vol. 100, no. 9, 2014.

[6] Hafeez Kabini, Sharik Khan, Omar Khan, and Shabana Tadvi, "Emotion Based Music Player," International Journal of Engineering Research and General Science, vol. 3, issue 1, 2015.

[7] G. Tzanetakis and P. Cook, "MARSYAS: A Framework for Audio Analysis," Organised Sound, vol. 4, no. 3, pp. 169-175, 1999.

[8] O. Lartillot and P. Toiviainen, "A Matlab Toolbox for Musical Feature Extraction from Audio," Proceedings of the International Conference on Digital Audio Effects (DAFx-07), 2007.

[9] S. Dornbush, K. Fisher, K. McKay, A. Prikhodko, and Z. Segall, "XPod: A Human Activity and Emotion Aware Mobile Music Player," UMBC eBiquity, November 2005.

[10] Y. Chang, C. Hu, R. Feris, and M. Turk, "Manifold Based Analysis of Facial Expression," Image and Vision Computing, vol. 24, pp. 605-614, June 2006.

[11] Z. Zeng, M. Pantic, G. I. Roisman, and T. S. Huang, "A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, January 2009.
