GAIT Signature for Identification Systems Using Image Moments

DOI: 10.17577/IJERTV10IS070322


Mudit Dixit

B.Tech CSE (Specialised in AI) Pune, India.

Abstract: This paper describes GAIT, how it is used in identification systems in the modern world, and how existing systems can be built upon to become better and more efficient. A novel approach based on image moments is used.

Keywords: GAIT, recognition, Support Vector Machine, Image Moments

  1. INTRODUCTION

    In biometrics, physical or behavioral attributes of individuals are used to verify identity. Physical biometrics, such as body shape or an iris scan, are more consistent, whereas behavioral biometrics, such as speech or voice, vary with time and the state of an individual. GAIT recognition is based on a combination of both kinds of attributes, using static images as well as spatio-temporal data.

    GAIT is a cyclic combination of movements coordinated in a temporal pattern. GAIT recognition identifies individuals by their walking style, which is assumed to be unique and can therefore be used as a biometric. The features that set GAIT recognition apart from other biometric technologies are that it is contactless, non-invasive and difficult to replicate.

    In general, biometric identification systems require individuals to voluntarily enroll themselves in a database and verify identity by matching a captured biometric against that database, which is very accurate; however, it becomes difficult to identify individuals who are not enrolled. Because GAIT can be captured at a distance without the subject's active cooperation, the technology is highly useful in criminal investigation and access control.

  2. PROBLEM STATEMENT & OBJECTIVE

    With advancements in technology, large datasets of high-resolution images and fast processors with high computing capacity are available. At the same time, there is a need for more accurate verification of unregistered individuals in various domains. The objective is to analyze pre-existing GAIT recognition methods, determine a methodology to calculate their accuracy, and suggest modifications and optimizations for better accuracy and computational efficiency with respect to time and memory.

  3. LITERATURE REVIEW

    Multiple variants of GAIT recognition technology have been proposed across research papers, and they follow a similar sequence of steps: GAIT data acquisition, pre-processing, period extraction, feature extraction and classification.

    The GAIT data is acquired using single or multiple cameras, motion-capture systems, or cameras with depth sensors. The images are taken at various angles with respect to the subject. They are pre-processed for noise removal, object view-model transformation and, in many cases, silhouette formation. The gait cycles are then analyzed and the period is calculated. Features are extracted from these cycles and a dataset is built which can be used for further classification. Classification can be done using various methods with varying accuracies for specific datasets, including k-nearest neighbors, naive Bayes, support vector machines, neural networks, etc.

    Gait Recognition Using a View Transformation Model in the Frequency Domain [1]

    This method, proposed by Yasushi Makihara et al., gathers spatio-temporal silhouette images of GAIT and applies Fourier analysis to obtain frequency-domain features. These features, collected from multiple people at multiple view directions, are used to train a view transformation model. The silhouette sequence of the person to be identified is then passed through the model to transform it across view angles, and the frequency-domain features are matched to identify the person.
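    For illustration, a rough sketch of the frequency-domain idea (not the authors' implementation; the array shapes and the helper name are assumptions) is given below: the temporal Fourier transform is taken for every pixel over one gait period, and the magnitudes of the lowest frequency components are kept as features.

import numpy as np

def frequency_features(period, n_freq=3):
    """period: array of shape (T, H, W) holding one gait period of binary
    silhouettes. Returns an (n_freq, H, W) stack of spectral magnitudes
    (DC component plus the first harmonics) for every pixel."""
    spectrum = np.fft.fft(period.astype(np.float64), axis=0)  # FFT along the time axis
    return np.abs(spectrum[:n_freq])

# Toy example: a random 30-frame, 64x44 silhouette sequence.
demo = np.random.default_rng(1).integers(0, 2, size=(30, 64, 44))
print(frequency_features(demo).shape)  # (3, 64, 44)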

    Multi-View Gait Recognition Using Enhanced Gait Energy Image and Radon Transform Techniques [2]

    Iman and Nordin proposed a method that applies the Gait Energy Image and the Radon Transform to human silhouettes for identity verification. A gait energy image stores dynamic data, such as variation in frequency and phase, along with the shape and appearance of the image. The Radon transform overcomes problems with geometrical attributes such as translation, scale and rotation. The features are extracted and Principal Component Analysis is used to reduce the dimension of the feature vectors. Finally, a simple Euclidean distance measure is used to classify the instance for identity verification.
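    As a rough sketch of this scheme (the Radon-transform step is omitted, and the function names, shapes and toy data below are assumptions rather than the authors' code), a gait energy image can be computed as the pixel-wise average of one cycle of silhouettes, reduced with PCA and matched by Euclidean distance:

import numpy as np
from sklearn.decomposition import PCA

def gait_energy_image(cycle):
    """cycle: (T, H, W) binary silhouettes of one gait cycle; returns the (H, W) GEI."""
    return cycle.astype(np.float64).mean(axis=0)  # pixel-wise average over time

# Toy gallery of two enrolled subjects and a probe sequence (random stand-ins).
rng = np.random.default_rng(0)
gallery = {sid: rng.integers(0, 2, size=(20, 64, 44)) for sid in ("A", "B")}
probe = gallery["A"]  # a real probe would be a freshly recorded sequence

gei_vectors = {sid: gait_energy_image(c).ravel() for sid, c in gallery.items()}
pca = PCA(n_components=1).fit(np.stack(list(gei_vectors.values())))  # reduce dimension

probe_vec = pca.transform(gait_energy_image(probe).ravel()[None, :])[0]
dists = {sid: np.linalg.norm(pca.transform(v[None, :])[0] - probe_vec)
         for sid, v in gei_vectors.items()}
print("closest identity:", min(dists, key=dists.get))  # "A"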

    GAIT Recognition Method on the basis of Genetic Fuzzy Support Vector Machine (GFSVM) [3]

    Jiwen Lu and Erhu Zhang proposed a method to recognise humans through three-view fusion, i.e. views perpendicular to, oblique to and along the direction of human walking. In reality, the angle between the camera and the walker's direction is unpredictable, so to obtain convincing results one needs to fuse multiple views. This is a simple method for gait recognition based on human silhouettes and Independent Component Analysis (ICA).


    Novel Representations of Appearance-Based Models for Human Gait Sequences [4]

    Dacheng Tao et al. offered two important novel representations, Gabor gait and tensor gait, together with extensions to improve their performance. In their research, three different approaches using Gabor functions were developed to reduce the computational complexity of calculating the representation, of training the classifiers, and of testing.

    Evaluating the quality of Silhouette Sequences on the basis of 1D foreground [5]

    The Silhouette Quality Quantification (SQQ) method for evaluating the quality of silhouette sequences was proposed by Jianyi Liu. It analyses the quality of a sequence based on modelling of the 1D foreground sum-signal together with signal-processing techniques. Silhouette Quality Weighting (SQW) is designed to enhance most of the current gait recognition algorithms by using the sequence quality.

    Scanning Laser Rangefinders for the Unobtrusive Monitoring of Gait Parameters in Unsupervised Settings [6]

    This work uses an Unobtrusive GAIT Monitoring (UGMO) system, which combines Scanning Laser Rangefinders (SLRs) with an ambient light sensor and a processing unit, to detect variations in common GAIT parameters such as cadence, velocity and stride length. This is done by placing the SLR at a height of approximately 15 cm above the ground. The UGMO implements the following signal chain:

    The movement detection/recording module handles communication with the SLR, detects when a walk has been performed in front of the UGMO and records the SLR data of movement sequences. For this, the module initially (and regularly) collects a background scan (BG scan) against which subsequent scans are subtracted. Movement is detected if scans differ significantly from the background. In these cases, the scans are recorded for later processing or can be transferred to a server for direct analysis via an available network connection. The module has two parameters (a sensitivity parameter and a delay parameter) to calibrate the accuracy of the movement detection: the sensitivity parameter defines how many measured points have to differ from the background laser scan, and the delay parameter describes how many consecutive laser scans have to pass the sensitivity threshold in order to start (and end) a walk.
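    The following is a simplified sketch of that detection logic (hypothetical function and parameter names, not the UGMO firmware): each scan is compared against the background scan, the sensitivity parameter sets how many points must differ, and the delay parameter sets how many consecutive differing scans are needed before a walk is recorded.

import numpy as np

def detect_walks(scans, background, diff_mm=100.0, sensitivity=5, delay=3):
    """scans: iterable of 1-D range arrays in mm; background: the BG scan."""
    walks, current, streak = [], [], 0
    for scan in scans:
        # A scan counts as "movement" if enough points deviate from the background.
        moving = np.sum(np.abs(scan - background) > diff_mm) >= sensitivity
        streak = streak + 1 if moving else 0
        if streak >= delay:                  # enough consecutive movement: record
            current.append(scan)
        elif not moving and current:         # movement ended: close the walk
            walks.append(np.stack(current))
            current = []
    if current:                              # flush a walk still in progress
        walks.append(np.stack(current))
    return walks

# Example: a static scene with one pass in front of the sensor.
bg = np.full(100, 4000.0)
stream = [bg.copy() for _ in range(5)] + [bg - 1500.0 for _ in range(6)] + [bg.copy()]
print(len(detect_walks(stream, bg)))  # 1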

    Intended as a monitoring device, the UGMO can either operate standalone (recording to a memory card) or transmit recordings directly to the Internet. The UGMO's data recordings support sufficiently long recording durations: with each measurement holding approx. 13 KB (resulting in 52 KB/s for the SLR-04 and 130 KB/s for the SLR-10) and each walk (over a distance of approx. 5 m) taking approx. 3 MB, the UGMO could record approx. 355 days on a 32 GB memory card when assuming 30 walks per day; with compression adding a further factor of 10, the data size is unproblematic. The UGMO's power connection consists of a 5 V DC, 700 to 1000 mA power supply for the Raspberry Pi 3 and a separate power supply for the SLR.

    Regarding the influence of the SLR type (using GAITRite as a comparison reference, and in relation to the varying data rates, measuring areas and angular resolutions), the SLR-10 achieves a much higher sensitivity in terms of the correct detection of steps (98% compared to 77%) and walks (97% compared to 66%) than the cheaper SLR-04, whose lower performance might have been affected by the sensor's range, frequency and angular-resolution characteristics. Thus, the SLR-10 should be used instead of the SLR-04 to ensure the correct detection of most walks.

  4. PROPOSED METHODOLOGY

    Since the existing methodologies are each well established within the domain of their feature extraction method, e.g. the Fourier transform or the Radon transform, and leave little scope for further improvement, we have worked on a novel approach of extracting features from image silhouettes.

    In existing implementations, most feature extraction methods have been borrowed from the domain of shape representation, including B-spline representation, chain codes, the Hough transform, etc. The one we chose is image moments.

    Image moments are a measure of the shape of an object, and since our silhouettes are binary threshold-segmented, moments are a good fit for this use case. The raw image moment of order (p + q) of an image with gray level f(x, y) is defined as M(p, q) = Σx Σy x^p y^q f(x, y), summed over all x and y.
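    A minimal sketch of this computation in plain NumPy (an illustration of the definition above, not necessarily the implementation in the source code linked below) is:

import numpy as np

def raw_moments(silhouette, max_order=2):
    """Return the flat feature vector [M00, M01, M02, M10, ..., M22] of raw
    image moments for a 2-D array whose non-zero pixels form the silhouette."""
    h, w = silhouette.shape
    ys, xs = np.mgrid[0:h, 0:w]        # y = row index, x = column index
    f = silhouette.astype(np.float64)  # gray level f(x, y); 0/1 for a binary mask
    return np.array([np.sum((xs ** p) * (ys ** q) * f)
                     for p in range(max_order + 1)
                     for q in range(max_order + 1)])

# Toy check: for a binary mask, M00 is simply the foreground pixel count.
toy = np.zeros((4, 4)); toy[1:3, 1:3] = 1
print(raw_moments(toy)[0])  # 4.0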

    The dataset used in this model is from the Center for Biometrics and Security Research [7]. It provides silhouettes of different people, with their respective IDs, at different orientations. The image moments of orders p = 0 to 2 and q = 0 to 2 are calculated for each frame in the dataset for each person and stored as features.

    An infinite sequence of moments of increasing order uniquely defines an object; hence, the higher the order, the more accurately the moments represent the silhouette. The chosen p, q values are a compromise imposed by time and space complexity constraints while remaining discriminative enough to uniquely determine a person's ID.
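    The per-frame feature table can then be built along the following lines. This is only a sketch: the directory layout, the use of Pillow for reading the silhouette images, the output file name and the raw_moments helper from the sketch above are assumptions rather than the paper's actual pipeline.

import glob
import os
import numpy as np
import pandas as pd
from PIL import Image  # assumption: Pillow is used to read the silhouette images

rows = []
# Assumed layout: silhouettes/<person_id>/<frame>.png
for path in glob.glob("silhouettes/*/*.png"):
    person_id = os.path.basename(os.path.dirname(path))
    mask = (np.asarray(Image.open(path).convert("L")) > 127).astype(np.float64)
    rows.append(list(raw_moments(mask)) + [person_id])  # raw_moments: see sketch above

columns = [f"m{p}{q}" for p in range(3) for q in range(3)] + ["person_id"]
pd.DataFrame(rows, columns=columns).to_csv("gait_moments.csv", index=False)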

    Software:

    Programming Language – Python 3.9.4
    Scikit-Learn – 0.24.2 (used for the classifiers)
    Pandas – 1.2.4 (used for loading the datasets)

    Hardware:

    Intel i7-8570H, 3.1 GHz clock speed (single-threaded training on the sample data took 40-50 minutes)

  5. WORKFLOW

    Fig. 1. Workflow Model

    VI. RESULT AND ANALYSIS

    The extracted features were stored in a dataset, and a separate subset of silhouettes was chosen as testing data. Using a nearest-neighbour classifier on the testing dataset, an accuracy of 71.42% was obtained, i.e. the proportion of correctly classified samples among all samples used in testing.

    This result is, however, affected by the classifier chosen, the order of the moments, and the amount of data used in training. Due to hardware limitations only a small part of the dataset was chosen for training (about 40 minutes of training time), and the result may vary over a larger dataset, potentially further improving the accuracy.
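    A sketch of this classification step with scikit-learn is shown below, under the same assumptions as the earlier sketches (the gait_moments.csv file and the 70/30 split are illustrative, not the exact experimental protocol):

import pandas as pd
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

features = pd.read_csv("gait_moments.csv")  # table built in the earlier sketch
X = features.drop(columns=["person_id"]).values
y = features["person_id"].values

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = KNeighborsClassifier(n_neighbors=1)   # nearest-neighbour classifier
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))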

    VII. CONCLUSION AND FUTURE SCOPE

    In this paper, we propose a novel feature extraction technique that uses image moments as a metric of silhouette shape and trains on these features to predict a person's ID. This, however, requires people to be registered in the dataset beforehand. Although, due to hardware limitations, the amount of data trained and tested was small, the results are within the desirable range and the technique appears viable.

    In the future, further modifications and improvements, for example increasing the moment order, using multi-threading techniques for program optimization, using a larger dataset for training, and using different classification techniques and algorithms, will be implemented to further improve the proposed methodology.

    SOURCE CODE

    https://github.com/moodymudy/GAIT_Recogition_By_Moments

    REFERENCES

    1. Makihara Y., Sagawa R., Mukaigawa Y., Echigo T., Yagi Y. (2006) Gait Recognition Using a View Transformation Model in the Frequency Domain. In: Leonardis A., Bischof H., Pinz A. (eds) Computer Vision – ECCV 2006. Lecture Notes in Computer Science, vol 3953. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11744078_12

    2. Iman Mohammed Burhan and Md. Jan Nordin, 2015. Multi-View Gait Recognition Using Enhanced Gait Energy Image and Radon Transform Techniques. Asian Journal of Applied Sciences, 8: 138-148. https://scialert.net/abstract/?doi=ajaps.2015.138.148

    3. Jiwen Lu, Erhu Zhang, "Gait recognition for human identification based on ICA and fuzzy SVM through multiple views fusion", Pattern Recognition Letters, Vol. 28, pp. 2401-2411, 2007. https://www.researchgate.net/publication/41447841_An_Efficient_Gait_Recognition_System_For_Human_Identification_Using_Modified_ICA

    4. Dacheng Tao, Xuelong Li, Xindong Wu, and Stephen J. Maybank, "General Tensor Discriminant Analysis and Gabor Features for Gait Recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 29, No. 10, pp. 1700-1715, October 2007.

    5. Jianyi Liu, Nanning Zheng, and Lei Xiong, "Silhouette quality quantification for gait sequence analysis and recognition", Signal Processing, Vol. 89, No. 7, pp. 1417-1427, July 2009. https://www.sciencedirect.com/science/article/abs/pii/S0165168409000334

    6. Fudickar, S.; Stolle, C.; Volkening, N.; Hein, A. Scanning Laser Rangefinders for the Unobtrusive Monitoring of Gait Parameters in Unsupervised Settings. Sensors 2018, 18, 3424. https://doi.org/10.3390/s18103424

    7. Gait Dataset, Center for Biometrics and Security Research (ia.ac.cn).
