Cosmic View: the Ultimate Camera Application

DOI : 10.17577/IJERTV8IS050294


  • Open Access
  • Authors : Ms. Bibina Shajan , Mr. Akarsh Ramakrishnan , Ms. Anjana Babu , Ms. Elsa Eldhose, Dr.Barakkath Nisha U
  • Paper ID : IJERTV8IS050294
  • Volume & Issue : Volume 08, Issue 05 (May 2019)
  • Published (First Online): 23-05-2019
  • ISSN (Online) : 2278-0181
  • Publisher Name : IJERT
  • License: Creative Commons License This work is licensed under a Creative Commons Attribution 4.0 International License


Ms. Bibina Shajan 1, Ms. Anjana Babu 2, Ms. Elsa Eldhose 3, Mr. Akarsh Ramakrishnan 4, Dr. Barakkath Nisha U 5

1,2,3,4 B.Tech, Final Year CSE, ICET, Mulavoor, Kerala

5 Associate Professor, CSE, ICET, Mulavoor, Kerala

Abstract:- Face recognition is a biometric software application capable of uniquely identifying or verifying a person by comparing and analyzing patterns based on the person's facial contours. The basic aim of face detection is to determine whether there is a face in an image and then to locate its position. Evidently, face detection is the first step towards creating an automated system that may involve further face processing. Face recognition (FR), the process of identifying people through facial images, has numerous practical applications in the areas of biometrics, information security, access control, law enforcement, smart cards and surveillance systems.

Keywords: Face Recognition, Back Propagation Neural Network, Recurrent Neural Network, Convolutional Neural Network, Local Phase Quantization, Local Binary Patterns.

INTRODUCTION

A face recognition (FR) system identifies a face by matching it against a facial database. The field has made great progress in recent years due to improvements in the design and learning of features and in face recognition models. It also offers a great advantage to police departments in finding criminals.

Face recognition systems may be divided into two broad categories:

  • Finding a person from an image in a large database of facial images (e.g., a police database). These systems return the details of the person being searched for. Often only one image is available per person, and recognition does not usually need to run in real time.

  • Identifying a person in real time. These systems grant access to a certain group of people and deny access to others. Multiple images per person are often available for training, and real-time recognition is required.

Initially, the face of the wanted person is trained on the server, which is handled by the police department; that is, a query image is uploaded to the server. The WLAN (router) then sends the query image to every device running the Cosmic View app. When the search button in the app is pressed, the app starts searching for the person. When any device finds a match, it sends the location, picture and date to the server.

LITERATURE REVIEW

  1. Deep Learning

    Deep learning techniques are a family of machine learning methods based on learning multiple levels of representation and abstraction that help make sense of data such as images, sound, and text. Deep learning replaces handcrafted feature extraction with efficient algorithms for unsupervised or semi-supervised feature learning and hierarchical feature extraction.

    The recent upsurge of deep learning techniques was set off by groundbreaking work in 2006, but deep learning has been around for quite a long time. In the past few years, deep learning has shown exceptional performance in natural language processing, speech recognition and computer vision. Deep learning nowadays is generally centred on multilayered neural networks.

    The main deep neural architectures are feed-forward neural networks, recurrent neural networks and convolutional neural networks. Feed-forward networks send information in one direction, from the input end to the output end, hence the name. A feed-forward network approximates some function f by defining a mapping y = f(x; θ) and then learning the values of the parameters θ that best approximate f, so these networks are seen as universal function approximators. Recurrent neural networks model dynamics over time (and space) using self-replicated components. RNNs are specialized for processing sequential data; they preserve some amount of memory and can capture long-term dependencies. RNNs are powerful computational machines: they can approximate any program. A convolutional neural network has trainable filters and local neighbourhood pooling operations applied alternately to the input images, which results in a hierarchy of increasingly complex features. The pooling operations downsample the input representation and enlarge the effective receptive field over the input patterns. CNNs take advantage of repetitive local input patterns across time and space, making them translation-invariant, a capability found in the visual cortex of humans. Local input patterns are small slices of data of a fixed size, e.g., a group of pixels in an image.
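As a minimal sketch of the feed-forward mapping y = f(x; θ) described above (the layer sizes, weights and input here are arbitrary illustrations, not from the paper):

```python
import numpy as np

def sigmoid(z):
    # Smooth squashing nonlinearity applied between layers.
    return 1.0 / (1.0 + np.exp(-z))

def feedforward(x, W1, b1, W2, b2):
    # Information flows one way, input -> hidden -> output,
    # computing y = f(x; theta) for parameters theta = (W1, b1, W2, b2).
    h = sigmoid(W1 @ x + b1)   # hidden representation
    y = W2 @ h + b2            # linear output layer
    return y

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # 3 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # 4 hidden units -> 2 outputs
y = feedforward(np.array([0.5, -1.0, 2.0]), W1, b1, W2, b2)
print(y.shape)  # (2,)
```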

  2. Face Recognition

    In recent years, the performance of face recognition algorithms has improved a great deal. The significance of face recognition stems from its technical challenges and wide potential application. The first popular face recognition technique was the Eigenface method (Principal Component Analysis), which can be pictured as a single-layer linear model. The Fisherface method (Linear Discriminant Analysis) is also a single-layer linear model, and the Laplacianface method (Locality Preserving Projection) likewise uses linear features. Later, many handcrafted local nonlinear feature based methods emerged, such as Local Phase Quantization (LPQ), Local Binary Patterns (LBP), and Fisher vectors. These handcrafted features achieved excellent face recognition performance; however, performance decreased considerably in unconstrained environments, where the face images exhibit intra-personal variations such as pose, illumination, expression and occlusion, as shown on the Labelled Faces in the Wild (LFW) benchmark.
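To illustrate the LBP descriptor mentioned above, the code for a single 3×3 neighbourhood can be computed as follows (a toy example with made-up pixel values):

```python
import numpy as np

def lbp_code(patch):
    # patch: 3x3 grayscale neighbourhood. Compare the 8 neighbours
    # (clockwise from the top-left corner) against the centre pixel
    # and pack the resulting bits into one 8-bit LBP code.
    center = patch[1, 1]
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2),
               (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(offsets):
        if patch[r, c] >= center:
            code |= (1 << bit)
    return code

patch = np.array([[6, 5, 2],
                  [7, 6, 1],
                  [9, 8, 7]])
print(lbp_code(patch))  # 241
```

A full LBP feature would histogram these codes over image regions; this sketch shows only the per-pixel coding step.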

    In the last few years, deep learning methods, especially CNNs, have achieved very impressive results on face recognition in unconstrained environments. The main benefit of CNNs is that all the processing layers, even those taking pixel-level input, have configurable parameters that can be learned from data. This averts the need for handcrafted feature design and replaces it with supervised, data-driven learning of features. CNN-learned features are more resilient to complex intra-personal variations. CNN methods have attained the best three face recognition rates on the FRUE benchmark database LFW (Labelled Faces in the Wild).

  3. Back Propagation Neural Network

To overcome the limitations of the perceptron, in 1986 Rumelhart et al. described a new supervised learning procedure known as the Back Propagation Neural Network (BPNN), which is used for linear as well as non-linear classification. BPNN is a supervised algorithm in which the error difference between the desired output and the calculated output is propagated backwards. The procedure is repeated during learning to minimize the error by adjusting the weights through the back propagation of error. As a result of the weight adjustments, hidden units set their weights to represent important features of the task domain. A BPNN consists of three layers:

  1. Input Layer

  2. Hidden Layer and

  3. Output Layer.

    The number of hidden layers, and the number of hidden units in each hidden layer, depend upon the complexity of the problem. Learning in BPNN is a two-step process:

    • Forward Propagation:

In this step, outputs are calculated from the inputs and the current weights. For this calculation, each hidden unit and output unit computes a net excitation which depends on:

  • the values of the previous-layer units connected to the unit in consideration;

  • the weights between those previous-layer units and the unit in consideration;

  • the threshold value on the unit in consideration.

    This net excitation is passed to an activation function, which returns the calculated output value for that unit. The activation function must be continuous and differentiable. Various activation functions can be used in BPNN; the sigmoid is the most widely used.
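For example, the sigmoid and its derivative, which the backward pass relies on, can be written as:

```python
import math

def sigmoid(z):
    # Continuous, differentiable activation mapping any real z to (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_derivative(z):
    # d/dz sigmoid(z) = sigmoid(z) * (1 - sigmoid(z)); this simple
    # closed form is one reason the sigmoid suits backpropagation.
    s = sigmoid(z)
    return s * (1.0 - s)

print(sigmoid(0.0))             # 0.5
print(sigmoid_derivative(0.0))  # 0.25
```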

    • Backward Propagation of Error:

During this step, the error is calculated as the difference between the targeted output and the actual output of each output unit. This error is back-propagated to the previous layer, that is, the hidden layer. For each unit in hidden layer N, the error at that node is calculated; in the same way, the error at each node of the previous hidden layer, N-1, is calculated. These calculated errors are used to correct the weights so that the error at each output unit is minimized. The forward and backward steps are repeated until the error is reduced to the expected level.
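The two-step learning process can be sketched end to end as follows (a toy single-sample NumPy example; the layer sizes, learning rate and data are arbitrary illustrations, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One training pattern and its target output (made-up values).
x = np.array([0.2, 0.9])
t = np.array([1.0])

# Randomly initialised weights: input -> hidden (3 units) -> output.
W1 = rng.normal(scale=0.5, size=(3, 2))
W2 = rng.normal(scale=0.5, size=(1, 3))
eta = 1.0  # learning rate

def forward(x):
    h = sigmoid(W1 @ x)   # hidden activations
    y = sigmoid(W2 @ h)   # output activation
    return h, y

for _ in range(2000):
    # Forward propagation: compute outputs from current weights.
    h, y = forward(x)
    # Backward propagation of error: local gradients (deltas) per layer.
    delta_out = (t - y) * y * (1 - y)
    delta_hid = (W2.T @ delta_out) * h * (1 - h)
    # Weight update: Delta W_ji = eta * delta_j * y_i.
    W2 += eta * np.outer(delta_out, h)
    W1 += eta * np.outer(delta_hid, x)

_, y = forward(x)
print(abs(float(t[0] - y[0])) < 0.1)  # error reduced to the expected level
```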

PROPOSED METHODOLOGY

The face of the wanted person is trained on the server, which is handled by the police department; that is, a query image is uploaded to the server. The WLAN (router) then sends the query image to every device running the Cosmic View app. When the search button in the app is pressed, the app starts searching for the person. When any device finds a match, it sends the location, picture and date to the server. Figure 1 illustrates the proposed system framework.

Figure 1. Proposed System Framework

Convolution Neural Network

In deep learning, a convolutional neural network (CNN, or ConvNet) is a class of deep neural networks most commonly applied to analysing visual imagery. CNNs are regularized versions of multilayer perceptrons. "Multilayer perceptron" usually refers to a fully connected network, that is, one in which each neuron in one layer is connected to all neurons in the next layer. The fully connected nature of these networks makes them prone to overfitting the data. Typical forms of regularization include adding some measure of the magnitude of the weights to the loss function.

CNNs, however, take a different approach towards regularization: they exploit the hierarchical pattern in data and assemble more complex patterns from smaller and simpler ones. On the scale of connectedness and complexity, CNNs therefore sit at the lower extreme. Figure 2 shows an example of a CNN. CNNs use relatively little pre-processing compared to other image classification algorithms: the network learns the filters that in traditional algorithms were hand-engineered. This independence from prior knowledge and human effort in feature design is a major advantage.

Figure 2. Convolution Neural Network Example
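To illustrate the trainable-filter-plus-pooling idea, here is a minimal NumPy sketch (using cross-correlation, as deep learning libraries do; the 6×6 image and the 1×2 edge filter are arbitrary examples):

```python
import numpy as np

def conv2d(img, kernel):
    # Valid cross-correlation (convolution without kernel flip, as in
    # most DL libraries): slide the kernel over the image and take a
    # dot product at every position (no padding, stride 1).
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(fmap, size=2):
    # Non-overlapping max pooling: downsamples the feature map.
    H, W = fmap.shape
    out = fmap[:H - H % size, :W - W % size]
    return out.reshape(H // size, size, W // size, size).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)
edge = np.array([[1.0, -1.0]])   # simple horizontal edge filter
fmap = conv2d(img, edge)         # feature map, shape (6, 5)
pooled = max_pool(fmap)          # downsampled map, shape (3, 2)
print(fmap.shape, pooled.shape)  # (6, 5) (3, 2)
```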

ALGORITHM

Step 1. Read the image and divide it into equal parts of pixels.

Step 2. Each part is converted to binary numbers, and the output for each vector is fixed.

Step 3. The weights and biases are initialized randomly, and each part is trained separately using the neural network toolbox in MATLAB.

Step 4. Calculate the output of each part from the previous step and apply it as an input to the next step.

Step 5. The above steps are applied to the initial image without dividing it.

Step 6. The results obtained from steps 4 and 5 are compared to check the accuracy of this approach.

Step 7. The weights in the network can be adjusted by the following procedure:

  1. Using the sigmoid activation function, compute the activations of the hidden layer and output layer neurons.

  2. Calculate the errors of output layer and hidden layer and then calculate the total error of the network.

  3. Repeat the weight-adjustment steps until the mean squared error is minimized. The weight adjustment is given by the equation:

ΔWji(n) = η δj(n) yi(n)

where δj(n) is the local error gradient of unit j, yi(n) is the output of unit i in the previous layer, and η is the learning rate, as described in the back propagation algorithm.
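As a numeric illustration of this update rule (the values below are chosen arbitrarily for the example):

```python
# Worked example of the update rule: Delta W_ji(n) = eta * delta_j(n) * y_i(n)
eta = 0.5        # learning rate
delta_j = 0.12   # local error gradient at unit j (arbitrary value)
y_i = 0.8        # output of previous-layer unit i (arbitrary value)

delta_w = eta * delta_j * y_i
print(round(delta_w, 6))  # 0.048
```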


The following algorithm describes the CNN procedure for face matching. An input face image (captured via any device) is compared with the database image; if the comparison succeeds, the captured image's location is reported to the administrator, otherwise the status is set as Pending.
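The matching-and-reporting flow can be sketched as follows; the embedding values, identifier, location and date below are hypothetical placeholders, not the system's actual data or API:

```python
import numpy as np

# Hypothetical database: in the real system the embedding would come
# from the trained CNN, and the metadata from the capturing phone.
DATABASE = {"suspect_01": np.array([0.1, 0.9, 0.3])}

def match_face(captured_embedding, threshold=0.4):
    # Compare the captured face embedding against every database entry.
    # A distance below the threshold counts as a match and is reported
    # to the administrator; otherwise the status stays "Pending".
    for person_id, db_emb in DATABASE.items():
        distance = float(np.linalg.norm(captured_embedding - db_emb))
        if distance < threshold:
            return {"status": "Success", "person": person_id,
                    "location": "12.9716N,77.5946E",  # placeholder
                    "date": "2019-05-23"}             # placeholder
    return {"status": "Pending"}

print(match_face(np.array([0.12, 0.88, 0.31]))["status"])  # Success
print(match_face(np.array([0.9, 0.1, 0.9]))["status"])     # Pending
```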

RESULTS AND DISCUSSION

This is the server website of Cosmic View. Users can log into the site by entering a username and password, either as admin or as police. Figure 3 and Figure 4 show the admin login and the police station detail entry, respectively.

Figure 3. Admin Login Page

When police log into this page, they can manage the details of the specified police station. Once the police details are entered, that particular officer can enter as police admin.

Figure 4. Police Details Entry Form

On this page the admin enters the details of the user phones to which the query image will be sent. Because the admin manages this, security is ensured. The IMEI number of a phone, or the MAC address of a system, registered on the server becomes active. The Cosmic View app on the phone captures photos continuously while working in the background. Figure 5 shows the registration of mobile phones belonging to users willing to install Cosmic View.

Figure 5. User Gadget Registration Form (Cosmic View installed devices)

Figure 6. Suspicious Person Information Form

The suspected person's information is entered as illustrated in Figure 6. Here the police enter the details and upload a photo of the missing person. An individual's face is compared according to 68 landmark points on the face, shown in Figure 7; these points are different for each and every person. If the face comparison succeeds, the result is updated in the database. Figure 8 shows the Success and Pending statuses of the proposed Cosmic View algorithm.

Figure 7. Face Detection with 68 points
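A minimal sketch of such a landmark-based comparison (the coordinates below are synthetic; the real system compares the 68 detected facial points):

```python
import numpy as np

def landmark_distance(face_a, face_b):
    # face_a, face_b: (68, 2) arrays of facial landmark coordinates.
    # Mean Euclidean distance between corresponding points; a small
    # value suggests the two images show the same person.
    return float(np.mean(np.linalg.norm(face_a - face_b, axis=1)))

rng = np.random.default_rng(42)
face = rng.uniform(0, 100, size=(68, 2))           # synthetic landmarks
same = face + rng.normal(scale=0.5, size=(68, 2))  # same face, slight jitter
other = rng.uniform(0, 100, size=(68, 2))          # a different face

print(landmark_distance(face, same) < landmark_distance(face, other))  # True
```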

Figure 8. Final Status of Cosmic View Success or Pending

When a match is found, the location can be viewed in Google Maps, so the police are able to find the person easily. Figure 9 shows the location, in terms of latitude and longitude, of the registered user's mobile phone. Along with the location, the exact date and time at which the phone captured the image can be seen.

Figure 9. Suspected Person Location View via Google Maps

CONCLUSION

Face detection is basically defined as the determination of faces in an image under a complex background using a certain search strategy. It has many applications, such as pattern recognition, video surveillance, interface applications and identification. Face detection takes images as input and locates face areas within them by separating face regions from non-face background regions. Face recognition involves comparing an image with a database of stored faces to identify the individual in the input image. The related task of face detection has direct relevance to face recognition because images must be analysed and faces located before they can be recognized. Detecting faces in an image helps focus the computational resources of the face recognition system, optimizing the system's speed and performance.

REFERENCES

  1. R.-Qiong Xu, Bi-Cheng Li, Bo Wang, "Face detection and recognition using neural network and hidden Markov models," IEEE Int. Conf. Neural Networks & Signal Processing, Nanjing, China, December 14-17, 2003.

  2. F. H. C. Tivive and A. Bouzerdoum, "A Face Detection System Using Shunting Inhibitory Convolutional Neural Networks," IEEE, 2004.

  3. B. Moghaddam, A. Pentland.Probabilistic visual learning for object representation, IEEE Trans. PAMI, Vol. 19(7), 696-720, 1997.

  4. E. Osuna, R. Freund, and F. Girosi, "Training support vector machines: an application to face detection," Proc. CVPR'97, pp. 130-136.

  5. Lin-Lin Huang et al., "Face detection from cluttered images using a polynomial neural network," IEEE, 2001.

  6. R. Chellappa, C. L. Wilson, and S. Sirohey, "Human and machine recognition of faces: a survey," Proc. IEEE, volume 83, 1995.

  7. T. Sakai, M. Nagao, and T. Kanade, "Computer analysis and classification of photographs of human faces," Proc. First USA-Japan Computer Conference, 1972.

  8. Hjelmås, E. and Low, B. (2001), "Face Detection: A Survey," Computer Vision and Image Understanding, 83, 236-274.

  9. Rowley, H.A., Baluja, S., and Kanade, T. (1998). Neural network- based face detection. IEEE Trans. Pattern Anal. Mach. Intelligence, 20, 23-38.

  10. Aamer Mohamed, et al (2008) Face Detection based Neural Networks using Robust Skin Color Segmentation, 5th International Multi-Conference on Systems, Signals and Devices, IEEE.

  11. Omaima N. A. AL-Allaf, Review of Face Detection Systems Based Artificial Neural Network, IJMA Vol.6, No.1, February 2014.

  12. Khairul Azha A. Aziz et. al.,Face Detection Using Radial Basis Function Neural Networks With Variance Spread Value 2009 International Conference of Soft Computing and Pattern Recognition, IEEE 2009.

  13. Moeen Tayyab, M. F. Zafar, Face Detection using 2D-Discrete Cosine Transform and Back Propagation Neural Network 2009 International Conference on Emerging Technologies, IEEE 2009.

  14. Krishna Dharavath, Fazal Ahmed Talukdar and Rabul Hussain Laskar, "Improving Face Recognition Rate with Image Preprocessing," Indian Journal of Science and Technology, Vol 7(8), 1170-1175, August 2014.

  15. Hossein Ziaei Nafchi, Seyed Morteza Ayatollahi, "A set of criteria for face detection preprocessing," Proceedings of the International Neural Network Society Winter Conference (INNS-WC 2012).

  16. Linlin Huang, Akinobu Shimizu, Hidefumi Kobatake, "Face Detection using a Modified Radial Basis Function Neural Network," IEEE, 2002.

  17. Hyeonjoon Moon et al., "Computational and performance aspects of PCA-based face-recognition algorithms."

  18. P.Latha et. al., Face Recognition using Neural Networks, Signal Processing: An International Journal (SPIJ) Volume (3).

  19. Weitzenfeld, A., Arbib M. A., Alexander A.(2002)The Neural Simulation Language: A System for Brain Modeling, The MIT Press

  20. Bishop, C. M. (1995), Neural Networks for Pattern Recognition, Oxford University Press.

  21. Face detection, URL: http://www.vision.caltech.edu/htmlfiles/archive.html

  22. Tej Pal Singh, Face Recognition by using Feed Forward Back Propagation Neural Network, International Journal of Innovative Research in Technology & Science (IJIRTS).
