A Novel Approach on Object Tracking for Face Recognition


M. Sushma Sri

Faculty in ECE Government Institute Of Electronics

Secunderabad, India

Abstract: Nowadays, object tracking plays an important role in the area of image and video processing. Over the last decades object tracking has advanced dramatically, and it is now possible to track objects through an image or a video scene. Object tracking is applied in areas such as video surveillance, human-computer interaction, and robot navigation. The main aim of this paper is to review object tracking and its methods for recognizing faces.

Keywords: Object detection, Object classification, Object tracking, Kernel tracking

  1. INTRODUCTION

    In today's generation, object tracking plays a very important role in the field of computer vision. Object tracking has many applications, such as video surveillance, video compression, robotic visual interfaces, and medicine. Tracking is performed to locate an object in an image or video scene. The proposed method is suitable for real-time applications and works well for detecting fast-moving objects; the detection method applies to video sequences captured by both fixed and moving cameras [1], [2]. This paper presents the significance of object detection and object tracking. Section 2 describes the methodology, Section 3 discusses object detection and classification, and Section 4 presents object tracking.

  2. METHODOLOGY

    Images and video attract researchers interested in processing images, detecting moving objects, and tracking. Object tracking is widely used in computer vision because the surroundings in real-world videos are often articulated and cluttered. The flow chart in Fig.1 shows the steps of object tracking. A video sequence is first divided into frames. An object is then detected, which may be a human, a vehicle, or any other moving object. After detection, the object is classified based on texture, colour, shape, or motion. Once the object is classified according to the requirement, it is tracked.

    Video Sequence -> Object Detection -> Object Classification -> Object Tracking

    Fig.1. Flow chart
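The four stages of Fig.1 can be sketched as a minimal, pure-Python pipeline. Every stage implementation here (a threshold detector, a brightness-based classifier, nearest-neighbour association) is an illustrative placeholder for the real components described later in the paper, not the paper's own method.

```python
# Minimal sketch of the Fig.1 pipeline: frames from a video sequence go
# through detection, classification, and tracking. All three stage
# implementations below are deliberately simplistic placeholders.

def detect_objects(frame):
    """Detection stage: return bounding boxes (x, y, w, h).
    Placeholder rule: any pixel above an intensity threshold is 'object'."""
    boxes = []
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            if value > 128:
                boxes.append((x, y, 1, 1))
    return boxes

def classify_object(box, frame):
    """Classification stage: label a detection by a simple feature
    (brightness stands in here for texture/colour/shape cues)."""
    x, y, w, h = box
    return "bright" if frame[y][x] > 200 else "dim"

def track(prev_boxes, boxes):
    """Tracking stage: associate each new box with the nearest previous box
    (Manhattan distance between top-left corners)."""
    matches = []
    for b in boxes:
        if prev_boxes:
            nearest = min(prev_boxes,
                          key=lambda p: abs(p[0] - b[0]) + abs(p[1] - b[1]))
            matches.append((nearest, b))
    return matches

def run_pipeline(frames):
    """Run detection -> classification -> tracking over a frame sequence."""
    prev, history = [], []
    for frame in frames:
        boxes = detect_objects(frame)
        labels = [classify_object(b, frame) for b in boxes]
        history.append((boxes, labels, track(prev, boxes)))
        prev = boxes
    return history
```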

  3. OBJECT DETECTION & OBJECT CLASSIFICATION

  1. OBJECT DETECTION

    To track an arbitrary object, we first have to detect and identify the object in a video scene. Object detection is the process of finding objects such as faces, vehicles, or buildings in an image, as shown in Fig.1. Object detection is typically used to extract features for recognizing an object [5]. Detection can be performed either in every frame or only when the object first appears in the video [3]. There are many different methods for detecting objects; this section covers some of the most popular, namely point detectors, background subtraction, and segmentation.

    Fig.1. Object detection

    1. Point Detectors

      As the name suggests, this method detects interest points in objects. The terms most commonly used with point detectors [4] are corner detection and interest point detection. A corner is the intersection of two edges, and corner detection is frequently used in object recognition, video tracking, 3D modelling, etc. Interest point detection finds a particular distinctive point or position in the image; its main applications are image matching and view-based object recognition.
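A corner in this sense can be found where the image gradient is strong in two directions at once. The sketch below is a toy, pure-Python detector in the spirit of the Harris corner response; the window size, the constant k, and the test image are illustrative choices, not tuned values from the paper.

```python
# Toy corner detector: a corner is a point where the gradient is strong in
# two directions at once (the intersection of two edges). The response is
# Harris-style: R = det(M) - k * trace(M)^2, with M the structure tensor
# summed over a small window.

def gradients(img, x, y):
    """Central-difference image gradients at an interior pixel."""
    ix = (img[y][x + 1] - img[y][x - 1]) / 2.0
    iy = (img[y + 1][x] - img[y - 1][x]) / 2.0
    return ix, iy

def corner_response(img, x, y, k=0.05):
    """Harris-style response from the structure tensor accumulated
    over the 3x3 window around (x, y)."""
    a = b = c = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ix, iy = gradients(img, x + dx, y + dy)
            a += ix * ix   # sum of Ix^2
            b += ix * iy   # sum of Ix*Iy
            c += iy * iy   # sum of Iy^2
    return (a * c - b * b) - k * (a + c) ** 2
```

On an image containing a single bright square, the response is maximal at the square's corner, while pure edges and flat regions score lower.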

        1. Background subtraction

          Background subtraction is used to track an object through a video sequence. The process of separating a foreground object from the background is known as background subtraction, as shown in Fig.2.1 & Fig.2.2. If a moving object appears in both adjacent frames, the tracking area will be overestimated, as shown in Fig.2.3.

          Fig.2.1. Background subtraction Image Fig.2.2. Next Image

          Fig.2.3. Tracking area

          The basic approach of this method is to build a background model of the scene. The reference image used in this model must be continuously updated so that it contains no moving objects. Each video frame is compared against the background model, so that moving objects can be recognized as deviations from the reference model [8]. Background subtraction techniques fall into two groups: recursive and non-recursive techniques.
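The scheme above can be sketched in a few lines of pure Python: a recursive running-average background model that is continuously updated, and a per-pixel comparison that flags deviations as foreground. The learning rate and threshold values are illustrative assumptions, not values from the paper.

```python
# Background subtraction sketch: maintain a background model with a
# recursive running average, and mark pixels that deviate from it as
# foreground (moving objects).

def update_background(bg, frame, alpha=0.1):
    """Recursive technique: bg <- (1 - alpha) * bg + alpha * frame,
    applied per pixel, so the reference image is continuously updated."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=50):
    """Pixels differing from the background model by more than `thresh`
    are marked foreground (1); the rest are background (0)."""
    return [[1 if abs(f - b) > thresh else 0 for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]
```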

        2. Segmentation

      Segmentation is the process of partitioning an image into multiple segments, or regions. The produced segments collectively cover the entire image, and each is perceptually similar with respect to, for example, colour or texture.
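One simple way to realize this definition is a flood fill that groups neighbouring pixels of similar intensity into one region; every pixel receives exactly one label, so the segments cover the whole image. The tolerance value and the intensity-only similarity measure are illustrative simplifications.

```python
# Illustrative segmentation sketch: partition an image into regions of
# perceptually similar pixels (similar intensity here) with a flood fill.
# The resulting segments collectively cover the entire image.

def segment(img, tol=10):
    """Return a label map: neighbouring pixels whose intensities differ
    by at most `tol` share a segment label."""
    h, w = len(img), len(img[0])
    labels = [[None] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] is not None:
                continue  # already part of a segment
            stack = [(sx, sy)]
            labels[sy][sx] = next_label
            while stack:  # flood fill from the seed pixel
                x, y = stack.pop()
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if (0 <= nx < w and 0 <= ny < h
                            and labels[ny][nx] is None
                            and abs(img[ny][nx] - img[y][x]) <= tol):
                        labels[ny][nx] = next_label
                        stack.append((nx, ny))
            next_label += 1
    return labels
```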

  2. OBJECT CLASSIFICATION

Object classification is the process of categorizing objects found in a video scene. Classification is one instance of the more general problem of pattern recognition, the automated recognition of patterns in data or images.

  4. OBJECT TRACKING

    Tracking is the process of estimating the motion parameters and location of an object, starting from the initial frame and continuing through subsequent frames. Object tracking selects an object of interest from a video scene and keeps track of its motion, orientation, occlusion, etc., in order to extract useful information [7]. Building an object tracker is usually divided into several steps, namely object detection, object classification, and object tracking, as already discussed in Sections 2 and 3. Once a moving object is detected, tracking is responsible for identifying the object's path in the subsequent frames, through path alignment or prediction techniques, or by simply indicating its location and direction of movement in each frame. Advanced tracking techniques must ensure that each object is correlated correctly with the same object in the following frames, and therefore identify each object by a set of characteristics to make tracking reliable. Once an object is detected and classified (for example, as a vehicle), the next phase is tracking it. The classification of object tracking methods is shown in Fig.3.

    Object Tracking
        Kernel Tracking
            Template Based    Multi-view Based

    Fig.3. Classification of Object Tracking

    A. Kernel tracking

    The kernel tracking approach computes the motion of an object represented by a primitive object region. Using a motion model, i.e. computing the motion of the object from one frame to another, it is possible to determine its next position. Depending on the purpose of the tracking, different parts of the estimate matter: for instance, when the trajectory is used only to analyze object behaviour, motion alone is needed; however, the region enclosing the object also becomes important when the object must be identified [6]. The kernel, in this context, refers to the look of the object, i.e. its shape and appearance. Different primitive shapes, such as rectangle or ellipse templates, are used to represent the object [8]. Kernel tracking methods are divided into two groups: template based and multi-view based.
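A classic instance of kernel tracking is mean shift: the kernel window is moved, iteration by iteration, toward the weighted centroid of pixels that resemble the object's appearance. The sketch below assumes a rectangular kernel and a single target intensity as the appearance model; both are illustrative simplifications of the general approach.

```python
import math

# Kernel tracking by mean shift (sketch): the object is a primitive
# rectangular region, and its next position is found by iteratively
# shifting the window to the centroid of per-pixel appearance weights.

def weight(value, target, sigma=30.0):
    """Similarity of a pixel to the target appearance (Gaussian falloff)."""
    return math.exp(-((value - target) ** 2) / (2 * sigma * sigma))

def mean_shift(img, cx, cy, target, half=1, iters=10):
    """Shift a (2*half+1)^2 kernel window toward the weighted centroid
    until it stops moving or `iters` iterations are used up."""
    h, w = len(img), len(img[0])
    for _ in range(iters):
        sw = sx = sy = 0.0
        for y in range(max(0, cy - half), min(h, cy + half + 1)):
            for x in range(max(0, cx - half), min(w, cx + half + 1)):
                wgt = weight(img[y][x], target)
                sw += wgt
                sx += wgt * x
                sy += wgt * y
        nx, ny = round(sx / sw), round(sy / sw)
        if (nx, ny) == (cx, cy):
            break  # converged: window no longer moves
        cx, cy = nx, ny
    return cx, cy
```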

      1. Template based

        Template-based appearance models are considered relatively straightforward to use and not as complex as other models. This has made them a popular choice of appearance model, and they have been widely used for a long time. With the template-based approach, the tracking method differs depending on whether a single object or multiple objects are being tracked. These methods are therefore divided into two groups: single object tracking and multiple object tracking.

        • Single object tracking

          Single object tracking is the most common approach to template matching. This method is efficient and flexible for matching a template. Different features, such as colour or image intensity, can be used to form the template. The basic approach is to search for a specific template pattern in an image, i.e. to match the template to a specific part of the image. The object template is generated from the previous frame.
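The search described above can be sketched as an exhaustive sum-of-squared-differences (SSD) match: the template, cut from the previous frame, is compared against every patch position in the current frame, and the position with the smallest SSD wins. Intensity is used as the feature here; sizes and values are illustrative.

```python
# Single-object tracking by template matching (sketch): slide the template
# over the frame and take the position with the smallest sum of squared
# differences (SSD) as the match.

def ssd(frame, tmpl, ox, oy):
    """Sum of squared differences between the template and the frame
    patch whose top-left corner is (ox, oy)."""
    return sum((frame[oy + y][ox + x] - tmpl[y][x]) ** 2
               for y in range(len(tmpl))
               for x in range(len(tmpl[0])))

def match_template(frame, tmpl):
    """Exhaustive search over all valid patch positions; returns the
    (x, y) top-left corner of the best-matching patch."""
    th, tw = len(tmpl), len(tmpl[0])
    positions = [(x, y)
                 for y in range(len(frame) - th + 1)
                 for x in range(len(frame[0]) - tw + 1)]
    return min(positions, key=lambda p: ssd(frame, tmpl, p[0], p[1]))
```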

        • Multiple object tracking

          Tracking multiple objects must deal with interaction between objects and the background. One type of interaction occurs when one object partly or completely occludes another. Modelling objects individually, as in single object tracking, does not handle these difficulties [3]. Different methods have been suggested to solve the problem. One suggestion is to consider the image as a set of layers, where the number of objects being tracked determines the number of layers; the method also includes an additional background layer. Each layer contains models, such as a layer appearance model and a motion model, corresponding to the object it represents. The background layer is used to compensate for any background motion so that an object's motion can be computed from the compensated image [9]. In this way, occlusion can be handled explicitly.
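The layer idea can be illustrated very compactly: one layer per tracked object plus a background layer, each carrying a tiny appearance model (a single mean intensity here, standing in for the richer appearance and motion models described above). Each pixel is then assigned to the layer whose model explains it best, which is one way occlusion between objects can be reasoned about explicitly.

```python
# Toy illustration of layer-based multiple-object tracking: label every
# pixel with the layer whose appearance model (a mean intensity) is
# closest. Index 0 is the background layer; 1..N are object layers.

def assign_layers(img, object_models, background_model):
    """Return a per-pixel layer-index map for the given intensity models."""
    models = [background_model] + list(object_models)
    return [[min(range(len(models)), key=lambda i: abs(v - models[i]))
             for v in row]
            for row in img]
```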

      2. Multi-view based

    A problem with generating, for instance, a template to represent the tracked object is that the representation is usually based on the latest observation of the object. This means the representation only considers the object from the currently visible view, in other words from one view only. This can be a problem for more complex objects that appear different from different views. If the tracked object undergoes a drastic change in appearance or motion from one frame to another, the stored information is no longer relevant and the track may be lost. The same problem can occur with occlusion and with objects temporarily exiting the frame.

    In this paper we develop an object tracking system in three simple steps:

    Step 1: Detection

    To track an object, we first have to detect its location in a video scene. The detector is configured to find the object's location, after which the object is classified according to the requirement; for example, we can use a face detector to track a face across successive video frames. The classification model used for detection has limitations, and running face detection on every video frame is computationally intensive; tracking the detected face between frames overcomes this problem.
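The detect-sparsely, track-cheaply cadence described in Step 1 can be sketched as follows. The `detect` and `track` callables and the `period` value are illustrative stand-ins for a real face detector and tracker, not components defined by the paper.

```python
# Sketch of the Step 1 cadence: because running the detector on every
# frame is computationally intensive, invoke it only every `period`
# frames and let a cheaper tracker update the location in between.

def process_video(frames, detect, track, period=5):
    """Alternate full detection with lightweight tracking; returns the
    object location estimated for each frame."""
    location = None
    locations = []
    for i, frame in enumerate(frames):
        if location is None or i % period == 0:
            location = detect(frame)            # expensive, run sparsely
        else:
            location = track(frame, location)   # cheap, run per frame
        locations.append(location)
    return locations
```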

    Step 2: Identification of features

    Once the location is detected, we identify the facial features in the video scene. For this we can use the shape, texture, or colour of the object. Fig.4.1 below shows an example for face recognition in which skin colour provides a good deal of contrast between the face and the background.

    Step 3: Tracking the face

    When skin colour is selected as the feature for tracking, we convert the detected object into the hue data channel to extract pixels, as shown in Fig 4.2. These pixels are extracted from the nose region of the detected face, as shown in Fig.4.3.

    Fig.4.1. Detected Face Fig.4.2. Hue Image

    Fig.4.3. Object tracked at a particular
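The hue-channel conversion of Step 3 can be sketched with the standard library's `colorsys` module: each RGB pixel is mapped to its hue, and pixels whose hue falls inside a reddish band are kept as candidate skin pixels. The band limit below is a rough illustrative choice, not a calibrated skin model.

```python
import colorsys

# Step 3 sketch: convert RGB pixels to the hue channel and keep pixels
# whose hue lies in a reddish band that skin tones typically occupy.

def hue_channel(rgb_img):
    """Convert an RGB image (component values 0-255) to its hue channel
    (values in 0.0-1.0, as returned by colorsys)."""
    return [[colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)[0]
             for (r, g, b) in row]
            for row in rgb_img]

def skin_pixels(rgb_img, hue_max=0.1):
    """Return (x, y) coordinates of pixels whose hue lies in [0, hue_max],
    an illustrative skin-tone band."""
    hues = hue_channel(rgb_img)
    return [(x, y) for y, row in enumerate(hues)
            for x, h in enumerate(row) if h <= hue_max]
```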

  5. CONCLUSION

The survey suggests that the most common categories of object tracking techniques suit different applications and environments. Tracking a face in a video sequence is used for detecting facial features. The approach has been tested not only on recorded video sequences but also on live video from a webcam. Using this system, many security and surveillance applications can be developed, and a required object can be traced easily.

REFERENCES

  1. P.M. Jodoin, M. Mignotte, C. Rosenberger, "Segmentation framework based on label field fusion", IEEE Trans. Image Process., 16(10): 2535-2550, 2007.

  2. Wu-Chih Hu, Chao-Ho Chen, Tsong-Yi Chen, Deng-Yuan Huang, Zong-Che Wu, "Moving object detection and tracking from video captured by moving camera", J. Vis. Commun. Image Represent., 30: 164-180, 2015.

  3. Grandham Sindhuja and Renuka Devi, "A survey on detection and tracking of objects in video sequence", International Journal of Engineering Research and General Science, 3(2), 2015.

  4. Zakaria Moutakki, Imad Mohamed Ouloul, Karim Afdel, Abdellah Amghar, "Real-Time System Based on Feature Extraction for Vehicle Detection and Classification", Transport and Telecommunication, 19(2): 93-102, 2018.

  5. Y. Dedeoglu, "Moving Object Detection, Tracking and Classification for Smart Video Surveillance", Master's thesis, Bilkent University, 2004.

  6. J. Meja Patel and Bhumika Bhatt, "A comparative study of object tracking techniques", International Journal of Innovative Research in Science, Engineering and Technology, 4(3), 2015.

  7. Sandeep Kumar Patel and Agya Mishra, "Moving object tracking techniques: A critical review", Indian Journal of Computer Science and Engineering, 4(2): 95-102, 2013.

  8. R. Hemangi Patil and K. S. Bhagat, "Detection and tracking of moving object: A survey", Int. Journal of Engineering Research and Applications, 5(11): 138-142, 2015.

  9. Peng Chen, Yuanjie Dang, Ronghua Liang, "Real-Time Object Tracking on a Drone With Multi-Inertial Sensing Data", IEEE Transactions on Intelligent Transportation Systems, 19(1), January 2018.

  10. Y. Yuan, Z. Xiong, and Q. Wang, "An incremental framework for video-based traffic sign detection, tracking, and recognition", IEEE Trans. Intell. Transp. Syst., 18(7): 1918-1929, July 2017.

  11. V. Ramakrishnan, A. K. Prabhavathy and J. Devishree, "A Survey on Vehicle Detection Techniques in Aerial Surveillance", International Journal of Computer Applications, 55(18), 2014.

  12. G. Jemilda, S. Baulkani, "Moving Object Detection and Tracking using Genetic Algorithm Enabled Extreme Learning Machine", International Journal of Computers Communications & Control, 13(2): 162-174, April 2018.

  13. A. Shingade, A. Ghotkar, "Survey of Object Tracking and Feature Extraction Using Genetic Algorithm", International Journal of Computer Science and Technology, 5(1), 2014.
