Background Subtraction in Complex Scenarios using Spatio-Temporal Model


Swetha M1

PG Student, Digital Electronics and Communication, DSCE, Bangalore, India

Mrs. Shahla Sohail2

Assistant Professor, Department of Electronics and Communication, DSCE, Bangalore, India

Abstract:- Background subtraction is still an open challenge in complex scenarios. Traditional background subtraction methods assume that scenes are static, and as a result they have limited applicability. Background subtraction under dynamic backgrounds, indistinct foreground objects, and illumination variations is a complicated task. To overcome these challenges, we propose an effective background subtraction method using a spatio-temporal model.

Index terms – Spatio-temporal representations, video surveillance


  1. INTRODUCTION

Background subtraction plays an important role in many video surveillance applications, where detecting moving objects is a key task.

Background subtraction is a technique in image processing in which foreground objects are extracted for further processing.

Background subtraction is also known as foreground detection. It remains an open challenge in real surveillance applications due to the following difficulties:

Dynamic backgrounds: the scene environment is dynamic, e.g., heavy rain, camera jitter, rippling water.

Indistinct foreground objects: foreground and background objects have similar appearances.

Fig. 1. Some challenging scenarios handled by our approach: (a) a floating bottle on randomly dynamic water (left column), (b) waving curtains around a person, (c) sudden light changes.

In this paper we address these difficulties with an efficient background subtraction algorithm. Some of the challenges handled by our approach are shown in Fig. 1.

  2. RELATED WORK

There exist many background subtraction methods; here we introduce some representative approaches.

Pixel-level approaches treat the scene as a set of individual pixels, each described by a parametric distribution.

In these methods [1], [2], [3], [4], each pixel in the scene is described by a parametric distribution (e.g., a Gaussian mixture model) that adapts temporally to environment changes. Parametric models, however, are not always compatible with real, complex data, since they rest on underlying assumptions about the scene. To overcome this problem, non-parametric estimations [5], [6] were proposed and effectively improved robustness.
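As a concrete illustration of pixel-wise parametric modeling, the following Python/NumPy sketch maintains a single running Gaussian per pixel. This is a deliberately simplified single-component version of the mixture models of [1]; the function name, learning rate alpha, and threshold k are illustrative choices, not values from the paper (whose experiments run in MATLAB).

```python
import numpy as np

def update_gaussian_bg(frame, mean, var, alpha=0.05, k=2.5):
    """One step of a running single-Gaussian background model per pixel.

    A pixel is foreground when it lies more than k standard deviations
    from the background mean; background pixels update mean and variance
    with learning rate alpha (a simplification of a full mixture model).
    """
    frame = frame.astype(np.float64)
    fg = np.abs(frame - mean) > k * np.sqrt(var)   # foreground mask
    bg = ~fg
    # adapt only where the pixel matched the background model
    mean[bg] += alpha * (frame[bg] - mean[bg])
    var[bg] += alpha * ((frame[bg] - mean[bg]) ** 2 - var[bg])
    np.maximum(var, 1e-4, out=var)                 # keep variance positive
    return fg

# toy usage: a static 4x4 scene with one changed pixel
mean = np.full((4, 4), 100.0)
var = np.full((4, 4), 25.0)    # std = 5
frame = mean.copy()
frame[2, 2] = 200.0            # a new object appears here
fg = update_gaussian_bg(frame, mean, var)
print(fg[2, 2], fg[0, 0])      # True False
```

A full mixture keeps several (mean, variance, weight) triples per pixel so that multi-modal backgrounds such as rippling water can be represented; the update rule per matched component is the same.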

Region-based methods build background models by taking advantage of inter-pixel relations, and have demonstrated impressive results on dynamic scenes. A variety of approaches have been proposed to model the spatial structures of scenes, such as joint distributions of neighboring pixels [7], [8].

    The third category modeled scene backgrounds by exploiting both spatial and temporal information.

Mahadevan et al. proposed to separate foreground objects from their surroundings by identifying distinctive video patches whose motions and appearances differ from the majority of the scene.

In addition, several saliency-based approaches provide alternative ways based on spatio-temporal saliency estimations [24], [28], [29]. Moving objects can be extracted according to their salient appearances and/or motions against the scene background.

Along with the above-mentioned background models, a number of reliable image features have been used to better handle background noise, such as Local Binary Pattern (LBP) features and color texture histograms. The LBP operator describes each pixel by the relative gray levels of its neighboring pixels, and its effectiveness has been demonstrated in several vision tasks.

  3. OVERVIEW

We propose an effective background subtraction method using a spatio-temporal model.

The algorithm processes 15-20 frames per second on average at a resolution of 352 × 288 pixels.

The video is processed frame by frame, and a Gaussian mixture model is used to generate the background and foreground.

In practice, the illumination in the scene can change gradually (daytime or weather conditions in an outdoor scene) or suddenly (switching a light in an indoor scene). A new object can be brought into the scene, or a present object removed from it. To adapt to such changes, we update the training set by adding new samples and discarding the old ones.
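The update scheme above (add new samples, discard old ones) can be sketched as a per-pixel sample set. This Python sketch is illustrative only: the class name and the parameters n_samples, radius, and min_matches are our assumptions, not values from the paper. Note the blind update at the end, which lets gradual illumination changes and newly static objects be absorbed into the model.

```python
from collections import deque
import numpy as np

class SampleBackground:
    """Per-pixel training set: keep the last N background samples and
    classify a pixel as background if it is close to enough of them."""

    def __init__(self, first_frame, n_samples=20, radius=10, min_matches=2):
        self.radius = radius
        self.min_matches = min_matches
        # initialise the training set with copies of the first frame
        self.samples = deque([first_frame.astype(np.int16)] * n_samples,
                             maxlen=n_samples)

    def apply(self, frame):
        frame = frame.astype(np.int16)
        stack = np.stack(list(self.samples))                  # (N, H, W)
        matches = (np.abs(stack - frame) <= self.radius).sum(axis=0)
        fg = matches < self.min_matches
        # blind update: the new frame joins the set, the oldest is discarded
        self.samples.append(frame)
        return fg

first = np.full((2, 2), 100, dtype=np.uint8)
model = SampleBackground(first)
frame = first.copy()
frame[0, 0] = 200              # sudden object at one pixel
fg = model.apply(frame)
print(fg[0, 0], fg[1, 1])      # True False
```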

    Local feature detection and description have gained a lot of interest in recent years since photometric descriptors computed for interest regions have proven to be very successful in many applications. In this paper, we propose a novel interest region descriptor which combines the strengths of the well-known SIFT descriptor and the LBP texture operator. It is called the center-symmetric local binary pattern (CS-LBP) descriptor. This new descriptor has several advantages such as tolerance to illumination changes, robustness on flat image areas, and computational efficiency.
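A minimal sketch of the CS-LBP code for a single 3×3 patch follows (Python, with pixel values normalized to [0, 1]). The function name is ours, and the default threshold t = 0.2 is chosen from the range discussed in Section 5; treat both as illustrative assumptions.

```python
import numpy as np

def cs_lbp_3x3(patch, t=0.2):
    """Center-symmetric LBP code of one 3x3 patch (values in [0, 1]).

    The 8 neighbours form 4 center-symmetric pairs; each pair yields one
    bit (1 if its difference exceeds the contrast threshold t), giving a
    4-bit code in 0..15 instead of the 8-bit basic LBP code.
    """
    # neighbours in circular order around the centre pixel
    n = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
         patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for i in range(4):
        if n[i] - n[i + 4] > t:
            code |= 1 << i
    return code

flat = np.full((3, 3), 0.5)
edge = flat.copy()
edge[0, 0] = 0.9               # one bright corner
print(cs_lbp_3x3(flat), cs_lbp_3x3(edge))   # 0 1
```

Because flat areas produce code 0 regardless of noise below t, the descriptor is robust on flat image regions, and the halved code length keeps histograms compact.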

1. LOCAL BINARY PATTERNS (LBP)

The LBP operator is one of the best performing texture descriptors and it has been widely used in various applications. It has proven to be highly discriminative, and its key advantages, namely its invariance to monotonic gray-level changes and its computational efficiency, make it suitable for demanding image analysis tasks.

        The LBP operator was originally designed for texture description.

The operator assigns a label to every pixel of an image by thresholding the 3×3 neighbourhood of each pixel with the centre pixel value and considering the result as a binary number. The histogram of the labels can then be used as a texture descriptor. Formally, the basic LBP operator takes the form

LBP(x_c, y_c) = sum over p = 0..7 of s(g_p - g_c) * 2^p, where s(x) = 1 if x >= 0 and s(x) = 0 otherwise,

with g_c the gray value of the centre pixel (x_c, y_c) and g_p the gray values of its eight neighbours.
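The basic operator can be written in a few lines (Python; illustrative only). The second call demonstrates the invariance to monotonic gray-level changes mentioned above: adding a constant to every pixel leaves the code unchanged.

```python
def lbp_3x3(p):
    """Basic LBP code of the centre pixel of a 3x3 patch: threshold the
    8 neighbours against the centre and read the results as one byte."""
    c = p[1][1]
    n = [p[0][0], p[0][1], p[0][2], p[1][2],
         p[2][2], p[2][1], p[2][0], p[1][0]]
    return sum(1 << i for i, g in enumerate(n) if g >= c)

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
brighter = [[v + 50 for v in row] for row in patch]   # monotonic change
print(lbp_3x3(patch), lbp_3x3(brighter))              # 241 241
```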

The whole video is processed and converted into frames. A Gaussian mixture model is applied to extract the foreground object, and an adaptive background subtraction algorithm is used to extract background frames. Local binary patterns (LBP) are then applied to the generated blocks.

A spatio-temporal model is developed and combined with these features to extract the required foreground object and subtract the background.
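Splitting the frames into spatio-temporal blocks (video bricks) can be sketched as follows; the brick dimensions bt, bh, bw are illustrative assumptions, and the video dimensions are assumed to divide evenly.

```python
import numpy as np

def make_bricks(video, bt=4, bh=8, bw=8):
    """Split a (T, H, W) video into non-overlapping spatio-temporal
    bricks of size (bt, bh, bw); all dimensions must divide evenly."""
    t, h, w = video.shape
    bricks = (video.reshape(t // bt, bt, h // bh, bh, w // bw, bw)
                   .transpose(0, 2, 4, 1, 3, 5))
    return bricks.reshape(-1, bt, bh, bw)

video = np.zeros((8, 16, 16))          # 8 frames of 16x16 pixels
print(make_bricks(video).shape)        # (8, 4, 8, 8)
```

Each brick can then be described by its LBP (or CS-LBP) histogram and compared against the background model for that location.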

Fig. 2. Block diagram: Input Video → Frame Extraction → Pre-processing (Resize, Colour Conversion) → Adaptive Background Generation (N) → Foreground Generation (N+1) by background subtraction → Motion Estimation → Block Generation → LBP Features → Saliency Detection → Spatio-Temporal Model → Segmented Object.

1. Hardware requirements and software requirements

          Hardware Requirements

          • SYSTEM : Pentium IV 2.4 GHz

          • HARD DISK : 40 GB

          • MONITOR : 15 VGA colour

          • MOUSE: Logitech.

          • RAM : 256 MB

          • KEYBOARD: 110 keys enhanced.

    Software Requirements

• Operating System: Windows XP Professional or above

• Coding Language: MATLAB (10)

• Platform: MATLAB 7.0 and above

MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation.

  4. EXPERIMENTAL RESULTS

    OUTPUTS:

Fig. 3. Some sample results of background subtraction produced by our approach.

  5. DISCUSSIONS

    1. Efficiency: Like other background models, there is a trade-off between the model stability and maintenance efficiency.

2. Feature effectiveness: The contrast threshold is the only parameter of the CS-STLBP operator; it affects the power of the feature to characterize spatio-temporal information within video bricks. We observe that the appropriate threshold range is 0.15-0.25.

  6. CONCLUSION

Compared with traditional methods, the proposed method adapts quickly to dynamic changes in the scene and proves to be an efficient background subtraction method.

A good background subtractor should not only robustly detect targets under different situations (e.g., moving and static), but also adaptively maintain the background model against various influences (e.g., dynamic scenes and noise). This paper proposes a novel background modeling approach with these characteristics.

  7. REFERENCES

1. C. Stauffer and W. Grimson, Adaptive background mixture models for real-time tracking, in Proc. IEEE Conf. CVPR, Jun. 1999.

2. T. Bouwmans, F. E. Baf, and B. Vachon, Background modeling using mixture of Gaussians for foreground detection-a survey, Recent Patents Comput. Sci., vol. 1, no. 3, pp. 219-237, 2008.

3. L. Maddalena and A. Petrosino, A self-organizing approach to background subtraction for visual surveillance applications, IEEE Trans. Image Process., vol. 17, no. 7, pp. 1168-1177, Jul. 2008.

4. D.-M. Tsai and S.-C. Lai, Independent component analysis-based background subtraction for indoor surveillance, IEEE Trans. Image Process., vol. 18, no. 1, pp. 158-167, Jan. 2009.

5. H. Chang, H. Jeong, and J. Choi, Active attentional sampling for speedup of background subtraction, in Proc. IEEE Conf. CVPR, Jun. 2012, pp. 2088-2095.

6. X. Liu, L. Lin, S. Yan, H. Jin, and W. Tao, Integrating spatio-temporal context with multiview representation for object recognition in visual surveillance, IEEE Trans. Circuits Syst. Video

7. O. Barnich and M. Van Droogenbroeck, ViBe: A universal background subtraction algorithm for video sequences, IEEE Trans. Image Process., vol. 20, no. 6, pp. 1709-1724, Jun. 2011.

8. S. Liao, G. Zhao, V. Kellokumpu, M. Pietikainen, and S. Li, Modeling pixel process with scale invariant local patterns for background subtraction in complex scenes, in Proc. IEEE Int. Conf. CVPR, Jun. 2010.
