Moving Vehicle Detection and Speed Measurement in Video Sequence

DOI : 10.17577/IJERTV2IS100920


Ms. Bhagyashri Makwana1 Prof. PraveshKumar Goel2

1 B. H. Gardi College of Engineering & Technology, Rajkot

2 B. H. Gardi College of Engineering & Technology, Rajkot

Abstract

Video and image processing have been used for traffic surveillance, analysis and monitoring of traffic conditions in many cities and urban areas [5]. This paper presents another approach to estimating vehicle velocity. The method requires a video scene comprising the following components: a moving vehicle, a starting reference point, and an ending reference point. Computationally economical image-processing techniques, rather than a dedicated digital-signal-processing chip, are applied to a video sequence captured by a fixed-position video camera to estimate the speed of moving vehicles. Moving vehicles are detected by analysing sequences of binary images, which are constructed from the captured frames using an inter-frame difference or background subtraction algorithm. The system detects the position of the moving vehicle in the scene relative to the reference points and calculates the speed from the positions detected in each static frame [1].

  1. Introduction

    Various methods for speed estimation have been proposed in recent years. All approaches attempt to increase accuracy and decrease the cost of hardware implementation. Speed-estimation methods fall into two classes. First, active methods: the most popular approaches use RADAR and LIDAR devices to detect the speed of a vehicle. A RADAR device bounces a radio signal off a moving vehicle, and the reflected signal is picked up by a receiver. The traffic radar receiver then measures the frequency difference between the original and reflected signals and converts it into the speed of the moving vehicle. A LIDAR device times how long it takes a light pulse to travel from the LIDAR gun to the vehicle and back; from this, it can quickly find the distance between the gun and the vehicle. By making several measurements and comparing the distance the vehicle travelled between measurements, LIDAR can determine the vehicle's speed accurately. Second, passive methods [6,7,8]: in these methods, speed information is extracted from a sequence of real-time traffic images taken by a passive camera. Moving edges are extracted, and the resulting edge information is processed to obtain quantitative geometric measurements of the vehicles. Image-processing techniques with low computational cost are adopted and developed for vehicle detection and tracking. Image processing is a software-based technology that requires no special hardware: a typical video recording device and an ordinary computer [9] can form a speed-detection device. Using the basic rate equation, the speed of a vehicle moving in the video scene can be calculated from the known distance and the time the vehicle took to travel it [2].

  2. Binary Image Generation

    The speed measurement is performed in the binary image domain, i.e., each pixel is transformed into either 1 or 0 according to its motion information. To binarize the incoming input image and detect only the moving pixels, two different techniques are used: (1) background subtraction; (2) inter-frame difference [1].

    1. Background subtraction

      Identifying moving objects in a video sequence is a fundamental and critical task in video surveillance, traffic monitoring and analysis, human detection and tracking, and gesture recognition in human-machine interfaces. A common approach to identifying the moving objects is background subtraction, in which each video frame is compared against a reference or background model. Pixels in the current frame that deviate significantly from the background are considered to be moving objects. These "foreground" pixels are further processed for object localization and tracking. Since background subtraction is often the first step in many computer vision applications, it is important that the extracted foreground pixels accurately correspond to the moving objects of interest. Even though many background subtraction algorithms have been proposed in the literature, the problem of identifying moving objects in complex environments is still far from completely solved [3].

      Figure 1 Flow diagram of a generic background subtraction algorithm.

      1. Pre-processing

        In most computer vision systems, simple temporal and/or spatial smoothing is used in the early stages of processing to reduce camera noise. Smoothing can also be used to remove transient environmental noise, such as rain and snow captured by an outdoor camera. For real-time systems, frame-size and frame-rate reduction are commonly used to reduce the data-processing rate. If the camera is moving, or multiple cameras are used at different locations, image registration between successive frames or among the different cameras is needed before background modelling [10,11]. Another key issue in pre-processing is the data format used by the particular background subtraction algorithm [3].
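The smoothing and frame-size reduction described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the box-filter radius and downsampling factor are assumptions, and a real system would typically use a library routine for both operations.

```python
import numpy as np

def preprocess(frame, blur_radius=1, scale=2):
    """Smooth a greyscale frame with a box filter, then downsample it.

    The box blur suppresses camera noise; keeping every `scale`-th pixel
    reduces the data-processing rate for real-time operation.
    """
    h, w = frame.shape
    padded = np.pad(frame.astype(np.float32), blur_radius, mode="edge")
    smoothed = np.zeros((h, w), dtype=np.float32)
    k = 2 * blur_radius + 1
    # Sum the k x k shifted copies of the padded frame, then normalise.
    for dy in range(k):
        for dx in range(k):
            smoothed += padded[dy:dy + h, dx:dx + w]
    smoothed /= k * k
    return smoothed[::scale, ::scale]  # frame-size reduction

noisy = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(np.float32)
small = preprocess(noisy)
print(small.shape)  # (4, 4)
```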

      2. Background Modelling

        Background modelling is at the heart of any background subtraction algorithm. Much research has been devoted to developing a background model that is robust against environmental changes in the background, yet sensitive enough to identify all moving objects of interest. We classify background modelling techniques into two broad categories: non-recursive and recursive [3].

      3. Foreground Detection

        Foreground detection compares the input video frame with the background model and identifies candidate foreground pixels from the input frame [3].

      4. Data Validation

        We define data validation as the process of improving the candidate foreground mask based on information obtained from outside the background model. The background models above have three main limitations: first, they ignore any correlation between neighbouring pixels; second, the rate of adaptation may not match the moving speed of the foreground objects; and third, non-stationary pixels from moving leaves, or shadows cast by moving objects, are easily mistaken for true foreground objects [3].
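The background-modelling and foreground-detection stages above can be sketched with a simple recursive model: an exponential moving average of the frames, with pixels flagged as foreground when they deviate strongly from it. This is a minimal sketch under assumed parameter values (learning rate and threshold), not the specific model used in the paper.

```python
import numpy as np

ALPHA = 0.05   # learning rate of the running-average background model (assumed)
THRESH = 30.0  # foreground threshold on absolute deviation (assumed)

def update_background(background, frame, alpha=ALPHA):
    """Recursive background model: exponential moving average of frames."""
    return (1.0 - alpha) * background + alpha * frame

def foreground_mask(background, frame, thresh=THRESH):
    """Pixels deviating strongly from the background are foreground (1)."""
    return (np.abs(frame - background) > thresh).astype(np.uint8)

# Toy sequence: a static scene with a bright "vehicle" entering at frame 2.
scene = np.full((6, 6), 50.0)
frames = [scene.copy() for _ in range(3)]
frames[2][2:4, 1:3] = 200.0  # moving object occupies a 2 x 2 patch

bg = frames[0].copy()
for f in frames[1:]:
    mask = foreground_mask(bg, f)
    bg = update_background(bg, f)
print(int(mask.sum()))  # 4 pixels flagged as moving object
```

A small alpha makes the model robust to gradual illumination change but slow to absorb parked vehicles; this trade-off is the adaptation-rate limitation noted above.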

        2. Inter-frame difference

        The fundamental logic for detecting moving objects is the difference between the current frame and a reference frame, called the background image; this method is known as the frame-difference method. The frame-difference test is

        |Frame(i) - Frame(i-1)| > Th

        Here the estimated background is simply the previous frame. The method evidently works only under particular conditions of object speed and frame rate, and it is very sensitive to the threshold Th.
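The inter-frame difference test above can be written in a few lines. This is a minimal sketch; the threshold value is an assumption, and the toy frames stand in for real captured greyscale images.

```python
import numpy as np

TH = 25  # difference threshold: too low admits noise, too high misses motion

def interframe_difference(prev_frame, curr_frame, th=TH):
    """Binarise motion per pixel: |Frame(i) - Frame(i-1)| > Th."""
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > th).astype(np.uint8)

prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1:3, 1:3] = 120  # an object moved into this 2 x 2 region
binary = interframe_difference(prev, curr)
print(int(binary.sum()))  # 4 moving pixels
```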

  3. Speed Detection

    The speed of the vehicle is calculated from its position in each frame, so the next step is to find the bounding boxes of the blobs and their centroids (centres of gravity). The distance between blob centroids in consecutive frames is the key to following a moving vehicle, and since the frame rate of the motion capture is known, the speed calculation becomes possible. This information must be recorded in an array of the same size as the captured camera image, because the distance travelled by the centroid is measured between pixels with specific coordinates on the image [1]. The vehicle speed is then determined from the distance the centroid travels between frames.
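The centroid-based speed calculation can be sketched as follows. The frame rate and the pixel-to-metre calibration factor are assumed illustrative values (in practice the latter comes from camera calibration against known reference points in the scene).

```python
FPS = 25.0               # frames per second of the capture (assumption)
METRES_PER_PIXEL = 0.05  # pixel-to-metre scale on the road plane (assumption)

def speed_kmh(centroid_prev, centroid_curr, fps=FPS, m_per_px=METRES_PER_PIXEL):
    """Speed in km/h from centroid positions in two consecutive frames."""
    dx = centroid_curr[0] - centroid_prev[0]
    dy = centroid_curr[1] - centroid_prev[1]
    pixels = (dx * dx + dy * dy) ** 0.5   # centroid displacement in pixels
    metres_per_second = pixels * m_per_px * fps
    return metres_per_second * 3.6        # m/s -> km/h

# Centroid moved 12 pixels horizontally between two consecutive frames:
v = speed_kmh((100, 40), (112, 40))
print(round(v, 1))  # 54.0
```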

    1. Speed-Violation Detection Using the Shrinking Algorithm

      The speed-estimation process is tied to tracking objects [5] in the binary difference image. Tracking and speed estimation consist of the following steps [1].

      Step 1: Use the binary image and segment it into groups of moving pixels using the aforementioned shrinking algorithm, creating a blob over each region.

      Step 2: Track each blob in consecutive frames and find its spatial bounding-box coordinates, i.e., the upper-left coordinate of the spatial bounding box ( ) at time instant t.

      Step 3: Trigger the timing when the object passes the first imaginary line and record the upper-left coordinate of its spatial bounding box, i.e., ( ).

      Step 4: Stop the timing when the object passes the second imaginary line and record the upper-left coordinate of its spatial bounding box, i.e., ( ).

      Step 5: Compute the speed V of the object from the recorded bounding-box positions and the time taken to travel between the two imaginary lines.

      Step 6: If the speed V is lower than the speed limit, then discard the object and go to Step 1.

      Step 7: Extract the license plate using colour information.

      Step 8: Transmit the extracted license-plate image to the authorized remote station.

      Step 9: Go to Step 1.

      Figure 2 Configuration for the speed measurement of a moving vehicle

  4. Result

    Figure 3 RGB image

    Figure 4 Background frame

    Figure 5 Greyscale image

    Figure 6 Binary Image

    Vehicle Number | True Speed (km/h) | Estimated Speed (km/h) | Error (km/h)
    1              | 60.60             | 60.72                  | 0.12
    2              | 72.80             | 73.58                  | 0.78
    3              | 64.60             | 65.76                  | 1.16
    4              | 73.30             | 74.10                  | 0.80
    5              | 63.20             | 63.64                  | 0.44

    Table 1. Vehicle speed detection using the shrinking algorithm

  5. Conclusion

    In this paper, we have presented a system in which speeding vehicles are detected by applying image-processing techniques to the sequence of input images captured by a fixed-position video camera. The image-processing techniques developed are computationally economical and reduce energy consumption, and vehicles moving at high speed are detected and tracked across consecutive images. The accuracy of the proposed system's speed measurement is comparable to the actual speed of the moving vehicles. The best results are obtained with the shrinking algorithm.

  6. References

  1. J. Kiran and K. S. Roy, "A Video Surveillance System for Speed Detection in Vehicles," International Journal of Engineering Trends and Technology (IJETT), Vol. 4, Issue 5, May 2013.

  2. Gholamali Rezai-Rad and Javad Mohamadi, "Vehicle Speed Estimation Based on the Image," 4th International Conference: Sciences of Electronics, Technologies of Information and Telecommunications (SETIT), March 25-29, 2007, Tunisia.

  3. Sen-Ching S. Cheung and Chandrika Kamath, "Robust techniques for background subtraction in urban traffic video."

  4. A. Mittal and D. Huttenlocher, "Scene modeling for wide area surveillance and image synthesis," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Vol. 2, pp. 160-167, Hilton Head Island, SC, June 2000.

  5. Arash Gholami Rad, Abbas Dehghani and Mohamed Rehan Karim, "Vehicle speed detection in video image sequences using CVS method," International Journal of the Physical Sciences, Vol. 5(17), pp. 2555-2563, 18 December 2010.

  6. D. J. Dailey and L. Li, "An Algorithm to Estimate Vehicle Speed Using Un-Calibrated Cameras," IEEE Intelligent Transportation Systems Conference (ITSC'99), 5-8 October 1999, Tokyo, Japan.

  7. Todd N. Schoepflin and Daniel J. Dailey, "Algorithms for calibrating roadside traffic cameras and estimating mean vehicle speed," IEEE Intelligent Vehicles Symposium, 14-17 June 2004, Parma, Italy.

  8. Mei Yu, Gangyi Jiang, and Bokang Yu, "An integrative method for video-based traffic parameter extraction in ITS," IEEE Asia-Pacific Conference, 4-6 December 2001.

  9. Mohsen Ebrahimi Moghaddam and Mansour Jamzad, "Motion blur identification in noisy images using fuzzy sets," IEEE International Symposium on Signal Processing and Information Technology, 2005.

  10. A. Mittal and D. Huttenlocher, "Scene modeling for wide area surveillance and image synthesis," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Vol. 2, pp. 160-167, Hilton Head Island, SC, June 2000.

  11. J. Kang, I. Cohen, and G. Medioni, "Continuous tracking within and across camera streams," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Vol. 1, pp. 267-272, Madison, WI, June 2003.
