Two Phase Vehicle Detection in Aerial Surveillance Using Pixelwise Classification

DOI : 10.17577/IJERTV2IS60602


S. Sowmya Devi, M.Tech Student, Department of CSE, Dr.K.V.S.C.E.W, Kurnool.

D. Satyanarayana, M.Tech., Asst. Professor, Department of CSE, Dr.K.V.S.C.E.W, Kurnool.

Abstract

In this automatic vehicle detection system for aerial surveillance, we depart from the existing frameworks for vehicle detection in aerial surveillance, which are either region based or sliding-window based, and design a pixelwise classification method. The novelty lies in the fact that, despite performing pixelwise classification, relations among neighboring pixels in a region are preserved in the feature extraction process. We consider features including vehicle colors and local features. For vehicle color extraction, we utilize a color transform to separate vehicle colors and nonvehicle colors effectively. For edge detection, we apply moment preserving to adjust the thresholds of the Canny edge detector automatically, which increases the adaptability and the accuracy of detection in various aerial images. Afterward, a dynamic Bayesian network (DBN) is constructed for the classification purpose. We convert regional local features into quantitative observations that can be referenced when performing pixelwise classification via the DBN. Experiments were conducted on a wide variety of aerial videos. The results demonstrate the flexibility and good generalization ability of the proposed method on a challenging data set with aerial surveillance images taken at different heights and under different camera angles.


Introduction

This technology has a variety of applications, including military, police, and traffic management. Compared with ground-plane surveillance systems, aerial surveillance is more suitable for monitoring a larger spatial area; aerial surveillance systems are therefore an excellent supplement to ground-plane systems. One of the main topics in aerial image analysis is scene registration and alignment. Another very important topic in intelligent aerial surveillance is vehicle detection and tracking. The challenges of vehicle detection in aerial surveillance include camera motions such as panning, tilting, and rotation. In addition, airborne platforms at different heights result in different sizes of target objects.

Several existing approaches illustrate these difficulties. One line of work uses a hierarchical model that describes different levels of detail of vehicle features. No specific vehicle model is assumed, which makes the method flexible, but such a system misses vehicles when the contrast is weak or when neighboring objects interfere. The method of [8] considered multiple cues and used a mixture of experts to merge them for vehicle detection in aerial images: color segmentation via the mean-shift algorithm, motion analysis via change detection, and a trainable sequential maximum a posteriori method for multiscale analysis and enforcement of contextual information. However, the motion analysis algorithm in that system cannot handle the aforementioned camera motions and complex background changes, and the information fusion step depends heavily on the color segmentation results. The method of [9] subtracts background colors from each frame and then refines vehicle candidate regions by enforcing size constraints of vehicles. However, it assumes many parameters, such as the largest and smallest vehicle sizes and the height and focus of the airborne camera; assuming these parameters as known priors might not be realistic in real applications. In [11], the authors proposed a moving-vehicle detection method based on cascade classifiers, which requires collecting a large number of positive and negative training samples and generating multiscale sliding windows at the detection stage. The main disadvantage of this method is that many rotated vehicles are missed. Such results are not surprising given the experience with face detection using cascade classifiers: if only frontal faces are trained, faces with poses are easily missed, yet if faces with poses are added as positive samples, the number of false alarms surges. Another vehicle detection algorithm [12] uses the symmetric property of car shapes, but this cue is prone to false detections such as symmetrical details of buildings or road markings, so a log-polar histogram shape descriptor is applied to verify the shape of the candidates. Unfortunately, the shape descriptor is obtained from a fixed vehicle model, making the algorithm inflexible. That algorithm also relies on the mean-shift clustering algorithm for image color segmentation. The major drawback is that a vehicle tends to be separated into many regions, since car roofs and windshields usually have different colors; moreover, nearby vehicles might be clustered into one region if they have similar colors. The high computational complexity of the mean-shift segmentation algorithm is another concern.

Fig. 1. Proposed system framework.

The framework, shown in Fig. 1, is divided into a training phase and a detection phase. In the training phase, we extract multiple features, including local edge and corner features as well as vehicle colors, to train a dynamic Bayesian network (DBN). In the detection phase, we first perform background color removal similar to the process proposed in [9]. Afterward, the same feature extraction procedure is performed as in the training phase.

The extracted features serve as the evidence to infer the unknown state of the trained DBN, which indicates whether a pixel belongs to a vehicle or not. We do not perform region-based classification, which would depend heavily on the results of color segmentation algorithms such as mean shift, and there is no need to generate multiscale sliding windows either. The distinguishing feature of the proposed framework is that the detection task is based on pixelwise classification, while the features are extracted from a neighborhood region of each pixel. The extracted features therefore comprise not only pixel-level information but also relations among neighboring pixels in a region. Such a design is more effective and efficient than region-based or multiscale sliding-window detection methods.

Proposed Method

  1. Background Color Removal

    Since nonvehicle regions cover most parts of the entire scene in aerial images, we construct the color histogram of each frame and remove the colors that appear most frequently in the scene. These removed pixels do not need to be considered in subsequent detection processes. Performing background color removal not only reduces false alarms but also speeds up the detection process.
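As a minimal sketch of this step (not the paper's exact procedure), the following code removes the dominant colors of a frame via a coarsely quantized joint color histogram; the bin count `bins` and the number of removed colors `top_k` are illustrative assumptions.

```python
import numpy as np

def remove_background_colors(frame, bins=16, top_k=3):
    """Mask out pixels whose (coarsely quantized) color is among the
    most frequent colors in the frame, assumed to be background.

    frame : H x W x 3 uint8 image.
    Returns a boolean mask that is True for pixels kept for detection.
    """
    # Quantize each channel into `bins` levels and build a joint histogram.
    q = (frame.astype(np.int64) * bins) // 256            # values in [0, bins)
    codes = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    hist = np.bincount(codes.ravel(), minlength=bins ** 3)

    # The top_k most frequent color codes are treated as background.
    background_codes = np.argsort(hist)[::-1][:top_k]
    return ~np.isin(codes, background_codes)
```

The returned mask restricts all subsequent feature extraction and classification to non-background pixels, which is where the speedup comes from.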

  2. Feature Extraction

    Feature extraction is performed in both the training phase and the detection phase. We consider local features and color features.

    1. Local Feature Analysis:

      Corners and edges usually occur at pixels carrying more information. We use the Harris corner detector to detect corners. To detect edges, we apply a moment-preserving thresholding method to the classical Canny edge detector to select thresholds adaptively according to different scenes. The Canny edge detector has two important thresholds, the lower threshold and the higher threshold. As the illumination in every aerial image differs, the desired thresholds vary, and adaptive thresholds are required. Thresholds selected automatically and dynamically by our method give better edge detection performance.
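A sketch of the local feature maps follows, with one substitution flagged plainly: the paper derives the Canny thresholds by moment-preserving thresholding, while this sketch uses Otsu's threshold as a readily available adaptive stand-in; the Harris parameters are likewise illustrative.

```python
import cv2
import numpy as np

def local_feature_maps(gray):
    """Corner and edge maps with scene-adaptive Canny thresholds.

    gray : H x W uint8 grayscale image.
    Returns (corner_mask, edge_mask) as boolean arrays.
    NOTE: the paper adapts the Canny thresholds with moment-preserving
    thresholding; Otsu's threshold is used here as a simple stand-in.
    """
    # Harris corner response; keep strong local responses as corner pixels.
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    corner_mask = response > 0.01 * response.max()

    # Derive an image-dependent high/low threshold pair for Canny.
    high, _ = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    low = 0.5 * high
    edge_mask = cv2.Canny(gray, low, high) > 0
    return corner_mask, edge_mask
```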

    2. Color Transform and Color Classification

    We apply the color transform to obtain the color components first and then use a support vector machine (SVM) to classify vehicle colors and nonvehicle colors.

    We use a color model designed to separate vehicle colors from nonvehicle colors effectively. This color model transforms the (R, G, B) color components into a (u, v) color domain. It has been shown that all the vehicle colors are concentrated in a much smaller area on the u-v plane than in other color spaces and are therefore easier to separate from nonvehicle colors. Under this color model, vehicle colors and nonvehicle colors have less overlap.

    Fig. 4. Neighborhood region for feature extraction.

    When performing support vector machine training and classification, we take a block of pixels as one sample. We do not perform vehicle color classification via the SVM for blocks that do not contain any local features; those blocks are taken as nonvehicle color areas.
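The sketch below illustrates this block-based vehicle-color classification under stated assumptions: the exact (R, G, B)-to-(u, v) transform is not reproduced in this text, so the chromaticity channels of CIE Luv stand in for it, and the block size, RBF kernel, and helper names are illustrative choices.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def uv_features(bgr, block=2):
    """Mean chromaticity per `block x block` block of a BGR image.

    Stand-in for the paper's (u, v) transform: the Luv chromaticity
    channels are used since the exact transform is not given here.
    Returns an array with one (u, v) sample per block.
    """
    luv = cv2.cvtColor(bgr, cv2.COLOR_BGR2Luv)
    h, w = luv.shape[:2]
    h, w = h - h % block, w - w % block
    uv = luv[:h, :w, 1:3].reshape(h // block, block, w // block, block, 2)
    return uv.mean(axis=(1, 3)).reshape(-1, 2)

def train_color_svm(frames, block_labels, block=2):
    """Fit an SVM separating vehicle-color from nonvehicle-color blocks.

    frames       : list of BGR training images.
    block_labels : matching per-block labels (1 = vehicle color, 0 = not),
                   assumed to have been prepared from ground truth.
    """
    X = np.vstack([uv_features(f, block) for f in frames])
    y = np.concatenate(block_labels)
    return SVC(kernel="rbf").fit(X, y)
```

In use, `train_color_svm(...).predict(uv_features(frame))` would label each block of a new frame; blocks with no corner or edge pixels would be skipped and marked nonvehicle directly, as described above.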

    The features are extracted from a neighborhood region of each pixel in our framework. Considering an N x N neighborhood A_p of pixel p, shown in Fig. 4, we extract five types of features, S, C, E, A, and Z, for the pixel. The first feature, S, denotes the percentage of pixels in A_p that are classified as vehicle colors by the support vector machine:

    S = N_vehicle-color / N^2

    where N_vehicle-color is the number of pixels in A_p that are classified as vehicle colors by the support vector machine. Features C and E are defined as

    C = N_corner / N^2

    E = N_edge / N^2

    where N_corner denotes the number of pixels in A_p that are detected as corners by the Harris corner detector, and N_edge denotes the number of pixels in A_p that are detected as edges by the enhanced Canny edge detector. The pixels that are classified as vehicle colors are labeled as connected vehicle-color regions, from which the remaining two features, A and Z, are obtained.
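Since averaging a binary mask over an N x N window equals the count divided by N^2, the features S, C, and E can be computed for every pixel at once with a box filter, as in this sketch (N = 7 is an illustrative choice; A and Z, which come from the connected vehicle-color regions, are not shown).

```python
import cv2
import numpy as np

def neighborhood_fractions(vehicle_color_mask, corner_mask, edge_mask, N=7):
    """Per-pixel features S, C, E: the fraction of pixels inside the
    N x N neighborhood A_p that are vehicle-colored, corners, and edges.

    Each input is a boolean H x W mask. Averaging a 0/1 mask over an
    N x N window is exactly count / N^2, so one box filter per mask
    yields the feature value at every pixel simultaneously.
    """
    def frac(mask):
        return cv2.blur(mask.astype(np.float32), (N, N))

    S = frac(vehicle_color_mask)
    C = frac(corner_mask)
    E = frac(edge_mask)
    return S, C, E
```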

  3. Pixelwise Classification

    We perform pixelwise classification for vehicle detection using a dynamic Bayesian network (DBN). The design of the DBN model is illustrated in Fig. 2.

    Fig. 2. Dynamic Bayesian network model for pixelwise classification.

    Node V_t indicates whether a pixel belongs to a vehicle at time slice t. Moreover, at each time slice t, the state V_t influences the observation nodes S_t, C_t, E_t, A_t, and Z_t. The observations are assumed to be independent of one another given the state. We use K-means to cluster each observation into three clusters, i.e., we use three discrete symbols for each observation node. In the training stage, we obtain the conditional probability tables of the DBN model via the expectation-maximization algorithm [18] by providing the ground-truth labeling of each pixel and its corresponding observed features from several training videos. In the detection phase, the Bayes rule is used to obtain the probability that a pixel belongs to a vehicle. The desired joint probability is the probability that a pixel is a vehicle pixel at time slice t given all the observations and the state of the previous time instance. Because all the observations are assumed to be conditionally independent given the state, this probability factorizes according to the naive Bayes rule:

    P(V_t | V_{t-1}, S_t, C_t, E_t, A_t, Z_t) ∝ P(V_t | V_{t-1}) P(S_t | V_t) P(C_t | V_t) P(E_t | V_t) P(A_t | V_t) P(Z_t | V_t)

    where each term P(O_t | V_t) is the probability of the individual observation O_t at time instance t given the state. The proposed vehicle detection framework can also use a static Bayesian network (BN) to classify a pixel as a vehicle or nonvehicle pixel; when performing vehicle detection with a BN, its structure is set to one time slice of the DBN model in Fig. 2.
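The following sketch shows the per-pixel inference implied by this factorization, assuming the transition table and the per-observation conditional probability tables have already been learned with EM; the names and table shapes are illustrative.

```python
import numpy as np

def vehicle_posterior(obs_symbols, prev_p, trans, obs_cpts):
    """Posterior P(V_t = 1) for one pixel under the factorized model
    P(V_t | V_{t-1}, O) ∝ P(V_t | V_{t-1}) * Π_i P(O_i | V_t).

    obs_symbols : one discretized symbol in {0, 1, 2} per observation
                  node (S, C, E, A, Z), from the K-means clustering.
    prev_p      : P(V_{t-1} = 1) from the previous frame.
    trans       : 2 x 2 table, trans[v_prev, v] = P(V_t = v | V_{t-1} = v_prev).
    obs_cpts    : list of 2 x 3 tables, obs_cpts[i][v, k] = P(O_i = k | V_t = v),
                  assumed learned with EM in the training phase.
    """
    p = np.zeros(2)
    for v in (0, 1):
        # Marginalize the previous state, then multiply in each observation.
        prior = (1.0 - prev_p) * trans[0, v] + prev_p * trans[1, v]
        likelihood = np.prod([cpt[v, k] for cpt, k in zip(obs_cpts, obs_symbols)])
        p[v] = prior * likelihood
    return p[1] / p.sum()   # the pixel is classified as a vehicle pixel if high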

  4. Post Processing

We use morphological operations to enhance the detection mask and perform connected-component labeling to obtain the vehicle objects. The size and aspect-ratio constraints are applied again after the morphological operations in the postprocessing stage to eliminate objects that cannot be vehicles.
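A minimal sketch of this postprocessing stage is shown below; the structuring element and the area and aspect-ratio bounds are illustrative placeholders rather than values from the paper.

```python
import cv2
import numpy as np

def postprocess(detection_mask, min_area=40, max_area=4000,
                min_aspect=0.25, max_aspect=4.0):
    """Clean the pixelwise detection mask and keep plausible vehicles.

    detection_mask : boolean H x W mask from the DBN classification.
    Returns a list of (x, y, w, h) bounding boxes.
    """
    # Close small holes, then remove speckle noise.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    mask = cv2.morphologyEx(detection_mask.astype(np.uint8),
                            cv2.MORPH_CLOSE, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Connected-component labeling, then size / aspect-ratio filtering.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    vehicles = []
    for i in range(1, n):                      # label 0 is the background
        x, y, w, h, area = stats[i]
        aspect = w / float(h)
        if min_area <= area <= max_area and min_aspect <= aspect <= max_aspect:
            vehicles.append((x, y, w, h))
    return vehicles
```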

Conclusion and Future Research

We have proposed an automatic vehicle detection system for aerial surveillance that does not assume any prior information about camera heights, vehicle sizes, or aspect ratios. We do not perform region-based classification, which would depend heavily on computationally intensive color segmentation algorithms such as mean shift, nor do we generate multiscale sliding windows, which are not suitable for detecting rotated vehicles. Instead, we have proposed a pixelwise classification method for vehicle detection using dynamic Bayesian networks. Despite performing pixelwise classification, relations among neighboring pixels in a region are preserved in the feature extraction process; the extracted features therefore comprise not only pixel-level information but also region-level information. Since vehicle colors do not change dramatically under different camera angles and heights, only a small number of positive and negative samples is needed to train the support vector machine for vehicle color classification. Moreover, the number of frames required to train the dynamic Bayesian network is very small. Overall, the entire framework does not require a large amount of training samples. We have also applied moment preserving to enhance the Canny edge detector, which increases the adaptability and the accuracy of detection in various aerial images. The experimental results demonstrate the flexibility and good generalization ability of the proposed method on a challenging data set with aerial surveillance images taken at different heights and under different camera angles. For future work, performing vehicle tracking on the detected vehicles could further stabilize the detection results. Automatic vehicle detection and tracking could serve as the foundation for event analysis in intelligent aerial surveillance systems.

REFERENCES

  1. R. Kumar, H. Sawhney, S. Samarasekera, S. Hsu, T. Hai, G. Yanlin, K. Hanna, A. Pope, R. Wildes, D. Hirvonen, M. Hansen, and P. Burt, "Aerial video surveillance and exploitation," Proc. IEEE, vol. 89, no. 10, pp. 1518-1539, Oct. 2001.

  2. I. Ernst, S. Sujew, K. U. Thiessenhusen, M. Hetscher, S. Rassmann, and M. Ruhe, "LUMOS: Airborne traffic monitoring system," in Proc. IEEE Intell. Transp. Syst., Oct. 2003, vol. 1, pp. 753-759.

  3. L. D. Chou, J. Y. Yang, Y. C. Hsieh, D. C. Chang, and C. F. Tung, "Intersection-based routing protocol for VANETs," Wirel. Pers. Commun., vol. 60, no. 1, pp. 105-124, Sep. 2011.

  4. S. Srinivasan, H. Latchman, J. Shea, T. Wong, and J. McNair, "Airborne traffic surveillance systems: Video surveillance of highway traffic," in Proc. ACM 2nd Int. Workshop Video Surveillance Sens. Netw., 2004, pp. 131-135.

  5. A. C. Shastry and R. A. Schowengerdt, "Airborne video registration and traffic-flow parameter estimation," IEEE Trans. Intell. Transp. Syst., vol. 6, no. 4, pp. 391-405, Dec. 2005.

  6. H. Cheng and J. Wus, "Adaptive region of interest estimation for aerial surveillance video," in Proc. IEEE Int. Conf. Image Process., 2005, vol. 3, pp. 860-863.

  7. S. Hinz and A. Baumgartner, "Vehicle detection in aerial images using generic features, grouping, and context," in Proc. DAGM-Symp., Sep. 2001, vol. 2191, Lecture Notes in Computer Science, pp. 45-52.

  8. H. Cheng and D. Butler, "Segmentation of aerial surveillance video using a mixture of experts," in Proc. IEEE Digit. Imaging Comput. Tech. Appl., 2005, p. 66.

  9. R. Lin, X. Cao, Y. Xu, C. Wu, and H. Qiao, "Airborne moving vehicle detection for urban traffic surveillance," in Proc. 11th Int. IEEE Conf. Intell. Transp. Syst., Oct. 2008, pp. 163-167.

  10. L. Hong, Y. Ruan, W. Li, D. Wicker, and J. Layne, "Energy-based video tracking using joint target density processing with an application to unmanned aerial vehicle surveillance," IET Comput. Vis., vol. 2, no. 1, pp. 1-12, 2008.

  11. R. Lin, X. Cao, Y. Xu, C. Wu, and H. Qiao, "Airborne moving vehicle detection for video surveillance of urban traffic," in Proc. IEEE Intell. Veh. Symp., 2009, pp. 203-208.

  12. J. Y. Choi and Y. K. Yang, "Vehicle detection from aerial images using local shape information," Adv. Image Video Technol., vol. 5414, Lecture Notes in Computer Science, pp. 227-236, Jan. 2009.

  13. C. G. Harris and M. J. Stephens, "A combined corner and edge detector," in Proc. 4th Alvey Vis. Conf., 1988, pp. 147-151.
