 Authors : Mr. P. Vijaya Kumar, Dr. A. V. Senthil Kumar
 Paper ID : IJERTV5IS070116
 Volume & Issue : Volume 05, Issue 07 (July 2016)
 DOI : http://dx.doi.org/10.17577/IJERTV5IS070116
 Published (First Online) : 07-07-2016
 ISSN (Online) : 2278-0181
 Publisher Name : IJERT
 License: This work is licensed under a Creative Commons Attribution 4.0 International License
An Effective Multiple Moving Objects Tracking using Bayesian Particle Filter-Based Median Enhanced Laplacian Thresholding
P. Vijaya Kumar¹, Head, Department of Computer Applications, Sri Jayendra Saraswathy Maha Vidyalaya College of Arts & Science, Coimbatore.
Dr. A. V. Senthil Kumar², Director of MCA, Hindusthan College of Arts and Science, Coimbatore.
Abstract – Automated video surveillance systems are highly required to facilitate a wide range of applications in computer vision. These applications range from human activity understanding, object tracking and traffic monitoring to the conservation of endangered species, and so on. Though efficient object tracking has been performed using Support Vector Machines and graph models, as the number of moving objects grows, the error increases and object tracking accuracy is compromised. The Bayes particle filter is one solution to these limitations, aiming to minimize the error by filtering out noisy data from a given training dataset. In this paper, Bayesian Particle Filter-based Median Enhanced Laplacian Thresholding (BPFMELT) is introduced. The Bayes Sequential Estimation is constructed based on the posterior and prior functions. To obtain a comprehensive representation, the Bayes Estimation algorithm discriminatively trains on high quality video files, aiming at reducing the mean square tracking error. The Color Histogram-based Particle Filter measures the color histogram value using the likelihood function. The Histogram-based Particle Filter algorithm in the BPFMELT framework improves the object detection accuracy. Experiments have been carried out on video images obtained from the Internet Archive, a 501(c)(3) nonprofit organization. An intensive and comparative study shows the efficiency of these enhancements and demonstrates better performance in terms of mean square error, peak signal-to-noise ratio, object tracking accuracy, object tracking execution time on high quality video files, true positive rate, and false positive rate. Experimental analysis shows that the BPFMELT framework is able to improve the Peak Signal-to-Noise Ratio by 7.79% and minimize the Mean Square Error by 29.89% when compared to the state-of-the-art works.
Keywords – Support Vector Machine, Graph Models, Bayes Particle, Median Enhanced Laplacian Thresholding, Bayes Sequential Estimation, Color Histogram.
I. INTRODUCTION
Video tracking is the process of locating a moving object or multiple objects over time using a camera. The objective of video tracking is to associate target objects in consecutive video frames. The association can be especially difficult when the objects are moving fast relative to the frame rate. Many research works have been conducted on multiple moving object tracking. Object tracking using Support Vector Machines (SVM) [1] was carried out for multiple objects, aiming to improve the robustness and accuracy of object tracking.
Automatic Estimation of Multiple Motion (AEMM) fields [2] was performed using a motion correspondence algorithm, aiming at improving the automatic identification of trajectories. A parallel histogram-based particle filter [3] was applied to improve the robustness of object tracking.
Detection and localization of object tracking using SVM was designed in [4] to improve localization accuracy. In [5], sequential Monte Carlo-based target tracking was performed with the objective of minimizing the localization error. A study of target localization was conducted in [6] to improve detection accuracy. Though accuracy was improved, the mean square error incurred during object tracking remained unaddressed. This mean square error is reduced by applying the Bayes Sequential Estimation in the BPFMELT framework.
With the objective of reducing the distortion rate, in [7] the mean absolute difference was identified, which also improved the accuracy rate. Multiple Kernel Nearest Neighbor was applied in [8], aiming at improving the relative importance for object localization. Posture Mapping and Retrieval (PMR) [9] was applied in order to reduce the effect of insufficient postures using K-Posture Selection and Indexing. With the aim of removing the noise present in color video, fuzzy rules were applied [10], resulting in a minimized Mean Absolute Error (MAE) and an improved Peak Signal-to-Noise Ratio (PSNR). Object tracking in a Disruption Tolerant Network was applied in [11] to improve the tracking efficiency.
In [12], a Dynamic Graph-based Tracker was applied for occluded object tracking, aiming at improving the tracking of targets. Another dynamic probabilistic method was presented in [13] using Time Warping (TW), aiming at minimizing the noise introduced during dynamic entrance. Though the error was reduced, it came at the cost of time. In the BPFMELT framework, the Histogram-based Particle Filter algorithm is applied to reduce the object tracking time.
Layered sensing is the modus operandi for several applications where different numbers of sensors are provided. Layered sensing of images refers to the imagery obtained from several aspects by different sensors. In [14], object tracking for layered images was performed using a joint segmentation and registration technique. In [15], detection of objects using a boosting algorithm and a constrained optimization problem was introduced to achieve object detection accuracy. In [16], a survey of object tracking in the presence of occlusions was performed using a task-driven approach.
A survey of visual object tracking was conducted in [17], both from theoretical and practical viewpoints. In [18], laser-based tracking of multiple objects was performed using an online learning-based method, resulting in improved tracking under complex situations. Though tracking accuracy in the presence of occlusions was achieved, the peak signal-to-noise ratio remained unaddressed. In the BPFMELT framework, this is solved using the Bayes posterior and prior functions.
An online system for multiple interacting objects was presented in [19] using a fusion-based approach, resulting in the improvement of object tracking accuracy and the time cost factor. In [20], an Adaptive Appearance Model was introduced for video tracking using application-dependent thresholds. An efficient algorithm [21] was developed for detecting a moving object using a background elimination technique. However, it is not efficient for combinations of higher-dimensional features with additional constraints. In addition, a novel probabilistic approach [22] was designed for detecting and analyzing stationary-object-driven visual events in a video surveillance system. But it is difficult to track all the objects accurately in crowded situations. Another method [23] was proposed for motion detection which detects moving objects precisely. However, the real object movement still cannot be separated from the background. In [24], the ELT method was introduced for multiple moving object segmentation in video surveillance. In [25], a novel framework was proposed for specific object tracking in a multiple moving objects environment. In [26], the MELT method was introduced for moving object segmentation in video sequences. In [27], a novel framework of object detection for video surveillance called the Improvised Enhanced Laplacian Threshold (IELT) technique was presented.
In order to overcome the above limitations, we propose a novel Bayesian Particle Filter-based Median Enhanced Laplacian Thresholding (BPFMELT) framework for multiple moving object tracking tasks. In the BPFMELT framework, we also employ the Histogram-based Particle Filter algorithm with the aim of improving the object tracking accuracy. We present an elegant extension of the Median Enhanced Laplacian Thresholding and Particle Filter model to a probabilistic Bayes Estimation model. We accomplish this by treating the problem in a generative probabilistic setting, using both the posterior and prior functions.
The remainder of this paper is organized as follows. Section II analyzes the Bayesian Particle Filter-based Median Enhanced Laplacian Thresholding framework, developing the Bayes Estimation model and the Color Histogram-based Particle Filter. Section III presents the experimental setting with the aid of parametric definitions and metric evaluation. Performance evaluations and discussions are presented in Section IV using tables and graphs. Finally, Section V concludes the paper.

II. BAYESIAN PARTICLE FILTER-BASED MEDIAN ENHANCED LAPLACIAN THRESHOLDING
In this section, we introduce the Bayesian Particle Filter-based Median Enhanced Laplacian Thresholding framework in detail. The BPFMELT framework is implemented under the Bayesian particle filter framework and employs the Color Histogram to train the corresponding high quality video files. We present how the Bayes Sequential Estimation estimates the tracking of multiple objects by evaluating the posterior and prior functions, and then introduce the Bayes Estimation algorithm to determine the tracking results. Besides, we describe the Color Histogram-based Particle Filter, aiming at improving the object tracking accuracy.
A. Design of Bayes Sequential Estimation
The objective of the proposed framework is to perform simultaneous object detection and tracking of multiple objects in a video sequence. The Bayes Sequential Estimation in the BPFMELT framework measures the posterior and prior functions to estimate the tracking of multiple objects in a video sequence with the objective of improving the peak signal-to-noise ratio (PSNR). Fig.1 shows the block diagram of the Bayes Sequential Estimation.
Fig 1. Block Diagram of Bayes Sequential Estimation (a high quality video file is passed through the posterior and prior functions of Bayes Sequential Estimation; the result is an improved PSNR)
As shown in the figure, the Bayes Sequential Estimation accepts a high quality video file from Median Enhanced Laplacian Thresholding. Two functions, the posterior function and the prior function, are then evaluated for the video frames in an iterative manner with the objective of improving the PSNR of multiple object tracking. Let us consider the video sequence z_t, where z_t denotes the pixel values of the image at discrete time instant t. The state space representation model in the proposed framework requires a vector model and is formulated as given below.

x_t = f_t(x_{t-1}, v_t)    (1)

From (1), the vector model for the video sequence is extracted through the evolution of the state x_t, where f_t denotes the state transition function and v_t the process noise. With the state space representation model, the link between the state space and the current measurement is formulated as given below.

z_t = h_t(x_t, n_t)    (2)

From (2), x_t represents the state space measurement whereas z_t represents the current measurement, with h_t the measurement function and n_t the measurement noise. With the improved video quality image given as input, the goal of the Bayes Sequential Estimation model is to formulate the probability function for multiple moving object tracking. It is formulated as given below.

p(x_t | z_{1:t})    (3)

From (3), the probability function is measured according to the state x_t at time t for the video frames z_1 to z_t respectively. The assumption made in Bayes Sequential Estimation in the proposed framework BPFMELT is that the initial posterior function of the state space, p(x_0 | z_0), is known. Then the posterior function p(x_t | z_{1:t}) is obtained in an iterative manner.

Let us assume that the posterior function p(x_{t-1} | z_{1:t-1}) for multiple object tracking is given; then the forecasting value (prior function) for multiple object tracking is formulated as given below.

p(x_t | z_{1:t-1}) = ∫ p(x_t | x_{t-1}) p(x_{t-1} | z_{1:t-1}) dx_{t-1}    (4)

From (4), x_t is the state of the video frame at time t and z_{1:t-1} is the observation sequence of video frames from time interval 1 to t-1. Followed by this, the posterior function using Bayes Sequential Estimation for multiple object tracking is given as below.

p(x_t | z_{1:t}) ∝ p(z_t | x_t) p(x_t | z_{1:t-1})    (5)

From (5), z_t is the observation of the video frame at time t and p(z_t | x_t) is the likelihood of that observation given the state. Figure 2 shows the algorithmic description of the Bayes Estimation model.
As shown in Fig. 2, with the objective of minimizing the mean square tracking error using the forecasting value, the Bayes posterior and prior functions are applied for each video sequence. To start with, the high quality video file from Median Enhanced Laplacian Thresholding is extracted. From the extracted video file, the Bayes estimation of the posterior and prior functions is applied for multiple object tracking. With the application of the Bayes Estimation algorithm, the mean square tracking error is reduced in a significant manner.
Input: video sequence
Output: Reduced mean square error for multiple object tracking
Step 1: Begin
Step 2: For each video sequence
Step 3: Extract vector model of each video sequence using (1)
Step 4: Evaluate state space representation using (2)
Step 5: Measure the probability function using (3)
Step 6: Evaluate the prior (forecasting) function using (4)
Step 7: Evaluate the posterior function using (5)
Step 8: End for
Step 9: End
Fig 2. Bayes Estimation algorithm
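To make the recursion in the Bayes Estimation algorithm concrete, the following is a minimal sketch of one prediction-update cycle of equations (4) and (5) over a discretized state space. The three-state transition matrix and the likelihood values are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def bayes_sequential_step(posterior_prev, transition, likelihood):
    """One prediction-update cycle of recursive Bayes estimation.

    posterior_prev : p(x_{t-1} | z_{1:t-1}) over a discretized state grid
    transition     : matrix T[i, j] = p(x_t = i | x_{t-1} = j)
    likelihood     : vector p(z_t | x_t = i) for the new measurement
    """
    # Prediction, eq (4) (Chapman-Kolmogorov): p(x_t | z_{1:t-1})
    prior = transition @ posterior_prev
    # Update, eq (5) (Bayes rule): p(x_t | z_{1:t}) ∝ p(z_t | x_t) p(x_t | z_{1:t-1})
    posterior = likelihood * prior
    return posterior / posterior.sum()

# Toy 3-state example with an illustrative random-walk transition model
T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
post = np.array([1/3, 1/3, 1/3])  # uninformative initial posterior
post = bayes_sequential_step(post, T, np.array([0.9, 0.05, 0.05]))
```

A measurement that strongly favors state 0 pulls the posterior mass onto that state; iterating the step over a frame sequence yields the recursive estimate the algorithm describes.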
B. Design of Color Histogrambased Particle Filter
After applying the Bayes Sequential Estimation model, we employ it in the Color Histogram-based Particle Filter, aiming at improving the object tracking accuracy. Fig.3 shows the block diagram of the Color Histogram-based Particle Filter.
Fig 3. Block diagram of Color Histogram-based Particle Filter (evaluate the likelihood function, then the particle prior and posterior functions)
From the above figure, the Color Histogram-based Particle Filter involves three steps. The first step evaluates the likelihood function of the color histogram of the objects to be tracked. The second and third steps evaluate the particle prior and posterior functions with the aim of improving the object detection accuracy. The BPFMELT technique extracts from the video file the color histogram h(x_t), evaluated inside the image region specified by the state space vector model x_t. The likelihood function of the color histogram is evaluated as given below.

p(z_t | x_t) ∝ exp(−λ D²(h*, h(x_t)))    (6)

From (6), the likelihood function is evaluated using the probability function, where D measures the distance between the reference color histogram h* of the objects to be tracked and the color histogram h(x_t) evaluated from the current video frame specified by the state vector x_t. The Color Histogram-based Particle Filter is a hypothesis tracker that approximates the filtered posterior distribution with the aid of a set of weighted particles, aiming at improving the object tracking accuracy. Here the samples taken from the video file are referred to as particles because of their discrete representation.

The Color Histogram-based Particle Filter starts with a weighted set of samples {x_t^(i), w_t^(i)}, i = 1, …, N, distributed according to p(x_{t-1} | z_{1:t-1}), from which new objects are tracked. This probability distribution depends on the prior state and the new measurements, i.e., x_{t-1} and z_t respectively. Each particle evolves according to the state space model and yields an approximation of the prior function, formulated as given below.

p(x_t | z_{1:t-1}) ≈ (1/N) Σ_{i=1}^{N} p(x_t | x_{t-1}^(i))    (7)

Once the prior function using the Color Histogram-based Particle Filter is obtained, the posterior function for each particle is measured based on the likelihood model. The posterior function for each particle at time t is formulated as given below.

w_t^(i) ∝ w_{t-1}^(i) p(z_t | x_t^(i))    (8)

p(x_t | z_{1:t}) ≈ Σ_{i=1}^{N} w_t^(i) δ(x_t − x_t^(i))    (9)

From (8) and (9), the posterior function for each particle is obtained from the color histogram likelihood and the weight of each particle. The likelihood model helps in improving the object tracking performance with the aid of the color histogram-based particle filter. This in turn improves the object detection accuracy. Fig.4 shows the algorithmic description of the Histogram-based Particle Filter model.
Fig.4 shows the Histogram-based Particle Filter algorithm, which includes three steps. For each video sequence, the first step evaluates the likelihood function of the color histogram. Following this, the second and third steps measure the particle prior and posterior functions respectively. This in turn increases the object tracking accuracy.
Input: video sequence z_{1:t}, color histogram h(x_t), particles {x_t^(i), w_t^(i)}
Output: Improved multiple object tracking accuracy
Step 1: Begin
Step 2: For each video sequence
Step 3: Evaluate likelihood function of the color histogram using (6)
Step 4: Measure particle prior function using (7)
Step 5: Measure particle posterior function using (8)
Step 6: End for
Step 7: End
Fig 4.Histogrambased Particle Filter algorithm
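As a concrete illustration of the Histogram-based Particle Filter algorithm above, the sketch below implements one propagate-reweight-resample cycle with a Bhattacharyya-distance likelihood as in (6). The random-walk motion model, the λ value, and the toy `toy_hist_at` histogram extractor are assumptions made for the sake of a runnable example; the paper does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)

def histogram_likelihood(ref_hist, cand_hist, lam=20.0):
    # Likelihood as in (6): p(z|x) ∝ exp(-lam * D^2), where D is the
    # Bhattacharyya distance between reference and candidate histograms.
    bc = np.sum(np.sqrt(ref_hist * cand_hist))  # Bhattacharyya coefficient
    return np.exp(-lam * (1.0 - bc))

def particle_filter_step(particles, weights, ref_hist, hist_at, motion_std=2.0):
    # Propagate particles through an assumed random-walk state model (prior, (7))
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Reweight each particle by its color-histogram likelihood (posterior, (8)-(9))
    weights = weights * np.array([histogram_likelihood(ref_hist, hist_at(p))
                                  for p in particles])
    weights = weights / weights.sum()
    # Resample to concentrate particles on high-likelihood regions
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Toy 1-D demo: the "object" sits at position 10; toy_hist_at is a hypothetical
# stand-in for extracting a normalized 2-bin color histogram at a particle state.
def toy_hist_at(p):
    a = np.clip(abs(p[0] - 10.0) / 20.0, 0.0, 1.0)
    return np.array([1.0 - a, a])

ref_hist = np.array([1.0, 0.0])               # histogram of the target object
particles = rng.uniform(0.0, 20.0, (100, 1))  # initial particle set
weights = np.full(100, 1.0 / 100)
for _ in range(5):
    particles, weights = particle_filter_step(particles, weights,
                                              ref_hist, toy_hist_at)
```

After a few cycles the particle cloud concentrates near the true position, and the posterior mean `particles.mean()` serves as the track estimate.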

III. EXPERIMENTAL SETTING
The Bayesian Particle Filter-based Median Enhanced Laplacian Thresholding (BPFMELT) framework for object tracking is experimentally evaluated in MATLAB. The video files used by the BPFMELT framework and their sizes are listed in Table I; they were obtained from the Internet Archive, a 501(c)(3) nonprofit organization. The Internet Archive includes texts, audio, moving images, and software as well as archived web pages.
The video file information listed in Table I includes the name of each video file, its resolution and its size, for evaluating the BPFMELT framework. The BPFMELT framework is evaluated in different aspects, namely Peak Signal-to-Noise Ratio, Mean Square Error, object tracking accuracy, object tracking time, true positive rate and false positive rate, with respect to different videos and video frames.
Table I. Video file information

Name         | Resolution | Size (KB)
Blossom.avi  | 216 * 192  | 349.5
Sample.avi   | 256 * 240  | 113.6
Vehicle.avi  | 510 * 420  | 323.7
Atheltic.avi | 854 * 480  | 905.3
Person.avi   | 320 * 240  | 936.2
Flower.avi   | 350 * 240  | 454.5
Rose.avi     | 458 * 213  | 635.2
Table I describes the video file information used in BPFMELT, comprising the video file name, the resolution and the size of the sample video files used in the BPFMELT framework. The proposed BPFMELT method is compared with three existing methods, namely Median Filter-based Enhanced Laplacian Thresholding (MFELT), Object Tracking with Support Vector Machines (OTSVM) [1], and Automatic Estimation of Multiple Motion (AEMM) [2].
The quality of the video files received at the receiving end measures the efficiency of the method. Therefore, during object tracking, the quality of the video files at the receiving end should not be deteriorated by the presence of noise. The Mean Square Error (MSE) in the BPFMELT framework is used to measure the video quality obtained during multiple object tracking. The MSE represents the error between the tracked frame and the original video frame; the lower the MSE value, the more efficient the method.

MSE = (1/N) Σ_{i=1}^{N} (A_i − E_i)²    (10)

From (10), the mean square error is the average squared difference between the actual frame A_i and the estimated (tracked) frame E_i over N frames. The Peak Signal-to-Noise Ratio measures the ratio between the reference video frame and the distorted video frame being tracked in a video file, given in decibels. The higher the PSNR, the closer the distorted video frame is to the original. As a result, a higher PSNR value correlates with a higher quality image. It is mathematically formulated as given below.

PSNR = 10 log10 (R² / MSE)    (11)

From (11), the peak signal-to-noise ratio is evaluated using the unsigned integer data type R (with maximum value 255) with respect to the mean square error. The object tracking accuracy is the ratio of the objects correctly tracked using the different methods to the total number of frames per second; it is formulated in (12). The object tracking time is the time taken to track each frame with respect to the number of frames taken for the experiment, and is mathematically formulated as given below.

OTT = Number of frames × Time to track one frame    (13)

From (13), the object tracking time OTT is measured in terms of milliseconds (ms). The true positive rate is defined as the ratio of the number of correctly detected frames from the video sequence to the total number of frames in the video sequence. The true positive rate is formulated as given below.

TPR = (Number of correctly detected frames / Total number of frames) × 100    (14)

The higher the true positive rate, the more efficient the method; it is measured in terms of percentage (%). The false positive rate is defined as the ratio of the number of incorrectly detected frames from the video sequence to the total number of frames in the video sequence. The false positive rate is formulated as given below.

FPR = (Number of incorrectly detected frames / Total number of frames) × 100    (15)

The lower the false positive rate, the more efficient the method; it is measured in terms of percentage (%).
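The metrics above can be computed directly. The sketch below evaluates equations (10), (11), (14) and (15); the synthetic frame contents and detection counts are illustrative, not the paper's data.

```python
import numpy as np

def mse(original, tracked):
    # Mean square error, eq (10): average squared difference between frames
    return np.mean((original.astype(float) - tracked.astype(float)) ** 2)

def psnr(original, tracked, r=255.0):
    # Peak signal-to-noise ratio in dB, eq (11): 10 * log10(R^2 / MSE)
    return 10.0 * np.log10(r ** 2 / mse(original, tracked))

def true_positive_rate(correct, total):
    # Eq (14): correctly detected frames over total frames, in percent
    return correct / total * 100.0

def false_positive_rate(incorrect, total):
    # Eq (15): incorrectly detected frames over total frames, in percent
    return incorrect / total * 100.0

# Synthetic 8-bit frames differing by a constant offset of 5 gray levels
orig = np.full((240, 320), 100, dtype=np.uint8)
track = np.full((240, 320), 105, dtype=np.uint8)
print(mse(orig, track))             # 25.0
print(round(psnr(orig, track), 2))  # 34.15
```

With a constant offset of 5, the MSE is 5² = 25 and the PSNR is 10·log10(255²/25) ≈ 34.15 dB, matching the rule that smaller frame differences yield higher PSNR.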

IV. DISCUSSION
The Bayesian Particle Filter-based Median Enhanced Laplacian Thresholding (BPFMELT) framework is compared against the existing Median Filter-based Enhanced Laplacian Thresholding (MFELT), Object Tracking with Support Vector Machines (OTSVM) [1], and Automatic Estimation of Multiple Motion (AEMM) [2]. The experimental results are compared and analyzed through the tables and graphs given below.

A. Impact of MSE
In Table II, the mean square error obtained using the Bayes Estimation algorithm is compared with three other existing methods, namely MFELT, OTSVM [1] and AEMM [2].
Table II. Tabulation for mean square error
Mean Square Error (dB)
Size of video frame (KB) | BPFMELT | MFELT | OTSVM | AEMM
113.6 | 19.3 | 24.4 | 26.5 | 32.4
323.7 | 21.4 | 26.5 | 28.7 | 34.6
349.5 | 24.2 | 29.1 | 30.1 | 37.4
454.5 | 28.7 | 33.8 | 35.9 | 42.3
635.2 | 35.3 | 40.4 | 42.6 | 48.5
905.3 | 41.2 | 46.3 | 48.5 | 54.6
936.2 | 49.3 | 54.4 | 56.5 | 62.6
OTA = (Number of objects correctly tracked / Total number of frames per second) × 100    (12)

From (12), OTA represents the object tracking accuracy, whereas the numerator refers to the objects being correctly tracked. The higher the object tracking accuracy, the more efficient the method; it is measured in terms of percentage (%).
Table II shows that the Bayesian Particle Filter-based Median Enhanced Laplacian Thresholding (BPFMELT) framework provides a lower mean square error when compared to MFELT, OTSVM [1] and AEMM [2]. The mean square error is reduced with the application of the Bayes Estimation algorithm.
Fig 5. Measure of mean square error
Fig.5 shows the mean square error with the size of the video frame taken as the input for multiple moving object tracking. With the application of the Bayes Estimation algorithm, the Bayes estimation of the posterior and prior functions is evaluated in an efficient manner and applied for multiple moving object tracking, using the high quality video file extracted from Median Enhanced Laplacian Thresholding. This in turn helps in reducing the mean square error for multiple moving object tracking using BPFMELT by 17.92% compared to MFELT and 24.84% compared to OTSVM [1] respectively. Moreover, the BPFMELT framework, by applying Bayes Sequential Estimation that takes the state space as input, decreases the mean square error by 46.91% when compared to AEMM [2].

B. Impact of PSNR
The comparison of PSNR for multiple object tracking is presented in Table III with respect to varied sizes of video frames in the range of 100 KB to 1000 KB, collected at different time stamps from the Internet Archive. With an increase in the size of video frames, the PSNR for object tracking also increases.
PSNR (dB)
Size of video frame (KB) | BPFMELT | MFELT | OTSVM | AEMM
113.6 | 46.13 | 44.14 | 43.95 | 42.89
323.7 | 54.25 | 48.22 | 46.17 | 43.11
349.5 | 61.82 | 55.79 | 50.74 | 47.68
454.5 | 65.32 | 59.29 | 54.24 | 51.18
635.2 | 70.14 | 62.11 | 57.06 | 54.01
905.3 | 73.45 | 64.42 | 60.37 | 57.31
936.2 | 79.47 | 69.44 | 66.39 | 63.33
Table III. Tabulation for PSNR
To ascertain the performance of the PSNR for multiple moving object tracking, a comparison is made with three other existing methods: MFELT, OTSVM [1] and AEMM [2]. In Fig.6, the size of video frames is varied between 100 KB and 1000 KB. From the figure it is evident that the PSNR using the proposed BPFMELT framework is higher (i.e. better) when compared to the existing methods.
Fig 6. Measure of PSNR
The PSNR for multiple moving object tracking is improved by applying Bayes Sequential Estimation in the BPFMELT framework. With the application of Bayes Sequential Estimation, the posterior and prior functions are evaluated for each video frame in an iterative manner, providing results with respect to varying frame size and improving the PSNR by 10.11% compared to MFELT and 15.35% compared to OTSVM [1] respectively. Besides, by applying the state space representation model linking the state space and the current measurement, starting from an initial posterior function, the BPFMELT framework improves the PSNR for multiple moving object tracking by 19.62% compared to AEMM [2].

C. Impact of object tracking accuracy
The rate of object tracking accuracy for multiple moving object tracking using BPFMELT, MFELT, OTSVM and AEMM is elaborated in Table IV. We evaluate the framework with different numbers of frames per second, in the range of 10 frames to 70 frames, using MATLAB.
Table IV. Tabulation for object tracking accuracy
Object tracking accuracy (%)
No. of frames/sec | BPFMELT | MFELT | OTSVM | AEMM
10 | 71.35 | 62.45 | 52.13 | 41.43
20 | 74.85 | 65.83 | 57.78 | 51.73
30 | 77.38 | 68.68 | 60.63 | 54.58
40 | 72.16 | 63.14 | 55.09 | 49.04
50 | 74.39 | 65.37 | 57.32 | 51.27
60 | 78.25 | 69.23 | 59.18 | 53.13
70 | 83.45 | 74.43 | 66.38 | 60.35
In Table IV, we depict the rate of object tracking accuracy with different frames per second taken as input, ranging from 10 frames to 70 frames, for the purpose of the experiment. From the table, the object tracking accuracy using the proposed BPFMELT framework is higher when compared to the three other existing methods MFELT, OTSVM [1] and AEMM [2]. Besides, it can also be observed that with an increasing number of frames per second, the rate of object tracking accuracy generally increases using all the methods. But comparatively, it is higher using the BPFMELT framework.
Fig.7 shows the rate of object tracking accuracy, which is observed to be higher using the BPFMELT framework. The rate of object tracking accuracy is verified for different numbers of frames. By applying the Color Histogram-based Particle Filter, the object tracking accuracy is improved using the BPFMELT framework by 11.81% compared to MFELT. In addition, by evaluating the likelihood function of the color histogram in the BPFMELT framework, the object tracking accuracy is improved by 23.25% compared to OTSVM [1]. Furthermore, the BPFMELT framework approximates the filtered posterior distribution, which in turn improves the object tracking accuracy by 32.17% compared to AEMM [2].
Fig 7. Measure of object tracking accuracy

D. Impact of object tracking time
For example (sample calculation for 10 frames/sec):

Proposed BPFMELT: Object tracking time (ms) = 10 * 3.9 = 39
Existing MFELT: Object tracking time (ms) = 10 * 4.5 = 45
Existing OTSVM: Object tracking time (ms) = 10 * 5.8 = 58
Existing AEMM: Object tracking time (ms) = 10 * 7.0 = 70
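The sample calculation above follows directly from equation (13); the per-frame times (3.9, 4.5, 5.8 and 7.0 ms) are the ones used in the worked example.

```python
# Object tracking time, eq (13): OTT = number of frames * per-frame tracking time
per_frame_ms = {"BPFMELT": 3.9, "MFELT": 4.5, "OTSVM": 5.8, "AEMM": 7.0}
frames = 10
ott = {method: frames * t for method, t in per_frame_ms.items()}
for method, ms in ott.items():
    print(method, round(ms))
# prints: BPFMELT 39, MFELT 45, OTSVM 58, AEMM 70 (one per line)
```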
Table V Tabulation for object tracking time
Object tracking time (ms)
No. of frames/sec | BPFMELT | MFELT | OTSVM | AEMM
10 | 39 | 45 | 58 | 70
20 | 45 | 51 | 62 | 69
30 | 51 | 57 | 68 | 75
40 | 57 | 64 | 75 | 77
50 | 61 | 68 | 79 | 84
60 | 65 | 72 | 83 | 90
70 | 73 | 80 | 91 | 98
Fig 8. Measure of object tracking time
Table V and Fig.8 show the measure of object tracking time with respect to different frames per second. As shown in the figure, the object tracking time is reduced using the BPFMELT framework when compared to the three other existing methods MFELT, OTSVM [1] and AEMM [2]. This is because of the application of the Histogram-based Particle Filter algorithm. By applying the Histogram-based Particle Filter algorithm, with the aid of the probability function, the reference color histogram of the objects to be tracked and the color histogram for the current video frame are evaluated in an efficient manner. This in turn reduces the object tracking time by 12% compared to MFELT and 33.32% compared to OTSVM [1] respectively. Furthermore, with the aid of the set of weighted particles, and according to the state space model, multiple moving objects are tracked, reducing the object tracking time of the BPFMELT framework by 46.48% compared to AEMM [2].


E. Impact of true positive rate
The true positive rate for the BPFMELT framework, MFELT, OTSVM [1] and AEMM [2] versus increasing number of frames per second is shown in Table VI. When the number of frames per second is increased, the true positive rate also increases gradually using the BPFMELT framework.
True positive rate (%)
No. of frames/sec | BPFMELT | MFELT | OTSVM | AEMM
10 | 60.14 | 51.24 | 45.54 | 30.14
20 | 63.25 | 54.87 | 48.21 | 34.75
30 | 66.57 | 57.32 | 51.75 | 37.18
40 | 69.78 | 60.54 | 54.65 | 40.87
50 | 72.54 | 63.47 | 57.32 | 43.58
60 | 75.89 | 66.97 | 60.12 | 47.65
70 | 79.19 | 69.87 | 64.34 | 50.35
Table VI. Tabulation for true positive rate
Fig 9. Measure of true positive rate
Fig.9 shows the measure of the true positive rate. From the figure, the true positive rate using the proposed BPFMELT framework is higher when compared to the three other existing methods MFELT, OTSVM [1] and AEMM [2]. This is because of the application of the Histogram-based Particle Filter algorithm in the BPFMELT framework. The Histogram-based Particle Filter algorithm measures the particle prior and posterior functions effectively, thereby improving the true positive rate by 13.03% and 21.79% when compared to MFELT and OTSVM [1] respectively. Furthermore, applying Bayes Sequential Estimation in the BPFMELT framework also improves the true positive rate by 42% when compared to AEMM [2].

F. Impact of false positive rate
Table VII given below shows the false positive rate for the BPFMELT framework, MFELT, OTSVM [1] and AEMM [2] versus increasing number of frames per second. As shown in Table VII, the false positive rate using the BPFMELT framework remains consistently lower than that of the existing methods as the number of frames per second is varied.
Table VII. Tabulation for false positive rate
False positive rate (%)
No. of frames/sec | BPFMELT | MFELT | OTSVM | AEMM
10 | 19.35 | 25.38 | 30.41 | 40.57
20 | 35.45 | 40.48 | 45.51 | 50.64
30 | 37.55 | 43.58 | 48.61 | 56.47
40 | 24.35 | 29.35 | 35.31 | 43.98
50 | 45.35 | 51.38 | 56.41 | 63.14
60 | 38.33 | 44.32 | 48.39 | 51.57
70 | 59.35 | 64.37 | 68.4 | 74.58
Fig 10. Measure of false positive rate
From Fig.10, it is evident that the false positive rate using the proposed BPFMELT framework is lower when compared to the three other existing methods MFELT, OTSVM [1] and AEMM [2]. This is because of the application of the Histogram-based Particle Filter algorithm in the BPFMELT framework. The Histogram-based Particle Filter algorithm evaluates the particle prior and posterior functions effectively, which in turn reduces the false positive rate by 17.04% when compared to MFELT. In addition, the BPFMELT framework reduces the false positive rate by 32.27% and 46.58% when compared to OTSVM [1] and AEMM [2] respectively.

V. CONCLUSION

A Bayesian Particle Filter-based Median Enhanced Laplacian Thresholding (BPFMELT) framework has been designed with the scope of minimizing the error by filtering out noisy data. The objective of providing such a design is to ensure efficient multiple moving object tracking and to decrease the object tracking time for various video frames. A Bayes Sequential Estimation model is designed as a measure for identifying the posterior and prior functions, reducing the error (i.e. MSE) and improving the PSNR while tracking multiple moving objects. A Bayes Estimation algorithm is also proposed to evaluate the state space representation model for each video frame. The proposed Color Histogram-based Particle Filter improves the object tracking accuracy for different high quality video files obtained from MFELT. In addition, the evaluation of the likelihood function using the probability function helps in improving the object tracking accuracy. Finally, by applying the Histogram-based Particle Filter algorithm, the object tracking time is significantly reduced. Experimental evaluation is conducted with video files extracted from the Internet Archive, a 501(c)(3) nonprofit organization, and the performance is measured in terms of MSE, PSNR, object tracking accuracy, object tracking time, true positive rate, and false positive rate. Performance results reveal that the proposed BPFMELT framework provides a higher level of object tracking accuracy and also reduces the execution time. The proposed BPFMELT framework provides a 22.41% higher rate of object tracking accuracy and minimizes the object tracking time by 46.48% when compared to state-of-the-art works.
REFERENCES

[1] Shunli Zhang, Xin Yu, Yao Sui, Sicong Zhao and Li Zhang, Object Tracking with Multi-View Support Vector Machines, IEEE Transactions on Multimedia, Vol. 17, No. 3, March 2015.

[2] Manya V. Afonso, Jacinto C. Nascimento and Jorge S. Marques, Automatic Estimation of Multiple Motion Fields from Video Sequences Using a Region Matching Based Approach, IEEE Transactions on Multimedia, Vol. 16, No. 1, pp. 1-14, January 2014.

[3] Henry Medeiros, Germán Holguín, Paul J. Shin and Johnny Park, A Parallel Histogram-based Particle Filter for Object Tracking on SIMD-based Smart Cameras, Elsevier, Vol. 114, No. 11, pp. 1264-1272, November 2010.

[4] Jie Yang, Yingying (Jennifer) Chen, Wade Trappe and Jerry Cheng, Detection and Localization of Multiple Spoofing Attackers in Wireless Networks, IEEE Transactions on Parallel and Distributed Systems, Vol. 24, No. 1, pp. 44-58, January 2013.

[5] Alexander W. Min and Kang G. Shin, Robust Tracking of Small-Scale Mobile Primary User in Cognitive Radio Networks, IEEE Transactions on Parallel and Distributed Systems, Vol. 24, No. 4, pp. 778-788, April 2013.

[6] R. Kwitt, N. Vasconcelos, S. Razzaque and S. Aylward, Localizing Target Structures in Ultrasound Video: A Phantom Study, Elsevier, Vol. 17, No. 7, pp. 712-722, October 2013.

[7] Hassan Mansour, Panos Nasiopoulos and Vikram Krishnamurthy, Rate and Distortion Modeling of CGS Coded Scalable Video Content, IEEE Transactions on Multimedia, Vol. 13, No. 2, pp. 165-180, April 2011.

[8] Brian McFee, Carolina Galleguillos and Gert Lanckriet, Contextual Object Localization with Multiple Kernel Nearest Neighbor, IEEE Transactions on Image Processing, Vol. 20, No. 2, pp. 570-585, February 2011.

[9] Chih-Hung Ling, Chia-Wen Lin, Chih-Wen Su, Yong-Sheng Chen and Hong-Yuan Mark Liao, Virtual Contour Guided Video Object Inpainting Using Posture Mapping and Retrieval, IEEE Transactions on Multimedia, Vol. 13, No. 2, pp. 292-302, April 2011.

[10] Tom Mélange, Mike Nachtegael and Etienne E. Kerre, Fuzzy Random Impulse Noise Removal from Color Image Sequences, IEEE Transactions on Image Processing, Vol. 20, No. 4, pp. 959-970, April 2011.

[11] Wenzhong Li, Yuefei Hu, Xiaoming Fu, Sanglu Lu and Daoxu Chen, Cooperative Positioning and Tracking in Disruption Tolerant Networks, IEEE Transactions on Parallel and Distributed Systems, Vol. 26, No. 2, pp. 382-391, February 2015.

[12] Zhaowei Cai, Longyin Wen, Zhen Lei, Nuno Vasconcelos and Stan Z. Li, Robust Deformable and Occluded Object Tracking with Dynamic Graph, IEEE Transactions on Image Processing, Vol. 23, No. 12, pp. 5497-5509, December 2014.

[13] Mihalis A. Nicolaou, Vladimir Pavlovic and Maja Pantic, Dynamic Probabilistic CCA for Analysis of Affective Behavior and Fusion of Continuous Annotations, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 36, No. 7, pp. 1299-1311, July 2014.

[14] Ping-Feng Chen, Hamid Krim and Olga L. Mendoza, Multiphase Joint Segmentation-Registration and Object Tracking for Layered Images, IEEE Transactions on Image Processing, Vol. 19, No. 7, pp. 1706-1719, July 2010.

[15] Mohammad Javad Saberian and Nuno Vasconcelos, Learning Optimal Embedded Cascades, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 34, No. 10, pp. 2005-2018, October 2012.

[16] Ali O. Ercan, Abbas El Gamal and Leonidas J. Guibas, Object Tracking in the Presence of Occlusions Using Multiple Cameras: A Sensor Network Approach, ACM Transactions on Sensor Networks (TOSN), Vol. 9, No. 2, pp. 1-36, March 2013.

[17] Xi Li, Weiming Hu, Chunhua Shen, Zhongfei Zhang, Anthony Dick and Anton Van Den Hengel, A Survey of Appearance Models in Visual Object Tracking, ACM Transactions on Intelligent Systems and Technology (TIST), Vol. 4, No. 4, pp. 1-48, September 2013.

[18] Xuan Song, Jinshi Cui, Huijing Zhao, Hongbin Zha and Ryosuke Shibasaki, Laser-based Tracking of Multiple Interacting Pedestrians via On-line Learning, Elsevier, Vol. 115, No. 4, pp. 92-105, September 2013.

[19] Xuan Song, Huijing Zhao, Jinshi Cui, Xiaowei Shao, Ryosuke Shibasaki and Hongbin Zha, An Online System for Multiple Interacting Targets Tracking: Fusion of Laser and Vision, Tracking and Learning, ACM Transactions on Intelligent Systems and Technology, Vol. 4, No. 1, pp. 1-20, January 2013.

[20] Samuele Salti, Andrea Cavallaro and Luigi Di Stefano, Adaptive Appearance Modeling for Video Tracking: Survey and Evaluation, IEEE Transactions on Image Processing, Vol. 21, No. 10, pp. 4334-4348, October 2012.

[21] Vibha L, Chetana Hegde, P Deepa Shenoy, Venugopal K R and L M Patnaik, Dynamic Object Detection, Tracking and Counting in Video Streams for Multimedia Mining, IAENG International Journal of Computer Science, Vol. 35, No. 3, pp. 1-10, 2008.

[22] Thi Thi Zin, Pyke Tin, Takashi Toriu and Hiromitsu Hama, A Novel Probabilistic Video Analysis for Stationary Object Detection in Video Surveillance Systems, IAENG International Journal of Computer Science, Vol. 39, No. 3, pp. 1-12, 2012.

[23] Nan Lu, Jihong Wang, Q. H. Wu and Li Yang, An Improved Motion Detection Method for Real-Time Surveillance, IAENG International Journal of Computer Science, Vol. 35, No. 1, pp. 1-10, 2008.

[24] P. Vijayakumar and A. V. Senthil Kumar, Moving Object Segmentation using Enhanced Laplacian Thresholding Method for Video Surveillance, IJCA, Vol. 100, No. 4, pp. 13-17, August 2014.

[25] P. Vijayakumar and A. V. Senthil Kumar, A Novel Framework for Specific Object Tracking in the Multiple Moving Objects Environment, IJAER, Vol. 10, No. 16, pp. 36540-36545, 2015.

[26] P. Vijayakumar and A. V. Senthil Kumar, Moving Object Segmentation Using Median Filter-based Enhanced Laplacian Thresholding, EJSR, Vol. 133, pp. 415-427.

[27] P. Vijayakumar and A. V. Senthil Kumar, Improvising Enhanced Laplacian Thresholding Technique for Efficient Moving Object Detection in Video Surveillance, IOSER, Vol. 06, No. 1, pp. 42-52.