Local Binary Pattern based Hybrid Texture Descriptors for the Classification of Smoke Images

DOI : 10.17577/IJERTCONV7IS13002


C. Emmy Prema

Department of Electronics and Communication Engg., Bethlahem Institute of Engineering,

Karungal, Tamilnadu, India, Pin: 629 157

S. Suresh

Department of Mechanical Engineering, University College of Engineering, Nagercoil, Tamilnadu, India, Pin: 629 004.

Abstract: Detecting smoke in video using texture features is a challenging image processing task. It is assumed that the camera monitoring the scene is stationary. Video smoke detection methods have several advantages over traditional smoke detection methods, including large coverage area, fast response and non-contact operation. In order to reduce the false alarm rate, we propose a novel method to detect smoke by analyzing its texture features. Local Binary Patterns (LBP) have powerful discriminative capability; however, traditional methods based on LBP histograms cannot capture the spatial structure of LBP codes. To extract the spatial structure of an LBP code map, we compute and encode the co-occurrence of LBP patterns using the Gray Level Co-occurrence Matrix (GLCM) and the Gray Level Run Length Matrix (GLRL). The resulting hybrid texture descriptors effectively describe smoke texture. Extensive experiments show that the proposed CRLBP-GLRL hybrid texture descriptor achieves competitive classification accuracy for smoke detection while remaining computationally efficient.

Keywords: Local Binary Pattern, Compound Local Binary Pattern, Completed Local Binary Pattern, Completed Robust Local Binary Pattern, Gray Level Co-occurrence Matrix, Gray Level Run Length Matrix, Hybrid texture descriptor

  1. INTRODUCTION

Numerous fires threaten human lives and property throughout the world every day, so a reliable fire detection technique is needed. Texture-based smoke classification plays an important role in Video-based Fire Detection (VFD). VFD techniques detect fire by recognizing either smoke or flame anywhere within the field of view of the camera. VFD also helps to reduce detection time compared to currently available sensors [1]. Smoke is the initial sign of a disastrous forest fire; hence VFD techniques rely on the recognition of smoke for early fire detection [2, 3]. The motivation for developing smoke detection methods is that smoke spreads quickly and appears in the field of view of the cameras much earlier than flames. In forests especially, the smoke produced by a fire is visible well before the flames themselves. In addition, the toxic gases in smoke, such as carbon monoxide, carbon dioxide and hydrogen sulfide, are very harmful to human beings and animals [4]. Hence the approach proposed in this paper focuses on the detection of smoke for early fire detection. Many researchers use the color, motion and texture properties of smoke for video-based detection, and some [5, 6] apply soft computing techniques to improve the detection rate.

Gubbi et al. [7] described a block-based approach in which statistical parameters such as the geometric mean, arithmetic mean, standard deviation, skewness, kurtosis and entropy are determined from discrete cosine transform and discrete wavelet transform coefficients; an SVM classifier is then trained on the extracted features to make a decision. Lee et al. [8] developed an approach for smoke detection using spatial and temporal analysis. Motion features are first used to segment candidate smoke regions from the input image; energy-based and color-based features are then analyzed in the spatial, temporal and spatio-temporal domains, and all the extracted features are fed to a trained SVM. Gebejes and Huertas [9] analyzed texture using second-order statistical measurements based on the grey level co-occurrence matrix, with contrast, homogeneity, dissimilarity, energy and entropy as features. Chen et al. [10] proposed a combination of block-based inter-frame differencing and local binary patterns from three orthogonal planes to analyze the dynamic characteristics of smoke. The method reduces false alarms by registering recent classifications of smoke blocks with the help of a smoke histogram image, and it uses a Support Vector Machine (SVM) to classify smoke blocks. Chunyu et al. [11] used texture analysis for smoke detection based on the Gray Level Co-occurrence Matrix (GLCM); smoke is distinguished from other non-smoke disturbances using parameters determined from the GLCM. Agrawal and Mishra [12] detect smoke by analyzing its texture with the Local Binary Pattern (LBP). In their technique the moving region is segmented as the candidate smoke region, the LBP histogram is computed, and it is given as the input vector to an AdaBoost classifier to separate smoke from non-smoke. Tung and Kim [13] proposed a four-stage smoke detection algorithm: the moving region is segmented using the approximate median method, the Fuzzy C-Means (FCM) clustering algorithm is used to cluster the candidate smoke region, spatial and temporal characteristics of the candidate region are extracted, and the candidate regions are finally classified as smoke or non-smoke by an SVM classifier. This technique requires considerable time to process each frame of the input videos. Meng-Yu et al. [14] presented an algorithm based on the discrete wavelet transform and correlation analysis, using the wavelet transform to separate the low and high frequency content of the image.

The high frequency information is then analyzed using correlation analysis. Tian et al. [15] proposed smoke detection in video using sparse representation, local smoothness and blending parameters, where the image is assumed to be a linear blend of a smoke component and a background image.

Favorskaya and Levtin [16] developed an early smoke detection method using spatio-temporal clustering; after segmenting the moving region, color and texture analyses are carried out, and fractal properties of smoke are finally used to confirm the candidate region. Jerome and Philippe [17] presented a technique to recognize smoke by analyzing the velocity of smoke plumes. In their analysis the energy of the velocity distribution of a smoke plume is higher than for most other landscape phenomena except clouds, while for clouds the standard deviation of the velocity distribution is lower than for smoke; minimum energy and standard deviation thresholds are therefore used as the main criteria. Feiniu Yuan [18] proposed an accumulative motion orientation model based on the integral image for fast estimation of smoke motion. Using the accumulated motion, this technique can discriminate artificial lights and non-smoke moving objects from smoke.

  2. TEXTURE ANALYSIS OF SMOKE USING LOCAL TEXTURE DESCRIPTORS

    1. Local Binary Pattern (LBP)

In recent years, the local binary pattern feature extraction method has made remarkable progress in texture analysis. Ojala et al. (2002) introduced the LBP texture operator, which encodes pixel-wise information in texture images.

The conventional LBP operator is defined by comparing the gray value of the central pixel g_c with its eight neighborhood pixels g_p. All the neighbor pixels whose values are higher than or equal to the value of the central pixel are given a value of one, while all the pixels whose values are lower than the central pixel are given a value of zero. The binary values corresponding to the eight neighbors are then read sequentially in the clockwise direction to form a binary number, and the decimal value of this 8-bit binary code is assigned to the central pixel as its LBP code; the process is illustrated in Figure 1. The mathematical formulation of LBP is given in Equation (1):

LBP_{P,R}(x_c, y_c) = Σ_{p=0}^{P-1} s(g_p - g_c) · 2^p,  with s(x) = 1 if x ≥ 0 and s(x) = 0 otherwise    (1)

Here g_c is the gray value of the central pixel (x_c, y_c), g_p is the gray value of the neighboring pixels on a circle of radius R, and P is the total number of neighbors.

Figure 1. Process of finding the decimal value of the center pixel g_c of a 3×3 sub-image

Figure 2. LBP for a smoke image: (a) smoke texture (b) LBP image (c) LBP histogram

Figure 2 shows the LBP image and the histogram of smoke. The LBP method is not only relatively simple and of low computational complexity, but also offers rotational invariance, gray scale invariance and other significant advantages. Therefore, LBP is widely used in image matching, detection of objects in remote locations, and biological and medical image analysis.
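The following Python sketch (illustrative only; the paper's implementation was written in MATLAB) shows the basic LBP computation of Equation (1) for R = 1 and P = 8. The function name and the clockwise neighbor ordering are assumptions made for the example.

import numpy as np

def lbp_code_map(gray):
    # Minimal LBP sketch for R = 1, P = 8 on a 2-D grayscale array (Equation 1).
    gray = gray.astype(np.int32)
    h, w = gray.shape
    codes = np.zeros((h, w), dtype=np.uint8)
    # clockwise offsets starting from the top-left neighbor (illustrative ordering)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gc = gray[y, x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if gray[y + dy, x + dx] >= gc:   # s(g_p - g_c) = 1 when g_p >= g_c
                    code |= 1 << bit             # weight 2^p
            codes[y, x] = code
    return codes

# usage: codes = lbp_code_map(smoke_gray); hist = np.bincount(codes.ravel(), minlength=256)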

    2. Compound Local Binary Pattern (CPLBP)

The LBP operator considers only the sign of the difference between two gray values, and hence it can fail to generate binary codes consistent with the texture property of a local region. To overcome this limitation, the Compound Local Binary Pattern (CPLBP) operator was proposed. The CPLBP operator assigns a 2P-bit code to a center pixel with P neighbors, using two bits per neighbor to encode both the sign and the magnitude of the difference between the center and the neighbor gray values. The first bit represents the sign of the difference between the center and the corresponding neighbor, and the second bit encodes the magnitude of the difference with respect to a threshold, namely the average magnitude M_avg of the differences in the local neighborhood.

The CPLBP operator sets the magnitude bit to 1 if the magnitude of the difference between the center and the corresponding neighbor is greater than the threshold M_avg; otherwise it is set to 0. The CPLBP descriptor is defined by the function given in Equation (2):

s(i_p, i_c) =
    00  if i_p - i_c < 0 and |i_p - i_c| ≤ M_avg
    01  if i_p - i_c < 0 and |i_p - i_c| > M_avg
    10  if i_p - i_c ≥ 0 and |i_p - i_c| ≤ M_avg
    11  otherwise    (2)

where i_c is the gray value of the center pixel, i_p is the gray value of a neighbor pixel p, and M_avg is the average magnitude of the difference between i_p and i_c in the local neighborhood. The CPLBP operator is illustrated in Figure 3; for the example shown there, the CPLBP code can be written as (1010101010111111).

Figure 3. Illustration of the generation of the CPLBP code for a 3×3 sub-image

To keep the descriptor tractable, each CPLBP binary pattern is split into two sub-CPLBP patterns, as shown in Figure 4 and in the sketch that follows. In other words, the 16-bit CPLBP pattern is split into two 8-bit patterns: the first sub-pattern is obtained by concatenating the bit pairs corresponding to the neighbors in the north, east, south and west directions, and the second sub-pattern is obtained by concatenating the bit pairs corresponding to the neighbors in the north-east, south-east, south-west and north-west directions. After splitting the 16-bit CPLBP binary pattern into two 8-bit patterns, an encoded image representation is obtained for each of the two sub-patterns. A histogram is then computed for each 8-bit sub-pattern image, and the two histograms are concatenated to form a single histogram called the CPLBP histogram.
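As an illustration of the two-bit encoding and the splitting step, the following Python sketch computes the two 8-bit sub-CPLBP codes of a single 3×3 patch. It is a sketch only: the neighbor ordering, the grouping of directions and the function name are assumptions, not the paper's exact implementation.

import numpy as np

def cplbp_subcodes(patch):
    # CPLBP sketch for one 3x3 patch: returns the two 8-bit sub-codes (Equation 2).
    patch = patch.astype(np.int32)
    center = patch[1, 1]
    pos = {'N': (0, 1), 'NE': (0, 2), 'E': (1, 2), 'SE': (2, 2),
           'S': (2, 1), 'SW': (2, 0), 'W': (1, 0), 'NW': (0, 0)}
    diffs = {k: patch[r, c] - center for k, (r, c) in pos.items()}
    m_avg = np.mean([abs(d) for d in diffs.values()])

    def two_bits(d):
        sign = 1 if d >= 0 else 0          # first bit: sign of the difference
        mag = 1 if abs(d) > m_avg else 0   # second bit: magnitude vs. M_avg
        return [sign, mag]

    def pack(keys):
        bits = []
        for k in keys:
            bits.extend(two_bits(diffs[k]))
        return int(''.join(str(b) for b in bits), 2)   # 8-bit sub-code

    sub1 = pack(['N', 'E', 'S', 'W'])        # sub-CPLBP1: N, E, S, W neighbors
    sub2 = pack(['NE', 'SE', 'SW', 'NW'])    # sub-CPLBP2: diagonal neighbors
    return sub1, sub2

# usage: s1, s2 = cplbp_subcodes(np.array([[65, 65, 90], [65, 60, 90], [70, 70, 90]]))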

Basic CPLBP binary code: 1010101010111111
Sub-CPLBP1 code: 10101111
Sub-CPLBP2 code: 10101011

Figure 4. Illustration of the generation of the sub-CPLBP patterns

3. Completed Local Binary Pattern (CLBP)

In some cases different structural patterns may have the same LBP code even though their local structures are not really similar, so it is necessary to enhance the discriminative capability of LBP. The Completed Local Binary Pattern (CLBP) descriptor was proposed to solve this problem. CLBP has two components, the CLBP sign (CLBP_S) and the CLBP magnitude (CLBP_M). CLBP_S is the same as the conventional LBP, while CLBP_M measures the local variation of the magnitudes and is calculated using Equation (3):

CLBP_M_{P,R}(x_c, y_c) = Σ_{p=0}^{P-1} t(m_p, c) · 2^p,  where t(m_p, c) = 1 if m_p ≥ c and 0 otherwise    (3)

Here m_p = |g_p - g_c| and c is the mean value of m_p over the whole image. CLBP is a better texture descriptor than LBP and resolves the confusion between many, though not all, patterns, and because it uses intensity differences it is less sensitive to noise than LBP. A short sketch of the CLBP_M computation is given below.
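A minimal Python sketch of the CLBP_M component of Equation (3) is given below, assuming R = 1, P = 8 and the global mean of the magnitudes as the threshold c; the function name is an assumption.

import numpy as np

def clbp_m_map(gray):
    # CLBP magnitude component sketch for R = 1, P = 8 (Equation 3).
    gray = gray.astype(np.int32)
    h, w = gray.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    # magnitudes m_p = |g_p - g_c| for every interior pixel and every neighbor p
    mags = np.stack([np.abs(gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx] - gray[1:h - 1, 1:w - 1])
                     for dy, dx in offsets])
    c = mags.mean()                                        # mean of m_p over the whole image
    codes = np.zeros((h - 2, w - 2), dtype=np.uint16)
    for p in range(8):
        codes |= (mags[p] >= c).astype(np.uint16) << p     # t(m_p, c) * 2^p
    return codes

# usage: hist_m = np.bincount(clbp_m_map(smoke_gray).ravel(), minlength=256)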

4. Completed Robust Local Binary Pattern (CRLBP)

To differentiate the patterns that LBP and CLBP confuse, Yang Zhao et al. [20] proposed a new and effective framework called the Completed Robust Local Binary Pattern (CRLBP). In CRLBP, the value of each centre pixel in a 3×3 local area is replaced by its average local gray level, which is more robust to illumination changes and noise than the centre gray value itself. To make CRLBP even more robust and stable, the Weighted Local Gray level (WLG) is used instead of the traditional gray value of the centre pixel. The CRLBP code of an image is computed using Equation (4):

CRLBP_{P,R}(x_c, y_c) = Σ_{p=0}^{P-1} s(m_p - c) · 2^p    (4)

where s(·) is the step function of Equation (1) and the magnitude m_p is defined in Equation (5):

m_p = |WLG_p - WLG_c|,  with WLG_p = (Σ_{i=1}^{8} g_pi + α·g_p) / (8 + α)  and  WLG_c = (Σ_{i=1}^{8} g_ci + α·g_c) / (8 + α)    (5)

Here WLG_p and WLG_c are the weighted local gray values of the neighboring pixel and of the centre pixel respectively, g_pi (i = 1, 2, ..., 8) denotes the gray values of the neighbors of g_p, g_ci (i = 1, 2, ..., 8) denotes the gray values of the neighbors of g_c, c is the mean value of m_p over the whole image, and α is a parameter set by the user. When α is set to 1 the WLG reduces to the average local gray level, while values of α other than 1 give a weighted local gray level. CRLBP thus measures the local variation of the WLG and is insensitive to noise and stable under varying illumination conditions, which makes it well suited to extracting smoke texture features. A minimal sketch of the WLG-based code computation is given below.
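The sketch below, again in Python and with assumed function names, shows one way to compute the weighted local gray level and the code of Equation (4) under the interpretation of Equation (5) given above; α = 1 corresponds to the average local gray level.

import numpy as np

def weighted_local_gray(gray, alpha=1.0):
    # WLG: (sum of the 8 neighbors + alpha * pixel) / (8 + alpha), for interior pixels.
    gray = gray.astype(np.float64)
    h, w = gray.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    neigh_sum = np.zeros((h - 2, w - 2))
    for dy, dx in offsets:
        neigh_sum += gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
    return (neigh_sum + alpha * gray[1:h - 1, 1:w - 1]) / (8.0 + alpha)

def crlbp_map(gray, alpha=1.0):
    # CRLBP sketch (Equation 4): threshold m_p = |WLG_p - WLG_c| against its global mean c.
    wlg = weighted_local_gray(gray, alpha)
    h, w = wlg.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    mags = np.stack([np.abs(wlg[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx] - wlg[1:h - 1, 1:w - 1])
                     for dy, dx in offsets])
    c = mags.mean()
    codes = np.zeros((h - 2, w - 2), dtype=np.uint16)
    for p in range(8):
        codes |= (mags[p] >= c).astype(np.uint16) << p
    return codes

# usage: crlbp_hist = np.bincount(crlbp_map(smoke_gray).ravel(), minlength=256)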

5. Gray Level Co-occurrence Matrix (GLCM)

The gray level co-occurrence matrix (GLCM) is one of the statistical methods for analyzing image texture. It considers not only the distribution of pixel intensities but also the relative positions of pixels in the image. The GLCM is a square matrix of dimension k×k, where k is the number of gray levels in the image. Each element P(i, j) of the matrix counts the number of times a pixel with value i is adjacent to a pixel with value j. Dividing the whole matrix by the total number of comparisons made yields the normalized GLCM, so that each entry P(i, j) can be interpreted as the probability that a pixel of value i is found adjacent to a pixel of value j.

      Figure 5 GLCM Matrix

Figure 5 shows an example GLCM. A GLCM records how frequently a pair of pixel values occurs in an image along a particular direction. The co-occurrence matrix method is based on the repeated occurrence of gray level configurations in the texture; such a configuration varies slowly with distance in a coarse texture and rapidly in a fine texture. The GLCM can be described with the parameters given in Equations (6)-(11), where p(i, j) denotes the normalized GLCM entry, μ_i, μ_j are the means and σ_i, σ_j the standard deviations of its row and column sums. A short sketch of the GLCM construction follows the equations.

Maximum probability measures the strongest response of the GLCM:

P_max = max_{i,j} p(i, j)    (6)

Correlation measures how correlated a pixel is to its neighbor over the entire image:

Correlation = Σ_{i,j} [(i - μ_i)(j - μ_j) p(i, j)] / (σ_i σ_j)    (7)

Contrast measures the intensity contrast between a pixel and its neighbor over the entire image:

Contrast = Σ_{i,j} (i - j)^2 p(i, j)    (8)

Energy measures uniformity and lies in the range [0, 1]:

Energy = Σ_{i,j} p(i, j)^2    (9)

Homogeneity measures the spatial closeness of the distribution of the GLCM elements to its diagonal:

Homogeneity = Σ_{i,j} p(i, j) / (1 + |i - j|)    (10)

Entropy measures the randomness of the GLCM elements:

Entropy = - Σ_{i,j} p(i, j) log2 p(i, j)    (11)
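The following Python sketch (an illustration, not the paper's MATLAB implementation) builds a normalized GLCM for the horizontal (0, +1) offset and evaluates a few of the features in Equations (6)-(11); the quantization to eight gray levels and the function name are assumptions.

import numpy as np

def glcm_features(img, levels=8):
    # Build a normalized GLCM for the horizontal offset and compute a few of its features.
    q = (img.astype(np.float64) / 256.0 * levels).astype(np.int64).clip(0, levels - 1)
    glcm = np.zeros((levels, levels), dtype=np.float64)
    left, right = q[:, :-1].ravel(), q[:, 1:].ravel()
    np.add.at(glcm, (left, right), 1.0)            # count horizontal co-occurrences
    p = glcm / glcm.sum()                          # normalized GLCM p(i, j)
    i, j = np.indices(p.shape)
    return {
        'max_probability': p.max(),                            # Eq. (6)
        'contrast': np.sum(((i - j) ** 2) * p),                # Eq. (8)
        'energy': np.sum(p ** 2),                              # Eq. (9)
        'homogeneity': np.sum(p / (1.0 + np.abs(i - j))),      # Eq. (10)
        'entropy': -np.sum(p[p > 0] * np.log2(p[p > 0])),      # Eq. (11)
    }

# usage: feats = glcm_features(smoke_gray)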

6. Gray Level Run Length Matrix (GLRL)

A set of texture features based on the GLRL matrix is described next. A gray level run is a set of consecutive, collinear pixels having the same gray level value, and the length of the run is the number of pixels in the run. Let r(i, j) be the (i, j) entry of the run length matrix; it specifies the number of times the image contains a run of gray level i and length j in the given direction, and the matrix can be computed for runs along any chosen direction. Let n_r be the total number of runs in the image. A wide variety of texture features can be derived from the run length matrix; the features used here are given in Equations (12)-(18).

Short Run Emphasis (SRE) measures the distribution of short runs. SRE is highly dependent on the occurrence of short runs and is expected to be large for fine textures:

SRE = (1/n_r) Σ_i Σ_j r(i, j) / j^2    (12)

Long Run Emphasis (LRE) measures the distribution of long runs. LRE is highly dependent on the occurrence of long runs and is expected to be large for coarse structural textures:

LRE = (1/n_r) Σ_i Σ_j j^2 · r(i, j)    (13)

Gray Level Non-Uniformity (GLNU) measures the similarity of gray level values throughout the image. GLNU is expected to be small if the gray level values are alike throughout the image:

GLNU = (1/n_r) Σ_i ( Σ_j r(i, j) )^2    (14)

Run Percentage (RPC) measures the homogeneity and the distribution of runs of an image in a specific direction. RPC is largest when every run has length 1 for all gray levels in the chosen direction:

RPC = n_r / (P × Q)    (15)

Run Length Non-Uniformity (RLNU) measures the similarity of run lengths throughout the image. RLNU is expected to be small if the run lengths are alike throughout the image:

RLNU = (1/n_r) Σ_j ( Σ_i r(i, j) )^2    (16)

Low Gray Level Run Emphasis (LGRE) measures the distribution of low gray level values and is expected to be large for an image dominated by low gray level values:

LGRE = (1/n_r) Σ_i Σ_j r(i, j) / i^2    (17)

High Gray Level Run Emphasis (HGRE) measures the distribution of high gray level values and is expected to be large for an image dominated by high gray level values:

HGRE = (1/n_r) Σ_i Σ_j i^2 · r(i, j)    (18)

If P×Q is the size of the input gray scale image and L is its maximum gray level, the resulting GLRL matrix is of size L×Q. The advantage of the GLRL approach has been demonstrated experimentally through the classification of texture data sets; comparisons with other methods show that run length matrices carry considerable discriminatory information. A minimal sketch of the run length matrix construction is given below.
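Below is a hedged Python sketch of the horizontal run length matrix and two of the features in Equations (12) and (13); the eight-level quantization and the function names are assumptions made for the example.

import numpy as np

def run_length_matrix(img, levels=8):
    # Horizontal gray level run length matrix r(i, j) for an image quantized to `levels` gray levels.
    q = (img.astype(np.float64) / 256.0 * levels).astype(np.int64).clip(0, levels - 1)
    h, w = q.shape
    rlm = np.zeros((levels, w), dtype=np.float64)   # a run can be at most the image width
    for row in q:
        run_val, run_len = row[0], 1
        for v in row[1:]:
            if v == run_val:
                run_len += 1
            else:
                rlm[run_val, run_len - 1] += 1      # column j-1 stores runs of length j
                run_val, run_len = v, 1
        rlm[run_val, run_len - 1] += 1              # close the last run of the row
    return rlm

def sre_lre(rlm):
    # Short and long run emphasis (Equations 12 and 13).
    n_r = rlm.sum()
    j = np.arange(1, rlm.shape[1] + 1, dtype=np.float64)   # run lengths 1..Q
    sre = (rlm / (j ** 2)).sum() / n_r
    lre = (rlm * (j ** 2)).sum() / n_r
    return sre, lre

# usage: rlm = run_length_matrix(smoke_gray); sre, lre = sre_lre(rlm)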

7. Hybrid Texture Description of Smoke Images

Hybrid texture description refers to describing the image texture using more than one texture descriptor: an image that has been described by one texture descriptor is further described by another texture descriptor to obtain a deeper feature representation.

Such hybrid texture description techniques have been applied to extract texture features in applications such as fingerprint identification, detection of breast cancer tissue, identification of butterfly species, classification of microscopic images of hardwood pieces and classification of tea leaves. In the proposed technique, hybrid texture descriptors are used to describe the local micro-patterns present in the smoke image, as sketched below.
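As an illustration of the hybrid idea (not the exact CRLBP-GLRL pipeline of the paper), the following Python sketch first encodes the image with LBP and then describes the LBP code map with GLCM statistics, assuming scikit-image version 0.19 or later (where the functions are named local_binary_pattern, graycomatrix and graycoprops); the descriptor function name is an assumption.

import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

def lbp_glcm_descriptor(gray, P=8, R=1):
    # Hybrid sketch: encode with LBP, then describe the LBP code map with GLCM statistics.
    codes = local_binary_pattern(gray, P, R, method='default').astype(np.uint8)  # LBP map, values 0..255
    glcm = graycomatrix(codes, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    feats = [graycoprops(glcm, prop).ravel()
             for prop in ('contrast', 'homogeneity', 'energy', 'correlation')]
    return np.concatenate(feats)   # feature vector for the smoke / non-smoke classifier

# usage: x = lbp_glcm_descriptor(smoke_gray)   # e.g. train an SVM or AdaBoost on such vectors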

  3. RESULTS AND DISCUSSIONS

    1. Implementation details:

The main objective of this work is to propose hybrid texture descriptors and to evaluate their effectiveness in classifying smoke and non-smoke frames of an input video. The Image Processing Toolbox of MATLAB R2015a is used to process the images and to compute the GLCM and GLRL of each image, while the texture descriptors LBP, CPLBP, CLBP and CRLBP are implemented in MATLAB R2015a code written from scratch. For each of the LBP variants, a histogram is calculated to describe the texture information. In all the LBP variants, the circle radius and neighbor count used to quantify the local structure of smoke images are set to R = 1, P = 8; R = 1.5, P = 12; and R = 2, P = 16, as illustrated in the sketch below.
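For the three (R, P) settings above, a minimal Python sketch of the multi-scale histogram feature could look as follows (the paper itself used MATLAB; the uniform-pattern binning and the function name here are assumptions).

import numpy as np
from skimage.feature import local_binary_pattern

SETTINGS = [(1, 8), (1.5, 12), (2, 16)]   # (R, P) pairs used in the experiments

def multiscale_lbp_histograms(gray):
    # Concatenate normalized LBP histograms computed at the three (R, P) settings.
    feats = []
    for R, P in SETTINGS:
        codes = local_binary_pattern(gray, P, R, method='uniform')   # values 0 .. P+1
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)

# usage: x = multiscale_lbp_histograms(smoke_gray)   # descriptor for one frame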

2. Database

      The input videos are taken from the website (http://signal.ee.bilkent.edu.tr/VisiFire/Demo/FireClips and

http://signal.ee.bilkent.edu.tr/VisionFire/Demo/FireClips) as well as from real-world recordings. In total, 90 videos are used for simulation; some of the videos contain only smoke, while the others contain both smoke and smoke-like disturbances. The videos with smoke and smoke-like disturbances are used to calculate the performance metrics. Some of the videos tested using the proposed hybrid texture description techniques are shown in Figure 6. Each input video is converted into frames, which are then analyzed based on the smoke texture characteristics.

Figure 6. Some of the input videos used for simulation

    3. Comparison of texture descriptors

The performance of the proposed texture descriptors is evaluated against conventional texture descriptors. The statistics (standard deviation, skewness and kurtosis) of the descriptor histograms are extracted for the different radii. The smoke detection accuracies of the individual texture descriptors LBP, CPLBP, CLBP, CRLBP, GLCM and GLRL are shown in Table 1. The detection rate is computed as the number of correctly detected frames minus the number of falsely detected frames, divided by the number of frames with fire (see the short example below). From Table 1 it is clear that the average detection accuracy of CRLBP is higher than that of the other individual texture descriptors, CLBP achieves the second best accuracy, and the basic LBP descriptor gives the lowest accuracy.
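As a small worked example of the detection-rate definition above (the counts are hypothetical, chosen only to illustrate the arithmetic):

def detection_rate(correct, false_alarms, fire_frames):
    # (correctly detected frames - falsely detected frames) / frames with fire, in percent
    return 100.0 * (correct - false_alarms) / fire_frames

print(detection_rate(96, 2, 100))   # hypothetical counts -> 94.0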


Figure 7. Results of the LBP variants for R = 1 and P = 8: (a) input image (b) gray image (c) LBP image (d) CPLBP image (e) CLBP image (f) CRLBP image

Figure 7 shows the texture description of a smoke image using the LBP variants for R = 1 and P = 8. It is observed that the LBP, CPLBP, CLBP and CRLBP images all carry significant information for discriminating smoke from non-smoke images. The micro-patterns of the smoke image are most easily observed in the CRLBP description; compared to the LBP, CPLBP and CLBP descriptions, CRLBP effectively captures the smoke texture with its random intensity variations. Applying the conventional texture descriptors GLCM and GLRL to the LBP, CLBP and CRLBP image descriptions forms the hybrid texture descriptors, which greatly helps to discriminate smoke from smoke-like disturbances.

Table 1. Detection accuracy of the individual texture descriptors

Texture Descriptors             Parameter Settings    Average Detection Accuracy (%)
LBP methods       LBP           R=1, P=8              84.8
                                R=1.5, P=12           86.3
                                R=2, P=16             89.0
                  CPLBP         R=1, P=8              88.6
                                R=1.5, P=12           87.4
                                R=2, P=16             91.3
                  CLBP          R=1, P=8              93.7
                                R=1.5, P=12           94.1
                                R=2, P=16             96.4
                  CRLBP         R=1, P=8              96.2
                                R=1.5, P=12           97.3
                                R=2, P=16             97.2
Non-LBP methods   GLCM          -                     94.6
                  GLRL          -                     94.9

Table 2. Detection accuracy of the hybrid texture descriptors

Hybrid Texture Descriptors    Parameter Settings    Average Detection Accuracy (%)
LBP-GLCM                      R=1, P=8              87.86
                              R=1.5, P=12           89.23
                              R=2, P=16             91.05
CPLBP-GLCM                    R=1, P=8              88.70
                              R=1.5, P=12           89.48
                              R=2, P=16             91.00
CLBP-GLCM                     R=1, P=8              96.70
                              R=1.5, P=12           97.10
                              R=2, P=16             97.00
CRLBP-GLCM                    R=1, P=8              98.72
                              R=1.5, P=12           97.80
                              R=2, P=16             98.91
CLBP-GLRL                     R=1, P=8              97.72
                              R=1.5, P=12           97.15
                              R=2, P=16             98.07
CRLBP-GLRL                    R=1, P=8              98.92
                              R=1.5, P=12           98.70
                              R=2, P=16             98.95

The average detection accuracies over the 10 test videos are shown in Tables 1 and 2. From the tables it is observed that the proposed CRLBP-GLRL texture descriptor outperforms the other descriptors with the highest detection rate. When the LBP and non-LBP methods are considered individually, their detection accuracy is relatively low compared to the hybrid descriptors. All the hybrid descriptors exhibit impressive detection rates, so it is clear that the proposed hybrid texture descriptors can effectively distinguish smoke-colored moving objects from real smoke.

  4. CONCLUSIONS

LBP is a combined structural and statistical method that describes the intensity statistics of the local structure of an image. Compared to non-LBP methods such as GLCM and GLRL, LBP methods are invariant to luminance and robust to noise. To further improve the texture discrimination capability of LBP, three texture descriptors, namely CPLBP, CLBP and CRLBP, are incorporated. Compared to the central gray value, the average local gray level used by CRLBP is more robust to noise and illumination variation. To improve smoke texture discrimination further, hybrid texture descriptors are proposed. The experimental results show that the CRLBP-GLRL descriptor is a powerful texture descriptor for smoke texture discrimination.

REFERENCES

1. Cetin EA, Dimitropoulos K, Gouverneur B, Grammalidis N, Gunay O, Habiboglu YH, Toreyin BU, Verstockt S (2013) Video fire detection - review. Digit Signal Proc 23:1827-1843

  2. Qureshi WS, Ekpanyapong M, Dailey MN, Rinsurongkawong S, Malenichev A, Krasotkina O (2015) QuickBlaze: early fire detection using a combined video processing approach. Fire Technol. doi:10.1007/s10694-015-0489-7

  3. Ye Wei, Zhao Jianhui, Wang Song, Wang Yong, Zhang Dengyi, Yuan Zhiyong (2015) Dynamic texture based smoke detection using surfacelet wavelet transform and HMT model. Fire Saf J 73:91-101. doi:10.1016/j.firesaf.2015.03.001

  4. Pagar PB, Shaikh AN (2013) Real time based fire and smoke detection without sensor by image processing. Int J Adv Electr Electron Eng 2:25-34

  5. Maruta H, Nakamura A, Kurokawa F (2010) A new approach for smoke detection with texture analysis and support vector machine. In: IEEE International Symposium on Industrial Electronics (ISIE), 4-7 July 2010, Bari: IEEE, p. 1550-1555. doi:10.1109/ISIE.2010.5636301

  6. Chunyu Y, Jun F, Jinjun W, Yongming Z (2010) Video fire smoke detection using motion and color features. Fire Technol 46(3):651-663. doi:10.1007/s10694-009-0110-z

  7. Gubbi J, Marusic S, Palaniswami M (2009) Smoke detection in video using wavelets and support vector machines. Fire Saf J 44(8):1110-1115. doi:10.1016/j.firesaf.2009.08.003

  8. Lee CY, Lin CT, Hong CT, Su MT (2012) Smoke detection using spatial and temporal analysis. Int J Innov Comput Inf Control 8(7):4749-4770

  9. Gebejes A, Huertas R (2013) Texture characterization based on grey-level co-occurrence matrix. In: Conference of Informatics and Management Sciences, Slovakia, March 25-29, p. 375-378

  10. Chen J, You Y, Peng Q (2013) Dynamic analysis for video based smoke detection. Int J Comput Sci 10(2):298-304

  11. Chunyu Y, Yongming Z, Jun F, Jinjun W (2009) Texture analysis of smoke for real-time fire detection. In: Computer Science and Engineering, WCSE '09, Second International Workshop, Qingdao: IEEE, vol. 2, p. 511-515. doi:10.1109/WCSE.2009.864

  12. Agrawal DA, Mishra P (2014) Smoke detection using local binary pattern. Int J Curr Eng Technol 4(6):4052-4056

  13. Tung TX, Kim JM (2011) An effective four-stage smoke-detection algorithm using video images for early fire-alarm systems. Fire Saf J 46:276-282. doi:10.1016/j.firesaf.2011.03.003

  14. Meng-Yu W, Ning H, Qin-Juan L (2012) A smoke detection algorithm based on discrete wavelet transform and correlation analysis. In: IEEE International Conference on Multimedia Information Networking and Security, 2-4 November 2012, Nanjing: IEEE, p. 281-284. doi:10.1109/MINES.2012.46

  15. Tian H, Li W, Wang L, Ogunbona P (2014) Smoke detection in video: an image separation approach. Int J Comput Vision 106(2):192-209. doi:10.1007/s11263-013-0656-6

  16. Favorskaya M, Levtin K (2013) Early smoke detection in outdoor space by spatio-temporal clustering using single video camera. In: Recent Advances in Knowledge-based Paradigms and Applications, Advances in Intelligent Systems and Computing, vol 234. Springer, Berlin, pp 43-56

  17. Jerome V, Philippe G (2002) An image processing technique for automatically detecting forest fire. Int J Therm Sci 41(12):1113-1120. doi:10.1016/S1290-0729(02)01397-2

  18. Yuan F (2008) A fast accumulative motion orientation model based on integral image for video smoke detection. Pattern Recogn Lett 29(7):925-932. doi:10.1016/j.patrec.2008.01.013

  19. Wen-Hui Li, Bo Fu, Lin-Chang Xiao, Ying Wang, Pei-Xun Liu (2013) A video smoke detection algorithm based on wavelet energy and optical flow eigen-values. Journal of Software 8(1):63-70. doi:10.4304/jsw.8.1.63-70

  20. Yang Zhao, Wei Jia, Rong-Xiang Hu, Hai Min (2013) Completed robust local binary pattern for texture classification. Neurocomputing 106:68-76. doi:10.1016/j.neucom.2012.10.017
