Algorithm for Detection of Fire Smoke in a Video Based on Wavelet Energy Slope Fitting

Yi Zhang* , Haifeng Wang* and Xin Fan**

Abstract

Existing methods for detecting fire smoke in a video are easily misled by cloud, fog and moving distractors such as people, vehicles and other non-smoke moving objects. Therefore, an algorithm for detecting fire smoke in a video based on wavelet energy slope fitting is proposed in this paper. Taking the change in the wavelet energy of the moving-target foreground as the basis, a time window of 40 consecutive frames is set and the wavelet energy slope of the suspected area is fitted over every 20 frames, thus establishing a wavelet-energy-based smoke judgment criterion. The experimental data show that the proposed algorithm not only detects smoke more quickly and more accurately, but also effectively avoids the distraction of cloud, fog and moving objects and prevents false alarms.

Keywords: Background Estimation Method, Least Square Method, Low-Frequency Wavelet Energy, Slope Fitting, Smoke Detection

1. Introduction

Thousands of fire accidents take place every day in the world, causing a large number of casualties, destroying vast areas of forest vegetation, and posing serious threats to the safety of human life, property and the natural environment [1]. A fire accident often comes suddenly and fiercely, and affects a wide area. If it is not discovered in time, once the fire spreads it is difficult to control within a short period of time. Therefore, real-time detection of fire accidents is very important [2]. Generally speaking, the flame is small at the early stage of a fire, but the smoke is very obvious [3]. Based on this, a smoke warning system based on visual images can detect smoke rapidly and provides an important basis for judging in a timely manner whether a fire accident has taken place [4,5].

A traditional smoke detection system mainly relies on its smoke sensor, which works only when smoke is close to it. Therefore, traditional detection systems are not applicable in outdoor spaces. Moreover, because the sensors are easily distracted by dust, air flow and human factors, these detection systems often have a high false alarm rate. With the rapid development of video processing technology, video-based algorithms for detection of fire smoke have a promising future for wide application [6-8].

Color, irregular texture and irregular shape are characteristic features of smoke [9,10]. At present, video-based algorithms for detection of fire smoke usually make a decision directly, or with a classifier, based on one or more characteristics of the smoke. Toreyin et al. [11] use the movement, flicker, color and blurred edges of smoke, extracting the variance between different ranges of the smoke edge for detection. This method needs to analyze the background of the composed scene, which limits the scope of application of the algorithm. Fujiwara and Terada [12] propose a method of extracting the smoke area in an image by use of the fractal coding concept; however, for low-contrast or blurred smoke images, the extracted fractal characteristics are not stable enough. Zhou et al. [13] propose an algorithm for detection of smoke in a video based on a study of both the static and dynamic characteristics of smoke; this method produces false positive detections when a non-smoke object is extremely similar to smoke in color and shape. Wang et al. [14] combine many characteristics of smoke to detect it in the early stage of a fire accident, but the method is sensitive to non-smoke moving distractors and is likely to produce false positive alarms. Wang et al. [15] propose a method that applies both a smoke diffusion model and a sway detection model to smoke detection. Yuan and his colleagues [16,17] use the total number of smoke pixels and a model for accumulating the directions of movement to detect smoke, and this method suffers from inaccurate estimation of the directions of smoke movement. Yuan [18] also proposes a method for extracting smoke characteristics with a double-mapping structure and detecting smoke with an AdaBoost classifier. Yu et al. [19] summarize the characteristics of smoke movement based on the optical flow algorithm and classify with a BP neural network, and this method can distinguish smoke from moving distractors. Zhou et al. [20] use an algorithm for detection of fire in a video based on movement characteristics; this algorithm reduces the number of suspected smoke areas and meets the requirement for real-time detection, but it is prone to false positive alarms when a moving distractor is similar to smoke in color.

The above methods for detection of smoke in a video have two main problems: (1) they spend a large amount of time on computation when analyzing and recognizing the various characteristics of smoke, which makes it difficult to apply them in a real-time fire smoke detection system; and (2) they are prone to produce false positive alarms for cloud, fog and moving distractors (such as people and vehicles). Therefore, the research objective of this paper is to achieve higher accuracy and a faster processing speed in the first detection of smoke than algorithms based on characteristic analysis and machine learning, while eliminating the distraction of moving objects such as pedestrians and cars. For this reason, we propose an algorithm for detection of fire smoke in a video based on the fitted wavelet energy slope.

This paper mainly contributes in three aspects: firstly, the algorithm detects the suspected moving smoke object and calculates its wavelet energy; secondly, the algorithm fits the slope of the energy change over 20 consecutive frames by use of the least square method; and lastly, it gives an alarm based on the relation between the two fitted slopes within 40 consecutive frames. The experimental results show that the proposed algorithm is superior to other algorithms in both the accuracy and the processing speed of giving the first smoke alarm, and it is also better at eliminating non-smoke distractors such as pedestrians and cars. This paper is organized as follows: Section 2 provides a detailed introduction to moving object detection based on background updates; Section 3 explains the principles and computing steps of the smoke detection algorithm; Section 4 provides the experimental results and analysis; and Section 5 summarizes the innovations and shortcomings of the algorithm.

2. Detection of Moving Object Area

To judge whether there is any fire smoke in the images of a video, the slowly moving suspected smoke area needs to be extracted. The main methods for detection of moving objects include the frame difference method, the optical flow method and the Gaussian mixture method. The optical flow method is insensitive to slowly moving smoke but very sensitive to changes of light, so it is unable to detect the moving smoke area accurately [21]. The frame difference method detects the moving object from the difference between several consecutive frames, but it has high requirements for the environment, is only suitable for certain specific scenes and is sensitive to environmental noise. Compared with other detection methods, the Gaussian mixture method requires complex and time-consuming computation, which is not suitable for a real-time detection system. Although these methods produce good results in the detection of rigid objects, they have difficulty extracting a complete area of non-rigid smoke, which diffuses as it moves and easily generates holes. The background estimation method is also a commonly used algorithm for detecting a moving area, and its computing speed is as fast as that of the frame difference method. In this paper, an improved dynamic background updating method is introduced for extracting the moving area.

2.1 Background Update

Background updating is the key step of the background estimation method. Due to the diffusivity of smoke, traditional methods easily produce holes in the extracted foreground [22]. Therefore, in smoke detection, the changes between adjacent frames are considered in the background update, and the image of the first frame is also used as a reference for the update. The background update is expressed as Eq. (1).

(1)
[TeX:] $$B_{n}=\left(1-M_{n-1}\right) \odot \alpha I_{n}+B_{n-1}+\left(M_{n-1}-1\right) \odot \alpha B_{n-1}$$

In the equation, ⊙ denotes element-wise (Hadamard) multiplication of matrices; n represents the current frame number and n–1 the previous frame; [TeX:] $$B_{n-1}$$ is the background image matrix of the previous frame, [TeX:] $$B_{n}$$ is the updated background image matrix to be estimated, [TeX:] $$I_{n}$$ is the current frame image matrix, [TeX:] $$M_{n-1}$$ is the binarized image matrix of the moving foreground in the previous frame, and α is the background-weighted updating coefficient, [TeX:] $$0<\alpha<1$$. In the actual computation, the initial value [TeX:] $$M_{0}$$ is the zero matrix and [TeX:] $$B_{0}$$ is the matrix of the first frame image of the video. In this paper, α is 0.05.
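The update in Eq. (1) can be written directly with element-wise operations. The following is a minimal MATLAB sketch (the function name updateBackground is an illustrative choice, not taken from the paper):

function B_cur = updateBackground(I_cur, B_prev, M_prev, alpha)
% Background update of Eq. (1): pixels marked as moving foreground
% (M_prev == 1) keep the previous background value, while static pixels
% are blended with the current frame using the weight alpha (0 < alpha < 1).
B_cur = (1 - M_prev) .* alpha .* I_cur + B_prev + (M_prev - 1) .* alpha .* B_prev;
end

For a static pixel (M_prev = 0) this reduces to B_cur = alpha*I_cur + (1 - alpha)*B_prev, and for a foreground pixel (M_prev = 1) the background is left unchanged.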

2.2 Binarization of Moving Foreground

After the background image is updated, the difference image [TeX:] $$M_{n}$$ can be obtained by computing the absolute difference between the current frame [TeX:] $$I_{n}$$ and the estimated background image matrix [TeX:] $$B_{n}$$, as shown in Eq. (2). The resulting foreground region not only includes the whole moving area of the object but also contains a lot of noise whose gray value is not 0. Therefore, a background difference method with a threshold value is adopted, and the binarization of the moving object area is shown in Eq. (3).

(2)
[TeX:] $$M_{n}=\left|I_{n}-B_{n}\right|$$

(3)
[TeX:] $$M(i, j)=\left\{\begin{array}{ll} 1 & M(i, j)>T \\ 0 & M(i, j) \leq T \end{array}\right.$$

In the equation, i and j are the row and column indices of the foreground image M, respectively. The threshold value T is 14 in this paper, and the extracted foreground of the smoke image is shown in Fig. 1.
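Eqs. (2) and (3) together form a thresholded background difference. A minimal MATLAB sketch follows (extractForeground is an illustrative name, not taken from the paper):

function M_cur = extractForeground(I_cur, B_cur, T)
% Eqs. (2)-(3): absolute background difference followed by thresholding.
% T is the binarization threshold (14 in this paper for 0-255 gray values);
% the result is a 0/1 mask of the suspected moving area.
D = abs(I_cur - B_cur);       % Eq. (2)
M_cur = double(D > T);        % Eq. (3)
end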

Fig. 1.
Moving object detection: (a) screenshot from the video and (b) the moving object.

3. Algorithm for Smoke Detection

In the early stage of a fire accident, smoke is produced by insufficient combustion. The smoke moves slowly and its volume gradually increases with time [23]. After the wavelet transform, the wavelet energy of the suspected smoke area also grows slowly. Based on this, this paper proposes an algorithm for detection of smoke in a video based on wavelet energy slope fitting. The algorithm avoids spending a large amount of computation time on extraction and analysis of smoke characteristics, which greatly increases its processing speed. It is designed for monitoring smoke in the early stage of a fire accident in the scene of a fixed camera. The basic procedure is as follows: firstly, the suspected moving object is extracted by the background estimation method to form a binarized mask image M of the moving-object foreground; secondly, the low-frequency energy Eb of a single-level wavelet decomposition is calculated from the product of the binarized mask image M and the current frame image; and lastly, the least square method is used to fit every 20 consecutive frames of the video as one cycle, and the slopes of two consecutive cycles are obtained through fitting. If the fitted slopes of both cycles are greater than the set threshold, a smoke alarm signal is given.
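The low-frequency energy of the masked frame can be obtained from a single-level 2-D wavelet decomposition. The following is a minimal MATLAB sketch (lowFreqWaveletEnergy is an illustrative name; the paper does not specify the mother wavelet, so 'haar' is used here only as an assumption, and the Wavelet Toolbox is required):

function Eb = lowFreqWaveletEnergy(I_cur, M_cur)
% Single-level 2-D wavelet decomposition of the masked frame and the
% energy of its low-frequency (approximation) sub-band.
masked = I_cur .* M_cur;      % keep only the suspected moving area
cA = dwt2(masked, 'haar');    % approximation (low-frequency) coefficients
Eb = sum(sum(cA .^ 2));       % low-frequency wavelet energy
end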

3.1 Criterion for Judgment of Smoke

The smoke in the early stage of a fire accident develops from nothing and moves up slowly. As the smoke increases, the low-frequency wavelet energy of the suspected area also slowly increases. In fire surveillance video, it is difficult to prevent people and vehicles from moving into the scene and creating a false appearance of moving smoke, which is a strong distractor that easily leads to a false positive alarm [24]. In order to eliminate the distraction of non-smoke objects such as people and cars, the algorithm fits the slope k of the change in wavelet energy over 20 sequentially consecutive frames of the video by the least square method. Forty consecutive frames are selected, one slope is obtained by fitting every 20 frames, and two slopes k1 and k2 are obtained in total. When the conditions [TeX:] $$\left|k_{1}\right|>2 \times 10^{6}$$ and [TeX:] $$\left|k_{2}\right|>2 \times 10^{6}$$ are both met, it is judged that smoke exists in the video. At the same time, this criterion overcomes the distraction of non-smoke objects such as pedestrians and vehicles in the video.
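A minimal MATLAB sketch of this judgment criterion, assuming the 40 energy values of the current time window are stored in a vector (judgeSmoke is an illustrative name, not taken from the paper):

function isSmoke = judgeSmoke(Eb_window, slopeThresh)
% Smoke judgment over a 40-frame time window: fit one slope per 20-frame
% cycle with least squares (polyfit) and raise an alarm only when both
% fitted slopes exceed the threshold (2e6 in this paper).
if nargin < 2, slopeThresh = 2e6; end
p1 = polyfit(1:20, Eb_window(1:20), 1);    % first 20-frame cycle
p2 = polyfit(1:20, Eb_window(21:40), 1);   % second 20-frame cycle
k1 = p1(1);  k2 = p2(1);                   % fitted slopes
isSmoke = abs(k1) > slopeThresh && abs(k2) > slopeThresh;
end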

The following shows the changes of the energy curve of the suspected smoke area in two videos (25 fps, 640 × 480 resolution) and the changes of the fitted slopes. Video1 lasts 27 seconds in total and shows white smoke produced by the friction of vehicle tires with the ground; the white smoke appears in the 1st second (the 25th frame). Video2 is a 22-second surveillance video in which a man in a green top enters from the right side of the picture at the 10th second (the 250th frame), and white smoke is released at the 18th second.

Fig. 2.
Curve of smoke change in the video with smoke only: (a) screenshot from the video, (b) smoke curve, and (c) change of the fitted slope.

When there is no distraction from non-smoke objects, the energy curve is a straight line as long as there is no smoke (i.e., no moving object is detected); when smoke appears, the curve reflecting the smoke in the early stage of the fire becomes a steep slope (Fig. 2). Segment A in Fig. 3(b) shows the change of the curve when a pedestrian enters the picture: the change is similar to that of smoke, and the slope changes greatly, indicating very strong distraction. Segment B in Fig. 3(b) shows the stage where the smoke rises slowly. The non-smoke distraction can be effectively eliminated through the smoke judgment criterion, and an accurate smoke alarm can be given.

Fig. 3.
Curve of smoke change in the video with non-smoke distractors: (a) screenshot from the video, (b) smoke curve, and (c) change of the fitted slope.
3.2 Smoke Detection Algorithm by Steps

Most existing smoke detection algorithms need to extract and analyze a large number of characteristics of the suspected smoke area, such as color, roundness and texture [25,26]. The analysis and computation of these characteristics are complex and time-consuming, so it is difficult to use such algorithms in a real-time fire monitoring video system [27]. The algorithm proposed in this paper for detection of fire smoke in a video based on wavelet energy slope fitting does not require extraction of these characteristics (Table 1). It calculates the low-frequency wavelet energy of the suspected smoke area and gives a smoke alarm signal by fitting the slope of the change in low-frequency wavelet energy within the given window by the least square method. The procedure consists of the following steps (a minimal code sketch of the whole detection loop is given after the list):

(1) The input color image is converted into a gray image, and the first (smoke-free) frame is chosen as the initial reference background image B;

(2) The suspected moving smoke area is extracted by use of the background estimation method and the mask M is binarized;

(3) The low-frequency wavelet energy [TeX:] $$E_{b}$$ is calculated by single-level wavelet decomposition of the product of the mask image M and the current frame image I;

(4) The values of [TeX:] $$E_{b}$$ over 20 consecutive frames are selected, and the absolute value of the slope of the change curve is obtained through fitting by the least square method;

(5) Slopes k1 and k2 are obtained through fitting within a cycle of 40 consecutive frames. When the conditions [TeX:] $$\left|k_{1}\right|>2 \times 10^{6}$$ and [TeX:] $$\left|k_{2}\right|>2 \times 10^{6}$$ are both met under the smoke judgment criterion, a smoke alarm signal is given.
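The following is a minimal end-to-end MATLAB sketch of the detection loop, reusing the illustrative helper functions sketched in Sections 2 and 3 (updateBackground, extractForeground, lowFreqWaveletEnergy, judgeSmoke). The file name is a hypothetical placeholder; frames are assumed to be RGB and are processed on the 0-255 gray scale so that the thresholds T = 14 and 2×10^6 apply directly, and the sketch simply checks the most recent 40 energy values at every step:

v = VideoReader('smoke_video.avi');   % hypothetical input file
alpha = 0.05;  T = 14;  win = 40;
I = double(rgb2gray(readFrame(v)));
B = I;                      % step (1): first frame as initial background
M = zeros(size(I));         % initial foreground mask (zero matrix)
EbHist = [];                % history of low-frequency wavelet energies
while hasFrame(v)
    I = double(rgb2gray(readFrame(v)));
    B = updateBackground(I, B, M, alpha);        % step (2), Eq. (1)
    M = extractForeground(I, B, T);              % step (2), Eqs. (2)-(3)
    EbHist(end+1) = lowFreqWaveletEnergy(I, M);  % step (3)
    if numel(EbHist) >= win && judgeSmoke(EbHist(end-win+1:end))  % steps (4)-(5)
        disp('Smoke alarm');                     % alarm signal
        break;
    end
end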

Table 1.
The proposed method

4. Analysis of Experimental Results

The algorithm in this paper detects the smoke generated in the early stage of a fire accident, during which the smoke volume grows from small to large. Considering the characteristics of this change, a time window of 40 consecutive frames is set, the slope of every 20 frames is fitted, and the change of the two slopes within the time window is judged in order to give a smoke alarm. When a smoke alarm is given by the proposed algorithm, it does not mean that smoke is present in every frame within the 40-frame cycle; the algorithm makes a holistic judgment. The main purpose of the experiments is to test the accuracy and anti-distraction ability of the algorithm. The accuracy refers to the difference between the time when the occurrence of smoke is observed by human eyes and the time when the smoke is detected by the algorithm. The anti-distraction ability refers to whether the algorithm gives a false positive alarm when a non-smoke moving distractor exists in the video. In each experiment, videos are selected for comparison with the algorithms of [13] and [14]. The videos have a resolution of 640×480 and a frame rate of 25 frames per second.

The videos in the experiments are mainly sourced from three websites: the website of the Yuan Feiniu Lab at Jiangxi University of Finance and Economics (http://staff.ustc.edu.cn/~yfn/vsd.html), the website of the CVPR Lab at Keimyung University in South Korea (http://cvpr.kmu.ac.kr/), and the VISOR online video repository (http://imagelab.ing.unimore.it/visor/). The software used in the experiments is MATLAB R2016b, and the computer is configured with a dual-core 3.0 GHz CPU (G860) and 4 GB of memory.

4.1 Experiment 1

Experiment 1 tests the accuracy of the algorithm on videos that contain smoke only. Tables 2 and 3 show the video scene descriptions and the detection data, respectively, and the detection results are shown in Fig. 4.

Table 2.
Video scene description in Experiment 1

The smoke-observing frame refers to the frame of the video in which the occurrence of smoke is first observed by human eyes. The smoke-detecting frame refers to the frame in which smoke is first detected by the algorithm. The error time is the difference between the smoke-detecting frame and the smoke-observing frame divided by the frame rate. The computation time is the time for the algorithm to process the entire video. In the detection results, the first image is the frame in which obvious smoke is observed by the human eye; the remaining three images are the first frames in which smoke is detected by the two reference algorithms and by the algorithm in this paper.
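For example, for Video1 in Table 3 the proposed algorithm first detects smoke at the 48th frame while smoke is first observed at the 45th frame, so at 25 frames per second the error time is

[TeX:] $$(48-45) / 25=0.12 \mathrm{~s}$$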

The detection data in Table 3 show that smoke is detected by each of the three algorithms, but the error times indicate that the algorithm proposed in this paper has the minimum error time, i.e., it is more accurate than the other two algorithms. In addition, in terms of the time for processing the entire video, the proposed algorithm has the shortest processing time, which indicates that its computation speed is the fastest.

Table 3.
Smoke detection result in Experiment 1
Fig. 4.
Detection result of videos with smoke only.
4.2 Experiment 2

Experiment 2 uses videos in which non-smoke distractors move at different speeds before smoke occurs. The main purpose of the experiment is to test whether the algorithm gives a false positive alarm (i.e., to test the anti-distraction ability). The results of the experiment are shown in Tables 4 and 5 and Fig. 5.

Table 4.
Video scene description in Experiment 2
Fig. 5.
Smoke video detection with moving distractors.

Experiment 2 tests the anti-distraction performance of the three algorithms. The data in the error-time column of Table 5 show that the moving distractors are mistaken for smoke and a false alarm is given by the algorithms of [13] and [14]. Only the algorithm in this paper effectively avoids the moving distractors and accurately detects the smoke, which is also evident in Fig. 5. The data in the computation-time column of Table 5 show that the algorithm in this paper takes less computation time than the other two algorithms, i.e., it has the fastest computation speed.

Table 5.
Detection data in Experiment 2

The notation “-” in the “error time” column indicates that the algorithm incorrectly detects the moving distractors as fire smoke and gives a false positive alarm.

4.3 Experiment 3

In Experiment 3, the videos show changes in the density of cloud or fog. The video scenes are described in Table 6. The detection data and results are shown in Table 7 and Fig. 6, respectively.

Table 6.
Video scene description in Experiment 3
Fig. 6.
Cloud and fog video: (a) video10, (b) video11, (c) video12, and (d) video13.

Table 7 shows that the algorithms of [13] and [14] mistakenly detect cloud and fog as fire smoke, while no smoke is detected by the algorithm in this paper. That is to say, under the distraction of cloud, fog and other distractors similar in color to smoke, the algorithm in this paper has a stronger anti-distraction ability than those of [13] and [14]. Cloud and fog are not detected as smoke by this algorithm mainly because they change more slowly, so the change in the wavelet energy slope of the suspected moving area is insignificant.

Table 7.
Detection data in Experiment 3
4.4 Experiment 4

In Experiment 4, smokeless videos with moving distractors similar in color to smoke are selected (Table 8, Fig. 7). The main purpose is to test the ability of the algorithm to distinguish smoke from non-smoke moving distractors.

Table 8.
Video scene description in Experiment 4
Fig. 7.
Video with moving distractors: (a) video14, (b) video15, (c) video16, and (d) video17.
Table 9.
Detection data in Experiment 4

Table 9 shows that smoke is detected by the algorithms of both [13] and [14] in all four videos. The algorithm in this paper only detects smoke in video15 and video16, mainly because the white moving distractors in these two videos move quickly from far to near, causing the algorithm to interpret the change as a transition from no smoke to smoke, which is similar to the process of smoke production and thus produces a false positive result.

4.5 Experiment 5

In Experiment 5, smoke videos under weak lighting conditions are selected. The purpose is to test the impact of light intensity on the algorithm (Table 10, Fig. 8).

Table 10.
Video scene description in Experiment 5
Fig. 8.
Videos with different light intensities: (a) video18 and (b) video19.

Table 11 shows that under the weak lighting conditions only the algorithm of [14] detects smoke (in video19), while the algorithm in this paper and that of Zhou et al. [13] fail to detect smoke effectively. Therefore, light intensity has a great impact on the algorithm in this paper.

Table 11.
Detection data in Experiment 5
4.6 Analysis of Experiments

In the above five experiments, different types of videos are selected to test the accuracy and anti-distraction ability of the algorithm. In Experiment 1, videos with smoke only and without any moving distractor are selected. Compared with the other two algorithms, the algorithm proposed in this paper has the minimum error time (less than 1 second) in smoke detection, so the experimental results show that it is the most accurate. Moreover, in terms of the density of the smoke detected in Fig. 4, the smoke detected by the algorithm in this paper has the lowest density, which indirectly confirms that the proposed algorithm is the most accurate, since it detects the smoke while it is still thin. In Experiment 2, videos with moving persons as distractors before smoke occurs are chosen. According to the detection results in Fig. 5, the algorithm proposed in this paper has the best detection result, giving no false positive alarm, and smoke is visible in every detection image, whereas the other two algorithms mistake the moving person for fire smoke. Therefore, the result of Experiment 2 shows that the proposed algorithm has the best anti-distraction ability against moving distractors.

In Experiment 3, videos with cloud and fog similar to fire smoke are tested, and the test results show that the algorithm proposed in this paper does not report fire smoke when the density change is not fast, whereas the other two algorithms mistake cloud and fog for fire smoke. This experiment shows that the proposed algorithm has better anti-distraction ability against cloud and fog than the other two algorithms. In Experiment 4, on non-smoke moving distractors (people and vehicles), the proposed algorithm is distracted. The main reason is that the pedestrian in video15 suddenly jumps upwards between the 22nd and 25th seconds, so that the fitted slope of the suspected moving object within the 40 consecutive frames is similar to the slope of smoke movement; similarly, during the longitudinal movement of the car in video16, a slope similar to that of smoke movement occurs. That is to say, the algorithm produces a false positive result. In Experiment 5, smoke videos under weak lighting conditions are tested. The algorithm in this paper fails to detect the smoke, showing that the detection accuracy is greatly affected by visible light. The main reason is that most current algorithms are designed for visible light; the weaker the light, the worse the detection result.

5. Conclusion

For detection of smoke in the early stage of a fire accident, this paper proposes a method for fast smoke detection based on fitting the slope of the change in wavelet energy of the smoke in the video. The algorithm uses the diffusing characteristic of smoke movement and gives a smoke alarm by fitting the change in wavelet energy of the smoke image within a specified time window; this is the innovation of this paper. The experimental results show that the method can accurately detect smoke in the early stage of a fire accident and eliminate the distraction of some non-smoke moving distractors (e.g., people and vehicles). A shortcoming of the method is that it easily misjudges when a moving distractor moves significantly from far to near. This will be the direction for future improvement of the proposed algorithm.

Acknowledgement

This paper is funded by the Natural Science Foundation Program for Universities and Colleges in Jiangsu Province (No. 18KJB520012) and the Changzhou Sci & Tech Program (No. CE20165049).

Biography

Yi Zhang
https://orcid.org/0000-0001-7030-6433

She graduated from Nanjing Normal University in 2005. She is currently an engineer at Jiangsu University of Technology. Her research interest is mainly in digital image processing.

Biography

Haifeng Wang
https://orcid.org/0000-0003-2827-7709

He received his M.S. degree from the Automation School of Wuhan University of Technology in 2007. He is a senior engineer at Jiangsu University of Technology. His research interests include information hiding and digital image processing.

Biography

Xin Fan
https://orcid.org/0000-0003-2053-0634

He received his M.S. degree from Jiangsu University in 2010, and has been studying for a PhD at Jiangsu University since March 2017. He is a senior experimentalist at Jiangsu University of Technology. His research interest is mainly in digital image processing.

References

  • 1 X. Sun, L. Sun, Y. Liu, Y. Huang, "Research of forest fire smoke recognition method based on gray bit plane technology," International Journal of Smart Home, vol. 9, no. 4, pp. 57-64, 2015.
  • 2 Y. Luo, L. Zhao, P. Liu, D. Huang, "Fire smoke detection algorithm based on motion characteristic and convolutional neural networks," Multimedia Tools and Applications, vol. 77, no. 12, pp. 15075-15092, 2018.
  • 3 Y. Jia, J. Yuan, J. Wang, J. Fang, Q. Zhang, Y. Zhang, "A saliency-based method for early smoke detection in video sequences," Fire Technology, vol. 52, no. 5, pp. 1271-1292, 2016.
  • 4 C. E. Prema, S. S. Vinsley, S. Suresh, "Multi feature analysis of smoke in YUV color space for early forest fire detection," Fire Technology, vol. 52, no. 5, pp. 1319-1342, 2016.
  • 5 A. Filonenko, D. C. Hernandez, K. H. Jo, "Fast smoke detection for video surveillance using CUDA," IEEE Transactions on Industrial Informatics, vol. 14, no. 2, pp. 725-733, 2017.
  • 6 A. Singh, B. Singh, B. Grover, G. Bhutani, A. Sharma, "FSIT: fire safety in trains," International Journal of Smart Home, vol. 10, no. 6, pp. 61-70, 2016.
  • 7 S. Luo, C. Yan, K. Wu, J. Zheng, "Smoke detection based on condensed image," Fire Safety Journal, vol. 75, pp. 23-35, 2015.
  • 8 C. Yuan, Z. Liu, Y. Zhang, "Learning-based smoke detection for unmanned aerial vehicles applied to forest fire surveillance," Journal of Intelligent & Robotic Systems, vol. 93, no. 1-2, pp. 337-349, 2019.
  • 9 Y. Zhao, Z. Zhou, M. Xu, "Forest fire smoke video detection using spatiotemporal and dynamic texture features," Journal of Electrical and Computer Engineering, vol. 2015, article no. 706187, 2015.
  • 10 D. K. Appana, R. Islam, S. A. Khan, J. M. Kim, "A video-based smoke detection using smoke flow pattern and spatial-temporal energy analyses for alarm systems," Information Sciences, vol. 418, pp. 91-101, 2017.
  • 11 B. U. Toreyin, Y. Dedeoglu, A. E. Cetin, "Wavelet based real-time smoke detection in video," in Proceedings of the 13th European Signal Processing Conference, Antalya, Turkey, 2005, pp. 1-4.
  • 12 N. Fujiwara, K. Terada, "Extraction of a smoke region using fractal coding," in Proceedings of the IEEE International Symposium on Communications and Information Technology, Sapporo, Japan, 2004, pp. 659-662.
  • 13 B. L. Zhou, Y. L. Song, M. H. Yu, "Fire smoke detection algorithm based on image disposal," Fire Science and Technology, vol. 35, no. 3, pp. 390-393, 2016.
  • 14 L. Wang, A. G. Li, X. N. Wang, Y. F. Yu, "An early fire smoke detection method based on multi-features fusion," Journal of Dalian Maritime University, vol. 40, no. 1, pp. 97-100, 2014.
  • 15 S. Wang, Y. He, J. J. Zou, D. Zhou, J. Wang, "Early smoke detection in video using swaying and diffusion feature," Journal of Intelligent & Fuzzy Systems, vol. 26, no. 1, pp. 267-275, 2014.
  • 16 F. Yuan, "A fast accumulative motion orientation model based on integral image for video smoke detection," Pattern Recognition Letters, vol. 29, no. 7, pp. 925-932, 2008.
  • 17 F. N. Yuan, Y. M. Zhang, S. X. Liu, C. Yu, S. Shen, "Video smoke detection based on accumulation and main motion orientation," Journal of Image and Graphics, vol. 13, no. 4, pp. 808-813, 2018.
  • 18 F. Yuan, "A double mapping framework for extraction of shape-invariant features based on multi-scale partitions with AdaBoost for video smoke detection," Pattern Recognition, vol. 45, no. 12, pp. 4326-4336, 2012.
  • 19 C. Yu, J. Fang, J. Wang, Y. Zhang, "Video fire smoke detection using motion and color features," Fire Technology, vol. 46, no. 3, pp. 651-663, 2010.
  • 20 Z. Zhou, Y. Zhao, Y. Tang, Y. Zhang, "Segmentation of forest fire video smoke region based on the temporal-spatial features," Journal of Chinese Agricultural Mechanization, vol. 37, no. 2, pp. 196-199, 2016.
  • 21 L. Zhao, Y. M. Luo, X. Y. Luo, "Based on dynamic background update and dark channel prior of fire smoke detection algorithm," Application Research of Computers, vol. 34, no. 2, pp. 957-960, 2017.
  • 22 F. Nazary, A. Fotouhi, "Toward a smoke detection system for early fire alarming based on video processing and fuzzy reasoning," International Journal of Engineering Intelligent Systems for Electrical Engineering and Communications, vol. 25, no. 1, pp. 41-48, 2017.
  • 23 F. Yuan, Z. Fang, S. Wu, Y. Yang, Y. Fang, "Real-time image smoke detection using staircase searching-based dual threshold AdaBoost and dynamic analysis," IET Image Processing, vol. 9, no. 10, pp. 849-856, 2015.
  • 24 Z. G. Liu, Y. Yang, X. H. Ji, "Flame detection algorithm based on a saliency detection technique and the uniform local binary pattern in the YCbCr color space," Signal, Image and Video Processing, vol. 10, no. 2, pp. 277-284, 2016.
  • 25 Z. Zhou, Y. Q. Zhao, "A new smoke detection method of forest fire video with color and flutter," in Proceedings of the 2015 Chinese Intelligent Automation Conference, Heidelberg: Springer, 2015, pp. 151-161.
  • 26 F. Yuan, X. Xia, L. Shi, H. Li, G. Li, "Non-linear dimensionality reduction and Gaussian process based classification method for smoke detection," IEEE Access, vol. 5, pp. 6833-6841, 2017.
  • 27 S. Wang, Y. He, H. Yang, K. Wang, J. Wang, "Video smoke detection using shape, color and dynamic features," Journal of Intelligent & Fuzzy Systems, vol. 33, no. 1, pp. 305-313, 2017.

Table 1.

The proposed method
Method by Steps

1: Input the m×n gray video, set M as the m×n zero matrix, set the initial background B as the first frame of the input video, and set α = 0.05;

2: Read the image I in every frame of the video, and calculate the background image: [TeX:] $$B=(1-M) \odot \alpha I+B+(M-1) \odot \alpha B$$

3: Calculate the binarized mask image M by thresholding |I − B| with the threshold T, as in Eqs. (2) and (3);

4: Decompose the I.*M image with a single-level wavelet transform and calculate the low-frequency wavelet energy Eb: Eb = sum(sum(cA.^2)), where cA is the matrix of low-frequency (approximation) coefficients;

5: Continuously process the images over 40 frames, and obtain the change slopes k1 and k2 for every 20 frames through linear fitting: k = polyfit(x, Eb, 1), where x is the vector of frame numbers and polyfit() is the least-squares fitting function;

6: Judge whether there is any smoke: if [TeX:] $$\left|k_{1}\right|>2 \times 10^{6}$$ and [TeX:] $$\left|k_{2}\right|>2 \times 10^{6}$$, then give a fire smoke alarm.

Table 2.

Video scene description in Experiment 1
Video scene description
Video1 Smoke from explosion.
Video2 Smoke from explosion of a small firecracker.
Video3 Smoke from the friction of car tires against the ground.
Video4 Smoke from a smoke generator.
Video5 Smoke from smoldering hay.

Table 3.

Smoke detection result in Experiment 1
Algorithm Total number of frames Smoke-observing frame Smoke-detecting frame Error time (s) Computation time (s)
Video1 176 45th
Wang et al. [14] 40th - 13.39
Zhou et al. [13] 100th 2.20 12.80
Proposed 48th 0.12 7.38
Video2 290 50th
Wang et al. [14] 70th 0.80 21.57
Zhou et al. [13] 100th 2.00 22.39
Proposed 60th 0.40 13.02
Video3 676 23rd
Wang et al. [14] 40th 0.68 71.75
Zhou et al. [13] 60th 1.48 95.16
Proposed 40th 0.68 26.68
Video4 151 25th
Wang et al. [14] 70th 1.80 12.22
Zhou et al. [13] 100th 3.00 11.75
Proposed 40th 0.60 6.43
Video5 1,200 40th
Wang et al. [14] 220th 7.20 86.75
Zhou et al. [13] 240th 8.00 92.43
Proposed 60th 0.80 49.87

Table 4.

Video scene description in Experiment 2
Video description
Video6 A man in green is seen walking at a relatively fast speed at 17″ before smoke occurs.
Video7 A man wearing deep blue tops is seen walking at a slow speed at 17″ before smoke occurs.
Video8 A man wearing deep blue tops and a man in white are seen walking at a relatively fast speed at 16″ before smoke occurs.
Video9 A man wearing yellow tops is seen walking at a normal speed at 16″ before smoke occurs.

Table 5.

Detection data in Experiment 2
Algorithm Total number of frames Smoke-observing frame Smoke-detecting frame Error time (s) Computation time (s)
Video6 2,151 450th
Wang et al. [14] 270th - 225.61
Zhou et al. [13] 360th - 156.86
Proposed 460th 0.2 93.99
Video7 2,344 410th
Wang et al. [14] 200th - 227.27
Zhou et al. [13] 240th - 160.55
Proposed 440th 1.2 96.23
Video8 2,023 390th
Wang et al. [14] 180th - 194.89
Zhou et al. [13] 120th - 138.52
Proposed 420th 1.2 83.50
Video9 1,879 410th
Wang et al. [14] 320th - 191.40
Zhou et al. [13] 360th - 133.75
Proposed 420th 0.2 78.45

Table 6.

Video scene description in Experiment 3
Video description
Video10 Dark clouds gradually occupy the entire sky, at a slow speed.
Video11 Fog gradually becomes denser, at a relatively fast speed.
Video12 Fog gradually becomes denser, at a fast speed.
Video13 Fog density basically does not change.

Table 7.

Detection data in Experiment 3
Algorithm Total number of frames Smoke-observing frame Smoke-detecting frame
Video10 941 No smoke
Wang et al. [14] 350th
Zhou et al. [13] 360th
Proposed Not detected
Video11 8,569 No smoke
Wang et al. [14] 1800th
Zhou et al. [13] 1350th
Proposed Not detected
Video12 7,619 No smoke
Wang et al. [14] 1790th
Zhou et al. [13] 960th
Proposed Not detected
Video13 2,093 No smoke
Wang et al. [14] 1440th
Zhou et al. [13] 1200th
Proposed Not detected

Table 8.

Video scene description in Experiment 4
Video description
Video14 A white car and a man in white are seen in the video at the 7th second and the 10th second.
Video15 A white car and a blue car are seen at the top in the video.
Video16 Some students are playing basketball.
Video17 A man in white is seen walking at the 10th second in the video.

Table 9.

Detection data in Experiment 4
Algorithm Smoke-observing frame Smoke-detecting frame
Video14 No smoke
Wang et al. [14] 220th
Zhou et al. [13] 300th
Proposed Not detected
Video15 No smoke
Wang et al. [14] 140th
Zhou et al. [13] 210th
Proposed 140th
Video16 No smoke
Wang et al. [14] 80th
Zhou et al. [13] 120th
Proposed 40th
Video17 No smoke
Wang et al. [14] 220th
Zhou et al. [13] 270th
Proposed Not detected

Table 10.

Video scene description in Experiment 5
Video description
Video18 Smoke is released under a very dark lighting condition
Video19 Smoke is released under a relatively dark lighting condition.

Table 11.

Detection data in Experiment 5
Algorithm Total number of frames Smoke-observing frame Smoke-detecting frame
Video18 1,217 375th
Wang et al. [14] Not detected
Zhou et al. [13] Not detected
Proposed Not detected
Video19 526 105th
Wang et al. [14] 180th
Zhou et al. [13] Not detected
Proposed Not detected