
A New Fire Detection Method Using a Multi-Expert System Based on Color Dispersion, Similarity and Centroid Motion in Indoor Environment

IEEE/CAA Journal of Automatica Sinica, 2020, Issue 1

Teng Wang, Leping Bu, Zhikai Yang, Peng Yuan, and Jineng Ouyang

Abstract—In this paper, a video fire detection method is proposed that demonstrates good performance in indoor environments. Three main novel ideas are introduced. First, a flame color model in RGB and HSI color space is used to extract pre-detected regions instead of the traditional motion differential method, as it is more suitable for fire detection in indoor environments. Second, according to the flicker characteristic of the flame, similarity and two main values of centroid motion are proposed, and a simple but effective method for tracking the same regions in consecutive frames is established. Third, a multi-expert system consisting of color component dispersion, similarity and centroid motion is established to identify flames. The proposed method has been tested on a very large dataset of fire videos acquired both in real indoor environment tests and from the Internet. The experimental results show that the proposed approach achieves a balance between the false positive rate and the false negative rate, and demonstrates better performance in terms of overall accuracy and the F standard than other similar fire detection methods in indoor environments.

I. INTRODUCTION

VIDEO fire detection is a technology that has been applied to fire detection only during the last few decades. Compared with conventional methods such as smoke detection and temperature detection, video fire detection is faster, more intelligent and more reliable, as it is a non-contact detection method [1]. Generally, an ordinary color camera is used to shoot videos of the scene, and unique features of fire such as color and shape are extracted as the input of the recognition algorithm for fire detection [2].

As the most representative feature of fire, color is often used in video fire detection. Chen et al. [3] presented an early fire-alarm method adopting an RGB (red, green, blue) model that refers to the intensity and saturation of the R component to extract fire pixels. Both the disorder and the dynamics of growth were used to verify whether the extracted fire pixels belong to a real fire, and the growing ratio of flames is checked iteratively as a main alarm-generating condition. Çelik et al. [4] proposed a rule-based generic color model for fire pixel classification in YCbCr color space. A large number of sample images were used to test the performance of the algorithm, which showed a fire detection rate of up to 99% achieved by the proposed method. Wang et al. [5] proposed a new fire recognition model referring to the dispersion of fire color components, as fire can be divided into fire core, inner fire and outer fire areas depending on its temperature and color. The threshold of the B component standard deviation is calculated by drawing the ROC curve of detection results based on a large number of sample images. A series of experimental results showed that the proposed color model can eliminate the influence of common interferences and noises and accurately detect the fire areas in the image, with good performance in indoor environments.

In order to reduce the rate of false alarms, the kinematic characteristics of flame have been incorporated into detection methods. Lascio et al. [6] proposed a novel method for detecting fires with traditional surveillance cameras. In order to increase the reliability of the method, the color and movement information of the videos are combined into a multi-expert system to detect fire regions. A very large dataset acquired in real environments was used to test the performance of the proposed algorithm. Chiu et al. [7] developed a video-based fire detection system (VFDS) based on sophisticated computer models and novel mathematical calculations. Wavelet-based contour modeling approaches and a temporal flicker model of flames are used as weak classifiers in the system. Zhang et al. [8] presented a fire detection algorithm combining static and dynamic flame features based on the theory of the degree of belief. The processing rate of the algorithm is rapid, but interferences such as noises and fire-like objects cannot be excluded effectively. A flame detection synthesis method is proposed in [9]. The presented approach applies color clues, flame movement, and flame area variation to detect fire in video frames. In [10], two-dimensional wavelet analysis is used for video smoke and fire detection. Firstly, the moving objects are extracted based on motion features. Secondly, smoke and fire regions are detected separately according to color models. Thirdly, two-dimensional wavelet analysis and a disorder feature are used to eliminate the interference of moving objects that look like fire. The experimental results show that the algorithm can reduce false positives. Habiboğlu et al. [11] presented a video fire detection method based on color, spatial and temporal information. The video is first divided into spatio-temporal blocks according to color information. Then the fire blocks are identified through covariance features extracted from those blocks. At last, a support vector machine (SVM) classifier is used to train and test the extracted features for wide application of the system. The system shows good detection performance with stationary cameras. In [12], Foggia et al. proposed a video fire detection method based on color, shape variation, and motion analysis of video frames acquired by surveillance cameras. They also presented a novel descriptor based on a bag-of-words approach for representing motion. A very large dataset of fire videos acquired in different environments is used to stress-test the multi-expert system combining color, shape and motion features. The obtained results show that the proposed method outperforms other similar approaches in terms of accuracy and false negatives.

The stereoscopic features of flame have also been applied to fire detection. In [13], a new fire detection method with a stereo camera is proposed. Firstly, candidate fire regions were extracted using generic color models and a simple background difference model. Secondly, the size, shape, and motion variation of the flame were applied to fuzzy logic for real-time fire verification. Finally, the distance between the camera and the fire region was estimated by matching feature points of the left and right images, and the 3D surface of the fire front was reconstructed. Verstockt et al. proposed two novel time-of-flight based methods for indoor and outdoor fire detection in [14]. Indoor flames can be detected very accurately by amplitude disorder and fast-changing depth detection. As depth measurements are unreliable in outdoor environments, a visual flame detector is used instead of fast-changing depth detection for outdoor fire detection. Gomes et al. [15] presented a video fire detection method for fixed surveillance smart cameras. The method applies several well-known techniques, such as a context-based learning mechanism and an attentive mechanism, to attain higher accuracy, robustness and effectiveness for fire detection. Besides, a new fire color model and a new fire frequency model based on wavelet analysis are proposed. The camera-world mapping is also approximated with a GPS-based calibration process to realize dynamic fire detection and generate the location of fire alarms in the image plane.

The above detection methods basically take advantage of motion differences to extract moving regions as the blobs for further analysis. This pre-detection method works well under natural illumination, but can easily be affected by interference from lighting in indoor environments; details are shown in Section II-A. Therefore, in this paper, a flame color model that can accurately extract the flame regions in indoor environments is used to extract the pre-detected regions instead of the motion difference method. Similarity and two main values of centroid motion are proposed according to the flicker characteristic of the flame. Finally, a multi-expert system consisting of color dispersion, similarity and centroid motion is established to identify flames. Details of the method are fully described in Section II, followed by testing and analysis on various videos in Section III. Concluding remarks are presented in Section IV.

II. METHOD

The proposed method consists of three major steps:

1) Extracting candidate regions from video frames through the RGB-HSI flame color model.

2) Calculating flame color dispersions, similarities and centroid motion statistics of candidate regions in consecutive frames.

3) Establishing a multi-expert system consisting of color dispersion, similarity and centroid motion to identify flames, based on analysis of various videos acquired by surveillance cameras in indoor environments.

Detailed descriptions of these steps are presented in the following subsections.

A. RGB-HSI Candidate Region Detection

Although there are many flame colors, the initial flame color commonly ranges from red to yellow. In the RGB color space, this gamut satisfies $R \ge G > B$. In the meantime, as the main component of the flame in an RGB image, $R$ should be greater than a threshold value $R_T$. In order to avoid the interference of background light, the flame saturation $S$ should also be greater than a threshold. According to the above flame color characteristics, Chen et al. derived three decision rules to extract flame areas in the flame image, which are as follows.
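The equation image referenced as (1) is missing here; a hedged reconstruction of the three rules, based on the definitions above and the published form of Chen's model, is:

$$\text{rule 1: } R > R_T;\qquad \text{rule 2: } R \ge G > B;\qquad \text{rule 3: } S \ge (255 - R)\,\frac{S_T}{R_T} \quad (1)$$

A pixel is classified as a candidate flame pixel only if it satisfies all three rules.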

As shown in (1), $R_T$ and $S_T$ are the thresholds of the red component and the saturation, respectively. The relationship between the red and saturation values of the flame pixels is shown in Fig. 1. According to statistics from numerous experimental results, $S_T$ is in the range of 55 to 65 and $R_T$ in the range of 115 to 135 [7]. In this paper, $S_T$ is set to 55 and $R_T$ to 125.
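To make the model concrete, here is a minimal Python sketch of a pixel-level test implementing the reconstructed rules above. The function name and the 0-255 scaling of the HSI saturation are assumptions, since the paper does not give an implementation:

```python
import numpy as np

def flame_color_mask(image_rgb, r_t=125.0, s_t=55.0):
    """Binary mask of candidate flame pixels (sketch of the RGB-HSI rules).

    image_rgb: H x W x 3 uint8 array in RGB order.
    Saturation follows the HSI model, S = 1 - 3*min(R,G,B)/(R+G+B),
    scaled here to 0-255 so the S_T = 55 threshold applies (assumed scaling).
    """
    rgb = image_rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    s = (1.0 - 3.0 * rgb.min(axis=-1) / (rgb.sum(axis=-1) + 1e-9)) * 255.0
    rule1 = r > r_t                          # red component strong enough
    rule2 = (r >= g) & (g > b)               # red-to-yellow gamut: R >= G > B
    rule3 = s >= (255.0 - r) * s_t / r_t     # saturation high enough vs. R
    return rule1 & rule2 & rule3
```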

In order to test the accuracy and reliability of the above flame color model, several flame images containing common interference are processed and the results are shown in Figs. 2-4.

According to the test results shown in Figs. 2-4, the flame region contour extracted via the RGB-HSI color model is very clear and accurate with no suspected flame regions ignored, thus achieving a much better extraction result than the other two methods for fire region detection in an indoor environment. As a result, the RGB-HSI color model is used to extract suspected flame regions as blobs for further analysis in this paper.

B. Color Dispersion

1) Dispersion of Flame Color Component

The fire detection method based on color dispersion was introduced in our previous paper [5]. Different parts of a fire burn at different levels and temperatures, which manifests as color dispersion. In contrast, common interference objects such as fluorescent lights and electric welding show a single color characteristic. Therefore, the difference in the dispersion of color components can be used to separate fire zones from interference areas. The standard deviation of a color component is used as the expression of dispersion in this paper.

$W_{mean}$ is defined as the mean value of the $W$ color component within a region consisting of $K$ pixels in the fire image:

$$W_{mean} = \frac{1}{K}\sum_{i=1}^{K} W(x_i, y_i)$$

where $W(x_i, y_i)$ is the value of the $W$ color component at pixel $(x_i, y_i)$. Then, the standard deviation $W_{std}$ of the $W$ color component within this region can be calculated as

$$W_{std} = \sqrt{\frac{1}{K}\sum_{i=1}^{K}\left(W(x_i, y_i) - W_{mean}\right)^2}.$$
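As an illustration, the $W_{std}$ computation for one candidate region might look as follows in Python (a sketch following the definitions above; the function name and the boolean-mask interface are assumptions):

```python
import numpy as np

def blue_dispersion(image_rgb, region_mask):
    """Standard deviation W_std of the Blue component inside one region.

    image_rgb: H x W x 3 uint8 array in RGB order.
    region_mask: boolean H x W array marking the K pixels of the region.
    """
    blue = image_rgb[..., 2].astype(np.float64)    # W = Blue component
    vals = blue[region_mask]                       # the K region pixels
    w_mean = vals.mean()                           # W_mean
    return np.sqrt(np.mean((vals - w_mean) ** 2))  # population W_std
```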

2) Choice of Color Component

In order to choose the standard deviation of the most suitable color component as the criterion to differentiate fire from interference objects, the standard deviations of the R, G and B components of common standard fires and interference objects were all calculated. The experimental fires were tested indoors in a square oil pan of 33 cm × 33 cm, and a Sony FCB-CX1020P high-resolution color camera was used to shoot the video at a distance ranging from 2 to 3 meters. Results are shown in Table I.

The results in the table above show that the standard deviations of the Blue component of fire objects and interference objects are drastically different. This is because the brightness of both fire and non-fire objects is usually very high, with their Red and Green components approaching 255, so these components show no big difference in dispersion. However, the Blue component of fire is generated by combustion, which varies dramatically in different parts of the fire, so its standard deviation is large. Meanwhile, the Blue component of non-fire objects is usually caused by light, which shows little dispersion in small areas, so their standard deviation of the Blue component is very small. Therefore, it is feasible to choose the standard deviation of the Blue component as the criterion to differentiate fire from non-fire objects.

TABLE I
STANDARD DEVIATIONS OF R, G, B COMPONENTS FOR FIRE AND NON-FIRE OBJECTS

Object (type)                  Rstd     Gstd     Bstd
Alcohol fire (Fire)            0.0594   4.0553   50.3641
Kerosene fire (Fire)           0.0588   2.8398   14.2169
Wood fire (Fire)               0.0485   5.3524   48.6095
Fluorescent lamp (Non-fire)    0.0021   0.0783   0.3724
Electric welding (Non-fire)    0.0446   1.7401   6.3091
White towel (Non-fire)         0.0371   2.2539   6.4057
Sunlight (Non-fire)            0.0420   1.0384   3.4186
Flashlight (Non-fire)          0.0213   0.4018   0.9323
Reflective metal (Non-fire)    0.0059   0.0853   2.0873

Based on the theory above, incorporating the RGB-HSI color model from Chen et al. and the standard deviation of the B component, a fire detection model based on the dispersion of color components is proposed. The criteria are listed as follows.
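The criteria equations are missing here; a hedged reconstruction, combining rules 1-3 of (1) with the dispersion test ($B_T$ denoting the threshold of the Blue-component standard deviation), is:

$$R > R_T,\qquad R \ge G > B,\qquad S \ge (255 - R)\,\frac{S_T}{R_T},\qquad B_{std} > B_T$$

That is, a region passing the color rules is accepted as fire only if its Blue-component standard deviation also exceeds $B_T$.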

3) Threshold of Blue Component via Accuracy

Accuracy is used to evaluate the performance of threshold selection. When the accuracy reaches its maximum, the corresponding threshold range is considered optimal.

Based on the concept of accuracy, the true positive rate (TPR), true negative rate (TNR), false positive rate (FPR), and false negative rate (FNR) are defined as follows:

$$TPR = \frac{TP}{TP + FN},\qquad TNR = \frac{TN}{TN + FP}$$

$$FPR = \frac{FP}{FP + TN},\qquad FNR = \frac{FN}{FN + TP}$$

In the equations above, TP (true positive) is incremented by 1 when a true fire is detected as fire; FP (false positive) is incremented when an interference is detected as fire; TN (true negative) is incremented when an interference is detected as interference; and FN (false negative) is incremented when a true fire is detected as interference.

They are calculated for each video. We define the averaged accuracy $A$ by

$$A = \frac{TPR + TNR}{2}.$$

In order to select a threshold with broad applicability, 1000 fire images from a large amount of indoor fire footage, covering different types and stages of fire, were analyzed as positive samples, and 1000 images of common non-fire interference objects such as welding, reflective metal, and white towels were analyzed as negative samples. According to the calculation based on these sample images, when the threshold of Blue dispersion is $B_T = 11$, the accuracy reaches its maximum of 99.10%.

4) Testing of Single Frame

The detection performance is shown in Figs. 5-8.

From the comparison of detection results in Figs. 5-8, the proposed fire recognition model proves superior to the other two models, and can accurately detect the fire area while completely eliminating the impact of interference objects. However, the detection result shown in Fig. 8 is not optimal, because the reflective light on the face is not evenly distributed, which will increase the false positive rate. Therefore, further processing with other fire characteristics is needed to eliminate similar interference.

C. Similarity of Detected Regions in Consecutive Frames

1) Similarity

The shape of a fire flame seems irregular in a single frame, but it shows a certain similarity in a sequence of consecutive frames, especially within short intervals. The flame similarity also varies within a certain range, which is quite different from other fast-moving light sources or interference objects with fire color characteristics. Therefore, similarity can be used as a basis for flame detection. Similarity is defined as

where $\Omega_i$ represents the contiguous region to be evaluated in the $i$th frame, and $\Omega_{i-1}$ represents the contiguous region to be evaluated in the $(i-1)$th frame.

2) Area Tracking Algorithm for Consecutive Frames

Video tracking algorithms [16] include feature-based tracking algorithms, contour model-based tracking algorithms, and prediction-based tracking algorithms such as Kalman filtering. Because the shape of flames is irregular and easily influenced by changes in wind direction and other natural environmental factors, the above video tracking algorithms are not suitable for fire detection. The main flicker frequency of fire is between 7-12 Hz [17], and the sampling frequency of ordinary surveillance cameras is about 30 frames/s. Therefore, it can be assumed that the flame burns in a fixed area during a short period. According to these flame characteristics, a region-based tracking algorithm is presented in this paper and implemented in Matlab. Details are as follows:

Step 1: The "bwlabel" function is used to label the different connected regions in the current frame image $I_t$ and the previous frame image $I_{t-1}$, respectively. $S_t$ is a matrix of the same size as $I_t$, which contains labels for the connected components in $I_t$. $Num_t$ is the number of connected objects found in $I_t$.

Step 2: A difference image $M_t$ is obtained by subtracting the previous frame image $I_{t-1}$ from the current frame image $I_t$.

Step 3: For every region in $S_t$, count the number of pixels whose value is 1 in the corresponding region of $M_t$ ($i$ is the region label, and $N_i$ is the number of pixels whose value is 1 in region $i$ of image $M_t$).

If $N_i > 0$, proceed to Step 4; otherwise, continue with region $(i+1)$ until all regions are evaluated.

Step 4: Because the flame usually burns in a fixed area during a short period, it is assumed that the two regions whose centroid distance is the smallest represent the same suspect area in the neighboring frames. Namely, if the distance between the centroids of region $i$ in $I_t$ and region $j$ in $I_{t-1}$ is the minimum over all region pairs, then regions $i$ and $j$ represent the same suspect area in the neighboring frame images $I_t$ and $I_{t-1}$.

Step 5: Calculate the similarity of the corresponding regions in the two consecutive frames.
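A minimal Python sketch of Steps 1-4 follows. The paper's implementation is in Matlab, so scipy.ndimage.label stands in for "bwlabel", and the function and variable names here are illustrative:

```python
import numpy as np
from scipy import ndimage

def match_regions(cur_mask, prev_mask):
    """Pair suspect regions of frame I_t with those of I_{t-1} (Steps 1-4).

    cur_mask, prev_mask: boolean H x W masks from the flame color model.
    Returns a list of (label_in_I_t, label_in_I_{t-1}) pairs.
    """
    s_t, num_t = ndimage.label(cur_mask)    # Step 1: label regions of I_t
    s_p, num_p = ndimage.label(prev_mask)   #         and of I_{t-1}
    m_t = cur_mask.astype(int) - prev_mask.astype(int)  # Step 2: M_t
    cents_t = ndimage.center_of_mass(cur_mask, s_t, range(1, num_t + 1))
    cents_p = ndimage.center_of_mass(prev_mask, s_p, range(1, num_p + 1))
    pairs = []
    for i in range(1, num_t + 1):
        n_i = int(np.sum((s_t == i) & (m_t == 1)))  # Step 3: changed pixels
        if n_i == 0 or num_p == 0:
            continue                                # unchanged region: skip
        ci = np.asarray(cents_t[i - 1])
        dists = [np.linalg.norm(ci - np.asarray(c)) for c in cents_p]
        j = int(np.argmin(dists)) + 1               # Step 4: nearest centroid
        pairs.append((i, j))
    return pairs
```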

3) Selection of Similarity Threshold

In order to study the motion characteristics of flame and interference images, the changes of their shape features in continuous frame images need to be analyzed. Covering the common fires and interferences in indoor environments, alcohol fire, kerosene fire, fuel fire, electric welding, car lights, a moving flashlight and a shaking white towel are tested as samples, which are shown in Fig. 9.

The corresponding similarities are calculated, and the distribution histograms are shown in Fig. 10.

As shown in Fig. 10, the similarity values calculated from 300 continuous frames of the flame sample images shown in sub-figures (a), (b) and (c) are mainly distributed between 0.5 and 0.9, as only the outer flame part changes within the time interval between two consecutive frames. The flickering frequency of fire is 7-12 Hz, while the average frame rate of a common camera is generally 20-30 Hz, so the calculated similarity value of fire is stable within this interval. As shown in sub-figures (d) and (g), the calculated similarity values of the welding and shaking white towel sample images are mostly equal to 0, since their flickering frequencies are much less than the frame rate of the camera, leading to little change between two consecutive frames. As shown in sub-figure (f), since the moving flashlight moves in one direction during a short period of time, the calculated similarity values are mostly close to 1.

Based on the similarity distribution above, we again use the concept of accuracy to determine the thresholds of similarity. $S_{TL}$ is defined as the bottom threshold of similarity, and $S_{TH}$ is defined as the top threshold. $S_{TL}$ increases gradually from 0 to 1 with a step interval of 0.01, and $S_{TH}$ increases gradually with a step interval of 0.01 within $[S_{TL}, 1]$. The corresponding accuracy value for each threshold range $[S_{TL}, S_{TH}]$ is calculated. When the accuracy reaches its maximum, the corresponding range $[S_{TL}, S_{TH}]$ is considered the best threshold.

After the calculation introduced above, we find that the accuracy reaches its maximum when $S_{TL} = 0.62$ and $S_{TH} = 0.88$. The corresponding results are shown in Table II.
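The threshold search just described is a simple exhaustive grid search. A Python sketch, assuming arrays of similarity values precomputed for flame and interference samples (function and variable names are illustrative), is:

```python
import numpy as np

def best_similarity_thresholds(fire_sims, interf_sims, step=0.01):
    """Grid-search [S_TL, S_TH] maximizing the averaged accuracy A = (TPR+TNR)/2."""
    fire_sims = np.asarray(fire_sims)
    interf_sims = np.asarray(interf_sims)
    grid = np.round(np.arange(0.0, 1.0 + step, step), 2)
    best_lo, best_hi, best_a = 0.0, 1.0, -1.0
    for lo in grid:
        for hi in grid[grid >= lo]:
            # flame samples inside the band count as detected fire
            tpr = np.mean((fire_sims >= lo) & (fire_sims <= hi))
            # interference samples outside the band count as rejected
            tnr = np.mean(~((interf_sims >= lo) & (interf_sims <= hi)))
            a = (tpr + tnr) / 2.0
            if a > best_a:
                best_lo, best_hi, best_a = lo, hi, a
    return best_lo, best_hi, best_a
```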

D. Centroid Motion of Detected Regions in Consecutive Frames

1) Centroid

The centroid is defined as the average coordinate of the pixels in a region:

$$x_i = \frac{1}{n}\sum_{j=1}^{n} x_j,\qquad y_i = \frac{1}{n}\sum_{j=1}^{n} y_j$$

where $(x_j, y_j)$ are the pixel coordinates in the $i$th region, $(x_i, y_i)$ is the centroid of the $i$th region, and $n$ is the number of pixels in the region.

2) Centroid Motion of Suspected Fire Regions in Consecutive Frames

In order to study the rule of fire centroid motion, a few standard types of fire and sources of interference are selected for analysis [18]. As shown in Fig. 11, the experimental fires are tested in a square oil pan of 33 cm × 33 cm, and a Sony FCB-CX1020P high-resolution color camera is used to shoot the videos [19].

As shown in Fig. 12, the motion of the fire centroid is more obvious than that of stationary interference sources such as the reflective metal, and exhibits a repeating pattern over a certain period of time. Therefore, the centroid motion can be described by the following statistics.

3) Construction of the Centroid Motion Model

Define $(x_i, y_i)$ as the centroid coordinate of the suspected flame region in the $i$th frame ($i = 1, 2, \ldots, N$). The following statistics are then computed over the $N$ tracked frames.

The displacement of the centroid over $N$ frames in the horizontal direction can be calculated as

$$S_H = \sum_{i=1}^{N-1}(x_{i+1} - x_i) = x_N - x_1.$$

TABLE II
PERFORMANCE OF THE SIMILARITY EXPERT

Expert       S_TL   S_TH   TPR     FPR     TNR     FNR     A
Similarity   0.62   0.88   86.7%   13.3%   67.4%   32.6%   77.05%

The displacement of the centroid over $N$ frames in the vertical direction can be calculated as

$$S_V = \sum_{i=1}^{N-1}(y_{i+1} - y_i) = y_N - y_1.$$

The standard deviation of the centroid displacement over $N$ frames in the horizontal direction can be calculated as

$$D_H = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N-1}\left(\Delta x_i - \overline{\Delta x}\right)^2},\qquad \Delta x_i = x_{i+1} - x_i.$$

The standard deviation of the centroid displacement over $N$ frames in the vertical direction can be calculated as

$$D_V = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N-1}\left(\Delta y_i - \overline{\Delta y}\right)^2},\qquad \Delta y_i = y_{i+1} - y_i.$$

The total displacement of the centroid over $N$ frames can be calculated as

$$D_S = \sqrt{S_H^2 + S_V^2}.$$

The total distance traveled by the centroid over $N$ frames can be calculated as

$$Z_D = \sum_{i=1}^{N-1}\sqrt{(x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2}.$$

The ratio of the total displacement to the total distance of the centroid over $N$ frames can be calculated as

$$R_D = \frac{D_S}{Z_D}.$$

The average area of the suspected fire region over $N$ frames can be calculated as

$$M_S = \frac{1}{N}\sum_{i=1}^{N} A_i$$

where $A_i$ is the pixel area of the suspected region in the $i$th frame.

Compared to common sources of interference, the centroid motion of a flame region has a flickering characteristic, and the ratio of its range of motion to the height of the flame falls within a certain interval. To simplify calculation, the square root of the area is used instead of the flame height. Therefore, the ratio of the flame centroid travel distance to the flame size can be obtained:

$$B_{MS} = \frac{Z_D}{N\sqrt{M_S}}.$$

Here, we track 30 consecutive frames of each video ($N = 30$) and calculate the above statistics (in pixels).
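A Python sketch computing these statistics from a tracked centroid sequence (variable names are illustrative; the formulas follow the reconstructions above, which are consistent with the values in Table III):

```python
import numpy as np

def centroid_motion_stats(xs, ys, areas):
    """Centroid-motion statistics over N tracked frames.

    xs, ys: centroid coordinates (x_i, y_i), i = 1..N, as 1-D arrays.
    areas:  pixel areas A_i of the suspected region in each frame.
    """
    xs, ys, areas = map(np.asarray, (xs, ys, areas))
    n = len(xs)
    dx, dy = np.diff(xs), np.diff(ys)
    sh, sv = dx.sum(), dy.sum()              # S_H, S_V: net displacements
    dh, dv = dx.std(ddof=1), dy.std(ddof=1)  # D_H, D_V: displacement std devs
    ds = np.hypot(sh, sv)                    # D_S: total displacement
    zd = np.hypot(dx, dy).sum()              # Z_D: total distance traveled
    rd = ds / zd                             # R_D: displacement/distance ratio
    ms = areas.mean()                        # M_S: average region area
    bms = zd / (n * np.sqrt(ms))             # B_MS: travel vs. flame size
    return {"SH": sh, "SV": sv, "DH": dh, "DV": dv, "DS": ds,
            "ZD": zd, "RD": rd, "MS": ms, "BMS": bms}
```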

As shown in Table III, over a certain period of time, the ratio of the total displacement to the total distance of the centroid motion in fire regions is less than a certain threshold: because the flame is constantly flickering, its centroid moves in a repeated fashion. For common sources of interference such as a moving flashlight, which present a uniform movement over a short period of time, this ratio is large. At the same time, as the flame flickering is periodic, the ratio $B_{MS}$ of the distance of flame centroid motion to the square root of its area falls within a certain range. On the contrary, for a stationary interference source, the absolute value of motion is almost negligible, so this ratio is also very small. Based on the above two characteristics, we can propose the centroid motion model: a candidate region is judged as flame if $R_D < R_{DT}$ and $B_{MS} > B_{MST}$, where $R_{DT}$ and $B_{MST}$ are the thresholds reported in Table IV.

TABLE III
CENTROID MOTION STATISTICS OVER 30 FRAMES (IN PIXELS)

Object              SH      SV      DH     DV     DS     ZD      RD     MS      BMS
Alcohol fire        4.92    -2.50   2.19   4.01   5.52   113.04  0.049  76.17   0.43
Kerosene fire       -0.82   7.09    1.56   8.03   7.14   175.82  0.041  132.37  0.51
Wood fire           -1.53   -1.69   1.23   5.08   2.28   120.43  0.019  61.70   0.51
Shaking towel       5.03    35.78   3.86   10.86  36.13  259.87  0.139  68.97   1.04
Moving flashlight   1.95    34.85   0.514  0.56   34.90  37.90   0.921  98.53   0.13
Welding             -5.81   9.40    1.59   2.33   11.05  72.41   0.153  185.03  0.18
Reflective metal    -0.11   -0.16   0.10   0.12   0.20   4.08    0.048  13.43   0.04

TABLE IV
PERFORMANCE OF THE CENTROID MOTION EXPERT

Expert            R_DT   B_MST   TPR     FPR    TNR   FNR   A
Centroid motion   0.31   0.28    99.5%   0.5%   90%   10%   94.75%

E. Multi-Expert System

MESs have been successfully applied in several applications, such as face detection and movie segmentation. As mentioned in [14], although several new strategies have been presented in recent years [20], one of the rules most robust to the errors of the combined classifiers is the weighted voting rule [21]. The detailed principle and formulas are introduced in [12]. As shown in Fig. 13, we use color dispersion, similarity and centroid motion as the three experts in the MES. Namely,

$W_{DE}(F)$, $W_{SE}(F)$ and $W_{CE}(F)$ are respectively the recognition accuracies for flame positive samples of the dispersion, similarity and centroid motion experts on the training data, and $W_{DE}(NF)$, $W_{SE}(NF)$ and $W_{CE}(NF)$ are respectively their recognition accuracies for interference samples on the training data.
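The voting formula itself is given in [12] and is not reproduced above; a hedged sketch of the rule, where $o_e(B)$ denotes the class (fire $F$ or non-fire $NF$) output by expert $e \in \{DE, SE, CE\}$ for a candidate blob $B$, is:

$$\operatorname{class}(B) = \arg\max_{c \in \{F,\, NF\}} \sum_{e \in \{DE,\, SE,\, CE\}} W_e(c)\, \mathbf{1}\!\left[\, o_e(B) = c \,\right]$$

That is, each expert votes for a class with a weight equal to its training accuracy on that class, and the class receiving the larger weighted vote wins.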

TABLE V
THE VIDEO DATASET

Video    Resolution  Frame rate  Frames  Fire  Notes
Fire 1   320×240     30          1581    yes   A close gasoline fire. The video has been acquired by the authors in a warehouse.
Fire 2   320×240     30          1221    yes   A close coal fire. The video has been acquired by the authors in a warehouse.
Fire 3   320×240     30          402     yes   A close wood fire. The video has been acquired by the authors in a warehouse.
Fire 4   320×240     25          1650    yes   A fire in a bucket in an indoor environment. Video downloaded from [23].
Fire 5   320×240     30          1805    yes   A close newspaper fire. The video has been acquired by the authors in a warehouse.
Fire 6   400×256     15          573     yes   A sparking wire fire burning in a house. Video downloaded from [23].
Fire 7   1280×720    25          1625    yes   A far fuel fire in an indoor environment with a strong light exposed through the windows. The video has been acquired by the authors in a dark warehouse.
Fire 8   1280×720    25          2000    yes   A fuel fire in an indoor environment. The video has been acquired by the authors in a large warehouse.
Fire 9   720×576     17.66       2187    yes   A fuel fire under a strong flashing light. The video has been acquired by the authors in a large warehouse.
Fire 10  720×576     19.1        2865    yes   Fuel flames under the influence of the blinking lights of a moving van in a large warehouse.
Fire 11  720×576     18.5        1836    no    Flashing van tail lights moving in a spacious warehouse. The video has been acquired by the authors.
Fire 12  320×240     30          303     no    A close flashlight held by a man walking in a dark house. The video has been acquired by the authors.
Fire 13  720×576     24.95       2495    no    A small flashing flashlight held by a man walking around in a large warehouse. The video has been acquired by the authors.
Fire 14  720×576     24.95       3742    no    Flashing car headlights moving in a large warehouse. The video has been acquired by the authors.
Fire 15  720×576     24.95       1497    no    Flashing car tail lights moving in a large warehouse. The video has been acquired by the authors.
Fire 16  320×240     30          929     no    A white towel shaken by a person dressed in a camouflage coat. The video has been acquired by the authors.
Fire 17  1280×720    25          252     no    A reflective metal box moved by a person in a bright warehouse. The video has been acquired by the authors.
Fire 18  1280×720    25          3000    no    A man dressed in white walking around in a warehouse. The video has been acquired by the authors.
Fire 19  320×240     30          501     no    Four working fluorescent lamps in a dark house. The video has been acquired by the authors.
Fire 20  720×576     18.8        2373    no    Strong window lighting while the camera moves around in a large warehouse. The video has been acquired by the authors.
Fire 21  320×240     30          660     no    A reflective metal plate in an indoor environment. The video has been acquired by the authors.
Fire 22  320×240     30          2250    no    A strong light exposed through the door into a dark house while the door is opened and closed.
Fire 23  320×240     10          900     no    A smoke pot near a red dust bin. Video downloaded from [23].
Fire 24  320×240     30          806     no    A close welding in an indoor environment. The video has been acquired by the authors in a workshop.

III. EXPERIMENTS AND ANALYSIS

Most current fire detection methods (especially the ones based on color information) are still tested on images instead of videos, and no standard datasets for testing detection approaches have been made available up to now. Töreyin et al. [22] and Cetin [23] have made a large collection of videos for fire and smoke detection, but most of them were acquired in outdoor environments and cannot be used to test the method proposed in this paper. So we acquired several long videos in indoor situations and created a new dataset composed of 37453 frames, including 10223 frames downloaded from the Internet [23]. More information about the videos is shown in Table V, while visual examples of most of them are shown in Fig. 14. The first 10 videos contain various kinds of fire in different indoor environments, and the last 14 videos do not contain fires but contain objects or situations which can be wrongly classified as containing fire. In particular, the moving flashing lights may be misclassified by both color-based methods and motion-based approaches. Besides, the burning fire in Fig. 14(i) is easily missed because of the strong illumination. Such a composition allows us to test the system in many real indoor environments.

First, a random 20% of the videos in the dataset is used to train the multi-expert system. The training results are shown in Table VI.

Because the actual damage caused by false negatives is much larger than that caused by false positives, we set the cost of a false negative to 10 times that of a false positive, so another objective function F for each method can be defined as follows [24]:

As shown in Table VII, the false negative rates of DE and VE are lower than that of SE. After the simple combination of the three experts, the false positive rate is reduced to 5.20%, the best false positive rate among all the methods listed. However, the simple combination greatly increases the false negative rate to 23.90%, leaving the system at 85.45% and −0.681 respectively in the two comprehensive evaluation standards, accuracy and F; the overall performance is poor. In contrast, the MES combination holds the false positive rate to 16.86% while keeping the false negative rate as low as 1.00%, and achieves the best performance both in accuracy (91.07%) and in the F standard (1.721).

TABLE VI
TRAINING RESULTS OF THE THREE EXPERTS

Expert   TPR      FPR      TNR      FNR
DE       100%     0        80%      20%
SE       85.72%   14.28%   76.57%   23.43%
VE       98.42%   1.58%    82.85%   17.15%

Table VII also compares the performance of our algorithm with that of four other methods which are also based on the combination of color, motion, and shape information. A detailed analysis of [3], [10], [11] has been given in [12]. The method proposed in [12] built a multi-expert system combining the color, shape and motion information of video frames, which outperforms the previous three methods both in terms of accuracy and false negative rate. However, that analysis is based on the extraction of moving blobs, and motion differential extraction does not work well in indoor environments, especially under the interference of strong illumination. Besides, compared to the color and shape experts presented in [12], the color dispersion, similarity and centroid motion experts proposed in this paper demonstrate better performance in terms of accuracy. As a result, the RGB-HSI flame color model has been employed to extract the pre-analysis blobs instead of the motion differential method, and a similar MES has been built based on color dispersion, similarity and centroid motion. As expected, the experimental results on a large dataset in indoor environments demonstrate that the proposed approach outperforms the other four methods in terms of accuracy, false negative rate and the F standard.

TABLE VII
COMPARISON WITH SIMILAR METHODS

Typology            Method                       Accuracy   False positives   False negatives   F
Single expert       DE (proposed)                86.69%     22.70%            3.93%             1.341
Single expert       SE (proposed)                80.24%     22.69%            16.8%             -0.077
Single expert       VE (proposed)                83.89%     28.03%            4.18%             1.259
Simple combination  DE+SE                        86.90%     5.95%             20.24%            -0.286
Simple combination  DE+VE                        90.06%     11.80%            8.08%             0.993
Simple combination  SE+VE                        85.01%     9.51%             20.48%            -0.347
Simple combination  DE+SE+VE                     85.45%     5.20%             23.90%            -0.681
MES                 DE+SE                        84.81%     29.87%            0.50%             1.646
MES                 DE+VE                        85.31%     29.36%            0.025%            1.704
MES                 SE+VE                        79.13%     41.22%            0.525%            1.530
MES                 DE+SE+VE (proposed)          91.07%     16.86%            1.00%             1.721
Other methods       RGB+Shape+Motion [10]        65.58%     47.13%            21.72%            -0.861
Other methods       Color+Shape+Motion [11]      84.78%     12.58%            17.86%            -0.090
Other methods       Color+Shape+Motion [3]       81.36%     17.25%            20.03%            -0.376
Other methods       Color+Shape+Motion [12]      85.63%     15.80%            12.95%            0.418

IV. CONCLUSION

In this paper, a video fire detection method using a multi-expert system based on the color dispersion, similarity, and centroid motion information of pre-detected blobs is proposed. In view of the weak robustness of the motion differential approach in indoor environments, an RGB-HSI flame color model is used to extract the pre-detected regions as the analysis blobs. The detection system has been tested on a large number of videos acquired by surveillance cameras in indoor environments. Test results showed that the MES approach achieves a balance between the false positive rate and the false negative rate, and demonstrates better overall accuracy than its component experts. Compared to the other four methods, the proposed system not only achieved the optimum overall accuracy and F criterion, but also reduced the false negative rate in indoor environments.

However, the proposed method still has the following deficiency: the thresholds of each expert and their weight values in the multi-expert system are calculated from samples randomly selected from the dataset. Even within the same dataset, different samples yield different key parameters. Besides, when new data is added, those parameters all have to be recalculated from scratch. That is the next problem to be solved in our research, and deep learning may be a good solution.
