

Locally Linear Back-propagation Based Contribution for Nonlinear Process Fault Diagnosis

IEEE/CAA Journal of Automatica Sinica, 2020, No. 3

Jinchuan Qian, Li Jiang, and Zhihuan Song

Abstract—This paper proposes a novel locally linear back-propagation based contribution (LLBBC) for nonlinear process fault diagnosis. As a method based on the deep learning model of the auto-encoder (AE), LLBBC can deal with the fault diagnosis problem by extracting nonlinear features. When the on-line fault diagnosis task is in progress, a locally linear model is first built at the current fault sample. Following the basic idea of reconstruction based contribution (RBC), the propagation of fault information is described by the back-propagation (BP) algorithm. Then, a contribution index is established to measure the correlation between each variable and the fault, and the final diagnosis result is obtained by searching for variables with large contributions. The smearing effect, an important factor affecting fault diagnosis performance, can be suppressed as well, and the theoretical analysis shows that the correct diagnosis is guaranteed by LLBBC. Finally, the feasibility and effectiveness of the proposed method are verified on a nonlinear numerical example and the Tennessee Eastman benchmark process.

I. Introduction

To keep modern industrial plants working in normal operation and to improve product quality, process monitoring techniques have been widely developed in recent decades. With advanced computer and networked control system techniques, a large amount of process data has been recorded and stored in industrial databases in recent years. Meanwhile, data-driven multivariate statistical process monitoring (MSPM) has received great attention, as extracting useful information from process data can be more convenient and flexible than traditional mechanism-based methods [1]–[3]. Most of the primitive MSPM methods are based on linear models, such as principal component analysis (PCA) and partial least squares (PLS), which assume that the correlations between different process variables are linear. However, in actual industrial processes, nonlinear correlations are widespread, which seriously affects the monitoring performance of those methods. Thus, several extensions of MSPM methods have been proposed to handle the nonlinear problem, including kernel PCA [4], support vector data description (SVDD) [5], and neural network based methods such as auto-associative neural networks [6] and principal curve based nonlinear PCA [7].

As a crucial part of process monitoring, fault diagnosis aims to find the faulty part or component of the process, which helps engineers locate the root causes of faults and fix the responsible part after fault detection. One way to do this is to find the variables critical to the detected fault, also known as fault identification. Contribution plots and reconstruction based contribution (RBC) are two traditional data-based methods for fault diagnosis [8]. Contribution plots find the faulty variables by calculating the contribution of each process variable to the fault detection index [9]; Tan et al. [10], [11] improved the performance of the contribution plot by combining this method with different monitoring models. However, because of the existence of the smearing effect, the contribution plot may not be able to give the correct diagnosis results, and RBC was proposed to solve this problem. Compared to the contribution plot method, RBC considers the propagation of fault information in the model, which is able to suppress the smearing effect and has been proven to have better diagnosis performance [12]. The traditional RBC method is established on the linear PCA model, which is not suitable for nonlinear processes. To address the nonlinear issue, several improvements on RBC have been proposed. Alcala et al. [13] extended RBC to the kernel PCA (KPCA) model as a nonlinear version (KPCA-RBC). However, since the dimension of the kernel matrix equals the number of samples, the calculation is severely time-consuming when dealing with large-scale datasets, which makes KPCA-RBC hard to implement in practice. Ge et al. [14] approximated the nonlinear feature space with several linear subspaces, built a linear RBC in each of them, and combined the results by Bayesian inference (BSPCA). This approach is effective and easy to construct; nevertheless, it may not be able to capture strongly nonlinear features of processes. In recent works, variable selection methods have been used to locate the variables critical to the fault. Yan et al. proposed a least absolute shrinkage and selection operator (LASSO) based method to identify the faulty variables [15] and further combined LASSO with PLS and discriminant analysis to improve the performance [16]. Yu et al. [17] built a fault relevance index based on kernel canonical correlation analysis (KCCA) to describe the correlation between variables and faults. Compared to the contribution plot and RBC, these methods perform better when dealing with multivariate faults; however, the variable selection process is time-consuming, and some of these methods can only be performed after a fault dataset has been built, which means they cannot give results in time when used online.

Deep learning has become a hot research topic in the fields of artificial intelligence and machine learning in recent years. It includes a series of powerful feature extraction models, such as the auto-encoder (AE), the restricted Boltzmann machine (RBM), the deep belief network (DBN), etc. These models can learn representations of the data and extract complex nonlinear features [18], [19]. Several works have used deep learning models to handle practical industrial problems such as quality prediction and process monitoring. Yuan et al. [20]–[22] developed several novel nonlinear feature extraction methods based on the variable-wise weighted stacked AE and the long short-term memory (LSTM) network. Yan et al. [23] proposed a variant AE method to solve the nonlinear fault detection problem. Jiang et al. [24] further improved the monitoring performance of AE with a denoising criterion. Besides, Zhao [25] proposed a new monitoring model by combining AE and PCA. For fault diagnosis, most work based on deep learning models treats it as a classification task. Shao et al. [26] proposed a tracking deep wavelet auto-encoder method for fault diagnosis of electric locomotive bearings, Tamilselvan et al. [27] applied DBN to health diagnosis, and Wang et al. [28] proposed a novel extended DBN model to perform the fault diagnosis task in a chemical process. However, few methods can be found that analyze the critical variables of process faults with deep learning models.

The motivation of this paper is to develop a fault diagnosis method based on a deep learning model for industrial processes with strongly nonlinear relationships between variables. A novel method called locally linear back-propagation based contribution (LLBBC) is proposed for fault diagnosis. In LLBBC, an AE model is first trained offline. When a fault sample is detected, a local linear model is built at the fault sample to approximate the whole AE model. Then the basic idea of RBC is utilized to calculate the contribution of each variable. Due to the similarity between the propagation of fault information and that of the training error, the back-propagation (BP) algorithm is used to describe the propagation of the fault when calculating the contribution. Theoretically, the nonlinear features extracted by AE make the method suitable for the fault diagnosis of nonlinear processes, and the local linear model of LLBBC can protect the diagnosis results from the smearing effect. Two case studies presented in the paper demonstrate the superiority of the proposed method.

The organization of the remaining paper is as follows: Section II presents a brief review of AE, the stacked auto-encoder (SAE), and the denoising training criterion. In Section III, two fault diagnosis methods based on AE, namely back-propagation based contribution (BBC) and the proposed LLBBC, are introduced; a theoretical proof of the validity of LLBBC and the relationship between LLBBC and RBC are also given in this section. Section IV provides two case studies, including a numerical example and the Tennessee Eastman benchmark process. Finally, conclusions are drawn.

II. Preliminaries

A. Auto-encoder (AE) Model

The auto-encoder model is an unsupervised feed-forward neural network, which is widely used for feature extraction [19]. The architecture of the simplest AE model consists of three layers: an input layer, a hidden layer, and an output layer (as shown in Fig. 1).

Fig. 1. Architecture of the basic AE.

The total mapping function of AE contains two parts: an encoder function, which maps the input to the feature space, and a decoder function, which is used to reconstruct the input [29]. The parameters of AE are trained by minimizing the reconstruction error.

Assume that the $n$th input sample is denoted by $x(n) \in \mathbb{R}^m$, where $m$ is the number of variables. First, the samples are mapped to the feature space (hidden layer) by the encoder function as follows:
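$$ h(n) = s\big(W_e\, x(n) + b_e\big) \tag{1} $$

where $W_e$ and $b_e$ denote the weights and the bias of the encoder function, respectively, and $s(\cdot)$ is the sigmoid activation function.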

Then the feature expression of the hidden layer is reconstructed back to the input space by the decoder function, given as follows:
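$$ \hat{x}(n) = W_d\, h(n) + b_d \tag{2} $$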

where $W_d$ and $b_d$ denote the weights and the bias of the decoder function, respectively. The total mapping function from the input layer to the output layer is shown as follows:
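$$ \hat{x}(n) = W_d\, s\big(W_e\, x(n) + b_e\big) + b_d \tag{3} $$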

Finally, the BP algorithm can be utilized to optimize all the parameters $W = \{W_e, W_d, b_e, b_d\}$ of the AE model by minimizing the reconstruction error as follows:
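$$ J(W) = \frac{1}{N} \sum_{n=1}^{N} \big\| x(n) - \hat{x}(n) \big\|^2 \tag{4} $$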

where $N$ is the number of training samples.

B. Denoising Criterion

The denoising criterion is a training strategy that helps AE extract more robust features and structure from the input distribution, so that AE can obtain a better representation [24]. An AE model trained with the denoising criterion is also called a denoising auto-encoder (DAE) [30].

The key point of the denoising criterion is to add some noise to the input data before training the whole AE model, and then use the corrupted data to reconstruct the original data. The loss function of DAE can be described by the following equation:
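$$ J(W) = \frac{1}{N} \sum_{n=1}^{N} \big\| x(n) - \hat{x}\big(x(n) + \varepsilon\big) \big\|^2 \tag{5} $$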

where $\varepsilon \sim N(0, \sigma^2 I)$ is the random noise. As with the AE model, the parameters of DAE can be obtained by minimizing the loss function with the BP algorithm.

When the training of the DAE model is completed, the original uncorrupted data are used as input and mapped to the hidden layer to obtain the feature representation.
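To make the denoising criterion concrete, the following is a minimal NumPy sketch of one possible training loop for a single DAE with a sigmoid encoder and a linear decoder (the structure assumed throughout this paper). The data, layer sizes, learning rate, and noise level are illustrative placeholders, not settings taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
N, m, d = 1000, 6, 3            # samples, input variables, hidden nodes (illustrative)
X = rng.normal(size=(N, m))     # stand-in for normal operating data
W_e = rng.normal(scale=0.1, size=(d, m)); b_e = np.zeros(d)
W_d = rng.normal(scale=0.1, size=(m, d)); b_d = np.zeros(m)
lr, sigma = 0.05, 0.3           # learning rate and corruption level (illustrative)

for epoch in range(500):
    Xn = X + rng.normal(scale=sigma, size=X.shape)  # corrupt the input (denoising criterion)
    H = sigmoid(Xn @ W_e.T + b_e)                   # encoder, eq. (1), applied to corrupted data
    X_hat = H @ W_d.T + b_d                         # linear decoder, eq. (2)
    E = X_hat - X                                   # reconstruct the ORIGINAL data, as in eq. (5)
    # gradients of J = (1/N) * sum_n ||x(n) - x_hat(n)||^2 via back-propagation
    gW_d = (2.0 / N) * E.T @ H
    gb_d = (2.0 / N) * E.sum(axis=0)
    dZ = (E @ W_d) * H * (1.0 - H)                  # error propagated through the sigmoid
    gW_e = (2.0 / N) * dZ.T @ Xn
    gb_e = (2.0 / N) * dZ.sum(axis=0)
    W_d -= lr * gW_d; b_d -= lr * gb_d
    W_e -= lr * gW_e; b_e -= lr * gb_e
```

After training, the uncorrupted data are fed through the encoder alone to obtain the feature representation, as described above.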

C. Stacked Auto-encoder (SAE) Model

The stacked auto-encoder model can be used to extract more complex features, and a common way to train an SAE is the greedy layer-wise approach, i.e., training each layer in turn [31]. The original data are used to train the first AE, and the features obtained by the first AE, i.e., the output of the first AE's hidden layer, are used to train the second AE. In a similar fashion, the $i$th AE can be trained in the same way. After the training of all the AEs is completed, they are connected together in the architecture shown in Fig. 2.

As an example, consider the two-layer SAE that is stacked from two AEs. The total mapping function of the SAE is expressed as follows:
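$$ \hat{x} = W_{1d}\Big[\, W_{2d}\, s\big(W_{2e}\, s(W_{1e}\, x + b_{1e}) + b_{2e}\big) + b_{2d} \,\Big] + b_{1d} \tag{6} $$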

where $W_{1e}, W_{1d}, b_{1e}, b_{1d}$ are the parameters of the first AE, and $W_{2e}, W_{2d}, b_{2e}, b_{2d}$ are the parameters of the second AE.

It is easy to extend this to the multilayer SAE stacked from $n$ AEs, as shown in (7), where $W_{ne}, W_{nd}, b_{ne}, b_{nd}$ denote the parameters of the $n$th AE.

The denoising criterion can also be used in the training of every single AE, which gives a stacked denoising auto-encoder (SDAE) that extracts more robust and complex nonlinear features.

Fig. 2. Training procedure of stacked auto-encoder model.

III. Fault Diagnosis Method Based on Auto-encoder

A. Back-propagation Based Contribution (BBC)

Back-propagation based contribution is a fault diagnosis method based on the AE model. In BBC, the nonlinear features extracted by the AE model can be used to obtain better performance in the fault diagnosis task. However, when a fault happens, the fault information propagates through the whole AE model, so all the output variables will contain fault information, which leads to the smearing effect and seriously affects the diagnosis result. Therefore, if we simply calculate a contribution plot using the input and the output of the AE model, we may not get the correct result. In BBC, the smearing effect can be suppressed by considering the propagation of the fault using the BP algorithm.

Similar to RBC, the basic idea of BBC contains two parts. First, find an $f_i$ to adjust the $i$th variable of the online sample $x$ (denoted by $x_i$) such that the corresponding fault detection index is minimized [12]. Second, build an index to measure the magnitude of $f_i$; the final diagnosis result is obtained based on the magnitude of this index.

When the fault detection task is performed on an AE model trained with the denoising criterion, the squared prediction error (SPE) is chosen as the fault detection index, which is calculated by the following equation [12]:
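$$ \mathrm{SPE}(x) = \| x - \hat{x} \|^2 \tag{8} $$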

where $\hat{x}$ is the output of the AE model.

In order to calculate $f_i$, the back-propagation (BP) algorithm is used to describe the propagation of the fault information. BP is a classical algorithm for neural network training; its most important part is to calculate the partial derivative of the prediction error with respect to the weights, and the derivative values are then used to update the weights until the loss function approaches its minimum. When performing RBC, the first step is to calculate the magnitude $f_i$ that minimizes the fault detection index of $x + \xi_i f_i$, where $\xi_i$ is the $i$th column of the identity matrix; this is similar to the purpose of the BP algorithm.

In Fig. 3, the structure of an AE is illustrated, and the formulas below the structure show the procedure of error back-propagation in terms of the inputs and outputs of the $i$th node in the $l$th layer; different formulas have different colors, and each of them corresponds to a path of the same color in the AE structure.

Fig. 3. Error propagation path in BP algorithm.

As shown in Fig. 3, when the BP algorithm calculates the gradient of the weights, the error propagates along an established path at the same time. Due to the similarity between the propagation of the fault information and that of the error, the BP algorithm can be regarded as a way to describe the propagation of the fault and to help calculate the magnitude of the fault reconstruction. Thus, when a fault sample is obtained, the error $E$ between the input and the output of the AE model is calculated first, and then the differential of $E$ with respect to $x_i$ is calculated by the BP algorithm, as shown in (9) and (10).

Assume that the fault magnitude of the variable $x_i$ is $f_i$; integrating both sides of (10), $f_i$ can be calculated by (11).

The contribution index of variable $x_i$ is built as the following equation:

where $\xi_i$ is the $i$th column of the identity matrix and $\hat{\xi}_i$ denotes the reconstruction result of $\xi_i$ by the trained AE model.

Finally, the fault diagnosis task can be completed by finding the variables with significantly large contributions.

B. Locally Linear Back-propagation Based Contribution (LLBBC)

Although BBC considers the propagation of the fault, it cannot thoroughly prevent the smearing effect from distorting the diagnosis result. In this section, a locally linear back-propagation based contribution (LLBBC) method is proposed to solve this problem and to give the correct diagnosis even in the presence of the smearing effect.

It can be seen that a nonlinear part exists in the contribution index of BBC, which represents the mapping function of the neural network and is hard to interpret. Thus, the contribution index of BBC is difficult to describe theoretically, and the contribution of the most relevant variable may not have the largest magnitude. However, the nonlinear mapping is needed to make use of the nonlinear features extracted by AE. In this situation, a number of local linear models can be used to approximate the whole nonlinear AE model (as shown in Fig. 4), which helps to build the contribution index on a linear model. In this way, the nonlinear AE model can be described specifically, and the nonlinear features extracted by AE can be utilized effectively at the same time.

Fig. 4. Description of a locally linear model.

The basic steps of LLBBC are the same as those of BBC; however, the construction of the contribution index needs to be changed. First of all, the AE is trained offline with the denoising criterion on a dataset from the normal operating state. When a fault occurs, the model is linearized at the fault sample $x^*$. Since a linear decoder is used, the nonlinear part of the original mapping function only exists in the encoder function. Thus, we only need to linearize the encoder function, which can be expressed as
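$$ h = s\big(W_e\, x + b_e\big) \approx K_e\, x + B_e \tag{13} $$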

The total mapping function of the local linear model becomes
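$$ \hat{x} \approx K_{de}\, x + B_{de} \tag{14} $$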

The parameters $K_{de}$ and $B_{de}$ in (14) can be calculated by the following equations:
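$$ K_{de} = W_d\, K_e, \qquad K_e = \mathrm{diag}\big(t_1(x^*)(1 - t_1(x^*)), \ldots, t_d(x^*)(1 - t_d(x^*))\big)\, W_e, \qquad B_{de} = \hat{x}(x^*) - K_{de}\, x^* \tag{15} $$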

where $t_i(x)$ is the output of the $i$th node of the hidden layer. The calculation of $K_e$ shown here is based on the sigmoid function; if other activation functions are used, only the diagonal elements need to be changed to the corresponding derivative values.

Then the value of $f_i$ can be obtained by minimizing the squared prediction error.

Assuming that $J = \mathrm{SPE}(x + \xi_i f_i) = \big\| (I - K_{de})(x + \xi_i f_i) - B_{de} \big\|^2$, the minimization can be completed by taking the first derivative of $J$ with respect to $f_i$ and equating it to zero.
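$$ \frac{\partial J}{\partial f_i} = 2\, \xi_i^T (I - K_{de})^T \big[ (I - K_{de})(x + \xi_i f_i) - B_{de} \big] = 0 $$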

It can be further changed to the following form:
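$$ \xi_i^T (I - K_{de})^T (I - K_{de})\, \xi_i\, f_i = -\, \xi_i^T (I - K_{de})^T \big[ (I - K_{de})\, x - B_{de} \big] $$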

Then the solution of $f_i$ can be expressed as
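$$ f_i = -\big( \xi_i^T K^T K\, \xi_i \big)^{-1} \xi_i^T K^T \big( K x - B_{de} \big) $$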

where $K = I - K_{de}$. Since the parameters of the AE are always full-rank, $K$ is invertible and $f_i$ can be calculated as
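$$ f_i = -\big( \xi_i^T M \xi_i \big)^{-1} \xi_i^T M \bar{x}, \qquad M = K^T K, \quad \bar{x} = x - K^{-1} B_{de} \tag{20} $$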

The next step is to build a contribution index. Following the idea used to build the contribution in RBC, a positive semi-definite matrix $M$ is formed so that the correct diagnosis can be given even with the smearing effect. We can see from (20) that the matrix $M$ can be calculated by $M = K^T K$, because the matrix $K$ can be used to map the original data to the feature space, which is similar to the loading matrix in the PCA model. However, differently from RBC, the matrix $M$ in LLBBC changes with the sample. Thus, the LLBBC contribution of the variable $x_i$ can be built as follows:
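$$ \mathrm{LLBBC}_i = f_i^2\, \xi_i^T M \xi_i = \frac{\big( \xi_i^T M \bar{x} \big)^2}{\xi_i^T M \xi_i} \tag{21} $$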

where $M = K^T K$ is symmetric and positive semi-definite. Equation (21) shows that LLBBC is calculated by reconstruction along each variable; however, like RBC, the diagnostic problem that LLBBC can deal with is not limited to single-variable faults. With the help of the nonlinear features, the faulty variables will have significantly larger LLBBC values than the irrelevant variables.

The steps of LLBBC can be summarized as follows and are shown in Fig. 5 (a code sketch of these steps is given after the list):

1) Use the normal state data to train the AE offline with the denoising criterion.

2) When the fault sample $x^*$ has been obtained, calculate the parameters of the locally linearized AE model at $x^*$ by (15).

3) Calculate the contribution index of variable $x_i$ by (21).

4) Complete the diagnosis task by finding the variables with large contribution values.
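As a summary of steps 1)–4), the following NumPy sketch computes the LLBBC contributions of one fault sample from trained AE weights. It assumes the sigmoid-encoder/linear-decoder structure and the forms of (15), (20), and (21) given above, with $K$ invertible; the function name and variables are illustrative, not part of the original method description.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def llbbc_contributions(x_star, W_e, b_e, W_d, b_d):
    """LLBBC contribution of every input variable at the fault sample x_star."""
    t = sigmoid(W_e @ x_star + b_e)               # hidden outputs t_i(x*)
    K_e = (t * (1.0 - t))[:, None] * W_e          # local Jacobian of the encoder, eq. (15)
    K_de = W_d @ K_e                              # slope of the locally linear AE
    B_de = (W_d @ t + b_d) - K_de @ x_star        # offset of the locally linear AE
    K = np.eye(x_star.size) - K_de
    M = K.T @ K                                   # symmetric positive semi-definite
    x_bar = x_star - np.linalg.solve(K, B_de)     # bias-adjusted sample (K assumed invertible)
    Mx = M @ x_bar                                # (M x_bar)_i = xi_i^T M x_bar
    return Mx**2 / np.diag(M)                     # LLBBC_i, eq. (21)

# diagnosis: the variable with the largest contribution, e.g.,
# i_fault = int(np.argmax(llbbc_contributions(x, W_e, b_e, W_d, b_d)))
```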

Fig. 5. Flowchart of LLBBC.

C. LLBBC in SAE

In order to extract deeper features, SAE can be used in place of AE. LLBBC can then be easily constructed on SAE with a structure similar to that for AE, where only a few changes are needed in the calculation of the parameters.

Because SAE contains several encoders and decoders, the parameters of each AE should be calculated first; then the parameter $K_{de}$ can be obtained as shown in (22).
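$$ K_{de} = W_{1d} W_{2d} \cdots W_{nd}\, D_n W_{ne} \cdots D_2 W_{2e}\, D_1 W_{1e}, \qquad D_i = \mathrm{diag}\big( t_1^{(i)}(1 - t_1^{(i)}), \ldots, t_{d_i}^{(i)}(1 - t_{d_i}^{(i)}) \big) \tag{22} $$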

where $d_i$ is the number of hidden layer nodes of the $i$th AE, $t_j^{(i)}$ denotes the output of the $j$th hidden layer node in the $i$th AE, $n$ represents the number of AEs, and $W_{ie}$, $W_{id}$ denote the weights of the $i$th AE.

After $K_{de}$ has been calculated, $M$ can be calculated as shown in (23), and the contribution index is given in the same way as in LLBBC on a single AE.
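$$ M = (I - K_{de})^T (I - K_{de}) \tag{23} $$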

D. Fault Smearing in LLBBC

In this section, it is proved that the smearing effect will not affect the diagnosis result of LLBBC. As shown in (21), after using the local linear model, the calculation of the contribution index is the same as that of RBC. Thus, the same proof procedure can be used here [12].

Assume that the fault sample $x^*$ lies exactly in the $j$th direction, that is, $x^* = \xi_j f$. Then the index can be calculated by the following equations:
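$$ \mathrm{LLBBC}_i = \frac{\big( \xi_i^T M \xi_j f \big)^2}{\xi_i^T M \xi_i} = \frac{M_{ij}^2 f^2}{M_{ii}}, \qquad \mathrm{LLBBC}_j = \frac{\big( \xi_j^T M \xi_j f \big)^2}{\xi_j^T M \xi_j} = M_{jj} f^2 $$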

According to the property of the semi-definite matrix, we have
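$$ M_{ij}^2 \le M_{ii} M_{jj} $$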

which implies
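$$ \mathrm{LLBBC}_i = \frac{M_{ij}^2 f^2}{M_{ii}} \le M_{jj} f^2 = \mathrm{LLBBC}_j. $$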

This shows that $\mathrm{LLBBC}_j \ge \mathrm{LLBBC}_i$, which guarantees that LLBBC gives the correct diagnosis.

E. The Relationship Between LLBBC and RBC

If LLBBC is used in the linear PCA model, then the parameters of LLBBC can be given as follows:
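$$ K_{de} = P P^T, \qquad K = I - P P^T, \qquad M = K^T K = I - P P^T $$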

where $P$ denotes the loading matrix of the PCA model.

Then, substituting $P$ into (20), the value of $f_i$ can be obtained by
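$$ f_i = -\big( \xi_i^T (I - P P^T)\, \xi_i \big)^{-1} \xi_i^T (I - P P^T)\, x $$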

Finally, the contribution index of LLBBC can be described by the following equations:
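$$ \mathrm{LLBBC}_i = \frac{\big( \xi_i^T (I - P P^T)\, x \big)^2}{(I - P P^T)_{ii}} $$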

From the above equations, it can be seen that both the value of $f_i$ and the index are the same as those of RBC. Hence, it can be concluded that LLBBC is equivalent to RBC in the PCA model.

Fig. 6 shows the relationship among RBC, BBC, and LLBBC. The three methods are based on different models but share similar ideas. PCA and AE have similar model structures, while the locally linearized AE is built on the basis of AE by performing local linearization at the online sample, so LLBBC can adapt to nonlinearity better.

Fig. 6. Relationship diagram of the three methods.

IV. Case Study

In this section, the proposed fault diagnosis method LLBBC is applied to a nonlinear numerical example and the simulated Tennessee Eastman process, and its performance is compared with RBC (SPE index), BSPCA, KPCA-RBC, and BBC. The number of principal components in the PCA model is determined by a cumulative variance of 90%. The Gaussian kernel $k(x, y) = \exp(-\|x - y\|^2 / c)$ is used in the KPCA model, and the parameter $c$ is selected by the empirical formula $c = 10m$, where $m$ is the dimension of the input space [4]. The maximum number of iterations is 20 in KPCA-RBC, and the convergence condition is set to $|f(k-2) - f(k)| < 0.0001$, where $f(k)$ denotes the value of the contribution in KPCA-RBC after the $k$th iteration. All the simulations are performed on a Core i7-6700 CPU.

A. A Nonlinear Numerical Example

The data of the nonlinear numerical example are generated by (30).

where $t_1, t_2, t_3$ are latent variables subject to the zero-mean Gaussian distribution with variance 0.6, and $\varepsilon$ denotes the noise, which follows the zero-mean Gaussian distribution with variance 0.02.

The faults are set in the form $x_{\mathrm{fault}} = x^* + \xi_i f$, where $x^*$ is the normal state data generated by (30) and $\xi_i$ is the fault direction, chosen from among the six possible variable directions.

The training dataset includes 1000 normal state samples. The number of hidden nodes in the AE is 3, and the whole model is trained with the denoising criterion; the noise chosen to corrupt the original normal state data follows $N(0, 0.09)$. The testing dataset includes 1000 samples, and 6 types of faults are introduced in the last 600 samples. The details of the 6 fault modes are listed in Table I.

Fig. 7. Fault diagnosis results for the numerical example (a) RBC; (b) BSPCA; (c) KPCA-RBC; (d) BBC; (e) LLBBC.

TABLE I Fault Descriptions in the Numerical Example

Fig. 7 illustrates the detailed fault diagnosis results given by RBC, BSPCA, KPCA-RBC, BBC, and LLBBC. The blue points in Fig. 7 indicate the variable with the biggest contribution from sample 401 to sample 1000, i.e., the diagnosis result. Table II shows the average fault diagnosis accuracy of the different methods over 10 test runs.

According to Fig. 7 and Table II, both BBC and LLBBC perform better than the other methods on most of the faults, especially faults 1, 2, 3, and 6. Since BBC and LLBBC can utilize the nonlinear features extracted by AE, the diagnostic performance is improved enormously. Moreover, on the basis of BBC, LLBBC improves the construction of the contribution index to better suppress the smearing effect. Thus, from those results, it can be seen that LLBBC has higher accuracy than BBC on all the faults, especially fault 5, where BBC has the worst performance among all the methods while LLBBC still has the highest accuracy.

B. Tennessee Eastman Process

The Tennessee Eastman (TE) process is a chemical testing platform developed from a realistic chemical reaction process. It has been widely used in recent years for evaluating and comparing the performance of various process monitoring methods [32]. The TE benchmark process contains five major operating units: reactor, condenser, compressor, separator, and stripper. The schematic diagram of the TE process is illustrated in Fig. 8.

TABLE II Average Fault Diagnosis Accuracy of Different Methods (%)

Among all the 52 process variables, 33 measurement variables are selected for fault diagnosis in the TE process, and their descriptions are given in Table III. Besides, 5 faults are chosen and listed in Table IV for the comparison of fault diagnosis performance. The training dataset includes 500 samples acquired from the normal operating condition, and five testing datasets are set up with 100 fault samples in each fault mode.

The performances of RBC, BSPCA, KPCA-RBC, BBC, LLBBC, and LLBBC in SAE are compared in this subsection. The AE is also trained with the denoising criterion, the noise for corruption follows $N(0, 0.09)$, and the number of hidden layer nodes in the AE is 28. The structure of the SAE in this simulation is 33-60-28-60-33, which means the input layer has 33 nodes, the first hidden layer has 60 nodes, and the second hidden layer has 28 nodes.

Figs. 9–13 illustrate the detailed fault diagnosis results for the five faults, where the length of each bar represents the contribution of each variable under the different methods; the contributions shown in these figures are averaged over the corresponding testing dataset. Table V collects the average time required by KPCA-RBC and LLBBC to diagnose one sample over 10 test runs. Note that the diagnosis time of LLBBC listed in Table V is the sum of the time for establishing the local linear model and the time for computing the contribution.

Fig. 8. Schematic diagram of the TE process.

TABLE III Description of the Variables in TE Process

TABLE IV Fault Descriptions in the TE Process

Figs. 9 and 10 illustrate the diagnosis results for fault 1 and fault 4. According to the fault descriptions listed in Table IV, the most relevant variables to fault 1 are variables 1 and 25, and the most relevant variable to fault 4 is variable 32. It can be seen from Fig. 9 that the nonlinear methods, KPCA-RBC, BBC, LLBBC, and LLBBC in SAE, can all give the correct diagnosis, while the results given by RBC and BSPCA are incorrect. This is because RBC, as a linear diagnosis method, cannot handle the nonlinear problem, and BSPCA has limited ability to extract nonlinear features. Fig. 10 shows that the results given by RBC and BSPCA are confusing, for the contributions of variable 9 have large magnitudes. LLBBC can utilize the nonlinear features extracted by AE and suppresses the smearing effect better; therefore, LLBBC makes the contributions of the irrelevant variables lower than the other methods do.

TABLE V Diagnostic Time for KPCA-RBC and LLBBC (s)

Fig. 9. Fault diagnosis of fault 1 (a) RBC; (b) BSPCA; (c) KPCA-RBC; (d) BBC; (e) LLBBC; (f) LLBBC in SAE.

Fig. 10. Fault diagnosis of fault 4 (a) RBC; (b) BSPCA; (c) KPCA-RBC; (d) BBC; (e) LLBBC; (f) LLBBC in SAE.

Fig. 11 shows the diagnosis results of fault 5, for which the most relevant variable is variable 33. According to Fig. 11, RBC and BSPCA give a high contribution to variable 17, which is irrelevant to fault 5. A comparison among Figs. 11(c)–(f) shows that LLBBC outperforms BBC, for variable 17 has a lower contribution in LLBBC; moreover, Fig. 11(f) shows that when LLBBC is used with the SAE model, the contribution of the key variable is more prominent, which means the deeper features extracted by SAE help to provide a more convincing diagnosis result.

Fig. 11. Fault diagnosis of fault 5 (a) RBC; (b) BSPCA; (c) KPCA-RBC; (d) BBC; (e) LLBBC; (f) LLBBC in SAE.

Fig. 12. Fault diagnosis of fault 10 (a) RBC; (b) BSPCA; (c) KPCA-RBC; (d) BBC; (e) LLBBC; (f) LLBBC in SAE.

As for the feed C temperature fault (fault 10), the most relevant variable is variable 18: because material C is sent directly to the stripper, the temperature of material C is reflected in the stripper temperature. As shown in Fig. 12, all 6 methods give the correct diagnosis. Fig. 13 illustrates the diagnosis results of fault 14, whose key variables are variables 9, 21, and 32. According to Fig. 13, RBC and BSPCA can only identify variable 21, while the other four nonlinear methods all give the correct diagnosis. However, it should be emphasized that LLBBC performs better than BBC, for all the irrelevant variables have very low contributions in LLBBC.

Fig. 13. Fault diagnosis of fault 14 (a) RBC; (b) BSPCA; (c) KPCA-RBC; (d) BBC; (e) LLBBC; (f) LLBBC in SAE.

The results illustrated in Figs. 9–13 show that the performance of KPCA-RBC is as good as that of LLBBC; however, comparing the diagnostic times listed in Table V, the time required for KPCA-RBC to diagnose one sample is about 2500 times that of LLBBC. When RBC is performed in the KPCA model, an iterative process is needed to obtain the final contribution, and a kernel vector must be calculated in each iteration, so it takes a long time to obtain the final diagnosis results, while LLBBC can give the diagnosis result immediately once the sample is obtained. Thus, LLBBC is more suitable for online fault diagnosis than KPCA-RBC.

In summary, the above fault diagnosis results show that, with the help of the nonlinear features extracted by AE, both BBC and LLBBC have stronger fault diagnosis capabilities than RBC and BSPCA. Besides, compared to BBC, LLBBC gives more accurate diagnosis results, assigning lower contributions to the irrelevant variables, and the suppression of the smearing effect is visible in the LLBBC results. Moreover, compared to KPCA-RBC, LLBBC has a much faster diagnostic speed. Since few deeper features exist in this simulation case, the improvement given by SAE is not obvious; nevertheless, comparing LLBBC in AE with LLBBC in SAE shows that LLBBC in SAE is slightly better in some of the diagnosis results, as the contributions of the key variables stand out more clearly among all the variables.

V. Conclusion

In this paper, a novel locally linear back-propagation based contribution is proposed for industrial process fault diagnosis. The basic idea behind this method is similar to that of the traditional RBC method. However, compared to traditional RBC, LLBBC is based on the AE model, which uses the nonlinear features extracted by the AE model to improve fault diagnosis accuracy. Besides, instead of using the trained AE model directly, LLBBC uses a locally linear model at the current fault sample to calculate the contribution, which always gives the correct diagnosis even in the presence of the smearing effect because of the special structure of the contribution index. Furthermore, LLBBC can be easily used with the SAE model, which is able to extract more complex nonlinear features. The results of two case studies, including a nonlinear numerical process and the TE process, show that LLBBC has better diagnosis performance.

The local linearization technique used in LLBBC can also be regarded as a way to connect RBC with nonlinear models, which helps make RBC more suitable for nonlinear fault diagnosis tasks while suppressing the smearing effect at the same time.
