Digital Object Identifier 10.1109/ACCESS.2017.2749498
ABSTRACT Dynamic principal component analysis (DPCA) is generally employed in monitoring dynamic
processes and typically incorporates all measured variables. However, for a large-scale process, the inclusion
of variables without fault-relevant information may cause redundancy and degrade monitoring performance.
In this paper, the influence of variable and time-lagged variable selection on the DPCA monitoring perfor-
mance is analyzed. Then, a fault-relevant performance-driven distributed monitoring scheme is proposed
to achieve efficient fault detection and diagnosis. First, performance-driven process decomposition is
performed, and the optimal subset of variables and time-lagged variables for each fault is selected through
a stochastic optimization algorithm. Second, local DPCA models are established to characterize the process
dynamics and generate fault signature evidence. Finally, a Bayesian diagnosis system with the most efficient
evidence sources is established to identify the process status. Case studies on a numerical example and the
Tennessee Eastman benchmark process demonstrate the efficiency of the proposed monitoring scheme.
INDEX TERMS Distributed monitoring, large-scale dynamic process, dynamic principal component
analysis, Bayesian fault diagnosis.
2169-3536 © 2017 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
VOLUME 5, 2017
Y. Wang et al.: Data-Driven Optimized Distributed DPCA for Efficient Monitoring of Large-Scale Dynamic Processes
fault diagnosis, and numerous studies on Bayesian fault diagnosis have been conducted [21]–[23]. Recently, an optimal Bayesian diagnosis system for PCA-based process monitoring was developed [20]. However, inclusion of the Bayesian diagnosis system into the distributed monitoring of a large-scale dynamic process has not been performed.

The main contributions of the current work can be summarized as follows. (i) The influence of process decomposition on the DPCA monitoring performance is theoretically analyzed, which will enhance the theoretical basics of dynamic process monitoring. (ii) A performance-driven process decomposition method, which selects the optimal subset of variables and time-lagged variables to achieve the best possible monitoring performance, is proposed. (iii) An efficient Bayesian diagnosis system with optimal evidence sources is established, which identifies the process status of a dynamic process. The remainder of the paper is structured as follows. Section 2 reviews the basics of DPCA monitoring and Bayesian diagnosis. The influence of process decomposition on the DPCA monitoring performance is also analyzed. Section 3 presents in detail the performance-driven distributed DPCA monitoring scheme. Section 4 provides application results on a numerical example and the Tennessee Eastman (TE) benchmark process. Finally, Section 5 draws the conclusions.

II. PRELIMINARIES
A. DPCA PROCESS MONITORING
PCA establishes a correlation-concerned model for process monitoring. Given a normalized data matrix X = [x_1, ..., x_m] ∈ R^{N×m}, with N denoting the number of observations and m denoting the number of measured variables, the PCA transformation matrix U can be obtained by performing eigenvalue decomposition on the covariance matrix Σ. Given a new sample x ∈ R^{m×1}, the i-th principal component (PC) score t_i can be calculated as follows [20]:

t_i = \frac{x^T p_i}{\sqrt{\lambda_i}},  (1)

where λ_i is the i-th eigenvalue corresponding to the i-th PC, and p_i ∈ R^{m×1} is the i-th eigenvector in the transformation matrix U. Two statistics, named T^2 and Q, are constructed for fault detection as follows [1]:

T^2 = t^T t = \sum_{i=1}^{r} t_i^2 \le T^2_{lim},  (2)

Q = (x - \hat{x})^T (x - \hat{x}) = x^T (I - U_r U_r^T) x \le Q_{lim},  (3)

where r is the number of retained PCs, T^2_{lim} denotes the confidence limit of T^2, \hat{x} = U_r t is the recovered data, U_r is the loading matrix with r retained projecting vectors, and Q_{lim} is the confidence limit of the Q statistic.

The above equations represent the static PCA monitoring model, which efficiently characterizes the cross-correlation among variables. For a dynamic process, auto-correlation also exists, and the current values of variables depend on the past values. The auto-correlation, or at least the relationship between the current sample and the previous sample, should be captured. DPCA takes auto-correlation into account by using the ''time lag shift'' method [7], [9]. In DPCA, the training data set becomes Z = [X(k), X(k-1), ..., X(k-l)] ∈ R^{N×(lm+m)}. Here, we take the first-order model, i.e., Z = [X(k), X(k-1)], as an example. A new sample at time k then becomes z = [x_1(k), ..., x_m(k), x_1(k-1), ..., x_m(k-1)]^T.

B. INFLUENCE OF VARIABLE SELECTION ON DPCA MONITORING PERFORMANCE
In monitoring a large-scale process, a distributed monitoring scheme can be employed to reduce the redundancy, in which the process decomposition is the key step and will significantly affect the monitoring performance. In the present study, we analyze the influence of variable selection on DPCA monitoring performance by considering cross-correlation and auto-correlation.

Let a numerical example with two variables x_1 and x_2 be

x_1(k) = φ x_1(k-1) + ε_1(k)
x_2(k) = θ x_1(k) + ε_2(k),  (4)

where integer k denotes the time index and ε_1(k) and ε_2(k) are serially and jointly independent zero-mean Gaussian random variables with variances 1-φ^2 and σ^2, respectively. The auto-correlation and cross-correlation are determined by parameters φ and θ, respectively. The fault model can be constructed as follows:

x_1(k) = x_{1,N}(k) + β_1 f
x_1(k-1) = x_{1,N}(k-1) + β_2 f
x_2(k) = x_{2,N}(k) + β_3 f,  (5)

where x_{1,N}(k), x_{1,N}(k-1) and x_{2,N}(k) represent the standardized data under normal status and f denotes the fault specified by parameters β_1, β_2, and β_3. The following cases are considered in this study: (i) without auto-correlation or cross-correlation; and (ii) with auto-correlation or cross-correlation.

1) WITHOUT AUTO-CORRELATION OR CROSS-CORRELATION
If φ = 0 and θ = 0, then x_1(k), x_1(k-1) and x_2(k) are uncorrelated. Let z = [z_1, z_2, z_3]^T = [x_1(k), x_1(k-1), x_2(k)]^T denote a measurement; the covariance matrix of a full DPCA with all three variables involved can be obtained as

\Sigma = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.  (6)

The full DPCA can be obtained as

t_1 = z_1,  t_2 = z_2,  t_3 = z_3,  (7)

which indicates that the three PCs (t_1, t_2, and t_3) are exactly the same as the three variables (z_1, z_2, and z_3). The T^2 statistic of a DPCA model can be constructed as

T^2 = \sum_{j=1}^{3} t_j^2 = z_1^2 + z_2^2 + z_3^2 \le \chi^2_\alpha(3).  (8)
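As a minimal sketch (not the authors' code), the DPCA model of Eqs. (1)–(3) with the first-order ''time lag shift'' can be written as follows. Function names, the z-score normalization, and the eigendecomposition-of-covariance route are our assumptions about an otherwise standard implementation:

```python
import numpy as np

def build_lagged(X, l=1):
    """Time lag shift: rows of Z are [x(k), x(k-1), ..., x(k-l)]."""
    N = X.shape[0]
    return np.hstack([X[l - j:N - j] for j in range(l + 1)])

def dpca_fit(Z, r):
    """Eigendecomposition of the covariance of normalized lagged data."""
    mu, sd = Z.mean(0), Z.std(0)
    Zs = (Z - mu) / sd
    lam, P = np.linalg.eigh(np.cov(Zs, rowvar=False))
    order = np.argsort(lam)[::-1]            # sort eigenvalues descending
    return lam[order][:r], P[:, order][:, :r], mu, sd

def monitor(z, lam, Pr, mu, sd):
    """T^2 (Eq. (2)) and Q (Eq. (3)) for one new sample z."""
    zs = (z - mu) / sd
    t = (Pr.T @ zs) / np.sqrt(lam)           # scaled scores, Eq. (1)
    resid = zs - Pr @ (Pr.T @ zs)            # part left in the residual space
    return float(t @ t), float(resid @ resid)
```

A new sample is flagged when T^2 or Q exceeds its control limit, exactly as in the static PCA case but on the lag-augmented measurement.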
and x_2(k) (or x_1(k-1)), and the dashed red lines denote the control limits used for monitoring each variable individually. Various faults can be generated by the fault parameters β_1, β_2, and β_3. When a fault occurs along the F1 direction (yellow area), the DPCA model with only one variable x_1(k) will perform better because \chi^2_\alpha(1) < \chi^2_\alpha(2). Thus, the inclusion of the variable x_2(k) or the time-lagged variable x_1(k-1) in the DPCA model is unnecessary. When a fault occurs along the F2 direction (green area), the DPCA model with only x_2(k) or x_1(k-1) will perform better. Thus, the inclusion of x_1(k) in the model is unnecessary. When a fault occurs along the F3 direction (gray area), the PCA model with x_1(k) and x_2(k) will provide good monitoring performance.

The T^2 statistic of a reduced DPCA model using z_1 and z_2 can be given as

T_r^2 = \frac{z_1^2 + z_2^2 - 2 a_{12} z_1 z_2}{1 - a_{12}^2} \le \chi^2_\alpha(2),  (15)

and the detection delay is J_2 = \sqrt{\frac{(1 - a_{12}^2)\,\chi^2_\alpha(2)}{\beta_1^2 + \beta_2^2 - 2 a_{12} \beta_1 \beta_2}}. Because \chi^2_\alpha(2) < \chi^2_\alpha(3), in this situation J_2 will generally be smaller than J_f, and the reduced DPCA model will provide better performance if we assume that β_3 = 0.

Then, the monitoring performance of the two-variable DPCA model and the one-variable DPCA model is geometrically explained. The first PC will be used to generate the T^2 statistic, whereas the second PC will be left in the residual space.
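The ordering of control limits that drives this argument is easy to check numerically. The sketch below verifies \chi^2_\alpha(1) < \chi^2_\alpha(2) < \chi^2_\alpha(3) and evaluates the reduced statistic of Eq. (15); the significance level α = 0.01 is an assumed example value, not one specified by the paper:

```python
from scipy.stats import chi2

alpha = 0.01                                    # assumed significance level
limits = {df: chi2.ppf(1 - alpha, df) for df in (1, 2, 3)}
# Fewer retained variables -> fewer degrees of freedom -> tighter control
# limit -> earlier detection for faults along the retained direction.

def t2_reduced(z1, z2, a12):
    """Two-variable T^2 of Eq. (15) with correlation coefficient a12."""
    return (z1**2 + z2**2 - 2.0 * a12 * z1 * z2) / (1.0 - a12**2)
```

For uncorrelated variables (a12 = 0) the statistic reduces to the sum of squares z_1^2 + z_2^2, matching Eq. (8) restricted to two variables.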
The Q statistic is constructed to monitor the variations in the residual subspace. The monitoring chart using T^2 and Q in DPCA monitoring is presented in Fig. 1(b), which geometrically illustrates the DPCA monitoring. In Fig. 1(b), the dashed blue ellipse denotes the control limit of T^2 with x_1(k) and x_2(k) (or x_1(k-1)), the parallel lines C1D1 and C2D2 denote the control limit of Q, the parallel lines A1B1 and A2B2 denote the control limit of T^2 with only one PC retained, and the dashed red lines denote the control limits used for monitoring each variable individually. Various faults can be generated by the fault parameters β_1, β_2, and β_3. When the fault occurs along the F1 direction (yellow area), the PCA model with only x_1(k) will perform better because the confidence limit of x_1(k) will be reached earlier. When a fault occurs along the F2 direction (green area), the PCA model with only x_2(k) or x_1(k-1) will perform better. When the fault occurs along the F3 direction (gray area), the monitoring model with x_1(k) and x_2(k) will provide better monitoring performance. The gray area is the largest among the three areas, indicating that the inclusion of the two variables in one DPCA model is important.

In some situations, φ ≠ 0 and θ ≠ 0, which means that all variables are correlated with one another. In this situation, no fault-irrelevant variable usually exists, because the correlation between the variables will provide useful information to detect a fault influencing any of the variables. Based on the above analysis, the existence of irrelevant variables may cause redundancy in the monitoring and may degrade the monitoring performance, which requires that a reduced monitoring model be established. Determining a fault-irrelevant variable through mathematical analysis is difficult because the process model is usually unavailable. In the current work, a stochastic optimization-based method is employed. This method eliminates the fault-irrelevant variables based entirely on process data.

C. BAYESIAN FAULT DIAGNOSIS
Bayesian diagnosis identifies the underlying process status based on the currently obtained evidence e^c and historical data D. According to Bayes' rule, the posterior probability of each possible process status can be calculated as follows [20]:

p(F | e^c, D) = \frac{p(e^c | F, D)\, p(F | D)}{\sum_F p(e^c | F, D)\, p(F | D)},  (16)

where p(e^c | F, D) is the likelihood of evidence e^c under process status F obtained from historical data and p(F | D) is the prior probability of fault status F. The process status with the largest posterior probability can be determined as the underlying process status based on the maximum a posteriori (MAP) principle. To obtain the likelihood from the process history, a marginalization-based solution that involves prior knowledge is provided as follows [22], [23]:

p(e_i | F_j, D) = \frac{n(e_i | D_{F_j}) + \alpha(e_i | D_{F_j})}{\sum_{h=1}^{K} n(e_h | D_{F_j}) + \sum_{h=1}^{K} \alpha(e_h | D_{F_j})},  (17)

where n(e_i | D_{F_j}) is the number of evidence e_i under fault status F_j and α(e_i | D_{F_j}) is the number of prior samples assigned to e_i under fault status F_j. The likelihood accounts for prior and historical samples and will converge to the relative frequency determined by the historical data as the number of historical samples increases [23]. Meanwhile, due to the existence of the prior sample term, the calculation amount can significantly increase with the number of evidence sources [19], [20]. Then, the main task in using the Bayesian diagnosis system falls on how to generate efficient fault signature evidence.

III. FAULT-RELEVANT PERFORMANCE-DRIVEN DISTRIBUTED MONITORING SCHEME
A. FAULT-RELEVANT PERFORMANCE-DRIVEN PROCESS DECOMPOSITION
The objective of process decomposition is to decompose a process into sub-blocks to reduce the redundancy while the detectability should not be destroyed. When an accurate process model and fault model are available, the decomposition can be conducted through mathematical analysis. However, in practice, the analysis may be difficult because (i) the process model or fault model is usually unavailable and (ii) the analysis will be complex when the number of process variables and fault parameters is large [19]. In practical applications, some faults occur constantly or periodically, and these fault data can be collected from process history. Given these reasons, the current study uses a stochastic optimization approach, i.e., the genetic algorithm (GA), to achieve optimal process decomposition.

In using GA, the first step is to establish the fitness function. Assume that process data of G faults, denoted as F = {F_1, F_2, ..., F_G}, can be obtained from process history. Let the measurement in DPCA monitoring be z = [z_1, ..., z_{2m}]^T = [x_1(k), ..., x_m(k), x_1(k-1), ..., x_m(k-1)]^T. In performance-driven process decomposition, the sub-blocks are constructed by selecting a subset of variables for each fault. These variables provide the best description of the fault, and the DPCA monitoring model built on these variables achieves the best possible fault detection performance (defined by the non-detection rate, NDR). Meanwhile, the false alarm rate (FAR) should be retained at an acceptable level that can be specified according to practical application requirements. For a specific fault F_i, the fitness function is established as follows [19]:

\min_{z_j} NDR = \frac{N_{F,N}}{N_F} \times 100\%
s.t. FAR = \frac{N_{N,F}}{N_N} \times 100\% \le CL,  (18)

where N_{F,N} and N_F denote the number of non-detected fault points and the number of fault points, respectively; N_{N,F} and N_N denote the number of false alarm points and the number of normal samples, respectively; and CL is the specified maximum FAR allowed in practical application.
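Evaluating Eq. (18) for a candidate variable subset reduces to counting alarms on validation data. A hedged sketch follows; the penalty handling of the FAR constraint is our assumption, not the paper's exact GA setup:

```python
import numpy as np

def ndr_far_fitness(alarms_fault, alarms_normal, CL=5.0):
    """NDR (%) to be minimized, subject to FAR (%) <= CL, as in Eq. (18).

    alarms_fault / alarms_normal: boolean alarm indicators produced by a
    candidate DPCA sub-model on faulty and normal validation samples.
    """
    NDR = 100.0 * np.count_nonzero(~alarms_fault) / alarms_fault.size
    FAR = 100.0 * np.count_nonzero(alarms_normal) / alarms_normal.size
    # Infeasible subsets receive a large penalty so the GA avoids them.
    return NDR if FAR <= CL else NDR + 1000.0
```

The GA then searches over variable subsets, calling this function on each temporarily selected subset's DPCA model.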
FIGURE 2. Chromosome design for fault-relevant performance-driven process decomposition.
FIGURE 3. Chromosome design for fault-relevant performance-driven evidence source selection.
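The 0/1 chromosomes of Figs. 2 and 3 can be evolved with a standard GA loop. The sketch below uses binary tournament selection, one-point crossover, and bit-flip mutation; these operator choices and parameter values are illustrative assumptions, and the toy objective used in testing is not the paper's NDR/MCR fitness:

```python
import random

def evolve(n_genes, fitness, pop_size=30, gens=60, pm=0.05, seed=0):
    """Minimize `fitness` over 0/1 chromosomes (one gene per candidate
    variable or evidence source, as in Figs. 2 and 3)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(gens):
        # Binary tournament selection of parents (lower fitness wins)
        parents = [min(rng.sample(pop, 2), key=fitness) for _ in range(pop_size)]
        pop = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = rng.randrange(1, n_genes)          # one-point crossover
            for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                pop.append([g ^ (rng.random() < pm) for g in child])  # mutation
    return min(pop, key=fitness)
```

In the paper's setting, `fitness` would wrap the DPCA model building and NDR/FAR (or MCR) evaluation for the subset encoded by the chromosome.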
After constructing the fitness function, the chromosome in GA is designed, as illustrated in Fig. 2. In the chromosome, each gene (element) is designed to encode a variable. For instance, a ''1'' indicates that the corresponding variable should be selected, whereas a ''0'' indicates that the corresponding variable should be removed. Using the temporarily selected variables, a DPCA model can be established. Based on the validation data, the fitness function value can be calculated. The GA continues until the best possible performance is achieved or a stop rule is reached.

B. FAULT-RELEVANT PERFORMANCE-DRIVEN OPTIMAL DESIGN OF BAYESIAN DIAGNOSIS SYSTEM
In using Bayesian diagnosis, the first step is to generate fault signature evidence. For the b-th sub-block, assuming that m_b variables are present in z_b = [z_{b,1}, z_{b,2}, ..., z_{b,m_b}]^T, the DPCA model can be established in the sub-block. The fault signature evidence can be generated by examining the status of each principal component and the DPCA residual. The scaled i-th component score can be obtained as follows:

t_{b,i} = \frac{z_b^T p_{b,i}}{\sqrt{\lambda_{b,i}}},  (19)

and the corresponding T^2 statistic of this component, T^2_{b,i}, can be calculated as follows:

T^2_{b,i} = t^2_{b,i} \sim \chi^2(1).  (20)

The evidence e_b = [π_{b,1}, π_{b,2}, ..., π_{b,r_b}, π_{b,r_b+1}] can be generated by discretizing the statistics as follows:

\pi_{b,i} = \begin{cases} 0, & \text{if } T^2_{b,i} \le \chi^2_\beta(1) \\ 1, & \text{if } T^2_{b,i} > \chi^2_\beta(1) \end{cases} \quad (i = 1, 2, ..., r_b),  (21)

\pi_{b,r_b+1} = \begin{cases} 0, & \text{if } Q_b \le Q_{CL,b} \\ 1, & \text{if } Q_b > Q_{CL,b}, \end{cases}  (22)

where r_b denotes the number of retained PCs in the b-th sub-block, Q_b is the Q statistic in the b-th sub-block, and Q_{CL,b} is the control limit of Q_b. Then, for all B sub-blocks, s = \sum_{b=1}^{B} r_b + B bits of evidence sources will be generated, and the total number of possible evidence values will be K = 2^s. The calculation amount can significantly increase with the number of evidence sources, which can eventually exceed the computer capacity. By contrast, the evidence sources have distinct importance in discriminating a fault, and the existence of irrelevant evidence sources may degrade the monitoring performance [19]. Determining an appropriate number of the most efficient evidence sources to serve as the input of the Bayesian diagnosis system is thus important. Here, the GA is employed. For the fault diagnosis problem, the main index is the misclassification rate (MCR), and the fitness function can be constructed as follows [20]:

\min_{\pi_i} MCR = \frac{N_{MC}}{N} \times 100\%
s.t. s \le s_{max},  (23)

where N_{MC} and N denote the number of wrongly classified points and the number of all considered points, respectively. s_{max} is the acceptable maximum number of retained evidence sources, which is specified according to computation capacity. The chromosome is designed to include all evidence sources, and the existence of each source is encoded by the values of the elements in the chromosome, as illustrated in Fig. 3.

The procedures of the proposed monitoring scheme consist of offline modeling and online monitoring as follows:

1) OFFLINE MODELING
1) Collect historical training data under different process statuses;
2) Construct new measurements using the 'time lag shift' method;
3) Perform GA-based process decomposition;
4) Establish distributed DPCA monitoring models and generate fault signature evidence;
5) Establish the Bayesian fault diagnosis system.

2) ONLINE MONITORING
1) Construct the new measurement using the 'time lag shift' method;
2) Divide the new measurement into the previously obtained sub-blocks;
3) Transform the measurement into fault signature evidence;
4) Make a decision using the Bayesian diagnosis system.

IV. CASE STUDIES
A. CASE STUDY ON A NUMERICAL EXAMPLE
A numerical AR(1) process modified from [7] is employed to illustrate the performance of the proposed distributed monitoring scheme, which is as follows:

u_1(k) = A_1 u_1(k-1) + A_2 w_1(k-1)
h_1(k) = A_3 h_1(k-1) + A_4 u_1(k-1)
u_2(k) = A_1 u_2(k-1) + A_2 w_2(k-1)
h_2(k) = A_3 h_2(k-1) + A_4 u_2(k-1)
y(k) = h_1(k) + h_2(k) + v(k),  (24)
FIGURE 4. Variable selection results for (a) Fault 1, (b) Fault 2, and (c) Fault 3.
FIGURE 5. Full DPCA fault detection results for (a) Fault 1, (b) Fault 2, and (c) Fault 3.
where

A_1 = \begin{bmatrix} 0.811 & -0.226 \\ 0.477 & 0.415 \end{bmatrix},  A_2 = \begin{bmatrix} 0.193 & 0.689 \\ -0.320 & -0.749 \end{bmatrix},  A_3 = \begin{bmatrix} 0.118 & -0.191 \\ 0.847 & 0.264 \end{bmatrix},  A_4 = \begin{bmatrix} 1 & 2 \\ 3 & -4 \end{bmatrix};

input w_i (i = 1, 2) is the random noise with zero mean and variance 1; and output v is the random noise with zero mean and variance 0.1. The measurement in the DPCA model consists of 12 variables as z = [z_1, ..., z_{12}]^T = [u_1^T(k), u_1^T(k-1), u_2^T(k), u_2^T(k-1), y^T(k), y^T(k-1)]^T, where u_i = [u_{i,1}, u_{i,2}]^T and y = [y_1, y_2]^T. Three different

FIGURE 6. Reduced DPCA fault detection results for (a) Fault 1, (b) Fault 2, and (c) Fault 3.
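The state-space example of Eq. (24), with the coefficient matrices A_1 to A_4 given above, can be simulated directly. In this hedged sketch, the sample length, the random seed, and the zero initial conditions are our assumptions; the 12-dimensional stacking mirrors the measurement z defined above:

```python
import numpy as np

A1 = np.array([[0.811, -0.226], [0.477, 0.415]])
A2 = np.array([[0.193, 0.689], [-0.320, -0.749]])
A3 = np.array([[0.118, -0.191], [0.847, 0.264]])
A4 = np.array([[1.0, 2.0], [3.0, -4.0]])

def simulate(N=1000, seed=0):
    """Simulate Eq. (24) and stack the 12-variable DPCA measurement."""
    rng = np.random.default_rng(seed)
    u1 = np.zeros((N, 2)); u2 = np.zeros((N, 2))
    h1 = np.zeros((N, 2)); h2 = np.zeros((N, 2))
    w1 = rng.normal(0.0, 1.0, (N, 2)); w2 = rng.normal(0.0, 1.0, (N, 2))
    v = rng.normal(0.0, np.sqrt(0.1), (N, 2))   # output noise, variance 0.1
    for k in range(1, N):
        u1[k] = A1 @ u1[k-1] + A2 @ w1[k-1]
        h1[k] = A3 @ h1[k-1] + A4 @ u1[k-1]
        u2[k] = A1 @ u2[k-1] + A2 @ w2[k-1]
        h2[k] = A3 @ h2[k-1] + A4 @ u2[k-1]
    y = h1 + h2 + v
    # z(k) = [u1(k); u1(k-1); u2(k); u2(k-1); y(k); y(k-1)]
    return np.hstack([u1[1:], u1[:-1], u2[1:], u2[:-1], y[1:], y[:-1]])
```

Normal data generated this way would serve as the training set; fault data would be produced by perturbing the inputs or outputs of the same model.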
FIGURE 8. GA-based variable selection results for (a) Fault 5, (b) Fault 10, and (c) Fault 20.
FIGURE 9. Full DPCA fault detection results for (a) Fault 5, (b) Fault 10, and (c) Fault 20.
FIGURE 10. Reduced DPCA fault detection results for (a) Fault 5, (b) Fault 10, and (c) Fault 20.

The monitoring results of some state-of-the-art methods on the considered faults are presented in Table 2. The reduced DPCA method generally provides the best monitoring results for the considered five types of faults.

To establish the Bayesian diagnosis system, 65 bits of evidence sources are generated, which will cause a significant calculation amount (2^65). The GA-based evidence source selection results are presented in Fig. 11(a), in which the maximum number is limited to 12. The fault diagnosis results for the TE training data and testing data are presented in Fig. 11(b) and Fig. 11(c), respectively. Some misclassified points for Fault 20 (marked by the ellipse) are also observed.

FIGURE 11. (a) GA evidence source selection results, (b) diagnosis results using all evidence sources, and (c) diagnosis results using the selected evidence sources.

V. CONCLUSIONS
In this work, the influence of variable and time-lagged variable selection on the DPCA monitoring performance is analyzed, and a fault-relevant performance-driven distributed DPCA monitoring scheme for large-scale dynamic processes is developed. First, GA-based fault-relevant variable selection is performed to decompose a large-scale process into several local units and achieve the best possible fault detection performance for each fault. Second, in each unit, a DPCA monitoring model is established to deal with the process dynamics. Then, fault signature evidence is generated, and a Bayesian diagnosis system is established to identify the current process status. The proposed monitoring scheme is applied to a numerical example and the TE benchmark process, and its efficiency is demonstrated.

REFERENCES
[1] L. H. Chiang, E. L. Russell, and R. D. Braatz, Fault Detection and Diagnosis in Industrial Systems. London, U.K.: Springer-Verlag, 2001.
[2] S. X. Ding, ''Data-driven design of monitoring and diagnosis systems for dynamic processes: A review of subspace technique based schemes and some recent results,'' J. Process Control, vol. 24, no. 2, pp. 431–449, 2014.
[3] S. Yin, S. X. Ding, X. Xie, and H. Luo, ''A review on basic data-driven approaches for industrial process monitoring,'' IEEE Trans. Ind. Electron., vol. 61, no. 11, pp. 6418–6428, Nov. 2014.
[4] S. J. Qin, ''Statistical process monitoring: Basics and beyond,'' J. Chemometrics, vol. 17, nos. 7–8, pp. 480–502, 2003.
[5] Y. Ren and D.-W. Ding, ''Fault detection for two-dimensional Roesser systems with sensor faults,'' IEEE Access, vol. 4, pp. 6197–6203, 2016.
[6] J. Chen and K.-C. Liu, ''On-line batch process monitoring using dynamic PCA and dynamic PLS models,'' Chem. Eng. Sci., vol. 57, no. 1, pp. 63–75, 2002.
[7] W. Ku, R. H. Storer, and C. Georgakis, ''Disturbance detection and isolation by dynamic principal component analysis,'' Chemometrics Intell. Lab. Syst., vol. 30, no. 1, pp. 179–196, 1995.
[8] G. Li, B. Liu, S. J. Qin, and D. Zhou, ''Quality relevant data-driven modeling and monitoring of multivariate dynamic processes: The dynamic T-PLS approach,'' IEEE Trans. Neural Netw., vol. 22, no. 12, pp. 2262–2271, Dec. 2011.
[9] A. Negiz and A. Çınar, ''Statistical monitoring of multivariable dynamic processes with state-space models,'' AIChE J., vol. 43, no. 8, pp. 2002–2020, 1997.
[10] S. W. Choi and I. B. Lee, ''Multiblock PLS-based localized process diagnosis,'' J. Process Control, vol. 15, no. 3, pp. 295–306, Apr. 2005.
[11] Q. Liu, S. J. Qin, and T. Chai, ''Multiblock concurrent PLS for decentralized monitoring of continuous annealing processes,'' IEEE Trans. Ind. Electron., vol. 61, no. 11, pp. 6429–6437, Nov. 2014.
[12] S. J. Qin, S. Valle, and M. J. Piovoso, ''On unifying multiblock analysis with application to decentralized process monitoring,'' J. Chemometrics, vol. 15, no. 9, pp. 715–742, 2001.
[13] J. A. Westerhuis, T. Kourti, and J. F. MacGregor, ''Analysis of multiblock and hierarchical PCA and PLS models,'' J. Chemometrics, vol. 12, no. 5, pp. 301–321, 1998.
[14] Y. Zhang, H. Zhou, S. J. Qin, and T. Chai, ''Decentralized fault diagnosis of large-scale processes using multiblock kernel partial least squares,'' IEEE Trans. Ind. Informat., vol. 6, no. 1, pp. 3–10, Feb. 2010.
[15] Z. Ge and Z. Song, ''Distributed PCA model for plant-wide process monitoring,'' Ind. Eng. Chem. Res., vol. 52, no. 5, pp. 1947–1957, 2013.
[16] Q. Jiang and X. Yan, ''Nonlinear plant-wide process monitoring using MI-spectral clustering and Bayesian inference-based multiblock KPCA,'' J. Process Control, vol. 32, pp. 38–50, Aug. 2015.
[17] Q. Jiang and X. Yan, ''Just-in-time reorganized PCA integrated with SVDD for chemical process monitoring,'' AIChE J., vol. 60, no. 3, pp. 949–965, 2014.
[18] Z. Ge and J. Chen, ''Plant-wide industrial process monitoring: A distributed modeling framework,'' IEEE Trans. Ind. Informat., vol. 12, no. 1, pp. 310–321, Feb. 2016.
[19] Q. Jiang, X. Yan, and B. Huang, ''Performance-driven distributed PCA process monitoring based on fault-relevant variable selection and Bayesian inference,'' IEEE Trans. Ind. Electron., vol. 63, no. 1, pp. 377–386, Jan. 2016.
[20] Q. Jiang, B. Huang, and X. Yan, ''GMM and optimal principal components-based Bayesian method for multimode fault diagnosis,'' Comput. Chem. Eng., vol. 84, pp. 338–349, Jan. 2016.
[21] B. Huang, ''Bayesian methods for control loop monitoring and diagnosis,'' J. Process Control, vol. 18, no. 9, pp. 829–838, 2008.
[22] A. Pernestal, ''Probabilistic fault diagnosis with automotive applications,'' Ph.D. dissertation, School of Elect. Eng., KTH Royal Inst. Technol., Stockholm, Sweden, 2007.
[23] F. Qi and B. Huang, ''Bayesian methods for control loop diagnosis in the presence of temporal dependent evidences,'' Automatica, vol. 47, no. 7, pp. 1349–1356, 2011.
[24] K. Ghosh, M. Ramteke, and R. Srinivasan, ''Optimal variable selection for effective statistical process monitoring,'' Comput. Chem. Eng., vol. 60, pp. 260–276, Jan. 2014.
[25] J. J. Downs and E. F. Vogel, ''A plant-wide industrial process control problem,'' Comput. Chem. Eng., vol. 17, no. 3, pp. 245–255, Mar. 1993.
[26] Z. Ge and Z. Song, ''Performance-driven ensemble learning ICA model for improved non-Gaussian process monitoring,'' Chemometrics Intell. Lab. Syst., vol. 123, pp. 1–8, Apr. 2013.

QINGCHAO JIANG received the B.E. and Ph.D. degrees from the Department of Automation, East China University of Science and Technology, Shanghai, China, in 2010 and 2015, respectively. In 2015, he was a Post-Doctoral Fellow with the Department of Chemical and Materials Engineering, University of Alberta, Canada. From 2015 to 2016, he was a Humboldt Research Fellow with the Institute for Automatic Control and Complex Systems, University of Duisburg-Essen, Germany. He is currently an Associate Professor with the East China University of Science and Technology, Shanghai. His research interests include data mining and analysis, soft sensor, multivariate statistical process monitoring, and Bayesian fault diagnosis.