Process monitoring

Goal-Oriented Tuning of Particle Filters for the Fault Diagnostics of Process Systems

This study introduces particle filtering (PF) for the tracking and fault diagnostics of complex process systems. In process systems, model equations are often nonlinear and environmental noise is non-Gaussian. We propose a method for state estimation and fault detection in a wastewater treatment system. The contributions of the paper are the following: (1) a method is suggested for sensor placement based on state estimation performance; (2) based on a sensitivity analysis of the particle filter parameters, a tuning method is proposed; (3) a case study is presented to compare the performances of the classical PF and intelligent particle filtering (IPF) algorithms; (4) bias and impact sensor faults are examined for fault diagnostics purposes, and the efficiency of fault detection is evaluated. The results verify that particle filtering is applicable and highly efficient for tracking and fault diagnostics tasks in process systems.


Post date: 09 March 2023
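As an illustration of the approach, the following is a minimal sketch of a bootstrap particle filter detecting a sensor bias fault on a toy linear system. The model, noise levels, fault size, and onset step are all invented for the example, and Gaussian noise is used only to keep the sketch short (the paper targets non-Gaussian noise and a far richer wastewater treatment model).

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(ys, n_particles=500, a=0.9, q=0.1, r=0.2):
    """Bootstrap particle filter for x[k+1] = a*x[k] + w,  y[k] = x[k] + v."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates, innovations = [], []
    for y in ys:
        particles = a * particles + rng.normal(0.0, q, n_particles)  # time update
        innovations.append(y - particles.mean())        # predicted-measurement residual
        w = np.exp(-0.5 * ((y - particles) / r) ** 2)   # measurement likelihood
        w /= w.sum()
        estimates.append(np.sum(w * particles))         # weighted state estimate
        particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample
    return np.array(estimates), np.array(innovations)

# Simulate the process; the sensor picks up a +1.0 bias fault from step 100 on.
x, ys = 0.0, []
for k in range(200):
    x = 0.9 * x + rng.normal(0.0, 0.1)
    ys.append(x + rng.normal(0.0, 0.2) + (1.0 if k >= 100 else 0.0))

est, innov = particle_filter(np.array(ys))
# The fault shows up as a persistent shift in the innovation mean.
print(innov[:100].mean(), innov[100:].mean())
```

The innovation sequence (measurement minus predicted measurement) is near zero-mean during healthy operation and shifts once the bias appears, which is the quantity a simple detection threshold would monitor.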

Particle filtering supported probability density estimation of mobility patterns

This paper presents a methodology that aims to enhance the accuracy of probability density estimation in mobility pattern analysis by integrating prior knowledge of system dynamics and contextual information into the particle filter algorithm. The quality of the data used for density estimation is often inadequate due to measurement noise, which significantly influences the distribution of the measurement data. Thus, it is crucial to augment the information content of the input data by incorporating additional sources of information beyond the measured position data. These additional sources can include the dynamic model of movement and the spatial model of the environment, which influences motion patterns. To effectively combine the information provided by positional measurements with the system and environment models, the particle filter algorithm is employed, which generates discrete probability distributions. By subjecting these discrete distributions to exploratory techniques, it becomes possible to extract more reliable information than from the raw measurement data alone. Consequently, this study proposes a methodology in which probability density estimation is not based solely on raw positional data but rather on probability-weighted samples generated by the particle filter. This approach yields more compact and precise modeling distributions. Specifically, the method is applied to process position measurement data using a nonparametric density estimator known as kernel density estimation. The proposed methodology is thoroughly tested and validated using information-theoretic and probability metrics. Its applicability is demonstrated through a practical example of mobility pattern analysis based on forklift data in a warehouse environment.

Post date: 15 April 2024
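The core idea, building the density estimate from probability-weighted samples rather than from raw measurements, can be sketched as follows. The positions, noise levels, and synthetic particle weights are invented stand-ins for the filter output described above; `scipy.stats.gaussian_kde` supplies the weighted kernel density estimator.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Raw position measurements: true positions near 2.0, corrupted by heavy noise.
measured = rng.normal(2.0, 0.3, 300) + rng.normal(0.0, 1.0, 300)

# Stand-in for particle-filter output: probability-weighted samples that
# concentrate around the true position (produced synthetically here).
particles = rng.normal(0.0, 3.0, 2000)
weights = np.exp(-0.5 * ((particles - 2.0) / 0.3) ** 2)
weights /= weights.sum()

kde_raw = gaussian_kde(measured)                     # KDE on raw measurements
kde_pf = gaussian_kde(particles, weights=weights)    # probability-weighted KDE

grid = np.linspace(-2.0, 6.0, 400)
mode = grid[np.argmax(kde_pf(grid))]
# The weighted estimate is markedly more compact around the true mode.
print(mode, kde_pf([2.0])[0], kde_raw([2.0])[0])
```

Because the weights down-rank implausible samples, the weighted density is taller and narrower around the true position than the density fitted to the noisy measurements directly, which is the "more compact and precise modeling distribution" claimed above.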

Sparse PCA supports exploration of process structures for decentralized fault detection

With the ever-increasing use of sensor technologies in industrial processes, and more data becoming available to engineers, fault detection and isolation activities in the context of process monitoring have gained significant momentum in recent years. A statistical procedure frequently used in this domain is Principal Component Analysis (PCA), which can reduce the dimensionality of large data sets without compromising their information content. While most process monitoring methods offer satisfactory detection capabilities, understanding the root cause of malfunctions and providing the physical basis for their occurrence have remained challenging. The relatively new sparse PCA techniques represent a further development of PCA in which not only is the data dimension reduced, but the data is also made more interpretable, revealing clear correlation structures among the variables. Hence, taking a step forward from classical fault detection methods, in this work a decentralized monitoring approach is proposed based on a sparse algorithm. The resulting control charts reveal the correlation structures associated with the monitored process and facilitate a structural analysis of the faults that occur. The applicability of the proposed method is demonstrated using data generated from a simulation of the benchmark vinyl acetate process. It is shown that the sparse principal components, as the foundation of a decentralized multivariate monitoring framework, can provide physical insight into the origins of process faults.
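A hedged sketch of the sparse-PCA idea: on synthetic block-structured data (two invented "process units" driven by independent latent variables), sparse components concentrate their loadings on one variable block each, which is exactly the interpretable structure a decentralized monitor can exploit. scikit-learn's `SparsePCA` stands in here for the specific sparse algorithm used in the paper.

```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(2)

# Two independent process units: variables 0-2 follow one latent driver,
# variables 3-5 another, mimicking a block-structured plant.
t1 = 3.0 * rng.normal(size=(400, 1))
t2 = 2.0 * rng.normal(size=(400, 1))
X = np.hstack([t1 + 0.1 * rng.normal(size=(400, 3)),
               t2 + 0.1 * rng.normal(size=(400, 3))])

spca = SparsePCA(n_components=2, alpha=1.0, random_state=0).fit(X)
V = np.abs(spca.components_)
# Each sparse component should load (almost) only on one variable block,
# revealing the correlation structure among the variables.
block = V[:, :3].sum(1) > V[:, 3:].sum(1)   # True -> component loads on unit 1
print(np.round(spca.components_, 2), block)
```

Plain PCA would mix the two blocks when their variances are similar; the L1 penalty drives the cross-block loadings to (near) zero, so each component can be assigned to one production unit and monitored locally.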


Process-Data-Warehousing-Based Operator Support System for Complex Production Technologies

Process manufacturing is increasingly being driven by market forces, customer needs and perceptions, resulting in more and more complex multi-product manufacturing technologies. The increasing automation and tighter quality constraints related to these processes make the operator's job more and more difficult. This makes decision support systems for the operator more important than ever before. Traditional operator support systems (OSS) focus only on the specific tasks being performed. In the case of complex processes, the design of an integrated information system is extremely important.

The huge amount of data recorded by modern production systems has the potential to provide information for product and process design, monitoring and control. This project proposes soft-computing (SC)-based approaches for extracting knowledge from historical production data.

The proposed data-warehouse-based Operator Support System makes it possible to link complex and isolated production units by integrating the heterogeneous information collected from the units of a complex production process. The developed OSS is based on a data warehouse designed following the proposed process-focused data warehouse design approach, which places a stronger emphasis on the material and information flow through the entire enterprise. The resulting OSS follows the process through the organization instead of focusing on the separate tasks of the isolated process units. For human-computer interaction, front-end tools have been worked out in which exploratory data analysis and advanced multivariate statistical models are applied to extract the most informative features of the operation of the technology. The concept is illustrated by an industrial case study, where the OSS is designed for the monitoring and control of a high-density polyethylene plant.

Monitoring process transitions by Kalman filtering and time-series segmentation

The analysis of historical process data of technological systems plays an important role in process monitoring, modelling and control. Time-series segmentation algorithms are often used to detect homogeneous periods of operation based on input–output process data. However, historical process data alone may not be sufficient for the monitoring of complex processes. This paper incorporates the first-principle model of the process into the segmentation algorithm. The key idea is to use a model-based non-linear state-estimation algorithm to detect the changes in the correlation among the state-variables. The homogeneity of the time-series segments is measured using a PCA similarity factor calculated from the covariance matrices given by the state-estimation algorithm. The whole approach is applied to the monitoring of an industrial high-density polyethylene plant.

B. Feil, J. Abonyi, S. Nemeth, P. Arva, Monitoring process transitions by Kalman filtering and time-series segmentation, Computers & Chemical Engineering, accepted for publication, 2004, IF: 0.784
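The PCA similarity factor mentioned above can be computed directly from two covariance matrices. This sketch uses the Krzanowski form S = trace(LᵀMMᵀL)/k, where L and M hold the k leading eigenvectors of each covariance; the 3×3 covariance matrices are invented for illustration.

```python
import numpy as np

def pca_similarity(cov_a, cov_b, k=2):
    """Krzanowski PCA similarity factor between two covariance matrices:
    S = trace(L.T @ M @ M.T @ L) / k, with L, M the k leading eigenvectors.
    S = 1 if the k-dimensional principal subspaces coincide, 0 if orthogonal."""
    L = np.linalg.eigh(cov_a)[1][:, -k:]   # eigh sorts eigenvalues ascending
    M = np.linalg.eigh(cov_b)[1][:, -k:]
    return np.trace(L.T @ M @ M.T @ L) / k

# Covariances of two operating periods with the same correlation structure...
c1 = np.array([[1.0, 0.8, 0.0],
               [0.8, 1.0, 0.0],
               [0.0, 0.0, 0.1]])
print(round(pca_similarity(c1, c1), 3))   # -> 1.0
# ...and of a period where the correlation among the variables has changed.
c2 = np.diag([1.0, 0.1, 1.0])
print(round(pca_similarity(c1, c2), 3))   # -> 0.5: subspaces share one direction
```

A drop in S between the covariance of the current window (e.g. as supplied by the state estimator) and that of a reference segment signals a change in the correlation structure, i.e. a segment boundary.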

Modified Gath–Geva clustering for fuzzy segmentation of multivariate time-series

Partitioning a time-series into internally homogeneous segments is an important data-mining problem. The changes of the variables of a multivariate time-series are usually vague and do not focus on any particular time point. Therefore, it is not practical to define crisp bounds of the segments. Although fuzzy clustering algorithms are widely used to group overlapping and vague objects, they cannot be directly applied to time-series segmentation, because the clusters need to be contiguous in time. This paper proposes a clustering algorithm for the simultaneous identification of local probabilistic principal component analysis (PPCA) models, used to measure the homogeneity of the segments, and fuzzy sets, used to represent the segments in time. The algorithm favors clusters that are contiguous in time and is able to detect changes in the hidden structure of a multivariate time-series. A fuzzy decision-making algorithm based on a compatibility criterion of the clusters has been worked out to determine the required number of segments, while the required number of principal components is determined by the scree plots of the eigenvalues of the fuzzy covariance matrices. The application example shows that this new technique is a useful tool for the analysis of historical process data.
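A much-simplified stand-in for the proposed algorithm: plain fuzzy c-means with time included as a feature (rather than the paper's modified Gath–Geva/PPCA formulation) already illustrates how fuzzy clustering can recover near-contiguous segments around a structural change. The series, noise levels, and change point are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

def fuzzy_cmeans(Z, c=2, m=2.0, iters=100):
    """Plain fuzzy c-means; returns the membership matrix U (n x c)."""
    U = rng.dirichlet(np.ones(c), len(Z))
    for _ in range(iters):
        V = (U**m).T @ Z / (U**m).sum(0)[:, None]             # cluster centers
        d = ((Z[:, None, :] - V[None]) ** 2).sum(-1) + 1e-12  # squared distances
        inv = (1.0 / d) ** (1.0 / (m - 1.0))
        U = inv / inv.sum(1, keepdims=True)                   # membership update
    return U

# A bivariate series whose hidden structure changes at t = 100.
t = np.arange(200)
x1 = np.where(t < 100, 0.0, 4.0) + rng.normal(0.0, 0.5, 200)
x2 = np.where(t < 100, 0.0, -3.0) + rng.normal(0.0, 0.5, 200)
Z = np.column_stack([t / 200.0, x1, x2])   # time as a feature favors contiguity
labels = fuzzy_cmeans(Z).argmax(1)
print(np.flatnonzero(np.diff(labels)) + 1)  # detected segment boundary, near 100
```

The fuzzy memberships change gradually across the boundary instead of jumping, which is the "no crisp bounds" property the abstract argues for; the paper's algorithm additionally replaces the Euclidean distance with local PPCA models.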

Semi-mechanistic Models for State-Estimation – Soft Sensor for Polymer Melt Index Prediction

Nonlinear state estimation is a useful approach to the monitoring of industrial (polymerization) processes. This paper investigates how this approach can be applied to the development of a soft sensor for product quality (melt index). The bottleneck in the successful application of advanced state estimation algorithms is the identification of models that can accurately describe the process. This paper presents a semi-mechanistic modeling approach in which neural networks describe the unknown phenomena of the system that cannot be formulated as differential equations from prior knowledge. Since, in the presented semi-mechanistic model structure, the neural network is part of a nonlinear differential-algebraic equation set, no direct input-output data are available to train the weights of the network. To handle this problem, a simple yet practically useful spline-smoothing-based technique is used. The results show that the developed semi-mechanistic model can be efficiently used for on-line state estimation.
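The spline-smoothing trick can be sketched on a toy balance equation: a smoothing spline fitted to noisy state measurements supplies derivative estimates, and subtracting the known mechanistic term yields direct training targets for the unknown kinetic term. The equation, noise level, and smoothing factor below are invented for the example and are far simpler than the polymerization model in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(3)

# Toy balance equation dx/dt = -0.5*x + r(x) with "unknown" kinetics r(x) = 0.3*x**2.
t = np.linspace(0.0, 2.0, 101)
sol = solve_ivp(lambda _, x: -0.5 * x + 0.3 * x**2, (0.0, 2.0), [1.0],
                t_eval=t, rtol=1e-8)
x_meas = sol.y[0] + rng.normal(0.0, 0.01, t.size)   # noisy state measurements

# Smoothing spline: its derivative estimates dx/dt without direct rate data.
spl = UnivariateSpline(t, x_meas, k=4, s=t.size * 0.01**2)
x_s, dx_s = spl(t), spl.derivative()(t)

# Training target for the unknown term: estimated derivative minus the known part.
r_target = dx_s + 0.5 * x_s
# (x_s, r_target) pairs can now train the embedded neural network directly.
print(np.mean(np.abs(r_target - 0.3 * x_s**2)))
```

Once such (state, target) pairs exist, the network can be trained by ordinary supervised learning even though it sits inside a differential-algebraic equation set.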

Fuzzy Clustering Based Segmentation of Time-Series

The segmentation of time-series is a constrained clustering problem: the data points should be grouped by their similarity, but with the constraint that all points in a cluster must come from successive time points. The changes of the variables of a time-series are usually vague and do not focus on any particular time point. Therefore, it is not practical to define crisp bounds of the segments. Although fuzzy clustering algorithms are widely used to group overlapping and vague objects, they cannot be directly applied to time-series segmentation. This paper proposes a clustering algorithm for the simultaneous identification of the fuzzy sets which represent the segments in time and the local PCA models used to measure the homogeneity of the segments. The algorithm is applied to the monitoring of the production of high-density polyethylene.

Process analysis and product quality estimation by Self-Organizing Maps with an application to polyethylene production

The huge amount of data recorded by modern production systems has the potential to provide information for product and process design, monitoring and control. This paper presents a soft-computing (SC)-based approach for the extraction of knowledge from historical production data. Since Self-Organizing Maps (SOM) provide a compact representation of the data distribution, efficient process monitoring can be performed in the two-dimensional projection of the process variables. For the estimation of product quality, multiple local linear models are identified, where the operating regimes of the local models are obtained from the Voronoi diagram of the prototype vectors of the SOM. The proposed approach is applied to the analysis of an industrial polyethylene plant. The detailed application study demonstrates that the SOM is very effective in detecting the typical operating regions related to different product grades, and that the model can be used to predict product quality (melt index and density) from measured process variables.
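A compact illustration of the prototype-based local modeling idea: a tiny 1-D SOM (a stand-in for the two-dimensional map used in the paper) supplies prototype vectors, data points are assigned to the Voronoi cells of those prototypes, and a local linear quality model is fitted per cell. The operating regimes and the quality relation below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

def train_som(X, n_nodes=4, epochs=50, lr=0.5, sigma=1.0):
    """Tiny 1-D SOM: prototypes on a line, Gaussian neighborhood, decaying rates."""
    W = X[rng.choice(len(X), n_nodes, replace=False)].astype(float)
    grid = np.arange(n_nodes)
    for e in range(epochs):
        a = lr * (1.0 - e / epochs)
        s = sigma * (1.0 - e / epochs) + 0.1
        for x in X[rng.permutation(len(X))]:
            bmu = np.argmin(((W - x) ** 2).sum(1))             # best-matching unit
            h = np.exp(-0.5 * ((grid - bmu) / s) ** 2)[:, None]
            W += a * h * (x - W)                               # neighborhood update
    return W

# Two synthetic operating regimes; product quality is linear within each regime.
centers = np.array([[0.0, 0.0], [3.0, 3.0]])
X = np.vstack([c + 0.3 * rng.normal(size=(100, 2)) for c in centers])
y = X @ np.array([1.0, -0.5]) + np.repeat([0.0, 2.0], 100)    # regime-dependent offset

W = train_som(X)
cell = np.argmin(((X[:, None, :] - W[None]) ** 2).sum(-1), axis=1)  # Voronoi regions
models = {c: np.linalg.lstsq(np.c_[X[cell == c], np.ones((cell == c).sum())],
                             y[cell == c], rcond=None)[0]
          for c in np.unique(cell)}
pred = np.array([np.r_[xi, 1.0] @ models[c] for xi, c in zip(X, cell)])
print(np.mean((pred - y) ** 2))   # near zero: local models recover the quality map
```

Because each Voronoi cell stays inside one operating regime, a simple linear model per cell suffices even though the global quality relation is discontinuous across regimes.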

Operator Support System for Multi Product Processes - Application to Polyethylene Production

Process manufacturing is increasingly being driven by market forces, customer needs and perceptions, resulting in the necessity of flexible multi-product manufacturing. The increasing automation and tighter quality constraints related to these processes make the operator's job more and more difficult. This makes decision support systems for the operator more important than ever before. Based on the three-level model of skilled operators, this paper proposes a modular Operator Support System (OSS). As the proposed approach extensively uses process data, the OSS is based on a data warehouse designed with the help of enterprise and process modeling tools. For human-computer interaction, front-end tools have been worked out in which advanced multivariate statistical models are applied to extract the most informative features. The concept is illustrated by an industrial case study, where the OSS is designed for the monitoring and control of a high-density polyethylene plant.

Process data warehouse

Process data warehousing efficiently supports the monitoring and analysis of complex multi-product technologies. The main steps and conditions of a novel process information system were developed for monitoring and analyzing such technologies.

In order to extract the relevant information, both exploratory analysis and data mining methodologies were elaborated.

A two-level structure of the developed information system was determined, in which the process data warehouse is the central element of the second, analytical level. It contains non-volatile, consistent and pre-processed data and works independently of the other databases at the first, operating level. Analysis tools and applications connected to the data warehouse were also developed. The developed process information system was evaluated on a complex, multi-product polypropylene technology. The evaluation proved that the system works efficiently and that the various statistical analyses and data mining methods mine the historical process data effectively.

Self-Organizing Maps in Process Monitoring and Control

For the effective application of Self-Organizing Maps in model-based control, a control-related framework has been worked out. This framework is based on the following building blocks: identification, adaptation, inversion, and linearization. By means of these building blocks, inverse-model-based and model predictive (MPC) controllers were generated.

J. Abonyi, S. Nemeth, Cs. Vincze, P. Arva, Process analysis and product quality estimation by Self-Organizing Maps with an application to polyethylene production, Computers in Industry, Special Issue on Soft Computing in Industrial Applications, 52 (3): 221-234, Dec 2003, IF: 0.602