Optimization and optimal experiment design

Improvement of PSO algorithm by memory based gradient search - application in inventory management

Advanced inventory management in complex supply chains requires effective and robust nonlinear optimization due to the stochastic nature of supply and demand variations. The application of estimated gradients can accelerate the convergence of the Particle Swarm Optimization (PSO) algorithm, but classical gradient calculation cannot be applied to stochastic and uncertain systems. In these situations, Monte-Carlo (MC) simulation can be applied to determine the gradient. We developed a memory-based algorithm in which, instead of generating and evaluating new simulated samples, the stored and shared former function evaluations of the particles are sampled to estimate the gradients by locally weighted least squares regression. The performance of the resulting regional gradient-based PSO is verified on several benchmark problems and in a complex application example where the optimal reorder points of a supply chain are determined.
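The core idea above can be illustrated with a minimal Python sketch (not the authors' implementation): a local linear model is fitted to an archive of previously evaluated points around a query point, with Gaussian distance weighting, and its slope serves as the regional gradient estimate. All names, the bandwidth parameter, and the toy function are illustrative assumptions.

```python
import numpy as np

def regional_gradient(archive_x, archive_f, x0, bandwidth=1.0):
    """Estimate the gradient at x0 by locally weighted least squares
    regression over previously evaluated (x, f(x)) samples."""
    X = np.asarray(archive_x, dtype=float)
    f = np.asarray(archive_f, dtype=float)
    d = X - x0                                   # offsets from the query point
    w = np.exp(-np.sum(d**2, axis=1) / (2 * bandwidth**2))  # Gaussian weights
    A = np.hstack([np.ones((len(X), 1)), d])     # local linear model: f ~ c + g.d
    sw = np.sqrt(w)                              # weighted LS via sqrt-weight scaling
    beta, *_ = np.linalg.lstsq(A * sw[:, None], f * sw, rcond=None)
    return beta[1:]                              # the slope is the gradient estimate

# Toy archive: stored evaluations of f(x, y) = x**2 + 3*y near the origin
rng = np.random.default_rng(0)
pts = rng.uniform(-0.5, 0.5, size=(200, 2))
vals = pts[:, 0]**2 + 3 * pts[:, 1]
grad = regional_gradient(pts, vals, np.array([0.0, 0.0]))
```

In a PSO setting, `archive_x`/`archive_f` would be the swarm's shared memory of past evaluations, so the gradient estimate costs no additional MC simulation runs.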

Multi- and conflicting-objective process optimization problems can be effectively solved by interactive optimization

Process optimization problems often lead to multi-objective problems in which the optimization goals are non-commensurable and in conflict with each other. In such cases, the common approach, namely the application of a quantitative cost function, may be very difficult or pointless. We developed a method that handles these problems by introducing a human user into the evaluation procedure: the proposed method uses expert knowledge directly in the optimization procedure. This approach has been applied successfully in computer graphics and engineering construction design, but it had not been used for chemical process engineering problems so far. During the development of the algorithm, we adapted this approach to typical process engineering problems. The results illustrate that the proposed tool offers a more flexible way to make a compromise among different goals than conventional optimization methods do. The practical usefulness of the framework was demonstrated through two application examples: tuning of a multi-input multi-output controller and optimization of a fermentation process.

J. Madár, J. Abonyi, F. Szeifert, Interactive evolutionary computation in process engineering, Computers & Chemical Engineering, Volume 29, Issue 7, 15 June 2005, Pages 1591-1597, IF: 1.678. A MATLAB toolbox has been developed based on the proposed concept of Interactive Evolutionary Computation (examples 1 and 2), and it has more than 1500 users on the MATLAB Central File Exchange (www.mathworks.com).

Interactive Evolutionary Computing


In some real-life optimization problems the objectives are non-commensurable and are not explicitly/mathematically available. Interactive Evolutionary Computation (IEC) can effectively handle these problems.
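The IEC loop can be sketched in a few lines of Python. This is a minimal illustration, not the published toolbox: candidates are shown to a rater (normally the human expert), the top-rated half survives and is mutated. Here a stand-in scoring function simulates the expert so the loop can run unattended; all names and parameters are illustrative assumptions.

```python
import random

def interactive_ec(rate, pop_size=8, generations=20, sigma=0.15):
    """Minimal interactive evolutionary loop: candidate solutions are
    presented to a rater (normally a human expert); the top-rated half
    is kept and mutated to form the next generation."""
    pop = [[random.uniform(0, 1) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=rate, reverse=True)   # rater ranks candidates
        parents = ranked[: pop_size // 2]              # elitist selection
        children = [
            [min(1.0, max(0.0, x + random.gauss(0, sigma))) for x in p]
            for p in parents                           # Gaussian mutation
        ]
        pop = parents + children
    return max(pop, key=rate)

# Stand-in "expert": prefers candidates near (0.7, 0.2)
rate = lambda c: -((c[0] - 0.7) ** 2 + (c[1] - 0.2) ** 2)
random.seed(1)
best = interactive_ec(rate)
```

In a genuine IEC session the `rate` callback would pause and collect the expert's subjective score for each candidate, which is exactly how non-commensurable goals enter the optimization.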

Tamas Varga, Andras Kiraly, Janos Abonyi, Improvement of PSO algorithm by memory based gradient search - application in inventory management, Swarm Intelligence and Bio-inspired Computation, Elsevier, Xin-She Yang, Zhihua Cui, Renbin Xiao, Amir Hossein Gandomi, Mehmet Karamanoglu, pp.403-422

Webpage of the algorithm and the related papers

Genetic programming for model structure identification - genetic programming MATLAB toolbox

Linear-in-parameters models are quite widespread in process engineering, e.g. NAARX and polynomial ARMA models. Genetic Programming (GP) is able to generate nonlinear input-output models of dynamical systems represented in a tree structure. The GP-OLS toolbox applies the Orthogonal Least Squares (OLS) algorithm to estimate the contribution of the branches of the tree to the accuracy of the model. This method results in more robust and interpretable models than the classical GP method.
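The OLS step can be sketched as follows (an illustrative reimplementation, not the toolbox code): candidate model terms are orthogonalized by Gram-Schmidt and each term's error reduction ratio (ERR), its share of the explained output variance, measures that branch's contribution to model accuracy. The term matrix and toy data below are assumptions for the example.

```python
import numpy as np

def error_reduction_ratios(X, y):
    """Orthogonal least squares: orthogonalize candidate model terms
    (columns of X) by Gram-Schmidt and compute each term's error
    reduction ratio (ERR), i.e. its share of the explained variance."""
    W = np.array(X, dtype=float)
    n_terms = W.shape[1]
    err = np.zeros(n_terms)
    for i in range(n_terms):
        for j in range(i):                        # Gram-Schmidt orthogonalization
            W[:, i] -= (W[:, j] @ W[:, i]) / (W[:, j] @ W[:, j]) * W[:, j]
        g = (W[:, i] @ y) / (W[:, i] @ W[:, i])   # coefficient of orthogonal term
        err[i] = g**2 * (W[:, i] @ W[:, i]) / (y @ y)
    return err

# Toy data: y depends on the first two candidate terms only
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1]
err = error_reduction_ratios(X, y)
```

Branches (terms) with a near-zero ERR contribute nothing to accuracy and can be pruned from the tree, which is what yields the more compact, interpretable models.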

Economic oriented stochastic optimization in process control using Taguchi’s method

The optimal operating region of complex production systems is situated close to process constraints related to quality or safety requirements. Higher profit can be realized only by assuring a relatively low frequency of violation of these constraints. We defined a Taguchi-type loss function to aggregate these constraints, target values, and desired ranges of product quality. We evaluate this loss function by Monte-Carlo simulation to handle the stochastic nature of the process and apply the gradient-free Mesh Adaptive Direct Search algorithm to optimize the resulting robust cost function. This optimization scheme is applied to determine the optimal set-point values of control loops with respect to pre-determined risk levels, uncertainties, and costs of violation of process constraints. The concept is illustrated by a well-known benchmark problem related to the control of a linear dynamical system and by the model predictive control of a more complex nonlinear polymerization process. The application examples illustrate that Taguchi's loss function is an ideal tool to represent performance requirements of control loops, and that the proposed Monte-Carlo simulation based optimization scheme is effective in finding the optimal operating regions of controlled processes.
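The combination of Taguchi's quadratic loss with Monte-Carlo evaluation can be sketched as follows. This is a deliberately simplified illustration, not the paper's cost function: the target, noise level, and candidate set-points are assumptions chosen for the example.

```python
import numpy as np

def taguchi_loss(y, target, k=1.0):
    """Taguchi's quadratic loss: cost grows with the squared deviation
    of the realized quality y from its target value."""
    return k * (y - target) ** 2

def expected_loss(setpoint, target=5.0, noise_sd=0.4, n_mc=10_000, seed=0):
    """Monte-Carlo evaluation of the expected Taguchi loss when the
    realized quality fluctuates stochastically around the set-point."""
    rng = np.random.default_rng(seed)
    y = setpoint + rng.normal(0.0, noise_sd, n_mc)   # simulated process outputs
    return taguchi_loss(y, target).mean()

# The expected loss is lowest when the set-point sits on target
losses = {sp: expected_loss(sp) for sp in (4.5, 5.0, 5.5)}
```

In the full scheme this Monte-Carlo cost, extended with penalties for constraint violation, is the robust objective handed to the gradient-free Mesh Adaptive Direct Search optimizer.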

A. Király, L. Dobos, J. Abonyi, Economic oriented stochastic optimization in process control using Taguchi’s method, Optimization and Engineering 2013, Volume 14, Issue 4, pp 547-563

Optimal experiment design

Optimal experiment design supported by evolutionary strategy is an effective tool for iterative and interactive model development and parameter identification tasks.

The central question of the sequential experiment design method is how to select the input profile or time series of a system during the iterative model development phase so that the system outputs are most informative with respect to the model parameters. This problem can be solved by an iterative-sequential method called optimal experiment design (OED), in which the applied extremum-searching algorithm plays a key role. The original algorithm was further developed in two respects: (i) we have shown that applying an evolutionary strategy at these steps improves efficiency, while (ii) collecting previous results in a database (data warehouse) and using their outcomes in the current experiment further improves the parameter identification process. In this way, model development and parameter identification can be managed with less effort and higher reliability.
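A common way to formalize "most informative" is D-optimality: maximize the determinant of the Fisher information matrix of the design. The sketch below (an illustrative assumption, not the developed algorithm) searches for a D-optimal design for a hypothetical two-parameter model y = th1*x + th2*x**2 with a simple elitist evolutionary strategy; all parameter values are placeholders.

```python
import numpy as np

def d_criterion(design):
    """D-optimality: determinant of the Fisher information matrix for the
    two-parameter model y = th1*x + th2*x**2 (unit measurement noise)."""
    F = np.column_stack([design, design**2])     # parameter sensitivity matrix
    return np.linalg.det(F.T @ F)

def evolve_design(n_points=4, generations=60, pop_size=20, seed=0):
    """Elitist (1+lambda)-style evolutionary strategy over candidate
    experiment designs (measurement locations) in [0, 1]."""
    rng = np.random.default_rng(seed)
    best = rng.uniform(0, 1, n_points)
    for _ in range(generations):
        offspring = np.clip(best + rng.normal(0, 0.1, (pop_size, n_points)), 0, 1)
        cand = max(offspring, key=d_criterion)   # best mutated design
        if d_criterion(cand) > d_criterion(best):
            best = cand                          # accept only improvements
    return best

design = evolve_design()
```

For this model the optimal four-point design concentrates measurements at two support locations; an evenly spaced grid such as `np.linspace(0, 1, 4)` scores noticeably worse on the same criterion, which is why the choice of the extremum-searching step matters.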

Controller tuning of district heating networks using experiment design techniques

Various governmental policies aim at reducing the dependence on fossil fuels for space heating and at cutting the associated emissions of greenhouse gases. District heating networks (DHNs) can provide an efficient method for house and space heating by utilizing residual industrial waste heat. In such systems, heat is produced and/or thermally upgraded in a central plant and then distributed to the end users through a pipeline network. Controlling these networks is rather difficult due to the non-linearity of the system and the strong interconnection between the controlled variables. That is why a non-linear model predictive controller (NMPC) is applied to fulfill the control goals as quickly as possible. The performance of the controller is characterized by an economic cost function based on pre-defined operating ranges. A methodology from the field of experiment design is applied to tune the model predictive controller to reach the best performance. The efficiency of the proposed methodology is demonstrated through a case study of a simulated NMPC-controlled DHN.
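A range-based economic cost of the kind described above can be sketched as follows. This is an illustrative assumption, not the paper's cost function: the controlled variable, the temperature bounds, and the penalty weight are all hypothetical.

```python
def range_cost(y, lo, hi, penalty=10.0):
    """Economic cost of one operating variable: zero inside the desired
    range [lo, hi], quadratic penalty for violating either bound."""
    if y < lo:
        return penalty * (lo - y) ** 2
    if y > hi:
        return penalty * (y - hi) ** 2
    return 0.0

def trajectory_cost(ys, lo=60.0, hi=80.0):
    """Aggregate cost over a trajectory of a hypothetical controlled
    variable (e.g. a supply temperature, bounds are placeholders)."""
    return sum(range_cost(y, lo, hi) for y in ys)

cost_ok = trajectory_cost([65.0, 70.0, 75.0])    # entirely inside the range
cost_bad = trajectory_cost([55.0, 70.0, 85.0])   # violates both bounds once
```

Tuning the NMPC then amounts to choosing controller parameters that minimize such a cost over simulated runs, with the experiment design methodology deciding which parameter combinations to simulate.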

L. Dobos, J. Abonyi, Controller tuning of district heating networks using experiment design techniques, Energy 36:(8) pp. 4633-4639. (2011)