Computers & Chemical Engineering

Published by Elsevier
Print ISSN: 0098-1354
Publications
Lyophilization can induce aggregation in therapeutic proteins, but the relative importance of protein structure, formulation, and processing conditions is poorly understood. To evaluate the contribution of protein structure to lyophilization-induced aggregation, fifteen proteins were co-lyophilized with each of five excipients. The extent of aggregation following lyophilization, measured using size-exclusion chromatography, was correlated with computational and biophysical protein structural descriptors via multiple linear regression. Descriptor selection was performed using exhaustive search and forward selection. The results demonstrate that, for a given excipient, the extent of aggregation is well correlated with eight to twelve structural descriptors. Leave-one-out cross-validation showed that the correlations were able to successfully predict the aggregation of a protein "left out" of the data set. The selected descriptors varied with excipient, indicating that both protein structure and excipient type contribute to lyophilization-induced aggregation. The results also show that some descriptors used to predict protein aggregation in solution are useful in predicting aggregation of lyophilized proteins.
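Viewed computationally, the workflow above amounts to subset selection for a multiple linear regression followed by leave-one-out validation. The sketch below illustrates that pattern with synthetic stand-in data; the descriptor pool size, subset size, and noise level are assumptions for illustration, not the paper's values.

```python
# Minimal sketch: forward selection of structural descriptors for a multiple
# linear regression of aggregation, validated by leave-one-out CV.
# Synthetic data stand in for measured aggregation and descriptor values.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
n_proteins, n_pool = 15, 20                    # 15 proteins, hypothetical descriptor pool
X = rng.normal(size=(n_proteins, n_pool))      # structural descriptors (stand-ins)
y = X[:, :4] @ rng.normal(size=4) + 0.1 * rng.normal(size=n_proteins)  # % aggregation

# Forward selection of a small descriptor subset for one excipient's data set.
sfs = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=8, direction="forward",
    scoring="neg_mean_squared_error", cv=LeaveOneOut())
X_sel = sfs.fit_transform(X, y)

# Leave-one-out validation: each protein is predicted by a model fit to the rest.
y_loo = cross_val_predict(LinearRegression(), X_sel, y, cv=LeaveOneOut())
press = np.sum((y - y_loo) ** 2)               # predictive residual sum of squares
print("LOO Q^2:", 1.0 - press / np.sum((y - y.mean()) ** 2))
```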
 
A beat-to-beat variation in the morphology of electric wave propagation in myocardium is referred to as cardiac alternans, and it has been linked to the onset of life-threatening arrhythmias and sudden cardiac death. Experimental studies have demonstrated that alternans can be annihilated by feedback modulation of the basic pacing interval in a small piece of cardiac tissue. In this work, we study the capability of feedback control to suppress alternans both spatially and temporally in an extracted rabbit heart and in a cable of cardiac cells. This work demonstrates real-time control of cardiac alternans in an extracted rabbit heart and provides an analysis of the control methodology applied in the case of a one-dimensional (1D) cable of cardiac cells. Real-time control is realized through feedback by proportional perturbation of the basic pacing cycle length (PCL). Measurements of the electric wave propagation are obtained by optical mapping of fluorescent dye from the surface of the heart and are fed into custom-designed software that provides the control action signal perturbing the basic pacing cycle length. In addition, a novel pacing protocol that avoids conduction block is applied. A numerical analysis, complementary to the experimental study, is also carried out with an ionic model of a 1D cable of cardiac cells under a self-referencing feedback protocol identical to the one applied in the experimental study. Further, the linear parabolic PDE governing the amplitude of alternans, associated with the 1D ionic cardiac cell cable model under full state feedback control, is analyzed; it admits a standard evolutionary form in a well-defined functional space. Standard modal decomposition techniques are used in the analysis, and the controller synthesis is carried out through pole placement. State and output feedback controller realizations are developed, and the important issue of measurement noise in the controller implementation is addressed. The analysis of stabilization of the amplitude-of-alternans PDE is in agreement with the experimental results and with numerical results produced by the ionic 1D cable model. Finally, these results are discussed in light of the prospect of using control to suppress alternans in the human myocardium.
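For intuition, the feedback idea can be caricatured on a single-cell APD restitution map (no spatial cable or whole-heart dynamics). In the sketch below, the restitution curve, its parameters, the feedback gain, and the switch-on beat are all illustrative assumptions, not the values used in the experiments described above.

```python
# Self-referencing PCL feedback on a restitution map APD_{n+1} = f(DI_n),
# DI_n = PCL_n - APD_n. A restitution slope > 1 at the fixed point produces
# alternans; the feedback perturbs the PCL by half the beat-to-beat APD change.
import numpy as np

def restitution(di):
    """Assumed APD restitution curve (ms); slope > 1 near DI ~ 75 ms."""
    return 250.0 - 250.0 * np.exp(-di / 50.0)

pcl0, gain, n_beats = 270.0, 1.0, 200
apd = [200.0, 190.0]                       # two initial beats, slightly perturbed
for n in range(1, n_beats):
    # PCL_n = PCL0 + (gain/2)*(APD_n - APD_{n-1}), active after beat 100
    dpcl = 0.5 * gain * (apd[n] - apd[n - 1]) if n >= 100 else 0.0
    di = max(pcl0 + dpcl - apd[n], 1.0)    # diastolic interval, kept positive
    apd.append(restitution(di))

amp_open = abs(apd[98] - apd[97])          # alternans amplitude before control
amp_closed = abs(apd[-1] - apd[-2])        # amplitude after control settles
print(f"alternans amplitude: open-loop {amp_open:.1f} ms, "
      f"closed-loop {amp_closed:.2f} ms")
```

With these assumed parameters the open-loop map settles onto a period-2 (alternans) orbit, and the feedback drives the beat-to-beat amplitude back toward zero.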
 
Biological systems can be modeled as networks of interacting components across multiple scales. A central problem in computational systems biology is to identify those critical components and the rules that define their interactions and give rise to the emergent behavior of a host response. In this paper we will discuss two fundamental problems related to the construction of transcription factor networks and the identification of networks of functional modules describing disease progression. We focus on inflammation as a key physiological response of clinical and translational importance.
 
Well-mixed and lattice-based descriptions of stochastic chemical kinetics have been extensively used in the literature. Realizations of the corresponding stochastic processes are obtained by the Gillespie stochastic simulation algorithm and lattice kinetic Monte Carlo algorithms, respectively. However, the two frameworks have remained disconnected. We show the equivalence of these frameworks whereby the stochastic lattice kinetics reduces to effective well-mixed kinetics in the limit of fast diffusion. In the latter, the lattice structure appears implicitly, as the lumped rate of bimolecular reactions depends on the number of neighbors of a site on the lattice. Moreover, we propose a mapping between the stochastic propensities and the deterministic rates of the well-mixed vessel and lattice dynamics that illustrates the hierarchy of models and the key parameters that enable model reduction.
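The well-mixed limit discussed above is exactly what the Gillespie stochastic simulation algorithm samples. A minimal sketch for a single bimolecular channel A + B -> C follows; the rate constant and molecule counts are arbitrary illustrations, and with only one channel the usual reaction-selection step of the algorithm is trivial.

```python
# Minimal Gillespie SSA for a well-mixed bimolecular reaction A + B -> C.
# In the lattice-to-well-mixed reduction described above, a lumped rate
# constant like k would absorb the lattice coordination number.
import random

def ssa(a0, b0, c0, k, t_end, seed=1):
    random.seed(seed)
    t, a, b, c = 0.0, a0, b0, c0
    while t < t_end:
        propensity = k * a * b               # a1 = k * N_A * N_B
        if propensity == 0.0:
            break                            # no reactants left
        t += random.expovariate(propensity)  # exponential waiting time
        a, b, c = a - 1, b - 1, c + 1        # fire the single channel
    return a, b, c

print(ssa(a0=100, b0=80, c0=0, k=0.001, t_end=50.0))
```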
 
This paper discusses recent optimization approaches to the protein side-chain prediction problem, protein structural alignment, and molecular structure determination from X-ray diffraction measurements. The machinery employed to solve these problems has included algorithms from linear programming, dynamic programming, combinatorial optimization, and mixed-integer nonlinear programming. Many of these problems are purely continuous in nature. Yet, to date, they have been approached mostly via combinatorial optimization algorithms applied to discrete approximations. The main purpose of the paper is to offer an introduction and to motivate further systems approaches to these problems.
 
Chemical process systems engineering considers complex supply chains, which are coupled networks of dynamically interacting systems. The quest to optimize the supply chain while meeting robustness and flexibility constraints in the face of ever-changing environments necessitated the development of theoretical and computational tools for the analysis, synthesis, and design of such complex engineered architectures. However, it was realized early on that optimality is a complex characteristic, required to achieve a proper balance between multiple, often competing, objectives. As we begin to unravel life's intricate complexities, we realize that living systems share similar structural and dynamic characteristics; hence much can be learned about biological complexity from engineered systems. In this article, we draw analogies between concepts in process systems engineering and conceptual models of health and disease; establish connections between these concepts and physiologic modeling; and describe how these concepts map onto the physiological counterparts of engineered systems.
 
The control of the quality of a product, by selectively timing its withdrawal rate from a mixed reservoir where it is being stored, is studied. Bounds on the applicability of a feedforward strategy to achieve quality control are evaluated by simulation and analysis of a simple case, in which a periodic disturbance of known frequency is to be compensated for. Average delivery rate, reservoir capacity limits, and disturbance frequency delineate the bounds on permissible control actions. Within those bounds, an optimal withdrawal strategy is demonstrated that yields improved results compared to natural attenuation. It is noted, however, that this deterministic approach cannot be expected to work for uncharacterized or random disturbances.
 
This work focuses on transport-reaction processes with unknown time-varying parameters and disturbances, and addresses the problem of computing optimal actuator locations for robust nonlinear controllers. The theoretical results are successfully applied to a typical diffusion-reaction process with uncertainty.
 
Many applications of principal component analysis (PCA) can be found in the literature. But PCA is a linear method, and most engineering problems are nonlinear; applying linear PCA to nonlinear problems can yield distorted and misleading results. There is therefore a need for a nonlinear principal component analysis (NLPCA) method. The principal curve algorithm was a breakthrough in solving the NLPCA problem, but it does not yield an NLPCA model that can be used for predictions. In this paper the authors present an NLPCA method that integrates the principal curve algorithm and neural networks. The results on both simulated and real problems show that the method is excellent for solving nonlinear principal component problems. Potential applications of NLPCA are also discussed.
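A closely related construction that also yields a predictive model is the bottleneck (autoencoder) network in the style of Kramer. The sketch below uses that construction as a stand-in for the hybrid principal-curve/neural-network method, which the abstract does not specify in detail; the data set and network sizes are invented for illustration.

```python
# Autoencoder-style NLPCA sketch: a network reproduces its input through a
# one-node bottleneck, which plays the role of the nonlinear principal
# component; the trained network can then make predictions for new points.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = rng.uniform(-1, 1, size=500)
X = np.column_stack([t, t**2]) + 0.02 * rng.normal(size=(500, 2))  # noisy parabola

net = MLPRegressor(hidden_layer_sizes=(16, 1, 16), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(X, X)                                # train input -> input (autoencoder)
X_hat = net.predict(X)                       # projection onto the nonlinear PC
print("reconstruction RMSE:", np.sqrt(np.mean((X - X_hat) ** 2)))
```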
 
Singular Value Analysis has been used to design structural compensators for the material balance control scheme in distillation. The control system obtained is similar to (1) column profile control, (2) implicit decoupling, (3) modal control, (4) output decoupling and, under some conditions, (5) extensive variable control. The material balance control scheme does not require an input compensator because the right singular vectors (input rotation matrix) are naturally aligned with the standard basis vectors. The output compensator (measurement matrix) is designed to create a non-interacting output frame. The resulting control scheme pairs distillate flowrate with the sum of two tray temperatures, and reboiler heat duty with the difference between the tray temperatures.
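A small numerical illustration of the design idea follows, with an assumed 2x2 steady-state gain matrix; the gain values are invented for illustration and are not from the paper.

```python
# Singular-value-analysis sketch: decompose a hypothetical gain matrix from
# [distillate flow D, reboiler duty Q] to two tray temperatures, and use the
# left singular vectors as the output compensator (measurement matrix).
import numpy as np

K = np.array([[0.9,  0.8],     # dT1/dD, dT1/dQ  (assumed gains)
              [0.9, -0.8]])    # dT2/dD, dT2/dQ
U, s, Vt = np.linalg.svd(K)
print("right singular vectors (input rotation):\n", Vt.T)  # ~ standard basis (up to sign)
print("left singular vectors (output compensator):\n", U)

# Rows of U^T combine the two temperatures into (approximately) their sum and
# their difference, pairing D with the sum and Q with the difference.
T = np.array([1.0, 2.0])       # example tray-temperature measurements
print("compensated measurements:", U.T @ T)
```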
 
This paper focuses on two-dimensional incompressible Newtonian fluid flow over a flat plate and studies the problem of reducing the frictional drag exerted on the plate using active feedback control. Several alternative control configurations, including both pointwise and spatially uniform control actuation and sensing, are developed and tested through computer simulations. All control configurations use control actuation in the form of blowing/suction on the plate and measurements of shear stresses along the plate. The simulation results indicate that the use of active feedback control, which employs reasonable control effort, can significantly reduce the frictional drag exerted on the plate compared to the open-loop values.
 
A method is presented for assessing certain validity aspects of predictions made by neural networks used to approximate continuous-time dynamical systems. This method searches for noninvertibility (nonuniqueness of the reverse-time dynamics) of the fitted model, an indication of breakdown of proper dynamical behavior. It is useful for computing bounds on the valid range of network predictions.
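The idea can be illustrated on a one-dimensional fitted map: reverse-time dynamics become nonunique wherever the map fails to be one-to-one, i.e., wherever its derivative changes sign. The cubic "model" below is a stand-in for a trained network, not the paper's test case.

```python
# Noninvertibility check for a fitted 1D map x_{n+1} = F(x_n): locate the
# fold points where F'(x) changes sign; predictions bracketing a fold have
# nonunique preimages and are therefore suspect.
import numpy as np

F = np.poly1d([1.2, -0.1, -0.8, 0.3])      # stand-in for a trained network
xs = np.linspace(-1.5, 1.5, 2001)
dF = F.deriv()(xs)
folds = xs[np.where(np.diff(np.sign(dF)) != 0)[0]]  # sign changes of F'
print("noninvertibility (fold) points near:", np.round(folds, 3))
```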
 
The properties of a data base management system specifically designed for the support of process engineering have been investigated. A flow-sheeting program, CHESS, has been integrated into Chiyoda's DPLS. The advantages of data-based simulation stem primarily from the data-centered architecture of the integrated system. The scope of the system can be expanded freely without altering the initial design. User commands separate the steps which build the project data base from those which execute any portion of the complete simulation, as well as from the report generation steps.
 
The rapid development of computing and control is reviewed and placed in a historical context. Two examples are given to illustrate the present discrepancy between the way in which we deal in the design of systems with the human and the mechanical components, and this discrepancy is traced to the inadmissibility of purpose in our science-based technology. An alternative way of regarding science and technology is suggested.
 
To improve the quality of decision making in process operations, it is essential to implement integrated planning and scheduling optimization. A major challenge for the integration is that the corresponding optimization problem is generally hard to solve because of its intractable model size. In this paper, the augmented Lagrangian method is applied to solve the full-space integration problem, which has a block-angular structure. To resolve the non-separability issue in the augmented Lagrangian relaxation, we study the traditional method, which approximates the cross-product term through linearization, and also propose a new decomposition strategy based on two-level optimization. The results from a case study show that the augmented Lagrangian method is effective in solving the large integration problem and generating a feasible solution. Furthermore, the proposed decomposition strategy based on two-level optimization obtains better feasible solutions than the traditional linearization method.
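A scalar caricature of the decomposition: two quadratic subproblems linked by a consensus constraint x = y, solved by augmented Lagrangian iterations in which the non-separable cross term is handled by alternating (two-level-style) minimization. The objectives, penalty parameter, and closed-form updates below are illustrative assumptions; real integrated planning/scheduling problems are large MILPs.

```python
# Augmented Lagrangian on min f1(x) + f2(y) s.t. x = y:
#   L = f1(x) + f2(y) + lam*(x - y) + (rho/2)*(x - y)^2
# The (x - y)^2 cross term couples the subproblems; alternating minimization
# restores separability, one subproblem at a time.
f1 = lambda x: (x - 3.0) ** 2        # "planning" objective (stand-in)
f2 = lambda y: 2.0 * (y - 1.0) ** 2  # "scheduling" objective (stand-in)

x, y, lam, rho = 0.0, 0.0, 0.0, 10.0
for k in range(50):
    # Subproblem 1: min_x f1(x) + lam*x + (rho/2)(x - y)^2  (closed form here)
    x = (6.0 - lam + rho * y) / (2.0 + rho)
    # Subproblem 2: min_y f2(y) - lam*y + (rho/2)(x - y)^2
    y = (4.0 + lam + rho * x) / (4.0 + rho)
    lam += rho * (x - y)             # multiplier update on the linking constraint
print(f"x = {x:.4f}, y = {y:.4f}, consensus optimum ~ {5/3:.4f}")
```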
 
Finding optimal facility layouts is a classic problem in process systems engineering as well as operations research. As initial models were unsuitable for instances with unequal facility sizes or unknown potential positions, continuous facility layout (CFL) models were introduced to address these limitations by modeling facilities as geometric entities and searching for an optimal two-dimensional packing. However, solving these new models is dramatically harder: finding optimal layouts is beyond the reach of current optimization techniques, except for tiny instances. In this paper, we present several important theoretical results. We prove that it suffices to enumerate finitely many candidate solutions to secure an optimal solution, despite the fact that CFL admits infinitely many feasible layouts. We then develop a specialized branch-and-bound algorithm that further boosts search efficiency by exploiting problem structure to prune large portions of the solution space. Comprehensive computational studies show that this new algorithm substantially outperforms three existing approaches. We also discuss extensions of this algorithm to more general layout instances.
 
There is increasing interest in the control of particulate systems, with a focus on the control of distributions (e.g., the particle size distribution (PSD)). Such distributions are known to have strong correlations with end-product quality indices, thus motivating their tight regulation. This paper addresses the design of a control strategy for the PSD in a semi-batch emulsion polymerization process. In particular, this work presents a hybrid modeling approach for batch-to-batch optimization in which a fundamental population balance model describing the PSD evolution is augmented by a partial least squares (PLS) model. This hybrid model is then embedded in a successive quadratic program (SQP) to design surfactant and initiator input trajectories that drive the process to a target PSD.
 
Ontologies reflect our view of what exists, and developing ontologies for a given domain requires a common context. This context can be characterized explicitly by means of an upper ontology. Upper ontologies define top-level concepts such as physical objects, activities, mereological and topological relations from which more specific classes and relations can be defined. As an effort to support the development of domain ontologies, we are developing an OWL ontology based on the ISO 15926 standard. This paper introduces the key aspects of the ontology, describes some of its main classes and properties and discusses its benefits and applications in the process engineering domain.
 
Model predictive control (MPC) is currently the most widely implemented advanced process control technology for petroleum refineries and chemical plants. Based on the present state of the art in theory and practice, MPC works well for processes operating over a narrow range of conditions. However, processes frequently have to operate over a wide range of conditions, for reasons such as varying feedstocks, fluctuating markets for products and raw materials, large process disturbances, and equipment wear. Unsatisfactory MPC performance over widely ranging operating conditions may result in process downtime, environmental and safety risks, and waste of resources, with substantial economic losses. Therefore, there is a need for flexible MPC systems that perform well over a wide range of process operating conditions. While the inner complexity of such next-generation MPC systems may be high (to realize the sought improvements in control performance), the complexity of their design, operation, and maintenance by process engineers and operators should be low. The development and implementation of flexible MPC systems will almost certainly be facilitated by the predictable future availability of ever more powerful computing and communication hardware.
 
A general optimization framework for discrete and continuous problems based on genetic algorithmic techniques is presented. The proposed framework exhibits a structured information exchange leading to an efficient search procedure. The utility of the method is demonstrated by applying it to the design of an unsplit heat exchanger network. We also extend the basic idea of genetic algorithms to develop a new method, called the extended genetic search (EGS), for optimization in continuous spaces. The performance of the extended genetic search for several test cases of nonlinear unconstrained and constrained optimization problems is also presented. We also compare the performance of this technique with others, including simulated annealing. The proposed framework turns out to be quite robust and is able to locate global optimum solutions for problems where gradient-based algorithms fail. The results obtained were comparable to the ones obtained with simulated annealing. The implementational simplicity of the EGS framework makes it particularly useful as a tool for problems where approximate solutions are needed quickly.
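A minimal continuous genetic-algorithm sketch in the spirit of such methods follows, with tournament selection, blend crossover, and Gaussian mutation on a standard multimodal test function. The operators and parameters are generic textbook choices, not the EGS operators themselves.

```python
# Continuous GA sketch on the Himmelblau function (four global minima, f = 0):
# real-valued chromosomes, tournament selection, blend crossover, Gaussian
# mutation, and single-individual elitism.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2

pop = rng.uniform(-5, 5, size=(40, 2))
for gen in range(200):
    fit = np.apply_along_axis(f, 1, pop)
    # Tournament selection: the better of two random individuals is a parent.
    i, j = rng.integers(0, 40, (2, 40))
    parents = pop[np.where(fit[i] < fit[j], i, j)]
    # Blend crossover between paired parents, then Gaussian mutation.
    alpha = rng.uniform(size=(40, 1))
    children = alpha * parents + (1 - alpha) * parents[::-1]
    children += rng.normal(scale=0.1, size=children.shape)
    children[0] = pop[np.argmin(fit)]        # elitism: keep the best so far
    pop = children
best = pop[np.argmin(np.apply_along_axis(f, 1, pop))]
print("best point:", np.round(best, 3), "objective:", f(best))
```

No gradients are used anywhere, which is the property that lets such searches survive the nonsmooth, multimodal landscapes where gradient-based algorithms fail.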
 
In most multiproduct batch plants, the short-term planning activity starts by considering the set of product orders to be filled during the scheduling period. Each order specifies the product and the amount to be manufactured, as well as the promised due date and the release time. Several orders can be related to the same product, though featuring different quantities and due dates. The initial task to be accomplished by the scheduler is the so-called batching process, which transforms the product orders into equivalent sets of batches to be scheduled and subsequently assigns a due date to each one. To execute the batching procedure for a particular product, the scheduler should account not only for the preferred unit sizes but also for all the orders related to that product and their corresponding deadlines. Frequently, a batch is shared by several orders, with the earliest one determining the batch due date. In this paper, a new two-step systematic methodology for the scheduling of single-stage multiproduct batch plants is presented. In the first phase, the product batching process is accomplished so as to minimize the work-in-process inventory while meeting the orders' due dates. The set of batches so attained is then optimally scheduled to meet the product orders as close to their due dates as possible. New MILP continuous-time models for both the batching and the scheduling problems were developed. In addition, widely known heuristic rules can easily be embedded in the scheduling problem formulation to achieve faster convergence to near-optimal schedules for real-world industrial problems. Three example problems involving up to 29 production orders have been successfully solved in low computational time.
 
A grey-box model, based on a priori knowledge about heat transfer with phase change and on microwave effects approximated by a continuous function, is proposed to simulate the thawing of food in a microwave guide in the fundamental TE1,0 mode. The partial differential equation describing heat transfer with phase change is reduced to a lumped-parameter system using a 2D finite-volume scheme. The heating due to the microwaves enters the heat equation as the spatial variation of the microwave power flux. The flux variation is approximated by a continuous function of temperature and of the wave power, whose coefficients are estimated via inverse techniques. Finally, the model is validated on the thawing of a block of tylose filling the section of the wave guide.
 
Solutions to models with different length scales may contain regions such as shocks, steep fronts, and other near-discontinuities. Adaptive meshing strategies, in which a spatial mesh network is adjusted dynamically so as to capture the local behavior accurately, are described. The algorithm is tested on an example of solid-solid combustion.
 
The basic idea of the proposed optimisation method is to replace the model equations by an equivalent neural network (NN) that mimics the phenomenological model, and to use this NN to carry out a grid search mapping the entire region of interest. The proposed optimisation approach was applied to the industrial process of nylon-6,6 polymerisation in a twin-screw extruder reactor, which corresponds to the finishing stage of an industrial polymerisation plant. A qualitative optimisation procedure is used, taking into account safe operating conditions, wear and tear of the equipment, product quality, and energy consumption. The chosen operational variables are then checked against the phenomenological model. This approach provides more comprehensive information for the engineer's analysis than the conventional non-linear programming procedure.
 
A numerical investigation into the physical characteristics of dilute gas–particle flows over a square-sectioned 90° bend is reported. A modified Eulerian two-fluid model is employed to predict the gas–particle flows. The computational results are compared with the LDV measurements of Kliafas and Holt, wherein particles with a diameter of 50 μm are simulated at a flow Reynolds number of 3.47 × 10⁵. An RNG-based κ–ε model is used as the turbulence closure, wherein additional transport equations are solved to account for the combined gas–particle interactions and the turbulence kinetic energy of the particle phase. Moreover, the current turbulence modelling formulation provides a better understanding of the particle and combined gas–particle turbulent interactions. The Eulerian–Eulerian model used in the current study was found to yield good agreement with the measured values.
 
This paper addresses the detection of abnormal process situations during plant operation via an effective trending strategy. Wavelet-domain hidden Markov models (HMMs) are exploited as a powerful tool for statistical modeling and processing of wavelet coefficients. We focus on the multivariate problem as many variables contribute to the decision regarding process status. A simulation study illustrates the salient features of the proposed framework.
 
Enterprises today have realized the importance of supply chain management for achieving operational efficiency, cutting costs, and maintaining quality. Uncertainties in supply, demand, transportation, market conditions, and many other factors can interrupt supply chain operations, causing significant adverse effects. These uncertainties motivate the development of decision support systems for managing disruptions in the supply chain. In this paper, we propose a model-based framework for rescheduling operations in the face of supply chain disruptions. A causal model, called the composite-operations graph, captures the cause-and-effect relationships among all the variables in supply chain operation. Its subgraph, called the scheduled-operations graph, captures the causal relationships in a schedule and is used for identifying the consequences of a disruption. Rescheduling is done by searching a rectifications-graph, which captures all possible options to overcome the disruption effects, based on a user-specified utility function. In contrast to heuristic approaches, the main advantages of the proposed model-based rescheduling method are the completeness of the solution search and the flexibility of the utility function. The proposed framework is illustrated using a refinery supply chain example.
 
An existing hydrogen plant is simulated using a rigorous model for the steam reformer, shift converters, CO2 absorber using DGA (diglycolamine), and methanator. The DGA-based CO2 absorber together with the methanator modeled in this work replaces the PSA (pressure swing adsorption) system used in other studies, making the current study unique in this respect. Close agreement is observed between the modeling results and industrial data from the hydrogen plant of the Tehran refinery. Thereafter, an adaptation of the non-dominated sorting genetic algorithm is employed to perform a multi-objective optimization, with simultaneous maximization of the hydrogen product and export steam flow rates as the two objective functions. For the design configuration considered in this study, sets of Pareto-optimal operating conditions are obtained.
 
The recently proposed mathematical model of Suchak et al. [A.I.Ch.E. J. 37, 23 (1991)] for the absorption of NOx into nitric acid solutions in a countercurrent-flow packed column has been modified by taking into account separate energy balances for the gas and liquid phases and the variation of the molar flow rates of water and nitric acid in the liquid phase as a function of the height of the column.
 
A rigorous computer model is developed for the simulation of absorption and single or complex reactions in packed or plate columns. Part I deals with the packed-column version. It allows the computation of the concentration and pressure profiles along the column and of the concentration profiles in both the gas and liquid film at any height in the column. The use of mass-transfer coefficients leads to the real, not the theoretical, height of the column. The rigorous solution is compared with approximate solutions. Part II deals with the plate-column version. Here too, mass-transfer coefficients are used. Non-isothermal conditions are taken care of through enthalpy balances. The computer program is presented in some detail and applications to industrial situations are illustrated.
 
An approach to manage the complexity involved in the redesign/retrofit of chemical processes is proposed in this paper. This approach uses model-based reasoning (MBR) to automatically generate alternative representations of an existing chemical process at several levels of abstraction. The hierarchical representation results in sets of equipment and sections organised according to their functions and purposes within the overall process. Our approach is an extension of the multimodelling and multilevel flow modelling methodologies. These, together with the methodology of Douglas and Turton, are used to represent a process hierarchically, thus improving the identification of analogous units (equipment) or meta-units (process sections) from different chemical processes; the resulting sets of units and meta-units are organised according to their functional and teleological models. The implemented tool includes a CBR system to retrieve, from a library of cases, units and/or meta-units similar to the one selected by the user (similarity is measured according to the function and teleology of a unit and its neighbours). As output of the CBR system we obtain a set of cases (units and meta-units) ranked according to their similarity. In the case of meta-units, the replacement of process sections may result in changes to the process topology while keeping the same functionality (e.g. a reactive-distillation unit replacing a reactor and separation system). The CBR operates at every level of the abstraction hierarchy, both for the problem case and the library cases, opening for the first time the possibility of identifying similar solutions at different levels of abstraction and thus widening the scope and quality of the search for similar solutions in the library of cases.
 
Process systems engineering (PSE) has been an active research field for almost 50 years. Its major achievements include methodologies and tools to support process modeling, simulation, and optimization (MSO). Mature, commercially available technologies have been penetrating all fields of chemical engineering, in academia as well as in industrial practice. MSO technologies have become a commodity; they are no longer a distinguishing feature of the PSE field. Consequently, PSE has to reassess and reposition its future research agenda. Emphasis should be put on model-based applications in all PSE domains, including product and process design, control, and operations. Furthermore, systems thinking and systems problem solving have to be prioritized rather than the mere application of computational problem-solving methods. This essay reflects on the past, present, and future of PSE from an academic and industrial point of view. It redefines PSE as an active and future-proof research field which can play an active role in providing enabling technologies for product and process innovations in the chemical industries and beyond.
 
The classical SMB process, including its ModiCon variant, is considered. The mathematical model corresponds to a set of coupled nonlinear partial differential equations completed by initial and boundary conditions. In order to predict the quality of the separation process accurately and efficiently, suitable discretization schemes are needed. We propose a new cascadic multilevel method for the fast calculation of the cyclic steady state, offering high accuracy combined with good mass conservation. The proposed cascadic multilevel approach can also be applied to other discretization methods.
 
Process optimization problems are frequently characterized by large models, with many variables and constraints but relatively few degrees of freedom. Reduced Hessian decomposition methods applied to successive quadratic programming (SQP) exploit the low dimensionality of the subspace of the decision variables, and have been very successful for a wide variety of process applications. However, further development is needed to improve the efficient large-scale use of these tools. In this study we develop an improved SQP decomposition algorithm with coordinate bases that includes an inexpensive second-order correction term. The resulting algorithm is 1-step Q-superlinearly convergent. More importantly, the resulting algorithm is largely independent of the specific decomposition steps; thus, the inexpensive factorization of the coordinate decomposition can be applied in a reliable and efficient manner. With this efficient and easy-to-implement NLP strategy, we continue to improve the performance of the optimization algorithm by exploiting the mathematical structure of existing process engineering models. Here we consider the tailoring of a reduced Hessian method to the block-tridiagonal structure of the model equations for distillation columns. This approach is applied to the Naphthali–Sandholm algorithm implemented within the UNIDIST and NRDIST programs. Our reduced Hessian SQP strategy is incorporated with only minor changes to the programs' interfaces and data structures. Through this integration, reductions of 20–80% in the total CPU time are obtained compared to general reduced-space optimization; an order-of-magnitude reduction is obtained when compared to conventional sequential strategies. Consequently, this approach shows considerable potential for efficient and reliable large-scale process optimization, particularly when complex Newton-based process models are already available.
 
This article describes an effective and simple primal heuristic that greedily encourages a reduction in the number of binary (0–1) logic variables before an implicit enumerative-type search heuristic is deployed to find integer-feasible solutions to 'hard' production scheduling problems. The basis of the technique is to employ the well-known smoothing functions used to solve complementarity problems for the local optimization problem of minimizing, over all binary variables, the weighted sum of each variable multiplied by its complement. The basic algorithm of the 'smooth-and-dive accelerator' (SDA) is to solve successive linear programming (LP) relaxations with the smoothing functions added to the existing problem's objective function and to use, if required, a sequence of binary-variable fixings known as 'diving'. If the smoothing-function term is not driven to zero as part of the recursion, then a branch-and-bound or branch-and-cut search heuristic is called to close the procedure, finding at least integer-feasible solutions. The heuristic's effectiveness is illustrated by its application to an oil refinery's crude-oil blendshop scheduling problem, which has commonality with many other production scheduling problems in the continuous and semi-continuous (CSC) process domains.
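The core smoothing idea can be sketched as successive LP relaxations in which the concave penalty sum_i w_i * x_i * (1 - x_i) is linearized about the current relaxed point and added to the objective. The toy knapsack-style LP, the uniform weights, and the penalty magnitude below are assumptions, and the diving and branch-and-bound clean-up stages are omitted.

```python
# Successive LP relaxations with a linearized x(1-x) smoothing penalty:
# the gradient of x(1-x) at x^k is (1 - 2 x^k), so each pass re-solves the
# LP with cost c + mu*(1 - 2 x^k), pushing relaxed binaries toward 0 or 1.
import numpy as np
from scipy.optimize import linprog

c = np.array([-5.0, -4.0, -3.0])       # maximize 5x1 + 4x2 + 3x3
A_ub, b_ub = np.array([[2.0, 3.0, 1.0]]), np.array([4.0])
bounds = [(0, 1)] * 3
mu = 15.0                              # penalty weight; must dominate the LP
                                       # reduced costs (a tuning assumption)

x = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).x   # plain relaxation
for k in range(10):
    grad = mu * (1.0 - 2.0 * x)        # linearization of the smoothing term
    x_new = linprog(c + grad, A_ub=A_ub, b_ub=b_ub, bounds=bounds).x
    if np.allclose(x_new, x):
        break
    x = x_new
print("relaxed solution after smoothing:", np.round(x, 3))  # near-binary
```

On this toy instance the plain relaxation is fractional (x2 = 1/3), and the smoothing passes settle on the integer point (1, 0, 1) without any branching.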
 
The chemical industry as a whole does not learn from past accidents. Although the individuals concerned and, to some extent, their companies will learn the lessons, the information may not be widely disseminated throughout the industry. Efforts to change this situation have been made over recent years, most notably the reporting of accidents, sometimes anonymously, in various publications. This in turn has led to the creation of computer databases so that appropriate information can be retrieved quickly. This paper identifies the main indexing and retrieval problems with conventional databases. The use of a domain model, in the form of classification hierarchies, is used to improve the process of indexing, identifying, and ranking information. Existing databases are on the whole provided as stand-alone computer programs, and the user must seek out information about accidents by specifying and submitting a query. This paper considers the novel approach of integrating accident databases with the computer tools used by chemical plant designers, operators, and maintenance engineers, such as CAD systems and digital control systems, so that appropriate accident reports can be automatically retrieved and filtered according to the context in which the user is working.
 
The control system of a process plant, like other physical systems, can break down or fail to function satisfactorily, meaning that control objectives are not achieved. In order to avoid shutdowns or accidents related to unachieved control objectives, an analysis of both the process and the control system must be performed. This paper introduces the use of functional modeling for control system reconfiguration. An extension of the Multilevel Flow Modeling (MFM) methodology is presented; this extension provides the functionality to analyze the control system status as well as possible reconfigurations in a semiautomatic and consistent way, thereby enhancing the system's autonomy and robustness. In order to obtain an integrated model, an open, neutral, domain- and platform-independent language is needed. The Systems Modeling Language (SysML) complies with these requirements and can be applied to develop MFM models.
 
Fault diagnosis is an important aspect of process operations, and there is an abundance of literature on fault diagnosis for the process industries. So far, most of the work has focused on fault detection, isolation, and identification. The use of the results of fault diagnosis, often left to the operator through table-lookup strategies, is frequently incomplete, time-consuming, and prone to error. In this study, a dynamic-optimization-based control redesign for fault accommodation is proposed. The complete fault accommodation system includes a dynamic process model, fault detection and isolation, fault identification and state estimation, and a dynamic optimization block for control redesign. This method simultaneously achieves economic optimality and operational safety owing to its capability to handle both hard equipment constraints and operational constraints.
 
Fault detection and isolation (FDI) for industrial processes has been actively studied during the last few decades. Traditionally, the most widely implemented FDI methods have been based on model-based approaches. In the modern process industry, however, there is a demand for data-based methods due to the complexity and limited availability of mechanistic models. The aim of this paper is to present a data-based, fault-tolerant control (FTC) system for a simulated industrial benchmark process, the Shell control problem. Data-based FDI systems employing principal component analysis (PCA), partial least squares (PLS), and subspace model identification (SMI) are presented for achieving fault tolerance in co-operation with the controllers. The effectiveness of the methods is tested by introducing faults into simulated process measurements. The process is controlled using model predictive control (MPC). To assess the effectiveness of the MPC, the FTC system is also tested with a control strategy based on a set of PI controllers.
 
A numerical scheme is presented for the computer simulation of multicomponent gas transport, possibly accompanied by reversible chemical reactions, in a macroporous medium, based on the dusty gas model. Using analytical solutions for simple systems, it is shown that the derivation of the scheme is mathematically correct and that the implementation in Borland Delphi code is free of vital programming errors. Simulations showed remarkable accuracy, robustness, and efficiency. A demonstration version of the computer program is available on the internet: http://www.ct.utwente.nl/∼ims/.
 
In this paper the problem of designing a polymer repeat unit with fine-tuned or optimized thermophysical and mechanical properties is addressed. The values of these properties are estimated using group contribution methods based on the number and type of the molecular groups participating in the polymer repeat unit. By exploiting the prevailing mathematical features of the structure-property relations, the following two research objectives are addressed: (i) how to efficiently locate a ranked list of the best polymer repeat unit architectures with mathematical certainty; and (ii) how to quantify the effect of imprecision in property estimation on the optimal design of polymers. A blend of mixed-integer linear optimization, chance-constrained programming, and multilinear regression is utilized to answer these questions. The proposed methodology is highlighted with an illustrative example. Preliminary results (see also Maranas (1996a,b)) demonstrate that the proposed framework identifies the mathematically best molecular design and quantifies the profound effect that property-prediction uncertainty may have on optimal molecular design.
 
Practical application of the general rate model is often hindered by computational complexity and by unknown parameter values. Model simplifications and parameter correlations are not always applicable, and repeated model solutions are required for parameter estimation and process optimization. We have hence developed and implemented a fast and accurate solver for the general rate model with various adsorption isotherms. Several state-of-the-art scientific computing techniques have been combined for maximal solver performance: (1) the model equations are spatially discretized with finite volumes and the weighted essentially non-oscillatory (WENO) method; (2) a non-commercial solver with variable step width and order is applied for time integration; (3) the internal linear solver module is replaced by customized code that is based on domain decomposition and can be executed on parallel computers. We demonstrate the speed and accuracy of our solver with numerical examples. The WENO method significantly improves accuracy even on rather coarse grids, and the solver runtime scales linearly with the relevant grid sizes and processor numbers. Code parallelization is essential for utilizing the full power of multiprocessor and multicore technology in modern personal computers. The general rate model of a strongly non-linear two-component system is solved in a few seconds.
 
A proper application of the quadrature method for the numerical solution of a model of an isothermal reactor with axial mixing is presented. It is shown that the present method alleviates the numerical difficulties encountered in the previous finite difference and quadrature solutions while satisfying the boundary conditions accurately. In addition, the present method captures the effect of the pressure waves, which is missing in the previously reported solutions.
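As an illustration of the quadrature method on this class of problems, the sketch below discretizes a steady axial-dispersion model, (1/Pe) y'' - y' - Da*y = 0 with Danckwerts boundary conditions, using differential-quadrature weighting matrices. The parameter values and grid size are assumptions for illustration, and the pressure-wave effects treated in the paper are not represented.

```python
# Differential quadrature on Chebyshev points: derivatives at the grid points
# are weighted sums of the function values, so the steady two-point boundary
# value problem reduces to one linear solve.
import numpy as np

n, Pe, Da = 15, 10.0, 2.0
z = 0.5 * (1 - np.cos(np.pi * np.arange(n) / (n - 1)))  # Chebyshev points on [0, 1]

# First-derivative weighting matrix via barycentric Lagrange weights.
w = np.array([1.0 / np.prod(z[i] - np.delete(z, i)) for i in range(n)])
D = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            D[i, j] = (w[j] / w[i]) / (z[i] - z[j])
    D[i, i] = -D[i].sum()              # rows of an exact derivative sum to zero
D2 = D @ D                             # second-derivative weights

A = D2 / Pe - D - Da * np.eye(n)       # interior residual rows
b = np.zeros(n)
A[0], b[0] = np.eye(n)[0] - D[0] / Pe, 1.0   # Danckwerts inlet: y(0) - y'(0)/Pe = 1
A[-1], b[-1] = D[-1], 0.0                    # outlet: y'(1) = 0
y = np.linalg.solve(A, b)
print("exit concentration y(1) =", y[-1])
```

Because the boundary conditions occupy their own rows of the collocation system, they are satisfied exactly at the machine-precision level, which is the point the abstract makes against the earlier schemes.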
 
Appreciable efforts have been expended in manufacturing chemical products in such a way that pollution may be minimized and, if possible, even avoided. The conceptual design of such production plants involves an interplay between design and operational requirements, which attempt to balance non-commensurable objectives such as economics, safety, health, and environmental impact. If the environmental impact problem is not considered in the conceptual phase of process development, extensive end-of-pipe treatment of wastes is needed, which is always an expensive activity with a negative economic effect. In this work, a conceptual design of an acetaldehyde plant with zero avoidable pollution is presented. The proposed process is based on the oxidation of ethyl alcohol over an FeMo catalyst. In order to obtain high-performance operation (high conversion and complete selectivity), a non-conventional reactor design is proposed. The computer-aided design involves the experimental conversion of ethanol to acetaldehyde and the reactor analysis through extensive simulation using a specific computer simulation code; the separation problem and the whole-plant analysis are handled using the commercial simulator HYSIM. The proposed process shows great potential, permitting very high conversion and complete selectivity and thus leading to zero waste emission. The process methodology considered in this work ranges from the experimental conversion of ethanol to acetaldehyde to the proposal of a new reactor design, a set of separation units, and the simulation of the whole acetaldehyde plant.
 
We describe a hierarchy of methods, models, and calculation techniques that support the design of reactive distillation columns. The models have increasingly sophisticated data needs as the hierarchy is ascended. The approach is illustrated for the production of methyl acetate because of its commercial importance and because of the availability of adequate published data for comparison. In the limit of reaction and phase equilibrium, we show (1) the existence of both a minimum and a maximum reflux, (2) the existence of a narrow range of reflux ratios that will produce high conversions and high-purity methyl acetate, and (3) the existence of multiple steady states throughout the entire range of feasible reflux ratios. For finite rates of reaction, we find (4) that the desired product compositions are feasible over a wide range of reaction rates, up to and including reaction equilibrium, and (5) that multiple steady states do not occur over the range of realistic reflux ratios, but are found at high reflux ratios outside the range of normal operation. Our calculations are in good agreement with the experimental results reported by Bessling et al. [Chemical Engineering Technology 21 (1998) 393].
 
Polyethylene (PE) is one of the most widely used polymers in the chemical industry. Among PE processes, the low-density polyethylene (LDPE) process is generally considered the most profitable segment of the polyethylene business worldwide. There are two basic processes for manufacturing LDPE, based on autoclave and tubular reactors; the autoclave reactor is often used to produce high-value specialty grades, including ethylene-vinyl acetate (EVA) copolymer. Depending on the product grade, reactor zone temperatures in the autoclave process can be in the range of 150–230 °C, with pressures in the range of 1500–2000 kg/cm2. In this paper, a dynamic simulation of an industrial EVA copolymerization reactor is established. Industrial operating-condition data for seven product grades, with melt index ranging from about one to over four hundred, are used as fitting data to obtain the key kinetic parameters of the model. The predicted outputs of melt index, vinyl acetate wt.% in polymer, and polymer production rate are compared with plant data. The dynamic model was used to determine the controller tuning constants of the three important zone temperature control loops of the autoclave reactor, and also for a comparative study of various operating policies during grade transitions.
 
Drug-delivery systems with predictable delivery rates were designed using an artificial neural network-based optimization algorithm. A two-chamber diffusion cell was used to study the permeation of estradiol through an ethylene-vinyl acetate copolymer membrane. The explanatory variables were the vinyl acetate (VA) content of the membrane, the poly(ethylene glycol) (PEG) solvent composition, and the membrane thickness. After deriving a neural network model to predict estradiol delivery rates as a function of these input variables, a constrained optimization procedure was applied to estimate the membrane/vehicle properties necessary to achieve a prescribed dosage. The results compared adequately well with experimental data, with 71% of the data agreeing within one standard deviation. Input sensitivity analysis showed that at specific VA levels, drug delivery was more sensitive to changes in PEG composition. The non-uniqueness of the inversion method and the accuracy of the procedure were investigated using neural network-based two-dimensional contour plots. The proposed methodology could be used to design customized polymer-based drug-delivery systems that meet specific end-user requirements.
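The model-then-invert pattern described above can be sketched as follows: a network is fit to data and then inverted by bounded optimization for a prescribed delivery rate. The surrogate data, variable ranges, and target value below are hypothetical stand-ins, not the paper's measurements.

```python
# Fit an MLP mapping (VA %, PEG %, thickness) -> delivery rate, then search
# the bounded input space for a design achieving a prescribed rate.
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform([9, 0, 50], [40, 80, 250], size=(300, 3))  # assumed ranges
rate = 0.05 * X[:, 0] + 0.02 * X[:, 1] - 0.004 * X[:, 2] + 2.0  # surrogate truth
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=4000, random_state=0)
net.fit(X, rate + 0.05 * rng.normal(size=300))

target = 2.5                                   # prescribed delivery rate (assumed)
obj = lambda x: (net.predict(x.reshape(1, -1))[0] - target) ** 2
res = minimize(obj, x0=np.array([20.0, 40.0, 150.0]),
               bounds=[(9, 40), (0, 80), (50, 250)], method="L-BFGS-B")
# Different x0 values can reach different, equally valid designs, which is
# the non-uniqueness of the inversion noted in the abstract.
print("candidate design (VA, PEG, thickness):", np.round(res.x, 2))
```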
 
The alternative fuel butanol can be produced via acetone–butanol–ethanol (ABE) fermentation from biomass. The high cost of separating ABE from the dilute fermentation broth has so far prohibited industrial-scale production of bio-butanol. In order to facilitate effective and energy-efficient product removal, we suggest a hybrid extraction–distillation downstream process with ABE extraction in an external column. By means of computer-aided molecular design (CAMD), mesitylene is identified as a novel solvent with excellent properties for ABE extraction from the fermentation broth. An optimal flowsheet is developed by systematic process synthesis, which combines shortcut and rigorous models with rigorous numerical optimization. Optimization of the flowsheet structure and the operating point, consideration of heat integration, and evaluation of the minimum energy demand are covered. It is shown that the total annualized costs of the novel process are considerably lower than the costs of alternative hybrid or pure distillation processes.
 
Quantitative measures are determined for the evaluation of the structural properties of nonlinear processes. These include simple structural measures for the evaluation of the "best" achievable control quality in a given process, regardless of the type of controller used. The notions of relative orders (time delays) with respect to manipulated inputs and disturbances and of the characteristic matrix are used to determine the best achievable closed-loop response for each controlled output and to derive the conditions under which the best closed-loop response in all outputs, together with complete input-output decoupling, is feasible.
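For reference, the two notions invoked above have standard definitions in the control-affine setting; the notation below is the usual one and is assumed here, since the abstract does not spell it out.

```latex
% Assumed notation: \dot{x} = f(x) + \sum_j g_j(x) u_j, \quad y_i = h_i(x),
% with L_f denoting the Lie derivative along f.
r_i = \min\bigl\{\, k \in \mathbb{N} : L_{g_j} L_f^{\,k-1} h_i(x) \neq 0
      \ \text{for some } j \,\bigr\}
      \quad \text{(relative order of output } y_i\text{)}

C(x) = \bigl[\, L_{g_j} L_f^{\,r_i - 1} h_i(x) \,\bigr]_{i,j}
      \quad \text{(characteristic matrix)}
```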
 
The formulation of policies requires the selection and configuration of effective and acceptable courses of action to reach explicit goals. A one-size-fits-all policy is unlikely to achieve the desired goals; as a result, the identification of a suite of alternative policies, together with clear indications of their trade-offs, is crucial to accommodate the diversity of stakeholders’ preferences. At present, the formulation of transport policies is done manually; this fact, together with the size of the space of possible policies, results in a large part of that space being left unexplored. A six-step framework to explore the space of alternative transport policies in order to achieve environmental targets is proposed. The process starts with a user-defined set of specific policy measures, using them as building blocks in the generation of alternative policy packages, clusters and future images according to the user's preferences and goals.
 
Top-cited authors
Venkat Venkatasubramanian
  • Columbia University
Lorenz T. Biegler
  • Carnegie Mellon University
Rafiqul Gani
  • PSE for SPEED Company
Wolfgang Marquardt
  • RWTH Aachen University
Efstratios N. Pistikopoulos
  • Texas A&M University