Article

A Globally Convergent, Locally Optimal Min-H Algorithm for Hybrid Optimal Control


Abstract

Existing algorithms for the indirect solution of hybrid optimal control problems suffer from several deficiencies: Min-H algorithms are not applicable to hybrid systems and are not globally convergent. Indirect multiple shooting and indirect collocation are difficult to initialize and have a small domain of convergence. Contrary to these existing algorithms, a novel min-H algorithm is introduced here, which is initialized intuitively and converges globally to a locally optimal solution. The algorithm solves hybrid optimal control problems with autonomous switching, a fixed sequence of discrete states, and unspecified switching times. Furthermore, the convergence of the proposed algorithm is at least quadratic near the optimum, and solutions are found with high accuracy. A numerical example shows the efficiency of the novel min-H algorithm.
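The core min-H idea — repeatedly replacing the control by the pointwise minimizer of the Hamiltonian along the current trajectory — can be sketched for a purely continuous (non-hybrid) scalar linear-quadratic problem. All dynamics, weights, horizon, and the damping factor below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Min-H iteration sketch for a scalar, purely continuous LQ problem:
#   minimize 0.5 * int_0^T (q*x^2 + r*u^2) dt,  subject to  x' = a*x + b*u
a, b, q, r = -1.0, 1.0, 1.0, 1.0
T, N, x0 = 1.0, 200, 1.0
dt = T / N

def rollout(u):
    """Integrate the state forward (explicit Euler) and accumulate the cost."""
    x = np.empty(N + 1)
    x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + dt * (a * x[k] + b * u[k])
    return x, 0.5 * dt * np.sum(q * x[:-1] ** 2 + r * u ** 2)

def costate(x):
    """Integrate the adjoint backward: p' = -(q*x + a*p), with p(T) = 0."""
    p = np.empty(N + 1)
    p[-1] = 0.0
    for k in range(N - 1, -1, -1):
        p[k] = p[k + 1] + dt * (q * x[k + 1] + a * p[k + 1])
    return p

u = np.zeros(N)                  # intuitive initialization: a constant control
costs = []
for _ in range(50):
    x, J = rollout(u)
    costs.append(J)
    p = costate(x)
    u_star = -b * p[:-1] / r     # pointwise minimizer of H = 0.5(qx^2 + ru^2) + p(ax + bu)
    u = u + 0.5 * (u_star - u)   # damped min-H update for robustness

x, J_final = rollout(u)
```

The hybrid algorithm of the paper adds switching manifolds, multipliers, and jump conditions on top of this basic forward-backward structure; the damped update here is just one simple way to stabilize the iteration.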


... To apply the indirect approach, an extension of Pontryagin's maximum principle to hybrid systems is used to compute the time-trajectories of the continuous variables given the sequences of the discrete variables. In general, this requires solving a differential-algebraic system of equations, which is a difficult problem [18,19,12]. However, it is shown that for the special case of affine hybrid systems with quadratic cost functionals, the problem reduces to the solution of an algebraic system of equations in terms of only the jump times within the prediction horizon, which is much easier to solve. ...
... The conditions given in Proposition 1 together with (4a), (4e), and (7) constitute a differential-algebraic system of equations that can be solved for x, u, and t_{s,i} for i ∈ [1..n−1] (in order to solve Problem 2). In general, the solution can be obtained by using numerical methods for hybrid optimal control based on the maximum principle [18,19,12]. However, the solution process becomes considerably easier for the class of affine hybrid systems, as explained in the next part. ...
Preprint
Full-text available
Application of model predictive control (MPC) to systems with hybrid dynamics is considered in this work. The response of a hybrid system within the prediction horizon is composed of both discrete-valued sequences and continuous-valued time-trajectories. Given a cost functional, the optimal continuous trajectories can be calculated from the discrete sequences by means of recent results on the hybrid maximum principle. It is shown that these calculations reduce to the solution of a system of algebraic equations in the case of affine hybrid systems. Then, an algorithm is proposed for hybrid MPC which determines the control inputs by iterating on the discrete sequences. It is shown that the algorithm finds the correct solution in a finite number of steps if the selected cost functional satisfies certain conditions. The efficiency of the proposed method is shown in a case study.
... If the calculation satisfies the inequality above, the computation stops with the proper solution. Moreover, as reported by Passenberg et al. (2014), a further advantage of the min-H algorithm in comparison to indirect multiple shooting and indirect collocation is the simplified initialization, firstly due to the enlarged domain of convergence and secondly because, instead of physically nonintuitive adjoint variables, only a physically intuitive control history and some multipliers have to be guessed for initialization. For successful convergence, it is sufficient to initialize the multipliers with zero and the control with constant functions such that the desired switching manifolds are reached. ...
Article
This paper is concerned with the optimal boundary control of a non-dimensional non-linear parabolic system consisting of the Kuramoto–Sivashinsky–Korteweg–de Vries equation and a heat equation. By the Dubovitskii and Milyutin functional analytical approach, we first prove, in the fixed final horizon case, the Pontryagin maximum principle of the optimal control problem of this coupled system. Then, under weaker additional conditions, we study the controlled system in the free final horizon case and present further results of current interest. The necessary optimality conditions are established for optimal control problems in these two cases. Finally, a remark on how to utilize the obtained results is made for illustration.
Article
Hybrid control systems are considered, combining continuous-time dynamics and discrete-time dynamics, and modeled by differential equations or inclusions, by difference equations or inclusions, and by constraints on the resulting dynamics. Solutions are defined on hybrid time domains. Finite-horizon and infinite-horizon optimal control problems for such control systems are considered. Existence of optimal open-loop controls is shown. The assumptions used include, essentially, the existence for the (non-hybrid) continuous-time case; the existence for the (non-hybrid) discrete-time case; mild conditions on the endpoint penalties; and closedness and boundedness, in the finite-horizon case, of the set of admissible hybrid time domains. Examples involving switching systems and hybrid automata are included.
Article
In this article, the main features of direct and indirect approaches for solving optimal control problems are presented. The mentioned methods can be effectively applied in the Model Predictive Control of complex technological systems in electrical, chemical and aerospace engineering, often described by nonlinear differential-algebraic equations. Among the direct and indirect methods for solving optimal control problems, one can mention Euler-Lagrange equations, direct optimization methods and indirect gradient methods.
Article
Full-text available
We propose a very general framework that systematizes the notion of a hybrid system, combining differential equations and automata, governed by a hybrid controller that issues continuous-variable commands and makes logical decisions. We first identify the phenomena that arise in real-world hybrid systems. Then, we introduce a mathematical model of hybrid systems as interacting collections of dynamical systems, evolving on continuous-variable state spaces and subject to continuous controls and discrete transitions. The model captures the identified phenomena, subsumes previous models, yet retains enough structure to pose and solve meaningful control problems. We develop a theory for synthesizing hybrid controllers for hybrid plants in an optimal control framework. In particular, we demonstrate the existence of optimal (relaxed) and near-optimal (precise) controls and derive “generalized quasi-variational inequalities” that the associated value function satisfies. We summarize algorithms for solving these inequalities based on a generalized Bellman equation, impulse control, and linear programming.
Article
Full-text available
This paper presents an approach for solving optimal control problems of switched systems. In general, in such problems one needs to find both optimal continuous inputs and optimal switching sequences, since the system dynamics vary before and after every switching instant. After formulating a general optimal control problem, we propose a two stage optimization methodology. Since many practical problems only concern optimization where the number of switchings and the sequence of active subsystems are given, we concentrate on such problems and propose a method which uses nonlinear optimization and is based on direct differentiations of value functions. The method is then applied to general switched linear quadratic (GSLQ) problems. Examples illustrate the results.
Conference Paper
Full-text available
The paper presents an optimization-based approach to compute controllers for a class of hybrid systems with switched dynamics. The starting point is a representation as a hybrid automaton which models autonomous switching between different nonlinear dynamics and includes discrete as well as continuous control inputs. The automaton is transformed into a linear discrete-time model in equation-based form. The task of generating an optimal control law to drive the system from an initial state into a target region (while avoiding forbidden states) is solved by mixed-integer programming performed in a moving-horizon setting with variable time steps.
Article
Full-text available
In this paper, optimal control for hybrid systems is discussed. Hybrid systems are defined as causal and consistent dynamical systems, and a general formulation for an optimal hybrid control problem is proposed. The main contribution of this paper is to show how necessary conditions can be derived from the maximum principle and the Bellman principle. An illustrative example shows how optimal hybrid control can be achieved via a set of Hamiltonian systems and dynamic programming. However, as in the classical case, difficulties related to numerical solutions exist and are increased by the discontinuous nature of the problem. The search for efficient algorithms remains a difficult and open problem, which is not the purpose of this contribution.
Article
Full-text available
This paper gives a brief list of commonly used direct and indirect efficient methods for the numerical solution of optimal control problems. To improve the low accuracy of the direct methods and to increase the convergence areas of the indirect methods we suggest a hybrid approach. For this a special direct collocation method is presented. In a hybrid approach this direct method can be used in combination with multiple shooting. Numerical examples illustrate the direct method and the hybrid approach.
Conference Paper
Full-text available
This paper provides necessary conditions of optimality, in the form of a maximum principle, for a broad class of hybrid optimal control problems, in which the dynamics of the constituent processes take the form of differential equations with control terms, and restrictions on the transitions or switches between operating modes are described by collections of functional equality and inequality constraints. Different choices of the constraint functionals capture a wide range of possible autonomous and controlled switching strategies. A notable feature of our formulation is the provision it makes for pathwise state constraints on the continuous variables.
Conference Paper
Full-text available
The hybrid minimum principle (HMP) gives necessary conditions to be satisfied for optimal solutions of a hybrid dynamical system. In particular, the HMP accounts for autonomous switching between discrete states that occurs whenever the trajectory hits switching manifolds. In this paper, the existing HMP is extended for hybrid systems with partitioned state space to provide necessary conditions for optimal trajectories that pass through an intersection of switching manifolds. This extension is especially useful for the numerical solution of hybrid optimal control problems, as it allows for algorithms with significantly reduced computational complexity. Algorithms based on previous versions of the HMP solve separate optimal control problems for each possible sequence of discrete states. The extension enables us to treat the optimal sequence as a subject of the optimization that is varied and finally determined during a single optimization run. A first numerical result illustrates the effectiveness of an algorithm based on the extended HMP.
Conference Paper
Full-text available
We consider a hybrid control system and general optimal control problems for this system. We suppose that the switching strategy imposes restrictions on control sets and we provide necessary conditions for an optimal hybrid trajectory, stating a Hybrid Necessary Principle (HNP). Our result generalizes various necessary principles available in the literature.
Article
Full-text available
This paper presents a new approach for solving optimal control problems for switched systems. We focus on problems in which a prespecified sequence of active subsystems is given. For such problems, we need to seek both the optimal switching instants and the optimal continuous inputs. In order to search for the optimal switching instants, the derivatives of the optimal cost with respect to the switching instants need to be known. The most important contribution of the paper is a method which first transcribes an optimal control problem into an equivalent problem parameterized by the switching instants and then obtains the values of the derivatives based on the solution of a two point boundary value differential algebraic equation formed by the state, costate, stationarity equations, the boundary and continuity conditions, along with their differentiations. This method is applied to general switched linear quadratic problems and an efficient method based on the solution of an initial value ordinary differential equation is developed. An extension of the method is also applied to problems with internally forced switching. Examples are shown to illustrate the results in the paper.
Article
Full-text available
This paper presents a method for optimal control of hybrid systems. An inequality of Bellman type is considered and every solution to this inequality gives a lower bound on the optimal value function. A discretization of this "hybrid Bellman inequality" leads to a convex optimization problem in terms of finitedimensional linear programming. From the solution of the discretized problem, a value function that preserves the lower bound property can be constructed. An approximation of the optimal feedback control law is given and tried on some examples.
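The discretized Bellman-inequality idea — every function satisfying the inequality is a pointwise lower bound on the value function, so one maximizes its sum over grid nodes subject to the inequality at each node — can be sketched with a finite-dimensional LP. For brevity this sketch is a 1-D, single-mode, discounted toy problem (grid, dynamics, discount, and costs are illustrative assumptions; the cited paper treats the full hybrid case) and uses SciPy's `linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# LP lower bound on a discounted value function via a discretized Bellman inequality:
#   maximize sum_i V(x_i)  s.t.  V(x_i) <= c(x_i, u) + gamma * V(x_i + dt*f(x_i, u))
xs = np.linspace(-1.0, 1.0, 21)
us = np.linspace(-1.0, 1.0, 5)
dt, gamma = 0.1, 0.9
n = xs.size

A_rows, b_vals = [], []
for i, x in enumerate(xs):
    for u in us:
        xn = np.clip(x + dt * (-0.5 * x + u), xs[0], xs[-1])   # successor state
        k = min(np.searchsorted(xs, xn), n - 1)
        k0 = max(k - 1, 0)
        w = 0.0 if k == k0 else (xn - xs[k0]) / (xs[k] - xs[k0])
        row = np.zeros(n)
        row[i] += 1.0
        row[k0] -= gamma * (1.0 - w)    # linear interpolation of V at the successor
        row[k] -= gamma * w
        A_rows.append(row)
        b_vals.append(dt * (x * x + 0.1 * u * u))

res = linprog(c=-np.ones(n), A_ub=np.array(A_rows), b_ub=np.array(b_vals),
              bounds=[(0, None)] * n, method="highs")
V = res.x
```

The interpolation weights keep the constraints linear in the unknown node values, which is what makes the discretized inequality a plain LP.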
Article
Dynamic programming provides a method to solve hybrid optimal control problems. This contribution extends existing numerical methods originally developed for purely continuous systems, to a class of hybrid systems with autonomous as well as controlled switching behavior. The hybrid dynamics is approximated by a locally consistent discrete Markov decision process. The original optimal control problem is then reformulated for the Markov decision process and solved by standard dynamic programming methods. The convergence of the discrete approximation to the original problem is ensured. The viability of the numerical scheme is illustrated by a two gear transmission system used previously in literature.
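The discrete-approximation idea can be sketched with plain value iteration on a state grid for a two-mode system with controlled switching. The dynamics, costs, switch penalty, and the crude interpolation below are illustrative assumptions; the cited work constructs a locally consistent Markov decision process with convergence guarantees:

```python
import numpy as np

# Value-iteration sketch for a two-mode hybrid system on a discretized grid.
xs = np.linspace(-2.0, 2.0, 41)
us = np.linspace(-1.0, 1.0, 5)
modes = [lambda x, u: -x + u,         # mode 0: fast decay
         lambda x, u: -0.2 * x + u]   # mode 1: slow decay
dt, switch_cost, gamma = 0.1, 0.1, 0.9

V = np.zeros((2, xs.size))            # value per (discrete mode, grid state)
for _ in range(200):
    Vn = np.empty_like(V)
    for q in range(2):
        for i, x in enumerate(xs):
            best = np.inf
            for qn in range(2):       # keep the current mode or switch (controlled switching)
                for u in us:
                    xn = np.clip(x + dt * modes[qn](x, u), xs[0], xs[-1])
                    c = dt * (x * x + 0.1 * u * u) + (switch_cost if qn != q else 0.0)
                    best = min(best, c + gamma * np.interp(xn, xs, V[qn]))
            Vn[q, i] = best
    if np.max(np.abs(Vn - V)) < 1e-8:
        V = Vn
        break
    V = Vn
```

The discount factor makes the backup a contraction, so the iteration converges regardless of the initial guess; the grid interpolation plays the role of the transition probabilities of the approximating Markov chain.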
Conference Paper
Dynamic programming provides a method to solve hybrid optimal control problems. This contribution extends existing numerical methods originally developed for purely continuous systems, to a class of hybrid systems with autonomous as well as controlled switching behavior. The hybrid dynamics is approximated by a locally consistent discrete Markov decision process. The original optimal control problem is then reformulated for the Markov decision process and solved by standard dynamic programming methods. The convergence of the discrete approximation to the original problem is ensured. The viability of the numerical scheme is illustrated by a two gear transmission system used previously in literature.
Article
This paper considers the design of feedback controllers for linear, time-invariant, spatially distributed systems in an approach which generalises the H∞ framework and in particular the H∞ loop-shaping method. To this end, we introduce a class of spatially ...
Article
A general problem in the calculus of variations is formulated and the necessary conditions for an extremal solution are given. Two iterative methods of solution based upon a strategy that minimizes the variational Hamiltonian (Min-H Strategy) are proposed. The way in which the calculus-of-variations, steepest-ascent, and Min-H methods are related is established, and a means for including the effect of constraint violations in the Min-H process is developed. An extension of the formulation provides for control parameters as well as control variables, and allows the convergence rates of the parameters and variables to be separately specified. Feedback optimization is possible using the extended formulation. In an illustrative example, the payload that a three-stage launch vehicle can place into a 72-hr transfer ellipse to the moon, after having passed through a parking orbit, is maximized, and convergence is shown to be both extremely rapid and sure.
Article
Conjugate gradient methods have recently been applied to some simple optimization problems and have been shown to converge faster than the methods of steepest descent. The present paper considers application of these methods to more complicated problems involving terminal constraints. As an example, minimum time paths for the climb phase of a V/STOL aircraft have been obtained using the conjugate gradient algorithm. In conclusion, some remarks are made about the relative efficiency of the different optimization schemes presently available for the solution of optimal control problems. © 1969 American Institute of Aeronautics and Astronautics, Inc., All rights reserved.
Article
A theory of necessary conditions for optimal multiprocesses is presented. Optimal multiprocesses are solutions to dynamic optimization problems described by families of control systems coupled through the boundary conditions and cost functions. The theory treats in a unified fashion a wide range of nonstandard dynamic optimization problems, and in many cases provides new optimality conditions. These include problems arising in impulse control robotics, and optimal investment. Even when specialized to the (single process) free time optimal control problem, the theory improves on known necessary conditions.
Article
The goal of this paper is to conduct a complete study of second-order conditions for the optimal control problem with mixed state-control constraints. The conjugate point theory is presented and a necessary condition in terms of the corresponding Riccati equation is obtained. Sufficiency criteria are developed in terms of strengthened necessary conditions, including the Riccati equation. The results generalize the known ones for pure control constraints as well as for the mixed state-control constraints.
Article
A method for the automatic calculation of costates using only results obtained from direct optimization techniques is presented. The approach is based on finite differences and exploits the relation between the costates and certain sensitivities of the cost function. The complete theory for treating free, control constrained, interior-point constrained, and state constrained optimal control problems is presented. An important advantage of the method presented here is that it does not require a priori identification of the optimal switching structure. As a numerical example, a state constrained version of the brachistochrone problem is solved, and the results are compared to the optimal solution obtained from Pontryagin’s minimum principle. The agreement is found to be excellent.
Article
If an optimal control problem (OCP) for hybrid systems with autonomous switching is solved by use of the hybrid minimum principle (HMP), it is necessary to apply the HMP to each possible discrete location sequence separately, i.e. the complexity is exponential in the number of switches. To reduce the combinatorial complexity, this paper proposes a graph search algorithm where the graph encodes the location sequence and the underlying OCP is solved by using the HMP. First, the hybrid OCP with autonomous switching is relaxed to a problem with controlled switching. When tightening the relaxation iteratively, a branch-and-bound scheme is used to prune the graph for reducing the search space for the optimal location sequence. The efficiency of the algorithm is illustrated for a numerical example.
Article
The interacting continuous and discrete dynamics in hybrid systems may lead to Zeno executions, which are solutions of the system having infinitely many discrete transitions in finite time. Although physical systems do not show Zeno behaviour, models of real systems may be Zeno due to modelling abstraction. It is hard to analyse such models with the existing theory. Since abstraction is an important tool in the hierarchical design of hybrid systems, one would like to determine when it may lead to Zeno models. Zeno hybrid systems are studied in detail in the paper. Necessary and sufficient conditions for the existence of Zeno executions are given. The Zeno set is introduced as the ω limit set of a Zeno execution. Properties of the Zeno set are derived for a fairly large class of hybrid systems. Copyright 2001 © John Wiley & Sons, Ltd.
Article
An entire class of rapid-convergence algorithms, called second-variation methods, is developed for the solution of dynamic optimization problems. Several well-known numerical optimization techniques included in this class are developed from a unified point of view. The generalized Riccati transformation can be applied in conjunction with any second-variation method. This fact is demonstrated for the Newton-Raphson or quasilinearization technique.
Article
This paper summarizes recent advances in the area of gradient algorithms for optimal control problems, with particular emphasis on the work performed by the staff of the Aero-Astronautics Group of Rice University. The following basic problem is considered: minimize a functional I which depends on the state x(t), the control u(t), and the parameter π. Here, I is a scalar, x an n-vector, u an m-vector, and π a p-vector. At the initial point, the state is prescribed. At the final point, the state x and the parameter π are required to satisfy q scalar relations. Along the interval of integration, the state, the control, and the parameter are required to satisfy n scalar differential equations. First, the sequential gradient-restoration algorithm and the combined gradient-restoration algorithm are presented. The descent properties of these algorithms are studied, and schemes to determine the optimum stepsize are discussed. Both of the above algorithms require the solution of a linear, two-point boundary-value problem at each iteration. Hence, a discussion of integration techniques is given. Next, a family of gradient-restoration algorithms is introduced. Not only does this family include the previous two algorithms as particular cases, but it allows one to generate several additional algorithms, namely, those with alternate restoration and optional restoration. Then, two modifications of the sequential gradient-restoration algorithm are presented in an effort to accelerate terminal convergence. In the first modification, the quadratic constraint imposed on the variations of the control is modified by the inclusion of a positive-definite weighting matrix (the matrix of the second derivatives of the Hamiltonian with respect to the control). The second modification is a conjugate-gradient extension of the sequential gradient-restoration algorithm. Next, the addition of a nondifferential constraint, to be satisfied everywhere along the interval of integration, is considered.
In theory, this seems to be only a minor modification of the basic problem. In practice, the change is considerable in that it enlarges dramatically the number and variety of problems of optimal control which can be treated by gradient-restoration algorithms. Indeed, by suitable transformations, almost every known problem of optimal control theory can be brought into this scheme. This statement applies, for instance, to the following situations: (i) problems with control equality constraints, (ii) problems with state equality constraints, (iii) problems with equality constraints on the time rate of change of the state, (iv) problems with control inequality constraints, (v) problems with state inequality constraints, and (vi) problems with inequality constraints on the time rate of change of the state. Finally, the simultaneous presence of nondifferential constraints and multiple subarcs is considered. The possibility that the analytical form of the functions under consideration might change from one subarc to another is taken into account. The resulting formulation is particularly relevant to those problems of optimal control involving bounds on the control or the state or the time derivative of the state. For these problems, one might be unwilling to accept the simplistic view of a continuous extremal arc. Indeed, one might want to take the more realistic view of an extremal arc composed of several subarcs, some internal to the boundary being considered and some lying on the boundary. The paper ends with a section dealing with transformation techniques. This section illustrates several analytical devices by means of which a great number of problems of optimal control can be reduced to one of the formulations presented here. In particular, the following topics are treated: (i) time normalization, (ii) free initial state, (iii) bounded control, and (iv) bounded state.
Conference Paper
A set of sufficient conditions for a weak minimum is derived for a form of the nonsingular Bolza problem of variational calculus, with interior point constraints and discontinuities in the system equations. Generalized versions of the conjugate point/focal point, normality, convexity and nontangency conditions associated with the ordinary Bolza problem are obtained. The resulting set of sufficient conditions is minimal, in that only minor modifications are required in order to obtain necessary conditions for normal, nonsingular problems of this form. These conditions are relatively easy to implement. Analogous second-order optimality conditions for problems with natural corners or control constraints are also obtained. Previously stated sufficiency conditions for problems with control constraints are shown to be unnecessarily restrictive, in some cases.
Article
This paper presents a correction to “On the hybrid optimal control problem: theory and algorithms”. In the above paper, the formula below should read as shown: y(t_{j+1}^-) = lim_{i→∞} (1/ε_i) δx_i(t_{j+1}^-) = Φ_{j+1}(t_{j+1}, t_j) × ( Φ_j(t_j, t) [ f_j(x^0(t), v) − f_j(x^0(t), u^0(t)) ] + γ_j ).
Article
Robust stability and control for systems that combine continuous-time and discrete-time dynamics. This article is a tutorial on modeling the dynamics of hybrid systems, on the elements of stability theory for hybrid systems, and on the basics of hybrid control. The presentation and selection of material is oriented toward the analysis of asymptotic stability in hybrid systems and the design of stabilizing hybrid controllers. Our emphasis on the robustness of asymptotic stability to data perturbation, external disturbances, and measurement error distinguishes the approach taken here from other approaches to hybrid systems. While we make some connections to alternative approaches, this article does not aspire to be a survey of the hybrid system literature, which is vast and multifaceted.
Article
This paper develops a technique for numerically solving hybrid optimal control problems. The theoretical foundation of the approach is a recently developed methodology by S.C. Bengea and R.A. DeCarlo [Optimal control of switching systems, Automatica. A Journal of IFAC 41 (1) (2005) 11–27] for solving switched optimal control problems through embedding. The methodology is extended to incorporate hybrid behavior stemming from autonomous (uncontrolled) switches that results in plant equations with piecewise smooth vector fields. We demonstrate that when the system has no memory, the embedding technique can be used to reduce the hybrid optimal control problem for such systems to the traditional one. In particular, we show that the solution methodology does not require mixed integer programming (MIP) methods, but rather can utilize traditional nonlinear programming techniques such as sequential quadratic programming (SQP). By dramatically reducing the computational complexity over existing approaches, the proposed techniques make optimal control highly appealing for hybrid systems. This appeal is concretely demonstrated in an exhaustive application to a unicycle model that contains both autonomous and controlled switches; optimal and model predictive control solutions are given for two types of models using both a minimum energy and minimum time performance index. Controller performance is evaluated in the presence of a step frictional disturbance and parameter uncertainties which demonstrates the robustness of the controllers.
Article
Fundamental properties of hybrid automata, such as existence and uniqueness of executions, are studied. Particular attention is devoted to Zeno hybrid automata, which are hybrid automata that take infinitely many discrete transitions in finite time. It is shown that regularization techniques can be used to extend the Zeno executions of these automata to times beyond the Zeno time. Different types of regularization may, however, lead to different extensions. A water tank control problem and a bouncing ball system are used to illustrate the results.
Conference Paper
This contribution addresses the task of computing optimal control trajectories for hybrid systems with switching dynamics. Starting from a continuous-time formulation of the control task, we derive an optimization problem in which the system behavior is modelled by a hybrid automaton with linear discrete-time dynamics and discrete as well as continuous inputs. In order to transform the discrete dynamics into an equation-based form, we present and compare two different approaches: one uses the 'traditional' big-M formulation and one is based on disjunctive formulations. The control problem is then solved by mixed-integer programming using a moving horizon setting. As illustrated for an example, the disjunctive formulation can lead to a considerable reduction of the computational effort.
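A minimal illustration of the big-M trick underlying such mixed-integer formulations: a binary δ selects between two mutually exclusive linear regimes (x ≤ 2 when δ = 0, x ≥ 4 when δ = 1). The toy model is an assumption for illustration, not the paper's automaton; it uses SciPy's `milp` (available in SciPy ≥ 1.9):

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

M = 10.0                      # big-M constant: must dominate the variable range
# Variables z = [x, delta]; maximize x subject to the disjunction
#   delta = 0  =>  x <= 2     encoded as   x - M*delta <= 2
#   delta = 1  =>  x >= 4     encoded as  -x + M*delta <= M - 4
A = np.array([[1.0, -M],
              [-1.0, M]])
ub = np.array([2.0, M - 4.0])
res = milp(c=np.array([-1.0, 0.0]),               # minimize -x, i.e. maximize x
           constraints=LinearConstraint(A, -np.inf, ub),
           integrality=np.array([0, 1]),          # delta is integer-valued
           bounds=Bounds([0.0, 0.0], [6.0, 1.0]))
```

With x bounded by 6, the solver picks δ = 1 and x = 6. Choosing M too large weakens the LP relaxation, which is precisely the drawback that motivates the disjunctive formulation compared in the paper.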
Conference Paper
An algorithm for hybrid optimal control is proposed that varies the discrete state sequence based on gradient information during the search for an optimal trajectory. The algorithm is developed for hybrid systems with partitioned state space. It uses a version of the hybrid minimum principle that allows optimal trajectories to pass through intersections of switching manifolds, which enables the algorithm to vary the sequence. Consequently, the combinatorial complexity of former algorithms can be avoided, since not each possible sequence has to be investigated separately anymore. The convergence of the algorithm is proven and a numerical example demonstrates the efficiency of the algorithm.
Book
Computer methods for ordinary differential equations and differential-algebraic equations are presented. Topics discussed include: ordinary differential equations; initial value problems; boundary value problems; differential-algebraic equations; dynamical systems; Euler equation; nonlinear equations; Runge-Kutta methods; error estimation; implicit linear multistep methods; difference methods; index reduction and stabilization; and Radau collocation.
Article
A general convergence theorem for the gradient method is proved under hypotheses which are given below. It is then shown that the usual steepest descent and modified steepest descent algorithms converge under the same hypotheses. The modified steepest descent algorithm allows for the possibility of variable stepsize.
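The variable-stepsize idea can be sketched with a standard Armijo backtracking line search; the test function, constants, and iteration count below are illustrative assumptions, not from the cited work:

```python
import numpy as np

def grad_descent(f, grad, x0, iters=300):
    """Steepest descent with an Armijo backtracking (variable) stepsize."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad(x)
        step = 1.0
        # backtrack until sufficient decrease: f(x - s*g) <= f(x) - 0.5*s*|g|^2
        while f(x - step * g) > f(x) - 0.5 * step * (g @ g):
            step *= 0.5
            if step < 1e-12:
                return x
        x = x - step * g
    return x

# Ill-conditioned quadratic test problem with minimizer (1, 0):
f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * x[1] ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * x[1]])
x_star = grad_descent(f, grad, [5.0, 3.0])
```

Because the accepted step always guarantees a fixed fraction of the ideal decrease, the iterates converge without any hand-tuned constant stepsize, which is the point of the variable-stepsize variant.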
Conference Paper
Presents a version of the maximum principle for hybrid optimal control problems under weak regularity conditions. In particular, we only consider autonomous systems, in which the dynamical behavior and the cost are invariant under time translations. The maximum principle is stated as a general assertion involving terms that are not yet precisely defined, and without a detailed specification of technical assumptions. One version of the principle, where the terms are precisely defined and the appropriate technical requirements are completely specified, is stated for problems where all the basic objects-the dynamics, the Lagrangian and the cost functions for the switchings and the end-point constraints-are differentiable along the reference arc. Another version, involving nonsmooth maps, is also stated, and some brief remarks on even more general versions are given. To illustrate the use of the maximum principle, two very simple examples are shown, involving problems that can easily be solved directly. Our results are stronger than the usual versions of the finite-dimensional maximum principle. For example, even the theorem for classical differentials applies to situations where the maps are not of class C1, and can fail to be Lipschitz-continuous. The nonsmooth result applies to maps that are neither Lipschitz-continuous nor differentiable in the classical sense. In each case, it would be trivial to construct hybrid examples of a similar nature. On the other hand, the results presented in this paper are much weaker than what can actually be proved by our methods.
Conference Paper
This paper presents a method for optimal control of hybrid systems. An inequality of Bellman type is considered and every solution to this inequality gives a lower bound on the optimal value function. A discretization of this “hybrid Bellman inequality” leads to a convex optimization problem in terms of finite-dimensional linear programming. From the solution of the discretized problem, a value function that preserves the lower bound property can be constructed. An approximation of the optimal feedback control law is given and tried on some examples.
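To make the Bellman-inequality idea concrete, here is a sketch on a purely continuous toy problem (our own example, not one from the paper): minimum time to the origin for xdot = u with |u| <= 1, where the value function is V*(x) = |x|. Discretizing the inequality min_u [1 + V'(x)u] >= 0 on a grid and maximizing sum(V) gives a finite-dimensional LP whose solution is a tight lower bound:

```python
import numpy as np
from scipy.optimize import linprog

N = 21
x = np.linspace(-1.0, 1.0, N)
h = x[1] - x[0]

# For u = +1 and u = -1 the discretized inequality 1 + u*(V[i+1]-V[i])/h >= 0
# reduces to |V[i+1] - V[i]| <= h.
A_ub, b_ub = [], []
for i in range(N - 1):
    row = np.zeros(N); row[i + 1], row[i] = 1.0, -1.0
    A_ub.append(row);  b_ub.append(h)    # V[i+1] - V[i] <= h
    A_ub.append(-row); b_ub.append(h)    # V[i] - V[i+1] <= h

# Boundary condition V(0) = 0 at the target state.
A_eq = np.zeros((1, N)); A_eq[0, N // 2] = 1.0

res = linprog(c=-np.ones(N), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=[0.0], bounds=[(None, None)] * N)
V = res.x  # recovers V(x) = |x| on the grid
```

In the hybrid setting of the paper, one value function per discrete mode is introduced and coupled by additional linear inequalities at the switching sets; the LP structure is the same.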
Article
A class of hybrid optimal control problems (HOCP) for systems with controlled and autonomous location transitions is formulated and a set of necessary conditions for hybrid system trajectory optimality is presented which together constitute generalizations of the standard Maximum Principle; these are given for the cases of open bounded control value sets and compact control value sets. The derivations in the paper employ: (i) classical variational and needle variation techniques; and (ii) a local controllability condition which is used to establish the adjoint and Hamiltonian jump conditions in the autonomous switching case. Employing the hybrid minimum principle (HMP) necessary conditions, a class of general HMP based algorithms for hybrid systems optimization are presented and analyzed for the autonomous switchings case and the controlled switchings case. Using results from the theory of penalty function methods and Ekeland's variational principle the convergence of these algorithms is established under reasonable assumptions. The efficacy of the proposed algorithms is illustrated via computational examples.
Article
This note considers the problem of determining optimal switching times at which mode transitions should occur in multimodal, hybrid systems. It derives a simple formula for the gradient of the cost functional with respect to the switching times, and uses it in a gradient-descent algorithm. Much of the analysis is carried out in the setting of optimization problems involving fixed switching-mode sequences, but a possible extension is pointed out for the case where the switching-mode sequence is a part of the variable. Numerical examples testify to the viability of the proposed approach.
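A minimal sketch of the gradient-descent idea on the switching time, for a scalar two-mode example of our own choosing (not the note's example) where the cost gradient is available in closed form:

```python
import math

def grad_wrt_switch_time(tau, x0=1.0, a1=1.0, a2=-1.0, T=2.0, target=1.0):
    """dJ/dtau for xdot = a1*x on [0,tau], xdot = a2*x on [tau,T],
    with cost J = (x(T) - target)**2.  Here x(T) = x0*exp(a1*tau + a2*(T-tau)),
    so by the chain rule d x(T)/d tau = (a1 - a2)*x(T)."""
    xT = x0 * math.exp(a1 * tau + a2 * (T - tau))
    return 2.0 * (xT - target) * (a1 - a2) * xT

# Plain gradient descent on the switching time
tau, step = 0.2, 0.1
for _ in range(300):
    tau -= step * grad_wrt_switch_time(tau)
    tau = min(max(tau, 0.0), 2.0)  # keep tau inside [0, T]
# tau converges to 1.0, where x(T) = exp(2*tau - 2) hits the target
```

The note's contribution is a general formula for this gradient (via the state and costate at the switching instant) when no closed form is available.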
Article
Two new Bernoulli substitution methods for solving the Riccati differential equation are tested numerically against direct integration of the Riccati equation, the Chandrasekhar algorithm, and the Davison-Maki method on a large set of problems taken from the literature. The first of these new methods was developed for the time-invariant case and uses the matrix analog of completing the square to transform the problem to a bisymmetric Riccati equation whose solution can be given explicitly in terms of a matrix exponential of order n. This method is fast and accurate when the extremal solutions of the associated algebraic Riccati equation are well separated. The second new method was developed as a means of eliminating the instabilities associated with the Davison-Maki algorithm. By using reinitialization at each time step the Davison-Maki algorithm can be recast as a recursion which is over three times faster than the original method and is easily shown to be stable for both time-invariant and time-dependent problems. From the results of our study we conclude that the modified Davison-Maki method gives superior performance except for those problems where the number of observers and controllers is small relative to the number of states, in which case the Chandrasekhar algorithm is better.
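A sketch of the modified Davison-Maki recursion (interface names and the scalar check are our own): the Riccati solution is carried as P = Y X^{-1}, with (X, Y) propagated by a Hamiltonian-type matrix exponential and reinitialized to (I, P) at every step, which removes the instability of the original method:

```python
import numpy as np
from scipy.linalg import expm

def riccati_modified_davison_maki(A, S, Q, P0, t_final, steps=200):
    """Propagates the Riccati equation Pdot = A'P + PA - PSP + Q via
    P = Y @ inv(X), reinitializing X = I, Y = P at every step."""
    n = A.shape[0]
    M = np.block([[-A, S], [Q, A.T]])   # Hamiltonian-type matrix
    Phi = expm(M * (t_final / steps))   # one-step transition matrix
    P = P0.copy()
    for _ in range(steps):
        Z = Phi @ np.vstack([np.eye(n), P])  # reinitialized (X, Y)
        X, Y = Z[:n], Z[n:]
        P = Y @ np.linalg.inv(X)
    return P

# Scalar check: pdot = q + 2*a*p - s*p**2 with a=0, s=q=1 gives p(t) = tanh(t),
# which approaches the algebraic-Riccati solution p* = 1.
A = np.array([[0.0]]); S = np.array([[1.0]]); Q = np.array([[1.0]])
P = riccati_modified_davison_maki(A, S, Q, np.array([[0.0]]), t_final=10.0)
```

For time-invariant problems the one-step transition matrix is computed once, which is the source of the speedup the abstract reports.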
Article
This paper extends the conjugate gradient minimization method of Fletcher and Reeves to optimal control problems. The technique is directly applicable only to unconstrained problems; if terminal conditions and inequality constraints are present, the problem must be converted to an unconstrained form, e.g., by penalty functions. Only the gradient trajectory, its norm, and one additional trajectory, the actual direction of search, need be stored. These search directions are generated from past and present values of the objective and its gradient. Successive points are determined by linear minimization down these directions, which are always directions of descent. Thus, the method tends to converge, even from poor approximations to the minimum. Since, near its minimum, a general nonlinear problem can be approximated by one with a linear system and quadratic objective, the rate of convergence is studied by considering this case. Here, the directions of search are conjugate and hence the objective is minimized over an expanding sequence of sets. Also, the distance from the current point to the minimum is reduced at each step. Three examples are presented to compare the method with the method of steepest descent. Convergence of the proposed method is much more rapid in all cases. A comparison with a second variational technique is also given in Example 3.
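A sketch of the finite-dimensional Fletcher-Reeves iteration the abstract extends to optimal control, specialized to a quadratic objective so that the "linear minimization down the direction" has a closed form (function names are our own):

```python
import numpy as np

def fletcher_reeves_quadratic(A, b, x0, tol=1e-10, max_iter=100):
    """Fletcher-Reeves CG on f(x) = 0.5 x'Ax - b'x with exact line search."""
    x = np.asarray(x0, dtype=float)
    g = A @ x - b                       # gradient of the quadratic
    d = -g                              # first search direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        t = -(g @ d) / (d @ A @ d)      # exact minimizer along d
        x = x + t * d
        g_new = A @ x - b
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves coefficient
        d = -g_new + beta * d           # conjugate search direction
        g = g_new
    return x

# Quadratic test problem with minimum at (1, 2, 3)
A = np.diag([1.0, 4.0, 9.0])
b = A @ np.array([1.0, 2.0, 3.0])
x_star = fletcher_reeves_quadratic(A, b, np.zeros(3))
```

On an n-dimensional quadratic the directions are A-conjugate, so the exact minimum is reached in at most n iterations, which is the convergence behavior the abstract analyzes.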
Article
This paper proposes a framework for modeling and controlling systems described by interdependent physical laws, logic rules, and operating constraints, denoted as Mixed Logical Dynamical (MLD) systems. These are described by linear dynamic equations subject to linear inequalities involving real and integer variables. MLD systems include constrained linear systems, finite state machines, some classes of discrete event systems, and nonlinear systems which can be approximated by piecewise linear functions. A predictive control scheme is proposed which is able to stabilize MLD systems on desired reference trajectories while fulfilling operating constraints, and possibly take into account previous qualitative knowledge in the form of heuristic rules. Due to the presence of integer variables, the resulting on-line optimization procedures are solved through Mixed Integer Quadratic Programming (MIQP), for which efficient solvers have been recently developed. Some examples and a simulation case s...
Numerische Verfahren zur Lösung unrestringierter Optimierungsaufgaben
  • C Geiger
  • C Kanzow
C. Geiger and C. Kanzow, Numerische Verfahren zur Lösung unrestringierter Optimierungsaufgaben, Springer, Berlin, 1999.
About solving hybrid optimal control problems
  • P Riedinger
  • J Daafouz
  • C Iung
P. Riedinger, J. Daafouz, and C. Iung, About solving hybrid optimal control problems, in Proceedings of the 17th IMACS World Congress, Paris, France, 2005.