Article

On Stability of Bang-Bang Type Controls


Abstract

From the theory of nonlinear optimal control problems it is known that solution stability w.r.t. data perturbations and conditions for strict local optimality are closely related facts. For important classes of control problems, sufficient optimality conditions can be formulated as a combination of the independence of the active constraints' gradients and certain coercivity criteria. In the case of discontinuous controls, however, common pointwise coercivity approaches may fail. In the paper, we consider sufficient optimality conditions for strong local minimizers which make use of an integrated Hamilton–Jacobi inequality. In the case of linear system dynamics, we show that solution stability (including the localization of the switching points) is ensured under relatively mild regularity assumptions on the zeros of the switching function. For the objective functional, local quadratic growth estimates in the L1 sense are provided. An example illustrates stability as well as instability effects in case the regularity condition is violated.
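The switching-point localization discussed in the abstract can be illustrated numerically. The sketch below is an illustration under simplifying assumptions, not the paper's construction: it recovers a bang-bang control u = -sign(σ) from a sampled switching function and localizes the switching points as interpolated sign changes. The linear interpolation of each zero crossing is well posed precisely when the zero is regular (simple), which is the kind of regularity assumption the paper places on the switching function's zeros.

```python
import numpy as np

def bang_bang_from_switching(sigma_vals, t_grid):
    """Recover a bang-bang control u = -sign(sigma) with values in {-1, 1}
    from sampled values of a switching function, and localize the switching
    points as the (interpolated) sign changes of sigma on the grid."""
    u = -np.sign(sigma_vals)
    idx = np.where(np.diff(np.sign(sigma_vals)) != 0)[0]
    # linear interpolation of the zero crossing; this is well posed exactly
    # when the regularity condition sigma'(t*) != 0 holds at the zero t*
    t0, t1 = t_grid[idx], t_grid[idx + 1]
    s0, s1 = sigma_vals[idx], sigma_vals[idx + 1]
    switches = t0 - s0 * (t1 - t0) / (s1 - s0)
    return u, switches

# toy switching function with simple (regular) zeros at pi/2 and 3*pi/2
t = np.linspace(0.0, 2 * np.pi, 2001)
u, switches = bang_bang_from_switching(np.cos(t), t)
```

Under a perturbation of the data, regular zeros move continuously, which is the mechanism behind the stability of the switching-point localization; a zero of higher order would make the interpolation (and the localization) unstable.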


... This theorem is an extension of Theorem 2 in Haunschmied et al. [25] for Mayer-type optimal control problems with nonlinear systems. ... (x_N, u_N) of the discrete problem (26)–(27) and the corresponding co-states (the adjoint functions) p̄ and p_N, it holds that ...
... Bang-bang control problems have been discussed extensively for linear and nonlinear systems; see previous studies [19,26-30]. We recall that σ̄(·) = ∇_u H(·, x̄, p̄, ū) is the switching function in the case of bang-bang controls corresponding to the reference solution (x̄, p̄, ū). ...
... Assume that (A1) and (A3) are fulfilled; then there exists c′ > 0 such that for any solution (x_N, u_N) of the discrete problem (26)–(27) and the corresponding co-states (the adjoint functions) p̄ and p_N, it holds that ...
Article
This paper is devoted to sufficient conditions for Strong Metric sub-Regularity (SMsR for short) of the set-valued mapping corresponding to the local description of the Pontryagin maximum principle for Mayer-type optimal control problems with a convexity condition on the Hamiltonian and the functional. In particular, a stability property of the optimal control for the Mayer-type problem is established in the case of a polyhedral control set and a purely bang-bang solution structure. Moreover, based on the sufficiency of SMsR and the stability property of the optimal control, we give error estimates for Euler discretization methods applied to such problems.
... Versions of the following assumption are standard in the literature on affine optimal control problems, see, e.g., [1,6,18,24]. ...
... Denoting by s the number of elements of K, we have, due to the inductive assumption (18), that ...
... The minimum of the sum on the right-hand side with respect to {l_j} subject to the relations around (24) ... From here and the second inequality in (18) we obtain that ...
Article
Full-text available
The paper investigates the accuracy of the Model Predictive Control (MPC) method for finding on-line approximate optimal feedback control for Bolza type problems on a fixed finite horizon. The predictions for the dynamics, the state measurements, and the solution of the auxiliary open-loop control problems that appear at every step of the MPC method may be inaccurate. The main result provides an error estimate of the MPC-generated solution compared with the optimal open-loop solution of the “ideal” problem, where all predictions and measurements are exact. The technique of proving the estimate involves an extension of the notion of strong metric subregularity of set-valued mappings and utilization of a specific new metric in the control space, which makes the proof non-standard. The result is specialized for two problem classes: coercive problems, and affine problems.
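The receding-horizon loop analysed in the abstract above can be sketched schematically. The example below is a toy illustration under stated assumptions, not the paper's method: a hypothetical scalar system x⁺ = x + h·u with |u| ≤ 1, exact state measurements, and exact predictions; `open_loop` is a stand-in for the auxiliary open-loop solver, here computable in closed form (drive the state to 0 at the fastest admissible rate).

```python
def open_loop(x0, horizon, h):
    """Exact open-loop solution of min |x_N| for x+ = x + h*u, |u| <= 1:
    drive the state toward 0 at the fastest admissible rate."""
    plan, x = [], x0
    for _ in range(horizon):
        u = max(-1.0, min(1.0, -x / h))  # saturated dead-beat control
        x += h * u
        plan.append(u)
    return plan

def mpc(x0, steps, horizon, h):
    """Receding-horizon loop: at the current (here exactly measured) state,
    solve the auxiliary open-loop problem and apply only its first control."""
    x, traj = x0, [x0]
    for _ in range(steps):
        u = open_loop(x, horizon, h)[0]
        x += h * u
        traj.append(x)
    return traj

traj = mpc(1.0, steps=20, horizon=5, h=0.1)
```

In the paper's setting the predictions, measurements, and subproblem solutions may all be inexact, and the result bounds the deviation of the MPC-generated trajectory from the ideal one; the sketch only shows the exact-data loop structure.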
... The obtained sufficient conditions are, nevertheless, restrictive, since they require a purely bang-bang structure of the optimal control and additional properties which exclude the case of singular arcs. Here we mention that similar conditions have been involved in various contexts (including sufficiency of the Pontryagin conditions and error analysis of approximation methods) in several papers, among which we mention [1,7,11,8,2,3]. ...
... As another application of our version of the Lyusternik-Graves theorem applied to affine optimal control problems, we prove in Section 4 a result about stability of the Sbi-MR for affine problems with disturbances, extending to the non-linear case results from [7] and [13]. ...
... Then, let µ, κ′, a′, b′, and γ be the constants introduced in Theorem 2.2. We will verify that φ fulfils conditions (6) and (7). Let us start with (6). ...
Article
Full-text available
The paper establishes properties of the type of (strong) metric regularity of the set-valued map associated with the system of necessary optimality conditions for optimal control problems that are affine with respect to the control (shortly, affine problems). It is shown that for such problems it is reasonable to extend the standard notions of metric regularity by involving two metrics in the image space of the map. This is done by introducing (following an earlier paper by the first and the third named author) the concept of (strong) bi-metric regularity in a general space setting. Lyusternik-Graves-type theorems are proved for (strongly) bi-metrically regular maps, which claim stability of these regularity properties with respect to “appropriately small” perturbations. Based on that, it is proved that in the case of a map associated with affine optimal control problems, the strong bi-metric regularity is invariant with respect to linearization. This result is complemented with a sufficient condition for strong bi-metric regularity for linear-quadratic affine optimal control problems, which applies to the “linearization” of a nonlinear affine problem. Thus the same conditions are also sufficient for strong bi-metric regularity in the nonlinear affine problem.
... New second-order optimality conditions for optimal control problems with the control appearing linearly have been developed during the last 10-15 years (see, e.g., Felgenhauer [15-18,20], Maurer et al. [29], Osmolovskii and Maurer [32-34] and the papers cited therein). In the case of bang-bang controls these conditions have been used in Alt et al. [4], Alt and Seydenschwanz [7], and in Seydenschwanz [40] to obtain error estimates for Euler discretization of linear-quadratic optimal control problems governed by ordinary differential equations, and in Deckelnick and Hinze [8] for discretizations of elliptic control problems. ...
... We use instead a second-order condition for the switching function σ* defined by (2.11). This condition was introduced by Felgenhauer [15] (see also Maurer and Osmolovskii [30], Maurer et al. [29]) and has been used, e.g., in Alt et al. [6], Alt and Seydenschwanz [7], and Seydenschwanz [40] to investigate Euler discretization of linear-quadratic control problems: ...
... More general results can be found in Maurer [28], and Maurer and Zowe [31] (see also Alt [2]). A general result on sufficient optimality conditions for optimal control problems can be found in Felgenhauer [15]. We need some auxiliary results which are modifications of corresponding results in Sect. 3 of Alt [2]. ...
Article
Full-text available
We investigate Euler discretization for a class of optimal control problems with a nonlinear cost functional of Mayer type, a nonlinear system equation with control appearing linearly and constraints defined by lower and upper bounds for the controls. Under the assumption that the cost functional satisfies a growth condition we prove for the discrete solutions Hölder type error estimates w.r.t. the mesh size of the discretization. If a stronger second-order optimality condition is satisfied the order of convergence can be improved. Numerical experiments confirm the theoretical findings.
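The first-order behaviour of Euler discretization in the presence of a bang-bang control can be observed on a toy example. The sketch below is an illustration, not the scheme analysed in the paper: it integrates the hypothetical affine dynamics x' = -x + u(t) with a control that switches once at t = 0.5 and checks that halving the mesh size roughly halves the terminal error.

```python
import math

def euler(x0, n, T, u):
    """Explicit Euler for x' = -x + u(t) on [0, T] with n steps."""
    h = T / n
    x = x0
    for k in range(n):
        t = k * T / n          # grid point; hits the switch t = 0.5 exactly
        x += h * (-x + u(t))
    return x

# bang-bang control with a single switching point at t = 0.5
u = lambda t: 1.0 if t < 0.5 else -1.0

# exact terminal state for x(0) = 0, obtained by integrating the two arcs
x_half = 1.0 - math.exp(-0.5)                     # x(0.5) on the u = +1 arc
x_exact = -1.0 + (x_half + 1.0) * math.exp(-0.5)  # x(1.0) on the u = -1 arc

errs = [abs(euler(0.0, n, 1.0, u) - x_exact) for n in (100, 200, 400)]
```

Here the switching point lies on every grid, so the observed order is one; when switching points fall between grid points, or when only a growth condition holds, the Hölder-type rates of the paper are the appropriate description.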
... Theoretical and numerical questions related to this control problem attracted much interest in recent years, see, e.g., [1,2,4,8,9,[18][19][20][21][22], and [13]. The last four papers are concerned with T being the solution operator of an ordinary differential equation, the first three papers with T being a solution operator of an elliptic PDE as in Example 1, and the remaining references with T being a general linear operator as here. ...
... A condition related to the measure condition was also used to establish stability results for bang-bang control problems with autonomous ODEs, see [8,Assumption 2]. ...
... 3. Let Assumption 7.2 be satisfied with meas(A_c) = 0 (the measure condition holds a.e. on the domain) for a solution ū_0 of (P_0). From (8) we conclude that ū_0 is the unique solution of (P_0). Then the estimates ...
Article
Full-text available
We consider Tikhonov regularization of control-constrained optimal control problems. We present new a-priori estimates for the regularization error assuming measure and source-measure conditions. In the special case of bang-bang solutions, we introduce another assumption to obtain the same convergence rates. This new condition turns out to be useful in the derivation of error estimates for the discretized problem. The necessity of the just mentioned assumptions to obtain certain convergence rates is analyzed. Finally, numerical examples confirm the analytical findings.
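The role of a measure condition in Tikhonov regularization can be seen in a minimal pointwise model. This is an assumption-laden sketch, not the paper's setting: minimizing c(t)·u + (α/2)u² over u ∈ [-1, 1] pointwise has a closed-form solution, and for data c with a regular zero (a measure condition with rate 1 on the set where |c| is small) the L¹ regularization error decays linearly in α.

```python
import numpy as np

# Pointwise model: minimize c(t)*u(t) + (alpha/2)*u(t)^2 over u(t) in [-1, 1].
# For alpha = 0 the minimizer is bang-bang, u = -sign(c); the zero of c at
# t = 0.5 is regular, so meas{ |c| <= eps } <= C*eps (measure condition).
t = np.linspace(0.0, 1.0, 100001)
c = t - 0.5
u_bang = -np.sign(c)

def u_reg(alpha):
    # closed-form minimizer of the Tikhonov-regularized pointwise problem
    return np.clip(-c / alpha, -1.0, 1.0)

def l1_error(alpha):
    # grid approximation of the L1 distance on [0, 1]
    return np.mean(np.abs(u_reg(alpha) - u_bang))

errs = [l1_error(a) for a in (0.1, 0.05, 0.025)]
```

In this model the error equals α exactly (up to discretization), matching the linear rate that a measure condition with exponent 1 predicts; degenerate zeros of c would slow the rate.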
... The bang-bang structure of the optimal control brings a challenge also for numerical approximations. We refer to the recent papers [9,10,11] on stability analysis and to [1,2,3,17] about error analysis for problems with bang-bang solutions. ...
... We mention that the function [B(t)p(t)]_j is known in the literature as the switching function for the jth control component (cf. [9]). Clearly, [B_i(t)p(t)]_j is the ith derivative of the jth switching function. ...
... We mention that a result in the same spirit as Theorem 4.2 is proved in [9] in the case k = 1 and with U = [−1, 1]^r. It concerns the stronger notion of structural stability, and the proof relies on an inverse function theorem for the switching points of the optimal control, which has no counterpart in the case k > 1. ...
Article
Full-text available
This paper studies stability properties of the solutions of optimal control problems for linear systems. The analysis is based on an adapted concept of metric regularity, the strong bi-metric regularity, which is introduced and investigated in the paper. It allows one to give a more precise description of the effect of perturbations on the optimal solutions in terms of a Hölder-type estimate and to investigate the robustness of this estimate. The Hölder exponent depends on a natural number k, which is known as the controllability index of the reference solution. An inverse function theorem for strongly bi-metrically regular mappings is obtained, which is used in the case k=1 for proving stability of the solution of the considered optimal control problem under small nonlinear perturbations. Moreover, a new stability result with respect to perturbations in the matrices of the system is proved in the general case k≥1.
... The metric in U that we define in the present paper captures some structural similarities of the controls; thus the regularity property in this metric is closer to (but weaker than) the so-called structural stability, investigated in, e.g., [7,8]. The SMs-R and HSMs-R properties of the optimality map Φ in this metric are especially important in the analysis of Model Predictive Control algorithms. ...
... The following assumption is standard in the literature on affine optimal control problems; see, e.g., [3,7,9]. ...
Chapter
Full-text available
This paper revisits the issue of Hölder Strong Metric sub-Regularity (HSMs-R) of the optimality system associated with ODE optimal control problems that are affine with respect to the control. The main contributions are as follows. First, the metric in the control space introduced in this paper differs from the ones used so far in the literature in that it allows one to take into account the bang-bang structure of the optimal control functions. This is especially important in the analysis of Model Predictive Control algorithms. Second, the obtained sufficient conditions for HSMs-R extend the known ones in a way that makes them applicable to some problems which are non-linear in the state variable and for which the Hölder exponent is smaller than one (that is, the regularity is not Lipschitz).
... The proof includes argumentation similar to those used in the case of a box-like set U , see [7,Lem. 3.3], [14], [16,Lem. ...
... The result of Proposition 4.1 was taken as an assumption there, but as mentioned in [1], this result was essentially known from e.g. [7,16]. However, only the case of a box-like set U was investigated in these papers (which brings technical simplifications), and even in this case the assumptions made were somewhat stronger than our (B'). ...
Article
Full-text available
The paper investigates the property of Strong Metric sub-Regularity (SMsR) of the mapping representing the first-order optimality system for a Lagrange-type optimal control problem which is affine with respect to the control. The terminal time is fixed, the terminal state is free, and the control values are restricted to a convex compact set $U$. The SMsR property is associated with a reference solution of the optimality system and ensures that small additive perturbations in the system result in solutions whose distance from the reference one is at most proportional to the size of the perturbations. A general sufficient condition for SMsR is obtained for appropriate space settings and then specialized to the case of a polyhedral set $U$ and a purely bang-bang reference control. Sufficient second-order optimality conditions are obtained as a by-product of the analysis. Finally, the obtained results are utilized for error analysis of the Euler discretization scheme applied to affine problems.
... Nevertheless, only few papers address the stability analysis in the case of non-coercive problems and problems with discontinuous optimal controls; in fact, many relevant questions still remain unanswered. Recent progress was made in [11,12,14,20] for control-affine problems and in [23] for problems with linear dynamics, and we build on these papers. We mention also the paper [25] and the references therein for problems with group sparsity. ...
... A similar assumption is introduced in [11] in the case κ = 1 and in [23,26] for κ ≥ 1. The set U of admissible controls will be considered as a metric space with the metric induced by the L^1-norm. ...
Article
Full-text available
The paper investigates the Lipschitz/Hölder stability with respect to perturbations of optimal control problems with linear dynamics and a cost functional which is quadratic in the state and linear in the control variable. The optimal control is assumed to be of bang-bang type and the problem to enjoy certain convexity properties. Conditions for bi-metric regularity and (Hölder) metric sub-regularity are established, involving only the order of the zeros of the associated switching function and smoothness of the data. These results provide a basis for the investigation of various approximation methods. They are utilized in this paper for the convergence analysis of a Newton-type method applied to optimal control problems which are affine with respect to the control.
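Since the switching times of a bang-bang control are the zeros of the switching function, a Newton-type iteration localizes them quadratically whenever the first-order regularity condition σ'(t*) ≠ 0 holds at a zero t*. A hypothetical sketch (σ(t) = cos t is an illustrative stand-in, not a switching function from the paper):

```python
import math

def newton_switch(sigma, dsigma, t0, tol=1e-12, max_iter=50):
    """Localize a switching time as a zero of the switching function sigma,
    assuming the regularity condition dsigma(t*) != 0 (a simple zero)."""
    t = t0
    for _ in range(max_iter):
        step = sigma(t) / dsigma(t)
        t -= step
        if abs(step) < tol:
            return t
    raise RuntimeError("Newton iteration did not converge")

# illustrative switching function with a simple zero at pi/2
t_star = newton_switch(math.cos, lambda t: -math.sin(t), 1.4)
```

A zero of order higher than one (σ'(t*) = 0) degrades Newton's method to linear convergence at best, which mirrors the weaker Hölder-type stability obtained for higher-order zeros.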
... Recently, optimal control problems with bang-bang solutions have attracted increasing attention. Stability and error analysis of bang-bang controls can be found in [14,26,32]. Euler discretizations for linear-quadratic optimal control problems with bang-bang solutions were studied in [1,2,5,29]. ...
... Many variations of this assumption are used in the literature on bang-bang controls. To our knowledge, the first assumption of this type was introduced by Felgenhauer [14] for continuously differentiable switching functions with θ = 1 to study the stability of bang-bang controls. Alt et al. [1,2,4] used a slightly stronger version of B3 with θ = 1, which additionally excludes the endpoints 0 and T as zeros of the switching function, to investigate the error bound for Euler approximation of linear-quadratic optimal control problems with bang-bang solutions. ...
Article
Full-text available
We revisit the gradient projection method in the framework of nonlinear optimal control problems with bang–bang solutions. We obtain the strong convergence of the iterative sequence of controls and the corresponding trajectories. Moreover, we establish a convergence rate, depending on a constant appearing in the corresponding switching function and prove that this convergence rate estimate is sharp. Some numerical illustrations are reported confirming the theoretical results.
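The gradient projection iteration studied above can be sketched on a minimal model with a bang-bang minimizer. This is a toy linear objective over a box, not the paper's nonlinear optimal control setting: for f(u) = ⟨c, u⟩ over U = [-1, 1]³ the minimizer is the bang-bang point -sign(c), and the projected-gradient iterates reach it after finitely many steps.

```python
import numpy as np

def gradient_projection(grad, proj, u0, step, iters):
    """Projected-gradient iteration u <- P_U(u - step * grad(u))."""
    u = np.asarray(u0, dtype=float)
    for _ in range(iters):
        u = proj(u - step * grad(u))
    return u

# linear model objective <c, u> over the box U = [-1, 1]^3; its minimizer
# is the bang-bang vertex -sign(c)
c = np.array([0.3, -1.2, 0.7])
u_star = gradient_projection(lambda u: c,                    # constant gradient
                             lambda u: np.clip(u, -1.0, 1.0),  # projection on U
                             np.zeros(3), step=0.5, iters=50)
```

In the control setting the convergence rate depends on a constant in the switching function, as the paper quantifies; in this finite-dimensional toy the iterates are exact after finitely many projections.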
... Nevertheless, only few papers address the stability analysis in the absence of strict coercivity and for discontinuous optimal controls; in fact, many relevant questions still remain unanswered. Recent progress was made in [10,18,12,21,11], and we build on these papers. We mention also the paper [23] and the references therein for problems with group sparsity. ...
... A similar assumption is introduced in [10] in the case κ = 1 and in [21,24] for κ ≥ 1. The set U of admissible controls will be considered as a metric space with the metric induced by the L^1-norm. ...
... In the literature on control problems governed by ordinary differential equations (ODEs) there are many contributions dealing with second-order conditions in the bang-bang case; see, e.g., [11,13,14,17,18,19,20]. In these contributions one typically assumes that the (differentiable) switching function σ : [0, T] → R possesses only finitely many zeros and that |σ̇(t)| > 0 is satisfied for all zeros t of σ. ...
... This assumption and some variants of it have been made in some other mathematical contexts; see [1,8,11,24,26]. Property (2.16) holds if φ̄ ∈ C^1(Ω̄) and there exists a constant C > 0 satisfying |∇φ̄(x)| ≥ C for all x ∈ Ω̄ such that φ̄(x) = 0; see [8]. ...
Article
We provide sufficient optimality conditions for optimal control problems with bang-bang controls. Building on a structural assumption on the adjoint state, we additionally need a weak second-order condition. This second-order condition is formulated with functions from an extended critical cone, and it is equivalent to a formulation posed on measures supported on the set where the adjoint state vanishes. If our sufficient optimality condition is satisfied, we obtain a local quadratic growth condition in L^1.
... Therefore, the optimal control u_0 is of bang-bang type or may have singular arcs. In this case we assume that for the reference problem (PQ)_α with α = 0 the following conditions hold for σ = σ_0 and the set of zeros Σ = Σ_0 of σ_0 (compare Felgenhauer [12], Alt et al. [1]): ...
... Putting together (10), (11), (12), and (15) resp. (18), we construct Problem (DPQ)_α, which is the dual of Problem (PQ)_α, in a more explicit form as follows: ...
Article
Full-text available
We consider linear-quadratic (LQ) control problems, where the control variable appears linearly and is box-constrained. It is well-known that these problems exhibit bang–bang and singular solutions. We assume that the solution is of bang–bang type, which is computationally challenging to obtain. We employ a quadratic regularization of the LQ control problem by embedding the L2-norm of the control variable into the cost functional. First, we find a dual problem guided by the methodology of Fenchel duality. Then we prove strong duality and the saddle point property, which together ensure that the primal solution can be recovered from the dual solution. We propose a discretization scheme for the dual problem, under which a diagram depicting the relations between the primal and dual problems and their discretization commutes. The commuting diagram ensures that, given convergence results for the discrete primal variables, discrete dual variables also converge to a solution of the dual problem with a similar error bound. We demonstrate via a simple but illustrative example that significant computational savings can be achieved by solving the dual, rather than the primal, problem.
... where (x, u, p) is a solution of (48), (52), (58) with initial conditions x_0 and p_0, and λ := (β, p). Note that solving (OS) consists of finding ν ∈ D(S) such that ...
... Some other results were obtained by Sarychev [115], Poggiolini and Spadini [107], and Maurer and Osmolovskii [94,93]. Felgenhauer [52,53,54] studied both second-order optimality conditions and the sensitivity of the optimal solution. ...
Thesis
Full-text available
This thesis deals with optimal control problems for systems that are affine in one part of the control variable. First, we state necessary and sufficient second-order conditions when all control variables enter linearly. We have bound control constraints and a bang-singular solution. The sufficient condition is restricted to the scalar control case. We propose a shooting algorithm and provide a sufficient condition for its local quadratic convergence. This condition guarantees the stability of the optimal solution and the local quadratic convergence of the algorithm for the perturbed problem in some cases. We present numerical tests that validate our method. Afterwards, we investigate optimal control problems for systems that are affine in one part of the control variable. We obtain second-order necessary and sufficient conditions for optimality. We propose a shooting algorithm, and we show that the sufficient condition just mentioned is also sufficient for local quadratic convergence. Finally, we study a model of optimal hydrothermal scheduling. We investigate, by means of necessary conditions due to Goh, the possible occurrence of a singular arc.
... We mention also that the error analysis of discrete approximations to control problems for linear systems is facilitated by the recent papers [4-7]. However, our analysis is based on the "companion" paper [9], which extends in an appropriate way the concept of metric regularity of the optimality conditions for optimal control of linear systems. ...
... The O(h) error estimate in this case (again with k = 1 assumed) becomes nontrivial, since the overall interconnected system (8)-(11) has to be investigated. However, its analysis is based on the structural stability of the switching structure of the optimal control (obtained in [4]). Such stability is no longer valid if k > 1; this case is captured by Theorem 2. A different proof is needed in this case, and in the present paper it is based on the results of [9]. ...
Conference Paper
Full-text available
Although optimal control problems for linear systems have been profoundly investigated for more than 50 years, the issue of numerical approximations and precise error analysis remains challenging due to the bang-bang structure of the optimal controls. Based on a recent paper by M. Quincampoix and V.M. Veliov on metric regularity of the optimality conditions for control problems of linear systems, the paper presents new error estimates for the Euler discretization scheme applied to such problems. It turns out that the accuracy of the Euler method depends on the "controllability index" associated with the optimal solution, and a sharp error estimate is given in terms of this index. The result extends and strengthens in several directions some recently published ones.
... Let us now comment a bit on the related literature. For optimal control problems governed by ordinary differential equations, the stability analysis of bang-bang minimizers started with [21] for linear quadratic problems, and continued with [22] for more general affine systems. After that, several refinements and analyses of numerical schemes came. ...
Preprint
Full-text available
We analyse the role of the bang-bang property in affine optimal control problems. We show that many essential stability properties of affine problems are satisfied only when minimizers have the bang-bang property. Moreover, we prove that almost any perturbation of an affine optimal control problem leads to a bang-bang strict global minimizer. We work in an abstract framework that allows us to cover many problems in the optimal control literature; this includes problems constrained by partial and ordinary differential equations. We give examples that show the applicability of our results to specific optimal control problems.
... 3.2]. A variant of this assumption, with a differentiable switching function in place of the state y, was used in [30] and the references therein to deal with second-order optimality conditions for bang-bang controls in control problems governed by ordinary differential equations. A closely related assumption, the same as (4.39) but imposed on the adjoint state p instead of the state y, was also exploited in [15, Sec. ...
Preprint
Full-text available
This paper is concerned with first- and second-order optimality conditions for non-smooth semilinear optimal control problems involving the $L^1$ norm of the control in the cost functional. In addition to the appearance of the $L^1$ norm, which leads to the non-differentiability of the objective and promotes the sparsity of the optimal controls, the non-smoothness of the nonlinear coefficient in the state equation causes the control-to-state operator to be non-smooth as well. Exploiting a regularization scheme, we derive $C$-stationarity conditions for any local optimal control. Under a structural assumption on the associated state, we define the curvature functional for the smooth part (in the state variable) of the objective, for which second-order necessary and sufficient optimality conditions with minimal gap are shown. Furthermore, under a more restrictive structural assumption imposed on the mentioned state, an explicit formulation of the curvature is established, and thus explicit second-order optimality conditions are stated.
... The implication (A2') =⇒ (A2) is obvious. Now we focus on the first-order term in (32) under an additional condition introduced in [5] in a somewhat stronger form and for box-like sets U . ...
Article
Full-text available
The paper presents new sufficient conditions for the property of strong bi-metric regularity of the optimality map associated with an optimal control problem which is affine with respect to the control variable (affine problem). The optimality map represents the system of first order optimality conditions (Pontryagin principle), and its regularity is of key importance for the qualitative and numerical analysis of optimal control problems. The case of affine problems is especially challenging due to the typical discontinuity of the optimal control functions. A remarkable feature of the obtained sufficient conditions is that they do not require convexity of the objective functional. As an application, the result is used for proving uniform convergence of the Euler discretization method for a family of affine optimal control problems.
... For optimal control problems with the control appearing linearly, the optimal control may be discontinuous, for instance a bang-bang controller, and such conditions may not be satisfied. In that respect, there have been many studies developing new second-order optimality conditions for optimal control problems with the control appearing linearly [3,21,31,32]. Second-order Runge-Kutta approximations for such problems were studied in [15]. ...
Article
Full-text available
In this paper, we are concerned with a nonlinear optimal control problem of ordinary differential equations. We consider a discretization of the problem with the discontinuous Galerkin method of arbitrary order $r \in \mathbb{N} \cup \{0\}$. Under suitable regularity assumptions on the cost functional and solutions of the state equations, we first show the existence of a local solution to the discretized problem. We then provide sharp estimates for the $L^2$-error of the approximate solutions. The convergence rate of the error depends on the regularity of the optimal solution $\bar{u}$ and its adjoint state, as well as on the degree of the piecewise polynomials. Numerical experiments are presented supporting the theoretical results.
... For optimal control problems with the control appearing linearly, the optimal control may be discontinuous, for instance a bang-bang controller, and such conditions are not satisfied. In that respect, there have been many studies developing new second-order optimality conditions for optimal control problems with the control appearing linearly [2,9,12,13]. ...
Preprint
In this paper, we are concerned with a nonlinear optimal control problem of ordinary differential equations. We consider a discretization of the problem with the discontinuous Galerkin method of arbitrary order $r \in \mathbb{N}$. Under suitable regularity assumptions on the cost functional and solutions of the state equations, we provide sharp estimates for the error of the approximate solutions. Numerical experiments are presented supporting the theoretical results.
... In the optimal control of ordinary differential equations dealing with bang-bang solutions, one typically assumes that the differentiable switching function σ : [0, T] → R possesses only finitely many zeros and that |σ̇(t)| > 0 holds for all t where σ(t) = 0; see, e.g., [36,68,73] and the references therein. This condition cannot be directly transferred to the control of partial differential equations. ...
Thesis
Full-text available
This thesis deals with the construction and analysis of solution methods for a class of ill-posed optimal control problems involving elliptic partial differential equations as well as inequality constraints for the control and state variables. The objective functional is of tracking type, without any additional \(L^2\)-regularization terms. This makes the problem ill-posed and numerically challenging. We split this thesis into two parts. The first part deals with linear elliptic partial differential equations. In this case, the resulting solution operator of the partial differential equation is linear, making the objective functional linear-quadratic. To cope with additional control constraints we introduce and analyse an iterative regularization method based on Bregman distances. This method reduces to the proximal point method for a specific choice of the regularization functional. It turns out that this is an efficient method for the solution of ill-posed optimal control problems. We derive regularization error estimates under a regularity assumption which is a combination of a source condition and a structural assumption on the active sets. If additional state constraints are present, we combine an augmented Lagrange approach with a Tikhonov regularization scheme to solve this problem. The second part deals with non-linear elliptic partial differential equations. This significantly increases the complexity of the optimal control problem, as the associated solution operator of the partial differential equation is now non-linear. In order to regularize and solve this problem, we apply a Tikhonov regularization method and analyse the regularized problem with the help of a suitable second-order condition. Regularization error estimates are again derived under a regularity assumption. These results are then extended to a sparsity-promoting objective functional.
... where N_U(u) is the normal cone to U at u. Following [10], we assume that the optimal control û is strictly bang-bang, with a finite number of switching times on [0, T], and that the so-called switching function ... Assumptions (A1)-(A3) will be standing in this section. ...
Article
Full-text available
The paper presents new results about convergence of the gradient projection and the conditional gradient methods for abstract minimization problems on strongly convex sets. In particular, linear convergence is proved, although the objective functional does not need to be convex. Such problems arise, in particular, when a recently developed discretization technique is applied to optimal control problems which are affine with respect to the control. This discretization technique has the advantage of providing higher accuracy of discretization (compared with the known discretization schemes) and involves strongly convex constraints and a possibly non-convex objective functional. The applicability of the abstract results is proved in the case of linear-quadratic affine optimal control problems. A numerical example is given, confirming the theoretical findings.
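As a hedged illustration of the abstract setting described above (not the paper's own algorithm or discretization), the sketch below runs the gradient projection method on the closed unit ball, a standard example of a strongly convex set; the linear objective, step size, and iteration count are invented for the example. A linear objective is not convex in the strong sense, yet the iteration still converges to the minimizer on the boundary of the ball.

```python
import numpy as np

def project_unit_ball(x):
    """Euclidean projection onto the closed unit ball (a strongly convex set)."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def gradient_projection(grad, x0, step=0.5, iters=50):
    """Basic gradient projection iteration x_{k+1} = P(x_k - step * grad(x_k))."""
    x = x0
    for _ in range(iters):
        x = project_unit_ball(x - step * grad(x))
    return x

# Linear objective f(x) = c.x (invented for this sketch); its minimizer over
# the unit ball is -c/|c|, and the iteration reaches it in a few steps.
c = np.array([3.0, 4.0])
x_star = gradient_projection(lambda x: c, np.zeros(2))
# x_star is close to -c/np.linalg.norm(c) = [-0.6, -0.8]
```

The projection onto the ball is cheap and explicit, which is one reason strongly convex constraint sets are numerically attractive in this context.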
... In the context of optimal control problems with ODEs, the functions \(t \mapsto \sigma_n(t) = (B^*\bar z(t))_n = (e_n, \bar z(t))_{L^2(\Omega)}\) are referred to as switching functions. Here, one typically assumes that each \(\sigma_n\) has only finitely many roots with non-vanishing first derivatives (see, e.g., [17,26]), which again implies (3.11) with \(\Psi(\varepsilon) = C\varepsilon\). ...
Preprint
In this paper a priori error estimates are derived for full discretization (in space and time) of time-optimal control problems. Various convergence results for the optimal time and the control variable are proved under different assumptions. Especially the case of bang-bang controls is investigated. Numerical examples are provided to illustrate the results.
... We refer to [3,22] for analysis of second-order necessary conditions for bang-bang and singular-bang controls, respectively. Results on the stability of solutions with respect to disturbances were also recently obtained; see [4,12,14,25] and the bibliography therein. Based on these results, error estimates for the accuracy of the Euler discretization scheme applied to various classes of affine optimal control problems were obtained in [1,2,13,18,26,27]. ...
Article
Full-text available
This paper considers a linear-quadratic optimal control problem where the control function appears linearly and takes values in a hypercube. It is assumed that the optimal controls are of purely bang–bang type and that the switching function, associated with the problem, exhibits a suitable growth around its zeros. The authors introduce a scheme for the discretization of the problem that doubles the rate of convergence of Euler's scheme. The proof of the accuracy estimate employs some recently obtained results concerning the stability of the optimal solutions with respect to disturbances.
... It extends earlier works [7,9], which focused on problems with the control appearing linearly, to the bilinear case. In the literature on control problems governed by ordinary differential equations there are many contributions dealing with second-order conditions in the bang-bang case, e.g., [13,16,17,19,20,21,22]. In these contributions one typically assumes that the (differentiable) switching function σ : [0, T ] → R has finitely many zeros. ...
Article
We consider bilinear optimal control problems, whose objective functionals do not depend on the controls. Hence, bang-bang solutions will appear. We investigate sufficient second-order conditions for bang-bang controls, which guarantee local quadratic growth of the objective functional in $L^1$. In addition, we prove that for controls that are not bang-bang, no such growth can be expected. Finally, we study the finite-element discretization, and prove error estimates of bang-bang controls in $L^1$-norms.
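The abstracts above repeatedly rest on the same structural assumption: the switching function σ has finitely many zeros with non-vanishing derivative, and the bang-bang control flips sign exactly at those zeros. A minimal numerical sketch of this setup, with an invented switching function (the scan-and-bisect localization here is a generic technique, not any of the cited papers' methods):

```python
import math

def switching_zeros(sigma, a, b, n_scan=1000, tol=1e-12):
    """Locate the sign changes of sigma on [a, b]: scan a uniform grid for
    bracketing intervals, then refine each bracket by bisection."""
    zeros = []
    ts = [a + (b - a) * i / n_scan for i in range(n_scan + 1)]
    for lo, hi in zip(ts, ts[1:]):
        if sigma(lo) == 0.0:
            zeros.append(lo)
        elif sigma(lo) * sigma(hi) < 0.0:
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if sigma(lo) * sigma(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            zeros.append(0.5 * (lo + hi))
    return zeros

# Invented switching function with two regular zeros on [0, 2]
sigma = lambda t: math.cos(math.pi * t)            # zeros at 0.5 and 1.5
control = lambda t: -1.0 if sigma(t) > 0 else 1.0  # bang-bang control u = -sign(sigma)
zeros = switching_zeros(sigma, 0.0, 2.0)
# zeros is approximately [0.5, 1.5]; control switches from -1 to +1 at 0.5
```

Because σ' does not vanish at either zero, each switching point is isolated and moves Lipschitz-continuously under small perturbations of σ, which is exactly the regularity that the stability results exploit.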
... Quite a number of stability results are available for optimal control problems with bangbang controls subject to ordinary differential equations, see, e.g., [5,6]. The stability is based on assumptions on the switching function, which imply our condition (A4.ae). ...
Article
Full-text available
In this paper, we investigate solution stability for control problems of partial differential equations with the cost functional not involving the usual quadratic term for the control. We first establish a sufficient optimality condition for the optimal control problems with bang-bang controls. Then we obtain criteria for solution stability for the optimal control problems of bang-bang controls under linear perturbations. We prove Hölder stability of optimal controls in (Formula presented.).
... Theoretical and numerical questions related to this control problem attracted much interest in recent years, see, e.g., [DH12], [WW11a], [WW11b], [WW13], [Wac13], [Wac14], [GY11], [Fel03], [Alt+12], [AS11], and [Sey15]. The last four papers are concerned with T being the solution operator of an ordinary differential equation, the former papers with T being a solution operator of an elliptic PDE or T being a continuous linear operator. ...
Article
Full-text available
We consider a control-constrained parabolic optimal control problem without Tikhonov term in the tracking functional. For the numerical treatment, we use variational discretization of its Tikhonov regularization: for the state and the adjoint equation, we apply Petrov-Galerkin schemes from [Daniels et al. 2015] in time and usual conforming finite elements in space. We prove a priori estimates for the error between the discretized regularized problem and the limit problem. Since these estimates are not robust if the regularization parameter tends to zero, we establish robust estimates, which --- depending on the problem's regularity --- enhance the previous ones. In the special case of bang-bang solutions, these estimates are further improved. A numerical example confirms our analytical findings.
... Therefore, the optimal control u_0 is of bang–bang type or may have singular arcs. In this case we assume that for the reference problem (PQ)_α with α = 0 the following conditions hold for σ = σ_0 and the set of zeros Σ = Σ_0 of σ_0 (compare Felgenhauer [12], Alt et al. [1]): ...
... In a forthcoming paper we will give a more detailed description of this approach and a larger set of references; furthermore, we will give some suggestions about possible extensions and further investigations. For different approaches, here we quote only [8], [14], [19] and references therein. ...
Article
Full-text available
In this paper we develop a Hamiltonian approach to sufficient conditions in optimal control problems. We extend the known conditions for $C^2$ maximised Hamiltonians in two directions: on the one hand, we explain the role of a super Hamiltonian (i.e., a Hamiltonian which is greater than or equal to the maximised one); on the other, we develop the theory under some minimal regularity assumptions. The results we present encompass many known results, and they can be used to tackle new problems.
... The error analysis in case of discontinuous optimal controls is currently under investigation even for simple classes of ODE optimal control problems (see e.g. [16,10,1,2,13]). ...
Article
The paper presents a numerical procedure for solving a class of optimal control problems for heterogeneous systems. The latter are described by parameterized systems of ordinary differential equations, coupled by integrals along the parameter space. Such problems arise in economics, demography, epidemiology, management of biological resources, etc. The numerical procedure includes discretization and a gradient projection method for solving the resulting discrete problem. A main point of the paper is the performed error analysis, which is based on the property of metric regularity of the system of necessary optimality conditions associated with the considered problem.
Article
This paper is concerned with first- and second-order optimality conditions as well as the stability for non-smooth semilinear optimal control problems involving the \(L^1\)-norm of the control in the cost functional. In addition to the appearance of the \(L^1\)-norm leading to the non-differentiability of the objective and promoting the sparsity of the optimal controls, the non-smoothness of the nonlinear coefficient in the state equation causes the same property of the control-to-state operator. Exploiting a regularization scheme, we derive C-stationarity conditions for any local optimal control. Under a structural assumption on the associated state, we define the curvature functional for the smooth part of the objective (the part not including the \(L^1\)-norm of the control), for which the second-order necessary and sufficient optimality conditions with minimal gap are shown. Furthermore, under a more restrictive structural assumption imposed on the mentioned state, an explicit formula for the curvature is established and thus the explicit second-order optimality conditions are stated. Finally, the Lipschitz stability of local solutions with respect to the sparsity parameter is shown.
Chapter
We investigate the implicit Euler discretization for linear-quadratic optimal control problems with index two DAEs. There is a discrepancy between the necessary conditions of problems with higher index DAEs and their discretizations, since the necessary conditions of the continuous problem coincide with the necessary conditions of the index reduced problem. This implicit index reduction does not occur for the discretized problem. Thus, the respective switching functions cannot be related. The discrepancy is overcome by reformulating the discretized problem, which yields an approximation of the index reduced problem with suitable necessary conditions. If the switching function has a certain structure, such that the optimal control is of bang–bang type, we can show that the controls converge with an order of 1/2 in the L1-norm. We then improve these error estimates under slightly stronger smoothness conditions on the problem's data and switching function, which gives us a convergence order of one.
Article
It is well known that optimal control problems with L¹-control costs produce sparse solutions, i.e., the optimal control is zero on whole intervals. In this paper, we study a general class of convex linear-quadratic optimal control problems with a sparsity functional that promotes a so-called group sparsity structure of the optimal controls. In this case, the components of the control function take the value of zero on parts of the time interval, simultaneously. These problems are both theoretically interesting and practically relevant. After obtaining results about the structure of the optimal controls, we derive stability estimates for the solution of the problem w.r.t. perturbations and L²-regularization. These results are consequently applied to prove convergence of the Euler discretization. Finally, the usefulness of our approach is demonstrated by solving an illustrative example using a semismooth Newton method.
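The group-sparsity functional discussed above penalizes the Euclidean norm of each block of control components, so a whole block is either zero or active simultaneously. The proximal operator of this penalty is the standard block soft-thresholding map, sketched below with invented block values and threshold (a generic illustration of the sparsity mechanism, not the paper's semismooth Newton method):

```python
import numpy as np

def block_soft_threshold(v, beta):
    """Proximal operator of beta * ||v||_2 for one control block:
    zeroes the whole block if its norm is below beta, else shrinks it."""
    n = np.linalg.norm(v)
    if n <= beta:
        return np.zeros_like(v)
    return (1.0 - beta / n) * v

# Two control components grouped into one block: either both vanish on a
# region, or neither does -- the 'group sparsity' structure.
beta = 1.0
weak_block = np.array([0.3, 0.4])    # norm 0.5 <= beta -> zeroed jointly
strong_block = np.array([3.0, 4.0])  # norm 5.0 > beta  -> shrunk, kept
print(block_soft_threshold(weak_block, beta))    # [0. 0.]
print(block_soft_threshold(strong_block, beta))  # [2.4 3.2]
```

Applied pointwise in time to the control vector, this map produces exactly the behavior described in the abstract: all components of the control take the value zero on parts of the time interval simultaneously.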
Article
We analyze the implicit Euler discretization for a class of convex linear-quadratic optimal control problems with control appearing linearly. Constraints are defined by lower and upper bounds for the controls, and the cost functional may depend on a regularization parameter ν. Without any structural assumption on the optimal control we prove convergence of order 1 w.r.t. the mesh size for the discrete optimal values. Under the additional assumption that the optimal control is of bang-bang type and the switching function satisfies a growth condition around its zeros, we show that the solutions are calm functions of perturbation and regularization parameters. By applying this result to the implicit Euler discretization we improve existing error estimates for discretizations based on the explicit Euler method. Numerical experiments confirm the theoretical findings and demonstrate the usefulness of implicit methods and regularization in case of bang-bang controls.
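The discretization error estimates in these works rest on a simple mechanism: a piecewise-constant approximation of a bang-bang control can differ from the exact control only inside the mesh cells that contain switching points, so the L¹ error is proportional to the mesh size. A toy computation of this effect, with the switching time and grid invented for illustration (not any of the cited Euler schemes):

```python
def l1_error(tau, N, T=1.0):
    """L1 distance between the bang-bang control u(t) = sign(tau - t) on [0, T]
    and its piecewise-constant approximation frozen at the left node of each cell."""
    h = T / N
    err = 0.0
    for i in range(N):
        a, b = i * h, (i + 1) * h
        u_h = 1.0 if tau > a else -1.0       # discrete control on the cell [a, b)
        if b <= tau:                          # exact control is +1 on the whole cell
            err += abs(u_h - 1.0) * h
        elif a >= tau:                        # exact control is -1 on the whole cell
            err += abs(u_h + 1.0) * h
        else:                                 # cell straddles the switching time
            err += abs(u_h - 1.0) * (tau - a) + abs(u_h + 1.0) * (b - tau)
    return err

tau = 1.0 / 3.0
# Only the single cell containing tau contributes, so the error is O(h):
print(l1_error(tau, 10))    # 2*(0.4 - 1/3), roughly 0.133
print(l1_error(tau, 100))   # 2*(0.34 - 1/3), roughly 0.0133
```

Refining the mesh by a factor of ten shrinks the L¹ error by the same factor, i.e. first-order convergence, matching the order-one estimates quoted above for controls with regular switching structure.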
Chapter
We survey the results on no-gap second-order optimality conditions (both necessary and sufficient) in the Calculus of Variations and Optimal Control, that were obtained in the monographs Milyutin and Osmolovskii (Calculus of Variations and Optimal Control. Translations of Mathematical Monographs. American Mathematical Society, Providence, 1998) and Osmolovskii and Maurer (Applications to Regular and Bang-Bang Control: Second-Order Necessary and Sufficient Optimality Conditions in Calculus of Variations and Optimal Control. SIAM Series Design and Control, vol. DC 24. SIAM Publications, Philadelphia, 2012), and discuss their further development. First, we formulate such conditions for broken extremals in the simplest problem of the Calculus of Variations and then, we consider them for discontinuous controls in optimal control problems with endpoint and mixed state-control constraints, considered on a variable time interval. Further, we discuss such conditions for bang-bang controls in optimal control problems, where the control appears linearly in the Pontryagin-Hamilton function with control constraints given in the form of a convex polyhedron. Bang-bang controls induce an optimization problem with respect to the switching times of the control, the so-called Induced Optimization Problem. We show that second-order sufficient condition for the Induced Optimization Problem together with the so-called strict bang-bang property ensures second-order sufficient conditions for the bang-bang control problem. Finally, we discuss optimal control problems with mixed control-state constraints and control appearing linearly. Taking the mixed constraint as a new control variable we convert such problems to bang-bang control problems. The numerical verification of second-order conditions is illustrated on three examples.
Conference Paper
We analyze \(L^2\)-regularization of a class of linear-quadratic optimal control problems with an additional \(L^1\)-control cost depending on a parameter \(\beta \). To deal with this nonsmooth problem we use an augmentation approach known from linear programming in which the number of control variables is doubled. It is shown that if the optimal control for a given \(\beta ^*\ge 0\) is bang-zero-bang, the solutions are continuous functions of the parameter \(\beta \) and the regularization parameter \(\alpha \). Moreover we derive error estimates for Euler discretization.
Article
Full-text available
We survey the results on no-gap second-order optimality conditions (both necessary and sufficient) in the Calculus of Variations and Optimal Control, that were obtained in the monographs [31] and [40], and discuss their further development. First, we formulate such conditions for broken extremals in the simplest problem of the Calculus of Variations and then, we consider them for discontinuous controls in optimal control problems with endpoint and mixed state-control constraints, considered on a variable time interval. Further, we discuss such conditions for bang-bang controls in optimal control problems, where the control appears linearly in the Pontryagin-Hamilton function with control constraints given in the form of a convex polyhedron. Bang-bang controls induce an optimization problem with respect to the switching times of the control, the so-called Induced Optimization Problem. We show that the second-order sufficient condition for the Induced Optimization Problem together with the so-called strict bang-bang property ensures second-order sufficient conditions for the bang-bang control problem. Finally, we discuss optimal control problems with mixed control-state constraints and control appearing linearly. Taking the mixed constraint as a new control variable we convert such problems to bang-bang control problems. The numerical verification of second-order conditions is illustrated on three examples.
Chapter
As a precursor to the proof of the maximum principle for a general nonlinear system, in this chapter we develop the classical results about the structure of the reachable set for linear time-invariant systems with bounded control sets and prove Theorem 2.5.3 of Chap. 2.
Chapter
We have seen in Chap. 4 that necessary conditions for optimality follow from separation results using convex approximations for the reachable set from a point. If the reachable sets are known exactly, not only necessary conditions, but complete solutions can be obtained for related optimal control problems (e.g., the time-optimal control problem). In general, determining these sets is as difficult a problem as solving an optimal control problem.
Chapter
Our overall objective is to analyze the mapping properties of a flow of extremal controlled trajectories and to show that the cost-to-go function satisfies the Hamilton–Jacobi–Bellman equation in regions where this flow covers an open set of the state injectively (x-space in the time-invariant case, respectively (t, x)-space in the time-dependent case).
Chapter
In this chapter, we prove the Pontryagin maximum principle. The proof we present follows arguments by Hector Sussmann [244, 247, 248], but in a smooth setting. It is somewhat technical, but provides a uniform treatment of first- and high-order variations. As a result, we not only prove Theorem 2.2.1, but obtain a general high-order version of the maximum principle (e.g., see [140]) from which we then derive the high-order necessary conditions for optimality that were introduced in Sect. 2.8.
Chapter
We now proceed to the study of a finite-dimensional optimal control problem, i.e., a dynamic optimization problem in which the state of the system, x = x(t), is linked in time to the application of a control function, u = u(t), by means of the solution to an ordinary differential equation whose right-hand side is shaped by the control. We now consider multidimensional systems in which both the state and the control variables no longer need to be scalar. In particular, the results presented here also provide high-dimensional generalizations for the classical theorems of the calculus of variations developed in Chap. 1. So far, we have considered only the simplest problem in the calculus of variations in which the functional is minimized over all curves that satisfy prescribed boundary conditions. Much more than in the calculus of variations, an optimal control problem is determined by its constraints.
Chapter
So far, our focus has been on necessary conditions for optimality. The conditions of the Pontryagin maximum principle, Theorem 2.2.1, collectively form the first-order necessary conditions for optimality of a controlled trajectory (aside from the much stronger minimum condition on the Hamiltonian that generalizes the Weierstrass condition of the calculus of variations). Clearly, as in ordinary calculus, first-order conditions by themselves are no guarantee that even a local extremum is attained. High-order tests, based on second- and increasingly higher-order derivatives, like the Legendre–Clebsch conditions for singular controls, can be used to restrict the class of candidates for optimality further, but in the end, sufficient conditions need to be provided that at least guarantee some kind of local optimality. These will be the topic of the next two chapters of our text.
Chapter
We begin with an introduction to the historical origin of optimal control theory, the calculus of variations. But it is not our intention to give a comprehensive treatment of this topic. Rather, we introduce the fundamental necessary and sufficient conditions for optimality by fully analyzing two of the cornerstone problems of the theory, the brachistochrone problem and the problem of determining surfaces of revolution with minimum surface area, so-called minimal surfaces. Our emphasis is on illustrating the methods and techniques required for getting complete solutions for these problems. More generally, we use the so-called fixed-endpoint problem, the problem of minimizing a functional over all differentiable curves that satisfy given boundary conditions, as a vehicle to introduce the classical results of the theory: (a) the Euler–Lagrange equation as the fundamental first-order necessary condition for optimality, (b) the Legendre and Jacobi conditions, both in the form of necessary and sufficient second-order conditions for local optimality, (c) the Weierstrass condition as additional necessary condition for optimality for so-called strong minima, and (d) its connection with field theory, the fundamental idea in any sufficiency theory. Throughout our presentation, we emphasize geometric constructions and a geometric interpretation of the conditions. For example, we present the connections between envelopes and conjugate points of a fold type and use these arguments to give a full solution for the minimum surfaces of revolution.
Article
We investigate the simultaneous regularization and discretization of an optimal control problem with pointwise control constraints. Typically such problems exhibit bang–bang solutions: the optimal control almost everywhere takes values at the control bounds. We derive discretization error estimates that are robust with respect to the regularization parameter. These estimates can be used to make an optimal choice of the regularization parameter with respect to discretization error estimates.
Article
Optimal control problems with fixed terminal time are considered for multi-input bilinear systems with the control set given by a compact interval and the objective function affine in the controls. Systems of this type have been widely used in the modeling of cell-cycle specific cancer chemotherapy over a prescribed therapy horizon for both homogeneous and heterogeneous tumor populations. Necessary conditions for optimality lead to concatenations of bang and singular controls as prime candidates for optimality. In this paper, the method of characteristics will be formulated as a general procedure to embed such a controlled reference extremal into a field of broken extremals. Sufficient conditions for the strong local optimality of a controlled reference bang-bang trajectory will be formulated in terms of solutions to associated sensitivity equations. These results will be applied to a model for cell cycle specific cancer chemotherapy with cytotoxic and cytostatic agents.
Article
We analyze a combined regularization–discretization approach for a class of linear-quadratic optimal control problems. By choosing the regularization parameter \(\alpha \) with respect to the mesh size \(h\) of the discretization we approximate the optimal bang–bang control. Under weaker assumptions on the structure of the switching function we generalize existing convergence results and prove error estimates of order \({\mathcal {O}}(h^{1/(k+1)})\) with respect to the controllability index \(k\).
Article
Strong second-order conditions in mathematical programming play an important role not only as optimality tests but also as an intrinsic feature in stability and convergence theory of related numerical methods. Besides appropriate first-order regularity conditions, the crucial point consists in local growth estimation for the objective which yields inverse stability information on the solution. In optimal control, similar results are known in case of continuous control functions, and for bang–bang optimal controls when the state system is linear. The paper provides a generalization of the latter result to bang–bang optimal control problems for systems which are affine-linear w.r.t. the control but depend nonlinearly on the state. Local quadratic growth in terms of L¹ norms of the control variation is obtained under appropriate structural and second-order sufficient optimality conditions.
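The L¹ quadratic growth referred to here can be seen in a minimal model (objective and switching function invented for illustration): minimize J(u) = ∫₀¹ (t − τ) u(t) dt over |u| ≤ 1. The minimizer is the bang-bang control u*(t) = −sign(t − τ); shifting its switching point by δ changes the control by ‖u − u*‖_{L¹} = 2δ but raises the objective by only δ², i.e. J grows like ‖u − u*‖²_{L¹}/4. The sketch below checks this numerically:

```python
def J(switch, tau=0.25, n=100000):
    """Midpoint-rule approximation of J(u) = int_0^1 (t - tau) u(t) dt for the
    bang-bang control u(t) = +1 on [0, switch), -1 on [switch, 1]."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        u = 1.0 if t < switch else -1.0
        total += (t - tau) * u * h
    return total

tau = 0.25
for delta in (0.1, 0.05):
    growth = J(tau + delta, tau) - J(tau, tau)
    # Analytically: J(tau + delta) - J(tau) = delta**2, while the L1 distance
    # between the two controls is 2*delta -- quadratic growth in L1.
    print(delta, growth)   # growth is approximately delta**2
```

Note that the growth is only quadratic in L¹ (not linear), which is why pointwise coercivity arguments fail for discontinuous controls and integrated conditions of the kind discussed in the paper are needed instead.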
Article
In this paper, we derive some sufficient second order optimality conditions for control problems of partial differential equations (PDEs) when the cost functional does not involve the usual quadratic term for the control or higher nonlinearities for it. Though not always, in this situation the optimal control is typically bang-bang. Two different control problems are studied. The second differs from the first in the presence of the L¹ norm of the control. This term leads to optimal controls that are sparse and usually take only three different values (we call them bang-bang-bang controls). Though the proofs are detailed in the case of a semilinear elliptic state equation, the approach can be extended to parabolic control problems. Some hints are provided in the last section to extend the results.
Article
Full-text available
We analyze the Euler approximation to a state constrained control problem. We show that if the active constraints satisfy an independence condition and the Lagrangian satisfies a coercivity condition, then locally there exists a solution to the Euler discretization, and the error is bounded by a constant times the mesh size. The proof couples recent stability results for state constrained control problems with results established here on discrete-time regularity. The analysis utilizes mappings of the discrete variables into continuous spaces where classical finite element estimates can be invoked.
Article
Full-text available
Second order sufficient conditions (SSC) for control problems with control-state constraints and free final time are presented. Instead of deriving such SSC ab initio, the control problem with free final time is transformed into an augmented control problem with fixed final time for which well-known SSC exist. SSC are then expressed as a condition on the positive definiteness of the second variation. A convenient numerical tool for verifying this condition is based on the Riccati approach, where one has to find a bounded solution of an associated Riccati equation satisfying specific boundary conditions. The augmented Riccati equations for the augmented control problem are derived and their modifications on the boundary of the control-state constraint are discussed. Two numerical examples, (1) the classical Earth-Mars orbit transfer in minimal time, (2) the Rayleigh problem in electrical engineering, demonstrate that the Riccati equation approach provides a viable numerical test of SS...
Article
Quadratic conditions for optimality developed by the author in an earlier paper are applied to the analysis of extremals with switchings of the control. Several specific problems of optimal control are studied, among them time-optimal control problems for linear systems with constant coefficients describing "chains" of dimension 2, 3, and 4. Consideration is given also to nonlinear systems with integral cost functionals, and in particular to harmonic oscillatory systems with speed constraints.
Article
In [13] a new sufficiency criterion for strong local minimality in multidimensional non-convex control problems with pure state constraint was developed. In this paper we use a similar method to obtain sufficient conditions for weak local minimality in multidimensional control problems with mixed state-control restrictions. The result is obtained by applying duality theory for control problems of Klötzler [11] as well as first and second order optimality conditions for optimization problems described by $C^{1,1}$-functions having a locally Lipschitzian gradient mapping. The main theorem contains the result of Zeidan [17] for one-dimensional problems without state restrictions.
Article
A weak maximum principle is shown for general problems: minimize f(x, w) on X_0 × X_1 subject to linear state constraints A_0 x = A_1 w, in Banach spaces X_0 and locally convex topological vector spaces X_1, where f(x, ·) is a convex functional on X_1 and A_j are linear and continuous operators from X_j to a Hilbert space X (j = 0, 1). The proved theorem is applied to Dieudonné-Rashevsky-type and relaxed control problems.
Article
Besides stating the problem and the results, we shall give in this section a brief overview of the classical necessary and sufficient conditions in the calculus of variations, in order to clearly situate the contribution of this article. 1.1 The problem. We are given an interval [a, b], two points x_a, x_b in R^n, and a function L (the Lagrangian) mapping [a, b] × R^n × R^n to R. The basic problem in the calculus of variations, labeled (P), is that of minimizing the functional over some class X of functions x and subject to the constraints x(a) = x_a, x(b) = x_b. Let us take for now the class X of functions to be the continuously differentiable mappings from [a, b] to R^n; we call such functions smooth arcs.
Article
We study $L_{1}$-local optimality of a given control $\tilde u$ in the time-optimal control problem for an affine control system. We start with the necessary optimality condition---the Pontryagin maximum principle, which selects the candidates for minimizers, the extremal controls. Generally, the corresponding Pontryagin extremals consist of bang-bang and singular subarcs, separated by switching points. In the present paper we treat only pure bang-bang extremals. We introduce extended first and second variations along a bang-bang extremal and establish first- and second-order sufficient optimality conditions for the bang-bang extremal controls.
Article
We consider the problem of time-optimal control for systems of the form dx/dt = f(x) + u g(x), where f and g are smooth vector fields and admissible controls are measurable functions u with values in [-1, 1]. Under the assumption that f, g, and [f, g] are independent, we prove that generically every point has a neighborhood U such that bang-bang trajectories that lie in U and have more than 7 switchings are not time-optimal.
Article
In recent years, sufficient optimality criteria and solution stability in optimal control have been investigated widely and used in the analysis of discrete numerical methods. These results were concerned mainly with weak local optima, whereas strong optimality has been considered often as a purely theoretical aspect. In this paper, we show via an example problem how “weak” the weak local optimality can be and derive new strong optimality conditions. The criteria are suitable for practical verification and can be applied to the case of discontinuous controls with changes in the set of active constraints.
Article
In recent years, stability and discrete approximations for nonlinear control problems have been widely investigated. Essential progress has been made in deriving optimality conditions for weak local optima which ensure L∞-stability under small data perturbations as well as certain regularity of the control function. For the case of continuous controls, the convergence of traditional discretizations has been proved. When the optimal control is discontinuous, the mentioned stability results in general cannot be applied in case of shifts in the jump localizations. In the present paper, we thus consider so-called bounded-strong extremal points. By means of a general duality approach we first derive sufficient optimality conditions for weak and for bounded-strong optima, and secondly, discuss the solution approximability by minimizing sequences. It will be shown that a bounded-strong extremum is an attractive point for minimizing sequences from a certain L²-neighborhood instead of L∞-neighborhoods as for weak local minima.
Chapter
We announce a new sufficient condition for a bang-bang extremal to be a strong local optimum for a control problem in the Mayer form. The controls appear linearly and take values in a polyhedron and the state space and the constraints are smooth, finite dimensional manifolds.
Article
We survey the results of the SPP 1253 project "Numerical Analysis of State-Constrained Optimal Control Problems for PDEs". In the first part, we consider Lavrentiev-type regularization of both distributed and boundary control. In the second part, we present a priori error estimates for elliptic control problems with finite dimensional control space and state constraints both in finitely many points and in all points of a subdomain with nonempty interior.
Article
Parametric nonlinear control problems subject to vector-valued mixed control-state constraints are investigated. The model perturbations are implemented by a parameter p of a Banach-space P. We prove solution differentiability in the sense that the optimal solution and the associated adjoint multiplier function are differentiable functions of the parameter. The main assumptions for solution differentiability are composed by regularity conditions and recently developed second-order sufficient conditions (SSC). The analysis generalizes the approach in [16, 20] and establishes a link between (1) shooting techniques for solving the associated boundary value problem (BVP) and (2) SSC. We shall make use of sensitivity results from finite-dimensional parametric programming and exploit the relationships between the variational system associated to BVP and its corresponding Riccati equation. Solution differentiability is the theoretical backbone for any numerical sensitivity analysis. A numerical example with a vector-valued control is presented that illustrates sensitivity analysis in detail.
Article
Parameter-dependent optimal control problems for nonlinear ordinary differential equations, subject to control and state constraints, are considered. Sufficient conditions are formulated under which the solutions and the associated Lagrange multipliers are locally Lipschitz continuous and directionally differentiable functions of the parameter. The directional derivatives are characterized.
Article
References 1–4 develop second-order sufficient conditions for local minima of optimal control problems with state and control constraints. These second-order conditions tighten the gap between necessary and sufficient conditions by evaluating a positive-definiteness criterion on the tangent space of the active constraints. The purpose of this paper is twofold. First, we extend the methods in Refs. 3, 4 and include general boundary conditions. Then, we relate the approach to the two-norm approach developed in Ref. 5. A direct sufficiency criterion is based on a quadratic function that satisfies a Hamilton-Jacobi inequality. A specific form of such a function is obtained by applying the second-order sufficient conditions to a parametric optimization problem. The resulting second-order positive-definiteness conditions can be verified by solving Riccati equations.
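The Riccati-based verification mentioned in this abstract can be sketched as follows. The notation (A = f_x, B = f_u along the reference solution, Hessian blocks of the Hamiltonian H) is a standard assumption for illustration, not taken from the cited paper:

```latex
% A quadratic ansatz V(t,x) = <x - \bar x(t), Q(t)(x - \bar x(t))> in the
% Hamilton--Jacobi inequality leads to a Riccati differential equation:
\dot Q + Q A(t) + A(t)^{\top} Q
  - \bigl(Q B(t) + H_{xu}\bigr)\, H_{uu}^{-1}\, \bigl(B(t)^{\top} Q + H_{ux}\bigr)
  + H_{xx} = 0,
\qquad H_{uu} \succ 0 \quad \text{(strengthened Legendre--Clebsch condition)}
% Existence of a bounded symmetric solution Q on [0,T] compatible with the
% boundary conditions induced by the terminal cost yields strict optimality.
```

In practice this reduces the verification of second-order sufficiency to the numerical integration of one matrix differential equation along the candidate extremal.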
Chapter
A survey of stability and sensitivity results for the solutions to parameter-dependent cone-constrained optimization problems in abstract Banach spaces is presented. An application to optimal control problems for nonlinear ordinary differential equations subject to control and state constraints is given.
Article
A theoretical sensitivity analysis for parametric optimal control problems subject to pure state constraints has recently been elaborated in [7,8]. The articles consider both first and higher order state constraints and develop conditions for solution differentiability of optimal solutions with respect to parameters. In this paper, we treat the numerical aspects of computing sensitivity differentials via appropriate boundary value problems. In particular, numerical methods are proposed that allow to verify all assumptions underlying solution differentiability. Three numerical examples with state constraints of order one, two and four are discussed in detail.
Article
We formulate a version of the method of characteristics based on parametrizations of extremals by their terminal values. Sufficient conditions are given for imbedding a reference trajectory into a local field of broken extremals. For a problem with terminal manifold of codimension 1 it is shown that a broken extremal is a relative minimum if (i) the restrictions of the flow to intervals where the control is continuous have nonsingular partial derivatives with respect to the parameter and (ii) the switching surfaces are crossed transversally.
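The transversal-crossing condition (ii) for broken extremals is the same regularity requirement on switching-function zeros that the main article builds on. In terms of a switching function σ (notation assumed for illustration), it reads:

```latex
% at each switching time \tau_k of the reference bang-bang control \bar u:
\sigma(\tau_k) = 0, \qquad \dot\sigma(\tau_k) \neq 0
% Every zero of the switching function is simple: the extremal crosses the
% switching surface transversally, and the switching times vary
% Lipschitz-continuously (indeed differentiably) under small perturbations.
```

When a zero fails to be simple, neighboring extremals may gain or lose switchings, which is precisely the instability effect illustrated by the example in the main article's abstract.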
Article
The theory of discretization methods for control problems and their convergence under strong stable optimality conditions has in recent years been thoroughly investigated by several authors. A particularly interesting question is to ask for a "natural" smoothness category for the optimal controls as functions of time. In several papers, Hager and Dontchev considered Riemann integrable controls. This smoothness class is characterized by global, averaged criteria. In contrast, we consider strictly local properties of the solution function. As a first step, we introduce tools for the analysis of L-infinity elements "at a point". Using afterwards Robinson's strong regularity theory, under appropriate first and second order optimality conditions we obtain structural as well as certain pseudo-Lipschitz properties with respect to the time variable for the control. Consequences for the behavior of discrete solution approximations are discussed in the concluding section with respect to L-infinity as well as L-2 topologies.
Article
A family of parameter dependent optimal control problems is considered. The problems are subject to higher-order inequality type state constraints. It is assumed that, at the reference value of the parameter, the solution exists and is regular. Regularity conditions are formulated under which the original problems are locally equivalent to some other problems subject to equality type constraints only. The classical implicit function theorem is applied to these new problems to investigate Fréchet differentiability of the stationarity points with respect to the parameter.
Article
We consider a nonlinear optimal control problem with inequality control constraints and subject to canonical perturbations. We prove that the primal-dual pair satisfying the first-order necessary conditions is locally single-valued and Lipschitz continuous, with the primal component being a locally optimal solution, if and only if the combination of an independence condition for the gradients of the active constraints and a coercivity condition holds. Key Words: Optimal Control, Canonical Perturbations, Lipschitzian Stability. AMS Classification: 49K40, 49K15. 1 Introduction. In this paper we present necessary and sufficient conditions for Lipschitzian stability in optimal control. We are motivated by the recently obtained characterization of the Lipschitzian stability of the standard finite-dimensional mathematical programming problem. Specifically, it was shown in [4] that the combination of the linear independence of the active constraints and the strong second-order sufficient condi...
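The finite-dimensional characterization motivating this result can be stated schematically. This is a sketch in standard NLP notation of a Dontchev–Rockafellar-type result, not a verbatim statement from the cited paper:

```latex
% parametric NLP:  \min_x f(x,p)  \text{ s.t. }  g_i(x,p) \le 0,\ i = 1,\dots,m
% KKT system at (\bar x, \bar\lambda) for the reference parameter \bar p:
\nabla_x f + \textstyle\sum_i \lambda_i \nabla_x g_i = 0, \qquad
\lambda_i \ge 0,\quad g_i \le 0,\quad \lambda_i g_i = 0
% LICQ:   \{\nabla_x g_i : i \text{ active}\} \text{ linearly independent}
% SSOSC:  \langle d, \nabla^2_{xx} L\, d\rangle > 0 \text{ for all } d \neq 0
%         \text{ in the critical cone of the strictly active constraints}
% Characterization:  LICQ + SSOSC  \iff  the solution map
% p \mapsto (x(p), \lambda(p)) is locally single-valued and Lipschitz.
```

The abstract above extends exactly this equivalence from mathematical programming to optimal control with control constraints.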
Diskretisierung von Steuerungsproblemen unter stabilen Optimalitätsbedingungen, Habilitation thesis
  • U Felgenhauer
U. Felgenhauer, Diskretisierung von Steuerungsproblemen unter stabilen Optimalitätsbedingungen, Habilitation thesis, Brandenburgische Technische Universität Cottbus, 1999.
Second order sufficient conditions for control problems with free final time
  • H Maurer
H. Maurer, Second order sufficient conditions for control problems with free final time, in Proceedings of the 3rd European Control Conference, Rome, 1995, A. Isidori et al., eds., 1995, pp. 3602–3606.
Symplectic methods for strong local optimality in the bang-bang case, in Contemporary Trends in Nonlinear Geometric Control Theory and Its Applications
  • A A Agrachev
  • G Stefani
  • P L Zezza
A. A. Agrachev, G. Stefani, and P. L. Zezza, Symplectic methods for strong local optimality in the bang-bang case, in Contemporary Trends in Nonlinear Geometric Control Theory and Its Applications, Mexico City, 2000, World Scientific, River Edge, NJ, 2002, pp. 169–181.
Second-order conditions for broken extremals, in Calculus of Variations and Optimal Control
  • N P Osmolovskii
N. P. Osmolovskii, Second-order conditions for broken extremals, in Calculus of Variations and Optimal Control, Haifa, 1998, Chapman and Hall/CRC Press, Boca Raton, FL, 2001, pp. 198–216.
Convergence of approximations to nonlinear control problems, in Mathematical Programming with Data Perturbation
  • K Malanowski
  • C Büskens
  • H Maurer
K. Malanowski, C. Büskens, and H. Maurer, Convergence of approximations to nonlinear control problems, in Mathematical Programming with Data Perturbation, Lecture Notes in Pure and Appl. Math. 195, A. V. Fiacco, ed., Marcel Dekker, New York, 1997, pp. 253–284.
  • Milyutin A.