Fig 2 - uploaded by Tamás Terlaky

# The optimal value function f(ε, λ) of Example 4.1

Source publication
Article
Full-text available
In bi-parametric linear optimization (LO), perturbation occurs in both the right-hand-side and the objective function data with different parameters. In this paper, the bi-parametric LO problem is considered and we are interested in identifying the regions where the optimal partitions are invariant. These regions are referred to as invariancy regions...

## Contexts in source publication

Context 1
... π(ε_ℓ, λ) = (B(ε_ℓ, λ), N(ε_ℓ, λ)) and π(ε_u, λ) = (B(ε_u, λ), N(ε_u, λ)) be optimal partitions at the vertical transition lines ε = ε_ℓ and ε = ε_u, respectively. Using Corollary IV.63 in (Roos et al. 2005), we conclude that B(ε_ℓ, λ) ⊆ B and B(ε_u, λ) ⊆ B. The proof is complete because λ is an arbitrary parameter value in (λ_ℓ, λ_u). ...
Context 2
... lines denote the transition (half-)line segments. The actual invariancy region is the transition line segment {(ε, 0) | −2.5 < ε < 1.5}. Optimal partitions on all invariancy regions, including transition (half-)line segments and nontrivial invariancy regions, are presented in Table 1. The optimal value function of this problem is depicted in Fig. 2. It clearly shows that this function is neither convex nor concave ...
Context 3
... that for a fixed parameter value (in this case, ε = −2.5 and ε = 1.5), the bi-parametric LO problem P(b, c, ε, λ) reduces to a uni-parametric LO problem with perturbation in only the OFD. This result is in agreement with Lemma 2.11. 4. The continuity of the optimal value function at transition lines is visible from Table 1 as well as from Fig. 2. Moreover, it is seen that the optimal value function is not differentiable on transition ...
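The data of Example 4.1 are not reproduced on this page, so as a minimal stand-in (all problem data below are invented for illustration, not taken from the paper), consider the toy bi-parametric LP min {(1 + λ)x : x ≥ ε, x ≥ 0} with λ > −1. Its optimal value function f(ε, λ) = (1 + λ)·max(ε, 0) is likewise continuous and piecewise bilinear, yet neither convex nor concave, which can be checked by exhibiting a midpoint-convexity violation in each direction:

```python
# Toy bi-parametric LP (invented for illustration; not the paper's Example 4.1):
#   min (1 + lam) * x   s.t.  x >= eps,  x >= 0,   with lam > -1.
# Its optimal value function is f(eps, lam) = (1 + lam) * max(eps, 0).

def f(eps: float, lam: float) -> float:
    """Optimal value of the toy bi-parametric LP (closed form)."""
    assert lam > -1, "for lam <= -1 the LP is unbounded below"
    return (1.0 + lam) * max(eps, 0.0)

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

# Along the segment from (0.2, -0.2) to (0.8, -0.8), f restricts to t*(1 - t),
# so the midpoint value exceeds the chord value: f is NOT convex.
p, q = (0.2, -0.2), (0.8, -0.8)
m = midpoint(p, q)
not_convex = f(*m) > (f(*p) + f(*q)) / 2

# Along the segment from (1, 1) to (3, 3), f restricts to t*(1 + t),
# so the midpoint value lies below the chord value: f is NOT concave.
p, q = (1.0, 1.0), (3.0, 3.0)
m = midpoint(p, q)
not_concave = f(*m) < (f(*p) + f(*q)) / 2

print(not_convex, not_concave)  # → True True
```

The same kind of grid or line-segment check applies to any bi-parametric optimal value function; continuity across the transition line ε = 0 is visible in the closed form above.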

## Similar publications

Article
Full-text available
Uncertainties exist in both physics-based and data-driven models. Variance-based sensitivity analysis characterizes how the variance of a model output is propagated from the model inputs. The Sobol index is one of the most widely used sensitivity indices for models with independent inputs. For models with dependent inputs, different approaches have...

## Citations

... Although the general question has so far been elusive, many results have been obtained in answer to the question for some special cases. For example, a linearity set is known to be convex for polyhedra [30], and a nonlinearity set is open for spectrahedra [66]; a polyhedron has only linearity subsets, and a spectrahedron usually has a nonlinearity subset (see Corollary 4.18 and the corresponding remarks). These two types of invariancy sets were first distinguished by Mohammad-Nezhad and Terlaky [48] in the context of SDPs, although the first type has been studied extensively in LPs (Refs. ...
... 10. [66, Corollary 5.4] If V is a linearity set of Θ_P, then V is convex, and for all v ∈ cl(V) and u ∈ Ψ(V), one has v ∈ Φ(u). For LPs, Ghaffari-Hadigheh et al. [30] presented a similar result for the bi-parametric optimal partition. ...
Preprint
Full-text available
In this paper we investigate the optimal partition approach for multiparametric conic linear optimization (mpCLO) problems in which the objective function depends linearly on vectors. We first establish further useful properties of the set-valued mappings we introduced earlier (arXiv:2006.08104) for mpCLOs, including continuity, monotonicity and the semialgebraic property. These properties characterize the notions of so-called linearity and nonlinearity subsets of a feasible set, which serve as stability regions of the partition of a conic (linear inequality) representable set. We then use arguments from algebraic geometry to show that a semialgebraic conic representable set can be decomposed into a union of finitely many linearity and/or nonlinearity subsets. As an application, we investigate the boundary structure of the feasible set of generic semialgebraic mpCLOs and obtain several nice structural results in this direction, especially for the spectrahedron.
... The case when these two parameters vary independently has been investigated in [5]. The authors proved that the invariancy regions are rectangles (in special cases, lines and points), forming a mesh-like convex area that might be unbounded. ...
... Optimal partition invariancy analysis of an LO problem with different parameters in the right-hand side and the objective function was studied in [5]. It was shown that each invariancy region is a Cartesian product of the two intervals obtained from the corresponding uni-parametric problems. ...
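The Cartesian-product (rectangle) structure described in [5] can be sanity-checked on a grid for a toy problem (the data below are invented for illustration): min (1 + λ)x subject to x − s = ε, x ≥ 0, s ≥ 0, with λ > −1. Here x* = max(ε, 0) and s* = max(−ε, 0), so the optimal partition depends only on the sign of ε, and every invariancy region on a grid is the product of its ε-projection and its λ-projection:

```python
# Toy problem (invented for illustration):
#   min (1 + lam) * x   s.t.  x - s = eps,  x >= 0,  s >= 0,  lam > -1.
# Optimal primal: x* = max(eps, 0), s* = max(-eps, 0); the optimal partition
# (supports of a strictly complementary pair) depends only on sign(eps).

def partition(eps: float, lam: float):
    """Return the optimal partition (B, N) over the variables {'x', 's'}."""
    assert lam > -1
    if eps > 0:
        return (frozenset({'x'}), frozenset({'s'}))
    if eps < 0:
        return (frozenset({'s'}), frozenset({'x'}))
    return (frozenset(), frozenset({'x', 's'}))  # transition line eps = 0

eps_grid = [-1.0, -0.5, 0.5, 1.0]
lam_grid = [-0.5, 0.0, 1.0, 2.0]

# Group grid points by partition and check that each group is the Cartesian
# product of its eps-projection and lam-projection (the rectangle structure).
regions = {}
for e in eps_grid:
    for l in lam_grid:
        regions.setdefault(partition(e, l), set()).add((e, l))

for pts in regions.values():
    eps_proj = {e for e, _ in pts}
    lam_proj = {l for _, l in pts}
    assert pts == {(e, l) for e in eps_proj for l in lam_proj}

print(len(regions))  # → 2  (one region per sign of eps on this grid)
```

In this toy instance the rectangles are vertical strips separated by the transition line ε = 0, a degenerate case of the general mesh of rectangles.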
... Proof. The proof is similar to the proof of Theorem 2.6 of [5]. One only needs to replace A with A+ε A in the argument. ...
Article
In a linear optimization problem, the objective function, coefficient matrix, and right-hand side might be perturbed independently with distinct parameters. For such a problem, we are interested in finding the region that contains the origin and on which the optimal partition remains invariant. A computational methodology is presented here for detecting the boundary of this region. The cases where perturbation occurs only in the coefficient matrix and right-hand-side vector, or only in the objective function, are treated as special cases. The findings are illustrated with some simple examples.
... which, by (19) and the compactness of S(δ, ε̄), implies that α(δ), β(δ) ∈ S_π(ε̄). Furthermore, the inequality (15) indicates that the length of [α(δ), β(δ)] can be controlled by a suitable choice of δ > 0. Then the result follows from the finiteness of the number of connected components of S_π(ε̄), see Theorem 1, and the fact that ε̄ ∈ [α(δ), β(δ)] for every δ ≥ 0: ...
... Extension. While a one-dimensional perturbation setting is of practical interest, our notions of invariancy and nonlinearity intervals can be modified for SOCO problems with other kinds of perturbation, e.g., simultaneous perturbation of the objective and right-hand-side vectors, or higher-dimensional perturbations; see, e.g., [15]. Under these conditions, a nontrivial extension of Algorithm 1 is needed for the computation of a nonlinearity region. ...
Article
In this paper, using an optimal partition approach, we study the parametric analysis of a second-order conic optimization problem, where the objective function is perturbed along a fixed direction. We characterize the notions of so-called invariancy set and nonlinearity interval, which serve as stability regions of the optimal partition. We then propose, under the strict complementarity condition, an iterative procedure to compute a nonlinearity interval of the optimal partition. Furthermore, under primal and dual nondegeneracy conditions, we show that a boundary point of a nonlinearity interval can be numerically identified from a nonlinear reformulation of the parametric second-order conic optimization problem. Our theoretical results are supported by numerical experiments.
... Furthermore, Theorem 3.5 shows that the projection polyhedron Ω_P(d, M) can be decomposed into a union of the invariant regions and the transition faces of Ω_P(d, M). In particular, if r = 1, then a transition face degenerates to a transition point; see Ghaffari et al. [8]. At a transition point, the optimal objective value of problem (9) is nonsmooth because the corresponding slack vector is a vertex of Ω_P. ...
Preprint
Full-text available
Whether there is a polynomial-time simplex algorithm is well known to be among the most challenging open problems in optimization and discrete geometry. Under a regularization assumption, this paper gives an affirmative answer to this open question by means of the parametric analysis technique. We show that there is a pivot rule requiring at most n steps, where n denotes the number of variables of a linear program.
... Adler and Monteiro [1] were the first to investigate the sensitivity analysis of LP problems using the optimal partition approach, in which one identifies the range of parameters where the optimal partition remains invariant. Other treatments of parametric analysis for LP problems based on the optimal partition approach were given by Jansen et al. [25], Greenberg [19], Roos et al. [39], Ghaffari-Hadigheh et al. [15], Berkelaar et al. [6], Dehghan et al. [12], Hladík [24], among others. The actual invariancy region has been studied extensively both in the setting of SDP, see, e.g., Goldfarb and Scheinberg [16], Mohammad-Nezhad and Terlaky [28], and more generally in conic linear optimization, see Yildirim [45]. ...
... Such a treatment helps us to present a geometric framework that unifies and extends some of the properties of parametric LP and SDP problems to the case of conic linear optimization. Our main goal is to develop the optimal partition approach given in [1] for parametric LP and in [15] for parametric SDP and to present theoretical results for the sensitivity of the optimal partition. ...
... The concept of the optimal partition (or the invariancy interval) has been defined for parametric LP in [1] and for parametric SDP in [15]. We define a new notion of the invariancy region. ...
Preprint
Full-text available
This paper focuses on the parametric analysis of a conic linear optimization problem with respect to perturbation of the objective function along many fixed directions. We introduce the concept of primal and dual conic linear inequality representable sets, which is very helpful for converting the correlation among parametric conic linear optimization problems into a set-valued mapping relation. We discuss the relationships between these two sets and present the invariant-region decomposition of a conic linear inequality representable set. We study the behaviour of the optimal partition and investigate its sensitivity for conic linear optimization problems. All results are corroborated by examples having correlation among parameters.
... One formulation of bi-parametric optimization is to have one of the parameters in the right-hand side of the constraints and the second one in the objective function data. This view of bi-parametric optimization has been considered extensively in LO and QO [21,22,27]. ...
... The simplest form of an invariancy region is an interval obtained from sensitivity analysis of a linear program with a single perturbation in either the objective function or the constraints. Further parametric analysis can be found in [24], which studies the linear program with independent disturbances in the objective and the constraints, showing that the invariancy regions for this problem are mesh-like areas separated by vertical and horizontal lines in a two-dimensional region; and in [25], which studies the bi-parametric convex quadratic program, where the optimal partition divides the indices into three sets, shows that the invariancy regions are convex, and provides an algorithm, with illustrative results, to detect the boundary of an invariancy region and to transition to the adjacent regions. ...
Article
Full-text available
A linear program with linear complementarity constraints (LPCC) is among the simplest mathematical programs with complementarity constraints. Yet the global solution of the LPCC remains difficult to find and/or verify. In this work we study a specific type of LPCC which we term a bi-parametric LPCC. Reformulating the bi-parametric LPCC as a non-convex quadratically constrained program, we develop a domain-partitioning algorithm that solves a series of linear subproblems and/or convex quadratically constrained subprograms obtained by relaxing the complementarity constraint. The choice of an artificial constants-pair allows us to control the domain on which the partitioning is done. Numerical results of robustly solving 105 randomly generated bi-parametric LPCC instances of different structures, associated with different numbers of complementarity constraints, are presented.
... The last sensitivity invariancy considered is the optimal partition invariancy [4,5,11,12,13,16,21]. Let P* be the optimal solution set of (2) and D* the optimal solution set of its dual. ...
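In the LP setting, the optimal partition splits the index set by the supports of a strictly complementary primal-dual pair: B = {i : x*_i > 0} and N = {i : s*_i > 0}, with B ∩ N = ∅ and B ∪ N covering all indices (Goldman-Tucker). A minimal sketch on a hand-made LP (the data below are invented for illustration):

```python
# Tiny LP (invented for illustration):
#   min  x1 + x2   s.t.  x1 + x2 + x3 = 1,  x >= 0.
# A strictly complementary optimal pair is x* = (0, 0, 1) with dual y* = 0,
# giving dual slacks s* = c - A^T y* = (1, 1, 0).

c = [1.0, 1.0, 0.0]
A = [[1.0, 1.0, 1.0]]          # one equality constraint
x_star = [0.0, 0.0, 1.0]       # optimal primal solution
y_star = [0.0]                 # optimal dual solution

# Dual slacks s = c - A^T y.
s_star = [c[j] - sum(A[i][j] * y_star[i] for i in range(len(A)))
          for j in range(len(c))]

B = {j for j in range(len(c)) if x_star[j] > 0}   # support of x*
N = {j for j in range(len(c)) if s_star[j] > 0}   # support of s*

# Strict complementarity: B and N partition the whole index set.
assert B & N == set()
assert B | N == set(range(len(c)))
print(sorted(B), sorted(N))  # → [2] [0, 1]
```

The optimal partition invariancy question then asks for which perturbations of the data the pair (B, N) computed this way stays unchanged.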
Article
Full-text available
In many practical linear programming problems, it is often important to know how different optimality criteria (optimal solution, optimal basis, optimal partition, etc.) change under input data perturbations. Our aim is to compute tolerances (intervals) for the objective function and the right-hand-side coefficients such that these coefficients can independently and simultaneously vary inside their tolerances while preserving the corresponding optimality criterion. We put tolerance analysis in a unified framework that is convenient for algorithmic processing and that is applicable not only to linear programming but also to other linear systems. We survey the known results (pioneered by R.E. Wendell) and propose an improvement that is optimal in some sense (the resulting tolerances are maximal and they take proportionality into account). We apply our approach to several optimality invariancies: optimal basis, support set and optimal partition invariancy. Thus, the approach is useful not only for simplex method solvers but for interior point methods, too. We also discuss time complexity and show that determining the maximal tolerances is NP-hard.
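The objective-coefficient tolerance idea can be sketched on a toy instance (all data invented here, not taken from the cited work): for min cᵀx over the simplex {x ≥ 0, Σx = 1}, the optimal vertex is e_k with k = argmin c, and it remains optimal while c_k stays below the runner-up coefficient, so the one-sided tolerance of c_k is the gap to the second-smallest entry:

```python
# Toy tolerance computation (data invented for illustration):
#   min c^T x  over the simplex { x >= 0, sum(x) = 1 }.
# The optimal vertex is e_k with k = argmin(c); it stays optimal while c[k]
# remains below the runner-up coefficient, so the (one-sided) tolerance of
# c[k] is the gap to the second-smallest entry of c.

def objective_tolerance(c):
    """Return (k, delta): the optimal index k and the largest delta such
    that increasing c[k] by any t < delta keeps vertex e_k optimal."""
    k = min(range(len(c)), key=lambda j: c[j])
    runner_up = min(c[j] for j in range(len(c)) if j != k)
    return k, runner_up - c[k]

c = [3.0, 1.0, 4.0, 1.5]
k, delta = objective_tolerance(c)
print(k, delta)  # → 1 0.5
```

Wendell-style tolerance analysis generalizes this single-coefficient gap to simultaneous, proportional variation of several coefficients, which is what makes computing maximal tolerances hard in general.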