Yo Ishizuka’s research while affiliated with Sophia University and other places


Publications (17)


Two-Level Mathematical Programming Problem
  • Chapter

January 1997 · 10 Reads · 2 Citations

Kiyotaka Shimizu · Yo Ishizuka

In the latter half of this book we are concerned with theories and solution methods for various two-level optimization problems, which are generally called two-level mathematical programs. The purpose of this chapter is to present a unified treatment of various two-level optimization problems in the context of general two-level nonlinear programming. In so doing, we show how several variants fit the general model. As such, one can achieve a unified framework for each individual problem discussed in the following chapters.
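For orientation, the general two-level nonlinear program described above is commonly stated as follows (generic notation, ours rather than the book's):

```latex
\begin{aligned}
\min_{x}\quad & F(x, y) \\
\text{s.t.}\quad & G(x, y) \le 0, \\
 & y \in \operatorname*{arg\,min}_{y'} \{\, f(x, y') \;:\; g(x, y') \le 0 \,\},
\end{aligned}
```

where F and f are the upper- and lower-level objective functions and G and g the corresponding constraint maps.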


Mathematical Preliminaries

January 1997 · 3 Reads

This subsection is devoted to introducing the notation and definitions used throughout the book. Constants, variables, and functions are boldfaced to show that they are vectors or vector-valued. Vectors are column oriented unless otherwise specified. For two vectors \( x = (x_1, \ldots, x_n)^T \) and \( y = (y_1, \ldots, y_n)^T \), the inner product \( \sum_{i=1}^{n} x_i y_i \) of x and y is denoted by \( x^T y \), where the superscript T stands for transposition. When x is a row vector, we write the inner product as x · y.
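As a quick numerical illustration of the inner-product notation (a hypothetical snippet, not from the book):

```python
# Inner product x^T y = sum_i x_i * y_i of two column vectors,
# illustrated with plain Python lists.
def inner_product(x, y):
    if len(x) != len(y):
        raise ValueError("vectors must have the same dimension")
    return sum(xi * yi for xi, yi in zip(x, y))

x = [1.0, 2.0, 3.0]
y = [4.0, -1.0, 2.0]
print(inner_product(x, y))  # 1*4 + 2*(-1) + 3*2 = 8.0
```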


Min-Max Type Multi-Objective Programming Problem

January 1997 · 3 Reads

Let f_j, j = 1, …, N, be the criterion functions that depend on the decision maker’s variable x ∈ R^n and on the opponent’s (or disturbance) variables y_j ∈ R^{m_j}, j = 1, …, N. In the absence of information about the opponent’s strategies, a conservative decision maker would assume that he will experience the worst possible outcome at the hands of his opponents. In such a case, the objective functions to be minimized by the decision maker are defined as follows:
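The definition referred to takes the standard worst-case form; one common statement (our notation, not necessarily the book's exact display) is

```latex
F_j(x) \;=\; \max_{y_j \in Y_j} f_j(x, y_j), \qquad j = 1, \ldots, N,
```

where \( Y_j \subset R^{m_j} \) is the set of admissible opponent strategies.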


Satisfaction Optimization Problem

January 1997 · 14 Reads

This chapter deals with an optimization problem involving unknown parameters (uncertainty). We consider a decision problem whose objective function is minimized under the condition that a certain performance function should always be less than or equal to a prescribed permissible level (for every value of the unknown parameters). In the case that the set in which the unknown parameters must lie contains an infinite number of elements, we say that the corresponding optimization problem has an infinite number of inequality constraints and call it an infinitely constrained optimization problem.
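In symbols, a problem of this kind is a semi-infinite program; a generic statement (our notation, not necessarily the book's) is

```latex
\min_{x}\; f(x) \qquad \text{s.t.}\quad g(x, \omega) \le \varepsilon \quad \text{for all } \omega \in \Omega,
```

where Ω is the (possibly infinite) set of unknown parameter values and ε the prescribed permissible level.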



Optimal-Value Functions

January 1997 · 5 Reads · 2 Citations

In the following nine chapters we study optimization problems whose formulations contain minimization and maximization operations in their description — optimization problems with a two-level structure. In many instances, these problems include optimal-value functions that are not necessarily differentiable and hence difficult to work with. In this chapter we highlight a number of important properties of optimal-value functions that derive from results by Clarke [C9], Gauvin-Dubeau [G4], Fiacco [F3], and Hogan [H12] on differentiable stability analysis for nonlinear programs. These results provide the basis for several computational techniques discussed presently.
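A one-line toy example (ours, not the book's) shows why such functions are difficult: minimizing the smooth bilinear function f(x, y) = xy over y ∈ [−1, 1] yields the optimal-value function v(x) = −|x|, which is not differentiable at x = 0. A sketch:

```python
# The optimal-value function of a smooth parametric program can be
# kinked: v(x) = min_{y in [-1, 1]} x * y = -|x| has a corner at x = 0.
def v(x, grid_size=2001):
    # crude grid minimization over y in [-1, 1]; the grid contains the
    # endpoints, where the minimum of the linear-in-y objective occurs
    ys = [-1.0 + 2.0 * k / (grid_size - 1) for k in range(grid_size)]
    return min(x * y for y in ys)

for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(x, v(x))  # equals -abs(x) at every point
```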


Nondifferentiable Nonlinear Programming

January 1997 · 8 Reads

In this chapter we shall discuss optimality conditions and solutions for nondifferentiable optimization problems — optimization problems with nondifferentiable objective and constraint functions. The Kuhn-Tucker conditions [K6] are well-known optimality conditions for nonlinear programming problems consisting of differentiable functions. Here, we shall derive Kuhn-Tucker-like conditions for two classes of nondifferentiable optimization problems: those consisting of locally Lipschitz functions and those consisting of quasidifferentiable functions.
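For the locally Lipschitz class, such conditions are typically stated in terms of Clarke's generalized gradient ∂; under a suitable constraint qualification, a representative form (a generic statement, ours rather than the book's) is

```latex
0 \in \partial f(x^*) + \sum_{i=1}^{m} \lambda_i\, \partial g_i(x^*),
\qquad \lambda_i \ge 0, \quad \lambda_i\, g_i(x^*) = 0, \quad i = 1, \ldots, m.
```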


Large-Scale Nonlinear Programming: Decomposition Methods

January 1997 · 6 Reads

Primal and dual methods are at the heart of finding solutions to large-scale nonlinear programming problems. Both are algorithms of a two-level type in which the lower-level decision makers work independently to solve the individual subproblems generated by decomposing the original (overall) problem, while the upper-level decision maker solves his coordinating problem using the results of the lower-level optimizations. These algorithms perform the optimization calculations successively, through an iterative exchange of information between the two levels.
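The iteration described above can be sketched on a toy problem (example ours, not from the book): minimize (x1 − 1)² + (x2 − 3)² subject to the coupling constraint x1 + x2 = 2, using dual (price) decomposition.

```python
# Dual (price) decomposition sketch for a toy coupled problem:
#   minimize (x1 - 1)^2 + (x2 - 3)^2   subject to  x1 + x2 = 2.
# Lower level: each subproblem minimizes its own cost plus a price term.
# Upper level: the coordinator adjusts the price by the constraint residual.

def solve_subproblem(target, price):
    # argmin_x (x - target)^2 + price * x  =>  x = target - price / 2
    return target - price / 2.0

price = 0.0
for _ in range(200):
    x1 = solve_subproblem(1.0, price)      # first lower-level problem
    x2 = solve_subproblem(3.0, price)      # second lower-level problem
    residual = x1 + x2 - 2.0               # coupling-constraint violation
    price += 0.5 * residual                # coordinator's price update

print(round(x1, 4), round(x2, 4), round(price, 4))
```

The iterates converge to the constrained optimum x1 = 0, x2 = 2 with equilibrium price 2.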


The Stackelberg Problem: Linear and Convex Case

January 1997 · 2 Reads

The vast majority of research on two-level programming has centered on the linear Stackelberg game, alternatively known as the linear bilevel programming problem (BLPP). In this chapter we present several of the most successful algorithms that have been developed for this case, and when possible, compare their performance. We begin with some basic notation and a discussion of the theoretical character of the problem.
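For reference, one common statement of the linear BLPP (generic notation, ours rather than the book's) is

```latex
\begin{aligned}
\min_{x \ge 0}\quad & c_1^{T} x + d_1^{T} y \\
\text{where } y \text{ solves}\quad & \min_{y \ge 0}\; d_2^{T} y \\
 & \text{s.t. }\; A x + B y \le b,
\end{aligned}
```

with the upper level controlling x and the lower level responding with an optimal y.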


Min-Max Problem

January 1997 · 13 Reads · 1 Citation

The min-max problem is a model for decision making under uncertainty. The aim is to minimize the function f(x, y), but the decision maker only has control of the vector x ∈ R^n. After he selects a value for x, an “opponent” chooses a value for y ∈ R^m, which alternatively can be viewed as a vector of disturbances. When the decision maker is risk averse and has no information about how y will be chosen, it is natural for him to assume the worst. In other words, the second decision maker is completely antagonistic and will try to maximize f(x, y) once x is fixed. The corresponding solution is called the min-max solution and is one of several conservative approaches to decision making under uncertainty. When stochastic information is available for y, other approaches might be more appropriate (e.g., see [S4, E3]).
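On small discretized problems the min-max solution can be found by brute force; a toy illustration (ours, not from the book), where the worst case of f(x, y) = (x − y)² over y ∈ [−1, 1] is (|x| + 1)², minimized at x = 0:

```python
# Min-max solution on a grid: the decision maker picks x to minimize
# the worst case over y of f(x, y) = (x - y)^2, with y in [-1, 1].
def f(x, y):
    return (x - y) ** 2

xs = [i / 100.0 for i in range(-200, 201)]   # grid for x in [-2, 2]
ys = [j / 100.0 for j in range(-100, 101)]   # grid for y in [-1, 1]

x_star = min(xs, key=lambda x: max(f(x, y) for y in ys))
worst = max(f(x_star, y) for y in ys)
print(x_star, worst)  # min-max solution x* = 0.0, worst-case value 1.0
```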


Citations (2)


... By using this formulation, the network is at its best possible state from the collective view of the users. P2 can be incorporated into the constraint set of P1 (16). Let X* be the optimal solution vector for P2. ...

Reference:

Methodology for Determining Vulnerable Links in a Transportation Network
Two-Level Mathematical Programming Problem
  • Citing Chapter
  • January 1997