Article

A class of decision problems under partial uncertainty


Abstract

The decision problem is studied in an environment of partial uncertainty, in the sense that the a priori distribution over the state space Θ (a real interval), assumed to be absolutely continuous, is not completely known: the available information consists of the probabilities of some subintervals of Θ, or bounds on them, together with some constraints on the moments and certain generalizations of these within this context. In addition to the corresponding characterizations, resolution algorithms are given, and these algorithms are also analyzed.
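
As a rough illustration of this setting (a sketch in our own notation, not a formulation taken from the paper), the available partial information defines a set Γ of admissible priors, and one natural criterion is to guard against the worst prior in Γ:
\[
\Gamma = \Bigl\{ \pi \text{ absolutely continuous on } \Theta : \ \alpha_i \le \pi(I_i) \le \beta_i,\ i = 1, \dots, k;\quad \int_\Theta h_r(\theta)\,\pi(d\theta) \le m_r,\ r = 1, \dots, s \Bigr\},
\]
\[
\inf_{d \in \delta} \ \sup_{\pi \in \Gamma} \int_\Theta \rho(\theta, d)\,\pi(d\theta),
\]
where the I_i are the subintervals with known or bounded probabilities, the constraints involving the h_r encode the moment-type information, δ is the set of available decisions and ρ the loss.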


Article
The decision problem (Θ,δ,ρ) is studied when Θ is a finite interval of ℝ and the decision maker has information about the probabilities of a partition of Θ into subintervals, the monotonicity of the f.d.d. on those subintervals, and some constraints on the moments of the distribution, together with certain generalizations of these within this context. In addition to the corresponding characterizations, resolution algorithms are found.
Article
This note shows that a framework I developed earlier can be used to give a simplified proof of conditions given by Eaves and Zangwill (which weaken the uniform concavity requirement on my earlier objective function) under which inactive constraints may be dropped after each subproblem in cutting-plane algorithms. Here the convergence rate I established previously as an extension of the results of Levitin and Polyak is improved and its application extended.
Article
The main purpose of this paper is to discuss numerical optimization procedures, based on duality theory, for stochastic extremal problems in which the distribution function is only partially known. We formulate such problems as minimax problems in which the 'inner' problem involves optimization with respect to probability measures. The latter problem is solved using generalized linear programming techniques. Then we state the dual problem to the initial stochastic optimization problem. Numerical procedures that avoid the difficulties associated with solving the 'inner' problem are proposed.
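
Schematically (our notation, not necessarily that of the paper), such problems take the minimax form below, where the set F collects the distributions compatible with the partially known information, here illustrated by prescribed generalized moments; for fixed x the inner maximization is the optimization over probability measures that is handled by generalized linear programming.
\[
\min_{x \in X} \ \max_{P \in \mathcal{F}} \ \mathbb{E}_P\bigl[f(x,\omega)\bigr],
\qquad
\mathcal{F} = \bigl\{ P : \mathbb{E}_P\bigl[g_j(\omega)\bigr] = y_j,\ j = 1, \dots, n \bigr\}.
\]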
Article
An area of considerable recent research interest has involved the extension and modification of the basic model for two-person zero-sum game theory. One particular type of extension found in the literature involves the introduction of risk and uncertainty into the model by allowing the m × n payoff matrix A = (a_ij) to be a discrete random matrix that can assume a finite set of values. This paper considers both one- and two-person games and investigates the situation in which A is a discrete random matrix that can assume a countably infinite set of values {A_k}, k = 1, 2, …. We assume that the players possess certain partial information about P, the distribution of A, in which case the game problems for players 1 and 2 can be reduced to programming equivalents. We prove minimax theorems for both semi-infinite and infinite games, and give some properties of optimal mixed strategies. The paper also develops some extensions of a theorem due to Carathéodory.
Article
This paper treats a simple recourse problem. It considers the problem of reducing the feasible set to a smaller efficient set when partial information about the random parameters is known. The authors analyze some examples and give applications to stochastic programs with complete information.
Article
This paper gives general conditions for the convergence of a class of cutting-plane algorithms without requiring that the constraint sets for the sub-problems be sequentially nested. Conditions are given under which inactive constraints may be dropped after each subproblem. Procedures for generating cutting-planes include those of Kelley, Cheney and Goldstein, and a generalization of the one used by both Zoutendijk and Veinott. For algorithms with nested constraint sets, these conditions reduce to a special case of those of Zangwill for such problems and include as special cases the algorithms of Kelley, Cheney and Goldstein, and Veinott. Finally, the paper gives an arithmetic convergence rate.
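
As a reading aid, here is a minimal one-dimensional sketch of a Kelley-type cutting-plane loop in which cuts that are inactive at the current subproblem solution are dropped. It only illustrates the idea; the conditions under which such dropping preserves convergence are precisely what the paper establishes, in much greater generality. The function names and the quadratic test problem are our own.

# Minimal 1-D sketch of a Kelley-type cutting-plane method that drops cuts
# inactive at each subproblem solution (illustrative only).
def cutting_plane_min(f, grad, lo, hi, tol=1e-8, max_iter=100):
    """Minimize a convex differentiable f on [lo, hi] via cutting planes."""
    cuts = []                      # each cut: (x_i, f(x_i), grad(x_i))
    x = 0.5 * (lo + hi)            # initial trial point
    best = float("inf")
    for _ in range(max_iter):
        fx, gx = f(x), grad(x)
        best = min(best, fx)
        cuts.append((x, fx, gx))

        def model(z):              # piecewise-linear lower model built from cuts
            return max(fi + gi * (z - xi) for xi, fi, gi in cuts)

        # Candidate minimizers of the model: interval ends and cut intersections.
        candidates = [lo, hi]
        for i in range(len(cuts)):
            for j in range(i + 1, len(cuts)):
                xi, fi, gi = cuts[i]
                xj, fj, gj = cuts[j]
                if gi != gj:
                    z = ((fj - gj * xj) - (fi - gi * xi)) / (gi - gj)
                    if lo <= z <= hi:
                        candidates.append(z)
        x = min(candidates, key=model)
        if best - model(x) <= tol:           # model value is a valid lower bound
            break
        # Drop cuts inactive at the new trial point (the constraint-dropping idea).
        cuts = [c for c in cuts if model(x) - (c[1] + c[2] * (x - c[0])) <= 1e-6]
    return x, best

if __name__ == "__main__":
    xstar, val = cutting_plane_min(lambda z: (z - 1.0) ** 2,
                                   lambda z: 2.0 * (z - 1.0), -5.0, 5.0)
    print(xstar, val)   # approximately 1.0 and 0.0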
Article
Let $g_1, \dots, g_n$ and $h$ be given real-valued Borel measurable functions on a fixed measurable space $T = (T, \mathscr{A})$. We shall be interested in methods for determining the best upper and lower bound on the integral $\mu(h) = \int_T h(t)\,\mu(dt)$, given that $\mu$ is a probability measure on $T$ with known moments $\mu(g_j) = y_j$, $j = 1, \dots, n$. More precisely, denote by $\mathscr{M}^+ = \mathscr{M}^+(T)$ the collection of all probability measures on $T$ such that $\mu(|g_j|) < \infty$ ($j = 1, \dots, n$) and $\mu(|h|) < \infty$. For each $y = (y_1, \dots, y_n) \in \mathbb{R}^n$, consider the bounds $L(y) = L(y \mid h) = \inf \mu(h)$ and $U(y) = U(y \mid h) = \sup \mu(h)$, where $\mu$ is restricted by $\mu \in \mathscr{M}^+(T)$, $\mu(g_1) = y_1, \dots, \mu(g_n) = y_n$. If there is no such measure $\mu$ we put $L(y) = +\infty$, $U(y) = -\infty$. In many applications, $h$ is the characteristic function (indicator function) $h = I_S$ of a given measurable subset $S$ of $T$. In that case we usually write instead $L(y \mid I_S) = L_S(y)$, $U(y \mid I_S) = U_S(y)$. Thus, $L_S(y) \le \mu(S) \le U_S(y)$ are the best possible bounds on the probability mass $\mu(S)$ contained in $S$, given that $\mu \in \mathscr{M}^+$ and that $\mu(g) = y$. Here, $g$ denotes the mapping $g: T \to \mathbb{R}^n$ defined by $g(t) = (g_1(t), \dots, g_n(t))$. By $g_0$ we shall denote the function on $T$ with $g_0(t) = 1$ for all $t \in T$. The following tentative method for finding $L(y \mid h)$ may be said to go back to Markov [8] and Riesz [13], see [7]. Choose an $(n+1)$-tuple $d^\ast = (d_0, d_1, \dots, d_n)$ of real numbers such that $d_0 + d_1 g_1(t) + \dots + d_n g_n(t) \le h(t)$ for all $t \in T$, and define $B(d^\ast) = \{ z \in \mathbb{R}^n : z = g(t) \text{ for some } t \in T \text{ with } \sum_{j=0}^n d_j g_j(t) = h(t) \}$. Then $L(y \mid h) = d_0 + \sum_{j=1}^n d_j y_j$ for each $y \in \operatorname{conv} B(d^\ast)$ ($\operatorname{conv}$ = convex hull). The main purpose of the present paper is to investigate the merits of this method and certain more general methods. It turns out (Theorem 5) that for almost all $y \in \mathbb{R}^n$ there exists at most one admissible $d^\ast$ with $y \in \operatorname{conv} B(d^\ast)$. Moreover, provided $y \in \operatorname{int}(V)$ where $V = \operatorname{conv} g(T)$, there exists at least one such $d^\ast$ if and only if there exists a measure $\mu \in \mathscr{M}^+$ with $\mu(g) = y$ and $\mu(h) = L(y \mid h)$. A sufficient condition for the latter would be that $T$ has a compact topology with respect to which $g$ is continuous and $h$ is lower semi-continuous. More interesting is a related method for finding $L(y \mid h)$, see Theorem 6, which will work for each $y \in \operatorname{int}(V)$ as soon as $g$ is bounded. The situation where $y \notin \operatorname{int}(V)$ is discussed in Section 4. It appears that the assumption $y \in \operatorname{int}(V)$ is a rather natural one. We have chosen to develop the important special case $h = I_S$ in a partly independent manner, see Sections 5, 6, and 7.
In this case, the $(n+1)$-tuple $d^\ast$ must satisfy $d_0 + \sum_{j=1}^n d_j z_j \le 1$ for all $z \in g(T)$, and $d_0 + \sum_{j=1}^n d_j z_j \le 0$ for all $z \in g(S')$. Here, $S'$ denotes the complement of $S$ in $T$. Assuming that $d_1, \dots, d_n$ are not all zero, let us associate to $d^\ast$ the pair of hyperplanes $H$ and $H'$ with equations $\sum_{j=1}^n d_j z_j = 1 - d_0$ and $\sum_{j=1}^n d_j z_j = -d_0$, respectively. This pair is such that $H$, $H'$ are distinct parallel hyperplanes with $g(S')$ and $H$ on opposite sides of $H'$, and $g(T)$ and $H'$ on the same side of $H$; such a pair $H$, $H'$ will be said to be admissible. Observe that $B(d^\ast) = (g(S) \cap H) \cup (g(S') \cap H')$, with $H$, $H'$ as the admissible pair determined by $d^\ast$. The present $(n+1)$-tuple $d^\ast$ is useful, for determining $L_S(y) = L(y \mid I_S)$ for at least some points $y$, only when both $g(S) \cap H \neq \emptyset$ and $g(S') \cap H' \neq \emptyset$. That is, $H'$ should not only support the set $g(S')$ but even "intersect" it; similarly for $H$ and $g(S)$. Fortunately, one can usually replace "intersect" by "touch". More precisely (Corollary 13), if $H$ and $H'$ form an admissible pair as above then $L_S(y) = d_0 + \sum_{j=1}^n d_j y_j$ for each point $y$ such that both $y \in \operatorname{int}(V)$ and $y \in \operatorname{conv}\bigl[\{ H \cap \overline{\operatorname{conv}}\, g(S) \} \cup \{ H' \cap \overline{\operatorname{conv}}\, g(S') \}\bigr]$, a bar denoting closure. Provided $g$ is bounded, the latter generalization will yield the value $L_S(y)$ for all relevant $y$, see Theorem 7. Whether or not $g$ is bounded, we have for almost all $y$ that there can be at most one admissible pair of hyperplanes $H$ and $H'$ yielding $L_S(y)$ in the above manner. A detailed discussion of the method at hand may be found in Section 6. The present method is geometrical in the following sense: (i) one only needs to know the sets $g(S)$ and $g(S')$ in $\mathbb{R}^n$; (ii) afterwards, one considers all the pairs $H$ and $H'$ of parallel hyperplanes touching $g(S)$ and $g(S')$ in the above manner. Each such pair yields $L_S(y)$ for certain values $y$; varying the pair $H$, $H'$ one often obtains the value $L_S(y)$ for all relevant $y \in \mathbb{R}^n$. Usually, there are many different regions in $y$-space, each with its own analytic formula for $L_S(y)$. Nevertheless, all these different formulae are derived from one and the same geometrical principle. A number of specific illustrations, all with $n = 2$, are presented in Section 7. They indicate that it is often quite easy to solve the following problem in a geometric manner. Let $X$ be a random variable taking its values in a measurable space $T$, such that $E(g_1(X)) = y_1$ and $E(g_2(X)) = y_2$, with $g_1$ and $g_2$ known real-valued Borel measurable functions on $T$. The problem is to determine the best possible lower bound $L_S(y)$ on $\Pr(X \in S)$, where $S$ is a given Borel measurable subset of $T$.
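
A purely numerical counterpart to this geometric approach (a sketch, not the method of the paper): when $T$ is discretized to a fine grid, the sharp bounds $L_S(y)$ and $U_S(y)$ become the optimal values of ordinary linear programs over the grid weights. The grid, the moment functions $g_1(t) = t$, $g_2(t) = t^2$, the moment values and the set $S$ below are illustrative assumptions.

# Approximate L_S(y) and U_S(y) by a linear program over measures supported
# on a fine grid of T (illustrative grid, moments and set S).
import numpy as np
from scipy.optimize import linprog

T = np.linspace(0.0, 1.0, 2001)            # discretized support of T = [0, 1]
g = np.vstack([T, T**2])                   # g_1(t) = t, g_2(t) = t^2
y = np.array([0.5, 0.3])                   # prescribed moments mu(g_1), mu(g_2)
indicator_S = (T >= 0.5).astype(float)     # S = [0.5, 1], h = I_S

# Equality constraints: total mass 1 and the two moment conditions.
A_eq = np.vstack([np.ones_like(T), g])
b_eq = np.concatenate([[1.0], y])

def bound(sense):
    """sense=+1 gives L_S(y) (minimize mu(S)); sense=-1 gives U_S(y)."""
    res = linprog(sense * indicator_S, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * T.size, method="highs")
    return sense * res.fun

print("L_S(y) ~", bound(+1.0))
print("U_S(y) ~", bound(-1.0))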
FEL'DBAUM, A. A. (1965): Optimal Control Systems, Mathematics in Science and Engineering, vol. 22, Academic Press.
KEMPERMAN, J. H. B. (1968): «The general moment problem. A geometric approach», The Annals of Mathematical Statistics, vol. 39, pp. 93-122.
KMIETOWICZ, Z. W., and PEARMAN, A. D. (1982): Decision Theory and Incomplete Knowledge, Gower Publishing Company Ltd., England.
SALVADOR, M. (1987): «Decisión bajo información de monotonías de densidades y razones de fallo de la distribución a priori sobre los estados», Doctoral thesis, Zaragoza.
TOPKIS, D. (1970): «Cutting-plane methods without nested constraint sets», Operations Research, vol. 18, pp. 404-413.