This paper introduces a class of discrete-time stochastic games which plays the role of a toy model for the well-known problem of stochastic homogenization of Hamilton-Jacobi equations. Conditions are provided under which the n-stage game value converges as n tends to infinity, and connections with homogenization theory are discussed.
We prove homogenization for a class of nonconvex (possibly degenerate) viscous Hamilton-Jacobi equations in stationary ergodic random environments in one space dimension. The results concern Hamiltonians of the form G(p)+V(x,ω), where the nonlinearity G is the minimum of two or more convex functions with the same absolute minimum, and the potential V is a bounded stationary process satisfying an additional scaled hill and valley condition. This condition is trivially satisfied in the inviscid case, while it is equivalent to the original hill and valley condition of A. Yilmaz and O. Zeitouni [32] in the uniformly elliptic case. Our approach is based on PDE methods and does not rely on representation formulas for solutions. Using only comparison with suitably constructed super- and sub-solutions, we obtain tight upper and lower bounds for solutions with linear initial data x↦θx. Another important ingredient is a general result of P. Cardaliaguet and P. E. Souganidis [13] which guarantees the existence of sublinear correctors for all θ outside “flat parts” of effective Hamiltonians associated with the convex functions from which G is built. We derive crucial derivative estimates for these correctors which allow us to use them as correctors for G.
We consider Hamilton–Jacobi equations in one space dimension with Hamiltonians of the form G(p)+V(x,ω), where V is a stationary and ergodic potential of unit amplitude. The homogenization of such equations is established in a 2016 paper of Armstrong, Tran and Yu for all continuous and coercive G. Under the extra condition that G is a double-well function (i.e., it has precisely two local minima), we give a new and fully constructive proof of homogenization which yields a formula for the effective Hamiltonian. We use this formula to provide a complete list of the heights at which the graph of the effective Hamiltonian has a flat piece. We illustrate our results by analyzing basic classes of examples, highlight some corollaries that clarify the dependence of the effective Hamiltonian on G and on the law of V, and discuss a generalization to even-symmetric triple-well Hamiltonians.
In this paper, we prove the random homogenization of general coercive non-convex Hamilton–Jacobi equations in the one dimensional case. This extends the result of Armstrong, Tran and Yu when the Hamiltonian has a separable form H(p,x,ω)=H(p)+V(x,ω) for any coercive H(p).
We study random homogenization of second-order, degenerate and quasilinear
Hamilton-Jacobi equations which are positively homogeneous in the gradient.
Included are the equations of forced mean curvature motion and others
describing geometric motions of level sets as well as a large class of viscous,
non-convex Hamilton-Jacobi equations. The main results include the first proof
of qualitative stochastic homogenization for such equations. We also present
quantitative error estimates which give an algebraic rate of homogenization.
We prove explicit estimates for the error in random homogenization of
degenerate, second-order Hamilton-Jacobi equations, assuming the coefficients
satisfy a finite range of dependence. In particular, we obtain an algebraic
rate of convergence with overwhelming probability under certain structural
conditions on the Hamiltonian.
Fix a bounded domain Omega in R^d, a continuous function F on the boundary of Omega, and constants epsilon>0, p>1, and q>1 with p^{-1} + q^{-1} = 1. For each x in Omega, let u^epsilon(x) be the value for player I of the following two-player, zero-sum game. The initial game position is x. At each stage, a fair coin is tossed and the player who wins the toss chooses a vector v of length at most epsilon to add to the game position, after which a random "noise vector" with mean zero and variance (q/p)|v|^2 in each orthogonal direction is also added. The game ends when the game position reaches some y on the boundary of Omega, and player I's payoff is F(y). We show that (for sufficiently regular Omega) as epsilon tends to zero the functions u^epsilon converge uniformly to the unique p-harmonic extension of F. Using a modified game (in which epsilon gets smaller as the game position approaches the boundary), we prove similar statements for general bounded domains Omega and resolutive functions F. These games and their variants interpolate between the tug-of-war games studied by Peres, Schramm, Sheffield, and Wilson (p=infinity) and the motion-by-curvature games introduced by Spencer and studied by Kohn and Serfaty (p=1). They generalize the relationship between Brownian motion and the ordinary Laplacian and yield new results about p-capacity and p-harmonic measure.
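As a concrete illustration of the game dynamics just described, here is a minimal Monte Carlo sketch in Python. It assumes dimension two, the unit disk as the domain, and simple placeholder strategies and boundary data of my own choosing; none of these specific choices come from the paper.

```python
import numpy as np

# Illustrative sketch (not from the paper): one play-out of the epsilon-step
# tug-of-war game with noise described above, in dimension d = 2 on the unit
# disk. The strategies, boundary data, and function names are placeholders.

def noise(v, p, q, rng):
    """Mean-zero noise with variance (q/p)*|v|^2 in the direction orthogonal to v."""
    speed = np.linalg.norm(v)
    if speed == 0.0:
        return np.zeros_like(v)
    e = np.array([-v[1], v[0]]) / speed          # unit vector orthogonal to v (d = 2)
    return rng.normal(0.0, np.sqrt(q / p) * speed) * e

def play_once(x0, eps, p, q, strategy_I, strategy_II, F, rng, max_steps=10**6):
    """Run one random play-out and return player I's payoff F(exit position)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_steps):
        if np.linalg.norm(x) >= 1.0:             # stop on (or just past) the boundary of the disk
            return F(x)
        mover = strategy_I if rng.random() < 0.5 else strategy_II   # fair coin toss
        v = mover(x, eps)                        # chosen vector with |v| <= eps
        x = x + v + noise(v, p, q, rng)
    raise RuntimeError("no exit within max_steps")

# Toy strategies: player I always pulls right, player II always pulls left.
rng = np.random.default_rng(0)
F = lambda y: y[0]                               # boundary payoff F(y) = y_1
pull = lambda x, eps: np.array([eps, 0.0])
push = lambda x, eps: np.array([-eps, 0.0])
payoffs = [play_once([0.0, 0.0], eps=0.05, p=3.0, q=1.5, strategy_I=pull,
                     strategy_II=push, F=F, rng=rng) for _ in range(200)]
print(np.mean(payoffs))                          # empirical payoff under these toy strategies
```

Averaging over optimal (rather than these fixed) strategies is what produces the value u^epsilon discussed in the abstract; the sketch only shows the step-by-step dynamics.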
This paper considers the problem of homogenization of a class of convex Hamilton-Jacobi equations in spatio-temporal stationary ergodic environments. Special attention is placed on the interplay between the use of the Subadditive Ergodic Theorem and continuity estimates for the solutions that are independent of the oscillations in the equation. Moreover, an inf-sup formula for the effective Hamiltonian is provided.
We study the homogenization of "viscous" Hamilton-Jacobi equations in stationary ergodic media. The "viscosity" and the spatial oscillations are assumed to be of the same order. We identify the asymptotic (effective) equation, which is a first-order deterministic Hamilton-Jacobi equation. We also provide examples which show that the associated macroscopic problem does not admit suitable solutions (correctors). Finally, we present as applications results about large deviations of diffusion processes and front propagation (asymptotics of reaction-diffusion equations) in random environments.
Homogenization asks whether average behavior can be discerned from partial differential equations that are subject to high-frequency
fluctuations when those fluctuations result from a dependence on two widely separated spatial scales. We prove homogenization
for certain stochastic Hamilton-Jacobi partial differential equations; the idea is to use the subadditive ergodic theorem
to establish the existence of an average in the infinite scale-separation limit. In some cases, we also establish a central
limit theorem.
We estimate the variance of the value function for a random optimal control problem. The value function is the solution of a Hamilton-Jacobi equation with a random Hamiltonian. It is known that the solutions homogenize in the appropriate scaling limit, but little is known about their statistical fluctuations. Our main result is a sublinear bound on the variance of the solution. The proof relies on a modified Poincaré inequality of Talagrand.
We present exponential error estimates and demonstrate an algebraic
convergence rate for the homogenization of level-set convex Hamilton-Jacobi
equations in i.i.d. random environments, the first quantitative homogenization
results for these equations in the stochastic setting. By taking advantage of a
connection between the metric approach to homogenization and the theory of
first-passage percolation, we obtain estimates on the fluctuations of the
solutions to the approximate cell problem in the ballistic regime (away from
the flat spot of the effective Hamiltonian). In the sub-ballistic regime (on the
flat spot), we show that the fluctuations are governed by an entirely different
mechanism and the homogenization may proceed, without further assumptions, at
an arbitrarily slow rate. We identify a necessary and sufficient condition on
the law of the Hamiltonian for an algebraic rate of convergence to hold in the
sub-ballistic regime and show, under this hypothesis, that the two rates may be
merged to yield comprehensive error estimates and an algebraic rate of
convergence for homogenization.
Our methods are novel and quite different from the techniques employed in the
periodic setting, although we benefit from previous works in both first-passage
percolation and homogenization. The link between the rate of homogenization and
the flat spot of the effective Hamiltonian, which is related to the
nonexistence of correctors, is a purely random phenomenon observed here for the
first time.
We give a proof of Lipschitz continuity of p-harmonious functions, which are tug-of-war game analogues of ordinary p-harmonic functions. This result is used to obtain a new proof of Harnack's inequality for p-harmonic functions that avoids classical techniques like Moser iteration and instead relies on suitable choices of strategies for the stochastic tug-of-war game.
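For background, p-harmonious functions are usually defined through a mean-value dynamic programming principle of the following form; the constants below are the ones commonly used in this literature for 2 ≤ p < ∞ in \mathbb{R}^n, and are stated here as an assumption rather than quoted from the abstract.

\[
u_\varepsilon(x) \;=\; \frac{\alpha}{2}\Big(\sup_{\overline{B}_\varepsilon(x)} u_\varepsilon \;+\; \inf_{\overline{B}_\varepsilon(x)} u_\varepsilon\Big) \;+\; \beta \fint_{B_\varepsilon(x)} u_\varepsilon(y)\,dy,
\qquad \alpha = \frac{p-2}{p+n}, \quad \beta = \frac{n+2}{p+n}.
\]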
We prove that every bounded Lipschitz function F on a subset Y of a length space X admits a tautest extension to X, i.e., a unique Lipschitz extension u: X → \mathbb{R} for which \operatorname{Lip}_U u = \operatorname{Lip}_{\partial U} u for all open U ⊂ X ∖ Y. This was previously known only for bounded domains in \mathbb{R}^n, in which case u is infinity harmonic; that is, a viscosity solution to \Delta_\infty u = 0, where \Delta_\infty denotes the infinity Laplacian. We also prove the first general uniqueness results for \Delta_\infty u = g on bounded subsets of \mathbb{R}^n (when g is uniformly continuous and bounded away from 0) and analogous results for bounded length spaces. The proofs rely on a new game-theoretic description of u. Let u^\varepsilon(x) be the value of the following two-player zero-sum game, called tug-of-war: fix x_0 = x ∈ X ∖ Y. At the k-th turn, the players toss a coin and the winner chooses an x_k with d(x_k, x_{k-1}) < \varepsilon. The game ends when x_k ∈ Y, and player I's payoff is F(x_k) − \frac{\varepsilon^2}{2}\sum_{i=0}^{k-1} g(x_i). We show that \|u^\varepsilon − u\|_\infty → 0. Even for bounded domains in \mathbb{R}^n, the game-theoretic description of infinity harmonic functions yields new intuition and estimates; for instance, we prove power law bounds for infinity harmonic functions in the unit disk with boundary values supported in a δ-neighborhood of a Cantor set on the unit circle.
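For orientation, the value u^\varepsilon described above satisfies (at least formally) the dynamic programming relation below; this is the standard heuristic form, written here as background rather than quoted from the paper.

\[
u^\varepsilon(x) \;=\; \frac{1}{2}\Big(\sup_{d(y,x)\le \varepsilon} u^\varepsilon(y) \;+\; \inf_{d(y,x)\le \varepsilon} u^\varepsilon(y)\Big) \;-\; \frac{\varepsilon^2}{2}\, g(x)
\quad \text{for } x \in X \setminus Y, \qquad u^\varepsilon = F \ \text{on } Y.
\]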
Zero-sum stochastic games, henceforth stochastic games, are a classical model in game theory in which two opponents interact and the environment changes in response to the players’ behavior. The central solution concepts for these games are the discounted values and the value, which represent what playing the game is worth to the players for different levels of impatience. In the present manuscript, we provide algorithms for computing exact expressions for the discounted values and for the value, which are polynomial in the number of pure stationary strategies of the players. This result considerably improves all the existing algorithms.
In the classic prophet inequality, a well-known problem in optimal stopping theory, samples from independent random variables (possibly differently distributed) arrive online. A gambler who knows the distributions, but cannot see the future, must decide at each point in time whether to stop and pick the current sample or to continue and lose that sample forever. The goal of the gambler is to maximize the expected value of what she picks, and the performance measure is the worst case ratio between the expected value the gambler gets and what a prophet that sees all the realizations in advance gets. In the late seventies, Krengel and Sucheston (Bull Am Math Soc 83(4):745–747, 1977) established that this worst case ratio is 0.5. A particularly interesting variant is the so-called prophet secretary problem, in which the only difference is that the samples arrive in a uniformly random order. For this variant several algorithms are known to achieve a constant of 1 − 1/e, and very recently this barrier was slightly improved by Azar et al. (in: Proceedings of the ACM conference on economics and computation, EC, 2018). In this paper we introduce a new type of multi-threshold strategy, called blind strategy. Such a strategy sets a nonincreasing sequence of thresholds that depends only on the distribution of the maximum of the random variables, and the gambler stops the first time a sample surpasses the threshold of the stage. Our main result shows that these strategies can achieve a constant of 0.669 for the prophet secretary problem, improving upon the best known result of Azar et al. (in: Proceedings of the ACM conference on economics and computation, EC, 2018), and even that of Beyhaghi et al. (Improved approximations for posted price and second price mechanisms. CoRR arXiv:1807.03435, 2018) that works in the case in which the gambler can select the order of the samples. The crux of the result is a very precise analysis of the underlying stopping time distribution for the gambler's strategy that is inspired by the theory of Schur-convex functions. We further prove that our family of blind strategies cannot lead to a constant better than 0.675. Finally, we prove an upper bound on the constant achievable by any algorithm for the gambler, which also improves upon a recent result of Azar et al. (in: Proceedings of the ACM conference on economics and computation, EC, 2018). This implies that the upper bound on what the gambler can get in the prophet secretary problem is strictly lower than what she can get in the i.i.d. case. This constitutes the first separation between the prophet secretary problem and the i.i.d. prophet inequality.
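To make the notion of a blind strategy concrete, the Python sketch below implements a multi-threshold stopping rule whose thresholds depend only on the distribution of the maximum. The particular threshold curve used here (quantiles of the maximum at decreasing levels) and all function names are placeholder choices of mine, not the curve analyzed in the paper.

```python
import numpy as np

# Illustrative sketch of a "blind" multi-threshold stopping rule: a nonincreasing
# sequence of thresholds that depends only on the distribution of max_i X_i.

def blind_thresholds(samplers, levels, n_mc=20000, rng=None):
    """Thresholds = Monte Carlo quantiles of max_i X_i at the given (decreasing) levels."""
    rng = rng if rng is not None else np.random.default_rng(0)
    maxima = np.max([s(n_mc, rng) for s in samplers], axis=0)
    return [np.quantile(maxima, lv) for lv in levels]

def run_gambler(samplers, thresholds, rng):
    """Samples arrive in uniformly random order; stop at the first one above the current threshold."""
    order = rng.permutation(len(samplers))
    for stage, i in enumerate(order):
        x = samplers[i](1, rng)[0]
        if x >= thresholds[stage]:
            return x
    return 0.0                                   # the gambler picks nothing

# Toy instance: three exponential random variables with different means.
rng = np.random.default_rng(1)
samplers = [lambda m, r, s=s: r.exponential(s, size=m) for s in (1.0, 2.0, 0.5)]
thresholds = blind_thresholds(samplers, levels=[0.8, 0.5, 0.2], rng=rng)
gains = [run_gambler(samplers, thresholds, rng) for _ in range(5000)]
prophet = np.mean([np.max([s(1, rng)[0] for s in samplers]) for _ in range(5000)])
print(np.mean(gains) / prophet)                  # empirical ratio against the prophet
```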
We continue the study of the homogenization of coercive non-convex Hamilton-Jacobi equations in random media, identifying two general classes of Hamiltonians with very distinct behavior. For the first class there is no homogenization in a particular environment, while for the second homogenization takes place in environments with finite range dependence. Motivated by the recent counter-example of Ziliotto, who constructed a coercive but non-convex Hamilton-Jacobi equation with a stationary ergodic random potential field for which homogenization does not hold, we show that the same happens for coercive Hamiltonians which have a strict saddle-point, a very local property. We also identify, based on the recent work of Armstrong and Cardaliaguet on the homogenization of positively homogeneous random Hamiltonians in environments with finite range dependence, a new general class of Hamiltonians, namely equations with uniformly strictly star-shaped sub-level sets, which homogenize.
We survey old and new results concerning stochastic games with signals and finitely many states, actions, and signals. We state Mertens’ conjectures regarding the existence of the asymptotic value and its characterization, and present Ziliotto’s (Ann Probab, 2013, to appear) counterexample to these conjectures.
We provide an example of a Hamilton-Jacobi equation in which stochastic
homogenization does not occur. The Hamiltonian involved in this example
satisfies the standard assumptions of the literature, except that it is not
convex.
We prove a Tauberian theorem for nonexpansive operators, and apply it to the
model of zero-sum stochastic game. Under mild assumptions, we prove that the
value of the lambda-discounted game v_{lambda} converges uniformly when lambda
goes to 0 if and only if the value of the n-stage game v_n converges uniformly
when n goes to infinity. This generalizes the Tauberian theorem of Lehrer and
Sorin (1992) to the two-player zero-sum case. We also provide the first example
of a stochastic game with public signals on the state and perfect observation
of actions, with finite state space, signal sets and action sets, in which for
some initial state k_1 known by both players, (v_{lambda}(k_1)) and (v_n(k_1))
converge to distinct limits.
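For reference, the two evaluations compared here are the standard ones: writing g_t for the payoff at stage t, the n-stage value v_n is the value of the game with payoff given by the Cesàro average, and the lambda-discounted value v_{lambda} uses the Abel average,

\[
\frac{1}{n}\sum_{t=1}^{n} g_t
\qquad\text{and}\qquad
\lambda \sum_{t\ge 1} (1-\lambda)^{t-1} g_t .
\]

This normalization is the usual one in the stochastic games literature and is assumed here rather than quoted from the paper.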
Homogenization-type results for the Cauchy problem for first-order PDE (Hamilton-Jacobi equations) are presented. The main assumption is that the Hamiltonian is superlinear and convex with respect to the gradient and stationary and ergodic with respect to the spatial variable. Some applications to related problems as well as to the asymptotics of reaction-diffusion equations and turbulent combustion are also presented.
We present a proof of qualitative stochastic homogenization for a nonconvex
Hamilton-Jacobi equation. The new idea is to introduce a family of
"sub-equations" and to control solutions of the original equation by the
maximal subsolutions of the latter, which have deterministic limits by the
subadditive ergodic theorem and maximality.
We present stochastic homogenization results for viscous Hamilton-Jacobi
equations using a new argument which is based only on the subadditive structure
of maximal subsolutions (solutions of the "metric problem"). This permits us to
give qualitative homogenization results under very general hypotheses: in
particular, we treat non-uniformly coercive Hamiltonians which satisfy instead
a weaker averaging condition. As an application, we derive a general quenched
large deviations principle for diffusions in random environments and with
absorbing random potentials.
In this note we revisit the homogenization theory of Hamilton-Jacobi and “viscous” Hamilton-Jacobi partial differential equations with convex nonlinearities in stationary ergodic environments. We present a new simple proof for the homogenization in probability. The argument uses some a priori bounds (uniform modulus of continuity) on the solution and the convexity and coercivity (growth) of the nonlinearity. It does not rely, however, on the control interpretation formula of the solution, as was the case with all previously known proofs. We also introduce a new formula for the effective Hamiltonian for Hamilton-Jacobi and “viscous” Hamilton-Jacobi equations.
The paper investigates the long time average of the solutions of Hamilton–Jacobi equations with a noncoercive, nonconvex Hamiltonian in the torus \mathbb{R}^{2}/ \mathbb{Z}^{2} . We give nonresonance conditions under which the long-time average converges to a constant. In the resonant case, we show that the limit still exists, although it is nonconstant in general. We compute the limit at points where it is not locally constant.
We provide an example of a two-player zero-sum repeated game with public
signals and perfect observation of the actions, where neither the value of the
lambda-discounted game nor the value of the n-stage game converges, when
respectively lambda goes to 0 and n goes to infinity. It is a counterexample to
two long-standing conjectures, formulated by Mertens: first, in any zero-sum
repeated game, the asymptotic value exists, and secondly, when Player 1 is more
informed than Player 2, Player 1 is able to guarantee the limit value of the
n-stage game in the long run. The aforementioned example involves seven states,
two actions and two signals for each player. Remarkably, players observe the
payoffs, and play in turn (at each step the action of one player only has an
effect on the payoff and the transition). Moreover, it can be adapted to fit in
the class of standard stochastic games where the state is not observed.
Recent work by the authors and others has demonstrated the connections between the dynamic programming approach for two-person, zero-sum differential games and the new notion of viscosity solutions of Hamilton-Jacobi PDE (partial differential equations). The basic idea is that the dynamic programming optimality conditions imply that the values of a two-person, zero-sum differential game are viscosity solutions of appropriate PDE. This paper proves the above when the values of the differential games are defined following Elliott-Kalton. This results in a great simplification in the statements and proofs, as the definitions are explicit and do not entail any kind of approximations. Moreover, as an application of the above results, the paper contains a representation formula for the solution of a fully nonlinear first-order PDE. This is then used to prove results about the level sets of solutions of Hamilton-Jacobi equations with homogeneous Hamiltonians. These results are also related to the theory of Huygens' principle and geometric optics.
We give an example of a zero-sum stochastic game with four states, compact
action sets for each player, and continuous payoff and transition functions,
such that the discounted value does not converge as the discount factor tends
to 0, and the value of the n-stage game does not converge as n goes to
infinity.
We present a simple new proof for the stochastic homogenization of
quasiconvex (level-set convex) Hamilton-Jacobi equations set in stationary
ergodic environments. Our approach, which is new even in the convex case,
yields more information about the qualitative behavior of the effective
nonlinearity.
Shapley's discounted stochastic games, Everett's recursive games and Gillette's undiscounted stochastic games are classical models of game theory describing two-player zero-sum games of potentially infinite duration. We describe algorithms for exactly solving these games. When the number of positions of the game is constant, our algorithms run in polynomial time.
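The exact-solution algorithms of the paper are not reproduced here, but the quantities they compute can be approximated by Shapley's classical value iteration in the discounted case. The Python sketch below (function names and the toy game are my own) iterates the Shapley operator, solving the auxiliary matrix game at each state by linear programming, with the normalization in which the stage payoff carries weight lambda and the continuation carries weight 1 − lambda.

```python
import numpy as np
from scipy.optimize import linprog

# Background sketch (not the exact algorithms of the paper): Shapley value
# iteration for a finite, lambda-discounted zero-sum stochastic game.
# payoff[s] is an (m x k) matrix of stage payoffs at state s;
# trans[s][a][b] is a probability vector over next states.

def matrix_game_value(A):
    """Value of the one-shot zero-sum game with payoff matrix A (row player maximizes)."""
    m, k = A.shape
    # Variables: x (row player's mixed strategy, length m) and v (the game value).
    c = np.zeros(m + 1); c[-1] = -1.0                      # maximize v
    A_ub = np.hstack([-A.T, np.ones((k, 1))])              # v - (A^T x)_j <= 0 for every column j
    b_ub = np.zeros(k)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # sum_i x_i = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.x[-1]

def shapley_iteration(payoff, trans, lam, n_iter=500):
    """Approximate the lambda-discounted values v_lambda(s) by iterating the Shapley operator."""
    S = len(payoff)
    v = np.zeros(S)
    for _ in range(n_iter):
        new_v = np.empty(S)
        for s in range(S):
            A = np.array([[lam * payoff[s][a][b] + (1 - lam) * trans[s][a][b] @ v
                           for b in range(len(payoff[s][a]))]
                          for a in range(len(payoff[s]))])
            new_v[s] = matrix_game_value(A)
        v = new_v
    return v

# Tiny example: two states, two actions for each player, deterministic transitions.
payoff = [np.array([[1.0, 0.0], [0.0, 1.0]]), np.array([[0.0, 2.0], [2.0, 0.0]])]
stay, move = np.array([1.0, 0.0]), np.array([0.0, 1.0])
trans = [[[stay, move], [move, stay]], [[move, stay], [stay, move]]]
print(shapley_iteration(payoff, trans, lam=0.1))
```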
We consider the homogenization of Hamilton-Jacobi equations and degenerate
Bellman equations in stationary, ergodic, unbounded environments. We prove
that, as the microscopic scale tends to zero, the equation averages to a
deterministic Hamilton-Jacobi equation and study some properties of the
effective Hamiltonian. We discover a connection between the effective
Hamiltonian and an eikonal-type equation in exterior domains. In particular, we
obtain a new formula for the effective Hamiltonian. To prove the results we
introduce a new strategy to obtain almost sure homogenization, completing a
program proposed by Lions and Souganidis that previously yielded homogenization
in probability. The class of problems we study is strongly motivated by
Sznitman's study of the quenched large deviations of Brownian motion
interacting with a Poissonian potential, but applies to a general class of
problems which are not amenable to probabilistic tools.
The main purpose of this paper is to approximate several non-local evolution equations by zero-sum repeated games in the spirit of the previous works of Kohn and the second author (2006 and 2009): general fully non-linear parabolic integro-differential equations on the one hand, and the integral curvature flow of an interface (Imbert, 2008) on the other hand. In order to do so, we start by constructing such a game for eikonal equations whose speed has a non-constant sign. This provides a (discrete) deterministic control interpretation of these evolution equations. In all our games, two players choose positions successively, and their final payoff is determined by their positions and additional parameters of choice. Because of the non-locality of the problems approximated, by contrast with local problems, their choices have to "collect" information far from their current position. For integral curvature flows, players choose hypersurfaces in the whole space and positions on these hypersurfaces. For parabolic integro-differential equations, players choose smooth functions on the whole space.
We present a modified version of the two-player "tug-of-war" game introduced
by Peres, Schramm, Sheffield, and Wilson. This new tug-of-war game is identical
to the original except near the boundary of the domain, but
its associated value functions are more regular. The dynamic programming
principle implies that the value functions satisfy a certain finite difference
equation. By studying this difference equation directly and adapting techniques
from viscosity solution theory, we prove a number of new results. We show that
the finite difference equation has unique maximal and minimal solutions, which
are identified as the value functions for the two tug-of-war players. We
demonstrate uniqueness, and hence the existence of a value for the game, in the
case that the running payoff function is nonnegative. We also show that
uniqueness holds in certain cases for sign-changing running payoff functions
which are sufficiently small. In the limit as the step size tends to zero, we obtain the
convergence of the value functions to a viscosity solution of the normalized
infinity Laplace equation. We also obtain several new results for the
normalized infinity Laplace equation -\Delta_\infty u = f. In particular, we
demonstrate the existence of solutions to the Dirichlet problem for any bounded
continuous f, and continuous boundary data, as well as the uniqueness of
solutions to this problem in the generic case. We present a new elementary
proof of uniqueness in the case that f > 0, f < 0, or f ≡ 0. The
stability of the solutions with respect to f is also studied, and an explicit
continuous dependence estimate is obtained.
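To make the finite difference point of view concrete, here is a minimal Python sketch that iterates an eps-step min/max averaging operator with a running payoff term on a grid; the constants, grid setup, and boundary handling are simplifications of my own and do not reproduce the paper's exact scheme.

```python
import numpy as np

# Illustrative sketch (a simplification of my own, not the exact scheme of the
# paper): fixed-point iteration of an eps-step min/max averaging operator with a
# running payoff term f on a square grid, a discrete analogue of the boundary
# value problem for the normalized infinity Laplacian discussed above.

def solve_grid(F_boundary, f, n=40, eps_cells=3, n_iter=400):
    """Grid on [0,1]^2; F_boundary supplies the boundary data, f the running payoff."""
    xs = np.linspace(0.0, 1.0, n)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    u = F_boundary(X, Y)                  # boundary datum, extended inside as the initial guess
    fgrid = f(X, Y)
    eps = eps_cells * (xs[1] - xs[0])
    for _ in range(n_iter):
        new_u = u.copy()
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                w = u[max(i - eps_cells, 0):i + eps_cells + 1,
                      max(j - eps_cells, 0):j + eps_cells + 1]
                new_u[i, j] = 0.5 * (w.max() + w.min()) + eps ** 2 * fgrid[i, j]
        u = new_u                         # boundary rows and columns are never overwritten
    return u

u = solve_grid(F_boundary=lambda x, y: x - y, f=lambda x, y: 0.0 * x)
print(u[20, 20])                          # value near the center of the square
```

With the linear boundary datum x − y and f = 0 the iteration should return values close to x − y itself, which gives a quick sanity check on the scheme.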