Let $A$ be a Las Vegas algorithm, i.e., $A$ is a randomized algorithm that always produces the correct answer when it stops but whose running time is a random variable. The authors consider the problem of minimizing the expected time required to obtain an answer from $A$ using strategies that simulate $A$ as follows: run $A$ for a fixed amount of time $t_1$, then run $A$ independently for a fixed amount of time $t_2$, etc. The simulation stops if $A$ completes its execution during any of the runs. Let $S = (t_1, t_2, \ldots)$ be a strategy, and let $\ell_A = \inf_S T(A, S)$, where $T(A, S)$ is the expected value of the running time of the simulation of $A$ under strategy $S$.
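To make the simulation concrete, here is a minimal Python sketch of running $A$ under a strategy $S$. The interface `run_A(budget)` is a hypothetical stand-in, assumed to execute a fresh, independent copy of the algorithm for at most `budget` time steps and to return the (always correct) answer if the algorithm halts within the budget, or `None` otherwise.

```python
def simulate(run_A, strategy):
    """Run A under strategy S = (t_1, t_2, ...): each run is independent
    and is cut off after t_i steps; stop as soon as some run completes."""
    elapsed = 0
    for t in strategy:
        answer = run_A(t)       # independent run of A, capped at t steps
        elapsed += t            # for simplicity, charge the full budget
                                # (T(A, S) counts only the time actually used)
        if answer is not None:  # A halted within this run
            return answer, elapsed
```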
The authors describe a simple universal strategy $S^{\mathrm{univ}}$ with the property that, for any algorithm $A$, $T(A, S^{\mathrm{univ}}) = O(\ell_A \log \ell_A)$. Furthermore, they show that this is the best performance that can be achieved, up to a constant factor, by any universal strategy.
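The review does not reproduce $S^{\mathrm{univ}}$ itself; the universal restart sequence from this line of work is widely known as the Luby sequence $1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8, \ldots$, built by the recursion $S_{k+1} = S_k, S_k, 2^k$ with $S_1 = (1)$. The following generator is a sketch assuming that construction.

```python
def luby_sequence():
    """Yield the restart budgets 1,1,2,1,1,2,4,1,1,2,1,1,2,4,8,...
    via the recursion S_{k+1} = S_k, S_k, 2^k starting from S_1 = (1);
    each S_k is a prefix of S_{k+1}, so terms can be emitted incrementally."""
    seq, i = [1], 0
    while True:
        while i < len(seq):
            yield seq[i]
            i += 1
        seq = seq + seq + [2 * seq[-1]]  # append a copy, then double the peak
```

Paired with the `simulate` sketch above, `simulate(run_A, luby_sequence())` restarts $A$ with these budgets. Roughly speaking, the doubling structure spends comparable total time at every scale $2^j$, hedging against all possible values of $\ell_A$ at once, which is where the $\log \ell_A$ factor arises.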