September 2024
This paper addresses the problem of approximating an unknown probability distribution with density f, which can only be evaluated up to an unknown scaling factor, with the help of a sequential algorithm that produces at each iteration an estimated density. The proposed method optimizes the Kullback-Leibler divergence using a mirror descent (MD) algorithm directly on the space of density functions, while a stochastic approximation technique helps to manage the trade-off between algorithm complexity and variability. One of the key innovations of this work is the theoretical guarantee provided for an algorithm with a fixed MD learning rate. The main result is that the sequence of estimated densities converges almost surely to the target density f uniformly on compact sets. Through numerical experiments, we show that fixing the learning rate significantly improves the algorithm's performance, particularly for multi-modal target distributions, where a small learning rate increases the chance of finding all modes. Additionally, we propose a particle subsampling method to enhance computational efficiency and compare our method against other approaches through numerical experiments.
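The abstract does not spell out the update rule, but a standard form of entropic mirror descent on the KL objective yields the multiplicative update q_{t+1} proportional to q_t^{1-gamma} * f^gamma, in which the unknown scaling factor of f cancels after renormalization. The Python sketch below illustrates that update with a weighted kernel density estimate standing in for the paper's density estimator; the target log_f_unnormalized, the KDE representation, and all parameter values are illustrative assumptions (requiring a recent SciPy), not the paper's actual construction.

```python
import numpy as np
from scipy.stats import gaussian_kde

def log_f_unnormalized(x):
    # Hypothetical stand-in target: unnormalized bimodal density
    # (two Gaussian bumps), known only up to a scaling factor.
    return np.logaddexp(-0.5 * (x + 3.0) ** 2, -0.5 * (x - 3.0) ** 2)

def entropic_mirror_descent(log_f, n_iters=50, n_particles=500,
                            gamma=0.2, seed=0):
    """Sketch of entropic mirror descent on densities: the update
    q_{t+1} proportional to q_t^{1-gamma} * f^gamma, with each q_t
    represented by a weighted KDE (a simple stochastic approximation)."""
    rng = np.random.default_rng(seed)
    # Initialize q_0 as a KDE over particles from a broad Gaussian.
    particles = rng.normal(0.0, 5.0, size=n_particles)
    kde = gaussian_kde(particles)
    for _ in range(n_iters):
        # Sample fresh particles from the current estimate q_t.
        particles = kde.resample(n_particles, seed=rng).ravel()
        # Importance weights (f / q_t)^gamma; the unknown normalizing
        # constant of f cancels when the weights are renormalized.
        log_w = gamma * (log_f(particles) - kde.logpdf(particles))
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # q_{t+1}: weighted KDE over the reweighted particles.
        kde = gaussian_kde(particles, weights=w)
    return kde

q_hat = entropic_mirror_descent(log_f_unnormalized)
```

In this sketch a small fixed gamma makes each update more conservative, so the estimate keeps mass on several modes longer, which is consistent with the abstract's observation that a small fixed learning rate helps find all modes of a multi-modal target.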