Probability density function estimation with weighted samples is a major topic of interest in statistics. In particular, it is the main foundation of all adaptive importance sampling (IS) algorithms. Classically, a target distribution is approximated either by a non-parametric model or within a parametric family. However, these models suffer either from the curse of dimensionality or from a lack of flexibility. In this contribution, we propose to use, as the approximating model, a distribution parameterised by a variational autoencoder (VAE). We extend the existing framework of VAEs to the case of weighted samples by introducing a new objective function. The flexibility of this family makes it close to a non-parametric model, and despite the very large number of parameters to estimate, it is much more efficient in high dimension than the classical Gaussian or Gaussian mixture families. Moreover, in order to add flexibility to the model and to be able to learn multimodal distributions, we use a learnable prior distribution for the latent variable. We also introduce a new pre-training procedure for the VAE to find good starting weights for the neural networks and prevent, as far as possible, the posterior collapse phenomenon. Finally, we detail how to use the resulting distribution in an IS context, and we incorporate the proposed procedure into an adaptive IS algorithm to estimate a rare event probability in high dimension on two multimodal problems.
- Poster
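
The abstract does not specify the new objective function; the following is only a minimal sketch of one plausible reading, in which each sample's ELBO contribution is scaled by its normalised importance weight and the latent prior is a learnable Gaussian mixture. The network sizes, the number of mixture components `K`, and the weighting scheme are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): a VAE fitted to weighted samples by
# scaling each sample's ELBO term with its normalised importance weight.
import torch
import torch.nn as nn

class WeightedVAE(nn.Module):
    def __init__(self, x_dim, z_dim=2, hidden=64, K=4):
        super().__init__()
        # Encoder and decoder output means and log-std of diagonal Gaussians.
        self.encoder = nn.Sequential(nn.Linear(x_dim, hidden), nn.Tanh(),
                                     nn.Linear(hidden, 2 * z_dim))
        self.decoder = nn.Sequential(nn.Linear(z_dim, hidden), nn.Tanh(),
                                     nn.Linear(hidden, 2 * x_dim))
        # Learnable latent prior: a diagonal-Gaussian mixture so the model
        # can represent multimodal targets (an assumed choice).
        self.prior_logits = nn.Parameter(torch.zeros(K))
        self.prior_mu = nn.Parameter(torch.randn(K, z_dim))
        self.prior_log_sigma = nn.Parameter(torch.zeros(K, z_dim))

    def prior(self):
        mix = torch.distributions.Categorical(logits=self.prior_logits)
        comp = torch.distributions.Independent(
            torch.distributions.Normal(self.prior_mu, self.prior_log_sigma.exp()), 1)
        return torch.distributions.MixtureSameFamily(mix, comp)

    def elbo(self, x):
        mu, log_sigma = self.encoder(x).chunk(2, dim=-1)
        q = torch.distributions.Independent(
            torch.distributions.Normal(mu, log_sigma.exp()), 1)
        z = q.rsample()
        dec_mu, dec_log_sigma = self.decoder(z).chunk(2, dim=-1)
        p_x_given_z = torch.distributions.Independent(
            torch.distributions.Normal(dec_mu, dec_log_sigma.exp()), 1)
        # Per-sample ELBO: log p(x|z) + log p(z) - log q(z|x)
        return p_x_given_z.log_prob(x) + self.prior().log_prob(z) - q.log_prob(z)

def weighted_elbo_loss(model, x, w):
    """Objective for weighted samples: each sample's ELBO is scaled by its
    normalised importance weight (assumed weighting scheme)."""
    w = w / w.sum()
    return -(w * model.elbo(x)).sum()

# Usage: x are samples from the instrumental density, w their IS weights.
x = torch.randn(256, 10)
w = torch.rand(256)
model = WeightedVAE(x_dim=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = weighted_elbo_loss(model, x, w)
loss.backward()
opt.step()
```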