Alanov Aibek -
Variance Reduction Methods in Stochastic Computational Graphs
Applied Mathematics and Computer Science
Recent advances in deep variational inference have led researchers to scale training to large models. A breakthrough in gradient estimation for stochastic neural networks with continuous latent variables made it possible to train such models on large datasets. The attention of researchers is now focused on the problem of estimating gradients in models with discrete latent variables. This paper considers state-of-the-art variance reduction methods for stochastic neural networks with discrete latent variables and provides a comprehensive comparison of these methods based on their performance across a range of real-world datasets.
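The core difficulty the abstract refers to is that discrete latent variables rule out the reparameterization trick, so gradients are typically estimated with the score-function (REINFORCE) estimator, whose variance the surveyed methods aim to reduce. Below is a minimal, hedged sketch (not taken from the paper) of REINFORCE for a toy Bernoulli latent variable, illustrating how subtracting a baseline can shrink the estimator's variance; the objective `f` and the use of a simple mean baseline are illustrative assumptions, whereas practical methods use learned or control-variate baselines.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy objective: estimate d/d(theta) of E_{z ~ Bernoulli(p)}[f(z)],
# with p = sigmoid(theta). Since z is discrete, the pathwise
# (reparameterization) gradient is unavailable, so we use the
# score-function (REINFORCE) estimator. f is an arbitrary example.
def f(z):
    return (z - 0.4) ** 2

def grad_samples(theta, n_samples=10_000, use_baseline=False):
    """Per-sample REINFORCE gradient estimates of d/d(theta) E[f(z)]."""
    p = 1.0 / (1.0 + np.exp(-theta))            # sigmoid(theta)
    z = rng.binomial(1, p, size=n_samples).astype(float)
    score = z - p                               # d log Bern(z; p) / d theta
    fz = f(z)
    # A simple mean baseline; in practice a learned or running-average
    # baseline (or a control variate) is used. Subtracting a constant
    # leaves the estimator unbiased because E[score] = 0.
    baseline = fz.mean() if use_baseline else 0.0
    return (fz - baseline) * score

g_plain = grad_samples(0.3, use_baseline=False)
g_base = grad_samples(0.3, use_baseline=True)
print("mean without baseline:", g_plain.mean())
print("mean with baseline:   ", g_base.mean())
print("variance without baseline:", g_plain.var())
print("variance with baseline:   ", g_base.var())
```

Both variants estimate the same gradient, but the baseline version typically exhibits a much smaller per-sample variance, which is the effect the compared methods refine in more sophisticated ways.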