Dmitry Molchanov
- Research Fellow: Faculty of Computer Science / Big Data and Information Retrieval School / Samsung-HSE Laboratory
- Visiting Scholar: Faculty of Computer Science / Big Data and Information Retrieval School
- Dmitry Molchanov has been at HSE since 2017.
Responsibilities
Conducting research on neuro-Bayesian methods, working on the laboratory's industrial projects, and writing scientific papers
Education
Bachelor's in Applied Mathematics and Information Science
Lomonosov Moscow State University
Courses (2018/2019)
- Research Seminar "Machine Learning and Applications 1" (Bachelor's programme; Faculty of Computer Science; programme "Applied Mathematics and Information Science"; in Russian)
- Research Seminar "Machine Learning and Applications 2" (Bachelor's programme; Faculty of Computer Science; programme "Applied Mathematics and Information Science"; 4th year, modules 1-3; in Russian)
Publications
- Preprint Molchanov D., Kharitonov V., Sobolev A., Vetrov D. Doubly Semi-Implicit Variational Inference / Cornell University. Series arxiv.org "stat.ML". 2018.
- Chapter Ashukha A., Vetrov D., Molchanov D., Neklyudov K. O., Atanov A. Uncertainty Estimation via Stochastic Batch Normalization, in: Workshop of the 6th International Conference on Learning Representations (ICLR), 2018.
- Preprint Kharitonov V., Molchanov D., Vetrov D. Variational Dropout via Empirical Bayes / Cornell University. Series arxiv.org "stat.ML". 2018.
- Chapter Neklyudov K. O., Molchanov D., Ashukha A., Vetrov D. Structured Bayesian Pruning via Log-Normal Multiplicative Noise, in: Advances in Neural Information Processing Systems 30 (NIPS 2017). Long Beach : Curran Associates, 2017. P. 6776-6785.
- Chapter Molchanov D., Ashukha A., Vetrov D. Variational Dropout Sparsifies Deep Neural Networks, in: Proceedings of Machine Learning Research. Proceedings of the International Conference on Machine Learning (ICML 2017) Vol. 70. Sydney, 2017. P. 2498-2507.
Employment history
03/17–12/17 Intern researcher at Yandex Research.
03/17– Intern researcher at the International Laboratory of Deep Learning and Bayesian Methods, NRU HSE.
Faculty members will present their research at the ICLR and AISTATS conferences
One paper will be presented at AISTATS (Japan, April 2019) and three papers will be presented at ICLR (USA, May 2019).
Faculty members presented the results of their research at NeurIPS, the largest international machine learning conference
Researchers of the Faculty of Computer Science presented their papers at the annual conference of Neural Information Processing Systems (NeurIPS), which was held from 2 to 8 December 2018 in Montreal, Canada.
How to Adjust a Smaller Size Neural Network without Quality Loss
Staff members of the HSE Faculty of Computer Science recently presented their papers at the biggest international conference on machine learning, Neural Information Processing Systems (NIPS).
Faculty of Computer Science Staff Attend International Conference on Machine Learning
On August 6-11, the 34th International Conference on Machine Learning was held in Sydney, Australia. This conference is ranked A* by CORE and is one of the two leading conferences in the field of machine learning. It has been held annually since 2000, and this year more than 1,000 participants from different countries took part.
The paper "Variational Dropout Sparsifies Deep Neural Networks" has been accepted to ICML 2017
The paper, authored by the laboratory's research assistants Dmitry Molchanov and Arsenii Ashukha together with its head Dmitry Vetrov, has been accepted to the 2017 International Conference on Machine Learning. In this research, a state-of-the-art result in deep neural network sparsification was achieved by applying the Bayesian framework to deep learning: the model learns an individual dropout rate for each weight, and weights with very high learned dropout rates can be removed without loss of quality.
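For intuition only, here is a minimal sketch (plain NumPy, not the authors' code) of the pruning rule that this kind of variational-dropout training enables: each weight ends up with a learned dropout rate alpha = sigma^2 / theta^2, and weights whose log(alpha) exceeds a threshold (3.0 below is an assumed, commonly used value, not taken from this page) carry almost no information and can be zeroed out.

```python
# Illustrative sketch only: prune weights by their learned dropout rate.
import numpy as np

def sparsify(theta, log_sigma2, log_alpha_threshold=3.0):
    """Zero out weights whose learned log-dropout-rate exceeds the threshold.

    theta      -- array of weight means learned by the network
    log_sigma2 -- array of learned log-variances, same shape as theta
    """
    # log(alpha) = log(sigma^2 / theta^2); small constant avoids log(0)
    log_alpha = log_sigma2 - np.log(theta ** 2 + 1e-8)
    keep_mask = log_alpha < log_alpha_threshold   # high alpha => effectively dropped
    return theta * keep_mask, keep_mask.mean()    # sparsified weights, fraction kept

# Example call with random stand-in "trained" parameters, just to show usage.
rng = np.random.default_rng(0)
theta = rng.normal(size=(256, 256))
log_sigma2 = rng.normal(loc=2.0, size=(256, 256))  # many weights get large alpha
weights, kept = sparsify(theta, log_sigma2)
print(f"fraction of weights kept: {kept:.2%}")
```

In the actual method, theta and log_sigma2 are trained jointly by maximizing a variational lower bound; the sketch only shows the post-training thresholding step that yields the sparse network.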