Dmitry Kropotov
- Senior Lecturer: Faculty of Computer Science / Big Data and Information Retrieval School
- Research Fellow: Faculty of Computer Science / Big Data and Information Retrieval School / Centre of Deep Learning and Bayesian Methods
- Dmitry Kropotov has been at HSE University since 2017.
Courses (2021/2022)
- Optimization in Machine Learning (Bachelor’s programme; Faculty of Computer Science; year 3, modules 3–4; taught in Russian)
Past Courses
Courses (2020/2021)
- Optimization in Machine Learning (Bachelor’s programme; Faculty of Computer Science; year 3, modules 3–4; taught in Russian)
Publications (5)
- Article: Rodomanov A., Kropotov D. A randomized coordinate descent method with volume sampling // SIAM Journal on Optimization. 2020. Vol. 30. No. 3. P. 1878–1904.
- Preprint: Kodryan M., Kropotov D., Vetrov D. MARS: Masked Automatic Ranks Selection in Tensor Decompositions // First Workshop on Quantum Tensor Networks in Machine Learning, 34th Conference on Neural Information Processing Systems (NeurIPS 2020). 2020.
- Chapter: Izmailov P., Novikov A., Kropotov D. Scalable Gaussian Processes with Billions of Inducing Inputs via Tensor Train Decomposition // Proceedings of Machine Learning Research. Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS 2018). 2018. P. 726–735.
- Article: Izmailov P., Kropotov D. Faster variational inducing input Gaussian process classification // Journal of Machine Learning and Data Analysis. 2017. Vol. 3. No. 1. P. 20–35.
- Chapter: Rodomanov A., Kropotov D. A Superlinearly-Convergent Proximal Newton-type Method for the Optimization of Finite Sums // Proceedings of Machine Learning Research. Proceedings of the International Conference on Machine Learning (ICML 2016). Vol. 48. New York, 2016. P. 2597–2605.
Conferences
- 2018
The 21st International Conference on Artificial Intelligence and Statistics (AISTATS 2018) (Playa Blanca, Lanzarote, Canary Islands). Presentation: Scalable Gaussian Processes with Billions of Inducing Inputs via Tensor Train Decomposition
- 2016
The 33rd International Conference on Machine Learning (ICML 2016) (New York). Presentation: A Superlinearly-Convergent Proximal Newton-Type Method for the Optimization of Finite Sums