
Bayesian Methods for Machine Learning

2018/2019
Academic year
ENG
Taught in English
2
Credits
Status:
Elective course
When taught:
1st year, 3rd module

Course Syllabus

Abstract

Bayesian methods are used in many fields, from game development to drug discovery. They give superpowers to many machine learning algorithms: handling missing data and extracting much more information from small datasets. Bayesian methods also allow us to estimate uncertainty in predictions, a highly desirable feature in fields like medicine.
Learning Objectives

  • Gain hands-on experience with the basics of Bayesian methods: how to define a probabilistic model, how to make predictions from it, how to fully automate this workflow and speed it up with advanced techniques, and how to apply Bayesian methods to deep learning, including generating new images.
Expected Learning Outcomes

  • Applying variational autoencoders and Categorical Reparameterization with Gumbel-Softmax (a short sampling sketch follows this list)
  • Applying Bayesian Optimization and Gaussian Processes
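To make the first outcome concrete, here is a minimal sketch of Gumbel-Softmax sampling in NumPy; the function name, the logits, and the temperature value are illustrative choices, not taken from the course materials.

import numpy as np

def gumbel_softmax_sample(logits, temperature, rng):
    """Categorical reparameterization: adding Gumbel noise to the logits
    and applying a temperature-controlled softmax gives a differentiable
    approximation to drawing a one-hot categorical sample."""
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + gumbel) / temperature
    e = np.exp(y - y.max())          # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
print(gumbel_softmax_sample(np.log(np.array([0.2, 0.3, 0.5])), temperature=0.1, rng=rng))
# At a low temperature the output is nearly one-hot; raising it smooths the sample.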
Course Contents

  • Introduction to Bayesian methods & Conjugate priors
    Welcome to the first week of our course! Today we will discuss what Bayesian methods are and what probabilistic models are. We will see how they can be used to model real-life situations and how to draw conclusions from them. We will also learn about conjugate priors, a class of models where all the math becomes really simple (a small worked example follows this list).
  • Expectation-Maximization algorithm
    This week we will talk about the central topic in probabilistic modeling: latent variable models and how to train them, namely with the Expectation-Maximization (EM) algorithm. We will see models for clustering and dimensionality reduction where the EM algorithm can be applied as is. Weeks 3, 4, and 5 then cover numerous extensions of this algorithm that make it work for more complicated models and scale to large datasets. A minimal EM sketch appears after this list.
  • Variational Inference & Latent Dirichlet Allocation
    This week we will move on to approximate inference methods. We will see why we care about approximating distributions, and we will meet variational inference, one of the most powerful methods for this task. We will also study the mean-field approximation in detail and apply it to a text-mining algorithm called Latent Dirichlet Allocation. A toy variational-inference sketch appears after this list.
  • Markov chain Monte Carlo
    This week we will learn how to approximate training and inference with sampling and how to sample from complicated distributions. This will give us a simple method to deal with LDA and with Bayesian neural networks: neural networks whose weights are random variables themselves, so that instead of training (finding the best value for the weights) we sample from the posterior distribution over the weights. A minimal sampler sketch appears after this list.
  • Variational Autoencoder
    Welcome to the fifth week of the course! This week we will combine many ideas from the previous weeks and add some new ones to build the Variational Autoencoder (VAE): a model that can learn a distribution over structured data (like photographs or molecules) and then sample new data points from the learned distribution, hallucinating new photographs of non-existing people. We will also apply the same techniques to Bayesian neural networks and see how this can greatly compress the weights of the network without reducing accuracy. A sketch of the reparameterization trick appears after this list.
  • Gaussian processes & Bayesian optimization
    Welcome to the final week of our course! This time we will look at nonparametric Bayesian methods. Specifically, we will learn about Gaussian processes and their application to Bayesian optimization, which lets one optimize functions whose every evaluation is very expensive: oil probing, drug discovery, and neural network architecture tuning. A small optimization loop is sketched after this list.
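As promised in the week 1 description, here is a minimal sketch of a conjugate-prior computation: a Beta prior over a coin's bias is updated with Bernoulli observations in closed form. The helper name and the toy data are illustrative choices, not course materials.

def beta_bernoulli_update(alpha, beta, observations):
    """Return the posterior Beta parameters after observing 0/1 data.
    Conjugacy means the posterior stays in the Beta family: just add
    the head count to alpha and the tail count to beta."""
    heads = sum(observations)
    tails = len(observations) - heads
    return alpha + heads, beta + tails

# Prior Beta(1, 1) is uniform on [0, 1]; three heads and one tail move the
# posterior mean from 1/2 to 4/(4 + 2) = 2/3.
alpha_post, beta_post = beta_bernoulli_update(1, 1, [1, 1, 1, 0])
print(alpha_post, beta_post)  # 4 2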
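For week 2, a compact Expectation-Maximization sketch for a two-component, one-dimensional Gaussian mixture, assuming NumPy and SciPy are available; the initialization and iteration count are arbitrary illustrative choices.

import numpy as np
from scipy.stats import norm

def em_gmm_1d(x, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture: the E-step computes
    soft assignments (posteriors over the latent component), the M-step
    re-estimates weights, means, and variances from them."""
    mu = np.array([x.min(), x.max()])            # crude initialization
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] proportional to pi_k * N(x_i | mu_k, sigma_k)
        r = pi * norm.pdf(x[:, None], mu, sigma)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood updates
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])
print(em_gmm_1d(x))  # weights, means, and standard deviations near the true values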
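For week 3, a toy variational-inference example: we fit a Gaussian approximation q(z) = N(m, s^2) to an unnormalized target by maximizing a Monte-Carlo estimate of the ELBO over a small grid, a crude stand-in for the gradient-based optimization used in practice. The target density and the grid are illustrative assumptions.

import numpy as np
from scipy.stats import norm

def elbo(m, s, log_p, n=2000, seed=0):
    """Monte-Carlo ELBO for q(z) = N(m, s^2): E_q[log p(z)] - E_q[log q(z)].
    Maximizing it minimizes KL(q || p) up to the unknown normalizer of p."""
    rng = np.random.default_rng(seed)
    z = m + s * rng.standard_normal(n)
    return np.mean(log_p(z) - norm.logpdf(z, m, s))

# Toy unnormalized target: log p(z) = -(z - 2)^2, i.e. a Gaussian with
# mean 2 and variance 1/2.
log_p = lambda z: -(z - 2.0) ** 2
grid = [(m, s) for m in np.linspace(0, 4, 41) for s in np.linspace(0.1, 2, 20)]
m_best, s_best = max(grid, key=lambda ms: elbo(*ms, log_p))
print(m_best, s_best)  # close to m = 2, s = sqrt(1/2), about 0.71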
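For week 4, a minimal random-walk Metropolis-Hastings sampler. Note that only an unnormalized log-density is required, which is exactly what makes MCMC applicable to posteriors; the step size and the standard-normal target are toy choices.

import numpy as np

def metropolis_hastings(log_p, x0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis: propose x' ~ N(x, step^2) and accept with
    probability min(1, p(x') / p(x)); otherwise keep the current state."""
    rng = np.random.default_rng(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + step * rng.standard_normal()
        if np.log(rng.uniform()) < log_p(proposal) - log_p(x):
            x = proposal
        samples.append(x)
    return np.array(samples)

# Target: unnormalized standard normal, log p(x) = -x^2 / 2.
samples = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0, n_samples=5000)
print(samples[1000:].mean(), samples[1000:].std())  # roughly 0 and 1 after burn-in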
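For week 5, the two pieces of the VAE objective sketched in NumPy: the reparameterization trick and the closed-form KL term for a diagonal Gaussian encoder. In a real model these sit inside a neural network built in a deep-learning framework; the function names and the 4-dimensional latent code are illustrative choices.

import numpy as np

def reparameterize(mu, log_var, rng):
    """Reparameterization trick: write z = mu + sigma * eps with
    eps ~ N(0, I), so sampling becomes differentiable in mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def gaussian_kl(mu, log_var):
    """Closed-form KL(N(mu, diag(exp(log_var))) || N(0, I)), the
    regularizer in the VAE objective (ELBO = reconstruction - KL)."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

rng = np.random.default_rng(0)
mu, log_var = np.zeros(4), np.zeros(4)   # encoder outputs for one data point
print(reparameterize(mu, log_var, rng), gaussian_kl(mu, log_var))  # KL is 0 here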
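Finally, for week 6, a small Bayesian-optimization loop using scikit-learn's Gaussian process regressor with an expected-improvement acquisition function; the toy objective, the candidate grid, and the iteration count are all illustrative assumptions.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(gp, X_cand, y_best):
    """EI acquisition: expected amount by which each candidate improves on
    the best value observed so far, under the GP posterior (minimization)."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    z = (y_best - mu) / np.maximum(sigma, 1e-9)
    return sigma * (z * norm.cdf(z) + norm.pdf(z))

f = lambda x: np.sin(3 * x) + 0.1 * x**2   # stand-in for an expensive function
X = np.array([[-2.0], [0.0], [2.0]])       # initial evaluations
y = f(X).ravel()
for _ in range(5):
    gp = GaussianProcessRegressor(normalize_y=True, alpha=1e-6).fit(X, y)
    X_cand = np.linspace(-3, 3, 200).reshape(-1, 1)
    x_next = X_cand[np.argmax(expected_improvement(gp, X_cand, y.min()))]
    X, y = np.vstack([X, [x_next]]), np.append(y, f(x_next))
print("best x found:", X[np.argmin(y)])    # each loop step costs one evaluation of f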
Assessment Elements

  • non-blocking Assessment obtained on the platform https://www.coursera.org/learn/bayesian-methods-in-machine-lea
    Result of completing the course
  • non-blocking Exam
  • non-blocking Test and assessment materials
Interim Assessment

  • Interim assessment (3 module)
    The grade for the final assessment is given on a 10-point scale. Assessment takes the form of an interview after the student presents their testing results.
Bibliography

Recommended Core Bibliography

  • Колемаев В.А., Калинина В.Н. Теория вероятностей и математическая статистика [Probability Theory and Mathematical Statistics]. КноРус, 2013. 376 pp. ISBN 978-5-406-02819-3. Electronic text // EBS BOOK.RU. URL: https://book.ru/book/919349

Recommended Additional Bibliography

  • Géron, A. (2017). Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems (First edition). Sebastopol, CA: O’Reilly Media. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&site=eds-live&db=nlebk&AN=1486117