
Modern Methods of Data Analysis: Stochastic Calculus

2019/2020
Academic year
ENG
The course is taught in English
6
Credits
Status:
Compulsory course
When taught:
1st year, modules 1 and 2

Instructors

Course Syllabus

Abstract

The aim of this course is to provide an introduction to the modern methods of stochastic calculus. The course consists of two parts. The main emphasis of the first part is on Markov chains: we discuss their properties, study their invariant distributions, and examine convergence to stationary distributions, concluding with the Markov chain Monte Carlo (MCMC) method. The main emphasis of the second part is on stochastic differential equations and their analytic and numerical solutions. We also briefly recall all necessary facts from the basics of random processes, the Wiener process, and martingales.
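As a small taste of the first part, the sketch below implements a random-walk Metropolis-Hastings sampler for a standard Gaussian target. The target, proposal scale, and iteration counts are illustrative choices, not taken from the syllabus.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # Unnormalized log-density of the target: a standard Gaussian N(0, 1).
    return -0.5 * x * x

def metropolis_hastings(n_steps=50_000, prop_scale=1.0, x0=0.0):
    """Random-walk Metropolis-Hastings: propose x' = x + noise and
    accept with probability min(1, pi(x') / pi(x))."""
    x = x0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + prop_scale * rng.standard_normal()
        # Accept/reject step in log space for numerical stability.
        if np.log(rng.random()) < log_target(prop) - log_target(x):
            x = prop
        samples[i] = x
    return samples

s = metropolis_hastings()[5_000:]  # discard burn-in
print(s.mean(), s.std())  # should be close to 0 and 1 for N(0, 1)
```

Only ratios of the target density appear in the acceptance step, which is why MCMC works with unnormalized densities.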
Learning Objectives

  • Students will study how to apply the main modern probabilistic methods in practice and learn important topics from the stochastic calculus.
Expected Learning Outcomes

  • Know the definition of Markov chains and be able to solve theoretical and practical problems
  • Be able to calculate conditional expectations and probabilities and apply their properties (e.g. the tower property or the law of total probability)
  • Be able to apply Markov chain Monte Carlo methods in practice
  • Be acquainted with the main aspects of the measure concentration phenomenon
  • Know the definition of the Wiener process and the properties of its trajectories
  • Know the definition of martingales and their properties
  • Know the definition of the stochastic integral and its properties
  • Be able to solve SDEs numerically; know the main properties of SDEs and their solutions
  • Be able to apply MCMC methods such as ULA or MALA in practice
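A minimal illustration of the first outcome: finding the stationary distribution of a finite Markov chain by power iteration and verifying that it is invariant. The 3-state transition matrix below is a made-up example, not from the course.

```python
import numpy as np

# Hypothetical 3-state transition matrix; each row sums to 1.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.3, 0.5],
])

def stationary_distribution(P, n_iter=200):
    """Approximate the stationary distribution by power iteration:
    repeatedly push an initial distribution through the chain."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])  # start from uniform
    for _ in range(n_iter):
        pi = pi @ P
    return pi

pi = stationary_distribution(P)
# Invariance check: pi P = pi (up to numerical error).
print(pi, np.allclose(pi @ P, pi))
```

For an ergodic chain the convergence is exponential, at a rate governed by the second-largest eigenvalue modulus of P, which is the content of the Perron-Frobenius part of the course.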
Course Contents

  • Markov chains, discrete state space and discrete time
    Definitions and simple properties; the Markov property; ergodicity; stationary distribution; the law of large numbers; the Perron-Frobenius theorem; exponential convergence
  • Markov chains, continuous time and discrete state spaces
    Poisson process, birth-death processes; Markov semigroup, generator
  • Conditional probability and conditional distributions
    Definition; basic properties of conditional expectation and conditional probability
  • Markov chains, General state spaces
    The Markov property, kernels, Kolmogorov's equations; reversibility, small sets, Doeblin's condition
  • Connections with concentration of measure
    Tensorization of variance, the Poincaré inequality, etc.
  • MCMC
    Description and properties of MCMC algorithms: Metropolis-Hastings, the Gibbs sampler, Glauber dynamics; connection with optimization
  • Martingales
    Definition, main properties, main inequalities
  • Wiener process
    Definition, trajectories, the Markov property, construction of the Wiener process
  • Itô's integral
    Simple functions, Itô's isometry, Itô's formula
  • Stochastic differential equations
    Existence, uniqueness, analytical and numerical solutions
  • Unadjusted Langevin algorithm (ULA), Metropolis adjusted Langevin algorithm (MALA)
    Description of algorithms. Convergence to stationary distribution
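The last topic ties the two halves of the course together: ULA is the Euler-Maruyama discretization of the Langevin SDE dX_t = -∇U(X_t) dt + √2 dW_t, whose invariant law is π(x) ∝ exp(-U(x)). A minimal sketch for a standard Gaussian target (step size and iteration counts are illustrative choices, not from the syllabus):

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_U(x):
    # Potential U(x) = x^2 / 2, so the target pi(x) ∝ exp(-U(x))
    # is a standard Gaussian; the gradient is simply x.
    return x

def ula(n_steps=50_000, step=0.01, x0=0.0):
    """Unadjusted Langevin algorithm: Euler-Maruyama discretization of
    dX_t = -grad U(X_t) dt + sqrt(2) dW_t."""
    x = x0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        x = x - step * grad_U(x) + np.sqrt(2 * step) * rng.standard_normal()
        samples[i] = x
    return samples

samples = ula()
burned = samples[10_000:]  # discard burn-in
print(burned.mean(), burned.std())  # close to 0 and 1 for N(0, 1)
```

Because the discretization is never corrected, ULA samples from a biased approximation of π that improves as the step size shrinks; MALA removes this bias by adding a Metropolis accept/reject step on top of the same proposal.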
Assessment Elements

  • non-blocking written exam
  • non-blocking homework
  • non-blocking exam
Interim Assessment

  • Interim assessment (module 2)
    0.3 * homework + 0.3 * written exam + 0.4 * exam
Bibliography

Recommended Core Bibliography

  • Andrieu, C., & de Freitas, N. (2003). An Introduction to MCMC for Machine Learning. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&site=eds-live&db=edsbas&AN=edsbas.C161414B
  • Shiryaev, A. N. (2004). Probability, Book 1: Elementary Probability Theory. Mathematical Foundations. Limit Theorems (in Russian)
  • Bulinsky, A. V., & Shiryaev, A. N. (2003). Theory of Random Processes (in Russian)

Recommended Additional Bibliography

  • Durmus, A., & Moulines, E. (2016). High-dimensional Bayesian inference via the Unadjusted Langevin Algorithm. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&site=eds-live&db=edsbas&AN=edsbas.A78D09BB