Master's programme, 2018/2019

Modern Methods of Decision Making

Type: Compulsory course (Data Science)
Area of studies: Applied Mathematics and Informatics
When: 1st year, modules 3 and 4
Mode of studies: Full time
Instructor: Quentin Paris
Master’s programme: Data Science
Language: English
ECTS credits: 4

Course Syllabus

Abstract

This course presents an introduction to the mathematical foundations of statistical learning theory. The presentation is oriented towards the most important algorithms and methods in the field. Topics studied include: empirical risk minimisation, local averaging, boosting and support vector machines. We will also provide an introduction to online and reinforcement learning methods.
Learning Objectives

  • Know classical learning algorithms and their performance
  • Understand the basic tradeoff between model complexity and computational tractability
Expected Learning Outcomes

  • Understand the foundations of statistical learning theory.
  • Understand and use optimal (Bayes) predictors.
  • Understand learning with finite classes and know how to apply the ERM principle.
  • Understand learning with complex models and the role of the VC dimension.
  • Understand learning through convex optimisation and know how to apply the corresponding algorithms.
  • Understand ERM-related algorithms, such as boosting and SVMs, and know how to use them.
  • Understand and use online learning methods.
Course Contents

  • Introduction to learning theory
    In this chapter, we introduce the basic language of supervised learning theory and describe the notions of learning sample, loss function, risk, optimal/Bayes predictors and excess risk. In particular, we define the PAC (Probably Approximately Correct) learning paradigm. The central definitions are written out compactly after this list.
  • Optimal predictors
    We provide a full description of optimal (or ideal) predictors associated with a given loss function.
  • Learning with finite classes
    We study the ERM (Empirical Risk Minimisation) principle in the simple context of a model composed of a finite number of candidate predictors; a minimal code sketch follows this list.
  • Learning with complex models
    We generalise the results of the previous chapter to models that may contain infinitely many candidate predictors. We introduce the notion of the VC dimension of a model and prove the fundamental theorem of learning theory.
  • Learning through optimisation
    We study classical algorithms from convex optimisation used to compute empirical risk minimisers in practice; see the gradient descent sketch after this list.
  • Algorithmic interlude
    We explore several algorithms related to ERM, such as boosting and SVMs.
  • Online learning
    We provide a first look at the world of online learning, focusing on the framework of prediction with expert advice with full and limited feedback. In particular, we provide a full analysis of the EWA and EXP4.P algorithms; a sketch of EWA follows this list.
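
For reference, the central objects of the first two chapters can be written compactly. The following is the standard textbook formulation (as in Mohri et al., 2012), not a quotation from the course materials:

    R(f) = \mathbb{E}[\ell(Y, f(X))]        % risk of a predictor f under loss \ell
    f^* \in \arg\min_f R(f)                 % optimal (Bayes) predictor
    \mathcal{E}(f) = R(f) - R(f^*)          % excess risk of f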
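
The ERM principle over a finite class amounts to picking, among finitely many candidates, the predictor with the smallest average loss on the sample. A minimal Python sketch; the threshold classifiers and toy data are illustrative assumptions, not part of the course materials:

    # ERM over a finite class of predictors with the 0-1 loss.
    def erm_finite(candidates, X, y, loss):
        """Return the candidate with the smallest empirical risk."""
        def empirical_risk(f):
            return sum(loss(yi, f(xi)) for xi, yi in zip(X, y)) / len(X)
        return min(candidates, key=empirical_risk)

    # Example: threshold classifiers f_t(x) = 1[x >= t] on toy 1-D data.
    zero_one = lambda y, y_hat: int(y != y_hat)
    candidates = [lambda x, t=t: int(x >= t) for t in (0.0, 0.5, 1.0)]
    X, y = [0.1, 0.4, 0.8, 1.2], [0, 0, 1, 1]
    best = erm_finite(candidates, X, y, zero_one)   # selects the t = 0.5 classifier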
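
For the chapter on learning through optimisation, here is a minimal sketch of computing an empirical risk minimiser by full-batch gradient descent, in the concrete case of least squares; the step size and iteration count are illustrative assumptions:

    import numpy as np

    def gd_least_squares(X, y, step=0.1, n_iters=500):
        """Minimise the empirical risk (1/n)||Xw - y||^2 by gradient descent."""
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(n_iters):
            grad = (2.0 / n) * X.T @ (X @ w - y)   # gradient of the empirical risk
            w -= step * grad
        return w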
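
Finally, a minimal sketch of the EWA (Exponentially Weighted Average) forecaster from the online learning chapter, for prediction with expert advice under full feedback; the learning rate eta is an illustrative assumption:

    import numpy as np

    def ewa(expert_losses, eta=0.5):
        """expert_losses: (T, K) array of losses in [0, 1] for K experts over T rounds.
        Returns the forecaster's weighted-average loss in each round."""
        T, K = expert_losses.shape
        weights = np.ones(K) / K
        forecaster_losses = []
        for t in range(T):
            # Predict with the current weighted average of the expert advice.
            forecaster_losses.append(weights @ expert_losses[t])
            # Exponential weight update based on the observed expert losses.
            weights = weights * np.exp(-eta * expert_losses[t])
            weights = weights / weights.sum()
        return np.array(forecaster_losses)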
Assessment Elements

  • non-blocking Home assignment 1
  • non-blocking Home assignment 2
  • non-blocking Final exam
    3-hour written test
Interim Assessment

  • Interim assessment (module 4)
    0.2 * Final exam + 0.4 * Home assignment 1 + 0.4 * Home assignment 2
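    For example, with hypothetical grades of 8 on the final exam and 7 and 6 on the two home assignments, this weighting gives 0.2 * 8 + 0.4 * 7 + 0.4 * 6 = 6.8.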
Bibliography

Recommended Core Bibliography

  • Mohri, M., Rostamizadeh, A., & Talwalkar, A. (2012). Foundations of Machine Learning. Cambridge, MA: The MIT Press. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&site=eds-live&db=edsebk&AN=478737

Recommended Additional Bibliography

  • Bishop, C. M. (2006). Pattern Recognition and Machine Learning. New York: Springer. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&site=eds-live&db=edsbas&AN=edsbas.EBA0C705