Master's Programme 2019/2020

Introduction to Deep Learning

Status: Elective course (Financial Engineering)
Field of study: 38.04.08 Finance and Credit
Delivered by: Practice-oriented master's programmes of the Faculty of Economic Sciences
When delivered: 2nd year, Module 2
Mode of study: with an online course
Degree programme: Financial Engineering
Language: English
Credits: 3
Contact hours: 2

Course Syllabus

Abstract

This specialization gives an introduction to deep learning, reinforcement learning, natural language understanding, computer vision, and Bayesian methods. Top Kaggle machine learning practitioners and CERN scientists will share their experience of solving real-world problems and help you fill the gaps between theory and practice. Upon completion of the 7 courses, you will be able to apply modern machine learning methods in an enterprise setting and understand the caveats of real-world data and settings.
Learning Objectives

  • The goal of this course is to give learners a basic understanding of modern neural networks and their applications in computer vision and natural language understanding.
Expected Learning Outcomes

  • We'll consider the reinforcement learning formalisms in a more rigorous, mathematical way. You'll learn how to efficiently compute the return your agent gets for a particular action, and how to pick the best actions based on that return (see the sketch after this list).
  • We'll find out how to apply the previous week's ideas to real-world problems: ones where you don't have a perfect model of your environment.
  • We'll learn to scale things even further up by training agents based on neural networks.
  • You'll learn how to build better exploration strategies, with a focus on the contextual bandit setup.
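
The first outcome above mentions computing the return an agent gets. As a minimal, self-contained sketch (the reward sequence and discount factor below are illustrative, not taken from the course), the discounted return G_t = r_t + gamma * r_{t+1} + gamma^2 * r_{t+2} + ... can be computed with a single backward pass:

    # Minimal sketch: discounted return, computed backwards as G_t = r_t + gamma * G_{t+1}.
    def discounted_return(rewards, gamma=0.99):
        g = 0.0
        for r in reversed(rewards):
            g = r + gamma * g
        return g

    # Hypothetical reward sequence: 1 + 0.9**3 * 10 = 8.29
    print(discounted_return([1.0, 0.0, 0.0, 10.0], gamma=0.9))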
Course Contents

  • Intro: Why should I care?
    In this module we're going to define and "taste" what reinforcement learning is about. We'll also learn one simple algorithm that can solve reinforcement learning problems with embarrassing efficiency.
  • At the heart of RL: Dynamic Programming
    This week we'll consider the reinforcement learning formalisms in a more rigorous, mathematical way. You'll learn how to efficiently compute the return your agent gets for a particular action, and how to pick the best actions based on that return.
  • Model-free methods
    This week we'll find out how to apply last week's ideas to real-world problems: ones where you don't have a perfect model of your environment. (A minimal code sketch of a tabular model-free method follows this list.)
  • Approximate Value Based Methods
    This week we'll learn to scale things even further up by training agents based on neural networks.
  • Policy-based methods
    We spent the three previous modules working on value-based methods: learning state values, action values, and so on. Now it's time to see an alternative approach that doesn't require you to predict all future rewards in order to learn something.
  • Exploration
    In this final week you'll learn how to build better exploration strategies, with a focus on the contextual bandit setup. In the honors track, you'll also learn how to apply reinforcement learning to train structured deep learning models.
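
To ground the model-free and exploration modules above, here is a minimal sketch of tabular Q-learning with epsilon-greedy exploration. It is illustrative only: the environment is assumed to follow the classic Gym interface (env.reset() returning a state, env.step(action) returning (state, reward, done, info), with a discrete env.action_space.n), and the hyperparameters are arbitrary defaults, not the course's.

    import random
    from collections import defaultdict

    def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
        """Tabular Q-learning with epsilon-greedy exploration (illustrative sketch)."""
        n_actions = env.action_space.n
        q = defaultdict(lambda: [0.0] * n_actions)  # Q[s][a], zero-initialized

        for _ in range(episodes):
            state = env.reset()
            done = False
            while not done:
                # Epsilon-greedy: explore with probability epsilon, otherwise act greedily.
                if random.random() < epsilon:
                    action = random.randrange(n_actions)
                else:
                    action = max(range(n_actions), key=lambda a: q[state][a])

                next_state, reward, done, _ = env.step(action)

                # TD update toward the one-step bootstrapped target.
                target = reward + (0.0 if done else gamma * max(q[next_state]))
                q[state][action] += alpha * (target - q[state][action])
                state = next_state
        return q

Replacing the table q with a neural network approximator is, roughly, the step taken in the "Approximate Value Based Methods" module.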
Assessment Elements

  • Tests (non-blocking)
  • Tasks (non-blocking)
Interim Assessment

  • Interim assessment (Module 2)
    0.5 * Tasks + 0.5 * Tests
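    For example (hypothetical scores): with 7/10 for Tasks and 9/10 for Tests, the interim grade is 0.5 * 7 + 0.5 * 9 = 8.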
Bibliography

Recommended Core Bibliography

  • Fabozzi, F. J. (2002). The Handbook of Financial Instruments. Hoboken, N.J.: Wiley. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&site=eds-live&db=edsebk&AN=81949
  • Tyurin, Yu. N. (2003). Анализ данных на компьютере [Data Analysis on a Computer].

Recommended Additional Bibliography

  • Berger, A. (2007). Microsoft SQL Server 2005 Analysis Services. OLAP и многомерный анализ данных [OLAP and Multidimensional Data Analysis].