Master's programme 2020/2021

Introduction to Machine Learning

Status: Elective course (Financial Engineering)
Field of study: 38.04.08 Finance and Credit
Delivered by: Practice-oriented master's programmes of the Faculty of Economic Sciences
When: 2nd year, module 2
Mode of study: with an online course
Audience: home campus
Degree programme: Financial Engineering
Language: Russian
Credits: 3

Course syllabus

Annotation

This specialization gives an introduction to deep learning, reinforcement learning, natural language understanding, computer vision and Bayesian methods. Top Kaggle machine learning practitioners and CERN scientists will share their experience of solving real-world problems and help you fill the gaps between theory and practice. Upon completion of the 7 courses, you will be able to apply modern machine learning methods in enterprise settings and understand the caveats of real-world data.
Course objective

  • The goal of this course is to give learners a basic understanding of modern neural networks and their applications in computer vision and natural language understanding.
Expected learning outcomes

  • We'll consider the reinforcement learning formalisms in a more rigorous, mathematical way. You'll learn how to efficiently compute the return your agent gets for a particular action, and how to pick the best actions based on that return.
  • We'll find out how to apply these ideas to real-world problems: ones where you don't have a perfect model of your environment.
  • We'll learn to scale things even further up by training agents based on neural networks.
  • You'll learn how to build better exploration strategies, with a focus on the contextual bandit setup.
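The first outcome above, computing the return for a sequence of rewards and picking the best action from estimated action values, can be sketched in a few lines of Python. The function names and the toy numbers below are illustrative, not part of the course materials:

```python
def discounted_returns(rewards, gamma=0.99):
    """Return G_t for every step t, computed backwards in one pass:
    G_t = r_t + gamma * G_{t+1}."""
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

def greedy_action(q_values):
    """Pick the action index with the highest estimated return."""
    return max(range(len(q_values)), key=lambda a: q_values[a])

print(discounted_returns([1.0, 0.0, 2.0], gamma=0.5))  # [1.5, 1.0, 2.0]
print(greedy_action([0.1, 0.7, 0.3]))                  # 1
```

The backward pass avoids recomputing the tail sum for every step, which is the "efficiently compute the return" point made above.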
Course contents

  • Intro: why should I care?
    In this module we are going to define and "taste" what reinforcement learning is about. We'll also learn one simple algorithm that can solve reinforcement learning problems with embarrassing efficiency.
  • At the heart of RL: Dynamic Programming
    This week we'll consider the reinforcement learning formalisms in a more rigorous, mathematical way. You'll learn how to efficiently compute the return your agent gets for a particular action, and how to pick the best actions based on that return.
  • Model-free methods
    This week we'll find out how to apply last week's ideas to real-world problems: ones where you don't have a perfect model of your environment.
  • Approximate Value Based Methods
    This week we'll learn to scale things even further up by training agents based on neural networks.
  • Policy-based methods
    We spent the previous 3 modules working on value-based methods: learning state values, action values and so on. Now it's time to see an alternative approach that doesn't require you to predict all future rewards in order to learn something.
  • Exploration
    In this final week you'll learn how to build better exploration strategies, with a focus on the contextual bandit setup. In the honors track, you'll also learn how to apply reinforcement learning to train structured deep learning models.
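As a rough illustration of the model-free and value-based modules above, here is a minimal tabular Q-learning sketch on a toy chain environment (the agent walks left or right and is rewarded at the right end). The environment, hyperparameters, and all names are invented for illustration; the course itself works with richer environments and neural-network function approximators:

```python
import random

N_STATES = 5          # positions 0..4 on a chain
ACTIONS = (-1, +1)    # step left or right

def step(state, action):
    """One environment transition; reward 1.0 for reaching the right end."""
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning with epsilon-greedy exploration."""
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if random.random() < eps:
                a = random.choice(ACTIONS)              # explore
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])  # exploit
            s2, r, done = step(s, a)
            # TD target bootstraps on the best next action's value
            target = r if done else r + gamma * max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
    return q

q = q_learning()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)}
print(policy)
```

Acting greedily on the learned table should recover the "always move right" policy on this chain; swapping the dict for a neural network approximating Q(s, a) is the step the "Approximate Value Based Methods" module takes.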
Assessment elements

  • non-blocking Tests
  • non-blocking Tasks
Interim assessment

  • Interim assessment (Module 2)
    0.5 * Tasks + 0.5 * Tests
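The weighted formula above can be written as a one-line function. Only the 0.5/0.5 weighting is taken from the syllabus; the function name is illustrative and any rounding rule for the final grade is not specified here:

```python
def interim_grade(tasks, tests):
    """Interim assessment: 0.5 * Tasks + 0.5 * Tests (weights from the syllabus)."""
    return 0.5 * tasks + 0.5 * tests

print(interim_grade(8, 6))  # 7.0
```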
Bibliography

Recommended core bibliography

  • Fabozzi, F. J. (2002). The Handbook of Financial Instruments. Hoboken, N.J.: Wiley. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&site=eds-live&db=edsebk&AN=81949
  • Тюрин Ю. Н., Макаров А. А. Анализ данных на компьютере [Data Analysis on a Computer], 2003

Recommended additional bibliography

  • Бергер А., Горбач И. Microsoft SQL Server 2005 Analysis Services. OLAP и многомерный анализ данных [OLAP and Multidimensional Data Analysis], 2007