
Learning Deep Models with Small Data

Student: Atanov Andrei

Supervisor: Dmitry Vetrov

Faculty: Faculty of Computer Science

Educational Programme: Statistical Learning Theory (Master)

Final Grade: 10

Year of Graduation: 2020

Modern supervised deep learning models achieve state-of-the-art results on many problems, but they require large-scale labeled datasets. In many real-world problems, collecting a large dataset or labeling new data is challenging and expensive. In this work, we consider the problem of training deep learning models with a small amount of labeled data.

For the fully-supervised setting with a small amount of labeled data, we leverage the Bayesian inference framework, which provides a general way to incorporate prior knowledge or specific properties into machine learning models through a carefully chosen prior distribution. We propose a new type of prior distribution for convolutional neural networks, the deep weight prior (dwp), which exploits generative models to encourage a specific structure in trained convolutional filters, e.g., spatial correlations. We define dwp as an implicit distribution and propose a method for variational inference with this type of implicit prior. In experiments, we show that dwp improves the performance of Bayesian neural networks when training data are limited, and that initializing weights with samples from dwp accelerates the training of conventional convolutional neural networks.

For the semi-supervised setting, we propose a semi-conditional normalizing flow model. The model uses both labeled and unlabeled data to learn an explicit model of the joint distribution over objects and labels. Its semi-conditional architecture allows us to efficiently compute the value and gradients of the marginal likelihood for unlabeled objects. The conditional part of the model is built from a proposed conditional coupling layer. We demonstrate the performance of the model on semi-supervised classification problems on several datasets; it outperforms the baseline approach based on variational auto-encoders on MNIST.
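To make the conditional coupling layer concrete, below is a minimal sketch of a label-conditioned affine coupling transform in the RealNVP style. PyTorch is assumed; the class name ConditionalCoupling, the one-hot label conditioning, and the network sizes are illustrative choices, not the thesis implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalCoupling(nn.Module):
    # Affine coupling layer whose scale/shift conditioner also
    # receives the class label (illustrative sketch).
    def __init__(self, dim, num_classes, hidden=256):
        super().__init__()
        self.half = dim // 2
        self.num_classes = num_classes
        # The conditioner sees one half of the input plus a one-hot label.
        self.net = nn.Sequential(
            nn.Linear(self.half + num_classes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x, y):
        # x: (batch, dim) inputs, y: (batch,) integer class labels.
        x1, x2 = x[:, :self.half], x[:, self.half:]
        y_onehot = F.one_hot(y, self.num_classes).float()
        s, t = self.net(torch.cat([x1, y_onehot], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)  # bounded log-scales for numerical stability
        z2 = x2 * torch.exp(s) + t
        log_det = s.sum(dim=1)  # log|det J| of the affine transform
        return torch.cat([x1, z2], dim=1), log_det

    def inverse(self, z, y):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        y_onehot = F.one_hot(y, self.num_classes).float()
        s, t = self.net(torch.cat([z1, y_onehot], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)
        x2 = (z2 - t) * torch.exp(-s)
        return torch.cat([z1, x2], dim=1)

The semi-conditional design places layers like this only at the end of the flow: for an unlabeled object, the shared unconditional layers run once, and the marginal likelihood p(x) = Σ_y p(x | y) p(y) only requires re-evaluating the short conditional tail for each class.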

Full text (added May 20, 2020)

Student Theses at HSE must be completed in accordance with the University Rules and regulations specified by each educational programme.

Summaries of all theses must be published and made freely available on the HSE website.

The full text of a thesis can be published in open access on the HSE website only if the authoring student (copyright holder) agrees or, if the thesis was written by a team of students, if all co-authors (copyright holders) agree. Once published on the HSE website, a thesis acquires the status of an online publication.

Student theses are objects of copyright and their use is subject to limitations in accordance with the Russian Federation’s law on intellectual property.

Whenever a thesis is quoted or otherwise used, the author's name and the source of the quotation must be cited.
