
Speeding up Neural Networks with Variational Bayes

Student: Matiushin Leonid

Supervisor: Andrey Ustyuzhanin

Faculty: Faculty of Computer Science

Educational Programme: Statistical Learning Theory (Master)

Year of Graduation: 2020

This paper considers the task of supervised training of neural networks with Toeplitz linear layers (that is, layers whose weight matrix is Toeplitz). Toeplitz matrices admit fast matrix-vector multiplication via an efficient algorithm, which makes prediction faster than with a fully connected linear layer. Training structured matrices as components of neural networks has been studied in a number of papers. One of the main challenges is that replacing fully connected layers with structured ones (in particular, Toeplitz layers) degrades prediction quality. In all these studies, structured layers are trained in a non-Bayesian way (by maximizing the likelihood) under explicit restrictions on the parameter space. In this paper, we approach the training of structured matrices with variational Bayesian methods, which have proved extremely fruitful in many problems (in particular, in training sparse matrices). The methods presented in this work are modifications of variational approaches to the dropout procedure in neural networks, namely «Variational dropout» and «Concrete dropout». These methods were evaluated on the following popular benchmark problems: the MNIST handwritten digit recognition task and the HIGGS signal-versus-noise classification task. We measure the accuracy of the final prediction, the drop in accuracy relative to a fully connected layer, and the corresponding speed-up in computing the result. The conclusion of this paper is that the developed methods, firstly, yield Toeplitz linear layers as the result of the training procedure and, secondly, incur a smaller drop in quality relative to fully connected layers. Thus, the presented methods can significantly accelerate neural networks at the inference stage at the cost of only a small loss in quality.
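The speed-up mentioned above comes from the fact that a Toeplitz matrix-vector product can be computed in O(n log n) rather than O(n²). As an illustration only (the thesis itself does not specify the algorithm, but circulant embedding with the FFT is the standard technique, and the function name `toeplitz_matvec` is ours), a minimal NumPy sketch:

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply an n x n Toeplitz matrix by x in O(n log n).

    The Toeplitz matrix is defined by its first column c and first
    row r (with r[0] == c[0]).  It is embedded into a (2n-1)-point
    circulant, whose matvec is a pointwise product in Fourier space.
    """
    n = len(c)
    # First column of the circulant that contains T in its top-left block.
    v = np.concatenate([c, r[1:][::-1]])
    x_pad = np.concatenate([x, np.zeros(n - 1)])
    # Circulant matvec = inverse FFT of the product of FFTs.
    y = np.fft.ifft(np.fft.fft(v) * np.fft.fft(x_pad))
    return y[:n].real

# Sanity check against an explicit dense Toeplitz matrix.
rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)
r = np.concatenate([[c[0]], rng.standard_normal(n - 1)])
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)]
              for i in range(n)])
x = rng.standard_normal(n)
assert np.allclose(toeplitz_matvec(c, r, x), T @ x)
```

In a neural network, replacing a dense layer's weight matrix with such a Toeplitz product reduces both the parameter count (2n−1 instead of n²) and the per-prediction cost, which is the source of the inference acceleration studied in the thesis.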

