
Non-Autoregressive Language Model Based on Transformer

Student: Sergey Pletenev

Supervisor: Ekaterina Lobacheva

Faculty: Faculty of Humanities

Educational Programme: Computational Linguistics (Master)

Final Grade: 8

Year of Graduation: 2020

Many modern NLP models have structures that can be distinguished and compared. One of the most widely used architectures for language modeling is the autoregressive model, but there are also many non-autoregressive models, which produce quite different results. In this work, we analyze and compare non-autoregressive and autoregressive models from a linguistic perspective. The thesis presents the results of our experiments on finding coherence and other syntactic structures in NLP models. In addition, we identify and illustrate effective attention patterns found in these models.
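The core architectural difference the thesis studies can be illustrated with a minimal sketch (an assumption for illustration, not the code used in the thesis): in scaled dot-product attention, an autoregressive model applies a causal mask so each token attends only to earlier positions, while a non-autoregressive model attends bidirectionally over the full sequence.

```python
import torch

def attention(q, k, v, mask=None):
    # Scaled dot-product attention; mask entries equal to 0 are blocked.
    scores = q @ k.transpose(-2, -1) / k.size(-1) ** 0.5
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = torch.softmax(scores, dim=-1)
    return weights @ v, weights

T, d = 4, 8                      # toy sequence length and hidden size
x = torch.randn(T, d)
causal = torch.tril(torch.ones(T, T))   # autoregressive: token i sees positions <= i
_, w_ar = attention(x, x, x, mask=causal)
_, w_nar = attention(x, x, x)           # non-autoregressive: full bidirectional attention
```

In `w_ar`, every entry above the diagonal is exactly zero, whereas `w_nar` may distribute attention over all positions; attention-pattern analyses like those in the thesis inspect such weight matrices layer by layer.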

Full text (added November 10, 2020)

