
Interpretable Deep Neural Networks for Text Classification

Student: Brovkin Evgeniy

Supervisor: Oleg Durandin

Faculty: Faculty of Informatics, Mathematics, and Computer Science (HSE Nizhny Novgorod)

Educational Programme: Data Mining (Master)

Final Grade: 7

Year of Graduation: 2019

The ability to properly interpret the importance of a predictive model's inputs is essential. Good interpretability leads to a better understanding of the process being modeled, builds trust among the people who use the model, and can suggest ways to improve it. This work presents a new approach to interpreting neural networks for text classification. The algorithm finds the key words in a text sequence and is based on Shapley values and the approximations proposed in prior work by other authors. A series of experiments and tests based on adversarial attacks was carried out using different architectures and several text corpora in Russian and English. These experiments confirmed the correctness of the implemented approach.
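The core idea of Shapley-value word importance can be illustrated with a minimal sketch. This is not the thesis's actual algorithm; it is a generic Monte Carlo approximation of Shapley values over tokens, where an "absent" token is simply dropped from the sequence (one common masking choice), and `toy_predict` is a stand-in for a real text classifier:

```python
import random

def shapley_word_importance(tokens, predict, n_samples=200, seed=0):
    """Monte Carlo estimate of each token's Shapley value.

    For each random ordering of tokens, add tokens one by one and
    credit each token with the change in the model's score; the
    average over orderings approximates its Shapley value.
    """
    rng = random.Random(seed)
    n = len(tokens)
    values = [0.0] * n
    for _ in range(n_samples):
        perm = list(range(n))
        rng.shuffle(perm)
        included = set()
        prev_score = predict([])  # score of the empty sequence
        for idx in perm:
            included.add(idx)
            # Keep original word order when building the subset.
            subset = [tokens[i] for i in range(n) if i in included]
            score = predict(subset)
            values[idx] += score - prev_score  # marginal contribution
            prev_score = score
    return [v / n_samples for v in values]

# Hypothetical stand-in classifier: +1 if "great" is present, -1 if "bad".
def toy_predict(tokens):
    return float("great" in tokens) - float("bad" in tokens)

tokens = ["the", "movie", "was", "great"]
importance = shapley_word_importance(tokens, toy_predict)
# "great" receives all the credit; the other words get zero.
```

In a real setting `predict` would be a neural classifier's class probability, and the cost of exact Shapley values grows exponentially in sequence length, which is why sampling-based approximations like this one are used.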

Full text (added May 21, 2019)
