+7 (495) 772-95-90
Address: 11 Pokrovsky Bulvar, Pokrovka Complex, room S822
Personal webpage
SPIN-RSCI: 8537-1776
ORCID: 0000-0001-8188-3391
ResearcherID: X-3960-2018
Google Scholar
V. V. Podolskii
D. Vetrov



Nadezhda Chirkova

  • Nadezhda Chirkova has been at HSE University since 2016.


  • Research interests: deep neural networks



Bachelor's and Master's in Applied Mathematics and Information Science
Lomonosov Moscow State University

Awards and Accomplishments

Best Teacher – 2021, 2019

Young Faculty Support Program (Group of Young Academic Professionals)
Category "New Researchers" (2018)

Courses (2021/2022)

Courses (2020/2021)

Courses (2019/2020)

Courses (2018/2019)

Courses (2017/2018)


Employment history

Junior Machine Learning Researcher, Antiplagiat JSC, July–August 2016. Developed a prototype of a domain-specific search system incorporating a hierarchical topic structure learned from domain data.

Machine Learning Teaching Assistant, Coursera machine learning specialization, January 2016 – March 2017. Developed practical assignments explaining to students how machine learning algorithms work.


Faculty Submits Ten Papers to NeurIPS 2021

The 35th Conference on Neural Information Processing Systems (NeurIPS 2021), one of the world's largest conferences on machine learning and neural networks, takes place on December 6–14, 2021.

Two papers were accepted to NAACL 2021

Two papers were accepted to the 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2021):

  • “On the Embeddings of Variables in Recurrent Neural Networks for Source Code” by Nadezhda Chirkova;
  • “A Simple Approach for Handling Out-of-Vocabulary Identifiers in Deep Learning for Source Code” by Nadezhda Chirkova and Sergey Troshin.

The final versions of the papers and the source code will be released soon. The research was conducted using the computational resources of the HSE Supercomputer Modeling Unit.

Both papers address the problem of improving the quality of deep learning models for source code by exploiting the specifics of variables and identifiers. The first paper proposes a recurrent architecture that explicitly models the semantic meaning of each variable in a program. The second paper proposes a simple method for preprocessing rarely used identifiers so that a neural network (in particular, the Transformer architecture) better recognizes patterns in the program. The proposed methods were shown to significantly improve the quality of code completion and variable misuse detection.
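The identifier-preprocessing idea described above can be sketched roughly as follows. This is a minimal illustration, assuming that out-of-vocabulary identifiers are replaced with indexed placeholder tokens; the function name, placeholder scheme, and vocabulary here are hypothetical and not taken from the paper's actual implementation:

```python
# Sketch of out-of-vocabulary (OOV) identifier anonymization for
# source code tokens. Hypothetical illustration: the placeholder
# naming scheme (VAR1, VAR2, ...) and this API are assumptions.

def anonymize_oov_identifiers(tokens, vocabulary):
    """Replace tokens missing from `vocabulary` with indexed
    placeholders, reusing the same placeholder for repeated
    occurrences of the same identifier within one snippet."""
    mapping = {}
    result = []
    for tok in tokens:
        if tok in vocabulary:
            result.append(tok)
        else:
            # First occurrence of an OOV identifier gets a new index.
            mapping.setdefault(tok, f"VAR{len(mapping) + 1}")
            result.append(mapping[tok])
    return result

# Usage: 'my_counter' and 'tmp_buf' are assumed to be rare (OOV).
vocab = {"for", "i", "in", "range", "(", "10", ")", ":"}
code = ["for", "i", "in", "range", "(", "10", ")", ":",
        "my_counter", "tmp_buf", "my_counter"]
print(anonymize_oov_identifiers(code, vocab))
# -> ['for', 'i', 'in', 'range', '(', '10', ')', ':', 'VAR1', 'VAR2', 'VAR1']
```

Mapping repeated occurrences to the same placeholder preserves the usage pattern of each identifier, which is the signal a model like the Transformer can exploit even when the concrete name is rare.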

NeurIPS 2020 Accepts Three Articles from Faculty of Computer Science Researchers

The 34th Conference on Neural Information Processing Systems (NeurIPS 2020) is one of the largest machine learning conferences in the world, held since 1989. It was planned to take place in Vancouver, Canada, on December 6–12, but was held online instead.

NeurIPS is as prestigious as ever: 9,454 articles were submitted and 1,300 were accepted. Among those accepted are three articles by researchers of the Faculty of Computer Science:

The paper "On Power Laws in Deep Ensembles" was accepted as a spotlight at NeurIPS 2020

The faculty presented the results of their research at NeurIPS, the largest international machine learning conference

Researchers of the Faculty of Computer Science presented their papers at the annual Conference on Neural Information Processing Systems (NeurIPS), held from December 2 to 8, 2018 in Montreal, Canada.

Faculty of Computer Science Staff Attend International Conference on Machine Learning 

On August 6–11, the 34th International Conference on Machine Learning (ICML) was held in Sydney, Australia. The conference is ranked A* by CORE and is one of the two leading conferences in the field of machine learning. It has been held annually since 2000, and this year more than 1,000 participants from different countries took part.

'Machine Learning Algorithm Able to Find Data Patterns a Human Could Not'

In December 2016, five new international laboratories opened up at the Higher School of Economics, one of which was the International Laboratory of Deep Learning and Bayesian Methods. This lab focuses on combined neural Bayesian models that bring together two of the most successful paradigms in modern-day machine learning – the neural network paradigm and the Bayesian paradigm.