Language Proficiency
English
Contacts
Phone: 27252; +7 (926) 718-38-58
Address: 11 Pokrovsky Bulvar, Pokrovka Complex, room S822
ORCID: 0000-0002-8439-390X
ResearcherID: AAL-5985-2021
Google Scholar
Office hours
11.00 - 19.00
Supervisors
D. Vetrov
V. V. Podolskii

Sergey Troshin

  • Sergey Troshin has been at HSE University since 2017.

Responsibilities

To conduct research at the Centre of Deep Learning and Bayesian Methods

Young Faculty Support Program (Group of Young Academic Professionals)
Category "New Researchers" (2022)

Courses (2021/2022)

Courses (2020/2021)

Publications (2)


Employment history

Research Assistant at the Advanced Research NLP Group, ABBYY, June-August 2019.
Developed a system for active learning of neural networks to extract relations between entities in text.


‘One Year of Combined-Track Studies Expands Students’ Research Horizons’

HSE University continues to develop its new study format for students embarking on a research career: the Combined Master's-PhD track. This year, there will be 75 places for Master’s students on the track—two thirds more than last year. HSE Vice Rector Sergey Roshchin talks about the appeal of the combined-track option, how to enrol, and the achievements of last year’s applicants.

Two papers were accepted to NAACL 2021

Two papers were accepted to the 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2021):

  • “On the Embeddings of Variables in Recurrent Neural Networks for Source Code” by Nadezhda Chirkova;
  • “A Simple Approach for Handling Out-of-Vocabulary Identifiers in Deep Learning for Source Code” by Nadezhda Chirkova and Sergey Troshin.

The final versions of the papers and the source code will be released soon. The research was conducted using the computational resources of the HSE Supercomputer Modeling Unit.

Both papers address the problem of improving the quality of deep learning models for source code by exploiting the specifics of variables and identifiers. The first paper proposes a recurrent architecture that explicitly models the semantic meaning of each variable in the program. The second paper proposes a simple method for preprocessing rarely used identifiers so that a neural network (in particular, the Transformer architecture) can better recognize patterns in the program. The proposed methods were shown to significantly improve the quality of code completion and variable misuse detection.
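The out-of-vocabulary preprocessing idea from the second paper can be illustrated with a short sketch. This is only a minimal illustration of the general approach under assumed details, not the authors' implementation: the tokenization, the vocabulary, and the placeholder format (<var1>, <var2>, ...) are chosen here purely for the example.

```python
import re

def anonymize_rare_identifiers(tokens, vocab, placeholder="<var%d>"):
    """Replace identifiers missing from the vocabulary with per-snippet placeholders.

    Unknown identifiers are mapped to numbered placeholders (<var1>, <var2>, ...)
    consistently within one code snippet, so repeated uses of the same unknown
    name remain visible to the model as the same entity.
    """
    mapping = {}
    result = []
    for tok in tokens:
        looks_like_identifier = re.fullmatch(r"[A-Za-z_]\w*", tok) is not None
        if looks_like_identifier and tok not in vocab:
            if tok not in mapping:
                mapping[tok] = placeholder % (len(mapping) + 1)
            result.append(mapping[tok])
        else:
            result.append(tok)
    return result, mapping

# Hypothetical example: "my_counter" is assumed to be absent from the vocabulary.
vocab = {"for", "i", "in", "range", "(", ")", ":", "=", "+", "0", "10"}
tokens = ["my_counter", "=", "0", "for", "i", "in", "range", "(", "10", ")", ":",
          "my_counter", "=", "my_counter", "+", "i"]
anonymized, mapping = anonymize_rare_identifiers(tokens, vocab)
print(anonymized)  # every occurrence of my_counter becomes <var1>
print(mapping)     # {'my_counter': '<var1>'}
```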