
Deep Reinforcement Learning in Vizdoom FPS

Student: Akimov Dmitrii

Supervisor: Ilya Makarov

Faculty: Faculty of Computer Science

Educational Programme: Data Science (Master)

Final Grade: 9

Year of Graduation: 2018

In this work we study the effect of applying existing improvements for DQN, developed in the MDP setting, to the POMDP setting. Combining several heuristic improvements such as Distributional Learning and the Dueling architecture is well studied for MDPs and works much better than vanilla DQN, which explains the popularity of such combinations. However, these improvements had not previously been combined for partially observable processes; instead, model-based approaches are more popular there. Model-based agents are harder to develop, and the resulting agent is not as universal as a model-free one. We propose a new method for combining simple DQN extensions and develop a new model-free reinforcement learning agent that works with partially observable processes while incorporating well-studied improvements from fully observable ones. To test our agent we chose the VizDoom environment, an old but, in terms of gameplay, advanced first-person shooter with many scenarios. VizDoom provides an API that allows researchers to interact with the environment and train autonomous agents. We develop agents for the following VizDoom scenarios: Basic, Defend The Center, and Health Gathering. We show that improvements used in the MDP setting can be applied in the POMDP setting as well, and that our combined agents converge to better policies. We develop an agent combining several improvements that shows superior game performance in practice. We compare our agent with the DRQN with Prioritized Experience Replay and Snapshot Ensembling agent (Schulze et al., 2018) and obtain an approximately threefold increase in per-episode reward. We believe our agent can be improved further with model-based methods, and that it can serve as a backbone for more sophisticated methods for playing different VizDoom scenarios.

Keywords: deep reinforcement learning, neural networks, first-person shooter, VizDoom.
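As a point of reference for one of the DQN extensions the abstract mentions, the Dueling architecture (Wang et al., 2016) splits the network head into a scalar state value V(s) and per-action advantages A(s, a), then recombines them with the mean advantage subtracted for identifiability. The sketch below shows only that aggregation step; the function name and values are illustrative, not taken from the thesis.

```python
# Minimal sketch of the Dueling DQN aggregation:
#   Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')
# In a real agent, state_value and advantages would be the outputs
# of two separate network heads sharing a convolutional trunk.

def dueling_q_values(state_value, advantages):
    """Combine a scalar state value with per-action advantages."""
    mean_adv = sum(advantages) / len(advantages)
    return [state_value + a - mean_adv for a in advantages]

# Example: V(s) = 1.0, advantages for three actions (mean is 0.0).
q = dueling_q_values(1.0, [0.5, -0.5, 0.0])
# q == [1.5, 0.5, 1.0]
```

Subtracting the mean advantage makes the V/A decomposition unique, which stabilizes learning compared to naively summing the two streams.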

Full text (added May 28, 2018)
