
HSE Opens New Laboratory of Stochastic Algorithms and High-Dimensional Inference

The HSE Faculty of Computer Science has launched the International Laboratory of Stochastic Algorithms and High-Dimensional Inference (HDI Lab). The Lab's Academic Supervisor, Eric Moulines, and Chief Research Fellow, Vladimir Spokoiny, spoke to us about the fundamental and applied aspects of the laboratory's research, its relation to machine learning, and Russian-French academic ties.

HDI Lab brings together researchers from various fields of mathematics, from probability theory to contemporary mathematical statistics, to solve data analysis problems. The lab has formally existed since April 2018, but its official launch came at the September international workshop 'Structural Inference in High-Dimensional Models'. The event also launched a series of annual meetings of probability and statistics researchers from France and Russia.

Mathematical Methods in Machine Learning: Why Teach AI to Doubt?

Eric Moulines, Academic Supervisor of HDI Lab

Our lab will focus on developing the methodology of machine learning, i.e. new techniques, ideas and approaches that can then be used in various applications. We will also do some 'proof of concept' work to show that a given method can be applied in practice. There are huge opportunities at the moment for research in machine learning and artificial intelligence, but sticking to traditional mathematical statistics would be too limiting. In terms of mathematical technique, machine learning is an extension of statistics. But if you ask a student whether they want to do statistics or artificial intelligence, I think AI is much more appealing. I am less theoretical than some of my new colleagues at the lab, so together we can develop more applied research, which is research that can be presented at machine learning conferences.

There are many fields in machine learning that are developing very rapidly, for example uncertainty quantification, the development of novel Bayesian techniques, topological data analysis, deep learning, and fairness analysis.

Machine learning algorithms are over-confident most of the time: a machine learning algorithm must be able to say that it does not know

This is very important, for example, in the development of autonomous cars. In real-time scene analysis, the machine should be able to send a signal to the human driver if it detects an object it has not seen before, so that the car slows down or the driver takes control. So the machine should be able to determine whether it is sure of its decision or whether it has doubts that have to be resolved with human intervention.
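One of the simplest concrete versions of this idea is confidence thresholding: if the model's top class probability falls below a cut-off, the system abstains and defers to the human. The Python sketch below is a generic, minimal illustration under assumed values (the 0.9 threshold and the example logits are arbitrary; this is not a description of the lab's methods or of any real driving system).

import numpy as np

def softmax(logits):
    # Convert raw scores into class probabilities.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def predict_or_abstain(logits, threshold=0.9):
    # Return the predicted class, or None ("I do not know") if confidence is too low.
    probs = softmax(np.asarray(logits, dtype=float))
    top = int(np.argmax(probs))
    return top if probs[top] >= threshold else None

# Hypothetical logits for an object in the scene.
print(predict_or_abstain([2.1, 1.9, -3.0]))  # None: the two leading classes are nearly tied
print(predict_or_abstain([6.0, 1.0, -3.0]))  # 0: a confident prediction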

The way a machine learns to do things is not the same as for humans. There is still a lot to do to achieve a level of performance close to that of humans. I think that, for instance, in terms of understanding speech, in 10 years the computer will fully understand what people are saying. However, computers will not replicate the human brain. It is just like the invention of flying machines: planes do not fly like birds, but they fly. At the same time, there is no bird that can carry 500 people at 1,000 km/h. So, computers will be able to do some tasks, but they will do them differently and more efficiently than people. And for some tasks we will still need people.

 

Vladimir Spokoiny, Chief Research Fellow at HDI Lab

High-dimensional Analysis: How to Extract Useful Information from a Large Array?

Analyzing big arrays of high-dimensional data is a real challenge today. It is a complicated task with no single universal solution. In today's world, information is everywhere and is accumulated in every possible way: images, speech data, internet networks, and so on. On the face of it, information deficits are a thing of the past. But humanity now faces a new problem: how do we use the data stored in these huge arrays to obtain information that we need and can understand?

Digital images provide a typical example. Formally, an image is a vector with a dimension of several million pixels. How can we understand what is actually in such an image? Does it contain a cat, a dog, or a human? How can we tell whether it is the same person in different photos? The human eye can do this easily, but how do we teach a computer to do it?
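To make the 'image as a vector' point concrete, here is a minimal Python sketch with synthetic random pixels (the image size is deliberately small; a real 12-megapixel photo would give a vector of roughly 36 million coordinates). It also shows why a naive pixel-by-pixel comparison is not enough to decide whether two photos show the same person.

import numpy as np

rng = np.random.default_rng(0)

# A small synthetic RGB "photo"; real photos are the same object, only much larger.
height, width, channels = 300, 400, 3
photo_a = rng.integers(0, 256, size=(height, width, channels))
photo_b = rng.integers(0, 256, size=(height, width, channels))

vec_a = photo_a.reshape(-1).astype(float)
vec_b = photo_b.reshape(-1).astype(float)
print(vec_a.shape)  # (360000,) -- the image viewed as one long vector

# Raw pixel distance is dominated by lighting, pose and noise, so it says
# little about identity; some structure has to be extracted first.
print(np.linalg.norm(vec_a - vec_b))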

Although such data is quite abundant, it has a complicated, partially probabilistic structure, so there is a lot of uncertainty. This uncertainty may vary: it may stem from errors in observation or data transfer. In medicine, for example, uncertainty may relate to the conditions of tests and patients, while in sociological data it is driven by subjective factors. Uncertainty also surrounds future events, such as stock prices or the weather: they can be predicted only with a certain probability, never definitively.

Therefore, we extract information from complex data with an inherent degree of uncertainty. This reflects the stochastic (probabilistic) nature of data, and it is a vast area. It includes various fields of applied mathematics as well as modern methods of machine learning, such as deep networks. Our lab aims to develop mathematical methods and approaches for analyzing complex structured data.

The key assumption underlying modern approaches to data analysis is that even very complex data, such as images, videos, and social networks, have a certain structure. For example, the shape and location of the outlines of the eyes, nose, and mouth play an important role in recognizing faces in photos. Knowledge of such structures facilitates analysis, and the objective is to extract structural information from data and use it effectively. To do this, we combine methods from the fields of applied mathematics used today, such as statistics, probability theory, optimization theory, optimal control, and partial differential equations, and we try to apply them to the analysis of complex data with an unknown structure.
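A minimal sketch of this 'hidden structure' idea, using synthetic data and principal component analysis (the dimensions and noise level are arbitrary, and PCA is only the simplest example of a structural assumption, not the lab's specific toolkit): data that nominally lives in a thousand dimensions can in fact lie near a five-dimensional subspace, and the singular values reveal that.

import numpy as np

rng = np.random.default_rng(0)

# n points in 1000 ambient dimensions that actually live near a 5-dimensional subspace.
n, ambient_dim, structural_dim = 500, 1000, 5
basis = rng.normal(size=(structural_dim, ambient_dim))
coeffs = rng.normal(size=(n, structural_dim))
data = coeffs @ basis + 0.01 * rng.normal(size=(n, ambient_dim))

# PCA via the singular value decomposition: a sharp drop after the fifth
# singular value exposes the true structural dimension.
centered = data - data.mean(axis=0)
singular_values = np.linalg.svd(centered, compute_uv=False)
print(singular_values[:8].round(1))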

The work we are involved in is globally known as statistical learning theory. While machine learning and artificial intelligence focus on creating new algorithms, learning theory is about developing and analyzing structural approaches to data analysis, as well as assessing the effectiveness of these approaches.

For example, everyone is crazy about deep learning today, but no one has explained why it works. There is still no theoretical foundation.

We are not just trying to build a data model and estimate its parameters, as in classical statistics; that is what researchers did in the 20th century. Structural inference is much more complicated. First, on the basis of existing practical examples, we must understand the types and forms of structural assumptions that can be made about the data. This helps us reduce the size of the task as well as its complexity. Then we estimate the structural parameters and the model parameters.
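A toy illustration of this two-step logic (purely schematic, not the lab's method): in a sparse regression problem one can first estimate the structural parameter, namely which of the many candidate features are active at all, and only then estimate the model parameters on the reduced problem. Correlation screening below is a crude stand-in for sparsity-inducing estimators such as the Lasso, and the problem sizes are arbitrary.

import numpy as np

rng = np.random.default_rng(1)

# Sparse linear model: 200 candidate features, only 3 of them matter.
n, p = 100, 200
X = rng.normal(size=(n, p))
true_beta = np.zeros(p)
true_beta[[3, 42, 117]] = [2.0, -1.5, 1.0]
y = X @ true_beta + 0.1 * rng.normal(size=n)

# Step 1 (structure): guess the active coordinates by correlation screening,
# assuming here that the structural dimension (3) is known.
scores = np.abs(X.T @ y) / n
support = np.sort(np.argsort(scores)[-3:])

# Step 2 (model parameters): ordinary least squares restricted to the support.
beta_hat, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
print(support, beta_hat.round(2))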

An important problem in contemporary data analysis is building effective (scalable) algorithms: an algorithm's complexity must scale in proportion to the total amount of data. There are also so-called NP-hard problems, for which no efficient algorithms are known and whose complexity is believed to be too high even for a quantum computer. A typical task of this kind is the exhaustive examination of all possible subsets of a given set, or of all possible scenarios in the development of a complex system. Solving such tasks requires statistics and machine learning methods on the one hand, and theoretical computer science on the other.
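A small, generic sketch of the contrast (illustrative only, with an arbitrary model, step size and batch size): exhaustive examination of all subsets grows as 2^p and quickly becomes hopeless, whereas a stochastic algorithm such as mini-batch gradient descent has a per-step cost that does not depend on the total number of observations, which is what makes it scalable.

import numpy as np

rng = np.random.default_rng(2)

# One million observations of a simple linear model.
n, p = 1_000_000, 10
X = rng.normal(size=(n, p))
true_beta = rng.normal(size=p)
y = X @ true_beta + 0.1 * rng.normal(size=n)

# Exhaustively scoring every subset of 10 features already means 2**10 = 1024 fits;
# with 100 features it would be about 1.3e30 fits, hopeless for any computer.
print(2 ** 10, 2 ** 100)

# Mini-batch stochastic gradient descent: each update touches only 64 rows,
# so the cost per step is independent of n.
beta = np.zeros(p)
step, batch = 0.01, 64
for _ in range(2000):
    idx = rng.integers(0, n, size=batch)
    grad = X[idx].T @ (X[idx] @ beta - y[idx]) / batch
    beta -= step * grad
print(np.abs(beta - true_beta).max().round(3))  # close to zero: SGD recovers the coefficients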

The lab's practical activities include analysis of financial markets, biomedical images, video streams, graphs, and networks. This is a huge industry involving many institutions and companies. We cannot compete with the huge teams that develop software packages. However, we are trying to produce new results that help us understand which methods are most effective, and to develop new methods based on structural modeling.


Photo: Mikhail Dmitriev, Higher School of Economics

About the Lab’s Partners, Future Plans, and Academic Cooperation between France and Russia

Vladimir Spokoiny: Our key partners were represented at the workshop: École Polytechnique from France, École normale supérieure (Paris), ENSAE ParisTech, the University of Toulouse, and the Humboldt University of Berlin, which I represent personally. We have a laboratory and have received several grants from the Russian Science Foundation. Furthermore, we are waiting for the results of another big grant competition in December. There are many grants, but what’s important today is focusing on strengthening existing cooperation. We’ve only just started and are focused on intensive development.

Based on the example of the new Master's programme in Statistical Learning Theory, which we are implementing together with Skoltech, it is clear that no more than 10% of the Faculty of Computer Science's 200 undergraduates will proceed to our Master's programme. And this is good; we don't need more. We can offer them a specific way of transitioning from student life to research work, if they are interested in pursuing it.

Eric Moulines: We are also going to cooperate with Samsung-HSE Laboratory, headed by Dmitry Vetrov, because there is an obvious connection between our two labs. So there are definitely opportunities for joint research. Dmitry’s laboratory focuses more on the applied side – developing software – and we are more on the theoretical, methodological side. So there is very good complementarity.

École Polytechnique, where I now work, has long-standing relations with HSE and is committed to developing this international collaboration. A memorandum of understanding was signed a couple of years ago, but it was not very concrete; there was only a small flow of students between Moscow and Paris. However, there are plans to increase the interaction, given that HSE is École Polytechnique's target institution in Russia.

It is interesting that Russia and France are very compatible in terms of research culture.

For example, computer science students in these two countries receive quite a heavy dose of mathematics in comparison with other countries. This training is very appropriate for statistical machine learning, so it is easy for us to interact constructively and work together. There are also many professors of statistics in France who come from Russia and post-Soviet countries, such as Alexander Tsybakov, Oleg Lepski, Yuri Golubev, and Yuri Kutoyants. These professors have been very active and have mentored a lot of students.

A very significant part of the statistical machine learning community in France has close links with Russia. Russia is really the place where the theory of statistical machine learning and nonparametric statistics has been extremely active. There were many big names in statistics, like Vladimir Vapnik, Ildar Ibragimov, and Rafail Khasminskii, who, together with their extremely sharp students, laid the foundations of much of the modern statistical corpus.