
Ruslan Ibragimov: HSE University Is on the Cutting Edge of Legal Research in Ethics and Law in the Digital Environment

Ruslan Ibragimov
Photo by MTS

Last year, the Institute for Law in the Digital Environment was founded as part of HSE University. Ruslan Ibragimov, HSE Director for Legal Research, spoke to the HSE News Service about the research carried out by the new institute, some of the outcomes of its work, plans for the future, and the legal challenges surrounding the use of artificial intelligence.

When and why was the Institute for Law in the Digital Environment created?

It was founded by decision of the HSE Academic Council in April 2020 following a Rector’s proposal to develop a concept for the institute and a preliminary research plan. By then, the national project ‘Digital Economy’ was already in full operation. Its tasks included developing a regulatory framework for the faster digital transformation of the Russian economy and the preparation of the relevant human resources. Initially, these two areas were the cornerstone of our institute’s work.


We believed that our research would be of interest to the Russian government, and started working with a focus on the ‘Legal Regulation of the Digital Environment’ federal project as part of the ‘Digital Economy’ national programme, which outlined the government’s priorities in regulation.

At the time, legal science was underused in digital transformation. But it was important to draw on its potential in implementing the national programme and to provide a logical continuum between traditional and digital law.

How have you combined research tasks with educational ones?

From the beginning, we planned to use the results of our research in teaching. This bridges the gap between research and education. If we develop certain concepts at HSE University, it makes sense to trial them in the study process, to attract students to research, and to teach research methods. Students adapt to digitalization from their first years of study and can spread the ideas born at the institute throughout the student community.

Today, we participate in developing minors in digital law. We have developed several programmes for the Faculty of Law, and several more are in development. We are going to start teaching next year. We believe that in the near future, the economy will show a high demand for ‘digital’ lawyers.

What are the key areas of research at the Institute for Law in the Digital Environment?

In 2020, which was not a full year of operations, we carried out 11 research projects on the digital environment. This year, we plan 16 projects, some of them commissioned by the state and others forming part of HSE University’s own activities. The projects fall into several groups.

The first group is dedicated to the legal regulation of artificial intelligence (AI) and related activities. The second group concerns the regulation of big data, which is the foundation of the digital economy. This is a very pressing topic today. This year, our task is to study issues related to the legal regulation of the circulation of digital data, including anonymized data. Current regulation limits the civil circulation of anonymized data, which is essential for the development of the digital economy. Improving regulation can promote the circulation of data. Our task is to identify the barriers and make suggestions on how to eliminate them.

The third group lies at the intersection of intellectual property and AI.

The fourth includes studies into the applied use of AI in fields such as medicine. In particular, we have started to study the problem of the legal status of people with implanted cyber-physical systems.

In addition, we decided to study matters of digital ethics. AI aims to imitate human behaviour, and we are worried about the ethical foundations of its operations. We have created a relevant research area at our Laboratory for Digital Law and Ethics. It will deal with questions of ethics and law in the digital reality.

I believe that HSE University is on the cutting edge of this field of legal science. In November, we are organizing a big international conference on digital ethics and law; we want to demonstrate our first outcomes and emphasize the importance of ethical rules and standards in the comprehensive regulation of AI and related issues.

Are the studies conducted by the institute’s experts applied to specific regulations?

It has only been about half a year since we submitted our first research outcomes. Policy making is a long process. The ideas found in our studies are currently being discussed by the market and regulators. We are in contact with the relevant bodies; we are explaining our approaches and conclusions, and we hope that soon, they will be used in regulations.

What are some of the most pressing legal problems that come with the expanded use of AI?

The key upcoming feature of AI that will force us to reinvent our approach is its autonomy. Its application in various fields of our lives could have unpredictable consequences legally, morally and ethically. All these aspects are important to us. I would emphasize three key potential problems that may follow from its development.

First is the violation of human rights and freedoms, privacy, physical and mental integrity, as well as possible discrimination. Some AI technologies are directly related to these problems. For example, the so-called social scoring system is controversial—this is when an AI assigns someone a certain ranking that determines which rights they have. Biometric identification also needs to be clarified in terms of legal regulation. Different European countries have different positions on this. The discussion is ongoing, and we’ll be looking at this problem. We can see that an upgrade of the regulations will be essential.


The second problem is the protection and restoration of rights when they have been violated, since AI systems still lack technological and legal transparency. To many people, it is unclear how an AI makes decisions. This lack of transparency has legal ramifications and can violate citizens’ rights. That’s why it’s necessary to inform people about AI decision-making procedures and how to appeal against such decisions.

The third key legal problem is the fair distribution of responsibility. The existing legal mechanisms for AI-related liability are not optimal: they cannot account for the autonomy and unpredictability of such systems.

We have not yet come to the point where we use autonomous systems. Existing systems mostly process information, but we are one step away from seeing autonomous systems, which means that we already have to analyse the emerging risks and suggest solutions.

Solving the problem of responsibility would give people a basis for trusting AI. If we manage to clearly outline how the responsibility for harm caused by AI is defined, it will increase the level of trust in such systems.

Is it possible to prevent the dishonest use of AI systems?

This question is relevant to Russia and the rest of the world, since technological progress potentially comes with the risks of human rights violations. At the same time, such systems will allow the state and society to rise to new levels of economic development.

We need to make proper use of the experience gained in the regulation of other technologies that were new to their time, such as nuclear energy or air transportation. In order to make AI safe, we will need three key elements: technical requirements and standards, including standards of preventive regulation in critical areas; industry rules covering the application of AI in certain fields, taking into account a risk-oriented approach; and finally, general legal regulation, which outlines the scope of liability and responsibility.

Back to the topic of responsibility: who should be responsible for the negative outcomes of the application of AI?

Of course, this is one of the most important questions. No trust or development will be possible without an understanding of how liability for harm will be assigned. The question of liability is particularly important in the context of the launch of an experimental legal framework for driverless vehicles.

Different approaches are being discussed worldwide. There are extreme ones, such as the suggestion that AIs should be held responsible for harm caused. But most people say that the responsibility for damage caused by AIs should lie with a human: an individual or a legal entity, not a machine. At the moment, we’re inclined to share the latter attitude, but it is hard to predict how the law will evolve in the long term.

In any case, I would advise anyone who uses AI to get third-party liability insurance.

How should copyright for works created using AI be regulated?

AI is widely used to create intellectual property objects, from films to inventions. If this trend develops further, it might lead to the displacement of humans from the creative sphere, which is undesirable. Yet the use of AI in this field is rarely disclosed, since intellectual property rights are granted only to works created by a human.

I believe we need to treat the use of AI in the creation of intellectual property as a specific field. We have to be clear that products of intellectual work can be created with the use of AI, but with shorter terms of protection and a narrower scope of rights than humans have under current law. Such rights can be granted to an individual, just as they are to someone who writes a program or compiles a database. In this case, investments in AI will be protected, while individuals will remain interested in personal participation and creative work. This will also help protect the rights of consumers, since they will be informed about how intellectual property, such as films or music, was created.

What do you think about the adoption of the AI Code of Ethics and its possible impact on the use of AI?

My colleagues and I actively participated in developing this code together with the AI Alliance. I believe that the code plays an essential role in building a comprehensive regulatory system. Its provisions will be reflected in the technical standards and legal regulation of AI use. Ethical rules will be considered in the creation of legal norms, as has always been the case in the history of law.

Which provisions of the code do you believe to be the most important?

I think that after being discussed on various platforms, the code contains everything that matters. I would pay attention to the section about the practical implementation of the code’s principles in the work of companies that adopt it, as well as about creating a system to promote its use by market participants.

It’s difficult to imagine modern life without social media. Is it possible to talk about privacy now? What counts as a privacy violation?

We can see that many users don’t even realize that the free services offered by social media are actually paid for with their time and attention, which provide valuable information to advertisers. But it is rather difficult to draw the legal boundary between publicly available information and private life. Some people like to show others how they spend their vacation or show off their pets, while others prefer not to reveal these things or talk about themselves or their families. Users have the right to establish these boundaries, and others have to respect their choices. Ideally, social media platforms have a responsibility to ensure that the personal boundaries established by each user are not violated.

What counts as a violation of privacy? When we talk about danger to life and health, or about blackmail (regardless of the methods used), these things are punishable under civil or criminal law. But the problem can’t be solved through legal methods alone: the solution lies at the intersection of ethics and law. We need to be persistent in communicating the rules of privacy in cyberspace to users and the media.

We see that these topics are highly relevant and there is a demand in society to learn more about such problems. The institute is going to study topics that are important to many people: data leaks, protection from social media bullying, etc.

The Institute for Law in the Digital Environment and the Laboratory for Digital Law and Ethics are organizing a nationwide online research conference: World of Humans and Machines: Ethical and Legal Aspects of Digital Transformation. The event is supported by the Znanie Association.
