HSE Experts Determine Who Should Be Liable for AI Actions
According to experts at the Digital Environment Law Institute of HSE University, Artificial Intelligence (AI) may soon become a participant in legal decision-making, with AI systems being entrusted with legal authority. This makes it necessary to address the issue of liability for ‘automated decision-making’. HSE experts presented a special report that evaluates whether AI designers should be liable for the automated systems they create and what steps the government can take to support AI investment projects.
According to the report, entitled ‘Legal Aspects of Using Artificial Intelligence: Urgent Challenges and Likely Solutions’, the world will soon see a gradual legal recognition of certain AI ‘actions’ and their consequences, as well as the formalization of such actions. The rapid development of artificial intelligence technologies calls for identifying the legal status of AI systems with different degrees of autonomy: according to some predictions, by 2075, the thinking processes of robots will no longer be distinguishable from those of humans.
The experts emphasize that ‘making legally important decisions may be of a law enforcement character, which means that the AI systems will be given legal authority. If so, it is necessary that the issue of liability for such automated decision making be addressed.’
The increased use of AI presents a number of challenges. One of the most serious is achieving a balance between economic interests and the interests of society, which reflects a deep conflict between business and ethics. In Russia, ethical concerns related to digital technologies developed and used in the country are not currently at the top of the agenda of government authorities, businesses, or even society, the authors of the report say.
Nevertheless, the government will have to look into this matter. When considering AI liability, it is appropriate to speak primarily of tort liability, that is, liability measures should be established as a response to the damage that AI may cause now or in the future.
‘Should AI be recognized as an object of law only, which is what the world is inclined to do so far, liability should extend throughout the different stages of the AI life cycle (design, operation, decommissioning, etc.),’ the report says. At the same time, AI liability should not be reduced to punitive and educational measures; rather, it should establish an effective mechanism for risk management, HSE experts believe.
In light of the possible imposition of liability on AI designers, it is necessary, at least at the initial stages, to provide a balanced system of immunity for them, supplemented by mandatory liability insurance and the registration of AI systems. If AI is deemed to be a subject of law, a joint liability regime could be established, in which vicarious liability may be borne by the AI designer, the owner, or any other subject of law, the study says.
HSE experts also proposed ways of removing barriers to investment projects in the field of artificial intelligence and robotics. In particular, it would be reasonable to create a pilot legal regime to facilitate the development of new technologies, as well as special legal regimes to attract investment and provide tax incentives.