HSE Experts Determine Who Should Be Liable for AI Actions
According to experts at the Digital Environment Law Institute of HSE University, Artificial Intelligence (AI) may soon become a participant in legal decision-making, with AI systems entrusted with legal authority. The issue of liability for ‘automated decision-making’ should therefore be addressed. HSE experts have presented a special report that examines whether AI designers should be liable for the automated systems they create and what the government can do to foster AI investment projects.
According to the report, entitled ‘Legal Aspects of Using Artificial Intelligence: Urgent Challenges and Likely Solutions’, the world will soon see a process of gradual legal recognition of certain AI ‘actions’ and their consequences, as well as the formalization of such actions. The rapid development of artificial intelligence technologies calls for identifying the legal status of AI systems with different degrees of autonomy: according to some predictions, by 2075 the thinking processes of robots will no longer be distinguishable from those of humans.
The experts emphasize that ‘making legally important decisions may be of a law enforcement character, which means that the AI systems will be given legal authority. If so, it is necessary that the issue of liability for such automated decision making be addressed.’
The increased use of AI presents a number of challenges. One of the most serious is achieving a balance between economic interests and the interests of society, which reflects a deep conflict between business and ethics. In Russia, ethical concerns related to digital technologies developed and used in the country are not currently at the top of the agenda for government authorities, businesses, or even society, the authors of the report say.
Nevertheless, the government will have to look into this matter. When considering AI liability, it is appropriate to speak primarily of tort liability, that is, liability measures should be established as a response to the damage that AI may cause now or in the future.
‘Should AI be recognized as an object of law only, which is what the world is inclined to do so far, liability should extend throughout the different AI life cycle stages (the design stage, the operation stage, the decommissioning stage, etc.),’ the report states. At the same time, AI liability should not be reduced to punitive and educational measures; rather, the aim is to establish an effective mechanism for risk management, HSE experts believe.
In light of the possible imposition of liability on AI designers, it is necessary, at least in the initial stages, to provide them with a balanced system of immunity, complemented by mandatory liability insurance and registration of AI systems. If AI is deemed a subject of law, a joint liability regime could be established, under which vicarious liability may be borne by the AI designer, the owner, or any other subject of law, the study says.
HSE experts also proposed ways of removing barriers to investment projects in the field of artificial intelligence and robotics. In particular, it would be reasonable to create a pilot legal regime to facilitate the development of new technologies, as well as special legal regimes that attract investment and provide tax incentives.
Researchers from HSE University and Moscow Polytechnic University have discovered that AI models cannot replicate certain features of human vision because they lack a tight coupling with the underlying physiology, which makes them worse at recognizing images. The results of the study were published in the Proceedings of the Seventh International Congress on Information and Communication Technology.
The SNAD team, an international network of researchers including Matvey Kornilov, Associate Professor of the HSE University Faculty of Physics, has discovered 11 previously undetected space anomalies, seven of which are supernova candidates. The researchers analysed digital images of the Northern sky taken in 2018, using a k-D tree and the ‘nearest neighbour’ method to detect anomalies. Machine learning algorithms helped automate the search. The paper is published in New Astronomy.
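The idea behind nearest-neighbour anomaly detection is that an unusual object sits far from its closest neighbours in feature space. The sketch below is a minimal illustration of that scoring principle, not the SNAD pipeline itself: the data, function name, and parameters are invented for the example, and it uses brute-force distance search for brevity where the researchers used a k-D tree for efficiency (both return the same neighbours).

```python
import math

def knn_anomaly_scores(points, k=3):
    """Score each point by the mean distance to its k nearest neighbours.

    Larger scores mean a more isolated point, i.e. a stronger anomaly
    candidate. Brute force (O(n^2)) for clarity; a k-D tree computes the
    same neighbours in roughly O(n log n).
    """
    scores = []
    for i, p in enumerate(points):
        # Distances from p to every other point, smallest first.
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        scores.append(sum(dists[:k]) / k)
    return scores

# Toy 2-D feature vectors: a tight cluster plus one distant outlier.
data = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (5.0, 5.0)]
scores = knn_anomaly_scores(data, k=2)
outlier = max(range(len(data)), key=scores.__getitem__)
print(outlier)  # the isolated point at (5.0, 5.0), i.e. index 4
```

In a real survey pipeline the points would be high-dimensional light-curve features rather than 2-D coordinates, and candidates with the highest scores would be passed to astronomers for inspection.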
Tharaa Ali, from Syria, got her Master’s in Moscow and is now a first-year PhD student in informatics and computer engineering at HSE University. Below, she talks about the use of AI in learning, winning scholarships, and doing research at HSE University.
Whether researching how the human brain works, identifying the source of COVID-19, running complex calculations or testing scientific hypotheses, supercomputers can help us solve the most complex tasks. One of the most powerful supercomputers in the CIS is cHARISMa, which is now in its third year of operation at HSE University. Pavel Kostenetskiy, Head of the HSE University Supercomputer Modeling Unit, talks about how the supercomputer works and what kind of projects it works on.
The HSE Centre for Artificial Intelligence, together with its partners in industry, is working on 25 applied projects in the fields of telecommunications, finance, education, medicine, etc. The results of the work by researchers and developers were recently presented at a meeting of a Russian government working group. That meeting summed up the initial results of the federal Artificial Intelligence project, part of the national Digital Economy programme.
A research team from the HSE University Artificial Intelligence Centre led by Ivan Yamshchikov has developed a model to predict the success of efforts to rehabilitate homeless people. The model can predict the effectiveness of the work of organisations for the homeless with about 80% accuracy. The project was presented at a conference dedicated to the activities of social centres.
The Federal Brain and Neural Technology Centre at the Federal Medical and Biological Agency is launching the Laboratory of Medical Neural Interfaces and Artificial Intelligence for Clinical Applications, which has been created by employees of HSE University. Read below to find out about the Laboratory and its objectives.
In Mexico, a pilot project applying artificial intelligence (AI) algorithms enabled the Tax Administration Service to detect 1200 tax-evading companies and 3500 fraudulent transactions within three months – a task that would have taken 18 months using conventional methods. Despite some obvious benefits, the use of AI-based solutions to counter corruption also entails several challenges, according to experts of the HSE Laboratory for Anti-Corruption Policy (LAP) and the HSE Faculty of Law who have examined the relevant experience of several countries. A report based on the study’s findings was presented at the XXIII Yasin (April) International Academic Conference hosted by the Higher School of Economics.
Innopolis University has announced the results of the Global AI Challenge, an international AI industry online hackathon in which teams of developers compete to create new materials using artificial intelligence. The DrugANNs team, which included students from the HSE University Faculty of Computer Science, took third place.
In recent years, advanced technologies for creating deepfake images have made it almost impossible to distinguish them from real photos and videos. Researchers discussed the future development of deepfakes and how to protect yourself from new types of fraud during the round-table discussion ‘Fake News: An Interdisciplinary Approach’ as part of the XXIII Yasin (April) International Academic Conference on Economic and Social Development.