Author: Cao Jianfeng, senior researcher at Tencent Research Institute
Responsibility for artificial intelligence accidents is the core issue in the AI era
After the "golden decade" of deep learning, the AI era has arrived. AI systems have become one of the most significant and important technological objects in the 21st century, constantly spawning various new technologies. Intelligent products, services and applications, such as Robotaxi, AI companion applications, humanoid robots, etc. Moreover, under the guidance of the scaling law, AI technology represented by large models is still accelerating its development, even triggering differences in development concepts of "AI accelerationism vs. AI value alignment." Well-known experts and scholars in the field of AI have predicted the arrival time of AGI and envisioned the changes and impacts in the next ten years. People may not be able to fully predict what changes and impacts this AI revolution will bring, but we should at least not underestimate the long-term impact of AI technology.
At present, the development and application of AI technology are not only giving products increasingly independent intelligence, but also accelerating the transition from the Internet era to an AI society, or algorithmic society, organized around algorithms, robots and AI agents. Algorithms have thus become the core technical factor supporting networking, digitalization and intelligence. This can bring significant improvements in safety and efficiency, but it cannot eliminate all accidents and risks. There is no absolutely safe technology; while technology reduces or eliminates risks in some respects, it may create new risks in others. In recent years, AI accidents that harm the rights and interests of others have increased rapidly, from safety accidents involving self-driving cars and physical robots, to misdiagnoses by AI diagnostic and treatment software, to algorithmic discrimination and unfair decisions in various automated decision-making systems. It can be said that AI accidents and AI infringement are increasingly becoming the "new normal" of the AI society. According to the OECD's monitoring of AI incidents worldwide, the number of AI incidents has grown rapidly since January 2014, reaching a cumulative total of 7,195 by December 2023. For example, since October 2024 the AI chatbot platform Character AI has faced controversies such as allegedly contributing to a teenager's suicide in the United States, and has become a defendant in at least two lawsuits in which the plaintiffs claim that Character AI has design defects and is a defective product, asking its developers to bear product liability.
Today, with artificial intelligence applications everywhere, people must confront the legal liability issues raised by AI accidents and AI infringement. When AI systems cause accidents and damage, the law must provide fair and effective relief to victims. But the question is: who should be responsible for AI accidents and AI infringement? Highly autonomous AI systems may act or make decisions independently, without direct human control, intervention or supervision. This means that, in the AI context, as behaviors and decisions shift from humans to intelligent systems, accidents and damage also begin to shift from being caused by humans and human conduct to being caused by AI systems and their behavior. This shift challenges the allocation and assumption of legal responsibility. The challenge lies not only in the difficulty of identifying the responsible party, but also in the difficulties that the autonomy, inexplicability and unpredictability of AI systems create for proving fault, defect, causation and other elements of liability, and even more in how responsibility is to be borne (for example, how to impose punitive measures such as behavioral bans and shutdowns on AI systems).
Are the three new options for AI tort liability really feasible?
In response, some have proposed establishing a new liability system for AI infringement, which roughly includes three options.
The first is the legal personality approach. Simply put, it grants the AI system legal subject status so that the AI system can directly bear legal responsibility for its own actions. Treating autonomous, complex AI systems as independent legal subjects, thereby transferring tort liability from humans to artificial intelligence, is a very tempting idea. EU lawmakers once proposed creating a special legal status of "electronic persons" for autonomous robots, but ultimately rejected the idea. Some scholars have proposed giving AI systems a legal-person status similar to that of a limited liability company (LLC) to solve the liability problem. People imagine that just as large models may make "one-person companies" a reality, the future development of artificial intelligence may also make "zero-person companies" a reality, that is, an AI system with the ability to act autonomously (agentic AI) could run a company independently without any human employees.
The second comprises new liability schemes such as vicarious liability and no-fault liability for high-risk AI. According to one theory, especially in the context of AI that substitutes for humans, if an enterprise uses an AI system to replace human employees, it should bear vicarious liability for the actions of these so-called "AI employees," because this is consistent with the principle of functional equivalence. As the capabilities of large models continue to grow, we can envision a future in which people may not only have personal AI assistants that can truly act on their behalf, but may also work and collaborate with so-called "AI colleagues." It therefore seems reasonable to hold operators vicariously liable for the actions of "AI employees." Another idea, following the risk-based path of AI regulation, is to have providers, owners, users and other entities bear no-fault liability for damage caused by high-risk AI systems. For example, the core idea of the EU Artificial Intelligence Act is to classify AI by risk, focus safety regulation on high-risk AI systems, and prohibit AI systems posing unacceptable risks.
The third is the insurance approach. For damage caused by fully autonomous AI systems, one could consider using insurance-based no-fault compensation mechanisms, such as social insurance and compensation funds, to completely replace the tort liability regime, because bypassing tort law would avoid many of the difficulties of applying existing liability rules to artificial intelligence. In the past it was not uncommon for no-fault compensation mechanisms to completely replace tort damages; similar practices have existed in the fields of work injuries, traffic accidents, medical injuries and vaccine-related damage.
Establishing an AI tort liability system requires dispelling several misconceptions
However, these new AI tort liability schemes are too radical and can hardly strike a balance between security and freedom. They are not only inconsistent with the social reality that we are still in the early stage of the AI revolution and in the era of weak artificial intelligence, but are also based on several misconceptions about the attribution of liability that need to be avoided.
Myth 1: Attributing liability to artificial intelligence itself.
Attributing responsibility to the AI system itself means treating the AI system as a legal subject. At this stage, however, legal personality for artificial intelligence is morally unnecessary and legally asking for trouble. Most arguments in favor of AI legal personality are at once too simple and too complex: too simple because "AI" is a vaguely bounded sphere in which there is currently no meaningful category that could be recognized as a legal subject; too complex because many arguments are variations on the "robot fallacy" (such as believing that robots will be just like humans) and rest on specious assumptions about the future development of artificial intelligence. At present, granting legal personality to AI systems is not a "panacea" for the responsibility for their "behavior"; instead, it may open a "Pandora's box" and trigger a series of new legal and ethical problems. In particular, AI legal personality could easily be abused and become a mechanism for avoiding and shifting legal responsibilities and obligations. In other words, AI legal personality may become a kind of "legal black hole," an entity that sucks away the legal responsibility of human actors without leaving any trace of accountability. In short, artificial intelligence, as a human undertaking, is only a tool for serving humans and achieving human purposes, no matter how complex, intelligent or advanced it becomes; its proper place is that of a legal object that promotes human well-being. Fundamentally, we need to develop tool AI rather than so-called subject AI that fully approximates humans.
Myth 2: Linking the public-law concept of AI risk classification to AI tort liability rules.
One of the main ideas in global AI regulation is "risk-based regulation," applying differentiated oversight to AI systems with different levels of risk. The EU Artificial Intelligence Act is the typical representative of this approach: it divides AI systems into four categories by risk level, namely unacceptable-risk AI, high-risk AI, limited-risk AI and minimal-risk AI, and focuses on the requirements and obligations of the relevant operators (providers, deployers, etc.) of high-risk AI. The criterion for high-risk AI is that the system poses a significant risk of harm to the health, safety or fundamental rights of natural persons. Under this regulatory approach, people tend to link the risk level of AI systems to principles of liability, for example linking high-risk AI to no-fault liability and low-risk or non-high-risk AI to fault liability or presumed-fault liability. The draft regulation on liability for the operation of AI systems previously proposed by EU legislators is the typical representative of this approach to attribution. However, uniformly matching the risk-based classification of AI under the public-law regulatory framework to different liability rules is unreasonable; it is in fact a mismatch. The main reason is that high-risk AI cannot simply be equated with the abnormally dangerous objects or activities targeted by traditional no-fault liability theory; on the contrary, the introduction of artificial intelligence may change people's understanding of what is dangerous, since so-called high-risk AI may in fact be safer than comparable objects or activities controlled and operated by humans. In other words, so-called high-risk AI is actually designed to reduce risk and increase safety, and is in fact safer than the human activities it replaces.
Myth 3: Evaluating the "behavior" of an AI system under a negligence standard.
When an AI system causes an accident or damage, how to evaluate the "behavior" or performance of the AI system is a key question. Some have proposed applying the negligence principle to the AI system itself: by analogy with the "reasonable person" standard used to judge whether a human actor is negligent, a "reasonable robot" standard could be used to judge whether an AI system was "negligent," in order to limit the liability of the relevant parties for AI systems. For example, in the earlier U.S. case of Nilsson v. Gen. Motors LLC, the plaintiff sued over an accident involving a self-driving car from GM's Cruise but did not bring a product liability claim, instead choosing a negligence theory: the motorcyclist claimed that the autonomous vehicle drove in such a negligent manner that it veered into an adjacent lane and knocked him down without regard for passing traffic. This may be the first time in history that a robot has been formally accused of negligent operation, a tort allegation once reserved for human actors. However, this approach to attribution should be rejected. Even though the determination of negligence in modern tort law has become increasingly objective, the concept of negligence always points to the conduct of human actors and is tied to human subjectivity; applying a negligence standard to the "conduct" or performance of an AI system is unrealistic. Foreseeably, as the autonomy of AI systems increases, courts in many future AI accident cases will shift from evaluating the conduct of users (such as drivers) to evaluating the behavior of AI systems (such as autonomous driving systems), and the "behavior" or performance of an AI system should be evaluated from the perspective of product defect rather than fault. This requires us to promptly update the product liability system built for traditional industrial-era products.
Myth 4: Imposing vicarious liability on entities that deploy and operate AI systems based on the principle of functional equivalence.
The principle of functional equivalence holds that if the use of autonomous technology such as an AI system is functionally equivalent to hiring human auxiliaries and causes harm, then the liability of the operator who deploys and uses the technology should correspond to the existing vicarious liability of a principal for its human auxiliaries; that is, the operator of the AI system should bear vicarious liability for damage caused by the AI system. However, this line of thinking is asking for trouble. The analogy of liability based on functional equivalence may seem reasonable at first glance, but it is not feasible in practice. The functional equivalence theory focuses only superficially on the substitution effect of the technology and fails to look into who actually creates and controls the risk behind this technological phenomenon. For example, in the pre-AI era, when factories used automated equipment to replace workers and that equipment malfunctioned and caused damage, the victim would consider pursuing product liability against the manufacturer of the equipment rather than holding the factory vicariously liable for it. Although the risk profiles of AI systems differ, they are simply more advanced and smarter tools than traditional automated equipment, which means one needs to cut through the fog of functional equivalence and examine which parties (generally, the providers and users of the tools) create or control the risk. Ultimately, people simply want someone to be held accountable for the damage caused by AI systems, not to hold AI systems accountable in the same way as human actors.
Where is the road ahead for the tort liability system in the AI era?
Although artificial intelligence poses challenges to the effective application of the current tort liability system, this does not mean that we need to start from scratch and adopt a new liability scheme. On the contrary, at this stage, by making necessary adjustments to existing tort liability rules such as fault liability and product liability, we can adapt the tort liability system to the development needs of the AI era and achieve a balance between safety and innovation.
First, adhere to the legal-object status of artificial intelligence and implement human responsibility for AI accidents and AI infringement. As a matter of technical reality, no matter how advanced and intelligent current AI systems are, someone must always develop them and put them into use. Specifically, although the AI value chain is complex, we can relatively clearly distinguish two groups: the provider camp and the user camp. This distinction is legally meaningful because, within each group (for example, between producers and suppliers, or between owners and users), liability can be assigned relatively easily to one member through contractual instruments or shared among several members. The EU Artificial Intelligence Act, for example, distinguishes between AI providers and AI users (deployers of AI systems) and focuses on imposing obligations and responsibilities on these two types of actors. Therefore, for the purposes of tort liability, establishing standards for identifying and determining AI providers and AI users is necessary and important.
Second, innovate the product liability system for the AI era. In many specific usage scenarios of AI applications, users still need to fulfill certain duties of care (such as using the system for its intended purpose, ensuring data quality, monitoring and maintenance) and thus retain some control over the use of artificial intelligence. In the long run, however, users' duties of care will diminish, which means that users' liability may also shrink accordingly. As the role and control of AI owners and users continue to weaken, the liability of AI providers may move to the center stage of tort liability law. As a new type of "smart" product, AI systems call for necessary innovation in the existing product liability system, including the concept of a product, the definition of producer, defects, compensable damage, causation, the burden of proof and so on. For example, while formulating the world's first comprehensive artificial intelligence act, EU legislators also comprehensively revised the EU Product Liability Directive introduced in 1985, aiming to establish a new product liability regime for the digital and AI era. At the same time, EU lawmakers have been preparing an AI Liability Directive, which aims to establish clearer and more workable rules on the liability of AI users.
Third, insurance should serve as a useful supplement to the AI liability framework, not as a substitute for it. As a risk management tool, insurance plays an important role in helping new technologies integrate safely into society, for example by providing financial security that stimulates innovation and supports the safe deployment of new technologies. With appropriate adjustments and regulatory intervention, insurance can continue to support technological innovation while providing necessary protection to society. Existing insurance arrangements can be applied to AI systems, and there is no need for a dedicated or comprehensive AI insurance regime for now. At the same time, we should be cautious about introducing compulsory insurance for AI applications, lest it backfire and hinder the adoption and popularization of AI technologies that can bring significant economic and social benefits.
Fourth, beyond the AI tort liability system, we need to pay attention to and actively address the safety risks of frontier AI. In AI governance, AI tort liability rules are necessary but their role is limited: they can effectively deal with the risk that AI systems damage people's personal and property rights and interests, but they can hardly play a substantial role against the extreme or catastrophic risks that frontier AI such as superintelligence may bring. Under the accelerating development of AI, superintelligence is already on the horizon, and its potential safety risks are increasingly receiving attention from governments, research communities and industry around the world. Foreign AI experts have pointed out that, in the long run, most people underestimate how serious the safety risks of superintelligence may be. It is therefore particularly important to actively advocate for and build wellbeing AI, using artificial intelligence to maximize personal, social and environmental well-being, and to integrate the concept of human-machine alignment, including AI value alignment, into the development of superintelligence.
The content of this article is for academic discussion only and does not represent the views of the author's employer.