Release of Report on Artificial Intelligence and Civil Liability

May 2, 2024

BY Greg Blue

If you are injured by a robot waiter, hit by a driverless taxi, or libelled by a chatbot, what should the legal consequences be? With BCLI’s release (in May 2024) of the Report on Artificial Intelligence and Civil Liability, we offer answers to this question and many others regarding civil justice in cases involving harm caused by AI and AI-directed machines.

The foibles of large language models have shown the world that AI isn’t infallible and that its benefits come with substantial risk. More than a year before ChatGPT’s debut at the end of 2022, BCLI had already begun the Artificial Intelligence and Civil Liability Project to address the civil justice implications of AI. For this project, we formed a highly qualified project committee with expertise in computer science, engineering, medicine and law. The resulting report reflects two years of collaborative efforts by this interdisciplinary committee to explore and find solutions for thorny legal issues emerging from the risks created by autonomous AI. Here, “autonomous” means “operating with minimal or no real-time human direction or control.”

When you consider all the ways in which AI is being used, it becomes evident that the scope for harm from faulty automated decision-making goes well beyond the occasional generative AI flight of fancy. AI is used in making medical diagnoses and treatment decisions, vetting job applications, finding new drugs, determining eligibility for benefits and operating driverless taxis, to name only a few of its many applications.

Examples of AI gone wrong and harming ordinary individuals aren’t hard to find. A chatbot intended to replace human hotline operators gave dangerous dietary advice to distressed people with eating disorders. Facial recognition systems have led to numerous false arrests due to misidentification. A woman walking a bicycle was hit and killed by an autonomous vehicle because the vehicle’s system could recognize a pedestrian or a bicycle in motion, but not the combination of the two. Amazon’s AI-based experimental system for vetting job applications discriminated against female candidates by treating female gender as a negative criterion. An AI system for predicting recidivism, used to recommend the grant or denial of bail, displayed a marked racial bias, underpredicting recidivism for white individuals and overpredicting it by a similar percentage for black individuals. In a US city’s school system, teachers were fired en masse based on performance evaluations by an unreliable algorithm that failed to produce consistent results when given the same input data on separate occasions.

When loss or damage occurs because of wrongful behaviour, the branch of law known as torts comes into play. Torts are non-contractual civil wrongs, such as negligence, trespass, assault and battery, or defamation (libel and slander), that entitle the victim to sue the wrongdoer for compensation in the form of damages, or sometimes for court orders prohibiting ongoing wrongful conduct. The rules of tort law have developed over centuries to address the harm caused by humans. When people use AI intentionally to harm others, such as by creating deepfaked photos from their images, the usual rules can still be readily applied. In this scenario, AI is just a tool of a human tortfeasor (the term for someone who commits a tort).

The application of tort law is more problematic when damage or loss is caused by AI making autonomous decisions without human intervention or oversight. That’s because its rules depend on concepts like fault, causation, intention, foreseeability and standards of care, which are inherently associated with human reasoning, behaviour and experience. When non-human behaviour and decision-making are in question, however, the concepts underlying the law of torts come somewhat unstuck.

Newer AI systems with machine learning capabilities do not require all their logical processes to be pre-programmed by humans. Instead, they derive inferences from input data they process, create internal models of external reality, and apply them to new data and new situations. Their inferences, internal models and other internal decision-making processes are not always explainable. While this gives them tremendous capabilities, it also lessens the predictability of their outputs. They have been known to exhibit unprogrammed, original behaviour called “emergence,” which is sometimes harmful.
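To make the contrast with conventional programming concrete, here is a minimal illustrative sketch in Python. It is our own example, not taken from the report: the scikit-learn library, the made-up driving data and the “brake or continue” labels are all assumptions chosen purely for illustration. The point is simply that the decision rules are inferred from example data rather than written out by a programmer, so the system’s answer to a novel situation is not something anyone explicitly specified.

```python
# Illustrative sketch only: a tiny classifier whose rules are learned from data,
# not spelled out by a human programmer. The library choice (scikit-learn) and
# the data are hypothetical and exist only to illustrate the general idea.
from sklearn.tree import DecisionTreeClassifier

# Training examples: [speed in km/h, distance to obstacle in metres], each
# labelled by a human reviewer as "brake" (1) or "continue" (0).
X_train = [[10, 50], [40, 10], [60, 5], [20, 80], [30, 15], [50, 60]]
y_train = [0, 1, 1, 0, 1, 0]

# The model derives its own internal decision rules from the examples;
# no programmer writes an explicit "if speed > X and distance < Y" rule.
model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)

# A novel situation outside the range of the training examples: the learned
# rules still produce an answer, but nothing guarantees it is the safe one.
novel_case = [[90, 3]]
print(model.predict(novel_case))  # output depends entirely on what was learned
```

The same pattern, scaled up to far larger models and data sets, is why the outputs of modern machine-learning systems can be difficult to predict or explain after the fact.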

Some legal theorists believe autonomous AI entirely overturns basic concepts of fault and causation in tort law, especially in relation to the tort of negligence. There is a school of thought that the developers of an AI system, or anyone using the system for their own purposes, should be strictly liable for anything that the system does. In other words, they would be liable for AI-related harm regardless of whether they are at fault.

Our interdisciplinary project committee disagrees that strict liability is the sole solution to the civil justice challenges that AI presents. Its members rejected the suggestion that AI requires an entirely new law of torts. They developed instead a set of law reform recommendations intended to strike a careful balance between risk of harm and the benefits of ongoing innovation in the AI field. While preserving fundamental concepts, the recommendations set out and explained in the Report on Artificial Intelligence and Civil Liability would adapt the existing law of torts to provide fair and just outcomes in the age of AI.

Download the Report on Artificial Intelligence and Civil Liability from BCLI’s website at www.bcli.org/project/artificial-intelligence-and-civil-liability-project.

