International Call for Six-Month Pause in Giant AI Experiments Shows Importance and Timeliness of BCLI’s AI and Civil Liability Project

April 3, 2023

By Greg Blue

BCLI has had an active project on artificial intelligence (“AI”) and civil liability underway since late 2021. Together with an interdisciplinary expert committee, we have been developing law reform recommendations on how the law of tort needs to be adapted to deal with cases where AI causes harm to persons and property. Two news items in the headlines this week demonstrate the timeliness and relevance of our Artificial Intelligence and Civil Liability Project.

On 29 March 2023, an open letter signed by leading technology experts calling for a pause on giant AI experiments was released.[1] It asks governments and the global research community for a pause of at least six months in training AI systems more powerful than GPT-4, the next iteration of the system that powers ChatGPT.[2] Signatories include Elon Musk, Steve Wozniak, and prominent AI experts and researchers at major AI labs around the world, among them an author of a leading textbook on AI. The open letter raises serious concerns about the uncontrolled development and expansion of AI systems that even their creators cannot “understand, predict, or reliably control.” It emphasizes the need for agreement on safety protocols for the design and development of advanced AI systems to curb what it sees as “the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”[3] Among the essential policy elements it lists for the governance of AI is liability for AI-caused harm.

Through practical, just, and balanced reform, the BCLI Artificial Intelligence and Civil Liability Project aims to ensure that tort remedies will provide effective redress for AI-related harm at a time when AI is increasingly pervasive. We will issue a consultation paper in the spring of 2023 setting out tentative recommendations for public comment. Look for the consultation paper later this spring on the project webpage.


[1] See Pause Giant AI Experiments: An Open Letter – Future of Life Institute.

[2] ChatGPT is currently powered by GPT-3.5. GPT stands for “Generative Pre-trained Transformer.” While ChatGPT was designed to produce text resembling human writing in response to human inputs, GPT-4 is a large general-purpose language model that is able to perform many other functions.

[3] “Emergent capabilities” refers to the phenomenon of “emergence”: unpredictable, original, and unprogrammed behaviour of an AI system or AI-controlled device as it seeks ways of achieving its programmed objectives.