Andrew Roy

Canada's Flawed Regulation of Artificial Intelligence



If you enjoy this content, please consider signing up. Creating a member account is free, and you will

· receive new content delivered directly to your inbox;

· have exclusive access to members only content;

· gain access to our online booking tool;

· collect 500 bonus points in our points program.

 

Regulation of AI

The development and launch of ChatGPT has triggered an arms race in the field of artificial intelligence (AI) and intensified debate over whether and how AI should be regulated. Recently, Elon Musk and other prominent figures signed an open letter calling for a pause of at least six months on the training of AI systems more powerful than GPT-4, citing the potential risks such systems pose. Governments worldwide are attempting to develop regulatory frameworks, but the rapid evolution of the industry makes creating an effective framework challenging. The EU's attempt to regulate AI, for example, has faced significant setbacks, with the emergence of ChatGPT upending its existing plan to regulate the technology.


The Artificial Intelligence and Data Act

The Canadian federal government aims to lead in AI regulation and, on June 16, 2022, introduced Bill C-27 in Parliament; Part 3 of the bill is the Artificial Intelligence and Data Act (AIDA). However, many commentators have identified serious concerns with the bill, including a lack of consultation, the exclusion of the public sector from its scope, the offloading of the bill's most important work to its regulations, and an unclear enforcement model.


Lack of Consultation

The first and most significant issue with AIDA is the lack of meaningful consultation before its development and drafting: the federal government failed to engage stakeholders before creating the bill. Given the complexity and ever-changing nature of AI, consultation is necessary to harness its vast potential for Canadians while mitigating and addressing the risks and challenges that accompany it. Without that consultation, AIDA is likely to create a regulatory regime that drives investment away from Canada while failing to protect Canadians.


Exclusion of the Public Sector

The second issue with AIDA is the exclusion of public sector entities from the bill's scope, including government agencies, government departments, and federal political parties. This exclusion means the federal government can design and develop AI systems that affect the entire Canadian population without oversight or regulatory approval. At a time when people are anxious about and distrustful of government overreach into their privacy, this exclusion is troubling. The federal government must maintain public trust by subjecting itself to the same regulatory regime as private industry. The concern is heightened by the fact that AIDA does not appoint an independent ombudsman to oversee reporting, compliance, and enforcement.


The Regulation Performs the "Heavy Lifting"

The third issue with AIDA is the offloading of its most important work to its regulations. The bill creates a regulatory regime triggered by a "high-impact system," yet it neither defines that term nor provides guidance on what such a system looks like. This raises serious questions: is "high impact" a bright-line test or judged on a sliding scale, how is impact measured, and how are high, medium, and low impact distinguished? Answering these questions is left to the regulations, which shortcuts meaningful debate in Parliament and casts a shadow of uncertainty over the scope and effectiveness of AIDA. That uncertainty will hamper private industry's ability to prepare for the new regulatory regime.


Onerous and Confusing Enforcement

The final issue with AIDA is its enforcement model, which includes strict criminal penalties for non-compliance. The criminal offenses are mens rea offenses that carry the possibility of jail time. These strict penalties are paired with uncertainty about when a person must notify and report: "material harm," another term mentioned but not defined in the bill, is again left to the regulations, leaving unanswered questions about what actually constitutes material harm. Combining strict penalties with an uncertain notification and reporting regime will lead to overreporting, which translates into increased regulatory burden and costs and deters AI start-ups from investing in the Canadian market.


Scrap AIDA and Restart

In light of these fatal problems, the federal government must scrap AIDA and start over, rethinking and reworking the regulatory regime beginning with meaningful consultation with stakeholders. The lack of consultation, the exclusion of public sector entities, the offloading of the bill's most important work to its regulations, and the unclear notification and reporting regime together mean that AIDA will drive investment away from Canadian markets while providing dubious protection for the Canadian population.

 

Enjoyed this article? Sign up to receive 500 points, gain access to our online booking tool, and have new articles delivered directly to your inbox.


Do you need bespoke advice on ensuring your company remains compliant with Canada’s privacy and technology regulatory regime? Andrew Roy Legal offers professional advice on all areas of intellectual property and technology law. Book a free consultation today!



The information in this article is not legal advice and does not establish an attorney-client relationship.


© 2023 Andrew Roy


