Federal legislation to protect Canadians against AI harms could stifle innovation and productivity, experts say

Mark Lowey
August 7, 2024

Ottawa’s proposed legislation to protect Canadians against harms caused by artificial intelligence could end up hobbling innovation, AI adoption and productivity, say legal and policy experts.

The Artificial Intelligence and Data Act (AIDA) is ill-defined, takes too broad an approach, doesn’t reflect real-world AI development and use, and needs more consultation before it becomes law, panelists said in a webinar hosted by the Ottawa-based Centre for Canadian Innovation and Competitiveness, an affiliate of the Information Technology & Innovation Foundation.

“There was no time to consult, or there were certainly no consultations before AIDA was introduced,” said Michael Fekete, partner in Osler, Hoskin & Harcourt LLP’s technology group.

More than two years ago, when AIDA was introduced in Parliament, there also was no international consensus on what the core concepts and issues in AI regulation could look like.

Now there is some consensus and Canada could improve AIDA by looking to other AI legislative frameworks such as the European Union’s and the United Kingdom’s, Fekete said.

AIDA, which is part of Bill C-27, is currently under consideration by the House of Commons Standing Committee on Industry and Technology.

AIDA has been much debated and criticized since being introduced in 2022. Business groups have said the proposed legislation is too vague, while civil society organizations have complained the government didn’t do any consultations, and that more recent consultations by Innovation, Science and Economic Development Canada (ISED) have skewed heavily toward industry groups.

AI pioneer Yoshua Bengio, a computer science professor at the Université de Montréal and scientific director of the Mila AI research institute in Quebec, publicly urged Ottawa in February to quickly implement legislation to regulate AI, saying rejecting or delaying its adoption “would be taking a terrible risk with the public’s protection.”

However, Fekete said the sense of urgency with which the federal government introduced AIDA – fearing some catastrophic harm caused by AI – was unwarranted because no such harm has occurred in the two years since AIDA was introduced.

“Our urgency to be at the forefront of regulation comes with the risk of being out of step with what’s happening elsewhere,” he said.

Canada now has an opportunity to put the current version of AIDA on “pause,” do some extensive consultation across the AI ecosystem and with different groups, and make the changes necessary before enacting AIDA into law, Fekete said.

“It’s so important that we get it right. It’s important for productivity enhancement in Canada, for economic growth in Canada, and for the AI ecosystem,” he said.

Melika Carroll, head of global government affairs and public policy for Toronto-based AI developer Cohere, said the pan-Canadian AI strategy launched by Ottawa in 2017 has been very successful in making Canada a global hotspot for developing talent and research in AI.

“It’s important that the legislative framework foster that environment, continues that, and keeps Canada in the forefront of AI,” she said.

However, more work is required on AIDA – including clarifying definitions and the obligations of AI developers and users of the technology – to get the balance right between innovation, public safety and mitigating risk, Carroll said. “We need to make sure that the requirements and the obligations are proportional to the risks of each circumstance.”

The legislation should address immediate risks that can be identified and managed, such as bias, privacy, security and how to keep people in the loop for decision-making, Carroll said. “It should be to build trust, mitigate risk, enable innovation – all of those concepts are not very well reflected in this legislation.”

Colin McKay, head of Canadian policy and public relations at Toronto-based autonomous vehicle company Waabi, said AIDA’s broad-based approach to regulating AI technology overall, and the way the legislation is constructed, doesn’t account for the regulatory structures and bodies that already exist in Canada to oversee specific sectors and industries.

For example, Waabi has developed a foundational AI model that it’s integrating into its products for deployment in the physical world – such as in autonomous trucks.

Transport Canada already has a regulatory framework and regulations, and works with counterparts in the U.S. and other countries, to ensure there’s an “absolutely essential safety regime” for on-road vehicles, McKay noted.

AIDA would create a new AI and Data Commissioner within ISED, to oversee and enforce the new law. The most serious violations could result in fines of up to $25 million or five per cent of the offending company’s global revenue.

McKay said AIDA has the potential to introduce “double regulation” or “double regulators” for specific industries and sectors that are only seeking to use AI to improve their products and services in low-risk use cases.

Focus legislation on reducing harms, not broadly regulating AI technology

McKay said if AIDA becomes law, companies will have to spend a lot of time and limited resources – now focused on innovative product development – interpreting the yet-to-come regulations, doing risk assessments, and fulfilling other requirements under AIDA.

“The challenge here isn’t to create regulatory frameworks that define, codify and then guarantee safety within [specific] industries and sectors,” he said.

Rather, he added, the challenge is to arrive at a legislative model that allows for innovation and experimentation, and then full deployment of the technology, with the confidence that AI and product developers understand Canadians’ concerns and are integrating them into the technology and products.

“Unfortunately, we’ve had two years of debate about the legislation and we’re looking at another two years of debate about the regulations, while we have other jurisdictions moving forward with comprehensive [legislative] frameworks,” he said.

Carroll said Canadian AI developers are already investing resources to develop safe and responsible AI models. But Canada is a small market globally, she pointed out.

If AIDA’s broad-based requirements and subsequent regulations become too burdensome, “some companies can choose to avoid this market and not comply,” she said. “But those of us who are established in Canada and growing here have to comply and have to make sure that we’re following these rules.”

“So it’s important that [a legislative framework] not be overly burdensome for a Canadian company versus the rest of the world who are looking to deploy these models.”

For example, under AIDA, every time a company changes its AI model, a third-party audit at some level would be required. But models change continuously, as companies refine and update them, Carroll pointed out.

AIDA leaves unanswered questions such as: what will trigger the audit, what kind of audit, who will do the audit, what sort of expertise will be required, and will an audit still be necessary if a company is just doing an update to improve the model?

“There’s just a lot of work that can be done [with AIDA] to make the compliance with the legislation more effective,” Carroll said.

Fekete said in thinking about a regulatory regime to cover a new technology that offers beneficial uses across society, Canada should be very careful only to regulate as much as necessary to address identified risks.

“We should be focusing on harm in my view,” he said. “[Legislative] requirements should be carefully developed to think about implications and only go so far as necessary to ensure that the regulatory requirement matches the harm or the risk that’s being addressed.”

But with AIDA, all general purpose AI systems will be regulated, he said. The identification and classification of “high-impact” AI systems of most concern is very broad and general, and isn’t tied to specific harm or a harm threshold within individual use cases, he added.

This is bound to create real challenges for industry looking to adopt AI and for investors looking to invest in AI, Fekete said. “If we go too broad with regulation, we’ll undercut innovation, we’ll undercut economic development, we’ll undercut our efforts to improve Canada’s productivity performance.”

What Canada can learn from other countries’ AI regulation approaches

Fekete pointed out that the concept of a user of AI isn’t found in AIDA. Rather, the regulated activity is making available a general purpose AI system or managing the operation of such a system.

However, there are lots of specific use cases where the most beneficial deployments of AI systems will require fine-tuning general purpose AI models provided by AI developers.

For example, a law firm may want to fine-tune a general purpose AI model with data specific to the firm’s operations, to more productively deliver its services to clients.

Under AIDA, the law firm potentially may be the entity responsible for managing the operation of this general purpose AI system. This means the law firm will have to comply with AIDA’s long list of assessments and testing, including third-party audits.

“That creates a real disincentive for industry, including law firms, to fine-tune general purpose [AI] models,” Fekete said. “This is the risk of a broad regulatory approach.”

Similarly, Carroll pointed out that if Cohere had provided the general purpose AI system to the law firm Fekete cited as an example, Cohere wouldn’t be able to manage the operation of that system because the company doesn’t see what its clients are doing with the model.

“Once we ship it to them, in most cases we don’t have access to it anymore. That’s why companies trust it. It’s private, it’s secure, it’s onsite in some cases.”

As for what Canada can learn from other countries, the EU’s AI Act – which came into effect on August 1 – takes a risk-based approach, categorizing AI systems into risk tiers ranging from “unacceptable” and “high” down to “limited” and “minimal” – each with corresponding regulatory obligations. “It focuses on proportionality, it brings the expected response to the risks,” Fekete said.

It is a crucial public policy objective to ensure that Canadian legislative frameworks for regulating AI align and are interoperable with regulatory frameworks adopted by Canada’s major trading partners.

Law firm Osler has done an interoperability comparison between the EU AI Act and AIDA, and found a dozen or so “very critically important distinctions” between the two. The comparison concluded: “The examples highlight that, if enacted, AIDA will impose material regulatory obligations on a substantially broader range of AI systems and machine learning models than under the EU AI Act.”

For example, AIDA is designed to mitigate economic, physical and psychological harm to Canadians from “high-impact” systems, including algorithms used to make determinations related to employment, health care and content moderation on search engines and social media.

But the EU and the U.S., in their approaches to AI regulation, use the term “high-risk” systems – focusing on the risk aspect – rather than the technology-focused “high-impact” systems AIDA uses.

Also, the EU AI Act includes an overarching requirement that for a system to be identified as “high-risk,” there has to be a significant risk to health, safety or the fundamental rights of individuals, Fekete noted.

“We don’t have a corresponding concept in Canada [with AIDA],” he said. “In Canada, the definitions frankly are just much broader than what we see in the EU AI Act. The net being cast in Canada by AIDA is much broader than the EU AI Act.”

Some of the world’s leading thinkers on AI point to broad existential risks from the most advanced “frontier” AI systems – such as AI with human-like intelligence that might harm people.

So there could be rules for and guardrails around those particular systems, Fekete said. “But let’s be very focused and targeted. Let’s limit regulation to what’s needed to address that type of more existential risk.”

Fekete said the U.K. has adopted a pro-innovation approach to regulating AI. Its framework relies on existing individual regulators to develop appropriate rules within their own scope of responsibility.

The framework provides a “backstop” that could quickly fill in gaps if regulators don’t address the harms that need to be addressed or if unforeseen harms arise.

Fekete called this a “very good approach” that focuses on existing laws and regulators, with subject matter experts addressing risks about new technology within their specific areas of expertise.

“To coordinate internationally, you have a light touch approach to regulation. You regulate as you need, but you don’t necessarily regulate everything.”        

Openness needed to make fundamental changes to AIDA

In seeking to improve AIDA, McKay said companies that develop and then deploy AI foundation models and AI adoption tools, and the companies that are prospective users of this technology, “have not had a thorough voice” in AIDA’s drafting or subsequent deliberation.

“The voice of startups and scale-ups in the AI ecosystem or individual researchers have not been represented fairly in the process,” he said.

AIDA needs to be paused until a consultation with these stakeholders is done, along with a detailed analysis of the legislative framework’s impact on innovation and industrial development in Canada, he said.

The other two parts of Bill C-27 – focused on consumer privacy and protection of personal information – have had a thorough consultation and discussion, and should be moved forward, McKay said.

“Unfortunately, if they [government] continue to move forward as three parts [including AIDA] of the same legislative package, we’re going to be looking at band-aid solutions to try and address concerns.”

Last December, François-Philippe Champagne, Minister of Innovation, Science and Industry, proposed several amendments to AIDA in a letter to the chair of the Standing Committee on Industry and Technology.

The amendments included:

  • updating the definition of “high-impact” AI systems to include several classes of high-impact systems that would be regulated.
  • requiring organizations making high-impact systems commercially available to assess and report the impacts of both intended and foreseeable uses of their systems.
  • requiring organizations with general purpose generative AI systems (such as ChatGPT and others now on the market), which can create text, audio, images and video that appear to either depict or to have been created by real humans, to ensure such “person-seeming” outputs can be readily detected by people.

But Carroll, who noted that Cohere has been working with government to try to make changes to AIDA, said the proposed legislation needs “more changes than just fixing it in the regulations. There are significant changes that need to be addressed in the legislation through some process.”

Fekete said there needs to be an openness by government and stakeholders to look at first principles and regulatory objectives and ask if they’re really being met in AIDA.

The government went, in its initial draft of Bill C-27, from “bare framework legislation” to much more detailed prescriptive legislation that’s now seen in AIDA. In doing so, Ottawa lost “in many respects the opportunity to take a more targeted approach as we’ve seen in the EU,” he said.

Fekete said there needs to be openness to making significant changes to AIDA, including to core concepts. This is fundamental, he said, “to ensure we don’t over-regulate or double-regulate, or create disincentives to investment in the AI ecosystem or adoption of the AI tools that will enhance the Canadian economy, help us become more productive, help us overcome some of the really fundamental challenges that we face as a country and as a world.”

R$