Use it or lose it, says report on Canada's AI advantage

Mark Lowey
June 8, 2022

Canada risks losing its competitive advantage in artificial intelligence unless it takes “decisive steps” to move beyond the country’s existing strengths in AI, according to a new report by the Council of Canadian Academies (CCA).

To date, investment and growth in AI have been focused heavily on research and talent, said the report, produced by a nine-member expert panel for the National Research Council of Canada.

The NRC asked the CCA to examine the legal, regulatory, ethical, social and policy challenges associated with deploying AI technologies to enable scientific and engineering research design and discovery.

“There’s a pressing need to better integrate knowledge and skills across multiple disciplines for the responsible development and use of [AI technology] in a broader way,” said the CCA’s report.

Canada has seen “vertical growth” in AI research, developing real strengths in certain areas such as robotics and computer vision, said Teresa Scassa, SJD, Canada Research Chair in Information Law and Policy in the Faculty of Law at the University of Ottawa, who chaired the expert panel.

“We need to shift from vertical AI growth and investment to a more horizontal approach,” Scassa told Research Money. “It’s not just expanding on other certain areas of strength, but it’s horizontal growth geographically, across the country and across disciplines.”

An important theme in the report is the need to take a transdisciplinary approach to AI research as well as AI implementation across sectors, Scassa said.

This must include incorporating humanities and social science research — not only science and engineering “alone in their silos” — into AI innovation at its inception, to help address the social and ethical implications at the earliest stages of development through to application, she said.

Shifting to more horizontal growth in AI also means moving from the research context to the commercial environment and addressing challenges such as intellectual property ownership, governance of IP, and regulatory approval systems, she added.

“There is work that needs to be done on those laws and policies around commercialization in Canada,” Scassa said.

AI poised to transform research community

The explosion of AI innovation will result in systemic transformations to the way research is done and to the research community in Canada, according to the report.

AI has transformative potential in terms of spurring innovation and furthering scientific understanding, perhaps going beyond human capabilities, Scassa said. However, she added: “How do you embrace the power of this technology and do it in a safe and responsible way?”

According to the report, AI could drive future scientific investigation by allowing for automated hypothesis generation, experiment design, experimentation, interpretation, and analysis. But researchers will be challenged to understand and assess the quality and reliability of research carried out using AI.

Using AI in the research process complicates issues such as reproducibility, explainability and accuracy, stated the report, which suggested these matters would require an update to the Tri-Agency Framework for Responsible Conduct of Research. “The concept of research excellence may need to be revised if AI systems take on greater roles in driving research.”

Data stewardship and data management principles, including for ownership of and access to data, will need to be implemented to facilitate responsible and ethical data sharing and use, said the report.

Researchers will face methodological and ethical challenges, both in the types of research to be undertaken and in the impacts of that research, including the potential for bias and discrimination, Scassa said.

“We are probably on the cusp of much greater use of AI for things like peer review, or evaluation of grant applications, or the evaluation of researchers for funding or for chairs,” she noted.

While AI could speed up and even enhance such processes, the technology carries with it the risk of bias and discrimination within the research context as well as in the broader society, Scassa said.

AI could perpetuate existing biases, expand “digital divide”

The lack of gender and racial diversity in the field of AI research already is well documented, according to the report.

For example, in an area of research that has traditionally been dominated by men, training an AI algorithm on data sets drawn from past research could embed biases toward male researchers in the algorithm.

“By looking for certain attributes and career patterns, that [algorithm] might actually then perpetuate discrimination against female researchers,” Scassa said.

Another example is using algorithms that look for certain continuity in research output. Young women researchers might not have such continuity because they’ve taken maternity leave, Scassa said. “Therefore, their continuity is not there and they’re given a lower rating or ranking.”

The report pointed out that there are currently high levels of inequality in the existing distribution of resources, infrastructure, and skills in the context of the production, dissemination, and use of AI for scientific research.

AI could expand this “digital divide,” given the high cost of computational resources and increased competition, especially when public investment has primarily benefited the private sector rather than universities or the public sector, said the report.

There is also potential for bias where researchers work at institutions with less capacity to use AI, Scassa said.

“So people may be proposing research projects that don’t rely heavily on AI and may not be able to get funding for that research, because the determination is made that the best method for doing that research is to use AI,” she said.

Other issues highlighted in the report included:

  • the need for transparency in sharing AI-related code and data;
  • the need for harmonized standards for data, privacy protection and regulation;
  • AI’s impact on the labour market in science and engineering (the report found that while some job displacement is inevitable, the primary effect of AI on scientific and engineering occupations will be job transformation);
  • the need to accelerate development of the legal and regulatory frameworks that govern AI systems, because technological development is currently outpacing such frameworks.

“There is no single law for AI regulation or governance in Canada,” noted the report. “The federal-provincial/territorial division of powers presents challenges to the creation of a single regulatory framework.”

R$

