New algorithmic assessment tool promotes responsible use of AI in government

Lindsay Borthwick
February 10, 2021

The federal government now has a tool, the first of its kind in the world, to gauge the impact of algorithms used in the delivery of government programs and services.

Treasury Board issued a directive in 2019 to guide the transparent and accountable use of artificial intelligence (AI) in government. The Directive on Automated Decision-Making, which took effect last year, was lauded as an example of innovative policy-making domestically and burnished Canada’s reputation as a world leader in responsible AI.

A critical part of that directive is the Algorithmic Impact Assessment (AIA) tool. The AIA is designed to help civil servants assess the impact of using automated decision systems—and avoid issues that may arise as a result of their use. An assessment is now required for any AI system developed or procured by federal departments and agencies. 

Developed through an open collaboration with experts in data science, privacy, human rights and other areas, and made available on an open source platform, the AIA is also an experiment in developing policy in public. At a recent meeting of the Pan-Canadian AI Strategy, it was held up by Deborah Raji, a fellow at the Mozilla Foundation who is researching algorithmic auditing and evaluation, as an example of “agile policy development.”

Dr. Fenwick McKelvey (PhD), an associate professor in Communications Studies at Concordia University, has been following the federal government’s AI policy innovations and participated in consultations on the tool. “I think [the AIA is] an important and increasingly necessary tool as we come to depend on technology more and more in our everyday lives,” he said in an interview with Research Money.

“There’s a big gap between how we think about and design artificial intelligence and how it comes to be lived in the real world. I think impact assessment, in some form, is useful as a way of trying to alter that design process and ensure that the social impacts of these technologies are built into that process better,” he said.

In an email to Research Money, Carole Piovesan, a managing partner at INQ Law who specializes in privacy, data governance and artificial intelligence risk management, said that the AIA is a good tool for gauging risks associated with AI systems. She added, “The corresponding directive on automated decision-making is a critical component because it provides guidance on actions to be taken to mitigate risks identified in the AIA.”

How significant the impact of the directive and tool will be is still unclear. While the federal government has been commended for putting checks and balances in place around the adoption of AI, there are early signs that compliance may be an issue.

How the tool assesses impact

The AIA was developed in recognition that new technologies, like AI, are leaving no sector untouched. Their potential benefits to government are manifold: for example, they could forecast pandemics and other threats, detect fraud and regulatory non-compliance, optimize the allocation of precious resources, and streamline and personalize service delivery. But so too are the risks to individuals and communities of replacing human decision-makers with automated ones. AI systems can introduce and amplify bias, compromise privacy and often lack transparency and accountability. 

The Treasury Board of Canada Secretariat, which sets digital policies, plans and standards for the federal government, led the development of the AIA. Much like assessment tools in other areas, such as environmental policy, privacy law and human rights, the AIA is designed to assess and reduce risk.

The resulting tool lives on the software development platform GitHub, where it can be accessed by government employees—in Canada or elsewhere—and by companies building AI systems for government. It is a questionnaire of approximately 60 questions related to business processes, data, and system design decisions. Questions include: “Who collected the data for training your system?” and “Does the system enable override of human decisions?” 

When complete, the AIA returns an impact score—a measurement of impact on rights, health and economic interests—from one to four. Depending on the score, users are required to take steps to address potential problems, such as having software independently peer-reviewed.
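
To make the mechanics concrete, here is a minimal sketch in Python of how a questionnaire-based assessment could convert answers into an impact level and a set of required mitigations. The two sample questions come from the article; the weights, thresholds and mitigation lists are hypothetical illustrations, not the Treasury Board’s actual scoring rules, which are published with the tool on GitHub.

```python
# A minimal sketch of a questionnaire-based impact assessment.
# All weights, thresholds and mitigation lists below are hypothetical
# illustrations, not the AIA's actual scoring rules.

from dataclasses import dataclass

@dataclass
class Question:
    text: str
    # Map from each possible answer to the risk points it contributes.
    answer_scores: dict[str, int]

QUESTIONS = [
    Question("Who collected the data for training your system?",
             {"our institution": 1, "another institution": 2, "unknown": 3}),
    Question("Does the system enable override of human decisions?",
             {"no": 0, "yes": 3}),
]

# Hypothetical mitigation requirements keyed by impact level (1 = lowest).
REQUIREMENTS = {
    1: ["basic documentation"],
    2: ["peer review within the institution"],
    3: ["independent peer review", "human review before final decisions"],
    4: ["independent peer review", "approval by the deputy head"],
}

def impact_level(answers: dict[str, str]) -> int:
    """Sum risk points across answered questions and map the total to an
    impact level from 1 to 4 using illustrative percentage thresholds."""
    raw = sum(q.answer_scores[answers[q.text]]
              for q in QUESTIONS if q.text in answers)
    max_raw = sum(max(q.answer_scores.values()) for q in QUESTIONS)
    fraction = raw / max_raw if max_raw else 0.0
    if fraction <= 0.25:
        return 1
    if fraction <= 0.50:
        return 2
    if fraction <= 0.75:
        return 3
    return 4

answers = {
    "Who collected the data for training your system?": "another institution",
    "Does the system enable override of human decisions?": "yes",
}
level = impact_level(answers)
print(f"Impact level: {level}; required mitigations: {REQUIREMENTS[level]}")
```

In this toy run, the answers score 5 of a possible 6 risk points, which lands in the highest band and triggers the most stringent of the hypothetical mitigation lists.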

Given the growing legitimacy crisis surrounding new digital technologies, the tool is a significant advance, McKelvey says. He points to the Privacy Commissioner of Canada’s investigation of Clearview AI’s facial recognition technology as a recent example. “There are deep questions about due diligence,” he said. “How is it that a technology is developed, deployed and released without at least consulting about its potential impacts or asking, is this an appropriate technology to develop given the risks it might have?”

McKelvey also sees an upside for innovators who are creating automated decision-making systems. A low risk score could help legitimize a company’s application of AI, and addressing issues flagged by the assessment tool could ultimately lead to a better and more successful product. “A big barrier to innovation right now is that new technologies can miss their operating context and face a lot of regulatory blowback,” he said. The AIA, though just a part of a broader policy and regulatory framework for the responsible use of AI, helps de-risk product development.

But there have also been criticisms of the tool, according to McKelvey. The first is its format as a questionnaire, which may limit its power as an instrument of evaluation; the second is the impact score, which leaves room for interpretation.

Compliance could be a problem

It is too soon to tell how effective the Directive on Automated Decision-Making will be, but a report in the Globe and Mail earlier this week suggests compliance could be an issue. According to the Globe, only one Algorithmic Impact Assessment has been submitted—and that submission was made by the Treasury Board itself.

In a statement to Research Money, the Treasury Board said institutions are responsible for publishing the final results of algorithmic impact assessments on the Open Government Portal. “Further assessments will be published to the portal as more AIAs become available,” a spokesperson said.

The Globe’s article also revealed that the Department of National Defence (DND) failed to conduct a privacy impact assessment and an algorithmic impact assessment for two AI-based hiring services it used as part of an executive recruitment campaign.

“That just invites a lot of doubt,” McKelvey said. “And I think that’s one meta concern: Who is ultimately deciding what does and does not get an AI impact assessment, and how well is compliance fostered? Is the impact assessment just seen as a novelty, or is this something serious?”

A spokesperson for the DND told the Globe that it didn’t complete the Treasury Board’s algorithmic assessment because AI wasn’t used to make final hiring decisions.

"I appreciate it hasn’t been widely used within government to date," wrote Piovesan at INQ Law. "I expect it will start to be used more and more as governments look to modernize legacy systems and use advanced technologies to increase service efficiencies. The AIA is a leader and has been referred to by governments around the world."

R$

