Ottawa is using AI mainly for governance and security, not for increasing transparency and public accountability

Mark Lowey
April 8, 2026

The federal government is using artificial intelligence mainly for governance, security and the economy rather than to increase transparency and public accountability, according to a report by Ottawa-based not-for-profit Evidence for Democracy (E4D).

More than 58 percent of federally deployed AI systems are used for analysis purposes, including virtual assistants and machine learning analytical tools, said the report, AI and Democracy: Navigating trust, truth, and technology in policymaking.

Two out of five AI systems used within the federal public service were designed by external vendors.

Seven out of 10 external vendors are American-owned, with nearly one-third represented by just three companies: Microsoft, Google, and Amazon – raising concerns about the security of Canadians’ personal data.

“While new technologies ostensibly promise improved efficiency and productivity, their deployment has raised serious questions around perceived lack of transparency, gaps in regulation, implementation challenges, and potential for misuse and biased decision-making,” E4D’s report said.

The federal AI register has been framed as a solution to public trust issues and as a practical tool for showing how the federal government uses AI systems, said report author Trevor Potts, director of research and policy at E4D.

Treasury Board President Shafqat Ali has claimed that the federal government is “committed to providing Canadians with information about how it is being used to support programs and services” – seemingly suggesting that the AI register was designed to increase public transparency, Potts said.

“However, this framing is not entirely accurate,” he noted.

Further on in the government’s news release, Ottawa highlights the true inspiration for the AI register: “The Register will serve as an additional tool to support planning, reduce duplication, and to help departments identify opportunities to work more efficiently.”

“Rather than being originally created to increase transparency and public accountability, the AI register instead aims to encourage and accelerate AI adoption across other federal departments,” Potts said.

This goal of maximizing AI adoption is expressed not only in what the AI register includes, but in how it communicates these insights publicly, he said.

E4D carried out a broad review of the federal AI register, launched last November, which showed more than 400 instances of AI usage within the federal public service, ranging from early-stage research to active deployment.

One in five AI tools are being used to create content that actively shapes federal decision-making, including developing AI-generated videos and voiceovers, writing policy briefs, and drafting police reports with Large Language Models, according to E4D’s report.

The Canada School of Public Service accounts for the highest number of deployed AI systems of any agency, with 32.

According to a study by KPMG, nearly one in two public servants (48 percent) use AI tools for work, with nearly half (49 percent) indicating that their agencies have either implemented or plan to implement AI tools.

AI was most commonly used in three key areas: governance and public services; industry and innovation; and immigration, borders and security, E4D’s report said.

Departments in these three key areas were often associated with more concerning uses of AI systems, such as AI facial recognition technologies deployed to flag “high risk” travellers and to make recommendations on immigration decisions.

For example, a 2024 McGill University study found that current AI-based facial recognition technology used within the Canadian immigration system introduces racial bias into immigration decisions.

The study found that biased facial recognition systems “function as a modern mechanism of racial exclusion, risk denying Black and racialized immigrants access to refugee protection, and exacerbating deportation risk.”

Canada Border Services Agency’s “Client Reporting and Engagement System (CRES)/ReportIn” actively uses AI-driven facial recognition technology (FRT) to scan individual applications and flag potential “threats.”

“This is particularly concerning given the wide body of literature demonstrating the bias and harm that FRT introduces towards racialized communities,” according to E4D’s report.

In terms of usage, the majority of AI systems in the federal government have been deployed for “Analysis” purposes, meaning cases in which AI technologies were employed to analyze and evaluate data without actively generating content (e.g. machine learning analytic tools and virtual assistants).

Examples from the AI register include the Department of Fisheries and Oceans’ “Oceanographic Anomaly Detection tool” and the Canadian Radio-television and Telecommunications Commission’s “CANchat generative AI chatbot.”

More concerningly, E4D found multiple cases where deployed AI technologies in the “Content Creation” category directly impact federal policy decisions.

For example, the RCMP’s “Draft One AI software” transcribes audio material and drafts Large Language Model-generated incident reports for officers, while Global Affairs Canada’s “AI-generated briefing notes” tool creates AI-generated policy documents.

While not always directly influencing policymaking, the use of AI to generate content without rigorous review can lead to serious mistakes, AI slop, and outright falsehoods, as demonstrated in the Government of Newfoundland and Labrador’s education action plan, which was found to be riddled with fake sources and citation errors, E4D noted.

“Not only do these types of AI tools expose Canadian residents to potential privacy violations and racial biases, they actively shape federal policies and decision-making processes within security, border, immigration, and policing bodies.”

Government’s use of American vendors presents potential threats to national security and data sovereignty

The fact that 69.4 percent of the AI systems and infrastructure used by the federal government come from American vendors “presents potential threats to national security and data sovereignty, given that one in three AI systems used by the federal government use Canadians’ personal and private data,” E4D’s report noted.

One-third of Canada’s 300+ data centres are American-owned. The U.S. CLOUD Act compels U.S.-based technology companies to provide stored data to U.S. law enforcement, upon obtaining a warrant or court order subject to judicial approval, regardless of whether the data is stored in the U.S. or in a foreign country.

Ultimately, efforts to improve transparency and accountability around AI usage in the federal government must be multifaceted, E4D said.

This means Canada needs more inclusive public consultations that meaningfully engage communities, better communication around technologies listed in the AI register, and sustained investment in digital media literacy for Canadians.

Furthermore, international best practices – especially the EU Digital Services Act and EU Artificial Intelligence Act – provide valuable models for improving platform accountability, content regulation and risk management, and offer reference points for designing Canada’s own legislation and strategies.

E4D pointed out that Statistics Canada reports that two-thirds of all Canadians have low trust in the federal government, and a recent Organisation for Economic Co-operation and Development study found that the majority of Canadians have low to no trust in the civil service (46 percent) and political parties (66 percent) – with the lowest trust levels among historically marginalized demographic groups, including female, low-income, and younger (18-29 year old) Canadians.

That growing distrust is actively being fuelled by feelings of alienation and perceptions of the government’s inability to act in the public interest, E4D said.

In a recent KPMG study on the impacts of AI, nearly half of all Canadians (45 percent) believe the risks of AI outweigh the benefits (compared with 32 percent globally), with two in five Canadians indicating they have directly experienced or observed negative outcomes due to AI, and 68 percent saying they are worried about the impacts of AI on their lives.

Canada ranks 44th out of 47 countries on AI literacy, with many Canadians expressing limited knowledge about AI, and also that they lack confidence in their ability to use these AI tools effectively.

“Overall, given the relatively low levels of public trust in artificial intelligence, government decision-making, and federal institutions, it is essential that the Government of Canada seriously consider transparency, privacy, and accountability aspects as it deploys AI systems across the federal public service,” E4D said.

More than three in five Canadians say that the government does not effectively involve the public in federal decision-making, and three-quarters of all Canadians believe that the federal government would not refuse a corporation’s demand that could be harmful to the public. 

The report makes several policy recommendations, including:

  • Create an independent regulatory authority: Launch a Digital Safety Commission as an independent regulatory authority with enforcement powers and a clear mandate to coordinate national digital governance.
  • Institutionalize accountability mechanisms: Launch an AI & Digital Safety Ombudsperson, under the Digital Safety Commission, with the authority to investigate user concerns and potential violations.
  • Introduce duty of care requirements for developers: Amend the Consumer Privacy Protection Act to include source transparency requirements, including labelling, watermarking, and data used to train the AI tool. Include user-focused data rights and interoperable data standards.
  • Institutionalize transparency measures for researchers and civil society: Empower the Digital Safety Commissioner with statutory mechanisms for data access, transparency and digital oversight.
  • Mandate public consultation mechanisms on AI usage in government, including standing citizens’ assemblies, deliberative panels and advisory councils with special focus groups such as youth and Indigenous communities.

“At a time when Canada’s democracy is facing historic lows in public trust, coordinated misinformation campaigns, and major disruptions posed by new AI systems and technologies, we need to take action to safeguard our democracy and ensure that evidence-informed decision-making continues to build a better future for all Canadians,” E4D said.

R$

 

