The increasing use of artificial intelligence for political purposes in Canada can interfere with voters’ behaviour, undermine democratic participation and erode Canadians’ trust, according to a report from the University of Ottawa.
AI is already being used in the political realm to analyze social media posts and voters’ history, predict elections, target political advertising, create synthetic images, and more, says the report, “The Political Uses of AI in Canada.”
“The inability to distinguish fact from fiction has the potential to dilute our trust, not just in images or videos or in the news media, but in our institutions and in each other,” said report co-authors Michelle Bartleman and Dr. Elizabeth Dubois, PhD. “AI-generated content has destabilized our confidence in the age-old adage that ‘seeing is believing.’”
“The opportunities to put AI to use in political contexts are far reaching and will only continue to grow,” their report says.
Dubois is an associate professor of communication, the University Research Chair in Politics, Communications and Technology, and a faculty member at uOttawa’s Centre for Law, Technology and Society. Bartleman is a PhD candidate.
Their report is part of the university’s AI + Society Initiative, which aims to better understand and frame the ethical, legal and societal implications of AI through a transdisciplinary approach.
Questions around the uses of AI in political domains are not just technical ones, the report notes. “They are fundamental questions about how societies are governed and how they should be governed. AI, by definition, involves a degree of decision making, which challenges the notions of democratic participation and self-rule.”
The report lists several ways AI-enabled tools can be used to accomplish political tasks more effectively and efficiently, from analyzing social media posts and voters’ histories to predicting election outcomes and targeting political advertising.
However, the same powerful AI tools can also be used to spread disinformation, create confusion and undermine trust in democratic systems, or interfere with elections, the report’s co-authors warned.
For example, AI-powered augmented analytics make it easier to target people with particular messaging that might influence their voting behaviour. Machine learning trained on flawed or skewed data has produced biased or inaccurate models. Synthetic content can easily be used to misrepresent political players and mislead voters. Generative AI can produce depictions of people, places and things that don’t exist.
AI is being used in Canada for political purposes
Dubois and Bartleman’s report cites several examples of how AI has already been used in Canada for political purposes.
Synthetic images and text showing up in political advertising
Synthetic images, videos and text have now started to show up in political advertising, according to Dubois and Bartleman’s report.
AI can help create personalized messaging in automated calls, text messages or chatbots, which can insert customized greetings or additional knowledge about a voter or citizen. Synthetic text, for example, could be generated to change the style or tone of an email, making it more compelling to different types of voters, while voice cloning can be used to have a political candidate make “personalized” calls or messages.
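To make that mechanism concrete, here is a minimal, hypothetical sketch, not taken from the report, of the kind of segment-based message personalization described above. All names, voter records and message wording are invented for illustration; real campaign tools would layer generative AI on top of far richer voter data.

```python
# Hypothetical sketch of segment-based message personalization.
# All voter records, segments and wording below are invented for
# illustration; actual tools would generate the text with AI rather
# than fill static templates.

VOTER_RECORDS = [
    {"name": "Alex", "riding": "Ottawa Centre", "segment": "first_time_voter"},
    {"name": "Jo", "riding": "Halifax", "segment": "frequent_voter"},
]

# Tone and content vary by inferred voter segment, mimicking the
# style/tone adjustments the report attributes to synthetic text.
TEMPLATES = {
    "first_time_voter": "Hi {name}! Voting for the first time in {riding}? Here's what to expect.",
    "frequent_voter": "Hello {name}, as a regular voter in {riding}, here's an update you may want.",
}

def personalize(record: dict) -> str:
    """Fill the segment-specific template with one voter's details."""
    template = TEMPLATES[record["segment"]]
    return template.format(name=record["name"], riding=record["riding"])

for voter in VOTER_RECORDS:
    print(personalize(voter))
```

Even this toy version shows why the practice raises concerns: the same voter file that enables a friendly customized greeting also enables precisely targeted persuasion at scale.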
Google and Meta both announced last fall they will require AI use in political advertisements to be flagged, while Microsoft created a tool to embed watermarks to make AI content more identifiable.
In the U.S., a recent robocall in New Hampshire used a deep fake of President Joe Biden’s voice – reportedly made by a Texas company – to urge voters not to vote in the state’s presidential primary. Following that incident, the U.S. Federal Communications Commission announced on February 8 that, effective immediately, robocalls made with AI-generated voices are illegal under the Telephone Consumer Protection Act.
Dubois and Bartleman’s report notes that in September 2018, Elections Canada issued a tender to purchase AI-enabled social listening tools to collect information about what was being said on social media related to the upcoming federal election and to identify misinformation and disinformation in circulation.
After the 2019 federal election, the agency reported that the number of occurrences of disinformation was limited, and that most inaccurate content seemed to be unintentional or meant as a joke.
Following the 2021 federal election, Elections Canada noted in its statutory report that there had been “an improvement in the agency’s ability to monitor certain election-related topics in the public environment and to address potential misinformation or disinformation that could affect electors’ ability to vote.”
But Dubois and Bartleman point out there’s no mention of specific tools in the agency’s report. Elections Canada also mentioned creating an Environmental Monitoring Centre in 2020 to help “deepen its understanding of the information environment and observe inaccurate narratives as they developed,” but provided no specifics.
Proposed amendments to the Artificial Intelligence and Data Act, part of the federal government’s Bill C-27, would require organizations with “general purpose” generative AI systems (such as ChatGPT and others now on the market), which can create text, audio, images and video that appear either to depict or to have been created by real humans, to ensure such “person-seeming” outputs can be readily detected by people, and to advise users when they are communicating with an AI system. However, Bill C-27 is still being reviewed by the Standing Committee on Industry and Technology.
How will AI be used in Canada’s next election?
“Deep fakes” are media manipulations based on advanced AI, where images, voices, videos or text are digitally altered or fully generated by AI. This technology can be used to falsely place anyone or anything into a situation in which they did not participate – a conversation, an activity, a location.
Dubois and Bartleman’s report says deep fakes have been on the radar of Canadian intelligence and government since at least 2018, when a parliamentary report responding to privacy breaches related to the Cambridge Analytica scandal briefly mentioned this use of AI.
(In the 2010s, personal data belonging to millions of Facebook users was collected without their consent by British consulting firm Cambridge Analytica, mainly to be used for political advertising. Cambridge Analytica used the data to provide analytical assistance to the 2016 presidential campaigns of Donald Trump and Ted Cruz.)
In 2019, the Library of Parliament published a report, “Deep Fakes: What Can Be Done About Synthetic Audio and Video?”
The Canadian Centre for Cyber Security first noted deep fakes as a “layer of uncertainty and confusion for the targets of disinformation campaigns” in its 2020 National Cyber Threat Assessment. In its 2021 report on cyber threats to Canadian democracy, the Cyber Centre noted that deep fake text was particularly challenging to detect, and had the potential to undermine electoral processes.
By its 2023-24 threat assessment report, the Cyber Centre was citing instances of political deep fakes, and advised that “synthetic content calls all information into question.”
“The deployment of AI in our social systems is a Collingridge dilemma playing out in real time: new technologies are easier to regulate and control, but you don’t really know the full impacts until they are fully deployed, at which point it is too late to implement the regulations or controls that are actually required,” the uOttawa researchers say in their report.
(The Collingridge dilemma is a methodological quandary in which efforts to influence or control the further development of technology face a double-bind problem: impacts cannot be easily predicted until the technology is extensively developed and widely used, while control or change is difficult once the technology has become entrenched.)
The uOttawa report was sparked by a panel discussion in April 2023, hosted by Dubois, with five expert panelists. For their report, Dubois and Bartleman asked these experts what they expected to see in how AI technology is used in Canada’s next election.
Dr. Wendy Wong, PhD, professor of political science and Principal’s Research Chair at the University of British Columbia, Okanagan, noted that Canada has some of the most prominent AI researchers in the world. She’s hoping Canadians talk more about how AI fits into the Pan-Canadian AI Strategy.
“I think it’s time that we as data subjects become data stakeholders, and one of the things that I’m hoping the government brings into play is thinking about digital literacy in a very serious way, which is helping all of us decipher what the machine is doing, and how we can change the terms of that co-existence,” Wong said.
Dr. Wendy Hui Kyong Chun, PhD, a professor of communication and the Canada 150 Research Chair in New Media at Simon Fraser University, said she expects to see AI being employed in the increasing use of divisive issues to create “angry clusters. What’s key is that these clusters, often focused around seemingly niche issues, are strung together to form larger clusters. So [expect] a proliferation of micro-divisions, and a linking of them all together in order to form majorities of anger.”
Dr. Fenwick McKelvey, PhD, associate professor in communication studies at Concordia University and co-director of the Applied AI Institute, said an important test will be whether political parties advertise using AI as part of their “war room showcasing, whether we’ve entered a moment where AI really has swung from something that’s cool to something that we’re worried about.”
“I think it’ll be very interesting to see how AI is framed as a policy issue: whether we’ll see an uptake in these very tangible, clear, accepted issues related to the problems with AI or if we’re going to be left constantly debating whether we live in the next version of The Terminator,” McKelvey said.
Said report co-author Dubois: “Sometimes we’re tempted to think of AI as independent entities with agency. While these tools have some decision-making ability, they are designed by humans, built by humans, and trained by humans. So, it follows that, as humans, we can also choose how we want to use these tools, what guardrails to put up, and how to make these systems transparent and equitable.”