In early November, Innovative Solutions Canada put out an innocuous-sounding innovation challenge for an “AI decryption service” for the Royal Canadian Mounted Police (RCMP). The contract—worth $150,000 for phase 1 and a maximum of $1 million for phase 2—seeks a method for breaking into a device when a suspect refuses to divulge their password.
Civil-liberties experts told The Globe and Mail at the time that such a technology poses risks to personal privacy. Compelling people to divulge passwords is unconstitutional, they said, and designing a workaround would let police invade people’s personal lives. Access to a phone or personal laptop can reveal almost everything about a person: their political beliefs, sexuality, closest relationships and other private information.
The RCMP’s case for the technology is that criminal actors often encrypt their devices for these exact reasons. The police released a research brief arguing that Canada lags its allies in preventing digital evidence from “going dark” during investigations. That may be true, but the RCMP has also been criticized for lacking transparency and for monitoring environmental groups engaged in peaceful protest.
These debates can’t be left only to politicians and privacy advocates. Researchers and tech companies must contend with the societal consequences of the technologies they develop. One concern raised about the RCMP’s challenge is that these workaround techniques, once developed, could be used by hackers or other malicious actors. If you were the CEO of an up-and-coming AI company, would you sign on to the challenge? You might reason that the police know what they’re doing and that the tool will only target criminals. Or would you decline to submit a proposal, either on principle or to avoid potential backlash?
Researchers and industry have found themselves in similar political quagmires this year when working with Chinese partners. The RCMP, to be clear, is not equivalent to the Chinese government. But the decryption challenge is another case of innovation projects colliding with political realities. New national-security measures have made it harder for Canadian researchers to collaborate with Chinese institutions and companies, and universities have had to confront the political implications of their scientific work.
Some allegations have been extremely serious, such as the claim that the tech giant Huawei — a frequent collaborator with Canadian researchers — worked on a facial-recognition system that could identify Uighurs, members of an oppressed minority group in China. No Canadian researchers have been involved in such a project, but they still have to consider the consequences of developing technologies that could be put to other uses later.
These issues require both a policy response and an individual one. Research Money’s senior correspondent Lindsay Borthwick has reported this year on efforts to make AI systems more ethical and accountable, from the federal government’s algorithmic impact assessment tool, to the start-up Armilla AI’s work helping companies use AI responsibly, to the guiding principles for AI and machine learning introduced by Health Canada. There are ways to work with law enforcement — and foreign companies — while ensuring that technology is used responsibly. But researchers and companies must also hold internal discussions and debates about what they stand for, who they want to partner with, and where they draw the line.