Fully sovereign AI is not possible for Canada, but the country can protect sensitive data and avoid coercion

Mark Lowey
April 29, 2026

Canada will be unable to achieve fully sovereign AI due to the interconnected and interdependent global supply chain required for building the AI tech stack, policy researchers say.

But the country can take measures to protect its most sensitive data and avoid economic coercion by being too dependent on foreign suppliers, they said during a webinar on “Canada and AI Sovereignty,” presented by the Munk School of Global Affairs & Public Policy at the University of Toronto.

The Mark Carney government is investing $2 billion over five years in its Sovereign AI Compute Strategy.

However, “Canada is not going to build a completely sovereign, completely isolated, independent technological stack for something as sophisticated as AI,” said Sean Mullin, senior fellow at the Munk School.

“Is it even possible to have this total AI sovereignty? One of our conclusions is that it would be very, very challenging to do so,” said Jaxson Khan, senior fellow at the Munk School.

According to a Munk School report by Mullin and Khan, sovereignty in the AI era means freedom from coercion, not digital isolationism, technological self-sufficiency or a retreat from the global economy.

(See “Canada needs sovereign artificial intelligence, including cloud infrastructure and procurement,” in March 18, 2026 Short Report).

Khan and Mullin were key architects of the Liberal government’s Sovereign AI Compute Strategy, part of a $2.4-billion AI package presented in the 2024 federal budget.

While no country can achieve complete independence across the AI technology stack, the question is how to structure dependencies to preserve choice, reduce foreign leverage, and ensure that Canadian data and infrastructure remain governed by Canadian laws and values, their report said.

“What we want to do is avoid coercion,” Mullin said. “We want to avoid situations where Canada is vulnerable because we're reliant on this [AI] tech stack.”

“We want to take choices and policy decisions to make it so we’re less reliant,” he added.

“So when you think about it that way, almost any action you do that reduces your vulnerability in a particular layer or area of the tech stack, and increase Canada's option set, is going to help you move towards a more resilient future when you're dealing with this technology,” Mullin said.

In thinking about sovereignty, one question is how much of the AI tech stack Canada wants to control and have Canadian-owned and -operated, Khan said.

“Maybe it's not always U.S. dependency [on an AI hyperscaler]. Maybe there's another foreign partner, and maybe that helps reduce our dependency and ‘increase our sovereignty,’” he said.

As a middle power, Canada has to make strategic choices and be really thoughtful about what the options and trade-offs are, Khan said.

“Sovereignty means different things to different people,” said Bruce Schneier, security technologist and co-author (with data scientist Nathan Sanders) of the book Rewiring Democracy: How AI Will Transform Politics, Government, and Citizenship.

“China is the only country that conceivably can build a sovereign [AI] tech stack,” Schneier said.

“So when we think of sovereignty, especially in a middle power, the question is, ‘What do we mean?’ There will never be a domestic-only one of these [AI stacks].”

Generally, Schneier said, sovereign AI and compute should have three components: a national identity system, a national financial exchange system for moving money around, and a way to securely exchange information between individuals and the government or between organizations.

However, he noted that given the interconnected global supply chain, “even if we have AI sovereignty, we don’t have sovereignty of the AI that’s coming at us – just [of] the AI we use.”

Several “choke points” exist in trying to achieve full AI sovereignty

For example, when it comes to computing hardware, the vast majority of the chips or “brains” used to build and train AI models are manufactured in Taiwan by one company, Taiwan Semiconductor Manufacturing Company (TSMC), Khan pointed out.

Almost 100 percent of the complex machines that create those chips are designed by one company, ASML, in the Netherlands.

Moreover, almost all the design of the most advanced chips comes from one U.S.-based company, NVIDIA.

Another crucial piece of the AI stack is the critical minerals that underpin the semiconductor and microelectronics sector and are vital in developing AI technologies.

“We think that that would be very, very hard for Canada to make a really big dent in any time in the next five- to 10-year period to mobilize our supply chain to get all the critical minerals, the designs, the very, very precise manufacturing facilities” [required to build a fully sovereign AI stack], Khan said.

That doesn’t mean Canada, which he noted has a lot of chip design talent, shouldn’t have some domestic chip fabrication facilities, he added.

Another choke point to full AI sovereignty is data centres. While over half of the data centres in Canada are owned by Canadian providers, these are not the most advanced data centres, which are typically owned by U.S. hyperscalers such as Amazon, Microsoft and Google and used to provide cloud services.

There is only one Canadian company, Ontario-based ThinkOn, that’s approved at the federal government level for cloud services procurement, Khan noted. The vast majority of the other approved companies are U.S.-owned.

“It may mean that we are subject to foreign dependency,” he said. “It may also mean that Canadian data, Canadian services are subject to potentially foreign jurisdiction that we may not be able to fully control.”

However, there may be technological measures to reduce that dependency, such as using encryption keys [to access data] that are held only by Canadians, Khan said.

Also, other countries such as France and Germany have set their own base requirements and outcomes for U.S. hyperscalers that want to operate in those countries.

“Are there ways that Canadian-owned infrastructure could perhaps give us a bit less foreign dependency and perhaps a bit more control? It's definitely possible,” he said.

But there are trade-offs, he noted. Building and operating Canadian-owned data centres and other compute infrastructure might be more expensive – what’s called a “sovereignty premium.”

It might make sense to have this sort of infrastructure only for certain types of data, such as national security data or highly personal data such as Canadians’ health data and financial data, Khan said.

“So perhaps at that level, there are some measures that can be enhancing of our sovereignty, but don't necessarily mean that all of that infrastructure has to be 100-percent Canadian-owned and -operated.”

“We think [based on our report] that cloud infrastructure is one of the main areas that we actually have control over as a country,” Khan said.

“We likely need infrastructure [to do it],” he said. “We may also need different laws and procurement measures that can help to secure that.”

Government procurement system, regulation and a lack of expertise in the public service are issues

Janice Stein, founding director of the Munk School and Belzberg Professor of Conflict Management, said that if sovereignty is measured by freedom from coercion, the metric Mullin and Khan’s report uses, then it matters a great deal whether Canadian data is held by an American cloud provider, which makes it subject to U.S. laws that give access to U.S. authorities, or by, say, a Korean provider.

It is a “huge issue” for Canada to adjust its procurement system to invest in domestic companies, such as cloud providers and AI stack component providers, she noted.

“And we're not there yet. There's a start, but the devil is really going to be in the details, and it's really complicated for this country to do, because we have trade obligations,” such as under the World Trade Organization, that limit Canada’s capacity, Stein said.

Khan agreed, saying that Canada hasn’t figured out a perfect way to grow and scale companies in general in Canada, let alone export their technologies.

Part of fixing the procurement pipeline, Khan said, is to “make sure that at least they have a fair playing field” when a sovereign component, application or service is developed domestically, “because historically, we haven’t always done that.”

Stein said Canada also has to be very careful in how it regulates AI. “European regulation of AI has been an absolute disaster. It went too far, and I think it’s a very cautionary tale for Canada that we not do this,” she said.

“I think we have to be very, very careful as we move forward, especially in a country like Canada, where the innovation ecosystem is more fragile. We cannot afford to make the mistake that Europe made,” Stein said.

There is also a deficit inside government of people who understand the AI tech stack and what it takes to build it, she pointed out.

This means the Munk School and other scholars and experts outside of government have an obligation to take on some of this policy development work in mapping and laying out the choices and trade-offs, Stein said.

Khan agreed that the federal public service needs more capacity to understand AI technology and what the options are.

The U.K. government, for example, invested £500 million to build a sovereign AI unit in-house and procure from domestic firms, he noted. “It’s a significant amount of money to build and buy and invest in technologies that are sovereign to the U.K.”

In comparison, Canada now has a minister of AI and digital innovation – Evan Solomon – but the “machinery” under him is still part of Innovation, Science and Economic Development, Khan noted. “We actually don’t have a ministry that is under that minister, which is a little strange when we compare again to other countries.”

“There’s a certain class of technical and national security skill sets that you need in order to make sure that you're able to achieve some of these sovereignty goals if you're trying to do that on various dimensions, and that's a workforce challenge. We don't have enough of those people, we need to train them,” Khan said.

The U.K.’s in-house sovereign AI unit also cycles people from academia and industry into government, so there’s an interchange of information among different sectors. “That’s something that we’ve sometimes struggled to do in Canada,” he said.

The federal government proposed in Budget 2025 a new program, Build Canada Exchange, that would bring 50 external leaders from technology, finance and the science sectors into the public service to accelerate digital transformation, advance AI and address complex economic and defence challenges.

But that’s a small start given the size and scope of the federal public service, along with the scale of digital transformation the government is going through, Khan said.

Along with increasing state capacity, Canada is struggling to reverse a “massive trust gap in our institutions,” he said.

Only 64 percent of Canadians have “a lot” or “some” trust in their government’s ability to regulate AI, according to a survey by the Pew Research Center.

Canadians are wary after seeing federal IT procurement debacles, such as the ArriveCan app scandal, which revealed massive overspending, poor record-keeping and contractor fraud, and the decade-long Phoenix payroll system failure, which has affected over 483,000 public servants and cost more than $3.5 billion.

“I think it's a systemic issue where we've been struggling for decades now to accomplish big digital projects. So getting that capacity to deal with digital, let alone AI problems, it's just an essential, almost precondition to doing almost everything that we're talking about” in terms of sovereign AI, Khan said.

Canada has an opportunity in AI applications, but needs modernized privacy legislation

Schneier pointed out that policymakers also have to fight corporate rhetoric, such as the monopolistic hyperscalers’ constant claim that having the latest frontier AI model is all that matters.

“There's this U.S.-China [AI] arms race kind of bullshit narrative that's going on, this notion that if you're not here, you're nowhere. Which turns out not to be true,” he said.

There is a lot more being done now with smaller and nimbler AI models, and the performance gap between the top AI models and smaller AI models is shrinking, Schneier said.

Khan said the broadest area of opportunity for Canada is in AI applications, where companies are building AI products specific to, for example, the health, financial or legal sectors.

“Canada has a really good base in building enterprise technology companies and service companies,” he said.

“What if we just did one thing and got that right? I mean, wouldn’t that be remarkable? I think that could be a specific piece of technology, perhaps it’s a certain application, perhaps it is a certain purpose-fit piece of ‘sovereign cloud.’”

However, Khan pointed out that Canada still has 25-year-old data framework laws. Modernization of the Personal Information Protection and Electronic Documents Act (PIPEDA) has been talked about for several years, but still hasn’t happened.

Bill C-27, the Digital Charter Implementation Act, 2022, was designed to replace PIPEDA with the new Consumer Privacy Protection Act and introduce AI regulation. But Bill C-27 died on the order paper following the prorogation of Parliament on January 6, 2025.

The Carney government is expected to reintroduce or revise this comprehensive private sector privacy reform sometime this year.

“We do identify that this is a vulnerability for us, and it's a fixable one,” Khan said. “It’s a massive gap that does currently exist in the country: a modern data framework.”

Mullin said AI sovereignty for a middle power like Canada could even be a competitive advantage. He noted that Cohere, Canada’s largest AI foundational model builder, is signing deals with German and French defence companies that don’t want to use American AI model providers.

Canada might also encourage Taiwan to guarantee access to a supply of some chips in return for guaranteed access to some Canadian natural resources, Mullin suggested. “So I actually think that there’s an opportunity to brand [Canadian] AI services.”

Building “public AI” could counterbalance profit-driven corporate AI

Schneier said another option for Canada is “public AI,” which he described as a counterbalance to corporate AI.

Public AI “is an AI system that is not designed by a for-profit corporation, not owned by a white male Silicon Valley tech billionaire. It is to take this technology out of the exclusive hands of corporations,” he said.

Public AI provides benefits like universal access, transparent oversight into how systems work, political oversight and accountability, and responsiveness to public demands, and acts as shared infrastructure and perhaps as a public utility, Schneier said.

Switzerland has now built a public AI model called Apertus, the first fully realized public AI model, with open source code, public training data and open model weights, and it is free to use, he said. Apertus is trained entirely on public material, so there’s no illegally taken copyrighted material like some hyperscalers use.

The large-scale, fully open, multilingual public AI language model was developed by researchers at ETH Zurich, EPFL and the Swiss National Supercomputing Centre, with government funding. It is designed for research and business use.

Apertus represents a new approach to foundational model development: “built by public institutions, designed for the public good,” according to the Apertus website.

“It’s available online. You just search ‘Apertus,’ you can use it and it is designed to be a universal model for [the] Swiss to build on top of,” Schneier said.

A public AI system “is not built for the near-term financial interests of an American company,” he noted. “We get political accountability, not just corporate accountability. We get a transparency that we just can't get from a corporation.”

A public AI system could be used as a “bargaining chip” against corporate AI, Schneier suggested. “So there’s power in having a public AI model. I think there’s flexibility in having a public AI model. There’s potentially more trust in having a public AI model.”

Public AI would mean Canadian universities and public agencies building and operating AI models optimized not for global scale and corporate profit, but for practical use by Canadians, he said.

“If we want sovereignty, it’s going to be sovereignty on top [of existing AI tech stack supply chains],” he said. “But we can build a trustworthy AI model that is not a corporate model. It really feels like a useful thing for us to do.”

The majority of new AI investment over the next half decade will reshape the AI technology stack, particularly for inference and deployment: the operational layer where AI systems process data, serve users and generate value, according to Mullin and Khan’s report.

“AI sovereignty will not be achieved overnight, but meaningful progress is achievable by 2030 with deliberate action,” they said.

“The question for Canada is not whether to have dependencies, but how to structure them to preserve choice, reduce vulnerability to foreign leverage, and ensure that Canadian data and infrastructure remain governed by Canadian laws and values.”

R$

