In February, I attended the AI Futures Policy Lab in Montreal, part of a Canada-wide series of workshops seeking to engage "future policy leaders" in conversations about the thorny questions raised by rapid advancements in the development of artificial intelligence. Hosted by Element AI and presented by CIFAR in partnership with the Brookfield Institute for Innovation + Entrepreneurship (BII+E), the labs have been the linchpin of CIFAR's AI & Society program, and a complement to the $125-million Pan-Canadian Artificial Intelligence Strategy.
Brent Barron, CIFAR's director of public policy, said the inspiration for the workshop series came when he realized that conversations about AI-related policy questions often simply weren't happening at the government level. "We're trying to start the conversation now, before particular areas [become] urgent and reaction-driven," he told me.
When I walked into the room, several dozen participants were hunched around tables, designing fictional AI products from the future: a voice-recognition device that sounds an alarm when people use too much jargon in meetings, or an autonomous public transit system that chooses routes based on energy conservation. The concepts were only loosely grounded in reality, but they were plausible enough to prompt animated conversations about how to handle the policy concerns such technologies would raise.
It may seem outlandish to dwell on imaginary outcomes for AI when there are already so many concrete concerns related to bias, privacy, and so on, but the reality is that even our near-term future with AI remains shadowy and difficult to predict. To set the stage for meaningful AI policy, we need to acknowledge that the questions still outnumber the answers. Beyond that, we need to get creative as we try to envision the impacts of autonomous systems. AI research is advancing so rapidly that we need every possible tool to keep stakeholders — that is, everybody — engaged in the conversation.