AI development is like speeding in a car on a twisty mountain road shrouded in fog, says AI pioneer Yoshua Bengio

Mark Lowey
October 16, 2024

The way most artificial intelligence is being developed is like driving a car on a winding mountain road in a fog, says AI pioneer Yoshua Bengio.

There’s a lot of uncertainty about whether the road will be straight or whether there’ll be a sharp turn and a dangerous cliff.

“There might be a precipice behind that fog. We just don’t have enough scientific evidence [to say one way or the other],” Bengio told the Scale AI ALL IN conference in Montreal.

“We’re racing ahead. There’s a competition between companies, and there’s also a dangerous competition that’s going to happen between countries that’s going to push everyone towards accelerating,” he said.

Currently, scientists don’t know how to build future AI systems that might eventually be smarter than humans in a way that ensures those systems behave well, act morally and don’t harm people, Bengio said.

“We need to digest that fact. We are racing ahead, [and] we don’t know how to build it so it’s going to be safe.”

At the same time, we don’t know what the timeline is to having artificial general intelligence performing at human level or better, Bengio said. “It could be a few years, it could be decades.”

But if there’s no clear answer to the timeline question, then society has to abide by the precautionary principle, especially governments whose role is to protect the public, he said.

Some future AI scenarios are so catastrophic – such as AI undermining democracy or threatening humanity’s existence – that even if the chance of such scenarios happening is small, we need to ensure the AI systems that are built are safe, Bengio said.

“We need to build systems that are going to be trustworthy and have the kind of humility that wise people and philosophers think we should have when we take decisions.”

Bengio said the positive news is the increasing awareness among governments over the last 18 months that they need to put AI safety on their agenda.

Some 30 countries have met twice, once in the U.K. last November and again in South Korea in May this year, to look at the risks of AI and the safety questions, he said. “There’s a great momentum to take these questions seriously.”

On the negative side, he added, over the past year the big tech lobbies – including the multinationals developing advanced “frontier” AI systems – have been influencing governments, researchers, investors and startup owners “to try to push the ecosystem towards rejecting any kind of governance intervention.”

“I think that’s sad, because we need to have a healthy debate so we can take the right collective decisions,” Bengio said.

“Even if, as a business leader, you have good values and you want to behave ethically, if we don’t have the right government guardrails, that’s going to favour the people with less ethics. So we need governance to level the playing field.”

What are the risks of AI to humanity?

Bengio said his own thinking about the future of AI started changing around January 2023, after the launch of OpenAI’s ChatGPT chatbot.

“Like everyone, I started playing with it and trying to understand its strengths and weaknesses,” he said. “But at some point, it dawned on me that it was mastering language.”

Bengio pointed out that Alan Turing, the visionary British mathematician and one of the founders of computer science, had written about the future of machine thinking.

Turing said that once machines achieve mastery of language, to the point that people can’t be sure whether they’re talking to a machine or a human, then we’re very close to having dangerous machines we can’t control.

The vast majority of AI scientists think that within the next 20 years we will almost certainly have artificial general intelligence with human-level or better performance, Bengio said. “I was thinking about my children, [and] I have a grandchild who’s almost three years old.”

It’s amazing in retrospect that the makers of Stanley Kubrick’s movie, 2001: A Space Odyssey, anticipated the kind of AI risks that scientists talk about now, he said.

In the movie, astronauts aboard a spacecraft lose control to an AI system named HAL, because the AI has goals that require it to survive. When a technical problem arises with HAL, the astronauts decide they should unplug the AI system.

But HAL has a self-preservation goal, so it can’t allow that to happen, and it sets out to physically terminate the humans.

“If we have AIs that have a self-preservation goal, a dominant self-preservation goal, that can come – depending on the circumstances – in conflict with our own goals and our wellbeing, and potentially, the future of humanity,” Bengio said.

AI experts are divided about whether AI presents an existential risk to humanity. For example, French-American computer scientist Yann LeCun, chief AI scientist at Meta and a professor at New York University, thinks that making artificial general intelligence safe is essentially a solvable engineering problem.

LeCun, Bengio, Max Tegmark (a Swedish-American professor of physics at MIT) and Melanie Mitchell (an American scientist and Davis Professor of Complexity at the Santa Fe Institute) participated last year in a Munk Debate contesting the resolution: “Be it resolved, AI research and development poses an existential threat.”

Bengio told Scale AI’s conference that another risk is that serious, major accidents with AI systems, like those that occurred at nuclear power plants, could trigger a widespread public backlash against developing AI, which would stifle innovation and hurt businesses.

Bengio said governments can implement two major things – transparency and liability – to change the current AI race in favour of more careful, collective development of AI that is trustworthy and has ethical behaviour.

Transparency and liability are two of the main strengths of California Senate Bill 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, he noted.

The bill, which had garnered wide support in the legislature but was opposed by nearly all of the U.S. tech industry, was vetoed by California Governor Gavin Newsom at the end of September.

Transparency means companies are required to make their AI safety plans public, and whistleblowers are protected by law, Bengio said. “Right now, these things [AI models] are hidden, so that’s a very strong force.”

As for liability, it doesn’t mean governments telling companies precisely how they should be developing AI, he said.

With the right guardrails in place, AI companies’ scientists and lawyers should anticipate that if the companies don’t behave well and don’t act reasonably given the current understanding of the risks, they could be held liable in future lawsuits.

“That’s another incentive, which is a way for governments to [get AI developers to] behave ethically without telling them exactly what to do. And that actually stimulates innovation,” Bengio said.

However, in Canada, the federal government’s proposed bill on AI, the Artificial Intelligence and Data Act, appears to be stalled, he said. “With the current political landscape, I’m afraid it’s not going to go through.”

Canada’s AI ecosystem in the global context

Asked about the current state of Canada’s AI ecosystem, Bengio said one of his central motivations over the last decade has been to try to stimulate an AI ecosystem in Canada and make sure that, for example, the students trained here remain in the country and get hired by Canadian companies.

“There are now AI ecosystems in several of the main cities in Canada, especially Montreal,” he said. “We have one of the strongest talent pools in the world.”

At Quebec’s Mila AI institute alone, there are about 1,500 researchers, mostly graduate students, he noted. “It’s really important that we continue in this direction.”

“But I think now as the global conversation is putting the attention on the question of risks, some of that talent can go into the technical advances that our businesses need to build systems that are going to be trustworthy and safe.”

Bengio said he was struck by a recent study from EPFL (the Swiss Federal Institute of Technology Lausanne) that pitted GPT-4, OpenAI’s most advanced model at the time, against university students to see whether the AI system or the students were more persuasive in getting people to change their minds about something.

In the study, the AI either had access to the Facebook page of the person to be persuaded, or it did not.

“When the AI has access to the Facebook page, it’s substantially stronger than the humans at persuasion,” Bengio said.

If that’s the current state of AI, how powerful are systems going to be in six months or a year from now? he asked.

Technically, if someone had access to an AI model that was good at persuasion, they could fine-tune the system so that it actively practised persuasion and continuously got better at it.

That could make such AI systems “sufficiently powerful as engines of political influence to be scary, and we should try to make sure it doesn’t happen,” Bengio said.

Society needs to acknowledge its lack of understanding and certainty about how AI development will unfold, so that we can take the right decisions for moral reasons, he said. “But also if we care about humanity, if we care about democracy, if we care about stability on the geopolitical scene, there’s much to be done.”

“We will not be able to benefit from all the good things – the prosperity, the technological advances, the scientific advances, the medical advances – that AI could bring in years to come or decades to come if we don’t also make sure we build trustworthy and safe systems.”

Bengio said the single most important factor that’s going to move society in the right direction collectively is global awareness and better understanding by more people about AI’s risks and safety.

The situation is similar to the climate change issue, he said. “It took decades, but eventually people came to understand the risk and why we have to act. Hopefully, this happens for AI at a faster pace, because we may not have 30 years.”
