Turning AI harm into help

Leah Geller
March 1, 2023

Justice Paul Rouleau minced no words about the role social media and misinformation played in the Freedom Convoy in his Report of the Public Inquiry into the 2022 Public Order Emergency. “Social media…served as an accelerant for misinformation and disinformation,” which he, in turn, called “inherently destructive and divisive.”

And if you thought social media presented a challenge, there is more to come. According to Philip Mai, a Senior Researcher and Co-Director of the Social Media Lab at Toronto Metropolitan University’s Ted Rogers School of Management, new AI tools, such as ChatGPT, could make us nostalgic for those heady days of 2022. 

“The ease with which AI-generated content can now be created makes it easier for malicious actors to produce synthetic content and accounts, all with the aim of making disinformation appear more authentic and credible,” he explained in an e-mail interview with Research Money.

“While the technology is still in its infancy, there are already some notable examples of its misuse,” he added. “For example, a state-sponsored information campaign, recently reported by The New York Times, used deepfakes to produce and disseminate fake news videos to spread pro-China narratives on social media in the United States.”

Allying with ChatGPT

Now, Mai and the Social Media Lab are investigating the possibility of using the same AI-based technology to fight disinformation on social media. At the beginning of February, they launched the inaugural, two-month #AI Misinformation Hackathon. The event invites teams of Canadian students to build prototypes that use AI in innovative ways to fight the spread or impact of online misinformation.

“ChatGPT or similar AI language tools could be trained to detect patterns of false information, or to verify the accuracy of information in any given text,” Mai explained. “Another potential use of ChatGPT is in creating ‘honeypots’ — computer-based decoys that can lure and track ‘bad’ bots that spread disinformation on social media.”

The latest advances in AI models, and the availability of public application programming interfaces (APIs) such as OpenAI's, simplify the development of new apps. Students and researchers can access these tools easily and affordably, enabling experiments with new and potentially more effective techniques to study and combat online misinformation.
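To give a sense of how accessible such experiments have become, here is a minimal sketch, not from the Social Media Lab, that uses OpenAI's Python SDK to ask a chat model to surface claims in a passage that may warrant fact-checking. The model name, prompt wording, and the flag_claims helper are illustrative assumptions, not a production misinformation detector.

```python
# Minimal sketch: asking an OpenAI chat model to surface claims that may
# warrant fact-checking. Model name and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def flag_claims(text: str) -> str:
    """Return potentially false or misleading claims found in `text`."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # any chat-capable model would do here
        temperature=0,          # deterministic output suits a screening task
        messages=[
            {
                "role": "system",
                "content": (
                    "You screen social media text for factual claims that may "
                    "be false or misleading. List each suspect claim with a "
                    "one-sentence rationale. If none are found, say so."
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(flag_claims("Scientists confirmed the moon landing was staged."))
```

A screen like this only surfaces candidate claims; verifying them still requires external evidence and human review, which is why researchers describe such tools as aids to fact-checkers rather than replacements for them.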

According to Mai, social media platforms are already leveraging AI to flag potentially harmful content and misinformation, with some success. For instance, Meta detected and took action on some 1.5 billion fake accounts in the third quarter of 2022, most of them flagged automatically.

European policy insights

Mai and others argue that sweeping and effective solutions to the spread of misinformation will likely depend on strong legislation and public policy. A policy paper on misinformation commissioned by the 2022 inquiry and written by Dr. Emily Laidlaw, Canada Research Chair in Cybersecurity Law at the University of Calgary, concluded that the 2022 Convoy protests exposed major gaps in Canadian law and policy on social media regulation.

“Any decisions about how to address Convoy content posted to social media were made by the social media companies, based on their community guidelines and using various technical solutions,” she wrote. “While each platform is different and these companies can devise creative, human-rights sensitive solutions, there is an important discussion to be had about how to incentivize these solutions, create industry standards and hold companies accountable.”

Mai pointed to promising initiatives recently mounted in Europe, which could serve as models for Canadian efforts. The EU's 2022 Digital Services Act (DSA), for example, will require major social media platforms to disclose if and how information is being automatically recommended to their users.

“Transparency of algorithms is key, especially when platforms amplify misinformation by automatically recommending it to their users, just because it gets more engagement,” said Mai. “Our Lab found that anti-vaccine videos on YouTube were more likely to be recommended to users than pro-vaccine videos.”

The Act also includes provisions requiring major social media platforms and search engines to provide open access to data for public research. Mai noted, however, that the implementation of this regulation is still underway and many details remain to be clarified.

What exists now is the EU's 2022 Code of Practice on Disinformation, a voluntary code of conduct under the DSA, whereby social media signatories commit to taking action against disinformation. This includes demonetizing the dissemination of disinformation, ensuring the transparency of political advertising, and empowering the fact-checking community.

Just this month, the signatories of the 2022 Code, including all major online platforms — Google, Meta, Microsoft, TikTok and Twitter — launched a Transparency Centre, which will provide insights and data about online disinformation. They also published, for the first time, baseline reports on how they are turning their commitments under the Code into practice.

“Thanks to the EU Code, we now have access to public reports that summarize the main efforts made by major social media platforms to combat disinformation on their platforms,” concluded Mai. “This is an important step.”

R$

