In 2019, when Italy’s former Prime Minister Matteo Renzi made a bewildering appearance on a satirical news show, hurling insults at his political counterparts, threatening to leave and using inappropriate language, he caused a massive stir.
However, soon after the video surfaced, it became apparent that it was not Renzi speaking. It was a sophisticated deepfake parody. On closer examination, the voice and gestures differed slightly from the real Renzi, and his face appeared unnaturally smooth.
Fast forward four years, and deepfakes have become more advanced, more common and dangerously impactful.
In the fast-moving realm of artificial intelligence (AI), deepfakes are fuelling the spread of misinformation through remarkably realistic digital manipulation.
In 2024, a historic election year, the presence of deepfakes will matter more than ever. Over 50 countries, home to about half of the world's population, will hold elections. Consequently, 2024 is likely to see a surge in deepfakes and disinformation.
Mislead, inflame, divide
Policymakers around the world are worried about how deepfakes could mislead, inflame, divide and weaken our societies. Left unchecked, deepfakes can easily be employed for personal, professional or political gain. Politicians' likenesses are used in campaigns to discredit opponents. And in a heavily polarized world, deepfakes are a tool for foreign interference and subversion.
“The technology can be misused to very effectively disseminate fake news and disinformation,” says Jutta Jahnel from the Karlsruhe Institute of Technology. “Faked audio documents can be used to influence or discredit legal proceedings and eventually threaten the judicial system. A faked video might be used not only to harm a politician personally, but to influence the chances of her or his party in elections, thus harming the trust in democratic institutions in general.”
Of course, lying about political foes is not a new tactic. But the level of manipulation, and the speed with which it can be disseminated, have changed our relationship with the truth forever. The notion that "seeing is believing" no longer applies.
“The potential to sway the outcome of an election is real, particularly if the attacker is able to time the distribution such that there will be enough window for the fake to circulate but not enough window for the victim to debunk it effectively – assuming it can be debunked at all,” said Danielle Citron, a deepfake scholar and professor at the University of Virginia School of Law.
Bangladesh
In recent months, Bangladesh has offered a glimpse of how quickly these risks can unfold. Some media channels and influential figures in the country have been actively spreading AI-generated misinformation produced with cheap, accessible tools.
In a video posted on X, a news anchor for "World News" presented a studio segment – interspersed with images of rioting – in which he accused US diplomats of interfering in Bangladeshi elections and blamed them for political violence. The video was made using HeyGen, a Los Angeles-based video generator that creates clips fronted by AI avatars, with subscriptions starting at US$ 24 a month.
Renowned Bangladeshi fact-checker Qadaruddin Shishir attributes the recurrence of disinformation and deepfakes during elections to coordinated efforts to destabilize the political scene.
Women in the public eye
Of course, sophisticated new technologies, AI and deepfakes included, come with some advantages too. The democratization of AI tools has opened up new horizons across various domains, including entertainment, art and even criminal forensics. They can serve as sources of satire and parody, offering amusement – provided viewers understand the content is not real.
However, the infiltration of deepfakes into sinister activities poses significant risks to individuals and institutions. They’ve been employed to tarnish reputations and impersonate individuals, as seen in the United Kingdom ahead of the 2024 elections, where MPs expressed concerns about AI’s potential to disrupt electoral integrity.
Women, both public figures and private citizens, are particularly vulnerable to online manipulation. Deepfake technology is being widely used to place female politicians in fabricated, compromising situations.
According to legal expert in online abuse Professor Clare McGlynn, “there are examples …where deepfake porn has been used against women in the public eye and women politicians as a way of harassing and abusing them, minimizing their seriousness. It is definitely being weaponized against women politicians.”
This falls within a wider trend of abuse targeting women in public life. Sexism, harassment and violence against women MPs are quickly becoming a global problem, impeding gender equality and undermining the foundations of democracy.
The IPU has taken the lead in documenting and identifying the best ways to tackle this phenomenon. See our latest publications on violence against women MPs and how to eliminate it.
Fortifying the truth
As AI-generated deepfakes continue to proliferate, initiatives aimed at enhancing media literacy are crucial in cultivating a discerning public.
So how can we fortify the truth in the era of deepfakes?
Sam Gregory, Executive Director of WITNESS, an international nonprofit organization that helps people use video and technology to protect and defend human rights, says the technological advances are becoming so sophisticated it is unreasonable to expect consumers and citizens to be able to “spot” deceptive imagery and voices.
“Guidance to ‘look for the six-fingered hands,’ or inspect visual errors in a Pope in a puffer jacket, or to look to see if a suspect image does not blink, or listen super-close to the audio to hope to hear an error are not sufficient and do not help in the long run or even the medium-term,” he says.
That’s why targeted legislation, proactive provenance and disclosure techniques, and detection tools are urgently needed to address these growing threats.
These measures need to be accompanied by collaborative efforts from the technology industry, civil society and policymakers alike. And as always, parliaments will play a critical role in building those bridges.
IPU workshops for parliamentarians
The IPU is also organizing three capacity-building workshops from January to March, open to parliamentarians and parliamentary staff. The AI workshops will inform a forthcoming IPU resolution entitled The impact of artificial intelligence on democracy, human rights and the rule of law. The resolution will be debated at the 148th IPU Assembly in March with a view to being adopted at the 149th IPU Assembly in October.
1. A changing landscape: An overview of recent advances in artificial intelligence – Monday, 22 January 2024, 15:00 (Geneva, CET). Moderated by Michelle Rempel Garner, Member of the House of Commons of Canada, co-Rapporteur of the IPU resolution. Watch the recording.
2. The emerging impacts of artificial intelligence on society – Thursday, 15 February 2024, 10:00 (Geneva, CET). Moderated by Neema Lugangira, Member of Parliament of the United Republic of Tanzania, co-Rapporteur of the IPU resolution. Watch the recording.
3. Global responses to emerging advanced artificial intelligence technology – Wednesday, 6 March 2024, 15:00 (Geneva, CET). Moderated by Denis Naughten, Member of the Dáil Éireann of Ireland, Chair of the IPU Working Group on Science and Technology. Register.