Experts Warn of AI-Powered Disinformation
ADF STAFF
Information warfare has become a pervasive threat, figuring in nearly all of Africa’s violent conflicts.
In the Democratic Republic of the Congo (DRC), Ethiopia, Somalia, Sudan and the Sahel region, social media platforms have amplified hate speech and disinformation campaigns.
Experts such as Somali economist Abdullahi Alim are warning that new technologies, specifically those powered by artificial intelligence (AI), have the power to bring far greater devastation and war to a continent already struggling with ethnic, communal and racial fractures.
“Low levels of digital literacy, fragile politics and limited online safety systems render the continent ripe for hate speech and violence,” Alim wrote in a June 21 article for Foreign Policy magazine.
“The advent of adversarial artificial intelligence — which involves algorithms that seek to dodge content moderation tools — could light the match of the continent’s next war, and most social media companies are woefully underprepared. With limited oversight, this can easily tip some communities — ones that are already fraught with tensions — toward conflict and collapse.”
Alim is CEO of the Africa Future Fund, a platform that aims to accelerate tech initiatives across the continent. He looks at the chaos in the eastern DRC, thinks about the region’s history of genocide and worries what the future might hold.
“Suppose [the Rwandan genocide] had happened today, in the age of the algorithm,” he said. “How much more chaos and murder would ensue if doctored images and deepfakes were proliferating on social media rather than radio, and radicalizing even more of the public? None of this is beyond reach.”
Chukwuemeka Monyei, an AI researcher, lawyer and cybersecurity expert based in Abuja, Nigeria, pointed to a 2023 incident in Sudan in which AI-powered audio tools helped create a deepfake recording of former dictator Omar al-Bashir purportedly criticizing Sudanese Armed Forces leader Gen. Abdel Fattah al-Burhan.
“The reality is that the technology has developed faster than the tools to mitigate the risk,” Monyei told Voice of America in 2023. “This is not surprising, because these are things that experts had warned about at the onset of the AI revolution.
“We have seen in many states where technologies have been deployed for election interference or disinformation like we saw during the pandemic.”
Ethiopia’s 2020-2022 civil war offered another grim example of hate speech amplified by social media.
In a 2023 report, rights group Amnesty International said Facebook’s algorithms “supercharged the spread of harmful rhetoric targeting the Tigrayan community, while the platform’s content moderation systems failed to detect and respond appropriately to such content.”
Alim said adversarial AI tools can help purveyors of disinformation evade moderation and other content-based safety systems.
“An adversarial AI program might slightly change the video frames of a deepfake, such that it’s still recognizable to the human eye, but the slight alteration (technically known as noise) causes the algorithm to misclassify it, thereby dodging content moderation tools,” he wrote.
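The mechanism Alim describes can be illustrated with a toy model. The sketch below is purely hypothetical: it uses a made-up linear "content classifier" and synthetic numbers, not any real moderation system, to show how a perturbation that is tiny per feature (the "noise") can flip a classifier's decision.

```python
import numpy as np

# Toy linear "content classifier": flags an input when w . x > 0.
# Weights and input are synthetic, chosen only to illustrate the idea.
rng = np.random.default_rng(0)
w = rng.normal(size=64)

def classify(x):
    return "flagged" if w @ x > 0 else "allowed"

# An input the classifier correctly flags (strongly aligned with w).
x = w / np.linalg.norm(w)
assert classify(x) == "flagged"

# Adversarial noise: a small uniform-magnitude step against the model's
# gradient (for a linear model, the gradient with respect to x is just w).
eps = 1.01 * (w @ x) / np.abs(w).sum()  # just enough to cross the boundary
x_adv = x - eps * np.sign(w)

print(np.max(np.abs(x_adv - x)))  # per-feature change stays small
print(classify(x_adv))            # the label flips to "allowed"
```

Because each feature changes by only a small amount, a human reviewing the content would notice nothing, yet the automated system now misclassifies it; real attacks against deepfake detectors exploit the same gradient-based principle against far larger models.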
Alim is calling on social media companies to share data that would allow third-party organizations to monitor and mitigate disinformation campaigns.
“Technology companies should, in the near term, invest in algorithms that can detect hate speech in local languages; build a more expansive network of content moderators and research experts; and prioritize far greater transparency and collaboration that would allow independent experts to conduct audits, design policy interventions, and ultimately measure progress,” he wrote.