Less than two years after Sudan’s civil war began in April 2023, the two warring factions had killed more than 28,700 people, over a quarter of them civilians. Half the population needed humanitarian aid, and nearly a third had fled their homes.
Yet before bombs and bullets spilled blood and felled buildings, a hidden element of warfare was already wreaking havoc in Sudan’s cyber realm. Sudan has a history of internet shutdowns dating back to the regime of Omar al-Bashir. As citizens protested for his removal in 2019, al-Bashir’s government partnered with Russian mercenaries to spread false information, the foreign policy nonprofit news website Inkstick reported.
That same year, the Rapid Support Forces (RSF) “organized an influence campaign to whitewash the reputation of its leaders,” Inkstick reported. As soon as its war against the Sudanese Armed Forces (SAF) began in 2023, a fake account surfaced on X and falsely claimed that RSF leader Mohamed Hamdan Dagalo, known as “Hemedti,” had died from combat wounds.
In the lead-up to kinetic fighting, the RSF obtained a type of spyware known as Predator. The software lets users mine data and track infected cellphones. Monitors can access messages, media files, locations, browsing histories and call logs. The program works in stealth mode and lets users customize what they collect.
It was clear that tanks, planes, soldiers, bombs and bullets would not be the only weapons in the war. Combatants would add keyboards, motherboards, computer programs and hackers to their arsenals.
“There’s a habit of not paying attention to the cyberwarfare of conflicts until months after the physical conflict,” Nate Allen, cyber operations lead of the Africa Center for Strategic Studies (ACSS), told Inkstick. “And cyber warfare also transcends the hard timing of the actual conflict.”
Like kinetic battlefield tools, cyber weapons are varied and efficient. Malware, spyware, malign social media accounts, viruses and artificial intelligence (AI) deepfakes are just a few tools reshaping conflict and opening a multitude of fronts.

‘DIGITAL SWISS ARMY KNIFE’
Social media can be a cheap and potent tool for shaping new realities. Such platforms have been used to influence civilians and hide abuses in junta-led nations. Terrorist groups use the platforms to recruit and sway public opinion.
“Groups like Boko Haram and al-Shabaab frequently disseminate fake news, manipulated videos, or exaggerated claims of victory, graphic content to instill fear,” Idayat Hassan of Nigeria, a senior associate with the Center for Strategic & International Studies, told ADF by email. “This tactic aims to sow discord, incite panic and undermines trust in governments.”
Al-Shabaab targets young people in Kenya and Tanzania through Swahili-language social media posts, Hassan said. The Islamic State group (IS) recruited Africans from Ghana, Nigeria and beyond to join its fight in Syria.
Extremists also use encrypted messaging apps such as Signal and Telegram to secure their internal communications and plan attacks, she said. “These platforms enable coordination of attacks and dissemination of information to a wider public.”
Social media doesn’t just give extremists new ways to communicate, “it’s fundamentally reshaping the nature of insurgency itself,” security expert Brandon Schingh wrote in a July 2024 article for the Irregular Warfare Initiative.

For example, in 2014, IS launched its #AllEyesonISIS recruitment campaign. A group that began with 12,000 to 15,000 fighters quickly grew to 40,000, drawn from more than 110 countries. “This surge isn’t just a military boost; it’s a testament to the raw power of social media in modern conflict,” Schingh wrote.
He called the platforms a “digital Swiss Army knife” for their varied potential functions.
In addition to supercharging recruitment, social media lets bad actors adapt with lightning speed, “turning every smartphone into a command center,” he wrote. Likewise, every user becomes a potential broadcaster for terrorist propaganda.
The challenge of terrorist group propaganda will continue to grow as internet and social media access expands rapidly across the continent. About 300 million Africans have joined social media platforms in the past seven years, bringing the total to 400 million active users, according to a March 2024 ACSS report. An additional 200 million are using the internet.
REAL OR IMAGINED?
With digital growth comes an increase in internet-connected devices and systems known as “the Internet of Things.” Such connections often gather, transmit and store private or sensitive information vulnerable to hacking. Interconnectivity also increases the risk of large-scale malware infections and denial-of-service attacks. Firewalls, robust authentication procedures and encryption can help address such vulnerabilities.
The latest frontier of cyber threats to nations and their security forces is the use of AI. “The applications of AI in insurgency are as diverse as they are concerning,” Schingh wrote, adding that AI-created propaganda can exploit cultural and societal divisions by amplifying grievances and creating confusion. This can sway public opinion and supercharge terrorist recruitment. AI-driven algorithms can do a hacker’s job in a fraction of the time, thus enabling huge data harvests and communications disruptions, he wrote.

Perhaps the most frightening aspect of AI is its ability to alter people’s perceptions of reality, including through “deepfakes.” Deepfakes are manipulated or fabricated video or audio files that seemingly show famous people, politicians or others saying or doing things they did not say or do. Imagine the ramifications of a video that falsely shows an African leader uttering terrorist group propaganda. Likewise, AI could be used to manipulate a known person’s voice to extort money or information from targets who presume it to be authentic.
“Everybody is praising how helpful AI will be for African governments. But no one is mentioning the risks, which are not science fiction,” Julie Owono, executive director of Internet Without Borders, told Mother Jones magazine. “We’ve seen what’s possible with written content, but we haven’t even seen yet what’s possible with video content.”
There have been previews of the trouble AI-generated content can cause. By early 2019, then-Gabonese President Ali Bongo had spent months out of the country for medical treatment after a stroke, according to Mother Jones. The extended absence fueled speculation about his condition, including suspicions that he had died. The government released a silent video of Bongo. For some, it was a relief; for others, an indication of deceit. Gabon’s military attempted a coup a week later, citing the video as evidence that something was amiss. One Bongo rival called the video a deepfake. Experts were divided on whether that was true, but the damage was done.
Deepfakes aren’t the only AI threats. Automated fake social media profiles and bots can emulate human interaction, enabling extremists to radicalize and recruit on an enormous scale, Hassan told ADF. AI-generated imagery and text could help criminals raise money through fraudulent humanitarian appeals, diverting resources from legitimate causes. AI also could enhance hackers’ ability to access surveillance and infrastructure systems.

AFRICAN NATIONS RESPOND
As these threats grow, some countries are mounting defenses. Nigeria’s National Cybersecurity Coordination Centre (NCCC) leads efforts to establish a network protected from malicious attacks by state and nonstate actors such as terrorist groups.
The NCCC also is “strengthening the Nigerian Computer Emergency Response Team to enhance its capabilities in detecting, responding to, and mitigating online threats,” Hassan said. “These efforts include countering malign actors and defending against cyberattacks targeting critical national information infrastructure.”
Nations also are sharing their experiences. From July 29 to August 2, 2024, military and cybersecurity experts from across the continent participated in Africa Endeavor in Livingstone, Zambia, to discuss strategies and build cooperation. The symposium’s goal is to improve cybersecurity capabilities in militaries. The 2024 iteration focused on developing cybersecurity policies and strategies.
“Africa Endeavor is an important platform that accords us an opportunity to learn from one another, share expertise and promote best practices on how to address cyber challenges,” said Zambian Minister of Defence Ambrose Lwiji Lufuma.

Kenya’s Ministry of Defence co-hosted a workshop on the military’s responsible use of AI in June 2024. The two-day event in Nairobi brought together personnel from more than a dozen nations to learn about opportunities and risks associated with AI, according to defenceWeb.
Gen. Charles Kahariri, Kenya’s chief of the Defence Forces, said comprehensive regulations are essential to govern the use of AI in military operations.
“Building local capabilities to develop, deploy and regulate AI is crucial,” Kahariri said. “These frameworks should address issues such as data privacy, security and ethical use. Policymakers must work closely with technologists, ethicists and military experts to create policies that balance innovation with responsibility.”