Islamic State group and al-Qaida terrorists have been tapping social media technology for more than a decade to recruit and spread propaganda to a global audience. Now they and other groups are leveraging new tools that can supercharge messaging and possibly help perpetrate attacks.
Artificial intelligence lets terrorists churn out slickly produced propaganda that proliferates across various platforms while requiring few people and limited resources. Voice-cloning technology, video and photo manipulation, and generative text capability all help terrorists distort reality and bend it toward new meaning, replacing the tedious tasks of typing long screeds and producing videos from scratch.
AI technology can easily put words in the mouths of real-life celebrities, politicians and other notable people. Free AI-powered computer applications can mimic voices, create movie-quality video clips and empower terrorists to create fake news reports. This can transform the spread of propaganda and bolster recruitment.
“Unlike human recruiters, AI-based chatbots can operate continuously across multiple platforms, engaging in conversations that mimic human interactions,” according to an April 11, 2025, Global Network on Extremism and Technology (GNET) article by Fabrizio Minniti. These AI chatbots also can analyze behavior and adapt their responses based on a person’s ideology and vulnerabilities. “The danger of passive recruitment with the malign use of AI is extreme,” he wrote.
AI is so new and its capabilities so varied that few, if any, countries have policies or responses ready to confront the threats it presents. The Africa Center for Strategic Studies held six webinars from February to April 2025 to address challenges and opportunities presented by AI. “The question prevails,” said Abdul-Hakeem Ajijola of Nigeria, chair of the African Union Cyber Security Expert Group, during a February 21 webinar. “Are our defenses evolving as fast as AI-powered threats?”

A ‘GIFT’ FOR TERRORISTS
Social media platforms give AI-generated posts an automatic global reach and the capacity to go viral, which often happens with more benign humorous memes and videos. Terrorist groups have no qualms about setting up shop on apps famous for dance fads and frothy videos, such as TikTok. Boko Haram and the Islamic State West Africa Province (ISWAP) already are using the platform in the Lake Chad Basin to host live programs and answer user questions, Bulama Bukarti, a security analyst with the Tony Blair Institute for Global Change, told Channels Television.
A May 2024 SITE Intelligence Group report by Rita Katz said it is hard to overstate what a gift AI is for terrorists because of their media dependence. “Productions that once took weeks, even months to make their way through teams of writers, editors, video editors, translators, graphic designers, or narrators can now be created with AI tools by one person in hours.”
IS operatives are so enthralled with AI that they have used it to create a media program called News Harvest to disseminate propaganda videos. Broadcasts show AI-generated news anchors discussing IS operations, each created with cheap, user-friendly AI tools, Katz wrote.
The same tools now used by terrorists also can be used by their supporters, thus multiplying and magnifying the reach of extremist messaging with little to no cost or effort. Observers expect that as freely available apps combine with rapidly evolving AI technology, threats will only grow, causing security agencies to play catch-up, The Guardian newspaper reported in July 2025.
“Our research predicted exactly what we’re observing: terrorists deploying AI to accelerate existing activities rather than revolutionise their operational capabilities,” Adam Hadley, founder and executive director of Tech Against Terrorism, a group that works to disrupt terrorist activity online, told The Guardian.
Evidence shows that terrorists are fully aware of the power and capability at their fingertips. IS, for instance, put out a guide in 2023 on how to use generative AI securely, according to The Soufan Center. In February 2024, a media group associated with al-Qaida called The Islamic Media Cooperation Council announced an AI workshop, Katz wrote.

REGULATION AND RESPONSE
To this point, AI tools have amplified the production power and reach of terrorist propaganda and communication campaigns. Some observers, however, think the technology soon could be employed in attacks as well.
Middle East Media Research Institute Executive Director Steve Stalinsky, writing for the Forbes Nonprofit Council in June 2025, said some groups and individuals already are talking about using AI to organize uprisings against governments, make weapons of mass destruction and develop weapons systems such as “drones and self-driving car bombs.”
The time has come, he wrote, for the AI industry to agree on best practices and standards to prevent use by terrorists. Most online platforms and tools publish terms of service that prohibit users from engaging in abusive, criminal or other harmful behavior, but enforcement always has been a challenge. Industry leaders have failed to curb the spread of terrorism and hate, Stalinsky wrote. So, governments will have to work with the industry on prevention.
The AU in 2024 adopted its Continental Artificial Intelligence Strategy to guide governance of AI in Africa, but counterterrorism is not one of its stated priorities, according to a June 2025 article by Brenda Mwale for GNET. Mwale, a lawyer and expert in counterterrorism law, wrote that as authorities continue to assess security risks that AI poses, “attention should also be paid to the emerging trends around terrorist exploitation of AI.”
How nations respond remains to be seen. Meanwhile, look for AI security threats to get worse before they get better, Ajijola said in the Africa Center webinar.

Akoh Baudouin, national liaison and security officer for the United Nations Development Programme in Cameroon, told the webinar that African security forces first need to understand how AI is being used in various security threats. Then they need to be proactive and adaptive in responding to them, including through counterpropaganda measures.
Ajijola agreed that being proactive is key. African nations will need to move from being passive consumers of AI technology to active leaders of AI-driven security development and strategies. This could start with the AU and regional bodies ensuring that nations draft, pass and harmonize AI security laws. Then police and security forces must learn AI-driven defense and digital forensics and join forces in ways that allow for intelligence sharing, security agreements and cooperation.
All of this will be expensive, but the costs to African security go beyond money. “You’ve got to invest,” Ajijola said. “I think Africa needs to make some decisions. Will Africa lead, or will it be led? You have to invest; that’s the bottom line.”
TERRORIST APPLICATIONS OF GENERATIVE AI
Tech Against Terrorism has developed a classification of the risks posed by terrorists’ use of generative AI:
Media spawning. Terrorists can generate thousands of malicious variants from a single image or video that can circumvent automated detection mechanisms.
Automated multilingual translation. After publishing a message, terrorists could translate text-based propaganda into multiple languages, thus overwhelming manual detection efforts.
Fully synthetic propaganda. Terrorists could generate completely artificial content such as speeches, images and interactive environments meant to overwhelm moderation efforts.
Variant recycling. Terrorists could use generative AI to repurpose old propaganda in a way that could evade previous detection efforts.
Personalized propaganda. AI tools could customize messaging to better target recruitment of specific demographics.
Subverting moderation. AI could design propaganda that is specifically engineered to bypass moderation efforts.
Although generative AI poses risks in terrorists’ hands, it also provides opportunities to stay ahead of the threat. Cooperation and innovation will help officials understand AI vulnerabilities and provide proactive solutions to mitigate the threats.
Source: Tech Against Terrorism
