Realistic-looking news anchors delivering propaganda, popular television characters singing terrorist battle songs, online chatbots that tailor their responses to a user’s interests — these are all ways terrorist groups are using artificial intelligence (AI) to spread their message and recruit.
As AI technologies have spread across the internet, terrorist groups such as the Islamic State group (IS) and al-Qaida have adopted them to reach young people in Africa and elsewhere who have grown up online and get their information from social media.
Cloaking terrorist propaganda in authentic-looking content helps get the messages past social media moderators, according to Daniel Siegel, who researches digital propaganda at Columbia University’s School of International and Public Affairs.
“By embedding extremist narratives within content that mimics the tone and style of popular entertainment, these videos navigate past the usual scrutiny applied to such messages, making the ideology more accessible and attractive to a wider audience,” Siegel wrote in an analysis for the Global Network on Extremism and Technology.
Although the content is often designed to be funny, it also exploits viewers’ affection for the characters, luring them into consuming more without realizing they’re being indoctrinated, Siegel wrote.
“Deepfakes,” AI-generated audio, video and images that look real, are making it nearly impossible to tell fact from fiction, experts say. That undermines faith in legitimate media organizations and government institutions alike, according to researcher Lidia Bernd.
“Imagine a deepfake video depicting a political leader declaring war or a religious figure calling for violence,” Bernd wrote recently in the Georgetown Security Studies Review. “The potential for chaos and violence spurred by such content is enormous.”
Terrorist groups already use AI to create hyper-realistic fake content such as scenes of injured children or fabricated attacks designed to stoke viewers’ emotions.
By hiding terrorists’ actual human propagandists behind deepfake technology, AI undercuts facial recognition tools and hamstrings counterterrorism efforts, analyst Soumya Awasthi wrote recently for the Observer Research Foundation.
At least one group affiliated with al-Qaida has offered workshops on using AI to develop visual propaganda and a how-to guide for using chatbots to radicalize potential recruits. Chatbots and other AI technology also can generate computer code for cyberattacks, plan physical attacks and raise money through cryptocurrency.
Terrorist groups use AI to quickly produce propaganda content using video footage captured by drones on the battlefield. Those fake news videos can mirror the look of legitimate news operations such as Al Jazeera or CNN. AI-generated anchors can be tailored to resemble people from geographic regions or ethnic groups terrorists are targeting for recruitment.
IS uses such AI content as part of its “News Harvest” propaganda broadcasts. AI text-to-speech technology turns written scripts into human-sounding audio.
Counterterrorism experts say governments and social media companies need to do more to detect AI-generated content like that being created by IS and al-Qaida.
For social media companies, that can mean boosting open-source intelligence to keep up with terrorism trends. For AI companies, that can mean working with social media and government authorities to refine methods of detecting and blocking malicious use of their technology.
Terrorists’ use of AI is not without its limits. Some members of Islamist terror groups object to depicting human faces in AI-generated imagery, forcing some creators to obscure the faces, which lessens the videos’ impact.
Terrorists also fear having AI turned against them.
Groups affiliated with al-Qaida have warned their members that security forces could use AI-generated audio to give fake commands to followers or otherwise to sow confusion and disrupt terrorist operations.
According to HumAngle, one such warning went out in Arabic via the Telegram messaging app to members of the Boko Haram splinter group Jama’atu Ansarul Muslimina fi Biladis Sudan in Nigeria.
The message, according to HumAngle, said: “New technologies have made it possible to create voices. Although they are yet to be as sophisticated as natural voices, they are getting better and can be used against Jihadists.”