New laws and tactics are needed to catch extremists online
ADF STAFF
In July 2017, as Iraqi forces were pushing ISIS out of the city of Mosul, authorities found a 16-year-old girl hiding in a tunnel. She had run away from her home in Germany a year earlier to join the extremist group. She had gotten her plane ticket by posing as her mother.
Pictures of the sad, frightened-looking girl, surrounded by her smiling captors, were published and posted on the internet all over the world.
The girl had been raised in a Protestant family but had shown little interest in religion. Then, in the spring of 2016, she told her parents she was interested in Islam. She began teaching herself Arabic, carrying a copy of the Quran to school, and wearing conservative clothes.
Authorities said she was initially recruited by ISIS in online chat rooms. One of her recruiters persuaded her to join him in Syria. She was convinced she was in love with him. Her mother later said her daughter had been “completely brainwashed” by ISIS recruiters.
Hers is not an unusual story. All over the world, if people have access to the internet, they have access to extremists and their propaganda. ISIS and other extremist groups have been using the internet, along with social media, to recruit members for years — and authorities have been grappling with ways to stop them.
During an ISIS offensive in Iraq in 2014, the BBC reported, a number of Twitter accounts claiming to represent ISIS in Iraq and Syria included live updates on the group’s progress. Some of the feeds included photographs that were taken and posted with cellphones.
ISIS has been using Twitter since at least 2012. The group uses it for recruiting, planning, issuing threats and taking credit for its attacks.
In late January 2017, The Sun of Nigeria reported that the extremist group Boko Haram was refocusing its resources on a media and propaganda campaign against the nation. The revised strategy was discovered in materials left behind by extremists who had been routed from a stronghold in the northeast part of the country. The materials included documents, phones and computers “that contained detailed information on the Boko Haram media and propaganda strategy,” the newspaper reported.
“The documents, written in Arabic, also outlined the media strategy that Boko Haram Commanders should employ and how the surviving members should ensure the propagation of the Boko Haram doctrine using Social Media,” said Alhaji Lai Mohammed, Nigeria’s minister of information and culture, in a statement to the press.
The minister said the recovered materials confirmed the announcement of a new media wing of Boko Haram called “Wadi Baya,” or “Clear speech.”
It is hard to overestimate the importance of social media in Africa. Throughout the continent, social media is used for far more than recreation and communication. It is a primary source of news; WhatsApp news groups alone function as a major news outlet in Africa.
Many parts of Africa are without newspapers, and in some countries, the press has been muzzled. Mobile phones have a higher penetration than television in Sub-Saharan Africa, and smartphone use more than doubled from 2014 to 2016.
One way for countries to block extremists’ messages on social media is to shut down the internet during certain times. But that approach is open to abuse, and many countries have blocked access to social media to keep legitimate information from the public. As of mid-2017, the digital news outlet Quartz reported, at least seven African nations had blocked access to social media during elections and at other politically sensitive times.
RAMPING UP IN AFRICA
David Fidler of the U.S.-based Council on Foreign Relations said there is no doubt that ISIS and other extremist organizations are ramping up their efforts in Africa.
“Following its online playbook, the Islamic State is trying to harness social media to strengthen its power and position in Libya,” wrote Fidler for the Defense One website. “Other groups, particularly al-Shabaab in Somalia and Boko Haram in Nigeria, are copying the Islamic State’s social media strategies. Such cyber-facilitated extremism is unfolding as African cyberspace undergoes rapid changes, including efforts to expand Internet access and increase use of social media.”
Spokesmen for Twitter, Facebook and other social media acknowledge that their services are being abused by extremist groups, including ISIS and Boko Haram. But critics say that by not doing more to shut down the accounts of extremists, the social media companies are, in fact, abetting them.
It’s not as if social media services are encouraging the extremists. Twitter guidelines, as an example, include a specific reference to extremists: “You may not make threats of violence or promote violence, including threatening or promoting terrorism.” Facebook’s rules include a provision banning hate speech and anything that incites violence. Instagram says it is “not a place to support or praise terrorism, organized crime, or hate groups.” WhatsApp, which is owned by Facebook, says its services cannot be used in ways that are “illegal, obscene, defamatory, threatening, intimidating, harassing, hateful, racially or ethnically offensive, or instigate or encourage conduct that would be illegal, or otherwise inappropriate, including promoting violent crimes.” YouTube bans the posting of videos that contain “hate speech, including verbal attacks based on gender, sexual orientation, race, ethnicity, religion, disability or nationality.”
The terrorist who killed four people and wounded dozens more in a March 2017 attack in London reportedly used the messaging service WhatsApp. Britain’s Home Secretary Amber Rudd said she wants WhatsApp and other social media services to make their platforms more accessible to authorities in cases like the London attack.
“We need to make sure that organizations like WhatsApp — and there are plenty of others like that — don’t provide a secret place for terrorists to communicate with each other,” she said in an interview.
WhatsApp is particularly useful to extremists, because it is “end-to-end” encrypted, meaning the only people who can read WhatsApp messages are the sender and specific recipients. It has become particularly popular in countries where governments have blocked other messaging services to clamp down on dissent.
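The idea behind end-to-end encryption can be sketched in a few lines. In the toy Python example below, two parties agree on a shared key using Diffie-Hellman key exchange, so a server relaying their messages sees only public values and ciphertext. The parameters and the SHA-256 stream cipher here are illustrative assumptions for readability only; WhatsApp actually uses the vetted Signal protocol (Curve25519 and a double ratchet), not anything this simple.

```python
import hashlib
import secrets

# Toy parameters -- far too weak for real use. Real deployments
# (e.g. the Signal protocol that WhatsApp uses) rely on vetted
# elliptic curves such as Curve25519.
P = 2 ** 127 - 1   # a Mersenne prime
G = 5

def keypair():
    """Generate a private exponent and the public value G^priv mod P."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_key(priv, other_pub):
    """Both ends derive the same key, since (G^a)^b = (G^b)^a mod P."""
    secret = pow(other_pub, priv, P)
    return hashlib.sha256(secret.to_bytes(16, "big")).digest()

def xor_cipher(key, data):
    """Symmetric stream cipher built from a SHA-256 keystream (toy only)."""
    out = bytearray()
    for i, b in enumerate(data):
        if i % 32 == 0:
            block = hashlib.sha256(key + (i // 32).to_bytes(8, "big")).digest()
        out.append(b ^ block[i % 32])
    return bytes(out)

# Alice and Bob exchange only their public values. The relay server
# in the middle sees public keys and ciphertext, never the plaintext.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()
k_alice = shared_key(a_priv, b_pub)
k_bob = shared_key(b_priv, a_pub)
assert k_alice == k_bob

message = b"meet at noon"
wire = xor_cipher(k_alice, message)         # what the server relays
assert xor_cipher(k_bob, wire) == message   # only Bob can recover it
```

This is why a platform operating such a scheme cannot simply hand messages to authorities: the decryption keys exist only on the two endpoints, which is exactly the property governments ask "back doors" to circumvent.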
Government officials have been pressuring social media companies and cellphone manufacturers to provide them with a “back door,” or secret entrance, to bypass encryption. Software and cellphone companies maintain that if you give a back door to a legitimate government, oppressive governments will demand one as well.
Most of the social media services say they are trying to police themselves. In March 2017, Twitter announced that it had suspended 636,248 accounts from August 1, 2015, through December 31, 2016, for “violations related to promotion of terrorism.” The website Naked Security reported that in June 2017, Facebook was developing artificial intelligence to detect extremist postings and employing 150 experts to make the platform “a hostile place for terrorists.” About the same time, Google announced that it would use more artificial intelligence software, also known as “machine learning technology,” and was adding 50 expert nongovernmental organizations to the 63 organizations that were already part of YouTube’s Trusted Flagger program.
Critics want more. In January 2017, family members of terrorist victims sued Twitter, alleging that through neglect, Twitter was providing support and resources to ISIS. “We believe that Twitter doesn’t do enough to proactively monitor, identify and remove terrorist-related accounts and hasn’t made an effective or prolonged effort to ensure that the accounts are not re-established,” wrote the plaintiffs, according to The New York Times. “In short, Twitter’s actions are too little, too late.”
WHAT TO DO
Although policies and approaches vary from country to country, critics are unanimous in saying that countries should be more demanding about the transparency of social media platforms operating within their borders. They contend that such sites need to be more engaged in looking for extremist postings and that governments should push them in that direction.
Hany Farid, chairman of the Computer Science Department at Dartmouth College in the United States, has developed software that can block extremist-related posts on the internet, including social media. The software could be used to create a database of known extremist content and prevent such content from being posted. The social media services, he said, have so far been reluctant to aggressively employ his software.
“The problem I have with the tech companies is, whenever they want to do something unpopular that’s in their financial interest, they hide behind their terms of service,” Farid told Enterprise magazine. He said he believes that increasing pressure from governments, the media and the public will force tech companies to do their part.
The family of a woman killed in the November 2015 Paris terror attacks is trying to get the courts to make social media more proactive in dealing with extremists. The parents of Nohemi Gonzales have sued YouTube and Google in United States federal court, saying the two media giants were complicit in the attack and deaths. YouTube, they said, repeatedly allowed ISIS and other groups to post videos, sometimes with paid advertisements for legitimate products. YouTube has acknowledged that it did, in fact, inadvertently post the ads alongside the terrorist videos but said it is working to make sure it won’t happen again.
Google, the parent company of YouTube, is relying on the U.S. Communications Decency Act, a 1996 law that was passed before most social media services were invented. The law says operators of websites are not publishers and cannot be held liable for things posted and viewed by users.
It’s clear that the world has changed significantly since the Communications Decency Act and other restrictions became law. Robert Tolchin, a lawyer for the parents in the suit, said, “The things that we are seeing in terms of the way that the internet is being used were not even imagined by the people that created that Communications Decency Act.”
Fidler said that Africa, and the world, face a new era that will require new strategies and new policies: “With the Islamic State bringing its cyber-facilitated extremism to Africa and with African terrorist groups adopting the Islamic State’s online playbook, the need for a comprehensive approach to the cyber components of violent extremism in Africa is becoming a more pressing policy issue.”