South African AI Project Designed to Detect Language That Could Inspire Violence
ADF STAFF
South Africa’s Media Monitoring Africa (MMA) is developing an artificial intelligence tool designed to monitor social media posts and flag those suspected of inciting violence.
Insights into Incitement, shortened to I3, will analyze text data such as social media posts, articles, political party commentaries and public complaints, then rank how likely an individual comment is to incite violence. A publicly available online dashboard compiles the data in a searchable format.
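MMA has not published I3’s internals, so the following is only a minimal sketch of how a likelihood ranking of this kind is commonly built: a supervised text classifier trained on labeled posts that outputs a probability for each new comment. The training examples, model choice and library used here are illustrative assumptions, not details from the project.

```python
# Illustrative sketch only: I3's actual architecture is not public.
# Assumes a supervised classifier that scores incitement likelihood.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples (1 = inciting, 0 = benign).
posts = [
    "march peacefully and make your voices heard",
    "burn their shops down tonight",
    "the election results will be announced tomorrow",
    "attack anyone from that community on sight",
]
labels = [0, 1, 0, 1]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Score a new post: estimated probability that it incites violence.
score = model.predict_proba(["gather at the depot and burn it down"])[0][1]
print(f"incitement likelihood: {score:.2f}")
```

A production system would train on far larger, human-annotated corpora and would likely use multilingual models suited to South Africa’s languages, but the scoring interface would look much the same.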
The project was a response to violence in KwaZulu-Natal and Gauteng provinces in 2021 after former President Jacob Zuma was jailed for contempt of court. The riots and looting that erupted left more than 300 people dead and caused millions of dollars in property damage. A subsequent investigation by the South African Human Rights Commission highlighted the ways that commenters on social media platforms instigated and incited the violence.
“At particular risk are minorities, fueled by xeno- and Afrophobia as well as vulnerable groups, including women,” the project designers wrote on its website.
Media Monitoring Africa is a nonprofit organization that works to counteract misinformation and disinformation as it promotes ethical journalism.
For the purpose of training I3, incitement is defined as written communication that encourages, glorifies or directly calls for acts of violence. The developers are training the tool on words and phrases commonly found in inciting posts, which helps it identify instances of incitement across social media. It then ranks each flagged post red, yellow or green according to how dangerous it is, with red indicating the highest risk.
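The article does not give the cutoffs I3 uses to assign its colors, so the thresholds below are hypothetical; this sketch only shows how a continuous likelihood score, like the one produced above, can be mapped onto the red, yellow and green tiers described here.

```python
def traffic_light(score: float, red: float = 0.8, yellow: float = 0.5) -> str:
    """Map an incitement-likelihood score (0.0-1.0) to a risk tier.

    The 0.8 and 0.5 cutoffs are illustrative assumptions, not I3's values.
    """
    if score >= red:
        return "red"     # highest risk, e.g. direct calls for violence
    if score >= yellow:
        return "yellow"  # moderate risk, e.g. glorification or encouragement
    return "green"       # low risk

print(traffic_light(0.91))  # -> red
print(traffic_light(0.62))  # -> yellow
print(traffic_light(0.10))  # -> green
```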
I3 expands MMA’s existing work around misinformation and disinformation and its Real411.org, a website that lets the public report harmful digital communication.
As AI tools such as I3 expand across Africa, the irony is that some of the potentially inciting material that the I3 tool identifies could have been created by other AI tools such as ChatGPT.
“Electoral periods and moments of political crisis served as flashpoints for AI-generated content,” researchers with Freedom House wrote in a 2023 report on AI. “AI-generated disinformation campaigns disproportionately victimize and vilify segments of society that are already under threat,” the researchers added. “In extreme cases, it could galvanize violence against individuals or whole communities.”
Technology experts and free speech advocates continue to stress the need for guardrails to ensure that AI tools such as I3 are used appropriately, wherever they are deployed.
“Innovations in the AI field have allowed governments to carry out more precise censorship that is less detectable, minimizing public backlash and reducing the political cost to those in power,” Freedom House researchers wrote.
On LinkedIn, South African attorney Zinhle Novazi wrote: “The MMA emphasizes that online incitement can lead to real-world violence, and this tool (I3) is essential for providing insights to prevent such occurrences.”
Novazi, a lecturer at Stellenbosch University and expert on technology and the law, noted that I3 can reduce the time needed to remove violent threats to just a few hours. She called that change “a significant improvement over traditional judicial processes that may take weeks or months.”
While the I3 project is commendable for its attempt to improve public safety, she wrote on LinkedIn, the project raises concerns about the potential impact on free speech and the possibility for misuse.
“The challenge lies in ensuring that the tool is used responsibly and does not infringe upon legitimate expressions of opinion or dissent,” Novazi wrote.