ADF STAFF
Recent research shows a dramatic rise over the past year in misinformation generated by artificial intelligence and presented as authentic news, a trend that, experts say, puts international peacekeeping operations in Africa at risk.
At last count, more than 675 websites publish what one group of experts describes as “unreliable” news articles generated by artificial intelligence (AI) — articles that frequently present falsehoods and misinformation as the truth.
NewsGuard, a group that monitors misinformation efforts, identified the sites. Researchers found AI-generated content published in 15 languages, including Arabic, Chinese, English, French and Portuguese, on sites whose names make them seem like legitimate sources of information.
“This obscures [the fact] that the sites operate with little to no human oversight and publish articles written largely or entirely by bots,” NewsGuard researchers wrote.
The researchers found that the number of websites presenting AI-generated content ballooned from 49 in May 2023 to more than 600 by the end of the year. The sites enrich their creators through advertising from global brands placed by Google Ads, even though Google's policy prohibits advertising on sites with content that is not original, according to NewsGuard.
The technology behind the false reports is known as generative AI, which pulls data from the internet to construct completely artificial text, video and audio content with minimal human involvement. AI can quickly translate content from one language to another, making it easy for that content to be published widely online and shared across multiple social media platforms, according to experts.
“These disinformation actors take advantage of algorithms that are designed to prioritize outrage and user engagement,” said Melissa Fleming, the U.N.’s communications chief. “By design they, at the same time, limit the spread of factual information.”
AI-generated falsehoods might include claims about the death of key leaders or other information designed to incite the recipients to respond violently, according to experts. At least one AI company is creating completely artificial-but-lifelike news readers for broadcasters.
United Nations peacekeepers are frequently the target of AI-generated falsehoods. In a recent survey, more than 70% of peacekeepers reported severe disruptions to their work due to misinformation and disinformation, endangering the safety of both staff and civilians involved in peacekeeping operations, according to Fleming.
In 2023, the U.N.’s four African peacekeeping missions — MONUSCO in the Democratic Republic of Congo, MINUSCA in the Central African Republic, MINUSMA in Mali, and UNMISS in South Sudan — all experienced AI-generated disinformation campaigns aimed at undermining their credibility with the local population.
Peacekeepers fought back by enlisting a “digital army” of smartphone and social media users to counteract the false claims.
The explosion of AI-generated misinformation coincides with a continued global decline in trust in government and media information sources; survey respondents said they put more faith in news shared by friends and family members or on social media sites such as TikTok, according to the Reuters Digital News Report 2023.
Used in the public interest, generative AI could increase access to information, enhance freedom of expression and expand knowledge about health and other issues, Fleming recently told the U.N. However, the low cost of using AI is also driving the explosion of AI-generated misinformation and disinformation.
“AI can be used to defame a person’s honor and reputation,” Fleming said. “We are, at the U.N., anticipating it being used to scale up attacks against the U.N. and its staff to undermine our mission and our public image.”