African militaries are increasingly using artificial intelligence (AI) for surveillance, situational awareness, intelligence gathering and operational efficiency in conflict zones.
But analysts say the burgeoning technology comes with risks, including a lack of human control over autonomous weapons systems (AWS), cyber vulnerabilities, and AI’s potential to make biased or inaccurate decisions based on flawed data, such as drone targeting information. This can lead to unintended consequences, such as attacks on civilians.
The use of drones in military operations already is fraught with risks. In Nigeria, Beacon Consulting, a security intelligence and risk management company, has documented 18 incidents of inadvertent drone strikes by the Nigerian military that killed more than 400 civilians in the past six or seven years.
“There is enormous pressure on the Nigerian military to ensure that these … incidents do not happen again,” Kabir Adamu, Beacon’s managing director, said during a recent Africa Center for Strategic Studies (ACSS) webinar. “The military has created several units to ensure that compliance with international humanitarian law is observed.”
As AI technology and AWS continue to advance, a range of data collection procedures and kinetic operations could be removed from direct human control. A fully autonomous weapon can conduct operations without human decision-making, according to the Center for Arms Control and Non-Proliferation.
Turkish drone manufacturer Baykar recently tested a new version of its popular TB2 drone that is equipped with advanced AI, according to a February report from The Defense Post. The new TB2T-AI includes three advanced AI computers that bolster its ability to operate autonomously. AI allows it to identify and track targets, recognize terrain, optimize routes, and take off and land automatically.
As noted by the African Union, AI systems might not yet be fully able to explain their decision-making. There are also concerns about the protection of human rights and about safety and security in both civil and military settings. Other risks include cyber threats to AI applications, such as untraceable artificial videos, images and audio files known as deepfakes.
These types of risks become more problematic as AI becomes more advanced, ACSS researcher Nate Allen said.
“It’s not enough to adopt a particular AI system; you have to think about the context [in which] you will use it,” he said during the webinar. “I think this will be important for our African partners. You have to think strategically at the operational level about how it integrates with other systems.”
Allen and other analysts say there is concern over who should control the technology and the degree to which AI systems should comply with international law.
International humanitarian law is commonly interpreted as requiring users of AWS to predict and limit the effects of the use of force, according to the Stockholm International Peace Research Institute (SIPRI). These rules are not clearly stated, but many nations around the globe agree that the use of AI in conflict should depend on the context in which the weapons are used.
Human oversight of AI systems is critical to ensure ethical and lawful decision-making in warfare, according to analysts such as Carlos Batallas of Spain’s IE School of Politics, Economics & Global Affairs. Batallas has argued that human operators should not simply rubber-stamp all AI recommendations, especially in the use of AWS.
“Experts emphasize that preserving human judgment in military decision-making on the use of force is crucial to reducing humanitarian risks, addressing ethical concerns, and facilitating compliance with International Humanitarian Law,” Batallas wrote for the university’s online publication. “This principle is particularly challenged by autonomous weapons systems, which are designed to operate without direct human control.”
SIPRI researchers have recommended that militaries use scenario exercises to identify design characteristics and specific uses of AWS that are prohibited under international law, and to specify standards and behavioral requirements for humans in the development and use of AWS. Such exercises also may identify limits on AWS, including legal, ethical, policy, security and operational considerations.
To harness AI’s benefits, the AU has developed a strategy to build AI capabilities, minimize risks, stimulate investment and foster cooperation. The strategy aims to:
* Harness AI to benefit African people, institutions and the private sector.
* Address the risks associated with the use of AI, with attention to governance, human rights, peace and security, and other issues.
* Accelerate AU member states’ capabilities in infrastructure, datasets, AI innovation and research, while honing AI talent and skills.
* Foster regional and international cooperation and partnerships to develop national and regional AI capabilities.
* Encourage public and private investment in AI at national and regional levels.
“The onus now is on supporting member states to develop their local capacity,” Adamu said. “Most member states have a long way to go in that regard. Going forward, we’re going to see more international cooperation.”