It has been revealed that an Artificial Intelligence-powered military drone was able to identify and attack human targets in Libya. The drone, the Kargu-2, is made by a Turkish company (STM) and fitted with a payload that explodes once it impacts or comes into close proximity to its AI-identified target.
It is not clear whether the attacks resulted in any deaths.
The revelations were made in a report published in March 2021 by the United Nations (UN) Panel of Experts on Libya, which described the drone as a “lethal autonomous weapon” that had “hunted down and remotely engaged” soldiers believed to have been loyal to Libya’s General Khalifa Haftar.
"Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 (see annex 30) and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true 'fire, forget and find' capability. The unmanned combat aerial vehicles and the small drone intelligence, surveillance and reconnaissance capability of HAF were neutralized by electronic jamming from the Koral electronic warfare system. The concentrated firepower and situational awareness that those new battlefield technologies provided was a significant force multiplier for the ground units of GNA-AF, which slowly degraded the HAF operational capability," reads part of page 17 of the letter dated 8 March 2021 from the UN Panel of Experts on Libya sent to the UN Security Council.
Lethal autonomous weapons
Military drones are not a new concept; they have existed for over a decade and have been used by various countries in military attacks on enemies. What happened in Libya, however, is a new development: the drone had no human operating it when it executed the attack, relying instead on AI to identify and strike its targets.
This strike by a “lethal autonomous weapon”, as the UN has phrased it, takes the conversation on the ethics of using drones in military attacks to a new level, and it introduces another element: how reliable is the AI behind the STM Kargu-2 drones?
We have previously observed and covered extensively how biased some algorithms and AI-based systems can be, especially towards Africans. In this military scenario, the fear is that such bias could prove fatal, leading to death or permanent, irreversible harm.
Death by machine
To counter this concern, STM lists among the Kargu-2 drone's capabilities its ability to deliver an "autonomous and precise hit with minimal collateral damage." Unfortunately, in such situations, all it takes is one attack gone wrong for the AI used on military drones to be called into question.
As the UN report also alludes to, the introduction of such technology into military conflicts ushers in a new era of "killer robots" of the kind previously only imagined in science fiction.
"The introduction by Turkey of advanced military technology into the conflict was a decisive element in the often unseen, and certainly uneven, war of attrition that resulted in the defeat of HAF in western Libya during 2020. Remote air technology, combined with an effective fusion intelligence and intelligence, surveillance and reconnaissance capability, turned the tide for GNA-AF in what had previously been a low-intensity, low-technology conflict in which casualty avoidance and force protection were a priority for both parties to the conflict."