The proliferation of AI-enabled military technology in the Middle East

Militaries’ investments in artificial intelligence-enabled military technology highlight the inability of existing governance frameworks to manage commercial providers, and the need for further regulation to uphold international-humanitarian law and protect civilians.

The Israel–Hamas war of May 2021 was described in the Israeli press as ‘the world’s first AI war’, with Israel integrating a number of new artificial-intelligence (AI) systems into its military operations, from new target-identification processes to enhanced weaponry. Since then, the integration of AI into military technologies has advanced rapidly, with countries across the region seeking to make AI part of their military architecture. Much of this has involved partnerships with commercial entities, from Israeli start-ups to big-tech corporations including Amazon, Google and Microsoft. As these entities have shown a tendency to circumvent their self-professed human-rights commitments and due-diligence obligations, greater regulation will be required to protect civilian lives and infrastructure during armed conflict.

Proliferating military use of AI

As the player with the greatest access to this technology, Israel is pioneering the deployment of AI-enabled military technologies in the region, often to devastating effect. It first did so on a significant scale in the May 2021 war, but its use of these technologies has increased exponentially since 7 October 2023. Within the first few weeks of the Gaza war, Lavender, an AI decision-support system (DSS), was reportedly used to generate a list of 37,000 individual targets.

Israel has invested heavily in integrating AI across its military, from establishing an AI and Autonomy Administration within the Directorate of Defense Research & Development in the Ministry of Defense, to enabling the elite signals-intelligence Unit 8200 to develop some of the Israel Defense Forces’ (IDF) own AI tools. While AI has been integrated into weapons systems to improve target tracking and kill rates, as in the case of Smart Shooter’s SMASH optic sight, Israel’s most significant innovations have been in the development of AI DSS. Examples of Israel’s AI DSS include Lavender, which rates individuals for targeting purposes according to suspected affiliation with armed groups; The Gospel (Habsora), which generates target lists; and Where’s Daddy?, which tracks individuals’ locations ahead of potential strikes. Although integrating AI into DSS allows militaries to analyse data more quickly and expedite decision-making cycles, it increases the risk of error, both because targets are generated at rates too high for effective human verification and because of institutional bias towards AI-generated rather than human-generated assessments.

The United States is also deploying these technologies across the region. Notably, the US Department of War has used AI DSS to identify targets across Iran, Iraq, Syria and Yemen. Most recently, Operation Epic Fury hit 1,000 targets in Iran within 24 hours. A key factor in the scale and speed of target selection has been the US military’s use of Palantir’s Maven Smart System, which integrates Anthropic’s Claude AI, to analyse surveillance data, create targeting lists and prioritise targets. Many of the targets hit in Iran have been civilian, including a school and healthcare and residential facilities, illustrating the risks of rapid target generation. Iran has in turn targeted AWS data centres in the United Arab Emirates (UAE) and Bahrain to ‘identify the role of these centers in supporting the enemy’s military and intelligence activities’, possibly a reference to the hosting of Palantir’s platform on AWS servers.

Facial-recognition software is another major use case. Israel, for example, has rolled out mass facial-recognition programmes in both Gaza and the West Bank. In the West Bank, the IDF uses a family of systems that access a database called Wolf Pack, which stores information on Palestinians. Red Wolf, installed at checkpoints, and Blue Wolf, installed on the smartphones of Israeli soldiers, automatically enrol Palestinians’ biometric data into Wolf Pack, which creates intelligence profiles of Palestinians and shares them with Israel’s internal-security agency, Shin Bet. Such widespread and involuntary facial-recognition programmes violate international human-rights law (IHRL) protections, including the right to privacy (Article 17, ICCPR).

Other regional players are trying to catch up. Iran’s former supreme leader, Sayyid Ali Khamenei, called on the country to ‘master AI’, although details of Iran’s progress cannot be independently verified. While the UAE does not yet field substantial AI-enabled military systems of its own, state-owned defence conglomerate EDGE is buying a 30% stake in Israeli AI drone-detection company Thirdeye Systems and embarking on a joint venture with US arms manufacturer Anduril to co-produce drones with AI-enhanced capabilities. Turkish arms manufacturers STM and Baykar Defense have pioneered drones equipped with AI image-processing software; the former’s Kargu drone was reported to have engaged General Khalifa Haftar’s forces in Libya in 2020.

Commercial enablers

Underlying these technologies is a vast and complex network of commercial providers. Some are companies with an explicit national-security purpose, such as the United States’ Palantir and Israel’s Corsight AI. Israel and Palantir signed a strategic-partnership agreement in 2024 to ‘harness Palantir’s advanced technology in support of war-related missions’.

Many other commercial providers, however, have not trained their AI functionality for a specific security or military function. Big-tech companies such as Amazon, Anthropic, Google, Microsoft and OpenAI have provided AI products to a variety of defence ministries, including those of the US and Israel. A 2024 draft contract between Google and the Israeli defence ministry highlighted the latter’s pre-existing exclusive ‘landing zone’ to access cloud infrastructure, and new plans to create specific landing zones for military units.

Under the terms of Project Nimbus, a US$1.2 billion cloud-computing contract between Israel and Amazon and Google, Israel’s state-owned Israel Aerospace Industries and Rafael Defence Industries are required to use the cloud services provided by the two companies. As Israel stands accused of war crimes, crimes against humanity and genocide at both the International Criminal Court and the International Court of Justice, its deepening cooperation with commercial providers could expose them to liability under both domestic and international-law frameworks.

A deficient regulatory patchwork

International law regulates permitted and prohibited uses of force – including those involving AI-enabled military technology – through international-humanitarian law (IHL) and related IHRL protections. Key principles include distinction, which requires parties to a conflict to distinguish between civilians and combatants and prohibits the direct targeting of civilians and civilian objects (Articles 48 and 52, AP1; customary international law as affirmed in the Nuclear Weapons advisory opinion); proportionality, which prohibits attacks expected to cause civilian harm excessive in relation to the anticipated military advantage; necessity, which limits the use of force to what is required to achieve legitimate military objectives; and the right to privacy.

While IHL governance frameworks are directed at, and bind, actors at the state level, states can create and enforce obligations on commercial entities by integrating these provisions into their domestic legal frameworks and relying on the principle of universal jurisdiction for international crimes. Sweden is one jurisdiction that enables corporate criminal liability in this way: two former energy executives are currently on trial there for allegedly aiding and abetting war crimes in Sudan.

There is, however, no binding international regulation covering the specific risks and uses of AI-enabled military technologies. Instead, a patchwork of declarations and resolutions addresses some aspects of the AI–military nexus. United Nations General Assembly resolution 79/239 on artificial intelligence in the military domain affirmed that military systems enabled by AI are governed by IHL and IHRL. The Bletchley Declaration, signed by 28 states in November 2023, called for responsible innovation based on international collaboration. The US-led Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, endorsed by 58 states, called for military uses of AI to comply with international law, for humans to oversee uses of AI and for states to take steps to minimise unintended bias. The non-binding nature of these instruments, however, limits their effectiveness in placing legal and ethical curbs on the use of AI-enabled military technologies.

Many commercial AI providers explicitly refer to the UN Guiding Principles on Business and Human Rights in their human-rights policies. These principles state that business enterprises should ‘treat the risk of causing or contributing to gross human rights abuses as a legal compliance issue’ (Principle 23(c)), citing potential civil liability and criminal responsibility for failing to do so. While these principles are not legally binding, their incorporation into companies’ human-rights policies creates internal obligations that commercial providers are expected to uphold.

Supply-chain due-diligence obligations in domestic jurisdictions represent one avenue through which commercial entities may be subject to litigation. The Irish Council for Civil Liberties, a human-rights non-governmental organisation, has asked Ireland’s Data Protection Commission to investigate its complaint concerning Microsoft’s processing of mass-surveillance data on Palestinians on behalf of Israel. Such litigation could conceivably expand to commercial providers of AI in the future. To this end, a collective of legal-advocacy groups recently notified Microsoft that there is a ‘credible basis’ to find that the company – through its service provision to Israel – has ‘played a direct role in Israel’s commission of grave crimes against the Palestinian population of Gaza’, opening Microsoft to civil and criminal liability in both international and domestic courts.

Following investigative reporting on Israel’s use of Microsoft Azure servers to host mass-surveillance data on Palestinians, an external investigation found that Unit 8200 had violated Microsoft’s terms of service by storing mass-surveillance data, including intercepted calls, contrary to IHRL. While Microsoft has terminated Unit 8200’s access, many other Israeli entities retain access to Microsoft’s services. A group of Microsoft shareholders also submitted a proposal to investigate the strength of Microsoft’s human-rights due-diligence procedures in relation to Israeli use of its technologies, although the proposal failed to pass.

Tech companies employ various methods to sidestep corporate-governance mechanisms. OpenAI and Google, for example, have quietly amended their terms of use to insert ‘national security’ exemptions and have removed commitments prohibiting clients from using their AI for weapons and surveillance purposes. Meanwhile, investigative reporters reviewing the Project Nimbus contract revealed terms that limit Google and Amazon’s ability to restrict how Israeli government authorities use their technologies, reportedly including a clause prohibiting the companies from suspending Israel’s access even if Israel is found to have violated their terms of service. Project Nimbus also reportedly provides for a secret ‘winking mechanism’ under which Amazon and Google commit to covertly signal to Israel if a third country orders either company to hand over Israeli data. Both companies have denied these claims.

Outlook

AI-enabled military technologies can be expected to continue to proliferate across Middle Eastern battlefields, scaling up the damage done to civilians and civilian objects and exacerbating humanitarian crises. Many of these technologies are also being applied outside of conflict zones for the purposes of predictive policing and mass public surveillance, globalising potential IHRL violations. In the absence of adequate accountability, the Middle East has become a testing ground for AI-enabled military technology, which is then marketed internationally as battle-tested.
