March 20, 2026
The integration of artificial intelligence (AI) into military systems presents one of the most significant challenges to global security in the 21st century. AI is transforming warfare, but its unregulated development heightens the risks of unchecked proliferation, unintended escalation, and unresolved ethical dilemmas. While AI has the potential to improve decision-making, increase operational efficiency, and reduce battlefield casualties, it also raises profound concerns.
One major concern is whether lethal autonomous weapons systems (LAWS) should be used in warfare at all, a question that raises serious moral and accountability issues. Despite these concerns, Russia and the United States do not support China's proposal for an outright ban on LAWS, the one measure that could, at least in theory, resolve the issue today.
Another concern is the presence of regulatory gaps and ambiguity. Although the United Nations has launched AI governance initiatives aimed at guiding development, AI is considered one of the fastest-moving technologies in history, and international law is unlikely to keep pace with its rapid evolution. Russia is already using AI technologies in support of its hybrid, gray-zone, and information warfare operations abroad, highlighting the dangers of collateral damage and a lack of transparency.
There is also growing concern that the militarization of AI could intensify arms races and expose new vulnerabilities. With AI militarization now a top priority for the United States, China, Russia, and others, the absence of proper guidance could lead to destabilizing competition in the development of LAWS, AI-enabled intelligence systems, and decision-support tools that compress military decision-making. At the same time, AI systems are introducing new and poorly understood vulnerabilities, including cyber threats such as data poisoning, deepfake manipulation, and adversarial attacks that can degrade or misdirect military operations. Given these risks, the world must urgently establish clear governance frameworks, ethical safeguards, and cooperative security measures to ensure that the militarization of AI enhances stability rather than undermining international security.
To address these challenges, the United States should adopt a multi-track governance approach by leading global AI arms control discussions while simultaneously implementing domestic regulations. This approach would allow the United States to lead in promoting the responsible use of AI in warfare without compromising national security or falling behind technologically.
First, the United States should lead efforts to promote an international treaty framework restricting or prohibiting LAWS. To do this, Washington could call for a Convention on AI Weapons aimed at codifying legal and ethical boundaries on machine-based targeting and lethal autonomy. The benefits of this approach would include reinforcing US moral leadership, supporting international humanitarian law, and reducing the risk of accidental conflict escalation. The challenge, however, is that China and Russia are unlikely to join any binding agreement, citing national sovereignty and security priorities. Their nonparticipation could deter allies from fully committing and limit broader global buy-in. To build greater international pressure, the United States should establish a NATO AI Ethics Task Force that also includes partners outside the alliance. This would help create a more unified democratic front and increase pressure on adversarial states to follow responsible norms agreed upon before any formal UN convention.
Second, the United States should adopt binding domestic policies that ensure the ethical deployment of AI in military contexts. A key principle should be a human-on-the-loop directive mandating human oversight for any system capable of using lethal force. This would mirror existing practices in drone operations, requiring high-level approval and post-strike review protocols. The benefits of this approach include preserving US strategic capabilities while also building public confidence in military AI applications. Recent updates to Section 1061 of the FY26 NDAA, which require oversight mechanisms before the development of autonomous weapons systems that can select and target without an operator, represent a step in this direction. At the same time, overregulation could slow AI integration in military operations and potentially place the United States behind rivals such as China, which is rapidly integrating AI into its military.
Lastly, the United States needs to work in close coordination with the private sector, balancing regulation with the need to sustain innovation. The Trump administration has already taken steps in this direction by releasing an AI Action Plan that promotes the export of American AI, encourages the rapid buildout of data centers, and supports innovation and adoption. In coordination with the private sector, the Department of Defense is also creating an “AI and Autonomous Systems Virtual Proving Ground,” which is essential for the testing and evaluation needed to build trust and scale adoption. The consensus view within the private sector is that AI will continue to be a transformative technology essential to 21st-century combat. Tech firms are lobbying for increased federal funding along with deregulation. However, without stronger regulation requiring frontier AI firms to evaluate their systems, the United States risks enabling AI to accelerate the development of weapons of mass destruction. Private-sector policy should therefore include restrictions on the most likely ways large language models could share dangerous or harmful information globally.
The rapid expansion of AI in warfare presents unprecedented opportunities and profound challenges. On one hand, AI technologies offer the potential to enhance operational efficiency, improve situational awareness, and maintain strategic advantage. On the other hand, without clear regulation, these technologies carry significant risks, including ethical dilemmas, heightened conflict escalation, and the destabilizing effects of an AI arms race involving adversaries such as China and Russia. The United States must adopt a balanced and forward-looking approach that secures its leadership in AI while ensuring responsible use through ethical safeguards.
Tyler Rodriguez is a graduate student pursuing an M.A. in International Security at the Schar School of Policy and Government at George Mason University. He is currently a national security and international affairs professional with more than nine years of experience supporting Department of War on security cooperation, international cyber governance and policy, global defense cooperation initiatives, and emerging technology issues. His research interests focus on U.S. national intelligence, building partner capacity programs, and the impact of AI-enabled warfare. Tyler holds a B.S. in History and a minor in Political Science from Towson University.
