NPR

How the U.S. is using AI in the war in Iran

Technology

March 15, 2026 · 8:09 AM ET
Heard on Weekend Edition Sunday

By Ryan Benk, Ayesha Rascoe

NPR's Ayesha Rascoe talks to Lauren Kahn of Georgetown University's Center for Security and Emerging Technology about the role of artificial intelligence in war.

Transcript

AYESHA RASCOE, HOST: The U.S.-Israeli war with Iran is now into its third week, and the human toll is adding up.
At least 1,300 people have been killed in Iran, according to officials there. Among that number, Iran's health ministry says, are more than 400 women and children. Thirteen U.S. service members are dead. So the stakes are high. And in this war, the U.S. has employed some of the most advanced military technology ever used in an active war zone, much of it powered by artificial intelligence.
Lauren Kahn is a senior research analyst for Georgetown University's Center for Security and Emerging Technology, and she joins me now. Welcome.

LAUREN KAHN: Thank you for having me.

RASCOE: So we're seeing, you know, reports like the earlier strike on an elementary school and a strike on low-income housing, as reported by Iranian state media this morning. So there's a human toll, and we don't know how these strikes happened. But let's get to the big picture of just how extensively AI is being used in this war right now.

KAHN: Absolutely. I think it's important to think about - we are four years out now from the war in Ukraine. And the United States is clearly internalizing some of the lessons that we've seen there, that being the first real drone war, the first real AI war that we've seen. And so now we're seeing artificial intelligence being used across the gamut - things from natural language processing, large language models, computer vision models, different things to help with everything from decision support to autonomy in systems. So we're really seeing it across the board.

RASCOE: So when you say decision support, autonomy of systems - for the average listener, what does that mean? Decisions on where to strike, who to strike, how to set up logistics? Is that what you're talking about?

KAHN: Everything from boardroom to battlefield, right? We're talking about logistics, things like maintenance - when do we need to update a plane? - right beforehand, using data fusion to bring sources together from different sensors and shooters from really everywhere. So it is used to enable autonomous systems, things like drones, in part, to make them more capable of, you know, traveling to a location by themselves. Or things, again, like compiling information to get information from disparate systems into the hands of the war fighter.

RASCOE: Is artificial intelligence making decisions about who lives and who dies?

KAHN: Not at this moment, no.
RASCOE: And so, like, how do we know that?

KAHN: Well, we know that in terms of how it's integrated, right? We have to eventually come to a point where we're deciding - what are the ranges, and where are the locations, that AI should not be used? Right now, I think what's more insidious is that it's being integrated in places where it's hard to tell - to your point exactly - where the AI starts and where the AI stops. We need to be really sensitive about where we integrate it and what places are off-limits. For a good example, we can all agree that maybe logistics is a really easy one, or disaster relief is a really easy one. There have been agreements between states - from the United States and China, for example - that, you know, we don't want any AI over nuclear decision-making. So there's a tiered sort of framework we can look at. There's a sliding scale of where AI should and should not be integrated.

RASCOE: But does it require an amount of trusting the government - that the government is not using artificial intelligence to, you know, have a drone go out there and drop somewhere without the human intervention, or the human looking and saying, hey - this might not be the right space?

KAHN: Yeah, absolutely. Humans are always responsible for the use of force. That is entirely compliant with international humanitarian law, and there's no one system that is or is not compliant - it's how it's employed and how it's used, and so we have to look at the policies that are in place. The United States has had a policy, for example, on lethal autonomous weapons systems and autonomous weapons systems since 2013 about review of when a system might come into play. So we don't know how that's happened yet, but there are things that they're thinking about. So that will definitely be a problem down the line.
RASCOE: So what do you make of the feud between AI company Anthropic and the Pentagon? Anthropic says it won't allow its Claude technology to be used in fully autonomous weapons.

KAHN: Right. What's interesting about that debate is that the difference between Anthropic and the Pentagon wasn't very large, right? The Pentagon said, we want to use it for all legal applications that we're allowed to use it in. And Anthropic wanted a specific carve-out for autonomous systems, you know, based on their red line, but has used it in the past and is using it in this current conflict. And so I think it really came down to - to your point - a breakdown of trust, rather than any real application of the technology being used today.

RASCOE: That's Lauren Kahn with the Center for Security and Emerging Technology. Thank you so much for being here.

KAHN: Thanks for having me.

Copyright © 2026 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information. Accuracy and availability of NPR transcripts may vary. Transcript text may be revised to correct errors or match updates to audio. Audio on npr.org may be edited after its original broadcast or publication. The authoritative record of NPR's programming is the audio record.