From The New York Times, I'm Natalie Kitroeff. This is The Daily. As the US bombardment of Iran has escalated, it's become increasingly clear just how much the US military has been relying on sophisticated artificial intelligence. That's made the Defense Department's bitter fight with the AI giant Anthropic over who controls that technology one of the most high-stakes strategic battles of our time. Today, my colleague, Sheera Frenkel, on the standoff between the Trump administration and Anthropic, and what it really reveals about the future of war. It's Monday, March ninth. Sheera, it's wonderful to have you back on The Daily.
Thank you for having me.
So as this war in the Middle East has progressed, we've been hearing more and more about the US using AI in its attacks on Iran. It's one of the first times, really, where this technology is very clearly having a practical application for the US military. We are seeing it in action. At the same time, in the background, there has been this ongoing bubbling battle over the use of that technology. We're going to get into the specifics of all of that. But first, can you just lay out what this fight is fundamentally about?
Well, this fight is so much bigger than one company in this particular moment with the Pentagon. It's really about the future of warfare and the role that AI is going to play in war. Right now in the Middle East, as the US looks for targets to strike, it is using Anthropic's technology to analyze intelligence, analyze satellite imagery, and figure out where it wants to hit. AI can analyze data for the military faster than a human being possibly could. It's proving its worthiness every single day. In a sense, these private technology companies based in Silicon Valley and the Pentagon need each other more than ever. But there's a question about how they're going to work together going forward. As we all hurtle toward this vision of robot wars, of AI-backed weapons fighting AI-backed weapons, they're trying to figure out who gets to say what's safe and what's not. On one side, you have these private Silicon Valley companies. You have Anthropic, which is the first AI company that was authorized to work on classified US military systems. You have OpenAI, which is this behemoth of AI companies. You have long-standing companies like Google and Microsoft, which have AI divisions.
You really have a number of very powerful companies in the Valley that want to do business with the Pentagon and are, in some cases, doing some business with the Pentagon, figuring out how to navigate that relationship. On the other side, you have the Pentagon, which is thinking about this global AI arms race against China, Iran, and Russia, and how America is going to fare in it.
Just to get a lay of the land here, can you just explain how the Pentagon is broadly making use of this technology? What function it plays?
Right now, AI plays a huge role in what's called SIGINT, or signals intelligence. What I mean by that is that the military at any given time is ingesting an incredible amount of data: text messages, postings on social media pages, phone calls. All of this is intelligence that's gathered by the military and then used to make critical decisions. Now, in the past, there was a room full of human beings that would have to sit there and analyze all this intelligence. But now we have AI, and this is exactly what AI is really good at. It ingests data, and then it tells you, Here's an important note you should take out of this. Here's my summary. Here's one phone call that's better than all the other phone calls that you should actually be listening to. This is critically important right now in the Middle East, where we're seeing this AI technology being used. But spinning forward, it's only going to become more important as AI gets better and better, and the military wants to integrate it into more parts of its weapons arsenal.
Okay, so a hugely important debate happening at a very important time. Just orient us, Sheera. How did this whole fight start?
It actually starts in this very positive, optimistic way, in that the Pentagon issues a call-out last year saying it wants to introduce AI. It invites all these AI companies to basically come into the military and show them how they can be helpful. How can the Pentagon, the Department of Defense, start integrating AI into its own systems? They immediately get a lot of takers. Silicon Valley's biggest AI companies, Google, xAI, Anthropic, and OpenAI, all raise their hands and say, We want to participate. We want to work with the Pentagon. Of all the AI companies that begin working with the Pentagon, Anthropic emerges as the best and the most seamlessly integrated into the Pentagon's systems. It's working with Palantir, this data analytics company. It's one of the only ones that is approved to work on classified systems. People across the DOD tell us that it really quickly became absolutely fundamental to their work and made their lives easier.
Okay, so I just want to pause here because from what I know of Anthropic, this is a company that brands itself as the socially responsible AI company, the company that emphasizes AI safety a lot. It was interesting to me to hear that they were the first ones to be so embedded within the US military.
That's true. This is a company that was founded by people who left OpenAI because they wanted a safer AI company. They said they wanted more safeguards. I mean, this is their entire premise and how they draw employees to work there. What they also are, however, is a company that really believes in working with the government. We've seen their top executives say that they think AI can make our country safer. It can help the US military defend against adversaries. They are, by all accounts, deeply patriotic as well. While the two things don't seem to naturally go hand in hand, I think in the minds of their chief executives, at least from people that are sitting in the room with them, they say, yes, they wanted to work with the government, and they thought they could be the ones to do it safely.
Okay, so that explains why at this point in the story, all sides are working well together. When do things start to change?
Things start to change on January ninth, when the Secretary of Defense, Pete Hegseth, comes out with this pretty big memo, and he tells the military, he tells everyone across Silicon Valley, that things are about to change. AI is critical for the future of warfare. China is developing AI weapons, Russia is developing AI weapons. If the US wants to be competitive, AI has to be at the center of everything, from autonomous weapons like drones or fighter jets that have no pilots to data systems. This kicks off a need for new contracts with all the AI companies, and they do what companies do. Their lawyers start sending contracts back and forth with the Pentagon's lawyers, trying to figure out how they can come to some new agreement about this.
How does that go?
They have differences. They have things that they're trying to figure out, but it's all happening quietly behind the scenes when all of a sudden something happens that ends up escalating tensions between Anthropic and the Pentagon. News reports emerge that Anthropic's Claude technology was used as part of the capture of Nicolás Maduro, Venezuela's leader.
Right. I remember when that came out. It was this surprising moment to find out that an AI model was used to do something like that, this very on-the-ground operation that involved boots on the ground and lots of planning. AI was in the middle of it.
Yeah. I mean, I think it was even surprising, confusing, for people who work at Anthropic, who did not know if their technology was used in the Maduro raid. It even came up in a meeting that happened between one employee at Anthropic and another employee at Palantir. The Anthropic guy asked, Do you know anything about this? Is our technology being used? It was not something that they appeared aware of. But whether or not Anthropic's technology was used, at the Pentagon, the fact that a private Silicon Valley company would even be raising questions about this was seen as inappropriate. You had the Secretary of Defense, Hegseth, telling people around him that he didn't like Anthropic even asking questions about how their technology was being used. In the midst of all these sensitive negotiations happening about the future of Anthropic and the Pentagon, this was the kindling that they didn't need.
Basically, the Defense Department sees this inquiry by this employee at Anthropic as a sign that the company is challenging the military's use of the technology.
Yeah, exactly. They see it as a sign that this private company that's talked a lot about safety is going to try and impose its own rules, its own guardrails, its own ideas of safety onto the Pentagon. In the midst of all these sensitive negotiations, it suddenly becomes a crisis. It suddenly spills over from emails back and forth between lawyers to big public statements by senior figures at the Pentagon.
What is the crux of the crisis itself?
The crux of the crisis is over Anthropic wanting to define safety and wanting to limit two specific ways in which the Pentagon can use their technology. They want it codified into their contract with the Pentagon that their technology will not be used for the mass surveillance of Americans, and it will not be used for autonomous weapons.
Why has Anthropic drawn those red lines on these uses of AI? What's the rationale here?
Well, they're worried about a few different things here. First and foremost, they're not sure that AI is ready. AI might have a 1% or 2% error rate, but when it comes to something like picking a target to hit with a missile, that error rate could mean life or death.
Right. Huge consequences.
Huge. Now, imagine, secondly, the PR disaster. If a news story comes out that Anthropic's AI was used to hit a target that ended up being wrong, suddenly this company has an absolute PR nightmare on its hands, where Americans are contending with a very real-life version of the scenario that science fiction books always warn about: the robot chose the wrong target, and humans were killed. Thirdly, they've got to worry about their own employees. People who work there are not comfortable with working with the military. People who work there are worried about the use of AI in war. They really risk alienating a lot of the people they paid a lot of money to come work at that company.
It's worth saying that these employees are very valuable. There's a total talent war on to attract these people, and you don't want to risk losing them.
Yeah, that's right. They're some of the most highly sought after engineers across Silicon Valley, and that's saying a lot. We're talking about contracts potentially worth tens of millions of dollars to acquire some of these people.
Got it. It sounds like there is a broad set of reasons why Anthropic is not wanting to do this. What about the Pentagon? What do they make of this?
The Pentagon is mad. They're sitting there and saying, Hey, you are a private company. You do not get to make these calls. Whoever decides that AI is ready to control a weapon should be sitting here in the Pentagon, in the military. We are the ones that make these calls. Really, how dare you, is their view, as a private company, try to tell us how to build our weapon systems.
They're saying it's not your role. It's our role. That's our job. Exactly.
The Pentagon is saying, We are going to implement all lawful uses of this technology. They're making the argument that Anthropic is really asking for something that isn't necessary. Things escalate and escalate, and they result in this meeting between the Secretary of Defense, Pete Hegseth, and the chief executive of Anthropic, Dario Amodei.
The CEO of one of the biggest AI companies in the world is meeting with Defense Secretary Pete Hegseth today as the Pentagon threatens to essentially blacklist that company, Anthropic, from lucrative government contracts if the AI company- And it's civil for the most part, until the very end. Defense Secretary Pete Hegseth gave CEO Dario Amodei until the end of the week to sign a document ensuring the military would have full access to the company's AI model.
The secretary tells Dario Amodei, Hey, you have until Friday, 5:00 PM Eastern Time, to compromise. Work it out, figure it out, but we are giving you a hard deadline, or we are going to take some type of action against you.
And what is the action? What's the threat?
So there are actually two threats made against Anthropic, and they're pretty opposed to one another. One is that Anthropic will be labeled a supply chain risk. This is a designation that America has used in the past, mostly for foreign companies who produce something abroad that America feels is not safe, for national security reasons, for the government to be buying. It would essentially be saying, Hey, Anthropic, we think you're dangerous as a company for national security, and nobody in the government can use you. The other threat would see them invoke the Defense Production Act, which labels a company so necessary to national security that it has to work with the federal government.
These seem like pretty extreme threats. I mean, the government is saying, We're either going to force Anthropic to comply or inflict a ton of pain on this company by punishing anybody else that does business with them, essentially.
Yeah, they are extreme. It leads to this rare moment of solidarity across Silicon Valley. These companies, who usually, quite honestly, hate each other, suddenly come together and they say, We, the AI community, stand behind Anthropic and their red lines. I think of all the voices that emerged, the most interesting is Sam Altman, who's the chief executive of OpenAI. He historically has not gotten along with Anthropic. These are a bunch of guys that left his company and said his company wasn't safe and started their own company. There is no love lost between the leadership at OpenAI and the leadership at Anthropic. He even stands up and he says, No, I back them. I back Anthropic.
Here we should just disclose for transparency that the New York Times is currently suing OpenAI over the use of its models.
That's right. All of Friday, tension is building. People are tweeting in support of Anthropic. They're telling the company to hold the red lines. Anthropic executives, their lawyers, are on the phone. I mean, minutes, minutes before the deadline hits, they're still on the phone with the Pentagon trying to figure this all out. Then the deadline happens, 14 minutes pass, and two things quickly happen. Now to a major development in the clash between the US Department of Defense and Anthropic. President Trump has ordered the federal government to stop using its technology after the AI firm refused to lift guardrails. One is that the DOD announced there is no deal.
Defense Secretary Pete Hegseth says he will designate Anthropic a supply chain risk to national security.
Anthropic is a supply chain risk. It's going to be booted, banned from the entire federal government. They say any contractor that does business with the US military will not be allowed to conduct commercial activity with Anthropic. President Trump called Anthropic a radical left woke company, which will not dictate how the United States fights and wins wars. And then they issue another surprise. They actually have an ace in their back pocket.
Anthropic's relationship appears to have ended, but OpenAI is ready to make a deal.
This whole time in the background, they've been quietly negotiating directly with Sam Altman, the chief executive of OpenAI. Wow. And Sam Altman says that he got exactly the deal that Anthropic wanted, but he had actually decided to take a very different approach to the entire negotiation.
We'll be right back. Okay, Sheera, you said that Sam Altman took a much different tack with the Pentagon in these negotiations. What do you mean by that?
Anthropic had been asking this entire time for certain things to be codified into their contract. They wanted it established that their technology could not be used in these very specific ways that were important to the company. What Sam Altman did was say, Hey, we don't need that type of language in the contract. What we're going to do is write our own guardrails, our own safety measures, into the code itself. Engineers call this writing into the stacks, and it's something that AI companies do all the time. They update their safety measures. They, quote, write into the stacks guardrails that they think are important. So he's saying, It's not on you, it's on us. Whatever's important to us, whatever safety measures we have as OpenAI, we are going to make sure are there. Sure.
Just explain why that version of things, where the company is in control of writing these safeguards into the models, why that wasn't good enough for Anthropic.
People who work at Anthropic make the argument that when you write something into the stacks, it can be unwritten. You can write something else the next day. It is not permanent. These stacks get changed daily. They could even be changed hourly. In their view, there was not enough to stop the Pentagon from saying, Okay, well, you wrote that into the stacks today, but tomorrow we're telling you to do something else.
Essentially, you're saying their fear is that this guardrail is much more movable. It's not permanent enough. It doesn't guarantee that the limits will be respected long term.
Exactly.
So the Pentagon came out of this winning, it sounds like.
I mean, I think that from their point of view, from the DOD folks we've talked to, they are happy they got OpenAI on board. I think that where the Pentagon may run into problems long term is the broader AI community in Silicon Valley and how this has really brought to the forefront this bigger question of AI and weapons, AI and the government. Is AI going to be dangerous and is the government thinking about it in a responsible way? I think that whole debate is now in the public consciousness.
Right. I have to imagine that the extent to which this administration was willing to really throw the book at this American AI company that has to have had something of a chilling effect in the industry, right?
Oh, definitely. I spoke to someone who works at Google who said, That's terrifying. If they can threaten to label Anthropic a supply chain risk or to use this Defense Production Act against them, what's to stop them from doing it to any tech company in Silicon Valley if they don't get their way? There's been this moment of trust-building between Silicon Valley and the Pentagon that's happened slowly over the Trump administration, and we've really seen a lot of that shattered in the last week or so.
What about the companies at the center of this, Sheera? How do they net out? Because obviously, OpenAI has this victory in terms of getting the contract. But at the same time, it's hard to ignore the PR benefits that have come out of this for Anthropic. This company was very popular among software engineer types. But before all of this, it was by no means well known among the general public. Now, all of a sudden, Anthropic is this topic of national conversation.
Right. I mean, we saw that in the immediate aftermath of all this, Anthropic's Claude technology shoots to the top of the app store for the first time in the company's history. They have not just become a household name, but they've become a household name that's synonymous with security, with safe AI. That's a huge PR win in a moment where so many people are still afraid of AI.
Right. You're saying it's not just that people are talking about the company, it's that they're talking about it as a company that values safety and responsibility. You can see why that might be appealing.
That's right. Out here in Silicon Valley, I think Anthropic is really emerging as a winner in terms of the PR battle for the hearts and minds of engineers. Right now, Anthropic is really being seen as an ethical company that stuck to its guns and did what it said it was going to do in terms of safety measures. Here in Silicon Valley, engineers are talking about how they want to go work for them. That could net out really as a big win for Anthropic. After Altman signed the deal, there was a lot of blowback across Silicon Valley for the terms that he had reached with the Pentagon. I actually saw people in the streets of San Francisco holding up a sign saying, Anthropic stands strong. You see online people who work at these companies voicing both support for Anthropic and dismay with OpenAI. That pushback from engineers has complicated things for Sam Altman. He's had to meet with his own employees more than once to assure them that he's going to seek a safe contract with the Pentagon. He's had to do a lot of internal PR work among people at his company.
To try to do damage control, it sounds like, with his own employees.
Exactly. We've seen him announce subsequently that he may have made a mistake rushing too quickly into a deal with the Pentagon, and that he's actually sought new language now around the mass surveillance of Americans and other assurances so that his employees will not be as upset as they have been in the last few days about this contract with the Pentagon. Where this stands now is that you have two of Silicon Valley's largest companies basically battling it out over what safe AI looks like. On one hand, you have Sam Altman and OpenAI, and his version of working with the Pentagon, and on the other, you have Dario Amodei and Anthropic, saying, This is how we think safe AI should play out.
Sheera, through all this, it's clear that both companies are trying to win the optics battle. Both are claiming the mantle of safety, asserting or reassuring people, their own employees, that that's what they care about. But I just want to push on what they actually mean by that, by safety. Because when we were talking earlier about the red lines, Anthropic insisting that its models shouldn't be used for mass surveillance or autonomous weapons, they were saying their models just aren't ready yet. They're still error-prone. It sounds like they're arguing it's not safe to use their models in those ways now. But do you think these companies are opposed to those models being used for mass surveillance, for autonomous weapons, ever?
No. I think ultimately these companies are well aware that the way the world is headed is that AI is going to be at the center of pretty much everything the government does, from surveillance to weapon systems. AI is going to play a role. You also have to remember these companies are really competitive. They're technologists who love what they do. They love the future of AI. There's also a personal vested interest in making the AI good enough to play this really central role across the government.
Right. I mean, and there are billions being invested in this industry, we should say. These companies are locked into competition with each other, and there's no going back, is what you're saying.
There is no going back. When you speak to some of these technologists, they describe what the world looks like in the future. Honestly, depending on how much sci-fi you've read in your life, that is a very attractive vision or a really scary vision of the future. They look forward and they imagine a war in which there's no human soldier on the battlefield. Where back in Washington, or wherever, on some military base, there's a guy with a headset who's controlling a fleet of drones or submarines or pilotless fighter jets, and they're fighting against another nation state, which has very much the same setup. The surveillance of all these targets is happening through AI systems that can comb through imagery faster than the human brain can process a single photograph, and all these decisions are happening at lightning speed. That's what they see all of us hurtling towards.
What you're saying is this fight that we've been describing between Anthropic and the Pentagon and OpenAI, it didn't actually forestall the future. In some ways, it just made clear to everyone that it's coming.
That's right. They are all clear that it's inevitable. What all these companies agree on, what the Pentagon agrees on, is that they're all active partners in making this a reality.
Sheera, thank you so much. Thank you for having me. We'll be right back.
Here's what else you need to know today. Iran has named a new Supreme Leader, Mojtaba Khamenei. Khamenei is the 56-year-old son of the recently killed Supreme Leader, and his appointment signals the government's desire for continuity. Khamenei has been coordinating military and intelligence operations at his father's office, and he has very close ties to the powerful Islamic Revolutionary Guard Corps. President Trump has called the younger Khamenei an unacceptable choice. Before the announcement, Trump told ABC that whoever is selected as Iran's next leader is "not going to last long without the approval of the United States." And over the weekend, the US and Israel intensified their attacks on Iranian military targets and vital energy infrastructure. Israeli warplanes bombed several fuel depots in and around Tehran, saying they were being used by Iran's military. The airstrikes created an apocalyptic scene in the capital, setting off oil fires that turned the horizon orange and blanketed the city with dark, oily smoke. Water desalination plants were also struck in Iran and on the Persian Gulf island of Bahrain, threatening to further disrupt the lives of millions in the region who depend on desalination for drinking water. Finally, on Sunday evening, oil prices surged to over $100 a barrel for the first time in four years, a worrying sign about the war's potential effect on gas prices.
Trump said in a Truth Social post on Sunday that higher oil prices would be short-lived and called them a "very small price to pay for peace." Today's episode was produced by Rikki Novetsky, Rochelle Bonja, Diana Nguyen, Eric Krupke, and Michael Simon Johnson, with help from Mary Wilson. It was edited by Marc Georges and Lisa Chow. Contains music by Marion Lozano, Rowan Niemisto, and Dan Powell. Our theme music is by Wonderly. This episode was engineered by Alyssa Moxley. That's it for The Daily. I'm Natalie Kitroeff. See you tomorrow.
In recent weeks, the Defense Department has tussled with Anthropic over how its artificial intelligence could be used on classified systems. That fight became bitter and negotiations fell apart. And war in the Middle East has made it increasingly clear how much the U.S. military has been relying on A.I.
Sheera Frenkel, who covers technology for The New York Times, explains the standoff and what it reveals about the future of warfare.
Guest: Sheera Frenkel, a New York Times reporter who covers how technology affects our lives.
Background reading:
How talks between Anthropic and the Defense Department fell apart.
Here is a guide to the Pentagon’s dance with Anthropic and OpenAI.
Photo: Brendan Smialowski/Agence France-Presse — Getty Images
For more information on today’s episode, visit nytimes.com/thedaily. Transcripts of each episode will be made available by the next workday.
Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.