OpenAI says it is renegotiating its “rushed” agreement with the Pentagon to add explicit prohibitions on the use of its artificial intelligence for domestic surveillance of American citizens—a provision that addresses one of the most contentious issues in the standoff between the U.S. military and the AI industry.
In an internal memo shared on social media, OpenAI CEO Sam Altman said the company “shouldn’t have rushed” to get the agreement out on Friday.
“The issues are super complex, and demand clear communication,” he wrote. “We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”
According to Altman, the new contract language will state that OpenAI’s AI systems shall not be “intentionally used for domestic surveillance of U.S. persons and nationals,” consistent with the Fourth Amendment, the National Security Act of 1947, and the Foreign Intelligence Surveillance Act of 1978.
Katrina Mulligan, OpenAI’s head of national security partnerships and a former senior official at the Pentagon, NSC, and DOJ, also said that Defense Intelligence Components—including the National Security Agency, National Geospatial-Intelligence Agency, and Defense Intelligence Agency—would be barred from using OpenAI’s services under the agreement; any use by those agencies would require a separate contract modification.
The renegotiated terms will also add explicit restrictions that cover commercially purchased data—such as cell phone location records or fitness app information—which has been a legal gray area. According to a report in The Atlantic, rival Anthropic had specifically sought similar guarantees against domestic surveillance in its own negotiations with the Pentagon. Its insistence on harder safeguards to prohibit the use of its tools for surveillance was reportedly one of the major stumbling blocks that ultimately collapsed those talks.
Despite the renegotiated terms, legal experts have questioned how enforceable the restrictions are.
“This seems like a significant improvement over the previous language with respect to surveillance, and I’m glad to see it,” said Charles Bullock, a senior research fellow at the Institute for Law and AI, in a post on X. “It does not address autonomous weapons concerns, nor does it claim to.”
Independent analysts and OpenAI staff have also advocated for a process under which independent lawyers could review the full contract and share their analysis with concerned employees.
Backlash from employees
The renegotiated terms come after OpenAI faced a wave of backlash from inside and outside the company. Altman had already acknowledged that the optics of agreeing to the Pentagon deal hours after the Trump administration labeled rival Anthropic a “supply chain risk” for refusing a contract without explicit AI safeguards didn’t “look great.” This was especially true since Altman had previously said publicly that he supported Anthropic’s redlines around mass surveillance and autonomous weapons.
Anthropic had sought two hard limits in its negotiations with the Pentagon: a ban on its AI being used for mass surveillance of American citizens, and a prohibition on its technology being incorporated into autonomous weapons systems—defined as those capable of making a decision to strike targets without direct human oversight.
Critics, including Jonathan Iwry, a Fellow at the Accountable AI Lab at the Wharton School of the University of Pennsylvania, accused OpenAI of undercutting Anthropic at a critical moment.
“What is particularly disappointing is that the rest of the AI industry failed to come to Anthropic’s support,” Iwry told Fortune. “If these companies were serious about their commitment to safe and responsible AI (on which some of them built their reputations), they could have closed ranks and stood together against the Pentagon on behalf of the public. Instead, they let the administration play them off against one another as market competitors.”
Many of OpenAI’s own employees signed an open letter supporting Anthropic following the standoff. Consumers also signaled their support by sending Claude, Anthropic’s AI assistant, to the top of Apple’s App Store charts for the first time, suggesting users were switching in protest. Chalk graffiti criticizing OpenAI’s decision also appeared on the sidewalk outside its San Francisco offices.
Some OpenAI researchers even spoke out publicly. Aidan McLaughlin, a research scientist at the company, wrote on X that he personally did not think “this deal was worth it”—a post that drew nearly 500,000 views.
Beatrice Nolan