OpenAI — trusted by millions with sensitive materials ranging from legal paperwork to tax filings — is amending its recently signed agreement with the U.S. Department of War, only days after announcing it would replace Anthropic in certain government systems. The initial rollout drew criticism for appearing rushed, with CEO Sam Altman himself later conceding it looked “opportunistic and sloppy.”

Following the collapse of negotiations between the Pentagon and rival AI firm Anthropic last Friday, OpenAI quickly finalized a deal to provide its models for use in classified military operations. The breakdown occurred after discussions with Defense Secretary Pete Hegseth regarding how advanced AI tools could be deployed across government systems.

At first, OpenAI described its agreement as including “more guardrails than any previous agreement for classified AI deployments, including Anthropic’s.” However, by Monday, CEO Sam Altman announced revisions were underway to insert explicit language prohibiting the intentional use of OpenAI systems for domestic surveillance of U.S. citizens or nationals.

“The AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals,” Altman said, adding that intelligence agencies such as the National Security Agency (NSA) would not be covered under the current agreement.

While the revised language may offer additional legal protection, concerns remain about potential unintended applications. In a Monday update, OpenAI stated that the Department had affirmed its shared commitment to preventing domestic surveillance. The two parties jointly added clarifying language specifying that OpenAI tools would not be used to monitor U.S. persons, including through the purchase or use of commercially available personal data.

The Department further confirmed that OpenAI’s services would not extend to intelligence agencies like the NSA unless a separate agreement is reached.

The updated contract language states:

“Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals. For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”

The Department of War also plans to establish a working group composed of leaders from frontier AI labs, major cloud providers, and departmental policy and operational teams. OpenAI expects to participate, viewing the forum as a key venue for ongoing discussions about emerging AI technologies, privacy protections, and national security considerations. The company hopes the updated framework could create opportunities for other AI firms to collaborate with the Department as well.

***

Guardrails, Technical Safeguards, and Legal Tensions

OpenAI maintains that it can enforce its restrictions through a combination of contractual terms and technical oversight. Rather than embedding its models directly into military hardware, the company intends to provide access through cloud-based systems while keeping its own personnel involved. It has also reiterated that its AI cannot be used to operate autonomous weapons systems.

Altman previously suggested that existing legal frameworks provided sufficient safeguards. “Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with,” he said.

By Monday, however, Altman acknowledged that the complexity of AI-enabled data collection warranted clearer communication. “We shouldn’t have rushed to get this out on Friday,” he wrote in a message to employees later reposted on X. “The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”

The revised terms now explicitly prohibit “deliberate tracking, surveillance or monitoring of U.S. persons or nationals,” including through the use of commercially acquired personal data.

***

Aftermath of Anthropic’s Failed Negotiations

The Pentagon’s shift to OpenAI followed the collapse of talks with Anthropic over red lines set by its CEO, Dario Amodei: a ban on domestic mass surveillance and a prohibition on using AI in lethal autonomous weapons systems, along with a requirement that battlefield use receive specific authorization.

According to the Financial Times, Secretary Hegseth pushed for language permitting AI models for “all lawful use.” Anthropic executives argued that current U.S. law might still allow broad AI-driven data collection and sought stricter contractual limitations until new legislation could be enacted. Negotiations reportedly faltered over provisions governing large-scale collection of publicly available data.

Although the Pentagon appeared willing to refine language Anthropic considered overly broad, and company leaders believed a deal was near, discussions ultimately broke down.

In the aftermath, the Trump administration moved swiftly against Anthropic. President Donald Trump directed federal agencies to phase out the company’s AI tools. The Treasury Department, Federal Housing Finance Agency, and government-sponsored mortgage entities Fannie Mae and Freddie Mac all announced plans to terminate contracts, with full disengagement expected within six months. The Pentagon also labeled Anthropic a supply chain risk.

***

Internal Pushback and Public Reaction

The agreement has sparked internal dissent within OpenAI and broader concern across the tech industry. Employees have expressed unease both internally and on social media. Nearly 900 workers from OpenAI and Google signed an open letter urging leadership to reject government demands involving domestic mass surveillance or autonomous lethal capabilities.

Over the weekend, chalk graffiti appeared outside OpenAI’s San Francisco headquarters stating “NO TO MASS SURVEILLANCE” and encouraging employees to “Do the right thing!”

The controversy has also influenced consumer behavior. Anthropic’s chatbot, Claude, temporarily surpassed ChatGPT in Apple’s App Store rankings, according to Sensor Tower, amid online campaigns encouraging users to delete ChatGPT.

Miles Brundage, OpenAI’s former head of policy research, publicly criticized the company’s handling of the situation. He wrote that employees’ “default assumption” should be that OpenAI “caved + framed it as not caving,” though he acknowledged the organization’s complexity and recognized that some staff aimed to reach what they believed was a responsible compromise.
