Tech
OpenAI Revises Pentagon AI Deal After Backlash Over Military Use
OpenAI says it is amending its recent agreement with the United States Department of Defense following criticism over the potential use of its technology in classified military operations.
Chief executive Sam Altman announced that the company will insert clearer restrictions into the contract, explicitly prohibiting the intentional use of its systems for domestic surveillance of US citizens and nationals.
The controversy followed a dispute between OpenAI’s rival Anthropic and the Pentagon over concerns that Anthropic’s AI model, Claude, could be used for mass surveillance or in fully autonomous weapons systems.
In a statement over the weekend, OpenAI said its Pentagon agreement contained “more guardrails than any previous agreement for classified AI deployments”. However, Altman later acknowledged that the rollout of the deal had been rushed.
“The issues are super complex, and demand clear communication,” he wrote on social media, adding that the company had sought to de-escalate tensions but recognised that the announcement had appeared “opportunistic and sloppy”.
Under the revised terms, intelligence agencies such as the National Security Agency would require additional contractual modifications before being permitted to use OpenAI systems.
The backlash has had measurable effects. Reports indicate that day-over-day uninstalls of the ChatGPT mobile app surged following the announcement, while Anthropic’s Claude climbed to the top of Apple’s App Store rankings.
Anthropic’s model had previously been blacklisted by the administration of Donald Trump after the company refused to abandon a corporate principle barring the use of its technology in fully autonomous weapons. Despite that position, reports have since indicated that Claude was used during the US–Israel conflict with Iran shortly after the ban.
The Pentagon has declined to comment on its arrangements with Anthropic.
