OpenAI’s Altman says defense deal was ‘opportunistic and sloppy’


OpenAI CEO Sam Altman addresses the gathering at the AI Impact Summit, in New Delhi, India, February 19, 2026.

Bhawika Chhabra | Reuters

OpenAI CEO Sam Altman said Monday that the company “should not have rushed” its recent deal with the U.S. Department of Defense and outlined revisions to the agreement.

Altman shared what he described as a repost of an internal memo on X, saying the company would amend the contract to include some new language, adding that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”

It comes after the ChatGPT maker announced it had struck a new deal with the Defense Department on Friday, just hours after U.S. President Donald Trump directed federal agencies to stop using rival AI firm Anthropic’s tools, and hours before Washington would carry out strikes on Iran.

He added that the Defense Department had affirmed that OpenAI’s tools would not be used by intelligence agencies such as the NSA.

“There are many things the technology simply is not ready for, and many areas where we do not yet understand the tradeoffs required for safety,” Altman said, adding that the company would work with the Pentagon on technical safeguards.

The CEO also admitted he had made a mistake and “should not have rushed” to get the deal out on Friday.

“We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy,” he said.

The acknowledgment comes after a public feud between Anthropic and Washington over safeguards for its Claude AI systems, which ended without an agreement. Defense Secretary Pete Hegseth said Friday the company would be designated a supply-chain threat.

Following an initial deal last year, Anthropic was the first AI lab to deploy its models across the Defense Department’s classified network.

The company had later sought assurances that its tools would not be used for purposes such as domestic surveillance in the U.S., or to operate and develop autonomous weapons without human control.

The dispute began after it was revealed that Anthropic’s Claude had been used by the U.S. military in its raid to capture Venezuelan president Nicolás Maduro in January, though the company did not publicly object to that use case.

OpenAI’s deal with the Pentagon came right after talks between Anthropic and the Defense Department broke down, though Altman had told employees in a Thursday memo that OpenAI shared the same “red lines” as Anthropic. He said in a post Friday that the Defense Department agreed to the company’s restrictions.

It remains unclear why the Defense Department agreed to accommodate OpenAI and not Anthropic, though government officials have for months criticized Anthropic for allegedly being overly concerned with AI safety.

The timing of OpenAI’s deal with the Defense Department prompted online backlash, with many users reportedly ditching ChatGPT for Claude on app stores.

In his post, Altman further addressed the controversy, saying: “In my conversations over the weekend, I reiterated that Anthropic should not be designated as a [supply chain risk], and that we hope the [Department of Defense] offers them the same terms we’ve agreed to.”

Anthropic was founded in 2021 by a group of former OpenAI employees and researchers, including Dario Amodei, who left the company after disagreements over its direction. The company has marketed itself as a “safety-first” alternative.


