OpenAI CEO Sam Altman is still in the hot seat this week after his company signed a deal with the US military. The move, which came after Anthropic’s roughly $200 million contract with the Pentagon imploded, drew criticism from OpenAI employees, who asked Altman to release more information about the agreement. In a social media post, Altman admitted the deal looked “sloppy.”

While this incident has become a major news story, it may be only the latest and most public example of OpenAI keeping its policies vague about how the US military can access its AI.

In 2023, OpenAI’s usage policy explicitly banned the military from accessing its AI models. But some OpenAI employees discovered that the Pentagon had already started experimenting with Azure OpenAI, a Microsoft service built on OpenAI’s models, two sources familiar with the matter said. At the time, Microsoft had been contracting with the Department of Defense for decades. It was also OpenAI’s largest investor and had a broad license to commercialize the startup’s technology.

That same year, OpenAI employees saw Pentagon officials walking through the company’s San Francisco offices, the sources said. They spoke on the condition of anonymity because they aren’t authorized to comment on private company matters.

Some OpenAI employees were wary of associating with the Pentagon, while others were simply confused about what OpenAI’s usage policies meant. Did the policy apply to Microsoft? While sources tell WIRED it was not clear to most employees at the time, spokespeople from OpenAI and Microsoft say Azure OpenAI products are not, and were not, subject to OpenAI’s policies.

“Microsoft has a product called the Azure OpenAI Service that became available to the US Government in 2023 and is subject to Microsoft terms of service,” said spokesperson Frank Shaw in a statement to WIRED. Microsoft declined to comment specifically on when it made Azure OpenAI available to the Pentagon, but noted that the service was not approved for “top secret” government workloads until 2025.

“AI is already playing a significant role in national security and we believe it’s important to have a seat at the table to help ensure it’s deployed safely and responsibly,” OpenAI spokesperson Liz Bourgeois said in a statement. “We’ve been transparent with our employees as we’ve approached this work, providing regular updates and dedicated channels where teams can ask questions and engage directly with our national security team.”

The Department of Defense did not respond to WIRED’s request for comment.

In January 2024, OpenAI updated its usage policy to remove the blanket ban on military use. Several OpenAI employees learned of the change from an article in The Intercept, sources say. Company leaders later addressed the change at an all-hands meeting, explaining how the company would tread carefully in this area moving forward.

In December 2024, OpenAI announced a partnership with Anduril to develop and deploy AI systems for “national security missions.” Ahead of the announcement, OpenAI told employees that the partnership was narrow in scope and would only deal with unclassified workloads, the same sources said. This stood in contrast to a deal Anthropic had signed with Palantir, which would see Anthropic’s AI used for classified military work.

Palantir approached OpenAI in the fall of 2024 to discuss participating in its “FedStart” program, an OpenAI spokesperson confirmed to WIRED. OpenAI ultimately turned the offer down, telling employees it would have been too high-risk, two sources familiar with the matter tell WIRED. However, OpenAI now works with Palantir in other ways.

Around the time the Anduril deal was announced, a few dozen OpenAI employees joined a public Slack channel to discuss their concerns about the company’s military partnerships, sources say, and a company spokesperson confirmed. Some believed the company’s models were too unreliable to handle a user’s credit card information, let alone assist Americans on the battlefield.
