Pentagon Pushes AI Companies for Classified Networks Expansion
The Pentagon is pressing top artificial intelligence companies, including OpenAI and Anthropic, to make their tools available on classified networks without many of the standard restrictions those companies typically apply. The push, part of ongoing negotiations, intensifies a debate over military AI deployment and the safeguards that should govern it.
What Happened
Pentagon chief technology officer Emil Michael told tech executives at a White House event on Feb 13, 2026, that the military aims to deploy AI models on both unclassified and classified domains. An anonymous official told Reuters the Pentagon is “moving to deploy frontier AI capabilities across all classification levels.”
OpenAI and Anthropic are the key companies being pressed for this expansion; Alphabet’s Google and xAI have struck similar deals for military use of their AI tools.
Details From Sources
Many AI companies already build custom tools for the US military, mostly for unclassified networks used for administrative work. Only Anthropic’s AI is presently available in classified settings, through third parties, and the government remains bound by the company’s usage policies.
Military officials hope to leverage AI’s power to synthesize information and help shape decisions. However, AI researchers warn these tools can make mistakes or “make up information,” which could have “deadly consequences” in classified settings.
AI companies build safeguards and ask customers to follow usage guidelines to minimize these downsides. Pentagon officials have “bristled at such restrictions,” arguing they should be able to deploy any commercial AI tool that complies with American law.
OpenAI’s Agreement
OpenAI reached a deal allowing military use of its tools, including ChatGPT, on an unclassified network (genai.mil) that has been rolled out to more than 3 million US Defense Department employees. OpenAI agreed to remove many typical user restrictions, though “some guardrails remain.” According to an OpenAI spokesperson, expanding the agreement to classified networks would require a new or modified deal.
Anthropic’s Discussions
Discussions with OpenAI rival Anthropic have been “significantly more contentious,” as Reuters previously reported. Anthropic executives do not want their technology used for autonomous weapons targeting or US domestic surveillance. An Anthropic spokesperson stated the company is committed to protecting America’s lead in AI and helping counter foreign threats.
Anthropic’s chatbot, Claude, is already “extensively used for national security missions” by the US government. The company is in “productive discussions” with the “Department of War” about continuing that work.
Why This Matters
Pentagon CTO Michael’s comments sharpen the tension between the military’s desire to use generative AI tools without restrictions and tech companies’ insistence on setting boundaries for how their tools are deployed. Classified networks handle sensitive work such as mission planning and weapons targeting, making AI deployment there particularly consequential.
Background Context
This development is part of broader, ongoing negotiations between the Pentagon and generative AI companies over how US defense forces will use AI on a future battlefield, where autonomous drone swarms, robots, and cyber attacks already shape modern warfare.
US President Donald Trump ordered the Department of Defense to rename itself the Department of War. This change, mentioned in Anthropic’s statement, requires action by Congress.
Industry Reactions
OpenAI agreed to fewer restrictions for its tools on unclassified networks, with a new agreement planned for classified use. Alphabet’s Google and xAI have struck similar deals. Anthropic’s discussions have been more contentious, with the company citing ethical concerns over autonomous weapons targeting and domestic surveillance, even as its Claude chatbot is already used extensively for national security missions.
Future Implications (SPECULATIVE)
Reuters could not determine how or when the Pentagon plans to deploy AI chatbots on classified networks. The military hopes to leverage AI for decision-shaping, while balancing the risks identified by AI researchers. Contentious discussions between the military and tech companies over AI usage policies are likely to continue as deployment expands.
Conclusion
The Pentagon’s push for AI deployment on classified networks is a complex effort to integrate advanced AI with national security needs. It highlights the ongoing tension between operational demand for US defense AI and the need for safeguards and ethical limits.
FAQ
Q1: What is the Pentagon’s goal regarding AI companies and classified networks?
A1: The Pentagon aims to make top AI companies’ tools, including those of OpenAI and Anthropic, available on classified networks and across all classification levels, often without the companies’ standard restrictions.
Q2: Why are AI companies concerned about deploying their tools on classified networks?
A2: AI companies are concerned because their tools can make mistakes or generate false information, which AI researchers warn could have deadly consequences in classified military settings. Some also have ethical concerns, like Anthropic’s refusal for autonomous weapons targeting or domestic surveillance.
Q3: What is the current status of AI tools on classified networks for the US military?
A3: While many AI tools are used on unclassified networks, only Anthropic’s AI is currently available in classified settings through third parties, though still subject to the company’s usage policies.
Q4: What is OpenAI’s agreement with the Pentagon?
A4: OpenAI reached a deal for its tools, including ChatGPT, to be used on an unclassified network (genai.mil) by over 3 million US Defense Department employees. This agreement involves removing many typical user restrictions, though some guardrails remain, and expanding to classified networks would require a new deal.
Q5: What are the Pentagon’s arguments against AI companies’ restrictions?
A5: Pentagon officials have “bristled at such restrictions,” arguing that they should be able to deploy commercial AI tools as long as they comply with American law.
Stay informed on the evolving landscape of military AI deployment and its associated policy debates.