
Last week, the U.S. technology and security sectors were shaken by a conflict between Anthropic, developer of the AI model "Claude," and the U.S. Department of Defense (DoD). The dispute escalated into a national debate over who should control the world's most powerful AI in military applications: the developers themselves, or a government that can pressure them into deployment through legal means.
A brief timeline of the dispute:
Summer 2025: Cooperation begins.
January 2026: The operation to capture Nicolás Maduro.
February 2026: Tensions reach a breaking point.
However, hours after Anthropic was cut off, OpenAI announced an agreement with the DoD. CEO Sam Altman stated that the company's models would continue to be used on the Pentagon's classified military networks, emphasizing that OpenAI maintains "red lines" similar to Anthropic's: prohibiting mass surveillance and requiring human accountability for the use of lethal force.
Meanwhile, multiple media outlets reported that U.S. Central Command continues to use Anthropic's AI despite Trump's recent ban, including during operations targeting Iran.
During U.S. and Israeli strikes on Iran, Claude was employed for intelligence work, target selection, and battlefield simulation. That level of integration shows that AI systems are more deeply embedded in military operational planning than many realize, and that they cannot be discontinued instantly by order: removing them from complex systems takes time.
The issue has sparked a broad ethical debate. Anthropic has argued that AI differs from traditional weapons and military technology because of its ability to learn, analyze, and make decisions whose impacts can be both widespread and rapid.
Anthropic positions itself as an AI company committed to safety and ethics, while the Trump administration has aggressively pursued military AI development. A 2023 DoD policy stated that AI systems could select and attack targets without direct human intervention, provided they were properly vetted. This fueled Anthropic's concern that if its model were used in secret military systems, the public might remain unaware until an incident occurred.
The company does not reject security applications entirely, but it believes current models are not ready for high-stakes tasks such as split-second attack decisions. Another key issue is the use of AI to surveil citizens: while current laws permit some data collection, AI can escalate that collection into continuous behavioral analysis, cross-database pattern detection, and automated risk assessment at broad scale.
Many have questioned why the DoD accepted OpenAI's terms but rejected Anthropic's; some officials had previously criticized Anthropic for being overly cautious about safety.
This is not a conflict either side can easily exit. If Anthropic is permanently cut off from government contracts, the loss could severely impact its business. But if the DoD loses access to the most advanced models, security gaps may open while it waits for other players to catch up.
This conflict reflects a structural tension between the safety values of AI companies, which reject violence and surveillance conducted without human oversight, and the desire of governments and militaries to leverage cutting-edge technology for strategic advantage.
It raises a major question about where the balance should lie in an era when AI is becoming critical infrastructure for security, human safety, human rights, and technology governance: who should set the ethical boundaries? This dispute may establish a new paradigm in Big Tech–military relations, marking the beginning of a new power dynamic between the AI industry and governments worldwide.
Sources: Wall Street Journal, CNBC, TechCrunch, CNN