# US media learn about the use of the AI tool Claude in a military operation to capture Maduro
The US military used Anthropic's Claude in an operation to capture Venezuelan President Nicolás Maduro, The Wall Street Journal reported, citing sources.
The mission included bombing several targets in Caracas.
The use of the model for such purposes contradicts Anthropic’s public policy. The company’s rules explicitly prohibit applying AI for violence, weapons development, or surveillance.
“We cannot comment on whether Claude or any other model was used in a specific operation—secret or otherwise. Any application of LLMs—whether in the private sector or government—must comply with our policies regulating neural network deployment. We work closely with partners to ensure compliance,” said an Anthropic spokesperson.
The integration of Claude into Department of Defense systems became possible through Anthropic's partnership with Palantir Technologies, whose software is widely used by the military and federal law enforcement agencies.
After the raid, an Anthropic employee asked a colleague at Palantir what specific role the model played in the operation to capture Maduro, WSJ reports. An Anthropic representative said the company had not discussed the use of its models in specific missions "with any partners, including Palantir," limiting those discussions to technical matters.
“Anthropic is committed to the use of advanced AI in support of US national security,” added the company representative.
## Anthropic vs. the Pentagon?
Pentagon spokesperson Sean Parnell announced a review of relations with the AI lab.
“Our country needs partners ready to help troops win any war,” he said.
In July 2025, the US Department of Defense signed contracts worth up to $200 million each with Anthropic, Google, OpenAI, and xAI to develop AI solutions for national security. The Pentagon's Chief Digital and Artificial Intelligence Office planned to use these developments to build agentic security systems.
By January 2026, however, WSJ reported that the agreement with Anthropic was at risk of collapse. The disagreements centered on the company's strict ethical policies, which prohibit using Claude for mass surveillance and autonomous lethal operations, limiting its use by agencies such as ICE and the FBI.
Dissatisfaction among officials grew as the Grok chatbot was integrated into the Pentagon's network. Defense Secretary Pete Hegseth, commenting on the partnership with xAI, emphasized that the department "will not use models that do not allow for warfare."
## Pressure on developers
Axios, citing sources, reported that the Pentagon is pressuring four major AI companies to allow the US military to use their technologies for “all lawful purposes.” This includes weapon development, intelligence gathering, and combat operations.
Anthropic refuses to lift its restrictions on citizen surveillance and fully autonomous weapons. Negotiations have reached an impasse, but quickly replacing Claude is difficult because of the model's edge in certain government tasks.
Besides Anthropic's chatbot, the Pentagon uses OpenAI's ChatGPT, Google's Gemini, and xAI's Grok for unclassified tasks. All three companies have agreed to relax the restrictions that apply to regular users.
Discussions are now underway about moving the LLMs into a classified environment and using them "for all lawful purposes." One of the three companies has already agreed; the other two are "showing greater flexibility" than Anthropic.
## AI militarization
The US is not the only country actively integrating artificial intelligence into its defense sector.
### China
In June 2024, China introduced an AI commander for large-scale military simulations involving all branches of the PLA. The virtual strategist has broad authority, learns quickly, and improves tactics during digital exercises.
In November 2024, media reported that Chinese researchers had adapted Meta's Llama 13B model to create ChatBIT. The model was optimized for gathering and analyzing intelligence data and supporting operational decision-making.
### India
New Delhi also relies on AI as a driver of national security. The government has developed strategies and programs at the national level, established specialized institutes and agencies for AI implementation, and launched projects to apply the technology across various sectors.
### United Kingdom
London has designated artificial intelligence as a priority area. In the “AI Strategy for Defense” (2022), the department considers AI a key component of future armed forces. The “Strategic Defense Review” (2025) describes the technology as a fundamental element of modern warfare.
Whereas AI was previously seen as an auxiliary tool in military contexts, the UK armed forces now plan a transformation into “technologically integrated forces,” where AI systems are to be used at all levels—from command analytics to the battlefield.
As a reminder, in March 2025 the Pentagon announced the use of AI agents for modeling encounters with foreign adversaries.
Source: ForkLog