
The US military reportedly used Claude, the AI model developed by Anthropic, to inform its attack on Iran, despite Donald Trump's recent decision to sever ties with the company and its artificial intelligence tools. The episode illustrates how deeply AI technology has become embedded in military operations and how difficult such systems are to detach quickly.
The US military command allegedly used Claude for intelligence analysis, target selection, and battlefield simulations during the joint US-Israel bombardment of Iran, even after Trump ordered an immediate halt to its use, denouncing the company's "Radical Left" stance and "ideological whims." The defense secretary, Pete Hegseth, accused Anthropic of "arrogance and betrayal" and demanded unrestricted access to its AI models.
The continued use of Claude despite the ban underscores how hard it is to withdraw AI tools from military operations: the technology is woven into so many systems that detaching it quickly is impractical. The situation raises questions about the role of AI in military decision-making and the risks of depending on third-party companies for critical services.
The break between the US military and Anthropic has left an opening for OpenAI. Sam Altman, OpenAI's CEO, has reached an agreement with the Pentagon for the use of its tools on the classified network, a sign of the growing competition in the AI market and of rival companies' readiness to fill the void left by others.
As AI's role in military decision-making continues to grow, the long-term implications of relying on third-party tools for critical operations will need to be addressed, to ensure the technology serves the nation's interests rather than those of individual companies.
Q: What is Claude?
A: Claude is an AI model developed by Anthropic, used by the US military for intelligence analysis, target selection, and battlefield simulations.
Q: Why did Trump cut ties with Anthropic?
A: Trump denounced Anthropic as a "Radical Left AI company," citing its "ideological whims," its terms of use, and the use of Claude in a recent raid.
Q: How is OpenAI involved?
A: OpenAI has reached an agreement with the Pentagon to use its tools on the classified network, highlighting the growing competition in the AI market and the potential for rival companies to fill the void left by others.
Source: The Guardian