Anthropic AI announced that it will work with the Trump administration to integrate its AI models into government work. The announcement comes after Anthropic clashed with the US government earlier in the year over the Pentagon's unrestricted use of its models for defence purposes. Anthropic, which develops Claude, an AI model rapidly growing in popularity, is valued at nearly $400 billion USD.
The launch of Mythos
The launch of Anthropic's new model, Mythos, is what sparked the meetings between CEO Dario Amodei and White House staff. Mythos is reportedly the most advanced model yet at making complex cyberattacks both easier and quicker to execute. The banking industries of the US, Canada, and Britain have all scheduled meetings with Anthropic to better understand how the model works and how it can be used to strengthen banks' defences against cyberattacks. Treasury Secretary Scott Bessent joined Chief of Staff Susie Wiles in the meeting with Amodei, Axios reported. Anthropic called the meeting ‘productive' and announced that it and the Trump administration would “work together on key shared priorities such as cybersecurity, America's lead in the AI race, and AI safety.”
Anthropic vs the US government
Months ago, long before the release of Mythos, Anthropic was entangled in a battle with the US government and Pete Hegseth. The US government asked all major AI companies to grant the Department of War unfettered access to their models. Hegseth demanded that all restrictions attached to the models be removed so that the US could use the technology to its full potential. Sam Altman, OpenAI's CEO, signed on the dotted line without hesitation, as did Elon Musk, but Anthropic held out. The consequences were significant. Hegseth threatened that if Anthropic did not remove the restrictions by February 28 (a deadline that came and went), he would label the company a ‘supply-chain risk'. Under that label, no company doing business with the Department of War would be allowed to use Anthropic's software. The threat could have effectively ended Anthropic's meteoric growth of the last 18 months.
Hegseth slapped Anthropic with the formal label in March, and Anthropic sued the US government days later over the designation. When ordering federal agencies to stop using Anthropic's AI tools, U.S. President Donald Trump attacked the company on Truth Social.
“The Left-wing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War.”
-Donald Trump
When asked about the meeting with Anthropic on April 17, Trump claimed he had no idea of the details or the result of the meeting. Even with Trump refusing to mention (or potentially forgetting) the meeting entirely, it seems Anthropic's relationship with the Trump administration has begun to thaw. Anthropic co-founder Jack Clark all but confirmed as much, calling the supply-chain risk designation a “narrow dispute”. Clark believes the designation will not affect the company's ability to showcase its models to the government.
What is Anthropic AI?
Anthropic AI was founded in 2021 by a group of ex-OpenAI employees. With a team of just seven, Anthropic built its AI model Claude in just a year; Claude 1 was released in March 2023. After an initial investment of $1 billion from Google in 2021, Anthropic has built unimaginable wealth in less than five years, latching onto and benefitting from the AI boom in 2025. In 2024, Databricks announced that Claude would be integrated into its software, a serious achievement for Anthropic. Two years later, Anthropic is valued at nearly three times Databricks, a company nearly 10 years its senior. In March, Anthropic became the first American company to be labelled a supply-chain risk, opening the door to government overreach into the private sector, but Anthropic is reportedly confident in its ability to have the label stripped.
OpenAI bends the knee to defence pressure
While Anthropic's decision to deny the Pentagon complete access to its model has caused issues for the company, it has also brought praise from the public, something Sam Altman has not received. OpenAI, the creator of ChatGPT, faced the same conditions as Anthropic but chose to cave to government pressure. On February 28, Altman described being ‘rushed' into the deal and penned an explanation on X. Altman claimed that he ‘shouldn't have rushed' into signing the DoW's contract, and that the whole episode was ‘a learning experience' for the billionaire CEO. According to Altman, he was trying to avoid a ‘much worse outcome,' and when his attempts to de-escalate discussions failed, he chose to sign a dangerous contract rather than risk financial consequences for his shareholders. Sam Altman is worth nearly $4 billion.
Altman received immediate backlash for folding to government pressure, but defended himself in another statement. Altman argued that ‘unelected officials' at tech companies should not decide how the government uses technology. He also said he does not want OpenAI deciding what to do in the event of an emergency in the US, specifically a nuclear attack. Altman believes AI experts are ill-equipped to make decisions about their own software, trusting the US government instead. The US government made the same argument to Anthropic, but to no avail. Now, it is clear that the US government wants to use Anthropic's models, even with the supply-chain risk designation.