BREAKING: OpenAI Takes Sides With Anthropic After Trump Goes Scorched Earth

In an escalation at the epicenter of artificial intelligence and national security, President Donald Trump on Friday ordered all federal agencies to cease using AI tools developed by the company Anthropic, accusing the firm of attempting to “strong-arm” the Pentagon and endanger American troops.

Shortly before Trump’s statement, OpenAI, one of the industry’s leading AI companies, joined Anthropic in drawing red lines around how the administration could use its technology.

The initial directive, issued in a lengthy post on Truth Social, mandates an immediate halt to new usage and a six-month phaseout for agencies that have already integrated Anthropic’s systems, including the Department of Defense. The order marks one of the most sweeping actions ever taken by a U.S. president against a major technology provider.

“The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars!” Trump wrote, asserting that Anthropic had tried to force the military to abide by its terms of service rather than government authority. He warned of “major civil and criminal consequences” if the company failed to cooperate during the transition.

The confrontation follows weeks of behind-the-scenes tension between Anthropic and Pentagon officials over how its flagship model, Claude, could be deployed in classified military environments. Anthropic had been the first frontier AI company to see its system integrated into some of the military’s most sensitive workflows, including planning and analysis tools. But executives at the company, led by Chief Executive Dario Amodei, drew boundaries around the use of their technology for mass surveillance or autonomous lethal weapons.

Those “red lines,” as the company calls them, became the crux of the dispute. Defense officials have insisted that any AI contractor must permit use of its systems for “all lawful purposes,” a formulation Anthropic rejected on the grounds that it could allow uses the company considers unethical, even if technically legal.

Now, that standoff has turned into an industry reckoning.

In a memo to employees obtained by Axios and first reported by The Wall Street Journal, Sam Altman, chief executive of OpenAI, said his company would adopt the same guardrails that ignited Anthropic’s clash with the Pentagon: no AI for mass surveillance and no autonomous lethal weapons, with humans remaining “in the loop” for high-stakes decisions.

[Photo caption: Sam Altman, chief executive of OpenAI, in New York, September 24, 2025]

“Regardless of how we got here, this is no longer just an issue between Anthropic and the [Pentagon]; this is an issue for the whole industry and it is important to clarify our stance,” Altman wrote. “We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons.”

At the same time, Altman signaled a willingness to continue negotiations with the Defense Department. ChatGPT is already available in unclassified military systems, and discussions about deploying it in classified environments have accelerated amid the rupture with Anthropic. Altman wrote that OpenAI would seek a contract allowing deployment “except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons.”

The episode places OpenAI in a delicate position. By publicly defending Anthropic’s principles, Altman risks angering the administration. Yet if the Pentagon ultimately designates Anthropic a “supply chain risk,” OpenAI could stand to gain a lucrative government contract — provided it reaches terms acceptable to defense officials.

Elon Musk, whose company xAI has agreed to the Pentagon’s broader “all lawful purposes” standard, has derided Anthropic as “Misanthropic,” accusing it of ideological bias. Trump echoed similar themes in his post, portraying the dispute as a struggle between elected authority and “out-of-control” technology executives.

Defense officials have rejected suggestions that they intend to conduct mass surveillance or rapidly field autonomous weapons. Their objection, they say, is to a private company dictating operational constraints in the midst of a technological race with China. In conversations with reporters, Pentagon officials have described concerns that Anthropic might question or delay critical deployments at sensitive moments — a characterization the company denies.

The rupture comes at a pivotal moment for the AI industry. After Anthropic’s stance became public, employees from OpenAI and Google signed a letter urging their executives to resist “pressure” from the Pentagon. Some in Washington and Silicon Valley have praised Anthropic for risking a major financial blow to uphold ethical boundaries; others view the move as naïve in matters of national defense.

For Anthropic, the president’s order represents a significant setback. Government work had been seen as both a revenue source and a proving ground for frontier AI capabilities. Losing federal contracts could reshape the company’s growth trajectory and strengthen its rivals.

If negotiations with OpenAI prove less adversarial, the Pentagon could secure access to powerful models without surrendering control. If not, the United States may confront a future in which its most advanced AI tools are developed under conditions of political distrust — or not deployed at all.