DOJ Appeals Ruling That Blocked Trump's Ban on Anthropic AI
April 02, 2026 · 4 min read
The U.S. Department of Justice filed a notice of appeal on Wednesday in San Francisco federal court, seeking to overturn a landmark ruling that blocked the Trump administration from banning Anthropic's Claude AI across all federal agencies. The appeal, which will be heard by the Ninth Circuit Court of Appeals, escalates what has become one of the most consequential legal battles at the intersection of artificial intelligence, national security, and the First Amendment.
The conflict traces back to February 27, when contract negotiations between Anthropic and the Pentagon collapsed after the AI company insisted on maintaining safety guardrails that would prevent its technology from being used for fully autonomous weapons systems or mass surveillance of American citizens. Defense Secretary Pete Hegseth responded by invoking rare military authority — typically reserved for foreign adversaries — to designate Anthropic a "supply chain risk" and ordered all federal agencies to immediately cease using the company's technology. Hegseth publicly called Anthropic "sanctimonious" and accused the company of "arrogance." In a detail that underscored the political dimensions of the dispute, OpenAI reached a Pentagon agreement for classified services just hours after Hegseth's announcement.
U.S. District Judge Rita Lin issued a 43-page ruling on March 26 that was remarkable for its bluntness, writing that "nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary" for disagreeing with the government on contract terms. The judge found the measures were "likely unlawful" and could "cripple" Anthropic, concluding that "the record supports an inference that Anthropic is being punished for criticizing the government's contracting position in the press." She also dismissed the national security rationale directly, writing that "the Department of War provides no legitimate basis to infer from Anthropic's forthright insistence on usage restrictions that it might become a saboteur."
The stakes of the case are difficult to overstate. Anthropic was the only AI company cleared for use on the Defense Department's classified networks, and the military was actively using Claude in operations against Iran at the time of the ban. The designation gave Anthropic just six months to be phased out of federal systems — a timeline critics said would both compromise national security operations and inflict irreversible commercial damage on the company.
The case has drawn support for Anthropic from a strikingly broad coalition. Third-party legal briefs were filed by Microsoft, multiple tech industry groups, military veterans, and Catholic theologians — all arguing that the government's actions set a dangerous precedent for retaliating against companies that impose ethical constraints on their products. Pentagon Undersecretary Emil Michael pushed back sharply, calling Judge Lin's ruling a "disgrace" and claiming it would disrupt Hegseth's "full ability to conduct military operations."
Anthropic has expressed confidence in its legal position. "We're grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits," a company spokesperson said following the original ruling. Judge Lin granted a one-week stay of her order to allow today's appeal to be filed. The company also has a separate, narrower case still pending in the D.C. Circuit Court of Appeals involving the specific Pentagon rules governing supply chain risk designations.
The Ninth Circuit's decision will set a precedent reaching far beyond this single dispute. At its core, the case asks whether the federal government can use national security designations to punish American technology companies that refuse to remove ethical guardrails from their products. For an AI industry increasingly engaged with military and intelligence agencies, the outcome will shape the terms on which companies can negotiate the boundaries of how their technology is used — and what happens when they say no.