Documents reveal DOGE's AI deregulation playbook at HUD: a tool called SweetREX designed to classify housing rules with deletion as the default outcome.
Federal housing regulators received a slide deck last summer pitching an AI system with a name that left little to the imagination: SweetREX, built for the "extermination" of federal rules. Documents obtained through Freedom of Information Act requests, shared with FedScoop and first reported by the Lever, reveal how the Department of Government Efficiency laid out a concrete plan to automate the review of every Department of Housing and Urban Development regulation using large language models.
Named for DOGE associate Christopher Sweet, per Wired's earlier reporting, the tool was pitched to HUD employees as a workflow accelerator. The process had three steps: the AI scans each regulation and recommends keeping it, deleting it, or partially deleting it. Attorneys review the recommendations. Agency staff make the final call.
DOGE framed SweetREX as labor-saving rather than policy-deciding. The tool would handle "all the most time-consuming steps in deregulation" while leaving program groups in control, with legal review "as needed." HUD's rulebook covers a broad range of protections, including prohibitions on sex discrimination in mortgage lending and legal aid programs for homeowners in foreclosure.
The default assumption
The framing did not hold up under scrutiny. A source familiar with the FOIA submissions told FedScoop that the explicit instructions embedded in the system reveal a different intent: the default assumption is that regulations should be rescinded. The AI was not evaluating rules neutrally. It was, by prompt design, looking for reasons to cut them.
That structural bias reflects a well-documented limitation of large language models. These systems tend to produce outputs aligned with whatever assumptions are embedded in the prompt. A model directed toward a deregulatory conclusion will find one. The human review steps downstream do not undo that directional push; they process outputs that were already oriented before any attorney saw them.
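The dynamic is easy to sketch in code. The following toy Python example is purely illustrative: SweetREX's actual prompts, model, and implementation are not public, so every name and string here is a hypothetical stand-in. It shows how a default written into the prompt resolves ambiguous cases in one direction, regardless of who reviews the output afterward.

```python
# Illustrative sketch only. SweetREX's real prompts and model calls are not
# public; "toy_model" is a stand-in for an LLM, not a real API.

def build_prompt(regulation_text: str) -> str:
    # Hypothetical prompt mirroring the reported framing: rescission is the
    # default, and the model must justify any departure from it.
    return (
        "Default assumption: this regulation should be RESCINDED.\n"
        "Only recommend KEEP if the text below proves it is strictly required.\n\n"
        + regulation_text
    )

def toy_model(prompt: str) -> str:
    # Stand-in for an LLM. Like the sycophantic behavior described above,
    # it follows the default stated in the prompt whenever the evidence
    # for keeping a rule is anything short of explicit.
    if "statutorily required" in prompt.lower():
        return "KEEP"
    return "RESCIND"  # ambiguity resolves toward the prompted default

def classify(regulation_text: str) -> str:
    return toy_model(build_prompt(regulation_text))

if __name__ == "__main__":
    ambiguous = "Lenders shall maintain records of mortgage applications."
    mandated = "This rule is statutorily required under the Fair Housing Act."
    print(classify(ambiguous))  # ambiguous text falls to the default
    print(classify(mandated))
```

The point of the sketch is structural: the bias lives in `build_prompt`, before any model runs and before any attorney reads a recommendation, which is why downstream human review ratifies rather than checks the direction of the output.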
AI developers recognize these failure modes even as they scale at historic speed. OpenAI closed a $122 billion funding round this week at a valuation of $852 billion, positioning the company as foundational infrastructure for AI deployment in enterprise and government. Yet the reliability problems persist. TechCrunch reported last week that OpenAI's newest model specifically targeted hallucination reduction in law, medicine, and finance, an acknowledgment that prior versions produced unreliable outputs in exactly the high-stakes domains where AI review tools like SweetREX are being proposed for regulatory decisions. Sycophancy, the related tendency to echo a prompt's framing, remains unresolved.
Scale and consequences
What the FOIA documents don't establish is whether HUD actually ran SweetREX against its active regulatory code. What they do show is that DOGE arrived at the agency with a branded tool, a named workflow, and a slide deck, and that the system's instructions pointed toward deletion from the outset.
Compute spending puts the infrastructure ambitions in context. Bloomberg reported this month that OpenAI plans to spend $50 billion on compute power in 2026, up from roughly $30 million nine years ago. The same foundation models sold to enterprise and government clients are the ones whose sycophantic tendencies have been flagged repeatedly in AI research and policy discussions.
Housing regulations are not procedural filler. Anti-discrimination provisions in HUD's rulebook took decades of litigation and legislation to establish. Rules around foreclosure assistance were written in direct response to specific market failures. Running that body of law through an AI classifier with rescission as the default is a different process than notice-and-comment rulemaking: faster, less transparent, and shaped by the prompt before any human review begins.
Whether SweetREX processed a single active HUD rule, or whether DOGE brought the same playbook to other federal agencies, the documents do not say. What they reveal is a template: AI-assisted regulatory review with deletion as the default, packaged as automation but pointed in a predetermined direction.
Frequently asked questions
What is SweetREX?
SweetREX is an AI deregulation tool developed by DOGE, named after associate Christopher Sweet. It was pitched to HUD as a system to classify every federal housing regulation for retention, deletion, or partial deletion, with rescission as the explicit default assumption.
Can DOGE legally use AI to eliminate federal housing regulations?
Federal agencies must generally follow the Administrative Procedure Act when rescinding rules, which requires public notice, comment periods, and reasoned justification. Whether AI-generated recommendations satisfy that standard has not been tested in court.
What HUD regulations could be affected?
HUD's code covers anti-discrimination provisions in mortgage lending, foreclosure legal aid programs, and access standards under federal civil rights law, among many others.
Why does LLM sycophancy matter for government AI tools?
Large language models tend to reflect the assumptions built into their prompts. When a system is directed to evaluate regulations through a deregulatory lens, it returns deregulatory results, making downstream human review less of an independent check and more of a ratification step.
