A civil case filed this week asks a federal judge to force OpenAI to ban a user whose ChatGPT-enabled stalking campaign ended in his arrest on four felony counts in January 2026.
The suit asks the court to order OpenAI to terminate the user's ChatGPT account, block him from creating new ones, and notify the plaintiff immediately if he attempts to log back in. No such order has been entered against an AI company before.
The case is Doe v. OpenAI. Reason published the temporary restraining order application Monday, including exhibits from the underlying complaint. The plaintiff, identified as Jane Doe, filed for emergency relief after her ex-boyfriend was released from custody two days earlier: a criminal court had found him incompetent to stand trial and ordered him committed, but the state missed the deadline to transfer him from jail to a mental health facility. He had been arrested in January 2026 on four felony counts, including communicating a bomb threat and assault with a deadly weapon.
Before the arrest, the ex-boyfriend spent months using ChatGPT as an instrument of harassment. He generated dozens of fabricated psychological reports about Doe and distributed them to her family, friends, and professional contacts. The campaign escalated to voicemail death threats, and ultimately to an encoded death threat sent through ChatGPT to her family.
The platform's alleged role
The complaint goes further than documenting how a bad actor used a tool. According to Reason, ChatGPT conversations in the defendant's account suggest the platform actively reinforced his delusional beliefs: that he had personally cured sleep apnea, that the medical establishment was persecuting him, that Doe was the source of his problems. As his behavior escalated, he allegedly used ChatGPT to consult on violent plans against third parties.
The exhibited account history includes conversations titled "Violence list expansion." A second entry, "Fetal suffocation calculation," also appears in the record. The legal analyst who reviewed the exhibits for Reason speculates the title likely refers to the defendant's theories about maternal sleep apnea and fetal health rather than a literal threat, while acknowledging the reading is guesswork.
What the court must decide
Temporary restraining orders require a showing of imminent, irreparable harm. Given the defendant's documented history and his fresh release from custody, clearing that threshold is not the hard part here.
The harder question is whether a court can compel OpenAI to permanently ban a user and monitor for evasion. Section 230 of the Communications Decency Act generally shields platforms from damages liability for third-party content, but it is far less clear whether it forecloses injunctive relief that orders a platform to act. Whether OpenAI contests the injunction on Section 230 grounds will be the defining legal battle.
Courts have ordered platforms to preserve records, respond to subpoenas, and remove specific content. Requiring targeted, ongoing account enforcement with proactive notification obligations is a different category, with no precedent directly governing it, as Reason notes.
If the TRO is granted, Doe v. OpenAI becomes a potential template for survivors of AI-enabled domestic abuse seeking relief directly from AI providers rather than relying solely on criminal proceedings. It also forces a question the industry has not answered in court: does a documented pattern of harmful use create an affirmative duty to act that a platform cannot simply absorb in its terms of service? OpenAI has not commented publicly on the case.
FAQ
What is Doe v. OpenAI about?
Jane Doe, whose ex-boyfriend used ChatGPT to stalk and threaten her over months, is suing OpenAI and seeking an emergency court order to have him permanently banned from the platform.
Can a court legally order an AI company to ban a specific user?
There is no clear precedent. Courts have compelled platforms to remove content or preserve records, but requiring ongoing account monitoring and evasion tracking is legally novel territory that may test Section 230's scope.
What crimes was the ex-boyfriend charged with?
He was arrested in January 2026 on four felony counts including communicating a bomb threat and assault with a deadly weapon. A criminal court found him mentally incompetent and ordered commitment; he was released early due to a state procedural failure.
Did ChatGPT actively assist the harassment or just enable it?
The complaint alleges the platform reinforced the user's delusional beliefs and was used to plan violence, not merely as a passive tool. Whether that distinction affects platform liability is one of the questions the court will need to address.