Society / StudentNation / August 8, 2025

Young People Are Using ChatGPT for Therapy, but the Conversations Lack Privacy and Protections

Therapy is now one of the leading uses of AI. But unlike traditional therapy notes, which are privileged, chatbot transcripts carry no such shield.

Chelsea Lubbe and Ray Epstein

A woman holds a smartphone displaying a conversation with a chatbot in a virtual assistant app.

(Oscar Wong / Getty)

On May 13, US Magistrate Judge Ona T. Wang issued an order requiring OpenAI to override its own privacy policies and retain all ChatGPT user queries. In the copyright infringement case The New York Times Company et al. v. Microsoft Corporation et al., the court found that the logs must be preserved as potential evidence in legal actions relating to the use of the service. OpenAI appealed the decision, writing in a statement that it “fundamentally conflicts with the privacy commitments” made to users. But for now, the order stands. The impact of this decision is enormous: OpenAI CEO Sam Altman has said that roughly 300 million people use ChatGPT every week.

One group of users is especially vulnerable to exposure under this ruling: survivors of sexual assault and other traumas who turn to AI in the absence of other support networks, expecting that their identities and experiences will stay private unless and until they are prepared to come forward. Now, every message—whether archived, drafted, or sent in temporary chat mode—will be preserved indefinitely.

It might seem jarring to think of AI as a key source of support and care for sexual-assault survivors, but these tools are filling a void created by vast inequities in access to mental-health treatment in the United States. The Harvard Business Review recently reported that one of the leading uses of AI in 2025 so far has been therapy and companionship.

Americans are experiencing a collapse in healthcare affordability, including diminished access to psychotherapy, antidepressants, and alternative treatments, just as national anxiety is skyrocketing. A 2022 survey found that 90 percent of Americans believe there is a mental health crisis in the United States, with 80 percent of respondents citing cost as a leading barrier to accessing care. Another 2022 study exploring AI’s utility in supporting mental health found that young adults favor apps over interpersonal, verbal conversations. For many survivors, these barriers make anonymous AI tools seem like the best option for a first disclosure of their trauma.

Research shows that obtaining quality care after a report of harm diminishes the severity of post-traumatic stress disorder arising from an assault. AI tools are programmed to offer consistent, affirming responses, and once survivors feel grounded by acknowledgment of their harm and concrete suggestions for treatment, they are better positioned to seek additional care from appropriate providers. This is the best-case use of the technology for survivors: they can report their harm to a highly trained virtual companion at no cost and turn to it as a surrogate confidant at a moment when they may not be ready to disclose their experience to family or loved ones.

But the same qualities that make AI feel safe also make it dangerous. The American Psychological Association has recently questioned AI’s readiness to diagnose or support mental health disorders. AI chatbots have not yet been “cleared by the FDA to diagnose, treat, or cure any mental health disorders, with clinical trials to prove safety and efficiency,” the APA notes, though the group hopes that AI can play a supportive role in combating the mental health crises Americans face. But to get there, it says, “we must establish strong safeguards now to protect the public from harm.”

Until then, with ChatGPT retaining a record of all inquiries, that record can be weaponized. Through a simple court order, sensitive exchanges initiated under an expectation of privacy and discretion can be transformed into potential evidence in all manner of legal actions. And past experience with a legal system already weighted against their claims makes it clear that sexual-assault survivors will feel the hardest impact of this shift.

According to a piece in the Journal of Interpersonal Violence, the structure and consistency of a survivor’s narrative are often distorted in court, and it’s not at all difficult to envision how that process would be abetted by the introduction of AI transcripts documenting the fraught moment when a survivor begins processing harm and trauma. Early questions arising from an assault or fragmented disclosures of a survivor’s mental state at the time could be misinterpreted as inconsistencies in the survivor’s account. Some survivors seek answers about whether the harm perpetrated against them meets various legal and cultural definitions of assault. But questions like these could be flipped on their head in a courtroom: “If you didn’t even know if it was an assault, how can we be sure it happened to you?”

Unlike therapy notes, which are privileged and protected, these transcripts will carry no such shield, turning what felt like a private conversation into legal ammunition. These risks not only affect how survivors disclose their experiences but may also alter whether they choose to disclose at all. With the specter of a weaponized permanent archive of AI queries before them, survivors may shun the service altogether, reckoning that their harm could be magnified rather than diminished by an AI consultation. This chilling effect would throw survivors back on their own resources when their need for care is most urgent, and it could well revive a vicious cycle of rampant victim-blaming and stigmatization of assault claims: less reporting, less help-seeking, and deeper isolation.

If AI is to provide support to survivors, privacy protections must be strengthened. At a minimum, these platforms must incorporate clear warnings and anonymous modes that are not tied to personally identifiable data. Schools and other institutions now integrating AI into their daily operations must also teach students how to exercise caution with these tools. Users need to know what is safe to share and what isn’t, as well as how to maintain boundaries with a tool that can present itself as deeply personable and eagerly responsive.

Gen Z, millennials, and the coming-of-age Gen Alpha have been raised in environments that incorporate AI not only into personal queries but also into their academic work and professional lives. When these groups turn to AI after finding themselves priced out of traditional therapy, they are uniquely vulnerable to mistaking technological warmth for security and safety.

Chelsea Lubbe

Chelsea Lubbe is a senior at Temple University studying journalism and has previously written on lifestyle, health, and sexual assault.

Ray Epstein

Ray Epstein is a 2024 Truman Scholar, the Pennsylvania state director of the Every Voice Coalition, and the founder of Student Activists Against Sexual Assault at her alma mater, Temple University.
