
Trillion-Dollar Tech Bandits Are Finally Facing Justice

An outdated law has allowed Big Tech to evade accountability for 30 years. Now landmark court rulings are giving consumers a chance to fight back.

Margaret Mabie and Yasmine Taeb


On March 24, 2026, in the landmark case State of New Mexico v. Meta Platforms, Inc., a New Mexico jury ordered Meta to pay $375 million in damages for violating the state’s consumer-protection laws by misleading consumers and enabling child exploitation on its platforms. New Mexico thus became the first state in the country to prevail at trial against Big Tech for its role in facilitating the exploitation of children. But it wouldn’t be the last: The next day, in a bellwether verdict, a California jury awarded $6 million in damages to a young female plaintiff who was found to have been addicted to Instagram and YouTube as a child, holding Meta and Google, the platforms’ respective owners, liable.

These verdicts should have widespread implications for the tech industry. The social-media-addiction verdict in California will be followed by thousands of lawsuits from similarly situated plaintiffs, including suits already filed in federal courts in other states. Both verdicts represent monumental progress in the decades-long struggle to hold Big Tech accountable for harming children and teenagers, and they will reopen debate about Section 230 of the Communications Decency Act, or CDA, which for decades has prevented Americans from seeking redress for the damages done by corporate tech behemoths.

This year marks the 30th anniversary of the CDA, the most significant law of the digital age, which was signed into law by President Bill Clinton on February 8, 1996. The CDA was intended to protect children from exposure to online harms, but the innovations of the past 30 years unleashed a technological revolution—one unimaginable when the CDA was passed—that, with the help of a series of legal decisions, has turned the law into a shield for tech companies rather than a sword for consumers. In 1996, the Internet was highly decentralized and dominated by no one. Today, a handful of trillion-dollar mega-corporations own the platforms that billions of us log on to every day, and nearly everyone carries a supercomputer in their pocket, leaving most of us continuously connected and monitored.


Not surprisingly, Big Tech’s global hegemony has a dark side. Some of the most vulnerable members of society, primarily women and children, have become victims of online sexual abuse and exploitation. Tech created these hazards—not by accident, but by design—and we’re finally seeing these issues come to the fore in courtrooms with the recent verdicts holding social-media companies accountable. After spending decades operating with near impunity as they transformed social-media platforms into ruthless profit-making machines, the giant global tech companies are beginning to face consequences for their dangerous products.


The product-liability claims now being deployed against social-media platforms rely on legal arguments similar to those that consumer advocates have used to hold Big Tobacco, Big Auto, and Big Pharma to account; the ultimate goal is to rein in bad actors who put consumers at risk. In the face of legislative inaction, progressive attorneys general across the country have taken the lead in going after Big Tech. In 2022, California Attorney General Rob Bonta, alongside a bipartisan coalition of more than two dozen state attorneys general, urged the Supreme Court to interpret Section 230 to allow social-media companies to be held liable. “States are severely hampered from holding social media companies accountable for harms facilitated or directly caused by their platforms,” Bonta said. “This was certainly not Congress’s intent when it carved out a narrow exception in the Communications Decency Act.” The attorneys general urged the Supreme Court to “not insulate social media companies from liability.” (The Supreme Court ultimately declined to rule on the scope of Section 230.) As a result of the expansive legal obstacles created by Section 230, victims continue to fight uphill battles against these powerful technology behemoths.

Section 230 was passed to allow families to use civil litigation as a regulatory tool for the reporting and removal of indecent material on the Internet. The law was designed to protect children from R-rated spaces online. But as court challenges have eroded the law in the years since it was enacted, tech companies have been able to use it to shield themselves from accountability. In June 1997, in Reno v. ACLU, the Supreme Court struck down the child-protection portions of the CDA on First Amendment grounds, citing their chilling effect on free speech, thus removing the statute’s power to protect children from harmful material on the Internet. Later that year, in Zeran v. America Online, the Fourth Circuit Court of Appeals held that Section 230 barred claims for “distributor” liability against an Internet “publisher” of defamatory statements. What remained in the CDA were two immunity provisions, 230(c)(1) and (c)(2), that tech companies soon came to rely on to avoid liability in federal and state lawsuits, going well beyond the statute’s intended scope.

These two immunity provisions have served as key hurdles for victims. The first specifies that an interactive computer service may not be treated as “the publisher or speaker of any information provided by another information content provider,” giving Internet companies immunity from liability for third-party content that appears on their platforms. The second provides “good Samaritan” immunity to companies that voluntarily act to “restrict access” to objectionable material. In 2002, in Ashcroft v. ACLU, the Supreme Court reaffirmed that under the First Amendment the “government has no power to restrict expression because of its message, its ideas, its subject matter, or its content”—though it also noted that this principle “is not absolute.” Together, the decisions in Reno and Ashcroft transformed the original statute from a means for child victims to gain justice into a form of tech immunity that, as interpreted, has barred countless victims over the years from their day in court. Despite efforts to hold tech companies accountable for indisputable harms, what was left of Section 230 brought accountability, justice, and child protection to a halt.

In response, Senator Dick Durbin, the ranking member of the Senate Judiciary Committee, introduced a bipartisan bill to repeal Section 230 in December 2025. The Sunset Section 230 Act “would repeal Section 230 two years after the date of enactment so that those harmed online can bring legal action against companies and finally hold them accountable for the harms that occur on their platforms,” a statement released by the senator’s office explained. In that release, Durbin noted, “Children are being exploited and abused because Big Tech consistently prioritizes profits over people.” Joining Durbin as a cosponsor of the legislation, progressive Senator Peter Welch stated that “Section 230 has been used by America’s biggest tech giants not as a tool but as a shield, providing immunity from legal consequences when their platforms harm consumers.” More than ever, members of Congress are acting to hold social-media platforms accountable for targeting children. Though Big Tech remains virtually immune from civil justice, victims continue to see glimmers of hope as they persist in fights on the floor of Congress and in the courts. We must prioritize children and families over corporate greed and profits.

The recent rapid developments in artificial intelligence multiply the risks that ordinary Americans, especially women and children, face in today’s digital world. AI is increasingly being used to create nonconsensual intimate images of women and children. Additionally, the victims of sexual abuse depicted in child-sexual-abuse material (CSAM) are victimized further when new images depicting them are generated from the original images. (CSAM depicting real children is illegal.) As early as 2012, in Doe v. Boland, victims sued over morphed, Photoshopped CSAM images and prevailed. In December 2025, a dangerous new feature was added to xAI’s Grok chatbot allowing users to “nudify” images. In just 11 days, Grok generated 4.6 million images; the Center for Countering Digital Hate estimated that 65 percent of them were sexualized, including 23,000 images of children. Numerous lawsuits were filed against xAI, including a class-action suit filed on January 23, 2026. Around the same time, Google and the chatbot platform Character.AI settled a lawsuit alleging that the chatbot contributed to the suicide of a 14-year-old boy.

In response to this emerging scourge, the bipartisan GUARD Act was introduced to protect kids from chatbots; it is supported by more than a dozen senators, including Welch, Chris Murphy, Tim Kaine, and Josh Hawley. “Nearly three-quarters of all young people in our country are now turning to unregulated AI chatbots, with alarming consequences—in some cases, even encouraging kids to engage in behavior that can hurt themselves and others. And Big Tech isn’t doing enough to address it,” Welch said in a press release about the proposed legislation. On April 30, 2026, the GUARD Act passed unanimously out of the Senate Judiciary Committee.
At a town hall at Stanford University hosted by Senator Bernie Sanders and Representative Ro Khanna about the dangers posed by AI, Khanna warned: “The truth is that, in the hands of a few billionaires, the priority [in the development of AI] has been to eliminate jobs, extract profits, and addict us.”

The Internet today is exceedingly dangerous, but effective public policy that holds tech companies to account can begin to protect those who need it most. Congress passed Section 230 of the CDA to make the Internet safer for children. But the courts that struck the child-protection provisions from the law could not have anticipated the scale, power, and profit-driven nature of today’s platforms. With public policy stuck 30 years in the past, victims continue to struggle to hold tech monoliths accountable for their wrongdoings, though the recent product-liability verdicts against social-media giants provide some hope. There is an urgent need for policy reforms that give people access to justice by establishing a private right of action against tech companies. With the advent of AI and the proliferation of deepfakes and nonconsensual intimate images online, progressive leaders must hold Big Tech responsible for foreseeable harms. The goal is to establish stronger protections for our most vulnerable, instead of for billionaires.

Margaret Mabie is a partner at Marsh Law in New York City and can be reached at margaretmabie@marsh.law.


Yasmine Taeb is a federal litigation attorney at Marsh Law in New York City and a progressive strategist. She can be reached at yasminetaeb@marsh.law and @YasmineTaeb.

