We All Hate AI, but if You’re Poor, It Can Really Ruin Your Life

Debt collection. Parole decisions. Oversight of public services. It’s all being outsourced to AI, with terrible consequences for poor people.

Kali Holloway

Residents of Daytona Beach, Florida, line up in their cars during a free food distribution for recipients of the Supplemental Nutrition Assistance Program on November 9, 2025. (Miguel J. Rodriguez Carrillo via AFP/Getty Images)

Luxury brands have always advertised the craftsmanship of their products, but in recent months, human artistry itself has become their advertising strategy. Hermès redesigned its entire website around hand-drawn illustrations by the French artist Linda Merad, who said the designer label wanted visitors to recognize that “the art was made by a human.” The fashion houses Chanel and Loewe commissioned human illustrators to create their recent social-media campaigns. Over the holidays, Porsche released an ad that combined hand-drawn artwork with 3D animation—a choice that seemed pointed coming on the heels of the viciously mocked generative-AI ads from Coca-Cola and McDonald’s. This past February, Gucci became a cautionary tale when it drew the wrath of fashionistas after using AI in its ads. “Any luxury brands that used AI slop should not be consider[ed] luxury anymore,” one viral post read. Another stated, “The whole point of luxury is that someone gave a damn.”

As automation and AI become ubiquitous, the human touch has become a luxury good. In some ways, this might seem to be merely a continuation of a theme: The rich get white-glove customer service while the rest of us are trapped pressing “1” and “2” and shouting “speak to an agent” into automated phone-tree voids. It can seem like just another symptom of the broader enshittification of our age and plutocratic economic order. And most of us don’t like it. Studies confirm the widespread skepticism: A Pew survey from 2025 found that half of Americans were more concerned than excited by the rise of AI, and roughly 60 percent said they wish they had more control over AI’s use in their own lives.

And yet it’s the poor who are subject to its most consequential uses. Today, debt collectors use AI to hound people via phone, e-mail, and chatbots. AI deepfakes are poised to worsen criminal-justice disparities. Parole decisions are being made by AI systems. And increasingly, federal and state officials are outsourcing decision-making and oversight for public services to digital machines.

As TechTonic Justice, a nonprofit that tracks technologies that are harmful to low-income communities, noted in a November 2024 report, governments employ AI in public programs when they’re looking to cut costs under the guise of ensuring that only the “right” people receive services. But any mistake made by an automated system immediately snowballs: Such systems can create “immense suffering at scales and speeds that were impossible with the human-centered methods that precede them,” the researchers found. After decades of austerity rooted in anti-Black and anti-poor politics, America’s safety net is already threadbare; those same biases are now encoded into digital tools that, like all AI, reproduce the prejudices of their training data and programmers. A human bureaucrat can destroy only so many lives in a day; algorithms can ruin the lives of tens of thousands at once.

Every state now uses AI to determine Medicaid eligibility, according to TechTonic Justice. For the 73 million people enrolled in the program, automated systems increasingly decide whether to approve or deny healthcare treatments. The nearly 14 million Americans who receive disability benefits through the Social Security Administration are subject to decisions shaped by AI, which is also used by the Department of Housing and Urban Development; in fraud detection for the Supplemental Nutrition Assistance Program; and in making predictions of neglect in child-welfare investigations. Indeed, throughout the social-safety net, decisions about who gets helped and who gets denied are increasingly left to machines. (Right around when the Porsche ad dropped, the Trump administration quietly gave Palantir a no-bid contract for an AI system to search for alleged fraud by SNAP recipients.) In fact, as the TechTonic Justice researchers reported, “all 92 million low-income people in the U.S.…have some basic aspect of their lives decided by AI.”

In 2013, for example, cash-strapped Michigan instituted an automated system to root out fraud in its unemployment-insurance program. Over a two-year period, the system leveled fraud accusations against over 60,000 people—more than five times the number identified by previous human-led investigations. Despite no human review of these findings, the state began demanding repayment; court papers noted that the “punitive assessments regularly totaled between $10,000 and $50,000 and sometimes exceeded $187,000.” Three years later, Michigan’s auditor general found that 93 percent of those allegations were wrong. By then, thousands of people had endured arrests, bankruptcies, and evictions, with at least one person dying by suicide. As of 2022, Michigan owed $20 million in settlement costs to claimants who’d signed on to a class-action lawsuit.

In Arkansas, an automated system erroneously cut nursing and other home-aide services for about 4,000 people with severe disabilities. When families asked why the services had been slashed, they were told simply that “the computer did it.” (A court ruled that the state had to stop using the system.) In Minnesota and Kentucky, ongoing class-action lawsuits allege wrongful denials of care in cases where insurers enlisted AI to override doctor recommendations and deny the Medicare Advantage claims of elderly patients. In Illinois and Los Angeles County, the automated systems used to determine child-welfare removals were so error-prone that both jurisdictions have now discontinued their use.

The research company Forrester predicts that AI and automation will eliminate 6 percent of all jobs, or roughly 10 million positions, by 2030. That outlook seems sunny compared to a 2025 Senate report that predicted some 100 million Americans could lose their jobs to AI over the next 10 years. There’s a new digital divide, and the less money you have to buy your way out of it, the greater the role that AI will have over your life.

Kali Holloway is a columnist for The Nation and the former director of the Make It Right Project, a national campaign to take down Confederate monuments and tell the truth about history. Her writing has appeared in Salon, The Guardian, The Daily Beast, Time, AlterNet, Truthdig, The Huffington Post, The National Memo, Jezebel, Raw Story, and numerous other outlets.