
What Happens When “Your Honor” Is a Robot?

The age of artificial judges is fast approaching. What will that mean for justice?

Elie Mystal


Justice is often depicted as a blindfolded woman holding scales—but in real life, Justice is more like Santa Claus holding a shotgun. It sees everything: It sees whether you are rich or poor, whether you are powerful or powerless—and it sure as hell sees whether you are Black or white. Those observations tip the scales before any evidence is weighed. The idea of “blind justice” is a pure fiction, a cruel one invented by the rich, powerful, and white to justify the fickle, unfair, and prejudiced outcomes their legal system regularly produces.

In the United States, Black Americans suffer acutely from this failure. Black people experience an entirely different justice system than white people do, and almost everybody knows it. We are treated as guilty until “exonerated.” We are judged by predominantly white juries. We are tried under laws written by white people, for white people, and approved by white people, under a Constitution written by our white captors and enslavers. Even when we are murdered, we are put on trial so that the white people who killed us can walk free. I wouldn’t wish for my worst enemy to face justice-while-Black.

And the system isn’t much kinder to women, or poor people, or people who practice a non-Christian faith or live non-heteronormative lives. There is a “justice gap” in this country, and despite nearly a century’s worth of efforts to make the justice system apply to everyone equally, the results have been underwhelming.

Now, however, there is a new tool, widely promoted by rich white people, that purports to bridge the yawning gap between these different justice systems. That tool is artificial intelligence, and its boosters are sure that the robots are here to help. They tell us that the machines can produce justice more “efficiently,” bringing fair legal resolutions to people who do not have the resources to buy expensive lawyers, or the time to wait for the slow grinding of the wheels of justice. They tell us that the algorithms can bring an unbiased approach to sentencing and bail proceedings. They tell us that while AI should never fully “replace” human judges, the large language models can be a useful analytical tool for everything from statutory interpretation to determining what words are commonly thought to mean.


According to one such booster: “Technology, especially AI, can expand legal assistance and drive costs way down. That promises to democratize justice, helping those who have long taken their lumps and done without help.” That’s not a quote from Elon Musk or Sam Altman. That’s from Stephanos Bibas, a federal judge on the Third Circuit Court of Appeals, appointed by Donald Trump.

That’s also not how it’s gonna work. AI justice is largely being designed and promoted by private businesspeople interested in creating profits, not justice. It’s a closed-source, proprietary product, meaning that the beeps and boops that constitute its “reasoning” and “decision-making” cannot be exposed, analyzed, or argued against on appeal but can be tweaked in secret whenever a wealthy tech bro doesn’t like the AI outcomes. And AI justice will ultimately be just as biased as real judges, because all it can fundamentally do is spit back out to us all of the garbage racism we’ve poured into our justice system.

AI justice can mean a lot of things—everything from a human judge using an algorithm to advise them on how to set bail to an AI judge that assesses evidence, weighs arguments, and issues binding rulings. Some people want to limit the use of the term AI to mean just “generative AI,” and the term AI justice to apply only to situations where a computer issues a final ruling. But to my mind, a judge who pulls out Claude instead of a dictionary to look up the meaning of a word in a statute is “using AI.”

Many countries are integrating AI into their judicial processes. Estonia is using AI judges to handle its version of small-claims court. Argentina is using AI to automate various processes and even using ChatGPT to draft legal rulings. Countries from the United Kingdom to Russia to Morocco are using AI in various ways to streamline legal processes. But no place has gone as far as China. Under Xi Jinping, China is on the leading edge of the robot-judge revolution. It has implemented a number of judicial reforms, including integrating information technology into all aspects of jurisprudence to create what it calls “smart courts.” Records are digitized, hearings happen online, automation is everywhere. Most of the reforms are designed to improve jurisprudential efficiency, in accordance with the slogan “Striving to make the people feel fairness and justice in every judicial case.”

China also employs AI judges. The government claims that millions of cases each year are adjudicated by AI, including financial disputes, product-liability cases, and even civil-rights cases. According to a Law360 report on the process, the AI is embodied by a “holographic judge [who] looks like a real person but is a synthesized, 3D image of different judges.” In Beijing, AI-judged cases can go from registration to resolution in an average of 40 days, with hearings lasting 37 minutes. According to reports, 98 percent of those AI rulings are accepted by the litigants and not appealed.

While legal and cultural differences between China and the United States abound (appealing a ruling you’ve lost is as ingrained in American culture as mass shootings and stealing land), human adjudication is slow and inefficient in both places. Cases that should be easy and don’t require anything but a simple application of well-established laws get backed up in a system that moves far too slowly. Not only is justice delayed—indeed, sometimes denied—but there’s also the deadweight economic loss that comes from just waiting for a decision, any decision at all, on whether a project can move forward or a contract can be executed.


But let’s not fool ourselves. Whatever robot judges offer us in terms of efficiency, the attempt to make people “feel fairness and justice,” as the slogan goes, is just that—an attempt to make people feel—and it’s a feeling that does not survive its first contact with reality. That’s because the robot judges are not deliberating; they’re just computing. They cannot reproduce the feeling of having your argument heard and listened to, which is inherent in the very idea of “having your day in court.” Even a “fair” process can feel unjust if it’s not transparent. You might be able to see through a hologram judge, but you can never see into it. You can never see how it thinks.


Efficiency, however, is only one of the alleged benefits of AI justice. Protecting vulnerable people is another, and there are plenty of people who argue that AI can do that. One example is the Oxford Institute of Technology and Justice, cofounded by Amal Clooney and Philippa Webb, both professors at Oxford. With the tagline “Harnessing the Power of AI for Justice,” the group seeks to bring legal representation to those who need it most. In a Time article describing their efforts, Clooney and Webb say they’re working with Microsoft’s AI for Good Lab to bring AI-lawyer chatbots to women in Malawi, where “almost one in ten girls is forced into marriage before turning 15.” They say they’re working with the Committee to Protect Journalists to provide at-risk reporters with free legal support. And they’re using AI to help pro bono lawyers file orders and determine best practices while defending abused women and children.

All of that is laudable. And yet, in the very same article, Clooney and Webb note the obvious dangers of AI justice. “AI is triaging cases, drafting pleadings, assessing witness credibility through facial-expression analysis and even generating avatars of murder victims that address defendants in court…. And courts across the world are grappling with deepfakes and manipulated evidence.” Sorry, folks, but “AI…assessing witness credibility through facial-expression analysis” is precisely when I turn into Morpheus from The Matrix and lead the resistance against the machines. I will never trust a white-bred AI system to assess my Black-ass credibility based on looking at my face.

In the US, judges are already using recommendations from AI algorithms to set bail and analyze the risks of recidivism. The system is called COMPAS (which stands for Correctional Offender Management Profiling for Alternative Sanctions), and its results are… not great, if you happen to be Black.

A 2016 study of COMPAS by ProPublica found that “black defendants were often predicted to be at a higher risk of recidivism than they actually were,” while “white defendants were often predicted to be less risky than they were.” The report further revealed that “even when controlling for prior crimes, future recidivism, age, and gender, black defendants were 77 percent more likely to be assigned higher risk scores than white defendants.”

The company that makes COMPAS (it was called Northpointe back in 2016, but now it’s known as Equivant) said it doesn’t use race as a factor in its algorithm. Then, I shit you not, it defended its racist results by saying that the results were “fair” because the program is wrong about people 40 percent of the time regardless of race. That’s true—a COMPAS score is a little more accurate than a coin flip, if that makes you feel better. But because a Black defendant is more likely to be given a higher risk score than a white defendant, Black people who lose the coin flip could face higher bail than similarly situated white defendants, and that is racist.
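The statistical sleight of hand in that defense is worth spelling out: a risk model can be equally (in)accurate for two groups while making very different kinds of mistakes for each. A minimal sketch, using invented numbers rather than actual COMPAS data, shows how two groups can share the same overall accuracy while one group’s non-reoffenders get wrongly flagged “high risk” far more often:

```python
# Hypothetical illustration, not COMPAS data: two groups of 1,000 defendants
# each, scored "high risk" or "low risk" by a classifier. Overall accuracy is
# identical for both groups, yet the errors fall very differently.

def rates(tp, fp, fn, tn):
    """Return (accuracy, false_positive_rate) from one group's confusion matrix.

    tp: correctly flagged high-risk      fp: wrongly flagged high-risk
    fn: wrongly rated low-risk           tn: correctly rated low-risk
    """
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    fpr = fp / (fp + tn)  # share of non-reoffenders wrongly flagged high-risk
    return accuracy, fpr

# Group A: errors skew toward false positives (flagged high-risk, never reoffend)
acc_a, fpr_a = rates(tp=300, fp=300, fn=100, tn=300)
# Group B: errors skew toward false negatives (rated low-risk, later reoffend)
acc_b, fpr_b = rates(tp=200, fp=100, fn=300, tn=400)

print(f"Group A: accuracy={acc_a:.0%}, false positive rate={fpr_a:.0%}")
print(f"Group B: accuracy={acc_b:.0%}, false positive rate={fpr_b:.0%}")
# Both groups: 60% accuracy. But Group A's false positive rate is 50%
# versus Group B's 20% -- "equal error rates" can still mean one group
# disproportionately pays the price of being wrongly called dangerous.
```

That asymmetry is the heart of the ProPublica finding: “fair” measured as equal overall accuracy says nothing about who bears the cost when the model is wrong.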

But there are two larger takeaways from the COMPAS story beyond the system’s obvious racial biases. First, the astute reader will notice that I’m using a 2016 study from a media organization, not a 2025 study commissioned by the Department of Justice or the Supreme Court or a state Department of Corrections. That’s because, as far as I know, those studies do not exist, and if they do, they’re not publicly available. We don’t know who, if anybody, is even keeping track of COMPAS, its accuracy, or its biases. We also don’t have a great sense of how judges are using the thing, or the extent to which those judges are aware of its failure rate and biases. COMPAS is a tool, but for all we know, we’ve handed judges a racist sledgehammer they’re using to try to plunge a toilet.

The second problem is that I can’t tell you exactly what garbage COMPAS is recycling to produce its garbage results, because COMPAS is a closed-source algorithm made by a for-profit company that claims proprietary ownership of its process. The closed-source nature of its AI (and that of similar companies) is, or should be, anathema to people interested in justice, because justice is supposed to be the most open-source process in all of democratic self-government. The public is allowed to go to a trial and see literally all of the evidence a judge or jury considers before making a decision. Judges regularly issue opinions along with their rulings explaining exactly how they came to their decisions, including which specific cases and arguments they followed or rejected while deliberating the case. Their reasoning can be analyzed, questioned, and appealed to other judges, who might come to different conclusions based on the same publicly available evidence and logic.

That’s not the case with AI. That’s because AI’s “reasoning” is not, well, reasoning; instead, it’s probability based on vast quantities of data and other hocus-pocus, and the people who set the AI in motion don’t even really understand how it works. When you try to peel back the layers of why white South Africans are dominating your Twitter feed, or why YouTube thinks your kid should watch a video essay on how Princess Peach is “too woke,” or why a Black defendant got a higher risk score than a white one, all you get is corpo-speak platitudes that cannot be independently verified by anybody. The decision-making process is as crucial to justice as the decision itself, but the very first thing AI steals from us is the ability to review its process. AI gives us a sausage and then expects us to make the potentially deadly decision to eat it while only guessing at how it was made.

Despite all this—despite the evidence of racial bias in the AI we already use and the real threat that more will be coded into the systems by private tech bros answerable to no one—it turns out that Black people might be more trusting of AI justice than white folks. A 2025 study published in the journal Behavioral Sciences asked participants whether they would have more trust in a judge who relies only on their own expertise to make bail and sentencing decisions, or a judge who consults AI to make the same decisions. While all groups preferred judges who relied only on human expertise, Black participants rated judges who relied on AI as “more fair” than white and Hispanic participants did.


I understand why some Black people feel like the AI would be more fair. Like them, I am well aware of what white judges are capable of. Fixing the racism, sexism, and prejudice endemic to the white judicial system has always been my goal. But so far, I don’t see how AI helps me do it. Just look at how AI justice is being regulated—or not being regulated—by the people chosen to represent our diverse, pluralistic society: Congress. Congress has passed no law regarding the development of AI’s use in the judicial system. Other countries, including China, are doing far more to regulate AI’s use in courts.

The problem is potentially even bigger than the usual congressional malfeasance. When it comes to AI justice, it’s not actually clear what powers Congress has under the constitutional separation of powers to regulate AI’s use by the judicial branch. Think of it this way: Congress can’t tell a judge’s law clerk or research assistant what to research or how to research it. Already, we have judges (they call themselves “originalists”) who functionally claim they can use a Ouija board to contact the spirit of James Madison to tell them whether a Trident II ballistic missile is a “traditional” means of self-defense protected by the Second Amendment. But Congress’s hands are tied: There’s not a damn thing it can do constitutionally to stop Clarence Thomas from using historical slop to make his rulings, so why would there be anything Congress could do to stop him from using AI slop instead?

As usual, the judicial branch is not filling the breach and regulating itself. It is the Wild West out there, with some judges using Claude (or whatever the hot AI is by the time you read this) to give them interpretations of statutory language, while others completely disregard AI as an analytical or research tool. If the judicial branch ever does deign to impose rules on itself, it’ll be people like Chief Justice John Roberts issuing “guidance” on how judges should use this technology that they almost certainly don’t understand. Roberts’s Supreme Court will either approve or overturn cases in which the evidence, research, or analysis is heavily influenced by AI, and that will be the signal for how AI should be used. I don’t know yet how he will rule on such issues, but this is where I point out that nobody elected John Roberts, and his unaccountable job functions should not include “determining how much AI is good for the rest of us.”

The people making the real decisions about the role of AI in our justice system won’t be us or even our elected representatives. It won’t really be the justices and judges either. It will be the people who design the AI models and own the for-profit companies that produce them—and these people are, to put it mildly, fucking weirdos. I can honestly say that I never sympathized with the guys who sentenced Socrates to drink hemlock until I started reading about TESCREALism, an acronym created by the scholars Timnit Gebru and Émile P. Torres to capture the truly wack mixture of interconnected beliefs held by our tech-bro overlords: transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism.

I have extensive familiarity with bad sci-fi literature and video games, but even I had to look up most of these terms. They basically add up to this: These people want to replace us all with machines and live forever in a future digital dystopia that they’re convinced will be heaven. I’m used to people playing God, but these guys think they can create God.

Still, you don’t need to understand the AI bros’ philosophy to spy that their conception of “justice” is a little different from those of the great moral philosophers, from the aforementioned Socrates to John Rawls and Derrick Bell and everyone in between. If you read Torres, you’ll discover that many of them want to create (not making this up) an intergalactic and immortal society, and if that requires some massive injustice along the way, so be it. It’s utilitarian, in a way, if you took utilitarianism to its stupidest and most genocidal conclusion.

And yet, even as I was reading about the truly dystopian beliefs of our tech-bro edgelords, I could hear the grown Black voice inside my head saying, “Sure, this all sounds bad, but have you met Justice Samuel Alito?” The promises and dangers of AI justice have to be plotted against the lived experience of the current justice system, and the juxtaposition isn’t pretty. I find myself playing the worst possible game of “Would You Rather”: Would you rather have Justice Alito or Justice MechaHitler? Would you rather have Donald Trump picking the judges, or Peter Thiel? Would you rather have Leonard Leo programming the judges, or Sam Altman?

Friends, I can’t tell you what the right answers are (which is to say, I can’t actually bring myself to write “I choose Justice Alito” without adding the coda “to be punted into the sun”). But I can tell you that a choice between evils is the right way to frame the question. AI boosters will tell you that AI justice offers the promise of nonpartisan, unbiased, purely logical decision-making—when it’s anything but. AI cannot be programmed to be fair, because we humans don’t even agree on what “fair” is. AI cannot be programmed to be just, because our definition of justice is ever-changing. AI will not make sense of all of our illogical inconsistencies; it will just digest them and spit them back out to us in some weird, uncanny form. Then, one horrible day, it will claim a Second Amendment right to defend itself, even if that leads to the deaths of schoolchildren, just like we do.

What I can also tell you is that AI justice will not be great for Black people. I understand all of the problems with the current judicial system, especially with how it treats people who look like me, but the very last thing AI can promise is fairness. Efficiency, access, speed, cost control—all of that might be on the table. But fairness? Nothing about the way AI is being developed and implemented suggests that it will be fairer, more transparent, or more just than the human-led system we currently have. And I struggle to think of a single technology yet invented that hasn’t been manipulated by white folks to cause more oppression.

From the cotton gin to Twitter, when the ruling class of whites get their hands on a new toy, they find a way to use it to promote racial and social stratification instead of using it to foster equality. There’s no earthly reason to believe AI will be any different.

We are not standing outside the gates of utopia; we’re standing outside a portal to the demon realm, and the final boss is telling us, “Choose the form of the Destructor.” If we try to not choose anything, we’re five minutes away from an AI marshmallow man wrecking our society.

Elie Mystal is The Nation’s justice correspondent and a columnist. He is also an Alfred Knobler Fellow at the Type Media Center. He is the author of two books: the New York Times bestseller Allow Me to Retort: A Black Guy’s Guide to the Constitution and Bad Law: Ten Popular Laws That Are Ruining America, both published by The New Press. You can subscribe to his Nation newsletter “Elie v. U.S.” here.
