What Are Rules For?

A conversation with historian Lorraine Daston about her recent book on the history of rules and how they have structured life across centuries.

Edward Said once noted that when Napoleon Bonaparte invaded Egypt in 1798, it wasn’t just with warships and cannons. Indeed, Said writes, the “French expedition was accompanied by a whole team of scientists whose job it was to survey Egypt as it had never been surveyed before”—resulting in the production of a 24-volume compendium of diagrams and drawings purporting to represent the entirety of Egyptian history. The critical detail here is that the attempt to rule a society imagined to be in the thrall of disorder required that same society to be turned into an object of “scientific” inquiry capable of being comprehended, surveyed, and ordered according to the categories of Enlightenment thought.

This colonial anxiety about the imposition of order—whether epistemological or military—as a precondition to effective rule is borne out in a central insight of Rules: A Short History of What We Live By, historian Lorraine Daston’s sweeping survey of rules in the Western tradition. Across millennia, Daston writes, rules have been tasked with the Sisyphean feat of “sustain[ing] islands of stability, uniformity, and predictability”—in other words, of creating order—“in an intrinsically uncertain world.”

By attending to the shifting, occasionally contradictory meanings, forms, and functions of rules across time and geography, Daston draws out the philosophical stakes of a feature of our lives so ubiquitous as to appear self-evidently static and resistant to historicization, rather than contingent and always in flux. I spoke with Daston about the distinctions between “thick” and “thin” rules, the feminization of the work of computation, and the great danger of wearing the wrong thing in medieval Venice. This conversation has been edited for length and clarity.

—Ishan Desai-Geller

Ishan Desai-Geller: Your book’s history of rules in the Western tradition traces shifts in their form and content across millennia. Perhaps because the “web of rules” of which we’re all a part and to which we’re all subjected is “so densely woven that barely any human activity slips through the mesh,” this history also tracks tectonic shifts in everything from epistemology and political economy to theology and mathematics.

What drew you to write a history of rules? Did you begin with the assumption that this project might intersect with so many fields, or was that an incidental discovery along the way?

Lorraine Daston: The [idea for the] book began with a working group on “Cold War rationality.” What my colleagues and I found in looking at the shifting meanings of “rationality” during the Cold War was that the peculiar conditions of the nuclear standoff made for a very rigid kind of rule: the algorithm. The algorithm had to be not only transparent and predictable, so that your opponent could know what you would do depending on every move across the nuclear chessboard, but also inexorable and automatic in its execution.

Since I am not a historian who normally hangs out in the 20th century, I found myself slipping back and back and waking up sometime in the sixth century CE. That’s how the book began.

IDG: One of the central conceptual distinctions you draw is between “thick” rules—which tend to be flexible and capacious, attentive to the contingency and unpredictability of the world, and able to accommodate discretion in their implementation—and “thin” rules, which tend to be inflexible, presuppose a static and ordered world, and leave little room for discretion. The history of rules as you tell it is also a story of a gradual transition away from thick rules capable of accommodating diverse cases and toward much more rigid thin rules on the model of the algorithm. What drives this shift, and what are its primary consequences?

LD: The narrative of the tug of modernity from the thick rule of a contingent, variable, unstable world that requires nimble adjustment to circumstances—which not only accommodates discretion but demands it—to the world of the algorithm seemed to me to have the pleasant familiarity of a narrative that all historians can mutter in their sleep.

That is, until the pandemic, when I realized that our very stable world can be shattered in a week: Suddenly all the airplanes stop flying, all the ships at sea can no longer dock, and all the rules by which we conducted our lives suddenly collapse. That led me to rethink this idea of the inexorable movement toward modernity, and to think that what we are seeing here has nothing to do with modernity in the sense of a periodization. It has to do with the occasional achievement, through technical infrastructure and political will, of islands of stability, predictability, and order, which at any moment can be capsized by an unforeseen event. It came to be less a story about marching toward modernity than the fragile creation of this archipelago of islands of order, which could happen at any point in history.

IDG: Since rules require at least some measure of efficacy and enforcement to be regarded as rules at all, the successful implementation and enforcement of rules is also a question of power. What does the shift from thick to thin rules tell us about the changing nature of political power and how it has been wielded over time?

LD: You’re right that a rule which is never enforced would very soon erode; it would cease to be a rule with teeth. But while enforcement may be necessary, it’s not sufficient. One example is the long, unhappy history of sumptuary regulations—that is, telling people what to wear… or, mostly, what not to wear.

In medieval Venice, for example, a slave who turned in his master or mistress for owning satin sleeves that were too wide would be granted his freedom. There was also a secret “mouth in the wall” where you could put unsigned denunciations of your neighbor, whom you’d seen with the forbidden sable trim on her robe. There were great incentives for people to follow the rules, and there were incentives toward denunciation—yet they never seemed to work. It’s a good example of rules which were enforced, sometimes very stringently, but which nonetheless never took root.

These rules have to be at some level accepted as norms in order to gain a foothold, to actually create the order they’re intended to create. Political power is definitely involved here, but it is impotent in the face of sustained defiance.

IDG: How would you distinguish rules from other forms of social structure and control? What do rules do that the other things structuring our lives don’t?

LD: It’s very difficult to imagine a form of social control which could not be cashed out as a rule. There’s something about rules which is fundamental. It’s proverbial among anthropologists that the content of rules in all cultures is dazzlingly diverse, but there’s no culture without rules. In fact, one could say that culture is rules, to make it into a simple equation.

The one example I can think of, which plays an absolutely seminal role in modern political thought since the 17th century, is completely arbitrary rule. You could have a government which has draconian, unjust rules, enforced cruelly. But the most awful scenario is a government which is bound by no rule beyond the caprice of the monarch. That is exactly what John Locke describes in the Second Treatise of Government as a truly unbearable form of political power.

This is also the core of Carl Schmitt—the 20th-century German National Socialist political theorist—who said that the sovereign is the one who decides on the state of exception. He detested the Enlightenment natural-law theorists, who claimed that even the sovereign is bound by laws and that no one is above the law. For Schmitt, the sovereign is the person who is bound by no laws. I think that is a situation which is worse than the Kafkaesque situation of opaque rules, enforced in ways that nobody understands.

IDG: “Rules,” as you put it, “rule.” That is, they seek to impose order and coherence in a world of uncertainty and flux. You write, “Rules of all sorts govern, constrain, specify, guide, and otherwise order action into rituals and routines.” What can rules that fail to effectively constrain or control behavior, on the one hand, and states of exception (the wholesale suspension of the rule of law), on the other, tell us about rules and how they work?

LD: Rule failure, especially chronic rule failure, does not necessarily mean that the authorities will throw in the towel. As we have seen in Iran since late 2022, there are incendiary political protests going on essentially about sumptuary regulations. It’s not the case that a rule that meets with the screaming defiance of the entire population will necessarily crumble. That returns us to your point about political power: The rule can, to some extent, if not be enforced, then be maintained if the political will is firm enough.

But when rules break down or are suspended entirely, there’s a lot of philosophical musing about whether that’s even possible. When Schmitt defines the state of exception, implicitly at least it’s a punctual moment, a state of emergency. It’s understood to be temporary. When Locke allows for royal prerogative to be exercised in a state of emergency—or in a pandemic, for that matter—it’s assumed that this is not going to be the way we’re going to live from here on out. It’s barely imaginable to have a situation in which that is the status quo for the foreseeable future. Hume once wrote that there are rules and laws even among pirates. I think that’s a deep insight: it would be literally impossible to live in a world where no rules held.

IDG: In contemporary usage, an algorithm is most closely associated with computation, and automation is one of its quintessential features. However, you show that the concept of the algorithm preceded not only modern computing but even mechanization by millennia. Furthermore, you identify the division of labor that emerged in concert with the Industrial Revolution—in this case, disaggregating the work of calculation itself, transforming “computation into semi-skilled piecework for poorly paid assistants” and creating a category of worker called the “calculator”—as a precondition for the conceptual shift it took to even imagine algorithms as automated or mechanized.

Could you elaborate on this relationship between the division of labor, the intellectual de-skilling of calculation as a form of work, and the advent of the thin, automated algorithms we think of today?

LD: Calculation begins this long trajectory as a very taxing and strenuous intellectual activity. Johannes Kepler, the early 17th-century astronomer, complains endlessly about the calculations he has to do, and he made a lot of mistakes. But there’s no doubt in his mind that this is what astronomers do. It’s a real innovation in the course of the 17th and 18th centuries when big observatories began to devise a division of labor that allowed a step-by-step calculation of the very complicated adjustments that have to be made for astronomical observations.

Once they figure out how to do this, they can farm out this labor to schoolboys, and later to women. I write about the famous project conducted during the French Revolution of developing the base-10 metric system (as opposed to the ancient Mesopotamian system of base 60). The revolutionary government commissioned the recomputation of hundreds of thousands of logarithms. The person in charge of this, Gaspard de Prony, had read The Wealth of Nations and was fixated on the chapter in which Smith talks about the division of labor in a pin factory. If there was one thing in abundance during the French Revolution, it was unemployed, high-level servants of aristocratic households. And he began to recruit these people to be the calculators, or the “computers,” as they were called.

The task of dividing this complex computation into a series of steps that require only addition and subtraction took a managerial genius. When Charles Babbage, the British mathematician, heard about this, he said, “If people this stupid can do this computation, a machine can do it.” That is the germ of the idea of the computer: first his difference engine and, later, his analytical engine.

That is, in short, how we get from calculation being an elevated intellectual activity to being a mechanical one.
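
A brief aside for the technically curious: the decomposition Daston describes is essentially the method of finite differences, which reduces the tabulation of a function (locally approximated by a polynomial) to repeated addition. The sketch below is a minimal illustration under that assumption; the function names, the cubic example, and the numbers are invented for the purpose and are not the historical procedure.

```python
# A minimal sketch, assuming the method of finite differences: extending a
# table of polynomial values using nothing but addition, the sort of
# step-by-step work that could be handed to de Prony's calculators and,
# later, to Babbage's difference engine. All names and values here are
# illustrative, not taken from the historical tables.

def seed_differences(values, order):
    """From a few computed seed values, take the leading entry of each
    successive row of forward differences."""
    rows = [list(values)]
    for _ in range(order):
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    return [row[0] for row in rows]

def extend_by_addition(seeds, count):
    """Produce further table entries using additions alone."""
    diffs = list(seeds)  # diffs[0] holds the current table value
    out = []
    for _ in range(count):
        out.append(diffs[0])
        # Each level is updated by adding the level below it: pure addition.
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
    return out

# Tabulate f(x) = x**3 for x = 0, 1, 2, ... starting from four seed values.
seeds = seed_differences([x**3 for x in range(4)], order=3)
print(extend_by_addition(seeds, 8))  # [0, 1, 8, 27, 64, 125, 216, 343]
```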

IDG: What led from that initial loss of intellectual prestige to the feminization of the work of computation, which began in the late 19th and early 20th centuries?

LD: The combination of higher education for women and very few employment possibilities that would make full use of it. Already in the late 19th century, the women’s colleges at Oxford and Harvard were places where women were poached to do this work, because, as the directors of the observatory blithely wrote, they could be paid half as much and were twice as conscientious, so they were an ideal match. For example, the head of the team of “computers”—as they were called at the time, though they were human beings—at the observatory in Paris was Dorothea Klumpke, the first woman to get a PhD in astronomy from the Sorbonne. Later in her life, she was certainly recognized as an important astronomer, but the job that was found for her was to be head of the team of calculators at the observatory. There’s also the famous case of the African American calculators at NASA, which gives you an idea of the intersection of two forms of labor which were chronically underpaid and undervalued.

IDG: Throughout the book, you argue that rules are tasked with the unwieldy project of mediating between the philosophical categories of universal and particular. Furthermore, you suggest that the shift toward rules with pretensions of universal applicability accelerated alongside the expansion of European empires in the 16th century. What can the history of rules in the West tell us about empire and imperial rule?

LD: Imperial rule is an extremely interesting test case for rules, because it stretches them to their widest extent. All rules, even if they don’t admit it, emerge from a particular context—historical and cultural, but also infrastructural and technological. To translate rules which were originally formulated for Victorian London to the Raj in India already represents an audacious act of generalization. Very quickly, the local authorities find themselves returning to the world of thick rules, which they are constantly tweaking and tailoring.

In the Roman legal code, the Digest of Justinian, there’s a whole set of rules of thumb meant for governors of distant provinces. You can see where the proverb “When in Rome, do as the Romans do” comes from: Basically, when not in Rome, don’t do as the Romans do. As long as you get the taxes collected, let [people do] what they usually do.

IDG: More and more of our lives are shaped by the opaque algorithms which mold our digital infrastructure, and the problem of algorithmic bias has become an increasingly pressing concern. Part of the trouble is that algorithms metabolize, then generalize from, the views and assumptions of a relatively small and homogeneous group of people—computer programmers—treating as universal what is in fact particular.

Is this an intrinsic risk of thin rules, or are there cases in which thin rules are less prone to unjust outcomes? Conversely, is it possible to imagine “thicker” algorithms that are less likely to import the biases of their programmers?

LD: I tread somewhat on thin ice here, because I am not a computer programmer. My understanding of machine learning is that the algorithm is trained on hundreds of thousands of examples. Depending on how representative a sample those examples are, you can get very biased results, or not. But there’s also a deeper assumption in those examples. Even if it’s not just a set of white, male, European faces that train the facial-recognition algorithm—a problem that could be corrected—the deeper problem is that all of these examples are from the past. This means if the future deviates from the past, as it is wont to do now and again, the algorithm is no longer a good fit.

This is a problem for a lot of economic models that are fitted to past data. They’re excellent at retrodicting what has already happened, but they’re very bad at predicting what is going to happen. This is a built-in problem, and I don’t see how to overcome it.
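
To make that last point concrete, here is a small sketch of the retrodiction problem: a model fitted to invented “past” data scores very well on that past and badly on an equally invented “future” in which the trend breaks. Everything in it, the years, the values, the sudden flattening, is an assumption made purely for illustration.

```python
# A toy illustration of the retrodiction-versus-prediction point: a trend
# fitted only to past data describes that past very well but fails once the
# underlying pattern shifts. The years, values, and the shape of the break
# are invented purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

# "The past": ten years in which some quantity grows roughly linearly.
past_years = np.arange(2010, 2020)
past_values = 2.0 * (past_years - 2010) + 5 + rng.normal(0, 0.3, past_years.size)

# Fit a straight-line trend to the past; it retrodicts almost perfectly.
coeffs = np.polyfit(past_years, past_values, 1)
retrodiction_error = np.abs(np.polyval(coeffs, past_years) - past_values).mean()

# "The future" breaks the pattern: growth suddenly stops.
future_years = np.arange(2020, 2025)
future_values = np.full(future_years.size, past_values[-1])

prediction_error = np.abs(np.polyval(coeffs, future_years) - future_values).mean()

print(f"mean error on the past:   {retrodiction_error:.2f}")   # small
print(f"mean error on the future: {prediction_error:.2f}")     # much larger
```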
