The idea of the “Anthropocene” was first proposed in the 1970s, and came into widespread use in the early 2000s. Scientists began to argue that there had been a seismic temporal shift, from the geological epoch inhabited by humans, known as the Holocene, to one in which humankind had itself become an agent of geological change. Initially, the term was adopted by the “global change” research community: natural and social scientists studying global warming, climate change, and other planetary “symptoms” of the Anthropocene era. By the late 2000s, the idea had been taken up by geologists, who look to stratigraphic evidence—rocks, glacier ice, marine sediment—to measure the chemical composition of the global atmosphere and chart the impact of human activity on it. Later this year, the Anthropocene Working Group of the International Commission on Stratigraphy will meet to decide whether enough evidence exists (and, if so, whether it would be scientifically useful) to designate the Anthropocene as an official geological epoch.
Among scientists, and for those across the humanities—particularly environmental historians, who have taken up the idea over the last decade—the question of exactly when the Anthropocene began is still up for grabs. Some date it to the human mastery of fire, many others to the development of agriculture or the Industrial Revolution; still others, for reasons less immediately obvious, insist on the start of the Cold War. The chemist who introduced the idea tied its origins to the geological changes (such as increased atmospheric CO2, methane, nitrate, and lead concentrations) that occurred following the surge in coal and other fossil-fuel use by the 1780s, but since then other, even more specific dates have been fielded as geological markers—“golden spikes,” in stratigraphic terms—denoting when atmospheric changes significant enough to warrant a break in geological time took place.
One of these is 1610, when atmospheric CO2 decreased abruptly as a result—scientists claim—of the meeting of the Old World with the New, which caused the cross-continental movement of plant and animal species and a massive decline in population as the inhabitants of the Americas were decimated by war, enslavement, famine, and exposure to unfamiliar diseases, leading to a reforesting of much of the two continents. Another is July 16, 1945, when a nuclear weapon was first detonated. There is also a broad consensus that the combination of the nuclear age and the three decades of economic growth after 1950—dubbed the “Great Acceleration” for their increase in human activity, and evidenced by a golden spike in stratigraphic deposits in 1964—at least mark the second stage of the Anthropocene, if not its beginning. Whichever of these dates, if any, is officially recognized will bring with it considerable implications. If the Anthropocene era is said to have begun with the Industrial Revolution, it will be tied to climate change. If its origins are in 1610 (or the end of the last ice age), the political ramifications are less clear.
Even if the debate about the Anthropocene’s geological origins is set aside, the politics of the idea looks increasingly controversial. Because the term assigns responsibility for the transformation of nature to the human race, it has become as much a call to collective responsibility as the name of a geological period. Some environmentalists see the label as repeating the Enlightenment mistake of anthropocentrism: To speak of the Anthropocene is to assume, wrongly, that human beings, with their mastery of nature, are exceptional. For those who care more about ecological and planetary life in general than the impact of climate change on humans in particular, the very idea of the Anthropocene puts us on the wrong footing (whether or not human beings are in fact most responsible for environmental degradation). Others worry that although the idea of a collectively responsible humanity could, in theory, prompt an earnest call to arms against global warming, in practice it invites political fatalism. We might all be collectively responsible for the mess we’re in, but politics isn’t exactly stopping climate change, and the lesson of the Anthropocene seems to be one of resignation to an impending apocalypse: We can do little more than wait, learn to care for one another, and put our faith somewhere other than politics—most likely in technology.
Another problem with invoking the Anthropocene to argue that we’re all responsible for climate change is the reverse implication that none of us is—that no one in particular can be held to account. So as a call to politics, the Anthropocene looks a little flimsy, even to its champions. For those of us who think that we are emphatically not “all” responsible for climate change, and that there are huge discrepancies in terms of carbon emissions—between rich and poor, Global North and Global South, and so on—the term looks worse than flimsy. It says nothing about inequality. It ignores structural racism and the legacy of imperialism. And it lets capitalism off the hook.
* * *
In After Nature, Jedediah Purdy sidesteps these debates. He sees the Anthropocene as a descriptive truism: It is simply the case that nature no longer “stands apart from human beings.” Capitalism and inequality are aspects of the Anthropocene, parts of the world that humans have made. It is also the case that merely using the term “Anthropocene” doesn’t summon our common responsibility; that is a project for politics. Instead, the term describes a world in which the division between politics (a human thing) and the natural world (something separate from us) has collapsed. Nature is no longer purely natural, thanks in no small part to us.
Purdy’s history tells a number of interlocking stories, and only the first one is about this recent collapse of nature into politics. What he also tries to show is that the apparent novelty of the Anthropocene is misleading. This isn’t just because, as its proponents sometimes concede, the idea of humans as the agents of geological change predates the invention of the term. Purdy’s second story depends on a simpler claim: It’s an error to think that nature has only recently become inextricable from politics. The line between the natural world and politics is porous, and nature itself is politically malleable. It has always been given a role in political argument (and has often been enlisted in arguments that have little to do with it).
The history of these arguments matters—and not just for those who care about ecological ideas. Different visions of nature have underwritten different political arrangements. They have been used to vindicate social and racial hierarchies, to justify institutions and governments. Each vision of nature has also produced its own parallel kind of what Purdy calls “anti-politics”—types of power and government that prevent politics from realizing its ideal form, which for him is democratic and based on the collective human work of inclusive decision-making. The appeal to nature over human judgment has often been a way of removing decisions from the democratic realm. Divine nature, “natural” markets, forms of neurobiology and evolutionary psychology that naturalize violence: In these and other domains, ideas of nature work as justifications for human action and can have as much material force as floods, droughts, and hurricanes. While many of these justifications have been overturned, their legacies remain. For politics has not only used nature but created it, and in the United States, the law has been the primary tool in this process, making landscapes and environments that conform to each era’s political vision. Even as we have finally denaturalized nature, we still inhabit a world shaped by earlier visions of it.
* * *
Purdy begins his history of the Anthropocene in 17th-century New England, where for European settlers a nature bearing the stamp of the divine was a source of moral guidance. For natural theologians, natural hierarchies underwrote those of the social world and taught lessons of subservience; the threat of sin within was mirrored by the wildness of the land without. Nature here enforced an “unequal terrain,” as Purdy puts it, appearing as harmonious to the obedient, and terrifying to those who defied its moral order.
Yet in the 17th century, politics was also denaturalized, most famously by Thomas Hobbes, who saw it as something made by men and the state itself as an “artificial person.” Lurking behind Purdy’s story is an interpretation of Hobbes as the first philosopher to recognize the democratic possibilities of an artificial politics detached from nature. It was Hobbes’s untangling of natural and divine hierarchies from political ones that paved the way for democracy: Once hierarchies are no longer understood as natural, they can be more easily altered. Yet robbed of its divine stamp, nature became even more malleable as a tool of politics, of all kinds.
As the early sojourners became settlers and the country’s unequal terrain became a more democratic one, the “fearsomeness of nature” was superseded by the vision of a fruitful landscape in which wildness did not reflect potential sinfulness, but rather the tyranny of waste and idleness. Purdy calls this the “providential” vision, the first of four American environmental visions, where free land was the condition for a nation of free men, and men gained dignity by working the land. The settlement of the frontier became a mission of republican and natural progress. The land could be a democratic terrain open to all, even for the commoner (so long as he worked it hard enough). From the English Diggers of the 1640s to the American Populists of the 1890s, labor—the “great human equalizer”—was imagined as “working on or with the land,” itself a democratic partner in the creation and maintenance of the Republic and “collaborator” in its own development.
Human labor was destined to transform nature into property, and soon laws made real not Hobbes’s politics but John Locke’s idea that mixing human labor with land justified its ownership. “Pre-emption” laws granted settlers first claim to the land they occupied, amending earlier statutes that had made acreage available for sale. Later, the Homestead Acts, and others like the Timber Culture Act of 1873 and the Swampland Act of 1850, gave public lands to those who planted, cleared, or drained them. The vision of a supposedly empty and natural American landscape transformed by the fruitful power of labor became reality.
But not for everyone. The opening up of the continent’s natural wealth might have enabled the Lockean project of a democracy based on property. Yet this familiar ideology also underwrote a familiar “anti-politics”: the brutal exclusion of Native Americans, enslaved peoples, and women from the new terrain. For them, it looked more like the hereditary aristocracy that Locke had outlined in his (never adopted) 1669 constitution for the Carolina Colony—in which serfs worked the land, Africans were enslaved perpetually, and the absolute political power of their masters was sustained by a system of exclusionary land ownership that buttressed and bolstered existing hierarchies.
* * *
Utopian ideas about nature have tended in two directions: those in which nature is disciplined, tamed, and reformed to cater to human desire; and those in which access to a “true” nature is necessary for the transformation of individual desires. As the providential imagination found expression in law, an alternative “Romantic” vision developed alongside it, taking the second direction. Nature might have called out for settlement and development, but it also taught aesthetic and moral lessons, thereby transforming the lives of those who encountered it. Self-knowledge and an authentic self depended on the experience of wild nature, which could set people free.
The roots of this vision are usually traced to Thoreau and Emerson, but the Transcendentalists were less concerned with untouched wilderness than the later environmentalists who translated their ideas into practice. For John Muir and the Sierra Club at the end of the 19th century, the aim was to preserve a nature untainted by politics and capitalism, where pilgrims could communally pursue their own individuality and the solitary wanderer could transform his consciousness amid the sacred cathedrals of the American landscape.
Preserving a space of untouched nature for the individual free from politics and other people turned out, naturally, to require politics and other people. The strain of environmental thinking so often associated with skepticism about the state and its laws needed both in practice if it was to protect the forests and create the national parks it prized so highly. The laws also created “wilderness”—not just by drawing boundaries around a physical space, separating it from the rest of everyday nature in order to protect it, but by producing a new form of value. But while a great deal of collective effort went into the protection of the new wilderness, the Romantic vision fostered an impulse to evade collective efforts altogether. Nature’s freedom meant the freedom of individual conscience from social habits, a pure conscience rooted in nature. The ideological legacy of all that collective legal effort was the enshrining of another anti-politics: the belief that individuals should and can experience nature alone and untouched by the state. The practical legacies were equally contradictory, with the elitist devaluing of “lowlands” coexisting with the easily packaged tourism of the camping holiday.
The implementation of this Romanticism was enabled by what Purdy calls the “conservationist” vision. It emerged at the dawn of the 20th century, as the providential view gave way to a new managerial ethos that undergirded the rise of the bureaucratic, administrative federal state. Purdy sees the historian Frederick Jackson Turner’s famous “Frontier Thesis,” originally a lecture delivered in Chicago in 1893, as an ideological pivot between the two eras. For Turner, the frontier had been a “safety valve” for American democracy. Its abundance had produced a highly individualist democratic culture, hostile to central control and intolerant of “administrative experience,” in which economic power secured political power.
* * *
With the frontier’s closure, the idea of the collective management of both nature and people grew in prominence. For Theodore Roosevelt and subsequent Progressives, the management of nature was just one part of the management of social and economic life; forest conservation existed alongside labor regulation and antitrust law. The natural world was still understood as the servant of human needs, as it had been for the theorists of the frontier, but the work of taming nature was now seen as too complex to be left to individual pioneers. It was a job for federal administration. The providential view had naturalized the market economy: Free enterprise arose organically from settlement and private ownership. For conservationists, such ownership had led not to liberty but to waste, which was no longer defined in terms of wildness but productivity. So the economy had to be controlled by politics. Resources were managed, concentrated ownership limited, and monopolies checked, all by the enlarged and still-growing federal government. Where Hobbes had shown that politics wasn’t natural but human, now economics went the same way.
Where the providential vision lingered on, the government was understood as a new form of tyranny. In any case, the administrative state had its own dark underbelly: a politics based on the white conquest of western lands was replaced by one in which the management of economics and nature included the eugenicist management of races, and the success of the state at home depended on empire abroad. Nature may have been disenchanted—whereas once it was full of meaning, something above human judgment to which political decisions might appeal, now it was merely another “fungible resource” to be integrated into the calculus of decision-making—but it was still used to sustain hierarchies, and to draw a line between the experts who knew how to manage nature correctly and everybody else. In the name of the public interest, the state moved into more areas of collective life than ever before. Yet it did so, Purdy writes, by “technocratic,” not democratic, means.
The fourth vision of nature that Purdy discusses is also our current one, the “ecological.” Just as the Sierra Club reinterpreted Thoreau to justify the protection of a sacred nature set apart from everyday life, so Aldo Leopold and the Wilderness Society advocates who campaigned in his footsteps reinterpreted the Romantic view to show that nature mattered not because of its separateness from us, but because of the interdependence of life on Earth. The ecological view treats the planet as a system or organism defined by its wholeness. The label “the environment,” coined in the 1940s, but largely a product of the 1960s, once again made the natural world as a whole—now re-enchanted in new ways—central to politics. For ecologists, nature wasn’t a collaborator in republican virtue, a teacher of moral lessons about citizenship or the self, or a resource to be managed, but a unitary force to be faced with humility. This vision—so familiar today and so easy to disdain—could slip quickly into misanthropy and an anti-political, high-minded virtuosity. Everything was connected, but some connections counted more than others.
Yet ecological interdependence also provided the foundation for an explosion of environmental lawmaking in the 1970s. The National Environmental Policy Act and Clean Air Act of 1970, the Clean Water Act of 1972, and the Endangered Species Act of 1973: These and others were based on an ecological view of interconnection and marked a radical departure from the providential, Romantic, and conservationist uses of law. From the statutes of 1785 that gave frontier land to settlers to the Wilderness Act of 1964, the laws that allocated land either to private ownership or public management were all forms of continental zoning. The newer laws, like the antipollution statutes, dealt not with the division of land but with the governance of things that transcended and connected particular places, like water and air. As such, they regulated private property and industry, often for the first time.
But the new laws did not supersede older ones. As Purdy makes clear, the contradictions of law today show the extent to which our politics are still grounded in previous visions of nature. The laws of each period linger on: The General Mining Law of 1872, for instance, still allows any person title to valuable minerals found on public land. Opponents of the new environmental laws have likewise appealed to older ideas of nature, as inscribed in law, to challenge them. Purdy takes the Sagebrush Rebellion as an example—the movement in the American West in the 1970s and ’80s to defend private property rights and local land management, which began in part as a protest against new policies that required the review of ranchers’ permits to graze cattle on federal land under the National Environmental Policy Act. (The rebellion had a brief second life in the Malheur standoff led by the Bundys in Oregon earlier this year.) The legal arguments on all sides were a microcosm of the broader legal battlefield: Progressive-era policies regulating economic use were combined with ecological statutes imposing oversight, and were contested by those defending providential views of private property in which grazing permits were treated as entitlements.
* * *
Can traditional legal concepts—anchored in a settler vision of property rights, constitutionally enshrined—be made compatible with newer ecological ones? The question arises not just in particular cases but also at a general level of legal interpretation and decision-making. In determining whether land disputes should be solved by property law, interpreted by judges, or administered federally by executive-branch officials, the clash between the providential and conservationist vision is constantly replayed. Similarly, the problem of constitutional standing—of which actors can bring suit in federal court—turns on this clash of visions. The conventional view of standing relies on a litigant with rights in private property whose interests are directly and causally affected or harmed. More recent attempts, both to expand standing to include plaintiffs who are more abstractly or remotely affected by environmental changes (such as species extinction or climate change), and to allow natural entities to be considered legal actors, sit uncomfortably alongside that well-entrenched paradigm. As such, they have often failed. Environmental laws are less readily enforced than traditional property rights, so nature, nonhuman species, and future generations have little purchase.
This, Purdy suggests, and not merely a geological epoch or a call for a unified humanity, is our Anthropocene—a landscape that contains, and is shaped by, clashing visions of nature, its political uses, and its legal embodiment. Just as new laws conflict with the old, newer environmental preoccupations prove difficult to reconcile with older ones. Food production, the treatment of animals, and climate change have largely taken the place of wilderness preservation or laboring with the land, but concerns about the latter nonetheless still shape considerations of the former. For Purdy, one of the key challenges of the Anthropocene is to use the law in ways that adopt the best rather than the worst of each vision of nature: to integrate concern for human work and meaning into an ecological framework; to set standards for action on climate change; to make transparent the sources of our food and our treatment of animals. We need to do all this, he says, without falling into the trap of a “neoliberal Anthropocene,” in which nature is made visible only as “natural capital” in economic trade-offs, or as a backdrop to a techno-optimism that places our collective fate in the hands of markets and technology.
If we are to avoid this trap, Purdy thinks we need to learn the core political lesson of his story—which at its heart is not about the politics of nature, but about democracy. This is a history in which democracy is constantly evaded, decision-making is removed from collective politics by appeals to “natural systems,” and anti-politics creeps back in. The earliest “natural” system to circumvent democracy was a hierarchical, exclusionary political order. That was gradually dismantled by Hobbes and his heirs, who saw politics as artificial, something made by people. The second was the naturalized system of the free market and free ownership, the idea of a spontaneous economic order, which was likewise gradually dismantled—albeit often only temporarily—in the 20th century. The last natural system to fall was nature itself. That is what defines, Purdy writes, the newest phase “in a country that has always been Anthropocene.” If we’re clear-eyed, Purdy hopes, we’ll come to see that what’s left is artificial politics—the politics we make together. Technology and economics can’t save us: Both repeat the old fantasy and faith that politics can be avoided.
But we want that politics to be democratic. It’s become a cliché that if we’re going to tackle climate change successfully, we’ll need democratic self-restraint: Rich states and their citizens will need to control themselves, cut carbon emissions, and consume less. Purdy makes the case that democratic self-restraint relies on democratic self-assertion; only a properly democratic movement can actively choose (and maintain) that self-restraint and shape nature in the right kind of way. Nature has always been something made and imagined by us. The real challenge of the Anthropocene is not to face up to that fact; Purdy’s book does that for us. Instead, it’s to create a politics that confronts both environmental problems and those of inequality, exclusion, and capitalism, by building the kind of mass democracy that appeals to nature have always been used to avoid.