When the Finnish prime minister, Sanna Marin, was recently captured on video dancing at a party with friends, a great deal of ink was spilled evaluating her behavior. She was criticized for conduct unbecoming a prime minister and for an exuberance that some on the far right gleefully—and falsely—claimed must have been fueled by illicit drug use. This framing put the onus on Marin to closely monitor herself even in closed rooms, and it does the same for us all. It compels us to perpetually distrust weddings, birthday parties—anywhere there are others who have cell phones. Ascribing the video’s circulation to individual intentionality restricts our imagination to carelessness and “leaks.” One cybersecurity expert “raised the possibility Russia had hacked the phone or social accounts of someone who is part of the close circle of the Finnish premier.”
Marin was in a supposedly private space at a supposedly private party. Technology being what it is, the leak didn’t have to be an overtly intentional exposure. Yes, it could have been an undercover Russian spy or a stalker paid off to share dirt. But it is just as possible that a friend posted it—for supposedly “personal” use—on Facebook or WhatsApp or TikTok. It is too easy to forget that nothing posted on these platforms is ever really personal—even in the European Union, which has somewhat stronger protections than the United States. This kind of inadvertent surveillance is always a problem when posting content on social media. It can be extracted and exploited for both legal and illegal ends: to threaten political careers, to defame and embarrass, or, as we see in red states, to threaten those who are thinking about—or who are thought to be thinking about—abortion, contraception, or other “disobedience” in our brave new world.
One does not have to be a celebrity or a prime minister to be subject to such intrusive exposure. Recently, a friend related that she’d stopped going to her favorite gym class because someone repeatedly pulled out a cell phone and filmed the class without seeking permission. The gym had a clear statement in its membership rules: “Photography and videography strictly prohibited.” But when she objected, she found no recourse: Not just the people wielding the cameras, but the teacher and other members of the class reacted as though she were in the wrong for complaining.
As recently as 10 or 15 years ago, most people would not have dreamed of taking pictures in a gym without permission. There was, I think, a stronger set of social conventions about averting one’s eyes from others’ daily rituals, embarrassments, ablutions, and purifications. The generational divergence around manners is not only a question of chronological age. Even more significantly, it is evidence of a shift in ideological constructs: the growth of a libertarian, tech-inflected disposition that has increasingly pervaded society at the point of use. Selfies are only the most conspicuous expression of a laissez-faire culture that is, just as the name implies, all about the self rather than others; a culture that ennobles the gaze of the monetized “influencer” for whom the mirrored self is alpha and omega.

It is also a juridical transformation. The privatized commercial value of data now tends to govern outcomes much more than collective or public interests, including the law. Legal rules can be overridden in the free-for-all of ungoverned power usurped by the camera and the platform, by Google or Twitter or Facebook. The burden is on individuals to figure out the labyrinthine rules in the terms of agreement; it’s up to the lonely you to scour those terms every day, every moment. Online “click-through” contracts can change privacy terms via quick pop-up notices; most consumers press “I accept” without even the most cursory glance at the actual text. In other words, when we take a picture or video and send it to someone on Facebook or WhatsApp or Instagram or TikTok, our “intention” that it be only for personal use is a hollow expectation: We have delivered our secrets to a predatory behemoth.
On August 11, the Federal Trade Commission voted to approve the commencement of an open inquiry into the making of rules that govern commercial surveillance and data security. The Advance Notice of Proposed Rulemaking invites the public to submit comments on whether the FTC “should implement new trade regulation rules or other regulatory alternatives concerning the ways in which companies (1) collect, aggregate, protect, use, analyze, and retain consumer data, as well as (2) transfer, share, sell, or otherwise monetize that data in ways that are unfair or deceptive.”
The FTC poses 95 questions to which it invites answers from anyone—workers from all walks of life, gamers, academics, social scientists, digital natives, and cranky outsider Luddites. The questions cover broadly familiar territory: Will giving individuals the right to control cookies be bad for “competition” between business platforms? Is it wrong to use “techniques that prolong online activity,” such as “quantified public popularity”? Should there be limits on marketing children’s data? What are the “relative costs and benefits” of passing regulations to limit the collection of data? Should end-to-end encryption methods be deployed to protect confidentiality? Does commercial surveillance cause harms that are not easy to discern, identify, quantify, or measure?
To anyone watching the rise of techno-kings like Elon Musk, media giants like Google, or whole cyberworlds colonized by companies like Meta, these questions may appear a bit like Rip Van Winkle’s creaky awakening to search for a horse that has long since left the barn. In fact, it might be said that the protracted failure to ask—until just now—“whether” there should be new trade regulations governing companies’ monetization of data is an anti-regulatory-policy stance in itself. But the FTC points out that this proposed new rulemaking is part of an attempt to fix gaps in its reach: “The FTC’s ability to deter unlawful conduct is limited because the agency generally lacks authority to seek financial penalties for initial violations of the FTC Act. By contrast, rules that establish clear privacy and data security requirements across the board and provide the Commission the authority to seek financial penalties for first-time violations could incentivize all companies to invest more consistently in compliant practices.” Congress currently seems likely to pass (with bipartisan support, no less!) the American Data Privacy and Protection Act, whose terms would narrow the permissible range of raw data collection, increase transparency, and generally expand people’s rights to access, correct, and delete personal data.
Perhaps the sudden crises presented by the past four years of political polarization have finally foregrounded the dangers lurking in the ultra-libertarian shrug of “I have nothing to hide”—that mantra one heard in the early days of Internet surveillance. “It’s so convenient” remains the insistent justification for algorithmic assortments that serve us up suggestions for the perfect book, the perfect recipe, the perfect game, the perfect home. It is all but impossible to function in a contemporary economy without such “efficient” surveillance, without an assortment of phones, watches, and other tracking devices attached to our bodies like demonic little leeches disguised as smiling, servile, nearly invisible helpmeets. But as our machines fatten themselves on gluttonous feasts of our data, it is increasingly clear that we are living in a technological panopticon, our solitary cells encircling a central observation tower. Fully exposed, we are rendered vulnerable to corporate manipulations that equate due process with business inefficiency and dismiss transparency as an infringement of trade secrets. At the same time, and not coincidentally, revenge porn, algorithmic bias, and identity theft have become all-too-common experiences. Indeed, commercial databases have enabled new forms of granular tracking in a time when heated passions fuel inquisitions that can amount to harassment and endangerment: of teachers, journalists, census takers, bookstores, voters, poll watchers, librarians, accountants with the IRS, public health workers, doctors, nurses, judges, ex-spouses, and now even the FBI—anyone deemed guilty of an ever-expanding list of activities targeted as the errancy of the moment.
Made particularly obvious in the wake of the overturning of Roe v. Wade, digital tracking poses invasive and dehumanizing peril for millions of Americans. Heretofore, only a few Americans seemed to grasp that supposedly “private” information collected by apps that track menstrual cycles might be sold to—as well as hacked and scraped by—private vigilantes determined to follow the activities of potentially pregnant users. New statutes criminalizing abortion have empowered police departments to use individuals’ social media data to file charges against people who may have ordered medications like abortion pills mailed to them across state lines. In one recent case, police in Nebraska obtained a warrant for the Facebook communications between a mother and her teenage daughter, revealing that the daughter had taken a mail-ordered abortifacient and, as a result, miscarried at 22 weeks. Prosecutors charged the mother both with performing an abortion without a medical license and with attempting to abort a fetus at more than 20 weeks. Her 17-year-old daughter, charged as an adult, was accused of mishandling human remains and failing to report a death.
The architecture of this surveillance technology has reconfigured what we are used to thinking of as governance. Our constitutional right to privacy is rooted in a secular tradition of autonomy and selfhood that was historically understood as an aspect of human dignity, bounded against intrusion by the state. But state structures of surveillance have been vastly overtaken by the Internet’s evolution as a predominantly commercial space with an infinite profit motive. The supposed notice and consent proffered by the “I agree” buttons we press to access any given platform is woefully ineffective as a mechanism imparting anything like choice. Yet that empty assurance of “choice” is the vocabulary used to justify information-gathering practices about everything from voting preferences to bathroom habits to musical tastes to blood pressure and sleep patterns. Despite vague promises that such data is “anonymized,” it can be bought, sold, scraped, reassembled, reidentified, and used to zero in on particular individuals with laser-like predictive precision about specific behaviors.
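The mechanics of that reidentification are simple enough to sketch. The following is an illustrative Python example with entirely invented data (every name, ZIP code, birth date, and diagnosis below is made up): a classic “linkage attack” joins an “anonymized” release against a public, identified dataset on shared quasi-identifiers, restoring names to records that were supposedly stripped of them.

```python
# Hypothetical "anonymized" release: names removed, but quasi-identifiers
# (ZIP code, date of birth, sex) retained. All data here is invented.
anonymized_health = [
    {"zip": "02138", "dob": "1954-07-21", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "60601", "dob": "1990-03-02", "sex": "M", "diagnosis": "asthma"},
]

# Hypothetical public, identified dataset (e.g., a voter roll) that happens
# to carry the same quasi-identifiers alongside names.
voter_roll = [
    {"name": "J. Doe",   "zip": "02138", "dob": "1954-07-21", "sex": "F"},
    {"name": "A. Smith", "zip": "60601", "dob": "1990-03-02", "sex": "M"},
    {"name": "B. Lee",   "zip": "94110", "dob": "1982-11-30", "sex": "F"},
]

def reidentify(release, public):
    """Join the two datasets on their shared quasi-identifiers,
    mapping recovered names to the 'anonymized' sensitive field."""
    index = {(p["zip"], p["dob"], p["sex"]): p["name"] for p in public}
    matches = {}
    for record in release:
        key = (record["zip"], record["dob"], record["sex"])
        if key in index:
            matches[index[key]] = record["diagnosis"]
    return matches

print(reidentify(anonymized_health, voter_roll))
# {'J. Doe': 'hypertension', 'A. Smith': 'asthma'}
```

With only three mundane attributes, every “anonymous” record above is uniquely named again. Real linkage attacks work the same way at scale, which is why simply deleting the name column is no guarantee of anonymity.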
Because giant social media platforms are not state actors, we can’t easily summon the same legal remedies against them for censorship or denial of public accommodation. As private, corporate, profit-seeking businesses, they are responsible only to their shareholders. We are of relevance to these companies only as consumers in a market economy, not as citizens in a diverse social network. If we decline their terms of service, we are caught in a double bind, a “voluntary choice” that involuntarily excludes us from basic life necessities: credit cards, cell phones, banking transactions, workplace duties, insurance, credit ratings, schooling, shopping, health services, housing.
However it was that Prime Minister Sanna Marin’s happy dance went viral, it shouldn’t be a surprise. We are all at risk of being decontextualized through the revelation of unguarded moments. Social media has become an unforgiving monitor of our slightest indiscretions, even in the privacy of our homes. While we, the public, may hold each other to norms of the sober, industrious, and dignified in the workplace, tracking each other behind closed doors and during off-hours has become a monetized game of gotcha, an eternal reality show that rewrites the notion of an open society into a tyranny of voyeurs and pornographers.
Somehow, we seem oblivious, wandering through the metaverse like Little Red Riding Hood. We are innocent; our path through the forest feels like a secret. We don’t imagine the existence of wolves on our trail. But where does our path lead? How far off are we from the armchair censure of millions of strangers, the context collapse that occurs when precious bits of ourselves are viewed by many audiences at many distances, in many cultural contexts? We are collateral in the market for data, even when tracked, trolled, misread, targeted, harassed, humiliated, our naked images photoshopped in some dark room on some subreddit: All are mere transaction costs to the hungry leviathan that devours our every move and extracts monetary value without our knowledge or consent.