One glaring revelation of the Covid-19 crisis is how ill-prepared the United States turned out to be for an unexpected national medical emergency. There are deep historical reasons for why our welfare state proved incapable of dealing with the unprecedented challenges of the pandemic—reasons that predate the Trump administration’s mismanagement. Decades of economic decision-making that prioritized a market-driven system ignored the kind of welfare distribution necessary to deal with the potential challenges of a large-scale crisis like Covid-19.

As is so often the case, the long-term consequences of this deprioritizing of care fell disproportionately on Black Americans and other minorities whose health and finances were most affected by the pandemic. Consider, for example, a recent study by the University of Pennsylvania, which demonstrates that roughly half of low-income communities in the United States have no ICU beds. Indeed, the United States has fewer ICU beds per 1,000 people than China, Japan, or Italy.

The research of George Aumoithe, a historian of global health at Stony Brook University, shows that the lack of ICU beds in low-income communities is the result of federal and state spending cuts dating back to the 1970s. At that time, Aumoithe argues, an inflationary crisis, combined with urban flight to the suburbs, compelled politicians and health care experts to cut costs by closing down hospitals in urban areas and moving ICU-focused specialty care to hospitals in the suburbs. It was this policy of economizing—starting in the 1970s and continuing through the AIDS epidemic and today’s Covid-19 pandemic—that explains why half of low-income communities in the United States have no ICU beds.

The Nation spoke with Aumoithe about his work on the history of Medicare and Medicaid, the economics of disease prevention, the downsizing of inner-city hospitals, and the racial dynamics at play in health care during the pandemic. This conversation has been edited for length and clarity.

—Daniel Steinmetz-Jenkins

Daniel Steinmetz-Jenkins: As a historian of US health care and the welfare state, a major aspect of your work is examining how the political economy of health care after the implementation of Medicare and Medicaid shaped the nation’s capacity to cope with the challenge of disease prevention. One might think that, given this groundbreaking legislation, policies would have been in place to more adequately meet this challenge, but your work calls this assumption into question. Why is that?

George Aumoithe: The dominant narrative goes something like this: In the United States, “health care” and “public health” are separate. The former involves the healing arts and the individual’s experience managing acute or chronic illness; the latter deploys the statistical science of epidemiology to think about disease on the population level.

Private insurers still managed the bulk of health care financing after the passage of Medicare and Medicaid in 1965. Massive opposition from conservative medical lobbies and a perilous legislative path forward meant that Medicaid received less attention than Medicare in the drafting process. In a period of low unemployment like the 1960s, few policy-makers assumed that Medicaid would rival Medicare in size. Yet as of 2019, Medicaid spending reached three-fourths that of Medicare, making the programs comparable in scale. Today, public insurance makes up a solid one-third of total health care financing.

Yet, since at least Theodore Roosevelt, the pursuit of national health insurance has always been within the realm of—if not at the center of—American politics. Still, the movement for universal health care has hardly marched in lockstep with the contingencies of public health. In this sense, not much has narrowed the chasm between public health and health care in the United States. Understanding the history of what formally separates public health from health care explains how poorly this groundbreaking legislation served disease prevention.

DSJ: Of course, the shift at this time from health care policies focused on infectious disease and the diseases of the poor to health care policies preparing for chronic illness and long-term care coincided with the civil rights movement. Why is this significant?

GA: Much of the movement for universal health care emerged from eminently practical efforts to aid activists injured by bigots. After the fall of de jure segregation, though, groups like the Medical Committee for Human Rights moved from triage to advocating for universal health care, but these efforts faltered before organized conservative opposition in the 1970s and 1980s. As a result, narratives about disease-specific groups—ranging from the women’s liberation movement’s breast-cancer and reproductive-health activism to the movement to stop AIDS—dominate our conception of the late 20th century.

While these movements articulated a moral critique of the medically damaging effects of leaving people uninsured, efforts to expand care also ran into poor economic timing. By 1973, inflation unshackled preexisting racial resentment against the welfare state, providing cover for welfare-state retrenchment beginning under Nixon. There’s a strong whiff of fiscal hawks wielding the same rhetoric today against the $3.5 trillion budget reconciliation bill being debated in Congress. Much as in the 1970s, concern about inflation has become a bipartisan affair, making passage of the bill a thorn in the side of the ruling party.

DSJ: As a consequence of the inflationary crises of the 1970s, you argue that there was a turn toward economizing embraced by health policy experts. What exactly do you mean by the “economizing” of health policy?

GA: “Economizing” is a euphemism for the cost-cutting, both on the supply side and the fee side of health care, that began in the late 1960s in the realm of federal “health planning.” Under the Johnson administration, the Department of Health, Education, and Welfare (HEW) convened health planning task forces to think about how both economic and social “inputs” and “outputs” could serve as “indicators” of the effectiveness of federal health policies.

From the late 1960s to the ’70s, HEW continued developing social indicators—the Johnson administration went so far as to explore establishing a social counterpart to the Council of Economic Advisers. The Nixon administration reversed course, focusing health planning on hard economic figures, abandoning the development of social indicators, and terminating direct federal block grants to cities, which killed demonstration projects in their infancy. All of this was an attempt to restrict the applicability of federal funds to groups falling outside the boundaries of retirement or the official poverty line.

Local governments, often outpacing the federal government, also indulged in the game of cost-cutting. Nowhere was this more evident than in New York City. The Health and Hospitals Corporation (HHC), founded in 1969 by Mayor John V. Lindsay, centralized control over 50 percent of the nation’s municipal hospitals. Administrators scoured the system, cutting medical procedures, technology, beds, and staff. The rhetoric of economizing obscured the concentration of services and technology, the elimination of inpatient beds, and the increase in work hours for the remaining staff. The HHC closed multiple hospitals concentrated in core working-class neighborhoods such as Harlem, the South Bronx, and South Brooklyn.

DSJ: So here we might see a kind of tension in your analysis. On the one hand, much of this seems to be predicated on the unforeseen consequences of “colorblind legal ideology”—one based on overcoming racial segregation. At the same time, it shows how the benefits of Medicare and Medicaid became racialized in the 1970s. Was this intentional or unintentional?

GA: Answering the question of intentionality is a fool’s errand. It is difficult to answer empirically because intent in the law boils down to a question of psychological motive rather than statistically observable disproportionality. In public accommodations such as hospitals, multiple decision-makers with innumerable technical rationales produce policies manifestly terrible for Black people.

Divining intention is not my bailiwick; I study effects, and the question of effects allows for more fruitful discussion. After more than a year of a pandemic that has claimed well over three-quarters of a million American lives, including an estimated one in 1,000 Black Americans, the structure of health care we live under obviously remains deficient, regardless of intent. In fact, very little attention has been given to the issue from a legal standpoint. Public policy after the Affordable Care Act hasn’t eradicated the fundamental drivers of cost and inequality in our system: profit-driven private insurance and health care delivery. The exclusion of Black, brown, and working poor Americans from the most beneficial tiers of that structure is class warfare.

That warfare relied on jurisprudential shifts that moved the standard of discrimination away from a plaintiff’s demonstration of statistics showing disparate adverse impact and toward tests of a defendant’s discriminatory intent. Ever since that shift—one that largely took place following important employment discrimination lawsuits in the mid-1970s—plaintiffs who have tried to show the class-based impacts of hospital closures have carried a losing hand into court when it comes to demonstrating discrimination.

This shift accelerated as stagflation and the urban fiscal crisis left jurists hard-pressed to deny municipalities the freedom to dictate budgets as they saw fit—regardless of their impact on the people.

DSJ: How did this leave the United States ill-equipped to deal with the HIV/AIDS epidemic in the 1980s? And what might we learn, in this regard, from how comparable advanced industrial democracies dealt with it?

GA: The story of US unpreparedness goes back earlier than Medicare and Medicaid. Part of that context began with the decline in infectious disease mortality and morbidity after the Spanish flu of 1918. The shift from infectious to chronic disease coincided with the rise of pharmaceuticals in the postwar period. After the 1960s, policy-makers assumed that the need for acute inpatient care would decrease, while longer-term treatment for chronic diseases would increase. Historian Jeremy Greene describes this shift as an age in which doctors began “prescribing by numbers.” Pharmaceuticals became increasingly viable adjuncts to periodic and episodic care.

Consider, for instance, that from the 1970s to the turn of the millennium, the base of acute inpatient care shrank from 7,123 hospitals with 1.62 million beds to 5,810 hospitals with 984,000 beds. In the same time frame, hospitals’ average daily census of admitted patients shrank from roughly 1.3 million to 650,000. Correspondingly, outpatient visits steadily increased from 181,370 to 592,673. The declining occupancy rate resulted from improved pharmaceuticals, which facilitated more outpatient care, and from a political economy that prioritized corporate forms of inpatient health care delivery while cutting public options.

During the fiscal-crisis years of the mid-1970s in New York City, this went so far as to see Mayor Abraham Beame close every neighborhood family care clinic. What replaced those clinics? The so-called “minute clinics” of for-profit providers, precursors to the for-profit urgent care clinics that dot major thoroughfares today.

DSJ: What parallels do you see between how an earlier health regime shaped the disparate responses to HIV/AIDS and the disparities with Covid-19 in the United States, where the most affected have been poor and Black Americans?

GA: Covid-19 has not discriminated nearly to the same extent as HIV did in the 1980s and ’90s. That said, there are parallels in mortality. First, the demographic impact of both diseases has fallen disproportionately on Black and brown Americans. In fact, when you drill down on the demographics of Covid-19 transmission, you clearly see Latinx populations devastated by the spread of Covid-19 as much as Black Americans, if not more so. This is something my Global Health and Health Inequality Mapping Lab at Stony Brook is presently studying.

Much as at the nadir of the AIDS crisis, it’s clear that, despite differences in each disease’s mode of transmission, Reagan’s and Trump’s avowals of austerity and personal responsibility shaped the policy response.

For instance, the US government issued blatantly contradictory advice regarding the transmission of Covid-19 in order to prevent a run on limited stores of personal protective equipment. Bob Woodward reported that Trump “wanted to always play it down.” Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases at NIH, and Surgeon General Jerome Adams told Americans that they need not wear masks. Very shortly after, it became clear that Covid-19 transmission occurred via respiratory contact. Respiratory transmission was already a known risk for this virus family, which has been under surveillance for more than two decades and which was the subject of gain-of-function research funded by the NIH. Therefore, in light of the East Asian experience with SARS and the effort to contain the outbreak in Wuhan, it beggars belief to assume the US government did not recognize the immediate need for masks. Instead, policy-makers evidently misled the public to avoid a market panic over scarce N95 masks.

These lies—all to make up for an unprepared government stockpile—led to needless deaths and debilities. Even as the wartime nature of the pandemic stressed systems, Trump hesitated to activate the Defense Production Act and produce needed personal protective equipment and ventilators. Furthermore, decades of outsourcing left the US ill-equipped to physically produce medical-grade gear. In addition to not developing a vaccine distribution plan, the Trump administration also nixed other initiatives, including a US Postal Service proposal to distribute four face masks to every family in the United States in March.

DSJ: How many lives would have been saved if the US had heeded international guidance, mobilized existing resources, and even paid people to stay home for two to three months early in 2020? The same question could be posed for HIV/AIDS: Would centering the most vulnerable have produced better policy?

GA: Leading epidemiologists and physicians are increasingly describing SARS-CoV-2 transmission as endemic. In light of the cost of massive monetary and fiscal stimulus, and considering what we know about using contact tracing and strict quarantines to stop transmission in countries like New Zealand, Singapore, and South Korea, it doesn’t seem radical at all to ask the counterfactual: What might have happened had households been encouraged to isolate without fear of economic ruin? What would policy look like starting from the vantage point of the most vulnerable rather than the most privileged?

Given the ease of Covid-19 transmission, its immediate impact on Black Americans’ mortality has been more severe than the early toll of the HIV epidemic. In addition to the one out of every 1,000 Black Americans killed by Covid-19, outcomes with HIV/AIDS remain staggeringly stark, with an estimated one in two Black gay men at risk of contracting HIV in their lifetimes. HIV’s burden of disease has been much more widely experienced along axes of sexuality and gender that concentrate around age and class. Covid-19 has disproportionately claimed the lives of the eldest among us—but it has also reshaped the lives of the far greater number who have survived it and suffer long-term debility. The Biden administration has suggested that long Covid might, like HIV/AIDS, join the list of disabilities that qualify under the Americans With Disabilities Act. I don’t think we appreciate how much of our disabling illness results from neglectful, short-sighted policy rather than nature’s accidents.

DSJ: As you know, in this country there is a strong tendency in our thinking to separate questions of race from class. How does your work address this matter as it relates to Covid-19 and its impact on the most vulnerable Americans?

GA: The history of racism is not immaterial to a class analysis. Part of what the jurisprudential shift from “impact” in the 1960s to “intent” in the 1970s did was to absolve local governments of the threat of civil rights litigation. Changes in the 1970s barred public defenders and legal aid agencies from lodging class-action lawsuits against the government. With the de jure fall of segregation, the communities that lost proximate public health care due to economizing cuts also experienced a resegregation in housing and an implicit rollback of the tenets of nondiscrimination via integration. In the rural South, for example, integrated facilities did not solve the discriminatory effects of closures that produced scarce and distant care for the poor.

While race-based discrimination seemingly disappeared from the picture, the sediments of class remained. This is a problem when you are HIV-positive, Black, and live in the country, where it takes two hours to get to the nearest treatment center. Or if you are young enough to work but are locked out of the labor market and must become HIV-positive to qualify for Medicaid. The logics of concentration, efficiency, and means-testing also mask deprivation by structures that allow racism to persist.

The raw deal we get in public education reveals the costs of integration. While Americans today have the legal right to live in any neighborhood free from racial discrimination, we know that token presence masks class allegiances. What I mean is, access to the best care in this country is linked to which institution employs you or whether you’ve retired into Medicare. Public benefits are not equally available to all. Self-identified liberals frequently oppose school boards attempting to integrate poor students with the upper-middle-class students who remain in the public system. Public school revenues hew closely to the wealth derived from property taxes. The same structure underwrites voucher systems that siphon public money into private academies.

DSJ: You have argued that only with a true labor party in this country will Americans ever receive adequate health care and its associated benefits. Why is this the case?

GA: The heft of medical corporations and their monopolization of the market degrade public options. The United States remains the only advanced industrial democracy without universal health care. When defenders of the Affordable Care Act proclaim that the law is evidence of a right to health care, they do not deign to acknowledge the refusal of state governments to expand the program to the poor, a refusal concentrated largely in the South. If they do acknowledge the problem, they chalk it up to a deficit of the states rather than a national flaw.

The issue is, largely, constitutional. Courts have held that the US Constitution does not stipulate an explicit federal right to health care and have construed the Affordable Care Act’s mandate as a tax. Thus, the federal government’s ability to influence states through grant funding has become the only remaining lever to cajole them to expand Medicaid.

In other countries, such as Germany and France, labor parties have shaped political structures that ensure that labor remains a coequal partner alongside corporate shareholders and government officials. The incorporation of labor into policy-making has been the motor that powered the development of guaranteed health care in other countries, whether they be social democratic like Sweden or liberal like the United Kingdom or Australia. The rising tide of xenophobia and racism throughout Western Europe in reaction to the climate migrations of the northwestern Sahel, however, also threatens the solidaristic inclusion of Black and brown people, especially migrants, into these systems. The international picture and the vehement debates over the boundaries of citizenship in the Eurozone are another area where an analysis of racism cannot be divorced from class—or colonialism.

In the end, nations that are much less wealthy than ours have managed to make systems of universal health care work—and their systems remain understudied in the academy. It’s my hope that future work can expand our conversation outside of the parochial boundaries of the nation-state or the self-laudatory club of advanced economies.