A Warning From the Past About the Dangers of AI

As far back as 1958, Nation writers were grappling with the prospect of ‘artificial brains,’ particularly when placed in the hands of the military.

Richard Kreitner

October 20, 2025

The February 2, 1985, article by Paul N. Edwards was illustrated by Randall Enos.

The concept of artificial intelligence, if not the precise phrase, first appeared in The Nation in 1958, in a review of the Hungarian-born mathematician and physicist John von Neumann’s The Computer and the Brain. Published a year after the author’s death, the book sketched out a then-novel analogy between the functioning of early computers and the human mind.

The Nation’s reviewer, Max Black, a Cornell philosophy professor, praised von Neumann’s earlier formulation of game theory as “one of the intellectual monuments of our time.” Had he lived longer, Black lamented, the scientist “might have constructed an even more important theory of computing machines. Such ‘artificial brains’ may eventually transform our culture, but our theoretical grasp of their underlying principles is still relatively crude and unsystematic.”

Black did not mention von Neumann’s ties to the military-industrial complex (a term coined three years later by President Dwight D. Eisenhower). A fierce anti-communist, von Neumann had played a critical role in the Manhattan Project and later advocated for the development of intercontinental ballistic missiles large enough to carry hydrogen bombs. Had he lived longer, von Neumann would almost certainly have set his own supercomputer-like mind to figuring out how “artificial brains” could best be put to military use.

That was precisely The Nation’s concern when it next addressed the perils of artificial intelligence. In a 1983 article, “Previewing the Latest High Tech,” Stan Norris, a researcher with the Center for Defense Information, wrote about cutting-edge tools being developed to give the United States an advantage over the Soviet Union. The CIA was working on getting computers to “process information and formulate hypotheses based on it.” Other projects aimed to build robots that could replace human beings on “twenty-first-century battlefields.”

“As these examples show,” Norris concluded, “new technology continues to create new forms of terror. The technological arms race spirals on, adding to the danger of war by miscalculation, and diminishing rather than increasing national security. Weapons have outrun politics. The search for a degree of common security lies not in the laboratory but at the negotiating table.”

Two years later, a graduate student named Paul N. Edwards detailed efforts by the Defense Advanced Research Projects Agency to effectively “place a key element of the nuclear trigger in the ghostly hands of a machine.” That was both foolish and dangerous, Edwards argued:

“The idea of an artificial intelligence more logical and reliable than our own is a seductive one, especially if we believe it could protect us from a nuclear Armageddon. Sadly, it cannot. Computer systems, delicate and programmed by humans, who can never anticipate every conceivable situation, will always be untrustworthy nuclear guardians. The solution lies, as it always has, in reducing the danger of war by putting weapons aside and expanding the possibilities for peaceful interchanges.”

As Michael Klare shows elsewhere in this issue, the prospect of using “artificial brains” to replace human judgment and responsibility continues to hold a seductive appeal—creating new forms of terror, and diminishing rather than increasing national security.

Richard Kreitner is a contributing writer and the author of Break It Up: Secession, Division, and the Secret History of America's Imperfect Union. His writings are at richardkreitner.com.
