Podcast / Tech Won’t Save Us / Apr 25, 2024

The Religious Foundations of Transhumanism

On this episode of Tech Won’t Save Us, Paris Marx is joined by Meghan O’Gieblyn to discuss transhumanism and Christian narratives of resurrection.


(Alain Pitton / NurPhoto via Getty Images)

On this episode of Tech Won’t Save Us, we’re joined by Meghan O’Gieblyn to discuss parallels between transhumanism and Christian narratives of resurrection, despite the fact many transhumanists identify as staunch atheists.

Meghan O’Gieblyn is an advice columnist at Wired and the author of God, Human, Animal, Machine.


Paris Marx: Meghan, welcome to Tech Won’t Save Us.

Meghan O’Gieblyn: Thanks so much for having me.

Paris Marx: I’m really excited to chat with you. You had this essay published in n+1 quite a long time ago now, though it’s rather new-ish to me, that digs into transhumanism and its relationship to religion. Of course you also have a book, “God, Human, Animal, Machine,” that I have had the pleasure to read. It deals with these really important topics that are coming back in this moment as we hear the talk about AGI, but also a real distinct shift in the way that Silicon Valley approaches some of these questions. So that’s why I really wanted to have you on the show. Just to get into this, can you explain to listeners what transhumanism and the singularity actually are? What do those concepts mean? Because people might have a general idea, but there might be some specifics that they haven’t caught.

Meghan O’Gieblyn: So, transhumanism is typically traced back to this niche subculture of West Coast Futurism that evolved, I guess, in the 80s and 90s. It was primarily a bunch of tech industry people who were interested in how technology could eventually help humans transcend into the next phase of evolution. So they were really interested in nanotechnology and cryogenics, and all of these very speculative technologies. They communicated largely via mailing lists in the beginning. I think probably the point at which that idea reached the mainstream was with Ray Kurzweil’s book, “The Age of Spiritual Machines,” which was published, I believe, in 1999. Kurzweil really popularized this idea for a larger audience, and his version of transhumanism is, I think, the one that has sort of become the most well known. He sets out this whole history of evolution through the lens of information: information emerged with the Big Bang, and then it became more complex as plants and animals emerged, and then human minds came about, and there was this much more complex form of information processing.

He believed that this process was exponential, that it was happening at an accelerating rate, especially now that we have developed technologies. So a lot of his projections about the future were based on Moore’s law, this idea that computational power is doubling, I think, every two years. And that eventually we were going to completely merge our minds with machines and become — he called it — post-human, basically. So he believed that we were currently transhuman, because we’re in the process of aiding and enhancing human intelligence and powers through technology. And once the singularity happened, which was this intelligence explosion, we were basically going to be post-human. It was really just, I’d call it, a work of secular eschatology that has a very transcendent religious arc to it — this idea that all of history is moving toward this moment of final transcendence.

Paris Marx: That’s great. I want to come back to that religious piece in just a second. But you’re talking about this emerging in the 80s and 90s, and Kurzweil’s book really popularizing it in 1999, which, of course, was the peak of the dot-com boom as well. I guess it would not be surprising that someone like Kurzweil is dreaming up and publishing this idea of the history of humanity being a history of evolution related to information, information becoming more complex over time and leading to more complex intelligences, at the same time as computers are becoming popularized and the internet is becoming more common. It seems like there’s a clear relationship between those two things. Would that be right to observe?

Meghan O’Gieblyn: Absolutely. I mean, anyone who remembers that era of the emergence of the internet — I was very young at the time — but there was this very utopian strain of rhetoric about the fact that we were all going to be globally connected. It was going to enhance productivity; it was going to democratize the world. I think Kurzweil and other transhumanists were really the highest form of spiritualizing that idea. Basically, the whole transhumanist ideology rests on the idea that information is sacred, that patterns of information are what’s going to outlast us. He was really interested in mind uploading, this idea that all of our neural activity is just patterns that we can transfer to a computer and we will be able to live forever. And if you’re a believer in that ideology, it makes sense that you share as much data as possible, that you contribute as much as you can to this future, through these technologies that we’ve since learned have much more mundane uses: to collect masses of user data and further advertising, all of these far less transcendent uses that this technology has been put toward.

Paris Marx: I think that makes perfect sense. I was really struck, as I was reading your essay and your book, by these ideas that Kurzweil put forward. For me, a lot of those things were new when I started to read about longtermism. I was like: Oh, they want to colonize these other planets and have all these post-humans in these kinds of computer simulations. That was the first time I had encountered a lot of these things, and then to read in your work that all of these ideas contained within longtermism, other than maybe some particular orientation toward them and the moral justification for why they should be pursued, really come out of Kurzweil’s work, and I’m sure some of the transhumanist thought before that. I hadn’t realized that those ideas were decades old already at that point.

Meghan O’Gieblyn: It’s funny, because when I was writing this essay about transhumanism, and also when I was writing my book, which was around 2019-2020, I kept feeling like: Oh, man, these ideas are so dated. Because I encountered Kurzweil in the early 2000s, and I was really obsessed with it; I was on the message boards and everything, hearing transhumanists talk about all these technologies. By the time we got into the early 2020s, it just felt like: Who really buys into this anymore? And then a few years later, all of a sudden, I’m hearing about longtermism. And I’m like: Oh, this is just the same shit, basically, dressed up under a different name. But it’s the same people. It’s these rationalist bros, like effective altruists. And it’s funny, because part of my book was about how this ideology about the future keeps getting recycled, and it keeps appearing and reappearing throughout history. But I didn’t expect it to come back so soon in this other, slightly different form.

Paris Marx: It’s at least good for making the writing even more relevant. But you talked about how these things come back again, and obviously you mentioned religion and how you encountered Kurzweil’s book in the early 2000s. Do you want to talk to us about why, for a little while, those transhumanist ideas really resonated with you, in part because of the religious foundations that they had, even though that’s often not acknowledged by the very transhumanists who espouse them?

Meghan O’Gieblyn: Definitely. That was a big part of it for me. I was raised in a fundamentalist Christian home; my parents are evangelicals. I was homeschooled as a child and was taught six-day creationism, and went to Moody Bible Institute when I was 18 to study theology for a couple years. I actually left that school after my second year, because I had a faith crisis and was starting to question the whole Christian ideology. I was living in Chicago at the time, on my own and working for many years, and identified as an atheist at that point. I had totally left the church. A friend gave me “The Age of Spiritual Machines.” And I read it and had my mind totally blown. I mean, the book was a best seller, but there wasn’t a lot of conversation about those technologies at the time. They were very futuristic. Again, he was talking about mind-uploading, nanotechnology, all this stuff that wasn’t really part of the mainstream conversation.

When I read the book, I was also reading a lot of these message boards online among transhumanists. What really appealed to me, and it took me a while to realize this, was that there was very much this millenarian Christian narrative that was very familiar to me, you know. I grew up thinking we’re living in the end times: Christ is gonna return at any point, we’re going to be raptured, the dead are going to be resurrected, we’re going to have these glorious new bodies and live in heaven forever. This was essentially what Kurzweil was arguing, but without any appeal to metaphysics or the supernatural. In fact, I think the reason it took me so long to realize the parallels between them is that all of the transhumanists were also vehement atheists and rationalists.

Even in the histories of the movement, most of the people who were writing about the origins of transhumanism refer to Nick Bostrom’s brief history of the movement (“A History of Transhumanist Thought”), where he very much traced it back to the Enlightenment and these very humanistic, secular ideas. So it didn’t seem as if there was a connection there. And it really baffled me for a while, too, because I was getting into the nitty gritty of these conversations about, for example, mind-uploading. There’s this problem about continuity of identity: If you were to transfer all of your neural patterns onto a supercomputer, or if you were even to replace every part of your brain with a neural implant, is your consciousness still gonna be there afterwards? Are you still going to be you?

So, these were basically the same questions that the early church fathers were debating in the third and fourth centuries, because for Christians, at least, the body was a really important part of the afterlife. This was what distinguished orthodox Christianity from Gnosticism, which thought that the afterlife was just going to be spiritual; we’re just going to be disembodied souls. So there’s this problem in early Christianity: Well, bodies die, and they decay, so what happens? How are all of these parts going to be resurrected, and how is the person going to be the same person in heaven? The transhumanists at the time were using the same metaphors as the early church fathers, too. One of the metaphors that Kurzweil uses in “The Age of Spiritual Machines” is this idea that consciousness is a pattern. He said: It’s like the pattern that you see in ripples of water in a river; the individual water molecules are always different, but the pattern is the same. That’s basically what consciousness is, and that’s why it can persist across substrates.

This was the very same metaphor that Origen of Alexandria used to talk about the Christian resurrection, where he said, basically: Our soul is a pattern, and our body is going to die and decompose, but the pattern is going to persist. This is how he reconciled Christianity with Greek thought. So these transhumanists are obviously not reading the early church fathers. How do these same metaphors and these same ideas keep recurring? Part of the fun of writing that essay, which I didn’t write until much later for n+1, was reading about this strain of Christian eschatology that I didn’t know anything about, because I had studied fundamentalist theology, which is very narrow. But there have actually been Christians at different points in history who believed that resurrection could happen through science and technology, basically. Going back to medieval alchemists who were trying to create an elixir of life that was going to give the person who took the potion a resurrected body, through Russian cosmism, which I wrote a little bit about, and the theology of [Pierre] Teilhard de Chardin. There is this lineage, basically, and I don’t know how in detail you want to get here, but there is a way in which it connects directly to modern transhumanism. They basically took these ideas from Christian theology, stripped them of all the metaphysics, and created this religion of technology out of them.

Paris Marx: I did want to get into that, because I find it absolutely fascinating. One of the things that really stood out to me, and I can’t remember if it was in a talk that you gave or in the book, is that you’re basically talking about how there is this clear history that you can see, but when the transhumanists talk about the history of the ideas that they’re drawing from, as you say, they talk about the Enlightenment, or they go back to Julian Huxley mentioning this for the first time. There’s this whole religious history discussing all of these ideas that they leave out of those discussions. I think it was in a talk you gave in Sweden where you basically said that, in part, that seems like a deliberate act: this history is left out of the stories that they tell because they don’t want this relationship with religion to be part of the thing that they are engaged with and that they are talking about. Again, as you say, because a lot of them identify as rationalists and atheists and don’t want to seem like they are engaging in these spiritual or religious ideas. Can you talk about that aspect of it, and, rather than Huxley being the first person who talks about transhumanism, how this is something that exists before that as well?

Meghan O’Gieblyn: So, I can’t remember if it was Bostrom, but it’s a widely published origin story, where people say the first use of the word transhumanism came, I think, in 1957, with Huxley using it in a talk. But I knew that the first use of transhuman in English was in a translation of Dante’s “Divine Comedy.” It’s in a passage where he’s describing the resurrection — it’s in the “Paradiso.” He’s talking about ascending into heaven, and he notices at one point that his body is transformed into this heavenly body, and he’s trying to emphasize the fact that this is a singular moment, that nothing like this has ever happened before. So, to emphasize that, he makes up an entirely new word in Italian, “trasumanar,” translated basically into English as ‘beyond the human.’ The line, I think, is: Words cannot tell of that transhuman change.

So, I learned this actually because I was talking to a bunch of Christian transhumanists, which is a whole other weird subculture, but they really latched on to this idea that transhumanism has this Christian origin. I was like: Okay, well, how did this word appear? I think it was 1814 when this translation of Dante came into English. And how did it get to Huxley from there? Actually, there was another use of it in the 1940s, I believe, before Huxley, by the Catholic theologian Pierre Teilhard de Chardin, a French priest and paleontologist who was really interested in evolution and in merging evolution with Christian theology. He had this book called “The Future of Man,” which was banned for many decades by the Catholic Church as heretical, where he laid out this vision of the future in which he imagined that technology would help humans reach the next level of evolution and actually bring about the resurrection prophesied in the Bible.

He had this image of, basically, all of the world becoming more and more connected through technology, and he’s writing in the 40s and 50s about these ideas. So he’s talking about radio and television, but he somehow saw that mass communication was making our minds start to become more connected and more merged. He believed that this was eventually going to create something called the noosphere, which is basically, a lot of people have said, a really prescient idea of the internet. So, human minds are going to be connected, and then this was going to lead to an intelligence explosion, which he called the omega point.

At that point, humanity was basically going to break through the time-space barrier and become divine. And this was going to be, basically, the resurrection. This is how we were going to become gods. And that idea, the omega point, is obviously basically just Kurzweil’s singularity. But it has very clear religious connotations. He used the word transhuman; he probably got it from Dante, I imagine. And he talked about how at that stage, after we become divine, we’re going to be transhuman. Teilhard was friends with Julian Huxley, and they exchanged a lot of ideas. So the thought seems to be that Huxley got that term from the priest but, again, just totally stripped it of its religious and theological meaning and created this secular idea of transcendence. Then from there, it’s a pretty clear lineage to Kurzweil and contemporary transhumanists and longtermists.

Paris Marx: It’s so cool to see that history and the relationships within it. Part of what made me really interested in the way that you were telling this, and in these connections that you were making, is that I also read David Noble’s book, “The Religion of Technology,” which goes into a lot of other aspects of what you were talking about, where there were these Christian theologians, or Christians, who really believed that science and technology would be a way to achieve these Christian prophecies or stories or however you want to talk about them. I was wondering if you’d talk about that a bit more specifically in relation to transhumanism and the real similarities that exist between the stories they tell about what our future is going to look like, how these technologies are going to develop, and how they are going to allow us to transcend this human body. Also, even with Kurzweil talking about this progression through history, how that relates to the way that this was told in a lot of Christian theology, and how those things relate to one another and how entirely similar they are, other than, obviously, looking at different means of achieving something that seems quite similar.

Meghan O’Gieblyn: There are many different versions of the Christian historical narrative, I guess, in terms of the end times and what that’s going to look like. The version I was taught in the fundamentalist college that I went to was called premillennial dispensationalism. It’s a very pessimistic view of history and of the future. The idea is basically that God reveals Himself in distinct dispensations across history, and that there are different ways in which we experience God through history, which is a little bit abstract. But the point is that, eventually, all of this, everything that’s happened in the past and everything that’s happening in the present, is leading to this redemptive narrative that’s going to happen. There was a big split in American Christianity around, I would say, the turn of the 20th century, if I’m getting the dates correct, and I think it became more pronounced after the world wars.

But there was this split between Christians who had a very pessimistic view of the future, which is where my family and the tradition I was raised in come from. This is the premillennial view, which is that we’re headed toward this apocalypse and tribulation, and God is going to destroy the world; there’s nothing we can do about it. And eventually, the reason we’re going to survive is because we’re the elect; we’re going to get raptured and get to go to heaven. But it was basically this really dark idea that history is on a downward spiral. And then there’s also this post-millennial tradition of Christians, which you see in the Social Gospel movement, that’s more concerned with making life better here on Earth: this idea that we can create, maybe not heaven, but this millenarian utopia here on earth if we try to live out the Gospel and actually help the poor and become socially engaged. That’s a much more optimistic view of history, and those ideas have always been kind of in conflict.

I would say that Kurzweil, and people who are techno-optimists, represent to me, in a weird way, sort of a post-millennial view, in that they believe technology is going to make things better, or they at least pay lip service to that idea. It’s going to extend our lives; it’s going to take away suffering on Earth; it’s going to create medical advances; it’s going to solve all of our problems. And then there’s this other very dark side of the debate about the future and AI, which is existential risk, which feels, to me, more like the pessimistic, apocalyptic view. So, it’s interesting how those ideologies are playing out and are similar in religious spaces and in secular spaces. It just feels like the same conversations to me. And it also feels like those two worldviews really feed off of one another, and at times prevent us from having more practical conversations about the real-world harms that the technologies are doing.

Paris Marx: Definitely. We’re always focused on these big, as you’re saying, existential risks, especially when we’re talking about AI in the past couple of years, where the focus is: Is AI going to bring this wonderful future for us? Or is it going to end the world? Rather than talking about: Okay, how are these technologies being implemented now? What are the effects of them, and what should we be doing to try to mitigate the negative impacts of that? But that gets distracted by the much bigger picture of whether the AIs are going to kill us or enslave us or something like that.

Meghan O’Gieblyn: Absolutely, or become God or whatever.

Paris Marx: I found it fascinating that Kurzweil actually reached out to you after you wrote that n+1 essay. Did you get more insight into how he thinks about this and his approach to it in that exchange? Or what more did you learn about how he sees transhumanism through that?

Meghan O’Gieblyn: It was so bizarre. I had this essay appear in n+1, which is this fairly small lit mag, and then it did get picked up in The Guardian after that, so I guess it reached a wider audience. But I was just checking my email one day, and I got an email from Ray Kurzweil. I was like: Surely this is a joke [Paris laughs]. But it was from the actual Ray Kurzweil; he had read the article. He said he really liked it. It was very strange. He was talking a lot about metaphor, which is odd, because I had been writing my book by that point, and I was thinking a lot about the question of metaphor and technological metaphors. And he said: Anytime we’re talking about something transcendent, we have to use metaphor, because it’s a reality that we can’t access; we would have to transcend time and energy in order to understand it, and our human understanding is limited. And he said, basically: Christians and other religious people are using pre-modern metaphors to describe the future, and I’m using technological metaphors.

I don’t think he said it exactly, but the implication is, we’re talking about the same thing; we’re just using different language. Which, to me, was really surprising and interesting, and was something that I had sort of intuited from writing about this history. It was also a little bit eerie, given my religious background. I started to get, even doing this research years later, a little bit conspiratorial, where it’s like: How is it that these same ideas keep coming up? Is it true that these biblical prophets and early church fathers somehow had this premonition of what was going to happen in the future through technology, and they just didn’t have the language for it? But if I’m understanding him correctly, I think that’s what he was saying, more or less: that we’re all just trying to describe something that’s going to happen in the future that we don’t understand yet. Then he offered to send me some of his books in the mail, and he sent me a signed copy of “The Age of Spiritual Machines,” which was kind of cool to get, but that was the only correspondence I’ve had with him.

Paris Marx: Do you still have the book? 

Meghan O’Gieblyn: Yeah, I do. And he inscribed it. He said, “Meghan, enjoy the age of spiritual machines.” But it wasn’t capitalized or underlined, or anything, so you could read it in different ways.

Paris Marx: That’s fascinating. I loved that when I was reading through the book. But I think that point around metaphor is actually really fascinating, and you dig into this a lot more in your book. Time and again, we encounter these metaphors for many things in the world, but in particular in these discussions of how the mind works or how the body works, where they’re often related to technology, or to the things that are really important to how we experience the world. If you think back to, like, the Industrial Revolution, we often thought of how we worked as being like a machine, with cogs working within us. And now we see these metaphors that treat the human, or treat the mind, as though it’s a computer or some digital technology, and we work in a similar way to that, which of course feeds into these transhumanist ideas. I wonder what you make of those metaphors and how they affect how we see ourselves and the world around us?

Meghan O’Gieblyn: I was interested in that question of: Where do we get this idea, something I think everybody just intuitively assumes today, that the mind is a machine in some way, or a computer? We defer to it in everyday language. If you say: I have to process something, that’s using metaphorical machine language. And like you said, these metaphors are very old. If you want to go back to ancient Greece, you have this idea that the soul is like a chariot. Then all throughout the Industrial Revolution you have these mechanistic metaphors for the body or the mind. The idea that the mind is a computer really emerged in the late 1940s and early 50s, with the emergence of neural networks, which were based on the brain. There was this idea that we could create these machines (they were called Turing machines) that were operating in the same way that our minds were.

The thing with any type of metaphor is that it goes both ways. So then, shortly after that, there’s this idea that the mind is also computational. In some of the early theories there was really this humorous idea that the mind functions according to binary logic and things like this. But it’s a useful metaphor, obviously; I mean, all of cognitive science and AI research has grown out of it. Part of the appeal of it initially was that, if you think about the mind as a machine, you can get away without having to talk about consciousness, or the soul, or subjective experience, basically, which is a hard thing to talk about from the third-person point of view of science. What’s interesting to me about it, though, is that there’s a dualism built into computers: we have software, which is just information; it’s disembodied, it’s not matter or energy. And then you have hardware.

There is this weird mind-body dualism built into it, where you can think about things like mind uploading, like: If my mind is just information, can it somehow be extracted from my body and travel to this other substrate? The irony for me, when I was writing about this, is that this metaphor emerged as a way to get around metaphysics and have this fiercely materialistic idea of the mind. But somehow the metaphysics snuck back in there, where if you look at Kurzweil or any of these people who are interested in these futuristic technologies, it’s almost as if information has become a metaphor for the soul. It’s something that’s going to persist after we die. It’s indestructible. It’s immortal. It’s very strange to me how that happens, but also very understandable. That dualism is a cognitive bias that’s very deep in us. It’s in children. Anthropologists have studied it in cultures all over the world. I think it’s natural that we extend that bias when we’re thinking about our technologies, also.

Paris Marx: Do you think that metaphor — that the mind or the body is like a machine or a computer — leads us to be more open to ideas like transhumanism, or this idea that a brain or a human mind can be recreated on a computer? It allows us, I guess, not to think so much about the biological barriers to that, because if we think that the brain computes and processes just like a computer, then it’s easier to believe that maybe we’re going to recreate the mind on a computer, or that an AI is going to reach the level of human intelligence and we’re going to be able to stick some computer hardware on the back of our brain and transfer it over to a machine. Do you think it leads us to be more open to these things, and do you think, in part, it misleads us into believing that something like this is possible at all?

Meghan O’Gieblyn: Definitely. I think this is the whole idea of functionalism, which is just that it doesn’t really matter what the material is, so long as the parts are doing the same work. Whether you have a biological brain or a computer, you can presumably have consciousness emerge out of silicon the same way it does from a human brain. I think most people intuitively feel like there’s something that’s missed there, but it also ignores the fact that we all evolved together through millions of years of evolution, and whatever type of intelligence is evolving in machines is not anything like the experience that we have of the world. In fact, there’s not really any evidence that there’s going to be any first-person experience in machines. And that’s another thing that comes up in a lot of these conversations about mind uploading, which, if you start reading, sounds great: yeah, we’ll be able to live forever in the cloud.

If you start reading between the lines, it’s like: Well, we can’t really guarantee that there’s going to be any subjective experience. It could just be this clone that looks and talks like you, and there’s not any experience there. Which is like: Okay, well, what is the point? People who want to live forever want to experience that. They don’t just want to have some avatar or clone of themselves that’s persisting after they die. Maybe some people do. That was a moment of disenchantment for me when I was really into transhumanism. I think that was the appeal for me; it was like: Oh, yeah, this is a way to live forever. But the people who are writing about this don’t believe the machines have consciousness. A lot of them don’t believe that humans have consciousness, really; they think that’s a superstitious idea. There’s not a clear way to talk about it. But I don’t know, what is the point of living forever if you’re not going to experience it?

Paris Marx: I don’t know if living out my life in the Cloud sounds so appealing to me.

Meghan O’Gieblyn: Me neither.

Paris Marx: I was really fascinated to learn, though, that a lot of Kurzweil’s interest in this seems to come from his desire to be able to recreate his father as a digital agent or an AI being or whatever you want to call it. I was particularly fascinated, because I’d never heard this before, that he has collected a bunch of writings and things like that that his father had, in the hopes that one day they’ll be able to be scanned and an AI chatbot, or something, of his father will be able to be recreated. Do you think that is part of what motivates his interest in these sorts of ideas?

Meghan O’Gieblyn: He’s been very transparent, actually, about the fact that he has a very personal motivation for this. There was a documentary about him many years ago called “Transcendent Man” where he takes the filmmakers into this storage unit that he has, where he’s kept all of his father’s things — his father was a classical musician, so he has all of his music, he has his letters, he has his diaries, and a lot of personal writing. He actually did, at one point, use this to create a chatbot of his father. His daughter, Amy Kurzweil, actually wrote a book about her dad called “Artificial.” It’s a graphic memoir, which is really excellent. She talks about interacting with this chatbot version of her grandfather.

And we’ve seen a lot of speculative startups that are claiming to be able to resurrect, in chatbot form, people who have died so that you can talk to them — talk to some version of them after they’re gone. To me, and I think to most people right now, it doesn’t really seem especially appealing, because part of the idea of communicating with the dead isn’t just to get information about what they would say to you. It’s about making some sort of interpersonal connection, and if the person isn’t actually there, I don’t know what emotional or spiritual benefit you’re getting from that. But I definitely think it’s something that we’ll see more of in the future; there’s probably a market for it of some sort.

Paris Marx: I definitely think so, and we’re already seeing it. I was reading a story the other day about how in China, I believe, they’re already making chatbots or something like that of the dead. And I’m sure it’s happening here as well, to a certain degree too. We’re already reading about AI girlfriends. So I’m sure AI dead people is something that some companies are working on.

Meghan O’Gieblyn: The next frontier, yeah.

Paris Marx: You were talking about how you became an atheist after this evangelical upbringing. I would imagine, in the time after you had done so, you were probably into some of the New Atheist writers and that movement as well. Would that be fair to say?

Meghan O’Gieblyn: I went through a phase where I was reading Dawkins and Hitchens and Sam Harris. It was part of my deconversion journey.

Paris Marx: I was right along with you. I became an atheist in the mid 2000s, so it was right at the time when that was at full steam. And I remember watching Bill Maher’s documentary, “Religulous,” I think it was called, and being all for it, which is embarrassing to admit today. But I was reading Dawkins and Hitchens and all that sort of stuff too. I feel like this movement happened at a particular moment in time. But even as New Atheism as a movement has faded, it feels like a lot of those figures have continued along and are now key parts of this right-wing movement that is increasingly popular in the tech industry as well. People like Sam Harris, and Dawkins comes up as well. Even though it’s not as explicitly championing atheism in the way that New Atheism was, it feels like a lot of these ideas still stick around, and these figures have become key figures in this anti-woke movement or whatever you want to call it. I wonder if you have any thoughts on how that has developed and how those ideas seemed like a foundation for some of what has come after?

Meghan O’Gieblyn: I admittedly have not followed them as closely. I know that they’re touchstones in the rationalist community. But it’s strange. I don’t know if this is actually true. My parents recently told me that Dawkins is now Catholic, is that true? Or that he’s like a cultural Catholic? [both laugh].

Paris Marx: Cultural Catholic, yeah! [Editor’s note: Dawkins is a cultural Christian, not Catholic.] There was an interview recently where he was talking about how he’s not a Catholic himself. He doesn’t believe in God, of course. But he’s really interested in cultural Catholicism, and it would be very disappointing to him if the churches in England went away; he still loves to go to them for the cultural experience, just not the religious experience. He was basically making the argument that if churches were replaced with mosques, it would be terrible to him, picking up on the Islamophobia that was always there.

Meghan O’Gieblyn: Right, which is the case with Sam Harris too. There’s a cynical part of me that just says they’re trying to capitalize on the way that public sentiment, on the right especially, has shifted in the last few years. I do think it reveals some bad faith about their whole project, from the beginning. I kind of became disenchanted with them even before that. Part of it is maybe because I can see how being very militantly atheist or rationalist doesn’t inoculate you against superstition. It doesn’t inoculate you against these really basic human desires to live forever, or to find some sort of technological transcendence. If anything, it puts blinders on you, in a way. Because again, a lot of these ideas about technology and the future feel to me very much rooted in wishful thinking, in a way that is precisely the kind of wishful thinking that they accused religious people of back in the early 2000s.

Paris Marx: No, I definitely agree with that. I was commenting, I think it would have been the end of last year, on a manifesto that Marc Andreessen had written, “The Techno-Optimist Manifesto.” And it really stood out to me in that moment; I feel like they’ve been increasingly pulling on faith in order to drive their technological project. In his manifesto, he was making all of these claims, basically saying time and again: We believe, we believe, we believe. There was no tangible foundation to this belief. It was just: We think that this technology is going to change the world in all these positive ways that I, Marc Andreessen, am setting out, and you should all have faith that we can achieve this. It very much felt like a religious argument, even though I’m sure Marc Andreessen would say that he’s an atheist, and he doesn’t believe in all that, and whatever. But it still seems to be drawing on these very similar ways of arguing and presenting things.

Meghan O’Gieblyn: That’s really interesting. To a certain extent, I think the whole rhetoric about AI rests on faith, and this idea of: Just trust us, we’re the smartest guys in the room, we’re going to do this, we’re going to deliver. But what are you going to deliver? Nobody can even articulate what it is that we’re trying to solve. It’s going to change everything. It’s the future. It’s these abstractions that do feel very much like religious rhetoric to me. And the people who believe in this feel to me also like spiritual acolytes, in a way, just in how they’ve completely gone all in and embraced this idea of the future. I mean, Sam Altman talks about the future in a way that’s very Manichaean: this is the way history is going, people who are on board are going to survive, and people who are not are going to be left behind. He said this a couple of years ago in a tweet, I think. To me, it’s really like this idea of a spiritual elect, the same thing my family believed, which is that, because we have honored God, the world is going to be destroyed but we’re going to survive, because we are the good ones. It seems like there’s a similar narrative there, which is: We’re on the side of progress; we’re on the side of the future, and other people are going to fall by the wayside, but we’re the ones who are going to make it into the next stage of evolution.

Paris Marx: I feel like that’s part of the reason why your writing really resonated with me. On the one hand, you’re talking about the circularity of these ideas, these ideas coming back again and again. But there’s also the fact that we, as a Western society, have gone through this secularization, so we lost this ability to look up and say: Okay, we’re trusting in God, we’re gonna go to heaven, we have these religious stories that we tell ourselves. And what fills that void? It feels like, in the tech industry, they’ve built their own theology, or religion, of techno-optimism, or whatever you want to call it, that gives them these narratives that give their lives meaning and allow them to feel that they’re contributing to this bigger project. We often joke about the cult of Elon Musk, and the people who are really behind him and just believe whatever he says, but it feels like there’s something broader in Silicon Valley where, as you say, Sam Altman is drawing on this, especially in the way that he talks about a potential AGI and what it’s going to be. It feels like they are building their own belief system, whether we want to call it religious or whatever, in order to get their followers and their believers to stick behind them.

Meghan O’Gieblyn: Absolutely. What was the phrase that Altman used when he was describing AGI, like magic intelligence in the sky?

Paris Marx: [laughs] I would believe that.

Meghan O’Gieblyn: I do feel like it’s filling a vacuum. We’ve seen so much secularization, and people, even people who are in institutional religion today, don’t quite believe it as literally as they used to. And there’s something really appealing about a literal future that’s going to enact a lot of those promises. If you’re going to make the case, which I’ve made, that these technological stories about the future are a form of religious eschatology, it is the crudest, most fundamentalist version of that. Again, this is the idea that we’re going to live forever; we’re going to be saved; everyone else is going to die. There’s this whole other tradition of Christianity that I really respect and that is not part of this at all, which is like: We are fallen human beings, we have limitations, and there’s something beautiful about that. Christ came to earth to take human form, to take part in our suffering.

I think the Social Gospel movement really grew out of that. That’s something that, I don’t know, to me seems like it could actually provide maybe a counterpoint. Maybe not necessarily a religious narrative, but just this idea of finding something positive in our human limitations — in the fact that we’re not going to live forever, that we’re going to die. There’s something tragic and maybe beautiful about that. To strip away that whole aspect of human experience, which so much of our history has been devoted to exploring, just feels very crude, like the most basic, childish version of Judeo-Christian narratives that you can come up with.

Paris Marx: Based on what you’re describing, you almost see that in some of the backlash to these ideas. You have the Altmans saying: We’re going to build the AGI and the AI is going to take care of everything and do all the jobs and whatever. And you have the Musks saying: Okay, we’re going to go colonize another planet and we really need to be focused on all this. The real longtermist ideology is that we sacrifice in the present, maybe we don’t pay attention to global poverty, or we let climate change get worse than it would otherwise be, because we need to be focused on this long-term future rather than addressing the here and now. There’s a growing number of people who say: That makes no sense. We should be caring for this planet and the people who are on it, instead of going after your wild technological fantasies. Which sounds a lot more similar to the Social Gospel thing that you’re talking about there.

Meghan O’Gieblyn: Definitely. I mean, longtermism, when I first started reading about it — even more than transhumanism, because they are ostensibly thinking about things like climate change, but also dismissing it — feels really similar to the pre-millennial Christianity that I experienced growing up, which was also not interested in climate change. Who cares? They’re like: God is going to destroy the world anyway. And there’s this idea that we’re going to invest all of our resources and all of our energy into these future human descendants who are not even going to be human; they’re going to be digital beings, I think, is the idea. It’s this really extreme utilitarianism. It does feel like it’s a way to escape historical responsibility, to put all of your energy into this afterlife that you’re not going to experience but that is going to make you a good person somehow, while ignoring the really real and more urgent injustices and problems that we’re living through.

Paris Marx: When you talk about that element of longtermism as well, one other thing I wanted to ask you about before we wrap up was Nick Bostrom, who is obviously one of the major longtermist thinkers coming out of that rationalist and transhumanist tradition. He wrote about this thing called the simulation hypothesis that people will probably have heard Elon Musk talk about, that we all live in a simulation and whatnot. In your book, you talk about that, in particular, in relation to creationism and, again, these ideas of religion, in a sense, coming back, where you’re not only thinking about how you’re creating new humans, or uploading the mind or what have you, but actually creating a whole new world that you have complete control over, that you are the God of. How do you see that simulation hypothesis and the way that Bostrom approaches it?

Meghan O’Gieblyn: I was also really obsessed with the simulation hypothesis for a long time. It is a technological creation myth, and it appeals to the same cognitive biases, I think, that we have as humans, where we tend to see everything as designed, as having a purpose and a telos. And it makes a lot of sense to people right now, for that reason. Even a lot of people that I’m friends with will casually just be like: Oh, yeah, of course we’re in a simulation, that makes total sense. I think it’s also appealing because you can think about an afterlife. Maybe if we’re just software, we’re not going to die and be done with ourselves. Maybe we’ll be extracted and put in another simulation at some point. It also makes the world seem like it has meaning and purpose, that it was designed by some sort of benevolent engineer. The funny thing is, it doesn’t really explain anything in terms of where the world came from. Because presumably, whatever civilization created us, where did they come from? You can keep going back and back and back. So it has definitely become very popular, thanks to Bostrom and a lot of other people who’ve written about it.

Paris Marx: You even see pop culture depictions of it, like that “Black Mirror” episode, “San Junipero,” where these people basically die and then are able to live in the simulated world for as long as they want, I guess. These things tend to come up time and again. To wind down our conversation, I wanted to ask you: after looking into this history, after looking into transhumanism, and the metaphors that we have around technology and the mind and the body, and all these sorts of things that you have explored through your work over the past number of years, should this make us think differently about the technological stories that we’re told, and where this is all going?

Meghan O’Gieblyn: The big thing for me is just realizing how much these projections about technology, even when they’re rooted in data and “hard facts” — I’m putting scare quotes on that — come from a lot of wishful thinking, and a lot of inherited cultural narratives that seem to keep finding their way back into the stories that we tell about the future. I’m old enough now that I’ve seen a few cycles of technological utopianism: with the rise of the internet, with the rise of social media. Is this going to topple autocratic regimes? There’s always this very utopian, and I think also very spiritual, dimension to those stories. The people who are telling them believe them to some degree, but they’re also used to get us on board. And again, to get us to share our data, to accept these technologies as somehow predestined or foreordained, that this is where history is going.

If you don’t believe in God, and you don’t believe that there is a telos to history, you have to take responsibility for the fact that we are building these technologies as humans; we are. We have a choice. We’re making these decisions. I think the hardest thing for me is just watching the people who are building these technologies treat them as though they’re inevitable, as though they’re just the next stage of evolution. And then also the people I talk to who are not necessarily thrilled about these technologies, who just complacently accept them because this is the future, this is where everything’s going. And it’s like: No, we don’t have to accept this fatalistic story. If it’s true that we’re really directing our evolution or directing technology, then we have choices to make.

Paris Marx: I think it’s so important to recognize these histories of these technologies and where so many of these ideas come from, especially in this moment, because of the way that these people who rule the tech industry are using them and are deploying them in order to try to carry out particular futures. So that’s why I think your work is so important and why it was a real pleasure to have you on the show today. So thanks so much, Meghan.

Meghan O’Gieblyn: Thanks so much for having me.
