Podcast / Tech Won’t Save Us / Feb 15, 2024

How Interfaces Shape Our Relationship to Tech

On this episode of the Tech Won’t Save Us podcast, Zachary Kaiser explains why data isn’t an accurate reflection of the world.

The Nation Podcasts

Here's where to find podcasts from The Nation. Political talk without the boring parts, featuring the writers, activists and artists who shape the news, from a progressive perspective.

How Interfaces Shape Our Relationship to Tech w/ Zachary Kaiser | Tech Won't Save Us
by The Nation Magazine

On this episode of Tech Won’t Save Us, Zachary Kaiser is on the show with Paris Marx to discuss the power of tech interfaces, why data isn’t an accurate reflection of the world, and why we need to explore democratic decomputerization.

Zachary Kaiser is an Associate Professor of Graphic Design and Experience Architecture at Michigan State University. He’s also the author of Interfaces and Us: User Experience Design and the Making of the Computable Subject.

Advertising Inquiries: https://redcircle.com/brands

Privacy & Opt-Out: https://redcircle.com/privacy




Paris Marx: Zach, welcome to Tech Won’t Save Us.

Zachary Kaiser: Thanks. I’m so honored to be here. I just have to say I love your podcast so much. I’m so excited.

Paris Marx: Thank you so much, I’ll always accept that compliment. I think the listeners will start to be like: Is he just forcing everyone to compliment him in exchange for coming on the show?  

Zachary Kaiser: It’s legit!

Paris Marx: So, you have this book that I read, which I thought was fascinating, called “Interfaces and Us,” and I think that people recognize that the technologies we use are shaped by capitalism, by this socioeconomic reality that we live in. But why is it important to look at the interface level of what is going on with these gadgets and apps and things like that rather than just the broader level that we would usually look at? 

Zachary Kaiser: Such a good question. For me, an ahistorical answer, in terms of my career, would reference a Marxist materialism. I didn’t really know about that when I started this project. I was someone on the left who came out of industry. I don’t have a PhD, I have an MFA. Not to say people with MFAs can’t learn about this stuff, but I was in industry. There are so many things I didn’t know about when I came to academia, so part of this book is really itself a compendium of stuff I’ve learned along the way. I think that engagement, the direct engagement, with interfaces is to me part and parcel of understanding the relationship between technology — if we can put it in the singular, even though we acknowledge it is multiple — and politics, and between political economy and society. Also how it impacts us as individuals.

I think that not only is it important to have that, as opposed to a broader engagement — you need a case study. You need to be able to say: Oh, well, I can assert, for example, that the interface to my Fitbit, or whatever, is making me a particular way. Or I can assert that, even more broadly, “quantified self” applications are making me feel a particular way about myself, or even that the aspiration to 10,000 steps is. But we have to really, directly engage with those objects, including the technologies underlying them. We were just talking about the wonderful work of David Golumbia; one of the things I always appreciated about his work is the really specific engagement with actual technologies themselves. Having worked in that space and being able to understand how those things work, there is sort of, what I might call, an exegesis of those technologies, like the first model of the Fitbit in my book, and there’s a reason for that, which is trying to live out the materialism to which I subscribe. I think that there’s a strong case to be made for that.

The last part of my answer to that question is, we only experience — we meaning, and again, I use the term we, and it’s feedback I got on the manuscript as well, and I think it’s really important feedback — when I use we, I tend to mean those of us in the Global North. I tend to mean those of us using these technologies as consumers. I might say we as a designer, and I’ll try to qualify that statement. But I think it’s super important to say that when I say we, there’s a geopolitical, socio-political, and political economic dimension to that that is worth acknowledging. So when those of us who are users of these technologies use them, we only experience them at the level of the interface. We don’t open them up; we don’t build them, unless we’re designing them. And even then, designers often don’t know how the data about someone’s body is captured. They’re not examining the accelerometers or any other sort of sensor, posture sensors, or whatever people are using these days.

Paris Marx: I found it really interesting, as you describe the importance of those interfaces, and that being how we interact with these technologies. In the book, you talk about how those interfaces are also a place of idea transmission: these particular ideas that the designers and the companies behind these technologies have for how we should interact with them, the ideas for, I guess, even how we should exist, are then transmitted to us through the way that this product, or this app or what have you, is actually designed. I feel like when we come to these technologies, we are often not thinking about that either. It’s just like: Okay, how do I use this? How do I get what I need out of it? How do I make it work for me? But in the process of that kind of interaction, there is also a shaping that is going on.

Zachary Kaiser: Oh, 100%. And I would say also, for those of us who engage in the critical side of it, because of the culture or the relationships that we’re steeped in, I think we have some assumptions that allow us to skip over the interaction with the interface to the critique that we engage in. But for me, as a teacher, particularly teaching in a pre-professional undergraduate program where my students are going to go and get jobs doing this thing, there’s this instrumentalizing function of design, where we just sort of do the thing. And we make it user-friendly, whatever that means. But taking a minute to stop and look at the interface, and look at its rhetorical dimension and its ideological dimension, is an important teaching moment. And it’s part of the reason I wrote the book, or part of the reason I conceived of it in the way I did. I saw a lot of literature engaging with the specific technologies underlying the interface. Then there’s a lot of literature talking at a higher level about the political, economic, and social dimensions of UX or of technology. But then there’s that middle ground, which is literally where people meet this digital version of themselves, and which a lot of folks were addressing only tangentially or just a little bit. So that’s part of the reason I wrote the book, too.

Paris Marx: But I think it’s an important thing to consider, because obviously, a lot of my work is thinking bigger about the political economy of these technologies, not so much going down to the actual site where I’m clicking on an app on my phone and seeing exactly how it works or whatever. So, I think it’s an interesting lens through which to consider the way that these technologies then affect us or impact us or have us start thinking about ourselves. I remember, and maybe this is jumping ahead a bit, but we can circle back: last year, when it was the New Year, I was like: I’m probably going to exercise more, blah, blah, blah. And one of the first things that came to mind when I thought about wanting to exercise more was: I should probably get an Apple Watch, so I can track it. And then, immediately, because I think about these things all the time, I was like: Why do I consider that wanting to exercise more means I need an Apple Watch to track it? This goes completely against everything I talk about all the time. I did not end up getting the Apple Watch.

Zachary Kaiser: 100%! I wrote an essay about dream-reading technologies for Real Life. And the intro to the essay is basically me being like: I’m trying to sleep better, so I got all these apps to help me sleep better. Or like: I have the smartwatch; it’ll help me sleep better. And I’m like, wait a second. What am I doing? So yeah, 100%.

Paris Marx: I would say that this is a very common thing. We’re trying to improve something in our lives, or trying to address something, and the immediate assumption is: Let’s find the technology or the app that is going to help me achieve this goal that I have. We don’t even take the step to consider whether the technology is going to help; there’s just that implicit assumption: Of course, everything is technological. Everything is app-ified. This is going to help me do whatever it is I want to do.

Zachary Kaiser: Also, on top of that, even if it helps you do what you want to do — and I think this is one of the things I try to address in the book, that there’s an important distinction here — there’s the accuracy question: Is this accurate? That’s one of the things that got a lot of criticism, especially in the early days of quantified-self technologies. A lot of the criticism hinged on accuracy and, more recently, bias. Those are really important, but I also want to point out that even if those things are accurate, they are then shaping your behavior in order to enhance the feeling of accuracy, in order to enhance the idea that you are nothing but a computer and that you can be quantified through these technologies that measure your behavior. So, part of what I want to ask is: Is that desirable? Is that a way that we, as a society, or as people or as communities, want to live? And there’s not a binary answer to that: It’s accurate; it’s not accurate. Technology can help us, and it can help you be healthier. It can help you work out; it can help you track certain things about your body.

One of my best friends in the whole world is diabetic, and I will tell you, for certain, his life has been enhanced so greatly by medical technological advances. And I don’t want to discount that when I say, flippantly: Computers were a mistake. I have to acknowledge that there have been things that have materially changed the lives of folks for the better. At the same time, the underlying impetus for those things, and the manner in which they’re distributed — who has access to them, who doesn’t, the political economy of patents — all of this other stuff comes together to create a much different world than the world that that one individual person might occupy and be benefiting from. There’s just a lot there, so I think we have to try to unpack some of those things, in addition to the accuracy question and the bias question. There’s that deeper level, which is more complicated, if we’re honest about it.

Paris Marx: Totally. But that nuance is key: recognizing that that nuance exists, and that it’s not just that every digital technology is terrible and we need to get rid of it. There are ways that technologies can be used very concretely to enhance people’s lives, and it’s totally fine to embrace that, like someone who is diabetic and who really benefits from having this ability to track blood sugar levels and all this kind of stuff. Whereas the idea that every single one of us should have an Apple Watch strapped to our wrist, tracking everything that we do and all of our bodily functions all the time, strays more into the arena of: Wait, do we really need that? And the point that you picked up on there, that there’s this ideological component to it as well — which I think is not so much held by the wider public, I don’t know, maybe you disagree with that, but is definitely held by the tech industry — is that the human is computable, can be recorded, or can be quantified with data, and that that data is an accurate representation of the human. I think that is an ideological statement that I don’t agree with, not something that is rooted in reality, even though the people at the top of the tech industry, the people pushing these ideas, definitely think that that is how the world works.

Zachary Kaiser: Again, to me that’s one of these things that I’m actually writing a paper about with Gabi Schaffzin. Shout out to erstwhile Tech Won’t Save Us listener and web developer!

Paris Marx: Exactly, maker of our website! [laughs]. 

Zachary Kaiser: We’re working on a paper right now called “Should We Scare Our Students?” The question that we ask in the paper is the title, but we then drill down to the question of whether or not we are giving ourselves too much credit as educators in saying we should scare our students, and also whether we are giving our students enough credit. And by that, I mean just like what you were saying: this is an ideological thing. The belief that we are fundamentally computer-like, computable and computing, in some ways; that’s an ideological belief. But Stuart Hall knew this back in the ’80s. When he writes “The Great Moving Right Show,” he talks about why Thatcherism takes hold in Britain. He smartly says: There is a kernel of truth in Thatcher’s ideas that maps onto the material experience of people’s lives. So it’s not the sort of Engels false consciousness. Hall criticizes his colleagues; he says: How is it that everyone I’m surrounded by can see through this screen of ideology, whereas all these other people in the world are just living in false consciousness? That’s a really self-important way to live in the world. Stuart Hall’s right. That’s not how things work. Ideologies proliferate, in part, because they’re mapping onto at least someone’s material circumstances. So, if we take a look at how your Apple Watch may help you work out, or how certain other innovations produce certain personal goods or social goods, it would be disingenuous to say that, I don’t know, we’re all operating in false consciousness.

I think that we have to acknowledge that there’s a material validity that we experience when we engage with the products of Silicon Valley. Also, I’m curious about the degree to which those in Silicon Valley have adopted this. You’ve got the PMC, the professional-managerial class; they’ve probably adopted this ideology so much more thoroughly than the people who actually own the means of production. Musk and Zuckerberg, those guys don’t care whether we’re computers or not. They’re making a bunch of money; it doesn’t matter! That’s different from somebody who’s getting paid $500,000 but is still a wage laborer. They might be getting paid a bunch of money, but they’re getting paid to internalize and maximize the value of that ideology, from them on down. I think there are some interesting things there. Also, with my UX students: when my students go out into the world, in some ways they have to internalize the ideology in order to make the products and services they design usable. So again, that nuance is really important. I think that’s where an intervention like [Ivan] Illich’s work, and [Aaron] Benanav’s work, becomes really important. And we can talk about that later; I figured that’s maybe on the docket for later.

Paris Marx: It’s absolutely coming up. I do think it’s interesting you talk about the distinctions there, because I think I would say that some of the people at the top also very fervently believe it because they believe themselves to be computer-like, in a sense.

Zachary Kaiser: Also very true. This sort of maximizing optimization is very much in line with the Effective Altruism crowd. That’s exactly what those folks are doing. And of course, it’s garbage, just to be fully clear here.

Paris Marx: Absolutely. I see the statements that Elon Musk makes where he’s talking about his brain as though it’s a computer all the time. And so there’s clearly something going on there. 

Zachary Kaiser: And that’s why everybody needs to read David Golumbia’s “The Cultural Logic of Computation.” If there’s a book that demonstrates the ideological dimension of believing that your brain is a computer, I don’t think there’s a better one than that.

Paris Marx: That’s a great recommendation. I also think that’s a good opportunity to pivot to talk about some of the bigger ideas that you present in the book. Obviously, you’ve mentioned the term “quantified self” a few times. I think that one’s probably pretty obvious to people: this idea that we are going to collect all this data on ourselves, we’re going to quantify these aspects of ourselves. But you also talk about something called the computable subjectivity. That’s probably a bit of an academic term. Can you break it down for us and talk about what computable subjectivity actually means, and what it means for us?

Zachary Kaiser: Sure. Like all good academics, I felt like I had to invent something in order to continue to advance my career [both laugh]. No, just kidding! I mean, there’s certainly a grain of truth to that, and we can talk about academia at some point if we need to. But John Cheney-Lippold also had a huge influence on this book; there’s a shout out to him in the acknowledgments. Another touchstone for the genesis of this book was John’s article, “A New Algorithmic Identity: Soft Biopolitics and the Modulation of Control,” which came out in, I want to say, 2011 — it’s old! It is an awesome, awesome article, and he expands on it in his book, “We Are Data.” One of the things that I was thinking about is the linkage between the ways that we’re construed as data and something like what Golumbia is talking about, which is the ways in which we’ve come to be understood as functioning like a computer, and how those link up to produce a subject, a political subject specifically, that operates computationally, and that also is made up of nothing but information that can be read by a computer.

So, I distinguish that from just being made up of data, in the sense that data can maybe be understood to be everything that’s computationally legible. But I would suggest that there’s metadata and all sorts of other stuff that are properties of the information that we feed into computers, to which they then respond. I think that we, as a sort of Global North, Western hegemonic society, have come to understand ourselves through the products and services that we use. These tend to be computational and tend to have interfaces to them; we’ve come to understand ourselves as both computing and legible to computation. In other words, computable. We do so not purely out of ideological commitment, but precisely because of the convenience and ease, and the way that these things allow us to work more or to be more efficient or productive. There’s a material benefit that we derive from using these things in a particular way, and it only serves to reinforce that as an ideological proposition.

Paris Marx: As part of that, you also talk about how there’s this assumption that the data that we collect is an accurate representation of the world around us, and that there’s no barrier there: it’s not that there are some things that can be collected and some things that cannot. Whatever can be collected, whatever can be made legible by these computers, is also the world, basically. Can you talk about the problems with that understanding of how we see the world around us, or how these technologies position how we should see the world around us?

Zachary Kaiser: There’s a depth to that idea that I think is really important. The idea that there’s a one-to-one correspondence between data and world is one that we often see manifest in our daily lives — whether it’s counting how many steps we’ve taken, or assessing data about our neurological state, or quantifying data about pain. All of those technologies are built on this particular assumption of a one-to-one correspondence between data and world. However, I think the people who built those technologies probably have a more sophisticated understanding of that relationship, because they understand the sensors and the actual translations that are required to take a phenomenon that is qualitative, or human, or environmental, and to translate it into machine-readable data. One of the things that interfaces do is collapse all of that, especially interfaces to consumer products. So I’m not talking necessarily here about scientific instruments, but more about the basic, everyday UX things that you and I, as consumers, experience.

All of those technologies are driven by the impetus for ease of use, because if users don’t adopt them, they won’t be profitable. Although, as you’ve addressed on your podcast a number of times, Uber’s profitability is questionable. So maybe it’s just about garnering as many users as humanly possible in order to artificially inflate the value of your company. Great, that’s what they’re doing. But either way, it’s not to the benefit of the company to reveal any of the translations that are required to get from world to data and back. And what that does is reinforce an idea that our world is made up of nothing but data. You see this assumption all the time. One of the reasons I was disillusioned with Actor-Network Theory, and we were talking about ANT earlier, was a presentation I saw where someone talked about data trails or something like that, and they asserted that data pre-exists humanity, that data has always existed. There is no better example of that ideology seeping so deeply into someone’s core beliefs about how the world works than that.

Paris Marx: It’s wild to think that even before there was the ability to collect the data, the world was just data. I think the world is biological and stuff like that, actually, which is quite distinct from what you’re talking about. What comes to mind, though, based on what you were saying, is how these interfaces, these technologies, are designed so that we look at them and assume that what they’re showing us is a one-to-one relationship, and that the marketing promises being made by these companies are accurate reflections of the capabilities of the devices. The ideology that we’re talking about is embedded in that and helps convince us that those things are accurate. You get these marketing narratives that the Apple Watch can help you with your fitness and make you more healthy, and all this kind of stuff. But then we get reports that studies actually find that when you have an Apple Watch, you might not be as physically active as if you didn’t have one. Or that there are problems with false positive readings of health signals and things like that, which go against this idea, or this narrative, that the companies want you to believe about what these devices actually do.

Zachary Kaiser: The thing that I always think about, and I find this when I’m talking with my students too, is that the ideological water in which we swim suggests that although there are currently issues with the accuracy of this technology, don’t worry, it’ll get better. Just like with facial recognition, same shit. It’s just like: Oh, yeah, don’t worry, we’ll fix the bias. We’ll iron all this out. It’s like Elon Musk: he killed how many monkeys or whatever with the Neuralink testing? It’s like: Don’t worry, don’t worry, I got this. This is such a common assumption, that technology just progresses, that there’s a teleology; it’s always moving forward. And I think that’s one of the most dangerous parts of even reporting, even popular journalism, about the failures of technology: they tend to ascribe a progress to the technology. Present company excluded, of course [laughs].

Paris Marx: Of course, of course. 

Zachary Kaiser: I mean, we see it all the time. Oh my God, I’ve been reading, so do you remember the newsletter Protocol? 

Paris Marx: Yeah, absolutely. 

Zachary Kaiser: Protocol was dope! And they went under, I think towards the beginning of the pandemic; they folded. Politico took over their newsletter list, and Politico has this newsletter called Digital Future Daily. Let me tell you, the number of times I’ve seen something like: This is bad, but [blank]! It’s pretty incredible. And I think that’s characteristic of a lot of reporting on the failures of technology, especially things like the accuracy of certain features of the Apple Watch, or of facial recognition, or facial expression recognition technology. So, to me, the buffer against that is: even when it gets better, and even when it gets more accurate, who is materially benefiting the most from that enhancement in accuracy?

I can certainly tell you that the app that tracks your poop getting more accurate is not benefiting you nearly as much as it’s benefiting whoever the capitalist is that owns the platform on which your poop-tracking app is built. It’s not the developer of your poop-tracking app; it’s actually the capitalist class that owns the material infrastructure on which it’s built. Part and parcel of the conversation around the accuracy and bias of data collection and extrapolation has to be the asymmetries that are baked into the system, regardless of the accuracy. I think that’s super important to think about, and to me, that’s very much a political economy question.

Paris Marx: That’s an interesting point, because on the one hand, you can think about how the interfaces are designed and how things are put together to make us think about these products, or these technologies, in a particular way. But then that can’t be divorced from the larger commercial pressures at play, where people’s fridges have computers in them now, and microwaves do too. I’ve seen, I think it’s Kohler, advertising a smart toilet now.

Zachary Kaiser: Oh my god, that ad where the toilet is in Marfa! It’s like in the middle of the highway in Marfa. And Marfa has become this weird place where Louis Vuitton does stuff or whatever. And Kohler is like: Yeah, we’re putting a toilet in the middle of the highway in Marfa. What is going on? It’s a toilet! I need it to flush. It’s so crazy to me. My students and I talk about this stuff all the time. It brings up a bunch of fascinating points, again going back to this: Who is materially benefiting from these innovations, and how? Our bar for what makes our lives better is so low, it’s crazy. It’s like: Ah, my fridge has a computer in it. But a lot of people in the United States of America can’t buy food. It’s crazy to me that a fridge with a computer in it constitutes innovation, when what would be really innovative is if we could feed everybody. It’s insane.

My students and I talk about this all the time. I had a student the other day — and this is in my interaction design class, so it’s very pre-professional, not a whole lot of critical theory stuff going on — and she was like: My sister’s car has this huge screen in it, and whenever I drive it, I can’t tell if I’m turning up the heat or not, because I have to look at the screen. But I don’t want to take my eyes off the road. And I was like: Yeah, that’s bad interaction design. In your 1990s Volvo or whatever, you could be watching the road and know by feel what dial you were grabbing. There’s physical feedback that you’re getting. There’s nothing of that now, just these flat glass screens. It’s like, dude. But I think that when something like that constitutes innovation, there’s a whole history of ideas that is required to make us believe that it is innovative in the first place, and to make us believe that we should then consume and purchase the thing with the giant screen, as opposed to the thing with the buttons and the dials, which worked perfectly fine.

Paris Marx: I guess the point I was getting to was that, obviously, the interface is an important aspect of this, but then there are these other pressures that force these interfaces into these products in a way that people probably aren’t really asking for. Who is really asking for Alexa in their microwave so they can say: Alexa, pop my popcorn, or whatever? These are things that I don’t totally understand, in the sense that I don’t understand the desire. I understand the commercial pressures that are pushing us in this direction; the issue is that this is being framed as progress, because now the internet, and now a screen, is in virtually everything. Yet someone’s fridge doesn’t last as long, because it can fail more quickly now that it has been redesigned in this way. It just seems so broken and backwards.

Zachary Kaiser: I mean, that desire has been manufactured, and that is the fault of what design has become under capitalism. I’d be remiss if I didn’t suggest that part of what I do is problematic in that way. Some of my students will go and work in advertising agencies, and that’s not to say that those students don’t need to make a living, that they don’t need to buy groceries and pay rent and pay back their massive student loans, which is horrifying, because public education should be free. But at the same time, part of why we adopt those desires, part of why someone says to themselves: Oh, isn’t it amazing that Alexa can pop my popcorn for me, I love that. I can walk into the house and be like: Alexa, turn the lights on; Alexa, pop my popcorn.

Part of that is the way that extreme level of “convenience” has been construed as convenient; in a lot of ways, it’s not. I tried to allude to this a little bit in the book. That idea has such a long history, and there’s a really good book that I heard about from Cameron Tonkinwise. Shout out, Cameron Tonkinwise. He posted a photograph once, ages ago on Twitter, of a couple of books he was reading, and one that caught my eye was called “The Value of Convenience: A Genealogy of Technical Culture.” It’s by this guy, Thomas F. Tierney. It came out in 1993, and it’s a wild book. It’s really good, and it tells the story of how we came to see convenience as something to be valued, and then, at the same time, how capitalism twists the pursuit of one’s calling to become about your role in the economy as opposed to your role in religious practice. These things come together in very strange ways. It’s a really cool book.

Paris Marx: I love finding books like that from the 90s, and even before, that feel so relevant today, because it shows you how this notion that we have from Silicon Valley, that we're in this new era where everything has changed, is not true at all. Actually, people have been criticizing these very same things for a long time. But picking up on what you were saying there: one of the things that you write about in the book is that these technologies, and the way that these interfaces are set up, push us to think of ourselves more as individuals acting for our self-optimization, and that there's a very big difference between individualism under neoliberalism and simply individuality. Can you parse that out for us, and the consequences of these incredibly individualist approaches that are encouraged by these technologies?

Zachary Kaiser: There's a great term which I think I read in one of Zygmunt Bauman's Liquid books; "Liquid Modernity," I think, is where he references it. But it's a term that came from the sociologist Ulrich Beck: biographical solutions to systemic contradictions. And I love that term because it captures the way that, through the confluence of political, economic, and technological forces over the last, call it, 70 years, since the beginning of the Cold War, we have come to see problems as being solvable exclusively through individual action as opposed to collective action. Rarely do we see instances of collective action doing the kinds of things that would probably benefit a lot of us a lot more, and a lot faster, than an app designed to help you track your electricity consumption. It's not bad to track one's electricity consumption, but ending reliance on fossil fuels would probably be a lot easier through mass action, as opposed to some of these individual solutions. But it's interesting.

I mean, we see this everywhere. It's not just fossil fuels. It's not just consumerism. There are so many things like that. My institution has a partnership with Apple's Developer Academy, and this came up in one of their first projects. This is a very complicated situation, and I don't want to oversimplify, but I think it's an important example, so I'll say it here while acknowledging that it's nuanced and complex, and there's a lot there. Sexual assault on college campuses is a really, really big issue. And so one of the things that students did in one of their first projects with Apple, I don't know if it was the Developer Academy or an iOS lab or something (we've been partnering with Apple on a bunch of stuff), was develop a safer campus app, which is really cool, and it has these alert buttons and all this kind of stuff. But one of the things that I talked about in class around this project is that that app, on its own, is not going to stop rape culture; it's not going to end patriarchy. It's not going to change the fundamental structure of oppression that leads to young men believing that women are objects. That's not going to change.

So, to me, even if a technological intervention at an individualist level is useful, it cannot, on its own, be a solution to what is a systemic issue baked into the fabric of society. It's so much easier, though, right? It's so much easier to say: Oh, we can fix this with technology. You were talking to Thea Riofrancos (huge fan). She talks about carbon capture technology and electrifying vehicles, and how the American solution to carbon emissions is to electrify everybody's car, even though individual transportation is actually the issue. And so I think her work is also a great touchstone for thinking about the relationship between biographical solutions and systemic problems.

Paris Marx: I completely agree. Obviously, I'm a big fan of Thea's work; can't wait for her next book, so I can have her back on the show. One of the things that stood out to me as I was reading the book, and I'm sure most of it was written before the AI hype that we've been in for the past year, is that there are a lot of connections between what you're talking about in the book and what we've been seeing. On the one hand, there is this view that human intelligence can be replicated in these machines, that the human brain is basically a computer, so we just need to build a similar computer in these data centers that can then have conversations with us or whatever. Then there's the other piece of this, where these companies are developing very specific interfaces through which we interact with these tools, interfaces that are designed to, again, make us believe they have capabilities that they don't necessarily have. So, I wonder how you reflect on how these things that you've been writing about apply to what we've been seeing with generative AI over the past year or so?

Zachary Kaiser: It's such a good question. I've been thinking about it a lot, partly because I just finished working on an installation in a museum that's about AI, loosely speaking. Speaking of things that go way back, of people predicting and thinking about this kind of stuff: the installation is called "Blessed is the Machine," and the reason it's called that is because that's the mantra of the citizens of the global, subterranean world society in E.M. Forster's short story, "The Machine Stops." It was published in 1909. It is an incredible story, and I encourage folks to go check it out. But I think in a lot of ways, the hype around AI, particularly as it relates to the interfaces to those products and services, is that those interfaces, again, collapse all of the things that are required to make the thing appear sentient, or to make it appear as though it knows whatever it is that you're asking it. And I think an important piece of the puzzle is that the interfaces to those products intentionally lie; they intentionally conceal certain things, whether it's the resource needs of those technologies or the notion that they are sentient in the first place.

There's a great term, stochastic parrots. Basically, these are predictive things; it's predictive algorithms that are producing some of these inferences that then get spat out. The AI thing is just hard for me to talk about without getting super angry, because for the vast majority of it, we don't need it. We don't need it in any way. We're making images that are "artworks" that are super derivative. Who cares? We're doing things like writing poems that are whatever. We're doing literature reviews, and a literature review is really important, great.

But why do you need the AI to do the literature review in the first place? Because you're under a whole ton of pressure to publish a bunch of journal articles, so that you can go and get tenure, or so that you can get that next job, or that next research job. There's a political-economic issue at play that makes these things appear necessary, when actually, in a world where we would somehow democratically determine what we want our technologies to do, I guarantee you that none of that shit would be on the list. Food, food for people, would be really cool. There are a lot of things that would be great, that would be much higher on my list than AI that can make a bad painting.

Paris Marx: Well, don't we want digital paintings, digital artists? I just think it's such a complete joke, and listeners of the show, and you, will know how frustrated I am at the past year and all of this generative AI that we've been subjected to. But I think that what you were talking about there gets to another important point in the book, where you told us about this computable subjectivity: the way that the design of these things makes us think about the world and ourselves in this particular way. But in the book you talk about how, certainly, this is a capitalist problem, and we've been talking about how this is rooted in capitalist political economy, but it's not solely a capitalist problem. If there were this optimization, and this degree of the quantified self, within a socialist society, this would also be a problem. Can you expand on that?

Zachary Kaiser: So, I think there are a couple of dimensions to the computable subjectivity. And for me, part of the danger in adopting this idea of oneself is the danger of falling into the trap of social optimization in general, and what that optimization looks like. What does it include, and what does it leave out? And so I offer the example of Cybersyn in the book, and part of the reason I offer it is not to critique the project of Cybersyn, or to critique what Allende was doing; I just can't imagine what the world would be like now if the coup hadn't happened. But there's a very insightful critique of that project, which is that, in some ways, any technology that seeks to optimize something is going to leave out something else that it's not optimizing for. I had a little post-it note on my computer for many years that just said: The value of the sub-optimal. And to me, there's something to be said for the things that fall outside of the optimization project, or outside the apparent necessity to optimize certain things.

I also think that there is a flattening or totalizing dimension to it, and this is maybe a more recent thought for me. To be honest, most of this book was finished in 2021, so things change. I've also been thinking a lot about communal autonomy and self-determination. And again, this just comes from the idea that if you're lucky enough to be an academic, you should be a lifelong learner, and our ideas change over time. One of the things that I've learned a lot about recently, to the credit of scholars particularly from the Global South (so I'm not saying anything new here), people like Arturo Escobar, folks who have written about the Zapatista communities for a really long time, and my friend Marissa Brandt, who's a Science and Technology Studies scholar at Michigan State, is communal autonomy and self-governance, and democratic self-determination, because that looks different in different places across the globe. To me, that means that computable subjectivity, which is inherently a flattening of human experience, because the data that is required has to be standardized in particular ways, runs counter to that kind of self-determination.

Imagine how different computing would look if it was autonomously self-determined by communities across the globe. Who knows? It'd be super crazy; I have no idea what it would look like. Maybe we would be quantifying different things. I literally have no idea. It's hard to even imagine. But I think the hegemony of the projects that have all come together, the patriarchal, capitalist, Western, neocolonialist projects, has created a situation where not only do many of us believe ourselves to be computable, but we believe it in a very specific way that has optimization for our economic roles under capitalism at the fore. Even if we don't behave rationally, so, like game theory. I talk a little bit about game theory. I had my Adam Curtis moment in this book. I love Adam Curtis's films, but I think there is something to be said for the different ways that those can be a little conspiracy-theory-ish.

So, I don't know, there's just so much to think about when we adopt this idea of ourselves, the way that it emerges through the Cold War, the way that people become able to be modeled like nation-states in a nuclear war: implacable enemies, and we all come to be seen as that. So, it results in this sort of bizarre individualism. But we don't behave rationally, and that almost doesn't matter. That's the other thing that I think a lot about with this book: I had to navigate very carefully the counterclaims, for example: If you are a computer, you will behave rationally, so why do people do certain things, or why do they behave in certain ways? And computable subjectivity manifests itself in material, everyday existence in different ways. You can't necessarily say that there's a one-to-one correspondence between a computational model of someone and how that person behaves, even if they believe they are fundamentally the same as that computational model. So I think, as an intellectual thing, that's a very tricky space to navigate. On the one hand, I'm criticizing the game-theoretic notion of people, but at the same time, I'm not necessarily suggesting that they behave rationally.

Paris Marx: No, that's all good. What you were saying about the totalizing nature of these technologies, how there's this one particular idea of how computing should work, how digital technology and the internet should work, and all this kind of stuff, and how it has been pushed out globally, is fascinating. You see these discussions occasionally, where maybe people push back against the ideas of nudity held by Apple or Facebook, and how those companies push that onto the rest of the world from the United States. Or how these technologies arrive in certain parts of the world and the languages there, the writing systems and things like that, simply don't work with the way these technologies have been designed and set up. So, there's this clash between a technology created in a particular space by particular types of people and then expected to become applicable to everybody, because Silicon Valley has to have this globalized nature. They have to take over everything. They can't just be for Americans; they have to be for everybody, because that is what works for the market value of these companies, and all this sort of stuff.

Zachary Kaiser: There's such an incredible history of scholarship around these ideas, like critical development studies, and Arturo Escobar is one of the key figures in the history of that field as well. Arturo Escobar; James Ferguson, who wrote this book, "The Anti-Politics Machine." I went to Malawi with a colleague of mine who was very kind to put me on a grant that she got. I don't know why. She was just like: Zach, you can do stuff. Stephanie White, an amazing scholar of food systems. And we went to Malawi, and I was reading James Ferguson's "The Anti-Politics Machine" on the plane. And one of the things he talks about is the symbolic dimension of particular material aspirations.

So, he talks about homes in Lesotho, and the way that the construction of a house reflects your geographic and environmental situatedness. But what happens is that the aspiration to Western wealth translates into the construction of houses that are totally inappropriate for the environment and the space. I think you could say something similar about computation, about UX, and about technology writ large: there's an aspiration across the globe, because we have made it aspirational, to be a certain way with technology, to live with technology in a certain way, and to use it in a particular way. That then reinforces all sorts of ideas that would never take hold if we were able to engage in communally autonomous decision-making about our lives and about the way we interact with technology.

Paris Marx: I think it's interesting to see Aaron Benanav's work come up in your book as well, touching on some of these ideas about the democratic nature of deciding how society should be organized, how technology has a role in that, and these kinds of questions. To end our conversation, I wanted to talk about something that you get into at the end of the book. Obviously, you frame the potential responses to this through the lens of design education, because that is your focus, but I think those same ideas can be broadened out far beyond that. One of them is, of course, the reform scenario, which focuses on making sure that these kinds of critical understandings of how these interfaces and technologies work spread among designers, but we could say among people much more generally.

But then there's also a scenario that you position as more revolutionary, more of a Luddite approach, we might say. And that is to consider the role of actual decomputerization: of pushing back on these ideas that we need to be constantly expanding digital technologies into every aspect of our lives, that we need to be collecting data on virtually everything. So can you talk to us a bit about that reframing, and how we begin to think about the role of technology, of computers, and of the internet in our lives in a very different way than what this tech industry is trying to get us to believe?

Zachary Kaiser: That's a really important question. I have to say, it's funny: when you're in the midst of writing something, you're excited about it, and I would hesitate to even use that term, revolutionary, now, in part because maybe things are even more dire than they were a couple of years ago. To me, Luddism is a hallmark of what we need to do in order to figure out what our relationship to technology is going to be going forward. And that means understanding the ways that technologies become exploitative of the working class, trying to figure out how to eliminate the technologies that are exploitative of the working class, and embracing a democratic approach to the development of technology. Doing that in the classroom is very difficult in a pre-professional design program. However, I do think there are opportunities that require broad solidarity, and there are one-on-one moments. I'll just offer a couple of examples here.

I taught a special topics class this past summer called "Design for Degrowth." It really changed how I think about teaching design, and the students in there were incredible to go there with me. Basically, we sat for a few weeks and talked about what the shape of our community, let's call it East Lansing, Michigan, this little college town, would be like if we democratically determined what constitutes socially necessary labor. So, we divvied up that labor according to aptitudes and proclivities, and then considered the rest of the time we would have, which would probably be a lot of free time. How would free time look different if we were to commit to living well with less, to complete decarbonization, and to an understanding that any consumptive behavior runs up against the basic physical law of entropy? There is no such thing as renewable energy in that sense: anything you make to capture energy requires an expenditure of energy.

If we acknowledge all that stuff, what would the shape of your free time look like? How would you freely associate with people in different ways? And the responses in the conversation were really something. I had a student make a video about what the local news would be like, and it was really funny; it was about someone's chicken. Another student, totally different, designed a bunch of interfaces and basically built out the UX for the fundamental social infrastructure that we would use to do that democratic determination of socially necessary labor, or to allocate people's time and how they would decide. How would they express interest in things they wanted to apprentice in, versus things they're already good at? We took up examples of child care and elder care. And we talked about the shape of the university system, like: what would we really do in a design class? The vast majority of what I teach now is explicitly to augment surplus value for the capitalist class, full stop. I mean, that's most of what I teach my students, so what would design school look like? So we explored the shape of that.

Then I had a student who was really into golf. This was an amazing experience. She said: Oh, my God, well, how would I golf? And I was like: Well, you like golf; golf is fine. I'm not opposed to golf. But it's really unsustainable, if we think about it, and I was very candid with her about that. And she was like: Yeah, you're right. So, she basically came up with a plan for how different autonomous communities that had golf-interest subgroups could connect with each other and democratically invest in the building of a communal golf course that would adhere to certain requirements and would not use fossil fuel infrastructure. It was a really interesting exploration. Again, the design outcomes varied widely across the group; they had different majors, and they were interested in different things.

I think we spent so much time unpacking what this world would look like, because it's just so hard to wrap your head around, that we didn't do enough of the design work. And that's on me; happy to take the blame for that, but these students were incredible. And I'm hoping to be able to present about this with them, to get this idea out there more. So to me, that's a vision. And part of what I didn't address so much in the book, maybe, but that I think is important, is that a compelling vision of living well with less, a compelling vision of a different way to be in the world, is lacking, because advertising is so good at convincing us that the way things are right now is the best we've got.

Paris Marx: I think it's a really interesting example. When you talk about being able to do different things throughout the day, having more power to choose different things, it certainly brings to mind hunting in the morning, fishing in the afternoon, rearing cattle in the evening, and criticizing after dinner. Pulling from the Karl Marx quote, of course.

Zachary Kaiser: Totally, and I think Benanav is right in that kind of canon. He's spot on with that stuff. Shout out, Aaron Benanav.

Paris Marx: Absolutely. But what you're talking about there is obviously having this democratic input to decide what production should look like, what society should look like, and how technology should be used, rather than an assumption that everything should be ingested into the machine, given over to the machine, its algorithms sorting everything out, while we supposedly live in this kind of utopia where we don't have to work. The democratic approach is not only more realistic, but also one that much better fits the politics that many of the people advocating these things tend to subscribe to, or at least claim to subscribe to.

Zachary Kaiser: So true. The last thing I'll say about that is the reason I position this in a design setting, that class about degrowth. The reason it's so important to me that this comes from the space of design is the visual. Jacques Rancière (and I might be reading him wrong; I'm not a philosopher) talks about this idea of the distribution of the sensible, meaning that the world we experience basically reveals certain things and conceals certain things, and that revealing or concealing has some determining impact on our participation as political subjects. To me, it's really important to reveal something else, to use the visual media of design, which is the lingua franca of everyday life now.

This is very much Henri Lefebvre as well: to use that visual language to put forth a totally different idea of what things could be like. And this is different than critical design or speculative design, which has a dystopian flavor to it; oftentimes, they're speculating on things that have already happened in a lot of other places in the world, just not to white Europeans. So, doing something like this in a design setting, where we're talking about degrowth and about the democratic determination of all of these things (as Illich says, the democratic determination of design criteria for all tools), and putting forth a vision of that, particularly in the visual space, can be useful, because it suggests an alternative that, without that visual dimension, we might be lacking. And again, maybe that's just my predisposition as a designer.

Paris Marx: No, I think it makes sense. It brings to mind Elon Musk saying, when they were planning the Cybertruck, that the future needs to look like the future. And so for him, the future is this dystopian vehicle that's huge and dangerous and doesn't even work particularly well. But we can also think about the future looking very different, if we make very different decisions about what it should be. Zach, really great to speak with you. Great to dig into this. Thanks so much for taking the time.

Zachary Kaiser: Thank you so much for having me, Paris. This was awesome.

Paris Marx

Paris Marx is a tech critic and host of the Tech Won’t Save Us podcast. He writes the Disconnect newsletter and is the author of Road to Nowhere: What Silicon Valley Gets Wrong about the Future of Transportation.
