Episode 5: Transcript

Transcription by Keffy Kehrli

Annalee: [00:00:00] Welcome to Our Opinions Are Correct, a podcast about science fiction. I’m Annalee Newitz. I’m a science journalist who writes science fiction.

Charlie Jane: [00:00:09] I’m Charlie Jane Anders. I’m a science fiction writer who obsesses about science.

Annalee: [00:00:15] And in this episode, we’re going to talk about fear of robots. And fear of AI. And machine learning and all the other things that are related to this issue. It’s huge in pop culture right now. Westworld just came back. It’s huge in science culture and technology culture right now, and so we’re going to look at the ins and outs of what it means that our latest fear is something that actually doesn’t exist quite yet.

[00:00:41] Intro music plays.

Annalee: [00:00:47] So, as I mentioned earlier, Westworld is back. It’s a show that has always been kind of about fear of robots, but also sympathy for robots. But, we’ve also had a spate of movies like Ex Machina which are exploring, again, kind of sympathy for robots but also why they are fucking terrifying.

Charlie Jane: [00:01:06] Chappie.

Annalee: [00:01:07] Oh yeah. Chappie is a good one. And then there’s been these sort of terrible AI movies about everything from just AI being scary to things like the movie Transcendence with Johnny Depp where he uploads his brain into a computer…

Charlie Jane: [00:01:24] Right, yeah, no, he decides to become immortal because he’s dying of cancer or something and so he uploads his brain and unfortunately turns evil and they have to destroy the internet.

Annalee: [00:01:33] It’s… exactly.

Charlie Jane: [00:01:35] We had to destroy the internet to save it.

Annalee: [00:01:37] Yeah…

Charlie Jane: [00:01:38] I can identify with that.

Annalee: [00:01:39] To save us from Johnny Depp. Not to be confused with the ‘90s movie Virtuosity, which was also about a serial killer who uploads his brain into the internet.

Charlie Jane: [00:01:50] Russell Crowe.

Annalee: [00:01:52] Russell Crowe. That’s right. Uh. Mostly, what I remember from that film is his butt, because it was like a naked butt scene. Anyway, so, the thing about these television series and films is they’re also coming at a time when people who are developing AI and people who are thinking in academia about AI are also sounding the alarm about what might happen next if we have robots who have human-equivalent artificial intelligence, or what’s sometimes called strong artificial intelligence. Over at Oxford, in England, Nick Bostrom runs a group on ethics and AI where they think a lot about how AI actually represents a kind of existential threat. And then here in the States, Elon Musk, entrepreneur extraordinaire, who’s sending us to space with SpaceX and is drilling holes in the ground with the Boring Company, but also has a bunch of automated factories, is also—

Charlie Jane: [00:02:52] And self-driving cars.

Annalee: [00:02:52] And self-driving cars, which are kind of the cutting edge of AI as we know it, and robotics as we know it. He has helped publicize the idea that, again, we need to be fearing AI. That developers need to be building safeguards far beyond what we’re already building into typical kind of computer systems. So, it’s a—I hate to use the word zeitgeist, but it’s the zeitgeist. We’re all fucking terrified, apparently, of this kind of blur of robot and AI. So…

Charlie Jane: [00:03:29] Yeah, and a self-driving car just killed someone for the first time, recently. That was the first fatality inflicted that we know of by a self-driving car. And we’re using drones that are semi-autonomous to kill people on a regular basis now. They’re not fully autonomous yet, but they will be at some point, and that’s a thing.

Annalee: [00:03:46] Yeah, and the military is certainly working on that. And there are, of course, robots that have been deployed in the military for search and rescue missions, and for reconnaissance missions. And so, this is becoming a part of everyday life for people who are in the military.

Charlie Jane: [00:04:02] Yeah, and actually there’s been a spate of stories about soldiers in Iraq and Afghanistan and elsewhere, who became super emotionally attached to the robots they worked with. They had bomb disposal robots or, you know, other kinds of robots that were helping them in their jobs and if the robot became damaged, they’d be like, we have to fix it, it’s our friend. And even though it was just a mechanical device—it didn’t really have a personality—they would get super attached to it. But, at the same time, the notion of robots being integrated into the military complex does mean at some point we will have autonomous killing machines that will be out there, probably in our streets as well as the streets of foreign cities. They kind of dealt with this a little bit in the RoboCop reboot a few years ago. They’ll be out in our streets, choosing when to kill civilians, and if you think that human police officers are trigger happy when faced with a civilian that they think might, according to some rubric, be a threat, just imagine once it’s a robot making those decisions and how controversial that’s gonna be.

Annalee: [00:05:03] Well, I’m kind of robot-identified. And so, I always think that if we had, say, robotic police officers, or autonomous cars, that they would actually do a better job than people just because they would have the ability to respond more quickly to inputs. Maybe they would have less bias, although of course now we know that algorithms can be just as biased as people, so, maybe it would just have robotic police officers running around doing racial profiling even more than humans do. I don’t know. So, let’s talk a little bit about the history of the robot. Both in science fiction and reality. Where does the term come from, Charlie?

Charlie Jane: [00:05:43] I mean, the term robot comes from a play by a Czech playwright named Karel Čapek, I’m probably mispronouncing that name, called R.U.R., which stands for Rossum’s Universal Robots, and robot comes from the Czech word robota, meaning work. And basically these… they weren’t actually even mechanical creatures in the original stage play, they’re just kind of genetically engineered or grown in vats…

Annalee: [00:06:08] They’re grown in a vat, so they’re biological, which I always thought was kind of interesting, because you see a little bit of that throughout early scifi, you have sort of biological robots early on. Kind of like Frankenstein, I guess.

Charlie Jane: [00:06:25] Yeah, and then, you know, it really took off with things like Metropolis, Fritz Lang’s movie in which somebody creates a robot duplicate of his daughter and she kind of leads an uprising.

Annalee: [00:06:35] She leads a worker uprising. A human worker uprising, and it’s actually kind of a weirdly anti-union film because the evil robot is the one who is fomenting the worker revolt. And she is evil, that’s very clear in the film. And the good Maria, not the robot Maria, of course cares deeply for the poor, but she would never lead them in a revolt. So, this theme of revolution, whether you think it’s a bad revolution or a good revolution, is kind of built into these early robot stories, because R.U.R. is also about a robot uprising. And then I think it’s in the ‘50s that we start to kind of see a diversification in what robots might be. I mean, we get the classic era of Isaac Asimov’s I, Robot, where he invents the three laws of robotics, which we are still thinking about today. Including in laboratories, people are thinking about how would we have something like the three laws to limit robots’ behavior. And you get things like, also in the ‘50s, you get things like Robby the Robot, like a cute, friendly robot. So, you get the first glimmers of nice robots, which we start to see a lot leading up into things like Star Wars, where we have like R2-D2 who, when I was a kid, I really wanted to be R2-D2.

Star Wars Clip: [00:07:55] R2-D2 whistles questioningly.

C-3PO: Don’t get technical with me.

R2-D2: Disdainful whistle followed by a low squawk.

C-3PO: What mission? What are you talking about?

R2-D2: Squawks and whistles.

C-3PO: I’ve just about had enough of you.

Annalee: [00:08:06] That was a robot who was cute and nice and very competent and so. Not at all scary.

Charlie Jane: [00:08:12] Yeah, and it’s actually interesting to note, as a side note, that in the middle of all these scary, alarmist movies like Ex Machina, or whatever, that are about robots that can kill us, we also have seen Star Wars making a huge comeback and the hallmark of Star Wars is that the robots are always freakin’ adorable. You always have robots like BB-8, and R2-D2 as well. And K-2SO, from Rogue One. The robots often steal the show, they’re often the best characters. And they’re just the cutest. They’re super adorable and loveable, and they’re a huge part of what people love about Star Wars. And it’s interesting that that’s coming back right at the time that we’re having all this anxiety. But yeah, cute robots are huge in classic scifi. Doctor Who had K-9, which was a robot dog.

Annalee: [00:08:55] Oh yeah. The original I, Robot novel, by Asimov, interestingly is not about fearing robots. It’s actually, if you go back and re-read it—it’s a set of short stories about robots, and the frame story is about a robo-psychologist who is treating a bunch of robots that have gone crazy because of the fact that these three laws of robotics actually don’t work. They contradict each other and they cause problems, and each of the stories in I, Robot is about a problem caused by contradictions between those three laws that humans have imposed on robots. So, we’ve literally been driving robots insane by trying to program them to obey us.

Robotics Clip: [00:09:41] The first law is as follows: A robot may not harm a human being, or through inaction, allow a human being to come to harm. Number two: A robot must obey orders given it by qualified personnel. Unless those orders violate rule number one. In other words, a robot can’t be ordered to kill a human being. Rule number three: A robot must protect its own existence, after all, it’s an expensive piece of equipment. Unless that violates rules one or two.

Annalee: [00:10:17] The three laws are all about putting humans before robots, obeying humans. Human survival is more important than your own, which are all not the kinds of things that a real living organism would believe. Because real, living organisms—they want to survive. Their survival is paramount, so—

Charlie Jane: [00:10:33] Are you saying that your cats would not place your survival before their own?

Annalee: [00:10:38] Well, I have programmed them extensively. Yeah, no, they would be—they would be indifferent to my survival as long as they kept getting food. But the thing that’s interesting is that the sort of popular Will Smith film, I, Robot, which is only extremely loosely based on Asimov’s work. In fact, loose is in fact a loose term for how loosely it is based. But, it’s all about how robots become evil. So, it’s like the story—this set of short stories that were really nuanced, that were all about kind of how humans are actually kind of being cruel to robots and driving them mad, and how robots are kind of vulnerable to humans and vulnerable to our programming, has turned into this really simplistic story about how robots are gonna gang up on us and destroy us all. So, that kind of tells you what has changed over that 50 year period or 60 year period. Robots have gone from being something that is almost totally fictional and that we kind of have a nuanced understanding of, and now that we actually have robots kind of appearing in our factories around the world and in our cars. Or now that our cars are becoming robots, it seems like fear has become a much more common response to them.

[00:11:56] Let’s talk about some of the themes in modern science fiction that deal with why we should be afraid of robots.

Charlie Jane: [00:12:06] The most classic robot narrative is the kind of SkyNet or Battlestar Galactica one where basically they want to kill all of us, or they want to enslave us, or some combination of the two. They want to, you know…

Annalee: [00:12:21] Turn us into batteries like in The Matrix. I actually—I know that’s completely dumb and it doesn’t make any sense on any kind of like, level where you’re trying to be realistic. But I like the metaphor that we’re being turned into batteries for robots.

Charlie Jane: [00:12:35] It’s a cute metaphor. I mean, there are a lot of ways that they could have done it—would have been more plausible—

Annalee: [00:12:40] Yeah, cute is a good word for it.

Charlie Jane: [00:12:41] –Like the idea that they were using people for, like, software processing or whatever. Or that there was some reason they needed—

Annalee: [00:12:47] Yeah, we were like hard drives or something.

Charlie Jane: [00:12:47] Yeah, there was some reason they needed, like, a bunch of meat brains that had a certain level of sophistication. Because they could just use chihuahuas as batteries and not have to worry about humans. And like, what kind of Matrix would they make for the chihuahuas? Like what would the chihuahua Matrix look like? It would just be like, you know, endless like running around.

Annalee: [00:13:05] I can tell you what it would not be. It would not be frickin’ Taco Bell ads. The chihuahuas would be like, fuck that. We’re so sick of the way you’ve been representing us in your pop culture. It’d be like humans in little hats being like hey, eat some human meat.

Charlie Jane: [00:13:21] Time for the chihuahua uprising. Anyway, so, you know, there’s like the classic thing where they want to wipe us out, and sometimes their motivations for doing so are kind of sketchy, it’s just that, well, you’re there and they figure that you humans will at some point try to kill us if we don’t kill you first. Or, we need resources that you have access to. We want to convert the entire planet earth into just raw materials to build our robot army. We don’t need an atmosphere, so why do you need an atmosphere.

Annalee: [00:13:51] Yeah, I mean... but I think lurking in almost all of these stories is that early R.U.R. story from the early 20th century, which is, these are robots who have been built to be slaves and they’re having a slave uprising. Because I can’t think of a single one of these films where it’s like, there’s no reason why. Other than maybe in Terminator, where it’s actually—there is a reason, because SkyNet... I mean, it’s a little bit like WarGames, where SkyNet really has only been programmed to do one thing, which is to kill, and when SkyNet kind of comes to life, that’s its only motivation.

Terminator Clip: [00:14:25] In the panic, they tried to pull the plug.

SkyNet fights back.

Yes. It launches its missiles against the targets in Russia.

Annalee: [00:14:34] Currently, AI researchers who are worried about AIs kind of becoming conscious, that’s one of the things they worry about, is that, well, if we’ve programmed it to do only one thing… Like, what if we’ve only programmed it to build paper clips and it comes to life and it becomes super intelligent and it just turns everything into paper clips. So, basically SkyNet is the like… nuclear war of paper clip paradigms. Then there’s also, I feel like there’s the fear that robots are just gonna surpass us. So, they won’t crush us, exactly, they’ll just sort of turn us into pets. We’ll be kind of these feeble, sad creatures. I feel like Wall-E is in some sense falling into that category.

Charlie Jane: [00:15:15] A little bit, yeah.

Annalee: [00:15:16] Because, I mean that’s partly self-imposed. I mean, humans turned themselves into pets.

Charlie Jane: [00:15:20] Kind of.

Annalee: [00:15:21] In a way. But the robots are the only good characters left, and they’re kind of caring for the humans in a certain way. And also caring for the earth.

Charlie Jane: [00:15:29] Yeah. I mean, there’s always the classic sort of like, I Have No Mouth, and I Must Scream thing where like an evil computer traps the last surviving humans and basically just tortures them forever, kind of. But also, yeah, just that the robots decide to keep humans as like, yeah, an example of like, we’ll just keep a few of them around to show what they used to be like. Or, you know…

Annalee: [00:15:48] And then there’s the more immediate fears. Sort of, near future fears. Like, robots will take our jobs and we’ll be poor. I think that that’s certainly what’s happening in Blade Runner. Even in the new Blade Runner film, which, I think, critics were really divided on. But one of the things that film did really well, was highlight the fact that automation had created this mass unemployment and mass poverty on the planet, and we see—I mean, we see these kids living in a giant garbage dump which used to be the city of San Diego. And that’s what humans are doing, is they’re sorting garbage. And that’s a good job, probably.

Charlie Jane: [00:16:32] Yeah, I mean… obviously some of it is just that we live in a world of haves and have nots, where the breach between the two is widening all the time, but also, there’s just the fact that, yes, in real life, robots are taking our jobs. There’s a whole category of jobs that people have today, in 2018 that probably ten years from now, a lot of them will be automated. A lot of these just like, really annoying repetitive tasks that we need people to do right now, will eventually just be done by machines because you don’t need an intelligent, adaptable creature, with all these complicated joints to be able to just move a box from one side of a room to the other over and over and over again.

Annalee: [00:17:13] I was recently talking to the economist Brad DeLong who writes a lot about automation and technological revolutions and he was saying, first of all, that the early industrial revolution was sort of about imagining what if people could be like robots. The goal of it was kind of to make people into robots, and then just replace them with robots. So we’re just kind of now playing out the logic of like the late 19th century’s industrial revolution. But, he also said that, you know, it’s not just about people’s manual labor being replaced, as many of us know. Plenty of algorithms are around to replace symbolic analytic labor. The kind of stuff that supposedly only humans could do with our magnificent brains. Brad was tormenting me by pointing out that journalism is of course going to be soon done entirely by algorithms.

Charlie Jane: [00:18:06] I mean, there’ll be a bot, basically. Like, not a robot, but a bot that’ll just write the best headline and it’ll be like the most exciting clicky perfect headline that everybody will love.

Annalee: [00:18:15] Or, it’ll just be written by some kind of psy-ops group in Russia. And that’s—I think that leads to another fear, which is that robots will become so good at doing this kind of mental labor that they’ll start hacking our brains and controlling our minds just the way we tried to control their minds with the three laws of robotics. So maybe we deserve it.

Charlie Jane: [00:18:38] Yeah, there’ll be the three laws of humans. You know, you must obey anything a robot tells you to do.

Annalee: [00:18:43] Uh-huh. Always put the robot’s survival ahead of your own.

Charlie Jane: [00:18:47] Yeah, exactly.

Annalee: [00:18:47] If you just fill in the word corporation for robot, I think you’ve got like every cyberpunk novel ever written.

Charlie Jane: [00:18:55] I mean, who was it that was saying that fear of the AI is basically the fear of large corporations? I think it was—

Annalee: [00:19:00] That was Ted Chiang who just wrote a great article about that which you can google. Ted Chiang, AI, corporations, and yeah.

Charlie Jane: [00:19:08] Right. It’s an amazing article.

Annalee: [00:19:09] Yeah, he basically makes the point that a lot of these fears really are fears of something else. You were saying to me earlier that you think that part of it is that people see robots kind of as dead. Sort of an uncanny valley kind of thing.

Charlie Jane: [00:19:23] Yeah. I mean, this is a thing that is covered in my favorite Doctor Who story of all time, “The Robots of Death” which is just also the best title.

Annalee: [00:19:32] Yes. Truth in advertising.

Charlie Jane: [00:19:33] Robots. Of. Death. It’s about robots that kill people. And, you know, that’s basically all it is. And there’s this long conversation somewhere in the middle of episode two where the Doctor talks about—or the Doctor and Leela talk about the fact that these robots—

Annalee: [00:19:48] I love Leela.

Charlie Jane: [00:19:48] Leela is the best companion, also. The Doctor and Leela—don’t at me—the Doctor and Leela talk about the fact that these robots don’t have any body language. They don’t breathe. They don’t move like people, and they’re—Leela calls them creepy mechanical men.

Doctor Who Clip: [00:20:03] What did you call those robots?

Creepy mechanical men.

Yes. You know, people never really lose that feeling of unease with robots. The more of them there are, the greater the unease, and of course the greater the dependence. It’s a vicious circle.

Charlie Jane: [00:20:17] And, they kind of move like the walking dead. They kind of seem like they’re not really alive. But they’re smart, they’re intelligent, they respond to you. They have some kind of affect. And it’s disturbing, and the idea of a society that’s built on these kind of like undead people walking around is kind of an unstable kind of fear driven society where at some point, everybody kind of knows deep down that it’s all gonna come crashing down. And, it’s—that’s one of the most—like, I love that whole conversation that they have in the middle of that story about this. And I think that that’s a common thing that robot stories kind of play on without necessarily commenting on it explicitly. There is often just the way in which robots kind of don’t quite move like people. They move like… more stiffly. They perhaps have different reactions. They don’t kind of have the same kind of natural behavior as a normal person and they can seem either like zombies because they’re in the uncanny valley and seem not quite alive, or perhaps they can seem like people who are kind of lacking some crucial element of what we consider human emotion and behavior. And, I’ve seen it played both ways. Like, Data on Star Trek: The Next Generation is clearly kind of alive in some sense but also kind of missing something crucial. Especially in the early episodes. But it is, I think, part of why we fear them is this instinctive fear of people who are not quite right in some way.

Annalee: [00:21:46] I often think now, especially, robots are used to talk about what it means to be non-neurotypical. And, I definitely think that it’s very easy to look at Data’s arc, his character arc on Star Trek as being about what does it mean to be non-neurotypical? And how do you—what if you don’t know how to read emotional cues? What if you don’t know how to send emotional cues? How do you learn to do that? And again, in a lot of films and stories that are sympathetic to robots, that’s what we see. In fact, Martha Wells’ just utterly incredible series which starts with the book All Systems Red is excellent at sort of portraying this. Because it’s about… it’s a cyborg. It’s a part biological, part mechanical creature who’s telling the story from the first person, and it’s very clear that it’s a person but it is also non-neurotypical. It does not like emotional connection with people for a lot of good reasons. Partly because it’s used as a machine so it’s been horribly abused. But it’s a fantastic portrait of what it means to be very human but just not think the way we are led to believe humans should think. Which, of course, we’re all kind of non-neurotypical. Nobody has a perfectly typical and normal thought process. And so this is a way of exploring what happens when you’re really at the edges of that spectrum. But also, I think, I mean you were mentioning robots seeming like dead men, and yet, so many robots are women.

Charlie Jane: [00:23:26] Yeah, really are. In fact, robots are frequently depicted as feminized even if they’re not actually female-bodied. But, in fact, scifi is full of female robots. Going back to the 1970s when there was like this craze for including fembots in everything. Like, The Bionic Woman had fembots turning up in every episode. It was like constantly any time you saw a woman who wasn’t Jaime Sommers, her face would be ripped off to reveal some kind of clockwork thing. And—

Annalee: [00:23:54] And there’s the Stepford Wives, which kind of cast a long shadow over the 1970s fembot kind of genre.

Charlie Jane: [00:24:01] And there’s a bunch of other kind of lady robots popping up in pop culture, including Hajime Sorayama, who did all these very fetishy paintings of lady robots where they are—they look like they’re kind of posing seductively.

Annalee: [00:24:15] Like, cheesecake, kind of.

Charlie Jane: [00:24:17] Yeah, cheesecake pictures of robots. And just the idea of the sexy robot starts being more of a big deal. It starts, I guess—Star Trek actually has some lady robots who try to seduce Kirk back in the ‘60s as well.

Annalee: [00:24:32] I also think that we’re still seeing that. I mean, that’s—Ex Machina is really playing with that trope a lot. And, it’s tempting to say that something like Ex Machina is turning that trope on its head. And I definitely think that it is in some ways, but it’s also a hundred percent in line with the idea of a deadly woman.

Ex Machina Clip: [00:24:51] Why did you give her sexuality? An AI doesn’t need a gender. She could have been a gray box.

Hmm. Actually, I don’t think that’s true. Can you give an example of consciousness at any level, human or animal, that exists without a sexual dimension?

Annalee: [00:25:08] You know, what is a fembot if not just a machine that is sexy that can kill you. And so, and that—Ex Machina is pretty much just serving us that, once again. I mean, we sympathize with her a lot more than maybe we do with the fembots in the Stepford Wives.

Charlie Jane: [00:25:26] And pop culture is just full of sexy deadly lady robots, including like the movie Eve of Destruction.

Annalee: [00:25:33] Okay. I wanted to say this, so, Eve of Destruction, which is an ‘80s film, which probably none of you have watched, but you should, is a little bit more complicated, much like Ex Machina, because in that film we do have an evil female robot. The reason why she’s such a psycho and is about to kill a bunch of people is because her brain is modeled on the brain of a female scientist who was raped. And so, what happens is, any time someone who sees this hot fembot and calls her a bitch because it’s the ‘80s, and of course anyone who sees a hot lady basically is like, “Hey bitch, wanna make it with me?” She just kills them, because she has the brain of a traumatized rape victim, and so… I would love to see a remake of that film where it’s just like a straight-up rape revenge film, and it’s just like, I’m going after the men who tried to rape you.

Charlie Jane: [00:26:23] I mean, I think that there definitely could be a new spin on that for the #MeToo era or whatever.

Annalee: [00:26:29] Yeah, seriously.

Charlie Jane: [00:26:30] Like a female scientist decides to create like—

Annalee: [00:26:33] It’s like, this is what season two of Sweet/Vicious is gonna be. Like, the hacker character is gonna build a robot and it’s going to be the three of them, now, the sorority girl, the stoner hacker, and the robot.

Charlie Jane: [00:26:43] But yeah, getting to—just circling back to my favorite Doctor Who story, Robots of Death, which I’m obsessed with, I’m sorry. There’s that whole thing where like the robots are kind of feminized in that episode and the guy who wants to be a robot puts on all this extra make-up, like all the men in that episode are wearing make-up. But this guy puts on, like, extra make-up to make himself look more like a robot, and it really does look like he’s just looking more like a girl, and more girly, even though he still speaks in like a monotone. Like, male voice. And, you know, there’s this thing in a lot of robot stories, where there’s kind of a part where they get damaged and you see what’s underneath their smooth surface. Either, with the Terminator where the skin is kind of destroyed and then you see the kind of machine underneath, or you know. Some of the casing is stripped away and you see the gears underneath. And it’s—some kind of mixture of body horror but also it often feels a little bit erotic. A little bit like being stripped naked.

Annalee: [00:27:41] This is—I mean in Westworld, this is just front and center, where it’s both eroticized and it’s body horror. Like, when we go into the area of Westworld where they’re repairing all the robots and they’re all naked and some of them are being horrifically tortured. Some of them are just being kind of walked through their paces, and it’s just this—it’s intended to be incredibly horrifying.

Charlie Jane: [00:28:05] Right. And another sexy deadly lady robot I want to mention is actually Cameron on Terminator: The Sarah Connor Chronicles, who was not the first female terminator that we met. There was actually one in Terminator 3, but she’s definitely the best. And she’s the one who has the most complicated story line about her relationship with John Connor and her potential to turn deadly at any time. I think there’s at least one episode in which she malfunctions and tries to kill everybody.

Annalee: [00:28:36] I think what we see as we get closer to the present and to things like Ex Machina, and Sarah Connor Chronicles and Westworld is that we are having more and more sympathy for those robot characters. And it goes—instead of it being a kind of 1950s I, Robot scenario where we’re kind of in the position of the psychologist looking in on the robot and being, “Oh my, you’ve malfunctioned.” We’re looking at the world from the point of view of the robot and we’re feeling the malfunction but we’re also feeling the malfunctions of the world that have created the problems that the robots are dealing with. And that’s certainly the case in Westworld, where it’s very clear that a place like Westworld only exists because the outside world is horribly fucked. And, just riven by class division and by just hypercapitalism and total disregard for human life.

Charlie Jane: [00:29:30] Yeah, and actually, it’s sort of baked into the Terminator franchise that we have some sympathy or some understanding for the POV of the robot. Like, the first Terminator, where there’s no sympathy for that version of Arnold Schwarzenegger’s Terminator but he has this drop-down menu where we get to see his reactions and we get to see him choosing between different social responses to different social cues. And it’s fascinating and hilarious because it is about kind of trying to navigate the complicated world of humans so he can just go and kill some people.

Annalee: [00:29:59] And also in Terminator 2, I would say the terminator’s completely rehabilitated. He becomes as Sarah Connor says, he’s the best possible father because he’ll never leave. He’ll never abuse the kid. I love that scene. It’s a very, very beautiful moment and so what do we learn from all this? I mean, we’ve been talking about these fears. We’ve been talking about some of the underlying issues that might motivate them. Obviously we are, as you said earlier, we’re living in a period where people have good reason to fear that automation may replace them at a job. We live in fear that maybe some kind of algorithm will be unleashed that will destroy data that we’ve put on the internet or that will hack our minds and change the outcome of political events, and we have some evidence that that’s already happened. So, there are real life fears motivating this, but it seems to me that there’s also other fears that aren’t just about robotics and technology, like we’ve been talking about. There’s fears around women’s roles in society, for example.

Charlie Jane: [00:31:03] Yeah, I mean, I think that a lot of these fears are multi-layered and they go deep, and they go into a lot of aspects of society, because we are talking about robots replacing us or killing us, or having sex with us.

Annalee: [00:31:15] Standing in for us, yeah.

Charlie Jane: [00:31:17] We haven’t really talked about the thing where like a robot might replace your spouse like in Stepford Wives, or like, you know. But that’s a big deal. But you know a lot of these get into the basic nature of human relationships and I think it really does go back to the fact that the word robot comes from the word for work. Because really, work invades every corner of our lives.

Annalee: [00:31:40] Especially today.

Charlie Jane: [00:31:41] Yeah. And, like, our working situations are colored by the history of imperialism and exploitation and colonialism and racism and misogyny and a bunch of other things in the broader world. All of our anxieties about work are either anxieties about losing some status that we’ve had in the past because of who we were, or trying to gain some status, or having something ripped away from us.

Annalee: [00:32:05] And I would say, too, just on a psychological level, for many of us, going to work feels like you, when you go into that space, suddenly your brain is being formatted to produce certain things. To function in a certain way, to respond to certain people as if they should have authority over you, even if they’re total fucking dickbags, not that I’ve ever been in that situation. Um. But, it’s a feeling of like, you become an automaton. Even if you’re not somebody who’s working on an assembly line. In fact, in some ways, it’s even more so when you’re going into a job where you’re supposed to be producing something that requires you to use your brain, because then you’re actually—your brain is kind of being taken over by a task that is often very boring or repetitive or revolting.

Charlie Jane: [00:32:57] Yeah, I think customer-facing jobs, like customer service, or retail, or anything where you have to give a series of cordial responses to every single person who comes in, no matter how obnoxious they are. And your boss can come and screw with you at any time, and your drop-down menu of possible responses does not include “Fuck You,” like Arnold Schwarzenegger. You can’t say that.

Annalee: [00:33:20] Exactly. But I also think that what you were saying earlier, you know, about sort of these broader issues, around say, colonialism or racism. Also hold true here because so much of the robot narrative comes out of a fear about slavery.

Charlie Jane: [00:33:37] Yes.

Annalee: [00:33:37] You know, we create these workers to be our happy slaves, or at least to be our obedient slaves. And R.U.R., the play, comes out only about a generation or two after slavery is abolished in the United States. And, of course, it was a European play but it became very popular in the US, and of course many other robot stories were popular here in the US. So, I think that whenever we think about robots, like, we’re always sort of haunted by, at least in the United States, that other narrative. That real-life narrative of how we took some people and turned them into slaves, and these are stories of course of us taking basically people and turning them into slaves. Because the whole premise, often, of these robot stories is that these are human-equivalent beings. They may be different in some ways. They may walk differently, they may express themselves differently, but they’re basically people. Which is why they revolt, because any thinking being would not be a happy slave. I feel like there is a new wave of robot stories that we’re telling, not just things like Westworld. We’re seeing writers like Martha Wells, who I mentioned before, but also authors like Ann Leckie, whose Ancillary series, Ancillary Justice, et al., ends up being a story about AI, and it’s fantastic how it kind of twists into being—Starting out as kind of one sort of tale and then it ends up being about AI rights, basically. What rights do these sentient ships have in the universe? And, I think what characterizes these new stories, as I mentioned earlier, is that they consider the point of view of the robot very sympathetically, and it isn’t just understanding the robot as just an automaton. It’s a real person. It’s not like a human, necessarily, but it has a very complicated relationship with humanity.

[00:35:31] One of the things, too, I was recently at the Tucson Festival of Books with Nicky Drayden, whose new novel The Prey of Gods has a bunch of characters in it who are cyborgs and sort of semi-cyborgs, and partly robots, and she made the point that when we think about the future of robotics we really shouldn’t be thinking about robots as being separate from people, but that in fact we are going to merge with robots. And, there’s plenty of roboticists who’ve said that, too, which is why I thought her comment was so interesting. We always imagine AI as being a thing that will evolve outside of us, but I think it’s maybe smarter for us, as fiction writers, as thinkers in general, to be imagining how will AI supplement us. Like, maybe AI won’t be a thing out there. Maybe it’ll be an implant in our brains. Just as we might become partial—like, our bodies might become partially mechanical. So might our brains. I think that’s a really, really important perspective to keep in mind. That we’re not necessarily—we are really talking about ourselves. We’re not necessarily talking about another species. We’re just talking about what humans will turn into.

Charlie Jane: [00:36:46] Yeah, and as we’ve discussed. Usually when we tell stories about robots and AI, we’re at least partly talking about people and we’re at least partly talking about ourselves. I wanted to give a shout out to an older book that I really love called Virtual Girl by Amy Thomson. It’s from the early ‘90s, I think, and it’s basically about this AI that is living inside a machine and then its creator decides to give it a humanoid body.

Annalee: [00:37:12] Female body.

Charlie Jane: [00:37:12] A female body, yeah. Part of what I love about it is that basically the main character at first thinks of this female body as being like a peripheral. Like the way you’d get a printer installed, or the way you’d get a webcam, or you’d get some other thing. Except that it has all these different sensory inputs, and it has all these different things that it can do. So, it’s sort of like just instead of the body being you, it’s like, the body is sort of this extension of your consciousness.

Annalee: [00:37:41] Attachment.

Charlie Jane: [00:37:41] It’s an attachment. And one of the themes that she sort of deals with throughout the rest of the novel after this character escapes from her kind of pervy creator, is the idea that a lot of other AIs want to have a body but that it’s really hard to make that jump because the overwhelming thing of being in the world around people all the time, and having all these different sensory inputs to deal with is just too much for them. And if they’ve spent their entire lives living inside a black box basically, only interacting through a keyboard, or a webcam or whatever, it’s just too hard for them to make the switch. And I think it’s interesting because it’s a different way of breaking down this kind of distinction between an AI that’s disembodied and a robot. Which is something that scifi kind of obsesses about a lot. And you know, in some ways we’re more scared of robots because they can actually come and kill us in person even though disembodied AIs can control the internet or do whatever.

Annalee: [00:38:35] And they can travel through wires and like all kinds of crazy—so it can take over your smart home, and—

Charlie Jane: [00:38:41] Yeah.

Annalee: [00:38:41] But, I was gonna say, the funny thing about Virtual Girl, which is also a book that I love, is that roboticists like, for example, Cynthia Breazeal, who spent a lot of time at MIT working on robots, she really feels, and I think other roboticists feel, too, that consciousness kind of comes from being physically embodied. And it’s not like she believes that her robots are conscious. I should make that clear. But that, if we were to have some kind of emergent consciousness, it would come from that experience of moving around in the world. She just bases that on human development and how babies learn, and how babies learn by interacting with the environment with their hands and stuff like that. And so, which doesn’t mean she’s right, but I always kind of loved that idea that we have this fetishized brain in a box trope, I think, in scifi. HAL being, you know, HAL the robot from 2001 being like the perfect example of that brain in a box. And she kind of claims, like, you know, look, we’re never gonna have a brain in a box. We might have boxes with things in them that can do some thinking, but if we really wanted to have something like strong AI it would have to be in a body. Which, again, I don’t know if that’s—we won’t know.

Charlie Jane: [00:39:54] Interesting.

Annalee: [00:39:55] And that’s what’s kind of interesting about all of this stuff is that all of these fears that we’re having and that we’re talking about. The fears that we have in real life. The fears that we’re expressing in science fiction are all about something that doesn’t exist yet, which is whatever you want to call it. Human-equivalent AI, strong AI, something that we would recognize as being like us, but in a mechanical body, or in a box, maybe.

Charlie Jane: [00:40:23] Right.

Annalee: [00:40:24] And, I always come back to the fact that if such a thing does emerge. If we invent it. If it kind of is an emergent property of some system accidentally, it’s probably gonna be different than what we expected, and that I hope that when that happens, if that happens, that we will just ask the AI, what it thinks and how it feels instead of just projecting wildly onto it the way we currently are. And sort of assuming that we know that it’s going to become an evil killer or like a cruel seductress or what else that we assume it will. And maybe just say, hey, AI friend. What do you think? Do you? Would you like to have equal pay, or what? Where do you see your role fitting into this world? And I bet we’ll get a pretty good answer if we can approach it that way.

Charlie Jane: [00:41:18] Yeah, and like, my final thought, actually, is just that to underscore something I said earlier, which is that I think a lot of our fear of robots and of the artificial is actually a fear of femininity and of like, the intrusion of femininity into the sphere of work, and into the sphere of power and control.

Annalee: [00:41:35] Yeah, I think femininity but also just any group that has been excluded from the workplace, which is why I think that there’s all of this sort of post-colonial stuff going on as well. So, I think—But I think you’re right. Like especially in the States, that fembot thing just will not die. And it starts with Metropolis, you know. So it goes way back. So, all right. Well, you’ve been listening to Our Opinions Are Correct. We will be back in two weeks with another episode. If you like us, please say so on iTunes. Please let people know about us.

Charlie Jane: [00:42:10] Please follow us on Twitter, @OOACpod.

Annalee: [00:42:13] Our Opinions Are Correct is edited at Women’s Audio Mission by Veronica Simonetti, and the music was provided by Chris Palmer.

[00:42:20] Outro music plays. Synth organ over a snare followed by a guitar riff.