Episode 149: Transcript
Transcription by Keffy
Annalee: [00:00:00] Charlie Jane, I want you to imagine that you're in the desert.
Charlie Jane: [00:00:05] Okay.
Annalee: [00:00:05] And you come upon an ice cream cone and it's melting, it's melting in the desert. If you don't lick it up, nobody will ever eat that ice cream. It will seep into the boiling hot sand and turn into a bunch of dying sugar crystals. What do you do?
Charlie Jane: [00:00:23] Um, um, I don't know!
Annalee: [00:00:25] Come on, the ice cream is melting! What are you doing?
Charlie Jane: [00:00:29] Wow, well luckily I've got this turtle in my pocket, so what I'm going to do is like put the ice cream on top of the, flip the turtle over on its back, and use the turtle as an ice cream cone holder. And the turtle is cold blooded, so it'll keep the ice cream cold a little longer, I don't know.
Annalee: [00:00:43] Oh, good point. So, I don't know if that would pass the Voight-Kampff test that we saw in Blade Runner, which, I actually never really understood what that test was testing. Because Harrison Ford is like yelling at the replicant about what he does with a turtle flipped over on its back.
[00:01:02] And I'm like, what is the right answer? Like a human would know how to answer like, are you supposed to flip the turtle over? Because there's plenty of humans that would not stop and flip a turtle over in the desert. So I just don't understand what type of humanity that's measuring.
Charlie Jane: [00:01:18] I think the good thing about the Voight-Kampff test is that it's weird and I think that when you get into these questions about who's a human and who's real or whatever, it gets weird. And I think that, you know, better to have a nonsensical test than one that purports to actually make sense is what I'm going to say.
Annalee: [00:01:36] That's a pretty good point. And the Voight-Kampff test is just one of many tests from science fiction that are supposed to sort out the humans from the machines or the synthetic creatures, and all of them are extremely sus.
Charlie Jane: [00:01:49] Mm-hmm.
Annalee: [00:01:50] And by the way, in case you were wondering what the heck just happened, you are listening to Our Opinions Are Correct, a podcast about science fiction, science, society, and every other word that starts with S. I'm Annalee Newitz. I'm a science journalist who writes science fiction. My latest novel is The Terraformers, and my forthcoming nonfiction book is called Stories Are Weapons: Psychological Warfare and the American Mind.
Charlie Jane: [00:02:17] It's such an amazing book. I'm Charlie Jane Anders. I write science fiction and various other things. I have a young adult trilogy out. The third book, Promises Stronger Than Darkness, is gonna be out in paperback in April. And also, I've been writing a bit for Marvel Comics, and you can still get New Mutants: Lethal Legion, at your local comic book store.
Annalee: [00:02:42] In this episode, we're going to be talking about the grandmother of all tests of machine sentience, known as the Turing test. And joining us to discuss it are the AI researchers Alex Hanna and Emily M. Bender, who are also the hosts of the Mystery AI Hype Theater 3000 Podcast. We're going to talk about why the Turing test is so influential in fiction and reality, and why it's completely wrong.
[00:03:11] Later in the episode, we're going to talk about another thing that humans got wrong about non-human intelligence. That's right. We're talking about the domestication of dogs.
[00:03:20] Also, on our mini episode next week, we'll be discussing Andrea Long Chu's recent essay in The New Yorker on why trans kids should be given gender affirming health care, and also why it's important to talk about biological sex, and not for the reasons that you think.
Charlie Jane: [00:03:37] Yeah, and by the way, did you know that this podcast is entirely independent? And it's funded by you, our listeners, through Patreon. And you know, if you become a patron and support us on Patreon, you're not just like helping to keep this podcast going, you're becoming part of our community. And you get mini episodes after every episode, and you get access to our Discord channel, where we take part in lively, fun, freewheeling conversations about everything.
[00:04:02] And just all of that can be yours for—
Annalee: [00:04:04] Freewheeling!
Charlie Jane: [00:04:04] —a few bucks a month or whatever you can afford. And everything you give us goes right back into the podcast and this community. So find us at patreon.com/ouropinionsarecorrect. All right, let's get tested.
[00:04:26] [OOAC theme plays. Science fictiony synth noises over an energetic, jazzy drum line.]
Annalee: [00:04:51] Alan Turing was a British computer scientist who's famous, in part, for popularizing the idea of artificial intelligence and for his work as a codebreaker during World War II. He also speculated about the idea of a test for artificial intelligence, now known as the Turing test. Since Turing wrote about it in 1950, this test has taken on almost mythic proportions.
[00:05:13] And joining us to talk about it are Alex Hanna, the Director of Research at the Distributed AI Research Institute, and Emily M. Bender, a linguistics professor at the University of Washington in Seattle who studies ethics and natural language processing technologies. Welcome Alex and Emily.
Emily: [00:05:31] So excited to be here.
Alex: [00:05:32] Thanks for having us.
Annalee: [00:05:33] So this is actually the second part of our podcast universe crossover event. We were on your podcast a couple of weeks ago.
Emily: [00:05:41] And it was awesome.
Annalee: [00:05:43] Yes. We tore apart the three laws of robotics. So we're super happy to have you here to break down the Turing test with us.
Charlie Jane: [00:05:49] Yeah!
Alex: [00:05:51] I'm excited to be in the podcast multiverse with y'all.
Annalee: [00:05:55] Yes. We need to bring more podcasts into it until finally we just are able to defeat OpenAI with all of our comrades.
Emily: [00:06:06] All of our combined podcasting powers.
Alex: [00:06:07] I know we got to get Miles Morales in this, in this.
Annalee: [00:06:11] Yeah. Get him on this, for sure. So, okay. Just start us off with the basics. What exactly is the Turing test? And also how accurate is the popular conception of it?
Emily: [00:06:23] Huh. So maybe we should start with what's your impression of the popular conception? What's your short version of it?
Charlie Jane: [00:06:29] I'll take that.
Annalee: [00:06:31] Yeah.
Charlie Jane: [00:06:31] I think that, popularly, the Turing test is perceived as like, if a human can talk to an artificial entity and not be able to tell that they're not talking to another human, if you can fool a human into thinking that you're a human, then that proves that you're in some way intelligent.
Emily: [00:06:50] Yeah. Okay. So that matches my understanding of the popular understanding. It's a little bit more detailed in the paper. So it's not just, I'm having a conversation and I can't tell that that's not a human on the other side, but it's specifically set up as a test where the, oh, now I've lost the term.
[00:07:09] So there are three participants. There are the two witnesses, so there's a human and then the machine, and they are both communicating via teletype with the third party, who is the interrogator. And the interrogator gets to talk to both of these entities for a short period of time and then has to pick which one is the human and which one is the machine.
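[Editor's note: for readers who want the setup Emily just described laid out concretely, here is a minimal, hypothetical sketch in Python of a single imitation-game trial. Every name and function in it is invented for illustration; it is not taken from Turing's paper or from any real implementation.]

    # One trial: an interrogator exchanges text with two hidden witnesses,
    # one human and one machine, and then guesses which label is the human.
    import random
    from typing import Callable, Dict, List, Tuple

    Transcript = Dict[str, List[Tuple[str, str]]]  # label -> list of (question, answer)

    def run_trial(
        interrogate: Callable[[Transcript], str],  # returns "X" or "Y", the guess for the human
        human: Callable[[str], str],               # human witness: question -> answer
        machine: Callable[[str], str],             # machine witness: question -> answer
        questions: List[str],
    ) -> bool:
        witnesses = [("human", human), ("machine", machine)]
        random.shuffle(witnesses)                  # hide which channel is which
        labels = ["X", "Y"]
        transcript: Transcript = {label: [] for label in labels}
        for question in questions:
            for label, (_, answer) in zip(labels, witnesses):
                transcript[label].append((question, answer(question)))
        guess = interrogate(transcript)
        human_label = labels[[identity for identity, _ in witnesses].index("human")]
        return guess == human_label                # True means the machine failed to fool the interrogator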
Alex: [00:07:32] Right. Yeah. I mean, it's interesting that there's these different interlocutors, too. And it's also kind of fascinating in the original paper on this, and I think there's a few different formulations, but the initial one is published in a journal called Mind, and the name of it is “Computing Machinery and Intelligence.”
[00:07:52] And actually, kind of even prior to this, there's a thing that doesn't get discussed, and it is actually determining if the person is a woman or a man.
Annalee: [00:08:06] Oh. Interesting.
Charlie Jane: [00:08:08] Whoa. Oh my God.
Alex: [00:08:08] I didn't actually, or I had read this before and then it kind of slipped my mind because I read it in another book and when I read it, I was like, oh my God.
[00:08:18] So in the second paragraph, it says, the new form of the problem can be described in terms of a game, which we call the imitation game. And there's been, you know, that movie of the same name, featuring a Benedict Cumberbatch. That's his name, right? I'm so bad with actor names.
Annalee: [00:08:38] Yep.
Charlie Jane: [00:08:37] No, you got it.
Alex: [00:08:38] I'm sorry.
Annalee: [00:08:39] Just think of him as Smaug the Dragon. That's really the main role he's ever played.
Charlie Jane: [00:08:45] Manifold Cumberbund.
Alex: [00:08:46] I just think of him as Dr. Strange and then...
Annalee: [00:08:48] Another good...
Alex: [00:08:49] Another... what's a role I think he's great in? Anyways, it is played with three people, a man, A, a woman, B, and an interrogator, C, who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either “X is A and Y is B” or “X is B and Y is A.”
Charlie Jane: [00:09:22] That's not confusing at all.
Alex: [00:09:24] Yeah, right. I mean, you see this. And then the interrogator is allowed to put questions to A and B like, will X please tell me the length of his or her hair? So it's just like, wow, even prior to this new formulation, and in section two he talks about the more articulated imitation game, but even prior to that it's about gender difference, which I was like, whoa, what the hell?
[00:09:50] I thought that was fascinating.
Charlie Jane: [00:09:51] Interesting.
Annalee: [00:09:52] I love the idea that our model for distinguishing between artificial and “natural” intelligence comes from distinguishing between male and female. So yeah, I think we know where the fembot icon comes from.
Charlie Jane: [00:10:11] Yeah, so much of our popular ideas about AI are gendered in really weird ways.
Alex: [00:10:16] Absolutely.
Emily: [00:10:18] And just sort of tracking that gender through here a little bit. So, the continuation is basically, Turing has... The woman in the initial game is trying to convince the interrogator that she is actually the woman, and the man in the initial game is trying to confuse the interrogator and says that he is the woman.
Alex: [00:10:37] Yeah.
Annalee: [00:10:38] Okay...
Charlie Jane: [00:10:38] Oh my God.
Emily: [00:10:42] And then, so that's the setup. And then he switches it out so that the third player, the one who was the woman previously is now the man and the one who is trying to pass as something that it isn't is the computer. So the woman is only there until it is man versus computer and then she's gone. But he's not very clear like about how he switches them out and I sort of went back and read it a couple of times, like, but wait, who's who?
Charlie Jane: [00:11:08] I'm starting to wonder about Alan Turing a little bit, to be honest, like whether, there were some trans feelings here.
Alex: [00:11:15] Well, yeah.
Annalee: [00:11:15] I mean, I think he might have had a few gender issues.
Charlie Jane: [00:11:19] I mean...
Alex: [00:11:19] Turing himself is a fascinating figure, right? I mean, and there's been a bit of writing and I was revisiting an old blog post by Jacob Gaboury, who is a, I think professor of, I want to say English at Berkeley, and he's a historian of computing. Back in 2013, he wrote this series on, like, a queer history of computing.
[00:11:47] And so it's well known. Turing was a gay man. He was persecuted for his queerness, was chemically castrated, and in 1954 took his own life, kind of as a consequence of that. And it's kind of interesting, in reading this history of Turing, kind of thinking about... He's this figure where I think there's a particular reading of Turing within the pantheons of computer science. The ACM, which is the Association for Computing Machinery, one of the professional associations of computer science, names its highest award the Turing Award, there's the Turing test, but then the kind of notion of his queerness is really kind of played down, in some sense.
Charlie Jane: Not surprising.
Alex: [00:12:42] Yeah, even though Turing himself, according to Jacob Gaboury, didn't really play that down. He was pretty out. He wasn't really closeted despite the legal consequences, and he also mentored a lot of different queer men. But then it's still kind of shocking that he then takes this sex difference to be so central to the test. I found that a bit surprising, given all that. So, yeah, I mean, like, the man contained multitudes.
Charlie Jane: [00:13:18] He did.
Annalee: [00:13:18] Jumping over to science fiction really briefly, it kind of clarifies for me why some of the most iconic scenes in science fiction movies where AI is being interviewed to see whether it can pass a test are both men interviewing female AIs.
[00:13:38] So I'm thinking about, the famous scene in the first Blade Runner movie where Harrison Ford is interviewing a female replicant and giving her the Voight-Kampff test, which is kind of a mutated version of the Turing test. And he's trying to see if he can figure out whether she's a machine or not.
[00:13:55] And then the movie Ex Machina, which, basically the entire movie is structured around a man interviewing a woman to see if she can basically pass the Turing test, as well.
Charlie Jane: [00:14:06] Yeah, oh my God.
Annalee: [00:14:08] Although again, I don't think it's particularly a Turing test, in that case, but he's definitely testing her for realism.
Alex: [00:14:13] That's fascinating. And, yeah, I mean, to me this reads trans as hell, right? I mean, it's just, you know, you're interviewing to see whether this machine can pass, except the passing is not about being a woman, exactly, but about being a person, or belonging to the collective known as humanity.
Annalee: [00:14:38] Yeah. Well also, is she artificial or real is like the actual question underlying it, which is obviously a very gendered question?
Charlie Jane: [00:14:46] Femininity is always coded as artificial as we know.
Alex: [00:14:50] Yes.
Charlie Jane: [00:14:49] And especially for people assigned male at birth who have gender feelings. I don't want to make any speculation about Alan Turing beyond what we already have, but it's interesting.
[00:14:59] So. Part of why I wanted to do this episode, why I wanted to talk to you both about this, was because I feel like I grew up with this idea of the Turing test, in popular culture, in the popular imagination. I grew up with this idea that the Turing test was something that was a huge milestone, and one day a machine would pass this test and we would know that something important had happened.
[00:15:21] And then, I was going to say that sometime in the last 10 years we started to have machines routinely pass the Turing test, but that's not even true, because when I was a kid, ELIZA was around. Like, I played with ELIZA when I was in my early teens, and ELIZA...
[00:15:38] So for those listening at home, ELIZA was a therapy program that I believe was created in the early 1970s, I think?
Alex: [00:15:45] Prior to that, yeah.
Emily: [00:15:47] 1960s.
Charlie Jane: [00:15:47] 1960s, by a guy at MIT. There was actually a great episode of, I think, the Tech Won't Save Us podcast where they talk about ELIZA and how the creator of ELIZA really warned against AI hype.
Alex: [00:15:59] Yeah.
Charlie Jane: [00:15:59] In a lot of ways. ELIZA was a therapy program that would just basically parrot back to you stuff you'd said to it with slightly different phrasing, and it famously would say, “Please elucidate.”
Alex: [00:16:13] Mm-Hmm.
Charlie Jane: [00:16:12] It liked the word elucidate a lot, for some reason. And people would think ELIZA was a real person. Real person, obviously those are very loaded terms, but they would think that they were talking to a human being. They would get sucked in.
Alex: [00:16:25] Yeah, absolutely.
Charlie Jane: [00:16:27] So it's just, I think that, you know, I grew up with this idea, oh, the Turing test is going to be really difficult to pass. A machine that can pass that is going to be really advanced. And like, no, it's ridiculously easy to pass.
Annalee: [00:16:38] So my question is, like, does anyone take the Turing test seriously at this point, like in the AI development community?
Emily: [00:16:46] So, there was something called the Leibniz Prize for a while, which was like an established version of the Turing test.
Alex: [00:16:51] Yes.
Charlie Jane: The best of all possible worlds.
Emily: [00:16:57] Yeah, and that had a prize associated with it, and people were doing that. I think what's happened is that, in the focus on the machine passing the test, what we've lost track of is that actually we are frequently failing the test.
[00:17:13] And by failing the test, I don't mean that we are being mistaken for machines, although occasionally that happens. I mean that we are failing to understand what's happening on the other side and imagining minds that aren't there.
[00:17:25] And what happened with both ELIZA and ChatGPT and some of the things in between is that people were so taken by the sense they made of system output that they lost track of the fact that they were doing the sense making.
Charlie Jane: [00:17:40] That's such a good way of putting it.
Emily: [00:17:41] I think about ELIZA, it's important also to say that Weizenbaum wasn't trying to make a therapy program. He wasn't trying to make something that was therapeutic or useful for therapy. He was doing a language processing system, and it turns out that that style of therapy is one in which the computer doesn't need to know anything about the world because it can just keep basically rephrasing what the patient has said as a question.
[00:18:03] So it was a convenient sort of domain to be playing in, in order to show off a computer using natural language. And I think that's where Weizenbaum started. And then when he saw what people made of it, he got very worried and went on to be a fierce critic of AI hype and AI and wrote some wonderful things that everyone should read.
[00:18:25] Unfortunately, his book is out of print, which is really a bummer.
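[Editor's note: for readers curious how little machinery the kind of mirroring Emily and Charlie Jane describe actually requires, here is a minimal, hypothetical sketch in Python of a keyword-and-reflection responder. It is not Weizenbaum's ELIZA or its DOCTOR script; the patterns, reflections, and fallback lines are invented for illustration. Even a toy like this tends to read as attentive, which is the point Emily makes above: the person at the keyboard is doing the sense-making.]

    # A toy ELIZA-style responder: match a keyword pattern, reflect the
    # user's own words back as a question, or fall back to a stock prompt.
    import random
    import re

    # Swap first-person words for second-person ones so the echo reads naturally.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "i'm": "you're", "mine": "yours"}

    # Keyword patterns paired with response templates; {0} is the reflected capture.
    PATTERNS = [
        (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
        (re.compile(r"i feel (.*)", re.I), "Tell me more about feeling {0}."),
        (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
    ]

    FALLBACKS = ["Please go on.", "How does that make you feel?", "Please elucidate."]

    def reflect(fragment: str) -> str:
        return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

    def respond(utterance: str) -> str:
        for pattern, template in PATTERNS:
            match = pattern.search(utterance)
            if match:
                return template.format(reflect(match.group(1)))
        return random.choice(FALLBACKS)

    print(respond("I feel like nobody listens to me"))
    # -> Tell me more about feeling like nobody listens to you.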
Alex: [00:18:29] So his book is, I mean, it's excellent. I'd love to see a reprinting of it. So, 1976, the book's called Computer Power and Human Reason. And, you know, Weizenbaum, this is the MIT professor who created ELIZA, talks a lot about how people were taken with it. He talks a lot about how they were trying to use it for therapy, and it was even taken up by these people who were therapists, saying, we can have robo-therapy in five years' time. And this is something that I need a better name for, but it's like the always-five-years-away horizon of AI, it seems like.
Charlie Jane: [00:19:11] Oh my God, yeah.
Alex: [00:19:11] But it's like in five years we're going to have... And you actually see this in Turing, too, where he’s saying, by the end of the century we're going to have things that are going to be able to pass this imitation game. That they’re actually going to work on this.
[00:19:37] And he does this mostly through a very physicalist understanding of what knowledge is. He presents all these objections, and there's a bunch of shit in there. But he does it mainly through this kind of physicalist understanding where he's saying, well, we need something like 10 to the power of 10 bits. And he says bits, not bytes, but we basically need 10 to the power of 10 bits, and that's supposedly sufficient to pass this test.
[00:20:00] And it's like, that's wild. Actually, where I went immediately is that that number is not very large at all with regards to the kind of memory we have in modern computers now, or the amount of data it takes to train a large language model or whatever. And it actually reminds me a lot of all the discussions of Lieutenant Commander Data's positronic net. In a few episodes, and I looked this up at one time, I think Geordi or someone says, my positronic net has like 10 to the power of blah, whatever.
[00:20:41] And in today's terms, you're like, that's not a lot. So it's actually quite interesting. I mean, making predictions, especially with regard to how much space something requires in memory, is an incredibly dangerous game, and not even Alan Turing could succeed at making that prediction.
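[Editor's note: a quick back-of-the-envelope check of the scale Alex mentions, assuming, as the discussion above does, that the figure of 10 to the power of 10 refers to bits of storage:]

    # Turing-era storage estimate, converted to modern units.
    bits = 10 ** 10
    gigabytes = bits / 8 / 1e9  # 8 bits per byte, 1e9 bytes per gigabyte
    print(f"{bits:.0e} bits is about {gigabytes:.2f} GB")
    # -> 1e+10 bits is about 1.25 GB, i.e. less memory than a typical phone has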
Annalee: [00:21:02] So are there tests now that people use to try to figure out whether an AI has achieved... what would it be? Human equivalent intelligence or general intelligence?
Emily: [00:21:18] Yeah, so people claim this. Alex and I and a few other co-authors have written some papers about why it makes no sense. But basically it comes down to: you gotta define intelligence if you want to test for it.
[00:21:33] And if you look into the existing definitions of intelligence, what you find is eugenicist race science and total bullshit in that direction. But there are plenty of things where people claim that ImageNet tests for general visual understanding, whatever that means. And there's something called GLUE and SuperGLUE, which is supposed to test for general linguistic understanding. By the way, it's only English, and it's not actually general understanding.
[00:22:01] And so we have a paper called “AI and the Everything in the Whole Wide World Benchmark,” which is partially inspired by the children's picture book Grover and the Everything in the Whole Wide World Museum.
Annalee: [00:22:13] Nice.
Emily: [00:22:13] Which we get to summarize in the paper, sort of talking about the ways in which these benchmarks have been misconstrued as evidence for general intelligence.
[00:22:22] The most recent one that I spent any time with was this paper that came out of Microsoft Research as a non-peer-reviewed preprint called “Sparks of Artificial General Intelligence,” where Sébastien Bubeck et al. were playing with an early version of GPT-4 and trying to test it for intelligence, and they needed a definition of intelligence. And so they turned to psychology and found an editorial written by—
Alex: [00:22:52] Linda Gottfredson.
Emily: [00:22:53] Thank you.
Alex: [00:22:54] Yeah.
Emily: [00:22:54] Cosigned by 50-some psychologists in The Wall Street Journal in the mid-1990s. And this was written in response to the public outcry about the book The Bell Curve—
Charlie Jane: [00:23:07] Oh, no.
Annalee: [00:23:09] I'm familiar with this, yes.
Emily: [00:23:10] And what they say in this editorial, so page one basically gives a definition of intelligence and that's the definition that the Sparks of AGI paper points to. And then page two basically says, okay, so we need to set the record straight about all of this outcry. The folks who wrote The Bell Curve, they're basically right. There are racial differences in IQ.
Alex: [00:23:29] Yeah.
Charlie Jane: [00:23:30] No! Oh my God. I feel like there is the Venn diagram of people who are the most zealous about proclaiming the wonders of AI and how we're going to have AGI any minute now and people who are trying to bring back race science. There's some overlap there.
Alex: [00:23:48] It's nearly a circle. And just to add to that, I think it wouldn't be a far cry to say... In the field of AI, there is, I would say, an evaluation crisis, in that so much of the evaluation that's done is really just a lot of bloviation and recourse to a lot of tests that they consider human-level, right?
[00:24:22] So, there's been a lot of recourse to, let's have the computer take the bar, or let's have it take the standard U.S. medical licensing exams, and they take these things off the shelf without really acknowledging, well, this is only one very small part of the bar or of licensing exams. And actually, to be evaluated as a lawyer or a medical provider, you need quite a lot more of an assessment. I mean, these things are not as simple as doing simple tests, right? There's so much that needs to be done.
Annalee: [00:25:04] Yeah, you get a fellowship. You do an internship.
Alex: [00:25:07] You do your rounds.
Annalee: [00:25:07] You do labor for other humans.
Alex: [00:25:09] You clerk in the legal field.
Charlie Jane: [00:25:12] Yeah, a lot of it is social and yeah, I want to get back to the question of language and intelligence.
[00:25:20] You know, I've had an experience recently just throwing this out there, that I've signed up with a really bad insurance plan for 2024 and it's been making my life hell. And I've had to spend a lot of time on chat with like different representatives of the insurance company and I never know if I'm talking to a person or an algorithm. Often I think it's a mixture of both and so I'm like, well, I can't be too rude to this interlocutor because it might be somebody in some other country who's not even necessarily a native English speaker who is getting horribly underpaid. Or it could be an algorithm, or it could be bouncing back and forth through those two things, and that’s...
Alex: [00:25:58] Could be both. Could be someone, could be an algorithm.
Annalee: [00:26:02] Yeah, using it.
Alex: [00:26:02] Or it could be an algorithm trained on those interactions between, I mean, that's very likely.
Annalee: [00:26:07] Yeah.
Charlie Jane: [00:26:07] It's this sort of gray goo of social interaction that's becoming much more common lately. And so, I guess my question that I wanted to ask y'all is this idea that language connects to intelligence, and that that's the silver bullet. Like if you could communicate in a particular way, you are intelligent. And like, I think obviously humans love language. We use language. I'm a writer. Language is great, but why privilege language like that?
Emily: [00:26:36] Yeah. I think you've put your finger on something really important because it goes both ways, right? When we see systems that are artificial using language, apparently fluently, we take that as evidence that there's a mind there, that there's intelligence there. And conversely, you all have probably had the experience of, trying to speak in a language that's not one of your first languages. And feeling like people aren't perceiving you as sort of the full person that you are. They have an underestimate of your intelligence. I hate saying it that way because I don't like the notion of intelligence as a linear thing.
Charlie Jane: [00:27:14] Yeah, but I know what you mean.
Emily: [00:27:14] But that's sort of where we end up in that space.
Charlie Jane: [00:27:16] People treat you like a child because you're talking like a child.
Annalee: [00:27:16] Yeah.
Emily: [00:27:20] Yeah. And you know, to a certain extent, the point about children is really important, right? So if you think about it, developmentally, for most individuals we interact with, the way they use language gives us a lot of information about their developmental trajectory, setting aside anybody who's operating in a second language, right? Anyone within a linguistic community, and we might be generalizing from there.
[00:27:42] And then there's this whole additional layer of trained-in linguistic discrimination. Right? Everyone who's taught, don't speak that way, that's incorrect, that’s... So that adds another layer. Or sometimes it's very overt, so one of the kinds of linguistic discrimination that happens in English is against varieties that do what's called negative concord.
[00:28:02] So instead of “I don't have anything,” you get “I don't got nothing,” or “I ain't got nothing,” where the negation is marked in two places. And guess what, that's how it works in French. Like, that's standard French, right? It's standard Polish.
Charlie Jane: [00:28:17] [French]
Emily: [00:28:19] Yeah. But in English we say, oh, well, it's illogical because two negatives make a positive and that's not what you meant by that, right? So we have this additional layer of sort of discrimination that is supposedly justified in terms of logical thinking, which sets up the standardized variety of English as therefore more logical, aka more intelligent.
Annalee: [00:28:42] So, I guess we have this idea that we would test AI to see whether it could fool us by sounding like a person.
[00:28:51] We have an idea of testing AI to see if it's intelligent because it has the right kind of language.
[00:28:57] And then in science fiction, there's also often tests like the Voight-Kampff test, which is kind of an emotional test. Like, does the replicant show emotion? So, I'm wondering, for you guys, do you think there is something that we should actually be testing AI for? Like, if we just assume all these other tests are kind of, in their own ways, a bit foolish?
Emily: [00:29:20] Yes, we should be building automation for specific use cases and testing how well it works in those use cases.
Alex: [00:29:25] Yeah. And just to build on that, I mean, I think the idea that there's kind of a... I mean, the idea for general intelligence itself has these eugenicist roots as Emily described.
[00:29:35] And I mean, you can take it much deeper than Gottfredson, take it to the history of IQ. And, you know, many folks have written on this. The kind of standard text on this is Stephen Jay Gould's The Mismeasure of Man, which is a fantastic book. I mean, very old at this point, but still pretty relevant, because race science comes up every five years.
[00:30:00] And I think when we call for specificity, it’s like the idea that you could find replacements for humans in these particular ways to do kind of anything. It does a few things, right? One thing it does is it accrues a certain kind of power and capital to a narrow set of actors. If you can create the best large language models, you know, whether you're OpenAI or Anthropic or Google or Microsoft or whatever, then, you become the go-to. And then you can use this and then maybe you can do some fine tuning of it at the back end and then it works in all these other places. Computer scientists like to think that they are the kind of jack of all trades and the masters of none.
[00:30:47] So there's a great article by David Ribes, who's a sociologist of science called “The logic of domains.” And the idea behind it is that computer scientists tend to think of themselves as, well, we're going to make a general problem solver.
[00:31:06] And there's actually a tool called the General Problem Solver, that was developed in, I forget the exact year, I think the late 1950s. And there had been a few different kinds of elements of this. But then, if we need to get the expertise of domains, we'll just go out and find that person that is good in this domain.
[00:31:29] We even had this language in tech organizations where you find people who are subject matter experts, right? Who we can grab and focus on this. Rather than thinking about these tools that way, let's not evaluate this with regard to some kind of notion of general intelligence, which, you know, when you peel back the layers, you get some really gross race science.
[00:31:53] Why don't we focus on things for particular applications that are going to work for particular people?
Annalee: [00:32:01] All right. Well, that sounds like a good place to end on.
Charlie Jane: [00:32:03] Yay!
Annalee: [00:32:04] Thank you guys so much for joining us and setting everyone straight on how we should be testing AI.
Alex: [00:32:10] Totally.
Annalee: [00:32:10] And what Turing was really thinking.
Emily: [00:32:15] Yeah, our pleasure. We didn't even get to the part of this paper about ESP.
Alex: [00:32:19] Oh, yeah.
Annalee: [00:32:19] What?
Charlie Jane: [00:32:19] Oh, wow. That'll be the sequel.
Annalee: [00:32:24] Well, so where can people find your guys work in the real world, online?
Emily: [00:32:30] Yeah, well, we have just launched a newsletter associated with our podcast, which, thank you, Charlie Jane, for the recommendation. You can find it on ButtonDown. We are Mystery AI Hype Theater 3000.
Charlie Jane: [00:32:39] Oh, nice.
Alex: [00:32:43] Yeah, I think the link to the ButtonDown is ButtonDown.email/MAIHT3K, which is a little confusing, but yeah, sorry.
Emily: [00:32:54] And our podcast is, available on PeerTube as videos, including the wonderful video with the two of you not too far back and also everywhere fine podcasts are accessible.
Annalee: [00:33:04] Awesome. Great. Well, thank you so much. And we look forward to part two.
Charlie Jane: [00:33:11] Yay! Thank you.
Alex: [00:33:13] Yes.
Emily: [00:33:13] Bye. Thank you.
Alex: [00:33:13] It's a pleasure. Thank you.
Annalee: [00:33:16] So, after the break, we're going to talk about the selective breeding of dogs and how it's related to the ways that we think about non-human intelligence more generally.
[00:33:30] [OOAC session break music, a quick little synth bwoop bwoo.]
Annalee: [00:33:32] So I've always been super interested in the idea of domestication. Partly the way it applies to humans, because of course we've domesticated ourselves over the past 15,000 years. But also how it kind of defines our relationship with lots of other non-human animals that we've lived with, including things like sheep. But most especially dogs, because there's genetic evidence that humans and dogs have been co-evolving for more than 30,000 years.
[00:34:07] And during that time, we've kind of domesticated each other. And especially in the last couple of hundred years, that mutual domestication process has kind of become a lot more one-way. Humans have been doing a lot of selective breeding of dogs, and it's low tech, but it does cause dogs to mate that normally wouldn't mate or even be able to mate.
Charlie Jane: [00:34:35] Right. Yeah, I had a friend who worked with dogs who told me that certain really purebred breeds of dogs are incapable of giving birth naturally because the puppies' heads can't fit through the birth canal, and also there's all sorts of weird health problems that you get with these purebred dogs.
[00:34:52] Like, you know, when we were discussing this episode in advance, the analogy I came up with was that we've created these breeds of dogs that are so highly specialized that there's no real human equivalent, there's nothing like that among humans. The only thing that comes even remotely close to being in that zip code is the British Royal family. Like, the British Royal family are the closest we can get among humans to having a breed of dogs. Because they've got the hemophilia, they've got all these other weird problems, because the British aristocracy, and I guess the European aristocracy in general, is super, super inbred, and has been inbred to the point where it's got real problems now.
Annalee: [00:35:35] Yeah, it's funny, because Naya added in the show notes here that, of course, many of the reasons for selectively breeding dogs are actually to create dogs that have traits that we like, like a better ability to herd animals, for example. But a lot of it is just, especially since the 19th century, for aesthetically pleasing features, like dogs that have the really smushed face, which gives them breathing problems.
[00:36:05] And I was just thinking about in relation to the royal family, like, I don't feel like there's been an effort to make aesthetically pleasing royals. I'm not sure what that breeding program is really about.
Charlie Jane: [00:36:17] I mean, it's a really good question, actually. I mean, you know, somebody should ask them.
Annalee: [00:36:23] Yeah, I mean, it does tie into this idea, which again, we get from selective breeding, that there is a kind of a purebred version of every type of dog, which is funny because they're all dogs, people. They're all the same species. Just like, of course, all humans are the same species. But we get very hung up on this idea of like, who's a purebred person of particular variety.
[00:36:46] So, my guess would be that the royals' breeding program is to create these. I mean, they're literally called blue bloods, right? Like, that's a way of talking about purebred humans.
Charlie Jane: [00:36:59] It’s bad for people. It's bad for, it's just, it's a weird concept. And I think that part of what's interesting about thinking about dogs is that you only get that kind of like, incredibly extreme version of, basically, a weird form of eugenics, or whatever. That can only happen where there's a species that's under the control of another species. No species would really choose to do that to itself. That is something that has to be done to a species that is controlled by some other species that wants to use it for specific goals, either for whatever we think is aesthetically pleasing, or because we need a dog that can herd sheep, like you said, or hunting, or whatever.
Annalee: [00:37:47] Yeah, and of course, in the United States, a lot of historians who've studied slavery have pointed out that there was a kind of breeding program in place, there, where of course, white men could, basically create profit by selling their own offspring, selling their disavowed children.
Charlie Jane: [00:38:09] Right, I don't know to what extent that was people were being bred for specific traits. I think there was some of that.
Annalee: [00:38:14] Right. They were being bred to be sold.
Charlie Jane: [00:38:17] They were being bred to be sold. I think the kind of program that we've done with dogs, as you said, it's been thousands of years at this point. It's gotten more intense since the 19th century, but it's been thousands of years. And in a way, it's a very science fictional thing, like imagining a world where one species controls the reproduction of another to the point where they can force it into these bizarre shapes and bizarre versions of itself.
Annalee: [00:38:46] Yeah, it's really true. There's this really amazing novel that I've talked about before on here, called The Mount by Carol Emshwiller, which is all about an alien species that invades the earth and turns humans into its horses. They're very tiny, fragile aliens, and riding on humans' shoulders is actually extremely comfortable for them.
[00:39:14] And so they've created different breeds. They have like the Seattle breed and a couple of other different breeds named after different parts of the United States. And they do have vaguely kind of ethnic characteristics. The Seattle breed is husky with dark hair, for example. And I mean, obviously, this is Emshwiller really pointing the finger at humans for doing exactly what you're describing with dogs, a really extreme eugenics program that's in some ways cruel. I mean, it's a way of breeding animals who are going to struggle to have a normal life because their lung capacity is reduced or they have other kinds of physical problems.
Charlie Jane: [00:39:56] Yeah. It's important to note that humans have a lot less genetic diversity than a lot of other species. Like, there's not that much genetic difference between any particular human and any other human. We don't have that kind of range; there's much more genetic diversity among dogs and other creatures. So when you imagine selectively breeding humans for particular traits, you're not starting with a huge gene pool. You're not starting with a huge set of differences to begin with.
Annalee: [00:40:21] Yeah.
Charlie Jane: [00:40:21] There aren't humans that are the equivalent of huskies or pomeranians out there.
Annalee: [00:40:28] The wiener dog of humans.
Charlie Jane: [00:40:30] The wiener dog of humans. Yeah, exactly.
Annalee: [00:40:32] I always think about people who crossbreed like a dachshund with a husky or something like that. Like that's just not possible among humans. But it is a kind of science fictional fantasy to create another type of maybe synthetic creature, maybe like an artificial intelligence, that we could iteratively design to evolve into a very specialized kind of human intelligence.
[00:41:03] Or it could have a really specialized body, it could be a human equivalent intelligence in the body of a squid, so that it could do underwater activities.
Charlie Jane: [00:41:15] Yeah, it's so interesting. I mean, I think that really, when I think about this weird program we've embarked upon with dogs, which does feel in some ways kind of cruel, but also, it's fascinating, the extent to which we've been able to exert this power over their biology.
[00:41:33] I sort of think about this overall fallacy that humans are in control over the natural world, that we have mastered the natural world, our own environment, to the point where we can predict it, we can control it, we can turn it to our purposes. And of course, I feel like the lesson of the last few decades, and I think even more of the lesson of the next few decades, is going to be humans, having fucked around, are going to find out and are going to discover the extent to which we really do not control the natural world at all.
[00:42:08] And I think that that's an interesting way to think about it because I mean, part of what we've done is we've driven so many other species to extinction and we've proliferated the species that we do control, like cattle or anything that we eat, anything that we use, anything that's domesticated within our homes or within our farms, we just have a ton of them, but other species, we just are like, no, they can die out.
[00:42:33] And I think that, you know, we're going to find out why that wasn't a really wise choice.
Annalee: [00:42:37] Yeah, when you put it that way, it's kind of like the human experiment with the climate is kind of a selective breeding program for particular kinds of weather. That’s not turned out really very well.
[00:42:53] The other thought that I had, just kind of going back to our conversation with Alex and Emily is that domesticating dogs is kind of a model for how we might form relationships with artificial intelligences.
[00:43:09] So I know that sounds kind of weird, so hear me out for just a second, which is, basically it seems like humans first started collaborating with dogs many thousands of years ago for hunting purposes. Basically, dogs were an assistive technology. So, bands of humans would go looking for game, and dogs could help kill it or help find it, sniff it out.
[00:43:35] And I think that that's exactly what AI is being positioned as, that it's an assistive technology for the kinds of labor that we do now, which is largely intellectual labor and not chasing down mastodons or mammoths. So, I think about many of the things that we've done to dogs, like many of the assumptions we've made, like, oh, dogs are our quote-unquote “best friend.” They're our assistants, but at the same time, they are absolutely, a hundred percent, inferior.
[00:44:08] And there's actually not a lot of ethical concern about the kinds of breeding programs that they've undergone. And I don't mean to say that no one's questioning that ethically. Obviously a ton of people are concerned about those ethics, and some have obviously very extreme responses to it. But also, just in general, it is popular to say, ooh, there's something bad about overbreeding dogs to this particular point.
Charlie Jane: [00:44:35] Or people boycott breeders. There's a whole movement to boycott breeders, now.
Annalee: [00:44:37] Yeah. And puppy mills and things like that.
[00:44:40] So I think that there is a lively debate about this, but at the same time, there's thousands and thousands of years of no debate whatsoever about this. And I do believe that that's similar to how we're kind of regarding AI. It's like, this is great, it's going to be kind of your friend, your assistant. Literally, they're referred to as assistants, but they will always be taught and trained to know their place, and we use these terms like training and all of the language that we use around domesticating dogs—
Charlie Jane: [00:45:14] So true.
Annalee: [00:45:14] —we also use for AI. So, yeah, I mean, I guess we can look forward to in 30,000 years, some kind of weird aesthetic breeding program for AI. My AI has a really smashed nose. It's so cute.
Charlie Jane: [00:45:31] I mean, I think it's interesting because you’ve now made me think about how AI to some extent is being treated like our dogs. But it’s been pointed out many times that dog is God backwards and like there's this idea that like we are creating something that eventually will become a God, but we need to train it so that when it becomes a God it'll still think of itself as our pet kind of.
[00:45:58] And that's such a weird thing to wrap your mind around if you think about it, like that notion that this thing that we kind of... Because you're right, a lot of the way we talk about AI is similar to the way we talk about pets. And part of what popularized AI was, like, back in the day, people had these AI pets, these Tamagotchis that they would mess around with.
Annalee: [00:46:17] Yeah, and one of the first really popular home robots was the AIBO dog.
Charlie Jane: [00:46:22] Right!
Annalee: [00:46:22] And it did have a very early version of what, today, we would call AI, where it could learn certain things and it could, adapt to its environment.
Charlie Jane: [00:46:31] Yeah, it’s so interesting.
Annalee: [00:46:32] So, we are creating an AI dog army and I think that it's good to keep in mind a lot of the ethical questions that we've confronted in our relationship with dogs and be on the lookout for when those crop up in our conversations about AI. Not just because I'm sort of...
Charlie Jane: [00:46:52] Justice for big dog!
Annalee: [00:46:53] Right, justice for big dog. Exactly. Justice for AI. I don't necessarily believe that we have to be worried right now that AI might be suffering, in the same way a dog would suffer. But, we maybe should be concerned about that. So, good. It's a good model to have in mind. The domestication model is always an interesting one to think about in the context of our technology and politics.
[00:47:20] All right. Thank you so much for joining us for this episode. You've been listening to Our Opinions Are Correct. Remember, you can find us on Mastodon, Patreon, and Instagram, and if you want to become a supporter through Patreon, we would really appreciate it. We would love it if you would like and review us on any of the podcast apps where you get your podcasts.
[00:47:46] And thank you so much to our brilliant producer and engineer, Naya Harmon. Thanks so much to Chris Palmer and Katya Lopez Nichols for the music. And we will see you in Discord if you're a patron and if not, we'll hear you later or we'll talk to you later. You'll hear us later. Bye!
Charlie Jane: [00:48:01] Bye!
[00:48:01] [OOAC theme plays. Science fictiony synth noises over an energetic, jazzy drum line.]