Episode 140: Transcript

Episode 140: Why Silicon Valley is Doomed to Misunderstand the Future, with Dr. Joy Buolamwini

Transcription by Keffy



Annalee: [00:00:00] So Charlie Jane, when was the last time you heard about virtual reality? Like in any context, could be like advertising, could be in a movie, like last time you heard about it?

Charlie Jane: [00:00:10] Probably like six months ago. I mean, it feels, my sense of time is really kind of attenuated at this point.

Annalee: [00:00:17] Fair enough.

Charlie Jane: [00:00:17] I feel like I’m now like Spock in Wrath of Khan, days feel like weeks, weeks feel like months, everything is kind of like… But I feel like six months ago, I was still hearing like the tiny little chittering creatures that go around being like, “Metaverse! Metaverse! Virtual reality!” You know, those little goblins that used to land on your shoulder and start just chittering to you about Metaverse? Like, they still were chittering on my shoulder a little bit six months ago, but now they're just gone. 

Annalee: [00:00:44] Yeah, I feel like, probably because I'm a little bit more immersed in tech media, tech news. Like, I hear about it all the time. I literally just heard a long conversation about it because, Facebook just had its Meta conference. Or I should say Meta just had its Meta conference, where they talked a little bit about it.

Charlie Jane: [00:01:03] That's so meta.

Annalee: [00:01:03] It is so meta. They talked about their augmented reality offerings and did some demos, which sounded fine, but not particularly inspiring. And of course, Apple has a virtual reality faceputer that's supposed to be coming down the pike soon.

Charlie Jane: [00:01:21] Of course they do.

Annalee: [00:01:23] So I feel like I hear about it a lot, but almost entirely in the context of Silicon Valley products, not in science fiction.

[00:01:31] So, here's my next question, which is, when was the last time you encountered some kind of virtual reality or augmented reality technology in real life? Like either, a person doing it in front of you, you doing it?

Charlie Jane: [00:01:47] I’m going to say it was before the pandemic. I'm going to say we went to our friend's graduation from VR game design class. And we got to play test all the VR games. And this was like 2018, 2019. And they were actually pretty, some of them were really fun. And I was like, oh yeah, you know, this is definitely a thing that I would mess around with if I had the money to burn, but it wasn't like, I wasn't like, oh my God, this is gonna change my life. It was more like, okay, I'm having fun with this for now. This is cute. 

Annalee: [00:02:15] Yeah. She was learning Unity, which people are still using all the time.

Charlie Jane: [00:02:19] Right. 

Annalee: [00:02:19] I think my last experience with it was playing Pokemon Go.

Charlie Jane: [00:02:26] Oh, wow. 

Annalee: [00:02:26] Which people kind of forget is actually an augmented reality app.

Charlie Jane: [00:02:31] No, it really is.

Annalee: [00:02:31] It’s kind of the template. It’s very crude, obviously, you're just looking at stuff through your phone camera. But in a way, it's very immersive if, especially like in the early pandemic, I was doing a lot of just wandering around my neighborhood playing Pokemon Go, and I don't ever play in battle mode, I just collect, which is part of why I kind of suck at it. But it’s always, I don't know, there was something nice about feeling like my real world was fictional and there were like little creatures everywhere. 

[00:03:08] And of course, when you put it into like pure VR mode, it makes everything green and beautiful. And city streets that are full of like crappy boarded up shopfronts because the pandemic has put everybody out of business, they turn into parks, you know, with trees and stuff. Or with giant glowing spires or whatever.

[00:03:31] But that's like the last time that I did anything with augmented reality or anything even remotely like virtual reality. So it's been years at this point. 

Charlie Jane: [00:03:39] Wow.

Annalee: [00:03:39] And it's funny because it is still the case that we hear that virtual reality, augmented reality are like coming around the bend anytime now, and yet it feels really remote unless you're just, as I was saying, super immersed in the tech field.

Charlie Jane: [00:03:57] Yeah. I mean, I think it really points to two separate problems. One of which is things being overhyped before they're ready. Like at some point, VR and AR will be amazing, but it might be another 10 years away. It's the same with large language models. Like at some point they will actually get better at creating something that feels less uncanny valley in terms of like human speech and like art and stuff, but they're being hyped now and they're not ready.

[00:04:26] And the other problem is just kind of misplaced priorities. Like nobody is actually thinking about like what problems can we solve with this? Like what actual problems that people are dealing with can this solve or what problems can we work towards solving? It's more just like, shiny thing! Nobody actually asked for it, it’s not really helpful or relevant to people's lives, but shiny thing.

[00:04:46] And so I think that this is part of this weird disconnect between Silicon Valley and its customers, kind of. 

Annalee: [00:04:50] Yeah, that is so true and that brings me to what we're talking about today.

[00:04:58] So, you are listening to Our Opinions Are Correct, a podcast about science fiction, science, technology, and everything else. And over the past several months, we've reported out five episodes in a series called Silicon Valley vs. Science Fiction, where we explored the influence that science fiction has had on Silicon Valley products, but also the ways that tech leaders are using sci-fi as a kind of propaganda to sell things.

[00:05:26] And today, we're going to do is talk about what that series helped us understand about how we as just ordinary people, end users, consumers, are affected by Silicon Valley's obsession with making science fiction into a reality. 

[00:05:43] And then we are super lucky to be joined by Dr. Joy Buolamwini. She's the founder of the Algorithmic Justice League and her work on algorithmic bias was featured in a really awesome documentary that I highly recommend called Coded Bias, and now she has a new book out about AI that she's going to talk to us about.

[00:06:03] She also, by the way, just a few months ago was meeting with President Biden to talk about the dangers of AI, so I really like the idea that now we're going to be starting to have guests who, you know, they meet with the president and then they meet with us. 

Charlie Jane: [00:06:17] In increasing order of importance, obviously.

Annalee: [00:06:19] Exactly. Exactly. So, I’m Annalee Newitz. I'm a science journalist who writes science fiction, and my latest book is The Terraformers. 

Charlie Jane: [00:06:28] I'm Charlie Jane Anders. I'm kind of a weird goofball who sometimes writes science fiction and fantasy, and my latest book is Promises Stronger Than Darkness.

Annalee: [00:06:39] And on our mini episode next week for our Patreon supporters, we will be talking about two of the best known recent works of feminist science fiction, Everything Everywhere All at Once and Barbie.

Charlie Jane: [00:06:52] That's such a great combo, and actually that reminds me, Our Opinions Are Correct... We're not a Silicon Valley juggernaut. We're not VC funded. Actually, our VC funding, the check bounced. I don't know what happened. They won't return my calls now. We were supposed to get like level 99 seed and sprug founding and funding.

Annalee: [00:07:13] Something about Silicon Valley Bank not being a concern anymore? I don't know. 

Charlie Jane: [00:07:18] I don't know. It was a little weird. 

Annalee: [00:07:18]  Yeah, it’s weird.

Charlie Jane: [00:07:18] I don't know. So, the check bounced. We don't have that funding right now. So, we're entirely dependent on you, our listeners, to keep us going and to just allow us to keep doing what we do. 

[00:07:31] And this all takes place through Patreon. So, you know, if you become a patron, you become part of our community. You hang out with us on Discord all the time. You get a mini episode every other week in between episodes and you just, you get to help keep us doing what we do and spouting off, you know, reasonably correct opinions. And you know, whatever you can afford to give, a few bucks a month, 20 bucks a month, it all is really appreciated and all goes back into making our opinions even more correct. Find out more at patreon.com/ouropinionsarecorrect. Okay, let's get into Silicon Valley.

[00:08:04] [OOAC theme plays. Science fictiony synth noises over an energetic, jazzy drum line.]

Annalee: [00:08:39] Okay, so one of the episodes that we did that I found most fascinating was the one about Ayn Rand, partly because you went through and really got into her life and what her books were actually about as opposed to what we hear that they're about from Silicon Valley billionaires. 

[00:08:58] And one of the things that came up again and again was that her work has really fed into this idea in Silicon Valley of the difficult genius who kind of leads a tech company all by himself. It's always a guy. And it's just this sort of individualism, but also an exceptionalism, like an exceptional individual. And I wonder, like, when we think about how this stuff affects consumers, how do you see that kind of ideal of the Silicon Valley leader affecting what we get as consumers?

Charlie Jane: [00:09:38] Yeah, I think it affects it in two ways. One is that you have this individualism, which means that there's a guy. It's always a guy, usually. And he's the visionary and whatever he decides to give us is what we should be happy for and there's not a lot of decision-making that involves a lot of the people who are going to use the technology. A lot of types of people including people of color, women, queer people, disabled people are not part of the Silicon Valley elite and don't get to make these choices. So, we're just sort of like, whatever this guy thinks is cool, the super genius guy is what we get. That's the first way. 

[00:10:14] The other way that I think that this obsession with individualism really affects us is that we get products that are aimed at making us more discrete individuals, that are aimed at kind of isolating us in a way. I feel like technology was supposed to bring us together. This is a cliche. Everybody says this. Technology was supposed to bring us together, but instead it isolates us. 

[00:10:34] And I'll give you an example. I was thinking this morning about health. Now health is a perfect example of something that is actually a collective problem. Our health problems are communal. Like a lot of the—

Annalee: [00:10:49] As we recently learned in the pandemic.

Charlie Jane: [00:10:51] Exactly, as we had to be hit over the head with a two by four to find out during the pandemic, our health problems are communal. If you get sick, you're going to make me sick. If you smoke, you're going to give me lung cancer. If you know, if healthcare is unaffordable, if health providers have terrible incentives because of policies that have been set at the state and national level, we all suffer. There's so much about our health situation that is communal, that is societal. 

[00:11:21] But Silicon Valley looks at this and is like, how can you as an individual, make your health better through like health apps and quantified self? And there's nothing wrong with that stuff. Some of it can be really helpful. I got my mom a Fitbit. She loves it. You know, there are ways in which you can individually track your health. But in the end, we have technology that is about individual health choices. We don't have technology that helps us to kind of collectively make our health better because Silicon Valley believes that the individual is all that matters. 

Annalee: [00:11:56] Yeah, I really think that's true. And it does go back to a truism about Silicon Valley, which is that Silicon Valley leaders design technology for themselves and not for the public. Even when they're designing tech which should be for the public good, like health or transit, instead of getting a Tesla public bus system powered with electricity and made with sustainable materials, we get Tesla cars, which are dangerous and are clogging up the roads and getting into all kinds of crashes.

[00:12:39] So, I think that this myth of the perfect leader who always knows what's best for us, whether it's someone from an Ayn Rand book or Steve Jobs or Elon Musk, it's really toxic. It's really affecting what we are able to buy that might indeed help the community, but instead doesn't.

Charlie Jane: [00:13:06] And I think it's really important because I feel like in the previous episodes of Silicon Valley vs. Science Fiction, we kind of talked about it in broad strokes, in terms of how there are these weird ideas, but I think it's really important to bring it back to this actually affects you, our listener, in terms of like the technology you're using and the apps that you're installing. Like it's not just some theoretical like, oh, this is off in the distance. It leads to bad outcomes, especially for marginalized people. 

Annalee: [00:13:33] Yeah, I was just talking with Paris Marx, who does the Tech Won't Save Us podcast, which I really love. And we were talking about—

Charlie Jane: [00:13:45] Same.

Annalee: [00:13:45] Yeah, super great, highly recommend.

[00:13:45] And they were talking to me about how science fiction kind of inspires Silicon Valley leaders, but also, you know, products. And Paris was asking, are these people really science fiction fans, or is it just that they kind of like the aesthetic? And I thought that was really interesting.

[00:14:07] I think they really are science fiction fans. I want to say like, I don't think it's like people in Silicon Valley are pretending to read science fiction and kind of using it retroactively to justify what they're doing. But I think there's something to the idea that they're pulling an aesthetic or a vibe from science fiction and not looking at the context. And I think that's why you get people pouring money into things like cryptocurrencies, the metaverse, NFTs, like all of the kind of financial instruments that are enabled by blockchain technology. Because they sound like a future that you would read about in a 1950s novel, you know, it sounds like something from Isaac Asimov or from like really early Star Trek, and it's not something that reflects what we need right now in 2023.

[00:15:09] You never, almost never hear Silicon Valley leaders talking about reading like N. K. Jemisin or Paolo Bacigalupi, or—

Charlie Jane: [00:15:19] Yeah, and actually that kind of gets at it. I was going to say, I think a lot of these people did read a lot of science fiction, maybe in their 20s, and they absorbed a ton of old science fiction, but they're not reading what's being published now.

Annalee: [00:15:30] Yeah, and if they were reading the science fiction that's being published now, like if they say, I don't know, picked up a Becky Chambers novel, or picked up a Fonda Lee book, even. They would see that science fiction right now is really concerned with the environment. It's really concerned with the problem of authoritarian leadership, the problem of leadership that relies on exceptionalism.

[00:16:00] And it's not all about celebrating great men who lead us into a shiny future. And in fact, what people really want out of their stories are technologies that are going to help us survive climate change.

Charlie Jane: [00:16:14] Absolutely.

Annalee: [00:16:14] And that's what we want as products, and we're just not getting it. Like, Tesla cars are not it.

Charlie Jane: [00:16:21] Yeah. And like you kind of said, we don't really need self-driving cars. What we really need is more trains and buses. Like we need public transit. We need green spaces. 

Annalee: [00:16:34] Yeah. We need new materials to build with that are more sustainable to create. Like, how about not using concrete and asphalt anymore, or how about modifying the production of them? I mean, all of that stuff is technological, that's all the kind of machines that Silicon Valley could be working on.

[00:16:58] That also reminds me of the episode we did about smart homes where we talked to Jacqui Cheng, who ran Wirecutter and Sweet Home for a long time.

Charlie Jane: [00:17:06] That was so great. I love her. 

Annalee: [00:17:09] Yeah. So what did you think about what she had to say about consumer technologies that had been made smart?

Charlie Jane: [00:17:14] I mean, it was so interesting because it kind of got to the root of some of the gender stuff that we kind of hinted at. Like the idea that when people think about sexy technologies, they think about technologies that cis men are interested in, like cars and phones. And they don't think about, like, the technologies that actually, arguably affect our lives the most, like our kitchen technologies and our home technologies.

[00:17:38] But also just that this was a classic case of the shiny future that someone had daydreamed about not actually being very useful. Like, I don't want a smart toaster. I don't want a smart refrigerator. I don't want any of my kitchen appliances to be smart because it's not actually, there's no actual benefit to it. It's another point of failure. It's another thing that somebody can hack into and fuck with you. It just doesn't feel like a great use of technology. 

Annalee: [00:18:09] The fridge is finished. Like, we're done. Like, the fridge is amazing. It's the culmination of centuries of technological development. And it's, it's great. We can change the form factor, we can make it more sustainable, we can run it on better kinds of energy. But it’s done. And I love that Jacqui had that story about how she got a smart coffee pot that could evaluate the type of coffee and figure out how to make the water the perfect temperature. And she was, like, I literally never use that as part of the coffee pot. 

Charlie Jane: [00:18:45] Yeah. So, Annalee, I wanted to tell you about a journey I went on this morning after we hung out this morning. So, I feel like I saw a lot of think pieces like three or four, maybe five years ago about how technology had kind of taken a wrong turn at some point in the late ‘90s or early 2000s. And that we were no longer creating new technologies that were actually going to take us into like a better, more sustainable, more livable future. And instead, we were getting technologies that were kind of trivial, that were not actually pro-social in any way. I feel like I was seeing a lot of think pieces that were saying that not just from the left, but also from the right. Like there was a general dissatisfaction. 

[00:19:22] So, I went on Google and I tried every search term I could think of for why did technology take a wrong turn, where did technology go wrong, blah, blah, blah, and you'll never guess what happened. I got garbage. All the search results, no matter what string I put in.

Annalee: [00:19:42] I was gonna say Bard gave you bad answers.

Charlie Jane: [00:19:44] I got nothing. Basically, like, if I say, “Why did technology take a wrong turn?” I only got results about a movie that Eliza Dushku was in called Wrong Turn, which was some kind of horror movie. If I said, you know, how did technology go wrong 20 years ago, I got results about Y2K. And it was like, Google just did not want me to find what I was looking for. And it was kind of—

Annalee: [00:20:06] That is like the Bard powered, AI powered search. Bard is their AI. 

Charlie Jane: [00:20:11] Yeah. 

Annalee: [00:20:11] And I was talking to a friend of mine who works a lot with GPT-4, which is OpenAI's large language model. And he said that he was asking GPT-4 how to convince his son not to use it to write his papers. Because his son kept asking him, Dad, can I have access to the GPT-4 account so I can write a paper? And he was like, GPT-4, tell me, how can I convince him not to? And it wouldn't answer that. All it would say is here's a bunch of ways that he could use it to do research, but not to write. Or he could use it to help him make an outline but not to write. And so there was no pathway out of using GPT-4. It was just that you could use it in ways that were not technically plagiarizing.

Charlie Jane: [00:21:04] It's almost like they don't want you to not use GPT-4. Just, the experience of like, our technology has gotten so bad that I couldn't get it to tell me why our technology has gotten so bad, was so kind of like, it felt like such a 2023 moment. Because even a year ago, I would not have expected to have such a frustrating time doing a Google search.

[00:21:24] It just feels emblematic. It feels like we've crossed some kind of Rubicon or whatever in terms of the crappiness of our… what Cory Doctorow would call the enshittification of pretty much all technology. And it does feel like this comes at a moment when our tech leaders are much more chest-thumping about their own status as maverick geniuses who are too brilliant and cool to listen to anybody. While they're more and more hyperbolic about how they're going to create AGI and how they're going to create all this, like, incredible… the singularity is going to happen in like three days. I've got it marked on my calendar.

Annalee: [00:22:07] We're all going to become post human. 

Charlie Jane: [00:22:11] We're all going to just like barf nanites. I don't know. It's going to be great. 

Annalee: [00:22:16] Yeah. As I was reflecting on the series, one of the things I really came away with was that science fiction is propaganda in Silicon Valley and that a lot of what we're dealing with are product cycles that are inspired not by actually looking at the world as it is or looking at the likely future we will have, and trying to meet the needs of actual people. Instead, it's all about, and this is, I think, where long-termism comes into this, it's all about this one future, which is based on science fiction written in the 20th century, and how can we stay on that path toward a future where we have great men leading companies engaging in extractive practices who then change the world?

[00:23:07] Ultimately, what really concerns me is that consumers and regular users are being given this sci-fi propaganda instead of the tools that they need and we're being told that these tools are actually giving us a better future and yet we look around and we're like, well, how is it that social media is giving us a better future?

[00:23:28] How are large language models doing this? My phone, which I can't recycle and which I have to replace every two or three years, how is that helping me have a better future? I don't see a connection between this collection of dead phones I have on my shelf and the ozone hole, or mitigating the results of carbon loading in the atmosphere.

[00:24:02] And so, that's the problem with propaganda, right? Is that it kind of pulls a curtain over your eyes so you can't see the future clearly that's going to be the sustainable future that you can actually live in. 

[00:24:13] So, I think my goal as a science fiction writer, but also as someone who thinks about science is to focus on stories that will actually help us reach a future that's more sustainable and more democratic. 

Charlie Jane: [00:24:27] Yeah, and I just wanted to add that you can love science fiction, and I believe that a lot of people we're talking about do love science fiction, and then also turn around and cynically use it as propaganda to promote your latest widget that you know on some deep level is actually just bullshit.

Annalee: [00:24:42] Yeah, so if you're interested in hearing more about any of these ideas, you can go back and check out the five episodes in our Silicon Valley vs. Science Fiction series. And next, we are going to talk to Dr. Joy Buolamwini, who knows a lot about the dangers of propaganda around technology.

[00:25:02] [OOAC session break music, a quick little synth bwoop bwoo.]

Annalee: [00:25:06] And now we're so happy to have Dr. Joy Buolamwini joining us to talk about her new book, Unmasking AI, My Mission to Protect What Is Human in a World of Machines. Dr. Joy is an engineer and a poet who has won a number of awards, including a Rhodes Scholarship and the Technological Innovation Award from the Martin Luther King, Jr. Center for Nonviolent Social Change. She founded the Algorithmic Justice League, and her work on bias in facial recognition systems at MIT was featured in the documentary, Coded Bias. 

[00:25:37] Welcome to the show, Joy.

Joy: [00:25:39] Thank you so much for having me. I'm excited to be here. 

Annalee: [00:25:44] Yeah. Your book was so terrific and a really great way of summing up a lot of the different work that you've done. And I wondered if we could start by talking about this idea of the coded gaze, which is a concept you've been developing for a while in your work. So, what is the coded gaze, and how do we challenge it?

Joy: [00:26:03] That’s such a great question. People listening might be familiar with terms like the male gaze or the white gaze, and the coded gaze is a cousin concept to that, right?

[00:26:14] So, when we talk about the male gaze, it's the ways in which men's perspectives of what is important, what is worthy, can be prioritized. And similarly, when we think of the white gaze, we can have a similar understanding when it comes to what's prioritized from a racial lens. So, the coded gaze borrows from these concepts and it's really about who has the power to shape the priorities, the preferences, and even the prejudices that shape the technology we use. And very much so, this has implications for artificial intelligence.

Annalee: [00:26:53] So, I wonder if you can talk just a little bit about how we would challenge the coded gaze. One of the anecdotes that you have in your book is this really powerful story about a group of tenants in New York living in Atlantic Towers and the struggle they had with a secure access company called Stone Lock.

[00:27:15] So, can you talk about how that very everyday experience was informed by the coded gaze and how you fought back? 

Joy: [00:27:24] Absolutely. Well, at that time when I met with the Brooklyn tenants, it was after my research, Gender Shades, came out. And Gender Shades is the research I did as a graduate student at MIT that showed gender bias and skin type bias in commercial AI products that analyze people's faces.

[00:27:43] So, the reason I even got in contact with the Brooklyn tenants is that there was a legal clinic that was working with them. They had seen my research and they reached out because they heard that there was an installation of a facial recognition-based entry system that was being basically forced on the tenants and they had questions. They wanted to know if, one, would it even work on their faces? But even if it did, right, even if you had flawless technology, what was going to happen to their data? Could this data be given to police? We've seen this happen with Ring cameras. So, all of their concerns were valid. 

[00:28:29] And basically, what I did with the Algorithmic Justice League is I got together with other well-known researchers in the space and we wrote an amicus letter of support, verifying everything that the tenants themselves were saying with some of the research that existed.

[00:28:46] So, in fact, yes, there were problems when it came to the type of facial recognition system they wanted to install and the company had tested it with I believe Fortune 500 or maybe Fortune 50 companies, which is not the same context as living in your residence, right? And so that's how we even came into a conversation.

[00:29:14] But what I want to emphasize here is the tenants were already organizing themselves. We did what the Algorithmic Justice League tends to do, which is to provide empirical evidence that others can use to support resistance campaigns. We also will stand up to powers that be when it's necessary, like we did with the IRS's adoption of facial recognition from a third party vendor, ID.me, etc. 

[00:29:41] But I do think it's so important that we celebrate people who are on the front lines fighting for algorithmic justice and pushing back against these systems. That's actually why earlier this year, we did the Gender Shades Justice Award. And the first recipient was Robert Williams, who was falsely arrested due to a facial recognition misidentification. And this was some of the concerns of the Brooklyn tenants, right? What happens to our data? What if the data or information that's supposed to just be linked to getting into the apartment then gets put into some other kind of lineup and you're erroneously flagged? So, all of those concerns are valid.

[00:30:25] And in the case of Robert Williams, he was arrested in front of his two young daughters. This happened in Detroit. This year, 2023, Portia Woodruff was arrested, eight months pregnant, sitting in a holding cell, right? And she even reported having contractions. When they finally let her out, she had to be rushed to the emergency room. Another false facial recognition match, three years after you already knew this was a problem. And two other people had already been falsely arrested that we know of from that same police department.

[00:31:01] And so, I think the stories of the Brooklyn tenants pushing back is important. I think the stories of other people who've been X coded like Portia and Robert, are also important because they're pushing back. They are suing, right? They're saying this is not okay. 

Charlie Jane: [00:31:19] Yeah, I'm so glad they're fighting that. Yeah.

Annalee: [00:31:23] And this is because, especially in the Atlantic Towers, you were saying a lot of the tenants are brown or dark-skinned. And so, the facial recognition is going to be much less accurate. That's part of the coded gaze. 

Joy: [00:31:36] That's part of it. And also, many of them are elderly. And so, this is another aspect of the coded gaze that doesn't get as much focus, but it's really important. There's quite a bit of ageism, as well, that's embedded in these systems. And so, studies do show on both sides of the age spectrum that the faces of elderly people, these systems tend to fail more often on, as well as youthful faces. And I cringe any time I see people proposing the adoption of facial recognition in schools, particularly at that stage of life where your face morphology is changing so rapidly.

[00:32:19] And furthermore in one way that's good because there are more data protections for kids. Usually, you're not going to have as many kids’ face data in these data sets. Which also means it's likely the systems are not gonna work as well for them. But just like you were talking about that misidentification, right? It doesn't mean decisions still can't be made. And we've seen this with the introduction of some of these e-proctoring tools. There was a student in the Netherlands, she talked about trying to take a test, shining all these lights on her face just to be seen.

Charlie Jane: [00:32:57] Yeah, that was horrifying, that part of the book. I just read that part.

Joy: [00:33:01] I did want to share one more. There was a Muslim student, right, who had her face covered, and this I don't share in the book. But this particular student, I don't think this is a widely known story. When the system looked at her in her face covering that she wears for religious reasons, it said that a hand was covering her face, which indeed it was not. So now here, this is before even factoring in the stress of taking the test, right?

Charlie Jane: [00:33:34] Yeah, and to be clear that those e-proctoring systems, they're designed to kind of see if you’re cheating or if you're paying attention or if you're doing anything irregular while you take the test and they're just more likely to fail for marginalized students of various types.

Joy: [00:33:48] Absolutely. I think it's important to think about the different types of students it can fail on. Think about students with invisible disabilities, like let's say someone has Crohn's. You know, someone has Crohn's and they are getting up often, you know to take care of business and that can be flagged as being irregular.

[00:34:11] Someone might have Tourette's, you know, and so the way in which they shift or they move could be different. Someone might have ADHD. There's so many ways in which you can be X coded. And so, I do think race and gender are entry points into that conversation, and it's also so important to know that there are more aspects, too.

Charlie Jane: [00:34:35] Yeah, and one thing I loved about your book is the way that you made it clear that this isn't just about race, it's also about disability, it's also about gender. Trans people are affected by these systems. And if we build a ubiquitous surveillance system that's automated based on this AI and facial recognition, it's going to hurt a lot of people. 

[00:34:56] So, actually, I wanted to pivot slightly. You know, the first concept I learned in computer science or one of the first concepts was garbage in, garbage out. And I think that that's still a really important maxim, especially when we talk about AI. And one of the things that you talk about in the book is that the training data for these algorithms is one of the places where we need kind of ethical analysis rather than just technical analysis. Can you walk us through why training data is so important and why it's so bad that there's basically no transparency about it? 

Joy: [00:35:23] That's such a great point. And yes, garbage in, garbage out, that principle continues to matter quite a bit. What I'm also seeing is sometimes people don't know they're putting in garbage. And so let's go back to your question, right? How does data connect to this whole conversation? Well, you've probably heard about AI, and within AI, an approach to AI is machine learning. So, what are the machines learning from? They're learning patterns from data sets. So, this could be the pattern of a face. It could also be something like what we see with generative AI systems. The pattern of what language looks like. Or if you're producing audio, sounds like. And so, this means that the machine learning model that's created, the training data gives it its experience of the world. 

[00:36:22] So now if you have a training data set that, let’s say, is largely male faces or largely white faces, it's not so surprising, and what my research was looking into, was that some of the measures we were using… So, after you've trained your model, right, we want to know how well it works. So, you have your training data and then you have another type of data set that's your benchmark data set. What I found was these benchmark data sets were actually giving us a misleading sense of progress. So, that's why I was saying, like, you might not even know there's garbage in it if the measures themselves don't let you see.

[00:36:59] And so what was happening, when I was looking at some of these benchmarks, one of the most popular ones, Labeled Faces in the Wild, was considered a gold standard. And it was about, I want to say, over 70 percent male and over 80 percent lighter-skinned individuals. I looked at a government data set that had been made public at the time. And when I looked for women of color, like me, they were about 4.4% of the data set, which meant you could fail on all of them and still be in the 90s, right? You can still get an A on the test. 

[00:37:35] And so, I realized that we had to change the measures we were using to assess progress with, in my case, looking at facial analysis systems, but more broadly, any type of machine learning system that's learning from data around humans. Because we were kidding ourselves with what the progress looked like because of the universal idea that, oh, if it worked well on the benchmark, it means it works well on everybody. And that wasn't exactly what was happening.

[00:38:07] And so that was a big part of what the research I was doing was showing. When you change the measures of success, how do these companies perform? And we found that changing the test definitely changed the overall results so they weren't as robust and as advanced as was initially assumed based on the more pale male data sets that were being used to assess how well they worked. 
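To make the arithmetic Dr. Joy describes concrete, here is a minimal sketch in Python with made-up numbers. The group labels and counts below are illustrative assumptions, not the actual benchmark data: the point is only that if a group makes up 4.4 percent of a benchmark, a model can fail on every member of that group and still report an overall accuracy above 95 percent, which is why disaggregated, per-group evaluation of the kind used in Gender Shades matters.

```python
# Illustrative sketch (assumed, made-up numbers): aggregate benchmark accuracy
# can hide complete failure on a small subgroup.
from collections import defaultdict

# Hypothetical benchmark of 1,000 faces: each record is (group, model_was_correct).
benchmark = (
    [("lighter_male", True)] * 720      # model is right on everyone in these groups
    + [("lighter_female", True)] * 180
    + [("darker_male", True)] * 56
    + [("darker_female", False)] * 44   # 4.4% of the data, model fails on all of them
)

# Aggregate accuracy looks like an "A".
overall = sum(correct for _, correct in benchmark) / len(benchmark)
print(f"Overall accuracy: {overall:.1%}")  # 95.6%

# Disaggregated (per-group) accuracy exposes the failure.
totals, hits = defaultdict(int), defaultdict(int)
for group, correct in benchmark:
    totals[group] += 1
    hits[group] += correct

for group in totals:
    print(f"{group}: {hits[group] / totals[group]:.1%}")
# darker_female comes out at 0.0%, invisible in the aggregate number.
```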

Annalee: [00:38:34] So you've talked about the need to pass the Facial Recognition and Biometric Technology Moratorium Act and an AI Bill of Rights. And people in the AI industry, they keep saying that they want to be regulated, but how do you think AI should be regulated, and do you think that AI entrepreneurs are gonna welcome that kind of regulation?

Joy: [00:38:57] It's a really interesting conversation around AI governance and AI regulation. When we look at Europe, for example, right now we're in the late stages of passing the EU AI Act. And in the EU AI Act, there's a piece of it that I think lawmakers in the U.S. can learn from, right? Which are explicit restrictions on high-risk uses of AI. And one of those restrictions is a ban on face surveillance. So, you cannot use facial recognition in public spaces. So, I think we can learn from that, right? Establishing clear red lines. 

[00:39:45] What I appreciate about the blueprint for an AI Bill of Rights is it outlines what should be in place for AI systems, right? So, they should be safe and effective. Portia Woodruff being arrested, that was not safe or effective for Portia or for Robert or for Randall Reed, who's also now suing. He was arrested for a crime in Louisiana. He'd never been to Louisiana. He was in Georgia, right? 

Annalee: [00:40:12] Oh, man. 

Joy: [00:40:11] And so that's one part of the AI Bill of Rights. Another part is that everyone should be protected from algorithmic discrimination, and this could happen in all manners of life. There are predictive AI models being used in health care that actually say patients don't need the care that is required and so their coverage gets cut short even though their insurance would have covered it if not for an algorithmic model saying that this patient doesn't need the care they do need. 

[00:40:49] So, another area, right, where you want to make sure that we have protections from algorithmic discrimination. Another piece that's so important goes back to data privacy. And on the data side, so much of what we're seeing with current AI models, particularly those called the general purpose models, or some might call foundational models, is that it's based on a foundation of data theft, because so much of that data has been collected without compensation and without permission. And we continue to see lawsuits. There will be more litigation. And in the book, I write that, as a computer scientist, the way I learned to do this type of work was: if the data was available, it was available for the taking.

[00:41:41] When I would ask questions about, do we need consent, etc., you know, peers and older scholars were like, this is computer vision, these aren't the types of questions we ask. And also, what I realized quickly is, it would make our job a lot harder if we had to get consent. It would be difficult to collect millions or billions of photos if you had to have consent in some manner, especially if you then had to pay people. But if you are creating products based on data taken without permission, that's unfair and it's unethical. And I don't think any company can claim to be responsible by building on a foundation of stolen data. 

Charlie Jane: [00:42:29] Yeah. So, I read an interview on NPR where you said that we're, quote, “Automating inequality through weapons of math destruction,” and I loved that phrase. And, you know, I've seen it so often with tech companies using technology to discriminate. I wrote about digital redlining back when I was a financial journalist. How do we use technology to reduce inequality and bias instead of increasing it? Is there a way that we could be using algorithms to create more opportunities and more space for marginalized people? 

Joy: [00:42:59] I think we have to be really careful about how much power we ascribe to the algorithms versus the decision makers. I even think of something like risk assessments. Generally, risk assessments are introduced as a way of helping those who are at risk get more resources. That's how they're introduced. How they're used is in a more punitive manner. 

[00:43:25] This was even the case with the development of IQ tests, right, in France. The idea was this was to help those students whose IQ was lower. It was later used in Nazi Germany to justify the killing of people with low IQ. Let's not just put it on Germany. In the U.S., it was, I want to say, in the state of Virginia, you had a law passed where you could sterilize people with low IQs. And the ways in which those IQ measures were put in place were a reflection of socioeconomic status. Even the military used IQ in a way where it was based on racial rankings to decide who would be officers. And so, even the type of risk assessment you put into place can end up being used that way. 

[00:44:19] But this came from a tool that was initially positioned, right, as being a tool to support those in need. So, I say all of this to say, even in the case where the framing is that the AI system is being developed to help, you really have to check what's going on.

[00:44:45] Let me give an example in healthcare. So, for cardiovascular disease, one in three women in the U.S. die of cardiovascular disease, but about a quarter of research participants are women. So, we have a very skewed understanding of cardiovascular disease in the first place. This is where you might say, okay, data can help, right? Or AI models can help. And theoretically they can, but you have to be really careful about how you're collecting the data and who's included.

[00:45:20] For example, let's say you want to use computer vision and you want to look at the buildup of plaque in arteries. Well, it's been shown that depending on the presence of testosterone or how much estrogen, that actually changes the way in which plaque builds up.

[00:45:36] So, let's say you're now training on a largely male data set with more of the presence of testosterone, where you have more of the stalagmite-stalactite type of formation, versus people with more estrogen, where you wouldn't see that as much as you would see narrowing of the arteries. Just that insight alone would inform you that you have to be really careful with how you would develop that system. So, you could truly, from the best intentions, be wanting to create a system that's going to help address issues with cardiovascular disease, and without those insights and much more (this is not my realm of expertise, right?), you can easily see how you can trip yourself up. 

[00:46:30] I think the most important thing is making sure we don't stop with the good intention for how we might use AI tools for good, but that we have the continuous oversight and the transparency, so we ourselves know the limitations from the design, development, and deployment aspect. And I think the other thing that I don't hear discussed as much, which is so crucial, is redress. So, let's say we did our best to, you know, attend to all of the risk and the harms. You've got the tire certified, but you still got a flat on the road, right? There has to be some way of addressing these systems when people are harmed. It can be monetary compensation. It can be actually looking back at the opportunity that wasn't given and thinking about different types of recourse. But I do think redress is also very important. 

Annalee: [00:47:22] Yeah, that's a super important point.

[00:47:26] This is a show where we talk a lot about science fiction. And we recently watched the TV series, Mrs. Davis, which features a young Black AI developer named Joy, who is trying to create an app that leads to social justice and mutual aid. And I was wondering, did you see this show? Did you have any thoughts about it?

Joy: [00:47:44] A friend of mine saw it and she said, “You're in the show.” I was like, what do you mean, I'm in the show? I don't even know what you're talking about. So, then she sent me photos, and not only did the person have my name, but they had my style, right? They had the red right there.

Annalee: [00:48:08] Yeah, it was pretty clearly a tip of the hat to you. 

Charlie Jane: [00:48:11] We were freaking out.

Joy: [00:48:14] I thought the whole thing was really interesting and it actually made me start thinking about, like where the line is between inspiration, and then appropriating someone's likeness. And so, I started thinking of the Black Mirror episode. I'm forgetting the name of this one, but essentially it shows what it looks like when your likeness can be taken wholesale. The example you're showing, right, like they're inspired and I mean the name, the look, others seem to think it's me. But it's not a direct carbon copy. What does it look like now when you can bring Robin Williams from the dead? 

Annalee: [00:49:02] Yeah. 

Joy: [00:49:06] Or Tom Hanks is now, now your spokesperson, you know, even though he didn’t know. It also made me think, again, where is the line? So, I don't know if Mrs. Davis completely crossed the line there, but the other examples certainly do.

Annalee: [00:49:25] Yeah, well, you do kind of almost destroy the world in that show, so. 

Charlie Jane: [00:49:29] Well, it’s debatable. 

Annalee: It’s complicated. Actually, it’s so… anyway, I hope at some point you get to watch it because it is actually about algorithmic bias and like how…

Charlie Jane: [00:49:41] It is a really interesting show, yeah.

Annalee: [00:49:42] How the app, which is well intentioned, kind of gets out of control. But it doesn't destroy the world, it just makes people do silly things. 

[00:49:50] But, anyway, thank you for answering that question, because as sci-fi fans, of course, we had to ask. 

[00:49:57] So, we're at the end of our time. We really appreciate you joining us. Dr. Joy Buolamwini, thank you so much. Thanks for talking about your new book, Unmasking AI. And can you tell us where people can find you online? 

Joy: [00:50:11] Yes, so I am poetofcode.com, and then also @poetofcode on Instagram, @jovialjoy on X, formerly Twitter. I guess I twixt instead of tweeting now. So those are the areas to find me.

[00:50:30] And if you wanna be part of the Algorithmic Justice League and the fight for algorithmic justice, check out AJL.org. 

Annalee: [00:50:38] Awesome. Thanks again.

Charlie Jane: [00:50:39] Thank you so much.

[00:50:42] [OOAC session break music, a quick little synth bwoop bwoo.]

Annalee: [00:50:44] You've been listening to Our Opinions Are Correct. In case you just stumbled on this podcast out of nowhere, you can subscribe to us wherever fine podcasts are purveyed, and please do leave a review if you like it. It really helps people find us. Remember, you can always find us on Mastodon at wandering.shop/ouropinions. You can find us on Patreon at ouropinionsarecorrect and on Instagram at @ouropinions. 

[00:51:11] Thank you so much to our amazing producer, Veronica Simonetti. And thanks to Chris Palmer and Katya Lopez Nichols for our great music. And if you're a patron, we’ll see you on Discord. Otherwise, talk to you in two weeks.

[00:51:26] Bye!

Charlie Jane: [00:51:26] Bye!

[00:51:26] [OOAC theme plays. Science fictiony synth noises over an energetic, jazzy drum line.]

