AI and algorithms influence culture. You should pay attention : Short Wave

Humans hallucinate. Algorithms lie.

At least, that's one difference that Joy Buolamwini and Kyle Chayka want to make clear. When ChatGPT tells you that a book exists when it doesn't – or professes its undying love – that's often called a "hallucination." Buolamwini, a computer scientist, prefers to call it "spicy autocomplete." But not all algorithmic errors are as innocuous. So on today's show, we get into: How do algorithms work? What are their impacts? And how can we speak up about changing them?

This is a shortened version of Joy and Kyle's live interview, moderated by Regina G. Barber, at this year's Library of Congress National Book Festival.

If you liked this episode, check out our other episodes on facial recognition in Gaza, why AI is not a silver bullet and tech companies limiting police use of facial recognition.

Interested in hearing more technology stories? Email us at shortwave@npr.org — we'd love to consider your idea for a future episode!

Algorithms don't just pick playlists. They're changing your life

Transcript

REGINA BARBER: You're listening to Short Wave from NPR. Hey, Short Wavers, Regina Barber here. And we have a special episode for you today, one recorded at the National Book Festival, an annual event hosted by the Library of Congress. Hey, how's everyone doing? There, I had the honor of moderating a panel on algorithms, specifically how computer algorithms are everywhere.

KYLE CHAYKA: We experience a lot of mediation in culture now when we watch a TV show through Netflix or listen to a song through Spotify. Like, all of this digital technology has become a kind of filter.

BARBER: That's Kyle Chayka, a staff writer at The New Yorker and the author of the book Filterworld: How Algorithms Flattened Culture. And, OK, all of that sounds fancy and new. Algorithm is this kind of buzzy word people like to toss around because it sounds cutting-edge and technological. But Kyle says that algorithms-- they can do things like find new songs on Spotify, yes. But at the end of the day, they're very basic. They've been around since the Babylonians, who made equations to calculate all sorts of things, like the volume of dirt dug up to make a ditch.

CHAYKA: Or how to split up a crop or share things between people. And so these were just basic equations-- like, step-by-step mathematical processes to find a result.
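An algorithm in this basic sense fits in a few lines of code. Here is a minimal sketch of the ditch calculation as a step-by-step procedure; the function name and sample measurements are illustrative, not from the episode.

```python
def ditch_volume(length_m: float, width_m: float, depth_m: float) -> float:
    """One of the oldest algorithms around: a fixed sequence of steps
    that turns measurements into an answer."""
    # Step 1: compute the cross-sectional area of the ditch.
    cross_section = width_m * depth_m
    # Step 2: multiply by the length to get the volume of dirt removed.
    return cross_section * length_m

print(ditch_volume(length_m=10.0, width_m=2.0, depth_m=1.5))  # 30.0 cubic meters
```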

BARBER: Dr. Joy Buolamwini agrees with Kyle on the basic nature of algorithms. She's a computer scientist and author of the book Unmasking AI: My Mission to Protect What Is Human in a World of Machines. And she joined Kyle and me on stage to talk about all things algorithms.

JOY BUOLAMWINI: To me, an algorithm is just a sequence of steps to achieve an outcome. So for me, I love video games. I used to love playing Tony Hawk's Pro Skater 2.

BARBER: Mine was, like, Donkey Kong Country. That was a beautiful game in the '90s.

CHAYKA: We could just do Tony Hawk and Donkey Kong for another half hour.

BARBER: Right. We could talk about-- those two games were really, really good. Anyway, continue.

CHAYKA: [LAUGHS]

BUOLAMWINI: Right. So you want a character to move left? You have this input-- this action happens. And so that gets you that early introduction to, what is our language to communicate with computers so that we can program them or shape them to help us express ourselves, solve problems, and so forth?
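Joy's controller example is the same idea: an input comes in, a fixed rule decides what happens. A tiny sketch, with invented key names and moves:

```python
# A minimal input-to-action mapping, the kind of "sequence of steps"
# Joy describes: an input arrives, the program looks up the action.
ACTIONS = {
    "left": lambda x, y: (x - 1, y),
    "right": lambda x, y: (x + 1, y),
    "jump": lambda x, y: (x, y + 1),
}

def handle_input(key: str, position: tuple[int, int]) -> tuple[int, int]:
    move = ACTIONS.get(key)
    return move(*position) if move else position  # unknown keys do nothing

print(handle_input("left", (5, 0)))  # (4, 0)
```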

BARBER: So algorithms can be fun, obviously, but Dr. Joy's book dives into the much more serious and potentially dangerous side of algorithms used in artificial intelligence, like how facial recognition is far from perfect.

BUOLAMWINI: The work that I do, a lot of people know, is around facial recognition technologies. But many of the algorithms that are shaping our lives are the ones you don't see-- the callback you never get because your resume is screened out by a hiring system.

BARBER: Or how facial recognition can amplify racial bias, like how it has trouble even detecting people with darker skin. This software has led to false identifications and even false arrests of Black people.

BUOLAMWINI: So you had people like Robert Williams, falsely arrested in front of his two daughters and his wife for supposedly stealing a watch.

BARBER: So today on the show, the algorithms behind artificial intelligence. At their best, they can expand our worldview. But what happens when they're at their worst? You're listening to Short Wave, the science podcast from NPR. Let's get right back into the conversation at the National Book Festival, talking about algorithms and their connection to AI. So how do we go from software-- like, solving problems, calculations, like Kyle talked about, to software that "thinks for itself"?

BUOLAMWINI: Thinks for itself is interesting.

BARBER: Yeah, well, we have it in quotes here because my boss was like, put it in quotes. "Thinks for itself." Yeah.

BUOLAMWINI: So when I think about AI, I approach it as this ongoing quest to give machines abilities we associate with human intelligence. So whether it's recognizing a face, a puppy, or making a prediction-- are you going to pay back that loan? Are you going to be a good tenant? Are you going to be a good student? These are the types of applications we're seeing with AI. Computer scientists like me-- we quickly realized you're not going to be able to write out all of the instructions. It's messy. It's complicated. There are always exceptions.

BARBER: It's time-consuming.

BUOLAMWINI: Right. And maybe we can imitate the human brain. Maybe we can create pattern recognition systems. So instead of having to code everything line by line, maybe instead, we can present a data set of examples. And so now you have these pattern recognition algorithms that are trained on a data set, which means the quality of that data set is going to determine the quality of your outputs.
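That shift-- from hand-written rules to patterns learned from examples-- is easy to see in a toy model. In this sketch, a one-nearest-neighbor classifier answers only from its training examples, so a gap in the data still produces a confident answer; the data and labels are invented for illustration.

```python
# Toy illustration of "the data set determines the outputs": a 1-nearest-
# neighbor classifier learns only from examples, so gaps in the examples
# become gaps in the predictions.
def nearest_neighbor(train, query):
    """Return the label of the training example closest to the query."""
    return min(train, key=lambda ex: abs(ex[0] - query))[1]

# The training set only covers small values...
train = [(1.0, "cat"), (2.0, "cat"), (3.0, "dog")]
print(nearest_neighbor(train, 2.1))   # "cat" -- well covered by the data
print(nearest_neighbor(train, 50.0))  # "dog" -- a confident answer with no evidence
```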

BARBER: Right. Kyle, you start your book, Filterworld, with this machine built in 1769, and it seems to think for itself, right? It seems to be AI.

CHAYKA: Yeah, I think we overestimate that idea of artificial intelligence or of the algorithm. That device that opens the book is called the Mechanical Turk. It was a machine that appeared to play chess. It was a wooden cabinet with this automaton guy in a turban on top of it. And it appeared to move its arm and smoke a pipe and play chess. And it competed against really famous people. Like, I think it beat Ben Franklin--

[LAUGHTER]

CHAYKA: --in a game--

BARBER: He loved it, I'm sure.

CHAYKA: --of chess. But the problem was that inside this cabinet was not just a series of gears and levers, but a small man that [LAUGHS] was maneuvering magnet-controlled pieces under the board. And he had a little lamp in there, and there was, like, a hole in the back for stuff to come out.

BARBER: And people were like, why is there smoke coming out of this thing? It was from, like, the lamp.

CHAYKA: Yeah.

BARBER: [LAUGHS]

CHAYKA: So it was this illusion of an artificial intelligence, of a machine that could think for itself, which, in reality, was a little guy in a box. And I think that kind of became my metaphor for talking about algorithmic recommendations and some of machine learning, because we see it as this automated process, but it's really designed and updated and changed and driven by human engineers who are working at Silicon Valley technology companies. And I really wanted to get back to the source of these decisions that are made about how these things work, because they're not-- I mean, in the context of recommendations, Spotify's main goal is not to give you the best music ever. Netflix is-- [LAUGHS] that might be a surprise.

[LAUGHTER]

CHAYKA: But I wanted to point out that, like, there are human incentives behind how this technology works. And usually, they're motivated toward profit, which, online, now, is mostly in the form of advertising.

BUOLAMWINI: So we can acknowledge the illusion that's happening while also understanding it can have really real consequences. That's why when you even said thinking--

BARBER: Right.

BUOLAMWINI: --I was really cautious about that, because what we believe about AI systems can be even more powerful than what they actually are.

BARBER: Yeah. And Dr. Joy, I want to get into kind of, like, unmasking a lot of the algorithms and what's happening in our software. Your work focuses on facial recognition. But you go even further-- it's not just, like, they can't see you. You have this whole section on misgendering. And you also have, like, people not being seen by self-driving cars. They're being wrongfully arrested. Like, can you kind of go into those a little bit?

BUOLAMWINI: OK, the doom section.

BARBER: The doom section.

BUOLAMWINI: After this?

BARBER: We're ending on hope. We're ending on hope. I swear to God.

BUOLAMWINI: Just--

BARBER: I have a list.

BUOLAMWINI: I remember when I was working on my master's, I came up with this notion of FML. You all know FML? Failed Machine Learning.

[LAUGHTER]

BUOLAMWINI: Failing-- freedom, money, love. And so on the freedom side, thinking about AI systems that were being used by law enforcement, so these algorithms of exploitation. Now, when it comes to love, I was procrastinating on my PhD. And I went on this social-- this dating app, and it required facial recognition to get in. So I upload one photo. It's an artistic profile photo, and whatever--

BARBER: [LAUGHS]

BUOLAMWINI: --and it didn't detect my face. So I was like, OK, it's because of the type of photo. So then I did another photo looking straight at the camera, right? Still didn't detect my face. So once I realized it was messing with people's love lives, that's when I went straight back--

BARBER: Like, I think we're going to listen.

BUOLAMWINI: --I was like, OK, we really have something here. But even our connections and our communities-- who's following you?

CHAYKA: Yeah. Your personal relationships are so mediated by these things now, too. I mean, every dating app is driven by an algorithm that sorts people into matching with each other. Like, whose Instagram stories you see most often are a choice that Instagram makes for you based on what you engage with. I mean, which emails you see, who's seeing your posts on Facebook-- it's all so filtered that it's not necessarily what you organically want. But it's, like, this weird technological mirror reflection of what the platform wants to show you and how the platform interprets your behavior.
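That "technological mirror" is, mechanically, just a scoring function. Here's a bare-bones sketch of an engagement-ranked feed; the fields and weights are made up for illustration, not any platform's actual formula.

```python
# A bare-bones engagement ranker of the kind Kyle describes: the feed you
# see is whatever scores highest, not whatever you'd pick yourself.
posts = [
    {"author": "close_friend", "likes": 3,   "past_clicks": 9},
    {"author": "brand",        "likes": 800, "past_clicks": 2},
    {"author": "stranger",     "likes": 40,  "past_clicks": 5},
]

def engagement_score(post: dict) -> float:
    # Reward what you've clicked on before and what's broadly popular.
    return 2.0 * post["past_clicks"] + 0.01 * post["likes"]

feed = sorted(posts, key=engagement_score, reverse=True)
print([p["author"] for p in feed])  # ['close_friend', 'brand', 'stranger']
```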

BARBER: This is something I learned when I was doing this reporting on AI-- hallucinations. And I really, really like this, and I kind of want to go through this before we kind of get to hope. Kyle, do you want to talk about an AI hallucination?

CHAYKA: Yeah, I think-- I mean, a hallucination is essentially an error of the artificial intelligence. It's when an AI comes up with an answer to something or generates a result that is just completely wrong and irrelevant. Like, in experimenting with ChatGPT myself, I was struggling with an article. So I was like, ChatGPT, can you find me an example of Russian folklore that has to do with an automaton or some kind of robot? Like, surely, there's something.

BARBER: Yeah.

CHAYKA: And ChatGPT kept giving me stories and fairy tales and examples of these Russian folk tales. But then every time I tried to chase down the source of them, it turned out they did not exist. And it would give me fake URLs, fake PDFs, like, fake book titles. It would say, no, no, no, this is absolutely real. Just go look at this book. And the book was also not real. But that just showed me how-- or not manipulative, but just, like, blatantly false these results could be.

BUOLAMWINI: And even just thinking about hallucinations, it's a fancy way of saying BS--

BARBER: Yeah.

BUOLAMWINI: --right? It elevates it to another place, which is good for marketing. Humans hallucinate--

BARBER: Yeah.

BUOLAMWINI: --right? So it's like, oh, OK.

BARBER: So just lie.

BUOLAMWINI: Right. So I think-- so I push back, even, on-- it's making things up that are not factual. I think hallucination as a way of framing it doesn't really give the gravity, right? If you're reaching out to try to find information about benefits or health care and you're getting the wrong information, it's not like, oh, it's hallucinating, having this mystical experience, [LAUGHS] you know?

CHAYKA: And yet it can be so persuasive in its hallucinations. Like, it'll tell you exactly where to go.

BUOLAMWINI: Right. [LAUGHS]

BARBER: And Dr. Joy, you were talking about how, after your master's thesis and the documentary that you're in and all this stuff--

BUOLAMWINI: Coded Bias.

BARBER: Coded Bias. You actually put pressure on, like, big companies-- Microsoft, Amazon-- and they claimed to get better but didn't change things.

BUOLAMWINI: Well, they actually stopped selling facial recognition to law enforcement, which I thought was huge.

BARBER: Yeah.

[APPLAUSE]

BARBER: Yeah.

BUOLAMWINI: And part of that experience for me was showing the importance of sharing our stories. If you have a face, you have a place in the conversation about AI. You don't need all the degrees from MIT or wherever. And so that's why I'm excited about what's to come, because when we address our limitations, that's how we actually reach the aspirations of what we have for AI.

BARBER: I want to thank you, Kyle, Dr. Joy, for talking to me, and we're going to take some questions after. But let's give them a round of applause. They're amazing.

CHAYKA: Thank you.

[APPLAUSE]

BARBER: This episode was produced by Hannah Chinn, edited by our showrunner Rebecca Ramirez. Hannah, Rebecca, and I checked the facts. The audio engineer was Kwesi Lee. Beth Donovan is our senior director, and Collin Campbell is our senior vice president of podcasting strategy. I'm Regina Barber. Thank you for listening to Short Wave, the science podcast from NPR.

Copyright © 2024 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.