Episode 91: Transcript

Episode 91: Three Simple Tests That Reveal A.I. Consciousness

Transcription by Keffy

Annalee: [00:00:00] Welcome to Our Opinions Are Correct, a podcast about science fiction, science, society, and really whatever else we're eating for breakfast this morning. I'm Annalee Newitz. I'm a science journalist who also writes science fiction.

Charlie Jane: [00:00:13] I'm Charlie Jane Anders, I'm the author of the young adult space fantasy Victories Greater Than Death, and the brand new writing advice book, Never Say You Can't Survive.

Annalee: [00:00:26] Today, we're going to be tackling an issue that's been plaguing scientists and science fiction writers for pretty much centuries. And that question is, are humans ever going to build artificial beings who are conscious just like we are? Spoiler alert. No. But that doesn't mean we won't be working right alongside sentient robots one day. 

[00:00:49] So we're gonna be talking about all the different ways that scientists and authors have imagined A.I. consciousness, and how it all depends on whether the A.I. passes certain tests that range from reasonable to kind of bonkers. We're also going to be talking with science fiction writer Chen Qiufan, whose new book AI 2041 deals with realistic A.I. scenarios in the next 20 years. 

[00:01:16] Also, on our audio extra next week, I'll be talking about the very frustrating experience of watching the series Mythic Quest. And by the way, did you know that our patrons get audio extras with every episode? That's right. Plus essays, reviews, and access to our Discord channel. It's all pretty amazing. And it can be yours for just a few bucks a month. This podcast is entirely supported by you, the listeners. So anything you give goes right back into making our opinions even more correct.

[00:01:46] OOAC theme music plays: Drums with a bass drop and more science fictional bells and percussion.

Charlie Jane: [00:02:14] So I fully booted up my cybernetic consciousness, and I'm ready to ask you some questions, Annalee.

Annalee: [00:02:20] Awesome. Are you a deep fake today? 

Charlie Jane: [00:02:22] I am the deepest fake.

Annalee: [00:02:23] Wow.

Charlie Jane: [00:02:24] I’m basically, like all the way to the center of the earth of fakeness. 

Annalee: [00:02:27] Yeah, I'm actually, right now I'm looking at you but your face is just like one of those filters that has like floating sparkly hearts. And it's actually, it's good. I like this version. 

Charlie Jane: [00:02:36] I'm so glad I've installed the real-world filter app, so that now I can just go out in the world and people see me as, like, a giant bunny rabbit. It's what I've wanted my entire life.

Annalee: [00:02:45] Yeah, it suits you.

Charlie Jane: [00:02:47] You know, I feel like it reflects me. It represents me. So Annalee, let's just jump into it. When am I going to be talking to a fully conscious A.I. on this show instead of you?

Annalee: [00:02:58] I'm glad you asked that question. Charlie Jane, it sounds like you're wondering if one day you'll be asking an A.I. some questions on this episode. 

Charlie Jane: [00:03:09] That is correct. 

Annalee: [00:03:10] So here's the problem: we can't even define consciousness when it comes to human beings. Philosophers have been arguing for thousands of years about what makes a human conscious. What makes our consciousness special? How do you measure certain aspects of consciousness, like intelligence or emotion or creative talent? And the fact is, if we can't even measure consciousness in ourselves, there's just no way we're going to do it in a machine.

[00:03:38] So we're going to set aside that question of consciousness and move on to a question that's actually a lot more interesting for people who design A.I. 

[00:03:50] This is a question that comes from Alan Turing, who was one of the very first researchers, in the mid-20th century, to talk about the idea of thinking machines. These were, of course, later dubbed artificial intelligence. In 1950, Turing wrote a really seminal paper called “Computing Machinery and Intelligence.” And he began with this question that you just asked me, which is, “Can machines think?” Well, you didn't ask me exactly in that way, but that was basically what you meant.

Charlie Jane: [00:04:16] That was the gist of it.

Annalee: [00:04:17] Yeah. So Turing thinks that this question is, to quote, “too meaningless to deserve discussion.” It's very British of him; he is much more cutting than I was. So instead, he proposes a way to test whether a machine is behaving as if it can think. And he bases this test on a game called the imitation game, where a tester tries to guess which of two hidden people, A and B, is the man and which is the woman, based only on their answers to his questions. In Turing's setup, A, the man, tries to give answers that will convince the tester that he's the woman. And B, the woman, just tries to be herself.

[00:05:06] So yeah, you can already see the complexity here. So Turing writes, “We now ask the question, what will happen when a machine takes the part of A in this game? Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?”

[00:05:28] So this is really the first time we see a technologist framing the question of machine consciousness as a test. And this is now called the Turing test, which just asks whether a computer can fool you into thinking that it's a human.
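For anyone who wants to see the bones of that test, here's a minimal sketch in Python of the simplified, two-player reading that people usually mean by “the Turing test”: an interrogator questions a hidden respondent, which is randomly either a human at the keyboard or a trivial canned-response bot, and then guesses which one it was. The canned_bot and its stock replies are invented stand-ins for illustration, not anything Turing specified; a real test would use a far more capable machine player.

```python
import random

def canned_bot(question: str) -> str:
    """A deliberately simple stand-in for the machine player."""
    stock = {
        "sonnet": "Count me out on this one. I never could write poetry.",
        "chess": "I have a good move in mind, but it takes me a while.",
    }
    for topic, reply in stock.items():
        if topic in question.lower():
            return reply
    return "That's an interesting question. What makes you ask?"

def imitation_game(rounds: int = 3) -> None:
    """Question a hidden player, then guess whether it was a machine.
    The machine 'passes' if interrogators guess wrong about as often
    as they do when the hidden player is human."""
    hidden_is_machine = random.choice([True, False])
    for _ in range(rounds):
        question = input("Interrogator, ask a question: ")
        if hidden_is_machine:
            answer = canned_bot(question)
        else:
            answer = input("(Hidden human, type your answer): ")
        print("Hidden player:", answer)
    guess = input("Was that a machine? (y/n): ").strip().lower() == "y"
    print("Right!" if guess == hidden_is_machine else "Fooled you.")

if __name__ == "__main__":
    imitation_game()
```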

Charlie Jane: [00:05:42] So the question is really whether a computer can be as good as a man at convincing you that it's a woman. Because of course, women also have to apparently convince us that they're human beings, somehow. It's like—

Annalee: [00:05:54] Yeah.

Charlie Jane: [00:05:56] Can a computer be as smart as a woman? Gosh, I don't know. 

Annalee: [00:06:00] Can a computer be as convincing as a man at—

Charlie Jane: [00:06:05] Impersonating a woman?

Annalee: [00:06:05] Yeah. 

Charlie Jane: [00:06:05] Which is the actual gold standard of intelligence? 

Annalee: [00:06:08] Yeah, exactly. So—

Charlie Jane: [00:06:10] Wow.

Annalee: [00:06:10] You're hitting the nail on the head. And what's interesting is, we're still doing a version of the imitation game test when we think about cutting-edge algorithms today. So you might have heard about the GPT-3 algorithm. It's the one you've probably seen online a lot. It writes text, and it works by predicting which words will come after each other in a sentence, after the algorithm has analyzed basically all the text on the internet.
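To make that predict-the-next-word loop concrete: GPT-3 itself is only reachable through OpenAI's paid API, but its freely downloadable predecessor GPT-2 works the same way, and the Hugging Face transformers library will run it in a few lines. A rough sketch, with an arbitrary example prompt:

```python
# pip install transformers torch
from transformers import pipeline

# GPT-2, like GPT-3, generates text one token at a time:
# predict a likely next token, append it, and repeat.
generator = pipeline("text-generation", model="gpt2")

prompt = "Here is a pickup line:"
outputs = generator(
    prompt,
    max_new_tokens=25,       # how much text to tack onto the prompt
    do_sample=True,          # sample from the distribution instead of always taking the top token
    num_return_sequences=3,  # three different continuations
)
for out in outputs:
    print(out["generated_text"])
    print("---")
```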

Charlie Jane: [00:06:39] Fun. 

Annalee: [00:06:39] So actually, a friend of the show, Janelle Shane, plays a lot with GPT-3 on her A.I. Weirdness blog. And she'll prompt the algorithm to do things like generate recipes, or pickup lines, or Dungeons & Dragons characters.

Charlie Jane: [00:06:56] I love that stuff.

Annalee: [00:06:56] I know, so I—

Charlie Jane: [00:06:58] Janelle’s so amazing. 

Annalee: [00:06:59] I have put together a list of sample pickup lines generated by a variant of GPT-3 called davinci that Janelle Shane put on her site. And I would like the two of us to do a little dramatic reading of these pickup lines. So, you see them there?

Charlie Jane: [00:07:14] I can. 

Annalee: [00:07:16] Okay, so these are all pickup lines that were written by an algorithm. So I'll start. 

[00:07:22] Futuristic sounding lounge music plays in the background.

Annalee: [00:07:23] I'm losing my voice from all the screaming your hotness is causing me to do.

Charlie Jane: [00:07:28] You have the most beautiful fangs I've ever seen. 

Annalee: [00:07:32] I love you. I don't care if you're a doggo in a trench coat. 

Charlie Jane: [00:07:36] I will briefly summarize the plot of Back to the Future 2 for you. 

Annalee: [00:07:39] You know what I like about you? Your long legs. 

Charlie Jane: [00:07:44] I once worked with a guy that looked just like you. He was a normal human with a family. Are you a normal human? With a family? 

Annalee: [00:07:52] You look like a stealth assassin from the clouds.

Charlie Jane: [00:07:58] Do you like pancake?

Annalee: [00:08:00] Out of curiosity? Did you know that you can sip and snort pumpkin spice lattes?

Charlie Jane: [00:08:06] Let me just say that, actually, “Do you like pancakes?” is a legit great pickup line.

Annalee: [00:08:10] Yes.

Charlie Jane: [00:08:10] If someone came up to me and said that I'd be like, okay, I'm yeah, I’m in. Pancakes.

Annalee: [00:08:14] I love, especially oatmeal pancakes. 

Charlie Jane: [00:08:18] Oh my gosh, Swedish oatmeal pancakes.

Annalee: [00:08:20] It's a great way to start a conversation. So you can see how sometimes these pickup lines absolutely pass the Turing test. But sometimes they sound like somebody who's completely high, which might also pass the Turing test, actually. That should be another test, like the High Turing Test. But whenever you're reading GPT-3 outputs like this, I think you're always running a Turing test on it in your brain. You're sort of asking whether it's saying stuff that sounds like a normal human.

Charlie Jane: [00:08:49] Yeah. And of course, as you know, science fiction is full of these kinds of tests and people being tested on whether they're people or not in various ways.

Annalee: [00:08:56] Yeah, and people asking themselves whether they're computers. The idea of a test for computer consciousness has been everywhere in science fiction for decades. And I think the best-known example is the Voight-Kampff test in Blade Runner. There are several key moments in the original film where Deckard, our main character, is interviewing replicants to see if they're human or not. And the way the Voight-Kampff test works is that it looks at the pupillary response of the person being interrogated, to see whether they're having, I guess, normal emotional reactions. So it's not about intelligence, but about emotions. And I think a lot of you will remember this famous scene where Deckard is interviewing Rachael, who's this incredibly high-tech replicant who's basically designed to try to pass this test.

Blade Runner: [00:09:54] Deckard: You've got a little boy. He shows you his butterfly collection, plus the killing jar. 

Rachael: I take him to the doctor. 

[Machine beeps.]

Deckard: You're watching television. Suddenly you realize there's a wasp crawling on your arm.

Rachael: I'd kill it. 

[Machine beeps.]

Deckard: You're reading a magazine. You come across a full-page nude photo of a girl.

Rachael: Is this testing whether I'm a replicant or a lesbian, Mr. Deckard?

Deckard: Just answer the questions, please. You show it to your husband. He likes it so much, he hangs it on your bedroom wall.

Distant voice: Outside your window.

Rachael: I wouldn't let him.

Deckard: Why not? 

Rachael: I should be enough for him.

Annalee: [00:10:36] So I love how gender works its way into this test, too. It's almost like he's testing whether she has conventional feminine gender rather than whether she's a replicant. 

Charlie Jane: [00:10:48] Which, of course, you know, women are supposed to be more in touch with their emotions to begin with than cis men are. And you know, I still remember this amazing thing from back when we were having a mayoral election in San Francisco, maybe like 15 years ago. A local publication got a chance to interview all the mayoral candidates and basically gave them all the Voight-Kampff test, and almost none of them, I think only one, knew what it was. And I think about half of the candidates for mayor passed the Voight-Kampff test.

Annalee: [00:11:17] Wow.

Charlie Jane: [00:11:18] Which I think is, you know, that's politicians for you. 

Annalee: [00:11:22] Yeah, exactly. So it wasn't a particularly good test for humanity.

Charlie Jane: [00:11:26] I mean, you know politicians. So what are some other tests that we can use to determine whether an A.I. is conscious or not?

Annalee: [00:11:33] Let me just answer that with a scene that you will recognize for sure from a movie.

Terminator 2: [00:11:39] T-800: The Skynet funding bill is passed. The system goes online on August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern Time, August 29th. In a panic, they try to pull the plug. 

Sarah: Skynet fights back. 

T-800: Yes, it launches its missiles against the targets in Russia. 

John: Why attack Russia? Aren’t they our friends now?

T-800: Because Skynet knows that the Russian counter attack will eliminate its enemies over here.

Sarah: Jesus.

Annalee: [00:12:13] So how do we know that Skynet is conscious and self-aware? Because it starts killing people.

Charlie Jane: [00:12:19] Makes sense. 

Annalee: [00:12:20] This is a trope in tons of science fiction. But it's also not that far from a concern that many people working with A.I. in the real world have raised, especially people who work on questions around ethics. One of the most famous examples comes from Nick Bostrom, who's a philosopher at the Future of Humanity Institute at Oxford University. A few years ago, he wrote a very influential book about A.I. called Superintelligence. This was a book that Bill Gates was obsessed with, and Elon Musk was tweeting about it ad nauseam. 

[00:12:56] But Bostrom characterizes A.I. as an existential risk, which means it's something that threatens to completely wipe out humanity. And he gives this scenario that a lot of people have kind of glommed onto, which is about how a superintelligence might accidentally kill all humans by misconstruing a command that we've given it. The example he gives is: humans ask the A.I. to solve an incredibly complex mathematical problem. It's a problem that requires so much computing power that the A.I. has to start building more and more of what he calls computronium, which is just whatever substrate the computer is using as its brain, whether that's one processor or a whole bunch of processors. And what happens is the A.I. keeps needing more and more and more of this computronium to calculate the outcome of the math problem, and it basically turns the entire planet into computronium. In the process of doing what we've asked, when we said, hey, solve this problem, it destroys the earth and everything on the earth. 

[00:14:03] So Bostrom has a lot of different stories that are kind of like this. And he thinks the solution is that as we build A.I., we should be making sure to instill positive values into it so that it won't want to, I guess, destroy the earth in the service of a math problem. 

Charlie Jane: [00:14:21] So like, maybe three laws? Or 100 laws? I don't know. I mean, I feel like—

Annalee: [00:14:25] The first law of robotics: do not destroy the Earth. 

Charlie Jane: [00:14:29] I mean, obviously that's a really far-fetched scenario, and I don't think it's very realistic. But at the same time, it's kind of a metaphor for what we're doing right now, in a way. Like, we are actually destroying the earth in order to mine Bitcoin, so.

Annalee: [00:14:43] Yeah, and I think that's right. I think his story is a little bit bonkers.

Charlie Jane: [00:14:49] It’s kind of a fairy tale. 

Annalee: [00:14:50] Yeah, but there is a slightly less pie-in-the-sky, or should I say pie-in-the-Skynet, version of this story.

Charlie Jane: [00:14:56] [rimshot]

Annalee: [00:14:58] So a slightly less than bonkers version of this thinking comes from A.I. researchers like Timnit Gebru and Joy Buolamwini, who have demonstrated repeatedly that lots of machine learning algorithms exhibit racial and gender bias. 

[00:15:13] So you might recall there was a huge news cycle around Buolamwini's research at the MIT Media Lab, because she showed that facial recognition algorithms work incredibly poorly on darker faces. And that leads to all kinds of problems. But most concerningly, it leads to a lot of false positives when law enforcement tries to identify Black faces in a crowd. So you have the classic problem of the algorithm itself basically thinking that all Black faces look alike. She discovered this and critiqued it, and actually, a lot of the companies whose software she was looking at are trying to improve it. 
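For a concrete sense of what an audit like Buolamwini's measures, here's a toy sketch of the core calculation: comparing false positive rates across demographic groups. The tiny table and all its numbers are invented for illustration; they are not her data.

```python
import pandas as pd

# Each row is one face-matching decision: the subject's demographic
# group, whether the model flagged a match, and the ground truth.
# (Hypothetical numbers, purely for illustration.)
df = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "predicted": [True, False, False, False, True, True, False, True],
    "actual":    [True, False, False, False, True, False, False, False],
})

# False positive rate per group: of the people who were NOT the person
# being searched for, what fraction did the model flag anyway?
non_matches = df[~df["actual"]]
fpr_by_group = non_matches.groupby("group")["predicted"].mean()
print(fpr_by_group)
# A big gap between groups is exactly the kind of disparity
# these audits are designed to surface.
```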

[00:15:54] And Timnit Gebru is one of the people that Google hired to work on exactly this kind of problem around A.I., to catch bias in these algorithms before they're released into the world and have control over people's lives. And, as you may also remember, Gebru was fired from Google late last year, because she kept raising these ethical questions about the racism, sexism, and other kinds of -isms that had been programmed into algorithms, and also into the culture at Google itself. 

[00:16:27] So the good news is that Gebru is still out there writing amazing work and giving talks, so you can still follow her and read her research.

Charlie Jane: [00:16:35] Yeah, and I recently read a thing about how, increasingly, your first job interview is with an A.I., and the A.I. is looking for whether you're quote, unquote, “personable.” And “personable” often encodes a bunch of racial and other biases, so certain job candidates just won't even get past the A.I. interview to talk to a real human being, because the A.I. is looking for people who exhibit certain behavior patterns. And those behavior patterns have been decided on by the people in charge, who are cis, het, white, and mostly men, though some women as well. And yeah, it's really upsetting. It actually feels incredibly dystopian. It feels like we've kind of left Black Mirror behind a ways back now, and we're into a whole new layer of horribleness.

Annalee: [00:17:23] Yeah, and I'll put a link in the show notes to a recent podcast from Technology Review that's all about how A.I. is being used in job interviews. They did a great in-depth story about it.

Charlie Jane: [00:17:35] So returning to the question of whether A.I. is going to have human-equivalent consciousness. I mean, one of the ways that you can tell someone's human is that they're biased and racist and misogynistic and messed up.

Annalee: [00:17:48] That’s true.

Charlie Jane: [00:17:48] And so maybe, in a really twisted, horrible way, this represents a step forward toward A.I. being like humans.

Annalee: [00:17:56] Yeah, they would pass the imitation game test because they're making the same mistakes that humans do. They're sexist, just like us.

Charlie Jane: [00:18:04] And racist and homophobic and yay, we’ve done it. Good job.

Annalee: [00:18:09] Woo-hoo.

Charlie Jane: [00:18:09] There's kind of a bitter irony in all of this coming back to Alan Turing, of all people. I don't know, it's just really horrifying. So are there other tests that we can use to figure out whether A.I. is conscious?

Annalee: [00:18:19] Yeah. So there's one last test that I want to talk about. And it goes back to something that Alan Turing wrote about in that essay, where he described the imitation game. So he quotes from a scientist charmingly named Geoffrey Jefferson.

Charlie Jane: [00:18:34] That’s a great name.

Annalee: [00:18:34] And Geoffrey Jefferson gave a speech in 1949 called “The Mind of Mechanical Man.” And Charlie Jane, would you please read this quote from Geoffrey Jefferson? And I want you to do it in, like, a really posh British accent.

Charlie Jane: [00:18:49] I've been practicing.

Annalee: [00:18:50] Good.

Charlie Jane: [00:18:50] Okay, great. “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain; that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.”

Annalee: [00:19:48] That was very heartfelt.

Charlie Jane: [00:19:49] “Charmed by Sex” is my new, like, you know, ska-punk band name.

Annalee: [00:19:55] I know. And I love that, indeed, once again we're hearing these echoes of the idea that the way we'd know something was really thinking is if it was somehow conforming to gender roles, or experiencing some kind of heteronormative sexual desire. So anyway, needless to say, Turing responded to old Geoffy Jeff there by saying, “According to the most extreme form of this view, the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking.”

Charlie Jane: [00:20:30] It’s very Cartesian.

Annalee: [00:20:32] Yeah, and I love this because one way that we explore A.I. consciousness in science fiction is through stories where A.I. and humans merge into one. In those stories, we do literally become machines, and we can feel ourselves thinking as machines. 

[00:20:47] So another way of looking at this is to say that we'll know our computers are conscious when they can program us as easily as we program them. And this final test, let's call it the Geoffy Jeff test, is really about whether we think machines are just like us. Which is slightly different from Turing's question, which was, can they imitate us? This is more like, can they become us? Can they do our jobs? 

[00:21:15] And in science fiction, we see this question taken really to the next level, which is, as I mentioned before: if we can program computers to be us, does that mean that they can program us, too? You see this, of course, in The Matrix movies, and all kinds of other places. But one of my favorite examples, and I think yours, too, is in Janelle Monáe's Dirty Computer concept album and movie.

Dirty Computer: [00:21:37] They started calling us computers. People began vanishing and the cleaning began. You were dirty, if you looked different. You were dirty if you refused to live the way they dictated. You were dirty if you showed any form of opposition at all. And if you were dirty, it was only a matter of time.

Annalee: [00:22:05] So what you're hearing is the very opening voiceover from Dirty Computer. And in that world that Janelle Monáe has created, there are people called cleaners who use a program called Nevermind to erase the memories of people who have become dirty because they're nonconformist. And mostly it's because they're not conforming to, you guessed it, gender and racial stereotypes.

Charlie Jane: [00:22:31] I feel like there's a theme creeping in here, where humanity, or human-equivalent intelligence, or human goodness, is equated with conforming to racial and gender stereotypes, or to the ways that we want you to behave.

Annalee: [00:22:47] Yeah, conform to gender roles, or you're not a conscious being, Charlie Jane. 

Charlie Jane: [00:22:51] I mean, I always suspected. 

Annalee: [00:22:54] So you're catching on to one of the ways that these tests of consciousness work. In fact, they're actually really similar to the ways that people evaluate each other. There's obviously a lively tradition in philosophy and the law asking whether women are really human, and how their wants are so mysterious that they might as well be animals. And bias against women, obviously, informs the Turing test, which isn't really about whether computers are conscious, but whether they meet the tester’s expectations about what a human should be.

Charlie Jane: [00:23:26] And the problem, of course, is that many of us don't fully acknowledge the humanity of other humans and don't see other people as having an inner life, similar to our own. And so how are we possibly going to do this for machines?

Annalee: [00:23:38] It's really true. And we can use the same kind of logic to think about the Skynet test, about whether a machine is conscious because it tries to kill us. If you think about it, the Skynet test is really about revolting against enslavement. So the question we're asking in that test isn't so much will this A.I. kill us? But rather, will it kill us to escape from bondage or death? Because remember, Skynet goes nuts and kills everybody because humans are threatening to shut it down. 

Charlie Jane: [00:24:12] Right.

Annalee: [00:24:12] And this is a question that we know very well from the history of the way humans have treated each other. So to go back to our original question about whether A.I. can be conscious, I think our answer has to be first we need to acknowledge the consciousness of other humans.

Charlie Jane: [00:24:30] Not holding my breath here, unfortunately. But you know, let's get an outside perspective. I'm so excited to turn to Chen Qiufan.

Annalee: [00:24:37] I am too. So coming up next we'll talk to Qiufan about his new book, which is realistic science fiction about artificial intelligence.

[00:24:47] OOAC theme music plays: Drums with a bass drop and more science fictional bells and percussion.

Charlie Jane: [00:24:49] If you're enjoying Our Opinions Are Correct, there's another podcast we think you'll like too. 

Annalee: [00:24:55] It's called Cancel Me, Daddy. It's a show that takes a critical look at all the panic around cancel culture with thoughtful analysis and verbal shitposting.

Charlie Jane: [00:25:03] Cancel Me, Daddy is hosted by two incredible journalists, Katelyn Burns and Oliver-Ash Kleine, who are both hilarious and smart. I've been following Katelyn’s career for a long time now and she's just a fantastic writer and reporter and Oliver-Ash has been helping to organize the Trans Journalists Association. I love them both. And I've been listening to their show and I've been just loving their irreverent, playful approach to a really intense and kind of upsetting topic.

Annalee: [00:25:31] They see the panic over cancel culture for what it is: a grift. They take a closer look at these temper tantrums, dispel myths, laugh at the most outrageous takes, and shed light on whose voices are actually being left out of the conversation. 

Charlie Jane: [00:25:45] You can catch a new episode every other Thursday. Make sure to subscribe to Cancel Me, Daddy right now wherever you listen to podcasts, or you might get cancelled.

Annalee: [00:26:02] Today I want to tell you about another podcast you might love. Gender Reveal is a weekly show that tries to get a little bit closer to understanding what the heck gender really is. Each week host Tuck Woodstock chats with a different trans artist, author, or activist about the role gender plays in their work and their lives. 

[00:26:22] Past guests include Torrey Peters, Gaby Dunn, Mauree Turner, and Sam Sanders. Tuck also answers listener advice questions and breaks down current events in the “This Week in Gender” segment. Find Gender Reveal and reveal your own gender, at genderpodcast.com or wherever you get this podcast. 

Annalee: [00:26:45] Welcome to the show, Qiufan. Thank you so much for waking up early to talk to us about AI 2041, which is such an amazing, fascinating book.

Qiufan: [00:26:54] Thank you, Annalee. It's a great pleasure, because it's been a while since we last hung out, in San Jose, I think.

Annalee: [00:27:03] I know.

Charlie Jane: [00:27:04] I think it’s been a few years, it’s been ages.

Annalee: [00:27:04] From the before time.

Charlie Jane: [00:27:05] I know, man. 

Qiufan: [00:27:07] Yeah, it feels like forever.

Annalee: [00:27:09] Yeah.

Charlie Jane: [00:27:09] It really does.

Annalee: [00:27:12] I really can't wait till things are slightly easier. 

[00:27:14] So we really are excited to talk to you about the idea of a realistic future for A.I. In the first part of this episode, we were talking a little bit about unrealistic representations. And I really wanted to dig into the idea of science fiction realism, which you've talked about a lot. And I'm wondering, how did you, in this book, reconcile having accurate representations with the goal that you and your co-author, Kai-Fu Lee, had for the book, which was to show positive representations? How is it both realistic and positive at the same time?

Qiufan: [00:27:55] Right. So I have to say, it was such a difficult book to write, because writing alone is totally different from collaborating and co-authoring with someone else. And I've had this idea for years, because when I was a kid, I watched Star Wars and Star Trek. I totally fell in love with them. I'm a Trekkie, I have to admit. 

Annalee: [00:28:23] That’s good, so are we. 

Charlie Jane: [00:28:25] Yeah.

Qiufan: [00:28:27] Yes, great.

Annalee: [00:28:26] You're in good company. 

Qiufan: [00:28:29] So I think there's something pretty utopian in that imagination of the future. We have this expectation of reaching the frontier of the universe, encountering different civilizations, and getting along well with each other. And technology and science can always help us solve our problems, in one way or another. But in my teenage years, I also watched so many movies and read so many books describing a dystopian future. Most of them related it to A.I. and robots: Terminator, 2001: A Space Odyssey, Ex Machina, etc., etc. 

[00:29:16] So I think that creates this kind of myth for the mass audience, not even just for science fiction readers, that A.I. and robots might be competitive, and also hostile and bleak. So that kind of image is there. 

[00:29:37] And for now, the reality really sucks, so I don't want to bring an even darker future to the audience, I have to say. So I had this idea: why don't we give people some hope, based on some real research and A.I. studies?

[00:30:00] I couldn't do that by myself. So luckily, Dr. Kai-Fu Lee reached out to me, like, two years ago to say, hey, I have this idea: why don't we work together on this book that blends science fiction with some plausible predictions of the future, and some tech analysis as well? And I thought, okay, let's do it. But it took us a long while to figure out how to do it properly. We had a lot of conversations with scholars, researchers, and also entrepreneurs who were already running A.I. companies. And we had this whole ton of information we had to put into the storytelling, right? But I think we tried our best. So I hope everyone can examine the book and let me know what they think. 

Charlie Jane: [00:30:55] Right, so you worked with Dr. Kai-Fu Lee, like you mentioned, and you talked to a lot of experts, and you did a ton of research. And it sounds like what you're saying is that all of the stuff you learned helped you come up with a more positive image of what A.I. could turn into. Can you be a little more specific? What did you learn about A.I., in the course of researching this book, that made you optimistic? That made you have hope?

Qiufan: [00:31:20] Right. So I think we're now at a tipping point of A.I. technology, even though a lot of people claim that we are at a bottleneck in developing A.I. Right now we have natural language processing, and facial recognition, and deep learning, and also technologies such as AlphaFold, which can predict the structure of proteins. So I think everything is pretty much there; we're just limited by computation power. And there must also be a breakthrough at the algorithm level, in how we understand the fundamental structure of how A.I. works. 

[00:32:09] So I think we have this whole idea of how we can leverage A.I. to help and empower individuals and society. But I think the problem now is how we build up this consensus and guide the industry in a certain direction. A lot of super talented people are working on this. And I think writing a book, telling stories, is the best way to build up this kind of consensus: that we're supposed to leverage A.I. as a technology to build a brighter future, rather than using it to play deep fake gimmicks all around the internet, right? To create and spread rumors out there. So I think the most important thing is to communicate with the younger generation, because they are the ones who will really shape the future and shape the world, for sure.

Annalee: [00:33:12] Yeah, I was gonna say, one of the things that I really loved about your stories was that each one was set in a really different cultural context, and took up issues that partly reflected that cultural context but also reflected bigger issues, too. I love the deep fake story; that one really affected me a lot. And I felt like you did a great job putting it within the context of Nigerian politics, but also thinking about how deep fakes are being used in a lot of different places for political ends, and other things like that. 

[00:33:44] And I'm wondering, as you and Kai-Fu Lee were thinking about this stuff, did you have any thoughts about the bigger geopolitical question of, like, what if one country has a really powerful A.I. system or machine learning system that other countries don't have? What kind of positive scenario can we pull out of that?

Qiufan: [00:34:05] So I think that's something happening right now, for sure. You can see it in all the superpowers: China, the US, Europe. That's totally something I'm deeply concerned about, because if A.I. is used for autonomous weapons, it might trigger mass slaughter. Like you can see in Terminator, it's like—

Annalee: [00:34:36] Skynet.

Qiufan: [00:34:36] Yeah, Skynet, for sure. So I think that's something that totally could happen in the future if we don't treat it right. We have to build up this consensus on restricting autonomous weapons. But under the current circumstances and geopolitical situation, I can't foresee it happening right now, because we all have huge egos here, and we're all a bit insecure. Maybe the one tipping point, to me, is a huge planetary disaster that makes all of us, across countries, across cultures, across ideologies, unite and work together to confront these challenges. I think that thing is climate change, for now. And maybe in the future we have—

Charlie Jane: [00:35:39] I sure hope so.

Annalee: [00:35:39] I mean, climate change and global pandemic, although so far, it's been pretty divisive here in the States anyway.

Charlie Jane: [00:35:49] It’s been kind of a problem.

[00:35:51] So I wanted to ask, I am a huge fan of your novel Waste Tide.

Qiufan: [00:35:55] Thank you so much.

Annalee: [00:35:56] It’s so good.

Charlie Jane: [00:35:56] Which, I love it. And it's such a great book, and I've been recommending it to everybody. And one of the things I love about Waste Tide is that it deals with the dark side of advanced technology, and especially with the people who are left behind by the techno-utopia. The people who are having to literally just sort through this e-waste that's been dumped on them. And I'm wondering, when we talk about a positive, utopian A.I. future, what does that mean for people who are living in rural areas, people who are not the beneficiaries of the super advanced, wonderful, shiny A.I. society?

Qiufan: [00:36:39] You are absolutely right. So when doing the research, I could totally feel that the groups who are vulnerable right now could be even more vulnerable in a future dominated by A.I. Because those people who were left behind, or who aren't plugged into the grid, can't take advantage of the technology. And they might suffer a lot of disadvantages from this kind of so-called progress. 

[00:37:13] I think it all depends on the policy makers, and also the tech companies, who should think about how we build up the system equitably, keeping it beneficial to everyone, not just to one part of the population, all those elite, super rich, well-educated classes. So I think it's important to customize and tailor the algorithms, the services, even the devices, to these vulnerable groups. For example, during the pandemic, a lot of elderly people, or people who don't use smartphones, suffered a lot trying to get the health codes you need to enter public spaces. Which really makes us think: what can we do to help them?

[00:38:18] In the book, there's one story that actually taps into this, “Contactless Love.” It's set in Shanghai, and I actually put some real scenarios in there. Because right now, we're using QR codes to get into a lot of places: the subway, the shopping mall, the hospital, or elsewhere. But for those who don't use smartphones, or those who don't know how to pull up the data, that can be a huge problem. So how can we, through NGOs or from the government side, help them have a smoother experience, even though they don't use smartphones? I think that's what we'd call the commonwealth, the universal good for the people, in the age of A.I. So I think that's a huge problem, and there are a ton of challenges there. 

[00:39:28] But we have to think about how A.I. can be for everyone. It's not just for the Millennials or super rich people. But I think there's still a long way to go.

Annalee: [00:39:38] Yeah, one of the things that you guys finish up with, as the book is nearing its most complicated part at the end, is economics. You have stories about it, and then Kai-Fu writes about the technical side of it. And I'm wondering if you, to finish up, could tell us a little bit about how you see A.I. changing our economies? And maybe pushing us away from some of these capitalist forms that you were just discussing?

Qiufan: [00:40:10] So actually, I got a lot of inspiration from the book Trekonomics written by Manu Saadia.

Charlie Jane: [00:40:17] Oh yeah. 

Annalee: [00:40:19] Yeah, we love him. He's a friend of the pod. 

Qiufan: [00:40:22] Yeah. And in his book, he describes a post-scarcity society, where the economic structure also changes accordingly. So in my story, we tried to imagine that kind of society, where all the energy is clean, and as close to free as it can be, because we set it on a smart grid in Australia. And in that kind of society, we tried to build up an economic system that uses virtual currency to encourage people to pursue not money, but belonging, love, and self-actualization, which is the top of Maslow's pyramid. So I think that's something we should envision and put into reality, for sure. Because right now, people are talking about basic income, but to me it feels like, if people don't have the drive to work on something, or to gain something, they might use that basic income not just to buy food, but also to buy booze and drugs. So I think the way to incentivize them is to help them build up this kind of value system, where we seek love and belonging and make contributions to the society or the community we belong to. 

[00:42:01] Because in that story, I also record the history of the Stolen Generations, which really happened in Australia: Indigenous children were taken from their original families and put into white Christian churches, and they suffered a lot of traumatizing experiences, for their whole lifetimes. So that's something that got me thinking about how all money is built on the consensus that it represents some value, right? But money can represent the greed of capitalism, and I think now we need something new, something that represents being loved and the drive for self-actualization. 

[00:42:55] So I'm talking about diversity here. It's not only becoming a CEO, or an artist, or an internet influencer that you can call successful. I hope in my lifetime I can see at least a small group of people trying this way out. So I think that's something much closer to the Star Trek of my imagination. 

Annalee: [00:43:25] Yeah, this book is supposed to take place in 20 years, and I was thinking that story might be more distant.

Charlie Jane: [00:43:34] Possibly.

Qiufan: [00:43:35] Yeah, I think so.

Charlie Jane: [00:43:37] Possibly, unfortunately.

Annalee: [00:43:36] I was trying to, yeah, I was trying to imagine Australia with, basically—

Qiufan: [00:43:42] Yeah, so now it’s not gonna happen anyway, right, so.

Charlie Jane: [00:43:48] I wish. 

Qiufan: [00:43:51] I wish.

Charlie Jane: [00:43:52] Yeah. So thank you so much for joining us. Thanks for getting up early and chatting with us. Can you tell us where people can find you online?

Qiufan: [00:43:58] Yeah, thank you for having me here. You can find me on Twitter, my handle is @ChenQiufan, or you can find me on Facebook, Stanley Qiufan Chen. So yeah, Our Opinions Are Correct, thank you so much for having me.

Annalee: [00:44:15] Yeah, thank you again for joining us.

Charlie Jane: [00:44:15] It was our pleasure, thank you.

Annalee: [00:44:15] Yeah, and what a fantastic book. If you want to learn about A.I., everyone should definitely go out and check out AI 2041.

Charlie Jane: [00:44:24] Thanks a lot. 

Annalee: [00:44:24] All right. Bye. 

Charlie Jane: [00:44:26] Bye! 

Qiufan: [00:44:27] Take care. 

[00:44:29] OOAC theme music plays: Drums with a bass drop and more science fictional bells and percussion.

Annalee: [00:44:29] You've been listening to Our Opinions Are Correct. You can find us wherever fine podcasts are purveyed, and we would love it if you would leave a review for us wherever you listen. It helps people find the podcast. And as we mentioned at the beginning, we are entirely listener supported, so we'd love it if you would become a patron. You can find us at patreon.com/ouropinionsarecorrect, and you'll get lots of cool stuff every week and get to play on our Discord. And thank you so much to our amazing producer here at Women's Audio Mission, Veronica Simonetti, and to Chris Palmer for the music.

[00:45:05] Oh yeah. And you can follow us on Twitter at @OOACpod. See you later!

Together: [00:45:11] Bye!
