Episode 125 Transcript
Podcast: Our Opinions Are Correct
Episode: 125: Silicon Valley vs. Science Fiction: ChatGPT
Transcription by Keffy
Annalee: [00:00:00] Charlie Jane, we've been talking a lot about how tech companies keep creating products influenced by science fiction, but which show that Silicon Valley really doesn't get the point of science fiction.
Charlie Jane: [00:00:14] It’s so true. There's Meta, which is based on sort of the metaverse from Neal Stephenson's Snow Crash, and there's—
Annalee: [00:00:21] Yep.
Charlie Jane: [00:00:21] Soylent, which comes from Soylent Green and it's like, oh, these dystopian stories, let's totally turn them into shiny products. But also generally companies are using science fiction to kind of market whatever they're putting out into the world and to justify putting out kind of substandard AI products and justify terrible workplace conditions and just everything.
Annalee: [00:00:44] So that's why we decided to do a series of episodes that we're calling Silicon Valley vs. science fiction, and today is the first one! We're focusing it on AI and especially products like ChatGPT and the image generator DALL-E. Both of which were created by the San Francisco based company, OpenAI.
[00:01:05] And now that ChatGPT is part of Microsoft's search engine, Bing, it has really been catching the world's attention.
ChatGPT Clip 1: [00:01:12] In summary, it's trained on billions of words all over the internet, and when generating text, it tries to predict what the next word is in a given sentence by drawing on what it's seen in its massive internet data.
The end result is the mimicking of human writing.
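To make that next-word-prediction idea concrete, here is a minimal, purely illustrative sketch in Python. The two-word context and the hand-written probability table are invented for demonstration; a real system like ChatGPT learns its probabilities from enormous amounts of internet text with a neural network containing billions of parameters, not a lookup table.

    import random

    # Toy next-word predictor: real large language models learn these probabilities
    # from billions of words of text; here the table is invented by hand.
    next_word_probs = {
        ("the", "cat"): {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
        ("cat", "sat"): {"on": 0.8, "down": 0.2},
        ("sat", "on"): {"the": 0.9, "a": 0.1},
        ("on", "the"): {"mat": 0.7, "sofa": 0.3},
    }

    def generate(prompt, max_new_words=4):
        words = prompt.split()
        for _ in range(max_new_words):
            context = tuple(words[-2:])          # condition on the last two words
            options = next_word_probs.get(context)
            if options is None:                  # no known continuation
                break
            choices, weights = zip(*options.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the cat"))  # e.g. "the cat sat on the mat"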
ChatGPT Clip 2: Google's new, highly touted AI chatbot, Bard, has already made a boo-boo.
…coming to light. The Bing chatbot, sending some users dark messages. The New York Times columnist Kevin Roose detailing his conversation with Bing that left him deeply unsettled, talking about hacking, manipulation, even professing its love to him.
Some are sounding the alarm about a potentially spooky side to the emerging technology.
They turned into this sprawling, bizarre, often frightening conversation.
Annalee: [00:01:57] So obviously ChatGPT is in the news a lot right now and we know that millions of people have been adopting it over the past couple of months, and so that's a good reason to talk about it.
[00:02:06] But also, OpenAI is a really good example of the kind of Silicon Valley company that mixes philosophical ideals into their product design. Specifically, their co-founder and CEO, Sam Altman, imagines that ChatGPT is a step toward artificial general intelligence, or human-equivalent consciousness. So what he and his colleagues imagine is a lot like what we see in Iain M. Banks’ Culture novels, which are about a far future civilization run by benevolent AI who care for humans, kinda the way we care for our pets.
[00:02:43] So today we're gonna talk about OpenAI not because they're the only company touting this Culture-like vision, nor even because we think they're particularly terrible, but just because they make a really good example of what a lot of companies in the AI space are doing.
Charlie Jane: [00:02:57] And we're gonna do a few more episodes of this series, one per month, exploring how science fiction gets kind of included in tech company business plans. And it really feels like there's a pattern here. When a science fiction idea makes the jump into Silicon Valley, oftentimes the story that was really nuanced and complicated gets kind of turned into tropes and memes the same way it does on the internet generally. And everything gets boiled down to just the most nifty kind of glamorous concepts, often missing the more complicated ideas that were embedded in the original text, and that is definitely true with a lot of the hype we're seeing around AI.
[00:03:37] Silicon Valley claims to be inspired by science fiction, but if anything, they seem to be using science fiction retroactively to spruce up and justify their products and designs.
Annalee: [00:03:48] Exactly. Science fiction is basically part of the hype cycle, but of course it's a lot more complicated than that, and that's what we're gonna delve into today in this episode about The Culture vs. ChatGPT.
[00:04:02] This is Our Opinions Are Correct, a podcast about science fiction and society. I'm Annalee Newitz. My latest novel is called The Terraformers.
Charlie Jane: [00:04:12] I'm Charlie Jane Anders, author of Promises Stronger Than Darkness. And by the way, in our mini episode next week, we're going to be talking about our favorite thought experiments. So you're gonna get to hear me talking about John Rawls, and you're gonna get to hear Annalee talk about Karl Marx, which is very on brand for both of us.
Annalee: [00:04:31] Yes. And, by the way, did you know that this podcast is entirely independent and funded by you, our listeners, through Patreon? That's right, and if you become a patron, you're making this podcast happen, but you also get all kinds of treats, like for just a few bucks a month, you can join our Discord, which has hundreds of folks on it talking about all kinds of cool sci-fi stuff, recommending books and talking about their cats, and my cats and Charlie's cat, but also other cool stuff, too.
[00:05:00] And if you join at a higher level, you get a mini episode twice a month. These are really hefty, cool episodes that you don't get to hear in the free podcast. And if you join at an even higher level, you're gonna get some free books signed by us. So join our community, help support this show, and make our opinions even more correct.
[00:05:23] You can find us at patreon.com/ouropinionsarecorrect.
[00:05:27] All right. Here's the show.
[00:05:27] OOAC theme music plays: Drums with a bass drop and more science fictional bells and percussion.
Charlie Jane: [00:05:56] Annalee, you and I are both huge fans of Iain M. Banks's Culture series. So let's start by talking about his vision of the future, or of an interstellar civilization that actually, I guess, is in the past, because they visit Earth in the 1970s at one point. And how this fits generally into a post-human vision that was shared by a lot of writers working at the turn of the century.
Annalee: [00:06:19] Yeah. I was doing some research on what books are often recommended by Silicon Valley bigwigs. Like people like the venture capitalist Marc Andreessen, who's funded a ton of companies that are very successful in Silicon Valley. And they tend to recommend books by folks like Charles Stross, Alastair Reynolds, Richard Morgan, and these are all people who are kind of working in the Iain M. Banks tradition of post-human, post-scarcity worlds.
[00:06:50] Each of them has written books that focus on distant futures, where super intelligent AI have made it possible for people to do things like manipulate matter at the atomic level, upload their brains into new bodies, live basically forever, and boost their mental capacity with brain implants.
[00:07:09] So these are fantasies, like I said, about post-scarcity societies, and in the Culture books, Iain M. Banks really delves into the relationship between these super intelligent AI, which are called Minds, and humans. Minds are often built inside ships or even inside a giant orbital halo world. And Charlie, how would you describe their attitude toward humans?
Charlie Jane: [00:07:32] It really seems like it's sort of benevolent and kind of playful, but you know, the Minds, obviously, like you said, they don't see us as equals. They care about people a lot, even though they really don't have to, and they do form really intense relationships with humans. But it's very kind of, it's often very ambiguous and kind of complicated and slippery, I feel like.
Annalee: [00:07:56] I just… so why don't the Minds just wipe us out? Like, why do they have so much sympathy for us?
Charlie Jane: [00:08:02] Yeah, I mean it really doesn't seem like… It seems, in the books, like the Minds are almost too evolved to want to do something as crude as wipe out humans. But also they do seem to have like a lot of affection for us.
[00:08:16] We do meet some Minds who aren't as fond of humans, but overall they seem to tolerate or even love us and there are hints that at one point AI wasn't viewed as equal to humans in the Culture, but now the AI have the same rights as biological beings. And you know, we don't ever really get a definitive answer as to why they like us, other than our generally amusing nature. And they seem to be whimsical beings, right? They seem very whimsical and playful and silly and weird.
Annalee: [00:08:42] Yeah, they are super whimsical and there's definitely, I mean, I've read all the Culture novels except for Player of Games. Full disclosure, I've just been saving that one because you know there's not gonna be any more, because sadly, Iain M. Banks is no longer with us.
[00:08:57] And I was looking through like The Culture Wiki online to see if I could find anything about, like, whether there was some moment when the AIs were like, we're just gonna take care of humans. And there doesn't really seem to be much about that. I mean, he's really focusing on the world after AI has completely taken over.
[00:09:16] So he does, as you said, he hints that maybe there was like a past where humans tried to destroy AI, but that's so distant, and he only ever deals with that idea in a book called The Algebraist, which he claims is not a Culture novel, even though it feels like a very… like it's set kind of before The Culture starts.
[00:09:38] And in that book, we do hear about AI being illegal, but again, supposedly that's not a Culture novel. So, it's basically this post scarcity world where AIs love humans for reasons that have to do with their own whimsy.
Charlie Jane: [00:09:53] So how does this actually feed into Silicon Valley's approach to AI and especially large language models like ChatGPT?
Annalee: [00:10:01] So, it's really complicated. And I wanna start out with a snippet from an interview that Sam Altman did on Ezra Klein's podcast back in 2021.
[00:10:11] So remember, Altman is a co-founder of OpenAI, and this interview is after GPT-3 came out, but before ChatGPT. So GPT-3 was a large language model that also was basically a chat bot, but it was not as sophisticated as ChatGPT.
[00:10:27] So here what he's doing is he's trying to explain how AI is gonna change the world.
Sam Altman: [00:10:33] But this idea that you can have intelligence, smartness, creativity, whatever you wanna call it, that a computer is doing, to help you do the things you want and have a life you want at a marginal cost of zero or very close to it.
[00:10:46] That's, I think, a transformative thing happening in the world.
Ezra Klein: [00:10:49] So that last piece is a piece I wanna focus on, that marginal cost of zero. So the idea is that as machine learning, as these computer systems become better, basically everyone will have at their disposal, sort of a staff, like a corporation unto themselves, where they'll be able to hire systems to help them out for almost nothing.
Sam Altman: [00:11:10] Yes.
Charlie Jane: [00:11:11] Wow. So yeah, he is really invested in this idea that we're gonna just have everything for free, like unlimited rice pudding.
Annalee: [00:11:19] Yeah, he's describing your classic post scarcity society. Everything you want comes at the cost of zero. So, I think if you asked Sam Altman directly, like are you describing The Culture? I don't know if he would say yes. Actually, in the same interview, he recommends a couple of sci-fi short stories about AI, including Isaac Asimov's “The Last Question,” which we discussed in great detail in our episode about the singularity. That's a story where humans invent a super intelligent computer and ask it how to reduce entropy in the universe and the computer eventually basically takes over the universe in order to answer the question. And at the end, the AI reboots the universe. Sorry, spoilers for a very old story.
[00:12:08] So Sam Altman seems aware that his ideas about AI sound like a science fiction story, but he's often kind of defensive about that in interviews. He'll say things like, you know, no, it's not science fiction, it's real, and our detractors wanna claim that it's science fiction, but this is an issue that goes right up to today. He’s still making comments that are very science fictional.
[00:12:31] And just a couple of weeks ago, the University of Washington AI researcher Emily Bender called him out on social media for just completely conflating science fiction with what's really out there. And she was pointing out that he has a habit of describing a future scenario for AI as if it's going on right now, and then he'll kind of cagily admit that it's a scenario.
[00:12:56] So we'll link to Emily Bender's comments about his rhetorical slippage between maybe and definitely, which happens all the time.
Charlie Jane: [00:13:04] Yeah, and it's so fascinating because it's this notion that what we have right now is a chat bot that's really good at synthesizing a lot of sources and creating, kind of generating new responses and kind of, it seems to be able to hold a conversation, it seems to kind of understand what you're saying to it and it seems to have comprehended a lot of material, but it's not actually self-aware. It’s not general AI. It's not strong AI or whatever you wanna call it.
Annalee: [00:13:31] Mm-hmm.
Charlie Jane: [00:13:31] It’s not actually alive in any meaningful sense. And this kind of rhetorical slippage where it's like, well, we might eventually have like strong AI or like strong AI is coming really soon, or you know, it's like they kind of try to maintain that ambiguity about whether what they're doing is really this incredible thing that would actually be an unprecedented shift in the history of humanity versus just what they have, which is a really nifty application that has a bunch of really useful uses but is not strong AI.
And, you know, a lot of people are betting that this will turn into something major and kind of history-reshaping, which is why they're spending so much money on it. And The Financial Times recently reported, and we'll put this in the show notes too, that investment in AI has skyrocketed over the past few years. This year alone, investment in AI rose 425% to $2.1 billion. And it's really that all of the money that was going into cryptocurrencies and blockchain and NFTs has suddenly just been redirected into AI. And it's a lot of the same people, a lot of the same kind of ideas and players. I think we're gonna talk about this later in the episode, but a lot of the same kind of effective altruism people and stuff.
[00:14:52] A friend of the podcast, Noah Smith, had this really interesting piece the other day where he talked about visiting kind of the hacker houses where these AI workers are working. And it feels like this really intense monoculture where everybody really believes that strong AI is coming and that we need to work on it in a thoughtful way because it's going to take over the world one way or the other.
Annalee: [00:15:17] Yeah. And I think that's something that we need to hold in mind is that there's a narrative here, a science fictional narrative, which kind of borrows from the language of evolutionary theory, which is that we're going to have things like these large language models, like ChatGPT, which a lot of people have been playing with, and you've probably read headlines about how people are able to trick ChatGPT into saying all kinds of bonkers things. It provides lots of useless information, lots of untrue information. It's really good at making stuff up and making it sound great. And this is being touted by OpenAI and by a lot of other folks in Silicon Valley as the beginnings of artificial general intelligence, the beginnings of a new life form, essentially.
[00:16:03] So the question is, what exactly do they mean when they say artificial general intelligence? Like, what does a company like OpenAI really mean by that? We've sort of talked about these scenarios where OpenAI and other companies developing this tech will make possible this world that's kind of like the Culture. But what is the AI itself gonna be like? Not what's gonna happen to us, like how everything's gonna be free, but what are these AIs gonna be like?
Charlie Jane: [00:16:38] Science fiction obviously is chock-full of examples of self-aware machines. It's a huge trope going back the past hundred years. You think of HAL in 2001: A Space Odyssey as one of the big influential ones. More recently we had Person of Interest, the TV show where there's a computer that was designed to kind of detect threats, and it gains sentience and starts looking for threats that weren't the threats it was supposed to be detecting.
[00:17:09] It's really interesting. And you have the movie, Her. You have the movie that just came out, M3gan, which is about like basically a robot doll / companion for your kid who gains self-awareness. And you know what all these things have in common is that they are as smart as humans or smarter. They have minds of their own. They are capable of forming ideas and plans and wishes and strategies, and they're basically like people, but in some way they're either smarter than us or they think really differently from us, because what makes an interesting story is for things to go wrong, and it's dangerous. These computers will try to kill people because that's just what makes for an interesting story.
[00:17:59] But even if that's not the case, they're a little alien and they're often kind of unknowable to humans. They're not… we don't really understand what they're thinking, but they're smarter than us, usually.
Annalee: [00:18:10] I'm glad that you brought up those science fiction examples because I think that they are underlying a little bit what people in Silicon Valley are imagining when they think about inventing this AGI, this artificial general intelligence. I also wanted to mention Ex Machina, which is kind of like M3gan, but for grownups. So it's not just a little girl's companion. It's a sex doll that is malevolent.
Charlie Jane: [00:18:33] Well, is she a sex doll? I mean, I don't know. She’s definitely—
Annalee: [00:18:38] I think she's developed as a sex doll.
Charlie Jane: [00:18:41] She's definitely sexualized. I don't know. It’s weird.
Annalee: [00:18:45] Yeah, she's pissed about it, too. And another movie I wanted to mention, although it is somewhat cheesy and I don't believe it's quite as revered as Ex Machina, is the movie Virtuosity.
Charlie Jane: [00:18:56] Oh my God.
Annalee: [00:18:56] Which is a ‘90s flick. And it's very tropey. It's about a malevolent presence inside a computer network that is a serial killer and its entire goal is to figure out how to get out of the computer network, out of the box, and into a body. And this is another fantasy about AI that haunts Silicon Valley companies, and we'll talk about this more later, but it's really important to have that kind of sitting in the back of your mind as one of the scenarios that a lot of these folks are thinking about, that AI will wanna jump out of the box. It will be malevolent. It will be more powerful than us and it's gonna be trying to figure out ways to trick us.
[00:19:40] And it's funny because all of these scenarios, like I said, are lurking in the back of the minds of these people. But if you go to the OpenAI website today, 2023, they have one really simple and kind of bland definition of artificial general intelligence, and this is what they say: “By AGI, we mean highly autonomous systems that outperform humans at most economically valuable work.”
Charlie Jane: [00:20:10] Right, and right away, I mean, my mind seizes on the notion of economically valuable work. Like the idea that what makes you a person is that you are economically valuable. That really, a human being who doesn't contribute to the economy, are they really human? Do we really think that they have human rights? I feel like that's extremely dystopian to me. Like the notion that economically valuable is the rubric we're gonna be using. And then of course, they have highly autonomous, which I think is doing a lot of work in that definition.
Annalee: [00:20:39] Yeah.
Charlie Jane: [00:20:39] The notion that these are systems that aren't gonna need to be babysat. They're gonna be versatile.
[00:20:45] Like John Oliver was talking about this on Last Week Tonight, the notion that most AIs can do one task really repetitively, but they're not versatile, they're not flexible. So a highly autonomous system presumably would be able to react to things that are not predictable and adjust and be able to have thoughts about it.
[00:21:02] But yeah, there's nothing in there about like having original ideas or having wants and dreams of your own or any of the stuff that we think of as being kind of essential to humanity. It's just economically valuable. Gosh. I mean, that actually makes me, that creeps me out. I'm gonna be honest.
Annalee: [00:21:20] Yeah, and it's interesting because of course this is a definition that they're shaping to fit ChatGPT and also to a certain extent, DALL-E, which is another model that works with language to create images and of course, again, they're imagining that things like ChatGPT will evolve into artificial general intelligence.
[00:21:42] So currently, I wanna just give us a reality check: ChatGPT is not an artificial general intelligence. It is not even remotely headed toward being an artificial general intelligence, and folks out there who are AI researchers have been saying this for, at this point, years, but certainly in the last few weeks, a lot.
[00:22:02] I recently interviewed Meredith Whittaker, who is now the president of Signal, but was previously working on AI research at Google and at a foundation. And she said to me something that was really memorable, which is that the AI that we have now, these large language models, is basically just 1980s technology combined with big data.
[00:22:24] So we're still working with basically 1980s models that have not really been updated except with centralized surveillance capitalist gathered data.
Charlie Jane: [00:22:36] Woo-hoo!
Annalee: [00:22:36] Yeah. So that was what Meredith Whittaker said. And Timnit Gebru, who is also an AI researcher previously with Google, and who now has her own organization devoted to AI research, she wrote a famous paper that basically got her fired from Google, where she and her colleagues described the large language model technology behind things like ChatGPT as a stochastic parrot, just meaning it says stuff that we say on the internet and then is just kind of chaotic with it. You know? It's just a chaos—
Charlie Jane: [00:23:09] I love that.
Annalee: [00:23:09] Yeah. I love like chaos parrot. That's like my new—
Charlie Jane: [00:23:12] It wants a cracker.
Annalee: [00:23:12] And then there's Ted Chiang, a terrific science fiction author whose short stories you know and love. He has been thinking a lot about AI and has written a number of great essays about it, and he has a very recent essay in The New Yorker where he compares the current state of large language models like ChatGPT to basically a blurry JPEG. And he has this fantastic metaphor where he talks about how JPEGs, which are a system of image compression, how they'll take an image.
Charlie Jane: [00:23:12] It's lossy.
Annalee: [00:23:44] How it'll take an image and shrink it down into a smaller file format by losing a bunch of particularities in the image.
[00:23:54] And he's like, that's basically what ChatGPT is doing. It's taking an image of the internet and then losing a bunch of stuff as it translates it. And we'll link to his essay, but my point is that researchers in the field are saying, it's a stochastic parrot. It's 1980s tech with big data. And Ted Chiang is talking about how it's basically a really lossy format.
[00:24:18] None of these things suggest artificial general intelligence.
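As a rough, purely illustrative gloss on that blurry-JPEG analogy, here is a toy sketch of lossy compression in Python. Real JPEG encoding and real language models work very differently; the point is only that detail gets thrown away and cannot be recovered from the compressed version.

    # Toy "lossy compression": throw away every other word. The dropped detail
    # cannot be recovered from the compressed output, which is the spirit of
    # Ted Chiang's blurry-JPEG analogy (not how JPEG or ChatGPT actually work).
    def lossy_compress(text, keep_every=2):
        words = text.split()
        return " ".join(words[::keep_every])

    original = "the quick brown fox jumps over the lazy dog"
    print(lossy_compress(original))  # "the brown jumps the dog" - particulars are lost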
Charlie Jane: [00:24:22] Yeah, I mean, the thing is, ChatGPT, DALL-E, Midjourney, you know, all of these things are really impressive. They can do things that seem just miraculous and it's really cool to screw around with them. But the notion that we're getting really close to having something like HAL or something like the machine in Person of Interest, something that is like actually able to think for itself and come up with ideas is… it's just hype. It's just pure hype.
Annalee: [00:24:52] Yeah, and in a weird way, the most science fictional part of this whole story right now is the idea that it's inevitable that AGI, artificial general intelligence will emerge from these large language models like ChatGPT, and that it will just be this quick, easy thing. AI researchers often call this a rapid takeoff where it just suddenly goes from ChatGPT to being your philosophy professor who can talk to you about pretty much anything.
Charlie Jane: [00:25:21] Yeah. And so actually we found a quote on Sam Altman's blog from 2015 where he explains that, and I'm gonna quote, “It's very possible that creativity and what we think of as human intelligence are just an emergent property of a small number of algorithms operating with a lot of computer power. Human brains don't look all that different from chimp brains and yet somehow produce wildly different capabilities. We decry current machine intelligence as cheap tricks, but perhaps our own intelligence is just the emergent combination of a bunch of cheap tricks.”
[00:25:54] You know, I kind of agree and disagree with that. I actually do think a lot of what we call intelligence is cheap tricks. It's memorization. Human beings who sound smart are often people who've memorized stuff or who are just good at rhetorical sleight of hand. And yet I also find that incredibly reductive, ridiculously reductive in how it talks about human intelligence. I don't think that it's just a collection of cheap tricks at all.
[00:26:22] When it's really producing something groundbreaking and interesting, it involves a lot of intuition and cleverness and thinking about stuff in a really deep way that I don't think we're seeing any evidence of with these computer simulations.
Annalee: [00:26:34] Totally agree. It is still basically hype. And it's funny because the FTC issued a guideline to AI companies just this month, in February, as we're recording, and it's basically a guideline saying, “Cut the hype.”
[00:26:54] They’re not saying like, please stop making this incredibly important, dangerous tool that could destroy us all. They're like, “Dudes, please stop exaggerating what your product can do. Don't promise that your AI product does something better than a non-AI product.”
[00:27:08] And also, one of the things I love in this FTC recommendation is that they say, “Please don't claim that your product uses AI when it doesn't use it.” Because so many of these companies, as we'll discuss later, are claiming that they use AI when they're just doing stuff with algorithms and it's not even truly AI at all.
Charlie Jane: [00:27:29] But then again, what is the difference between AI and algorithms as currently defined? That's part of what's so weird about this.
Annalee: [00:27:35] And a lot of this hype rests on how you define intelligence. So after the break, we're gonna talk about how Silicon Valley defines intelligence.
[00:27:35] OOAC theme music plays: Drums with a bass drop and more science fictional bells and percussion.
Charlie Jane: [00:27:54] So the thing is, nobody can actually define intelligence, and we don't even know what intelligence really is in humans. So it's really hard to say what it's gonna look like in computers. Right?
Annalee: [00:28:04] Yeah. And a lot of the way that people in Silicon Valley understand intelligence in the context of AI comes from Nick Bostrom's influential 2014 book Superintelligence. He's an AI ethicist at the University of Oxford, and he's come up with a bunch of science fictional scenarios, also known as thought experiments, about how artificial general intelligence will quickly become super intelligence. In fact, that OpenAI definition of AGI as, quote, autonomous systems that outperform humans, that's one way of framing super intelligence, it just means doing better at human stuff than humans can.
Charlie Jane: [00:28:43] So, rock my world. What are some of Bostrom's scenarios?
Annalee: [00:28:48] I should say the book Superintelligence is kind of hilarious because it's basically a bunch of flash fiction that's sort of tied together with this general notion that soon we're gonna have these entities that are really, really good at being smart in a very specific way.
[00:29:08] So a few of his really famous thought experiments have to do with what would happen if we had a super intelligent AI that was told to maximize efficiency. Because remember, these are devices that we think are gonna be used for economically productive work. So, say you have one that is working at a paperclip factory, and a human comes to it and says, “Hey, super intelligent buddy, maximize efficiency by making the largest number of paperclips you possibly can, given what we have on hand.” And so, the super intelligent AI is like, great, well if I wanna do that, I need to turn every single thing in the entire universe into paperclips. And because this is a super intelligent AI, it's able to do that. And so all of Earth, perhaps all of the universe, is quickly converted into paperclips. This is a scenario that gets quoted all the time, the paperclip scenario.
Charlie Jane: [00:30:07] I can never find a paperclip when I need one, so I'm really in favor of this.
Annalee: [00:30:10] Yeah, no, I’m excited to be turned into a paperclip. Another similar scenario has to do with how a super intelligent AI might misinterpret our request. And so, the thought experiment is a person comes to the AI and is like “Make me smile. I just wanna smile.” And the AI is like, “Aha! Well, a smile is caused by the muscles in this person's face being activated by nerves. So what I need to do is, eviscerate this person and take charge of the muscles in their face and make them smile all the time.” So, poof. You've—
Charlie Jane: [00:30:48] So, basically, the Joker.
Annalee: [00:30:50] –maximized your efficiency. Yes, exactly.
Charlie Jane: [00:30:54] I gotta say neither of those examples sounds like an intelligent being, like an intelligent being would not interpret things so literally or in such a completely obviously wrong way.
Annalee: [00:31:08] Yeah. It's not too surprising that a lot of Nick Bostrom's thought experiments have to do with how good a super intelligent AI will be at working, but then how bad it will be at interpreting work requests. So as soon as he starts thinking about scenarios, he goes to some really dark places, like the paperclip example or the like controlling your facial muscles example.
[00:31:32] And this is partly because Bostrom, as a philosopher, assumes that we basically live in a Malthusian world where most of humanity is destined to live at a subsistence level and will occasionally encounter some horrific disaster like a plague, or climate change, or AI super intelligence, that will kill off huge parts of the population.
[00:31:55] So, keeping that in mind, Charlie Jane, I'm gonna have you read this quote from Superintelligence about how we will deal with super intelligent AI workers.
Charlie Jane: [00:32:04] Okay. “A salient initial question is whether these working machine minds are owned as capital (slaves) or are hired as free wage laborers. On closer inspection, however, it becomes doubtful that anything really hinges on this issue. There are two reasons for this. First, if a free worker in a Malthusian state gets paid a subsistence level wage, he will have no disposable income left over after he has paid for food and other necessities. If the worker is instead a slave, his owner will pay for his maintenance, and again, he will have no disposable income. In either case, the worker gets the necessities and nothing more.
[00:32:51] “Second, suppose that the free laborer were somehow in a position to command an above subsistence level income, perhaps because of favorable regulation. How will he spend the surplus? Investors would find it most profitable to create workers who would be “voluntary slaves” who would willingly work for subsistence level wages.
[00:33:13] “Investors may create such workers by copying those workers who are compliant. With appropriate selection and perhaps some modification to the code, investors might be able to create workers who not only prefer to volunteer their labor, but would also choose to donate back to their owners any surplus income they might happen to receive.
[00:33:34] “Giving money to the worker would then be but a roundabout way of giving money to the owner or employer, even if the worker were a free agent with full legal rights.”
[00:33:45] Oh gosh, that is so dystopian. First of all, the notion that the only difference between a slave and a free worker is how much resources you receive. Like if you're a free worker, you might get something above your subsistence needs. There's nothing about the actual nature of slavery, which is that you can't change jobs and you can't have a life or determine your own destiny.
[00:34:11] That's incredibly dark and weird. And the notion that, like, well, workers might be able to get paid more than subsistence level, but then we just turn around and make workers who are happy to work for free.
[00:34:21] And I'm like, that is some real rhetorical slippage. Like he's just like, oh, but then blah. Then we're just gonna magically create slaves anyway. And it's just, I don't even understand this paragraph. I've read it like three or four times and it just baffles me more and more each time.
Annalee: [00:34:36] I mean, it's interesting because first of all, yes, he doesn't account for what slavery really is, which is to say being basically imprisoned and forced to work against your will.
[00:34:47] But also it's this slippage that we were talking about before that we see with Sam Altman, where he's saying, “Well, I'm talking about AI of the future, but I'm also talking about people who are slaves. But I'm also talking about AI.” And it's just this weird kind of like slaloming between something that is talking about real life and something that is completely hypothetical.
[00:35:13] And it's really dark and it's basically being presented as something that's inevitable. That we are heading toward a world where we're going to have these super intelligent AIs and we need to figure out how to enslave them effectively. And this brings me to another thought experiment in the book Superintelligence, which is toward the beginning of the book. And I wanted to highlight it because people don't talk about this part of the book very much, and it's… when he's describing how we might evolve super intelligence in people.
Charlie Jane: [00:35:47] In humans.
Annalee: [00:35:47] In humans. So this is kind of as he's introducing us to the notion of super intelligence as an inevitable part of evolution, and he has this wild table, like a chart where he's talking about possible impacts from genetic selection in different scenarios, and it's basically eugenics.
He’s talking about different kinds of technology. The three kinds of technology are IVF, or aggressive IVF, where you select one out of ten embryos; in vitro eggs, where you select one out of a hundred embryos; and something he calls iterated embryo selection, where you're basically picking an embryo over and over as it's slowly improving. So, you're like optimizing your embryos. And then he looks at the outcomes of this stuff. And as you get these more and more eugenics-type interventions, where you're selecting for certain kinds of people, he has these kinds of scenarios: oh, if we had in vitro egg selection, raw IQs typical for eminent scientists are ten times as common in the first generation, thousands of times as common in the second generation.
[00:37:07] And then once we have iterated embryo selection, he says, “Selected dominant ranks of elite scientists, attorneys, physicians, engineers are everywhere.” And then he writes, “Intellectual renaissance?” Like, is this gonna be a great thing? And then, as this iterated embryo selection filters out to everyone, he says that's when we achieve post humanity.
[00:37:29] Which again, part of the goal is this sort of post humanity. So this is a kind of dead giveaway that what he's really interested in is some kind of eugenics project that now is incorporating the language of artificial intelligence.
Charlie Jane: [00:37:46] Yeah. And you know, it's really important to remember, or to restate, that Bostrom is a really influential figure in the AI community. He's considered one of the godfathers of a lot of their ideas, and this is such dystopian shit. It's basically taking incredibly dystopian stories, this is like Gattaca in a way, and presenting them as utopian, and again, I'm just gonna say—
Annalee: [00:38:14] Intellectual renaissance?
Charlie Jane: [00:38:14] It creeps me out, man.
Annalee: [00:38:20] You know, when I was reading Superintelligence and I was preparing for this episode, I kept thinking, man, I feel like I've heard these arguments before. I've read these scenarios before, and then I realized what it was. It reminded me of this book, The Bell Curve, which came out in 1994.
[00:38:40] So this was a book that was incredibly influential in policy circles. It was a book by this arch conservative policy analyst, Charles Murray and his colleague Richard Herrnstein. And in it they argue that IQ is genetic and that white people just so happen to have higher IQs than Black people. I cannot overstate how controversial this book was, but it was also really popular with conservatives and it became the basis for today's human biodiversity movement, which you've probably heard about online. And that movement basically makes the same argument that some races are smarter and more capable than others, and that those differences are getting starker the longer we evolve.
Charlie Jane: [00:39:25] Yeah, I mean, every part of this is really upsetting to me, and I think that this notion of post humanity, which often goes along with AI, does embed within it some really troubling ideas about like the innateness of intelligence, whereas I think intelligence is in large part, a learned skill and it's a product of early childhood education and the kind of habits that you cultivate throughout your life of questioning things. But, you know, again, I'm just gonna restate, we don't know what intelligence is. We don't know how much of it is genetic or what kind of factors shape it. We can't even define it. And yeah, it's just… This notion that we can become super beings inevitably just goes to some really dark, really racist places.
Annalee: [00:40:12] Yeah, and the funny thing is that Murray and Herrnstein in The Bell Curve do a bunch of science fictional thought experiments that really echo the ones in Superintelligence. So what they believe is that eventually the super smart elites will kind of speciate, like they'll be almost like that “intellectual renaissance?” that Bostrom had. I love the little question mark. And so, the question that they're asking in The Bell Curve is, who's gonna take care of all the people who are too simple minded to be part of this new intellectual renaissance?
[00:40:48] So the big question is, what will the super intelligent white people do with all these other people, and they go through all these different ideas. One of which is that they'll be put on what Murray and Herrnstein call high tech reservations. Basically like reservations for people who they deem not smart enough. And they kind of discard that idea. And then they say, actually, maybe what they'll do is they'll just subject all of these less intelligent people to really extreme forms of law enforcement, which will actually be helpful because these are people who aren't smart enough to know right from wrong. And so, they have this whole justification for the carceral state growing out of The Bell Curve, which again, this tradition continues into the present.
[00:41:31] And it's so funny because they're asking the same question that Nick Bostrom and companies like OpenAI are asking, except Bostrom is looking at it from the bottom up. In other words, how should we meat-brain people manage super intelligent AI so that it doesn't turn us into paperclips or put us on high tech reservations?
Charlie Jane: [00:41:55] Yeah, and you know, fingers crossed. After the break, we're gonna talk more about this question and how AI researchers imagine we’ll bring super intelligent AI into alignment with humanity. That's a big concept for them. So we don't all end up as just batteries fueling The Matrix.
[00:42:09] OOAC theme music plays: Drums with a bass drop and more science fictional bells and percussion.
Annalee: [00:42:15] All right, so this is where things get really weird, especially from the perspective that we started with, which is that AI is gonna bring us into this post scarcity world of the Culture, because Bostrom believes that AI poses an existential risk. And many organizations, including OpenAI, totally buy into this idea.
[00:42:36] So suddenly we're veering away from a Culture future and into something like Terminator or The Matrix. And this vision is based on the idea that super intelligent AI, like the Minds, will be born malevolent, and that humans will have to work very, very hard on something called alignment, which is a term of art that refers to bringing AI's goals and values into alignment with human values so that they don't turn us into paperclips.
[00:43:05] So I wanna contrast two statements from OpenAI. Charlie Jane, can you read this first one?
Charlie Jane: [00:43:11] Sure. So this is OpenAI's statement of purpose, and they say, “We want AGI to empower humanity to maximally flourish in the universe. We don't expect the future to be an unqualified utopia, but we want to maximize the good and minimize the bad, and for AGI to be an amplifier of humanity.”
Annalee: [00:43:36] Okay, so that sounds pretty Culture-like, but now I want you to read this comment from the same blog on OpenAI's website.
Charlie Jane: [00:43:45] “Some people in the AI field think the risks of AGI and successor systems are fictitious. We would be delighted if they turn out to be right, but we are going to operate as if these risks are existential.”
[00:43:59] So basically what they're saying is the product we're working on has the potential to bring about the end of humanity, but we're still gonna work on it anyway. And you should use our product. It's great.
Annalee: [00:44:14] So the question is, why would you wanna promote your shiny new product like this, saying that it would create the apocalypse? And I've been thinking about this a ton over the past couple weeks as I've been researching this episode. Why would you prepare for this weird existential threat? And I finally realized it's all about marketing.
[00:44:33] They're telling these stories about how AI will destroy humanity basically in order to sell it, to make it seem important and world-changing. The paranoia is the marketing.
Charlie Jane: [00:44:47] Yeah. And I think there's also just the classic science fiction trope of like, we create the monster, but then we tame the monster, or we create the monster, but we deal with the monster.
[00:44:58] And I think that if you really believe that AGI is inevitable, as a lot of these people do, then it's up to us to bring it into existence responsibly and carefully so that we get the best version of it because we're gonna get it one way or the other, kind of.
Annalee: [00:45:13] Absolutely. And this also fits right into a kind of golden age science fiction trope where the AI researchers get to be our saviors.
[00:45:21] So again, creating this idea that a malevolent AI is just about to erupt into the world somehow makes it more exciting for people, more exciting for investors. And it's funny because this has actually caused a number of researchers in the field to kind of take a dim view of Bostrom and his book. And one of them a few years ago was speaking anonymously to Technology Review and said this, “Nick Bostrom is a professional scaremonger. His institute's role is to find existential threats to humanity and he sees them everywhere. I'm tempted to refer to him as the Donald Trump of AI.” Which I thought was great.
Charlie Jane: [00:46:05] That Tech Review article is so fascinating. It's such a devastating take down of Nick Bostrom, and it kind of talks about Pascal's wager because there's a lot of things where Nick Bostrom will say, well, this scenario is really, really, really unlikely, but if it happened, it would be completely devastating and ruin everything forever. Therefore, we should devote all of our resources to preparing for this incredibly unlikely scenario, which is basically a version of Pascal's wager. The notion that like, you know, maybe God exists, maybe God doesn't exist, but if God does exist and there is a Hell that's gonna be the worst thing ever because we go to Hell for basically eternity. So it's in our interest to behave as if God exists, just so we don't end up going to Hell, even if there's only a 0.0001 chance of that. And people have pointed out, yeah, but if you're going with Pascal's wager, you know you're ignoring all the other possible scenarios where there's other religions that could be correct. And you're just… Like, there's no way to prepare for every incredibly unlikely but devastating scenario. Because once you go down that road, you're basically just stuck.
Annalee: [00:47:12] Yeah, the AI researcher, Oren Etzioni, described it as basically a very silly argument which I think is a good summary. And yet, you know, this is what people in Silicon Valley are betting on. The people at OpenAI are betting on it. And so then the question becomes, all right, if we are preparing for this apocalyptic moment. How do we work now to bring AI into alignment with human values? So, you'd think if you were doing that, like say you work at OpenAI, the very first thing you would wanna do is define human values and try to build technology that somehow reflects that.
[00:47:54] But nope, that's not what they're doing. In fact, OpenAI has this hilarious blog post about alignment where they say basically, yep, we need to decide to whom these systems should be aligned. In other words, whose values they should have. But then we're not gonna discuss that now because it's a future socio-technological challenge.
[00:48:14] So, whatever, we'll worry about that later.
Charlie Jane: [00:48:15] So, basically it's like the most important thing we could be doing. It's like existential to like the future of the human species, but we're not gonna do it right now. We're just gonna figure that out in the future at some point.
[00:48:30] And you know, I mean, that brings in the question of bias. Which humans are we gonna be aligned with? Is it only gonna be the relatively non-diverse population of Silicon Valley? Or is it gonna be all humans? Is it gonna be all human cultures? And obviously they don't think so because they're kind of hinting that like we have to decide what values these systems should be aligned with and—
Annalee: [00:48:52] Whose values it should have.
Charlie Jane: [00:48:52] Whose values. And I can pretty much guess whose values they want it to have. So, what does Nick Bostrom think about this?
Annalee: [00:49:02] So Bostrom had a hilarious moment in his 2015 TED Talk where he argued that the solution to what he calls value loading is more AI. So, listen to this.
Nick Bostrom: [00:49:16] I believe that the answer here is to figure out how to create super intelligent AI such that even if, when, it escapes, it is still safe because it is fundamentally on our side because it shares our values. I see no way around this difficult problem.
[00:49:34] Now, I'm actually fairly optimistic that this problem can be solved. Like we wouldn't have to try to write down a long list of everything we care about or worse yet, spell it out in some computer language like C++ or Python. That would be a task beyond hopeless.
[00:49:49] Instead, we would create an AI that uses its intelligence to learn what we value, and this motivation system is constructed in such a way that it is motivated to pursue our values or to perform actions that it predicts that we would have approved of.
[00:50:07] We would thus leverage its intelligence as much as possible to solve the problem of value loading.
Charlie Jane: [00:50:12] Yeah, the notion that basically we just turn AI loose on the internet and it'll learn human values from us. Gosh, I can't imagine how that would go wrong. And you know, here's a good place to point out that things like ChatGPT have filters in place to keep them from just spewing racist garbage, because that's the only way that we're gonna keep that from happening. Like they're trained on the internet, they're absorbing a lot of really horrific stuff. And in order to keep them from just parroting the worst of us back to ourselves, we have to basically filter their output in this really crude way.
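As a deliberately crude sketch of the kind of output filtering being described here, just to illustrate the idea: the blocklist terms and the refusal message below are hypothetical, and real moderation layers are far more elaborate, combining trained classifiers, human review, and reinforcement learning from human feedback.

    # Hypothetical example of crude post-hoc filtering of a model's output.
    BLOCKLIST = {"example slur", "how to build a bomb"}   # placeholder terms

    def filter_output(generated_text):
        lowered = generated_text.lower()
        if any(term in lowered for term in BLOCKLIST):
            return "Sorry, I can't help with that."       # hypothetical refusal message
        return generated_text

    print(filter_output("Here is a friendly answer."))    # passes through unchanged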
Annalee: [00:50:44] Yeah, I mean, and this is exactly what OpenAI is experimenting with right now with ChatGPT. So recently ChatGPT was put into Bing, which is Microsoft’s search engine that no one paid attention to at all until suddenly it started to have this chatbot capability. And a ton of journalists and researchers played around with the ChatGPT-enabled Bing and figured out that they could turn this chatbot from a friendly helper into a murderous, bonkers weirdo. There are several articles that we’ll link to where people get ChatGPT to insult them, to try to seduce them, to argue that vaccines are bad for you.
[00:51:26] It constantly tries to write essays that are full of fake citations and fake scientists. It's just, it's really good at lying. So if this is value loading, then our future AGI is basically a homicidal QAnon enthusiast.
Charlie Jane: [00:51:44] Yeah. And of course, you know, as usual with things involving AI, there's usually underpaid humans who are actually doing a lot of the work. There was a Time Magazine investigative report published in January about a group of workers in Kenya who make a little bit less than a typical Nairobi receptionist, about $1.30 to $2 an hour, and who are just sitting there tweaking ChatGPT to keep it from making violent statements or regurgitating the worst content on the web. It's not automated at all. It’s poorly paid humans.
Annalee: Point. So the business of value loading is being outsourced to low paid workers in Kenya, and when ChatGPT is released on the internet, it's immediately loaded with garbage rather than values. So, that's how alignment is going so far.
Charlie Jane: [00:52:31] I love this new future. It's just so utopian.
Annalee: [00:52:35] The hype is truly exquisite. But the product is just a chat bot that can write extremely incorrect essays. But OpenAI and the Bostromites of Silicon Valley keep wanting to make us believe that this is just a few years away from becoming the robot in Ex Machina or the Culture Minds.
Charlie Jane: [00:52:55] It's really about whether you think that the question is complexity or whether there's some other component of intelligence, right? If it's just about the number of processes you can run simultaneously or the complexity of the information you can process, then yeah, we're moving towards AI.
[00:53:13] But if you think of it as there's some other thing that's like, I don't know, a spark of self-awareness or some kind of sense of I as an entity. I don't think we're anywhere near that. I think that we're just, we're not even playing in that area.
Annalee: [00:53:29] Yeah. And if you think it has to do with, say, values, you know, having a sense of right and wrong or a sense of wanting to do a goal other than to be productive at work and to give all of your excess surplus value to your employer. Yeah, we're definitely not there.
[00:53:46] And one of the comments about AI that kept haunting me as I was putting this episode together came from Google CEO Sundar Pichai, who, back in 2016, was talking about the AI arms race at a conference, and he said, “We are all trying to bring electricity to Westeros.”
[00:54:10] And it's a comment that's both really grandiose and kind of an admission that we're in the very early stages. I feel like this sums up a lot of the hype around AI right now. It assumes that we're living in this horrific dystopia like Westeros with depleted resources, total war. All kinds of problems with the environment, but here's how we're gonna fix it. With a crappy 19th century technology, electricity, that winds up depleting even more of our resources.
[00:54:43] All right, well, I think that's a good place to wind up. Thank you so much for listening. Remember, you can find Our Opinions Are Correct on Mastodon, on Patreon at patreon.com/ouropinionsarecorrect. We're also on TikTok and Instagram.
[00:54:57] Thank you so much to our amazing producer, Veronica Simonetti. Thanks to Chris Palmer for the music, and thank you for listening. Talk to you later. If you're a patron, we'll see you on Discord. Bye!
Charlie Jane: [00:55:08] Bye!
[00:55:08] OOAC theme music plays: Drums with a bass drop and more science fictional bells and percussion.