Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Melissa Gismondi (00:00):
So this is me, I've never engaged with a chatbot, ever. This is me back in April 2024. I have no idea, I don't even know where you go online, I don't know anything. If it was up to me, I'd probably just Google "ChatGPT."
Teresa Heffernan (00:21):
I read a lot about it. I don't actually use it, so I'm in the same boat.
Melissa Gismondi (00:29):
I'm at the Jackman Humanities Institute at the University of Toronto with Teresa Heffernan, a former visiting Faculty Research Fellow, and I'm about to engage with ChatGPT for the very first time. It says here, "ChatGPT can make mistakes. Consider checking important information, read our terms and privacy policy."
(00:51):
So write me a brief description of a podcast featuring humanities and literature scholars discussing the perils of AI. Oh my gosh, it's going so fast. Oh my god, it mentions, there's Frankenstein. It's very creepy.
(01:18):
I don't know. There's just always been something about chatbots and artificial intelligence and big tech that doesn't sit right with me. I just don't trust it. It wasn't until I got the chance to speak with Teresa, who's been researching AI and robotics for more than 20 years, that I started to understand what's really going on.
Teresa Heffernan (01:40):
Well, it's also interesting if you look at that: "we uncover narratives that reflect our hopes, fears, and uncertainties about AI." But that's where I'd say, oh, let's back up there.
Melissa Gismondi (01:51):
Teresa Heffernan is a Professor of English Language and Literature at St. Mary's University in Halifax. While at the JHI, she was investigating how literal readings of science fiction are shaping the AI industry and our society. It's a fascination that began in the early 2000s.
Teresa Heffernan (02:11):
I kept seeing references to fiction as evidence, and I was thinking, okay, well there's something wrong there. And I wanted to go and figure out what that relationship between the science and the fiction was. And I also wanted to go out and ask, okay, well what's your model of the human? What does that mean? What do you mean by intelligence? So I did go off to Japan, I went through the States, I talked to people like Geoffrey Hinton, and I didn't get very good answers, I would say.
Melissa Gismondi (02:39):
The absence of intelligence in AI is the focus of Teresa Heffernan's research. It's also the focus of this very first episode of Humanities at Large, a new podcast from the Jackman Humanities Institute at the University of Toronto. Each year, the JHI invites a group of research fellows from various disciplines to investigate topics based on a theme. Last year's theme was Absence.
(03:18):
I'm your host, Melissa Gismondi. I was a JHI fellow back in 2020. Over six episodes every other week, I'll introduce you to a different research fellow and their inquiries about absence through the lens of philosophy, music, landscape architecture, social justice, and more. We're starting this series with Teresa Heffernan and the Absence of Intelligence, the Science and Fiction of AI. This conversation was recorded back in April. Since then, news about artificial intelligence has been constant.
Newsreader 1 (03:56):
OpenAI just dropped a bombshell.
Newsreader 2 (03:58):
GPT-4o.
Newsreader 3 (04:00):
The laughing, chatting, singing AI, "So responsive," says the Guardian, "you could almost forget it's not a sentient being."
Newsreader 4 (04:07):
The latest version passes the bar exam? The thing we should embrace and be terrified of.
Newsreader 5 (04:13):
Elon Musk is suing OpenAI for breach of contract. He has warned before that the unfettered use of generative artificial intelligence could pose an existential threat.
Newsreader 6 (04:25):
Geoffrey, thanks so much for joining us. You've spoken out saying that AI could manipulate, or possibly figure out a way to kill humans. How do we solve this problem?
Teresa Heffernan (04:34):
One of the problems with the reporting on AI is that these companies put out what are basically press releases or product information, and it gets picked up in a way that's like a corporation getting a free platform to advertise their products. We're prone to be seduced by these narratives because of the intentional framing of a lot of this technology by fiction. And so you get seduced by this story. So for the last year, they've been talking about AI as an existential risk, and it's a real distraction from what is actually happening.
Melissa Gismondi (05:15):
This is sort of difficult to wrap your brain around because it's like, why would developers want us to think that AI poses an existential threat when AI is their product?
Teresa Heffernan (05:27):
I would say that there are some true believers who think that AI actually is an existential risk. And you can go back to Alan Turing, who's considered one of the fathers of AI, to his paper from 1950, where he talks about this child machine that he is creating and the possibility that machines will take over. And then he says, like in Erewhon. Erewhon is a Victorian novel from 1872, written by Samuel Butler, and it's a satire. What Butler refers to is this specious analogy; a specious analogy has a kind of ring of truth, but it's completely fallacious. In the story, there's a section called The Book of the Machines, and what Butler does is apply Darwinian logic to machines, and so this whole society is completely afraid, remember, this is 1872, completely afraid that machines will take over.
(06:29):
When Turing reads that in 1950, and he was known as a sort of literal reader, he takes it seriously. And you can trace it from that moment: you can see iterations of that idea of machines taking over, of AI becoming autonomous, the possibility of it becoming a superintelligence, from the 1950s on, from Turing's paper on. But when you actually ask, well, okay, tell me what the science is, tell me what the logic behind this is, there isn't any. What I get instead is, well, it's like Terminator. Those are fictions, and we can talk about how to interpret fictions, but they're not science. They can't fill in for scientific evidence.
(07:19):
So why is this? Well, it's a huge distraction from what is actually going on with AI, which is massive amounts of power and water; it's very environmentally resource-intensive, not to mention the destruction to democracy, the toxic bots out there, the troll farms, all sorts of things. But this idea of AI as a threat that is possibly developing is a big distraction, and it puts it off into the future as opposed to what should be regulated right now. It also keeps the prophets of AI in complete power, in a sense, because they're the ones who are determining what regulations we need. They keep saying, "We'll keep it safe."
Melissa Gismondi (08:11):
I know you've talked about how, when you get down to it, AI is just, it's data, it's cameras, it's microphones, it's algorithms. It's not that sexy, it's not that innovative, in a way. So is that part of this, too, that if you exaggerate the risks, it makes the innovation look, maybe, more significant or more daring than it actually is?
Teresa Heffernan (08:41):
Yeah, I think there's a kind of glamour to it, there's an intensity to it, there's the threat of it, all these things, and we just think, oh, please keep us safe, Microsoft.
Melissa Gismondi (08:57):
Please keep us safe, Sam Altman.
Teresa Heffernan (08:59):
Exactly. Who, I would mention, is a prepper himself. So I don't know if people know the term prepper, but it's people who stockpile. He's got stockpiles of gas masks and antibiotics and all sorts of things for this point when the world goes into chaos.
Melissa Gismondi (09:20):
Do you know, not to take us too far down the prepper thing, I'm just curious about this tech elite logic, because I know there are others. Peter Thiel, I think?
Teresa Heffernan (09:31):
Absolutely.
Melissa Gismondi (09:32):
I know Mark Zuckerberg's building some compound in Hawai'i, I think.
Teresa Heffernan (09:36):
These bunkers.
Melissa Gismondi (09:37):
These bunkers, yes. Are they talking about climate now or are they talking about technology gone rogue, in terms of the future that they're afraid of that you have to prep for?
Teresa Heffernan (09:50):
Yeah, it's a good question. Is it nuclear weapons? Is it AI? Is it climate change? I think the other thing that they're really concerned about is the massive concentration of wealth and power that they have.
Melissa Gismondi (10:06):
So they're afraid of us, in a way?
Teresa Heffernan (10:07):
Well, I think they are afraid of us. But they're also looking around, and it's an apocalyptic narrative: the apocalypse will happen and then we'll be able to restore the world, whether that's on Mars or on Earth or another planet. A lot of them are so influenced by science fiction, and I think there is this tendency amongst that crowd to have a literal reading of science fiction. If you look at, for instance, someone like Jeff Bezos or Elon Musk, they talk about growing up on Star Trek, and Jeff Bezos actually models himself on Captain Picard. He's got his dog named after one of the characters, he wrote himself into a show, he's got the bald head, all these things. So Captain Picard is a huge hero.
Captain Picard (11:06):
Graves, give Data back, give him back. He's not simply an android, he's a life form, entirely unique.
Teresa Heffernan (11:15):
When they're talking about it, they say, "Well, I watched Star Trek and I'm frustrated with the state of intergalactic travel." And you're like, well, hold on, that was actually fiction. And if you read anything by Gene Roddenberry, who's the creator of Star Trek, he said, "I was talking about the Vietnam War, I was talking about race politics in the States, I was talking about patriarchy, and I couldn't get it by the censors, but if I put it on a planet with purple-haired people, I could get away with a lot." And so you've got the first white/Black kiss on TV in Star Trek.
Melissa Gismondi (11:56):
So they're missing that. I think you have this quote on your website from Ursula K. Le Guin: "Science fiction is not predictive. It is descriptive." That's what they're essentially missing in these literal readings of science fiction.
Teresa Heffernan (12:10):
Yeah, they treat it as kind of prophetic, and I think that there's a way in which fiction is never predictive. What it does is defamiliarize our present so that we can see more clearly the problems that we're having in the present. So for instance, if you take Terminator, James Cameron says, "I was never talking about killer robots. I was talking about the whole military industrial complex, and that that is actually dehumanising us, not that we're going to create machines that are killing people."
Melissa Gismondi (12:52):
So this idea that super intelligent machines are one day going to overthrow us or kill us, whether it's the Terminator or HAL in 2001: A Space Odyssey trying to kill us, is this at all possible, or are we just so far away from that future that the AI industry is trying to make us afraid of?
Teresa Heffernan (13:15):
Yeah, a lot of people would say, "Well, it's not here yet. AGI, which is artificial general intelligence, is not here yet, but it's coming." And I'm like, well, okay, show me the model, because I don't see it. And if you go back to Turing again, and I think his model is still very much there, what happens is the idea of imitation. So what the Turing test is all about is that if I can't tell the difference between a computer and a human, then I must be able to say that the computer has reached human-like intelligence. But if you think about it for a minute, or if you read Turing, the article opens with the question, can machines think? But he very quickly says, "Well, thinking is too complicated, so I'm going to look at imitation."
(14:00):
And imitation and thinking have nothing to do with each other. I can get up on a stage and say, "I'm a pilot," or I can be in a film and play a pilot, but that doesn't mean I know how to fly a plane. I actually know nothing about it. I'm pretending, it's pretence. And the Turing test actually starts out as the imitation game, between a man and a woman with a judge there, and the judge can't see the man or the woman, and the woman is trying to prove that she's a woman, and the man is trying to prove that he's the woman. So it's based on deception, the man can lie, all sorts of things, which is all fine within the parameters of a game. But when you put that out into the world at large, we can go from Turing right to deepfakes. It's the same model.
(14:53):
So as I say, within the parameters of a game, or within fiction, or within film, or within plays, pretence is great, mimicry is great, but not when you put it out into the world and pretend that it's true.
Melissa Gismondi (15:08):
As you were starting to research the subject and get into it, you took some trips and you went to talk to people like Geoffrey Hinton, who's considered one of the godfathers of AI and you went to Japan. Talk a little bit about that trip and maybe what you learned that sort of got you to see AI differently. Because for me, I sometimes think, okay, if I went to talk to these guys, I'd probably buy everything that they're saying. But you have come away with a very different critical view of AI.
Teresa Heffernan (15:40):
In Japan, the robotics lab that I was at is run by Ishiguro, who's probably the biggest roboticist in the world. He creates what are called Geminoids. The Geminoids look like their maker, so he has several of them. And I also went to Denmark to meet the only Caucasian Geminoid, which is Henrik Schärfe's. When I first encountered the Geminoids, they were startling because they're life-size, and the Danish Geminoid has its own tailor, so it's incredibly well-dressed. There's the initial surprise and you're impressed, but after a while, it just seems like a hunk of silicone sitting there on a breathing pump in the corner.
Melissa Gismondi (17:08):
Right. Well, I want to get in a little bit more to this literal readings of fiction, and I'm curious if you remember the first time you read or saw a fictional story about AI taking over. So, I rewatched 2001, A Space Odyssey, on the weekend, and I was struck by, for a movie in 1968, HAL is terrifying.
Dave (17:37):
Oh, HAL, do you read me?
Melissa Gismondi (17:38):
Especially the scene where Dave's trying to get him to open the pod door so he can come back in.
Dave (17:45):
Open the pod bay doors, HAL.
HAL (17:49):
I'm sorry, Dave, I'm afraid I can't do that.
Melissa Gismondi (17:52):
I felt the challenge of Dave's dependency, in this case potentially his life depending on HAL, on technology, and it resonated a little bit with me as I was thinking about how dependent we are on technology right now.
HAL (18:08):
Dave, this conversation can serve no purpose anymore. Goodbye.
Dave (18:11):
HAL?
Melissa Gismondi (18:16):
So I'm curious for you, if you can remember the first sort of time you engaged with a story about AI taking over.
Teresa Heffernan (18:25):
I guess you could go back to Mary Shelley's Frankenstein, actually. When you think of Mary Shelley's Frankenstein, I'm always curious about the point when the monster comes to life.
Victor Frankenstein (18:37):
He's alive, he's alive, he's alive.
Teresa Heffernan (18:44):
And he runs from his lab screaming. And you think, well, okay, but what were you expecting? And then he spends the rest of the novel trying to kill off the monster. But if you go back to the reason he created the monster in the first place, it's because his mother had died and he was so grief-stricken. And I think grief underlies so much of this fascination with artificial intelligence and with creating life. He's so grief-stricken that he decides, I need to be able to bring people back to life, because death is this horrible thing.
Victor Frankenstein (19:26):
Now I know what it feels like to be God.
Teresa Heffernan (19:30):
But then he rebels against it, because he sees that it is not, in fact, human.
Melissa Gismondi (19:56):
You were talking earlier about Alan Turing and his imitation game, or the Turing test. Talk a little bit more about these different views of intelligence, human intelligence, and how it's applied to AI. I'm thinking in terms of how you might approach the question of what intelligence is from the point of view of art and the humanities, versus the view of intelligence Turing had when he was working out some of his ideas about whether machines can think.
Teresa Heffernan (20:28):
Yeah, if you think about computers, the way that you turn the world into information is by turning it into code. And it's binary, it works on ones and zeros, so there's a yes/no logic to it. That's not how humans work, that's not how intelligence works, and it's certainly not how literature works. But you see right away there's the assumption that the computer works like a brain. Well, we know so little about the brain, how do you even begin with that assumption? Some of the neurologists that I've read say, "Well, we might know 5% about how the brain actually works." How are you assuming your computer works like that? How do you make that analogy? And that's what I say, it's this problem where you use a metaphor and that metaphor becomes literalized, and they're two different things.
(21:24):
Some things you want to have a yes/no answer to, but a lot of the world is a lot more complicated than that, and the way that humans think is a lot more complicated than that. So you get this reductionist logic, and when you put that reductionist logic out in the world, it really shuts down conversations about all sorts of things. So when people think, oh, I'm getting a response from, for instance, one of the chatbots, that's already a really limited response. The way that the chatbots work is that you have this large database, and then it's working with statistical predictability: how often has this word appeared next to this word, in this sentence, in this context or in this paragraph? So all this is, is imposing a certain restrictive version of what intelligence is, and I think we need a much more creative approach to the problems that we have in the world.
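A minimal, purely illustrative sketch of the mechanism Heffernan is describing, prediction by word co-occurrence counts: the tiny corpus and function names below are invented for the example, and real chatbots use neural networks trained on vastly larger data, but the underlying move of choosing the next word by statistical frequency rather than understanding is the same.

```python
# Illustrative toy example: next-word prediction from co-occurrence counts.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count how often each word appears next to (immediately after) each word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Pick a next word, weighted by how often it followed `word` in the corpus."""
    counts = follows.get(word)
    if not counts:
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation purely from statistics, with no understanding.
word, output = "the", ["the"]
for _ in range(6):
    word = predict_next(word) or "the"
    output.append(word)
print(" ".join(output))
```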
Melissa Gismondi (22:29):
That we're dealing with right now, not in this future that AI is warning us about.
Teresa Heffernan (22:34):
Yeah, and sorry, I'll go back to your question, because you asked whether we'll get to AGI. As I say, I don't see any model for it. I've been looking at this field for a long time, and I find that there are massive numbers of projects that I can tell you right away won't work. I'll give you one example. It was at Georgia Tech, funded by the military, and it was on the front cover, I think, of Time Magazine. It was a humanoid robot sitting on a stack of books, Nietzsche and Marcus Aurelius, philosophy and novels, and it said, "We're making ethical robots." And I'm like, okay, how do you make an ethical robot? Because to me, ethics requires a lot of complicated thinking.
(23:21):
I'm looking at the model, and their model is, we're going to get the robot, the machines, to read literature, and then they'll know how to act in the world because they'll know how humans act. And I'm just like, what could you possibly mean by reading there, when what you're doing is pattern matching? There's no understanding, there's no imagination, there's no ethical imagination possible in that scenario. But there's a huge amount of money that goes into this. Of course, we've got nowhere with the idea of ethical machines, but it has received a lot of funding.
Melissa Gismondi (23:58):
I read in something you wrote that it was John McCarthy who originally coined the term AI, and that he later came to regret it and said, "Computational intelligence would've been a better term for what this is."
Teresa Heffernan (24:09):
Exactly. So John McCarthy says he regretted it at the end of his life. He said, "I wish I had never used this term. I wish I had used computational intelligence," which I think makes a lot more sense. It is a process of computing.
Melissa Gismondi (24:25):
And when you say that they're trained on data sets, these are trillions and trillions and trillions of data points. I think we're recording this in April; I heard yesterday that they think they might run out of human-based data to feed AI by 2026.
Teresa Heffernan (24:42):
Yes.
Melissa Gismondi (24:43):
So this is a lot of data. It's not like a small amount.
Teresa Heffernan (24:47):
Massive.
Melissa Gismondi (24:48):
Massive, yeah.
Teresa Heffernan (24:50):
And in that data, one of the things that has to be done is that it all has to be cleaned up, so that your ChatGPT doesn't send you all sorts of violent or pornographic content, rape imagery, whatever. So all that data has to be sorted, and all that sorting is being done by very poorly paid workers in various parts of the world, like Kenya, and it's really traumatic work.
Melissa Gismondi (25:19):
Should we ask it the carbon footprint of that question?
Teresa Heffernan (25:22):
Yeah.
Melissa Gismondi (25:25):
What was the carbon footprint of your response? Interesting. "As an AI language model, I don't have direct access to real-time data about energy consumption or carbon footprint. However, the energy consumption associated with generating this response would be minimal. These servers are maintained and optimised by OpenAI to be energy efficient."
Teresa Heffernan (25:50):
Spin response of the industry.
Melissa Gismondi (25:52):
Yes, this is like a corporate statement for sure.
Teresa Heffernan (25:56):
Yes.
Melissa Gismondi (25:57):
Just for some quick stats, a typical data centre will require about three to five million gallons of water. Very often these are in the desert, where there's not a lot of water, to say nothing of the minerals and the stuff that has to be mined for all of the electronic components, which creates e-waste. And at the same time, within this media hype, the AI industry is pushing this narrative that AI is inevitable.
Teresa Heffernan (26:24):
Absolutely.
Melissa Gismondi (26:25):
And it's going to be everywhere, and we want it to be everywhere. So I'm curious about your thoughts on what the implications of this narrative are, particularly at a time when we're really starting to see the stark reality of the climate crisis. Because these are all, as I understand it, very much connected.
Teresa Heffernan (26:43):
Absolutely. And I think that, first of all, you have to look at, okay, where does that narrative come from? Because you can look at the many winters that the AI industry has undergone. So the winters are like, okay, we thought it could do this, it can't do this, and then the whole thing gets shelved and people stop investing in it. But then there's the new hype cycle. So right now it's like, AI is going to cure all these diseases and it's going to give us solutions to climate change. And I just want to say, how? Please tell me how. Don't just make the promise, tell me how this is actually going to work. And then when you look at the AI industry, it is hugely resource-intensive. So at the same time that we're trying to cut back on our carbon footprint, we're investing in a whole industry that is increasing this carbon footprint.
(27:38):
And I think I read recently, it was a physicist, that when you look at the kind of explosion of data that's happening, mostly through social media, in a certain amount of time the digital world will require as much power as our entire world is using right now. So you can see that it's completely unsustainable. It's unsustainable, and even people like Sam Altman will admit that and say we actually need another power source. Well, we don't have that other power source, so we'd better start paying attention to the kind of infrastructure that we're creating.
But I would also say a lot of the people who are in tech, who have been in the AI industry, are what I would refer to as techno-utopians. They have this great faith in the idea that technology will save us.
Melissa Gismondi (28:37):
Including from the climate crisis?
Teresa Heffernan (28:39):
Including from the climate, or we're going to go off and live on other planets or move into outer space. It's a very, very particular narrative. And I think what we actually need to do is take care of this planet.
Melissa Gismondi (28:54):
I'm curious if you think the hype has started to, maybe, wane. Right now OpenAI and, I think, several other companies are being sued by artists, musicians, the New York Times. Are we maybe going back to a period where this hype around AI gets quieter, maybe even goes away, or where we see AI differently?
Teresa Heffernan (29:18):
When I teach my students, they're very quick: when I explain the environmental cost of it, they're like, oh, yeah, okay, that makes sense. I think they have a better understanding of the technology, they have a better understanding of how it's impacted their lives. So I think there is a resistance to a lot of the narratives that have been so dominant in the media, but, as I say, largely put out by a fairly small group of men, powerful men, very powerful.
Melissa Gismondi (29:45):
Right. We spent a lot of time today talking about how AI has misunderstood fiction. So I'm curious what role you think fiction can play in helping us start to overcome some of the challenges that we're facing because of AI, whether it's fiction helping us imagine different futures or demanding different futures. Where do you see the role of fiction?
Teresa Heffernan (30:10):
Fiction says, look, I'm fiction. I'm making this stuff up. I'm making up talking lions or robots or whatever. And so it announces itself as fiction and then says, okay, so let's look at how narrative works, let's have some kind of reflexivity. And I think that's why, for instance, I wasn't seduced early on by the narratives about humanoid robots and soon-to-be-human machine intelligence, because I wanted to say, how does that work? And you never get a simple answer, right? In literature, you'll never get a simple answer. I also think that there are some really interesting writers at the moment who are fighting back against the co-option of fiction by the AI industry, because with the AI industry, fiction turns into tech propaganda.
(31:05):
So they're reclaiming fiction as fiction. There's a great novel by Will Eaves called Murmur, and it's about Alan Turing. He uses an avatar, but it's a really interesting novel, which looks at Alan Turing thinking of the brain mechanically. And then, because Alan Turing, of course, was forced to go through chemical castration, his body started to change, and he was under this regime, this authority, and he started to rebel against it. Eaves, in any case, thinks that he would've rebelled against it. Turing ends up committing suicide not long after that. And there's a sense in which there's a reclaiming of, okay, what are the implications of thinking of the brain as mechanical? What happens to my body, then? What happens to a body? What's the connection between the brain and the body, between an embodied brain and its environment?
Melissa Gismondi (32:06):
So it sounds like this is humanising it, putting the human back into this subject that has been stripped of it, essentially.
Teresa Heffernan (32:15):
Yeah, though I hesitate to say that, because I always think of the human as something that is open. We don't have a clear definition of what it means to be human, so I would say it's more about resisting a very reductive notion of what intelligence is or what creativity is. So it's not so much the human versus the machine, but rather recognising the very limited ways that machines operate.
Melissa Gismondi (32:45):
Teresa Heffernan is a Professor of English Language and Literature at St. Mary's University. She was the Visiting Public Humanities Faculty Fellow at the JHI from 2023 to 2024. Thanks for listening to this very first episode of Humanities at Large from the Jackman Humanities Institute at the University of Toronto. If you enjoyed this conversation, be sure to subscribe, share this episode and leave us a five-star review. It really helps to get the word out. I'm Melissa Gismondi and I'll be back in two weeks.