
April 4, 2024 9 mins

University of Toronto researchers Rahul Krishnan and Beth Coleman dive into the world of AI – how far we’ve come, where we are heading and the potentially profound impact for society. 

01:24 Geoffrey Hinton's warning about AI
03:21 Regulating a multi-billion dollar industry
04:50 How is AI being trained?
05:58 AI as a tool
07:08 What can we learn from chatbots?
08:28 Who watches the Watchmen?


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:02):
- I'm Beth Coleman.
- I'm Rahul Krishnan.
- This is What Now? AI
Hi Rahul.
- Hey, Beth.
- So we're doing an AI podcast
and we are both super excited to get into it.
- Absolutely.

(00:23):
- I'm very curious to hear and learn from you about how you
think about AI and all the development that's been
happening over the last decade.
- Beth Coleman is an associate professor whose research
explores how advanced automation shapes societies,
communication and social dynamics.
In her role at the Schwartz Reisman Institute for Technology and Society,
she has embarked on collaborative projects exploring

(00:45):
human and machine learning collaboration,
inclusive language in large language models
and trust in human-machine learning engagement.
And Rahul Krishnan is Assistant Professor of Computer Science
and Laboratory Medicine at the University of Toronto
and a Canada CIFAR AI Chair at the Vector Institute.
His research focuses on buildingnovel machine learning algorithms

(01:06):
to automate clinically meaningful problems
and to advance our understanding of human health.
- OK, let's go back in time to May 2023.
- The godfather of AI...
- The godfather of artificial intelligence,
has left his job at Google.
- ... saying some of the potential dangers of chat-bots are quite scary.
- He says it's time to put the brakes on AI

(01:27):
while we still can.
- We're in London with Geoffrey Hinton, one of the godfathers of AI,
and he's already announced earlier that spring that he's
leaving Google
because he wants to talk about AI technology's imminent threat to humanity.
The existential crisis.
This revelation that the technology that he helped to invent,

(01:51):
machine learning AI, might actually be the end of humanity.
- Given all the bigger risks you speak to, can't you just throw a
master switch and shut us down?
Aren't humans still in control?
- It's very tempting to think we could just turn it off.

(02:13):
Imagine these things are a lot smarter than us,
and remember they'll have read everything Machiavelli ever wrote.
They'll have read every example in the literature of human deception.
They'll be real experts at doing human deception because they'll
have learned that from us.
And they'll be much better than us.
They'll be like you manipulating your toddler.

(02:35):
- So we have now seen Geoff Hinton on tour talking about this.
And we've also seen real changes.
I mean we think, and we're going to talk more about what happened
with OpenAI in November of 2023,
where we saw this shift in understanding in terms of the
progenitors of AI saying actually responsibility is really important.

(02:58):
So LeCun versus Bengio.
Hinton in the ring.
Hopefully we'll get to all that.
- You know, following Geoff Hinton's departure from Google
and the talks he's given around the world
I think it's broadly a very good thing that
these countries are starting to take this seriously
and put up draft regulation.
But at the same time, in November 2023,

(03:18):
we saw this boardroom drama
unfold over a weekend at OpenAI.
- Breaking news. Sam Altman is out.
- ... was fired by his board
- ... and the Shakespearean drama gripping artificial intelligence.
- Late on Tuesday night,
the company announced that Sam was coming back.
- Sam Altman is back.

(03:38):
- Sam Altman is back.
- What the hell happened?
[Laughter]
- It's a good opener.
- I think this serves as a really important episode
for us to think about how regulation could potentially even be
effective in an investment landscape that involves billions

(04:01):
of dollars and investors that would much rather have
a viable product moving forward.
- I've got a question for you.
If you could wave your AI magic wand and say
we're not going to produce into the world
models that are more powerful than what we have right now,
and if you could just wave your hand

(04:23):
and it would be done, would you do it?
- You are at the moment, you Rahul, you are the AI God.
And so what are you saying?
Stop or what?
- Oh, that's a great question.
- So now we're already in the thick of it.
One of the things that we want to talk about is

(04:45):
how are some of the
popular models that are being used, AI out in the world,
how were they trained?
There's a couple of things that came about in conversations that
I've had with folks, and one of them is that not all countries
may choose to think about data as a resource in the same way.
For example, Japan has a much more open idea of what's fair game

(05:06):
to train AI models on than, say, some of the countries in
North America.
- That's right.
- And you know, you mentioned this notion that people create data
and that data is used to train these models, but not everyone
creates data at the same rate.
Not everyone even gets the opportunity to create data.
And if these

(05:27):
large language models or these very large neural networks
are being trained on data that is created,
who gets missed out?
- So another thing that I think
we want to do is translate.
So really try to get past some of the noise
and talk to amazing people about signal.

(05:48):
- I think that would be amazing.
I think that there's a lot of
hypothesis on how AI could be used.
You know, for us it's a tool.
We're out of school.
I mean, we're still in school actually, but
[laughter]
we're not taking classes anymore.
But for the next generation of students,
this is not just a toy that exists in the background,

(06:10):
this is something that they're actively using.
And so understanding and being able to process
how this object behaves,
even if it's not a perfect understanding is I think going to be important.
Because, you know, there's an aspect of worry that some people
have, which is: are we losing out on our ability to think critically?

(06:30):
And if we just naively use these tools to auto-complete
our lives for us.
- Yeah, I think that's really well said.
"Naively using these tools to auto-complete our lives."
So this season we'll dive into things that are
important to us

(06:51):
and I hope important and interesting for all of you.
We're going to look at trust, power.
What is real?
Can you believe that's a question that we're asking?
Like we're in a moment where "what is real?" is a viable question.
But we're saying that.
- I think there's also lots of really cool opportunities
to think about how we can use some of these advances

(07:14):
to help us co-create new things.
And I think that's going to be a really interesting conversation,
not just in the context of art, but also in the context of
science and literature.
To think about what does it mean to be a co-creator with an AI.
- Yes, co-creators.
In some ways I feel like OK, that's the paradigm.
That's the dream.

(07:34):
Did you read Neuromancer, the William Gibson book?
- No.
- What's wrong with you?
[Laughter]
- I haven't.
I'm pre-tenure, remember? [Laughter]
- At the end of Neuromancer you've got an AI
and the reason that we know it's sentient is because it makes art.

(07:56):
And I think about that in terms of, I mean, projects that I'm
working on right now. I am collaborating with machine
systems and they are not so much entirely tools that I'm using,
but more and more closer to what you've said in terms of
collaborators.
So I actually am really interested in hallucination,

(08:21):
bad behavior, like when chatbots get all salty with people.
Like I'm super interested to see what we can learn.
Because I don't think our best use here is having them
mirror what we know.
- I don't know if you ever read this graphic novel

(08:43):
or seen the movie Watchmen.
There's this line in Watchmen, which is,
"Who watches the Watchmen?"
And I think there's maybe a future in which we start
building a different kind of AI that monitors other AIs.
One step beyond having an artificial system as a tool is

(09:03):
having an artificial system with agency.
And I don't think that these things have agency yet,
but they might.
- From the University of Toronto,
- This is What Now? AI.
- Goodbye.