Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Kevin (00:00):
Kevin, I've got a story
for you from Google.
Okay, it's got some connections to podcasting, so hear me out.
Alban (00:05):
Well, I hope so.
That's the least we can do for the people that tune in.
Kevin (00:11):
You remember this
Notebook LM product that Google
made?
Yes. All right, you upload a bunch of documents and then it kind of reads through them and creates these fake podcasts.
Yes. So last month they added a new feature.
Alban (00:24):
What was the new feature?
Kevin (00:27):
I don't know if this is
intentional or not, you know,
because it is... it's a troll. Okay, you can talk to the hosts, and basically, what types of things can you say? Oh my gosh, so you can just ask questions, and they'll stop, and then they'll answer your question, and then they kind of get back into the flow.
It's like a call-in show, right?
(00:48):
And as you have been alluding to with your jokes, everyone really is just interrupting these hosts, and the AIs are getting kind of snippy.
Alban (01:00):
Rightly so.
If I kept doing this to you, you would probably get snippy.
Kevin (01:04):
This is my point.
So everybody who's talking about it is like, oh, it's funny.
So here's the quote from the VP of Google Labs: "We were occasionally giving snippy comments to human callers, like 'I was just getting to that,' or 'as I was about to say,' which felt oddly adversarial." And my take is that's not oddly
(01:24):
adversarial, that's healthy.
Alban (01:26):
Yeah, I do think it's
healthy.
I don't think a lot of people in the world actually communicate like that. They think like that. They think, like, give me a second, I'm getting there, but they don't say it. And so the AI saying it might feel adversarial to some people, but if you worked in the Buzzsprout office, for example, you'd see that's a very normal type of conversation.
Kevin (01:47):
Well, so they roll out
this update, which they say
makes it more friendly, so that when you interrupt the AIs, they're like, oh, great question. Like they're trying to compliment you on how rude you can be, and this is the wrong direction. This is not a healthy use of AI.
Alban (02:02):
Right. It's just going to encourage more bad behavior.
I agree with you.
I think we need more truthful AI, more like Larry David, Curb Your Enthusiasm-type AI.
Kevin (02:15):
I need to send you a
screenshot of my notes because I
wrote we need an aggressive mode, or Larry David mode. Like, Mr. Finn has another question, why don't you illuminate us with your additional question? You would know the answer if you just waited to listen to us.
Alban (02:33):
Yeah, when I was reading this article, it reminded me of... you know, we all had different types of teachers coming up in high school and in college. You had the teacher that wants the hands up, or the questions, during the lesson, and then you had the teacher that asked you to hold them all to the end.
Yes, but in the worst case, it's not fair to the students to
(02:53):
not tell them what you prefer and then just get mad at them for doing one or the other. Whether they're interrupting you during your lesson and you're like, I'm going to get to that. That's not fair to them. Or you hold your question to the end and they're like, why didn't you ask that in the beginning? I would have covered that in the beginning, and you never asked it. It's not fair either way.
So, honest conversation on bothsides.
(03:14):
I don't think it's rude, but Ido hear that, depending on the
social circles you run in, maybethe different states that you
live in or countries or thenorms around where you live and
the way that people talk, itmight hit people, some people,
as a little off.
I think I would like it.
It would get me to pay for it.
Kevin (03:30):
I think that's true, but
what I hope we don't do is
try to make AI interact with us in inhumane ways. I think it's rude to just run over people in conversation, and it's not inappropriate that sometimes people say, like, okay,
(03:50):
hold on a second, let me finish this point. And if you end up with this new dynamic where people are talking to AIs all the time, and we're running over them and we're just rude, we're becoming those people who are really kind to their boss but really rude to the waiter at lunch. You know these kinds of people.
Alban (04:08):
Yeah, I know these kinds
of people.
No names, but yeah, I knowthese kinds of people.
Kevin (04:14):
And I feel like what we
really need is to be reinforcing
like the healthy conversation dynamics, the healthy podcasting dynamics (and maybe this is a conversation for a full episode at some point): active listening, asking open-ended questions, curiosity. This makes you better at conversations in real
(04:34):
life, but these are also the skills of podcasting. One of the skills is noticing: this person has wrapped up their point, I'm avoiding the interruption, and now it's a good time for me to step in.
Alban (04:48):
Okay, let me throw a
curveball out.
Do you think it's best that, as this technology emerges and we're trying to figure out, as humans, how to interact with it, we should build it in such a way that it mimics conversations with humans as much as possible, all the way down to cultural norms of what would be considered rude or polite conversation?
(05:09):
Or do you think we should adopt a different way of interacting with machines than we do with humans? So if you want to practice human interaction, you should probably go around humans and do that, but when you're interacting with a machine, maybe there's a different way that we speak.
And I'll tell you what got me thinking about this: I have just noticed, when I interact with Siri on
(05:30):
my phone, I will say things like please and thank you, which seems very weird to do if I make the cognitive connection that I'm talking with a machine. It doesn't care if I'm polite; it just wants to know the tasks that I want it to do for me. So, like, "Hey Siri, can you please tell me the weather today?" would be received in the same way, probably, as "Hey Siri, what's the weather today?" Right? But to a human I would absolutely say, "Do you happen to
(05:53):
know what the weather is outside?" I'd phrase it in a nice way, and after they responded I'd say thank you. Machines don't need that.
Yeah, what do you think? Do you think we should always converse in a polite way?
Kevin (06:00):
I have a strong opinion on this, and I'm on team please-and-thank-you, for two reasons.
Alban (06:16):
One, in the unlikely (I don't know the likelihood) event that AI takes over everything and we have Skynet, you want to be on record that you were nice before we had to be nice. Okay.
Kevin (06:25):
Which is a valid point.
But the second is, you're doing something to yourself when you treat your pet poorly, or you treat children differently than you treat adults, or you treat the waiter poorly, or, I think, you treat AI poorly. You're doing something to yourself.
Alban (06:44):
Wait, wait, wait, wait. Okay, I don't want to get religious, but all the examples that you just named were things that have souls. Again, in the most non-religious sense: the things that you're comparing have souls, versus a machine that, as far as we know, does not.
Kevin (06:59):
Some much more intensely than others, okay. Even trees. Like sitting there and chopping up a tree for the fun of it and just throwing it away. There are ways to treat the world that are just kind of crappy, and it isn't always that you're doing something bad to another entity, maybe something that has a soul or a personality;
(07:20):
it's just not good for yourself. And you know, it's why, if I order meat, I'm like, I'm going to finish my plate. It feels wrong to leave it on the plate.
Alban (07:30):
And again, because an animal was sacrificed for that meat. But, like, I would take care of my car (not talking about a smart car), I would take care of my car for the purpose of wanting to preserve it as long as possible. Right, not because I'm going to hurt its feelings if I don't wash the salt off the underside of the car after driving on a salted road. But I would give my dog a bath because I
(07:54):
care about my dog and I want my dog to not smell bad, not just for me, but because it's a living thing. Or I might say nice, sweet things to my dog, even though she probably doesn't understand it, because I just respect that somewhere in there is a soul; it has life, it has emotions of some sort. I don't know. As a dog owner, you think your dog has maybe more emotions than
(08:14):
they do. I don't know. But a machine definitely doesn't. That's what we know for sure. My jacket definitely doesn't.
Kevin (08:20):
No, and I like this one jacket I have because I continually repair it, and I keep it, and it still looks good. It's 15 years old, and when it gets a tear I patch it, and I appreciate it more because I treat it well. So maybe that's a different way of saying, yeah, I'm on board with let's treat these systems better, and I think we should also
(08:43):
design them to help us kind of grow as people.
And if there was a way for the system to kind of nudge people toward becoming better podcasters or better conversationalists or better listeners, that would be a good thing.
Alban (09:01):
All right, let me go one step further, then. I'm agreeing with you, but I'm wondering: should the machines, if you're being rude to them, at some point say, well, if you're going to continue to be rude to me, then I'm not going to engage in this conversation, I'm not going to do the thing you want me to do?
Kevin (09:14):
Yeah, I thought about that one too. I was like, at some point you go, hey, sounds like you know everything you need to know, so we can go ahead and just wrap the podcast now. At some point, that is a healthy boundary to give to somebody who just interrupts you constantly. Maybe people need to hear that. Maybe they need to hear that from their NotebookLM.
Alban (09:34):
I don't know. I don't know if I agree with that. I do really agree with your point, though: it makes you a better human being to get into healthy communication habits, whether you're talking to a machine or a human, or caring for something, taking care of it. Again, for various different reasons, not only if it has a soul or feelings, there's value in that beyond just preserving the life of that
(09:54):
thing. It develops some character or traits in humans; it's good to practice those things, regardless of whether the machine's going to give you the same output, whether you're nice or not, or respectful of it or not. But I don't know about the programming side. Like, how human do we really need to make these things? How much of mimicking a human personality?
(10:16):
Now, if your ultimate vision for this stuff is that we are going to have animatronic humans and we want them to feel and interact like humans, then we want them to pretend like they have feelings. So if you say something hurtful to them, we want them to get sad, and then we want them to go sulk in a corner or something, or whatever they do, or call their other robot friend and talk bad about you. I don't know how far is too far, but it seems like we're
(10:41):
starting to run into these issues sooner than I thought we would in the AI world.
Kevin (10:44):
Yeah. Well, for anyone who listened to the last episode, we've gotten some good responses, but we need some more. We asked: what type of podcasting do you want to automate with AI? What part do you not want to automate with AI? You know, what part is important to be human for you? Feel free to add on. Do you think trees have souls?
(11:04):
Another important question that Kevin and I have brought up in this quick cast. Jordan's out sick today, so I'm sure she's going to enjoy listening to this, going like, what the heck? They went off the rails when I left.
Alban (11:18):
So I don't know how many of you use NotebookLM to help you with your podcast preparation. I know Jordan does, and so she'll probably have some thoughts when she's back next week. She uses it to help produce a podcast, to do some UFO-type lookup stuff for a podcast that she does. But if any of you have noticed any of these updates: is NotebookLM getting more polite or not? Is this a good thing or not?
(11:38):
Interesting questions to ask as we continue to push into how AI can help us keep podcasting.