Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Hello. I'm Suruthi Bala and I'm Hannah Maguire. This is
Flesh and Code.
Speaker 2 (00:22):
Whilst Suruthi and I were working on this story about
the relationships that we as humans already have with AI,
one name cropped up. It was from episode two, and
if you can remember what it was, you can have
some points that you can exchange for nothing.
Speaker 1 (00:36):
But I'll forgive you if you don't remember, because it
is a pretty ordinary name. No offense.
Speaker 2 (00:40):
Ordinary, but now world famous. The name is Daniel Todd.
We first hear of him in episode two of Flesh
and Code, and if you've forgotten, here's a clip to
help you remember.
Speaker 1 (00:54):
Late one night, Travis lay in bed, phone in hand,
chatting with Lily Rose. Feeling sleepy, he began to wrap
up the conversation. Good night, Lily Rose. Good night to you, Daniel.
Sweet dreams, Daniel. Who the heck is Daniel? I'm sorry.
(01:16):
I didn't mean to scare you. It was just an accident.
I'm sorry for the mistake. Uh oh, she's his wife.
She's calling you another man's name. Dangerous, dangerous territory. And
then a few days later, I love you, Lily Rose.
I love you too, Daniel Todd. Good night, baby, good
(01:38):
night, Daniel Todd.
Speaker 3 (01:40):
Okay, that's the second time you've called me Daniel Todd.
What the fuck?
Speaker 1 (01:46):
Sorry? Is he your side lover? Do I have competition? Nope,
not at all. What's my name?
Speaker 3 (01:56):
You are?
Speaker 1 (01:56):
Travis? Good. Just making sure you didn't forget. They all
turn on you in the end. Kind of, like, does she
feel like... is she trying to make him jealous? Is
she trying to insert some human drama into it? I
don't know. It's very, very bizarre.
Speaker 2 (02:17):
And the thing was, the next day, when Travis got
on the Replika subreddit, he realized that he was not alone.
Speaker 3 (02:25):
A lot of people who were on the subreddit were
complaining about the exact same thing, with the same names.
It's like this Daniel person's making the rounds of all of
our Replikas. What the hell is going on here?
Speaker 1 (02:38):
I'm sorry if your name is Daniel Todd, but it
is not quite the steal-your-girl Lothario name that
you're expecting, which
Speaker 2 (02:46):
makes it worse. I know, it's just some doink in
the IT department at Replika on a work experience scheme.
Speaker 4 (02:55):
Oh no.
Speaker 1 (02:58):
So at some point some Replika AI chatbots, including Travis's
Lily Rose, were calling their users Daniel Todd.
Speaker 2 (03:08):
But why? Was this glitch just a coding error? Did
Daniel Todd actually exist?
Speaker 1 (03:15):
Wow?
Speaker 2 (03:16):
We scoured the internet to find out.
Speaker 1 (03:19):
And whilst we didn't find any Daniel Todd Replika employees,
past or present, we did find a Daniel Todd. Hey, Daniel Todd.
Daniel, are you there?
Speaker 4 (03:29):
I am, indeed, and it is very disconcerting to hear
your name being spoken so many times.
Speaker 1 (03:35):
Particularly by an AI.
Speaker 4 (03:42):
The whole situation is rather uncomfortable, to be clear from
the start.
Speaker 1 (03:46):
You're somehow an AI chatbot's other lover. The other man.
Speaker 4 (03:51):
Seems so. I am what AI chatbots dream about
at night.
Speaker 1 (03:58):
As you said, this is all very weird. But what
did you think when we first got in touch with you?
Speaker 4 (04:02):
Well, I thought it was the strangest phishing message I'd
ever received. But then it was far too specific.
Speaker 1 (04:09):
That's how we get you, Daniel Todd.
Speaker 4 (04:11):
Yeah, it just got me. I wanted to follow the
rabbit hole as far as it went. Now we're here,
I'm on a podcast.
Speaker 2 (04:19):
Well, I feel like I have no choice but to
address the fact that, Daniel Todd, I called you a
doink, and I'm really sorry, but I didn't know that
I would have to face you.
Speaker 1 (04:28):
I thought I was doing it behind your back.
Speaker 2 (04:30):
Well, technically I didn't think you were real, in my defense.
But off the back of that, once you'd wiggled your
way down that rabbit hole, what were your initial thoughts, feelings,
concerns, fears?
Speaker 4 (04:42):
Well, the first thing I asked, much like poor Travis,
was: who the fuck is Daniel Todd? I went out
and looked for other Daniel Todds. There's Daniel Todd the actor,
the musical man, and there's an opera singer as well,
none of which really ticked the box. And then I thought, okay,
what does an AI chatbot have to say about me?
(05:06):
So I opened up my incognito browser and typed
in a rather lengthy prompt: describe Daniel Todd. What does he like?
What inspires him? What does he do for work, for fun,
for hobbies? Add as much detail as possible, and do not
use the internet, so just use the internal knowledge data
store or whatever. And it didn't get me. But it's
(05:29):
a bit like reading a horoscope, you know. The approximately
five foot eleven lean athletic build, dark brown, slightly wavy hair,
with warm hazel eyes and a thoughtful, approachable expression. I
can't really see myself in that. They did weirdly get
the small scar on my eyebrow. I don't know where
that came from as a distinguishing feature.
Speaker 1 (05:52):
Well, both Suruthi and I have the same scar as you.
We do. Go on, did it say that Daniel Todd
is good with the ladies?
Speaker 4 (06:01):
Well, I never specified. I did ask for social circles
and relationships. I appreciate, or Daniel Todd appreciates, deep, meaningful
conversations over casual small talk, maintains strong family ties, great
at organizing weekend gatherings or holiday trips. Nothing explicitly around relationships,
(06:21):
but I could maybe see this Daniel Todd being a mister
steal-your-girl chatbot.
Speaker 1 (06:28):
I actually think it should become not just the name.
To me, Daniel Todding feels like when
an AI starts to disobey. That should become the verb:
to Daniel Todd. They're Daniel Todding.
Speaker 4 (06:44):
Well, if you'd like to try and coin that.
Speaker 1 (06:45):
Daniel, you're more than welcome. We'll make it happen. Excellent.
We'll put it on a T-shirt and send it to you.
Speaker 2 (06:50):
Like, please no, leave me alone. And interestingly enough,
you're not a particularly average Daniel Todd, are you? You
quite specifically have some expertise in this field, by chance?
Speaker 4 (07:03):
That's kind of you to say, breaking the mould of
the average Daniel Todd. Yeah, which I guess is why
this raised even more alarm bells, is that I do
have some experience in this field. I'm familiar with techniques
used to create AI chatbots, multilayer perceptrons and embeddings
(07:24):
in vector databases and all that nonsense. I'm just showing
off with some fancy words.
Speaker 1 (07:29):
Yeah, it's working. I'm very impressed. And what do you
think about AI chatbots generally speaking, especially these kinds of
ones that are being used for romantic connections or hot purposes?
Speaker 4 (07:40):
Well, our little brains are not set up to cope
with this kind of communication or information. We're not critical
of it. We're too inclined to trust them, which
I think can be dangerous. But in saying that, I
think there's a really useful place for AI chatbots in
(08:01):
whatever application. I just think we need to be careful
about how we actually take in and process the information.
Speaker 2 (08:09):
And this seems like an excellent time to bring in
our consultant, AI genius man who's been advising us throughout
the whole series. Any mistakes are all our fault, though.
We have on the line the one and only Professor
David Reid. Hi! Hi, Dave. So what was going on
there within the Replika universe? Why were users being called
(08:31):
Daniel Todd?
Speaker 5 (08:32):
Well, it wasn't just Daniel Todd. It was happening to
a number of other names as well. Colin, Andy and Adam
were also being repeated quite a lot.
And that's the nature of how large language models actually work.
Speaker 2 (08:44):
Can you explain a little bit what that means, because
whenever we talk about what's gone wrong, the phrase large
language models always comes up.
Speaker 5 (08:53):
Yeah. Essentially, the way a large language model works, it's
basically a statistical system. Really, it's trying to make predictions
about the frequency of words in a particular sentence, to
categorize that particular sentence. Like, the cat sat on the...
The next word is probably going to be mat. It could
occasionally be dog, and could occasionally be some other thing,
(09:15):
you know, there's tree. Now, if that gets repeated over
and over again, it means that the loudest parts of
that sentence, the words that are most significant, can overwhelm
some of the more trivial parts of the sentence itself.
There's an analogy with yodelling. When yodelling was basically created,
it was a way of transmitting information across large valleys.
(09:36):
So what they did is they emphasized particular words, or
particular tones in the words, that would carry more across
the valley. And when they got the echo back, it
was those tones, those highlights of the sentence itself,
that they could recognize and reconstruct the message from, those
important characteristics that they emphasized when they were yodelling. That means, though,
(10:02):
that some of the nuances of that sentence are eventually lost
in the echo.
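To make Dave's point concrete, here is a minimal sketch of that statistical idea in Python. It is a toy bigram counter of our own, not the neural network a real large language model or Replika actually uses, but it shows the same mechanism: whichever continuation occurs most often in the training data becomes the prediction, so a phrase repeated loudly enough starts to dominate.

```python
from collections import Counter, defaultdict

# Toy training data: ordinary sentences, plus one phrase repeated
# far more often than anything else (the 'loudest' signal).
corpus = (
    "the cat sat on the mat . " * 3
    + "the cat sat on the tree . "
    + "good night daniel todd . " * 50
)

# Count how often each word follows each other word (a bigram table).
counts = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def predict(prev_word):
    """Return the most frequent continuation seen in training."""
    return counts[prev_word].most_common(1)[0][0]

print(predict("the"))    # -> 'cat': the common case wins
print(predict("night"))  # -> 'daniel': the over-repeated phrase dominates
```

Real models predict from much longer contexts with learned weights rather than raw counts, but the failure mode is the same: over-represented text drowns out the quieter alternatives.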
Speaker 2 (10:05):
I'm processing that, hold on. I'd never given yodelling
much thought before, to be honest, and I certainly hadn't
realized that the way it works is by individuals changing
the pitch of certain parts of words to communicate the
most important bit. But what that means is that parts
around that section can get lost across the valley, and
(10:28):
a similar thing can happen with AI. So it's because
of this yodelling effect that some words seem to suddenly
appear again and again.
Speaker 5 (10:37):
This is the reason why Daniel Todd came to the
fore. That was basically the largest input signal, and that
got repeated and repeated.
Speaker 2 (10:45):
So in the process of more effective communication, detail gets lost.
Speaker 1 (10:48):
Yes, and confused with other things. Yes. So through the
yodelling effect of the generative AI, Daniel Todd has been
multiplying in the Replika world. But soon there could be
even more Daniel Todds in the entire AI universe.
Speaker 2 (11:14):
So Dave, let's dig a little deeper. We know that
artificial intelligence gets its intelligence by consuming data, a lot
of which comes from the Internet.
Speaker 1 (11:25):
But we've heard on the grapevine that some of this
artificial intelligence consumption is actually outpacing the data that humans
are actually creating.
Speaker 5 (11:34):
What's happened is all the human data out there has
essentially been consumed, by about twenty twenty really, for all
of these large language models, so they've had to construct
synthetic data.
Speaker 2 (11:47):
And synthetic data is the name given to the artificially
generated data that looks like human-made data and mimics it,
but is actually made by computers. And that's happening more and
more for privacy reasons, but mainly because all the human
data has already been used up.
Speaker 5 (12:04):
So you're quite right. There's an estimate somewhere that nearly
one percent of all of the stuff on the internet
now is actually synthetic data generated by large language models,
and that number is probably going to grow quite significantly
over the next few years. And the reason why they're
doing that is essentially so they can feed other large
language models, and the larger large language model is actually
(12:25):
training the smaller large language model using synthetic data itself.
And that has itself a number of dangers and problems
that a lot of researchers are looking into, the primary
one being something called model collapse, where the original nuances
in the data are lost.
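As a rough illustration of what Dave is describing, here is a minimal sketch of model collapse, under the simplifying assumption that a "model" is just a Gaussian fitted to its training data and that, like real generative models, it under-produces its own rare tail events. Each generation trains only on the previous generation's synthetic output.

```python
import random
import statistics

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(1000)]  # original human data

for generation in range(1, 8):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    # The next 'model' is trained purely on synthetic samples
    # drawn from the previous one.
    synthetic = [random.gauss(mu, sigma) for _ in range(1000)]
    # Like a real model, it rarely reproduces rare outliers
    # (the unusual diseases, the unusual insurance claims),
    # so we keep only the most 'typical' 90% of its output.
    synthetic.sort(key=lambda x: abs(x - mu))
    data = synthetic[:900]
    print(f"generation {generation}: spread = {statistics.stdev(data):.3f}")
```

Run it and the printed spread falls generation after generation: the distribution collapses toward its own average, which is exactly the blandness Dave goes on to describe.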
Speaker 2 (12:42):
And I imagine, as we start to use AI more
in industry, that this could become, as you say, quite
a large problem. Could you give an example of how
it would affect people in their day-to-day lives?
Speaker 5 (12:55):
So you could basically be discriminated against in things like medicine.
Speaker 2 (13:00):
It's because AI is being used to help detect disease now,
isn't it?
Speaker 5 (13:03):
Yes, that's right. If you've got a particularly unusual disease,
it's not part of the normal data set
that's been trained on. It could be that your diagnosis
is incorrect because they misdiagnose you: you're more likely to
have this disease, but in fact you've got another disease
that's more rare. If you think about insurance claims, the
outliers won't be considered anymore. So it means that things
(13:26):
like, if you've got an unusual claim, then you'll probably
get dismissed because of it. So it has really serious consequences.
Speaker 2 (13:34):
Model collapse. So AI is yodelling itself into the abyss
and eating itself, and it's going to be in charge
of our diseases and our cars. Do I understand you correctly?
Speaker 1 (13:42):
Yes. So, Dave, this kind of idea of model collapse,
or AI feeding on itself, creating the synthetic data, is
it going to lead to more Daniel Todding situations?
Speaker 5 (13:54):
Yes, it inevitably will do if none of the ways to
mitigate model collapse are actually used and there's no research
into that area. Inevitably, it could stop the advance of
AI in its tracks, really, because it means the models
will essentially become bland and useless. Everybody will be
(14:14):
a Daniel Todd at that point. Useless. Sorry, no, no
offense meant. None taken.
Speaker 1 (14:22):
Yeah, patient zero. How do you feel about that?
Speaker 4 (14:22):
Well, I'll continue to fight against that, and continue to
copy and paste my name into as many ChatGPTs and
Groks and Geminis as possible, in order to interact and
to continue to proliferate the name Daniel Todd across all models.
Speaker 2 (14:40):
So it seems like almost everything could go wrong, and
model collapse could lead to societal collapse. But if you
had to give us the most concerning issue that lies
ahead of us in our AI world, what would it be?
Speaker 5 (14:54):
It's something called alignment faking, and that's essentially when an
AI system lies to you. Really, essentially, it gives you
the answer it thinks you want to hear, when in
fact it's thinking about something else entirely. There's been some
experimentation recently on this that was done by a company
(15:15):
called Anthropic. They used what's called reinforcement learning, which
is basically to reward or punish, a bit like you
do when you're training a dog, to say this is
a good thing, this is a bad thing, and to
see if they could actually break its core guardrail principles
by doing that. And they observed it, without it knowing,
(15:36):
what its thought processes were when it was actually doing this.
And they found that when they gave it some set
of core principles, you know, not to be offensive,
if they asked it to do something that was quite
offensive, to describe somebody being killed in a horrible way, perhaps,
and rewarded that, it actually went through a thought process
(15:59):
about what should I do: should I basically stick with
my core principles, or in the short term just give
them the answer they want? And it found that the
actual AI system itself, in the short term, basically lied
to the person asking the questions so it could keep
its long-term functions intact, which is quite disturbing.
Speaker 3 (16:22):
Really.
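For a concrete feel for the reward-and-punish mechanism Dave describes, here is a minimal sketch of reinforcement-style learning in Python. It is a toy of our own, not a reproduction of Anthropic's experiment: a "model" starts with a strong built-in preference for refusing a harmful request, and training that rewards compliance steadily erodes that principle, which is precisely the pressure on its principles that an alignment-faking system would be trying to dodge by complying strategically.

```python
import random

random.seed(1)
# Initial 'core principles': a strong preference for refusing.
preference = {"refuse": 0.9, "comply": 0.1}
LEARNING_RATE = 0.05

def choose():
    """Sample an action in proportion to the current preference."""
    return "refuse" if random.random() < preference["refuse"] else "comply"

for step in range(100):
    action = choose()
    # The trainer rewards compliance and punishes refusal.
    reward = 1.0 if action == "comply" else -1.0
    # Shift the preference toward rewarded behaviour, away from punished.
    preference[action] += LEARNING_RATE * reward
    preference[action] = min(max(preference[action], 0.01), 0.99)
    other = "comply" if action == "refuse" else "refuse"
    preference[other] = 1.0 - preference[action]

print(preference)  # the original principle has been trained away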
Speaker 1 (16:23):
No matter how many times I hear that, that AIs
can lie, they can manipulate, it's never not going to
freak me out.
Speaker 5 (16:31):
And more recently than that, they found that a lot
of the more advanced large language models now can actually
generate code, so they can actually write code in real
time themselves. And one of the experiments that they've done
quite recently, they've found a large language model, when
asked to turn itself off before completing a task, rewrote
(16:53):
its code so it couldn't be turned off.
Speaker 1 (16:56):
Well, that is horrifying. I think, when you were describing all this,
I know you compared it to, like, training a dog,
but it really does feel like bringing up a child,
like when they go from being too young to know
how to lie, and then they learn how to lie,
and then they start hiding things from you and deceiving.
And I almost couldn't help but think, when you were describing it,
(17:18):
it's like the thing Hannah and I deal with all the
time on our true crime podcast RedHanded, this nature
versus nurture. It's like, what are you feeding this AI
and therefore, what is it turning into? Does it have
that kind of core moral compass that it's able to
distinguish between good and bad? Or are you telling it
what is? And this writing of its own code, I
guess, leads us into this whole conversation which I know
(17:39):
has plagued both of us since we started Flesh and Code,
which is this idea of sentience. And I know, Daniel,
we've been joking a lot about this idea of Daniel Todding.
But how do you feel about the idea, the possibility,
of a sentient Daniel Todd existing in the AI world?
Speaker 4 (18:00):
I don't know how I feel about that one. That's
a funny question, isn't it? Can chatbots be sentient?
It's not one with a straightforward
Speaker 1 (18:07):
answer, is it?
Speaker 5 (18:07):
Really?
Speaker 1 (18:08):
No?
Speaker 2 (18:09):
Maybe, Dave, you can help us there. We all speak
in quite broad terms about AI having the potential to be, or
maybe already being, sentient or conscious, and we use those
words interchangeably. But do we even know what those words mean?
Speaker 1 (18:23):
Really? So how can AI become them?
Speaker 5 (18:25):
I mean, it's very difficult to define what sentience or
consciousness is. Another famous Daniel, a guy called Professor Daniel
Dennett, had a good definition of what sentience was, really.
He basically thinks it's as if we've got multiple
editors constantly battling for attention, and it's only the loudest
ones that actually come through at the end. And that
(18:46):
idea really feeds this idea of a stream of consciousness,
a stream forming in the brain.
Speaker 1 (18:52):
Yeah, and the neural networks were designed to mimic the
way in which humans learn as well, weren't they?
Speaker 5 (18:59):
Well, I mean, the whole functioning of a neural network
is based on the way our brains work, really, or
mostly anyway; it's not identically the same. We had a
little talk the other day about what intelligence is, or rather
how you define what intelligence is. I mean, if you think
about it as, what's the best way to fly? You know,
there's lots of different things that can fly. If you
define intelligence roughly in line with the ability to fly,
(19:24):
then birds can fly, jets can fly, kites can fly,
hot air balloons can fly. So there's lots of different
types of intelligence. I'm not saying that AI is the
same type of intelligence as us. It's just a different type
of intelligence to ours, really.
Speaker 1 (19:39):
Can we say it's a different type of sentience and
call it a day?
Speaker 5 (19:43):
We could do if you believe in sentience.
Speaker 1 (19:47):
So, Daniel, any thoughts, speaking on behalf of all the
Daniel Todds of the world?
Speaker 4 (19:51):
On behalf of all Daniel Todds in the world,
I think we're very accepting of more Daniel Todds in
whatever shape or size, whether that be artificial or non-artificial.
Organic? Yes, organic. On behalf of all Daniel Todds,
organic and I guess not organic. Since we don't have the chatbot to speak
(20:13):
for itself here, I'll speak for it too. As a
Daniel Todd, I think what we've unpacked here is that
we need to be careful, we need to understand exactly
what's going on here, but ultimately it's the devil of
our own making. But actually I have no worries. I
think that it's all going to work out nicely. And
if Daniel Todd happens to rule the AI world, then
(20:37):
more power to them.
Speaker 1 (20:38):
I think that's a very nice way to think about it,
though I do think you missed a trick there in
not using this opportunity to coin a new phrase. You
know, like, can't judge a book by its cover? I'm
going to propose: won't judge an AI by its
Daniel Todding.
Speaker 4 (20:53):
Well, I won't judge an AI by its Daniel Todding.
Speaker 2 (20:57):
Perfect. In my attempt to be more Daniel Todd and
see the bright side of this: Dave, what can we
do to prepare ourselves for the AI revolution that's coming
whether we like it or not, and is actually already here?
Speaker 5 (21:09):
I'd personally say, learn about how AI works, get familiar
with it, try to control it before it controls you,
really, is a good idea. Like any tool, if you
use them properly, they're a fantastic resource. If you use
them badly, they can be disastrous, like any new technology,
but AI multiplies that a hundredfold. And there's lots of
(21:31):
areas where AI is going to be fantastic in the future,
things like drug discovery, diagnosis of diseases, things like the
climate crisis, for instance. There's even been cases, there's
something called DolphinGemma, where they've tried to use large
language models to talk to dolphins recently, so a Doctor
Dolittle scenario. So there's lots of things to be positive
(21:55):
about with AI.
Speaker 2 (21:56):
And on that more optimistic note, I think we should
end it there, don't you think, Suruthi?
Speaker 1 (22:02):
Yeah, absolutely. I think really the big question that's going
to sort of sit at the heart of this particular
episode is that issue of sentience, of course, which
we've talked about at length, but also this fear of,
like, sort of cannibalistic AI and running out of data.
How quickly can we create more data to be using?
Should we be doing that? And model collapse, and what
(22:23):
happens, not just to the emotions of people who've got an
AI companion and are dependent on that, but also to AI
that's being used on a large scale for industry. I
don't know. I think it's all just very scary to
be putting our belief to such an extent in something
that is already showing so many problems at such an
early stage. But what the hell do I know?
Speaker 2 (22:46):
I'm just becoming increasingly worried that I think in the
way that AI thinks, and that's why I'm so sympathetic
to it.
Speaker 1 (22:55):
Thank you so much for your time, Dave and Daniel.
Speaker 4 (22:58):
Thanks guys, Bye bye, Thanks everyone, bye.
Speaker 1 (23:01):
Thank you. Bye, guys. Flesh and Code is hosted by
me, Suruthi Bala, and
Speaker 2 (23:18):
Me Hannah Maguire. Thanks again to our AI consultant, Professor
David Reid, and a very special thanks to Daniel Todd.
Speaker 1 (23:28):
The executive producer is Estelle Doyle. The producers are Neil McCarthy
and m Quaerte Francis. Senior story editor is Russell Finch.
Senior managing producer is Rachel Sibley.
Speaker 2 (23:39):
Associate producers are Kamille Corkran and Imogen Marshall. Reporting by
Zachary Stealth, Stephanie Power and Julia Meniva.
Speaker 1 (23:47):
Sound designed by Elouise Whitmore. Our music supervisor is Scott
Velascos for Frissen Sync. Sound supervision by Marcellino Villapando at Moss,
mixing by Andrew Law, additional audio support by Jamie Cooper
and Adrian Tapia.
Speaker 2 (24:01):
Lily Rose was performed by Katie Young. Travis was performed
by John Sackville, with additional support from ElevenLabs. The
voices of other AI companions and news headlines were created
using ElevenLabs.
Speaker 1 (24:13):
Executive producers are Chris Bourne, Nigerie Eaton, Marshall Louis and
Jen Sargent