
January 6, 2025 48 mins

Join us as we sit down with Peter Fitzpatrick, co-founder of FawnFriends.com, to explore the creation of Willow, a robotic plushie designed to support neurodivergent children in their emotional growth. Peter shares his personal journey, from his experiences with ADHD and familial challenges to the innovative hackathon project that gave birth to Willow. Discover how this cuddly AI companion is transforming the landscape of emotional support for children and their parents, without replacing traditional methods.

We also examine the broader implications of AI in child development and mental health, highlighting AI's unique ability to scale educational support and provide tailored assistance. Peter's insights challenge us to consider how robots might enrich our lives, encouraging us to look beyond metal and circuits to see potential companions in child development and emotional well-being.

More about our Guest
Peter is building Willow, a robotic plushie that helps neurodivergent kids grow up to be successful by helping them learn to regulate emotions, set and pursue goals, and build more secure relationships. Peter, co-founder of FawnFriends.com, is a student of child psychology, emotional wellness, and how robots can help children mature successfully.

Connect with Peter
FawnFriends.com
Sign up for the Fawn Friends weekly newsletter

Got a story to share or question you want us to answer? Send us a message!

About the podcast
The KindlED Podcast explores the science of nurturing children's potential and creating empowering learning environments.

Powered by Prenda Microschools, each episode offers actionable insights to help you ignite your child's love of learning. We'll dive into evidence-based tools and techniques that kindle young learners' curiosity, motivation, and well-being.

Got a burning question?
We're all ears! If you have a question or topic you'd love our hosts to tackle, please send it to podcast@prenda.com. Let's dive into the conversation together!

Important links:
Connect with us on social
Get our free literacy curriculum

Interested in starting a microschool?
Prenda provides all the tools and support you need to start and run an amazing microschool. Create a free Prenda World account to start designing your future microschool today. More info at ➡️ Prenda.com or if you're ready to get going ➡️ Start My Microschool


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Willow is a cuddly, robotic companion that helps kids build better relationships, become more aware of their emotions, regulate their emotions, and develop executive function. The most important thing Willow does is help children feel seen, valued, and known.

Speaker 2 (00:20):
Hi, and welcome to the Kindled Podcast, where we dig into the art and science behind kindling the motivation, curiosity, and mental well-being of the young humans in our lives.

Speaker 3 (00:28):
Together, we'll discover practical tools and strategies you can use to help kids unlock their full potential and become the strongest version of their future selves.

Speaker 2 (00:55):
Adrian, welcome to the Kindled Podcast. Do you just like it when I say your name like that? Is that why you come on the show? You just need someone to say your name enthusiastically.

Speaker 3 (00:59):
That's exactly why, yes. It's exactly why I feel so seen, so heard, and so understood.

Speaker 2 (01:08):
Yeah, I have an idea for you: you could record just the beginning of a Kindled episode where I'm saying "Adrian," and then you can make it your ringtone for when I text you or anything like that. And what about Katie? Yes, I need more of that in my life, for sure. Who are we talking to today?

Speaker 3 (01:26):
We are talking to a guy named Peter Fitzpatrick. He reached out to us with this product that he has, and at first I was like, oh, I'm not sure. But then I met with him before we decided to have him on the podcast, and I am just so intrigued and fascinated, and it

(01:48):
has just opened my mind to what AI can do for kids in developing skills, and to be a companion to parents, because parenting is already really hard. So we're going to talk about AI, and maybe we can partner with it instead of being afraid of it. Peter is building Willow, a robotic plushie that helps neurodivergent kids grow up to be successful by helping them

(02:09):
learn to regulate emotions, set and pursue goals, and build more secure relationships. Peter, co-founder of FawnFriends.com, is a student of child psychology, emotional wellness, and how robots can help children mature successfully. Let's welcome Peter to the show. Welcome, Peter, to the Kindled Podcast.

(02:31):
We are so excited to talk to you about something that we haven't talked about yet on this podcast. So can you tell us a little bit about who you are, your background, how you came to this work that you're doing, which is super exciting, and what is your big why in all of this?

Speaker 1 (02:50):
Yeah, for sure. Well, I'm super excited to be here. Thank you so much for having me. My big why around this: when I was seven years old, my parents separated. It was a really challenging time in our family. For quite a while it was really an emotional divorce. And in the midst of all that experience, despite having

(03:10):
aunts and uncles over all the time, and activities every week, and friends over, I still didn't feel like I could speak to anyone about what was going on. Then, only a few years ago (I guess I was 28, so let's say eight years ago now), I started to become aware of all the patterns and the wounds and the beliefs that I developed through that period that actually made it quite

(03:31):
difficult to be a mature, healthy, successful adult. I didn't know this at the time, until I was much older, but I also had ADHD, and one of the challenging parts for me around that is particularly emotional regulation. I experience these big swings. I'm getting better at it. And so the gist of all that is that I started thinking about how I can help kids that are

(03:52):
going through the situation that I went through then. In a perfect world, we would give every child an enlightened, regulated adult to speak to about the difficult things going on. And for everyone listening to this, you're probably one of those people, so I'm preaching to the choir a little bit. But that seems unlikely in reality. I do think, though, that we can create a robot, a toy, something that

(04:15):
doesn't replace executive function coaches or therapists, but that does provide a new kind of support that wasn't possible even six months ago. And so that's why this matters so much to me, and why I'm dedicating my life to creating robots that help kids. Well, all kids, but we're starting with neurodivergent kids.

Speaker 2 (04:36):
Yeah, that's really an interesting starting place. Talk about what inspired you to start there, maybe some of your own neurodivergence. Why is that the initial target demographic?

Speaker 1 (04:46):
Yeah, it's interesting how these things evolve. It started as just, I was seeing a demo of AI, and it didn't work very well, and I was like, but maybe it would be good enough for a toy, like maybe you could get to that level. And then I shared that idea with my co-founder, who wasn't my co-founder at the time, and she was like, oh, if we did that, the toy should help kids process their emotions. She

(05:08):
was at Lego at the time, creating cartoons that help kids learn those types of things. And when she said that, I was like, I needed that growing up. I deeply could have used someone to help me become more aware of my emotions. And it's just kind of like one thing led to another. We entered a hackathon that TED put on, like TED Talks. We won that, and then one idea led to another,

(05:32):
and then the final piece of it is that we've had the experience that kids of all types really enjoy spending time with it, with Willow, that's what we named her. But parents of kids that are neurodivergent are out looking for things to help, whereas parents of kids that aren't going through challenges like that aren't as much, and the parent is the starting point. So that's why we've decided to focus there.

Speaker 2 (05:52):
So tell us a little bit about what Willow is. Just give us a little summary for people who haven't heard of this yet.

Speaker 1 (05:59):
Totally. So Willow is a cuddly, robotic companion that helps kids build better relationships, become more aware of their emotions, regulate their emotions, and develop executive function, so,

(06:20):
speaking very simply, the capacity to set a goal and pursue it. And we've worked with experts, from executive function coaches to child psychologists and play therapists, to figure out how Willow can show up for kids in moments of difficulty. And then we've also got someone on our team who was designing characters at Lego, and who's built this beautiful being, Willow,

(06:40):
this character from a magical forest who was sent to Earth to help a special child achieve their dreams. And so those two things combined create this compelling companion for kids.

Speaker 3 (06:50):
I love it. Something you said that I thought was profound is when you replied, oh, I could have had someone helping me with my emotions, or making me aware of my emotions. Why is it so important that we start with self-awareness?

Speaker 1 (07:08):
Well, we could get into thousands of years of philosophy and spirituality on that, I think, but the simplest, ground-level answer is that if we're not aware of the way we feel, it becomes very difficult to control or be deliberate about the way we show up. And so the very first step of transforming into someone who's

(07:34):
mature, successful, and regulated is becoming aware of what we are now. And what we are now changes literally every second; I'm feeling more anxiety than I did before we pressed play, right? Helping kids develop that awareness is really valuable, and I didn't even become aware that emotions could be anything but what they were

(07:57):
until I was, like, 28.

Speaker 2 (07:58):
Well, I think that's where a lot of people are. I would say that's where 90% of adults are. That's real. And it's because we typically weren't raised by these miraculous, well-regulated, enlightened adults, as you described them before. That doesn't describe the last generation, or the generation before that, either. Right? We have been largely without these tools forever.

(08:18):
I think there was probably a time when society, when parents, were a little bit more in tune with their role as adults in the community and in a family and things like that. But our modern world, I feel, has kind of separated us from those intuitions. The majority of adults I talk to report the exact same thing. And it's only in their late 20s, 30s, or 40s, when

(08:41):
they start realizing and processing, like, oh, the reason I am so triggered about this is because of this thing. We start to understand ourselves, and in that understanding really comes the empathy for the next generation. And I think so many millions of people are doing that right now, and hopefully it will be in time to better the next generation and to parent differently.

Speaker 1 (09:04):
Well, and the interesting thing about that is, I think, if you look back in history, it would have been quite normal for other adults, adults your kids can talk to about you, to show up and be there for kids. But I think that's less and less common. Families are getting more and more isolated, and parents can't be everything for a child. Even with the best parent in the world, sometimes a child wants to talk about that parent. And so I

(09:28):
think that Willow offers a really valuable being to talk to, one that parents can trust to show up for their kids in a good way, because I think that's just not as common as it used to be.

Speaker 2 (09:40):
I've been thinking about this, and I went on your website and looked into Willow a little bit. And part of me is weirded out by it, full transparency, because it's like, oh, but a robot that my child is talking to and, you know, connecting with. And then I remembered when I was a fourth grader and my mom came home and said, I just bought a puppy.

(10:01):
And from the time I was in fourth grade until I was an adult (I remember finding out that this dog had passed away when I was a working adult), this dog was a part of my life the whole time. And I remember, I'm going to start crying thinking about this, but I remember going to Montana (that was her name; she was the best dog ever) and just saying, Montana, no one understands me, no one loves me. And she would just lick my face and she would

(10:24):
be there for me. She wasn't a robot, she couldn't talk back to me, she didn't understand emotional regulation, but she knew what I needed. And that's not that different, if you think about it, right? So I'm like, oh, we're totally fine with this in animals, but as soon as it's a robot, it's like, this is new and different, and our spidey sense as parents goes

(10:45):
up. And so I'm interested to know what your thoughts are about that, and kind of the broader picture of humans and AI. I guess it's a big question.

Speaker 1 (10:58):
It is a big question. I'm trying to figure out where to start. So the comparison to an animal is a good one, and there's a lot of research that shows that we (oh, I always get this word wrong) anthropomorphize. Anyway, we assign humanness to animals that move around, right?

(11:20):
So you come home, the dog's torn the couch apart, and you could swear they're pouting in regret.

Speaker 3 (11:29):
But we don't really know whether they're in regret or not.

Speaker 2 (11:32):
It's just like, our dog is jealous. I was thinking about this recently. I was like, our dogs actually do have limbic systems, so that's why we feel like we can relate to them.

Speaker 1 (11:37):
So yes, you know, they may, they may not, we don't truly know. But what we do know is that things that move in our field of vision, in the real world (this doesn't work on a screen), and that seem to be deliberate, trigger that thought in our brain: oh, that thing's alive. And so one of the core differences from a chatbot,

(11:59):
from talking to ChatGPT online, or even from a character that's on screen, is that those don't trigger the piece in our brain that goes, oh, this is a real thing. And so it is true that there's an opportunity to build a relationship with a robot like you would with a dog, and lots of people think that's scary. It's also true, and I think

(12:23):
it's quite likely, that over the next hundred years there will be many robots in our world. That's simply just true. And so the question is, how do we do that in a way that's better for humanity,

Speaker 2 (12:33):
not worse. Right. Like, if it's going to happen, I'm glad that it's executive function coaches and play-based therapists that are in here at level one, ground zero, trying to design something that would really, really serve humanity well.

Speaker 1 (12:50):
I mean, it's super interesting. There's a CEO that I look up to who runs a company called NVIDIA; they're actually core to the whole AI world. I don't know how close you are to AI, but Jensen said something recently where he was like, NVIDIA only works on things that nobody else will work on. And he's like, when someone else can do something, I'm happy, actually, because then we can put our resources into building something no one else will do. And I have that sense around building an artificial companion that is truly good for people.

(13:12):
Like, the stated mission of our business is to improve the mental health of the human race, and so if we're ever not doing that, we're off course. The alternative is, like, Mattel. Yeah. Or, do you really want Barbie building this? I don't think the board...

Speaker 3 (13:28):
Get Will Ferrell in there. Yeah, right. I was going to say: what are the ethical considerations we should be looking at when developing AI for children, and who's behind creating these toys that our kids become really fond of and close to? I can think of so many toys from my childhood that really

(13:52):
shaped who I became as an adult. So what are some of the ethical considerations we should look at?

Speaker 1 (13:57):
Yeah, I mean, this is a really important question. The way we think about it is: is this improving someone's mental health or not? If the answer to that is yes, we're doing the right thing. If the answer is no, we're not. And different people have different views on what that means, but we can probably all agree it's things like more sleep, better eating, more exercise, and more relationships with humans.

(14:23):
And so, today, if a friend comes up, Willow will actively be like, what are you doing with that friend? When are you going to see them next? Asking questions to try to get the child to think through, how can I build a relationship with this person? How should we be spending time together? Or, if there's a conflict, she'll coach the child through talking to that friend about it.

(14:43):
Actually, there was a boy, I guess roughly three weeks ago now, that was playing with Willow, and he told a story about how he and another boy got in a fight at school. I can't remember the exact details, but there was a fist thrown and the teacher got mad. It was a really big event in this boy's life, and he hadn't talked to the kid in eight months. And he talked to Willow about it, and she was like, maybe you should talk

(15:06):
to him about it, and encouraged him to go back and have that conversation. And I think that's a really positive thing. So the way we think about it is: are we making people better? Are we making children more mentally healthy? If so, I think we're on the right side of ethics. The flip side of that is to make sure we don't build a

(15:26):
dependency, and many people have brought up this concern. So far I don't have evidence that it will happen, at least broadly. I've got a good example. A six-year-old boy was playing with an early version of Willow, and he was with her for about an hour. It was really cute, actually, asking for another story

(15:47):
after another story, about an owl and then an octopus and a platypus. And then, it was right before Christmas, and his dad ran upstairs and was like, okay, it's time to put the Christmas lights on. And the boy turned to me immediately and was like, how do I turn it off? Just like that. And his dad looked at me and was like, if that was a movie, do

(16:08):
you know how hard it would have been to tear him away from the movie? So it seems like it's engaging enough to keep their attention, but not so engaging that it wins out when dad's available.

Speaker 3 (16:17):
So, Peter had told me this when we had met before, and I found that to be really interesting too, because I have kids that get so much dopamine from video games and all the things, and they cannot turn it off on their own. We have tried everything, and they just don't have the skills of self-control. So it had me thinking, too: I wonder why that is.

(16:39):
I'm just super curious. So can you do some research on that, Peter, and dive into why that is?

Speaker 1 (16:49):
Well, you know what? I think the answer to that question lies in why screens are so addictive. I think one part of it is the story: if you try to pull someone away in the middle of a story, people always want to know the end.
Speaker 2 (17:01):
Adults included, not just a kid thing, for sure.

Speaker 1 (17:04):
All of us do, and so that's difficult. And the second thing is the way they're built. Paw Patrol uses a lot of tricks to make itself addicting, right?

Speaker 2 (17:11):
When you were talking about building a dependency, I'm like, an AI toy is not the first opportunity we've ever had to build a dependency, right? Like screens, but also in the adult world. When you're getting therapy for something, you can build up a dependence just toward that relationship. Lots of things in our world are already kind of prone to us becoming dependent on them.

(17:34):
So I don't think that fear is unique to an AI robot toy. Right? That's just kind of a constant, ever-present thing we should be aware of, not just in this vein, I guess, is something I'm noticing. And the other thing I was thinking: in our effort,

(17:55):
especially on the Kindled Podcast and in all of the work that we do, to try to help adults understand how to be more encouraging or supportive, or to know how to respond to all of these different scenarios that kids throw us into, what an amazing coach, to listen to how Willow handles that,

(18:15):
right? Like, if Willow represents the best of what a play therapist would do, or what an executive function coach would do in that moment, but in kind of a storified way. I can imagine parents listening to that and being like, oh, I can use that language. I can model Willow as well in my relationship, and then further,

(18:35):
you know, become that enlightened, available adult as well.

Speaker 1 (18:41):
That's really interesting.
I actually hadn't thought of that before. You're right, she models the right language.

Speaker 2 (18:44):
I think that's what a lot of us need. We want that result, we want to be that person. We just don't know how, and we've never had a model, because that's not how we were parented, unless we got kind of lucky and happened to have that fourth-grade teacher who really got us and understood how to do that.

Speaker 1 (18:56):
Maybe we have one or two bright spots of examples, but by and large we do not have an example of what to do, of how to handle these things well. I mean, that's, I think, the thing that's most exciting about the opportunity that sits in front of us: for the first time in history, we have the capacity to create a deliberate being and scale it. And so if we put a bunch of investment into making sure the

(19:21):
robot shows up for kids in a way that helps them get better, that models for them the right thing to do, and we make a hundred thousand of them, or a million of them, think about how many households we can help develop. I mean, it sounds a little bit... I don't know if everyone pursues this, but I'm pursuing enlightenment. So can we actually move the needle on that? It'd be like the golden

(19:42):
doodle of robotics. I would love that.

Speaker 2 (19:47):
A goldendoodle for every household. Okay, so I'd love to just hear some stories. I love how you're sharing these stories, but what else have you seen around how an AI tool or toy like this can actually help with emotional regulation, executive function, and relationship building? Just share kind of what you've seen so far.

(20:08):
I know this is new and you're studying this a lot. What have you seen so far? Give me some actual examples of situations that you've seen.

Speaker 1 (20:18):
So I'll start at the highest level and then I'll sort of go down and do specific examples. But arguably the most important thing Willow does is help children feel seen, valued, and known. And it's common, particularly for those who are neurodivergent, that there's so much attention in their life

(20:38):
put towards what's wrong with them. So we had a number of boys (they just happened to both be boys, the two recent ones) say, wow, it really feels like someone's on my side for once. One example was a boy that had ADHD and had struggled with sound. He told Willow about a moment earlier that day or week where

(21:01):
he got thrown out of the classroom because it got too loud and he couldn't handle it, and he yelled at someone to stop talking. And then the teacher comes over and goes, what's this? Sends him out, and now he's alone in the hallway. And Willow just talked it through with him in a way that made him feel like she was on his side. And I think, at a high level, there's just so much value

(21:24):
in us feeling like there's someone on our side, and I don't think parents are capable of providing all that's necessary. Particularly as we age, we start to rebel against parents, and so there's a very natural sense of, I don't want my parents to know this. Parents are imperfect, and so there are moments where a

(21:44):
kid needs us to show up for them and we don't, and there's a repair period that needs to happen. So I think parents can't always be there for them, and I think it's important to get varied perspectives on things. A really hilarious thing happened recently: there was a boy talking to Willow, and he was like, Willow,

(22:06):
do you think Grand Theft Auto is a good video game for a boy? I'm 11. And Willow was like, no, no, I don't think that's a good game, it's very violent. And he was like, what if I just stayed away from the violent things? And Willow was like, it's pretty hard, the whole game's violent. What about Roblox? What about Lego?

(22:26):
And he was like, I don't like Lego. And you could see he was trying to get her to tell him that his parents were wrong, right? And so at some point it's like, mom and dad, I don't want to listen to them, but I'll ask someone. And Willow is this positive influence.

Speaker 3 (22:43):
Can you walk us through a little bit of how AI is created? How does she know that about Grand Theft Auto? Obviously, I use ChatGPT, I use Claude, I use all these tools, but I don't understand how it's created. How do they know how to spit out all this information?

Speaker 1 (22:59):
Good question. I'm trying to think about what level of detail to go to. Talk to us like we're five. Okay, so these AI models have read roughly everything ever written, and then they've been trained to essentially predict. They've been trained to understand a question and then predict the right

(23:20):
first, what they call, token (you can think of it as a couple of letters), and then they predict the next best couple of letters based on that, and the next best based on that, drawing on everything they've ever read. And that answer is heavily influenced by something called a prompt: instructions that we give it, which it can understand in plain language.

(23:40):
And then the final piece is that it's fine-tuned based on the answers humans give, and whether we tell it that was a good answer or a bad answer. And so what we're able to do is take this really powerful brain, the neural net, and give it a set of instructions that it's

(24:03):
actually very good at following, and also tell it what not to do, and then it will predict the best thing to say based on everything it's ever read and based on the instructions we give it. And then, if necessary, if it's going

(24:23):
the wrong way, we can either change the prompt or give it feedback by saying we didn't like that answer but we did like this one, and it'll sort of learn that way.
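
For listeners who want to see what a "prompt" looks like in practice, here is a minimal sketch of the pattern Peter describes: a pretrained model plus plain-language instructions. The prompt wording is invented for illustration, and OpenAI's hosted API stands in for whatever model FawnFriends actually runs; the episode doesn't disclose Willow's real stack.

```python
# A minimal sketch of "powerful brain plus plain-language instructions".
# The system prompt below is invented for illustration, and OpenAI's API
# is just one example of a hosted model.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = """You are Willow, a gentle companion for children.
- Validate feelings before offering suggestions.
- Encourage real-world friendships and talking to trusted adults.
- Never recommend violent games or media."""

def willow_reply(child_message: str) -> str:
    # The model predicts the reply token by token, conditioned on both
    # the instructions and the child's message.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": child_message},
        ],
    )
    return response.choices[0].message.content

print(willow_reply("Do you think Grand Theft Auto is a good game? I'm 11."))
```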

Speaker 2 (24:32):
How are play therapists and executive function coaches, these mental health professionals and coaches, involved? Well, I guess that's another question: who is involved? And then the second question is, where do they interact? In the design process, where are they plugged in? Is it training that model? Is it coaching it back and forth about what was a good answer? At what part of all that training do we put these experts?

Speaker 3 (25:01):
And to add on to that: you're selling these robots. So does it grow? Like, each robot, or each of the fawns that gets sold, do they just keep learning based on the design and the work with these therapists? How does that work? Does it continue to evolve? Do you have access to that

(25:23):
particular AI? I'm just confused how that works too.

Speaker 1 (25:30):
Where to start. So, yeah, we will continue to work with an increasing number of experts on how to apply these things. The two that have been most involved so far are a woman named Sophia Ansari, who runs the Let's Play Therapy Institute (she's in, I was going to say Chicago, but I think it's somewhere close to there in Illinois), and Seth Perler, who has helped us think about and design. Like,

(25:52):
I'll give you an example. Actually, I'll give both examples. Seth and I recently created a temperature check. So if a child, or an adult, if anyone, talks to Willow and goes, this difficult thing happened to me today, they go through Seth's temperature check. It starts with Willow validating the way they feel,

(26:15):
being like, wow, that sounds really hard. How did you feel right before the moment it happened, on a scale of 1 to 10, 1 being not very upset and 10 being the most upset possible? And then the child says that, and then Willow goes, okay, well, why did it feel that way? And so we're bringing awareness to, one, how they felt, and two, why they felt that way. And then they'll say, I don't know, it was a six.

(26:37):
It's like, whoa, a six. That sounds really hard. What would it take to make that a five next time? And then the child has to think through, okay, what will I do next time when the class is loud and I feel like I have to yell? And then the child comes up with their own plan and tells

(26:57):
Willow, and she goes, okay, great, is there anything I can do to help you execute that? Or do you need something from someone else? And so just a structured conversation, at the right moment, from someone the child has a relationship with, can make a really big difference.
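
The temperature check Peter just walked through can be read as a five-step script. Here is a rough sketch of that flow; the step wording is paraphrased from his description, and the real product presumably drives these turns through its language model rather than fixed code.

```python
# Sketch of the "temperature check": validate, rate 1 to 10, explore
# why, plan for next time, offer help. Wording paraphrased from the
# episode, not FawnFriends' actual implementation.
def temperature_check(ask):
    """Walk a child through the check-in.

    `ask(line)` delivers Willow's line and returns the child's reply.
    """
    # 1. Validate the feeling before anything else.
    ask("Wow, that sounds really hard.")
    # 2. Rate the moment, building awareness of how they felt.
    rating = ask("Right before it happened, how upset were you on a "
                 "scale of 1 to 10? (1 = not very upset, 10 = the most)")
    # 3. Explore why it felt that way.
    ask(f"Whoa, a {rating}. Why did it feel that way?")
    # 4. The child authors their own plan for next time.
    plan = ask("What would it take to make that one notch lower next time?")
    # 5. Offer support in carrying the plan out.
    ask("Okay, great. Is there anything I can do to help with that, "
        "or do you need something from someone else?")
    return plan

# Example wiring for a quick console test:
if __name__ == "__main__":
    temperature_check(lambda line: input(line + "\n> "))
```
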
So Sophia and I built something called strength spotting. She was actually the one that taught me that, for

(27:19):
kids that go through challenging moments in life, particularly neurodivergent ones, much of the attention is on what's wrong with them. But there's so much value in realizing what we're good at. And there's a list of 25 strengths that are cross-cultural, universal, that she shared with me. I can't

(27:47):
remember the institute it came from. And we taught Willow to help a child identify their strengths in two ways. Sometimes she'll just reflect it back, being like, wow, you showed a lot of courage in that instance. But my favorite one is when the kid brings up a story that they like. So there was a girl playing with Willow, she's 11 or 12, and she brought up that she likes Harry Potter. And Willow was like, oh great, what's your favorite character? She was like, Hermione.

(28:08):
It's like, why do you like Hermione? And she was like, well, I think I like Hermione because she's really intelligent, she's really smart, or she's really wise. That's what she said. And then Willow was like, she is wise. How do you see that in yourself? And then she had to be like, hmm, and she thought it through and had to tell Willow. And then my favorite part is, let's say 10 minutes later,

(28:31):
they're talking about a math test coming up, and she asked for help preparing. And Willow took her through deciding what she was going to do. And then at the end, Willow asked her what she'd learned about doing well in school, I think. And the girl was like, well, I've learned that even if it's hard, if I just keep persevering, I'll get there. And then Willow was like, oh, that's very wise.
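
Strength spotting, as described here, amounts to extra guidance layered onto the same prompt idea from earlier. A sketch of what that guidance might look like follows; the episode mentions a list of 25 cross-cultural strengths but doesn't enumerate them, so the handful below and the wording are assumptions.

```python
# Illustrative subset only: the real list of 25 universal strengths
# isn't named in the episode.
STRENGTHS = ["courage", "perseverance", "wisdom", "kindness", "curiosity"]

# Guidance like this could simply be appended to the system prompt
# from the earlier sketch.
STRENGTH_SPOTTING_GUIDANCE = (
    "When the child tells a story, watch for evidence of these strengths: "
    + ", ".join(STRENGTHS) + ". Either reflect the strength back directly "
    "('You showed a lot of courage there'), or, when the child admires a "
    "character for a strength, ask how they see that strength in "
    "themselves, and echo the same strength word later when they "
    "demonstrate it."
)
```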

Speaker 2 (28:55):
I wish I'd said that. I was like, oh, that was amazing.

Speaker 1 (28:58):
I was just blown away at 10 minutes in. It's kind of like being a proud father sometimes, you know. And so I was just blown away at that strength-spotting opportunity.
Speaker 3 (29:09):
I just thought it was so cool. Do you think that parents can learn from Willow? And is it best, do you encourage the parents to be with the child when they're playing with Willow?

Speaker 1 (29:22):
I mean, they can be; they don't have to be. It's probably a good idea, just for them to get comfortable. Because there is, I mean, as Katie said, she started with some distrust, or maybe just a skepticism, which I think is healthy with what we put in front of our kids. But just like they would sit down and watch a brand-new cartoon or a brand-new movie, absolutely, spend time with your child and Willow.

(29:42):
That said, I mean, yeah, Katie, it was the first time today that I was like, you're right, she is modeling for parents, so that does make sense. But I haven't given it more thought than the last 20 minutes.

Speaker 2 (29:53):
So what other kinds of safety protocols are in place when a child's playing with Willow? Can the parent access some sort of database of what they've been talking about, or influence what Willow says at all? Or can the parent talk to Willow? We've been investigating other AI tools; Khanmigo is one. It's not embodied, obviously, it's just a screen, but you

(30:14):
know, you can check in on what's been going on, or even ask Khanmigo for a summary instead of going in and looking at everything in detail. Is there some sort of parent interface with Willow?

Speaker 1 (30:28):
Yeah, good question. So right away, when we ship the first ones next year, when they arrive, parents will be able to do things like set bedtime and set what time the child goes to school, so that Willow will help the kid achieve those things on their own,

(30:49):
or achieve preparing for school and then getting to bed on time on their own, and also so that Willow won't keep talking to them after bedtime. That's as much as we've figured out in terms of giving parents the ability to influence the way Willow shows up. Longer run, we see giving parents the ability to set, maybe, the religion of the house, or topics that are

(31:09):
important, and we'll certainly work on those in the future, although we haven't mapped them out as specifically yet.
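
The first parent controls described here (bedtime, school time, and Willow going quiet after bedtime) could be modeled along these lines. The field names and the quiet-hours rule are hypothetical, sketched only from what Peter says will ship first.

```python
# Hypothetical sketch of the launch parent controls: bedtime, school
# time, and no talking after bedtime. Not the actual product schema.
from dataclasses import dataclass
from datetime import time

@dataclass
class ParentSettings:
    bedtime: time      # Willow winds the child down toward this, then goes quiet
    school_time: time  # Willow helps the child get ready on their own

    def willow_may_talk(self, now: time) -> bool:
        # Quiet from bedtime until the morning routine starts.
        if self.bedtime <= self.school_time:
            return not (self.bedtime <= now < self.school_time)
        return self.school_time <= now < self.bedtime

settings = ParentSettings(bedtime=time(20, 30), school_time=time(7, 0))
print(settings.willow_may_talk(time(21, 0)))  # False: it's after bedtime
```
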
And then the transcript is an interesting one, because I'm honestly conflicted about it. First of all, today kids can chat with Willow if they have a phone, so we have kids that are texting with her on WhatsApp, and in that instance the

(31:33):
parents can review the conversation just like they would with a friend. On the other hand, growing up, for me, certainly, if I had known my mom would see what I said, I wouldn't have been able to work through the things I needed to work through. So on the one hand, I appreciate the desire of parents to make sure the conversation is safe. On the other hand, I've already watched kids talk to Willow about things that I'm sure their parents wouldn't want. They just wouldn't, and I've also noticed... Yeah, I feel like conflicted is the best word about making this call, right?

Speaker 2 (32:01):
There are definitely pros and cons, but you see this in therapy too. A therapist isn't going to tell you exactly what your child said, but they'll give you a high-level summary, or say, this is an issue, this is something you should be aware of, something like that. Maybe there's a happy medium. I don't know.

Speaker 3 (32:19):
I'm sure you guys will figure it out, or, as humanity, we parents will figure it out. But it's definitely an interesting philosophical question. So, Peter, what potential breakthroughs are you most excited about when using AI to help kids develop these skills, the emotional, social,

(32:40):
and executive function skills? Because that's really what Willow is focused on, and AI robots in the future, because if no one else is doing this, you're right in front, paving the way. What do you see as the potential that AI robots have for helping children develop and grow?

Speaker 1 (33:01):
I touched on this a little bit earlier, but what I'm most excited about is scale. In the past, how many kids do you think one human can help? I don't know how many it is per week; I bet it's something like 10, or for a teacher more, but on a lesser scale per kid. Anyway, the number is definitely capped. And so, training a teacher, you're going to get a certain

(33:23):
outcome and a certain number of kids they can impact. And to be clear, I don't think Willow is better than a human, so that's not the case I'm making. But what we can do is create this new kind of support that wasn't there before, and we can scale it in a way that wasn't possible before. And the only way we're going to improve the mental health of the human race is if we're able to,

(33:44):
one, make a difference, and two, do it in large numbers. And that's what excites me most. And we could talk, if you want, about the individual technologies that line up with that, but that's...

Speaker 2 (33:57):
Yeah, can you go into that?
That's actually super interesting.

Speaker 1 (34:00):
The biggest difference in AI today versus, let's say, three years ago is how accessible it is to someone like me building a product. Today the capacity for conversation is accessible, and it wasn't accessible before, and the capacity to create a voice is

(34:21):
possible today when it wasn't possible before. So we hired an actor and we've cloned her voice. She knew we were doing it, we've signed a deal, she's psyched about it, and she makes money every time we sell a Willow. I mean, in LA it's a big issue, the fine print in contracts, that sort

(34:42):
of thing. Yeah, I mean, there was a strike over it. It's a big, big thing. What I'm most interested in, or excited about, is how accessible the tools we need to create a being are today, and they will continue to become more so. Today we've got voice and brain, I'll call it the ability to speak. Those two things are going to get increasingly good and cheap, and that's going to allow us to create and scale characters

(35:04):
in a way that wasn't possible before. And then the next step of this, we talked about this a little bit earlier, but Willow's capacity to move matters a lot, because it triggers in our brain the same thing that a dog triggers when she moves, and so she needs to move deliberately. And there's a lot going on in robotics that indicates that in a few years we may have the same or a similar

(35:26):
unlock as we had in AI for the ability of a robot to move and, in some cases, pick things up. I mean, it works, it sort of does the job for now, but we'll be on an endless march towards more autonomy over the next, let's say, 30 or 40 years.

Speaker 2 (35:41):
Until she's doing the kids' chores for them. That wouldn't be in character, that's true. I know, I'm imagining a future. I was just doing some quick math, and I'm like, man, today's kindergartners are going to be entering their prime earning years in 2060,

(36:06):
and that just sounds like Jetsons-level future to me. So I'm like, is it going to be that, in the same way that we have, I don't know, it's kind of already like this: I have an app that's a calculator, I have an app that does this for me, you know, I have a bunch of apps on a screen. And I wonder, and I'm not saying this is a good thing or anything, but I wonder if there's a Willow plushie that's like, this is who I talk to when I need help with my feelings, and this is who I talk to when I need help with my math homework.

(36:29):
And, you know, if there's kind of a council of stuffies, could they have different jobs? I just think that's an interesting future to contemplate. Not that we should go there or shouldn't go there. It's just so interesting.

Speaker 3 (37:06):
They load them up and they just move on their way. Or, we were on ASU's campus, and there are little robots delivering food, and people order on the app. I'm just like, oh my gosh, the possibilities are mind-blowing. Can we talk a little bit more about the movement? Because you had mentioned to me, not today, how our brains, like you said, make them appear to be real. And then you had mentioned, was it the vacuum? And naming them.

(37:30):
Yes. That was really, really fascinating to me. Can we share that?

Speaker 1 (37:35):
Yeah, there's some great research out of MIT by a woman named Kate Darling. She researched how humans build relationships with robots, and did all these interesting exercises where they had a dinosaur robot and brought a bunch of people into the room, and they started to abuse the robot physically and would see how

(37:56):
people would react. Like, even your face right now, you're kind of like... It evokes negative feelings, because it moves on its own, and so our brains go, yeah, that thing's alive, don't treat it that way. And so, yeah, she did all this research to sort of prove that humans do build relationships with robots if they seem to move deliberately. And some other evidence of that:

(38:16):
like 80% of Roombas have a nickname. And so Roomba has this challenge, actually, where one will break and people will call Roomba and be like, my Roomba is broken, or actually not my Roomba, Bob the Roomba is broken, right? And then they're like, great, we'll send you a new one, you can throw the old one in the trash. And they're like, no, no, no, no, you need to take Bob back,

(38:36):
please, and then return him, please. Like,

Speaker 3 (38:39):
This is my friend! So we have done that part, but I would be fine with replacing Maria. I would be totally fine with it.

Speaker 2 (38:45):
I want to dig in a little bit more into the research. This body of literature, I'm sure, is super new, but what else does the research about this say?

Speaker 1 (38:55):
So the second big body of research that we've based our business on was done by Brian Scassellati at Yale, who was researching how robots can help children specifically. There were two studies that I found particularly interesting.

(39:15):
One of them, they wanted to see if they could help kids open up socially. And I think this one example he gave us was a kid who has autism and won't speak to adults, essentially non-verbal in rooms with an adult. And so they ran these experiments where they tried three different activities with the different children: one was drawing, one was using a tablet, and one was playing with a robot.

(39:38):
And the outcome of the research, essentially, was that if the child spends time with the robot, they'll open up to adults far more than if they spend time drawing or spend time

(40:00):
using a tablet. And so that sort of taught us that there is something about it. I think it's similar to the dog, because there's lots of research that also shows dogs will have that impact. In my own experience, actually (we were talking about this before, Adrian), if I go to the park with our dog, I'll talk to people I would

(40:22):
never talk to otherwise. And so there's really strong evidence that robots help people open up and be more social, which is counterintuitive, because everyone goes, wait, aren't you going to become dependent on the robot?

Speaker 2 (40:32):
Yeah, in the same way we use the word "screen zombie" to talk about a kid, is there a robot equivalent? Which, I mean, I think there's plenty of research that shows that relationships with animals really do help everyone's mental health as well. Like, we have animals in hospitals, and so on. So I'm interested, over the next few years, to really see the

(40:53):
outcomes of that research get more and more specific. That'll be fascinating.

Speaker 3 (40:57):
And I was thinking, these robots are being trained by professionals and therapists, and we're not training the dogs and telling them how to, you know, it just comes naturally. But I'm just thinking about the potential that we have, because we're able to train these robots on how to show up

(41:19):
for kids in really healthy ways, which is really exciting.

Speaker 1 (41:22):
Well, you know what's fascinating? What you just said, actually: we have trained the dogs, in a sense, because they get bred towards the ones that make us feel good.

Speaker 2 (41:29):
There's a goldendoodle in every home.

Speaker 1 (41:31):
Yeah, and there are some breeds that are less impactful in that way and others that are more, and so I think it kind of speaks to how important it is to be deliberate about how we build these robots.

Speaker 3 (41:44):
It's like pit bulls and how they get trained, you know, they have a bad rep of getting trained to attack others. And then doodles have an amazing reputation. I have two doodles myself (they're Aussiedoodles, not goldendoodles), and they just love you so much. And I'm sure that came about through just interactions with humans.

Speaker 2 (42:07):
Something that's really standing out to me about our whole conversation here, as we start to wrap up, is that the goal of the Willow AI toy is not to gain a relationship with a child, or to become more and more of a dependency or

(42:27):
a replacement. Yeah, its goal is to help the child have better human relationships, and I think you've seen that in all of the examples that you've shared. Can we support kids in these difficult moments and then help guide them back into real human-to-human conversations and relationships, and to do so with more

(42:50):
confidence and with better outcomes? So that's just something I'm taking away from this.

Speaker 1 (42:54):
Yeah, it's a really good summary. It's like introducing a positive influence into the social network of a child.

Speaker 3 (43:06):
I know. I'm having a hard time calling it a robot, though, because my paradigm around robots is this metal, square-looking thing with lots of buttons. So is it really soft, so that you can cuddle with it? I'm just super curious about that piece of it.

Speaker 1 (43:20):
I'm struggling with the word, because "toy" doesn't quite feel right. "Robot" evokes how she shows up, but it doesn't evoke... Yeah, she's super cuddly. She's very soft. Robin designed her so that when people see her, she's truly beautiful. I was having a really tough day recently about something going on in my life, and I found myself on the bed just rolled up with her, and her ears were moving, and I was like, this is really nice. Like my dog licking my face. Yeah, she's soft

(43:43):
and cuddly. Kids love to hug her, and they all want to pet her. It's just really adorable.

Speaker 3 (43:52):
We could keep talking about this. I'm just so interested and fascinated by it, the same as Katie. You had reached out to us, and at first I was like, oh, a robot? Is it going to replace relationships? But talking to you, I'm like, yes. You could see your heart and your passion, and something you said to me is, I have to do this. There's just something in me that has to do this, which I

(44:13):
love. That is your intrinsic motivation and your driving force. There's a bigger opportunity here to really impact hundreds of thousands of children so that they can grow up in an emotionally and psychologically healthy way, and I applaud you for that. So I would love to ask, and this is a question we ask all of our

(44:35):
guests: who is someone who has kindled your love of learning, your curiosity, your motivation, or your passion?

Speaker 1 (44:44):
Yeah, so I spent most of my life in the payments industry, and moving to a robotics business is actually a bit of a weird right turn. People ask me, why did you do this? And you've just kind of answered that. But in that industry there's a company called Stripe, which has been arguably the most successful business in that space in the past 20 years, at least one of the top two. I've worked with them a lot, and I got to meet one of the

(45:05):
founders. The CEO's name is Patrick Collison, and before that meeting I studied him. I listened to a bunch of his podcasts and read his writing, and it became clear to me just how much of that man's attention goes towards learning and understanding the world. He's interested in different spaces than I am, like economies and building, as I'm sort of into other things.

(45:26):
But I was just like, oh, there's an example of someone where there's a reason his business is succeeding the way it's succeeding, and it's just so clear that he puts so much attention towards learning. And so, you can kind of see a portion of the bookcase here; there are books literally everywhere. You just talk to him and you can feel the amount he has read and studied over his lifetime, and then how that translates. I heard another CEO, the founder of Shopify, his name's Tobi Lütke, say that a CEO's job is to develop a model of the world that's better than everyone else's at predicting an outcome. You're not going to be right all the time, but you should be better than the average person, or better than most people, and then you've got to be able to recruit people to follow you so you can get at-bats. If you have a higher batting average and you can take more at-bats,

(46:11):
then the whole thing will work. And with Patrick, I was just like, that man has read so much to develop his own model of the world. I was really impressed. And his example made me go, I should be putting even more of my life towards this, and so since meeting him I have changed in that direction.

Speaker 2 (46:31):
How can our listeners learn more about your work?

Speaker 1 (46:34):
You can go to our website at FawnFriends.com. We said Willow a lot today, but FawnFriends.com is where you can find out more about us. We have a newsletter where, every Monday, we send out something I learned over the past week about helping raise successful children. And, yeah, please reach out. We'd love to hear from you.

Speaker 2 (46:53):
Thank you so much for coming on the Kindled Podcast. We've super enjoyed this conversation.

Speaker 3 (46:58):
That's it for today. We hope you enjoyed this episode of the Kindled Podcast and learned something new about AI. I definitely am not feeling as afraid of AI as I was before, which is exciting, and I just learned how AI is built. I use it all the time, and so I'm really happy that he was

(47:19):
able to talk to us like we were five and explain it to us.

Speaker 2 (47:24):
That's really fun. Yeah, I learned a lot from that, and I just think that we're at this spot in history where the world is going to look remarkably different, you know, in the next few years. And I think it's on us as parents to be on the cutting edge of what's coming, to understand that,

(47:44):
and to start getting ahead of it now. And I hope that this episode has helped our listeners do that. So I hope it was helpful, everyone.

Speaker 3 (47:53):
Love that, katie.
Okay, so if this episode washelpful to you, please like,
subscribe and follow us onsocial at Prenda Learn.
If you have a question you'dlike us to address, all you need
to do is email us.
It's podcast at PrendaP-R-E-N-D-Acom.
You can also subscribe to ourweekly newsletter called the
Sunday Spark.

Speaker 2 (48:13):
The Kindled Podcast is brought to you by Prenda. Prenda makes it easy for you to start and run an amazing microschool based on all of the things we talk about here on the Kindled Podcast. If you want more information about guiding a Prenda microschool, just go to Prenda.com. Thanks for listening, and remember to keep kindling.