
March 5, 2024 · 30 mins
Rosalind Picard is the founder and director of the Affective Computing Research Lab at MIT and co-founded both Affectiva and Empatica. Both companies are aimed at using extensive AI and wearable tech to improve the lives of people with chronic illnesses and make robots act a little more human. Ironically enough, she never wanted to start any kind of company but the need for better data and better measurement tools pushed her past the point of research and into the public sphere of engineering consumer products.

The views and opinions expressed within this content are solely the speaker's and do not reflect the opinions and beliefs of Supplyframe or its affiliates.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
I think there is way too much hype and extrapolation about, because these have gotten so smart so fast, therefore people will be obsolete in a certain amount of time. I think that is foolish and dangerous and harmful to people to talk like that. I think that the extrapolations there are completely ignoring the need for

(00:25):
having an entity that has a mind, that knows what is true and false, that knows what matters, and that has emotional intelligence. My name is Magenta Strongheart, and this is The Bomb: Engineering a Path Forward. This week we discover what in the world affective computing is, how to make technology

(00:46):
more empathic, and how wearable tech can save lives. Rosalind Picard is the founder and director of the Affective Computing Research Lab at MIT and co-founded both Affectiva and Empatica. Both companies are aimed at using extensive AI and wearable tech to improve the lives of people with chronic illnesses and make robots act a little more human. Ironically enough, she never wanted to start any kind of

(01:08):
company, but the need for better data and better measurement tools pushed her past the point of research and into the public sphere of engineering consumer products. This is my conversation with Rosalind Picard, and this is The Bomb. We live in a time where design and technology touch every aspect of our lives.

(01:30):
But where did it all come from? Who designed it? How is it built and brought to market? What will it look like in a year, two years, one hundred years? From the phones and smartwatches that help us in our day-to-day to the cutting-edge spaceships and 3D printers that are leading us into the future, modern design is constantly shaping the way we work, communicate, problem solve, and play. And every new design, big or

(01:51):
small, starts with an idea and a bill of materials. I'm Magenta Strongheart, and this is The Bomb, where we talk to leading innovators in the tech world and celebrate the transformational power of design. Awesome. Thank you so much, Rosalind. I'm really excited to have you here on The Bomb podcast,

(02:13):
and I know you must have an incredibly busy schedule. From what I've learned in the little amount of research I've done before our conversation, you manage a lot of things, so I appreciate you taking the time out to talk with us, and I'm excited to learn more about your research, your background getting into this work, and what you're doing with Empatica and Affectiva. So before

(02:37):
we get into it, did I say that correctly? Yes, that was perfect. Okay, just want to make sure. And I just wanted to say it's a pleasure to talk with you too. It's really wonderful of you to reach out. Thanks, Rosalind. So before we get too far into the kind of current technology you're working on and the exciting things you're doing with these two companies, I'd love to just rewind a little bit and learn more about

(03:00):
your background and how you got into engineering. So, if I understand correctly, you did an undergrad in electrical engineering and then later on a PhD at MIT in electrical engineering and computer science. So if you could tell us a little bit about what brought you into that, even if, I know, sometimes it can be a long story, but even if there's early moments that really

(03:21):
inspired you to get into making things, understanding how things work, I'd love to learn more about that. Thanks. I was not one of those children who was really interested in science. To be honest, it was my least favorite subject. I didn't get interested in engineering until I took a test in high school, trying to figure out what to do with my life, and I

(03:44):
was good at math and science and also English, and it was suggested that I learn more about engineering. I had actually never met or heard of engineering, really, up until that point, and then when I realized it was the core of understanding all the cool technology around me, I thought, wow, I mean, I could really learn how all these things work. And so I set

(04:05):
out to study at Georgia Tech. Also, I heard it was the hardest major, and that appealed to me. And as I learned more and more, I studied computer engineering, in particular learning how to design the insides of computers, and then later took a job at AT&T Bell Labs designing computer chips, and also got into MIT's electrical engineering and computer science program, where I did a master's

(04:29):
and doctorate, also in designing the insides of machines and thinking about the human brain, and then trying to build better algorithms and things that would run on those architectures that would imitate the human brain. That's where things really took some interesting twists. So interesting that it all came from this early survey. I love that

(04:50):
that served you so well, because sometimes I'm sure people are like, this is not helpful, or not, you know, leading me anywhere really informative. But I love that it kind of relates, I feel, to your interest now in understanding others and how they think. And you were probably like, I want to understand, you know, what is the right path for me, by figuring

(05:10):
this out and kind of answering these questions. There was a point where my advisor said to me, you know, these are really interesting computer architectures you're designing, but what runs on them? And I was so inspired by the human brain that I thought, I need to know more about how the brain works. And I was thinking of it more from an engineering perspective, and I started working on computer vision, and working not just on the algorithms, which at

(05:33):
the time, interestingly, we called machine learning and pattern recognition. It was not called AI, which is very funny, because when I first started teaching it, the first day I had to say, this is not AI, this is pattern recognition, but it's useful. And today the AI that's most useful is built on machine learning. But at the time I was just

(05:54):
interested in algorithms that would run and do things similar to what I thought was so impressively intelligent in the human brain. And as I studied how the human brain worked, one day I learned that there were regions deep in the brain involved in perception that we had not studied. We had always studied the outer

(06:15):
part of the brain, the cortex, the visual cortex, the parts that an EEG sitting on your scalp could read signals from. And suddenly I'm learning that there are these deep, older regions in the brain that were involved in perception. They were involved in particular in attention and in memory and emotion.

(06:35):
And I thought, well, attention and memory are important, and I don't want to have anything to do with emotion. That would ruin my career, I thought at the time, especially being a woman in engineering. So this was a really pivotal moment, to learn that those parts of the brain were really useful for perception. The companies that you now have, how did they come

(06:58):
out of the research you were doing? And what was the kind of timeline? I mean, it's clearly directly related to the work that you were doing even in your doctorate. But had you already come up with some of these ideas during that time and then decided to start a company and commercialize some of these solutions right after completing your doctorate? Or what was the kind of timeline and space in between that led to being able to commercialize those?

(07:25):
There was actually quite a span. I was really interested primarily in coming up with better algorithms that were smarter and did more useful things, and in understanding increasingly how the brain worked and how emotion played a role in that. When I learned that emotion was actually essential for intelligent decision making and perception,

(07:48):
that was really a difficult challenge, to try to build this area in a climate that thought that emotion should not be taken seriously. But I kept trying to bring objective measurement to it. It meant that we had to build our own devices to gather data, and so we started to build devices that could get the kind of physiological data with great quality in real life. And at

(08:11):
this point I had no interest in a company. I actually thought business was kind of the dark side. I just wanted better data, and we couldn't buy anything to give it to us. In fact, you couldn't even measure the things we were measuring in a comfortable way. You had to strap them to uncomfortable parts of the body and walk around with duct tape on you to get the data. And then when we

(08:35):
built those devices that gave great data, suddenly everybody was asking us, like, can I have one? And they were saying, can I have your data? And at these same conferences where people had said, you know, did she really do respectable stuff, they were coming up to me and saying, we want to work on this now. Eating their words. Yeah. And I'm like, great, I'm happy to share data. This is a big area.

(08:58):
It needs a lot of people. Let's collaborate. That's great, and I love hearing it. I think usually, you know, some of the best ideas come from a need that we might have ourselves, right? Like, this came out of you needing to make more effective tools for the research data you wanted to collect. And so these solutions came out of a need.

(09:22):
You couldn't find what you needed. And it's incredible to talk to someone who's been doing machine learning and pattern recognition and, you know, AI before it was called AI, and to now be in a world where obviously it's a hot topic at the moment, you know, in the last couple of years. I would be interested to hear your perspective on what that

(09:43):
evolution has been like, to be in the front seat of that. I guess there's a lot of questions I could ask. First question: is there anything you're surprised hasn't gotten further in this work, whether it's in, you know, healthcare or the specific spaces you're addressing, or just broadly as far as the adoption and the use of it.

(10:07):
You know, in the world? Actually, mostly things have moved as fast or faster than I expected. I'd say there are two things that aren't moving as fast as I think they should that I'm familiar with. One is the medical area.

(10:28):
The medical area does need to be conservative; it does need to check everything. And maybe with respect to large language models, in some cases people are, you know, making some poor decisions. I think they're putting some of the large language models into situations where they will make bad errors, and not properly compensating for those right away. But there are other

(10:52):
areas of AI where the data and the behavior can be well characterized, and medicine I think needs to focus on those and maybe move a little faster than it's done there. I think that can bring a lot of good to people. I also think regulations haven't moved fast enough, and part of that is the lack, I think, of technical people's interest in getting involved in them. You

(11:18):
know, it's not fun. We don't really want to be regulated. We absolutely don't want to stifle innovation, and yet the people doing the regulations sometimes are not well informed at all about how things actually work and how to craft a regulation. It's kind of like a decision line in machine learning. You know, it's probably not just a straight line, where everything on one

(11:39):
side is good and everything on the other side is bad. Definitely. Okay, there's so many directions we could go now. Just that little bit of your response I want to dive deeper into. I'm going to go back slightly to what you were saying, that there are areas where you feel like this is potentially dangerous as it's being used, and if you're

(12:01):
open to it, I'd love to hear: do you have specific examples you could give us of what you're referring to, when it comes to the side where you see it not being effectively used and being a little bit, you know, problematic or dangerous? Briefly, I think that the large language models, while incredibly impressive in what they're doing, and I think exceptionally useful for

(12:24):
some things, like creative generation, synthesis of documents and images and ideas in a space, and software, you know, with a human in the loop checking things, where there's a lot of data on some of that. I think it's fabulous what they're able to do. I think there is way too much hype

(12:48):
and extrapolation about, because these have gotten so smart so fast, therefore people will be obsolete in a certain amount of time. I think that is foolish and dangerous and harmful to people to talk like that. I think that the extrapolations there are completely ignoring the need for having an entity that has a mind,

(13:13):
that knows what is true and false, that knows what matters, and that has emotional intelligence. So often the people hyping the models don't talk about the thousands of humans in the loop shaping it and teaching it, and how the model doesn't have a mind. So I just think that we need to be very careful when we hear the hype and ask another question, which is: are

(13:37):
the people hyping it, do they own stock in these companies? Are they, you know, usually trying to pump it up in the press, good, bad, whatever pumps up their value? I think that's a really excellent point and thing to emphasize, of course, because as you said, that's probably one of the most common kinds of fears

(14:00):
amongst, I would say, the general public right now around AI. I would love to hear if there's any resources you recommend, or people you think are doing a good job, or spaces, institutions, anything in that regard that you think are really leading the way and helping educate people on how that can be done most effectively. Yeah. Well, one of my favorites is one that came

(14:22):
out of the Media Lab. It's called the Algorithmic Justice League. Joy Buolamwini and the team there have just done an amazing job of being grounded in the engineering, the science, the technology, and yet connecting with policymakers and leaders and helping them understand the opportunities to do better there. Awesome. I definitely need to look into them more. Thank you for sharing that.

(14:46):
And I want to get back to kind of more the policy side of things and the patenting side of things. I'd love to hear what you see as the importance and benefits of patenting, and why you chose to patent your innovations, and how that's played out over the years, versus, I know you're also a proponent, a lot of your research is

(15:07):
open source as well. So if you could speak a little bit to that balance. Yes. Actually, my default has been not to patent. I've been told many times that my articles and my book have prevented patents. We in the Media Lab, our default position has been open-sourcing our software and publishing and openly putting

(15:31):
out our ideas and not patenting them. I am, however, listed on over one hundred patents. That's what I was going to say, I saw that. Yeah, that is, again, not my default position. Okay, interesting. There would be hundreds more if that was my default position. Our default position is not to patent. Each of those patents was because a company requested

(15:56):
that we patent, or I patent, what we did, and most of those were, you know, I'm an academic and they wanted to do something commercial with it. So, okay. Mostly I've been a fan of just putting things out there and letting people use them for free. We have to take a quick break

(16:18):
to hear from the wonderful people that make this show possible. After the break, we'll hear about the groundbreaking research and new wearable technology that Empatica created that's helping to save lives. And we're back to hear more of the fascinating story

(16:38):
of Rosalind Picard's research and companies. I'm your host, Magenta Strongheart, and when we left off, we were discussing the pros and cons of patenting research tech, and if it even makes sense to patent technology that's designed to help everyone. Thank you for your insights on that, and I want to get back to more of the fun stuff that you do. I would love for you

(17:00):
to explain in detail the two companies that you founded, Affectiva and Empatica, and tell us more about the kinds of products and services that they're providing. I think they're really interesting and doing a lot of good, and so I'd love for you, for people who may not be familiar with what they're doing,

(17:22):
to give the elevator pitch, maybe, for each. Sure. The first company I co-founded, with Rana el Kaliouby, is called Affectiva. It's been acquired by Smart Eye, a public Swedish company. The original goal was to be a company that provided the technology to help people measure and make sense of affective signals, from things

(17:48):
like a wearable device and faces and voices, and do so in a scientifically rigorous way, and do so in an ethical way, always with people's prior informed consent. Actually, even in the early days when we were starved for money, we turned away possible funding from people who wanted to collect data without people's fully informed consent. So we stuck to that. And then today, Affectiva has been, well,

(18:14):
actually, about three years into it, our CEO, whom we'd hired, decided that hardware and medical was too hard and canned all of that. I tried to do a spin-out from a spin-out, which I was told might have a five percent success rate. It wound up not succeeding. So, fortunately, within MIT I had my IP; I had access to it as inventor through the Media Lab, they give you that,

(18:37):
I had access to it as inventorthrough the media lab they give you that,
so I just started back at Squareone with a new company to build
a wearable and that company within ayear merged with Impatica SRL to form Impatica,
Inc. And Impatica today is sellingand providing services and digital biomarkers that

(19:00):
include an FDA-cleared smartwatch, an FDA-cleared platform, and FDA-cleared biomarkers, totally focused on helping people succeed in getting the highest quality medical insights using wearable sensor data and AI technology. So that is, in a nutshell, what Empatica

(19:21):
is offering today. It also grew out of a very crazy story, a path that was not at all anything I ever expected. That's really interesting. And just to make sure I understand correctly, are those products provided through other companies, like is it B2B, or do those go directly to consumers for them to access the data themselves, or with a care provider or something? Affectiva sells

(19:47):
mostly B2B, and B to researchers who are collecting data. They mostly are doing improvement of safety in the automotive environment these days, you know, safer driving experiences. At Empatica, where we're focused on medical, it's a little complicated. We sell to three different groups, really. We sell

(20:07):
direct to consumers who have a diagnosis of epilepsy. We sell the best-selling wearable to detect the most life-threatening kind of seizure, the grand mal, or technically called the generalized tonic-clonic seizure. So a patient who has those kinds of seizures should not be alone. If they are alone and have those kinds

(20:29):
of seizures, there's a sixty-five times higher risk of dying of a condition called SUDEP, which they should talk to their doctor about if their doctor hasn't brought it up with them. So they should not be alone. They need something to alert people to come and check on them when they have a seizure. We also sell directly at Empatica to people running both small research studies,

(20:52):
usually academics, and large clinical trials, usually companies, medical companies and pharmaceutical companies working on treatments to improve lives, and they purchase sets of devices or digital biomarkers or services to help them collect objective data that gives them insights into things

(21:14):
like the patient's activity levels, sleep changes, stress changes. A list of over one hundred and twenty-eight digital biomarkers that relate to everything from six-minute walk tests to FDA-cleared sleep-wake timing. That's really helpful to understand.
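[Editor's note: as a toy illustration of the idea of a digital biomarker, turning raw wearable sensor data into an objective, clinically interpretable number, here is a minimal sketch. The sampling rate, threshold, and sleep/wake rule are invented for illustration; this is not Empatica's actual algorithm.]

```python
import numpy as np

def activity_counts(acc_g, fs=32, epoch_s=60):
    """Per-epoch activity counts from 3-axis accelerometer data
    (shape: n_samples x 3, in units of g). Illustrative only."""
    mag = np.linalg.norm(acc_g, axis=1)   # vector magnitude per sample
    motion = np.abs(mag - 1.0)            # remove the ~1 g gravity baseline
    n = fs * epoch_s                      # samples per epoch
    usable = len(motion) // n * n
    epochs = motion[:usable].reshape(-1, n)
    return epochs.sum(axis=1)             # one activity count per epoch

def sleep_wake(counts, threshold=10.0):
    """Naive biomarker: label low-activity epochs as 'sleep'."""
    return ["sleep" if c < threshold else "wake" for c in counts]
```

Real sleep-wake biomarkers are validated against polysomnography and combine multiple signals; the point here is only the shape of the pipeline: raw signal in, epoch-level objective measure out.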
Thank you for explaining in a little more detail. And I'd love to hear just

(21:38):
what a day in the life of Rosalind looks like now. You're managing all these different things, but also of course still continuing your research, and I'd love to hear the kind of split of some of these responsibilities. Yeah, what does that look like, for folks who might not know? Oh goodness, a day in the life. I'm not sure there's a typical day in the life, or a boring one, I'll tell you that. I get to fill my days

(22:04):
usually talking with collaborators on our research projects. I have a lot of projects going. My priority at MIT is with my graduate students and our collaborators on those projects. So a typical day might involve, for example, yesterday, talking with leading collaborators at Harvard Medical School and Brigham and Women's Hospital and in

(22:26):
my lab about how to better understand the way that human sleep behaviors and activity behaviors relate to mental health and mood. So actually, yesterday I spent several hours going into the latest data, different ways to analyze the data, finding old data, new data, a lot of data analytics, and other meetings. We're

(22:49):
talking with people about machine learning methods, the latest, how to improve them, how to help them work better for generalization, explainability on, you know, data sets we haven't seen before. I also teach. I'm preparing lectures right now related to analysis of emotional information from speech and from physiology. So

(23:15):
that was one day. I don't know what a typical day is, but a typical day usually involves a lot of learning and brainstorming. Wow, it sounds incredibly dynamic, and a balance of, I would say, using a lot of different parts of your brain. So that's awesome, and I think what keeps things interesting in careers, of course, is being able to have that

(23:36):
kind of dynamic day-to-day. I'd also be interested to know if you've had to go through some of those transitions from being more on the ground, I don't want to assume anything, but you know, in the weeds of the research, to now having to manage these large teams and also balance, you know, all these different kind of large institutions you're responsible for. And do you

(23:56):
have any kind of tips or things you learned throughout maybe some of those growing processes that you think might be helpful for others who are going through that transition? Definitely, I've learned a lot. Actually, one of the most important things I learned, I learned when one of my graduate students had a bad

(24:18):
experience on an airline on his honeymoon, and the flight attendant handled it very poorly, which caused him to get so upset that he told their customer service people that he could build a computer that handles people's feelings better than the airline did. And he came to me to work on that, and we built the first technology that attempted to show empathy to people. First of all,

(24:41):
it had to elicit frustration, which turned out to be so easy to do that we actually had a hard time doing a control, a non-frustration control. We wound up with a high-frustration condition and a low-frustration condition. And then we randomized people to a situation where they got active listening and empathy, or just a friendly chatty bot, or just a control. And we

(25:04):
learned from the computer algorithm for displaying empathy, which was handcrafted, okay, there's no real intelligence in it, just human intelligence anticipating a situation, trying to do a very emotionally intelligent thing. And that has probably been the most valuable skill I've learned from our work, how to think about the

(25:29):
difficult emotions somebody's going through, how to help them, kind of like not a perfect mirror holding up to it, but sort of trying to paraphrase, trying to show understanding. Not just to understand, but to actually do the handshake, metaphorically, of repeating it back to them and really making an effort to see if

(25:51):
what they hear from you is accurately what they feel. And that is incredibly effective, and it works whether one-on-one or trying to address the feelings of a larger team. Absolutely, no, that makes a ton of sense. And I love the note of kind of validating the understanding, not just, like you

(26:12):
said, understanding is one thing, being a good listener, communicator, but even going that extra step to make sure they know you understood, or to make sure they have the opportunity to correct anything that might have been misunderstood. Yes.
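[Editor's note: the handcrafted active-listening pattern described above, reflect the stated feeling back, then check the paraphrase with the person, can be sketched in a few lines. The feeling list and wording below are invented for illustration, and as Picard says, there is no real intelligence here, just a human designer anticipating a situation.]

```python
# A handcrafted "active listening" reply: reflect the user's stated
# feeling back and ask them to confirm the paraphrase -- the
# metaphorical handshake. Purely rule-based, no learning involved.
FEELING_WORDS = {"frustrated", "angry", "upset", "stressed", "confused"}

def reflect(user_text: str) -> str:
    words = {w.strip(".,!?").lower() for w in user_text.split()}
    felt = sorted(words & FEELING_WORDS)
    if felt:
        return (f"It sounds like you're feeling {' and '.join(felt)}. "
                "Did I get that right?")
    # No feeling word detected: invite the user to say how they feel.
    return "I want to make sure I understand. How are you feeling?"
```

The confirmation question is the key design choice: the system does not just classify the emotion, it hands its paraphrase back so the person can correct it.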
And one other question we like to ask, because we're hardware nerds and we build things: what's on your personal bill of materials? And you can interpret

(26:37):
it however you like. Right now, my personal bill of materials is actually going back in time, to build a wearable that helps a non-speaking person communicate their stress. We started off trying to do that years ago, and we know how to do it, but what we have right now is a little

(26:59):
too fancy for what they need. So we're actually going back to some basics and working again with people who are experiencing a whole lot of stress and are being misunderstood, and working to come up with something that will help fix that. That's awesome. I'm excited to kind of follow along the journey there. I watched your TED talk, which I'm imagining is kind of the work you're

(27:23):
referring to, right? These early wearables for folks on the spectrum, if I'm not mistaken, and how they can communicate if they are nonverbal. So that's awesome to hear. And I just want to give you the opportunity, if there's anything else you want to share or call out or promote or anything, now's the time, before we wrap up the conversation here. Thank you. One thing that

(27:48):
is the case today, and maybe everybody could do a little something about it, is there are people with epilepsy who feel that they can't tell others about their epilepsy because it's stigmatized. And it shouldn't be stigmatized, because one in twenty-six people will get epilepsy, and anybody with a brain, which is everybody listening, it could happen to you or somebody you love. So I

(28:14):
just encourage everybody, if they think they don't know somebody with epilepsy, to ask their friends: hey, I heard it's actually really common, but I don't think I know anybody. Ask around, and you will find that you do know somebody. And I would just offer to them, if they still have active seizures, to be a support to them, to be willing to be added to their call list. If they have active

(28:34):
seizures, they should not be alone at the time of a seizure. They should use a device like our FDA-cleared one. They should make sure that they're not alone, because having somebody who's willing to be there for you can be life saving. And that is something everybody listening could do. That's wonderful. That's the perfect note, I think, to end on. So thank you so much,
That's the perfect note I think toend on. So thank you so much,

(28:55):
Rosalind. Thank you for your insight in so many realms, across leadership experience and careers in engineering, down to the nitty-gritty of tech development and building empathy. We covered a lot of ground, and I'm excited for others to get this conversation. So thank you so much for the time today. Thanks, Magenta. That was Rosalind Picard and her insight into how to build more empathic

(29:25):
computing systems, the right way to start a data-driven company, and why it's not always the best thing to patent something. Robots are becoming more and more commonplace in our daily lives, so it's vital that engineers learn from the work of experts like Rosalind in order to build a more empathic technological world.
This has been The Bomb: Engineering a Path Forward. If you like The Bomb,

(29:52):
don't forget to subscribe, rate, and share the show wherever you get your podcasts. You can follow Supplyframe and Hackaday on Instagram, Twitter, LinkedIn, and YouTube, and the Design Lab at Supplyframe Design Lab on Instagram and Twitter. The Bomb is a Supplyframe podcast produced by me, Magenta Strongheart, and Ryan Tillotson, written by Maggie Bowles, and edited by Daniel Ferrara. Theme music is by

(30:12):
Anna Hogben. Show art by Thomas Schneider. Special thanks to Giovanni Salinas, Bruce Dominguez, Thomas Woodward, Jin Kumar, Jordan Clark, the entire Supplyframe team, and you, our wonderful listeners. I'm your host, Magenta Strongheart. See you next week.