
October 24, 2023 · 28 mins

As an MIT professor and tech entrepreneur, Devavrat Shah has seen firsthand how AI tools can impact research, business, and careers. While some have dire warnings about the scale of harm AI can cause, Shah is optimistic. He joins the Data Nation podcast to dispel some doom and gloom, unpack ways that people are already using AI to make change for the better, and examine how future benefits can emerge with regulation and education.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
[MUSIC PLAYING]

(00:03):

Should we be scared about artificial intelligence? Are the robots coming for us? We touch on all of this and more on Data Nation from MIT's Institute for Data, Systems, and Society.
I'm Liberty Vittert. And today, my co-host, Munther Dahleh, the founding director of MIT's Institute for Data, Systems, and Society,

(00:24):
and I are speaking with Devavrat Shah. As an MIT professor and tech entrepreneur, Devavrat Shah has seen firsthand how AI tools can impact research, business, and careers. While some have dire warnings about the scale of harm AI can cause, Shah is optimistic. He joins the Data Nation podcast to dispel some doom and gloom,

(00:47):
unpack ways that people are already using AI to make changes for the better, and examine how future benefits can emerge from regulation and education.
Let's start this discussion of the future of AI with, I guess, a personal question. And that would be: overall, what is

(01:09):
one word that would describe your feeling about the future of AI, and why? Are you hesitant? Are you fearful? Are you excited? Are you apprehensive? What would be one word that describes your feeling?
Very optimistic.
That was not what I was expecting. I love that. What gives you that optimistic feeling?
I feel that there's one part where everybody

(01:31):
is afraid of losing jobs, that there will be a doomsday: my computer will be hacked, attacks will happen, and all that. It all sounds fantastic. However, what I feel is that the biggest and largest value will come the same way washing machines, electricity, and automated vehicles changed

(01:54):
a lot of things for us; AI will play a similar role. And AI is not everything, OK. AI is a small piece of a much bigger technological world out there. And there is so much physical stuff that happens that AI is not going to change. Even if there is the greatest form of AI,

(02:15):
there will still be amazing food that you and I will enjoy eating. So coming back, the reason I feel very, very optimistic about AI is that I think, in some form, it's been around for a while. I mean, we called it machine learning. We called it, at IDSS, data science. And now we call it AI. So just the way we have made tremendous progress

(02:38):
as both an intellectual community and a society at large, I think there are a lot of good things that will happen.
You read in the news, or you hear commentators, or you hear people creating this very fearful environment around AI, almost as if we're in an AI revolution that's different from anything we've ever seen before.

(02:59):
You don't really feel like it's that way. This is the same as the cell phone, or electricity, or the Industrial Revolution. It's not something that is beyond our control, crazy, and out of this world. It's something that's controllable and gives a better life rather than a worse life. Is that fair?

(03:19):
So I think that there are two parts here. One is what you're saying about tremendous change, and the second is the controllability part. So maybe let's dissect it that way. I mean, cell phones, I have seen that change in my life. It was 1999 when I came to this country, and 2002 when I got my first cell phone, which still remains

(03:39):
my same cell phone number, and it has completely changed the way I do my day-to-day things. Electricity, I lived in a country where there would be days, actually many days, regularly, like weekly, when there would be scheduled parts of the day without electricity. So we knew how to live with electricity

(04:01):
and without electricity. And that has changed everything. There is a world before the internet and email, and after the internet and email. That has again changed things drastically. So in a similar sense, I think AI will have value. Now, the thing about thinking of AI as a sudden change,

(04:23):
I think that's mistaken, because a lot of these things have been around for a while. It's just that popular attention, popular societal attention, has come a lot more now, especially in recent times. And maybe that's the reason we are having this conversation. But again, I think we've been with this for at least 20 years.

(04:45):
I mean, since we started collecting large amounts of data, getting information out of it, and using it more and more, that's been the driving force. The very traditional view of AI has been: whatever artificial intelligence is, there's a natural intelligence, which is you and me. Whatever we can do, machines should be able to do, and hopefully, at least in some aspects, they can do better.

(05:06):
We can fold towels; a machine can fold towels, maybe a little faster. We can drive a car; maybe a machine can drive a car without falling asleep, and things like that. So there's been automation, and there is learning from data. A lot of modern AI has been about learning from data, rather than about automation or the coupling of the two. But I think that also needs to be thought about. So in short, it is a great change.

(05:29):
It's not been so sudden that we should suddenly start worrying about controllability. If controllability has been an issue, which it has been, by the way, and we can come to that in a bit, it's been an issue for at least 15 years. I mean, misinformation, that's also been very much a question of the controllability of AI.

(05:50):
Let me actually pick up and take the first part of your answer first. Your optimism, I would say, comes from your unique position, because you're an academician who has contributed to machine learning and AI, but you're also an entrepreneur, and you have used that in the business world.

(06:11):
And I would say you've had firsthand experience seeing how AI and technology have had a transformative effect on the business world. Maybe you can comment a little bit about where you see the biggest impacts in the business world, the job creation, and all this optimism that you come with, which I also agree with. Where do you see it mostly happening? And as opposed to the fear of losing jobs,

(06:34):
where is the job creation, and what are the opportunities that are created?
That's fantastic. I mean, I think maybe that's one of the key things we should start thinking about. So first is people worrying about jobs. There is no doubt that there will be jobs that disappear. For example, before the washing machine, people were washing clothes by hand,

(06:55):
and those jobs disappeared. But then that created new jobs: people who build the washing machines, who manage them, maintain them, et cetera. But it's not just that one thing, right. It's not just the washing machine; it's a lot more than that. For people who have built AI solutions, and who will be building AI solutions globally,

(07:18):
there will be lots of positions. There has always been this whole question of not having enough data scientists, right. Well, maybe with AI and all the tooling and development around it, that question will hopefully disappear. But the way it will disappear is by us academics actually educating people in how to use AI, and then

(07:40):
using its ability to solve harder problems that have not yet been solved. AI taking away experts will never happen. In both the short term and the long term, there will be expert individuals who have information that machines have not captured, who have the ability to do things that machines cannot do, and, the third thing, especially in the world of -- think

(08:03):
of compliance. For the purposes of compliance, you would need individuals sitting there. And really, more likely than not, a good nirvana state of the world would be one where experts are working with the machines, experts working with the AI, rather than being out of the loop. It won't be an open-loop system.

(08:24):
Let me ask a question for the purpose of conversation. When did the first fully self-flying, transcontinental flight take place?
I think the first -- and I only know this -- I mean, this is probably not what you're asking, but I'll go for it because I feel like I'll sound smart.

(08:44):
The first transcontinental flight? Oh, shoot, I was going to say the first-ever flight, because I was just in Pennsylvania at a flying field where the first unmanned flight took place, in 1896.
Yes.
But I think it only went 1,000 feet, so not transcontinental.
No, no. So there's a plane that took off from a runway, flew, and then landed.

(09:06):
I don't know, in the last 30 years? Again, [INAUDIBLE] likely you know this.
It was the late 1940s. And since then the technology has gotten better and better, but still we always have the pilots there. And as we know -- I mean, actually, as I understood from people who do fly -- flying is a seriously difficult, taxing business

(09:28):
because you're taking your device into three dimensions rather than two. And so maybe when you're flying in the air, machines -- I mean, automation -- can take care of it by tracking things and all that. Why tax humans? But when landing and takeoff happen, which are actually highly uncontrolled settings, let's tax humans. So in a similar fashion, I think that might

(09:50):
be the steady state we might be evolving towards continuously, right, in all aspects. And that's a good thing.
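
A minimal sketch of the human-in-the-loop pattern described here: automation handles the well-controlled phases and hands off to a person otherwise. The phase names, the 0.9 threshold, and the function below are hypothetical illustrations, not any real autopilot API.

```python
# Toy sketch of human-in-the-loop automation: the machine handles the
# well-controlled phases and hands off to a human expert whenever the
# setting is uncontrolled or its confidence is low. Phase names and the
# 0.9 threshold are hypothetical, not a real autopilot API.

UNCONTROLLED_PHASES = {"takeoff", "landing"}  # highly uncontrolled: tax humans

def choose_operator(phase: str, machine_confidence: float) -> str:
    """Decide who acts during this phase of the task."""
    if phase in UNCONTROLLED_PHASES or machine_confidence < 0.9:
        return "human expert"
    return "automation"

for phase, confidence in [("takeoff", 0.99), ("cruise", 0.97), ("landing", 0.95)]:
    print(phase, "->", choose_operator(phase, confidence))
# takeoff -> human expert, cruise -> automation, landing -> human expert
```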
And as we do those things, I think more and more human jobs will be created. Expert jobs will be created around that. It's like how there were no consultants maybe 40 or 50 years ago. Now there are lots of consultants

(10:11):
because this whole thing became a knowledge and information economy. And just because we had a lot of knowledge didn't take people out. So I believe that, in a similar fashion, there will be AI jobs created where experts will have a role to play. And we'll need experts not just to build more AI solutions,

(10:33):
but also to work with AI solutions.
To remove people's fear of this -- because I think you got it right when you said at the beginning that fear is really what overpowers most of the general public when it comes to the concept of AI, especially in the most recent months -- what would be a concrete example? You gave the example of people washing clothes

(10:55):
and then we have washing machines, and you need people to work the washing machines. What would be some of the most concrete examples, in the next 5 to 10 years, of this transfer of jobs to AI where you still need people?
So if I knew the precise answer, Liberty, I would not tell you. I'll tell you why: because that's where I would take all my money

(11:16):
and invest it, OK. But I'll give you an approximate answer. Pre-internet, if you had asked any one of us whether we would have imagined that we'd be watching movies not on disks, or cassettes, or in the cinema, that we would be getting food delivered, clothes delivered, and furniture

(11:37):
delivered, none of us would have believed it. But connectivity brought a certain speed, a certain removal of barriers, and a certain ability to connect disconnected parties, so that certain types of activities became possible. In a similar manner, in the simplest form

(11:58):
of AI, what's going to happen is that there's a lot of information, and machines will help us sort through that information better. Machines will provide us better recommendations. And the question is, will these recommendations create more jobs of some form? I do believe they will. I don't know exactly what forms.

(12:19):
So continuing the theme of fear, I think the development that happened with large language models, Bard and ChatGPT and so forth, is interesting. It challenges our distinctive intelligence: language. That's what made us different from all the other species.

(12:41):
We've got language, and language is expressive and so forth. And now ChatGPT can chat, and it can talk like a person. So some of that fear maybe is real; some of that fear maybe is anticipated, and so forth. But definitely, we're encroaching on a space now where we're a lot more afraid

(13:02):
than we were in November 2022. And so the question is, what are your thoughts? What should we be afraid of?
So I think what we should be truly afraid of -- and maybe going back and picking up on that misinformation thread, right -- is where personas can be assumed and we are unable to actually tell them apart. I think

(13:25):
that's what I would be really, really afraid of here. I mean, here in the United States, we have an election coming up. I'm actually really scared about what we're going to see, because if we have seen something with Facebook-related activities over the past decade, it almost feels like, with these new abilities,

(13:48):
that might be just the small tip of a massive, massive iceberg. And that is something that I am truly afraid of. I'm not afraid of jobs; I'm not afraid about job creation and the value it will bring. And maybe, if we translate it back, I think it's about us thinking through carefully what regulation means.
And in fact, let me just follow up, maybe,

(14:08):
also with some thoughts about privacy, because in the same vein that you described, with these complex language models, the more data they have on you, the more they can appear as a trusted agent and elicit even more data from you, which then becomes a vicious cycle in terms of the misinformation and the impact they can have on you.

(14:29):
Absolutely. And actually, Munther, this reminds me of one of the other related projects that you and I had, which we started off with our visit to -- what is it?
In New York.
Right. Data markets.
Yeah. And the whole thing is that right now, I'm afraid of asking questions of ChatGPT, because who knows? Maybe I'm revealing information that it will learn and then sell to somebody else.

(14:51):
When people used to write exciting blogs online, the reasons they wrote were, A, to tell the world that they're smart, B, to create their brand, and C, not to sell it. Now ChatGPT and things like that have gone around, eaten up all of that, and now they're imitating those people. And that is an issue, which means that tomorrow,

(15:13):
I don't want to ask ChatGPT any question that reveals any intelligent information. And if it does, maybe I want it to pay me, if somebody else is paying ChatGPT. Because when Google search was free, OK, that's fine, I'm still OK. But the moment ChatGPT starts charging people for whatever the premium package is, I want a hundredth of a cent

(15:34):
for the queries it serves using my data.
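
As a back-of-the-envelope illustration of the per-query royalty he describes, here is a toy sketch. The prices, attribution weights, contributor names, and the one-percent royalty pool are all hypothetical; no such payment scheme exists in ChatGPT or Google today.

```python
# Toy sketch of a per-query data royalty: if a paid answer draws on
# contributors' data, a fraction of the query price flows back to them
# in proportion to (hypothetical) attribution weights. The prices,
# weights, and 1% royalty pool below are illustrative only.

def royalty_payouts(query_price, attribution, royalty_rate=0.01):
    """Split royalty_rate of the query price across data contributors."""
    pool = query_price * royalty_rate
    total_weight = sum(attribution.values())
    return {who: pool * w / total_weight for who, w in attribution.items()}

# One paid query at $0.10, answered mostly from two bloggers' posts:
print(royalty_payouts(0.10, {"blogger_a": 0.7, "blogger_b": 0.3}))
# -> roughly {'blogger_a': 0.0007, 'blogger_b': 0.0003}, i.e. fractions of a cent
```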
That's a very good point. I think that brings us to the very interesting question of lawmakers and heads of companies discussing in DC what regulation should look like. And with ChatGPT, I mean, I remember when it first came out, there was a big story about how they

(15:54):
typed in, "Write a poem about Donald Trump," and it refused to write a poem about Donald Trump. But then they typed, "Write a poem about Joe Biden," and it wrote this glowing, wonderful poem about Joe Biden. Whatever anyone's politics are, it obviously has its own biases.
Yes.
And so what should the regulation look like? Should it be the government regulating? Should there be some organization

(16:16):
that comes in to regulate? What should that look like for AI?
It's a very difficult question. But the bottom line is that we need regulation. What we do not need -- and maybe I can start with what we do not need -- is a heavy-handed industry driving the decisions. For example, the simplest form of this was when the internet became the internet:

(16:37):
net neutrality has been a challenge, with the massive internet service providers deciding whom to charge how much. Well, I mean, if you're a big company, sure, you will survive. But the moment you start charging me for my thing, or stop serving my content as a small company, we cannot grow. So we have to avoid those kinds of issues for sure.

(17:00):
In the example you pointed out, appropriate freedom is needed. But at the same time, the thing that cannot happen is wrong information being created. So maybe, somehow, information authentication is needed. And then, going from there, nobody can stop research.

(17:22):
You can't -- this whole view that AI is like nuclear weapons, and hence, like the nuclear non-proliferation treaty, we need an AI non-proliferation treaty, that doesn't sound right to me at all. Maybe controlled use of it might be useful, but again, it cannot be like that.

(17:42):
Just the way utilities are governed, maybe we should be governing some of these things, too. I mean, we still have not governed social media yet. The pact that we signed when we adopted social media was: you give me your data, and you get the utility for free. Cell phones don't do that. They charge us money, and then in return we have a contract saying that you are not going to look into my data. So I think some of those things need to be thought out.

(18:05):
But I think we as academics have a much, much bigger responsibility here. In a sense, if some of my colleagues say that all the interesting AI research is going to happen in industry, I think that is completely wrong. Everything related to this has to happen in academia, because academia is where we will have an unbiased view.

(18:27):
And for that reason, I'm actually a lot more optimistic that academia will also do very well in terms of thinking about AI research. It's not just about complicated neural networks and yet another AI system.
So in fact, I'm going to take this to another level. And your speculation will be interesting, because I think a lot of us potentially

(18:47):
are still struggling with this. So regulation has to happen at every level of life. I mean, every technology needs to be regulated, because technology can have a side that is detrimental. Any technology can. A car can become a bomb, and it can kill people, and so forth. So we need regulation.

(19:08):
But we have a problem here of defining who owns what. I mean, information flow, a certain level of information flow, is actually needed. But then, at some point, when does my data actually belong to me? When the electric company is measuring my consumption of electricity, do I own that data, or does the electric company

(19:30):
own that data? When I use Google for free, does Google own the data in exchange for the free service they gave me? We have a problem of definitions. And, back to your point about the academic pursuit of this regulation, we don't have a framework.
Yes.
I don't understand the framework by which we're even

(19:51):
discussing regulation. What is the framework? Do we have a sense of where that's heading?
Fantastic question. So maybe let's reduce the scope so that we can have a conversation. And the reduction of scope would be to the context of, let's call it, recommendation systems, because in my mind, recommendation systems are one of the earliest AI applications

(20:13):
that started interacting with society at large. You go to an online marketplace like Amazon, and you buy things, and you are recommended certain things. And that's actually how Amazon has controlled whose products are sold or not, by the way, because it's a place where so many vendors come and sell.

(20:34):
That's how price discrimination has taken place, because different people are shown different prices at different times for the same product. And that's where your information about what you bought or not has created preferences that Amazon could have consumed and maybe sold to somebody else, saying, hey, a person in this zip code, et cetera,

(20:54):
has purchased these things, so let's sell to somebody else. So there's a lot of that happening there. The same is happening on the social media platforms, with information consumption and so on.
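
For a concrete sense of what such a recommendation system computes, here is a minimal sketch of item-based collaborative filtering, the classic "customers who bought X also bought Y" approach, on an invented toy purchase matrix; the data and function names are illustrations, not Amazon's actual system.

```python
# Minimal sketch of item-based collaborative filtering, the engine behind
# "customers who bought X also bought Y". Purchase matrix and names invented.
import numpy as np

# Rows = users, columns = items; 1 means the user bought that item.
purchases = np.array([
    [1, 1, 0, 0],   # user 0 bought items 0 and 1
    [1, 1, 1, 0],   # user 1 bought items 0, 1, and 2
    [0, 0, 1, 1],   # user 2 bought items 2 and 3
], dtype=float)

def item_similarity(m):
    """Cosine similarity between item columns."""
    norms = np.linalg.norm(m, axis=0, keepdims=True)
    return (m.T @ m) / (norms.T @ norms)

def recommend(user, m, k=2):
    """Rank unbought items by similarity to what the user already bought."""
    scores = item_similarity(m) @ m[user]
    scores[m[user] > 0] = -np.inf   # never re-recommend a purchase
    return [int(i) for i in np.argsort(scores)[::-1][:k]]

print(recommend(0, purchases))  # user 0 gets item 2 ranked first: [2, 3]
```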
Now, if we go back to, let's say, the social media platform, because that's where misinformation has been, at least, believed to be rampant,

(21:18):
there are a few things happening. One is, who is publishing the information? Two is, what recommendation algorithms are utilized? And three is, how are the advertisements deployed, and who is subjected to them? And so there are three parties here. There are the people who are consumers, the platform, which

(21:39):
is, let's call it, a social media platform, and then the content producer, whether it's advertised content or whatnot. That's the third party. And these three things need to stick together. The platform has gotten all the power right now. The contract happens implicitly between platform and consumers, and explicitly,

(22:01):
in a now somewhat monopolistic manner, between platform and vendors or third-party providers. That's what is going on right now, for example, with Google. That is, did they have an unfair advantage, with the government claiming that they had an unfair advantage in data, and that's how they could charge advertisers a lot?

(22:22):
So maybe that's a three-party environment that shows up everywhere: the platform, the consumers, and the sellers or content providers. And maybe that's one good place to start in terms of thinking about regulation. My knowledge is limited, but I'm pretty sure there are very good ways of thinking about two-party interaction

(22:46):
and contracting -- contract theory in economics, for example, [INAUDIBLE]. Law has certain ways of thinking about how you define the boundaries of these kinds of interactions. But with data and AI, I think there's a new dimension. So people like us working along with them, learning a lot of their stuff,

(23:06):
and then bringing that together, might be the right thing to do.
It's one thing to talk about the laws within the US that are able to govern this kind of stuff, but we obviously have global work on AI and a global race, if you might put it that way, to AI. And even with something like social media, TikTok has gotten banned in many places

(23:27):
in the United States because of the data transfer to China. And so when you have these arguments for regulating AI within the US space, that's obviously a very different discussion than regulating it in the global space. And the argument being: if we regulate it here in the US, somebody else is going to do it. China is going to leapfrog ahead of us,

(23:48):
and we're going to be in terrible trouble. So how do you balance that? And how do you see this working globally in terms of regulation?
Maybe two parts. One is that development and progress in R&D, let's call it, in AI is not necessarily what we should be regulating. Maybe I'm just naive, in addition to being

(24:10):
optimistic, but development of the technology is not destructive. It's the deployment of technology without regulation that is destructive. So maybe we should worry about the deployment, not necessarily the development. Deployment being destructive is destructive both to the outside world and to the inside world.

(24:31):
And one of the reasons the United States would like, for example, to do that is to retain the sanity of the society within itself. So tomorrow, if I'm a country on the outside, and I see that, yes, lack of regulation is going to remove the sanity of the society, I think maybe people would do that, too.
So a lot of this we're taking from the perspective

(24:54):
of some evilness happening and some country taking over and so forth. But we really are facing a problem of compliance and the law. A company says, my product does this. Who's going to verify that, given the fact that the product itself is learning and changing and so forth?

(25:16):
And now we have this legal aspect of, when it doesn't do the right thing, who's responsible, and who's verifying? Is this in the realm of regulation, or maybe a question of law, a question of legal action? It's a new world.
Yeah, no, I completely agree. I think that's another reason why I'm optimistic that the future

(25:38):
of academia will be very bright for a while, because these questions need a very fundamental take and also an interdisciplinary take. Wouldn't it be really good and cool? For example, as the internet came around, secure transfer became an issue. And that's [INAUDIBLE] who is going to certify it?

(26:00):
Well, very soon, things like that came around and certified it. And then a whole ecosystem has been built around it. And it has been a global ecosystem, because just the way people in the United States don't want to lose their dollars when they're transferred, people in China and people in India and people in Switzerland, everywhere, nobody wants to lose that. So I feel that style of ecosystem

(26:24):
will evolve, because it will be about self-interest. Alignment will be necessary for self-interest preservation.
When you say the future of academics is bright in this sense, and that experts are really going to be working with these AI systems, and it's not going to be an open-loop system, what is your advice for young data scientists or people who

(26:44):
may be looking to the future, wondering what kind of jobs they could have that would work well in the AI space? Do they need to go to places like MIT in order to be experts in this field, or are we going to start seeing technical institutes like we would if you want to be a welder, or a plumber, or, here, a data scientist? Where do you see that future going for academia in general

(27:07):
and for the public, for the kinds of jobs that they plan to have?
Fantastic question. So I'm going to pick up on one of the things I learned from Munther, which I thought was a fantastic point: so far, we in the United States have had universities of higher education. Where I grew up in India, and, for example, where Munther grew up,

(27:27):
there were always these technical institutions, as you said, for a variety of welders and all that. There are such institutions out there in the United States as well, but they're a lot scarcer. And I think we just need to have that enabled. Now, one potential model: there is no reason why MIT can't do that as well. Now, the question is, how does

(27:48):
it do that well while preserving its core on one hand and having the ability to help support that at the same time? That's something we all have to figure out. But again, I think core to your question was that education in AI is needed: for users, for somebody who wants to understand it, for society broadly,

(28:08):
and as an opportunity for younger folks as well. So I think there's definitely a role for all of us to play there.
[MUSIC PLAYING]

Thank you for listening to this month's episode of Data Nation. You can get more information and listen to previous episodes

(28:30):
at our website, idss.mit.edu, or follow us on Twitter and Instagram @MITIDSS. If you liked this podcast, please don't forget to leave us a review on Spotify, Apple, or wherever you get your podcasts. Thank you for listening to Data Nation from the MIT Institute for Data, Systems, and Society.