Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Alex Kotran (aiEDU) (00:05):
Roy Bahat,
head of Bloomberg Beta, very happy
to have you on our still-burgeoning but, I think,
relatively maturing show, aiEDU Studios.
Roy, you and I met a long time ago, before most people were
talking about AI.
You were talking about the future of work.
I don't know if you would have thought of yourself as an AI
(00:26):
person.
You certainly were one of the thought leaders on the
conversation about the future of work, which was about more than
just AI, and I think still is about more than just AI.
But how do you describe yourself now?
Is it any different from when we would have first met, back in
2018, 2019?
Roy Bahat (00:44):
I mean, we've been
investing in AI as a firm since
2014.
And so, for sure, AI would have been even then my number one
investment area.
And I don't know, am I an AI person?
We're all still figuring it out.
So, like, the most expert people I know in AI are the ones
(01:05):
with the least confidence about what is happening right now,
and so together we're all trying to shape this understanding
live, as it's happening.
So I think, in that way, it's a lot the same.
You know, I teach at Berkeley, so I have some limited
understanding of the educational setup, and I'm a parent
of teenage kids, and so that's how I relate to all this.
Alex Kotran (aiEDU) (01:27):
What do you
teach at Berkeley?
Roy Bahat (01:29):
I teach in the
business school.
I teach two courses: one on the business of media and the other
on leading an organized or organizing workforce, because I
spent a bunch of time with organized labor.
So, you know, I'm a VC at Bloomberg Beta who believes that
AI is the most important technology trend that's
affecting our world, and workers organizing is the most important
(01:50):
human trend that's affecting at least our economic world, our
working world, and so those are the areas where I spend time.
Alex Kotran (aiEDU) (01:56):
Excellent.
Yeah, I would love to dive into that.
I mean, there's so much; I was thinking about
what to talk with you about.
I want to start with actually just sort of the most
recent stuff that I've been obsessed with, and I'm curious if
you've had a chance to think about it.
Um, so first, the Bureau of Labor Statistics.
(02:17):
They published some statistics, I think it was in February.
Yes, this was in February, so that question will have to be
answered sort of in real time.
I don't know if this is, like, an official update to their stats.
I'll be honest, I'm not extremely deep into the arcane
details of how the BLS works, but they basically
(02:41):
published this update to some of their projections to
incorporate AI, and the thing that jumped out at me was
they're predicting, I think, like, 18 to 19 percent growth in
computer science jobs over the next 10 years, which stands in
stark contrast to what I have heard from folks who are really
on the front lines of this technology and their
(03:03):
expectations about the impact it's going to have on
unemployment.
Hot take? I mean, I don't know, if someone were to ask you.
All employment predictions are stupid.
Roy Bahat (03:13):
Yeah, I mean, the BLS,
as last I recall, took
current trends and just extrapolated.
They have been wrong many times, but that's because prediction
is really hard, and, you know, one of the reasons why it's so hard
is, first of all, you have two effects.
You have an income and substitution effect.
For the economists, it's basically like you make it
(03:33):
easier to be a software developer and make software
developers more productive.
There are reasons to think you might want fewer software
developers, because each one can do more, and there are reasons
to think you might want more, because you want more software
developers compared to marketers, or something like that, and so
both of those effects happen at the same time.
But the other thing that happens is work constantly gets
redefined.
I mean, what a teacher does today compared to what a teacher
(03:56):
did 30 years ago.
Many of the things are the same and some of the things are
different, and so the faster we make these changes,
the more we relabel. Like, what is a software engineer today
versus 10 years ago?
It's a different kind of a job.
By the way, it's a different job now from what it was a month
ago, meaning the AI tools have been improving that
(04:19):
fast, and so I don't know how to make predictions about it, and
I'm not even sure why they matter.
Like, let's say I told you that we were going to have 10%
more software engineers and not 10% fewer, or something like that.
How much would it affect things?
I'm just not sure.
Alex Kotran (aiEDU) (04:38):
I mean,
well, I think it would affect things.
Surely, if you're a junior in high school.
Roy Bahat (04:47):
Okay, what job should
I prepare myself for?
Alex Kotran (aiEDU) (04:49):
Well, and
also, should I take on $50,000 of
debt per year to go to a tier-two computer science program?
Roy Bahat (04:56):
Totally, and I think
the answer to that is as hard to
answer now as it was to answer five years ago, which is,
honestly, I just don't know in a lot of cases.
And one of the things that I believe is happening, and this is
a place where AI can be very helpful in a learning
environment, is, you know, the old model,
the model that I went to school under, is first you learn and
then you do. You prepare yourself and then you do the
(05:18):
thing.
The tools have become so good that there's no reason why
learning and doing should be so divorced from each other.
So I look at schools like, I think we've talked about this,
the high school for entrepreneurship in Fresno, the Patino
School. Those students graduate with businesses.
That is wild.
When I look at a tool like Replit... We were investors in
(05:38):
Replit.
Replit started out as a learning tool for classrooms, to learn
how to code.
Now I just type in plain English what I want the software
to do, and it makes it. And, you know, I have to learn how to
do that.
It's not automatic.
I have to learn, like the way you learn riding a bicycle.
(05:58):
So my own kids ask me, what job should I go for?
And I'm like, I honestly don't know.
And this should give solace to parents who feel like they don't
know the answer to the question, because none of us knows.
Roy Bahat doesn't know.
The only answers that I think are true are either true by luck
or they are bullshit. And so what I tell my kids is: learn how
to make your own money, learn how
(06:21):
to continuously improve your own skills.
Because I just spoke to a group of HR professionals yesterday,
and this learning and doing thing has lots of analogs in the
workplace, which is, it used to be,
you got to the workplace and you expected your employer to
teach you how to do your job.
You went to training sessions, and, of course, all that is
still true to some extent, but it is the people who will
(06:43):
themselves learn how to use the skills.
I call this, in terms of use of technology, being the CIO
of your own life, the chief information officer of your own
life, if you're the person constantly doing that.
I mean,
somebody on that panel described Perplexity, which is a company
that we in tech know well.
It's a search engine that I use as an alternative to Google,
(07:04):
because it looks things up and finds the articles on the
internet using AI and then writes a summary.
It's like what I want Google to do.
And somebody on stage called it "that P company," and I almost
laughed, because the level of knowledge that those of us have
who are paying attention, trying to improve our skills, and the
level of knowledge others have is just really different, and so
I think
(07:24):
my advice to them... They couldn't think of the name.
Alex Kotran (aiEDU) (07:26):
What's that?
Roy Bahat (07:27):
They couldn't think
of the name.
Yeah, it wasn't even that they couldn't think of the name.
I mean, it was more that it struck them as some newfangled
tool that had just come out, when it's been part of my core
workflow for two years. Like, I wouldn't forget the name of
Gmail.
And so, you know, to me, the advice to students is going to
(08:00):
be less about which job title and occupation you pick. Whether
it's worth it, that I can't speak to, because that gets to,
like, your family's financial circumstances, and so I'm not
going to say, yes, it's for sure worth it, you should do that,
because I just don't know.
Alex Kotran (aiEDU) (08:14):
I think
this might actually be an area
where you and I differ, you know, have sort of
slightly differing points of view.
Roy Bahat (08:21):
I can learn from you.
Alex Kotran (aiEDU) (08:23):
My instinct
is, if you look at some of the
research on how generative AI is used right now in knowledge
work, what concerns me is that you're seeing, like, a few
things, like there's a deterioration of skills
among people who are overly reliant or comfortable or
confident in the tools, and so, ironically, it's almost like
the more confident you are in AI, the less and less effective
(08:46):
you actually are, because what you need is
someone who is sufficiently skeptical and, like, really paying
attention.
It's like, I don't know if you have a Tesla, but I have...
My friends who have Teslas will tell you that... A venture
capitalist in San Francisco, of course, will have a sticker
that says, "I bought this before Elon went crazy."
Good, good.
So, can you tell?
(09:07):
I mean, do you use a self-driving mode?
Roy Bahat (09:10):
I do.
Alex Kotran (aiEDU) (09:12):
Do you find
yourself, like, are you holding
two hands on the wheel?
Roy Bahat (09:15):
No, of course not.
Like all of us, I do the thing where I try to pay as little
attention as needed before the system times me out, and I even
have special sunglasses in my car so it can't tell if I'm
paying attention.
Alex Kotran (aiEDU) (09:26):
Because
there's a camera, right, that will
actually sort of look to see.
Yeah, it's creepy too.
And this is despite you knowing that there are plenty of
stories of people who overly relied on self-driving mode and
got into accidents.
Roy Bahat (09:38):
Yeah, for sure.
Alex Kotran (aiEDU) (09:41):
Isn't this
going to be something that
companies care about?
If you're an accounting firm, if you're a law firm, of course.
I mean, because I think what I'm pushing back on, and I don't
want to strawman your point of view, but what I push back on,
you know, at the summit that you spoke at, I was there as well,
and you'll hear people say stuff like, well, all people need to
know is just, they need to just learn how to use AI, and that can
(10:02):
replace, sort of, the knowledge and skills that
they're currently learning. And I never said that.
You didn't say that, and so I would never say that.
Roy Bahat (10:13):
So, help me thread
this.
Yeah, so this is how I thread that: if you rely on the AI's
outputs and take them at face value, it's the same as if you
hired somebody to work for you.
I think of AI as like a really good intern.
A lot of people have said this. If I rely on the intern's facts...
I mean, I remember I wrote a post about learning to code in,
like, 2012 or something like that.
(10:34):
Asa Hutchinson, when he ran for governor, like, excerpted that
piece and passed it off as his own policy, and somebody, BuzzFeed
or somebody, wrote a story about it.
Why did that happen?
Because the intern put it there and passed it off as their own
work.
The governor, the gubernatorial candidate, didn't do that.
In fact, he had the class to call me and apologize, and I
(10:56):
didn't care.
I was just happy that the work got out there, but I cared a lot
that he was classy enough to own his work.
So I don't think you can rely on
the AI's outputs and just treat them as finished.
But I do think that the people who are more reliant on AI to
get the work done, to take parts of the workflow over, are going
to perform much better.
(11:16):
So that's the trick.
And here's what I don't want to have happen:
everybody saying, you know,
it's really sad that everybody's learning how to bake
bread because they just don't know how to make flour anymore,
and it's really important that human beings know how to mill
the flour themselves.
It's like, no, somebody has to do that, and I need to know, as
the buyer of the flour, that they've done it in a way that's
safe and blah, blah, blah.
But what we are learning right now is intuition over when can I
(11:41):
trust the thing and when can't I.
When do I have to double-check?
I mean, somebody just did a profile on me, that a fundraiser
did, where they used OpenAI's deep research to do, like, a
dossier on me, and it was amazing, because it had insights
in there.
I was like, oh my God, if somebody tried to fundraise for
me using that, it would totally work. Like, it understood my
psychology.
But the very first bullet said I was the son of Holocaust
(12:02):
survivors who went to NYU.
And I'm not the son of Holocaust survivors, and I did
not go to NYU.
I am the grandson of people whose siblings died in the
Holocaust, and so it's kind of truthy in a certain way.
So I think the way to square the circle you're describing is we
need to learn the skill of when to trust the outputs and when
not to, and blind faith is basically always stupid.
Alex Kotran (aiEDU) (12:27):
And when
you describe, for example, like,
a school leaning more into providing students with this
opportunity to do entrepreneurship, like, as I can
attest, you know, building a company requires a heck of a lot
of expertise, and the rapid acquisition of expertise
in things that you don't know.
To me, what you're describing is actually different
(12:50):
formulations of how students are building and capturing that
expertise, and how they are representing their mastery and
knowledge, and how we're assessing them.
But I think this is important, right?
You're not saying that they just need to get really good at
the AI.
It's like necessary but not sufficient.
Roy Bahat (13:10):
I think being a user
of the AI, yeah. Words like
"replace" I'm really hesitant to use, because replacement, first
of all, is lazy intellectually, because you sort of imagine
the job as it is and the only thing you imagine is the
replacement.
So, like, instead of me doing the dishes, there's a robot
doing the dishes, you know, something like that.
Instead, what I think ends up happening is that the activities
(13:32):
shift around as we redefine how we do it.
I mean, replacement, to me, makes me think, you know, of that
story about how the early ads on television were just like radio
ads, where the announcer just sat there and read the ad.
That's replacement.
Replacement is stupid.
But what will happen is redefinition of everything.
What experiences will I need and won't I need? And it's going
to keep changing.
(13:53):
The other model that's broken is the model of learn-then-do,
because we've all talked about lifelong learning.
But learn-then-do assumes you get trained at the beginning of
your career, then you do your whole career, and, yeah, you
improve as you go by learning on the job.
But I actually think it's going to be much more continuous,
which maybe is obvious.
Alex Kotran (aiEDU) (14:13):
You don't
make predictions, so maybe you
can just sort of give me feedback on this.
An idea I have is that there's this assumption right now that
AI is going to be really good at replacing entry-level jobs.
So maybe we start with that assumption.
Do you buy that?
That, like, if you look at sort of the jobs that a company has,
(14:35):
it's really just the entry-level jobs?
I'm not convinced by that.
Roy Bahat (14:37):
No, I mean, I think
what is true, the big
surprise with generative AI, so tools like ChatGPT and Claude
and Replit, et cetera. And, you know, there's lots of other
kinds of AI.
Like, we invest in AI that makes aircraft flight routes more
direct.
You know, that doesn't have anything to do with generative
AI, or very little.
Anyway, they may use some generative AI somewhere.
(14:58):
The big surprise is, everybody used to be worried
about all the low-wage work being automated, and now I know
a lot of software developers who are worried, and lawyers who
are worried, et cetera, et cetera, and so I think that's a
major shift.
I think that the other thing is, it's going to vary a lot
(15:19):
depending on the nature of the work.
So, like, some care work is going to be really hard to
automate.
Like, you know, we've been working on these things for a long
time, and in 2016, I think, we did focus groups on people
who, in addition to their job, cared for an aging relative,
because the multi-generational thing is going to be much more
(15:42):
of a thing.
Like, by the way, the one prediction I'll make about work
is people are going to be older, which is huge and so
obvious that we don't even think about it.
Like, I was like, how are the kids going to learn AI?
You're working on education.
It's essential. But I'm actually much more worried about
what 55-year-olds are going to do than I am about what
15-year-olds are going to do, because there's going to be more
55-year-olds at work than 15-year-olds, or 18- or
(16:05):
21-year-olds, and the proportion of the workforce that is older
is going to grow.
And so, I don't know, I guess I'm not sure if I'm giving you,
like, a satisfying answer, because I think a lot of this is
just unknown, but it's going to vary by occupation.
The thing that people will want is anything where the person
(16:27):
demanding it wants it to come from a human.
And the focus group example is, we talked to a guy in the focus
group, I was behind glass or whatever, and he was saying how
he cares for his, like, aging aunt or something like that.
And the focus group person said, like, you're a successful
person, you're, like, an executive at an insurance company
or something like that, and she's like, you can afford to
have somebody care for your aunt.
(16:49):
Why don't you do that?
He's like, because my aunt doesn't want somebody, she wants
me. And that you can never take away.
By definition, you can never take it away.
But, you know, could call center jobs be less plentiful?
Yeah, of course they could.
Might the people who do those jobs need to do something else?
Yeah. I mean, 100% of the work is going to be automated,
(17:10):
because 100% of the work always gets automated.
Like, look at you and me. If you asked
my great-grandparents about work and you said, this is what
we're doing right now, you and I are both at work right now,
they'd be like, that's not work. Like, my grandfather lost his
eye at work.
My point is just that we continually redefine, and the
(17:31):
really meaningful question is, what produces a life that
somebody wants to live, values and experiences, and enough money
that they can feed their
family and live in a safe and stable way?
Those are very open questions.
We can talk more about the different kinds of AI and how it
might affect things, but those are the big questions to me.
Alex Kotran (aiEDU) (17:52):
Yeah, this
is, I mean, this is, I think,
where folks in the education space struggle, because they
hear this.
You know, setting aside actually talking about
artificial general intelligence, you hear folks
who are really at the front lines of this saying things like,
you know, it's actually possible that the vast majority
of work is displaced. And for folks in education,
(18:13):
their heads kind of explode, because it's not really, like,
it's easy for us to think about, OK, how do we sort of shift
education to orient students towards a slightly different set
of career pathways?
But then you have someone come in and say, well, actually,
there may not be pathways for a lot of folks, and we need to
sort of think about what the role of education even is, and I'm
not sure how productive that is.
Roy Bahat (18:33):
I think that's a
busted mental model, in
the following way: we will always have pathways, because we will
always redefine what we do as work, whatever we do in the US,
again, what you and I are doing right now.
You're not saying no work, you're just saying what we
think of as work will change.
That's right, that's exactly what I'm saying.
But I do think it's an open question how people will be
able to earn enough to live and how they will enjoy their lives.
(18:57):
And that's where things like government policy come in.
One of the reasons I believe in a much higher social floor, and
as a proud AFT member, you know, my union has advocated for a
higher social floor in many, many ways, is
because I think it'll stabilize many of those
transitions. Because just because there might be more jobs
(19:30):
in total doesn't necessarily help
Alex Kotran (aiEDU) (19:31):
You know,
Alex Notran, who's the other
Alex, who doesn't have, you know, a good job that he
loves, like running an AI education nonprofit.
One of the heuristics that I use, because, like,
sometimes people want something much more actionable, they're
like, is my job at risk?
And, as you say, it's hard to say.
Roy Bahat (19:39):
No, no, no.
Actually, I think that's not hard to say. The answer to that
is yes.
Anybody who asks you, is my job at risk? The answer is yes.
There are questions around which aspects of your job.
We invest in a company called Work Helix that does workforce
analysis for big companies, where they basically determine
here are the jobs that are more and
(20:00):
less vulnerable right now.
So there's a question about degree of vulnerability.
There's a question about timing.
But everybody, like, the notion of, I'm going to pick a career,
it's going to be safe for a long time, and I'm going to be fine?
There might be some exceptions, but in general, I think that's
gone away.
And look, we invest in a company called Campus
that is a national high-quality community college, and people
(20:22):
who do learning in a new environment like that, I
actually think, are much more likely to be successful than
people who think the old ways still apply.
Like, the old ways, if they worked 10 years ago, which I
don't think they did, they for sure don't work anymore.
Alex Kotran (aiEDU) (20:40):
One of the
heuristics, because sometimes I
think people are still not necessarily happy with
"well, all jobs are at risk."
So one of the heuristics I use is, you know, open up your
calendar and look at how much time you spent interacting
with people and how much time you spent sitting in front of
your computer.
And what I say is, it doesn't really matter what
you're doing at your computer, whether that's writing or
(21:01):
researching or coding or analyzing.
It's probably not a good sign if too much of your day is
solo, sort of, creation of stuff or writing stuff.
In a previous conversation, though, you sort of challenged
me a bit, because you said, well, right now we assume
(21:22):
things like empathy and communication are sort of the bastion
of human work.
Roy Bahat (21:27):
But I don't want to...
I want to paraphrase, but you're
like, I'm not so sure about that.
No, no, I for sure believe that we shouldn't tell ourselves
bedtime stories, like, humans are inherently empathetic and
machines are not.
I mean, there are already studies suggesting the AI has better
bedside manner than the typical doctor.
Yeah, so I think we are...
(21:49):
There's a great book, The Man Who Lied to His Laptop, about how
people anthropomorphize machines, written by the guy who, for
folks old enough to remember Clippy in Microsoft Word, figured
out why people hate Clippy so much.
He was a Stanford professor, Cliff Nass, and I think that we
shouldn't tell ourselves bedtime stories, and instead, you know,
(22:11):
it's about figuring out where the enduring value is and assuming
it's just going to keep evolving.
And people are looking for the safer place, the higher ground,
and I just think that's not a great way to think about it.
Alex Kotran (aiEDU) (22:27):
Because
it's all going to get flooded at
some point, and you just need to learn to swim.
Did you watch the Y Combinator podcast about vibe coding?
You've heard the top lines, right? About how, like, a
quarter of their cohort report that 95% of their code base is
written by AI.
None of this is in conflict with any of the things that
(22:50):
you've shared earlier.
Have you heard similar things from Bloomberg Beta's portfolio?
I hope it's true.
Roy Bahat (22:59):
I mean, look, first
of all, we had an emergency
meeting of our portfolio last month, because so many people
were freaked out by how much faster they could go using AI
tools to code.
They wanted to make sure they were all learning from each
other.
These are expert startup founders.
The paradox is, it's new and the tools are new and it's moving
(23:21):
fast, but the principles are not that new, and in a way, I think
that's an analog to education.
It's like people learning how to think for themselves, how to
assess what's right and what's not, morals.
I mean, there's all kinds of stuff where the principle is
still going to apply.
It's just a question of how. And so, the reason I say it's not
new: in 2012, maybe 2013,
(23:43):
I was very interested in this question of how people could
learn to code, because I assumed everybody would learn to code
at some point, because I just saw how useful it was becoming,
how much easier it was becoming. And a friend of mine introduced
me to this guy who was the professor at Stanford who taught
the intro computer science class, just so I could learn
from him. He ended up teaching that class, I think, for
almost 30 years. And he said, this is now more than 10 years
(24:10):
ago, he said, nobody programs anymore.
I was like, what do you mean?
He's like, well, because the tools have gotten so good that
the abstractions they use are not real programming.
And I think he was right, by a certain definition.
But it's a little bit, again, like, just because I
don't know how to mill the flour doesn't mean I can't bake the
bread. And there's skill in baking the bread, and, by the way,
there's skill in owning the bakery and buying from the
(24:33):
person who bakes the bread, or hiring somebody who bakes the
bread, or whatever.
And so, where people move up and down the layers of abstraction
in order to do something valuable, that, to me, is a major
question.
Alex Kotran (aiEDU) (24:50):
I mean,
look, I'm completely aligned
with this, the baking analogy.
I'm always in search of analogies, and that's not just
because I like them.
I think one of the biggest challenges we have, again from
the perspective of education, is translating what is, at least
initially, a relatively opaque or arcane topic to a lay
(25:13):
audience.
It's really hard, and I think people sometimes
revert too quickly to sort of the shortcuts.
So prompt engineering, for me, is, like, you know, I
think there's a lot of folks who have spent a relatively
small amount of time with probably just ChatGPT,
and they're sort of going around bandying about this
(25:34):
idea that, well, everybody is just going to be a prompt
engineer and that's going to be the job of the future, and that
just seems like lazy thinking to me.
Roy Bahat (25:43):
Lazy thinking, but
it's also, I mean,
"therefore, everybody's going to be a prompt engineer"
is sort of like, in the first two weeks of COVID, it was,
look, it works to work remotely.
Everybody's going to work remotely.
It's the future.
Alex Kotran (aiEDU) (25:57):
But I think
part of it, yeah, I think that's true, but
part of it is, like, you've been
in the space way longer than me.
I've been in the space way longer than most people who are
now talking about AI.
If you weren't one of, like, the nerds, you know, prior
to November 2022, AI was literally just science fiction,
(26:19):
and so you don't necessarily have a barometer of, like, what
the velocity is. And they're also not on Reddit.
They're not on Discord.
They're not seeing... They don't know what a reasoning model is.
They don't know, like, the way...
Roy Bahat (26:32):
You don't need to.
I mean, again, I don't need to know how to mill the flour.
Like, here, I'll give you a practical suggestion: Ethan Mollick.
Ethan Mollick is a Penn professor, as you know, who does
practical tips about how to use AI.
Everybody who is at all curious about this should subscribe to
his newsletter.
And then you'll see how fast it's going.
Alex Kotran (aiEDU) (26:52):
So that was
what I was going to ask you.
It was like, how do we... So subscribing, just sort of
digesting information from people who are taking on the role
of being those translators?
Roy Bahat (27:01):
Yeah, it's what
Malcolm Gladwell would call a
maven.
Alex Kotran (aiEDU) (27:05):
Right, and
obviously this is the work that
we do as well.
Not that I mean to plug myself.
No, and that's why I think it's very valuable.
But I still wonder about, you know, aiEDU, Ethan Mollick, Roy
Bahat. There are not enough of us
going out and doing the work of informing the general
population, given what you have shared
(27:26):
earlier, which is that the scope of this challenge is literally
everybody. Like, we are not talking about,
you know, how do we make sure that X percent of students can
improve their literacy because they're not graduating,
you know, being able to read and write?
Or, you know, how do we deal with, like, you know, lagging math
scores?
We're talking about, even if you are a top-performing student
(27:47):
who is going to become a doctor.
And I've actually talked to my brother, who's
a doctor, and his boss has been an early user of AI, and he was
telling me, like, honestly, doctors, at
least near term, are probably going to be
impacted sooner than the nurses, which kind of blew my mind, and
it's just like your point about knowledge work.
(28:08):
The conversation about AI previously was, well, basically,
the poor people are going to be impacted.
What are we going to do with those poor people?
It's just too bad.
And those were the conversations happening at the
World Economic Forum, and now it's slightly different.
It's like, oh shoot, this is software engineers, this is
lawyers, accountants.
But looking back to analogies, I mean, when you look back at the
(28:32):
internet or, you know, some of these prior technology
revolutions, what worked really well to keep people from falling
behind?
Is there any sort of analog, besides, obviously, like,
newsletters?
Roy Bahat (28:58):
It's a great question.
I mean, I think the bad news is, it was really bad in past
transitions.
How so?
Alex Kotran (aiEDU) (29:05):
I mean, we
had two world wars after the
Industrial Revolution.
That part seemed to suck.
Roy Bahat (29:08):
Okay, so you're going back, right? Okay. And, you know,
the manufacturing transition in the US hollowed
out entire places.
You know, like, pick your thing.
So the risks are real and big, and that's why we need to be
prepared for a societal response.
And my best mental model for that is hurricane and disaster
(29:30):
response.
Like, at some point there may be an AI hurricane that some
version of FEMA, economic FEMA, needs to respond to, the way
that we responded to COVID, honestly, economically speaking.
And I'm not saying we did all that right, but we swarmed, for
sure.
Then the second thing I'd say, the good news, is it's easier to
learn this stuff than before.
If you wanted to transition careers in the past, you might
(29:53):
have to move cities. You might not have the tools available to
you. But, like, you know, for the very technical people, or
more technical people, one of the great AI teachers, who was
one of the legendary teachers at Stanford and built the first
version of the Tesla full self-driving software, Andrej
Karpathy, is on YouTube.
Alex Kotran (aiEDU) (30:11):
Yep, it's wild.
Yeah, and yet we seem to be gravitating away from long-form content.
It's weird how YouTube is going super long.
Roy Bahat (30:23):
I don't think we're gravitating away from long-form content.
I think it's doing what everything else is doing, which is it's just bifurcating.
It's either very short or, if you look at, like, the top 10 podcasts, I mean, Joe Rogan can talk, Dwarkesh Patel can talk.
Alex Kotran (aiEDU) (30:39):
I think it's bifurcating is what's happening.
Do you have any sense of? I haven't looked at the numbers in terms of, like, who's actually watching the long form versus the reels.
It's not necessarily like older people are watching long form.
I actually don't know.
Roy Bahat (30:55):
My sense is that it
is bifurcating for everybody, and the age is more about channel.
It's more like young people are on TikTok and old people are on Facebook, but that's not a deeply researched thing, that's just kind of a sense.
Alex Kotran (aiEDU):
See, I like every student being their own CIO.
Is this within reach for an under-resourced public school, setting aside a private school that has the flexibility?
Roy Bahat (31:20):
One of the other cool things is almost everything has some version of the tools that is free, and so I don't know if there were ever something this accessible, unlike accessing the internet, where you needed all this equipment you didn't have yet, and laptops and internet connections, you know.
Again, I don't know the educational context, I
(31:40):
definitely don't know the K-12 context very well, but at least in principle, most of these tools are free or have a free version that's powerful enough to learn a lot from.
Alex Kotran (aiEDU) (31:50):
Yeah, I
mean, but can you-.
Roy Bahat (31:52):
Or am I missing
something?
Alex Kotran (aiEDU) (31:53):
You're not
missing something.
Unpack this, like... maybe not even everybody knows what a chief information officer does.
Roy Bahat (31:59):
Yeah, good question.
So what does a chief information officer do?
They decide what technology tools a workforce should use.
Okay, they try new things.
They figure out: should I build something myself? Should I buy something?
And look, with tools like Replit, somebody who doesn't know anything technical can make what the founder calls personal software.
They can make their own software.
But it's that process of not waiting for somebody to tell you
(32:25):
what tool to use, but going out and researching your own.
That, I think, is very, very valuable and becoming more valuable.
Alex Kotran (aiEDU) (32:32):
And this is interesting because a lot of school leaders right now are obsessed with what is the one tool that we get every student to use, and what I'm hearing from you is that may actually be counterproductive to the ultimate goal, which is equipping students with the ability to actually navigate different tools and make decisions about which ones are appropriate.
Roy Bahat (32:51):
Great observation.
Yes, although I also think, you know, pick any chatbot and make sure everybody has access to it.
'Cause, you know, I can ask Claude, hey, I want to use a tool that communicates securely in the following way, and, shockingly, Claude is generally pretty good at that.
Alex Kotran (aiEDU) (33:07):
How do you use... I mean, do you have an AI, an LLM of preference, for your own work?
Roy Bahat (33:13):
No, I use them against each other.
So, like, in my writing process, I frequently will basically go out and do a bunch of research, get LLMs to make an outline.
I mean, I can talk you through my whole writing process, if it's interesting, on how I use AI.
Alex Kotran (aiEDU) (33:32):
I would love it.
If you're willing, I would love to hear it, yeah.
Roy Bahat (33:33):
I mean, I can even send it to you in writing.
I mean, basically, the short version is, first of all, when I'm doing research I'll often just talk, so I'll record audio files of, like, oh, I'm thinking about this, but what about this argument?
I'm not sure, maybe the other one, blah, blah, blah.
(33:57):
And then what happens is I will take that and I'll feed it into an LLM and I'll say, please go research this other stuff that I'm curious about, find me a fact on this, blah, blah, blah.
And then all that leads to an outline.
I'm not getting it to draft for me, but I'm getting it to make a rich, detailed outline.
And then I'm oftentimes doing the same thing in another LLM.
(34:18):
So I have two outlines produced by the same prompts, and then I'll feed each one the other one's outline, I call it dueling LLMs, and say, this is the one that came from another LLM, please improve it and incorporate it with yours, and then they'll kind of converge into something great.
And then I'll look at it and I'll sometimes give a little feedback, and then I'll cut and paste the outline into, like, Google Docs or something like that, and I'll just type until I'm done
(34:38):
writing it, and I can go pretty fast, much faster than I used to be able to before.
I just wrote a blog post on what questions a person should ask before joining a startup that was based on that, and so, yeah, it's a process that has been working for me and that I've iterated on.
Like, literally just
(35:00):
today, my team was talking about some piece of writing we had to do, and I was like, okay, could we try using the AI in this way?
And so we're constantly probing the limits of what it can do.
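The dueling-LLMs loop Roy describes can be sketched in code. This is a minimal sketch, not his actual tooling: the `ask` helper and the `model_a`/`model_b` names are hypothetical stand-ins for real chat-model API calls, stubbed here so the flow can be read and run end to end.

```python
# Sketch of the "dueling LLMs" outline workflow described above.
# `ask` is a hypothetical stand-in for a real chat-model API call;
# it is stubbed so the control flow runs without any external service.

def ask(model: str, prompt: str) -> str:
    # A real implementation would call an LLM provider's API here.
    return f"[{model} outline for: {prompt.splitlines()[-1]}]"

def dueling_outlines(spoken_notes: str) -> dict:
    research_prompt = (
        "Please research the open questions in these notes and "
        "produce a rich, detailed outline (no drafting):\n" + spoken_notes
    )
    # Step 1: the same research prompt goes to two different models.
    first_pass = {m: ask(m, research_prompt) for m in ("model_a", "model_b")}
    # Step 2: each model receives the other's outline and merges it with its own,
    # so the two outlines converge.
    merge = "This outline came from another LLM. Improve yours by incorporating it:\n"
    converged = {
        "model_a": ask("model_a", merge + first_pass["model_b"]),
        "model_b": ask("model_b", merge + first_pass["model_a"]),
    }
    # Step 3: a human reviews the converged outlines, pastes one into a doc,
    # and writes the prose themselves; the LLM never drafts the final text.
    return converged
```

The key design point is in step 3: the outline is where the automation stops, and the drafting stays human.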
Alex Kotran (aiEDU) (35:11):
Yeah, I also... I mean, I got my scholarship to Ohio State because I won a little bit of money from, like, a writing competition, so I got a partial scholarship.
I've always been a writer.
I actually resonate with teachers when they stress about students using AI tools.
It harkens me back to one of the first English teachers that
(35:31):
we worked with.
We did this training, you know, this is like the very first days of ChatGPT, and I caught up with her six months later and asked her, like, how's it going?
Are you using AI in the classroom?
She was like, oh my gosh, it was so engaging.
My students loved it.
She had all these really fun activities that she created with the AI.
It's the most engaged that she'd ever seen her students.
And then, I mean, actually I had a New York Times
(35:53):
reporter that wanted to talk to a teacher that was using ChatGPT, and I said, oh, well, how are you using it today?
And she's like, oh, I don't use it anymore.
Well, why did you stop using it?
And she said, well, you know, I teach freshman English, and we got to that portion, and through that section they were not
(36:15):
learning how to create an outline.
They were learning how to ask and prompt-engineer for an outline, and she's like, those are actually different skills.
And it strikes me that what you described sounds great, but I think my suspicion is part of the reason why you're so effective with LLMs is because you've always been a really strong writer, and that
really strong writer and thatyou are able to so I really
(36:36):
appreciate you saying that Idon't know.
Roy Bahat (36:40):
I think you're a good
writer Well, thank you, but you
also don't know what I'vewritten versus what the LLM has
written.
Alex Kotran (aiEDU) (36:46):
I've read
your stuff prior to ChatGPT.
Yeah, fine, thank you.
Roy Bahat (36:55):
But the thing I would
just say about that is I'm not
suggesting that people should stop learning how to write.
What I'm suggesting is that it's a different process now, and that, the same way, by the way, like, you could have said the same thing about spellcheck, and I remember, I'm old enough to be like, penmanship is important to learn.
Is penmanship important to learn?
Alex Kotran (aiEDU) (37:14):
Is penmanship important to learn?
I don't think so.
I do think that writing is thinking.
Roy Bahat (37:21):
Sure, I agree with that,
and so my point is not that writing is like penmanship.
My point is that, of course, it's important to learn, and the question is how and in what context, and that we should get away from, well, if they're using the LLM, they're not learning to write.
It's like, no, they're learning to write, in the same way that when I use spellcheck, I'm still learning to write.
It's just different, and I need to pay attention to, how do I
(37:43):
edit?
Can I compose a paragraph on my own?
And we just don't know what all those lines are going to be.
Alex Kotran (aiEDU) (37:51):
I agree with that.
My instinct is, the whole thing about writing is thinking, and the whole question is, why does it feel wrong when a student is just getting to the outline so quickly?
And I think it's that there's actually a lot that you gain from sitting in front of a blank piece of paper and
(38:15):
struggling to figure out how to get started.
Like, you have these thoughts in your head, and this, like, sort of productive struggle that comes from writing, especially early on.
You know, like, once you do it enough, you kind of build the muscle memory to figure out how to get started.
And, you know, for me it was always like, I would spend a lot of time with the first paragraph, and once I had that, the rest kind of flowed.
But the process was not just...
(38:35):
I think that I actually built and learned a lot from the challenge that I faced in writing, and I wonder if something being so easy... yeah, and maybe it's just different ways of creating productive struggle.
Roy Bahat (38:58):
I think it's different ways.
So I think productive struggle is, for sure, essential, but I think the different ways is the question, because I sort of hear, like, when I went to school, I had to walk, you know, uphill both ways, kind of thing.
And, okay, struggle is great, it's just got to be necessary struggle.
An unnecessary struggle can be fine as an exercise, like, look, I go to the gym, I don't really, but I should go to the gym and
(39:19):
lift weights, and that's unnecessary struggle, but it's necessary for my own health and learning.
You know, yeah, and so, but I think I'm not trying to dismiss the idea of productive struggle.
I'm trying to dismiss the idea that which struggles are productive is fixed, because I think it's malleable across time,
(39:39):
and we should find new productive struggles.
That's my kind of general take.
Alex Kotran (aiEDU) (39:43):
Yeah, and this is where I find that the issue with teachers being worried about students cheating is so fascinating to me, because, on the one hand, I don't think it's correct to just think that a student using AI equals cheating.
Roy Bahat (39:55):
Of course not.
At Berkeley, by the way, in my class, I require the students to use AI.
The only rule is they have to tell me how they used it.
Alex Kotran (aiEDU) (40:04):
But okay, so the nuance is, if the student is more effective at using AI than the teacher, and certainly if the teacher doesn't have any comfort or experience with what AI is capable of, I do think it's possible for the students to get around the teacher's ability to create that productive struggle.
Because if the teacher doesn't know what
(40:24):
the AI is capable of, you know, they'll do things like, my favorite is when they say, oh, I figured out the Trojan horse strategy.
And this is one of the things that was shared on Facebook, where you put, like, you know, in white text, some, you know, Trojan horse to sort of fool the prompt into including the word Frankenstein somewhere in the homework assignment.
And to me, this is what this represents: the teachers are spending too much time trying to figure out how to actually enforce and stop students, and they're not investing the time into figuring the tools out for themselves, which I think would get them to a place where they figure out, oh yeah, I mean, that is a "you kids and your rock and roll" kind of moment.
Do you have any... so what other strategies, I mean?
So you ask the students to use the AI and show you how they use
(41:06):
it.
Roy Bahat (41:07):
I have no other
strategies.
Alex Kotran (aiEDU) (41:11):
How do you, how can you validate?
I mean, like, I assume your homework assignments are not... they're not doing multiple-choice tests.
Roy Bahat (41:20):
No, I don't have a cheating problem in general, because the students can't even disclose their grades.
They're paying a lot of money to be there.
It's business school, so my view is, right, it's not that if they cheat, it's fine, but I'm not there to enforce anti-cheating against them.
And there's no right answers to the questions I ask, like, the final assignment is a personal reflection essay on the class,
(41:41):
like, I don't even know what it would mean to cheat on that, like, did you not share the thing that you...?
And so it's just different for me, and easier.
But I do empathize with the fact that, in a different context, if you're a math teacher, if you're an English teacher assigning an essay on The Sound and the Fury, you know, cheating could be a big deal.
But I hear you on the desire for people to figure out what
(42:06):
they're doing, sorry, figure out the tools, being something that trades off with blocking others from using the tools.
So that seems like a real issue to me.
Alex Kotran (aiEDU) (42:21):
Okay, this self-reflection is one of my favorite ones.
It's something else that I hear a lot: teachers say, well, self-reflection, something that's personalized to the student.
And the challenge is, I think you can actually get a pretty damn good self-reflection with a single-sentence prompt, and you could do a little bit more prompting.
(42:43):
It would take you probably less than five to 10 minutes to get to something that's probably high enough quality.
Totally, maybe not you.
Roy Bahat (42:59):
By the way, if I had to do a self-reflection, I'd be like, hey, look at my emails that I've sent relating to this class.
I'd upload the emails directly into ChatGPT or Claude, or something like that, and I'd say, based on this, and also my Slack messages, which I'd cut and paste in, what are my reflections?
Alex Kotran (aiEDU) (43:10):
And then I'd edit that... and then you'd edit it.
But I guess the challenge is, what, like, how?
And so the point is taken that if you're an MBA and you're cheating, it's like, well, why are you spending the money?
Roy Bahat (43:21):
But also, like, look, whether it's easier or harder, I'm not going to the printing press and laying out the print type on the machine anymore either.
I'm not doing mimeographs, you know.
Like, there's a lot of conveniences that have come along the way, and I think we have to embrace constructive convenience and still find a way to learn the things we need to learn.
So your point about productive struggle I think is so valid,
(43:44):
but the mere fact it is a lot easier does not disqualify it, in my perspective.