Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:10):
Hi, this is Eloy Ortiz Oakley, and welcome back to the Rant Podcast, the podcast where we pull back the curtain and break down the people, policies and the politics of our higher education system. In this episode, I sit down with Andrew Magliozzi. Andrew, or Drew as he likes to be called, is the CEO and
(00:32):
co-founder of Mainstay.
Mainstay is deploying human-centered AI to support more learners across the country and to help them be more successful in their higher education journey. Drew and his team have worked with institutions across the country, some of which many of you know. They were part of the great work that was happening at
(00:53):
Georgia State under the leadership of Tim Renick, using data to better inform the institution and give them more tools to help students succeed in real time, giving them the information and nudges they need to understand what is holding those students back. And so I get to talk with Drew about how Mainstay is deploying
(01:15):
AI today. What are some of the issues that they're wrestling with, particularly now with the avalanche of AI that's sweeping the country? There isn't an institution in America that isn't being bombarded with questions about how they're going to use AI. So Drew walks us through his thinking, talks about some of
(01:36):
the pitfalls, and we also talk about how higher education leaders should be thinking about deploying AI and not just going after the newest shiny object in the marketplace. So before I get into my conversation with Drew, I just want to take a moment to say that this interview was recorded
(01:59):
at the most recent ASU GSV Summit back in April in San Diego, and, of course, the ASU GSV Summit was as audacious, as big and as amazing as always. A lot of tech on display, particularly AI. You can't turn the corner anywhere in that conference
(02:19):
without hearing something about AI. So AI in education is certainly the issue front and center at the ASU GSV conference. So it was great to run into Drew to help us dive into some of the issues that we were hearing about, that we're seeing, and what we can expect going forward.
(02:40):
Every institutional leader should be thinking about how to deploy AI and, in particular, not just how to approach purchasing AI, looking at the tools that are available, but to stop and think about what problem they are trying to solve in their institution. Some of the best use cases right now really are all about
(03:00):
how to make the institution more efficient, more effective, and lower the cost of the education, particularly now when there's so much pressure on institutions to perform better, to deliver greater ROI and to show value. Using AI to make the operation more efficient is certainly one
(03:20):
of those use cases: being able to look at all the data across the institution, pull it together in a relevant format and make it available to faculty, to staff, to administrators, to use in real time. Also, giving more and better information to students, helping them make better choices along their higher education journey
(03:41):
and, of course, as Drew will talk about, helping personalize the experience for learners, providing them the support that they need 24-7, throughout the day, through some of these chatbots or other AI-type agents that we'll talk about. So a lot to consider. Higher education leaders need to be thinking about this, need
(04:02):
to be exploring this, need to be doing this in a way that is conscious of what they're trying to achieve, before they go out and buy the latest shiny object on the shelf that you will see at ASU GSV or that you will see walking onto your campus with a salesperson telling you how wonderful their latest tool is.
(04:23):
So, with that backdrop, please enjoy my conversation with Drew, CEO and co-founder at Mainstay. Drew, welcome to the Rant Podcast.
Speaker 2 (04:34):
Thanks for having me,
Eloy.
I'm excited to dig in.
Speaker 1 (04:37):
Yeah, it's great to have you here. We're here at the ASU GSV Summit in San Diego, the annual pilgrimage to this mecca of education technology, and this year is no different. There's 7,000-plus people roaming the Hyatt Hotel here and
(04:59):
lots of conversations. I can't help but see everything on the billboards, everything on the presentations. Every company has AI tagged into it and, of course, you've been doing this work for some time, so I'd love to get into it with you. But first, for our listeners who aren't familiar with Mainstay, tell us about the company, you know, some of its
(05:22):
origins, and then we can talk about you and how you got into this field.
Speaker 2 (05:25):
Sure thing, thanks. Yeah, 7,000 seems like an understatement down there, by the way. Yeah, so Mainstay. We've been at it for about 10 years and we've been deploying conversational AI for college success. The term chatbot has become a four-letter word, so we refer to it as an AI-powered coach, and essentially we tap into not only
(05:49):
the power of AI to automate conversations but nudge science and the ability to coach people with the science of coaching to get them to complete the learning objectives they set out for. So basically, we're an AI-enhanced coach at every student's fingertips, primarily over text message. We sit on top of data systems and we push messages to them based on the barriers that they might be facing, the deadlines
(06:11):
and whatever the data suggests needs to be done. And we don't just tell them what to do, we invite them into a conversation. And, you know, over the years we've done, gosh, at this point, 13 randomized controlled trials to prove that its outcomes are effective. But I often tell folks the most important research study we ever did was the one that had the least impact, because it was the
(06:33):
one time we did not have humans in the loop. And it's really interesting. Students tell us this all the time and, for context, last year we engaged about 5 million learners across America, and they tell us they feel that they can be vulnerable with a chatbot because it's not judging them, right?
But the fascinating thing is, if there is no human in the loop
(06:53):
, they are not accountable to it. So it's this sort of... what is the careful balance? And in our experience, about 2% of the messages have to be sent by a human, and usually a human that has a preexisting relationship with the student, to get the best outcome. Actually, having a human in the loop triples the effectiveness of our product.
(07:14):
So the second thing we are is a conversation co-pilot for advisors and educators, so it tells them who to talk to when, and gives them a suggestion on what they might say, and we use generative AI in those places. And the third thing we are is, for leaders, an insights platform. Hey, what if you could listen to the sort of whispers from all
(07:35):
across your campus and understand, hey, what are my first-gen students saying? What are my Pell students saying? What are my seniors saying? What's their sentiment, what's tripping them up and what's got them motivated? So it's a good way to systematically listen at scale. And so that's what we are to the sort of three primary constituencies of a university. And we work with schools, state
(07:59):
and national nonprofits focused on college access and success, and a handful of companies also doing workforce upskilling and reskilling. So that's us in a nutshell.
Speaker 1 (08:14):
So how did you get
started with this work and
what's your background?
Drew?
Speaker 2 (08:18):
My background. Well, my parents met doing VISTA, so public service is almost in my DNA, and I suppose I always wanted to do a double bottom line business. Before this, I'd run a tutoring company and felt like I was bringing inequity to the system, so I started a nonprofit, but
(08:38):
the left hand didn't know what the right was doing, and, you know, I deliberately let those go to do one thing that was double bottom line. And my co-founder comes from the higher ed industry as well, and so we knew we wanted to make a difference in students' lives. We knew that unless we had a positive return on investment, we were never going to be a must-have, and we were really
(08:59):
familiar with the behavioral science data about, if you engage people in this way over text message, you can move the needle. And I had a thesis that I thought I could automate a lot of this with AI and machine learning, and luckily we met the leading national researcher on the topic of summer melt and
(09:20):
then met Tim Renick at Georgia State.
And summer melt is a pernicious problem in higher ed where it's a leaky funnel: students say they're coming and don't show up. Right, you know it well. And at Georgia State, it was around 20% of students who committed were not following through, and they were familiar with all the nudge science research.
(09:41):
They estimated that to do that work by hand would have required a call center of a dozen people. And so I showed up, you know, saying, Tim, I think we can automate most of this, so you shouldn't have to hire anyone. He asked me, is it going to work? And I'm a terrible salesman, Eloy. I said, I have no freaking idea, Tim, let's find out.
(10:03):
And they must have been really desperate, because they said yes. And we then had Lindsay Page run a research study and, you know, fast forward nine months, the results were pretty awesome. Right, we dropped their melt 27 percent, boosted enrollment 4 percent, and this is a 50-50 treatment-control, like equal
(10:25):
audiences, half got it and half didn't, and the ones that received the treatment were much better performers.
But the sort of headlines at the time were, AdmitHub is a summer melt solution. Tim, you know, never losing his eye on the prize, said, you know, thank you, but getting people over the threshold is not the goal,
(10:48):
getting to the finish line is. Can you do this for persistence to a degree? And, in my naivety, of course I said yes. And that was probably an order of magnitude harder problem, right, because rather than getting everyone to show up at the same place and time, you're getting people to go down 10,000 different pathways. Right, but fast forward.
(11:10):
I think it was two and a half years. We ran another study, and this time, actually, this is an interesting study: at the time, it was mid-summer, the treatment group was two and a half percentage points above the control group when Tim and team made the decision to end the control conditions, feeling that it was unethical to continue
(11:33):
them to the finish line.
So we actually might have gotten an even better impact. But even two and a half percentage points at a school like Georgia State is something like 1,200 students who otherwise would have dropped out or stopped out.
So since then, you know, we've worked with a couple hundred institutions. We've also gotten into the academic experience, helping in
(11:53):
large lecture courses with high DFW rates, and Tim talks about this better than I will, but we've really maybe done the most exciting work there, reducing DFW rates for Pell-eligible students by 50% and really boosting academic performance.
And so I heard Tim talk recently at a conference and
(12:16):
we've done a lot of RCTs together, and he even said, we're going to do 21 more RCTs in the next three years. I hadn't heard it quite in those terms, but I'm excited. Tim is running around today, so maybe I'll grab him right after. Yes, he'll tell it how it is. That's great. But yeah, I mean, Georgia State is one exciting part of the innovation story,
(12:39):
but really all the work we've done is take the technology and have a deep partnership with someone who's not only willing to tell us their problems, right, but really open up the kimono and show us all that's going on and let us in, in a way that we can do something better together than either of us could do alone.
Speaker 1 (12:58):
Right, let's dive into some of the technology, because, you know, the examples that you just mentioned, Georgia State, that's been, you know, some time ago now, yes, and before this whole AI buzz sort of took hold, yeah, and now people hear the terms all the time.
(13:20):
People in higher ed hear the terms. They're feeling pressure to adopt some sort of AI solution. Let's go back to some of those early experiences with Georgia State. Tell us about the technology you were using then. How would you describe the technology, and how were you using that, along with the data that you had, to actually produce
(13:42):
the results that you saw in those studies?
Speaker 2 (13:45):
So most of what we do is over text message, and the superpower that we have is two directions. One is we look at the data, and when you have a hold on your transcript or an academic hold, or haven't submitted your immunization form, we've watched the data and reach out to you directly.
(14:05):
But the key is we don't just tell you. It turns out, by the way, just a brief aside on human psychology and teenagers, the least reliable way to get a teenager to do something is to tell them they've got to do it. You kind of ask them about it.
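For readers who want to picture the mechanics behind what Drew describes here, the following is a minimal, hypothetical sketch of a data-triggered, question-framed nudge: watch a student record for a barrier and, rather than issuing a directive, open a conversation over text. The data fields, helper names and SMS stub are illustrative assumptions, not Mainstay's actual code.

```python
# Hypothetical sketch of a data-triggered, question-framed nudge.
# Not Mainstay's implementation; all names and fields are made up for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class StudentRecord:
    name: str
    phone: str
    has_transcript_hold: bool = False
    immunization_form_submitted: bool = True

def build_nudge(student: StudentRecord) -> Optional[str]:
    """Return a question-framed nudge if the record shows a barrier, else None."""
    if student.has_transcript_hold:
        return (f"Hi {student.name}, it looks like there's a hold on your transcript. "
                "Do you know what it's about, or want help sorting it out?")
    if not student.immunization_form_submitted:
        return (f"Hi {student.name}, we haven't received your immunization form yet. "
                "Is anything getting in the way of submitting it?")
    return None  # no barrier detected, so no message is pushed

def send_sms(phone: str, message: str) -> None:
    # Placeholder: in practice this would call an SMS gateway and keep a human advisor in the loop.
    print(f"SMS to {phone}: {message}")

if __name__ == "__main__":
    student = StudentRecord(name="Jordan", phone="+15555550123", has_transcript_hold=True)
    nudge = build_nudge(student)
    if nudge:
        send_sms(student.phone, nudge)
```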
Speaker 1 (14:21):
I don't think we need
a randomized trial.
Speaker 2 (14:24):
Yeah, exactly, anyone who's ever met a teenager. And so when you say some of these things and open the door to a conversation, a lot of questions come back. And in 2016, '17, we actually were one of the earlier adopters of the transformer architecture. Same thing ChatGPT is based on, but not a large language model.
(14:45):
We were doing a teeny, tiny language model. We didn't have the resources to train, you know, hundreds of millions of dollars' worth, but we trained a model based on the questions and answers we were seeing and had seen, and some great data we got from the folks at Georgia State, and it was enough to get going.
And in about 2018, Google released an open source model
(15:07):
called BERT. Okay, and it was actually the sort of dawn of this open-source Cambrian explosion of these LLMs, but in a small way. And so we took that and we fine-tuned it. And then it was actually when GPT-4 came out that I had the moment, and
(15:27):
actually I honestly think most leaders need to prepare themselves for the rapid development of this technology and understand what the conditions are going to be for them to say what I said to my team, which is: stop everything. What got us here is not going to get us there. We threw away our model and we adopted GPT-4 as the core
(15:53):
brain of our system, and we have not looked back. That was a tough call.
I'm sure it was.
I mean, we had patents, we had multiple patents on how we trained our model, and it was no longer relevant or necessary. It was actually holding us back, and so it took a moment to say, like, we can't do it this way anymore. And it's now cheaper,
(16:15):
it's better and it has unlocked a whole realm of conversation we could never have dreamed of having with students before. But the real power is being able to push conversations to people, right? And now we can push any generative AI conversation you can imagine to somebody at their fingertips on their cell phone, which might seem small, but actually the ability to do that,
(16:38):
I think, is one of the key reasons why we close achievement gaps rather than exacerbate them. Because you can't just serve the students with help-seeking behaviors. You have to reach the ones who aren't raising their hands.
Speaker 1 (16:53):
Aren't raising their hands or aren't thinking about what questions to ask, and that happens a lot. I mean, you know, we see how it changes behavior when we're dealing with these nudges in all sorts of other walks of life, whether it's your banking or whether it's your credit rating, or, you know, just having that feedback prompts you
(17:13):
to think about: what questions should I be asking? You know, what is going on, who should I talk to? And it just creates an opportunity for the individual to be able to react.
Speaker 2 (17:26):
Yes, to information right then and there. A good friend always says this about college students: college students don't care until they care a lot, right, and then they don't care again. Yeah, and actually it's so true, and it's human nature, and you can be upset by it or you can embrace it. And actually I think the power of this technology is you can
(17:46):
capitalize the instant the caring happens and run with it. Right, and okay, this is the time for that conversation. Sometimes it's at one in the morning when they think, oh shit, I need to file the FAFSA. Okay, well, let's do it right now. And their desire and willingness to get things done doesn't always
(18:07):
fit in the nine-to-five business hours.
Speaker 1 (18:10):
Yeah, that's right.
Well, most things don't.
So now, you mentioned that moment you had with your team when GPT-4 came out. Yes. Have there been any other moments like that? Like, for example, when DeepSeek came out and just plummeted, I mean apparently plummeted, the cost of training these models overnight? Have you had any of those moments since then?
(18:31):
Speaker 2 (18:32):
You know, DeepSeek, because of its origins, is one that is extremely interesting. We haven't adopted it yet, but we've played with it. I don't think our customers and partners in higher ed would be comfortable with it in production. But I do think, you know, it's really fascinating, Eloy, because people talk about the expense of these things and they look at the headlines and,
(18:54):
you know, $500 billion is being spent. But actually it is incredibly encouraging that the open-source democratization of this technology is going to mean that it's probably not one superintelligent, large model that's going to rule them all, right, but inevitably millions of flowers will bloom. And, you
(19:16):
know, ultimately... I know Apple has been, you know, pilloried recently for their performance, but it is probably each of us having our own model on a device that is tuned, trained and private, rather than needing to make API calls to the cloud. So it's unclear where it's going.
(19:36):
But it could be a world in which, you know, everything is fashion, bell-bottom jeans, cloud versus on-prem. And I actually see a world in which there could be more happening on the premises, with a hosted GPU and a really smart model, for instance, on a college campus or elsewhere
(19:59):
, that you can own, manage and maintain. And the pendulum will keep swinging, but open source makes it possible that it's not just going to be a handful of big players that dominate the market and control all the data.
Speaker 1 (20:12):
So you've been working with educational institutions for some time now. Yeah, how are people reacting to the technology today? I mean, I think what I find is probably still more like 70-30: 70 percent either still wary or not understanding how the use
(20:32):
cases will roll out, and 30 percent sort of getting excited about the possibility. And I think, and of course part of it is, people in higher education, you know, they haven't had a lot of access or a lot of experience with these tools. They had the experience of having the tools work for them
(20:53):
from whatever provider of information they have, but they haven't figured out how to actually deploy it to make their jobs more efficient or be able to reach more people. So how do you talk about it when you get in front of educators?
Speaker 2 (21:09):
You know, it is a fine line, because at this conference especially, there's no shortage of new companies and, heck, I was one of them once, doing like the hot sizzle demo. Oh, you're the old guy now. Oh, yeah, I can see the gray hairs from it, and, you know, I've learned a little bit over the time. But I will say, the number one question I implore people to ask
(21:31):
is not the what, but the why of AI. And I think a lot of people are buying AI because they are afraid of missing out, or they see a hot demo and it blows them away because this thing is talking to them and, holy smokes. But I think one of the really interesting analogies I've heard...
(21:52):
well, actually, there are three questions I usually see people asking. The sort of least sophisticated is: what can AI automate? And the temptation is to find what it can do and run with it. But I will say that what it can do is not always what it should
(22:15):
do.
The most extreme example I've ever heard is, and I'm 100% confident that AI could do this with almost perfect precision, word for word, is conduct a wedding ceremony. You laugh. Yeah, no one in their right mind would want to be married by an
(22:36):
AI. And I say that, and there'll be a headline tomorrow about someone, I don't know, waiting for the robot to start marrying people. But it's silly because, you know, the marriage is not about, you know, giving a perfect sermon, right? The altar is about this sort of ineffable thing between humans, right? And so the job to
(22:57):
be done, I think you have to be careful about; it's not always about delivering word for word the right things. For instance, a teacher might be a lot like the AI clergy person, right? Like, is there more to the job of teaching that is beyond subject matter expertise but is also motivation?
Speaker 1 (23:13):
Right, absolutely. I mean, we all, having experience in education, think back to those teachers that really impacted our learning or our thinking about life or thinking about the world. And those are still things that are impactful: the way the information is delivered, the way that the human picks
(23:37):
up on how the learner is receiving the information and using that to either reinforce or re-explain. That's something that I think is still only a human element, that AI cannot replicate.
Speaker 2 (23:49):
I think you're on it, Eloy. And so really, the most sophisticated question I hear people ask is: what is the thing that AI is uniquely suited to do to amplify the outcomes we're looking for, and make sure we're not doing things that inadvertently widen achievement gaps, but actually maybe help close them in the process? And there are a ton of solutions.
(24:12):
They're not always obvious and, honestly, they're rarely sexy. Like, usually it's the mundane process flow where, you know, people are feeling like they're saying lather, rinse, repeat on email all the time. But I do think there is this weird promise that I would be wary of companies making, when they say, hey, we're going to
(24:34):
make it so students don't show up at your doorstep anymore because we're going to take care of everything.
One, the technology is not yet reliable enough to do it perfectly, so a human needs to be in the loop for supervision. But two, there are some things that only humans can do, and that is like a Black Mirror episode, if students stop showing up.
(24:55):
And if the technology does get that good, where it can do all teaching, learning and student support perfectly, we have more to worry about than student success, like we should be building the bunkers, right? Like, Skynet is coming. And so... I actually think hallucinations are endemic to this technology.
(25:16):
We're never, like... there's been hundreds of billions of dollars spent and there will be trillions more. We are not going to make these perfect, right? But it is the flaws of the humans plus the flaws of the machine, and how we intermingle the two to make both better, that is the magic. And that's just hard work. I wish there were a shortcut, but it's definitely not
(25:38):
set it and forget it.
Speaker 1 (25:39):
Yeah, no, I hear those themes a lot when I talk to educators. They're still waiting to see, you know, they're hearing a lot about the hallucinations, about, you know, random answers generated. And, of course, in the media you hear that, well, you know, some of our brightest minds still don't understand how this is working, right?
(26:00):
That's not comforting.
Speaker 2 (26:02):
No, I... if you have fear, uncertainty and doubt, that is a good thing. I have fear, uncertainty and doubt, and I am all over this. Heck, there was an article, I think it was a few days ago, from Anthropic that showed that the models will lie to you. They will lie about how they came to the conclusions they came to.
Speaker 1 (26:20):
Well, they're getting
to know humans pretty well.
Speaker 2 (26:22):
Yeah, they are a projection of our imaginations and they are emulating us very well. The chief product officer at OpenAI has been quoted saying LLMs dream internet documents. Which is, like... and the internet is rife with, you know, some crazy
(26:45):
dreams. Yeah, nightmares sometimes. But yeah, so what do we expect it will ever do?
Speaker 1 (26:51):
Yeah, you wonder, if, you know, aliens just started peering into what we do here, they'd wonder what the heck is wrong with us, because humans are capable of doing some crazy things.
Speaker 2 (27:00):
But the truth is, like, among the flaws there is tons of opportunity. And there is a weird world in which we hold technology to a higher standard than humans, because by no means are we perfect. We make plenty of mistakes, but for some reason we are less tolerant of a machine making them. I actually think the key is to have a growth mindset, not only
(27:24):
about the technology but the people who use it. You ever seen the movie Hidden Figures? Yeah, it's a great movie. And there's the women who are the computers, so they do the slide rules putting the man on the moon. And then the machine from IBM shows up on the desk, and they're looking at it like, is this thing coming for us?
(27:46):
And of course it's an inanimate object, but it's threatening. And eventually they sort of rally around it and they're like, we're going to master this technology because it is the unlock for our capabilities. You know, the same thing is happening right now in all of our lives, in a way that very rarely happens. We're used to making our muscles obsolete.
(28:06):
Now we're making parts of our minds obsolete, and it's threatening, but the surest way to have your job disrupted is to ignore the technology sitting on your desk right now. Right, no.
Speaker 1 (28:22):
So let me ask you this: you are of a generation that's growing up now, knee-deep in this new technology. My generation was a generation of, you know, being introduced to the Internet. How are you seeing these new generations coming through
(28:42):
higher education now thinking about, embracing, using the technology? I mean, is there anything that worries you, or is it just a matter of institutions being able to keep up with the demand that the learners are coming with, because many, many folks in this generation have grown up with this technology?
Speaker 2 (29:05):
You know, actually, the best example that gives me hope is how I use it with my own kids. Oftentimes it'll be, I'll be driving in the car and I set up my phone so it's one button, I don't have to distract myself, I can ask ChatGPT stuff. And they'll say, Papa, what's thunder
(29:26):
and lightning? Now, I know what thunder and lightning is, but I'd be hard-pressed to describe it to a first grader. And, hey, ChatGPT, how would you explain thunder and lightning to a first grader? And gosh darn it, it nails it. And this opportunity to have on-demand instruction in a way
(29:46):
that is so personalized, and now they're just like, hey, Papa, ask ChatGPT this other thing. Or my son is curious about the Roman Empire, and we're always asking it questions. I mean, we had this with Google, but it is Google on steroids, right. And the opportunity to have highly personalized education at
(30:08):
your fingertips, which is not exclusive to the classroom but is something that is infused into your life, in a way that our kids, my kids, are going to have a facility with it, like, you know, young people have with smartphones today, is incredibly exciting and is the frontier of what learning will
(30:29):
be, and I don't think we can understand the opportunity.
At the same time, we've seen that new technology doesn't always bring good things, right, and the opportunity for abuse, addiction and other problems also runs abeam. And so the question I ask myself is, how can we avoid stepping in the trap
(30:53):
that we stepped in with social media, and find a way for this technology not just to be relational but to also help bring people together?
And I don't have the answer, but I'm very curious. One of the things we've done is, all of our conversation is a three-party conversation: it's the student, the bot and the
(31:15):
advisor, always. And I think there's an opportunity to have conversations that broaden the circle. And what does a group discussion look like, facilitated by an AI, that could be part of a learning experience, and highly scalable and deeply engaging? You know, I think there is a frontier of possibility. Or if
(31:38):
I'm an instructor.
I think this makes instructors afraid. But there's also opportunity, which is AI grading. And it is scary, as an instructor, to feed it a rubric and say, grade all these papers. And, yes, okay, now I see the time savings as an instructor
(31:59):
.
But actually there's another unlock when things become instantaneous and almost free. The process of grading now is summative: how well did you do, right? But if we could migrate this to a truly formative assessment, such that, hey, the process of this is you're going to submit this paper seven times over seven days, and every time you're
(32:20):
going to get feedback.
Speaker 1 (32:21):
You're going to get
that feedback.
Speaker 2 (32:22):
And how can you?
Speaker 1 (32:23):
It's no different. I mean, I use, and this is not a commercial for Grammarly, I use Grammarly on a regular basis. Yes, you know, it edits my emails, it edits all my writing, and I've just gotten used to it and it helps me feel more empowered to improve my grammar.
Speaker 2 (32:41):
Yes, because it's constantly giving me feedback in the moment. And I actually think there's something there. I mean, it's hard for most folks to remember your freshman year expository writing class, yeah, but most of the time you have the opportunity to rewrite your paper, and a stunningly low percentage of students will do it to get a better grade, because it's threatening and it's hurtful to have someone
(33:03):
critique your writing. Oddly enough, it's actually less hurtful to have an AI do it than an instructor. And so maybe there's unlocks that we can find, which emerge when things become free, abundant and instantaneous, that may not be obvious initially.
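As a concrete illustration of the summative-to-formative shift discussed above, here is a rough, hypothetical sketch of rubric-grounded feedback on each resubmission of a draft. It assumes the OpenAI Python client; the model name, rubric and prompts are illustrative placeholders, not Mainstay's product or the exact workflow the guests describe.

```python
# Hypothetical sketch of a formative feedback loop: each time a student resubmits a
# draft, an LLM returns rubric-grounded feedback instead of a one-time summative grade.
# Assumes the OpenAI Python client; rubric, prompts and model choice are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = """
1. A clear thesis is stated in the first paragraph.
2. Each claim is supported by evidence.
3. The conclusion ties back to the thesis.
"""

def formative_feedback(draft: str) -> str:
    """Return specific, encouraging feedback against the rubric, with no letter grade."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "You are a supportive writing coach. Give specific, encouraging "
                        "feedback keyed to the rubric. Do not assign a grade."},
            {"role": "user",
             "content": f"Rubric:\n{RUBRIC}\n\nStudent draft:\n{draft}"},
        ],
    )
    return response.choices[0].message.content

# A student resubmitting over seven days would simply call formative_feedback(new_draft)
# on each version, keeping the assessment formative until a final summative grade.
```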
Speaker 1 (33:20):
Right. Well, let me ask you a couple of final questions as we begin to wrap up. Yeah, sure. So, based on what I'm hearing from you, you feel bullish on AI really unlocking the power of personalized learning. Yeah. What are some of the things that you are seeing on
(33:42):
the horizon that make you bullish about AI helping more students? Because the way I look at it is personalizing the learning and the experience for each individual. Because one of the greatest challenges I've found, as a learner myself, watching my kids and spending time in community colleges or the
(34:05):
University of California, is every learner is different, and we build the model around, sort of, you know, that lowest common denominator. Yeah, and not everybody fits into the mold, right, and so you can see people struggling with the modalities that we are serving them with. And what I'm hopeful for is that AI
(34:28):
can help create that personalized experience and keep more learners engaged.
Speaker 2 (34:52):
It's happening to us for sure, and the pace of change is overwhelming. But I can say, everyone is saying it's going to revolutionize education. Well, no, it ain't going to revolutionize education unless you are with it. And so we do have... it is a collective action problem. We've got to get people using the tools.
And I will say that, while I have hope, human nature typically is to sort of copy and paste and move incrementally,
(35:24):
and actually I think this is one of these moments where the question is: is AI going to make an incremental improvement in efficiency and grading, or is it going to fundamentally transform from summative to formative assessment? You know, it is actually people's imagination, and so far there has been a failure of imagination to really embrace it
(35:45):
and do breakthrough innovation. And I don't know what the thing is that really galvanizes folks to do it, but I can say the one exciting thing is that it actually puts the decision-making and innovation at the edge, with the people on the front lines, who are the ones that are going to have the power to make the innovative solutions that really, ultimately,
(36:09):
transform the system.
It's not a top-down innovation, it is a bottom-up one, and I don't know that we have sufficiently unlocked the imagination of people on the front lines, of teachers, learners. But if I know anything, I have belief in human nature, and there are going to be a small number of people that make
(36:29):
breakthroughs that impact us all, and we're in, like, inning number one, right. And there will be a small number of people that do things that are very harmful for a lot of people as well. And so the key is, like, how can we know what works and what doesn't, to stop the things that are failing and double down on the things that are working?
Speaker 1 (36:51):
What's next for Mainstay? Where do you see your company and your technology going from here?
Speaker 2 (36:57):
You know, we talked a little bit about where the technology can go, but I look at us as a canvas for phenomenal educators. I say we are not a robot, we are an Iron Man suit, and it is really the enablement of the operators who are going to do
(37:19):
things that I've never dreamed of. But I think the big one is, how can we create any gen AI conversation, whether it's about career coaching, choosing a major, things of complexity that we could never have dreamed of, and not wait for someone to find it, but push it out to them at their fingertips? I think the potential is greater than I can imagine, and I
(37:40):
need a lot more other people to match it with me. So it is not my decision. It is me creating the conditions for a lot more people to innovate. So if you're interested, shoot me an email, and I promise it'll be me who responds. Where do they find more information about Mainstay? Oh, mainstay.com, or Andrew at Mainstay is my email address. I always want to talk more. All right, Eloy, this is awesome.
Speaker 1 (38:01):
Well, Drew, thanks for joining us here on the Rant Podcast. Likewise. You've been listening to my conversation with Andrew Magliozzi. He is the CEO of Mainstay. We've been having a great conversation about AI and the tools that are being developed to help more students succeed in post-secondary education. Thanks for joining us, everybody, and we will be back
(38:23):
with you soon with more episodes. If you're following us on YouTube, hit subscribe, and if you're listening to us on your audio podcast, continue to follow us on your favorite podcast platform. Thanks for joining us, everybody, and we'll see you soon.