Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome to the Changing State of Talent Acquisition, where your hosts, Graham Thornton and Martin Credd, share their unfiltered takes on what's happening in the world of talent acquisition today. Each week brings new guests who share their stories on the tools, trends and technologies currently impacting the changing state of talent acquisition.
Have feedback or want to join the show?
(00:21):
Head on over to changestate.io. And now on to this week's episode.
Speaker 2 (00:27):
All right, and we're back with another episode of the Changing State of Talent Acquisition Podcast. Super excited for our next guest, Alex Swartzel from Jobs for the Future. Alex, welcome to the show.
Speaker 3 (00:38):
Thank you so much.
It's wonderful to be with you.
Speaker 2 (00:40):
Yeah, we'd love to have you start off with an easy one. We'd love for you to share a little bit more about your journey to becoming the founding leader of Jobs for the Future's JFF Labs Insights Practice.
Speaker 3 (00:51):
Sure, and I'll start by saying a little bit more about Jobs for the Future, for listeners who may not know us yet. We are a national nonprofit organization that focuses on transforming education and workforce systems across the United States so that more people, especially people who have historically faced barriers to economic advancement, can
(01:14):
get the training and education that they need to attain a quality job and a career that allows them to sustain themselves in their lives.
And that mission has been super compelling to me in my professional career. I've had one of those careers that's spanned politics and public service. I'm based in Washington, DC. I've spent some time in trade associations. I have my MBA, but within the last 10 years or so, I spent
(01:38):
time at Teach for America in the DC region, getting a really firsthand look at just how critical education systems are in making sure that all of us are prepared to thrive in our society and to help make America more economically competitive, and just how important it is when we can make sure that those
(01:58):
supports are available to everybody.
Of course, talent is equally distributed, but too often opportunity is not, and the centrality of opportunities for work and for quality jobs felt really critical to me as a key component of economic mobility and opportunity, with line of sight across really every different aspect of the spaces that all of us move through, whether that's from K through 12 to
(02:28):
potentially post-secondary experiences or career and technical experiences (apprenticeships, for example) and then into the world of work as well.
So this felt like a terrific place, and JFF Labs is a very special place. Much younger than JFF as an organization, we're only about six years old, and we are really designed to be the research and development arm of JFF, looking far into the future to
(02:50):
understand how emerging technologies and other innovative models are poised to change the way that we work, the way we earn and the way that we learn. And so we've had a chance to dive into technologies like artificial intelligence, which we'll spend some time on today, as well as things like virtual reality, and we're starting to explore quantum computing, for example, just knowing how fast technology
(03:12):
is developing and how much of an impact that potentially has on all of us who are living our lives and working day to day. So I feel both deeply connected to this mission personally, as somebody who cares profoundly about opportunity for all, but it also is just an incredibly exciting place to be, with extraordinary colleagues all across the landscape.
Speaker 2 (03:35):
Yeah, I think that's great and, like you know, we're big fans of JFF. You know, you don't know this, but Marty certainly does: my parents are both public school teachers on the South Side of Chicago, for, you know, 50-some-odd years, and I'm very passionate about the education space in general. So, you know, thrilled to have you on. One piece that I'd love to have you define, you know, as we get started here: you mentioned quality
(03:57):
jobs, which I know is something that JFF talks about a lot. What do we mean by quality jobs?
Speaker 3 (04:05):
Yeah, it's a great question, and at bottom I think of a quality job as the job that we all probably want to have. It's certainly a job that pays well enough to sustain people and families, potentially, where they have families. But it's beyond that, and beyond even making sure that jobs bring benefits with them, for instance, that also allow us to support our lives, but that also create opportunities for
(04:27):
advancement upwards in our career path, that give us the flexibility and autonomy that we need. It is a pretty holistic structure, and far too often those kinds of jobs are not available to all of us, which is exactly the kind of challenge that JFF has set for itself. We have a North Star vision, that you may know of already, Graham, that in the next 10 years, 75 million Americans
(04:50):
should be working in quality jobs, which, surprisingly, is about double the number who are in quality jobs today. So it's a major focus for us, and one that allows for a lot of different dimensions, because for that to be successful, we need our education and training systems to work at their highest potential. You know, we need jobs to be created, you know, through a
(05:11):
variety of ways, and we need to make sure that people are supported to prepare for and navigate into those jobs. So it's a big mission, and it allows us to think really comprehensively and holistically about both what people need and what our systems need.
Speaker 4 (05:24):
Yeah, that's very interesting, Alex, and welcome to the show. You know, it's quite a treat actually to talk to somebody who's been thinking about these things for a while, especially as we approach, or we're actually already in, this current moment where it does seem like there are a lot of different factors that are dramatically affecting the future of work. Obviously, what's on everyone's mind these days is artificial
(05:45):
intelligence. I think, Graham, is it true that every episode we've done this year has touched on artificial intelligence in some way? I think that might be true. And in some way, it feels like we've been talking about AI for a while, but it does seem like something has changed dramatically recently. And you did some research recently, Alex, where one of the stats that really popped out at us was that AI usage at work jumped in the
(06:08):
last year from 8% to 35%, which is, what, more than a fourfold increase? And I just wondered if you could comment a little bit on that stat. I mean, is it as simple as that's when these large language models like ChatGPT came into public awareness, or what are the forces that are shaping such a dramatic shift in such a short amount of time?
Speaker 3 (06:28):
Yeah, it's a great question. I think it's probably a lot of things. So those are two different surveys that we did, one in June 2023, so actually after ChatGPT had been around for about six months, to the most recent one in November 2024. And so it is a pretty significant jump in that period of time, and I suspect it's probably a combination of things. One is that there is surely, you know, an increased level of
(06:52):
awareness amongst people (we studied that as well), and not just awareness but sort of depth of understanding, and opportunities to try these tools as they've proliferated and improved in quality. One of the things that has always been so striking to me about this Cambrian explosion in artificial intelligence that we've seen here in the last, you know, approximately two
(07:13):
years is that it's essentially a B2C technology. It is available to all of us in various forms, obviously not always in the most sophisticated forms that require more payment to use, for example, but anybody can access these tools and play around with them, you know, even just on a mobile phone, which is really exciting. And so I think, and this is also borne out in some of the survey
(07:35):
research that we'll probably talk about, I think you're seeing the human spirit and human curiosity at play, with people, you know, hearing about AI on the news, through social media, for instance, from their friends, maybe from their employers, and getting a sense of, all right, let me try this out.
That's on the individual side.
We also are seeing constantly that there is increased pressure
(07:56):
on businesses to adopt and use artificial intelligence. There was a study from McKinsey just in the last month that found that over 75% of their respondents are saying that their organizations are using AI in at least one business function, which has been a big concern, at least initially. And so, you know, where somebody might potentially have tried out
(08:30):
an AI tool or looked at it in the early days and thought, yeah, I can sort of see the potential, but I'm not sure that I can fully get my arms around it, you know, that same person coming back to some of these same models today is probably going to be blown away by the improvement. That's especially true, I think, for some of the image generation models. But it is a really significant shift that I think has a lot of
(08:52):
different parents to it, but that I think increases the urgency, both for businesses, for education and workforce institutions, and for learners and workers as well, to get their arms around what this means for them.
Speaker 4 (09:05):
Sure, well, that certainly helps to have that unpacked. It was certainly much bigger than my initial hypothesis that it was the mainstreaming of ChatGPT. That seems like it's certainly part of it, but, as you point out, there are a lot of vectors pointing in that same direction. Well, I wanted to ask you about another sort of stat that came out, or maybe the stat is less relevant than, I think, the
(09:26):
distinction that you make between what you're calling individual initiative and institutional support for AI, as it relates to using AI in jobs. And we've seen in other trend reports that business leaders, HR leaders in particular, are
(09:47):
looking at: when is AI coming? They're waiting for it, they're anticipating it. And I think some of the leading thinkers that we've talked to are making the point, which maybe is what you're also saying here, that it's kind of already here; that individual employees, as you say, it's a B2C product, a lot of people can access it, and what's to stop them from using it in their job? Is that what you mean when you say this difference between
(10:07):
institutional support and individual support? Or maybe you could just share your thoughts a little more about that?
Speaker 3 (10:12):
Yeah, I think that's some of it. And to a certain degree, you know, a technology like this, it is, of course, B2C, but it is also very much B2B, increasingly, and it shows up in different ways in those different modalities, right? So if I'm an individual, whether I have a job or not, and I can access this tool myself, maybe I'm using it in part for
(10:33):
personal things, or in part I can see opportunities to use it at my job. If I am any institutional entity or any business, and I am thinking about a formal workplace adoption of these kinds of tools, that kicks off, as you all know well, a whole cascade of decisions that that entity has to make. What exactly is the tool that we're going to
(10:55):
use? What are the cost-benefit analyses of different structures? What policies will surround it? How do we think about data governance and privacy? What are the use cases that we want to explore? How are we training and supporting our people, for instance? So the ability for an institution to make a decision about a use case, as contrasted with an individual person's
(11:16):
ability to just pick up or sign into a webpage and start to use it, those barriers to entry are very, very different. And so, on some level, it doesn't surprise me that we're starting to see these gaps between how people are using it of their own volition, and that's true, I think, even within work. We're seeing distinctions between people
(11:37):
saying, I'm using it at work, but not because my employer is telling me to, I'm just using it at work (I've seen the term BYOAI float around out there), versus people who are saying, yes, you know, I'm using it in a school setting or I'm using it at work, and it's because it's being intentionally used in the classroom or at work. But I actually think that there's real opportunity here, and this is part of what we've been talking about coming out of
(11:58):
these survey results, because now any institution that's thinking about adopting AI in some form (and probably all of them should) has a whole universe of testers and experimenters and brainstormers within your four walls, right? All of the people that are part of your organization and culture who are already thinking in these terms, know their jobs and their work
(12:20):
really well, and probably already have ideas. So for us, I think it points to some really exciting possibilities for how organizations are involving their own stakeholders in the deployment and use of AI and the decisions that they make around it.
Speaker 4 (12:34):
Yeah, for sure. Well, your research is full of such provocative stats, but I just want to call out a couple here. So, in reference to the point we were just mentioning, I think you found that 60% of employees are using AI for self-directed learning, as opposed to the institution or the employer providing direct support or access to a tool. And separately,
(12:55):
and related, of course, you found that only 16% of people had access to an employer- or school-provided AI tool. So, I mean, I don't know if you can just take the 60 minus the 16 and have that be meaningful, because I assume those came from different places, but nonetheless it highlights a major gap here. And I guess the question that comes to my mind when I see that gap is: who are the people who are out there
(13:18):
accessing these tools on their own and taking the initiative to do that at all? And my theory might be that that might be one subtle way that certain groups are getting ahead of other groups in terms of advancement opportunities, sort of underneath the radar, because they've somehow, based on their
(13:39):
education, their life experience, come to a place where they have started using these tools. Other people are not, and there's an inherent advantage in that. I just wondered if you have any response to that, or any reaction. I mean, do you think about it in a similar way?
Speaker 3 (13:51):
Yeah, I think that's exactly right. It is its own form of digital divide when some of us have access to really high-quality tools and some of us don't, you know, on top of the fact that the digital divide still exists in this country; not all of us have access even to high-speed internet. And so it's an especially important question to ask. I don't think our data parses in every way that we would want it
(14:15):
to, you know, who are the people who do and do not have varying kinds of access, especially to paid tools, which is not always, but can sometimes be, a proxy for quality. But, you know, we have seen some interesting things. One that really stands out is the degree to which people of color are using these tools, which is higher than a lot of other demographic groups, which is, you know, I don't think we
(14:37):
have all of the reasons for that, but it's just really striking for us to see. But I think there are a lot of implications when we're starting to see these kinds of gaps, you know, whether that is the lack of early exposure to technology like AI as its adoption grows, especially when people are coming out of schools, for instance, and entering the workforce, where they may be
(14:58):
expected to be familiar with AI, to be AI-literate, but they haven't necessarily had the experience with the tools in an educational setting.
I think that will be a growing concern for employers, and that barrier could exist for a lot of different reasons, you know, whether a tool, or a school rather, is under-resourced and so they can't provide access to tools.
(15:20):
We've certainly heard that, including from some community colleges who have talked about the cost of some of the licenses, or the cost of, you know, as you all know, what's called compute; if they're trying to build out actual facilities for people to build and experiment with AI tools on their campuses, that gets very expensive very fast. And so the resource considerations, you know, which are
(15:42):
not always the same across the country, you know, are very, very real here. And we're also seeing continued mixed messages in some cases, I would say, particularly within education, where there are certainly some districts or schools, leaders or particular educators, who are all the way in on this technology, building it themselves, experimenting with
(16:04):
it, encouraging new forms of pedagogy, encouraging students to use it. At the same time, there are a lot of messages that using AI is cheating, that using AI raises academic concerns in some form, and we've heard anecdotally from students that that can potentially be a real concern. So it really is, I think, a complex landscape, where everybody is figuring this
(16:27):
out for themselves and trying to navigate a new landscape. And so the more, I think, that institutions that are working with people, especially learners and workers, can have a way of thinking about AI that leans more into its potential as a transformative technology that will be relevant for them in learning, relevant for them in work, as a new foundational digital skill
(16:49):
that we need to develop, that, I think, will go a long way towards addressing some of the potential gaps that we could see emerge here.
Speaker 2 (16:56):
Yeah, so, Alex, you know, on that: so we talked a lot about, you know, educational institutions and districts and, like, you know, access to AI tech tools, right? And I'm just curious, like, you know, how do you think about that same lens from, you know, an employer or an organization? So on one level, like, a community college system might not have the same, you know, level of access that
(17:18):
you know, another district in, you know, the Bay Area might. But, like, how do you think about that from organizations, and, you know, what are some of the risks that, you know, organizations face, you know, from that same lens of not investing in AI tools or not investing in AI training? Is that contributing to workplace inequity? Talk to us a little bit more about the employer side.
Speaker 3 (17:39):
Yeah, potentially. And, on the one hand, on the risk mitigation side, especially if we're in an environment where people are bringing their own AI tools to work, you know, we've already seen companies send really clear messages about making sure that you're thoughtful about the data that you input into tools and not sharing proprietary data, or making sure that the models
(17:59):
are not being trained on your data. So making sure that people have foundational literacy to understand how the model works and what risks might show up, so that they can protect themselves and their companies as well, is maybe the foundational element in terms of risk mitigation. But there are also extraordinary opportunities that I think organizations miss out on when they're not engaging with their
(18:22):
employees in this way.
Just a couple that come to mind for me. One, it's a signal of investment in your workforce, recognizing just how transformative an impact AI will have on the future. We see time and time again, when we work directly with workers and companies helping implement new technologies,
(18:42):
the workers come back and say, it is awesome that my employer trusts me with this exciting new technology and, you know, helps me think about ways that I can do my job more effectively. That's really a signal of strong employee engagement. So that's an extraordinary opportunity, certainly as businesses themselves are seeking to adopt AI, as we talked about a minute ago. You know, more and more businesses are telling us that
(19:04):
their employees are the ones that are bringing them ideas about how to use AI, and so when you can have broad-based AI literacy training, you create more opportunities for your workforce to show up and add value in that way.
And one thing that has felt really important to us, as we've dug into this space a little bit, is to make sure that AI literacy training is not just tool-specific but is really,
(19:27):
truly broad-based. And maybe one good analogy here is the idea of digital citizenship. You know, when we think now about supporting people in understanding how to use the internet, there are all kinds of layers to that, in terms of how to understand if a source is likely to be an accurate source, how to be safe online,
(19:48):
for instance. And I suspect that, because AI is really increasingly a general-purpose technology, it's going to be the water that we swim in. The more that we can orient training around those kinds of foundational questions, including ethics, including responsible use, all of these things, that's going to prepare people much more effectively than here's how to use ChatGPT, or here's how to use Microsoft
(20:10):
Copilot, where the only thing that you know is how to use this one tool. Because we also know that AI is going to develop extraordinarily quickly. That's already happening, including with the growth of AI agents, which seems to be the topic of the year, this year in particular. [portion of transcript unavailable] lives and at work.
(20:58):
Very true, well you know.
Speaker 4 (20:59):
I wonder if we could get into some specifics, or if you could just provide some specific examples of AI-based skills, because I know this is a huge priority. I think you said, in your research, 70% of people of color feel that they need to gain new skills related to AI, and I think we've all seen stats like this and we're all on board. But part of the challenge is: what does that actually mean?
(21:19):
And maybe you don't know, maybe no one knows at this point, but can you give us some examples of specific AI-related skills that people are trying to cultivate, and how are these different from what we might call traditional skills?
Speaker 3 (21:31):
Yeah, that's such an
important question and I want to
take it a little bit beyondwhat we might think of as AI
skills quote unquote because, inaddition to understanding as I
was just describing how thetechnology works, fundamentally
understanding how to use it mosteffectively.
We're seeing a lot of even overthe last couple of years, a lot
of micro iterations of that.
(21:52):
So are we training people toprompt effectively?
Are we training people how tobuild so-called GPTs or agents,
you know?
Are we training people toreally deeply understand what
data sources AI is tapping intoand what it's not, either
generally or within the contextof their job?
But what fascinates us andfeels even more important than
AI skills by themselves is theways in which AI will transform
(22:16):
jobs as it starts to percolateacross the workforce, and what
new skills will be activated bythe changing job descriptions
that will result.
The changing job descriptionsthat will result.
We actually released someresearch about this in late 2023
, called the AI-Ready Workforce,which mapped out this idea that
(22:36):
AI and I think this language isbecoming more and more
commonplace, but that AI is notjust something that can
potentially automate away tasksor skills.
Ai also has an augmentingeffect.
There are some types of skillsor activities where the use of
AI makes the humans better attheir jobs, better at
undertaking those kinds ofskills.
(22:58):
Great example of this is whatwe think of as the soft skills
human connection, communication,collaboration, the ability to
work with teams.
So I'll give you one example, you know, where people, for instance, who are managers might need to have a coaching conversation with their direct report. You can talk to a large language model, you know,
(23:19):
give it a little bit of, obviously, privacy-shielded information about the conversation that you want to have, and have it work with you as a coach to set up the way that you might need to have the conversation. You then, as the manager, still need to be the one to engage with your colleague, but you probably are able to have a more successful conversation, relying at least in part on the
(23:41):
coaching that you receive from the AI, than you might have done if you were just working, you know, by yourself. Unlike, for example, on the other side of the equation, a skill like coding, which we're already starting to see generative AI models increasingly able to do. And so a software developer, you know, who might be very skilled in coding, is probably still going to have to spend some time doing
(24:03):
quality assurance, for instance, but they may not need to spend as much of their own time coding. And so what that means is that jobs will shift over time so that, for example, if you're a software developer and you're now spending less of your time coding, more of your time or your creative energy is freed up to be able to conduct needs assessments, for example, to talk to your colleagues, to talk to your clients about the kind
(24:26):
of software solutions that you're helping build for them,
(24:49):
and to be able to deliver higher value as a part of those interactions through the use of... [portion of transcript unavailable] ...ability and the skill of being able to learn how to learn are becoming increasingly important.
Those were also some of the same skills that popped out in the survey that we've been talking about. When we ask workers directly, that's what they say, and it's not huge numbers yet, it's maybe one in five people still, but they're saying, I'm increasingly having to demonstrate these new kinds of
(25:10):
skills because AI is working with me in new and exciting ways. And we've also seen, and others have as well, that, as you referred to, more traditional skills that might have been valued in the past, the kinds of technical skills like, possibly, a specific coding language, for example, increasingly have a shortened
(25:31):
half-life. And so maybe that skill is relevant to you for six months or a year, but then something has changed and you need to develop a new skill. So in our own research, we're seeing really rapid turnover of these kinds of technical or digital skills, but the enduring power and potential of the human skills and highly complex cognitive skills, like problem solving, for instance, you know,
(25:53):
will increasingly come to the fore. And so, I think, anytime we think about the critical skills around AI: you know, yes, it's increasingly important for all of us to become users of, and creators of, and builders with, AI, but these additional baskets of skills are also going to be even more essential, because they will respond to the
(26:14):
ways in which jobs continue to adapt.
Speaker 2 (26:17):
Yeah, I think that's great, and I think, you know, there are a lot of industry examples of, like, hey, where we've seen quicker pushes or, you know, faster adoption of AI. You know, one area that, you know, I think it's great for us to double-click into is, you know, probably just AI in education. You know, you talked about using AI, you know, AI to learn, right? And, you know, I think one of the stats coming out of your study
(26:37):
was also, you know, close to 60% of learners are, you know, using AI weekly in education. So, you know, I think it's safe to say, like, AI is transforming the learning experience quite a bit. But, that said, in your research, Alex, what are some of the positives and negatives with this integration of AI into the educational space and how people learn?
Speaker 3 (26:58):
Yeah, for sure. And I'll start by saying that, you know, at least in my view, the educational space is one of the most complex in terms of AI adoption, just because of all of the different ways in which this shows up. We've talked about the questions around, you know, does AI have negative implications for academic integrity, for instance. We've explored the idea that it can potentially be a
(27:20):
pedagogical tool, and we can talk more about that, with teachers using it day in and day out to support students in the classroom. We see use cases around AI for student supports, helping career counselors, for instance, whether it is doing some of the routine work for them so that they can spend more time using those human skills, engaging and connecting more deeply with the learners that they're supporting.
(27:40):
We hear about administrators who are really excited to use AI for some of the same kinds of business use cases that a business would use it for, to streamline processes, for instance. So it is a highly, highly complex space, and, of course, education is navigating this within the confines of their local environment and of policy and all of these different considerations. And so, you know, we saw from our own survey that learners are
(28:03):
using it, again, whether at their own impetus or at the direction of their educational institution. They're using it just to learn, to help them understand complex topics. They're using it for exam preparation, to help them research, to get access to different kinds of resources and supports, for instance. They are using it for career guidance. And I
(28:26):
think, you know, one of the key things that we, and many others that are working in this space, are seeing is just that the how of that really matters.
I think it is very tempting for any of us to just ask the AI the question and have it give us the answer. But increasingly you're starting to see not just the platforms themselves, but also new modalities or tools that are built on top of
(28:48):
generative AI platforms, integrating more Socratic models, where what the platform is doing is asking you questions and drawing you out, helping you really drill into areas where your learning needs to be shored up, for instance, which is really exciting. But we're also hearing that students are looking for that as well. I've talked to some college students who say, I really hate
(29:09):
it when it just gives me the answer. You know, I wanted it to actually help me learn more deeply. And so, you know, we think that the considerations there, in terms of how these platforms are designed as they're interacting with students, as well as the kind of training and literacy support, as we've been talking about, that students have, so
(29:29):
that they know that those modes are available to them, that actually you can set it up so that it asks you questions and you can learn more deeply, you know, that really underscores the importance of making sure that everybody has the supports that they need to use these platforms.
The other thing that was reallystriking to me in the survey
results was that we asked aquestion about how the use of AI
in the classroom or in aneducational context could impact
(29:51):
either student-teacherrelationships or students'
relationships with their peers,and what we found was really
kind of, you know, I think,positive leaning, but a little
bit of a mixed bag in ways thatare really interesting.
So when we asked you know howmuch time are you spending with
your teacher?
You know how effective is thecommunication that you have with
(30:11):
your teacher?
Is AI making that work moreeffectively or less effectively?
We saw almost equal numbers sayyes or no, about 16, 15% on
each side.
We also saw or asked you know,are you spending more time with
your peers?
Are you spending less time withyour peers as a result of AI
interaction.
Same numbers on both sides.
(30:31):
About one in five people said it's allowing me to spend more
time with my peers. About one in five people said it was less.
And so I think that speaks to really some of the challenges
about how AI shows up in the classroom, and the ways in which
it's being used can either contribute to pedagogical outcomes
as well as student support outcomes or hold them back.
(30:53):
And you know, for us, on some level, anytime you're using new
technology, it's always exciting to think that the technology
is the whole solution, but inevitably the real answer is the
combination of technology with the fundamentals and the things
that will always be true. So there is no substitute for good
pedagogy here, for ways of interacting with students that
allow for them to engage
(31:15):
personally, to be drawn out, to develop human connections with
their peers and their teachers. But the fact that we're
seeing, you know, both sides of that equation show up in this
early data was really striking to us.
Speaker 4 (31:27):
Yeah, well, there's a
lot of different directions we could take this. I think one of
the more interesting angles to this, perhaps, is the idea that
this has happened so quickly, and I think you point out that
different educational institutions have different policies.
They could be very different policies in terms of what is
considered cheating or not cheating, or, beyond that, you
(31:48):
know, what are the appropriate rules of engagement for
utilizing AI in an educational setting. I wonder if you could
just speak a little bit about that. And do you see a future
where there needs to be some sort of, you know, I don't know
if it's a government body, a nonprofit, some organization that
comes up with a set of rules of engagement that we can apply
across educational institutions?
(32:10):
Because it seems like this could be an obvious source of
inequity as well. If you just chose a school by chance that
has a very strict policy with regard to AI, you may have a lot
less access to AI and emerge from college with a lot less of
an understanding of it versus some other institution.
Speaker 3 (32:27):
Yeah, absolutely.
And on that last point, I do think that this is an important
question for parents to answer, for students to answer. You
know, not all of us have the same kinds of choice in selecting
educational institutions, but, even as we're understanding the
places that we're headed, to ask these kinds of questions: how
does this school use AI? What are its policies?
(32:47):
You know, is it a universal policy across the school, or is it
different from classroom to classroom? Which, you know, we've
still seen over the last several months and, hopefully, is
changing, but, you know, sometimes it really just depends on
which subject you're in at any given moment.
(33:18):
But there are, I think, really important and exciting efforts
underway to do exactly what you just described, such as the AI
Alliance, which has developed something that they've called
the Safe Benchmarks Framework, which creates a roadmap and a
set of issues around what it's going to take to make sure that
the AI ecosystem is really thoughtful and built
(33:38):
in a way that is equitable for students as well as others.
And so there are four components of that. Safety, which is
essentially focusing on data privacy and managing risks to
cybersecurity. Accountability, which is making sure that there
are standards in place and that all parties are clear on who
holds accountability for the benchmarks that will be used to
(34:00):
evaluate the degree of success of the solutions, and that it's
abiding by policies and regulations where they exist. Fairness
and transparency, which is really understanding how these
solutions are available for everybody and ensuring that there
are some guidelines to ensure that they're of quality. And
efficacy, which is to make sure that there are student
(34:21):
outcomes that result from these.
So these kinds of high-level standards and benchmarks are
already well underway across the space, led by some
extraordinary leaders in this ecosystem. Those are some that I
would pay close attention to, and there are increasingly more
and more becoming available at the state and local level as
well.
Speaker 2 (34:39):
Yeah, that's great.
Well, one last question, I suppose, because I think we could
keep going through this research for hours, Alex. What, if
anything, surprised you most, you know, throughout the course
of this research, and has it influenced your own thinking on
AI's role in the future of work?
Speaker 3 (34:55):
Yeah, I think it
really is just how extensive people's curiosity about this
topic is and the nuanced views that they hold about it. You
know, when we really were digging into this work from the very
beginning, it felt as though the conversation on AI was very
binary, that it's either going to save us all or it's going to
be an extinction-level event, and there's still some of that
(35:16):
popping up. But I think when you talk to people, as we did
through this survey and we've done through focus groups and
others, you know, people have a nuanced view. They say, well,
you know, on the one hand, this can help me be successful in
school and work, but I worry a little bit about its impact on
jobs. You know, especially, and this is anecdotal, but we've
heard from young people who are concerned with the climate
impacts of AI as energy use
(35:39):
grows and grows.
Hopefully that will normalize over time. But it is a real
question that folks are asking, even to the point where
they're starting to think through, you know, in some cases,
which model do I use? Is there a model that's more or less,
you know, energy draining, for instance, and how can I make
that decision if I'm trying to be attentive to climate
impacts?
You know, people are so thoughtful about this.
(35:59):
It extends as well to how their data is showing up and being
used, especially amongst populations where there's a higher
level of awareness that AI training data may not necessarily
fully represent them or their experiences. So people are
increasingly savvy about these tools and
(36:19):
really thinking in nuanced ways about how they show up for
themselves in their lives, and I think that is an
extraordinarily powerful thing, because it creates a really
strong foundation that all of us can build on when it comes to
asking the question that really is, and will be, at the heart
of JFF's work on this moving forward, which is: how does AI
make us all better off?
(36:40):
In our view, the conversation about AI can so often be pulled
in this direction of which model is bigger or faster or
better? How many jobs are being created or lost? Is there some
kind of, broadly speaking, economic impact? And we always want
to bring it back to this simple question of
(37:01):
are we all better off as a result of AI? Are we able to access
quality jobs? Are we able to pursue opportunities for
entrepreneurship, which was another thing that really stood
out to us from this survey? Are we able to sustain our
livelihoods? And if we're able to do that in a fair way
through the use of AI, then I think we can count it as a
success.
(37:21):
And if we're not, we have to ask some pretty big questions as
a society. And so I think that the ground is really ripe for
us to ask those kinds of questions and to be met with a
community of learners and workers that really, you know, wants
to dive in and engage.
Speaker 2 (37:36):
Yeah, I think that's
great, and, like, you know, I hope our audience kind of enters
their path forward with AI, you know, with an open mind. But I
do like that question: hey, are we better off? Right? And, you
know, it's not just AI for AI's sake, automating for
automating's sake; there's a lot more nuance to it. And, you
know, super helpful having you walk through the report, and,
you know, your perspective too, Alex.
(37:57):
Well, the last question is the easiest. Where can people learn
a little bit more about you online?
Speaker 3 (38:23):
We post our research and publications on our website, and we
try to share about monthly through LinkedIn, both what we're
seeing and hearing out in the space, as well as opportunities
that are popping up to collaborate with us. We are very, very
eager to both hear about how all of you are thinking about
these questions and to work together where the opportunity
presents itself. So please do look us up and reach out. We
would love to connect.
Speaker 2 (38:41):
Yeah, a lot of great
events and sponsored programs through Jobs for the Future in
general too, so love having you on today, Alex. Really
appreciate you joining us.
Speaker 3 (38:49):
Thank you both so
much. It's great to be with you.
Speaker 2 (38:53):
All right, thanks for tuning in. As always, head on over to
changestateio or shoot us a note on all the social media. We'd
love to hear from you, and we'll check you guys next week.