
January 21, 2025 · 50 mins

In 2024, artificial intelligence dominated conversations across the globe, from copyright lawsuits against AI art generators to developing legislation for artificial intelligence regulation. On this episode of Access to Excellence, President Gregory Washington and George Mason's inaugural vice president and chief AI officer Amarda Shehu discuss the research possibilities of AI and the role of higher education in AI training and development.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:04):
Trailblazers in research; innovators in technology; and those who simply have a good story: all make up the fabric that is George Mason University. Taking on the grand challenges that face our students, graduates, and higher education is our mission and our passion. Hosted by Mason President Gregory Washington, this is the Access to Excellence podcast.

(00:26):
In 2024, artificial intelligence dominated conversations across the globe, from questions about the environmental impacts of ChatGPT, to copyright lawsuits against AI art generators, to developing legislation for artificial intelligence regulation. As we enter this new frontier of technological advancement and intelligence,

(00:51):
George Mason is positioning itself to be a pioneer in the field. Our guest today is an exemplar of that. Professor Amarda Shehu is George Mason's inaugural vice president and chief AI officer. She is also a professor in the Department of Computer Science in the College of

(01:13):
Engineering and Computing, where she also serves as an associate dean for AI innovation. She is recognized in various scientific communities for her research and thought leadership in artificial intelligence, and she is a member of multiple task forces advancing AI in security, AI standards, and AI governance and policy.

(01:37):
Amarda, welcome to the show.
Happy to be here.
Well, this topic is one that has essentially dominated scientific and engineering fields, and the public at large, over the last year and a half or so since the emergence, the public emergence, of ChatGPT.

(01:59):
Right.
And so George Mason is addressing some of the world's most pressing issues. And your new focus as our chief AI officer will be integrated in many ways to deal with the challenges and the opportunities associated with

(02:20):
artificial intelligence. But before we get to that, how do you see AI efforts contributing to major solutions to the grand challenges that we're facing today, like public health, or being able to help us become more climate resilient?
Right, so that's a good question. What happened to the old days, right? When you could work in a lab and nobody knew what you were working on?

(02:43):
So I'm a very levelheaded AI researcher, as my students will tell you, but even I cannot really contain my enthusiasm about the opportunities, and where we're going, and the breakthroughs that we are making. I have a lot of examples for you, but I'm gonna try to keep it short, so just stop me at any time. Huge opportunities in the health space. Okay?

(03:04):
And even if I just narrow it, like really focus it in on a specific sub-sector: new drugs. We now have AI-discovered drugs that are making their way down clinical trials, right? And there are even studies now that show that these AI-discovered drugs have a higher likelihood of surviving those really difficult, complex steps

(03:25):
that take a drug from the idea to basically putting it out in a market. I know a lot about this space because my lab was one of the first, and we continue to develop AI methods for what we call property-controlled generation of small molecules. So, molecules that can serve as drug compounds, but where you really have to correct for a lot of things.

(03:46):
Like, do they survive in the blood? Do they cross the blood-brain barrier? We have new biologics. Okay.
Did you follow the Nobel laureates this year? There was huge press on them.
Yeah, yeah, yeah.
Yeah. David Baker, he just gave his Nobel speech, I think yesterday, and he's recognized in the field for protein engineering and protein design.

(04:07):
So now we have AI pipelines that are developing new proteins, new enzymes, right? So, you're a mechanical engineer: think about new catalysis processes. They go beyond drugs to new materials. Imagine the opportunities: materials that can capture carbon dioxide, that you can put in the soil and clean the soil, or clean the oceans. New materials for more resilient structures,

(04:30):
better bridges, all the way to quantum materials. And we used to talk for a long time in my community about automated labs. Labs where, you can imagine, you have the robots doing all the experiments. But now we realize that that was a very unambitious way of framing what an AI scientist can be. Because now we're talking about labs that are not only operating,

(04:52):
but ready to be scaled, that go from, you know, idea generation to synthesis to testing. And there's just so much more that's coming in the health space. There is a team here of Mason researchers; what they're doing is looking at how we help folks with opioid addiction, right? There are these very classic frameworks for how to do interviews and how to

(05:14):
motivate folks to stick to specific regimens, and they're utilizing AI to personalize these motivational interviews. I think you mentioned challenges, like grand challenges. Well, climate is a grand challenge of our time. So what about, for instance, climate forecasting? Climate resilience? Now we have really accurate AI methodologies that can forecast weather.

(05:36):
We have a team here at Mason that feeds satellite data into AI algorithms to better predict storm surges, right? So you can help our communities be better prepared. We have another team that is thinking, okay, how do we communicate to communities better? Right? How do we help with disaster preparedness? They just received a $1 million grant from NIST to advance AI for disaster preparedness.

(05:59):
We have another team that's using AI to predict snow accumulation and melt. It's a collaboration between the College of Engineering and Computing and the College of Science. What about environmental conservation? We have a faculty member here in the College of Science who's using AI-powered systems to monitor ecosystems like the Amazon and track wildlife populations. Amazing stuff. I have a lot more.

(06:21):
No, this is really, really good stuff. I think the real challenge for many of us, especially those of us who want to harness it for purposes in higher education, is the speed at which the technology is moving. Right? You know, in our meetings, in our meetings with our faculty, and in our individual meetings,

(06:45):
we've talked about how essential it is to not just harness the technology, but to build the right ethical guardrails to protect vulnerable populations, actually to protect the population overall, right? From outcomes associated with AI. So what is your approach to making sure that ethics,

(07:07):
societal impact, and good governance are front and center in our AI work?
Yeah, I fully understand the speed challenge. I tell colleagues who are kind of relating to me, man, there's like a new article every day, how do I keep up with it? And I tell them, well, there's a research paper like every five minutes, it seems to me, in my field. So I understand the challenge of speed,

(07:27):
but we're a little bit ahead of the game here, I think, in terms of thinking about those ethical guardrails. And I might say we're a little bit ahead of the game compared not only to other universities in Virginia, but even beyond. So let me just give you a couple of examples. Mason, for instance: we are active participants in the AI Safety Institute Consortium that's run by NIST under the charge from the Department of Commerce.

(07:49):
And we're the only university in Virginia there. And as part of, you know, being a participant, we are working to better understand what the capabilities of AI systems are. Can we forecast them? And as we think about outcomes and capabilities, can we think ahead about what those ethical guardrails are, right? How can you define them? And more importantly,

(08:10):
how do you translate them from concepts to actual metrics, right? And functions that you can put into these systems so that they do the right thing, however we understand the right thing, right? In alignment with our values and with our principles. Here at Mason, we also work very closely with SCHEV, the State Council of Higher Education for Virginia. And we are really trying to piece out the

(08:35):
governor's charge, for instance, in what's called Executive Order 30, where the governor is worried about AI safety and thinking about what it means to integrate AI methodologies in education, right? What are those guardrails for our instructors, for our students? And they go beyond just data security and data privacy, right? So think agency: think making sure that those tools are not replacing very

(08:57):
key developmental skills. So, in fact, we are leading in this space. We have an AI in Education summit coming up in May, where we are bringing together higher eds, community colleges, and K-12s across Virginia to really outline, develop, and implement standards. Let me give you three very quick, high-level frameworks, right? How we are thoughtfully proceeding on those ethical guardrails. First:

(09:21):
a governance framework, right? So things may be moving very fast, but you wanna go back to what your values are, what your principles are here at the university. And so, together with a lot of representatives from the colleges, faculty, staff, students, we're developing comprehensive policies that cover data privacy, security, ethical use, transparency, accountability, agency, and more. And we're not thinking in the abstract, all right?

(09:43):
These are not boring pieces of text that you write and nobody really reads or understands what they mean. We are thinking: what does this mean for you as an instructor, right? What does this mean for you as a student? What does this mean for you as a staff member, a researcher? So we're thinking of all the stakeholders. Second: integrating these guardrails in the curriculum and in the research, right? You need to go beyond just saying the right things to doing the right things.

(10:04):
So again, that's what I meant by saying we're actually ahead of the game. We have an undergraduate minor in ethics and AI, because what we wanna do is open this up to all Mason students, right? We want students from humanities, from social sciences, from education, from business, policy, wherever they are; they can come and take this minor, and they can gain not only kind of a

(10:25):
better understanding of AI, but also a better understanding of what it means to have ethical AI, what safe and responsible AI is. We also have, specifically, a responsible-AI certificate for our graduate students, where we are teaching them about risk frameworks, and then how to make sure, whether they're going to companies, or whether they become clerks or staff or senators, right?

(10:46):
Wherever they end up, how do they make sure that they incorporate those risk frameworks, whether it is in the way they're critiquing, right? Interrogating AI systems. Or even actively participating in development. And third, I think, the bigger umbrella: how do you foster this culture, right, of responsible innovation on campus? And I'm gonna come back and circle now to research.

(11:07):
And the reason I wanna circle back on research is because, as folks perceive, it's a very fast-moving space, but it's a space where we are just discovering, somehow, the outcomes, right? So we wanna make sure, okay, what does it mean to have agency? How do you develop systems that don't take agency away from human beings? This is an active area of research. All of these are,

(11:28):
so we're incentivizing our faculty to come together across the different colleges and together advance, you know, research that tells us, okay, how do we make sure that we have ethical AI? Or how do we make sure that this is interpretable or transparent, and that we can understand what it is that we are doing, if we're using these systems in decision making?

(11:48):
You've brought up a whole host of ethical questions.
Right? Yeah.
Right? And clearly you are thinking about 'em. Our team is thinking about 'em, right?
Right.
So talk to me about what that looks like in the classroom, right? How can we make sure our students are equipped to handle the ethical

(12:08):
questions and societal challenges that come with AI? What are we doing to ensure that in our ethics of AI curriculum?
So first I wanna take, you know, a few steps back, because we do want our students to think about, you know, the ethical aspects of AI. But you know,

(12:31):
you can't ask the real questions and you can't do the right things if you don't understand, if you don't have a deeper understanding of the technologies, right? So we don't want students just to get their information from articles. We want our students to really understand what artificial intelligence is, right? What the methodologies are. And here I'm not thinking about, say, computer science students or engineering students.

(12:53):
I'm thinking very broadly: any student at a public university. We talked for a long time in computer science, I've been around a few years, we talked so much about opening it up, right? Opening up computing, opening up computing principles and analytical thinking to all other students. But we just couldn't figure out how to do this without forcing students to go

(13:17):
through, you know, you gotta take Python 101 and then you gotta take Python 201, and then you gotta take Java, right? So the model was always, well, first come and learn, you know, computing.
First you gotta learn how to program.
Yeah. You gotta learn how to program, then...
You gotta learn the basics of AI. And then you gotta program the AI...
Right? Right, right. But we would start with, like, okay,

(13:39):
sort these numbers. And you know, it's not for everybody. I always tell folks, I survived my first years as a computer science undergraduate because most of the stuff that I was doing was okay, but I just couldn't see the big picture. Why is it that I'm doing this, right? What is the real interesting thing? So what really excites me now is that we have the opportunity to teach

(14:02):
students the bigger things without forcing them, you know, to go through this pipeline, this cookie-cutter model, right? So they don't have to learn the inner things about coding. Now we're talking about non-coding frameworks, right? We can teach students how to build AI agents by operating on top of these platforms that are basically point-and-click and put together.

(14:22):
So this is at this level---
Now what platforms? What platforms are those, right?
So there are a lot out there in industry. A lot of the companies are proceeding in this space. Of course, OpenAI is a big player, but others too, you know: Anthropic, Microsoft, Amazon. They're all going in this space. They're all going into AI.

(14:43):
So these are no-code or low-code type frameworks.
Yep.
There's code underneath that's being generated by the bot.
Absolutely. Yeah.
And you're setting a set of high-level instructions, right? This is just for the community out there.
Yeah. You're operating at the top.
The challenge with that is you don't necessarily know or

(15:04):
understand if the code that's coming underneath is doing exactly what you state to that code in English.
Yeah. Those are the skills, right? If you teach a certain way, then you can create the spaces to really get into what matters, right?

(15:24):
So if you allow students to, I call it do-it-yourself, right? DIY. I'm really excited about this: DIY AI. So if you teach the students, here is how you can build an AI agent, what could that do? Okay? Let's say you're looking for a job. Alright? So I wanna figure out, among the job descriptions out there,

(15:44):
which are the ones that are best aligned with my CV? You can build an AI agent that says, Amarda is looking for a job. Okay? I'm not looking for a job. But you can build an AI agent for that. Now, the real question is, is it doing what you think it's doing? Is it giving you the right information? And more importantly, how can you spot it when it's not giving you the right information? Right?

(16:06):
Because it may be very subtle. And so that is what I would call AI literacy, right? And so that's what we wanna focus on for all our students, to give them the right capabilities so that they can understand these nuances and be not just better-informed users, but better-informed builders. Right? Now I wanna mention one more thing:

(16:28):
the opportunities in this space are big. Not only because of the level at which you operate, but because when you open these things up to students that are coming, let's say, from a philosophy background, right? Or in communications. Then those students can ask some questions that maybe an engineering student would not, right? Because they're going through their program,

(16:48):
whether it's policy, ethics, or English, humanities, whatever it is that they're trying to do, or health, right? Say public health. And they can spot, okay, it's not doing what I want it to do, right? Or, I wanna do something bigger. And they can really come up with new problem spaces, not just new solutions. And then the right questions: how do we interrogate? So it's a win-win, because you are advancing education,

(17:11):
but you're even doing a little bit more. You are advancing innovation, I would say.
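For the curious, here is a minimal sketch of the DIY CV-matching agent Shehu describes, assuming the open-source sentence-transformers library; the model name, CV text, and job postings are illustrative placeholders, not anything from the conversation:

    # Rank job postings by semantic similarity to a CV (illustrative data).
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose text embedder

    cv_text = "PhD in computer science; research in machine learning for molecular design."
    job_postings = {
        "AI Policy Analyst": "Analyze emerging AI regulation for a federal agency.",
        "ML Engineer": "Build and deploy machine learning pipelines in production.",
        "Data Journalist": "Report data-driven stories for a regional newsroom.",
    }

    # Embed the CV and every posting; unit-normalized vectors make the
    # dot product equal to cosine similarity.
    titles = list(job_postings)
    cv_vec = model.encode(cv_text, normalize_embeddings=True)
    job_vecs = model.encode([job_postings[t] for t in titles], normalize_embeddings=True)
    scores = job_vecs @ cv_vec

    # Print postings from best to worst match.
    for title, score in sorted(zip(titles, scores), key=lambda pair: -pair[1]):
        print(f"{score:.2f}  {title}")

The literacy question raised in the conversation applies directly: a ranking like this can look plausible while being driven by surface keyword overlap, so the builder should check whether the top match actually reflects the CV's strengths.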
What kinds of support will we offer the region in this regard, right? So you have our students, and I get it: they're gonna learn the basics of how AI works, and then on top of that, they're gonna learn tools such that they can ensure

(17:33):
that the AI is doing what it's actually intended to do.
That's the literacy piece. Yes.
And so now you have a cohort of students who are basically equipped to go out and tackle major problems with AI.
Yes.
That being said, we have a whole community around us, right?

(17:56):
And at last check, the fastest-growing AI community relative to job requisitions is the Washington, D.C. metro area.
Yes, it is.
It's second only to Silicon Valley.
Right, we're number two.
And it is razor thin, the delta between Silicon Valley and Washington, D.C.,

(18:18):
and then there's a big drop-off when you get to places like Austin, Texas, and many of these other major cities that are hubs of innovation and technology. So given that, we also have to figure out how we engage the broader community from an AI perspective. So talk a little bit about your thinking on that.

(18:40):
Yeah. So we live in a really interesting region in the nation, and I always tell my students that you are so privileged to be, you know, next to the government and next to, you know, all sorts of agencies here. But what may not be appreciated, as you said, is the whole industry, right, that supports this region. So we are a public university,

(19:02):
and as you know, our first tangible product is a skilled workforce for the region, right? And then the nation and the world. And I often hear you say that the majority of our graduates stay in Virginia. So that's a great thing, because that means you're uniquely poised to offer an AI-skilled workforce to Virginia. I went through this exercise this past year of designing a new educational

(19:22):
program, a new master's program, and I got this very firsthand view of how many AI and AI-related job descriptions are out there in our region, right? There's a lot of industry here that offers public-sector technologies, right? They're developing systems for the government, whether it's local, state,

(19:43):
or federal government, right? But then there are also startups in this region too. There's a lot of very diverse industry. So what we really can offer this region is an AI-skilled workforce. We understand the region, okay? We understand the needs, because we're next to the government. We talk to our federal tech providers, we talk to companies,

(20:03):
but we also talk to the DOD, right? We talk to the Department of Health, we talk to a lot of agencies. So we see, you know, what the needs are, and we also see what the capabilities are in the region. And so this is a very unique view that we have, because it informs us on what it is that we need in our educational programs to prepare the students,

(20:25):
right? Whether they want to join the government or whether they want to go into industry in this region, there are just tons of opportunities. So we've made a lot of headway in this space. We already have a lot of our graduates going into this region, but we also have a lot of educational programs, some already operating, I mentioned a couple of those before, and new ones in the works now.

(20:47):
Something really unique, I think, that will open up opportunities, right? And will also open up opportunities for industry and government in ways that they haven't thought of before, is that we can create new educational programs that are not just, you know, for engineering students, that are not just, say, housed in a college of engineering or in a school of computing,

(21:08):
but new educational programs that bring all the colleges together. So everybody, for instance, talks about AI ethicists, but did you know that there are no programs? Like, if you think about it, maybe you get there through a PhD. We can do this: we can create new educational programs that prepare our students, right after an undergrad, to have this understanding and go and, you know,

(21:32):
help the government as they think about, say, procurement.
Now, we are preparing a program in AI ethics. Is that accurate?
We are.
So where are we in that process?
Yes. So we first tested the waters with an undergraduate minor, and now we are talking about going to the higher levels, right? So going to majors, going to master's,

(21:52):
but it's not even just AI ethics, right? You open it up. There's a lot more in this space, because then you think about, okay, what does it mean to develop things for society, to have societal impact, right? So there's a lot more thematically under the umbrella of AI and society. There's a lot under, say, AI and health. The College of Public Health is creating concentrations.

(22:13):
And so we are first testing the waters, but we are proceeding now to bachelor's and to master's. And, as I say to folks, stay tuned. There are gonna be very new things coming out of Mason that not only will serve the needs of the region, but I dare say we're gonna go a little bit beyond that, because we're going to

(22:34):
tell industry, well, you didn't think about this, but look, we have thought about it. And here is, you know, where you should be heading, and here is how you should be doing things.
It's good that you bring that up. One of our strengths is our interdisciplinary approach to research, bringing our faculty and experts together to tackle complex

(22:54):
problems. How do you see us harnessing this collaborative approach while still maintaining the rapid innovation that we need to get to AI solutions?
Yeah, so we actually are very collaborative at Mason. I think it's part of us being young, really, right? And being ambitious and trying to run very fast and, you know,

(23:17):
catch up to and even go beyond other universities in a really short amount of time. We really don't have many silos as you may find in other universities. But we don't just take for granted the fact that our faculty, you know, have that posture of wanting to collaborate with others. We actually are creating the structures, if you will,

(23:37):
the infrastructure and the incentives, to allow faculty and students to collaborate with one another. I wanna give you a couple of examples. You may already be familiar with them, but just, you know, for whoever is listening to the podcast: we have three transdisciplinary institutes, okay? And those institutes are constantly coming up with ways of bringing faculty and students from the different colleges together in a room and

(24:01):
outlining new ideas and new problem spaces, as I call them. So in 2023, we had an AI innovation summit where we had, I think, 150 faculty that came from all the colleges, and they were organized around themes of research and educational opportunities that they wanted to develop, right? So we had AI for society, AI in education, AI and health;

(24:24):
we had a lot of special interest groups, teams of faculty and students, that went out of that symposium with ideas and, you know, shared goals that they could go after, and some of the ideas for even the educational programs. But we also go a little bit further than that. We also incentivize faculty and students. We have a

(24:44):
wonderful program here housed under the Institute for Digital Innovation. It's called the Predoctoral Fellowship Program. It's a very different program, because it goes and tells PhD students: you have agency, you are embedded in society, there are things that matter to you, right? There are great challenges that you perceive about the world in which you are living and are gonna live in. Go and formulate, you know,

(25:07):
a problem that you wanna work on. But it has to be interdisciplinary, inherently interdisciplinary. And so the students really take ownership. They're given a three-year fellowship, so it gives them that breathing room to develop complex ideas, bringing together faculty across the colleges and giving the student, you know, the mentorship and the expertise that they need.

(25:28):
We have a public-private partnership faculty fellowship. Okay? It's a mouthful, but it's the P3 Faculty Fellowship. What it does is tell our faculty: you understand your lab; now go outside, find an industry partner in the region, or even in the nation for that matter, and find a problem that they're struggling with. But, and it can't be niche, it also has to be a problem that has high societal impact and high intellectual

(25:51):
merit, right? So we're doing a lot of these to incentivize faculty and students to collaborate. I like to think of them as seeds with strings, right? We're moving fast, but we wanna make sure that as we're moving fast, we also have accountability. Right? What are we doing? What are the ideas that you're developing, and what are they bringing? What are the societal impacts? We are incentivizing faculty and students,

(26:14):
but we're asking them to think big and to do big things.
So there are a host of programs and a host of initiatives that connect us to the broader population, industry and the like. Alright. Talk a little bit about K-12,

(26:35):
non-governmental organizations, and other entities outside of those who would have a vested interest in AI for making money or advancing a field. Right? Talk to us about that.
Right? Yeah. So there is a great interest in K-12s,

(26:55):
in community colleges, and actually some community colleges are already running ahead, and they're thinking about all kinds of, you know, certificates and expertise that they can give to their graduates in this space, in the AI space. But there's a lot of, I would say, desire too, but not knowing how, right? And there's just so much. It's not even just the major tech companies,

(27:17):
but small companies thatare experimenting with say,
new chat bots for personalizededucation for K-12.
That space is really taking off.But the real questions in K-12 is,
well, I may like the capabilities,I may like to, you know,
give my students that extra helpthat they may need right? At home.

(27:38):
For instance, imagine a student whose parents, you know, for their parents, English is a second language. And so that student might not get the support at home for a concept that they didn't get in class, right? They may struggle a little bit with homework. There are huge opportunities to help students level up with these technologies. But the questions among K-12s are,

(27:58):
can we do this safely? Right? So what happens to the data that the students are putting in as they're interacting, right?
But back up for a minute. We're acting as if, if we don't do this, the young people won't get it.
Oh, they're already doing it.
Yeah, yeah, yeah. And so that's, I think that's misguided, because the reality is,

(28:20):
just because you're not teaching it, don't assume they're not learning it: A) there are more nefarious entities out in the community that will help these young people learn it, and B) you can't get in front of their own individual curiosity, right? If this is something that they want to learn, right? There are so many tools available

(28:44):
online and through YouTube and other mechanisms. You could be self-taught. And so.
They are.
My challenge is that oftentimes we're entering into these spaces behind the ones that we are responsible to teach.

(29:07):
Meaning they've already not only adopted it, especially when it comes to utilizing the technology, they've already adopted it, they're using it. Then here you come, trying to teach them how to do something that they've actually been doing for months.
Oh, trust me, we're not. I wanna give you an example;

(29:27):
you reminded me of an interesting thing. It was a few months back, actually. I've held several events with students, okay, at different levels. And I held an event with master's, PhD, and some senior students in the College of Engineering. And I told them, it's a safe space. I just wanna learn,

(29:48):
we just wanna learn how you're using these tools. We know you're using them. We just wanna learn use cases from you. And, you know, no faculty were allowed. I said, it's just me, it's just me and maybe some of my students listening in. And I was blown away. Okay? I learned from them not only what kind of tools are out there, but how to use them for things I never thought about.

(30:11):
So they're already ahead of the game. The kids, as we say, they're already using these technologies. The questions that I was talking about earlier are questions posed by instructors, right? In K-12s. Because very often they have to comply with very specific regulations, whether they're state regulations or coming from the Department of Education, right? So that is the challenge. It's not so much about, you know,

(30:33):
what the kid is doing in their own time. It's in the classroom. If I want to embed these technologies in the classroom, how do I make sure that I'm compliant, right? Not just with data privacy and security, but compliant with whatever regulations there may be in Virginia or in North Carolina. And sometimes they're a little bit different. So that is what we're trying to help K-12s with.

(30:54):
We're trying to better understand the space as well, right? There's some education in it for us, sitting here in a public university. We're trying to understand from them, okay, what are those regulations, and how do you map those regulations into specific things that you look for in this technology? So we're doing, in some sense, a matching. But it's also an education, right?

(31:14):
We are educating the instructors on what tools already exist and what they can do with these tools, and they're also educating us in terms of the borders within which they have to operate. So those are the conversations we're having. And I'm really excited about the summit in May, because that's where we will all sit in the same space and talk to one another, right? And educate one another.

(31:35):
And there's gonna be training there too. We're also gonna be training some of these instructors in K-12, so that they themselves also have a deeper understanding.
So right now, the big discussion that we are still dealing with, and the big discussion, quite frankly, that's happening nationally, is the discussion around how much of the technology

(31:59):
is actually, how much do you utilize for the benefit of you getting a task done...
Mm-hmm.
And how much of that is you personally? Let me state it a little clearer. There are articles about presidents and other leaders on campuses who've

(32:21):
gotten in trouble because they want to send a memo to campus on a specific issue, right? They consult ChatGPT. They say, here, write a memo to answer this particular broad-based issue. And it could be any issue. ChatGPT produces it, they may spruce it up a little bit,

(32:44):
put their signature on it, and boom, it goes out to their broad institutional communities.
Okay? Yeah. Don't do that. Don't do that.
No, no, no.
Don't use ChatGPT for that.
Well, what happens is, undoubtedly, somebody, some itinerant person, checks it, and then when they check it, they say, well,

(33:05):
wait a minute. And then you hear, oh, this was generated by ChatGPT. There's absolutely no way a leader in our organization should be using a bot to give us feedback on how we should operate. And so my question to that is, well, if the information is right, why not? Right? And so, gimme your thought, 'cause I got two or three more after this one. Gimme your thought on that one specifically.
I'll be quick. So first of all, anybody that tells you, I checked this and this is AI generated? Nope. They actually cannot do that. Okay. I've done a couple of projects with students, but also, if you hear or you read,

(33:48):
The Chronicle of Higher Education has had articles on this: the false positives are crazy.
Oh, so sometimes a person might not have even used AI.
No, and actually you should not trust those checks. The false positives are absolutely crazy. And there are examples of kids, you know, submitting their own work, for instance in high school, and a teacher saying this is AI generated.

(34:09):
And there are articles that say, hold on, you penalize with this. You penalize kids that are, say, for instance, on the autism spectrum, right? They have a very structured way in which they write essays; they read a little bit differently than others. But you really cannot tell. I was actually talking to our middle schooler, and she was like, mom, I'm so afraid. What if my teacher says this is AI generated? I shouldn't use big words in this. I said,

(34:30):
you use big words, and you give this article to your teacher, the one that says, anybody that is telling you that they can spot AI-generated text, don't trust them.
Okay. Well, this is good, because that leads to my second question. So now you're a student and you have a writing assignment: write an article on the topic of

(34:54):
wearing burkas in public. Right? And so you go online, the student pulls up ChatGPT or Anthropic or any of these others and actually asks it the question.
Mm-hmm.
And then the bot gives back a very thoughtful

(35:15):
reply and response. Now that student has one of two choices. They can cut and copy that, drop it into their article, submit it, and say that they're done. Right? Or they can use it as a tool: pull references from that,

(35:37):
use it as a way to get a more in-depth understanding, and use it as a start to developing their own thoughts on the topic.
Yeah.
Talk to me about the right way. And the wrong way.
The first that you mentioned is the wrong way. Okay? If you just cut and paste or copy and paste without attribution,

(36:00):
right? That is going against academic integrity, right? You don't do that with any piece that you find on the web, so it's exactly the same thing, right? You don't do that. You don't take from a book and copy and paste. So you shouldn't take from ChatGPT or Claude or whatever it is that you're using and copy and paste and pass it off as your own work. Alright? So that's the key operating term: passing it off.

(36:23):
The trouble is you're passing it off as your own work. Now, if you attribute it, right, and you reference it, that is okay. I know that some instructors may not be okay with it, but I think some of that is because of, you know, a lack of understanding that it is the same, in some sense, as doing some research, right? And saying, okay, this is the consensus. But the real, you know,

(36:43):
mode in which students should be operating, and that's, I think, also a charge for instructors, is to now go beneath the surface. And you picked that example, right? Burkas in public. Then ask the question: but what do you think? Right? You may wanna be informed, right? In terms of, okay, what does this mean? Where has this happened? What controversies are out there? Because you may wanna look at the problem from many different angles that,

(37:07):
you know, you may not think about all of those because of where you live or, you know, the community you're in. You just haven't thought about those issues. So using ChatGPT, or going and looking at articles online, is a way of educating yourself. But the second way, in which you said to then use that as a way to go deeper, right? An inch deeper or an inch beneath the surface, and say, okay,

(37:30):
here is now what I think, or here is where I'm coming in. Or maybe even render judgment in some way, right? So: I'm thinking of all these things that are out there; this makes more sense, for these reasons, based on my experiences and all my thinking. That's where you should be heading. That's the right way.
Okay. That's my thought. Here's number three.
Mm-hmm.
There have been a number of high-profile firings

(37:53):
of university presidents because of plagiarism, which was, quote unquote, unearthed by checkers, these software tools that are designed to map and catch plagiarism in a person's work.

(38:14):
Here's the challenge. Oftentimes it's looking 5, 10, 20, 25 years back. I know a president who's dealing with this now for a paper that was written 30 years ago. Right? Okay. So 30 years ago, we didn't have the tools

(38:36):
to help check.
So if you're working with a graduatestudent and you were working with that
student 20 years ago,
and let's say that studentwho's learning might have took
too much liberty withquoting or not quoting
and utilizing some textthat they found in a

(39:00):
related work somewhere as they were writing. As a professor, you may read their work, but you don't necessarily know every single one of those references. It would be time-consuming at some point. You trust your student, that the student has given you work that is not

(39:21):
lifted and not plagiarized.
Yes.
And I don't know that it's fair to use tools that have been developed now to check stuff that was written 15, 20, 30 years ago, 'cause we didn't have those tools. If we had those tools, then.

(39:43):
We would, you would check. Right? We could check. And now faculty are doing those checks with these tools, right?
Hmm.
But you didn't have those tools before. And the other thing is, even with the tools now, what we're finding is that oftentimes two people who have a similar background will write the same thing. The same thing,

(40:04):
explained the same way.
Yeah.
To explain a phenomenon.
Exactly.
It doesn't necessarily mean that one copied another. Right.
But it does mean that whoever came first is gonna be attributed with the discovery.
Yeah. I mean, that's convergent evolution, right?
Right.
And so what we're finding with the tools now is you can come up with something, you write it, you run a check, and you say, oh, well,

(40:27):
so-and-so found the same thing two months ago and they published it in this paper, so we can't put it forward. Doesn't mean you were plagiarizing. It just means you were late. They got to the discovery before you did. And so.
Or they were published earlier. Right, right. Because it depends.
That's right. That's exactly the point I'm making. And what we would do back in the day is you kept very accurate lab books, right?

(40:51):
And you would write your lab books in pen, and you would keep dates on your lab books, so if something happened like that, you could go back to your lab book and say, no, no, no. Even though they published it before me, I actually discovered it on this date. And here's the evidence...
And here's the proof that I did so.
So the point that I'm making here is we should not be

(41:14):
utilizing these tools to go back 15, 20 years to basically state that a person is plagiarizing. Because A) the nature of the work we do as scientists is such that two people can come up with the same answer

(41:36):
at the same time, and one not actually know that the other has made the same discovery.
Yeah.
And with the large number of publications, it's not like we have five or six publications in any given area; there literally can be hundreds of publications in that area.

(41:58):
It's just not fathomable without an electronic tool. It's just not realistic for you to be able to read all of those, to know what everybody else is saying on a topic. And you definitely can't do it; even 20 years ago, there were hundreds of different journals on a singular topic.

(42:18):
To me, I think this thing has gone way overboard. And yeah, we are penalizing people and making them look like criminals for what could be oversights, what could be sloppy work, or what could be just, they're just late.
Well, yeah, I mean, if I can sort of comment on one thread of it, I tell my students: always be worried when you're looking,

(42:41):
you have a hammer and you're trying to find things to hit with that hammer.
So taking--
Everything looks like a nail.
Yeah.
So taking something and trying to unearth or trying to find things that look like, you know, they fit the tool that you have, that's a little bit scary to me, I would say. So what you really wanna do is urge folks, I would say, trust people, and try to understand something from

(43:05):
all angles. Think about all these other things, right? We didn't have all this access to information. You could have come up with the same idea. Or sometimes in our field you can come up with the same way in which you formulate a problem, right? Because it's the optimal way, right? You wanna fit it in three sentences in that first paragraph of the

(43:26):
introduction. And you go to the conferences and you talk to people; that's what I mean by convergent evolution. And so then you start talking about that thing in a similar way, but that doesn't mean that you copied it from each other. So there are multiple reasons why something may look like a nail, but that doesn't mean that it's a nail. So I would say, always in these cases,

(43:47):
trust people, and don't just say, oh, I have a tool and I can go and find all these other kinds of things with this tool. Because that tool wasn't designed for those things.
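For listeners curious why two independently written passages can trip a similarity checker, here is a minimal sketch of the n-gram overlap scoring that such tools build on; the texts, trigram size, and flagging threshold are illustrative assumptions, not a description of any specific product:

    # Toy similarity check: Jaccard overlap of word trigrams.
    def trigrams(text):
        words = text.lower().split()
        return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    # Two researchers describing the same phenomenon can converge on
    # near-identical phrasing ("convergent evolution") without copying.
    text_a = "the model predicts storm surge height from satellite data in real time"
    text_b = "our model predicts storm surge height from satellite data collected daily"

    score = jaccard(trigrams(text_a), trigrams(text_b))
    print(f"trigram overlap: {score:.2f}")  # ~0.46, well above a naive 0.2 flag

Nothing in the score distinguishes copying from convergence, which is exactly the false-positive problem being described here.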
Here's where we'll wrap up. Okay? Look, as we wind down here in time, there is clearly a significant amount of potential for AI, but there's also one risk that we haven't talked about,

(44:09):
and that is that it can deepen existing inequalities, right? So how do we make sure that the work here at George Mason helps bridge these gaps and brings the benefits of AI to all communities? 'Cause just like with the computer, right? Just like with the calculator before it,

(44:30):
not all communities benefited the same from the technologies that were developed to help society. And with this technology, the impact that it's going to have and the speed at which it's going to be implemented mean that some people will definitely be left behind. Let me give you an example.

(44:53):
One of the things Elon Musk is doing with artificial intelligence and with his work now: they've been developing robots, humanoid assistants. And those robots now are being test-marketed in the homes of individuals, some famous individuals, right? So I read the other day where Kim Kardashian has her personal assistant

(45:14):
robot.
I read that too.
That, you know, was developed through Elon Musk's company, right? This kind of thing. So you got some folk who are using the technology at that level, literally interacting day to day, hand to hand. And you got some folk who are still trying to determine what it is.

(45:38):
They're still asking the question: what is AI, and how does it affect me where I live? So there is a gulf that's widening, and with a rapidly developing technology like AI, that gulf is going to expand, and expand rapidly. And so how do we make sure the work here bridges the gaps to all

(45:59):
communities?
Yeah. By the way, I wouldn't worry too much about that robot in Kim Kardashian's home. It's, you know, a little bit of a gimmick, right? It's not really an autonomous robot, if you've been following the news. But anyway, I'm not worried about that. But I am worried about this gulf, right? These deepening inequalities. In some sense, I mean, you and I know this is a story of humanity, right?

(46:19):
I grew up in a country where I didn't see a computer, I think, till senior year in high school, okay? And I actually went to a mosque; they were giving training there. I don't know why in a mosque, but that's where I went. So I learned, okay, here's this thing, and here's how you turn it on.
But on a more serious note,

(46:40):
I do worry about leaving AI innovation only to companies, because they have different objectives, right? They're not necessarily thinking about the underserved populations, and they're not thinking about how you lift everyone up. But here is why I am in a public institution: because only in a public university do you have this concern, right? You're thinking about serving your region,

(47:01):
you're thinking about serving your students, right? You're thinking about what student success means, and how do I prepare them? How do I lift them up? Right? You're a deep believer in that: opening it up to all students. And here is why I really want universities to claim their space and to be really aggressive in AI innovation, so that the narrative is not just in

(47:21):
the companies. And that's why we keep thinking about, okay, educational programs: how can we prepare our students? And here's why we're thinking, how do we connect with community colleges so we can bring those students in? And here's why we're thinking about the K-12s, right? How do we prepare those students so we don't lose them early, so we can help them? And how do I train teachers in those K-12s to teach with AI,

(47:43):
right? So that they can give that enthusiasm, that energy, and that motivation to the students, so we don't lose them. We have, I think you know this, two data labs, sort of two big data science projects here at Mason. These are big investments by Virginia that are trying to reach the rural regions in Virginia. Those are launching pads for us.

(48:03):
We're thinking about how we utilize them so we expand from data science to AI, right? We also have faculty here. So think about inequalities, okay? How do they arise? How do we actually address inequalities? We have faculty whose research is specifically in this space, trying to better understand inequalities, and then how you change education to make sure that you

(48:27):
bridge, right? You don't allow these inequalities. But honestly, whenever folks tell me, oh, you know, the singularity is coming and artificial general intelligence is coming, I say, oh, stop doomscrolling. Just stop doing that. Think about the real dangers. The inequality is the real danger. And that is what we really have to, you know, on a daily basis, actively think:

(48:49):
how do I tackle it? What do I do so that my kids are not left behind? Alright, I'm worried about my kids. So go away from the abstract. Think about: what about your kids? What are you doing for your kids? How can we help, you know, those kids there in Hampton Roads in Virginia? What are we doing about them? And so that's why I think this is the charge of our times, I believe, for universities:

(49:10):
how do you make sure that you bring everybody in, so we can all participate in this new digital society in which, you know, we're already living, but where we are gonna keep interacting more and more with AI in the future?
We are definitely, definitely at the forefront of this technology, but I think we're at the forefront in the right way.

(49:32):
So I want to thank you for your engagement. I want to thank you for what you will do in the future, what we are gonna ask you to do, as we move this vital technology forward. You know, I expect that this is the first of many conversations that we will have around this topic of artificial intelligence. Amarda,

(49:53):
thank you for sharing your expertise and for the leadership you bring to this university.
Thank you for having me.
Alright. I am George Mason President Gregory Washington. Thanks for listening. And tune in next time for more conversations that show why we are All Together Different.

(50:17):
If you like what you heard on this podcast, go to podcast.gmu.edu for more of Gregory Washington's conversations with the thought leaders, experts, and educators who take on the grand challenges facing our students, graduates, and higher education. That's podcast.gmu.edu.