
July 4, 2025 • 59 mins

Dr. Ralph Ford, chancellor of Penn State Behrend, talks with Dr. Tiffany Petricini, associate teaching professor of communication, and Kyle Chalupczynski, assistant teaching professor of management information systems, about their work with Behrend's Ad Hoc AI Task Force. Originally recorded on June 26, 2025.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Ralph Ford (00:00):
Welcome to Behrend Talks. I'm Dr. Ralph Ford, chancellor of Penn State Behrend, and today we're diving into a topic that is truly transforming our world. We hear it in the news every day, you can't escape it, and that is artificial intelligence. And I am thrilled to have two faculty members here from Penn State Behrend with us today. They are leaders on our AI task force, that is Dr.

(00:23):
Tiffany Petricini and Kyle Chalupczynski, and they have been instrumental in exploring how AI can be used across campus. We're going to dig in deep, and they are truly engaged in the subject in a significant way. Welcome to the show to both of you.

Kyle Chalupczynski (00:41):
Thank you, thank you for having me.

Ralph Ford (00:43):
Well, appreciate it. Tiffany, you know we're going to go back and forth today, but I'll do a little introduction for each of you. And Tiffany, it's great to have you here. You've got a very interesting background: rhetoric, technology, and ethics from Duquesne, and that's really relevant to the conversation that we're having today. You are an associate teaching professor of communication, and

(01:08):
Kyle, you're an assistant teaching professor of management information systems, and you have some nice experience. You worked for Paradigm Infotech (I remember that company well) when you were in Knowledge Park before coming here, and you also have some other great experience. But let's again welcome you here, and thanks to both of you for being here.

(01:30):
I want to give you each a moment to introduce yourself, and Tiffany, we'll start with you first. Tell us, you know, how did you end up at Behrend, what drew you down this interdisciplinary path, and how did you end up getting so fascinated with AI?

Tiffany Petricini (01:45):
Yeah, thanks. Well, I think that you started the introduction well, setting it up. To explain my background in rhetoric, technology, and ethics: I actually was really interested in what used to be called computer-mediated communication, so this was even in the days sort of pre-social media, in the way that it is now. So I was just fascinated with the way that technology was

(02:11):
impacting relationships and, of course, a significant part of relationships is communication. And so as I sort of traversed through my studies and I got into my graduate programs, technology was so central to everything that I was trying to understand in my own world. So I originally was studying social media. So my book, Friendship and Technology, it looks at the way

(02:32):
that technology in general has impacted our friendships, but specifically social media. And so the one review I've had of my book (the only review that's out yet), the only critique that I got is that I wasn't thinking about AI and how AI impacts friendships, and it was fair, it was a really, really good criticism. So I wrote that book in 2020-ish and I just wasn't

(02:55):
thinking about AI. It was there, but it wasn't. We hadn't hit the ChatGPT boom yet, and so as soon as we did, I really, really got interested, and it just so happened that I was teaching at Penn State Shenango. And when I was at Shenango, that's when I really had started getting interested in AI, and I was just trying to understand

(03:17):
why so many places were banning it. Why are we banning AI? What is happening? Why are we doing this? And I started looking and trying to understand the evidence. Where is the evidence? And it wasn't evidence-based. The institutions and their automatic sort of turning to banning was problematic.

(03:37):
And so I had just started researching, and I got connected with Teaching and Learning with Technology at University Park, and we sort of formed a research team. And I got really lucky, because Behrend was desperately trying to find someone to teach communication studies, and at Behrend the communication department, their focus is media, and it was sort of just

(04:01):
serendipitous, it was really lucky. They really needed someone. And so the school director, Melanie, reached out to me, and she said, hey, would you be interested at all in coming to Behrend? And I'm really, really fortunate that she made that call.

Ralph Ford (04:17):
That's a great story. We're happy you're here. And before I jump over to you, Kyle, you actually made me think of a few things. Do you remember something called ELIZA, that program that they had a long time ago that was supposed to act like a therapist for you? People tried to communicate with it, right?

Tiffany Petricini (04:33):
So I don't remember it, but I know of it.

Ralph Ford (04:36):
Well, I've been around a little longer than you, and I remember it, and I actually remember how poor it was, and after a few interactions you figured out pretty quickly that there wasn't any intelligence. Anyways, we'll go a lot further than that. But I guess what you made me think about with that intro is, from the moment computers were created, people have been trying to figure out how to communicate with them, and I mean, that's

(04:58):
going to be part of the discussion today, so I think your topic is right on. Kyle, let's jump over to you. Why don't you tell us a little bit about your background and how you got here as well?

Kyle Chalupczynski (05:08):
Sure. So I always start out by saying that, you know, ultimately I'm a computer nerd at heart. So regardless of where I ended up, I feel like I would have been kind of, you know, obsessed with AI to a degree. But, as you mentioned, I spent some time working with Paradigm Infotech as a contractor for GE Transportation.

(05:29):
After that I went to Erie Insurance, where I was in a business analyst role. Both roles were different flavors of the business analyst role, and I really kind of fell in love with that way of thinking. I've always enjoyed solving problems. So having the skill set to kind of systematically dissect problems or opportunities and create solutions and figure out

(05:50):
better ways of doing things has always kind of just been something that I've enjoyed. So during my time as a business analyst I also spent some time, like I mentioned, helping out with our quality analyst team and doing some automation of our test suites with them. So I really found that I enjoyed the automation side of things as well.

(06:10):
So with that background, you know, I was just going along doing my thing (sometimes I say flying under the radar), teaching my core MIS courses, and then, of course, about two and a half years ago, we had our ChatGPT moment and really everything changed. And I won't get into the details, because I think we'll

(06:30):
cover those later, just as far as what was going through my head and all of that. But at the end of the day there were a lot of contributing factors that really kicked things into high gear for me. One was, I guess, full transparency: I've always kind of dealt with imposter syndrome a little bit, and landing here at Behrend with no teaching experience, no

(06:54):
background in instructional design, that was most definitely in full gear, right. So the first thing I thought was, well, this is an opportunity to kind of start filling the gaps, right, and to feel maybe like I belong here a little bit more. But then other things contributed as well, right, as we started to learn, well, this is going to be, you know, in very high

(07:14):
demand from employers. Well, the entire reason I'm here is to help students get jobs, right. So that was another factor. As I mentioned, regardless of what my job or role ended up being, I probably would have been playing with this stuff anyway. So now I'm just fortunate that I'm in an environment where I can dedicate a pretty significant amount of time to

(07:37):
researching this and trying to stay on top of it as best I can, and testing new things out, and really almost kind of creating the classroom like a sandbox, figuring out what works. Nobody's going to have all of the right answers right up front, but the way that we're going to get to those answers is by trying different things, being agile, seeing what works and

(07:59):
iterating quickly, and AI allows us to do that, right. So that's essentially been it for the last two and a half years. It's been a whole lot of experimentation. Some things have not worked. A lot of things have worked, and, like you said, as I'm sure we'll talk about, I think that my classrooms are a better
experience.

(08:19):
Because of it, I enjoy my work more. It's freed up time to do other things, and so I would say that's it in a nutshell.

Ralph Ford (08:28):
That's a great summary, and we're all learning as we go through this, and you know we could spend hours on this. So maybe this is the first of multiple podcasts on the subject. And, full disclosure, you know, my background is machine learning, and I spent my

(08:49):
career in computer vision and neural networks, and the AI I learned when I was in school was focused on a lot of things like game theory and how you use logic and the like. So we've advanced so far. It truly amazes me. I can't believe, some days, what I see when I get answers to the problems that I look at. So we'll dig into all of that, and it's a rapidly advancing

(09:10):
field, and, as you've said, I think it's, I know it's, just a matter of: we have to jump in and work with it and understand it, for better or worse. We're going to get into a lot of those details. So you know, here's the question for both of you, and we'll work through all of these. But for a long time, and by the way, AI is not new,

(09:30):
we've been thinking about this. The movie Terminator, right? It's the fear of AI coming to get us, and robots and the like. But now it's starting to seem like these things actually have some potential, right? But let's stick with something a little more worldly right now, which is ChatGPT. It arrived in 2022, and overnight 100 million people were using it.

(09:52):
Why was that such a big game changer? What happened? Were we all expecting that when it showed up?

Tiffany Petricini (10:01):
Well, you know, I think, so, there's different perspectives on this, and when I'm looking at this from sort of a historical point of view, it has a little bit of technical elements to it. So never before in human history have we had the computing power that we have now, and we have not had access to the amount of data required to make something this successful,

(10:24):
you know. And so it just sort of all came together in a very ripe moment to sort of create this tool, and it really did take the world by storm. I think I had tried chatbots before. You know, not ELIZA, but I have tried other chatbots. You know, you use them and they use different technologies.

(10:46):
When you talk with a company and you're sort of trying to do pre-troubleshooting before you get a human being. And none of them were as capable as ChatGPT was. Kyle, though, you have a different take on this, you know.

Kyle Chalupczynski (11:25):
Like you said, I think that all of those technology factors have to converge in order for this to happen. But seeing ChatGPT on my timelines, on social media, and thinking, well, no, no, that's not, that's just not possible, and you can't tell me that somebody told a computer to write a letter from a CEO to whoever and it actually came out and it made sense, right. So I think a lot of people kind of found themselves in the same

(11:49):
position where I was, where itwas like no, I have to see this
for myself, and I think that'sreally been a driving force here
.
I think OpenAI in particular isreally good at capitalizing on
those type of viral moments.
When we saw the Studio Ghibliimages of everybody and their
dogs and pets and literallyeverything imaginable was

(12:10):
Ghiblified. We don't really see other labs do that. Now, I shouldn't say we don't see other labs do that, because that's kind of a barometer for me for keeping an eye on things. For example, just in the last couple of days, I started seeing on the timeline videos of animals competing in human Olympic events, and so that, to me, is, okay,

(12:33):
look, there must be a new model out with new capabilities, because we didn't have this quality before. So now I know that I have to, you know, start pulling on that thread and going down those rabbit holes. But, like you kind of alluded to, Dr. Ford, this stuff happens on a weekly, if not a daily, basis, and there are so many different players in the game in addition to, you know, the core

(12:55):
AI labs. The volume of those kinds of viral moments we're seeing is a lot higher now.

Ralph Ford (13:05):
Yeah, it's amazing, and each and every day, this is a truism: you know, the AI you have today will only be, you know, it'll only be better tomorrow, and that's going to be the case for quite some time. Let's, you know... Kyle, you like to get geeky, right? So we all know about chatbots, right, we interact with them. They can do tremendous tasks for us.

(13:27):
But what's interested me is this idea of agents. Right, sounds rather ominous. So there are AI agents that are secretly out there, and I think they're behind some of the things that are happening in our lives, or manufacturing processes that implement AI. Can you explain to me what these agents are and how they

(13:50):
operate?

Kyle Chalupczynski (13:52):
I can explain my understanding, and hopefully maybe Tiffany can, or even you can, fill in some of the gaps, because that's kind of one of the buzzwords of 2025. We, you know, we started hearing this year that this was going to be the big thing, and I think that there is absolutely a lot of truth to that. So, whereas with traditional AI that we've all become used to

(14:13):
for the last two and a half years, you know, I have to think of, or have AI help me build, the perfectly worded prompt and make sure that it's got all of my criteria and stipulations in it, and then I have to go back and forth with the chatbot to continue giving it direction and continue giving it feedback. Agents are a lot more autonomous.

(14:36):
So agents, they have kind of a high-level task and they have access to a set of tools. Whoever creates the agent decides which tools they have access to, and then it essentially has the ability to make all the decisions for itself. So it knows that it has to achieve this high-level goal.

(14:57):
It uses the tools at its disposal. Sometimes it connects to additional agents. We're starting to see now that multi-agent swarms are more effective than single-agent approaches. But it's closer to kind of that fully autonomous silver bullet: I press the button and the work gets done.

(15:21):
And I want to be careful with how I mix terminology here, because I don't want to say anything that's wrong, and there's kind of some debate on this right now. But people have said that if you've used ChatGPT's o3 model, you've kind of experienced, on a smaller scale, what that agentic behavior is like. Because one of my favorite examples is, if you take an image

(15:44):
outside of any building or whatever, and you send that to o3 and you tell it to be a GeoGuessr, and then you watch its chain of thought and the different tools that it calls on to actually, in a lot of cases, figure it out. This is another one that went viral, another use case that went viral. It gets it right a lot of the time.

(16:04):
It can nail your longitude and latitude just from taking in all the visual cues from the image that you provide. But it's building code within the chat to help it do that. It's checking weather reports. It's checking things that you wouldn't even think to check to come to that solution. So again, some people will say it's not an agent, some people

(16:27):
say it is an agent. I just say that it helps give you an idea of how agents behave.
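For readers who want to see the shape of what Kyle is describing, here is a minimal agent-loop sketch. It is not from the episode and not any particular product's agent API: the tool set, the model name, and the reply convention the model is asked to follow are all illustrative assumptions. The point is the control flow: the agent is handed a goal and a fixed set of tools, and it keeps choosing its own next step until it decides it is done.

```python
# Minimal agent loop, as a sketch only (not from the episode, not a specific vendor's agent framework).
# The model is asked to reply either "TOOL: <name> | <input>" or "DONE: <answer>"; that convention,
# the toy tools, and the model name are all assumptions made for illustration.
from openai import OpenAI  # assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set

client = OpenAI()

# The tools this agent is allowed to use; whoever builds the agent decides this list.
def word_count(text: str) -> str:
    return f"{len(text.split())} words"

def shout(text: str) -> str:
    return text.upper()

TOOLS = {"word_count": word_count, "shout": shout}

def run_agent(goal: str, max_steps: int = 5) -> str:
    messages = [{
        "role": "system",
        "content": "You work toward the user's goal. Available tools: "
                   + ", ".join(TOOLS)
                   + ". Reply with exactly 'TOOL: <name> | <input>' to use a tool, "
                     "or 'DONE: <answer>' when finished.",
    }, {"role": "user", "content": goal}]

    for _ in range(max_steps):
        reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        text = reply.choices[0].message.content.strip()
        messages.append({"role": "assistant", "content": text})
        if text.startswith("DONE:"):
            return text[len("DONE:"):].strip()
        if text.startswith("TOOL:"):
            name, _, tool_input = text[len("TOOL:"):].partition("|")
            result = TOOLS.get(name.strip(), lambda s: "unknown tool")(tool_input.strip())
            # Feed the observation back so the model can decide its next step itself.
            messages.append({"role": "user", "content": f"Tool result: {result}"})
    return "Stopped without finishing."

if __name__ == "__main__":
    print(run_agent("Count the words in 'agents pick their own next step' and then shout the phrase."))
```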

Tiffany Petricini (16:34):
Yeah, Kyle did a really nice job, I think, of unpacking the idea that there are different ways to frame "agent." You know, so there's a lot there. So AI agents, in a really general sense, are just tools, you know, really basic tools that do things that we want them to do. They're like an agent that, you know... a sports agent is someone who gets you a job, so you could

(16:55):
think of it that way. Then there's this idea of agentic AI, and I think that this is sort of maybe more like the term, you know, if you're familiar with artificial general intelligence.

Ralph Ford (17:07):
And this is yeah.

Tiffany Petricini (17:08):
So this is this idea that there are these really super powerful, super autonomous AI that are acting without our understanding. In the news this has been big lately, because there are all sorts of reports about AI being deceptive and trying to blackmail the companies when they threaten to shut it down.

(17:30):
So this would be an example of the rhetoric that is sort of AGI-related. I think that's sort of the science fiction realm, you know. I think that there's agentic AI in the sense of a tool. There is what Kyle was talking about, which is incredible AI that has more autonomy, but there's no fully autonomous AI,

(17:51):
and when we start thinking about that idea of fully autonomous AI, we sort of are getting out of the scope of what's focused on the here and now: the tools that we have, intelligent or not, that still have really big impacts on the work that we do, the way that we think and the way that we learn.

Ralph Ford (18:11):
Yeah, I think that this is, you know, my understanding matches as well. You two are more in the expertise field than I am, but I've seen it in industrial applications, and the idea is you actually have agents that self-adapt, right. They take feedback, they have a goal in mind, and then they're able to adapt and figure out ways that we don't fully

(18:34):
understand how they work. I mean, we understand that they're changing the underlying algorithm on which they operate, but it's not always immediately obvious, and they're trying to make a decision, and, you know, you can imagine the use of that for good. You can imagine the scary part of it if it's used not for good as well.

(18:54):
But I see it in everything now, from, you know, my Microsoft Office, which comes up and tells me I can use agents to carry out tasks. I haven't fully figured that out yet. So that's interesting. I'm definitely, at least in the realm where I'm working, I see it in industrial applications.

Kyle Chalupczynski (19:10):
Could I jump in real quick, just because, Tiffany, you mentioned the recent articles about the blackmail and everything? That was actually something that we were probably going to get to later when we talk about, maybe, some of the concerns, but since you brought it up, I figured I'll jump on it now. I think, and I'm not saying that you were doing this, Tiffany,

(19:30):
but I think that what happens a lot of times is that we see articles like that, and I mean, I see them all the time too. That was the latest one, and if you read those articles, almost none of them actually explain the background of what was going on there. So for somebody, for an uninformed consumer, that's

(19:54):
going to sound scary, right? That's going to sound like, well, this is most certainly Skynet and we should not be using this. What, like I said, they almost never mention, and we'll use Anthropic as an example because they were actually the ones who did that study, is that a lot of times, when you see those scary headlines, it's because Anthropic is conducting very carefully designed experiments that

(20:17):
specifically put AI in that position. They're almost essentially trying to bait AI into doing something like, for instance, hey, if you threaten to turn me off, I'm going to, you know, blackmail the CEO or the developer, whoever is threatening to do that. But the entire reason Anthropic is doing that is so that we can

(20:39):
study those conditions and build better guardrails for the system. So I think that's one caveat that's important to keep in mind, because I think that we'll continue to see, you know, those types of articles. Anthropic's going to continue to do that type of research, but even if it's not coming from them, I think that's a risk that's, I think, particularly high as far as what the public perception of

(21:02):
AI is, and I'll leave it there so that we can talk about that later.

Ralph Ford (21:06):
Yeah, I mean, that's the classic red team attack sort of thing that you look at in cyber threats and in the military, where you're trying to prepare yourself against those threats. It makes a lot of sense.

Kyle Chalupczynski (21:20):
Yeah, it should be, hey, thank goodness Anthropic is doing this, not, hey, everybody, we can't have AI, because AI could do X, Y, Z.

Ralph Ford (21:28):
Well, somebody is going to do that, and that's why they need to do it, of course, right? So they need to be prepared. Well, let's switch to the academic world, and, as we know, ChatGPT immediately changed, first and foremost, writing assignments, but it is permeating everywhere. I can give you examples, I won't right now, but it has

(21:50):
changed the landscape. So let's talk. What do faculty need to know or think about in regards to AI in their classrooms? This is a huge question, but let's start, maybe, at the low level. Is it, you know, is this the devil that we should turn our back on? Beyond writing, what are the changes? You know, what's the advice you give to faculty who

(22:11):
come up to you and say, geez, this scares the heck out of me? Tiffany, why don't you go?

Tiffany Petricini (22:16):
Yeah. So in the research that I've been doing, I have been looking at student and faculty perceptions, and now we started looking at staff perceptions, and we just got a new result that really reinforces that faculty need to be very intentional about their decisions, both to include and exclude AI.

(22:40):
So the latest research that we're doing, it shows that using AI in a gen ed course, in this case it was public speaking, CAS 100, can actually increase the effectiveness of integrative thinking. So that's a gen ed pillar, and we can see that over time.

(23:00):
You know, having AI embedded and integrated in a course impacts integrative thinking positively. Now, the only sort of, I guess you would say, like, fence is if a student isn't exposed to AI in a way that they can actually integrate it in other courses. So, for example, if I embed AI in my class and I encourage my

(23:22):
students to use it, I'm teaching them how to use it responsibly, reflectively, ethically, and then another instructor just bans it for the sake of banning it, because they don't understand it and they don't want to deal with it. That is creating a problem. It's actually limiting students at the course-to-course level. Another research article that we published not too long ago

(23:45):
actually looked at that same sort of concept as a source of inequity among students. So it's creating these inequities, sort of in access, in all sorts of different types of ways, but also in thought and learning. And so it can be really hard, I know, for faculty who are already so burnt out and have so many things they

(24:08):
need to do and stay on top of. Administrative responsibilities are rising, class sizes are rising, and it can be easy just to say, you know what, I'm banning it. But it can have real, deep impacts on students, and so it's just something that faculty need to be aware of and mindful of.

Ralph Ford (24:30):
Well, if both of you could, maybe let's make it real. You know, what are examples of how you have used AI effectively, do you think, to improve not just your efficiency? That's great and actually really important. But how about student learning?

Kyle Chalupczynski (24:43):
I mean, I'll say that that's one of the reasons why I'm such a huge proponent of AI: because, like I said, I feel like I've been able to, maybe not completely, but certainly close some of the gaps in my understanding of instructional design and exactly what goes into building a well-designed course and how to actually achieve your learning objectives. I would say, just

(25:06):
to kind of tie in both questions: you know, to any faculty that are kind of feeling anxiety around this, I 100% get that, and it might not seem like it, but I came from that same exact place. I had the period where I was spinning my wheels and essentially accepting the fact that the rest of my life was going to be, or the rest of my career was going to be, chasing down academic integrity violations, because there was no

(25:28):
way that that wasn't going to happen, right. But I decided that, you know, that wasn't a good future, that wasn't a good way to spend my time in the classroom. That wasn't doing anything or going to do anything for the students. It was becoming increasingly clear that the majority of students were going to be using AI no matter what, and it also

(25:50):
became clear that not many students were inherently adept with AI. They were treating it like a traditional search engine, or like Google. At the same time, we were seeing an increase, like I said, in businesses looking to hire AI-savvy students. So the natural progression then was to require students to use it, but to make them use it in a specific way.

(26:13):
And again, I didn't start out doing everything at once. There was a whole lot of experimentation. To Tiffany's point, yes, you know, time is fleeting as far as, you know, do we actually have the bandwidth to completely redesign all of our courses? But I'll also say that it was a lot easier than I expected, right? So it was as simple as starting out and feeding all of my

(26:36):
assignments to AI and first seeing how AI handled them, right, just to kind of get a benchmark of what a student might do if they just did the bare minimum: handed it the assignment and said, do this for me. And, as you can probably guess, most of the results were A or B papers or assignments. So then the next step was, OK, now you know about all of these

(27:00):
assignments, suggest maybe three or four of them that I could revise and integrate AI into in some way, so that we're still hitting the core learning objectives from whatever that chapter or topic was, but we're also adding something to the end. And I was really lucky, or fortunate, that teaching MIS, you know, that's a major that really lends itself well to AI, because right now every company, or almost every company on the face of the earth
(27:24):
every company, or almost everycompany on the face of the earth
is trying to figure out, how do we use this technology to solve our problems, to make more money? And that's at the core of what MIS is all about, right. So that did, like I said, give me a lot of room for experimentation. But I will say that, just kind of out of curiosity, I've stepped outside of the borders of MIS a little bit, just to

(27:46):
see, you know, because, not that there's any one faculty member going, no, Kyle, you know, finance is going to be untouched by AI, but just out of curiosity, because early on we heard, you know, AI doesn't handle math so well. So I started testing things out like that, saying, OK, what would a finance assignment look like with just a little bit of AI baked in?

(28:08):
I think that there are a lot of opportunities that probably aren't readily apparent on the surface, just because we're not used to thinking in this kind of AI co-pilot relationship. But ultimately it's starting small, it's making little changes, it's accepting that, again, not all of them are going to work, but we can now iterate pretty quickly on different

(28:31):
things and try different things in the classroom, and even if things don't work, that's still a learning opportunity for the students, right? Because you can explain to them why something didn't work the way that it did, and then you're highlighting one of the limitations of AI. So the quickest way to kind of, you know, dip your toes in is to actually jump right in.
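For instance, the benchmarking step Kyle describes, feeding each existing assignment to a model to see what a bare-minimum submission might look like, can be as small as a script like the sketch below. It is only an illustration, not his actual workflow: the model name, folder layout, and prompt wording are assumptions.

```python
# A minimal sketch (not from the episode) of benchmarking assignments: feed each assignment
# prompt to a model and save what a "bare minimum" submission might look like.
from pathlib import Path
from openai import OpenAI  # assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set

client = OpenAI()

def baseline_response(assignment_text: str) -> str:
    """Ask the model to complete the assignment the way a zero-effort student might."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model works for this sketch
        messages=[
            {"role": "system", "content": "You are a student completing this assignment with minimal effort."},
            {"role": "user", "content": assignment_text},
        ],
    )
    return reply.choices[0].message.content

if __name__ == "__main__":
    Path("benchmarks").mkdir(exist_ok=True)
    for path in Path("assignments").glob("*.txt"):  # one assignment prompt per file, an assumed layout
        output = baseline_response(path.read_text())
        (Path("benchmarks") / path.name).write_text(output)
        print(f"Saved baseline for {path.name}")
```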

Ralph Ford (28:51):
Thanks, Kyle.
Tiffany.
Yeah, go ahead, Tiffany.

Tiffany Petricini (28:54):
Yeah.
So, like Kyle, I've used it.
You know, on my end, you know, I use it for design, I use it for playing the role of a student. So, for example, I feed it my syllabus and I might say, pretend you're a first-generation student: is there anything about this syllabus that could be more inclusive? Or this policy, or X policy?

(29:16):
And it's really helpful on the student end. I definitely incorporate it. I try not to do just, like, one-and-done assignments. It's embedded, because students, it's 100% true, students are using it. They're not going to not use it. They're just going to not tell you they're using it if it's banned. And so when you find a way to incorporate it, you're going to

(29:40):
be able to start teaching students about the thought process itself. So the evidence is coming out. You know, it's all over the place, but we are seeing that there are impacts on critical thinking and critical thought. But the studies, they're showing that if you think first and then you use AI to amplify that, it's even better than just

(30:01):
working on your own. And so that's where we need to come in, and we need to teach students that it's not about replacing, it's about refining. So in the public speaking course we use it from the beginning. So first I have them write an outline, and then I have them have AI critique their outline. I have them write a speech, and then I have AI critique that

(30:21):
speech. Then I have them have AI write a speech, and then they critique it. It's really important for them to see. It's sort of like what Kyle said, where you just have to keep trying it on your own, because you can't know the limits. Because AI is so personalized, like, the uses are so personalized, that you won't know that you've hit a limit until you get there.

(30:42):
Also, the Commonwealth Campus Teaching Support Team is a really, really great group of individuals with some great resources. And they have a resource where they talk about common learning challenges for students, and this is, like, across disciplines. So this is not just communications-related, and when I found that, I found it was a really good "in" as far as student

(31:04):
learning goes. So students, hands down, really have problems with brainstorming. I think so much of my office-hour time was spent with students trying to brainstorm topic ideas for a speech, and it's unnatural. I tell them that in the real world they're not going to pull a topic out of a hat. In the real world, if they're going to speak in front of an

(31:24):
audience, it's because they've been invited, because they're talking about something particular, you know. So brainstorming is so difficult, and I walk through exercises to show them how AI can assist in brainstorming. Organization is another learning challenge, and AI can help with that. Organization, revisions, proofreading: AI can help with

(31:45):
that. And you know, some of the arguments are that AI shouldn't replace an instructor, and that is 100% true. But on the other hand, we've been replacing instructors with teaching assistants to do these very types of things for a very long time. When we started putting TAs into classrooms, we were already starting to put distance between the instructor and the

(32:08):
student, and I think that AI, it's not a replacement for a teacher, ever, but it's also a great step and a great tool for students who are anxious or struggling. They can start there and follow up with their instructor. So that's sort of some ways that I've used it as far as student learning goes so far.

Ralph Ford (32:28):
Thank you both for your examples. If we can switch, and this is the trick, you know, switch from just using it to give the answer, that's the obvious problem, to using it as your mentor, your teacher, your learning assistant, your tutor. That's where real learning can happen.

(32:48):
And I'll give you one quick example, something that I tried, so, to your point, Kyle, you made me think about this. Can it do math? At least the one I'm using can, and I uploaded one. I took a circuit that I use in a class that I teach, very common, but it's third level, so it's not simple, and it's nonlinear, and I scribbled it on a piece of paper as legibly as I could,

(33:09):
put the currents and voltages there, and I asked it to solve the circuit and give me an explanation, and, frankly, it did as good of a job as I could have done as a faculty member. Now, it looks like a textbook solution. But then the really interesting part to me was, I started to think, well, that's okay, but you know, what's the next level?

(33:30):
There are different parameters in the circuit that make it do different interesting things, you know, and I won't get too geeky, but drive it into saturation or use it as an amplifier. And then I found I could start to ask it questions about what happens, the what-if. And then, if we can start to get that level of thinking, we get to the personalized tutor and the personalized learning.

(33:51):
Isn't that, you know... What fascinates me about that is, I think that's the goal we want, right? We're the teachers. A student would never get to that circuit without my help. They wouldn't know to just go look at that, but I get them that far, and then they can do some learning and prepare for the exam on their own. Anyways, a little bit of my own insertion there, but I think that's the real trick we're going to face.

Kyle Chalupczynski (34:14):
Yeah, I absolutely agree, and I think that there are definitely students that do understand that this isn't just the next, you know... this isn't just hype, and yes, there's hype around it, but this isn't just the next, you know, hyped-up technology. There are going to be worldwide ramifications here. The students that really latch on to that and really use the

(34:38):
technology intelligently, and how we're hoping and trying to teach them to use it, they do things that are just incredible. We've talked about having our minds blown on a somewhat periodic basis by AI. It happens to me with students as well now. And, that said, I

(34:59):
do think that that is a minority of students right now. I think that kind of the base assumption is, oh, another new technology, it's the same as virtual reality, social media or Bitcoin or whatever. But we know that it's not, right? We know that the world governments didn't pump trillions of dollars into making sure that Bitcoin or blockchain

(35:23):
was a success, right, or the same for social media or VR/AR. So it's really, I think, A, communicating that to students, but then, B, I think the next challenge is figuring out how to teach them to be curious. I think that, you know, I see, with those higher-performing

(35:44):
students, that seems to be the missing piece. They're naturally curious. They naturally want to go down those rabbit holes, and, you know, with AI you can go down as deep as you want with a rabbit hole. If a student isn't inherently curious, I think that's a real challenge. At least, in the last year specifically, what I've come to realize is kind of my next hurdle: how do

(36:07):
I teach them to be curious?

Tiffany Petricini (36:11):
Kyle, I'm really excited you brought this up, because the study that I'm wrapping up also looked at curiosity and the way that AI, integrated into the classroom, may impact curiosity. And students came into the class very curious, which was really, really great, and they left the class very curious, and so it

(36:34):
means that AI does not kill their curiosity. So just because they can turn to the tool, it doesn't mean that it kills it. But there was an interesting connection between curiosity and stress, so that it's not even about, like, sort of sparking their curiosity. You know, our students are really, really curious already

(36:54):
and they're interested, but it's about teaching them how to navigate the stress and sort of be mindful, I guess is what you'd say. And this goes back to what you said, Dr. Ford, which is that, you know, that's really our role as instructors. That is where we are moving toward. We aren't just these suppliers of knowledge who dump knowledge

(37:15):
into their brains and they walk away. You know, we're not just teaching them how to use the tools. That's not what it's about. We're teaching them how to use the tools effectively, and we're teaching them content matter, and so it's sort of like, you know, with education, AI is really, really finally pushing us

(37:35):
into a framework where we're educating the whole person, which is something we do, of course we do, at Behrend. But, you know, I think it really, really is starting to draw out that education can be, and should be, and needs to be so much more.

Ralph Ford (37:52):
Yeah, you know, I'll take it to the next level and I'll give you some stats, you know. And first, another story: I was speaking to a high-level industry leader, a Behrend graduate, recently, and she said, we don't hire anybody now without them indicating some level of understanding of AI. And there are plenty of surveys, you know, that show, LinkedIn,

(38:14):
Microsoft, all these companies, you know, a high percentage: they want AI skills. Not only that, students are afraid of AI, more so than faculty, more so than our staff. We've got the data to show that. And students want to be prepared for this AI world. So, you know, I'd like to dig into that further. And maybe the first question is, how do we ferret it out?

(38:39):
How does industry, when they're trying to hire somebody, you know, industry, and they're here interviewing a student on campus, how do they know the curious one who used it in a really good fashion from those who've been, I don't know, I'll call it posers, who are just using it and getting by, not using it as ethically? Is there any way for us to figure that out?

Tiffany Petricini (39:00):
Well, in my class, in CAS 100, as we go through these AI exercises, one thing that they have to do is an exercise where they put the AI skills that they've gained in the class into their resume directly. So that may be a specific assignment, that may be something they've built, and I think that teaches them the

(39:22):
language that they need to explain that, because that's what we do. You know, that's what we do within our disciplines. We have to teach students how to enter the conversations, and they have to know how to do that, and I think that's a really good giveaway to someone who knows how to use the tools effectively versus someone who is sort of a, you know, poser. I'm using air quotes for that.

Kyle Chalupczynski (39:42):
But you know someone who's using it.

Tiffany Petricini (39:44):
Well, they know. Like, Kyle, I really appreciate every time I get to sit in on one of his talks, because I learn every single time, and Kyle talked about imposter syndrome, but I feel, every time I'm sitting in on one of his talks, I feel like, oh my gosh, he knows so much. And, you know, so, when I go to professional development opportunities, there

(40:05):
are people who are just, they're talking but they're not really saying anything, and I think that now it's easy to get looped in, especially with LinkedIn. Everybody's an AI expert, everybody's an AI coach, everybody's an AI consultant, and it's easy to throw that label on there. But you start to see the people who really know what they're talking about and the people who don't.


Kyle Chalupczynski (40:33):
And that's part of knowing how to use the language and enter the conversation. And I'll just add that, I mean, I think that now, if we're talking about, like, a pre-screening, yeah, that becomes a little bit more difficult. First of all, I'll say that I love to hear, Tiffany, that you're doing that, how you're having them document their AI usage like that, because I'm doing something similar: they have to create an e-portfolio that focuses on, you know, some of their favorite ways that they use AI throughout

(40:55):
the semester.

Ralph Ford (40:56):
Yeah, I think I'd like to move on to the next subject, if you two don't mind, which is how you're bringing your expertise to our students and to a larger audience. So first of all, internally, I'll just throw this out here and you can both answer: we're going to create some, well, we're not going to, we have created AI certificate programs.

(41:17):
We've got an AI course here this fall which I've heard is already well sold out, and I think we're trying to get more sections of that. Then you're offering a continuing education program called AI Essentials for Professionals, a seminar here in the next few weeks. So talk us through what's going on on the curricular side, what you are developing, and what others here at Behrend are as

(41:38):
well.
I know there are many involved.

Tiffany Petricini (41:41):
Yeah, there is a lot right now.
There's a lot, and, you know, it's moving so quickly. I should say that the class that you mentioned is Humanities 220N, so that's called AI and the Human Experience, and we are already at 50 students, and that's two sections. So we increased the class size so it can take 40 now, but it's

(42:06):
almost full again, and that is a seminal course for both AI certificates. So if they are in the more technical one or if they are in the more general one, they still have to take this course. And I have been designing the course and I'm pretty excited about it. I have it almost finished. So I'm sort of, you know, I feel like almost it's like a

(42:28):
Christmas present.
I can't wait to get the class on the first day and reveal it to the students and let them unpack everything that we're going to do. It's fun, it's really fun. Something else that Kyle and I are working on, and just a little preview: University Park started something called the AI Arcade, and the AI Arcade is a resource for students, and

(42:51):
faculty too, but mostly it's student-focused, and it supplies students with pro versions of some of the major AI tools. So ChatGPT Pro, which is $200 a month (I've only used it through the TLT), Midjourney, and then Suno, and Suno is a music tool. And so we're working

(43:11):
on bringing that to Behrend. So we're pretty excited about that, because it will also help with some of the issues associated with equity. There are very few students who can afford $200 a month for a ChatGPT Pro account. I don't have one. But bringing the AI Arcade will help, it really will. It'll have a dedicated space, and we don't know where that

(43:32):
will be yet, probably the library, but it may be somewhere else. But it's a dedicated space where students can come in and actually use the tools and use them well. So those are two very exciting things I'm thinking about, but there are so many more.

Ralph Ford (43:46):
Absolutely, Kyle, you want to add?

Kyle Chalupczynski (43:48):
Yeah, I'll also say that I found myself getting excited for fall semester way too early. But yeah, as far as just the student level, you know, Tiffany and I have both talked about how we've integrated it into some of our classes a little bit. But it's not just us. There are, right now, close to 30 courses that we know of at

(44:11):
Behrend that have AI integrated in some way, shape or form, and we're actually working on getting a website together, because that's something that students have been asking for: after they've gone through my MIS 204 class, it's, Mr. C, what other classes can I take that have AI integrated? Right. We also have the AI certificates that we offer to students.

(44:31):
So those have been in development for about a year and a half, but they're officially launching in the fall. So we have a more technical one, the Certificate in Building Artificial Intelligence, and then there's the Interdisciplinary Certificate in Artificial Intelligence. So there was a lot of effort that went into making sure that those were cross-disciplinary, because that's kind of the way

(44:54):
that we see things are headed here. We're also, again, looking at what we can do for faculty and staff. I know Tiffany mentioned some of that. From a business partnership side, there's a lot of interesting stuff going on in our Innovation Through Collaboration program, which links students up with typically smaller, medium-sized businesses in the tri-state area.

(45:17):
We started giving students ChatGPT Plus access for those projects, and we've seen both the quality and the quantity of student work on those projects just absolutely skyrocket, and the client satisfaction has gone along with it. So we're starting to look at really making that part of the

(45:38):
core experience. It kind of organically happened, and the ITC program kind of morphed into ITC plus AI. But especially with one of our projects that we had with Recap Mason Jars over the last spring, we were able to really kind of refine that process so that we think we have kind of a repeatable and pretty solid business model for really adding

(46:02):
significant value, creating, you know, custom chatbots for organizations or things like that, stuff that, you know, they might be paying a consulting firm thousands and thousands of dollars for. Our students are able to do that. So, A, it's really cool to be able to see them provide value for local businesses. But it's also, you know, I told the students that were on the

(46:26):
last project, and they already had internships and even job offers lined up, so it wasn't necessarily that big of a, or, as far as they were concerned, not that big of a deal. But I said, absolutely, like, put this on your resume. The fact that you can build a custom chatbot for an organization makes you extremely attractive to potential employers.
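As a rough illustration of what a "custom chatbot" can mean at its simplest, the sketch below grounds a chat model in an organization's own notes via the system prompt. It is not the students' actual build; real client projects would typically add retrieval over larger document sets, and the file name, model, and prompt wording here are assumptions.

```python
# A minimal sketch of an organization-specific chatbot (illustrative only, not the ITC projects).
from pathlib import Path
from openai import OpenAI  # assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set

client = OpenAI()
org_notes = Path("org_faq.txt").read_text()  # assumed file of FAQs, policies, product details

history = [{
    "role": "system",
    "content": "Answer questions for this organization using only the notes below. "
               "If the notes don't cover it, say so.\n\n" + org_notes,
}]

while True:
    question = input("You: ")
    if not question:
        break
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print("Bot:", answer)
```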

Tiffany Petricini (46:47):
Something I want to add, you know, just sort of on this momentum that we're talking about: you mentioned the AI Essentials for Professionals, Dr. Ford, and that is coming up next month, which is a great opportunity. Kyle and I are teaching a class, it's on Zoom, and it takes you through just sort of the basics of AI, what it is, how to use it

(47:11):
and why it's important. Also exciting is that we are doing separate staff trainings. So my research team and I, what we've tried to do is, we've tried to make sure that we recognize that staff are integral to the student experience. Staff are integral to learning, and they also need to know what

(47:33):
AI is and how it works. But, you know, in higher ed work they're often sort of ignored, and so that's something that's been important to the whole task force. We have Jacob Marsh on the team, and we have other staff that are involved, and it's been really important to make sure that their voices are heard as we start to think about AI at Behrend. And I'm really excited, and I'm sure Kyle is

(47:54):
very excited too, to do staff training. Right now we have two ideas in mind for the summer, a sort of basics and then beyond the basics, and then, based on feedback, we hope to really start tailoring it to different staff needs as we move forward.

Ralph Ford (48:10):
Well, I really love the fact that, and I'd be remiss in not recognizing, there are a lot of faculty and staff, and there are students, on your task force. This, to me, is one that seems very much a self-forming group. I love to see that we didn't have to come in and say, hey, you need to look at AI. You all have really just said, we're going to lead the way.

(48:33):
That's the best sort of team that you can create, and so I'm really incredibly happy and proud of the fact that we are looking across the whole organization, from how staff can use it and the like. So you have my true kudos, to you and the entire team, and we'll want to make sure that we do recognize all the people

(48:55):
involved. I know we can't do it by name, but it is a significant effort. And that brings me to my next question. You've come forward with the idea of an AI center. What would that be? What sort of things would an AI center here do?

Kyle Chalupczynski (49:07):
Sure, I can try.
So, you know, I think, as you've seen kind of throughout the course of this discussion, there's a lot of great stuff going on, and we didn't even get a chance to talk about all of it. But to a degree it's kind of happening in silos. Tiffany and I aren't the only ones; even the AI task force aren't the only people that are excited about AI.

(49:29):
There's all these different pockets of excitement throughout the college. We need some kind of structure to kind of capture all of that, to be able to have a place to point people towards when they have questions, whether that's students, faculty, staff, community. There's so much that's already going on.

(49:50):
We need a way to, and the AI Center is our effort to do that, again, to kind of bring that to the surface, show what we're doing, and really position ourselves as the leaders of AI in this area.

Tiffany Petricini (50:05):
Yeah, I think everything that Kyle said is perfect, 100%. We need to position ourselves strategically, because we are doing great things with AI. I also think that one of the things that an established center, so a formal center, would do is really, really help make

(50:28):
sure that we have the resources available to develop the infrastructure to continue with our AI efforts for as long as we possibly can.
You know, right now we're learning as we go, and, you know, we're building the website and we're doing the trainings. But having an actual center, as a location that external partners and internal partners can see, would really, really be

(50:49):
helpful, for staff to be able to say, hey, I want to upskill, I'd love to know how to use AI for advising, and then also having someone from industry come in and say, I'd love to upskill my employees, and this is already happening, actually, by fellows who have been interested in this. Having a center as the central hub would, I think, really draw

(51:14):
in the importance, and hit home the importance, of needing to have the resources to keep this sustainable for the long term.

Kyle Chalupczynski (51:23):
And I'll also add, just to maybe bring everything full circle: right when I started out at Behrend, probably within my first year, I remember coming home and saying to my wife, like, the college just needs a business analyst, right? And of course that's the way I thought of things, because I was a business analyst, I was looking at things through a business analyst's eye. The college needs a business analyst, right?

(51:44):
Somebody that can, you know, sit down with a staff member and understand all of their pain points and understand all the different systems that they're using and come up with some better way to do it, to make their lives easier, right? And I saw all kinds of opportunities for that. I think those opportunities are now even lower-hanging fruit. And one thing that I've kind of envisioned for an AI center of

(52:07):
excellence would be, you know, having student resources, obviously, that are employed by the center. And my idea is, again, if you have those students put on their business analyst hats and do those things and solve, you know, maybe throughout the course of a semester, three or four different problems, and save faculty or staff tens or even

(52:29):
more, dozens, of hours, a student that can put that on their resume is going to get snatched up so quickly, because that's the generalist and the AI-literate student that everybody is looking for right now.

Ralph Ford (52:43):
Yeah, I can really see the vision that you both are painting, and it is truly expansive, and I know we're going to get there in one form or another. But I'm going to... we're coming to the close of our time. I'm going to ask you both a question. It's not a fair question, but I'm still going to ask it anyway.

(53:03):
It's fair in that I can ask it. But, you know, what's your long-term view? Do you think this is a fad? Is this going to profoundly change our lives? If so, how? And Tiffany, why don't you go first?

Tiffany Petricini (53:20):
Yeah, so I study media ecology, and media ecology is the study of the way that media affects environments, in a really, really simplistic phrasing. And so, 100%, AI is going to change the way that we

(53:40):
understand our world, that we relate to each other, and it's even going to change our own consciousness. So this happened with writing. Writing is a human technology, and the birth of democracy and the time of Aristotle and Plato and Socrates, and then Rome, all of those major developments in human thought were because of

(54:02):
writing. And then we had another revolution, and that was with the invention of print. You know, at that time knowledge was no longer interiorized, it was shareable, and we had all sorts of revelations and thought.
You know, artificial intelligence, 100%, is just as transformative and

(54:35):
revolutionary as these other transformations in communicative technology. It is going, 100%, to reshape the way that we think, we relate, and we learn.

Ralph Ford (54:45):
Wow.
Thank you. Kyle?

Kyle Chalupczynski (54:49):
Yeah, that is an extremely difficult question, and I'm not going to do it as much justice as Tiffany did. I'm a person that, and I'm sure we all are to a degree, but I love certainty, and so while I've been, you know, like I said, kind of a kid in a candy store with AI for the last two and a half years, that's all been intertwined with a little bit of

(55:09):
turmoil, because, whether it's, what does this mean for my job?
I think one of the last lists that I saw has business professors in the top 20 most replaceable jobs by AI, right? What does this mean? What should I be doing? What should I be pushing my four-year-old son towards? Right? There's so much uncertainty around this, and when we try to

(55:31):
think about how this is going to change our lives... did any of us have any clue, or even, you know, any inkling, that

(55:59):
social media, or Facebook specifically, could one day be deciding elections, right? So we're currently thinking kind of within the constraints of our current understanding, our current worldview, and the way that everything works. I think that we're going to see incredible stuff, right? The stuff that Tiffany mentioned.

(56:19):
You know, that's going to change everything, and other stuff too, right? One of the most fascinating areas for me is looking at what they're potentially going to be able to do with this in the biotech industry, and I won't go down that rabbit hole because, as I'm sure you can guess, it's a deep one. But there's so much exciting stuff that could potentially happen that we can't

(56:41):
even really wrap our heads around, I think. So then, if I had to say how is this going to change things for us? I think the one certain thing is going to be, there's going to be quite a few years of uncertainty here and just figuring things out. And, Dr. Ford, I think you follow Ethan Mollick; he says, you know, that even if AI stopped advancing today, we'd

(57:03):
still have five or 10 years of figuring out what exactly this means for us and how exactly we can leverage it, right?
But we know that's not changing, and we know it's not slowing down yet. And we know that, you know, the scaling laws that the AI labs have identified all seem like they're going to hold true for the foreseeable future.

(57:24):
That's what makes things... I don't want to, again, I don't use the word scary, because I think it's all extremely interesting at the same time, but there's certainly a lot of uncertainty.

Ralph Ford (57:37):
Well, both great answers, and it's always hard, because people are notoriously bad at predicting the future, and that doesn't mean that you didn't both just make some great predictions. We'll come back in five years and replay this and see how close we were. But you made me think about those early days when the Facebook came out and we were all scratching our heads as to why anyone would even be interested in such a thing.

(57:58):
So you're correct, these things just take on a life of their own, and we will see where they go. But we will navigate that uncertainty. I think you just have to embrace it and say, I see no other way, because it's not leaving us. It's going to be here, and we need visionaries and people who will jump in, like both of you. So, Tiffany and Kyle, I mean, thank you so much.

(58:20):
This has been an incredible, forward-looking, engaging conversation. Your work on the AI task force, working with all the faculty and staff here, is truly inspiring, and you're really looking towards the future, and that's what we need to be doing all the time. To our listeners, thank you again for tuning in to another episode of Behrend

(58:42):
Talks.
I am Chancellor Ralph Ford.
I appreciate you joining us.
Thank you, and we'll see you next time.