Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
We are willing to adapt to the current technology and change our approach to teaching and learning to hopefully make them better pharmacists in the future.
Speaker 2 (00:10):
I love the idea of having the education personalized to perfection.
Speaker 1 (00:14):
I think we could see personalized education happen, with programs that are tailored to the pace of the student, by utilizing AI and technology in that space.
Speaker 3 (00:30):
Welcome to Tech Travels, hosted by the seasoned tech enthusiast and industry expert, Steve Woodard.
With over 25 years of experience and a track record of collaborating with the brightest minds in technology, Steve is your seasoned guide through the ever-evolving world of innovation.
Join us as we embark on an insightful journey, exploring
(00:52):
the past, present and future of tech under Steve's expert guidance.
Speaker 2 (00:58):
Welcome back, fellow travelers.
In today's podcast, we're discussing the role of artificial intelligence in higher education.
Today, we're thrilled to have the prestigious honor of hosting Dr Caitlin Alexander, a clinical associate professor in the Department of Pharmacy Education and Practice at the University of Florida.
Dr Alexander is deeply involved in critical care medicine
(01:18):
and other prestigious pharmacy associations, with interests ranging from critical care management to infectious diseases, and her innovative use of AI in pharmacy education has set new standards, making her the perfect expert on today's topic of AI in higher education.
Dr Alexander, welcome to the show.
It's a pleasure to have you.
Speaker 1 (01:38):
Thank you so much for
having me on, Steve.
I'm happy to be here.
Speaker 2 (01:41):
Welcome to the show. I really want to dive into this.
Recently, you were recognized with the prestigious AI Teaching Integration Award for using artificial intelligence in advanced pharmacy practice experiences in ways that have not traditionally been used in the classroom.
Can you share a little bit about how you first decided to
(02:05):
integrate AI into critical care, and what really inspired this innovative approach?
Speaker 1 (02:13):
Yeah, absolutely.
I'm really fortunate to be at the University of Florida, which has a lot of support and momentum behind artificial intelligence, and also behind integrating artificial intelligence into teaching and learning. I started this journey last summer, never really having interacted much with
(02:36):
generative AI, and started to really gain an interest and see the possibilities and the uses it could have to, I think, advance education for students and for faculty.
Through that, I found a faculty learning community at UF that was focused on harnessing AI for teaching and learning.
I participated in that last fall, and it was really a huge
(03:01):
eye-opener and learning experience.
Some of us described it as drinking from a fire hose, with all of the ideas and technology we were introduced to, of how we can really bring AI into the classroom and into our teaching.
So from that, I started to get a lot of creative ideas, networking
(03:25):
with others in the group that had similar interests, and developed this idea of how I could start to bring artificial intelligence to my experiential rotation.
As a clinical faculty member, I have didactic teaching responsibilities in the classroom.
I also take fourth-year pharmacy students and residents
(03:46):
on rotation in the trauma ICU, where I practice as well. This is a really unique opportunity where I'm working with a small group of learners on rotations, and we collaborate on our topic discussions. I may have a group of five to seven learners during a rotation block, and we focus on different
(04:08):
topics related to the care of trauma ICU patients during that time to add to their learning experience. I've been teaching the same topics for every single rotation block for a really long time, and I thought AI would be a creative way to reimagine what that topic discussion looks like.
(04:29):
So that was the initial idea: how can I spice things up, make it more interesting, but also integrate AI for the students?
Speaker 2 (04:39):
That's an incredible use case, and I wonder, just give me a little bit of background. Before this journey started, what was your exposure to things like generative AI?
Did you already have a real fundamental knowledge of it?
Was it something you were around in the academic environment, where you had a foundational knowledge
(04:59):
of it, or was it something you had to get into, learning some of the foundational building blocks and understanding what it is before you could learn how to apply it?
Where did you really start with this?
Speaker 1 (05:10):
I had no previous use, so I am by no means an expert, and I have been on this journey trying to learn as much as I can over the past year or so now.
That started with: what is generative AI, how does it work, how do these models work, what can they really do
(05:31):
and what are the possibilities?
That's where I started from, and honestly, a lot of my own self-research, if you will, webinars and other opportunities that have really started to come up within teaching and learning in higher education, have helped a lot to build some of that
(05:54):
foundational knowledge. And again, the faculty learning community that I was part of really provided a lot of that.
We heard from people at the University of Florida that are AI experts, right, and learned from them what it was and then, ultimately, how we could use it.
And so I'm really an end user here, and I've gained a lot of my
(06:18):
ideas and experience now just through playing around with AI, seeing what it can do, seeing what it can do for me and for my students and learners.
Speaker 2 (06:30):
And really the goal
of it was.
You know, my look at thearticle was the students
responded positively to the AIactivities.
But the goal of it was so thatthe students could understand
the uses and limitations of AIand evidence-based medicine.
Kind of walk me through.
What was it about that AI?
That they needed to learn thelimits around the use cases
(06:52):
around that?
What was the goal of that?
Speaker 1 (06:55):
Yeah, so we're really in an interesting time here, right?
AI is pretty new for most people, my students included. These current cohorts that are in graduate and doctorate programs, like our PharmD program, didn't have the same experience that undergraduate students are having now, where AI is being integrated into maybe their K through 12 or undergraduate curriculums.
(07:18):
It's still very new to my student cohort, and so they haven't had a lot of experience and exposure, and honestly, their past experience has been "don't use it, it's cheating", right?
So they've kind of closed their minds off to the uses of AI, especially when we're talking about classwork or an assignment
(07:39):
from a professor.
So, in this case, I wanted to expose them to what AI can do, but also to the fact that, especially in health care, there are a lot of limitations.
If you're asking for medical advice or a question, the information may not be completely up to date, it may
(07:59):
not be fully accurate, and so you really need to trust but verify.
And of course, we're seeing the models get better and better with the information they're providing and the accuracy, sourcing and citing of information.
But, especially initially, there was a huge concern with these hallucinations.
I was reading a journal article that I knew wasn't in the model
(08:22):
previously, and I put it in and asked it to give me a journal club or a synopsis of that journal article, and it completely made it up, right?
And so I want the students to know that, yes, this is a tool that is out there that they can use, and it will be very influential in health care during their careers, but they
(08:43):
need to know that they also need to combine it with their own knowledge and expertise, and verify the information, the sources and the conclusions, before they bring a recommendation to the bedside that would impact patient care.
So ultimately, that's what we worked through in this topic discussion together: giving them short cases that
(09:05):
they could put into a generative AI tool, seeing what the response is, and then, in real time, I facilitate discussion with them: what do you think about that?
Is that what you would recommend?
Why or why not?
What do our guidelines say?
What does the primary literature say?
And let's literally dig in and critique what you're getting out from your generative AI.
Speaker 2 (09:28):
It definitely seems like that's a wonderful approach, because you're looking at AI as a tool, and you're also having the students evaluate it to see if, in fact, the information is correct based upon certain data. You mentioned sometimes the data model is not perfect and it gives inaccurate information.
Sometimes there are hallucinations, and it's up to
(09:49):
the person who is the subject matter expert, or who is more of the practitioner in the field, to really look at it and make more of an informed or data-driven decision.
I wonder, how did the students react when they heard, "you know, I couldn't use ChatGPT or any of the other AI tools for classroom work,
(10:09):
but here I get a chance to actually do some AI work. This is blowing my mind."
Like, what was their reaction to this?
Speaker 1 (10:16):
I think that was their reaction.
They're like, "you want us to use AI? What?" They were surprised, right?
They've never been told, "yes, I want you to use this as a tool", and I think they thought it was really cool.
I think it shows that we are willing to adapt to the current technology and change our approach to teaching and
(10:39):
learning to hopefully make them better pharmacists in the future.
So I had some that were really excited, especially those that were maybe already using AI, and some that were apprehensive as well, because they have been told, you know, "don't use this, stay away".
They had never even had the opportunity to play
(11:02):
around with it, to know how it works or what to do with it, what to expect.
So there was definitely a mix, and a mix of experience that students have right now with how to use it.
Speaker 2 (11:16):
So it seems like there might have been a little bit of a skills gap amongst the cohort, right?
Some probably were very proficient using some sort of AI tool, and some were not, so I'm sure you probably had to have some sort of ramp-up period for some of those folks before they could start evaluating it.
How did you narrow it down to maybe a specific AI tool? What was the decision that went
(11:41):
into how you looked at the landscape of artificial intelligence tools and language models, and how did you narrow it down to maybe one or just a few that were really going to be fit for purpose for your particular use case in the classroom?
Speaker 1 (11:57):
Yeah.
So when I first started this idea, I immediately went to ChatGPT, just because it's readily available using the free, unpaid version, the 3.5 model. I wanted it to be accessible, or I needed it to be accessible, for all of our students, right? We can't expect them to pay for an additional service or AI tool.
(12:20):
So one consideration is just the cost associated with some of them, and that being a limitation when you're asking a student or a learner to engage with the tool.
And then the other thing was the purpose, right?
There are other AI tools that I've played around with. One that I considered, for instance, for this assignment is
(12:46):
Perplexity AI.
That one is a little bit better at giving some of the sources of its information.
It's meant to be more of a search tool of the literature, along with giving kind of an overview.
So that's another one to consider, and I think any of the kind of open AI models could work well for at least the type of assignment that I adopted, and I didn't want to be too prescriptive.
If the students did have experience, they were
(13:09):
welcome to use another tool of their choice, but to me ChatGPT is the one that's most common.
Students know the name, and if I wanted them to get experience with maybe one tool, that was the one I went with.
Now that's also changed already, right?
The University of Florida now supports Copilot for both
(13:30):
faculty and students. That is secured once we're logged in behind our GatorLink login, and that is what we as faculty are being encouraged to use with our students in the classroom now as the generative AI tool. So going forward, what I've been doing now is utilizing Copilot for these assignments.
Speaker 2 (13:48):
I was just going to ask you, now that you've been able to prove this prototype in the classroom, having a profound impact on the learning experience for the students and also solving real-world use cases, I wonder what the
(14:09):
implication has been across the college faculty.
Of course, we know that AI is really starting to move into higher education, and I'm very curious, now that we see the emergence of ChatGPT and Copilot being integrated and secured in specific environments like universities, how it's been playing a role
(14:31):
within the faculty, whether in efficiency, course development or something like that. Could you talk a little bit about how it's now starting to bleed into other areas within the faculty of the university?
Speaker 1 (14:51):
Yeah, I think that this is a huge topic with faculty. One area is exam writing.
(15:26):
It can be really challenging and very time-consuming to come up with a multiple-choice exam of questions.
It's one of the things about being faculty you don't always think about.
So getting ideas for questions, generating questions from AI based on content, has been really helpful.
It can also be used in course development and course mapping,
(15:48):
thinking of how you would create or approach new course content, goals, writing objectives and assessment instructions.
So I've come up with the assessment I want my students to complete, but: help me just write up the instructions so that they're clear and the students know what they're supposed to be doing.
It also helps me summarize information for presentations.
(16:09):
If I have a PowerPoint slide that's super wordy, I can put it into Copilot and tell it to make it into shorter bullet points.
That saves me a lot of time and effort coming up with it myself.
So there's a lot of uses there just in the faculty tasks that we're completing, and then I think there's a whole other
(16:30):
realm of possibilities of how this is going to help support research in the future: data analysis, and qualitative analysis of large volumes of feedback, for instance, which our generative AI models are really good at summarizing, and in a short amount of time.
Speaker 2 (16:50):
I love the idea of having the education "personalized to perfection"; somebody had mentioned that phrase at one time.
You know, keeping an eye on how it's going to allow us to tailor these resources to create more of an adaptive learning experience that really has an ebb and flow to the student's academic journey.
(17:12):
We know that professors and university teachers have a lot on their plates.
You mentioned things like multiple-choice questions, grading, and being able to synthesize large amounts of data and summarize it quickly, offloading that heavy lifting onto an AI model, which allows you to free up time to focus on the things that really
(17:33):
do matter in education.
And I know that there have been some big players in the market.
There's been huge growth within AI in education.
I know Microsoft, Google and Facebook have really been pioneering a lot of development in the AI
(17:53):
space.
But I want to ask a question around the ethical uses. Maybe you've talked to other university professors, or maybe the faculty or the university is talking about the ethical use of AI in education.
It always seems that in technology we use it, it's great, but then you get into education and it's a different conversation.
There are many more levels of complexity.
(18:13):
Can you expand on a couple of things you might be able to talk on around this use case, around ethical use?
Speaker 1 (18:21):
Yeah, I mean, there's a lot of concerns and challenges with implementation and making sure that you're implementing appropriately.
One of the biggest things that comes to mind initially is just the confidentiality factor, right, especially when you're using an open AI model.
I'm working in health care as well, so you obviously don't
(18:44):
want to put any patient information into the model.
It sounds obvious and straightforward to, I think, those that are in the know, but a student that's on rotation and has a patient-specific question may not think about putting that information into the model.
Same thing in education, though.
In higher education we have FERPA laws that we follow, and
(19:07):
so we need to maintain student confidentiality with any information that we're putting out there into the model.
So I think, number one, that's a huge ethical issue and concern just with generative AI use and how students may use it.
There's other challenges that faculty see.
Academic dishonesty is another big issue or concern that comes
(19:30):
up: students utilizing AI to complete their assignments and not putting in the work, the time and the effort themselves. And we know that the AI checkers, if you will, are not accurate.
It's very difficult to tell if a student completed an assignment themselves or if it was written by ChatGPT.
(19:50):
We'll never know or be able to prove that, right, in an academic dishonesty situation, and so a lot of professors are kind of turned off by that and don't know how to approach it.
And personally, I think that it's about education of our students
(20:13):
on the appropriate use: having them cite the use of AI when it is utilized, sharing how they utilized it for assignments, and being open and honest with them when I'm using AI, so they can see that modeled for them, and that way it can help ensure, you know, an authentic student
(20:34):
assignment and assessment.
So those are some big challenges that we're facing.
The bias that we see in the responses you receive
(20:55):
from generative AI is another kind of hot topic.
And then there's the potential for misinformation, incorrect information and hallucinations, and how that may impact the students' learning and overall education.
If we are encouraging use of these tools, we want to make sure that they're ultimately getting the information we want them to get to achieve their curricular outcomes.
Speaker 2 (21:17):
You mentioned the biases in the AI, so talk a little bit about what are some biases that you're trying to safeguard against.
Many of our listeners, roughly 50% of them, may or may not be in the technical field, so for them to
(21:37):
understand: what is an AI bias?
What is it you're looking for when you're trying to guardrail against that?
Speaker 1 (21:44):
I think a kind of simple example that comes to mind is that I utilize AI to help me come up with, for instance, patient cases on certain disease states that I'm trying to teach in the classroom, and it can give me a starting point of a patient scenario, with details to include, that I can then expand upon and use for case-based learning with our
(22:06):
students.
There are certain disease states, though, that tend to occur maybe more frequently in certain patient populations, certain genders, certain professions.
And if you prompt generative AI, that's when I see, for instance, those biases come through, because
(22:26):
the patient scenario that it comes up with is always going to fit that mold.
So that's one example, and I think we just want to be careful with our students, making sure they understand and know that that's not always the case 100% of the time and
(22:46):
that there may be certain bias introduced there.
There's also bias in, if you're asking a question, the types of information that you're going to get out, which again may lead you down a path to a potentially incorrect answer for a patient that doesn't fit the general
(23:06):
mold.
Speaker 1 (23:08):
And I talk a lot about patient care because, ultimately, I'm teaching pharmacists and we're preparing them to take care of patients upon graduation, and so I really want to make sure that they are prepared to appropriately utilize the information that they're getting, to make the best decisions for their patients, in all of the
(23:28):
teaching that we're doing.
Speaker 2:
When you mentioned patient care: as you look upon the future and the landscape of AI and where this is really headed, where would you see an ideal scenario, the perfect complement of AI with the right type of clinical analysis, the right type of clinical research, that really
(23:48):
drives an enhanced patient experience?
What do you see, what do you envision, what does the future look like for you?
Speaker 1 (23:55):
I think that's a big question, Steve.
There's a lot of opportunity.
I think there are certainly a lot of AI platforms already live and out there that are tailored to patient care, for instance, helping to make a diagnosis: you put in some de-identified patient
(24:16):
information to help come up with a diagnosis and a treatment plan on the spot.
And I don't think that we're that far away from seeing that really go live in a widespread capacity, where it's integrated into our electronic health systems.
If I have a resident that's rounding on a patient and they
(24:37):
don't know the answer to a question, they could pull up this chatbot, essentially, within the EMR, ask a question and get an answer: what medication am I supposed to prescribe?
What dose is correct for this patient?
What would you recommend?
So I think there's a lot of opportunity there to help.
(25:01):
Another area in healthcare is with writing and documentation.
I'm sure you understand or know that physicians and providers are incredibly busy.
They see a lot of patients throughout the day.
All of those patient visits need associated documentation, and what I see from my colleagues is that they're able
(25:24):
to see all the patients and do the clinical work, but then they still have 50 notes to sign or write at the end of the day. AI can be really helpful there, and I think we're already seeing it implemented in healthcare to help with the documentation piece of patient visits, to make that more efficient as well.
(25:46):
So lots of things going on in this space.
I certainly am not going to pretend to have all of the answers or ideas.
I'm sure there's a lot of other things happening too, but those are just some of the things that I'm seeing, or potentially foreseeing, in practice.
Speaker 2 (26:04):
And how do you see the evolution happening within AI for the things you have within your classroom setting?
You mentioned pharmacology, you mentioned patient care.
This is a great win for you, because you've been able to showcase that it is a very valuable tool: you can have a very positive outcome when you include AI in
(26:29):
a proper classroom setting with the proper guardrails and instructions.
What do you see as the next venture into prototyping something like this, but maybe a little bit different?
Speaker 1 (26:40):
It's the same thing in education: so many different opportunities.
I have a ton of ideas of other ways that I personally would like to incorporate AI into my teaching and my courses.
One of the next projects I'm working on is potentially enhancing students' own self-reflective
(27:01):
behaviors by utilizing AI, getting them to think more deeply about their own self-evaluations, kind of interviewing with a chatbot to get to that deeper level.
Inherently, I thought that AI should be avoided for reflective writing, and I'm actually finding now the opposite,
(27:24):
that it can be a really exciting and powerful tool there.
So again, just lots of ideas there.
On a higher level, I think we could see personalized education happen, right, with programs that are tailored to the pace of the student, interacting with a person via video or voice. That would be huge for education, among other
(28:03):
possibilities of how we could implement it. For my students, I'm thinking about practicing their interactions.
How would they interact with a patient?
How would they counsel a patient?
What questions would they ask them?
Right now we simulate all of that in a skills lab environment, sometimes with simulated patients, or we also hire
(28:26):
patients to come in and play that role, and I think in the future we could do that with AI.
So lots of possibilities and, I think, lots of changes for higher education and for learning in the future.
Speaker 2 (28:43):
It's funny you mentioned the cognitive personas, about the difficulty of dealing with patients in a hospital setting, right?
I think that's an interesting one: you get all types of personalities, right? Happy, sad and all the different ones.
It's interesting that you get some people to come in to play the role of the patient, and you say, okay, go over here and try to
(29:05):
diagnose this person, and the person's in a really bad mood or something.
When you mentioned self-reflection, maybe I'm not really too sure about what that one is.
So when you mentioned having students use AI for things like self-reflection, or not using AI for self-reflection, what does that really mean?
Help educate me.
Speaker 1 (29:24):
Yeah.
So I want our students to be lifelong learners, which means that they have to have the ability to self-evaluate: what their strengths are, what their weaknesses are, where they need to continue to grow.
Throughout the pharmacy curriculum, and particularly on rotations, when they're on their experiential rotations with me,
(29:46):
they fill out their own self-evaluation alongside my evaluation of them, and what I see a lot is that they just click through the boxes of the evaluation and don't add a lot of additional detail or examples to support why they think they're doing so awesome in one
(30:08):
area, why maybe they ranked themselves lower in another, and what they're going to do about it if they aren't performing well in a certain area.
And so what I foresee, or what I'm trying to implement, is that I've created a prompt where they can interview with AI on these
(30:29):
domains that I'm evaluating them on, and the AI prompts them back with questions: tell me an example of when you did that well; what was the outcome; what would you do differently next time?
It gives that feedback that I'm hoping they can utilize and put into their own self-evaluations,
(30:51):
and reflect deeper than what I'm currently seeing. Unless I'm very specific, which I've learned in the past too, about "I want you to come with three strengths and three weaknesses that you're going to work on for the next three weeks or until the end of the course", students tend not to come with a lot of specific examples or areas
(31:14):
for growth, and so I'm hoping that AI can be a tool that we can utilize to help them with that.
What I was sharing about my hesitation, though, is that reflective writing, to me, really needs to be about you coming up with your own thought process on things, right? And so, using generative AI to write a self-reflection,
(31:36):
I was like, well, that shouldn't be allowed.
That was my gut instinct, but with this approach, I'm seeing that the students are providing the examples.
The AI is just asking them questions back to make them think more deeply about the example and, potentially, what
(31:56):
they could have done differently to improve, or do in the future to improve.
So that's what I'm really excited about.
Speaker 2 (32:03):
That's interesting.
That almost dovetails right into my next question as we start to come to a conclusion.
A majority of our listeners are technical people, though some are non-technical.
What would you love to see from the tech community: people who are working on the large
(32:25):
language models, people who are working with different versions of artificial intelligence, people working in the generative AI space?
What is your message to them?
How can we improve AI in education?
What are some things that we could be looking at from a future perspective, things that we should be focused on building into our programs now and into the future?
Speaker 1 (32:48):
Yeah, I think there's a lot of things out there that I probably have not tapped into, so I apologize if these things already exist and I just don't know about them or haven't used them yet.
But I think support for giving feedback, which I know we have some of already, and grading students more
(33:10):
efficiently, especially in really large courses, is something that is difficult in higher education. Really being able to give formative feedback to students, so utilizing AI to help with that, is one area that I would foresee being very impactful for student assessment overall.
(33:30):
Then I think growing AI applications that can be tailored more to a specific course, right?
So that's another area: I want to maybe take my course and make its own GPT, right?
I know I have the capability of doing that, but right
(33:52):
now I'm limited just from a cost standpoint in being able to make that available to all of my students and the cohort, and so making access more freely available.
Again, with this announcement from OpenAI, it sounds like they're looking at that model going forward, where hopefully we'll be able to make some of these GPTs for our courses.
(34:15):
That would be really impactful for student self-learning, right, where they can interact on their own outside of class and study with it and do more there.
And in the healthcare space specifically, I think just more accuracy is needed, honestly: tailoring those models to medical conditions and the
(34:37):
medical literature.
Again, I know that that's out there and people are working really hard on that.
It's just not something that's readily available right now to the end user in healthcare taking care of patients.
And what I see right now is that there's still a lot of inaccuracies, where I, as
(34:59):
kind of the expert in the field, don't trust the initial response, and I'm nervous at this moment in time for my learners that they're not going to have the expert knowledge to critically evaluate that response when it's not the best choice for their patient, ultimately.
So I think in that healthcare space, it's really more about
(35:21):
the accuracy of the information that we're getting.
Speaker 2 (35:23):
Great answer, and I wanted to just
double-click on this one here.
When you mentioned healthcare, it seemed to be a very broad lens that you can look at it from, and you say that there are data models that are very inaccurate, data models that have incorrect information or maybe some misinformation in there.
From a healthcare perspective, who would you be looking at
(35:43):
as, kind of, the trendsetters or industry leaders when it comes to AI?
Would it be something like health insurance companies, healthcare institutes, pharmaceutical companies?
You don't have to mention a specific company, but
(36:04):
who would be an industry leader for you to look at as a North Star, a shining light, of
this is kind of what we're striving toward?
Speaker 1 (36:12):
I think that's a
great question.
I don't know that I have an awesome answer for you, Steve.
I can tell you what would probably be most impactful, though: working with our electronic health record and electronic medical record companies to implement the AI.
When it comes to developing the model, I really want to
(36:35):
see evidence-based medicine incorporated into there, right?
Our latest guidelines and primary literature.
That's what we really base our decisions on in practice, and that's my kind of barometer for coming up with the best recommendation for patients.
(36:57):
It's not looking to a health insurance company to tell me the answer based on their formulary.
We have to take that into consideration when we're making treatment decisions for patients, because obviously the financial aspect is a huge part of whether or not they're going
(37:18):
to take their medication.
But ultimately, I want it to be whatever is going to be best for the patient and their condition.
Does that make sense?
Speaker 2 (37:31):
Yeah, it does.
You kind of want, like, an unbiased opinion, almost, right?
Insurance companies, of course, are probably going to skew some sort of predictive or prescriptive analysis based upon current trends and what they would like to see, based upon their actuarials or something like that.
So it's interesting.
I'm kind of curious now.
(37:51):
You mentioned the electronic health records that are being kept, but then also still being able to use AI to look at a broad set of data and then still have, almost, kind of a board or peer review of the data, where everyone goes back and forth to see if, in fact, this is the
(38:14):
best outcome that we're looking for, for something that is very patient-centric, correct?
Speaker 1 (38:17):
Yeah, and I think
that's where the money is, right?
If we can synthesize real patient data and outcomes into an AI model and then work from that, that's what's going to be huge in changing healthcare going forward.
And I know at
(38:40):
UF we have researchers that are working on building those models within specific patient populations and trying to collate all of that data into a model, and so I think those are the areas that are going to be really impactful for patient care and really help us look at outcomes, make
(39:03):
informed decisions based on patient outcomes, and then ultimately improve care.
Speaker 2 (39:08):
I think that's the
biggest message, of course:
just trying to improve patient care.
Dr Alexander, I really want to thank you for your time today.
Thank you for sharing your thoughts and insights with our listeners.
I know this is a topic that I've been wanting to dive into more deeply.
I hope to have you back on the show again.
I would love to keep up with your journey as to how you're progressing with AI in the
(39:30):
classroom.
Your students also have this great experience where they go, "Yay, we could use AI in class.
Yes, Dr Alexander is awesome.
This is the best class ever.
Please take our class."
I don't know, something like that, probably, right?
Speaker 1 (39:42):
Me too.
Speaker 2 (39:43):
Good, good
evaluations would be awesome.
Wonderful.
Well, again, Dr Alexander, thank you so very much for this opportunity.
I look forward to the research you're doing.
How can people follow you to keep up with some of the great things you're doing there at the University of Florida?
Speaker 1 (39:57):
Yeah, absolutely.
So you can feel free to reach out to me via email.
It's readily available on our website through my faculty profile, so that's probably one of the better ways, other than, you know, your typical channels like X and whatnot.
I'm at KAlexander4218 if you want to look for me or give me a
(40:17):
follow.
And yeah, I would love to connect with anybody who's interested in chatting.
Speaker 2 (40:22):
Wonderful.
Well, Dr Alexander, thank you so very much for your time, and thanks, everyone, for listening.
This has been an illuminating show.
Cheers.
Speaker 1 (40:30):
Thank you.