
December 17, 2024 | 36 mins


Generative AI is not just a buzzword; it's a game-changer in academia. Join us for a compelling conversation with Jules White from Vanderbilt University as we unpack the revolutionary impact of generative AI on higher education. Discover how this cutting-edge technology is unlocking new realms of innovation across academic disciplines, from nursing to interdisciplinary collaborations. Jules shares his pioneering efforts in integrating AI tools and infrastructure at Vanderbilt, enabling faculty, staff, and students to harness the power of leading technologies from industry giants like OpenAI and Anthropic.

The discussion takes a fascinating turn as we examine the concept of augmented intelligence in education and its broader societal implications. Generative AI, with its advanced capabilities and consumer-friendly applications such as ChatGPT, has captured the imagination of many. We explore the ethical considerations surrounding AI, particularly in addressing bias and ensuring diverse perspectives are included. By treating generative AI as a tool for augmented intelligence rather than a replacement for human decision-making, we promote a more thoughtful and responsible engagement with this technology, and we highlight emerging trends that academic institutions need to brace themselves for in a rapidly evolving landscape.

As we wrap up, we delve into the multifaceted applications of AI in both professional and everyday contexts. From AI-assisted meal planning to interdisciplinary collaborations in medicine and environmental science, AI is reshaping our approach to tasks across various fields. We tackle the misconception that generative AI can be seamlessly implemented without proper training, emphasizing the importance of mastering prompt engineering. Lastly, we discuss strategic challenges organizations face when integrating AI, underscoring the necessity of supporting leaders across disciplines to fully harness AI's transformative potential. Tune in to hear how embracing AI's creative power can unlock unprecedented possibilities in your world.

LinkedIn:  https://www.linkedin.com/in/jules-white-5717655/

Innovative Sim Solutions.
Your turnkey solution provider for medical simulation programs, sim centers & faculty design.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Disclaimer/ Innovative Sim (00:00):
The views and opinions expressed in this program are those of the speakers and do not necessarily reflect the opinions or positions of anyone at Innovative Sim Solutions or our sponsors. This week's podcast is sponsored by Innovative Sim Solutions. Are you interested in the journey of simulation accreditation? Do you plan to design a new simulation center or expand your

(00:24):
existing center? What about taking your program to the next level? Give Deb Tauber from Innovative Sim Solutions a call to support you in all your simulation needs. With years of experience, Deb can coach your team to make your simulation dreams become reality. Learn more at www.innovativesimsolutions.com or just reach out to Deb today.

(00:48):
Welcome to The Sim Cafe, a podcast produced by the team at Innovative Sim Solutions, edited by Shelly House. Join our host, Deb Tauber, and co-host

(01:14):
Jerrod Jeffries as they sit down with subject matter experts from across the globe to reimagine clinical education and the use of simulation. So pour yourself a cup of relaxation, sit back, tune in and learn something new from The Sim Cafe.

Deb Tauber (01:31):
Welcome to another episode of The Sim Cafe. Today we are here with Jules White from Vanderbilt, and Jerrod is here with us. Welcome to the podcast. Thank you so much.

Jules White (01:43):
Thank you for having me.

Jerrod Jeffries (01:44):
It's great to connect.

Jules White (01:45):
So maybe first, do you want to kick us off with just a little overview, Jules, and tell our listeners what you do and a bit of your background? Yeah, so I'm a professor in computer science, and I'm the senior advisor to the chancellor on generative AI and enterprise and education. It's a long title that basically means my job is to figure out all the different ways that we can incorporate generative AI into

(02:08):
everything from the classroom to how we go about our normal operations within Vanderbilt. So I have a group, and we've built the generative AI infrastructure for Vanderbilt, so all faculty, staff and students on campus have access to essentially unlimited use of all the models from OpenAI, all

(02:28):
of the stuff from Anthropic, stuff from Mistral, you know, Meta. And then we build all kinds of unique tools out of our research and deploy them across campus to individual groups like the Department of Alumni Relations or Endowment or Faculty Affairs, and we figure out what are the ways that we can incorporate it

(02:50):
to help do things, I think, that we couldn't do before.

Jerrod Jeffries (02:54):
I love that. There's a lot of things that I want to go into already there, but I mean, is there a specific initiative that you want to take, so for Endowment or Alumni Relations, one of them? Is there something there? One, when did you start with a lot of this? And two, is there a goal set out initially, or is it more just experimentation?

Jules White (03:15):
Well, I think that the starting goal for me is that I start all my talks, if you've heard one of my talks, you've probably heard me say this, but I start all my talks with the same thing, which is: if you'd stopped me on the street November 1st of 2022, and you'd said this thing called ChatGPT is going to come out at the end of the month and here's what it's going to be able to do, I would have told you, trust me, I'm a professor in computer science.

(03:35):
I will not live to see that level of advance in computing. And then, a month later, something came out that I didn't believe would be possible in my lifetime. And so my starting point is helping people appreciate the importance of what's taking place, and that it's this huge opportunity, I think particularly for higher education, because

(03:56):
the technology is so generally applicable and does sort of certain foundational things that are going to impact every single discipline, that this creates this opportunity to go and reinvent every discipline, to think of things that we can now do that were impossible before, that the technology makes possible. And that creates an opportunity for higher education to go and figure out what are the ways that we're

(04:18):
going to change the way that the world works in all these disciplines. Now industry is going to do it simultaneously, but the problem in industry is they're often at capacity and there's not as much of the culture that there is in education to go and think from a research perspective of how do we reinvent things and then also measure, does the reinvention work? So I start from the perspective of, like,

(04:38):
let's show capability and possibility and make sure that people are informed about what all the building blocks are and the different sort of ways of composing them and solving problems with them. And I built a lot of Coursera courses out of this idea. So I have about 500,000 students in my online classes on Coursera. I teach the building blocks, and then we go in and work with the

(05:00):
groups to understand what their problems are and which ones can realistically be tackled with these building blocks. And then often what we see is they have a better sense of, like, which problems to go after, how they might use the building blocks, and our goal is to support sort of how they compose and put these things together and also to know when there's something that, realistically, we're just not going to solve for them.

Jerrod Jeffries (05:24):
Okay, great. And then, I'm going back out a little bit, but being at Vanderbilt, are you working with one department, are you working with the overall school or university? How does that work? How does your role work there?

Jules White (05:35):
Yeah, well, we work across all schools and departments. So we're actively trying to engage and work with anybody that's, you know, looking to engage with generative AI, and people all across campus are excited. So we have thousands of people across probably every school using the infrastructure that we've built. And then as a team, we go in and work with different groups or individual

(05:56):
faculty. So we just did a project with a faculty member in nursing, and she is becoming an expert on the prompt engineering for how do you take a research study and use generative AI to assess if the research study has bias in its design. And so she was working on this. She came to us and said, can I get help automating this and

(06:18):
scaling it up with the infrastructure? So we helped her set up automation so she could go and scale up this idea and the analysis, and then we worked together on that. Or we worked with HR, and they were interested in, can we build a generative AI assistant? Or you can think of it as a chatbot or agent that can answer questions about someone's benefits, so they can go and say,

(06:40):
well, I'm in this interesting situation, can I use this benefit, can I use it in this way? And so we worked with HR to build out that chatbot, and it will eventually go out to everybody on campus. So we're sort of behind the scenes trying to provide the infrastructure, but then, broadly, we don't actually

(07:00):
know everything that everybody's doing with it. I can guarantee you that it's like thousands and thousands of use cases. And whenever we've done things, like we worked with the nursing school, Patty Singstad and Regina Russell in the nursing school ran a big study within the nursing school to collect use cases, and when they collected the use cases, we started looking at them and we were really fascinated, because there are lots of things that we hadn't seen before.

(07:21):
We didn't know people were doing them. There were lots of things that we suspected people were doing and we had done, and so we discovered those. But then we go and work with specific departments on very targeted projects. So we have things that are happening on our infrastructure that we don't even know what it is, that it's probably creating efficiencies and all kinds of new innovations. And then we have things that are targeted, where we know

(07:41):
exactly what the problems are that we're tackling, and those are usually focused on individual groups.

Jerrod Jeffries (07:47):
Wow.

Deb Tauber (07:48):
What steps is Vanderbilt University taking to address ethical concerns related to generative AI, such as bias, privacy and misinformation?

Jules White (07:59):
Yeah, well, I think the first thing is we have to realize that a lot of the way that we approach and think about ethics from AI is really based on the way AI used to work before generative AI came along. And so a lot of the way that we used to use AI was we used to use it to give us the answer or decide in place of the human. And with generative AI, really the way you want to use it, I

(08:23):
would talk about it as augmented intelligence rather than artificial intelligence. Like, you don't want to use it to replace your intelligence; you want to give humans tools that help them to go and solve bigger and harder problems. So I describe it as like an exoskeleton.

Jerrod Jeffries (08:36):
Can I even interrupt you? Sorry to do this, but just for our listeners, can you explain the difference between what AI is versus generative AI? For our listeners, thanks, and I want to get back to exactly where you were going.

Jules White (08:47):
Yeah, yeah. So AI is like an umbrella term. Generative AI is a type of AI, and generative AI is where all the excitement is right now. So I had been doing AI for a long time and I still didn't predict or believe that something like ChatGPT would exist, and what I say is that generative AI really took things

(09:08):
to just this huge leap in advance. So generative AI is a type of AI, but it's where all of the excitement and the growth and the explosion in capability is happening. And you can take generative AI and you can pair it with traditional AI, but generative AI is like when you go into ChatGPT or you go into Anthropic Claude or Microsoft Copilot or

(09:29):
Meta AI. That's the technology that's powering all of those.

Jerrod Jeffries (09:36):
So maybe it would also be fair to say that when most people hear the term AI, or when they're seeing something on a consumer level, that would be mostly generative AI.

Jules White (09:47):
Well, it's not clear, because what we see is that, because generative AI is like this rocket that's taking off, everybody wants to say that it's AI that's taking off, because, if you've been doing AI, you want to claim that AI is the thing that's taking off, because you want to be lifted by generative AI. In reality, when people say we're doing AI, you don't know

(10:09):
what they're actually doing. And many, many of the software vendors, companies and things that are claiming that it's generative AI, while they're claiming AI, they may not actually be doing generative AI, and probably most of them aren't doing all generative AI, or maybe they're not really even doing something that unique. And so it's hard to differentiate, and this is part

(10:31):
of the reason I like to talk about generative AI and not AI. Despite everybody wanting to talk about AI, I think we should really be differentiating and focused on generative AI.

Deb Tauber (10:42):
What about the ethical concern?
How are you addressing that issue?

Jules White (10:47):
Well, I think this is an important part where there's a difference really between AI broadly and how we've done AI in the past. AI broadly, and how we've done it in the past, is to have it automate and do something in place of a human. And when you start doing it in place of a human, you run all kinds of risks, like it has bias and it makes a bad decision.

(11:08):
And the truth is that all the bias comes from us, right? As human beings, we produce the training data or the source of the training data, and our biases then get trained into the models. With generative AI, if you look at the way that we use it through these chat-based interfaces like ChatGPT, it doesn't have to work the way that we did before, where we use it to replace our own decision-making.

(11:29):
We can use it as a tool where we augment our decision-making. So I talk about it in terms of augmented intelligence rather than artificial intelligence. So the goal is to work with the generative AI and have it support your decision-making, but not to replace it. And so it turns out you can do simple things with generative AI that you couldn't do with just AI before, to help eliminate

(11:50):
bias.
So one thing you can do is you can give it data and you can say, give me three conflicting interpretations of this data, and it can argue many different sides of the data. And then suddenly, as a human being, you have to go and confront and decide which one is the right one and why. Because normally we go in and we have confirmation bias; we say this is what it is and you don't go and confront other

(12:11):
perspectives. And so generative AI can go and give you many different perspectives on data, an issue, a situation, but only if you go and ask for it and only if you go in and push for that. So one of the things we want to do and we try to do, and I do this in all of my online courses, is I teach, like, the concept of

(12:32):
Don't go to it and ask for an answer. Go to it and ask for many different perspectives, and then take all those perspectives back and use them to inform your decision-making and think more deeply about the issue. And so I think we combat it primarily by teaching human beings to use it in a thoughtful way that isn't about replacing their own decision-making. And I tell students this. I say, look, if your sole job in life

(12:57):
is to copy and paste some question from somebody else into generative AI and then copy and paste the answer back, like it's replacing your intellect or thinking, it doesn't benefit you. And that's the one thing I can guarantee we can replace with AI: we can automate that piece. But what we can't automate is your sensibility, your aesthetics, your emotions, intuition, and that's what you

(13:19):
want to put together with it, not replace with it.
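For listeners who want to try the "conflicting perspectives" prompting pattern Jules describes, a minimal sketch with the OpenAI Python SDK might look like the following. The model name, the sample data, and the exact prompt wording are illustrative assumptions added here, not something specified in the episode.

    # Minimal sketch of the "give me conflicting interpretations" pattern.
    # Assumes the openai package (v1+) is installed and OPENAI_API_KEY is set.
    from openai import OpenAI

    client = OpenAI()

    # Hypothetical data summary; in practice this could be any dataset description.
    data_summary = "Simulated patient fall rates by shift: nights 4.1%, days 2.2%, evenings 3.0%."

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Act as a thought partner that challenges assumptions."},
            {"role": "user",
             "content": (
                 f"Here is some data: {data_summary}\n"
                 "Give me three conflicting interpretations of this data, and for each one "
                 "explain what additional evidence would support or refute it."
             )},
        ],
    )

    # The human still weighs the competing interpretations and makes the decision.
    print(response.choices[0].message.content)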

Jerrod Jeffries (13:23):
That's the first time I've heard that. I really like that. I mean, different perspectives, and removing confirmation bias, of course, is huge for any, you know, student or faculty or above, and I really love that. You know, you can put it that way, because students of course want to take the shortcut to most things, or, I guess, humans,

(13:43):
right. We try to create these mental shortcuts or whatever it is, and, you know, trying to put it point blank like that. You know, generative AI can replace you if you're just copying and pasting, you know, Control C, Control V, then that could be the case. But do you see, I mean, as you've spoken to, that it's rapidly evolving. Do you anticipate anything within the field in the next

(14:06):
five years, or what would universities or different types of academic institutions do for such a change in the landscape?

Jules White (14:14):
Yeah, well, I think there's what's going to happen, I would say, in the next year or two. The big buzzword, I think, that we're going to see a lot of, and rightly so, because it's going to be a tremendous change, is this idea of agents or agentic AI. And if you go back to what these tools were built for, it all goes back to this paper that really came out of Google, called

(14:37):
the transformer model, and the title was "Attention Is All You Need." And the idea was they built this architecture that's fundamental to all these tools that we use for generative AI, and part of what they were doing it for is they were testing it on translation, so translating human language from English into Spanish or

(14:58):
something like that. But what it turns out is that, once these models get big enough, they can translate our thoughts and ideas and goals into computation, and this is kind of a crazy idea. But, like, you go in and you say, okay, here's the data, I want a visualization of this, and it can translate and control

(15:20):
the computer to visualize the data in the way that you're requesting. It's not doing it itself; it's actually running code behind the scenes. It writes code and executes it like a software engineer to produce the visualization. Or you can go in and say, take this and turn it into PowerPoint for me, and it can write code to turn it into PowerPoint. Or, if you tell it, like, here are a couple of systems,

(15:42):
you can go and look things up over here and you can search this database, it can go and do it on your behalf. And so what agentic AI is about is, like, we go and we give it access to computing systems, and then we describe what are the goals that we're trying to accomplish, and it goes and runs all the computations and reacts and interacts and reasons about

(16:05):
what's happening in order to achieve our goals. So you can imagine universities have all kinds of operations where that type of capability will help support what we do, because we struggle so much with all these software tools that don't exchange data. I'm sure this isn't a problem in the healthcare domain, we don't have any tools that don't exchange data, right? And

(16:26):
we go into these tools, and this tool doesn't have the button I need over here, but this one does. But now I've got to figure out how to get the data over there, and they don't work together, and it becomes this big problem. And this has the potential to eliminate a lot of the problems, because you go in and you say, okay, here's what I need to know or do, and then it figures out how to move the data around and how to perform the computations on your behalf,

(16:49):
and then you get back what you need to do the work. And that's what agentic AI is going to be about: that type of automation to support, you know, humans in getting work done without having to struggle through all these bad software tools that computer scientists like me, you know, write and give out to the world.
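As a rough illustration of the agent idea sketched here (added for readers, not a description of anything Vanderbilt said it built), the following shows the basic mechanics with the OpenAI Python SDK: the model is offered a tool, decides whether to call it, and the surrounding code runs the call and feeds the result back for a final answer. The HR-benefits tool, model name, and prompt are hypothetical.

    # Minimal agent-style sketch: the model may call a tool; our code executes it.
    # Assumes the openai package (v1+) and OPENAI_API_KEY; all names are illustrative.
    import json
    from openai import OpenAI

    client = OpenAI()

    def lookup_benefit(name: str) -> str:
        # Hypothetical stand-in for querying a real HR system.
        return f"Benefit '{name}': available to full-time staff after 90 days."

    tools = [{
        "type": "function",
        "function": {
            "name": "lookup_benefit",
            "description": "Look up an employee benefit by name.",
            "parameters": {
                "type": "object",
                "properties": {"name": {"type": "string"}},
                "required": ["name"],
            },
        },
    }]

    messages = [{"role": "user", "content": "Can I use the tuition benefit this semester?"}]
    first = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
    reply = first.choices[0].message

    if reply.tool_calls:
        call = reply.tool_calls[0]
        result = lookup_benefit(**json.loads(call.function.arguments))
        # Feed the tool result back so the model can compose a final answer.
        messages += [reply, {"role": "tool", "tool_call_id": call.id, "content": result}]
        final = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
        print(final.choices[0].message.content)
    else:
        print(reply.content)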

Jerrod Jeffries (17:09):
And can you use that interchangeably with agents, or is agents a different thing?

Jules White (17:13):
Yeah, agentic AI is really about agents. So agents, they sound really complicated, but it actually looks very similar behind the scenes to what you do in ChatGPT. So if you go into ChatGPT and you say, like, tell me step by step how to bake a chocolate cake, and now tell me the first

(17:35):
step, it'll say, go do this. And then you could come back to ChatGPT and say, you know, here's what I just did. Or you could take a picture of what you just finished mixing up in the bowl and give it the picture, and it would say, okay, now go do this. And it would react through the conversation, and, as you had a conversation telling it how you were baking the cake step by step, it would tell you what the next step was and the

(17:56):
next step, and you could give it, you know, feedback in text, or you could give it photographs, or you could talk to it. And the same thing is happening, except, rather than a human being going out and carrying out the stuff, like, okay, I just mixed together the eggs and the flour, it's telling a computer system, go and do this. But whereas we can't directly talk to the computers, because we

(18:18):
don't understand the language directly, it can. And so it can go and write and directly translate into the language of the computer, and behind the scenes it's essentially having a conversation with the computer, in the same way that it would have a conversation with you about, step by step, how to bake a cake and a recipe. And that's the foundation of it.

Deb Tauber (18:40):
Jules, what do you use it for on a day-to-day basis?

Jules White (18:44):
Yeah, so, I mean, really all kinds of things. I mean, one of the things that I think is so valuable to anybody is just using it to... We often rush off into, I'm going to solve the problem this way, or I need a solution that looks like this. And so I've tried to get in the habit of, before I rush off to do something, taking a step back and saying, am I doing the right

(19:07):
thing in the first place? You know, what are other ways that I could solve this problem? Or is my strategy just fundamentally off? And I take time to do that exploration, more of that thought partnering, as people call it. And so if I have a problem, I'll often be biased towards the solution that I've used in the past, and I'll go in and I'll say, give me five solutions for how I could solve

(19:28):
this problem, and then that helps me to stop and think, is this the right way? And that helps inform my thinking. Or if I have a really important email, my goal is not to have it write the email for me. Like, just this morning, I was writing an email and I was trying to think about the best way to communicate what I wanted to convey to the other side, and I said, let's talk about the

(19:51):
psychology of the different ways that I could convey this idea. [No transcript available for this segment.] I use it as a software engineer to write code

(20:14):
all the time. I use it to do data analysis. I use it to do extraction of data. So, a great example is you could give it a photograph of a hospital room and you could say, has anybody fallen in this picture? Is a nurse in the room? Is there an infusion pump in the room?
Is the family in the room?
Is the nurse at the workstation?

(20:35):
And you can take one photograph and you can essentially keep extracting information and data from it. You can give it a screenshot. One of my favorites is, I just did a class on meal planning, and the challenge for me with meal planning is that I have these grand plans, and then I come home at the end of the day and I'm tired, and I talk to my wife and she's like, it's your day to

(20:56):
cook according to the meal plan, and I'm like, great, let's order out. And, like, when it hits my actual schedule, the reality of my schedule doesn't work with the meal plan.
So I take a screenshot of my calendar. I give it to ChatGPT and I say, design a meal plan around my calendar. Look at my calendar and read it. Design a meal plan around my calendar so that easy meals or

(21:19):
leftovers are available on days where I've got a long schedule. And, like, that's the ability to go and take two things that I can't fuse together easily otherwise, or to adapt a plan on the fly based on, you know, what's going on. Those types of things are really valuable to me in simply even eating healthier or saving money on eating out or not

(21:42):
wasting food.
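To make the photograph-extraction and screenshot ideas above concrete for readers, here is a minimal sketch that sends a single image to a vision-capable model and asks the kind of hospital-room questions Jules lists. The file name, model choice, and question wording are illustrative assumptions added here, not details from the episode.

    # Minimal sketch: ask structured questions about one photograph.
    # Assumes the openai package (v1+), OPENAI_API_KEY, and a vision-capable model.
    import base64
    from openai import OpenAI

    client = OpenAI()

    with open("hospital_room.jpg", "rb") as f:  # hypothetical image file
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    questions = (
        "From this photo, answer yes or no with a one-line justification each: "
        "Has anyone fallen? Is a nurse in the room? Is there an infusion pump? "
        "Is family present? Is the nurse at the workstation?"
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": questions},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )

    print(response.choices[0].message.content)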

Deb Tauber (21:44):
Right, I used it just the other day. I'm having a five- and six-year-old little boys' birthday party on Saturday, so I went ahead and typed, what are good activities to do for a five- and six-year-old birthday party, and it came up with a treasure hunt and piñatas and, you know, some of the stuff I hadn't thought of in a long time.

Jerrod Jeffries (22:06):
So it really helped guide me to, I think, what is going to be a great five- and six-year-old birthday party. Yeah, absolutely, and even thinking back to yours, Jules, it's like you just take a picture of what's in your fridge and tell it to make a recipe with what's in there. Absolutely, absolutely. And I guess that's where we're seeing all these different types of wrappers, right, when it's

(22:26):
someone's just saying, ok, have,have recipe AI, and then they
use something with within.
Yeah, and then, as a professorof community science, how do you
approach this interdisciplinarycollaboration towards fields
like medicine or environmentscience, or maybe even
humanities, so on?

Jules White (22:44):
Well, you know, I look at it as, like, I can provide building blocks and capabilities to support, but I'm never going to be knowledgeable enough in the domain to be able to really have an impact. So my goal is, like, to inform people about the art of the possible and get my examples close enough that there can be some of those

(23:06):
early adopters in that department or discipline who can see what I'm showing them and then know how to take it and apply it in the impactful way within their department. So, like, you know, whenever you get those early adopters, what they do is... I really like to work with examples, concrete examples, because I feel like a lot of the discussion of AI is so nebulous, like I don't even know what it is. Like, I listen to

(23:28):
some of these talks and I'm like, I have no idea what they're talking about. And I'm in computer science. You know, it's so nebulous. What does this mean? But I like concrete examples. And so I think when you show concrete examples and then you get an early adopter who can then go and build concrete examples for nursing or any other discipline, then that becomes something really powerful.

(23:51):
And if you show somebody, look, I can take a photograph and I can do this, they can say, I can take a photograph of a hospital room and I can do this other thing that's really important. Or simulation's a great example, where I learned something that I'd never thought of when I was at the GNSH conference, where somebody showed me that they'd done a simulation and they turned on ChatGPT voice mode and they put it, basically,

(24:14):
in the room and they just had it listen to the simulation. And then at the end they said, now act as the expert trainer that's evaluating the simulation, and using this framework, and I apologize because I'm not expert enough to know what the framework was, give the team feedback on how they did in the simulation. And it gave a fantastic analysis of how they performed,

(24:36):
what they'd done really well, what they were a little slow on, and what they could maybe practice to improve on in the future. And now I learned something that I'm going to go and take back, because we do similar types of things in, you know, simulations and cybersecurity. They're called cybersecurity tabletops. We do this in classes. You know, you have role playing, all of these types of things.

(24:56):
So, you know, the goal is to really, I think, exchange examples that can inspire, you know, that we can each be inspired by.

Deb Tauber (25:05):
Essentially cross-pollination of ideas and thoughts. Yeah, to move the science further.

Jules White (25:11):
Absolutely.

Deb Tauber (25:14):
Now, in your role as an advisor, what challenges have you encountered in bridging the gap between administrative strategy and the technical complexities of generative AI?

Jules White (25:25):
Yeah, well, I think one of the challenges that you see, so I do a lot of work with industry, and a lot of the challenge you see in industry is that basically they go and they give generative AI out without training, and I believe it's a huge misconception that you can just give this out to people without training and expect them to use

(25:47):
it effectively. And there's all this denigration of prompt engineering, that it's just messing around with little words, and I respectfully disagree. I think it's much more about learning to solve and think about how to use generative AI to solve problems and understanding the capabilities and the building blocks. And what I see as a problem is people go and just unleash it without training to support it, and when you do that, what you

(26:10):
have is people that go in and they use it in this very surface way, and then you hear all of the discussion of, well, what's the return on investment? Right, what is it really buying me? And, like, that 11 minutes in the day, or whatever gets quoted, is the real benefit, and that's not the benefit of it. That may be the benefit, if all you can do is use it in a very surface, sort of superficial way, is you can get 11 minutes. And

(26:32):
so that's the biggest challenge, is so many places rush to give it out without real supportive training. And I think the second piece that's a big problem is that a lot of places don't give sufficient time. Because you need to give training, but then you need to give people time to go and practice and experiment and work with it, and many, many places don't do that.

(26:52):
And then they come back and they say, what's the return on investment? And the return on investment is actually an interesting challenge, because many of the things that you can now do with it, we don't even have a baseline to know what to compare against, you know. So those are the challenges. People expect these big, instant, like, monetary, quantitative results when we don't even know what the baseline is, and there

(27:15):
wasn't the proper training and time and everything given to do it. And then also a lot of people go off and focus on, like, the things that replace things that we already do, and trying to do it more effectively. But the thing that I'm interested in is, like, if a CEO goes and uses it as a thought partner, and they make one better decision for the company, that pays for it, probably for

(27:36):
everybody, but those ROIs aren't captured, and so I think that's a lot of the challenge too.

Jerrod Jeffries (27:42):
I love your analogy on your foundational building blocks for every use case, right? You're not ever saying do this or do that, you're just saying this is the way it has capabilities. I guess maybe the saying is you lead a horse to water, right, but what it does is up to them. This has been fascinating, and so I think one of our last

(28:05):
questions, so this again goes back to... because you are the senior advisor to the chancellor, that's correct, right, at Vanderbilt? Yes, yeah, those are some big shoes to fill. So again, appreciate the time here. Jules, do you see a lot of complexities around administrative strategy, and this comes from your CEO question as

(28:26):
well, and how to overcome those? Can you just give me a little more color within those, or how you're kind of filling your role?

Jules White (28:36):
I think one of the most important things from a strategy perspective is, one, you really have to focus on realizing that it may not be the people that are doing AI or computer science, data science, in your organization. They may not be the leaders in this. Some of the leaders may come from there, but there may be leaders all over who are

(28:57):
innovating within your organization. So one of the sort of key things is figuring out how do you find those people and how do you support them, and support them in going and building out and inspiring the next wave of innovators. And that's sort of... you have to sort of come up with a strategy that creates those ripple effects of innovation, and it may not be as

(29:17):
easy, like, a lot of times, because of the structures and foundations in organizations, certain groups want to say, we own this, we are the leaders, that's it, we do AI, it has to go through us. And that, I don't think, works well in this space because it's so interdisciplinary. I think budget is another challenge, because people want to think of, like, pay for X out of here, but when the impact is

(29:42):
so broad, nobody knows who should pay for it and how, and so that creates issues on figuring out who needs to pay for this. Leaders really need to understand it themselves, and understand it themselves by using it. And if a leader is not actively engaged and does not know how to use it

(30:03):
themselves, I don't think this is a technology that they can effectively design or strategize with. And so what we've seen, one of the things that's kind of been amazing at Vanderbilt, is a lot of our leadership has gone and trained themselves on it and they use it. But then you go into organizations that say, oh,
I know about AI, I've learned all about it, and they've watched

(30:24):
all these slide decks that have it at a high level, and they can't really effectively build strategy around it. So I think those are some of the important sort of things that have to be worked out when you're dealing with it.

Jerrod Jeffries (30:37):
Well, and on that, I mean, one, I love that. But that's also not just within academic institutions, right? That should be across every single organization in the world. But I think, if you're fine with it, we should also put a link up to your Coursera so all our listeners can touch on that too. We'll put that in the show notes, but I think... sorry.

Jules White (30:57):
Yeah, yeah, yeah, that would be great.
Appreciate it.

Jerrod Jeffries (31:00):
Yeah. Well, yeah, I mean, half a million people taking the course. I mean, there has to be something; something's good there.

Jules White (31:06):
Yeah, the biggest one has about 340-ish thousand. I just put the link in to it, and then I have another 15 or so courses on this that make up the rest of the 150,000.

Deb Tauber (31:18):
What did you think when you started to use the generative AI? Because you're a forward thinker in your field, it had to be kind of overwhelming.

Jules White (31:29):
Well, you know, actually it's interesting, because I had a graduate student who I owe a huge debt for telling me to go and pay attention. And so, before ChatGPT came out, he was looking at the earlier GPT versions and he said, this is amazing. And I was like, yeah, yeah, yeah, and I wasn't paying much attention. And then ChatGPT came out and he immediately was texting me

(31:51):
and saying, you have to go look at this, you have to look at it. And I went and looked at it and I was like, oh, that's interesting. And I started off kind of, like, you know, playing around, you know, write a poem about this. Ha ha, that's humorous. And then my father was a professor of creative writing, so I called him up. I'm like, this is fascinating. And so we were on the phone, I remember it, like we were talking

(32:12):
on the phone and coming up with things to say to it. And the one that really got me to where, wait a minute, this is different, was we said, imagine a world that didn't have odd numbers. What are all the things that would have to change? And it gave this incredible answer to what happens if odd numbers go away, what is that, and how does that impact the world,

(32:36):
and what do we have to do differently? And, like, that's not a question a computer should be able to answer, and it could, and it answered it really effectively. And then we started digging deeper, and so then it was just like the most exciting, fun thing that I've done, you know, for the last, for my entire career, basically, is exploring

(32:56):
the depth of capability of this spaceship that landed. Wow, thank you.

Deb Tauber (33:03):
Is there anything you'd like to leave our
listeners with?

Jules White (33:07):
I would say, realize that whatever you've heard about generative AI and what's happening, it's completely underselling how important this is going to be in our lifetimes. And as one concrete, simple example, I can take out some
leftover sushi in my fridge.

(33:27):
I can take a picture of it and I can say, tell me how to make it and give me a complete list of ingredients. And it can tell you how to make it. And if you had gotten together, like, 100 PhDs and given them all the supercomputing in the world, and you had said, now make a system where you can take a picture of an arbitrary thing and it will tell you how to make it, it wouldn't have been
possible before this came out.

(33:48):
And now you can do it on your phone with two sentences. And that is one of, like, an infinite number of things that it can do, and it's just really limited by your creativity and critical thinking about how you use it and innovate with it. So the future is out there, and it's not going to just be
computer scientists driving it.

(34:09):
It's going to be everybody and every discipline and their creativity and what they can figure out to do differently or make possible because of this. And so I encourage people to go and engage with it, really take a deeper look at it, and think not about using it like internet search, like ask it a question and get an answer, but thinking

(34:29):
about solving problems in conversation, and how could I do something very differently than anything I've ever imagined with this technology?

Deb Tauber (34:39):
Thank you.
You've inspired me to ask some different questions.

Jules White (34:43):
Yeah, of course you should check out my course.

Deb Tauber (34:45):
Okay, and it's available on that link that you
just sent.
Okay, perfect, thank you so much for the time.

Jerrod Jeffries (34:53):
This has been fascinating, Jules. I mean, it's really appreciated. Yeah, thank you so much.

Deb Tauber (34:58):
Jules, I mean, it's really appreciated.
Yeah, thank you so much.

Jules White (34:59):
We appreciate you, we appreciate what you're doing, and I will absolutely take that course. Okay, and you can be reached on... where? LinkedIn's a great place to find me, or just search Jules White, Prompt Engineering for ChatGPT, and you'll find me on Coursera.

Deb Tauber (35:16):
Perfect, all right.

Disclaimer/ Innovative Si (35:18):
Thank you so much, and happy simulating. Thanks to Innovative Sim Solutions for sponsoring this week's podcast. Innovative Sim Solutions will make your plans for your next sim center a reality. Contact Deb Tauber and her team today. Thanks for joining us here at The Sim Cafe.

(35:44):
We hope you enjoyed. Visit us at www.innovativesimsolutions.com and be sure to hit that like and subscribe button so you never miss an episode. Innovative Sim Solutions is your one-stop shop for your simulation needs. A turnkey solution.