Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Rebecca Hogue (00:05):
Welcome to Demystifying Instructional Design. This is the new season, and this season we're going to focus on the use of generative AI, in particular in instructional design. Welcome, Jeremy. Can you start by introducing yourself?
Jeremy Tuttle (00:19):
Yes. First of all, thanks for having me back. I was in the last season and I had a great time, and I'm so thankful that you invited me back. I'm Jeremy Tuttle, I'm the Director of Learning Design for Niche Academy, and I lead a fantastic team in the creation of many, many tutorials.
Rebecca Hogue (00:41):
When we last talked, ChatGPT was just starting, and you said you were going to wait until ChatGPT-4 came out before you tried it out. My question is: how did that go? Did you try it out? Do you use it?
Jeremy Tuttle (00:56):
Yeah, yeah, we tried it out, and it did a great job with the text generation side of things that we were looking at, but it didn't quite meet our needs in the way that we thought it would. We have a very particular style in which we write, and we have very specific goals that we want to hit as we're putting together our learning material. When we asked ChatGPT to spit out something to help us in our process, we found that we were spending more time editing what had been spat out to bring it into our style, and then also spending more time fact-checking the information, than if we had just done it ourselves. So we don't use it to generate learning material, but we have used it to generate scenarios. You know, if we're going to ask a learner to think about a certain situation that they might be put in with regard to the topic at hand, an easy example would be: you're working with an angry customer and you need to do X, Y, Z as part of the process. Instead of trying to envision what an angry customer could be, you can just ask ChatGPT to write up a script of a customer who's angry about this thing, and it'll spit out three or four paragraphs describing how this angry customer is angry. That has saved us time.
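For anyone who wants to try that scenario trick, here is a minimal sketch using the OpenAI Python client. The model name and the prompt wording are illustrative assumptions, not the exact prompts Jeremy's team used:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Ask for a short, self-contained training scenario.
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name; use whatever you have access to
        messages=[
            {
                "role": "system",
                "content": "You write short, realistic customer-service training scenarios.",
            },
            {
                "role": "user",
                "content": (
                    "Write a three-to-four-paragraph script of a customer who is "
                    "angry about a delayed refund, for a staff training exercise."
                ),
            },
        ],
    )

    print(response.choices[0].message.content)

As in the workflow described above, the generated scenario is a starting point: a designer would still edit the details to fit the course's style and fact-check anything topic-specific.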
Rebecca Hogue (02:30):
That's interesting, because I heard that as well, the idea of scenarios, but I wasn't quite sure how you would use it. I have Notion (I pay for the intelligent version of Notion, with the AI in it), and I asked it what questions to ask, and the first question it gave me was: how has AI influenced your approach to instructional design?
Jeremy Tuttle (02:51):
Good question. As of right now, it hasn't, because I see AI as something that's alongside me, not something that's leading me, so my process hasn't really changed. I just now occasionally go to it for help with ideas or as a sounding board, more than something that's going to dramatically change my process.
Rebecca Hogue (03:20):
And we talked a little bit about the text generation. Do you use it for any image generation?
Jeremy Tuttle (03:26):
So that's another thing where we have a very specific style, and the files that we create for our imagery are then used for animation. So they have to have layers in specific spots, so that our animators can find the thing that they need to animate and then use it effectively. If you generate an item in Adobe Illustrator, it might look good, but when you dig into the layers it's a mess.
Rebecca Hogue (04:02):
Okay, I was going to ask you about Adobe Illustrator, because, yes, ChatGPT-4 can create images. I use it to create feature images for some of my blogs, but in instructional design we're more likely to be using illustrations and using Adobe Illustrator, and they have this new beta adding in backgrounds as well as icons. And so you're saying that the backgrounds they add are just sort of a mess?
Jeremy Tuttle (04:29):
Oh. So the way I like to describe it is that if you want to take what it provides you wholesale, it's great. If you just want to take what you see and run with it, it does its job. But if you need to edit it in any capacity, the way that the file is built, the way that the layers are organized, the way that the groups are grouped together, it becomes a real hassle to try to get the image to look the way you want it to look. For example (this wasn't part of my work, just a side project I was doing): I was in a play, and we needed a logo for a sporting goods store. So I was thinking, this play is set in Minnesota, people like to fish, why not have this sporting goods store be a fishing and hunting sporting goods store? So I asked Illustrator to generate a logo that would include a fish in some capacity, and it spat out this beautiful fish, I believe it was a bass, jumping out of the water. It had lovely splash marks, it looked good, but it was far too busy to be believable as a logo. It had way too much detail. So I wanted to go in and just remove a little bit of the detail. So I click into layer one, and out of 200 layers or so in group one, I have to go find this shape, delete that shape, this shape, delete that shape. And they're these little tiny specks everywhere. So I was trying to bring it down to something simpler, and even if I prompted it to simplify it, simplify it, simplify it, it didn't matter. I still had that issue. So if I want to create something that I'm going to work with and on, I feel it's still easier to start from scratch than it is to take something and try to mold it and push it into something that I feel I need.
Rebecca Hogue (06:44):
And have you tried any of the video creator stuff?
Jeremy Tuttle (06:49):
So I know in Adobe Premiere they have a couple of really good tools, like the Remix tool. If you're working on video and you're in Adobe Premiere and you're going to put music down on your video, use the Remix tool. What it does is, you can stretch a song longer or you can shrink a song shorter, but it will always end on the end of the song, and it will find ways to stitch the song at good specific spots so that, as you're listening to it, you can't tell that there was a transition applied within the music as you shrank or stretched it.
Rebecca Hogue (07:34):
That sounds super handy.
Jeremy Tuttle (07:36):
It is incredibly handy. When I was doing video editing a lot in my prior job, I was really good at stitching music together in that way. I play a lot of musical instruments, I love music, so I could find the beat, I could cut on the beat, pull it together. If it didn't sound quite right, I could add a little bit of a crossfade so that it blended the cut a little bit, and it took me probably five to ten minutes per video doing that. But with the Remix tool, you just go: I want it this long. Then it processes for maybe 15 seconds and it's done, and all the ones that I've checked sound immaculate. So I highly recommend the Remix tool. There are other tools; I have a list up on my other screen so I don't forget. They have other tools like Auto Reframe, which, if you're filming your training, could be helpful, but it's not that difficult to reframe yourself: adjusting the scale of the image and then changing the position to sit where you want it to be.
Rebecca Hogue (08:58):
Yeah, I don't find that a particularly difficult task.
Jeremy Tuttle (09:01):
Removing the background was an interesting one. Yeah, and that's been available in After Effects for quite some time, using keying effects. Traditionally, you would key using the color green or the color blue. I'm probably diving way too deep into video production here for this audience, but taking that same technology, you don't always have to do green, you don't always have to do blue. So now we have, uh, what's it called? I can't remember what it's called, but it can detect lines in pixels. So the shape of my head produces a line compared to the background, and as long as the system can identify where that line is, it can easily pull everything else out. So it's a good thing that they're adding it to Premiere.
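As a rough illustration of the traditional green-screen keying described above, here is a minimal sketch using OpenCV. The file names and the HSV threshold values are assumptions for illustration; real footage would need the thresholds tuned per shot:

    # Sketch: traditional chroma keying (green screen) with OpenCV.
    # Assumes opencv-python and numpy are installed and "frame.png"
    # is a frame shot against a green background.
    import cv2
    import numpy as np

    frame = cv2.imread("frame.png")
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Pixels whose hue falls in the green band become the background mask.
    lower_green = np.array([40, 80, 80])
    upper_green = np.array([80, 255, 255])
    background = cv2.inRange(hsv, lower_green, upper_green)

    # Keep everything that is NOT green; the green area becomes transparent.
    foreground_mask = cv2.bitwise_not(background)
    result = cv2.cvtColor(frame, cv2.COLOR_BGR2BGRA)
    result[:, :, 3] = foreground_mask

    cv2.imwrite("keyed.png", result)

The newer tools Jeremy describes replace the fixed color band with edge detection around the subject, so no particular background color is required.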
Rebecca Hogue (09:51):
Yeah, and actually, if you're recording on things like Zoom, you just ask your audience, or the people you're recording, to use a green virtual background, which I thought was a very interesting trick. You just change the background to green and you have a green screen.
Jeremy Tuttle (10:06):
Yep, there you go, there you go. Done.
Rebecca Hogue (10:10):
So can you provide some examples of projects where AI was particularly useful?
Jeremy Tuttle (10:17):
Yes. One of our instructional designers was working on a project about working safer hours. It's part of our OSHA series, making sure that people understand what safe work hours are. And this is one of the times where she could not quite envision a specific scenario where a manager who's in charge of setting a schedule could look at the situation and then react to it. So she asked ChatGPT: provide me a scenario where an employee is working unsafe work hours, just give me that. And she was given a story. She was able to change some of the details, but she said it saved her three hours' worth of time and effort, because she just felt like she was hitting a wall. She didn't have the creative spark at that moment in time to put that kind of a story together. So that's one instance where it came in handy.
Rebecca Hogue (11:32):
It sounds like that is the most common use by instructional designers right now: creating dialogue or creating scenarios. You know, act as that other person and tell me what you think.
Jeremy Tuttle (11:43):
Right, right.
Rebecca Hogue (11:45):
Have you had challenges in integrating AI into your processes?
Jeremy Tuttle (11:49):
That's a good question, and I think it comes down to what I consider the threshold for a challenge. I don't think we've tried to implement it as earnestly as others have. We have a very specific process that we follow, and we've considered it at different points along the process. And if it hasn't worked at that point of contact in the process, we haven't tried to force it in some other capacity. Some people might say that's not innovative; other people might say you're doing the right thing. From my perspective, I need to keep my production line moving, and I'm not willing to halt the presses long enough to really dig in and see if it would make a difference. And it's not impacting our ROI to not put it in with fidelity, so I'm not incentivized.
Rebecca Hogue (12:57):
Do you have any thoughts on how it will change? What do you need generative AI to do to be useful for you?
Jeremy Tuttle (13:07):
That is a really good question. I recently spoke at the Cal Poly Humboldt Innovation Summit, and this group, they weren't concerned; they were interested in innovative ed tech and how people are approaching AI. As part of that presentation I went into user experience and user interface, UX/UI stuff, and I think that generative AI is following an extremely similar trajectory to the one UX/UI faced a decade ago. So I'll give the example that I gave there, which is, and all of a sudden my brain just turns into presenter mode and I was about to ask the room: raise your hand if you ever had a MySpace page.
Rebecca Hogue (14:08):
Oh, can't say that one. No, I did not have a MySpace page.
No, I did not have a MySpacepage.
Jeremy Tuttle (14:12):
But are you aware
of what MySpace looked like?
Yes, yeah.
So everybody on MySpace had thecapacity to change how their
page looked.
You could put a plainbackground.
You could put an image as yourbackground.
You could desolate an image ofyou and five friends to be your
(14:36):
background.
So that background could beincredibly simple or incredibly
busy.
You have the capacity to changethe font color, to change other
bounding box color.
So if you were so inclined, youcould put a bright orange text
on a slightly less bright orangebackground and you weren't
(14:59):
stopped.
So if people came to your pageyou'd have to squint really hard
or get out your at mantisshrimp yeah, notoriously bad for
that yeah.
So in thinking about that, uxuiand in its nascent stage had a
whole world of openpossibilities.
(15:22):
You could design a web pagehowever you wanted, no matter
how bad compared to modern daystandards it is.
Think of the space jam web page, where it's got sparkly
sparkles going on in thebackground.
And now we know that ifsomebody is going to interact
(15:43):
with a website and needs to beclear what is interactable and
what isn't interactable, we needto remove noise from those
things that we want them toengage with, whether it's text,
video, a button and if we removethat noise, people are.
They feel better in the digitalenvironment.
Another example would be thethree line icon.
(16:07):
You know the three horizontallines Seven years ago?
Well, today that's called themenu icon.
You go to a website.
You see the three lines orhamburger icon.
Right, yeah, you see that.
You click it.
It opens a menu and it turnsinto a little X and then, if you
want to leave that menu, youclick the X.
The menu swivels back up intoitself.
(16:28):
But seven years ago, when I wasbuilding tutorials here for
Niche Academy, I had to call itthe three line icon because if I
didn't, people didn't know whatto click.
Today I can just call it themenu icon, and I get zero
negative feedback from thelearners of those tutorials
saying I don't know what a menuicon is, I don't know where to
(16:49):
click.
So this progression in UX, uiand understanding how people
want to and need to interact ina digital space has vastly
improved.
I think the same can be saidfor generative AI, in that right
(17:10):
now we're sitting in theMySpace era.
Everything and anything ispossible.
You can ask it to do all thethings and in five years from
now, we're going to look back attoday and go man, there were a
lot of bad practices going onback then.
So do I know what those badpractices are in this moment?
No, I don't.
I don't have the ability tojump ahead five years and then
(17:32):
look back, but I do think thatwhat is the entire possibility
of what's happening right now isnot going to be the case in
five years from now.
It's going to be more limitedscope as we learn what end
learners and users for justgeneral use cases, as they
(17:53):
decide what feels good and whatmeets needs.
Rebecca Hogue (17:59):
And just figuring out what that is, yeah. Because again, you think about the hamburger menu and, you know, it took a little while for everyone to know what it means, but now everyone does. Right, it's the same kind of: yeah, what can we do? Have you done any learner personalization? Do you do any of that?
Jeremy Tuttle (18:22):
I do not. So the training that my team creates and provides is meant to work in any organization, and therefore be very broad, and then we hand that training off to our customers, and customers can customize it to their unique needs. So that sort of personalization would have to be done on the customer side, not ours.
Rebecca Hogue (18:49):
Okay. And have you done any chatbots?
Jeremy Tuttle (18:54):
We do have a semi-chatbot on our website. It pulls from the knowledge base that we have for our product, but it's very limited, and we do that very intentionally.
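One common way to build that kind of deliberately limited chatbot is to retrieve passages from your own knowledge base and instruct the model to answer only from them. The sketch below illustrates that pattern; it is not Niche Academy's actual implementation, and the knowledge-base contents, retrieval function, and model name are all assumptions:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical stand-in for a real product knowledge base.
    KNOWLEDGE_BASE = [
        "To reset your password, use the 'Forgot password' link on the sign-in page.",
        "Tutorials can be assigned to staff from the admin dashboard.",
    ]

    def search_knowledge_base(query: str) -> list[str]:
        # Naive keyword-overlap retrieval; a real system would use
        # full-text or vector search over the actual knowledge base.
        words = set(query.lower().split())
        return [p for p in KNOWLEDGE_BASE if words & set(p.lower().split())]

    def answer(question: str) -> str:
        context = "\n\n".join(search_knowledge_base(question))
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model name
            messages=[
                {
                    "role": "system",
                    "content": (
                        "Answer ONLY from the knowledge-base excerpts below. "
                        "If the answer is not there, say you don't know and "
                        "refer the user to support.\n\n" + context
                    ),
                },
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(answer("How do I reset my password?"))

Restricting the model to retrieved excerpts, and telling it to refuse otherwise, is what keeps such a bot from improvising answers the way the Air Canada example later in this conversation did.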
Rebecca Hogue (19:09):
And so that's just more on your marketing side, not really on your training and creation side?
Jeremy Tuttle (19:13):
Correct, correct. Yeah, we don't do a chatbot on the training creation side.
Rebecca Hogue (19:18):
Yeah, I think that is potentially an interesting future practice. Whether it turns into a future bad practice or a future good practice, the jury's still out on that one.
Jeremy Tuttle (19:30):
Yeah. So I'm glad you mentioned that, because one of the aspects of generative AI that I'm very interested in is the concept of authority and expertise. So, if you don't mind me taking a minute or two to help define those terms so that we're running on the same vocabulary: somebody who is an expert is a person who has extensive knowledge, practice, experience, or research on a very specific topic, and authority is constructed around expertise and is dependent on the situation and context in which that information is being used. So I'm going to give you a very disparate, very extreme example of expert information versus authoritative information.
So we have a dietitian, somebody who specializes in diets, in nutrition, in getting people to eat food in a specific way to meet their end goals. Then we have you. You are an expert in what you eat. If I asked you what you had for breakfast, could you tell me what you had for breakfast? Coffee. Yes, coffee, perfect, beautiful. Who would I go to to ask what you had for breakfast? Should I go to you, or should I go to the dietitian? If I wanted my question answered, I would come to you. You are an authority on what you eat, but a dietitian has spent way more time thinking about nutritional value and diet plans and whatnot, right? So even though the dietitian is an expert, they are not an authority on your diet.
Using that kind of thought process, let's put that in context with generative AI. Generative AI is pulling from expert sources. Is it pulling from all the expert sources? That could be debated. We'll assume, for the sake of this discussion, that it's pulling from at least one expert source. Is that expert source that it's pulling from authoritative for your needs? I'll pull a more recent example. I believe it was Air Canada that lost a ruling because the chatbot on their website guaranteed something. Yeah, a person, very sad, her husband died, and she needed a flight. She went to the chatbot saying: hey, this happened, I need to be refunded. And the chatbot said: well, as long as you've done this within 90 days, we can refund you. So she followed that procedure, and then she got a message back from an Air Canada human, not the Air Canada chatbot, saying: actually, our policy is that doesn't happen at all, sorry. And she took them to court, and she won. Air Canada lost and she won.
Rebecca Hogue (23:10):
Because chatbots... yeah, that was actually quite a remarkable thing. So the chatbot is considered an authority.
Jeremy Tuttle (23:18):
Exactly. So its expert information was not the policies of the company; the expert information was just whatever it had back in its repository. If we take that in context for our learners: if we release responsibility for the content that we put in front of our learners, they are going to assume it is authoritative information, and they're going to run with it. But if I, as a trainer, as a leader, as a manager, as a supervisor, do not verify that that is authoritative information, that it is what the learner should be doing, then that learner is going to be running down a path that they shouldn't be. And they're going to be going down it faster if they're using the AI as their source, because the AI spits it out faster. And if they're not thinking critically about what that information is, where it's coming from, what the sources of that generated information are, they're going to keep going and keep going. And the next time I, as a manager, supervisor, trainer, check in, I'm going to go: whoa, how did you get all the way there, when all I put was here? And then you've got to backtrack, you've got to retrain, you've got to...
Rebecca Hogue (24:50):
Well, and yeah, it's worse, right? Yeah, there's nothing worse than training wrong. It's like documentation, right? Wrong documentation is worse than no documentation.
Jeremy Tuttle (25:01):
Absolutely.
Rebecca Hogue (25:03):
And yeah, I can see that from a training perspective as well. And so where does the AI... where is AI going to learn authority?
Jeremy Tuttle (25:11):
It doesn't, because authority is contextual. You need to know the situation in which it's being presented, and what the intended outcome is, for that to happen. But AI, unless extremely specifically prompted, cannot do that.
Rebecca Hogue (25:35):
And then, in order to do that prompting, at least today, you spend more time figuring out the prompt than if you had just done it yourself.
Jeremy Tuttle (25:44):
Absolutely, absolutely.
Rebecca Hogue (25:47):
So what are some of the tools that you use regularly that involve AI? We've talked a little bit about Adobe Creative Cloud tools and ChatGPT. Are there any others?
Jeremy Tuttle (25:59):
There aren't any other tools that I use right now. One tool that both intrigues me and concerns me is Sora. It's the film-generation AI. You can ask it to, say, show a woman walking down a street in Tokyo with neon lights, and it'll produce a very convincing woman walking down the street in Tokyo with neon lights going on around her.
Rebecca Hogue (26:37):
With three arms, because, you know, with the image stuff you can't get humans and you can't get text.
Jeremy Tuttle (26:43):
Sora is pretty good, and that's why I find it both intriguing and concerning. Concerning on the ethical side: making sure that you're not generating a politician doing some insane act. But on the instruction side, there is the potential to say: I need to demonstrate this dangerous situation without putting an actual human at risk. Can I prompt this film generator to put a fake person in this dangerous situation, so that you can help somebody identify what's going wrong? So is it at that point yet? I don't think so. Is it cost-effective yet? Definitely not. But at some point in the future, I think it will be.
Rebecca Hogue (27:36):
That sounds like a very useful use case for generative AI: that danger case, where you don't want to put, or you can't put, somebody in danger, but you need to demonstrate what that danger looks like. Yeah, that's actually really... and we talk about that as a reason to use simulation, rather than doing the actual thing, in the training context, right? You have to simulate because it's not safe or it's not cost-effective, right? Those are big reasons for simulation, and so that sounds like that's where generative video could compete, in that realm. That's an interesting idea. How do you evaluate the effectiveness of AI in your projects? Now, you've talked a little bit about how some of them are just not time-effective.
Jeremy Tuttle (28:36):
Yes. The other is the intended outcome. With the generative text side of things, ChatGPT is good at grammar, but tone? Even if you ask it to switch tone, is it really the tone that you want, or is it the tone that they're giving you? Same thing for Grammarly, right? In Grammarly, as you're writing, you can get suggested changes to how you're writing to improve it in some capacity, whether it's for brevity or for fluidity.
Rebecca Hogue (29:15):
And that's actually a great example. Grammarly is one of the earlier AI examples, and people don't necessarily know that; yeah, that's AI working in the background giving you those things. But yeah, tone. That's a good point.
Jeremy Tuttle (29:33):
So, understanding what I want out of this situation and whether I'm getting it, that takes mental effort. It takes the cognitive load of that creative experience, which, in my opinion, is unnecessary. If I'm throwing a pot on a pottery wheel, and we're 10 years in the future and we have AI in our glasses, and we can see a certain shape projected out onto the throwing wheel so that I can produce the pot to that shape, is that creative, or is that just derivative replication? I find joy in the creativity. I find fulfillment in the creativity and in having the words be my own. So, personally, I don't use it for any of my writing. I had to think for a second: have I used it to have it write for me in any capacity? And no. Though I do support others in using it for their own personal reasons. Not everybody is like me, deriving joy from playing with the same sentence for 30 minutes trying to get it just right, and I appreciate that.
Rebecca Hogue (30:56):
And how does cost affect your decision to use different things?
Jeremy Tuttle (31:01):
Everything has a cost. It's the ROI and the opportunity cost. So right now in the instructional design space there is a flood of companies coming in that are touting AI-supported, AI-generated training, whether it's "you put in the prompt, we give you the material" or "we've put in the prompt and give you the material." There are a number of companies coming out now that do that. So, to differentiate within the space: is it worth trying to make a splash doing that same tactic, or is it a bigger opportunity to say, we don't do that? So that's something that Niche Academy, my company, is weighing. Is it worth getting out more training content through the use of AI, with it maybe being not to the same standard of quality that it has been up till now? Or do we continue our current pace and make sure that it's produced thoughtfully by humans, and advertise that, so that customers can see that this is a more, not necessarily more thoughtful, but a more intentful approach?
Rebecca Hogue (32:31):
That's a really good point. It's like, again, back to the future of AI: as a company, now you're advertising that we don't use AI, and that's your competitive advantage over the AI ones. I've played with a couple of those generative AI tools, and so far I'm finding everything that comes out of them is bland. Yeah, it's just sort of like you're spitting out facts but you're not applying learning theory.
Jeremy Tuttle (33:03):
Oh, and I'm very glad you mentioned that, because another aspect of my job is that I've been in conversations with managers, people who lead other people within a company and are responsible for the professional development of the people on their team. They see generative AI as an opportunity to release some responsibility in regards to that training: if I don't have to sit through an hour of training in the conference room with a slideshow that I prepared, wouldn't that be great? I can just hand this off to my team and they can go running. So, ignoring the discussion we had earlier, where, you know, if you do that, they're just going to run down a path quickly and you're not going to catch them, the other aspect is that these managers who want to just set them free don't understand learning outcomes, unless they're an instructional designer or trainer by trade. So, without thinking about the learning outcomes, or objectives, or competencies, whatever term you use at your organization, your learners aren't necessarily going to get the right information. Again, going back to the authority. So we need to ensure, for proper training transfer, that managers still care about the work that their staff produces, the effort that they go through. You can't have AI be your fail-safe for quality. And going back to, who was it at IBM that said computers shouldn't be making business decisions because you can't hold them accountable?
Rebecca Hogue (35:11):
I hadn't heard that, but that actually is a great quote.
Jeremy Tuttle (35:17):
I'm pulling it up. "A computer can never be held accountable. Therefore, a computer must never make a management decision." That's the actual quote, but it still holds true with training, in that if you release your responsibility to something that can't have accountability, then you therefore don't have accountability, and you aren't doing your job. I'm calling you out, managers out there who think that you can release that responsibility and assume that your staff, your employees, your team members are going to be effective and supported.
Rebecca Hogue (36:02):
It's a good point. We are coming towards the end of our time, at least for the podcast portion. Is there anything else you'd like to chat about? You said you had a list. I'm curious what other questions I should be asking. I certainly deviated from the questions that I had in front of me, because I'm like: okay, that's not really what we want to get at. Which, again, is an exact example of using the AI to generate the questions but then using the human to adjust them to the particular context of the interview. It's like, well, that's not really the right question to ask.
Jeremy Tuttle (36:42):
Absolutely, absolutely. No, I think all the points that I wanted to get across got across.
Rebecca Hogue (36:55):
Yeah, so what's your next big hope for AI? Like short term, in the next six months, what are you hoping to get?
Jeremy Tuttle (37:02):
I can't remember what the status is in the EU on legislation regarding the training material for large language models and other generative software, but I hope to have stronger legislation around it. I'm friends with many artists, and a lot of the artists that I talk with are strongly concerned about software just coming and gobbling up their style, their work, so that people can pay pennies to get artwork in their style. Are we in an age where art is no longer valuable? I hope not. But if that isn't taken care of, it will be the case that anybody who wants any sort of artwork will just go generate it themselves.
Rebecca Hogue (38:09):
Sort of changes what art is?
Jeremy Tuttle (38:11):
Yeah, it would no longer be a human endeavor, and a large part of art is making human connection through a medium of some sort. So if it's no longer about human connection, then for those participating or engaging with non-human media, it doesn't feel the same, at least to me. And so how does legislation help that? It prevents certain art from being generated that should only come from the artists themselves. So there's a long list of artists that was posted, I'm trying to remember when and by whom, but DALL-E was trained on a list of hundreds of artists, and in that list of artists there are people who adamantly affirm that they did not give consent for their art to be put into the system to be part of the training material. And so people can go into DALL-E and say: make me this thing in this person's style. Had that not been the case, had DALL-E not been trained on that style, it wouldn't have been able to spit out that kind of art. The only way to have received that kind of art would be to go to the artist themselves. So it has effectively removed money from their pockets. There's an argument to be made that art is free, but also, no.
Rebecca Hogue (40:02):
I understand that part of it, but... as long as it's generating a true remix and not reproducing the exact same thing.
Jeremy Tuttle (40:13):
At what percentage, right? If you talk to the music industry and say, I'm going to remix your song, and I only remix it 5%, the music industry is going to say: no, I'm slamming you with a DMCA, take it down, because that 5% does not qualify as new and innovative, or whatever the copyright legal terms are.
Rebecca Hogue (40:32):
Yeah, and I find it interesting that the same didn't happen for the text-based generative AI, but it does for the visual-based AI, which I find fascinating. You can say, write me a poem in this style, and it will, and there hasn't been that same pushback. But the minute you go visual, suddenly there is just much more pushback from the artists: you know, wait a second, I didn't give my permission for that. And that's actually one of the big things I teach when people are creating web pages and eBooks and whatever. It's like: is that Creative Commons? Do you have a license to use that?
Jeremy Tuttle (41:38):
Absolutely.
Rebecca Hogue (41:39):
Is that valid? Is there enough Creative Commons, CC0 as opposed to CC BY, out there that would allow the DALL-Es of the world to get their databases so that they can be generative?
Jeremy Tuttle (42:00):
Absolutely. It's not that visual generators shouldn't or can't use art produced by artists, just that those artists should be able to consent to being part of that training pool, and they should be compensated as part of that. Had this list of hundreds of artists been compensated, I don't think there would be nearly the uproar that there currently is in this space. And I think that's what the legislation in the EU is trying to do. And I'm not an expert in that space, I've only read it once, so I'm just going off of what I vaguely remember, so I could be completely wrong, but they're trying to set up those ethical standards in regard to how the system is trained, so that the people who are affected by its training have either recourse or the ability to also profit from it.
Rebecca Hogue (43:11):
Yeah, that actually brings up an interesting ethical question on the use of some of the visual stuff. If I'm using it to create something, am I participating in this unethical behavior?
Jeremy Tuttle (43:24):
Yeah, and an interesting parallel is back in the early 2000s with Napster: it was downloading music for free, and an ethical problem. In the modern day, we can think about it and go, yeah, that kind of feels like stealing. But at the time, Napster was a giant company, and millions upon millions of people were downloading music for free. Were they moral and ethical monsters? In that moment, I don't think they felt that way.
Rebecca Hogue (43:58):
No, yeah, it's kind of... yeah, that's an interesting parallel example. Okay, any last thoughts before we close up?
Jeremy Tuttle (44:06):
No, thank you for having me. It's always fun to come on here and chat with you and hear your insights, and I'll gladly come back anytime.
Rebecca Hogue (44:18):
Well, thank you very much. I really appreciate your willingness to come on and chat with us about what this AI beast is, especially now that it's been around for a little while. It's not quite so new anymore, and so that's quite interesting.