Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:10):
Hello everyone, and welcome to our weekly Power Lounge. This is your place to hear authentic conversations from those who have power to share. My name is Amy Vaughn, and I am the owner and chief empowerment officer at Together Digital, a diverse and collaborative community of women who work in digital and choose to share their knowledge, power and connections. You can join the movement at togetherindigital.com, and today
(00:33):
we are tackling one of the most pressing challenges digital teams everywhere are facing, and that is how to embrace AI responsibly without getting overwhelmed by the technology or the ethics. Our guest, Nikki Farrell, is a communication leader and AI strategist who has walked this path firsthand, and, as the Associate Director of Online Enrollment and Marketing and
(00:56):
Communications at Miami University, Nikki launched an AI steering committee to address what she calls the rapid, messy adoption of generative AI in higher education. So, big task there, with AI already impacting how teams work, whether or not leaders are ready. Nikki's approach of starting small, building clear guardrails
(01:19):
and fostering transparent experimentation offers a great roadmap for any leader, anyone looking to champion technology that centers on people, purpose and ethics. So welcome, Nikki. We're excited to have you here today.
Speaker 2 (01:36):
Thank you for
inviting me.
I'm very excited to talk.
Speaker 1 (01:39):
Absolutely. So I actually had the pleasure of seeing you speak at Cincy AI Week here in Cincinnati, and that room was packed. A lot of us, you know, in a multitude of areas and arenas, are really looking at how do we leverage and use AI, and how do we do so in a way that feels, you know, a little bit guided,
(01:59):
when we don't quite know what the roadmap actually is. We're kind of planning it as we're driving it, or building the plane as we're flying it, as a lot of us would say. And you've described AI adoption, especially in higher education, as a rapid and messy process. What was it that made you realize that your team needed a formal structure around AI, rather than kind of just letting it happen however it happened?
Speaker 2 (02:23):
Yeah, so I actually started a master's program. I work at Miami University, and I started taking a graduate class on business and professional writing. And in that class we were asked to do a white-paper-style paper on any
(02:45):
topic in our field. So I did it on AI and marketing, and this was in late 2023.
ChatGPT had launched in late 2022, so it was something that folks were talking a lot about, you know, playing with, experimenting with, and it has obviously grown in leaps and bounds in quality since then. But ChatGPT was the first major consumer large language model
(03:08):
that got popular.
So I started doing that research, looking into AI and marketing and who was using it, and I found some really startling things. Like, you know, at Vanderbilt University, one of the colleges sent out an AI-generated email about a tragic event and got a lot of backlash for using AI for this personal
(03:33):
communication, and that's the one that really sticks out to me the most. But of course, there are a lot of other instances of AI being used in marketing, especially in that early time, when companies got backlash or were criticized or used it in risky ways. And so
(03:55):
our department and our university hadn't started talking about policy for AI yet at this point, and I chatted with my boss about it, and she agreed that we needed to, you know, get something together to help guide people. And, you know, I think that one of the big stats that I keep,
(04:18):
that I see, and it's changed every year, but 80% of marketers are using AI in their jobs, while only twenty-some percent of companies actually have policies or guidelines around AI use. So whether you have the policies or not, they're using it. And if you don't have policies, if they think that they're not allowed to use it, they're still using it.
(04:39):
They're just not telling you, and that makes it a big risk, I think, for companies. So that's kind of how it started.
Speaker 1 (04:49):
Yeah, that's amazing.
So I know a lot of our live listening audience, and hey, friends, thanks for joining us and listening live, a lot of you are in this boat. Either you are running a business or you're a part of a business or company that is obviously adopting. We're in the 80%, right? And I think a lot of us probably fall within that 20% as well. And oftentimes I feel like, when it
(05:11):
comes to the shiny new objects within marketing, we are so afraid we're going to end up being the tail that wags the dog, but that really gets us into some trouble, as the example you stated really shows. And so I think creating a steering committee is really, really smart. We had a member on as a guest a while back, several
(05:31):
months ago too, who worked within an agency and created a steering committee for AI. And, you know, learning from and listening to other folks who are doing this within companies is a great place to start. I mean, outside of listening to today's conversation, finding other people outside of your company who are creating these steering committees and guidelines, because there is no rule book written.
(05:52):
Finding others who are going through this is, I think, a great, great way to kind of even the playing field in some ways and not feel like you're the only one trying to figure it all out.
So for those of you in the live listening audience, please drop questions in the chat if you have them. We'd love to hear from you and know what it is you're
(06:12):
dealing with and struggling with.
And then for you, Nikki, I was kind of curious: when you were walking into that first AI steering committee meeting, once you'd established that you were going to do this, what were some of the biggest fears or concerns that you heard from team members about the idea of even just creating AI integration within what you do?
Speaker 2 (06:34):
So the biggest concern is the same one that we've all heard: AI can't replace human creativity. And they're worried that we will try to let AI replace human creativity and innovation. And so I think that there was some reluctance about
(06:55):
using AI, but there was not reluctance about having this committee. I think that the folks who are most concerned about AI were also in support of having guidelines and policies for using AI and adopting it responsibly. We also started with a lot of research. Like I said, I was in a grad class at the time and I had to create this white paper, and so I gave it to my boss to use as
(07:19):
a justification, or a roadmap, for this committee and what we would do and why. And she didn't need it, she already wanted me to do it anyway. That's good. But, you know, this whole effort has been grounded in research and information and education. I think that is such an important piece of it: to ensure that you are working with the best information available,
(07:41):
that you know what you're getting into, that you understand the risks, and that even folks who are not adopting AI themselves at least have a baseline education about it. Because in our field, in marketing especially, it's really important for everyone to at least have that knowledge, that baseline knowledge of what it is and what
(08:03):
it's doing. Because even if you're not using it yourself, you will encounter it in this field, and probably in every field, moving forward.

Speaker 1 (08:19):
Absolutely, and it sounds like that kind of whole learning and research mindset really helped make this happen. And I know a lot of you that are listening are probably sitting in spaces where new is not always seen as best, and it can be scary. So I think that taking that mentality of, we're here to learn, we're here to research, kind of even presenting it as an academic exercise, even within a corporate environment, could be a really good way to sort of break through any barriers you might be coming up against.
(08:41):
But I love that you had that support. But it also sounds like you were very clear on the why as well.
Speaker 2 (08:48):
Yes, it was, very much. I had a great, you know, starting argument for why this needed to happen, and nobody could really disagree with that. I think that aligning your proposal with your company or your organization's values and mission is also, of course, a great way to not only do the work that contributes to the big picture,
(09:12):
but also get buy-in from leadership on whatever you're trying to do.
Speaker 1 (09:16):
Yeah.
Which is what you have to do, right? There's just no way around that, you definitely have to get the buy-in from the top. So my next question for you: in some of the material that you submitted to us, you mentioned that AI use can go underground when there's no clear guidance. What does that look like in practice?
(09:36):
I'm curious. And why is that problematic for organizations?
Speaker 2 (09:41):
So I'm happy to say we didn't catch anyone in any underground use of AI in my direct experience, but I was looking around at AI and how it was being used and how it's still being used. You know, there's data, there's research and surveys that come out where people talk about,
(10:02):
I use AI at work and I don't tell anyone because we're not allowed. Or even just anecdotally, in conversations with other folks at events like this. And I think that it's really important for it not to be underground. Even if you don't have a formal committee or policy yet, hopefully you can work your way towards that goal.
(10:30):
But leadership should be ambassadors of AI and champion its use in a responsible manner. So that means talking about how they have used AI today, or encouraging others to share their use cases and examples, and letting folks know that it's okay to experiment, and that we have to experiment in a way that is responsible.
(10:53):
So I think, if you're a leader in an organization where you feel that that might be happening but you don't have policies or you're not there yet, at least start educating your team, you know, in informal ways, on Slack. Create a culture of openness and experimentation,
(11:15):
because otherwise you risk your team using it without understanding that AI, that large language models, will have biases, and not just in what you see with the content that they create, but with how they answer your question or how it
(11:37):
approaches your question. I think that users have to understand some of those things so that they can use it in the most responsible and ethical way. And encourage your staff to learn together. Let's ask questions, let's figure this out. And, like I said
(11:58):
, that culture is a really important piece. It's a great way to start if you can't, or don't, have the policies in place yet.
Speaker 1 (12:09):
I love that. Yeah, it just made me think: AI is the elephant in the room, you know, for so many businesses.
Speaker 2 (12:14):
It shouldn't be. For some people, yes.
Speaker 1 (12:17):
And for some places, I mean, again, we're a small little business over here, so we're just like, bring it. AI, something that'll help us work faster, smarter? By all means, you know, bring it on. But you're so right. For some people, though, I think it is the elephant in the room that they're afraid to address, and that fear is driven by a lack of knowledge and understanding. And I don't think people realize how empowering it is to actually
(12:39):
allow people to say, it is okay to come in and play and explore. Here are the guidelines.
Speaker 2 (12:49):
Here's where we draw some lines. And you don't have to create your own best practices. If you don't have time to do that, there are so many out there already. Marketing AI Institute is a great place to start, and it's not just marketing. They have Creative Commons policies, or principles, that we adopted and adapted for our uses. And
(13:10):
there's a ton of resources out there. You could take 15 minutes to just send out a note to your team: hey, before you do it, please go, you know, learn a little bit or follow these best practices.
Speaker 1 (13:23):
Right, because, like you said, then you don't have to be fearful of people going rogue and misusing it and abusing it, and then your business being under fire because of it. So, yeah, I love that advice. That is so great. All right, let's get a little tactical, because we love to get into the nitty gritty. For a marketing leader who wants to start an AI initiative, say, you know, tomorrow, what is the first
(13:49):
step that you recommend that they would take?
Speaker 2 (13:55):
So I think the first thing to remember, if you're thinking about this, is that you don't have to be a leader in your organization in order to make this kind of change. And, on top of that, the leaders in your organization have less time and less familiarity with the day-to-day work that you're doing to know what kind of guidance and policies you need. For us, I'm an associate director.
(14:16):
I wasn't managing anybody when I started this, and I was four or five levels down in my org chart, but I'm the one who's benefiting from using it, using it every single day and understanding what people need and what my colleagues might want.
So, first of all, I think that it's important for folks to realize that you, at any level in your organization, should
(14:40):
feel empowered to get this conversation started. And I think, for us, the first thing we did was I made that proposal. I didn't have to, but it's a great place to start, especially if you think that you might struggle with buy-in. Do your research, and look at and identify the risks that your
(15:02):
company, your organization, faces from AI, both from using it and from not using it.
Oh, I like that. Because, you know, your leadership is focused on making sure that your organization succeeds, and so think about how your committee, or how adopting AI, could benefit
(15:23):
the organization as a whole. And then think about culture. You know your colleagues. And, you know, creating a culture where professional development is encouraged and championed, all those things are part of our AI principles. You know, we are looking at it not only as a tool to help
(15:48):
us, as a division, improve the mission and reputation of Miami University, but also how it impacts us as individuals, as professionals, and how this adoption helps bolster, you know, our work culture and the expertise and innovation of our team.
(16:10):
So, you know, think like a leader when you're trying to put together your proposal for this. And then, if you get the okay: we started with principles, then policies about AI use, and now we're moving into education, so creating a library and
(16:33):
trying to get as much education and research out to our colleagues as possible.
Speaker 1 (16:39):
I love it, that's fantastic, and I love that advice. A great soundbite there, of taking the lead. I think a lot of people are afraid, right? They're worried about job security. But you stepping in and taking the lead, creating the practices and the policies, taking ownership of learning this and then sharing what you learned, you're right, you don't have to have a title in order to lead
(17:01):
in that way. This is a great opportunity to step up into that role, into that space, and do that. And that's a lot of what Lexi, the past guest that I mentioned, really shared as well. Nobody asked her to do this. She was like, oh my gosh, it's coming and it's coming fast, and we need to know how to respond. And honestly, for her too, it led to some better
(17:24):
opportunities. I think she got a promotion out of it, and then, after she was in a whole other role, they added AI to her title. So there's benefits all around for you in that way.

Speaker 2 (17:36):
I hope my boss is listening. Yeah, yes, please, promotion, right?
Speaker 1 (17:40):
Noted.
Come on, yes, let's show Nikki the love for taking the initiative and doing all this. Because, yeah, it's such an added benefit. It is one of those things where, once you create those practices and those policies and the education, and people are comfortable and you are using it, the efficiencies alone that you gain, the money that they're going to
(18:00):
save now by using it and using it well. Yeah, they should just take that money and give you some more.
Speaker 2 (18:06):
Yeah, I will say it has led to a lot of growth and development for me, and it has definitely increased my footprint, or my presence, my visibility
(18:27):
around the university, exactly.
Speaker 1 (18:29):
I love it.
Speaker 2 (18:30):
So if you're looking
for that, that's another.
Speaker 1 (18:32):
Yeah, exactly, it's a great way to get it, a hundred percent, right? Embrace it. All right. So your research: it focuses a lot on transparency, authorship and trust in human and AI collaboration. How do these academic insights inform the more practical policies that you're creating?
Speaker 2 (18:49):
So one thing that I love to repeat over and over again is: if you're not comfortable writing AI or ChatGPT as a byline on what you're creating, if you're not willing to sign that and say, AI helped me in this way, then don't use AI for that thing.
(19:10):
That doesn't mean that we always... You know, we want to be transparent about AI use, but we aren't going to, it's not reasonable to, put ChatGPT on every digital ad that ChatGPT helped us brainstorm the taglines for, right? Yeah. So that doesn't mean, we don't necessarily say,
(19:34):
that you always have to put it. But if you wouldn't feel comfortable admitting that AI helped you with this task, then don't use AI to help you with that task. I think that's a great guiding principle. And, yeah, a nice way to kind of help draw the line. Right, because it's nuanced.
I mean, sometimes, you know, writing emails: if we had a policy that said AI can write emails, then does that mean that it can write emails from the president to the whole university about a tragic event? So there's a lot of nuance in policy and guideline creating, and there's, you know,
(20:19):
that guiding principle of, you know, would it be okay if everyone knew that you used AI for this? And it gets right at the heart of it, right?
Speaker 1 (20:29):
It absolutely does. And, like you said, you have to think through all of the use cases, right? Because obviously there's so much more even than just generative AI. But in the case of generative AI, even that is just a lot to unpack, looking at the opportunities for messaging, and optimizations within messaging, and where it makes sense and really where it doesn't.
(20:50):
And I love that idea, it's such a great guiding principle. Like, if you wouldn't put AI in the byline as a co-author, then, yeah, maybe you should rethink using it at all to begin with. Because, yeah, a hundred percent, you've got to be sensitive. And this was a conversation, I feel like, even before AI, with
(21:12):
automation, right? So I used to work on, which brand was it? I think it was Curél, and, you know, this has happened a lot with brands, but this was like 12 years ago, maybe 13 years ago. And they had done, like, a Mother's Day email blast based on some insights, and what they had not considered was that there were a lot of people out there who had lost their moms. And, thank goodness,
(21:34):
sadly, there was somebody on our team who had lost her mom in the last two years, and she said, I am putting the kibosh on this campaign, like, you just cannot assume that every person you're emailing has a living mother right now. And I think, instead of trying to blast and automate, we really probably should find a different way to message and just position this. Like, we can take advantage of Mother's Day,
(21:54):
but we need to find a way to do it that is cognizant and sensitive, and not a blanket. So, yeah, Lou's agreeing with me in the comments: Mother's Day campaigns are the worst. And I've definitely seen people over the last decade sort of take that into consideration. But it was so funny, because we were just, you know, all full steam ahead on all these kinds of campaigns, ready to hit send, and then we're like, oh gosh.
Speaker 2 (22:18):
I mean, that's where the human touch, you know, comes in with any technology. It's that context and rhetorical knowledge that humans have that AI cannot have. You know, we have to provide that, or we just make the decisions ourselves. Yeah, so that's a really important part of it.
(22:49):
Speaking of transparency and authenticity, there's a ton of research out there about using AI in marketing. That was one of mine in my master's program: I wrote a focus paper, a focused inquiry, about how transparent higher ed marketing should be with how they're using AI, and how do they decide, and how do
(23:10):
you make policies about that. And I found it was interesting.
I did some research on the different generations, because obviously, with higher ed, a huge chunk of our audience is Gen Z and now Gen Alpha, and how do they feel about it? And Gen Z and Gen Alpha are less accepting of AI than older
(23:30):
generations.
These are things that you've got to know if you're going to use AI. Interestingly, though, they also were okay with AI being used to message to them, much more so if it was, like, a bot: hi, I'm Nikki, I'm an AI bot, I'm going to help you register for classes or answer
(23:54):
your questions or help you around the website. But for other things, where, if they were given an example of getting an email that said at the bottom, you know, AI helped generate this email, and it was signed by, like, a person, that was much less okay to them.

Speaker 1 (24:18):
Than an AI that's 100% transparent. It's either one or the other.
Speaker 2 (24:20):
Yeah, that's what it kind of feels like. So, you know, you've got to do your research and know your audiences and understand how they see this and understand what their expectations are. And so, basically, my paper's conclusion was: we need to be as transparent as cultural norms
(24:41):
expect us to be.
So today, people expect to know when they're being spoken to by an AI versus by a human. In 20 years, will that still be true? Right, that might not be. So I think that the important thing about transparency and
(25:01):
authorship is understanding your audience, being really in tune with that, and establishing trust by listening to them and responding to, you know, how they expect to be communicated with.
Speaker 1 (25:29):
Did you find, in your research, that, so, AI in use for messaging to them, they're not such fans of, but are they adopting AI themselves?
Speaker 2 (25:35):
So the Gen Alphas and Gen Zs are, by the statistics, less likely to be enthusiastic about using AI. But what's interesting is, this program that I'm in is an English program, rhetoric and composition, and I'm studying specifically AI in marketing.
(25:56):
But a lot of my colleagues are looking to get their PhDs in English and then go on to teach and be professors. So a lot of what we talk about is pedagogy and teaching, and how you can teach with AI, how human-machine teaming works with writing, what students should know, or should
(26:19):
they be expected to be open to using it, or expected to use AI? And is it okay if they use AI to help them write a paper? How much of that, right?
And in a lot of the research that we've read from just the past two and a half years, since large language models exploded, students are very hesitant at first, and then they try it, and
(26:46):
as they get more familiar with it, they tend to be more accepting of it.
And I have to plug my professor, Heidi McKee. If you guys are into reading academic papers, you should look her up. She wrote a paper right around the time that I was in this class, at the end of 2023, and she had students in her class,
(27:10):
required them to use AI, or told them they could use AI however they wanted, in the whole class. And some of them didn't touch it. And one of them, like she said, the AI became her best friend during that class. So, you know, very interesting differences in how they adapt to it.
But I think the big underlying thing is, a lot of the fear and
(27:31):
hesitation about using AI and adopting AI comes from not being familiar with it, thinking that it's an all-or-nothing thing, that it either does the whole paper for you or you don't touch it. And there's so much in between those two extremes, and I think that as folks get more used to that, they become more open to it and, you know, transparency becomes less of an
(27:55):
issue. So I think as AI becomes, you know, domesticated in our culture, it's going to be less and less of an issue. But we have to listen and keep disclosing until we're told we don't need to anymore.
Speaker 1 (28:10):
Right, yeah, yeah. Footnote: we're not AI, we're real humans, I promise. Yes, exactly. Yeah, that's true. We almost have to caveat that too, right?
Speaker 2 (28:18):
I was just going to say, I don't know how to prove it. There are three R's in strawberry, and that's one way to prove it, right? I know how many R's are in strawberry. Yep, that is so funny.
Speaker 1 (28:29):
Yeah, there's three. I had to check. You're like, wait, is that right? Let me write this down. That is so funny. And, you know, I know so many people, and, okay, chef, that's you, Lou, on there. She is a great example of people who have become so attuned to the tool and understanding how to train the generative AI tools that they use, that it really, you can tell
(28:52):
who is a seasoned AI user, unfortunately, and you can tell who's not. And I can sit alongside someone who has not done a ton of prompting or created projects or their own bots, and those who have, and, like, the quality.
It's almost like, I had this conversation last week with a friend of mine who's a photographer, it was kind of like going from film cameras to digital cameras.
(29:13):
It was so scary to be like, we're not shooting for film, it's digital, you can point and shoot, you can see the picture before it's developed. Like, there was all this fear behind it.
But it's like, if you kind of come in and you know the craft, right? [no transcript] ...broadened and flourished and
(29:51):
compounded, and their capacity to do the work when they have a well-trained AI, you know, capability, or well-trained AI bot, or, you know, great AI capabilities. So I think learning all those things is really, really important. And what's great about it is, it's super accessible, right? Every one of us can jump on, open up a generative AI tool, and if you're not sure where to start, ask it.
Speaker 2 (30:12):
Ask it, where do I
start?
Speaker 1 (30:13):
And say, I want to learn how to be a great prompt engineer with AI. Tell me what I need to do. And that's the nice thing, it's really, in my mind, leveling the playing field for a lot of people from an education standpoint. Right? Because we don't have to go get a college degree to learn how to use AI. Yet. I don't know, are there AI classes yet? Like, specifically?
Speaker 2 (30:34):
Well, there are classes on building AI. Okay, yeah, building the large language model, those kinds of computer engineering classes. And, I think, prompt engineering: there was a little push where we saw a couple of those classes, and then, quickly, I think everyone is realizing, rightly, that you don't need to be a prompt engineer, because AI will do it for you.
(30:54):
You just ask. You give it a prompt and say, can you improve this prompt? So if you're a young person on here and you're like, I'm going to be a prompt engineer when I grow up: stop, pick something else. That's not going to be a viable career path, I think.
Yeah, I think there's so much to, you know, using it, and using
(31:19):
it responsibly and ethically. It's also, it's hard, you know, to think about these things. There's research that came out at the end of last year from Microsoft where folks were reporting cognitive declines: people who use AI a lot reported that they do not feel like they are as strong at problem solving.
The same goes for writing. You know, in my opinion, and we talk about this,
(31:43):
that writing is part of a process, a learning process. As you write, you analyze and you figure things out and you process. And if you're taking that away and giving it to an AI, then you are getting rid of some of your learning. You're not practicing that skill anymore.
(32:05):
So I think that it's so important to have a holistic view of what you're doing and how you're using AI, and doing it in a way that only improves you as a person or as a professional and does not take away those important things,
(32:26):
the skills that you practice. Now, make a spreadsheet for me, give me the formula for this Google Doc so that I can make all of these numbers add up? Please. That, I don't care about learning.
Speaker 1 (32:40):
Let it be the computer. Yeah, let it be. I'm happy with that, right?
Speaker 2 (32:43):
So I think that, you know, there's a lot of things to consider when you're thinking about how to use AI.
Speaker 1 (32:49):
Yeah, yeah. Random sidebar: one of my favorite uses of generative AI, because to me it's also just like a souped-up search engine, is to use it for tech support. I don't have IT tech support, and I'm setting up backend systems and integrations, and, like, my ADD kicks in, because there's all these blogs, because everybody has their FAQs and they've got
(33:10):
articles, because they want you to read all of that before you call somebody. And so me just talking to my ChatGPT or Claude about what I'm doing, and I'm like, I don't see this, what are you talking about? Like, literally, I fly through that stuff. So if you guys haven't used it for IT support yet, give it a try.
Speaker 2 (33:28):
I used it, like a month ago, to create my bills spreadsheet and auto-populate it every month. I made an Apps Script. I'm not a coder, I don't know anything about that, but it wrote an Apps Script so that every time I run that script, it creates a new spreadsheet for the month that has the paydates set,
(33:52):
because I get paid monthly and my husband gets paid every other Wednesday, so it's tough. Every month I was having to put all that in manually. Yeah. And it said, you just run this Apps Script. And I said, okay, give me step-by-step instructions on how to run an Apps Script, and what is an Apps Script? And it led me, you know.
(34:13):
And then I would run it and it wouldn't work, and I would say, this is the error message. And it would say, that's probably because of this. We used it in a class to build a website last semester, you know, from scratch, folks who have no HTML or coding experience whatsoever. So this is where, for these small businesses and organizations
(34:36):
that don't have a techdepartment and a web development
team and I don't want to saymarketing team but you can do a
little bit, but you can do alittle bit, I guess, but it
opens up so many doors for folksto be able to be more
productive and doing such stuff.
Speaker 1 (34:55):
Absolutely. And, like I said earlier, it goes back to: you have to understand and know the craft. That's why it's like, AI won't replace marketing, it'll just reveal all the bad marketers. Another great example, I was mentioning photographers earlier: there's a food photographer here in town who's been doing food photography for 30 years, and you know, he's almost 60, and he decided, as soon as AI came around the bend,
(35:17):
he was going to embrace it, and his ability to leverage the tools, you can see it in his work. You would be hard-pressed to find somebody who could create and generate food photography at the level at which Terry does, but it's because he understands the nuance of food photography, because he spent 30 years doing it, and he really dug in and learned the tools. The craft
(35:49):
really works well, and it's also why you can't just replace your whole marketing team with AI. Unless you've got a really super well-rounded, really brilliant, you know, AI user and marketer behind the keyboard, you're kind of screwed. You're just going to get very vanilla, generic and potentially damaging marketing that gets put out there, that has bias, that has assumptions built in, and things like that.
Speaker 2 (36:09):
And that looks exactly the same as every other agency that's using AI to create their marketing. I mean, you'll stand out by being the same as everybody else.
Speaker 1 (36:22):
Exactly, yeah, 100%. All right. So we know that there are a lot of leaders out there who kind of feel caught between embracing AI for competitive advantage and protecting their team's jobs and wellbeing; we've kind of touched on this. How have you worked to navigate some of that tension, if you've experienced any of it at the university?
Speaker 2 (36:39):
So I think one thing that we've always emphasized, you know, that's why we started with the principles of AI use. One of our principles is that humans will always be critical for the work that we do, and that AI should not replace anyone but
(37:00):
augment their work. So we make that clear from the beginning: adopting AI is not an attempt to slash jobs. It's to allow all of us to grow and do more things in our roles. So I mean, you can kind of look at it two different ways. You can use AI to get rid of all your people and do the same, or a lesser or worse, job at what you've been doing
(37:21):
with all those people. Or you can use AI to keep all those people and increase your performance, right? And that's definitely the way that we're embracing it. And I think, again, there are folks that are skeptical of AI use.
(37:43):
You know, that's their right. It's fine. There are folks in my field, around writing pedagogy, who reject AI, and they have really good reasons for that, and I respect that, and I would never pressure anybody to use it if they don't want to. But I do still expose everyone to the ways that I'm using it, the ways that the folks who are using it are
(38:04):
using it, and occasionally you get some folks who are surprised, and it takes those specific use cases and examples for them to actually start to see what it would mean for them in their day-to-day. Right.
Speaker 1 (38:25):
So, all right, we got a question from our live audience, so awesome. Thank you guys for asking. Nikki, how do you introduce AI in a university environment? It seems higher education and nonprofits are the slowest to adopt tools that could help them streamline work.
Speaker 2 (38:40):
We are the slowest. We win the turtle ribbon every time. So for us, you know, we have support from the top, for sure. President Crawford at Miami University is super excited about technology and the advancement of technology, and
(39:01):
what it means for higher education, what it means for our learners and our students and the student experience as they're coming through Miami. So that helps a lot. You know, he knows that there's red tape built into higher education that even he can't change, but his support makes a big difference in getting through that red tape.
(39:23):
For us, you know, it also doesn't have to be all or nothing, right? It doesn't have to be a big adoption. For us, it just started with having some principles. And in UCM, my division, University Communications and Marketing, it's 2025, so it's been almost two years since we started the committee,
(39:43):
and we have adopted one paid tool, and that's ChatGPT Team. That doesn't mean, you know, we do have some other ones that some data specialists and folks are using that are built into tools we use, but as far as big, just AI-centered, AI-forward tools,
(40:05):
that's the only one that we've bought, you know, gone through the software process to get access to. And the reason why? First of all, it does a lot for all of us, so it made the most sense. We were very careful in choosing a tool that has a
(40:25):
closed system. We have the Team version, so that nothing we put into it goes out and is shared, and it supposedly is not used to train any other models, if you trust that. Read the fine print. Yes. And everyone on our team has to learn our principles and policies and take a quiz, a nice little one, it's very short, before they get
(41:02):
access, to at least acknowledge the policies and principles and think about that before they ever start using AI. Now, are folks using other free tools? We have Gemini. We're a Google school, so everyone on campus has access to Gemini built into their accounts for free, and so
(41:22):
folks are using those as well. But it doesn't have to be a huge lift where we suddenly flip a switch and we're an AI-forward institution. For us, it's okay that we're trying to be patient. And a lot of times, when you do go after those big changes, they end up
(41:45):
just being shiny objects that companies buy and then never use. So that's something to keep in mind as well.
Speaker 1 (41:54):
Great advice, great answer. Thanks for the question, Lou. Yeah, it's like the total opposite of some agencies that I know of, that were like, we're going to build our own model, we're going to have our own LLM, and we're going to sell it and white-label it. I'm like, can you all just chill for a moment and actually ask, is this something that people are going to want to buy? When you've got these super-powered, you know, large language models backed by the likes of Google and
(42:16):
Microsoft, you can't compete. You know, I understand the desire for privacy, but even those guys will have that all figured out as well. But I love what you said about the quiz-taking and the education required to even get to the paid tool or service. So I'm kind of curious if you could share a little bit more about what responsible AI adoption might look like in
(42:36):
the day-to-day, and what kinds of guardrails, you know, actually matter when you're trying to implement something like this.
Speaker 2 (42:43):
So, the training: they go through our policies. I mean, we're a university, so we have a learning management system called Canvas, so we kind of already have that at our disposal, which definitely helps a lot with doing any kind of
(43:04):
training. So we've built a course, really just a Google Slides deck in a Canvas course, that includes the principles, a little bit about each principle, and then some guidelines that are specific to our marketing division. To be clear, these principles and policies are for the marketing division.
(43:24):
They are not university-wide. I don't have that kind of power yet.
Speaker 1 (43:28):
It's a whole other can of worms too, right?
Speaker 2 (43:30):
Yes. But they go through, and for each principle it's not just reading the principles or policies; it's policy and then example, policy and then example. So there's the policy that we don't use AI for sensitive or
(43:51):
controversial content, and the example there is the Vanderbilt one that I talked about at the beginning of the power hour. So they go through that. There's a Google Doc spreadsheet where I check them off if they get through the training, and then I send them the invitation to join the Team environment.
(44:15):
And then we have Slack channels where we talk about what we did today with AI and how we're using it. And what we're developing now, as we've gotten through the policies, principles and guidelines, is the education part. So we have a Google Form where I'm asking and bugging people constantly: Did you use AI today?
(44:36):
How did you use AI today? Go fill out that form. And then the form populates a spreadsheet that folks can search. Say I don't know how to use AI in my job, but I do emails; I'm writing an email. Then they can search "email" and see how other folks have used AI to help with that: what barriers they hit, what specific tools they used,
(44:58):
whether it was a success or a failure, what they learned about how AI works from that experience. So we're working now on building that use case library, a prompt library, you know, lists of resources and stuff for folks to use.
Speaker 1 (45:16):
That's brilliant. I love that idea of a use case library. [inaudible] by ChatGPT,
(45:52):
but definitely I think it's a good tool and well worth looking into when you're in that collaborative, you know, generative AI use situation.
Speaker 2 (46:00):
And it's not a huge investment. I mean, it's $20 a month if you're using the Pro version individually, and it's $25 per person for Team. So if you've got four people on your team, that's not terrible.
Speaker 1 (46:16):
No, $100 a month, can't beat that, if you're using it right and using it well. Absolutely. Again, the optimizations and efficiencies you'll create are huge. So I love that idea; I totally wrote down the use case library and might be stealing it. I think that would be a cool thing for our Together Digital community to do together as well, because we have a lot of business owners, but then we have people who are within marketing agencies, and
(46:37):
we're all just trying to figure it out. And it is so cool when you can kind of just scroll through, because that can be a huge obstacle for folks: I'm fine with using AI, I just don't know what I would use it for.
Speaker 2 (46:52):
You can also put your job description in there and ask AI what it can do for you. Oh, that's a great idea. The Marketing AI Institute has a JobsGPT, where that's basically what it is. I listen to the Marketing AI Institute's podcast.
Are you going?
Speaker 1 (47:08):
to be at MAICON this year? I am. Yeah, awesome, wonderful. Yeah, and a huge shout-out to Cathy McPhillips and all of her peeps who make that happen. Yeah, she's amazing. So yeah, it's a great event.
Speaker 2 (47:21):
Cleveland is beautiful in October. And I lived in Cleveland for a little bit, so it's a good chance for me to see folks and, yeah, go to MAICON. It's great.
Speaker 1 (47:29):
Yeah, agreed, agreed. Yeah, I love Cleveland. It rocks. I like being up there. Content Marketing World was there for many, many years, and they've moved now, but that's how I got to know Cleveland, was those events. Awesome. All right, let's see. I think the next two questions you kind of already answered in a lot of ways. So, live listening audience, we're coming close to time, so if you've got additional
(47:50):
questions, don't be shy about dropping them in. And then I'm going to get to one last question, and then our power round of questions. So my last kind of bigger question for you, Nikki, is: for leaders, and those who want to lead, who are still hesitant about AI, whether it's due to ethical concerns or feeling overwhelmed by the technology,
(48:10):
what would you want them to know?
Speaker 2 (48:12):
Oh gosh. I think it's happening whether we resist it or not. I hear that, I know everyone hears that a lot, and it sounds kind of mean to say, but I think it's true. It
(48:33):
doesn't mean that you have to use it and love it and adopt it, but I do think that you really need to take a hard look at what you're doing and what your organization is doing to determine whether ignoring this tool is going to be good for your future and your strategy. And if what's holding you
(48:57):
back is that you don't want robots to take over, I agree, I get it, it can be scary, but I think that makes it even more important to have policies and talk about AI. And you know, your AI policy might be: do not use
AI. And there are lots of sectors that have great
(49:19):
reasons for that. But at least start the conversation, and listen and hear from your team about what they want to see, how they want to use it, whether they're finding it helpful. You know, be open to
(49:40):
listening to your team's feedback and what solutions they've come up with, because they are the ones in their day-to-day, so they have the best perspective on how it might be helpful for them.
Speaker 1 (49:53):
Yeah, and they're likely already using it, like you said, on the down-low, so you might as well come on out and address that elephant in the room. It also makes me think, for some people who are a little trepidatious about leveraging it at work: try using AI in your personal life. You know, we were visiting my dad in Florida a couple of weeks ago, and last minute, he's in Clearwater, so Disney's two
(50:15):
hours away, we decided we were going to go to Disney the next day. And I'm like, wait a second, who decides to, unless you live in Florida and have a pass, who just shows up at Disney? You have to have a plan, right? And of course, there are millions of sites and blogs and expert articles out there about Disney. And I just literally went in and said, hey, we're going to go to Disney tomorrow.
(50:35):
Here are my kids' ages. Here's what they like. One likes rides, one doesn't like rides. One likes shows. We're going to be with my dad. We need to minimize the amount of back and forth across the park. We're going to be there one day. Here are the two parks we're going to. Give me an agenda. And, oh my gosh, Nikki, it was amazing.
(50:56):
It was a really tight, awesome agenda, and granted, we flexed within it, but at least it gave us some guidance to know what direction we were going, so everybody got a little bit of what they wanted out of the day. And for me, it was going in with a little bit of a plan and not a wing and a prayer, you know, going to one of the most crowded places, meant to be the happiest, but easily the most miserable if things go sideways.
(51:16):
And so I've used it for that. I've used it for looking for gift ideas for the holidays when I started to get burnt out. I've done it for meal planning, for creating workouts for myself. Just kind of play with it. Like you said, have that mindset of, you know, experimentation, and find safe places to do that, and then, all of a sudden, you're going to try to AI everything, right?
Speaker 2 (51:37):
Yes. I mean, next time you start to Google something, try AI instead. That might be a good thing to try. I've used it for all those things, plus my garden. I sent a picture of a plot that I can't get things to grow in, and I said, why isn't anything growing here? And it said, I noticed that the sun is over here, that it's
(51:57):
probably shade all day, and there's a big tree, and the soil is this kind, because I know where you live. Well, okay, so it does get kind of creepy, because it does know way too much about me. I was also asked to share a favorite poem earlier this week for another event, and I haven't read any poetry since college. So I said, what
(52:18):
poems do you think I would like? And it gave me 10 poets. I liked most of them. And it was interesting. It gave me poets who write concisely and, you know, get the point across, who look simple but aren't simple.
(52:39):
And I was like, how would I even know to look for poems like that? But yes, you're exactly right, that's exactly what I would like. So it was interesting.
Speaker 1 (52:50):
You know us so well. That's great. I love it, I love it. All right, let's get to our power round questions. We've got lots of stuff going on in the chat, but it's mostly just comments and support and agreement. So, all right: what is your go-to prompt when you're stuck and need AI to help you think differently?
Speaker 2 (53:06):
Help me do this. I mean, literally: I'm trying to, I have this problem that I can't solve, or I have a seed of an idea and I'm not sure where I'm going to go with it yet. Give me three things that I could do with this. The other thing that I love to do is talk to it in the car.
(53:28):
So if you have the Pro, you can talk to it, and I can pick what voice I use, and I brainstorm, especially on the way home from work, when my brain is still really going, and I have a lot of thoughts and stuff that I want to get down, but I'm literally driving, so I can't write them down.
(53:49):
Oh, I'm going to try that. Oh my gosh. It's great to just bounce ideas off of, and like, okay, expand on that. Okay, now make me a to-do list. Great. Make me a Google Sheet, a spreadsheet that I can import into Asana, my project management tool, and make it a project, you know. And then, when I get home, I can do two clicks and I've got a new project. Boom, there it is. It's great.
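That last step, turning a brainstormed list into something a project-management tool can import, usually comes down to generating a CSV. A minimal sketch; the column names here are hypothetical, since real importers expect the tool's own template:

```javascript
// Sketch of turning a brainstormed to-do list into a CSV for import
// into a project-management tool. Column names are hypothetical;
// check your tool's import template for the real ones.
function toCsv(tasks) {
  const header = "Name,Section,Due Date";
  const rows = tasks.map(t =>
    [t.name, t.section, t.due]
      .map(field => `"${String(field).replace(/"/g, '""')}"`) // quote and escape each cell
      .join(",")
  );
  return [header, ...rows].join("\n");
}
```
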
Speaker 1 (54:08):
I love it. And then I love it too when it starts to anticipate, you know. You start to put out those ideas, and it's like, would you like me to make this in a PDF format or a spreadsheet for you?
Speaker 2 (54:19):
Do you want an action
Speaker 1 (54:20):
plan? I was like, oh my gosh, yes, please. It upsells you.
Speaker 2 (54:23):
So that you can use more data, so that then it can be like, you've run out of questions this month, would you like to pay more money? I totally have the Pro. I can't not have the Pro.
Speaker 1 (54:35):
I can't not. It's just not going to happen. I use it every day, every day. All right, what, uh,
Speaker 2 (54:45):
what's the most unexpected way AI has improved your daily workflow? Just getting over humps, right? The wall of awful. You mentioned ADD; I also have ADHD, and there's this great YouTube video about the wall of awful. It's a task that you just can't even start. It seems exhausting. And especially working more with strategy and less with tactics
(55:06):
in my role at Miami, it's helped me to look at something, you know, where somebody says, okay, create an ambassador marketing, stakeholder marketing strategy for this thing, and I'm like, great, that sounds like a big job and I don't even know how to start with it. So, going to AI and saying, okay, please write out a strategy for me.
(55:28):
And then we fine-tune it, go back and forth and edit and stuff. And at the end I always say, what's the first step? What is the first task that I need to do to make this happen? Make it small. And it says, pick up your pen, go write one email. That really helps.
(55:49):
It's very good for my neurodivergence, for sure.
Speaker 1 (55:53):
I love it. That's a great use case. Thank you for sharing that. I really appreciate it, and I know our listeners do too.
Speaker 2 (55:57):
All right, last one: one AI myth that you wish you could bust for everyone right now? That it's lazy. That using it is lazy, that people who use it are using it to do everything, that it's all or nothing. I would happily let anyone go through my chats to see how I'm
(56:17):
using, you know, how I use the LLMs every day. It's back and forth, it's brainstorming, it's quizzing me on the brand style and AP style. You know, I'm telling it, that idea is stupid, you need to try again, right?
(56:38):
It's very collaborative if you're using it in a good and effective way. So that's one thing: I think that we should all talk about and be open about using AI, so that there's no shame, so that folks aren't thinking, you know, we don't need to be ashamed of using it, because we're not using it to replace us.
(56:59):
That would be dumb. Then I wouldn't have a job anymore, right?
Speaker 1 (57:03):
Like you said, it's a supplement too. You know, it's an optimizer, it's an enhancer, it's a tool. It is a tool. It is still a robot. You still need those.
Speaker 2 (57:11):
One thing I want to mention, I know we're almost out of time. There's one thing that we talked about in one of my classes, about technology adoption and the literacy crisis that we're going through right now with AI. We're shouting from the rooftops that students aren't writing papers anymore. This happened with the invention of the pencil.
(57:35):
Scholars believed that if you had an eraser, you wouldn't be as smart, that if you can just erase things, that's a shortcut. Yeah, oh my gosh.
Speaker 1 (57:49):
I love that analogy.
Speaker 2 (57:50):
It is. It's absolutely true. There were, you know, scholars who talked about that. And you know, it happened with the typewriter, with the computer, with the pencil. It happened when writing was invented, when public education became a thing. There were people pushing back on that because
(58:11):
they thought that if people don't have to memorize things, if they can just go look it up, then they're not going to be as smart anymore, and the public would be dumber if they all learned how to write. That's so wild, isn't it?
Speaker 1 (58:26):
It's crazy when we think about that today. Make it make sense. Yes, exactly, 100%. Oh, Nikki, thank you so much. This was such a great conversation, lots of thoughtful comments, and thank you to our live listeners as well. Some really practical conversation about something that so many of us are really grappling with, you know, trying to find a way to be responsible and really work with the adoption of AI in
(58:49):
a way that doesn't have to be overwhelming, just more intentional, ethical and human-centered. So thank you so much for joining us today. We really appreciate it.
Speaker 2 (58:58):
Thank you so much for having me.
Speaker 1 (58:59):
It's been great. And listeners, keep an eye on both our YouTube channel and our podcast; make sure you subscribe to both for this episode. When it finally goes live, it's usually in about a week. And if you're inspired by conversations like this, I would really encourage you to check out the Together Digital community, where we just love to talk the truth and share really
(59:21):
what everybody else is thinking. We like to address the elephants in the room. Our amazing members are such a brilliant community of really super savvy, smart women who are there to listen and there to learn. They own their own career and personal development by being members. It's just a great place to find ongoing support from people who understand the unique challenges that we're all facing.
(59:42):
So, all of you, thank you for joining us today, and we will see you all in another week or so. Until then, everyone: keep asking, keep giving and keep growing. Bye now.
Speaker 2 (01:00:04):
Produced by Heartcast Media.