Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:09):
Hello everyone and
welcome to our weekly power
lounge.
This is your place to hear authentic conversations from those who have power to share. My name is Amy Vaughn and I am the owner and chief empowerment officer of Together Digital, a diverse and collaborative community of women who work in digital and choose to share their knowledge, power and connections. Join the movement at togetherindigital.com. All right.
(00:32):
Today, friends, we are excited to welcome one of our own, a Together Digital member and ambassador at our Detroit chapter. Lexi Trump is the Director of Digital and AI at Franco, another amazing agency. I have loved getting to know your coworkers over the years. It seems like such a phenomenal place to work. I know you've been there for a while, Lexi.
(00:53):
We'll get into that here soon. She leads the digital strategy for B2B, automotive and SaaS clients, heads Franco's AI Adoption Task Force, and brings a background as a former journalist with bylines in Eater Detroit, Thrillist and others. She's the perfect guide for this conversation.
(01:14):
Lexi and I had the pleasure of meeting. Actually, I was just thinking back to that. The last time I was in Detroit was our panel, and you were on our panel. I don't think you were a member yet, but I remember sitting there listening to you going, oh my gosh, this girl, she knows what she's talking about. So I am so thrilled that you're a part of the community. You helped champion Together Digital and our members. So we're excited to champion you here today and give our
(01:35):
listeners a little bit more insight into how they can start with generative AI. Obviously, you can't spend a day without hearing that term if you're in digital marketing and advertising. Everybody's kind of at a different point on the spectrum in terms of knowing where we are and what you're doing in that space. There's a lot of overwhelm, right, within the landscape of AI
(01:55):
, from everything from tools to applications. Lexi's approach, you are all going to find, is very practical, easy to use and hopefully not overwhelming: how to build an intentional AI strategy that preserves the human touch. So we're excited to have her here with us today. Thanks, Lexi, for joining us.
Speaker 2 (02:12):
Thank you so much for
having me.
I'm excited, absolutely.
Speaker 1 (02:15):
Right, all right. Before we dive in and start nerding out about all things AI and generative AI, I would love to hear about your journey from journalism to leading digital and AI now. I love how you're just acquiring all this amazing work. What sparked your interest in becoming what you call a professional nerd?
Speaker 2 (02:32):
Yeah, so I always use the same, probably corny, analogy that all of this started when my parents got me a Gateway computer in '99. But really, I think from that moment on I've been kind of obsessed with the internet and all things digital, mostly because it allows me to do this self-discovery and self-exploration. And as somebody who loved the library
(02:55):
and loved reading, it was always a way that I could kind of hear stories from all sorts of different sources and learn so much. And journalism really allowed me to do that too, right, kind of amped up a little bit. My favorite thing was just getting to talk to people, and then they would recommend I do research into something and I could spend all this time putting the story together.
(03:16):
I hated transcribing and I hated deadlines, but I really loved learning all the things. So that natural curiosity kind of led me more into the digital realm. And that was right around when digital-first content started to come up as well, and real social media strategy, which maybe I'm dating myself now, but that was an entire other kind
(03:38):
of build-the-plane-as-we-fly-it kind of time, I think. Journalism, specifically short-form, digital-first journalism, offered kind of both that challenge and that learning curve I'd always been looking for. Which then led to agency life. I mean, there's also the joke that all journalists, when you reach, like, your mid-30s, you make that agency-life jump.
(03:59):
But for me personally, I think it was the natural next step, because I get to look into so many different industries across so many different types of clients and figure out, you know, what makes sense for them and their individual audience.
Speaker 1 (04:15):
I love it. Yeah, you are like the quintessential TD member in that sense, of, like, that insatiable curiosity and always wanting to learn. And it's interesting, because a lot of our members fall within that 15-to-20-year experience range. And I do think it is because we all had the chance. Let's go ahead and date ourselves, that's okay, let's own it. We should be proud of that fact. We got to see the birth of the digital era and how fast it's
(04:39):
come along. You have to be curious and willing to be open to learning and exploring. And as you were describing yourself on your Gateway computer back in '99, it totally reminded me of something my mom would always say whenever we had a question. She'd be like, look it up, look it up. And it was this little definitive, you know, 26-volume, you know, one for all the letters of
(05:02):
the alphabet, encyclopedia set, that was probably done in, like, the '70s and didn't really have a ton of data. So for us, it's like we remember what it was like when you had to spend all that time at the library, poring over books and encyclopedias, and now it's, like, all at your fingertips. So I don't think we take that for granted as much as kids these days, who basically have all of
(05:23):
that right at their fingertips. So I do think that appreciation of it is such a good thing, and I think it's definitely shown in how and why, although I'd love for you to color that in a little bit too, you've been able to evolve your role, right, to kind of find your place within the workplace. As far as, like, what's that next
(05:45):
role, what's that opportunity, what does growth look like? But you know, knowing you for the few years that I've known you, Lexi, I feel like you've done nothing but grow. What's some of your thoughts and advice on that?
Speaker 2 (05:57):
So I've been particularly, I guess, lucky, or had the opportunity, that I've been kind of put into this intersection where things are changing in a really big way but they're also becoming much more accessible at the same time, in multiple different aspects. One being, obviously, the technology boom and being able to have a home computer when I did, early on, way before my siblings, who were in high
(06:20):
school, who probably, like, that would have been a game changer for, right? I had it through, you know, middle school all the way through high school, which was an incredible opportunity. But it also allowed me to access more information and kind of continue feeding that hunger for knowledge and that curiosity I had. Same thing, right, with journalism. Everything was changing very rapidly, but it was also becoming
(06:43):
much more accessible. We didn't need a full camera crew anymore to do anything. We had our phones and we could go out and do live feeds. We were telling live stories, right. I was able to record really high-quality footage, and people expected that now. It didn't have to be so polished, and now it was more accessible and more exciting. But in all of these times we very much had to fly the plane,
(07:07):
or build the plane as we fly it, I always say. So there has to be a little bit of that nimbleness and, I guess, not being afraid to fail. There is no wrong way to do any of this. We're figuring it out as we go, right. We may stumble and evolve, but I think it's that willingness to be wrong and see what happens and learn from it.
Speaker 1 (07:26):
Yeah, and that leads so nicely into my next question. Because, you know, in this kind of AI overwhelm that a lot of folks are feeling (you even referenced this in a recent blog post that you did), people's first question is often about what tools are you using. It's all about the tools. And I'm curious, you know, why do you
(07:48):
think our focus tends to gravitate towards the tools rather than what you're talking about, which is more mindset, more strategy?
Speaker 2 (07:54):
Yeah, I mean, the easiest answer to that is there's just so many. I tried to look up studies the other day on how many hundreds and thousands of tools have been released, and it is asinine. There's actually a really interesting story about the .ai domain, which I won't go into, and how that's exploding. But there's so many of them, and every day I think my inbox is filled with another cold email with a tool that says
(08:18):
it can do X, Y or Z, all super targeted towards me, right, my particular needs. And all of them feel very much like that easy button that we're all looking for. Everyone is just looking for something that will just work, right, especially because these things maybe take a little bit of that thought and risk out of it, when something is, you know, a little bit, uh, being built as we
(08:42):
fly it, yeah, right. But the problem is, when we start kind of chasing these tools without that strategy, it leads more to that wasted time, because these tools can only really matter if we're using them to first solve our real problems. Or else we're just going to be shoehorning solutions in where it doesn't make sense, and now it's more time being spent.
Speaker 1 (09:03):
I agree, I agree. I've got an upcoming event that I'm really excited about, talking about AI without fear for, namely, small business owners, because I think it can be, like you said, a great tool, I mean a great democratizer, but only if you're using it in the right way. The analogy I came up with for my presentation was it's kind of like finding scissors for the first time. You've been, like, ripping paper and trying to do it really well
(09:26):
for years, and all of a sudden you find this new tool, aka scissors, that are AI. But then you're running. It's like running with scissors, right? You're just running around looking at, what can I cut? I'm going to cut everything now. But scissors aren't the tool for everything, right? If you've got a big honking piece of wood, if you've got something, you know, something else,
(09:46):
scissors aren't the answer for everything, and neither is AI. And when you're running around like that, acting a fool, it is kind of like running with scissors.
Speaker 2 (09:51):
This is my analogy, you know: imagine it's a Swiss Army knife, and it's all the other things that we aren't using it for that it could be good for, in addition to that.
Speaker 1 (10:01):
I am adding that to my presentation. Extended metaphor. Thanks, Lexi, you're the best. I love it, a hundred percent.
That's so, so true. Because you're right, I think people are chasing, we hear it all the time, right, chasing the shiny object, you know, looking for that easy button. But the wrong tool in the wrong place could be disastrous. So really making sure that you have a sound strategy and then
(10:22):
determining the tools that are the right tools for the job is really the best way to go about it. And when you think about it in a practical, kind of physical-world sense, it makes all the sense in the world. It's just, I don't think we think about it that way. But I love your multi-tool reference too, because I agree, there are a lot of things people are not using AI for that it's actually really made for, more so than just generative.
(10:43):
All right, you've done some more writing. I love that you're continuing to write. Clearly your journalistic chops have not left you. So be sure to go onto Franco's website and check out some of Lexi's blog posts; we'll include them in the show notes. But you've written about the importance of understanding the foundational technology behind AI tools, right? So it's not just understanding what's the right tool and when is
(11:03):
it the right time to use it, but what is the technology behind it? For those of us who aren't very technical, what's the minimum we should try to understand about how large language models, which is really what most AI tools are based on, work?
Speaker 2 (11:18):
Yeah, and again, I think it's so important, even if it's just that basic understanding, for that exact thing that we had just talked about: using it for the right solution, right, using the tools correctly. And when you understand what they are inherently, that's going to be so much easier. So, breaking this down in a way that I, as a very much
(11:39):
non-engineer, do for the rest of my agency: I like to describe them as giant statistical engines, these math-driven stats machines. They're not sentient, but they're really good at spotting patterns, which means they don't understand language like you and I do. They don't read letters, they don't read words or sentences,
(12:02):
but they're really good at predicting what chunks of letters will come next, right. They're really good at predicting what numbers will likely come next. But they don't really understand the data like you and I would, and that becomes really important to understand for some of the things that you're using it for, because it's just looking for patterns based on its training
(12:24):
data and what you feed it. And if you aren't feeding it a good input, you're not necessarily going to get a great output back, right. You're just kind of hoping it figures it out from, I guess, anything that it pulls to put together. The more intentional you can be with that input, what you're putting into the model, how you're asking it, the better
(12:46):
that output is going to be when you get it out. And then you can use it across tools, right. You're not being shoehorned into this one specific solution. Those skills can be applied across models.
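(Editor's note: for the technically curious, here is a tiny, hypothetical sketch of the "giant statistical engine" idea Lexi describes. Real large language models use neural networks over subword tokens rather than simple counts, but the core move is the same: tally patterns in training data, then predict what most likely comes next. The corpus and function names below are illustrative, not from any real model.)

```python
from collections import Counter, defaultdict

# A toy "training corpus" standing in for the model's training data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, count which word follows it and how often.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# "the" is followed by "cat" twice and "mat"/"fish" once each,
# so the engine predicts "cat" purely from pattern frequency.
print(predict_next("the"))  # prints "cat"
```

Note how the predictor has no idea what a cat is; it only knows which chunks tend to follow which, which is exactly why the quality of what you feed in shapes what you get out.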
Speaker 1 (12:58):
Right, yeah, I love the way that you just described it. It is highly predictive. It is literally just looking for patterns. It doesn't have a brain. It's not, like, understanding and processing everything the way we would. It's literally looking at large sets of data and saying, oh, I can predict what's next, which is, you know, again, what machines are good at and humans maybe not always so much, and
(13:19):
that's great. But that's really, like, the mindset you need to have when you're starting to use them, right. But yeah, I love that we're talking to you today, because I think a lot of our listeners and members are people who are advocating for the use of AI and, you know, newer technology and tools, and sometimes it's a challenge,
(13:41):
right, because there's a lot of fear, there's a lot of misinformation, misunderstandings, you know. And really trying to get people to embrace these things, it can be a challenge, you know. And so for those who are listening, who are leading small teams with limited resources, and you're spending your time not just trying to implement but also trying to educate, you know, you're doing this and you've been doing this
(14:02):
at Franco. How could they approach AI adoption intentionally, maybe, without getting caught up in what you call, what we're calling, this AI overload?
Speaker 2 (14:12):
So the biggest advice that I can give anybody is: don't start with the tools. Again, they seem like easy buttons. It's the first thing you can do. You see something in your inbox, you've heard really good things, you know it can help, you've heard of another agency or company that uses them. Don't start with the tools. Really start with your needs.
Speaker 1 (14:30):
And ask yourself.
Speaker 2 (14:31):
Why are you exploring using AI? What do you need it for? What's not working right now with your current processes, and what are your goals? Because, ultimately, if whatever you're implementing isn't supporting your goals, why are we using it, right? So if we can start by first identifying real pain points, then we can evaluate if and how we can use AI to solve them.
(14:53):
And it might not be right in that one way, the scissors, right. It's a whole Swiss Army knife of things that we can do to maybe solve that problem in a way that works best with your workflow.
Speaker 1 (15:03):
And by doing that, you're going to be able to keep your work focused, right, on what is, again, eyes on the prize, what's at the end of the rainbow here, without getting pulled into all these various tools and then trying to shoehorn them into your process. Absolutely, because I think people are going to feel and sense that, right? Like, if you're just kind of trying to force a new
(15:25):
tool without a rationale or a why, you're really going to struggle, right, because that's what we all gravitate towards, the why. I do want to call out that we've got our live listeners here with us today. We're so thrilled that you're here, and we want to hear from you as well. So if you have questions throughout the conversation, drop them into the chat and I'll be sure we get them asked before we wrap things up here today.
Because, like I know, everybody's unique circumstances
(15:46):
are here. Although we've got a good list of questions for you, Lexi, I want to make sure we're helping our listeners as much as we possibly can. All right, the next question I have for you: in another one of your articles, you mentioned the relationship between AI literacy and our perception of AI as kind of magical. How has demystifying AI changed your own approach to implementing
(16:06):
this? So this is a nice build on our last question.
Speaker 2 (16:09):
Yeah, so we've now, as in, I guess, our AI task force, been working for about 10 months doing this exact process I just described, right. We're starting with analyzing what we are trying to improve, and then, again, how we can use AI to do that. And really, what we've learned throughout, you know, these 10-plus months: A, we've learned it's not magic, right, but we've also learned
(16:31):
what AI is really good at and what it's not good at. And I think by doing that alone, we've kind of gotten to where we're no longer a spectator; we are now the magician, I like to say. We can be a little bit more intentional with our usage, versus being reactive and trying to experiment with different things that aren't necessarily working.
(16:53):
We're kind of thinking in that AI-first mindset now, and with that too, we're starting to think about it a lot more holistically within our own processes, right. It's not after the fact, looking at this whole thing and figuring out, okay, now where can we use it? Throughout the entire process, now we're looking at ways that we can make it more efficient and that we can improve our time
(17:13):
usage.
Speaker 1 (17:15):
No, that's fantastic. I mean, you put it so simply, but I don't take that lightly. That kind of change management is never an easy thing, right, when there's just a ton of concern and fear, with security, people's jobs, all those different things in mind. To guide an entire company along to taking an AI-first mindset is no easy feat, and especially in 10
(17:37):
months.
Girl, that's impressive, that's really impressive.
Speaker 2 (17:41):
And we have so many different types of clients, too, and different processes, and all of our services are different, right? So trying to figure out a this-is-how-we-do-this-task approach wasn't necessarily going to work for us, right? We really had to get to the heart of: what are we spending time doing that we don't want to? Where do we want to improve our time?
(18:01):
What do we want to do more of? We really focused, and you'll hear me say efficiency probably too many times, but we really intentionally focused on that first, yeah, doing optimizations before innovations, because optimizations pay the bills.
Speaker 1 (18:17):
Love it. Optimizations before innovations. I'm sorry, I'm just going to say, like, we have to quote that. Take it, right. But it does, it helps.
Speaker 2 (18:25):
You know, that's that ROI that you're looking for, which can be hard when you're also trying to invest in new tools and invest your time in figuring out how to use them.
Speaker 1 (18:34):
Right, and we do get so excited. Don't get me wrong, I am all for innovation, but if things are not optimized first, like, what is the point? You're just innovating to innovate; you're actually not improving upon anything. So I love that. And I want to get a little more practical here too, if you could walk us through, like, your process for determining whether an AI tool is actually worth adopting,
(18:55):
because you've given us a little bit of guidance, but maybe a case-in-point example would be helpful for our listeners as well. And what questions should we ask beyond, okay, what can this thing do, when it comes to implementing AI tools?
tools?
Speaker 2 (19:06):
Yeah. So if we're going to get down to any kind of situation or task, whatever it might be fitting into: first, and I think we've covered this a lot, what is the problem that you're solving? What do you want to do, and what are you trying to achieve at the end of this? Making sure it's measurable right at the onset, I've learned throughout this entire process, has been so, so valuable.
(19:29):
Again, not only is it allowing us to show that ROI, but now we're actually able to see whether or not this is working. Second, from that, asking yourself: how will this tool actually help us achieve that, right? Will this tool make us better, faster, more strategic? What is it doing for us? How does it integrate into the work that we already do,
(19:51):
right? Is this going to create more friction? Because if we're implementing a tool to improve our processes and now we're just creating more friction, right, we can start to see ahead where, you know, there are going to be problems there. And then: can we trust the output? What is our plan for validating it? I say this with everything, because hallucinations aren't a bug
(20:14):
. They're just destined to happen with the way that AI works, right. So we need to have a process in place for any of these things, to really guide our ethical usage and make sure the content that we're putting out is accurate and correct.
Speaker 1 (20:29):
Yeah, yeah, no, definitely. Human in the loop is absolutely essential. And a lot of marketers, I think, people are sometimes afraid, like, oh, it's going to take our jobs. It's like, no, no, it's the people who know how to use AI that are going to take your jobs, not AI itself, because you can't just run on AI alone. It's not possible. And that's another phrase I heard recently too, somebody put out there: it's like marketers are
(20:51):
not going to get replaced by AI, but it's going to expose the bad marketers, because people who don't understand marketing are going to try to use it in place of people who actually know what they're doing, and there are no checks and balances there, right? And so they'll just put up something and it's total garbage, or it's biased, right, because we know there's bias in the code, so therefore there will be a bias in the output. So
(21:11):
that human checkpoint is absolutely essential. I think that's a good call.
Let's dig into ethics a little bit more, because that comes up often in AI discussions. Although, I need to clear my throat, give me a second.
Speaker 2 (21:23):
I love the mute ahead of time. You're a pro. Right.
Speaker 1 (21:27):
Done this a few times, and I can, like, feel it coming, so I'm like, all right. All right, let's get practical here and dig in a little bit more. I'm going to focus on ethics and how it comes up so frequently in AI discussions. What ethical considerations, specifically, should digital professionals be mindful of when implementing generative AI in their workflows?
Speaker 2 (21:50):
Yeah. So when we talk about ethics, I think it's important to specify that ethics isn't just about having a policy, right. It's ultimately about protecting trust. So when we're figuring out what our AI ethics policy is, or how we should be writing it, we need to first assess what our responsibility is, right. And not just our responsibility to our team, but also to our clients,
(22:12):
to our audiences, anyone who's going to be impacted by your AI use cases. For example, no AI-generated content should go live from any of our brand channels without human review, because we know we have a responsibility to our audiences to build their trust, so that they know that the content coming
(22:33):
from us is accurate. This sort of first analyzing who we're protecting and what our responsibility is to them is really key to that mindful application and to setting up the right guardrails and vetting processes.
Speaker 1 (22:46):
I love it, and that's so great. And there's a lot there, obviously, that we could expound upon. And, you know, this is one of those situations, like you were saying earlier. It sounds crazy, but AI and ethics, that's a plane we're building while we're flying it. It's one of those things where you just have to stay vigilant, right, and you have to think about your use cases and
(23:07):
just know there's going to be slip-ups. But when there's a slip-up, or there's an instance where it's like, okay, this didn't really align with our values, our morals, our standards, our ethics, well then you've got to put it in writing, you've got to train, you know, things differently. You have to communicate across your team that this is not how things get used or done, I think, early and often, and as soon as the slip-up is
(23:28):
made. But at the end of the day, during this time, that's just what's going to happen, right, because that's how we're going to figure it out, by falling on our faces a little bit. So I think that, too, hopefully helps alleviate some of that fear, right? Like you said, when you stay vigilant, when you're on the lookout and you're keeping humans in the loop, ethics becomes an easier thing when you understand, too,
(23:49):
that this is something we're figuring out as we go, and there's not a lot of regulation out there either, which I know is terrifying. One of our Together Digital Cincinnati in-person events is all about unregulated tech in a highly regulated industry like healthcare, right? I can't even imagine. I mean, I worked on automotive, I worked on Ford. I know what that legal department's like; it's not easy to get stuff
(24:10):
through there. So I can only imagine, with AI, it kind of just compounds, like you were saying. It almost creates maybe more friction sometimes. So finding the right places to use AI makes a whole lot of sense, and generative, to me, seems to be the trickiest, right, because it's a thing you're going to take and then you're going to put out into the world, versus take and use for analyzing, yeah.
Speaker 2 (24:32):
Yeah, and again, I did a presentation on this out in Grand Rapids earlier in the year, and again, like you said, we're very much building the plane as we fly it on this, so it is continually changing. But I use the analogy that it's really easy to lose sight of the forest amongst all the trees, right. When it comes to ethics, we're kind of pulling out these
(24:52):
individual things that we need to focus on, versus first analyzing: what is it that we're doing, and who, again, do we owe that sort of moral responsibility to? And I think when we have that compass in mind, it's much easier to look at all the individual processes we have in place through that lens and see, okay, where could there be an opportunity that we could let somebody down,
(25:14):
and be able to kind of proactively assess that risk.
Speaker 1 (25:18):
That's great. I think that's a good way to look at it too. It's just being mindful and vigilant throughout the whole process, even at the very beginning, to say, what could go right, what could go wrong? Right, yeah, exactly. And so, what are some ways in which you are trying to balance the convenience that AI brings while maintaining control of the outputs that AI may be creating for some of the work that you guys might be doing?
Speaker 2 (25:39):
Yeah. So when it comes to AI tools, specifically when we talk about third-party tools and things, a lot of them are designed for ease, right. But with that ease and convenience, there's often that cost of understanding exactly how they work. That's that secret sauce. But that abstraction can obviously limit your
(25:59):
understanding and also limit your control. That's why I always like to make it a point to really understand how these tools work and how they handle data. That way, again, you're not being shoehorned into one specific process; it can be taken across tools. And again, there are going to be a lot more new tools that come out, right.
(26:20):
With that in mind, these tools are continuously changing, right. There are risks involved, there's evolution involved. When you're working with third parties, you're putting your trust in them now: A, that they're staying up to date, and that they are keeping your data private, right. So the more layers of abstraction there are, the more
(26:40):
kind of, you know, liability there is, just inherently. And all of these tools are working on the same handful of models. That's the other thing; that was just the eureka moment of realizing core models versus, like, third-party models. If you can understand kind of the basis of that, it gives you so much power to be tool-agnostic, which is very cool.
Speaker 1 (27:03):
No, definitely. Yeah, and I think there's something interesting too. I don't know if we talked about this yet, but my husband recently made a transition from academia to a startup, an AI company, and he's head of AI research. And one thing I have taken from him that I think is fun, that you reminded me of as you were speaking: as I'm trying a new tool, I kind of try to break it. I try to see where the fallacies are, how it will start
(27:27):
hallucinating, if it'll start making up information, anything like that. Because, as head of research for AI, that's his job. He gets to sit around, not all day, but sit around and try to find ways to basically get the AI to do something it's not meant to do or not supposed to do.
Speaker 2 (27:44):
So I would say, yeah, test the limits. My prompts are abysmal, like, if you look at my prompts compared to what I tell people on my team to prompt with. Again, for that exact reason: I'm always trying to test it, like, okay, what would the person that's going to use it the worst do? Right, they break it. That sort of proactivity is so important too, as we're trying
(28:06):
to be. Again, we know there might be some resistance to some of these things, right? So if you can kind of be proactive and figure out where it doesn't work, and put some of that, I guess, transparency right up front with your team,
Speaker 1 (28:17):
it's really going to, again, prevent some of that resistance to change. And I was kind of curious, like, what resistance have you encountered when introducing, you know,
(28:38):
either AI or any of these technologies, and how do you address the concerns that are coming from your team members? Because I can imagine we've got a lot of folks listening that are in a similar situation.
Speaker 2 (28:44):
So when I first came into this, our whole team, well, we developed a people-first approach, right, even based off of our AI task force that we put together. Um, there are only, I think, two members of our digital team actually on that task force, and the rest are other members of our agency (we do integrated communications, for
(29:04):
context there). So a lot of members of our team are more heavily involved in media relations or influencer work. Um, so having all of those seats at the table was really important as we developed the processes, because it's not just how I'm using the tool. That can often, you know, lead to some biases and, you know, misunderstandings about what would be easy, what our
(29:26):
processes are. So, I think, putting that first and making sure that we had everybody at the table, right. And then from there, we just asked our team the exact things that, again, we're talking about: what are you guys challenged with right now? Where are you struggling? What do you want to do more of? What do you want to do less of? Um, and from there, we were able to really look at all that
(29:48):
through that lens and figure out, okay, where could we maybe use AI to do some of this, right? Yeah. Um, so we were directly answering their questions, we were directly solving their problems, um, and with that there's inherently less resistance, right? I love it.
The other thing is just knowledge. Knowledge is very much power in this way, that same explanation
(30:10):
of kind of what I gave earlier. There's a reason, again, that I came up with the statistical math machine analogy, because it makes it a lot less scary. Yeah, especially when you know that if you're putting your content in, you're inherently getting a version of your content back. Right, that makes it a lot less scary. Knowing how it's trained makes it a lot less scary.
(30:31):
I always say, if you're planning on putting something publicly online eventually, like a press release, that's fine that you use AI with it, because eventually it's going to end up back in that model anyway. If it's going out on the wire, it's going to be in ChatGPT in, like, six months max. It's getting faster and faster, right. So understanding some of these things just makes them a little bit less scary and prevents some of that resistance early on.
Speaker 1 (30:53):
Yeah, I agree, education is the antidote to fear, for sure. And I think exactly what you said at the very beginning of the podcast, it circles so nicely right back to now, which is, you know, when you're working with a team and you're trying to implement something such as AI or any other technology tool, like, focus on the needs. 'Cause then you're like, I've got this paper, I've got stacks and stacks of paper I've got to get cut, and
(31:14):
how are we going to do it? Oh, look, I have scissors. Yay. All of a sudden you're not questioning any of it. But I agree, understanding, education, even just finding ways to kind of simply explain so that people aren't feeling so fearful, is a great way to get them to embrace those tools. So, yeah, I can definitely see why you've done such a great job implementing it. And I love that you have, like, you know, an
(31:37):
identified task force within the agency to help kind of own this and, you know, steer and guide. And I imagine you have that support too, right, from the top down, which also is a huge difference, right?
Speaker 2 (31:49):
Absolutely. Well, we knew, with an agency of our size, we're about 30 people, a little bit over now, we're right at that sweet spot, too, that we're, like, boutique agency size, but we have the clients, right, we have a large workload. We've always very much worked in that sort of way, so the innovation and optimizations that we were going to
(32:09):
potentially unlock with AI were so particularly valuable for us that it became a really big priority for us early on. When we saw that power, I've got to, again, give it to our leadership on that, because that can be a little bit scary, right? Because, like you said, we're making this up as we go, and the only way to do it is by trying things and sometimes failing and figuring it out.
(32:30):
But we knew that if we weren't kind of, like, first to market on one of these things, we were just going to be learning from other people, and our agency is unique in that way, that we knew that we wanted to forge our own way.
Speaker 1 (32:42):
Yeah, I love it. I think that's the way to do it. It seems really smart, and I love that you've embraced it and, again, that your leadership team is aligned, because that really helps. If you don't have that alignment from the top down, it just makes it hard for anybody.
Speaker 2 (32:55):
It's that women-owned agency. We're getting stuff done. Go, ladies. Well done.
Speaker 1 (32:59):
Franco folks, love it. All right. Looking ahead, how do you see the relationship between human creativity, you're at a creative agency, right, and AI evolving in the marketing and communications field over the next few years? I mean, we can even tie this back to your experience as a journalist, you know.
Speaker 2 (33:15):
Yeah, yeah. No, this one is such a fun question, and I've got to say, I think it was last year when we had Helen Todd, um, at Illuminate, and I just fell in love with her and her entire presentation. Um, so I will say I've taken a bit of a note of that sort of optimism, and I do have a little bit of, I mean, definitely optimism in the way that we, as creatives, can use it.
(33:38):
I think, like with all tools, like with the internet, like with everything, we're going to have this phase of kind of shallow, surface-level AI use, and that's normal. I think that's very much where we are right now. There's a lot of risk right now. We're still figuring a lot of things out. But I hope, I'm inherently optimistic, that as our understanding continues to
(33:59):
deepen, we can start to use AI more as that sort of creative co-pilot that you were talking about. The ability to improve our ideation without needing an entire roundtable of people, and its ability to free us up for new thoughts, is something that really, really excites me.
(34:21):
The ability to look at mass amounts of data and look for trends in a way that previously we weren't able to do at this sort of scale is incredible, right? So I don't think it's going to replace human creativity. It can't, inherently. It's regurgitating. Again, it's a math machine. It's just giving us back what we put in. Exactly, exactly. But
(34:43):
it can make us more creative by giving us so much more capacity than we ever had before.
Speaker 1 (34:50):
Yeah, I love it. I'm so glad that Helen inspired you. Yeah, her talk last year at our national conference was phenomenal. Y'all should check out her podcast as well, Creativity Squared. She's been on this kind of, you know, roll with talking about AI and creativity for about a year, two years now, I think. The podcast is actually two years old. So, I pre-ordered the book.
Speaker 2 (35:10):
Did you?
Yeah, I'm excited about it.
Speaker 1 (35:12):
So good. Yeah, I'll have to let her know that she's got some pre-orders coming in. She's writing a book, and she really talks about, you know, this a lot of times. Uh, I think it's Sam Altman who calls this point in time the innovation age, and she's like, no, it's the imagination age. Like, this is really not about, again, like you said, innovating without optimizing. Do we need more innovation right now?
(35:33):
No, because, honestly, it's hard enough for us to keep up with it. But how can we start to be more imaginative? How can we use AI to actually enhance our ability to be creative?
And I will say, too, just, like, as a small business owner, you know, on the flip side of things, I know it's different for agencies, like, trying to adopt and figure out what are the right tools, what are the right conversations, what are the
(35:54):
right ways in which we could use it, how do we talk about using it with our clients? Like, so tricky. But I have to say, as a small business owner, and, honestly, like, one of my other favorite podcasts that we've had on AI recently is with Sarah Dooley, talking about AI-empowered moms and how it's such a phenomenal tool, just even, like, a little life hack in ways in which you can, like, I have gone and asked
(36:16):
for, like, tips on meal prep and things like that based on, like, food allergies and concerns. I've used it as, like, a little in-the-moment therapist for things when I'm, like, in a moment and I need somebody to, like, help, because it just rationalizes, and it's got all the CBT, so, cognitive behavioral therapy, like, trained up on it. So, yeah, no, it's not going to replace your therapist.
(36:38):
However, in a moment, like, honestly, it at least gets you to slow down and think. And so there have been, I've used it to plan parties for my kids. I've used it for so many different wild things. That's the Swiss Army knife moment, right? That's the unlock, is when you start to realize that, you know, yes, it is a great tool and, yes, that's something we should be working to champion and educate those at work to be aware of.
(37:01):
And how do we use it, when do we use it, all of that. But also, I think another way to maybe drive down that fear is just kind of start finding fun ways at home to use it. And you might be surprised, because, like you said, that pattern recognition is real, and the new, like, deep research capabilities have just been wild.
Speaker 2 (37:21):
I always try to show a couple of personal use cases whenever we do a rollout or things along those lines, just because, again, it gets us using it. But the mobile app of ChatGPT, the number of times that I just take pictures of things and ask it to tell me about it. I'm a thrifter, I love to thrift.
Speaker 1 (37:37):
I love it.
Speaker 2 (37:38):
A bunch of weird old patches and pins and things, the number of times I've told it to do deep research on them for me and tell me all the history, stuff like that. It just gives me the ability to look into something I couldn't Google before. Yeah, again, just finding those real use cases. What do you want to know? What do you want to do? I coded my first, uh, my first JavaScript app in my life the
(38:00):
other day. I'm not an engineer, but I got rid of 26,000 emails, right?
Speaker 1 (38:04):
Oh my gosh, I need
this.
Speaker 2 (38:08):
I spent two hours making the app, but I cleared out all the emails. Oh my gosh, that's amazing. Opening up new avenues of creativity that just previously weren't even possible, which is so cool.
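[For anyone curious what a one-off cleanup app like that might look like, here's a minimal sketch of just the selection logic. Everything in it, the message shape, the cutoff, the sender list, is assumed for illustration; a real version would fetch messages from your email provider's API and then delete the IDs this returns.]

```javascript
// Pick which messages to purge: older than a cutoff, or from noisy senders.
// Assumed message shape: { id, from, receivedAt } with receivedAt in ms since epoch.
function selectForDeletion(messages, { olderThanDays = 365, noisySenders = [] } = {}) {
  const cutoff = Date.now() - olderThanDays * 24 * 60 * 60 * 1000;
  const noisy = new Set(noisySenders.map((s) => s.toLowerCase()));
  return messages
    .filter((m) => m.receivedAt < cutoff || noisy.has(m.from.toLowerCase()))
    .map((m) => m.id);
}
```

[A sensible habit with anything AI-generated like this: run it as a dry run first and eyeball the returned IDs before actually deleting anything.]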
Speaker 1 (38:19):
That is so cool. Yeah, I'm excited to see kind of where it goes, also within healthcare. Like, I'm excited there just because there's so much, just for diagnosis and things like that. There are so many complexities behind, like, a series of tests and conversations and symptoms over the years, months, whatever, that don't get tracked, that don't have a pattern, anything that the human brain can, like, recognize and
(38:41):
realize. I think we're going to see a lot of innovation in the healthcare space, too, for that. 'Cause I think about that, that pattern recognition is essential in understanding how our minds and our bodies are working. There was another example I was thinking of, too, actually, as you were talking, and then it left my, I left my non-AI-powered brain. If I think of it, I'll bring it back up.
But yeah, there are so many cool and fun and interesting use
(39:05):
cases out there, and I do think it's, you know, what are the things that you find belaboring that you would rather offload to, you know, like you said, a co-pilot. Oh, that's what it was: tech support. I have used it for tech support, where I'm like, I am struggling with this thing, how do I, even, like, setting up Zaps?
(39:25):
I'm like, I want to set up a Zap for this and that and the other, and I'm like, how do I even do that? It is like a super-powered search engine, right? Because instead of just giving me general search results, it's actually specifically answering my question. So, yeah, tech support's been another kind of lifesaver for me.
Speaker 2 (39:46):
That's my life hack, I swear. My biggest hack is, if you were looking up, which, I hate Facebook questions, stop asking me Facebook questions, but the Meta AI, I ask it every Facebook question that comes to me, because I know it's got the direct knowledge base. Similar thing: Bing owns, or, I'm sorry, Microsoft owns, uh, Copilot, and Microsoft also owns LinkedIn. Go ahead and ask, uh, that Copilot
(40:07):
any of your LinkedIn questions. I try to match them up where it makes sense.
Speaker 1 (40:10):
Yeah, just knowing that they're going to have way more access to those knowledge bases. And, like you said, having that understanding helps you know how to use the tools smarter, because you're asking the right tool to do the right job.
Speaker 2 (40:21):
Google Ads questions go to Gemini, right? You're able to kind of ask the right ones.
Speaker 1 (40:25):
Oh, those are great tips. I love that. I know some people are going to take that home for sure. I have a fun little bonus question before we move on to our power round questions, unless we get questions from the audience. I am curious, and this is, like, latest breaking news, right, in AI now: we're being asked not to say please and thank you. Have you seen this? Because apparently we're, like, burning down the world with all
(40:47):
of the power and energy that it takes for all of this stuff to compute, so we're being requested not to say please and thank you. Will you continue to say please and thank you to AI, and do you already?
Speaker 2 (40:57):
Yeah, here's the deal. I'm always going to phrase my input the way that I want my input to come back out, and if I don't want it to be abrupt, I'm going to be nice to it, right? Beyond that, I think if we're worried about the data processing, we probably should stop making Studio Ghibli images. But it's fine. There are probably bigger concerns than the please and
(41:18):
thank you. Yeah, but I think this is where that intentional usage really goes far. I love it. I love that answer.
Speaker 1 (41:26):
That is so great. Yeah, can we attack some other things, other than just genuine, like, just courtesy?
Speaker 2 (41:31):
You know, it went down for all of last week because of the image generation, so we can stop that first.
Speaker 1 (41:36):
It'd be great, right? Oh, I hear you there, friend. All right, let's get to these power round questions. I had to ask, because it just keeps coming up and I'm like, you know, I'm just curious.
Speaker 2 (41:46):
All of my coworkers
have asked me this week if I had
a can.
I won't lie.
Speaker 1 (41:52):
I believe it. All right. What is one AI tool people tend to overlook but shouldn't?
Speaker 2 (41:58):
I hear it a lot, but maybe not this feature, but NotebookLM from Google, for deep research. Um, but also that podcast ability. Within the last couple of months they introduced the interactive ability, so you can interrupt them and ask them questions now and, like, direct the conversation. That's super useful for me. Um, and all that, live, again,
(42:18):
like a live person to bounce ideas off of, which is so cool.
Speaker 1 (42:22):
No, it is, really. It's like the best little intern ever. I love it. All right, fill in the blank.
Speaker 2 (42:26):
The biggest mistake people make with Gen AI is starting with the tools instead of understanding what they're trying to solve for. I love it.
Speaker 1 (42:35):
Yep, exactly, all
right.
Speaker 2 (42:45):
What's your favorite aha moment you've witnessed when introducing AI to someone else? Lately it has been basically any of the ways outside of content generation, specifically the deep research, when people get to see everything that comes back and then all of the citations for it. They're blown away every time, and it is super cool.
Speaker 1 (42:58):
It's like my little encyclopedia set, they're just exploding and becoming an infinite website.
Speaker 2 (43:04):
It's right there for me. It's so nice.
Speaker 1 (43:06):
Oh, it's amazing. I love it. Yeah, it's going to help a lot with a lot of things. All right, last one. Um, what is one thing about AI you wish everyone understood, but most don't?
Speaker 2 (43:19):
Hallucinations are not glitches. They're ultimately baked into how the models work, right? So validating your output is always going to be a non-negotiable before you put it out.
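[One lightweight way to make that validation habitual is to check that each sentence of AI-assisted copy can be traced back to an approved source fact before anything ships. The string-matching sketch below is deliberately crude and purely illustrative, not anyone's actual workflow; real validation still needs a human reader.]

```javascript
// Flag output sentences that contain none of the approved source facts.
// A crude grounding check: anything it flags deserves a human look.
function ungroundedSentences(output, sourceFacts) {
  const facts = sourceFacts.map((f) => f.toLowerCase());
  return output
    .split(/(?<=[.!?])\s+/) // naive sentence split on end punctuation
    .filter((s) => s.trim().length > 0)
    .filter((s) => !facts.some((f) => s.toLowerCase().includes(f)));
}
```

[Anything this returns is a sentence the model may have invented, which is exactly the kind of thing to verify before it goes out the door.]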
Speaker 1 (43:31):
Yeah, I love it. That's a great frame of reference for us to kind of keep in our little back pockets there as we continue. Lexi, this has been amazing. Thank you so much, again. All these insights have been really helpful, the tips, just the practical advice, just really kind of helping build out and navigate the AI landscape with some intention and purpose. You know, it's just really helpful, because, you know, it's
(43:55):
all moving fast, it's all moving furious, and it can feel a little overwhelming, but it doesn't have to. I think your measured approach, your open mind about it, I think all of it's, you know, a really great example for those who are trying to do the same out there in the world. So thank you so much for sharing with us today.
Speaker 2 (44:10):
Thank you so much for having me. This was super fun.
Speaker 1 (44:12):
Absolutely. Yeah, long overdue, but, you know, at the right time, right? We're all sitting in the thick of this right now. So everyone who's listening, be sure to check out our past recordings. All of our Power Lounge sessions are available on YouTube, and also anywhere that you stream and listen to your favorite podcasts; you can stay updated by subscribing to any of those channels. Definitely, if you have felt like you've learned something
(44:35):
here today, you're feeling a little less alone, and you're looking to meet and connect with more amazing women like myself and Lexi, who are just really here and excited to, again, just nerd out together, you know, and come together and nerd out, and also are just really wicked smart and generous, definitely check out and learn more about Together Digital at togetherindigital.com.
(44:55):
Lexi, I'm excited to come up to Michigan and see you next month. Um, right? Yeah, absolutely.
Speaker 2 (45:03):
Come to our events. Again, the Together Digital ones are typically open as well, if you're looking to dip your toe in. Absolutely, absolutely.
Speaker 1 (45:10):
Yep, we're excited to have you all here. We'll be back next Friday, so we hope you join us then. And until then, everyone, keep giving and keep growing.