
September 18, 2025 · 39 mins

Experimenting with AI is exciting—but how do you make the leap from tinkering to transforming agency operations at scale? In this conversation, Galen Low brings together Melissa Morris (Agency Authority), Kelly Vega (VML), and Harv Nagra (Scoro) to talk about how agencies can carve out space for experimentation, align AI use to business goals, and actually implement the good ideas that emerge.

The panel shares stories of saving hours on PM tasks, setting up accountability frameworks, and creating safe spaces for knowledge-sharing. They also surface the tough stuff—fear of job replacement, cultural resistance, governance challenges—and how to navigate it with clarity and empathy.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Galen Low (00:00):
What is the AI holy grail that agency operators are seeking out by giving their people time and space to experiment with AI?

Kelly Vega (00:07):
I would usually spend about 30% of my week doing what now takes me less than 5%, no question.

Harv Nagra (00:13):
This was a couple years ago.
During our monthly all hands, we would create space for anybody to volunteer to kind of present some of their experiments, sharing that knowledge with other people.

Melissa Morris (00:24):
Don't throw it out to the team.
Hey, we're gonna X, Y, and Z. Let's make that happen. When everybody's accountable, no one's accountable. Treat it like a client project or it's just not going to get done.

Kelly Vega (00:37):
We actually made a standard agreement that we would have everything transcribed, and everyone was just like, please, yes, accountability.
That's great.

Galen Low (00:51):
All right.
Today's session is all about taking your agency from a state of like tinkering with AI into a state of like truly benefiting from AI-enhanced operations at scale. So let's maybe meet our panelists. Melissa Morris, who is a founder of Agency Authority and also like a prolific LinkedIn video poster.
You were posting so much video.

(01:12):
It's all great.
It's all valuable.
I have to ask Melissa, in addition to you posting like high-value agency-related content almost every day, I've also seen your name appear on podcasts alongside names like Sharon Tarrick, Robert McPhee. What is your regimen for keeping your energy so infectious everywhere that you show up?

Melissa Morris (01:30):
Yeah, I drink a whole lot of caffeine, Galen. No, that's a joke. I think a couple of things. I've been doing this a long time. I really feel like I deeply understand the agency owners I work with. I'm super excited to support them, so the extrovert in me is very happy to go on and share lots of good information and hang out with them. And I also have a really great team that supports me,

(01:53):
so we have a really strong content management system in place and workflow, so they make it super easy for me to just get to pop on videos and talk and hang out, and then peace out and they handle all the hard stuff for me.

Galen Low (02:06):
I love that.
What should I pick on next?
Maybe I'll pick on Ms. Kelly Vega, who is a Program Director at VML and also a viral TikTok comedian. Kelly, you recently reunited with like a household name account at a familiar household name agency after taking a little bit of a road trip into a completely different niche and universe. Does it feel like you've woken up from a dream in a parallel universe?

(02:26):
Like what's different about how things worked at your agency before versus now?

Kelly Vega (02:31):
Big time perspective.
Sometimes, you know, whether you see it as like grass is greener on the other side, it wasn't necessarily greener or not, it was just a different experience. And when that experience came to a head, I'm like, okay, what's next? And what is next was going back, which I've never done before. I've never boomeranged. So it's been wonderful. I mean, I focus on agency operations and delivery

(02:52):
for large accounts. I came in with a fresh perspective and experience under my belt to speak for it, so I'm really happy and excited to be here and talking about AI and agency operations.

Galen Low (03:04):
It's the joy of agency stuff too.
It's like, you know, the variety, right? So you can kind of go and cut across verticals and cut across industries, come back with lessons that you've learned in like adjacent spaces.
And bring in that perspective.

Kelly Vega (03:18):
Yes.
And seeing that at a small scale, at a larger scale, and kind of having that, I mean, so much to be said, it's cool.

Galen Low (03:24):
I'm gonna leverage some of those perspectives today. So thank you for being a part of this. And last but not least is Mr. Harv Nagra, Head of Brand Communications at Scoro and host of The Handbook: Agency Ops podcast.

(03:35):
Harv, you recently revealed to me that you actually had your acting debut in a video case study when you were a Scoro client. Now you're actually at Scoro using your agency ops background and your Oscar-worthy acting chops for a new LinkedIn video series called Ops Quickies. So, the big question for me is how long until we see The Handbook as a major motion picture, or at least maybe

(03:57):
more video content from you.

Harv Nagra (03:59):
Well, I think it's in production already, actually.
Horror story, you know, evil client team not doing their time sheets and ChatGPT shutting down. So I think you're gonna love it.

Galen Low (04:08):
I would watch that.
Thriller.
Absolutely.

Harv Nagra (04:11):
Yeah.
Waking Nightmare.
I'm based in London in the UK, but originally from Vancouver, Canada. So excited to see some of you from BC over there.

Galen Low (04:19):
Yeah, Harv and I share a hometown.
I was originally from Vancouver, BC as well. We did not know each other in our time in Vancouver, but I'm glad we're connected now and I can rope you into doing a panel discussion online from the UK. Alright, let me tee this up. I'm hearing from a lot of agency folks that their agencies are asking their staff to dedicate like two to four hours

(04:39):
a week on like mandatory AI experimentation and exploration. While I don't necessarily find this surprising given like AI's promised potential, I will say that it was just a heartbeat ago when there just wasn't enough room to take two to four hours per week and still hit an 80% utilization rate. You know, but while agency leaders are likely

(05:01):
hoping that these experiments and explorations will lead to like innovative breakthroughs that will 10x their business, the reality is that tinkering with AI won't necessarily get them there. Experiments need frameworks. They need criteria for success. And beyond that, successful experiments need to be like mobilized into an implementation stage in order to benefit the organization at large.
So really the question is, what's the best way to go

(05:23):
from informal AI experiments to agency-wide AI-supported operational enhancements, and what is at stake if agency folks aren't able to get there? Paint a picture. You know, we're talking about AI, we're talking about growing an agency or any type of organization, but sometimes it's unclear, like, what that destination looks like. So I thought I'd ask this panel.

(05:44):
What is the AI holy grail that agency operators are seeking out by giving their people time and space to experiment with AI? Like is it just about giving staff exposure to the technology or is it about accelerating into this like sort of hybrid human-AI business model? And if it's the latter, like what does that even look like? Kelly, you wanna kick us off?

Kelly Vega (06:04):
I'd love to.
Yeah.
I think that really the time that's being encouraged to spend investigating AI and looking into it, I mean, from a PM and ops perspective, it's all about efficiency. It's all about taking admin and making that more efficient, taking away the analysis paralysis. You know, when you are spinning your wheels on meeting recaps and who said what, there's transcriptions that you can

(06:26):
put into a recap that you should read through, and should make sure everyone's name is spelled right, and should make sure that there's not a line that says someone was frustrated when you didn't necessarily wanna note that. But to take away a lot of that overhead work that you're otherwise spending time on: I would usually spend about 30% of my week doing what now takes me less than 5%. No question.

(06:47):
So I think that is a big focus by default, because that's where my head is: that operations process, standardizing process. I could go on about many examples with that, but that's where my head goes. That's where I've seen a lot of PMs and digital PMs go.

Galen Low (07:02):
I like the human in the loop stuff, right?
It's not zero.
But also like the other perspective that I hadn't really thought about is, like, how painful it is actually for your bosses and your bosses' bosses to watch like their top performers probably do excellent meeting minutes and notes. Yeah. With names spelled right. And, but like, they're like, oh, but like what I really appreciate about you and your talents

(07:22):
is not that, really, right? Like, I want you to be doing other things. Like it's actually not just painful for the people who have to do some of the admin, but actually could be painful for supportive leaders who like actually want you not spending 30% of your time doing that, because you're better at something else.

Kelly Vega (07:38):
Exactly.
It's, it supports delivery. It certainly doesn't replace the talent or the strategy that is necessary and that your brain otherwise needs the space to focus on and can focus on. My burnout is less because I am simply not using that energy toward things that are admin and not productive toward a strategy.

Galen Low (07:57):
I like that the holy grail is like not being that overwhelmed that, like, breeds shortcuts or spreading oneself thin. Melissa, you work with a lot of agencies to look at their operations. I'm imagining that you've got a pretty good perspective on this as well in terms of like what the people you're talking to are holding up as their sort of, you know, holy grail, their vision for the future.

(08:17):
What are some of the things that you hear?

Melissa Morris (08:19):
Yeah, I think, you know, quite similar to what Kelly said, and that's on a very just like practical day-to-day, what does this look like? But I think to go a step above that, what they're really looking for is increased profitability. I think agencies can sometimes end up in these really thin margins, particularly creative types

(08:39):
of agencies and industries. Those are notorious for getting out of scope, and three rounds of revisions turns into 10 rounds of revisions, turns into, you know, then the client changes directions. We're back to the drawing board, right? So I think there is always a desire for agencies to be coming back to: where can we simplify? Where can we streamline and where can we build in

(09:02):
some breathing room, in our margins, in our timelines? And I think it is a struggle in some of the creative spaces. So where can we really double down on administrative-type things like meeting notes or creating slide decks or whatever that might look like for your agency, to maybe give us a little bit more margin and give us a little more breathing room, where we for years have just been struggling to like

(09:24):
get the reins around it.

Galen Low (09:26):
I love that.
In one of our past events, I think it was in July, you know, we were focusing on creative projects. And the thing that kind of struck me was that like, it doesn't have to be this uniform thing. Maybe it is, right? Maybe like for a project manager, someone leading a project, you know, focusing on delivery or on operations, it's like that day-to-day operational efficiency. But like from like a project perspective, for the teams, like,

(09:47):
there is a time and a place where you do want like a lot of time and human energy spent to like create stuff. And then, you know, if you look at resizing images, like, it's just like there's a moment where, okay, well I'd rather that, you know, I not have my best designers and creative folks resizing images or doing like localized copy for a campaign.

(10:07):
There's areas where, again, where it's like, it's just like it's painful to watch my best people do these things, and maybe AI can help. I like that it's not necessarily just like, cool, dial up AI to 110 across the board; it's just finding the spots where it's gonna make that difference.

Kelly Vega (10:24):
Yeah, I mean, building a Jira ticket is something that seems like something that could be admin-y. This is like a meeting recap or summarizing an email. But really like there's a lot of technical stuff that can go in there, and if you're taking a long-winded response from a technical person, a developer, someone who's more tech-minded, you can take that ongoing sentence and plop that in and be like, Hey, I

(10:45):
need a Jira ticket out of this.
And then you're reading through and you're like, okay, actually that part is, and if you're including like the platform which it's on or whatever, you're gonna get AI to beef that up, and then you slim it down and you react to it and then make it fit into your ticket. There's gonna be labels they'll suggest that you don't need, all of that. But like that has saved so much time for PMs when it's

(11:06):
like, make this Jira ticket.
Sometimes they just freeze, or it doesn't get made for a day just 'cause they have to have time to think about it.

Galen Low (11:12):
I'm that guy.
You like, yeah.
It takes a few days to create a Jira ticket, and if only I can get a start sooner. But I also like, I'm also that person who like, that's the important human bit for me as well. You know, I think you're saying that I better have my ducks in a row to share with my team, like what we should be doing, 'cause that has knock-on effects if I get that wrong.

(11:33):
But also, yeah, there's an inordinate amount of time that we underestimate of like clicking buttons, copying and pasting, finding information in different tools. Yeah. If that can get streamlined, like that is a wonderful thing. I thought maybe I'd zoom out from there and sort of make this real. Because I think for folks who might be hearing about this for the first time, maybe that's not how the organization works. They're like, oh, I'd love to have like AI recess every day. Like, that sounds fun, but what has that looked like

(11:54):
in some of the organizations that you've been talking to? Harv, I know you, you talk to a lot of folks, you know, in your role at Scoro.
Do you hear a lot about this?

Harv Nagra (12:01):
Yeah, actually, well one thing that came to mind is like, this was a couple years ago when I was at my past agency. It was just at our monthly all hands. It wasn't kind of mandatory time we were putting in, but we were encouraging people to like play around with tools, right? And so during our monthly all hands, we would create space, or have a little space in that session, for anybody to volunteer to kind of present some of their experiments

(12:23):
or what they've learned. So that is being exposed to everyone. The other week I was at a breakfast event for AI and marketing spaces, marketing companies, agencies and so on. And something I heard somebody say that was really nice is, I can't remember what they were calling it, like, a task force, where people have volunteered to join this working group in the business. And on a monthly basis they get together and they

(12:45):
have to present something. So that could be some really interesting and relevant AI-based news, or an experiment that they run. And the fact that it happens on a monthly basis and you have to share something means it gets done, rather than just being this thing that maybe I'll get to and you never prioritize. Right. And again, the benefit of this is that you're sharing that

(13:07):
knowledge with other people. Right. Where I work now at Scoro, there's a few things that we have going on. We've got an AI Slack channel, so again, we're sharing kind of the news and experiments we're doing, and we've just recently done an AI skill survey across the business as well to find out what people are using. I mean, we have very strict usage rules, so we already know what people are using and what they're kind of, because

(13:28):
it's been vetted for data. That's what I kind of recommend, some of those kinds of things, to just make sure there's a regular cadence of kind of learning, presenting and knowledge sharing.

Galen Low (13:40):
It's such a cool, like, analysis tool, done in a very optimistic way. I've seen some that like, seem very cold and sterile, and like you might lose your job if you answer the question wrong. This is like driving education, which I think is like the big piece, and I think, you know, the theme of sharing information I think is huge. I like that forcing function. You know, it might not be for everybody. I'm sure there's people culturally who like, you know, would be rolling their eyes and like, oh my gosh, I have

(14:02):
to do this like AI thing and then present about it. What am I, in grade school? But I think also it's the forcing function to share knowledge, because if we're not sharing this knowledge, nothing can scale, nothing can sort of disseminate. It's easy for someone to just like maybe accumulate all these like ways of working that are excellent, but only they do, and therefore the like impact to projects is like this big instead of like that big.

(14:25):
It comes with its own challenges after that. But we'll get into that. Does anyone else have like an example of like how an organization or an agency is structuring sort of this experimentation?

Kelly Vega (14:35):
Yeah.
We actually have a proprietary tool. It's very robust, and even when you go in, it's spliced out with like creative needs, operational needs, strategic needs, and within there, whether you're looking up set prompts or just types of files or what have you that need to be created. So given the robustness, we'll say, it's funny how often I'll just go to the chat function that I've been using,

(14:56):
you know, elsewhere forever. So not to say the robustness isn't nice when you need something specific, but that more so happens when you just stumble across it. Yeah, I'll use the chat function simply. So having that proprietary tool kind of naturally has this default of like, use this, it's yours to use. If you have feedback, let us know here. And it's something our clients are aware of and use.

(15:18):
And so really it's less about a certain time that's allotted encouraging us to experiment with the AI, and more so it's kind of up to us, and encouraged to be like, show us how you do it successfully when that happens. So any task that I'll have, I mean, even going through a Jira export of time spent, right? Jira's robust reporting functionality, great.

(15:39):
But if I'm looking for something so specific I just don't have time to dig for and mess with filters on, I'm taking that export, if everything with security and all the guidelines is being followed. But with that I can be like, Hey, tell me this about this, or how is this trending? Or, you know, it helps me just start thinking, and then I'm back in Jira looking at how I can optimize.

(16:00):
So it's not just giving me these answers of what to do, but it's helping me. It just kind of, so anyways, I'm getting more into the use of it, but it's just so encouraged, given that it's a default tool now for us. It's just a matter of now how we're using it, not if we're using it.

Galen Low (16:14):
Do you have an opportunity to like input
into that proprietary tool? Is it just training on all the stuff that people are doing in their day-to-day, like, or is there like time spent going like, here's how I do a thing, I'm gonna upload it, or, you know, I'm gonna train this model on it?

Kelly Vega (16:28):
I do wonder.
I mean, it's associated to my account, and so all of the behavior and the actions that I'm giving it, it is, I have noticed the messaging of summarizing is looking more and more like my voice, so to speak, right? But I still don't use that as copy-paste into my emails. I'm still using like the bulleted things that are provided or processes outlined, and then I'm finessing it

(16:49):
from there and making it my own. And there are adjustments needed. I would say even, gosh, if I were to quantify it, 10 to 30% I'm adjusting, of recaps or linking things out, right? Showing that I do care about the recap I'm providing you, and you're not just getting something spit out with the same emojis in their spots and the em dashes, right?

(17:09):
Like, okay, so I digress.

Galen Low (17:12):
Or em dash.

Kelly Vega (17:14):
Right.
And I like, I legitimately would use them. Now I can't use them, because people would be like, okay. Like, well first, we all know we're using AI. Second, caught: em dash.

Galen Low (17:25):
Lemme go in and add some typos to my email before I send it.

Kelly Vega (17:28):
Yeah, exactly, kind of.

Galen Low (17:30):
Utopia is, it's too clean for humans to believe, so we need to mess it up a little bit.

Kelly Vega (17:34):
Misspelled? Definitely.

Galen Low (17:38):
Every time.
I wonder if maybe we arc that into the thing that I've kind of been hinting at, which is that I think taking that, I don't know, I call it tinkering, and hopefully I'm not offending anyone by saying that, but there is this sort of like getting familiar with, you know, the technology and its capabilities. There's the helping yourself improve your own

(17:59):
personal productivity. And personally I applaud the fact that, you know, an organization like an agency, which is like built on projects and billable hours, is willing to like invest non-billable time for folks to, you know, experiment with AI. But I mean, arguably even the best AI experiments might just sit on some shelf and gather dust, like, you know, all those

(18:20):
hackathons that we've done over the past decade or so. But like, you know, it requires some structure, it requires some formality actually. It's not just the tinkering. But to my ops folks, how can agencies set up a structure that like vets great ideas and then like bakes them into the strategy and then moves them into implementation, so that they can be part of the fabric of the business, not

(18:43):
just like a tool that someone uses over here in this corner? How can these projects avoid becoming that dreaded, like, internal project that's always like the lowest priority versus client work? Almost shippable. It was great. We spent, you know, all of our experimentation time building this great tool, but every time we try and like roll it out, we get that big project from that client. So it's just gonna sit there and do nothing.

(19:04):
How do you guard against that?

Melissa Morris (19:05):
So this is something I talk a lot with our clients about, because obviously in the ops space, you know, ops projects, activities, whether it's working on creating SOPs, your project management tool, whatever that looks like, it can be not the fun stuff sometimes, right? And inevitably when there's client work, that's the stuff that gets pushed to the side.

(19:27):
So when speaking with my agency owners, I'm always reminding them, if you have an internal project that needs to get done and get across the finish line, then you need to resource it just as you would a client project. Don't, you know, throw it out to the team: Hey, we're gonna X, Y, and Z. Let's make that happen, guys. When, you know, everybody's accountable, no one's accountable.

(19:47):
I think we all know that, right?
And then also don't give it to somebody, though, who we already know is on a big deadline, who's already at capacity with their own client deliverables. Because inevitably it's just not going to get done. So how can we treat it like a client project? Build in real milestone moments. Is it a creative brief?

(20:08):
What are we looking to solve?
Right?
It's, I think it's also to say, Hey, go figure out how we can use AI to make the agency bigger, faster, stronger. Well, that's a tall order. Like that's very ambiguous. But to say, right, to use Kelly's example: Hey, writing Jira tickets is a tremendous lift. It takes a ton of time. Sometimes I get it and I'm just like, I need a

(20:29):
minute to even sit on it.
How can we lean on AI to help us with that? So let's get real specific about what we're trying to accomplish. Also specific about what the expectation is for that. I could spend the next three months saying, I'm looking into it, I'm looking into it, I'm looking into it. What are my first steps? Like, what do I want looking into it to look like?

(20:52):
When do I want that by? I want you to spend 10 hours. I want you to spend it looking at these types of tools. This is the budget you have. Also, don't, you know, tell me you found a great tool, but it's gonna cost us $800 a month, right? Whatever that looks like. So really build in some parameters and let the person know, this is my expectation while you're off experimenting.

(21:15):
And then you roll it out just in the way you would a client deliverable. Show up at that weekly standup, that weekly huddle. Tell me the update. Tell me the roadblocks, tell me the challenges. And when you resource it and treat it like you do client delivery, it gets completed, like client delivery gets completed.

Galen Low (21:33):
I like that.
I also like the, and you mentioned resourcing, but also in my head, I'm going like, put a dollar sign next to it. And like, sometimes it comes down to the business casing, because fundamentally my assumption is most of these agencies and other organizations are investing, literally investing time, to improve their business. To improve operations, to improve quality of life for their employees.

(21:54):
So there needs to be a return.
And I like that notion that like even at the experimentation level, like there should be a hypothesis. And then I like Carrie's note in the chat, which is that AI implementation must be aligned to business goals and objectives. Because I think that's the other thing. We were talking the other day internally about our experience in the past and past lives with the ideas drop box, right? Just like put your project idea in the drop box, and like

(22:16):
depending on your organizational culture, your team culture, you'll either get lots of ideas that are very aligned and you're like, cool. Like, yeah, that'd be great if we could like climb over that ledge and increase our margin by this much. Or, you know, increase our operational efficiency, reduce the time it takes to do a thing. Or you could have, you know, the ideas that are just so outta left field that like nobody wants to look at the drop box anymore. Right? So like there's actually work to be done to culturally

(22:39):
align everyone towards like the broader goals. The experiments themselves, and the time you're spending, need to have some kind of hypothesis or goal. And I like that idea that you're kind of prioritizing it like you would a portfolio of projects anyways. It's like, where is our strongest return going to be? That's the one we should resource and invest in and treat it like a real project and put a dollar sign next to it, so that we don't go, oh, but that client has, that's a $20,000 project

(23:03):
we could just take on tomorrow. Yeah, but we could have a $4 million return on this project that's actually only taking us, you know, like $30,000 worth of resources. Then it's kind of got that framing, and that like motivation to actually get it done and not sit on the shelf.

Melissa Morris (23:19):
Yeah, and just to kind of add to that just a little bit: like how relevant is it, and start to prioritize, right? Like that return on investment. Like I can say, Hey look, we can create a slide deck in five minutes from this transcription. Look at how awesome that is. Okay, well, we create one slide deck every 18 months. Like we're not gonna roll this out, train the team, like throw a parade. Like that's amazing.

(23:40):
But at the end of the day, I don't care, right? Like when I have to make a slide deck in another 18 months, I'll take a couple hours, I'll knock out the slide deck, and then I'll move on. So calibrating, right? Like the input and resources in versus what is my ROI out?

Galen Low (23:54):
I like that sort of prioritization.

Kelly Vega (23:56):
I think there's something to be said too about the buy-in of the team, right? Like I think that excitement from some doesn't always mean complete like agreement or buy-in from everyone. So to explain those business goals and what the, whatever it be, KPI or the efficiencies that you're showing, as long as people understand that, it can move fast once there's buy-in.

(24:16):
And people are like, oh, okay.
Oh, okay.
And now it's more of a priority.

Galen Low (24:20):
Like, that's the change management piece. I also like, I wonder if there's, like, in my head I'm thinking like, is there nuance? And you know, I'm open to anyone's thoughts on this, but you know, when you do that project that like you've never done before, you actually don't know how long it's gonna take. And a lot of this AI stuff is kind of new. So there's nuance, like with the folks who might be resistant. They're like, great. Yeah. You know, hire me to train my replacement.

(24:41):
Thank you.
Or they're just like, this is so silly. Like, I don't know how to do this. Who knows how long it will take to actually operationalize this. Yeah. Like how can organizations, agencies in particular, like, how can they navigate that so that the projects actually, you know, fit within the sort of constraints that we've mapped them out to fit within?

(25:02):
And is it okay that it's not, like, when we're thinking about like the best kind of, you know, scope creep, or like the change request that comes in from that internal project to be like, actually, you know what, this is taking longer. I don't even know what my question is really, but like, is it as simple as just like having another project? Or does AI kind of create a sort of nuance or inflection that might actually be a lot more complicated than just your

(25:23):
standard-fare, bread-and-butter project that you do for clients?

Harv Nagra (25:27):
I think, you know, no matter what you're doing, having a kind of a guess or an estimate of what you think it could take, time boxing something, is always just really valuable, right? You don't have 40 hours a week for somebody to be doing these experiments. You maybe have a couple hours per week, and so saying like, we're gonna dedicate 10 hours to this over the course of three weeks, and then we can assess.

(25:48):
And if it doesn't come to a conclusion, then you can decide like, are we gonna continue this? Are we gonna park it, or are we gonna kill it, because like it's too complicated or this is not turning out the way we think it is. So I think that's a good way of just kind of time boxing, I guess is the point. And coming back and assessing if it's kind of getting somewhere that's useful.
Right.

(26:09):
That's what I would do anyway.

Galen Low (26:10):
I like it also because it's like a microcosm of how like round funding would work anyways. It's like, we'll give you a bit of money to get this far, then we'll give you more money if you get that far. And if you don't, then we might not give you that much more money. Or we might not give you any money at all. But like, yeah, what are those sort of milestones that allow us to iterate towards value? But, you know, I guess shrunken down.

(26:32):
Hopefully less shark-tanky, but still the ability to sort of pause and go, is this heading in the right direction? And the culture to say, yeah, maybe we spent a whole bunch of money on this so far, but it's really not working and we need to, like, it's okay for this to fail and, you know, let's like cut it off now before we bleed out more.

Kelly Vega (26:50):
AI projects and initiatives: there's instances where just because you can doesn't mean you should. Many instances. I mean, I think more so we hear some of those coming from the request side, from the client, who may not know all the best practices or where it's applicable and whatnot. I think that's a very important recognition, at some point to be like, okay, just because we can, should we, given either

(27:10):
the effort that's left or what it's not doing so far? Assess.

Galen Low (27:14):
I'm like, should I take this to devil's advocate zone? And I think I will, because in some ways I'm like, all right, let's review. So we're experimenting with things, we kind of need to, you know, be able to measure them. Implementation might require us to have a plan and resource it like a project, but also it should be iterative. And then the devil's advocate in me says like, well, maybe

(27:37):
we do what probably most organizations want to do today. It's like, why don't we skip all that planning stuff? We're gonna be iterative anyways. Let's just take all the good ideas that we came up with week over week, and let's just like start the first half step of each one of them. That should be fine, right? Because it's iterative. We can measure, we're not investing a lot, and then we can organize, but like why wouldn't we just iterate

(27:58):
on everything all at once? Like planning is for pre-AI, you know what I mean? That's the olden days. Like that's a black and white photo. Why might that be a good or not good idea in your mind?

Melissa Morris (28:09):
Yeah, I'll jump in for a second.
So I think anytime we're rolling out something new, whether it's AI or not, we need to understand that there are varying degrees of buy-in, as Kelly mentioned, from the team: tech ability, comfort. And so I think we have to be careful to not leave some people hanging.

(28:32):
So we do this a lot when we're rolling out new time tracking, project management tools, CRMs, whatever that looks like. We have a very dedicated plan for when are we looping the team in, do they understand high level why we're doing this? What is our training plan? And then what does support look like for people who need additional handholding? Because inevitably you have people who are very into

(28:54):
tech, they're really pumped about AI, and they're ready to lean all the way in on that. And then you have others who are more apprehensive. Maybe they're just not as tech-inclined in that way. Maybe they just don't really see the value. And then I kind of wanted to circle back just real quick to something you said before too, 'cause I think it's worth mentioning and important, is the fear some people may be having.

(29:14):
Am I creating the resources?
Am I showing them how to replace me with an AI bot? I'm getting nervous about that. Or are they going to now ask me to do twice as much work because they think I can do it twice as fast? Or now I'm going part-time instead of full-time because they can handle this. So I think there's definitely a different charge around

(29:35):
AI than maybe we've seen with some other, you know, tech advancements, where people may have a real fear about job security. And so I think being extra careful to be very clear about, this is what we're doing, this is why we're rolling it out, here's the training and the support, and having a space for team members to share any thoughts or concerns is

(29:58):
gonna be really important.

Galen Low (30:00):
I love that.
Yeah.
It's like, that's the thing that, you know, can't be rushed, I guess, or at least in my opinion, is that like we could technically start rolling out a hundred new ideas tomorrow. What does that do to our brains as humans? What does that do to our like emotions and like the way we're able to conduct ourselves? How do we keep tabs on all 100 goals and KPIs that we're

(30:22):
all trying to achieve, that somehow, you know, go towards some North Star maybe? Or maybe it doesn't, and like that bit is probably where, you know, a lot of these implementations are falling over.

Kelly Vega (30:33):
I mean, I don't right now have a PM team reporting to me, but if I did, I mean, I certainly from a PM ops perspective might say like, juniors, I want you to work on how to use AI for things like the recaps and come up with a standardized prompt. What's the best prompt for after backlog refinement? What's the best prompt for after a sprint planning?

(30:53):
What's the best prompt?
You know, all of that, and have the juniors do that. And then I'm having the mid-levels looking at, like, look at our documentation in Confluence. Whatever is secure to be able to upload, high importance, start uploading some of those, pro... whatever, see how we can standardize some of our process or documentation. And if that's more of like a tech lead thing, great. And then your senior levels, you're like, okay, look at 2026.

(31:16):
I know you're not, maybe you're not the account managers, the sales team, but how, for me, from your PM perspective, can you strategize for 2026, and whatever that means: the initiatives you have, the different projects, and they're like, it can get bigger and bigger. So I think you can have multiple initiatives going at once. Now that's just me in my PM ops world and what I would come up with off the top of my head, like, now.

(31:36):
So I think that there is something to that, because if there's a big initiative that everyone's like, whoa, AI, okay, here's all the little steps. And then like other people aren't really pushing, 'cause they're like, well, they're pushing and I don't really care, whatever it is. But yeah, so I digress. I think that there could be value to that.

Galen Low (31:51):
I like the right sizing of the experiment.
It's like, don't get the new-hire intern to like rewrite how we do payroll. Yeah. Like, I think there's that. But I think the other thing that you touched on was like, and I do wanna like circle around back to ops, but like, or maybe I am doing that now, like, you know, in an operations role, you need project managers to lead these projects. And going back to what Melissa was saying, it's like these are projects, and in fact, to what Harv is saying,

(32:13):
and we were gonna iterate through them and literally need to make decisions. It's a program or a portfolio of projects or initiatives that need leadership. And yeah, maybe that's as valuable as client work. Like is that the next stage of, you know, how we're investing in this? Where I would be able to say, yeah, that's okay. Like, you know, my PM team, you know, they're usually, they're

(32:33):
running, whatever, let's just say four projects at a time. One of them is gonna be an internal project, and you know, Kelly in Ops is gonna be the sponsor here, right? She's our client and like she's kind of overseeing how this all rolls out, aligns with their goals, like becomes SOPs, and like that's what that internal initiative looks like, versus I

(32:53):
think what I was kind of devil's advocating about, where it's like, just everyone just do stuff, it'll be fine. Like, "do it and start, because it's better than standing still" isn't always true.

Kelly Vega (33:03):
Yeah.
I was gonna say, a deliverable from that that you could have is like, screen record yourself making a ticket from scratch, your whole screen where you have to click all around, to now screen record yourself taking that same information and doing it with chat, and now you have a deliverable to, whatever, whatever you wanna say. But also to your point, Melissa, it can get to be a slippery slope when they're like, oh,

(33:23):
great, well then do more. Here's two more projects. So you have to protect that other space, to be like, well, no. Now I have two hours heads-down to focus on our strategy for this, or to help churn tickets with the client, or whatever it is. So you're holding that space for the other, what was it? But that's more now, so that you don't have to be as scrambled and it can be higher quality.

Galen Low (33:42):
Also that trust thing.
Right?
The building the trust to be like, a recording of you doing your job. I promise I won't judge it. I'm not gonna judge it. I'm judging it. Yeah, we're judging it. Yeah, Sam, we're going. But we want it to be more of it. It's like, yeah, there's so many layers of things to totally slice through
To get people to evenparticipate in the project the

(34:03):
way that they would if it was just a regular project that didn't have to do with their job or new technology or.

Kelly Vega (34:07):
And perhaps that's more along the lines to have approval to use an AI tool, right? If some aren't onboard or they're skeptics for good reason, for whatever it is, so education.

Galen Low (34:18):
You mentioned like, you know, data security, compliance and stuff like that. Harv, you mentioned that at Scoro, there's like certain tools you can use because, you know, they've been sort of vetted into the sort of overall governance program around AI tools. I guess maybe my question is like, is or is it not one of the things that folks seem to be either sidestepping, missing, or fully taking into account when they're rolling

(34:41):
out their sort of like internal AI processes, slash, you know, what is the right level of governance at a certain stage? Data privacy, compliance, all that.

Harv Nagra (34:52):
Totally.
I think it comes down to the organization and kind of the maturity around that, right? Over the past couple years since like this whole AI kind of race started, I think there has been a lot of experimentation, and there was a lot of concern, especially in the early days, of people like uploading client data and stuff like that. I'm sure that was happening. I'm sure that happens now. At least now we've got like kind of clear guidelines for our

(35:14):
businesses to say, this is what you're allowed to do and not. Like, shadow IT has always been a problem that ops and finance people have had to worry about, people kind of just finding tools that they want to use, downloading them, installing them, asking for, you know, signing up for a subscription. And the thing is, like, one, you end up having too many things that you're paying for that you might not have control over.

(35:35):
And number two, not everyone's benefiting from those kind of platforms, right? And sharing that knowledge and opportunity. So I think that's why having those kind of controls in place, from the operational point of view, to say, this is how we kind of select products and make sure that they're safe, and this is what our guidance is on what you can and can't use that platform for.

(35:57):
So I think that's super, super important.

Melissa Morris (35:59):
And I would add to that, I think we've been talking a lot about it from the context of within our own agency or within our own business, but I think having some clarity for your clients and a line of visibility on how you are using it, when relevant and appropriate. I'm just thinking of a company we're working with. They have conversations that are quite sensitive, to the

(36:21):
point where, if brought into a court of law, suddenly there's a transcript of the meeting that never happened or shouldn't have happened, or a recording of it. So there's definitely certain situations where you would want to be very careful and make sure your client knows. And I mean this is standard practice about recording and such too.

(36:42):
But do I take that recording and go back and make it a transcription and save it in my Google Doc, 'cause it's really easy for me to go back and reference what we talked about? Make sure they know that and that that's okay. And just knowing what sort of documentation you're keeping and what implications that may or may not have down the road.

Kelly Vega (37:00):
We actually made a standard agreement that we would have everything transcribed, or anyone had the ability, if on these calls, to transcribe or record at any point. And everyone was just like, please, yes, accountability. That's great. 'cause just the interruption of like asking, is this okay? Now, it's not for everyone, but that has been simply nice.

Galen Low (37:18):
Yeah.
'cause you've worked in a lot of like heavily regulated industries as well, which I imagine some of those clients would be like, no, thank you. Did it surprise you? And they're like, yeah, please do that.

Kelly Vega (37:27):
Actually, for me, it was more the other way around, where it was preference to have it recorded for simply accountability reasons. I mean, because something could so easily be associated wrong, or when it came to, gosh, so many technicalities with regulations and permissions and whatnot. It was actually, in my experience, defaulted to.

Galen Low (37:48):
That's fair.
It's like the opposite, like, because of regulation, we do want to have a record of this thing having happened.

Kelly Vega (37:54):
Yes.
Now I will say, from the top of a meeting that I simply want to be more casual, not because of accountability things, but I'll say, I'm not gonna record this one. If you feel like it, absolutely, press record any time. But sometimes I know that having that can change the air of a conversation. So, be mindful of that. Like I'm not doing that with my one-on-ones with my PMs. I'm not, you know, all,

(38:16):
That's good.
We're gathering requirements.
The red button's on.

Galen Low (38:19):
Love it.
A big thank you to our panelists for volunteering their time today. I know we hang out all the time, but this has been so much fun. Also, just getting you three in a room together has been loads of fun. Thank you so much for your insights.

Kelly Vega (38:32):
Yeah, thanks for having me.

Melissa Morris (38:33):
Thank you.