
April 28, 2025 48 mins


Feeling swamped by marketing operations busy work?


AI-powered automation can help you reclaim your time — but knowing what’s real versus hype isn’t easy.

In this episode, Tarun Arora, a marketing tech veteran and founder of RevCrew, explains how AI goes beyond traditional rule-based automation by handling tasks that require human judgment. He shares how "agentic AI" systems act like virtual team members — making decisions, managing your tools, and only checking in when needed.

You'll hear real-world examples, from inbox management and campaign optimization to audience selection, showing how AI can eliminate busy work and free you up for more strategic projects.

Tarun also offers practical advice on where to start: focus on your biggest needs first, test real use cases, and remember — this is just mile one.

Episode Brought to You By MO Pros 
The #1 Community for Marketing Operations Professionals


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Hello everyone, welcome to another episode of OpsCast, brought to you by MarketingOps.com, powered by the MO Pros out there.
I am your host, Michael Hartman, flying solo today.
Joining me today to discuss how to find ways to leverage AI and automation for ops tasks is Tarun Arora.
Tarun has over two decades of leadership and management experience in marketing technology, operations and

(00:23):
analytics domains at high-growth technology companies like Workiva, Rimini Street, New Relic and Cisco Systems.
He has worked as a practitioner, both in-house and as a consultant.
He recently launched RevCrew, a company focused on AI solutions for marketing operations to eliminate the busy work and increase productivity and growth.
In addition to that, Tarun regularly coaches marketing and operations professionals and helps them with their careers.

(00:44):
So, Tarun, thanks for joining me today.
Thanks for having me, Michael.
Yeah, it's good to have another coach, and this is going to be a fun topic, because I think where I've gotten to in my journey of trying to integrate AI into my life is I really want it to help me with the mundane stuff.
So I know we'll probably talk about more than that, but I

(01:08):
think I want this to be one where people walk away like, this is some practical stuff they can take away.
But before we get into that part of our conversation, it's always interesting to hear people's career stories, how they ended up in marketing operations.
I personally like to hear about kind of pivotal moments or key

(01:32):
people who maybe had an outsized impact or significant impact on your career trajectory.
So maybe you can walk through your career in a little more detail, and then we'll kick off into the rest of the conversation after that.

Speaker 2 (01:42):
Yeah, sure, michael.
So you know, honestly, I neverknew that I'm going to end up in
marketing operations.
So I've been in the marketingand go-to-market space for quite
some time I think over 20 yearsnow.
I started my career inengineering and back in the day,
you know, there were no likeSaaS applications.
So we were building actuallyapplications for sending out

(02:03):
emails I mean, if you remember, right databases for hosting contacts and accounts, customer data platforms and all that kind of stuff, right.
So I did that for many years.
And then I switched to product management.
I just wanted to understand the marketing business more.
I was always fascinated by marketing but, being on the

(02:23):
engineering side, I never saw the business side of marketing.
So I wanted to understand that better, and I got an opportunity to move to the product management side.
So I did that, and that was fascinating for a few years, and that was the first time I got exposed to SiriusDecisions waterfalls and all that.
I was doing it on the engineering side, but I didn't

(02:44):
know that these things are, like, demand waterfalls and demand unit waterfalls and, you know, all those cool things, right.
So I did product management for a few years, and then finally I moved to the business side, on the operations side, and for the last few years I've been leading teams on the operations side, you know, dealing with marketing

(03:05):
technology, marketing data, analytics and just campaign operations and everything, you know, that comes into the marketing operations umbrella.
Terrific.
So I think you asked about the pivotal moments.
Yeah, yeah, yeah.
So I think one of the pivotal moments was definitely when I

(03:26):
moved from the engineering side to product management.
I think that opened the world of marketing to me, because for the first time I was directly talking to the business people and trying to understand, you know, what is marketing, and how do you actually translate that into systems and data and analytics, right, and how to use that to actually optimize marketing.

(03:47):
So that was definitely one big, I think, pivotal moment in my career.
I think the second one has been more recent, when I, you know, first actually used ChatGPT, and I was taken aback.
I was like, oh my God, this is revolutionary, right?

(04:08):
I mean, this can change everything, and since then I have been fascinated by this, and quite recently I have actually launched my own company.
It's called RevCrew, and what I'm trying to do is eliminate the busy work that we have in marketing operations and try to

(04:29):
automate this stuff so we can actually focus on things that matter.

Speaker 1 (04:37):
Yeah, I think I've told this many times, but I was slow to adopt AI, in particular things like ChatGPT, but I've started really going to that as a first place in many, many cases.
But actually this brings up an interesting thing for me.
So, yeah, we hear these terms, right AI, automation, LLM,
(05:04):
generative AI, all these things.
It feels like sometimes they're used interchangeably, or at least interpreted as being the same or close to the same, but probably are not.
What's your take on what is AI versus automation and some of these other things?
How do you see them as either the same or different or related?

(05:27):
What's your kind of working definition?

Speaker 2 (05:31):
I think there are just, like, so many terms out there right now.
It's overwhelming.
There's just so much noise, it's hard to discern the signal from the noise right now.
But to me, automation we've been doing it for a long time, right.
I mean, automation is something where you take a process that

(05:51):
you've been doing manually, right, and it's a well-defined process, and you systemize it.
Basically, whatever approach you use, you know, rule-based systems, deterministic systems, RPAs, you automate the process so that it can be done repeatedly in a consistent manner without

(06:13):
the need of a human, actually, for the most part.
Now, I think AI is again like a big umbrella, and most people think of it as only GenAI, but AI has been around for quite some time.
I mean, GenAI is more recent, right, with the LLMs.
But AI includes things like machine learning, which we've

(06:34):
been doing for quite some years, right.
You know, the regression models and classification models, clustering, right.
It also includes, like, deep learning, you know, things which are used in, like, self-driving cars and all that kind of stuff.
But GenAI is definitely the newest kid on the block, and it's exciting.
So, you know, to me, what I feel is that

(06:55):
we can use AI as part of automation to make the process reason out by itself and take independent decisions.
So you can use LLMs as part of your automation so that they can understand language better and take decisions whenever needed,

(07:16):
and you can actually even automate the processes which we were not able to automate before, which actually needed some human judgment, some reasoning and thinking, before you actually take the next action.
But with LLMs, and especially with the reasoning models, we are able to actually use them as part of automation, where they

(07:36):
can be more self-reliant and do things on their own, right.
I think the step up on that is agentic AI, right, where you're not just using LLMs, you're giving your agents tools to complete a task, so they can take decisions on their own, they can reason through things, and they have a set of tools to

(07:59):
take actions with, so they can not just tell you what the next step is.
They can actually go do it, and then do the next step after that, and the next step after that, and be independent.
So I think yeah.
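
To make that distinction concrete, here is a minimal Python sketch of the pattern being described: a deterministic lookup handles the known cases, and an LLM call reasons about the values the rules never anticipated. The channel names, model name, and prompt are illustrative assumptions rather than anything specified in the conversation, and the OpenAI client is just one way such a call could be made.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    KNOWN_CHANNELS = {"google / cpc": "Paid Search", "linkedin": "Paid Social"}

    def categorize_source(source: str) -> str:
        # The rule-based path: a fixed lookup, fully deterministic.
        if source.lower() in KNOWN_CHANNELS:
            return KNOWN_CHANNELS[source.lower()]
        # The AI path: an unexpected value gets reasoned about instead of
        # stalling the pipeline until a human adds a new rule.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{
                "role": "user",
                "content": f"Map the traffic source '{source}' to one of: "
                           "Paid Search, Paid Social, Organic, Email, Other. "
                           "Reply with the category only.",
            }],
        )
        return resp.choices[0].message.content.strip()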

Speaker 1 (08:12):
Yeah, so I think maybe this will be an example that I can think of.
So at one point in my career I built I guess I would call it sort of a lightweight attribution model, basically using what, in this case, was Alteryx, right, sort of a lightweight ETL thing where I would pull data from multiple sources, combine it together, mix and match, but every time an

(08:35):
unexpected value showed up in the source data, I had to go tweak the model, if you will.
Yeah, and I think that is what I think of: it was automated until there was something unexpected, in which case then I had to intervene.
What I think the difference is, with what you're describing, is you could have that same kind of automation, but rather than it

(08:55):
having to, you know, stop and me intervening, it might be able to reason its way to, oh, this new value is like these other ones, I'm going to categorize it the same way, and I'm going to keep going.
Is it something like that?

Speaker 2 (09:09):
Yes, no, for sure, for sure.
I think it can take care of outliers, you know, and things, and what you're describing, you know, we've been doing that all our life, right, that we go in to fix something where the automation just stalls, right, and we have

(09:29):
to go in.
So I think we now have the capability where it can actually take care, you know, of these kinds of outliers and the things that stall a normal automation process.

Speaker 1 (09:43):
Yeah, and I mean, just from my own experience, the biggest kind of use cases I've had with using ChatGPT or Perplexity, or I still want to try Grok, but yeah is sort of a complicated question, right, or it's actually not just one question, so it's not a simple, there-is-an-answer kind of thing

(10:05):
there's maybe.
Yeah, I want to give it some context, you know, and, uh, but it's more or less, it's like it's doing research on my behalf, right, and I'm giving it some direction, but then it's doing research that otherwise I would spend, you know, probably hours, if not days, doing on my own.

(10:28):
Right, that's where I've really found the value.
So I can imagine having something like that in the middle of, say, I'm an SDR, I get a lead that has bubbled up and I'm supposed to follow up, and it could automatically do some research on the prospect.

Speaker 2 (10:44):
Absolutely, absolutely.
I mean, if you think of the process, you know, the steps that an SDR would take when they get a lead.
They would actually just go out on LinkedIn and the internet and actually research that lead, you know, try to understand, you know, the company, you know, maybe look at their LinkedIn posts, right, and try to understand what their pain

(11:05):
points are, and then, you know, after doing this research, they would actually pick up a phone and talk to them, because they have all this context.
Now, an agent can actually do all this.
You know, when a lead comes in, it can actually go to different systems and research this lead for you, you know, both at the account and the personal level, and actually give you a summary in your CRM, which is available

(11:27):
right with the lead in real time.
And then, when an SDR wants to make a call, they've got all that context with them.
Now, with AI SDRs, I mean, the calling can also be automated.
But that's a different discussion, because there are people who like that.

(11:48):
There are a lot of people who are opposed to that.
But to your point, yes, a lot of the research and the steps that somebody like an SDR would take to find the context before calling somebody can be done automatically.
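
A rough sketch of that lead-research flow in Python, with every external call stubbed out, since the actual research and CRM tools vary by stack; all function names and fields here are hypothetical stand-ins, not a real product's API.

    def research_lead(lead: dict) -> str:
        """Stub for gathering public context on the lead and their company.
        A real build would call a search, LinkedIn, or enrichment API here."""
        return f"{lead['name']} is {lead['title']} at {lead['company']}."

    def summarize_for_sdr(context: str) -> str:
        """Stub for an LLM call that condenses raw research into talking points."""
        return "Talking points: " + context

    def write_to_crm(lead_id: str, summary: str) -> None:
        """Stub for pushing the summary onto the lead record in the CRM."""
        print(f"CRM note on {lead_id}: {summary}")

    def on_new_lead(lead: dict) -> None:
        # Fires when a lead enters the system, so the context is already
        # sitting on the record by the time an SDR picks up the phone.
        write_to_crm(lead["id"], summarize_for_sdr(research_lead(lead)))

    on_new_lead({"id": "L-001", "name": "Pat Doe",
                 "title": "VP of Marketing", "company": "Acme"})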

Speaker 1 (12:05):
So that makes sense.
So, short version, I think you already said it, right?
AI is another tool in the automation toolkit.

Speaker 2 (12:15):
Absolutely, absolutely.
I think it's just another tool in our toolkit, and I think we can decide if we want to use it or not.

Speaker 1 (12:25):
Right, right, right. Yeah, and there are always times when it makes sense, and probably sometimes where it might be overkill or too expensive.
You brought up agents too, so this is one that I am still trying to wrap my head around what that means.
You briefly touched on what an AI agent would be like.
Can you go a little deeper on what does that do?
Sure, and I mean both from a professional and personal

(12:50):
standpoint.
I'm curious, like, how would one use it?

Speaker 2 (12:53):
Yeah, excuse me, so you can think of agents maybe.
I think a good analogy is this.
Let's say, you know, you have a new hire, right?
Um, what do we normally do?
I mean, a new hire comes in, we train them on our business knowledge and processes, right?

(13:13):
Yeah, and we give them a set of tools.
We give them logins.
We give them, you know, a set of internal tools that we use, you know, our wikis.
We give them a certain set of external tools that we use, and then, using the business knowledge and the set of tools, they can actually do the job that they were hired to do.
Now, only in this case, you know, it's not a human, it's, you

(13:37):
know, an AI agent that is doing it.
So once you have, you know, this software system, which is built out of LLMs and other systems, and once you train them on your business knowledge and processes and give them a certain set of tools, so you may have to give them a login to your email, you may have to give them a login to

(13:59):
your wiki, and access to external tools like ZoomInfo, you know, your CRM, you know, and stuff like that.
Right, now this becomes, like, you know, your new hire.
It has the business context, it has a set of tools.
Now it can independently actually execute on that, you know.
So it can reason, it can take independent decisions, it can

(14:21):
actually use the tools to get things done.
Now, there might be cases, just like a new hire or anybody, even, you know, an existing employee, where they're doing something and they get stuck, or they need approvals, right, or they need somebody to actually look over their work.
You can always have, like, a human in the loop, what's called the human-in-the-loop thing, right, where an agent actually

(14:43):
just messages a person saying, hey, this is where I am.
Do you approve this, right?
Should I go ahead or not?
And then, you know, it just carries on with its job.
So I mean, if you think of it this way, right, it's basically somebody who understands your business knowledge and the context of your business, has the set of tools, has the brain, you know, has the thinking and the

(15:07):
reasoning ability to actually do the job independently.
I think that's probably a good way to describe an agent.
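
In code terms, the new-hire analogy maps to a loop that owns a set of tools, picks one per step, and pauses for sign-off on the sensitive ones. A bare-bones Python sketch, where a hard-coded plan stands in for the LLM planner that would normally choose each step; the tool names and approval rule are illustrative assumptions.

    TOOLS = {
        "search_crm": lambda query: f"CRM results for {query!r}",
        "send_email": lambda body: f"email sent: {body!r}",
    }

    NEEDS_APPROVAL = {"send_email"}  # steps where a human stays in the loop

    def human_approves(tool: str, arg: str) -> bool:
        # The agent checks in only when needed, as described above.
        return input(f"Approve {tool}({arg!r})? [y/n] ").strip().lower() == "y"

    def run_agent(plan):
        # A real agent would ask an LLM for the next (tool, argument) pair;
        # here the plan is fixed so the sketch stays self-contained.
        for tool, arg in plan:
            if tool in NEEDS_APPROVAL and not human_approves(tool, arg):
                print(f"skipped {tool}: human declined")
                continue
            print(TOOLS[tool](arg))

    run_agent([("search_crm", "VP of Marketing at Acme"),
               ("send_email", "Hi Pat, following up on your demo request...")])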

Speaker 1 (15:14):
So one of the maybe this is not fully an agent, but it feels close I know at least a handful of tools that are AI-based that will help manage your inbox, right.
It'll scan through, maybe recategorize things, and it'll kind of learn from your tone of voice, your typical style, and it will

(15:37):
actually draft responses, if there needs to be a response or whatever.
I mean, is that kind of an agent too, right, something like that, or is it something a little bit different?

Speaker 2 (15:46):
I think, again, there's a lot of terminology out there, I mean, and there's a lot of noise, right, and honestly, there's nothing right or wrong in calling that an agent versus not, right.
But, you know, the industry definition that's actually emerging is that an agentic system is something that not only has the business knowledge and the context and is trained

(16:09):
on, you know, whatever your business processes or your brand or your tone and your voice, but also has access to a set of tools and is able to actually take independent decisions to use those tools.
And it can decide what tool to use.
Maybe it has, like, 10 tools, right, and it can decide that, hey, based on the input, maybe now I have to go update the CRM,

(16:32):
or, based on the input, now a lead has come in, maybe I have to go out to the internet and research on these points and these are the guidance that we've given them, that you research on these points.
And then I'm going to use a tool that is actually going to log into the CRM, and then I'm going to write that context and

(16:53):
that research into the CRM.
So it's using both your business knowledge and a set of tools to take independent decisions to complete a job.

Speaker 1 (17:04):
Okay, makes sense, okay.
So, I mean, it sounds like that's what you're kind of doing with your new venture.
But, you know, I mentioned I feel like I'm a semi-late adopter, I don't think I'm the last one, and I'm finding a lot of use for the LLMs particularly, but I feel like I'm just

(17:25):
scratching the surface.
So, when you think about how to identify the kinds of things or use cases, whatever terminology you want to use, where AI and automation could be applied, and I guess maybe keep it to the domain of marketing, marketing operations, right, how do you do that?
What's your approach to doing that?

Speaker 2 (17:46):
Yeah, you know, I think the first thing is, like any technology, uh, I think we need to use the technology to solve a business need.
Uh, AI, like any other technology, is a tool that we use to solve a business problem.
So I think we need to start from there, and I think the way

(18:11):
to look at it, the way at least I look at it, is that there are hundreds and hundreds of use cases that you can apply AI to.
I mean, let's say, even if you just talk about marketing or go-to-market or sales, there are thousands of things that you can apply AI to.
Now, where do you start to apply it?
I mean, if you take AI out of this equation, if you were

(18:33):
trying to do something with technology, I think you would look at your most immediate problems, right, where technology can make an impact and you probably get the biggest bang for your buck, right.
I think we should use the same logic here, in spite of all the noise out there and the FOMO out there.

(18:54):
Right?
We should apply the same logic here: what's my immediate business need that I'm trying to solve?
Now, in some companies, that might be growth, right.
Hey, we are not growing as we want to be or as we need to be.
So that's probably the place to start.

(19:15):
So you can think about growth use cases.
You can think about lead generation.
You can think about how can I increase my conversions, how can I do better lead scoring, how can I do better account scoring, things like that.
How can I fine-tune and optimize my campaigns?
You use AI for that.
Now, let's say your business problem right now is that you guys are drowning in busy work.

(19:36):
I mean, there's so much busy work that productivity is low, the strategic projects are not getting done.
Now, that's a place where AI should be used for automation of that kind of busy work or processes, right.
It could be your data hygiene, it could be your list uploads,

(19:56):
it could be your reporting processes, any task management, even project management, things like that, right.
And if you're, let's say, trying to cut costs, like, hey, we don't have the budgets, now you can do things like AI content creation, right, so you don't have to hire that many

(20:17):
content writers.
Right, you can use it to...

Speaker 1 (20:25):
For our audience, he paused because I was shaking my head, kind of questioning that statement about content creation.
And just in general, it's interesting to me, because I think that was the biggest use case that people talked about with ChatGPT, and all the content creators were, like, worried about it.
I actually think it's true that it can be used for content creation, but in certain cases I don't think it's great for, well.

(20:46):
At least, I haven't seen it truly do, like, innovative, creative new content from scratch.
It's really good at repurposing content.

Speaker 2 (20:57):
Yeah, no, I think I totally agree with you, and, you know, I've been trying to post more regularly on LinkedIn lately, and one thing I found is that, you know, most people say that it's great for creating the first draft, but I've actually found that it's better if you actually give it a draft and then ask it to work from that.

(21:17):
I haven't found it great in creating a first draft, because it sounds too machine-like, it sounds too cheesy, that first draft, right, it's, like, a little bit cringey.
So what I do is I give it a small draft, and then I ask it to improve that and then iterate on that.
But I think you're right.

Speaker 1 (21:38):
What's interesting?
Because I've had good experience drafting, um, things like emails, in some cases posts, or helping other people with some things that are unrelated to what we do.
But what I've done in some cases is I've had a little bit of a draft, like, here's what I'm thinking, like bullet points,

(22:00):
but I tend to give it a huge amount of context.
Um, and this is what I've learned: this is significantly different than what I would do with, like, a typical Google search or something like that.
Right, I would give it all this context, and then refine it if it doesn't quite get what I think I want out of it.

Speaker 2 (22:21):
Yes, no, totally right.
But I think what I meant from a cutting-cost perspective is, you know, it can at least take you 50% there, whether you're using it for your first draft or you're using it for generating emails, right, or you're iterating on it.
It's not at a point where it can just be your independent

(22:41):
content writer, honestly, right, but it can take you some of the way there and help you probably go faster.

Speaker 1 (22:48):
Yeah, yeah.
So here's an interesting one.
So I've been thinking, as you talked about this, right, you've probably gone through this too.
We're often asked, like, how to prioritize all the possible things we could do on any given day or given week within operations, and my reaction to those kinds of requests is always, well, like, stack ranking is always going to

(23:10):
fail, right, and if you do some sort of, like, low, medium, high, it's also going to fail, because it's just missing the nuance of what's the benefit, what's the level of effort or cost, right, and there's kind of a trade-off.
So there are, like, at least two dimensions to it, right, and, you know, sort of broadly cost-benefit.
It feels like maybe there's a third dimension when you think

(23:34):
about, like, what are those things that I could do that could be automated, whether or not they leverage AI, which is, how repeatable are they?
Something like that, right.

(24:02):
And maybe another component, which I think you got at there, is, like, how time-intensive are they, especially if the time-intensive part of the current process is also a relatively low-value kind of thing.
Maybe it's important, but not really high value.
Privacy compliance is a big one to me: monitoring an inbox.
If you're sending out emails and you're getting actual replies, you need to be watching for people who say unsubscribe

(24:25):
me.
There's a compliance component there.
It's a huge time sink if you're sending any kind of volume out, right, because you're getting automatic replies, I'm-out-of-office notes, and so on.
You have to sift through that, and it's a manual process for most places, right?
There are a few tools out there for that, and I've had some, but that feels like the kind of thing

(24:47):
it's important, there's not a huge amount of value in it, there's a risk associated with it, it's time-consuming, but it's repeatable, right, especially if you could give it some broad rules that an AI, an agent, could interpret.

Speaker 2 (25:05):
Yeah, and you're totally right, and this is actually one of the automations we're looking at with my new venture.
You're totally right that this is one of the busy jobs that people in marketing operations do, and there is compliance risk to it, because a lot of times, just depending on the volume of

(25:27):
the emails that you send out and then the replies that you get back, right, I mean, I have seen that people just stop managing these inboxes.
Yeah, you know, because who wants to look at, like, 500 emails a day or a week or whatever, right?
And, you know, once you stop doing that, you're losing out.

(25:49):
I mean, of course, you're facing the compliance issues if you're not opting people out, but you're also losing out on opportunities to enrich your database, you know, from alternate contacts which come from out-of-office replies, right, and you're also missing some high-intent leads which have actually asked you a product question.

(26:10):
Or sometimes you also see that, hey, somebody has actually asked for a demo.
Somebody said, can I talk to sales, or can I have a demo, and it's buried in your inbox and nobody's even looking at it.
So I think there is value, Michael, in even this kind of work.
There are high-intent leads buried in your inboxes.
There are things like compliance issues.
There is a dollar value to your database enrichment, because

(26:34):
you're not letting your database go stale.
You know, you don't have to buy new contacts if you can actually just, you know, scrape contacts out of these return emails.
So there's definitely value in this from a dollar perspective as well.
But, you know, to your general point,

(26:57):
you're totally right that stack ranking might not work, but the prioritization has to happen based on, hey, what's the impact of doing something and what's the level of effort in doing something.
And of course, there are some quick, easy fixes, where you put effort or a little bit of money

(27:19):
in and you can actually solve it pretty easily.
So, I mean, those are definitely at the top, where you say, okay, I can solve these very, very easily, let me do that.
And then I'm actually going to look at the other cases, which are important.
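
A sketch of that inbox triage in Python: the keyword checks below are a stand-in for the LLM classifier an agentic system would actually use, and the buckets mirror the ones just listed, opt-outs for compliance, out-of-office replies for enrichment, and buried high-intent requests. All labels and handlers are illustrative.

    def classify_reply(body: str) -> str:
        # Stand-in for an LLM call; real replies are messier than keywords.
        lowered = body.lower()
        if "unsubscribe" in lowered or "remove me" in lowered:
            return "opt_out"
        if "out of office" in lowered:
            return "out_of_office"
        if "demo" in lowered or "talk to sales" in lowered:
            return "high_intent"
        return "other"

    def handle_reply(body: str) -> None:
        label = classify_reply(body)
        if label == "opt_out":
            print("Suppress the contact.")       # the compliance-risk case
        elif label == "out_of_office":
            print("Capture alternate contact.")  # database enrichment
        elif label == "high_intent":
            print("Route to sales now.")         # the buried demo request
        else:
            print("Leave for a human to review.")

    handle_reply("Out of office until Monday; please contact pat@acme.com")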

Speaker 1 (27:32):
Yeah, I mean, when I've done that before, I always end up with a quadrant, right?
Relatively low effort, relatively high return: those are kind of no-brainers, you've just got to go do those, right?
And then you've got the other end of it: high level of effort, relatively low level of return.
Those sit in your queue forever, unless you just

(27:53):
absolutely have nothing else to do, which never happens, or unless someone with a title, or some sort of regulatory compliance, says you have to do this, so there are exceptions.
But the other two quadrants are the hard ones to figure out, right, right.
So, um, if you add in this element of, is it repeatable, like, if you're

(28:15):
trying to replace an existing process, as one of the things you do, there's also this element of, like, how repeatable is it?
Um, how much time does it consume of our team that could otherwise be used for other things that are of higher value, right?
I think that's a framework that I've kind of got in my head.
So, so maybe one more thing.

(28:35):
So how do you kind of go backto, like, how do you go about
through the process, like, say,you identify one, two, three,
like a you know half a dozenthings that you could automate
in some way?
How do you decide where you,when or if you should apply, say
, I'll call it I even hate tocall it traditional, but
traditional like rules-basedautomation versus something

(28:57):
that's AI enabled along the way.

Speaker 2 (28:59):
Yeah.
So I think a very clear distinction is, if you're looking for a deterministic output, right, and something can be solved by if-then-else or rule-based logic, you don't need an LLM in there, and most of the automations that we've done are like that.
If this happens, then do this.
If you get this kind of an email or maybe lead routing,

(29:23):
look at lead routing.
If you get a lead from this region for this product, route it to this person.
I mean, that's a simple rule-based automation, right?
Now, more AI-based automations would be, let's say, list enrichments, things like classifying your job titles into personas.
Now, we've all built this in marketing ops.

(29:46):
I think this is one of our favorite use cases that has been debated forever, right, and we build rule-based things for that.
Right.
We've built, like, wildcard searches.
We've done, like, keyword-based things.
Right, if job title is this, job level is this.
Oh, yeah, normalizing job titles fun stuff, creating personas

(30:06):
out of that, right.
But an LLM can do it very easily.
Of course, you can give it certain guardrails based on your business, but it is able to do a much better job of actually converting these job titles into personas.
So things like that.
I mean, so far we have automated this process, but we've always found outliers.

(30:28):
We've always had to go back and keep adding stuff or, you know, keep changing stuff in there, right.
But with the LLMs, now it can do a much better job of, you know, making that kind of determination.
Because this is not totally rule-based.
It cannot be 100% rule-based, but something like that is a

(30:50):
great use case for automating with an LLM, right, I think.
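
The lead-routing half really is just branching logic, which is the point: when the output is deterministic, plain rules do the job and no LLM is needed. A tiny Python sketch, with made-up territories and owner addresses.

    ROUTING = {
        ("AMER", "analytics"): "alex@example.com",
        ("EMEA", "analytics"): "sam@example.com",
    }

    def route_lead(region: str, product: str) -> str:
        # Pure if/then logic: deterministic, auditable, cheap to run.
        return ROUTING.get((region, product), "unassigned-queue@example.com")

    print(route_lead("EMEA", "analytics"))  # -> sam@example.com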

Speaker 1 (30:56):
I'm going to throw a curveball at you here.
So, on that one in particular, it feels like a combination of having an LLM that could handle the, yeah, unexpected scenarios, right, the outliers, like you said.
Could it also generate some sort of confidence level on that as well?
Because then I think that

(31:19):
combination would be actually really valuable.

Speaker 2 (31:23):
Actually, it's funny that I was talking to somebody just yesterday about this, about generating a confidence score.
It definitely can, it definitely can.
And it's based on your business context, right.
Sure.
What do you want these personas to be?
Because, you know, for somebody, a certain title might be a certain persona, and for

(31:44):
somebody else it could be a different persona, right.
So you can definitely generate, like, a confidence score on that.
And then, you know, based on that, if the confidence is low, you can actually have the system not automatically do it and actually get a human in the loop, right?
Versus, if the confidence is high, you know, it just does its

(32:05):
thing.
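
The job-title case with a confidence gate might look like the sketch below, where a low-confidence call is parked for human review instead of being auto-applied. The classifier here is a placeholder for an LLM prompted with the persona list as guardrails, and the 0.8 threshold is an arbitrary choice.

    PERSONAS = ["Marketing Leader", "Ops Practitioner", "IT", "Unknown"]

    def classify_title(title: str) -> tuple[str, float]:
        """Placeholder for an LLM call returning (persona, confidence).
        A real prompt would list PERSONAS as guardrails and ask for a score."""
        if "marketing" in title.lower():
            return "Marketing Leader", 0.92
        return "Unknown", 0.40

    def apply_persona(record: dict, threshold: float = 0.8) -> dict:
        persona, confidence = classify_title(record["title"])
        record["persona"] = persona
        record["persona_confidence"] = confidence
        # Below the threshold, flag for a human in the loop; above it,
        # the system just does its thing.
        record["needs_review"] = confidence < threshold
        return record

    print(apply_persona({"title": "VP of Growth Marketing"}))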

Speaker 1 (32:06):
Yeah, or you could just, like, publish the confidence level and the normalized value, and then if someone wants to use it, they can make a choice.

Speaker 2 (32:15):
For sure, yes.

Speaker 1 (32:17):
Yeah.

Speaker 2 (32:17):
For sure.

Speaker 1 (32:19):
Okay, so that's interesting.
So that deterministic part is a good one.
All right, we talked about monitoring an inbox.
What are some other common challenges that you, you know, have heard from others in, like, marketing or revenue ops that

(32:40):
seem ripe for AI and automation?

Speaker 2 (32:44):
Yeah, I think one of the biggest challenges I hear, and again, it's from an operations perspective, since there are a number of things you can solve for marketing more broadly, right, is the challenges around planning a campaign, selecting audiences, and campaign creation and execution as well.
I mean, this is such a drawn-out process that I think it's

(33:09):
ripe for some kind of intelligent automation there.
I mean, Michael, being in ops, you've probably seen how drawn out it can be to select an audience, and how many back-and-forths are needed to do something like this: hey, if I select all the VPs of marketing in this region, what is the count?

(33:29):
And then, if I add this filter, or if I remove this filter, what is the count?
And it goes on and on, right.
Sure.
Same for campaign creation, where you continuously try to tweak the copy and things like that.
I think this entire process can have intelligent automation

(33:51):
to do it better and be more self-service for the campaign managers and the demand gen people as well.
One of the other things I always hear is that the campaign managers or the demand gen people, once a campaign has gone out, they don't get real-time insights about how

(34:12):
the campaign is doing, right, unless they actually ask somebody, and a lot of times they're Slacking us ops people: hey, how many registrations do I have?
Or they have some kind of a dashboard, or, if somebody is more Salesforce-savvy, they can actually go to Salesforce and actually see the campaign, right.
Sure, but it's again something that they have to keep chasing.
Now, you know, if something is intelligent enough to actually

(34:34):
tell them, hey, your campaign is performing well or not performing well, and maybe this is the way to optimize it, I think that's where intelligence in this process can come in.
Say you're actually doing a webinar.
You know, it's been two weeks since you launched it.
Based on the registrations that

(34:55):
you've got so far, you're not going to meet your numbers, so maybe you want to send out another email, maybe you want to target another set of people, things like that.
You know, there are, I think, a number of use cases regarding personalization, data analysis and insights, and all that kind

(35:15):
of stuff that can be done, right, but this is something, you know, closer to operations, where, you know, we are day in and day out into, like, campaign planning, audience building, campaign creation, execution, providing them insights about the campaigns, right, and then answering questions about, hey, what's working, what's not working, is this offer resonating or not, right?

(35:36):
Is this copy resonating or not, right?
I think there's a lot of scope here for automations, and not just automations, but intelligent automation.
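
That webinar-pacing idea reduces to a small daily check: project the finish from the registrations so far and raise a flag when the gap opens up. A naive linear-pacing sketch in Python, with all numbers invented for illustration.

    def pacing_alert(regs: int, goal: int, days_in: int, days_total: int) -> str:
        # Zero after launch usually means a broken integration, not a bad campaign.
        if regs == 0 and days_in > 1:
            return "ALERT: no registrations recorded; check the data flow."
        projected = (regs / max(days_in, 1)) * days_total  # linear run rate
        if projected < goal:
            return (f"Behind pace: projecting {projected:.0f} of {goal}. "
                    "Consider another send or a new segment.")
        return f"On track: projecting {projected:.0f} of {goal}."

    print(pacing_alert(regs=40, goal=200, days_in=14, days_total=28))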

Speaker 1 (35:45):
I love the idea of the, like, call it early signals from a tactic, because, I know, maybe you and I talked about this even when we spoke before.
But I think of one specific example where a person was running a webinar, everything launched, and really

(36:07):
nobody was asking for reporting or anything until about two weeks before the webinar.
And then somebody asked.
Well, it turned out there were zero registrants in the webinar platform.
Now, the good news was, people had registered.
It was captured in, in that case, I think it was Eloqua or Marketo, one of the two.
The data just hadn't flowed over to the webinar platform.

(36:28):
The downside was, all the very personalized stuff that would come out of the webinar platform, with specific links and all that, hadn't been going out.
And so, um, it would have been nice to have had something that just, hey, I know, as part of my launching of a webinar, one of the things that's going to happen is there's going to be, whether it's an agent or some sort of automated thing, that's

(36:49):
going to start giving me insights into how it's performing.
Right, all the promotion, um, maybe as compared to expectations or goals, or as compared to similar kinds of tactics to that kind of audience, you know.
I think that would have been really valuable.
It would have been an early signal, like, oh, there's a problem,

(37:10):
like, in this case, it was more of an operational, like, a system-level problem, not a performance-of-the-campaign problem per se, but any of those kinds of things would have been helpful to get.
And I see a lot of teams that are moving so fast, like, they get one thing launched, they go to the next one.
They're not really taking advantage of that data, I could

(37:33):
see.
Then you take it a step further and you go, like, I'm going to launch a campaign, these are the characteristics of it, and it goes, like, here's how I suggest you do your segmentation, here's what you should expect, like, actually helping you generate the expectations for the new one.

Speaker 2 (37:49):
Yeah, no, absolutely.
And there's so much scope in actually helping you plan campaigns based on historical performance, right, where it can even suggest that, hey, based on your goals and based on your historical performance, this is what you need.
Maybe these are the numbers that you need to target, right?

(38:09):
Yeah, right, these are your historical conversion rates and stuff.

Speaker 1 (38:12):
Yeah, interesting.
I mean, I'm, like, super excited about all these kinds of potential things out there, because it feels like stuff that we've talked about that, you know, at the end of the day, has required human capital and time that was limited.
And now you've got something that could potentially replace some of that, not all of it, because I still

(38:32):
think there's a need for the human intervention, insight, whatever you want to call it, right.
That is not totally there, it feels like.
Although I am probably not aware of stuff where it actually is a little smarter than the humans, doesn't have the bias, maybe.
But anyway, so you're launching your

(38:55):
venture, or have launched it.
I mean, are you seeing, you know, are you expecting that there are going to be more, like, commercial kinds of solutions that are solving some of these more operational, it feels a little bit tactical, but not tactical, kind of hybrid things?
Or do you think it's going to be, you know, people doing bespoke solutions based on their particular entity, or some

(39:18):
hybrid of those?

Speaker 2 (39:21):
Yeah, I think it's always going to be a hybrid.
There are definitely going to be commercial solutions that address problems, right, and there are going to be innovations that come out in commercial solutions, and I think there's still going to be a place for custom, bespoke solutions where your process is something really different.

(39:42):
Maybe the industry that you work in is, let's say, heavily regulated, and a commercial solution, or at least a commercial solution that is specifically built for that industry, is not available.
In that case, you may have to build a custom solution.
So I think it's definitely going to be hybrid, and with

(40:04):
tools like Zapier, Make, n8n and hundreds of other, you know, AI platforms out there, I think people will also start building, you know, these automations at scale within their organizations.
So I think there's a place for both.

Speaker 1 (40:24):
Yeah, okay, yeah, and it seems like stuff like Clay is also big right now.

Speaker 2 (40:32):
Yeah, yeah, yeah, yeah.
Clay's impressive.

Speaker 1 (40:34):
Um, yeah, I don't want to do an ad for them, but, um, I played around with it a little, and within a few minutes I was able to generate something that I probably wouldn't be able to do on my own, I don't think even manually, like, anything close to automated, and certainly manually it would have taken months.
It's pretty yeah, it's crazy.

(40:55):
Yeah, um, well, this is awesome.
So we've covered a lot of ground, but is there anything else that we didn't cover that you want to make sure our audience hears about?

Speaker 2 (41:06):
Yeah, I think.
I think one thing I tell everyone, and one thing I generally see, is there is a lot of, like, FOMO out there.
People feel that if they're not doing it, or if they're not doing it in a very big way right now, they're missing out on something, you know.
Honestly, I think this is just mile one.

(41:27):
This is just getting started, so I think the right way is probably to learn it, experiment with it and apply it to certain use cases, see what works, what doesn't work, and then go from there.
I think that's important, because I think people are just

(41:48):
getting overwhelmed, and they're also having this FOMO of missing out, right, which probably is not right at this point in the technology curve, you know?
And the second thing is, you know, what I also see is that people are thinking of AI as some kind of a magic wand, you know, and not thinking of it as just another technology.

(42:09):
I mean, I agree that it's revolutionary, right, it's not just another technology.
It's definitely something that can change a lot of things, but at the end of the day, it's a technology, and like any other technology, right, I mean, it's not a magic wand.
It needs work, it needs development, it needs testing, it needs experimentation, it needs deployment, and even after

(42:32):
deployment, you have to actually maintain it, right?
I mean, these LLMs, the technology is evolving so fast, so rapidly, on a daily basis, right, that you have to maintain these applications.
So I think people are ignoring, or at least not looking at, the hard work that's needed to actually implement these technologies, and just thinking of it like, hey, we put AI on

(42:54):
this, and this is going to solve the world's problems.
I think that's not the case.
It's just another technology that can be put to use, but it's, again, hard work, like any other technology.

Speaker 1 (43:02):
Yeah, yeah, it's interesting, because I think the combination of those two really describes, maybe, my own experience, which was, I think, the first couple of times I tried it, I'll just keep it to ChatGPT, I was unimpressed, right.
It was just, like, oh, I don't get why everyone's all up and excited about this, and I think because of that, whether I was a skeptic

(43:24):
or I just, like, it just felt like it wasn't worth the effort.
But at the same time, I kept hearing so many people talk about it.
Maybe I'm missing something.
And it wasn't until, I can't remember, there was probably a particular thing I'm thinking of where I was struggling with trying to do something.
I was like, maybe this is the kind of scenario where it would

(43:45):
help, and it did.
And that was the catalyst for me to go, like, oh, and to try to think about what was different about that.
And it was really that it wasn't just a simple question, right, it was something more complicated, it needed a little context.
And then I saw some other people with examples of the kind of prompts they were doing, like, oh, you actually can't just go ask it a random question, and it doesn't

(44:08):
really think about you.
It doesn't have any context.
It would be like asking somebody on the street, right?
Um, but up until that point, right, the idea of even thinking about, should I try using ChatGPT or one of these tools to help with this problem I have, was way down the list of things I might try.
Right, it is now getting much closer to the very top of

(44:31):
that list of things, like, oh, can this help me with this problem?
And I think that's been the shift for me.
When I think about it, it's happening much earlier in my
thought process.

Speaker 2 (44:43):
Yeah, no, absolutely, and I think, going forward, it's going to be top of the list, right, of things to try before you try anything else.
You know, one thing I try to do is that, whenever

(45:10):
during the day I think of something that I have to do, you know, I force myself to first go to ChatGPT, right, and try to see if it can do it, right, and nine times out of 10, it's actually able to give me a new perspective, right, if I'm brainstorming or researching something, and also give me the direction to do something.
So I think it's awesome, and I think, slowly, that would be the first place where people would go to actually even start anything.

Speaker 1 (45:30):
Yeah, I mean, I've done that with at least two of my kids on very different things.
One was college choice, and one was a warm-up routine for a track meet.
Two very different things, right, both of which I have a little bit of knowledge about, but not all the knowledge I needed to help them with it.

(45:50):
In both cases, it generated something within a few minutes that was absolutely useful.

Speaker 2 (45:57):
Yeah.

Speaker 1 (45:58):
Crazy, absolutely.
So, no, it's going to be interesting.
I'm involved with an advisory board for an engineering school, and this has become a topic on that front as well, right, because I think there's the obvious, like, should students be allowed to use these things?
But there's also just, like, the idea that kids coming in,

(46:19):
probably in the not-too-distant future, are going to come in already having, like, their education agents coming with them, right?
I mean, it's weird to think about, but, um, it's funny, I think my oldest son actually knows more about this than I do.

Speaker 2 (46:40):
No, absolutely, and I think, uh, there's no point in keeping students away from it.
I mean, it's a tool just like any other tool, yeah, you know, for them to use and actually, uh, learn from, right.
Use it to increase their productivity, to increase their knowledge.

Speaker 1 (46:56):
It drove me crazy when my kids told me they weren't allowed to use Wikipedia as a source for a research paper.
I was like, I get it, there's crap out there, but there's also good stuff.
You've got to learn how to decipher what's real, what's useful, what's not.
What do you trust, what do you not trust?
Same goes for this yeah, no, absolutely so, well.

(47:20):
Uh, Tarun, thank you so much.
If folks want to learn more about what you're doing, or hear more about your perspective on all this, what's the best way for them to do that?

Speaker 2 (47:29):
Yeah, sure, I mean, the best way, I guess, is to reach me on LinkedIn.
You know, just DM me, send me a connection request.
I think I would love to connect and talk more on this.

Speaker 1 (47:39):
So LinkedIn is the best place.
All right.
Well, I can attest he asks good questions, and he likes to learn from others.
So again, Tarun, thank you so much.
Appreciate it.
Thanks again to our audience for continuing to support us and giving us ideas and suggestions.
If you have an idea or suggestion for a topic or a guest, or you want to be a guest, feel free to reach out to

(48:01):
Naomi, Mike, or me, and we would be happy to talk to you about that.
Till next time.
Bye, everybody.

Speaker 2 (48:07):
All right.
Thank you so much, Michael.