
May 19, 2025 45 mins

Text us your thoughts on the episode or the show!

The rise of AI tools has dramatically changed the landscape of email marketing and sales outreach, creating both exciting opportunities and significant risks. As Mustafa Saeed, co-founder and CEO of Luella, explains in this eye-opening conversation, many revenue teams are now "scared shitless" about how their reps might abuse these powerful new technologies.

When AI-powered automation tools are implemented without proper guardrails, organizations face serious threats to their brand reputation, domain health, and even compliance standing. The problem isn't AI itself, but rather "AI coupled with reckless automation" that floods inboxes with content that lacks genuine value. As email service providers like Google and Microsoft respond with increasingly aggressive spam filters, even legitimate messages from trusted senders are getting caught in the crossfire.

The solution isn't abandoning AI but reimagining how we use it. While many AI tools promise to remove humans from the loop, Saeed argues for bringing them back in through thoughtful collaboration between humans and AI agents. This approach combines the best of both worlds—AI's ability to analyze vast datasets and humans' talent for building authentic relationships.

Episode Brought to You By MO Pros 
The #1 Community for Marketing Operations Professionals

Support the show


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Hello everyone, welcome to another episode of OpsCast, brought
to you by MarketingOps.com, powered by the MO Pros out there.
I am your host, Michael Hartman, flying solo today.
Mike is in the throes of getting Spring Fling 2025 off the
ground, which is the week after this one that we're currently
in, in May 2025.
So hopefully you're going. Naomi's off doing whatever she's

(00:25):
doing, but joining me today is Mustafa Saeed, and we're going
to talk about email deliverability in the age of AI.
I think email deliverability is one of those mysterious topics
anyway, so it's always good to cover that again.
Mustafa is the co-founder and CEO of Luella.
Mustafa is a serial entrepreneur who paid his way through
university by creating and monetizing a YouTube channel.

(00:47):
He has also been a consultant with a variety of different
industries and companies. So, Mustafa, thanks for joining me
today.

Speaker 2 (00:54):
Thanks for having me on the pod.

Speaker 1 (00:56):
Yeah, this is going to be fun.
Well, we talked about that before.
I did the two-sentence thumbnail sketch of your career, which
doesn't do it justice.
So before we get into the core topic of email deliverability
and how AI can be a part of helping with that, why don't you
talk through more about your career and life journey and

(01:19):
share that?
I think our audience always finds it interesting.
Even if they don't, I find it interesting to hear all these
different stories about how people ended up where they are.

Speaker 2 (01:29):
Yeah, absolutely. Like you said, I'm an entrepreneur and
revenue growth leader.
I've been building ever since I was 16 years old.
Like you said, I started a small YouTube channel that paid my
way through university.
My first job out of school was at a marketing agency that
failed after Apple released iOS 14.
And I got to be part of the journey of transforming that
failed agency into a now successful SaaS that generated

(01:52):
several hundreds of millions of dollars for a lot of really
cool e-commerce brands like HelloFresh.
I then joined Clearco, a fintech unicorn, as an account
director.
I was pretty much a Swiss Army knife sales guy, and I've done
a lot of growth consulting.
I've worked with a lot of incubators and startup accelerators,
largely from a go-to-market perspective as well, and my team
and I came together largely because of our

(02:15):
shared experiences.
We saw a lot of the same challenges.
We saw a lot of new AI sales tools entering the market, a lot
of AI SDRs that were making it easier than ever to spam
seemingly infinite numbers of emails and content, and we
started to see early signs of larger policy shifts from Google
and Microsoft when it comes to their spam filters, and now
LinkedIn

(02:36):
when it comes to aggressive web scraping on their platform.
That's why we wanted to enter the space and build Luella: to
help revenue and marketing professionals better protect their
reputation, improve their deliverability, and scale their
outreach volume safely, with guardrails to prevent AI abuse.

(02:56):
So we want to help teams make the most of AI without abusing
AI.

Speaker 1 (03:01):
Yeah, that's interesting.
My first job, pre-college, was actually very different from
yours. I cleaned an auto body shop, everything from the shop
floor to the bathrooms.
The lessons learned there still haunt me, I think, is probably
the best way to put it.

(03:22):
But yeah, I have three boys, so I've shared that story with
them, and I was like, this is why we push you to go to school
and learn and be prepared to do something else. If you wind up
wanting to work like that, I have no issue with that.
But yeah, give yourself options, right?

(03:43):
So that's great.
Thanks for sharing that. Fascinating stuff.
So when you and I first chatted, and you touched on this a
little bit as well, the catalyst for you starting Luella is
the way that you're seeing revenue teams abusing AI tools, or
I think even just automation tools in general. That has

(04:05):
been an issue for a while, leading to the people who we like
to cater to, right, marketing ops and rev ops teams, being put
in the position where they have to clean up after the fact.
So I think I know the answer to this, but hey, do you still
see this as an issue? And then maybe elaborate a little bit
on, like, how big of an issue do you see it in the

(04:27):
market?
And then, what are the risks? If people are out there thinking
this is a new tool that can really help us scale, which it
probably can, right, what are the risks of doing that without,
like you mentioned, guardrails?

Speaker 2 (04:46):
Yeah. Most of the RevOps leaders we speak to see a lot of
opportunity with AI and automation, but they're scared
shitless about how their reps might abuse AI and automation.

Speaker 1 (04:59):
In my opinion, they should have been scared shitless already.

Speaker 2 (05:03):
Yeah, exactly, and it's really revenue operations and
marketing operations that we're seeing step up to help
implement these kinds of guardrails across their sales and
marketing organizations.
These are the teams that are more focused on operational
efficiency and protecting go-to-market systems, and we're very
grateful to have been working with a lot of incredible RevOps

(05:24):
leaders.
The senior VP of RevOps at Oracle is a strategic investor and
advisor in Luella, and it was the RevOps leaders at TravelPerk
that really helped take this over the finish line.
That was our first really big enterprise client, and if it
wasn't for their RevOps leaders, I really don't know where
we'd be with that.
But it really depends on every organization, right?

(05:45):
I feel like the definition of RevOps changes with every single
organization that we speak to.
In some organizations, RevOps is literally just Salesforce,
anything and everything Salesforce. If it deviates from
Salesforce, they won't do it.
In some organizations it's a growth marketing leader or a VP
of sales, very scrappy, that's implementing it.
So it really depends on the organization, how many resources
they have and how they're structured, as to who will own this

(06:08):
initiative. But more often than not, it has been RevOps and
marketing ops that's been stepping up.
In terms of the range of risks: brand reputation risks, domain
reputation risks, compliance issues, spam and blacklist
incidents, and account bans as well.

(06:32):
The last thing that you want is one of these AI agents
misappropriating your brand.
We work with a lot of health tech companies, for example, that
are very sensitive when it comes to the claims that they can
make about their offering, about their software.
Even just one word can land them in some compliance hot water.
The last thing you want is for an agent to spam an insane
amount of content, hurt your domain reputation, and contribute
to an account ban.
So those are just some of the risks that we've seen in the

(06:56):
wild.

Speaker 1 (06:57):
I mean, this is something I haven't really been paying that
close attention to, but I can imagine that there are scenarios
where, yeah, there's actual financial risk too, and by that I
mean actual penalties and fees for not

(07:20):
complying with regulations. So of course GDPR, CASL, whatever.
Have you seen or heard any stories around that, where some of
these tools that enable scale are actually creating that kind
of issue, or is it mostly more about the reputational stuff?

Speaker 2 (07:40):
Yeah, mainly on the reputational side, from what we've seen.
Uncontrolled automation can contribute to violations of, you
know, CAN-SPAM, CASL, GDPR, and those kinds of compliance

(08:00):
frameworks. We have seen examples of AI agents saying, you
know, very sexist remarks to a prospect, for example.
So those kinds of things can land you in hot water, and there
is a financial impact to that for sure.

Speaker 1 (08:06):
Yeah, that's interesting.
I mean, in some ways this is not a new problem.
I think there's always been this risk of inappropriate stuff
going out and all that.
I think the big difference now is twofold. One, the ability to
scale and do more faster, but also with this

(08:31):
belief that there can be personalization that takes context
into account. It's not just rules-based, right, it's kind of
based on context and has some more insight.
It can still generate stuff that's inappropriate, and

(08:51):
then, because it's not rules-based, it's a lot harder to
diagnose, like, how did this happen, right?
Exactly. So I can imagine that being a problem.
So I guess, I mean, it seems like it would be an obvious
potential risk to me, right, if someone's considering that.
So why do you think that was not something that was

(09:16):
anticipated when these companies were rolling out these kinds
of tools?
Is this because they were so focused on the benefits, or
potential benefits, that they hadn't considered the risks?

Speaker 2 (09:33):
I think, you know, whenever we get new technologies, there
will always be risks that people are unaware of.
These technologies are continuing to improve over time, and
we're always staying on top of that to continue to build
guardrails as well.
So I'm not surprised. There is a lot of the unexpected, and I
think we're going to

(09:55):
continue to see that as well.
So it's a matter of experimentation, right? Taking these
tools, experimenting with them, seeing the pros and cons, and
mitigating any problems that we see during that process.

Speaker 1 (10:11):
Yeah, maybe it's as simple as expectations were maybe too
optimistic. That's interesting.
So again, I can imagine there are multiple components to how
this could be a risk to reputation, since that seems to be the
main thing.

(10:33):
So is the issue with AI, LLM technology, being used to create
content that is bad, wrong, inappropriate, whatever? Or is it
with the automation component of it, which can really drive
the scale of this with limited oversight and

(10:54):
intervention?
Or is it a combination? What's the mix?
Or is there something else that I haven't even really
considered?

Speaker 2 (11:02):
Yeah, so AI by itself isn't the problem.
It's AI coupled with reckless automation. You know, it's one
thing to use...

Speaker 1 (11:09):
Yeah, that's a good way of saying it, yeah.

Speaker 2 (11:12):
Exactly. It's one thing to use AI to, you know, move faster,
sharpen your thinking. That's something we very much
encourage. People should be experimenting with these new tools
and finding ways to innovate and incorporate them as part of
their workflow.
But it's another thing to flood the internet with AI-generated
emails, LinkedIn posts, comments, you know, content in general

(11:33):
that is just going to annoy the end user, especially with zero
oversight and zero limits.
That's the kind of AI spam that Google, Microsoft and,
recently, LinkedIn are really focused on combating, and that's
why we believe we're in urgent need of AI systems that do
enforce guardrails.
So, things like human oversight. You know, while everyone else

(11:55):
is trying to remove humans from the loop, we want to find ways
to bring them back in, right?
We've seen what happens when you let an AI agent just go
completely autonomous.
Human-to-agent collaboration is something that should be
encouraged.
Things like approval processes: certain pieces of AI-generated
content should be approved by an admin on the account or by
the sales rep themselves.
Things like automation limits, right? Testing your messaging
at

(12:19):
a much smaller volume before pressing your foot on the gas and
scaling further, just to make sure that people are responding
favorably and you are providing value to the people that you
are creating that content for.
And abuse monitoring, just so we're able to catch issues
before they do become a problem, and educating your reps on
the ethical boundaries of these technologies and how you

(12:41):
want your brand represented in this AI-native world.
Anything less than those things, I believe, is just a risk to
your brand.
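
(To make those guardrails concrete, here is a minimal Python
sketch of how an approval step, an automation limit, and a
basic abuse check might gate an AI-drafted email before it
goes out. The names, thresholds, and phrases are illustrative
assumptions, not how Luella actually implements any of this.)

    from dataclasses import dataclass, field

    @dataclass
    class GuardrailPolicy:
        daily_send_limit: int = 25      # automation limit per rep per day
        require_approval: bool = True   # human-in-the-loop approval step
        banned_phrases: tuple = ("guaranteed cure", "zero risk")

    @dataclass
    class Rep:
        name: str
        sent_today: int = 0
        approved_ids: set = field(default_factory=set)

    def may_send(draft_id: str, body: str, rep: Rep,
                 policy: GuardrailPolicy) -> bool:
        """Return True only if an AI-drafted email passes every guardrail."""
        if rep.sent_today >= policy.daily_send_limit:
            return False    # volume cap: test small before scaling up
        if any(p in body.lower() for p in policy.banned_phrases):
            return False    # abuse/compliance monitoring on content
        if policy.require_approval and draft_id not in rep.approved_ids:
            return False    # an admin or the rep must approve first
        return True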

Speaker 1 (12:48):
Yeah, it's interesting to me, sort of just in general.
Right, my view of this stuff has been evolving, but I think
where I land right now is that I try to make a distinction
between the AI technologies, because I think that encompasses
a number of different things, LLMs probably being the most
common one that people think of, but there are others, and a

(13:10):
distinction from that versus automation, because I think we've
had automation for a long time.
We had a great guest on a few episodes ago where this really
hit me, right, the distinction between automation and AI.
AI can be a part of automation that might replace something
that was very rules-based in the past, you know, where it can
be "intelligent", in quotes.

(13:32):
The other thing that you touched on, and I think this is a
really interesting one, is, to me, there's this maybe
assumption that this is replacing the human components of
this. And I've gotten to the point where I think the true
value in all this is going to be where the AI tools, agents,
whatever, are going to be

(13:55):
great tools that can augment and accelerate someone's ability
to get things done.
I'm on an advisory board for an engineering school here in
Dallas that I went to, and we just had a meeting recently with
a new dean, a new department chair.
We were talking, and he was talking about this in

(14:16):
particular.
There's actually been research already done that shows that
students, and this is more like middle school students,
students who use AI as a tool actually tend to perform better
in the outcomes, so grades, scores, whatever.
And interestingly, there was a

(14:37):
difference in sex, gender, right?
Girls got a bigger bump than boys at that age, which I think
to some degree I could attribute to school at that age being
better adapted to the way girls work at that time than boys.
But that's probably a subject I don't want

(15:02):
to get into, because I'm not an expert. But it was interesting
to me. It was another sort of aha moment for me, that the
tools themselves are not necessarily the replacement, but they
can really augment.
I think it's been in the back of my mind for a while, because
even if you think about analytics and reporting with AI, I'm
really bullish about the idea that it can have some huge
impacts.
You still need somebody who can interpret the output, because

(15:25):
sometimes it could generate, say, an insight that says go do X
and you're going to get a better result, but X is actually not
a realistic or practical thing that you could do.
So then you go, okay, well, that's not going to make sense.
Sorry, long diatribe.
So I mean, it seems like where we're at is this

(15:48):
stage where the idea that AI and automation could replace is
now becoming one where they could augment. But the initial
steps towards that really were focused on replacement, and it
feels like that may be the root cause of this.

Speaker 2 (16:10):
Yeah, we've seen these AI agents act independently and we've
seen sales reps act independently, and we're seeing more
success, you know, combining the best of both worlds.
And I hate some of the messaging that we're seeing from some
of these AI SDRs, where it is very aggressive, you know,
trying to replace sales reps. I don't think that's the
direction that we're moving in, even if these AI agents are
going to continue to

(16:32):
get better. There's a lot that we humans can do that these AI
agents can't.
So blending the best of both worlds, that's the direction
where we're seeing this industry going.

Speaker 1 (16:41):
Yeah, I mean, maybe I just haven't been able to.
It's interesting, because on LinkedIn I see a lot of people
going, oh, I can tell when something is AI-generated, and I'm
like, I actually don't think you can.
I mean, it's maybe obvious when someone is really bad at
prompting, but if

(17:02):
they're good at prompting, I think it can generate pretty good
stuff.
So I'm not sure that people could really distinguish between
the two if we really did a test and they were really focusing
on it.

Speaker 2 (17:16):
There are some telltale signs.
Like, ChatGPT very often uses em dashes, that very long dash.
But at the same time, people have been using those since the
1800s, right?
So I feel like people are starting to move away from unique
forms of grammar just because ChatGPT does it.
So there are some telltale signs, but even then, it's

(17:40):
very difficult to know for sure.

Speaker 1 (17:42):
Yeah, and I think the reason I bring that up is, when I think
about, mostly LinkedIn, but I get email too, I get outreach
from things, and I find myself wondering, because I haven't
really noticed much of a distinction in the bad ones.

(18:03):
Right, they're just bad. And to me it doesn't really matter if
it was AI-generated or human-generated or a collaboration,
even, right? If it's bad, it's bad.
I mean, is that kind of where you're at? It doesn't really
matter how it's done; it's the quality and the effect on
reputation that matters, as opposed to the

(18:25):
amount that is automated or developed from AI.

Speaker 2 (18:29):
Ultimately, we want to provide value to the people that we're
reaching out to, and you know, there are forms of spam that
are created by AI agents independently and by sales reps
independently.
That's why we need to leverage these technologies to move
faster in areas that are currently slow, just so we can
increase the rate at which we are providing value

(18:53):
to the people that we're reaching out to, in any form of the
content that we are creating.

Speaker 1 (18:57):
Interesting.
You touched a little bit on LinkedIn and some of the things
that recently happened to some of the big data providers.
I don't know if we should mention names. I think people will
already know if they are paying attention to this at all.
And I actually don't know, it's been long enough that I don't
know if they're back on LinkedIn or not. But anyway, they're
not.

(19:17):
Okay, yeah. I actually meant to go look at this recently, not
related to this, because there was something else that came up
and I wanted to go look anyway.
So how is that tied to this? Because this is a reputational
thing.

(19:37):
Is it a reputational thing for those data providers, or a
reputational thing for those brands that use those data
providers? Or is it both?

Speaker 2 (19:50):
There are unique challenges for both.
You know, like you mentioned, LinkedIn has removed the company
pages of several very large companies in this industry that I
will not name.
And I believe this is just really the beginning, right?
This is going to get worse before it gets better,

(20:12):
just like with a lot of policy changes, and we're going to see
a lot more scrutiny from these platforms, especially LinkedIn.
Unless you are scraping LinkedIn aggressively, I wouldn't
worry about your company page being removed.
The reason why certain companies had their company pages
removed is because they've been abusing the living hell out of
LinkedIn's systems for the last several years, even before

(20:37):
GPT-3.5.
Especially one of the larger ones; they've been in court with
LinkedIn over this for the last several years, so this is
really nothing new.
What is new is their company page being removed, a unicorn in
this industry having their company page removed.
That is very eye-opening. But for the majority of brands, the
risks are more related to brand erosion, compliance and legal

(20:59):
exposure, blacklist incidents, and deliverability destruction.
I wouldn't worry about, you know, company pages being removed.
That is just something that we're seeing from the companies
that are doing insane volumes of scraping, especially for
personally identifiable information.

Speaker 1 (21:17):
Right.
I can imagine a scenario, though, where someone, probably a
large enterprise, goes, oh, we've been spending X amount of
money with one of these data providers. Now, with these new
technologies, we can just replace that on our own, right, with
something bespoke.
It could run into the same issue, right?

Speaker 2 (21:36):
Yeah.

Speaker 1 (21:37):
I mean, maybe, maybe not as likely. It's not going to be at
the same scale these companies are at.

Speaker 2 (21:45):
They're doing it at literally hundreds of millions of contact
details that they are refreshing on a very regular basis.
But who knows where LinkedIn is going to go when it comes to
controls over web scraping automations; there is a potential
that they could go that aggressive.
And there's a lot of speculation that I won't mention,

(22:07):
because it is just speculation, but potentially LinkedIn
venturing into this area as well and kind of building, like, a
Sales Nav 2.0.
So yeah, a lot is still up in the air when it comes to this,
but we're tracking it very closely.

Speaker 1 (22:21):
Yeah, okay, interesting.
So one of the other things we talked about, and you said
something to me that kind of stuck in my head, and maybe I'm
going to tie it back to my mental model of rules-based versus,
I was going to say intuition-based, but it's not

(22:41):
really that, it's more like intelligent automation. The idea
is that one of the other challenges is that this adoption of
AI as a part of the automation didn't really take advantage,
and I think I'm putting words in your mouth here, so correct
me if I'm wrong, but didn't really take

(23:03):
advantage of the AI component of the automation to be smarter.
It's essentially just laid over already existing static flows.
I can imagine there are nurture programs, things like that,
that are out there, sales cadences, et cetera.
I mean, what do you see? Is that what you're seeing as well?

(23:24):
And is that a contributing factor, that it's maybe not just
the technologies themselves, but the way it's been adopted,
not really changing the way they're approaching it?

Speaker 2 (23:34):
So the way that millions of sales reps all over the world are
currently operating, and when I was at Clearco our team
operated this way as well, is we'll take several thousands of
leads from some stale database like Apollo or ZoomInfo and
follow a static sequence where, on day one, you send an
automated email. On day five, you send an automated email. On
day ten, you send an automated email.

(23:55):
That in itself is a very robotic way of doing business.
Now imagine throwing LLMs at these legacy workflows: making it
easier to add contacts to a campaign, making it easier to
generate copy, and making it easier to write personalization
that really isn't personal and really isn't pulling the right
context that you actually need

(24:15):
to provide value. That is also contributing to this broader
problem of spam.
So, rather than just throwing AI at these legacy workflows, we
urgently need to build the right infrastructure for this
AI-native world, and that requires more comprehensive data
integration.
There are thousands of data sources that give us an
understanding of who the people are that actually have a

(24:36):
problem that your company can solve for, instead of spamming
some database.
That also requires algorithms that move away from those
rules-based systems, and those algorithms actually have to be
useful.
And there is a lot more work that needs to be done by the
broader industry just to make sure the content that we're
putting out is contextually relevant and that, rather than

(24:59):
following a static sequence, we're doing a better job of
nurturing those prospects over time.
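
(As an aside, the difference between the two workflows Mustafa
describes can be sketched in a few lines of Python. The signal
names and message labels below are hypothetical, purely to
illustrate the contrast between calendar-driven and
context-driven outreach.)

    # Legacy cadence: the same email fires on day 1/5/10 for everyone.
    STATIC_SEQUENCE = {1: "intro_email", 5: "bump_email", 10: "breakup_email"}

    def static_step(days_since_start: int) -> str | None:
        return STATIC_SEQUENCE.get(days_since_start)

    # Context-driven alternative: pick outreach from what is actually
    # happening at the account, and stay silent when there is nothing
    # relevant to say.
    def contextual_step(signals: dict) -> str | None:
        if signals.get("announced_funding"):
            return "congrats_plus_scaling_story"
        if signals.get("expanding_internationally"):
            return "expansion_playbook_notes"
        if signals.get("complained_about_vendor"):
            return "offer_a_call"
        return None  # no relevant trigger -> don't send anything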

Speaker 1 (25:04):
Yeah. So I'm going to throw in this totally unplanned thing
here, but it popped into my head, because I recently was
talking to a young salesperson who was picking my brain about
the mind of a buyer.
I've been in sales where I had BDRs, SDRs, things like that.

(25:25):
One of the things I always made sure I did, or tried to do,
is, if I get into an interaction with a prospect and it
becomes obvious that our product, service, whatever, is not
going to be able to help them, I always want to try to do
something to help them along the way.

(25:46):
Because my thought process was, then, when the next thing
comes around, they might think of us.
I mean, that's one of the things about the traditional
approach, whether it's rules-based stuff or very regimented:
it always has felt like, this is what you

(26:08):
need from us, as opposed to trying to understand, what is it
that you need as a buyer that we might be able to solve? And
if we're not able to, how can we help you solve the problem
you have, right? Put you in contact with someone.
So do you think there's an opportunity? Is this a kind of
rethinking of how to approach this?
Because I could imagine AI, LLMs, actually could be really
good

(26:29):
at, first, identifying when that's the scenario, and then,
two, tapping into, say, I'll call it a knowledge base of maybe
not direct competitors but alternative solutions that you
could then provide.
Do you think that might be something that we could see in

(26:50):
the future?

Speaker 2 (26:51):
That is going to be crucial on the nurturing piece, because
the overwhelming majority of the people that you are reaching
out to are not going to be ready to buy right away.
And if you're just going to continue spamming them over the
course of the several months or years until they are ready,
you're not going to be top of mind.
But if you are providing value and you're solving a problem
that is more relevant to them, that is how you stay top of

(27:13):
mind.
One of the biggest deals that I closed in my career took over
a year to get over the finish line.
When they announced a round of funding, they got an email from
me congratulating them, letting them know how I've supported
other venture-backed startups in scaling cost-effectively.
When they announced they were expanding into international
countries, they got an email from me letting them know how

(27:34):
I've supported other companies expanding into English-speaking
international countries.
And I went very detailed in terms of the vendors that worked
for me, vendors that wasted my time, tactics that I A/B tested
and saw a measurable improvement in performance from.
And when they started complaining about their agency, they got
an email from me, and that's when we finally managed to get
them on a call and over the finish line, after over a year of

(27:57):
nurturing, of relationship building.
But I provided value along the way. I didn't spam them
following some sequence.
Speaker 1 (28:01):
Yeah, I mean, one of the things I told this early-career
salesperson was, make it interesting and helpful, and if you
can do that, you're going to earn the right to have the next
communication.
All right, so one of the things we talked about, and I think

(28:22):
this is maybe part of what you are trying to solve here, is
trying to integrate the human and the AI-slash-automation
elements of these, I think specifically, go-to-market motions.
How are you thinking about that? How do you approach that
challenge?

Speaker 2 (28:43):
Yeah. While so many AI SDRs are looking for ways to remove
humans from the loop, we're looking for ways to bring them
back in, because we see the value in it.
Right, we're going to see a lot of agent-to-agent
collaboration very soon, and we would also encourage
human-to-agent collaboration, just so we can make the best of
both worlds.

(29:03):
You know, there's a lot that AI can do that we humans can't.
AI can analyze enormous data sets very quickly. Me, looking at
a spreadsheet for a couple of hours, I'm tapping out; my short
attention span can't handle it.
Luella does not complain, on the other hand. But there's a lot
that we humans can do that these AI agents can't.
Right, I can meet a prospect in person at a conference or for
coffee to do a more meaningful job of building a relationship

(29:25):
with them.
An AI agent is not going to do that. A robot can show up in
person, but that's not going to be as valuable as a human
being actually showing up and being present.
And with the proliferation of AI, human connections are going
to become increasingly more valuable, and that's why we're
working towards augmenting instead of replacing, just like you
said.

(29:47):
So the two aren't mutually exclusive, right? You can use both.
These AI agents don't have to be entirely autonomous, and
there's a lot of value in bringing humans back in.

Speaker 1 (29:58):
Just for clarification, when you use the term agent, what does
that mean to you? Because I hear that term a lot, and I think
right now everybody kind of has their own interpretation of
what it means to them.

Speaker 2 (30:14):
So an agent is an AI system that can operate
semi-autonomously, not entirely autonomously: it can do some
tasks autonomously, but it can still take in inputs from other
AI agents and from humans as well.
Usually they'll have access to unique data sources and an AI
model as well.
So, for example, one of the AI agents that we have is a

(30:37):
deliverability agent that will protect your sender reputation,
monitor your email deliverability, and prevent blacklist
incidents and spam incidents over time.
There's a lot that our deliverability agent does entirely
autonomously for you, and there's a lot that Luella will get
your input for.
So you're right, there are a lot of definitions, and a lot of

(30:59):
the definitions that I've seen out there define agents as
being entirely autonomous, no human input whatsoever, and I
don't believe that to be the case.
I think this industry is moving to a world of more
collaboration between agents and with humans as well.
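
(That definition maps onto a small interface. The sketch below
is a hypothetical illustration of "semi-autonomous with human
input", not Luella's actual architecture; every name in it is
made up.)

    from abc import ABC, abstractmethod

    class Agent(ABC):
        """An AI system that acts alone on some tasks but accepts input
        from humans and from other agents, per the definition above."""

        def __init__(self, data_sources: list[str], model: str):
            self.data_sources = data_sources  # data unique to this agent
            self.model = model                # underlying AI model it calls

        @abstractmethod
        def autonomous_step(self) -> list[dict]:
            """Work the agent does on its own, e.g. monitoring or scoring."""

        @abstractmethod
        def needs_human_input(self, action: dict) -> bool:
            """True when an action should be routed to a human to approve."""

        def receive(self, message: dict, sender: "Agent | None" = None):
            """Inputs arriving from a human (sender=None) or another agent."""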

Speaker 1 (31:15):
So maybe a very personal, real-world example might be, I want
an agent that's going to go, hey, you haven't had a haircut
in, you know, two weeks, three weeks, whatever your normal
pattern is. Do you want me to schedule that for you? Yeah.

(31:37):
And then go, are there any restrictions? And then it will
actually go book the appointment and get it put on your
calendar, including drive time, et cetera.
Is that the example? Yeah, okay. Okay, that makes a little
more sense.
So maybe go a little more into this deliverability agent.
Like, again, I'm imagining a lot

(32:01):
of potential things that it could mean. So, how does that
work, maybe just at a high level?
And how is that similar to, or different from, something like,
I mean, when I've run into problems with potential risk to,
you know, sender reputation, I've used something like Sender
Score, right, to be able to just keep an eye

(32:24):
on it.
Help me understand the role it plays and then how it works.

Speaker 2 (32:32):
Yeah.
So it starts with the infrastructure that you are sending
from.
A lot of legacy outreach tools on the market will use these
shared servers with several thousands of senders in them, all
linked to just one IP address.
And in these servers you're going to have both good actors
that are running cold outbound tastefully and ethically, and
you're also going to have those bad actors that don't give a

(32:52):
shit and are spraying and praying.
When you're using these shared servers, you're gaining
exposure to those spammers and grifters.
Google and Microsoft know that, and that's going to hurt your
reputation, and you're more likely to land in spam because of
it.
So for us, the first line of defense is, rather than using
these massive shared servers, we will create your own mini

(33:13):
server, and we call these mini servers clusters.
Each cluster has its own IP address, and each of your sales
reps has their own cluster, just so that if one sales rep goes
rogue, it's isolated; it doesn't contaminate your entire
infrastructure.
So that is the first line of defense.
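
(The isolation idea is easy to picture in code. This is a
hypothetical sketch using documentation-range IPs, not
Luella's actual provisioning logic.)

    # One dedicated mini server ("cluster") per rep, each with its own IP.
    clusters = {
        "rep_alice": {"ip": "203.0.113.10", "quarantined": False},
        "rep_bob":   {"ip": "203.0.113.11", "quarantined": False},
    }

    def quarantine(rep: str) -> None:
        """Isolate one rogue rep; the other clusters keep sending cleanly."""
        clusters[rep]["quarantined"] = True

    def sending_ip(rep: str) -> str | None:
        cluster = clusters[rep]
        return None if cluster["quarantined"] else cluster["ip"]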
The second has to do with deliverability monitoring.
So, Michael, do you know what a placement test is?

Speaker 1 (33:37):
So let me see, this is testing me. So you have, I'll call it,
a seed email address, and that inbox is then monitored to see,
did it actually make it to the inbox?

Speaker 2 (33:48):
Yeah. We're sending emails on a regular basis from your
mailboxes to some of ours to see where they land: the inbox,
the spam folder, the promotions folder.
Sometimes they don't get delivered at all. And that is one of
the best indicators that we have of the overall health of your
email infrastructure.
Why? Because Google and Microsoft don't tell you what your
true spam rates are.
They only give you self-reported spam metrics, which is a
small fraction of the data that they actually have.

(34:10):
And most outreach platforms will have a deliverability score
or a sender score, and that's also a bullshit metric, because
they have full control over what goes into it.
It's not a real metric. They get to control the inputs, and it
is, more often than not, aggressively inflated.
So these placement tests allow us that much-needed visibility

(34:31):
that we won't get elsewhere, and that allows Luella to
diagnose your email infrastructure.
And we do have an integration with Slack, just to be able to
push notifications to your team and ours whenever there is an
action that needs to be taken.
Let's say your authentications break.
Let's say Google and Microsoft make a change on their side
that they are more vocal about, that warrants action on yours.
Let's say you have a sales rep that has gone rogue and is

(34:53):
sending a bunch of BS.
Luella is able to use those placement tests, and a bunch of
other metrics as well, to diagnose your email infrastructure
and surface alerts.
We're also simulating natural mailbox activity.
When you're sending cold emails, that is not natural.
When you're sending a high volume of cold emails, you're going
to have a very low response rate.
Google and Microsoft don't like that. That is not natural
behavior.

(35:14):
So Luella will send exactly the number of simulated
conversations needed to balance your response rates, just to
not trigger a red flag in the eyes of Google and Microsoft.

Speaker 1 (35:25):
Oh, I see what you're saying, yeah.
Yeah, it's interesting, because as we're talking about this,
I'm reminded of Chris Arundale, who is in the marketing ops
community. I've worked with him as a client of his, and then
also had him come in and speak about deliverability.
I'm now really remembering that conversation. He made a

(35:47):
distinction between deliverability and delivered, right?
Which is, did it make it to the inbox? Which to me is probably
more important, because deliverability actually is kind of not
really meaningful on its own. But the reputation matters,
which is, I think, what people equate to deliverability.
But inbox, like, there are so many ways that you could even
have a

(36:11):
good, say, Sender Score reputation, but things are not
actually making it to inboxes. And unless you are getting the
people you're sending to to go, why didn't I get it, like,
they're expecting something and they didn't get it, that's
almost the only way you know.

Speaker 2 (36:29):
Yeah, and response rates as well. You run those kinds of A/B
tests too. But yeah, it is a bit of a black box without those
placement tests, because you don't have that visibility
elsewhere.

Speaker 1 (36:43):
Well, I think it's interesting.
So, if I understand it right, not only are you doing the inbox
placement kind of thing and then monitoring that to verify
that it made it to the inbox, or wherever an email landed;
you're actually, in some cases, through automation, having
the, the bot, or whatever you'd call it,

(37:04):
take some actions on that email that are indicators, that send
signals back to the email service providers that, yes, this is
legitimate, improving your sender score, right? Yeah,
interesting.

Speaker 2 (37:18):
We're simulating natural email exchanges and also browser
activity, just to make sure we're not triggering red flags
with email service providers.

Speaker 1 (37:28):
Okay, that's interesting.
Do you think there's any risk of that being identified, you
know, by the email service providers or the big platforms,
right? Them going, oh, now we're seeing this happening, and
adjusting their algorithms, et

(37:51):
cetera, for how they manage that? Or are you working with
them? I'm just curious now.

Speaker 2 (37:57):
We've had many conversations with current and former engineers
at Google and Microsoft specifically responsible for Gmail and
Outlook spam, and they provided us a lot of inspiration,
especially for the guardrails that we've built.
It's really a necessary evil, because of how aggressive Google
and Microsoft have become with their spam filters.
Even trusted senders, even the biggest enterprises in the
world,

(38:19):
are seeing more of their sales teams' emails land in spam.
So, just to make sure more of your trusted emails actually get
delivered, this is something that we do need to do, and there
are several things that we are doing differently compared to
our incumbents here.
Most players in the space will simulate conversations by
sending random templated messages that have nothing to do

(38:40):
with your unique customer or buyer's profile. We're actually
ingesting that context.

Speaker 1 (38:43):
Yeah, okay, that makes sense. Okay, that's interesting.
So, one of the things that I haven't heard people talk about,
although I guess I've experienced it when I've used, say,
ChatGPT on personal stuff, right? I've learned

(39:04):
two things. One, I give really elaborate prompts in a lot of
cases, and I can think of several cases where the prompt is
much more elaborate than the actual output, because I wanted
something concise.
But when I've had something where it's been like, oh, I need
this next thing, in this interaction with whatever, it feels
like it gets smarter, or at least the

(39:29):
context continues to improve.
So how is that built into this too? Because I can imagine,
kind of like that point, right, Microsoft and Google start
going, we recognize you're doing this now.
But even without that, how does it get better at formulating,
or

(39:51):
just monitoring, and things like that?
Is there, like, a built-in sort of improvement on that, or how
does that work?

Speaker 2 (39:57):
So, Michael, do you know what reinforcement learning is?

Speaker 1 (40:00):
I could guess, but no.

Speaker 2 (40:03):
So reinforcement learning is a branch of artificial
intelligence that allows these models to learn and improve
over time.
So LLMs are just one branch of AI. LLMs are really just, like,
glorified copywriting machines, and reinforcement learning is
the optimization layer.
That is what allows us to look into the past, understand what

(40:23):
has worked and what hasn't worked, in order to better predict
the future.
And have you ever run Facebook ads before, Michael?

Speaker 1 (40:32):
So it's funny, because I've been involved with that, but not
hands-on. No, I've not done it myself.

Speaker 2 (40:38):
Yeah. We built Luella to operate in a very similar way to how
Facebook does their ad targeting, so we've built a lot of the
same algorithms.
The way that Facebook ads work is, you give them ten different
ads, they'll test every single one of those ads against a
small subset of your audience, and they will automatically
place more weight on the version that is performing the best,
the version that's getting the likes and clicks and people
buying your

(40:58):
product.
Luella operates the same way, right?
So Luella will take, say, ten different message variations
that you provide her and will test each and every one against
a smaller subset of your contacts, and will automatically
place more weight on the version that is providing value to
your audience and resonating the best.
And, over time, Luella will suggest new message variations

(41:21):
with new hooks and offers and lead magnets and CTAs that are
likely to outperform what you have tested in the past,
especially once she does have more of that data as to what has
historically worked and what hasn't.
We're using reinforcement learning throughout the platform:
identifying who to reach out to and when, what we should be
reaching out to that prospect with, and also the volume that
we

(41:43):
should be sending out every single day.
It's really the messaging that is going to be most important,
even from a deliverability perspective.
If you're blasting a bunch of AI-generated bullshit to your
audience, you are going to land in spam folders, because if
you're not providing value, you are a spammer, and you deserve
to be in the spam folder, right?
So reinforcement learning is what we're using to do a

(42:07):
better job of making sure your messaging is resonating and
providing value over time, instead of just using LLMs
independently.
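
(The test-small-then-shift-weight behavior described here is
essentially a multi-armed bandit. Below is a minimal
epsilon-greedy sketch of the idea, an illustration only, not
Luella's actual algorithm; the variant names are made up.)

    import random

    class MessageBandit:
        """Explore message variants on small samples, then favor winners."""

        def __init__(self, variants: list[str], epsilon: float = 0.1):
            self.epsilon = epsilon                 # share of sends that explore
            self.sends = {v: 0 for v in variants}  # emails sent per variant
            self.wins = {v: 0 for v in variants}   # positive replies per variant

        def choose(self) -> str:
            if random.random() < self.epsilon:     # keep testing all copy
                return random.choice(list(self.sends))
            # otherwise exploit whichever variant resonates best so far
            return max(self.sends,
                       key=lambda v: self.wins[v] / (self.sends[v] or 1))

        def record(self, variant: str, positive_reply: bool) -> None:
            self.sends[variant] += 1
            self.wins[variant] += int(positive_reply)

    # bandit = MessageBandit(["hook_a", "hook_b", "hook_c"])
    # v = bandit.choose()  ->  send v  ->  bandit.record(v, replied)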

Speaker 1 (42:17):
Interesting. Yeah, okay. So we're kind of towards the end of
our time here.
I know you've got the deliverability stuff, and you've talked
about some other things you're doing that are agent-based that
were maybe still in the works when we last talked.
Any updates on those? And then any other last-minute things we

(42:37):
haven't covered yet that you wanted to make sure we hit, and
then we'll wrap up.

Speaker 2 (42:40):
Yeah, we have two agents that are launched so far.
The deliverability agent was the first one that we went to
market with.
Before you can send emails, you need to make sure those emails
get delivered in the first place, and email deliverability
especially. We interviewed over 250 people in the space,
enterprises, agencies, startups, just to validate that
direction as well.
We also have a prospecting agent. This is what allows

(43:03):
you to scale your outreach volume safely, with those
guardrails, you know, without burning your TAM or your
reputation.
The third agent that we're still working on is the nurturing
agent.
This is the piece that will allow us to move away from static
sequences and do a better job of providing that value to the
prospect over time.
There's a lot more work that we need to do to get that agent

(43:26):
launched.
We've really been going deep on the deliverability and
prospecting side, but we've built the algorithms for it, we've
built the workflows for it.
The only missing piece right now is the more comprehensive
data integration.
There are, again, thousands of data sources on the internet,
and we have over 150 of them currently on the roadmap.
That is a larger priority for us, but those are the three

(43:48):
agents that Luella will be orchestrating very soon.

Speaker 1 (43:53):
Awesome. All right, Mustafa.
I continue to be fascinated by all these things that are going
on.
I tell people all the time, I was kind of a slow adopter, but
I'm now fully on the bandwagon.
This is not just a fundamental shift; it can be truly valuable
if people use it in the right way.

(44:14):
So I appreciate that. If folks want to learn more about, you
know, what you're talking about or what you're doing, what's
the best way for them to do that?

Speaker 2 (44:25):
You can go to our website, Luella.ai.
My calendar link is on the website itself, so feel free to
book some time. Always happy to geek out.
And my name is Mustafa Saeed on LinkedIn as well. Feel free to
connect.

Speaker 1 (44:38):
Awesome. Again, thank you, Mustafa.
Thanks to the support of our longtime listeners and new
listeners. We are grateful for that.
If you have suggestions or feedback for us, please reach out
to Mike Rizzo, Naomi Liu, or me.
If you have ideas for topics or guests, or want to be a guest,

(44:58):
also reach out to us.
We'd be happy to talk to you about it.
Until next time. Bye, everybody.