Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
How much is media contributing relative to customer base is a really nice place
(00:03):
to start.
And the benefit of running incrementality and media mix modeling is
informing the model with some of that causal data.
(00:24):
Well, hello and welcome to another edition of the E-Commerce Evolution podcast.
I'm your host, Brett Curry, CEO of OMG Commerce.
And today we have got a doozy of an episode.
We're talking about the three horsemen of measuring your
marketing effectiveness. We're talking MTA: multi-touch attribution.
(00:45):
We're talking MMM: media mix modeling. We're talking incrementality.
It's going to be nerdy,
but I also promise you it's going to be practical and it will make you more
money. And so we'll hopefully make it fun as well.
And so my guest today is Tom Leonard.
We are LinkedIn friends first.
So I saw Tom on LinkedIn posting about incrementality, talking about MMM,
(01:09):
throwing shade on certain tools and stuff like that on LinkedIn. And I'm like,
this is my type of guy. So I reached out, we had a call, and then we're like,
Hey, we got to record a podcast.
Let's create some insights for people on the pod.
And so Tom is a fractional marketing leader.
(01:29):
He's operationalizing MMM and incrementality testing,
and I'm delighted that he's my guest today. So Tom, with that intro,
how's it going? And welcome to the show.
Good. Thanks for having me, Brett. Excited to be here. And yeah,
some of my favorite things to talk through, so excited to do it. Good stuff.
It's good stuff, man. So briefly,
before we dive into the meat of the content here,
(01:52):
what's your background and how did you become a guy who's
operationalizing MMM and incrementality?
Yeah. And what does that even mean?
That's a good point.
For sure. Yeah, totally. Yeah.
So spent most of my career thus far on the agency side at performance agencies.
And I'd say the crux of how I got to where I am now,
(02:15):
or I've been reflecting back a little bit more on why I have such a passion
for measurement. And I was at a pretty hardcore DR agency,
and it was right shortly after TrueView for Action came out, when YouTube was
starting to invest in DR.
Moved into a new role we had created with a centralized group of basically
(02:35):
people who had different areas of subject matter expertise and a few analysts
that ran tests across a pretty large client base.
And I was our YouTube SME,
and worked with a couple analysts to run a bunch of tests.
And really it was to evangelize how to,
and is YouTube a platform to drive growth? And it was really interesting
(02:56):
because I started spending a lot of time on YouTube and then also connected TV
and broader programmatic video. And it was this really interesting,
for me, the biggest learning was less about how to make YouTube as effective as
possible,
but more how to help brands think about demand creation as opposed to just
demand capture. And frankly,
the difficulty of getting brands to leverage YouTube relative
(03:19):
to connected TV,
because YouTube sat so close to Google Ads and therefore last click attribution,
and CTV, you couldn't click it, and it was sexier in a deck.
And it was just this sort of recognition of the
irrational kind of human behavior just in any sort of industry or anything
in life.
(03:39):
But it sort of helped frame up this idea of you really have to do more than
just, I don't know,
represent logic or rational arguments. You really have to also
bring the easy-to-understand, clear data. And that's,
I think, what draws me to incrementality testing specifically and why
(03:59):
that's sort of the backbone of a lot of what I do now.
And I think I used the words operationalizing MMM and
incrementality testing.
And really what I mean by that is a lot of people will run media mix models or
run incrementality tests,
but oftentimes they'll sit in a slide or in a report to be shown once,
but never to be looked at again.
And so what I'm really trying to do with brands now is how do you build a
(04:22):
framework and a repeatable methodology to get insights from tests,
but not just leave them as insights but to take action?
Because the only way that you create value from any of these sort of testing
methodologies and measurement methodologies is by
acting on the insights.
And so that's sort of what I mean by my funky little headline of those words.
Yeah, it's so good, man.
(04:42):
And it's one of those things where data really doesn't matter if you don't take
the right actions from it. And what's so interesting,
and our paths are similar in that I got my start actually in TV and
radio and doing traditional media, and then I got into SEO and paid search,
but I loved video. Video was my thing, but I love paid search as well.
And then when TrueView and TrueView for Action came out, I was like, whoa,
(05:06):
these are all my worlds colliding.
This is video, and there's some search components,
at least some search intent involved there. And it's direct response.
I've always been a direct response guy.
I believe that marketing should drive an outcome, right?
Advertising should drive a measurable outcome,
and that should be measured in terms of new customers and profitable new
customer acquisition. And what's really interesting, Tom,
(05:28):
and I think this kind of feeds into the conversation we're having today.
There was a period of time, so I grew up reading some of the classics.
So David Ogilvy of course, but John Caples' Tested Advertising Methods,
Claude Hopkins' Scientific Advertising.
And they would do things like they would run an ad in a newspaper or magazine
and people would clip a coupon and bring it in,
or they would call a certain number and they would track it and they would have
(05:51):
codes and stuff.
And I remember thinking once I got into e-commerce, I was like, oh man,
we've got so many tools. The world is so clear now; we have every piece of
data at our disposal.
And now the more I've gotten into it and the more I've matured, I'm like,
we've got more data. But I don't know that we've got more insights,
and I don't know that we've got any more clarity. In fact,
(06:13):
there's maybe more confusion.
And I think it goes back to what you said a minute ago,
this idea of demand generation versus demand capture.
We're really good at measuring channels and campaigns that are demand capture,
meaning they're capturing demand that's already out there.
It's harder to measure the demand generation,
which is usually where the magic happens.
(06:35):
And so super excited to dive in here.
I think what might be useful is let's talk about what
are these kind of three horsemen that I laid out there: MTA, multi-touch
attribution, and incrementality. So let's start with MTA first.
So multi-touch attribution tools,
what are they and what is your take on them?
(06:57):
Yeah, big question. Great question. Yeah, I mean,
MTA's been around for a while,
different flavors and ways of trying to make it work,
especially as so much has changed in privacy and the tech and tracking
landscape.
But ultimately the goal is to try to give fractional credit to all the
touchpoints along a customer journey with a recognition that the last touchpoint,
(07:22):
click, or last impression is ultimately not what drove that person
to purchase.
That may be the last or the only thing that you might see in something like
Google Analytics or your analytics suite.
But there's this general recognition that that is not what drove the purchase.
So MTA, the kind of promise, which I ultimately think is a failed promise,
(07:45):
is weighing all the different touchpoints and then how can you
value those differently. So maybe you use first touch,
maybe you use even distribution. The idea of data-driven attribution was the
holy grail or the promise many years ago,
and I guess still to a degree for some is like,
how do you know this channel was more additive or more necessary and therefore
(08:07):
should get more credit than that channel?
Which I think makes a ton of sense in promise.
I think in reality it's really hard and I would argue impossible to do,
especially as a lot of the ability to track users at a one-to-one level degrades.
Generally my perspective, I'm very bearish on MTA,
(08:27):
so that'll probably come through pretty strongly.
But I guess I don't think the toothpaste is going back in the tube in terms of
the ability to track a customer across all these different touchpoints,
especially as the ability to track view-through or impression-based
touchpoints erodes. And then you really get reliant on clicks,
which I think then leads to a lot of the issues that just last click in
(08:50):
general has.
So I think it's really hard to make a compelling case for MTA.
I've seen too many brands,
especially trying to build MTA tools internally,
and just be a huge time and resource suck. And then when you ask to compare,
show the multi-touch view versus last click, it's like, I don't know,
(09:12):
80 or 90% only had one touchpoint anyway; that's all
that MTA model could see.
So is it really that much more useful than last click?
It's sort of multi-touch when that can be measured, but usually it can't be.
Yeah, and it never really answers the causality question either,
which we'll get to when we talk about incrementality.
(09:32):
And I always kind of tell this,
I think the short story of why MTA isn't really viable anymore is all the
tracking and privacy changes.
But I think the slightly longer story is the kind of recognition that just
because an ad was shown or a click occurred doesn't mean that
that medium was needed or that channel was needed.
It doesn't answer the causal question,
(09:54):
what would've happened without this ad running?
Did somebody just happen to use multiple touchpoints as navigation, or was it
more convenient to click on one of these ads that happened to be served?
But if you're not comparing that to some sort of control group, it's really hard
to assign causality to the fact that there just was a touchpoint.
Yeah, it is so good. And it's one of those things where I remember again,
(10:16):
early on,
you would look inside of Google Ads or you look inside of Meta, or this was back when
it was Facebook only, and you were like, the data's here.
I see ROAS and I see clicks and I see performance and all that.
Then you realize, well, wait a minute, this isn't fully accurate.
If I add the two together, that's double my total revenue,
so I can't just rely on what's in the platform.
And that got worse as iOS 14 was introduced and other privacy changes were
(10:40):
made. But then MTA came along and it's like, oh,
finally we're going to get to see the full picture. It's going to decipher,
decode the shopping journey,
and we're going to finally see with a keen eye, in perfection, exactly what caused
this ad or what caused this purchase to happen. And then we finally realized
(11:01):
MTA is maybe just a third option. It's like, okay,
Google's imperfect, Meta's data's imperfect, and then MTA,
it's just imperfect too.
So now we just got three imperfect things to look at and make
decisions from.
And in some ways it leads to more confusion than it leads to clarity.
And now I don't want to wholesale discard
(11:25):
MTAs, because I do believe there's some helpful insights that can be gained
there,
but it's incomplete, and incomplete at best.
And one of the best analogies I've heard, and this actually comes from Ben Ter,
who's also a LinkedIn friend, but I met him in person as well,
but he talks about this analogy of, Hey,
if we're trying to measure what caused people to watch this
(11:48):
movie at our movie theater,
and we look at all these results and 30% say they saw a
billboard for our movies, 20% say they saw a TV ad,
but you know what? A hundred percent say they saw the poster on the
door. So we're like, let's just cut everything.
Let's just do the poster at the door and that's it. And you're like, well,
(12:09):
wait a minute. Everybody saw it. Everybody was walking in the door.
But the movie poster is not what caused someone to purchase.
It was the billboard and the TV and some of the other things,
word of mouth and other things that caused them to come in.
And so this idea of causality, super, super valuable.
So that really leads us to incrementality. So talk about incrementality.
(12:31):
What is it and why are you on a quest to operationalize it?
Yeah, it's really the best way,
if not the only way to establish that causal
portion that we've been talking about. It has a distinct control group,
so it has a counterfactual,
it has what would've happened without this intervention,
(12:54):
whatever that intervention is.
And there's a handful of ways to derive that counterfactual, that control.
The most common would be geographic based. So like a matched market test.
I've got this market over here that historically has behaved similarly to this
market over here. I can see that in an AA test,
the lines sort of move similar to one another. They're not,
(13:14):
if they're influenced by outside factors, they're influenced together.
And what's an AA test, for those who don't know?
Before an intervention happens.
So just over time, are those lines essentially moving together?
Are external factors or stimuli equally impacting both sides of that test
so that you can feel confident that when you do intervene and it becomes
comparing A to B,
(13:35):
the delta is what was a result of that intervention.
So oftentimes it's my Atlanta
and, I don't know, Memphis,
maybe some other midsize city that you've done this market matching for.
Historically, they both look like this on a line,
all of a sudden you turn off ads on Facebook in Atlanta,
(13:55):
what happens to your top line? That delta is what was attributed, or
should be attributed, to advertising in Atlanta.
Whereas the flip side of that would be attribution, which would say basically anything
that was attributed to that could be attributed to that. Where really,
it should just be the gap between a world where that ad does not exist
(14:16):
compared to a world where that ad does exist. We can't take credit for
everything.
We can only take credit for as much above and beyond what would've happened
anyway. And so that's the basis of incrementality testing.
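(To make the matched market idea concrete, here's a minimal sketch in Python; the file, market names, and intervention date are all hypothetical, and a production test would add statistical rigor around the readout.)

```python
import pandas as pd

# Hypothetical daily sales export: columns date, market, sales
df = pd.read_csv("daily_sales_by_market.csv", parse_dates=["date"])
wide = df.pivot(index="date", columns="market", values="sales")

test, control = "Atlanta", "Memphis"               # the matched pair
intervention = pd.Timestamp("2024-03-01")          # when ads were turned off/on

# AA check: before the intervention, the pair should move together
pre = wide[wide.index < intervention]
print("Pre-period correlation:", round(pre[test].corr(pre[control]), 3))

# Scale the control to the test market's pre-period level -> counterfactual
scale = pre[test].mean() / pre[control].mean()
post = wide[wide.index >= intervention]
counterfactual = post[control] * scale             # expected sales with no change
delta = post[test].sum() - counterfactual.sum()    # the lift (or loss)
print(f"Delta vs. counterfactual: {delta:,.0f} ({delta / counterfactual.sum():+.1%})")
```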
There's other ways to do it.
If you use a Facebook or Google conversion lift study, because they own
that auction, or anybody that owns an auction,
(14:38):
they can do that holdout for you at a user level.
They can track all of those users regardless of if you serve an ad.
Good examples are maybe easier to describe in a first-party data capacity.
If you're running email, you may blast all of your customers and say, Hey,
I sent an email to all my customers and this many purchased.
(14:59):
They went back to the website or clicked it. But if you just said, Hey,
I'm going to serve just to odd-numbered customer IDs and not to
even-numbered customer IDs, I can then just compare,
forget about who clicked on ads,
who did anything. I'm just going to look at my backend.
I know I exposed these users, but not these users. 50/50 split.
They've historically kind of done the same thing.
(15:20):
All I did was even and odd, and I'm just measuring the difference between those two
groups.
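(A rough sketch of that odd/even split in Python, assuming a simple backend export; the file and column names are made up, and a real readout would add a significance test.)

```python
import pandas as pd

# Hypothetical backend export: one row per customer for the test window
customers = pd.read_csv("customers_test_window.csv")   # customer_id, purchased (0/1)

# Odd IDs got the email; even IDs were held out. No click tracking involved.
customers["exposed"] = customers["customer_id"] % 2 == 1

rates = customers.groupby("exposed")["purchased"].mean()
lift = rates[True] - rates[False]
print(f"Exposed: {rates[True]:.2%}  Holdout: {rates[False]:.2%}  Lift: {lift:+.2%}")
```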
So really any way that you can establish a true control that
passes that AA test. So before you intervene, do they
continue to look similar?
Are they influenced at the same rate, so that you can feel confident that when
you do intervene with new media, retracting media,
(15:41):
some new sort of test, that you are confidently comparing to what would've
happened in a world without that intervention?
Yeah, yeah.
It's applying the scientific method with some rigor behind
what happens when I turn this channel on,
or what happens when I turn this channel off?
What is the actual impact of this channel?
(16:02):
And what's interesting is I remember back in my early days
of being in the advertising world,
this was when online stuff was just getting kind of warmed up.
I was talking to this furniture store owner and I'm like, Hey, what do you do?
Do you invest in radio ads? Do you do TV, do you do newspaper?
And so as I went through them, like, Hey, do you do radio ads? And he's like,
yeah, I mean, yeah, I sort of do. And I'm like, newspaper? He's like, yeah,
(16:26):
if there's a big sale, something will happen. I'm like, well, what about TV?
And he said, yes. And his eyes lit up and he's like,
when I run TV ads, I feel it. People walk in the door,
it happens. And I remember early on in my online career thinking, man,
that was so unsophisticated. Did that guy really know what was going on?
But now looking back, I'm like, yeah, that's maybe all that matters.
(16:46):
That is incrementality in a real loose, easy, observe-it-with-your-eyes way,
because you had one.
Totally.
Which I think people take for granted. Yeah.
They do.
Yeah.
That's not exciting. That's not like, where's all your data?
It's in my cash register. That's where all the data is.
Especially for smaller brands,
when you have the ability to feel if something's
(17:10):
working or not working,
if you double spend in something that you think is working really well because
attribution says it's working really well,
and all of a sudden your CAC just doubles,
even though your attributed number scales linearly, something has to give,
right?
And what has to give is it wasn't really causing any additional top line growth.
It was just really good at getting the attributed credit.
(17:30):
So I think the feeling it in the P&L is
definitely overlooked.
It's valid, and it is overlooked though. You're a hundred percent right,
especially now that we have so many tools at our disposal.
And I think another way to look at this, and look, I'm a Google guy,
YouTube and Google is kind of where I really got my start in online
marketing.
But listen, branded search is a perfect example here. What happens,
(17:53):
we see this all the time.
What happens if you turn branded search completely off? Now, I believe,
and this is top of mind from the podcast,
there are strategic ways to use branded search and there's ways to run it and
not waste money, but a lot of people could shut it off and nothing happens,
nothing. Maybe sales get dinged a little bit,
but you take Meta, Meta's really working, and you shut it off, and you feel it.
(18:14):
Sales go down, and that's incrementality.
Same is true for YouTube if you're doing YouTube the right way. And so yeah,
I really like this. And one kind of anecdote here to share,
we just did a test with Arctic, Arctic coolers, a Yeti competitor,
my favorite cooler, my favorite drinkware as well. And so they wanted to see,
(18:35):
Hey, can YouTube drive an incremental lift at Walmart? So they had just
gotten into most Walmart stores, coast to coast.
So we did exactly what you laid out there. We had 19 test markets,
19 matched control markets. So similar markets.
So think like a Denver and a Kansas City, or the example you used,
Atlanta and whatever else that's kind of comparable. And hey,
(18:57):
let's run YouTube in one and not in the other.
And let's measure then the growth in Walmart sales,
and let's do a comparison between the two in Walmart sales.
And it was remarkable. It was about an eight-week test.
We had three test regions, so 19 markets, but three test regions.
Test region one, we saw an average of 12% lift in Walmart sales.
(19:19):
Test region two was like 15% lift.
And then our final test region was 25% lift.
And there were some standouts,
like Oklahoma City was up 40% and Salt Lake City was up 48%. But it was one of
those things where, okay, now we look at that and we can say, okay,
YouTube had a big impact. And what's also interesting, Tom,
(19:40):
is we just ran the YouTube portion at OMG.
They also did a connected TV test in other markets, not related,
didn't see a lift, didn't see a measurable lift.
And so it could be lots of things. That's not to throw shade on
CTV, I like CTV,
so maybe they just did it wrong, or wrong creatives, or who knows what.
But it's one of those things where it's like, okay,
(20:01):
if you do this the right way, you should see an impact.
And I think touching on the piece that I didn't mention,
the other beauty or value of incrementality testing relative to
attribution or MTA is the ability to see beyond your .com, to be able to
see what's happening on third parties like Amazon, what's happening in store.
If you get that data, owned-and-operated store, or if you can get that through
(20:24):
wholesale data, it really simplifies.
There's so much complexity. And I think that's, again,
one of the rubs that I have with MTA is all of the,
all of the data you have to wrangle together to try to
patchwork this kind of story together.
Whereas in incrementality testing, it's pretty straightforward.
(20:45):
It's what did I spend and how did I run that spend, by
market, by day or by week, and what were my sales?
What were my new customers, or whatever metric I'd want to look at with that same
granularity and same dimension.
And that's really it, because you're really just trying to understand the
(21:06):
relationship, the causal relationship between spend and outcomes,
all that kind of muddy middle, trying to
get it at the user level,
which again, not going back into the tube, really simplifies things.
Yeah, it does.
And another thing that was kind of interesting that
came to light doing this test
for Arctic is all of the ads we tagged with available at Walmart,
(21:28):
shop at Walmart, find it on the shelves at Walmart, whatever.
We measured everything though in those markets.
So you could look at Walmart sales, online sales, so the .com and Amazon.
And what's interesting is the push to Walmart really worked.
It's a reminder of what you ask someone to do in an ad is what they're going to
(21:48):
lean towards. Because in some of the markets,
we didn't see that much of an online lift.
We saw some clicks and stuff like that, but the lift was at Walmart.
But we also saw a pretty strong lift at Amazon as well,
because I think that just speaks to,
there's some people that are just going to buy everything from Amazon. Right
there, tell 'em to go online, value proposition. Is it on Amazon? Yeah, yeah.
Yeah. Here in a day or two, it's hard
(22:10):
to beat, dude. It's hard to beat same price in a couple days.
I don't have to leave my house. But yeah, really, really interesting.
And so we'll circle back to that of course,
but let's talk about then MMM, or media mix modeling.
What is that? How are you using that?
And then how does that kind of relate to incrementality testing? Because again,
(22:33):
going back to your tagline, Tom, you did not say operationalizing MTAs.
You said operationalizing MMMs and incrementality.
So what is MMM and how does that pair with incrementality?
Yeah,
Basically a big correlation exercise trying to suss out, without a true kind of
holdout group,
what is the impact and contribution of each media channel, and also what would
(22:55):
happen without media.
So trying to suss out a lot of the same questions as incrementality,
but basically using correlation as opposed to having a true holdout group.
So basically,
and I'm sure all the hardcore MMM people and data scientists will thumbs down
this or whatever you can do to a podcast, but hey, in this period of time,
(23:17):
sales went up and nothing could really explain that other than the fact that
TikTok spend went up. And essentially doing that at a mass scale over longer
periods of time, trying to take into account anything that could explain that.
So you'll always kind of flag it with, these are promotions that happened,
and because you're going to give a model at least, like, two years' worth of
data,
(23:38):
it'll bring in seasonality and try to understand those sorts of trends. So it's
trying to pull out, if not seasonality, if not promotions,
if not some other things that we are flagging,
and it wasn't price reductions, it wasn't all these pieces,
what was happening in media that could explain that change.
And so that's ultimately what MMM is doing.
(23:58):
It's a big correlation exercise,
figuring out roughly what is the channel contribution to a top line revenue or
order number. And what's really important,
I think the nicest part or the best first step with MMM, is trying to get an
understanding of a base,
which is what it's going to be called, or intercept: what, without the presence of
ads,
does this model think that my sales would be? Such that I can then calculate not
(24:23):
a total CAC of just looking at total new customers divided by cost,
but incremental to media, or remove base from
that equation:
how many conversions were contributed because of media as this model sees it.
No model is going to be perfect,
no measurement method is going to be perfect,
but it's a really nice place to start to say,
(24:43):
I knew I couldn't attribute all new customers to advertising,
but what's a good number to use or to start with? Well, it looks like,
and this will depend on the maturity of the brand, but a really mature brand,
I mean super mature brand,
the big CPGs, might be like 99% base; a smaller brand might be something
like 50%, because you've got this word of mouth flywheel,
(25:03):
you've got product market fit.
But trying to get an understanding of how much is media contributing relative to
customer base is a really nice place to start.
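(A toy illustration of that base math in Python, with made-up numbers: a 50% base roughly doubles the CAC you should attribute to media.)

```python
# Hypothetical month of data
spend = 100_000        # total media spend
new_customers = 2_000  # total new customers
base_share = 0.50      # share of conversions the model assigns to base (no ads)

blended_cac = spend / new_customers              # $50, flatters media
media_driven = new_customers * (1 - base_share)  # 1,000 customers from media
incremental_cac = spend / media_driven           # $100, the honest number
print(f"Blended CAC ${blended_cac:,.0f} vs. incremental-to-media CAC ${incremental_cac:,.0f}")
```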
And the benefit of running incrementality and media mix modeling is
informing the model with some of that causal data.
You see that a lot, and there's a really powerful feature of media mix
(25:24):
modeling, which is saying, Hey, yes, that's a correlation exercise,
can't pull everything out,
but let me inform the model, or at least restrict the priors it can use, or the
coefficients, whatever you want to call 'em,
what it's searching for to try to find a fit in this model, and say, well,
I did a holdout test. I know you don't have the causal data,
but we ran this in this channel and that channel. And helping that restrict the
(25:46):
model, and giving it data that it can't have without that human intervention, can
be a really powerful flywheel.
So using your incrementality test data,
feeding that back into your MMM model to make it more accurate and
more causal and make that correlation.
Stronger.
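(Real MMM frameworks do this with Bayesian priors; Robyn, for example, takes lift-test results as calibration input. But a bounded regression conveys the idea. Everything below is a toy sketch on synthetic data: the lift test pins the TikTok coefficient to its measured range instead of letting the correlation exercise pick anything.)

```python
import numpy as np
from scipy.optimize import lsq_linear

# Synthetic weekly data: sales = base + coef_tiktok*spend + coef_search*spend + noise
rng = np.random.default_rng(0)
weeks = 104
tiktok = rng.uniform(5_000, 20_000, weeks)
search = rng.uniform(10_000, 30_000, weeks)
sales = 50_000 + 1.8 * tiktok + 0.4 * search + rng.normal(0, 5_000, weeks)
X = np.column_stack([np.ones(weeks), tiktok, search])

# A geo holdout test measured TikTok at ~1.5-2.1 incremental dollars per dollar,
# so restrict that coefficient to the tested range; base and search stay free.
lower = [0.0,    1.5, 0.0]
upper = [np.inf, 2.1, np.inf]
fit = lsq_linear(X, sales, bounds=(lower, upper))
print("base, tiktok, search coefficients:", fit.x.round(2))
```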
Because the two things that you're really trying to get,
(26:07):
but you don't get with multi-touch attribution or attribution in general,
and you do get with the combination of media mix modeling and incrementality
testing, is the incremental impact,
the causal impact of what would've happened without
the presence of ads, as well
as the diminishing returns curve,
which we know can be really powerful and important too,
is what happens over time as I spend. That's sort of a feature, a big
(26:29):
feature of media mix modeling, is understanding where
are you on a diminishing
returns curve? If I keep spending more,
I know it's not going to scale linearly,
but are there channels that diminish faster?
Is there more headroom in other channels?
And it really becomes this true optimization game of
where do I put the next
dollar? Ultimately the question that every marketer,
every finance team is trying to answer is, Hey,
if I find $20,000 in the couch cushions, where do I put it?
And if I need to give back $20,000, where do I pull from?
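(A sketch of that next-dollar logic in Python: hypothetical saturation curves per channel, as if estimated by an MMM, and the marginal return of the next $1,000 at each channel's current spend.)

```python
# revenue(spend) = vmax * spend / (k + spend): a simple saturation curve
channels = {
    #           vmax      k        current spend
    "meta":    (400_000, 150_000, 120_000),
    "youtube": (300_000,  80_000,  30_000),
    "search":  (120_000,  20_000,  60_000),
}

def marginal_return(vmax, k, spend, step=1_000):
    """Extra revenue from the next $1k at the current spend level."""
    revenue = lambda s: vmax * s / (k + s)
    return revenue(spend + step) - revenue(spend)

for name, (vmax, k, spend) in channels.items():
    print(f"{name:8s} next $1k returns ~${marginal_return(vmax, k, spend):,.0f}")
# The found $20k goes where the marginal number is highest (YouTube here);
# a $20k cut comes from where it's lowest (the saturated channel).
```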
I want to hang out at your house and look at your couch cushions and find 20
grand. That's great.
Yeah, it's easy to give it back, but yeah, right.
We're trying to figure out what is going to be the least impactful if I have to
(27:11):
give the money back and cut budgets, and where is it going to be the most
impactful if I have another $20,000?
Because the answer is not going to be found in what has the highest or the
lowest ROAS in an attributed view. And in fact,
that can have the complete opposite impact of what you want.
Yeah, yeah, it's really great.
So I want to actually talk about that point in a minute, where
(27:35):
if you've got to cut budgets, which, hey, listen,
there's been some uncertainty even as we record this, tariffs up, tariffs down,
markets up, markets down, whatever. Consumer sentiment is all over the place.
So if things get a little bit tight, what are we going to do?
We can't slash marketing, we can't slash growth.
I think that sends you into a death spiral,
but we might have to pull back and get more efficient.
(27:58):
And so let's talk about that actually for a little bit.
So where can you be led astray?
I think you just made a post on LinkedIn about this, right?
Where you start looking at performance, which feels like the smart thing to do,
looking at ROAS and whatnot, and you're like, well, great, well,
let's just cut the lowest ROAS campaigns and channels. We'll be fine.
(28:19):
How does that lead you astray?
And if you want to talk about your specific example to help illustrate these
points, that'd be great.
Yeah, totally.
I think the one you're referring to is, I think, branded search,
which we were talking about earlier. And I love using it, both
because it can be, really, if a brand is spending a lot of money there,
it can be a really great place to go find those savings without impacting top
line. But also frankly, it's really easy to understand.
(28:42):
I think most people understand that. Up and down the organizational chart,
across departments, everybody sort of understands the idea of, Hey,
if somebody's already searching for my brand,
do I need to pay to get that click and that conversion?
And I found that just the fact that it's easy to understand can be a
really good gateway to incrementality testing, because it's easy to get buy-in.
(29:06):
Everybody understands that idea,
whereas it may be more challenging to express that idea in
other types of campaigns. But branded search is a good example,
and the example that you're referring to,
kind of a midsize brand that I was working with went through that exact
exercise, had to cut budgets.
(29:26):
They looked up and down the campaigns they were running. It was like, Hey,
we just got to make the best decision we can with the best available data.
They were basically running PMax, non-branded search, and branded search, and
PMax and branded search were what had the best attributed ROAS, best CPA.
Non-brand was really hard to justify in a lower budget kind of environment based
(29:49):
off the attribution data. Cut that, leaned a little bit more into branded search
as a percentage of their budget. And over the next couple months,
new customers and total revenue were declining despite the
attributed ROAS and CPA looking even better than ever.
And that's where I was brought in, looked at all these things,
(30:12):
saw the loose correlation to non-brand and new customer
acquisition and top line,
just the general skepticism that many have around branded search,
especially in a low-competition environment,
which they were in. There weren't many competitors in the auction that we
could see in Auction Insights. So yeah,
(30:32):
ran a very blunt-instrument matched market test,
which at a brand of that size and for branded search I don't think is ever a
bad idea. And yeah, no impact from pulling branded search.
It was about 20% of their budget,
which was substantial. So you can either make the decision,
I'm going to put that 20% back in my pocket or save it for a rainy day
(30:53):
or give it to some other place in the org, or say, Hey,
I'm going to redistribute this to something that I see in correlation
data that might help drive top line back up.
Let's reinvest that in non-brand as opposed to keeping it in branded. Again,
complete opposite of what attribution would say.
(31:14):
And you see that a lot frankly. Branded search is an easy one to pick on.
Same with retargeting,
but really anything. It's especially challenging with the black box
solutions that blend,
and I'm sure we could do a whole talk show on PMax, Advantage+, some of the
things that bundled together historically radically different levels of
incrementality. It can be a real challenge when you're then measuring on
(31:36):
attribution. But yeah, a ranty way of saying yes,
finding areas to cut, oftentimes, if you follow the attribution kind
of data, can lead to really kind of impactful, in a negative way,
business outcomes, because the attribution view just does not take into account
(31:57):
what would've happened without the presence of those
ads like incrementality does.
And so can definitely lead brands astray as they're looking to cut.
Yeah, really interesting. And yeah,
PMax is notorious for leaning into remarketing or branded search.
If you're not diligent about that, it can lean into both of those things.
And so got to be mindful of that.
(32:17):
You also quoted something that totally ties into this.
It's from a Shoptalk talk that you went to, Shoptalk the show,
and I can't remember who said it, but: if I see high ROAS,
I know something is wrong and that the auto-targeting is just finding existing
customers. Do you remember actually who said that? And unpack it a little bit.
(32:40):
Yeah, I forget his name and I could look real quick. He worked for...
Danone, the big CPG.
Yeah, I just really appreciated that quote, because I
mean, I always wonder if I live in sort of a bubble of being super passionate about
(33:00):
incrementality versus attributed metrics,
but that was just really refreshing to hear, because I don't think that's the
natural...
It's not.
...thought in people's
heads: spend more.
But I really think it should kind of spark some skepticism,
especially when your goal really is to try to drive new customers.
(33:22):
My first dollars,
especially if you think about both incrementality in the context of ASC
or PMax that's blending retargeting and prospecting by default,
and knowing diminishing returns,
are my first dollars, yes, they're going to be the most effective.
But if they are focused on people that are already buying from me, and my goal in
(33:44):
my head is new customers,
I should be shocked that I can spend a hundred dollars and drive
this amazing new customer revenue
and not think that something is up. Or even over time, as I continue to spend,
our BS meters should probably go up a little bit more.
And I don't think they do by default. So I found that comment really refreshing.
(34:09):
Yeah, I think that really illustrates that,
right, where it's like most of us would think, oh, ROAS is going up, great,
we're printing money.
Whereas maybe your BS detector should say, something's wrong here.
This campaign's leaning into customers that were going to buy anyway.
And I'll give two examples here to illustrate this a little bit more.
And I'll also, since we've been picking on branded search so much,
(34:29):
I'll share a couple of ways I think we should use it. One,
if
other competitors are aggressively bidding on it,
just know that if you're not Nike and you're not Adidas and you're not, like, Ford
or something, it's not a lock. If it's a new customer,
they could be swayed by a competitor.
And that's generally how we like to separate it out, is like,
(34:50):
let's have branded search for returning customers and let's make that crazy
efficient or just turn it off altogether.
If it's a new customer, then again, we want it to be very efficient,
but maybe we want it on, because we don't want our competitor to come in
and swipe our customer. And so one example of this,
I did a podcast with Brian Porter, he's the co-founder of Simple Modern,
great drinkware brand, has become a friend, and they did an incrementality
(35:12):
study and they found, I'll get these numbers off,
but it was like branded search was 10% incremental.
So basically what that means is if it shows that I got a hundred new customers
from branded search,
I probably would've gotten 90 of those if I had shut it off, right?
(35:33):
Only 10% were incremental.
So then what you would need to do there is you need a 10x ROAS on branded
search for it to even make sense. If it's below that,
you're completely wasting money. Pair that with,
and you and I were commenting on this, the Haus analytics team (HAUS),
Olivia Kory and team, did 190 incrementality studies involving
(35:54):
YouTube, and they showed with tremendous amounts of rigor
that, hey,
YouTube is probably 3.42 times more
incremental, meaning if you see a one in platform,
it's actually like a 3.42 in terms of incremental impact.
And so wildly different between those two. But again,
(36:16):
we're just so drawn to in-platform ROAS, man. We'll just say spend,
spend, spend on PMax and branded search, when really we should be saying,
let me lean into YouTube or let me lean into top-of-funnel Meta.
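(The arithmetic behind that comparison, sketched in Python with the rough numbers from the conversation; the factors come from lift tests, not from the platforms.)

```python
# platform ROAS x incrementality factor = incrementality-adjusted ROAS
channels = {
    #                 (platform ROAS, incrementality factor)
    "branded_search": (8.00, 0.10),   # only ~10% incremental
    "youtube":        (1.00, 3.42),   # Haus-style finding: 1.0 reported ~ 3.42 true
}

for name, (platform_roas, factor) in channels.items():
    adjusted = platform_roas * factor
    print(f"{name:14s} platform {platform_roas:.2f} -> incremental {adjusted:.2f}")
# branded_search: a "great" 8.0 becomes 0.8, below break-even
# youtube: a "weak" 1.0 becomes 3.42, far better than the platform shows
```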
I think both those examples too are really good examples.
To me it also speaks though to the importance of
cost per incremental almost being more important than
(36:40):
percent incremental. And that's something I always use with branded search.
I think you and I have a very similar feeling around branded search.
There's definitely a time and a place for it,
and it's one of those things where it might not matter that it's 10%
incremental, 10% incremental relative to what Google's attributing.
If your attributed CPA is a dollar and now it's
(37:01):
$10,
but your margin when you sell a product is a thousand dollars, like,
hammer that all day long,
that cost per incremental is still extremely profitable and valuable.
And same with the YouTube piece.
If YouTube was four times as incremental as Google said,
but your YouTube was crazy expensive,
(37:22):
it still might not be worth it, even though it's four times
more
incremental than the platform was making it.
And that's how I think a lot about this with connected TV, where
connected TV can be super powerful, and maybe more so than linear TV,
but if you can buy scatter linear TV for a tenth
(37:44):
of the cost of CTV,
well, it just has to be more than a tenth as effective and
it's accretive, it's a positive.
So it becomes more of a comparison of a cost per than just a
blanket
how-incremental-is-something, which I always think is important to focus on and
call out.
Yeah, it's so good.
(38:05):
I mean, measuring something in terms of percentages can provide insights and help
make decisions, but ultimately it's the cost per, right?
Translate that into real dollars to see if it makes sense.
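(Tom's dollar-to-$10 example as a tiny Python calculation, with hypothetical spend numbers; the decision hinges on margin versus incremental CPA, not on the incrementality percentage.)

```python
spend = 1_000
attributed_conversions = 1_000    # what the platform reports
incrementality = 0.10             # only 10% truly incremental
margin_per_sale = 1_000           # contribution margin per conversion

attributed_cpa = spend / attributed_conversions                      # $1
incremental_cpa = spend / (attributed_conversions * incrementality)  # $10

keep_running = margin_per_sale > incremental_cpa
print(f"Attributed CPA ${attributed_cpa:.0f} -> incremental CPA ${incremental_cpa:.0f}; "
      f"margin ${margin_per_sale} -> keep running: {keep_running}")
```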
100% agree with you,
but I think this also goes back to, and I'll use your linear TV example,
and I still love TV and connected TV and stuff. Again,
(38:25):
I'll use YouTube just because I've got the numbers in my brain,
but with YouTube sometimes we'll see a $5 CPM or a
$7 CPM in certain audiences, compared to other channels that are
15, 20, 30, 50, whatever. Totally. And I'm like, well,
if we're reaching the right person and if the message and offer are
good, how could this not work? And it's one of those things where it's like,
(38:49):
okay, either one of those is off, we're talking to the wrong person,
or it's the wrong message,
or we're just not measuring it properly, and that's where we need to look at it.
So did you have a thought on that?
I've got another question on MMM here in just a second.
Yeah, yeah, totally. But it made me think of the idea of,
I think the reason I'm starting to become way more bullish on any channel that's
historically been hard to measure is where I think there's that arbitrage
(39:11):
opportunity: costs are still relatively low, because people haven't all moved
in, because it's not easy to attribute.
It'll be really interesting with the Haus example,
does that inspire a lot more YouTube buyers?
That's something that Google should have put out way long ago,
but I think it would undermine search, and that's their bigger
business. And I could do a whole kind of rant and I'll save you that,
(39:34):
but the idea of incrementality-first measurement probably wouldn't be great for
the search business. So that's probably exactly why they
haven't been able to make such a good case on YouTube.
But you think about all the channels that have historically been harder to
attribute,
that's where costs are deflated just from a supply and demand perspective.
So when you can move in and get CPMs at five to $7 and it's really effective,
(39:56):
but most people that are measuring through attribution don't know it's really
effective, that's a huge win for a certain period of time until everybody floods
in and the costs go
up in the market.
I'm sure there's a lot of people that were not excited to see that study from
Haus, like, dang it, that means my costs are going up. I don't like that at all.
So really good, man.
So we talked about incrementality testing, and I think you can use tools like
(40:19):
Haus, and then there are others.
We were just talking about WorkMagic, and there's a number of others you can lean
into. Full disclosure, they're pretty expensive,
but you can also do stuff on your own too.
If you've got someone that can measure this stuff,
you can do a little bit of it on your own. What about the MMM side of things?
What's kind of the easy way to start there? Is there an easy way to start?
(40:42):
What do you recommend to people
there? I don't know. I dunno if there's an easy way to do anything.
I think, well, I guess that's not totally true.
I think there's some ways to run relatively easy incrementality tests.
So I think that's the easier place to start.
Certainly you can always ratchet up the scientific rigor.
I think the problem with looking for an easy MMM solution is
(41:06):
anybody could run a model with Robyn, or there's a lot of open-source packages,
but just because you can run a model,
it could say anything.
It's not necessarily rooted in, this can all of a sudden predict the future
and tell you exactly the contribution from media.
Whereas incrementality can do that a little more out of the box.
(41:28):
You may have wildly wide confidence intervals,
but it answers the question. It gives you the comparison.
I didn't do it in this market,
I did it in this market. What is the delta? With media mix modeling,
you could build a model to tell sort of any story.
The proof is sort of in the pudding of, if I do the thing that the model says,
(41:48):
does it change my top line?
Can I see over time that when I listen to the model,
that improves my top line?
So it's a lot easier to get started with incrementality testing.
You can run poor man's matched market tests, as I said. You can just
sort of pick
some markets that historically behave similarly, and there's certainly some risk
(42:12):
there, but with a model, you might think that it's an amazing model.
I just don't feel like there's a great place to DIY that together without some
real scientific or statistical rigor. Or if you do,
you've just got to try to prove it over and over by taking some big swings. And
that's really,
I sort of feel like you can get away with the kind of feel-it sort of tests
(42:35):
without really running a true incrementality test or model.
If you're a small enough business and you spend a decent amount on Facebook,
maybe you're not willing to turn off Facebook,
but are you willing to drastically increase spend and see if you can feel
something at the top line? Okay, then what happens if you cut it in half?
What happens?
And starting to understand those curves on your own is probably a less risky way
(42:56):
than trying to, I've never done anything in R and I'm going to run,
or done any sort of media mix model, I'm going to try to run one.
That's probably a risky proposition.
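(A sketch of that poor man's version in Python: compare four weeks before and after a deliberate spend change; the file and column names are hypothetical, and the caveat in the comments is the whole argument for graduating to a real control group.)

```python
import pandas as pd

# Hypothetical daily totals: date, revenue, meta_spend
daily = pd.read_csv("daily_totals.csv", parse_dates=["date"]).sort_values("date")

change_date = pd.Timestamp("2024-06-01")             # when Facebook spend was doubled
pre = daily[daily["date"] < change_date].tail(28)    # four weeks before
post = daily[daily["date"] >= change_date].head(28)  # four weeks after

extra_spend = post["meta_spend"].sum() - pre["meta_spend"].sum()
extra_revenue = post["revenue"].sum() - pre["revenue"].sum()
print(f"Extra spend ${extra_spend:,.0f} -> extra revenue ${extra_revenue:,.0f} "
      f"(marginal ROAS ~{extra_revenue / extra_spend:.2f})")
# Caveat: no control group, so seasonality or promos can masquerade as lift.
```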
Yeah, it's a really good insight. I'm glad you answered the question that way.
I think, yeah,
leaning into the poor man's incrementality test, or just leaning really heavily
into a channel and measuring your top line, if you've got a small enough business
(43:18):
to look at that. But probably if you're going to lean into MMM, one,
you need a couple years of data, so to be able to make some correlations, and
you probably need to lean into someone or a tool with quite a bit of
experience, because you can be led astray.
And on your comment on cost too.
I mean, it's all relative, and a lot of times where you're going to need media
(43:39):
mix modeling is when you're spending a significant amount in a significant
number of channels,
which you're probably only doing if you are spending a lot total,
which you're probably only doing if your revenue can support that high level of
spend,
which means that a tool may not be all that expensive relative to the
opportunity you could derive from it, which is where I always net out.
So I'm paying 10 or 20 grand for a tool monthly,
(44:01):
but it's allowing me to redeploy millions in ad spend.
And it totally and completely makes sense. So Tom,
this has been fantastic. I'm just watching the clock.
I know we're kind of coming up against it, but one,
I recommend people follow you on LinkedIn. You put out some awesome content.
I love reading it.
Thank you.
People should definitely follow you on LinkedIn, and you are, is it Tom,
(44:23):
what is your handle on LinkedIn? You are Thomas B. Leonard.
Thomas B. Leonard. That's probably confusing.
I'm very self-conscious of LinkedIn, so I'm glad, thank you for saying that.
I think it's good, man. I think it's really good. I like it a lot. Yeah.
Yeah, it's been fun to start connecting with folks.
Definitely an area that I have a lot of excitement and passion for,
(44:45):
it's fun to have these sorts of conversations,
so I appreciate you reaching out a while ago and that we could connect.
Absolutely.
Man. Absolutely. So then if other people were like, Hey,
I just want to talk to Tom because maybe you can help my brand or my business,
how can they connect with you, and who are you looking to, or who do you feel like
you can help?
Yeah, definitely appreciate that. Yeah, reach out on LinkedIn.
(45:10):
I spend time there. I love reading everybody's thoughts and content. So yeah,
reach out on LinkedIn. Mostly we work with consumer-facing brands that
are trying to understand where to put the next dollar, or where to pull
in the scenarios they have to, really kind of rescuing people from attribution,
trying to better understand where they can get more with their ad dollars.
(45:35):
I think to your point that you teed up, now is such an interesting time, or
anytime that there's margin pressure,
there's more scrutiny on a marketing budget.
Really want to try to help empower marketing teams to
feel more confident with
what they're doing, and ultimately the finance teams to feel more confident with
what the marketing team is doing. Hundred percent. That's where I love to plug in,
(45:57):
but also just love to talk about this stuff probably more than I should.
So always open to the conversation.
Yeah, I talk about that a lot.
I've read analytics and measurement books on vacation and my wife
is like, what is wrong with you? And I'm like, it's interesting. I don't know.
I like it. And so totally, we are just a different breed, I suppose,
(46:18):
but I love that.
And then I think this is a great way to end it, where if I've got an extra dollar
to spend on marketing, where do I put it? If I need to cut a dollar of spend,
where do I cut it from?
And that's really what this approach is about: MMM
and incrementality. And so I think they're necessities.
(46:39):
I think attribution is broken and/or misleading in so many different ways.
There's some correlations there, so we don't have to throw it out completely,
but I do believe you need to lean into MMM and incrementality, for sure.
So connect with Tom on LinkedIn. And with that, we'll wrap.
Tom's been fantastic. Thanks for the time, the insights, and the energy. Yeah.
(47:02):
Thanks so much, Brett. Great time. Glad to connect.
Absolutely. And as always, thank you for tuning in. We'd love to hear from you.
If you found this episode helpful,
or you know someone else in the D2C space or marketing space, and you think, man,
they've got to listen to this, please share it. It would mean the world to me.
And with that, until next time, thank you for listening.