Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Michael Hartmann (00:00):
Hello everyone
and welcome to another episode
of OpsCast, brought to you by MarketingOps.com, empowered by
all the MoPros out there.
I'm your host, Michael Hartmann.
Today I'm digging into a topic that many teams feel but few
name: what our guest calls the foundational operations gap.
Our guest is Evan.
Evan, I knew I should have asked you this.
Am I going to butcher it?
Kubitschek, Kubitschek.
(00:25):
Okay, I knew it was one or the other.
I would have gotten it wrong.
Founder of Grow Rogue, he works with early stage companies to
help them build the right marketing operations
infrastructure before things start to break.
We'll discuss how infrastructure issues often get
overlooked, what the warning signs look like, and how to avoid
or recover from an operations house of cards.
So, Evan, welcome to the show.
Evan Kubitschek (00:42):
Thank you for
having me.
Michael Hartmann (00:46):
Looking
forward to it.
Yeah, well, let's just dive right in.
So let's start with that concept of what you call the
foundational operations gap.
What is that?
What do you mean by it?
How did you come to define it?
Evan Kubitschek (00:56):
I've been in
marketing operations for 15 years now, which is a terrifying
thing to say, and I prefer to work with
early stage companies, primarily because I have the opportunity
to address this gap.
What I think of as the foundational operations gap is
the foundational layer of operational frameworks,
(01:20):
scaffolding, architecture, however you want to describe it,
that I would say probably 90% of companies skip out on. And so
it is critical that you address it early on, because I think
that at a certain scale, it could be Series B, it could be
50 employees, the inertia of chasing revenue becomes so great
(01:45):
that it's very hard to return to these foundations and justify
the investment in fixing them, and so I consider it a
foundational layer.
If we're talking about specifics, I would probably say database
health, and that could be defined broadly as having a
minimum viable record.
What is your definition of what a record should look like?
(02:08):
Segmentation, speed to lead, a data dictionary, and an
understanding of how your tech interacts with each other, how your
stack interacts with each other, what the order of operations
should be, kind of that foundational documentation that
(02:28):
I think doesn't really come naturally to a lot of companies.
And there's that inflection point that I talked about that
many people skip, I think to their detriment, because I've
certainly walked into W2 roles myself where none of that
exists and things break constantly, because no one knew
(02:48):
they could break or they're not interested in investing in that,
because they just want to push out more campaigns built on
broken architecture.
Michael Hartmann (03:00):
So one quick
question and maybe a deeper
follow up.
So it seems like... I was initially going to say this
sounds like what people would call tech debt, which, by the
way, I used that term with my wife the other day and she was
like, what is that?
I can't explain it.
But it seems like it's more than just tech debt.
Evan Kubitschek (03:19):
I would define
it as more than just tech debt.
I think that tech debt is certainly involved and it's an
important piece of that puzzle, but I think often, if you go
back to sort of the first principles of what business
problem are we solving with this tech investment, or what is our
(03:42):
point of view on what a record or a prospect should look like,
how do we define that, let's put that to paper and have that
documented for the entire team, the entire company, to orient
toward.
I think there's a lot more around process and people and
not just platforms.
(04:03):
I would almost rank platform last amongst those three in
terms of how I would address these things, because I think
it's wrangling the people together to define your goals,
creating process out of those goals, and then finally mapping
that to where you want to invest your dollars.
In terms of platform, in terms of stack, I would rank that
(04:25):
third out of those three.
Michael Hartmann (04:26):
Makes sense,
okay, yeah, so like one of the
things I see often, maybe on the process side, especially at large
organizations, is what I think are sort of overburdened approval processes,
(04:47):
right? Number of levels, time. And then people wonder why we can't
get stuff out the door, because we've got just too many steps
and too many hands in the middle of it, with too many people who
can veto something and so on.
Right, is that an example of the kind of thing you look for?
Evan Kubitschek (05:05):
It is.
I think that you're describing a level that might be
beyond where I prefer to work, just because I think what
you're describing is when that inertia has become too great,
right, when you maybe can't return to address the process.
You're too caught up in sort of the tactical swamp, the tactical
(05:28):
hell of pushing out as much throughput through that process
as possible, and that's the goal that the campaign ops team
might be measured on.
But what I would look for there is understanding the underlying
system.
Do we have actual ticketing systems defined?
(05:48):
Do we have definitions of what the requirements are for someone
to make it onto the calendar?
Do we have definitions of how competing priorities work and
how these things take precedence?
What are the KPIs we're measured against?
Can we justify these SLAs in terms of an actual calendar that
makes sense for throughput for the MOPs team?
(06:10):
Are those things defined in a way that, maybe if you're under
growth marketing or you're under a CRO, depending on who you fall
under, they understand that?
Are they aware of that, and are they supportive of you having
boundaries and pushing back on these things?
That is the type of work, sort of that foundational layer, that
I think isn't always in place, because many times I'll hear a
(06:35):
phrase I hate, which is move fast and break things. Right?
Yes, there's a space for that, but if all you do is break
things over and over and over and never return to fix them,
of course things are going to become tougher over time.

Michael Hartmann (06:55):
Yeah. So you used a phrase with me
when we talked before: you're not scaling marketing, you're
automating chaos.
Is that kind of what you meant by that?
Is that what you just described there, or did you mean
something else?

Evan Kubitschek (07:08):
I think that's accurate.
I like to use an example: if you think about your kitchen,
right, and let's say, in this scenario, your kitchen is a
disaster.
You've got dirty dishes everywhere, there's food and ingredients
all over the floor, you have no meal planning, and I say, well,
(07:29):
why don't we just buy a faster oven?
That doesn't make any sense, right? There are lots of
analogies that you could use in that scenario.
But often people say, well, let's just buy this band-aid, and
it usually comes in the form of a platform, a tech investment.
That's what they want.
Someone is sold on it.
But when you ask, well, how does that play in our
(07:53):
current stack?
What problem does that solve?
Could this be addressed with better process and keeping the
same investments that we have?
Those types of questions, I think, are skipped.
So it's sacrificing that hard, upfront work on the altar of
speed and efficiency, and if you don't address those upfront
(08:16):
questions, I think you will get caught in this gap, and I see it
quite often.
Michael Hartmann (08:22):
Okay, yeah, I
mean, what you remind me of is, early in my career I did work in consulting in a totally
different space where we did a lot of work with, like, helping
companies choose and implement financial and accounting systems,
right? And one of the things we always coached our clients on
when they were evaluating vendors was, they're going to
pitch you on all the bells and whistles that they could do,
(08:42):
right? Amazing reporting, yada, yada.
We always had
to remind them, if you can't do
AP, AR, GL work every day and count on it to be right, then
the reporting doesn't matter, right?

Evan Kubitschek (08:51):
Right. I've joked
about this before, but I wish there was a way that we could
put sales under some type of cross-examination where they're
(09:12):
bound to tell the truth about their platform in a sales
conversation, because certainly early in my career I have fallen
victim to the siren song of a sales engineer or sales telling
me that this platform is a panacea, will solve all of our
ills.
And as I've gotten more mature in my marketing operations
career, you know, I want a POC, I want to see your
documentation.
I want to understand, you know, do you have robust
integrations or is it just like a Tray.io or Zapier wrapper?
Those are the sorts of things you learn to ask about the platform
that they're not always willing to answer on the sales call.
Those types of questions, I think, are just so important, and
it's getting harder and harder, I think, for those
(10:04):
things to be asked.
things to be asked.
Michael Hartmann (10:04):
You know I
look at Well, I think it matters
that you asked the rightquestions I think about.
I was looking at visualizationtool for some reporting.
I wanted it to integrate withMarketo and it did so.
I was like okay, but I neverreally looked under the hood
that much, I was moving too fast.
It was relatively low cost.
Well, it did integrate, but itintegrated with stuff that, like
(10:27):
was not useful at all.
Right, I couldn't do anythingwith it like we wanted to, and
so we ended up like spending alot of time trying to make it
work and like finally, I justtossed it because it was like
shame on me, right, I didn't dothat real level of due diligence
that I should have.
Evan Kubitschek (10:46):
And I do think
in today's market and how tough
it is to justify softwarepurchases, asking for a POC is
not out of the realm ofpossibility, Like give me a
couple months with the platform,we'll pay for it, but we need
to understand exactly how itworks and find the bugs.
You know I've I've seen demoaccounts without the full
(11:08):
functionality, these types ofthings.
But to your point, they maytell you that it integrates, and
yes, it might integrate, butthen you might find that that
key data point or that key fieldyou're looking to measure
against doesn't or isn'tavailable in some fashion, and
then the whole use case fallsapart.
Michael Hartmann (11:26):
Yeah, no, it's
funny you bring that up, I had
forgotten.
I've talked about this before a lot. Like, I wish the software
vendors would do something more because, like, a seven-day trial
or 30-day trial usually is not enough time, right, to truly
evaluate it, because it's hard to allocate the time from your
(11:47):
team, right? And I've said, like, be willing to offer a...
it doesn't even have to be cheap, right, but like a
reasonable-cost, short-term license of what you would
actually be thinking about getting, so you could really
truly evaluate it.
I think that, like, so they're not totally giving up.
I get that that's a challenge from a revenue predictability
standpoint and everything else for them, but it's like, yeah, I
(12:11):
think it would be an option that
more people would take advantage of than they probably
think, and maybe I'm wrong.
Evan Kubitschek (12:19):
I would.
I think I would agree with you on that.
I think that you would find more people willing to jump
onto that offer rather than having to go all in on something
that, you know, potentially they might stake their position at
the company on, especially with some of these platforms being
six figures plus, if you're not absolutely sure that it's the
(12:42):
right investment, you know.
I'll give you an example.
A client just secured two POCs in a row with a chat vendor.
That went really well and we made the full investment for
that client.
That was multiple six figures and, to the vendor's credit, they were
willing to extend that.
There were conversations about what level of investment it
(13:03):
would take for both of those POCs, but they gave it, and in the
end they secured that revenue, and I think that flexibility was
what won them that contract, and I wish more vendors would be
willing to explore that.
Michael Hartmann (13:15):
Yeah, I love
that.
So let's get back to this foundational operations gap.
You talked about how you work mostly with early stage companies, so what are
some of the reasons that these companies get into that
situation, and how do you spot it?
Evan Kubitschek (13:37):
So I think that
the warning signs that I
typically see with clients usually come with their
first round of funding or outside investment, and that
typically means there's new speed pressure being put on them
(13:59):
from their first board members or an expectation of some type
of return with angel investors.
But there is now outside pressure to grow at a certain
rate, and I do think that, especially if it's VC money,
you're starting to see a little bit of an adverse relationship
between how growth marketing and brand building work and the
(14:22):
expectations around growth, and so that's where I usually
initially see people sacrificing the foundational work for speed
and growth at all costs.
I think that initially, you're going to see a lot of manual
(14:43):
workarounds starting to happen at that stage, a lot of tribal
knowledge.
In terms of the team, it's usually a very small team.
It might be an individual or maybe one consultant who's doing
the work, but they don't want to pay for someone to be
documenting what's happening or creating a data dictionary, so
if that person departs, they're in a lot of trouble.
There's no one who understands exactly how the systems fit
(15:06):
together.
Michael Hartmann (15:07):
Single point
of failure.
Evan Kubitschek (15:08):
Single point of
failure.
Attribution questions can take days because no one's defined
the channels and the offers that matter to the company.
They're using something out of the box or they're just glomming
reports together.
They're not creating a single source of truth across the
company.
And so those are some of the typical things I see as warning
(15:32):
signs: when you start to get this outside
pressure but also you're ignoring some of that
foundational work, sacrificed on the altar of speed or on the
altar of growth, and it's very common, I think, with
sort of a seed or Series A.
That's when I start to see that inflection point, and if you go
beyond that, it just accelerates.
Michael Hartmann (15:53):
Right, that's
interesting that the external
pressures are a driver behind it.
So when we talked before, you talked about process and people
being more important to you than tech, but it feels like some of
this manifests itself with problems in the technology or
being able to use the technology.
(16:18):
What are you seeing in terms of, you know, priorities or
processes that tend to...?
I know you talked about moving faster because of the outside
pressure, but can you go down a little deeper on that, on
some examples of where that happens?
Evan Kubitschek (16:34):
Yeah, so, as an
example, it's very common for
me to see with clients that that need for speed means there has
never been any definitional work around campaign operations or
some type of ticketing or project management system.
Very often there is nothing but a Google Sheet and requests
(16:54):
coming in via Slack.
Things are not threaded in a way where multiple people can
reference them.
It's kind of a game of tag of how things should be triaged.
That's very often a failure point.
And it doesn't have to be expensive.
A very simple form, something cheap, like an Asana, a ClickUp,
any number of project management tools, just the
(17:16):
minimum viable system, will get you quite far, but that is
certainly one that I see. Another is not having a data dictionary,
so, to the point of having that process around, how are fields
being populated?
I think there are lots of smaller tools.
(17:37):
You think about Clay, waterfall enrichment, all of these
smaller investments that can inform your core systems, but
very often they are not being documented in a way that lets
you understand what is overriding your core fields.
Where's the source of truth coming from?
(17:57):
What is the direction of the data?
What's the cadence of things being pushed into those systems?
I'll give you an example with a client.
We did a lot of work over one month and then I found out that
there was a shadow system that no one told me about, where
SAP was overriding the account data every 24 hours, and finance
(18:19):
had full control of SAP and we had never touched it.
So, of course, all of the infrastructure work that we were
doing was useless until we addressed that root cause, and no
one had thought to go back and think about how that was
completely muddying the waters in their CRM.
And that's the type of thing that I think gets missed.
(18:39):
It's not understanding how your systems communicate with each
other, not defining the way they should perform and your
expectation of how they should perform, and then not defining the
business problem that they solve, what are they actually being
implemented for and what does success look like for that.
(19:04):
If those definitions change constantly, you're never going
to be able to measure success in a meaningful way.
Michael Hartmann (19:11):
Yeah, I liked
the term you used before and
I've heard it before, but it's been a while.
The minimum viable record, right? I think that's a concept
that a lot of people probably listening or watching have never
heard of, and if they haven't, they should really
start thinking about it, because I think that addresses
one of the common complaints from sales teams.
Right, you gave us crap data.
(19:32):
Right, crap leads, whatever.
So it's interesting that, you know, that is not as common,
including places I've been, to have something like that or a
definition of that.
Do you also include definitions around reporting?
You mentioned attribution, but like
other reporting, where you have common definitions of how you
define the metrics, how you calculate them, that kind of stuff?
define the metrics, how youcalculated that kind of stuff.
Evan Kubitschek (19:58):
Oh yes, I could
rant about this for hours.
I have been in so many situations where a report is
built by another team and there's ARR as a field, ARR v2,
ARR latest, and the report is built with a field that's been
(20:21):
deprecated or is no longer in use, and all of a sudden it's a
five-alarm fire because the projections for their team's
growth or pipeline have cratered, them not knowing that they are
using a field that no one uses anymore.
But the problem is no one has centrally defined what the
(20:44):
measures of success are and what the definitions are and which
fields are the ones that should be used.
So ideally everyone would be operating from the same core set
of reports and dashboards, and that's the source of truth for
everyone and that's locked down.
But if you want to let people freelance, at minimum you have
(21:05):
to have the definitions of what parameters, what reports they
can build against and what fields are actually in use.
If you don't have that, you're asking for people sniping at
each other, having different definitions of success,
disagreements at a leadership level, a board level.
That gets really messy really fast.
And that's another very common issue: you don't have
(21:28):
standard definitions of how you measure things.
Michael Hartmann (21:32):
Yeah,
but I think people get caught up
a little bit in, like, this is the right way to do it, and they
kind of, you know, dig their heels in a little bit sometimes.
And I think you've got to let go of that, because to me,
there's a few things where I would probably argue pretty
strongly for this is the way we should calculate this particular
metric.
(21:52):
But in general I almost don't care, as long as we all agree
what it is and it's a reasonable one, because I'm more
interested in us being able to agree on it and be able to trust
that we're all talking about it the same way,
because there's downsides to every one of these and upsides
to them.
Evan Kubitschek (22:10):
So, I
don't know, in my early
career I would fall on my sword and argue so hard for things
that I felt passionately were the correct way to do things.
And as I've suffered the slings and arrows of many, many years
(22:30):
having those battles, I'm right there with you.
So long as we are agreed upon the direction and we're
going in the same direction, I'm okay with that.
If we can agree that we want to revisit those metrics or the
direction on some type of cadence so we can make sure
we're course correcting, we're good.
(22:51):
As long as we have a standard definition together, I will
take that as a win any day of the week.

Michael Hartmann (23:00):
Couldn't agree more. All right, so we talked
about a number of things about
what it looks like when you get to this situation with the
gap.
What are some of the things that tell you you're kind of
approaching that tipping point, where you're getting
into that systems-are-tangled state, or whatever it might be?
(23:21):
How do you identify that, if you're sitting in a
chair right now listening to this, before they have
to hire you, Evan?
Evan Kubitschek (23:31):
So I think the
tipping point or the warning
signs that I look for are: if I go into your system and I can't
understand within a couple of hours how things fit together,
if there's no architect's vision or overall vision of how data,
(23:58):
prospects, customers should flow through your system, I
think that's a big warning sign.
So that typically would look like completely disorganized
programs and workflows, no naming conventions, no
conception of centralized lead processing, lead scoring,
(24:22):
generalized nurture tracks, some of these sort of core functions
of a Marketo, of a HubSpot.
If they're not in place and they're not organized in a way
that is easy to consume, I think that's an early warning sign.
What that typically means to me is that you're spot-fixing
requests rather than pushing back on those requests if they
(24:45):
don't make sense in the broader context of your instance.
And that's not an attack on anyone, right?
For junior employees, it's very hard to push back against a CRO, a
CEO, for what they want.
But as you mature, you need to have a vision or a roadmap of
how things should work and a point of view, and fight for that
(25:07):
point of view or push back against some requests.
Another area that I look for is looking at the database, right.
So I'll give you an example.
We had a client recently that had 700,000 records in
their database, of which only 150,000 had been touched in the
(25:29):
last three years, and so they are paying for 550,000 records
annually that are getting absolutely no touches from
marketing, no marketing comms of any kind, even translating that
to the sales side.
They're not being touched. I can't
tell you how much that makes me hurt, like physically.
(25:53):
When I pointed that out to the poor stakeholder, she was so
upset, and, you know, I wanted to let her down gently, but
the reality was painful: you had paid a lot of money over
the years to not touch any of these folks.
But I want to understand, how are you using the database
(26:14):
available to you?
Do you have a plan to touch all of them?
Are they categorized or segmented in a way that supports
your sales function?
Do you have concrete definitions of their life cycle,
so the expected buying cycle that they would go through?
(26:34):
Are you respecting their subscription preferences?
With that same client, as an example, you would find prospects that
had received three wake-the-dead emails during that time, and
then the fact that they had selected no, I am not interested
was ignored and they were kept in the database.
(26:55):
So it was good intentions, but the final step that should
have happened never did, and
that's the type of thing that I'm going to hunt for: are you
using the tool for its core functionality?
That's the first thing I'm going to evaluate.
Do you have a base level of organization and a vision for
(27:17):
how it should be used?
And very quickly, I think you can suss out whether or not
that's happening.
It doesn't take long.
Michael Hartmann (27:25):
Let me ask you
this.
So I had a scenario where I inherited a Marketo
implementation that was doing things in a, I'll call it,
non-standard way, right? So there's a lot of sort of... I
forget what Marketo calls it.
They essentially have a capability that's built for,
like, nurture streams, right? I can't remember exactly what it's
(27:46):
called, but the implementation I inherited basically didn't use
that at all.
It had some highly customized stuff, to the point where, if we
wanted to change out a single email in a thread, right, we
were worried about it breaking things downstream.
Yeah, so I mean, is that the kind of thing too where
(28:08):
you're like, I get it, like sometimes, if you get
sophisticated enough, you can go sort of beyond the most common
ways in which to use these platforms, and I don't have any
problem with that.
But what bothered me is that it was so complicated and
convoluted and interrelated that we really were struggling.
(28:30):
We knew we needed to update content, but we were so worried
about breaking something that we didn't. And so eventually, I
mean, it was all built based on the vision of an agency that I
also exited fairly quickly after I got there, but we basically
had to undo everything they did and took advantage of the
(28:51):
built-in functionality instead, right?
So I guess where I'm going is, you know, I don't mind
complexity if it truly adds value, but if it restricts what
you're able to do, then I think it's a warning sign too.
Is that the kind of thing you're talking about when you talk about
using the systems as expected?
Evan Kubitschek (29:10):
Absolutely.
My point of view is that you earn the right to add complexity
to a system, and the way you earn that right is by nailing
the basics that everyone can agree upon.
And so the situation that you were in, I see very often, and
(29:31):
unfortunately it's past that inflection point, where any fix
you might try and implement is like doing surgery on a patient
that's awake.
There are so many processes that are live and running that it
becomes exponentially harder to address those while keeping the
lights on, while keeping campaigns running, and you can
(29:53):
get caught in analysis paralysis just trying to figure out, if I
pull this thread, does the whole Gordian knot untangle
and, you know, I've got egg on my face? And so I do think
that in every instance that I step into, and that's why I
prefer to work, to your point, with those early stage orgs, is
(30:15):
because that operational debt, tech debt, process debt, people
debt generally hasn't built up enough where it's too painful to
fix.
But addressing that, I typically describe it as 80%
best practice, 20% your practice, right? It's not always
(30:38):
going to map 100% to what Marketo suggests you do, HubSpot
suggests you do, but start with 80% and then you earn the right
to do that last 20%.
That can be very difficult and take the most time, but you
adjust it to your business.
But start with what everyone else is doing, what is best
(30:58):
practice, and then, as you conquer that, as you nail that, you
earn the right to get to the more complex, bespoke things you
want to do specific to your business.
Michael Hartmann (31:09):
Yeah, I like
that idea that you have to earn
the right to do heavy customization.
Evan Kubitschek (31:15):
Yeah, I think, you know, there's a misconception also that simple
means easy, and that is not the case.
It's simple laying these things out for a client, but that does
not make it easy.
Right, the upfront work of defining what a record should
(31:37):
look like in your database, defining segmentation, what is
the customer journey for your nurture workflows, all of those
things are much harder than the technical build of them in a
Marketo, in a HubSpot, and so that people and process side is,
to my mind, the much harder piece to get right than the
(31:58):
platform itself, typically.
Michael Hartmann (32:00):
We're on the
same page.
Let's switch gears a little bit.
So for those folks who are listening or watching, who are
in the seat going, like, yes, we have this operational gap, I
want to go fix it, and I've talked to my boss, or my boss's
boss or whoever, and they don't get it,
they don't understand: how do you go about the process of
(32:24):
pitching, selling, convincing the organization that you need
to address it?
Evan Kubitschek (32:31):
So I think
every org is different, but
typically the lens that I will take with clients, if I'm working
with a mid-market company or a company that's a little more
mature, is I look at three lenses that I typically see leadership
caring about: that's growth, efficiency and risk management,
(32:54):
and so I think those lenses give you a
rubric to work through to justify this type of thing.
So very often, I think, especially with MOps, we're
nerds, we're geeks, we love tools, and you can get lost in
(33:16):
talking about the technical capabilities or what a platform
can do without...
Michael Hartmann (33:21):
Look at how
cool this is, yeah.
Evan Kubitschek (33:24):
Without tying
it back to brass tacks.
What does it get leadership?
Why should they care?
And so I think, you know, some examples of a
growth connection:
a database cleanup enables segmented campaigns that, on
average, perform three to five times better than mass emails,
(33:49):
right?
So if you are not doing any type of segmentation, you can
tie it to, we're going to do the same level of effort as today,
but you're going to get three to five times more effective
results. Or efficiency:
eliminating manual process frees up team capacity of the
(34:10):
equivalent of one to two full-time employees,
right?
If you're talking about campaign operations or instituting SLAs,
you measure those.
You can start to justify, because you know how much time
it's taking, how much more efficiency you can wring out of
the same systems with a robust process. Or risk management:
(34:31):
what would happen tomorrow if the person who built that
beautiful-mind nurture system you laid out departed?
I'm guessing there wasn't documentation.
The agency was probably long gone and you've got two weeks to
try and write documentation from scratch, if they give
(34:52):
notice.
So all of those things, I think, as examples, you can tie them to
the things that leadership cares about.
You're still doing the same work you'd like to do.
It's just a tweaking of the messaging, of the positioning, and
how you present it.
So there is an element of learning how to speak leader,
(35:15):
how to speak executive, and you just have to learn how to tweak
what you're talking about a little bit for it to resonate
with them.
And that's something that I think is the difference between
an individual contributor, kind of the manager, senior manager level,
and if you want to make that leap to director and above, you have to
learn how to position it to those leaders.
Michael Hartmann (35:39):
Yeah, I'm just
curious because I had recent
conversations with other guests.
One, who is a CMO, basically said the efficiency one is one that
he didn't even really care about,
right? That tying it to growth, and what I would say is
profitable growth these days.
Right? Like, there was a time not that long ago when it was
(36:00):
just growth, but to me it feels like the environment has
shifted to where profitable growth matters.
But, just curious,
do you see the same thing, that if you can tie stuff to growth,
it tends to be better received and you get more movement?
Or have you seen kind of just as much success with the other
(36:23):
types of storytelling, if you will?
Evan Kubitschek (36:29):
I think for my
corner of the market, growth of
those three resonates the most, but efficiency, I
would push back and say, is a very close second, and the
reason for that is, for a lot of the clients that I work with,
(36:50):
they tend to be early stage, VC-backed companies, and so, yes,
they want growth, but, to your point, they want sustainable
growth and they don't want to light cash on fire to get it.
So efficiency, to my mind, is a very close second.
Make marketing more efficient without having to invest
additional dollars, just through better process and better
(37:20):
definitional system architecture:
that is a very compelling message in my mind,
and that's certainly something that I found to be successful in
some of the orgs that I've worked with.
But yes, I think growth overall is still the name of the
game.
It's just, I think, sort of the ZIRP-era growth at all
(37:44):
costs
has gone away a little bit.
Michael Hartmann (37:47):
Yeah, well,
and I think you touched on
this, but it feels like that middle one, if it was efficiency
and effectiveness, right, which are sort of two sides of it,
because I think effectiveness is another one, but it also depends
on the metrics that you are focused on.
It feels like that's another piece of this puzzle.
Okay, so I think we're
(38:13):
going to have to kind of wrap things up here.
So, as we wrap up, if you were going into a client and you
were talking to the MOps leader or the CMO, whatever, I don't
know who you end up typically talking to, what advice
(38:33):
would you give the MOps folks listening,
like, how, yeah, we touched on this a little bit, to tell
a more compelling story about how to talk about the value of
what they're doing, especially if they're pitching for these
ideas? I mean, one of the things I think people struggle
with, with these sort of fixing-the-gaps efforts, is they're not always
(38:54):
as obvious in terms of the output.
Right, there's downstream benefits.
So, cleaning up your data, it makes your segmentation better,
faster, cleaner, right, more effective, but you don't see the
results immediately.
So how would you coach someone to talk about that?
Evan Kubitschek (39:12):
So I think the
first thing you have to do is
establish some level of communication cadence so that
people know what you're doing.
And I think that is a challenge for a lot of people in MOps, is
that you tend to be more of the Wizard of Oz type role.
You want to be behind the curtain, you want to make things
(39:32):
run, but if no one knows you're there, they're not going to
realize the work that you're doing is making everything run
on time.
You know, in our last conversation we talked about
this.
Like, no one cares that the trains run on time.
They care when the train is late or the train breaks down,
but unless you make the work visible that goes into making
(39:54):
everything run on time, people won't care.
So for me, what does that look like?
For my immediate leader, that's managing up: a weekly email of
what I accomplished and why it matters and what I'm doing in
the coming week; it might be every two weeks.
A team Slack post: something to the marketing team about what
you did and why it makes their lives easier.
(40:15):
Tie it to their goals.
Do the same thing with sales.
I think you will find that people are a lot more willing to
invest in your efforts when they know how it affects their
bottom line, how it affects their goals.
And then the other piece is, I want to get in front of as many
complementary stakeholders as possible.
(40:37):
Right, take RevOps out to lunch.
Take sales out to lunch.
Take product out to lunch.
Take marketing out to lunch.
Understand what they care about.
Document that so that you can talk about it in their
language.
Right, a lingua franca.
Okay, well, I know product cares about this, sales cares
(40:59):
about this.
I am sitting at the center of this web and I can think about,
what is something I can work on that helps me but also touches
on product, touches on sales?
You're going to get buy-in on that effort because it's
touching multiple areas at the same time.
The more you know about the connective tissue across orgs
(41:21):
that kind of plugs into what MOps is doing, the easier it is to
message your impact and the easier it is to show that as you
are pitching things and creating your roadmap.
Michael Hartmann (41:32):
I like that.
I would add finance to that list.
Yeah, and then the other thing, just to go off what I
heard: ask, don't assume, what those people care about and how they
think about stuff.
That's a good bit of advice, right? And just be
humble.
Like, I want to learn what matters, and I think
(41:56):
that would go a long way towards helping that.
Evan, it feels like we barely scratched the surface on this
one.
I know we probably could have gone a lot longer.
So thank you for the time, for the insights.
It's been a lot of fun.
If folks want to connect with you or learn more about what
you're doing, what's the best way for them to do that?
Evan Kubitschek (42:16):
I got two ways
for you.
So you can find me on LinkedIn, Evan Kubitschek.
I post quite often, lots of memes, lots of videos, but the
foundational operations gap is the space that I like to play in,
and you can also find my little corner of the internet at
growrogue.com.
If you're interested in addressing the foundational
(42:38):
operations gap at your own place, reach out.
I would love to chat with you.
Michael Hartmann (42:43):
Perfect.
Well, again, thank you, Evan.
Thanks always to our listeners and supporters.
We appreciate all that.
If you have an idea for a topic or a guest, or you want to be a
guest, feel free to reach out to Naomi, Mike or me and we'd be
happy to get the