
December 8, 2025 47 mins


In this episode of OpsCast, hosted by Michael Hartmann and powered by MarketingOps.com, we are joined by Nadia Davis, VP of Marketing, and Misha Salkinder, VP of Technical Delivery at CaliberMind. Together, they explore a challenge many Marketing Ops professionals face today: how to move from being data-driven to being data-informed.

Nadia and Misha share why teams often get lost in complexity, how overengineering analytics can disconnect data from business impact, and what it takes to bring context, clarity, and common sense back to measurement. The conversation dives into explainability, mentorship, and how data literacy can help rebuild trust between marketing, operations, and leadership.

In this episode, you will learn:

  • Why “data-drowned” marketing ops is a growing problem
  • How to connect analytics to real business outcomes
  • The importance of explainability and fundamentals in data practices
  • How to simplify metrics to drive alignment and action

This episode is perfect for marketing, RevOps, and analytics professionals who want to make data meaningful again and use it to guide smarter, more strategic decisions.

Episode Brought to You By MO Pros 
The #1 Community for Marketing Operations Professionals



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Michael Hartmann (00:48):
Hello, everyone.
Welcome to another episode of OpsCast, brought to you by MarketingOps.com and powered by all the MO Pros.
I'm your host, Michael Hartmann.
Today, flying solo.
Mike and Naomi are off doing whatever they're doing.
Today I'm joined by two guests, both from CaliberMind, who live at the intersection of data, marketing, and business strategy.
First, Nadia Davis, VP of Marketing, and Misha Salkinder,

(01:10):
VP of Technical Delivery.
Nadia and Misha have been vocal about something many of us in marketing ops have felt for a while: that we've gone from being data-driven to being data-drowned, a sentiment I agree with.
We'll talk about what's been lost in the process, how to bring context and explainability back to analytics, and how teams can focus on the fundamentals that truly move the needle.

(01:31):
So, Nadia and Misha, welcome to the show.
Thanks for joining.
Thanks for having us.
Yeah, looking forward to it.
Yeah, it'll be fun.
I feel like I have this kind of conversation too often these days, but let's get started, Nadia.
I think when we talked before, you mentioned that marketing is a dynamic, diverse function. Agreed.

(01:52):
I think we've had several guests on talk about that.
You know, it's an unusual combination of creative, technical, but also kind of accountable to numbers.
So how do you see that complexity creating challenges in how data is used and interpreted for marketing teams?

Nadia Davis (02:11):
It's actually a packed question. We just got back from the B2B MarketingProfs Forum in Boston.
And this very topic was front and center, right?
And it's true that under marketing leadership, you have the full gamut of creative, you know, creative beginnings.
You have people in comms, you have writers, you have graphic

(02:34):
designers, you have people who enjoy campaigns and that puzzle-solving creativity, right?
And then you have people who are more down the lines of STEM and math and measurement, and they really enjoy bringing data together, you know, graphs, charts, everything.
And then you have the technologists bringing all of the marketing tools together.
Let me tell you, the martech community, I've always been

(02:56):
on the outside of it, them selling to me.
Now I'm on the inside of it, you know, hearing with others what we've got.
There's a lot of noise going on though, right?
Well, a lot of noise.
And sometimes even vendors don't know, or, you know, the sales teams within vendors don't know what it is that they've got and whether what they're selling will match the needs of

(03:17):
the client on the other side.
I don't think, you know, they're selling you something knowing that you don't need it.
They truly believe it would work.
And maybe the client is not sophisticated or savvy enough on the data side of things to understand how it would all come together, right?
So the challenge becomes: you have all of these different marketing minds.
Some are more data savvy, some have more data affinities, while

(03:40):
others are, their talent is not in the numbers.
Their talent is eliciting certain emotions, engaging people within the prospect pool, right?
And they're the ones starting the journey of delivering something to the client.
So your brand is memorable.
So, you know, you drive recall, and then other people

(04:01):
measure it.
But everybody's held accountable to some kind of a metric.
Coming up with a metric is a challenge in itself.
A lot of us have, you know, investors and VCs and PEs behind us.
And I understand those people too, because when they give you money, and we are the spending function, they want to make sure the money is working really hard, every single dollar.

Michael Hartmann (04:20):
It's a reasonable request, right?

Nadia Davis (04:21):
You invest in your 401k.
Do you want to ask Vanguard what return you got this quarter?
Is it up or down?
Right.
So you kind of get it.
But I think all of this combined compounds to the notion that there is this little bit of running in place, trying to measure everything while being able to measure nothing.
And this confusion and the noise push you to go fast while really keeping you running in place, because you

(04:44):
don't even know if you can measure it all within the setup that you have.

Michael Hartmann (04:48):
Yeah.
And I think some of it is also driven by the need for measuring short-term impacts versus long-term impacts.
Oh, that could be another podcast.
Yeah, I know, I know.
Okay, I won't go there.
So I think one of the other challenges, Misha, you talked about it.
You said you see a lot of times that there are smart people out there doing advanced analytics, even, right?

(05:09):
Let alone basic ones, but they're sort of disconnected from business impacts.
So first, maybe break down what you mean by that, but why do you think that's happening?
And are you seeing it more today than in the past?

Misha Salkinder (05:24):
Yeah, you know, I think it's like with many things in life, it's a question of incentives, and for whom, and who's involved, and which teams.
And we're seeing more and more, particularly with this notion of, yeah, we know analytics is important, we want to make data-driven decisions, and that's all great.
But everybody wants to show that their decisions are

(05:48):
the correct ones, that their metrics are, that what they did worked, that it has returns.
So, right, I mean, there's a question, right, of incentive and getting the next budgetary cycle.
So why this exists financially, you know, I can probably speak to that, but there are some really unexpected consequences from this.

(06:09):
And there are some really funny examples, you know.
One that just came up the other day: an organization says, well, we have a function that does outbound interactions, right?
So something like BDRs, and they need to be part of an equation of attribution so we can figure out what returns

(06:31):
we're getting from this team.
But what happens is that inclusion of outbound touches in an attribution model can be really adverse, right?
You say, well, the more emails we send, the higher the attribution for this channel will become, right?
So we remove a little bit of the common sense of what you're introducing in the model because of the incentive

(06:53):
to show returns for this one team.
So it's funny how these things happen.
And I guess what I meant by this is we need to take a step back and say, does this make sense?
Or are we just trying to, you know, fill in all the needs of all these teams, ignoring the what-the-data-tells-us piece?

Michael Hartmann (07:10):
Yeah, I mean, I think I've seen examples and other variations of a similar thing, right?
You put in place a metric that you think is going to be beneficial, but people figure out how to game the system a bit, right?
So things like customer support, where I've done some work in the past, where the goal is to

(07:31):
close cases quickly.
So cases get closed, but they don't get resolved, like they actually don't resolve the problem, right?
Yeah.
So it's interesting, and there are obvious ones in our domain, right?
Where sales teams, like, I've seen this before.
Marketing generates a lead, hands it off, sales declines it, then goes and opens a new opportunity, right?
And I'm sure there are all kinds of variations of this

(07:53):
kind of stuff, so that you get control over it.
So you mentioned common sense, though, Misha.
What do you mean by that?
I mean, in my head, one of the things I remember distinctly, like I'm a numbers person, I'm an engineer by training, and I remember talking to a CMO at a really large organization who happened to be a friend, years ago, in

(08:17):
the early days when search marketing was a new thing, and I'm like, I love so much data, we can do all this great stuff, and she's like, yeah, but we can't measure everything, and I don't want to just go by the numbers, right?
Are you talking about that difference, or is there something else you mean by that?

Misha Salkinder (08:32):
Yeah, intuition certainly plays a big part in this.
There's no doubt.
I think, you know, having spent most of my days looking at different types of models and what makes sense, I will probably give nowadays as much weight to an explainable model as I will to a very accurate one, because if it's a very black-box model and you can't explain it to the end

(08:53):
user in your organization, trust me, this account is very engaged, but they've just visited the same, you know, careers page 60 times.
Well, you know, so we all have to take some time to think about whether this even makes sense in the model.
And then there's also the piece of: what business

(09:16):
question are you trying to answer with this one metric?
And sometimes, I know we like to have very unified KPIs, the idea of a number of MQAs, or something like that.
But, you know, I've just come across these situations where the answer to how many of a certain type of

(09:40):
activity or a certain signal exists could be one question.
But what happened in between these signals, what actually helps me move from this level of engagement to this one, is a totally different type of question, right?
So for the former, you might want to count the behavior every time it takes place.

(10:01):
But for the latter, you might say, you know what?
No, if it happened one time, even if it continues to happen, let's consider that they're already in this stage, they already showed this level of engagement.
Let's not give this signal over and over again, right?
And I see organizations say, oh no, no, we have to have one definition, and so we're gonna try to fit every type of measurement into this one

(10:24):
entity.
It just simply doesn't work.
So, I guess maybe that's a little bit of common sense: yeah, it's still data-driven, but I need to think about what it means.
What is this data telling us?
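
A minimal sketch of the two counting approaches Misha contrasts here, counting every occurrence of a behavior versus counting each signal only once per account. The event fields, signal names, and data are hypothetical illustrations, not any vendor's actual schema.

```python
# Hypothetical sketch of "count every event" vs. "count each signal once."
# Field names ("account_id", "signal", "ts") and the signals are assumptions.
from collections import Counter

events = [
    {"account_id": "acme", "signal": "careers_page_visit", "ts": "2025-11-01"},
    {"account_id": "acme", "signal": "careers_page_visit", "ts": "2025-11-02"},
    {"account_id": "acme", "signal": "demo_request", "ts": "2025-11-03"},
]

# Question 1: "How many times did each behavior occur?" -> count every event.
raw_counts = Counter((e["account_id"], e["signal"]) for e in events)

# Question 2: "What level of engagement has this account reached?" -> count each
# signal once, because repeating the same behavior shows no new engagement.
stages_reached = {
    account: {e["signal"] for e in events if e["account_id"] == account}
    for account in {e["account_id"] for e in events}
}

print(raw_counts)      # volume view: careers_page_visit counted twice
print(stages_reached)  # progression view: each signal counted once
```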

Michael Hartmann (10:36):
I wish I would... go ahead.

Nadia Davis (10:38):
What comes to mind when Misha was saying that, you know, there's the saying: if you torture data long enough, it will tell you exactly what you want to hear.
So that comes to mind, right?

Michael Hartmann (10:48):
Yeah, my version, which is maybe more on the spin side, is the Mark Twain quote: lies, damned lies, and statistics.

Nadia Davis (10:56):
Yeah, yeah, yeah.
That's another side of it.

Michael Hartmann (10:58):
Yeah.
No, I think there's a lot of that I see too, where, in the pursuit of being, quote, data-driven, right?
People feel like they have to bring in anything that's possible, right?
If I go back to my early days of marketing, it was in database marketing. One of the things I built in the early days was

(11:19):
a big consumer marketing database, and we had third-party data, and when we'd go to the vendor for the data, it's like, here's the, whatever, hundreds of fields that we could bring in.
And it was really hard to choose.
So the default is, we'll just bring everything in, even if we don't ever actually use it, which means the costs go up, like it felt like at that time some multiple, right?

(11:41):
And it's easy to get caught up in all the data that's out there, but not actually think about how you can use it.

Misha Salkinder (11:49):
Yeah, it's almost like, I feel like on any dashboard or on any set of widgets, I would almost nowadays recommend having the business question as the title for that specific widget.
So you don't decouple this, you don't just have MQAs, you have a chart of MQAs over time.
Well, why?
Who does this matter for?
I mean, sure, there is a C-level, sure, you might

(12:12):
have a C-suite dashboard, but tactically, what are you gonna do differently about this?
Maybe the question is: which campaigns last month were more effective in driving MQAs?
And so you have a very specific distribution of campaign touches, focused on very specific touches, right?

(12:33):
But the behavioral action is much closer to the end user.
Maybe it takes away the noise.

Michael Hartmann (12:41):
Yeah.
Nadia, it looks like you were about to say something, yeah.

Nadia Davis (12:43):
Yeah, the other side of that, and actually this is not my piece of wisdom.
This is from Evan Hanscott, who is our head of data science.
He presents the idea of common sense this way.
He goes: when you get a stat, right, something that you derive from your data, if there's nothing that you can do with it after that, once you got it, meaning you cannot answer the question, so what?

(13:07):
What do I do with this?
What's my next step that makes, you know, an improvement within the business?
Then that's not common sense.
You just got a stat.
You just did a math sentence and got an answer, right?
So in business, it has to make sense, meaning it informs you of either what the current situation is so you can make a decision of what to do next, or it helps you pick the next step

(13:30):
and where you're moving forward.
So I think that's, you know... Misha was talking about the front end of it, kind of what questions are we asking and what are we presenting in front of our users?
And then the back end of this is: once they see that, do they have enough to make a decision?
Is the data, the way it's shown, you know, giving them enough to make that decision?

Michael Hartmann (13:49):
Yeah, no, I can think of many times I've seen that too.
Misha, you mentioned the word explainability versus sort of the black box.
Can you break that down a little bit more?
What do you mean by that?

Misha Salkinder (14:03):
You can have different... so there are several vendors in this space that attempt to rank engagement levels, or, you know, whether first-party or third-party levels of organizations.
Simply ranking, or simply saying this organization has

(14:29):
you know, 17 hot peppers of engagement, and this one has 14 hot peppers, is certainly useful for a sales rep or for whoever's gonna take the next action on this account.
But without speaking to why, without showing the why, one, it's really hard to take the next action because there's

(14:50):
no context.
But also, as soon as something falls apart, as soon as you call an organization and they say, well, I have no idea, who are you, where are you calling from?
As soon as that happens one time and there's no explainable element behind it, it kind of all falls apart.
And, you know, this can be applied to an attribution model as well.

(15:10):
If you're a campaign manager and you say, well, your campaign is doing this well, and last month it did worse, so you're doing well.
How?
Right, how is this being calculated?
Well, it's actually pretty straightforward, where, you know, we're doing this model.
But if you say, well, we have this really sophisticated model, it's absolutely accurate...

(15:30):
Like, you know, we all need evidence.
And I think sometimes the simpler the evidence, the better.

Michael Hartmann (15:37):
Yeah, I think that's right.
It kind of falls into the trap that people get into.
Was it the Pareto principle, the 80-20 rule?
Right.
Right.
If you get a model that's 80% accurate, predictive, whatever it is you're trying to get out of it, and it's easy to explain, it's probably better than one that's 100% predictive but

(15:59):
impossible to explain.

Nadia Davis (16:01):
And I would love to add here that the one thing we all have at our disposal now that prevents, in a lot of cases, that use case of explainability is AI analytics, right?
Where it's: just trust me, because the model within AI, the black box within AI, came up with this number.
Great.
Can I click into that number, and does that take us to data lineage,

(16:23):
right?
Can I see what the number derived from?
And when you can, to Misha's point, people that live in the world of numbers, who sit kind of on the Venn diagram overlapping marketing, finance, sales, right?
They need to understand that data lineage.
But if you cannot show that, explainability is out of the window.

Misha Salkinder (16:44):
Yeah.
Yeah, it's like... go ahead.
Let's have more users at the end of the day.
You know, we have to think of the end user, and you can have more people engaged with your data.
I see this so many times: there's a champion who creates this amazingly over-engineered model that has so many... I mean, it's so clever.

(17:04):
I mean, I think it's probably really, really good.
The concern is, well, one, if the individual leaves, right, somebody else is gonna have to adopt it, but also, you know, any other user of the same model, any questions they have, only one person can answer.
Which is why I always, even though we have, you know, in CaliberMind, very sophisticated models, the journey to get to

(17:27):
those models will always go by something like an even-weighted attribution model.
First validate that all your touches make sense.
And only then can we say, well, do you want to really... are they the same level of effort?
And then you start tuning from there.
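
To make that baseline concrete, here is a minimal sketch of an even-weighted (linear) attribution model of the kind Misha suggests starting from. The journey structure, channel names, and amounts are hypothetical assumptions, not CaliberMind's implementation.

```python
# Even-weighted attribution sketch: every touch on a journey gets an equal
# share of the opportunity's value. Data and field names are illustrative.
from collections import defaultdict

journeys = [
    {"opportunity_value": 60000, "touches": ["webinar", "paid_search", "demo"]},
    {"opportunity_value": 30000, "touches": ["event", "email"]},
]

credit = defaultdict(float)
for journey in journeys:
    touches = journey["touches"]
    if not touches:
        continue
    share = journey["opportunity_value"] / len(touches)  # equal split per touch
    for channel in touches:
        credit[channel] += share

# A baseline like this is easy to explain and to sanity-check against raw
# touches before layering on weighting, decay, or machine-learned models.
for channel, amount in sorted(credit.items(), key=lambda kv: -kv[1]):
    print(f"{channel}: {amount:,.0f}")
```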

Michael Hartmann (17:41):
But yeah.
Well, I think the thought that's going through my head is: all models are flawed, right?
And if you add in that you're trying to model human behavior and sentiment, right, it's nearly impossible to get it right.
You can get lucky, but I think what happens, in the scenario you described, Misha, is that downstream, if people

(18:02):
are relying on that and it's not explainable, and something comes out of it that doesn't match their expectation, they start to lose trust.
That's right.
Right.
At which point the model doesn't matter, they don't believe it, and it gets really hard to regain that.
And it feels like that's the real impact, right?
So as good as your model is, if people can't

(18:25):
understand it and they see things that are, quote, wrong with it, then they're gonna trust it less and less over time.
Okay.
So maybe this is the same, or it's the other side of the coin, but Nadia, you also talked about how complexity, like that level of detail in data, is sometimes seen

(18:48):
as a badge of honor, right?
Would you give any examples of that, or what do you mean by that?

Nadia Davis (18:54):
So if we think about the go-to-market team at large, with all the different players, right?
You have sales, you have finance, you have CS, you have marketing, you have data science or BI, you know, and operations potentially sits in the same place.
If we think about the personalities that make up successful players on those teams, we have a whole range of

(19:16):
personalities.
And those people are motivated by different things.
To assume that everyone's motivation, and everyone's sense of when they've done a good job, is the same is a very flawed assumption.
So, you know, marketing is usually the most collaborative function.
If a marketing player does not have that collaboration ingrained in them, being able to facilitate cross-

(19:38):
pollination of ideas, processes, and everything, you're not a good player, because there are many of us and we tend to hand off things from one person to the next to get it to market, right?
Then you take more technical roles, and the mindset, the internal motivation, the pride in a job well done is different.
It's more like: I am challenged by complexities.

(20:01):
That's what drives me, that's what feeds my curiosity.
And I am valuable to the business because of my curiosity.
Right.
So naturally, I'm more inclined to create very complex, over-engineered things, just because I want to see if I can put a man on the moon from my keyboard at the computer, right?
And this African proverb comes to mind: if you want to run fast,

(20:24):
run alone.
If you want to run far, run together.
So you will run slower, but, you know, business is a team sport.
However, you cannot forget that these people who are so good at what they do are that good because they run fast, because they have that curiosity.
So, getting closer to the example: we

(20:47):
did a virtual event series, and we had all sorts of guests coming to talk about, you know, different things.
Hitchhiker's Guide to Marketing Analytics; people can watch it on demand.
Yes.
We had some brilliant people, brilliant people, talking about the use cases of what they built.
And let me tell you, some of them said that if they were no longer in their seat, whatever they created would probably be

(21:09):
an abandoned castle with the key lost, nobody being able to get in there.
Just because what got created has so much of the know-how and, you know, level of attention that is specific to this individual who knows how it's all done.
It's a custom build, you know, and that's where the legacy, the tribal knowledge, is important.
But if you don't have that, that's the complexity that

(21:32):
really prevents the business from going forward faster if that individual is no longer there.
I don't know, Misha, if you can relate to that.
You're more of a technical person.
You probably see that side of the fence way more.

Misha Salkinder (21:44):
Well, I will just say that my earlier comments don't mean that robust models aren't necessary.
I still think that models need to be robust, but at the layer as close to the end user as possible, you should be able to answer their question as specifically

(22:04):
and as clearly as possible, without much noise.
But under the hood, you're absolutely right.
You know, if you have a lens of looking at campaign performance, is my campaign doing well or not well?
It really depends on who the user is, you know, what is their lens?
Is their lens maybe more financial, a

(22:24):
return, you know, to the bottom line of the business?
Or is it a matter of: am I getting the right job titles into this campaign?
Or is there an easy way that I can gauge that and its change over time?
So under the hood, I still think there's room for something robust, or I think the comment was maybe sophisticated, or, you know, a lot of data.

(22:45):
It should be there, but oftentimes it's: we created this new metric, so we should also put it on the dashboard for this end user.
And I think this view level, what the lens is... the lens, I would think, over time should be maybe fewer metrics, but very specific to what's gonna make the end user's job easier.

Michael Hartmann (23:05):
Do you find... one of the things I see is, you said, did the campaign perform well or not?
Right.
And it feels like it's a little bit of a loaded question.
And I see people go, what is the one metric that tells me that?
And my take on this is there's usually a basket of metrics, but it comes in two flavors, right?

(23:26):
One is absolutes, yeah.
Say it's for a webinar: how many people opened my email or invites?
How many registered, how many showed up, et cetera, right?
But also, did I have a goal for those, right?
So that's one.
The other is: how did it perform compared to others that are comparable, with the same audience, same type of thing,

(23:47):
same channels?
And I don't see a lot of people thinking about it that way.
And there's not just one answer, right?
If you had a webinar where you said the most important thing is, if we got five of the right people who then moved on, that's great.
But if we got 500 people, and none of them were any of those five people, then was it a success?

(24:07):
I don't think people think about it that way.

Misha Salkinder (24:10):
Yeah, yeah, that's exactly it.
I like the earlier comment about: can you do something about this?
And if you can't, then maybe don't include this in your models.
Like, I often see demo campaigns having really, really high attribution percentages or numbers.

(24:31):
Well, yeah, because every single sale you have is gonna go through this.
What's the action?
The action: oh, we should do more demos.
Well, I'd like more demos.
Yeah, yeah, exactly, exactly.

Michael Hartmann (24:43):
Yeah, right.

Misha Salkinder (24:44):
So maybe a different question would be: okay, should we not use the metric of dollars necessarily?
But what happens before demos?
What if we look at the period of time in buyer journeys before that point in time?
What happened there?
And what campaigns did we run that we can try to replicate?
Is it physical events, or is it, you know, webinars, or is it more emails?

(25:05):
And that question can have an actionable answer, potentially.

Michael Hartmann (25:11):
Yeah, interesting.
So this kind of brings me to... it feels like there's an opportunity, or maybe a challenge, maybe it's both, for people to go back to fundamentals and basics a little bit.
But my strong feeling is that there's still a gap: we're

(25:32):
drowning in data.
One of you said that earlier, right?
But at the same time, we don't have a lot of people.
If I think about marketing ops specifically, but marketing in general, I don't think there are a lot of people who have the skills or capability to even have a conversation like we're having, right?
Let alone take it and adapt it to their organization.

(25:53):
So, in one of our conversations, you guys talked about mentorship or things like that.
How do we help people in the space get better at doing this, thinking about it, and communicating it with their teams?

Nadia Davis (26:08):
I can take this one, and then, Misha, I would love your commentary on what I have to say.
I think what I see in a lot of cases, and, you know, regardless of what size of organization you are, whether you have a marketing operations team that sits separately, that would be the case there, right?
You have campaign people, demand gen people, performance

(26:29):
marketing people on one side, revenue marketing people on one side, right?
They are the closest to what is happening before the numbers start showing.
So they have the context, right?
Audiences, you know, intent, seasonality, the nuances, the creative, like they have all of that.
The marketing operations people do not have this context.

(26:51):
They're just asked, in a lot of cases, in a layman's way: can you pull me the numbers of, I don't know, opportunities created in the third quarter.
So that's all they know.
That's what the ticket says, right?
So you just go and do exactly what the ticket says, literally, present that data to the other side, they look at it, and

(27:11):
that's not what they wanted at all.
They maybe wanted to see, you know, the touches that happened over the third quarter to opportunities that got opened over, you know, the first and second quarter, but they did not articulate that because they don't see the data.
They don't understand the data structure.
They don't live in the world of, you know, rows and

(27:32):
columns like you would if you were dealing with a CRM or a database, right?
So there's this disconnect.
And, you know, one thing that comes to mind, for us specifically, it's relevant: one of the recent releases that CaliberMind did is the built-to-scale architecture.
And the built-to-scale architecture was ideated as a framework to prevent exactly that.

(27:53):
This disconnect of someone taking a ticket and executing on the ticket, delivering it, with all the time that elapses between the two, just to get the question back and rework.
And then another question, and another question.
I think marketing operations gets inundated with: can you do this?
Can you rework that?
Can you put a filter on that?
And, you know, that's what we're addressing with the built-to-scale

(28:15):
architecture, where everything is modular, where you can get together, which people should do more often, look at the dashboard and say, what questions do we want to ask?
And the marketing operations person puts these graphs and charts and dashboards out of the library of templates where the data is already connected, the data is already modeled.
They don't have to go and spend hours trying to bridge this gap

(28:37):
of pulling the data together before they can even do anything with the ask flying in from the demand gen person, right?
All of it is done.
So you start looking at it together in real time, and people's eyes really illuminate when they see how all this changes.
They get inspired, they have more questions, and you deliver the final product way quicker, with people getting exactly what

(28:57):
they want, because they had the context and they brought it to the person who has the knowledge of putting the data together, and you give them the connective tissue, which is the tool that allows you to model all of this in real time and put it together and visualize it, so that both parties kind of leave happy.
I think that's what it is.
It's three things: context, technical skill, and,

(29:20):
you know, the tooling, the platform that allows people to do that in real time without too much, you know, working behind the scenes.
So that's kind of my take.
And a little bit of a shameless plug.

Misha Salkinder (29:32):
Yeah, yeah.
I will just say that maybe this repeats a little bit of my sentiment from earlier, but I would just highly recommend going to the end user, you know, your campaign manager or whoever leads your physical events, and saying: what would help you make better decisions?
Like, you tell me.

(29:53):
Because I just find that a lot of this happens because: oh, I have access to the data, and they don't even know that I have access to this, so I'm just gonna illuminate everything.
And so a very basic example of this, a little bit in the context of CaliberMind: you know, we used to have these engagement metrics.

(30:14):
Your account would have a score of 97.5. Sales reps: what do I do with this?
And what they really wanted is, they wanted to know in a split second that this account is better.
If I were to turn on my computer today, help me create my order of operations.
That's really what they want.
So they know that this account is probably the one you should

(30:36):
call first, before this account, and the other thing they want is context, and show me in a very easy way, a very easy to understand way: what should I talk to them about?
If I were to remind them of who we are, how do I do this in the easiest way?
So, you know, yes, it might take digging.
I'm not sure they'll give you the clearest answer right away, but through, I think, a brief conversation, you could

(30:59):
probably get to this notion of what would make your day more efficient, right?
For you to make better decisions: what types of invitations would work for your next in-person event, for example?
That might be more useful than knowing attribution for a specific event, potentially.

Michael Hartmann (31:18):
Yeah.
So two questions.
One is pretty specific to what you just talked about, which is, you brought up a score thing, right?
Whatever, zero to a hundred or something, and, you know, people get stuck on, what's the difference between 97.5 and 94, right?
And a person we had on as a guest said that, really, what helps get away from that is doing something relatively

(31:41):
simple, right?
Bronze, silver, gold, right?
And just break it out, make it simpler so they're not caught up in the raw number.
Does that sound about right?

Misha Salkinder (31:52):
Yeah, yeah, absolutely.
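
As a small illustration of that tiering idea, here is a minimal sketch that buckets a raw 0-100 engagement score into bronze/silver/gold labels. The thresholds, account names, and scores are hypothetical, not any vendor's actual scoring.

```python
# Hypothetical tiering of a raw 0-100 engagement score into simple labels,
# so end users compare "gold vs. silver" instead of 97.5 vs. 94.
# Threshold values are illustrative assumptions, not real cutoffs.
def tier(score: float) -> str:
    if score >= 90:
        return "gold"
    if score >= 70:
        return "silver"
    if score >= 40:
        return "bronze"
    return "unranked"

accounts = {"acme": 97.5, "globex": 94.0, "initech": 62.0}
for name, score in accounts.items():
    print(f"{name}: {tier(score)}")  # acme and globex both read as "gold"
```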

Michael Hartmann (31:53):
Yeah.
So the other thing that came to mind here, and this is maybe personal, maybe I'm hearing what I want to hear a little bit: you keep coming back to asking specific questions, or working collaboratively the way, Nadia, you talked about, to come up with metrics.
But it feels like what I hear more often than not is: we

(32:14):
need a dashboard, right?
The dashboard's not really well defined.
And my pushback generally, when I get asked to build a dashboard, is: let's start with a handful of specific things.
And over time, if we build a dashboard, we build a dashboard, but maybe we don't really need a dashboard.
So do you have a preference, right?

(32:34):
To start with a dashboard in mind of everything we want, or would you rather start with: let's tackle the individual kinds of questions that we want to answer first?

Misha Salkinder (32:43):
Well, you know, a dashboard... I'm afraid, or maybe it's what I hear when I hear dashboards: I hear, we need a dashboard that will answer everything for as many people as possible.
And I would rather say, hey, Michael, for, I don't know, the next event, what types of things will help you make a decision for your promotions for the next event, or something along these lines?

(33:05):
Those questions will become more specific.
It does mean that, in the information on this dashboard, you might filter out a lot of the noise.
And there might just be even less information, but the information will be very, very specific.
So when I say yes to dashboards, or yes to metrics, it's

(33:26):
also about being as specific and as easy to understand for the viewer of that dashboard as possible, and it does probably mean making assumptions there, and excluding elements, even though they exist in your universe of data.

Nadia Davis (33:45):
I actually have a comment on that.
You know how we all talk about democratizing the data?
Democratizing is a good thing, I mean, democracy is a good thing, you know, democratizing data is a good thing, but, and Misha nailed it, when you think about irresponsible democratization, that means you open your data to people who don't really know what to do

(34:08):
with it, but feel compelled to take action because they have access to it.
And everybody has imagination, some people more than others.
Some people have, you know, a quick trigger finger, so they want to pull it and go do something, right?
So it's almost like giving people less, but what matters, that's where, you know, the gold of it all is.
So it's like the Goldilocks of reporting.

(34:29):
To Misha's point, if you have your campaigns manager and this person needs to see this, this, and that, but nothing more, give them that, because that creates balance on the team.
They don't start getting into things that belong to someone else.
They don't start making assumptions, so that person now has to defend their stuff.
It's very important to keep those guardrails so that

(34:49):
you have that level of, you know, I pay attention to what matters for my role to move the needle for the business, right?
And, you know, another kind of metaphorical example came to mind from what you guys were talking about.
So my husband has been in the world of intralogistics engineering his entire life.
They service conveyor lines, they service robotic

(35:11):
equipment, you know, forklifts, anything that you have in an Amazon warehouse, a Williams-Sonoma warehouse, those large consumer packaged goods warehouses, right?
So sometimes they would get a service call, and the person who is on the ground, the warehouse manager, goes, I need you to do X, Y, Z to this robot.
He is not qualified to make that statement, but he thinks he

(35:33):
knows, right?
So, with his best intentions, he tells him what to do.
My husband would come home and he would be so aggravated.
I just wish he would tell me what it does.
He doesn't need to tell me how to do our job.
I know what needs to happen.
Just tell me what it does.
To Misha's point, right?
Explain to me what you're observing so I can make a

(35:53):
recommendation from my expertise about what it is that you need.
But I know it takes a little bit for people to arrive there.
All things are done out of good intentions, but sometimes good intentions pave their own road to hell.

Michael Hartmann (36:04):
Yeah.
That's funny.
Yeah, so it's interesting, Misha, the way you described that.
One of the things I believe also is there's this huge volume of data now and all kinds of things we can do.
This is part of why I resist the urge to build dashboards, because, to your point, right?
The idea behind a lot of them is: answer all the questions for everybody.

(36:25):
And I think there are certain metrics and reporting that are appropriate for different audiences, right?
So I don't think I would ever want... I don't say never, but I think it'd be less important to show executives here's all the email opens and click-throughs and web activity.
I still think there are people who should be

(36:46):
paying attention to that.
Sure.
But it's not that group.
And I think what happens is that we get the right metrics to the wrong people sometimes, and I think then it causes more confusion.

Misha Salkinder (36:57):
So, yeah, you know, having curated, clean, normalized, standardized data... if you were to ask me, you know, what, for CaliberMind or in this universe of analytics, is the most important thing?
It's that.
Because by having data that you can go to, you have these very clean, you know, shelves with it all

(37:20):
existing, and then it's easier to actually answer or curate, via models or dashboards or however, answers to different questions.
But I think what happens a lot of the time, and we see this happen a lot, is: we need to prove value, and we need the model that's gonna give it to us.
And so this model: you know, for this touch, on the third time

(37:44):
this happens, and not on a Sunday, give credit, but otherwise give it to this one.
You get to such a level of over-engineering because of this pull, again, the question of incentives, this pull of: no, we need to show that what we do matters.
And you think at this level, and you really forget

(38:04):
the campaign manager that just wants to do incremental betterment of their campaign.
They just want to make sure the next campaign they do is slightly better than the last one, maybe a little bit more of the right audience, but you almost don't enable them, because what you're thinking about is, well, what is our attribution versus that?

(38:25):
And maybe the right thing to do is a really simple model, whether attribution or otherwise, that focuses on that type of campaign and only on those touches.
And is it doing better over time?
Right.

Michael Hartmann (38:37):
So this brings up another point that I run into a lot, which is, I think what drives this idea for dashboards or more data is that there are a lot of people, especially if they're actually using it for decision making, right?
They want more and more data.

(38:58):
They're uncomfortable with either what they feel is incomplete or missing data that's maybe hard to get, or... and I think the nature of this, like, I like your point about clean data, but I think the reality is there's a level of effort required for sales and marketing data in particular, in complex B2B, and it's just not

(39:19):
clean.
I don't know anybody who thinks their data is great.
So if your assumption is it's not going to be perfect, right?
You're now dealing with a level of maybe incomplete data and data that you don't totally trust, right?
Or your confidence level is, I don't know, pick a range.
How do you get people past that desire to have complete and

(39:41):
accurate data when the level of effort to get from where they are to that may not be worth it?

Misha Salkinder (39:51):
I think the best analysts we work with know that it's a journey.
Again, it's not about getting to perfection quickly, it's about just having a better data set this quarter than you had last quarter.
You know, maybe you change a level of granularity.
So, for example, job titles in a B2B environment can be a very messy entity.
There are just too many variations of them.

(40:13):
So you might say, well, I want to bucket them in some way so that it becomes more meaningful for analytics.
There might be steps in how you bucket.
You can do something more crude, like keyword bucketing, or you can do something much more robust, like introducing AI.
There are steps.
You can take this one step and then say, you know what, I

(40:36):
think next quarter I'm gonna want to revisit this.
It doesn't mean that this is futile; it might not be as accurate per se, but it could still be very useful.
And I think the best customers, you know, our best customers in general, people I work with in the industry who I think are really good about this, understand that perfection won't exist.

(40:58):
It's about trying to make the data always be useful.
And, yeah, a big part of this is the curation of data.
And the other part of it is speaking to probably more end users and seeing whether we've even considered the questions that will help make their day more efficient.
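
To make the cruder first step concrete, here is a minimal keyword-bucketing sketch of the kind Misha mentions for messy job titles. The bucket names and keyword lists are hypothetical examples, not a recommended taxonomy.

```python
# Hypothetical keyword bucketing of messy B2B job titles into coarse buckets,
# the "crude" step before anything more robust or AI-driven.
# Bucket names and keywords are illustrative assumptions; first match wins,
# so the dictionary order encodes priority.
BUCKETS = {
    "marketing": ["marketing", "demand gen", "growth", "brand"],
    "operations": ["operations", "ops", "revops"],
    "data": ["analytics", "data", "bi", "insights"],
    "executive": ["chief", "vp", "vice president", "president", "founder"],
}

def bucket_title(title: str) -> str:
    t = title.lower()
    for bucket, keywords in BUCKETS.items():
        if any(k in t for k in keywords):
            return bucket
    return "other"  # unmatched titles show where to refine next quarter

titles = ["VP of Marketing", "Sr. RevOps Analyst", "Head of Data & Insights"]
print({t: bucket_title(t) for t in titles})
```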

Nadia Davis (41:17):
I actually have a counter-argument to this one, kind of being empathetic to the world of business and seeing different sides of it.
What I see on the analyst side in a lot of cases, and this may sound harsh, I don't mean it that way, is this luxury of time to wait and see that incremental

(41:40):
improvement, right?
The same luxury is not extended to the other side of the house: performance marketers, demand gen marketers, sales, who live and die by the quarter.

Michael Hartmann (41:52):
Yes.

Nadia Davis (41:52):
So in this specific scenario, this desire to bring in more data stems from the notion that we're spinning our wheels, going 100 miles an hour over here.
We did all of this stuff, and once it trickles through the pipelines, we don't see the output.
What if we could have more?
What if something else could be feeding into this?

(42:13):
What if we're missing out?
Or what if there's some kind of other tool that would surface whatever we're missing?
So we could take credit for more.
So we could articulate that all this stuff that we did till seven o'clock on Friday night for four weeks in a row actually pans out to be something.
So, I mean, I don't have an answer to this one necessarily, but I am mindful of the difference in the pace of roles and

(42:35):
different goals and sense of urgency.
And they say that pressure cuts diamonds, but it's really hard being under pressure.

Michael Hartmann (42:44):
Yeah, it is.
All right, so we're kind of running up against our time here.
Maybe let's... we've talked about a lot of different things and some ideas here.
Maybe each of you take a shot.
If you were to give our audience, marketing ops in particular, maybe one nugget, like, go back

(43:05):
and this is the thing you can do to help move the needle on your ability to be effective with reporting, analytics, attribution, pick whatever, what would that be?

Nadia Davis (43:18):
I can get us started, and, Misha, that'll give you time to think.
The one thing I would say: talk to your demand gen people more, get the context of what they're doing.
Because then, when they start asking you questions, at least you will have the background against which they're operating.
So when you start bringing them data and reports and start

(43:39):
being that translator or interpreter of what the dashboards mean, you'll be able to tie that to what actually happened on the ground.
And the more you talk to them, the better.
Don't be the ticket-taker, submit-an-Asana-task, and-we'll-never-talk-again type of person, because you will see that incremental improvement in the quality of your output, and

(43:59):
probably less rework too.

Michael Hartmann (44:00):
Yeah.

Misha Salkinder (44:03):
Yeah.
And for me, it would probably be something along the lines of familiarity with the data, or being so comfortable, I guess, with the underlying or the raw data that exists, or that you have access to, that you can connect the dots between questions and what's available.
Because I think that's the best thing you can do for your organization.

(44:24):
There are many flavors of questions, of course, but knowing what's available, and then also, I know it seems silly, but: validate it.
You know, we always try to look at things in aggregate, and, oh yeah, these numbers kind of look like they make sense to me in aggregate, but drilling into a specific journey or a

(44:47):
specific opportunity: do I have the right touches?
Does this make sense?
That can go such a long way in building confidence, you know, and understanding what can be answered with the data.
So, I guess, getting the hands dirty in terms of what's available, so that you can then be a better resource for your business.

Michael Hartmann (45:08):
It feels like those would... the one that I would say, I think, also is: if you're concerned about your data quality, don't wait to start reporting.
I think, to your point, Misha, earlier, if your goal is to continue to improve, right?
If you wait till it's, quote, good enough, right, you'll never get to it, because it'll always be flawed.

(45:29):
And so I always say, start reporting on it.
That way you can expose the issues, and you can then fix them, and it can be a flywheel effect.
And I will say, yeah, I tell people all the time, go spend time, sit down, virtual or in person, right?
Spend time with those people you're working with, understand

(45:49):
what they're doing and what drives them.
Because you both said it multiple times, right?
People's incentives are gonna drive the kinds of questions they ask and their behavior.
And if you don't understand that, then it's gonna be a challenge no matter what you do.
Any final thoughts before we wrap up here?

Nadia Davis (46:12):
It's 2026 around the corner.
Everybody's trying to... I know, it's scary.
Everybody's trying to come up with their metrics and their frameworks and their goals.
It's gonna be a fun season.

Michael Hartmann (46:24):
Yeah.
Someone asked me to do something, and I still need to do it, a little quick video predicting 2026, and I'm so hesitant about doing something like that, because I just don't want to look back a year later and go, I was so off.
That all will happen.
But everyone needs to do it.

(46:46):
Well, hey, it was so much fun, y'all, Nadia, Misha.
Glad to do it.
I'm glad we were able to make this work.
I know it was a little bit of a journey for us, but if folks want to keep the conversation going and learn more about what y'all are up to, what CaliberMind's up to, what's the best way for them to do that?

Nadia Davis (47:02):
Probably LinkedIn.
Yeah, I'm on LinkedIn all the time.
Happy to do that.
Yes, you are.

Michael Hartmann (47:08):
Yes, you are.

Misha Salkinder (47:08):
Yeah, happy to answer.
Happy to answer any data questions, just brainstorm.
I'm always happy to do it.
I like to geek out on this stuff all the time.
So, CaliberMind or otherwise, LinkedIn is great.

Michael Hartmann (47:20):
Fantastic.
Well, again, thank you so much.
Appreciate it.
Thanks to our longtime and new supporters.
We appreciate that.
And as always, if you have ideas for topics or guests, or want to be a guest, you can reach out to Naomi, Mike, or me.
We'd be happy to talk to you about it.
Until next time.
Bye, everybody.