
January 30, 2025 • 40 mins


Unlock the secrets to transforming your marketing strategies with the power of AI, featuring insights from Luke Crickmore of Algomarketing. Learn how artificial intelligence is not just a tool but a game-changer that elevates data analysis and campaign strategy to new heights. Luke shares his expertise on automating the mundane, freeing up time for creativity, and crafting more personalized marketing experiences that cut through the noise. Discover how AI empowers marketers to concentrate on what truly matters: interpreting data to make informed decisions and fostering innovation in their campaigns.

Imagine a world where entire marketing campaigns are nearly complete with AI-driven insights, needing only the human touch to refine and finalize. That's the future we discuss, exploring AI's role in generating dynamic hypotheses for campaigns, moving beyond basic multivariate tests, and ensuring brand alignment through fine-tuned models. We delve into the crucial topics of intellectual property and data security, explaining how AI can enhance content creation while safeguarding brands' unique voices. Through our conversation, we paint a vision of a more efficient and effective marketing process, driven by AI yet validated by human expertise.

Episode Brought to You By MO Pros 
The #1 Community for Marketing Operations Professionals

Support the show


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Hello everyone, welcome to another episode of OpsCast, brought to you by MarketingOps.com, powered by all the MO Pros out there. I'm your host, Michael Hartman, once again flying solo. We're looking forward to Naomi and Mike rejoining, hopefully before the end of 2024, but it may be 2025 when we get that going again. Joining me today is Luke Crickmore from Algomarketing.

(00:22):
Luke is the Marketing Technology practice lead there at Algomarketing, and we are going to discuss what the future of martech will look like with AI as the backdrop, so really excited about that. We've had fewer conversations about that with our guests than I would have thought, anyway. Prior to joining Algomarketing, Luke spent the bulk of his career in several roles at another technology or consulting company with a focus on marketing automation and

(00:46):
marketing and sales technology.

Speaker 2 (00:49):
Luke, thanks for joining me today.

Speaker 1 (00:53):
Yeah, so if you can't tell, Luke is not from the US, that means I'm probably very brief.

Speaker 2 (01:00):
Don't like to talk about myself.

Speaker 1 (01:05):
We'll have to draw that out of you. That's okay. No, so, okay, let's get started, because I always feel like I should know more about this whole AI revolution than I do. Someone was even posting on LinkedIn recently asking people whether they're using AI for recording and capturing notes for meetings

(01:28):
and things like that, and I'm like, oh, I've wanted to, but I just haven't. So I'm not opposed to it, I'm just a slow adopter, I guess.
All right.
So when you and I talked before, though, I think one of the things that we were pretty aligned on was, you know, the impact that AI can have on, dare I say, revolutionizing the way

(01:50):
that we do analysis, especially in the marketing and sales context. So, you know, my belief today is that there are two limiting factors, the primary one just being that it takes time and effort to do analysis, for any person, even if

(02:11):
they are skilled, to be able to do analysis and do it well and identify interesting stuff that might be useful for an organization. So I'm curious: do you have a similar belief, or am I off my rocker?

Speaker 2 (02:28):
Personally, I think the thing that AI unlocks more than anything is time and expertise. It really gives people that don't necessarily have that expertise the ability to go beyond what they can do today, but it also unlocks a lot of time. So, if we're talking purely about data analysis, AI can

(02:49):
obviously find trends for you in that data and get you one step ahead of where you were if you didn't have it, maybe two or three steps ahead, actually, now thinking about it, and then give you the ability to ask more questions, to upskill yourself, to be able to go deeper into data.

(03:10):
And I think the other thing that it unlocks is really the opportunity to do more. Like, if you think about a standard marketing campaign that we might run today, not that many people are running many tests on those marketing campaigns. If they are, they're generally quite simple tests that they're

(03:31):
running day to day. What I think AI is going to do is enable people to do way more of that in an automated way, with lots and lots of control over how they do it and deliver it, and then AI can give them the skills to potentially be able to report on those different tests that they run. So, yeah, I think AI is going to really, really change

(03:55):
people's day to day.

Speaker 1 (03:59):
Yeah, I think the idea of AI, and I'm going to paraphrase what you said, but, like, changing the nature of what you do when you're doing analysis, is really what I expect out of it. It's not so much that there won't be more analysis, but doing the heavy lifting of pulling the data together, doing

(04:21):
the analysis, is going to leave more time to do additional analysis, because that's typically what happens, right? You do one set of analysis, you get some outcome, subsequent questions come up. But it also allows you to really spend more time assessing what came out, right? I mean, the numbers are not all that interesting by themselves.

(04:44):
They're more interesting in the context of: what does that mean? What can we do with it?

Speaker 2 (04:47):
Yeah, exactly right.
I think if people are using AI right now to try and do analysis on their data sets, they're probably not going to have the best outcome if they aren't considering what they put into the AI.

(05:08):
What I'm trying to say is, if you purely go to ChatGPT or Gemini, you upload an Excel sheet and you say, give me the trends in this spreadsheet, it'll give you some really broad-level trends. But really, what you need to be doing as a marketeer is saying what you want it to produce for you.
So you already go into it knowing that there's an output

(05:30):
that you need to produce. The other thing that you need to be able to do is feed it with the right information to be able to do that. So if you're just chucking in this massive spreadsheet with tons of information that it doesn't need, it's still going to try and use that information in its output, or it's going to consider that information as output. So if you are doing analysis, really try and streamline that data so that it's really specific to the

(05:54):
task at hand, and then leverage the AI to get you to that next stage. So ask questions of the AI, ask for other things that it might try and analyze in this data set, because that again just gives you a bit more creativity. You know, yes, you went into looking at this data in a certain way, but maybe

(06:16):
then there's a reason for you to want to ask different questions of that data, and that then leads to additional questions that it can then help you answer. So, yeah, I'm in complete agreement. Really, I think, great opportunity with analytics and AI.
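A minimal sketch of the focused-prompt approach described here, assuming a pandas summary of a campaign export and the OpenAI chat API; the file name, column names, metrics and model name are placeholders rather than anything mentioned in the episode:

```python
# Summarize the raw export first, then send only the small, task-specific table
# to the model with an explicit ask. All columns and the model name are
# illustrative assumptions.
import pandas as pd
from openai import OpenAI  # assumes the OpenAI Python SDK v1+

raw = pd.read_csv("campaign_export.csv")  # hypothetical raw export

summary = (
    raw.groupby("campaign_type")
       .agg(sends=("email_id", "count"),
            open_rate=("opened", "mean"),
            click_rate=("clicked", "mean"))
       .round(3)
)

prompt = (
    "You are analyzing B2B email campaign performance.\n"
    "Using ONLY the aggregated table below, identify the two most important "
    "trends and suggest one follow-up question worth investigating.\n\n"
    + summary.to_string()
)

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model; named only as an example
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```

The point is the shape of the call: a pre-aggregated table plus a specific output request, rather than the whole spreadsheet.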

Speaker 1 (06:32):
Yeah, at some point I want to come back to this idea of limiting the input data, because, maybe it's again going back to my lack of understanding, but I thought one of the benefits would be that you could chuck in a bunch of data and let it kind of do its own thing. But I think you hit on another thing too, or maybe I

(06:55):
was hearing what I wanted to hear, which is, to me, the other sort of limiting factor today, and I still think it's an important one even with AI doing maybe the heavy lifting, is that from what I can see there are still not a lot of people, particularly in marketing, marketing operations and revenue operations, who have really good... I'm trying to think

(07:21):
of a way to phrase it. Well, I'll just say it this way: they don't really understand statistics or analytics in a deep enough way. Maybe they know what questions to ask, but they don't know how to analyze the output. And part of that comes from my own experience in leading teams

(07:41):
and coaching people, particularly when I've coached people on how to present data. One of the things I see a lot of people do is they get asked to generate a report, they generate the report, and there's an obvious anomaly, right, something that doesn't match the pattern, and they're not prepared to answer the question that is going to come up when they put it in front of somebody who's less

(08:02):
familiar. Because, like, what does that mean? And it can undermine everything in terms of the belief that you know what you're doing, right. So do you see the same kind of gap in skills and knowledge, those fundamental things that I think will still be important even if you've got AI, because you

(08:23):
still need to interpret?

Speaker 2 (08:24):
I can relate to that because I also struggle with my analytic skills, like trying to bring things together, trying to be able to analyze large data sets. I struggle with it, but it will become easier. And one of the ways that it becomes easier: we develop applications for our clients that enable

(08:45):
them to do things like understand data and create narratives of that data using AI. What we do in that is we create these really bespoke models that do a lot of that analysis on behalf of the marketeer. They will understand the concepts that they need to put

(09:09):
into the data, they will understand how it needs to analyze the data, and then we will basically tell it what to look for and the way then to present that back to the marketeer. And one of the things that I think a lot of AI isn't doing at the moment, but that will become more regular, is the explainability.

(09:31):
So when AI is making decisions, when AI is showing results to you, a lot of the time you won't necessarily know if those results are right, because you haven't been the person that's done the work to get there. But AI will start to show you how it came to those results, and from that I think there is going to be an upskilling as well in people's understanding of

(09:52):
how these types of analytics are happening, and it will benefit the marketing ops community in the long run. But I think in the short term, absolutely, there is a requirement still on more data science or data analyst roles to help you get to the point where you need to get to.

(10:16):
Interesting.

Speaker 1 (10:17):
So, one of the things I guess I've noticed the few times I've used some of these platforms, and I think, you know, it's like seeing my nieces and nephews: I see them every once in a while, so when they change it's very obvious to me, when it may not be to the parents. I'm not in and out of AI platforms, say ChatGPT, and I've started

(10:47):
noticing that when you put in a question and it comes back with stuff, it will now provide more references, like, here's where I got this input from. Do you think that same kind of concept will come in terms of analyzing data, where it's like, this is how I came to this conclusion? I don't know what that means putting in, whatever.

Speaker 2 (11:06):
Absolutely. Everything that we produce that has an AI element always has an explainability layer, which tells the end user what we did and why we did it to get to the result that we're showing you, especially when it's a recommendation. So if we do some analysis and then we make a recommendation, say, your best performing campaign is X, Y and Z, this particular data set, say, is oversaturated, we think you

(11:27):
should build this type of campaign, then we will have an explainability element behind it that will say we've made all these decisions because of X, Y and Z reason, and then show the underlying data that got us to that point. And on some things we've even made it for the data analyst: we've written SQL queries that they can then run on the data to

(11:48):
be able to do some of their own data analysis. We're at early stages, really, in a lot of this tooling, and a lot of the audiences that we work with are the data analyst types and the marketing operations types that are a bit more technical at this stage, because they're obviously more willing to jump in and try these new things, and in that

(12:09):
we're creating tooling specifically for them to be able to do some bigger, better analysis.
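As a rough illustration of the explainability layer described here, not Algomarketing's actual data model, a recommendation could travel together with its reasons and a verification query; every field name and value below is hypothetical:

```python
# Hypothetical structure for an "explained" recommendation: the action, the
# plain-language reasons, and a SQL query an analyst can run to check the
# underlying data. Field names and values are invented for illustration.
from dataclasses import dataclass

@dataclass
class ExplainedRecommendation:
    recommendation: str      # what the marketer is told to do
    reasons: list[str]       # "we made this decision because of X, Y and Z"
    supporting_sql: str      # query an analyst can run against the source data
    confidence: float        # how strongly the data supports the recommendation

rec = ExplainedRecommendation(
    recommendation="Build a late-stage nurture campaign for the EMEA segment.",
    reasons=[
        "Open rates for this segment have fallen three sends in a row (saturation).",
        "Late-stage opportunities in EMEA have stalled longer than the global median.",
    ],
    supporting_sql=(
        "SELECT campaign_id, AVG(open_rate) AS avg_open_rate "
        "FROM email_performance WHERE region = 'EMEA' "
        "GROUP BY campaign_id ORDER BY avg_open_rate;"
    ),
    confidence=0.72,
)

print(rec.recommendation)
for reason in rec.reasons:
    print("-", reason)
```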

Speaker 1 (12:21):
Interesting. So, yeah, I've become, I think, more convinced that AI is going to have more of an impact on analysis than it will on content, even though the hype, I think, was centered around content initially.

(12:41):
I think we are on the same page generally about the ability of AI to really help with the analysis of data. You went so far as to say that at some point AI is going to, and maybe this is what you're getting to with a recommendation, right, or maybe not a recommendation, but that AI could even develop

(13:02):
hypotheses, right? So one of the things, and I've actually been talking about this on other episodes recently, is applying the scientific method. If you think of tests in that mode, one of the first steps is to define a hypothesis, and then you do your analysis and test to see if the hypothesis holds true or not. But you think it's going to enable us to

(13:25):
actually have it generate hypotheses? Like, how would that work?

Speaker 2 (13:29):
Just stepping back to your previous comment about analysis and content, I think there is huge value to be had from both sides. One of the things that is probably less exciting about content is that we're just doing more of the same. Like, if you think about any marketing use case, personalization,

(13:50):
sentiment analysis, competitor analysis, any sort of market research that marketing teams might do, what AI is now enabling us to do is way, way more of the same, and that unlocks new use cases for us, where we are able to do really, truly hyper-personalized content. We are able to launch more campaigns than we've ever been

(14:11):
able to before. We are able to analyze when people are potentially oversaturated, or when they want their emails to be received in their inbox. All of this enables us to do way, way more of the same. What I think, and sort of leading on to this hypothesis statement, what I think we will be able to do with AI, and actually what

(14:34):
we're already testing in our labs team, is this idea of generating hypotheses for marketing campaigns. If you imagine, at the moment, multivariate tests are often happening, say, subject-line based, maybe hero-image based, and you test that one thing at that one time.

(14:55):
What AI will enable us to do is actually write a hypothesis that showing someone a certain image at a certain time, and then them landing on a landing page and seeing a different image, or the same image with some different content, or with a different call to action for somebody else, is more effective than a different variant of that. So, rather than us just testing one thing and seeing if that's

(15:18):
more beneficial, AI will enable us to test a number of things over time automatically, and then we'll be able to see the value of that. So it will be creating its own hypotheses about your personas, about the data you have, about the behavior they're seeing, and starting to react to that and provide best-in-class experiences at all times for all people.

(15:39):
So, yeah, I think one of the major benefits for marketeers is actually going to be seeing how effective their content is being, and AI enabling them to have way more effective content than they have today.
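A hedged sketch of what machine-generated test hypotheses might look like in practice, assuming the OpenAI chat API and a made-up persona summary; the episode does not describe a specific implementation:

```python
# Ask a model for structured hypotheses instead of free text, so each one can be
# routed into a test plan. The persona summary, schema and model name are all
# illustrative assumptions.
import json
from openai import OpenAI

persona_summary = (
    "Persona: operations managers at mid-market SaaS firms. "
    "Observed behavior: opens peak Tuesday mornings; hero-image emails underperform "
    "plain text; landing-page conversion drops when page copy differs from the email."
)

prompt = (
    "Return a JSON object with a key 'hypotheses' containing three marketing test "
    "hypotheses. Each item needs: 'hypothesis', 'variant_a', 'variant_b', 'metric', "
    "'expected_winner'.\n\n" + persona_summary
)

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",                      # example model; JSON mode support varies
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},
)
hypotheses = json.loads(resp.choices[0].message.content)["hypotheses"]
print(json.dumps(hypotheses, indent=2))
```

Each generated hypothesis can then be queued as its own variant in whatever testing framework the team already uses.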

Speaker 1 (15:58):
So, OK, I'm still not 100 percent convinced about the content piece, especially today, but maybe go a little further. One of the concerns I would have with content, and why I think the ChatGPT stuff was overhyped, is that ChatGPT has

(16:21):
access to enormous amounts of content to then generate the next letter, next word, whatever. Whereas if you're concerned about IP or whatever, right, you don't want to put your stuff into that model, so you might want something that's limited, which now limits the ability of the

(16:42):
engine to really come up with stuff that's new and creative. Because I think what you're describing is a little bit of a combination of content somehow being a part of the analysis as well, so maybe go a little further on what that means.

Speaker 2 (17:01):
On models, just to make people not worry if they're already doing some stuff: unless you're using the free version of ChatGPT, and for any of these sort of automation use cases I've been talking about, you'll need to use the ChatGPT API, or the Vertex API through Google, or Amazon, all of that stuff is not trained on your data. So you can

(17:22):
trust that they're not learning from your data, they're not storing anything about your data. But one of the ways that we can be super impactful with this content, when we're talking about generating content, is to create our own fine-tuned models, so our own models trained on our own content. So think of the use case where, imagine, you've got

(17:44):
Marketo. With Marketo, you can use the API to pull out all of the data about an email, including the HTML. You could then train a model on the behavior data versus the HTML data and the values that are in it, to see which is better. And this isn't necessarily new to AI, right? This isn't using the generative element of AI, this is just

(18:06):
machine learning. But what we are able to do now is generate content based on that best-in-class content that you currently have, and then layer on top what I was talking about with testing. You're then not only creating the best version of the content you have today, you are continually creating better versions of that content as you continue to test, leveraging AI.

(18:28):
So, to be honest, I really do see one of the quickest wins here being the ability to create content that is really focused on your brand, which will enable you to really quickly create new types of content written in

(18:49):
your brand voice, with your message, with your products at heart, just leveraging the data you already have sat in your marketing automation platform and then marrying that up with the behavior data. I think there's a really, really quick win for a lot of businesses there.
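A rough sketch of the Marketo idea: pull email assets over the REST API, join them with engagement data, and fit a simple model relating content characteristics to click performance. The endpoint paths are written from memory of the Marketo REST API, and the field names, join keys and features are assumptions; verify against current documentation before relying on any of it:

```python
# Join email content pulled from Marketo with behavior data, then fit a toy
# model relating content characteristics to click performance. Everything here
# (endpoints, columns, threshold) is illustrative.
import requests
import pandas as pd
from sklearn.linear_model import LogisticRegression

BASE = "https://<munchkin-id>.mktorest.com"  # placeholder instance URL

def get_token(client_id: str, client_secret: str) -> str:
    r = requests.get(f"{BASE}/identity/oauth/token",
                     params={"grant_type": "client_credentials",
                             "client_id": client_id,
                             "client_secret": client_secret})
    return r.json()["access_token"]

def list_emails(token: str) -> pd.DataFrame:
    r = requests.get(f"{BASE}/rest/asset/v1/emails.json",
                     params={"access_token": token, "maxReturn": 200})
    return pd.DataFrame(r.json().get("result", []))

emails = list_emails(get_token("CLIENT_ID", "CLIENT_SECRET"))
# The subject field may come back nested depending on API version; flatten defensively.
emails["subject"] = emails["subject"].apply(
    lambda s: s.get("value") if isinstance(s, dict) else s)

behavior = pd.read_csv("behavior.csv")        # hypothetical: email_id, sends, clicks
df = emails.merge(behavior, left_on="id", right_on="email_id")

# Toy content features: subject length and whether the subject asks a question.
df["subject_len"] = df["subject"].str.len()
df["has_question"] = df["subject"].str.contains(r"\?", regex=True).astype(int)
df["clicked_well"] = (df["clicks"] / df["sends"]) > 0.02   # arbitrary threshold

model = LogisticRegression().fit(df[["subject_len", "has_question"]], df["clicked_well"])
print(dict(zip(["subject_len", "has_question"], model.coef_[0])))
```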

Speaker 1 (19:08):
So you're matching up the actual source email content, HTML, subject line, maybe audience characteristics, along with the performance of said email, and maybe, if you can go a step further, right, conversion, if there's a landing page, something like that, and you're sort of marrying those two. So I guess, theoretically, you could say, oh, I'm looking at the

(19:31):
performance and I'm looking at the characteristics of the email. Maybe it is the specific content, but it could also be the structure, the elements that are used, using different fonts or things like that. I mean, is it getting to that level of stuff?

Speaker 2 (19:49):
To be honest, I doubt all of these things would. They would have a priority, and I don't think we would really be considering fonts as a priority, but we would consider, say, the type of message. And one of the ways that we would leverage LLMs and generative AI in this is to tag all of the emails that we have

(20:10):
into ones that we expect to have higher conversion. For example, if it's an autoresponder that's saying download this, click this button, we expect that to have a really high conversion, and therefore it gets tagged in that way, whereas a nurture email would be tagged as nurture. For the highest performing nurture emails, we would then start looking at the characteristics that are

(20:30):
driving that and then using that to train our model on what it should try and produce in the future. And then the vision is, the marketing team go into that tool, they click generate me a campaign based on some recommendations that they've been provided through the analysis another AI agent has performed for them, and then

(20:51):
they've already got content that's 90% of the way there. Then they're the human in the middle that's checking and validating before this goes out to the end user, and you can imagine just how that would scale. You could really scale that. We're building some things like this in our labs at the moment and, yeah, it's proving to be really effective.

(21:11):
Marketing teams have so much data. It's unbelievable how much data marketing teams have, how much metadata they capture, how much data is stored in marketing automation, in Salesforce, in GA. That is just ripe for building this automation on top of.
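A minimal sketch of the tagging step mentioned here, using an LLM to label each email by type so performance is compared within the right peer group; the tags, columns and model name are placeholders:

```python
# Label each email (autoresponder, nurture, ...) with an LLM, then surface the
# top converters within each tag as candidates for the brand-tuned generation
# model. Column names and the "top 5 per tag" cut are illustrative.
import pandas as pd
from openai import OpenAI

client = OpenAI()
TAGS = ["autoresponder", "nurture", "promotional", "event"]

def tag_email(subject: str, body_text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model only
        messages=[{"role": "user", "content": (
            f"Classify this marketing email as one of {TAGS}. "
            f"Reply with the single tag only.\n\nSubject: {subject}\n\n{body_text[:1000]}")}],
    )
    return resp.choices[0].message.content.strip().lower()

emails = pd.read_csv("emails_with_behavior.csv")   # hypothetical export
emails["tag"] = [tag_email(s, b) for s, b in zip(emails["subject"], emails["body"])]
emails["conversion_rate"] = emails["conversions"] / emails["sends"]

top_per_tag = (emails.sort_values("conversion_rate", ascending=False)
                      .groupby("tag").head(5))
print(top_per_tag[["tag", "subject", "conversion_rate"]])
```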

Speaker 1 (21:32):
Sure, yeah. I mean, that's part of why I've believed for a while, and I get the free versions of ChatGPT and the risks associated with that, but this is part of why I thought that even if you have a limited amount of content, right, you're still going to have a ton of data.

(21:52):
If you have it in a tool that is isolated to your data, it could learn from that. That's part of why I believe that. You mentioned along in there, I think you said 90% done, right, on this stuff. I think there's an implication there, then, that there's still a human element of

(22:19):
helping to build a model or train a model. What does that look like?

Speaker 2 (22:23):
Everything that we build uses human reinforcement learning, which is where humans are in the process determining what good looks like, and there is so much that we can understand from the data. So if it's behavior data, imagine we're talking about email still, we can obviously see which ones

(22:45):
were looking good and which ones weren't looking good from the behavior data. But when we create the content that's related to that, we still need a human to tell us if that reads well, if it makes sense, if it's related to the thing that we're sending out, and we use that information, through feedback, to fine-tune the model again to determine what good looks like for

(23:07):
whatever it is that we're doing, whether it's creating insights using generative BI, whether it's building content, whether it's automating some process via an agent. There's usually always a human involved in that process for validation. Over time, that human will get involved less and less, because you're obviously able to train out some of the things that the

(23:30):
AI is doing. But up front, there will be this intensive process where humans are used in almost every stage to validate that everything that's happening is the right thing.
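As a simple illustration of the human-in-the-loop step, not a description of Algomarketing's pipeline, reviewer decisions on each draft can be logged as preference data for a later tuning round; the file format and fields are assumptions:

```python
# Capture approve/edit/reject decisions on AI drafts into a JSONL file that a
# later fine-tuning or re-ranking round can consume. File format and fields are
# illustrative assumptions.
import json
from pathlib import Path

FEEDBACK_LOG = Path("feedback.jsonl")

def review(draft: str) -> dict:
    print("\n--- AI draft ---\n" + draft)
    decision = input("approve / edit / reject? ").strip().lower()
    final = input("Paste the corrected copy: ") if decision == "edit" else draft
    record = {"draft": draft, "decision": decision, "final": final}
    with FEEDBACK_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Approved and edited pairs become preferred examples for the next tuning round;
# rejections become negative examples.
review("Subject: Unlock faster reporting\nHi {{first_name}}, here is this week's...")
```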

Speaker 1 (23:51):
Sure. I mean, it's a little bit like how the deduplication platforms that I've seen work, right, where you generate a... So, right, again, this is another one where the nature of a marketer or content person changes over time based on this. But is there a risk in that human element, a risk of biases driving the

(24:18):
model or the engine towards things that are less productive? How do you account for that?

Speaker 2 (24:25):
I would say there is already a risk of bias in everything that happens today, and actually one of the really good things about AI is that you can unbias it. There are a few different ways that you can unbias models. Initially, they're already unbiased because they're purely looking at data. When you start to tune them and train them, with the people that are

(24:49):
submitting the feedback, you need a wide enough range of people submitting feedback that you can determine where there are outliers amongst those humans that are potentially biasing the model. Obviously, there is a risk where all of the humans that are involved might be biased one way or another. That's where the testing comes in, right, so

(25:10):
you don't only rely on human feedback. You're still trying to test, as you were saying, around productivity, for example. You're still trying to test, even if you didn't have that feedback, occasionally, what the output would look like, and then using that feedback in your model to continue to train it going forward.

(25:31):
I'm not a statistics person, so that's probably the best I can represent it.

Speaker 1 (25:42):
No, and I barely am. So when it generates this content, can it also produce a predicted outcome of, I said productivity, but that's not really what I meant, I really meant whatever metric it is you're trying to affect, right, or set of metrics? Can you also generate expected results

(26:06):
of a given variant?

Speaker 2 (26:07):
Yes. Expected results of a given variant, yes, by leveraging simulations. So with one of the apps that we built, our generative BI tool, this enables us to look at data in different ways.

(26:31):
At the very top level we have target data and attainment data, so how close someone is to achieving that target. We then have funnel health data underneath that, and campaign data underneath that. By looking at the target and attainment data, we can tell the AI where to look next. Then it can go and look at funnel health and it can, say, determine if opportunities are moving through the funnel, if there's one massive opportunity,

(26:52):
if there is a stall at some point in the funnel, if sales are potentially not following up with certain leads. And then we can look at campaign data and we can determine if some campaigns are performing well or not, which campaigns are producing the most conversions, and we use that data to then

(27:15):
marry those two areas together. So we can say, for this particular region there is a stall in the funnel at late stage, you should be building campaigns that are about late-stage conversions for that particular marketing team, so we can make these recommendations. Then, by running simulations based on other regions that are doing late-stage campaigns, we can run simulations for this region to determine what kind of impact that might have on the

(27:38):
conversions that go through for the next phase. So it's like the fourth level of insight, really, that we create. If you consider, the first level is that top level, the second level is looking at new layers of data, the third level is joining the data together, and then the fourth level is making the recommendation. Based on simulations, we can show people really accurately, based on the data that they have, what

(28:01):
impact doing these marketing actions would likely have on potential pipeline, and obviously for most marketing teams that is what matters. At the end of the day, it's that pipeline.
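A condensed sketch of the four levels of insight just described, ending with a naive simulation that borrows conversion rates from comparable regions; the data frames, columns and uplift logic are invented for illustration:

```python
# Level 1: target/attainment, Level 2: funnel health, Level 3: join with campaign
# performance, Level 4: recommendation plus a naive simulated uplift.
import pandas as pd

targets  = pd.read_csv("target_attainment.csv")   # region, target, attained
funnel   = pd.read_csv("funnel_health.csv")       # region, stage, days_stalled
campaign = pd.read_csv("campaign_perf.csv")       # region, campaign_type, conv_rate

# Level 1: which regions are furthest from target?
targets["gap"] = 1 - targets["attained"] / targets["target"]
worst_regions = targets.sort_values("gap", ascending=False).head(3)["region"]

# Level 2: where in the funnel are those regions most stalled?
stalls = (funnel[funnel["region"].isin(worst_regions)]
          .sort_values("days_stalled", ascending=False))

# Level 3: which campaign types convert best elsewhere?
benchmark = campaign.groupby("campaign_type")["conv_rate"].mean().sort_values(ascending=False)

# Level 4: recommend, then simulate assuming the region reaches the benchmark rate.
for _, row in stalls.head(1).iterrows():
    region = row["region"]
    best_type = benchmark.index[0]
    current = campaign.loc[campaign["region"] == region, "conv_rate"].mean()
    uplift = benchmark.iloc[0] - current
    print(f"{region}: stalled at stage '{row['stage']}'; "
          f"recommend a '{best_type}' campaign; "
          f"simulated conversion uplift of roughly {uplift:.1%}")
```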

Speaker 1 (28:16):
Well, yeah, it should. But you used the term generative BI somewhere in there. I mean, is what you outlined sort of the process? Is that what that means? Or, like, what is that? I'm not sure what that means.

Speaker 2 (28:31):
So generative BI is a more broad term. It's using large language models to help you interrogate data and make sense of data. I guess there are a few different ways of doing it, but when we build generative BI platforms, we make them hyper-focused on specific use cases.

(28:52):
And the reason why we do that, coming back to the top of the call where you mentioned the idea of loading in lots of data or using them for analysis, the reason why we make them hyper-focused on specific use cases is because LLMs don't really perform very well when you upload a huge

(29:15):
amount of data. Google Gemini will let you upload 2 million tokens, which is quite a reasonable amount of data, but the more data that you load in, the more likely it is to hallucinate or not give you the right result. And it does make sense, because it's now having to consider a huge scope of data to answer your

(29:39):
very streamlined, specific question. So we provide it with as little data as we can to get the output that we want. If we're doing data analysis, we provide it aggregated tables, summarized tables, and we then provide a huge amount of context about what this data means and what kind of questions the AI should be asking on this data to create the insights

(30:03):
that we need to create, and we do that at every level that I was talking about. So we do that at the top level, at our target level, let's say. Then we move down to the funnel level and we're asking it questions like, are there any particular regions with huge opportunities? Which opportunities are at risk? Which ones haven't moved in the last six months?

(30:27):
We ask it all of this to give it a picture. We then start prioritizing the results that it's producing before giving the insight back to the user. So I guess in that use case, what we do is a huge amount of analysis, and then we give the user back three sentences of the most important thing that they should care about right now.

(30:48):
And one of the great things that we can do with generative BI is we can make that output as verbose as we want, so it could be three, five, ten paragraphs, all with rich information for that end user, or we could do what we do with this particular use case and stick it straight into a PowerPoint deck, which then gets presented to

(31:08):
the leadership team.
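A sketch of the "three sentences straight into a deck" output step, assuming a small aggregated table, the OpenAI chat API and python-pptx; the table, context text and model name are placeholders:

```python
# Send a compact, pre-aggregated table plus explicit context to the model,
# constrain the answer length, and drop the result onto a slide. All data and
# the model name are illustrative.
import pandas as pd
from openai import OpenAI
from pptx import Presentation

summary = pd.DataFrame({
    "region": ["EMEA", "NA", "APAC"],
    "target_attainment": [0.62, 0.91, 0.78],
    "stalled_late_stage_opps": [34, 9, 17],
})

context = (
    "target_attainment is the share of the quarterly pipeline target already booked; "
    "stalled_late_stage_opps counts opportunities with no activity in 30+ days. "
    "Answer in exactly three sentences for a marketing leadership audience."
)

client = OpenAI()
insight = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user",
               "content": f"{context}\n\n{summary.to_string(index=False)}"}],
).choices[0].message.content

deck = Presentation()
slide = deck.slides.add_slide(deck.slide_layouts[1])   # title + content layout
slide.shapes.title.text = "Pipeline insight (auto-generated)"
slide.placeholders[1].text = insight
deck.save("weekly_insights.pptx")
```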

Speaker 1 (31:14):
So, just curious. I think I probably conflate this a little bit: when I hear BI or business intelligence, I often think about visualization in addition to analysis. So are you able to do visualization stuff as well?

Speaker 2 (31:32):
Yes and no. Out of the box, no. It is quite hard to get AI to produce really impactful charts that go beyond what marketing teams normally have today. But there are two different things that we're doing at the

(31:55):
moment. Actually, in our labs team, we're testing this as well. One of the things we're doing is providing people the ability, on their existing dashboards, to ingest the data that's relevant to the thing that they want insights from, hit a button, and then it will go through our insights engine and generate insights on that dashboard, or even without

(32:16):
hitting a button, when it refreshes. Obviously, that will just generate insights related to the thing that they're looking at, things that they should care about, the so-what factor. We only want to show them things that make them think so what, or give them a reason to think so what.

(32:38):
The other thing that we're doing is we're looking at integrating with Looker. Looker has a model in it called LookML. That model lets us abstract the data from the underlying SQL queries that you would need to run. So we can just ask the AI, go and look at this data, consider this, consider that, and then it will produce us a dashboard alongside insights, which is where we're trying to take our

(33:00):
generative functionality. But a lot of businesses already have really good dashboards. At least a lot of the businesses that we work with have best-in-class dashboards. So really, what they want is the additional layer on top, which is just providing them that real-time insight, deep into the data that they need, where they're already looking, where they're already working.

(33:24):
That's another thing with AI, really: you should build it there, yeah.
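A hedged sketch of the Looker idea: run an inline query against a LookML explore with the Looker SDK and hand the rows to an LLM for the narrative layer. The LookML model, explore and field names are invented, and the SDK calls are written from memory; check them against the looker_sdk documentation before using:

```python
# Query a LookML explore, then ask a model for the "so what" on top of the rows.
# Everything named here (LookML model, explore, fields, LLM model) is assumed.
import json
import looker_sdk
from looker_sdk import models40
from openai import OpenAI

sdk = looker_sdk.init40()  # reads credentials from looker.ini or environment variables

query = models40.WriteQuery(
    model="marketing",                        # hypothetical LookML model
    view="campaign_performance",              # hypothetical explore
    fields=["campaign_performance.region",
            "campaign_performance.conversion_rate"],
    limit="500",
)
rows = json.loads(sdk.run_inline_query(result_format="json", body=query))

client = OpenAI()
insight = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[{"role": "user", "content":
               "Summarize the so-what of this campaign data in two sentences:\n"
               + json.dumps(rows[:200])}],
).choices[0].message.content
print(insight)
```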

Speaker 1 (33:31):
Right, it's like one of those typical complicated analyses that gets done one time because an executive asked the question, and now you can do that in a more automated way, right? Yeah, so I like that. I'm still struggling with this, I think, because of, again, my own bias, right, the belief that you can

(33:52):
throw a bunch of data at this and it's going to comb through it and find things that you wouldn't have otherwise found. I'm still unconvinced that you can't do that. I'm not saying that's what you should start with. But maybe you have examples of where you tried throwing gobs of data at it and it came out with something that was nonsensical

(34:14):
or something like that. But you kind of walked through a little bit of a process, you know, top level, then go a little deeper, then a little deeper. If I understand it right, was that something you learned over time through working with your clients at Algomarketing? Or how did you get to the point where you said, well, really, what we need to do is start with a smaller, I'll call it a

(34:36):
smaller question, I don't mean necessarily smaller, but a smaller data set? Asking more specific questions is maybe a better way of saying it, and then going from there.

Speaker 2 (34:46):
Obviously this is all very new. We are hiring, and we have hired, experts in particular domains, in data science, in AI, in machine learning, in prompt engineering, which helps us get to the point where we are now, which is having these tools working inside, you know, big

(35:08):
enterprise businesses. But what we have to do is be unbelievably agile. Even today, Gemini 2.0 has come out and we're already testing how Gemini 2.0 would potentially impact some of our models. We have worked in an extremely agile way. We have had to build, test and refine the outputs that we've

(35:39):
been able to produce. We have gone through many, many different stages. One of the things that's really, really hard, actually, when you're building this type of generative intelligence, is the data validation. So when AI has said something, how do you validate that it is actually correct? And a lot of that is quite a lot of hard hours and

(35:59):
effort to look into the data and determine whether it is correct or isn't correct, and if it wasn't correct, where did it go wrong? So we've just put in a lot of time and energy to build these tools, as well as a lot of expertise and smarts. We are experts in sales and marketing, so we understand how that works, and we've hired to be experts in AI, and we're really, really keen to drive sales and marketing AI forward.

Speaker 1 (36:23):
Yeah, that makes sense. When I've had responsibility, whether a broader responsibility or a more specific responsibility, for reporting and analytics as a marketing operations leader, and I've had the opportunity to try to hire somebody like a data scientist, I've struggled. There's no problem finding

(36:45):
people who have general data science experience and knowledge, but the domain of sales and marketing, I think, has been the hard part. I tell people all the time, I started my career doing database stuff in the financial world, right, where the data is pretty clean and has controls, and

(37:13):
when you move to sales and marketing data, especially in the B2B context, I think it becomes really messy really fast, because you don't have the controls. You have a lot of humans interacting with data and content who are not incentivized on the quality of it. So I think that's a big challenge.

Speaker 2 (37:32):
Yeah, data scientists come in with grand visions of what they want to do and then realize that the data isn't in any way clean, which then limits them. But yeah, absolutely, finding someone with domain knowledge and that sort of understanding, generally they're always a bit scrappier as well, so they're more willing to just try and get things done. Those have been the people that have definitely succeeded in

(37:54):
their teams.

Speaker 1 (37:58):
Yeah, no. I think it's really easy to throw your hands up and say, oh, this data's a mess and we can't do anything about it. I think the premise that the data is going to be, quote, right or, quote, clean is the first mistake you made, so assume the data is going to be a mess and dirty. But I think that also goes back to why it's important, whether you're doing your own analysis or using AI to do it, to

(38:21):
understand what comes out of it. I use the old term trust but verify, right. I think that's what I'm hearing, and maybe over time your trust goes up. Wow, my brain is on overload at this point now, Luke.

(38:44):
So this has been a great conversation. Thank you very much for sharing what you and Algomarketing are doing. If people want to follow up and hear more about what you're doing with your team or what Algomarketing is doing, what's the best way?

Speaker 2 (38:59):
You can visit algomarketing.com. You can find me on LinkedIn, Luke Crickmore. You can't necessarily find me anywhere else, because I'm British and I don't really like to share. But yeah, I'm really happy to have a conversation about anything martech, anything AI, anything technology. I used to be a developer, so I'll be really happy to talk about some proofs of concept and things, just geek

(39:24):
out on that.

Speaker 1 (39:25):
Yeah, so I can't say I'm British, but I am old, and so, you know, I limit what social media I'm on. So this has been a lot of fun, Luke, thank you so much. Thanks for letting me, you know, throw some hard questions at you today, so I appreciate it.

(39:45):
Thank you to our audience, as always, for supporting us and providing your feedback. If you have feedback or suggestions, or you want to be a guest, feel free to reach out to Naomi, Mike or me and we will be glad to talk to you. Until next time. Bye, everybody.