
October 29, 2024 65 mins

Judging by the number of inbound pitches we get from PR firms, AI is absolutely going to replace most of the work of the analyst sometime in the next few weeks. It’s just a matter of time until some startup gets enough market traction to make that happen (business tip: niche podcasts are likely not a productive path to market dominance, no matter what Claude from Marketing says). We’re skeptical. But that doesn’t mean we don’t think there are a lot of useful applications of generative AI for the analyst. We do! As Moe posited in this episode, one useful analogy is that learning to use generative AI effectively is like getting a marketer who has been living in an MTA world to use MMM effectively (it’s more nuanced and complicated). Our guest (NOT from a PR firm solicitation!), Martin Broadhurst, agreed: it’s dicey to fully embrace generative AI without some understanding of what it’s actually doing. Things got a little spicy, but no humans or AI were harmed in the making of the episode.

For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:05):
Welcome to the Analytics Power Hour. Analytics topics covered conversationally
and sometimes with explicit language. Hi everybody, welcome. It's the Analytics
Power Hour. This is episode 257. You know, since the Industrial Revolution,
it seems like the interest in automation is always around. And in the

(00:26):
analytics space, there's always a lot of interest here as well.
You know, that entails handing off parts of the work to a machine,
to increase efficiency. These days, AI is the newest entrant into this discussion.
How and what can we hand off to an AI when it comes
to analytics? Are they gonna take our jobs? Will it truly usher in

(00:47):
an era of data democratization? I don't know.
I guess we should talk about it. And to do that,
let me introduce my co hosts. Moe Kiss, how are you going?
I'm going great, thanks for having me, Michael. It's awesome. And Tim Wilson,
some would say you're already a computer. Your results are too perfect.

(01:07):
Now, how you doing, Tim? Ouch. I'm getting to where I'm a computer
when it comes to responding to a podcast pitch about... Pitches about generative
AI for analytics. There you go. That's a part of the job that's... They're
flowing in fast and furious and... Fairly automated. Reached out to Martin

(01:28):
'cause we're like, "How about we go with someone who we reached out
to instead of somebody who came in to us?"
Yeah, a lot of interest in this, and I'm Michael Helbling and we
did wanna bring on a guest who is at the forefront of this
issue and luckily at Marketing Analytics Summit this year we met Martin
Broadhurst. He's a consultant on AI for marketing, the owner of Broadhurst

(01:48):
Digital, and he serves on the editorial board of the Journal of Applied
Marketing Analytics, and today he is our guest. Welcome to the show Martin.
Hello Michael. Hello Moe. Hello Tim. All right. Well, we've got a lot
of questions. So buckle up and in the next hour or so,
hopefully we'll learn a lot about what AI can do for us in analytics,
or what it can't. I'm not gonna lie. I'm like weirdly scared of

(02:13):
this episode and it has been on my mind a lot.
What? Why? All right. Well, let's dig into that. Maybe this is just
a... Martin, what we need is a reassurance for all of us that
we'll still have jobs after this or something. I don't see anybody's job
going anywhere in a hurry, not to spoil what's to come. But yeah,
I think you're okay for the time being.

(02:35):
Yeah. Well, maybe Martin, to kick this whole thing off, we can talk
a little just about how you got into this area in the first
place and sort of some of the things you're seeing in the industry
right now. Yeah. So my background is in the CRM and marketing automation
space. This is where I've been working with businesses for years now.
And when OpenAI made the GPT-3 API available, I immediately started playing

(03:02):
around with it and experimenting with the different tools, seeing what the
capabilities were and understanding the mechanisms of how these large language
models actually worked to try to kind of push them to the limits.
And over time, I've just built up a lot more experience with that.
And yeah, this has turned into a nice general addition to my skill

(03:26):
set where I'm working with clients on how to automate and find use
cases for AI and generative AI in their workflows and in their day
to day tasks. And unsurprisingly, data analysis is something that comes
up quite a bit. So, yeah, I've been
trying to test the models as much as I can to see where

(03:50):
the limits are before they break. And this month I've just published an
article in... A journal about how to use large language models with spreadsheets
with a bunch of different techniques for how to think about using generative
AI alongside spreadsheet and spreadsheet design. I mean, it's a short article.

(04:11):
Don't you just say, "Here's the spreadsheet and find me insights,"
and then it just goes from there? I mean... That is the dream,
isn't it? Wouldn't it be great if that actually worked like the marketing
spiel? Well, okay, the fear, the fear I have at the moment is
actually not about losing my job because I see the amazing efficiencies

(04:36):
I even already have in my own job.
What is terrifying me at the moment is the,
"We wanna do AI." We had a conversation the other day,
"We wanna do GenAI on this thing." And I get really...
Let's just say anxious. Let's call a spade a spade. We're kind of
swapping the way we would normally solve a problem from what is the

(04:59):
problem? What are all the ways to solve it? What is potentially the
simplest, most explainable, whatever way to get there? Versus going,
"We're going to solve this problem with X. How do we do more
of X?" And that's the bit that's stressing me out.
Yeah. Finding the... Or prescribing AI first before you've even dug into

(05:22):
the potential solutions, starting with that and saying, "We... " And that's
actually one of the things that clients will sometimes say to me,
they'll just come to me and say, "We want to use AI."
It's like, "Well, why would you start with that as the solution before
you've looked at the implementation?" And I think this is
a really common problem. I would always start with, look at those tasks

(05:42):
that you do that have things that require certain amounts of batch work,
where there's just repetitive nature to the tasks and you can
automate that away. But yeah, really understanding the nature of the problem
is probably the starting point before you even get into what the solve is.

(06:05):
But is there... Generative AI seems tangible 'cause it's so easy for
somebody to play with it, whereas I would say there's a
higher bar for someone to just dabble with SQL or Python
coming from scratch. So it's broadened the audience of people who can
get a taste of what the technology is.

(06:29):
And to me where the massive miss is just because you get a
sense of what it does, you have a back and forth with ChatGPT, it
kind of misses what analysis is. And like, it feels like there's an
oversimplification of the steps of saying, "Oh, well, no, no, AI is just gonna

(06:51):
do the drudgery of the tasks." And you say,
"Well, the drudgery of my analysis work is doing this data cleanup. And
I've played with this ChatGPT. So what if I just told it to
do that." But it kind of misses what the, even in the drudgery
of the work, what the human component is,

(07:14):
much less just the reality of identifying a problem where you're trying
to use data to solve it, like it just... It feels like it's
this big bucket of like a tool and somehow people are like,
"Oh, well, the tool must be smart enough to get
to how to fix it." The fact that you said you've got the
spreadsheets thing feels like even that is nuance because you have to kind

(07:39):
of help it understand what a spreadsheet is, which maybe it kind of
knows and then sort of what the data within it represents, right? I think
what you're kind of driving at is that people don't understand
the tool and the nature of the tool and the kind of mechanism
behind the tool. I think it's really important that with generative AI people

(08:00):
understand things like next token prediction. What does that mean?
What is it doing under the hood? And
when you've played with the models a bit and you understand some of
the settings, things like temperature, for instance. So for anyone that
isn't aware of the temperature setting in a large language model,

(08:22):
there is a setting between zero and two, and the higher it is,
the more chaotic the answers you get. And the basic principle of temperature
is that it's like in physics: the higher the energy in
a system, the more chaotic it is, and the lower the temperature, the more controlled
it is. If you play around with that in the API,

(08:43):
for instance, you can get really consistent answers.
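For anyone who wants to see that effect firsthand, here is a minimal sketch, assuming the OpenAI Python SDK with an API key in the environment (the model name is just illustrative), that sends the same prompt at a low and a high temperature:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = "Suggest one name for a marketing analytics dashboard."

# Same prompt at two temperature settings: near 0 is close to deterministic,
# while values toward the 2.0 maximum get increasingly varied.
for temperature in (0.0, 1.8):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    print(f"temperature={temperature}: {response.choices[0].message.content}")

Run that a few times and the temperature=0.0 answers should barely move, while the temperature=1.8 answers wander.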
But where you use something like ChatGPT, you don't have access to that
particular setting. So it's generative, right? It's not
descriptive or calculating. It's coming up with a range of answers and the
subtleties in the way that you write, the way that you input,

(09:05):
the way that the data might be structured, whatever it may be.
And if people think it's like computer software that they've always used
in the past where you press a button and it always gives you
this thing, it does this job consistently in the same way every time,
they will be sadly mistaken because that's not what's going on under the
hood. It's time to step away from the show for a quick word

(09:29):
about Piwik PRO. Tim, tell us about it. Well, Piwik PRO has really
exploded in popularity and keeps adding new functionality. They sure have.
They've got an easy to use interface, a full set of features with
capabilities like custom reports, enhanced e commerce tracking, and a customer
data platform. We love running Piwik PRO's free plan on the podcast website,

(09:52):
but they also have a paid plan that adds scale and some additional
features. Yeah, head over to PIWIK.pro and check them out for yourself.
You can get started with their free plan. That's PIWIK.pro. And now let's
get back to the show. Oh. I've just had this,

(10:13):
I don't know if this analogy makes sense Tim but hear me out,
I constantly am doing this thing in my head
where I'm trying to understand stakeholders' perspectives and understanding
an MMM, and how it's different from their worldview of attribution and what
attribution gave, which when you see a table and it goes:

(10:37):
This channel this many sign ups, this channel this... Like the concept of
MMM results is quite difficult. We start talking about diminishing return
curves. We start talking about return on ad spend at different spend levels.
And like, there's just all this like complexity there. And I feel like
a similar analogy could be made here, right? Like, you expect input output,

(11:00):
but there's actually so much nuance. Like, is that a... Is that like...
I don't know if I'm grasping at straws here, but in my mind,
I was like, this would be the problem of people trying to
do data analysis using GenAI without understanding it well enough. That
would get you into the danger territory, right?

(11:23):
I think that works on both... At two levels. There's the not understanding
the GenAI mechanism well enough, so not really understanding
the strengths and the weaknesses of the tool. Which is going to be
a hindrance in and of itself. But then there's also that level of...
People often say that if you use a large language model and you

(11:46):
are an expert, you can get expert level outputs from it.
The better the quality of your input, the better the quality of the
outputs. But if I, as someone who isn't a
seasoned data analyst, throw in a spreadsheet and say, "Give me some insight
into this." I'm asking bad questions and I'm getting very average outputs.
So it works on both ends of the spectrum. If you're not giving

(12:09):
good context and good prompts, you're going to get bad outputs.
But also if you don't understand the limits of the technology itself,
you might just... Well you don't know that it can't actually do the
thing you're asking it to do. That's... Cassie Kozyrkov last month wrote
a post that was very timely as we were prepping for this episode.

(12:29):
It was 'Strawberry's Paradox: When Perfect Answers Aren't Enough'. And she
sat with some Nobel Prize winner who she worked with when she was
working on her PhD. And they have like just a riff of a
conversation for a while. But what she... I thought she very, very well
articulated it when she said, "Imagine the AI that can give the

(12:50):
perfect answer, that it is perfectly accurate and correct if you,"
just as you said, Martin, "If you don't ask it a good question,
it's," you know, it is going to be like,
"What's the answer to life, the universe, and everything?" "It's 42."
Right? It's not a good question. And that's this other piece that has kind

(13:12):
of bothered me that it feels like we're looking... The people who are
looking at AI all have lived in a world without generative AI.
So we're bringing our human experience, having worked with data, having
dealt with the business problems, having grappled with trying to explain

(13:32):
multi touch attribution versus MMM. And that's the lens we're looking through
it at and saying, "Oh, here's the future. It's gonna take
everything." Well, if you fast forward and say, "Wait,
that's discounting the expert level of the input." So even if that worked
for a very, very short, for a period of time,

(13:55):
that would start to go away because all of a sudden you'd have
people who were trying to skip a bunch of steps of the human
existence to get to the AI and hoping that the AI can close
that gap, which seems very... I don't know if that's just like philosophical
or it seems like, "No, that's what would happen. It's we're counting on

(14:17):
the tool to close a gap that doesn't seem like the tool is
ever gonna be equipped to fully close." I don't know if that made
any sense. Moe, I really like your analogy. Or we can just cut
this whole section out and... I mean, where do you, with a spreadsheet,

(14:38):
where are you using... What's the start and end point of generative AI
when given a spreadsheet? So I think some context has to be given
there in that these models are changing rapidly. It was only a few
weeks ago that we had GPT, or ChatGPT, o1-preview released,

(15:00):
which is supposed to be, you know, much better at reasoning,
although that's its own conversation in and of itself.
The models' capabilities are changing all of the time. So in... What I
propose is that there are, as it stands, four ways that you can
really use ChatGPT or any large language model with a spreadsheet.
And one is to... And my preferred route is to just use it

(15:24):
as a coach or a mentor. It's that
very clever assistant that you're not actually giving access to the data,
but you are... You get stuck on something. Maybe you need a bit
of code writing that you can stick in a macro, or you've forgotten
the function to do a certain thing, or you've got a really long
formula that you need optimizing and reducing. It will do all of that

(15:46):
for you. And the actual spreadsheet and
the language model don't interact. This is where AI is very strong at
the moment. It can be quite good for that. There are over 500 functions
in Excel. Trying to keep all of those in your head is very
difficult. Whereas if you've got that very smart assistant next to you,

(16:07):
it can go, "Oh yeah, I know exactly what that is."
Then you've got the file ingestion. This is where you can give the
spreadsheet to the model. So you can upload to ChatGPT the CSV, the
Excel file, whatever it may be, and it can
use Python in its code environment to execute tasks and functions on the

(16:29):
data. The outputs from this can be very good. It can do some
incredibly powerful things, but there comes a big
flashing light warning sign saying the outputs can also be complete hallucinations.
I have got lots of examples. In fact, nearly every single time I
do this, the data that it presents back has some errors in it

(16:52):
that if you're not paying attention, you would not spot. So,
case in point, from the Marketing Analytics Summit,
I showed an example where we had a bar chart showing cohorts grouped
by age. And there were two bars, satisfied or unsatisfied.
And it was just, which one was higher? A blue and an orange

(17:15):
bar. And it... In the written text... So in the charts that it creates,
they are accurate. It seems that the data manipulation and the charts that
it creates are accurate. But then its description of the charts,
its written description of them, is wrong. Like consistently, it would say,
"You can see that for the 35 to 50 year old cohort,

(17:38):
satisfied is higher than dissatisfied," and it's clearly the other way around.
And this is really consistent. This comes up time and again.
So you wouldn't want to rely on it for uncovering the insights.
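One cheap guardrail against exactly this failure mode is to recompute the number behind any written claim before repeating it. A minimal pandas sketch, with a hypothetical file name and hypothetical column names purely for illustration:

import pandas as pd

# Hypothetical file and column names -- substitute whatever you uploaded to the chatbot.
df = pd.read_csv("survey_results.csv")

# Recompute the numbers behind the model's written claim, e.g.
# "for the 35 to 50 year old cohort, satisfied is higher than dissatisfied."
cohort = df[df["age_cohort"] == "35-50"]
counts = cohort["satisfaction"].value_counts()
print(counts)

# Only repeat the model's summary if the raw numbers actually back it up.
if counts.get("satisfied", 0) <= counts.get("dissatisfied", 0):
    print("The written summary contradicts the data -- don't trust it.")

A check like this takes seconds and catches precisely the chart-says-one-thing, text-says-another mismatch described above.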
Because the... Do you know why... Like, I know that me asking why
is a stupid question right now. Like we don't get to look inside
the black box, but like, that's a really strange error, like really strange

(18:04):
that it would be able to interpret it correctly in the graph.
But then... Is it like something to do with converting it to the
graph and then the graph back to the descriptive text? Or like, is
that the step too far? Like I just... How do you know...
You don't know where the boundaries are. So the graph is separate from
the... From what... The model doesn't see the graph. The model runs the

(18:26):
Python and then takes the... And turns the Python script into something
that sits in the HTML in the browser window, but the actual model
doesn't see the output. Because the model has turned everything into tokens,
where you've got a graph that has, or it's done the...
It's used Python, it's got some numbers attributed to the different cohorts

(18:49):
and positive and negative, also satisfied or dissatisfied. They're just
token IDs for the model. So it's not like...
The system doesn't see the raw number. It sees the tokenized version of
the number and then has to, in its model, understand the relationship between
these... This is my best guess, right? So I'm

(19:10):
making some assumptions here. I would like to see, particularly within
the chatbot version of these tools, I would like it where it creates
the graph and then turns the graph into an image and feeds that
back in. Because the funny thing is, if I take a screenshot of
that graph and feed it back into ChatGPT and say, "Tell me what's

(19:31):
going on with this data," it consistently does a very good job of
that because it's got the vision capabilities. That is nuts. Sort of like
the second order of thinking is where it starts to fall apart.
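For anyone who wants to reproduce that screenshot workaround outside the chat interface, here is a rough sketch of the round trip Martin describes, assuming the OpenAI Python SDK (the file name and model name are just placeholders):

import base64
from openai import OpenAI

client = OpenAI()

# Hypothetical file name: a screenshot of the chart the chatbot produced.
with open("chart_screenshot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Tell me what's going on with the data in this chart."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)

Because the image goes through the model's vision pathway rather than being reconstructed from tokens, the description tends to match what the chart actually shows, which is the behavior described here.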
But what, so that's the second... So that was like number two,
I think of four, like the, like to what end it ingests it

(19:51):
and outputs a result. And maybe that's going to get
better with the added reasoning as more models come along.
Is it going to be easy for somebody to just wave their hands
and say, "Oh, well, you're the second one," it'll ingest it and it'll output
results. And the results will be very, very reliable. It can count the number

(20:11):
of Rs in strawberry and it will always give the right answer.
So is that an easy one to kind of check off and say,
"Yeah, that'll get fixed," or. I would expect so, but we don't know
at the moment. With o1 doesn't have... You can't do file uploads. You
can't upload images. It's just text in text out.
You would expect that to improve. The next method is actually using

(20:35):
the assistants within the spreadsheet software itself. So Microsoft Copilot
by way of example. This is a really
difficult one to judge because the version that I wrote the paper on
was the previous version. And then literally I think that the day the
publisher signed it off, they announced wave two of Copilot, which has new

(21:00):
capabilities. So the new version, which I haven't yet tested is supposed
to be able to actually write and execute Python on new spreadsheets and
do more. It can actually interact with more of the tools and the functions
because the old version could do that, but would often say

(21:20):
that it had done a task and it hadn't done a task,
or it would tell you that it couldn't do a task because there
was too much data. Whereas I think those limitations on wave two have
been lessened somewhat. So that's really, if we think about where we...
What the ideal is, I think this is the ideal. You want the
chatbot in the environment where you're working with that data

(21:41):
and it's able to actually execute almost agentically different functions,
tools, tasks, directly within the file itself. So the, okay, the lay person's
version of this, rather than going to something separate, having to kind
of ingest the data, yada, yada, yada, it's built into it.

(22:04):
And the difference is not only can it, like, work as
a "helper", or whatever some smart marketing person called it.
It can also actually execute functions on your behalf. So it can do
the doing, not just give you steps on how to do the doing.
Yeah. And the first version of Copilot in Excel was supposed to be

(22:25):
able to do some of the doing, but it did it wrong really
more often than you would ever want in a tool. It felt like it
was released a little bit too early, which, you know, fair enough they're
iterating on these things really quickly, but yes, it should... And I think
the more important thing with that is actually it can write and execute
Python within the environment, which just adds a lot more

(22:48):
capability to Excel. I'm really curious from like a product perspective,
because that, what you're talking about here basically implies that unless
you were truly embedding this technology into the product roadmap in a really
meaningful way, you will probably fall behind in

(23:10):
any kind of tech company, which I hadn't really thought about.
Yeah. Okay. I'm having lots of light bulbs. Maybe I should do more
recordings in the evening. But so I'm trying to figure out the
limits of that. And this is also realizing that,
again, the slew of pitches we're getting for guests on the show,

(23:31):
like the term Gen BI, like, "Oh, Gen AI is going to bring
Gen BI," which I'm trying to figure out of these three categories,
like, where does it go from I'm a user of Excel,
which means I'm a human being on the planet. And
I've got a tool that gives me a little bit more of a

(23:52):
natural language interface at kind of a micro level to go
bite sized along the leap where that gets... Where I'm not sure if
that's included is, "Oh, well, you're just going to have a natural language
interface to ask how much revenue did we get by
channel last month?" That feels like more dangerous territory than saying,

(24:15):
"Hey, can you extract... " put a filter in so it flags anything that's
within the US as US and everything that's rest of world,
rest of world, which is a more specific instruction.
Is that a spectrum, or is there a hard line where you're crossing
from a Copilot to my hope for wished for

(24:40):
natural language interface to the data that is reliable? I think that's
where Microsoft would like Copilot for Power BI to get to.
I don't have any experience with that, particularly with this new wave of
updates that are coming or have recently been announced.

(25:01):
What I can say is that people that were using...
That Power BI power users that were really interested in Copilot stress
tested it at the start of the year. And
they described it as, one description said, "It's not ready for CEO level
insights and presentation of data at the moment. It's

(25:24):
quite simple. If there are several steps of manipulation of the data that
you need to do in order to get the insight that you're after,
it falls down. It doesn't understand at the moment relationships between
different entities in your data set." So how are you seeing companies use
this, or like analysts use it in their workflow?

(25:46):
Kind of like, I know we've talked a little bit about the spreadsheets,
but if you take the CEO example of
amazing boss lady comes to you and says, "Sales are down,
what's happening?" And you go through that analyst workflow of solving the
problem. Like, do you have kind of any intuition how people are really
leveraging this in their day to day? The file ingestion, if you can

(26:11):
get your data sources into ChatGPT, you can get, with the right prompting,
really good insights really quickly. It can bring together multiple data
sets. It can merge them, and it can, if you are very good
at being able to describe your data and what you're after,
it can give you those graphs and those charts.

(26:33):
How much people are doing that day to day? I am...
I don't see that a great deal. When I speak to people
the most common experience I have is people going, "It didn't quite do it
for me." Like, "It told me something was wrong." So there's an element
of doubt that is seeded in people's minds. And this is the thing.

(26:55):
I think people are so used to using a spreadsheet, a calculator,
something that gives numbers in numbers out, that makes sense and is always
true. Where you have a tool that you use it 10 times and
2 times you go, "That's not right." It plants a seed of doubt
in your mind. So I think until the hallucinations issue is cracked,

(27:22):
we're not quite going to get there. Everything feels, particularly on the
data analysis side, I would say you can get surface level
insights, or you can get visualizations created very quickly. You can do
data manipulation very quickly. If you're someone that doesn't know R and
doesn't already know how to manipulate the data, you can do that.

(27:45):
It gives you those additional skill sets or access to those kinds of
skills in a limited capacity. But how much people are
using this in the day to day, I would
dare say it's more as an assistant to
help them shortcut some code writing functions rather than really relying
on it for insight. So, what is the fourth way? I feel like

(28:08):
I want to dive back into, and I'm not sure whether I'm hitting
a gap or whether I'm hitting a... Or just know that enough of
our listeners will be like, "He said four, he said four." So, and
I want to break that tension, so. Yeah. I did say four. This
show, much like an AI, gets lost along the way.

(28:28):
There is a fourth and the fourth is actually
less useful for analysts in some respect, but it's actually adding an entirely
new function to the spreadsheet itself. So a good example of this is
Anthropic's Claude model has a Claude for Sheets add on. So it's a
Google Sheets add on and it creates a new function, equals Claude, and

(28:50):
then equals Claude open bracket, and then you can put your prompt in
there. And then the return of that prompt is what populates that cell. So
that means that you can assemble prompts using data input from other cells.
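To picture it concretely: with the add-on installed, a cell formula might look something like =CLAUDE("Classify this survey comment as satisfied or dissatisfied: " & A2), and the model's reply lands in that cell. (The function name comes from Anthropic's Claude for Sheets add-on; the prompt and cell reference here are made up purely for illustration.)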
And just like you would any other formula, you can build a formula
and then send that to Claude and get Claude's response straight back into

(29:13):
the spreadsheet. Okay. So then now I've got,
well, so one, I think one thing, and maybe it falls in the
second kind of the file ingestion. It seems like there is a lot
of using generative AI for analytics and it winds up, it's really using
generative AI for analytics engineering or for data engineering or for data

(29:38):
observability or for... So there does seem like there's a whole class of
tools that are either kind of pipeline building assistance or data monitoring,
which to me, that's not the analysis, that's the upstream. And it seems
like, just my gut, that 60 or 70 percent of things that

(29:59):
get labeled as generative AI for the analysts are really generative AI for
the data engineer or the analytics engineer; would you
agree with that? I mean, are you seeing those where that's getting labeled
as for the analysts, but it's not for analysis and that's causing

(30:20):
maybe some confusion in the market? Yeah, I think that's probably true.
And I'm just yet to see really strong use cases.
And I guess you guys are more at the coalface of this than
I am. I'm yet to see really strong examples where people have said,
"We use generative AI for this level of insight and analysis and

(30:44):
look how I did it. And that was all AI, ta da," you know, "We sprinkled
in some data and got this amazing output; isn't it great?
Aren't your jobs all doomed?" I'm not seeing that. See, I find there's
two groups of people. There are the people that are doing very cool
shit and are doing it pretty quietly and not telling people.

(31:07):
And that kind of tends to be the way that I, I mean,
I'm not saying that I'm awesome, but like when I've
taken a shortcut, I'm not going to tell people, let them think I
did all the work. Not that I transcribed a voice note and then
used it to write up my interview feedback and
then pasted it in, in a really efficient manner. And everyone thought that
my interview feedback was spot on. But then there's the other group that

(31:30):
are like, "Oh my God, we did AI, look what we did.
Over here, over here." And it's like, it just seems to be really
polar opposite. I don't feel like we're at that
maturity of educating people about how to do it well and the pros
and cons. Like it seems to be, I don't know, like very polarizing

(31:50):
at the moment, but maybe that's just my lived experience.
Having gotten buttonholed by somebody who was definitely the latter,
it was a really long and exhausting conversation. And it really wasn't a
conversation, it was just him going on. What was interesting is that

(32:12):
when I did probe with him, all of this
really, really cool stuff was around rapidly pulling in data sources
and being able to use web hooks and generate
code to pull data sources in. And then with some
iterating on the model, do some kind of mining of these multiple data

(32:35):
sources to generate something, which was all very interesting, except
for two things. And clearly this fella talks about it to anybody
who will listen to him and does not stop. And then he started
making these bold claims about any company could go from $0 to $10
million with one person with just AI. This is amazing. But then as

(33:00):
I was probing, one, he admitted that all the stuff he did,
he did actually have to talk to the subject matter experts to even
figure out what it should be doing, which seemed like very much a
human task. The thing that we didn't get into that just seemed...
He went on at great length about how he does not have a technical
background. But he also went on about how he didn't have to

(33:24):
write any code. He would just have this generate the C# and
then he'd take it. And that felt like another component of,
well, that seems sort of fragile. Like you're... The playing around I've
done with code generation is it'll generate something, but it may

(33:45):
not be clean or well written or something that you want to have
code that lives on for the ongoing production of
any sort of ongoing deliverable. Like it equates... I talked to my son
who's a software engineer, and get him started on somebody who's a crappy

(34:06):
software engineer, or sometimes a faceless person in the past where he's
inheriting the downstream. And I'm like, "Oh my God, the ability for
a machine with a temperature setting, probabilistic in nature,
to generate code that's then going to live, that some poor analyst or

(34:27):
some future generative AI needs to modify the code? How is that going to
work?" Like, so the ability to say, if you're going to write something
that needs to have staying power, you can use the code assistant,
but you probably need to know the code and
maybe do some real iteration with it, as opposed to just saying,

(34:50):
"I don't need code." I mean, I've had multiple people saying,
"No one needs to learn to code. It'll just generate it for you."
And I'm like, "Well, that's somebody who's never learned to code says that."
Can I challenge you a bit here? One of the things that is
a little bit exciting, like when anyone asked me, I was like,
I will say, "I'm not that technical. I can do a bit of

(35:11):
programming, but I'm pretty shit." And I'm way more shit than I was
five years ago. And I know I have people in my team,
for example, that they would share that and they would say,
"That's not my strength. Programming is not my strength."
They definitely have made endeavors to learn and will try their best,
but they're never going to be a gun programmer. They're not the one

(35:32):
that are like QA'ing 50,000 PRs from other engineers, data scientists,
every day. One of the things that I find so challenging in data
is to find people that are really good at
figuring out how to answer a question, solve a problem. And it's like
the idea that you could have someone who might not be strong in

(35:53):
a particular skill, like programming, but has this real superpower to understand
and answer a business question and you can make them
better at their job by kind of giving them this free buddy or
coach or technical mentor. Like, I find that fucking exciting.

(36:15):
That is cool. So I think you glossed over the thing that I
think just gets glossed over all the time, which is you... Oh, tell
me more. And those people, well, and all those people, they tried.
They're like, it's not saying you have to be an elite level programmer.
And I think Cassie's article that I mentioned, the Strawberry's Paradox,

(36:36):
is very much on that. It doesn't mean you have to be a
hot shit programmer, but discounting the effort to learn,
learning SQL and learning VLOOKUP and learning what a left join,
what a join is. If you completely skip that and say,
"Oh, but somebody just has a great sense of answering business questions."

(36:59):
One, I think that's actually often discounting what they've learned, and
part of their ability to answer the business questions comes
from struggling through some of the technical aspects of it. Like,
learning that stuff helps you understand how data
works, right? If you do a thought experiment where somebody's never had

(37:22):
this sense of a join introduced to them and they just say combine data
sets, you wind up with kind of the very casual business user where
you're having a very circular discussion because they don't understand that
you need a key to join two data sets.
So I think we're really good at skipping that point of saying,

(37:43):
"No, no, no, this is gonna be great." It's like, well,
no, but the people have to learn that even if that's not their interest or
their passion, they're learning very, very valuable aspects that go
into their ongoing cognition when they try to learn that technical stuff.
That is part of who we are. And we've started saying,

(38:04):
"Oh, we can skip that. You don't need to do it at all."
But who's saying that? Like I know that there is the odd person,
but I would... Like, if anyone came to me and said,
"I wanna be a data analyst," and they're like, "Guess what?
AI's out there, I don't need to learn any programming." I tell them
to... That's... I'd be like... A thousand percent what the fucking analytics

(38:25):
translators were saying. I definitely dealt with... No, not at all. No.
I've had people tell me... I had somebody who was a
long time Google person adamantly tell me, "No one ever needs to learn
code again." And I was like... He was like, it's just...
He's like, "No, you don't need to ever do code." And I'm like,

(38:47):
"I can't believe." So, absolutely. And I will say going back pre AI,
there were people who were coming who were enamored with the idea of
analysis and the idea of doing stuff with data, but said,
"Ooh, I don't wanna learn anything technical and
this analytics translator role, I can just... " And this was way before

(39:08):
Gen AI. I'm not actually denigrating the analytics translator role,
only if somebody thinks that means, "I don't have to have any technical
chops." But I don't know, Martin... 'cause for example, I know analytics
translator is a very contentious thing, but when I
think of it, I think of something very different to what you think
of. And like, and this is the same situation... I'm looking at the people

(39:32):
I know who have jumped on that role. Yeah, sorry. There is a
spectrum though, at one end there is like, "I think I can do
no programming and AI's gonna do it all for me." And then there's
like the middle people that I kind of talked about who might be
like not great at it, bit rusty can use it. And then there's
the people that are like, "Why would I ever need AI?
I'm such a great programmer." But it's always this spectrum. Hearing your

(39:56):
description that you gave, many people would jump to saying, "Moe thinks
that if somebody looks at programming and is not interested in it,
they can therefore completely ignore it. They're... " No. I just... But
I think that that's how that can be heard. But Martin's talking a lot...
Hold on. Hold on. Let's... Wait, no, Martin is talking a lot about

(40:18):
the fact that there are so many mistakes made. How do you recognize
a mistake if you don't know what the wrong or right output is? Right. And
that's the thing is... And I know I'm using wrong and right in
a very binary sense, but... The people who are super excited about what
this is gonna do, have probably never done it.
And that's what we're probably kind of circling around right now.

(40:41):
Okay. Let's put up... No, I wanna say more about that.
Circle back. Let's bring it back. 'cause I think the place I want to go
next is I want to talk, Martin, a little bit about sort of
this idea, and this goes into a couple things. So one is sort
of like Moe to your point, people who are using AI for various
things aren't really necessarily talking about it. And I think sometimes

(41:03):
because there's not a scalable process for the way that I might use
an AI, I use it kind of in that first use case as sort
of this sort of assistant coach mentor thing. I'll just pop open
my little ChatGPT and be like, "Hey, I'm thinking about this.
What are some ideas you've got?" And blah, blah, blah. I've never had
ChatGPT look at data for me ever. I've had Claude look at a

(41:26):
couple things, but I've never used them to do any kind of analysis
of data. But I think this idea of exploring sort of
the agentic process in analytics and sort of like, let's step through some
analysis scenarios and maybe look and see where we could leverage it.
And Martin, where do you see kind of the best places for

(41:50):
analysts to use AI in their day to day jobs? And that could
be... We can give you some scenarios maybe to help with that.
Tim, you look like you're about to... No. No. Okay. No. I think I'm still...
I'm waiting for my Generative AI to tell me that my blood pressure's come
down enough from my last rant to... Yeah, so give me the scenarios.

(42:15):
Okay. Perfect. I'll start with one. So, one that I think about all
the time is a lot of what we do in analytics is really
thinking through sort of basically an experiment of some kind, or some kind
of analysis around this versus this. Like, "We're gonna try this."
So, one of the really crucial skills for an analyst, I would say

(42:37):
is being able to design a good experiment or think through the design
of a good experiment. And so like, let's say somebody comes to you
on your team and is like, "Hey, we wanna run this campaign.
We wanna see if this is a better way to do this."
Could you use AI to start to work through the answer to that
question? I think that's a... Like, the kind of

(42:58):
design of experiments is really interesting, particularly with
the new model, o1, with the reasoning capabilities. So the chain of
thought capabilities mean that it thinks, "thinks" he says in air quotes,
through the process. And it can be a very good constructive

(43:24):
critic. So, giving you feedback, giving you alternatives
to the point that we made earlier about being generative. And it can
come up with lots of things, very quickly. It can generate huge amounts
of content, some good, some bad. So, if you want to just throw
in an experiment design, or a hypothesis, or whatever it may be,

(43:46):
and ask it to give you feedback and then just keep going.
More feedback, more feedback, it will generate lots of it. Some of it
you would disregard, but hidden amongst that there will be some gems.
Now all of this is talking about the current state of these models.
I think it's not going to be long before,
and actually I'm quite interested in o1 and where this goes with the

(44:08):
reasoning capabilities. I think you'll just be able to put in very simple
prompts saying what it is that you are looking to achieve
and it will spit out very high quality
experiments that you can execute. Yeah, I asked the o1 model,
how many golf balls will fit inside of a 747? It did a pretty

(44:31):
good breakdown, honestly. So, those kinds of reasoning problems, I think
it does a good job with. I think Moe brought up something else
about sort of, there's a value in being able to take on and
answer a question or understand and answer a business question effectively.
And how could an analyst leverage AI to maybe even work with that

(44:54):
kind of use case? Moe, can you unpack that for me slightly? You're
saying if somebody comes with a problem coming back and saying these are
scenarios for... Well, like... Analysis approaches that... I don't know.
Actually this happened the other day. No shit

(45:15):
I said I would not talk about this, I said I would not
talk about this and here I am talking about it.
There was a CMO type question, and someone put it into ChatGPT to
say what are the possible hypotheses that might be an answer to this
question. I was a little bit surprised at how good the answer was.

(45:41):
And the reason that the answer actually was very good, and I found
this with my own experimentation, is I find a lot of the responses
I get come down to structuring things very logically.
And so it'll be like reason one, reason two, reason three, reason four,
which as someone who ends up writing things into a lot of documents

(46:03):
or like writeups, it then becomes a very easy structure to work with
in terms of writing it up. And so I was like,
"You know what? We are gonna just lean hard into this.
We are gonna then tackle this," Tim, you'll love this, "Almost as
analysis of competing hypotheses and be like, 'Okay, these are the nine

(46:23):
hypotheses that ChatGPT gave us. Let's go through. Let's try and knock them
out. Let's see what we can't, what we have evidence against.
What can we say is possibly responsible, partially responsible? What are
the data that we have for each one?'" And that's actually ended up
how we ended up structuring our analysis, was based off the hypothesis generated

(46:44):
from ChatGPT. There you have it folks. I said I wouldn't say it and
I did. But one, that's back to that number one, I think the,
in Martin's list of four, like the coaching or mentoring to me.
And I feel like that's... Is it coaching and mentoring? I don't know
if that's the same. Or it's the same, whatever.

(47:04):
To me that's what... I mean, that is what Jim Stern has been
kind of... I've now multiple times seen him do various iterations where
he's saying, "Ask it for ideas." And Michael, that's what you were saying.
I've used it for that. That... Okay. I... Sorry, Tim, I apologize profusely
for interrupting, but I can't stop my brain from thinking right now.

(47:27):
I think of coaching and mentoring as helping you
make something you've already got better, or get there faster. So for example,
it might be like using a different function. It might be QA'ing the
work or, you know, making the language more concise. Whereas I think ideation
is almost its own separate category, which is distinct to coaching or mentor.

(47:50):
Like, I don't know, but that's, maybe I'm being too...
I don't know, Martin, I mean, how would you define it with... the four?
Yeah. I thought of, when I thought about coaching and mentoring,
helping you to ask better questions or think about things in different ways
was part of that. So, I did see that kind of ideation of

(48:11):
things being part of that kind of umbrella.
But would you also, I mean, Moe with your, the,
"Okay, these are nine, maybe two," you could say, "These are garbage.
I didn't... " one, iterating.... There were some that, but then also how
would I actually validate that, right? Because there's multiple ways to

(48:34):
validate. I mean, you could take it farther and say,
what data would I look at or to get a causal relationship
to truly, if this, if my life depended on validating this hypothesis,
number three in your list, what would you recommend that I do?
Assume infinite resources. You know, I think, which all to me goes through

(48:56):
a good iteration. But it's interesting you asked it for, like, what are
some hypotheses, not what are the insights? What are the answers,
right? You had it be that upstream piece and then, "Okay,
we're gonna put a human in the loop, who's gonna say which of these
are worth pursuing and how," and hopefully someone was looking at it saying,

(49:19):
"Some of these we just factually know there is no data that can
validate that hypothesis already in existence. The only way I could do that
is to generate some new data that the Generative AI doesn't have access
to. Because I need to run an experiment," or, "I need to gather
some data for my users," or somewhere else. So,

(49:40):
it's in the process, but it's not... I still feel like it gets treated
as like, "Oh, oh, it's this close, as it gets better,
it'll generate those nine things and they'll be CMO ready." And it's like,
"No, it's gonna generate those things and then we need humans and work
in the process." And I don't wanna come like, hopefully I'm not coming
across as anti Generative AI. I just think there needs to be... Oh you

(50:03):
are, Tim. Decision Oh, I... You are. No, I'm just kidding. We're gonna
run the transcript of this through Claude and say... That's right. Who's
the asshole? Yeah. It's actually really interesting. Like Martin started
this whole episode talking about the terrifying scenario we wouldn't have
jobs. And it's funny, I am also using ChatGPT a lot at the

(50:24):
moment for testing different ways to explain a technical concept
to stakeholders. So, the other day I needed to describe
probabilistic and deterministic... probabilistic, you can tell it's... probabilistic
and deterministic. And I was trying to test out, I actually had a
few different models going against each other to figure out what was the

(50:45):
best option. But it still comes back to that human component of me
looking, knowing my stakeholders well enough, having a good understanding
of what concepts they're familiar with, or what terminology has stuck with
them. So that will land. And then sometimes using different bits from different
outputs to stitch it together. And yeah, I don't know. I'm sure maybe

(51:06):
when my kids grow up, maybe that step won't exist, but for now,
I definitely feel like I still need that. Well, you keep discounting that.
Like, you keep discounting that like, "Oh, well maybe it'll get to where
it's better." It might. I'm not the future reader. Well, but I mean this
is... This goes back, it's not new that 10 years ago they were

(51:30):
saying, "We'll get to... I don't need to learn R don't need to
learn Python. I don't need to learn SQL, because the computer will just
do it for me." And it's like the
half life of getting partway there, it does something better, and then we
have this world of optimism that says, "Oh well this other part
that it can't do now I'm sure it will get there.

(51:50):
If I just wait, it will get there." Like, I feel like there
is a tendency to say, "I don't need to become
better at communicating, because I'm sure within six months it'll just generate...
Canva will introduce the next feature that it just says, 'Here's the data
set, generate the slide deck,'" and then you spiral into, "I'm gonna lose

(52:13):
my job." As opposed to saying no... Like, knowing who the people are
that you're working with, that matters, which of these analogies would work
better. What's the fine tuned right level? And it's not that it's not
gonna continue to get better. I mean I'm terrible as a futurist,
but I think that it's like saying, "Oh, well maybe it'll just do

(52:36):
this for me within a few years, I
feel like is... " Yeah. Okay. Number one, I don't think I'm discounting
that stuff, but I just maybe don't get quite as passionate about it.
So, given Tim's rant though about, you know, we all still need to
learn programming skills, we all still need impeccable communication skills.

(52:59):
Computers won't save the day. I did not say that. Okay.
Now it's just fun. Paraphrasing. Come on, come on. No, I'm... The thing
is you're putting it is... I mean. We're gonna use ChatGPT to paraphrase
what Tim said. You can't. What I'm saying is there's value in this and
then you put a label that Tim says you need to be perfect
at this, that... Oh, come on. That is fucking annoying. Right?

(53:21):
Okay. Sorry. I mean it's not... You're painting it.
I take it back. I take it back. Can AI do this folks.
I don't think so. Okay. Tim and I are gonna be banned from
being on a show together for a while. But what I was gonna
say, Martin, is with the companies that you're working with and the use
cases that you are seeing, if you are starting out in the data

(53:41):
space, you have finite time. You do have to choose where to spend
your energy and your learning. I guess you probably have quite a good
intuition of the direction of the industry and where it's going.
Where would you spend your energy, 'cause we always get this,
we're like, "What is the programming language I should learn? How much time
should I spend on learning data visualization, or on communicating results?

(54:03):
Or writing up analysis?" It's like there are so many things to learn
where to focus and knowing, I suppose, the pros and cons of AI,
like, where would you spend your energy if you were new in the
data space? So, full disclosure, I am not an analyst. So,
giving career advice to future analysts is... You know, I'm not the most

(54:25):
qualified there. But I think the fundamentals are always... Or maybe, maybe
you're the most qualified. So... Yeah. There's... As I mentioned earlier,
being an expert in the field helps you get more
quality content or quality outputs from the AI, you know the questions to
ask to steer it. I also think from an AI perspective and from

(54:49):
the Gen AI space, I just think there's a really fundamental play with
the tools, play with it, poke it, prod it, pull it to bits, and
really look at the outputs that you're getting to understand where those
limits are within these tools. It's very easy
to just take it at face value. It's an AI, surely it's a
computer. It's told me the answer. And as I've

(55:11):
mentioned earlier, this is clearly not true. We can fall asleep at the
wheel if we just take the outputs at face value. So yes,
from a data end, I would pursue the career or pursue the skillset completely
ignoring that AI exists. And I would treat learning AI

(55:32):
as a separate endeavor in and of itself, to understand what that is
and what it, more importantly, isn't at this moment in time.
Yeah. That's good. All right, we've gotta wrap up. This is interesting.
I didn't think we had any passion for this topic at all,
but apparently we have quite a bit, so this is awesome.

(55:52):
Well, one thing we like to do is go around the horn,
share our last call, something that might be of interest to our audience.
Martin, you're our guest. Do you have a last call you'd like to
share? Yeah. So there's a Machine Learning Street Talk, a podcast about

(56:05):
machine learning. They recently did an episode on "Is o1-preview reasoning?"
So the new OpenAI model, is it actually reasoning? And it's about an
So the new OpenAI model is it actually reasoning? And it's about an
hour and a half discussion going deep dive, quite philosophical in nature
about what is reasoning? What is knowledge? Are the things that these language

(56:28):
models doing, truly reasoning? And it's really fascinating for anyone that's
interested in learning more about that. Nice. Awesome. Thank you. This is
funny, that same post by Cassie talks about how, I can't remember
which model that says thinking, and she was like, "It says
thinking," she's like, "It's not thinking, it's kind of poking a little

(56:48):
bit of fun at the human when it's spinning around." I was like,
"Oh, I never thought about that." Appear to be human. All right.
Moe, what about you? What's your last call?
Okay. Mine's a weird one. So, I am talking about something that has
nothing to do with Gen AI. I'm doing a professional

(57:09):
leadership course internally. I'm very lucky we have internal coaches at
Canva that we get the opportunity to do this.
And the topic we covered last week was about kind of like our
leadership values and our, what's called our leadership shadow,
and I had written my leadership values a few years ago.

(57:29):
I'd run it through some mentors and people that I'd chat with,
and I was pretty happy with them. And of course I dusted them
off the shelf and I looked at them and was like,
"Yeah, shit." I think what really stood out to me is that at
the time I wrote them, they were all very aspirational
and I would say very soft skill based.

(57:51):
And I didn't feel that I had something there that captured
the team's output or drive. And I realized over the last few years
that is something that's really important to me. So number one,
this is a reminder: if you do have leadership values, go check on them.
But the other thing that happened is we started talking about our leadership
shadow. And so that's where you say something's important to you,

(58:11):
but you maybe the way you behave doesn't show up in the same
way. And so an example, not a reflection of me at all,
is you say that your team are the most important thing.
You really care about everyone that you manage, but then you move your
one to ones regularly or you reschedule the team meeting every month,
or something like that. And so it's about identifying where are you saying

(58:34):
things are important, but your behavior is actually quite different if you
were in the team and seeing that. And
yeah, it was just like kind of a nice, I mean challenging exercise,
but good exercise to see how you then overlay that with the values
that really are true to you and how you're gonna show up and
make sure that you're demonstrating that to the team. You should throw those

(58:54):
into ChatGPT and say, "What is my leadership shadow?"
I don't think it knows me well enough yet. Yeah. But I bet it will, give
it a couple weeks. I feel like maybe I should let Tim be
in charge of the prompting and then maybe we would get some real
gold there. All right. Well Tim, what's your last call?

(59:15):
So, I'm gonna do a plug. We are just a little less than
a year out from the... I'm gonna do a twofer. My first one's a plug for
the data connect conference, so it's in early October of 2025.
So, I've talked about it before. We've done promos for it.
It's dataconnectconf.com. But the call for speakers is already open. So

(59:39):
if you are, or if you know someone who is a woman or
a gender queer, gender non-conforming, or non-binary individual who would
have something to speak about at a data conference, consider putting in
a pitch for that. It's a great conference open to all to attend,

(59:59):
just limited on who the speakers are. So that's my plug for that
conference and getting great content there. And then as
my actual last call, which maybe does tie into this topic,
there's a guy named Peder Isager? Isager? I don't know how to pronounce
his last name, who wrote a post called Eight Basic Rules for Causal Inference.

(01:00:23):
What's funny is the URL actually is seven basic rules for causal inference.
So, I am really curious as to which one he thought he had
not realized, but it gives simple little diagrams that actually made me...
The first couple I'm like, "Yeah, knew that, knew that." And then it
got really interesting. So, when it comes to
this topic we had today, I think causality is one of those things

(01:00:47):
that is really kind of profound and tricky. And
that was kind of a nice post with simple little diagrams that kind
of make you think, "Oh, this is why
all the answers are not just in the data that I've already collected."
So, eight basic rules for causal inference. Michael,

(01:01:07):
what's your last call? Well, in the spirit of this topic,
a couple of people that I know very well 'cause they used... I
hired them both and they used to work for me. They've started a
startup in the AI space called Moonbird, moonbird.ai. And they are building
agentic tools and services and things like that. But their first product

(01:01:28):
is around an agent or something... An AI agent for specifically looking
at Adobe Analytics implementation. So, if I were walking into a situation
where I was looking at an Adobe implementation today, I would be using
that tool to bring me up to speed, give me information,
provide me some knowledge. So, if you're in that space,

(01:01:48):
it's a great little tool for that. So big shout out to the
Moonbird team over there. All right, well Martin, thank you so much
for coming on the podcast. Who knew that little networking session
at Marketing Analytics Summit would eventually lead to this?
Martin and I were at a table together
at Marketing Analytics Summit. We got to introduce ourselves and here we

(01:02:11):
are. So thank you, Martin. Thank you. Yeah, it was some good dim sum we
had. Yeah. That's right. All right. And then of course no show would
be complete without a huge shout out to Josh Crowhurst, our producer,
does so much behind the scenes to make things happen. Josh,
thank you. And of course, big shout out. Thank you to Tim and Moe, my
co hosts, for bringing so much life and passion to this episode. Arguing.

(01:02:35):
You mean arguing. Yeah. Well, you know, I asked ChatGPT like, "Give me
a positive spin on all this bullshit." All right. Well this is an
awesome topic and obviously what I think is super interesting and obviously
growing and becoming more and more a part of the conversation.
So I think this is probably not the first time or the last

(01:02:56):
time we'll talk about it on this podcast, but I like the start
we got today. So again, thank you Martin. And again, I think as
you're going out there, we'd love to hear from you. What are you using
AI for? What kinds of things do you see in your work?
It's easy to reach out to us. You can get a hold of
us on the measure chat group or on LinkedIn. And we also now

(01:03:18):
have a YouTube channel, so you can check us out there as well.
So, go ahead and reach out to us. We'd love to hear from
you, unless you're pitching an AI-related topic or host from a PR auto
bot type situation. We do get a lot of those emails,
but we'll do the picking. Thank you very much and I think we

(01:03:40):
got the great person for this today. All right, anyways, I know that
as you're going through life, you're gonna be using AI more and more
and so keep the good work going. And I know I speak for
both of my co hosts, Tim and Moe when I say,
keep analyzing. Thanks for listening. Let's keep the conversation going

(01:04:02):
with your comments, suggestions, and questions on Twitter at @analyticshour,
on the web at analyticshour.io, our LinkedIn group and the Measure Chat
Slack group. Music for the podcast by Josh Crowhurst.