
April 29, 2025 61 mins

We finally did it: devoted an entire episode to AI. And, of course, by devoting an episode entirely to AI, we mean we just had GPT-4o generate a script for the entire show, and we just each read our parts. It's pretty impressive how the result still sounds so natural and human and spontaneous. It picked up on Tim's tendency to get hot and bothered, on Moe's proclivity for dancing right up to the edge of oversharing specific work scenarios, on Michael's knack for bringing in personality tests, on Val's patience in getting the whole discussion back on track, and on Julie being a real (or artificial, as the case may be?) Gem. Even though it includes the word "proclivity," this show overview was entirely generated without the assistance of AI. And yet, it’s got a whopper of a hallucination: the episode wasn’t scripted at all! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Announcer (05:17):
Welcome to the Analytics Power Hour. Analytics topics covered
conversationally and sometimes with explicit language. Michael Helbling:
Hey, everybody, welcome. It's the Analytics Power Hour, episode 270. Back
in 2023, I asked AI to write an intro to the podcast, and in the words of
Moe, AI did a pretty shit job. But AI hasn't gone away, and the possibilities,
capabilities and potential of these LLMs are expanding, seemingly by the
minute. So today we're strapping on our robot helmets and plunging head
first into the wild, worrying world of artificial intelligence. So what's
AI really for? Is it just a fancy predictive model? Maybe just a massive
checkbox on your boss's latest buzzword bingo card? I don't know, hype versus
help? Automation versus annihilation? And whether or not your next co-worker
might be, I don't know, a chatbot with boundary issues. So grab a drink,
mute your Slack notifications, and prepare to find out if your career path
is evolving or being quietly replaced by a GPT-powered spreadsheet whisperer.
Speaking of spreadsheet whisperers, let me introduce my co-hosts, Julie

Hoyer. Julie Hoyer (05:19):
Hi there. Michael Helbling
on the show, Julie, because I feel like you're probably one of the most
knowledgeable people about AI in our group, so I'm going to be leaning on

you quite a bit. Julie Hoyer (05:19):
I don't. Michael Helbling

all of us and pretty sure, yeah, you're going to be. Julie Hoyer (05:19):
I'm the

sleeper. Michael Helbling (05:19):
I mean, we'll see. We'll see. All right, next

up, Val Kroll. Val Kroll (05:19):
Hello. Michael Helbling
Fools stuff that you and Tim put together for Facts & Feelings. Tim Wilson:

A month ago. Michael Helbling (05:20):
So I guess that's a good use of AI. Yeah.

Val Kroll (05:20):
Yeah. Facts and furious. Michael Helbling
a month ago. Everyone knows when April Fools is, Tim. Moe Kiss... Val Kroll:

He's just remembering back. Michael Helbling (05:20):
Yeah, welcome, welcome. Moe

Kiss (05:20):
Thanks. Excited to be here. Michael Helbling

to learn that good chunks of that intro were written by AI? Moe Kiss (05:20):
Yes.

Yeah, yeah. Michael Helbling (05:21):
They were. Moe Kiss

Helbling (05:21):
The models have progressed quite a bit. And speaking of people

who haven't progressed quite a bit, Tim Wilson. Count... Julie Hoyer (05:21):
Insert

cheering sound. Tim Wilson (05:21):
XLOOKUP. Michael Helbling

That's right. Michael Helbling (05:21):
I'm whispering to the spreadsheet. That

is actually the first... Val Kroll (05:21):
Eye roll. Michael Helbling
read Tim's blog way back in the day with Excel tips and tricks. Like, I
learned things from Tim Wilson about Excel. So that is true... Tim Wilson:

2008. Michael Helbling (05:22):
Hey, listen, it's working for you, so don't give
up, all right? I'm Michael Helbling. So, yeah, let's... What do the kids
call it? Vibecast or Vibe podcast? I don't know. Let's do this thing. All
right. So, Julie, is it going to take our jobs, this AI thing? Julie Hoyer:

No, definitely not. Michael Helbling (05:22):
All right. No, thank you. Julie Hoyer:

My experience. Michael Helbling (05:22):
Great show, everybody. Julie Hoyer

not worried. See you next time. Rock flag. Michael Helbling (05:23):
See you next
time. Okay, but why isn't it going to take our jobs? We should probably
dig into that a little bit. And let's also maybe dig into what our jobs
are a little bit so that we can kind of see where AI helps, where it doesn't.

And I guess other people can also chime in too. Julie Hoyer (05:23):
I guess. Okay.
Most recently, something I'm running into a lot is... And I feel like this
is an example we've talked about previously on the podcast, multiple times.
A lot of people have written blog posts about it. And it's just funny because
now I'm fighting this battle on multiple fronts at work. It's the same discussion
of: I think for analysts, AI is not ready to just replace us. Even for,
like, writing queries. There is no, like, talk to your AI and ask it your
business questions and have the data insights come from your big data warehouse

or anything. People are still so excited about that. Tim Wilson (05:24):
Wait a

minute. Julie Hoyer (05:24):
From what I have seen, it's not... Tim Wilson
versus giving it a... I mean, Julie just said from business question to

having it write it. Julie Hoyer (05:25):
Yeah. Tim Wilson

Fair. Michael Helbling (05:25):
Yeah, Yeah. I think the distinction is important,

but let's let you keep going. Julie Hoyer (05:25):
Yeah. I think it's still that
there's a lot of this, like, fantasy of, like, it's gonna be so much faster
for an analyst. Like, go into your analytics tool and just like, type away
your questions that you have to answer and get insights really quick. And
I have just had some specific, like, experiences recently where I'm like,
see, it's still not. You guys are saying that that's like the promise, that's
what they want, but it's not true. So I'm still not seeing it even in that
sense of, for an analyst and reporting, we're not close to that. And that
takes a ton of time as an analyst to synthesize the data, put it into a
coherent answer, and have it be insightful for your business stakeholder.

Moe Kiss (05:27):
Without giving away too much, this is a delicate tightrope to
walk. Ah, so what we've been trialing, and there's some super smart people
at Canva. Adam Evans had a really brilliant idea, and then Sam Redfern,
who I used to work really closely with, has been exploring kind of
productionizing it. It's been really cool. It's like looking at like,

(05:49):
what are the top queries that are getting asked, like SQL queries,
versus like a table, right? Or like a report table or a model
table, and then using AI to help like generate the best query possible
to get back the data. And what we've noticed is if we do
that, and then we return the data back, and then ask our business

(06:10):
question, it's doing a better job. And we're starting to like test that
out across multiple different business streams. And I've decently played
with it and I'm pretty comfortable. Like I think the thing is like you
definitely, we're not at a point where you don't need a data person
involved at all. Like you still definitely need to QA

(06:31):
data, you definitely need to be like looking at the query logic,
all that sort of stuff. But it is a lot more promising probably
than I expected in a faster time. And I'm almost... I'm going to
throw out something controversial and Tim is like sitting on the edge of
his seat. I think we might get to a point

where we don't need dashboards. Mic drop. Tim Wilson (06:54):
Well, yeah,

I think I agree with that. Moe Kiss (06:59):
Oh, maybe it's not surprising.

Tim Wilson (07:02):
Well, I mean, I think dashboards are generally bullshit.

Julie Hoyer (07:04):
So I was gonna say more that I
think, Moe, hearing your success with this so far, though, the difference
for me is I'm not working with clients that are building something homegrown.
They want something out of the box that

(07:36):
works. I think people don't realize the training that goes into it. I mean,
it's contextful. It takes a lot. There's a lot to think through and people
aren't connecting those dots of like, all the steps in between. But. Moe

Kiss (07:37):
Sorry, you're thinking that most people just want to buy a tool and

be like, here's access to our data warehouse. And now... Tim Wilson (07:38):
Those
fucking tools are hitting me every goddamn day, saying, "We have solved it."
And they stand up the biggest fucking straw man, that the problem is
business users can't get to their data. And imagine if they could just
ask what were sales like in the Northwest region last month and it
would generate that query. And it is the biggest fucking farce.

(08:02):
I was having the exact same reaction. Within an enterprise organization
with experts that have the ability to have captured a lot of queries
and captured a lot of expertise and trained it, I do feel like that
is very, very different from the promise of the BI
platforms and all these Johnny-come-lately upstarts that are like,

we can solve this. That drives me nuts. Julie Hoyer (08:24):
Because they're saying,
you come with all your data. We have a really good
LLM. Now ask it your questions. And it has the data there so
it can answer for you. And it's not taking into account that it's
not smart enough. It doesn't know... You haven't trained it how to actually...
It doesn't have the context around your data. It doesn't have the context

around your business. That takes so much. Moe Kiss (08:44):
The thing that we're
still having to do is we have a very unique data warehouse in
how we've chosen to build it where we have like... Well, we've tried
to build a lot more like small lean tables to answer specific questions,
which means that we have thousands of tables, right? And so the joins
become complex, all that sort of stuff. And the thing that we are

(09:06):
still very much having to do is helping point it at the right
table and provide context on that table. And so I think the thing
that I'd probably... Like my own thinking has developed quite a bit is
that previously I probably used to see our data warehouse as being like
almost a barrier to us using AI, whereas now I'm starting to see

(09:28):
it as much more of an advantage, but you still need that like
SME knowledge of like, this is the best table to use.
And one of the ways that we've been solving for that is looking
at what are the top dashboards that people are looking at at a
company level, because often it's like the report layer table that's sitting
underneath that dashboard is the best possible data source because it's

(09:52):
all structured and clean and like has all the right dimensions.
And then we point it at that specific table. So like,
I totally hear what you're saying. Like I have such a different perspective

because we do have the SME knowledge, Julie Hoyer (10:01):
What's intriguing is
if what you're actually using as your training data is the history of all
the queries that have been run. And I mean, that's kind of the
wisdom of the crowds. If your training data is what are the queries
that the experts have written and now we can estimate the best query.
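A rough sketch of the general pattern Moe and Julie are describing here, with entirely made-up table names, weights, and prompt wording (this is not Canva's actual pipeline): rank candidate warehouse tables by how often they appear in historical queries and in the report-layer tables behind popular dashboards, then hand the winning table's schema to an LLM as context for writing the SQL.

```python
# Illustrative sketch only: table names, the weighting, and the prompt wording are invented.
# Idea: use query history and dashboard usage to pick the most likely table,
# then give that table's schema to an LLM as context for writing the SQL.
from collections import Counter

def rank_candidate_tables(query_log: list[str], dashboard_tables: list[str]) -> list[tuple[str, int]]:
    """Score known tables by how often they appear in past SQL, boosting report-layer tables."""
    counts = Counter()
    known = set(dashboard_tables)
    for sql in query_log:
        for table in known:
            if table in sql:
                counts[table] += 1
    for table in dashboard_tables:
        counts[table] += 5  # arbitrary boost for tables that sit under popular dashboards
    return counts.most_common()

def build_sql_prompt(question: str, table: str, schema: dict[str, str]) -> str:
    """Assemble the context an LLM would need to write a query against a single table."""
    columns = "\n".join(f"  {name}: {dtype}" for name, dtype in schema.items())
    return (
        f"Write a SQL query against the table `{table}`.\n"
        f"Columns:\n{columns}\n"
        f"Business question: {question}\n"
        "Return one query and list any assumptions you made."
    )

if __name__ == "__main__":
    query_log = ["SELECT region, SUM(net_sales) FROM reporting.sales_daily GROUP BY region"]
    dashboards = ["reporting.sales_daily"]
    best_table, _ = rank_candidate_tables(query_log, dashboards)[0]
    schema = {"order_date": "DATE", "region": "STRING", "net_sales": "NUMERIC"}
    print(build_sql_prompt("What were sales in the Northwest region last month?", best_table, schema))
```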

Michael Helbling (10:19):
What's definitely become clear to me is that your source
data requires many different other pieces of metadata
or parallel data, like the queries being run, like what questions people
are asking internally when reports are being used, what other things are
happening in the business that aren't stored in that

(10:42):
dataset so that an inference engine like an LLM can actually come up
with something that is not just sort of like that intern level,
time on site was 42 seconds type of bull crap you get from

big agencies. Did I just say that out loud? Sorry. Tim Wilson (10:55):
I think
maybe somewhere you would have named a specific agency. Michael Helbling:
I didn't want to go that far. But it's also interesting because the
companies who are out on the forefront of this trying to build
these chat-assisted or AI-assisted data exploration tools, probably one

(11:20):
of the ones that I'm most familiar with right now is
Zenlytic. They're very upfront about the fact that you have to build
this other layer, which they call a cognitive layer, on top of it
so that you can actually leverage their tool. And they don't claim to
provide you insights at this point. They just claim to provide you ad
hoc data. So if you need to get a metric, they can do

(11:43):
that for you. And I appreciate both the honesty and the
progress. Because I am bullish on this. I think there is a future
here. But I also think we're nowhere close to asking a question and
getting an answer that includes context and insight that gives us a next

action. Tim Wilson (12:00):
Picture this. You're stuck in the
data slow lane. You're wrestling with broken data pipelines, manual fixes,
when suddenly, streaking across the sky, faster than a streaming table, more powerful
than a SQL database, able to move massive data volumes in a single

(12:20):
bound. It's not a bird. It's not a plane. It's Fivetran. If you need
a hero for data integration, Fivetran, with over 700
pre-built, fully managed connectors, seamlessly syncs your data from every
source to any major destination. No heroics required on your part.
That means no more data pipeline downtime. No more frantic calls to your

(12:45):
engineers. No more waiting weeks to access critical insights. And it's
secure. It's reliable. It's incredibly easy to deploy. Fivetran is the tool
you need for the sensitive and mission-critical data your business depends
on. So get ready to fly past those data bottlenecks and go learn

(13:07):
more at fivetran.com/APH. Unleash your data superpowers. Again, fivetran.com/APH.

Check it out. Julie Hoyer (13:14):
So an example of that cognitive layer that
we're running into. So we were trying to
use, or are trying to use, Explore Assistant in Looker. And I don't love
it. I don't understand. Anyways, we won't go down that. I don't love
it. Here's an example. Two examples. We don't have that cognitive layer.

(13:35):
I don't know how we build that in. We're trying to do this,
let's say, like we're trying to do a project that's pretty at
scale. And like a cognitive layer for every client we might use this
for, right? Like that's quite a bit of work to spin it up.
So we were even doing a test use case where we were working
with this, the AI agent Explore. And we asked it, we said,
show us the top 10 performing landing pages, like cost per landing page,

(14:01):
right? And then we asked it, the worst performing. And we were like,
look, I was working with some engineers. They're like, look, we got it
to provide the data we were expecting. And then I realized it actually
wasn't understanding best and worst either. Like even those semantics of
me saying best cost per landing page would be the cheapest ones.

They were showing me the most expensive and vice versa. Julie Hoyer (14:19):
When
I said worst, they were showing me the cheapest. So it's even little
things like that. Or we were trying to ask about a specific metric,
but we were just using the layman's terms, right? Like a business user
asking about it. And because the name coming from the data source is
nowhere near that, you know what I mean? It was never going to

get to that data point for us. Moe Kiss (14:39):
Okay. So what is
top of mind for me right now is like, why do I not
seem to be having these same challenges? Is it just
that like we also have an enterprise account and we're uploading so much
more of our own business context. And so then we're like not having

these hurdles. Is that like a big part of it? Michael Helbling (15:01):
I
think yeah, because what you can train an LLM on is all about
what you get back out of it. I live in
a world, Moe, where I don't have clients who are taking all of
their data and storing it in an LLM, or, how about

(15:25):
this, consciously executing a data strategy aligned with the growth of AI usage
consistently. I have some clients who are doing quite a bit with it,
but what they're seeing is the exact same thing. They now have people
full time whose job it is to ensure that the AI is getting
fed the right information, which I think is kind of fascinating.
And then the other thing is that there's such a big expectation gap

(15:49):
because of what AI is able to do in other categories.
So like, for instance, when I sat down with my son recently and
we quote vibe coded a video game the other night and we had
a working video game in like five minutes. It kind of blew my
mind. And here's why, because I don't know how to write

(16:10):
code, but this AI took a step forward in capability so big that
it makes people think, oh, that step forward is available in every
context. And it's simply not. Because, and I've thought about this a lot,
like, why is it so good at coding already?
And I think the reason why is because code lives all in the

(16:31):
same place and is logical in its structure.

So, like, the code is right there. Moe Kiss (16:34):
It's good at some

code. Michael Helbling (16:36):
No, no, it's not perfect at coding, but it's the
best... Like, writing code is what AI is really... The most
product-ready thing it can do, I think, besides making cool animated versions
of your own photos now, is what it's really amazing at.
And it blows my mind how good it is now at it.

(16:58):
Like, it's so impressive. But I also start to realize
that, like, oh, yeah, because everything it needs to know is right there.

It's all in the code. Moe Kiss (17:05):
Yeah, but, okay, can I talk
you guys through an example that someone in my team showed me. Tim Wilson:
I want to call out that Ethan Mollick did a whole,
like, vibe-coding-to-build-a-game piece
that's worth a read. That was kind of... It was speaking things into
existence where he... It was a little bit more involved game but kind
of where he took steps forward and steps back. So

that just reminded me of that, your example. Michael Helbling (17:27):
Way to slip

in a last call there, Tim. Nice job. Tim Wilson (17:30):
Nope, wasn't even

on my last call. Michael Helbling (17:34):
Oh, Moe yeah, go ahead.

Moe Kiss (17:37):
Just showing off altogether. Okay, so someone in my team showed
this last week. And to be fair, I have not played with Claude
at all. I have been quite monogamous in my AI tooling.
And basically what he did is created a new Claude project.
He uploaded into it LookML for an existing... So LookML is the language

(17:59):
that sits behind Looker, which is a dashboarding tool for anyone listening.
So you have to write LookML code to basically get the data in
the right format to build a dashboard. And so he
uploaded, like, basically a LookML for an existing
Look. He then added, like, the underlying data that sits behind it from
the data warehouse as well as the code of how that table is

(18:23):
created, then gave it a sample data set. And basically, like,
saved these all to his project. And then
within, like, a good 15 minutes, Claude using the... Because he put a
lot of thought and effort into the steps and what data he... And what
context he uploaded. It gave him back the LookML to build a dashboard.
And that was... He turned that around in 15 minutes and built this

(18:48):
whole new dashboard for our stakeholders, which, to be honest, we didn't

have the resources or the time to build. Moe Kiss (18:51):
He definitely
talked us through the fact that he had to make tweaks and make
changes to this or that or the wrong visualization was picked here or
he wanted the colors to be this or that sort of thing.
But that is another example of it is so much about what you're
putting in. And I just wonder sometimes if the expectations of people are

(19:16):
here is one very selective bit of data. Now answer this really complicated
question, which it doesn't have enough business context to do. And that
we need to spend more energy on putting quality in. Oh,
I don't know. I feel like Tim's rolling his eyes at me.
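A hedged sketch of the context-assembly step Moe's teammate describes: gather the existing LookML, the SQL that builds the underlying table, and a few sample rows into one request asking Claude for the LookML for a new dashboard. The file paths, the prompt wording, and the idea of doing this as a script rather than by uploading files to a Claude project are all assumptions for illustration.

```python
# Illustrative sketch only: file paths and prompt wording are assumptions, and the
# workflow Moe describes used a Claude project rather than a script like this.
from pathlib import Path

def load_snippet(path: str, max_chars: int = 4000) -> str:
    """Read a context file, truncated so the prompt stays a manageable size."""
    p = Path(path)
    return p.read_text()[:max_chars] if p.exists() else f"[{path} not available in this sketch]"

def build_dashboard_prompt(request: str) -> str:
    """Bundle LookML, table-creation SQL, and sample rows into one request."""
    context_files = {
        "Existing LookML view": "views/orders.view.lkml",
        "Table creation SQL": "warehouse/orders_report_layer.sql",
        "Sample rows (CSV)": "samples/orders_sample.csv",
    }
    sections = [f"## {label}\n{load_snippet(path)}" for label, path in context_files.items()]
    return (
        "You are helping write LookML for Looker.\n\n"
        + "\n\n".join(sections)
        + f"\n\nTask: {request}\n"
        "Return complete LookML for the new dashboard and list any assumptions you made."
    )

if __name__ == "__main__":
    print(build_dashboard_prompt(
        "Build a dashboard showing weekly orders and revenue by region for our stakeholders."
    )[:500])
```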

Tim Wilson (19:31):
Hopefully not to generate a fucking

dashboard. But okay, that's awesome. We found a fashion... Moe Kiss (19:33):
It
generated a fucking dashboard. Do you know how excited I was?

Big shout out to Steve Austin. Julie Hoyer (19:39):
I'd be excited and save

you all those steps and time. Michael Helbling (19:42):
Hold on. You just said
we're not going to need dashboards. Why are we generating them? No, I know.

Moe Kiss (19:48):
People still think they need them
now. But in a year, I don't think they
will. Because they'll be... Ultimately you look at a dashboard to be,
like are we on track or not? Like, what was our performance?
Are we hitting it? Blah, blah, blah. I feel like it's a crutch
that people need. And it's like, if you can answer that question without

having a dashboard, why would you need it? Michael Helbling (20:08):
Yeah. I look
forward to a future where my brain gets stimulated and I smell apples

when sales are down in the Northwest. Tim Wilson (20:15):
I mean,
that's kind of bizarre. I mean, to me, the only place... I mean,
not to mount the dashboard. But the only place a dashboard is really
useful is actually showing in a consistent manner are we delivering against

(20:36):
the business outcomes, against our targets. So I actually would think that
would be useful. I don't want to ask an LLM every time,
what is it I care about? What metric is it that I want
to look at? I don't know. That's maybe a topic for a whole

other. Val Kroll (20:51):
For you to say, where am I underperforming?

And have it spit it out. Moe Kiss (20:53):
Am I on target?

Where am I underperforming? And what action should I take? Tim Wilson (20:56):
Okay.

Actions. Julie Hoyer (21:05):
Couldn't say that one. Tim Wilson

need another... Moe Kiss (21:08):
Tim's going to need a drink. Michael Helbling:
I think Canva has another breakthrough product category here, analytics

tools. Tim Wilson (21:14):
I did a thought experiment where I said this is
kind of really the best, the perfect dashboard would be one that only
showed where you were underperforming. So you'd have the same structure,
but everything would go away if you were
actually delivering, you were meeting your results, and so you'd wind up
with a very sparse dashboard. But I still think there's human value in knowing
what to look at and where, because that's been another thing that

(21:39):
so much of the hype around AI, and this even goes back to
other products pre-AI that were still doing the, oh, we're going to put
stuff, our users don't want to see charts, they want to know what's
going on. And so it basically would barf out text that described the
charts. For us, as human beings, a visual representation of data is easier

to internalize than prose. Moe Kiss (22:05):
Some tools do the visualization too.
Like I didn't realize how good Claude is at doing that.
Like it does visualizations for you and like scorecards and all that sort
of stuff. So it's like, do you need this dashboard to exist in
perpetuity? Or is it like, you're going to do your check-in at whatever
cadence it is for whatever meeting, and it just pops it up and

there you go. Tim Wilson (22:26):
But I hope that it would pull up
the same thing every time. Like there's the same... There's value in consistency.
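A tiny illustration of the thought experiment Tim mentioned a moment ago, a report that only surfaces metrics that are off target and that runs the same way every time. The metrics, targets, and the higher_is_better flag are invented for the example; that direction flag is also the detail Julie's "best versus worst cost per landing page" story tripped over.

```python
# Illustrative sketch only: metrics, targets, and direction flags are invented.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    actual: float
    target: float
    higher_is_better: bool = True  # cost-type metrics would set this to False

    def off_target(self) -> bool:
        return self.actual < self.target if self.higher_is_better else self.actual > self.target

def exceptions_report(metrics: list[Metric]) -> str:
    """Show only the metrics that miss their target; an on-track week is nearly empty."""
    misses = [m for m in metrics if m.off_target()]
    if not misses:
        return "All tracked metrics are on target."
    lines = [f"{m.name}: actual {m.actual:,} vs target {m.target:,}" for m in misses]
    return "Off target:\n" + "\n".join(lines)

if __name__ == "__main__":
    weekly = [
        Metric("Northwest sales", actual=910_000, target=1_000_000),
        Metric("Signup conversion rate (%)", actual=4.2, target=4.0),
        Metric("Cost per landing page visit ($)", actual=1.35, target=1.10, higher_is_better=False),
    ]
    print(exceptions_report(weekly))
```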

Moe Kiss (22:34):
That's a good point. Tim Wilson
Yeah, but I think you could have a prompt that schedules that and

runs it the same way every time. Julie Hoyer (22:38):
But is it more
efficient, like technologically and whatever it takes to run AI, to keep
asking it the same thing when you need
to like just create it once and let it sit and go look
at it, right? Like is it really worth

the like energy... Michael Helbling (22:54):
Much like computers, I expect
the cost to come down over time. So I don't know.

Who cares about that? I mean, inefficiency. Julie Hoyer (23:03):
It just feels inefficient.
Yes, exactly what I want. Like let me build it and save it
in a dashboard and I'll go click on it every Monday.

Like that to me just seems easier. Tim Wilson (23:13):
But the hurdle that
is much easier, and it just goes a little bit to that example
of I'm building something, I'm writing some code, I'm writing some SQL,
I'm doing... Consider just the traditional task where I might hit a snag
and read through and put in comments and try to figure out where

(23:35):
the hell it's breaking and then go and search and read like seven
Stack Overflow posts that aren't quite on point. I mean, I've been... The limited
work I've been doing when I'm like, I want to specifically do,
I want to take the system time and I want to convert it
from this to this and compare it to that. And it's probably old
school now. Like I wind up in Perplexity. And I think,

(23:56):
Michael, you made a comment like offline that the coding part,
it is good. And with the interface I was using with Perplexity where
I'm like, oh, it's watching me. It's looking at the Posit
community. It's looking at Stack Overflow. It's basically doing a bunch
of Google searches and consolidating and comparing them to my query.

(24:16):
And then it's returning me code that is very good and reliable.

Tim Wilson (24:21):
But that's not me asking it a business question.
That's me as an analyst saying, I want to see this.
Can you help me write some code to do that? And because I'm
asking it about doing stuff like in R, I have a decent grounding
in R. So what comes back, one, it's not writing... It's not coding
the whole video game where I know nothing. It's giving me 10 lines

(24:45):
of code. And I'm like, oh, I didn't know the system function existed.
That's pretty cool. I've learned more. So in that case, I feel very
comfortable that it's like rapidly speeding up instead of me doing 12 searches
and winding up on the same unhelpful Stack Overflow
post. It's actually returning the right result. And I'm like, oh,

(25:06):
I've learned something and moved on. I was like, holy
cow, this accelerated my iterations on writing the code. I'm like,
that's pretty cool. And that seems wildly better than it

was even six months ago. Val Kroll (25:21):
So to go back to the
original question that launched us into this, which was like, is... Val

Kroll (25:31):
Yeah, I've been holding on to my answer for this whole time. Michael

Helbling (25:35):
Listen, Val is trying to talk here, people. Come on.

Val Kroll (25:42):
No, I just remember one of my... Because lots of people
have written about that topic. Like that's definitely an interesting thing
that people read. But one of the best articles I had seen on
this, no surprise, Eric Sandosham. And one of the concepts that he brought
up around this was that AI is really good at problem solving and
it's getting better and better, but it's not making a lot of progress

(26:03):
on the problem defining part of it. And that's like where that human
component always is. And that's like the business context that we've been
talking about, like coming up with the hypothesis, structuring exactly what
tasks needed to be done in order to do whatever you were working
on. Tim, if you want to reveal your project, I'll leave that to
you. But I think that that's a really helpful way that my brain

(26:25):
kind of organizes and categorizes where there will continue to be improvement,
but where there'll always need to be an assist. And that's why we

can be comfortable. Michael Helbling (26:34):
And I'll go a step further than that,
Val. I actually really think that as AI comes into its own,
it'll start to really show who can do that really well and who
cannot in organizations. Like AI is going to basically highlight the people
who are really shit at understanding the levers that drive the business

(26:57):
and driving down into the causes and effects that actually make things happen.
And it's actually going to make people look bad eventually because it'll
be like, oh yeah, you're not getting anything of value out of this
tool. That's strange. Let me just, oh, no, it's like that.
And then suddenly that person's going to be shown to be like,

not really of the caliber. Moe Kiss (27:18):
I don't know.
I feel like maybe it's just me being crazy optimistic as usual.
I see this really exciting. Like there are so many boring bits of

the data job. Tim Wilson (27:30):
No one's saying it's

not. Michael Helbling (27:32):
I'm bullish. Totally. I want those people out.

So I think that's great. Tim Wilson (27:36):
But it's the difference
between, and Moe, you shared an example that did not make it to
a recording and we won't name who did it. It was some business
partner saying, hey, can you generate some hypotheses? Like it literally
asked, like the prompt, can you generate hypotheses, took those, threw them

(27:58):
over the wall to you and said, hey, can you prioritize and validate
these or your team? Compare that to... And I've heard, like I was
talking to John Lovett about how he went about writing his
latest book, The New Big Book of KPIs by John Lovett,
which now it doesn't have to be my last call. And his part

of his... Val Kroll (28:16):
Look at you stuffing this episode

with last calls. Tim Wilson (28:19):
Stuffing it in. Michael Helbling

Wilson (28:21):
But part of his technique, and I've heard others talk about... I
mean, this is not totally original, but he said, imagine you are a... He
gave specific industry people. He said, you're responding to me as an ideation
assistant. And I feel like a lot of people, and I mean,
Ethan Mollick, Jim Stern, John Lovett, lots of people are saying,

(28:42):
let the AI be a really smart coworker, use it as a sounding board,
still be a human, but instead of saying, hey, Julie, can you hop
on a call so we can kick some stuff around about
that? Before you've done that and got to find time on Julie's schedule,
it can instead be, hey, you're an analyst with

(29:04):
an applied math master's degree who's been working in agency, whatever.
Now, I have a question about this. What sort of prompts would you,
what would you ask me? What would you think? What would your ideas
be? So that is an ideation companion. And I've tinkered with that as
well. Not saying, give me this and I want to take and edit

(29:25):
the responses, but much more of a, I want to use you as

a nonjudgmental and infinitely patient sounding board. Tim Wilson (29:29):
And that
I think, from a hypothesis generation standpoint, because that forces me to actually
express what am I thinking? What do I see? I think it might
be this. I think it could be this. Just like I would in
more of a human interaction, as opposed to, I want to write
the one sentence prompt and have it just give me the answer.
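A hedged example of the persona framing Tim is describing, with wording invented here rather than taken from John Lovett's or Ethan Mollick's actual prompts: the model is asked to interrogate your thinking instead of handing you a finished answer.

```python
# Illustrative only: the persona details and instructions are invented for this sketch,
# not quoted from anyone's actual prompt.
IDEATION_PROMPT = """\
You are an experienced analyst with an applied math master's degree who has spent
ten years in agency-side analytics. Act as my ideation companion, not my ghostwriter.
I will describe a business question I am wrestling with. Respond by:
1. Asking me three to five clarifying questions about context I have not given you.
2. Suggesting hypotheses I should consider, with the data each one would need.
3. Pointing out where my framing might be too narrow.
Do not write the analysis for me.
"""

if __name__ == "__main__":
    question = "Trial-to-paid conversion dipped after our March pricing change."
    print(IDEATION_PROMPT + "\nMy question: " + question)
```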

(29:51):
And when you look at some of the people out there who are
posting, like their prompts are pretty involved. And
it is the case, back to the coding, where Cassie Kozyrkov had
an article where she said, if you know how to code,
it is actually in many cases faster to write the damn code than
to write a prompt that describes what you want the code to be.

(30:15):
And that's very different from Mike, like your example with writing the

video game with your son. Michael Helbling (30:19):
Oh yeah, because I can't write

the code. Tim Wilson (30:20):
Right. So I'm like, so I'll just describe it
and I'll work in that prose. And I was like, oh,

okay, that makes... I don't know. Val Kroll (30:26):
On the sounding board
front, wouldn't it be cool if we could make it talk to Julie's

gem? Moe Kiss (30:32):
Oh yeah. Julie Hoyer
that I like. If people were like, oh, I don't want to bug
Julie, I'll talk to her gem, I'd be like, I'm gonna toss this

gem real quick. Michael Helbling (30:40):
So speaking of sounding board, so I built
this in NotebookLM Plus, I just took all the personality assessments and
leadership style stuff I've ever done, dumped it in there, and
I made an AI chat agent that people can interact with about my

(31:01):
personality, my style, ask questions about how to conduct meetings with

me, and I've given that to my team. So that... Val Kroll (31:06):
Oh my

God, gotta go, sorry guys, gotta go. Tim Wilson (31:10):
I'm out! Val Kroll

busy, all of a sudden. Michael Helbling (31:12):
But I mean, there's lots of
these amazing little things you can do with tools like that. Julie Hoyer:

I love that. Michael Helbling (31:19):
And it's not just idea
starters, it can also be things that are like, things we never thought
of as tools, because before what I'd do is I'd kind of type
up sort of a one-pager of here's how I work best with
people, and kind of like, people would read it or throw it away
probably, but now it's sort of like, if you're curious about something,

(31:40):
here's eight years of leadership personality stuff I had to take tests on,

feel free to just ask it anything. Julie Hoyer (31:48):
And kind of fun
if you're like, ooh, I don't want to ask Michael this question,
but I need to know, I'll ask his personality,

AI agent. Michael Helbling (31:56):
It's not me in there,

okay, it's just about me. Val Kroll (31:59):
It's like a Black Mirror episode.

Michael Helbling (32:03):
And before anyone asks, I could only share it within my
own organization, because that's how NotebookLM Plus works, so I cannot

share it with you, so don't ask. Moe Kiss (32:10):
Do you

know, okay, I need to have a ____ about something. Michael Helbling (32:14):
Yeah,

do it. Julie Hoyer (32:16):
We love a gripe. Moe Kiss
I'm seeing like AI really just like fuck up my life.
I'm so sick of reading things that have been written by AI.
I am like so violently angry about it, especially as it is getting overused

(32:39):
to write work on analysts' and data scientists' behalf, like put together
the findings. And it is crap, because I think
there is a way you can make it okay of like,
you write it and like just clean up my text versus like... But I am
reading so many documents that are written by

(33:01):
AI. And the thing that also frustrates me is if like anyone has
like a half-baked idea, it's suddenly like, here's a doc on it.
And you're like, great. So now I have like 5000 times more docs
to read. And it's a half-baked idea because you didn't have to spend
the day writing it or a couple of hours writing it.
You could basically leave yourself a voice note and then turn it into
a doc. And so people are just like throwing these docs

around. And I'm like, it's actually so frustrating. Michael Helbling (33:25):
You
should see some of the social media promotions that are like AI

generated. They're the worst. Tim Wilson (33:33):
Tell me about it. But Moe, I've
got a solution for you. You take those docs, you chuck them in

an AI, you get a one-sentence summary, move on. Moe Kiss (33:40):
And then,

no, but the problem is... Val Kroll (33:45):
No, I didn't really like your

insert one-sentence idea. Moe Kiss (33:46):
The issue is though that often the...
I feel like almost like the directness or, like, the takeaway gets
so watered down that what you're reading starts to turn into like
smush. And you're like, it loses like the crispness of what the idea

was. Michael Helbling (34:08):
And this is... I think this is very,
very important. There's a point about AI that I think is really important
about what you're talking about, which is the way I say it is
AI is right down the middle in terms of an average.
And basically when AI does something, it kind of does it just okay.

(34:29):
And sometimes that's really great. Like it made me a just okay video
game. And that's amazing because I can't, I'm zero on that.
But if I'm like, I'm pretty good as an analyst and it makes
me a just okay analysis, that's pretty crummy. I can't work with that.
I need better than that. And so one of the things that sort
of stood out to me about AI and its usage is that knowledge

(34:52):
and expertise actually becomes a massive and important filter for how AI
is actually going to be beneficial or not beneficial. Like I was talking
to my tax accountant and he's like, oh, Michael, you wouldn't believe the
crazy things people are getting from AIs about taxes. I'm like,
yeah, because they have no idea how they should be doing their taxes.

(35:14):
You as a tax expert can take one look at that and know
if it's good advice or bad advice. Just the same way as I
could take one look at an AI's output and it's something I'm an
expert in and know if that's good enough or not good enough or
like 50% of the way there and I can tweak it upward.

Michael Helbling (35:32):
But the point is without knowledge, I only could possibly
hope for average. And so that's where everyone has to understand is sort
of like when you let AI do something you don't have expertise in,
you're basically only gonna get maybe 50 to 60% good quality.
And of course that number's improving. I'm excited for it to keep improving,

(35:54):
but the reality is that's really what we're getting out of that.
And we're not getting anything that no one's ever thought of before.
We're only getting what's been thought of before and what's most
standard. Because I tested this with data strategy. I went to the deep
research in ChatGPT and I said, really put together a research around the
top themes and things like that with data

(36:17):
strategy. Like what are people saying about it? What are the... And it did
a great job. I mean, pulled 40 different sources and wrote this whole
thing about it. And I said, what's the missing thing from all of
these different things? And it literally fell over. It couldn't really come
up with anything, because it's not there to like do that kind of
thinking. Now I can do that kind of thinking, but

(36:40):
there's not enough other people in the consensus applying that to it that
it can build a knowledge base around to say, oh, I've trained myself
on that information. Here you go. And so that's where we always have
to... It's important to think about, okay, yeah, my expertise applied to
AI gives me a superpower; someone without expertise applying AI only brings you

(37:03):
up to average. And so then now you can see like, okay, then how should we
use it in our businesses? The one thing I do get concerned about, about
AI and how we're going to proceed because we're obviously not going to stop
using it, is what do people without expertise do to build expertise now?

(37:25):
Because if AI is writing all of our code in the next three years, how do
people who are starting out as software developers build that expertise
to be able to coach the AI to write amazing code? Or how does that next
amazing breakthrough in coding languages or the replacement to SQL ever
come about if all we're using is the same things AI knows the most about?

(37:47):
Because like people I've talked to who are developers, the more esoteric
the language is, the less the AI is really doing a good job with it. The
more popular the language, the more amazing it is because there's a bigger
corpus of information for it to consume and learn on. So it's a really interesting

challenge to think about. Michael Helbling (38:05):
As analytics people, I think
about it for us mostly, it's sort of like, okay, so yeah,
how do we take a junior analyst and make them into an amazing
senior analyst down the road? And if AI is coming in and
doing like a bunch of that job, the nice thing is AI is
nowhere close to doing the analyst job. Now give it two years and

(38:25):
my story will change. Like, so much progress is being made and I'm
super excited about that. But that's the thing I think for a lot
of us and especially experienced listeners think about is how do we make
sure there's a bridge backwards so that we don't lose the connectivity so
that future people can come in and be good at this as well.
Because the last thing we all want is everyone getting to average and

no further. Tim Wilson (38:50):
This one I can't remember the source on,
but I do remember seeing someone who had said they'd used AI to... They'd
given it kind of what they were wanting to get more expert
at and said, develop a training plan for me. And these are the
criteria: I want to do a half hour a
day. Because, kind of along those lines, that's why I'm terrified that

(39:13):
people think this is going to let me skip the steps of
hard work and frustration and thinking about the business, about how code
works, about architecture, whatever it is. And I don't think there is...

(39:33):
That's not what it's going to do. Like people still need to
develop expertise and you develop expertise through practice and there's

a degree of accelerating, but I don't. Yeah, Julie. Julie Hoyer (39:45):
I think
it's crazy that one, Michael, I love the way you were talking about
the averages. I've never thought about it that way. And that was definitely
like a clarity moment for me, because I feel like people can't start
with a blank slate. Like how do you... To your point,
how do you gain the skill or Tim, kind of what you're saying, like, how

(40:36):
do you gain the skill to look at a blank screen and be like, I need to go
write code to do this, or I need to get my thoughts out in a coherent way.
And if you've always had the ability to, like, go to AI and get even, just
a starting point? I don't know. I just feel like that's such a core skill
in problem solving and problem definition and just like, growing in general
in your capabilities. Because something I found too, is like, sometimes
I struggle or push back, maybe drag my feet on going and
using AI sometimes because to Moe's point earlier,
I don't like the brain work of going through and slogging through its

long verbose kind of average answer and tweaking it. Julie Hoyer (40:48):
Like I
sometimes do better, with my workflow and the way I like to work and
like the output I get, when I have a blank screen and I just
brain dump or I just try something. And then to Tim's point earlier,
like then maybe I go and use AI to help me.
But I don't know. It's like such a different exercise in my head

(41:10):
that I find it exhausting to take an initial AI output and then

make it into something good. Moe Kiss (41:13):
Do you know what's so funny?
I'm the complete opposite. Like I loved it
because I'm one of those people that literally needs a rubber duck on
my desk because I need to like have something to bounce off and
be like, oh, I'm hitting this wall. Like, or, oh, I haven't thought
of this. And like I am the epitome of the rubber duck when

(41:36):
I'm... Especially if I'm writing code. And that's what I essentially am
using AI for now is like to go back and forth and then
be like, oh, no, you haven't gotten this right.
OK, oh, no, I want to look at this
now or like I want to change this wording. And I do... I
was thinking about this the other night. I was working on something and
part of me was like, oh, I feel like this might have been
faster if I just did the whole thing from scratch. But I feel

(41:58):
the output ended up being better for my working style because I got

that feedback loop, if that makes sense. Tim Wilson (42:02):
But I don't see
how that's different. That's you still initiated it. You brought your expertise,
your point of view, your thoughts, and you
put it in. I think Julie, if I'm hearing right, saying,
but if I start with a... If I don't come in with a
starting point and ask a query is something I'm looking, if I don't

(42:24):
come up with something to bounce it off, I just show up with
a prompt. I'm going to write this kind of vanilla
thing and I'm going to get vanilla back. And then it's... I'm going
to say, send it to my favorite presentation tool and say,
generate a presentation of it. And it's going to make a vanilla presentation

(42:46):
that checks a lot of boxes, but doesn't move

anything forward. Julie Hoyer (42:49):
I don't know. It's even like when I've asked
it to help me like summarize a lot of data, like I've done
a sentiment analysis recently. So I was using a sentiment
analysis, like, gem in Gemini, and I like stripped it of all PII and
all that. But I put some of these responses and I was asking
like, help me take these 700 responses and just like, help me identify
some themes. And at first read, it's like, oh yeah, that's great.

(43:12):
I could ask for direct quotes that prove each of those themes.
But then it's like, I'm going through and checking and I did read
through like all the responses. And it's just interesting how much
rework and I'm not saying that's like not a good place to start,
but that is like exhausting to me of being, it wrote up this
thing. And now I actually have to re-dissect it and take it
out. And it's just a very different, yeah, working style. I like when

(43:34):
I can come with a more like vision of what I'm trying to
get. And I guess for the sentiment analysis, I don't know a better
way, right? Like, how am I supposed to go through all these written
things and remember all the quotes and what like physically like put them
in categories? Unrealistic. But that exercise made me realize to Tim's point,
yeah, I like using AI to further something I kind of already have

(43:56):
going rather than it spitting out this initial kind of messy thing and
having to rework it. I guess it's just a preference thing.
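Part of the rework Julie describes, checking that the "direct quotes" the model cites actually appear somewhere in the 700 responses, can be automated. A minimal sketch, assuming the model's output has already been parsed into a theme-to-quotes mapping (that structure is an assumption for illustration, not Gemini's actual output format).

```python
# Minimal sketch: verify that quotes an LLM attributes to survey responses
# really appear in the source text. The theme/quote structure is an assumed
# shape for the model's output, not any tool's actual format.
import re

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivial formatting differences don't fail the check."""
    return re.sub(r"\s+", " ", text.lower()).strip()

def verify_quotes(responses: list[str], themes: dict[str, list[str]]) -> dict[str, list[str]]:
    """Return, per theme, the quotes that could NOT be found verbatim in any response."""
    haystack = [normalize(r) for r in responses]
    unmatched: dict[str, list[str]] = {}
    for theme, quotes in themes.items():
        misses = [q for q in quotes if not any(normalize(q) in r for r in haystack)]
        if misses:
            unmatched[theme] = misses
    return unmatched

if __name__ == "__main__":
    responses = [
        "The new checkout flow is much faster, but the confirmation email never arrived.",
        "Support was friendly and resolved my issue the same day.",
    ]
    themes = {
        "Checkout speed": ["the new checkout flow is much faster"],
        "Email reliability": ["emails always arrive instantly"],  # fabricated quote, should be flagged
    }
    print(verify_quotes(responses, themes))
```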

Tim Wilson (44:04):
So there's my Cassie Kozyrkov quote, from her what-is-vibe-coding
piece, where she was making the distinction of
like trying to read somebody else's code versus writing your own code and
trying to debug it. And she was like, at least when you write
buggy code yourself, you understand the flawed thinking that created it.
With vibe coding, you're playing archaeologist in someone else's mistakes.
So if you went through yours and did the

(44:27):
sentiment analysis yourself, by the time you got to responses number 650,
you'd be like, Oh, I'm doing this differently from how I did it initially,
now I need to like go back and do it again.
But you'd have that kind of baked into what you're doing.

If you just skipped all of that. You don't know what... Julie Hoyer (44:41):
Yeah,
yeah, that's exactly... that's the exact example and perfect way to put it.
I don't trust it. I don't know all the assumptions
it made, and now I'm kind of having to dig and check it

off. Moe Kiss (44:54):
That's interesting what Cassie said, though, because I found...
I've used it quite a lot... not, sorry, I haven't used it to
QA someone else's code, but I have used it definitely to understand someone
else's code. And I found that super helpful. Because it was like a
business area that I wasn't as familiar with. I wasn't familiar with the

(45:15):
tables and all that sort of stuff. And I kind of was like,
I wanted to like sense check of how is this being... How is
this metric being calculated? All of that sort of stuff. And it helped
me understand that at a time when the person who wrote the code

was asleep. And it was really useful. Tim Wilson (45:28):
That's not her point.
That's not her point at all. Her point was,
if you ask it to generate the code, then you've gotten the code

that is the person who's asleep. Moe Kiss (45:37):
Oh, got it. Tim Wilson

was saying this is like debug... So absolutely... Julie Hoyer (45:42):
If you don't
go ask it all the questions of like, why'd you choose this?
Did you think of this edge case? What happens at this edge case?

It's not like you just don't know. Tim Wilson (45:52):
I mean,
I think that's a great point. If you're trying to look at somebody
else's like, if it's a spaghetti hot mess, and you're saying,
I mean, I could even see asking it like, how good is this?
Like, this seems like it's 4,000 lines. Could this be done better?
So isn't it? But again, that's an assistant of saying, I don't understand
what this is doing. And the person who wrote it is

not here. Help me out. I think that's... Michael Helbling (46:11):
That's a great

use case. Moe Kiss (46:14):
Okay, one of the things that I feel like
is coming to mind also with vibe coding. I'm gonna say something that
might also be controversial. I wonder if the
reason like the LookML example is so good is because LookML is so
basic. Like you're not like... Most people can write LookML, it's fairly

(46:36):
like, simple, I would say. Versus like, if you're trying to write code
for something that's very complex, and then trying to debug it,
I could see that being very... It would be very bad at that.
Whereas like, I don't know. So maybe it has to also do with
the complexity of the problem, like what the code is trying to do,

or the coding language. Tim Wilson (46:58):
But that has been... The complexity
front has me thinking, if you look at where people
kind of jumped to labeling themselves data scientists after they'd
taken a Python boot camp and didn't have the... What are the

(47:19):
trade-offs in the different models that I could choose to run on
this? Asking AI, you're probably giving it incomplete information, and hey,
what kind of, should this be gradient boosting? What should I use?
And maybe, Michael, it goes back to what you were saying,
somebody who doesn't know any better, if the AI says, well,
based on what you gave me, it didn't think to probe for some

(47:42):
other factors, it didn't know some context or nuance, it could totally send
you down a path that wasn't helpful, whereas if somebody's like a legit,
experienced data scientist who probably wouldn't even need to... They wouldn't
query it, they'd say, well, given the nature of this, I think we

should use X, Y, and Z. Val Kroll (47:59):
There's something that you said
there, Tim, because you were saying if they had taken the Python boot
camp, they might not know to think about different models,
having that knowledge, and then when you were juxtaposing that, you said
someone who has more experience, because I think that that's a key part
of it, is the chipping down and falling down and knowing what the

(48:21):
watchouts are, I think that that's a huge part of it,
too, is there's nothing to replace the experiences that we, the scars that

have made us who we are. Julie Hoyer (48:27):
They stick with you.

Michael Helbling (48:30):
I want all of you to struggle with applying
Stephen Few's principles to data visualizations of random BI tools. Moe

Kiss (48:37):
Okay, so one concept that has been churning around in my mind
a lot of late, and this is tangential to this whole AI piece,
I kind of keep coming back to, what happens if we give people
more ability to self-serve, answer their own questions using AI, whatever
it is, and they misinterpret it, or they make mistakes? And recently someone
said to me, they're like, and what if they

(49:00):
do? So they make a mistake, they misinterpret the data, they're accountable
for that mistake and that misinterpretation, and then they need to fix it,
and they don't make that mistake again. And I feel like it's this
tension that's been rolling around in me where I'm like, I always want
to protect people from making the less good
decision, and so I'm like, I want them to make the best decision
possible the first time, and so I'm always like, oh, and we can

(49:23):
help you do that, that's what data science does, and it's like,
the funny thing is, though, as we talk about expertise, so much of
your expertise comes from making those mistakes yourself, so it's
like, anyway, I'm just thinking out loud about letting people just fuck
up themselves and then figure it out and how there's value in that.

Tim Wilson (49:43):
Plus, that's actually kind of part of the human experience.

Moe Kiss (49:45):
I was going to say, yeah. Tim Wilson
this, I think lots of things have been... Lots of thought pieces
around, like, if there wasn't hardship and frustrating stuff and mistakes
made, I mean, that's getting rather philosophical, but
if everything is a smooth path, then what are we... We've got to

(50:08):
go find aliens to fight or something, that's where Star Trek.... Moe Kiss:
But isn't that the point, though, that all these people that
think, hey, I can just throw a CSV into ChatGPT, it's going to
answer all my business questions, I don't need data scientists,
blah, blah, blah, why not let them do it? Be
like, sure, you want to upload these CSVs in, answer your questions,
get some shitty answers back and make some shitty business

(50:31):
decisions? That, my friend, is going to be a great learning opportunity.

Val Kroll (50:34):
And then they'll make a bad decision, and then it will
come back and they'll be like, oh, it was just really
low-quality data, we really just need to clean our data and we just
need some more tools, different tools, it was the tool's fault.

Michael Helbling (50:45):
The tools are always the ones that are messing us up,

for sure. Moe Kiss (50:48):
Oh, Val, that hurts, that hurts. Val Kroll
someone thinks that that's a solution, Moe, do you really think they're
going to have, like, the self-reflection to be

like, oh, it's not me. Tim Wilson (50:57):
That's the other thing that these
guys, separate from the we're going to stand up, our little
Johnny-come-lately, just ask the question and give you the
answer, the other is that so many people have jumped on
this, well, with AI, you've got to feed the beast, so you need
to get all of your data. So that has also stood up an
enormous number of companies that are now sowing fear, uncertainty, and

(51:22):
doubt that we've got to have all the data pumped in,
and it's kind of energized the... I was talking to a
long-time friend, she's a marketer, and she was like, went on a tear
about cookie blocking, European-based company, she's in North America, and
she's like, we had to fight so we could get the

(51:44):
cookie, even if they don't track the... If they don't accept consent,

they can... If they don't consent to... Moe Kiss (51:47):
What's going on?

Julie Hoyer (51:56):
Val's checking her blood pressure. Michael Helbling
checking her blood pressure on Tim's rant. She's like,

poor Val. Tim Wilson (52:03):
But that has been fed as well.
It does get to where Val was, that oh, and now if a
bad thing happens, it's not because I tried to shortcut
it, nobody's going to accept that the AI is no
good, it's going to be we must not have had enough data,
the data must not have been clean enough, and they throw it to

(52:24):
the data team, and that becomes the problem when it's just often not.

It's like, no, you need to think harder. Moe Kiss (52:28):
Thanks, Tim,

I'm back to pessimistic. Full swing! Julie Hoyer (52:31):
Just make sure people
can still sniff out the BS. You need enough people that can sniff
out the BS, and you need enough people to not get stuck in
the echo chamber that maybe AI is making worse in some areas.
You know what I mean? That's where my head goes is
the people who can see beyond will still rise to the

(52:54):
top. Because I feel like you're going to get a lot of that

echo chamber stuff. Michael Helbling (52:55):
It's hard enough to maintain data
quality in a single source of data or a single data set.
Now map out the four to five data sets you'll need to maintain
in complete alignment with complete accuracy. It's not a job that's going
to be very easy very fast. That's the truth. And we have to

(53:17):
do that if we want LLMs to be able to house the context

for actually doing what we would call analysis. Julie Hoyer (53:22):
Blood pressure

is back. Tim Wilson (53:25):
Blood pressure is back. Val Kroll

Michael Helbling (53:29):
We're the one audio podcast with prop comedy.
All right. Well hey, we better wrap up this episode. Congratulations to each
of you. Now go ahead and go put AI expert on your LinkedIn

profile. Everyone else is doing it. Tim Wilson (53:50):
AI strategist. Michael Helbling:
Oh AI strategist. Oh I like that. That's better.

Did you use an AI to come up with that? Tim Wilson (53:58):
No, I
might have seen that on a long-time member of the analytics community's profile.

I was like, oh, interesting. Michael Helbling (54:04):
Very good. And actually what
stood out to me, I loved, Moe, hearing from your experience because it's
a lot different than what I'm experiencing out there in the context that
you're operating in. So that was really great. And I love the juxtaposition
and just sort of learning from that. So that was amazing.

(54:28):
Tim, not that I got nothing. Tim, of course, name
dropped like everybody, Cassie and Ethan Mollick, which I also loved. Moe

Kiss (54:39):
Most well-read individual. Michael Helbling

on in his quintessential analyst ways. Nice job. Val Kroll (54:43):
He can't help

himself. Michael Helbling (54:47):
And Julie, way to lead the conversation today.

Thank you. Val Kroll (54:51):
We knew it. Michael Helbling
Listen, I just typed into Gemini and I said, who's going to be

the best? Val Kroll (54:59):
Who's made the best gem? Michael Helbling
And it was like, oh, who's... Gemini's like, who are my options?
Oh, Julie. Julie, my mom. And Val, thank you, too.
No, because I think what you did, Val, which was actually super important
for the conversation was you turned it back into who or what we're

(55:21):
going to do with people around this, which I think we
were all over the place. And you brought us back
to probably the more important central element of this, which is we're analytics
people. All right. And I went on a few rants, so yay.

All right. Michael Helbling (55:36):
Let's say right now that I bet you're out
there passing this whole episode through an AI filter to bring it down
to like 30 seconds or something. But if you hear something you're interested
in, we would love to hear from you. So please do reach out.
You can reach us on LinkedIn or on the Measure Slack chat or

(56:01):
by email, contact@analyticshour.io. And we'd love to hear from you. Please
do not send us AI created emails. Moe does not appreciate that.
Or if you do, train the AI to be very

succinct. Moe Kiss (56:18):
And funny. And funny. Michael Helbling
Yeah. And they're getting so much better at being humorous
now. So it's good. And then the other thing I'd like to say
is, we've been around for a long time, and if you've never thought
to go on your favorite platform and give us a rating or a
review, I'd say AI can help you with that too. So we're not

(56:40):
above it. Just go out there and give us five stars and
a long-winded AI-driven... No, don't do that. But do rate and review
the show. It helps AIs consume the show and then tell people the

cool things we say. Michael Helbling (56:54):
And then, last and certainly not least,
a big shout out to Josh Crowhurst, our producer, for everything he does

to help us get this show off the ground. Tim Wilson (57:02):
Can we
just say that every time you fuck around with
AI-generated images of us as a group, Josh always looks amazing.

Moe Kiss (57:15):
He looks amazing. You know what it
is. I've worked this out. AI knows what to do with images of

men with beards. That is, like, the summary I have. Michael Helbling (57:19):
Oh.
Okay. That's interesting. Okay. That's probably a whole episode right
there, Moe, I don't know. But anyways, yes, Josh Crowhurst, who looks amazing
in Studio Ghibli form, as well as other elements. But yeah,

(57:42):
thank you, Josh, for everything you do. And I would just say,
and I think I speak for all my
co-hosts out there, no matter what part of your job AI is doing,
the part it can never do for you and you got to keep

doing, is to keep analyzing. Announcer (57:56):
Thanks for listening. Let's keep
the conversation going with your comments, suggestions, and questions on
Twitter at @analyticshour, on the web at analyticshour.io, our LinkedIn
group, and the Measure Chat Slack group. Music for the podcast by Josh
Crowhurst. So smart guys want to fit in, so they made up a

(58:21):
term called analytics. Analytics don't work. Do the analytics say go for
it no matter who's going for it. So if you and I were on the field, the

(59:13):
analytics say go forth. It's the stupidest, laziest, lamest thing I've ever

heard for reasoning in competition. Val Kroll (59:28):
Guys, I've got an exciting

example to share today. I'm not telling you now. Michael Helbling (59:28):
Yeah.

Julie Hoyer (59:28):
I haven't even had coffee. Like this is fucked. Michael Helbling:

I am going to need to get another beer. Tim Wilson (59:28):
See, told you to chug

that. Moe Kiss (59:30):
Well, I don't know if you need coffee. Julie Hoyer
if we push it right up to a 5:30 central ending time, we might get an appearance

of Abby Lou. Michael Helbling (59:33):
Oh. Val Kroll

that's perfect, actually. Moe Kiss (59:33):
That sounds wonderful. Julie Hoyer:
I opened up my laptop when we were eating breakfast this morning just
to like do something really quick. And she's like, are you talking to Tim?
The way she refers to Tim constantly cracks me up. What was it, she was pretending
to be working when she was home sick? Yeah, she was like, hey, Tim. Like

she was pretending to talk to Tim. Moe Kiss (59:42):
My kids do the
same, but they're like, I'm gonna go do work now. And then they sit at my
desk and tap and I just turn up my keyboard. But they are call Tim. Michael 887 00:59:51,985 --> 00:1:00:08,570 Helbling: They don't name specific co workers. I mean, you have a few more 888 00:1:00:08,575 --> 00:1:00:11,435 co workers... Julie Hoyer: They probably have a little more variety. Moe 889 00:1:00:11,424 --> 00:1:00:11,443 Kiss: Well, they do. They do. When they come to the office, they're like, 890 00:1:00:11,445 --> 00:1:00:11,450 where's Auntie Priscilla? Yeah, they do have their favorites. Michael Helbling: 891 00:1:00:11,440 --> 00:1:00:16,987 I put in Slack my first attempt to make us into Muppets and it invented 892 00:1:00:17,030 --> 00:1:00:20,188 a random other Muppet and put it in there. I was like, 893 00:1:00:21,137 --> 00:1:00:23,678 there's a ghost. That's Ken Riverside. I don't 894 00:1:00:27,823 --> 00:1:00:27,855 know. Moe Kiss: That's Ken, but that's like old Ken. I definitely thought 895 00:1:00:27,844 --> 00:1:00:33,259 of him as like younger, hipper, more dapper. But I like. Michael Helbling: 896 00:1:00:33,249 --> 00:1:00:33,253 Yeah, yeah, no, we've already got Ken nailed with AI before. Julie Hoyer: 897 00:1:00:33,243 --> 00:1:00:44,167 I like how we're all Muppets and Tim is from the Simpsons. Michael Helbling: 898 00:1:00:44,156 --> 00:1:00:44,164 Yes. Moe Kiss: Oh, wait, which one's that one? Val Kroll: Ted is Flanders 899 00:1:00:44,154 --> 00:1:00:46,349 cousin and we're all Muppets. Tim Wilson: So that one didn't work very well. 900 00:1:00:46,339 --> 00:1:01:02,576 Michael Helbling: Rock Flag and more dashboards through AI, now.