December 1, 2025 • 53 mins

In this episode of OpsCast, hosted by Michael Hartmann and powered by MarketingOps.com, we are joined by Spencer Tahill, Founder and Chief Growth Officer at Growth Alliance. Spencer helps organizations design AI and automation workflows that enhance go-to-market efficiency, streamline revenue operations, and strengthen marketing performance.

The discussion focuses on how to move from experimentation to execution with AI. Spencer shares his systems-driven approach to identifying automation opportunities, prioritizing high-impact workflows, and building sustainable frameworks that improve strategic thinking rather than replace it.

In this episode, you will learn:

  • How to identify and prioritize tasks for automation using a value versus frequency model
  • The biggest mistakes teams make when integrating AI into their workflows
  • How AI can strengthen strategic decision-making instead of replacing people
  • Practical prompting frameworks for achieving accurate and useful results

This episode is ideal for marketing operations, RevOps, and growth professionals who want to turn AI experimentation into measurable, scalable execution.

Episode Brought to You By MO Pros 
The #1 Community for Marketing Operations Professionals

Ops Cast is brought to you in partnership with Emmie Co, an incredible group of consultants leading the top brands in all things Marketing Operations. Check them out at Emmieco.com.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Michael Hartmann (00:48):
Hello everyone.
Welcome to another episode of Ops Cast, brought to you by MarketingOps.com, powered by all the MO Pros out there. I'm your host, Michael Hartmann, flying solo this week, or this episode, again. But joining me today to dig into how to move from experimentation to execution with AI is our

(01:09):
guest, Spencer Tahill. I should have asked Spencer beforehand, so correct me if I mispronounce your name. He is the founder and chief growth officer at Growth Alliance, where he helps companies design AI and automation workflows that drive growth across go-to-market, revenue operations, and marketing. His systems-driven approach focuses on using AI as a

(01:29):
practical tool to free up strategic thinking and improve operational performance. So, Spencer, welcome to the show. And remind me, where in the world are you right now? I forget.

Spencer Tahill (01:38):
So, I'm usually in Prague, but currently I'm in Genoa, in Italy. So, yeah, that's a little bit of a change-up. It's a pleasure to be here.

Michael Hartmann (01:48):
Well, good. Glad to have you. I knew it was Europe, I just couldn't remember where, and it sounds like if I had actually had to remember, I would have been off. What I do know is it's at least early afternoon, if not early evening, there, right? Yeah.

Spencer Tahill (02:04):
It's just turned almost 6 p.m. All right.

Michael Hartmann (02:08):
So there you go. Yeah. All right. So you're seven hours ahead of me, good to know. All right, well, let's start with a little bit about your background. Like, how did you get into AI and automation, especially for the go-to-market, RevOps kind of domain?

Spencer Tahill (02:27):
So, it's kind of a long story made short. I was always told, if you're gonna do something, do it right the first time. Even if it takes you longer, just make sure it works the first time. So, when I was starting to work in kind of the HubSpot sphere, and I think there are probably gonna be a lot of people in the Salesforce sphere listening,

(02:48):
I did a lot of freelance work in the HubSpot world. It was setting up Marketing Hub, setting up Service Hub, setting up Sales Hub, all of these different modules, as well as connecting everything through everybody's favorite marketing operations platform, Zapier. At the time, that was kind of the leading edge of the market. And as my freelance business evolved, I was able to

(03:10):
step away from, you know, a full-time engagement where I was a marketing manager at a startup, actually here in Europe. And I dove into doing full-time freelance work for a lot of American Fortune 500 companies, creating their HubSpot instances, helping them keep up to date with their modules and all their automations there. So how I fell into it was very naturally.

(03:33):
It was just playing around with the workflows and the sequences features inside of HubSpot. And then, when I wasn't able to build something inside the platform, I just moved over into no-code tools. Zapier was the first one that I learned really well. And then I played with Make, but it wasn't until a few years ago that I kind of doubled down into AI

(03:55):
development. So that's what we do now. We build out HubSpot portals mostly, sometimes in Attio, but everybody wants to do more. And AI is kind of like putting the pedal to the metal at that rate. So what used to take a team of five or ten, you can do it in, if

(04:15):
you get a nice prompt going or an AI agent chain going, you can do it in a matter of minutes or hours. You know, it's crazy, the things that you can build now. And it's really interesting. So that's kind of the long story made short; I fell into it. I love engineering, and it's business engineering.

Michael Hartmann (04:32):
So that's an interesting way of thinking about it. I hadn't heard that. It's interesting because, I'm just really curious, I've had to start doing this in my own head: do you differentiate between AI and automation? Because I think, to me, my view is that automation is a

(04:55):
thing where AI can be a part of an automated workflow or whatever. And I feel like a lot of people sort of conflate those.

Spencer Tahill (05:06):
Yes. It's a huge issue. The short answer is that they're two different things. The long answer is that there's a lot of misnaming in the industry, especially when it comes to anything regarding automation, anything regarding AI. And you've probably heard it: "I just built an AI agent," on LinkedIn or something like this.

(05:27):
"I've replaced a 20-person B2B sales team." No, you didn't. And, sometimes you can do it. You know, we've done cool things with Slack integrations and data across the world. But these types of linear flows, and I'm talking, you know, you have single

(05:47):
input, single output, or you've got multiple inputs, multiple outputs. The more complex you go, the more essential AI is going to become. So when you build a workflow, my definition of a workflow is A, B, C. You have A, an input; B, some sort of transformation step or processing; and then you've got C, an output. And then you have an AI-enhanced workflow, which is you

(06:11):
have an input, or you have multiple inputs, and then you have processing, the orchestration layer, and then the thinking that is going on. Instead of saying, you know, if A, output one; if B, output two, say there's some sort of lead scoring and the lead scores have different weights, and you have to calculate it with different weightings.

(06:32):
It's a really good use case for AI if it's way too complex for formulas. In that instance, that becomes an AI-enhanced workflow. Where the industry is going now, and where there's a lot more confusion, is that there are workflows, AI-enhanced workflows, AI agents, and then there's MCP, which is Model Context Protocol. So it's all different.

(06:54):
And I would say the more inputs you have, the more context you have, the more transformation steps you have, and the more output possibilities you have, the more AI is gonna accelerate that. So that's how I define it. A workflow is just gonna be linear, or it's gonna be a single-path input, and then it's gonna produce a formulaic output.

(07:17):
And an AI-powered workflow is gonna do a lot more thinking, thinking that happens behind the scenes.
Right.
Yeah.
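Spencer's A-B-C framing, and the weighted lead-scoring case that outgrows formulas, can be sketched roughly like this in Python. The weights, thresholds, signal names, and the llm_score hook are invented for illustration, not from the episode:

```python
# A plain workflow, per Spencer's A-B-C definition:
# A (input) -> B (fixed transformation) -> C (output).
def linear_workflow(lead: dict) -> str:
    # B is a deterministic rule: single input, single formulaic output.
    if lead["source"] == "demo_request":
        return "route_to_sales"
    return "route_to_nurture"

# An AI-enhanced workflow keeps the same A -> B -> C shape, but the
# transformation step does weighted scoring and defers ambiguous
# cases to a model instead of a formula.
WEIGHTS = {"pages_viewed": 2, "emails_opened": 1, "demo_request": 25}  # hypothetical

def ai_enhanced_workflow(lead: dict, llm_score) -> str:
    score = sum(WEIGHTS.get(signal, 0) * count
                for signal, count in lead["signals"].items())
    if score >= 30:   # clearly hot: a formula is enough
        return "route_to_sales"
    if score <= 5:    # clearly cold: a formula is enough
        return "route_to_nurture"
    # Too complex for formulas: hand the full context to the model.
    return llm_score(lead)
```

The design point is the middle band: the deterministic rules handle the clear cases, and only the genuinely ambiguous inputs reach the "thinking" layer.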

Michael Hartmann (07:26):
Yeah. And you brought up MCP, which I still haven't quite wrapped my head around, but maybe we'll get there in the course of the conversation. So this leads to one of my next questions. I think a lot of people struggle with, where do I start? Right? I want to do AI, I need to learn AI, I need to

(07:49):
do something. And one of the things that was interesting to me when we talked before is you said one of the first steps towards doing automation, especially AI-enabled or AI-powered, whatever you want to call it, is identifying the things that piss you off, right? I think that was the phrasing you used. But what did you mean by that? Maybe expand on it a little bit.

Spencer Tahill (08:13):
Yeah. So, some people know this about me, some people don't, but at the end of the day, I want to do the least amount of work possible. And that's probably not a very professional thing to say. But you know what? It's a game of efficiency and it's a game of focus. And when I used to work for a few different startups, even

(08:35):
when I was freelancing, it was all about trying to optimize my day. If I can build something that takes somebody eight hours to do, and I can do it in two, oh my god, the value goes up the ladder, right?

Michael Hartmann (08:47):
Right.

Spencer Tahill (08:48):
When I start to think about when I don't want to do something anymore, it's more philosophical. It's just, I don't have a great feeling about it, you know? Like, I don't want to wake up and turn the coffee machine on at six in the morning to have coffee at six thirty. You press the timer on your coffee machine the night before, and then it's done, right? That's a very simple use case.

(09:09):
When you start talking about it in the business aspect, everybody has different pain points. And I'll get into that later, but I start by looking at the real ground level: what do I just hate doing every day? Okay, let's say I'm a sales guy and I hate picking up the phone and dialing. Okay, maybe that's not the right industry for
Right.
But if you're a sales guy and you hate going into Salesforce or HubSpot after every call, then what do you do? If you're a really good closer, but you're just terrible at putting stuff in the CRM, okay, that's your pain point. Let's solve that, let's automate that first, so that you can focus on what you're good at and you can produce more,

(09:52):
you know, you can be more efficient at your job. So that's why I always ask, what pisses you off the most? "Ah, I just want to get on calls all day. I just want to close." Everybody wants to do what they want to do. But I start by looking at it as: I don't like taking notes. Very simply. I enjoy the process of listening, I enjoy the process

(10:13):
of conversing, but I'm not able to focus fully on you, Michael, if I'm off to the side writing my notes. So now you see all these AI note-takers, and that's why I use something like Fireflies or Fathom to take my notes for me, because then I can focus on what I'm good at. So I've gotten rid of everything that annoys me. Now I can give 100% of my focus to you.

(10:35):
So I think when you start making automations for the processes that piss you off every day, that's where you should start. That's the zero to one. Get the idea of what you don't want to do. Because eventually, once you get all the stuff out of the way that you don't want to do, the path is gonna light up. You'll be like, okay, this is the direction we want to

(10:56):
take it. This is what I want to build. And then you bring somebody in to build that, or you build it yourself.

Michael Hartmann (11:00):
Like, I'm with you on the things that I don't like to do. But to some degree, every job right now has things that we don't like to do, right? You brought up a salesperson. I think in the marketing context, the one that pops into my head is, you know, cleansing, getting lists cleaned up to be able to upload into a marketing automation platform or something like that, right?

(11:21):
Which is necessary in a lot of cases, right? And it's an important piece of work. But I'm sure there's only a handful of people out there who actually enjoy that work. I'm not one of them, and I would absolutely love to just replace that in some automated way. So it feels like what I'm hearing from you is not just things that annoy you

(11:43):
and bother you, but those that are still important, but are things you don't enjoy doing. Maybe it's a slight tweak from what you were saying.

Spencer Tahill (11:53):
Yeah. It's, if you've ever heard of, I think it's called the Eisenhower matrix, it's skill and will, or it's gonna be something along the lines of, is it urgent, and is it important? And wherever it falls on that matrix, if you would have to delegate that work, you should automate it. You should automate it to the best of your ability.

(12:14):
You should save 80% of your brain function to focus on the tasks that are the most important and the most urgent, because nobody's gonna be able to replace that, right? But how I got into it, very true story: my girlfriend at the time was a virtual assistant, and I was trying to figure out, what does a virtual assistant do, so that I

(12:34):
can work from my laptop. That's how it all started. And I was like, oh, in my silly head: you book the calendar meetings, you keep an eye on the email, you keep an eye on the Slack, maybe you take a look at the car reservations, you know, you do your job. And I was trying to figure out a way, okay, what if I just got a notification from all of these different platforms in one

(12:56):
place, all summarized in one place. Would it make my job easier as a person? So I built something for my own inbox that does that for me. So these annoying things, the repetitive aspects: how many times a day, how many times an hour, how many times a month are you doing it? What is the impact that it's gonna have in the short term and

(13:18):
the long term? I think this is the way you can start thinking about going from manual processes. Like you just said, enriching your CRM, that is a pain in the butt. That is something people come to me for. It's like, we have 400,000 contact records in HubSpot. What do we do with them? Oh, there's a strategy.

(13:40):
Let's go pay for ZoomInfo, let's, you know, use HubSpot Breeze to enrich it, right? That's kind of the whole thing now. But to do it in a sequential order, and to do it in a way that you can actually get money out of that project, is a completely different strategy. So I think before you do any automation, before you do anything, you should probably think about the

(14:01):
strategy first, and how you might make money back, you know, if that makes any sense.
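The "how many times a day, and what's the impact" questions above are the value-versus-frequency model mentioned in the episode summary. One rough way to sketch it, with task names and numbers that are entirely hypothetical:

```python
# Rank candidate automations by frequency times value: how often the
# task recurs, and how much time each occurrence would save.
def prioritize(tasks: list[dict]) -> list[dict]:
    return sorted(tasks,
                  key=lambda t: t["runs_per_month"] * t["minutes_saved_per_run"],
                  reverse=True)

tasks = [  # hypothetical examples drawn from the conversation
    {"name": "post-call CRM data entry", "runs_per_month": 200, "minutes_saved_per_run": 10},
    {"name": "list cleansing before upload", "runs_per_month": 4, "minutes_saved_per_run": 120},
    {"name": "meeting note-taking", "runs_per_month": 60, "minutes_saved_per_run": 15},
]
ranked = prioritize(tasks)  # CRM data entry first: 2,000 minutes a month
```

A high-frequency, modest-value task like post-call data entry can outrank a painful but rare one, which is why the frequency axis matters as much as the annoyance.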

Michael Hartmann (14:07):
Yeah. I mean, long-time listeners know I'm a big fan of thinking in financial terms, right? So there's an ROI, right? Whether that investment is time and effort or actual dollars doesn't really matter. It's, are you gonna get more back from that in the long term, or whatever time horizon you're looking at, right?

Spencer Tahill (14:30):
Exactly. And I do the same thing; it's exactly what you just said. Everything has to be ROI-adjusted. Whether that's gonna be, you're gonna build an automation, you know, somebody comes to me like, oh, we're gonna pay $10,000, $20,000, $30,000 for something. That's the monetary value, but what is the actual realized ROI?

(14:54):
Okay, within three months, your SDRs are gonna be spending 35% less time, because we've timed them, we've looked at their data entry times, we've averaged it. That's gonna be the time that they save. But what is the percentage of their salary that you're paying them that's gonna be freed up? What is that 25 to 35% of their salary that they're gonna be

(15:16):
able to go and focus on something else with? That's your margin, that's your ROI right now. And that's a whole different story, but yeah, I agree completely. Everything needs to tie back to revenue.
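Spencer's SDR example reduces to simple arithmetic. A sketch with hypothetical figures; the salary, team size, and build cost below are illustrative, not from the episode:

```python
# Realized ROI of an automation, per Spencer's framing: the freed-up
# fraction of salary is the annual return, the build cost the investment.
def automation_roi(build_cost: float, annual_salary: float,
                   team_size: int, time_freed_fraction: float) -> float:
    annual_return = annual_salary * team_size * time_freed_fraction
    return (annual_return - build_cost) / build_cost

# Hypothetical: a $20,000 build, five SDRs at $60,000, 35% of time freed.
# Annual return = 60,000 * 5 * 0.35 = $105,000, so ROI is 4.25x in year one.
roi = automation_roi(20_000, 60_000, 5, 0.35)
```

The point of writing it out is that the return side is measured (timed data-entry averages), not guessed, before the build is priced.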

Michael Hartmann (15:27):
It's interesting, the tie-back when you bring in others. Now, if you're opening up resources to be able to apply their skills and experience in ways that are more productive, or provide a better return for your organization or whatever, right? It seems like it's a really good mesh with, I don't know if

(15:47):
you're familiar with StrengthsFinder?

Spencer Tahill (15:50):
No.

Michael Hartmann (15:51):
Okay, so this is one of those personality-test kind of things, right? But the precursor to that book, and to the actual assessment, was a book called First, Break All the Rules. And it was one of the first management books I read where it actually really changed how I thought about things. And the basic premise is that a lot of modern management was

(16:16):
about fixing people's weaknesses, right? And this sort of flipped it on its head and said, really, what you should be doing is take advantage of people's strengths and minimize the impact of their weaknesses.
So it's not ignoring them. But where I'm going is this meshing of, what could I get if I could free up somebody who's got a strength in

(16:39):
a certain area, but they're not able to take advantage of it because they're spending a lot of time on stuff that they don't like and aren't good at, but it's a necessary part of the job? You start to free up more of that time. Now you can apply that strength to something else where it's going to be a better return, and it's probably gonna be more satisfying for them. I think this is a really interesting way of thinking about it.

(16:59):
Like, literally in real time, it's hitting me.

Spencer Tahill (17:03):
So this is very common. And I know we're kind of tangenting, but I think it's very common that people will come to our door and they'll say, we want to take call intelligence and we want to figure out which competitors are being mentioned.

(17:24):
Okay, that's a whole call intelligence workflow, you know, that has to be built out. But why? Where's that revenue gonna be made back? What type of insight does that data give you? And what are you gonna do with that insight? What's the action you're gonna take? And I think being able to steer the course before you

(17:47):
start building is very, very important. Because what I see a lot, and I'm not sure if you see this a lot, is that sales works in their little bubble, and then you've got the revenue team and the marketing team kind of sitting over here, but nobody actually sees each other's data, and I think it's a huge problem. So I think it's about, when you're starting to think about making AI workflows and agents and stuff like this

(18:10):
inside of an organization, you know, I work in mid-market, so if you have to connect the teams, you have to make everybody play nice together somehow, and it's not an easy job to do. It takes a lot of coordination. But there's a big issue in the industry: you just build one thing and call it an AI workflow, and that's just not what it is, you know.

Michael Hartmann (18:32):
Yeah, I think you're hitting on something. I've always thought it's important to ask, what's the end result? What do you want to do with what you're asking for? Very often, pushback is maybe not the right way to put it, but I want to understand the why when somebody has come to me as a marketing ops leader. And I think an example I could use is actually from before

(19:52):
marketing ops was really a thing, but I ran the website for a big global semiconductor company. And when the product marketers came, it's like, hey, we want to put up a form and ask for feedback from customers. Great, easy. What are you gonna do with it? Because I'm gonna tell you right now, if you open that up and people start providing that feedback, they're gonna want to

(20:13):
know, what did you do? And they hadn't thought through that part of it.

(21:16):
And it feels like that's gonna need to be even more and more important with these things, where they're less deterministic, especially if you're automating things, right? Where you've got some sort of LLM or AI component in the middle of it, where you don't always know exactly what's gonna come out of the other side.

Spencer Tahill (21:39):
Yeah, I think that comes down to operations. And it comes down to processes, it comes down to nailing those down. It comes down to, hey, if you do one thing, this second thing is gonna happen. If you do A, B is gonna happen. But it's the edge cases. Humans, you know, we make mistakes, but then we get

(22:03):
reprimanded, and we're like, oh, we're not gonna make that mistake again. But the bigger issue that I'm seeing now, when you start to replace humans with machines, is how do you make sure that machine handles that edge case a different way the next time? And I think you can tell somebody what to do and they're gonna learn not to shove the circle into the square hole, but

(22:24):
a machine doesn't inherently know what it did wrong. So you have to tell it what to do before it does it, before it tries to do that thing, right?

Michael Hartmann (22:34):
Yeah, it can't be totally free-form. You need to provide it some rules. It's interesting, because my own experience, just at a personal level using ChatGPT, is I've had several times where I've been frustrated, where I get an output that is, say, 80 to 90 percent of what I was hoping

(22:55):
for, and I give it very specific instructions, go change this or that on the output, and it completely changes. It does something completely different; it actually goes away from where it was going. And you know, my fear would be, if I'm doing something that's automated and something like that's in the middle of it, that if it, call it an edge

(23:17):
case, call it a hallucination, call it whatever, right? That's a risk that we have to deal with.

Spencer Tahill (23:23):
Yeah, and I think that's gonna happen anywhere. You're gonna have that in organizations, you're gonna have that at a user level, things like that, right? When I started using GPT, you know, no hate here. OpenAI did a great job of getting GPT into the hands of

(23:44):
the consumers, the consumer market, right? But behind the scenes at the organization level, there are already intelligence layers that are crunching and formulating and, you know, spitting out intelligence and insights. On the consumer level, I was not a power user at all. I was a power user of workflows, I was a power user of, like, you know, formulas and API calls and

(24:08):
sequences in HubSpot and stuff like this.
But damn, I hated it. I hated the process of interacting with it because, like you just said, a lot of people get frustrated. I was one of those people, you were one of those people. You type something in and it's like, uh-huh, I'm gonna go get that thing over there, and it's not anything relevant to what you asked. And now it's been like two or three years since we've had

(24:32):
this kind of power in our hands, to be able to tell it what to do. And now you can do so many crazy things. Yeah. But it takes so much precision. Like, if you want it to do a very specific thing, you need to be super precise about it. And the problem is actually not being able to produce the quality result once.

(24:52):
I feel like on a consumer level, the hardest thing to do is to aim for consistency.

Michael Hartmann (24:58):
Yeah.

Spencer Tahill (25:00):
Can it do it again?
Can you do it again, basically,right?

Michael Hartmann (25:03):
Yeah, yeah. I think I heard someone the other day talking about training an AI, especially if you're thinking about it for your purposes, as almost like teaching a baby, right? Because that's kind of where it starts. Though it's just, what's weird is

(25:26):
it actually speaks your language already, right? Which is not what a baby can do. So that part's different. It's a very interesting one. So, you kind of hinted at something. One of the other things you and I talked about before was systems thinking, and how that needs to come into play when you think about AI and the adoption of AI.

(25:46):
So, what did you mean by that? Maybe start from your frame: what is systems thinking from your perspective, and why do you think it's so important?

Spencer Tahill (25:56):
So systems thinking is, imagine you have, I'll just put it very eloquently: imagine you have a car, right? And each component of that car makes up the whole machine, right? You've got the wheels, you've got the axles, you've got the engine, you've got the motor, whatever. If you take a wheel off the car, can it run?

(26:16):
Maybe on three wheels. Depends on what you mean by run, but yeah. Depends on what you mean by run, exactly. But if you change one thing in a machine, if you take that belt out, if you take the engine out, it's probably not going to work as well as you designed it to. Systems thinking is the idea, in my head, that before you change

(26:40):
one component, you should look at the ramifications of the impact across the whole system. And you see this a lot in workflows, especially in Zapier, right? You've got one Zapier account, you've got 20 folders in there, and you've got 18 different zaps in there that are all floating around.

(27:00):
Ideally, they're organized. If you change one custom field inside of HubSpot, how many of those zaps break? If one of your interns goes in and breaks one thing, how many of those things get, you know, jeopardized? And I think with this idea of systems thinking, the way you can approach making a minor change is that that one

(27:21):
thing can domino down the whole system and shut it down. And I'm sure that has happened to many organizations; I've seen it. But it's this idea of, don't change anything until you know. Don't change the micro until you understand the macro. I think this is the idea, and this is how I try to formulate
(27:42):
like what is systems thinking?
Understanding the wholeembodiment of an organization,
of a system, not just HubSpot,not just the ad campaign, but
the whole creative process,looking at like creating the
copy, go, you know, go to Canva,post it on LinkedIn.
Okay, when you post it onLinkedIn, it's an ad, then you
boost it.
But how do you boost it?
Do you boost it with a UTM?

(28:03):
Where does that UTM get pickedup?
Does it go to HubSpot?
Where does it go?
You know, like tracing the lineback and being able to map out
the whole system like a skeletonto be able to see, yeah, if I
change this little thing.
That's how it changeseverything, and that's the
impact.

Michael Hartmann (28:20):
Got it. So this goes back. We had a guest on, it's probably been a year ago now, who kind of taught me about something called the Cynefin framework, which is a Welsh word. It's spelled C-Y-N-E-F-I-N, I think I'm close, but it's pronounced very differently. Anyway, long story short, it differentiates across different kinds of problems.

(28:41):
And the one that's most relevant here is complicated versus complex, right? And what you described, in fact, the car is a great example of complicated, right? Where, if you have enough knowledge and training, you can understand how it all fits together. And if there's a problem, you can usually kind of figure it out and fix it, right? Because it's a singular problem, even if it has

(29:02):
downstream effects. The difference with a complex one, which it feels like we get into very quickly, though we don't always know when we're going from complicated to complex, is that you actually can't fully understand it. And I think this is where AI starts to come in, where you have one input over here and you change it, and you

(29:22):
actually don't know what the impact is gonna be in that one case, right? And now say you've got a new case and the input's slightly different. Do you get the same kind of result? Maybe, maybe not. And that's where I suspect we're headed: we're gonna be moving from complicated, where it's not easy but you can kind of figure it out, to complex.

(29:43):
And it will be really, really difficult to figure out if something changes and you don't know, because it's all sort of hidden inside this, quote, black box.

Spencer Tahill (29:55):
You nailed it.
Okay.
Yeah.

Michael Hartmann (29:59):
But I mean, understanding how those pieces fit together is important, right? I don't want to discount that. So that systems thinking, now that you've described it, right? Yes, it's gonna be really important. But it feels like there's gotta be this acknowledgement that, even with systems thinking, in this new kind of world

(30:21):
we're either in or headed towards, like, it's hard to tell, there's gonna be stuff that is gonna be really hard to figure out. Both to set it up, and to figure out if something, I wouldn't even say breaks, but produces an output we don't expect.

Spencer Tahill (30:39):
That, and that is probably the hardest part of my job: edge-case handling. I feel confident telling you, like, I can build a research prompt, or I can build a prompt, in two or three days, right? Stress-test it and everything like this. But it's the testing period that I tell people about. I say, listen, I could build it, but we're gonna test it for two

(31:00):
weeks, because I need to monitor the edge cases before I hand it over to you. And I think so many people try to go so fast on building something or implementing something that they don't do the proper QA, and then you get issues, right? And that's the way of the world now, where everybody's going so fast. AI is here, it's at your fingertips,

(31:22):
and you're like, it's gonnasolve everything, you know, like
a a brand new cooking pan isnot gonna make you a better
cook, right?
But like, but it's gonna help.
Maybe a sharper knife willhelp, but it's it's inherently
not going to it's not gonna fixthe root issue.
And I think if if a lot ofpeople more people try to figure
out you know, root causeanalysis, the systems thinking

(31:42):
approach, is they willunderstand that they have a lot
different problems than theyactually think they do.

Michael Hartmann (31:47):
So, yeah, I mean, do you think that's one of the big mistakes people make when they're trying to adopt AI? They're moving too quickly? Like, what are the biggest red flags, the kind of bear traps, maybe, that you watch out for when people are trying to adopt AI?

Spencer Tahill (32:04):
I don't really get it so much anymore across my desk, but a few months ago, I mean, I would say even in the past six months, right? What I've noticed is that more and more people that come to me, they know, like, okay, we're gonna do something complex.

(32:24):
Going back to complexity, yeah. Um, like, for instance: I want to build a Slack bot that connects one community to another community, but also routes into Attio and uses call intelligence and everything like this to make content. Whatever. That's very complex, and you need different API nodes. But I think a lot of people come and try to use a

(32:45):
new tool. They have this shiny object syndrome, where they see AI and they're like, oh my God, I see all these things on LinkedIn. This is where the problem stems from: they see everybody else doing something just super complex, and it looks complex and it looks crazy, and it somewhat sounds like it can solve their problem.

Michael Hartmann (33:07):
Right.

Spencer Tahill (33:08):
But they don't realize that every problem is unique. And I think that's the very pinnacle of the problem, not even regarding AI adoption. I think it's how you hire people. Like, why are you hiring them? Is the problem unique? Are you making a workflow or something like this? So I think not being able to understand the

(33:30):
situation before you try to adopt something like AI, and not being able to define the problem properly, and not being able to define possible solutions to implement, is a huge problem. And I don't touch that, because it makes work very difficult when you've got a project manager. It makes everything slow down, and you're like, oh

(33:51):
wait, uh-oh, I built something and you didn't want that. Everybody's been there, you know, and then it costs everybody time. So I think it's rather: just slow down, define the problem, define the desired result, work backwards, look up any roadblocks, try to, you know, set up the task matrix, just who's gonna handle what based on their strengths, delegate,

(34:14):
start at the beginning, and then say, okay, this is gonna take this much time. And then that's done. And I think companies and people that fail to do this, in this order, are just bound to fail. It's too fast, not enough thinking, it just fails.

Michael Hartmann (34:28):
It's the whole, like, trying to eat an elephant in one bite is doomed to fail, right? Um, so I'm curious what your take is. I think when we talked before, and I'm switching gears a little bit here, there's a lot of fear out there that AI is going to replace people. And I think, if I remember right, your take on it was that

(34:50):
AI wasn't gonna replace people, it's gonna replace people who don't know how to use AI. Is that a fair assessment? And has it evolved at all since then?

Spencer Tahill (35:00):
Has it evolved? I'll probably double down on that statement. We are apex predators as humans because we know how to use tools and because we know how to think ahead. That is my double-down statement. AI is a tool that allows you to do more, way past your cognitive

(35:24):
abilities. I don't have a PhD in microbiology, I don't have a PhD in, you know, mathematics or anything like this. But these LLMs that, you know, OpenAI and Anthropic and any of the other LLMs out there that are amazing, they've been trained to do this.

(35:44):
You know, they've gone through the steps, they've gone through the data. You have a billion perspectives and a billion different data points outside of your own cognitive ability to tap into. You just need to ask the right questions. And I think the people that know how to ask the right

(36:05):
questions, the people that are inherently inquisitive about new tools, about, you know, how to become a better prompt engineer, how to use AI better: this is step number one. Like, don't ever just log into GPT and say, hey, help me do this. No, no, no. Go to GPT, open two GPT windows, and say: I want to do this, write a better prompt for me to help me do this that I

(36:28):
can copy and paste. Do this. It's gonna be way smarter than you about prompting. Take that, put it in the second window, do your task, and then after you're done with your task, say: this is what I did, this is the prompt I used, this was the original prompt that I wanted to use, but help me learn how to go from my original idea, my original intelligence level, through the process, and

(36:51):
teach me how to do it at a higher standard.
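
A rough sketch of the two-window workflow Spencer describes, as plain message construction in Python. This is my own illustration, not anything shown in the episode: the function names and the meta-prompt wording are invented, and no real API call is made here.

```python
# Window one refines your rough prompt; window two runs the task; then
# you go back and ask for a debrief. Wording below is illustrative only.

def refinement_request(task):
    """Window one: ask the model to write a better prompt for the task."""
    return [{
        "role": "user",
        "content": (
            "I want to do this: " + task + "\n"
            "Write a better prompt for me to accomplish this, "
            "one I can copy and paste into a fresh chat."
        ),
    }]

def debrief_request(original_idea, refined_prompt):
    """After the task: ask the model to teach you the gap between your
    original idea and the higher-standard prompt it wrote."""
    return [{
        "role": "user",
        "content": (
            "This was my original idea: " + original_idea + "\n"
            "This was the refined prompt I used: " + refined_prompt + "\n"
            "Teach me how to reach this standard of prompting on my own."
        ),
    }]

messages = refinement_request("summarize our Q3 pipeline data")
```

The dicts follow the role-tagged message shape that chat-style LLM APIs generally accept, so the same structure can be passed to whichever client you use.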

Michael Hartmann (36:54):
Yeah.

Spencer Tahill (36:55):
And I think the people that know how to do this are gonna soar over the people that don't know how to ask the right questions, or don't adopt AI. The people that just, you know, don't adopt it, quite frankly, I think it's negligence. And, uh, it's a tool. You don't need to use it to be great at what you do, but if you're trying to do more and accelerate, it's

(37:17):
clearly the next step. And yeah, that's just my forte.

Michael Hartmann (37:22):
I saw somebody post on LinkedIn, I don't remember who, but it was recent, like the last week or two. Somebody was saying, kind of echoing what you're saying, AI is a tool. And he's like, as a writer, I didn't lose the ability to write when I went from pen and paper to a typewriter. I didn't lose my ability to write when I went from that to a

(37:43):
computer. No. And it didn't change when I started using Grammarly or something like that. It just helped me get better and more efficient at it, as opposed to atrophying, which we could talk about. Like, I think it's gone by the wayside now, but there were all those headlines about, like, AI causes brain atrophy. I think that was just headline fodder, but I

(38:07):
don't know what your take was on that.

Spencer Tahill (38:09):
That's the MIT study, right? Yeah, yeah. The one that was covered a few months ago. Yeah, I think, uh, that's a whole discussion. Um, but I had read the synopsis of that study, uh, a few months after it came out. For the people that didn't read it, it was a study by

(38:31):
MIT with a control group, and it was roughly about people that use AI and people that don't use AI, and kind of the adoption and the competence levels there. Um, but roughly, the synopsis I would sum up is: the way it was presented in the news, that GPT users have, you know, lower competence at XYZ,

(38:54):
that's cannon fodder. But I would say my personal beliefs aligned with that article, in the sense that somewhere in there it said something along the lines of: higher competence users of AI will lap lower competence users at a baseline level, but lower competence users will fail

(39:15):
to learn if they continue at this pace of low learning aptitude. Yeah, long story short: lazy people get lazier, because they depend on AI as a shortcut, but the higher performers, the ones that are more inquisitive, will use it as an acceleration jump in their learning. That's how I understood it.

Michael Hartmann (39:34):
Yeah. Yeah, I'm with you. And it's consistent with other stuff I've heard, uh, more in the education realm, um, where the highest performing kids who started using AI as part of their learning process continue to, like, distance themselves from those that didn't use it; and those who were not performing as well and

(39:58):
used it also improved, they just improved less, right? So it's not quite what you were saying, but it's a very similar kind of result. Um, I'm sure there are studies there; that one I've just heard about anecdotally. Um, okay, so I'm trying to decide where to go here. So we kind of talked about the system prompt, you know,

(40:20):
like using the tool to help you write better prompts. And, um, one of the things you and I talked about, and I still feel like I don't quite understand, is the difference between, I think you called it, a system prompt and a user prompt. Well, I'll let you explain. What's the difference?

Spencer Tahill (40:38):
Yeah, so in the context of a user-level interaction, in layman's terms: you log onto your computer or your phone and you type into GPT. You're the user.

Michael Hartmann (40:50):
Yeah.

Spencer Tahill (40:51):
On the system side, that is inherently going to be how the system thinks, right? It's going to be a blank slate. You know, imagine you as a human, just in machine form, right? You're a system as a person. Your brain is a system, it's wired to think a specific way.

(41:11):
You've learned all these different interactions. You know how to speak, maybe, French, Italian, Spanish, English, whatever. Maybe you have a doctorate, maybe you have a bachelor's degree in engineering, maybe you're in business. You have all of these unique, different ways of connecting the synapses in your brain, right? Everybody is unique. That's what makes us us.

(41:32):
But we're not a machine. And if you want to replicate the way of thinking like a machine, this machine thinking, quote unquote, right? I'm gonna kind of simplify. A system prompt is the back-end design of what you're talking to. A user prompt is what you're actually saying, like how

(41:54):
you're interacting with the machine, if that makes sense. So when you put in, hey, help me do this research, you are prompting it as a user. But you are prompting it as a user, not as the machine. The machine, the system, will think in a very specific way. And it's trained to do this.

(42:14):
So you have very specific use cases, like: I want you to act like a market researcher, I want you to tell me how to make a podcast episode. But have you ever given it extra, like, context right before you tell it what to do? Like, hey, I want you... This is contextual prompting.

Michael Hartmann (42:33):
Yes.

Spencer Tahill (42:33):
The system prompt is behind the scenes. So you can already give it all of the intelligence and all of the use cases. You can few-shot it, you can give it edge cases to look out for, you can tell it what to think. Nowadays you can, but not a lot of models six, seven, eight months ago made it easy

(42:55):
for you to actually design the system prompt. You were only able to interact with it from a user prompt point of view. So when you're building a workflow, you know, maybe there are some listeners that understand API calls and stuff like this: if you have the chance to put information inside of a system

(43:16):
prompt window, this is where you're going to give it its thinking ability. This is where you're going to give it full context. This is how you're going to give it all of its edge cases, its limitations, and everything like this. The user prompt is to drive the conversation and the contextual output forward. So these are the two differences. And I think when you're just on your computer or your phone and you're trying to, like,

(43:37):
find the answer to something, you're prompting it. You've not designed it.

Michael Hartmann (43:41):
Right.

Spencer Tahill (43:41):
The first iterations of a system prompt came with, like, a fine-tuned LLM, where you would kind of design it to handle these use cases, to think in a specific way. It's not thinking. I mean, it is, but it's connecting synapses, it's connecting nodes of data together. That's "thinking." That's a system prompt, and I think those are two inherently

(44:01):
different things.
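
In API terms, the distinction Spencer draws maps onto the `system` and `user` roles of a chat-style request. A minimal sketch in Python; the message content is my own illustration, not wording from the episode:

```python
# The system message is the back-end design: role, edge cases,
# limitations. The user message is the scenario driving the
# conversation forward. Content is illustrative only.

system_prompt = (
    "You are a market research assistant.\n"
    "Edge cases: if two sources disagree, report both and flag the "
    "discrepancy instead of picking one.\n"
    "Limitations: use only the sources provided; never invent citations."
)

user_prompt = "Help me research the mid-market CRM landscape."

# Chat-style APIs accept both as a list of role-tagged messages.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_prompt},
]
```

The system message persists across the whole conversation and shapes every reply, while each user message only supplies the next scenario, which is why the edge cases and limitations belong on the system side.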

Michael Hartmann (44:05):
I was thinking of programming languages, like what most people would use versus assembly language, but I don't think that's quite the right analogy, right? If I understand it right, it's almost like, um, providing that baby, right, um, with as much, like,

(44:26):
experiential stuff as you can, right, and knowledge, before you then go ask it to either answer something or do something or whatever, right? Is that more of a better way of thinking about it? Yes.

Spencer Tahill (44:37):
I would up the complexity a little bit, but on the back end, yes. Uh, imagine you have that one friend that's really annoying, that you just don't want to talk to about anything, but you know that if you tell him about this other thing, he's gonna be there for you, or XYZ, right? This is a very complicated scenario. But that person is a system, and they're gonna

(45:00):
handle stimuli in a very specific way. When X occurs, that's gonna happen, right? Because you've already tried to talk to that guy, you know, 20 times, you know, two in the morning, he comes over, whatever. Yeah, but the consistency, like the training of this, that is the personality, that is the way

(45:21):
that the system operates. It's the operating model. It's not like assembly, but the user input is the scenario. So instead of looking at it as, like, you're gonna give the toy to the baby and it's gonna try to shove it through the box, whatever, right? You teach it; you tell it what to do and how to do it.

(45:42):
And when it doesn't do that, these things happen. That's a system prompt. And then when the scenario happens, when you kind of give it the scenario in the context of what's going on at the user level, then it'll know inherently: oh, this scenario matches scenario 47 that I already have a knowledge base about.

(46:02):
Let me handle it in this specific way.

Michael Hartmann (46:05):
Interesting. Okay. Yeah, I think I'm getting that nuance a little better now. So, um, I think I mentioned the word hallucination at some point. Um, and I think pretty much anybody who's used this stuff has experienced a hallucination from one of these LLMs. But, um, one of the things you and I talked about, you said

(46:27):
there are, like, five things that you suggest you do with every LLM every time you try to go through something. I don't know if this is a system prompt or a user prompt, so maybe there's a different shade there. But what are those five things, uh, um, that our listeners or audience could benefit from?

Spencer Tahill (46:45):
I'm gonna explain it terribly, but I'll do my best. Um, I always give it: context, background, goals, constraints or limitations, and format. So if you can give it those five things, and it doesn't have to be in that order... And you can look up,

(47:06):
on the OpenAI website and the Anthropic Claude website, how to make a good system prompt, how to make a good user prompt. But if you give it context: you know, you are this, you have this data set, you're connected to the API, whatever, you've got all that data. That's context. Background: you've handled a hundred thousand

(47:28):
iterations. These are the hundred thousand results that you produced. Out of these hundred thousand, only 50,000 of them should be produced, whatever, right? That's gonna be the background of, like, what is it? I'm a senior market researcher, everything like this. That's step two. So, context and background. The third one is gonna be the goals. So it's gonna be the directed output.

(47:49):
What are you trying to achieve? Okay, I'm a market researcher. I now have access to Google, I have access to web search capabilities, I have access to, you know, all of these different web pages, whatever. But now my goal is to be able to crunch data. I want to analyze the data, I want to synthesize the data, I want to triangulate the data sources, I want to see if there

(48:10):
are any data discrepancies. Okay, awesome. Now the fourth step. Now we have context, background, and goals. Very clear, the system knows what to do. Then you have limitations or constraints. This is the fourth one. This is what a lot of people forget, and this is leading into the discussion of hallucinations: what not to do. Because you summed it up perfectly.

(48:33):
You were on GPT months ago and you were super frustrated because it produced something completely different.

Michael Hartmann (48:40):
Yeah.

Spencer Tahill (48:40):
What you had forgotten, exactly. What you had forgotten, and naturally I have too, many times, is: you didn't tell it to retain its memory. You told it, hey, I want you to make these changes. But then it went, hey, I'm gonna make these changes across the

(49:01):
whole thing. What would have been better is: hey, great job. Give it feedback, just talk to it like a person. This is what I do. I do Windows key + H on my keyboard, or you open up speech-to-text, there are a lot of speech-to-text apps, and I speak to it. I give it feedback. That's how I work. But with the limitations: do not do this, do not output a string, do not output an array, output rich

(49:24):
text, or, you know, take out all the spaces, take out all the vowels, I don't know why you'd do that, but: do not do this. List it down, because it's not gonna know unless you tell it. A child isn't gonna know that the alphabet starts with A unless you go A, B, C, D, you know, right? So you have to tell it, you have to treat it like a child,

(49:48):
right? It doesn't know. They're so smart now that most of them do, but if it's never seen the scenario before, then it's not gonna know, right? So now we have context, background, goals, limitations. Now you have the fifth one, which is almost the most important, which is the format. So the formation of the output is the result. That's the thing that you need to get ROI from, right?

(50:10):
Like, if you're making a workflow or a transformation program, that's probably what you care the most about as a result. And the result doesn't matter if it's not formatted properly. If you go into HubSpot and you have a first name field and it says "Spencer Tahill dot whatever," but it's not spaced out and it's my full name, it's not parsed correctly.

(50:32):
Like, what's the point? You need to format the output in a way that makes sense. I'm a senior researcher: don't do research about this, don't take X threads, don't look at social media, but I want you to make the result an array, or, like, a one-page summary in this format: executive summary, sources used, explanations,

(50:53):
data discrepancies. That's the format. And if you can nail that, you've pretty much got a great prompt on your hands.

Michael Hartmann (51:02):
Yeah.

Spencer Tahill (51:03):
Context, background, goals, limitations, format. And you're pretty much good. Outside of that, you can do all of your learning on OpenAI.
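
The five-part framework can be sketched as a simple prompt builder. A minimal sketch, assuming nothing beyond the five sections Spencer names; the example wording is illustrative:

```python
# Assemble a prompt from Spencer's five parts: context, background,
# goals, constraints (limitations), and format. Section text is
# illustrative only.

def build_prompt(context, background, goals, constraints, output_format):
    sections = [
        ("Context", context),
        ("Background", background),
        ("Goals", goals),
        ("Constraints", constraints),
        ("Format", output_format),
    ]
    return "\n\n".join("## " + name + "\n" + body for name, body in sections)

prompt = build_prompt(
    context="You are a senior market researcher with web search access.",
    background="You have produced thousands of research summaries.",
    goals="Analyze, synthesize, and triangulate the sources; flag discrepancies.",
    constraints="Use only the sources I provide; do not invent citations.",
    output_format="One page: executive summary, sources used, data discrepancies.",
)
```

The order is flexible, as Spencer notes; what matters is that all five parts show up, especially the constraints, since that's the part people forget.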

Michael Hartmann (51:11):
Yes, yes, yes. I think the constraints one was the biggest help for me when we talked, because I started to see where the hallucinations come from. And I started doing things like, you know: only use the stuff that I have provided you specifically for this task, and don't use other resources. Because otherwise, that's where I found it was pulling in stuff it had access to

(51:35):
but wasn't really relevant. Um, yeah, that was helpful. I had already started doing the context. I think the big thing I've learned: so I've got some things where I've got long threads where I've provided a whole bunch of context and examples of other work I've done to give it guidance, and it was really good. It's gotten better over time.

(51:56):
But what's happened is, it's all in one big long chat, and the performance has been bad. So I actually did kind of what you said. I was like, I'm gonna go ask again. I'm not advocating for ChatGPT, it's just the one I happen to use. Um, and in fact, there's a part of me that feels like I should be looking at others more, but I've invested in this one. But I actually said: the results

(52:19):
are good, but the performance is getting worse and worse every time I use it. How should I restructure this if I'm gonna do it again? And we went back and forth, and it actually gave me, like: here's the structure, do this, do this, do this. And, um, that's how I'm gonna implement it in the future. So I get the benefit of all that, but with continued better

(52:40):
performance. So, yeah, I love it. That idea of using the tool to help you get better at using the tool.

Spencer Tahill (52:48):
That's how I learned.

Michael Hartmann (52:51):
Yeah.

Spencer Tahill (52:53):
I sucked at it and I got better at it. The first step of being great is sucking, unfortunately.

Michael Hartmann (53:03):
Yeah. Um, well, this has been awesome, Spencer. Thanks for this. Um, if folks want to continue the conversation with you, or learn more about what you're doing with, uh, Growth Alliance, what's the best way for them to do that?

Spencer Tahill (53:19):
Um, guys, anybody that's listening: book a call with me, or text me on LinkedIn, you know, whichever. Um, I'm completely open. You know, my eyes can be your eyes in the industry. Um, building a lot of complex things takes a lot of time. So networking, chat, dude. Like, uh, book a time on my calendar, tell them that Michael

(53:41):
Hartmann sent you from Marketing Ops, and we'll just catch up. We'll just get a coffee. I like meeting people. So, um, everything is on my LinkedIn. It's just, like, slash Spencer Tahill on LinkedIn, or something like that. Gotcha. Um, but yeah, don't think that you're alone in trying to learn a new thing, because I sucked at it too. So it's okay.

Michael Hartmann (54:01):
Yeah, no, it's definitely been, like, you have to invest a ton, right? I wish I had more time, but I feel like I've gotten better and better over time. So, yeah. Anyway, it's been a pleasure, Spencer. Thank you so much. Thanks for staying up late.

(54:21):
Uh, I know this was a long time coming, so I appreciate it. Thanks also to our listeners and the rest of our audience. Uh, we always appreciate you. If you have suggestions for topics or guests, or you want to be a guest, you can reach out to Naomi, Mike, or me, and we'd be happy to get the ball rolling. Until next time, bye everybody.