
May 16, 2025 · 26 mins

What if you could cut through the noise of AI hype and truly understand agentic AI's potential?

In this episode, host Andreas Welsch engages with Peter Gostev, Head of AI at MoonPig, to explore the practical implications of AI agents in business. Together, they delve into key distinctions between genuine agentic capabilities and simple workflows, as well as meaningful applications like research and coding agents.

In this episode, we discuss key issues shaping AI adoption:
• Every company seems to be launching an Agentic AI product. How can leaders avoid getting fooled by marketing buzzwords?
• What defines a true AI agent, and how does it differ from a simple LLM-powered workflow?
• AI job titles are shifting, but do you need to update your title with every new development?
• Who’s experimenting with AI agents, who’s rolling them out at scale, and which companies have already been using them for a while?

Whether you're navigating the complexities of AI in your organization or simply curious about its future impact, this episode is packed with insights that can help you make informed decisions. 

Ready to separate fact from fiction in AI claims? Don’t miss this enlightening discussion—tune in now!

Questions or suggestions? Send me a Text Message.

Support the show

***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


Level up your AI Leadership game with the AI Leadership Handbook:
https://www.aileadershiphandbook.com

More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Andreas Welsch (00:00):
Today we'll talk about how to avoid getting fooled by AI and agentic AI claims, and who better to talk about it than someone who passionately posts a lot of information about it and gives actionable advice to the community: Peter Gostev.
Hey Peter.
Thank you so much for joining.

Peter Gostev (00:16):
No, thanks a lot for having me.

Andreas Welsch (00:18):
Hey, I've been following you on LinkedIn for quite some time, and I'm always amazed and inspired by the posts that you share, because I see that you take a really pragmatic view on things that are happening and you also give some really good advice.
But for those of you in the audience who don't know Peter yet: Peter, can you tell us a little bit about yourself, who you are and what you do?

Peter Gostev (00:39):
Yeah, sure.
So apart from writing here on LinkedIn, I've got a day job as well: I lead AI at MoonPig.
MoonPig is an e-commerce company.
We offer greeting cards and gifts, which is actually a really interesting space to apply AI, because apart from all the usual use cases, we've got images and maybe video,

(01:02):
audio, music.
Not that we've done a lot of that yet, but it gives us a lot of extra dimensions to apply AI to.
And before that, I was working for a bank, where the types of use cases we were looking at in the AI space were much more the kind of corporate ones that you'd expect.

Andreas Welsch (01:20):
Hey, it's great to have you on the show.
So thank you so much.
Peter, what do you say?
Should we play a little game to kick things off?

Peter Gostev (01:27):
Yeah, sure.
Let's do it.

Andreas Welsch (01:28):
Alright, so this one is called In Your Own Words, and when I hit the buzzer, the wheels will start spinning, and when they stop, you'll see a sentence.
I would love for you to answer with the first thing that comes to mind and why, in your own words.
And you only have 60 seconds for your answer, to make it a little more interesting.
Are you ready?
Sure.
Let's do it.
Okay, then let's do it.

(01:50):
If AI were a book, what would it be?
60 seconds on the clock. Go!

Peter Gostev (01:57):
Yeah, I feel like it would be a combination of the craziest fact-based book that you would ever read.
And the reason why I say that is that a lot of the things that are actually happening now, we're used to them by now, but if we had heard about them
five years ago, you'd feel like it's completely insane.

(02:21):
So I feel like one half of it is a factual story, but real, and the other half is a lot of hype and not very well-founded information as well.
So yeah, I feel like you'll get really lost trying to make sense of this book.
Right.

Andreas Welsch (02:39):
I really like that idea.
It's indeed a good mix of everything that captivates you, but also keeps you wondering: how far have we come, and how far do we yet have to go?
I really like that.
Thank you so much.
Great answer.
But obviously that's not all we want to talk about today.
And I mean, if you look at what's happening in the

(03:00):
market, certainly there's a lot of hype and a lot of buzz around this topic of AI agents or agentic AI.
All of a sudden, we're not just pointing and clicking anymore.
We're not just entering information in form fields.
The idea is that we have a more natural conversation with this software, with this piece of technology, and we can give it a goal.

(03:20):
It goes and figures out what needs to be done.
It researches information, maybe.
Some agents are connected to the internet, some are more connected to your own database.
But just overall, it seems that over the last quarter, one headline has chased the next.
So I'm wondering: we're in April, you know, April Fools' Day.
Who's the fool here with all this information?

(03:42):
What do you make of that?

Peter Gostev (03:43):
Yeah, it's definitely very hard to make sense of what's going on and what is good information, what is bad information.
So I do always try to sense-check: am I being a fool?
Am I seeing something that isn't there, or am I making judgements that are wrong?
So I definitely don't feel like I'm on firm ground where I

(04:07):
definitely know everything, what's going to happen and so on.
So I think we've all got the potential to be fools, but I think there are two extremes.
One is you get carried away by everything that's being claimed, and the hype around agents, that we're all about to get AGI, and all of that.
I think that's one direction, but I think there is equally

(04:29):
another direction where you just assume that everything is nonsense and you just don't actually take it into account.
So the only antidote to that that I can think of is that you just have to try it and get an intuition for what's actually real or not, and that's a hard thing to do.
Whenever I write, I do try and stick to the rule of not

(04:53):
writing about something if I haven't tried it.
It is a bit hard to stick to just because of the volume of things that are going on, but there have definitely been some very hyped tools that I haven't written about just because I haven't tried them.
There's no point in me hyping it or criticizing it if I have no idea what it's actually like.

Andreas Welsch (05:11):
That sounds like a very sensible approach.
In many conversations, it seems to come down to that as well.
First of all, see if you can filter the information: what is really relevant to you right now in your industry, in your business, and what's all the other noise that vendors are talking about, that management consultancies are talking about,

(05:33):
that, you know, just happens to flood the news.
I also really like the tangible advice of: see if you can try it out for yourself.
And I think sometimes that's not even as easy as it sounds for many of us.
When you're in a leadership role or in your daily business and you're, you know, rushing from meeting to meeting to meeting, and at the end of the day you're wondering, what

(05:53):
did I do today other than, you know, be in meetings or talk about things?
So being able to sit down and have that time to experiment is, to your point, critically important.
We all need to make that time so we can also assess for ourselves: is this real?
Is this something useful?
Or is somebody just making big claims, and maybe a quarter or two down the line things will...

Peter Gostev (06:14):
And, you know, I think we are really lucky that the field of AI is actually very accessible.
If you compare it to some other previous trends that we had in, I don't know, in tech, such as cloud, you can't really sit down and try cloud.
That's not really a meaningful thing that exists.

(06:34):
So before, if you were to read about cloud, that's quite reasonable to say, okay, I've read some consultant reports or something, that's good enough.
But with something like using language models or image models to see what's possible, there's really no good reason why you shouldn't just try it instead of theorizing about it.

(06:55):
And the other dimension to it is that there is really nothing in our previous experience that can give us an intuition for what it can do and what it cannot do.
It's just completely new, it's like a new alien being.
So how could you guess?
It's completely unreasonable to expect anyone to have any

(07:19):
intuition for how it works.
So combined: it's completely new and it's actually easy to try.
You really have to try it.
There's no other way.

Andreas Welsch (07:30):
What you said deeply resonates with me.
I remember ten years ago, a little more than ten years ago, when cloud was that big trend that came up.
Yes, there was a lot of reading and learning about it: what is infrastructure, platform, software as a service?
But it also seemed pretty standard, right?
Pretty cookie-cutter.
You either rent the bare hardware, or you have a

(07:52):
little more on top if you're a developer, or you get the application.
But it was pretty much the same concept for any vendor.
So, great point there that you have to experiment with it, because it's moving very, very quickly, and to your point, you cannot really talk about it or predict where it's going unless you've seen it yourself.
But you know, I think that brings up another point.

(08:14):
Having been in the tech industry myself and having seen firsthand how large technology companies go to market, how they do their marketing, how others in the industry act, I feel like all of a sudden every vendor has an agentic AI product.

(08:37):
So how can leaders avoid getting fooled, or even tell what's real: what's a real agent versus just an LLM or a workflow with some AI?
What are you seeing, or what's your recommendation there?

Peter Gostev (08:49):
Yeah, the biggest problem there is that there's a lack of clarity of definition around what is meant by agents.
And whenever we hear the word agents, I would tend to assume that it's closer towards exaggerating its capabilities rather than understating its

(09:10):
capabilities.
And the reason for that is that for any hype-based term, right, people always try to jump on that trend to try and describe it.
And there are definitely several different flavors of what is meant by agentic.
And everyone seems to claim it.
So the big distinction I would draw is

(09:34):
whether that use case is defined within one specific workflow, with maybe small deviations, or maybe there are, like, three ways you can go.
So that could be one category.
And the second category is: does it have freedom to go and do things

(09:54):
in a more open-ended space?
There are probably many more variations to this, but that's probably the distinction.
And in my mind, my personal definition, where I would draw the line, is that I would not call these systems which have a strict workflow agentic.
There are many different steps that they might

(10:15):
go through, but I would not call them agentic.
And the reason for that is that, if we forgot that this term existed, you could have built these kinds of systems two, three years ago, and people did that, right?
People stitched together different steps and they

(10:36):
got a good answer.
So there's not really any reason to invent new terminology to describe workflows which have AI steps in them as agentic.
And I think the only reason to call something agentic is when it truly has some capacity to go beyond a predefined workflow and then

(10:58):
maybe make certain decisions about what it should do, use multiple steps and different tools, go back on itself.
That sort of thing is agentic.
So coming back to the specific question of how you distinguish, I think it's establishing that distinction: is it a workflow that someone has defined and built, or is this

(11:21):
truly something that's just going around and doing certain things?
And by the way, that doesn't mean to say that you always want it to be an agent.
I actually think a lot of the time you don't want it to be an agent, but I think that's at least a somewhat clear line you can draw: which one is it?

Andreas Welsch (11:42):
I really like that part that you said about, you know, determining: is it a straightforward workflow, something that you could have done already a couple of years ago, or is there more of this variation, more of this research, this thinking and reasoning to some extent in there that then really makes it agentic or

(12:02):
more capable, more intelligent in that sense?
But I also agree with you, right?
It's very easy to conflate different terms, or for vendors to subsume different capabilities under this new term agentic and say, hey, we've had this all along.
So I really like your recommendation of: hey, think about what this thing actually does.
Is it a straightforward workflow?

(12:23):
Is it more, you know, more reasoning, is it more complex?
And look behind the curtain, basically.
Now, I saw on LinkedIn your title is Head of AI, not Head of Gen AI, not Head of Agentic AI.
What does it mean?
Does it mean you're not working on the latest stuff?
Or should we not get fooled by titles?

(12:44):
What's your recommendation to other AI leaders who might also have a title of Head of AI, but not Gen AI or Agentic AI?

Peter Gostev (12:52):
Yeah, so the way we split it within MoonPig is that we do have a data science team, which is doing machine learning and also data architecture and all of that really important work.
And we also have an AI team, which is my team, and we could have called it Gen AI.
The focus there is much more on taking, mostly, APIs,

(13:19):
although not entirely, but to simplify it, take APIs and apply them somewhere around the business.
I think it's a mistake that a lot of businesses are making, that they take their data science team, who are, like, machine learning PhDs, and they'll say, by the way, can you just stop doing your PhD stuff, and can you call this API and,

(13:39):
you know, get the model to output JSON.
And it feels like it's not necessarily their skillset anyway, and you're also just wasting the thing that they're actually good at.
Now, it doesn't mean that they can't do that.
Obviously they're smart people, they could, and there have been a lot of successful transitions, but I don't think we should just assume, because it's vaguely in the same space, that we should be

(14:02):
stopping one to do the other.
So that's why we have a split.
Now, we collaborate on things where it makes sense, but essentially we wouldn't say, oh, by the way, you used to do, I don't know, the pricing algorithm this way.
Can you stop that and let's do some LLM stuff?
It doesn't really make sense.
But equally, with gen AI, what we are focused on much more

(14:25):
is building software with the latest technology.
And to your point about agentic, we do experiment with things, and we certainly experiment a lot whenever new models come out; we always test and see what new capabilities we can bring.
With agentic, I would say, we have really struggled to find

(14:48):
anything where we would want to put anything highly agentic inside MoonPig.
And the reason for that is that whenever we have some workflow or some specific thing that we want to achieve, it's not very clear why you would want to introduce any ambiguity into that process.
So we tend to be much better off in saying, at least at

(15:11):
this stage of our development, we still have quite a lot of opportunities where we would say, okay, we've got, let's say, some data here or some interaction here.
Our current teams go through these steps to process the data somehow and put it in this system, and then they put it in that system.
There are quite a lot of use cases along these

(15:33):
lines, which I guess you could design to be agentic, but I don't really see the point.
Why wouldn't I just say: take it from this system, output it in this way, put it in that way?
There are enough problems as it is in terms of making sure that that works properly, and it feels like we

(15:55):
need to have a really, really strong benefit to say, oh, by the way, it is now agentic, so it can decide whether it should be here or here, whether it should go there or there.
And maybe that's because we're not such a crazy complicated business.
Maybe we just don't have that.
But even in a bank, if I go back to my banking days, I don't

(16:17):
see a lot of reason why you would design it in that kind of more free-flowing way.
That's why I think it's definitely not that clear that, at least for internal use cases for your own business processes, at least at this stage of development, agentic is necessary at all.
I could be wrong.

(16:37):
I could be surprised, but at this point I don't see a lot of reason to really go down that route.

Andreas Welsch (16:43):
Well, there are so many good points in what you shared over those last few minutes.
I think what I heard was, on one hand, don't focus so much on the title.
Rather, look at what are the outcomes and the output that you can deliver, and go and experiment with new tools.
It doesn't matter if you're Head of AI, Head of Gen AI, or Head of Agentic AI.
It's rather: what is your team's mission?

(17:03):
And there are great and skilled people that can help you do that.
The second part, around agents, is honestly something that I sometimes wonder about as well.
And if I heard you right, you said: why introduce something that is not necessarily reliable into a process, maybe a process where you need to have a higher level of reliability, or where a straightforward automation would give you repeatable

(17:25):
results?
Same input, same output every time, not most of the time, or sometimes, right?
On one hand, I see companies looking for that as well.
That's why we have processes.
That's why, you know, things are coded, things are supported in workflow tools or in other kinds of enterprise software.
Yet we're introducing that level of ambiguity on the one hand,

(17:48):
just because we do more complex things, or we can do more complex things with agents, but we don't really know how they work.
Are they repeatable every time?
Do they get the same output, the same quality of output, every time?
I can certainly see these concerns as well that you mentioned.
Which actually brings me to my next question.
And that's, you know, who's fooling around with agents,

(18:11):
to stay with the term fooling, who's rolling them out, and, in what you see, who's had them for a while?
I'm sure you're attending different conferences or talking to other professionals in the area.
Who's actually doing something beyond the marketing buzz of "this is here, this is here to stay, this is awesome, try it out"?
Who's actually doing it?

Peter Gostev (18:30):
Yeah.
So, the best ones that we see publicly: I would say a really good application of that ambiguity is when you don't actually know where to look for information.
So research is a really, genuinely good application of it.
And the reason for that goes back to what I said earlier

(18:52):
about having a clear process where you've got inputs and outputs.
When you actually don't have that, then it's great.
So yeah, deep research is an obvious example, where it's constrained in the sense that it just kind of goes around and browses the web, but it's also unconstrained in the sense that

(19:13):
it can go anywhere.
So that, I think, is an excellent application.
And I think research is actually uniquely well positioned in terms of exploring the different areas.
So that's clearly a very good one.
Then we've seen coding agents also being the

(19:34):
popular category.
This one is interesting in terms of how successful they're going to be.
I would say the key point there is whether they close the loop, and I've seen some examples of them doing it: being able to write unit tests or actually click

(19:56):
the buttons on the website, for example.
If that's what you're building, then I think the agentic approach could be great.
I've also seen some bad examples of it.
Personally, I don't find the Sonnet 3.7 kind of approach to agency very good.
It just seems to go around in circles and it just does way too

(20:17):
much, and it just kind of goes off the rails.
So that's an example of how it's actually quite hard to get that balance right, because I guess you want the agents to be proactive, but if they go too far, then they go off the rails as well.
And we've obviously seen a lot of the MCP hype going

(20:39):
around as well.
So it's hard to say how much of it is actually getting real traction versus people just trying things out, which is great.
They should, but it's also hard to say, like, oh, has this really properly taken off?
Like, would we still talk about it six months later?
It's hard to say for sure.

Andreas Welsch (20:56):
Yeah, so maybe also, to some extent, give it a little more time to settle and have more people explore it and poke holes into it.
And then you see how real and how durable it really is, in a sense, right?

Peter Gostev (21:08):
Yeah.
And one thing about agents as well, the sense that I get is that people kind of imagine what they've built.
For example, with MCP, it's a connector, so there's now a capacity to go and interact with different tools, which is fundamental.
Like, if you don't have the pipes, there's nothing you can

(21:30):
connect to, and then it doesn't matter how clever the model is, you're still constrained.
Then we've got, yeah, coding agents that can go around and build their own software autonomously, all of that.
I think what people are forgetting is that if you give access, it doesn't mean that the models are good enough to do that.
And I do get the sense that we're getting a

(21:52):
little bit carried away with how good the models are.
I think the models are just not that good.
So even, like, silly examples like Claude playing Pokémon.
Okay, it can mechanically play, but, not that it's useful, it's obviously a toy, but is it actually good at it?
And the answer is no.

(22:13):
It's pretty terrible.
Yeah, it is very interesting.
But that's the state we're in: okay, we have the physical capability to let Claude play Pokémon, but it's awful.
So I think we need to calibrate the two sides of hype.
It's very exciting in the sense that you can kind of feel

(22:36):
the AGI, you know, feel the potential, but it's also not there.
It's still missing that actual capability.
So it's kind of exciting, but when you get excited about this stuff, you should wear the hat of "I'm letting myself imagine the future" rather than "oh, I'm going to actually build this tomorrow," because we are

(22:56):
definitely not in that space.

Andreas Welsch (22:58):
I think that's a very pragmatic lens through which you're seeing that, and you're recommending that others do as well.
Certainly exciting, right?
All the things that AI and technology have now come to do, and are now able to do, that we haven't been able to do before; but still, there's always a little further that we can go.
Peter, thank you so much so far for sharing all your insights.

(23:20):
I was wondering if you can summarize the three key takeaways for our audience today on how not to get fooled by agentic AI claims.

Peter Gostev (23:27):
Yeah, sure.
So the thing about agents is that you need to be clear whether they're truly agentic,
so where they have the capacity to go, make decisions, and operate out there in a more free way, or whether they're just more of a

(23:48):
specific workflow that is predefined.
That is the clear distinction that I draw in my head.
The second point is that, as far as I'm concerned, agentic AI is a little bit too soon.
While we do have really good applications such as research and coding agents, which are really good as a generic

(24:10):
capability, we're a little bit early and we're just building the pipes with things like MCP and so on.
And the third point is that the only way for you to know where we are is to test it and try it yourself.
And definitely do not rely on what anyone is writing, including me.

(24:33):
The only way for you to really know and have your own intuition (and that's really the important part, your intuition for how well it works) is when you actually try these things for yourself.

Andreas Welsch (24:46):
I love that.
That's very practical, very tangible advice.
Peter, thank you so much for joining us today and for sharing your experience with us.
It was a pleasure having you on.

Peter Gostev (24:55):
Oh, brilliant.
Thank you.
This was fun.