Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Hannah Clark (00:01):
Things aren't always as they seem. From bait-and-switch subscription pricing to the people who swear candy corn tastes good, we have no shortage of reasons to be skeptical of everything, especially right now in product. The speed of change in this industry is moving so fast that trends can start looking like best practices, and the real best practices can start looking outdated. But when the goal is to build a product that will scale
(00:23):
over time, it's on us to check ourselves from getting sucked into hype-train hypnosis.
My guest today is Matt Graney, CPO of Celigo. Matt has spent over 20 years in B2B product, and almost nine of them scaling the product team at Celigo from four to over 45 people. And while AI has proven to be an extraordinary opportunity for the company, he's noticed a gap between expectations
(00:44):
and reality when it comes to some of the shiny new tactics blowing up our LinkedIn feed. From prototypes masquerading as production-ready code to too-good-to-be-true tools, you'll hear his take on where to tread with caution, and the time-tested wisdom he's counting on to make it through this AI transformation alive. Let's jump in.
Oh, by the way, we hold conversations like this every week.
(01:05):
So if this sounds interesting to you, why not subscribe? Okay, now let's jump in.
Welcome back to The Product Manager Podcast. I'm here today with Matt Graney. He's the CPO at Celigo. Matt, thank you so much for making time in your schedule to talk to us.
Matt Graney (01:19):
Thank you, Hannah.
Great to talk to you today.
Hannah Clark (01:21):
So can you tell us a little bit about your background and your journey to becoming the CPO at Celigo?
Matt Graney (01:26):
Yeah, so I've been in B2B product for a long time now, about the last 20 years. Celigo is an integration platform, and I've been with Celigo for eight and a half years, joining just after Series A funding. And another five and a half years before that also in the integration space. It wasn't really by design, it's just how things worked out.
(01:46):
Before that, I was involved in products around the software development lifecycle, including UML modeling tools, for those of you who might remember what those were. Originally I was a software engineer back in Australia working in telecom and defense. And I first came to the US to work as a sales engineer, actually, for a product that I'd become a power user of
(02:07):
while I was at Motorola. So kind of a diverse background. And then I took a tour of duty in product marketing and then finally made it into product.
Hannah Clark (02:16):
A tour of duty. I've never heard it described that way. That's funny. So today we're gonna be having a little fun with a very quasi-Halloween-themed episode. We're gonna be focusing on the theme of bad product plays in disguise. And we're gonna start by digging into vibe coding. So, lots of opinions being tossed around about vibe coding in the product community. And I'll be the first to admit, no-code platforms
(02:37):
are amazing tools. I use 'em all the time, but they can also lead to some very Scooby-Doo-esque moments, mask-off moments where we take a closer look and it's, you know, not what it seems. So what's been your experience, Matt, with vibe coding within product teams? Tell me the good, the bad, and the ugly.
Matt Graney (02:52):
Yes. And we would've got away with it if it wasn't for you pesky kids, right? So yeah, I think it's democratization, which is incredible. But we have to also think about the delusion, you know, that comes with that power to make something look good. It doesn't mean it's necessarily built right. And I think there's a lot that has to happen under the covers. We shouldn't confuse perhaps quick prototypes
(03:15):
with production-ready code. And so, okay, citizen creators, but you also need citizen architects. And in the context of the business we're in, for example, B2B, we're dealing with infrastructure software. I mean, we're talking about billions of transactions a month. There are only certain parts of the application, perhaps on the very front end, where we might be comfortable
(03:35):
vibe coding anything. And so I think, you know, it's gotta be about the right tool for the job, just as it's always been. And, you know, while also making sure there's a culture of experimentation, making sure that we're encouraging the team to take risks, to try new tools, and to stay current with the latest developments in what is truly a groundbreaking
(03:56):
time for the whole industry.
Hannah Clark (03:58):
So just to go a little bit deeper on that. When we think about, you know, the various levels of understanding of the technology all along the organization, you know, it can be very tempting for folks who are less experienced, either with the technology or new founders themselves, to kind of see what looks like functioning code and just kind of run with it. And that puts a lot of pressure on engineering teams to kind of
(04:18):
match that speed, or be able to develop at that level that quickly and make it look, you know, so shiny and new. So how do we help stakeholders understand the real constraints and considerations between the prototype that we're seeing from vibe coding tools and production-ready software?
Matt Graney (04:34):
Yeah, that's such a good point, because, you know, speed to demo is not the same as speed to production. And especially, you know, we're talking about 5,000 customers, again with B2B workloads, so we sometimes talk about what we do, infrastructure, as the plumbing. So, okay, AI might paint the house; it's not necessarily gonna plumb the house, right? You wanna be sure about some of these things.
(04:56):
There are some challenges, whether it's pressure on engineering teams or even pressure from the exec team. Recently I've seen internally, you know, fairly senior members of sort of non-technical staff vibe coding proofs of concept to show new capabilities that are much needed by their customers. It's certainly fired the imagination, but it's also
(05:18):
creating this unspoken pressure that maybe this is accessible to everyone, that this is something that we can rush into production. And you know, if we're talking about buildings, if I go back to painting houses or plumbing houses, I mean, these are not necessarily load-bearing walls, right? These are the facade. It might look good. And again, that's not to say there isn't a place
(05:40):
for it, because the speed of POC is just incredible. And we need to be embracing that at every turn, whether that's to help a product manager better explain requirements, or helping a designer to show alternative workflows. Whereas before, they might've been going through designing many different screens in their favorite design tool.
(06:00):
The power of working code is undeniable, and we need to be looking at embracing that at every possible turn, while recognizing that it's not the same as production-ready code.
Hannah Clark (06:11):
And something that's been on my mind as well is that these are tools that will invariably get much more sophisticated, and we as users will also become much more sophisticated at using them, which means, of course, we're trending toward this cost of building approaching zero. So for yourself as a product leader, what are the concerns that you have about that trend, and what have you been doing to mitigate those concerns at Celigo?
Matt Graney (06:32):
Yeah, Hannah, I think that's a great mental model, a great thought experiment to run. Like, what happens if the cost of building goes toward zero? Okay, maybe it's fast approaching, with the generations of these tools, as you say, and the ability of users, skilled users, just as we see improvements in sort of the usual ChatGPT kind of experiences.
(06:53):
So we can expect a dramatic decline; it'll become so cheap to build. I think that actually, counterintuitively, puts even more pressure on product managers to make sure we're building the right thing. So it doesn't absolve us of all the right things we should be doing: looking at product analytics, quantitative and qualitative user research, customer interviews, product advisory councils, all those things.
(07:15):
Proofs of concept, A/B testing, working closely in a triad of product managers, designers, and engineering, right? So none of that goes away. And I think we have to guard against that because, you know, to go to a Halloween theme, we could end up with a bunch of zombies running around, like zombie projects, things that were so easy to build. Maybe littering the product with all these ideas that never
(07:38):
really quite, you know, made it. And so I think, again, some of the older disciplines of product management really come back to the fore, because we have even more choice now. I think, you know, the ability to make decisions, to drive the right kinds of outcomes, I mean, all that has to remain.
Hannah Clark (07:56):
I would agree. And on the topic of maybe ill-conceived ideas: I have certainly been seeing an explosion of tools flooding the market that seem to offer, let's say, enchanting benefits while hiding, or maybe not even hiding, but just offering some very serious risks. Like, I've seen some fairly egregious concepts circulating where I could just so clearly see an opportunity for bad actors
(08:18):
to just manipulate in a way that's not really what we want. So what are some of the things that you're seeing in the space that give you pause, and what role does product judgment still play that just can't be automated away?
Matt Graney (08:29):
I think we've always been looking for maybe the magic eight ball of product management, whether that's scoring methodologies like RICE. I've seen them abused as well, because it still leaves a lot of latitude for product managers to have their thumb on the scale to influence the scores, and it doesn't take too many rounds of it to figure out exactly how to move the needle and tip the scale in your favor.
(08:51):
So I think we'll see the same sorts of risks here as well. I think there are some tools that promise maybe they're gonna vacuum up all the intel, you know, product telemetry, every conversation there ever was. I think we all know there's plenty of things that happen out of band, observations that maybe, yeah, maybe come from a session replay tool, but it's not like written down in a form
(09:14):
that an LLM is gonna understand. Right. So I think there's no substitute for sound product judgment, and while maybe we have more data than ever, I think at the end of the day it's incomplete data. And there still needs to be a vision, a strategy, and in product management there are bets.
(09:35):
And yes, we take bets knowing that we're gonna be able to measure the outcomes, hopefully, if we do our jobs well. But it still has to be an iterative process, and I don't think there's any magic answer that AI gives us to suddenly produce an infallible roadmap, put it that way.
Hannah Clark (09:51):
Yeah, I tend to agree. I was speaking to a guest who has not yet been on the show; that episode is coming up quickly. But something that we discussed was this concept that AI is kind of a jagged technology, in which there are some things that it's very good at. You know, we can exploit those advantages, but then there's things that it's not so good at, but it looks like it's very good at. And so, yeah, it's a matter of the product sense, but also
(10:12):
understanding the technology well enough to be able to kind of check yourself on, like, what are you really relying, overly relying, on the LLMs to do for you?
Matt Graney (10:20):
And I think even with basic chat interfaces, I think we've seen plenty of examples of some AI being quite sycophantic, right? And if you're not really awake to that, you might begin to think you always have great ideas. And so personally, I like to spice it up a little bit and make sure that I have sort of an alternative view, and ask for a hypercritical review of the ideas I have.
(10:43):
Because otherwise I'm always sounding like a genius when I talk to my AI. So
Hannah Clark (10:48):
yeah, they love us, don't they? So let's kinda move past vibe coding a little bit, and I'd like to talk about some other, let's say, attractive but ultimately unsustainable shortcuts that we're seeing product teams take right now. What are some of the bigger offenders that you've seen around?
Matt Graney (11:01):
I think one of my favorites is OKRs. I think we've got sort of a troubled relationship with them. Maybe at a company level, not too bad, but I think in product it can be a bit challenging. And I think, you know, it's what happens when you try to import something from a FAANG, in this case, like one of the big-name companies, without necessarily having the rest of the
(11:23):
culture to go with it. I think, in general, you just can't graft on a limb. Okay, now we're gonna talk Frankenstein's monsters, right? If you don't have that sort of thing as part of the culture, these things are never really going to knit together properly, right? So that's one. I think there's also, we've all seen metrics theater, a bunch of vanity metrics that don't really tell us much or
(11:45):
don't drive better decisions. I think that's a gotcha that's been around for a while. Maybe we talk about feature factories or feature farms, and maybe now we have the ability to farm by the acre, right? Because of the scale, again, where our ability to produce is going up. How do we make sure we're producing the right thing? So I think any of these sorts of things are essentially shortcuts.
(12:07):
Again, looking for magic solutions that apparently work somewhere because someone read them on X or on LinkedIn somewhere, and without that sort of rigor behind them, it's just going to erode trust, I think.
Hannah Clark (12:20):
Yeah, I would agree, on the topic of eroding trust. I think that one of the ones that comes to mind for me is how it's affected the UX research community. I know that UX researchers, I think, have long suffered as being sort of the underappreciated aspect of the product process. But now with LLMs, I think we kind of get an even more muddled view of how to conduct that correctly, and kind of the role of AI in
(12:44):
assisting with that process. That's one where, like, I wish I could shout it from the rooftops that you just cannot substitute user research with an LLM.
Matt Graney (12:52):
And our head of user research recently was telling me the same thing: that the AI transcripts are great. Okay, you know, verbatim, this is what was actually said. But the insights often miss the subtleties, certainly at the moment, and maybe that'll change. I think, as we say, there are continued generations
(13:13):
of this technology. I think we can be optimistic about the future, but for now I think it's most important to understand the limitations and guard against them through the old-fashioned craft of good user research.
Hannah Clark (13:25):
So switching gears a little bit, let's talk a little bit about your experience scaling teams. So you have been doing it for a while. You've scaled your team at Celigo from two PMs to 10 times that size. You've got lots of experience in building processes. I'm sure that there have been missteps along the way. So can you tell us a little bit about what has been sort of your, or a few of your, best takeaways in terms of scaling
(13:47):
teams from small organizations throughout their maturity? What have you kind of learned that you think still holds true even today, in this fast-moving time of AI?
Matt Graney (13:55):
Yeah, so it has been a journey, as you say, Hannah. So I joined the company and inherited two PMs and two tech writers, so a team of four. Now we're, you know, about 45, close to 50, right? So it's been a lot, and it's a team of PMs, designers, researchers, technical docs, and product operations. I think what I've learned is maybe the order in
(14:17):
which to do things right. So at the very beginning, life was simple, you know, a PM by my side on more of the platform side, working directly with our CTO, and the three of us around a room prioritizing an entire backlog, right? I mean, it was as simple as that. But that clearly doesn't scale in the long run. You know, dealing with offshore teams, both PMs and engineering being offshore, complicates things.
(14:39):
And we gradually added process. At the beginning, design was just the best we could do with the tools we had. So it was PMs doing their best, you know, I always think it was almost like stitching together screenshots, like a ransom note. It's kind of grungy and almost embarrassing. And then, really, I think our first foray into, you
(15:00):
know, professional designers, we didn't do that well. I felt like we used design more like an agency model, hey, make this look pretty, instead of really thinking about it as user experience as opposed to just design. Docs, we've always been fairly good at, and that has continued to evolve. So we've added process as we needed it. I'm not saying we're perfect, but it's tended to work for us
(15:22):
pretty well, and maybe we haven't got all the right ceremonies in place at times, it feels like. But I think, sort of directionally, it's been correct, and it's been a case of just enough process and responding to the needs of the business, and providing room to grow for all members of the team, and so on. But it's been a journey. I've never run a team this large before.
(15:43):
Right. There are a lot of firsts here. And I have some battle scars and gray hair to prove it.
Hannah Clark (15:48):
Oh, really?
Where?
Matt Graney (15:49):
Oh yeah.
I'll blame my kids maybe.
Okay.
Hannah Clark (15:52):
Yeah. You can blame everything on your children. I do it all the time. So to expand on that a little bit, and kind of tie it in with what we were talking about before, I'm curious about some of the tried-and-true product management practices that you think maybe are actually more important now than ever, even if they seem a little old school. Are there any concepts or frameworks or practices that you
(16:13):
found yourself returning to more than ever, or emphasizing with your teams, in the age of AI?
Matt Graney (16:18):
As I said, you know, as the cost to build maybe approaches zero, I think it actually puts more onus on us to prioritize. So I'm thinking more in terms of the tools for prioritization. So some of that has to be alignment around a vision and strategy, making sure that everyone on the team is clear enough to be able to make localized decisions.
(16:40):
No substitute for firsthand contact with users, with customers, including, you know, I feel like I spent the early days of my career doing sort of hostage negotiation, like unhappy customers, talking them off ledges or whatever, right? I think there is no substitute for that, because it really helps inform the full picture of what the product is about
(17:02):
and gives the PM the tools they need to better understand how they ought to be prioritizing. So I mean, there's no substitute for ruthless prioritization. At some level, you're gonna run outta capacity. I sometimes look with envy at much larger companies, but I know somewhere they have exactly the same sorts of problems. I know all PMs do; you never have enough capacity.
(17:23):
So it just comes down to prioritization, and all the usual tools apply. I think, as I said, where AI comes into play, okay, maybe to help understand and digest a lot of information, it's indispensable when it comes to research, and then, as we talked about with vibe coding, when put in its place, you know, for rapid prototyping.
(17:44):
I think what some people forget about prototyping, too, is that the original idea of prototypes is that they're meant to be thrown away. They're not meant to be the basis for what goes into production. And if you can do all those things, I think then, you know, the tools are there to really accelerate the way we work and, again, to assist with prioritization.
Hannah Clark (18:02):
Yeah, well said. Well, to close on an optimistic note: where would you say are the most legitimate, high-value opportunities for AI to enhance the work of product managers? And what advice would you give to product leaders who really want to embrace AI in a way that's thoughtful, without falling into any of these traps or potential mask-off moments we've talked about?
Matt Graney (18:20):
Yeah, I think really one of the big ones has gotta be around research. I think the ability to do competitive research for a PM these days, it's a tool I wish I had in the past. Sure, part of it, of course, is that vendors tend to be a lot more public, like most docs for products are available now, but you really need to be doing that. That is a huge one.
(18:41):
I think, obviously, quickly putting together documentation. A lot of people talk about writing press releases first, or FAQs first. I think this is a great opportunity. It might've seemed laborious to do that before, but it's a great way to get started today. I think it's about using tools like this to help expand the imagination. What am I not thinking about?
(19:03):
Right? I think AI, when prompted in the right way, can be really good at identifying some blind spots. Yeah, it might hallucinate sometimes, but even outta that, sometimes there can be insights or lateral thinking, perhaps, that hadn't come to mind. You know, I think AI, though, can be a bit of a funhouse mirror, right? If it's messed up to begin with, if you don't have your discipline in place, then
(19:24):
it's only gonna make it worse. But when done right, it can provide that focus that product managers need.
Hannah Clark (19:30):
Yeah. I tend to agree. Well, Matt, this has been wonderful. Thank you for sharing all of your knowledge, and for, you know, sense-checking some of these things that we're seeing so much of in the space. I really appreciate it. Where can folks follow your work online?
Matt Graney (19:41):
Best to find me just on LinkedIn. I seem to be on there more than anywhere else, so I look forward to catching up with people there.
Hannah Clark (19:48):
Awesome, thank you so much.
Next on The Product Manager Podcast: if you thought this episode went hard on expectations versus reality, we are about to deep-dive into LLM technology in an episode that will challenge everything you think you know about AI. While the potential of the tech is limitless, the current limitations are far more complex than we realize, and so are the
(20:10):
impacts on us as both builders and users of AI products. This one is going to hit hard, so subscribe now to jump in with us next time!