
April 2, 2024 • 39 mins


Maarten & Michiel Doornenbal are not only brothers, but also co-founders of Churned! They join me this week for a fantastic conversation about the platform they've built, AI, scorecards and more! 

Unfortunately, my audio/video didn't record - so you'll hear me narrating the episode - but it's kind of fun to switch formats up here and there.

In the episode, we talk about:

  • Their history as brothers and how they work together as co-founders today
  • History of Churned and what they do
  • Using machine learning models instead of rules-based scorecards to create health scoring and to predict customer churn
  • Part of the role of a CSP is to highlight where data cleanliness issues exist
  • Verifying your churn risk alerts by back-testing the data against historical churned customers
  • Personalization based on user-personas
  • AI is not just Gen AI. The future of AI in CS combines predictive, prescriptive and generative models to auto-generate content and free up time for CSMs.
  • What the human involvement will look like with these future AI models

Enjoy! I know I sure did!

Maarten's LinkedIn: https://www.linkedin.com/in/maartendoornenbal/
Michiel's LinkedIn: https://www.linkedin.com/in/michiel-doornenbal-98710067/
Churned: https://churned.io

Resources Mentioned:

  • The Big Exit Show podcast (Robin van Lieshout on selling inSided to Gainsight)
  • Mick Weyers' Customer Success Snack community
  • Outliers (book)
  • The Impact Weekly podcast

++++++++++++++++++

Support the Show.

+++++++++++++++++

Like/Subscribe/Review:
If you are getting value from the show, please follow/subscribe so that you don't miss an episode and consider leaving us a review.

Website:
For more information about the show or to get in touch, visit DigitalCustomerSuccess.com.

Buy Alex a Cup of Coffee:
This show runs exclusively on caffeine - and lots of it. If you like what we're doing, consider supporting our habit by buying us a cup of coffee: https://bmc.link/dcsp

Thank you for all of your support!

The Digital Customer Success Podcast is hosted by Alex Turkovic

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
A predictive AI model telling you which customers are at risk, then a prescriptive AI model telling you what is the best action to take given their behavior, and then the generative AI actually does it. So it's connecting the different pieces of AI to have this autonomous AI vehicle in customer success.

Speaker 2 (00:23):
And once again, welcome to the Digital Customer Success Podcast with me, Alex Turkovic. So glad you could join us here today and every week as I seek out and interview leaders and practitioners who are innovating and building great scaled CS programs. My goal is to share what I've learned and to bring you along with me for the ride so that you get the insights that you need to build and evolve your own digital CS program.

(00:46):
If you'd like more info, want to get in touch or sign up for the latest updates, go to digitalcustomersuccess.com. For now, let's get started. Greetings and welcome to episode 46 of the Digital Customer Success Podcast. As always, it's great to have you back. One quick note: I'm not sure if regular listeners are aware of

(01:08):
this, but on a weekly basis I do distribute show notes and some other digital CS, I guess, mini articles, you could say, as part of a newsletter that goes out on Thursdays. So if you'd like to sign up for that, just go to digitalcustomersuccess.com.

(01:29):
On most pages there'll be a signup form where you can sign up to receive those weekly digests. You'll get the show notes for the week's show, as well as a preview of next week's show, and then usually a mini article.

(01:49):
Right now I'm doing a series on how digital CS functions can cross-collaborate between departments. So if you're interested, go to the website, digitalcustomersuccess.com, and sign up for that. Today's episode is a cool one. I've got Maarten and Michiel Doornenbal from Churned.io. They're actually brothers and co-founders, so we do get into

(02:13):
that a little bit. As with most things digital, things don't always go right when you take on, you know, technical activities or anything having to do with technology, and this is one of those instances. The episode's going to be a little bit different because, luckily, both Maarten and Michiel's video and audio

(02:36):
recorded just fine, but mine didn't. In fact, I had no audio or video from the session at all. So it was an interesting exercise going back and editing this episode, because I had very eloquent answers and about, I

(02:59):
don't know, 25 minutes of complete silence while they were listening to me. So what I've done is, essentially, I'm going to be interjecting myself into this episode and kind of setting up the question, you know, between their answers, so that you have some context for the conversation and how the

(03:19):
conversation kind of went. It's a shame, because we lost some funny moments and funny exchanges. That said, there's a lot of gold in this episode. If you're not familiar with Churned, it's essentially a CSP (customer success platform) built from the ground up with AI in mind, and so a lot of what they focus on is churn prediction and

(03:42):
using AI models for churn prediction. It's pretty fascinating, actually, and so we definitely dig into that a little bit, as well as, you know, a little bit of their history together as brothers and then as co-founders of Churned. We definitely get into gen AI, but I think one of the most

(04:04):
compelling elements of this conversation was the description of different AI models and how they can come together to be really effective in CS, specifically looking at predictive and then prescriptive and generative models to create a fully automated system. So lots of gold in this one.

(04:29):
I hope you enjoy my conversation with Maarten and Michiel Doornenbal of Churned.io, because I sure did. So in this first clip, I just asked for a little bit of background, specifically about them being brothers, their history as brothers, whether they got along well when they were young, and how that translates into today, being co-founders, and what that's like. So we get a little bit into that.

Speaker 1 (04:57):
Sure, sure. So I'm the older brother. I'm Maarten, for the listener. I'm actually only one and a half years older than Michiel, and we grew up in a small village approximately 10 minutes from Amsterdam. So in a small village you have to deal with a small group of people, and that also meant that Michiel and I were not only

(05:17):
brothers at home, but we were also sharing: we were in the same football team, we had the same friends, we were going out. And as an older brother, in the beginning you don't really like that, right? You don't want your little brother to be everywhere where you are. And also there were moments that he was mad at me and he snitched to my parents, because everything I did, he saw. So I think when we were young, we did not

(05:44):
expect that we would have a business together. I was not very nice to Michiel, I think, when we were very young. At a certain point that changed. We both started studying, went in different directions, we both started living in Amsterdam, but in different houses, and I think then things got back together, and eventually we even ended up

(06:04):
starting the business.

Speaker 3 (06:08):
And to add to that, I think it's good that we are completely different. So my brother is responsible for the commercial side. He's more, let's say, the extrovert, where I'm more like a data scientist, a little bit more, yeah, doing other stuff. And I think that is important as brothers: if you start a business, you don't want to be too much in

(06:30):
each other's space, so to say. And I cannot tell him how he has to do his commercial side, because I don't have any commercial experience, and he has no clue about data science. So lucky me, he cannot bitch at me about that.

Speaker 1 (06:44):
Still, sometimes I try to tell him how to build an AI model. But we're different. I think we're different in terms of personalities and different in skills, so I think that's a good thing for us.

Speaker 2 (06:57):
So, as you know if you've listened to the show, I like to dig a little bit deeper and not just ask the questions that we prepped for. I like to get a little bit more out of the guest, and so I dig in a little bit and ask them about, you know, what their entrepreneurial history was, whether they were entrepreneurial in their youth, and, you know, how

(07:20):
that has affected their business relationship today.

Speaker 3 (07:26):
Maarten was definitely more entrepreneurial in his youth, I think, trading in clothing during his high school, and I was definitely not busy with that stuff, although I founded Churned at the start, and Maarten always had the urge to found something.

(07:46):
So that was funny, that it was the other way around. Then for a while we tested it out, because we thought, okay, in our youth we had some, let's say, challenges. We thought: is it going to affect our relationship if we start a business together?

(08:07):
And so that's where we tested it out a little bit, on a one day per week basis, to see if we would, yeah, get into each other's hair. But that turned out to be good and, yeah, that's how it developed.

Speaker 2 (08:12):
After this, we spent about a minute talking about how they separate work and their personal life, you know, especially around the holidays and things like that. So just a quick minute more on some personal details, and then we get into the nitty-gritty of digital CS.

Speaker 1 (08:36):
I think we talk too much business. I think Michiel's girlfriend will agree to that. But no, we like it. But it does happen that I've been with him all day and then, when he comes home, I start calling him. And still, so I said, when we were young we were even forced to be friends, as we were in the same team,

(08:58):
etc. But now, of course, we both still live in Amsterdam, which is a big city, but still we go on holidays together with the same friend group, we go to the gym before work. So we're actually very good friends, next to being business partners and brothers.

Speaker 2 (09:14):
If you're watching this on YouTube, you'll notice they're drinking from pint glasses once in a while, and we did joke a little bit about whether they were drinking beer or not, because it was kind of late in the afternoon; actually, it was evening for them when we recorded this. But yeah, it was tea. It wasn't even spiked tea. But anyway, as regular listeners of the show will know,

(09:35):
one of the things that I ask all of my guests is their definition of digital CS, because it does differ from guest to guest, and in this episode you get not one but two definitions of digital CS: one that's kind of current state and one that's forward-looking.

Speaker 1 (09:55):
Yeah, so of course we listened to the podcast, so we knew this was coming. No, so we discussed this, obviously. My definition of digital customer success would be using technology to automate and deliver the right information to the right customer, or even the right user, at the right time, and obviously at scale. And why I say this is because it's not difficult to automate and deliver information to a

(10:16):
customer or a user, but it is difficult to deliver the right information to the right user at the right time. I think that's the intelligence that is required in digital customer success, which is what we claim to bring. But, Michiel, I think you have a...

Speaker 3 (10:34):
Yeah, so of course it would be boring if I would give the same definition of digital customer success. So I thought: how can I make it different? And that brought me actually to propose, let's say, a definition which I think we will have in a few years. So not the current digital customer success definition, but a future one.

(10:56):
Of course, many experienced customer success leaders have joined this podcast and might be listening, so they will probably argue this, and probably rightly so, but I'm still going to do it. So for the future, we believe that it will be autonomous, personalized digital customer success.

(11:17):
This doesn't mean, of course, that there is no interaction or that there's no one-to-one high-touch approach from customer success managers, but we see it more in a way that the customer success manager determines the destiny and the AI will bring you there.

Speaker 1 (11:37):
Yeah, so that's more the future, how we see it. Currently, it's actually about freeing up their time, right, so that humans can focus on things that humans are good at.

Speaker 2 (11:46):
So I love these definitions. I thought they were spot on, very much in line with previous guests. Also, the forward-looking statement is very much in line with what we've been talking about on this show, which is to say that we're trying to automate as much as possible so that the humans can focus on actually building strong human

(12:07):
relationships. They're just thinking about it a few steps further along than we are, and rightly so, you know, given the nature of their business. And so that kind of led us into the next question. You know, we try to be platform-agnostic on this show; I try not to highlight one over the other. But it's always fun

(12:29):
having CS-focused vendors and platforms on the show because, A, they're serving the CS community, B, they know a lot about CS, and C, they're working on cool cutting-edge technology. So I did want them to get a little bit into how Churned came to be and what market challenges they're trying to

(12:51):
address.

Speaker 1 (12:52):
So Churned is an AI-driven customer success management platform. Maybe a bit of history. So Michiel, my brother, started the company together with his former professor. Michiel did a master's in data science in Amsterdam, at VU University, and he wrote his master's thesis on: can you use machine learning to predict customer churn? And that thesis was supervised by that professor in data

(13:15):
science. And they found out in the thesis: hey, you can predict customer churn very well using machine learning. So they said, okay, shouldn't we do something more serious with this? But, as you can imagine, a professor in data science and a data scientist are not the most commercial people on earth. Actually, in our case, that's not really true, because they're surprisingly commercial.

(13:37):
But that made them ask me: shouldn't we do this with the three of us? So I did, we started. That moment was about four years ago, January 2020. And in the beginning, we spent some time developing these machine learning models, and the models worked well.

Speaker 2 (13:58):
At this point I interject, as I often do, with the fact that: hey, wait, this was during COVID, right?

Speaker 1 (14:04):
Yeah, yeah, it was in the middle of COVID, that's true. It was a great time to develop models, and in my case, sales presentations. So we developed these models and then thought: okay, who will benefit most from this solution? And we knew, of course, that churn was a problem in SaaS. We knew that customer success was responsible for the churn

(14:27):
topic, so that made us research that market a little bit. So we did some interviews with SaaS leaders, with CS leaders, with CSMs, and found out that the current software solutions used by CS teams, the customer success management platforms, were

(14:48):
not using AI or machine learning models yet for the topic that we are focused on, the portfolio management part. So their health scores were done rule-based, right, which we don't see as a really smart solution; it's based on gut feeling. And that made us decide

(15:12):
to do it in a different way, and that is the AI way: using predictive machine learning models to predict customer health. Maybe to clarify: we started really on the churn part. Now it's actually more than that. It's a customer success management platform with an AI

(15:33):
engine, and that engine contains predictive models that use customer data to predict what's going to happen, and it can be either churn or upsell. We predict customer behavior, and the thing that we started with was churn. So that is how we started the company, and we help customer success teams target the right people at the right time with

(15:55):
the right information, to prevent churn and boost NRR. That's it.

Speaker 2 (16:03):
So, given that Churned is natively machine learning based, I wanted to make sure that we did have this conversation about rules-based scorecards versus machine learning-based scorecards. With these rules, either ourselves or an admin goes in

(16:28):
there and configures all these rules to build a scorecard that tries to be as predictive as possible, and that continually proves to be exceedingly difficult, whereas machine learning has gotten to the point now where it is entirely possible to accurately predict churn with these scorecards. And so we get a little bit into the details of,

(16:49):
you know, how the sausage is made, so to speak.

Speaker 3 (17:06):
I think, yeah, indeed, if you would have multiple technical people in a company, they can define health scores pretty well in a rule-based way. But I think the beauty of what we do is that machine learning can find a lot of nonlinear relationships, so to say.
So it's not always the case that higher product usage is always better for adoption. What we often see is that, right before the moment of churn, usage can actually go up a lot.

(17:28):
So that is unexpected behavior. So we basically predict what the expected healthy behavior of a customer is, and if it falls outside of those boundaries, then it's actually an alert. And it is, of course, a combination of all kinds of variables that the machine learning model can pick up. And also, I think what is interesting is that it should

(17:53):
adapt very well to changes. So if you make changes to your pricing, or if there are economic changes, the machine learning model should pick up those parts. The rules that you might have in a rule-based platform need to change, and that requires a lot of maintenance, while a machine learning

(18:16):
model should be able to pick up those changes and also adapt the risk scores accordingly. And maybe, in rookie words, like the salesperson would explain it...

Speaker 1 (18:26):
So it works just like a rule-based system: you integrate all your customer data, but then you don't have to set up the rules yourself. If the thing that you want to predict is churn, the model will just look at the customers that churned in the past, look at what the patterns are in the behavior and the characteristics of those customers, and predict it for the current customers. So it trains on historic data and predicts what current

(18:49):
customers will do, without human help.
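
To make that concrete, here is a minimal sketch of the "train on customers that churned in the past, score current customers" idea, assuming scikit-learn; the feature names and synthetic data are illustrative assumptions, not Churned's actual implementation.

```python
# Sketch: fit a churn classifier on historical customers, then turn the
# predicted churn probability into a health score for a current customer.
# Feature names and data are illustrative, not a real schema.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
historical = pd.DataFrame({
    "logins_per_week": rng.poisson(5, n),
    "seats_used_pct": rng.uniform(0, 1, n),
    "support_tickets": rng.poisson(2, n),
    "tenure_months": rng.integers(1, 60, n),
})
# Synthetic label: whether each historical customer churned.
historical["churned"] = (
    rng.uniform(0, 1, n) < 0.1 + 0.4 * (historical["seats_used_pct"] < 0.2)
).astype(int)

X = historical.drop(columns="churned")
y = historical["churned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# Health score for a current customer = 1 - predicted churn probability.
current = pd.DataFrame([{
    "logins_per_week": 1, "seats_used_pct": 0.1,
    "support_tickets": 6, "tenure_months": 4,
}])
print("health score:", 1 - model.predict_proba(current)[0, 1])
```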

Speaker 2 (18:59):
Next we get into some topics related to data, which I know is a sore spot for a lot of folks in digital, mainly because it takes decent data to do anything of worth in digital. And so one of the questions specifically that I had for them was around how data variability affects, you know, scoring with machine learning, and what those outcomes really look

(19:21):
like. One of the examples that I think I used in this conversation was a scenario that I'm well familiar with, which is: hey, look, we've got both an on-prem product and a SaaS product, and we get different telemetry from each one, and so, you know, we've had to design different scorecards to account for that, for instance. And so I wanted to get

(19:45):
a sense for, you know, how data variability affects scoring, and then we get into some other data things as well.

Speaker 3 (20:03):
No, I think that is exactly the beauty. Say the model sees usage, and there is also a variable that states that it's an on-prem customer; that relationship, basically, is being picked up. So the model will recognize: hey, for on-prem customers, product usage doesn't mean anything, because those factors cancel each other out,

(20:26):
basically. So that is how it will deal with those kinds of factors, if you give that information to the model, of course.

Speaker 2 (20:32):
So that then naturally kind of led me down the path of: okay, what kind of data do we need here to get this stuff working correctly? And I think I didn't quite ask the question correctly initially; I think I asked how long it takes, you know,

(20:52):
to train the platform. But we got around to the fact that, you know, the actual question is how much data is required, and how much historical data is required, to properly train a scorecard to start to represent some accuracy in predicting churn.

Speaker 3 (21:13):
So it depends a bit. It's not that we are building a machine learning model that learns over time. We can just look at the history, and then it retrains immediately. So it's not that it gets better every week or so until a certain moment; no, we can immediately have a good model. There's not really a, let's say, threshold or clear minimum

(21:36):
amount of data that we need. What we do internally is that we have different models. So you have a very advanced machine learning model, a neural network, that works very well if you have big data sets: a high number of customers, a lot of churn. Then you have very complex machine learning models that work very well.

(21:56):
We also have customers, of course, that are very enterprise focused, with a smaller number of customers and limited data, and then we just run different models on that. So there's not a, let's say, minimal requirement. It just requires different settings of different models.

Speaker 1 (22:13):
Basically. And that's done in the platform. So we run the data through it, and it tests it through multiple, let's say, models, and it will pick the winner. But maybe also on the "how much data do you need" question, which of course we also get often in commercial discussions, maybe some examples. We started also in B2C. We started with, for example, an NGO.

(22:34):
They had 150,000 customers, but they only have the name and the donation amount. You think you have a lot of data, but it's actually not that much. We have super enterprise customers that only have 100 customers doing 100 million ARR, that have data on every click that a user does. So it's not really about how many companies or clients or

(22:57):
users. It's more like: what is the depth of the data that you have?
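
As a rough illustration of that "run multiple models and pick the winner" idea, here is a minimal sketch using scikit-learn cross-validation; the candidate models and metric are assumptions for the example, not Churned's actual selection logic.

```python
# Sketch: score several candidate models with cross-validation and keep
# the best one. Candidates and metric are illustrative choices.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)

candidates = {
    "logistic": LogisticRegression(max_iter=1000),     # small, simple data
    "forest": RandomForestClassifier(random_state=0),  # mid-size data
    "neural_net": MLPClassifier(max_iter=1000, random_state=0),  # rich data
}

scores = {
    name: cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    for name, model in candidates.items()
}
winner = max(scores, key=scores.get)
print(scores)
print("winner:", winner)
best_model = candidates[winner].fit(X, y)  # retrain the winner on all data
```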

Speaker 2 (23:03):
And while we're on the topic of data, we then get into a conversation about trusting your data, because a lot of us in CS really rely on our data and our insights to be able to provide value, not just to our customers, but to provide value to our internal customers and our stakeholders by

(23:25):
providing really valuable insights on the health of customers. And so we talk about trusting data, and one of the interesting points that comes out of this next section is actually one of the side purposes of a CSP, which is using it to highlight where you do have gaps in data.

Speaker 1 (23:45):
Let's say, of course, we also get the question: how good are your predictions? So maybe to start with: okay, trusting the data. That is an objection that we had to deal with, and how we dealt with it is through backtesting. And for people that are implementing health scores that are not done by Churned but by themselves, it is a way to verify: can you actually predict what is going

(24:10):
to happen? For example, you predict which customers are going to churn. So we exclude the last year of data and we predict which customers churned in 2024. That is a way, for example. And then you actually measure which customers churned. So it is a way to, let's say, verify: how well am I able to predict? I think if you can explain that to the business, that's already

(24:32):
a start, because if you show that you can predict well, there will be more trust and the health scores will be used. I think that is a thing to start with, but maybe it's a bit technical for the people that are listening to the podcast that don't know how to backtest themselves. How we deal with this internally is that we also report on it in the tool itself.

(24:53):
So we predict which customers are at risk. Then, of course, it's up to the customer success manager, or the digital motion around it, to do something with those predictions and health scores: sending emails based on risk alerts and low health scores, for example. And then how we report on it is:

(25:16):
did the companies actually contact the customers that were at risk? If not, then there's already proof that you should actually do something with it. But it is still a difficult thing, because even if you have 90% right and you have one off, then people will say: yeah, but see, it doesn't work. So it is also a mindset thing.
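
Here is a minimal sketch of that backtest, assuming pandas and scikit-learn: train on everything before the held-out last year, predict churn for that year, then compare against who actually churned. The file and column names are hypothetical placeholders.

```python
# Sketch: backtest churn predictions against the held-out last year.
# "customers.csv" and its columns are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score

df = pd.read_csv("customers.csv", parse_dates=["snapshot_date"])
features = ["logins_per_week", "seats_used_pct", "support_tickets"]

# Exclude the last year from training; 2024 acts as the unseen "future".
train = df[df["snapshot_date"] < "2024-01-01"]
test = df[df["snapshot_date"] >= "2024-01-01"]

model = RandomForestClassifier(random_state=0)
model.fit(train[features], train["churned"])

# How well did we predict the customers that actually churned in 2024?
predicted = model.predict(test[features])
print("precision:", precision_score(test["churned"], predicted))
print("recall:", recall_score(test["churned"], predicted))
```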

Speaker 3 (25:40):
I think it's indeed a challenging one. I think, then, what we deliver, or at least what a customer success platform delivers, might actually be more on the side of: hey, we help you clean up your data, point you in the right direction, prepare you basically for the journey that you can embark on. So it shows why your ARR in the platform is completely

(26:04):
incorrect. Why is that? You highlight basically all the flaws that you have in your data. And also, I think, if you run a machine learning model on the data, it will show which features are important, and it will also guide them. So it is important to store historical product usage data on a feature-type level.

(26:25):
Or it is very important to see if a customer has been upsold, or has had certain plan changes that they made historically. So we should store that and not just overwrite all the data continuously. So I think, in that sense, you're just taking the customer by the hand and showing them: hey, this is how you should build a good, trustworthy data set. But of course there are

(26:49):
situations where the customers just have crap data.

Speaker 2 (26:52):
Yeah, that doesn't happen, of course. As mentioned earlier, I love having CS-facing vendors and platforms on the show, because they see some really cool stuff that their customers are doing in customer success. And so I asked, you know, if there are some examples of digital CS that they've seen within their customer base, and they came back with this.

Speaker 1 (27:16):
A nice example is one of our Dutch SaaS customers. So we just released a new feature, and we were very happy that the customer had actually tested it, and especially that they tested it with some commitment, which they did. And what we've developed is this way of modeling the users.

(27:37):
So, what we see is that in B2B it's often quite generic: the playbooks and next best actions are really on a company level, while what you see in B2C and e-commerce is very much on a person, on a user level. It's super personalized and tailored, and that is what we just released in this model. It's similar to an RFM model, I'm not sure if you're familiar

(28:00):
with that; it's a model used in e-commerce. And what we've done is that, let's say, as a company you might have 10 users, but those 10 users might engage completely differently with your product. One uses it every day, one once every month, one is a CFO, so it's not expected to use it every day. So every user has a different expected behavior and also

(28:25):
should actually be treated differently. And that is what this model does. So it segments your users, from champions to hibernating customers, new users, et cetera. And what this customer of ours did is connect all kinds of playbooks on a user level.

(28:47):
So, for example, a new customer should become a new user and should be in a welcome journey, while a champion could get a G2 request or even an upsell opportunity. And they had all kinds of playbooks for, let's say, the different user segments. And I think that works really well.

(29:08):
I don't know if you have anything to add to this part?
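
For listeners who don't know RFM, here is a minimal sketch of that kind of user-level segmentation, assuming pandas: score each user on recency and frequency of product usage and map the combination to a segment that a playbook can target. The thresholds, segment names, and example users are illustrative, not Churned's model.

```python
# Sketch: RFM-style segmentation of users by recency/frequency of usage.
# Thresholds, segment names, and example users are illustrative.
import pandas as pd

users = pd.DataFrame({
    "user": ["ann", "bob", "cfo", "dee"],
    "days_since_last_login": [1, 45, 20, 90],
    "logins_last_90_days": [80, 2, 6, 1],
})

def segment(row):
    recent = row["days_since_last_login"] <= 14
    frequent = row["logins_last_90_days"] >= 30
    if recent and frequent:
        return "champion"           # e.g. G2 request or upsell playbook
    if recent:
        return "new_or_occasional"  # e.g. welcome/activation journey
    if row["days_since_last_login"] >= 60:
        return "hibernating"        # e.g. re-engagement playbook
    return "at_watch"

users["segment"] = users.apply(segment, axis=1)
print(users)
```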

Speaker 3 (29:13):
No. So, yeah, the interesting thing to see there, in what they did with it, is that they also looked at what type of features of the product a user used, related to a certain user role. So, indeed, you compare the features that a CEO is expected to use with what they are actually using, and, based on

(29:35):
that behavior, you can bring very personalized messaging. And that's what I think they did very well, both in-app and in their automated outreach.

Speaker 1 (29:49):
Exactly. It's almost like you have one customer with 10 users, but actually 10 different customers and 10 different health scores, so 10 different playbooks.

Speaker 2 (30:10):
So this is where the rubber meets the road with these two, because they've built this platform foundationally, from the ground up, using machine learning. And so I ask about the future of machine learning and AI in CS specifically, and also how that pertains to us humans, us mere mortals, and our interaction with these machine learning models.

Speaker 1 (30:31):
One thing that I think is good to realize is that AI is not one thing, right? It's more like a family of technologies and methods that do something similar but different. So everybody knows gen AI, as if that is AI, but it's only one of the forms of AI. Gen AI is more the ChatGPT-like form, but we have predictive AI, which is using a machine learning model, like Churned does, to predict what a customer is going to do.

(30:53):
You have prescriptive AI, which is telling you what to do: what is the next best action? So now we know this customer is at risk; what is the next best action to take? NLP, natural language processing. So I think there are more forms of AI; it's not only the gen AI field. But I think our point, and Michiel will confirm this in a second, is

(31:16):
that we believe that the future of AI is actually combining those different sources. So it's a predictive AI model telling you which customers are at risk, then a prescriptive AI model telling you what is the best action to take given their behavior, and then the generative AI actually does it. So it's connecting the different pieces of AI to have

(31:39):
this autonomous AI vehicle in customer success.
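
A minimal sketch of that predictive-to-prescriptive-to-generative chain, in illustrative Python: a predicted risk score flags the customer, a lookup maps the risk driver to a next best action, and a stubbed generative step drafts the outreach (a real system would call an LLM there). All names and thresholds here are hypothetical.

```python
# Sketch: chain predictive -> prescriptive -> generative steps.
# The risk score, action table, and template are illustrative stand-ins.
from dataclasses import dataclass

@dataclass
class Customer:
    name: str
    churn_risk: float      # from a predictive model
    top_risk_driver: str   # e.g. derived from feature importances

# Prescriptive step: map the main risk driver to a next best action.
NEXT_BEST_ACTION = {
    "low_usage": "a re-activation email with onboarding resources",
    "open_tickets": "a support follow-up call",
    "low_seat_adoption": "an admin training session",
}

def draft_outreach(customer: Customer, action: str) -> str:
    # Generative step: a real system would have an LLM write the message;
    # a template keeps this sketch self-contained.
    return f"Hi {customer.name}, we'd like to offer you {action}."

customer = Customer("Acme BV", churn_risk=0.82, top_risk_driver="low_usage")
if customer.churn_risk > 0.7:                            # predictive: flagged
    action = NEXT_BEST_ACTION[customer.top_risk_driver]  # prescriptive
    print(draft_outreach(customer, action))              # generative (stub)
```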

Speaker 3 (31:44):
Maarten, you said it very well. I think there's no need to go into the depths of AI and where it will go. I think this is indeed what is crucial for the future: combining all the different components and bringing it into one piece, basically.

Speaker 1 (31:59):
Maybe many more different AI solutions will appear. But what a prescriptive model, or a machine learning model, is very good at is using historic data to predict what should happen. So if you have never logged a call, a model cannot predict the

(32:20):
call as the next best action. So there has to be a human telling it: okay, we should do calls as a playbook; if a customer doesn't log in, we call them. And then at a certain point the AI can measure: okay, what did we see happening after doing that call? So it still needs the human, let's say, to create the playbooks and to come up with new actions, so that the AI model can then look at all these things that you've done and

(32:44):
see: okay, for this customer that showed this specific behavior, this was actually the next best action. So after performing this call, we've seen that feature X had a usage increase of Y. So it's actually learning from the moments that you've done things. But it cannot create, let's say, new ideas itself. So a human has to create the plan, and then it can verify it.

Speaker 2 (33:07):
So I want to spend a little bit of time unpacking that. The first part is the fact that we're looking at, you know, this kind of flow of different machine learning models to get what we need in terms of customer-facing or internal-facing, you know,

(33:29):
AI assistance. It's this notion of: you've got the predictive element that's really looking for those, you know, those things that are about to happen in the account or with the user. You've got prescriptive things that are kind of, you know, telling you, or potentially telling a gen AI model, you know,

(33:49):
what to do. And then you have the execution of that, which could be human, could be gen AI. But it's a very interesting progression, you know, and connection of different methods of machine learning. So I think that is fascinating, and something that I've been digging into and really, you know, trying to wrap my head around and understand a little bit more.

(34:10):
The folks that get their minds wrapped around these sorts of concepts are the folks that are, you know, going to have a place, you know, within CS and

(34:34):
within the workplace going forward. Because, look, this stuff isn't going anywhere, and the more that we can really understand about how these tools are going to help us, and how these tools are here to augment what it is we do as humans, the better. I think the other point here is that, you know, at the end of the day, you know as a human how you want your customer

(34:56):
experience to be. There isn't an AI model that's going to design it for you the way you want it. You know, it may with some prompting, but again, the prompting comes from you, and the design comes from you, and the inspiration comes from you, and that's the human element of this whole thing. So when I think about these, you know, generative AI

(35:19):
models, and these, you know, these kinds of things that we've been talking about, and these things that, quite frankly, a lot of people are scared about because they're, you know, going to, you know, come in and take all our jobs: you know, if you find yourself doing highly repetitive work, and

(35:40):
you find yourself doing something that a machine could do, yeah, it's a possibility. But if you are a highly creative individual, you're used to using technology, you like, you know, adopting new tools and things like that, and, quite frankly, if you're someone who uses

(36:02):
your skills as a human to help a business thrive or, you know, create something that didn't exist before, those are the kinds of things that are primed for these types of AI assistants to come in and help. So I don't know, that's my two cents.

(36:22):
As we start to wrap up this conversation, I do ask, as I do with everybody, for kind of their shout-outs and their resources. And so this is a stitched-together montage, if you will, of, you know, some shout-outs that they'd like to give, as well as some podcasts, some resources, some books that they're paying

(36:46):
attention to.

Speaker 1 (36:48):
Sure. So first of all, shout out to this podcast; that's actually a podcast that I do listen to. No, but also, as a SaaS founder, I like to listen to other people in the same situation telling about their problems and how they're dealing with them. And how this relates to CS is that I listen to the, what's it called, the Big Exit Show.

(37:09):
I listened to that yesterday: Robin van Lieshout telling how he exited inSided to Gainsight, so he sold his company to a customer success platform. So I would recommend listening to that. Shout out to Robin van Lieshout, great story. And just to follow some communities: so Mick Weyers with Customer Success Snack, I don't know if

(37:31):
you ever heard of him, but he's doing very well. He has Connect. There are a couple of communities that I follow and events that I participate in now and then.

Speaker 3 (37:42):
Yeah, for me, I think I just listen a little bit to random stuff, more on the technical side. So I like less, let's say, customer-success-related stuff and more data science, like: what models can we use for customer success? And I think an interesting book to read is Outliers.

(38:02):
That just also keeps you... It's not really about machine learning models, but it tells an interesting story that, yeah, connects a little bit of the data points to events. So you can, you know, play it a little bit against your own story. But yeah, in general, I think the Impact Weekly scaling-up podcast

(38:27):
is also a very interesting one.

Speaker 2 (38:29):
So there you have it. That was my conversation with Maarten and Michiel from Churned. A lot of good stuff in this episode. I really enjoyed it. It was a pain in the butt to edit, but, you know, the format is kind of cool, I guess. I wish it wasn't so labor intensive. Churned puts on quite a few events and webinars and things

(38:51):
like that, so keep an eye out on the website, which will be linked in the show notes, and go have a look around there to see what upcoming events they do have at churned.io. Again, if you want to sign up for the newsletter that I mentioned at the beginning of the episode, go to digitalcustomersuccess.com. And yeah, I hope you enjoyed this episode, because I sure did.

(39:12):
And here comes the outro.
Thank you for joining me for this episode of the Digital Customer Success Podcast. If you like what we're doing, consider leaving us a review on your podcast platform of choice. It really helps us to grow and to provide value to a broader audience. You can view the digital customer success definition word map and get more details about the show at digitalcustomersuccess.com.

(39:34):
My name is Alex Turkovic. Thanks again for joining, and we'll see you next time.