
August 30, 2024 • 53 mins

"If they knew how we built [the AI], it wouldn't be cool."

Learn from Anwar Jeffery, founder and CTO @ Glowstick, what it takes to launch an AI startup. It isn't about being first, being the best, or having the smartest AI -- so what is it about?


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome to Artificial Insights, the podcast where we learn to build AI people need and

(00:05):
use by interviewing product leaders who have launched AI products.
I'm your host, Daniel Manary, and today I'm joined by Anwar Jeffery, co-founder and CTO
of Glowstick, which is a venture-backed company that he founded over three and a half years
ago.
Prior to that, he went through Entrepreneur First, an accelerator that helps entrepreneurs

(00:29):
launch their companies, and I met him around seven years ago at Bonfire, a company that
does public procurement and got acquired for $100 million after a Series A.
I think the question on everybody's mind is, Anwar, are you an AI?
Am I?
No, I'm not.

(00:50):
No, I'm not.
I'd like to build an automated version of myself one day.
Anwar AI?
Anwar AI.
I just annoy myself enough as it is.
I don't think anybody else in the world needs that.
I'll send it to do your interviews for you.
I had an interesting, I guess, discussion recently with someone else who I was hoping

(01:14):
to interview, and I will be on Thursday, and I said to them, talk to me about the AI product
at this company, because I know that they've worked on it, and they said, oh, that wasn't
really AI.
It was just data-driven decisions, and I was like, you know, there's a big spectrum in there
from AI to not-AI, so what does AI mean to you?

(01:37):
What does AI mean to me?
It's funny because I came into this not even wanting to build an AI company at all, right?
One of my biggest things was, I'm never going to build an AI company, because-
It just happened by accident.
It just happened to be the right solution for the problem at hand.

(02:00):
That's what ended up happening.
So what does AI really mean to me?
I mean, I think there are a couple of layers that I've thought through, but I guess we
use what is now referred to as traditional AI, and then LLMs have become a thing.
So neural nets and everything in between, or everything neural net and prior, as opposed to LLMs,

(02:23):
is now considered traditional.
Yeah, you're making a distinction between LLMs, probably generative LLMs, and then everything
before.
Everything before.
So I think in my perspective, you teach a model something, and it essentially takes

(02:46):
over and automates that process for you, or delivers the outcome that you want.
If anything, I just call it some sort of smart automation.
I'd say it's a high level of automation in a sense, with more features, aka attributes,
characteristics, or data points.
I haven't really thought about what does AI mean to me.

(03:08):
I just know that I'm running an AI company, which is just the way it is.
So something like, if you can automate a decision without having to pre-decide it yourself.
That's true, yeah.
There's a lot in there.
Yes.
I'd say, and get the whole you don't have to pre-decide it yourself aspect, but it's

(03:32):
more like if you could teach it how you are doing it.
I think that's the key thing, or how to do it.
And it uses that as the basis to go forth and do it for you, essentially.
So like a good intern.
Yeah, but at scale.
You could go through a hell of a lot more data than a human can.

(03:56):
But it's almost like automation on steroids.
That's how I think about it.
Because LLMs have changed that paradigm, but I mean, that's still automation on steroids.
You just ask the questions and it's got a bunch of data to go off of.
You couldn't have gone off and read all that data in a couple of lifetimes, and good luck

(04:21):
trying to find it.
But it's essentially like human automation in terms of the ability to ingest, structure,
and retrieve at scale.
Right?
Okay.
Maybe we can get back to that one later.

(04:41):
Because you said something about Glowstick where you're at now.
You built an AI product because it was the right tool for the job.
Exactly.
What did you build and what made it the right tool?
So Glowstick is a RevOps assistant for post-sales teams.

(05:02):
That essentially helps them identify cross-sell opportunities, expansion opportunities, and
just run sales playbooks or leading indicators for those sales playbooks from calls that
people in the organization are having with their customers.
Now why does this matter?

(05:25):
Think about it this way.
You've got two B2B SaaS companies, let's say.
Right?
Let's say Microsoft is selling Teams to you.
Right?
There are six people at Microsoft that handle your account potentially.
Your Teams account.

(05:45):
That I'm going to talk to.
That you're going to talk to.
Yes: support, your account manager who handles the actual contract negotiations.
Right?
The account manager handles the renewal bit.
They make sure that you're successful and that you're renewing.
Right?
So one thing that everybody knows, and we've all experienced this, is you talk to every

(06:09):
one of these different people.
They're asking you the same questions.
You're repeating yourself.
Right?
I'm not picking on Microsoft at all, I'm just using them as an example here.
I've even done this at my Toyota garage.
Yeah.
They have no excuse.
There's like 10 people there.
Exactly.
So you have this breakdown of context about a customer within that pod, even if it's

(06:35):
six people just dedicated to an account.
Because all those people also have 20 to 100 other accounts that they're also managing.
Right.
So most of the information is sort of like transferred because some critical event is
about to happen.
Like you're about to do a quarterly check-in.

(06:57):
Right?
And now as an account manager, you know, I'm doing 20 of those over the next two weeks,
and I need to sync up with everybody and get them to give me everything they know that
happened over the last quarter.
Yeah.
And you can imagine how much falls between the cracks.
Right?
So this is essentially the problem that we're solving.

(07:21):
When COVID hit, everybody switched to recording videos and meetings.
So we started ingesting all those calls and started essentially identifying all these
signals that point to expansion opportunities, churn risk, cross-sell opportunities.
What are the customers asking for?

(07:43):
What don't they like about certain features about the product?
What do they need to be supported more?
What are their intentions or goals over the next quarter or the next year, kind of thing?
And bringing those...
Product usage questions.
Exactly.
So it's product usage and intent or needs detection.

(08:08):
And we map all of those to, you know, the product that you sell essentially.
And that is the cool thing, that somebody could say, hey, we're looking
to do XYZ and we could be like, hey, you've got a product that does that.
But they said it to a support person who's not really in contact with your account manager

(08:31):
all the time, right?
So that account manager would be able to hop on Glowstick and see that like little nugget
of like, oh, cool, wait, I could potentially sell this customer this now, this product
now meets their needs because the biggest thing is timing.
Right?
If I ask for it right now, I want it right now.

(08:52):
You probably want it right now or you're going to want it...
Most expansions happen at renewal because it's easy to renew and expand.
Right?
Budget cycles.
Yeah.
Budget cycles.
So it's easy to also track that over time, essentially, and see that, like, they started
talking about it eight months ago and they've been consistent about it for the last four

(09:16):
months.
Right?
They haven't let go of it.
Maybe we just need to circle back, stuff like that.
So you have this like sort of like timeline of how your customer is using your product,
what their needs are, how they're evolving.
Because if you've got a suite of products that you're selling a customer, if I sell

(09:38):
them features A and B, you've got a bunch of like white space.
Right?
All that white space is a potential opportunity to sell this customer something else.
Which could be new features, new products you could build.
Exactly.
Exactly.
As you add more features and new products to your product line, all you have to figure

(10:06):
out is like, when do I sell this?
When do I discuss this?
The worst thing you could do is bring up a product that the customer doesn't need at
all just because you're trying to sell them something.
Now you become a sales guy.
Right?
You want fries with that?
Exactly.
No, I don't want fries with it.
You always had to sell me something.
So you let the customer let you know, essentially, or you have the ability now to pick up on

(10:32):
the signals when a customer is ready to buy something.
You figure out the when and the why.
And you just talk about that product as opposed to some other thing that you've been asked
to sell.
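To make the white-space-plus-timing idea concrete, here is a minimal Python sketch. The catalog, the field names, and the 90-day renewal window are hypothetical illustrations, not Glowstick's actual product or logic.

    from datetime import date, timedelta

    # Hypothetical product catalog
    CATALOG = {"Feature A", "Feature B", "Feature C", "Feature D"}

    def expansion_candidates(owned, detected_needs, renewal_date,
                             today=None, window_days=90):
        """Return unsold products the customer has voiced a need for,
        flagging whether the renewal is close enough to act on."""
        today = today or date.today()
        white_space = CATALOG - set(owned)            # everything not yet sold
        relevant = white_space & set(detected_needs)  # needs that map to unsold products
        act_now = (renewal_date - today) <= timedelta(days=window_days)
        return [{"product": p, "act_now": act_now} for p in sorted(relevant)]

    # Example: customer owns A and B; calls mention a need served by C
    print(expansion_candidates(
        owned={"Feature A", "Feature B"},
        detected_needs={"Feature C"},
        renewal_date=date(2026, 1, 15),
    ))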
Yeah.
Yeah, that brings me back to something you said about needs analysis or needs identification.

(10:54):
And that sounds like the core feature of everything that you've just described.
Knowing needs over time.
Knowing needs from a support call.
How did you detect those?
Oh, man.
So we have this thing that feels like our real schtick.
Dumb AI is always a thing.

(11:16):
Dumb AI.
Yeah.
So doing things as simply as possible.
And it meant breaking it down into a whole bunch of things.
I think the biggest thing was intent detection.
Being able to actually identify a customer's intent and then match it to the domain, which

(11:39):
is our customer's product.
So it's similar to search then, but instead of web pages you want products.
Basically.
So that's essentially what our sort of, I guess, secret sauce is.

(12:00):
Which is interesting.
But we'll be open sourcing a bunch of like the tools that we used.
I have to give a big shout out to Fred, who's our founding ML engineer.
He used to be a maintainer of Keras prior to the whole Google and TensorFlow thing.
That I did not know.
That's very true.

(12:20):
He also published Baal, a Bayesian active learning framework, and Azimuth, which are two really
cool repos or tools, AI tools that we've used to enable what we've been able to do.
But essentially when you talk about dumb AI dude it's like we didn't use any neural nets.

(12:43):
I'm not gonna lie.
We kept it very simple.
Right.
Sentiment analysis.
Like you name it.
Topic analysis.
Domain matching.
Like it became more of an aggregate of, essentially, how would you as a human do

(13:03):
it.
Right.
Like that.
That's what it was.
The big thing that we had, the challenge we had, was essentially how do you detect
an expansion.
That's how we started with this.
Right.
Like what does it actually mean?
What does it mean to detect a need?
You break down what a need looks like.
You could take it from like just the phrases.

(13:28):
You could use phrases.
Right.
Which very easily you could start going into keywords and stuff like that.
Which is one way of doing it.
But then it sort of starts to break down because of the number of ways that humans can say
that they want something.
Yeah.
So you can't just control F.
Yeah.
So you can't just control F and just like use keyword analysis.

(13:50):
So we actually literally built our own sort of intent detection, which is like key
points, goals, like needs.
And I keep saying needs.
But like it's stuff like, it'll be ideal if we could...

(14:15):
Or, I want to...
Right.
Yes.
So you've got that sort of like nuance.
It's interesting.
It's like, I'd like to explore this more.
Okay.
That's it.
That's it.
Just leave it at that.
We don't have to get into the technical side.
Yeah.
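As a rough illustration of the phrase-level intent cues Anwar alludes to ("it'll be ideal if we could...", "I want to..."), here is a minimal sketch; the cue list and regular expressions are hypothetical, and, as he notes, real transcripts need much more than pattern matching.

    import re

    # Hypothetical cue phrases that tend to introduce a need or intent
    NEED_CUES = [
        r"\bit(?:'d| would)? be ideal if we could\b",
        r"\bwe(?:'re| are) looking to\b",
        r"\bi want to\b",
        r"\bwe need (?:a way )?to\b",
    ]

    def extract_need_candidates(utterance: str):
        """Return the clause following a need cue, if any, as a candidate intent."""
        candidates = []
        for cue in NEED_CUES:
            for match in re.finditer(cue, utterance, flags=re.IGNORECASE):
                tail = utterance[match.end():].strip(" ,.")
                if tail:
                    candidates.append(tail)
        return candidates

    print(extract_need_candidates(
        "Honestly it would be ideal if we could automate the quarterly report."))
    # -> ['automate the quarterly report']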
I think what's really interesting though is that concept of how did a human do it and

(14:39):
then not jumping straight to let's just throw it into an LLM.
Yeah.
Don't get me wrong.
We actually benchmarked throughout this whole process.
We've been evaluating our process because the ability to say how would you do it as
a human allows you to essentially create a, almost like a schema or flow that you could

(15:05):
then translate into your traditional ML or even LLM.
So we can do it with LLMs.
Actually we found that we could do the same thing with LLMs.
Right.
So what it came down to, basically, was the false positives, where something was flagged but was

(15:27):
basically a false positive.
Rates were three, four times higher with LLMs.
And don't get me wrong.
We could put in more work into like, what is it called?
Prompt engineering.
Right.
But like small team, there's only so much.

(15:48):
And the thing that actually stopped us, that prevents us going even further in that direction, was
it costs more to essentially do it that way.
We're talking up to three to four, like three times to six times the price for one insight.
Right.
So it's just three to four times worse and three to six times more expensive to use an

(16:10):
LLM.
Right.
And three times worse is more because of the prompt engineering I'd say.
Like you could always do more to improve that.
It's just, yeah, sure.
That could also further improve price, but like it'll still cost more than us just using
our own things at the end of the day.
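For a sense of what comparing false-positive rates and cost per insight can look like, here is a minimal evaluation sketch; the labels, predictions, and per-call costs below are invented placeholders, not Glowstick's numbers or pipelines.

    def evaluate(predictions, labels, cost_per_call):
        """Compare predicted insights against human labels; report
        false-positive rate and cost per true insight."""
        fp = sum(1 for p, y in zip(predictions, labels) if p and not y)
        tp = sum(1 for p, y in zip(predictions, labels) if p and y)
        predicted_pos = fp + tp
        return {
            "false_positive_rate": fp / predicted_pos if predicted_pos else 0.0,
            "cost_per_true_insight": (len(predictions) * cost_per_call) / tp if tp else float("inf"),
        }

    # Invented example: the same labeled calls scored by two pipelines
    labels      = [1, 0, 1, 0, 0, 1, 0, 0]
    rules_preds = [1, 0, 1, 0, 0, 1, 1, 0]
    llm_preds   = [1, 1, 1, 1, 0, 1, 1, 0]
    print("rules:", evaluate(rules_preds, labels, cost_per_call=0.002))
    print("llm:  ", evaluate(llm_preds, labels, cost_per_call=0.01))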

(16:30):
And what you said I think was if you have a process that a human can follow.
Sorry, excuse me.
If you have a process that a human can follow, then you can give that to some kind of AI,
whether it's dumb or whether it's an LLM.
Yeah.

(16:51):
You could give it to some sort of AI, whether it's dumb or an LLM.
Like it's crazy because as a human, when I hear, I want to drive nails into the wall,

(17:12):
right?
I already think hammer.
Yes.
You know what I mean?
It's in my head.
It's in your head, but like we take for granted everything in that sentence.
So I want to, that's you expressing a need, right?

(17:32):
So as a human, I'm suggesting you're expressing a need, right?
So what is the need?
What's the intent?
Driving a nail into the wall, right?
Now I've got to have all this context of what is the best tool to drive a nail into a wall.
If you said, yeah, I wanted to screw a screw into the wall, I could have said screwdriver,

(17:54):
a drill or electric drill with the bit, right?
We said nail, right?
So I'm thinking hammer, but I've got all that context essentially.
So you've got this context of like the tools and what they are good for essentially.
And then you need to know the, be able to pick up, identify the intent and have context

(18:22):
of, like, this thing: nail means hammer, and screw means
screwdriver or, like, electric drill, right?
But you've got to teach an AI model that.
That's a lot of context.
Right?

(18:43):
That's the context that we're talking about.
So if you think about it, like it's a similar process, right?
A customer wants to do something with your tool.
We understand the features that you provide on one end and we have to essentially teach
this model how to put that all together and say, this can be met with this feature or

(19:06):
not.
Or none of these can, like, you know what I mean?
Because you also have that case, where the customer wants something-
Maybe nothing you offer.
Exactly.
Whereas with most of our prompts, the LLM would still say, oh, you know what?
You should just sell them this product, because they still need it quite a bit.
This product that you haven't made yet.
There you go.
It's making up a new one.
I know.

(19:27):
It'll tell you that there's some product that you have that can meet that demand, and it's like,
no, no, I can't.
No, I can't.
But yeah, so.
Not always reliable.
Okay.
So, so there's, there's a lot in there.
Like topic modeling is one, right?
Domain matching is one.
Like sentiment is one.

(19:49):
You get it.
I mean, so there's sentiment, and then there's something else.
I keep forgetting what the other one is.
Those are the main ones.
Oh, it's like similarity.
Like contextual similarity is another one that you need to also look, keep in mind.
Those are all just different signals for context.

(20:10):
Different signals for building the context and get into the decision point.
Right.
So as humans, we just do that like continuously.
Immediately.
Immediately.
Right.
Like I didn't have to think about it.
So like, it's funny cause like that's the hard part.
The hard part is actually going, how do I do it as a human, and how do I teach

(20:33):
a model to be able to do that?
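A minimal sketch of the kind of need-to-feature matching being described, using TF-IDF cosine similarity from scikit-learn as a stand-in for the contextual-similarity signal; the feature catalog, threshold, and example needs are hypothetical, and this is one simple way to do it rather than Glowstick's implementation.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical feature catalog: short descriptions of what each feature does
    FEATURES = {
        "Reporting": "build dashboards and automate quarterly reports",
        "SSO": "single sign on and user provisioning",
        "API Access": "programmatic access to export and sync data",
    }

    def match_need_to_feature(need: str, threshold: float = 0.2):
        """Score a detected need against each feature description and return the
        best match, or None if nothing clears the threshold ('maybe nothing you offer')."""
        names = list(FEATURES)
        vec = TfidfVectorizer().fit(list(FEATURES.values()) + [need])
        feature_matrix = vec.transform(FEATURES.values())
        need_vector = vec.transform([need])
        scores = cosine_similarity(need_vector, feature_matrix)[0]
        best = max(range(len(names)), key=lambda i: scores[i])
        return (names[best], float(scores[best])) if scores[best] >= threshold else None

    print(match_need_to_feature("we want to automate quarterly reports for the board"))
    print(match_need_to_feature("we need better onboarding training for new hires"))  # -> None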
So if you had to do this process all over from scratch, what would you tell yourself
to start thinking about how to approach the problem?
More funding.

(20:55):
Funding doesn't hurt.
All right.
I'd say, here's the reality with AI companies.
I think being well capitalized is a big thing.
Right.
Because of the amount of research you mean.
Because of the amount of research, but also like, if you can collect the data beforehand,

(21:23):
it helps, but real world B2B SAS data is dirty.
It's messy.
Real world data is messy.
And it takes a while to actually get something that works.
And then you get it to work for one person.
Then you get to two, that's a different issue.
You get it to three, that's a different issue.

(21:44):
Just getting to generalizability is an issue.
Right.
And then once you get it to generalizability, getting humans to trust it is a different
like paradigm.
All in its own.
But I mean, the different ways to do that is what we found.

(22:04):
Like, we found that like, if the data is in certain places that they already live in,
where they go to look for stuff, they trust it more.
If it's your own solution that's telling them XYZ, like, I'm coming to your solution
to find things, they are more skeptical of it.

(22:30):
So you say in Salesforce, for example.
If you put it in Salesforce, they trust it.
Upsell?
Yeah.
They'll trust it.
But if you ask them to come to your product, they're like, yeah, this is questionable.
This whole product is questionable.
Right?
The design is terrible.
The design is terrible.
Yeah, it must be terrible.
So like, you end up with like adoption issues, user trust.

(22:55):
And it takes a while for AI companies to really scale or grow.
Growth is one of the biggest AI issues.
But then scalability becomes a second issue.
Right?
Generalizability and scalability become big issues.
So with that in mind, being highly capitalized, running a lean team, is one of the most important

(23:22):
things.
And in the market right now, the reality of it is you're either raising a ton of money
for something that's like a long shot or you're raising like very little in a sense.
But very few people are raising too little now, if you think about it, because you're

(23:42):
not sure when you're going to get your next like raise or if your growth is going to matter.
Especially in AI, it's more like, bootstrap it for as long as you can.
Right?
Prove it.
Like, it might take you a while, but try to prove it.
And then you want to raise knowing that it works.

(24:03):
Right?
Knowing that you figured out how, where to put the data, how people are going to consume
it.
But like bootstrapping now for new companies is more important, super important.
Or just raise a ton of money towards it.
That'd be nice.
And that's why you see a lot of AI companies raising a ton of money for what they're doing,

(24:24):
because it takes a long time to get to the point where it's like it's generalizable,
it scales and users are willing to use it and trust it.
So I think you mentioned what generalizability was before, which was being able to handle
different customers' data, for example.

(24:47):
But what would you say scalability was?
And that challenge is there.
So like generalizability is the model being able to consume and utilize different customers'
data.
Scalability is actually being able to serve all those customers.
Okay.
So then load on the system, like even...

(25:08):
That's more like load on the system.
That's more like how do you architect your models and AI and pipelines in order to get
them to actually handle scale?
Like don't get me wrong, that's not a hard...
It's not that it's hard.
It's more that when you're building early days for one, two, three, four customers, it's

(25:31):
a very different thing from your first five, to like your first 25, to your first 100.
But it's been proven.
Like there are design patterns and architectures that are proven to work.
It's just you just have to keep those in mind.

(25:52):
But then you end up with interesting challenges from generalizability.
When you're getting the data in, training, retesting, QA.
That at scale.
Doing that at scale.
That's where it starts getting tricky, essentially.

(26:14):
Like keeping your models' performance at scale.
Right?
You could sort of, but how do you keep your models' performance at scale?
So then would you say you're ready to hit the gas when you figured out generalizability
with three, five customers?

(26:36):
I think for us it was more like when we're at the three mark, we realized that there
was...
We saw leading indicators of generalizability.
That's how I put it.

(26:57):
Those similarities.
Even though these customers are in different domains, like different industries, we can
see that, okay, cool.
There's certain points where it falls over into the parts where it just works.
And then it becomes a lot of like...

(27:19):
I was going to say tuning, but tuning is actually a term.
Fine-tuning is a term in AI/ML, which is not the right word here.
But it's more iteration.
Tweaking.
Yeah.
It's more like iteration.
Right?
Like you try a lot of things.
We tried a lot of things based on what we're seeing in order to try to improve the generalizability

(27:44):
of the solution.
And it worked.
So like, is it the most generalizable tool just yet?
More customers, more industries still exist.
Our customers are constantly rolling out new product offerings.
Every time they roll out a new product offering, that changes the way that the model behaves.

(28:07):
Oh, their model.
You kind of have scale with just one customer.
Exactly.
Very good.
And then customers, businesses, also change pricing models and the way they re-architect
things.
Now that's tricky.
It's like keeping up with your customers.
Yeah.
Right?
They change names of products.
Like you end up with some very interesting...

(28:31):
And this is the thing around like the scalability too, because these are our customers and their
businesses are always evolving.
So even your first 10, just keeping them, like serving them is a whole endeavor in itself.

(28:53):
Right?
Like you have the same challenges as onboarding a new customer.
Maybe even harder because if you have a customer that has a product named Salt and they change
it to Pepper, you got to know that those are the same thing.
Exactly.
Exactly.
And then don't even get me started on the whole Salesforce integration and what's happening

(29:16):
in the data.
Data is always changing.
Fields are changing.
How do you identify those changes?
There's a whole different world in there that we have to build our own tools that could
essentially detect changes in fields that are required, like whether or not they're
being dropped or changed, and literally ping the Salesforce admin who made the changes

(29:39):
and ask them what the field is so that we could update that.
So anytime you're dealing with customer data, you need to have a process for dealing with
changes in customer data.
Exactly.
Even just Salesforce integration.
Even just Salesforce integration.
Right?
And Salesforce is messy.
That's what I'm talking about.
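As a sketch of the field-change detection idea described above, one approach is to compare the field names seen on consecutive syncs and flag anything required that disappeared, so a human can be asked what replaced it. The field names, the notify step, and the admin address are hypothetical placeholders; this does not use the real Salesforce API.

    REQUIRED_FIELDS = {"AccountId", "ARR__c", "RenewalDate__c"}  # hypothetical required fields

    def diff_schema(previous_fields, current_fields, required=REQUIRED_FIELDS):
        """Compare field names between two syncs and report anything that
        needs a human (usually the CRM admin) to confirm what changed."""
        dropped = set(previous_fields) - set(current_fields)
        added = set(current_fields) - set(previous_fields)
        return {
            "dropped_required": sorted(dropped & required),  # must be resolved before the next run
            "dropped_other": sorted(dropped - required),
            "added": sorted(added),                          # candidate renames/replacements
        }

    def notify_admin(report, admin_email="sf-admin@example.com"):
        # Placeholder: in practice this would open a ticket or ping the admin directly.
        if report["dropped_required"]:
            print(f"Ask {admin_email}: what replaced {report['dropped_required']}? "
                  f"New fields this sync: {report['added']}")

    previous = ["AccountId", "ARR__c", "RenewalDate__c", "Tier__c"]
    current = ["AccountId", "ARR_Per_Product__c", "RenewalDate__c", "Tier__c"]
    notify_admin(diff_schema(previous, current))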

(30:00):
Like data is messy.
So like when you talk about enriching customer data, like you know, you'd have, let's say
something as simple as a customer's ARR that they pay.
Because maybe you want to know whether or not their company or the customer is like

(30:20):
essentially, like, I want to say tapped out, maybe that's the term.
I forgot what the phrase is.
Maybe like have they spent as much as they can on your product?
Right?
Or is there more capacity?
Do they have more capacity and are they showing signs of being able to expand more?

(30:41):
So what's the historical, over time, how have they expanded and stuff like that?
So you want to go back and look at previous deals, what those deals were closing for,
how frequent those were, right?
You enriching all of this, you're using all this data to help make better predictions
of is this customer actually ready to expand?

(31:02):
Right?
Or have they just tapped out and you can't sell them anything anymore because over the
last four years, that's just been the case.
Even though they want stuff, you haven't sold them anything over the last four years.
There's no appetite.
You know what I mean?
So let's say that field for ARR, it could change from year to year; last year, the field was some

(31:27):
other field.
Today they've created a whole new field and the ARR is calculated a different way or for
some weird reason.
There's a different calculation.
It's broken up into four.
You know what I mean?
Like that's the thing that we saw where like one ARR field is like, okay, cool.
That one is actually ARR per feature that they have.

(31:50):
So even just how they do the bookkeeping changes.
Exactly.
That changes everything.
Wow.
So this model that's looking for ARR, now I have to take into account ARR per product.
And so what was your process for dealing with that?

(32:17):
And did you know that that was going to happen when you went in?
You do a startup, you don't know half the shit that you're walking into.
Maybe none of it really.
Like, which is honestly, this is why being naive helps as well because if you knew, I

(32:39):
don't think most people would do it.
You know what I mean?
So good entrepreneurial advice is to be naive and just take things as they come.
Really.
That's what the reality of it is.
You're only dealing with the things that are coming up and are important to your customers

(33:04):
that you need.
There's always many fires burning.
There's always smoke billowing from somewhere.
It's just which fire you put out and in what order that keeps you going, essentially.
As long as the building doesn't burn down, that's the goal.

(33:26):
What's been the most helpful advice for you for putting out the right fire?
Most helpful advice for putting out the right fire?
For me, that came from an engineering background, so you see all the bugs.
This is the issue.

(33:47):
On the technical team, you see all the bugs.
I think a lot of us engineers love the idea of building something perfect, bug-free, well
tested, squeaky clean.

(34:07):
Code looks nice.
We all have this notion in our heads, but the reality is we all know that there's always
going to be technical debt.
The best advice that I... I lean heavily on customer value.

(34:36):
That's what the biggest thing is.
Meaning what's valuable to the customer.
How bad is it for the customer and the customer's experience?
How does it affect adoption?
How does it affect utilization?
That's the most important thing for me.
Do the users see this?

(35:00):
How affected are they by this?
That's what I prioritize more, especially at these early stages when you're trying to
really grow your product and make it very intuitive for your users to use and build
trust.
All the scalability issues, they don't see those.
You know what I mean?

(35:20):
They don't see those.
The reality of it is, that's happening behind the scenes.
We're the ones seeing those.
The worst case is if your scalability issues mean, oh, now the product is down, but now
they're affected by it, then scalability becomes a thing.

(35:43):
But at the end of the day, it's just what's affecting the user the most and how can you
deliver more value to the user within a short period of time?
For us, it's more like within a week was always the goal.
Every week, we had to deliver something that was beneficial to our users.
I think that brings me to the question of how did you tell what actually did bring your

(36:10):
customers the most value?
That's where building strong relationships with champions comes into place and identifying
who the super users are at each organization.
I know people will talk about champions and super users, but you also want the people

(36:30):
that are really lagging and don't want to talk to you.
Why?
Because here's the reality with any solution.
Right?
If it's B2B SaaS, I'd say it usually involves a lot of change management for a
lot of these tools.
Your best top two performing individuals will be outliers.

(36:54):
You'll have a champion and another super user.
Right?
Let's say everybody else is scattered in between and you've got people that haven't adopted
it or just haven't come to it.
No matter how much, let's say your buyer is hopping on them to use it.
Right?
And it's especially the case when the buyer's issues that your solution solves for are not

(37:20):
necessarily the users.
So when you're B2B, you can sell something, but they might not need to use it.
The users might not need it, but still have to use it.
So you're going after those people who are laggards, not just super users, because you

(37:42):
want to know.
Exactly.
What do you want to know?
You want to know why they're not using it and how we can make it better.
Where are they spending their time?
Where else are they spending their time?
Right?
Because it's funny because a buyer's needs, most important problems are not the users'
most important problems.
Right?
And essentially the buyer is asking them to do homework sometimes.

(38:04):
Or asking them to do more work, and they're like, I've got other stuff that I've got
to do.
Right?
But you need this or that.
You need to see, you need transparency on pipeline, for example.
Right?
Yeah.
As a buyer, but as the user, I'm trying to figure out which accounts to move into pipeline.

(38:27):
I'm trying to have different conversations.
I have deals that I don't even have to put in Salesforce yet, that are sort of like in a notepad, that
I'm not willing to put into Salesforce until they're actually a thing.
Right?
Yep.
I do want to ask you a question related to talking to people that aren't necessarily

(38:49):
your champions.
You told me a bit about the journey of finding that the B2B SaaS market has
evolved, probably during COVID, towards
rip and replace, I think you called it.
So people no longer want a hundred tools, they want one big tool.

(39:10):
And can you just briefly describe that journey of discovery?
So think about it this way.
You have incumbents.
Like you've got the big companies in every space.
Right?
Like the leaders in every space.
There's this big, this overwhelming rise of, we're just going to do the same thing, but cheaper

(39:36):
than incumbents.
Right?
So the issue with that though is like, we've got all these newcomers coming in and charging
a tenth of the price.
You're affecting the market value now.
Right?
And the big companies are trying to essentially validate the price point and keep that price

(40:03):
point in check.
So when you say affecting the value of the market, you mean?
I'm talking about like the actual market value, because now what you're saying
is we've gone from, this could be worth 2.5 billion, to, this is worth 250 million.

(40:27):
By cutting the price by 8 to 10 times.
By cutting the price to 10%.
You cut the market to 10%.
Exactly.
So the market shrinks when people drive down the cost, and that's the reality of what
competition creates.
Right?

(40:47):
But the prices, it's just typical, like, you know, standard economics: there's only
so much demand and the supply is increasing.
So prices are going to drop.
It's simple.
So the idea of like the rip and replace is essentially everybody's trying to rip and

(41:07):
replace each other.
Right?
Because I am doing it for a 10th of the price.
A lot of us have got feature parity.
Price becomes the topic of conversation.
Right?
And now what you're running into is, why pay more for the same?

(41:30):
So now you've got all these companies identifying who is using which competitor's product and
trying to rip out that competitor and replace them.
That's what the game does.
That's what the game becomes.
Right?
So we essentially help our customers also run these rip and replace campaigns, or playbooks

(41:55):
essentially, where we can detect, you know, a customer saying something. Let's
say our customer sells eight features.
One of the competitors is mentioned on a call for feature D, essentially.
And then the renewal date for that feature is nine months away.

(42:15):
We let the AM know that you could turn on a trial within six months' time and try to
run this, you know, rip and replace campaign.
Let them try your product and try to swing them over at a different, better price.
Right?
And the benefit there is now you're trying to consolidate that customer to you for a
better price, as opposed to your competitors, who are trying to do the same thing.

(42:40):
They're trying to consolidate your customer onto them for a better price.
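To illustrate the trigger described here (a competitor mentioned against one of your features, with that competitor's renewal far enough out to run a trial first), a minimal sketch follows; the six-month lead time, the data shape, and the account and competitor names are illustrative assumptions rather than Glowstick's actual playbook logic.

    from datetime import date, timedelta

    def rip_and_replace_alerts(signals, today=None, lead_time_days=180):
        """Given detected call signals, flag accounts where it's time to offer a
        trial ahead of a competitor's renewal."""
        today = today or date.today()
        alerts = []
        for s in signals:  # each signal: competitor mentioned for a feature, plus their renewal date
            days_out = (s["competitor_renewal"] - today).days
            if 0 < days_out <= lead_time_days + 90:   # close enough to start, not already lapsed
                alerts.append({
                    "account": s["account"],
                    "action": f"Offer a trial of {s['our_feature']} ~{lead_time_days} days "
                              f"before their {s['competitor']} renewal ({days_out} days out).",
                })
        return alerts

    signals = [{
        "account": "Acme Corp",
        "our_feature": "Feature D",
        "competitor": "CompetitorX",
        "competitor_renewal": date.today() + timedelta(days=270),  # ~9 months away
    }]
    for a in rip_and_replace_alerts(signals):
        print(a["action"])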
So as companies in the market were trying to free up budgets, this became one of the
biggest things that we started seeing happening.
Right?
You know, budget cuts are happening.

(43:01):
Everybody is scrutinizing the tools.
If you were using tools that had some feature parity or similar offerings, you had to consolidate
onto one.
Right?
So it became a game of actually ripping out your competitor and
being the one the customer stays with, being the tool that the customer stays on.

(43:24):
So it sounds like the two summaries would be: expand and try to get other people in the
company on yours, and also really fight an active battle, know what's going on with
your competitor and then get there first.
And at the customer.
Well, it's not even that, like, get there first, sure, but it doesn't mean that you're

(43:48):
going to be the only one there.
It's more like you have to do a lot of discovery about your customer.
Right?
Yeah.
And you need to understand that tool stack.
You need to understand who's using what, where.
And this is where the niche stuff comes into play.
But a lot more.
Right?
Like, what needs have been met by what tool they're using.

(44:12):
Right?
It's a lot of discovery, a lot of relationship building so that people can actually tell
you when they're renewing on other solutions.
It could be interesting.
Like, why are you asking me when I'm renewing on, you know, Google G Suite?
That's a weird thing to ask me.
Right?

(44:33):
But let's say you have like a meeting booking tool, for example.
Right?
It's like, okay.
I guess maybe for your integration with Google Calendar.
I don't know.
But it's really because you want to try and replace G Suite.
You've got a whole lot of offering and your customer probably doesn't even know it yet.

(44:54):
Right?
So there's also a lot of education too, but you can only educate your customer, your
buyer, and they're not always the same as the buyer of that other solution.
That's the other thing that I'm pointing out.
So that's where your champion comes into play.
Who can then introduce you to somebody else, to the other buyer.

(45:14):
And you could potentially bring that into like your, what is it?
So internal networking as well at a customer.
Yeah.
It's a lot of, you know, intel and talking and, like, just relationship building.
Okay.
Okay.
But yeah, the thing I heard you say too was doing needs discovery, not just use case

(45:39):
discovery.
So I got an example with one company I've seen where they have a competitor and the
competitor only has basically one feature and they have a feature that's like it, but
this competitor's feature is just better.
And so now that competitor is going around and saying, use our product instead, because
we have this one really killer feature.

(46:01):
And then the really simple thing would be to say, okay, let's just replicate that feature.
But then why are people buying that feature?
What are they doing with it?
Is that more important to ask or is replicating the feature good enough?
Better does not mean you're actually going to be the product that sells.
There are a lot of products out there that are actually the worst in the segment, but

(46:24):
the leading ones in their segments.
I'm not going to name any names, but like better is not one of those, it's not one of
those measures.
And it's not like, because we're better, people will buy it.
And then there's relationship building.
Right.
And this is what's interesting because that's what the customer success teams or CSMs do.

(46:52):
The customer has to see a history of we can be successful with you and your tool helps
us do what we need to do with as little pain as possible.
Right.
Better, you can come and talk about better all day.

(47:13):
A lot of people are going to sell me on being better.
They tell me why they're better, and okay, sure, you're better.
And it's like, yeah, but I could already do that.
I get that it's easier, but I could do it.
It's not enough.
Because what you're asking me to do if I switch to you is essentially upend all of my team

(47:37):
and all that data and move it over to your better solution.
Like you can't be marginally better.
You have to literally be 10 times better.
Just to win on better.
Just to win on better.
So like if you're creating feature parity and just copying, you're no more than marginally better.

(48:02):
It's not worth it.
Instead of feature parity, you'd focus more on relationships and trust.
It's relationships, trust and success.
Like a success track, like history of success.
Fair enough.
It's business.
So trust means a history of success.

(48:23):
The history of success.
Well, I only have one more question for you.
Go for it.
What do you see as the future of AI?
What do I see as the future of AI?
Is it going to get less dumb?
I'm just kidding.
I think, interestingly, the future of AI is actually building dumb in an interesting way.

(49:01):
Dumb AI makes it to production faster.
And I use, again, I'm using dumb loosely, but simple, if you could break it down into
simple components, it makes it to production easier.
The more complicated your AI solution is, the longer it takes to actually make it to
production.

(49:21):
So like if you want to make it to production, something that's scalable, something that's
maintainable, make it simple.
Break it down.
Solve the problem.
Don't build the coolest thing.
You know what I mean?
We could have just, because I remember I wanted to use neural nets.

(49:43):
I was like, oh, we could use neural nets to do all of this.
And Fred's like, no.
Thank you, Fred.
He's like, hard no.
Hard no.
So like, yeah, thank you, Fred.

(50:07):
I've seen this over and over again with a lot of founders, and this is probably a
controversial take, but, like, my opinion:
I listen to a lot of founders talk about what they want to build and how they're going to
do it.
It doesn't have to be that smart, you're over-complicating it.

(50:31):
And that's what's going to cost you time.
Time to market is one of your biggest things because of how long it takes to actually get
AI across the line.
And that's why a fund like Ex Machina, I think is what it's called, Ex Machina in Montreal,

(50:51):
exists.
It literally funds or buys the IP of AI companies that didn't make it all the way to
scale.
And they take the time and funding.
Yeah.
Because they understand that it takes a while.

(51:12):
So they find the funding to essentially scale those and give them the legs that they need
in order to keep going.
But it takes time.
Don't do that if you don't have to. Go for dumb.

(51:32):
Yeah, but like people don't want to hear that.
No.
Right?
If I told people what we used to do this, it doesn't seem cool.
But that's why it's cool.

(51:53):
It's because it works.
It works and it does something that needs doing.
Exactly.
I get wanting to build the shiniest thing and use the newest, coolest technology.

(52:14):
I understand that need, but at the end of the day, you need to get paid so you can actually
keep your shiny thing growing down that route.
Thanks for listening.
I made this podcast because I want to be the person at the city gate to talk to every person

(52:35):
coming in and out doing great things with AI and find out what and why, and then share
the learnings with everyone else.
It would mean a lot if you could share the episode with someone that you think would
like it.
And if you know someone who would be a great person for me to talk to, let me know.
Please reach out to me at Daniel Manary on LinkedIn or shoot an email to daniel@manary.haus,

(53:00):
which is Daniel at M-A-N-A-R-Y dot H-A-U-S.
Thanks for listening.