Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome to Artificial Insights, the podcast where we learn to build AI people need and
(00:05):
use by interviewing product leaders who have launched AI products.
I'm your host, Daniel Manary, and I'm joined today by Jake Walker, who I've worked with
personally at Darwin AI, and he's got a couple of other companies under his belt, such as
TMMC with Toyota and Sportsnet, where he delivered some pretty cool stuff to a lot of people
(00:28):
who love hockey.
I think the first question on everyone's mind, Jake, is, are you an AI?
I don't think so.
Although if I was, I'd probably answer the same thing, right?
That's a good point.
If you ask ChatGPT if it's an AI, I think it has to tell you now, and then it gives you
this big, long ethical spiel.
(00:49):
But all right, we'll just trust you on that one for now.
Have you seen the interview where Reid Hoffman interviews himself?
Have you seen that?
Oh, I actually did not watch that.
Did you see it?
Is it good?
I did.
It's mind blowing.
He's just talking about himself.
And of course, it starts with, I've learned everything about you.
I've read all your books.
I've skimmed all your articles.
I know how you think.
(01:10):
So the way that Reid answers himself is AI version is quite fascinating.
You should watch it.
All right.
I will absolutely take a look at that.
And we can share that with people who listen who are interested as well, because I feel
like artificial personhood is something that people are talking about and are interested
(01:30):
in.
But I'm interested in your thoughts from both a practical perspective from what you've seen,
but also what you think.
I'm talking about AI at a higher level too, because I think there's so many definitions
of AI right now and people go off the deep end sometimes and it's either ChatGPT or
(01:50):
nothing.
So what does AI mean to you?
Well, I'll clarify for your audience.
I am not a data scientist.
I do not program.
I'm going to use air quotes here: "program" AI.
I am a product professional.
I started my career as a manufacturing engineer, like nothing at all close to anything AI.
(02:15):
And as you mentioned before, TMMC, I was a manufacturing engineer at Toyota for about
six years.
From that point, I ventured into the world of products.
I became a product manager for an autonomous mobile robot company called OTTO Motors, which was
recently acquired by Rockwell Automation.
Kind of a good success story from our region.
They spent a lot of time building really sophisticated robots that drove themselves through facilities.
(02:42):
So really, really cool technology.
But again, I was a product manager.
I oversaw the hardware aspects of it and then I kind of figured out how do we branch out?
How does the company make more money?
Anyway, so not at all data science, data sciency.
That's a great perspective, I think.
Yeah.
I mean, really that's why do companies make products?
(03:08):
They make them to make money, right?
And you can use traditional ladder logic programming.
You could use simple JavaScript to show something on a screen and then some backend programming
to compute things.
Or you can use AI.
(03:28):
So to answer your question, what do I think is AI?
I think it's just another tool.
That's really all it is.
It's obviously a very sophisticated tool and has opened the doors to a significant amount
of opportunity in the world where value can be created for people to do things that they
(03:50):
typically couldn't do before.
That's the way I see it.
It's a tool.
It's a tool.
Obviously, you need very, very smart people to build it.
And eventually, maybe it'll build itself.
I mean, I don't know.
People are working on it.
Exactly.
You can even do coding.
(04:11):
You can have ChatGPT code for you, right?
So I mean, it happens.
It's going to happen.
It's going to happen eventually.
But again, it's just a tool.
And to go back to my point about why companies make things, they make them to make money.
Okay, so great.
We've got this other tool now that we could potentially create new value in the world with.
(04:34):
That's what it is.
That's from a business perspective and from my perspective as a product manager.
That's what it is.
We'll go to Darwin.
Let's go to Darwin where I met you for the first time and we worked together there.
Darwin founded by Alex Wong and his team.
Alex is an incredibly smart person.
(04:55):
His team was really, all of his co-founders, really, really great people.
Very, very smart as well.
They had this great tech, this great deep learning proprietary technology, and they
needed a team to help figure out how do we commercialize something like this?
Where do we find a use for it in the world?
There's the key word.
What's the use for this in the world?
(05:15):
What is the value that this technology can create for the world?
We spent a lot of time trying to figure out the best use case, the best market segment,
the best place to double down, invest everyone's resources, and build a thing that could make
(05:38):
the company money.
Right?
AI doesn't bring you money by itself.
Not yet.
Not yet.
That's right.
Maybe it'll open its own bank account.
Would you say there's a difference between ladder logic and AI?
(05:58):
Or what would you say that difference was?
Of course there's a difference.
One's very primitive and one is very advanced in capabilities.
And I say that from a...the word capability I think is a good word that we need to use.
A capability is enabled by a feature.
A feature is built from a technology.
(06:23):
That technology could be ladder logic.
That technology could be AI.
But it depends on what capability is needed to create that value.
So it's a...on the surface the question is really simple.
But as you start diving into the world of applications, you start to realize, well,
(06:44):
okay, let me give you an example.
Let's say...I'm going to go back to my Toyota days.
Let's say you put a robot arm on the production line and you want the robot arm to pick up
a piece of...you want it to pick up the headlamp and you want it to put it on the car.
You could use AI to do that operation.
(07:06):
But given that a production line is very repetitive, everything's fixed in its location, you don't
need AI.
So you could use it, but it would be a waste of time.
You could just use simple ladder logic.
Pick up this piece.
The sensor's going to basically signal the headlamps there.
(07:29):
Okay, go ahead and squeeze your hand so you can grab it and then do your operations.
Lift up here, move over here, and then sync up with the motion of the line, which you
can do with traditional sensors, which again, you can program using your ladder logic, and
then put it on the car.
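The step-by-step sequence just described is simple enough to sketch in a few lines. This is an illustrative toy in Python, not anything from Toyota's actual line; the class, function, and step names are made-up stand-ins for real PLC inputs and outputs:

```python
# Toy sketch of the fixed, sensor-driven pick-and-place sequence described
# above: the kind of logic ladder logic (or any simple sequential program)
# handles with no AI. All names here are hypothetical stand-ins.

class Gripper:
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True
    def open(self):
        self.closed = False

def place_headlamp(headlamp_present, gripper, log):
    """Run the sequence only when the part-present sensor fires."""
    if not headlamp_present:        # sensor signals the headlamp is there
        return False
    gripper.close()                 # squeeze the hand to grab it
    log.append("lift")              # lift up
    log.append("move_to_car")       # move over the car
    log.append("sync_with_line")    # match the motion of the line
    gripper.open()                  # put it on the car
    return True

steps = []
g = Gripper()
done = place_headlamp(headlamp_present=True, gripper=g, log=steps)
```

The point of the example is that every branch is fixed and sensor-driven: there is nothing variable for a learned model to handle.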
Somebody could look at that application and think, we have to use AI because there's no
(07:52):
other option.
Or maybe some corporate executive says, we need AI because it's in our vision.
It's one of our strategic objectives to implement AI and we just hired this whole data science
team and we need to make use of them.
So you could look at that application and you could solve it with AI, with intelligent
vision to recognize distinct features on the parts that you want to pick up and the distinct
(08:17):
features on the car so that you can localize and figure out where to put it.
Use pressure sensors to feed back to your model and blah, blah, blah, blah, blah.
You could use it.
It would be a complete waste of time.
So there's your difference.
It would take a long time to do, that's for sure.
Yeah, I mean, of course it would.
Of course it would.
I mean, that's your difference right there.
What's the difference between them?
(08:38):
Application and the capabilities that they can create, right?
It's a waste of time to do it on simple applications where you can use traditional technology.
That's a great point.
And you said the model of there's a capability given by a feature that comes from a technology.
What kind of capabilities have you seen that need something maybe a little more complicated
(09:03):
that you would call AI or something close to it?
Well, vision.
Vision is a good example of that.
You can't really use traditional technology to take an image or a video and recognize features in it to make decisions.
You kind of need a more intelligent technology to provide that classified output, and that leads to deep
(09:29):
learning, right?
I think that's needed for that application.
It's just one example.
Voice is another one.
Anything that is variable, right?
Where your data set coming in can be variable or very, very complex.
Where it would take years and years to program using traditional technology.
(09:55):
We actually got to work on one example.
We had to classify the car coming down the assembly line.
So to be able to say what category of car is this, that's something that we needed deep
learning for.
Looking at the car, making a decision.
I don't know how you would do that with a sensor that wasn't a camera hooked up to deep
(10:18):
learning.
Well, I mean, on the surface, what you just talked about, there's other data sources that
you could use to make those decisions, right?
Like every car that comes down a production line has a tag associated with it that says
what the car is, what color it is, what parts are in it.
If you didn't have that data, then of course you would need to use vision to intelligently
identify each of the items that are in the image that you're looking at.
(10:42):
But again, I'll go back to my original comment.
I think there are other ways that you can solve that problem that don't involve deep
learning.
I like that as an alternative.
The data exists somewhere.
You just need to get access to it.
Exactly.
(11:03):
You need to be creative with the data that does exist.
You need to pull it somewhere and make use of it.
And what's one of the more exciting decisions that you've seen made off of data or something
that you've been able to do as a capability?
I'll give you two examples.
(11:24):
OTTO Motors had really great visualization software that basically tracked the movements of the
robots through the facility, mapped them, and basically created a heat map.
And the use of that was to show the production teams where the high traffic zones were for
(11:45):
the robots that were driving through.
And as you're driving, you store the data of your location and then you map it to an
image.
And then, over and over again, you pool that data together and collect it
in a nice, intelligent way, which basically just becomes a color on a screen, which creates
the value for the production team.
(12:06):
I think that was a really great use of the data that the OTTO team created for themselves
by storing the localization data of the robots.
So that's one example that I think was really cool.
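As a rough sketch of the pooling idea only (nothing from OTTO's actual software; the grid size and sample data are made up), collecting (x, y) localization samples into a 2D histogram is essentially all a basic traffic heat map needs:

```python
import numpy as np

def build_heatmap(positions, floor_w, floor_h, cell=1.0):
    """Pool (x, y) robot localization samples into a 2D occupancy grid.

    positions: iterable of (x, y) positions in metres.
    floor_w, floor_h: facility dimensions in metres.
    cell: grid cell size in metres.
    """
    xs, ys = zip(*positions)
    heat, _, _ = np.histogram2d(
        xs, ys,
        bins=[int(floor_w / cell), int(floor_h / cell)],
        range=[[0, floor_w], [0, floor_h]],
    )
    return heat  # high counts = high-traffic zones

# Example: 200 samples clustered along a main aisle at x ~ 5, plus one outlier
samples = [(5.2, y) for y in np.linspace(1, 9, 200)] + [(2.0, 2.0)]
heat = build_heatmap(samples, floor_w=10, floor_h=10, cell=1.0)
```

Rendering the grid as colors over a facility map is then just a visualization step on top of the pooled counts.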
Another one, I mean, I'll go to my days at Sportsnet.
(12:27):
So after I left Darwin, I joined the Sportsnet team and I ran a team of product managers there.
I was responsible for the Sportsnet Plus product, which is Sportsnet's digital streaming product.
So you can watch all of the channels on your phone, your computer, your smart TV, Xbox,
(12:48):
PlayStation, all of that stuff, anything with like a player, the ability to run a player.
In that world where you've got people watching streaming, digitally streaming all across
the country, you have millions and millions and millions of bits of data that you can
use to tell a story.
(13:09):
We used a service called Conviva.
Conviva built this really great big data platform that you could basically plug all of
your player data into, so that you can visualize performance across the country.
And you can start selecting, okay, I want to see the performance level of everyone watching
(13:36):
in Alberta, specifically in Northern Alberta, who's on this channel and is playing on their
iPhone.
You could just kind of keep subcategorizing all the data and you can use that to tell
a story to identify where you want to prioritize your engineering efforts.
(13:57):
So if I know that we're struggling in performance in Northern Alberta on iPhones, we can show
that data, which is just a giant pool in Conviva, which they've intelligently displayed on a
screen for us so that we can justify rerouting all of our engineering resources to fix a
problem that currently exists on iPhones in Northern Alberta.
(14:19):
Why?
Who knows?
Some networking thing probably where the CDN is not interfacing with the internet service
providers in Northern Alberta and it's specifically on iPhones because mainly people up there
have old iPhones.
I don't know, I'm generalizing obviously, but that's the kind of story that you can
tell to make intelligent business decisions using the data available to you.
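A minimal sketch of that kind of slicing, using hypothetical field names and numbers rather than Conviva's real schema or API, might look like:

```python
from collections import defaultdict

# Hypothetical player-telemetry records; the field names and values are
# illustrative, not Conviva's actual data model.
sessions = [
    {"region": "Northern Alberta", "device": "iPhone", "buffering_pct": 9.1},
    {"region": "Northern Alberta", "device": "iPhone", "buffering_pct": 7.4},
    {"region": "Northern Alberta", "device": "Android", "buffering_pct": 1.2},
    {"region": "Toronto", "device": "iPhone", "buffering_pct": 0.8},
    {"region": "Toronto", "device": "SmartTV", "buffering_pct": 1.1},
]

def worst_segment(records, keys):
    """Group records by the given keys and rank segments by mean buffering."""
    groups = defaultdict(list)
    for r in records:
        groups[tuple(r[k] for k in keys)].append(r["buffering_pct"])
    ranked = sorted(
        ((sum(v) / len(v), k) for k, v in groups.items()), reverse=True
    )
    return ranked[0]  # (mean buffering %, segment key)

mean_buf, segment = worst_segment(sessions, keys=("region", "device"))
# segment comes back as ("Northern Alberta", "iPhone"): the slice to fix first
```

The same grouping idea extends to any subcategory chain (region, channel, device, app version), which is the "keep subcategorizing" story told above.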
(14:46):
And that sounds like all you have to do is get everything together in one place.
It's not even like you're looking at making AI on top of it.
You're not really doing anything other than trying to tell a story to understand it from
a human perspective to go deeper.
Yes.
Because you want to invest in...
(15:08):
Why does anybody...
I'm going to go back to what I said before.
Why does any business do anything?
They do it to make money.
How do you make money?
You make more money than you spend.
So the money you make is generally fixed, unless you dump a whole bunch of money
into marketing so that you can bring on more users.
But let's say income, let's say revenue, is fixed.
(15:31):
I'm going to throw crazy numbers out there: $3 million a month, whatever it is.
You make profit by spending less than $3 million a month across your entire company: marketing,
sales, engineering, admin fees, everything.
So you minimize the costs that you spend on your business so that you can maximize profit.
(15:57):
And you can do that by making intelligent business decisions, which you can do if you're
crafty with data.
It doesn't need to be advanced technologies to help make these decisions for you.
You just have to use what's available to you.
I like that.
And can you tell us about a time where you were able to do that and launch a new product
(16:21):
or feature?
Yeah.
So I'll use Sportsnet as another example.
Again, this is not AI.
Anybody could look at this and think that it is, but it's not.
I like those cases.
Exactly.
So one of the things that my team and I did in my tenure there was launch what's called
(16:43):
dynamic ad insertion.
So when you're watching a game, so you're watching the Blue Jays, Toronto Blue Jays,
the program ends or it's the end of the inning and it cuts to commercial.
In traditional video when you're streaming this game, you're just going to watch the
ad that's embedded within the stream, which is the same ad that you watch in your cable
(17:05):
box or you go to a restaurant and you see on the TV when it cuts to commercial.
Well we did-
That's why I end up seeing US advertising sometimes.
Exactly.
Right?
If you're pulling in a game from LA, LA versus Texas, for example, you're going to watch
American advertising.
That's just what you're going to do.
So what we did was implement dynamic ad insertion, which is where we don't necessarily remove
(17:31):
the ad that is in that ad pod, that ad break, but we insert digitally over top of it a new
one.
And so the experience for the user watching the game is seamless, ideally, if you implement
it properly, and it's net added revenue on top of what you're already making for streaming
(17:57):
this on your traditional cable, because it's a new audience.
It's a new venue.
And the other benefit of doing this is that you can use the data that is available to
you so you can know who the user is, you can know where they are, you can know their demographics,
their general age, gender, data, yada, yada, and you can create ad campaigns that are specifically
(18:23):
targeted to certain user groups.
So you could say in Western Ontario, I'm going to run a Jiffy Lube ad because Jiffy Lube
wants to get more London customers, for example.
So we're going to target a certain radius around London using their postal codes.
When we're in Google Ad Manager, we'll go in and create a new ad campaign that's called
(18:48):
Jiffy Lube London.
And anybody who's watching the Blue Jays in London with that postal code set is going to
get that ad.
But if you're in Toronto, you're going to get a different ad, one that's targeted to
Toronto because maybe it's Budweiser.
Budweiser wants to get more people to see their ad in Toronto so that they'll go out
(19:08):
and buy their beer kind of thing, right?
And you can do that across the whole country and it also creates this new revenue opportunity
for your sales team to go out and sell net new inventory for ad space.
But again, no AI, right?
You're just using the data that's available to you, that significant amount of data.
(19:33):
You just have to connect all the pieces together so that you can automatically make those decisions
knowing where the user is, who they are, da da da, tying it into Google Ad Manager, tying
it into your player so that you can ad-stitch it in.
And then you obviously have to get that video across the air.
It's a very significant and complicated task to do, but again, using traditional technology.
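As an illustration of the targeting idea only (real systems make this decision inside the ad server, e.g. Google Ad Manager, not in the player, and the campaign names and postal-code prefixes here are invented):

```python
# Toy sketch of geo-targeted ad selection by Canadian postal-code prefix.
# The campaigns and prefixes are made up for illustration; a production
# system would do this in the ad server, not client-side like this.
CAMPAIGNS = [
    # (campaign name, postal-code prefixes it targets)
    ("Jiffy Lube London", ("N5", "N6")),   # London, ON area codes
    ("Budweiser Toronto", ("M",)),         # Toronto postal codes start with M
]
DEFAULT_AD = "National fallback ad"

def pick_ad(postal_code):
    """Return the first campaign whose prefix matches the viewer's postal code."""
    code = postal_code.strip().upper()
    for name, prefixes in CAMPAIGNS:
        if code.startswith(prefixes):
            return name
    return DEFAULT_AD
```

Any viewer outside a targeted area falls through to the default, which is the behavior described above: different ads in London versus Toronto, same game.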
(19:55):
Neat.
How do you as a product manager think about the technology that should underlie one of
your features, with the reminder that features are what drive the capabilities?
I do not.
I try not to.
Because as a product manager, it is my job to properly distill the market and communicate
(20:28):
it to the business.
And when I say the business, I mean the technical teams so that the technical teams can make
the intelligent decision on what technology is needed to solve the problem.
It's my job to oversee the problem.
You could also call a product manager a value manager.
And this is the same as being a product manager in an AI space or a product manager for notepads
(20:53):
at Staples.
You know what I mean?
It's the same thing.
If you can say, here's the opportunity in the market.
There is a pervasive and impactful problem that that market has, and the market is willing
to pay for a solution to solve that problem.
Boom.
That is an opportunity that you as a product manager need to jump on.
(21:16):
You make a business case, you sell it to your exec team, and then you take that problem
and you properly and very efficiently communicate it to your engineering teams so that they
understand who is the user, what is the problem, what's the urgency of this, what approximately
timeline do we have before that willingness to pay goes away, et cetera, et cetera, et
cetera.
(21:36):
All those categories of building a good product requirement document basically so that your
engineering team can say, okay, we understand the problem, we understand who the user is,
and now we can make a decision on what is the best technology out there to solve that
problem.
If they come back and say, this is a very complicated problem, you should probably get
(21:58):
some deep learning.
You should probably get some data scientists on the team so that we can solve this using
deep learning, then okay, great, let's go back to the exec team with a saying, if you
want to solve this problem, it's going to bring in X amount of revenue.
We got to hire a data science team or we got to hire specialized data scientists so that
we can build the solution to solve this problem, which if we do, we are going to make money
(22:20):
because that market is going to buy it basically.
I want to highlight what you said of timeline before willingness to pay goes away because
I don't think I've ever heard anyone say that specifically as that's what you should be
building a business around because of course timeline to willingness to pay before that
(22:40):
goes away, that is going to change whether or not you can make money off of this by delivering
in a timeframe with hiring a data science team that might be expensive and so on.
Exactly.
There's risks that come with that, right?
So if we were to say, we found this market opportunity and there's some urgency that
sits with it, whether it be political or economic or geographical or weather-based, who knows?
(23:06):
We got to get this done before the winter, because if the winter comes, the summer
demand just basically goes away and it doesn't come back until next year.
So those are all just examples of a time-based urgency that you need to slap on top
of the opportunity.
I mean, the other one is investor related.
If you're a startup and you've got investment, you've got a certain amount of runway, you've
(23:29):
got to make money before that runway goes away, right?
So there's your timeframe.
So you got to take that information to your exec team to say, we want to solve this problem
and we can, but we really need to do it quickly and we need six extra people so that we can
hit the timelines.
If your exec team comes back and says, we're not willing to invest in an extra six people,
(23:52):
then you basically tell them, well, then we can't do this.
You want us to do this?
This is what it's going to take.
And if you don't want to do it, then we got to move on to find something else to do with
the resources that we do have.
So then as a product manager, a big part of your job is that negotiation with the business
represented by the execs to say, here's how and why we should spend that money so we get
(24:15):
more back.
Correct.
Yes.
The other one is competitor related because I mean, obviously if you know the problem,
there's a chance that someone else does as well.
Someone else who has the resources, let's throw the word Google, Apple, Amazon out there.
If this fits within their wheelhouse of what they're willing to invest in, they could just
(24:38):
throw 25 people at it and have it done tomorrow.
Know what I mean?
So there's that level of urgency as well.
And the other one is first to market is always going to win.
Not always, but 85 to 90% of the time, first to market is going to win.
First impressions.
It has advantages for sure.
(24:59):
Yes.
Yeah.
What do you think the value of being a close second is?
I'd certainly see in cases where the first to market kind of creates the category and
then once they've created the category, someone comes in and does it better.
I think the answer to that lies in understanding the distinct competencies that come with
(25:23):
coming in second.
And the market segmentation and use case analysis basically.
So if you're a follower, a fast following company of a startup that just hit the market
and threw a new deep learning capability at the world that has caught everyone by storm,
(25:45):
let's use OpenAI.
Let's use OpenAI as your number one.
Let's use Perplexity as number two.
I'm not saying that they are one and two.
I'm just saying those are two examples.
So say OpenAI ran at the world and they're making a crazy amount of money because
they ran with it and they got there first.
(26:06):
Perplexity, on the other hand, has a really unique angle, and I use both because I like seeing
how they compare.
Perplexity, let's say that they have a distinct competency of sourcing every answer that is
(26:27):
outputted, and that's a value that a lot of people may see as worth using Perplexity for, and not
OpenAI's ChatGPT, for example, because you want to trust the outputs.
So you could see an output from ChatGPT.
(26:50):
ChatGPT's math is terrible sometimes, right?
So if you use Perplexity and the math is better, you can validate where they got the intelligence
from to answer that question by seeing how they source the output, if you know what I
mean.
So that's an example of a company that has done well that wasn't first, but they focused
(27:12):
on being uniquely competent.
I like that, especially as a term, uniquely competent or distinct competency.
I was just having a conversation with someone yesterday about why Spotify doesn't allow
you to add bookmarks on podcasts.
And they said, Spotify team, you got to get on this.
(27:33):
And I asked them a bit about it and they said, well, Scribd has it; they've got bookmarks
on podcasts.
And I thought, that kind of just sounds like feature parity.
It doesn't really sound like a distinct competency.
So would it be something that, how would you evaluate the usefulness of Spotify building
bookmarks in terms of Spotify's business?
(27:57):
I think it would depend on, okay, well, let's step into the shoes of the product team at
Spotify.
They would evaluate this feature against all the other features that they have, that they
want to do.
Every feature that you implement is going to deliver value to the market.
(28:20):
Each feature is going to have a different level of value that you can assess against
the other ones.
So let's say bookmarks is a feature that the product team is going to evaluate.
They've done a competitive analysis.
They've done a win loss analysis.
So they've seen, okay, which publishers are going to move to Spotify regardless
(28:44):
of whether we have bookmarks or not versus the ones that don't use Spotify because we
don't have bookmarks.
That's an analysis that you have to do.
And there's going to be some kind of ROI attached to that decision.
Is it worth us investing the engineering resources in building bookmarks?
(29:04):
And the answer to that is going to be, are we going to win business because of it?
So if we know we have to spend three sprints of effort across two different engineering
teams to implement bookmarks and then launch into the market, that's time, right?
Time is money because you got to pay these engineering teams to do the things.
(29:25):
So is two and a half months of time of engineering effort invested in this functionality going
to win us 10 times that in revenue?
Yes or no?
Is it going to win us eight times?
Okay.
If it wins us eight times, we know that we have another feature that's going to win us
15 times.
So I think we're going to go with that one because it's going to make us more money.
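The prioritization arithmetic being described is simple to write down. This is a toy sketch using the hypothetical multiples from the discussion (the dollar figures are invented to make the 8x and 15x multiples concrete):

```python
# Back-of-envelope feature prioritization by expected ROI multiple.
# Costs and revenues below are made-up numbers illustrating the 8x-vs-15x
# comparison in the conversation, not real Spotify figures.

def roi_multiple(expected_revenue, eng_cost):
    """Expected revenue per dollar of engineering cost."""
    return expected_revenue / eng_cost

features = {
    # name: (engineering cost, expected revenue)
    "bookmarks": (100_000, 800_000),        # ~2.5 months across two teams, 8x
    "other feature": (100_000, 1_500_000),  # the 15x alternative
}

ranked = sorted(
    features,
    key=lambda f: roi_multiple(features[f][1], features[f][0]),
    reverse=True,
)
# ranked[0] is the feature the team funds first; bookmarks gets backlogged
```

With equal costs this reduces to comparing revenue, but keeping it as a multiple is what lets you compare a three-sprint feature against a three-quarter one.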
(29:47):
And let's just, I mean, that's a good idea.
Bookmarks is a great idea, but let's backlog it and we'll just prioritize it next sprint.
And next sprint it gets deprioritized again.
That happens so much where you know you want to do things, but they keep getting deprioritized
because of strategic priorities or just other things make more money.
(30:10):
And it comes down to that expected ROI.
Oh, it's all based on the money.
Exactly.
Investment in, money out.
So in that case, it almost sounds like unless there's a unique case for AI, AI might actually
be a negative in terms of ROI because it has a higher upfront cost.
(30:32):
Right, time.
Yes.
Which is why the best place to start any solutioning of solving problems is starting with the problem.
Because when you start with the problem and you properly understand it, then you can do
an analysis of figuring out what technology is best to solve it.
You can hire the right people who are specialists in that technology and push it into market.
(30:58):
If it so happens that the best technology to solve that problem is deep learning, then
you hire a deep learning team.
As long as the market is willing to pay for you to solve that problem.
Yes, absolutely.
If you could go back in time 10 years, what would you tell yourself about product management,
(31:20):
maybe AI in particular?
What would I tell myself?
If I could go back in time and tell myself about product, I
would have loved to know what it is way earlier, because I probably would have jumped into
(31:43):
it sooner.
That being said, I think I'm a good product manager because I wasn't to begin with.
I was involved in solving problems as an engineer first.
(32:05):
Then I lost interest in the how to solve problems.
I gained interest in the why, I think.
That's what drew me to product.
Now if I could do anything differently in my academic career and early professional
(32:34):
career, it would be to learn about product earlier on, so that I knew that it existed
earlier.
I think it's a really fantastic career.
I've really enjoyed it.
I love understanding customers, stepping into their shoes, and really understanding why they
(32:55):
struggle with something so that I can work with engineering teams to solve them.
I love that part of it.
I think you also have to be a pretty strong people person to do this well.
You don't have to be, but I think it really, really helps because I think the majority
(33:17):
of the job is understanding your customer.
I like to use the rule of thumb of 20%.
At least 20% of my time is spent talking to customers or at least in the market, understanding
them, researching, talking to them, and whatever else.
I took the pragmatic marketing courses early on several years ago.
(33:44):
What they like to use is what's called NIHITO: nothing important happens in the office,
N-I-H-I-T-O.
It's a way of them saying you can't be a product manager if you're just solely focused on being
in the office and solving problems because you lose focus on the problem.
(34:06):
The problem exists in the market where the money comes from.
You need to be focused on that.
Just to close us off, what do you see as the future of AI?
It's very hard to say because the way that AI is advancing now is scary.
(34:27):
It's super scary.
I know I watched your little video for your podcast early on, kind of like
your trailer, and you mentioned the word Skynet.
I was like, oh, boy.
That's a scary outcome for a potentially scary future that needs to be figured out.
(34:49):
We've got Reid Hoffman talking to Reid Hoffman and his AI self.
That could be scary.
That AI Reid Hoffman could call up some guy and say, hey, I want you to come work for
me and move to this place.
I'm just talking to Reid Hoffman.
He wants me to move here.
It's kind of an innocently scary situation, but just the beginning of something that could
(35:12):
be worse.
I think that if we're not careful and we're not focused on creating healthy value with
this emerging and growing technology, then it could be bad.
(35:33):
It also could be amazing for our future.
If you focus on creating that value in a healthy way that helps solve real problems that real
people have, it's going to be great.
I think that word healthy is subjective, but it's also one that I don't know that we have
(35:55):
a really well-accepted definition for as a society.
And Canada especially, we're such a melting pot of different cultures and values.
I asked a VC recently about how what you invest in is essentially saying this is how technology
should be.
And he said, it's too big a question for me to even think about answering.
(36:20):
And I think it is a big question, but by participating, we're just like a market.
Put your money behind it, it's going to get sold to you.
Exactly.
Yep.
Thanks for listening.
I made this podcast because I want to be the person at the city gate to talk to every person
(36:40):
coming in and out, doing great things with AI and find out what and why, and then share
the learnings with everyone else.
It would mean a lot if you could share the episode with someone that you think would
like it.
And if you know someone who would be a great person for me to talk to, let me know.
Please reach out to me at Daniel Manary on LinkedIn or shoot an email to daniel@manary.haus,
(37:05):
which is Daniel at M-A-N-A-R-Y-dot-H-A-U-S.
Thanks for listening.