Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome to Artificial Insights, the podcast where you learn to build AI people need and
(00:05):
use by interviewing product leaders who have launched AI products.
I'm your host, Daniel Manary, and today I'm joined by Prerna Kaul.
Prerna has a 14-year track record of building many millions of dollars of AI and ML products
across public companies and startups in health tech, e-commerce, and enterprise, such as
(00:26):
Walmart, Alexa, and a little place you might never have heard of called Moderna.
In her latest role, she's building the end-to-end AI and data platform for a wellness startup
called Panasonic Well.
Thank you for being here, Prerna.
I appreciate you being here, and I'm excited even just to learn myself.
I find that I have a lot of learnings going back and watching the recordings that I've
(00:50):
done of interviewing people.
I'm learning new things the second time around.
And for me, I really like doing a podcast where I get to talk to people doing cool stuff
because it's inspiring and allows me to learn things that I would just never learn otherwise.
Yeah, I'm happy to be here, and thanks for having me.
That's great.
Yeah.
(01:10):
And the question, I think, that's on everybody's mind with a podcast like this is, Prerna, are
you an AI?
Sometimes I believe I am.
I certainly act in those ways, but no, I would say I am not.
No.
Okay.
I like that, though.
(01:30):
What do you see as that boundary between sometimes I'm an AI and sometimes I'm not?
Oh, interesting.
So I do think that, for me, when I think of artificial general intelligence, I think of
all the possibilities and ways in which we can quantify intelligence. Essentially, it's
similar to how one would think about the Turing test, where it's impossible
(01:55):
to tell between a human and an AI, impossible to distinguish between the two, between what
an artificial intelligence is versus what real intelligence is.
So there's ways in which we can look at intelligence.
One aspect of this is logical and objective mathematical reasoning.
(02:15):
And I do think that, at least in some of the use cases we have seen so far, gaming, for
example, is one big area, chess is another.
We are now starting to see even in more machine learning related use cases how reasoning,
the ability to reason is something we have seen models do repeatedly in a sort of a consistent
(02:38):
way.
So I do think there's one area where I find myself being this logical, planned, and reasoned
person, and in those ways I am like an AI.
But no, otherwise, I think for the most part, I'm fairly human.
Okay, for the most part.
Maybe we won't ask anybody who sees you really early in the morning.
(03:03):
Yeah.
I think a lot of nerds listening to this call might relate to me.
Absolutely.
And just for everybody's context, what do you do for your day job?
Just give us some highlights.
Yeah, absolutely.
So I'm a group product manager.
(03:23):
I work on the AI platform for a startup known as Panasonic Well.
We are building an AI assistant for family wellness, so think of end-to-end healthcare
application that leverages AI for personalized coaching and helps users maintain and build
habits around a healthy lifestyle.
(03:44):
So my work is to define the product strategy, vision, roadmap, and work very closely with
engineering and design on the execution to realize the minimum viable product of this
app.
So what I do on a daily basis is basically brainstorm with these teams, come up with
(04:04):
what we think are the right set of capabilities to test, and then go prototype and build it
so we can present it to users, get feedback, and try to scale it as a product.
Very cool.
And you haven't always worked in the wellness space.
I know you've got a couple of big names on your resume.
So how does that relate to what you're doing now?
(04:26):
Yeah.
So for my career, one optimization function has been being close to the technology.
So be it when I started off, I started off as an engineer working on big data platforms.
So basically started my career working on Hadoop, learning about different DBs in that
(04:47):
space, Cassandra, and using all of that technology to build out these trading platforms.
When I pivoted to work on product management, I also started at the platform space, building
our developer platforms at a company called Coins in Toronto, and then shifting to work
on building payments platforms at Walmart Labs and then Amazon.
(05:09):
So I think the optimization function for me consistently has been being very close to
the technology.
Obviously, I was always interested in machine learning and just learning how to use these
models for different use cases, and that's what attracted me to this field.
And I also had sort of a side interest, and it has always been an interest:
(05:30):
healthcare.
So I tried to combine my sort of love for technology with this new space, and that's
how I ended up here.
Wow, very cool.
I feel like I have a similar journey actually, because I started technical, I actually worked
with data platforms at a company that needed to process advertising data.
So I'm familiar with Hadoop and some technologies in that space.
(05:53):
And then I've moved myself to be more entrepreneurial because I really just enjoy the business side
of things and helping people get what they need and understanding the problem rather
than just diving into technology.
So I guess what's the biggest difference for you between being an engineer and doing what
you do now?
That's a great question.
And it's something that certainly is an acquired skill too, because as an engineer, I think
(06:18):
my default instinct is to solve things.
I tend to be a fixer-upper and I'm always in problem-solving mode.
But one of the things I've really learned from a PM standpoint is to take a step back
and analyze the space before trying to jump in and solve.
I actually do think and one thing I've realized is that great engineering leaders, as they
(06:41):
grow in their careers and become more senior, they also become more product oriented.
So they become product thinkers just by virtue of having solved big large scale problems
and understanding that rather than jump in and fix everything, we need to take a broader
view and evaluate what are the key themes and try to solve in a more reusable way.
(07:03):
So that platform and product thinking also comes naturally to engineering leaders as
they rise.
But that's one big shift that happened in my mindset as I pivoted to product management.
And you did that not by becoming an engineering leader per se, but then by shifting towards
(07:23):
the product side and what was that first jump for you?
Yeah, absolutely.
And that's interesting that you asked that.
So when I was at Citigroup, when I was sort of building out the big data trading
platforms and working on these equities and bond trading platforms, I actually got a lot
of exposure to the business.
(07:44):
So I would work directly with these financial traders across the board, like basically in
Europe and in New York and in other Asian countries as well.
And I got really interested in thinking through what is the business problem we are trying
to solve.
Even as an engineer, I got that exposure.
And I realized that the thing that I love the most is when I solve these problems, understanding
(08:10):
what users want and then going to build something that fixes it.
The technology is interesting, but if you can marry the two, that's what really drives
me and that's what something I'm really passionate about.
So you have a problem and you're like, oh, I can come up with a really interesting solution
to solve this.
And I actually go build it and launch it and then see it in the hands of users.
(08:33):
I mean, that's just the magic of building products.
That's what we all love.
And that was a big shift that happened.
I basically realized that, oh, I'm getting to work with the business.
What I really love is like marrying the two, being that person in the middle.
And turns out there is a job title for it and I can actually do that role.
(08:54):
So that's amazing.
So tell me a story about one of your favorite times where you got something into a user's
hands.
Yeah.
So I actually had a very interesting experience building out My Walmart at Walmart Labs.
I built out their first mobile app for the Canadian marketplace.
(09:16):
It was launched in 400 brick and mortar stores.
And what was interesting is it started off in a small town in Canada where there's a
huge aging population.
So these folks tend to have caregivers or somebody looking after them and they're living
in these senior nursing homes.
And they really love visiting Walmart for some reason.
I just found that super interesting because I think what it is, as I did walk throughs
(09:40):
and tried to work with them, I realized it's that journey of I get up in the morning, I
make my grocery list, I make my to-do list, I drive out of my house or somebody drives
me and I get to walk around the store.
I get to have that experience.
I'm walking, I'm doing something active and being out in the open.
And then I buy something.
(10:01):
I have an interaction and I leave.
When Walmart wanted to make a shift towards more scan and go technology, one missing piece
was it hadn't really solved for its end users, its end consumers.
So these are the sorts of people that would visit the store.
And I had the opportunity to essentially do walk throughs with my customers, build out
(10:23):
user prototypes and test and learn in the process, and then launch that first mobile
application that became My Walmart in eight next generation stores that Walmart was launching.
So I launched that and tailored that experience for such users, for busy moms, for millennials,
for different populations and got them to use the mobile app, including features like
(10:48):
being able to scan alcohol because at that time, LCBO was very restrictive about who
they would allow to sell alcohol.
And Walmart was given that first license, including ability to pay within the platform,
even for folks who are seniors who are not as well versed with the technology.
So thinking of accessibility issues there, thinking of what it would be like to weigh
(11:11):
and pay for fresh produce, all of these unique features and use cases and actually build
that up and launch it, scale it across those 400 brick and mortar stores.
So that was a very unique experience I had and definitely a very exciting project to
be a part of.
Was there a user, like an older person, that you took feedback from and changed the
(11:35):
app with?
Yeah, that's very interesting.
So it happened around the time of our, maybe a week before our launch, we were doing some
testing in store and a lady came up to myself and a colleague of mine and she was very upset
because we were building a mobile app that could potentially take on roles or jobs of
(11:59):
the associates on the floor.
And it was very pertinent to our conversation on AI as well, because I know it's a topic
that folks think about a lot.
There's certainly a fear, a lot of like fake news, but also a lot of fear associated with
AI.
So that happened to me where she felt that we are automating something that a human could
(12:21):
do and we're taking away somebody's role.
So I had to, that was the situation.
I had to mitigate that and help bridge the perception that, oh, these associates, we
are training them, we're up leveling them, giving them new opportunities.
They become consultants in the process.
The idea is to augment them, give them superpowers versus take away their roles.
(12:45):
And yeah, that was my first experience.
I still recall it very vividly, it was about seven years ago at this point.
Oh, wow.
And that conflict between automation, AI being a form of automation as well, and human
jobs, I think you're absolutely right that it worries a lot of people.
(13:05):
And just to be a little more abstract for a second, how do you see that conflict?
Yeah.
So I do think that, I'll put it this way, with any new technology that has come in,
and there's a lot of history behind it, whatever roles humans previously played, so they had
certain skills, they were trained on certain levels.
(13:27):
We've seen that when a new technology comes in that takes away the need for that skill,
there is a reaction to it, like a backlash or a reaction to it.
And there are two reasons for it, first, that humans feel like the demand for their skills
is going to reduce, which means that how will they feed themselves?
How will they survive and take care of themselves and their families?
(13:50):
And second, they feel that they will potentially not be able to upskill so easily because this
new technology will take on everything that they knew or have been trained to do for years.
On the first point, I do see that we are seeing in the short term, at least from an economic
standpoint, that folks who are skilled in certain areas might see a reduced demand for
(14:15):
their skills.
But I don't think from what we've observed, it's a long-term effect.
So I can give you an example.
We know that there are code generation models, there are LLMs for code generation that are
coming about.
I use them all the time.
Yeah, exactly.
We use Claude all the time.
We use some of the new LLMs coming in as well.
(14:36):
And to be honest, we've heard, even in our own individual organizations, is this going
to reduce the need for the number of engineers we need on ground to do the work?
Will that happen?
And in other ways, that's one perception.
The other is, will this improve our employee productivity by, let's say, 20% to 30%?
(14:58):
Both could happen.
Both would be true.
In the long term, we hope it's the second, where it improves our engineering productivity
and that it helps them upskill and scale.
But in the short term, we are seeing those side effects.
And I guess the only mitigation to that is remaining open, curious, and continuing to
upskill.
(15:19):
It's just the nature, and par for the course, of being in technology.
And I don't think AI is like a one-time event where we are seeing this happen.
It has been the case every couple of years.
We had to relearn new technologies, relearn how to do product in a certain way, or relearn
a new set of skills to ensure that we remain competitive and our skill set remains in demand.
(15:45):
I don't know if this answers the question, but that's how I sort of think about it.
Yeah, that makes sense.
Essentially, AI is not unique in its job displacement or our need to react to it because
it's a tool.
We can use it, we can get better with it.
And in my own job, like you mentioned, I use code generation models.
(16:05):
It does make me faster, and that's great.
So in my life, I'm more productive.
But then in general, yeah, it seems like it might make it harder to get an entry-level
job.
There might be fewer of them for a while.
So I like that approach.
Learn it, upskill, and tools go out of style all the time.
Exactly.
Yeah, I was thinking earlier about how important that is with product owners versus product managers.
(16:31):
That's another area that folks talk a lot about.
And previously, there was an agile product owner role, and that has quite swiftly changed
to product manager because there's a need for somebody really taking a more strategic
overview and understanding business priorities as well as customer priorities.
So yeah, we've seen it in every space, so it's definitely not unique.
(16:53):
And has that skill transition from product owner to product manager affected your
career?
Not so much.
I think most companies that I have worked with, they started with the need for a product
manager.
And agile was one mechanism to achieving it, but that wasn't the full role as it was defined.
(17:15):
But it's my understanding folks who have been in industry for 20-plus years have had that
experience.
Okay, I understand.
So going back to AI products, you've managed to do a lot of them, I think.
You've been a product manager in quite a few AI-powered products.
So would you tell us about one of them that excites you?
(17:38):
Yeah, absolutely.
So I'm going to tell you about a time where I launched the generative AI platform at a
company called Moderna.
So Moderna is-
Nobody's ever heard of that one before.
Nobody's heard of it.
Anybody who's been through the pandemic has never heard of them.
So it was interesting.
(17:59):
I think around the time I joined, Moderna was shifting from the one-shot COVID-19 vaccine
company to really thinking about platform capabilities on their end, more on the medical
side of things, on the pharmaceutical side of things, and thinking of how to scale their
business.
And the company came up with the goal of launching 15 vaccines over the next five years and quickly
(18:25):
realized that in order to scale that rapidly, they would not be able to do so with the current
pharmaceutical processes as planned.
So the pharma business is very complex and there are a lot of third-party intermediaries.
If I were to take an example, if we made no change in order to launch one trial of one
(18:51):
vaccine in the next five years, it would cost $2 billion.
And this company wanted to launch 15 vaccines in the next five years and each vaccine takes
up around three clinical trials.
So we were basically talking about $90 billion.
And that's just a lot of cash that we did not have.
(19:11):
So in order to scale, I think we, as a company, decided that a strategic investment in AI
machine learning and also better software tooling is necessary.
So thinking of everything, like everything built from the ground up, from cloud technology
to the use of machine learning for certain drug discovery processes, to the use of AI
(19:34):
for productivity reasons and operational efficiency, to thinking about how to make our pharma
operations a bit more efficient as well.
So we had to rethink a lot of different processes.
So when I came in, I defined the product strategy for how we could help the company scale along
the lines of that goal of 15 vaccines in the next five years by defining a platform that
(20:00):
would help us unlock productivity for employees.
So all the application layer work was based off of OpenAI.
We had a partnership with OpenAI.
And my team and I built the generative AI platform to support that.
So this was more on the API layer and everything from what document pipeline do we use?
(20:23):
How do we support unstructured data, vector DBs?
What are the different data sources?
What do we need to do from a fine tuning standpoint is what my team and I built.
The outcome of which was we were able to support 750 GPTs in a matter of five to six months,
taking us around 30% of the way towards that 15-5 goal.
(20:44):
Wow.
That is a lot of GPTs.
That's a lot of GPTs.
But it was interesting because I think one big realization we had is it's not just build
it and they will come.
We have to really focus on that learning and education aspect as well.
(21:06):
So I'm interested in two aspects of this especially, which are, like, how did that even come to be
a project, who decided that it had to happen, but then also how did you end up getting that
adoption?
So why don't you start with adoption?
To me as a developer sounds great, but also how did that work?
Yeah, that's a great question.
(21:27):
So from my end, one area I was very deeply invested in was the customer discovery process.
So I'll give you two examples to highlight that process.
So number one, there was a team within my organization that wanted to generate the right
(21:50):
prescription information for drugs that they were administering.
So thinking of like they have all these clinical trials, they are testing 50, 100 different
variants of certain vaccines and all of these vaccines need to have drug information.
There is a cost associated with it.
That cost runs in the millions of dollars.
(22:12):
We were thinking what if we could use an LLM to generate that information?
What would that look like?
And is it even possible?
How do we ensure regulatory compliance around it?
Some very important questions, especially thinking through hallucination
and accuracy: how do we minimize that?
(22:34):
So we had to really think through what are the types of problems we are trying to solve.
I conducted nearly 120 plus customer discovery interviews to like come up with the right
themes and then make sure the platform supports that.
That was one very key sort of project I led and made sure we were working backwards from
the customer to come up with like the right set of platform components that we built.
(22:59):
That, I think, was a key element in getting us adoption, because we were chatting with them
and asking them what they want and then building according to that, versus deciding more in
a black box what to build and then just launching it.
So in this case you actually spoke to someone. Was that an internal or external customer?
(23:20):
In this case it was internal, around 120 plus teams within the organization like clustered
teams within the org.
And that was all to do with prescriptions?
Different use cases.
So it spanned everything from human resources to finance to operations to these prescription
(23:41):
information to answering questions from health authorities.
So doing that in a structured way.
So it was encompassing a set of different functions.
Okay.
So then focusing on that prescriptions use case because I'm very interested in like how
did you make sure that it was accurate, didn't hallucinate and then serve that need and yeah
(24:04):
walk us through that process.
Yeah for sure.
I think a lot of what we were able to do had to do with our scope.
So taking a single prescription, taking a single set of documents versus translating to multiple
languages, and then just ensuring we had defined RAG algorithms for that and had the right
document pipeline in place to make that happen.
(24:26):
So how it started, maybe I take the example of this and describe it.
I was talking with a colleague and she mentioned that it cost them in the millions of dollars
to generate this information.
There's a long time span.
We contract this information out to external third parties.
They cost a ton of money as well.
(24:46):
And we just want to make sure it's easier, right?
Like we don't know what's going to happen in the future.
We have these big goals.
We just want to make sure the process is easier.
So we decided to take that and get examples of exactly what information would be generated
and what the inputs to that information would be.
So understanding the inputs and outputs expected.
(25:09):
We took those two things, took that back to the engineering team and essentially started
small scale testing.
So we stitched together a quick prototype with the open source technology.
We did have access to everything from AWS OpenSearch to a Weaviate vector DB to taking some
existing connectors to our databases, putting all of that together and then connecting it
(25:33):
with an LLM and just seeing what the outputs would be.
Another thing that's interesting with this technology in particular is how much prototyping
really matters and really helps.
We started really small with maybe like five inputs through different data sources and
a single output document, but then slowly and iteratively scaled that.
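[For readers who want a concrete picture: below is a minimal sketch of the kind of small-scale retrieval prototype described here. It is illustrative only, not the actual implementation; the embedding function, chunk size, document names, and stubbed LLM call are hypothetical stand-ins, while the real prototype sat on AWS OpenSearch, a vector DB, existing data connectors, and an LLM API.]

```python
# Minimal sketch of a small retrieval-augmented prototype: chunk a few source
# documents, index them, retrieve the most relevant chunks for a question, and
# assemble a grounded prompt. Everything here is a stand-in so it runs alone.
from dataclasses import dataclass
import math

@dataclass
class Chunk:
    doc_id: str
    text: str
    vector: list[float]

def embed(text: str) -> list[float]:
    # Stand-in for a real embedding model (e.g. an OpenAI embeddings call).
    # A normalized bag-of-characters vector is enough to make retrieval runnable.
    vec = [0.0] * 64
    for ch in text.lower():
        vec[ord(ch) % 64] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

def chunk_document(doc_id: str, text: str, size: int = 300) -> list[Chunk]:
    # The "chunking documents in a certain way" step: fixed-size chunks here,
    # though a real pipeline would likely split on domain-aware boundaries.
    return [Chunk(doc_id, text[i:i + size], embed(text[i:i + size]))
            for i in range(0, len(text), size)]

def retrieve(index: list[Chunk], query: str, k: int = 3) -> list[Chunk]:
    q = embed(query)
    return sorted(index, key=lambda c: cosine(q, c.vector), reverse=True)[:k]

def answer(index: list[Chunk], question: str) -> str:
    context = "\n".join(c.text for c in retrieve(index, question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    # In a real prototype this prompt would go to an LLM (OpenAI, Claude, etc.);
    # here we return the assembled prompt so the control flow is visible.
    return prompt

# Tiny end-to-end run: a few hypothetical source documents in, one grounded prompt out.
index: list[Chunk] = []
for doc_id, text in {"trial_protocol": "Dosage is 50 micrograms administered intramuscularly...",
                     "safety_summary": "Observed adverse events were mild and transient..."}.items():
    index.extend(chunk_document(doc_id, text))
print(answer(index, "What dosage was administered?"))
```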
(25:56):
And I really want to emphasize the importance of prototyping with GenAI technologies.
It's so important.
Yes.
And in my own testing, it's amazing what you can do to get to 80%, but then that last 20%
is extremely difficult.
And if you can't nail it, then it's probably not worth going forward.
(26:18):
Did your team actually build something custom for that prescriptions use case?
Like a custom GPT, I mean.
Yes, yes.
We built a custom GPT for it, connecting it with our internal data sources and a custom
built RAG algorithm as well.
So whatever search capabilities are already available in GPT, we use that, but we had
(26:41):
our own RAG algorithms to support it and make it a bit more accurate.
We also implemented some work around chunking documents in a certain way.
So there was some additional API magic happening in the background to make sure the accuracy
improved.
And then we measured that and iterated over it.
And of course we did a lot of on-ground testing before doing a broader launch.
(27:06):
So we did a lot of internal testing to make sure that information was correct before it
could be passed on to anybody.
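[Purely illustrative again: one very small sketch of what "measure it and iterate" can look like in practice, scoring generated text against approved reference text before any broader rollout. The field names and texts below are invented, and the token-overlap score is a crude stand-in for the richer human and regulatory review described here.]

```python
# Score generated drug-information fields against reference (approved) text.
# Token-level F1 is a simple proxy metric; a real evaluation would add expert
# review and stricter factual checks before anything is passed on.
def token_f1(generated: str, reference: str) -> float:
    gen, ref = generated.lower().split(), reference.lower().split()
    common = sum(min(gen.count(t), ref.count(t)) for t in set(gen))
    if not common:
        return 0.0
    precision = common / len(gen)
    recall = common / len(ref)
    return 2 * precision * recall / (precision + recall)

# Hypothetical eval set: (field name, generated text, approved reference text).
eval_set = [
    ("dosage", "Administer 50 micrograms intramuscularly.",
     "Administer 50 micrograms intramuscularly."),
    ("storage", "Store frozen at -20 C.",
     "Store at -25 C to -15 C and protect from light."),
]

for field, generated, reference in eval_set:
    print(f"{field}: token-F1 = {token_f1(generated, reference):.2f}")
```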
You said that you started with the inputs and outputs.
Does that mean that you were working off of a well-defined process and then the people
doing it already knew everything that they had to do to look at it?
Or did you have to discover some of that too?
(27:28):
Oh, interesting.
So what was unique with this use case was that the folks we were speaking to, they came
from a non-technical space.
So they were from the pharma industry and very deep in the pharmaceutical language and
technology.
And myself as a product manager in this role had to basically translate between somebody
(27:54):
with a pharma background and somebody with an engineering, deep technical background,
which is an interesting role to play because I also did not have a background in pharma
before I joined the company.
So I think a lot of what I did was the very same, like analytically evaluating how do
they currently run their process and fully understanding the steps they take to make
(28:16):
sure that I understood all the inputs that go into the decision making.
So for example, the team gave me examples of how a clinical trial works and what are the
different steps in a clinical trial process.
Where do all of the documents go?
Where do we store this information?
Where does it live today?
How accessible is it for search?
(28:38):
Understanding all of the permissions settings and different associations with that particular
document search DB that we have.
There's a system called Veeva.
Yeah, I barely remember it now, but it was a very complex and convoluted
system.
It was difficult to manage.
So just asking and drilling down into that process to understand where do all of the
data sources live.
(29:00):
And understanding examples of what do they send to contractors today?
What does their RFP look like?
What do the contractors return?
Who is involved in the rewriting and editing process for what is returned?
So taking all of those steps and just breaking that problem down completely to understand
what tech we can actually build to automate that or make that process more streamlined.
(29:25):
Wow.
So you did a lot of work even just to see what's out there now.
What do you have access to?
How do you get it?
Did you encounter anything unexpected in that process that just really changed the way you
thought about it?
Yeah.
I do think that, and I'll generalize this.
(29:47):
I won't keep it specific to pharma.
We have so much opportunity with AI and machine learning, especially with the use of generative
AI in industries which have been paper-based and manual in the past, including healthcare,
including logistics, manufacturing, supply chain related.
(30:09):
I'm sure there are others that don't come to the top of mind right now, but one big
realization I had is there is certainly a lot of waste that happens because of the number
of manual steps and the number of intermediaries in the process and how current operations
are driven.
If one were to look at it coming from the technology lens, you would immediately see
(30:31):
opportunities to improve things.
But it's just that the tech industry has been very much focused on different domains, different
sorts of enterprises.
Now there's more of a merge happening and I hope more of that continues because I really
see a big opportunity and also the opportunity to make an impact because we're talking about
vaccines, we're talking about health and people's lives and it was just very meaningful
(30:54):
to be part of that space.
Yeah, that's cool.
Did you end up saving millions of dollars, I guess, for this process?
We did, not for this particular project, but overall we did.
We did end up saving a couple of million dollars for the company and that of course continues
as we continue to adopt the technology, that savings continues year over year, which is
(31:17):
what matters.
At the end of the day, if it's a one-time thing, great, people forget about it, but
the fact is that the more you continue to adopt and the more tech that gets built, what's
interesting is that what we built was additive to whatever OpenAI or Claude or other
LLM vendors would be launching in the future, because it will only add to the company's
(31:41):
productivity and whatever has been launched already versus take away from it.
I think that makes sense.
You're not changing the foundation, you're building on it and as the foundation gets
better, the whole stack gets better.
Exactly.
And going back to how this project started, who came up with it?
(32:04):
Did they think it'd be done in three months?
What birthed the project?
It's a great question.
So the way the project started is last year in 2023 when OpenAI first announced the GPT
models, the company realized there was an opportunity to leverage them.
And so they built a forked version, the engineers and marketing built a forked version of OpenAI's
(32:31):
ChatGPT, and it was known as M-Chat. It was essentially a chat UI interface,
some open source repo on GitHub stitched together with the OpenAI API, and we were using that
internally.
Since then we created a contract with ChatGPT for private and secure use of the data
(32:51):
that would be stored within our servers and making sure data at rest and in transit is
encrypted and so on and so forth.
So we had that, but quickly realized that obviously this is not a fully scaled
product solution.
We need to build new things and that's where the platform came in.
We realized that we're going to have to build some like our own internal capabilities and
(33:14):
not just rely on this one set of the UI that we've built.
That's where we started investing more in the platform.
So when I was hired, M-Chat already existed.
We were thinking about what to build next, but we realized quickly that an investment
in the platform is necessary.
And then I was tasked with breaking down the problems further, running those interviews,
(33:36):
making sure there were key themes that based on which we could build out our product strategy
and then bringing that back into the engineering teams to deliver a different set of capabilities.
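[To make "a chat UI stitched together with the OpenAI API" concrete, here is a bare-bones sketch of that kind of internal wrapper using the current OpenAI Python SDK. The model name and system prompt are placeholders, and the real M-Chat also handled authentication, logging, and the enterprise data controls mentioned above.]

```python
# Bare-bones sketch of an internal "M-Chat"-style wrapper: a terminal chat loop
# over the OpenAI API. Placeholder model name and system prompt only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You are a helpful internal assistant."}]

while True:
    user_input = input("you> ")
    if user_input.strip().lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print("assistant>", reply)
```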
So then it was like an AI-first solution, where sometimes I talk to people and they say, yeah,
AI happened to be the right tool for the job, or it didn't and we didn't use it.
(33:59):
But in this case, it was no, we know there's a use case for AI.
Let's figure out how to get it to our people.
Exactly, and I think we piggybacked off of the natural curiosity with ChatGPT.
We started there, like the adoption, the first adoption came as, I want to use ChatGPT for
my work.
(34:20):
I would love to use this.
It started there, but then folks quickly realized, I want to do a lot more with it.
And it was interesting because folks were telling us, I know I can do a lot more with
it.
I just don't know how.
So how about if I do this?
And then we had to break that down and say, oh, actually you're speaking about automation.
(34:41):
You're not speaking about AI.
Or this is quantitative in nature.
There isn't a language component and it should probably be a machine learning use case.
And here's how we can help you.
Or this is generative AI related.
So I had to do the work to break that down and explain the key themes and just make sure
we are working on the right things.
Ah, neat.
(35:01):
So then I guess a big part of your role is that education aspect of, okay, this is what
you really mean when you ask for this feature.
And sometimes AI, like ChatGPT, happens to be the right solution.
Yeah, exactly.
It's not the right solution in every case.
In some cases it happens to be, we can sort of share how that would work out.
(35:25):
But I think we were lucky, we were fortunate to leverage the natural curiosity folks had
and just redirect our efforts in the right way as well.
So then how did that mesh with the corporate goals, that natural upswell of curiosity
plus the "we need to do budget savings"?
(35:47):
It was very interesting because I think that in the beginning it started off as natural
curiosity that folks had. I'll give you a simple use case.
Folks wanted to use this to write emails or write their weekly business review documents.
And it was great.
It was a really natural and important use case.
But in order for us to get buy in for the bigger goals, so what would more directly
(36:13):
impact the business, let's say the drug discovery process or talking about this prescription
information which we spoke about or answering health authority questions, which was another
big use case for us.
There was a lot of legwork to be done there because additional technology needed to be
built and it wasn't so clear to everyone how we would realize the vision.
(36:35):
So it was a merge between product and engineering and it was with folks who quickly understood
they could use this to write emails or use this to write weekly business reviews.
They were already starting to do that work, but in order to shift and redirect efforts
towards the company goals, that's where we had to dive in and say, where's the biggest
(36:56):
opportunity, define that, have clarity on what do we work on first and then come up
with the additional components to build it.
Yeah, and I think one of the things that I'm very interested in is how do you come up with
those new use cases and then say they come from a less technical stakeholder, how do
(37:17):
you help them negotiate what is the right use case?
Interesting.
In our case, we spent a lot of time and energy in quantifying the opportunity.
So I can give you an example, a colleague of mine, he would run training and development
reviews with executives on our team and he would generate spreadsheets full of ideas
(37:45):
and he would run these every two weeks.
Every three weeks there'd be like a spreadsheet full of like thousands of ideas and he'd say,
hey Prerna, can you please take a look at this and make sure this gets built in,
and I'd say, thank you, I can do that.
So it started from that aspect where there were a ton of ideas coming in.
(38:07):
I took a step back and I said, well, how should I start to think about the business opportunity
here?
So I broke down that work and I said, let me try and quantify which organizations, which
work streams and which ideas would most closely benefit the business in terms of achieving
(38:32):
the 15-5 goal.
So I worked with data analysts on my team to break down our organizational data and
quantify what are the organizations, what is their current head count, what are the
productivity needs, the operational needs, and then what benefit can AI have?
So those were literally columns in a spreadsheet and started to quantify that a bit.
(38:57):
Those thousands of ideas, I clustered together using some basic machine learning
and a clustering algorithm, and from a thousand ideas, narrowed those down
to maybe 70 or 100 ideas, and then overlapped that or merged that with the business data
to say what are the 15 things we can do or focus on, and then created a roadmap to actually
(39:26):
see how we would realize it from an engineering standpoint.
So it was a lot of taking a step back, thinking from first principles and asking, do all
of these thousand ideas really matter or should we start to merge them together, are there
commonalities, themes, and work on the right things?
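[A rough sketch of that clustering step, assuming scikit-learn. The ideas listed are invented examples, and TF-IDF plus k-means is just one reasonable choice; the conversation doesn't specify which algorithm or features were actually used internally.]

```python
# Cluster free-text ideas into a smaller set of themes that can then be merged
# with business data (headcount, productivity needs) for prioritization.
from collections import defaultdict
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

ideas = [
    "Draft weekly business review from meeting notes",      # hypothetical examples
    "Summarize clinical trial protocols for regulators",
    "Answer health authority questions from prior filings",
    "Generate first-pass prescription information drafts",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(ideas)

# In the real exercise ~1000 raw ideas were reduced to roughly 70-100 themes;
# with four toy ideas we use 2 clusters just to show the mechanics.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

themes = defaultdict(list)
for idea, label in zip(ideas, labels):
    themes[label].append(idea)
for label, members in sorted(themes.items()):
    print(f"theme {label}: {members}")
```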
That is a lot of data.
(39:50):
It's a lot of, yeah.
I can understand why data scientists are so needed.
Absolutely.
Wow.
That's amazing.
To switch gears a little bit, what would you wish that you had known when you started
this side of your career, the product management side of it?
(40:11):
Yeah.
It's a great question.
I think one big realization which also we spoke about in this example was with B2C products,
when you're coming from a B2C space, the metrics that go into let's say engagement or retention
or user adoption, acquisition are far more straightforward to establish because you're
(40:36):
building some software tool and then you're seeing how the metrics get impacted and
it all ties up.
Establishing success is more obvious.
When one works on platform and especially working on machine learning where there are
a lot of different variables, it's not so easy to establish success.
Really having a good understanding of those business-facing metrics and having a way to
(40:58):
attribute the contribution of ML is one critical piece that I learned.
It took me some time to learn how to do that, but it was very important.
It was a huge unlock.
I can give you a separate example apart from Moderna, from my time working at
Amazon AGI, where I worked on the ML-in-the-loop process for training data.
(41:19):
We had these ML-in-the-loop workflows.
I created some ML models to improve quality and speed of generating training data.
When you're launching LLMs, it's so hard to attribute what training data created what
impact.
It's hard to say that these five fields that you generated for me or these five rows created
(41:40):
a 10% improvement in precision.
We had to figure out how to break down the process, how to think about our metrics in
a more structured way so that we could start to attribute that impact.
There are two reasons why this is needed.
One of course, as you're working in a big organization, it's important for the business
to know what you're contributing to it for everybody involved in the team.
(42:03):
But second, if one doesn't have that line of thinking, when things don't go as planned
or per expectation, we don't know what went wrong and then we're floundering.
It was an important unlock for me in previous roles and this one as well.
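[One concrete, simplified version of that attribution idea: an ablation that measures a model's precision with and without a new slice of training data. Everything here is synthetic and scikit-learn based; attributing impact for LLM training data is much harder, as noted above, but the bookkeeping pattern is the same.]

```python
# Ablation-style attribution: train on the base data, then train again with an
# added data slice, and report the precision difference on a held-out test set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an existing dataset plus a newly generated data slice.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_base, X_new, y_base, y_new = train_test_split(X, y, test_size=0.25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X_base, y_base, test_size=0.3, random_state=0)

def precision_with(extra_X=None, extra_y=None) -> float:
    """Train on the base data, optionally augmented with the new slice."""
    Xt, yt = X_train, y_train
    if extra_X is not None:
        Xt = np.vstack([X_train, extra_X])
        yt = np.concatenate([y_train, extra_y])
    model = LogisticRegression(max_iter=1000).fit(Xt, yt)
    return precision_score(y_test, model.predict(X_test))

baseline = precision_with()
augmented = precision_with(X_new, y_new)
print(f"precision without the new slice: {baseline:.3f}")
print(f"precision with the new slice:    {augmented:.3f}")
print(f"attributed lift:                 {augmented - baseline:+.3f}")
```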
Figuring out how the changes that you're making are impacting not just the performance but
(42:26):
also the business.
Exactly.
Folks talk about observability of the metrics in machine learning.
I've heard this word a lot but really, truly, it's like, can you attribute it to an actual
user-facing metric or not?
In my mind, that's the gist of it.
Having all of these cool dashboards and everything is helpful, but that's the core of what I felt
(42:52):
was needed.
And that goes back to putting something in a user's hands and seeing how they like it.
Exactly.
Exactly.
I just wanted to give you a minute at the end to share with us anything you like to
say about what you're currently working on or whatever else you'd like to share.
Yeah, absolutely.
(43:13):
I will say a bit about, of course, not sharing anything confidential.
The current role that I'm in is a lot about uncovering new capabilities beyond what we've
seen around agentic workflows.
How can you use machine learning and other more nascent fields such as even emotion AI
(43:36):
and incorporate agentic workflows and merge that together to come up with an interesting
product?
So we are working in the family wellness space.
A lot of what we're trying to do is build habit-forming technology but in a way which
is helpful to users as it's additive versus something that distracts them.
(43:57):
Dark design.
Yeah, exactly.
So not having dark patterns around use of habit-forming apps and how do you ensure that
it's built in a way that's helpful to customers.
So we are using everything I've spoken about so far to build products for family wellness.
And it's a very cool and interesting space to be.
We are still discovering and learning what our users want.
(44:19):
And I'm very excited about what comes.
Very cool.
And how do we keep up to date with the releases that your company is going to have in the
future?
We will have press releases and announcements and there will certainly be a lot of public
information released as the time comes and I'm excited to keep everyone in the loop when
(44:41):
that happens.
And that's Panasonic Well.
It is.
It is.
Okay.
Well, thank you very much, Prerna.
I really appreciate you being here and I am going to listen to this again and learn more
from what you said.
So I really appreciate it.
Thank you so much.
And it was wonderful speaking to you and everyone on the call.
Thanks again.
(45:02):
Thanks for listening.
I made this podcast because I want to be the person at the city gate to talk to every person
coming in and out doing great things with AI and find out what and why and then share
the learnings with everyone else.
It would mean a lot if you could share the episode with someone that you think would
like it and if you know someone who would be a great person for me to talk to, let me
(45:26):
know.
Please reach out to me at Daniel Manary on LinkedIn or shoot an email to daniel@Manary.haus,
which is Daniel at M-A-N-A-R-Y dot H-A-U-S.
Thanks for listening.