
January 1, 2024 28 mins

FULL SHOW NOTES
https://podcast.nz365guy.com/513 

We are thrilled to have on board Ashish Bhatia, a principal product manager at Microsoft, who is on a mission to democratize AI solutions. Through a fascinating exchange, Ashish sheds light on his journey into the realm of AI and his focus on GPT-based capabilities within Power Automate. He articulates his vision for AI Builder, a tool that empowers low-code citizen makers to build intelligent applications, and how it has evolved over time.

The conversation then delves into the intriguing world of AI, exploring how context, in the form of metadata and location information, can enhance precision and accuracy. We introduce the concept of RAG (retrieval-augmented generation), a technique for retrieving the right information and prompting the LLM to generate precise answers. Prompt engineering, which helps reduce hallucination in AI models, also takes center stage in our discussion.

But how does AI impact our careers and learning? We thrash out this question in detail, discussing how AI has become a critical part of many jobs, enhancing performance. We examine different ways of learning about AI, from using it as a writing assistant to critiquing work. And how can we overlook AI safety? It's a hot topic that we analyze in-depth, looking at the ethical implications of AI and its potential for both positive and negative outcomes. Join us in this captivating exploration of the AI landscape and how it's shaping our lives and careers.

Microsoft 365 Copilot Adoption is a Microsoft Press book for leaders and consultants. It shows how to identify high-value use cases, set guardrails, enable champions, and measure impact, so Copilot sticks. Practical frameworks, checklists, and metrics you can use this month. Get the book: https://bit.ly/CopilotAdoption

Support the show

If you want to get in touch with me, you can message me here on LinkedIn.

Thanks for listening 🚀 - Mark Smith


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Mark Smith (00:02):
Welcome to the Power 365 show, where we interview staff at Microsoft across the Power Platform and Dynamics 365 technology stack. I hope you'll find this podcast educational and that it inspires you to do more with this great technology. Now let's get on with the show. Today's guest is from Bedford, Massachusetts in the United

(00:23):
States. He works at Microsoft as a principal product manager. His mission is to democratize no-code AI solutions, making intelligent technology accessible to everyone. He has successfully led multiple virtual teams and established strategic partnerships to drive innovation and growth. You can find links in the show notes to his bio and social media,

(00:45):
by the way. It's worth monitoring, particularly his social media on LinkedIn. He talks a lot about AI. Welcome to the show, Ashish.

Ashish Bhatia (00:52):
Thank you, Mark, good to be here.

Mark Smith (00:54):
Good to have you on the show. I'm excited that you're here, because you're one of these people that I see on LinkedIn constantly putting out great content around AI. I think it's so, so important, because if you look at the Gartner hype cycle we've gone through around AI, I feel

(01:17):
we're just over the crest now, and this is when the real work gets done for the long tail of what's going to happen in this AI space in the years ahead. So I'm excited about having you on the show. I always like to get to know my guests, to start with a bit of food, family and fun. What do you do when you're not doing your tech role? What do those things mean to you?

Ashish Bhatia (01:39):
When I'm not doing my tech role, I spend a lot of time with family. I'm an outdoor person, so we do our sports a lot. I'm an avid cook at home, so whenever I get the opportunity I just cook something, and it's super relaxing for me. Put some music on, get a good meal on the table.

Mark Smith (01:58):
Nice. Tell me about your journey into AI, artificial intelligence.

Ashish Bhatia (02:03):
I started at Microsoft about nine years ago as a product manager, and I'd spent a bunch of time doing dev work, project management and, for the most part, product management in my prior roles. But coming to Microsoft as a product manager, I was part of an AI team. This was pro dev, pro data scientists, PhD data scientists.

(02:28):
That was my first introduction to AI. It was super fascinating. Everybody was just super talented. For a long, long period of time I had this imposter syndrome myself: what am I doing here, am I in the right place? I tried to teach myself AI for some time, tried to do online

(02:50):
courses and things like that, but then I hit pause, and it was kind of a self-reflection moment for me: am I trying to be a data scientist, or am I trying to understand AI to fit my role? I thought that the latter was the better position for me to be in, where I understand AI to the extent that I know where to

(03:14):
apply it, when to apply it, and what use cases it solves, as a product manager. That is super important for me to get a good grasp on, rather than knowing how to build a model or train a model myself. That was not important, because my team was established to do that, they were trained to do that. My job was something else. So that was my journey, that was my

(03:38):
learning curve, I would say.

Mark Smith (03:41):
Where do you sit currently inside Microsoft? What team, what org are you in? What's your current focus?

Ashish Bhatia (03:48):
I'm a product manager for a tool called AI Builder that sits in Power Platform. Power Platform is a low-code platform within Microsoft. We serve several Power Platform tools, Power Apps, Power Automate and others in the Power Platform ecosystem. We also serve a bunch of internal Dynamics teams, who are

(04:12):
again based on the same horizontal capabilities. As for my own work, I focus a lot on GPT-based capabilities within Power Automate, so we're trying to bring these large language models to our makers. Citizen developers, makers, you can use those interchangeably, and the goal is to simplify that experience.

(04:34):
In itself, working with a large language model is simple. You're interacting in a natural form factor, you're using text to work with these models. But removing all the other clutter and making it use case and scenario oriented is the focus. That's what we work on.

Mark Smith (04:52):
At what point... has anything around GPT in the context of AI Builder GA'd in that space yet, or are we still in the... I was in Atlanta when AI Builder was first announced. Was that like 2018?

Ashish Bhatia (05:12):
Yeah, it was before my time.
Oh, it was before your time in the team? Okay.
Yeah, for sure.

Mark Smith (05:17):
I remember it coming out, and you know, there were, I think, four models in it at the time: image recognition; there was text; what I call I/O, on/off scenarios; and I forget what the other two were. It really was that point of bringing AI to the masses.

(05:38):
But it was the AI that we knew about pre the world changing at around the start of this year, right, pre generative AI and large language models. So it was machine learning based and, of course, all based on Azure Cognitive Services, things like that. Where are we at, from the pre-AI Builder of that time

(05:59):
to the use of large language models as part of AI Builder?

Ashish Bhatia (06:04):
I would say our ethos is still the same. Our goal is still to enable low-code citizen makers to build intelligent applications with foundational and state-of-the-art AI technology. I mean, I always say we are, ourselves, not an AI team. We are facilitating whatever AI exists within Microsoft and

(06:25):
bringing it to this kind of platform and this audience in the most simple form, right, and bringing it closer to their use case. And what I mean by that is, if you imagine the Microsoft AI stack as a layer cake, the bottom layer is your platform where data scientists can build their models, and this is your Azure

(06:46):
Machine Learning environments and workspaces where pro data scientists work. If you go on top, the middle layer is where all the Cognitive Services sit, right. A lot of them are based on the same technology that pro data scientists would use with Azure Machine Learning and such, but there is a lot of ready-to-use stuff there.

(07:07):
You can imagine Custom Vision or Computer Vision, speech APIs, form processing capabilities. A lot of those are ready to use as well, but that is also a layer where, just working with APIs, pro developers are going to be using them. And our goal is to take a lot of that goodness and transition it to this low-code space and make it very

(07:28):
scenario oriented. So you'll see capabilities like invoice processing, capabilities like identity document parsing, or just document processing in itself, right, with invoice and receipt understanding and things like that. So they're scenario focused, use case focused, but again running on

(07:49):
the same state-of-the-art models. With GPT a little bit of that is changing, because GPT is kind of a mixed bag. It is a pre-trained model, but you can set its goals, you can give it your own instructions and make it work for you. So even though you can't change the underlying model, you can change its behavior by instructing it, by prompting

(08:09):
it, right. So that is something that is super powerful for makers, because again, this is a no-code interaction. You could just tell the model what you wanted it to do, and you could define the outputs. And it can act as a net new model. So, for example, we don't have a summarization model in AI Builder. You could use GPT to act as a summarization model, right? You can say, I want you to summarize this email or this

(08:32):
incoming message from my customer and give me talking points about it, right? All of that, yes. So in that sense it's super powerful and customizable as well. Our thinking going forward is to bring it even closer to scenarios and use cases and make it super actionable, because again, it's still a general,

(08:53):
general purpose model. You can ask it a bunch of things, but the more targeted, the closer to the end scenario we take it, the better the uptake from developers, or citizen makers in this case.
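
As a rough sketch of the idea Ashish describes here, turning a general-purpose model into a "summarization model" purely through an instruction, the pattern looks something like the following. The `llm` callable and the prompt wording are illustrative stand-ins only, not AI Builder's actual API:

```python
# Illustrative sketch: an instruction turns a general-purpose model into a
# purpose-built summarizer. `llm` stands in for whatever chat/completion
# API a flow would call; it is not a real AI Builder interface.

SUMMARIZE_INSTRUCTION = (
    "Summarize the following customer message in two sentences, "
    "then list up to three talking points as bullets."
)

def summarize(message: str, llm) -> str:
    # The maker never trains anything; the behavior comes from the prompt alone.
    return llm(f"{SUMMARIZE_INSTRUCTION}\n\nMessage:\n{message}")

# An echoing stub shows the assembled prompt without calling a real model:
preview = summarize("The shipment arrived damaged and the client is upset.", lambda p: p)
```

Swapping the instruction text is all it takes to make the same model act as a translator, a classifier, or a reply drafter, which is the "net new model" point made above.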

Mark Smith (09:05):
How do I bring my own data to the mix, right? So the large language model is in place, and let's say, to keep the scenario simple, that I have a large knowledge base around all the products my organization has. How can I, with AI Builder, then provide that data set to be

(09:27):
consumed, and then, whether it be, for example, summarization, or I want to Q&A it? There may be a range of examples. I might be out on a job and I need to fix something. I need the precise information on this fix from the data that we have back at the office, so to speak.

(09:49):
How do I bring my own data to that mix?

Ashish Bhatia (09:53):
So some of that is possible through the AI Builder capability even today, but it's kind of limited, right? So I'll explain both of them. First of all, the GPT model is your base model, and you do two things: you give it an instruction, and you give it a context, right? Any prompt will have a combination of these two, unless

(10:14):
you're asking it to do some creative task, generate a poem or generate song lyrics or whatnot, right? Otherwise, most of the time, you're giving it an instruction, summarize this email, or generate a response to this message, right? So you're giving it an instruction and you're giving it a context. A lot of times, that context is your own data. It could be a set of emails or a set of rows coming from a data

(10:39):
source, and you say, hey, using this, create me a response for this incoming email, right? Something like that. So that is how you can use it. Most of the power tools already sit on top of a data layer, which is called Dataverse, so your data already lives there. When you're incorporating this model in your flows or your apps, you can query that data to match the new incoming data and

(11:01):
pick the right rows, and then bring it into the context of the GPT model, and then prompt the model saying, hey, here's the new data, here's my instruction, here's the existing data. I want you to use all of that information and give me this unique response or unique summary. We're going to make some of that super simple, so that you wouldn't have to get involved in the query, and we'll take care of

(11:22):
some of that workload for you. But again, those are things that will come a little bit later.
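
The instruction-plus-context pattern described above can be sketched roughly as follows. The row shapes, the `query_rows` helper and the prompt text are hypothetical illustrations; Dataverse itself is queried through its own connectors, not this code:

```python
# Rough sketch of "instruction + context": query rows from a data source,
# then pack them into the prompt alongside the instruction. All names here
# are made up for illustration; none are Dataverse or AI Builder APIs.

def query_rows(rows: list[dict], **filters) -> list[dict]:
    # Stand-in for a data-source query that picks only the matching rows.
    return [r for r in rows if all(r.get(k) == v for k, v in filters.items())]

def build_prompt(instruction: str, context_rows: list[dict]) -> str:
    # Combine the instruction with the retrieved rows into one prompt.
    context = "\n".join(str(r) for r in context_rows)
    return (f"Instruction: {instruction}\n"
            f"Context:\n{context}\n"
            f"Use only the context above in your answer.")

emails = [
    {"id": 1, "customer": "Contoso", "body": "Where is order #1042?"},
    {"id": 2, "customer": "Fabrikam", "body": "Please cancel my renewal."},
]
prompt = build_prompt("Draft a reply to this customer.",
                      query_rows(emails, customer="Contoso"))
```

The key design point from the conversation is that the query runs first, so only the relevant rows ever reach the model's context window.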

Mark Smith (11:27):
So you talked about context there, and context being my data. How about context being things like the device I'm using, the time of day, my geography? Yeah, let's actually just take those three. Is that considered context, or would we label it differently?

Ashish Bhatia (11:45):
It depends on the scenario that you're trying to hit. With Power Apps, you will see a lot of that context is present: what kind of device you're using, your compass, your location. So if your scenario is, I'm creating a location-aware application and my response or my user experience

(12:07):
can be personalized based on the location of my user, you could do that. So, for example, you have some kind of field frontline staff, they're addressing a device break or something; then you could definitely capture that location information and then give them awareness of where they are, what the devices are

(12:28):
that they're interacting with, or the facility that they're working on, and thereby you can bring in that location-aware context for them.

Mark Smith (12:36):
Yeah, so would we call that context? That still would sit under the heading of context?

Ashish Bhatia (12:41):
I would say you will use some of that metadata to find the right context. So, for example, I have a data source table where my prior equipment break records are, and I know that my frontline staff is working in a given location. I can filter down my data source rows based on that

(13:02):
location: say, here is all the breakage in the past one year that I've had. Then further narrow down that scenario to give them the right information that they need to complete the job that they are after.

Mark Smith (13:14):
Yeah, so the use case that's going through my mind that I would see is that somebody is logged into a computer as a bank teller, right? So we would know about this person: what's their role, their privileges, what they are allowed to do, what's their day-to-day role. We would have an understanding of that based on their authentication, right? We know where they are, they work in

(13:35):
this bank, at this location, this geography, and they're a teller, so they're going to be frontline staff, in front of customers, taking questions. A customer walks up to the counter and says, what's your fixed-term rate for this type of mortgage? They've got to give a very precise, accurate answer, right? Whatever the fixed rate is, x percentage, and whatever the T's

(13:57):
and C's, et cetera, around it. If they were to query that, let's say, through a Power Virtual Agent on their computer terminal, would metadata, digital exhaust, whatever you want to call it, be ingested into the prompt? So who they are, their geography, obviously talking to a customer, they're wanting the rate for this. So even though our data set has every rate going back to

(14:19):
when the bank started, we don't want an old rate, right? And so the challenge I hear under that use case is that, if that's just a SharePoint repository of all that data and I query what's the fixed rate for this mortgage, I am going to get 5,000 returned results with links to archaic, outdated documents, but

(14:42):
they had part of my prompt in them. So can that be fed into the context of the data, and then a precise answer based on today's date, where we are located, what our current rates are advertised at in the market, blah, blah, blah? All that kind of stuff would come back as a very precise answer, so that they could, with full authority, give the correct data.

Ashish Bhatia (15:05):
Yeah, great question. I would say no, and the reason is that I'm going to introduce a concept called RAG, which is retrieval-augmented generation, right? What it does is it breaks the task into two steps. So, for example, the question that you're asking, the way we

(15:27):
would perform that is: the first step is understanding the intent of the question being asked, and then, what is the metadata that I'm working with? The location, the teller, the account, the type of account and all of that, right, all of the metadata about the end user that you're trying to cater to. Take that metadata, go search my Dataverse tables, right?

(15:51):
Find the relevant information. That relevant information may be spread across like 50 rows or whatnot, right? The goal is to make a query to narrow down that data as much as possible, and then, to be able to peel out the right answer, that's the job that we offload to the LLM: say, hey, here is all

(16:11):
the context data, here is the instruction that I want you to follow, which is the original intent of the question. Go find me the right summary, the right answer, the right rate for the long-term deposit or whatever. And that's how we achieve that scenario. So retrieve, which is the first step, if you

(16:33):
kind of make a parallel to the map-reduce world: retrieve the right information, give it to the LLM, and let the LLM generate an augmented result based on that context data.

Mark Smith (16:44):
So what I heard, and correct me if I'm wrong, if I play back what you just said: the request is really handed to a brilliant prompt engineer that takes into account all that metadata, the intent, and it crafts a very precise request

(17:06):
then to the LLM, GPT-4 in this case, let's say. And therefore, because it was such a precise prompt, it then retrieves the correct data.

Ashish Bhatia (17:15):
Yep. So again, step one is the query, the query for the reduced set of data, and the second step is that precise prompt, with the original intent and this reduced set of data, then offloaded to an LLM to create the final answer from that reduced set for you. And that's how a lot of that large data problem can be

(17:37):
solved. When you ask the question, can I bring my data? Yes. Be precise, cut down on hallucination. It's a great tool to cut down hallucination, because the model has this tendency to answer questions that it doesn't know. But the more data that you pack in for that model to educate it, the more it will help cut down hallucination.
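
The two-step retrieve-then-generate flow described above can be sketched in a few lines. This is a hedged illustration, not any product's actual pipeline; `call_llm`, the row shapes and the filter logic are all assumptions made for the example:

```python
# Minimal RAG sketch: (1) use metadata to narrow the data source,
# (2) hand only the reduced context plus the original question to the LLM.
# `call_llm` is a hypothetical stand-in for any chat-completion client.

def retrieve(rows, **metadata):
    # Step 1: filter by metadata (branch, product, date...) to shrink context.
    return [r for r in rows if all(r.get(k) == v for k, v in metadata.items())]

def answer(question, rows, call_llm, **metadata):
    # Step 2: grounded generation; the model answers only from the context.
    context = retrieve(rows, **metadata)
    prompt = (f"Context: {context}\n"
              f"Question: {question}\n"
              f"Answer strictly from the context to limit hallucination.")
    return call_llm(prompt)

rates = [
    {"branch": "Wellington", "product": "2yr fixed", "rate": 6.85},
    {"branch": "Auckland", "product": "2yr fixed", "rate": 6.79},
]
# With a real client this would look like:
# answer("What's the 2yr fixed rate?", rates, client.complete, branch="Wellington")
```

This mirrors the bank-teller scenario: the query shrinks thousands of candidate documents down to the handful of current rows, and the LLM only ever sees that reduced set, which is what keeps the final answer precise.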

Mark Smith (17:58):
We've talked now about prompt engineering, and I like what you said earlier on. You said that, in the observation of the career path that you were going on, you started to look at going down a data scientist path and then realized, no, actually, there are people that specialize in that. I don't need to go down that path, but I do need to understand, you know, AI. And I feel I've been on the same journey.

(18:20):
I thought, oh my gosh, I have to become a data scientist, and, just quietly, that bores me to death. That's just not the way I'm wired, to go down that path. But as I have learned, what I realize, and I don't think I'm the only one in this boat, is that there's so much to learn. But it's like, what do I need to learn to be really good?

(18:43):
Because, you know, I come from a company that has many years of experience in the AI space, very publicly, and I look at the training resources available to me behind the scenes, and in my mind I go, that's all old AI, right?

(19:04):
It's not where the world is now. That's all yesteryear AI, and it's kind of like, I don't want to spend my time learning it. Honestly, if I looked at an AI course now and I saw its creation date was pre-2023, it's probably not going to, you know, it might not even get me to go deeper and look at the

(19:24):
contents. And saying that, the company I work for is IBM, so obviously a big brand with Watson. They've brought out watsonx. Hopefully I'm not getting a smack for this, but I would never have named it after a dying product brand and reinvented it with another one, but they did. And I've done a lot of that type of training,

(19:45):
and I'm just like, yeah, it's good fundamentals and good stuff for how AI was. But if I was to learn AI today, what kind of resources do you recommend that folks go out and learn from, in the post-New Year's 2023 world that we're in? And really, from that perspective of, I know as a

(20:08):
consultant that everything I do is going to have an AI component in the future, no matter what. So I can have 20 years' experience in Dynamics, my experience in the Power Platform from the day it came out as a concept, but I think all of those now have to include AI

(20:29):
fundamentally, for where our world is going. So, in that context, how would you advise people, encourage people, to look at AI in the context of their career, or the context of their learning paths? Because what I've noticed: Google, they've got their AI program. Microsoft on LinkedIn Learning has their AI program and stuff.

(20:49):
Great, they're good. My wife's gone through the Google one, and she's ex-Google herself, and it's great. And then it gets really technical, and it's like, now all your business stakeholders are like, I'm out, now it's over my head. This is not the stuff I need to know.

Ashish Bhatia (21:04):
I will start by saying that almost every job will get impacted by AI. That doesn't necessarily mean AI will replace the job, but AI will become an intrinsic part of your job. We're seeing this as the new school year is starting: teachers are trying to figure out how to work with AI.

(21:24):
There was a time when everybody resisted, maybe not everybody, but a lot of the education community resisted, people, our kids, playing around with AI. They're now trying to understand how to live with it, how to use it as a resource, as a tool, to help educate better and reach out to more students and work with them.

(21:46):
Similarly, as a product manager, I'm thinking about how I could use AI as a resource to do my job better. The question always comes down to how you use it as a tool to do your job better, and people who figure that out will excel in the paths that they have chosen. How do you learn?

(22:07):
Again, there are two paths there. For people who deal day to day with AI, that learning path is different. We're learning about AI safety all the time. I'm posting a ton about it, because that is something that I am deep in, that I'm learning right now. But for almost everybody else, who's going to use AI as a tool to do their job better, it is: start using

(22:32):
it, start understanding scenarios where the AI augments you and where you bring your own intuition to perform the job well. Just cracking that code is going to be the recipe for success, at least in my mind.

Mark Smith (22:48):
Yeah, that word, augment, I think is critically important, in that I find I use it a lot for finding the gaps in my thinking. I will say, this is what I'm doing, and I'll give it the whole, this is what's important. You know, almost play devil's advocate and go, where is my thinking wrong? What am I missing? And what's blown me away?

(23:10):
I've used it in a couple of contexts. I've used it in the context of my will. I took the legal document from the lawyer and I pasted it in, and I said, in the context of, because I'm New Zealand based, New Zealand law and this will, what is missing? It came up with four sections that the lawyer had totally not brought up, not addressed, but which were all very relevant, and I was

(23:34):
like, hey, that's brilliant, right? It didn't mean I just copied and pasted what it said, but it gave me context to go back and further a discussion as to why these things weren't in it. And then I often find, from a training perspective, I can give it, let's say, the table of contents of a training scenario and say, what would the gaps be in this? What didn't I cover?

(23:55):
And I'll give you a simple one. I do a course on communications in the context of Microsoft and the tech we work in, and I said, what additional module should have been there? And it said, you didn't cover listening. You talked all about talking, and written communication, and verbal, but you never did anything about listening. And I was like, ah, so obvious. But I find that's the beauty of having this augmented

(24:19):
experience, and when you're giving it the prompt to look for that, it's allowing you to uncover your blind spots in many situations.

Ashish Bhatia (24:28):
I myself use it in three predominant scenarios in my work life. One is as a writing assistant. I use it a lot with whatever I'm writing, maybe it's specs or requirements and things like that, to make them more holistic. I use it as a critique, right, the stuff you were talking about:

(24:48):
here is what I'm thinking about, what are the gaps, right? What am I not thinking about? Or how could it go down south, right? So as a critique, I use it a lot. And then the third way I use it is for ideation, right. I mean, here are three things that I'm thinking about, what else could I be thinking? Or, this is an industry, and we're thinking about AI and this

(25:09):
specific piece of it, what are the other scenarios there? So idea generation, as a critique, and then writing. Those are at least three ways that I use it a lot in my own work.

Mark Smith (25:20):
Yeah, I like it. You knew this podcast was coming up; is there anything else that you'd like to add? Before we wrap up, I want to ask you a bit about how you're using AI personally, in your personal life, like how you're incorporating it outside of your role at Microsoft. How are you finding practical application, as I say, either in

(25:42):
your personal life, or it might be a side hobby or anything like that? Or are you pretty much staying straight in the wheelhouse of how you use it for your nine-to-five job, so to speak?

Ashish Bhatia (25:54):
I would say the current investment of time that I'm putting in, other than work, is just learning more about AI safety. I'm reading a ton of papers; I try to summarize some of those sometimes. So it's not so much incorporating AI into my personal life or time; I'm spending a lot of time reading about it and learning

(26:16):
outside, just for context. And again, there are two reasons. A, I'm passionate about that space, about AI safety, bias, and how we bring AI in an equitable way. B, there is so much going on in that space, and I feel that it is going to be a keenly contested area in the next few years,

(26:41):
because there is a lot of room for goodness with AI, but there is also the possibility to do wrong. And again, it's in the hands of the people who are thinking about this, AI safety in general, which levers we use and how much we use them. So again, there's a ton happening in that space, and it's super exciting,

(27:02):
because that will dictate how all of this transitions, right, good way, bad way, however that ends up, right. So again, it's interesting to watch that space and learn from that, and there are good takeaways, I feel.