Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Mark Smith (00:01):
Welcome to the Power
Platform Show.
Thanks for joining me today.
I hope today's guest inspires and educates you on the
possibilities of the Microsoft Power Platform.
Now let's get on with the show.
In this episode, we'll be focusing on Azure AI Foundry,
(00:26):
formerly known as Azure AI Studio, so we'll dive
deep into the subject today.
Today's guest is from Pune, India.
He's self-employed and works as an Office 365 and
M365 consultant.
He likes reading historical books and is a big fan of Sherlock
Holmes.
He has authored his own books.
(00:48):
We'll unpack that in a moment.
You can find links to his bio and social media in the show
notes for this episode.
Welcome to the show, Nanddeep.
Thank you.
Nanddeep Nachan (00:55):
Thank you very
much for inviting me.
I guess it's my second time, but I do enjoy talking to you.
Thank you very much.
Mark Smith (01:02):
Yes, yes, as an MVP,
you've been on the MVP show
that we've discussed in the past, but I'm interested, of
course, to drill into this topic of the AI Foundry, the Azure AI
Foundry, with you.
Before we do, let's get a bit of background.
What have you been up to most recently?
What's happening in your world?
Nanddeep Nachan (01:21):
What have I
been doing this past year?
Right. I started my own business this year. I've
got almost 20 years of experience
and have mostly worked in various organizations, but this
year I started my own consulting practice, which is a new adventure
for me. Again, I found it very interesting, because if you
(01:44):
are a consultant, obviously you get a chance to work with more
people, you explore more of your network and expand
your boundaries.
You have a limited kind of exposure when you are
working inside an organization; you just
have a small circle around you and you talk to only a few
certain people, the ones you work with.
(02:06):
But when you come out of it, when you start your own business,
obviously you expand your boundaries.
So that is what I'm doing right now, I'm exploring this world.
I'm only two months into this consulting, so I'm quite
new here, but yeah, I'm expanding my boundaries, I'm talking to
various people and finding opportunities from there as well,
which is a shift I'm observing toward the end of this
(02:31):
year.
But yeah, it is a challenging as well as an interesting one.
What made you go independent?
Yeah, I mean, as I said, I worked almost 20
years in the IT industry.
So I thought, if I don't do it
right now, then when am I going to do it?
I just thought, okay, this is the correct time, let us go
(02:51):
ahead and explore various things, because in 20 years obviously
you do get a lot of exposure and knowledge about how things
work.
You build your own network as well.
So I thought this is probably a good time to go ahead,
start my own consultancy and take it from there.
But yeah, I guess I do have good support from the
(03:14):
MVP community as well as from the community in general.
So far, these two months that I've been in this
new role, I'm enjoying it.
Mark Smith (03:26):
Excellent, excellent
.
And you talk about your network.
Is that where your business is coming from at the
moment?
It's the people that you've built relationships with over
time.
Yes, yes, mostly like.
Nanddeep Nachan (03:35):
I have been
in this community for, I would
say, more than 7-8 years.
Earlier I was just a passive listener, just listening
to this stuff and reading.
But for the past six to seven years I have been mostly active in
this.
So I have grown my network, and again mostly from
(03:58):
there as well.
I'm getting some good opportunities to work, maybe
collaborate with a few colleagues, as well as trying a
few things on my own.
Mark Smith (04:07):
Nice, nice.
So today I want to talk about Azure AI Foundry, and of course
we're, what,
two weeks on now from Microsoft Ignite, recording this in
December. Microsoft renamed their product, which was AI
Studio, to Azure AI Foundry.
(04:27):
And when I look at this AIlandscape from Microsoft, I see
it in three distinct buckets.
I see Microsoft 365 Copilot, which is designed to work with all the
Microsoft 365 suite of tools that Microsoft provides under E3
and E5 licensing.
If you add the Copilot SKU on, you get it all lit up:
(04:48):
PowerPoint, Excel, SharePoint, of course, and various others.
Then the second stream I see is Copilot Studio from
Microsoft, which back in the day was Microsoft Power
Virtual Agents and has now been extended; ultimately you
can build agents on that platform, and I think there's
going to be a big play for that.
I think there are over 1,400 connectors to other ecosystems where you
(05:11):
can build applications in a low-code type of experience, you know,
that we've always had in the Power Platform arena.
And then we move into this Azure AI Foundry, which for me,
the way I look at it, is the pro-code area.
It's where you kind of have unlimited flexibility in how you
can take models and apply them to your work.
(05:33):
How do you explain it?
Nanddeep Nachan (05:35):
Oh right, as
you said correctly, right now we
are not just limited to using SharePoint; we are
finding various ways to extend SharePoint, or at
least Copilot.
So, as you said, we have got at least three
prominent kinds of options.
The first one is obviously the declarative agent, which is a
very new one.
(05:55):
Again, we used to call it a declarative copilot, but
after Ignite we call it a declarative agent.
Again, there are changes as well.
Then we have Azure AI Studio, which is Azure
AI Foundry right now, and Copilot Studio, for example.
If we go into each one of those one by one, probably I will
(06:17):
choose the very simple one first.
You might be amazed that I'm not choosing Copilot Studio as
the very simple one; I will choose the
declarative agent as the very simple one.
Okay, because when you are developing something on top of
Copilot, or whenever you're trying to extend Copilot,
obviously the first option that comes to mind is Copilot
(06:38):
Studio.
But I will say that after Ignite, when the declarative
agent got introduced, it changed the landscape a bit.
So from the ease-of-use perspective, I will say
declarative agents are my own favorite because they are very
simple to consider, simple to use, without taking on the
(06:59):
complexity of licensing configurations, all those things.
It is very, very simple.
So, in simple words, declarative agents let you
create a customized kind of copilot for Microsoft 365 by
simply stating what you want it to do.
These are agents that use the same technology as
Microsoft Copilot, but they allow you to adapt to the business
(07:21):
need very easily, and by using the declarative
copilot you can improve collaboration, boost
productivity and all those kinds of things.
It is a newer approach which helps us configure and
deploy those AI copilots across Microsoft 365 applications.
What are the benefits of these declarative copilots or
declarative agents?
(07:42):
They are very easy to use because they run on this
declarative approach.
Now, for example, if we try to segregate that: we
earlier had the imperative approach, wherein we need to
define the path that the copilot should take.
Let's say, for example, this is a response that we are
(08:03):
getting from the AI model, and this is a
prompt that we get from the user.
In that case, we used to define what our next step should
be, which we call the imperative kind of model. But
declarative agents take an entirely different path,
called the declarative approach, which simplifies the
development process and makes it more accessible even to
(08:26):
non-developers, enabling users to define the
agent behaviors.
So, in simple terms, I have
been using the approach of using
the Teams Toolkit to create
this declarative agent.
It is pretty simple: there is just one file.
In that one file, you define your prompt and you
define what the behavior of that agent should be, and
(08:50):
that's it.
It will do the rest of the tricks for you.
And again, there is extensibility on top of it as
well.
For example, if you want to connect to other sources like
SharePoint or OneDrive, where most organizations are
storing their data, or to Microsoft Teams, or
even to any third-party system using REST APIs, those
(09:12):
are very well supported.
So that's the reason I said that after Ignite,
personally, my favorite choice for creating a
copilot or extending Copilot has become the declarative
agent.
Yeah, before Ignite,
if you had asked me this question, I would have gone directly
for Copilot Studio, because it was a very simplistic
kind of approach.
(09:32):
But this is the kind of shift that I see after Ignite, because
before Ignite this concept was there as
declarative copilots, which they have
just renamed to declarative agents.
But since that capability was not available in everyone's
tenant, at least in my tenant, I was not able to see the power of
it.
But now I'm experiencing that power; at
(09:54):
least whenever there's an option to simplify things,
the declarative agent is my first choice.
Okay, so let's talk about the second part, which is
Copilot Studio.
So again, Copilot Studio is a pretty simple thing
wherein you can use low-code, no-code kinds of
functionality or flexibility.
(10:14):
So even without much knowledge of the technology, you can
simply focus on the business logic, and from there
you can get started creating the copilot.
So again, it gives you a drag-and-drop kind of
approach, or maybe the topic kind of thing, wherein
you are able to define what the flow of your
copilot should be, and then you are able to do all those things
(10:37):
.
When we go for the third approach, which is AI Studio, or
AI Foundry as it was recently renamed, we get the entire
flexibility. Okay, so I will not say it
is for everyone. You need to have some basic
understanding of AI concepts regarding how the models work,
(10:58):
how we should use prompt engineering and all those kinds
of things, because if you directly jump into AI Studio
or AI Foundry and start exploring things, you might
find it very overwhelming.
So in that case you need some basic information, some
basic knowledge about how to use those AI models.
And again I will say, you need to be very much on
(11:22):
top of using your models: where they should be deployed, how
they should be deployed, and many more technical things.
So for that reason, for business people, I will
say the good choice for creating a copilot
is still Microsoft Copilot Studio. And, for example, if you have a
basic understanding, if you know how LLMs work, what
(11:44):
prompt engineering is, how the parameters work, then
obviously you can go ahead with AI Foundry.
So one thing that I will say is choosing the right platform for
your AI development is very crucial, because now
that we are in a world where we increasingly embrace
AI technologies, tools like Copilot Studio and
(12:06):
this Azure AI Foundry have emerged as
significant, clear players in the market.
Both platforms offer unique capabilities which are
tailored to different user needs and scenarios.
Mostly, as I said, if you are looking to create a solution
very quickly, so that you can develop it for your
(12:26):
customers, with a drag-and-drop interface which allows
you to build your chatbots or virtual assistants very quickly,
go for Microsoft Copilot Studio, for example.
If you're looking for a very comprehensive kind of
platform, which is designed for developers or maybe data
scientists to build, train and deploy AI models so that
(12:47):
you can leverage the capabilities of the Azure cloud and
then develop your advanced AI or machine-learning
solutions, then obviously AI Foundry is your option.
So, yeah, these are the three prominent options: declarative
agents, Copilot Studio or AI Foundry. But again I will say,
after Ignite,
(13:07):
the declarative agent is one of my favorites to go ahead with.
Mark Smith (13:12):
So let me get this
right.
Declarative agents are accessible or buildable from
within Copilot directly.
Is that right?
Is that the UI that you use?
Nanddeep Nachan (13:21):
So there is a
UI as well, but I have
been using the Teams Toolkit for creating those.
So if you have the Teams Toolkit, which is an extension
inside Visual Studio Code, when you create a project
with it, it gives you the option to create a copilot
or agent; right now it is renamed to agent.
Then, once you select the agent, there are again two options.
(13:43):
You can create it from an API, or you can create
it from scratch.
From scratch is again the same kind of thing, where you
are able to define your prompt as well as
the location or the data source from where you want your
copilot to get its data.
If you do not specify any location, then it is
(14:03):
the general internet information which is available
to the copilot.
So let's say, for example, you're just building an
agent which can test the knowledge of geography for any
given user.
In that case you can just state in the system message
of that model, okay, I want to create this kind
(14:25):
of agent which can help test knowledge of geography or
history, and then, once you start a conversation with that
agent, it will start asking you those kinds of questions.
And again.
Secondly, there is a different kind of structure wherein you can
start building your own components, and again, data
source is one of the components there.
(14:46):
So you can define, okay, I don't want to get the
information from the internet, I want to restrict my scope
to just a SharePoint site, or maybe a Graph API or
a REST API, and even from there it can surface the
information.
So, yeah, again, there you need some basic configuration
(15:06):
information.
But if you at least know JSON, even that is fine; you
will be able to create those very easily.
So it is even easier than Copilot Studio, just that
you need to know JSON and how to work with it.
And the advantage is that you probably don't need the
license that Copilot Studio needs.
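To give a feel for how small that one JSON file is, here is a rough sketch of a declarative agent manifest like the geography-quiz example above. The schema is still evolving between preview versions, so treat the exact property names, the schema URL, and the SharePoint site URL as illustrative rather than authoritative.

```json
{
  "$schema": "https://developer.microsoft.com/json-schemas/copilot/declarative-agent/v1.0/schema.json",
  "version": "v1.0",
  "name": "Geography Quiz Agent",
  "description": "Quizzes users on world geography and history.",
  "instructions": "You are a friendly quiz master. Ask the user one geography or history question at a time, then grade their answer.",
  "capabilities": [
    {
      "name": "OneDriveAndSharePoint",
      "items_by_url": [
        { "url": "https://contoso.sharepoint.com/sites/Quizzes" }
      ]
    }
  ]
}
```

The `instructions` field is the system prompt he describes, and the `capabilities` entry is the optional scoping to a data source; leave it out and the agent falls back to the general knowledge available to Copilot.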
Mark Smith (15:27):
Okay, interesting,
interesting little workaround
there.
What type of use cases are you seeing at the moment?
Or what have you worked on where, you know, specific things
that people are wanting to achieve, using not just
declarative agents but any of Copilot through to Foundry
(15:48):
use cases?
What have you seen so far?
Nanddeep Nachan (15:50):
Mostly I have
seen people building use
cases around the RAG pattern, retrieval-augmented generation, for
example.
If I start from the very basics, we have various models
available to us, large language models, specifically GPT-3.5,
GPT-3.5 Turbo, GPT-4 and beyond.
(16:11):
They have their own training data, up to which they know
what is happening in the world,
but after that they have not been trained. At the
same time, we want to use this kind of capability of these
large language models in our organization.
Obviously, since these are openly trained models,
(16:32):
they do not have any insight into our organizational data.
So there are two things now.
We need to bridge the gap after the training cutoff
of the large language model, and the second use case is that we
need to bring our own data so that we'll be able to ask
questions of the large language model based on that data.
So to fill that gap we have something called retrieval-
(16:53):
augmented generation.
That is the RAG pattern.
What we do, simply, is that we do not train the model as
such, but we make the model capable, or aware, of the
information that we have inside our organization.
So, for example, that could be anything, it could be just
documents, or it could be any information that we
(17:13):
can vectorize, store in an index and then give
to the large language model.
Then, by looking at that index, those large language models can
help give us the information.
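The vectorize-index-retrieve loop he describes can be sketched end to end. This toy version uses a bag-of-words vector and cosine similarity in place of a real embedding model and vector store (such as Azure AI Search), and the document contents are made up, but the shape of the pattern is the same: embed the documents, embed the question, retrieve the closest chunk, and hand it to the LLM as grounding context.

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words count. A real RAG system would
    call an embedding model here instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# 1. Vectorize organizational documents and store them in an "index".
docs = [
    "Contoso contract: point of contact is Jane Doe, ends March 2025.",
    "Fabrikam contract: point of contact is Raj Patel, ends July 2026.",
]
index = [(doc, embed(doc)) for doc in docs]

def retrieve(question, k=1):
    """2. Embed the question and pull the k nearest chunks."""
    q = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

question = "Who is the point of contact for the Contoso contract?"
context = retrieve(question)
# 3. In a real system the retrieved context is prepended to the prompt
# and sent to the LLM, which answers grounded in that context.
prompt = f"Answer using only this context:\n{context[0]}\n\nQ: {question}"
print(context[0])
```

The model is never retrained; it only sees the retrieved snippet at question time, which is exactly the "aware of, not trained on" distinction he draws.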
Let me describe one of the use cases which I recently built.
So again I will start with the simplest one.
The first thing was about a contract management kind of
(17:36):
a system.
So, for example, there are various organizations that have a
huge number of contracts with various vendors, as well as maybe
within their departments or even worldwide.
Again, there are organizations that have
billions of contract records as well.
(17:59):
There are even a few contracts which have been running over
the years which are handwritten, or
maybe scanned contracts.
So the use case starts with that data existing
not digitally, probably, but just as scanned information.
Or maybe you can take a photo of that contract and make
(18:21):
it part of your AI system.
So the story starts from there.
We get those kinds of contracts and from there we bring them
into our ecosystem.
So there we use the basic capabilities of AI, for example
OCR, or maybe image-to-text kinds of capabilities, to get the text
out of it.
Then, once it is in text, that is where the story
(18:45):
begins for RAG: we vectorize that
information, provide it to the large language model, and from
there we can talk to that information.
So what can happen is you can ask who the point of contact
for the Contoso contract is, and it can surface that information by
looking at that index and give you the answer.
(19:06):
But it does not end here; in recent days
there have been a few things. When we talk to these kinds of
models, I would say
there are two sets of questions that we can have when
we are specifically trying to build these kinds of use cases.
So the first thing is local queries, wherein we are just trying to
(19:31):
get the information which is built into that index.
So let's say, for example, who is the contact person for the Contoso
contract?
Or when does the Contoso contract end? Okay, because this
is where the index has been built.
Let's say, for example, we have one Contoso contract
document, and inside that document we have this
information in place.
So we'll be able to easily get that information by looking at
(19:54):
that index.
The second kind of query is called a global search, or
global query.
A global query means we are holistically trying to get
information from what is available
across all of the indexed data.
What is meant by that is, let's say, as I said, we
may have millions of contracts.
(20:15):
I can just ask a question like, how many contracts are ending in
January 2025?
Or maybe, how many contracts are ending at the end of this year?
We do not have this information indexed anywhere, but it is
holistic information which spans all of the
indexed data.
(20:36):
That means we cannot directly expect that information to be
available in any of the documents which got indexed,
and then just ask for it.
It has to be generated via something known as a knowledge
graph.
Those knowledge graphs are built by using another
pattern, also developed by Microsoft, which is
(20:56):
called GraphRAG.
We already had RAG, and now we are introducing this
graph pattern, GraphRAG.
What it does is create a knowledge tree, or
knowledge base, around the information.
Then again, all those dots are connected to each other.
What happens via that is it knows, for example,
(21:18):
when I have a look at a contract, that these are
the contracts which are related to each other, and then it
creates that entire knowledge graph around the information
which is there. Then, using that knowledge graph, we will
be able to ask questions about the holistic information
of that.
So why is that useful?
What can happen is that I can ask high-level kinds of
(21:41):
information as well, like how many contracts are ending in the
next 14 months, or maybe how many active contracts
we have got in the system.
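The local-versus-global distinction can be made concrete with a toy sketch. In real GraphRAG an LLM extracts entities and relationships from the documents and summarizes communities in the resulting graph; here, hand-written structured fields per contract stand in for that extraction step, which is precisely what makes the aggregate (global) questions answerable at all. The contract data is invented for illustration.

```python
from datetime import date

# Structured facts extracted per contract. In GraphRAG, an LLM extracts
# entities and relationships like these and links them into a graph.
contracts = [
    {"vendor": "Contoso",  "contact": "Jane Doe",  "ends": date(2025, 1, 15), "active": True},
    {"vendor": "Fabrikam", "contact": "Raj Patel", "ends": date(2026, 7, 1),  "active": True},
    {"vendor": "Adatum",   "contact": "Li Wei",    "ends": date(2024, 11, 30), "active": False},
]

def local_query(vendor):
    """Local query: the answer lives inside one indexed document,
    so plain RAG retrieval can find it."""
    for c in contracts:
        if c["vendor"] == vendor:
            return c["contact"]
    return None

def global_query_ending_in(year, month):
    """Global query: the answer is an aggregate over ALL contracts and
    is written down in none of them, so retrieving any single chunk
    cannot answer it; it needs the structured/graph view."""
    return sum(1 for c in contracts
               if c["ends"].year == year and c["ends"].month == month)

def global_query_active():
    """Another holistic question: how many active contracts exist."""
    return sum(1 for c in contracts if c["active"])

print(local_query("Contoso"))          # answer found in one record
print(global_query_ending_in(2025, 1)) # aggregate across every record
print(global_query_active())
```

No amount of similarity search over raw contract text can produce the count, because no document contains it; that is why he says these holistic questions push you out of Copilot Studio and into Foundry, where you can build this layer yourself.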
So these are the areas in which I'm working, which is a very
interesting thing.
Again, if you are trying to build this kind of system,
where you will also be able to ask the holistic kind of
(22:04):
question, then obviously AI Foundry is your only option.
You unfortunately will not be able to do that with declarative
agents or even with Copilot Studio, because there
you would need to bring your own GraphRAG pattern; you would need to
build your own information database and everything.
So yeah, there are pros and cons of using any
(22:27):
approach, whether it is declarative agents, Copilot Studio or
AI Foundry, but again, whatever fits your need is the
best one.
So you need to choose what kinds of use cases you are
trying to solve for your business case, and then you need
to pick the right option.
But again, we do have a lot of options right now.
In case you want to create a component, there are various
(22:51):
options.
And then again, one good thing which Foundry makes available
right now: say you have built your own RAG pattern.
Obviously, from the Foundry studio itself, you are able
to deploy that solution directly to a website, or secondly, to
Teams as an app, or even thirdly, to Copilot.
(23:13):
So even from there, you are able to deploy directly to
Microsoft 365 Copilot.
So that solution is available as a Microsoft 365 Copilot agent as
well.
Right now these two options, the deployment to Teams as well
as to Microsoft 365 Copilot, are in preview, but they work
very well.
So not sure when they will go GA.
(23:35):
There might be some differences, but the functionality
will continue to work.
But yeah, that is the beauty of using AI Foundry: it
does not live in its own shell.
Microsoft has made it very generic, so that if
you want to develop something in Foundry and take it to the next
(23:56):
level, probably into your own website or into Microsoft 365,
it is still open.
It is not at all the case that when you are working in AI Studio, or
now AI Foundry, you are working in silos.
It is not like that at all.
You can extend it at any time to any platform you
want.
Mark Smith (24:14):
Yeah, excellent,
excellent.
In the past year of working in this technology, what surprised
you the most? And I suppose, what surprised you the most about
using AI?
And then, what limitations have you seen?
Nanddeep Nachan (24:29):
Oh yeah, we
have been using Copilot,
oh sorry, I think Copilot for the last one or
one and a half years or so.
Mostly, I have seen, we are using Microsoft Copilot
very effectively for translating documents, transcribing
things or even creating meeting notes and everything.
(24:50):
But I see that there are many organizations that are still
struggling to get to the right business case for Microsoft
Copilot, which is a bit surprising, because Copilot is
very nice, it is excellent. For example, if I
compare myself when I was not using Copilot versus when I'm
(25:11):
using Copilot, I feel, okay, I'm very, very organized.
So, at least on a personal level, it gives you
flexibility to use Copilot in a very effective way.
So, for example, let's say you are on leave for five to six
days, and when you come back you can see a lot
of conversations in your Teams, as well as maybe a hundred
(25:32):
or two hundred emails, and then again you need to go
through all that.
At least two days would go
into that by itself.
But with Copilot you can quickly summarize everything,
and maybe within five to ten minutes you will be able to
draft your agenda, like what is the next thing you
should do.
So thanks to Copilot for that.
(25:53):
So yeah, I mean, Copilot is helping us create
that kind of environment, at the personal level and
even at the team level.
But it is still a bit surprising for me that even after
using Copilot for one or maybe even two years now,
(26:13):
we don't have many enterprise-level use cases for
Copilot.
But again, at the same time, Microsoft has given good
options for extensibility of it as well.
But it is a place where I'm still thinking we are
missing some kinds of use cases.
We do have various levels of Copilot, like we have
(26:34):
Copilot for Sales, Copilot for Finance, Copilot for
developers.
We have everything, but maybe, for example, holistically,
if we think about an entire use case for an organization,
probably we are missing a bit there.
Mark Smith (26:48):
Yeah, interesting,
Nanddeep.
It's been great talking to you.
Thank you so much for coming onthe show.
Yeah, thank you very much.
It.
(27:17):
It's been great talking to you.
Thank safe out there and shootfor the stars.