Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Welcome everyone to
another episode of the Service X
Factor podcast.
I am one of your hosts, Scott LaFonte, and here, of course, with my esteemed colleague Quad.
How's it going, man?
Speaker 2 (00:14):
What's going on?
Everyone, how are you doing?
Speaker 1 (00:18):
Good, man.
We're in the heat of the summer.
Things are spicing up a little bit around here, but everything is good, and I am excited about our guest. Aren't you? I am, absolutely. And since you know our next guest personally, I'm going to pass the honors over to you to introduce our
(00:40):
next amazing guest.
Speaker 2 (00:42):
All right.
So we were chewing on this topic of AI, and we couldn't think of anyone better suited. So I had to reach out and call someone who knows a little bit about the subject. This gentleman goes by Sahib Swani. Sahib, please introduce yourself, my brother.
Speaker 3 (01:02):
Thanks.
Hey everyone, how's it going? Sahib here. Yeah, I've just been in the AI space for, you know, I want to say four years now. Before that it was data science; I created a financial education company with my brother. So we were doing data science and I was building all those
(01:27):
databases and stuff for his first company there. Before that I was a tax consultant. So everybody loves taxes, I know. Riveting job, for sure. But I went to school for computer science, so then I was like, might as well use that degree. So I switched over and, yeah, it's been on the up ever since.
Speaker 1 (01:50):
Modest, I love it,
love it.
It's great to have you on thepodcast here.
We talk a lot about service and AI, and AI being such a hot topic, I think a lot of our guests understand somewhat about what AI is, but it's evolving so much and so fast that, you
(02:17):
know. One of the things that, say, I'd like to know is, I mean, how do you stay on top of everything, considering it is evolving so quickly?
Speaker 3 (02:26):
Yeah, for sure.
I mean, one of the things I definitely focus on is to ignore the hype. You know, there's a lot of those influencer hype people out there that like to, you know, inflate the capabilities of what's currently out there. So one thing you need to focus on is that, sure, there are new
(02:51):
things happening, new models, LLMs are getting bigger and better, et cetera, et cetera, but at the end of the day, these are incremental increases. Especially when it comes to these reasoning models, I mean, we just saw that article that came out recently from Apple, I believe it was called, like, the Art of Reasoning or something like that, where you
(03:14):
had these models that they tested, that were reasoning models, where they tested these complex problems, like the Tower of Hanoi problem, which is a complex reasoning problem. And at the lower difficulty, you had standard models beating the reasoning models, because the reasoning kind of functionality that they
(03:36):
put in was tripping up these models. Then, of course, at the medium and higher levels, the reasoning models were doing a bit better. However, they were still not solving the issue, even when they were given the algorithms to actually solve the problem.
So what we take from that article, essentially, is that
(03:57):
these models are not actually reasoning. What they're doing is, they're just large pattern recognition models given the facade of reasoning. So there are just things out there that you need to realize, like when people say AGI is coming in five years, or one to two years, and people like Dario Amodei are saying 50% of white
(04:21):
collar jobs are going to be destroyed in the next two years or so, you've just got to ignore the hype. We're nowhere even close to AGI. So when it comes to, you know, handling the rapid evolution of AI, you just need to realize that, hey, sure, these are incremental increases, and there are these certain
(04:43):
protocols out there that we need to stay on top of, like the agent-to-agent protocol, the MCP protocol, but these are just incremental increases and you have some time to understand them.
Speaker 2 (04:57):
All right, so that's awesome news, right? But someone might have heard what you just said and taken offense to it. They're like, wait a minute, are these guys literally telling us not to buy into the hype? So let me ask you a real quick question, because I know where you're coming from. But for our audience's sake, I have to ask: can you tell us exactly how AI works?
(05:17):
And I think if you tell us, then that'll help us kind of narrow down the hype, and we can kind of fit it into where it works for our organizations. Can you kind of explain how it really works for others?
Speaker 3 (05:29):
Yeah, I mean, so in terms of these LLMs, right, these are just large pattern recognition models. So, let's say, when it comes to something like software development, when they say software is being developed by these models, where, like, 30% of our
(05:49):
software is being developed, this is not like new algorithms that are being generated. These are boilerplate templates. Like, you know, when you go to Visual Studio Code and you ask it to create a template for, I don't know, an Azure function that's
(06:10):
wrapped in an HTTP request, and it gives you a template. That's essentially what these LLMs are doing. They're trained on prior code that's been developed. It's nothing new. So it's not like these models can come up with novel ideas
(06:30):
that could solve, I don't know, solve cancer or something, right? Because the data that it's trained on is just the data it's trained on. It can't actually think. So the way that these LLMs work is that they have this large repository of data, and that's why, when these guys keep increasing the size of their models, that means they can
(06:51):
handle more data. They understand things better, they have more knowledge. So when you ask a question, they can essentially understand how to do things a bit better, because they have more data to look at. But when it comes to AI right now, it's largely, like I said, pattern recognition, and that's literally all it's doing right
(07:12):
now.
Speaker 2 (07:12):
Pattern recognition and predictive responses, correct?
Speaker 3 (07:17):
Essentially, yep, exactly.
Speaker 2 (07:18):
Yep.
So, wow, man, these guys are just totally destroying AI right now. No, no, no, no, no. What we're doing is we're setting the bar, because as much as we're setting expectations with our customers, we're also helping them to align to where it fits the most in their organizations. Now, you and I had a real, real heart to heart.
(07:40):
One day I called you up and I was like, I said, you know, I'm sorry, MCP servers are cute, they sound cool, but, brother, I don't understand. Why can't I just, you know, stick with my APIs? We had a real blunt conversation, so we're going to repeat this conversation, just minus the, uh,
(08:01):
explicit, uh, language that you used with me, because I've never said anything bad, right? You mind helping the audience understand what exactly MCP servers are? Because Microsoft has dropped it, but we know they've been out for a minute. So there's, like, Anthropic's MCP, there's Microsoft's MCP servers, you know. Let's just talk about what this is actually used for, and then,
(08:23):
if you can get into why it's better than just using regular plain old APIs?
Speaker 3 (08:28):
Yeah, for sure. So MCP, essentially, is a protocol that Anthropic came out with that allows users to connect these LLMs to tools, functions, et cetera, et cetera, and it basically just increases the abilities of these LLMs.
(08:51):
Prior to MCP, all they had was the knowledge that they were trained on. So you would go to something like ChatGPT and ask, like, hey, how do I, I don't know, change my tire?
(09:12):
it's trained on, pull the information on how to fix your tire, and then spit that back out to you. Now, when it comes to MCP versus API, people say, like, hey, MCP is, you know, oh, it's just a wrapper for APIs, so why wouldn't I just use
(09:32):
an API?
The thing about MCP is that it's a standard protocol for literally all LLMs, right? So if an LLM can understand MCP, it can talk to your MCP server that you created. So an MCP server has multiple, basically, objects that you can
(09:54):
stand up in it. So you can stand up basically just information-getting tools, tools that actually do stuff, like essentially POST requests for an API, and then you can run functions and things like scripts with the MCP server.
(10:15):
Now that you have an MCP server, and then you have multiple LLMs that can talk that one language, now, if you put in whatever tools you want, GET requests to Field Service, GET requests to F&O, GET requests to, I don't know, Salesforce, whatever tool you're trying to use that has an API,
(10:35):
if you put that into an MCP server, now every single LLM ever that can talk to MCP can talk to that data source. So let's say you have multiple agents that might run on different platforms and you want to get to the same data. All you have to do is spin up one server.
(10:55):
You don't have to define these API calls for every single LLM separately, because they might handle APIs differently. Now that there's one standard protocol that can understand these APIs, all the LLMs that you're using can talk to it. And what's also cool is that, the way you define these APIs,
(11:17):
tools, functions, et cetera, inside your MCP server, you give descriptions, you give parameters, et cetera, like you would naturally with, like, a custom connector in Power Platform. But now you can talk to the LLM in natural language, and with your query it can understand, pick and choose the exact values
(11:39):
that you have in that just standard-language query, put that into the MCP server, and recognize what tool you're trying to call.
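[Editor's note: for readers who want to see the shape of what's described here, below is a minimal sketch of an MCP server using the official `mcp` Python SDK's FastMCP helper. The server name, the tool, and its canned data are hypothetical stand-ins; the point is that the tool's description and typed parameters are exactly what an MCP-aware LLM reads when deciding what to call.]

```python
# A minimal MCP server sketch (pip install "mcp[cli]").
# Any MCP-capable LLM client can list this server's tools, read their
# descriptions and parameters, and pick one from a natural-language query.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("field-service-demo")

@mcp.tool()
def get_work_orders(customer_id: str, status: str = "open") -> list[dict]:
    """Return work orders for a customer, optionally filtered by status."""
    # Hypothetical stand-in: a real server would call the Field Service
    # or F&O API here. Canned data keeps the sketch self-contained.
    return [{"id": "WO-1001", "customer": customer_id, "status": status}]

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport; HTTP transports also exist
```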
Speaker 2 (11:46):
Exactly, exactly, exactly. And that's where you had me, because with APIs, you know, I would have to programmatically develop that functionality if I needed it to recognize what items were in that prompt and know which API to call to get a real response. MCP
(12:07):
basically allows the LLM to determine which is the best data source to provide the best type of response, correct?
Speaker 3 (12:16):
Right, yeah, people call MCP, what, the USB-C of LLMs. Oh no, you know how I feel about that. Like a universal plug, right? So one of the cool things about MCP is that, since LLMs are so amazing at pattern recognition, they have
(12:38):
natural language processing within them, so they can understand, based on the information that's within the MCP server and what you type, what you're trying to say, and it'll pick the correct tool.
Speaker 2 (12:51):
Nice, nice.
Speaker 1 (12:54):
So would you say that's one of the biggest advantages of MCP? Or, you know, are there other areas that you would say, like, you know, give us a little bit of a competitive edge when it comes to AI?
Speaker 3 (13:13):
I mean, I'd say that's probably the biggest advantage, the fact that it's this standard protocol that you can talk to in natural language. You don't need to define the parameters. Let's say you're trying to make a flow on some tool, and you're asking a question, and then in that question you need to define
(13:35):
a variable, and then take what the user said and put it into that variable, and then create the object and then send it to the API. So that's definitely the advantage of MCP. Though I'm not saying that there aren't disadvantages; obviously there are some disadvantages.
(13:56):
Right now, I do think that MCP is still in its infancy. There can definitely be security improvements, which, you know, people like Microsoft have tried to make. You can spin up your MCP server on an App Service and then wrap that App Service in an API.
(14:17):
So Microsoft does stuff a bit differently when it comes to MCP, where you need to create YAML code, like you would when you're creating a custom connector, and then put that into the custom connector area within the Power Platform, and then spin it up and say, like, hey, this also requires an API key there. Sure, you have some security there, but in general there's no
(14:38):
real robust security, because when you're calling that MCP server, when you spin up, let's say, an F&O environment, you're putting your client ID, secret, et cetera, within the environment variables anyway.
So when an LLM calls that single MCP endpoint, if somebody else
(15:00):
has that URL or that custom connector and they connect to it, then they can start talking to your data, which isn't necessarily a great thing, right? But obviously you're probably not giving your URL, et cetera, out. These are just things that you need to consider when it comes to governance,
(15:21):
things like that.
So, yeah, there are certain disadvantages, sure, but there are ways to get around them. But, like I said, MCP is still in its infancy and it's definitely going to get better, for sure. We've seen, like you said, Will, how LLMs are getting
(15:41):
better and better.
Speaker 2 (15:44):
All right, so I'm going to ask you a few questions, and this is an area that, you know, is near and dear to my heart, right? So thanks for the explanation; we get it. So, like, if you want to build, like, a field service solution, like, remember when you and I had this convo? I'm going to let you get the verbs and the words in. If you wanted to build a real field service solution leveraging MCP servers, you could do that, right?
(16:08):
You can have it check knowledge articles, previous work orders to get more information, inventory levels, alert technicians. It can do all these actions by leveraging MCP servers, right? But it's just one source. Not one source, but one MCP server, correct?
Speaker 3 (16:26):
Right, exactly. And the cool thing about this is that you can spin up one MCP server, right? And let's say you want to have multiple softwares, multiple APIs within that MCP server. You can definitely do that. So you're not going to have a custom connector for each software, right? You're not going to have one custom connector for Field
(16:47):
Service, one custom connector for Salesforce. If you're using Salesforce for some reason, we can cut that out. Just kidding.
Speaker 2 (16:54):
Right.
Speaker 3 (16:57):
But you know, yeah, you can put all these different softwares in there, and now you just have one server that all your LLMs can talk to. And also, if you want to use that MCP server to do the tool calling, let's say you're using Salesforce and then you're using F&O, you can pull data from Salesforce into your LLM.
(17:18):
The LLM can understand what you're trying to say, and then you can talk to the same MCP server and do a POST request to F&O to transfer data from Salesforce to F&O, straight from an LLM, which is nice.
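[Editor's note: a sketch of the multi-system idea above: one hypothetical MCP server exposing a Salesforce read and an F&O write, so a single LLM conversation can move data between them. Both base URLs and payload shapes are illustrative assumptions, not real APIs.]

```python
# One MCP server fronting two systems, so any MCP-aware LLM can
# read from one and write to the other in the same conversation.
import os

import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-bridge")

SALESFORCE_URL = os.environ["SALESFORCE_URL"]  # hypothetical REST endpoint
FNO_URL = os.environ["FNO_URL"]                # hypothetical F&O endpoint

@mcp.tool()
def get_salesforce_account(account_id: str) -> dict:
    """GET an account record from Salesforce."""
    resp = requests.get(f"{SALESFORCE_URL}/accounts/{account_id}", timeout=30)
    resp.raise_for_status()
    return resp.json()

@mcp.tool()
def create_fno_customer(name: str, account_number: str) -> dict:
    """POST a new customer record to F&O."""
    resp = requests.post(
        f"{FNO_URL}/customers",
        json={"name": name, "accountNumber": account_number},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    mcp.run()
```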
Speaker 2 (17:36):
It is more powerful than just a regular API. Um, so, yeah, you won that argument. You won that argument. It was cool. It was a fun conversation. What do you mean, you? I lost an argument. I will publicly acknowledge that he owned me during that conversation. Um, there was a lot of, uh, no, wait, Will, but why wouldn't you do this instead? And why wouldn't you do that?
(17:57):
And I'm like, God, I hate it when he does this. So it was all good, it was all fun. But hey, I have to ask you, because you mentioned it, we've got to go with the G word here: governance. Big deal, governance for AI. Can you tell us what your thoughts are? Because, technically, it feels like,
(18:18):
I mean, I know there are capabilities that you could build into it with Azure AI Foundry and other areas. I'll let you, I'm not going to, people don't tune in to hear me speak. They hear our guests speak, so I'll let you talk more to that. But governance, securing data, and making sure our data sources aren't inadvertently exposing information that they shouldn't.
(18:40):
You want to talk to us about that a little bit?
Speaker 3 (18:45):
Yeah, I mean, obviously, people like Microsoft and other companies have been trying to fix these issues. They've created manual authentication for Copilot, things like that. Sure, it's built on Entra, so, technically, if somebody tries
(19:06):
accessing your endpoint, things like that, they wouldn't be able to unless they're credentialed. But there's definitely real risk around AI and, unfortunately, around anything, really, right? You see these big
(19:27):
companies having these big data breaches, and they're not even using LLMs, right? Like, we had, I think it was recently, where I think it said 70 million passwords or something, or account details, were released onto the dark web, essentially from people like Google,
(19:47):
Facebook, YouTube, et cetera. YouTube is part of Google, but, sure, we're always going to have these kinds of risks when it comes to new technologies. Right, it's a new, ever-evolving landscape of technology and, like we said on this pod, it's ever-growing.
(20:09):
Multiple protocols are coming out. But what I'll say, and what I'll preface this with, is: don't jump directly on the bandwagon; first understand how the tech works. Like MCP. Like I said, you might want to take me up on that
(20:35):
offer and wrap it in, like, an API. I'll say, do that. Do that, right? Go to Azure, spin up your MCP server, wrap it in an API, add a key to it. That adds a little bit of, you know, extra security, right? And let's say you're doing a Copilot. Sure, it might be in Teams. Maybe you want to add another authentication layer on it.
(20:56):
Maybe you want to connect it to, I don't know, like a, what is that called, Twilio, whatever, the messaging app. Send a code to it, authenticate that way, maybe add another layer of security on it.
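[Editor's note: a minimal sketch of the "wrap it in an API, add a key to it" advice, assuming a hand-rolled gateway sitting in front of an MCP server's HTTP endpoint. In Azure you would more likely get this from API Management subscription keys; the backend URL and header name here are illustrative, not a fixed standard.]

```python
# Tiny gateway: rejects any call without the shared key, then forwards
# the request body to the real MCP endpoint behind it.
import os

import httpx
from fastapi import FastAPI, HTTPException, Request, Response

app = FastAPI()
API_KEY = os.environ["MCP_API_KEY"]        # shared secret for callers
MCP_BACKEND = "http://localhost:8000/mcp"  # hypothetical MCP server URL

@app.post("/mcp")
async def proxy(request: Request) -> Response:
    # Callers that don't present the key never reach the MCP server.
    if request.headers.get("x-api-key") != API_KEY:
        raise HTTPException(status_code=401, detail="missing or bad API key")
    # Forward the JSON-RPC body to the backend unchanged.
    async with httpx.AsyncClient() as client:
        backend = await client.post(
            MCP_BACKEND,
            content=await request.body(),
            headers={"content-type": request.headers.get(
                "content-type", "application/json")},
        )
    return Response(content=backend.content, media_type="application/json")
```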
But yeah, I mean, these tools are constantly evolving, but make sure you are understanding the governance around them. It's only recently that Microsoft started going super, super hard on the security aspect. Because, you know, sure, Microsoft's a great company, a lot of us love
(21:39):
it.
But there are certain issues where things like Microsoft Copilot had prompt engineering issues, prompt injection issues, where people have actually gone and pulled data from these companies. And at one point there was a repository of Fortune 500 companies' data that somebody had put up and sent to Microsoft
(22:00):
and other people, saying, like, hey, this is a real issue, why don't you fix it? And it's these people who have allowed Microsoft to kind of understand where these data breaches are happening, and they've plugged these issues. Sure, there are still issues happening, and Microsoft is constantly evolving. Like any other company, you have white hats that are
(22:22):
constantly hacking things and then sending that information to these companies to fix. But, just like anything, just understand the governance behind it. Understand that it might not have the best security. Before you actually implement something, first understand how to secure it, just like anything else, and then you'll be on
(22:44):
your way.
Speaker 2 (22:46):
So, like, let's just stick with good old-fashioned, like, RAG for grounding, right? I think Azure AI has an index filter. If I remember correctly, Azure AI Foundry probably has an index filter for security. There are different methods and tools we can use for this. Like, let's just get away from MCP for a second.
(23:06):
I know I said a bad thing, but do you want to speak to some of the technologies or tools that are available for kind of securing your data? Because, I mean, at the root, right, you take your data and you kind of expose it to help ground your LLM. What are some tools that are available to help secure the grounding?
(23:26):
Let's put it that way, right?
Speaker 3 (23:29):
Yeah, I mean, there's definitely multiple tools available. Obviously, Microsoft, they have their Entra ID, formerly known as Azure AD. You can allow your authentication, your data, to only be sent to authenticated users, especially in Copilot Studio.
(23:49):
When, let's say, you have SharePoint connected to it, that Copilot can only bring back data that you can manually and physically get when you log into SharePoint. So that's obviously one of the good things about it. For sure, it's connected to Entra. So all the enterprise-grade security that Microsoft is
(24:11):
constantly adding to it allows you to ground your data and only allow authenticated users to get it. There's also other tools, like APIs. You can use the Azure API Gateway to wrap your data
(24:32):
endpoint.
Let's say you're trying to talk to a blob storage and you want to get your LLM to be able to talk to it. You're sending an HTTP request to that blob storage endpoint. You can then, again, sure, you can use Entra, but then, for another layer of security, you can use the API gateway to wrap it and talk to that data.
(24:54):
There's also other tools integrated into things like Azure AI Search. You can add another layer of security on it. But yeah, I mean, there's multiple different things you can do. You can use security certificates. Those are just another layer of security you can add, kind of
(25:15):
just like an API.
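[Editor's note: one concrete version of the grounding-security tools mentioned here is security trimming in Azure AI Search, where each indexed document carries the group IDs allowed to see it and every query filters on the caller's groups. The endpoint, index name, and field names below are illustrative assumptions.]

```python
# Security-trimmed retrieval with the azure-search-documents SDK:
# only documents whose group_ids overlap the caller's groups come back.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="https://example.search.windows.net",  # hypothetical service
    index_name="grounding-docs",                    # hypothetical index
    credential=AzureKeyCredential("<query-key>"),
)

def search_as_user(query: str, user_group_ids: list[str]):
    # search.in does the trimming against the caller's group memberships.
    groups = ",".join(user_group_ids)
    return client.search(
        search_text=query,
        filter=f"group_ids/any(g: search.in(g, '{groups}'))",
    )

for doc in search_as_user("warranty policy", ["sales-team", "field-techs"]):
    print(doc["id"])  # "id" is an assumed field name in this sketch
```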
It's funny, I had a friend, a colleague, trying to implement certificates in Power Automate, because the service that they were using does not use Entra, so they needed to use a certificate and client ID, et cetera, et cetera, and send that to the endpoint and pull data back. But what they didn't know was that, within Power Automate, that
(25:38):
certificate option doesn't actually work.
It's just there.
Speaker 2 (25:41):
It's not there.
It's not for those connectors, it's not.
Speaker 3 (25:45):
It's not.
Speaker 2 (25:46):
You can't, you can't.
That's why my eyes, well, you can't see us on video, folks, so thank God, you wouldn't have seen the expression on my face.
Yeah, go ahead.
Sorry about that.
Speaker 3 (25:55):
So he was trying for, like, three days, and I'm like, oh yeah, I'm sorry, yeah, that doesn't actually work, so you've got to use something like actual Azure infrastructure to do those calls. But yeah, I mean, there's multiple different things you can do, for sure, and Microsoft's constantly upgrading Purview to
(26:16):
understand who's accessing your data, things like that, so you'll be able to understand who's actually hitting those endpoints, things like that.
So, yeah, I mean, there's lots of tools out there. Like I said, I would suggest doing your research before jumping fully into the AI space.
(26:37):
That's one of the things I like to suggest, let's say, when I'm doing stuff for clients: definitely, first do, like, your AI strategy work. That includes the governance. Understand what sources you want to use. Then, based on those sources, understand how to protect those sources when you're trying to connect them to LLMs, your
(27:01):
Copilot-integrated dashboards, because Copilot is everywhere now. And, yeah, just understand how to secure that data, for sure. I mean, that's something you want to do for everything, right? Anything you're trying to do, anything you're trying to use to connect to other sources to get your data,
(27:22):
always first understand your governance.
Speaker 1 (27:26):
Yeah.
Speaker 2 (27:27):
Governance first.
Speaker 1 (27:28):
What would you say, you know, in terms of what you've seen, you know, are the common pitfalls companies are experiencing when they're trying to implement AI? Let's just say, right, they don't have a strategy, or maybe they even do have one. But what are you seeing as some of the common pitfalls and challenges that companies are facing?
Speaker 3 (27:51):
I think one of the biggest issues I've seen is clients misunderstanding the complexity of Copilot. Sure, it's a low-code AI platform, but there are certain
(28:17):
nuances that they need to first understand. And, you know, I know everybody hates reading documentation, but that's something that matters when you're trying to use a new technology. Sure, videos, et cetera, you can handle all the videos. Lisa Crosby and all these people have wonderful videos on
(28:39):
Copilot and other AI tools. But what they don't understand is that there are these governance issues. There are these certain nuances that they need to first understand before trying to implement these tools.
Now, sometimes companies are, let's say, they're talking to
(29:03):
somebody about their AI strategy, or they're talking to a company to kind of get somebody to implement this for them, because they've tried it before. I would suggest that these companies, before trying to do something like AI, when they don't fully understand something,
(29:24):
I don't know, get, like, an expert. They don't need to do the full implementation for you. Ask them to do one use case, right? Take one use case, have them show you the governance side of everything and then basically tell you everything about that one platform that you're trying to use. And then, sure, now you can have them do staff
(29:46):
augmentation, train your own people to do the rest of the implementation. You don't need to use that company to do everything, but at least get an expert to tell you, because that expert literally does that for their job. And technology consulting is a little bit different than, let's say, management consulting, stuff like that, where
(30:06):
technology consultants actually show you how to do things and actually build the software that you're trying to use. So I would suggest getting, like, an implementer or another person that actually understands this software, because obviously this is the future and it keeps constantly growing, and
(30:27):
then understand the tool first, and then, sure, do whatever you want. But I've seen clients creating these tools in their default environments and going, why isn't this working? You guys don't see video, but Will's eyes just rolled three times.
(30:47):
But, uh, I mean, there are certain issues where clients try to add PDFs as grounding data for Copilot Studio in SharePoint, but they don't understand that Copilot can't index pictures within SharePoint yet. They're trying to do things that might allow you to do it,
(31:10):
but then what I end up having to do is tell them, hey, you can either convert these PDFs to full-text documents, because Copilot just uses the standard SharePoint Microsoft Search graph indexing that's built in, or you can upload those documents directly to Copilot, because Copilot has
(31:30):
a better indexer, and if you upload the documents directly to Copilot, it can read images within PDFs.
Speaker 2 (31:38):
It takes forever,
right?
True. Not a shot, just the reality, folks. Just saying it so you know: if you upload it to Copilot, it's not going to index in 20 seconds. All right, it takes a little bit of time, folks, so just be prepared. Just throwing that out there. Go ahead, Sahib, my bad.
Speaker 3 (31:54):
I mean, it just shows you how good that indexer is, right?
Speaker 2 (31:57):
No, it's doing a good job. It's doing a good job.
Speaker 3 (32:00):
But yeah, I mean, there are just certain things that... and the weird thing is, so let me backtrack a bit here. We were at this client and they were having problems with their PDFs, et cetera, and they were saying, oh, it's not working. And they had another implementer come in, or,
(32:23):
I guess they said he was a Copilot expert, and he told them that all they had to do was convert these PDFs into text documents, just plain text documents. And I'm just like, okay, well, if this expert had just gone to the documentation and looked up the knowledge sources that were
(32:45):
able to be used by Copilot: not only can PDFs be used within SharePoint, PDFs can also be directly uploaded.
Speaker 2 (32:55):
So where did the so-called expert come from? What do the kids say? We say cap, right?
Speaker 3 (33:02):
Yeah, cap. So when it comes to these pitfalls with AI, I would
(33:29):
suggest that, since it's so new, you vet people before you let them tell you how to do things, 100%.
Speaker 2 (33:37):
So I have to ask a question, and you and I, this is something that we actually aligned on, but I think it's fair for the audience to hear it, right? We align on a lot, but I think this is... when I first started working with you, I was like, man, you know what? I've got to give him a real cool name. It's Mr. AI now. So can you help our audience understand the difference
(33:59):
between AI and automation? And I'm throwing this up there for you, so that means that you can lead into agents as it goes in there. Can you help them understand, obviously, AI and automation?
Speaker 3 (34:11):
Yeah, for sure. So automation, obviously, is your tools like Logic Apps and Power Automate and n8n, et cetera. All those tools out there, Zapier, are basically flows and tools that allow you to basically allow a process to be
(34:36):
handled by something else, like a repetitive process. Right? Let's say, every day you're going to, I don't know, F&O, and then pulling data out of the Data Management Framework and then importing that data, or you have a data pipeline sending that data to, like, a Power BI dashboard, and then you're, I
(35:00):
don't know, doing a bunch of filters on your data to make that graph all pretty, things like that. Now, with automation, you can automate that process with, like, HTTP requests, et cetera, et cetera, to pull that data and then do all of that filtering automatically with Logic Apps,
(35:22):
Power Automate, things like that. Now, when it comes to AI, this is where I like to make a distinction with things like Power Virtual Agents, because, essentially, Power Virtual Agents weren't technically AI; they were just automation tools that happened to talk
(35:47):
because people had programmed things into them. But the reason why it switched to something like being called Copilot was because now the underlying engine within Power Virtual Agents could understand natural language. So when people talked to the bot and they had certain
(36:12):
spelling mistakes, now, through natural language, that bot could understand what you're talking about, because the underlying model behind Power Virtual Agents, now Copilot Studio, could understand what you're saying. So the fact that we included something into that layer that could understand, basically, natural language through pattern
(36:34):
recognition, because the underlying model was built on natural language, it evolved from automation, essentially, to something like AI. Now, I wouldn't, me personally, it's my opinion, I don't technically consider LLMs AI.
(36:55):
Sure, the technology is technically AI, but I don't consider it AI.
Speaker 2 (37:05):
Oh, you're starting World War III, man. You're starting...
Speaker 3 (37:08):
Oh boy. Now, with natural language, we went from automation to AI. So now we have this automation platform that can understand natural language. Now you have something that also evolved into agents,
(37:36):
because now you have a natural language tool that is also connected to automation, that can understand natural language queries from a user or from an input. So now you have these automated agents with these triggers where, let's say, just like a Power Automate flow,
(37:57):
if an email comes in, the Power Automate flow would index that email, pull the appropriate values, and then create, like, an object and do whatever you want, whatever someone wants, with it. But now an agent can get that same email through natural language processing, without you having to do a complicated Power
(38:18):
Automate flow. It can automatically understand what that email is saying and then pull those values out automatically, and then, through natural language, understand what tools to call, through whatever description you have on that tool, and then do whatever you want. So we went from automation that you had to
(38:40):
program, and create these variables, et cetera, et cetera, to natural language processing that you could talk to and call tools with, to now something like an agent that could understand the context of the data that it was given, and then
(39:02):
understand the appropriate tools to call to process that data, without much human input.
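[Editor's note: a schematic sketch of the agent pattern described here: tools registered with plain-language descriptions, and a model choosing among them based on the incoming email, instead of a hand-built parsing flow. The model call is stubbed out with a keyword check so the sketch runs standalone; every name here is illustrative.]

```python
# Agent-style tool selection: the descriptions are what a real LLM would
# read to decide which tool fits the incoming email.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str  # plain-language text the model uses to choose
    run: Callable[[str], str]

TOOLS = [
    Tool("create_work_order", "Open a new field service work order",
         lambda text: f"work order drafted from: {text[:40]}..."),
    Tool("check_inventory", "Look up part inventory levels",
         lambda text: "inventory checked"),
]

def pick_tool(email_body: str) -> Tool:
    # Stand-in for the LLM call: a real agent would send the tool names
    # and descriptions plus the email to a model and get its choice back.
    return TOOLS[0] if "repair" in email_body.lower() else TOOLS[1]

email = "Customer reports the unit needs repair on site tomorrow."
tool = pick_tool(email)
print(tool.run(email))
```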
Now, I don't trust an agent to do everything autonomously, fully, if it's something complex.
Speaker 2 (39:19):
That's why we have agents. Got to have QA, got to have...
Speaker 3 (39:23):
QA.
That's how you have it.
Yeah, exactly. And that's why you also have human in the loop for AI agents.
So you have an agent that does all this stuff and then, before
(39:48):
it actually sends that data and actually puts it into your F&O system, or wherever you're trying to put it in, you have a human review it. But obviously that takes a lot less time than going to the actual data source, or cleaning the data yourself and then doing all the automation, et cetera, yourself. Now you just have an agent doing it, and all you have to do is review what it's going to do and then hit accept. And now you have something that might've taken days or, I don't
(40:12):
know, weeks, cut down to hours or days.
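[Editor's note: a small sketch of that human-in-the-loop gate: the agent proposes the record, a person reviews it, and only an explicit "accept" lets the POST go out. The endpoint and payload are hypothetical placeholders.]

```python
# Human-in-the-loop approval before the agent's write actually happens.
import requests

def submit_with_review(record: dict, endpoint: str) -> bool:
    print("Agent proposes sending this record:")
    for key, value in record.items():
        print(f"  {key}: {value}")
    # Nothing is sent unless the reviewer explicitly accepts.
    if input("Type 'accept' to send: ").strip().lower() != "accept":
        print("Rejected; nothing was sent.")
        return False
    requests.post(endpoint, json=record, timeout=30).raise_for_status()
    print("Sent.")
    return True

submit_with_review(
    {"customer": "Contoso", "item": "WO-1001", "status": "ready"},
    "https://example.com/fno/workorders",  # hypothetical F&O endpoint
)
```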
Speaker 2 (40:19):
I love it. I love it. You know what, we've got to do a part two to this, man. But we've got to have you jump into machine learning versus AI, because that's my area. I love talking about that. But let's give our audience a little bit more about you. Where can you be reached at, and where's your blog, and where's
(40:43):
all that good fun stuff at? And, if I'm not mistaken, don't you have some speaking sessions coming up? Hint, hint, wink, wink.
Speaker 3 (40:50):
I do. I do. So there are multiple ways you can reach me. Obviously LinkedIn, because that's where everybody is, but I also have a blog called groundtocloud.ai, because, you know, I love AI. That was not a cheap domain to get.
Speaker 1 (41:07):
I can only imagine
how much that would be.
You're committed.
Speaker 2 (41:12):
You're committed to
your brand.
Speaker 3 (41:13):
We love it.
Speaker 1 (41:15):
That's awesome, man.
Speaker 3 (41:16):
One thing before I continue: that .ai domain was actually owned by a country called Anguilla, and so this country has been making millions and millions and millions of dollars a year selling these domains to people. So I was like, hey, good for them, good for their economy. Sure, I love pirates.
But then, also, I do have a podcast where I break down AI
(41:43):
headlines, AI topics, just for the general audience. Sure, tech people can view it too, but it's called Tech for Thought. You can reach us on Buzzsprout, et cetera, YouTube, Spotify, all the places that you can look at other podcasts, like this one,
(42:04):
this great podcast here.
And then I do have some speaking sessions at the Community Summit in Orlando at the Gaylord Resort. So I will be talking about AI agents, I'll be talking about governance for Copilot, I'll be talking about how to do multi-agent frameworks using flows, and then there's also a
(42:27):
private preview out there for actual multi-agents built into Copilot Studio itself. So I'll be talking about that there as well.
Speaker 1 (42:34):
That's awesome, yeah.
Speaker 3 (42:36):
And we'll definitely, definitely...
Speaker 1 (42:38):
We'll make some plugs, just so everyone can reach out. We'll make sure that in the description we send out all the links to all your sites, and I think folks would benefit from sitting in and attending those sessions as well. But we definitely have to have you back on to
(43:16):
have, I think, a part two on this discussion.
Speaker 3 (43:20):
For sure.
I mean, I love being here and had a great conversation, so I'd love to be back.
Speaker 2 (43:27):
And we're going to do it. We're going to get a whole group on there. We're going to make a panel. It's going to be the group, the squad.
But, man, we just really appreciate it. And, just honestly, from a community perspective, love what you're doing in the community, and we just keep encouraging you to keep bringing others along, man. So keep up the good work. Really, really love what you're doing out there, brother.
Speaker 1 (43:53):
Thank you. Thank you so much for everything you're doing and for the conversation today. Hopefully our listeners have enjoyed learning a lot more about AI, a little bit more in depth, versus, you know, just having the surface conversation about, you know, what it is and what it does. And so I think this was really informative for, you know, not only just myself, I think I've learned quite a bit just from listening to you,
(44:14):
but hopefully our listeners have as well, and we'll definitely have to continue in a part two.
Speaker 3 (44:20):
Yeah, thank you for having me, guys.
Speaker 1 (44:21):
Yeah, absolutely. Well, take care, Sahib, appreciate it. And everyone that's been listening, have a wonderful rest of your day, and we will see you in the next episode. Thanks, everyone.