Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome everyone to
another episode of Dynamics
Corner.
My mind is blown.
I don't even have a line for it.
I'm your co-host, Chris.
Speaker 2 (00:10):
And this is Brad.
This episode was recorded on April 22nd, 2025.
Chris, Chris, Chris. Another mind-blowing episode, and we were able to talk about one of my favorite topics: page scripting. We also talked a lot about Business Central with the AI agents, and we also talked a lot about MCP servers. With us today
(00:31):
.
We had the opportunity to. All right, good afternoon, how are
(00:54):
you doing?
Speaker 3 (00:56):
That's better.
I can hear you and see you guys now.
Oh, excellent.
Speaker 2 (01:00):
That's cool.
Speaker 3 (01:01):
Excellent.
Speaker 2 (01:02):
Excellent, but I don't know what's worse. No, there you go.
Speaker 3 (01:04):
Excellent, excellent,
excellent.
Speaker 2 (01:09):
No, that's good.
Good afternoon, how are you doing? Hope you recovered from the wonderful trip to Vegas.
Speaker 3 (01:15):
Yeah, actually I'm
just back.
I'm still in the jet lag.
We took a vacation and went to Costa Rica after Vegas.
Speaker 2 (01:24):
Oh, very cool. Nice, nice. We just landed on Sunday.
Speaker 3 (01:30):
So yeah, still
recovering a little bit from the
jet lag.
Speaker 2 (01:34):
Oh, the time
difference.
I have a three-hour time difference, and I only went to Vegas for the conference and back, and it took me, I think, a few days to recover just from the three-hour time difference and being there for a few days. So you have an opportunity to get used to the time difference.
Then having to go back must be a little more challenging. Three hours is nothing, man. That was my point: three hours isn't a
(02:00):
lot. Someone like yourself, who I think had eight hours for Vegas, or nine hours, excuse me. Yeah, it's almost a full day. It makes for a challenge to go through it too. So how was your experience at the conference? I thought that was one of the best conferences that I had
(02:21):
attended.
Speaker 3 (02:22):
Yeah, it was great to
interact with partners and the
community.
As usual. We were discussing internally how we come with specific intentions. We want to drive a message; we have an agenda. We want the channel to basically ramp up on this whole
(02:46):
AI thing. It was one of the big things at the conference, and we really want people to start doing things with that fantastic new technology. So that's the message we were trying to deliver at that
(03:07):
conference, and I hope we succeeded. I mean, you guys tell me. My personal impression is that the channel covers a whole spectrum. It goes from people who realize that
(03:27):
there's something there which is probably going to be interesting and valuable for their businesses, but they haven't started yet, to people who are deep into it and already doing prompt engineering and have their own AI features, and then there's everything in between. But I
(03:48):
think it's partly also because this thing is going so fast that even we have a hard time keeping up with it. So I imagine that it's hard for everybody to figure out what they should do. There are some challenges with it.
Speaker 2 (04:11):
I think the big point that you made is that it's going too fast, faster than I think users can adopt. And the conversation I typically have is: if you look at the advances in civilization, it took us 50,000 years to make an axe, and then, as time progressed, the amount of time before new technologies released shrank. Now it's almost like Moore's
(04:33):
law with the chips: you can only go so fast, and the window is getting smaller and smaller, where it almost feels like every day there's a new feature, and I don't know if our brains can keep up with it. But it's also, from the customer's point of view: how can they change their business to adopt these technologies so quickly as well? So it's not always the partner's point of view.
(04:54):
It's the partner's point of view, to be able to accept the technology and to be able to work with customers to implement it. But it's also the customers': how can I, in some instances, make a radical shift in my business to be able to take advantage of these new features and functionality? And at the rate it's coming out, it's almost like trying to change the wheels on the bus as the bus is driving
(05:16):
down the highway in some cases.
Speaker 1 (05:19):
And I think that's a big challenge, Brad, because I know it's a partner-focused event with ISVs, right, and the challenge is two parts, the way I see it. One, considering there are different sizes of partners, small partners, large partners, medium partners, how do you utilize Copilot within the organization first,
(05:44):
and then you're expected to implement it for a client. So not only are you trying to figure out how you would use this day-to-day within your organization, you're also, at the same time, trying to convince your customers, your clients, how to use Copilot when, in fact, you're still trying to figure it out yourself. So can you build a solution as a small partner?
(06:06):
That's a little tough. You're building a solution around Copilot while you're a small shop and, at the same time, trying to convince your client to use Copilot.
That's a big challenge.
Speaker 2 (06:18):
Yes, I do have one big takeaway from the conference, and I think it's not narrowed to the AI; it's something that I realized from conversations with partners in some of the sessions. But before we jump into that, would you mind telling us a little bit about yourself?
Speaker 3 (06:34):
Yeah, you mean, like, in general introduce yourself? Yes, yes, yes. So yeah, my name is Vincent Miguelis. I am the chief architect for Business Central. I've been working with Business Central for, I think, more than 10 years now. And yeah, we have a relatively small team
(06:55):
working for me, and what we look into is mostly innovation, new technology. So obviously we do a lot of things with AI at the moment, because that's the new thing, and we're trying to do the groundbreaking stuff. What's the next big thing?
(07:17):
Some of the things we're working on will never make it to the product, but some of them you might see in two, three, four releases from now. So we try to be a little bit ahead of the game. And at the same time, if there is something like a fire burning, like a major performance
(07:40):
issue or something, then it's all hands on deck. Some of our folks are among the best folks we have in our organization, so they go in and try to fix whatever needs to be fixed. That's kind of the charter of what I do.
Speaker 2 (07:55):
You're the innovative firefighters. So you're working on keeping things moving forward, but when there's a fire, you're the special reaction team that jumps in to work on it. That's a big responsibility, and I'm interested to hear, with some of the things we just talked about with innovation and how fast it's coming out, how you plan three or four releases
(08:17):
ahead, because the way technology is moving, we don't know what tomorrow will bring, in a sense.
But to go back to the one big thing, and I had mentioned it just in the previous episode, I think what came out of it is: we have a lot of features and functionality. I think this is across the board, and it stemmed from conversations during some of the sessions and also some of the
(08:37):
networking events. We have a lot of features and functionality within the application, and it's great to say that we have these features and functionality. A lot of us work with it from the partner point of view, and this is the responsibility of the community, the partners; it's not a fault with Business Central, because Business Central, to me, has been my life and I'm a super fan of it. And what I
(09:01):
realized in conversation is we have all these features and functionalities, and I think what's being missed in some cases is how to apply them in the real world. So we have features and functionality that are added, but what is a good user story for how to use the features and functionality and, more importantly, when to use them? Because in some cases, you could set something up one way
(09:24):
or set it up another way, or use a different part of the application, to help make for a better implementation. So I think that was a big takeaway that I had from some of the sessions, because the sessions themselves did have good user stories for when to use the features and functionality. And that's when I realized, like, ah, this is it.
(09:44):
It was like an aha for me: it's not just the literal "we can do this; now we have an AI agent; we have, you know, project orders; we have this." It's, "okay, now you have an AI agent; this would be a good case for when to use the AI agent; this would be a good case for when to use a production order; this would be a good case to use an assembly order," and such.
Speaker 3 (10:06):
So I think that's something. I'm glad to hear that, because, from the Microsoft side, we will ship some AI-based features. We'll keep doing that. But personally, I think very quickly we'll be somewhat, I don't want to say limited, because there's tons of
(10:27):
things we can do. But the real value, back to your point, Chris, before: the real value will come from the partner channel, because we will never implement an AI feature which is industry-specific. But that's, I think, where the real value is. If you're working, I don't know, in the construction business or
(10:48):
wineries or whatever, I'm sure there are tons of scenarios there that can leverage AI, and that will need to come from you guys; that will need to come from the community and the channel. Our hope is that at a conference like this one, when you see what others are doing, you get inspired.
(11:08):
It might not be your own domain or your own industry, but it can inspire you to say, okay, they are doing that; maybe that could somehow translate to my domain, to my customers, to what they are doing and the solutions I'm working on, somehow.
So I think that's one of the great things with these conferences: if that type of osmosis can happen, then I think that's great, and that's how we will be successful, all together.
Speaker 2 (11:35):
No, absolutely, and that was my takeaway from that. Right along with what you were saying: it is on us to have that good user story, like you said, because you have the features, you have the functionality, and it's up to the community to figure out how to apply it, or to come up with the ways to apply it for specific functions. Because we all know, even with what we do downstream from what
(11:57):
you do with the product, it's a challenge. You can't have something that solves everybody's problems without a little change and maybe a configuration. You know, even some people use variants and some people don't, right, but the functionality is in there, and how you use it is all part of the implementation. Which is good. You had mentioned you work with innovation; we can talk about some firefights afterwards. But with
(12:20):
technology moving so fast and technology changing, how do you plan three or four releases out? Do you have more detail for the next release, and then, three or four releases out, you have sort of a broad-stroke desire?
Speaker 3 (12:39):
Yeah, so as you know, we have a six-month cadence, so we plan on a six-month cadence. We plan at minimum six months ahead. But still, I would say we have a quite agile process. We need to make some kind of plan because we need to tell
(13:04):
everybody what's going to be in the next release. But, that said, we still have a little bit of wiggle room to modify the plan in the six-month period because, again, we have a pretty successful, I would say, agile process.
(13:25):
So that's for what is already planned. But we don't do detailed planning more than six months ahead, because we realize, and this is the big learning, I think, in our industry, that these waterfall models just don't work, right. So whatever you plan beyond six months probably is
(13:45):
not going to happen, or it's going to look totally different. And the world is changing so fast now that there's literally no point in doing that. So that's, overall, from a BC engineering team standpoint. In particular, in my team, we do a lot of experimentation and prototyping.
(14:07):
As I was mentioning before, some of it doesn't make it to the product, or maybe not yet. We have some things in a drawer that we haven't touched for a while. We have tons of ideas and things we could do, but we choose not to, because we prioritize other
(14:28):
things.
And one of the things, I sometimes have some interesting conversations with my peers: okay, I have this thing which I really wanted to get in the product, and, depending on what other priorities there
(14:48):
are, we can have interesting discussions about it, right. But in the end it's all a matter of prioritizing things against each other. And there are always things that come out on top, things like security. They're like priority zero: any security work always takes precedence.
(15:09):
Then there are things like compliance that we also need to do. So these are the things that are not negotiable; they're always at the top of our backlog. And after that, everything is a question of what we want to prioritize. And what we try to do is, at the leadership
(15:36):
team level, we try to set some strategic goals for the product, and then we try to align the backlog and execute on this strategy afterwards. So that's more of the detailed planning of it.
Speaker 1 (15:50):
You must have the best job, to be able to create some ideas and say, maybe we should do this next, or, we have a little bit of time, let's try to scoot this in there, and be as creative as possible.
Speaker 3 (16:09):
It's a really great job in that sense. But, okay, I make it sound maybe a little more romantic than it is, because it's not like we just spend all our time prototyping and having fun and whatever gets next to the product. I can't go and spend all my resources on things
(16:32):
which are not relevant for the product, so these things need to be to some extent relevant to the product. So right now we are doing a lot of things around AI, because that's our number one strategic goal at the moment. I can share a little bit of what we're working on. I mean, some of it is confidential, but I can share
(16:56):
the part which is not. So we're doing some more experiments with the page scripting thing.
Speaker 2 (17:07):
Right there, you just lost me, because I am, I don't want to say the biggest, but one of the biggest fans of page scripting. So let's talk about some of the things you have. I will tell you what I love about page scripting. I saw the roadmap, and I wish that what was listed on the roadmap, you could fast-forward everything, because I
(17:28):
did two regional sessions on it, and, I don't even want to interrupt you on this, but I had to, because you hit like a trigger for me. I was shocked that no one in the room knew of page scripting. But after the session, both of those sessions ran over, and
(17:49):
everyone had so many visions of how they could apply page scripting from the conversation. And then also at Directions, we did a lengthy page scripting session that encompassed a little bit more than the page scripting use for user acceptance tests, but some other things that you could do with it.
Speaker 1 (18:08):
So keep going with your list, but we can definitely come back to that. That's a long-term favorite, man.
Speaker 3 (18:14):
You know, it's like what I usually say when we discuss priorities: there's hardly any feature like it. So I showed it early, I think at Tech Days, even before it was shipped, when it was the prototype, and at that time I
(18:34):
said, well, this is something that might or might not make it to the product, man. The feedback I got was totally overwhelming, and people would literally stop me in the hallway at conferences like Directions and ask me, when are you going to ship the page scripting? I never experienced that with any other thing we put in the
(18:57):
product.
So that's clearly something that people see a lot of potential in. So the first thing we need to do is to remove this preview tag. It still has a preview tag, so we need to remove that.
(19:18):
But that's not because it's not production quality; there are a few compliance things around accessibility and documentation, things like that, that we need to get under control.
Speaker 2 (19:33):
Is that what the
preview tag is?
Because I have had the question of what preview means for a feature in the product, because there have been other features that had the preview tag. Yes, and I was asked in a session, what does the preview
(19:54):
tag mean?
So maybe, before we continue with that, not to interrupt you again, because, like I said, you have me super excited now, so I'll probably interrupt you on the page scripting, but can you explain the significance of the preview tag and what it means for a feature within the product?
Speaker 3 (20:03):
Good question. So the preview tag essentially means several things. First of all, there's limited to no support. Meaning, you can use the feature, but when we put a feature out there with a preview tag, it mostly means that we want to get feedback on that feature before we actually put it in production.
(20:23):
But there's no guarantee that we'll ever ship the feature. If the feedback is very negative, or we realize there's too much work, or it's just not good enough, we can always pull it back. I don't recall any example where we've ever done it, but it is a possibility.
(20:44):
We could potentially pull the whole thing back and say, okay, we're not going to do it anyway. So that's one of the meanings of the preview tag: limited to no support.
(21:04):
And there might be some things which are not quite finished and polished, although we tend to have a pretty high bar anyway, even when we put the preview tag on. But there is such a thing as what we call the Microsoft tax: all the features we release need to be accessible, and you need to go through security review.
(21:27):
That we have to do anyway because, obviously, it has to be secure even though it's preview. We need it to be fully documented.
And then there are things like versioning. I'll give you an example. With page scripting, we're discussing what happens if we make it production-ready and remove the
(21:50):
preview tag. We have this YAML format behind it, and so far we have been free to make changes to it. Tomorrow we can decide to change whatever format we save the scripts in, and then if we break your scripts, we don't care, because
(22:14):
it's preview.
You might have tons of scripts, and they won't work because we changed the format.
That's just the way it is when it's preview, whereas when we put it in production, we can't do that anymore, meaning we'll have to, at minimum, either document the format, which then is part of the feature, or be backward
(22:34):
compatible with the previous versions of the format, right? Or have some kind of migration path, one way or the other; you might have to reload the script and save it again, or things like that, to update it to the new format. So all these things need to be in place when things are not in preview anymore.
Speaker 1 (22:56):
Yeah, I think page scripting is one of the favorites across the board, certainly. For me, it's a solution that solved a problem that spans all different areas of business, and it spans partners as well as clients. So those kinds of things, you know, although it doesn't seem
(23:17):
like it's AI-driven or Copilot-driven yet, it's such a great feature that anybody can pick it up and use it to solve a problem, or use it to simplify a process, or for some testing, things like that.
(23:37):
And, again, it's usable across all business aspects. That's probably one of my favorites, for sure.
Speaker 2 (23:45):
Oh, you can't stop me. It was 2023. I mentioned it at Directions. Duilio and I did a session together on page scripting, and we did mention when you introduced it; I think it was June 2023 at BC Tech Days. Ever since I saw that, I've been on board with it, and the uses of it are creative, from documentation to training
(24:07):
materials, outside of the user acceptance testing. And I've also talked with some who got creative on ways to enter data: because you can change the YAML file, they automated doing some tasks.
Speaker 3 (24:22):
This is another reason why we have kept it in preview for so long: because we knew from the very beginning that there was a wide range of possible usages for it. And it's both a blessing and a curse, because if you start saying this is something you can use for documentation or troubleshooting, then you also
(24:44):
need to support those scenarios and test it for them. So all of a sudden the range of potential utilization expands, which means more testing for us, more support. Whereas if we say, okay, the scope of this is end-user testing, it's a much
(25:05):
more limited scope, and it allows us to say, okay, if you use it for anything else, that's at your own risk. Right, we don't prevent anybody from using it for other things, which I can hear you've been diligently doing, which is fine. But if someone comes up with, hey, I have this scenario where I do documentation or
(25:28):
record troubleshooting or whatever, we might say this is not something we support at this stage. So this is another reason why we have the preview tag. But I can share a bit of news with you. So we've been very focused on the user and user testing
(25:52):
part. That's basically how we presented that feature. But I think we're shifting the focus a little bit now to be more partner-oriented, because we realize partners are also using it a lot. So we are taking a broader perspective on page scripting than just the user.
Speaker 2 (26:13):
I'm so happy to hear
that, and, again, I agree with what you're saying; it's the use, as people do get creative, but also the ability to test. And is there anything that you can share from the roadmap that will be out soon, or is it all
Speaker 3 (26:29):
confidential.
I can put out my top 10 wish list, if you want. I want that, okay.
Speaker 2 (26:34):
Well, you don't have to say anything, but if you can sneak in the editing of it.
Speaker 3 (26:42):
I can tell you what we're experimenting with, and again, there's the disclaimer: it might not make it to the product at all.
But one of the ideas we have is, imagine when you do your steps, you could somehow involve an LLM at some stage.
Speaker 1 (27:02):
And I'm not going to say any more. You and I would be talking about that. I think we had spoken to others as well about our wish list, and I think it was around that. So, yes, that would be fun.
Speaker 2 (27:17):
Mine, to start: I know you can edit the YAML file and load it, but it would be to be able to edit the script while it's there, to delete a step, to add a step, to reorganize a step. Because sometimes I'll go through a script recording, then I want to put in a conditional or a validation, and I forget to put the end on the conditional. So I have to either edit the YAML file or I have to
(27:40):
record it over again. And also merging two scripts. I understand that the concept of the scripts is to have small use cases for users, but the ability to maybe take two of them together. I know we can use replay, which I think was a great addition as well, to be able to do a wide range of testing, and now with AL-Go you can also use page scripting tests, and there is, like,
(28:03):
it's just, I have to slow down my excitement, I'm sorry.
I feel like a kid.
I lose focus here.
Speaker 3 (28:10):
All of these things
are on the backlog.
All of these features are obvious, so to speak; everybody wants them. I'm pretty confident they will make it to the product. I can't tell you exactly when and how we'll prioritize them, even for the upcoming six months. I think we're still discussing what's going to be in, but there's
(28:31):
going to be some things.
Depending on how we allocate resources, it might be a little bit disappointing, in the sense that the only thing we do might be to remove the preview tag; that's like the most boring version of it. But that means that in the next release,
(28:51):
all the resources we'll be using on page scripting will be for new features, right? We will be able to also add some new stuff. We have Peter Borring, who is the PM responsible for page scripting, and he's managing that backlog, and he's doing a great job at it.
Speaker 2 (29:12):
No, I understand it's
not easy.
There's a lot that goes into the product.
Speaker 3 (29:16):
There's so much
demand and so much usage of it,
so we'll keep working on it.
Speaker 2 (29:25):
I'm very happy to hear that. Thank you. So I interrupted you when you were
talking about some of the things that you were working on.
You jumped on the page scripting, and it's something that just took me for a side rail here because of my passion for that. I could talk about that every day, in every conference, with everyone. So what are some of the other things that you have on
(29:47):
the back burner, in the pipeline, or some of the cool things? And AI within page scripting would be cool too, for it to be able to dynamically change. Yeah, real-time response.
Speaker 3 (29:59):
Talking about it, just a hint, right: there are some things you can try today, some stuff with page scripting and AI. Let me tell you how. I've done some experiments with it, and there's actually some potential there. So here's the thing: LLMs are really, really good at generating
(30:20):
structured output, especially if you use few-shot prompting and that kind of technique, where I'm telling them, this is the kind of output I want. So you can do things like take a YAML recording, put it into the prompt, and ask the LLM to
(30:42):
generate more tests doing other things and output that particular format, and give examples of the recordings you have. And actually that works pretty well. So you can do cool stuff.
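(A minimal sketch of the few-shot approach described above, in Python with the OpenAI SDK. The embedded YAML is an illustrative stand-in, not the exact page scripting schema, and the model name and prompt wording are assumptions rather than anything from the episode.)

```python
# Few-shot sketch: show an LLM one page scripting recording and ask for
# more test variants in the same format. Assumes `pip install openai`
# and an OPENAI_API_KEY in the environment.
from openai import OpenAI

# Illustrative stand-in only -- NOT the exact page scripting YAML schema.
EXAMPLE_RECORDING = """\
name: Create a sales order
steps:
  - type: navigate
    target: Sales Orders
  - type: invoke
    target: New
  - type: input
    target: Customer Name
    value: Adatum Corporation
"""

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model; the name is an assumption
    messages=[
        {"role": "system",
         "content": "You generate Business Central page scripting test "
                    "recordings. Reply with YAML only, in exactly the "
                    "same format as the example."},
        # The few-shot part: the example recording teaches the format.
        {"role": "user", "content": "Example recording:\n" + EXAMPLE_RECORDING},
        {"role": "user",
         "content": "Generate a similar recording that orders two items "
                    "for a different customer."},
    ],
)

print(response.choices[0].message.content)  # candidate YAML to review by hand
```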
Speaker 2 (30:57):
I'm trying that after this conversation. I assure you I am trying that. I've used LLMs going the other way, to create user documentation, not to create tests. I can honestly say I didn't try that.
Speaker 3 (31:11):
So go play with it. First of all, LLMs are really, really good at generating structured output, especially if it's things they have seen a lot of, and there's a lot of YAML on the internet. There's a lot of JSON, and JSON is even better, because these
(31:31):
days, I'm sure you know, these GPT models are even trained specifically to be good at generating JSON, so they're really good at it. But they do a pretty good job with YAML, which is not too far from it anyway. So try to play around with that. That's fun.
Speaker 2 (31:51):
I will try that, and if I have some issues, I'll use the new feature for converting JSON to YAML and vice versa. So I will do some tests, as you had mentioned: if there's not a lot of YAML with the structure, maybe trying a script with YAML, converting it to JSON, telling it to make some changes, then converting it back to YAML to see.
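(For reference, the round trip described here is a few lines in Python with PyYAML, since page scripting recordings are plain text files. The file names are hypothetical.)

```python
# YAML -> JSON -> YAML round trip for a page scripting recording.
# Assumes `pip install pyyaml`; "recording.yaml" is a hypothetical file.
import json
import yaml

with open("recording.yaml") as f:
    script = yaml.safe_load(f)          # parse the recording

as_json = json.dumps(script, indent=2)  # hand this form to an LLM or editor
print(as_json)

modified = json.loads(as_json)          # ... after making changes ...
with open("recording_modified.yaml", "w") as f:
    yaml.safe_dump(modified, f, sort_keys=False)
```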
(32:17):
What do you mean?
What do I use?
Speaker 3 (32:18):
Use ChatGPT, Copilot.
Speaker 2 (32:20):
It depends what I'm working on. I use Copilot for coding and such; I'll use GitHub Copilot. I'll have files open in the editor, and then I'll choose the different models. I found Claude Sonnet with AL seems to work better, yeah, and then the other models work depending on what I'm going to work with. But I think with the YAML I'll try to see what Claude Sonnet
(32:42):
does with creating the tests.
Speaker 3 (32:43):
You can try Azure AI Foundry, if you haven't tried it. Azure AI Foundry? I haven't worked with it that much.
Speaker 2 (32:51):
There's only so much I can work with. I've been working with some local language models primarily, other than GitHub
Speaker 1 (32:58):
Copilot. Marcel wrote a blog about how he used Azure AI Foundry.
Speaker 3 (33:04):
It's just a UI; it's a front end too. But the thing that's good about it is that you can tweak some of the parameters in the UI, which is one thing, but it also actually has a drop-down box where you can specifically ask it to generate results in JSON, and
(33:25):
that will leverage the OpenAI API, because it's actually supported at the API level. So what it does is, when it sends the request, there's a flag you can set in the request that tells the LLM to return the response as structured JSON, and you will get JSON back. But you have to tell it.
(33:47):
Funny enough, in the prompt you have to say it has to be in JSON, even though you have the flag set, but whatever; otherwise you'd get an error. When you use it directly from the API, you don't have to do that, but in Azure AI Foundry, somehow you have to. So it works really, really well with the so-called few-shot
(34:08):
examples. There's also a UI for that in Azure AI Foundry where you can enter some examples of what you want, the format you want as a response, and that works extremely well if you want to play around with it.
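(The API-level flag referred to here looks roughly like this on the OpenAI chat completions API; a sketch, with the model name and JSON keys as assumptions. In the public API the word "JSON" must also appear in the messages when the flag is set.)

```python
# Sketch of the "respond as structured JSON" flag on the OpenAI chat
# completions API. Assumes `pip install openai`; keys are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    # The API-level flag: ask the service for a JSON object back.
    response_format={"type": "json_object"},
    messages=[
        # The word "JSON" must also appear in the prompt when the flag
        # is set, or the request is rejected with an error.
        {"role": "system",
         "content": "Return a JSON object with keys 'customer' and 'items'."},
        {"role": "user",
         "content": "Order two Amsterdam chairs for Adatum Corporation."},
    ],
)

print(response.choices[0].message.content)  # parseable JSON string
```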
Speaker 2 (34:24):
I'm putting that on
the list.
I will use Azure AI Foundry.
Speaker 3 (34:30):
It's a mouthful for
me.
Speaker 2 (34:33):
But I will try that
as well.
I'm super excited, and I appreciate the idea suggestions for testing page scripting with some LLMs to help create additional tests. Yeah, that's exciting. So what is?
Speaker 3 (34:50):
LLMs are very good at generating large datasets. If you need to generate a large data set for whatever purpose, an LLM is really good at that, for testing data. That's the other experiment that I wanted to try with it.
Speaker 2 (35:09):
Again, it's unfortunate: we have all this great technology, and there's so much that I want to play with and so much that I want to do that it's a challenge getting to some of it. So I will also want to play with my idea for it, as you had said, mentioning test data. Because I think, maybe even a
(35:32):
year or so ago, I had a conversation on one of the podcasts about having open-source test data, because we have the Contoso data now, which used to be the Cronus data, for the applications, but it's not good for every scenario. But now, if we could have LLMs create test data following the structure of Business Central, and use page scripting to load the
(35:55):
data, or configuration packages, whatever is a quick and easy way to do it; configuration packages would be a little more standard. I think it would be a great experiment for it to create test data for customers, sales orders, items. That way you could get volume, and, you know, I'm an expert at making bicycles and coffee machines.
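(A sketch of that experiment: ask an LLM for synthetic, fictional master data as JSON, then write it to a CSV you could pull into a configuration package. The field names are illustrative, not an official Business Central schema.)

```python
# Generate synthetic test customers with an LLM and save them as CSV.
# Assumes `pip install openai`; field names are illustrative only.
import csv
import json
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},
    messages=[{
        "role": "user",
        "content": 'Generate 20 fictional ERP test customers as JSON, shaped '
                   'like {"customers": [{"no": "...", "name": "...", '
                   '"city": "...", "country_code": "..."}]}. '
                   'Avoid well-known demo data such as CRONUS or Contoso.',
    }],
)

rows = json.loads(response.choices[0].message.content)["customers"]

with open("customers.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["no", "name", "city", "country_code"])
    writer.writeheader()
    writer.writerows(rows)
```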
Speaker 3 (36:18):
So I'll tell you
another interesting anecdote
about that.
You've seen our sales order agent, right? We demoed it at Directions. So when we started testing it, we realized we got pretty good accuracy in some of the tests we had. And so the scenario was: send a mail and order some chairs
(36:38):
or some furniture and a coffee machine. So basically, it was the demo data we have in BC, right.
Speaker 1 (36:50):
And it was working
actually pretty well.
Speaker 3 (36:51):
And then some people tried some other scenario with some other data, and all of a sudden accuracy dropped significantly, and at the start we couldn't really understand why. How come, if we order chairs and stuff, that works fine, but if we order something else, it doesn't work? And then we realized, because we went and did
(37:14):
some experiments totally outside of BC, just with ChatGPT, that our test data in BC are out there, they are on the internet, and LLMs have seen them. This means they know about the Amsterdam chair; they know about all the things we have in the BC database, because it
(37:37):
has been around for so many years now. And LLMs are trained on the internet, so they have seen that data. So they are pretty good at recognizing it. That's one of the interesting side effects of training on the content of the internet.
(38:01):
So what we had to do, back to your point, Brad, was create, actually, a test set which we don't publish. We created new test data, also using an LLM, actually. So basically, we created a toy shop with children's toys and all the things, which is totally different from what we had, to
(38:24):
test our feature, and that's what we're using to test our AI feature, and we make sure we don't put it out there,
Speaker 2 (38:30):
so LLMs don't come across this data and get biased on it. It does know it, because I did a blog post on bulk image import, and I did go and search, and I used Copilot and said, create me an image of an Amsterdam chair.
(38:50):
And it looked like the blue, or whatever color; I forget the colors of the chairs, but it created similar chairs to the pictures that were in the sample data.
Speaker 3 (39:01):
I'm sure you can ask. I didn't try it, but I'm sure you can go to ChatGPT and ask, what are the chairs in Business Central? And I'm sure it will come up with a list. I'm pretty sure that works.
Speaker 2 (39:18):
I will try that. This is all just amazing. We've gone, just in my lifetime, from pen and paper to a computer or a system, an LLM, I don't even know what you refer to it as. It's almost becoming,
(39:40):
I feel like I'm talking to a human sometimes, with the information that it gives back in the conversations, even now, with a lot of the voice chats that you can have with some of the applications. And I just can't believe what's changed in my lifetime, and the generations now that are growing
up with it.
Speaker 1 (39:55):
It's all they know; it will be all they know, too. That would be the day, Brad: using Copilot voice in Business Central one day. That would be fascinating. When it talks back to you, that's going to be amazing. That's not really hard; that's not far off.
Speaker 3 (40:19):
I'm sure you can; you probably can do that today somehow. You could write an add-in that does that somehow. I mean, it's just scaffolding, because with these services, all there is, is they have these text-to-speech and speech-to-text
(40:39):
models, which are not LLM models, other models, right, and it's just wiring them; it's just plumbing in the end. You just wire them to the next thing, and then that works. Actually, we did. I didn't manage to show that, because when I showed page scripting, now we're back to
(41:02):
page scripting, you said it was in 2023, so I take your word for it. Michael from my team, we implemented it. I didn't implement that; Michael from my team did. We actually had a prototype where we had Whisper,
(41:22):
which was back then the speech-to-text model, wired into that, and you could basically talk to BC and say, hey, do this and that, and it would generate a page scripting script and execute it.
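(The "plumbing" described here can be sketched with public APIs: a speech-to-text model produces a transcript, and an LLM turns it into a recording. This is not the Microsoft prototype, just the same wiring idea; the file and model names are assumptions.)

```python
# Speech -> text -> page scripting sketch using public OpenAI endpoints.
# Assumes `pip install openai` and a local audio file with the command.
from openai import OpenAI

client = OpenAI()

# Step 1: speech to text with a Whisper model.
with open("command.wav", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="whisper-1", file=audio
    )

# Step 2: ask an LLM to turn the spoken command into a script.
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Turn the user's spoken command into a Business Central "
                    "page scripting recording in YAML."},
        {"role": "user", "content": transcript.text},
    ],
)

print(completion.choices[0].message.content)  # candidate script to replay in BC
```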
Speaker 2 (41:41):
Dude, that's awesome. Can I ask? I'll ask you this question, and you don't have to answer it; obviously, if it's confidential, don't answer it, but if it's a theory, you can talk. I see ERP software becoming faceless, and what I mean by faceless is:
(42:03):
you'll have the UI, the business logic, and the data layer kept separate, where you can go to an implementation and, in essence, have the UI created from a core set of actions, even to the point where you're using voice. Because I know you could do it now with email with a sales order agent; it's
(42:23):
getting there, where some of the data input will be just specifically what you need for an implementation, from a core set of actions. Or even, if a salesperson is in a car and they had a conversation, they could say, create a sales order for Chris for an Amsterdam chair and a sport bicycle with these rims, and give
(42:46):
him a 20% discount, and automatically the sales order is created and put into the process and system. I have seen AI have the ability, and someone demonstrated it for me on Business Central, to find the actions on the page, and it's outside of Business Central; it's not something within it. This is how creative people are getting. It actually was able to find the actions and the buttons on
(43:07):
the page and do something. So my thought was: we won't have a standard UI in that sense. Is that a vision?
Speaker 3 (43:17):
Yeah, for sure. So what you're saying is, everything is an agent, basically, right?
Speaker 1 (43:26):
Yes.
Speaker 3 (43:28):
So there's a lot of discussion around that, and I'm sure there'll be a future where we'll have, you know, AI. I mean, right now, agents are a big thing, and agents don't need to be, you know. You've seen our sales order agent demo, right? We keep the human in the loop a lot,
(43:52):
like, you have to basically approve what the AI is doing, what the agent is doing, at many of the steps. When you get an email, you have to look at the email and say, okay, proceed, right. And when the agent creates a sales quote, you get to look at the sales quote, and the email it actually crafted, before it gets sent to the customer.
(44:14):
But, you know, we program these steps, right. I mean, it's not in the nature of LLMs or agents to have the human in the loop; we implement it that way, and we could just have skipped it. But there are tons of reasons why, at this stage of the
(44:39):
technology, the maturity of the technology, but also how people psychologically respond to this level of automation, we want humans to be in the loop, right. But Jacob, who is our UX expert for BC, has a great way to talk about this.
(45:00):
He talks about this dial where, on one end, if you dial all the way to the right or left, I don't know which, you have the human very involved at basically all the steps, and the agent only progresses and executes the different steps
(45:21):
as long as you approve every step. So that's the very conservative approach. And then you can potentially dial all the way to no human in the loop, where the agents are just doing what they're supposed to do and there's no human intervention. So there's also another way to look at it.
(45:43):
People are using the analogy with self-driving cars: there are these levels, level 1, 2, 3, and 4, in the self-driving car business, and people are applying the same analogy to agents, where, basically, level 4 is the fully autonomous version of it. That's right, and so I have no doubt we'll get there.
(46:05):
I think it's very hard, and I think it would be presumptuous, to try to tell you what the future will look like, because so many things are happening and it can go in so many directions. But if I should venture a guess, there are a few things that are emerging. There will be a certain level of
(46:31):
autonomous agents; there will be a certain number of these. How autonomous, I think, will hopefully depend on what the scenario is, or how critical it is. You don't want an agent to run, I don't know, a nuclear power plant, probably.
Speaker 2 (46:51):
I have a full
self-driving vehicle and the car
drives better than other people on the road.
Speaker 3 (46:55):
There are things like these which are still a bit sensitive. You might agree that some AI decides when to pull down the shades in your home, just to take another, rather
(47:17):
harmless example, right. And then I think there'll be a whole bunch of things in between. Again, it's not going to be an either-or. There'll be things that you will trust the AI to do, and some where you will require more control, for sure.
Speaker 2 (47:31):
Yeah, I like the
human in the loop, and I like the point you're making about where you can dial it up, because you do have to build trust, for lack of a better term. And again, it's almost like everyone's interacting with AI like it's a person, but you do have to build up trust so that you can see how it's going to react. And I can understand and see the point you're making, in
(47:51):
some cases. Now, as I had mentioned, I have a full self-driving vehicle, and my daughter was with me, and she told me that the car drives better than I do, and I feel that, again, it's through a progression. So, if you take it even from the business point of view, with the sales agent: have the human-in-the-loop gates to where, maybe, as the product or the feature or the agents mature, for lack of better terms, it can be, I want to see all emails,
(48:15):
or, I don't want to see emails unless this, or some feature, just to use one of the points you made, where the user in the implementation, depending upon what they're working on, can gate it where they feel appropriate, versus every step of the way. Yeah, you still have some sense of the ability to pull the levers, right?
Speaker 1 (48:34):
I mean, there's a sense of control, but at the same time you have an option. Like, I'm at this point now where I really don't need to interact; I can turn off the human interaction from my side. I can still turn it back on, but I don't need it to prompt me to decide about something before it goes and takes action. I'm at the point, as a business owner, if you
(48:58):
have a business running Business Central, you get to a point where, like, okay, I trust it enough to not have it ask me for approval, because it has a 99.9% success rate. Why do I need to go and interact with it, knowing that it'll do it successfully 99.9% of the time? Then I can just turn that off.
Speaker 3 (49:20):
You know, in the early previews we have, it's very conservative. It asks you at every step of the way, and then we see that it doesn't take very long, like a few minutes of usage, before people go okay, approve, approve. But you're totally right: the whole point of it is also to build some trust.
(49:43):
There are psychological things around letting go of the things that you were in control of, and it's important to build that. As technologists and software developers, we need to build that trust in the systems with our users, so they can see that it's working as expected. Another thing we do, and I don't know if you've noticed,
(50:05):
is that any sales quote or document that has been created by the agent is marked afterwards. So even if you turned off the reviewing part of it, you can go afterwards and look at the records, and you can see that they have been created by AI.
(50:25):
So you can go and review it even after the fact and say, okay, even if the sales quote has been created and the mail has been sent, this was created by AI; I still want to take a look at it.
Speaker 1 (50:36):
Yeah, yeah, and you
can go back and say, okay, maybe I don't trust it again, and I can go and turn the checks and balances back on, just to make sure that I'm comfortable. Where, you know, if I say, hey, if you can get 99 or 90, whatever, like, if you have
(51:07):
that control, then I'm okay to let it do its thing and then not worry about it. Maybe I'll check here and there and do quick audits, but I'm comfortable enough that I can let it run autonomously.
Speaker 2 (51:22):
With the agents, and as we talked about, I think everything can be an agent, and I had some conversations at Directions about how agents will have specific tasks that they do well, just like some individuals may be specialized. What I start to think about is cross-stack agents being able to
(51:44):
work together. So right now we set up an email inbox, and, again to go back to the sales order agent, the sales order agent works. But is there a way to trigger and communicate with other agents, maybe within Outlook, Word, Excel, some of the other Microsoft suites where Copilot is being embedded in agents?
(52:06):
So now we have one flat stack; you could have an agent manager that manages the agents for Business Central and other Microsoft products.
Speaker 3 (52:17):
So that's funny you mention that, because I just got out of a series of meetings exactly on that subject. So there's this new standard, if you haven't heard of it, you should check it out, which comes from Anthropic. Originally it's called the Model Context Protocol,
(52:38):
abbreviated MCP. There's a lot of hype around this right now. In essence, it's more or less what you described. It is interesting because it is a standard; that's the way it works.
(52:58):
If you're not familiar with it, it's a really simple standard, right, but it is a standard; that's why it's interesting. Basically, it has a discovery component in it, meaning that you can have an agent or a service or something, and it basically publishes what it can do and can be discovered by a
(53:21):
client.
And it's also dynamic; it can change as you go. So, for example, let's say you have a web API, something you like, and then you change the API. It will still work, right, because the magic is,
(53:44):
when you're a client, what you do is inject an LLM in the mix, and LLMs are pretty good at figuring it out: okay, here's the API description, so I can call that,
(54:05):
and then if you change it tomorrow, it'll figure it out.
So what it allows you to do is things like, there's this notion of an MCP server. Basically, you think of it as: your BC, Business Central, could be an MCP server, and then you could have another MCP server, right,
(54:26):
which is. So I'll give you an example, and it's not totally constructed, because I was playing around with it, and that's basically the prototype that I built, just to get my head wrapped around this. We have this API for the top 10 customers, so I just exposed that as an MCP server. And then Google Maps also has
(54:50):
an MCP server, and so you can write an application. So you need a client. I use Visual Studio Code because it has a built-in MCP client in it, but there's also this thing called Claude Desktop that has one, and you can build your own. And what it allows you to do is, basically, you register; you say, well, here are the tools that
I have.
(55:10):
So I have BC as one tool, I have Google Maps as another tool, and you can have, like, a weather tool as well, which is also an MCP server. And then you can go in the chat and you can say, oh, okay, look up the address for that customer and then give me driving directions, in the chat, right. And so what it does is kind of orchestrate.
(55:30):
So it figures out, okay, what do I have at my disposal? I can go and look up the address for that particular customer in BC, using the BC API, and then, getting that address information, it figures out how to feed it into the Google Maps API or Bing API, right, and gives you the driving directions there.
(55:51):
And then you can ask, hey, what's the weather there? And then it will call the weather MCP server. And so what's interesting with this is that you can have a whole range of tools at your disposal that your client will figure out how to use, right. And OpenAI will support that standard very soon in ChatGPT, meaning,
(56:17):
if you're in ChatGPT from OpenAI, you can go there and use any MCP servers that are at your disposal, and there's a bunch of these already out there. So you can go and do all sorts of things. You can ask things about BC and do things like that.
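(To make "publish what it can do" concrete: here is a minimal MCP server sketch in Python using the official SDK's FastMCP helper. It exposes one tool that a client such as VS Code or Claude Desktop can discover and call. The Business Central URL is a made-up placeholder, and this is not the prototype described above.)

```python
# Minimal MCP server: publishes one discoverable tool over stdio.
# Assumes `pip install "mcp[cli]" requests`; the BC URL is hypothetical.
import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("bc-demo")

@mcp.tool()
def top_customers() -> list[dict]:
    """Return the top customers from a Business Central API endpoint."""
    # A real endpoint needs a tenant, a company, and OAuth -- omitted here.
    url = "https://api.example.com/bc/v2.0/companies(demo)/topCustomers"
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.json()["value"]

if __name__ == "__main__":
    mcp.run()  # serve over stdio; clients discover the tool and its schema
```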
Speaker 1 (56:37):
So the idea would be, eventually, as a business, right, I'm putting myself in the shoes of a business owner, and I have all these Microsoft products. Is the idea, in the near future, and again we can't tell if this is going to happen, to have an agent that orchestrates within
(57:01):
your tenant, maybe within your tenant, to pull all these different agents based upon your prompt, based upon your need, and just let it do its own orchestration of pulling all these different API calls, and then just let it do its one thing, and that's all it does? That would be amazing, because then you have a specific agent.
Speaker 3 (57:23):
So you need a client, and that client can be many things, right. So in the scenario you're describing, you work from a client. Let's say, for example, imagine it could be Teams. So you could do a chat in Teams, and you can go, hey, give
(57:44):
me the latest top 10 customers, and what are the driving directions, and then Teams would have these MCP servers. You would have, like, BC as an MCP server, and it would have Google Maps and other servers at its disposal to answer your question.
But then you can also imagine that BC, Business Central itself, in its UI, could act as a
(58:08):
client as well. And Business Central could, if we choose to implement that, support other MCP servers, like, for example, the weather, whatever, or another Microsoft product, Microsoft Sales, for example, which may be a little more interesting in terms of business scenarios.
(58:28):
So that's exactly what you discover. That's the idea behind this whole MCP thing.
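(And the client side of the same sketch: connect to that server, list what it publishes, and call a tool. Again a rough sketch against the Python MCP SDK; the server file name and tool name come from the hypothetical server above.)

```python
# Minimal MCP client sketch: discover and call tools on a local server.
# Assumes `pip install mcp` and the hypothetical bc_server.py from above.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

params = StdioServerParameters(command="python", args=["bc_server.py"])

async def main() -> None:
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()          # discovery step
            for tool in tools.tools:
                print(tool.name, "-", tool.description)
            result = await session.call_tool("top_customers", {})
            print(result)                               # tool output

asyncio.run(main())
```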
Speaker 1 (58:39):
That is wild.
Speaker 2 (58:40):
I love it.
It's mind-blowing, because you can take it back to what I was saying, where it becomes faceless, because now the user of a product doesn't have to be a customer, and sometimes I get drawn to terms that I think aren't as appropriate. You're dynamically using the application, so you have a bunch
(59:01):
of services together, and then the agent's going to orchestrate the services from the servers to give you the results, and I like that. The top 10 customers, driving directions, even where they are geographically, and, like you had mentioned, bringing in the weather. I mean, I could see so much information, and it's going to become about creating these servers, or the endpoints, and
(59:23):
them having access to the business logic and the data. That will give us the interface that we need.
Speaker 1 (59:32):
Here's the wild thing
about this too: I don't even know what's going to happen to us as a civilization.
Speaker 3 (59:36):
The new thing about
it is that it is a standard.
As with all standards, it's only going to work if it actually takes hold, if people are adopting it and starting to use it. But there's a pretty good chance, because Microsoft is a big player in it, OpenAI is a big player, and Anthropic, and these are all really frontrunners in terms of AI. So the big
(59:59):
companies and the big players, the big actors in the AI world, are behind this. So there's a pretty good chance it will take hold, and there's already a lot of interest in this technology. And it's interesting because it's a standard, and that will allow these things.
(01:00:21):
But if it really takes hold, there are a lot of cool things that can happen, for sure.
Speaker 1 (01:00:27):
Brad, if you talk about the faceless component of this, here's what. And again, I'm just dreaming at this point, right, like it's going to require some effort. But imagine if you have your own central agent for your tenant, and you do business with somebody that happens to have the same capability, maybe another tenant using
(01:00:50):
Business Central. At some point, all you have to do is connect two tenants together, with those two agents talking, and then it'll do its business on its own, right? Based on what I know about the business and what it knows about my business, just have the two tenants talk to each other, and I'm hands-off at that point.
That's amazing.
Speaker 3 (01:01:11):
Basically, what's really new in that? Because we've had standards before in our industry. I'm sure you're aware there have been a lot of attempts, and standards only work when adopted. A lot of the things we're doing today are API-based, so you have an API. We have a lot of services. Basically, we live in a service world.
(01:01:33):
There is a web API for this anda web API for that, and you can
do a lot of services.
Basically, we live in a serviceworld.
There is a web API for this anda web API for that, and you can
do a lot of things.
You can query about the weather, you can check out, I don't know, Airbnb probably has an API as well.
Everybody has APIs.
But if you want to write an application that kind of aggregates this data and does different
(01:01:54):
calls to these APIs, that's a lot of work. First of all, you need to learn how to use each API, because they're not necessarily self-describing, right? And also, your code will break as soon as somebody changes something in the API, right?
What's new here is basically that, first of all, you describe this API.
(01:02:15):
There's a standard for describing what they do, right.
And then the magic comes from the LLM, because LLMs are really good at figuring these things out: okay, you get this API, okay, it looks like this, so they can figure out how to call it.
And if, two seconds after, somebody publishes an update to
(01:02:39):
this service and changes the API, then the LLM will figure it out.
That's the new thing.
That's basically the magic of it.
So, whatever client you are, as you described, you have a client leveraging multiple of these MCP servers. It will keep working, and, moreover, you can add more
(01:02:59):
servers, more services, to it in a seamless fashion. That's pretty exciting.
You know, there are a lot of interesting applications of that, for sure.
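What makes that resilience possible is that each tool ships a machine-readable description. Here is a sketch of the kind of descriptor a server publishes; the values are illustrative, mirroring the shape a list_tools() call returns.

```python
# Illustrative shape of a published tool descriptor. The model reads
# this catalog at session start instead of relying on a hard-coded
# signature, so a renamed parameter simply shows up in the next catalog
# and the LLM adapts its calls accordingly.
tool_descriptor = {
    "name": "get_forecast",
    "description": "Return a short weather forecast for a city.",
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}
print(tool_descriptor["inputSchema"]["required"])  # -> ['city']
```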
Speaker 2 (01:03:07):
I am super thrilled
for this, but I have to ground it a little bit, again from the conversations that I have with individuals.
We're talking about AI creating sales orders.
Chris, your idea of having it work across tenants was extremely beneficial and helpful, but now we have a lot of information within a tenant.
We have documents, we have data.
(01:03:29):
We have a lot of points.
How can we manage the security from the context of the individual making the call, to be able to differentiate when the AI is creating this data, when it creates the vectors?
For example, Vincent, you know you're responsible for the product group, and you may have access to finance information
(01:03:53):
and human resource information.
I work for a different function within your group, and I shouldn't have access to that.
What is there to prevent that, and what measures can you put in place when AI is training on this information, to use some of the terms? What considerations do you put in place to make sure that I don't have access to the payroll files when I'm doing a query or a call and you are?
(01:04:16):
And then also, Chris, if we're crossing tenants, somebody coming in, how do we scope what they can do in this world of all this automation? Because security is a big concern to a lot of individuals.
Speaker 3 (01:04:30):
That's a very good
question, and I can give you some answers.
Yes, that's something that, when you develop an AI feature, you should be concerned about. But that goes for any automated piece of software you write, whether it's an agent or
(01:04:53):
there's AI involved or not. If something is doing something in an automated way, you need to be concerned about security, about what it does, and that it doesn't access data that it shouldn't be accessing.
So I can tell you a little bit about the way we addressed it in Business Central.
So in Business Central, our agent is executing with the
(01:05:23):
credentials of the user.
We have a security model, this permission system in Business Central, I'm sure you're familiar with it.
So you grant the agent the permissions it needs to have and nothing more.
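As a mental model of this least-privilege setup, including the rule described just below that the agent can never be granted more than the delegating user has, here is a tiny hypothetical sketch; the permission strings are invented for illustration and are not Business Central's actual permission objects.

```python
# Hypothetical model: the agent's effective rights are the intersection
# of its assigned permission set and the delegating user's own rights,
# so it can never act beyond the user who configured it.
USER_PERMISSIONS = frozenset({"Sales Header: RIMD", "Customer: R", "Item: R"})
AGENT_PERMISSION_SET = frozenset({"Sales Header: RIMD", "Customer: R", "G/L Entry: R"})

def effective_permissions(user: frozenset, agent: frozenset) -> frozenset:
    return user & agent  # cap the agent at the user's own rights

print(sorted(effective_permissions(USER_PERMISSIONS, AGENT_PERMISSION_SET)))
# ['Customer: R', 'Sales Header: RIMD'] -- no G/L access, since the user lacks it
```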
(01:05:44):
Right, the permission system. And the agent is acting with a
(01:06:05):
certain set of permissions.
You cannot grant a permission wider than what you have as a user, obviously. And moreover, we took an extra step.
So the way our agent works, our sales agent, is that it's actually working like a true agent. It means it's working like a human.
It's not calling APIs, it's actually manipulating the
(01:06:31):
product through the UI.
So it's looking at the page, so it's not faceless. Basically, what happens is we take the page you are on, we show it to the LLM and say, hey, here's a page, this is how it looks, here's what you need to do.
So in our case it could be like create a sales order, what's
(01:06:52):
next?
And the LLM says okay, click that button. Then we do that.
Then you get to a new page, and we show the page again to the LLM and say, hey, here's the new page, what do I do now?
Then the LLM goes okay, fill in that value. Then we do that, and so on and so forth.
So that's how our agent works, basically.
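A minimal sketch of that observe/act loop; render_page, ask_llm, and apply_action are hypothetical stand-ins, not Business Central or Copilot APIs.

```python
# Sketch of the loop: show the current page to the model, get back one
# UI action, apply it, and repeat until the model says it is done.
def render_page(page: dict) -> str:
    # Serialize the visible controls so the model can "see" the page.
    return f"{page['name']}: fields={page['fields']}, actions={page['actions']}"

def ask_llm(prompt: str) -> dict:
    # Stand-in for a chat-completion call; here it immediately finishes.
    return {"done": True}

def apply_action(page: dict, action: dict) -> dict:
    # Click a button or fill a field, then return the page that results.
    return page

def run_agent(task: str, page: dict) -> dict:
    while True:
        action = ask_llm(f"Task: {task}\nPage: {render_page(page)}\nWhat do I do now?")
        if action.get("done"):
            return page
        page = apply_action(page, action)

run_agent("Create a sales order for two chairs and a desk",
          {"name": "Sales Orders", "fields": {}, "actions": ["New"]})
```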
Speaker 2 (01:07:16):
Okay, so it is working in the context of the UI.
Therefore, if it's running, by default it gets the user's permissions overlaid. And again, I'm being loose with the terms.
Speaker 3 (01:07:31):
We also have a
permission set specific to the agent, which is even more restricted than the user's. And another thing we're doing is with the pages.
There's a profile, like a regular BC profile, that we've made specifically for that agent, right?
(01:07:51):
It serves several purposes.
First of all, we have a simplified version of the pages for the sales order agent. We have a sales order agent profile which basically
(01:08:12):
targets the sales order scenario, because we found out it's a lot better to show a reduced, simplified version of the pages, not to confuse the agent too much.
So we get better accuracy. There are tons of fields, like on the sales order, that you will never use in the scenarios we're supporting.
So no need to show them to the agent, because it will only
(01:08:35):
get confused, right?
So we show a much reduced version of these things, and we remove things like, you know, the discount field, for example.
We don't want the agent to see it, because potentially you could have, I don't know if you can call it prompt injection.
(01:08:56):
But you know, given the thing is entirely autonomous, it's going to do what you ask it to do.
So in a sales order agent scenario, if the customer writes a mail saying, hey, I want a quote for these items, you know, two chairs and a desk, and I want a 20% discount, the agent is going to go: here's the page.
(01:09:17):
There's a discount field. What's in the prompt? 20% discount. Okay, I'm just going to put 20% here.
It's going to do what you asked it to do. There are things like that we've purposely disabled, so the agent, for example, doesn't see the discount field.
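A hypothetical sketch of that guardrail: treat the agent's profile as an allow-list, so a field like the discount never reaches the model, and a "20% off" instruction buried in a customer email has nothing to bind to. The field names are illustrative.

```python
# Only allow-listed fields are rendered into the agent's view of a page.
AGENT_VISIBLE_FIELDS = {"Sell-to Customer Name", "Item No.", "Quantity"}

def page_for_agent(all_fields: dict) -> dict:
    return {k: v for k, v in all_fields.items() if k in AGENT_VISIBLE_FIELDS}

full_page = {
    "Sell-to Customer Name": "Adatum",
    "Item No.": "1936-S",
    "Quantity": 2,
    "Line Discount %": 0,  # hidden from the agent on purpose
}
print(page_for_agent(full_page))  # the discount field simply is not there
```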
Speaker 2 (01:09:37):
It can't do things of
that nature.
No, it's great to hear, because, again, when you see these new features and you see the demonstration, it's great to see, to use your wonderful show Under the Hood, how it really works, and to understand some of these considerations. Because, as you had mentioned, we want to build trust in the technology and how it can be used, but also identify that
(01:10:02):
guardrails have been put up, so that, if it does run and execute, it's not going to go wild and, as you had mentioned, start giving products away for free.
Speaker 3 (01:10:14):
So you know, we
announced at Directions that we will release this agent platform soon, which will allow you to build your own agents, and you will need to think about these things.
You need to create the right permission set for your agents for your particular scenario, and you'll need to create these profiles.
(01:10:35):
Because we can only do the things that we know about in the BC base app.
If your solution has some customizations that are sensitive, whatever sensitive data you don't want the agent to manipulate, you'll need to protect it.
But that's just business as usual, I would say.
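As a sketch of what "building your own agent" might bundle together, conceptually a scenario-specific permission set plus a reduced-page profile; the class and names here are illustrative, not the announced platform's actual API.

```python
from dataclasses import dataclass, field

# Illustrative bundle: least-privilege permissions plus a profile that
# lists exactly which fields each page exposes to the agent.
@dataclass
class AgentDefinition:
    name: str
    permission_set: frozenset
    profile_pages: dict = field(default_factory=dict)  # page -> visible fields

returns_agent = AgentDefinition(
    name="Returns Agent",
    permission_set=frozenset({"Return Order: RIM", "Customer: R"}),
    profile_pages={"Return Order": ["Customer", "Item No.", "Quantity"]},
)
print(returns_agent.name, sorted(returns_agent.permission_set))
```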
Speaker 2 (01:10:59):
Yes, I still can't
wrap my head around the security context of it.
Again, I'll just have to understand and trust.
I understand it for the sales order agent, but I get hung up on retrieval of information which is sometimes outside of the context of Business Central. I think more of a Copilot in an Office setting or a SharePoint setting, where you
(01:11:19):
have a lot of files on your server that it's indexing, so that you can have employee self-service, for example, within an organization, or even customer self-service, which is just amazing.
It's the efficiencies that we gain from a lot of these tasks. And I'm still just looking for a good photo AI agent that will
(01:11:39):
organize all of my photos.
Speaker 1 (01:11:42):
I think the security component is going to be a hurdle for a long time, because we move so fast in processes and being efficient, and typically security has to catch up, you know.
And so it's a fine balance, right, because you want to make
(01:12:03):
this huge progress, but then at the same time you're going, like, hold on, you've got to pull the ropes back a little bit and make sure that the security is in place before you let it do its thing. And I think it's going to be like that for quite some time, unless somebody comes up with some simplification of security where it just learns how to do it on its own.
Speaker 3 (01:12:23):
Security is
always something you need to be taking seriously, for sure.
I mean, I believe that we're in pretty good shape with our agent platform, in the sense that we've built the necessary tools and guardrails you can put in place to prevent the agent from doing things you don't want it to do.
That said, you always need to be careful.
(01:12:47):
So we've done our due diligence with BC, right. We do a lot of testing around security, and we do a lot of what we call responsible AI as well, and try to prevent all the prompt injections, all the misusage, all these things.
But in the end, again, when you implement your own agent,
(01:13:08):
you'll need to go through that work as well.
So there's still work. You know, we're not going to get unemployed. Contrary to what everybody says, we're not going to get unemployed anytime soon. I guarantee you that.
Speaker 2 (01:13:23):
No, I don't see it.
They've been saying that for years. You know, if you go back to the horse and buggy with the motor vehicle or the automobile, the nature of the work might change.
Speaker 3 (01:13:33):
You know, I find
myself these days, I don't do a lot of coding, but when I do, I approach it very differently than I used to. You know, all the boilerplate code we used to write for basic stuff.
Speaker 2 (01:13:47):
You don't do that
anymore, you just have it. Yeah.
No, and one thing, Chris, I know you want to jump in, but one thing, as I say: AI isn't going to replace you. Someone with AI is going to replace you.
That's what it is. It's not the AI that's going to replace you. It's people who are using AI to become more efficient and understand where to use it and when to use it and how to apply
(01:14:09):
it.
It's not the AI that's replacing you. It's somebody that has skills, just like any other profession and job that you have to continue to advance with.
Yeah, and I wanted to kind of finish on that.
Speaker 1 (01:14:22):
You know, using AI
and the security component, I think about my dream of it being tenant-specific and having two tenants communicating with each other.
It's going to be like you had described, Vincent. For you to build that agent, you have to set security parameters specific to that agent, and if you have it talk
(01:14:42):
to all the different agents, right, whoever's building them will also have to consider all the security.
So I think it's going to take some time, but I think we'll get there eventually, unless someone comes up with a simplification or standard of security across the board.
That will be the hurdle, in my opinion.
(01:15:05):
Of course, at the same time, when you set the security, you have to build that trust too, with the business owners as well, the people that are running the business itself.
So exciting times, I think. In my opinion, super exciting.
Speaker 2 (01:15:23):
No, there is.
There's a lot of excitement.
I'm super excited with all the new features in 2025 wave one. Can't wait for 2025 wave two to see the new features.
I'm super excited about the page scripting feature, even, like you said, if it's just that small removal of the preview tag, because at least we know that now there's a little gas behind it.
Speaker 3 (01:15:41):
I mean, this is not
going anywhere, I can tell you that.
Speaker 2 (01:15:44):
No, I hope not.
I'll be the biggest opponent of its removal, and I'll refuse to allow it to be removed.
No, it's a great tool, and I'm super excited to see where Business Central goes with the use of agents, the responsible use of agents, and the responsibility with the humans in the loop to build the trust within the application, and to
(01:16:05):
Chris's point, being able to communicate with other tenants or other servers for automating tasks or letting AI run and go with it.
We do appreciate you taking the time to speak with us today.
I'm super excited for the conversation. It took me a long time to calm down from it.
It was great seeing you at Directions North America in Las Vegas.
(01:16:25):
If anyone would like to get in contact with you, what's the best way to reach you?
Speaker 3 (01:16:30):
They can reach me
through LinkedIn. It's probably the best way.
Speaker 2 (01:16:34):
Great, great, great,
great. All right, great, thank you.
Thank you again for your time.
We really appreciate it and look forward to speaking with you again soon.
Thank you, Vincent.
All right, thank you. Ciao, ciao. Bye, bye, bye.
Thank you, Chris, for your time for another episode of In the Dynamics Corner Chair, and thank you to our guests for participating.
Speaker 1 (01:16:54):
Thank you, Brad, for
your time.
It is a wonderful episode of Dynamics Corner Chair.
I would also like to thank our guests for joining us.
Thank you to all of our listeners tuning in as well.
You can find Brad at dvlprlife.com, that is D-V-L-P-R-L-I-F-E dot com, and you can interact with him via
(01:17:18):
Twitter, D-V-L-P-R-L-I-F-E.
You can also find me at matalino.io, that is M-A-T-A-L-I-N-O dot I-O, and my Twitter handle is Mattalino16.
And you can see those links down below in the show notes.
Again, thank you everyone.
(01:17:39):
Thank you and take care.