Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome everyone to
another episode of Dynamics
Corner.
What is Claude Sonnet?
Is that like a poet, a poem, music?
I don't know.
I'm your co-host, Chris.
Speaker 2 (00:10):
And this is Brad.
This episode is recorded on February 26th, 2025.
Chris, Chris, Chris, who's Claude?
Is that what you're asking?
Speaker 1 (00:19):
Yeah.
Speaker 2 (00:20):
Well, today we have the opportunity to find out who
Claude is, what his sonnet is, and learn a lot about
AI-assisted development in Business Central. With us today,
(00:56):
we have the opportunity to speak with the MVP, Tine Staric.
Hello, good afternoon. How are you doing?
Oh, good afternoon.
I'm doing fine. Excellent, excellent.
I've been looking forward to speaking with you for some time.
You know this. You know, Chris, I don't know if you know this: I
bother him all the time. For everything, anything.
I just bother him all the time.
You know, I get up at 3 o'clock.
Speaker 3 (01:17):
I don't think it's
bothering, but I am always
surprised that you wake up at 4 am and, you know, I'm the first
one you text. 4 am, the first thing that comes to his mind. I think
we have to start this over, because now that sounds a little
bad. 4 am.
Speaker 2 (01:37):
I wake up, I text
Tine.
I always have things on my mind and he's involved in so much.
Now it's like I have to just have an outlet to say, what about this?
What about that?
Isn't it gold?
But now you just let the cat out of the bag and everyone's
listening to this going,
what's going on there?
Well, what about the other people that think I'm the first
one that they text at four in the morning?
(02:00):
Oh, now see, now you've outed yourself. Now you just set
me up, my friend, you know.
Speaker 3 (02:07):
Look, I'm sorry.
I'm sorry they have to deal with it.
I'll take the first place.
Speaker 2 (02:13):
That was perfect.
Speaker 1 (02:15):
That was perfect.
Speaker 3 (02:17):
That was excellent.
Speaker 2 (02:18):
Excellent. So I'm
glad things are going well.
You've been doing a lot of great things, and you know, we'll
cut out the small talk and just get right into it.
Speaker 3 (02:30):
Before we do that,
can you tell everyone a little
bit about yourself?
Sure, yeah. So my name is Tine.
I'm, let's say, a developer in this world of Business
Central.
I'm originally from Slovenia, but right now I'm living in
Lithuania, and actually this is going to be my last year in
Lithuania, so next year I'll just say I'm Slovenian, I'm from
Slovenia. I work at Companial, but I prefer to describe myself
(02:56):
as just, you know, being young in the world of Dynamics and being
really passionate about the technology.
So I like to explore stuff, I like to blog about stuff, I like to
talk at conferences about stuff, and this stuff is usually
Business Central and, right now, a lot of AI, which is exactly
what I text him about at four in the morning: AI and the great
(03:18):
stuff that he's doing. It's almost like you're not a
developer unless you do AI.
Speaker 1 (03:23):
Nowadays you have to,
it seems.
That's how it looks.
Speaker 2 (03:27):
I mean, I hate to
always talk about AI, but it
seems to be the word of... I
don't even know how long I could go without hearing it. You
gotta say AI for SEO purposes. No, I think
you just have to completely disconnect yourself
from life, sit in the woods, and I still think AI would appear in
the trees, but I don't think you can go far with that.
(03:52):
So, AI is like anything: it's a tool.
So, AI-assisted development. And, as you had mentioned, you're
doing a lot of great things.
You are newer to the community.
You've been doing it for a while, but a lot of us older... we always
talk about... I talk with him about dinosaurs.
We have the dinosaurs and the younglings.
(04:14):
I don't know what we could call them.
I guess, what do they call the young Padawans,
younglings, in Star Wars?
Do you know, Chris?
Speaker 1 (04:23):
I think it's a
young Padawan.
Speaker 2 (04:23):
Oh, maybe it is
younglings. Yeah, I think it's
younglings. Yeah, I know you have Padawans when they're in
training, and then you have the younglings when they're starting.
That is true.
I think that's right. It's good. But I don't even know where to
begin with these questions, because I've seen so much
that you have been doing with AI-assisted development.
Can you tell us a little bit about your thoughts and what
(04:46):
you've been doing, what you've been experimenting with in that
area for AL and even any other languages? Because I saw you did
something amazing that I haven't been able to get back to, but
I want to talk about that at the end.
With that whole Python script. Chris, did you see what he
did?
Wow, Python script.
Yeah, so we'll get to that, because I've
been trying to do that for us
(05:07):
and I just haven't had the chance to do it, so maybe we can
ask a youngling to help us out. Okay.
Speaker 3 (05:15):
Well, depends who you
put me next to. But okay,
the beginning.
I think I started more than two years ago, when GitHub
Copilot was initially released, and even back then it demoed
really well, right? You type out a comment and it will propose
(05:36):
to you a procedure of what it does. And then I tried it out
with, I think, C# at the time, and it did exactly that.
So that was super exciting.
But then I switched to AL and it was meh.
But even the meh parts were more than enough to cover
the, what was it, $10 of the personal
license cost.
(05:58):
So I just kept it.
I just kept using it, and for a long time,
for me, it was just the autocomplete.
So instead of waiting for IntelliSense, instead of me
figuring out how to complete the procedure, I'll just tab it out.
And that was good enough.
But then in the past, let's say, six months, when they started
rolling out better models, when they gave us Copilot Chat,
(06:21):
then especially Copilot Edits, now with the agent mode, with
Claude powering all of that, it became more and more powerful
for AL.
But whenever I tried a new feature for Copilot, I didn't
start with AL, because my assumption is still that AL is going
to be weaker compared to a mature language, simply because
(06:42):
there's so much more training data available for something
like TypeScript.
So I always started with some of the side project ideas that I
had on my mind: a Python script, a frontend in React, whatever
came to mind. And it was so cool.
It was really amazing to see that, once Edits rolled out, I
(07:06):
could just type a sentence, this is what I'm trying to do, and it
would generate that code.
So I always approach it as: get excited with a mature language,
because you will see the full power.
And then, once you know what kind of use cases should work,
that's when I try to bring it back to AL to see, okay, so I got
this kind of scenario working in TypeScript.
(07:28):
Does this work in AL?
So in a sense, I wasn't discouraged by the use cases
that didn't work in AL.
And well, I'm still kind of on this hype train, because
everything that rolls out, even if it only works for mature
languages, it has brought me, I don't know how to describe it,
(07:50):
but a lot of enthusiasm for all of the side projects. Because,
if I was not working in BC, I would likely place myself
more on the backend side of software development.
I would never go for frontend. I'm not good at frontend, but
now I don't have to be. And all of the backend code that I would
(08:10):
write myself, now I have someone else that I can prompt to give
me the frontend parts. And now all of those side projects that
were just kind of waiting on the sidelines, they come to life in,
I don't know, a weekend.
It's super cool to work on the side projects with a tool
like that. But it also has, I won't say all of them, but a
(08:32):
limited but growing set of use cases that
I use every day for AL,
to just skip the boring part, skip the coding, and get more
into the problem solving, which is the fun part.
I mean, coding is the boring part, I would say.
Speaker 2 (08:50):
Yeah, coding is, well...
It's nice to be able to create something. But you had mentioned
a lot of things. So, prior to working with AL, did you program
in another language?
Speaker 3 (09:01):
Never professionally.
It was always more for my pleasure.
Speaker 1 (09:06):
You dabbled a little
bit. So that's interesting: it
lowers your barrier of entry to other languages because you
have Copilot.
Is that how you would look at it, where now you can
pick up other languages because there's an assistant to get
you started?
Speaker 3 (09:22):
I think that's a
separate topic we can open,
because, even though I'm super enthusiastic about using it for
side projects, I don't think I would let a developer just
generate code with AI if the developer doesn't understand
what the code is supposed to do.
Speaker 2 (09:41):
That's the key.
I think you hit the key right there, and that's where I wanted
to go with it, because I myself use Copilot.
You can do some basic things. I said, in Python,
generate me a snake game that plays itself, and it does it
pretty well. But if I look at the code, do I really understand
what it's doing?
So I think you really need to understand the concepts, and
(10:03):
then the language and the tool will help you build it, but you still
need to be able to review it.
So it's not that it would lower the skill for coding.
In a sense, you still need to have the skill of understanding
how it all works and how it's all put together, so that you can
review it as if it was a junior developer and you're going
(10:28):
through a code review.
Speaker 3 (10:30):
Yeah, I think it's a
very, very powerful tool for
senior developers, because once you're a senior developer, it
doesn't matter if you just know one language, one syntax.
You understand how code is written.
You understand how code is structured in general.
So, even though I probably couldn't write those Python
(10:51):
scripts myself, I can read the Python scripts and know exactly
what's happening there, because I understand how code works.
So I think for senior developers, this is an amazing
new tool. But it's not a tool that will let junior developers
skip a level and suddenly be experienced much, much sooner.
Speaker 1 (11:11):
Yeah, I think that's
what I was trying to say.
It's like, if you've been coding and you understand the
structure of code from maybe a specific language, getting into
other programming languages would be a little bit easier,
because you understand the structure, how it's supposed to
work.
But getting into another programming language, like
Python, for example: hey, I kind of understand how this
(11:34):
works, but I don't want to code it from scratch.
So it's an easier way to get into other programming languages,
because now you have an assistant, a Copilot assistant.
Speaker 3 (11:51):
I would agree with
that.
Recently there was a post on Reddit where somebody started
coding with Copilot, or they were using Cursor.
Speaker 2 (12:01):
We can talk about
those. I can't wait to get into
that.
Yeah, that's awesome.
You know, that's on my list too.
Speaker 3 (12:07):
So they were using
Copilot to generate some code,
generate an application. And it was fine the first day, the
second day, the third day, the fourth day. But then Copilot
just started going in circles, introducing bugs.
When they asked it to fix the bug, it introduced more bugs.
You will hit that if you don't understand the code that Copilot
(12:29):
is trying to fix for you.
If you know what should be fixed and you just don't care
about the syntax, the text that is written out, this is gonna be
awesome.
Speaker 2 (12:39):
But if you also
expect the problem solving to be
done by Copilot, you will sooner than later hit that limit
of, well, now it doesn't know either.
Yes. And see, I do
want to go back to that, because recently, this week, I had
conversations with individuals that said, oh, I can be a
(13:00):
developer now.
So I want to take it back to what you had mentioned.
There's a couple points you mentioned.
To bring it back to having a senior skill in understanding
the concepts of how things work: Copilot can be a great tool to
assist you.
If you're someone that's new to application development, you
still can use it as a tool, but don't use it as a learning tool,
(13:22):
in a sense, because you still need to understand the
fundamentals of the structure to review what it gives back to you,
because the AI will hallucinate depending upon what you ask it,
what it's trained on and what you're trying to do. And don't
make the assumption. I think that's a big disconnect.
But with that, AI can be...
(13:42):
I've heard some stories of individuals talk about the use
of it where now you have junior level developers, you have
senior level developers, and then that middle range of developer
is a different landscape.
What's your take on that? And do you see it where AI can be...
or even if you go into the world of agents?
(14:03):
I have so many questions piled up. To where you can have agents,
in essence, be junior developers writing portions of code that
then come back to a senior developer for review.
So there's a lot in that range there.
That's a good one.
Speaker 3 (14:20):
I think we're going
to struggle in the upcoming
months, upcoming years, with junior developers, because we
will have to forcefully limit ourselves, forcefully limit the
speed that we're going at, to find work for juniors. Because
(14:40):
things that I would normally pawn off to a junior, for them as
a learning opportunity, and
because I'm kind of bored of that work, can now be done in
seconds if I shoot it off to Copilot.
So I think this is going to be a big struggle, that you have to
understand why you're growing juniors, and I think more and
(15:07):
more it's going to go into the experienced or medium level
developers as well.
Right, in general, I think it's going to be the same way as it
was for the past 10 years, maybe more, where everybody needs
senior developers and it's going to be really hard to get senior
developers, because nobody's training juniors.
(15:29):
It's a double-edged sword.
Speaker 2 (15:32):
It is.
Everyone wants experience.
I think we said it on a previous podcast.
If not, I've said it to people before.
Everybody wants someone with experience, but nobody
wants to give anyone that experience.
So how can they get that experience?
It's a challenge. And there is a myth,
I think. AI... listen, AI is a
tool. I use it daily now, and even more so over the past couple of
(15:57):
weeks with some of the newer models that have been released.
I can't even keep up with the models that are coming out. But
you still need to take it back to having the fundamentals of
knowing what's going on, and not just assuming.
I think it's putting a perception in a lot of people's
minds that AI can just do it, and it makes everything easier and
it's perfect, so therefore we don't need the individual to be able to review
(16:19):
it.
At this point, in 2025... where will it be when I'm
retired?
I don't know, but hopefully I won't care and I'll be sitting
underneath a tree somewhere, if they still exist.
Speaker 1 (16:30):
So do you...
So you think it's a
double-edged sword, then? Because, considering that, you know, you get younglings,
right, coming into the
development world, and they're learning, or maybe starting to
learn, how to code using Copilot. But they're missing all
the foundation, like what seasoned developers have gone
through, where they have to
(16:52):
build it from scratch.
They understand the structure, they understand the concept. But
the newer generations are coming in.
Are they not learning it that way?
They're learning right, directly, into using AI to build a
foundation, which could lose that knowledge in translation.
I mean, Tine, you kind of started before the AI, and then
(17:15):
you're having to also incorporate AI into your
day-to-day.
I mean, how is that?
Could you do the same thing now, if you started now versus when
you first started developing?
Speaker 3 (17:28):
Okay, so that's a
two-part question.
If I go for the first part: there's a trap with AI that I
have caught myself in quite often as well, which is, AI is so
good at generating answers that seem like the correct answer
that we tend to believe it as the full truth, right?
So whenever you're exploring a new topic with AI, you will
(17:53):
think, whoa, this is crazy.
I don't have to click 10 different links, because this is
giving me a summary of everything.
However, if you would use AI to research a topic that you do
know about, you would notice that there are hallucinations.
Hallucinations are at the core of AI.
I don't think they're going away with the current
architecture that we have.
(18:13):
So AI always hallucinates, and if you trust it fully when you
learn a new topic, a technical topic, you're going to start to
learn things that don't really exist. And, as a junior
developer, you're even more eager to just... yeah, AI said that,
I'm going to use that as the source of truth.
And for me, I think it was just with one of the side projects,
(18:35):
actually. I kept trying to convince AI... well, not convince AI, but get
AI to give me an answer that was in the documentation all along.
And only after AI gave me a wrong answer for five
different prompts, that's when I said, okay, I need to find a different source
(18:56):
of the answer, and I found documentation.
Speaker 2 (18:57):
It took much less
time, actually, to go to the
documentation, but because I didn't want to believe that AI
was giving me the wrong answer, I stuck with it, and I believed
that that must be the truth.
Very good point. And people
hallucinate too, in a sense, and it's something to remember:
(19:17):
AI is a tool, and I thank you for bringing
up that point. Because even if I could talk with Chris, and
Chris doesn't know something, he gives me information, I still
have to have a sense of, do I want to believe this?
Should I do research?
Should I... you know, I have to have a general
understanding. And that's what I'm finding, the fear that I
have in some cases, is everybody
(19:38):
just believes that everything AI says is true, and we'll be stuck,
because we'll have a lot of misinformation out there.
Then everybody uses AI... not everybody,
a lot of people use AI to publish information.
So now you have AI creating information that's not true.
Then you have people learning, or training AI, on
(19:58):
that, and now it's very difficult.
It's a cycle. It's very difficult to determine what is
true, in a sense. It's even from the development point of view, of
how to do something.
Unfortunately, with development, there's usually more than one
way to do something, and sometimes, depending on how you
write it, it can cause problems, as you had mentioned, days down
(20:19):
the road, or in the future down the road too, as well.
So it's important.
Speaker 3 (20:24):
So, maybe just
to add to this cycle,
right: AI hallucinates, and then you train on hallucinations,
and you generate more hallucinations.
This is the main reason why I'm not yet sold on agents. Because
when one AI model, one AI feature, hallucinates 20% of the
(20:46):
time, I can control that.
If that's picked up by another LLM that hallucinates, and
another LLM that hallucinates, you go into a cycle where you
don't want to be. And maybe for some cases it works.
I think in development, agentic development is going to go
further than where it is right now.
But introducing agents in something like Business Central,
(21:10):
I think, will need to have a, I don't know, a slightly different
approach than just letting it loose. Completely different.
Speaker 2 (21:18):
But let's go back to
the agentic approach to anything.
I understand the hallucinations. But if you have agents that
are focused or trained on specific functions or specific
tasks, that can use different models, would it reduce the
hallucination? Because now, instead of saying, model X, or I don't want to say X,
(21:43):
but a particular model, give me this,
now you can say, okay, well, this is what I need.
It's broken down into these pieces.
Let's go out and get an agent to do something, similar to
building a house.
When you build a house, you'll use a carpenter, you'll use a
plumber, and you'll use an electrician.
You'll have a general contractor that will manage them all.
(22:06):
A general contractor may know enough to do everything, but
sending it out to the specific agent may give you better
results.
Yes, if LLMs weren't so sure that they need to be
right all the time, right?
Speaker 3 (22:23):
When's the last time
an LLM said to you, oh, I'm
sorry, I don't know that,
you know, I have to go and
ask a human, I don't know how to do that?
The LLMs will go all the way.
So, you know, break down a task, fine, okay.
Maybe in the task breakdown you already hallucinate, you send
the wrong task to the electrician, and then the electrician
hallucinates again and installs plumbing instead of wires.
(22:46):
Right. And this is the part that I'm worried about: when
you chain hallucinations, you can go sideways quite badly.
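The compounding effect described here can be put into a back-of-the-envelope sketch. The 20% error rate is the example figure from the conversation, not a measured one, and the independence assumption is an illustrative simplification:

```python
# Back-of-the-envelope sketch of chained hallucinations: if each agent
# in a chain is independently correct 80% of the time (the example rate
# from the conversation, not a measured figure), the chance the whole
# chain stays correct drops quickly.

def chain_success(per_step_accuracy: float, steps: int) -> float:
    """Probability that every step in the chain is correct,
    assuming independent errors at each step."""
    return per_step_accuracy ** steps

# One agent: right 80% of the time. Chain three and you're near a
# coin flip (0.8 ** 3 = 0.512); chain five and you're wrong roughly
# two times out of three (0.8 ** 5 ~= 0.328).
```

Under that sketch, a general contractor handing garbled tasks to hallucinating subcontractors goes sideways fast, which is the worry voiced here.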
Speaker 2 (23:00):
Understood, understood.
That is an interesting point to bring up with the, I guess,
stacked agents. That plumber can also hallucinate.
Speaker 1 (23:26):
Maybe what they think
is the right way to do things
may not always be the right way to do it, and so they're just
basing it off experience, or basing it off what they've learned.
But you may have another plumber that would do it better,
or knows the right way based on what they also learned.
So it's kind of a slippery slope when you're
(23:51):
including agents, because they're going to hallucinate.
It has to be... you have to build, like, a
perimeter around it. Like, how do you do that?
Speaker 3 (24:02):
So actually I have an
example of just an agent
hallucinating in AL.
Yesterday I put it up on Twitter or Bluesky, I don't
know.
I asked it, can you just go through this file and translate
the Dutch comments into English?
And it said, sure, no problem, I'll fix those linter errors for
you.
And it identified some linter errors and started fixing them
(24:22):
as if they were in C#.
So it didn't even understand the task that I was giving it,
and then provided a wrongful solution to a task that I never
asked to be completed.
Speaker 2 (24:37):
So with development,
to go back to you talking about
hallucinations and finding them in development: to what extent
within AL, based upon your experience with it thus far, do
you think it offsets your development?
What I mean by that is, you mentioned we had IntelliSense.
IntelliSense helped you, and now with Copilot, there's a lot of
(25:01):
autocomplete-type situations where it will try to create, you
know, if you're doing an action on a page, for example, or try
to put in the most common properties, or several
properties, including the images, which, to be honest with you,
I can't tell you if it's 50% of the time it's right with the
name of the image or not, but it tries to get there.
(25:21):
How much do you think it increases the efficiency on
development for creating code?
And if you want to break it down to tables and pages, I can
talk about some of the examples I have done, also including what
it takes to go back and review what it has done.
How do you think, from your experience with what you've
been doing, it's increased your efficiency, and in what areas?
Speaker 3 (25:46):
Time-wise, I think the
industry average is around
30 percent, and I would say my experience would roughly be
around 30 percent of time saved.
But there are a couple of parts which, on top of time saved, it brings to
(26:10):
my work.
One is naming and brainstorming.
That's something that I love to do with Copilot.
Even when I'm reviewing someone else's code, I'm like, hmm,
something feels off here.
Hey, Copilot, do you think this feels off as well? Like, what
could be a better name here?
And I do get back suggestions that I can then use in the code
(26:32):
review.
So, time saved.
The other one is, I get new ideas for, like, naming, for
restructuring my own code, but also someone else's code.
And then the third part is, it's a joy to develop for me.
You know, when I don't have to type the code out, I
don't have to complete it with IntelliSense, I can just say, I
(26:54):
know what I want the code to look like.
You do it.
Speaker 2 (26:57):
So I think this, I
don't know, enthusiasm factor
for me weighs a lot.
My enthusiasm, after
seeing some of the stuff that you've been doing, has increased.
And also I jettison some of the, you and I talked about it too,
some of the little things to take away from that, which is
(27:18):
good.
What do you find that you use it the most for?
Do you use Copilot Chat, and we'll get into Cursor
in a moment, but do you use it to, say, create procedures for me?
Do you say, create something that does this, within AL, from a
Business Central point of view? Or do you
use it more from the autocompletion point of view?
Speaker 3 (27:40):
Autocompletion,
probably, just because of how
natural it is.
I see text, I tab, I accept text.
I feel, even when I introduce Copilot to new developers, AL
developers or any other language, autocompletion nobody
struggles with.
Everybody sees how natural it is, how cool it is, and just tab,
(28:00):
tab, tab, you're done.
To work with chat, to work with edits, it takes more of a
mental switch.
You have to understand, okay, this is what I'm now going to
write. What if I get Copilot to do that for me?
I use that primarily when I have predictable code that I want to
(28:25):
write.
So, for example, today I was reimplementing one OnValidate
trigger on one field.
The other one was more or less the same.
So I just said, hey, Copilot, you do the second one, because
you will now see exactly how I want to have it redone. Or,
generate this, this, this and this field for me.
(28:46):
So whenever I have a clear view of what the next step
is going to be, that's when I go for chat.
But in terms of what I use more often: autocompletions
are just every minute, not even every day or every hour.
Speaker 2 (29:02):
Okay, that's good.
Have you used it... do you know how it works?
Because I had gone through, at one point, and I had created a
table within Business Central, and again, with Copilot, it helped
fill in a lot of the common properties, and I adjusted it.
Then I went to go through, and I needed to create a list page
(29:22):
from that. I started to type the page, and
Copilot, basically with the autocomplete, created most of
the list page for me.
So, how does that work?
And also, you mentioned the... see,
my mind's all over with this.
Which model do you
find yourself using with development
(29:45):
now? Claude? Claude Sonnet? It was recently enabled within
GitHub.
I mean, these models come out so fast, and someone has to turn
them on or enable them.
But which model are you using?
And go back to what I was talking about, to understand how it
works, to be able to create something from what you have
already created.
But then also, how do you know which model you should use for a
(30:07):
task? Or is there just one model that's better for
development in general?
Speaker 3 (30:13):
Okay.
So, starting with models for AL: Claude Sonnet 3.5 was, up to
two days ago, the best model to use, but now 3.7 is out, which
is even better.
So for AL, Claude Sonnet 3.7 is the way to go.
Other languages, that's where we could have a discussion.
Do we want to?
(30:34):
I personally sometimes go for o1 when I say, hey, this is the code,
I just have one bug.
Can you help me find the bug? Review the bug?
The reasoning models are good for that.
But still, I would say my default is always Claude, and
then, based on if I'm testing a new model out, or if I just want
(30:57):
to see how another one works, I switch. But Claude is my main
driver, more or less always.
But to the question of how does
Copilot work under the hood.
It's interesting.
Over the weekend I actually spent quite some time, let's call
it, opening the hood and seeing what is actually getting sent to
(31:20):
an LLM.
So there's two parts.
One is autocompletes and one is edits.
I'll stick to autocompletes first.
So, you've created your table with all of the fields, and now
you're creating your list page for that table. Copilot will build a generic request:
(31:40):
you are a helper for a developer,
you help do this, this, this.
So it will have a system prompt. But it will also, first
of all, know what AL looks like from all of the data that
it was trained on.
But on top of that, it's using your open tabs in VS Code as
context.
So, because you have your table opened in one of the tabs, it
(32:02):
will apply that table to the prompt and say, hey, this is
what the user has opened, just for you to know what's happening.
And from that, autocompletes already know, oh, okay, table
fields. The name of the page seems to match.
Then I have an idea of what I would suggest.
Right? It knows what the page looks like, because it was
trained on some AL data during the training of the model, and
(32:25):
it knows specifically which fields you would like, because in
the prompt there was the context of your own table.
So one very important part here is, that works well when you
don't have a ton of tabs opened.
If you have 50 tabs open, Copilot will try to take
(32:47):
something, but it won't really know what fits best.
It cannot take everything.
Context is limited, so for best results it's good to have only
the relevant files opened.
It takes up to four open tabs as context. And again, even
with four open tabs, if I have a
(33:08):
management codeunit which has 3,000 lines inside of it,
Copilot is not going to take all of that.
It will again try to take something out of that management
codeunit, but not everything.
So, to get the best experience when using AI for development,
keep only the relevant files opened,
(33:29):
and let's have small files.
Let's try to keep files small, not only for developers, but now
also for AI.
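The mechanism described here can be sketched roughly in code. Only the four-tab cap and the partial inclusion of large files come from the conversation; the function names, prompt layout, and character budget are illustrative assumptions, not Copilot's actual internals:

```python
# Rough sketch of how an autocomplete request might be assembled from
# editor state. Only the four-tab cap and the truncation of large files
# come from the episode; everything else here is an illustrative guess.

MAX_TABS = 4               # only a few open tabs fit in the context
MAX_CHARS_PER_TAB = 2000   # large files are partially included, not whole

SYSTEM_PROMPT = "You are a code completion assistant for a developer."

def build_autocomplete_prompt(open_tabs, current_file, cursor_prefix):
    """open_tabs: list of (filename, content) pairs for other open editors;
    cursor_prefix: text in the current file up to the cursor position."""
    parts = [SYSTEM_PROMPT]
    for name, content in open_tabs[:MAX_TABS]:
        # Tabs beyond the cap, and content beyond the per-file budget,
        # are silently dropped -- which is why irrelevant open files
        # crowd out useful context.
        parts.append(f"// Open tab: {name}\n{content[:MAX_CHARS_PER_TAB]}")
    parts.append(f"// Current file: {current_file}\n{cursor_prefix}")
    return "\n\n".join(parts)
```

Seen this way, "keep only the relevant files opened, and keep files small" is context-budget hygiene: anything past the caps never reaches the model at all.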
Speaker 2 (33:38):
That's good to know.
That's interesting, to see how that is using what you have open
for more context on top of what it has been trained on.
And it's a big time savings, as
you could see there, when you do a list page, because there's
usually a lot of typing in that.
So, to be able to start a page, give it a similar name, as you
(33:59):
had mentioned, and for it to create most of that information,
where I just have to go through and edit it, is a great time
savings.
But I also want to keep going back to saying, you do need to
review it, because you just need to review it.
Speaker 1 (34:13):
Is it only aware of up
to four tabs? Is that a
limitation of the tool itself, because it's too much data to
take? Or do you think there's going to be an increase down the
road, where it will be more
aware of more tabs?
Speaker 3 (34:29):
When we get
to bigger contexts,
we might get more open tabs added. But there's a big
difference with autocompletes.
You need your completions to beready fast, because if I type a
sentence, I want to wait half asecond and I want to see that
gray string right, what comesnext?
I want to wait half a secondand I want to see that gray
(34:49):
string right.
What comes next?
When you are talking about Edits, so let's call it Copilot Chat, but the one that actually edits your files, that one doesn't take your open files into the context.
You drag the files into what they've called a working set.
You have control over the context that gets sent as a
(35:13):
prompt to an LLM, and that working set currently has a limitation of 10 files.
So that's already a big increase.
And I would say if you're creating new files, like create a list page out of a table, using Edits is much better than trying to auto-complete it, because then you can say here's
(35:36):
my table, and you can say here's my page object if you already have something in it, and you can say add all of the fields from the table to the list, right?
You have way more control over what hits the LLM compared to the autocomplete, where the model makes the decisions for you: what's in the context, what's the prompt that's sent
(35:58):
to the model.
So Edits is super powerful for new objects, especially if you already have a comparable object somewhere in your code base that you can drag and drop as context.
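As a rough illustration of the workflow described (the object names and IDs are hypothetical, and any generated output must still be reviewed), a prompt like "add all of the fields from the table to the list" would take a table like the first object as context and produce something like the second:

```al
// Hypothetical source table dragged into the Edits working set as context.
table 50100 Brand
{
    fields
    {
        field(1; "Code"; Code[20]) { Caption = 'Code'; }
        field(2; Name; Text[100]) { Caption = 'Name'; }
    }
    keys
    {
        key(PK; "Code") { Clustered = true; }
    }
}

// The kind of list page such a prompt could generate; review before use.
page 50100 "Brand List"
{
    PageType = List;
    SourceTable = Brand;
    ApplicationArea = All;
    UsageCategory = Lists;

    layout
    {
        area(Content)
        {
            repeater(General)
            {
                field("Code"; Rec."Code") { ApplicationArea = All; }
                field(Name; Rec.Name) { ApplicationArea = All; }
            }
        }
    }
}
```

The time saving comes from the repeater boilerplate: for wide tables that is most of the typing in a list page.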
Speaker 2 (36:16):
You had mentioned,
you looked under the hood to see
what it's using.
Can you use local?
That's the other thing a lot of individuals are doing now.
Can you use local large language models with development so that you could see this, or how did you see that information?
Speaker 1 (36:33):
There's two questions
there. Yeah, can you?
Speaker 2 (36:35):
change it to use a
different model, number one.
Two, how did you actually get to be able to look under the hood?
Speaker 3 (36:45):
Can you change it to
a local model?
You cannot.
I very much hope to see that one day.
There's a window where you can send some feedback to Microsoft, and I've probably sent seven feedbacks already.
I want to have a local option as well, because when I'm on a
(37:05):
plane I don't have Copilot, or sometimes Copilot is slow, and this year I do want to have a couple of conference sessions where I'm showcasing some of the examples with Copilot.
It's a risk if Copilot is going to say, yeah, I'm having a slow day, let's wait five minutes before this is completed, when usually it takes three seconds.
(37:26):
So can you use a local model?
No.
How did I get to the information?
There's a tool called Fiddler, which is used for that.
Yeah, it's been around for I don't know a lot
(37:50):
of years.
He's trying to be nice, Chris.
Speaker 2 (37:53):
I remember Fiddler
from when I was younger than you, so you know Fiddler.
Speaker 3 (37:59):
Fiddler basically
allows you to inspect the network traffic that's going from your machine outside.
So with Fiddler you basically place yourself as a man in the middle between VS Code and the endpoints of Copilot.
And I was just typing prompts in VS Code, and in Fiddler I was
(38:21):
watching what's hitting the endpoint, so what's happening there, and I could see the system prompts.
I could see which files are being used as context.
I really have to give credit to Vjeko for this.
He is the one that shared it with me at Directions first.
He's the one that said, hey, Edits are really cool for AL.
(38:42):
It generates files that compile.
If you use it with Claude, you have to go use it now.
So that's where I first got, let's say, the initial push.
Okay, there's more than autocompletes.
And he's the one that said, I've used Fiddler to see what's happening.
Speaker 2 (38:56):
So it's like, oh, I
didn't even think about that no,
it's a great idea and you'reable to see what information
sent.
So it was sending yourinformation, your table.
So if we had an open tab for atable, does that get sent up to
the ai model?
Speaker 3 (39:15):
Yes, as well.
So a part of the prompt is going to say the generic stuff, you're a developer helper, you do this and that.
Part of the prompt is going to say, this is what the user has opened.
And part of the prompt is going to say, these are the lines they are currently working on.
So a few lines above what you're doing, a few lines below
(39:36):
what you're doing, and it will know, okay, so we're suggesting autocompletions for this specific part, for this line.
And there's another feature called temporal context, which is going to say, these are the changes that the user has just made.
So they removed a line here and added a line here.
(39:57):
So it doesn't only give the static files to the LLM, it also tells the LLM, this is where the user is positioned right now and this is what they have recently done.
So it has more and more ideas: okay, this is what I think the user will try to do next.
This is especially powerful for the next edit suggestions.
(40:21):
So the part where it doesn't only try to autocomplete one line for you, but it will also look: where does Copilot think you're going to go next?
Up to last month, you were only able to get a completion if you were standing at the end of an existing code line or if
(40:45):
you were standing in a new line, but with the next edit suggestions, it can suggest edits in the existing code as well.
So if I'm standing in line five, it might suggest something in line five.
But then it's going to say, hey, I think you're going to want to jump to line 17 now, and I can just press tab and boom, my cursor is on line 17.
(41:05):
And it says, I think you're going to want to rename this variable too.
So I just press tab and it renames that variable on line 17.
So the next edit suggestions are like autocomplete, but on another level.
Speaker 2 (41:17):
Wow.
I go back to the days of just having to type all this stuff out.
So GitHub Copilot has a number of features.
It has the chat that we've been talking about, the autocomplete, the next edit.
It has create documentation.
(41:38):
Have you used it to create any documentation for AL or other languages?
Speaker 3 (41:45):
One of the rules that
we have on the codebase I'm
currently working on is that allof the procedures have to be
internal.
If they're public, they needthe XML documentation.
So I've been using Copilot alot for that.
I just say, hey, I need thedocumentation here.
Is it perfect?
Absolutely not, but it gives me a good start and, especially
(42:07):
with developers whose native language is not English, it brings them up two levels in the way they phrase the documentation sentences, in the way they phrase what a certain parameter is supposed to do.
It raises all of the flags for double negatives, you know, things that to you would sound weird, but when
(42:30):
English is not your native language, you're just gonna say, sounds good when I translate it to Slovenian.
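For reference, a small sketch (hypothetical codeunit and procedure) of the AL XML documentation style described here, the kind of comment block Copilot can draft for a public procedure:

```al
codeunit 50101 "Brand Mgt."
{
    /// <summary>
    /// Renames a brand and updates all records that reference it.
    /// </summary>
    /// <param name="OldName">The current name of the brand.</param>
    /// <param name="NewName">The name the brand should be renamed to.</param>
    /// <returns>True if the rename succeeded; otherwise false.</returns>
    procedure RenameBrand(OldName: Text[100]; NewName: Text[100]): Boolean
    begin
        // Implementation omitted; the point is the documentation shape above.
        exit(true);
    end;
}
```

The draft still needs review, as said above, but it gives non-native English speakers consistently phrased summaries and parameter descriptions to start from.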
Speaker 2 (42:36):
So I don't see what
your problem is.
So that's what you use the documentation for.
And then the other one, I haven't tried it for AL.
I did try it over the weekend for another language, but it can create tests for code.
Have you done any experimentation with creating
(42:58):
tests for code?
Because I am a big fan of automated testing in development.
The whole page scripting testing is a whole other topic that I've been on now too, but from the development point of view, have you used it to create tests for AL?
Speaker 3 (43:17):
I did.
So for testing, I would say it's a two-part story.
First, the tests that I tried to generate were successful, exactly the tests that I wanted.
I didn't fix anything except the object ID, because nobody likes object IDs.
(43:37):
And it's not even Copilot's doing why the tests were so good.
I had to add a table, I had to add a codeunit, I had to add a page.
But the functionality was very similar to a table, a codeunit and a page I already had, and for those three objects I already had tests.
So when I created the new table, page and codeunit, I pulled
(44:00):
those in as context.
I said, hey, for the categories we also have tests.
And I pulled in the file, this is how tests look, and I said, create a new test for brands, which was the functionality I was adding.
And because it knew exactly how tests should look, it knew how to create those new tests.
(44:23):
So when you are trying to generate tests for something that you've already tested in a similar manner, it is going to do a good job.
If we expect Copilot to write a bunch of unit or integration or end-to-end tests when we give it a management codeunit,
(44:45):
that's not really going to work.
But again, Vjeko had a session on how you can get new tests going with Copilot, not only the tests that you've already created.
The answer is interfaces.
He's been talking about this topic for a number of years now.
That you should use interfaces for unit testing, that you
(45:07):
should use interfaces to isolate different parts from one another.
And when your code is modular, when your code has interfaces, it doesn't look that far from the code you would write in C#, and with C# the models are really good because it's a mature language.
So if your code is SOLID, if your code has interfaces, if
(45:29):
it's modular, you can simply prompt Copilot and say, I am now testing this interface, this function, and it will create stubs, spies, all of those test doubles.
In the session he generated, I don't know, 35 tests in 15 minutes maybe.
(45:51):
So it's doable, but not if our expectation is that we can throw in our legacy code from 20 years ago and say, now I need tests.
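As a rough sketch of the interface-based isolation described (all names hypothetical), an AL interface lets a test swap the real dependency for a deterministic stub, which is exactly the kind of test double Copilot can generate:

```al
// Hypothetical interface that isolates a dependency so tests can replace it.
interface IPriceProvider
{
    procedure GetPrice(ItemNo: Code[20]): Decimal;
}

// A stub test double: always returns a fixed price, so tests are deterministic.
codeunit 50110 "Stub Price Provider" implements IPriceProvider
{
    procedure GetPrice(ItemNo: Code[20]): Decimal
    begin
        exit(10.0);
    end;
}

codeunit 50111 "Price Calc. Test"
{
    Subtype = Test;

    [Test]
    procedure LineAmountUsesProviderPrice()
    var
        Provider: Interface IPriceProvider;
        StubProvider: Codeunit "Stub Price Provider";
    begin
        // Assign the stub through the interface, isolating the code under test.
        Provider := StubProvider;
        // 3 units at the stubbed price of 10.0 should give 30.0.
        if Provider.GetPrice('ITEM-1') * 3 <> 30.0 then
            Error('Expected line amount of 30');
    end;
}
```

Because the code talks to an interface rather than a concrete codeunit, the prompt "I am now testing this interface" gives the model a clear, C#-like seam to generate stubs and spies against.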
Speaker 2 (46:07):
Okay, I like that, the whole interface thing.
I think that's underutilized, and I think it's because that's a whole other topic that you and I talk about as well too.
So that's interesting, so you can help stub out some of the tests so that you can test the code, which is extremely beneficial.
Speaker 3 (46:31):
Now just to jump a
little bit, you use Cursor for development.
Now I use all three, and by all three I mean VS Code, VS Code Insiders, which is getting all the nice stuff, and Cursor.
I've started using Cursor as well.
Yes, what is Cursor?
Cursor is a fork made from VS Code.
(46:55):
So imagine, at some point the team said, hey, VS Code, we like that, but we would like to create our own VS Code which is going to focus on being AI-first.
So this is an IDE, a development environment which has AI at the core.
So that's what Cursor is trying to be, the AI-first IDE.
Speaker 2 (47:18):
So, with it being a
fork of VS Code, you have all of
the functionality of VS Code.
So what I mean by that is thestandard functionality, and you
can use the extensions from themarketplace that you can install
.
So you can use the AL extension.
You can use other extensionswith it as well.
(47:38):
Now, with it being AI-first, what's the difference with using Cursor versus using GitHub Copilot or GitHub Copilot Chat within VS Code?
Speaker 3 (47:52):
So I was surprised by
how well the switch to Cursor went.
I knew going into it that this is a fork from VS Code, so everything should work.
But it's crazy how well everything works.
You open it up, all of your extensions are already there.
You open a project, it can even open the same open tabs that you had in VS Code.
So transitioning to Cursor is a non-issue.
(48:16):
But to your second question, what does it mean that it's more AI-centric?
A lot of the features that we now have in GitHub Copilot, actually I'm not going to say they started in Cursor, but they were in Cursor before they reached GitHub Copilot. Things
(48:38):
like the next edit suggestions, which I talked about earlier, so let me jump to line number 17 to rename a variable, that was in Cursor for quite some time before it reached the GitHub Copilot extension.
But on top of that, for me, what's really cool?
For example, when I work with Python, you can get some results
(49:04):
directly in the terminal.
I run a Python script and then it says error in the terminal.
When I hover with my mouse over that error, there's already a button saying, would you just like to transfer this to the AI pane, to the AI window?
You click that button and AI takes over, because that's more or less what I would do
(49:24):
anyway.
I would say, AI, you try to fix the error first, and if you can't, I'll see what I can do.
So that was one of the really cool additions.
Claude 3.7 came one day earlier to Cursor, which was part of the
(49:44):
reason why I bought a Pro license for Cursor two days ago, because I really wanted to see if it's better for AL or not.
Agents were there, but the gap is not as big.
One of the quality-of-life features that I've noticed is
(50:06):
you can now have Cursor files where you specify what kind of files I never want to send to an LLM.
It's a security issue.
I don't want to send my secrets, my keys, to an LLM ever.
It doesn't matter if they promise me 17 times that they won't store my data.
That's something Cursor now has, and
(50:29):
GitHub Copilot is not there yet.
So they're always thinking, with this AI-first approach, how can we make the whole experience better?
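For context, the mechanism described is commonly a `.cursorignore` file in the project root, treat this as a sketch rather than a definitive reference to Cursor's current behavior. It uses `.gitignore`-style patterns, and matched files are excluded from the context sent to the model:

```
# Hypothetical .cursorignore (gitignore-style patterns).
# Files matched here should never be sent as context to an LLM.
.env
secrets/
*.pfx
*.key
```

The same reasoning applies to any AL project: credentials in environment files or launch configurations have no business being part of a prompt.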
Speaker 2 (50:41):
So why do you use all
three?
Speaker 3 (50:45):
Enthusiasm.
I'm comparing what works best for me right now.
I'm not driving it as, like, the tool for everyone to use.
I'm standing behind GitHub Copilot, primarily because I got our organization to buy licenses for all of the
(51:05):
experienced plus developers.
It would be very hard for me to make the case, nah, now let's buy Cursor for some of them.
So I endorse GitHub Copilot, but I also love what Cursor is doing.
And Insiders is just to see what's coming in the future releases.
Speaker 2 (51:23):
Oh no, I understand
the Insiders.
I was just mentioning because, if they're in essence the same, with some more oomph
behind Cursor, it seems like, with some of the functionality, why would someone use VS Code instead of Cursor, outside of any fees as well?
I mean, anytime there's a fee, you always have to take that into consideration too.
Speaker 3 (51:46):
I don't think there's
a strong argument to go for either.
VS Code, you're used to it.
It's not like Cursor is that much different.
It's more or less the same, but you do still have to make a switch.
For me it was a big turn-off that there's another theme, and it took me two days, oh, I can switch the theme, right, it's the same as in VS Code, right?
But that was already something small and I was annoyed by it.
(52:09):
And there was another thing that I liked with Cursor, ah, yeah, yeah.
So even though it's the same set of features, they are implemented differently.
The prompts that Cursor sends are slightly different.
So to give you an example, like I said previously, I gave Cursor
(52:31):
the task to translate Dutch comments into English, and it started fixing linter errors.
At first I thought, this is weird.
I tried it five times.
Five times, instead of translating, it went to fix linter errors.
I tried the same prompt in VS Code, it translated the comments.
So the same feature behaves differently, even if it's using
(52:55):
the same model, because the prompts are different, and the way it's implemented is different as well.
Also, in terms of performance, Cursor Composer, which is the equivalent of GitHub Copilot Edits, for me seems to work better on large files.
It knows that it has to jump to a certain section and make the
(53:17):
edits there, while GitHub Copilot will try to go through the whole file line by line and then fix the middle part where my changes actually need to be applied.
So the same feature, but works differently.
tried cursor yet.
Speaker 2 (53:35):
Okay, I have not tried Cursor yet.
I've been sticking to what I know.
I'm an old guy, so I stick with VS Code, because I've been doing great with GitHub Copilot and GitHub Copilot Chat, and then also using some tools outside as well.
But I think I'll have to give it a shot.
My enthusiasm, I'm deriving it from your enthusiasm, I appreciate it.
(53:55):
I want to take a step back for a moment.
You had mentioned something earlier, and I want your opinion on this, because I've talked about this with several individuals.
You had mentioned that you like to do the back-end development.
You don't need to do the front-end development, and then you can use AI to help create the front-end, in essence.
At this point, do you ever see a point where ERP
(54:20):
software will be faceless and AI can generate the front-end for users based upon what they need?
This is where I'm going with this: the ERP would have the back-end data, the back-end business logic, but the front-end interface would be generated automatically by AI based upon
(54:42):
the context of the user using or doing a function.
Do you think that's feasible, possible, plausible, likely?
Speaker 3 (54:52):
Technology-wise, I
think we're going to get there.
There was that whole Project Sophia, I'm not sure if you saw that.
That was a year or two years ago, where it generates the UI based on what you want.
Would the same apply to ERPs?
I doubt it.
My background is in accounting.
(55:12):
Accountants don't like changing UIs all the time.
Right? When I'm used to how the UI works for me to enter ledgers, that's how I want my UI.
Don't change it.
Even the changes from BC 25 to 26, accountants are going to be annoyed by that.
Speaker 2 (55:28):
So the idea that AI
is going to regenerate the UI every time I walk in, or that it's going to be different between two users, I don't think that's going to happen.
It doesn't have to be different for you, because if you're an accountant and you have a certain view, you could have a consistent view for you every time you use it, but it'd be tailored for your specific function, and then, even if you
(55:48):
had to switch to use something else, it could still generate the same UI.
In essence, it's just something that I'm thinking about: how much of this, where the back end can have all the business logic, you can have all the actions, you can have all of the functions, the data store.
Don't even get me going on where I think data will be.
Data will not be so structured, in my opinion, in the future.
(56:09):
We're already getting there, I think.
But if you have that framework, you could have it built on your own, because even in Business Central right now, you have personalizations.
The ability to make all these personalizations by role or by user in essence gives you the ability to change the interface for a specific group or for a
(56:30):
specific person.
Speaker 1 (56:31):
But is it?
Would it matter, though?
I think it's going to shrink the need for UI.
If a lot of things are being automated, there's going to be less interaction with the application.
So now, I think it's going to be narrowed to a specific
(56:52):
functionality where you do require a human interaction.
I think that's going to be the more focused area, but I'm curious what you think today.
Speaker 3 (57:05):
An interesting observation.
So I'm not so sure about the idea of a fully customizable UI.
I do think, especially in work apps, we would like things to stay the way they are.
Once something works for me, I want it to stay that way, because I know that's what I need to do to finish my work.
But something that I have seenis that AI technology is going
(57:32):
to drive UI optimizations, andnot because AI likes users, but
more from the agent perspective.
So I have a really cool demo that I hope I can show to you guys at some point, where you ask an agent, I would like to go to Business Central, create an invoice for this customer and
(57:53):
enter this item.
And what does it do?
It spins up a new browser.
It goes to Business Central, itlogs in with the credentials
that I gave it and then it comesinto the role center.
It's going to look what are allthe available actions on the
role center?
Ah, invoice, create invoice.
That's the one I want.
So then it clicks the same UIthat I, as a user, would.
(58:14):
Once it's on the invoice card,it again scans all of the
available UI options.
Which fields should I populatefor a customer?
Ooh, this one says customername.
Okay, let me put that in, andit goes on to items and
quantities and so on.
But the agent can use all of the same functionalities that I as a
(58:36):
user have available to me.
I don't have to develop anything specifically in Business Central, so no AL code needed for this additional use case to be covered.
However, if my field is hidden somewhere deep down, the agent won't find it.
(58:56):
If my action is somewhere very deep down, the agent won't find it.
So where I do think we're going to go is the direction of simplified UI, because now we have an additional reason why the UI should be simple, why important fields should be where they should be, and not just added to General because the user
(59:17):
will figure it out and personalize them themselves.
Speaker 2 (59:22):
Taking it to the
point of where, I like that, because it does take it to the point where the UI is generated based upon the actions that you have.
If you want to show the demo, you can share your screen now.
I'll look for it.
If you're not ready, we'll do it another time.
Speaker 1 (59:40):
But you have the
ability to share your screen.
Speaker 2 (59:43):
If you're comfortable
doing it, if the information is something that you don't mind sharing to be recorded.
If not, I respect that, and you know I'll call you after this and we'll set up a Teams call, because I want to see it.
Speaker 1 (59:56):
Yeah, so the...
So you're saying the UI is going to be irrelevant in terms of, like, designing it, because it's going to be simplified, as you had mentioned?
So I don't know if the effort is going to be...
It's not irrelevant.
Speaker 2 (01:00:12):
It's more relevant to
keep it simple and not have it so complex.
Oh yeah, that's, sorry.
Speaker 1 (01:00:18):
Yeah, I think my
explanation is slightly
different.
Speaker 3 (01:00:21):
Like, it's irrelevant in terms of, like, putting a lot of effort into it.
More of like, hey, right, we have the brains that can figure out what needs to be
(01:00:43):
done, and we will end up with more simple pages being built specifically for agents.
But at that point we're also going to realize, oh, but that's also good for users, right?
If I have a page that shows the information that I need to access every day, I don't have to go through the list and the card and figure out a certain number.
(01:01:05):
So I think that the agent transformation is going to be really beneficial for users that don't like AI as well.
Speaker 2 (01:01:14):
I think, that's going
to be the next big push to this
agentics world.
Speaker 1 (01:01:19):
When you are driving
your Tesla, right?
You just say warm up my seat.
I don't need to fiddle aroundor trying to navigate that with
buttons.
Speaker 2 (01:01:28):
I've never told my
car to warm up my seat.
Speaker 1 (01:01:31):
Oh you should, you
should try it, chris, chris, my
car to warm up my seat?
Oh you should, you should tryit chris, chris, I tell you to
cool my seat.
You could say the same thinglike oh yeah, where you're at
yes, yeah, that's, I was chris,I was joking that's right, I
forgot.
Cool my seat, not heat my seatso no, it is.
Speaker 2 (01:01:49):
It's the
simplification of the UI in
essence.
This is where I see it going.
I like Tina, I like yourexplanation of it, where you
have a core and that's kind of abetter way to explain what I
was trying to visualize whereyou had the core actions already
defined and then now that UIcan be generated for someone for
(01:02:10):
their specific needs, withoutthe need to personalize it and
go to all these, Even ourselves.
Now, with some of the featuresthat you can do with the
promoted actions and the fieldclassification, you know to do
the show more, show less, theimportance and a few other
factors of it, I think willchange it well.
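The features mentioned here look roughly like this in AL (a sketch with hypothetical names): field `Importance` drives the show more / show less behavior, and promoted actions surface key actions prominently, which also makes them easier for an agent to discover:

```al
page 50120 "Brand Card"
{
    PageType = Card;
    SourceTable = Brand;
    ApplicationArea = All;

    layout
    {
        area(Content)
        {
            group(General)
            {
                field(Name; Rec.Name) { ApplicationArea = All; }
                // Importance = Additional hides the field behind "Show more".
                field("Code"; Rec."Code")
                {
                    ApplicationArea = All;
                    Importance = Additional;
                }
            }
        }
    }

    actions
    {
        area(Processing)
        {
            action(RenameBrand)
            {
                ApplicationArea = All;
                Caption = 'Rename Brand';
                Image = Edit;

                trigger OnAction()
                begin
                    Message('Rename flow would run here.');
                end;
            }
        }
        area(Promoted)
        {
            // Promoting the action keeps it visible on the action bar,
            // for users and for UI-scanning agents alike.
            actionref(RenameBrand_Promoted; RenameBrand) { }
        }
    }
}
```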
Speaker 3 (01:02:30):
Well, I think you actually
bring up a good point, right?
The most traditional machine learning models were always used to find, I don't know, what does the user most commonly purchase?
Right, but in the Business Central sense, what does the user most commonly click?
And you could have suggested customizations, which would
(01:02:53):
essentially bring us to the point you were saying.
So can I have a tailored UI?
Just not necessarily from the ground up, but suggest to someone, hey, you've been clicking this action every day for the past two weeks.
What if you move it somewhere else?
That, I think, could also be a cool direction.
Speaker 2 (01:03:13):
It will be a step to
get there.
So how do you feel about a demo, yes or no?
Speaker 3 (01:03:17):
I would love to, but I have no clue where the project is.
Speaker 2 (01:03:22):
I will definitely
show you that once I find the project.
Okay, let me know, I'm interested in seeing it, even just knowing it's there, or you can show it to me.
But then also you should blog about it and show it.
Speaker 3 (01:03:36):
I think that would be
beneficial and jaw-dropping, as I say to many individuals.
I actually do want to write a post about it, because you can probably tell that I'm quite enthusiastic about this whole AI technology, but when it comes to agents I'm
(01:03:58):
skeptical.
When I first tried to run that demo to see how AI recognizes the Business Central UI, I found it very cool.
It demos very well, but what's the use case?
Like, for me, when it
(01:04:19):
comes to Business Central and agents, I'm always coming back to this: we don't want mistakes in an ERP, and language models are kind of built on the fact that they will hallucinate at some point.
It's a fact, they will hallucinate.
So I have a hard time understanding right now what
(01:04:42):
agents will look like, not only what agents could look like.
Because, like I said, I see all sorts of very cool-looking demos, but with those demos I'm not going to convince somebody who needs to have their costs posted to the correct account without a mistake.
There's a risk.
Speaker 2 (01:05:01):
The simplistic
version is in the demo, but I'm not saying I'm a proponent or an opponent of any of this.
But we also have to remember, how many times do humans make mistakes?
I'm not saying, I'm just saying,
(01:05:21):
we have to have a sense of reality.
Chris, go back to the Tesla.
I will tell you, I have the Tesla drive me everywhere, and the Tesla will see things before I do and react before I could.
So I'm not saying you have to trust everything every time.
It's a matter of, to use the word that we use over here,
(01:05:43):
checks and balances.
Everyone's saying that AI is not perfect, but why is everyone concerned with it if we just accept the assumption that it's okay that AI is not perfect?
Because, on the flip side, neither are humans.
Because I can tell you a number of times, Chris and Tine, I don't know, depending upon your role, oh, I posted this to
(01:06:03):
the wrong GL account.
What do I do?
Speaker 1 (01:06:07):
Yeah, I think it's
understanding the risk, as you said.
You have to be able to understand that risk, and the goal is to minimize that risk.
Speaker 2 (01:06:16):
Correct, yes.
So are you more risk-averse by having AI that might be finely tuned to a specific task?
And AI may not be applicable for all tasks in this agentification world, or agentic, agentification, which word should I use?
Agentification.
(01:06:37):
So that's what I mean.
This is a philosophical discussion in a sense, but I'm of the mindset, and I say over and over again, use the right tool for the job.
I understand the risk of using a human for certain things, and I understand the risk of using a computer for certain things.
We use calculators.
Nobody wanted to use a calculator before, and they
(01:06:58):
thought that everybody would have to do the math and then you would check the calculator.
Speaker 3 (01:07:04):
And now nobody knows
math, but we all now use calculators.
What if those calculators were wrong 20 percent of the time, and you don't know which 20 percent of the time?
How would we still use calculators like that, because humans can't calculate correctly 100 percent of the time either?
Speaker 2 (01:07:23):
But that's just,
that's kind of, you hit the point that I was trying to make.
I mean, math is math.
At this point, you know, I don't know about every complex calculator, but a scientific calculator and a regular calculator are going to solve the problem for you, because that's a finite task,
(01:07:43):
in my opinion.
You need to do an operation on these numbers.
You can use these agents for finite tasks.
So it's a matter of using the right agent for the right task, and up until the point where those agents can do more complex tasks, you can rely on a human.
But also a human at that point may be wrong a percentage of the
(01:08:05):
time, because they're tired.
Look at what happens to humans, why they make mistakes.
They're distracted, they're tired, they don't want to be there.
You know, there's a lot of external factors that cause them to hallucinate.
And then how do we catch it?
Speaker 3 (01:08:26):
That's a very good
point, because I've heard this comparison a lot of times, humans also hallucinate, right?
I don't buy it as much, but nevertheless it brings up a point.
Okay, so if a human can make an error in our system, then an
(01:08:48):
agent can make that same error.
What if we put a process in place so that the agent cannot go and purchase a thousand bicycles from a vendor?
But at the same time, if a human was able to do that, well, what if we prevent it for the humans as well?
So that's another aspect.
So if agents are going to fix the UI for us, I think agents
(01:09:10):
are also going to highlight all of these process issues that we already had.
Speaker 2 (01:09:16):
We just somehow never
stumbled upon them.
Well, that's where some of the workflows and the approvals come in, in a business process.
So I'm not saying I'm changing the thought process.
I'm just trying to look at a different perspective, because everyone says, oh, I can make an LLM say five plus five is nine, it's so dumb.
Well, it's not supposed to be doing that anyway, right?
(01:09:38):
So it's almost like we're trying to find fault with it.
But if you understand the limitation, you know where to apply it.
Speaker 1 (01:09:46):
Yeah, I think that's
my point.
As long as you understand that there is going to be a risk, and the goal is always to minimize risk.
With the human aspect, even if you put parameters around it, humans tend to want to figure out loopholes, they want to be curious, and maybe eventually they get around it.
But if you have an agent that is specific to a task and always
(01:10:10):
does the same task, it minimizes the risk of someone getting around that workflow.
So it's just understanding this tool, what it can do for you and what it can't do for you.
But there's always going to be something to consider, or you always should consider that there's always going to be a risk.
How much of that risk you want to take on, that's the question.
Speaker 3 (01:10:33):
As a business, one thing that I struggle with is: if a task is so straightforward that we can confidently say an agent is not going to hallucinate on it, then probably there's also a better way we can solve that problem.
There's Power Automate, which is going to execute it 100% of the time, in that specific order, right?
(01:10:54):
Or I can write AL code. So I'm not pushing too much against agents.
I know they're coming, I know they're going to be awesome.
I've seen demos.
I love them, but I just need to find what is a good use case for agents, because if it's straightforward, there are other tools that are 100% reliable, and if it's not as
(01:11:14):
certain, then it's a good use case for agents.
But how do you deal with that uncertainty?
And I agree with both of you. Probably the answer is: what's the right balance of human in the loop?
We don't want to necessarily go and revisit each step the agent took, but we also don't want it to execute 15 steps before we jump in.
(01:11:35):
So what's the right balance?
Yeah, because in Cursor and GitHub Copilot, that's exactly what's happening.
Agents are going on, and at some point I say: stop, I know the best way forward, because you seem to be struggling.
And I think the reason agents are going to work so well
(01:11:56):
in development is because it's constantly human in the loop.
We see everything that's happening, and that's why, even though we know LLMs hallucinate, we also find a ton of value in them.
Speaker 2 (01:12:09):
Yes, that was a great point, like your last point about the human interaction.
We know it hallucinates.
We accept the hallucination because we know we're reviewing and correcting as we go.
Yeah.
So it's part of the process. As I mentioned earlier, I had it
(01:12:31):
create fields, pages, or even actions, and some of the properties were wrong.
I just go in and fix the Image name, for example, which it seems to mostly get incorrect.
It tries to match it to the caption, but it does a good job.
And I still just accept it, because I didn't have to type the other 10 properties that were in there, including the
(01:12:54):
brackets and everything else.
It's wonderful.
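To illustrate for readers who don't write AL, a generated page action of the kind being described might look something like the sketch below. The object ID, names, and trigger body are hypothetical; the Image property is the one the speaker says Copilot tends to guess from the caption and get wrong.

```al
// Hypothetical Business Central page extension; IDs and names are illustrative.
pageextension 50100 "Item List Ext." extends "Item List"
{
    actions
    {
        addlast(Processing)
        {
            action(PostSelectedOrder)
            {
                Caption = 'Post Selected Order';
                ApplicationArea = All;
                // Copilot often infers Image from the caption: 'Post' is a
                // valid Image value, but a guessed name like
                // 'PostSelectedOrder' is not, and must be fixed by hand.
                Image = Post;

                trigger OnAction()
                begin
                    Message('Posting...');
                end;
            }
        }
    }
}
```

Even when one property needs a manual fix, the surrounding boilerplate, the brackets, ApplicationArea, and trigger skeleton, comes for free, which is the trade-off being described.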
Well, sir, thank you again for a wonderful conversation.
Speaker 1 (01:13:06):
I could talk with you for days about this, and you know that I love the conversation, because I think we all have different takes on this agentic world and where it's going.
I think it's coming.
Speaker 2 (01:13:19):
But I'm with Dina on that point, and you in a sense, Chris.
It's just knowing the application and the right use case for it.
I think the misconception a lot of people have with AI at this point is that AI is one thing, instead of AI being all the different variations that we have, even with the different models, and that an agent can't do everything, when you don't even have a person that can do everything.
(01:13:40):
So the point is that agents will replace specific functions or tasks to make them easier.
I'll just say easier.
So I'm not saying agents replace people.
To Dina's point, there's a balance of where do you have a human?
Because if you look at everything that we've been doing through the
(01:14:00):
evolution of time, even going back through the Industrial Revolution, everybody's created tools to do a task.
No tool has been able to do everything.
So they've created hammers, they created power tools, we created the horse and buggy, we created the automobile.
Every step forward, there's been resistance to a lot of those tools, saying, oh, a hand can do it so much better.
(01:14:22):
But once you have a tool that does a specific task, it can complete that function and task, and you put it all together and you have a tool belt.
That's where I stand with this, but I'm just a dinosaur.
Speaker 3 (01:14:40):
No, I completely agree with that.
I always say that this is not here to replace people.
It's just making people more efficient, or at least more enthusiastic about what they do.
Speaker 2 (01:14:55):
And with that, I heard a quote the other day.
I forget who said it; it was on another podcast.
I don't want to quote it due to inaccuracy, but I would like to give credit to someone, because I didn't say it.
What they said is: AI is not going to replace you.
Someone using AI is going to.
And it sat with me, and I just let
(01:15:18):
that resonate.
AI, to Dina's point, is not going to replace people, but somebody who can become more efficient by utilizing AI in their duties will.
Speaker 1 (01:15:30):
Exactly.
Speaker 2 (01:15:32):
I agree.
Dina, thank you again for taking the time to speak with us.
It's always a pleasure.
I look forward to having you back on in a few months to see where your progress goes with this, because I know technology is moving quickly, and with your enthusiasm (and I follow everything that you do), I know that you'll be doing some great things.
So we'll get something on the calendar just to lock it in.
(01:15:52):
But in the meantime, how can someone get in contact with you to learn a little bit more about some of the things that you've been doing with AI, to see some of the information that you've been sharing and all of the other great things that you've been doing?
Speaker 3 (01:16:06):
I think LinkedIn is the best place to go, and I would say, for existing posts, the blog, right?
Because just today, when I was getting in the headspace of what we are going to talk about on the podcast, I was like: I wrote a blog post on GitHub Copilot a month ago, and I already have so
(01:16:26):
many new things that I need to add to it.
And it's been only, what, 30 days? So there's more coming.
Speaker 2 (01:16:34):
Excellent, excellent.
Thank you, and that's why I want to lock you in for, you know, maybe not 30 days, but at some point in the future.
But again, thank you for your time.
We really appreciate it.
Thank you for all that you share and all that you do within the community.
I know I've personally learned a lot from it as well, so I
(01:16:55):
appreciate it.
Speaker 3 (01:16:55):
Thank you for having
me again.
Speaker 2 (01:16:56):
We'll talk with you
soon.
Ciao, ciao. Take care. Ciao.
Thank you, Chris, for your time for another episode in the Dynamics Corner chair, and thank you to our guests for participating.
Speaker 1 (01:17:06):
Thank you, Brad, for your time.
It is a wonderful episode of Dynamics Corner Chair.
I would also like to thank our guests for joining us.
Thank you to all of our listeners tuning in as well.
You can find Brad at dvlprlife.com; that is D-V-L-P-R-L-I-F-E dot com, and you can interact with
(01:17:28):
him via Twitter, D-V-L-P-R-L-I-F-E.
You can also find me at matalino.io, M-A-T-A-L-I-N-O dot I-O, and my Twitter handle is matalino16.
And you can see those links down below in the show notes.
(01:17:49):
Again, thank you everyone.
Thank you and take care.