
April 8, 2025 82 mins

In this episode of Dynamics Corner, Kris and Brad are joined by Dmitry Katson, a 20-year veteran in the Business Central ecosystem. Listen in as Dmitry shares his experience developing CentralQ, an AI-powered tool designed to enhance Business Central by leveraging a robust knowledge base. He emphasizes the tool’s ability to automatically update its knowledge daily, drawing from sources like blogs and YouTube, and its role in streamlining processes, improving user experience, and supporting AL development. Dmitry highlights challenges such as creating deterministic AI solutions and the importance of source referencing for credibility. Looking ahead, he discusses plans for CentralQ, including reasoning models, agent coordination, deep search capabilities, local language models, and page scripting for automated documentation. The conversation underscores AI’s transformative impact on development roles, shifting them toward management and architecture, and the need for AI agents to access live data while addressing user permissions.

Send us a text

Support the show

#MSDyn365BC #BusinessCentral #BC #DynamicsCorner

Follow Kris and Brad for more content:
https://matalino.io/bio
https://bprendergast.bio.link/


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome everyone to another episode of Dynamics
Corner.
It's someone's birthday and someone's turning two.
I don't know who, because we're rhyming.
I'm your co-host, Chris.

Speaker 2 (00:12):
And this is Brad.
This episode was recorded on March 5th and March 6th, 2025.
Chris, Chris, Chris, I liked your rhyme.
Someone is turning two.
Are they blue, I wonder who?
With us today, we had the opportunity to learn who is
turning two, as well as a wonderful conversation about the

(00:36):
place for AI within Business Central.
With us today, we had the opportunity to speak with
Dmitry Katson about CentralQ turning two.
Good morning sir.

Speaker 3 (01:03):
Hey guys, how are you doing?
Good morning.

Speaker 2 (01:08):
Doing great, good, good.
You look like you just woke up.

Speaker 3 (01:13):
Yes, thank you.

Speaker 2 (01:17):
And I've been waiting a very long time to say happy
birthday to you.
Well, not to you, but to your child. Yeah, which one of them?
CentralQ turns two.

Speaker 3 (01:32):
I've been waiting to say that for months now. Yes, yes,
thank you very much. It's coming, the birthday is
coming. When is the exact birthday?

Speaker 2 (01:43):
I know we spoke with you shortly after it was out
some years ago.

Speaker 3 (01:48):
Well, it seems like just yesterday. Yeah, I need to
double check when I first tweeted that, but it was the beginning
of March, maybe the seventh or something. Oh wow, so we're
right.

Speaker 2 (02:00):
We are right there.
We scheduled this on purpose.
Yes, yes, yes, to be there at the birthday of your child,
I call it. And it's great, and, before we talk about your child
and many other things that are around it, I like calling it
your child because I think it's wonderful.
Can you tell us a little bit about yourself?

Speaker 3 (02:23):
Yeah, so I'm Dmitry.
I've been in the Business Central world for like 20 years.
I'm passionate about Business Central and artificial
intelligence.
I started my journey in ML, or machine learning or AI,
whatever you call it nowadays.

(02:43):
I started in 2016.
So it was almost like eight years ago, right, when I headed
the AI department at a big partner, and I didn't know
anything about that, so that's where my journey started.

(03:05):
And then I was passionate to combine AI with Business
Central for years, and I think now my mission is accomplished.

Speaker 2 (03:17):
Your mission is accomplished.

Speaker 1 (03:19):
Mission accomplished.

Speaker 2 (03:20):
That's great, and you've been doing a lot of great
things.
You've been doing a lot of speaking sessions, presentations,
and I see you all over the place.
You're very busy, not only with Business Central and CentralQ,
but sometimes it seems like you're a world traveler to me.

Speaker 3 (03:39):
There are two uh seasons where I travel, so it's
definitely directions.
So usually it's directions Asia, as it's not far away from me,
just one hour of flight.
That's nice, sometimes usingbike.

Speaker 1 (04:01):
That's even better.
That's good.

Speaker 2 (04:03):
I think I saw a picture of you last year.

Speaker 3 (04:05):
You took your motorbike that's right yeah yeah
, but but to be honest, yes,it's still 800 kilometers, so we
prefer to use bike to go to theairport.

Speaker 2 (04:15):
Yeah, it would be a long ride.

Speaker 3 (04:19):
A long ride, yeah, and then BC Tech Days and
Directions EMEA.
So those are my three conferences that I usually attend as a
speaker, yeah, and that's where we can meet.
I really hope to go this year to Directions North America, but

(04:44):
it seems that my visa is not ready yet, so I don't think that
they will issue it on time.

Speaker 2 (04:56):
I'm hoping that they issue it on time, because I
would enjoy meeting you in person in Las Vegas this year.

Speaker 3 (05:04):
I know it's a long trip for you too. Yes, but it's
still already two months of visa processing, and they, you know,
are waiting. Okay, you've got a little over three weeks left,
four weeks left, so you still have time, you just have to.

Speaker 2 (05:22):
When's your cutoff day? Where if you don't have a
visa by a certain day, then you definitely won't be attending?

Speaker 3 (05:31):
I think that it's already passed.

Speaker 1 (05:34):
Oh man, we've got to make sure that you make it next
year then.

Speaker 2 (05:40):
I'm hopeful to run into you somewhere then.
So you've been doing a lot of great things, and for those that
do not know about CentralQ, can you tell us a little bit about
CentralQ briefly?
And then I have a whole list of questions for you.

Speaker 3 (05:55):
Right, yes, so I've been doing different machine
learning things before, and then I was speaking at
conferences about how we can implement machine learning in
Business Central. I remember the first time I talked about
this was in 2018, I think in Harvard, at Directions EMEA, and

(06:19):
I was the only weird person that talked about this at the
conference. Even Microsoft didn't talk about that.
And then, at Directions EMEA last year, I found that,
like, 60-70% of all the content, everyone speaks about Copilot

(06:41):
and AI.
So that's where we are.
That's where I think that my mission was accomplished.
But let me return back like two years ago, a little bit more,
when the first ChatGPT appeared, right, and we were
all mind-blown about the power of large language models.

(07:05):
We all saw them for the first time, and what I did, actually, I
think many people did: I thought, hey, great, now I can use it to
help myself with the business.
And just after some quick queries, I figured out that, no,
that doesn't work.

(07:27):
It just suggested features that don't exist, suggested
code that doesn't compile, suggested, you know,
routes where it just hallucinated a lot.
But I still thought that, yeah, that could be a good framework

(07:53):
to build around, to help our community use it to help with
Business Central problems.
Yeah, the problem with Business Central is that it's
still very, you know, narrow compared to the whole
internet.
Yes, so our AL development is,

(08:16):
you know, several GitHub repos compared to millions of
others. Our documentation for Business Central is still small
compared to all other products.
So probably at that point of time, it was GPT
3.5.
It maybe knew something.

(08:39):
But you know, the main goal of large language models is to
answer all the questions, no matter if it's correct or not.
So it would just imagine the answer.
However, I found, and in that period of time it was very hard,
that there is still a way we can make it better.

(09:03):
So if we just make a big knowledge base about everything
that we know about Business Central in one place, and then
not just ask the large language model directly, but first query
our knowledge base, find the

(09:27):
potential answers, like some text that will potentially
answer the user's question, and then we send this to the
language model together with the user question, this increases
the correctness of the answers a lot.

(09:47):
So that's what we call it: fact grounding, yeah, or
knowledge grounding.
So that's where the idea was born: hey, I think that
that will work.
So the next problem was that I needed to find a way

(10:13):
how to build it, because there was no exact documentation,
there was nothing.
So actually my only source of knowledge at this point of
time was Twitter.
So I followed some guys that also did some experimenting,

(10:34):
chatted with them, and so I built a knowledge base.
I took first the blogs and the Microsoft Learn, then I added at
some point of time YouTube, then Twitter also as a
source of knowledge, and yeah, so it took like two months of

(10:59):
building, I remember, and then CentralQ was born.

Speaker 2 (11:05):
So CentralQ, in essence, is a large language
model that's built, or it's grounded, or it has its
knowledge based upon popular blogs from community members of
Business Central, from the development point of view as
well as from the functional point of view, the Microsoft
Learn documents, which keep getting better and better,

(11:27):
Twitter, and the YouTube videos.
So anybody who uses CentralQ, similar to ChatGPT you
mentioned, which a lot of people use, will pull the knowledge
from those sources to return the result.
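A minimal sketch of the retrieval-augmented "fact grounding" flow described here, assuming a toy in-memory index and a generic `llm_call` wrapper; the names and data are illustrative, not CentralQ's actual implementation:

```python
# Hedged sketch of fact grounding: retrieve likely-relevant snippets from a
# Business Central knowledge base, then have the model answer only from those
# snippets and cite them. The tiny index and `llm_call` are placeholders.
from dataclasses import dataclass

@dataclass
class Snippet:
    text: str
    source_url: str  # e.g. a blog post, Microsoft Learn page, or YouTube transcript

KNOWLEDGE_BASE = [
    Snippet("Page scripting lets you record UI steps and export them as YAML.",
            "https://example.com/page-scripting"),
    Snippet("AL extensions are deployed as per-tenant or AppSource apps.",
            "https://example.com/al-extensions"),
]

def search_knowledge_base(question: str, top_k: int = 3) -> list[Snippet]:
    """Toy retrieval: rank snippets by word overlap (a real system would use embeddings)."""
    words = set(question.lower().split())
    scored = sorted(KNOWLEDGE_BASE,
                    key=lambda s: len(words & set(s.text.lower().split())),
                    reverse=True)
    return scored[:top_k]

def answer_with_grounding(question: str, llm_call) -> str:
    snippets = search_knowledge_base(question)
    context = "\n".join(f"[{i + 1}] {s.text} (source: {s.source_url})"
                        for i, s in enumerate(snippets))
    prompt = ("Answer the Business Central question using ONLY the numbered context "
              "and cite source numbers. If the context is insufficient, say so.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}")
    return llm_call(prompt)  # llm_call wraps whatever chat-completion API is in use
```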

Speaker 3 (11:41):
Yes, and also the problem with just a pure
large language model was, and still is, that it's trained and has a
cut-off knowledge date.
So usually for the OpenAI models it's one year before.

(12:02):
So the current models, I think they have a cut-off date
like 2024 or somewhere in there, maybe autumn, maybe summer. But
as we ask about Business Central, this area

(12:24):
is growing fast.
New features appear every day... oh yeah,
okay, not every day, but we have waves, and they appear
much quicker than the large models are
trained on them.

Speaker 2 (12:44):
Yes, exactly. Every month we have new features, so
it's just like every day is a holiday.

Speaker 3 (12:47):
I guess you could say, yeah. So this was the second
problem that I wanted to solve: CentralQ doesn't just
have this knowledge base that is trained and used once, but it
updates automatically every day.
So we search the web for new information regarding
Business Central and update this knowledge base, and you

(13:09):
know, this is very exciting to see that, for example, when
Microsoft releases the launch videos before the wave, they
are published on YouTube.

(13:30):
So on the next morning, CentralQ knows everything from
all the videos.
So you can just go and ask what the new features are, how they
work, and the tool answers based on just what was just published.
I think that's very useful.

Speaker 2 (13:53):
I think it's extremely useful because, as you
had mentioned, there aren't a lot of sources or a collection,
even with those other language models.
Because with Business Central, there are a large number of users
using the application.
We have a lot of members in the community, but it's still small
compared to other languages and other pieces of information on

(14:13):
the internet.
So it's a great tool for anybody that uses Business
Central, and it's not just development and it's not just
functional, it's a combination of both.
So, whether you're a developer, a user, or somebody working to
consult others with Business Central, it's a good tool to
have.

Speaker 3 (14:32):
Yes, exactly.
And the second thing that I thought should be really
mandatory, and it now became a standard in all these Copilot
things, is to reference the source.
So in the pure ChatGPT in those periods of time,

(14:55):
you got the answer, but, you know, you don't know if it's
correct or not, so you need to double check that, and there were
no sources where you could double check it.
So that was my initial design from the beginning:
hey, you not only need to get the answer but also the links to
the sources where this answer was pulled from.

(15:17):
And I found this also a very widely used flow.
When you ask a question in CentralQ, it gives you the
answer, and then if you want to go deeper, you just click on the
link.

(15:38):
It opens the blog, so there is more detailed information.
You can just read it.
And I found that around, I think, 30 or 40% of all redirects to
my website are coming now from CentralQ, which is also
interesting.

Speaker 2 (15:59):
Well, I like that.
I do like that because, as we all hear, if you haven't heard of
AI, then I don't know where you are, and if you haven't heard of AI
within the last hour, I don't know where you are either,
because I don't think you can go an hour without hearing AI,
Copilot, large language model, machine learning, no matter where

(16:19):
you are on the planet. You could be using it too, you just don't know.
Maybe the ability for users of
tools such as this to validate the information matters, because
everyone talks about how this hallucinates: hallucinations,
where, as you had mentioned, large language models will always give

(16:40):
you an answer.
They never return "I don't know," so it could be an incorrect answer.
So, knowing that individuals are utilizing or following those
links to learn more about the answers or validate the answers,
it's nice to hear, instead of everybody just saying give me
the answer and it creating something that may or may not
even exist, and then people spread that information.

(17:01):
So, with CentralQ, when we started talking about planning
this, because we planned this a long time ago with CentralQ
turning two, you said you may have a lot of new things in
store for CentralQ.

Speaker 3 (17:20):
Yeah.
So I hoped that I would release the second version before we
talked, but it's still in development mode because, well,

(17:42):
there are some other projects that I'm doing. Oh, I understand.
Yeah, but also, I think that the most important reason
for me was to postpone a little bit.
Many new things appeared in the AI world since my first planning,
and the most important of

(18:07):
them: now there is a new type of model, which is called
reasoning models, so they don't give you the answer directly,

(18:28):
they think about the answer first and then produce the
answer.
That's a little bit different type of model that I want to
also implement in CentralQ.
And also, the other thing is the concept of agents that
you also, I think, hear a lot about.
And I started experimenting with agents, I think, in

(18:52):
September last year, August, September, and the first agents
that I showed were at Directions EMEA, and I was really
mind-blown about this concept and how it works.
So the example that I showed at Directions was that

(19:14):
I created a team of agents, yeah, so there was a team of
agents, where the goal was to ask any question in
natural language and it would convert it into API
calls to Business Central, do the calls to Business Central, grab the data and

(19:38):
provide the answer to the user. And the problem with
that, if I do it the classical way, is that in many cases, if I
just ask in a simple call to the large language model, hey, take
this query and convert it to the API, this API in most of the

(20:02):
cases will not work.
But if I make a team of agents, there will be one agent that
will be responsible to generate the API, another agent will be
responsible to call this API, and another agent will be
responsible to provide the final answer, and they actually

(20:23):
communicate with each other.
So the first one generated the API, the second one called it, and it didn't
work.
It returned back to the first one and said hey, this didn't
work, so you need to do this job better.
It generated something and once again sent it to the other
agent.
The other agent once again said hey, this didn't work.

(20:44):
So the first agent actually went to the knowledge base that
I also connected, and searched for the information.
Actually, I connected Jeremy's book, the whole book
about the API.
So it went, read the book, found the exact endpoint that

(21:06):
potentially would work and then generated a good API call.
The second agent executed this API.
That worked.
The other agent produced the answer, and it was all online;
you can see their internal communication.
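A rough sketch of the generate-call-retry loop described above, assuming a hypothetical `llm` wrapper, a `search_api_docs` helper, and an illustrative Business Central API base URL; the agent roles and retry limit are assumptions, not the actual implementation:

```python
# Hedged sketch of a small agent team: one agent drafts a Business Central API
# request from a natural-language question, another executes it, and failures are
# fed back so the first agent can retry (optionally consulting API documentation).
import requests

BASE_URL = "https://api.businesscentral.dynamics.com/v2.0/<tenant>/production/api/v2.0"  # illustrative

def generator_agent(question: str, feedback: str, llm, search_api_docs) -> str:
    """Drafts a relative API path (e.g. 'companies(...)/customers?$top=5')."""
    docs = search_api_docs(question) if feedback else ""   # consult docs only after a failure
    prompt = (f"Question: {question}\nPrevious attempt failed with: {feedback}\n"
              f"Relevant docs:\n{docs}\nReturn only the API path to call.")
    return llm(prompt).strip()

def executor_agent(path: str, token: str):
    resp = requests.get(f"{BASE_URL}/{path}", headers={"Authorization": f"Bearer {token}"})
    return (resp.json(), None) if resp.ok else (None, f"{resp.status_code}: {resp.text[:200]}")

def answer_agent(question: str, data, llm) -> str:
    return llm(f"Using this data:\n{data}\nAnswer the question: {question}")

def run_team(question: str, token: str, llm, search_api_docs, max_attempts: int = 3) -> str:
    feedback = ""
    for _ in range(max_attempts):
        path = generator_agent(question, feedback, llm, search_api_docs)
        data, error = executor_agent(path, token)
        if error is None:
            return answer_agent(question, data, llm)
        feedback = error   # loop back: tell the generator what went wrong
    return "Could not build a working API call for this question."
```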

Speaker 2 (21:22):
That is all amazing to me.
It's the whole agentification.
We talk about this a lot now because everybody's in it, but
it's almost like having a staff that's working for you, and each
one of them does a different task.

Speaker 1 (21:38):
So you have two features coming in.
One is the reasoning, right, so it's going to reason itself.
It sounds like, yes, it's a kind of new feature.
And in the second one you're almost adding an agent
coordinator.
It sounds like I just want to talk to this one thing,
and then it's going to pull in whatever agent I need to

(22:01):
accomplish this task.

Speaker 3 (22:02):
Yes, that's actually what I'm thinking of. Because there are simple questions,
like how does this feature work?
It will go to my knowledge base, find this feature and
produce the answer.
That's how this works nowadays.
But let's say you want to ask something like hey, please find

(22:26):
me the apps on AppSource that do this, compare them by
something, produce me an output table of which ones, with maybe some
feedback from the users, and suggest me the best one I can use.

(22:48):
It's like a multi-step process, and this currently will not work
using the current version of CentralQ.
It will work at some point, but the answer will be limited.
So I want to now serve more advanced queries with

(23:12):
CentralQ, which I call CentralQ 2.0, which I'm working on.
So that's why CentralQ turns 2, not only in years, in
age, but also in the version.
But, yeah, I want it to be agentic, I want it to use reasoning

(23:33):
models, and also the new thing that appears in many
AI areas nowadays.

Speaker 2 (23:44):
It's called deep search or also deep research.

Speaker 3 (23:47):
So, deep search, or also deep research.
Because now, ChatGPT, Perplexity, other copilots, today in a
simple mode, they're using like a maximum of 10 different
sources, depending, because that's actually usually the

(24:08):
limitation of the one call, you know, to the large language model.
But with a deep search, it's also multi-step.
So you can ask a complex query.
It will break down this query into multiple queries.
It will search them one by one, then find maybe 50-70 different

(24:31):
sources.
It will understand which sources it should go and read,
depending on the different evaluations.
It will go read, it will find the trusted sources, and then
produce the answer.
So usually this process takes longer.

(24:54):
Yeah, so, because the simple question answer in CentralQ
takes about 10 seconds to the first token.
The deep search, according to my experiments, nowadays it's
around one minute.
So it's one minute, one minute and a half, but it will go

(25:20):
really deep, find more information, and produce a
more advanced answer.
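A rough sketch of that multi-step deep-research loop, with hypothetical `llm`, `web_search`, and `fetch_page` callables standing in for whatever search and model APIs a real implementation would use:

```python
# Hedged sketch of a deep-research pass: decompose the question, search each
# sub-query, keep only the most promising sources, read them, then synthesize.
# All three callables (llm, web_search, fetch_page) are illustrative placeholders.
def deep_research(question: str, llm, web_search, fetch_page,
                  max_sources: int = 50, read_top: int = 10) -> str:
    # 1. Break the complex question into smaller, searchable sub-queries.
    sub_queries = llm(f"Split into 3-6 short web search queries, one per line:\n{question}").splitlines()

    # 2. Search each sub-query and pool the candidate sources.
    candidates = []
    for q in sub_queries:
        candidates.extend(web_search(q))          # each result: {"url": ..., "snippet": ...}
    candidates = candidates[:max_sources]

    # 3. Let the model rank which sources look worth reading in full.
    listing = "\n".join(f"{i}: {c['url']} - {c['snippet']}" for i, c in enumerate(candidates))
    keep = llm(f"Question: {question}\nPick the {read_top} most relevant indices, comma-separated:\n{listing}")
    indices = [int(i) for i in keep.split(",") if i.strip().isdigit() and int(i) < len(candidates)]
    chosen = [candidates[i] for i in indices][:read_top]

    # 4. Read the chosen pages and synthesize a cited answer.
    notes = "\n\n".join(f"SOURCE {c['url']}:\n{fetch_page(c['url'])[:4000]}" for c in chosen)
    return llm(f"Using only these sources, answer '{question}' and cite the URLs you used:\n{notes}")
```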

Speaker 2 (25:27):
And so, yeah, those are three things that you want to combine
together, and it's not very, you know, obvious how to
do this. It sounds logical, it sounds wonderful, but how a
large language model, or how the deep research, knows which source

(25:50):
to read based upon the content.
And that goes back to the reasoning.
I mean, I know how the human mind works with reasoning,
reasoning based upon history and understanding.
I still have difficulty understanding how these language
models really put this information together to know.

(26:11):
It's, to me, I mean, mind-blowing.
Everything you said sounds great.
And if I had 10 people sitting in the room that were humans
working with me, I could say, okay, let's go through these
sources, find the ones that are relevant for the question.
Okay, let's take the pieces back and put them together,

(26:32):
because you know that humans have reasoning in how the mind
thinks.
But getting a computer to do this, or getting a piece of
software to do this, which is in essence what it is, right? It is
software, if I stand correct.

Speaker 1 (26:49):
Hold on.
Can I recommend a fourth one as a wish?
Maybe text to audio, or audio to text? That'd be really
cool to add, something you could just have a conversation with, that
would be awesome to do.
I'm not trying to add more work for you.

Speaker 3 (27:13):
Yeah, so actually, audio-to-text is a great way.
I'm personally using this with external software,
because I know that maybe in Windows it's already implemented
by default.
I'm using Mac.
There is no such feature, but I'm using, you know, what's it

(27:33):
called? It is called Flow.
Yeah, so this software is called Flow.
You can just talk to it and it will automatically transcribe,
and then you use it in the query.
Yeah, but I would also want to add, okay, the fifth feature to

(27:54):
that, which is multimodal support, which means
that now I'm pulling just text from the sources, so from the
blogs it's just text, from the YouTube videos it's a

(28:17):
transcript, and in many cases it's not enough.
Especially in the blogs, I found that very often people
just paste screenshots inside of the blog.
Yeah, so they don't describe these screenshots.
They write "that's how this feature works", and then there is an image with
different arrows.

(28:39):
Yes, there is, and I actually now don't get this information,
which is very important information.
So I want to grab this information as well.
But also, that's the back end, so that's how to improve my
knowledge base.
On the other side, on the user side, I want it so you can just

(29:04):
copy-paste the screenshot and send it directly to CentralQ
and ask about, you know, the error, for example. This
really will help to improve the answers.
So, yeah, those are the five pillars that I'm working on right

(29:30):
now, and also, yeah, so that's the area that I'm focusing on.
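A minimal sketch of the screenshot-question idea, assuming a generic vision-capable chat endpoint; the `ask_vision_model` callable and the data-URL encoding are illustrative assumptions rather than CentralQ's actual pipeline:

```python
# Hedged sketch: attach a screenshot (e.g. a Business Central error dialog) to a
# question for a vision-capable model. `ask_vision_model` is a hypothetical
# callable that accepts text plus an image encoded as a data URL.
import base64
from pathlib import Path

def image_to_data_url(path: str) -> str:
    """Encode a local PNG screenshot as a data URL that most vision APIs accept."""
    encoded = base64.b64encode(Path(path).read_bytes()).decode("ascii")
    return f"data:image/png;base64,{encoded}"

def ask_about_screenshot(question: str, screenshot_path: str, ask_vision_model) -> str:
    data_url = image_to_data_url(screenshot_path)
    return ask_vision_model(
        text=f"Business Central question about the attached screenshot: {question}",
        image_data_url=data_url,
    )
```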

Speaker 2 (29:33):
That's a lot for you to do.
You're doing this all on your own now, correct, and in your
free time?

Speaker 3 (29:40):
Yes, when I say free time it's.

Speaker 2 (29:43):
You still work with Business Central.
You do all the stuff that we talked about.
So when do you sleep?

Speaker 3 (29:51):
You see that I'm already awake, so it's 6 am
here.
Yes, yes, yes.
Yes, once again, thank you.

Speaker 2 (30:06):
My day starts very early.

Speaker 3 (30:09):
I have more time to work on CentralQ after
that.

Speaker 2 (30:12):
No, that's good.
That's why we said we could do this, but, as we talked about
last time, you're in the future for us, so it's six in the
morning, or zero six hundred, where you are.
Thursday, or tomorrow for us. Yeah, so I like to talk
with you because I get to know what will happen tomorrow.
You're doing a lot of great things with CentralQ, and

(30:38):
another thing that has come out, again with these deep
research models, is local large language models.
Do you see a place for that with CentralQ, to maybe help
with some of the processing, or offloading some of the resources
or knowledge for CentralQ?

Speaker 3 (31:02):
Yeah, I thought about that, but I didn't find where
this can fit with the CentralQ architecture and the users right
now, because I don't have, like, an app for the phone, for

(31:24):
example.
Maybe we need to do it at some point of time, but let's see.
And still, it's like a web service which works on the web,
which communicates with Azure OpenAI nowadays, and the

(31:45):
whole infrastructure is in Azure.
There is one thing that maybe can be useful in this case, I
mean these local language models, which is using, I call it, private

(32:07):
data with CentralQ.
So maybe you know or not, after our previous call when we
discussed the web version of CentralQ, I released the
Business Central version of CentralQ, and so this is
the AppSource app, which is actually a paid version, which

(32:32):
costs like $12 per user per month, which is, like, not a lot,
I think.
But with this, you have CentralQ inside of
Business Central and you can upload your own documentation
there.
So you can upload the documentation about how your

(32:54):
Business Central works, like the instructions about your
processes, the instructions about your per-tenant extensions
and all of that. And one of the nice features there is that
you can use page scripting, the built-in Business
Central page scripting, to record the steps.

(33:16):
It will take the URL of this page script, or a YAML file
you can export, and you upload that to the CentralQ app inside
of Business Central, and it will automatically produce a
user manual from that and use it as internal knowledge about

(33:36):
how your Business Central works.
And you can just ask a question.
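A hedged sketch of turning an exported page-scripting recording into draft documentation; the YAML field names (`steps`, `type`, `target`, `value`) and the `llm` callable are illustrative guesses, not the actual page scripting schema or CentralQ's converter:

```python
# Hedged sketch: read a page scripting recording exported as YAML and ask a model
# to turn the recorded steps into a numbered user guide. The field names are
# assumptions for illustration; inspect a real export to see the actual schema.
import yaml  # pip install pyyaml

def steps_from_recording(path: str) -> list[str]:
    with open(path, encoding="utf-8") as f:
        recording = yaml.safe_load(f)
    lines = []
    for step in recording.get("steps", []):           # assumed top-level key
        action = step.get("type", "action")
        target = step.get("target", "")
        value = step.get("value", "")
        lines.append(f"{action} {target} {value}".strip())
    return lines

def recording_to_manual(path: str, process_name: str, llm) -> str:
    steps = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(steps_from_recording(path)))
    return llm(
        f"Write a short end-user guide for the Business Central process '{process_name}' "
        f"based on these recorded UI steps:\n{steps}"
    )
```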

Speaker 1 (33:42):
That's gold.
We were just talking about this, right, Brad? Like we were just
talking about taking a page script result and then
turning that into a usable guide or documentation.
Especially for someone that is maybe in the middle of an
implementation, documentation is usually like the last thing
people create, but if you can make it easy with this tool,

(34:04):
that's incredible.

Speaker 2 (34:05):
That's going to save a ton of time. I'm on the page
scripting kick because I've always been big into testing, and
page scripting is, in essence, a way that you can enhance your
user acceptance testing, but with the way that it records it,
as you had just mentioned, you can create user documentation.
So now, with the CentralQ app for Business Central, not only

(34:27):
do you get the ability to use the CentralQ knowledge base that you
update daily with information from Business Central.
You have the ability to upload your own private documentation,
right, and that stays separate from everything else.

Speaker 3 (34:42):
That's right. Yeah, so this is a separate
knowledge base that is per environment or per tenant,
depending on your choice.
So you have a dedicated knowledge ID, and all the
knowledge that you upload is linked to this ID, and only

(35:06):
you can use it.
It is all very secure, and you can upload PDF files, Word
documents, TXT files and page scripts.
And there is a chat window.
It's also not just question and answer.

(35:26):
It's a chat, so you can go and chat about that, and you
can decide, when you ask a question, what
sources it can use.
So you can decide if it can use only your private documentation
and nothing else, or, in addition, it can use the whole

(35:48):
CentralQ knowledge, or, in addition to that, you can use
the Microsoft Learn.
So it's three big buckets of knowledge that you decide
what it can use, and yeah, you can go and install it.

Speaker 2 (36:05):
Chris, to your point, that's where it is.
Nobody wants to document a process, and everybody relies on
someone in the office to have that process.
But something may happen where they're out.
One day they go on vacation, or they, for personal reasons, make
a change in their career, and all that information is lost.
But now, with this, to be able to use page scripting to have it

(36:27):
generate documentation, and then have that documentation
searchable, is a huge time saving, and it's gold because you can
record as somebody's working and say that this is that process.

Speaker 1 (36:41):
I'm just going on that too, Brad, because even,
for example, if your process changes, right, people
processes, business processes change, and you want to go update that,
you just do a page script and have it change that in your
document, and then there's your updated document.
Because a lot of the time your
business process changes and then nobody ever updates the

(37:03):
original document.
So with this...

Speaker 2 (37:05):
It would make it easy, to be honest with you.
And again, as you had mentioned, it's a relatively low price
for what you get, because the ability to keep the business
continuity there is extremely valuable and important with CentralQ.
This whole AI stuff is such a huge time savings if it's used

(37:30):
appropriately.

Speaker 3 (37:37):
Yes, and you know, the cherry on top of this process is that when
you have the answer, you as well have the links to the
sources, and if the source was a page script, you can just click
it, it will open Business Central in a new window and will
replay it.

Speaker 2 (38:00):
Is that right? See, these are what I call, I
don't want to say hidden features, but
I know about CentralQ, and I was chatting with you a few
months back about the app as well, because I had
questions about the page scripting and creating
documentation.
But these are things that I don't think a lot of individuals
may know about CentralQ and the power that it has, because
from a user point of view, that is a huge time savings for them,

(38:25):
and from any business point of view, I think there's some huge
value in having that.
So you have so many things in this application.
I still can't believe you did it all by yourself.

Speaker 3 (38:36):
Yes, it was just me.
So this is actually the sixth pillar that I wanted to also
embed in the web version of CentralQ, so CentralQ
2.0.
Also, we'll have, at least in my plans, I really want to do this,

(38:58):
this login feature where you can log in.
It will be your space where you can upload your own
documentation.
So I want to combine these two worlds together.
Then you can use it externally on the web or
internally inside of Business Central.
So that's my goal, that's what I want.

Speaker 2 (39:24):
I think that's a great goal, and I hope you get to
it. With CentralQ, if you can share,
and if you're not comfortable sharing any of these questions,
please feel free to let me know, I understand:
how many searches do you get per day, per month, per quarter?
What sort of metrics do you have on it?

Speaker 3 (39:50):
Yeah, so over the last two years, almost,
there were more than 300,000 questions and answers generated.

Speaker 2 (39:58):
That's a lot of questions.

Speaker 3 (40:01):
And that's produced around 1 billion tokens.

Speaker 2 (40:11):
1 billion tokens, 300,000 questions. What is a
token?

Speaker 3 (40:16):
Yeah, so a token is one word or part of a word.
One word can be split into one or more tokens; that's
actually how large language models see the world and

(40:40):
generate the answers.
Yeah, so it's around 500 queries per day nowadays, and it
depends on the time of the year, so the lowest number of
queries is on the 25th of December.
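To make the token idea concrete, here is a tiny example using OpenAI's tiktoken tokenizer; the exact counts depend on the encoding, so treat the numbers as illustrative:

```python
# Small illustration of tokenization: one word can map to one or several tokens.
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several OpenAI chat models

for word in ["invoice", "Business Central", "posting"]:
    tokens = enc.encode(word)
    print(word, "->", len(tokens), "token(s):", tokens)
# A short sentence of roughly 10 words typically lands around 10-15 tokens here.
```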

Speaker 2 (41:02):
I wonder, why yes.

Speaker 1 (41:06):
That is interesting.

Speaker 3 (41:08):
Yeah, but still there are questions on this date to
CentralQ.
Well, some people don't want...

Speaker 2 (41:17):
Yes, the break.
Do you keep track of, or classify, the questions?
I'm just thinking of how it could be used.
What I mean by classifying is: is it a finance question, a
purchase and payables question, order to cash?
Order the cash.

Speaker 3 (41:38):
Yes, so I'm also classifying all the questions
using the last-lunch models.
So it's around, so I have thestatistics.
So around 20% of all thequestions I call it like a

(41:59):
general, so it's like differentquestions about different
features of the business central, but about 20% goes around AL
development, questions aboutdifferent features of the
business central, but about 20%goes around AL development.
And then it's breakdowns by themodels, like about 11% comes

(42:25):
about the financial and thenabout 9% from the inventory and
so on, about 9% from theinventory and so on.
But also this is like acategorization by the categories
.
But also I classify thequestions by the types, and

(42:46):
about 50% of all questions abouthow to do things.
So this is like how can I dothings?
And that's very interestingbecause that's where comes the
power of the central knowledge.
Because if you rely only on theMicrosoft Learn, the Microsoft

(43:06):
learn documentation structure isabout the feature, about like
90%.
I would say that this is thefeature, this is how this
feature works, this is AL type.
This is what it's about.
It's not about how the processworks.
There is not enough knowledgein the documentation about how

(43:28):
the process works.
So there is not enoughknowledge in the documentation
about how the process works, andusually people ask about this
how to make this process happen.
And that's where blogs comeinto play and YouTube videos
come into play, because in manyblogs, it's not about the
features, it's about the process.
That's actually how you can dothis using multiple features.

(43:51):
So that's where the power ofthis comes, and I also have this
telemetry that 86% of all theknowledge comes from the blogs
nowadays and 65% comes from theYouTube videos and about 50% or

(44:17):
60% from the Microsoft Learn.
So it's a combination, ofcourse.

Speaker 2 (44:23):
So one question can have sources from multiple
different categories, but the blogs play a crucial role in
the answers in general. I'm fascinated by statistics and I'm
happy that you shared that, because I was curious, using it,

(44:45):
and I thought the number one question, Chris, would be about
what's the best podcast about Business Central.

Speaker 1 (44:52):
I don't know, we should give that to Dmitry as a
link to our website, because there are transcripts in there.

Speaker 3 (45:00):
I think, I don't know if I can find it very
quickly, how many answers used the Dynamics Corner
podcast.

Speaker 2 (45:12):
No, no, it's okay, you can look at that afterwards,
it was just a little fun.
I appreciate those statistics and all that you're doing with
that.
With that, we always have some side conversations and such too.
So where do you see AI within Business Central and, most

(45:33):
importantly, AL development?

Speaker 3 (45:38):
So I would start with AL development, because that's
where I use AI for Business Central every day.
I'm not a user of Business Central, so I actually
don't very often consume AI features inside of Business

(45:59):
Central for myself, but I do as an AL developer.
So there is nowadays a choice of IDEs, yes, that we can use
for AL development.
We all started from VS Code, and we started with

(46:22):
GitHub Copilot.
Yeah, so that's where I started.
Many people use that.
I then switched to Cursor, about eight months or so ago.

(46:42):
At that period of time, it supported the Claude Sonnet
model.
VS Code supported the OpenAI, I think, 4.0 model, which

(47:02):
is not so good in AL development, to be honest, for whatever reason.
That's the fact.
But there is another model from Anthropic which is called
Claude Sonnet 3.5, and I found that it knows AL pretty well, and

(47:27):
the only IDE that supported it was Cursor.
So I switched to Cursor, and Cursor is actually a clone, a
fork, of VS Code, so it supports all the features from
VS Code, plus the guys did a huge job.
I mean, my CentralQ stuff is maybe, I don't know, like

(47:54):
5-10% of what they did with Cursor, but they have a big
team and they have like millions in investments, and they are located
in San Francisco.
So, yeah, Cursor supported the so-called composer mode.

(48:20):
The composer mode is not just a chat, so it's not like a, hey, how
can I develop these things in AL?
It's actually you ask it to produce the feature in
natural language and it develops the feature for you, and then
you check it.

(48:42):
And it worked pretty good if you know how to
use it.
It also requires some change of your mindset, how you deal
with AL code, how to set up things.
But if you know what you are doing, that actually really

(49:04):
increases the productivity of your development and the quality
as well.
And now a new model appeared from Anthropic,
Claude Sonnet 3.7, also with reasoning
capabilities, and I found that, and also in parallel,

(49:33):
Cursor introduced the agentic mode.
So combining these two things together in AL development, it's
the next level, I mean, nowadays. So should I put my resume
together?

Speaker 2 (49:48):
Is that what you're telling me?
Yes, yeah, yeah, yeah, Chris, you hear that? He's subtly
telling AL developers to just look for another job.
Actually, yeah, I updated my resume as well.

Speaker 1 (50:01):
so it's he knows.

Speaker 2 (50:07):
I did see the update on VS Code today, and it was
funny.
We'll go back to your story in a moment.
I do have another question about the sources, not to
disrupt you, but the features in this month's
update of VS Code are primarily a lot of the features that you

(50:27):
talked about from Cursor, and it's all AI
related.
They added the agent mode, the Copilot edits, the next edit
suggestion, which was a big one.
It seems like a lot of that's coming back to it.
To go back to the stats you had mentioned, I believe,
from memory, in the conversation.
Once I play it back and hear it I'll know for certain, but I

(50:48):
think you said 20% is AL development, and you mentioned
your sources were blogs, Learn, YouTube and Twitter.
Did you ever think of GitHub repositories?

Speaker 3 (50:58):
Yes, so a GitHub repository also appeared as a
source, I think, a year and a half ago.
At that point of time it was an experiment, so I didn't pull
all the source code from Business Central, but just the

(51:20):
System App repo, and there is now, in addition to that,
a special toggle, an
option in CentralQ, where I'm asking specifically about
the System App.
So actually, what this option is doing, it's using just

(51:43):
GitHub as a knowledge base, and you can ask, how can I create an
email, for example?
It will produce the AL code looking into the System App, and
it works pretty well.
I had an experiment which was fun.

(52:06):
I was sitting at BC Tech Days and there was a
session about how to use the System App.
There was code on the screen, and in parallel I

(52:28):
just asked CentralQ how I can do this, and it produced the same
code.
So it was great to see that it really works.
And, yeah, the one thing that didn't work right nowadays,

(52:49):
even with these Claude 3.7 models, and why we still have, for now, we
still have a job as developers:
it's good in AL syntax.
So it knows the AL syntax.
Yeah, but AL development is not just about the syntax.

(53:11):
It's about using existing libraries.
Yeah, so it's using existing codeunits.
We don't want to duplicate the code, we don't want to
reinvent the wheel and so on, and that's something that it
only somewhat knows.

Speaker 2 (53:31):
Well then, we're safe for a long time, because
if it's trying to analyze some of those codeunits, then I don't
even think people can do it.
So AI will have quite a bit of a challenge.
But if the language changes to be more contemporary, you know,
down the C# road, like it has been going, then
maybe there is no hope for us.

(53:54):
But until all those codeunits get cleaned up, we're safe, I
think.

Speaker 3 (53:59):
Yes, and that's something that I'm also looking
at as a CentralQ opportunity, because recently
there also appeared the so-called MCP protocol in
Cursor.
Actually, you can think of it as an API to

(54:19):
external tools, and inside of Cursor, in this
code generation mode, edit mode or chat mode, you can mention
the tool and ask it, hey, how can I do this?
And what I'm thinking of also is to put some effort into making

(54:49):
a knowledge graph from the whole Business Central AL,
you know, code base. So knowledge graphs are a new
area also in this world.
So actually it's not just take, let's say the code, let's

(55:13):
say we take the codeunit 12.
It's the biggest one, more or less, and it has a lot of
different functions that are difficult to understand.

Speaker 2 (55:25):
Is 12 bigger than 80?
I just had to add.

Speaker 1 (55:28):
I'm just kidding. You know, the old-timers know the
numbers, right, so we had to add that.

Speaker 2 (55:30):
No, I'm just kidding. You know, the old-timers know
the numbers, right, so we refer to everything as number 12, 80.

Speaker 3 (55:35):
But it's more than 1,000 lines of code.

Speaker 2 (55:38):
Oh yeah, I understand , it's huge.

Speaker 3 (55:41):
It's huge, and actually the problem with that is
you can't just take this one codeunit and paste it to
the LLM in one call and ask it, you know, can you
just produce my code based on that, and so on.
We still have limitations of the context window.
So there is the area which is called knowledge graphs,

(56:08):
so actually you can create a knowledge graph, also using
large language models, that tells you the higher level:
this is the flow, this is how the things are connected to each
other, and these are the internal things or the functions, for
example.

Speaker 2 (56:28):
So to create this, how do you get that?
Yeah, this is the graph.
Is that part of the new language models, or do you have
to use another tool for that?

Speaker 3 (56:37):
No, there are some open source libraries for how you
can do this, but still, behind the scenes they are using the
large language models to produce these knowledge graphs.
At the end, this is a database of how things are

(57:02):
connected to each other, and when you have this database and you
ask a question, the user asks a question, it can first go to this
database instead of the knowledge base.
Instead of the raw knowledge, it can go first
to this knowledge graph, understand which pieces of

(57:22):
knowledge are valuable for that, and then go deeper into the raw
knowledge, and that really increases the quality of the
answer.
So my idea, that I also want to put effort into, is to make this
CentralQ API for Cursor or VS Code, I think they will

(57:45):
also support that.
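A small sketch of the knowledge-graph idea for AL code, using networkx to model objects and their relationships as a graph that can be queried before dipping into raw source; the nodes and edges are toy data, not a real analysis of codeunit 12:

```python
# Hedged sketch: represent AL objects and their call relationships as a graph, so
# a question can first be routed through the graph ("what touches G/L posting?")
# before pulling raw source into a limited context window. Nodes/edges are toy data.
import networkx as nx  # pip install networkx

graph = nx.DiGraph()
graph.add_edge("codeunit 80 Sales-Post", "codeunit 12 Gen. Jnl.-Post Line", relation="calls")
graph.add_edge("codeunit 12 Gen. Jnl.-Post Line", "table 17 G/L Entry", relation="writes")
graph.add_edge("codeunit 90 Purch.-Post", "codeunit 12 Gen. Jnl.-Post Line", relation="calls")

def related_objects(obj: str, depth: int = 2) -> set[str]:
    """Return objects reachable within `depth` hops; use this as a shortlist of
    which raw source to feed the model for a given question."""
    return set(nx.single_source_shortest_path_length(graph, obj, cutoff=depth)) - {obj}

print(related_objects("codeunit 80 Sales-Post"))
# {'codeunit 12 Gen. Jnl.-Post Line', 'table 17 G/L Entry'}
```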

Speaker 2 (57:47):
I think so.
I think VS Code will follow Cursor wherever it goes.

Speaker 3 (57:54):
And then you can just use the Claude 3.7 but also
the CentralQ tool.
So the flow should be like, hey, I want to do this feature, and
please use CentralQ for the best quality, or
something like this.
And then it will first go to CentralQ, find the

(58:17):
existing libraries that can handle this, then provide this
information to the Claude Sonnet, and Claude will provide
the final result.
So this will really, I think, increase the final
quality of the features.
Yeah, so maybe I will do something that will make me

(58:46):
unusable as an AL developer. Make you obsolete, make me obsolete,
that's what you do.

Speaker 2 (58:54):
Dmitry goes down as the father of CentralQ and also
the same one that killed...

Speaker 1 (59:00):
AL development.

Speaker 2 (59:03):
But it's not only him .

Speaker 1 (59:04):
It's everyone else, everyone else.

Speaker 3 (59:08):
Yeah, but I think that it's really changing the
way we do development right now, because for the
last task that I got for AL development, I actually
decided to do an experiment: I decided to not write any code

(59:30):
at all, so I was using just Cursor in this chat mode with
edits to produce the final code, and here is how it turned out.
First, it's doable, so the feature was there.
The first draft was very quick.

(59:56):
Yeah, so it was much quicker than I would have done it.
However, the next follow-ups, asking to refactor something, to
change something and so on, resulted in additional time, so
the total time from zero to hero appeared to be more or less the

(01:00:20):
same as I estimated I would do it myself. But in the
final solution there were many things that I hadn't thought
about from the beginning.
The task was to connect to an external API, and it actually

(01:00:45):
looked in the documentation of this API and found something
that I didn't find by myself when I looked in the
documentation, and it implemented these things inside
of my API integration.
So, something like error log management, with nice

(01:01:07):
features to discover more.
It resulted in a more user-friendly flow at the end.
So I found that these tools really help you to produce a
better solution, and we actually will not go anywhere.

(01:01:27):
We will just make this transition from, you know, AL
programmer to something more managerial.

Speaker 2 (01:01:38):
Yeah, I think you're correct.
You'll be more managerial and more architect and
function-based.

Speaker 1 (01:01:44):
So to go back to your experiment.

Speaker 2 (01:01:49):
You had a task to consume an API.
You didn't want to write any
code, so you wanted AI to create the entire code for you,
including refactoring.
And the amount of time that it took for it to do it, you say,
was about the same amount of time you thought that it would
have taken you to do it.

Speaker 1 (01:02:10):
Yes.

Speaker 2 (01:02:10):
But it produced better quality code in a sense,
because it had additional user-friendliness and error handling
and other features within it that you didn't consider as part
of your first estimate.
That is amazing.
I think that would be a great test, but I'd like to see that

(01:02:32):
test done differently.
See, this is a good session. See, I like to see these types
of sessions.
So, if you ever do one: AL developer versus AI, right? So
you could do it, not as an estimate of what you think, but a live event.
Find an AL developer.
I don't know if you can do it live, depending on what it is and
how much time it takes, but find an AL developer.
We'll have to volunteer someone to write something, give them a

(01:02:55):
task.
You can do it with AI, see how long it takes them and the end
result, and see how long it took your AI, again chaining it
together with the refactoring and the code
completion, to see the reality of who wins.
See, it's AL versus AI, I would call it.

Speaker 3 (01:03:14):
Yes, and the good news is that two days ago I just
got an email from Luke, who is the organizer of BC Tech
Days, that this kind of session was approved.
So there will be a session at BC Tech Days, and as with all

(01:03:35):
sessions at BC Tech Days, they are recorded and then published
on YouTube.
And we'll actually do the session with AJ,
so he will be the old-school one,
doing the classical AL development, and I

(01:03:57):
will be doing the same by just typing. That session alone is
worth the price of admission for BC Tech Days, to see
that experiment where AL developer AJ...

Speaker 2 (01:04:15):
So if AJ is in there, you're doing it.
I don't even want to ask what it's about.
I'm just saying, if AJ is doing it, I have some ideas of what type
of experiment it will be.
Or do you want some ideas for experiments?
I'm open to the ideas. I'll have to send you both some ideas
for this session, because I think that would be a great
session or a great idea.
The world is changing so fast.

(01:04:36):
A little side topic now.
I like to go down this path with you with AI, and I'm trying not to jump
around too much.
Business Central is adding a lot of AI functionality
within the application, with the agents and a few other features.
Where do you see that going within the application itself,

(01:04:58):
outside of development, outside of everything, but the future of
Business Central and ERP software with AI?

Speaker 3 (01:05:08):
So Microsoft is working now and released the
first agent, but the flow seems to me not real.
I mean, the flow of this agent is that the user gets an email

(01:05:55):
and then, based on this email, the sales agent reads
this and generates the sales quote.
Yeah, but I talked with Microsoft.
They told me that they did research and found that
this is a pretty common flow.
But you know, that's just my opinion on that.
But the main thing is that the agents are coming to Business

(01:06:21):
Central. I find that, well, this should really be the next step.
I wouldn't be very optimistic about that from where it is now,
because I see that we still don't have a lot of

(01:06:45):
platform support to make it really powerful.
For example, what I see, we don't have so-called live
queries.
So in many cases, for an agent to work efficiently with Business

(01:07:09):
Central, it needs data.
It needs data to work with, and it needs to search for the
data autonomously.
So, based on the user request, it needs to understand how to
fulfill the task.
It should go to the database, search for the data that is

(01:07:34):
required to fulfill this task, and then maybe do some action.
So that's actually what agents do.
There are many definitions of agents, but I prefer to call
them large language models that take action in a loop.

(01:07:57):
So they understand the query, they understand what next action
to produce, and then they do something to prepare for this
action.
And then they produce the action, yeah, and then this could

(01:08:17):
be a small step.
Then they start once again.
So this is the outcome from my previous action, I need to
start once again: what's my next action?
And we think of these agents such that we actually don't
program them deterministically.
Yeah, so we can set some guard rails, so you can go here,

(01:08:44):
and you know that's your goal.
But how to accomplish this goal?
The agent should decide, and one of the big parts of this
decision, and the process, is to go to the Business Central data
and pull the knowledge that it requires, and I see that, for

(01:09:05):
example, queries, they are a real solution for that.
But we don't have a generate-query-on-the-fly nowadays, like we
can do with SQL queries.
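A compact sketch of the "large language model taking actions in a loop" definition, with hypothetical `llm` and tool callables and a simple guard rail on which tools may be used; it illustrates the pattern, not Microsoft's or CentralQ's agent implementation:

```python
# Hedged sketch of an agent loop: the model proposes the next action, the action is
# executed (within guard rails), and the observation is fed back until it answers.
# `llm` returns a dict like {"action": "search_customers", "input": "...", "final": None}.
def run_agent(goal: str, llm, tools: dict, max_steps: int = 8) -> str:
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        decision = llm("\n".join(history), allowed_tools=list(tools))   # plan next step
        if decision.get("final"):                    # model decided it can answer now
            return decision["final"]
        name = decision["action"]
        if name not in tools:                        # guard rail: only whitelisted tools
            history.append(f"ERROR: tool '{name}' is not allowed")
            continue
        observation = tools[name](decision["input"]) # e.g. query BC data, call an API
        history.append(f"ACTION {name}({decision['input']}) -> {observation}")
    return "Stopped: step limit reached without a final answer."
```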

Speaker 2 (01:09:21):
I understand.

Speaker 3 (01:09:24):
Maybe Microsoft can use this internally, because
they do have an internal connection to the SQL, so they can
generate the SQL queries directly to the database.
But this is once again not secure, because if the user
doesn't have permission to go to this table, it shouldn't get

(01:09:45):
this information, and that's why even Microsoft should run
all this through the platform layer.
Yeah, of course, taking into consideration all the
permissions, and that's actually what I really asked them to do

(01:10:08):
here.

Speaker 2 (01:10:09):
So it's almost like a query API, and that query
API would honor user permissions, so anything that they have
access to would be filtered through the platform.

Speaker 3 (01:10:20):
If this appears, it will open a lot more
different scenarios for the agents inside of Business
Central, and that's when this power of agents will really be
visible, because for now, I think that it's, well,
to be honest, I think it's more automation

(01:10:42):
than an agent. So this is a very deterministic flow, and there is
very little space in the agent's decision where it can go inside
of the process.
And so, yeah, I know that they are also working on the next

(01:11:06):
agent, for the purchase invoicing.
So when the vendor sends you
the purchase invoice, also maybe by email, it will grab

(01:11:28):
this email and recognize the invoice, and it will convert it
to maybe a general journal or a purchase invoice, based on
what is in the invoice.
I think it's a more agentic flow than the first one, but let's
see where it goes.

Speaker 1 (01:11:45):
That's a good point. It does sound more like a
workflow, a Power Automate, than an actual AI
where it requires a little bit of thinking.
That's what it sounds like.
I mean, sales agent and purchase agent,
it's very linear,
yes, in what it's trying to accomplish.

Speaker 3 (01:12:03):
Yes, because I think the power of agents comes
when you say, hey, okay, this is your goal and these are
your tools, and you are really free to organize your workflow
the way you want and use these tools the way you want to
produce the final outcome.

(01:12:24):
That's where these reasoning models actually really help,
because they can produce a really nice plan and then
reflect on the outcome of this plan and maybe do a second
iteration, third iteration.
The deep search agents also work like this.

(01:12:48):
So they have the user query, they understand the intent, and
they plan how to answer the query on their own.
So they can say, hey, this I can go and pull from the
knowledge base.
Or maybe this query is about the code, so I can go to the

(01:13:09):
code knowledge base and feed the answers from there, and many
other things.
Or maybe I want to generate something using the Business
Central API, so I can just ask in the same window.

Speaker 1 (01:13:28):
Yeah, I think that goes back to,
I know Brad and I had a conversation with somebody about
where the power comes when you're using AI.
What maybe they should have done is a full stack solution

(01:13:53):
where, if you want to order something, it's going to take a
look to see what you have available.
Do you have enough available?
Then maybe it would make a suggestion, like hey, I can
create a purchase order.
We can probably get this vendor, you know, to send it to us on
time. To me,
that's a better solution in terms of, like, the experience,
versus, like, I need you to order this.

(01:14:15):
Well, I don't have any, I'll just create a sales order, and
then it kind of stops there, and then maybe call in another agent
to do the purchase order. If they had painted it as a
whole solution, I think it'd be better adopted.

Speaker 3 (01:14:28):
In my opinion, that gives it the power of what AI can really do
for an organization.
Yeah, and I fully understand where they struggle right now
because, actually, if we think globally, the agent concept really
works using the language models behind the scenes, right?

(01:14:52):
So this is like a combination of different calls to the large
language models, orchestrating these calls in the right way,
reflecting on the outcomes and so on, but still it's

(01:15:14):
large language models producing the final answer or the
sub-answers internally, and it's not very deterministic, right?
So all of these things can hallucinate at any level.
But if we implement this in the ERP system, we want it to be
trustable and we want it to be deterministic.

(01:15:37):
So, by design, these two worlds actually don't really fit
together.
We want to build something deterministic with non-deterministic
tools, and that's, I think, where the real problem comes from.
You need to really understand that, hey, this is an AI feature,

(01:15:58):
and we need to accept that it may not be fully trustworthy for
now.
Okay, so that's where we are right now.
We need to accept that, and then we can do many more experiments
and implement many more AI features and see how they really work,
instead of trying to build something very

(01:16:20):
deterministic with a direct flow and calling it an agent.
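
One common pattern for narrowing that gap, sketched below purely as an illustration: keep the model non-deterministic, but wrap its output in deterministic validation and retry or refuse rather than letting an unchecked value reach the ERP. The llm object and the allowed-action list are assumptions, not a real Business Central API.

```python
import json

# Illustrative whitelist of actions the ERP layer is willing to perform.
ALLOWED_ACTIONS = {"create_sales_order", "create_purchase_invoice"}

def get_validated_action(llm, prompt: str, max_retries: int = 3) -> dict:
    """Only accept model output that passes strict, code-level checks."""
    for _ in range(max_retries):
        raw = llm.complete(
            prompt + '\nRespond only with JSON like {"action": "...", "quantity": 1}'
        )
        try:
            data = json.loads(raw)
            valid = data.get("action") in ALLOWED_ACTIONS and int(data["quantity"]) > 0
        except (ValueError, TypeError, KeyError):
            continue  # malformed JSON or bad fields: ask the model again
        if valid:
            return data  # safe to hand to the ERP layer
    # Refuse rather than write an untrusted result into the ERP.
    raise RuntimeError("Model output failed validation; hand off to a human.")
```

The guardrail itself is ordinary, testable code, which is where the trust comes from; the model is only allowed to propose, never to commit.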

Speaker 2 (01:16:27):
Yes, and Chris, to your point, I'm hopeful that it will get there
one day, and I take it that maybe this is the first step to get
there, and hopefully they can get it to work to where it covers
that point too, where the agent has some reasoning and it's
all-inclusive and it can do the

(01:16:48):
whole flow.
Chris, like you had mentioned, with the sales order to the
purchase order, to schedule it, to do it, versus just creating a
sales order and then having somebody have to go do planning or
something else.
Maybe it's on purpose.

Speaker 1 (01:17:00):
It's just prolonging the, uh, abolition of roles in the
organization.
It's like, ah, we'll give you a little bit, so you have a little
bit of time to enjoy your position until it gets replaced.

Speaker 2 (01:17:14):
So what are you saying?
It gives you more time to work on your resume, Chris.

Speaker 1 (01:17:17):
Is that what you're really trying to say?

Speaker 2 (01:17:19):
Maybe.
Well, after talking with Dmitry, I just figured out that I'm
going to update my resume tonight or tomorrow, whatever it may be.
See, he's in the future, he's telling us, right?

Speaker 3 (01:17:31):
He's in the future.

Speaker 2 (01:17:32):
He's telling us put your resume together tonight.

Speaker 3 (01:17:35):
Yeah, but I think that's the only line that you can add to your
resume, and you will be there in the field for the rest of the
years, at least for maybe the next two, three, five years.

Speaker 2 (01:17:50):
It's the manager of AI agents.
So there you go.
Thank you very much, see, my new role: I'm the manager of AI
agents.
That's what I want my new title to be.
I'm going to put that on my email: manager of AI agents.

Speaker 1 (01:18:04):
No, just put it in the update.
Say future, you're a future one, and that's your role.
Put it down right now.

Speaker 2 (01:18:12):
I'm a future manager of AI agents.
Is that what

Speaker 1 (01:18:15):
you're saying? Yeah.

Speaker 2 (01:18:18):
Maybe I'll do that.
Well, Mr. Dmitry, sir, we appreciate you taking the time to speak
with us tomorrow early in the morning and to share information
with us about CentralQ and where you're going with it.
Congratulations on CentralQ turning two.
I love just saying that; it sounds like Central Q turns two.
I don't know if Central Q turns three works as well; we'll have
to come

(01:18:39):
up with another jargon for that, but we do appreciate everything
you're doing for the community with CentralQ and all the other
information that you share online as well as at these
conferences, and I'm looking forward to seeing the results of
this BC Tech Days session that you're doing with AI versus AL.

Speaker 3 (01:18:58):
I'm looking for scenarios and, by the way, if you're watching
this podcast on YouTube, I'm open.
Just send me your scenarios and we'll try to do this on the stage.

Speaker 2 (01:19:13):
Oh, that'd be great, that'd be great.
When is the BC Tech Days conference, Chris?
We'll have to make sure this gets out far enough in advance, and
we'll have to share that suggestions are open.

Speaker 3 (01:19:23):
Yeah, so I think it's 15, 16 June, around those days.

Speaker 2 (01:19:33):
Okay.

Speaker 1 (01:19:34):
Plenty of time.
We'll put it out, for sure.
Yeah, we'll have plenty of time.

Speaker 2 (01:19:37):
It's in June.
Mid-June is BC Tech Days, and I'm looking forward to seeing your
session.
If anybody would like to find more information about some of the
great things that you're doing, or learn a little bit more about
CentralQ.
Or now, Chris, did you know that you can support CentralQ?
Dmitry does this all on his own, and many people benefit

(01:19:58):
from the use of it.
So you also have the opportunity to support CentralQ, so you can
do that.
So where can someone get in contact with you?

Speaker 3 (01:20:09):
Um, so first, centralq.ai, that's the free website.
Then from there you can go to the docs and see the documentation.
From there you can go to the AppSource app, or you can go to
AppSource and find CentralQ there.
Me, I'm on LinkedIn, Dmitry Katson, almost always online,

(01:20:40):
especially at 6am in the morning, always for you.

Speaker 2 (01:20:44):
I know, great. Well, next time we'll do it at 5am.

Speaker 3 (01:20:51):
Well, next time we'll do it at 5am?
Now, do you really want

Speaker 2 (01:20:55):
to talk about that.
We'll talk about that later, we'll see.
We'll have you on.
I still hope, and I'm holding out, that you do have the
opportunity to make it to the United States for the upcoming
Directions Conference.
I know it's really close, and I know it's difficult logistically
to travel from the future back to the present on short notice,
but if you do attend, just shoot me a message, because I
definitely

(01:21:18):
will make sure that I look out for you, and Chris and I would
like to hear more about the future with you while we're in Las
Vegas.
Thank you again for all that you do, and I look forward to
speaking to you again soon.
Ciao, ciao, ciao.
Thanks for having me. Bye-bye, thank you, bye.

(01:21:39):
Thank you, Chris, for your time for another episode of In the
Dynamics Corner Chair, and thank you to our guests for
participating.

Speaker 1 (01:21:42):
Thank you, Brad, for your time.
It is a wonderful episode of Dynamics Corner Chair.
I would also like to thank our guests for joining us.
Thank you to all of our listeners tuning in as well.
You can find Brad at dvlprlife.com, that is
D-V-L-P-R-L-I-F-E dot com, and you can interact with him via

(01:22:05):
Twitter, D-V-L-P-R-L-I-F-E.
You can also find me at matalino.io, M-A-T-A-L-I-N-O dot I-O,
and my Twitter handle is matalino16.
And you can see those links down below in the show notes.
Again, thank you everyone.

(01:22:26):
Thank you and take care.