Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Mark Smith (00:01):
Welcome to the Co-Pilot Show, where I interview Microsoft staff innovating with AI. I hope you will find this podcast educational and that it inspires you to do more with this great technology. Now let's get on with the show.
In this episode, we'll be focusing on small language models, Fabric and intelligent agents.
(00:23):
Today's guest is from Israel.
He works at Microsoft as a data and AI product lead. You can find links to his bio and socials in the show notes
for this episode.
Welcome to the show, Ilya.
Ilya Venger (00:35):
Hello, hello,
thanks.
Thanks for having me.
Mark Smith (00:37):
Good to have you on.
Tell me, I always like to get to know the guests to start with. Let's say, for the audience, kind of get a bit of background before we get into the tech side of things: food, family and fun. What do they mean to you? They're a bit of everything, right? It's like having food with family is fun. Yes, indeed, indeed, but like, what's the best food to eat in Israel?
Ilya Venger (00:57):
Oh well, everybody would say falafel, right? Falafel? We've exported it everywhere, so I'm sure that this is the thing that comes to everyone's mind, and I won't dissuade people.
Mark Smith (01:09):
Yes, yes. I tell you what, the minute you said it, I went straight to a place in New Zealand where I know they have good falafel. And you're so right, it's known everywhere as a great cuisine. So what part of Israel are you based in?
Ilya Venger (01:20):
I'm based near Haifa, so this is towards the northern part of Israel, sort of in the mountains just above the sea. Great sea views, not at this time of night, but yeah.
Mark Smith (01:32):
Nice, yeah, really nice neighborhood. And does Microsoft have a big footprint in that region?
Ilya Venger (01:40):
So essentially, Israel is a small country, right? So we've got about 80 kilometers to the largest office, which has about 3,000 people, and then we've got an office just here nearby which is about 300 people, something like that. So it's a regional office here.
Mark Smith (01:57):
Yeah, yeah, okay. I didn't realize that Microsoft had so many staff based there, but it totally makes sense. It totally makes sense.
Tell me about your day job. What's your role, what do you do, and what is your focus at the moment?
Ilya Venger (02:07):
Yeah, so I lead the product team within Business and Industry Solutions, particularly on the AI front. So we are working essentially along several different directions. On one hand, we're dealing with all the data that is needed for
(02:27):
agents and for co-pilots, and particularly how do we transform the data according to industry standards, and how do we offer our partners and customers within different industries, such as financial services, manufacturing, healthcare, retail, etc. How do we offer them the right data tools? And we're going to talk about this a little bit later.
And then we've got another contingent of small language
(02:51):
models, which are specialized models where, again, we're working very closely with our partners who want to train third-party models, so models that partners themselves train or fine-tune with their own proprietary data. And then we've also got some of our own models that are industry-specific for particular tasks.
And then, lastly, my team is also doing what's called RAG, so
(03:12):
retrieval-augmented generation. That is again geared towards documents that are related to a particular domain.
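For readers unfamiliar with the pattern, here is a minimal sketch of the retrieval-augmented generation idea he describes: embed the domain documents, retrieve the closest ones for a question, and hand them to a model as grounding context. The bag-of-words "embedding" below is a toy stand-in so the example runs anywhere; a real system would use an embedding model and a vector store, and none of this is a specific Microsoft API.

```python
# Minimal retrieval-augmented generation (RAG) sketch. The bag-of-words
# vectorizer is a toy stand-in for a real embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Retrieved domain documents become grounding context for the model,
    # which is what keeps a domain-specific RAG answer anchored to real data.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Net revenue for FY24 was $12.4M, up 8% year over year.",
    "The company opened two new branches in the northern region.",
    "Operating expenses decreased by 3% in Q4.",
]
print(build_prompt("How did revenue change?", docs))
```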
Mark Smith (03:22):
Just to jump in on the industry piece to start with: is this what was formerly the industry clouds, correct?
Ilya Venger (03:30):
It still actually is the industry clouds. So this is Business and Industry Solutions. Yes, so that encapsulates industry clouds as well as the AI ERP business. So all the ERP within Dynamics 365 is sitting under the same organization, and right now, one of the things that we're doing is actually bringing all the goodness of the industries into
(03:53):
ERP.
That is kind of the next big strategic move.
Mark Smith (03:57):
Yeah, yeah, perfect. Correct me if I'm wrong: is it about five, or is it seven, different industry cloud focuses that have been in play now for some years?
Ilya Venger (04:06):
So I think there's about seven right now, yeah, and they're sometimes unified and sometimes dispersed. So we've got financial services, retail. Retail includes CPG, so consumer goods, as well as agriculture, which is also from plate to table eventually, if you think about that. And then we've got manufacturing, automotive.
(04:27):
Healthcare.
Sustainability also is a specific segment and specific area.
Mark Smith (04:33):
Yeah, so I've been involved in Australia in an implementation of the health cloud for a major company there. That was a two-plus-year project. I did that while I was working at IBM, so I'm very familiar with that. And I've actually demoed the finance cloud for the banking sector, and I was involved quite a bit with the sustainability
(04:56):
cloud internally with Microsoft probably about 18 months ago. Yeah, so I have a familiarity with them, particularly on the health side. You know, Nuance was a company that Microsoft purchased. I've just heard a big announcement of a copilot which will help doctors, you know, dictate notes and things like that. Is that sitting under your area as well, because it obviously
(05:17):
would feed into that cloud piece, or is it more of a standalone product?
Ilya Venger (05:21):
So actually, the cloud for healthcare is running out of the larger organization for health, what used to be Nuance. So it's a very large organization now with the acquisition, very influential. We are doing quite a lot of work together with them. So the data foundations that they have in the healthcare data solutions are essentially running on top of the infrastructure
(05:41):
that our team is developing.
Mark Smith (05:43):
Okay, okay, interesting. Tell me about small language models. Yeah, we've heard a lot about them. Microsoft seems to be the leader in the discussion. Everyone wants to talk about LLMs out in the market, but it was, I think, maybe 18 months ago that Microsoft started talking about small language models. Can you just explain to the audience what's the difference
(06:07):
between an LLM and an SLM, and what's your thinking around this?
Ilya Venger (06:13):
Yeah.
So very, very briefly, the difference is actually encapsulated within the name: large language models, LLMs, versus small language models, SLMs. Now, small language models are significantly smaller. So if we actually look at the size of the model, the leading models are rumored to have about one trillion
(06:34):
parameters, potentially some of them more, or in the hundreds of billions. Small language models are, I would say, up to tens of billions of parameters. So they're significantly smaller, which means that they have a smaller memory footprint. They require much less processing power. They sometimes require, well, actually not necessarily less
(06:57):
data, and we're going to talk about this in a second, but they crunch the same amount of data to produce more distilled results. And small language models are particularly effective when doing general tasks that are sort of inside the consensus, I would say, of the large language model.
(07:19):
So very complex reasoning, for example, would be complex for a small language model, which is more doable with a large language model. Also, small language models would have less knowledge. Large language models would be more accurate on the knowledge. However, in particular domains you can have significantly lower hallucinations, because you would have trained the model to
(07:40):
be a specialist in a particular domain. So, you know, think about this as generalist versus specialist. So you've got a generalist, that is, the large language model, that can do everything, because they have been trained on huge budgets, and they require huge budgets to run. Then you've got generally high quality. And then specialists are specialized in a particular area,
(08:02):
and we've got some very interesting numbers coming also from industry analysts, and what we're observing in the market is that customers are expecting to see a very, very large number of different small language models in different domains.
So I think Gartner is predicting that more than 50% of the
(08:23):
models in use are not going to be large language models, but are going to be domain-specific small language models.
And with the work that we're doing, we're saying, OK, there are particular tasks, for example, answering questions on financial reports, or analyzing. So one of the models that we've released with one of our
(08:43):
partners, Saifr. So this is within Fidelity Labs.
They have released models thatare evaluating specifically
marketing communications thatare going out to clients within
the retail banking domain or theretail investment domain in
order to decrease mis-selling,etc.
(09:04):
So you've got specialized models that are good at specialized tasks, and therefore they are a really good fit for industries, because this is where we, as industries, can excel: we can combine data that is relevant to the industry, train the model for particular tasks that are relevant to the industry, and then achieve a significantly better cost-performance ratio, as well as just general
(09:25):
performance quite often.
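As a concrete illustration of the size difference, here is a minimal sketch of running one of Microsoft's small language models, Phi-3 mini (about 3.8 billion parameters), locally with the Hugging Face transformers library. The model ID is the public one on the Hugging Face Hub; treat the rest as an illustrative sketch rather than a production recipe.

```python
# Minimal sketch: running a small language model locally with Hugging Face
# transformers. Phi-3 mini (~3.8B parameters) fits on a single consumer GPU,
# unlike frontier LLMs with hundreds of billions of parameters.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"  # public model on the HF Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A narrow, domain-style task of the kind SLMs are suited for.
messages = [{
    "role": "user",
    "content": "Flag any problematic claims in this marketing sentence: "
               "'Guaranteed 20% returns, completely risk-free!'",
}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=80)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```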
Mark Smith (09:27):
Yeah. What are your thoughts then on Fabric and data management? For a long time, one of the illustrations I've used with customers is that when you feed, for example, Copilot, and it's fed by the Microsoft Graph, an average-size organization, let's take an arbitrary number, might have 10 million data artifacts inside that organization. They take those data artifacts and then they start querying it
(09:50):
and go, well, the answer's incorrect, you know, that's what's coming back. And then we look at the underlying data and those 10 million artifacts: human error has been introduced over years and years and years of data input. You've got an Excel spreadsheet, someone's created a formula wrong, it gets copied by a colleague, the colleague sends it to their friends, and then all of
(10:11):
a sudden there's 10 copies of erroneous data.
And then along comes the model. It looks at it, and the reinforcement it gets is a reinforcement of this error introduced by humans, and this has happened potentially over decades of that organizational data. And then we go, oh, the model is hallucinating, and I go, well,
(10:33):
hang on a second, it's just trained on data that actually has the error already in it. Why is data management so important, and why do you need to really distill down the data sets that you provide to a model so it is actually the correct data, without all this human error that's been introduced over the years?
Ilya Venger (10:52):
So, first of all, it's a journey and we're still sorting things out, right, and I think that this is important. So let me first quickly talk about Fabric, because maybe not everybody's familiar with Fabric, and it is related, but it's not the full solution to the problem that we've been talking about. So Fabric is essentially Microsoft's unified data platform. So you can bring data from multiple different sources,
(11:15):
whether these are Microsoft sources, whether these are in your blob storage, in your lakehouses, both within Microsoft or outside of Microsoft. You can symbolically link anything that you've got on AWS, or if you've got something in BigQuery, so you can bring it all into one environment that is unifying the different personas there. So you've got essentially the whole pipeline, starting
(11:38):
from data architects, data engineers, data analysts, business analysts that are working with Power BI, so all of them have the tools that allow them to operate on data on top of Fabric. So this is what Fabric is: one environment where you can manage your data, transform your data, clean your data, etc.
(11:59):
So, to your question of, okay, how do we deal with non-clean data? One thing is, okay, let's make sure that we've got one place where we can work on the data. So this is the first thing. The second thing is that you need to have your governance processes in place. So, again, Fabric is connected to Microsoft Purview, and Purview
(12:20):
is a data governance platform and a data security platform, and so that is going to be your data catalog where all your data assets are registered. So the data assets themselves are available in Fabric and registered in Purview, that's the simplest way to look at this, and that will allow you to build processes in order to be
(12:42):
able to process your data and clean it and deduplicate it and trace errors, etc.
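A minimal PySpark sketch of the kind of cleaning and deduplication pass he describes, as it might run in a Fabric notebook over lakehouse tables; the table and column names are hypothetical.

```python
# Minimal sketch of a clean/deduplicate pass over a lakehouse table.
# Table and column names are hypothetical; the pattern is plain PySpark.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()
raw = spark.read.table("bronze_customers")  # hypothetical source table

latest_first = Window.partitionBy("email").orderBy(F.col("updated_at").desc())

cleaned = (
    raw
    # Normalize the fields that duplicates usually differ on.
    .withColumn("email", F.lower(F.trim(F.col("email"))))
    .withColumn("name", F.initcap(F.trim(F.col("name"))))
    # Drop rows failing basic validity checks rather than feeding them to AI.
    .filter(F.col("email").rlike(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"))
    # Deduplicate: keep the most recently updated record per customer.
    .withColumn("rn", F.row_number().over(latest_first))
    .filter(F.col("rn") == 1)
    .drop("rn")
)

cleaned.write.mode("overwrite").saveAsTable("silver_customers")
```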
And then we come, of course, to AI, and AI comes in two places, right? On one hand, you want to use AI as much as possible in order to make sure that your landscape is clean and is usable
(13:04):
downstream, also to be consumed by AI, right? So you've got AI at the entrance, as a filter, as a cleaner and organizer of the data, as well as getting the data out and feeding it back into your agent. Now, to the specific question: I think the journey of, you know, just taking all
(13:24):
your data and just filing it in, it cannot work, right. It just cannot work because, exactly as you said, you need to essentially filter and understand, okay, which data sets are you going to be dealing with, which data sets are the more important ones, then start working through these and establish your own governance programs. And
(13:46):
you know, we have not invented many new things in that space within the last 10, 15 years; we're actually on that same journey. So, starting from when we defined big data: okay, we need to get our hands around that, get a program around this, make sure that we govern those particular data sets. Now it's much easier to get access to those data sets, much easier.
(14:09):
We've got better tools to curate, we've got better tools that enable processes, but that's a mandatory part of data governance. Taking my own lens, we are saying, okay, once we're bringing in data, if you are in a particular scenario, like an industry scenario, you need to make sure that you've got semantics on top of that data, right?
(14:30):
You need to understand what the data actually means, right, and you need to make sure that you harmonize, so that the customer from your, I don't know, branches in New Zealand and the branches in Australia, right, that the definitions of customer eventually can be merged. And you need to be able to put all of them into the lakehouse such that they talk the same language and such that
(14:50):
you harmonize data from various different systems.
And so our team is responsible for industry data models. So these are pre-built industry data models that are humongous, like 3,000 to 4,000 tables for each one of the different industries, highly normalized, that allow you to bring in and represent a variety of different data points, with rich
(15:12):
semantics, essentially saying, okay, so this is the table, this table means X, Y, Z, this is the column, so a customer means the same thing for everybody who brings data into the lakehouse. So this simplifies the work of data architects. And then we also simplify the work of data engineers that need to bring data from multiple different sources. So we've got very efficient transformation tools that are
(15:34):
allowing you to bring the data in. That's kind of how we're looking at it: govern, architect, engineer. Yeah, yeah, brilliant.
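A minimal sketch of the harmonization step he describes: mapping two regional source schemas onto one shared customer definition before loading the lakehouse. The schemas and field names are hypothetical, not Microsoft's actual industry data models.

```python
# Minimal sketch: harmonizing two regional customer schemas into one shared,
# semantically defined shape. Field names are hypothetical.
from dataclasses import dataclass

@dataclass
class Customer:  # the shared target definition everyone loads into
    customer_id: str
    full_name: str
    country: str

def from_nz(row: dict) -> Customer:
    # New Zealand branch system: separate first/last name fields.
    return Customer(f"NZ-{row['cust_no']}",
                    f"{row['first_name']} {row['last_name']}".strip(), "NZ")

def from_au(row: dict) -> Customer:
    # Australian branch system: single display-name field, different key.
    return Customer(f"AU-{row['id']}", row["display_name"].strip(), "AU")

nz_rows = [{"cust_no": 101, "first_name": "Aroha", "last_name": "Ngata"}]
au_rows = [{"id": 7, "display_name": "Liam O'Brien"}]

# After mapping, both sources "talk the same language" in the lakehouse.
for c in [from_nz(r) for r in nz_rows] + [from_au(r) for r in au_rows]:
    print(c)
```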
Mark Smith (15:43):
Where do you see, you know, being that you're focused on industries, industry-specific agents? Like, you know, with the clouds that Microsoft has, we've seen the data model being the kind of common data model per industry. Are we now going to see, like, standardized industry-specific agents? So let's use the healthcare use case.
(16:06):
Could I have a nurse agent, right, that has the knowledge, the training of a skilled nurse, in neurology for example, and they're going to be alongside maybe a physical nurse, you know, a human that might be doing their work, and assisting them? Are we going to see these? And they're not just going to be trained on the education of,
(16:28):
let's say, one university, it's going to be everybody from all the major universities, potentially, as training. So we're going to get this massive amplification of knowledge, but just on a specific topic, and then that agent is going to assist people in their day-to-day roles who work in those areas.
Ilya Venger (16:44):
Yeah. So this is exactly what we've got now with the DAX Copilot agent, where you've got an assistant for a doctor to take notes, okay. So it's still a doctor's assistant, so it does not replace the doctor, it does not do diagnosis directly, but it is an assistant. So this is one thing that is already starting, that is brewing there. I think the important bit is that Microsoft itself wants to
(17:10):
facilitate, via the partners, the creation of these agents, right, in most of the areas. So, specifically within healthcare, we are very, very deep because we've got the Nuance acquisition, so we are very deep in that space and we've got very particular IP there that came with that acquisition. In other areas, quite often, what we have is, I
(17:30):
would say, a starter agent, right.
So, for example, we've got a retail shopper personalization agent, right. So for an e-commerce retailer, if they want to build an agent that is going to be sitting inside their website or on top of their website, we are providing a starting point that provides the scaffolding, but we will require the customer to
(17:53):
connect to their own systems, to maybe augment within their own workflows, etc. So within many of the industry clouds, we've got an agent that is intended to be amplified either by customers or by partners, for partners and customers to build on top of. So this is one bit.
That's when we're talking about agents. And then the other
(18:14):
element, sort of a good way to think through this future world, I would say, is skills and tools, right. We are going to be offering skills and tools. For example, one of the products that my team has developed is a financial document analysis skill for agents, right? So it is a particular skill that can be plugged into any agent.
(18:37):
This currently is available within Azure Marketplace. You deploy it into your own tenant, and then this can be connected to any agent that you want, and it will provide a significantly better-quality RAG solution than your standard out-of-the-box ones, because we have built a lot of metadata and document-crunching capabilities into this skill.
(19:01):
That is then going to be available for building agents wherever you're building them. So that is relevant to the industry, right, because this is a financial agent, or financial agent skill, and then we might have a manufacturing standard operating procedures analysis skill. So you will have these kinds of
(19:22):
building blocks with which partners in each one of the industries are going to be able to build their own agents that are going to be sieving knowledge from everywhere, but we do need to smartly sieve that knowledge. Yeah, and each area requires knowledge representation in its own way.
Mark Smith (19:41):
Where does Copilot Studio come into all this, from your perspective?
Ilya Venger (19:43):
Yeah, so Copilot Studio, it's an interesting one, right? So, first of all, sort of non-marketing-wise, right, Copilot Studio eventually is an evolution of what used to be Power Virtual Agents, and it has taken the best bits of that and essentially added all the AI goodness into the mix. So what do I mean?
(20:07):
An agent, relatively, you know, a low-code agent, that has different topics about which it can reason, and it can act within these topics. These topics are being selected by the orchestrator, a built-in capability based on an LLM within Copilot Studio. So every maker within every company should be able to build,
(20:31):
either from scratch or starting from a starter template, their own agent. And what it allows is to relatively simply create, from basic building blocks, an agent for yourself, so it could be an agent for myself, or an agent for my team, usually more effective, obviously, than just me creating for myself, but creating something for the team:
(20:57):
an agent that will be able to execute within the workflows. And because this is within the Power Platform, you are able to connect it into your flows with standard Power Platform building blocks. And what this allows additionally is, after you have built your agent and you've tested your agent, it allows you to publish that agent to a variety of different channels, right? One channel could be, of course, Teams.
(21:20):
Another channel could be Microsoft 365 Copilot. And a third channel could be Slack, right, it doesn't have to be within the Microsoft ecosystem, or it could be a custom website. So you can publish that agent, and it can have many appearances in different places.
And that's what makes Copilot Studio very powerful, because
(21:40):
you've got this composition, you've got AI, you've got LLMs in your flows, as well as connection to, you know, the 3,000 or 3,500 connectors that we've already got in the Power Platform ecosystem, so you can bring data from wherever.
And actually, talking about curating data, right, you expect
(22:03):
that the specialists on the spot, in the places that know their data, will not be bringing in all the data, and they're actually best positioned to curate the specific data sets that they want to provide to their own copilot built within Copilot Studio. And then they will also be building it into the flows that are most logical for them within their
(22:24):
organization, within their smaller organization, within their business unit, and then publishing it out into the channel where they communicate, rather than the channel that is forced on them by an external provider. So that's, yeah, kind of where we are and where we're going, but it's a journey, as always.
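A minimal sketch of the orchestration idea behind this: an agent has named topics, and an orchestrator picks which topic handles each message. In Copilot Studio the selection is LLM-based; the keyword matcher below is a stand-in so the example runs, and the topic names are hypothetical.

```python
# Minimal sketch of topic orchestration: route each user message to the
# topic best placed to handle it. The keyword matcher stands in for the
# LLM-based orchestrator; topic names are hypothetical.
from typing import Callable

topics: dict[str, Callable[[str], str]] = {
    "order_status": lambda msg: "Let me look up that order for you.",
    "store_hours":  lambda msg: "We're open 9am-6pm, Monday to Saturday.",
    "fallback":     lambda msg: "Let me connect you with a person.",
}

def orchestrate(message: str) -> str:
    triggers = {"order_status": ["order", "delivery", "tracking"],
                "store_hours": ["open", "hours", "close"]}
    for topic, words in triggers.items():
        if any(w in message.lower() for w in words):
            return topics[topic](message)
    return topics["fallback"](message)

print(orchestrate("When do you close today?"))   # -> store_hours topic
print(orchestrate("Where is my order #4123?"))   # -> order_status topic
```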
Mark Smith (22:42):
Yes, yes. We're fast closing in on the end of, well, Q3, and we've got a quarter to go. What's your feeling between now and entering the next FY? Even if we took it from a calendar perspective, what are you excited about that's going to happen in 2025? Well, within 2025 overall, yeah, let's go for this calendar year.
(23:04):
Yeah, as in, how far do you think we're going to progress in this AI space? Not just necessarily Microsoft, but what are your thoughts in general?
Ilya Venger (23:11):
So I think, first of all, I can always be wrong, even within the nine months, right. As a product manager, my job is to try and look forward. I used to have a horizon of two or three years. I remember when the whole ChatGPT thing came out, I sort of said, okay, my horizon has actually shortened to four weeks. I could not predict what new things were going to come within the next four weeks.
(23:31):
I think now our horizons have lengthened a little bit, which is, I think, very positive, definitely positive in the product space, because we are less randomized, so we're building more purposefully.
So I think there are two things that need to be taken into account. First of all, the models that we are getting and that we have been building over the last, let's say, year, this
(23:54):
relatively new generation of models. The continuous improvement that we have seen was, I would say, within the business space. Right, within the business space, continuous improvement was within coding, right. And coding is very important, not because it's going to replace coders, that's not the reason. The reason why coding, and writing code, is important is
(24:16):
because then you can bridge to code and create neurosymbolic architectures, right.
That's the simplest way to create that, right: you've got the large language model, which is associative, but it can write code, and once this code is written, it can be executed, and then you don't have all the limitations that
(24:37):
you usually have with LLMs, that they are stochastic, probabilistic, et cetera, because you've hardened the probabilistic part into code, and now it stops being probabilistic, now it is ironclad and it's getting executed.
So we've had significant improvements, definitely on smaller portions of code. So maybe, you know, writing a huge, large enterprise-grade
(24:58):
application, we're not there yet, and again, that is more about competing with standard human developers. But, and that's the second thing, we've given our models and our agents an ability to traverse from the probabilistic and stochastic to the solidified and symbolic methods, which are: okay, write some code, evaluate that code,
(25:19):
get a definitive answer, right. And already, on small snippets of code, it reasons very well. And so we've given them this very powerful tool of, you know, properly giving them formal logic.
going to be developing and it'sgoing to be developing in.
I think all the models thatwe're going to see and all the
(25:40):
agents that we're going to beseeing are going to be executing
code on the fly, and we can seemore and more of this.
This started with codeinterpreter.
Now we're seeing some wonderfulthings also on the UI.
I think that this is beyond2025, but still, you know, we
can talk about this in a second.
But sort of executing code andfor the models to be able to
execute code is one.
And second thing is agents,right, and agents, what they
(26:02):
allow, and they allow thatsignificantly better than where
we were, let's say, two yearsago, because I think it all
started with AutoGPT and BabyAGI.
You know, for those thatremember those wonderful days
where they always looked at thisand said, oh, this is magic,
but it would actually, you know,go astray immediately.
(26:23):
Right now, we're already having much better agent patterns that allow for cross-validation, that allow coding, again, inside the loop, and much better platforms that will be hosting agents. And I think that's what we're going to be seeing: Copilot Studio as a place where you can generate agents with low code, etc.
(26:43):
So you've got all those elements maturing into something where you're going to have significantly more agents for significantly better tasks, with larger and larger portions of work that can be offloaded to the agent, that you don't need to worry about, and that, you know, you validate. As a human, you validate, but then you also validate the edge
(27:06):
cases, essentially, and you've got sufficiently good systems that are telling you, okay, this is an edge case, you know, you need to validate, and that these, I don't know, 1,000 others are not exceptions. So you can start managing exceptions rather than just managing the general acceptance and reviewing everything, as long as we've got sufficiently high-fidelity systems, yeah.
So I'm very excited about that.
(27:28):
And I think one thing that people are starting to notice, and it appears here and there, I don't know how many have played around with Claude Artifacts, or with services like Lovable or v0. So there are multiple different services that allow you to generate, essentially, UI on the fly.
(27:51):
I think that if we look a little bit further out, and this is where I think Microsoft's strategy is also going eventually with Copilot, what you're going to be seeing is that more and more a UI that fits your specific needs, yeah, the specific needs of the user in the moment, is going to be generated on the fly, because it's not logical to always talk to
(28:14):
your agent or to your co-pilot, right? What you want is to convey your intent in the best possible way, in the shortest time span. Okay, you know, if I need now to choose a color, it's not logical to say, oh, red 131, green 25, blue... You know, you don't want to go there.
(28:34):
What you want is the color picker, right? So you either want it to magically guess what exact color would be the most beautiful to you, or, yes, you know, give a person a color picker and they would need to pick a color, because we operate on the visual as well as audio, and nobody takes our eyes away. And I think that this is
(28:56):
something that is going to be happening more and more.
I think that it's not going to be, okay, all the code is generated on the fly all the time, although even there we can already see, with the recent demonstrations of games that are being developed by a large language model as one actually plays, it dynamically generates the stages and the levels. But what we're going to be seeing is probably that it's going to
(29:18):
start with particular elements being recombined. Also, some elements are being put aside: okay, this is a good piece of code, let's reuse that; okay, so here's a checkbox picker, because you don't need to reinvent these things. You can just take these off the shelf, and then you're going to have a proliferation of libraries of these elements that
(29:40):
right now quite often require, you know, design, and, you know, somebody to select them and to pick them and to make sure that they work together. So I think that all these things are going to be significantly simpler, because code can be generated on the fly, including the front-end code. So very exciting times, I think, from that perspective.
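A minimal sketch of that idea: instead of forcing everything through chat, the agent surfaces the widget that conveys the user's intent fastest, a color picker for a color, free text otherwise. The hard-coded intent matching below stands in for a model choosing or generating the component; the widget names are hypothetical.

```python
# Minimal sketch of "UI generated on the fly": map the user's intent to
# the widget that conveys it fastest, instead of forcing chat. The intent
# detection is hard-coded; a real system would have a model choose or
# generate the component.
from dataclasses import dataclass

@dataclass
class Widget:
    kind: str   # e.g. "color_picker", "text_input"
    label: str

def widget_for(intent: str) -> Widget:
    # Dictating "red 131, green 25..." by voice is absurd; show a picker.
    if "color" in intent.lower():
        return Widget("color_picker", "Pick a color")
    if "date" in intent.lower() or "when" in intent.lower():
        return Widget("date_picker", "Pick a date")
    return Widget("text_input", "Describe what you need")

print(widget_for("I need to choose a color for the header"))
print(widget_for("When should the report run?"))
```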
Mark Smith (30:01):
Very inspirational. I love it. You've just set off a whole bunch of ideas in my mind. Ilya, thank you so much for coming on the show. Thank you, thanks for having me. Hey, thanks for listening. I'm your host, Mark Smith, otherwise known as the NZ365 Guy.
Is there a guest you would like to see on the show from Microsoft? Please message me on LinkedIn and I'll see what I can do.
(30:23):
Final question for you: how will you create with Copilot today? Ka kite.