Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
What if every
developer on your payroll
suddenly worked like two?
AI-powered coding assistants are rewriting the math on developer productivity, boosting output by as much as 50 to 60 percent. But the real question for leaders is how to capture the upside without inviting security gaps and culture shock. In today's episode, you'll hear Worldwide Technology's Nate
(00:22):
Mackey and Andrew Athen pull back the curtain on what many are calling the killer app for generative AI. By the time we're done, you'll know whether these assistants are a shortcut, a crutch, or the new competitive baseline, and what it will take to stay in control.
This is the AI Proving Ground podcast from Worldwide Technology, and even if you've never touched a line of code,
(00:44):
you'll want to listen closely, because AI coding assistants affect timelines, budgets, security and talent strategy, and this episode breaks down each of those angles in everyday terms, so that you can join the conversation and steer it inside your organization.
Let's get to it. Nate,
(01:10):
Andrew, thank you so much for joining the show today. Yeah, absolutely. Excited.
We're talking about AI-powered coding assistants, and certainly AI has been around for a while, but gen AI is still relatively new here. Coding assistants have been around for some time. Nate, I'm curious, you've probably used coding assistants for some time. What is the landscape there, and where were they, you know,
(01:35):
perhaps in the mid to late 90s, versus where they are today, with that actual, true AI-powered function?
Speaker 3 (01:37):
Yeah, I remember getting excited about them in, you know, the mid-2000s. The idea of autocomplete is basically what we would say: you're working along and, oh, how do you spell this? Or what's the name of that function? I can't remember. Being able to hit tab and, you know, instantly get that done for you.
And that kind of thing has continued to grow and get more
(01:59):
sophisticated, but generative AI just took it to a whole new level of what it could do. Because not only are you getting the benefit of some names here and there, but generative AI, really understanding what your code is and looks like and where you might be headed, just gave you a whole new ability to save yourself time and energy.
Speaker 1 (02:19):
Yeah, and Andrew, you know, what are you seeing from the AI-powered coding assistants? What is it actually enabling coders to do in this AI age?
Speaker 2 (02:28):
It's really interesting. You know, like you, Nate, I come from a background of starting from text-based editors, you know, that could index your code base and maybe answer some very simple questions as to where is a function, or, you know, what is a variable name that should be completed.
You know, today we're in a spot where the tool that you're
(02:51):
using as a developer really understands the full context of what you're doing and can answer the questions that, you know, pop into your mind. You don't even have to go out to a browser anymore. You pop over into your chat window, which is your coding assistant.
(03:12):
It knows probably more than you do about a wide range of development topics and tool sets that you might be using. You ask a quick question and then you can continue with your task.
Speaker 3 (03:25):
Yeah, it's really the killer app for generative AI right now. I mean, while it's great at coding, or content generation in general, you know, there's a lot that you have to deal with around hallucinations and being factually correct. When it comes to code, the benefit there is that things are sort of black and white: you know, your code works or it doesn't, on a lot of levels, and
(03:48):
so the ability for generative AI, which is really good at predictive text, to be able to use the corpus of everything that's been out there on the internet around code and help you do your work, that's what brings it to a whole new level and makes generative AI particularly effective at helping you out.
Speaker 1 (04:07):
Yeah, and is it just a speed thing right now? I mean, I've seen numbers as high as maybe the upper 50s or 60% productivity increase. I've seen them down, maybe, in the 30s or 40s. Are we talking about just speed here, and where should we think about that?
Speaker 3 (04:20):
I mean, there's speed, absolutely, because just having things kind of built out for you and being able to fill it in, rather than just spending time typing, you know, that sort of thing certainly happens. But it's also that knowledge that Andrew was just talking about. Whereas before, when you're thinking about, all right, how am I going to fix this bug or how am I going to add this
(04:41):
feature, the first question you ask is: where do I need to go in the code base to do this? And you might spend, you know, half an hour, an hour even, depending on how large it is, figuring out where to even start and where you should put this. Whereas now you can ask generative AI to recommend where you should go, where is the part of the code base that handles this particular element of what you're trying to do, and have a
(05:03):
conversation, versus, you know, remembering the right commands to give it to be able to get where you need to be, and have it direct you immediately and then probably give you suggestions for, you know, what you should do to make it work.
Another example: I remember, specifically in my career, one night being all by myself in the office.
(05:23):
Everything is dark, everyone is gone, and I'm trying to figure out why this code was not working, when ultimately it turned out it was a misplaced comma over in a file, you know, that I would never have noticed on my own until I just ran through all these tests. That's the kind of thing generative AI can say: oh, here's your problem, and be able to solve something.
(05:43):
So that was hours of work that it would have saved me that particular evening, if I had had something like that. So those are the kinds of time savings you can see.
Speaker 2 (05:51):
Yeah, I agree with that, and I think that the way that coding assistants become part of your workflow really depends on your persona as a developer as well. You know, a new developer, maybe a novice developer, is going to find a different utility for the coding assistant and might also
(06:12):
use a different type of coding assistant. You know, there are those that are integrated with your IDE, and then we find others that are sort of vertically integrated for a particular task. For example: I want to, you know, create an e-commerce website. I go to one of these coding assistants that's really built to spit out a whole-hog, complete e-commerce site, including all
(06:35):
of the back end. Whereas if I'm more of a general-purpose coder, I'm going to go and use a coding assistant which is inside the IDE.
Speaker 1 (06:43):
Yeah, I like, Nate, how you mentioned that this is kind of the killer app right now for gen AI. But it's not an app, it's a bunch of apps. There's a lot of coding assistants out there. What is the market like? Is it confusing, because there are new coding assistants popping up every week, or are there different coding assistants for different situations?
Speaker 3 (07:03):
Yeah, I mean, there's certainly the 800-pound gorilla of Microsoft's GitHub Copilot, which was already the top autocomplete capability but very early on integrated generative AI features into just being able to use it for your code base. Now, the downside is you need to have your code on GitHub.
(07:24):
You may or may not want to do that, depending on what you're doing. But, you know, GitHub is a great platform. It's a great place to be able to store, understand and build your code, so for a lot of people that was great. Now there are a lot of competitors out there and, honestly, you can even just use an LLM straight up to do code
(07:46):
generation. You don't necessarily even need a tool; to some degree, you can help yourself be able to do that. But there have been a lot of companies out there doing some great work integrating it with your development environment so that you can easily use it. You can kind of have a side-by-side companion as you're working through your code base to be able to help you, rather than having to jump to another application.
(08:08):
The market is broad. There's a lot out there, but there do seem to be a few players kind of rising to the top.
Speaker 2 (08:16):
Absolutely, and I think it's important to note that coding assistants, when you decompose them, really have several parts. One of them is the back-end AI that's being used to drive the behaviors of the coding assistant. The other elements include what tools are available to that AI in order to help the developer perform tasks, and also what
(08:41):
those tools can do for the AI element itself. For example, if the AI element is multimodal and can do things like interpret images or even interpret video, then you might have an element which can monitor your screen so that, if you're building a GUI, it can see how the GUI is behaving, or
(09:02):
perhaps render the GUI and see what's wrong. Or you can give it a picture and say: build me this GUI. All of those elements are where you're going to see variability relative to how the coding assistant is presented to the coder, sometimes within an IDE, sometimes as a website, and
(09:22):
whether or not it has access to, you know, your desktop and your code base, and is able to create files and directly edit files. Those are all questions that you want to answer when you're choosing, you know, which coding assistant you want to use.
Speaker 1 (09:53):
Yeah, I want to dive deeper into the enterprise adoption and integration, but real quick, a little bit of the pros and cons of coding assistants. Nate, you wrote an article on WWT.com not too long ago about unlocking the power of these assistants, and in it you had a good section called the hidden gems of coding assistants. Among them, just a couple: you know, onboarding software developers, and language learning. Maybe a little bit more about what these hidden gems are, or just what it's good at?
Speaker 3 (10:12):
Yeah, I think these are just the kinds of things that are not obvious to you when you're thinking about what a coding assistant is and what it might do. The fact that you can go from, what if you've, you know, programmed in Java all your life and now you need to switch over to a new platform? We're trying to figure out how to get this into our React, you
(10:32):
know, code base. That's our new standard of what we're doing. Having the ability to have the system help you understand (you know, I know how to do this in Java, I would use this; how do you do the same thing in React?) and have it, you know, teach you, actually talk to you while you're doing these things, is mind-blowing.
(10:52):
It's amazing to be able to get that kind of assistance. We had an example where we built, internally at WWT, an application in Python, because, you know, that is the language of AI, right? And so we initially built something in Python to be an AI application for ourselves. But our IT team was like, well, we don't really support Python
(11:14):
as one of our standard platforms for deploying applications; we really need this to be in a JavaScript language. And that seems really daunting. It's like, wow, we've got to write this from scratch. Well, it's much easier with these coding assistants, to be able to kind of help you along to figure out how would I change this. They can take an initial crack at it and you can take a look
(11:36):
and see if that works. They can help you in writing tests. So it's not only being able to help you save some time; it is truly like having someone alongside you who has the ability to understand the code base, understand multiple languages instantly, and be able to give you advice and thoughts on how something could work, that you can talk to and converse
(11:59):
with. And I don't think people always understand that's what a coding assistant can do until you get it in there and start using it.
Speaker 2 (12:07):
Yeah, yeah. And I would say also, going back to this notion that the coding assistants are composed of a UX via which you access the AI, and that has different capabilities: many of those will allow you to point to a different AI. So you can say, okay, I want to use Claude 3.7, or I want to use
(12:30):
OpenAI's 4o or o3 as the back end, and you'll find that each of those elicits a different personality when you're using it. So it's kind of interesting on a human level too. You know, as you interact with the AI, you learn what its strengths and weaknesses are, and I think that's actually a
(13:07):
very important element; it's going to really exceed your expectations in some cases.
Speaker 3 (13:14):
Yeah, we've been talking about, you know, coding assistants, AI coding assistants, for a few minutes here, and we haven't even really addressed the elephant in the room of these agentic coding assistants, which are not only, you know, your kind of side-by-side companion while you're working, but are almost like your junior engineer that's going out; you tell it
(13:35):
what you want done and it's going out and doing it for you. And that has been, you know, pretty revelatory in the industry. As Andrew was pointing out, you know, it's going out and making changes for you. It's not just suggesting what kinds of things you could do, but actually going out in your code base and saying, well,
(13:55):
let's go make this change over here, and we're going to add this new library here, or we're going to implement this framework. It's amazing what it can do, and it's kind of resulted in a lot of conversation in engineering circles about, you know, is this going to replace engineers? Is this actually a tool we want to continue to use?
(14:19):
How good is it? What can it actually do? And I think the jury is still out to some degree, but it's really interesting to think about where some of these tools are already going. I mean, it's not the future; a lot of these things are here. And how do you properly use these tools as you get your job done?
Speaker 2 (14:41):
I have an opinion about this, having used some of these tools, even in relatively complex tasks. As far as the state of the art today, I would say that it's very important to continue to have a human in the loop. First of all, you know, there's a security concern.
(15:01):
You want to make sure that what you've asked the AI to do has been implemented correctly and, you know, addresses all of the requirements of your organization. These tools will come to an enterprise relatively unconfigured, and many of them will have the ability to provide
(15:21):
system prompts or content which will drive their behavior. And you really want to take the time to make sure that you've given those initial prompts to the AI, so that it knows exactly how you want it to behave in critical contexts.
Speaker 3 (15:39):
Right. So, just to illustrate what you're talking about: you can give these tools a system prompt to say, you are this kind of developer and you're going to follow these kinds of standards. We want to always have only one action happening in our function, or however you normally want your developers to
(16:00):
behave. You can describe that to the tool so that it will attempt to follow your rules, so to speak, in a very strict structure.
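For a concrete picture, a system prompt of the kind Nate describes might look something like this (a hedged, hypothetical sketch; the wording is illustrative, not any particular product's required format):

    You are a senior developer on our team. When you generate or edit code:
    - Keep each function to a single action or responsibility.
    - Add type hints and a docstring to every public function.
    - Do not introduce new third-party dependencies without flagging them for review.
    - Prefer our existing logging utilities over print statements.

A tool given rules like these will usually try to follow them, though, as Andrew notes next, not with the guarantees of a procedural system.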
Speaker 2 (16:07):
That is a way of
causing procedural systems to
behave in very procedural ways.
(16:29):
When we look at how an LLM works, though, it treats everything as text, okay, just as free-form prose. And this is one of the places where we're going to see differences in behavior as we look at which tool I want to use.
You take a specific tool, I'm just going to give an example,
(16:50):
something like Windsurf. You know, that firm has spent time making sure that the code that's presented to the AI has been syntactically analyzed, in a way that, when the LLM is presented with that code, it understands where the blocks are. It doesn't purely rely on the general training corpus of the
(17:14):
LLM to do that task; they have special RAG indexers that will allow the tool to be more effective. And yet, when you ask it to perform certain tasks, it may inadvertently go into a block of code and reformulate that block
(17:34):
of code, even though all you wanted it to do was to move it from here to there, or to refactor a big file into two smaller files. And that's where, because we're at the bleeding edge with some of these capabilities, that's another example as to why it's so important to have a human in the loop.
Speaker 1 (17:50):
Yeah, I couldn't agree more. A lot of that speaks to adoption of these tools within the enterprise, things you have to account for. Nate, I'm curious: how did adoption start here? We've always been familiar with pair programming, or having that type of help at our side, so maybe there was a little bit of a head start, because, like I said, this was
(18:12):
one of the first applications of AI that seemed like an obvious benefit.
Speaker 3 (18:35):
And so we've had teams using tools. We've evaluated most of the tools that are out on the market, if not all of them, to figure out, you know, what are the pros and cons in different places, and have settled on a few. Like I said, GitHub Copilot has gotten pretty broad adoption, I'd say, around Worldwide, because of its ease of use and just, generally, the capabilities that it provides.
(18:56):
So we've been, you know, on this for, I would say, a couple of years now, using these kinds of tools to figure them out. At the same time, it's not everywhere yet, because there are certainly teams (we do a lot of custom software development for customers) and we still have customers who aren't totally comfortable with
(19:18):
AI seeing their code, because there's, you know, there's an element here. If you've got your code out on GitHub, then you've probably already decided, okay, I'm good with my code going out to the cloud. But if you're not there yet and you really want to keep things on-prem, there's been a lot less willingness to adopt,
(19:41):
not knowing, like, is it okay for our code to go out and be evaluated by AI? What are the security risks? So I would say, even with Worldwide, which has been very eager to adopt some of these things and try them out, when it comes to dealing with some of our customers, we want to be respectful of what they're ready to do as well.
(20:02):
But it's given us the opportunity to try out a lot of these different technologies and see them work in various situations in the enterprise, with teams. I think we would wholeheartedly support this kind of side-by-side coding assistant that we were just talking about, where you're getting suggestions as you go.
(20:22):
We are moving into the world of the more agentic, code-generation style of coding assistant, and trying to figure out what are the right ways to use that in various situations for our customers, so that we can best advise them.
Speaker 2 (20:37):
Absolutely. You know, I've thought of three jumping-off points for other elements of the conversation based on what you just said. One of them is that it's important to point out that most of these AIs have been trained on relatively old corpuses of code, right? I mean, they extend
(20:59):
back 30, 40 years, and they tend to contain mostly open-source tooling in the code that's publicly available. One of the challenges we see with the use of coding assistants is that they're not necessarily completely up to speed on the latest tooling from some of our partners.
(21:20):
For example, do they know how to best use NVIDIA NIMs? You know, maybe not, right? And so that's where, again, some of the system prompting, and some of this preparation of the context in which you ask the AI a question, is so important. So then, instead of answering your question by using an open-source tool, it might instead use the specific commercial tool
(21:45):
that you're looking to use.
Speaker 3 (21:47):
Yeah, I would also say that applies to languages. Yes, there are some languages that have a lot of code out there on the internet, and there are others that don't have as much.
Speaker 2 (21:56):
Zig, for example: a really exciting new little language, but you're going to find very little code out there for it.
The other element I wanted to mention is that the most capable AIs are going to be the frontier models, right? There are only a few companies in the world that can train those, and super fine-tuning them is also a task that requires a huge
(22:19):
amount of compute. That's probably why we're seeing some of the announcements relative to acquisitions, you know, in the space, and consolidation in the space.
Once you have one of these frontier models, it's unlikely that you're going to run that on-prem. So then the question is, as you were saying: do I want my
(22:41):
intellectual property finding its way out to, essentially, an inference provider? Right, that's the coding assistant companies; a way to think about one of their functions is that they are inference providers, inference in the cloud. So confidential computing is going to be an element in this space that has to develop and continue to develop.
(23:02):
Security for AI is going to be incredibly important to the continued development of the space, and what's exciting there is that the mathematicians have figured out ways to handle this. For example, when we think about security in the traditional context, we think about things like SSL. Everybody knows what that is, right? I take my text, I encrypt it, I send it over the channel.
(23:24):
The problem is, at the other end, your inference provider has to decrypt that channel. It's going to see your text, meaning it's going to see your code. Now, the mathematicians have figured out a way to transform that text so that it's equivalent, to the model, to free-form, open text that it can read, but if a human were to look at it, they would have no idea what it says.
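Andrew doesn't name the technique here, but what he's gesturing at is the family of approaches around homomorphic encryption, where a provider computes on ciphertexts it cannot read. A toy Python sketch of the core idea, using Paillier's additively homomorphic scheme with deliberately tiny keys (real deployments use vetted libraries and much larger keys):

    # Toy Paillier encryption: a server can add two encrypted values
    # without ever seeing the plaintexts. Tiny primes, illustration only.
    from math import gcd

    p, q = 1_000_003, 1_000_033                     # far too small for real security
    n = p * q
    n2 = n * n
    g = n + 1
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p - 1, q - 1)

    def L(x: int) -> int:
        return (x - 1) // n

    mu = pow(L(pow(g, lam, n2)), -1, n)

    def encrypt(m: int, r: int) -> int:
        # r must be coprime to n; real implementations pick it at random
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c: int) -> int:
        return (L(pow(c, lam, n2)) * mu) % n

    c1, c2 = encrypt(20, 12345), encrypt(22, 67890)
    assert decrypt((c1 * c2) % n2) == 42            # 20 + 22, computed while encrypted

Multiplying the two ciphertexts adds the hidden plaintexts, which is exactly the compute-without-seeing-it property the conversation is pointing to.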
Speaker 1 (23:56):
Right. So, confidential computing solutions like that and, like you said, coding infrastructure considerations: how did we work with our internal teams here to make sure we're accounting for all of those? Or is that an ongoing conversation?
Speaker 3 (24:17):
Yeah, I mean, it is ongoing. One thing that Worldwide did early on: we developed this AI driver's license training capability that helped walk through, with each of our employees, here's how AI works and here are some of the things that you need
(24:38):
to keep in mind. It's not a magic black box; there's, you know, there's actual data moving around, going to different places. How do you keep us safe? What kinds of tools would we recommend that you be using? So that was definitely part of it. Another is that we looked at the market and the possibilities there for how we could use these technologies and try to limit
(24:58):
So Windsurf, for example, when it comes to what they call Windsurf extensions, which is the sort of side-by-side autocomplete style, has an on-prem capability. So, as long as you've got the GPU computing power to be able to install it, you can have your own coding assistant
(25:19):
within your firewall, and your code's not having to go anywhere. So, looking at what the different options are is how we can best advise our customers, both the ones that we're working with and building code for, as well as the ones who are looking at these products and trying to figure out which one is best for them. Because that's absolutely going to be a consideration, and something we want to make sure that we're fully versed on before we get out there.
(25:42):
But as far as security, this is a little bit of a tangent, but part of what I'm interested in and excited about is, as we continue to train these LLMs that are helping us from a code perspective, some of these techniques that we've always tried to teach developers, around things like accessibility and security, and the ways that you have to
(26:05):
actually, you know, implant that in your code and think of it from the beginning: these tools can help us do that, can help us maintain that kind of discipline, by saying, you know, hey, if you use this kind of model, this would be more accessible on the front end. Or, you know, if this algorithm or this
(26:26):
idiom that you're using within your code has the potential to create vulnerabilities, you know, let's try using this alternative instead. So there are also possibilities for actually increasing security in what you produce, and how you do it, if we can start to bring those kinds of capabilities to the forefront as well.
Speaker 2 (26:45):
Absolutely. I'll point out an example relative to Windsurf, relative to the capabilities of on-prem versus not on-prem. All of the agentic capabilities of that particular tool actually come from the SaaS component of the tool. The on-prem piece is really focused, as you were saying, towards
(27:06):
providing autocomplete. But it's important to also note that autocomplete in the context of AI is not the autocomplete that you're used to, right? It's very useful, particularly for an advanced coder. It removes all of that boilerplate stuff that you have to write. You know what variables are going to go into that for loop.
(27:28):
You know that the body of that for loop or that while loop is going to contain, you know, certain things. Typical autocomplete is: hit tab and get the name of a variable. Autocomplete in an AI context is: hit tab and you get the body of the function, and maybe you have to fix a couple of things in a minute, right? And so that can accelerate your tasks quite
(27:49):
significantly, even if you don't have access to agentic capabilities.
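A small Python sketch of the difference Andrew is drawing (the function is hypothetical): classic autocomplete finishes a variable name, while AI-era autocomplete, given only the signature and docstring, can draft the whole body in one completion.

    from collections import Counter

    # The developer types the signature and the docstring; an AI autocomplete
    # can plausibly propose everything below them on a single tab.
    def top_words(text: str, n: int = 10) -> list[tuple[str, int]]:
        """Return the n most common lowercase words in text."""
        words = text.lower().split()
        return Counter(words).most_common(n)

    print(top_words("the cat and the hat and the bat", 2))  # [('the', 3), ('and', 2)]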
Speaker 3 (27:56):
This episode is brought to you by Windsurf. Windsurf's AI development tools give your team the power to build faster, work smarter and raise the bar for what great software can be. Turn time saved into product shipped and breakthroughs made.
Speaker 1 (28:15):
I do want to get to the flip side. We've been talking a lot about what these assistants do very well, and we've talked about some considerations, but what is it not ready for?
What is it not ready for?
This episode will air sometimein May and we know things
advance quickly, but what arethey not ready for right now?
I'm thinking of a couple quotesthat I've seen that has a
potential to mask, you know,poor coding, or just you know
hide or lead to technical debt,or just you know, continue on
(28:37):
bad habits. What are some of these coding assistants not ready for, now or in the next couple of months?
Speaker 3 (28:43):
The more code that's being generated by the system, the less the developer, the one at the keyboard, is actually understanding exactly what's there, and being able to keep it in their head and ensure that they're not introducing some, you know, unintended consequence down the road. So, you know, when you're talking about a for loop, you know, your
(29:06):
risk is pretty low. But when you're looking at these agentic tools, and it's going out and doing something exactly as you've asked, but you're not thinking about all the implications of what that means, you could definitely get yourself in trouble, and into a situation where you don't even really know how to fix a problem that's come along. And sure, if it's an if error or something that is easy
(29:30):
to spot, the AI is going to help you fix it. But if it's a business logic error that comes from multiple dependencies in the system reaching some conclusion, and now the conclusion has changed and you have no idea which piece of that actually caused it, that's where you can really get yourself in trouble. So that's one area. I mean, I can think of several; that's one I'm worried about. It's
(29:53):
just human nature: once you start trusting what it's doing, the less you're going to inspect it and be aware of it and understand it and take ownership over it, and the more likely that things are going to happen that you didn't intend.
Speaker 2 (30:07):
Yeah. How I would add to that is: let's think of, you know, Hollywood for a second. You think of a movie, I don't know, what was it, Inception or whatever, and the guy's out there and he's manipulating things in the air and the AI is doing all kinds of amazing tasks. The first time you start to use one of these AI coding
(30:31):
assistants, you start to think that that's its level of capability. You know, it tricks you very quickly into believing that it's an omnipotent assistant. But it has a limited context window. You know, it cannot see your entire code base all at once. And as you begin to use it, and you hit accept, accept, accept on all of its changes, you develop a technical debt to
(30:52):
yourself. Because, unless you've taken the time to internalize all of the changes, you've read, line by line, all of the things that it's written (and, I mean, it takes a human a little bit longer than a machine to, you know, read the text), what picture do you have, what working model do you have
(31:12):
in your head about the overall system that's being created, if all you've done is punch through 50 accepts, each of which, seen independently, looks correct, doesn't contain a bug, doesn't contain a security issue? Do you have a clear picture of what your overall system is doing at the end of that? I wouldn't say that's a reason not to use a
(31:35):
coding assistant, but it is a reason to use it carefully, right? Right.
Speaker 3 (31:39):
Yeah, I expect that we're going to change our way of developing systems (oh, 100%, right), so that it's not about just generating modules and getting something to work; that our entire way of having a job as a software engineer is going to change, so that we're thinking about what kinds of
(32:01):
problems need to be solved, and what's the best way to do that. I don't think we even have the structures, you know, the structures and the patterns for that yet. But I don't think it's sustainable for AI to just keep writing in this domain-specific language that we call, you know, Python or C++ or
(32:31):
whatever it is.
Speaker 2 (32:33):
It's a way for us to look at a unit of information and understand what's happening there. Is that really going to be the best way to present what a system is doing 20 years from now? Or are we going to really be presenting concepts, perhaps in a different visual language, which isn't text on a page, right?
(32:53):
And this leads to one of the challenges, also, with coding assistants. You know, there are firms out there that have done some analytics on what has occurred to code bases, what's happening in GitHub, relative to the impact of coding assistants and the code that's going into them.
(33:13):
We're seeing a lot more copypasta. Now, why is that? And what I mean by copypasta is the same for loop, with the same variables, repeated multiple times in the code. You know, as a coder, what I would typically do there is write a utility function, and I would call that utility function, and what you would see is just the call, over and over again. Now you see that code repeated over and over again. Because our
(33:38):
presentation layer is still these text files, we see those things. In the future, I would imagine an AI-mediated presentation of that code would not show me all those repeated for loops at all, so it doesn't even matter that they're in the code base. Instead, I would see, you know, a reference to that function that I would have written out by hand, right?
(33:59):
So it's both a challenge and an opportunity, yeah.
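To make the copypasta point concrete, a minimal Python sketch (the data and names are hypothetical): the pasted loops at the top are the pattern the analytics flag, and the utility function below them is the hand-written alternative Andrew describes.

    # The copypasta pattern: the same loop pasted wherever it is needed.
    users = [{"name": "Ada", "active": True}, {"name": "Bob", "active": False}]
    admins = [{"name": "Cleo", "active": True}]

    active_users = []
    for record in users:
        if record.get("active"):
            active_users.append(record["name"])

    active_admins = []
    for record in admins:
        if record.get("active"):
            active_admins.append(record["name"])

    # The hand-written alternative: one utility, and only the call repeats.
    def active_names(records: list[dict]) -> list[str]:
        """Return the names of records flagged as active."""
        return [r["name"] for r in records if r.get("active")]

    active_users = active_names(users)
    active_admins = active_names(admins)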
Speaker 3 (34:04):
Again, you know, we think about the problems. There are problems with the way it is today, which is why I don't think I would recommend, like, whole teams taking this agentic coding assistant and implementing it across code bases.
(34:34):
You've got, basically, the agentic coding assistants competing with each other, and seeing lots of conflicts when you're trying to pull this all together and having no idea how to resolve them. I mean, I can see a lot of issues, but the fact is, this is too useful to just say we're not going to be able to make it work. You just can't treat it as a crutch, right?
Speaker 2 (35:39):
Particularly, as you were saying before: so there's this new language that, you know, I'm now coding in. Let's say I haven't used Rust; I've been traditionally a C++ developer. Now the coding assistant is helping me learn Rust, because I've asked it to develop a certain part of it. I've seen the example. The next time I run into that, it's important to step back for
(36:00):
a second and attempt to write the code myself, so that I can continue to be asking it cogent questions, basically, right? Exactly, so yeah.
Speaker 1 (36:09):
Yeah, it's interesting, a lot of the parallels. I myself have never written a single line of code, but, you know, I'm a writer by trade: articles, blogs, whatever it might be. There's a lot of parallels between what we're talking about. I rely on AI very much, which can be an impediment to my staying sharp as a writer. So I'm going to ask you, Nate: how do you keep your developers
(36:31):
sharp so that, until the time comes when AI is writing everything for us, we're able to interject where we need, or understand where we need to pivot this way or that way?
Speaker 3 (36:40):
Yeah, I think the thing to do now, with the tools as they are, is to lean even harder on the coding rigor and discipline that we've known we've needed since the late 90s. You know, being able to have, you know, good test coverage for your code base, being able to create simple, readable code.
(37:04):
If you are going to use these tools, take the time that you save and pour it into that discipline, because what that's going to allow you to do is force you to understand what's happening in your code base, and to make those broad, problem-solving kinds of decisions that are really what we need our
(37:24):
coders to be doing. Like, we don't need our coders to know 10 programming languages; that's not that helpful. Let the systems be able to do that kind of thing for you, but take ownership of that code. Think about how you would want it to look if you didn't have these assistants. If you were coming in and you needed to be able to understand this code base without an AI helping you, what would it need
(37:46):
to look like? How could you rely on it? The exciting thing about these tools is that they give us the time to do that. I think, to some degree, we've abandoned some of those disciplines because of the speed of change and the need to get features out, et cetera. This gives us an opportunity to let the tool do the grunt work
(38:08):
and you to focus on what a good, maintainable, you know, easy-to-change code base might look like.
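As one concrete way to pour saved time back into that discipline, a hedged sketch: a couple of pytest-style unit tests for the hypothetical active_names helper from the earlier example (pytest is assumed as the test runner).

    # Reinvesting assistant-saved hours into test coverage.
    def active_names(records: list[dict]) -> list[str]:
        """Return the names of records flagged as active."""
        return [r["name"] for r in records if r.get("active")]

    def test_filters_inactive_records():
        records = [{"name": "Ada", "active": True}, {"name": "Bob", "active": False}]
        assert active_names(records) == ["Ada"]

    def test_treats_missing_flag_as_inactive():
        assert active_names([{"name": "Cleo"}]) == []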
Speaker 2 (38:16):
And, yeah, I would build on that by going back to the statement that you made earlier: what is the level of productivity gain that you're going to get from a coding assistant? One way to think about that is: sure, you get 60% productivity
(38:39):
gain if you are a perfect organization that does all of the things that you just mentioned. You have perfect unit test coverage, you've got all of that stuff; then you get that 60% improvement and you're done, right? I think, in most organizations, we can all agree that they're not perfect. And, in the end, you know, the market is driving, uh, you know, your time to completion, whatever
(38:59):
it may be. As a developer, you're cutting corners, right? And so it's important to have executive management understanding of that issue, so that, once the coding assistant is implemented in the organization, they're not expecting to see every project completed 60% faster. Rather, they're expecting that that 60% improvement in
(39:23):
productivity is now being utilized to increase security, or increase unit test coverage, or whatever it is. In other words, the project might still take about as long as it did before to complete, but it's going to be a much higher-quality product. Absolutely right.
Speaker 1 (39:39):
Yeah, that's interesting, because I had a note down here that said speed doesn't always equal quality. Absolutely.
Speaker 2 (39:45):
So yeah, I mean, you just said it. You know, it may be moving that time into the unit tests, to ensure that the regressions aren't
(40:10):
there, to do all of those other things that maybe would have fallen by the wayside, because, if the coding assistant hadn't been there, you wouldn't have been able to do them.
Speaker 3 (40:19):
I mean, like you mentioned, the assistant's go-to, in most cases, is going to be just to do the simplest, brute-force thing to solve your problem, right? So you need to spend that extra time to make sure that it's been done the right way. So, in your example of, you know, this little code block getting repeated all over the place:
(40:40):
you might say, oh well, if a human had done that, that'd be wasted time, and that's why we wouldn't want them to do that. But there's more at stake. Because if that code block ever needed to be changed, if your business changed in some way such that that code block, that assumption that you made when you wrote it, needs to be changed, now you've got to go change it in 10, 20 places in the
(41:01):
application to be able to make that feature work, or whatever it is. So there's always benefit to, as much as possible, not repeating yourself in the code base. So take the time to actually examine: have I created a bunch of repetitive code here? Use your tool to help you, you know, figure out how to refactor
(41:22):
that into a much better state, so that you can be in a place where you can make changes again in the future.
Speaker 2 (41:29):
Yeah, and that's a good example relative to sort of learning the AI's personality and learning how to use it best. Because if you think that's going to happen, you prompt it, then you look at what it generated, you factor that into the reusable function, and you let it know that it exists (there are various ways for the tools to allow you to do
(41:49):
that), and then it's more likely that it's going to use that function that you created, rather than repeat that code block for you all over the place.
Speaker 3 (41:57):
Exactly right.
Speaker 1 (42:06):
As these tools become more mature, I want to run a statement by you that I'm sure you've probably heard, from the industry's North Star, Jensen over at NVIDIA: that everybody will be a coder in the future. How do you feel about that statement? What does that mean for the future of software?
Speaker 3 (42:25):
It is another interesting facet of these agentic tools that can do things for you. You just describe what you want and it creates it for you, and I've already heard stories of non-developers using these tools to build themselves tools to be able to do their job, which sounds fantastic. And IT going... I know. Exactly, right?
(42:46):
Yeah, I mean, I can't imagine how scary that is for the IT department to think about. Well, they're building their own applications. Are they injecting security issues into those? Are they putting something out that customers have access to, where they could create vulnerabilities in our entire network? You know, that is frightening in and of itself.
(43:08):
But even if you solve that problem, it kind of reminds me of, you know, the propagation of Excel worksheets, right? So people use Excel, a lot of people use Excel, to make their job easier, and you'll find people who have this Excel workbook that they've curated over time that does all this amazing automation.
(43:28):
And as a software engineering organization, we've had situations where someone brought us one of these spreadsheets and said: can you turn this into an application? It's something that looks so simple. It's one workbook; how hard could it be? And we come back with: yeah, that's probably going to be eight months and $2 million to be able to do that. So it's the same kind of idea.
(43:50):
If you've got this functionality that's just popping up all over the place, what opportunity are you losing to bring that together and be able to do something that helps a broader number of people, and not just that one individual? And if one person's doing something in a way that's really effective for them, but the other person doesn't know about
(44:11):
it and they write their own tool that does it in a much less efficient way, you never find those opportunities. So there's danger there, too, right? And I think that's another area we're going to need to continue to explore and figure out: how do we create the right way to use these tools? Yes, it's awesome that non-developers can create their own tools to solve their own problems and not have to put it
(44:34):
on the IT list, where they don't get it for two years or whatever. That's all great, but how do we avoid the pitfalls on that as well?
Speaker 2 (44:41):
I mean, if you take Jensen's thought to its end point, where do you end up? You end up with a question as to: what is software, right? It becomes a whole thought process as to what we are really talking
(45:18):
about here, and where these AI assistants are really leading, relative to achieving acceleration of business within an enterprise. Soon we'll have coding assistants for Excel, right? Maybe that doesn't exist fully today, but, man, that'd be a killer app, right? Go into this Excel sheet and figure out how to take this
(45:42):
thing and do the pivot table the right way. That would be fantastic, right? Exactly. So, yeah, I mean, it's a really interesting thought process.
Speaker 1 (45:54):
And, at risk of putting myself out there for being wrong: is that where we get into that buzzword I've heard a lot, vibe coding? Is that kind of where that arena is?
Speaker 3 (46:03):
Yeah, the vibe coding idea is that I don't have to think; you know, I can just tell the AI what I want and it will create it for me. And what's funny is that, while this is true, and there have been some funny stories of people, like, creating their
(46:24):
own video games and releasing them, and one person, you know, making six figures on this video game they made with vibe coding, it's really, right now, more of a fad or a meme in what's going on, because these things are awful that they're creating
(46:45):
that way. If they're not thinking about it, if they're not putting the effort into considering what am I creating and what are all the facets and how could it work, and you're just trying to get something to work, you end up with a mess, you know, right now, with what you have today. So it describes where we could be. And, you know, I guess it was only a year ago that I saw a demo from OpenAI where they
(47:10):
showed how they were just talking to the AI, and the AI could see their screen, and they were working on a web page together. And you say, well, I'd really like these columns to be able to shrink when I change the borders; can you help me do that? And have it go in and give you the code to do that. You know, that sounds more realistic to me about where we
(47:31):
could end up: a conversation, almost like you're talking to someone side by side, and you're working together on where it's going, versus just I'm going to sit back in my recliner and tell the AI what I want.
Speaker 1 (47:42):
Yeah. Well, Andrew, there's probably a kernel of truth there, though, too. Like, where's the middle? How can a software developer utilize the idea of vibe coding to actually push a business
Speaker 2 (47:53):
forward. Great, great question. I mean, look, the counterpoint to that is, I think, if you go out on the web and you search for, uh, Y Combinator and coding assistants, you'll find that they're making statements that their, uh, what do they call it? What's the word I'm looking for? The group of companies that they're currently incubating are
(48:16):
seeing a significant acceleration in getting their first MVP out the door, because they're using a certain amount of, quote-unquote, vibe coding, right? But the key is that there are developers in there that are sort of, you know, watching the machine do its thing, right? It's not pure vibe coding, yeah.
Speaker 3 (48:35):
Yeah, and, you know, we've also seen reports of that going on, and they put it on the internet and immediately it's hacked, you know? Not sure if that's because they didn't have the rigor there. So, yeah, I mean, we'll find that middle ground. It is amazing and, like I said, there's no stopping it; it's going to happen. We just have to figure out the right kinds of discipline and
(48:57):
the right way of working with it to be able to get to that promise.
Speaker 1 (49:01):
Knowing that we're potentially moving towards that future, what are you looking for in a new software engineer that you're bringing onto the team? What types of skills will they require in the future, and how do they feel about that? Are they excited about it? Or are they early adopters who are ready to go and eager for whatever they see?
Speaker 3 (49:17):
Honestly, I think it's what we always looked for when it came to software engineers. We cared a lot less about what specific language you know, or
(49:41):
how long you've been working in this framework, and more about how you solve problems and how you think about how to make these things happen. And it's interesting: I'm on the board of a local university's computer science program. They have an industry board where they bring people in, and, you know, those conversations can get pretty
(50:01):
depressing sometimes, if you start talking about the future. They're trying to teach students how to get better at, you know, dealing with computers, and, at the same time, all these things are happening that make it look like that idea is going to go away. But my point to them has been, you know: yes, let's have them use AI to be able to do these things. Encourage that, don't restrict it; that's where we're
(50:25):
headed. But start evaluating them on: how did they go about solving the problem? How did they use AI? What was their method? How did they determine that it actually did what they wanted it to do? How resilient is it if you ask them to go and change it? Those are the kinds of decisions that you're always going to need to make, no matter what happens, and that's what
(50:45):
we continue to need to hone.
Speaker 2 (50:48):
It's important to remain realistic, and any computer scientist will tell you that there's a thing called the halting problem. It's not computable, in general, to look at a piece of code and predict what it's going to do. It's impossible; it's mathematically not possible.
(51:10):
So you are not ever going to be in a situation where a computer system can look at a highly complex problem and reduce that to what's going to happen in the future. We, as humans, continue to be excellent at understanding
(51:31):
complex system behaviors and synthesizing those from components, and obviously, you know, the AI assistants will grow in their capabilities in that area. But I think that's where, you know, the coder that is a coder because they know a language, sure, that might go the way of the dodo, right. But a software
(51:55):
architect who understands what the right algorithms are, what the right ways to connect agentic systems are, what the right ways to create a service architecture are, you know, which vendors I want to bring into my software architecture in order to achieve certain goals:
(52:17):
those are going to continue to, I think, remain in the domain of humans, absolutely.
Speaker 1 (52:22):
You know, one of my favorite things about using AI, or the recent features of AI, is when I can see the AI thinking. Like, I'll prompt it, and then it'll say 'thinking,' and if you click on 'thinking,' it'll expand. It'll say: Brian asked for X, Y, Z, which means I have to do one, two, three. Do coding assistants do that? And, if so, would it be valuable for coders to understand and see how the machine is thinking?
Speaker 3 (52:45):
Yes, especially in these agentic tools. They have what's called a planner, where they say: here's, you know, you've asked me to do something; I'm going to come up with a plan. And, you know, if you're really seriously using these tools, you're going to want to see what that plan is, and you'll say something like: you know, this is what I want. Come up with your plan to make this happen and show it to me,
(53:08):
and then we'll talk about it before we actually make changes like that. That's how it should work. And, I mean, that's been one of the recent revolutions with LLMs: the fact that they can put together a plan, they can think about a process, they can show you what they're thinking about. All of that is incredibly helpful when you're trying to do these things, just to keep you from making a big mess and
(53:29):
having to destroy it and start over. So, yeah, absolutely, that's a key part, just like it would be with trying to have it write you an article.
Speaker 2 (53:36):
Yeah.
Speaker 3 (53:37):
You know, being able to see what the plan is, and let's make sure we're on the same page before we move on.
Speaker 2 (53:53):
On the notion of federated models, we've seen Google recently announce a protocol called agent-to-agent, A2A, and recently MCP, the Model Context Protocol, has come to the forefront. These are mechanisms that allow multiple AI models to interact with each other, gain additional context, perform tool use, all of that. These are being integrated into AI coding assistants, and where
(54:14):
I would like to see those go (not the models themselves, but the use of A2A and MCP) is toward other tooling that not a lot of developers are familiar with or use, because of the complexity of these tools.
We have formal mathematical languages and systems in which we
(54:34):
can express, formally, how a system should behave, so that we can compare what it does do with what it should do. It's kind of like a linter on steroids.
(54:55):
Right now, the use of those formal languages to express those formalizations of what code should do is a complex process. The coding assistants could help more of the developer community begin to use those tools, and that's going to hugely increase the quality of code and diminish the error surface
(55:17):
that we tend to create as we write code.
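For a flavor of what tool use over MCP looks like in practice, a minimal sketch using the FastMCP helper from the official MCP Python SDK (assuming the package is installed via pip install "mcp[cli]"; the tool itself is a hypothetical stand-in for the richer formal checks Andrew imagines):

    # Exposes one tool that an MCP-capable coding assistant could call.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("formal-checks")

    @mcp.tool()
    def count_todo_markers(source: str) -> int:
        """Count TODO markers in source text, a toy stand-in for a formal check."""
        return sum(line.count("TODO") for line in source.splitlines())

    if __name__ == "__main__":
        mcp.run()  # serve the tool so an assistant can discover and call it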
Speaker 1 (55:21):
Well, this has been an excellent conversation. It resonates with me super well; hopefully it does with our listeners. We're running short on time, but I did just want to ask, you know, wrapping up: what should business leaders be doing right now to optimize their workforce's use of AI-powered coding assistants? And then, on the flip side, what should developers be doing
(55:42):
to make sure that they're optimizing as well?
Speaker 2 (55:45):
Why don't you start, Andrew? Well, I would say that the first thing they have to do is give them access, and that might mean that they have to relax certain constraints relative to what's okay to do in an organization. Give coders tasks that are okay to use in a SaaS environment. You know, if I'm going to use an agentic system that requires
(56:09):
me to send some IP outside the door, then let's do it, you know, not necessarily in a toy scenario, but at least in a scenario where we know this IP is relatively less important. The other thing to do is to really, you know, create centers of excellence around the use of AI within your organization, and create evangelists for this. Because
(56:33):
there are going to be those in the organization that will pooh-pooh the idea of using, you know, a new tool. There are others that are going to be super excited, and we see this every time there's innovation anywhere, you know, in technology. Sometimes it takes an evangelist to sort of push that forward and show people why it is that you might want to change
(56:54):
your ways. Look, as a coder who has become very, very productive, I have a certain tool chain that I use. I have a certain way of using those tools that I'm used to. If now I have to change from using Emacs to using Windsurf and Visual Studio, I'm going to resist that until I experience
(57:17):
it and really see what it can do for me. Yeah, exactly.
Speaker 3 (57:20):
Yeah, no, I think that's exactly right. You know, we encourage organizations a lot now to find ways to start having your people interact with AI. Because, you know, while you don't necessarily have to go and try to solve your most difficult, hairy problem with AI right now, maybe it's not ready for that, AI is going to incorporate
(57:43):
itself into what everyone is doing, ultimately. So this is a great way to get a group in your organization using AI on a regular basis and starting to understand it. And, honestly, coding assistants: I kind of see them right now as a bellwether for what is really working in the AI world.
(58:07):
Coding assistants are the first place you saw agents, the first place you're seeing things like MCP and A2A, because they have so many tie-ins and it's so easy to prove out: does this work or does it not work?
(58:31):
So just keeping an eye on what's going on with those tools is a great way to kind of keep up with what is going on in the AI world.
Speaker 1 (58:35):
Love that. That's a practical tip, just for anybody interested in developing their own AI experience and readiness. Well, to the two of you, thanks so much for sharing your time today. Super helpful conversation. I'd love to have you back on the show again sometime soon. Yeah, I'd love to do it. All right, yeah.
Speaker 3 (58:51):
Thanks a lot, thank
you.
Speaker 1 (58:52):
Okay, we've covered a lot of ground, but three key lessons stand out. First, speed doesn't always equal success unless you reinvest it. Coding assistants can unlock productivity gains, but only if teams plow the saved hours back into tests, documentation and
(59:13):
refactoring. Second, guard the crown jewels before you press accept. Security and IP posture should dictate whether you use a cloud model, an on-prem instance or nothing at all. And third, human craftsmanship still plays, and wins, at this game. The best returns come when developers use assistants as tireless apprentices and rely on fundamentals like readable code, tight architecture and ownership of every line that
(59:33):
lands in production. Bottom line: AI coding assistants are a force multiplier, but not an autopilot. Treat them like a junior engineer. Give them clear tasks, review their work, and they'll elevate your team instead of steering it off course.
If you liked this episode of the AI Proving Ground podcast, please consider sharing it with friends and colleagues, and don't forget to subscribe on your favorite podcast platform
(59:55):
or on WWT.com. This episode of the AI Proving Ground podcast was co-produced by Naz Baker, Cara Coon, Mallory Schaffran, Ginny Van Berkham and Stephanie Hammond. Our audio and video engineer is John Knobloch, and my name is Brian Felt.
We'll see you next time.