Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Jerod (00:04):
Welcome to Practical AI, the podcast that makes artificial intelligence practical, productive, and accessible to all. If you like this show, you will love The Changelog. It's news on Mondays, deep technical interviews on Wednesdays, and on Fridays, an awesome talk show for your weekend enjoyment. Find us by searching for The Changelog
(00:24):
wherever you get your podcasts. Thanks to our partners at fly.io. Launch your AI apps in five minutes or less. Learn how at fly.io.
Daniel (00:44):
Welcome to another episode of the Practical AI podcast. This is Daniel Whitenack. I'm CEO at Prediction Guard, and I'm joined as always by my cohost, Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris?
Chris (01:01):
Doing great, Daniel.
How's it going today?
Daniel (01:03):
It's going great. I was just commenting before we hopped on that I'm feeling the emotional boost of seeing the sun again after a long Midwest winter. So I'm feeling good today and excited to chat about all things AI, code assistants, and development, because
(01:25):
we have with us Kyle Daigle, who is COO at GitHub. Welcome, Kyle.
Kyle (01:30):
Thank you so much. It's so
great to be here.
Daniel (01:32):
Yeah, it's awesome to have you on. Even just from your comments about how you like to think about the practical side of AI, this is your place. So I already feel a kindred spirit.
Kyle (01:46):
I feel very much at home
already.
Daniel (01:49):
Yeah. Well, speaking of which, you're of course really at the center of a lot of what's going on in terms of code assistants with GitHub Copilot, but you're also, I'm sure, seeing a ton of things out there. I'm wondering if you could take a 10,000-foot view for those that maybe aren't following all
(02:12):
of the things happening with AI code assistants and development. As of now, sitting in, what is it, March of twenty twenty five as you're listening to this, what's the state of AI code assistants, and how are people generally using them right now?
Kyle (02:34):
Yeah. I mean, it's so interesting to see how far I feel like we've come in such a short period of time. Right? It was only a couple of years ago when ChatGPT came out and GitHub Copilot came out. And back then, the novelty was sort of that it wasn't gonna disappoint you. Right? For GitHub Copilot, you would type some lines
(02:56):
and it would respond with a line, two lines, a method, etcetera. It was gonna complete your code. Very similar to, instead of asking Google a question, I'm gonna ask ChatGPT, and I can keep asking questions. I think what really locked in this enormous transformation was finding a user experience that was simple,
(03:20):
straightforward, and didn't need much explanation. Right? Like, I'm a dev, I'm writing code, and it's just working there, versus needing to figure out how to use a tool, figure out how it works in my workflow, and go through hours of onboarding. Fast forward a couple of years: not only have the models materially gotten so much
(03:41):
better, but we've found more and more ways to have that similar joyful, expected user experience with code assistants. So it's not just really about writing the code.
In some ways, right, it's not about that at all right now. I think that's the bleeding edge of what we're experiencing
(04:03):
with code assistants, where it's much, much more about sitting down with a couple of dev friends and saying, hey, I have this idea for an app. But instead of pitching it to your friends, now you're pitching it to your IDE, and that code assistant is gonna jump in and help you get that next step done.
So when I look back over this wave and how it went from sort
(04:27):
of, you know, cool, but in retrospect a little bit simplistic behavior of, wow, it really knows what I wanna write next, into the next level of what it's always been like to be a developer, which is: I have this idea, and now I have to explain it to someone else. We keep finding ways to augment, improve, and speed up what a dev does every single day. And
(04:52):
we're at a point now where I think we're seriously starting to blur the edges of what is a developer. I don't think we're there all the way, to be very clear, but a year ago we were talking about that, and it was like, sure. And now it's getting closer and closer to asking, what is that distinct need?
(05:15):
And that's only really been in a year, and about two and a half or three years from the start of this journey. And so I think the code assistant category has always been so interesting to me because it's kinda matching how we work. It's finding ways to augment and improve how we work, not trying to teach us to do
(05:37):
something completely different. Which, I think, when we zoom out maybe from 10,000 feet to 40,000 feet and look at AI: the best tools are the ones that are just helping us do work we're already doing. The tools that aren't the best, or are having more difficulty finding traction, in my opinion, tend to have to make the human contort to get the most power out of the AI tool.
(06:00):
And so because we're devs, we're just kinda iterating in what we know. And that's been the power of code assistants, and the growth of them over the last year or so, I think.
Chris (06:10):
I'm curious. I mean, that's a great point you're making about the changing developer experience, and it's changing so incredibly rapidly. Month by month, there are changes in what it means to be a developer now. And I know, and I'm sure I'm speaking for a lot of people here, I keep reinventing parts of my own workflow
(06:33):
as I'm doing stuff, because new tools become available and what I am doing or not doing is changing constantly. It's both amazingly wonderful, given where we've been over the years, but it's also quite tumultuous.
And if I stop and lean back a little bit and have a cup of coffee and think about it, I'm kind of going, maybe it's a little
(06:55):
bit scary how good it's getting and where that's going. What are your thoughts, since you've talked about the developer experience explicitly and the user experience of code assistants, and how rapidly they're going so far ahead? And don't even go out a long way; just talking about the next few months in the short
(07:15):
term, how can we be thinking about adjusting ourselves to an ever-evolving state right now, even before we get into the specifics of the tools themselves?
Kyle (07:28):
Yeah. I mean, what we've seen at GitHub by rolling out these tools is, we'll talk to customers, or I'll just talk to devs or open source maintainers, etcetera, and they can kind of fall on this continuum. Right? This continuum of: I absolutely love every AI tool, I'm gonna use every single one, and I'm gonna try every single one. And
(07:48):
then you have the folks who are like, I'm never touching these things ever. They're terrible and they're gonna destroy software. And then there's all the folks in the middle.
And so the thing that I tend to tell folks is: just like in our careers, we've all had a moment where a new piece of technology comes in. And I feel like, for some reason, at least 50% of developers' minds go, oh,
(08:11):
well, that's just a silly thing, or that's just a toy, or whatever. So I'll just say for myself personally: Rubyist by nature, JavaScript took over, and I'm like, ugh, JavaScript. Well, Ruby is, you know, blah blah blah. And over time you grow, and you realize, oh, I should really understand that and try it out.
And it may not become my new go-to tool, but it would not
(08:32):
help me, or honestly the industry or my peers, for me just to be like, ugh, I'm never gonna touch JavaScript. So I think that experimentation you were talking about, Chris, is the important thing. I see a lot of devs try out a new tool, or a new feature, or a new library, or a new model, and then drop back to whatever their floor is, whatever the thing
(08:53):
they're most comfortable with, the model they know, etcetera, etcetera. And I think that is the minimum, because the change is gonna happen just like it's always happened, from serverless to languages to databases. You pick it.
Right? And if you just don't experiment, my fear personally would be that you do kind of start to get left behind, because
(09:17):
you don't know how to reach out to the new tool that is actually excellent and actually helpful. And you're kinda stuck behind the eight ball, learning something that you could have been learning as you go. I will say, in the next few months even, not just now, I do
(09:37):
expect way more AI functionality to come outside the editor. Because if you're developing software as part of a team or as part of a company, not as a solo dev or a smaller startup but a bigger group, we all know writing code is an important part of the job, but it's not all of
(09:58):
your day.
Right? You're reviewing code. You're building out decision records or architecture diagrams, or you're debating how to roll this out. You're operating a live site, so on and so forth. As AI comes into those spaces to fill in the gaps more and more, you're gonna wanna have those skills, from figuring out the right way to word things
(10:21):
when the AI, or the LLM, can't just figure it out on its own, to, like every developer, knowing how the system is working inherently so you can best benefit from it. So as long as you're trying these things out, even if you drop back to your baseline, I think you get set up for more productivity, and I think just more joy, when the AI
(10:43):
can take more of those mundane tasks away from you. Again, I think over the next couple months, not even the next year.
Daniel (10:50):
For those devs, you know, some that have jumped right in and figured out their workflow, and maybe the devs out there that are experimenting with the tools: everyone's kinda got their muscle memory of how they develop, the things that they like to use. What are the new muscles that need to
(11:14):
be developed for AI-assisted coding? Like, the most important ones that you've seen over very many use cases?
Kyle (11:23):
I mean, I think there are kinda two major ones. Every developer has, like you said, come up with the practices and principles that work for you personally. Right? We've all worked in systems that have linters and CI and everything that stops you from making mistakes. But there's also just a bunch of things like, I like to work in this order.
(11:45):
It helps my brain process what's going on. You know what I mean? And so I think, on a tactical level, stating those rules, those prompt instructions, whatever (depending on which tool you're using, there's a different name for it), matters. Just the act of sitting down and writing out, well, how do I work
(12:07):
on this project? Even if you work as part of a company, how do I care about it? I always want to define a schema for the back end API before I implement the front end, and then I go back to the back end, or whatever the thing is for you. Writing that down and then letting the tool use it, I think, is a dual benefit, which kinda gets me to my second point. The big skill that
(12:30):
everyone is, I think, trying to work out is what used to be called prompt engineering, and I honestly think it's just describing a problem. We use so much shorthand, and we sort of skip over the details, like the hilarious meme of what the product manager said, what the engineer did, what the designer did. But that is exactly what we have to do with these tools every day.
(12:53):
We go, build an app that x y z's, and suddenly it comes back and it makes no sense. And you go, oh, this stupid thing doesn't work, you know? And yes, sometimes it just doesn't work. But realistically, sitting down and saying, well, what are the must-dos of this app? How do I want it to work? What do I want the flow to be? Whatever those things are. Being able to clearly communicate,
(13:15):
particularly in written form, is crucial in this new era. And I think it's been a skill that in some ways we've kinda let fall down.
Like, when I think back to the era in which I was a much more active dev, I think there was just so much written communication, whether that be blog posts or,
(13:36):
well, GitHub's always been remote, so for us it was usually a GitHub issue, or Campfire back in the good old days, Slack these days. Just writing down what you mean. That's a skill to bring to saying what I want this app to do. And I think that's why, when you're on Twitter or X or wherever, and you're looking at, wow, how did this example get one-shotted?
(13:56):
It's like: ask for the instruction. That instruction was certainly not "build a multiplayer game that allows me to fly airplanes." That was not it. You know? It was much more.
And with all the practice that came from describing problems socially, describing problems for your LLM, being able to do that regularly (and I really think it's mainly describing
(14:17):
problems now, as the models have gotten so much better; there's a little less "how do I make it work for each model" than there used to be) is a skill that's gonna serve you both in those tools and with your colleagues, with your manager, with your open source friends and maintainers, just cohesively, if you can do it really well.
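Kyle's point about describing a problem, stating the must-dos, the flow, and the constraints instead of firing off shorthand, can be made concrete. Here is a minimal sketch of structuring a request before handing it to an LLM; the field names, template, and example values are assumptions for illustration, not any tool's actual prompt format:

```python
# Hypothetical illustration: turning a vague ask into the kind of
# structured, written-out problem description discussed above. The
# fields and wording are assumptions, not a real tool's schema.

def describe_problem(goal, must_dos, flow, constraints):
    """Render a structured problem statement as plain text."""
    lines = [f"Goal: {goal}", "", "Must-dos:"]
    lines += [f"- {item}" for item in must_dos]
    lines += ["", "Intended flow:"]
    lines += [f"{i}. {step}" for i, step in enumerate(flow, start=1)]
    lines += ["", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = describe_problem(
    goal="A web app for tracking reading lists",
    must_dos=["users can add and remove books", "lists are shareable by link"],
    flow=["define the back-end API schema first", "then implement the front end"],
    constraints=["no sign-up required", "works on mobile"],
)
print(prompt)
```

The same written spec serves double duty: it is what you would paste into a chat or a repository instructions file, and it is also the document a teammate could review.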
Chris (14:37):
Just as a two-second follow-up, I'm curious: for developers, that describing-a-problem skill that you've been addressing, along with the communication skills that support it, should we think of those as developer skills now? And, you know, maybe that is a muscle that we have that we
(14:57):
should start exercising as well.
Kyle (14:58):
Yeah. I think it's something that the best teams, the best companies have considered. And I think we've kinda let a little bit of the 10x developer meme take over and make communication not as big of a deal. There's no major application, site, or app that serves hundreds of millions or tens of millions of
(15:21):
people where being able to communicate what's happening, or what the problems are, isn't core to the job of being a developer. And if we just play it out over time, if AI and LLMs are gonna continue to write more and more and more of the code, even if it never hits, you know, all of the code,
(15:41):
whatever that ultimately means, all that's left is collaboration.
All that's left is collaborating with your peers, with LLMs, with agents, with designers, with your boss, with your client, whatever that is. And so suddenly, the fact that you can write an app incredibly well, succinct, well factored and tested, whatever: that's great. That's a great
(16:03):
skill too. But the human factor will be, I can look you in the eye,
I can read what you're writing, intuit what you're saying and what you're looking for, and describe that in such a way that I can benefit from all of these tools. It's gonna be incredibly necessary as those more rote or highly automated tasks can be done by AI tools.
Sponsor (16:41):
Well, friends, today's ever-changing AI landscape means your data demands more than the narrow applications and single-model solutions that most companies offer. Domo's AI and data products platform is a more robust, all-in-one solution for your data that's not just ambitious, it's practical and adaptable, so
(17:04):
your business can meet those new challenges with ease. With Domo, you and your team can channel AI and data into innovative uses that deliver measurable impact, and their all-in-one platform brings you trustworthy AI results without having to overhaul your entire data infrastructure; secure AI agents that connect, prepare, and automate your workflows, helping
(17:27):
you and your team to gain insights, receive alerts, and act with ease through guided apps tailored to your role; and the flexibility to choose which AI models you wanna use. Domo goes beyond productivity. It's designed to transform your processes, helping you make smarter and faster decisions that drive real growth, all powered by Domo's trust,
(17:48):
flexibility, and their years of expertise in data and AI innovation.
Data is hard. Domo is easy. Make smarter decisions and unlock your data's full potential with Domo. Learn more today at ai.Domo.com. Again, that's AI.Domo.com.
Daniel (18:15):
Well, Kyle, one of the things that I was thinking about the other day is that there's a generation of developers growing up not having any other experience than this AI-assisted experience, both on the educational debugging IDE side,
(18:36):
but also, of course, using interesting tools, whether it be vibe coding tools or other things. I was listening to the a16z podcast, and they did, I think it was them, I forget where, but somewhere they mentioned a survey of the latest cohort of Y Combinator
(18:57):
companies, and they were saying, like, 95% of the code is AI generated. What kind of impacts are on your mind in terms of this generation of coders for whom this is what coding is? What does that mean for organizations that are hiring
(19:19):
developers out of that environment, but also for the new opportunities that people who maybe wouldn't have broken into developing cool projects now have?
Kyle (19:36):
Yeah. I mean, you know, I look back on how I personally got started coding, and it was because I wanted to build a video game. And I feel like that's not very unique, but it's one of those things where I enjoyed playing video games, and I wanted to go build a video
(19:58):
game.
So back in the day, I went to, probably, Barnes and Noble and bought, you know, the C++ book, because you had to learn C++ if you wanted to write a video game. And that thing was, I don't know, probably 650 pages. That was an enormous book. And so that is a huge immediate barrier to entry to learning, because you're
(20:20):
like, the reason I came here was to solve a problem. And if I just do 650 pages of how C++ works, I'll eventually get to build a text-based video game, probably. You know what I mean? And at GitHub, with our teams in GitHub Education, I get to work with the team there on,
(20:40):
well, how do we approach learning in this era in a way where we can bring that problem up front? Which is essentially what vibe coding is. Right? I want something in the world.
I wanna go build it. I think the piece that is necessary to continue to learn is that problem-solving piece. And I just wanna make it accessible to you, so you can bring a problem,
(21:03):
something you want to go learn. But in the process of getting you to your destination, we can expose you to the ideas around why this application works this way, or why there are two files, one for the front end and one for the back end, or whatever. So you're kinda learning as you go, but still focused on ultimately solving that problem that you're
(21:27):
going after.
So I don't think it's a bad thing that these startups, or folks online, or even me on the weekend, are writing apps that are just for us. Mine is gonna have a user base of one in perpetuity. I just want it to get written. I want it to just work.
But if we can help folks learn as they go, I think we'll
(21:48):
actually create more craftspeople, in a way. Similar to how I always describe changing out a light switch in my house: if you own a home, we've all probably replaced a plug or a switch, but there's no way we're gonna go into the circuit breakers on our own. We'd probably fry ourselves. So we call in an electrician to come do that. But
(22:09):
I'm not an electrician.
I just know how to go change the light switches, and that's what I need in order to solve my problems. That's what I think learning coding in the AI era is gonna be: you can continue to start from this place of, well, I just want something, and that's fine. I think that's great, and it makes the idea more accessible. I wanna be able to get you to
(22:31):
that journeyperson stage of, oh, okay, I know how this works.
I understand variables. New technology came out; oh, I wanna try to play with that, etcetera. But it's possible that at some scale and speed, we're still gonna rely on professional software developers in perpetuity, running these apps, building these apps, etcetera. The real thing that's interesting to me about that stat, and I was talking to some
(22:51):
teammates about this, is I really think there's a huge opportunity in operating the apps, and I'm a little dumbfounded that that hasn't been something that's been tackled yet.
I mean, at GitHub, we kind of focus on, you've gotten to production and, okay, great. And then you use
(23:12):
Sentry and PlanetScale and Azure and so on and so forth to run it. But I really think that in all of our probable life experiences as developers, the thing that bothers you is you get paged. You get an email. There's an error, and you're like, crap, what is this thing? That is another place where I feel like,
as vibe coding continues, once you run an app and you have
(23:33):
thousands or tens of thousands or hundreds of thousands of users, I'm not on the team of, oh, well, that's when you gotta bring in the serious people, no, rewrite it the right way. I really think there's still space to just say: okay, well, an error came in. The AI saw what it was. It resolved it. It wrote a test. The test passed.
It deployed it to a canary or to a small version, and you just get
(23:54):
a text message that says, we fixed it. That, I feel like, is the next step of this era of learning how to code, writing, and deploying these apps, versus deploying them and going, uh-oh, now I need a pro to come in and help me out.
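The loop Kyle describes, error comes in, the AI resolves it, writes a test, the test passes, it deploys to a canary, and you get a text, is not a product he names, so here is a hedged sketch with every component stubbed out. The function names and the pass/fail logic are assumptions purely for illustration; a real system would call an error tracker, an LLM, CI, and a deploy pipeline at each step:

```python
# Hypothetical sketch of the automated fix -> test -> canary -> notify
# loop described above. Every step is a stub standing in for a real
# service.

def propose_fix(error):
    """Stub for an LLM proposing a patch for the observed error."""
    return {"patch": f"handle {error['type']} in {error['where']}",
            "test": "test_regression"}

def run_tests(fix):
    """Stub for CI running the newly written regression test."""
    return fix["test"] == "test_regression"  # pretend the test passes

def deploy_canary(fix):
    """Stub for rolling the patch out to a small canary slice."""
    return {"status": "healthy", "patch": fix["patch"]}

def handle_incident(error):
    fix = propose_fix(error)
    if not run_tests(fix):
        return "paging a human: automated fix failed its test"
    canary = deploy_canary(fix)
    if canary["status"] != "healthy":
        return "paging a human: canary unhealthy, rolling back"
    return f"we fixed it: {fix['patch']}"

# The text message a user would receive when everything succeeds:
print(handle_incident({"type": "KeyError", "where": "image processing"}))
```

The interesting design choice is the two human-escape hatches: the loop only sends "we fixed it" when both the test and the canary agree, and otherwise falls back to paging a person, which matches Kyle's point that this should not require "bringing in the serious people" by default.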
Chris (24:09):
That makes so much sense. And are you actually seeing anyone out there, kind of early people, doing some of this? Is this in the wild more than just, you know... because we tend to think of AI in terms of writing the code. Operating the app makes perfect sense. Who's doing...
Kyle (24:25):
I think the issue here is that it will require us all to work together. So, I mean, when I joined GitHub, oh my, nearly twelve years ago now, I joined to work in the ecosystem, on APIs and webhooks and how you connect everything with GitHub. And that's really where my passion lies. It's in
(24:46):
the hub part, you know, of how do we get everything connected. And so I look at how quickly the industry has gotten so excited about MCP and being able to connect tools together.
I'm really hoping this hype wave drives into something valuable,
(25:08):
which will be: if I can bring the context of my error tracker, my database, my two cloud services, my email provider, etcetera, etcetera, all together, then I believe it becomes possible for tools to work together to solve these problems. Unfortunately, right now, each tool is attempting to solve the problem
(25:29):
that it can see, and I do not think that's terribly valuable. Right? As an end consumer, I don't wanna use three AI tools to solve an error in production. I want one.
You know? I want one tool to do that. Or I at least want them, in some future magical state where agents all actually work together and blah blah blah... then, you know, eventually that could also happen. But I have yet to see a tool that is kinda
(25:51):
tackling this, I think because of the interdependency problem that a tool like that would have in this current, very quick-moving AI tooling state.
Daniel (26:08):
Yeah. I think it's somewhat connected to my concern that the ease with which all of this can get built is great, but the burden on the debugging side is potentially growing. You have all this stuff, and then I guess it's more around decision
(26:30):
support, in terms of making good decisions based on these overwhelming pieces of information, because you built just so much stuff and you might not have visibility and intuition around it.
What is your thought? Because as more code is AI
(26:50):
generated, there's potentially not a good intuition even on how things are interconnected. Or like, oh, this function exists? I didn't know that this function existed. I've never heard this function name.
I have no context there. So what's needed from a tool
(27:11):
standpoint to really get the proper context around that kind of decision support, or whatever you wanna call it, for the developers in the tools that they're working in?
Kyle (27:23):
I think, for most of the modern history of software development, most folks have been working in a relatively high-level language. Right? A lot of abstraction, ultimately, and most of us aren't working in C or even lower than that. I think that in order to help us understand our code bases, or our multiple code
(27:46):
bases and multiple systems, right, like at GitHub, there's no world in which, as a developer who works on webhooks, I'm gonna understand how the Git systems ultimately work. And so for me, the piece that I'm trying to figure out is: how can we get more of that higher-
(28:11):
level abstraction of how the code base is working available to me?
And it probably needs to be in a way that, as a human, I can understand, more so than this class, this file, this whatever. I don't really need to understand that. I need to know that the webhook system is
(28:31):
having an issue, or this other piece isn't working, or there's a bug over here where we process images. And then I can click down and dive in a little bit more. Because usually, when you have a bug, even if you do understand the system, your goal is to figure out what to ignore.
You know? You're like, okay, well, it's not any of this
(28:51):
stuff, it's gotta be over here. And I do think that, similar to humans being good at describing a problem to the LLM, the LLM has to help us abstract up to the level I would draw on a whiteboard, and then let me double-click in and understand more deeply what's
(29:14):
ultimately going on.
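The whiteboard-level view Kyle wants, summarize at the system level first, then let you drill down, could look something like this sketch. The system names and health data are invented for illustration; a real tool would derive them from the code base and telemetry:

```python
# Hypothetical sketch of abstracting a code base into whiteboard-level
# systems, surfacing only what is unhealthy so you know what to ignore.

systems = {
    "webhooks": {"healthy": False, "detail": "delivery queue backing up"},
    "image processing": {"healthy": False, "detail": "bug in resize path"},
    "auth": {"healthy": True, "detail": ""},
    "billing": {"healthy": True, "detail": ""},
}

def whiteboard_view(systems):
    """Top-level summary: name only the systems that need attention."""
    return [name for name, s in systems.items() if not s["healthy"]]

def drill_down(systems, name):
    """'Double-click' into one system for the next level of detail."""
    return systems[name]["detail"]

print(whiteboard_view(systems))        # what to look at
print(drill_down(systems, "webhooks"))  # one level deeper
```

The point of the shape is Kyle's "figure out what to ignore": the top level hides the healthy systems entirely, and detail only appears on demand.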
Daniel (29:16):
Yeah, that's a great point. It reminds me of the peak microservices days, when everything expanded. I was at a small company at the time, and I don't know how many microservices we had. And we had alerting set up, right? But then an alert would go off, and everything
(29:38):
was dependent on everything else, so all the alerts would go off. It was either none of the alerts go off or all of the alerts go off. Then you're like, well, I give up.
Like, where do I even hop in here? It seems like a big opportunity. I guess, in terms of, and I wanna talk about Copilot specifically
(30:01):
here in a second,
but just in terms of the IDE specifically, and at a more general level: obviously, people are trying various things, with what Copilot's doing, and Cursor and Windsurf and all of these things, OpenHands and all of that. How do you see that
(30:24):
interface morphing over time? Do you see it still being recognizable in a year and a half or two years, or being something completely foreign, maybe, to certain people?
Kyle (30:39):
I'm hoping that in the next, honestly, six months, a startup, just because of the nature of how these things move, can kind of show us a future state that is in some ways backwards compatible. What I mean by that is: GitHub has had Workspace. We kind of demoed
(31:01):
Spark. All of these are kind of the code stepping into the background, to show me the prompts, the thinking, and a preview of what ultimately is being built. But right now, in IDEs, all the ones you've mentioned, and generally all of them that aren't the sort of idea-to-app tools like Lovable, Bolt, v0, etcetera, they all still stay code
(31:26):
forward.
And I think that's necessary in order to attract an audience right now. Otherwise, you kinda get pushed aside as, it's a fun toy, it's not really a tool that I'm gonna use as a professional dev. I do think in the future, though, I'm working with the app, you know, the web app, the actual iOS app or whatever, every time I'm writing code.
(31:49):
Like, I'm writing code, I'm writing a test, and then I'm gonna go and touch the app.
That last step is usually where I figure out if I'm right or not. And when something's wrong, why do I have to keep bouncing back and forth between the result, the thing I'm trying to actually build, and the code? And so there are a couple of tools out there now that are kinda showing me the preview, and as I
(32:11):
adapt it, the code is changing. And that gets to one of the most interesting problems to me in this AI era (maybe not the most, but one of the most), which is the magic mirror problem: how do I continuously change a representation and have the code, or the text, or the readme, or the spec match what I'm doing in the representation?
(32:33):
So, yes, moving pixels is pretty easy. Right? I'm gonna go, oh, I changed this position or whatever. But what if I ask it to do something completely different? Right?
How do I make sure that the code always matches that? And I think there are a couple of really interesting attempts at that. But if and when models, tech, specs, etcetera, get
(32:53):
better there, then I think IDEs will broadly be the prompts, the preview, and the thinking, so I can kinda correct and adapt. And then probably some way for me to click on a part of the app and not go "make it blue," which is the demoware that we all see, but instead say, well, no, no,
(33:13):
no, I want this to be a dynamic view that shows me this whole other, basically another controller, another view, another app or whatever. And it'll code it right there and show it to me. Then I think we'll be even faster than we think we are right now, because instead we're going and manipulating by prompting. You know?
Listen... okay, well, now I'm gonna convince you, AI, to go do
(33:34):
this thing. But it feels like we're still a couple clicks away, because there are some actual hard problems to solve to let you go back and forth very easily, because most companies are still working in code, ultimately, via CI, build systems, deploy, etcetera. So we wanna make sure that everything matches up in the code base, not just in the app or the visual
(33:57):
representation of what we're trying to build.
Chris (34:00):
So, you know, as we've been talking about code assistants and where things are going and stuff, I wanna get more specific for a moment because we got you here. Sure. Talk a bit about GitHub Copilot specifically, and maybe as a starting point on this, kinda talk a little bit about, you know, what the current state of GitHub
(34:21):
Copilot is, kind of how the user experience is now, and, as a starting point toward what tomorrow and the day after is going to look like, how you see that affecting, you know, IDEs, adoption of the technology, the whole thing going forward, and kind of start a path into the future from here on that particular item.
Kyle (34:46):
Yeah. Yeah. For sure. I mean, you know, I feel like most folks are familiar with Copilot 1.0, we'll call it. Right? Like, everyone's like, okay, so it does code completions, and cool. And, you know, in the last six months or so, we went from the yeah, it does code completions to, you know, now
(35:07):
you can choose to use a variety of models usually within a day, if not the same day, of them coming out. There's chat, you know, the ability to ask these questions. Now there's agent mode available in VS Code Insiders, which allows you to have that experience of describing a problem, watching it do the work, asking it to do something
(35:28):
else, working across multiple files, with the context of your entire repository, not just the file that's open, and make these sort of much broader, you know, changes to your application, still in the IDE. As part of sort of the overall Copilot family, we continue to do these explorations like Workspace and Spark, where we're sort of going, like we were just talking about,
(35:52):
what does it mean for me to plan out what I wanna build and then let Copilot as an agent go and figure out all the steps that need to be taken across multiple files, multiple repos, to ultimately kind of build that app. So the goal, instead of just saying, give me some lines of code or give me a whole, you
(36:13):
know, method, is now starting with, well, what problem are you trying to solve? You know? Most of our devs are working in, you know, major open source projects or big companies, or they're starting to learn, etcetera. And so we wanna be able to let folks come from a problem. That could be a prompt in chat. That
(36:33):
could be a GitHub issue. That could be a pull request that's already open, and you think that there's a piece of it that's missing. We want you to be able to just state what you're looking for, you know, and then let Copilot kinda take it from there.
So we kinda shared a little bit of a, you know, a preview of that path forward where, you know, we've all gotten bugs and we put them in our issue tracker, and it's, like, not
(36:55):
interesting. It's gonna take a fair bit of time to solve, you know, or to resolve. And kinda reposing the question, like, why not just assign that to Copilot and let it work just like a dev would work, you know, trying it out, running the tests, the tests failed, commenting on what it thinks it got wrong, continuing to go, and then asking for a human review.
(37:17):
That's something that, you know, again, we're trying to model after that experience of anyone on your team, versus treating it like this magical tool that's, you know, always gonna get something perfectly right. Instead, just like you would, you know, explain with another dev friend, you can go in and help Copilot understand, or just go, yep, that's totally right. Just change these two things and
(37:37):
Copilot will do it, and ultimately deploy. So when we're sort of looking at the code creation process, which generally happens in IDEs, I think that's a big part of it. The part that's, in some ways, like, more exciting for me as a dev is all the other pieces of being a dev. Like I kinda said, you know, like, when I'm writing or when I'm reviewing code,
(38:00):
I'm a human being, and so, like, I may not remember the exact, like, method signature of something, but this doesn't seem like the best way. And so to be able to work with Copilot in those moments, or to let Copilot kinda just tell me, yo, Kyle, this isn't quite right based on, you know, how I know you work, and so it can show it to me and just let me accept the change.
(38:20):
Or in Actions, in CI, why not let it fix the failures that come through, or let me define my Actions workflow just by talking to AI versus having to go and build it myself. And so, you know, the real kind of magic, I think, of Copilot over the next year is how can we find moments both in creating code, but also
(38:43):
in reviewing it, building it, testing it, deploying it, and let Copilot, probably in a much more agent fashion, you know, having a multitude of Copilot agents that can work together and use the context not just of your code, or all the code in your organization, but also the tools that you use. If Copilot can reach out and get the information from them using MCP
(39:06):
or a Copilot extension, then suddenly it can take over the tasks that you probably didn't wanna do in the first place, to be honest, you know, less so those sort of interesting, novel, I'm-building-my-business-around-this tasks. It'll help you do all those things. But at the very least, let's let it take away the kinda rote pain work that I think, you know, every
(39:27):
dev kinda has in their backlog, but it's been sitting there for the last, you know, two years, three years, or however long. It's artisanal now.
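The idea of Copilot reaching out to your other tools via MCP or a Copilot extension boils down to tool dispatch: the assistant names a tool and passes structured arguments, and a registry routes the call and returns structured context. A minimal sketch of that shape follows; this is not the actual MCP wire protocol, and the tool names and payloads are invented for illustration.

```python
import json

# Hypothetical tool registry: each entry maps a tool name to a handler
# that takes a dict of arguments and returns a dict of results.
TOOLS = {
    "issue_lookup": lambda args: {"title": f"Issue #{args['number']}", "state": "open"},
    "ci_status":    lambda args: {"workflow": args["workflow"], "conclusion": "failure"},
}

def call_tool(request_json):
    """Dispatch a {"tool": ..., "args": {...}} request and return JSON."""
    req = json.loads(request_json)
    handler = TOOLS.get(req["tool"])
    if handler is None:
        return json.dumps({"error": f"unknown tool {req['tool']!r}"})
    return json.dumps(handler(req.get("args", {})))

# The agent asks for CI context before deciding what to fix.
print(call_tool('{"tool": "ci_status", "args": {"workflow": "build"}}'))
```

Real integrations add schemas, auth, and transport on top, but the core contract is this: named tools, structured arguments, structured results.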
And so Copilot's, you know, really, really trying to allow you to just go from problem to app or, you know, problem to fix via these new experiences in the IDE, and VS Code in
(39:50):
particular. But also now in more IDEs, like we announced, you know, Xcode now has chat. A bunch of other editors also continue to have chat. So if you're in those environments, you can still use, you know, the power of Copilot. And then in GitHub.com, you'll see all those new experiences coming in: code review, being able to use an agent to, you know, build an actual solution for you from an issue, and kinda fix the other,
(40:12):
you know, almost 80% of dev time inside the SDLC process that they're working in, versus only focusing on that editor workflow.
Daniel (40:22):
How do you think, I realize this is probably a complex question, but I get it posed to me a lot. Sure. So I figure you're probably the best one to answer, or at least have an opinion. But oftentimes I get a lot of questions around this side of, I mean, even in what you just described, kind of, here's an issue, a fix, you know, agents
(40:46):
that can do this, especially around, like, the open source community and code generation. How does this kind of influence, you know, licensing and kind of the ecosystem of open source over time, from your perspective?
Kyle (41:02):
Yeah. I mean, you know, with Copilot and what it's doing ultimately, that code that is being generated, whether that be generated for, you know, your business or for an open source project, we have tools in Copilot where you can basically say, hey, if this matches any public code, don't give me a match. And then it won't. You know, it's not gonna match
(41:22):
anything from the public code base that it has access to. And so in general, for folks that are most worried about, you know, well, where is this code coming from? Is it using code and generating code that looks like other public repos that I don't wanna match on? It can do that, just by setting a setting. And for some of our sort of SKUs of Copilot, we require that
(41:46):
to be on. You know, you have to have that on in order to sort of, you know, protect yourself if there's any concern around, yeah, where is this code coming from?
What's the license, etcetera? I think as we continue to move forward more and more, and as we're looking at all the tools, you know, out in the market, as developers, I think we can all kind of intuit that there's only so many novel ways to write the
(42:10):
same exact thing. And so you'll sometimes hear, or I should say, I'll sometimes hear, particularly from open source devs, you know, going, like, oh, Copilot won't write this for me. You know, why won't it give me the answer?
And the answer is because that loop that you're trying to build is complex enough that it triggers us to look for a match.
(42:32):
And because we have that blocking on, because, you know, you've turned it on, or the business has, it won't give you a return. And so it really depends on the business's preference or the user's personal preference on whether they want that public matching to come back to you. But in general, especially as we get into agent mode and we get
(42:53):
into, you know, the ability to kinda create close to an entire app, you know, or at least a very complex set of files, you know, Copilot's gonna iterate and iterate and give you something that, again, doesn't match that public set if you have it turned off, but ultimately, you know, try to solve that problem for you.
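The public-code match check described here (block a suggestion if it reproduces public code) can be illustrated with a naive n-gram overlap test. To be clear, this is a toy sketch of the idea, not how Copilot's duplication filter actually works; the tokenization and threshold are invented for illustration.

```python
def ngrams(source, n=5):
    """Token n-grams of a code snippet (naive whitespace tokenization)."""
    toks = source.split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def matches_public_code(suggestion, public_corpus, n=5, threshold=0.5):
    """Flag the suggestion if too many of its n-grams appear verbatim
    in the public corpus, i.e. it likely reproduces existing code."""
    grams = ngrams(suggestion, n)
    if not grams:
        return False  # too short to judge
    public = set().union(*(ngrams(src, n) for src in public_corpus))
    overlap = len(grams & public) / len(grams)
    return overlap >= threshold

corpus = ["for i in range(10): total += i"]
print(matches_public_code("for i in range(10): total += i", corpus))   # True
print(matches_public_code("while queue: node = queue.pop()", corpus))  # False
```

It also shows why simple but common constructs ("only so many novel ways to write the same exact thing") can trip a matcher: short idiomatic loops overlap heavily with any large public corpus.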
Every other tool has a different set of, you know, obligations
(43:16):
like this, or whether it's gonna use the suggestions, etcetera. But I think at the end of the day now, our goal is really to make sure that everyone's empowered to use this tool. They can choose, you know, how they want to use it and what kind of responses and suggestions they want back. And that's why we
(43:37):
give Copilot, you know, for free to students and maintainers of very popular open source projects. And we're trying to find more ways to make sure everyone can have the tool if they want to use it. Now with Copilot Free, basically everyone can use at least a portion of Copilot, and then kinda let them decide for themselves what they're most comfortable with as we
(43:59):
keep going down this, you know, AI future of coding.
Chris (44:03):
As we start to wind up here, we often will ask guests, kind of, you know, what we refer to as the future question, kind of going forward now. But we have covered so much ground, I'm going to ask you that. And I will say that as you look into the future, you know, we've covered everything from AI in terms of productivity with code
(44:25):
to the developer experience to the GitHub Copilot product itself, and a bunch of tangential stuff. You go wherever you wanna go. Where do you think, as you are kind of finishing up for the day and you get through the crush, and you have a glass of wine, or maybe you're getting in bed for the night and your brain's kind of spinning in open mode, you know,
(44:45):
where you're being creative, where does your brain go, and where is all this going to go for us, and what kinds of things might be next that we haven't already talked about? You know, what would you like to see aspirationally coming down the pike? Take us into your brain for this last question.
Kyle (45:02):
Yeah. For sure. So, you know, if I were a good corporate citizen, I'd be pitching you on something from GitHub. But that's not the honest answer. We're all developers in some way, and so the people will understand. I think true ambient AI that understands me and has access to my information and what I choose is the thing I'm most interested
(45:26):
in coming right now. You know? I think we've seen the power of the LLM, and I don't think we've honestly tapped into the vast majority of it. We're still broadly speaking in chat models, and that's incredibly boring to me. You know? I get it and why it's that way, but, like, I really think the next step is gonna be more about, if you have all of my emails, my calendar, all the things that
(45:49):
I'm currently sharing, that could be my purchases on Amazon, that could be, you know, access to sort of my doorbell camera, and you see what I'm wearing on the way out, etcetera.
There's all these experiences where we go to Google and we go, what's the weather today? Or we ask our assistant, like, you know, a tool at the house or whatever. Or more complex, you
(46:10):
know, like, what was the last episode I listened to of Practical AI, and what was it about? Because I'm going into a podcast recording, and I wanna remind them that Matt Collier is a friend of mine, and he did a great job with Sidekick, and kinda so on and so forth. That ambient AI or that ambient intelligence where we're not, like, invoking an assistant, it's just telling me what I need to
(46:31):
know when I need to know it because it has all that data about me. I want it. I desperately, desperately want it. And I think there's a couple of, like, really interesting attempts at this. Like, there was Rewind AI that was a Mac app, and they kind of pivoted into this Limitless tool, which is like a wearable plus all the apps that has the same
(46:51):
idea. There's been a couple of, I won't, like, name them, but some memed versions of this thing, and that's not really kinda what I mean.
I really mean the ability to finish my thought because you have all the context that I need, and I didn't have to set up 55 integrations or IFTTT or Zapier to move all my data into
(47:12):
a single place so that way GPT-4.5 can answer it or whatever. You know what I mean? And I don't think we're that far off. I find it incredibly interesting that, like, iOS and Apple Intelligence have been attempting to come up with what they're next up on, but I actually have some hope that they may solve this, because they haven't shipped
(47:34):
their solutions, you know, and they kind of publicly are talking about how it may take longer than they thought. The biggest gap to this isn't LLMs. It isn't connecting all the data. It's privacy. I don't want all of this data sitting in an arbitrary startup's cloud or wherever, you know, to do this. For as powerful as all of our laptops are, there's still limits, you know, about how much
(47:56):
it can do and how much data it has and what models it can run, etcetera. I think someone that can take all the information, do it in a way that I'm personally comfortable with from a privacy perspective, both for me and for anyone that is inherently, you know, like, getting data sent from them into this tool, you know, like if I was recording my screen right now, for example, to be able to have all that and actually help
(48:18):
my day-to-day life in a real way, you know, and reminding me of what's coming up and helping me do those things without the personification of a "hey, Siri" or "hey, Alexa" situation, just text, that's what I sit up thinking about at night, and how to crack the privacy nut, because I think that'll be required for us to do this in a way that is both really powerful, but also,
(48:40):
I think, morally correct and, you know, safe, for all of us to, you know, benefit from, versus accidentally slipping into an even worse dystopia by letting all this information kinda, you know, get out into the wild in a way that we don't want.
Daniel (48:56):
That's a great way to end it, Kyle. I also have hopes for similar things, and we end on the same wavelength again. Really appreciate you joining.
Kyle (49:07):
Thank you so much. Thank
you so much for having me.
Jerod (49:16):
Alright. That is our show for this week. If you haven't checked out our Changelog newsletter, head to changelog.com/news. There you'll find 29 reasons. Yes, 29 reasons why you should subscribe. I'll tell you reason number 17: you might actually start looking forward to Mondays. Sounds like somebody's got a case of the Mondays. 28
(49:40):
more reasons are waiting for you at changelog.com/news. Thanks again to our partners at fly.io, to Brakemaster Cylinder for the Beats, and to you for listening. That is all for now, but we'll talk to you again next time.