
May 20, 2025 • 63 mins

Unlock the power of AI development and learn to build your own custom AI tools! In this episode, we dive deep into AI coding strategies with AI engineer Manuel Odendahl. Discover practical approaches to identify opportunities, leverage existing services like Cursor, Gemini, and Sonnet, and craft bespoke software solutions tailored precisely to your unique challenges. Whether you're a developer aiming to expand your toolkit or someone with a problem needing a fix, we teach the mindset and methods to build AI-powered tools yourself, solving those nagging day-to-day problems. Forget waiting for solutions; learn to create them with AI and transform your workflow, moving from detailed coding to high-level ideation and automation.


Sign up for A.I. coaching for professionals at: https://www.anetic.co

Get FREE AI tools
pip install tool-use-ai

Connect with us
https://x.com/ToolUseAI
https://x.com/MikeBirdTech
https://x.com/ProgramWithAi

00:00:00 intro

Subscribe for more insights on AI tools, productivity, and automation.

Tool Use is a weekly conversation with AI experts brought to you by Anetic.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Forget expensive development teams, the most impactful
software you'll ever use might just be what you build for
yourself with the help of AI. Now anyone can leverage AI to
create bespoke tools to solve those nagging problems that you
face day-to-day. In this conversation, we're
sharing practical approaches to identify opportunities, leverage
existing services, and craft solutions tailored precisely to
your unique challenges. Whether you're a developer looking

(00:20):
to expand your toolkit or simply someone with a problem that just
needs solving, we want to teach you the mindset and methods to
not wait for solutions, but rather just build them yourself.
Welcome to episode 40 of Tool Use, the weekly conversation on
AI tools and strategies to empower forward thinking minds,
brought to you by Anetic. I'm Mike Bird, and this week we're
joined by Manuel Odendahl, an AI engineer, tinkerer, and a pillar in
the AI engineer community. Manuel, welcome to Tool Use.

(00:42):
Hey, how are you doing? Thanks for having me.
Absolutely. Would you like to give us a little
of your background, how you got into development and eventually
AI? So I've been a professional
developer for like 25 years now, but I always wanted to be a
programmer, right? Like when I was six years old, I
knew that's what I wanted to do. And I've been mostly on the back

(01:03):
end side, system side. Like I've done some
web stuff. I've been in embedded for a long
time, for like 10 years or 15 years maybe.
And recently went back to like e-commerce.
And I think three years ago, when the Copilot alpha came
out, that's when I started using AI tools. And when ChatGPT
came out, I was like, I'm going to do

(01:23):
everything that I do with ChatGPT.
So the last two and a half years have been pretty wild.
Yeah, absolutely. And that seems to be the trend
with a lot of people who've been developers.
As soon as you kind of get exposed to it, something
clicks and you see it. What was one of the first use
cases for ChatGPT? Were you just kind of having a
conversation? Or did you dive right into help
me with coding? So it came out on a Friday, I

(01:44):
think, or like a Thursday. And I was like, why is this on
Hacker News? Everybody's talking about it.
OK, I guess I'm going to try it. So I first started doing like
rap battles about Postgres versus MySQL, that kind
of thing, and transforming all my code comments into limericks.
And then I was like, OK, this is working like way too well.

(02:06):
I'm going to try to just like write all of my code.
And so starting Monday, I basically decided everything
that I'm programming, I'm going to try to have the AI program it
for me. So beginning was pretty rough.
I was like, I think it was like davinci with like a 2K token
window. It was like a different world back then.
But but even then I was able to like slowly start leveraging a

(02:28):
bunch of things and it's been going.
I don't remember the first things I did, to be
honest. It was probably just like
transforming SQL queries. Like I was doing a lot of
e-commerce stuff at the time. And meanwhile, like,
100% of my code is just AI, to be honest.

(02:49):
Like sometimes I'll type a variable name or something but.
Yeah, I find there's the odd day where I'm like, you
know what, I just want to do it myself, make sure I don't
totally lose it. But I mean, AI does 95% plus of
mine as well. I try to adhere to the rule: if
I couldn't write it myself, don't allow AI to write it.
Like don't get lost in the weeds and have it do something you
don't totally understand. But I mean, the efficiencies

(03:10):
right there. You mentioned Copilot.
What's your tool of choice right
now for developing with AI? It's been fluctuating. So it's
been Cursor for quite a while. I am starting to lose my love
for Cursor; it's just been annoying me. For a really long time,
I was just copy pasting stuff, right?
It was like either copy pasting from my command line tools or

(03:31):
copy pasting from the ChatGPT window, putting it back into my
code. And it was pretty tedious,
right? Like I had a couple of like diff
apply models in the beginning, but that was like really hard to
do. So when Cursor came out with their pretty
well working diff model, that was really... at the time, I felt like a copy

(03:52):
paste machine for the AI, right?
It's like I would paste stuff into the AI and I would paste it
out into the code and I would do that all day.
I was like, what am I doing here? But lately, especially with
O3 and coding agents, or like just autonomous agents like
Manus, I find myself going way back to copy pasting stuff

(04:15):
around. Except that I'm not copy pasting
like functions or lines of code. I'm copy pasting whole code
bases around. So for the last two weeks, my
workflow has been more and more going back to just like copy
pasting code from left to right.
And I think in terms of tools, it's like, it doesn't really
matter in a sense, right? It's like mostly, OK, well, how

(04:38):
do you copy paste stuff from the AI to your code base?
And how do you find the stuff to copy paste into the
code base? So like sure, Cursor and agents
help with that, but then ultimately it
kind of doesn't matter. Like you could give me Windsurf
and I would be able to do the same stuff. You give me like vi
and copy paste, I'd be able to work kind of the

(04:58):
same way, just like maybe a little bit more tedious.
In terms of models, it changes all the time as well,
right. So for a long time, for a really
long time, it was Sonnet. Before Sonnet, it was like GPT-4
and 3.5 Turbo. And now, I still use
3.5 from time to time. I use 3.7 for certain use cases.

(05:20):
And then Gemini Pro is like kind of my favorite, even though it's
annoying in Cursor. And then O3 is like kind of its own little
world, I guess. So yeah, lately it's been, it's
been crazy. Yeah, it's all over.
It was funny how long of a reign 3.5 had, just being the supreme
model. I've also switched to Gemini 2.5

(05:42):
Pro for the vast majority of my work now.
I still find with the reasoning models, I'll use them to do
like an initial: hey, here's an idea I have.
How should I architect it? Am I missing anything?
Give me some cool feature ideas. But yeah, I need that kind of
quick iteration. As soon as I get pulled out of my
flow, I notice that it's almost like
a detriment. That's interesting, because
I use it the other way around. I use Sonnet 3.5 still, like

(06:05):
Sonnet 3.5 is still like a really nice model, because I know it
so well and I know that it's just great. Like
it is able to make good decisions at the file level,
right, like within 500 lines of code.
It does some really good work in terms of which patterns to apply,
how to structure the functions, and all of that.
But the way I work now, my current flow is: when I'm

(06:29):
lazy or I don't know exactly what context to put to the AI, I
will use the agent mode in Cursor to kind of scurry around
and find like the relevant pieces in my code base.
So I'll use Sonnet for that. I'll usually have a
document that explains the architecture,
I can show that later on. And I have Sonnet 3.5 kind of do

(06:50):
a couple of tool calls saying like, oh, I'm going to look in
this directory. I'm going to look at this file.
I'm going to look at this thing. But I don't ask it to write
code. I just say like, oh, look
around and tell me a little bit what you find, which
fills the context window with, you know, relevant code.
And then I'll switch to Gemini 2.5 Pro.
Because if you start with Gemini 2.5 Pro, or any
reasoning model, they don't have very much to reason about in

(07:13):
your first question, right? They'll be like, oh, well, I'm
going to try to do the thing that I was asked.
And it's like, OK, yeah, that's great reasoning.
Thank you. You'll be able to tell:
it just says like, thought for one second, and you're like, OK,
but I wanted it to think for like 30 seconds, right?
And so once the context window is full, then I'll actually
switch to the reasoning model and I'll be like, OK, now write a

(07:34):
document that explains how to solve my problem.
So I still don't really have it write code.
I just have it write a document. But now it has all this
reasoning. So you'll see it like, oh, maybe
I should use this method from this class.
Oh, wait, but like XYZ, so that kind of stuff.
And then I'll turn Max mode on, because at that point the
context window is pretty full, but I still want to keep everything

(07:55):
that's in there. And I'll turn Max on and I
just say like, do it. And usually these three steps,
like it's always like three steps, and it works pretty well, where I
can like one shot thousands of lines of code.
So maybe I'll show that later on.
It's like that. That's kind of my workflow for
the last two weeks. And it works.
It works really, really well. And the thing I really

(08:17):
appreciate about Gemini is the thinking traces, right?
So while it's thinking for like 30 seconds, actually these
thinking traces, I try to read them pretty closely, because they
teach me about the code base really well, right?
It's like kind of looking into the brain of a really
caffeinated good engineer that's going to be pretty rigorous.
And so like, oh, it looks like this class is this, and looks

(08:40):
like this class is this. So while I'm kind of vibing
still, I do actually absorb a lot of knowledge about the code
base. And that has been... that is
finally, after like two years where it kind of felt like development
was like a little bit Russian roulette, right?
You'd be like, all right, let's put the prompt in and then let's

(09:03):
see if it, right, like let's see if it fires or not.
Now it feels like I'm engineering again, right?
That I'm like writing a document, understanding the code
base, and it's not just like, let's wait and see what happens and be
like, nope, rewrite the prompt, try again.
Nope. It's that even if it fails, I'll
actually have a whole sequence of steps where I'll be like, OK,

(09:25):
well I did make progress on my task, right?
Yeah, I really like that approach.
Being able to have that asset almost like the source of truth
for what the objective is, is super important.
Whether it's doing things like evals, or even just like a human
writing a PRD document before diving into a project, having it
well defined and scoped out helps immensely.
So having an intelligent agent go through and create that
for you is super valuable. I do want to give a quick

(09:47):
shout out to two tools or two services I've heard about
recently. One is howit.works.
Put in a repo and it'll actually dive through the repo and show you:
if you want to say, hey, I want to replicate this functionality,
it'll show you how to do that. howit.works, how
it works. I'll show you the link.
Yeah, so it's just how it works. But yeah, they got one of the

(10:07):
fancy URLs. I'll share it below. And it's been cool being
able to explore it. It's from the people who made Cats with
Bats, which is a really good video platform.
And the other: Devin came out with DeepWiki, I believe it's
called. You put in your repo,
and then it just in depth explains to you what it does,
what the capabilities are, and just helps you kind of get an
insight into how the code base works in a quick way.

(10:29):
Let me actually piggyback on that, the DeepWiki
stuff, because I realize this is something that not all too
many people have been doing, but that I have been doing for a
long time, which is: do you have like a repo that you kind of
enjoy looking at, or like some code base that you're
interested in? Let's use Vim, right?
Like I've never looked at the source.

(10:50):
I've never looked at the Vim source code.
So let me share my screen and I'll show you
something. Since you mentioned DeepWiki: I find DeepWiki
is pretty cool, but also it's pretty slop,
right? It's like, it's not
like the most pleasant stuff to read.
It did feel very, yeah, very, very

(11:11):
AI generated. Right.
It's like it has a lot of bullet points, and,
which is... I found that this kind of AI slop is great when
you ask to create it, because you know what you're looking
for. If you see someone else's AI
slop, you actually don't know which parts are relevant and
which ones aren't. So then, on GitHub... I don't
know what the repo is, but I'll show you how to do what DeepWiki

(11:35):
does, but basically benefit from like, oh, I know what I'm
looking for. So I'm able to ask the right
prompts. And the way I'll do that... I've always been someone
who reads a lot of source code, and I really like reading big
code bases, because you'll see how like complicated code
comes to be. You know, things like: if you
look at nginx as a proxy, you know that like somewhere in

(11:58):
there, there's going to be like one line of load balancing,
right? The load
balancer is very easy, it's like a one line algorithm, but like, where
is this line? Like it has to be somewhere,
because it has to be efficient. And so finding where these kinds
of like cores of certain repos are is like

(12:21):
really interesting. And the AI is just
like bliss to do that. And you'll see, like,
one of the things many people bring up is that, you know, the AI
only knows what it's been trained on, which in a way is
true. But also, if you write a good
document describing something foreign, then it does

(12:43):
pretty well. So this is the Vim
code base, right? Which I've never opened in
Cursor, so it's probably still indexing it. But I'll do this
technique that I was talking about, which is
selecting Sonnet 3.5, but you can really use any of
your favorite models in here, and then say, I don't know, analyze
this code base and tell me what interesting parts there are.

(13:09):
You can use kind of any stupid prompt.
It's probably going to run like a couple of shell commands,
doing like find and, right, like ls, and just based on file
names being like, oh, this looks like pretty interesting.
So you'll do that. It's like filling its context,
right? And then I'll actually turn on

(13:30):
the reasoning model and select it.
Please, please think a little bit.
And I find that more valuable than DeepWiki.
Like, I mean, DeepWiki is pretty nice.
Like I've used it for quite a few repos, because it just
saves you time if you want to get a quick overview of what
something is about. But the slop is pretty big, and
it tends to just find the high level stuff that's not that

(13:53):
interesting. It will tell you like, oh,
there's like a configuration management part, and then there's
like a front end part and a back end part.
I'm like, yeah, but what does it do?
Yeah, totally. So now that we have all of this,
right, like you'll see... I don't even look at this.
I'm like, I don't care. All I cared about was actually
this stuff here, right? I'll stick with Sonnet for now

(14:16):
because I don't think we have much source code in there.
How are buffers handled efficiently, which is always
something I wanted to learn about.
And we'll see if it's able to find the source code.
And once it's able to find, like... that's a wild, that's a wild
guess. Saw it.

(14:40):
But yeah, once it starts reading source code like this...
no, actually, it wasn't that wild a guess.
That's pretty interesting. Once it's able to find code
like this that it has been reading, right?
It's like, this is exactly the kind of stuff that I was looking
for. Then I'll ask it, and I'll show
some of my prompts to create like not all too sloppy

(15:01):
documents. All right. And I learned
something this morning: if you click here...
oh no, you don't. This morning I clicked here and
I had the token count, but I guess it depends on the model,
so I'll switch. Yeah.
Because I feel like I saw that on Twitter too.
Yeah, and I saw it before, like... oh, look at that.
OK, so we're at 40,000 tokens, which for Sonnet 3.5 is

(15:23):
the limit where it starts degrading. Like, take the official
numbers and halve them.
You shouldn't go much past that. That's when it starts making
like weird errors and weird edits.
So that's why now I switch to Gemini 2.5 Pro, where I'd say
the official number is like 200,000 tokens, I think.

(15:44):
And then it also depends on the tokenizers they use.
But what I'll do now is, like: explain in detail how the
efficient buffer handling works and write a blog post for junior
computer science majors that gives them a solid introduction

(16:08):
and deep dive into the topic. So framing it as a blog post makes
it into something that's not just bullet point lists.
And then I usually have a structure just to store all of
these files that I generate, which is going to be... I call it
TTEMP. And then, very importantly,

(16:29):
the file name that I use: I
try to make it as descriptive as possible, because when the model
does an ls or a find, it will find these files.
And if they're called like 'blog post for junior software
engineers about efficient buffer handling', it
doesn't need to look into it all that much.
So that's what I'm going to call it: blog post for CS

(16:52):
students about efficient Vim buffer handling dot md, right?
And so now the thinking trace is going to be pretty long, I
think. And I'm not going to read it,
right. But this will teach you a lot
about how all of this works. Man, it didn't think very
long, but whatever. But it did think.

(17:14):
That's a really good point about the importance of the metadata being
something that you also want to keep optimized for AI,
because if it can more easily identify a file, it's going to
know where to go. It's going to spend less time
kind of searching through. So great advice.
And so you'll see, right? Like this stuff that it output
here. And I really like that
Cursor is able to go straight to the definition of
these things. If I store that in a file and I

(17:36):
say like, oh, I want to work on buffer editing, right?
Like if I start a new conversation and I tell Sonnet,
please, I want to do this, and I give it this file, it will
know where to look immediately, just because of the symbol
names, right? And it's not necessarily
dependent on what the current code is; the code could have
changed. It will still know where to
look for these things as long as the names don't change all too

(17:58):
much. So it's like kind of an index.
And then if you look at this, then you have like a pretty
nice, not all too sloppy kind of thing.
Usually Sonnet 3.7 is like better at this.
So often I'll just switch to Sonnet, usually Max, because
the context window is getting... how long are we?
Yeah, this didn't change all too much. But I'll be like, make

(18:20):
this even more detailed, useful paragraphs, search for
literature, right? And then you can kind of build a
really good document out of it. And I always try to make them
nice to read, because if the AI gets stuck, I want to have a
nice to read document and not some kind of weird bullet

(18:40):
point orgy. So this kind of showcases...
even if it's just, in this case, research, this is
also how I create... I basically have it
write a blog post about how to do the thing that I want to do,
right? Like a nice tutorial with
nice explanations and full paragraphs.

(19:01):
Because if the AI is not able to do it, I end up with a nice
tutorial anyway, and I can then do it myself.
Excellent. And so, yeah, that's
like what DeepWiki kind of does, right?
But it's super cheap to do it for yourself.
It's just like: open a codebase that always interested you and
start asking questions, and then you have like this great stuff

(19:23):
here. I love the idea of using
Cursor as a learning tool, and especially if people hear about
a tool that they kind of want to replicate or take specific
functionality out of, they get everything right
there for them to start with. And then they can just kind of
pare it down to what is most important to them, and then take
that artifact and put it into a fresh code base to build
something for themselves. Yeah.
And if you extrapolate that to like full agents,

(19:46):
which are starting to become more common, right, like
Claude Code or like Manus or all of these things. Like,
something I did yesterday, which kind of surprised me.
I mean, I surprise myself every time with these tools.
There was this thing on Hacker News, which is a course about how to build
an LLM inference engine on Apple Silicon.
And so this guy was like, oh, you know, I wrote the

(20:08):
code for these, like, ten or so chapters, but I only wrote the
chapters as human language for like the first two.
So I literally went to Manus and I said, like, can you write the
missing chapters? And it did.
And the result... I almost prefer the
AI written chapters,

(20:30):
because they just have more references and deeper
search. But because the code has been
written, I know that they're probably fairly correct, right?
So that's been pretty wild to see. If I
look at my initial prompt, it's literally: write the missing
chapters for this repo. That's it.
Amazing, how have you been enjoying Manus?

(20:51):
It's pretty great. Like, what comes out is always a
bit janky, usually, right? Like it's not the, oh my god,
this is perfect, I'm going to drop it right
in. But it's always
a really good first step. Often nice and excellent, yeah.

(21:14):
Knowing that it will only get better, right?
Like I can see, OK, this is the first step and it's
already that good. It's like the
GPT-4 of agents, it kind of feels like, right?
It doesn't feel like 3.5, which is like, oh my god, this is
horrible. But it feels like, oh yeah, it's
already like sparks of AGI kind of thing.

(21:35):
We're dancing in the middle of it, and I feel
like it's going to be a gradual thing where, you know, frog in
boiling water analogy, where one day we'll just wake up and
realize, oh man, we've had it for a bit, like things are
just kind of amplifying all over the place.
So pulling it back in: one thing I liked about how you've been
approaching it: you've just built a ton of tools.

(21:55):
I believe you have a GitHub repo with just like dozens of Go
tools that just solve different problems.
I'd love to discuss that a little bit.
How do you go about doing that?
Do you just like encounter a problem and do the classic dev:
oh, I could spend two minutes on this job or I could spend two
hours automating it? Or like, how do you go about building these
tools for yourself? That's actually...
Yeah, it's a deeper thing, because I think that when

(22:17):
I started using ChatGPT for everything, I had to make it a
habit. I had to like pinch myself and
say, no, you want to do it with ChatGPT, right?
Because it's so easy to just do things the way you used to do
them. And so this habit of, oh, can I
just ask the AI to do it, on one hand, it's like a hard habit

(22:38):
to get into, but it's also weird when you realize, when you're
talking about something, it's actually exactly that fast to
type the thing you're talking about into ChatGPT and press
enter. And so that was like a habit
that took me a while, but is now deeply ingrained.
So what I do often is, when I have any kinds of ideas

(23:00):
of stuff that I want to build, I write them down.
So I have like tons of sketchbooks filled with ideas,
and some of them, like one idea I'll probably have like 50 times,
because it just keeps coming up. And then I don't even go
back to these sketchbooks now, because it's kind of automatic.
And I'm just like, yeah, I want to build this.
And so especially with Manus and stuff now, I've been going

(23:24):
completely crazy. So I was on vacation last week,
and I ended up with one and a half million lines of code of
ideas that I wanted to have done, right?
Like, obviously I didn't write any of it, but I was like, oh,
wouldn't it be cool to have, I don't know, like YouTube
Shorts editing software using Blender?

(23:44):
And I was like, Hey, manas do itright.
And I came back with stuff and Ihaven't like even looked at them
anymore. So I have I created like a vibes
repo with all that code and a couple of those are then cleaned
up and like made them work. But it like it feels like being
in a candy store right now because like all of these

(24:05):
actually look pretty good, right?
I was like, oh, wouldn't it be cool to have something that uses
tree sitter to parse a code fileand then for every function that
like tells me using the languageserver protocol where it's being
used, type that into manus And then I have it now, right?
It's like, it's like it's prettyrough, but it appeared for like

(24:27):
$5. So, I don't know.
It's been a long, long habit to not just
think of which tools to build, but actually get a sense for,
like, OK, what's in reach for a tool to be built to be
useful? If that makes sense.

(24:49):
But the problem now is just, they're so easy to build
that I forget them, right? It's like, I'm always annoyed
because I'm like, I know I built this tool.
I don't know how to use it. I don't know where it is, in
which directory it is. I remember having it, but...
No, that's funny. I got into the habit of just
making scripts. If I did something, if I asked
Open Interpreter to do a certain task, I'd end up converting it to

(25:11):
being like, hey, will you create a script for this task? Then I
just have it on repeat. One area I find people are having a bit
of trouble in is they try to use the LLM to accomplish
everything. When if you have
deterministic code written, all of a sudden you get way more
reliability. It's cheaper, it's really
automatic. You can just tie it into
things. It's easier for the LLM usually, right?
Like, that's one of the prompting patterns that I keep
telling people, for the last three years.

(25:33):
It's like, you're trying to have the LLM do a task, but
you're not trying to use the LLM itself to do the task. Like, you
know up front: oh, I want to clean up like
messy text, and it should be doing this and this.
Then just add the three words 'write the code to' in front
of it, which will also cause like Open Interpreter to kick in,

(25:53):
or whatever code analyzer most front ends now have.
Because if you ask something like, imagine you have like a
long table of bank transactions and you're like,
OK, which ones are, like, weird? I don't know what the LLM is
going to do. It's going to do like weird
stuff, right? But if you write like, oh, write
the code to show me the outliers in these bank transactions, it's

(26:14):
actually a fairly well known problem.
There's like a lot of stuff in the training corpus.
So it's basically just going to say like, oh, I'm going to write
the code calling a library, like, find outliers.
And it's like, yeah, that's pretty easy for even GPT-3.5 to
do. And so that's, yeah, that's
that's like one of the ways to build all of these tools.
It's just like, I have a problem to solve, and it's often easier

(26:36):
to write the code than to ask the LLM to solve the problem,
because, A, they suck, right?
Like, they're still copy paste machines.
It's less obvious these days because the models
are so big, but they still do the same
nonsensical things: I have a wrong variable name, I don't care, I'm
going to use it now. I'm like, why?
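As a concrete illustration of the "write the code to" pattern, here is roughly the kind of deterministic script a model might produce for the bank transaction example above. This is a sketch, not anything generated on the show; the transaction amounts, the find_outliers name, and the 2-sigma threshold are all illustrative assumptions:

```python
# Sketch of code an LLM might write for "write the code to show me
# the outliers in these bank transactions". Amounts and the 2-sigma
# threshold are made up for illustration.
from statistics import mean, stdev

def find_outliers(amounts, threshold=2.0):
    """Return amounts more than `threshold` standard deviations from the mean."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    return [a for a in amounts if abs(a - mu) > threshold * sigma]

transactions = [12.50, 9.99, 14.20, 11.75, 8.40, 13.10, 950.00, 10.60]
print(find_outliers(transactions))  # [950.0]
```

The point of the pattern is that the model only writes this once; after that, the outlier check is ordinary, repeatable code instead of a fresh LLM judgment call each time.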

(26:59):
Yeah, just invent a file name. Actually, something that's like
right in between those points: I've started using Gemini 2.5
Pro to do the timestamps for these videos.
So I upload the subtitles and have it extract them.
And it used to just make up stuff.
It used to be random. But as soon as I added 'verify with
code', it just simply goes through the SRT file.

(27:19):
It's like, here is where Manuel actually asked this question,
let's start it at this point. And it became so accurate just
from adding 'verify with code'. What code did it write?
Oh, you mean it looks up the timestamp and then
confirms that that's what's in there?
Yeah, that's a good idea. I should do
that. See, this is what I would write
down now, which I'm going to do, actually.

(27:41):
Standard. 100%, yeah, take it, take it, yeah.
It's one of those things where I'm not even sure if it will write
out code that you can export or whatever, but just by having it
in the code interpreter, being like, oh, let's do, you
know, whether it's grep or just similarity search or something,
it's going to get a lot different results.
So yeah, that's one of the reasons I think people should
actually still learn to code. It's important just to

(28:03):
understand this human to computer language that we speak.
100%, right? Like the fundamentals of knowing how to decompose a
problem is what allows you to build something in two seconds.
Yeah, right? Because you just say, like, oh, use grep to do X and
then store it as JSON to put into SQLite.
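That grep-to-JSON-to-SQLite chain is short enough to sketch end to end. A minimal illustration; the log lines and the ERROR pattern are made up, and grep is stood in for by a plain substring match so the example is self-contained:

```python
import json
import sqlite3

log_lines = [
    "2025-05-20 10:00:01 INFO started",
    "2025-05-20 10:00:02 ERROR disk full",
    "2025-05-20 10:00:03 ERROR timeout",
]

# "grep ERROR" step: keep only matching lines, as small JSON records.
records = [json.dumps({"line": l}) for l in log_lines if "ERROR" in l]

# "put into SQLite" step: one row per JSON record.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE errors (record TEXT)")
db.executemany("INSERT INTO errors VALUES (?)", [(r,) for r in records])
count = db.execute("SELECT COUNT(*) FROM errors").fetchone()[0]
print(count)  # 2
```

Each word in the sentence maps to one step, which is exactly the decomposition skill being described.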
It's words that... actually, to string them together in that

(28:24):
order, you need to have a pretty good understanding
of what they do. And I'm really curious how
the new generation is going to onboard with that, right?
Because it's so easy to be lazy, but also it's so easy to be not
lazy. And I've heard there's a
lot of people wringing their hands saying, like, we ruined the

(28:45):
new generation. They'll never learn anything.
And I'm usually a little bit more optimistic, because
the kids are alright. And so I've heard from a
person I met at a conference, for example, that their
14 year old son has reverse engineered the Roblox protocol
and has built their own, like, three.js version.
And they added audio control to it.

(29:07):
And you're like, OK, well, that's what kids do these days,
right? It's like, I struggled
writing Hangman in BASIC or so, and they're like, OK,
well, yeah. Well, let's reverse engineer the
Roblox protocol or whatever.
Yeah, that's why it's it's an amplifier.
It can amplify laziness, it can amplify the curiosity to learn.
It's just kind of making sure you you nudge people or at least

(29:29):
demonstrate the benefit. If you say, if you understand it
properly, you can accomplish way more than just going on and
saying, you know, generate a game for me.
If you can give any type of direction, use the Unity engine
or or three DJs or whatever. There's a ton of potential just
for having that innate knowledge ahead of time.
I trust in the competitive nature of not only teenagers,
but just like humans in general where it's like, Oh well, if

(29:50):
writing an app now is like the new like stick figure, right?
Where where you can just like everybody can prompt and say
like, make me an e-commerce app, then there's going to be a fair
amount of people who want to be like not part of the masses and
do something cooler. And I I don't know what it's
going to look like, to be honest.
Yeah, I, I tried as hard as I can to avoid predictions because

(30:13):
even one year out, I really don't know where we're going to
be. It's just the rate of changes.
I have no idea. Yeah, yeah.
The best thing we do is just keep building.
One thing I know is that things are slower than I thought they
would be, right? Like when in, in December 2023
and then in, in March, No, sorry, in, in December 2022, I
was like, holy shit, this is like wild.

(30:33):
And then in March 2023, I, I had already, like, all of these things
that people are slowly in a way discovering, right?
Like, you can make unit tests, and you can make, like, fuzzers
and like, you can write all these internal tools.
And, like, it transforms everything in our workflows.
And now we can have, like, logging everywhere.
I was like, OK, well, in six months, the entire industry will
have switched to these tools. And I still encounter people who

(30:55):
say, like, it's just a magical autocomplete that doesn't work.
I was like, OK, man. Like, I don't at this point, I
really don't know what to tell you.
Yeah, and I get it. Everyone has their own little
bubbles and, and we're we're very deep into it.
But yeah, it's just one of those tools where it's so hard to, or
it's so hard for me to comprehend how people can ignore
it. You, you can dislike it all you

(31:17):
want. You can be skeptical of the the
future direction, but just the change over the last two years
has been so significant. The only way I found to like,
have people change their mind right about these things is to
show that on code that they know.
It's like, I can show them as much code as I wrote.
I can show them as many pull requests that I did.
I can show them, they'll always be like, yeah, it's like a proof of

(31:39):
concept. It's like, that's a tiny
feature. I'm like, no, it's not.
It's like 3000 lines of code on a legacy code base.
And I'll be like and, or whatever it's like, but once you
show it on like a problem that they have and it's like, oh,
look at this legacy class. Like you want to have logging on
every entry point, like say, add logging to every entry point.
And they're like, holy crap. Like because it shows them that

(32:00):
they're not. I mean, there is a very valid
reason to be afraid of these things as a software engineer
because indeed it's like a lot of our labor is just like
disappearing, right? It's like what I used to get
paid for my whole career is likeis like gone.
I don't do it anymore. And so there's a very valid

(32:23):
reason I think to be worried about it, especially if you are
in one of these roles in a bigger corporation that
could totally be cut because, like, you're not very valuable to
start with. So I can, I can understand that
people don't want to look more closely because of what might
come out. And I totally understand that

(32:44):
fear. But if you approach it in terms
of like, look, obviously you care about your craft, you care
about XYZ. So let me just show you what it
means when you care about this thing, what this tool can do for
you, right? Which is not like write a lot of
slop code. It's just like, just make
everything better. There's no excuse anymore to not

(33:05):
have stellar documentation, for example.
Great point. And we recently did an episode
on AI for product management. I did this with the product
manager I had, one buddy Eugene, and he, without knowing
code, started building tools like dashboards and whatnot to
satisfy his day-to-day job. And that's one of those things where,
back in the day, you'd have to see if you could get any
engineering resources and then try to squeeze it in and maybe get

(33:25):
lucky, get like a half-day Friday or something.
But the ability for people who haven't been spending the years
learning how to code to just like spin up something that's
practical is super appealing. And then taking the devs and
being like, OK, now you can output, you know, whatever X
more than you used to. Just keep an eye on it and make
sure you direct it properly. It just feels like everywhere is
amplifying kind of simultaneously.

(33:46):
Yeah, one, one thing and, and which is really interesting and
the the way I approach usually like where can AI be useful is
like I, I think about the humans, right?
Because ultimately that's all there is, right?
Like we've had machines forever and they're supposed to like
compute stuff for us and you anyway, right.
So, so if you, if you take humans and model like kind of

(34:10):
the human communication flow inside a team, for example, or
in the context of building a software product, then it
becomes very apparent where an LLM can help a lot, which is at
these communication boundaries, and that's what people care about.
So if you, if you think back at the ancient times of doing, say

(34:31):
product design, right, where you would be like in the best of
cases, which is really the best of cases, you would be in a room
with a customer and a product designer and a software
designer, for example. And just like whiteboard stuff,
right? Like that was like the best you
could do. Now you can actually, the
whiteboard is actually going to turn into real code that people

(34:53):
can try. It doesn't mean it works, but I
can. Like before I couldn't even
write like one line of CSS because it would just like pull
me out of the meeting and then be completely useless.
But now I can say like, make me a dashboard to edit like YouTube
videos and something will come out and be like, is this good?
Like you want a button on the right?
And so during the meeting, you have this like so much richer

(35:18):
information exchange because people can try out what you're
building. It doesn't mean I'm going to
keep the code. Maybe I'll just keep screenshots
of it, or maybe I'll keep the code and say like, yeah, this is
a good starting point. But and so in the in the product
design workflow, just like this would have, you know, making
three different versions of a dashboard would have been

(35:38):
completely unheard of before. It'd be like, oh, maybe the
first one that happens to work like semi remotely.
That's the one we're going to use for the next 10 years.
But now in 20 minutes I can have, like, 16 dashboards, right?
And we can choose the best. And it's already half written,
which means it'll probably take me like an hour to push
into production afterwards. And it's like like, is this 100

(36:00):
X? Is this a 10X?
What does it even mean? Right?
It's like, it's very hard to clock what
impact that has. And along the same vein too, you
can say, hey, generate a static data set that we can populate
the dashboard with. Make sure you include only edge
cases, like throw in letters and words and booleans and just
random data types to see what will happen.

(36:20):
And they'll sort of be like, oh, that looks atrocious.
Let's make sure to change that. And then you just got all this
QA done with a prompt. So, yeah, this may be a good
point also to show, like, a thing that I've
been using lately to do that kind of work.
So do you have any idea of like something we could actually,
well, I mean, editing a YouTube video, right?
Would be, would be pretty cool. So one thing I've started doing

(36:43):
is I put my window always where it's bugging me.
So I'm going to use O3 and I'm going to say like research UX
paradigms for a YouTube video editor and make a list of 10
little designs, right? So I'll have O3 do some kind of
research out. Obviously, if I, if I was doing

(37:06):
real stuff, I probably add a little bit more details, but
sometimes I'll really keep it at, like, 4 words.
So I'm just like, look, I don't know what video editor research
has been done. What is out there.
Just like surprise me with your probabilistic output.
And then in the meantime, because I really like sonnet for

(37:26):
that kind of exploration, I'll just go to claude.ai, Claude 3.7
Sonnet, right? I do have a couple of system
prompts where I tell it to not use, like, lucide-react because
that's usually the reason why it always fails.
It's like, I added, like, a YouTube icon.
It's like, but there is no YouTube icon.
Too bad you waited 10 minutes for that.

(37:48):
But what this allows me to do is then I can take these 10 ideas
from O3 into 10 different claude.ai chats and build 10 different
prototypes, right? And then maybe redo each of them
three times as well. And so now I have 30.
Pretty impressive, right? Like, let me just try that React

(38:12):
app for YouTube editor. And my prompting is horrible,
but that's enough. I know that this is enough.
And so we'll start writing this stuff.
It will take like forever, but in the meantime, we can continue
discussing. We can do like a podcast.
We can discuss more ideas about YouTube editing, which is like

(38:33):
YouTube editor for viral shorts about mermaids.
I don't know and I'll do that, but I like, I don't really need
to be. This is not techno technical
brain, right? It's just like I'll either like
continue chatting on Discord or whatever, but it will come back

(38:55):
with these kinds of things, which I knew this one wouldn't
be very interesting, but it's still like pretty fucking
amazing that this is what I get for like 15 seconds of
inference. And we could start discussing
saying like, no, I think I want to have like a drag and drop or
maybe I should organize them like this.
I don't like the icons here. Most of this stuff always works,

(39:16):
which is always kind of kind of wild.
But that's why this thing here is like is like pretty
interesting, right? Because I don't even look at the
results. I'm just like, yeah, probably
some interesting stuff here. And then I'll be like build
number one. And that really I've been doing

(39:37):
that for indeed actually processing transcripts and the
results have been like, have been like pretty amazing, right?
It's like this. I, I keep them as front end only
apps with like local storage and export to JSON or Markdown so

(39:58):
that I don't even need to build a back end.
And you can do like some, some really, some really crazy stuff
here, which is, I don't know, like sonnet is really is really
impressive for that kind of stuff.
Swipe down to hide timeline or expand.
It seems to work even. So yeah, this is a workflow that

(40:18):
I've been using that's like pretty impressive to do this or
it did a research, it didn't like write one, but.
Still cool. Yeah, All right.
Really appreciate showing the workflow.
Yeah, yeah, yeah. Well, like you said, it's just,
it takes such little effort to just fire off some background
jobs, get an actual artifact that you can interact with, and

(40:39):
then all of a sudden you're you're 5 steps ahead of where
you were before. And then you can kind of start
getting into it, making tweaks and and polishing it.
Yeah, it's... I think it just, like, allows you to refine
your thinking a lot. Like, I've always been like a
pretty like. I don't know if I, if I can call
it creative person, but but I've, I've been doing a lot of

(41:01):
music and like a lot of drawing and stuff.
And so there's a, there's something that's really hard to
get when you are doing code is that it turns on a part of your
brain that just makes it hard to vibe, right?
It's like, once you start thinking about like, is it
string dot split or is it like split string?
Like you don't have ideas anymore.
You like focus on this like ultimately pretty idiotic thing,

(41:22):
which is like, who cares if it's like string split or split
string? But it hinders having like ideas
such as what do I even want out of a YouTube editor, right?
It's like suddenly you're completely focused on like
splitting your string that you don't, that you don't even
realize that, oh, maybe I actually don't even want an
editor. I do actually want to have just

(41:44):
something to sort my transcripts.
And that's ideas you get while staying in this vibe mode,
right? So I think the name Vibe Coding
is, is really apt and should be embraced even by
senior engineers because it's a mode of building software that
we rarely had before. We like often had writing on

(42:05):
napkins or like doing architectural diagrams on the
whiteboard. But but it didn't go further
than that. But now we can actually write
code. I can be like in my code base
and it's like, can you refactor this thing to be like a facade?
And I look at the output, I'm like, Nah, rewind.
I'm just like, OK, can you make it this design pattern?
Can you like move this code out into a separate package?

(42:28):
And it doesn't need to run right.
It doesn't need to be like perfect.
It's just, oh, it allows me to get a sense of what that will
look like. And if I like it, then I can put
in the effort of like either prompting it correctly or if the
model doesn't fully get it to, you know, to do, to do like the
actual work, which can still be pretty detail oriented still.

(42:50):
But yeah, vibe coding is great. Like I think it's a it's a thing
to really be embraced as a mode of building software, which is
like, I don't care what comes out of the AI.
I'm just going to press accept until I get a better sense of
what I do want to build or it works.
And if it works, yeah. Totally.
So I really like that perspective.
I was very much teetering towards the, I don't want to say

(43:14):
anti vibe coding because I definitely see it has merits for
just like firing, playing quick,but to be able to embrace the
creativity part of your brain and being able to kind of be
look high level, rather than getting into the weeds, definitely
has merit, especially if you're cognizant of what you're doing.
Like if you're saying I want to make like scalable, secure,
production ready code, I think you got to keep your head in the
game a little bit more. But if you're just looking for a

(43:35):
new feature or exploring or experimenting, it has a lot of
merit. That that's why experience comes
in right? Like the more you know the
fundamentals, like even if you do really like hardcore security
work or like low level piping things together, right, you have
a pretty good sense of you have a pretty deep, well developed
intuition of, like, how do things decompose?

(43:56):
Do I need like a kernel driver or do I need like threads?
Or what's the problem with threads?
It's like, oh, maybe mutexes are not the best idea here.
Like should I use this crypto algorithm and not this one?
And that allows you to vibe at a totally different level where
you're like, oh, build like a zero knowledge architecture for
XYZ. And then you see, interesting.
It uses this thing, right? It's like you just apply your

(44:18):
knowledge. Doesn't mean you accept this to
be the end-all be-all. Because I've been, having been an embedded
engineer, it's not the most complicated code you can write,
but you need to be cognizant of, like, memory
allocation, memory usage. If you have, like, 2 kilobytes,
you can't... but you could still vibe, right?

(44:40):
Like it's, it is able. Like if I see static const char
512 and I'm like, OK, this is 512 bytes I have, I don't need to
switch my brain to detail mode to know like, yeah, this looks
decent, right? Like let's let's move on.
And then if it actually also works, I'll be like, OK, maybe I
should look like even more closely or, but at some point

(45:02):
I'll be like, my experience allows me to tell, Oh yeah, this
is good, right? At a glance, I see like, oh,
there's no like global variable. Cool.
Like there's, there's no global variable.
I don't need to think about it, right?
Whereas if you're a beginner and you don't know that, then you
obviously have no idea if something's good or not.
You could have a global variable in there with a mutex around it

(45:23):
and be like, I don't know, let's move on.
But it will cost you really badly, right?
Yep, there is one thing that we talked about before
that I want to make sure we get to, because I thought it was really
interesting when you mentioned to me
privately that you might have been building regular tools, but then
you expose them via MCP, so then the AI gets to take advantage of
them. Could you kind of walk through
that process a bit, how you can take a regular classic tool or a

(45:45):
script and then enable it to be used by an LLM?
I mean, so I must admit I should use MCPs more.
I don't really, but I built an MCP server which allows me to
pretty quickly wrap shell scripts.
Nice. A lot of the value of MCP is
like mostly just like local tools for yourself.

(46:08):
Because if you're building something as like for production
or whatever, then adding an MCP protocol in the middle kind of
doesn't make sense. So most of this MCP is like, Oh
well, for this session, I really want to use XYZ or I want to
give this agent the ability to like read my e-mail or like, you
know, to read my development database.
And because they don't need any schema, right?

(46:31):
Like it's just string in, string out.
What I do is just like allow shell scripts to be wrapped or
ultimately to do prompt engineering on shell scripts,
right? And then in the shell script,
they're just like hardcoded tokens.
I just do like the most horrible stuff.
But that way you can turn the ugliest, like, grep, sed, you
know, pipe it into the GitHub CLI, get it back and like send

(46:54):
an e-mail with like the mail command line tool.
You can make that into a tool that's called like email me the
latest GitHub pull request or something like that.
And once you give an LLM the tool send me the latest GitHub
pull request, then suddenly it's able to do all this like
crazy work with it, even though it was like the ugliest shell

(47:15):
script that you've ever done. So, and yeah, I think this is,
this is the kind of MCP tools I've been doing, which is like,
oh, for this session, I really want to reuse this like shell
script that I just created to, I don't know, grep for patterns in
the log files and then looking up the IDs in the database,

(47:36):
something like that. And but then for this session,
I'll have this tool enabled and I'll be able to do a lot of cool
stuff. And then I'll throw the shell
script away and be like, yeah, that's it's done.
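A sketch of the kind of throwaway script being wrapped here; the log format, paths, pattern, and table are all hypothetical, and an MCP wrapper would expose just the script name plus a one-line description as the tool schema:

```shell
#!/bin/sh
# grep-logs-and-lookup.sh -- the kind of hardcoded one-off an MCP
# wrapper can expose to an LLM as a named tool. The log format, the
# paths, and the table are all invented for illustration.
set -eu

LOG=/tmp/tool.log
DB=/tmp/tool.db

# Sample data so the sketch runs standalone.
printf 'request id=1 ok\nrequest id=2 FAILED\n' > "$LOG"
sqlite3 "$DB" "DROP TABLE IF EXISTS requests;
CREATE TABLE requests (id INTEGER, user TEXT);
INSERT INTO requests VALUES (1, 'alice'), (2, 'bob');"

# The actual "tool": grep for the failure pattern, pull out the IDs,
# and look them up in the database.
grep 'FAILED' "$LOG" \
  | sed 's/.*id=\([0-9]*\).*/\1/' \
  | while read -r id; do
      sqlite3 "$DB" "SELECT user FROM requests WHERE id = $id;"
    done > /tmp/failed_users.txt

cat /tmp/failed_users.txt
```

String in, string out: the LLM only ever sees the script's name, its description, and whatever text it prints, which is why even an ugly one-off like this works fine as a session-scoped tool.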
Nice. Very cool.
Yeah. I love being able to expose it.
I'm similar, I haven't been using it as much as
maybe I should be, but starting to explore it.
There's one service called ToolHive and it kind of has a

(48:00):
bigger focus on security for it. So I, I've been playing around
with it and one thing I've added to my Cursor is the ability
to connect to my GitHub. So if you work on the coding,
you're like, hey, can you please just, like, check the issue, make
sure you get the requirements that I need to fulfill.
And then I don't even have to like leave cursor to go check
GitHub and then just, you know, refresh my memory.
I just have the agent do it right there.
Yeah. So it's going.

(48:20):
To be able to be like, close 28, please.
And you're like, OK, well, like tell me the last tickets I had.
But I, I, I can actually quickly share my screen and show you
what my GitHub MCP tool looks like.
Because that's, that's actually a pretty good example, right?
Like doing GitHub authentication and something that's like a real
enterprise tool is like kind of annoying and right, because you

(48:43):
have like your OAuth dance and then it has to be like for the
MCP session and all of that here.
However, in examples, I think I do have GitHub and so say list
GitHub issues. So this is the prompt
engineering, right? Like this is the schema
definition where it says like this is how you should use the
tool. And these are like the flags
that you have. And then at the end, it's like

(49:06):
this is even uglier than it needs to be.
Then at the end, it's just like,right.
It takes the command of like GitHub issue list like literally
the command line tool. I even hard coded the repo name
in here. Yeah, but that's enough, right?
Because like the output of the CLI tool is actually human
readable. So the better the human
readability is, the better for the LLM.

(49:28):
Often giving it JSON is actually counterproductive.
So that that's good enough like these.
And this is generated, right? Like I just say like please.
And it's it's garbage because shell scripts have all these
escaping needs. So I have to do something better
here, but that's literally how I built my GitHub

(49:49):
MCP, it's like just having it call like gh issue comment, and in
way you can like bypass the whole MCP and just tell like
Cursor, like, look, when you want to get the issue list, just call
gh issue list. So that was a hack that I
was using before, but yeah, I was playing with this in case I

(50:11):
want to do something a little bit more complicated.
But you can bypass all of this and just say, look, I created a
shell script that's called get-last-database-entries.sh.
Please call it when you want to get the last database entries.
Just like... Nice and easy.
How do you use Cursor rules or anything like
that to kind of guide it towards those things, or?

(50:31):
I tried and it felt very flaky and buggy.
So I guess that was, I was probably just unlucky and I
tried it the week where it was actually flaky and buggy.
I was like, oh, it doesn't seem to really catch and do that.
So maybe I'll wait. I'll just like reference the
file that has the rules that I need to use.
So I use it very... I have stuff in my settings which just says like

(50:56):
the 4-5 different frameworks that I use when I write and go.
I think that's like a thing I have in Cursor rules, just like
always use errgroup when you want to do goroutines and
always use this thing when you want to do X.
And that that's I know people are using them a lot.
And this morning I was like, I should really look back into
this, but there's this, like, bigger thing, and it may be one of the

(51:21):
tools that was the most useful for me
for a long time before I switched to Cursor.
Ultimately what cursor rules areand what all these like
different mechanisms of doing things.
They're basically ways to dynamically expand your context
with information that's relevant, right?

(51:41):
And it could be like a cursor rule.
It could be like an agent searchthat does rag.
Ultimately they're all doing the same.
It's like I want fresh information to put into my
context for the task that I'm doing right now, and maybe the
Cursor rule applies because I am working on a file in this
directory, or I'm working on a Go file, but there's nothing

(52:02):
super magic to it. Then, for example, if you think
about linting rules, anything that gets injected into your
context is fair game. Like there's no difference
between injecting a linting rule or injecting a Cursor rule
besides the prompting that you want to do around it.
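A minimal version of that injection, using `sh -n` (a syntax check) as a stand-in for a real linter; the paths and the prompt wording are invented:

```shell
#!/bin/sh
# Linters as dynamic context: run a checker, capture its precise,
# line-numbered diagnostics, and inject them into the prompt context.
# `sh -n` stands in for a real linter here; paths are invented.

# A deliberately broken script (missing `fi`).
cat > /tmp/broken.sh <<'EOF'
if [ -f /tmp/x ]; then
  echo hi
EOF

# Collect the diagnostics instead of letting them scroll by.
sh -n /tmp/broken.sh 2> /tmp/diagnostics.txt || true

# Prepend them to the request, the same way a Cursor rule would be
# injected -- just fresher and more precise.
{
  echo 'Linter diagnostics for /tmp/broken.sh:'
  cat /tmp/diagnostics.txt
  echo 'Task: fix the file so the diagnostics go away.'
} > /tmp/context.txt
```

The model never knows whether a line of context came from a static rule file or a tool run a second ago; the tool run just tends to be more precise.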
And so you can think of linters, for example, as dynamic Cursor
rules, except they're really, really good because they

(52:24):
literally tell you you shouldn't call this variable this name,
which you can put in a Cursor rule.
It could say like if you name a variable XYZ, you shouldn't do
it. But like a linter is much more
efficient at doing it, which is like this variable on line 56,
you should call it XYZ, but it's the same concept.
It's like putting stuff into your context to help the model

(52:45):
do the right thing. So these static files I find
very limiting because they're static.
So you have to do prompt engineering to make them general
to always apply to the right thing.
But it's hard because it's hard to describe stuff in a generic

(53:07):
manner, right? Where whereas if you have like
type checks that tell you this should be a float, not an int.
That's pretty, that's pretty precise, right?
And so one of the tools that I that I'm starting to use again
is it was called Prompto. And so back in the copy paste
days, right, like where you where you didn't have an editor

(53:30):
that would automatically send, you know, the file names and the
functions and the linting things.
Let me see, this Prompto thing was kind of the idea.
Well, what if I have shell scripts in my repo that allow me
to quickly gather context for something, right?
So if I go say to my MCP repository and I want to say

(53:56):
quickly get the definitions. So I have a tool that uses tree
sitter to get context. All of that stuff was really,
really important back when you had like 4000 tokens that you
could... you really had to micromanage your context window.
So I couldn't. But, however, like, going through
a file and grepping for func was kind of hardcore.
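That manual gathering was often just a grep; a toy version, with an invented Go file and grep standing in for what tree-sitter does properly, looks like:

```shell
#!/bin/sh
# Micromanaging a small context window: instead of pasting a whole
# file, grep out just the function definitions. The Go file here is
# invented; tree-sitter would parse this properly, grep is the
# quick-and-dirty version.
set -eu

cat > /tmp/example.go <<'EOF'
package main

func Add(a, b int) int { return a + b }

func main() {}
EOF

# Pull only the definition lines, with line numbers, into the context.
grep -n '^func ' /tmp/example.go > /tmp/defs.txt
cat /tmp/defs.txt
```

With a 4000-token budget, sending two definition lines instead of the whole file was the difference between fitting the task in the window or not.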

(54:19):
So I had this tool that was just like, get, for example, the
definitions, right? And I built this tool called
Prompto to quickly be able to access those.
So say I was working on the Flipper Zero firmware and one of
them was like, say, getting the hardware abstraction layer

(54:40):
headers because I often wanted to like get these headers,
right? And so if I, if I show you what
this hardware abstraction layers thing is, it's like literally
just cat-ing 3 header files, right?
It's the, it's... But if I call it and say, like,
get Flipper HAL headers, then I... oh, well, I shouldn't typo it,

(55:04):
then it will actually give me that immediately so that I can
just copy paste it into ChatGPT,right?
And now I have it, it tells me it's like 50,000 tokens and I
can send it to O3 and just say like explain.
And so this tool, I think if you know Repo Prompt, for example, I

(55:25):
think it's something that's going into this direction.
And if you think about MCPs, for example, there's a lot of
MCPs that give you context about a certain library, right,
that tell you how to use certain components or so. They're like
basically dynamic context expansion things.
And so in a way, this is like my previous way of doing MCPs, is

(55:47):
that I used to have shell scripts that are called like get me the
top database things, right? It's like, let me see if I still
have some of those, but let me show you the cool thing about
this, which is that I have a web front end for it, which was
like, you know, a Claude, a Claude away.

(56:08):
It was like, I want to have a nice interface for this.
So I just like took the code, pasted it into Claude and said, like, I
want a nice interface for this and that's all you got to do.
So this allows me to like quickly access documentation
about my framework. And this is the killer thing, is
that these things are always up to date because they're getting
grepped out of the code base, right?

(56:30):
So it's like a dynamic document that I don't need to be too
concerned about editing. And then I have all this like
nice little thing here, this like this is all stuff that
Claude added and was like, yeah, sure, I'll take it.
You know, I was like, oh, can you have like a favorite list
here that's stored in local storage and like, yeah, sure,
here, here you go. So all these like this would

(56:53):
have taken me a day before probably to build, right?
Like, or it would have taken me like 2-3 hours and then I would
have been really tired and I wouldn't have programmed the
next day. Now it's like, OK, guess, I
guess I'll roll with it. So, so this I think, I think
we're starting to see it right? Because I did this thing where I

(57:15):
can configure like a list of repositories in which this tool
is going to look for context things, which means every repo,
I just have a repository, I have a directory for these dynamic
prompts. And then everybody
who clones this repository and uses my tool has them available.

(57:35):
And I think that's more and morewhere we're going to go.
And you know, I should have blogged about this a long time
ago to become like a luminary ofall of this.
It's so easy to do. It's like literally a shell
script, right? It's a shell script plus 100
line go file that just looks over all these directories and

(57:56):
just like calls the shell script.
It's, it's pretty ridiculous how, like, simple the idea is.
Having the idea is the hard part, right?
Yep. And that's why if people are
domain experts or they work in things that aren't traditional
software, they have ideas like they they have the exposure,
they have the experience, and now they have the ability to

(58:17):
just kind of go and turn into reality.
So I don't, I don't know if you've seen that, but I found
for a long time it's changing a little bit, but programmers are
not really good prompters and they're not really good at
writing software with LLMs. While people who know nothing
about softwares are able to prompt themselves really far
because they'll basically just type like make me an app to like

(58:40):
sort my recipes and that will work pretty well, right?
Like that will cause the models to do some pretty good stuff.
And then, but if you, if if you're a programmer, you go in,
it's like, make me a React application using Prisma DB
version 35, using like this pattern, like the models not
going to, it's just going to produce something that doesn't

(59:00):
work. And that's something I've
consistently seen where developers will super quickly go
into the details or like go right there and not actually
care about the idea, right. So suddenly the little thing of
like, build me an app to make recipes is like 100th of the
prompt. But actually you want it to be

(59:21):
like front and center and be like if the model, you know, and
if these tools... and we're getting, like, scarily close now where
that's all you need to say, right?
Like that's what it means to have a model write
software for you: it's like I type in make me an app to edit YouTube
videos and out comes an app to edit YouTube videos.
And because I haven't specified more, it has like a super wide

(59:44):
gamut of things. But if I then realized like, oh,
I actually want to edit YouTube videos with my voice or
something like that, then it allbecomes about ideas and
understanding yourself and your goal.
And I think as developers, we'velost that quite a bit and we
have to, we have to reconnect with it, right?

(01:00:05):
Exactly. Yeah, think more about the goal,
like the problem we're solving rather than, like, the
minutiae. There's one thing I want to
cover before we run out of time. You first came on our radar with AI
in Action on the Latent Space Discord. Could you tell the audience
about that a bit? Who
might be the type of person who might want to show up to that?
What type of things do you discuss? So Latent Space is the swyx

(01:00:26):
Discord and swyx has a podcast,
it's called Latent Space as well, which is, which is amazing.
And on this, on this Discord, for a long time, there was like
an, a club called LLM Paper Club where people would read like LLM
papers and discuss them, which was which was pretty heady,
right? Like it's too heady for me.
But, but, and so Kevin Cable, who I think started the LLM

(01:00:51):
paper club, said, Oh, I want something that's a little bit
more concrete and created what's called the AI in Action Club.
So every Friday at 4:00 PM EST, I think 1:00 PM PST, we meet.
It's pretty scruffy, like whoever wants to present can
present, can sign up to present.So and people show what they use

(01:01:12):
AI in action for, basically. So like Flow has shown all his
music stuff. I'm usually kind of the backup
person, showing whatever I've been working on.
We've had people show the models that they train to do
OCR. We've had all kinds
of people present, basically, what they're working on.

(01:01:35):
It's always really fun.
I think it's just very grassroots, which I really
enjoy.
So, yeah, it's a great community, because it's just
people who build stuff and want to show it off, and
don't necessarily care about
monetizing any of it. Like that's never part of the

(01:01:55):
conversation. It's more like, look what I
built, this is cool.
Yeah, one time Yikes tagged me and was like, hey,
will you present on it? And I gave one. Super welcoming
community, everyone's supportive there.
It's just the desire to learn and build and
show off, and just be like, this is what you can do in this
day. So I think it's a great club to
be a part of. Yeah, it's my favorite
community, right? Because I've joined
(01:02:16):
Twitter recently, or like went back to Twitter, and it's
exhausting. It's just not
very pleasant overall, and it's filled with
this like drive, drive, drive. And here it's more like, oh,
look, we're having fun, right?
Well, this was awesome. I think we could talk for hours
more, but we've got to call it at some point. Before I let you go, is
there anything you want the audience to know?
No, like, explore the stuff, try out all the

(01:02:39):
cool things you want to try. Like, remember that talking about
something takes exactly the same amount of time as typing
it into ChatGPT, and you can do both at the same time.
And that's how I get most of my stuff done these
days. Excellent.
All right, buddy, I'll talk to you soon.