
July 29, 2025 58 mins

Unlock the future of AI tools, context engineering, and practical AI development in this special Episode 50 of Tool Use! We celebrate one year of weekly conversations by bringing back two dozen of the brightest minds in the AI space to share their most valuable lessons and predictions. Dive deep into the evolution from prompt engineering to context engineering, and learn how to master large language models (LLMs) and AI agents for maximum productivity and impact.


In this can't-miss episode, we explore the biggest takeaways from a revolutionary year in AI. Discover the power of reasoning models, the rapid advancement of local models, and the rise of multi-agent systems. Our expert guests, including entrepreneurs, researchers, and builders, discuss the critical importance of mindset, staying curious, and the human element in the age of AI. Learn practical strategies for building with AI, from owning your control flow and creating effective feedback loops to leveraging "vibe coding" and knowing when to use deterministic code versus fuzzy LLM functions.


Explore the latest in AI tooling and workflows, from using WhatsApp as a universal interface to the best practices for evals, tracing, and context management. Hear predictions on the future of AI, including the path to zero-cost intelligence, the rise of home robotics, and the solution to model orchestration. Whether you're a developer, entrepreneur, or just passionate about leveraging AI, this episode is packed with insights to help you stay ahead in this rapidly accelerating industry.


Join us as we discuss how to build for the future, augment human capabilities, and democratize the benefits of AI. Find out why the top 1% of builders approach AI differently and how you can apply their principles to your own projects.


Thank you for an incredible year! To support the show and help us bring on more amazing guests, please like, share, and subscribe.


12 Factor Agents - https://github.com/humanlayer/12-factor-agents


00:00:00 Intro

00:01:05 Ty Fiero - Model Advances

00:04:21 Context Engineering

00:05:49 Ryan Carson - Obsess about context

00:06:23 Richard Abrich - Garbage in, garbage out

00:08:23 Michael Tiffany - More Metaprompt

00:09:13 Mindset

00:12:04 Dexter Horthy - Keep doing the hardest thing

00:15:22 Diamond Bishop - Change is the only constant

00:17:53 Dillion Verma - Build around AI

00:19:30 Freddie Vargus - Don't believe the magic

00:20:49 Wolfram Ravenwolf - Curiosity, motivation, and skepticism

00:22:13 Jake Koenig - Most things you learned are obsolete

00:23:19 Sarah Allali - Know the outcome you're expecting

00:24:29 Practical Development and Tooling

00:27:12 Francisco Ingham - WhatsApp and leveraging humans with LLMs

00:30:27 Minki Jung - Standardized evals, tracing, context management

00:32:16 Hai Nghiem - Don't be too smart

00:35:02 Jason McGhee - Three camps of AI tool use

00:38:38 Olga Beregovaya - Right model for the job

00:40:01 Orlando Kalossakas - Prompting is vital

00:40:48 The Human Element

00:43:20 CJ Pais - AI to empower creatives

00:45:05 Kirk Marple - Play with vibe coding

00:47:17 David Cross - Think about the people first

00:48:13 Adam Cohen Hillel - Humans are the bottleneck

00:49:22 Aaron Wong-Ellis - Talk to your customers

00:50:34 Ian Pilon - What is agency?

00:51:56 Future Predictions

00:54:41 Thank you


Please follow our incredible guests:

https://x.com/FieroTy

https://x.com/ryancarson

https://x.com/abrichr

https://x.com/kubla

https://x.com/dexhorthy

https://x.com/diamondbishop

https://x.com/dillionverma

https://x.com/freddie_v4

https://x.com/WolframRvnwlf

https://x.com/ja3k_

https://x.com/SarahAllali7

https://x.com/fpingham

https://x.com/JungMinki7

https://x.com/haithehuman

https://x.com/_jason_today

https://x.com/sunglassesface

https://x.com/cj_pais

https://x.com/KirkMarple

https://x.com/adamcohenhillel

https://x.com/aaronwongellis

https://x.com/IanTimotheos


(and me too)

https://x.com/ToolUseAI

https://x.com/MikeBirdTech


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome to episode 50 of Tool Use.
This is an incredibly special episode.
It marks one full year of weekly conversations about the latest
AI tools with some amazing entrepreneurs,
researchers, and builders, and we're extremely lucky that two dozen
of the top minds in the AI space have decided to come
back and share their top piece of advice that they've learned
over the last year. We're diving into context,

(00:20):
engineering, mindset, practical tooling, and the human element.
It's going to be the most informative episode of Tool Use
yet and is an absolute can't miss for anyone looking to
leverage AI tools to become more productive, more efficient, and
to amplify your impact. I'm thrilled to be joined by Ty
Fiero. Ty, great to see you.
I'm so happy to be back, Mike. Thanks for having me.
This is a very special episode. I can't believe it's been a year

(00:42):
already. Great to talk to you again, and
man, just going to dive right into it.
The past year has been absolutely crazy in the AI
space. What are some takeaways?
Oh man, it really has been crazy. It's crazy to think about like
where we were at a year ago. It's so like I was thinking
about today, I saw like 2023 as the date of a paper and I was
like, oh, that wasn't too long ago.
But then you think about it, it's like, oh, that's actually

(01:03):
kind of a long time ago in the grand scheme of things.
I mean, that was before reasoning models.
We were before o1 at that point. And that, to me, I think
has been the biggest story of the last year.
I think reasoning models took AIfrom where it was to where it is
now. And I, I take o3 for granted
now, but I mean, we were dealing with just GPT-4o back then.

(01:25):
And now we have these incredible reasoning models that are
winning math Olympiads and all sorts of crazy accomplishments.
And so I'm, I'm hyped on reasoning agents.
Obviously they're going to keep going.
I think another huge thing from the last year too is also just
like local models and how far they've come.
Just for fun. I just played with a, I had a
llama model installed from like I think a year and a half ago

(01:48):
whenever Mixtral, the mixture of experts model, was released, which I
think was over a year ago. And I tried it out and it was,
it was just cute. I just smiled at it.
I was like, oh, that's fun. But like, considering how far
things have gone now with Qwen and some of these other models,
Kimi, I'm just blown away with how far local models have
come and I, they progressed much, much, much faster than I

(02:10):
could have predicted. And I'm super excited to see
where they go next because the cost of intelligence really is
going to 0 and I'm so here for it.
Even just earlier today Freddie from Quotient AI shared out they
have this 500 billion parameter model, or sorry, 500 million, half
a billion, and it's specialized in just observing agent tool
calls and the success rate with that.

(02:30):
So we're starting to get these very small hyper specialized
models. And if you look back, I was
saying from the beginning, if wecan get a well orchestrated
fleet of highly specialized models, we're going to get AGI
distributed. So having these small little
bits of functionality, being able to be finely tuned into a
specific model, if we can just call it when it's needed, we
don't necessarily need these massive o3-esque, God-tier models

(02:51):
taking care of all the work. If we can just kind of delegate
to smaller models. Give your model a little pet
model to handle little tasks, and all of a sudden you make
things more energy efficient, you make it more cost efficient,
and it's just going to distribute the capabilities to a
lot more people. What's also cool about that too
is you're like, yeah, we don't need to use o3 for everything.
We can use specialized small models.
But what's really cool is we can use these, these bigger models

(03:11):
use smaller models when they want to, like with Claude Code
doing subagents, you know, it sends out little Haiku agents to
go do its thing. I think that's remarkable.
And so like multi agent systems this time last year were just
kind of a thought, you know, I mean, like there was there was
frameworks around and whatever else, but they weren't really
hitting the mainstream. But now you have Claude Code,
which is taking the world by storm, actively using multi-agent

(03:33):
architectures. And it's just it's crazy to see
all of that pan out. You actually brought up
something that a few guests have mentioned, and it's the
frameworks, where a lot of the previous era of multi-agent
frameworks were really just context management, deciding
which model to send it to, sure, but the rest is just what
context is pulled in. And one trend which gets
addressed a lot in these conversations is how prompt

(03:56):
engineering has evolved to context engineering.
Because prompt engineering isn't the right mindset, because you're
not just thinking about the prompt, you're thinking about
everything the model has. And controlling the context is
important for every stage of the interaction.
So it's not just, you know, what's your super prompt to get
things kicked off and started, but for every follow up message
in a conversation, if, if that's the workflow, you've got to make
sure that you manage the context there.

(04:16):
So a lot of our guests actually talk about context management,
sorry, context engineering. So we can dive into that.
But I'd love your thoughts on context engineering before we
do. I kind of always feel a little
bit grateful for tool use in that.
The first time I heard context engineering as an actual term
was right before Dex released 12-factor agents.
You know, we were recording with him, I think, the week

(04:36):
before he released 12 factor agents and really dove into
context engineering. And for me, I don't think
there's a better explanation of context engineering than 12
factor agents. 12 factor agents for me has been the biggest
change in the way that I build AI applications in the last
several months. I saw him at the AI Engineer
World's Fair. He gave a great talk, which is

(04:58):
on YouTube about 12-factor agents and owning your context
window. Know what's going into your
context window. I mean, like, results from tool
calls and things like that. Can you just take out the full
messy tool call result and replace it with just a simple
line in your system prompt? Man, that saves you a lot of
tokens and makes it a lot less messy.
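To make that idea concrete, here is a minimal sketch (not from the episode; the function names and message format are illustrative): the raw tool payload stays out of the context window, and only a compact summary line goes back to the model.

def summarize_tool_result(tool_name: str, result: dict, max_chars: int = 200) -> str:
    """Collapse a raw tool result into a short line for the context window."""
    status = "ok" if not result.get("error") else f"error: {result['error']}"
    preview = str(result.get("data", ""))[:max_chars]
    return f"[{tool_name}] {status}; result preview: {preview}"

def append_tool_result(messages: list[dict], tool_name: str, result: dict) -> None:
    """Append the compacted summary, not the raw payload, to the message history."""
    messages.append({"role": "system", "content": summarize_tool_result(tool_name, result)})

# Example: a huge search payload becomes one short line in the prompt.
messages: list[dict] = []
append_tool_result(messages, "search_files", {"data": ["a.md", "b.md", "c.md"], "error": None})
print(messages[-1]["content"])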
And I think context engineering, I'm glad it gets the recognition

(05:20):
it deserves because prompt engineering always felt like
this kind of flaky thing. And you'd have some people
saying, well, prompt engineeringwon't matter in a couple years
or whatever else. But context engineering I think
will always matter. What you're putting into these
models will always matter. So I think it's just such a
profound change from prompt engineering to context
engineering. I'm just so happy that it's so
talked about now. We'll link 12 Factor Agents down

(05:43):
below because it is a phenomenal read for everyone.
If you're watching this content, you've got to check it out, but
let's go see what everyone else has to say about it.
So the thing that I've learned is that you have to obsess about
context. So call yourself a context
engineer, call yourself a context obsessive, whatever it
is, context really, really matters.

(06:03):
So take time to select the right files that you want your AI to
know about. Take time to talk to the AI to
tell it all the context. Just take time
to give your AI the right context, and it
will really unlock a lot more for you.
So that's my that's my lesson I've learned.
I think a lot of it boils down to garbage in, garbage out. So

(06:26):
like I think these models are a useful way to think about them
is as calculators for words. You know there's a lot of talk
about like AGI and that stuff, and you know, I'm, I'm open
to AGI. That's, that's right, like, you know, there's a lot of
people much smarter than me that are working on it, but you know,
I look around and I see what what do we have now?
It's not AGI, right? It's it's useful way to think

(06:48):
about it. I think it's a calculator for
words. And so just like with the
calculator, like if you're giving it the wrong inputs, it's
going to give you the wrong outputs.
And so I think when you're using large language models,
you have to really think about, you know, I think this notion of
prompt engineering has expanded and now people are calling it
context engineering. And that, to me, like, makes a lot of
sense. You basically have to plan in

(07:10):
advance, like, what information do I need to give the model in
order for it to produce what I want?
And you have to think about it in terms of another level of
abstraction, which is that it's not just about the next step.
You have to think about it in a series of steps because the
whole conversation is the context.
And so you need to engineer thatcontext from start to finish.
So I think you have to develop an intuition.

(07:31):
I mean, this is the way I did it, right?
It's like just trial and error to develop an intuition.
And now that I have an intuition, I'm much better at
it. But something I've discovered
recently is that actually models are very good at creating those
prompts for you. So before you start on, on your,
on your context, you can plan out your, what your context is
going to look like with the model first and say, like, Hey,
I want to do this kind of thing. What, how should I approach this

(07:53):
with a large language model? And they're actually really,
really good at sort of wording them for you.
And so in fact, what I've been doing with Windsurf, for example,
is, you know, like for small things, I can just say, Hey, I
want you to do this and it'll doit.
But for much more complex features, what works much better
is first going to ChatGPT and being like, Hey, this is the
feature I want to build here is sort of like the stack that
we're using, like plan, plan this out.

(08:14):
And then once it's planned it out, then I tell, OK, now give
me a prompt for Windsurf. And then I give that prompt to
Windsurf. And then it does a much better
job than if I just prompted it myself.
I think my top advice my my biggest insight is that however
much you are doing metaprompting, you are not
metaprompting enough. Like.
The the Frontier models will help you use the Frontier models

(08:38):
and I'm just. Constantly building recursive
loops of improvements to my system prompt and getting my
whatever my favorite model is the time to help me to manage
its own context window and get like better and better work out

(09:00):
of out of pretty much every model with pretty much every way
of using them. Metaprompt, even more metaprompt
your metaprompting. So on top of just deciding what
goes into the model and how you're actually interacting with
these systems, it's really important that people consider

(09:21):
how they approach AI in general. It's really a different beast
than traditional software development or traditional
computer use. So one of the biggest pieces of
advice that I have for people is just: stay curious.
You have to have that drive to kind of see what else is out
there and what's new. So finding something you're
passionate about and an excitement really helps keep
that curiosity alive because as soon as you get stagnant, you're

(09:42):
going to fall behind. There's never been a faster
accelerating industry in human history.
And this is one where if you can stay curious and stay a forever
learner, you're really going to get ahead.
So that's what I'm encouraging with mindset.
What about you? What do you think a good mindset
or strategy is for people to adopt?
I had two on this and you just took my first one, which I love
because I think it's the most important one.
But I, I curiosity is one word. I think the other word that I

(10:06):
would use for it is to just play.
I love the word play because it takes the, it takes the stress
out of it. You know, it's not like you
shouldn't be going into building AI apps all the time with, you
know, deadlines and stress; like, have fun, play with these
things. These things can do incredible
things. Like I've been playing a little
bit with Runway and just playing with video models.

(10:27):
There's nothing that I'm really using it for, but it's just fun
to play with me. And I just go and you know, kind
of like Jason Neen does he, he absolutely loves making his AI
art. And I've I've kind of been
inspired by that to just go and play with video models and
anyway, but go play if there's something that even looks fun,
just have a good time with it. And yeah, stay curious.

(10:48):
The other piece that I will add, I think my second point,
would be to stay optimistic. I think there's way too much
doomerism in the field, and I think there's, there's reason for
it. I think that's, it's not, you
know, maybe, maybe some doomerism so that you can understand the
risks. There are risks to this
technology, but stay optimistic. I mean, there's that meme of the

(11:10):
two guys on the train, one guy looking out over the valley and
smiling and the other guy like looking to the side and being
all sad. They're on the same train and
they're observing the same experiences, but they're just
taking it in two different ways, and staying optimistic about the
technology and, and seeing where it could play out and seeing how
it could help humanity. It's just a way more fun mindset
to be in. And I think if you stay in that

(11:32):
sort of positive, curious, playful mentality with AI,
you'll have a lot more fun and I think you'll get a lot further.
It's important to build for the future because if you build for
today's tech, by the time you ship, you'll be obsolete.
But bringing in that optimism, build for the future you want.
Like, don't just build for the future that you think's gonna
happen. But we all have the ability to
nudge and shape things just a little bit.

(11:53):
So if there's a tool you want tobuild, think about the
downstream effects, what it can actually lead to, and just build
with the future in mind because we're at a really cool pace.
But you know what? Let's see what everyone else has
to say about mindset. Man, OK, so this is so crazy
because like, I'm gonna, I'm gonna tell you what my advice
would have been like a couple weeks ago and then I'm gonna
tell you what it is now. And it's a little bit better

(12:13):
because you know, with going through our whole thing of
building a startup and building AI tools that like plugged in
and like work directly with all the frameworks, I realized that
everybody that, like, was seriously developing
and building products that were making money and were, like, you
know, being deployed in the enterprise and were reliable
and good. It was like, oh, there's not a

(12:34):
lot of framework usage. Like most of the good founders
I've seen would prefer to hand engineer every single detail of
their pipeline and, and the whole thing.
Everyone's talking about contextengineering now.
And so that was kind of like the, the more meta advice there
is, like there's still a lot of engineering to be done.
Try to ignore the hype as much as you can.

(12:54):
Because the way that, I think Swyx put it this way,
and I really liked it, was like the way the top 1% builds is so
drastically different than the way the next 99% build.
And so everything you hear is probably is probably mostly
hype. And if you're a good software
engineer but you don't know anything about AI, that's OK.

(13:15):
And it might even be, like, good. And so, like, work from first
principles and, like, do software things right: test,
orchestrate, code, deploy, things like that, all still
matters. There's no magic bullet.
Since then I have, you know, actually been using a lot of
these, like, super agentic AIs, the things where I'm like, oh
yeah, this is really hard to get working well.

(13:36):
And like you look at something like Claude Code, I think Claude
Code is built 100% on the principles that are in 12-factor
agents. And like, again, I didn't come
up with those things. That's just like people building
in AI for a long time figuring it out for themselves.
The Devin guys from Cognition, they have a blog post about
context engineering in mid-June that kind of like started this
new wave where you had like Tobi and Andrej Karpathy all saying

(13:57):
like, oh actually, yeah, context engineering is a way better word
than prompt engineering, because it's not just what you tell
the model to do. It's like everything you give
it, but I've actually been writing less agents from scratch
and doing a lot more of just like can we take a coding CLI
and use it in weird ways? Like I dropped it in a folder

(14:17):
full of Markdown files and said, here's everything about my
company, go help us make money. And it's like building little
tools and scripts and pulling Zoom meetings and like pulling
like building, like walking me through the OAuth flow so we can
get my calendar events and, like, storing.
All context is just markdown files on disk.
All the RAG is just agentic RAG against, like, files and folders.
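As a rough illustration of that setup (hypothetical names, not the actual code described in the episode), agentic RAG over files on disk can be as small as one search tool the model calls repeatedly over a folder of markdown:

from pathlib import Path

def search_notes(query: str, root: str = "notes", max_hits: int = 5) -> list[str]:
    """Return short snippets from markdown files whose text contains the query."""
    hits = []
    for path in Path(root).rglob("*.md"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        idx = text.lower().find(query.lower())
        if idx != -1:
            snippet = text[max(0, idx - 80): idx + 80].replace("\n", " ")
            hits.append(f"{path}: ...{snippet}...")
        if len(hits) >= max_hits:
            break
    return hits

# An agent loop would expose search_notes as a tool and let the model decide
# what to look for next; no vector index required.
for hit in search_notes("calendar OAuth"):
    print(hit)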

(14:38):
Not super efficient probably. But like, honestly, the thing
that is like, yeah, just call tools over and over again
doesn't really work that well. Maybe it kind of does now,
depending if you know how to kind of prompt it and guide it,
you can sometimes get that to work really, really well.
And so my new meta advice is like change your mind every
three weeks because this world changes so fast.

(14:58):
And like, yes, there's a lot of hype and learn to ignore it.
But also like just because you have hard won lessons, those may
be irrelevant within a month. And so just constantly keep an
open mind and keep trying to do whatever the hardest thing is.
New stuff will come out and the thing that was hard a month ago
is no longer hard. Go find the new hard thing cuz

(15:19):
that's how you find value and things that are interesting and
fun to work on. So the biggest lesson in the
last year for me, for working in AI tools, is really a lot about
the mindset, the fact that the only constant is change and that
the rate of change for the underlying kind of foundation
model capabilities keeps accelerating.
It's very easy for skeptics to kind of look at a point in time

(15:40):
and say something's not possible.
Yeah, agents writing code, you know, working as part of a team,
etcetera. But if you look back over the
last three, six months, a year or more and look at what most
people said at those kind of points of time of what's
possible for the next few months, most were wrong and
underestimating the change. So the main lesson to me is a
bit of the bitter lesson for AI agents and applications, which

(16:04):
is that we should assume the underlying general models will
keep improving significantly on both the costs and intelligence
axes and build tools and applications that take advantage
of that general improvement. Rather than very specialized
tools that are useful only because the current models
aren't quite capable of something, you know, be it
better reasoning, context window sizes, what have you.

(16:24):
And I think that that's something where sometimes, you
know, new YC startups, new tools, you see kind of sound
very optimistic, but many times they're not that far off.
Because I do think these kind ofoptimists who think about where
things are going and kind of projecting ahead to what's going
to be ready soon really are the ones that are gonna kind of win
in this new intelligence age. So if you're building agents or

(16:46):
you're building tools that you're evaluating, you should
really think a lot about how do you do that evaluation as you
make improvements and really look a lot at where the failures
happening. And if you find that there are
failures happening in a specificpoint where you're doing a bunch
of kind of patchwork to fix, that's probably a point where
you can say really make that call.
And like, do we need to be doing this right now or should we set

(17:07):
it, set this aside as something that we assume will improve over
time and we're gonna keep coming back to this when new models
come out and see if it's getting better.
So if you see a bunch of failures or because of context
windows maybe not quite big enough for your models and
you're like, well, you know, this probably would work if we
had had everything in context. Should I go and do the work to
build this really specialized rag system?

(17:28):
Maybe if it's important to your customers right now.
But you should also be aware that you probably want to retest
this in the future without that system.
You want to kind of do ablation studies in some sense in the
future. You want to be able to turn off
and on these specialized things that you might add just to solve
a particular problem and decide if this is something that
really, as you tear it out, as new models come in, can you keep

(17:50):
your code base simpler? Can you keep what you're doing
more generalized? The biggest lesson for me was to
focus a lot on building around AI so that your product improves
around AI releases and AI models.
So for me, last year I was working on what I call Magic UI,
and it didn't have AI built in to improve, and I noticed that I

(18:14):
was actually exponentially getting worse relative to
other products in a way. So going forward, I think of
AI and LLMs as sort of a generator.
So for refract OMD right now is I built a system around LLM
generation so that every subsequent generation the system

(18:36):
becomes better. It's like a self improving
system. So with someone creates a new
video, what happens is that theycan also like vote and like say
this, the good outputs, the bad output and that's stored.
And what happens with that is that every subsequent but new
generation or new shot, the AI agent that I have looks at the
previous data set, sees which which things were better, which

(18:59):
things were worse, and the system becomes better over time.
And this is what I mean by, like, building a product around the
properties of AI, which are very, very powerful.
Like for me, that's the one property that I can lean on, is
the generator property.
Yeah, and I'll definitely focus a lot on that in particular

(19:19):
because if you're not aligned with this, you will be left
behind because the only people who will be building their
products that get better with every new model released, every
new chat, every, every new generation the.
Biggest lesson I've learned working with a bunch of
different tools, building tools, it's been just to, like, remember

(19:40):
to not believe in the magic and to chip away at it.
Like I think now is like the easiest time to understand how
things work. If you want to understand a code
base, you could take the code base, drop it into an LLM and
just ask like, hey, how does this feature work?
Or how is this feature implemented?
Give me a basic. Rundown implementation of this,

(20:04):
or if you wanna understand, like, you could just basically convert
anything to markdown or text and you can start to build an
understanding for yourself. And then I actually think that
helps build a lot of internal confidence that you can keep
building something great or thatlike maybe the tools aren't as
magical as you think. For me, the things I'm always

(20:26):
thinking about are what would I actually use this for?
And then beyond that, like, how does it relate to what I might
do in the future? And I think if you can kind of
contrast those things and you'rethinking about the present and
you're also thinking about the future, that's how you can kind
of start to bend things in different ways and see like.

(20:47):
Do things really hold up? I used to believe that to
benefit from AI and use it effectively, one had to be
better than the AI, at least in the area the AI was applied to.
I since realized you don't need to be better than the AI to
benefit from it. It depends on how you use it.
Mindset matters, particularly curiosity, motivation, and
healthy skepticism. Given the right chance,

(21:09):
AI can be more than just the tool, it becomes a teacher.
AI can be more like a rather weird, actual coworker than just
a mere tool. Although it's not and shouldn't
be considered a person, its adoption has been more
successful among those using personalized assistants.
Tricky enjoyed interacting with Pika, another colleague,
Prashad, an assistant speaking in her regional dialect, and so

(21:32):
on. When AI is not only useful but
also fun, users are more likely to incorporate it into their
daily lives. Was this a purchase?
And for everyone, the key takeaway is: adapt AI to the user to
make the user adopt AI. The lesson I learned from the
Matrix: never send a human to do a machine's job.
Agent Smith said that, and I think it's right, because similarly you

(21:56):
don't use the language model for tasks that a different program
can handle. So automate deterministically,
use code whenever possible, and reserve AI for situations
where it's truly needed. This approach helps with
consistency, reliability, performance and cost efficiency.
I think the biggest lesson is just things move so fast that

(22:17):
you can't really take any firm lessons.
I don't know. Everything's like changing all
the time. Like it used to be that they
were just kind of good for chatting like you could ask a
question and get an answer and the answer could be pretty good,
but you kind of had to like testit and you couldn't like run
them in a loop. They'd go crazy and like the
agent stuff didn't work at all like even just a year ago, but

(22:39):
now they're, they're pretty good. Like I think like there's,
there's lots of PRs that Med tech makes that we merge without
basically any changes at all. And there's lots of people
providing that kind of experience between, like, Codex
and Claude Code. So like almost any lesson you
learned a year ago is like kind of now obsolete in a, in a way.
And I guess I don't know, my prediction is, is that will

(23:01):
continue and that things will get like even better.
And I don't know, my, my biggestsense of the last couple weeks
and months is like, I am the bottleneck or I'll like have
ideas and I'll fire them off. And and the limiting factor is
like actually my ability to even, like, look at the code; the code
that it generates is, like, not even the bottleneck anymore.
So I think the number one is being aware of the outcome

(23:23):
you're expecting from the tool, whether it's when you're
building an AI tool or when you're using an AI tool.
You can do so many things that technically you can easily end
up, like, in a place where you're starting, like, to prompt and
then you end up with something and then you keep prompting.
And at the end, you don't necessarily neither know how to

(23:44):
stop or what really we're looking for.
So you always have like kind of this dissatisfaction basically.
So give you an example. Right now I'm trying like to
work a little bit on the brand of the company, but I started
like iterating with OpenArt, you know, image generation tools

(24:04):
like that. And at the beginning, I ended up
like using this for 2-3 hours without, like, ending up with
anything constructive. And when I paused at the end, I
was like, OK, but what was I looking for?
You know, because you start being like a proud idea, but if
you don't really understand where you want to go, it's

(24:24):
really hard, you know, to end up with something you're really
satisfied about. I would say just the mindset, I
think it's really important to talk about like the the
practical application, like things get a little theoretical
esoteric, but pulling it back down for today, what are some
things that you think are reallyimportant principles that people
should keep in mind when they'rebuilding these tools or using

(24:46):
these tools? That's a take away that someone
could benefit from, like, right now. I
think the thing that I'd say off the top for vibe coding, or like
building applications, is just to, like, make it as easy as possible
for the AI to actually be in the feedback loop, to try things and
test things. Because there's there's a lot of

(25:09):
work that you can have Claude do, or I mean Claude code and
Claude codes the thing. Now who knows what happens when
the next model comes out or whatever, but give it linting
tools, set up linters in your code base, set up tests set up
like if it can actually run the application and try it and see
if it worked for itself. Like get yourself out of that

(25:31):
loop as much as possible. If you're going through vibe
coding style, just like feedback loops are so important.
And if you can create that feedback loop, that kind of
plays into evals as well too. I think evals are dramatically
important just to see if you're actually doing the thing that
you want to do. What does your definition of good
look like? I absolutely love that.
Just like optimize for the feedback loop.
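A hedged sketch of that loop, with ask_model() standing in for whatever LLM call you actually use: the model proposes a change, the real tests and linters run, and the failures go straight back in as the next round's context.

import subprocess

def run_checks() -> tuple[bool, str]:
    """Run the project's tests and return (passed, combined output)."""
    proc = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr

def ask_model(prompt: str) -> str:
    """Stub: swap in a real LLM call; here it just echoes the request."""
    return f"(model would propose a patch for: {prompt[:60]}...)"

def fix_until_green(task: str, max_rounds: int = 3) -> bool:
    feedback = ""
    for _ in range(max_rounds):
        proposal = ask_model(f"Task: {task}\nLast test output:\n{feedback}")
        print(proposal)            # in a real loop the patch would be applied here
        passed, output = run_checks()
        if passed:
            return True
        feedback = output          # failures become the next round's context
    return False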

(25:52):
I would say I recommend people try to use code as much as
possible where if you can take the LM out of the workflow and
have deterministic code there, it's cheaper, it's faster, it's
more reliable. And have the LLM be that fuzzy
function that's kind of bridging two processes together or taking
an unknown input and making it something that you can work
with. And then also there's just,
there's no lasting best practices yet.

(26:12):
We're still figuring things out. So be agile.
Like don't construct your product or your app in such a
way that it becomes inflexible because you want to adhere to
certain principles. In this day and age, because
everything's changing so quickly, we're still figuring
out how to do this as an entire industry, that the ability to
stay agile and be willing to throw away a bunch of code to

(26:33):
bring in new code is going to help you accelerate.
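To illustrate the "deterministic code first, LLM as a fuzzy function" point from a moment ago, here is a small sketch with hypothetical names (not any specific library's API): everything with a known shape is plain code, and only the genuinely unstructured text goes through a model call, which is stubbed out here.

import json
import re

def extract_order_id(subject: str) -> str | None:
    """Deterministic: order IDs have a fixed format, so a regex is cheaper,
    faster, and more reliable than asking a model."""
    match = re.search(r"ORD-\d{6}", subject)
    return match.group(0) if match else None

def classify_intent_with_llm(body: str) -> dict:
    """Fuzzy bridge: unknown free text in, a small structured dict out.
    Stubbed; in practice this would be one constrained LLM call."""
    return {"intent": "refund_request", "confidence": 0.9, "raw": body[:80]}

def handle_email(subject: str, body: str) -> str:
    order_id = extract_order_id(subject)       # code, not a model
    intent = classify_intent_with_llm(body)    # model only for the fuzzy part
    return json.dumps({"order_id": order_id, **intent})

print(handle_email("Re: ORD-123456 refund", "Hi, the widget arrived broken..."))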
Another point that I want to throw in here, another shout-out
to 12-factor agents, is, like, own your control flow.
I think that's super important and he mentioned it in his talk
too, but it's kind of a request for some of these agent
frameworks to just expose more of the control flow to people.
But man, you can. When you actually build an agent
from first principles, you kind of realize how much those agent

(26:57):
frameworks are holding away from you.
And being able to control your your your control flow gives you
a lot of leverage that one might not have if you're building with
an off the shelf framework. Let's see what everyone else has
to say about practical development tooling.
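Here is a minimal hand-rolled agent loop in the spirit of "own your control flow" (a sketch; call_model() is a stub standing in for whatever API you use): the step budget, the stopping condition, and what goes back into context are all ordinary code you can read and change, rather than something a framework hides from you.

def call_model(messages: list[dict]) -> dict:
    """Stub: a real call would return either a tool request or a final answer."""
    return {"type": "final", "content": "done"}

def run_agent(goal: str, tools: dict, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):                 # you decide the budget
        action = call_model(messages)
        if action["type"] == "final":          # you decide when to stop
            return action["content"]
        result = tools[action["tool"]](**action.get("args", {}))
        messages.append({"role": "tool", "content": str(result)})  # you decide what re-enters context
    return "stopped: step budget exhausted"

tools = {"read_file": lambda path: open(path).read()}
print(run_agent("Summarize README.md", tools))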
Two big trends I've been seeing. One is people using WhatsApp as
a universal chat interface for building products.

(27:19):
I think that is extremely useful for two reasons, basically.
First, it has virtually no limits to what you can build in
WhatsApp. In addition, in terms of agents,
you have also some components in WhatsApp that are very handy
like forms and other stuff. But the big thing is like it's
very easy to build in WhatsApp. Custom agents for a specific use

(27:42):
case or general agents for a general use case.
It depends on what product people want to build, but I'm
seeing a lot of that here in South America.
And the other thing that's very big for WhatsApp, a big plus is
that people have no switching cost or like on boarding cost
for WhatsApp. They don't need to download
anything. They don't need to learn a new

(28:03):
behavior. They use it every day for every
single thing. They will order food through
WhatsApp, talk to your friends, talk to your family, and just
add a new number where they havelike their gym assistant or
their task assistant or whatever.
So we have a lot of development of new startups that are
WhatsApp native here, deploying AI at scale and what now?
That's one big one. The other second thing I'm

(28:26):
seeing a lot, which I find extremely interesting is how
human beings fit into the LLM-native organization.
So these types of organizations that really leverage LLMs in the
whole transversal functioning of the company in the post-LLM
world. And I see it happening two ways.
So one way is obviously building a good data flywheel that

(28:48):
translates user needs into features and into AI features if
you have an AD deployed agent and delivers that as value to
clients so that you need a engineers there.
Obviously we also have designersthat interpret what users are
needing and build convenient UI's that fit the user's needs.

(29:09):
And also, you can use v0 and Lovable or Replit in
those cases to build great UIs. But there's a deep component of
understanding the user and beingtasteful about it.
But these automation and design,all these aspects of the product
that you build internally need to be guided by a deep

(29:29):
understanding of the end user, which you only get by talking to
users. And that's what I find
fascinating is that many startups that are doing great
work in building custom bespoke products for very niche markets
talk a lot to users and feel they can feel the way that the
user feels in their daily workflows, in their daily

(29:51):
day-to-day activities. And that's how they get the
right intuitions to the feed into features and design.
So it's a very soft skill that is that with AI you can leverage
to a great, the much greater degree than you could before,
but it's still a soft skill. So it's it's a place where
humans shine of being very closeto users and having, you know,

(30:11):
daily to weekly conversations onhow they're using the product,
what they're needing, what they're seeing is missing,
etcetera, etcetera. So yeah, WhatsApp,
and, like, really leveraging humans with LLMs,
that's basically the biggest trends I'm seeing.
Last year was, I would say the AI engineering became more

(30:32):
standardized. So before like a year ago, we
didn't know what to do. Like everyone has different
ideas. But now we have some
standardized like axioms. For example, you have to do
eval, you have to start from eval and then run the eval and
then like tweak the prompt, improve the system gradually.

(30:53):
Or you when you do eval, you have to create your own
annotation tool because if you rely on like third party
annotation tool, it wouldn't support your like data types or
you, you might find you want, you want certain features when
it's not there. In terms of tracing, we, we got
much better tools. I'm using Langsmith, but there

(31:14):
are a bunch of tools that that supports tracing and people
realize that tracing is important because when you, when
something goes wrong, you have to look into the model.
So it's much easier to look intowhat's, what's, what went wrong,
how the model is processing the data and so on.
And lastly, it's, I think it's arecent lesson, but people are

(31:38):
more aware of context managementor context engineering.
So in the in the beginning, people didn't look at the data,
so they just had a hunch why theagent is not working.
But now they're looking at the data and then see all the traces
and they realize that, OK, I missed putting the context here.
So, managing the context, the point is, I should not put

(32:01):
everything in the message history.
I have to like digest it in a, in a, in a simple message and
then let the LLM do so. That's three things that I think
got more standardized, like eval, tracing, and context
management. The biggest thing has been to
realize that don't be too smart about scaffolding things that to

(32:24):
to patch how models work today and just kind of like do the
brute force stuff first because sometimes the brute force stuff
actually works way better than my scaffolding.
An example for that would be I used to do a lot of RAG, like
like heavy chunking and stuff like that.
And now what I normally do is actually I do agentic search
and then I just pass a lot of context into the model.

(32:47):
And we're talking about like, you know, Frontier models here.
If you're working with a small model, obviously you have to
still, you know, do a bunch of stuff with like the instructor,
like library and stuff like that to get stuff to work at a
certain standard. But if you're working with
frontier models, try brute force stuff first.
It usually works pretty well. So that's my biggest take away,
which is that models have gotten to a point where it's so good

(33:10):
that a lot of stuff that I learned from like a year ago,
now they aren't that useful anymore.
Like memory truncating, like summarization, all that
stuff. Like you just most use cases you
just stuff all that in. And you've seen that with Claude
Code versus Cursor, where Cursor truncates a bunch of stuff,
right? But Claude Code just lets it run.
Obviously it's more expensive, but like you get really good

(33:32):
results. Being able to use like a vibe
coding tool, I think turns out to be a huge advantage.
So like if you're somebody who has good taste, you can go on
v0, you can get better results than I do because you know what
you're looking for. And yeah, vibe coding is not
bad. Like it's great when you're like
trying to put ideas down on, like, something concrete.

(33:55):
And I've been, I, I used to be like, I used to be kind of
against using those tools a little bit, but that was just
because I realized that, oh, I didn't have good ideas.
But once I have like an idea of what I want and then I want to
get there faster. Oh man, like I'm just going to
go on, like, Figma Make or v0, just bang it out and then think like

(34:15):
a PM as opposed to, like, thinking like a developer, I found
that to be super useful for people like ourselves who are
founders who, who run small teams.
Because, you know, you're going to be, you can't be building product
in developer mode all the time.
You have to be kind of thinking about like the, the, the
features and product side of things while you're, while

(34:35):
you're scoping out things and designing things.
And then you shift into developer mode and you build out
stuff. Because a lot of times in the
past I just kind of built and I didn't have a plan, but that was
because planning and designing and mocking up things was, would
have taken too long. So now like embracing vibe
coding tools is like my new thing.
So I use that to help my planning, help my feature like

(34:58):
road map and stuff like that. So changed my mind about that
recently. Since we last talked, I've been
exposed to like a lot of the different tools, both, you know,
playing with them myself and like watching other people.
And it's pretty interesting, like seeing the different kind
of work flows and like what people optimize for.
And so I think there's like a few groups that, you know, drive

(35:19):
people to, to use the tools and like what they use them for and
this kind of thing. So part of it, I would say is
like the micro interaction type of optimization, micro
interaction with AI, which is kind of the a year ago, I would
call it the copilot, right? The, the to be clear, the GitHub
copilot and it's the like autocomplete.

(35:39):
And I think like windsurf does that pretty well.
People who really want that, like, autocomplete behavior, Windsurf
is good at like, oh, let me change this variable name.
And then it like kind of says, oh, do you want to like change
it in all these other places too?
And you're like, yeah, that looks good.
Or like you delete some stuff and it's like, oh, well, you

(35:59):
just like deleted, you know, this use of a function and now
that's a dead function so you might as well delete it too,
right? So just kind of like
anticipates, you know, what you just did and what you might want
to do next. So I would say that's like a
pattern. And then there's this camp of
like, I'm doing this task. I need to do this thing.

(36:21):
It's a, it's a basically a ticket.
And here's how I want to do it. Can you go do it for me?
And this is the cursors, this isthe Claude code, this is the
codex. And this is, you are instructing
this thing to make a series of changes to a series of files.
They are very good at tool use. They know how to search and find

(36:44):
things. You can give it context that you
want to this kind of thing. So I use that like day-to-day.
I use, I use Claude Code. The, you know, IDE integration
is super naive, but it's a nice to have.
Basically all it is is like, apply this diff, and you're like,
yeah, apply this diff, and you can, like, undo and stuff.

(37:05):
So it's like you don't have to do it entirely in terminal
anymore, but it's not nothing crazy, right?
There's no like great LSP integrations or like anything
like this yet, but it's, it's useful, you know, you can, you
can definitely save time. You can, you can do things in
that way. And so the kind of third camp,
because I mentioned there's 3, is this other thing you can do
now, which is pretty interesting, that I think is growing faster

(37:28):
and faster. And people are going to be doing
it more and more, which is like,you can now build purpose built
personal software at a cadence that you really couldn't before.
And so I've done things where it's like, oh, I need to do this
thing. It would be nice if I had this
tool. I'm going to build that tool to

(37:50):
help me do this thing, like instead of trying to directly do
the thing. And maybe that's like a script,
right? Maybe it's like like that would
be the simplest case. It's like, Hey, I need to
complete this task instead of like asking an AI, which is like
an agent or a computer use or a whatever it is to like do a task
for you. It's much, much, much easier to
be like, right? There's bash script that like

(38:11):
does that thing, and then you run the bash script, right?
And that has the other really nice benefit of like you can
tweak it manually or ask it to tweak it manually.
And you're, you're, what you're effectively doing is like you're
working on the plan and then executing it rather than like
saying, OK, go off and do this thing.
And now like, oh, you didn't do it quite right.
And so I think we're going to be seeing that, like, a lot more, is

(38:32):
like this idea that you can actually do purpose built things
and purpose built tasks. When you build an AI based
application, first ask yourself a question.
Am I actually solving a real product or a process problem?
Or am I perfectly fine in the deterministic rule based world

(38:53):
and I'm just doing it for entertainment's sake or for
intellectual curiosity's sake? So First things first, can I do
it locally? Do I really need to tap into
third-party models, or run Llama, or fine-tune?
And that's one lesson that we learned internally.
Like for instance, we are a lot of times we can actually get by

(39:14):
with the previous generation, like fine-tuned RoBERTa, and
don't even need to go into the latest generation of large
language models. There's still a lot of good old
things within older previous generation libraries and those
open source communities are still very much alive and
kicking. So do it when you see that it
will really be solving your actual problem that you have

(39:37):
when you get awesome results with a larger
model. See if you can get better
results with smaller purpose built model.
With some tricks and tips you can actually get the same output
for much less money, much faster.
Inference and actually purpose built models fine-tuned for
purpose of your specific domain and your specific task are

(39:58):
definitely going to perform much better just because that's what
they are for. Prompt engineering is still
really important. If you don't know how to write a
good prompt, no matter which tool you use, whether it's like
Claude Code, Gemini CLI, like whatever you
use, like it doesn't matter, like you're not going to get a
good result. And I see this on a daily basis.

(40:19):
I help people with, you know, writing prompts for their
agents. They want to build agents on Toolhouse
or on, like, Mastra.ai, you know, the TypeScript
framework. And it's just like it doesn't do
what you expect it to do. And so prompt engineering is
still the #1 skill that I think humanity has to, like, learn.

(40:40):
You know what, When you're a good prompt engineer, you will
unlock a lot of capabilities in terms of what the model can do,
which otherwise you are not going to get.
The one thing I just really want to touch on is it's important
for us to keep in mind that we're not looking to replace
humans with AI. We want to augment, want to
empower, want to uplift, and a lot of people fear AI because
they don't understand that that's the main trajectory.

(41:01):
They view short term impacts, first order impacts as a
negative and then just kind of get turned off the whole idea.
But when they take a step back and they view it as something
that is a tool that can help enhance people, enhance things
like medical tech, clean tech, We can actually improve human
health and World Health with AI.It's just about the way that we
we handle the tool. So I really hope people keep in

(41:22):
mind that this industry is going to have an amazing impact on
humans and the productivity we can have the the amount of
drudgery we can eliminate and the amount of suffering we can
reduce. So I really hope that keeping
the human element in mind and that we're we're working to make
people better, allowing people to make themselves better is

(41:42):
something that is a main objective.
It's becoming increasingly clear that humans aren't going
anywhere in the loop, so to speak, like human in the loop.
We talk about it a lot as this kind of like, you know, new ish
paradigm we talk about with agents, but I think it's
something that is becoming increasingly clear from ChatGPT's

(42:03):
agent mode, Claude Code. You can't just have these things
go fully autonomous. They they will mess up.
We need humans in the loop right now, and build your product or
build your service or build whatever it is with humans in
mind to give them the control tosee what's going on in the
inside, to be able to control what's going on in the inside.

(42:23):
And also, I think we've talked about this a lot of times, but
like meet your users where they're at.
Not everything needs to be a new web app. You know, Slack bots,
Telegram apps, like there are some great UIs for AI
that aren't custom-built web apps.
And I feel like that's something that I've seen recently or just

(42:44):
felt recently is I'm getting webapp fatigue.
There's so many new little demos and there's so many new little
apps that it's like hard for me to keep up with.
And my, my bookmarks tab can only take so much.
So I think meeting people where they're at, Cursor's doing this
with Cursor and Slack. You know, Claude's doing this on
GitHub. Like if you can build these

(43:05):
asynchronous agents that meet these users where they're at,
they can go and kick off workflows, they can check in on
things. I think that's dramatically
important. And that's not going anywhere in
my eyes, at least for the next two years.
Let's see what everyone else says about the human element.
In some ways, the biggest thing I notice is with other people.
It's been interesting seeing one of my friends who's a designer

(43:28):
and he's just like, OK, I'm just going to work with Cursor
and see what happens. And just like, keep spitting out
new websites, like every day basically.
And they're like, they're on the path to becoming something that
he wants. But he, you know, he'll hit a
point of like working on it for a day and then getting
somewhere. And it's like, it really
expresses like a feeling and a thing.
And I found that really impressive. And similarly, another

(43:50):
friend who used to be a programmer, but like doesn't
actually want to look at a text editor and like program again,
he's like kind of doing the same thing for like a local project
that he's building. And I think just like seeing
this, this type of empowerment of people and this
technology, is pretty amazing, to, to be like, you really don't

(44:10):
need to be a programmer anymore. But if you have like a little
bit of knowledge in and around, like obviously these people both
have worked with technology before, so they kind of know how
to say the right things. But it is really to the point of
like you're able to express yourself through this.
It's it's like almost a new medium, it feels like.

(44:32):
So in some ways, like I think that's been the most impressive.
Like for me on the day-to-day, like, sure, I mean it, it's
reducing like the amount of timeand work and stuff that I would
do. The amount of times I touch my
keyboard is just so little now. Like actually, but there's still
a lot of like thought and other stuff.
And I think that's maybe The thing is like learning how to
articulate yourself in a way that the AI can understand and

(44:54):
being sufficiently detailed is the most important thing.
It's it's not about anything else at this point or, or it'll
get to the point where it's not.Maybe it's not today.
But yeah, I don't know. I've got really deep into what I
mean, people are calling vibe coding and as a long time
developer, I mean 25 plus years of doing development, I didn't
really get it at first. And I think, I mean, one of the

(45:15):
big things, this product Zine that we're launching next week,
was literally, quote, vibe coded. I mean, and I'm not a front end
developer by trade and I've done it in the past, but it's not, not
in the last 10 years really. But I was able to build this app
in the last month, purely with Claude Code, Cline, some of the

(45:35):
tools like that and learn so much from from that whole side.
It was just insane. I mean, like, people,
I mean, underestimate how quickly you can really move.
But the downside is it can really get lost and it can, it's
great at zero to one, but it's not as good at iteration.

(45:56):
So I, I mean, learning the right tool for the job helps and I
think, but also learning the right model for the job where
sometimes I'd get stuck, especially like on a react issue
or something like that. And one of the things also is
some of the models do have a knowledge of different
technologies better. Some have a different design
sense. I think if we try everything, I

(46:17):
mean, there's, I mean, get a feel for them.
I mean, some people are like, oh, the IDE is dying and this and
that. It's like, no, I mean use, use
the right tool, use them all. I mean, I think it's, but also
there's some people that just don't get the agent like the
Claude Code and others like Gemini CLI; like, treat it as its own
thing. Like it's just, it's something

(46:39):
new. It's a new paradigm.
It's literally like going old school IRC or like like chat
session where it's like, think of there's somebody somewhere
else that is building this code for you and you have to talk it
through with them. And it's like, ignore that
there's even an IDE; that is the way to really think about it.
And, and that really works for me.
And it's like you end up getting this collaborative feeling with

(47:03):
it that I'd never had before in my career that I mean, it's,
it's literally like a virtual team and with it be able to spin
off multiple agents and stuff like that.
But it's also this is the worst it's ever going to be.
So I'm, I'm excited to see what the next six months hold.
You've got to first of all. Put the human first.
You've got to think of who you're trying to market to or

(47:23):
use AI to reach or connect with, and you have to consider
the human connection first. And then you have to use AI
under that framework, under that lens.
So you have to consider whether your use of an AI tool is
actually making that human connection to your customers or
prospects closer, or whether it's actually distancing.

(47:47):
And so I would say forget the AI tool at the start of a project
or undertaking. Consider the human aspect, the
human connection, then apply the AI tool.
And then throughout the process,consider is this becoming closer
to my customers? Is it becoming more connected
and more congruent and more authentic?

(48:09):
And that's really the thing I would suggest is the most
important thing. The last year made me a bit more
optimistic, seeing how humans are still kind of like the
bottleneck, and I see that in the next 5-10 years we'll still
be very relevant for the workforce.
I think it will look very much different, but working with
these tools just shows you that you still need to be there.

(48:32):
lead the agents in a direction. People prefer to use Claude Code
and Gemini CLI, all of those new tools, because
there is massive hype around it. I don't know, what am I missing?
Like because I, I don't see it. My other difficulty with the
tasks is that the way I work with agents, I still need to be

(48:54):
there. I still need to, like, I might not actually read the code. There are some things that are completely, fully vibe coded, but I still need to test it. QA it, you know, like make sure it works, see the results. I think the human is still the bottleneck, and I think the next challenge will be to solve the actual fit, the glue for the

(49:17):
agent to see if it achieved its goal. Everybody knows this, but you
know what everybody says, like you gotta listen to your customers, find a customer, or something like that. It is absolutely true. I 100% believe this. If you trust the process and you keep going and going, you will eventually hit something. There are certain things you can

(49:37):
do to maximize the probability of doing that.
But yeah, I would say just try everything to filter out the signal from the noise and trust the process. I don't know how else to say it, 'cause like, if you had told me a year ago

(49:58):
that that's what I was doing, I would be like, I did not see
that coming. And the only way that was able
to happen was because we just kept at it and we just made sure that we focused on customers. Forget the news, forget the noise, forget appealing to VCs. Just find one, two, three people who are

(50:19):
willing to stick by you and actually use your product even
when it's shit. Because eventually if you keep
working with them, you will find something and it will build up and grow into something. You got to find people that love your product and you got to work with them. I went down the agent rabbit
hole and started to reflect on what that means from a human

(50:39):
perspective, even in the word agency, to think about the human
analogy around what does that mean in your life?
And so thinking about when you have systems and we're giving
some form of control over to them, what does that really
mean? So that's what I used to think about the last year, as far as it wasn't a particular

(51:01):
tool per se, but just using my own brain as a tool and thinking about literally cognitive functioning as tools. I think you can have mental models that we use as tools, if you want to go a little bit more abstract. I think about tool use as literally my brain going, not necessarily always application-based functions and

(51:23):
tool calling, but also how do I think about something before I
even architect a solution in working with others to say, hey,
at this stage of the experience that we're designing, does it make sense that we have complete agency there? Is it a, you know, let it go, and we don't need a human in the loop to sign off on anything? Or is it, oh no, if this actually is wrong and it

(51:45):
hallucinates at the wrong moment, or something goes wrong
here, how do we, how do we recover from that error?
Well, that year was absolutely insane.
The AI space moves faster than ever, and it's only going to
accelerate. So I hate to put you on the spot like this, but I'd really like to know one, maybe two of your predictions for the next year. What's going to happen in the AI space that is going to blow people's minds?

(52:07):
I think the one that's one of the most interesting to me, and people don't talk enough about, is that the price of intelligence really is going to zero, and I predict that it will actually go to zero. We saw with Apple's SDK now that you can just use the on-device LLM, and I think local models are going to get good enough to the point where you can have

(52:28):
incredible AI experiences using consumer hardware without actually having to hit any APIs. And when that happens, you can do a whole range of things. Every website can be AI enabled, intelligence will be everywhere. And this general intelligence explosion, I think, can benefit a lot of people. And I'm curious to see how that all plays out.
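To make that concrete, here is a minimal sketch, assuming a local runtime such as llama.cpp's server, Ollama, or LM Studio is exposing an OpenAI-compatible chat endpoint on localhost; the port, model name, and prompt are placeholders rather than anything mentioned in the episode.

# Minimal sketch: an "AI experience" that never leaves the user's machine.
# Assumes a local runtime (llama.cpp server, Ollama, LM Studio, etc.) serving an
# OpenAI-compatible chat endpoint; the port and model name below are placeholders.
import requests

LOCAL_ENDPOINT = "http://localhost:8080/v1/chat/completions"  # hypothetical local server

def ask_local_model(prompt: str, model: str = "local-small-model") -> str:
    """Send one chat turn to the on-device model and return its reply text."""
    response = requests.post(
        LOCAL_ENDPOINT,
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.2,
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # No cloud API key and no per-token bill: inference happens on consumer hardware.
    print(ask_local_model("Summarize why local inference matters, in one sentence."))

If a heavier model were ever needed, swapping the endpoint URL for a cloud API would be the only change; the app code itself stays the same.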
And I think the people that are building apps now, basically not

(52:50):
worrying about their inference bill, are the ones that are
going to win. The ones that are like, oh, you
only get 200 credits a month. Like, yeah, plan for the future.
Inference costs are going to go dramatically down.
I think that's the one that's the most obvious for me.
I would say mine is probably going to be: I think we, being the open source community, are going to

(53:11):
solve the orchestration issue. I think we're gonna be able to figure out a very lightweight, low-overhead way to route requests to a model given the context of that request.
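As a rough illustration of what such a lightweight router could look like, here is a minimal sketch; the model names and keyword rules are hypothetical, and a real router would more likely use embeddings or a small classifier model than keyword matching.

# Minimal sketch of lightweight request routing: inspect the request, pick a model.
# All model names and keyword rules below are hypothetical placeholders.

ROUTES = {
    "code": "specialist-code-model",    # e.g. a model fine-tuned on code
    "legal": "specialist-legal-model",  # e.g. a model trained on contracts
    "general": "generalist-model",      # fallback for everything else
}

KEYWORDS = {
    "code": ["function", "bug", "compile", "stack trace", "refactor"],
    "legal": ["contract", "clause", "liability", "agreement"],
}

def route(request_text: str) -> str:
    """Return the name of the model that should handle this request."""
    lowered = request_text.lower()
    for task, words in KEYWORDS.items():
        if any(word in lowered for word in words):
            return ROUTES[task]
    return ROUTES["general"]

if __name__ == "__main__":
    print(route("Why does this function throw a null pointer bug?"))  # -> specialist-code-model
    print(route("Summarize this contract clause on liability."))      # -> specialist-legal-model
    print(route("Plan a weekend trip to Lisbon."))                    # -> generalist-model

The point of keeping the routing decision this cheap is that it works the same whether the chosen model is hosted on your machine or somewhere in the cloud.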
And that will allow more specialized tasks to be done by
the highly specialized models. And I've been preaching on this
for a while, but seeing all of the innovations like Augmentoolkit or Transformer Lab, and the ability for people to train

(53:32):
their own models on their own data is gonna cause a
proliferation of open source models.
And with those models, even though each can do a specific task, how do we make sure that the right task ends up there, whether it's hosted on your machine or hosted in the cloud somewhere? We still need to go from A to B. And that problem hasn't really been solved yet, but I think

(53:52):
there's enough smart people working on it that it's going to get solved. I also think we're going to
start getting more home robotics.
I don't think humanoids are going to be here quite yet.
I think we're still a few years away from that.
Like I ordered the Reachy Mini from Hugging Face.
It's going to be just a little robot sitting on my desk.
There have been tools like what Seeed Studio produced to get little cameras with built-in intelligence.
And with the cost of intelligence going down with the

(54:15):
proliferation of open source models, they're going to be much more enabled with these physical devices. We were early days with the O1, it didn't quite pan out the way we were hoping. It's still open source, people can check it out and play with it, but I do think we're going to start seeing a lot more of that this year.
Brilliant prediction. Robots are definitely coming.
I'm curious to see when. I think AI only makes robot development faster, so I'm excited for the future.

(54:37):
I'm so optimistic on the next five years, no one can tell me
otherwise. For everyone still watching, I
just want to say thank you. Thank you for the past year.
Thank you for sticking along this journey and supporting me
throughout it. I'm extremely grateful that I'm able to have these conversations. Every week I get to talk to
people about a topic that I'm very passionate about and they
are equally if not more passionate about.

(54:58):
And I wouldn't get to do that otherwise; there are people who I just wouldn't have the opportunity to talk to if it wasn't for an audience watching. So thank you very much. I wanted to reiterate really quickly the purpose of the show, why I'm doing it in the first place. I think it's extremely important
that we democratize and decentralize the benefits of AI.

(55:19):
The thought of it being centralized in the hands of a
few really could lead to a bad outcome.
So teaching people how they can leverage these systems and these tools to become more productive and more competitive is very important, for small businesses who I worry are going to be wiped out by these massive corporations that can just keep throwing money at the AI wall until they find a tool

(55:40):
that fits, where the small business has to be a lot more careful and considerate with how they use tools. We're able to distill down a lot of that decision making into a 45 minute episode. And people who are worried about job displacement, what skills do they learn? If you're able to take something you're passionate about and apply AI to it, you're probably going to be OK. But a lot of people just didn't know that's possible.

(56:01):
When I teach someone how to use ChatGPT for the first time, they get it. It's a mind-blowing moment for them. They just didn't know what was
possible. And I just want to make sure
that we educate, inform, and help spread this wonderful
capability to as many people as possible.
So thank you for allowing me to do this.
Thank you for allowing me to have my little microphone.

(56:21):
I do intend on growing this for season two, for year two of the show. I want to try new formats, try to get bigger guests on, and that's where I could use your support. If you can just share the episode, hit the subscribe button, do all those things YouTube wants you to do. The vanity metrics that come along with that subscriber count don't really matter, but people who

(56:42):
are looking for what podcast they should spend their
interview budget on, they do care about that.
So I would like to grow this to try to keep bringing on a high caliber of guests, because I've been so fortunate with the guests who have come on. If you look back at the guest list, it's mind-blowing to me. They're just such genuinely good people who are so bright and knowledgeable, and they share

(57:03):
an hour of their time to be able to record it and spread the knowledge to you. So any support would be greatly appreciated. Now we just qualified for YouTube membership. So if you're in the financial position to do so, check out the membership.
We have a Patreon. I don't know how I'm doing on
Patreon, but I'll figure out something to make it worth your
while. We're also getting our first
sponsor. So the next three months, and it's going to be announced next week, are going to be our first

(57:24):
sponsored episodes. I've been approached by multiple
people about sponsorship and I only want to do it for tools
that I genuinely believe in. And I think you're going to be
really happy with this one. So tune in for that.
If you're a company looking to sponsor a podcast, reach out, let
me know. But ideally you're open source.
If not, I'll start the conversation, but you better be
doing some good if you want to hang out more.

(57:45):
I have a Discord for Tool Use, and actually a bunch of the old guests are on there and they're more than willing to share information about their topics. So if you watch an episode and have a question, check out the Discord. There's a lot of good stuff. You can follow me on Twitter. I'm not as active as I used to be, but I still try to put out stuff to make sure that I distribute the knowledge that I gain. I'm not looking to make any money off of Tool Use.

(58:05):
I have a day job that I'm very happy with, so every dollar that comes in from these different channels goes right back into the show. I've got a great editor who's going to do some good stuff to hopefully bring up the production quality, because I want to make this as enjoyable an experience for the viewer as possible. Or the listener, if you're on Spotify.
I think that's it. Thank you so much.

(58:26):
I'll see you next week.