Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
I've liked being able to move fast and I think now with AI you
can get unbelievable amounts of things done.
I'm building a startup on the side.
I've written all the code with the agent.
I've done all of the logistical work, the HR work, the finance
work. I've done all of that myself.
And previously I would have raised money and hired a team and done all that. And it's literally just me,
(00:21):
coding about 2-3 hours a day at night.
So it's so bonkers.
I love it. Welcome to episode 49 of Tool
Use, the weekly conversation about AI tools and strategies.
I'm Mike Bird and today we're talking about how to use AI to
improve your productivity, using and building tools, implementing
agents and much more. We're joined by Ryan Carson, a
three-time exited founder and Entrepreneur of the Year award
(00:44):
winner. Ryan, welcome to Tool Use.
Good to be here. I'm excited to share some stuff
and hopefully delight your listener.
So let's do it. Absolutely.
Would you mind giving us a little bit of your background as we get into it? Yeah, sure.
So I started off with a computer science degree, and then I was a web developer for a little bit, and then I sort of caught the entrepreneurial bug and started my first company.
(01:05):
It was actually a tool to send large files because it was hard
to do back in the day. And then that was acquired and
then I really fell in love with education and teaching people
how to code. So then the last two businesses
I built and sold were all focused on teaching people how
to code and empowering people with code.
And now I'm a full stack dev again.
(01:27):
It's kind of funny how it's come full circle with AI.
So it's an exciting time to be alive.
Actually, I wouldn't mind getting your opinion.
Do you think people should learn to code in this day and age?
You know, that's funny you mention it, because I was actually going to tweet something about this yesterday.
I'm seeing the same lines get drawn that were drawn before, where people saw themselves either as a person that could
(01:50):
learn how to code or not. And we're seeing similar lines around people saying, oh, I'm going to try Cursor or try Windsurf, or not, which is unfortunate, right?
Because I, I think now anybody can code for real.
Like, you know, back when I was running Treehouse and we taught a million people how to code, I believed anyone who really
(02:10):
wanted to could learn how to code.
But now literally anybody can code.
So, so I think absolutely learning how to code is still
valuable, because we're still very, very far from the models literally creating all the software that you need without any input or any understanding of how it all
(02:32):
works. So I think having that basic
understanding of how code works and it doesn't have to be deep,
you don't need a computer science degree for sure.
But I, I think learning how to code is still extremely valuable
and it's never been easier to learn, right?
You literally download Cursor, open the agent and say, I don't know anything, teach me how to make a game in Python, and
(02:53):
you can do it. And on top of that, if you have Cursor in one window and ChatGPT in the other, you can just ask it. Have a little thought partner.
Go over, start making code. If you're questioning different concepts right now, it's, like you said, never been easier. Never been easier.
I mean, you have this always-on, almost omniscient being that wants you to win, for 20
(03:14):
bucks a month. Like, it's bonkers.
And I think we're very lucky to be alive right now.
Yeah. And I've actually talked to some people and they use Cursor as a learning tool, or they'll clone down a repo, ask how it implements feature X, and it'll actually walk through it. So it's like a two-directional thing where you can get information out of it, and then you can dump information in to turn something into a real thing.
Yeah, yeah, it's amazing.
(03:35):
So do you mind giving us a little bit of how you use
Cursor? Because you're getting prolific
on Twitter with all your little tutorials and we'd love to get a
little insight. Yeah.
So, you know, this journey started back when I first heard that Cursor was shipping. And the idea was, you know, if anyone's used VS Code, that's kind of the standard IDE, which most web developers probably use.
(03:57):
Cursor is basically a fork of VS Code.
So I thought, you know, I want to try this.
I want the AI built into the UI, because before that, what I was doing was having ChatGPT open in a window, asking something, and copying and pasting the code into the IDE, which is very, very clunky and slow.
And so as soon as I heard about Cursor, I thought, let's try it.
(04:18):
And I think, gosh, this was, you know, over a year ago, which is like lifetimes in AI. And you know, what I did is download it, install it. And I thought, gosh, this is just a dream, because it's obviously creating diffs for me right in the IDE, right? So I'm not sort of copying, pasting and trying to figure out what to change.
(04:40):
I just say, hey, you know, update this, and it does.
So I write all my code through Cursor now. I would say the agents probably write, you know, 95% of my code. And my main job is, you know, reviewer and editor and guide. So it's a fascinating
(05:03):
workflow. And I find sometimes people are almost splitting hairs saying Cursor versus Windsurf versus Claude Code. Have you experimented with all of them and settled on Cursor? Or did you just find it worked in your workflow and just stuck with it?
Yeah. So I've used, I use them all.
Initially I was testing Windsurf and Cursor pretty heavily against each other, and I just found that the agent worked
(05:25):
better for me in Cursor. You know, there's a lot going on behind the scenes around the context window management, around the system prompt, around the tooling.
There's a lot of stuff that the Cursor team is doing versus the Windsurf team. And I just found that the mix of them seemed better in Cursor. So I kind of doubled down on Cursor,
(05:46):
and then Claude Code came out, you know, and for those listening that haven't heard of it, it's basically a command-line interface tool that is an agent, right?
So you npm install it and then you talk to it just like you would any AI. And then you can say things like, I want you to update, you know, XYZ to do ABC, and it kind of
(06:09):
cranks through it. Now it's interesting because I'm a visual person, so I prefer the IDE, right?
I prefer to see the file, see the file tree visually, you know, see the diffs easier. There's a lot of that that I like, which is why I tend to use Cursor's agent.
But I do find, you know, in certain circumstances, Claude Code is great. It primarily uses, you know,
(06:31):
Linux commands to navigate the file tree and to search for files and do things.
So it's very efficient and very targeted at what it does. So I'm exploring both.
Most of the time, though, I'm using Cursor's agent, and
(06:52):
now that they have background agents, I use those a lot, which is where you spin up an agent, you say, you know, go find this bug and fix it. And then it just works in the background, and then you come back and it's done.
And then you submit a pull request and review it.
And by the time your listeners hear this, this will be kind of older news, but today Cursor just shipped
(07:14):
background agents on mobile. So now you can code from that.
So it's, it's such a wild time to be alive. Yeah, I was actually just out in my backyard the other day. I had a little idea, sent Claude on a deep research, OpenAI on a deep research.
And then I just got to go about my day and then get the notifications that, hey, this all came in, and all of a sudden you're just that much more productive during your leisure time.
I know.
(07:34):
So, you know, I've hired and managed teams of up to about 110 people, and, you know, I had a CTO and a VP of engineering, managers and product managers and engineers and all of that. And I'm just blown away by the fact that essentially now I can manage a team of AI developers
(07:59):
in a similar way, right? You can get unbelievable amounts of things done, just with me. So I'm building a startup on the side. I've written all the code, you know, with the agent. I've done all of the logistical work, the HR work, you know, the finance
(08:21):
work. I've done all of that myself.
And previously I would have raised money and hired a team and done all that. And it's literally just me, you know, coding about 2-3 hours a day at night.
So it's so bonkers.
Yeah. One thing I would like to get your thoughts on: you mentioned managing multiple agents, being more visual. I feel that we're still evolving
(08:43):
into what the actual interface, the user interface, is going to be for dealing with these agents.
Do you have any thoughts on whether background agents are going to be dominant and you kind of just issue a command, run it and forget it? Or do you think there'll be some type of dashboard we see? Or how do you see this interface evolving?
I think it's going to end up feeling a lot like a Kanban sort of project management, PM-type
(09:06):
tool. I think where this is going is that the pure engineering work is going to be done by AIs mostly, right? And then what you're going to be
doing a lot of is creating, you know, product requirement docs, or PRDs. You're going to then be asking your lead engineer, which is going to be an AI, to turn that
(09:26):
into a task list. And then you're going to say, OK, you know, go and execute that task list.
Come back with any questions, I'll help, and then I'm going to review it at the end. So it actually will feel very similar to the way you would manage humans, right?
But it's AIs instead. You know, there's a lot of feedback you have to give, right? And this is just like employees.
(09:49):
You never just say to an employee, go do that, and it happens and you never check back in and you don't chat to them.
You don't get feedback. Like, there's a lot of feedback. There's a lot of checking in. There's a lot of guidance, there's a lot of giving context, right?
This is all the same with AI devs.
So I think the UI is going to end up looking a bit like an
(10:10):
Asana, you know, Kanban board where you're managing, you know, epics and stories and things like that.
But the AIs are working on a lot of it.
And we'll touch on this later, but I've come up with a very
simple three-part workflow to manage AI devs that really has
(10:30):
transformed my output and my workflow.
And it's been really exciting to use that.
And one thing you just mentioned was managing the context, and we're starting to explore different ways of automating that. Context engineering is going to be one of the hardest things. But as you use Cursor, whether it's with plugins or with MCP servers, how do you try to start bringing the proper context into your workflow to
(10:51):
make it a little bit more automated?
You know, right now I don't. I think it's very hand-cranked at the moment. So this is why I do think people need to learn the basics of code.
They do need to learn, you know, the basic files that they need to include in the context. So whenever I start a task, you
(11:12):
know, here's how I do it. So the first thing I'll do is I'll use a tool like Flow, which just records my voice.
So I don't have to type. I'll go into Cursor, into the agent, I'll turn on Flow, and I'll just start talking about the feature that I want to build, right? So I'll say, you know, all
(11:32):
right, I want a dashboard that does XYZ and accomplishes this and makes it easier for users to do that.
And then I'll hit done with recording and I'll get all the text. And then I'll tag a markdown file which has instructions for creating a PRD, or product requirement doc.
(11:53):
So then I'll tag it. And really this is what people need to learn how to do. It's like, OK, you're going to have these markdown files with pre-written prompts in them, and you're going to reuse those prompts all the time to do things. And so my create-prd.md is really just a complex prompt that says, you know, take the
(12:16):
user's text and turn it into a product requirement doc. And the first thing you should do is ask the user, you know, five clarifying questions.
So then the AI comes right back. And this is all in Cursor,
(12:37):
right? So, just to say this again for folks that are listening, to make it clear: you type in kind of almost verbal diarrhea around, like, what is this feature supposed to be? You really don't have to specify it crystal clear. And then you tag this create-PRD markdown file and then you hit go, and then the AI will read all of that, it will read the PRD prompt, and then it will come back and say, well, you know, what did you mean by this?
(13:01):
And can you be more clear about that?
Right. So it kind of asks you good questions you'd expect a product manager to ask you, and then you answer those. And then what you do is you tag another prompt, which I call generate-tasks.md, right?
Again, it's a prompt, right? So the prompt says, you know,
(13:21):
take this PRD and turn it into a list of tasks that a junior developer would need to finish in order to implement the PRD, right? But the first thing I want you to do, AI, is give me the five top-level tasks that probably need to be done and ask me if they look correct.
(13:42):
And then the AI says, OK, Ryan, these look like the five top-level tasks we need to do. Does that look roughly correct?
And then I'll, you know, give a little feedback and probably say, no, I think after the second one we should probably do this, give a little feedback, and then I'll say go.
And then in that prompt it will say, OK, great.
(14:03):
Now I need to flesh out all those tasks into, typically, it's kind of like a five-part task plan with probably, you know, 5 to 10 subtasks on each one.
And each one of those has a little markdown checkbox.
And then, and only then, do I say, OK, let's start on task 1.1,
(14:27):
and I have a third markdown file which tells it how to execute the task. And it's very simple: do one task at a time, and always stop and ask me for permission before you go to the next task. That three-part system, you know, has allowed me to ship, you know, over 100,000
(14:47):
lines of highly complex code for the startup I'm building, in a very detailed niche vertical, all with just me. And you know, is it perfect code? No, but it's secure.
I've done a lot of work on that. It's concise, it's DRY, it's
(15:12):
good code, right? And I've open sourced those markdown files. We can pop a link in the show notes. But I guess people are loving it, because it's got like 3,000 stars on GitHub already.
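For listeners who want to see the shape of that loop written down, here is a minimal sketch, assuming Vercel's AI SDK (the "ai" and "@ai-sdk/openai" packages, which Ryan mentions later in the episode) and Node 18+. Ryan drives this workflow inside Cursor's agent by tagging his open-sourced markdown files; the file names, prompts, model choice and approval gate below are illustrative assumptions, not his actual implementation.

```typescript
// Illustrative only: Ryan runs this loop inside Cursor's agent by tagging markdown
// prompt files; this sketch just mirrors the same three steps programmatically.
import { readFile } from "node:fs/promises";
import { createInterface } from "node:readline/promises";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const model = openai("gpt-4o"); // assumption: any capable model works here
const rl = createInterface({ input: process.stdin, output: process.stdout });

async function buildFeature(roughIdea: string) {
  // Step 1: rough idea -> PRD, using a reusable "create PRD" prompt file (hypothetical local copy).
  const createPrd = await readFile("create-prd.md", "utf8");
  const { text: prd } = await generateText({ model, system: createPrd, prompt: roughIdea });

  // Step 2: PRD -> task list with markdown checkboxes, using a "generate tasks" prompt file.
  const generateTasks = await readFile("generate-tasks.md", "utf8");
  const { text: taskList } = await generateText({ model, system: generateTasks, prompt: prd });

  // Step 3: execute one task at a time, stopping for human approval between tasks.
  const tasks = taskList.split("\n").filter((line) => line.trim().startsWith("- [ ]"));
  for (const task of tasks) {
    const { text: result } = await generateText({
      model,
      system: "Do exactly one task, then stop and report what you changed.",
      prompt: `PRD:\n${prd}\n\nTask:\n${task}`,
    });
    console.log(result);
    const answer = await rl.question(`Approve "${task.trim()}" and continue? (y/n) `);
    if (!answer.trim().toLowerCase().startsWith("y")) break;
  }
  rl.close();
}

buildFeature("A dashboard that shows weekly usage per customer"); // hypothetical feature idea
```

The point is the shape, not the code: a PRD, then a reviewed task list, then one gated task at a time with a human approving each step.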
Amazing. Love the idea of having a human in the loop throughout the whole process. It's extremely important that people still realize the limitations of these systems.
(15:33):
And the more tasks you chain together, the higher the likelihood of failure. So keep coming back, get a little iteration. When you go through this process, do you try to start a new Cursor chat for each task to prevent the risk of context compression or pruning?
Or can you do it in a single chat, because you have these markdown files offloading some of the memory?
So for each PRD, I'll have one chat, and this is the
(15:57):
importance of having the PRD and the task list, right?
Because, and again, people just need to get in here and try this stuff, and you start to get a sense for the AI and its capabilities. And just like a human: say you're sitting down with your developer and you were chatting through, you know, some of
(16:17):
the features they were building, and it was clear, like, oh, their memory of the PRD is a little fuzzy now.
Like, they're saying things that don't quite match reality.
Obviously this happens with AI, you know, context rot, and you can see when that starts happening.
And then what I do is usually I'll use the Cursor command to
(16:40):
clear context, which completely clears the context.
And then I'll re-tag the PRD and the task list and say, let's keep going, you know, on task 1.3.
And so it's just that kind of manual human-in-the-loop process that allows me to execute pretty large PRDs, you know, pretty much perfectly.
(17:05):
Now, I just think people need to understand it's not like zero or one here, right?
I hear a lot of people saying, wow, you know, vibe coding is just slop, there's no point. And then I hear people that are just all in and believe everything is one-shot.
It's in between, right? It's like now you have a very capable AI engineer. They're almost a genius, but
(17:29):
they have, like, this practical sense of a goldfish, right?
Where you're like, wow, you understand everything and you remember everything, but you don't seem to understand it's pretty important, like, to do this very basic thing we talked about. And that's where the task list comes in, right? You work right through that and you have a human in the loop. I love hearing the fact that
(17:52):
your system is getting so much attention.
One question I have is around prompt management.
So a lot of people will have these MD files that they'll pull
into every project. Do you do anything in terms of
version control or evals to try to iterate upon these, or is it
good enough off the first go that you've kind of locked it in
and you don't really need to worry about optimizing it
further? I think it's a good idea to
(18:13):
periodically evaluate these, and I actually do evals on them.
But the truth is most of us are busy.
And if it seems to be working, you just keep using it, right?
So what I'll do occasionally is I'll think, oh gosh, like, I really haven't updated my agents.md file, you know,
(18:35):
and the project has changed since then.
You know, this prompt is a little frustrating.
Like, it keeps asking me this annoying thing.
I really should update it, and then I'll go in and clean things
up, right? So, and again, this is just like working with real engineers, when you realize, gosh, we need to do some cleanup, we need to do some maintenance.
(18:56):
You know, it's the same thing. But I find in general the three markdown files that I have for this, you know, PRD to task management to task execution, they've not changed a whole lot. I've had some people submit some really great PRs to the repo and we've improved it a little bit. But by and large it's
(19:19):
pretty static and it seems to be working.
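As a concrete illustration of what "doing evals" on a prompt file like this could look like, here is a minimal sketch, again assuming Vercel's AI SDK; the prompt file name, the sample input and the pass/fail check are all assumptions for illustration, not Ryan's actual eval setup.

```typescript
// Minimal prompt "eval": run the PRD prompt on a fixed sample input and check
// that the model does what the prompt demands (here: asks clarifying questions first).
import { readFile } from "node:fs/promises";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

async function evalCreatePrdPrompt() {
  const createPrd = await readFile("create-prd.md", "utf8"); // hypothetical prompt file
  const sampleIdea = "I want a dashboard that shows weekly usage per customer.";

  const { text } = await generateText({
    model: openai("gpt-4o"),
    system: createPrd,
    prompt: sampleIdea,
  });

  // Crude check: the prompt is supposed to start by asking clarifying questions,
  // so the response should contain several question marks before any PRD is written.
  const questionCount = (text.match(/\?/g) ?? []).length;
  const pass = questionCount >= 3;
  console.log(pass ? "PASS" : "FAIL", `(${questionCount} questions asked)`);
  return pass;
}

evalCreatePrdPrompt();
```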
It's the power of open source to get all the brains together working on it. Yeah, absolutely.
Love it. It's really fun.
I will say the model matters a lot.
So I'm all in on Max mode on Cursor.
So what that is, is basically more thinking, right?
So you do have to pay more for it.
(19:41):
And I find, you know, Sonnet 4 in Max mode is pretty much unstoppable. If I need larger context, I'll switch to Gemini 2.5 Pro, because there's a million-token context window, and I'll use that in Max mode as well.
But I'm pretty much Max all the time.
And to me, it's worth the spend, because otherwise you
(20:02):
would be hiring an engineer, you know, raising money and doing all this stuff. Or, you know, paying 200 to 1,000 bucks a month to manage a team of engineers is, to me, a huge value unlock.
Yeah, and it depends on the project, too.
If it's just something you're hacking together, don't worry
about it. But if you're trying to make a
company, you know, spend a few extra bucks.
Yes, exactly. So love the insight into Cursor.
(20:25):
I'm curious about the other aspects of your business or workflow in general. Do you have any other AI tools that help either generate the PRDs, help with testing, anything from the tangential aspect?
Yeah. So occasionally I use a tool called Repo Prompt. So Repo Prompt is kind of a power context tool. So there's a free
(20:47):
version for Mac and there's a paid version. I don't think there's a PC version yet, but it basically allows you to select the exact context you want, and then it formats it in the
perfect kind of XML markdown format.
So if you want really kind of high-powered context management,
then Repo Prompt is really great.
(21:09):
There's also a chat tool in it, so you could use it instead of Cursor if you wanted to. But I tend to use it for sort of very specific context management.
And by that, what I mean is, say I know I have a specific set of files that I want to put into the context of a model. I'll open Repo Prompt and I'll select those
(21:31):
files in the file tree, and then I'll click copy.
And then I'll go to another model.
You know, say, for instance, I go over to chat.com and pull up o3 Pro, which I could use in Cursor, but then I paste it in there and ask it to do something.
So there's a little bit of that, a little bit of Repo Prompt.
(21:54):
I love o3 Pro, right? From the OpenAI world, o3 and o3 Pro are just, you know, so powerful.
So for a lot of personal stuff, I use OpenAI.
My kind of default AI that I chat to every day is, you know, one of the OpenAI models through ChatGPT, because
(22:18):
I find that the memory management is very good.
And, you know, ChatGPT is really becoming pretty knowledgeable about my life and what matters to me, and I'm really enjoying that. So those are a couple
tools I use. And then like I said, the other
one is Flow, which is basically a tool to capture your speech
(22:43):
and turn it into text quite easily wherever you are.
Nice. Yeah, I'm a user of Superwhisper. Exact same premise.
The tool almost doesn't make a difference these days with all the open source coming out, but the ability is, just like you said, verbal diarrhea to your computer.
You can be a little weird with what you're thinking, and it just makes the connections, and it's fine.
I know, it's amazing. I do a lot of almost like sort
(23:07):
of founder therapy and, you know, business therapy with OpenAI, where I talk a lot about, gosh, you know, I'm not sure what to do about this, or I don't feel like I know enough about that.
Or maybe I'm not good enough to do this, or I need some guidance on that. Like, there's a lot of what I used to have, like, a CEO coach for.
(23:29):
I tend to do all that with OpenAI now.
I was formerly quite a skeptic of using AI for therapy, and I'm still not advocating against seeing a therapist if that's what you need, but just the ability to have an infinitely patient companion that's going to talk you through different issues has been super helpful. I've had some friends actually upload blood work or other things to get a second opinion, some of which actually revealed stuff that the doctor kind of
(23:50):
missed or might have overlooked. Do you have any other uses for ChatGPT, in your personal life or even your professional life? Whether it's a project or just different prompts to help unlock these different aspects of life that weren't possible before. Yeah, we are using it for a lot
of medical stuff. So now that o3 is out, and o3 Pro, my wife and I have done deep research using o3 Pro
(24:13):
on a couple of kind of gnarly medical issues.
And it was very insightful. You know, obviously, like you and everybody else, we're not inherently trusting what it says. We are, you know, fact-checking. But overall, it's really unlocked a lot of compute that your doctor just doesn't have time or resources
(24:34):
to give you. So we're using it a lot for that.
My kids use it a lot. It's really funny because we
share an account and so I can see what they're chatting to
ChatGPT about. And it's hilarious to do that.
And I've sort of asked ChatGPT, I'm like, do you understand the difference between me and my kids?
And, you know, when someone's talking to you about F-16s
(24:57):
and, you know, the right weapon load for F-16s, that's not me. That's my kid, who's doing a flight simulator.
And it's like, yeah, I know about that.
I'm like, are you sure? Have you ever had those memories bleed
into the different answers?
Yeah. This is my major gripe with the way LLMs work right now: they're just
(25:17):
too conceding. They're still too fast to agree and to support. I think, and I assume, you know, the large labs are going to figure this out, but we need a way for the LLMs to really push back when they really disagree, because, you know, you've probably done this in
(25:39):
Cursor, especially with Sonnet 4. It's bad at this: you'll do a bunch of tasks and then commit your code, and it gives you this just happy, shining message about all the changes in your commit and just how amazing they are. And then you realize, like, one of them is literally wrong.
(26:01):
And it's like, gosh, we just need to get to a spot where it's more reflective and accurate about what's actually happening and not presuming everything is great all the
time. Yeah, there've been a few
instances when I'm like, are you sure about that?
It's like, actually, I'm wrong. I'm like, are you sure about that? And it just kind of flip-flops. So yeah, a little more of a backbone would be wonderful. Yeah, I've heard, and
(26:23):
actually need to try more of this.
The Anthropic folks were saying a really good method is to ask the AI to sort of measure its response.
That's not the exact words they said, but maybe it's "reflect upon your response," and then you actually get some of the self-correcting behavior pretty quickly.
(26:45):
So I probably should do more of that.
Yeah, now that's something to explore.
I'd love to pivot slightly into AI agents, because you talked about Cursor and the background agents operating.
But for other aspects of general workflow, or interacting with the digital world, now we have things like browser automations, where you have entire browsers or plugins that can just take control or automate processes.
Do you use any agents or have you seen any limitations
(27:07):
stopping you from adopting them in your workflow?
Yeah, so I've written workflows for agents primarily using TypeScript. And then I recently tried n8n to see, well, do I want a visual version of this workflow?
So if anyone listening hasn't tried this, I think you should go to n8n.io, I believe, and you can basically build, you
(27:31):
know, a workflow where you input this, and then that gets sent to this agent, and then the agent does this and then sends it to this agent. And then, you know, eventually something comes out. You know, I don't use many of
these in real life yet. What I'm finding is most of the
agentic behavior I need is very specific.
(27:54):
So for instance, you know, on the startup I'm building, I wanted to do basically an SEO audit, right?
So what I did is created a group of agents that collaborate, right? And the idea is, OK, I want an agent who kind of reads all the text and takes notes about, you know, potential SEO improvements.
And then I want another agent to then go and do cross-linking in
(28:18):
between, you know, all the pages.
And then I want a final agent to review the output for quality and accuracy. And you kind of orchestrate these agents using, you know, at the moment I just use Vercel's AI SDK, all in TypeScript. And I actually find that easier than using a visual tool like n8n, because it's just faster with,
(28:40):
you know, the agent in Cursor cranking out the code for me.
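To make that concrete, here is a minimal sketch of that kind of three-agent pipeline with Vercel's AI SDK; the prompts, the model choice and the page input are illustrative assumptions, not Ryan's actual audit code.

```typescript
// Illustrative three-agent SEO audit: one agent takes notes, one proposes
// cross-links, one reviews the combined output for quality and accuracy.
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const model = openai("gpt-4o"); // assumption: any capable model works here

export async function seoAudit(pages: { url: string; text: string }[]) {
  const site = pages.map((p) => `URL: ${p.url}\n${p.text}`).join("\n\n---\n\n");

  // Agent 1: read all the text and take notes on potential SEO improvements.
  const { text: notes } = await generateText({
    model,
    system: "You are an SEO auditor. List concrete improvements per page.",
    prompt: site,
  });

  // Agent 2: propose internal cross-links between the pages.
  const { text: crossLinks } = await generateText({
    model,
    system: "Suggest internal cross-links between these pages, with anchor text.",
    prompt: `${site}\n\nAudit notes:\n${notes}`,
  });

  // Agent 3: review the output for quality and accuracy.
  const { text: review } = await generateText({
    model,
    system: "Review the audit notes and cross-link plan. Flag anything wrong or low value.",
    prompt: `Notes:\n${notes}\n\nCross-links:\n${crossLinks}`,
  });

  return { notes, crossLinks, review };
}
```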
So I don't use many, you know, true agents yet.
I just find it's very specific use cases. Absolutely, but most of them, I think, fall into ChatGPT, which has, you know, slightly agentic behavior, and then,
(29:05):
you know, using Cursor's agent, you know, to write code.
I think this will get better and better and
better. Andrej Karpathy tweeted a couple days ago. He said, you know, I think what we're going to see is this kind of cognitive home base, where there'll be a small LLM that lives on your phone and it's
(29:27):
just smart enough to be able to delegate out to larger, more intelligent models. And it becomes essentially your AI companion. And then once that really happens, and it actually knows you and understands how to tool call and route to various agents, I think at that point you'll start to see true agentic behavior.
(29:49):
But we're just not quite there yet, in my mind.
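A minimal sketch of that delegation idea, again with Vercel's AI SDK: a smaller model decides whether it can answer itself or should hand off to a bigger one. The model names and the routing rule are assumptions for illustration; a real on-device setup would look different.

```typescript
// "Cognitive home base" sketch: a small router model either answers locally
// or delegates to a larger, more capable model.
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const smallModel = openai("gpt-4o-mini"); // stand-in for an on-device model
const bigModel = openai("gpt-4o");        // stand-in for a frontier cloud model

export async function homeBase(userMessage: string) {
  // The small model only decides whether it can handle the request itself.
  const { text: route } = await generateText({
    model: smallModel,
    system:
      "Answer with exactly LOCAL or DELEGATE. Say DELEGATE for anything that needs deep reasoning, long context, or coding.",
    prompt: userMessage,
  });

  const chosen = route.trim().toUpperCase().startsWith("DELEGATE") ? bigModel : smallModel;
  const { text } = await generateText({ model: chosen, prompt: userMessage });
  return text;
}
```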
I agree with building your own pipelines rather than n8n; maybe it's the developer background.
I think some people like the safety net of having the visuals laid out. But just in regards to agents, I use tools like Exa or Firecrawl to do some web scraping, but then that feeds into a pipeline of deterministic code.
(30:10):
So it's kind of like AI aspects to a pipeline.
But I still don't know if the whole thing is an agent.
I, I don't know if it matters. It's like you're saving a huge
amount of time, effort, money. You know, it's like a workflow
that's agent-enabled. Who cares, right, whether it's actually an agent or not. But I agree, like, having MCPs,
(30:31):
you know, unlocks a lot, but you end up kind of plugging it back into a deterministic, you know, tree of events, which is like,
wow, OK, that's fine. Yeah, now it's going to be pretty mind-blowing if AGI turns out to be a super small model that's just really good at tool calling, and then, you know, we already have all these systems; just have the AI figure out what system it has to trigger. Yeah, I think that's where we'll
go, because ultimately you really need your AI to be with you, you
(30:58):
know, and we'll see. You know, I'm kind of curious what Sam Altman and Jony Ive are up to, and like what form factor that thing's going to be and how that's going to work.
But it's clear we need to rethink, you know, the human-computer interaction model with AI now.
So I'm excited to kind of see that, you know, that play out.
(31:19):
I've got a pair of Meta Ray-Bans, you know, they're sunglasses. I use them when I go outside.
But you know, I find I use some of those capabilities, where I ask, you know, Meta, like, what's that?
But it's not very often. It's mostly just to take a couple pictures and then send them to a friend.
(31:39):
In my previous experience, I worked with Open Interpreter and we went through the 01 project, which was a little device that you could speak to, and then it would be connected to a server running on a computer to do computer control.
A little premature, didn't quite have the capabilities we were looking for, but it was a screenless interface that really excited me, because when I'm out and about and get an idea, you know, you can just do a little voice note, or you can even say, don't forget to send this file to Ryan.
(32:00):
So I do think we're going to be exploring a lot more different
things away from the computer because when you're out and
about, I would love it if AI helped us kind of get into
nature more. But you can still be productive
and functional. How do you see the open source
ecosystem playing out? Do you think they're going to
try to keep pace with the big labs?
Do you think they'll be building more of the tooling supporting
it but still rely on the centralized models?
(32:21):
How does the open source world factor into your future world
view? Yeah, I, I think that Meta is
going to be the determining factor here, right?
So it's clear that, you know, Meta and Zuckerberg are going to
throw a huge amount of capital at Llama, right?
And I think it's pretty clear Llama is going to continue to
(32:43):
be, you know, one of the SOTA models, right?
And you know, as an open-weights model, that really unlocks a lot.
So I think, you know, Llama will be a big deal.
I think, you know, OpenAI's new open-weights model, when it comes out, that's going to be fascinating.
(33:04):
But the way I see it playing out is, people just don't use the open-weight models as much if they don't have the intelligence. You're usually going to go to a model in the cloud because you need the intelligence, right?
Or you've got a hyper-specific use case on mobile that you're going to use an open-weights model for.
(33:24):
Great. But it's not for the intelligence, right? So I think that's kind of the world we're in. There's a really cool open
source tool that Bob Duffy and team created at Intel called AI Playground. And, you know, the tool is literally open source and it uses open-weights models to create a ton of imagery and videos.
(33:45):
And so I could see people saying, you know, I just don't want to pay 20 bucks a month, you know, for OpenAI to make my images or my videos. I'm going to use, you know, an open-weights model locally. But I just feel like that's kind of the tinkerer group of people, which has got to be small. You know, I think most people are going to essentially pay for a Netflix account for AI, right?
(34:09):
Like, I'm just going to pay 20 bucks a month and get all the AI I need, and it's going to end up using, you know, a proprietary
model in the cloud that's super smart.
So that's how I see it shaking out.
I would like open-weight models to really matter here, and I think they could. But day-to-day, you know, the truth is I use Sonnet 4 because it's super smart and I need it to be
(34:35):
super smart. So now, fast forward 12 months, I mean, it's probably going to be that, you know, a small open-weights model is very intelligent and very good.
And I'll end up relying on it because it's on my phone, you know? So let's hedge our bets and see.
(34:57):
Yeah, exactly. Keep, keep pushing forward.
I feel like the orchestration layer is one that we really have
to solve because as soon as we can get hot swappable adapters
or even just models hosted in a rapidly accessible way, if it
can be properly routed to the right model, we have potential.
But yeah, just scale alone is such a vast gap to overcome that
I'm still rooting for open source, always team open source.
(35:17):
But yeah, long ways to go. Yeah, yeah, I want it to win.
And, you know, Hugging Face is doing a huge amount of great work, you know, to really enable the open-weights ecosystem.
And I think it will be a big player, you know, and we see things like Mistral, you know, really pushing the limits.
(35:37):
And it's interesting that a lot of these are happening outside the States, right? So you have Mistral and Hugging Face, you know, the non-US companies, and obviously DeepSeek and everything going on over there.
So we'll see how it plays out.
It's going to be exciting. Yeah.
I'd love to pivot to talk a little bit about the one-person company; you're doing your own startup solo.
(35:59):
More and more people are talking about the potential for a one-person unicorn. What are your general bits of advice for a tinkerer, maybe a product manager, who wants to get in, has an idea, and wants to execute?
Yeah. So, you know, I've started probably five or six companies; three of them worked, many did not. And I think this is all about
(36:21):
finding a problem that you really care about, right?
So if there's a company to be made, it's always going to be around a problem that you deeply care about, that you think you can help solve. And that's one of the key things: people don't buy vitamins, they buy pain pills, right? So it's always going to be about
(36:43):
solving a problem versus, you know, making people's lives better. So that's a lesson I learned over a couple of startups where I was trying to do something that I thought would, you know, make people's lives better. But the truth is people just want their pain solved. And you can't force these ideas to happen; they usually will appear, right?
(37:06):
So the current startup I'm working on, I didn't have the
right idea for a while and then bam, it hit me.
And you know, I'm working very hard on that.
So I think that's kind of thing one: find the thing that you deeply care about, because there's no
(37:26):
reality where you're going to build something and then, you know, six months later, it's going to make you a lot of money. A startup is just pure grind, right? And it's likely, you know, it'll be a year to three years before you really figure out product-market fit and you actually, you know, make enough money to quit your job. And you know, it's just a slow
(37:47):
grinding process. But if it's the right idea, then it probably will work after a while.
So that's all like basic entrepreneur stuff, whether you're talking about a one-person company or a 1,000-person company. I think if we focus on the one-person startup, what you need to
(38:09):
do is be extremely curious and be willing to move forward no matter how much you don't think you know.
So for example, like, you know, the current startup I'm building is a Delaware C Corp; I used Stripe to incorporate it.
(38:32):
That's fine. That's, you know, you click a couple buttons and you have a Delaware C Corp.
But also I had to register for some annual tax, and I got a letter in the mail and I was like, oh my God, like, I don't know what this is or how this works.
I mean, I used to have a COO that would do all this stuff.
(38:52):
And I was like, wait a minute, I'll just talk to AI about this, right? So I did that, right.
And, you know, half an hour later I'd filed, you know, the form and paid the small little fee and moved on my way,
right. So I think there's just a lot of
little things like that, that if you're willing to do them and let AI help you do them, it really unlocks that one-person,
(39:17):
you know, advantage. And on top of that, you know, it's interesting. I've always been a solo founder, like, supported by my wife. I haven't really had co-founders, and I've always liked that. I've liked the control, I've liked the clarity, and I've liked being able to move fast.
(39:38):
And I think now with AI you really can.
And I think a lot of folks are realizing maybe the whole co-founder thing was, you know, a pre-AI idea.
It's just less important now, because, like, why give up 50% of the equity in the company if you don't have to, you know?
So, a couple of thoughts. Yeah, love it.
(40:00):
Pulling back to the, you know, find a passion, find a problem you're passionate about: a lot of listeners to this show are into the AI tooling space. During your development workflow, what problems do you think still exist, not necessarily at the model level, but in the application layer, for AI tools that people might want to start working towards?
Gosh. So I, I would look at very
(40:21):
specific vertical use cases, right?
So the startup I'm launching is around a very specific vertical problem in Connecticut, where I live, right?
It's not sexy, it's not exciting, but it absolutely can be transformed, you know, through AI.
So I think people should start looking for these, like, what are
(40:44):
these gnarly, messy, vertical-specific problems that people have? And how can you relaunch a new user interface, a new user experience, and then layer AI on top of that to solve that problem?
I just think there are a million businesses that could be launched in this sort of genre.
(41:06):
And you know, these businesses can make $1,000,000 a year easy,
right? So you look at, gosh, well, if
as a solo founder you could be generating $1,000,000 in revenue
a year, that's a big deal, right?
Yes. It doesn't need to be 10 or 100 million, you know, like all the VC-backed companies talk about. But a $1,000,000-a-year business
(41:27):
for a solo founder is absolutely a life-changing thing. And so there's a lot of opportunity. I would encourage people: just ignore all the VC talk and all the, you know, people saying they're making 50,000 in MRR every month on X. Just ignore all that and find a problem you care about, that you think is unique and not a lot of people are
(41:52):
paying attention to. And I, you know, think you can
do well.
Perfect. Last question for me: for those people who have the itch, not quite ready to leave their job, but they want to spend a bit of time kind of adjusting, adapting, preparing for taking the dive into entrepreneurship.
What do you think they should focus on in terms of skills,
mindset? How can you take a regular
employee and then work towards becoming a stronger entrepreneur
(42:15):
in this day and age? I mean, I would say, number one, literally go to cursor.com and download it. Number two, open up the agent panel and talk to it. I think we're in this magic world now where you have this hyper-intelligent, you know, being that will literally do what you ask.
(42:37):
So I think just start going in there, and it's like getting your reps in at the gym, right?
Just go in, right? If you can go in and start moving your body, you know, the first time you go into the gym, it doesn't matter what you do. Like, the fact that you went there matters, you know. And then even the second time, it doesn't really matter. Like, even the seventh time, it
(42:59):
doesn't really matter. Like, you're just getting your reps in, right? And then try to build something really, really simple, right?
And again, you don't even have to know what the tech is.
Like if you're not actually a developer, it doesn't matter,
right? And if you want to go a level up
and make this even easier, you know, you can use Replit or
(43:19):
Lovable or Bolt, like all these tools that literally build the app and the infrastructure for you.
So there are just so many options. So I would say go to the gym, and the gym is Cursor or Lovable or Bolt or, you know, Replit, and see what happens. Go on X and, you know, follow
(43:41):
some people that talk about this stuff and try to meet up with some of them if you can, and if you can't, you know, join some online meetups. The world is your oyster right now, and this probably won't last forever. Like, I think we're in kind of a 5-to-10-year window where a huge amount of value is going to be
(44:04):
created, and a lot of the value is going to be transferred from existing enterprise juggernauts to smaller, vertical-specific startups. And the time is now.
So just start.
(44:24):
Ryan, this was awesome. I really appreciate you coming
on and sharing your experience and your insight.
Before we let you go, is there anything you'd like the audience
to know? Just come on over to ryancarson.com or x.com/ryancarson or linkedin.com/in/ryancarson. Say hi.
You know, I talk about this stuff all day, every day,
probably too much. So come on over and say hi.
(44:48):
Tell me what you're thinking about, what you're building, how
you need help, and it'd be fun to chat.
Awesome. All right.
We'll talk to you soon. Thanks a lot.