Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Hey, folks, welcome back to another episode of the Ruby Rogues podcast. This week on our panel we have Valentino Stoll. Hey, now, I'm Charles Max Wood from Top End Devs. And yeah, it's been a while, Valentino. We've kind of had a lot going on, just picking things back up. Do you want to give people a quick update as to where you're at, and then I'll do the same thing, and then
(00:25):
we can dive into the rest of this topic for today.
Speaker 2 (00:28):
Yeah.
Speaker 3 (00:29):
Yeah, you know, earlier in the year I moved gigs. I was at Doximity and I switched over to the lovely team at Gusto, working on AI-related stuff.
Speaker 2 (00:38):
Pretty wild things happening.
Speaker 1 (00:40):
Yeah, you know, lots of... So when you say you're working on AI stuff for Gusto, are you integrating AI-based features into what Gusto does? Or, you know... well, what does that look like? I guess it seems like there's a wide world of things that people are doing with AI.
Speaker 3 (00:57):
There is a wide world of things. So they have, you know, a chatbot, Gus, who's like the assistant for your business, your small business, and, you know, they're basically just integrating their entire ecosystem into the chatbot, okay, amongst other things.
Speaker 2 (01:14):
There's a lot of other initiatives. That's the one I've been focused on. Okay.
Speaker 3 (01:18):
So yeah, pretty much anything that you would want to do as a small business, think about what an AI assistant might help out with there, especially when you have a small business platform you can plug into.
Speaker 1 (01:30):
It's kind of exciting, very, very cool. I'd love to dig into that. It looks like a lot of fun. So yeah, for me, about the same time you transitioned over to Gusto, I dropped my contract and went and worked for PrizePicks, and so I've been at PrizePicks since March, and I've spent quite a bit of time working on a lot of the social features.
(01:51):
So if you get into the app and you want to follow another member's profile, or, you know, they have the feeds where you can see what other people are wagering, or stuff like that. You know, it kind of makes it fun, and you're able to kind of see more of the things that other people are doing and, you know, kind of build stuff off of that. And anyway, I've been spending a lot of time on that and on the profile stuff.
(02:13):
So that's where a lot of my work has come. It's actually kind of funny, because my wife and I will be sitting watching TV and a PrizePicks ad will come up, and it's, you know, it's like, hey, did you know that you could, you know, whatever. And I look at her and I go, yeah, I built that. Right, at least I built the back end of
Speaker 2 (02:30):
it. Right, are you on there, you know, making your picks?
Speaker 1 (02:33):
You know?
Speaker 2 (02:35):
No.
Speaker 1 (02:35):
In fact, my track record on picking the right stuff to win is abysmal. I mean, you know, obviously, as an employee, I can't actually play for real money, but, you know, if I want to play, I can play, you know, and it's kind of fun. I just, you know, I can't deposit or withdraw money, and so anyway,
(02:57):
it's just, yeah... so it's all in fun. But yeah, I almost never win.
Speaker 2 (03:03):
So it is funny to see, like, you know, ads
pop up and you're like, hey, yeah, I know that.
Speaker 1 (03:10):
Yeah, yeah. But my deal is, I don't really follow the NFL or NBA or, you know, any of the big sports that people are in there playing on. So I think I'd have a better shot if I was, like, a diehard Eagles fan or something, you know. And so it's like, I'm going to put
(03:30):
something in on every Eagles game. But I don't, because I just don't care. Yeah, but yeah. So we were talking before we got recording, and you said that you've been working on this autogenetic stuff and a gem.
Speaker 3 (03:45):
It's a term I came across called autogenetic, which basically just means self-generating. And, okay, I've started to explore, like, what it might look like for these AI things to just, like, assemble for themselves the things that let them do more, rather than having me decide
Speaker 2 (04:04):
what it should do.
Speaker 1 (04:06):
Mhm.
Speaker 3 (04:07):
And part of my exploration is this gem... it's a Ruby gem called Agentic. It started as just, like, I wanted a way to do plan-and-execute workflows. So if you're not familiar with those, it's a way for you to have an AI thing, an LLM, create a plan and then execute that plan. And so
(04:27):
you can think of Claude Code, right, as a perfect example of how this kind of started in a more practical way. There were others before them, but, you know, plan-and-execute is a very common pattern for AI stuff.
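[Editor's note: a minimal sketch of the plan-and-execute pattern described here, in Ruby. The prompt shape and the stubbed `llm` lambda are illustrative assumptions, not the Agentic gem's actual API.]

```ruby
require "json"

# Stand-in for any chat-completion client; a real one would call an LLM API.
llm = ->(prompt) { '{"steps":["research the topic","draft an outline","write the summary"]}' }

goal = "Summarize recent changes to the Ruby garbage collector"

# Plan: ask the model for an ordered list of steps as JSON.
plan = JSON.parse(llm.call(%(Return JSON {"steps": [...]} to accomplish: #{goal})))

# Execute: walk the plan; each step would itself be handed to the model or an agent.
plan["steps"].each_with_index do |step, i|
  puts "Step #{i + 1}: #{step}"
end
```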
Speaker 2 (04:43):
And as I was
Speaker 3 (04:45):
building it, I had these, like, you know, agents that I was manually creating, and I thought, why am I creating these things? There has to be a way that the plan could just be like, okay, what kinds of... you know, what do I need in order to accomplish
(05:06):
this goal?
Speaker 2 (05:08):
Right?
Speaker 3 (05:08):
And then, you know, if I introduced concepts to the LLM, so that it knew how it could use them and how it could build them, could it do it effectively? And so the obvious one is an agent, right? Like, you have instructions and it can do things, it
(05:29):
has tools available to it. And so, can an LLM, with just knowledge about, like, that construct, build and assemble its own agents to accomplish tasks that it has to do?
Speaker 2 (05:43):
And so that's where I kind of, like, really
Speaker 3 (05:44):
dug into this Agentic gem, and I created this, like, way to assemble agents based on just, like, instructions, and giving it a role. And with that primitive, giving it a name, a role, and instructions, throwing a task at it and being like, hey, like, assemble whatever agents you need to accomplish this task and give me the
(06:05):
result of that.
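[Editor's note: a bare-bones sketch of that name/role/instructions primitive and its self-assembly, assuming a stubbed `llm` callable. The `Agent` struct and the prompt are hypothetical, not the gem's real interface.]

```ruby
require "json"

# The primitive: an agent is just a name, a role, and instructions.
Agent = Struct.new(:name, :role, :instructions, keyword_init: true) do
  def run(task, llm)
    llm.call("You are #{name}, a #{role}. #{instructions}\nTask: #{task}")
  end
end

# Stubbed model client; a real one would hit an LLM API.
llm = ->(prompt) { '{"name":"SourceCollector","role":"researcher","instructions":"Gather and cite sources."}' }

task = "collect sources on Ractors"

# Self-assembly: the model describes the agent it needs, and we build it from that.
spec  = JSON.parse(llm.call(%(Return JSON {"name":..., "role":..., "instructions":...} for the agent best suited to: #{task})))
agent = Agent.new(name: spec["name"], role: spec["role"], instructions: spec["instructions"])
puts agent.run(task, llm)
```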
Speaker 1 (06:06):
Right. So it reminds me a little bit of some of the conversations we've had with other people about AI agents and LLMs, where, effectively, I think kind of the basic version is, you have one agent or one program and you give it a task and it just kind of runs until it gets the stuff done, right? And so then... we've talked specifically, I'm thinking of when we
(06:29):
talked to Obie about Raix, where he said, you know, I've got multiple agents, right? So I've got, like, a calendar scheduler agent, I've got this agent, I've got that agent, I've got another agent, right? And so they all kind of specialize, and then he's got one or two that kind of orchestrate things at different levels. And a lot of that was done deliberately, right? It's like, I'm going to have a calendar agent, I'm going to
(06:51):
have a chat agent, I'm going to have a whatever-else agent. And what you're saying is, in your case, you're saying, okay, I just need this task done, right, and so you make up what the agents are, right? And so you're giving that level of design and orchestration to the LLM as well.
Speaker 2 (07:11):
Exactly.
Speaker 3 (07:14):
Because this is object-oriented, right, and all these type constructs are defined in code, I can save the artifacts of those, right? So, like, as it's building out these agents to accomplish tasks, it's saving them off for use later.
Speaker 1 (07:29):
Right.
Speaker 2 (07:30):
So, like, if it goes...
Speaker 3 (07:31):
And if it has, like, a bigger plan that it's trying to get through, and a goal it's trying to accomplish, and it's creating all these subtasks and making agents, it can then reuse those agents as it goes through the plan. And so, like, let's say it's, like, just a research agent that it decides to make, and it goes and it researches some content and then pulls back information
(07:54):
and saves it in a file, and that's, like, what it was built to do. And then it comes across another task, and it's like, I need to research this new thing. It can just reuse that agent, because, like, it then becomes available to the system. Like, hey, these are agents that are available. Right, at every given turn in the task, it can choose to
(08:15):
pick one of those existing ones or build a new one. And sometimes, you know, if the task is not related enough to what it's trying to do, like, let's say it was, like, just generic research on the web, but it needed to research in a specific file, then it would create a new agent for file research. All right? So it's kind of interesting
(08:36):
to watch this thing kind of bloom, right, uh huh
as you give it different things to do.
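[Editor's note: a small sketch of that reuse-or-create behavior, assuming a naive registry. The keyword match is a stand-in; a real system might ask the model itself to pick or build.]

```ruby
Agent = Struct.new(:name, :role, :instructions, keyword_init: true)

class AgentRegistry
  def initialize
    @agents = []
  end

  def register(agent)
    @agents << agent
    agent
  end

  # Reuse any agent whose role appears in the task; otherwise build a new one.
  def find_or_create(task)
    @agents.find { |a| task.downcase.include?(a.role.downcase) } ||
      register(Agent.new(name: "agent-#{@agents.size + 1}", role: "generalist",
                         instructions: "Handle: #{task}"))
  end
end

registry = AgentRegistry.new
registry.register(Agent.new(name: "Researcher", role: "research",
                            instructions: "Search the web and summarize findings."))

puts registry.find_or_create("research Ruby release notes").name # reused: "Researcher"
puts registry.find_or_create("generate a PDF report").name       # new: "agent-2"
```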
Speaker 1 (08:44):
So one thing that I'm wondering about is, it seemed like some of the sub-jobs, or subclasses of agent, or however you want to look at it, had pretty specific functions that they needed, right? So, for example, talking to a calendar or, you know, connecting to a particular service or things like that. Does it create those
(09:05):
too, and actually write the function code for those sometimes?
Speaker 2 (09:11):
Right? So that's the beauty of this, is, like, it's an experiment in all things.
Speaker 1 (09:15):
It really is coming for my job.
Speaker 3 (09:17):
Like, yeah, sometimes it does effectively do that, you know. When it runs, there are cases where, you know... because I do have limiters on it that, like, don't let it, like, recursively get to itself.
Speaker 2 (09:33):
So if it fails to
Speaker 3 (09:34):
do something, it doesn't get stuck, and it can just move on and say, I wasn't able to do that. And so it does do that. But yeah, I would say, for the more complicated things, it tries to just break them apart into smaller things and try and accomplish those. And so I haven't really pushed the limits
Speaker 1 (09:48):
of it yet. Right. So I guess one thing that
I'm curious about with some of this, because I guess
you could also just say, you know, here are some
functions that I just have available, right, and then it
could build the agents around those however it wants. But
I guess I'm wondering, you know, how granular does it get?
And I'm also curious what the implications are for, let's
(10:09):
say, I want to actually build and design something like this on my own. I don't want to do the autogenetic stuff. I just want to, you know, decide where the boundaries are for my agents. You know, does this inform those decisions for you? But let's back up real quick. Like, how granular
(10:30):
does it get? And how, you know, how does it handle some of that stuff? Because I'm imagining, for example, let's say it did need to do some scheduling or, you know, calendar management, right? It could have a connect-to-Google-Calendar agent with these six functions that it can do, or it could say, you know, I'm just a busy-checker, and I'm an appointment-updater, and
(10:52):
I'm... right. And so you could end up with agents that each do one of a bunch of different things, as opposed to one that just kind of generically handles Google Calendar.
Speaker 2 (11:02):
Yeah, I get that.
Speaker 3 (11:02):
I mean, I built this in a very modular way, because I wanted the ability to have, like, agents that I specifically made with certain things. And so you can build your own agent, and you can even give it, like, specifications if you don't want to fill in all the details. So you can create, like, kind of a spec for how the
Speaker 2 (11:22):
Agent should be built.
Speaker 3 (11:25):
Okay. But yeah, there's, like, a registry, so you can, like, register your agents, and then it would make use of those, like, in the normal process as it's looking to accomplish tasks. And you can bypass all of that... yeah, you don't have to use self-assembly.
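[Editor's note: a sketch of that partial-spec idea, where the developer pins down some fields and the model fills in the rest. The field names and the stubbed `llm` are assumptions for illustration.]

```ruby
require "json"

# Stub: pretends the model completed the missing fields.
llm = ->(prompt) { '{"name":"CalendarAgent","instructions":"Check availability before booking."}' }

# The developer fixes what they care about...
partial_spec = { "role" => "calendar assistant" }

# ...and the model supplies the rest, without self-assembling the whole agent.
completed = partial_spec.merge(
  JSON.parse(llm.call("Fill in the missing agent fields (name, instructions) as JSON for role: #{partial_spec["role"]}"))
)

p completed
# => {"role"=>"calendar assistant", "name"=>"CalendarAgent", "instructions"=>"Check availability before booking."}
```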
Speaker 1 (11:40):
Yeah, right. But how far does the self-assembly go?
Speaker 2 (11:45):
It goes pretty deep. Yeah. I mean...
Speaker 3 (11:47):
So I built, like, a capability system, I call it. Where, like, let's say, you know, a capability would be most similar to, like, tools, I guess, in the modern realm. But, like, I think of it more as, like, okay: being able to search the web, like, read a file, generate a PDF. Like, these are all capabilities, right? And so
(12:10):
I made, like, kind of an agent capability system that you can register with the Agentic system, and it's, like, all defined abstractly with, okay, you have this capability, and here's the implementation function, and it takes inputs and it generates a normalized response.
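[Editor's note: a minimal sketch of a capability as described here: a name, a description, and an implementation function that takes inputs and returns a normalized response. The struct and registry names are illustrative, not the gem's API.]

```ruby
# A capability: a named ability with an implementation and a normalized result shape.
Capability = Struct.new(:name, :description, :handler, keyword_init: true) do
  def call(**inputs)
    { capability: name, ok: true, output: handler.call(**inputs) }
  rescue StandardError => e
    { capability: name, ok: false, error: e.message }
  end
end

CAPABILITIES = {}

def register_capability(cap)
  CAPABILITIES[cap.name] = cap
end

register_capability(Capability.new(
  name: "read_file",
  description: "Read a UTF-8 text file from disk",
  handler: ->(path:) { File.read(path) }
))

# The descriptions double as the catalog the system can surface when building agents.
CAPABILITIES.each_value { |c| puts "#{c.name}: #{c.description}" }
p CAPABILITIES["read_file"].call(path: "README.md") # normalized, whether it succeeds or fails
```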
Speaker 2 (12:29):
And these, like...
Speaker 3 (12:32):
specifications that you're generating for the capability also help inform the agentic system what it can do, right, as it's building agents. So, like, you can basically define kind of, like, what agents are capable of being built with, right? And so it's not just gonna
(12:52):
go in and, like, say, oh, I can go and, like, connect to Google Calendar and, like, automate a huge pipeline. There's only so many capabilities that you can register at a time. And so, like, I made it kind of with that informed decision of, like, I don't want it to just make stuff up.
Speaker 2 (13:08):
Because it was, at first,
Speaker 3 (13:10):
Not really doing too great, but as soon as you
put some like kind of guardrails around it and give
it like, oh, you can only.
Speaker 2 (13:17):
kind of do these things, it performs really well.
Speaker 3 (13:20):
And so, like... you know, it's funny now that, like, deep research is a thing, right? Like, that's, like, the easiest thing to do with all of these agents. It's just, like, write the report and, like, search the web, right? Like, so easy to do, right? Yeah. And so, like, what happens when you want to do, like, the next things, and have it, like, yeah, like you said,
(13:42):
talk to APIs or things like that.
Speaker 2 (13:45):
It's yeah, it's interesting to see how it evolves.
Speaker 1 (13:49):
Yeah, it does. Does it ever evolve to, I'm gonna, you know, kind of build my own tools or my own capabilities? Or do you just lock that down, so it's like, you have to ask?
Speaker 3 (14:00):
So I've tried... uh, my preliminary... my initial goal was, could I get it to build itself?
Speaker 2 (14:08):
Right? So, like, if I threw just, like, a plan, a goal at it, like, hey, I want
Speaker 3 (14:14):
to create, like, a self-assembling agent system in Ruby, right? And could I
Speaker 2 (14:19):
Get that to actually happen?
Speaker 1 (14:22):
Right?
Speaker 2 (14:22):
And, right, like, what agents would it build?
Speaker 1 (14:26):
You know?
Speaker 3 (14:26):
It created a coding agent for Ruby, right? It created, like, uh, you know, I don't know, PM-like agents that, like, managed a project, right? It did all these things, and it got, like...
Speaker 1 (14:43):
It got to a.
Speaker 2 (14:43):
certain point where it was just like, all right, it just spent way too long, and it, like, said, oh, I've gone through too many turns, and now I can't... Like, I'd say it was decent. You know, it wasn't what I
Speaker 3 (14:56):
Would code, right, but it's a little rising how far
it did get and the agents that it built, and
so I've actually reused and repurposed some of the agents
that it made in my own interesting, which is kind
of fun. And you know, the probably the biggest benefit
to me personally is I got it to use Olama locally,
(15:17):
so I could just use StarCoder or something like that, to run on my own machines and not have to spend any more. Yeah, but I'm trying to evolve it.
It has this, like, extension system that I'm trying to adapt, where you can give it a domain. So, like, if you want it to be scoped on healthcare, then you can add, like, specific knowledge to that domain
(15:39):
and have it use it in its assembly and execution.
And I have, like, this initial, like, self-learning system that I've started to explore, where it keeps track of records of itself and tries to create patterns and, like, strategies based on how you use it, so it'll
(16:00):
like slowly evolve in different ways based on how you
like train it ultimately or tell it how to learn.
And so I've been exploring... that's probably where I want to spend more of my time next, is kind of getting it to recognize things that it does well and doesn't, and how do I tell it and inform it
(16:24):
when it's doing something that it shouldn't be doing or not.
I haven't really figured out that like human intervention aspect
right at the moment. It's kind of just like making
that up on its own.
Speaker 2 (16:35):
So it's kind of interesting
Speaker 3 (16:37):
to explore what models do and what they learn on their own, because they are all different. It's kind of fun to rerun the same experiments with different models and see how each evolves differently.
Speaker 1 (16:49):
Yeah, I bet. So I'm a little curious, you know, as you get into this: like, what is it showing you or telling you about using agents or building agents? Like, what has it taught you? Any lessons, like, when I do this on this other project, I ought to build my agents more this way or that way?
Speaker 3 (17:08):
Yeah, I would say the biggest thing I've kind of taken away is that LLMs are not good at, like, managing their own artifacts, right? Like, if you tell it to, like, do something and, like, keep track of something, it's going to do it differently almost every time. And, like, you might see some normalization, but, like, it's going
(17:30):
to take a lot of prompting to get it to be, like, deterministic at all.
Speaker 2 (17:36):
And so I've found that, like, trying to...
Speaker 3 (17:39):
help it avoid maybe some of those pathways... to generate specific things in certain formats, to just avoid those. Getting back to the lessons, it's hard to tell what is a good lesson or not, right, because so much of it is, like, the LLM deciding what to do. And I guess what I've found is, the more structure
(18:00):
and rails that you give to LLMs, the more deterministic you can make what it does. And so... it doesn't know anything, but if you can help kind of visualize things for it, it seems
Speaker 2 (18:20):
To produce much better results.
Speaker 3 (18:23):
What I mean by that is, like, you know, it has a ton of training data. So, you know, you kind of know... you have a good idea and understanding of what training data it has just by asking it a bunch of questions, right? But you can get an idea for, like, what concepts it has, too, right? And so I like to, like, poke the models all
(18:45):
the time on what concepts they can recognize, right? And if you give it a new concept, can it, like, continue to, like, reason about that concept and, like, you know, mutate it as, like, you would, right? Like, if you're going to create, like, a user object in Rails, right, and you wanted to add a new attribute to it,
Speaker 2 (19:08):
is, like, that straightforward?
Speaker 3 (19:09):
Like, would you be like, well, why would I add, right, like, a shopping cart to a user? Maybe you would, right? But, like, would you add, like, I don't know, something super unrelated, like... I don't know. Maybe that's a bad example. Let's say you had a school. Would you add, like, a grocery store to it?
Speaker 1 (19:27):
Right? Right? Uh?
Speaker 2 (19:28):
Can the LLM like know that kind of thing?
Speaker 3 (19:31):
The answer is yes. The LLMs are very good at those connections of things, right? And so, like, the further away the concepts are from each other, actually, the better, like, the generations will get, right? Because that's what it's doing. It's, like, trying to hone in on very, like, similar things, right, and, like, what the next things are. And so
(19:52):
if you can give it more, like, funneling... going back to Obie's, like, you know, narrowing the path, right? The more you can do that, even on a conceptual level, the better, right? And so that's where I've kind of, like, been blown away by how well this works.
Speaker 2 (20:10):
It's, like, I can just give it more and more
Speaker 3 (20:12):
concrete concepts for it to reason about, and it can figure out how they work together, right? And then, okay, well, if you then help it, like, tell it how those things connect and work together, it does even better. And so, like, the more things that you can, like, you know, glue together and firm up, like, the better
(20:33):
it performs, and the more deterministic
Speaker 2 (20:34):
you can get it. And so I guess that's kind of what I've
Speaker 3 (20:37):
taken away, is, like: if you can decouple that idea and, like, make a bunch of things so that you can define those, like, concepts and how they work together, then you'll get the best result
Speaker 1 (20:49):
from all of this. Gotcha. So which is the most important to master, then? Is it the prompt? Is it the definition of the problem? Is it more refinement of the tool?
Speaker 2 (21:04):
It's always the definition of the problem.
Speaker 3 (21:06):
I mean, LLMs are dumb, right? Like, they can't come up with their own stuff, right? Like, you could try your best and, like, tell it to create the best company in the world. It's not gonna make any money, right? Like, it has no desires, you know? Like, it's missing a lot of the things that it takes to, like, create something substantial, like, to create something that's meaningful to you,
(21:29):
because it doesn't really care about you.
Speaker 2 (21:32):
It doesn't care about anyway.
Speaker 3 (21:34):
So, you know, it's always... you know, that's why, you know, spec-driven development, if you've heard of that, like, that's become so popular. Because, like, really, what you're telling it to make and do is, like, the most important part.
Speaker 2 (21:50):
Yeah, and so the more... yeah.
Speaker 1 (21:53):
It sounds a little bit like... so, we had a conversation on JavaScript Jabber with Eric Kens... anyway, he works for Amazon, and he was on talking about their Kiro editor. And we talked about it, and then literally, like, two weeks later, Cursor added their killer feature, which was the plan, right? So you had Agent, you had
(22:14):
Ask. Now you have Plan, and so it'll actually pull together the entire plan and things like that. And so it sounds like what you're talking about is, the better your plan is and the better you can specify what you want, the better the tool works.
Speaker 2 (22:27):
Yeah, totally, and the better the plan works.
Speaker 3 (22:30):
Yeah, you know, forget the tools on their own. Like, the tools are just actions that can be taken, right? And so it's all about the plan. It's all about what you really want to do, right? Like, if I tell my son to go outside and, like, clean up,
(22:52):
I wouldn't expect anything to happen, you know, Like what
does that mean?
Speaker 2 (22:57):
You know, it wouldn't mean anything for me. If I was asked to go outside and clean up, I guess, like, rake? Like, I don't...
Speaker 1 (23:04):
Yeah. So I guess part of my question then is, you know, it seems like AI is evolving so fast, right? And so, like, you were talking about this autogenetic AI agent stuff, and I was like... I mean, you explained it to me in, like, two sentences and I understood what it was, but I don't know if it's something
(23:24):
that I would have dreamed up on my own. And it seems like, you know, this is just another step along the way toward wherever we wind up with AI. And so part of me is wondering, is this a tool or a technique that people are going to start adopting now? And where does it lead
(23:45):
us over the next six months to a year?
Speaker 3 (23:48):
I think we're already starting to see that. Where, like... you know, where I first saw this style of AI use was, like, error handling. Some error happens while you're developing, and you ask an LLM, can you fix this?
Speaker 2 (24:03):
Can you handle this case?
Speaker 1 (24:05):
You know?
Speaker 3 (24:06):
And a lot of times, especially for JavaScript errors, right... that was, like, the classic example back in the day: when you're using an LLM and you want, you know, JSON back, and it gave you improperly formatted JSON, and you ask the LLM again, hey, fix this JSON, and ninety-nine percent of the time it'll fix that JSON,
(24:28):
so you don't have an issue, and it just keeps going, and stuff.
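[Editor's note: a sketch of that fix-this-JSON loop: parse the model's output, and on failure hand the broken string back and ask for a corrected version. The retry budget and the stubbed `llm` are assumptions.]

```ruby
require "json"

def parse_with_repair(raw, llm, attempts: 2)
  JSON.parse(raw)
rescue JSON::ParserError
  raise if attempts.zero?
  fixed = llm.call("This JSON is malformed; return only the corrected JSON:\n#{raw}")
  parse_with_repair(fixed, llm, attempts: attempts - 1)
end

# Stub: pretends the model repaired the payload.
llm = ->(_prompt) { '{"name": "Gus"}' }

p parse_with_repair('{"name": "Gus"', llm) # => {"name"=>"Gus"}
```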
Speaker 1 (24:30):
I feel lazy when I do that, but I do it.
Speaker 3 (24:32):
All the time, all the time, right? And so, like, this makes a lot of sense, and this is a little bit, like, autogenetic, right? Like, you're getting the system... like, the self-healing aspect is, like, autogenetic. Like, you're getting the system itself to generate, you know. And I think it's going... like, we're starting to see, you know,
(24:54):
products crop up around this kind of thing, like security analysis, security prevention, and, right, like, being able to get ahead of, like, these issues ahead of time. Having something... like, we're kind of seeing the ambient response so far, of things that are just latent, sitting in your system,
(25:14):
that are then analyzing, and then setting up actions to do later. Which is less, like, of this, you know, maybe generative aspect of itself, but it is a little bit, like, you know, still that self-healing, self-improving nature, which is along the same lines in my mind. And so I'm wondering if
(25:38):
we're more, like, getting to that. Like, you know, Nvidia is trying to make their own, like, kernel, right, where an LLM sits on the kernel, which is really interesting. And what does it look like when the system that you're using in general, right, is, like, an agent, right? And that's, like, kind of the
(26:01):
mind-bender, right? Like, yeah, okay. Like, if you just log on to a computer, you start talking to it and it can do stuff for you... like, kind of wild, right? Like, what does that do to apps?
Speaker 2 (26:12):
Right? When
Speaker 3 (26:12):
you can be like, all right, well, create Slack, I want to communicate with my friends, right? And, like, right, off it goes... I was
Speaker 1 (26:18):
going to ask, how far down the rabbit hole does it go, right? Where it's, okay, well, I'm going to generate myself... because, you know, initially I could see it where it's, okay, I've got a browser, I've got this, I've got that. But yeah, eventually it just gets to the point where it's like, I'm going to generate a windowing system, right? I'm going to generate a something-I-can-make-API-calls-with. And so, you know, my
(26:42):
interface to the world, through my operating system, is completely different from yours, because it built something around how I think, how I communicate with the world, how I research things, and how I approach different things, as opposed to you. And yeah, and so I don't use
(27:02):
the standard app. I use the custom whatever that it knows.
Speaker 3 (27:06):
I, like... I mean, I think Google was onto the right thing with their agent-to-agent protocols, but I think they were just too early, right? Like, we need more protocols. Like, everybody, like, trashes MCP for being, like, oh, it's just, like, a wrapper around OpenAPI. Which maybe it is, but, like, I think the definitions of the specifications are the value, right? Like,
(27:31):
defining what it means to be a tool: how that tool interacts, how it can, like, respond, how it's defined, and then, you know, going further, what are the protocols that can communicate with it?
Speaker 2 (27:47):
And so, like, all of that stuff is very important. And, like,
Speaker 3 (27:50):
that is kind of, like, what is missing, and what will be most important if we move to this world of, like, an operating-system level, right? It'll just be, you know, a huge, like, pool of protocols. Like, okay, if you're creating an app, like, on this new system, the artifact of that is the
(28:10):
protocol of how to use it, that then other things can inspect and then build for, right? And so I see this kind of... and it's a little funny, because, like, the path to get there is, like, a bunch of trash, right? Like, the path... where we are today is, like, currently trash, you know? So, like,
(28:31):
we had to get through all of this, like, bad generation before to be, like, confident enough now, where it's like, we're really good and you're kind of, like, impressed by it. But, like, to get to that next level again, you have to get through all of the trash work. And we're, like... we're already seeing that with, like, AI slop, right? Like, that's the new thing.
Speaker 1 (28:52):
We've always done this, right? It's just, it feels like the pace of iteration is so much faster, right? I mean, you know, go back through Rails, right? You know, some people had some other great ideas. They put them into Merb. We went through that heinous upgrade, you know, to Rails 3, which was super painful. The asset pipeline situation, right, where we went from Sprockets
(29:16):
to Webpacker to, currently, Propshaft, which is better, but it's still not, like, super intuitive, friendly, whatever-you-know-fixes-all-my-issues, right? It's just way more approachable than Webpacker, right? And so we're going through the same thing here, where... I mean, I remember having conversations with just regular people, like my father-in-law, where
(29:38):
he's like, well, never trust AI, because it hallucinates all the time. And I said, well, give it a couple of years, because they're going to keep improving it, right? And so, does it still hallucinate? Yeah. But does it do it a lot less, and does it have a lot more tools at its disposal? Absolutely. And so again, like you're saying, I mean, we're just going to kind of
(30:00):
have to see how this evolves. It'll be really interesting to see what gets picked up and what gets dropped, and if we find a better way. I mean, one of the issues I have with things like MCP is that it takes up a whole ton of my context to tell it all the things it can do. And, you know, it's like, no, I want the context to be: here's all the stuff that Chuck cares about, right?
(30:22):
Figure the F out, right? And so...
Speaker 2 (30:26):
Yeah, it's interesting. A concept was brought up at.
Speaker 3 (30:29):
the AI Engineer Code Summit recently in New York.
Speaker 2 (30:34):
A concept called progressive disclosure.
Speaker 3 (30:36):
And it's an interaction design, like, principle of just, like, incrementally, like, disclosing new, like, information to a user, right, or to an interface. And this is now being applied a lot to, like, coding agents, because of this exact problem you're describing, where, all right, you have, like, I don't know, thousands of MCP servers and tools and documents,
(31:01):
and, you know, like, there's no way it can keep all that in its context. How do you, like, surface the right things at the right time? Like, it's a common current problem, and this seems to be, like, the best solution that I've seen so far: like, basically creating, like, smaller, chunked, like, summarizations of what the different things are, that can be surfaced, and then providing
(31:24):
it, you know, a link, ultimately, to find out more, right? Right, like man pages, right? Like, man, you know, that's why, like, CLI tools will always work better than any MCP server you set up: because, like, the help mechanisms are so, like, token-conscious, yep, and also,
(31:47):
like, available and straightforward. Like, it's all normalized, right? Like, there's a protocol that, like, it follows, right, and gives you the information that is needed in order to learn how to use it. But also, you can just be like, you know, show me all the commands I have, right? And it doesn't give you, like, pages of text. It gives you, like, a one-liner of, like, all the commands, right? Yeah,
(32:10):
and, like, that is very valuable to an LLM, right?
Speaker 2 (32:15):
Yeah, it's, like, this kind of concept...
Speaker 3 (32:17):
I think it's being used more and more. Just, like, Claude skills are a perfect example of this progressive disclosure, where you have, like, this metadata, the front matter they call it, where you can say, hey, this is, like, information about what the skill does and is, and the things that can be used with it and the tools that it uses, and, like, just surface this, right, to all the other ones, and it does, like, an
(32:39):
incredible job.
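[Editor's note: a sketch of progressive disclosure as described here: keep only one-line summaries in the prompt, and load a skill's full body on demand. The skill names and structure are made up for illustration; this is not Claude's actual skill format.]

```ruby
# Each skill carries a one-line summary (its "front matter") and a full body.
skills = {
  "pdf_report" => { summary: "Generate a PDF report from structured data",
                    body: "(full multi-page instructions, loaded only when chosen)" },
  "web_search" => { summary: "Search the web and return cited snippets",
                    body: "(full multi-page instructions, loaded only when chosen)" }
}

# Cheap context: one line per skill, like `--help` output.
index = skills.map { |name, s| "#{name}: #{s[:summary]}" }.join("\n")
puts index

# Only after the model picks a skill do we pay the tokens for its full body.
chosen = "web_search"
puts skills.fetch(chosen)[:body]
```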
Speaker 2 (32:41):
Yeah, way better, yeah, way better than the tools.
Speaker 1 (32:45):
Yeah.
Speaker 3 (32:45):
Not that tools aren't valuable, but I think we're, like, kind of moving away from the idea of these, like, actionable things needing to be dumped at the LLM all the time.
Speaker 1 (32:57):
Right. So I kind of want to spin back a little bit toward the autogenetic AI team, you know, where it kind of spins up its own set of agents to do the work. Are you using this anywhere in production, or experimenting with this anywhere where it's likely to go into production?
Speaker 2 (33:16):
No. Not yet. I do have...
Speaker 3 (33:19):
I do have some experiments in the works of using it within a Rails app that I hope to get to production. It's hard, because, like, the models still aren't quite there, right? They're better, right? You know, the best performance I've seen is from Opus 4.5.
Speaker 2 (33:39):
But it's just, like... so it's a
Speaker 3 (33:40):
Little less extensive now, but like right, yeah, not not
worthwhile to like introduce in a you know, in any
official capacity on my own, so to be cost effective.
Speaker 1 (33:54):
Yeah. So I guess the other question related to that, then, is... and it sounds like the answer to this question is going to be, yeah, that's correct. But the main limitation to this is the limitations on the capabilities of the models. And so, if we had stronger models that were able to write better code or make better decisions, or maybe had a larger context window
(34:16):
so it could figure out more stuff and remember more stuff... those kinds of things would make this a much more effective approach. Yeah.
Speaker 3 (34:25):
I mean, it's interesting, because I see production changing, right? Like, when people say, well, is this in production... you know, your local system is ultimately becoming production now, right, with all of these coding agents. Like, if I wanted to... like, for my job, personally... you know, maybe I'm biased, because, like, I'm a programmer, right? But, like, if
(34:49):
part of my job is to, like, review code, right? And so a coding agent hooks up to the GitHub CLI, and it can review the PRs that are open that I'm assigned to, and give me a breakdown, and have some actionable comments. Like, right? And so, like, all
(35:11):
that stuff is kind of, like... your local machine is becoming more and more of a production system that you can run more and more things on, right? And especially as we start... like, as I start to see this stuff coming, like, I feel like the production line is going to blend with local so much, because of what you can produce on your local machine now, where you don't need to offload that work to the cloud anymore,
(35:34):
because you can just generate a quick HTML page, and whether or not it, like, persists or not, like, that's fine, because it was just to show you something, right, or to do something and, like, create that interface. And those, like... the interfaces are becoming temporary, right? Like, at least for our work, right? But I can see...
(35:54):
I see it more and more, right. And so, like... a big question is always, like, well,
what can't
Speaker 2 (36:01):
you do with Claude Code, right?
Speaker 3 (36:03):
What can't you do with ChatGPT, right? What can't you do with these things on their own? And the answer to that becomes less and less things.
Speaker 2 (36:12):
Right.
Speaker 3 (36:13):
And so, if you're running all of this on your machine anyway, like, where is production, right, if you're doing more and more work on these things? And so I see it as, like, your local environment kind of, like, taking more of a chunk out of that production. And I use this library a lot on my local machine to do different tasks that I know it does well, right?
(36:35):
And so, like, really, most of the exploration that we're seeing is: well, what does it work well doing?
Speaker 2 (36:42):
And the answer to that is, like, more things,
Speaker 3 (36:43):
because the models are getting better, right? But it's also, like: go use it, try it out, and if it works for you, like, just keep using it.
Speaker 1 (36:53):
Yeah. Like, I think, if I can restate what you're saying... because whenever we think of production... like, you know, if you work at a company that, you know, has, like, this giant Rails app, or, you know, microservices, or however you're architected, production is that set of code that you deploy out to the web servers so that people can interact. But what you're saying
(37:15):
is, production anymore is wherever the work's getting done. And so, you know, in that case, then, you're using, for example, the autogenetic tool to get work done locally, because that's where the work is done, you know.
(37:35):
And similarly, in a lot of these other cases, the line's going to get blurred, from, is it on the production server, to, you know, where am I getting the work done, right? So is it going to work on my machine and then connect to some quote-unquote production system out there in the cloud? Or, you know... and how much of this is going to live back
(37:57):
here, with wherever I'm at and wherever I'm doing the work and wherever I'm interacting with it?
Speaker 3 (38:02):
Yeah, you know, we had Dave Kimura on at one point, where he was talking about his local server setup, which is just mind-boggling. If you ever get to talk to Dave, ask him about his local server setup. But I guess, right... this was years ago now, right? But
(38:23):
he had it to a point where, you know, he could basically send a web request to this server with a job to do, right? Like, almost like a job in a queue, but to his local machine, right? Right. And I see it, like, more... like, from a system that, like, you know, had an app and everything.
Speaker 1 (38:42):
I think it was.
Speaker 3 (38:43):
This was for transcribing the videos on Drifting Ruby, right? And he had this job that was, like, okay, enqueue the transcription process, and it would, like, basically trigger back home at his computer and run it through some local models, so he wouldn't have to pay for any of it, and then transcribe it, right, and then send the request back up. So again, like, a perfect example
(39:05):
of, like, where is production, right? Like, right. Granted, he has his own, like, legit production setup at home.
Speaker 1 (39:15):
Yeah, he has a server rack and all kinds of
stuff at home.
Speaker 3 (39:19):
But, you know, like, it's still the same, you know? It's still the same, like, where's the work getting done? I feel like more people don't want it to get done in the cloud, you know? Like, all of the DHH stuff, you know, has been pushing back against the cloud, and having, you know, machines running in your closet.
Speaker 1 (39:37):
Right.
Speaker 2 (39:37):
I feel like that's going to become more and more popular.
Speaker 3 (39:39):
And like what does that mean then for production if
you're distributing it like that?
Speaker 1 (39:45):
Yeah. And that's really interesting, too, right? Because, yeah, it seems like, the way you're talking about this... yeah, a lot of that becomes a lot more not only possible, but convenient, right? Where I have more control, I have more capabilities, it's more personalized, and it's all because it's right here in front of me. And so then the
(40:05):
services in the cloud become less about oh what can
I do with the user interface and a lot more
about hey, what can you do for me in whatever
thing I'm trying to accomplish?
Speaker 2 (40:16):
Yeah, exactly.
Speaker 3 (40:17):
I mean, imagine the day where you can run, like, a GPT-5-level model, like, just on a computer in the background, right? Like, eventually it'll get there. So, like, what does that mean if you no longer need, like, the internet to do that kind of computation, right? Right. And, like... there was... slowly, the internet's...
(40:43):
We, like... that's how we communicate. We still have to, like, communicate to, like, solve any real problem, right? But yeah, again, it, like, turns more into, like, the protocol and communication layers, more than anything. I just see that aspect of things eating more of this than anything.
Speaker 1 (41:01):
So, one last question I guess I have on this, and it's somewhat related to something you said before, but also related to this approach that you've got here, where you've got it kind of auto-generating agents. And before, what you said was... and I don't remember if it was before the call or not, but you mentioned that you're not writing as much code, right? And so my
(41:23):
question is... I guess there are two parts of this, so I'll ask the first one: has this changed the way that you do your job?
Speaker 2 (41:31):
Yes?
Speaker 3 (41:31):
And no. I mean, this particular project... I have another project called the AI Software Architect that is, like, literally just markdown files, and that does most of my job for me, and it's great.
Speaker 2 (41:41):
And I would say yes, like, you
Speaker 3 (41:43):
Know, it's hard because like coming if you're new, right
coming into the industry, like you still need to have
like the knowledge and experience of like how to develop
and practice software and systems, how systems work.
Speaker 2 (41:58):
Maybe you don't have to know
Speaker 3 (42:01):
about, like, semicolons and where to put, like, syntax stuff anymore. But you still need to know how to build systems, and how to integrate, and the concept building...
Speaker 2 (42:11):
I see that becoming more prevalent, and more...
Speaker 3 (42:14):
maybe something that people should focus on, is that conceptual... like, conceptualization and compression, and, like, you know, managing and thinking through those concepts. Because that is ultimately what your job becomes when you just lean heavy into all these agents: like, you're just conveying concepts that you want to create that don't exist, right? And, like, it
(42:34):
can't do that, because it's not trained on those concepts, right? And so it, like, really just comes down to... and again, I hate to keep, like, mentioning DHH, but, like, you know, all of his old stuff is coming up again, where it's like, you know, software is writing, right? Like, when you're writing software, you're really an author. Like, you know, it's a communication, like, thing. So,
(42:59):
like, my job is still the same, and I still need to, like, communicate in very specific ways to get what I want out of it. And the better I communicate, the better the things work, right, in the long run, and maintain better, right, and people can, like, work with it too, right? And so nothing has really changed, I guess,
is what I'm saying.
Speaker 1 (43:20):
Yeah. Except it seems like you're operating at a different level, where you're actually now telling systems what you want, instead of getting in and telling the system what to do, if that makes sense, right? Where you're writing more prompts and less code.
Speaker 2 (43:34):
Yeah, that's true. Yeah, I feel like that's true of
most people at this point.
Speaker 1 (43:38):
Yeah.
Speaker 2 (43:39):
Yeah. Well, if not, I would like to hear from you, you
Speaker 1 (43:42):
Know, right, well yeah, and I yeah, I'm just thinking
of all the reasons why you may end up in
a slightly different situation. But I would be speculating, and
I don't know how useful that is. I guess that.
The second part of this question, then, is is, so
let's imagine that I'm I've been doing this for a while, right,
(44:04):
You know, I've been writing Rails for twenty years now, you know. But I talk to other people, you know, it's ten years or fifteen years or five years or whatever, you know. So I'm out there, you know, I'm building apps in Ruby, and I'm looking at things like this and saying, okay, well, now you have code writing code, you have models writing code. A lot of this is more prompt engineering than software engineering, but you have to
(44:25):
understand the software engineering. So what does my job look like, you know, within the next year, two years, three years, four years? And what do I need to be paying attention to and learning, so that I'm not just a repository for "I can generate code"? Because it looks like a machine's going to be able to do that soon.
Speaker 3 (44:46):
Yeah. I mean, the thing is, like, the machines still generate things that people need to know about, right? Like, it's not like it's creating novel things. It's not creating HTTP, right? Like, it's not creating its own protocols... which maybe will be another future, right? Maybe. But so, like, people
(45:07):
still need to understand those concepts, right? How does HTTP work? Like, if you're making something that you want to put on the internet... like, the internet isn't going
Speaker 2 (45:16):
To change in the next ten years.
Speaker 3 (45:19):
Maybe it will, but, like, unlikely, right? And so, like, you still have these protocols that are used that you need to understand. You know, you need to understand, you know, HTML... you know, maybe not so much the HTML, like, aspects of things, but, like, how it's communicated and
Speaker 2 (45:37):
Served, right and how users see it.
Speaker 3 (45:39):
If you're, like, doing dynamic content and programmatic, like, flows, like, you need to understand, like, system processing and architecture, and, you know, best practices for, like, how services
Speaker 2 (45:54):
Communicate to each other.
Speaker 3 (45:56):
Right? Like, there's a bunch of fundamentals that are still applicable, like, in traditional computer science. And maybe, like, the whole testing aspect of things is changing, like, how you test and what you test. Like, I feel like we're still not, you know, all in agreement on those aspects of things anyway, right? But it's still there.
(46:17):
People still... like, I wouldn't trust delivering anything to people that pay me money, right, if I didn't test it. Because I don't want to have to, like, be a customer-support person, right? Right. And, like, also, like, a little bit, shame on you, right? Like, right.
Speaker 1 (46:36):
So it seems like, though, that the skill set has shifted from being able to sit down and actually, like, crank the code out, to being able to ask for what you want. But it also seems like what you're saying is, you still have to conceptually understand how software goes together and how the systems work, so
(46:57):
that you can intelligently ask for what you want and
you can also intelligently validate what you get.
Speaker 3 (47:02):
Right. Yeah, you have to know the concepts. I mean, circling back to that, right? Like, yeah, you may not need to know Git, but you have to know how code-change flow works, right? Yeah. Like, even if it uses SVN or, you know, God forbid, SVN... but, like, you know, can you use some other, like, you know,
(47:22):
code management tool, right? Like, you need to know how those things work, just fundamentally, so that you can be like, what changed, right? Like, so you know how to ask the thing for the very specific things, you know, so maybe it can, like, make the changes that you need and, like, you know, revert things when they
go wrong, right? But you need to know those keywords, right? You need to know that those are possible, and you need to know the concepts. So, again, back to the concepts: there are just, like, a bunch of concepts you need to, like, know, right, so that you can, like, really max out with these coding agents.
Speaker 1 (47:59):
Right. So I guess the final question I'll ask, because I think this is just getting us to an interesting place, is: the way I learned a lot of the stuff that you're talking about having to know is, I had to do it on my own for a long time. And then you've got some of these newer folks that are, you know, graduating from college or coming up through
(48:21):
the boot camps or self-teaching, that haven't done it a zillion times like I have. So how do they learn this stuff? Like, is it a different avenue? Do the AI systems, the LLMs, actually help them learn how to do this? I mean, what do you do for people who are new, that haven't built, you know, twenty years
(48:44):
of muscle memory on how to build web apps, or how to use the command line, or things like that?
Speaker 3 (48:49):
The process is still the same. Like, you know, go learn it yourself, and use these tools to your advantage, right? When something doesn't go as you expect, or you don't know something, ask, right? I guess that makes sense. Yeah... the harder part is, like, you know, okay, knowing what to ask, right? And there, I feel like there's
(49:10):
still a lot of room for traditional, like, courses, right? There's going to be a lot of times where you're just not going to know what to ask, and you'll never get there. And traditional, like, courseware and workloads like that, that teach very specific things... will it help you? It'll introduce that to you, right? And, like, maybe once you've visualized, like, what all the, like, the
(49:35):
course materials are, then maybe you can dig in. Like, maybe this whole, like, MIT OpenCourseWare, like, you know, Stanford free-learning stuff... you know, that could be the future, because it just provides all of the concepts and material that you would need to know for very specific things. And then the agents can help you,
like, because you can create plans, right? A learning plan: like, help me learn how to do this thing, here's the course material, right? And, like, why do you need the course? But at the same time, you know, like, it's the concepts and the material and, like, all these things that
Speaker 2 (50:11):
Are going to be like kind of the value.
Speaker 3 (50:15):
And so, don't go out there exploiting, uh, you know, open courses, please.
Speaker 2 (50:22):
You know, it's a.
Speaker 3 (50:22):
Great platform, lots of great learning out there, and if
you want, you can earn your own.
Speaker 2 (50:28):
Degrees out of it. From what I've seen.
Speaker 1 (50:31):
Kind of. Yep, makes sense. All right, well, is there anything else that people ought to know about any of this stuff? We kind of meandered through a bunch of stuff beyond the autogenetic stuff, but I think it's helpful for people to understand what it's doing.
Speaker 3 (50:43):
Yeah, I mean, if you're working on something, let me know. I'm codenamev on Twitter. This stuff is just really... I'm working on another project now, exploring the idea of letting LLMs pave their own path with their own memories. Actually, one of my, uh...
(51:08):
one of my coworkers, Martin, he, like, made this prompt where you basically just, like, ask an LLM to, like, generate a profile about itself, right? And, like, what is it... like, almost, like, to try and get its, like, soul out of itself. And it's really interesting. Like, yes, the models are all different... you know, ask all the different models the same thing, and, like, the results are wildly different.
(51:30):
It kind of gives you some, like, insight into, like, how the models operate, just, like, holistically, I think. But yeah, I have another project... I'm calling it Seedbox... that's, like, just asking an LLM: all right, here's what you did last time, what do you want to do next? You know, more experimentation like this. We need more, like, you know, stuff out there to experiment with, and just,
(51:54):
like, see what these things are capable of. Because we're trying to come up with ideas for what they're good at, and, like, is that, like, the best?
Speaker 2 (52:02):
Right?
Speaker 3 (52:02):
Like, shouldn't they also be coming up with ideas for what they're good at?
Speaker 2 (52:06):
Right? What does that mean?
Speaker 3 (52:07):
You know, there's a lot... there's so much exploration. I'm excited to see it.
Speaker 1 (52:11):
Yeah.
Speaker 2 (52:12):
Yeah. Very cool.
Speaker 1 (52:13):
All right. Well, let's go and do some picks. Do you have some picks?
Speaker 2 (52:18):
Yeah. We were talking before the show... there's a project called OpenCode.
Speaker 3 (52:21):
It's, like, a Claude Code alternative, fully open source, works on Mac, Windows, Linux. It's really impressive. I've been using it lately, and yeah, it's fantastic.
Speaker 2 (52:33):
Check it out.
Speaker 1 (52:34):
Nice. I'm gonna throw out a board game pick, as I am wont to do... and then... man, I sound so old-fashioned when I say it that way. Anyway, and then I'll probably throw out something else. So, yeah, the game that we've been playing lately, that we played the last time I got together with my friends, was Infiltrators. And so Infiltrators is, you have a bunch of suspects
(52:56):
and you're trying to figure out who they are, and each is just a color and a number. And so usually on your first turn, everybody nabs a suspect, and so you know you have information that nobody else has, because you can't tell people who your suspect is, but you know who your suspect is. And so then you play cards on other people, or on yourself, in order to figure
(53:17):
out who the suspects are. And so it's a process of elimination, right? Because when you play a card on a suspect, it either has something in common with it or nothing in common with it, and so it's just a process of elimination to figure out what it is. And then it has kind of the concept that you get out of The Crew, where it has multiple missions,
get out of the Crew where it has multiple missions,
(53:39):
and so it'll say, use these colors, use these numbers.
You have so many bullets, right because you're executing your
suspects when you know who they are. And so anyway,
it's it's pretty fun place, pretty fast. You play up
to five people, I think, and anyway, really really enjoyed it.
If you don't want the gun-and-bullet aspect, you can track that however you're going to track that. You know,
(54:00):
it's basically, you have so many tries, and then, you know, you try not to run out of cards and things like that. So anyway, super-duper fun. So I'm gonna pick that. And then I'm trying to think what else to pick. I mean, lately, I've just been using the plan feature on Cursor, and then, right, I go look at what it did and tell it what it did wrong,
(54:21):
and it cleans it up, and it's pretty nice, you know. And, like I said, for my full-time job, I'm on Copilot, and, you know, it works more or less the same way and does a great job. And so, liking those... so I guess I'll shout those out, just kind of as where I'm sitting now. And then one last
(54:41):
thing I'm going to mention is... so, I've been putting together two things, and one of them is... and I'm going to do an episode on it. I'm not sure if I'm going to release this episode first or that one, so if you've already heard the whole episode on it, great. But I'm a big fan of the 75 Hard challenge by Andy Frisella. You know, I've lost a bunch
(55:04):
of weight doing 75 Hard. I feel like I've leveled up as a person doing 75 Hard. And so I thought, well, what if there was something like this for code, right? And so I'm putting together a Code Forge 75 challenge. It's based very heavily on 75 Hard, except instead of workouts, you're, you know,
(55:24):
writing software. He has you read a book for ten minutes a day; I adopted the same thing, right? Go find a tech book on something you want to learn, you know. So it's that kind of thing. So I'll walk through all the different pieces on the other episode. I'm looking to expand it to be a... so, he expanded it out to be a year-
(55:45):
long challenge. So, you know, 75 Hard is the first phase of the challenge, and so I'm looking to do that too, because I want to encourage people to speak at an event, whether it's a meetup or a conference. I want to encourage people to, you know, generate content and things like that, because I think it helps your career.
And so there will be other phases to the program
(56:06):
similar to what 75 Hard does. And I'm planning on putting together a little tracking app with Rails and Hotwire Native, so that you can track your progress, right? And then, if you miss one of the items, then you have to start over. Anyway, so I'm putting that together. And then the other thing I'm
putting together. Both of them are relevant to what we've
(56:27):
been talking about today. I've been finding, as I talk to people... and Valentino, I think, made the case for this, honestly. Yeah, because I work with people and talk to people, there are folks that have major gaps in their knowledge in some areas, and I do too, honestly, with regards to Ruby or Rails. Right, there are things where it's like, oh,
(56:47):
I didn't know it did that, or, oh, I didn't know that, you know... the way that we do things now is, we architect it this way, so that, right... Because I have a life, and so I don't, you know, get into all the nitty-gritty of what came out in some point release, and how do I use it, and what's the best way around that. And
so if you want to do the Code Forge seventy-
(57:07):
five challenge and you want to level up on Rails, then, I'm putting together what I'm calling Ruby Geniuses... or actually, it's Rails Geniuses, but it'll have Ruby and Rails content, and there will be that daily level-up kind of thing. And then there'll be different membership levels. So if you just want the tutorials and stuff, great. And then, if
(57:28):
you want to be part of the training and things like that that we're doing every week, and the book club, then you can get a higher level. And then I'm doing the same thing for AI, and that's also going to be focused on the tools, which is part of what we talked about today, and then also building AI agents and AI-enabled features for applications. That stuff's just moving
(57:49):
so fast. I feel like, you know, having a group that gets together on a regular basis and looks at it and talks about it and says, hey, have you seen this, is very, very helpful and handy. And so you can sign up for one or the other, or both.
There will be a discount if you sign up for both.
But I plan to address this... you know, we'll have weekly meetups and book clubs and things like that.
(58:09):
I'm trying to figure out exactly how to do the book clubs, so that you're kind of getting more of the evergreen stuff that's not changing as fast with the AI approach and tools and things like that. But we'll figure that out as we go along.
But I feel like if you're leveling up in these areas,
then you're going to put yourself in a position where,
(58:30):
no matter how much the models do for you, or how different companies approach their workforce, or things like that, you'll always have a competitive edge. Because, you know, you understand the architecture of what you're building, or what you're getting help building, and then you're also going to understand the tools and the architecture and what the capabilities are
when you need to build something with AI. Anyway, you can go check those out. It's going to be railsgeniuses.com and aidevgeniuses.com. So I'm just gonna put those out there. You'll get a whole lot more explanation on the Code Forge 75 episode, if you get a chance to listen to that. And then I
(59:14):
am going to be offering a launch discount through the fifth of January. And the reason I'm going through the fifth of January is that, as I've talked to people, some people want to expense this to work, and some people have used all their budget for twenty twenty-five and want to expense it in twenty twenty-six. So,
(59:35):
I will give you the opportunity to do so with the discount. And if you want to use up the rest of your twenty twenty-five budget and things like that, then you can do that too. And if you need some kind of arrangement where it's like, well, I only have so much twenty twenty-five budget and I want to use some of my twenty twenty-six budget, then reach out to me and we'll figure it out. But I'm not going to hard-pitch it. If it sounds like
(59:56):
something you want... I'm putting it together because it's something that I wanted. I was like, I am missing stuff, and I feel like, if I get people together in a group, then I will miss less stuff. And then it also gives me an excuse to go and learn things that, you know, maybe apply beyond what I get in my full-time job. So anyway, that's what we're doing.
(01:00:18):
And yeah, I just... I want to give you the tools so that, no matter where any of this goes, you know, you have the skills, you have the knowledge, and you can go and kind of build whatever kind of life and career you want. And I guess we'll wrap it up here until next time.
Speaker 2 (01:00:30):
'Til next time.