Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
What's going on?
Speaker 2 (00:02):
Everybody?
Speaker 1 (00:02):
Welcome to another episode of Adventures in DevOps. Warren joining
me again. I keep making you feel like the new guy.
But it's been, like, what, a year?
Speaker 2 (00:12):
Now, almost that long. And I've got my pick prepared. It was a recent... well, I don't want to spoil my pick, so I'm not going to say what it is. But the conclusion is that AI may be making us stupid. The truth is that AI has caused a huge decrease in our critical thinking, or how much we're utilizing it and not necessarily training that skill,
(00:36):
and this could be the beginning of the downfall of humanity.
Speaker 3 (00:39):
And that's all I'm gonna say.
Speaker 4 (00:41):
I don't know. I sort of take issue with that, because I remember hearing the same thing, like my teacher telling me all about spell check: oh, you're not going to have a computer in your pocket, you need to get over this dyslexia thing. And as it turns out, I do have a computer in my pocket, and no, I still do not know how to spell. We're fine. The skill set took a hit, but it's gonna be okay, everybody.
Speaker 2 (01:03):
The same thing happened with calculators as well. But I'll
say more about that at the end of the episode.
Speaker 1 (01:09):
Right on, Hi, Jillian, welcome. Hello, all right, this is
going to be a cool conversation. Joining us today, we
have the founder and CEO of Warp the Warp Terminal,
Zach Lloyd. Zach, welcome.
Speaker 3 (01:24):
Thanks for having me. I'm excited to be here.
Speaker 1 (01:25):
I'm excited to have you on here. And just to
pick your brain about this because I first saw the
Warp Terminal. It's been several years now, so you've been
working on this for a while, and it was just like,
at first, it was so confusing to me because I
was like, wait, this isn't what my terminal's supposed to do.
(01:46):
It's like offering up stuff, like, how do I
trust this?
Speaker 2 (01:51):
So before we dig into that.
Speaker 1 (01:53):
Tell us, tell our listeners, a little bit about Warp and what it does.
Speaker 5 (01:59):
Yeah, so Warp is a reimagination of the terminal. You can use it like a regular terminal, so you drop it in and use it in place of, I don't know, whatever you're currently using: if you're on a Mac, iTerm or just the stock Terminal app. The idea
behind it is that it has a much more sort
(02:20):
of user-friendly user experience, so, you know, basic stuff like the mouse works, for instance. But it's also, increasingly, about being intelligent. And so when you use Warp, the main distinguishing thing these days is that you don't have to enter commands; you can just instruct the
(02:40):
terminal in English, tell it what you want to do, and it will sort of solve your problem for you by translating your wishes into commands using AI. And it looks up whatever context it needs and kind of guides you through whatever task you're doing, whether it's a coding task or a DevOps task or setting up a new project.
So it's a totally different way of using the command
(03:04):
line that I think is pretty fun to use and definitely more powerful than your standard terminal. And we're kind of having an internal debate at this point about whether or not it's even right to call it a terminal, because it's so fundamentally different from what, you know, people expect when they use a terminal. But it does work. It's, I think, a really, really
(03:25):
nice to use terminal as well.
Speaker 1 (03:29):
Yeah, for sure, Like the terminal features are definitely all
right there and ready to go, and then it just keeps going. I think a really cool way to get used to it is to just drop it in as your replacement terminal, and then you can start picking and choosing all of these other things that it has as you get comfortable with it.
Speaker 4 (03:50):
I want to say I really like that it uses
the mouse because I have like a bit of a
horror story of trying to get somebody set up with Vim, and I felt very proud of myself, like, oh look, I got the scientist using Vim. And then they were like, great, how do I use a mouse? And I was like, oh no. So I think, I think that's a nice feature.
Speaker 5 (04:07):
The other thing that it will help you do is
figure out how to quit Vim if you end up in Vim.
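For anyone following along, the fix the AI suggests here is the standard Vim escape sequence; a minimal sketch:

```sh
# From inside Vim: press Esc to leave insert mode, then type one of
#   :q    quit (refuses if there are unsaved changes)
#   :q!   quit and discard unsaved changes
#   :wq   write the file and quit
```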
Speaker 4 (04:15):
It's not what we're trying to do here.
Speaker 5 (04:17):
Which is, it's one of our most popular features: you can ask the AI how to quit Vim.
Speaker 3 (04:22):
It's very funny, because people do end up in there and they're like, what? Oh.
Speaker 4 (04:26):
You mean, like quit the application?
Speaker 3 (04:28):
Yeah, like quit the application, not...
Speaker 4 (04:29):
Like quit the addiction. Okay. No, people love Vim.
Speaker 1 (04:34):
Now there's a twelve-step program for that: Warp, it is.
Speaker 4 (04:39):
They need a new one. They need twenty steps.
Speaker 1 (04:42):
Cool. So, how long have you been building Warp?
Speaker 5 (04:46):
We've been at it for a while. So, the company started during COVID, so, like, the middle of twenty twenty, and we first launched something publicly in twenty twenty one. And it's just sort of evolved from something
(05:07):
where the main value initially was, hey, let's make this tool a little bit easier to use and fix some of the UX, into something that's much richer, especially when ChatGPT came out, and we were even doing some AI stuff before that.
Speaker 3 (05:23):
But we've been working on it for a while now.
Speaker 1 (05:27):
Right on. What's the thought process that goes into figuring out how to integrate AI into this?
Speaker 3 (05:42):
Yeah, so we went through a bunch of different stages.
Speaker 5 (05:45):
So the first sort of stage of AI in Warp was essentially, like, translate English into a command. So you could bring up this little thing, and it actually predated ChatGPT; it used something called Codex, which was, I think, an OpenAI coding API. And you could be like, you know, search my files for this
(06:08):
specific term, and it might generate a find command or a grep command, something like that. And it's very much a one-to-one English-to-command translation.
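To make that one-to-one translation concrete, here is the kind of command that request might produce; the exact flags and file patterns are illustrative, not necessarily what Codex generated:

```sh
# "search my files for this specific term" could become:
grep -rn "specific term" .

# or, restricted to certain files first:
find . -type f -name "*.py" -exec grep -l "specific term" {} +
```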
Speaker 3 (06:18):
The next thing that we did was when ChatGPT
Speaker 5 (06:24):
came out, we did what I think a lot of apps did at that time, which was put a chat panel into Warp. So you could have a sort of chat panel on the side where, you know, you could ask coding questions. You could be like, how do I set up, you know, a new Python repo with these dependencies, and it would give it to you as a chat. And then it's sort of a copy-paste type experience, where you would take what was in
(06:46):
the chat and move it into the terminal. And that was cool, but, I would say, kind of limited extra utility compared
Speaker 3 (06:52):
to just, like, doing it in ChatGPT.
Speaker 5 (06:56):
The biggest change that we made was basically the idea
that the terminal input where people type commands also could
be used directly as a conversational input to work with
an AI, and that the AI itself would end up sort of interspersed in the terminal session.
Speaker 3 (07:17):
And we call this agent mode.
Speaker 5 (07:19):
And so in this world, it's not just that you
chat with it, it's that you tell it what to do,
and it's able on its own to invoke commands to
kind of gather the context that it needs to help you do a thing. So, for instance, going back to that same example, help me set up a Python repo with these dependencies: instead of doing
(07:41):
it in a chat panel, which we got rid of,
you just type that into the terminal input, and we
Speaker 3 (07:46):
Detect that you're typing English and not a command.
Speaker 5 (07:48):
And when you hit enter, it follows up and says, like, okay, what directory do you want this in? And you tell it what directory, and then it'll make the directory for you, it'll cd into it, create the Git repo, do all the pip installs, and even generate the initial scaffolding of the code.
Speaker 3 (08:07):
If it hits an error, it can debug its own error.
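A hedged sketch of the command sequence an agent might run for that request; the project name and dependencies below are invented for illustration:

```sh
mkdir my-python-app && cd my-python-app   # hypothetical project name
git init                                  # create the Git repo
python3 -m venv .venv                     # isolated environment
source .venv/bin/activate
pip install requests pytest               # stand-ins for "these dependencies"
mkdir src tests && touch src/main.py      # initial scaffolding
```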
Speaker 5 (08:10):
And all of this is happening within your terminal session.
And so you know, you get to a point where
it's like you're actually driving the terminal a little bit
more in English than you are in commands, and it's kind of crazy how it's changing how people use the terminal. Like, I was just looking at this yesterday: in Warp now, like, a quarter of what is
(08:31):
going on in the terminal sessions is actually just English
and AI generating commands.
Speaker 3 (08:35):
And not people typing cd and ls anymore.
Speaker 5 (08:39):
So that was the sort of evolution, so from a
very bolt on thing to something where it's like the
actual fundamental experience of how you use the tool has changed a bunch.
Speaker 1 (08:50):
Yeah, so you're completely changing the interaction there. Instead of
saying "how do I," just saying "go do it."
Speaker 5 (08:57):
Exactly, exactly. And that actually takes, like, developers don't necessarily think to do that. They're very much in the, okay, let me Google this, let me go to Stack Overflow type of mindset, and it's a totally new behavior if you're a developer to just be like, I'm just going to tell the computer what to do. It's a little
(09:17):
bit scary, because, like, it's your terminal, and now the computer is just doing stuff in your terminal. But I do think that's the future of how development, DevOps, whatever you're doing as a developer, works.
Speaker 3 (09:31):
It's going to move from this.
Speaker 5 (09:32):
Like, let me run a bunch of queries, or let me open up a bunch of files and hunt things down, to a world where you're just sort of like, hey, let me actually tell my smart AI, whatever you want to call it, assistant, agent, whatever, to start me on this task.
Speaker 3 (09:47):
And then, you know,
Speaker 5 (09:49):
the agent will loop me in, get more info, you know, leverage me when there's ambiguity to resolve. But it's
going to be an imperative, I'm-telling-it-what-to-do way of working. And the cool thing about the terminal for doing that is that's kind of what the terminal is set up for.
(10:11):
If you think about it, the terminal is set up for users to tell the computer what to do. It's just that we're upping the level of abstraction, from you telling it in terms of grep and find and cd and ls, to telling it at the level of a
Speaker 3 (10:25):
task, what you want it to do. And so that's the
Speaker 5 (10:28):
vision that we're building towards.
Speaker 1 (10:31):
Right on. I think it's a really great analogy, you know, because we've seen that in other areas of software development, where you just keep abstracting things away more and more, yep, and coding at a higher level. But this is one of the few projects where you're actually doing that, doing it at the task level rather
(10:52):
than at the coding level.
Speaker 5 (10:54):
Correct. And, like, so you can code in Warp. I don't know, did you all see Claude Code? Have you played with that at all?
Speaker 4 (11:06):
I have, a little. Yeah, so Claude Code
Speaker 5 (11:08):
is super interesting from our perspective, because it's all terminal based, and it's all this imperative, like, you run a terminal program, you tell Claude Code, hey, you know, make this change for me, and it skips the file editor and IDE entirely to do coding stuff. And we also have a very
(11:31):
similar feature in Warp, except that you don't run a program within the terminal; you just tell the terminal what to do. But I think it's interesting in terms of the types of tasks that you can do. And if you even look at, like, have you all used Cursor and Windsurf, those types of apps, to do any coding?
Speaker 1 (11:52):
Yeah a little bit.
Speaker 5 (11:54):
So yeah, in those apps, the sort of initial feature that was the magic feature, and this is true for GitHub Copilot too, was that it will do great code completions for you. So it gives you this ghost text as you're typing, and it sort of completes your thought. And the sort of thing that they're building out now is much more like a chat
(12:16):
panel within those apps, where you can tell the computer what to do and it generates code diffs, and they're creating something that looks an awful lot like a terminal interaction, but
Speaker 3 (12:28):
Within the code editor.
Speaker 5 (12:29):
And so I do think there's this general shift that's
going on for coding, and I think it's also going
to really impact people who are doing production DevOps, basically any type of interaction with systems, where you just sort of start by telling the computer what to do somehow.
So it's pretty neat, pretty neat to see.
Speaker 4 (12:50):
So I really like this because I spend a lot
of my days trying to convince biologists that, like, you
need to be able to use the terminal at least
a little bit, and it's always a tough sell, because being like, well, I'll go over here and take this Linux class, is not what they want to be doing, let's say. So just being able to say what you want, just in English, and it will at
(13:10):
least get you to the directory and install your Python environment and do this kind of stuff, is just so much nicer than what I've been doing in the past. And I do, I like this. This is great.
Speaker 5 (13:21):
Yeah, I mean, the other cool thing for people for whom it's not their natural environment, let's say, and they have to use it,
Speaker 3 (13:30):
is that,
Speaker 5 (13:32):
as you use Warp to do this stuff, it teaches you. So it doesn't just, like, obfuscate, at least for now. The way it does it is, you type in, like, hey, I want to create this project, and it says something back to you like, okay, here are the commands that need to be run in order to create this project, are you cool if I run these commands? And so, Warren, to your earlier point,
(13:52):
like, is this just making us all kind of dumber and not knowing how to do anything? It's possible. But there is also an aspect of, it's kind of like working with the smart person on your team who can show you how to do things, and, you know, hopefully you pick it up, because it is in some ways faster, if you know what you're doing, to just type the commands. And I think in general,
(14:13):
like I don't think it's a great outcome if everyone
who's doing development or working in the terminal doesn't know
what the hell is going on, because inevitably you're going
to get to some point where you kind of need
to know in order to fix something. And so, you know, the hope is that this doesn't make people dumb, that it makes people more proficient. But there is, I think there's a risk for sure.
Speaker 2 (14:35):
There are actually two things that this reminds me of
a lot. And the first one is a long time ago,
and I don't know how well it's maintained, but there
was a program that you could install into your terminal
called fuck.
Speaker 3 (14:46):
Yeah no, No, we've partnered with that exactly.
Speaker 2 (14:50):
You've never seen this before? Something that actually happens sort of often is that a command-line program you run will tell you sort of what you did wrong, in a way like, did you mean this? And instead of having to retype the command and fix the problem, you could just type fuck, and it would read the output and then do that thing. And that's
(15:11):
the first one. So if you haven't seen that, I highly recommend at least, you know, checking that out.
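The tool Warren is describing is thefuck; installed and aliased (for example, pip install thefuck plus eval $(thefuck --alias) in your shell config), a session looks roughly like this:

```sh
$ git brnch
# git: 'brnch' is not a git command. See 'git --help'.
# Did you mean 'branch'?

$ fuck
# git branch [enter/up/down/ctrl+c]   <- proposes the fix; Enter runs it
```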
And the other one is this thing that totally changed how I use the terminal for doing software development, for interacting with Git repositories: there's actually a Git configuration that you can set up to automatically fix typos. So if you type something wrong, it will swap the letters around
(15:31):
and be like, oh, okay, you probably meant this, with ninety-nine percent accuracy, and then just do that command anyway. And you can also set a timeout, like, you know, if you accidentally type something and it's gonna start deleting all of your code base, you can be like, oh, wait, no, I don't want you to do that.
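The Git feature Warren is describing appears to be help.autocorrect, where the value is the delay, in tenths of a second, before the corrected command runs:

```sh
git config --global help.autocorrect 20   # wait 2 seconds, then run the guess

$ git stauts
# WARNING: You called a Git command named 'stauts', which does not exist.
# Continuing in 2.0 seconds, assuming that you meant 'status'.
```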
But that actually brings me to a question I want to ask, which is: I see more and
(15:52):
more of these pieces of software, I'll call them agents, that are interacting with your operating system directly. And for me, like, I'm super risk averse. I want to keep every LLM, or non-thinking creature, in its own private box where it can't accidentally delete my entire operating system, because that's what I want
(16:13):
to know.
Speaker 5 (16:14):
It's just like, why would I trust the agent more than myself? Yeah, go ahead. Good point, I think, yeah.
Speaker 3 (16:24):
Like, so how do you manage this, is the question?
the question?
Speaker 2 (16:27):
Or, yeah. I mean, it's almost like I would want to run two computers side by side, one of them... I mean, I already am really concerned about running external software on my machine from, like, a malicious standpoint. Very rarely will it break my operating system; I don't remember the last time that happened. It was probably when I was using Windows, like over
(16:47):
a decade ago.
Speaker 5 (16:48):
Uh.
Speaker 2 (16:49):
But when it comes to LLMs and things like that, I know from firsthand experience, there's a non-zero chance that it just figures out the wrong thing to do. And that's the sort of thing that I almost want to sandbox as much as possible, and I feel like we're not getting closer to that, because our operating systems don't allow it as much.
Speaker 5 (17:09):
So it's a great point. I mean, you have a
couple of choices. Let's say you're using Warp: one, you can just turn this stuff off, if you're just like, I don't trust that, I don't want it.
Speaker 3 (17:19):
So that's fair.
Speaker 5 (17:20):
There's a toggle that just says AI off, and that's it; you're back to, you know, you're in control. There's also, sort of, you can control the level of autonomy it has. So one of the levels that you could have is that it can't do anything on its own,
(17:40):
so it can suggest commands, and you can then manually approve anything it suggests. There's a level up from that, which is you can provide an allow list and a deny list. It could be like, oh, it's fine, it can run cat, it can run less, it can't run rm. You can go a level up from that: like, I want it to be able to
(18:01):
run read-only commands, and let an LLM determine what it thinks is a read-only command, which it's pretty damn good at, but not perfect. If you had some crazy piped thing or a heredoc or something like that, it might get confused. But it's pretty good.
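To make that concrete, here is a hypothetical allow/deny policy of the kind described (not Warp's actual configuration syntax), plus an example of why automatic read-only classification is genuinely tricky:

```sh
# Hypothetical policy, for illustration only:
#   allow: cat, less, ls, grep, find
#   deny:  rm, dd, mkfs, shutdown

# Why "read-only" detection is hard: this command starts with cat,
# but the heredoc plus redirect actually writes a file.
cat <<'EOF' > notes.txt
not so read-only after all
EOF
```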
Speaker 3 (18:17):
Or you could be like, you know, YOLO, like,
Speaker 5 (18:20):
like, I just wanna, it's not that big of a deal if that messes up my Git repo or whatever, and I'm gonna let it run.
and I'm gonna let it run. And then the other
thing that we're working on that we don't have yet
but I think is really important in this world of
like more autonomy is is what's the fastest way to
like spin up.
Speaker 3 (18:36):
a sandbox where,
Speaker 5 (18:39):
you know, whatever state you want it working on is replicated, and it can just go to work there without you losing any sleep that it's gonna do something irreparable.
I think, like, an undo functionality is super interesting too. It's not trivial to do that in the terminal. The terminal is a stateful place where, you know,
(19:01):
you can delete
Speaker 3 (19:02):
files, and there's no, like, undo.
Speaker 5 (19:05):
So you kind of got to figure it out; sandboxing is probably the safest. But we're aware of this issue, and it makes sense. A surprising number of people don't give a shit, I will say. They're just like, this thing is just magic, and it makes me so much faster and makes my life so much more fun that
(19:27):
I don't really care.
Speaker 3 (19:28):
But it's a totally fair point. I wouldn't... like, they're not using this at NASA.
Speaker 2 (19:31):
And I'm like, you know... well, I think, yeah, not yet, right, but probably. Honestly, I have some theories there, but I think if I say them I'll definitely get canceled. So, uh, yeah, I think that's sort of the problem. And I think this is, again, I don't want to spoil my pick, but
is again I don't want to spoil my pick, but
(19:54):
realistically it's that a large majority of the population falls
into this area of maybe they have concerned, but they're uh,
they're apathetic to actually turning off whatever. The source of
the potential problem is there's not a good way to
moderate AI from outside or l MS from outside the
black box. You It's really like all or nothing in
(20:17):
a lot of ways, and most people are not going to turn it off, because they still perceive some huge amount of value from utilizing them. And so, you know, I'm not going to turn off the future; I'm just going to be really scared now about what it's going to do when I'm not around.
Speaker 3 (20:33):
Yeah, yeah, I think that that's right.
Speaker 5 (20:37):
And people obviously have a strong predisposition to do whatever
you set the default to.
Speaker 3 (20:43):
They might not even, like, know what the heck is going on. But I don't know,
Speaker 5 (20:49):
developers are maybe a little different. Like, I feel like if anyone's gonna go tweak the knobs, it's gonna be, like, you know, developers. Except...
Speaker 2 (20:58):
I don't think so. I think everyone has their depth where they feel comfortable controlling things, and if they're comfortable pulling in an LLM to solve part of their job or part of what they're doing, it's probably in an area they don't care about, and so they're probably not going to. I think another aspect here is:
I have a very close friend that went away on vacation, and the person who was cat-sitting for
(21:21):
them left some plastic on the stove, which was induction, and it was totally fine, it was off. But one of the cats managed to turn the stove on and actually melted the plastic. Yeah. And this is really funny, though, because there was no LLM in there, right? The cat was fine, the cat,
(21:46):
the cats were fine. The thing is like, I really
do fear that at some point someone's gonna put an LLM in my stove. It's going to happen at some point, and I don't think we can avoid that future. And I do fear that it will just turn on one day when I'm not here and start doing things where, like, I have no need for that, and I don't...
(22:08):
I'm not thrilled about this future, but it's coming.
Speaker 5 (22:10):
Kelsey Hightower had this good tweet, which was, he was like, I'm actively at the point where I will pay more to not have a smart appliance. And I was pretty much like, I get it. Like, I don't need my refrigerator having Wi-
Speaker 3 (22:24):
Fi or whatever. That makes sense. On the LLM side, if you're a developer,
Speaker 5 (22:33):
this might not be a popular opinion, but I think you're not really going to have a choice as a developer, if you want to continue being a productive developer, on whether or not you
Speaker 3 (22:44):
Adopt this technology.
Speaker 5 (22:45):
It's kind of like being like, oh, I only want to work in assembler, I'm not going to use a high-level language. That's not a viable choice going forward. I think what you're gonna have to do as a developer, if you want to be productive, is learn how to use all this stuff, and learn how to use it in a safe and productive way.
Speaker 3 (23:05):
Is that unpopular?
Speaker 2 (23:08):
Let's have a fight. No, let's go around. You know, Jillian, what do you think? Agree, disagree?
Speaker 4 (23:13):
I think so. Like, I'm pretty judgmental over developers that don't use a debugger, so I can see this kind of being just the next sort of iteration in that process. Yeah, because, I don't know, I think at some point everybody's kind of drawn to development because everybody has the I-like-
(23:34):
to-learn-new-things disease, and writing code is really good for that. And then at some point you get really tired of it, and so then AI is really good for that process, when you're like, all right, I'm sick of having to learn the new things, I just want the AI to tell me what to do, and there we are. So I'm gonna go with mostly yes, except that I feel like I
(23:55):
might get some angry responses on the Internet for that, so I'll hedge a little bit.
Speaker 5 (24:01):
Like, there is a fear, an understandable fear that developers have, that this is going to replace them.
Speaker 3 (24:06):
I don't think that's even remotely true.
Speaker 5 (24:08):
There's also a thing that I've noticed, which is that a lot of the more experienced, really strong developers on our team, and who I've worked with, kind of get the least value out of it initially, and are most likely to be like, oh, this is a stupid suggestion from this thing, or it's creating bad code.
Speaker 3 (24:28):
And so they have kind of an anti take on it. But
Speaker 5 (24:33):
eventually people get to a sort of moment with it where they're like, oh shit, this actually makes my life a lot easier and does some of the stuff that I find super annoying. And I think the proper outlook to have towards it is: this is another tool that I can use. Just like if I master sed and grep, I'm awesome
(24:55):
as a developer.
Speaker 3 (24:56):
I think if you can figure out how to effectively use the LLM, it just makes you better. I think that's, for now, the right way to look at it. Warren, what do you think?
Speaker 2 (25:08):
Well, I have the opposite controversial opinion, so, you know, I was maybe thinking about keeping my mouth shut. So I have this perspective that it definitely replaces inexperienced engineers. And the problem with that, and I think this is where the fear comes from, is that LLMs do not really replace inexperienced engineers; people think that LLMs will
(25:31):
replace inexperienced engineers, and do that anyway. And I think we're already starting to see that happening. And the problem with that is you're paying money for these tools, and you're not training your organization's people on leveling up their skills in these areas, and you'll become more and more dependent on the tools and move further away from those skills.
(25:52):
Now, on the productivity side, I still think it costs way too much. I think there has to be an orders-of-magnitude cost reduction in generating answers before this becomes high value.
Speaker 4 (26:05):
You mean, like, monetary costs?
Speaker 2 (26:07):
Like, the monetary, environmental, et cetera. It's still, like, uh, none of the AI companies are making money, the ones that are pumping out AI. You know, OpenAI, I'm sure, whatever.
Speaker 3 (26:21):
We know.
Speaker 2 (26:21):
Anthropic's not making money. Whatever they are, they're below zero; it's negative billions of dollars per year on this. So, you know, that's not a sustainable model from a society standpoint.
Something's gonna have to change: either these tools will completely go away, or the costs will have to come down. I think the last thing is that
(26:42):
what we find from a productivity standpoint, at least for myself and the companies that I work with, is that the bottleneck isn't doing more work, or specifically writing out code and pushing that out, so the tools don't solve the needs that we have. It's okay for us to still be slow in this way, or not be productive in this way, because that's not where our
(27:03):
bottleneck is.
Speaker 5 (27:08):
I disagree with almost everything you just said, but I'm gonna.
Speaker 3 (27:14):
Like it's interesting.
Speaker 5 (27:16):
It's interesting to have this discussion, because I'm so in the AI bubble of, like, Silicon Valley people and AI tech companies. And the main contention that I hear amongst the people I talk to, on the investing side and on the AI company side, is, like, how quickly are
(27:38):
we getting to AGI? And Warren is coming in hot with being like, these things are not even
Speaker 2 (27:43):
valuable. They're not... it's not even AI. Like, I hate this term; these companies are lying to the masses of people saying we have AI. All we have is transformer architecture, which is able to, you know, create LLMs, and they will always hallucinate. And that's the ridiculous thing. I'm waiting for someone to say how OpenAI is going to recoup
someone to say, how is open AI going to recoup
(28:03):
the billions of dollars they are losing every single year?
Like where does that? Where does that change? Because money
will run out at some point?
Speaker 3 (28:13):
Oh well, do you want to go, or do you want to take that?
want that?
Speaker 1 (28:17):
I'm going to jump in real quick; then we can come back to that. This is good. I think I tend to agree with you, Zach, that there are going to be people who are resistant to AI. And I think the primary place I've seen this is people who are really passionate about and invested in their chosen language.
Speaker 4 (28:40):
You know.
Speaker 1 (28:40):
I think if we look at the category of people who will argue Go versus Rust, and they've pinned their career on, I'm a Rust developer or I'm a Go developer, they'll try something like AI, or any of those related tools, and say, oh, well, it got this wrong. That's clearly why I'm
(29:04):
not going to rely on this thing, because it got this one thing wrong. And you'll get a lot of resistance from those people.
Speaker 3 (29:15):
I think the.
Speaker 4 (29:16):
AI is like another tool. I mean, I guess more
than what you're saying with like all the money being
spent in the environmental cost That is very valid. But
from the tool perspective, it's like I'm already so dependent
upon tools like without dictation software, pie Charm and VIM,
I'm completely useless. I have like zero utility to anybody
anywhere at any time, like in a professional context, anyways
(29:39):
that I am, you know. I mean, I do have kids; occasionally I'm useful in, like, a human context. But from a professional standpoint, if I don't have those things, I'm not going to get any work done.
And so AI has just become, like, another tool for me to use, and I just see it from that perspective. From the money perspective, like, I don't know, but humanity spends a bunch of money on a bunch
(30:02):
of things that we don't recoup an investment from. It's just, the money never actually runs out. We don't have a gold standard anymore. It's an arbitrary concept; there's always money.
Speaker 1 (30:17):
As long as the printer companies keep making printers that
print the money.
Speaker 4 (30:21):
I mean, isn't that kind of what we're doing at this point, though? Like, isn't that what the governments of the world have sort of decided, or are doing?
Speaker 2 (30:27):
Well, there's a secondary problem here, actually, which is that
energy consumption is too high. Like even shave off the
environmental impacts, the energy cost is so high that people
are now starting to have their lives affected by having
spotty continuous energy flow into their own appliances in there
and their house lights and stoves, ovens, whatever. And that's
(30:48):
happening near data centers, where increased energy usage is required to run LLMs. So I think that problem is likely to get worse even if the money doesn't run out.
Speaker 1 (30:59):
But if you had a smart refrigerator, it could adjust for that, exactly.
Speaker 4 (31:05):
If the things are smart, you know, then what do you even need the energy for? We're fine.
Speaker 2 (31:10):
I like the perspective. I mean, it is a tool, for sure. And I think the thing that I see is that it used to be that you could type into Google and get a website that helped you answer the question you have. And you can't even do that anymore, because at least that search engine has become utterly worthless, and so you need a replacement for it. And I think it's worse from an accuracy standpoint than
(31:33):
Google at its best, but it's for sure better than Google now, and I think that's a worthwhile trade-off. You have to change if you're still using Google, or if you still believe that your one true programming language is the only one for the future; I think that's just a mindset which doesn't make sense.
Speaker 1 (31:48):
So, Zach, you wanted to come back and answer or
respond to the money issue.
Speaker 5 (31:56):
I can't speak to the energy stuff. I can speak to just, like, whether it's valuable. So, for developers paying twenty to forty bucks a month for AI in their core tools, if you just think of how much development
(32:17):
time costs, you have to save, I don't know, twenty minutes or something for that to be a worthwhile thing.
Speaker 3 (32:27):
And that threshold has.
Speaker 5 (32:30):
been crossed a long time ago, in my opinion, just from using these tools as a user; the amount of time that they save me, it's a no-brainer trade-off. I don't know if anyone on the back end of this is making money yet. I do know Warp, we have a positive margin
(32:51):
when people pay us for AI, and so it could
be that the model companies or the hyperscalers are just taking a huge loss on Warp's profit. But, you know, from just pure economics, people find the value, they pay for it, they stick with it at a surprisingly high rate; we don't have very high churn on it. And so I have to believe that, just from that
(33:13):
and from like actually using it, that there's a ton
of value. It's certainly true that these things are not infallible.
And I guess you could debate from a philosophical perspective whether or not they're intelligent. I actually think they have some level of intelligence now. It doesn't quite work the same way that human intelligence works, but they're able to, like, I don't know,
(33:38):
they're able to do things that up until a couple
of years ago you would only say a human could do.
So, I personally am super excited by the progress. Like, I studied a bunch of philosophy; I have a philosophy degree in addition to a CS background. I think it's absolutely fascinating what it says about what intelligence means. It's not, like
(34:01):
you said, it's not perfect human intelligence, but it's something, and I think it's a pretty awesome technological advance.
Speaker 3 (34:08):
So I'm more pro AI, more bullish.
Speaker 5 (34:11):
I think Warren's a little bit more on the skeptic side.
That's all.
Speaker 3 (34:14):
I think.
Speaker 2 (34:15):
I can't assign the word intelligence to it yet, because of the architecture that it's utilizing; it's just a probabilistic word predictor. And I think we need a different architecture, other than the Transformer architecture, to actually reach anything that would be fair to call AI in any capacity. I do want to jump into how you're utilizing it
(34:37):
though, at Warp. Sure. Are you running your own foundational models, or are you passing queries to something configurable, like, I can put in an OpenAI API key or an Anthropic API key? What's going on there?
Speaker 3 (34:51):
You can pick your model.
Speaker 5 (34:53):
So we support the Anthropic models, the OpenAI models, Google's models; we support US-hosted versions of the DeepSeek models,
Speaker 3 (35:02):
Even some of the open source models.
Speaker 5 (35:05):
You can't go directly to them, because our server has a whole bunch of logic on, like, the prompt engineering and sort of different agents for different types of tasks, so there's a logic layer in between. But the basic intelligence underlying the AI in Warp currently is
Speaker 3 (35:25):
The foundation models.
Speaker 5 (35:26):
There's a chance at some point that we'll get a little bit more into the, like, make-a-model-to-predict-your-command type business. But currently we find that the best thing for our users is to sort of use the foundation models; we're not going to spend billions of dollars on, you know, GPUs or whatever and train models
(35:48):
right now.
Speaker 1 (35:49):
That would probably change the profitability statement you just made earlier.
Speaker 3 (35:54):
Yeah, well, so I would say, like we are at
the application layer.
Speaker 5 (36:00):
If you look at this as application layer, model layer, hyperscaler: we're at the application layer.
Speaker 2 (36:04):
No, it makes sense. I mean, but in that way, the model providers are definitely subsidizing the profitability, because they're taking huge losses. I mean, I don't know who's making money there.
Speaker 3 (36:17):
It's just a question of where the value is going in this whole thing. Like, the, you know, the other thing...
Speaker 5 (36:23):
Like, so, the model providers... I think the big question mark to me
Speaker 3 (36:31):
is open source models. Like, if you have open
Speaker 5 (36:34):
source models, especially ones that are comparable quality, if the OpenAIs and Anthropics of the world can't maintain a real lead in quality or latency or something like that, how does the world work in that case? And so the open source alternative, where you run it yourself and you don't have to pay the sort of margin to OpenAI, is
Speaker 3 (36:58):
Super interesting to me.
Speaker 5 (37:00):
I think that the one place where someone's definitely gonna make money
Speaker 3 (37:04):
is just on, like, serving these models.
Speaker 5 (37:06):
So I feel like, for better or worse, if you're Amazon with AWS, you know, Google Cloud, Azure, or whatever, they're gonna make money, because someone needs to serve these models. And the local versions, which are, I think, another interesting thing to consider, at least currently it's not really practical to
(37:26):
get the same level of power from, like, downloading, you know, Llama. But that's another thing I'm looking at: maybe it's just local models that totally disintermediate the need for these
Speaker 3 (37:39):
Like huge API based cloud models.
Speaker 2 (37:43):
Who knows. No, I mean, you're really onto something there, because it would cost you way more than the price that you would pay to the model providers to utilize their LLMs if you tried to run the open source models locally on hardware that, you know, is comparable and gets you the speed and the accuracy and precision, in order to utilize that.
Speaker 3 (38:04):
Yeah.
Speaker 4 (38:05):
Yeah, we should talk about Warp some more, and, like, its features or whatever. I'd like to speak my commands into the terminal so that I don't have
Speaker 3 (38:19):
to do it. So we added this feature, it's super cool. If you're using Warp,
Speaker 5 (38:24):
you can hold the function key, or you can configure it, and you can
Speaker 3 (38:30):
Talk to your terminal. It's magic.
Speaker 5 (38:32):
You can just tell it what you want to do. It translates it into English and then it runs it, and so it's pretty Star Trek-y from a user experience standpoint.
Speaker 3 (38:46):
So yeah, that is something that we wanted.
Speaker 4 (38:49):
Saving people from the repetitive stress injuries, like, this is what I want.
Speaker 3 (38:52):
Why should people have to do anything?
Speaker 2 (38:54):
Like, I know what Jillian's waiting for: she wants the brain interface device.
Speaker 3 (39:00):
Exactly what I want.
Speaker 4 (39:04):
I think.
Speaker 3 (39:05):
I think it would be a really good Warp feature.
Speaker 5 (39:07):
I'm getting the sense that Warp is actually very well suited to Jillian's workflows.
Speaker 4 (39:12):
It really is, especially since you just said the speech thing, because I'm getting older and I can't type so much, so I very specifically need the speech
Speaker 3 (39:20):
Thing, and why should you have to say it?
Speaker 4 (39:22):
I know, I shouldn't. That reminds me of, like, an episode... I'm kind of an episode person here.
Speaker 1 (39:29):
It reminds me of an episode of The Simpsons where
Homer's in the hospital and the guy in the bed
next to him is on a breathing machine, and he's like, hey,
how come that guy gets someone to breathe for him
and I'm over here doing it by myself.
Speaker 2 (39:44):
See, I thought you were going to bring up the
episode where he tried to get to three hundred pounds
so he could be classified with a disability and use a wand to dial.
Speaker 4 (39:54):
That must be an old episode, like, that must be a real old episode.
Speaker 2 (39:58):
Yeah, that was when it was still good.
Speaker 3 (40:00):
Yeah.
Speaker 1 (40:02):
Doctor Nick's food philosophy was if you rub a newspaper
on the food and the newspaper turns clear, it's good
to eat.
Speaker 5 (40:11):
I mean, I'm pretty lazy, and I'm not, like, ashamed to be lazy, especially when it comes to development.
Speaker 3 (40:21):
I don't want to have to.
Speaker 5 (40:22):
do more work than I have to do to ship something that's useful. So, like, what I care about as a developer... again, there's different kinds of developers, but to me, I'm all in it for, I want to build something cool.
Speaker 3 (40:37):
I want to ship it out to people.
Speaker 5 (40:39):
I want to be proud that I built it. I want it to work really, really well. And I want to do that with the minimal possible effort. And to the extent that I have to put effort into it,
I want it to be effort that goes towards thinking about how it ought to work. And I don't want to spend effort on, like, annoying shit in the terminal.
(41:00):
That's, like, the last place that I want my limited brain cycles to go. I don't want to spend effort either on changing function signatures in my files. I just, I know what I want it to be, and I want to get from A to B as quick as possible. And so, yeah, to the extent that something like AI, and I think Warp for the terminal especially, makes it so I can be
(41:20):
a little bit lazier.
Speaker 3 (41:21):
Again, this isn't, like, the advertising I put on our home page or whatever, but I think it's, like, maybe I should.
Speaker 4 (41:28):
It's a valuable thing; as advertising, it's great.
Speaker 5 (41:32):
And honestly, a lot of the best developers I've worked with in my career are just kind of all about that: don't make me spend my brain cycles on tedious shit and toil. And so I feel pretty good about trying to eliminate that stuff for developers so that you can do the more fun stuff, because
(41:52):
the really fun stuff.
Speaker 3 (41:53):
is, to me at least: how should the product work?
Speaker 5 (41:56):
And then it's like, how do I architect this thing
so that I can make the product work the way
that I want?
Speaker 3 (42:02):
And then the least fun thing is.
Speaker 5 (42:03):
like, the typing in of the words in the text editor or the terminal to do that.
Speaker 3 (42:10):
I don't know if everyone feels the same way.
Speaker 2 (42:12):
No, I think you're onto something.
Speaker 1 (42:14):
I was going to say it, yeah. No, I was going to say, it's much more exciting to work on how the application works than how to center this fucking div.
Speaker 3 (42:25):
Right, vertically.
Speaker 2 (42:32):
And vertically on the page, that's the key. Vertically, you know; horizontally you just use flexbox, no problem. I think, well, you know, there's an interesting thing here, because I feel like if we take this to the natural conclusion, it's probably, like, the managing directors who will then be responsible for building the product by communicating with the AI
(42:57):
technology that we have available, and not needing a so-called technology department in any of our companies anymore.
Speaker 5 (43:05):
So that's, like, a horrible outcome to me. I think it's product managers making software. I mean, arguably that's what's happening. Yeah, I mean, arguably that's what's happening right now; there's just a couple of, you know, people in their way that are telling them that they can't have exactly what they want. That's interesting. Well, I think that's
(43:25):
not how it works at Warp. Like, it could be at some places. But at Warp, for instance, we build something, and, again, we may be different than other places.
Speaker 3 (43:36):
It's primarily engineers who are driving the product direction. Now, we're working on a product that is used by us.
Speaker 5 (43:43):
We're the customer, we're the audience, and so we have
this awesome virtuous feedback loop of like we build it,
we use it, we like something, we don't like something,
and so we drive a lot of it.
Speaker 3 (43:55):
I don't want to change that at all, Like, actually,
I think that's not a good thing to change.
Speaker 4 (44:00):
And so.
Speaker 5 (44:02):
I also just, as bullish as I am on AI, I don't think that we are close to the point where you can build something meaningful without having some technical knowledge. And if anything, again, this is probably not the prevailing wisdom, but I think you need to be more technical, to be able to sort of guide and correct and be the
(44:25):
tech lead for an AI. And if you are an aspiring developer these days, I would say, learn the shit better, learn the fundamentals, the CS, better. Because if you want to effectively produce software in a world where you have someone who's pretty smart but also kind of, like, a savant, and dumb in a bunch of ways, you need to know what
Speaker 3 (44:46):
The heck is going on for when you hit a wall.
Speaker 5 (44:49):
And so, you know, I don't think we're close to a world where it's, like, MBAs building all of our software. No offense to MBAs;
Speaker 3 (44:56):
MBAs are great, but I feel like you're gonna need people
Speaker 5 (45:00):
who are experts in order to effectively use this tool to
Speaker 3 (45:03):
get it to its full capacity.
Speaker 5 (45:05):
And I do think, like, Warren, I don't know, to your point, if you're really junior and you don't learn, if all you do is, say, maybe you've only learned how to build web apps, I do feel like you're a little bit at risk. My advice to those people would be: up-level your CS skills.
Speaker 3 (45:21):
But I don't see a world anytime soon
Speaker 5 (45:23):
where, if you're in a professional software development setting, developers are going away.
Speaker 3 (45:29):
I sincerely hope not.
Speaker 2 (45:31):
I mean, I'm screwed if... I think it's the leap there that's problematic. It's that we know you need the skills in order to utilize LLMs effectively. You're not going to be able to just offload your entire brain to this vehicle and have it go at full speed without thinking; it really does require critical thinking to interact with it effectively. And I think that's what you're saying.
(45:53):
And I think the problem is, yeah, I think part of the problem is that some companies believe that that's not necessarily the case, that you can delegate this out to an LLM and have it.
Speaker 5 (46:05):
Some companies are just buying the hype that we don't
need to hire developers anymore.
Speaker 2 (46:08):
And there are companies out there that are like, you know, we are an agentic building thing. There is, like, the AI software developer Devin, or whatever. Sure. Yeah, so, I mean, I think what I'm saying is, I know those can't work.
Speaker 5 (46:22):
But I think those companies will find out when they try to replace their developers with Devin.
Speaker 6 (46:28):
Yeah.
Speaker 2 (46:28):
I'd love to know, is Devin building Devin? Because I don't think he is, or they are. Yeah. But I think the bigger problem is that the leap from, hey, I'm someone who doesn't have technical capabilities, to, I want a job utilizing technical capabilities, that gap is growing and harder to get across
(46:48):
now, because the technology available for us to interact with is much more complicated than it was five years ago, ten years ago, twenty years ago, and the skills that you get from even training a little bit, like teaching yourself, upskilling even a little bit, are much further away from what companies are looking for. At least that's
(47:08):
my perspective, what I think I'm seeing. And I think the LLMs are contributing to that gap.
Speaker 3 (47:16):
I'm sure like, Okay.
Speaker 5 (47:17):
So say you're a company and you're spending hundreds of millions of dollars on software developers. I'm sure you're like, God, I would like to spend less money and have equal output. And you could be like, okay, I'm going to hire AI software engineers, the Devin example. And I've tried Devin, and it's a neat vision. I don't want to, I'm not gonna shit on Devin. It didn't work that
(47:38):
well for us. I know they're improving it, but it doesn't... that model today does not work. Will that model work in five, ten years?
Speaker 3 (47:48):
I don't know. I'm still skeptical. I think any company
that finds that they want to improve their
Speaker 5 (47:57):
cost efficiency on the software side by replacing their developers is going to... I think they're just going to find that they don't get the ROI on that, and that the better ROI right now is to empower your developers and give them tools that let them be more productive.
Speaker 3 (48:15):
I'm saying this. I'm obviously super biased.
Speaker 5 (48:16):
I run a developer tools company where I'm building something
where the mission is empowering developers. But I truly believe that that's the right way to approach this. And, you know, companies will try whatever they're going to try, but I think that they're going to stick with something that actually gives them the result. The economic incentives are such that,
(48:37):
like, if JP Morgan replaced all their developers with AI software engineers, and then all their banking transactions
Speaker 3 (48:43):
failed, they'd be like, this is not the right move.
Speaker 5 (48:45):
And so I do think that there's like back pressure
on doing something that actually works.
Speaker 1 (48:53):
I think that's a great model, and I encourage them
to do that, and then when it blows up, I
want them to get over to my website where I
have my consulting rates listed.
Speaker 3 (49:03):
Exactly. They're going to need some smart people.
Speaker 2 (49:08):
We're gonna need smart people still, yeah, for sure, for sure. I mean, we actually did a deep dive in this area in our episode on the DevOps report from DORA in twenty twenty four. Okay, I don't know if you've read it, but the actual result was that the value that LLMs were providing to organizations was suspect,
(49:33):
like, it wasn't significantly different from where they had been before. It was very difficult for organizations to justify the value to the bottom line, or the value to the products that were being delivered. I think the interesting thing, the one thing it did say, is that people were happier using the LLMs, but it didn't actually reduce toil, and it didn't reduce
(49:56):
the amount of time spent doing things that they didn't like, which is interesting. I think it gives the most value to people who are positive, optimistic about AI. So if you like AI, you should use this.
Speaker 3 (50:08):
I can tell you our experience from Warp.
Speaker 5 (50:10):
So, the way we think about users coming into Warp: there are some users who are coming into Warp because they're like, I love AI.
Speaker 3 (50:18):
They're like, I love this new technology.
Speaker 5 (50:21):
I want to use it in all my tools. And those are great users for us. They come in, they're like, holy shit, I can use
Speaker 3 (50:30):
a terminal in this totally new way. That is not the majority of users.
Speaker 5 (50:34):
So the majority user for us is what I would call, like, an AI-neutral developer, who might be like, okay, I'm open to this, but there's a lot of hype, I have a bunch of inherent skepticism. And for those users, the challenge for us is to get them to actually see the value of the AI and
(50:55):
actually use it. And the way that we've figured out how to do that is very similar to that tool that you mentioned earlier, the fuck. So, like, when you have an error in Warp and it's like, oh shit, I'm missing this, I don't know if I'm allowed to swear on this podcast, but I'm missing this,
Speaker 3 (51:15):
uh, you know, Python dependency,
Speaker 5 (51:18):
we show something where it's like, hey, we can fix this for you, and all you have to do is hit command-enter and we fix it for you. And that's, like, a conversion moment. And so, I guess my point here, kind of piggybacking off your point, is there's some people who are just into this and they're gonna love it, and maybe they love it even
(51:40):
if it isn't really helping them and they're just messing around with LLMs all day. But I do think, based on our experience converting people who don't inherently want to use this technology, that there must be value, because, like I said, we have a lot of people paying us, and I don't think that people are just going to pay us for something that they don't find valuable.
us for something that I'll find valuable.
Speaker 3 (52:03):
Sure, and a lot of them were not AI enthusiasts
to start.
Speaker 5 (52:06):
There are people who tell us like, oh shit, like
this thing just saved me hours and I love that.
Speaker 3 (52:14):
So that's, you know, my kind of counter to what you're saying.
Speaker 2 (52:18):
Yeah, I'm really curious. You said the commands are going through this proxy layer that you're hosting before interacting with the model providers. I don't know if you can share, but maybe there's some interesting metrics or data that you've been able to collect, based off of what people are looking for, what has been searched, what sorts of problems are being fixed, anything in this area.
Speaker 5 (52:39):
Yeah, so we have a group of alpha testers who give us data collection access, essentially. And so really common use cases where we're helping people are the, like, install-dependencies ones, and the,
Speaker 3 (52:57):
my Git is messed up, like, I did something, I'm in some weird Git state and I need to get out of it.
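For flavor, these are the sorts of commands that come up when untangling a weird Git state; which ones apply depends entirely on how you got stuck:

```sh
git status                  # figure out what state you're actually in
git merge --abort           # bail out of a conflicted merge
git rebase --abort          # or a half-finished rebase
git reflog                  # find where HEAD was before things went sideways
git reset --hard HEAD@{1}   # careful: discards uncommitted changes
```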
Speaker 5 (53:04):
We are increasingly fixing compiler errors for people, so, for instance, simple compiler errors where the error log is in the terminal, we fix.
Speaker 3 (53:13):
We get people who do
Speaker 5 (53:16):
a lot of, like, Kubernetes, Docker, Helm, those types of issues, where there's very heavy command-line usage and kind of, you know, pretty complex commands that you need to run.
Speaker 3 (53:31):
That's another really popular area.
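A couple of illustrative examples of the long, hard-to-remember command shapes involved there; these are generic, not Warp-specific:

```sh
# Kubernetes: list pods that aren't running, across all namespaces
kubectl get pods --all-namespaces --field-selector=status.phase!=Running

# Docker: throwaway container with the current directory mounted
docker run --rm -it -v "$PWD":/work -w /work python:3.12 bash
```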
Speaker 5 (53:33):
We do things where we write scripts for people to automate things that they're doing over and over again. So, you know, it's a mix. I would say, though, the really prime use cases for us to start are things that are pretty terminal oriented. And then increasingly, as people realize you can fix coding stuff in Warp, and we guide them into that, the coding stuff matters a bunch too,
Speaker 3 (53:53):
because developers just spend a lot of time writing code.
Speaker 1 (53:58):
I think one of the things that doesn't really get highlighted enough is that there actually is a pretty steep learning curve to using these AI tools. I think there's an expectation that, oh, it's AI, I just go in and it's going to make my life magical. But really my experience with it has been learning how badly
(54:20):
I actually suck at communication? And that was the first job. Yeah,
Like that was my first job, is to figure out
how to communicate.
Speaker 3 (54:29):
It's weird.
Speaker 5 (54:30):
It's turning every programmer into someone who needs to know how to write, which is kind of a crazy skill.
Speaker 3 (54:36):
But like, yeah, the quality of what you get
Speaker 5 (54:39):
out of these LLMs is highly dependent upon how good you are at prompting them, how good you are at providing them with the right context to
Speaker 3 (54:47):
answer your question. Yeah, who would have thought
Speaker 5 (54:52):
that being really good at writing English would have been the core thing.
Speaker 3 (54:55):
But I guess, like, engineers write design docs. It's not that different from that skill.
Speaker 5 (55:02):
It's a real behavior change and it's a real skill, and I think it's a great observation.
Speaker 3 (55:07):
Agree.
Speaker 2 (55:08):
I mean, I know I went to university specifically to study engineering so that I wouldn't have to read and write words. And now I pretty much just write a lot of blog posts, knowledge-base articles, you know, chat with LLMs, every single day. It's just words. It's just words. That's
(55:29):
my whole life now.
Speaker 1 (55:30):
Yeah, yeah, I think it's worth elaborating on, though. That's one of the reasons I'm pushing people more into AI. It's like, yeah, I know, you get it. You tried it, it made a mistake, and you're ready to write it off. But I really need you to stick with this and learn how to use it, because by putting that time and effort in now,
(55:54):
you're going to figure these skills out and learn how to make it productive. And then as the technology itself improves, you're going to start taking exponential benefits from that, and you and your career are going to be way, way ahead of everyone you're sitting with now who says, oh, AI sucks, five years from now.
Speaker 5 (56:18):
I believe, I'm one hundred percent with you that that's the smart approach. I think the tool analogy is the right analogy right now, where it's like, you can't get mad at them if.
Speaker 4 (56:31):
You, like, I didn't learn, I can't get mad.
Speaker 3 (56:38):
But it's, like, counterproductive.
Speaker 5 (56:40):
And I think if you remove the hype for a second and just think of it as a computer program that you're using, it's like, yeah, you've got to learn how to use it. You know, what is it, RTFM? I kind of hate that, but learn how to use it if you want to get the most out of it is one hundred percent right. And if you think of
(57:03):
it instead as like a dumb coworker you don't want to associate with, but that dumb coworker is someone who's on your, well, I don't know where I'm going with this.
Speaker 3 (57:17):
Think of it as a tool that you've got to get the most out of.
Speaker 2 (57:19):
I think you're onto something there that's really important, because one of the things about a lot of the LLMs we see out there, and I think this is where some of the value is definitely lost: they don't do a great job of teaching you how to be an effective prompt engineer, how to actually create communication with the tool, to Will's point. And I think part of it is because those same companies have
(57:40):
no idea how their own thing works, so they can't actually give good recommendations. But I think they do figure it out over time, because there are communities that pop up that are discussing this, and then they bring that knowledge back in. We see examples where, like the DALL-E model that OpenAI has, the prompt is being mutated by another one of their models based
(58:02):
on what the user inputs, because it's just nonsensical and needs to be mutated. And it would be great for those instructions to be exposed. I just feel like these tools don't do that good of a job. But you work on the application layer, and so I feel like you're providing a much better experience for teaching people how to utilize the tool effectively, because you have to, because you're actually selling a real
Speaker 5 (58:23):
Product, right, right. No, and it's a thing that we're constantly thinking through. We have a feature that is suggested prompts, essentially, where, you know, the most common use case again is error resolution, but based on the error that we see, we will suggest a prompt. And the
(58:45):
prompt probably is a little bit more than just "fix this," which is what a person might write. It is probably like, "fix this Rust error that is caused by incorrect mutability." We do everything we can to make it the minimum amount of work, and also to show the user, hey, here's what we're
(59:05):
actually telling the model, so that if you want to do this on your own next time, without Warp doing it, you can do it.
Speaker 3 (59:13):
So that's it, it's a key skill, you're totally right about it. That's something that matters.
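[As a rough sketch of that suggested-prompts idea: instead of sending the bare "fix this" a person might type, attach the error text and a more specific ask. The error category, the detection heuristic, and the templates below are illustrative assumptions, not Warp's real logic.]

```python
# Map error categories to enriched prompt templates. Both the categories
# and the wording are made-up examples for the sketch.
PROMPT_TEMPLATES = {
    "rust_mutability": (
        "Fix this Rust compiler error caused by incorrect mutability. "
        "Explain which binding needs `mut` and why.\n\nError:\n{error}"
    ),
    "generic": "Fix this error:\n\n{error}",
}

def classify(error: str) -> str:
    # Toy heuristic; a real system would use the compiler, exit code, etc.
    if "cannot borrow" in error and "as mutable" in error:
        return "rust_mutability"
    return "generic"

def suggested_prompt(error: str) -> str:
    """Build the enriched prompt that is shown to the user before sending."""
    return PROMPT_TEMPLATES[classify(error)].format(error=error)

print(suggested_prompt(
    "error[E0596]: cannot borrow `x` as mutable, "
    "as it is not declared as mutable"
))
```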
Speaker 4 (59:18):
I think this kind of shows my bias, because forcing the developers to have to communicate properly, I just don't see that as a problem. I'm like, this is a good thing. This should be a feature, not a bug.
Speaker 2 (59:29):
Well, okay, maybe I'll put this into perspective, Jillian, in a different way. Communicating correctly is a subjective perspective based off of the people involved in that collaboration. When you're communicating with a second person, you know there's a culture involved there, your values are involved, the definitions of words that you grew up with, all these things. And
(59:50):
when you're using an LLM, it's challenging to figure out what its culture is, how it responds to certain things, and so you have to learn that tool. So I think there's a difference: you're not becoming a better communicator, you're becoming a better communicator with that thing. And is it a good thing to force people to do? I mean, communicating
(01:00:12):
with other human beings that you work with, yes, for sure. Forcing them to learn how to use N tools out there that are all slightly different, that have individual mindsets or cultures or whatever, based on the corpus of material, that I think is open for challenge and debate.
Speaker 4 (01:00:29):
But I just see this as a people-living-in-society kind of issue. Like when I was a kid, my dad was like, you're going to take a typing class, because that wasn't just an automatic thing back then, you guys, all right, this was a while ago. And I kind of just see AI as sort of like, I think it's very pivotal, it's paradigm shifting, but it's another iteration of that.
(01:00:52):
It's another tool that we're adding on that people are going to learn how to use, that everybody's going to have to use. Just like, I don't know, like now, I did not have the option to sign my kids up for a typing class or not. It's just part of their curriculum that they are just doing.
Speaker 1 (01:01:06):
I think you should put them in a typing class. No, no, like an old-school one with the old manual typewriters, just to screw with them.
Speaker 2 (01:01:19):
See, here I have a good parable, because when I was in the fifth grade, I think, I was in a typing class my school provided, a public school. You got a typing class, and I learned A S D F J K L semicolon over and over again for a year. And realistically, I don't use QWERTY. Actually,
(01:01:42):
I find it to be a lackluster, subpar keyboard layout. And so I was taught something that took me many months to unlearn so I could be more effective in my keyboard use. I'm actually a Programmer Dvorak fan. But I have used Linux
(01:02:02):
to configure almost all the keys so that the third level, not Shift and Control but the special AltGr key, gives me other things that are beneficial for programming and German and Greek and Roman characters, however I want to utilize them.
Speaker 4 (01:02:21):
Sounds like a lot of work.
Speaker 2 (01:02:23):
Well, this is the thing: we're talking about productivity and optimizing your flow. And I find that I type, you know, a u with an umlaut or an a with an umlaut, or a dollar sign or the euro sign, frequently, and so I want an easy way to type those. I don't want to google "euro sign" and then copy and paste that from somewhere. You know, it's like on your
(01:02:44):
phone. Isn't there an emoji key where you can hit emoji and then find the emoji you want? I see the LLMs as a sort of similar tool from that perspective, right? You're hotkeying over to your Warp terminal to, you know, type those things out and get the answer rather than having to search on the internet.
Speaker 4 (01:03:02):
Yeah, but if it's what you're doing, it's what you're doing, and there should be a productive way to, you know, accomplish the goal.
Speaker 2 (01:03:08):
Look, my keyboard layout is open source, it's available.
Speaker 1 (01:03:12):
And do you have blank key caps?
Speaker 2 (01:03:17):
No. So this is not the episode where we talk about my keyboard.
Speaker 4 (01:03:25):
I think it's becoming that episode.
Speaker 2 (01:03:30):
I took a QWERTY keyboard. It's a Logitech, I don't even remember what number it is, like a K400 or something. It says on here somewhere, I have no idea what it is. It's their silent version, the one that makes the least amount of sound possible, because I care about noise more than anything else. And then I just moved the keys everywhere I could. And this is the thing you'll find out about keyboards that are not designed for this: the F key and the J key have a
(01:03:53):
different form factor than all the other keys on the keyboard, so you can't swap them around. I don't know why they do this, just to piss you off, apparently. It's like, these two keys are going to be different. I don't know why, but they are. And so all the keys on my keyboard are in different spots, except for the F and the J; they're exactly where they started on the QWERTY.
Speaker 5 (01:04:13):
I think it's because that's like home base, right? You want a tactile way of finding out where those are.
Speaker 2 (01:04:20):
But it's the key cap form factor, not the key itself. So, I don't know. The only justification that I can figure out is that if you took all the keys off the keyboard and you're like, oh, where do I put them back? I don't know. Oh, these two have a different form, maybe the F and the J go there, and then I can figure out where the other ones go. And I'm like, that's pretty suspect. But it's like every keyboard I've
(01:04:41):
seen has this problem.
Speaker 3 (01:04:43):
I got a mechanical keyboard once, and
Speaker 5 (01:04:48):
my wife made me stop using it. She's just like, that is the most absolutely obnoxious, annoying-sounding thing; put that away, I don't want to see that again. I was like, no, that's cool, I love the feel of it. And she's like, you know, it's really loud. Yeah.
Speaker 4 (01:05:07):
I've removed those from my kids' Christmas list. They're not on there anymore. I'm not doing this.
Speaker 3 (01:05:13):
See.
Speaker 2 (01:05:14):
I know that would not work for me, because I'm a very angry typer sometimes. My wife can figure out what application I'm using and what I'm doing based off of how angrily I'm typing on the keyboard. When I'm typing a blog post or writing a message in Slack somewhere or an email, it sounds different to her, and so she can tell how angry I am.
Speaker 4 (01:05:34):
You know?
Speaker 2 (01:05:34):
When I'm in an email.
Speaker 3 (01:05:35):
It's the exact same thing.
Speaker 5 (01:05:37):
My wife can be like, don't send that, take a breath, don't send it. And she's like, no, take a breather, don't send it. And the thing is, actually, as a manager I try to remind myself of that: no angry Slacks, no angry emails.
Speaker 1 (01:05:58):
Oh no, he's typing the manifesto. Get in the car, kids, get in the car.
Speaker 4 (01:06:02):
So maybe, you know, doesn't Google have like a drunk email detection? Maybe what we need is for the keyboard to have an angry mode: no, we're gonna wait, we're gonna wait fifteen minutes, and then we're gonna revisit this and see if you would still like to send it.
Speaker 2 (01:06:16):
Look, I feel like, Jillian, you haven't tried searching hard enough. I'm sure there's some extension out there for your browser which runs some sort of LLM in the background and determines whether your email has some sort of angry tone to it, and will prevent you from sending an email if it contains that tone.
Speaker 4 (01:06:31):
No, there is. If you use ProWritingAid, it will detect the tone of your email and maybe course-correct you a little bit. And I do have that, yeah.
Speaker 1 (01:06:43):
You hit send and it comes back and says, I didn't send this. But I feel like it's a good time to talk about your feelings. What's the source of this anger for you?
Speaker 4 (01:06:52):
Let's get to the bottom of these issues. Speaking of which, I think we need to get back to Warp, because I have specific questions and, more like, feature requests.
Speaker 1 (01:07:04):
Bring it on.
Speaker 4 (01:07:06):
The point of having the app people on the show is that I can be like, if I use this, I have things that I want. All right?
Speaker 3 (01:07:13):
Tell me, what can I do for you?
Speaker 4 (01:07:14):
So I saw that there's Warp workflows, and I'm wondering, can I do those in reverse? Can I go through and figure something out and then be like, all right, Warp, I'm stupid and I don't remember anything that I just did, but I'm probably gonna have to do this again. So I would like for you to go through my history, figure out what I did, and just go put it in like a markdown file or
(01:07:35):
some notes or something, as opposed to me doing it proactively.
Speaker 3 (01:07:41):
It's a great idea. We don't quite have that.
Speaker 5 (01:07:43):
We have the ability to take a command that you've already run and turn it into a workflow. Just so folks know what a workflow is: a workflow is kind of like an alias, but it's a templated command. And so if you have a complicated thing you're doing in Docker, or like, what's your workflow for cherry-picking something into a release, you can make it one of these templated commands, and then we actually make it
(01:08:06):
so it's shareable, which I think is kind of the killer value of it. And so if you're working on a development team, you can build up a library of these things.
Speaker 3 (01:08:14):
That you can use in different situations. So if you're
like an.
Speaker 5 (01:08:17):
SRE team, it's like, okay, what are all the commands that I need to be able to run in the middle of a firefight? You can have that, and they're all sort of in a common library that you have directly within Warp. We don't have the feature yet of, like, intelligently make these for me from a session, but that's a super smart feature. We do have a thing that
(01:08:37):
we haven't launched but are experimenting with, which is essentially: run the output of your command through an LLM and have it summarize for you and pick out the interesting and important parts.
Speaker 3 (01:08:50):
But I like your idea, Jillian, of figure out what I did, record it for me so I can do it again. Smart.
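[Warp defines workflows in its own format, so purely as an illustration of the "templated command" idea: a small shared library of named commands with parameters that get filled in at run time. The workflow names and commands below are made-up examples.]

```python
from string import Template

# A tiny stand-in for a team's shared workflow library.
WORKFLOWS = {
    "cherry-pick-to-release": Template(
        "git fetch origin && git checkout $release_branch "
        "&& git cherry-pick $commit"
    ),
    "docker-shell": Template("docker exec -it $container /bin/bash"),
}

def render(name: str, **params: str) -> str:
    """Expand a templated workflow into a runnable command string."""
    return WORKFLOWS[name].substitute(**params)

print(render("cherry-pick-to-release",
             release_branch="release/1.42", commit="abc1234"))
# git fetch origin && git checkout release/1.42 && git cherry-pick abc1234
```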
Speaker 2 (01:08:56):
So I found that some people swear by this: during a chat context session, at the end, just tell it to echo back at you what you did, like, say, what did I do? And then when it's done, say, okay, now I want you to take that and write a document for me that includes that information, so that the next time I have this problem, I can go and reference it. And with Warp you can say, okay,
(01:09:17):
now turn that into a templated command.
Speaker 5 (01:09:19):
You could totally do that today in Warp. The one piece of it that's missing is we don't close the loop of turning it into this specific executable thing that is a workflow.
Speaker 3 (01:09:29):
But you know, we also have a notebook concept in Warp.
Speaker 5 (01:09:32):
So you could be like, hey, LLM in Warp, summarize everything I did, turn it into a notebook, extract the relevant commands for me. But it's not quite as
Speaker 3 (01:09:44):
seamless, I think, as it could be. Jillian, it's a good idea.
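[A minimal sketch of the do-it-yourself version of that loop, assuming zsh: pull the last few commands out of your shell history and dump them into a markdown runbook you can annotate. The history path and line format are zsh assumptions; bash keeps a plainer ~/.bash_history.]

```python
from pathlib import Path

HISTORY = Path.home() / ".zsh_history"  # assumption: zsh with default path
RUNBOOK = Path("runbook.md")

def last_commands(n: int = 20) -> list[str]:
    """Return the last n commands from the zsh history file."""
    lines = HISTORY.read_text(errors="ignore").splitlines()
    # zsh extended-history lines look like ": 1700000000:0;command";
    # splitting on the first ";" recovers the command either way.
    return [line.split(";", 1)[-1] for line in lines if line.strip()][-n:]

def write_runbook(commands: list[str]) -> None:
    body = "\n".join(f"1. `{cmd}`" for cmd in commands)
    RUNBOOK.write_text(f"# What I just did\n\n{body}\n")

write_runbook(last_commands())
print(f"Wrote {RUNBOOK}; annotate it while it's fresh.")
```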
Speaker 4 (01:09:48):
Yeah, I'd really like to be able to have different, I don't know if it's sessions or contexts, but I suppose one of those where I can say, I don't know. I mean, I suppose for me it would be client dependent or context dependent, or even like, tell me which environment I'm in, which version of Terraform I'm using, all that kind of thing, for sure, like,
(01:10:09):
it's here, it's right here. Yeah, so that's what I want.
Speaker 3 (01:10:17):
Yeah, we don't. I think that's a super interesting idea.
Speaker 5 (01:10:21):
I mean, you can use Warp for, like, anything about your environment today, so you could be like, what toolchain am I using?
Speaker 3 (01:10:27):
What are my environment variables? Like, what? Uh, anything.
Speaker 5 (01:10:34):
You can ask about your history, and so you can get some of that today. But we don't have it packaged so that when you start a new session you can get all that stuff, which would be cool.
Speaker 4 (01:10:45):
Well, if we're taking feature requests, I would like that.
Speaker 3 (01:10:51):
I'm gonna force everyone on our team to
Speaker 2 (01:10:54):
listen to this. Well, you should probably wait until the episode drops and then use, uh, you know, an LLM to summarize the episode and extract the feature requests from it, and we can do
Speaker 3 (01:11:07):
it that way.
Speaker 5 (01:11:08):
Or, I think there's been so much interesting discussion about philosophy and AI in here that I'd make them all listen to it.
Speaker 3 (01:11:16):
I don't think a distilled, summarized version is going to do it justice.
Speaker 2 (01:11:21):
Oh, I totally agree, we need the human version.
Speaker 1 (01:11:24):
Yeah, I'm gonna put them in a dark room and play it back at half speed.
Speaker 2 (01:11:30):
I don't listen to content any slower than, like, 2x these days. Before we get on another tangent, I have this feeling that we should move on to picks.
Speaker 1 (01:11:40):
For that, it's probably a good part, good point, good time, good words. Look at me, working my words.
Speaker 2 (01:11:50):
Well then, why don't you go first?
Speaker 1 (01:11:52):
Right on. Okay, so I have a couple of picks today. One, I'm blaming you, Warren, and Matt from last week, because I got the book Dungeon Crawler Carl, and I hate how much I like this book. It's
(01:12:15):
just, it's dumb and it's funny and it's entertaining and it's engaging, and it sucked way too much of my time last week. So, Dungeon Crawler Carl. I can't even remember who the author is. Do you remember, Warren?
Speaker 6 (01:12:31):
No?
Speaker 3 (01:12:31):
I didn't look it up.
Speaker 1 (01:12:33):
Yeah, just google Dungeon Crawler Carl. It's a stupidly fun book,
very entertaining.
Speaker 2 (01:12:39):
If you're listening to this episode, the link will be
included with the podcast, just you know, down below it,
so you don't even have to google.
Speaker 3 (01:12:46):
You just click the link, right.
Speaker 1 (01:12:49):
And then the other pick I'm gonna recommend, and Zach, you mentioned this earlier: if you haven't gone to your favorite AI tool and just started a chat about philosophy with it, I highly recommend that. And that's going to be my pick for the week, because it's just so much fun to do. And Warren,
(01:13:13):
I know you said that AI is not intelligent, but neither are some of the people I hang out with. So chatting with AI about philosophy seems to be working out quite well, because it's just a really cool perspective on some of the stuff and some of the insights it has to offer. And I've used it for setting goals as well, and challenging me on those goals, and it's been pretty insightful for that. So I think that's one good way to start working with AI. And those are my picks. So, Jillian, what about you? What'd you bring this week?
Speaker 4 (01:13:51):
I'm going to keep going with the self-promotion until I'm back up to the lifestyle to which I've become accustomed. And if you go to my website, yeah, yeah, that's right, dabbleofdevops dot com slash AI, I have a data discovery tool, mostly for data science companies. If you're not a data science company, like, I don't even really know how to talk to you,
(01:14:12):
so maybe just ignore this portion. But the idea is that you get your data, you load it into the LLM, and then you can start asking it questions. It kind of acts like, maybe, a junior grad student. You don't want to completely trust what it says, but it gives you a very good first draft. I'm adding the PubMed interface so you can go search medical
(01:14:33):
literature and say, okay, get me all the papers back on this disease or this protein or this drug interaction, whatever the things are. Load that into the LLM, start asking it questions. I've got a couple different data sets: Open Targets, a couple single-cell data sets. I want to add a couple of transcriptomics data sets, even though those might be out of vogue, because they're still cool,
(01:14:54):
you guys. Okay, they're still cool. So anyways, cool things are being added to the platform, for anybody who wants it, mostly in the biotech space. Again, if you're not biotech, I don't really, I don't even know why you're listening to me. Like, just tune me out.
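[For the curious: PubMed can be searched programmatically through NCBI's public E-utilities API, which is the kind of interface a tool like this can sit on top of. A minimal sketch; the search term is just an example, and the product's actual integration may differ.]

```python
import json
import urllib.parse
import urllib.request

# NCBI E-utilities: esearch returns PubMed IDs matching a query.
BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_pubmed(term: str, retmax: int = 10) -> list[str]:
    """Return up to retmax PubMed IDs for a search term."""
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": term,
        "retmax": retmax,
        "retmode": "json",
    })
    with urllib.request.urlopen(f"{BASE}?{params}") as resp:
        data = json.load(resp)
    return data["esearchresult"]["idlist"]

# Example query: papers about a protein and drug interactions.
print(search_pubmed("TP53 drug interaction"))
```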
Speaker 2 (01:15:07):
It's fine. Don't reduce your, you know, your TAM, your total addressable market here. You know, if you don't understand what Jillian's saying, maybe you should go to the website anyway and see if you can figure out a use case for yourself.
Speaker 4 (01:15:18):
That's true, you could. I do have some companies that use it just for meeting notes. They, there you go, use Otter to record all of their meetings, and then Otter kind of gives them, you know, the different summaries and images and things like that. It's pretty cool. And then you can feed that into the LLM and have sort of just a history of meetings, so then you don't have this: didn't we have a meeting
(01:15:39):
about this? Didn't somebody make a database? Like, wasn't there a thing? Wasn't there a person we can talk to here? You could just go and query it, and then it will tell you. Sometimes it gives you the answer you want, and sometimes it's like, no, that conversation never happened, you're hallucinating now. But, you know, it's one or the other. Well, there's a
Speaker 1 (01:15:58):
Big overlap between bioh hackers and software engineers as well,
so that they may find that interesting.
Speaker 4 (01:16:07):
Yeah, they could put all the literature and data in there around biohacking, which I'm not totally familiar with, although I am very much looking forward to having bionic limbs. That would be great. Like, that would just be
Speaker 2 (01:16:20):
amazing. For me, because you want it so that you don't have to think about moving your limbs anymore. You want something else to do it for you, right?
Speaker 4 (01:16:29):
No, I just want limbs that work at this point. That would be nice, just on a mechanical level; that's what I need. And then, on that note, we're kind of talking about the philosophy of AI and so on, and we can kind of argue about the tools, but from an accessibility viewpoint, AI is really great and doing some really great things. You know, like, I have some issues with typing as I age out of this career field.
(01:16:53):
You know, I have some low-vision people in the family that AI is very helpful for, being able to dictate and so on. There are a lot of cool accessibility things that can be done with AI, and I do always like to give a little bit of a shout-out to that, because I do think all of that is pretty great. You know, like, I have somebody who's low vision who can now listen to audiobooks and,
(01:17:14):
you know, basically still go through the Internet just with voice, and I think that's pretty cool. So, I don't know, that's it.
Speaker 3 (01:17:22):
That's my picks, alrighty.
Speaker 2 (01:17:25):
Then, so for my pick this week, I primed it at the beginning. It's this Microsoft-backed research paper that came out of Carnegie Mellon, "The Impact of Generative AI on Critical Thinking," and I think it's just an absolutely fantastic paper about the correlations between utilizing AI tools and developing
(01:17:48):
critical thinking processes and exercising that sort of brain muscle. I think some people have misinterpreted the paper as "Microsoft paper says AI is making us stupid." But the one thing that really does come out of it is that if you have low confidence in an LLM doing the right thing, you will be
(01:18:10):
able to get much better answers out than if you have high confidence in the current tools that we have, because the current tools are transformer networks that hallucinate, and if you just assume that it gives you the right answer, like your calculator, you are going to stop developing the muscle of challenging where you got the information from and trying to understand it. I will say that this leads
(01:18:33):
me to a great interview question. I know that interviewing candidates today can be challenging because they may be using LLMs to answer your questions, and for me, I think you can naturally just ask them: how much confidence do you have in the LLMs that you use to produce the right answers? The more confident they are, the more likely they're not
(01:18:54):
using critical thinking to challenge what comes out of them, and it could be a useful litmus test for what sort of person you're hiring into your organization.
Speaker 1 (01:19:06):
Right, huh. And so are you phrasing the question that way, presuming that they are using AI, to make them more comfortable with admitting it if they're trying to hide it?
Speaker 2 (01:19:19):
Well, I think realistically, part of our interviews now should be dedicated to solving problems that don't rely on using LLMs, or problems that can be solved better using LLMs, and then asking candidates to use LLMs, and which LLMs they're utilizing to solve the problem and how they're going about it. Because I think
(01:19:42):
you're lying to yourself if you believe that you don't want to pull these tools into your company to utilize in some fashion, or that people aren't utilizing them. It's irrelevant: if you give them a take-home assignment for your company that takes four hours or eight hours, some of them are going to utilize the tools. And I don't think it says a
(01:20:06):
lot about the type of person based on whether they utilize the tools, but it does say something about them, how they're utilizing them, or what their expectations are on how they utilize those tools.
Speaker 1 (01:20:16):
Cool, all right, Zach. What'd you bring for a
Speaker 3 (01:20:17):
pick? I have a tool that I like. Why not?
Speaker 5 (01:20:25):
So it's a tool called Granola, and it's a note taker.
Speaker 3 (01:20:31):
It's a meeting note taker, you say.
Speaker 5 (01:20:34):
But the thing that I like about it, compared to all the other ones that I've tried using, is that you don't end up with a little black box in your Zoom for the note taker. The note taker works just off of your computer audio. So there's no, like, this is weird,
Speaker 3 (01:20:56):
who is this, like, Zach's note taker thing, joining the meeting? And it takes notes. The default way that it takes notes isn't by transcription. It's by,
Speaker 5 (01:21:08):
like, semantically summarizing and giving you the key points
Speaker 4 (01:21:12):
of what happened in the meeting.
Speaker 6 (01:21:13):
So I don't like taking meeting notes, so this is a cool thing. It's called Granola. That's one thing. A second thing: I'm reading a book. It's pretty nerdy. I don't know why I'm reading it.
Speaker 5 (01:21:25):
It's called, uh, it's like a travel guide to the Middle Ages. It's a history book, and it's all about, from, you know, the year eleven hundred to fifteen hundred, how did people travel? What was it like for them to take a vacation?
Speaker 3 (01:21:43):
They weren't really taking vacations.
Speaker 5 (01:21:44):
They were primarily going on pilgrimages, or at least that's what survives in the written record.
Speaker 3 (01:21:50):
And it takes you all over Europe.
Speaker 5 (01:21:53):
the Middle East and the Near East, and I'm not through it yet, so I don't totally know where else.
Speaker 3 (01:22:00):
But to me, what I like about it from a
Speaker 5 (01:22:04):
history perspective is that it's just about a relatable experience, not about a series of historical events.
Speaker 3 (01:22:12):
It's not about historical leaders.
Speaker 5 (01:22:14):
It's about, say, you living in the year thirteen hundred: what the heck were you doing?
Speaker 3 (01:22:19):
How did you pack? How did you travel? Where did you stay? What were the inns like? What were you trying to go sightsee? I don't know why I like it so much, but I really like it.
Speaker 5 (01:22:32):
It puts me in a very different mindset from how we're living today.
Speaker 1 (01:22:38):
That's super cool. It's like National Lampoon's Middle Ages vacation.
Speaker 5 (01:22:44):
Yeah, except I guess it didn't seem very funny to be traveling then.
Speaker 3 (01:22:48):
It was a lot of very serious,
Speaker 5 (01:22:51):
you've got to get to this religious site, you've got to see these relics. People were really wanting to see a bunch of, you know, historic relics, or at least that's what survives in the written record, and that's where the history comes from.
Speaker 3 (01:23:06):
So that's pretty cool.
Speaker 4 (01:23:08):
I used to really like all those diary-type books. They're fiction, but they're sort of written as diaries of the kids that would do the Oregon Trail and travel across the United States. And they're from other places as well too, so you have people coming to Plymouth Rock and doing the Oregon Trail, and just, yeah, in general, people going different places across history. It used
(01:23:31):
to be a lot harder. You used to have to worry about more things than whether the gas station has your preferred chicken tenders or whatever, you know.
Speaker 2 (01:23:39):
Yeah, there's so many questions, Jillian.
Speaker 1 (01:23:47):
Awesome, Zach, thank you for joining us. This has been
a super entertaining episode.
Speaker 4 (01:23:52):
Yeah, this has been fun.
Speaker 3 (01:23:54):
Yeah that was great.
Speaker 5 (01:23:55):
It was a super interesting conversation, and, uh, yeah, it was fun.
Speaker 3 (01:24:00):
Really appreciate you all having me on here.
Speaker 1 (01:24:02):
For sure. I'm gonna challenge Jillian to go download Warp, try it out, and then invite you back on the show for a head-to-head rematch.
Speaker 4 (01:24:11):
Voice, like, that's the one thing I really want. So there we go.
Speaker 5 (01:24:14):
It's at warp dot dev, that's where you get it, and it's now available on Mac, Linux, and Windows, so all platforms. Right on.
Speaker 1 (01:24:27):
Cool, cool. Well, thanks everyone. Zach, thank you again. Warren, Jillian, thank you, and we'll see everyone next week.