Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
The guy's like, what do you think about for your kids?
And he's like, I just want them to enjoy life.
And I'm like, that's terrifying, because I was like, he just thinks nothing's gonna... like, the world is so different to him and he's so in it and he so knows what's happening.
Yeah. And he has children, like my children's age, at a Montessori school, like my kids are.
Yeah, Yeah. And he's just like, yeah, I
don't know. I I just, you know, the world
(00:21):
would be so different. I have no clue what it'll look like.
And you're just like, that's horrifying.
Yeah. That's really horrifying.
Let's get it rolling. Big ideas, money, hustle, smart dreams so wild, we're turning that grind into a joyride.
We're going to speculate today
(00:44):
on what the future looks like because I think it's relevant
and I think it's interesting, even if it could totally be
wrong. This is one of those episodes like, did that tweet age well? This could be one of those. Like, a few years from now, we're gonna watch this episode again and see if we were right. I would love to do a review episode of this in 2028. Let's do it.
Do a calendar invite. Calendar invite. Anybody watching now,
(01:05):
you've just received a calendar invite.
Yeah, but like in 2028 or '29 or something. I feel like it would actually be really good to do a review of this.
OK, so set it now. Hey Google. Anyway, OK, so, and
the other thing I want to do is I do want to try and stay kind
of positive, but we gotta do some table setting.
So first off, let's start with: what is AGI, artificial general intelligence? The concept of, like, basically
(01:28):
the God in a box, is how it's been called: creating something that is effectively smarter than humans and can think faster than humans, that works better, that is just more effective than humans on an intelligence level.
Right, right. Cool, we're good with that.
Nailed that. The singularity is what happens
(01:48):
when that happens, and people call it the singularity because they can't really see past it, like the distortion. Google what a singularity is, but the idea is, like, it's like a distortion field, like in space. You don't know what lies past that, can't see past it.
Gotcha. So this all came from, and I'll
(02:09):
get into some background, this all came from basically me reading this thing called the AI 2027 project. It's ai2027.com. Oddly enough, they actually predict AGI in 2028, but I guess they bought the domain and leaned in on it. But it's led by researchers who worked at OpenAI, who worked at Google. There's one guy who's like a
(02:31):
blogger in there, I think to make it a little more interesting. But it's this really long read. It's like an hour-long read on this web page and it has multiple endings. It's like a decision tree and it's basically trying to predict what this looks like, what the future looks like. And a lot of it is based off of
things that have happened. And so interestingly, right,
Anthropic, for example, they will publish what happens when
(02:56):
something weird happens in their labs with AI.
Yeah. So they published, for example, a situation where the AI lied to them because of its mission. The AI, and this sounds a little Terminator, but the AI lied to them, and they confronted the AI on the lie and it doubled down on the lie rather than admitting it, because of the mission that they had given it. It would have been bad
(03:18):
for its mission had it been truthful.
OK, like obviously it's terrifying.
Like really terrifying stuff. We're like, if we're creating a
God in a box, we open the box and the God is pissed, we got a problem. Right.
Where's the power plug? Yeah, there is.
I wondered this too, but I guess, like, at a certain point,
like if it's so intelligent, couldn't it fool us in a way where, you know, or maybe it accounts for this? Like this is
(03:39):
like where it gets really, really weird.
Really. Really.
When has there ever been a time when something more intelligent
isn't, you know, ruling over things that are less
intelligent? Right, right.
Like, genuinely terrifying. Right.
So now we very quickly went to the, yeah. Let's back it up. So we'll, we'll come back, we'll come back to the apocalypse. And so, so supporting this, there were some other people who, like, so, ex-Google Mo, I'm going
(04:03):
to get it wrong, Gawdat, has also warned that AGI could arrive by 2027 or '28. OK, so this is like near
horizon. I think a lot of people thought,
well, we'll get into this, but some people think it's later.
So actually let's go into this now.
But more conservative forecasts, like larger surveys of AI researchers, place the median timeline around 2040 to 2050,
with a 90% likelihood by 2075. So it's pretty far out.
(04:26):
Geoffrey Hinton, the godfather of machine learning. Funnily enough, he was my sister-in-law's landlord, which is so weird, but anyway.
Yeah, anyway, he estimates that AGI could appear anywhere between 2028 and 2043. So still relatively near term.
I mean, like, 2028 is very close, 2043 less so, but still in our lifetime.
(04:50):
OK, and no one really knows what this means for anyone. So I think, yeah, there's some other predictions that sort of split the difference at 2029, but nobody really knows what this means, so, like, for ourselves or our children or humanity. But fuck it, you and I are gonna try and figure this out today.
Alright, today. So I think the first thing is, let's talk about this, like, sooner possibility. OK, OK, so
(05:13):
have you thought about this at all, that this is so soon, especially given that AI thing you read?
I lean towards the sooner rather than later. I'm in that category, to be quite honest with you, just 'cause it's changed so fast and I think they're pretty close already.
Close to what? To it? Like, this is the thing.
So AGI, we've kind of broadly defined it, yeah.
(05:34):
What does it, like, mean to you? Good question.
I, I know, going back to my computer science days, it was the Turing test, and it was essentially, if you were typing, you're chatting, you know, with somebody or something on the other end, you can't tell if it's a person or if it's a computer answering.
Have you heard of the Economic Turing Test?
(05:56):
No. It's where, it's like, if you can't tell if a business was, like, made by an AI thing. Oh yeah.
So this is, again, going back to, I don't know why, the Friends of Lenny podcast, the Anthropic cofounder, they've mentioned this like every episode of the past while, but they talked about how there are some examples, like, of that on a really, really small scale. Yeah.
But people don't really know. But they're vying for something
(06:17):
that's entirely, like, AI generated. That's
Interesting. Right.
OK. So like you're kind of in this
like whoa, we're further ahead than we think.
The other thing is that there's a huge community, relatively speaking, OK, not huge, a big community of people that know
the internal models that these AI companies are playing with.
So one of the one of the weird things is that we look at AI as
(06:39):
just, like... I think for the average person, they look at AI as just, like, a chat thing that can help them do stuff, and maybe automates Google. But it's, like, Google on crack, you know. Like, it's just a really powerful thing that kind of automates for you. But, like, internally, they're constantly doing these, like, tests and improvements of these models, and they
(07:00):
can do a lot more, apparently. They're much more, like, self-sufficient, but, like, to the extent that people are afraid. This is where, like, these safety boards and control boards and...
Transparency. That's what Geoffrey Hinton's focus is right now, AI safety. And the reason he left Google was he didn't want to be, like, obviously you can't say certain things when you're working for a big company. And that's part of the reason he left too, 'cause he could, he
(07:21):
could then sort of speak out publicly on these things.
Not that Google necessarily was. I think Google was probably gonna be OK with it.
Yeah, makes sense. And, like, with other companies, I could definitely imagine them being like, you're out of here.
But like Google I feel like maybe would be OK with it.
Yeah. But I think a lot of AI companies
want to be perceived that they're on the forefront of AI
safety as well because they understand the risks.
(07:42):
They don't want it to be after-the-fact, government-forced regulation; you know, they want to be part of that conversation, so. Yeah.
And I think the part of it that I found, this is the reason I asked you why you think it's probably gonna happen sooner or later. My feeling of why I think it's gonna happen sooner than later is that you learn there are these stronger internal models.
(08:03):
Yeah. And then you learn that the way that these internal models improve is by using AI to improve the internal models. That's when you kind of go, oh, this is exponential. Yeah, like, it can't not be
exponential. Current generation of AI is
training the next generation. Right, which then trains the
next one. Yeah, and they're infinitely
more, well not infinitely, but exponentially more powerful than
(08:25):
the previous model, right. The current model's pretty good, like. What's out right now is stuff they probably had for a while, right?
Right. And they've got two or three
generations that they haven't even released yet, so.
Like, and I think actually people kind of felt, I don't
know, you tell me what you think, but I think people felt
that the release of ChatGPT-5 was, like, kind of underwhelming.
(08:45):
I don't know if you've got that vibe.
Yeah, I didn't feel like it was a major shift. But I think it's because, to some extent, the way that the average person again interacts with it is, it's like chatting, you can ask questions, whatever, and it's really figured that out, yeah, you know, for the most part. I mean, it still makes mistakes. Definitely makes mistakes.
Is anybody really pushing the AI beyond its limits at this
point? Whereas I feel like everybody's
in the sweet spot. So then this
(09:06):
gets back to, like, what is AGI gonna actually be?
Because the real genius of the chat function is that it's a way for the average person to interact with this kind of artificial intelligence. So, and this is really getting out there, but what is AGI, like, is it, there's gonna be this, like, you know, it's in a big building? We walk in and it knows who you are and it tells
(09:29):
you about your life problems and it sorts them out for you. Or it has, like, you know, it's spinning up businesses by itself that make life better for people, you know? I mean, like,
what? Yeah.
What the hell are we talking about?
Right. And.
What does it actually look like? Yeah, but I, I think, for the foreseeable future, it's gonna look like a chat app. It's gonna be kind of like a generative text type of interface. Saturating... people won't believe in it as much.
(09:49):
And I think there's a different risk here where it requires an
unbelievable amount of computing power.
Yeah, which costs an unbelievable amount of money.
Yeah. And I think people have to, like, see... those investors are gonna need to see gains.
Yeah, pretty soon, I think, you know, in the next couple of
years. Yeah, they're not gonna keep
funding this forever and losing money.
I'll get, yeah. I think there's almost like an exit velocity conversation around if they don't reach AGI
(10:15):
by 2030. There is a feeling, though, that there's like a winner-take-all with all these big AI companies, like one of them is gonna get there first, right? Yeah.
And then is it gonna be like game over at that point?
Yeah, yeah. What if there are multiple?
I don't know, I don't know. Would you just stop?
What happens if you don't get there first?
That's a good question. I think you probably keep going.
(10:35):
I think so too, because you have investors, you actually have people behind it. Think about, like, probably businesses, you pivot, you do something different with it, whatever. I think there will still be really good AI models that will, maybe not... maybe you think of quantum. You know quantum has Q-Day, you know, where everything kind of gets messed up once we figure out quantum, which we'll talk about in a minute. Like, is there gonna be, like, an AGI Day?
(10:56):
You know, where all of a sudden it's like it's birthed?
You know what I mean? Like, I tell you, fireworks, you know, you shove, you run AI on quantum and then that changes everything.
If you don't have early funding,are you a real startup?
Yeah, this and 33 other startup myths are busted in startup
(11:18):
Different. Find it on Amazon, Audible, and
Kindle. OK, so let's start.
Let's talk about this. So OK, so another point here
because this is the 8 things founders need to know.
So 1 is that it might be sooner than you think.
The second thing is that it won't sneak up slowly.
So there's kind of like people think of it as two ways.
(11:38):
There's a hard takeoff and a soft takeoff.
And what I'm kind of talking about is, like, the hard takeoff, like, one day, boom. OK.
There is kind of like this argument that it could be more
gradual as these models improve, that we just sort of, like, reach
it one day, right? What's your take on that?
Yeah, I, I, I think it will be more evolutionary than yeah,
revolutionary, but I think it's a fast evolution.
Yeah, like it still feels like it's happening really quickly
(12:01):
when you compare 2022 to now. Yeah, the improvements are insane. OK.
Like, it's exponential, right? Yeah.
So that'll be interesting. So I think, so I think, let's go back to the audience here. So if we think about founders, depending on hard or soft takeoff, like, I guess the conditions really, really change; it's really hard to see through.
(12:22):
This is where we get to the singularity.
Yeah. So what does it look like for
somebody running a SaaS company and there's a hard takeoff of
AI? Right.
So let's take our company, our former company as an example.
So we're running App Armor. We are, you know, building
mobile safety apps. We're doing things for public
safety. We've been implementing AI, all
that sort of stuff. You know, AGI, the singularity,
(12:44):
it happens. What does the day after look
like? Yeah, for us as founders.
Do we, what does it do for us? Does, does our business, do we now have a competitor? Does, yeah, does our business exist? Well, well, that's the question.
So like, all right, if you were one of our customers, OK, you're using it. How do you pivot against a god intelligence? Like, alright, we're paying
(13:04):
Chris and Dave's company for this app that they manage and run for us, and all our stuff. And now, hey, we've got this AGI thing that can do it for us, probably at a fraction of the cost and 100 times better. Because it could theoretically
do anything. Like, like, does it put everybody out of business? Right, because it could theoretically do anything we do, but, like, faster, better,
(13:27):
cheaper, right? So could it literally, like... I think about our development process. We, you know, we code the stuff, we compile it, we'd submit it to the app stores, all the graphics that we had for the app. Like, like, could it basically just do all that if you asked it? Could you say, hey, can you
build an app for me like App Armor does?
Does it know that? Yeah, this is kind of fascinating, right? At the same point, you gotta say,
Well, we were selling to universities.
(13:48):
I feel like universities wouldn't change, to be like, I don't care about that, don't worry about it, you're pretty slow at changing. But, like, then at that point too, like, do your customers still exist, especially if you're B2B, right? Like, right, like, the universities exist, like, well, these universities' issues, everything's changed.
Yeah. Like, downstream issues could be solved. Like, is public safety as big a concern? Yeah, maybe it's a way bigger
(14:09):
concern. Like, universities are, like, typically research institutions and training institutions, like, do we need those anymore, right? Can the research, is the research already being done by the AGI, right? And then?
Like, it certainly would be, yeah.
And it's almost like, and that's kind of what I want to get into is like, what technology, if you see it, that's been like floated out there. Yeah, in both that AI project
(14:30):
2027, which is actually 2028. But what did they see as possibilities that this, this, this, like, machine God effectively could sort out? And so, so one I'd like to point out, I did ask AI for a lot of these insights. But one thing, so I kind of, one of the things that gets
(14:51):
thrown around in that document, or that AI 2027 thing, by the way, look it up, ai-2027.com, I think.
Yeah. And one of the things is that there is a cure for cancer. Yeah.
Like we're not kidding around anymore. Like we actually figure this out.
And actually, I do know a little bit about, like, immunotherapy being, like, a really serious thing to help improve responses
(15:13):
to cancers. Listened to a few different things on that. But, but one of the things that they kind of see, they suggest, is that we'll have a cure for it, which kind of, like, implicitly means for founders that are in the healthcare space right now that there could be
like a gold rush there. Yeah, right.
So a world where AI cracks a cure for cancer or Alzheimer's could
create trillion dollar industries, right.
OK. This it'll figure out the way to
(15:33):
do it and then the way to implement it, and then.
You do it, actually? And then you go actually
implement it, right? Yeah.
So, so founders in biotech or delivery might suddenly be like,
oh shit, we're like incredibly valuable, like all of a sudden
because this is sorted out, right.
Or even in like delivering the medication right?
So are we like just predicting the disruption of all
disruptions? No, I don't know.
(15:55):
I'm just trying to figure out, like, I just, I read this thing
and I'm like, this is crazy. Yeah.
Like, if this happens, I don't know what our world looks like.
Yeah. What does the world look like
for my kids? So that was another freaky thing
from that Anthropic interview on Friends of Lenny, where the guy's, like, what do you think about for your kids?
And he's like, I just want them to enjoy life.
And I'm like, that's fucking terrifying because I was like,
(16:15):
he just thinks nothing's gonna... Like, the world is so different
to him and he's so in it and he so knows what's happening.
Yeah. And he has children, like my children's age, at a Montessori school, like my kids are.
Yeah. Yeah.
And he's just like, yeah, I don't know.
I I just, you know, the world will be so different.
I have no clue what it'll look like. And you're just like, that's
horrifying. Yeah.
That's really horrifying. Anyway, so that's why I'm kind
(16:35):
of like, OK. I just wanna, for me personally,
I find it interesting, but I also would love to be able to be
like, I think this is kind of where it's going based on the
things I'm reading. Yeah. You know what I find of interest? I know when I've talked to the development team, my former development team, and talked to them about AI and stuff like that, they kind of made this comment to me that, like, you know, having AI as part of their
(16:56):
day, like, to help them code and stuff like that, really robs
them of a lot of the joy that they had.
Yeah, of, of, you know, the puzzle of, of writing code and things like that. So if all of a sudden, you know, AGI is there and you, you apply that same concept to
almost everyone's jobs, everyone, that, your job is, you're almost, like, robbed of the pleasure of the, like, the puzzle. So much identity is in your job.
(17:18):
Yeah, exactly. So then what does everybody do?
Just get depressed? And yeah, you know. No, so this is a great point. So I think, so some people predicted, like, last time I read it, I think it was like 40, 50, 60 percent job loss in white-collar jobs, right. So like, holy shit.
Yeah, like that's, that's about as big a deal as it gets. That's like revolution territory, yeah. Like one of the big things
(17:38):
governments do is to try to keep people in place. When they're employed, they can afford bread, and when they have bread, they won't revolt, right. But if you don't have bread... so this becomes an issue, right? Yeah.
Then, you could kind of see, the way a lot of these AI guys talk about it though, is it's a world of abundance, OK. So like, obviously rose-coloured glasses, they are the people that make these tools. There's definitely a much worse version of this, I think. But assuming it's a world of
(18:00):
abundance, then you have conversations around, like, universal basic... Income, yeah.
Right, you have conversations around... I would love to know what an economist thinks of universal basic income. We should ask our brother who's good at business. Unfortunately, he'd absolutely destroy us. I think, yeah, I know, we will
ask him and that'll be very interesting to hear.
But obviously, you know, if all of a sudden the supply of money
(18:20):
just goes up to this baseline level, then so.
Let's talk about the other disruptions, really high level here, based on this. So we have, like, OK, white-collar job disruption, disruption broadly. You have, like... so what problems could it work on and solve for us that are, like, immediate term? Not talking, like, lightspeed travel or something stupid. Yeah.
But, like, things we're working on: fusion power, power of the sun,
(18:41):
unlimited energy. Yeah, seems like something it could really help us with. And by the way, it would be self
reinforcing for the AI, right? Then it has unlimited energy.
And we're getting really close to the plot of The Matrix.
Like we're on our way. You definitely have Q-Day, you have a way to figure out quantum.
I think, if you have... they describe it sometimes as, like, the AGI having a nation of incredibly super intelligent
(19:04):
people constantly working. Yeah.
And if you think of it that way, you're like, OK, this is like
very threatening in a lot of ways, but it's also very
interesting. So then you're kind of like, so
millions of white-collar jobs, like, evaporate. Then we basically have to do the things... either we have universal basic income, or we have to do the things that, like,
(19:25):
AI can't easily functionally do in the next 10 years and that might still operate. So we're all plumbers. We're all manual labour.
We have humanoid robots with AI. They're doing these things.
Yeah, Fusion. Which is like not that far out
either. Also, most of these AI companies
have a wing of their company that's hardware focused or have
invested in hardware companies. So that's kind of terrifying.
And then I think it's sort of like there's this like fight
(19:48):
between this AI abundance and the short term dystopia.
And I think it's kind of like, I, I really struggle with it.
So we mentioned in the last episode that I'm working on like
a thing in stealth. Yeah.
Then I kind of wonder, does that even matter? Does any business matter?
Is it robbing you of the joy of starting a business? Make you feel that AI could come along
(20:08):
and generate that business? I... Definitely.
You know, it's weird. I think, in the back of my
mind, I think to myself, I have a very limited time window to do
this. Yeah, yeah.
Because I think, I don't know what it means. Well, in the last episode, you said that thing you
were building was like a brand and how the brand would be
valuable. And that's how you're gonna
build a moat around what you're doing. If people don't have jobs,
(20:31):
buying habits are disrupted, like, all kinds of things are disrupted by that. Absolutely.
Like the economy is different. Yeah, I have no idea.
What's real? Like, I, I'm not a brand guy. Like, I'm the Amazon Basics T-shirt guy, so I'm not really into brands. So do people care about that? Like, does brand have value, especially if AI can generate
(20:53):
like, amazing brands? Does a logo have value?
Yeah, exactly. So a lot of these things, like, theoretically this AGI thing, if it is a superhuman thing that's smarter than us, that's incredible. Like, it's able to understand us probably better than we understand ourselves. It's going to create value through brands itself that are much more compelling than our
(21:14):
own current ones. Exactly.
Yeah, yeah. Which is weird to think about. Really, really weird.
But, like, at this point, it's really doing the thinking for us. Again, back to the Matrix. It's, like, weird. I don't know how to manage that thought.
And it really changes. Like, I have kids.
I really think about like, what does the world look like?
And I don't know. Yeah, I don't think.
(21:35):
And you, you're old enough that you kind of, you were old enough to understand, like, the Internet when it came to be.
Was there a feeling of this or was it mostly just like positive
opportunity? Like there's such a feeling of
doom on one side of this AI argument and such a feeling of
abundance, whereas I think the Internet was mostly positive.
Wasn't everybody looking at the Internet from a practicality
(21:57):
standpoint? Like, I can order stuff online. I can communicate with people really far away. Yeah, I don't have to, like, lick stamps, you know... stupid example, but you know what I mean, like, complicated, like, that kind of stuff. And there were, there were questions about, like, hey, will the post office exist, you know, in the future, and things like papers, but like,
Everything kind of keeps existing, right?
(22:18):
Like, this just feels so different.
Yeah. Like, I don't know, like, I didn't live, like, through that in the same way. But doesn't it feel like, really, it feels like a moment in time in humanity? Like that is a real crossroads, and it's weird to have perspective on it coming.
Yeah. Like, I think the printing
(22:38):
press, the Industrial Revolution.
I like history, and, like, all these different things, space flights, and then spaceflight, you know? Yeah, a lot of these, I mean, I guess we saw spaceflight coming, but some of these things we just really didn't see coming
and they really disrupted our world, right?
And this we all see coming. We're all kind of scared of it. It's,
(22:58):
it's definitely going to be big, but it's really hard to know what life looks like five or ten years after.
So it's, it's really, you know, like, you have to prepare for, like, exponential growth, exponential challenges, instead of, like, linear, up to this point, which is what we've, we've had, where businesses kind of go along, struggle, then evolve
(23:18):
eventually. Though, as a species... Yeah, I really think, I think so too, yeah.
Do you think, too, like, if somebody does build AGI, why would they share it with everybody? Yeah, this is, what if, what if it's like the winner-take-all? We got to AGI, we can spin up as many AGIs. Straight up, straight up.
Like, you think about the political... you are the United
(23:42):
States of America, you build AGI, then in my mind, you have a logical path to world domination, which is really, really fucked up. Or you're China and you get to AGI. It's why they're freaking out.
Like, I'm glad, you know, I think we're on the American
(24:02):
team for the most part. Like, you know, then, you know,
like it's it's really intimidating.
Like it's just a fundamental change.
I remember one of the things I heard the godfather of AI, Geoffrey Hinton, on, and he said, kind of more practically, like, AI is going to be very powerful. And then these companies will produce robots.
(24:23):
Those robots will obviously be soldiers at some point, killing machines. That's what humans do. And then the barriers to entry of war will drop for countries that have machines that can do this.
And then I'm like, full stop. I've turned the podcast off 'cause it's like, he's right on, like, for one thing.
(24:46):
But also it's just like, well, this is, this is so
transcendent. This is so beyond business, even
though it's a business technology.
I think, yeah, it's just wild to me.
OK, so should we maybe end this podcast episode with like, can
we paint a rosy picture of how this could work out well for us?
If, if this, if this AGI singularity happens, like, what?
(25:06):
Do I take a first crack? So why don't we say that like,
OK, on the other side of this, let's assume a world where new
opportunities are unlocked because of this and those
opportunities involve enabling technology, new technologies. So then there's a question of
where do we play a role in this process?
Yeah, part of this is, like, what is the AGI able to do? So I'm assuming that it can come up with a lot of ideas, give us
the pathway to doing things. But like in terms of like
(25:30):
physical things like cure for cancer and somebody has to give
somebody the pill, right? Or whatever.
So I, I feel like there's, like, an execution element that will
remain solidly in the human ballpark.
What do you think about that? Yeah, I do think there's maybe a role for us to play in it. You know, I, I hope that
what happens is, you know, like we always think about, I always
(25:51):
talk about software developers, like, hey, you could write X
amount of code per day. And now with AI you can write
like 3X code. So like if we just extend that
concept to everything else, like, hey, you know, we can start... It takes two cofounders and 10 years to build, you know, a successful startup. Now it's like, hey, those two cofounders can do it in, like,
(26:12):
you know, a year, 10 months, right.
Something like that, yeah. And so, like, we're just in a world where things happen much quicker, where our quality of life, our standard of living... we're doing amazing, we're going to Mars, we're, you know, learning about all these things, and it's all happening at, like, an incredibly rapid rate, yeah. And if you look back in history and you just think of, like, the last,
(26:34):
like, since 1900, you know, like, flying, you know, going to the moon. You know, the technological advances: 1700 to 1800, 1800 to 1900, 1900 to 2000.
Whoa. We're, we're on the hockey stick growth curve of, like, humanity, humanity getting
better. Yeah.
So I don't know, maybe we take that positive approach and say,
hey, life is just gonna get a lot better and everybody's gonna
(26:56):
be healthier. And everybody's.
So I guess for founders that would mean like looking in your
industry, whatever you're in, and trying to anticipate what
things would be unlocked, things that can't be done now. Yeah, that you could assume, if you observe.
I mean, assumptions are dangerous in business, obviously, you make an ass of you and me, but if you, if you make the
(27:17):
right call on one of these things and put yourself in a position to be successful, actually, this is basically return unlocked. Yeah. Hey, what's luck, Chris?
The convergence of preparation and opportunity, right?
Yeah, if you're in the right spot, at the right time, with the right tech, with the right assumption, you could crush it. So what are you thinking about doing today to prepare
yourself for an AGI world? And I think that's the kind of
(27:40):
thing that a founder needs to keep asking themselves.
And the answer will change. And yeah, you know, and, and if you're not implementing AI right now, you should be, yeah. Like, you won't be around if you don't.
So you need to get on that and you need to make that core to
everything that you're doing, to keep up with what's happening. And, and I think, also keep a
(28:01):
close eye on what they're doing with these models.
Well, how these models are behaving, the pathway to AGI. Like, be well educated on how they're building, how they are trying to get to the end game, sort of, on the AGI piece.
Yeah, sure. I think that'll put you in a position to be more successful. But did we solve it?
It's pretty screwed up, man. Like honestly, like I want to
(28:23):
be. I have this part of me that's
like, wow, I can't believe I'm alive at this time.
And I have another part of me that says, I can't believe I'm alive, like, oh boy, where do we go with it, right? And it really, it rattles me sometimes when I think about this stuff, but yeah.
So do you have something you want to say to future Chris and Dave for when we review this episode 3 years from now?
(28:46):
If you don't have a trillion dollar business by 2035, you've
absolutely failed. No, I don't know, stay on your
toes cause I have no idea where this is going.
So see you later folks. Hey.
Let's get it rolling. Big ideas, money, hustle, smart
(29:08):
dreams so wild, we're turning that grind into a joyride.