Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
A media.
Speaker 2 (00:05):
Your scientists have yet to discover how neural networks create
self consciousness, let alone how the human brain processes two
dimensional retinal images into three dimensional phenomenon known as perception.
Yet you, somehow brazen lye declare seeing is believing.
Speaker 3 (00:18):
Yes, I do.
Speaker 2 (00:30):
I'm at Zitron. This is better offline. I'm your host,
by our merchandise. Go to my newsletter. Where's your ed
dot app?
Speaker 3 (00:37):
Anyway? Fuck all that.
Speaker 2 (00:38):
Brian Kopperman is joining us here in the studio. He's
the incredible writer, producer, and he's in the Bear a Bear.
He's a real deal actor. He's also the co creator,
showrunner and executive producer of Showtimes, Billions and super pumped
the battle for Uber. Brian, thank you so much for
joining us.
Speaker 1 (00:53):
Thrilled to be here with you.
Speaker 2 (00:53):
Man. We've of course got Mike drug At, the wonderful comedian.
And of course you have a book, don't you.
Speaker 4 (00:57):
Mike, Yes, sir, it's called Good Game, No Rematch. Get
out embarrassing myself with video games throughout my entire.
Speaker 2 (01:03):
Life, wonderful, I'll be embarrassing myself with tech on this show.
Sherlyn Lowe as well, joining us from mang gotcha hate
doing shell.
Speaker 5 (01:09):
I am about it down a protein shake hopefully.
Speaker 2 (01:12):
Yes, Yeah, we're just talking about the Ninja Creamy and
all the various slops.
Speaker 1 (01:16):
They're great Ninja Creamy. I'll tell you there's no endorsement
out there for any of us. They don't need it
because people just get these things and they immediately go
to tiktokh because they want to share how good they are. Yeah,
they're honestly, they're so good. That's why I have full
indorse full endorsement.
Speaker 3 (01:32):
This is the Ninja Creamy show.
Speaker 2 (01:33):
Now I'm going to get what's great about this is
literally any time I mention any product, I will get
an email from someone being like, here's a post from
two thousand and three from the CEO in which case
they shot the door.
Speaker 3 (01:43):
It's like some insane ed how dare you bring up?
I'm like, I don't know everything. Please tell me.
Speaker 5 (01:49):
Well, Fair Lives bottles have a lot of Dalley's in them,
so you know that's our that's our thing today.
Speaker 2 (01:54):
Yeah, I just eat like too much yoga. So I
want to actually start with a really good thing. So, Mike,
you wrote a really great piece of the Gamer I
think they went out, Yeah, yeah, very recently is called
I'm starting to worry this industry has no respect for
the people who work in it.
Speaker 3 (02:06):
Well, you walk us through.
Speaker 2 (02:07):
The article, because I think it's a good subject.
Speaker 3 (02:09):
Matsa sure well.
Speaker 4 (02:10):
As listeners may or may not know, Microsoft recently laid
off about nine thousand people, most of which were in
their gaming department. Simultaneously, they were claiming that their gaming
department's quite profitable and they're making a ton of money
and everything's going well.
Speaker 3 (02:24):
I don't know if that's actually.
Speaker 4 (02:25):
True, but they are saying that, And they've kind of
also said that a lot of the money that they
were paying these employees isn't isn't about losses, it's that
they want to move this money into AI development.
Speaker 2 (02:36):
Did they say that, Yeah, yeah, that's horrifying.
Speaker 4 (02:40):
Yeah, it's kind of im I mean they didn't say
that word for word, but it was very implied that
the budget was you know, they're more focused. That was
kind of the language. The language was like, you know,
as we restructure, we're more focused on AI going forwards.
Speaker 2 (02:51):
It's just so ironic, is like video games is one
of the first real exposures to AI for most people, yeah,
I remember when Fear came out.
Speaker 3 (02:58):
He won't remember fit. Yeah, it was one of the first,
one of the first.
Speaker 2 (03:01):
It was the first time a guy would like shove
his head over and look around before just jumping up
and getting shot.
Speaker 1 (03:06):
Yeah.
Speaker 2 (03:06):
It's just the reason this really spoke to me, other
than in fact it's very well written, is the I
feel like this is the central problem with a lot
of tech, entertainment everything right now, where it's like the
people at talk and you make this.
Speaker 3 (03:17):
Point well in the article.
Speaker 2 (03:18):
Yeah, the industry is the problem because the industry is
not being piloted by the people creating right exactly.
Speaker 4 (03:24):
I mean, you know, and it's a problem that also
extends to the you know, Hollywood, which I also work in.
Speaker 3 (03:29):
Because I'm famous. No, that's not true at all.
Speaker 4 (03:35):
But it's this problem that, like, you know, a lot
of the decision makers are not as close to the
product as they used to be. Yeah, I mean, I
mean you could even say the same same for things
like Boeing, where you know, you don't have engineers running
the department anymore. You have business people who've been brought
in because they're good at checking well track alliance exactly exactly,
and you know, these people are far away from the product,
they're far away from the development of it, so they
(03:56):
see these you know, workers as numbers on a spreadsheet.
And for better or worse, video game development is a
very long process. It takes a lot of time, it
takes a lot of money.
Speaker 2 (04:05):
It's getting longer. It's getting longer, yeah, horizon as well.
Speaker 4 (04:08):
And so it's you know, very easy for these companies
that are looking for a short term you know, next
financial quarter.
Speaker 3 (04:14):
Boost to go.
Speaker 4 (04:15):
We fired all these people and this product wasn't going
to come out for two years, so it really doesn't matter.
Speaker 2 (04:20):
And that's the thing with Microsofts well they've that's that
they've laid fifteen thousand people this year as well. Nuts
and it is the money really even going an It's
just so confusing. But you're seeing everyone with Hollywood. There
was the Acme versus Coyote thing. Yeah, sorry, Coyote vusus
Acme thing where they just like multhbuled it releasing it
happily mouth bolded movie. Yeah, to save on taxes. I
(04:41):
feel like that shouldn't be a loophole.
Speaker 3 (04:42):
I hate that loophole.
Speaker 4 (04:44):
I hate the idea of that loophole. I mean, it
almost reminds me of that loophole from and Brian. You
could maybe correct me on this, but remember like in
the nineties they made a Fantastic Four film they weren't
going to release just so they could maintain rights for it,
because the rights for it were you have to make
a movie to keep the rights, you don't have to
release it.
Speaker 1 (05:01):
I don't remember that exactly though, now that you say it,
it rings a bell, but I don't have particulars on it.
But I think what you're talking about really the answer.
I agree completely as you diagnose it with how depressing
it is. But to me, these things feel like a
system's response, like complexity theory and system well, they're like
(05:26):
systems responses to what's happening in the world, as opposed
to feeling like because I think it's easier for us
to look at the human being and go that motherfucker,
and that may be a motherfucker. But another possible answer
is that from a far remove, like something's happening right,
(05:46):
and when this thing is happening, it gets into quantum theory.
But it is a complexity system's response to this giant
change of artificial intelligence, having certain capabilities and all of this.
You can look at all of it, and I mean,
(06:08):
we've seen people who had one set of beliefs for
so long and on the record in your industry, completely
switch and they may think that's an autonomous decision they're making.
But I found a little bit of solace by reading
about complexity theory because that for me, offers a potentially
more not hopeful, but sort of a more complete understand.
Speaker 2 (06:30):
You're saying, like a systemic response to conditions. Because I
don't know about the AI capability side, but the existence
of capabilities potentially being there makes sense. In the they're
all trying to reconfigure for a future they don't know
if it's actually there. The general AI doesn't seem to
do it, but the potential of that, the idea of it,
is motivating them so much. Microsoft of all people as well,
should know how little money's actually being made from this,
(06:52):
considering out the thirty.
Speaker 1 (06:53):
So that's it, Yes, But executives, honestly, because I've studied
these people so much and written about them much, and
I understand how venal they are and how short term
they think at times. But they're also I think looking
at data that we don't have very often, and they're
(07:13):
trying to forecast out a long time.
Speaker 2 (07:18):
Why do you think they have that we don't have though,
because like with Microsoft for example, and I have probably
studied such in the DELLA too much at this point.
I really I learned too much about the growth mindset
and Karlyn Dweck and all of that nonsense. But the data,
if the data is there, they're acting very peculiar about
the data because they're only making thirteen billion dollars this
(07:41):
year from AI. Ten billion is a ur ur I
was from open Ai, burning giraffes and whatever they put
into the machine for chat GPT. But it's it feels
almost like they are reacting to what they hope will
happen or what may happen. I'm just trying to lad
it might just.
Speaker 1 (07:55):
Be when when everyone reacts to quarterly like the problem
our business, the entertainment. The thing is that these people
are responding to quarterly earnings calls that they have to make,
and maybe the data isn't about the long term possible,
maybe the data is literally that we don't know that
they're looking at is their little cohort of people and
the exact moment they can exercise which kinds of options,
(08:18):
some of which they have to declare and maybe some
of which they are somehow able to flip on a
different market.
Speaker 2 (08:23):
What Anthropic basically Amazon did with Anthropic, they flip their
investment into a certain kind of thing they could tax
the dug.
Speaker 1 (08:29):
I mean, there's no doubt that there's heartlessness, But I
just because Druckers here, I just want to say, you know,
even in the most depressing times or in moments where
the technology the platform seems brutal, people's humanity can transcend it.
And I remember in some dark days of Twitter, Drucker
(08:50):
was and it was really amazing man. And I remember,
you know, you know, I don't know each other well,
but we've known each other a long wi time, like
twenty years, and yeah, it's true. And I remember that
they're where many nights you were like not sleeping that well,
sleeping at the wrong time, yeah, and like sleeping during
the day up at night. But keep I remember him
really like very vulnerably talking about sadness and depression and
(09:13):
getting people to entrust sharing, and him actively trying to
like save people in a place where like where people
were so callous almost by profession on there by like
everything they wanted to do that right, Well, all everyone
was at home playing grand theft Auto, which could you
guys fix that and get the next one out? But
(09:33):
everyone's at home playing grand theft Auto, uh, and being callous.
And he was like literally going, let me explain depression
to you and what you can do and what the
resources are and why you shouldn't kill yourself, and which
I think is great. Of course you would lead the
empathy on this too and look at these assholes and
think about all the engineers, and it's the right way
to process it. I just think often it's not a binary.
Speaker 2 (09:55):
No, and that makes sense. And that's the thing I've
been saying like a lot about the show is I'm
talking about the pigs in the arsholes and the skumbags
and all the different names and the voices I.
Speaker 1 (10:03):
Do for them.
Speaker 2 (10:04):
But it's also about the fact that there is like
I'm pissed off case of Kagawa.
Speaker 3 (10:09):
Friend of the show's said this to him a lot.
Speaker 2 (10:11):
It's I'm broken hearted dramatic because things like what Mike did.
Speaker 3 (10:15):
Actually one of the reasons every day and.
Speaker 2 (10:16):
Gadget as well, is it's like Yeah, there are a
lot of these financial horrors in there, but there's fun,
dorky shit online. Still, there's still one Like most of
my friends are from the internet, you know. It's like
I'm like the drill quiet crying. It's like being on
Like it's like like everything I got is through emails.
Speaker 3 (10:35):
But I still think there is a lot.
Speaker 2 (10:36):
Of joy in this And I think the central thing
about your article now is just like beneath this capitalism
crush is actually some really like wonderful things being built.
There are still wonderful games being built. Yeah, there's like
an entire economy on Minecraft, for better or for worse
about selling mods and stuff. There are people on roadblocks
other problems with roadblocks obviously, who are like building games
in there and selling them. There's still some cool shit
(10:58):
happening in tech, just being in with because every three
months someone gets upset at them.
Speaker 4 (11:05):
Or finds a new way to make money off it,
which you have to you know, shove into a new category,
but the new thing.
Speaker 5 (11:10):
To chase, right that's where it is right now. I
think that what you were talking about, the like macro
picture of everyone sort of having that reaction, and I
think we will course correct in time. I think we're
right now at that stage in history where it's not
as Cotton dried to me as the NFT crypto sort
of bubble, where like it was clearly about criminals.
Speaker 1 (11:31):
Yeah, it was walking people.
Speaker 5 (11:33):
Over, knowing right, And I'd argue that on some level
these are criminals. There are criminals that play in this
scenario right now. I will say that with what Mike
was writing about in your article, the Microsoft thing was
all the more like I think when my team saw
the news last week, my instant reaction was, didn't they
just ratify a union contract? Was like their first ever
in the US? Too like to be saying that on
(11:54):
one hand, we really care about workers rights and really
want to protect workers, and the other you're like, and
those are their quality people, right And now we're like, oh,
game developers, man, nine thousand of you can go because
AI n PC's are all the rage right now and
they really make a lot of sense. A m PCs
never said a bad thing, never said anything.
Speaker 2 (12:10):
That was demo last year?
Speaker 4 (12:13):
Was it this year?
Speaker 5 (12:14):
I can't remember.
Speaker 2 (12:15):
They've done too and video wheels out at demo would
be like the generative AI NPC's here and within one
day someone has made it says layah or like it's
just said something in spane. Microsoft tay was the twenty
sixteen AI bought that learned from the Internet to be racist,
like one of a proto Mecha Hitler situation.
Speaker 4 (12:33):
Oh that was Grock Yeah taal that the Microsoft bathroom
Yeah from twelve or something.
Speaker 3 (12:41):
I forgot about that.
Speaker 5 (12:43):
But the thing is, back to Brand's point is that
this keeps happening and then we keep course correcting back
like it feels like there is a system's response and
Ed I think you're looking for the joy. I wouldn't
use the word intervening or interfering. I would say it's noise.
And the way for companies and people existing in this
industry to deal with that is to focus on what
(13:04):
you think is good. I find the irony in saying that,
which I think these tech bros that you're referring to, Brian,
they're also trying to focus on what they think it's good,
so they're cutting out noise from their perspective. And I
don't know it's that movie that was just released featuring
those four dudes in the Mountain Lodge.
Speaker 1 (13:20):
Oh not out Head Mountain.
Speaker 5 (13:22):
Mountain Head, mountain Head. Yes, so it feels like that.
It feels like everyone's got their little bubble, which is
also at the same time created maybe and supported by tech.
And if we continue to operate in silos, I don't know.
I feel like if we.
Speaker 1 (13:34):
Only if we only think of those and often they
are men in those roles. They're not all, but often
they are. If you think of some of those you said, bros.
But if we only reduce them to those guys running
around trying to kill someone in a song where I
think in a way it allows in a way, it
(13:54):
allows us to like not worry about them because we
can do them.
Speaker 3 (13:58):
Moll right, that's fair.
Speaker 1 (14:00):
Some of them are smarter, like just raw synthesizing power.
Some of those people, not all of them, they're smarter
than everyone in this building combined, couple of them. And
I'm not saying that that makes them good or anything
in any way good or in any way good for now,
because even if you say, like I agree with you
that some of them may believe that they're doing this,
that they're doing good, right, But if they're thinking in
(14:22):
a thousand year chunks, that's really bad for You've all
said this and you've all I think is amazing. And
he said, well, the problem is historians like me. He goes,
you may look at it now. He was on a podcast.
He said, you may look at it now and say,
but those things worked out. Okay, look where we are.
But as a historian, I have to look at the
cost of human life. Yes, that happened for all the intervals,
all the little intervals to get from there to here.
(14:44):
And that's really what you guys are just talking about. Rightly,
so is all the you know, the devastation in the way.
But but can I just because you might might be
so jaded, but do you not think AI is like
mind bogglingly great?
Speaker 3 (14:57):
It's nice.
Speaker 5 (14:58):
I can see the valance. I think it's like far
off you.
Speaker 1 (15:01):
Don't wait, you really I think it's like the single
I think it's why the single greatest invention in my life.
Speaker 5 (15:07):
What is what is your favorite thing that has done?
Speaker 1 (15:09):
I think the ability to have super high level conversations
about really esoteric about systems theory, right, you know it's
correct in them? Well you can, well, you have to
do a bit of work.
Speaker 2 (15:22):
I don't want to talk to something that I have
to verify constantly I talk to people.
Speaker 5 (15:26):
Well, you have to do that with people to some extent.
You can't trust everything you feel about.
Speaker 1 (15:30):
The automobile or is the horse drawn carriage still your thing?
Speaker 2 (15:35):
No, I'm just saying, no, I'm wondering what the point is.
Speaker 5 (15:37):
It looks very uneasy, right, I mean the point?
Speaker 1 (15:39):
No, I just have jokes in my head.
Speaker 3 (15:41):
No, I'm well, because I.
Speaker 1 (15:43):
Think you can decry the industry and the way people
are using it. And I guarante I'm pretty sure I
was reading Alazer you had Kowski before anyone in this room.
I mean, how to actually change your mind as a
book that I was obsessed to give it Christmas gifts.
Speaker 3 (15:55):
I can't take your ds.
Speaker 1 (15:56):
I just it's you gotta take him serious, isn't. I
mean you have to take one to take that guy's
brain seriously. He think everything you guys are worried about
he called out fifteen years ago in detail.
Speaker 2 (16:06):
Okay, I mean, like, here's the thing. The automobile is
not a comparison because we knew who had to go forward,
side side and everything like that, Like we had an
actual use case for that with generative AI. The way
that these conversations happen and large language models can be
conversing with documents. It's one of the only use cases
that is actually remotely useful because you can actually verify
based on the parts of the document. Having a conversation
(16:27):
with one of these. Fine, it's a thing. I'm not
amazed by it because there have been chatbots doing this
with KMS since like what thirteen, twenty fourteen.
Speaker 4 (16:36):
And the Eliza effect goes back to the sixties.
Speaker 2 (16:38):
Yeah, and Eliza even then, there's a Karen Howe's Empire
of Ai is a great thing about Eliza.
Speaker 3 (16:42):
What the creator was just like, why is everyone so
fucking impressed? But it's just like, really, just like, what
the fuck is this?
Speaker 2 (16:48):
I think what it is is. I am just not
that impressed by it. Based on the larger discussion. Everyone
acts like this as the fucking future, and it just
feels like a growth of the past. Chat GPT has
become for better or for worst is what Google could
have potentially been. It's insane the attention.
Speaker 1 (17:06):
I agree with you, but like, okay, I got a
tick bite. Okay, and you can literally the second you
get a tick bite, everybody says antibiotics you gotta go.
If you can't find the thing. I got a picture.
I put it on there. The AI was immediately able
to say, yes, I could verify it after because I called,
I was able to say, this is the kind of
ticket is, this is the area you're in, this kind
(17:27):
of tick. You don't need antibiotics. You're not gonna get
lyme disease. Here's why sent it to my doctor who
that it's feedback and they agreed they'll be harder to
do it.
Speaker 2 (17:36):
Why didn't you send it to your doctor before you, Austin, Well.
Speaker 1 (17:39):
He doesn't. His job isn't too like identify what ticket is.
Speaker 3 (17:42):
Oh, oh, sorry, I misunderstood what you said.
Speaker 5 (17:44):
Yeah, I want more clarity on the doctor thing. I
raised my hand as you were talking because I'm like,
did you trust the chat GPT answer and leave it
at that?
Speaker 2 (17:52):
Right?
Speaker 1 (17:52):
Then? I took when it said that, then I searched
for what it said and compared and it was right.
But it was.
Speaker 4 (18:02):
Out of curiosity and ed you might be able to
answer this question. Is that And I don't actually know
the answer. Is that iterative AI or is that generative AI?
Speaker 2 (18:10):
So we're using chat GPT so that it's a mix,
so that would be generative okay, and so that would
still be that And by the way, sounds like a
use case.
Speaker 3 (18:22):
It is the growth of so what you.
Speaker 5 (18:23):
Say, you know, I was going to say, I think
I think a version of Google could have done that
for you in the past two before chat chept. What
chat gupt, the generative side of it is providing is
the l M, the natural language interface about pulling a
lot of different sources of data and putting that together
for you. That is chat jept. You could have done
that with Google or I think there were apps right
that you could do a photo uh sort.
Speaker 2 (18:42):
Of recognition for a while.
Speaker 5 (18:44):
Yeah, and I to my knowledge that parts of Google
and Google Health researchers with their AI divisions were working
on apps that could identify different things like skin things,
so like is that a bug bite or is that
exemon kind of thing through the pixel phones. I don't
think they've ever broadly released it, but they were experimenting
way before chat GPT was even a thing. I would
argue that I think the LM portion of this is
(19:05):
more about how it converses with you and how it
like understands what you're actually concerned about. So if you
didn't give it the exact words of look up this
thing and should I see a doctor? Even if you
didn't input this your doctor request, it might you know
divine that that's what you realize.
Speaker 3 (19:18):
This is the thing like chat GPT.
Speaker 2 (19:20):
I don't think would have been a big deal if
Google had actually innovated and search at all, if they
have something adds to it. Google search has been the
same for like fifteen twenty years, except worse what you
were describing their valid its inference. Basically, it's infers the
understanding from the image and all this and then spits
out an answer.
Speaker 3 (19:39):
It's the use case.
Speaker 2 (19:40):
I think that in the way you use it that
was responsible. The problem is at scale that is not
going to work out so great because.
Speaker 1 (19:47):
I believe you know way more about this than I.
Speaker 2 (19:52):
No. I'm genuinely glad you brought this up because it's like,
but what about that is extrapolating out to the greater
AI replacing jobs? Think, because that's where my principal problem is.
People are taking what is what Google should have been
in twenty seventeen and turned it into this is going
to replace half of workers. A quote from Dario Amit
Day that was fucking made up.
Speaker 3 (20:12):
Which he set off the top of his head. It's
not a bird.
Speaker 2 (20:14):
Sorry, just getting Wario himself. Oh harrio amitay.
Speaker 5 (20:19):
I think what you're also bringing up, Brian, is that
there is this marketing I guess problem with AI, which
is that like AI has existed for a very long time,
and the current version that everyone's really obsessed with is
jen AI and generative AI is all about like what
it can generate for you, using large language models, using art,
like creating art, creating videos, all of that stuff. Is
this current iteration of AI we're all talking about, But
(20:40):
the previous stuff has existed before, and this is all
like just more of the same. I think that's why
so many of us in the industry are so frustrated
with it, because there's a misconception. There's also this idea
that the jobs that it's trying to replace are in
that field of coming up with art, music and words
that are not jobs at APay wall to begin with,
but be like our parts of our life that we
(21:01):
want it replaced, right, we want it.
Speaker 1 (21:03):
When I talk about Yodkowski, the reason I do is
that it is that he was somebody who he's right
on your side way and his thing always was this
is not going to be a net good. First, that's
not even he and I mean I remember, I'm sure
you remember when astro Teller wrote Ex Jesus. I remember
reading that book like the day it came out, and
it really did freak me out about what was possible.
(21:26):
Now we're not even there yet, right, it even isn't
where that but that version of Ai.
Speaker 2 (21:31):
And Butowski's he's Awskidkowski.
Speaker 3 (21:34):
Sorry, he's so find him quite distasteful.
Speaker 2 (21:37):
But on top of that, he's a fucking agi dom
and he's saying he's just yeah, sure if a frog
had but a frog had wings that could fly, It's like, yeah,
if the computer wakes up and does this, this could
be scary. He's one of the few people that seems
to what you think about what a g I could do.
But also we're so far more off from it that
I can't see him as anything but a grift of
because all he is doing is grift.
Speaker 1 (21:57):
It's just he's been on it long before there was grift.
He was on it as a nonprofit. No he no,
I mean, think about what I've written. You think I
know nonprofit's the matter with your one? Yeah, but he's
(22:18):
I have a different view. Now, I'm not saying all
his opinions are sure are correct. What I'm saying is
that that's somebody who flagged a bunch of these potential
issues a long time ago, and of course it shouldn't
and it's horrible that these people in charge are so
willing to slough off being the moment they think there's
(22:40):
this much of an edge. But also because of what
I study in life, I can't be so we've all
of us who write about this stuff in fiction, right,
you do it because okay, and you guys have to
report on something you have to. I can take sort
of like my partner, I could take what's in there
and try to dramatize what might happen, and always would
try to point out exactly that these motherfuckers do all
(23:00):
this shit. It's not surprising to me. It's horrific, but
not surprising.
Speaker 3 (23:15):
What bothers me as well with it?
Speaker 2 (23:17):
And I think you're completely right, And also like I'm
not completely saying everything you're saying is wrong, Like you
actually have well read on this, which is nice because
a lot of people who mentioned the AGI stuff don't
read anything.
Speaker 3 (23:28):
But I think the thing is as well.
Speaker 2 (23:30):
Is so much about this aiag I think is not
about what it can actually do. Yeah, the best fiction
about AGI, girlfriend Sarah showed me the kill Switch X
Files episode. Fantastic AGI episode. It's scary because it's not
about the people making it. It's about the computer itself
and the intentions spilling into it. A lot of what
the AGI discussion now is and it will auto make jobs.
And that's the last time I'm going to think about
(23:51):
it before I say, here's anthropic and it's and I
think what it is is there are discussions to be
had around what this stuff could do. If AI could
do the things that they're saying it could do, if
it could replace jobs, it would be doing it. But
also it would actually require a change in society. It's
almost as if they're like they want all of the
profits from all of the exposure and the ability to
(24:13):
lay people off without actually doing anything to earn it.
And it comes back to what you said about the
people running this shit aren't even trying to build AI.
They're not Mark Zuckerberg's building a Manhattan size get data
center to build superintelligence.
Speaker 1 (24:28):
I wonder though, when you say, and you guys ask
you to this. I wonder when you when when you
talk about the Google could have done this or should
have done this, And then it's a great point that
you made about that it is the way that it communicates,
because what I wonder is, you're all expert, you're all
native early adopters, native to not only the Internet, but
(24:52):
native to tech, and so sure you could you have
used Alda Vista to do oh yeah, ge for fox sake, baby,
But I'm saying you could have used all that, but
for generations of people who aren't literate, in a million,
billions of people exactly, even if that was possible through
(25:12):
Google image search. But you ever try Google image watch
people try to do a Google image searching.
Speaker 3 (25:17):
I agree.
Speaker 1 (25:17):
I'm thinking this is all making it for people friendly
to them, as you said, it easy for them, and
I wonder if that's.
Speaker 3 (25:26):
That's exactly what I'm saying.
Speaker 2 (25:27):
That's why people like chat GPT. It's not because it's
this amazing product, it's because it does what people go
to Google that's hard.
Speaker 1 (25:34):
But communicating well in an inviting, seductive way is very challenging. Yes,
So for text people to do that to regular people
is the amazing thing it's almost when you're saying Google
invented this tech.
Speaker 2 (25:49):
The attention the twenty seventeen Attention is All you Need
paper was eight Google scientists.
Speaker 3 (25:53):
I just look this up.
Speaker 2 (25:54):
I thought it was several, but it's like, no, shousea.
They ended up paying like two billion dollars of character
just to bring him back. Google had this technology and
had this movement just been we're going to make inference
of meaning better exactly what you're talking about, it would
not be this big story. The thing is they need
it to be what you were describing our use cases good, bad, whatever.
(26:15):
They are things that people use it for the actual
things they're describing on as a PhD level intelligence, it's
going to solve physics. And it's like, no, it's not.
But if they say, what if Google Search worked, they
can't make a trazillion dollar They can't be like this
is going to be everything we need. We're going to
build data centers. It's just because what you're describing, Brian
is the most evil thing.
Speaker 1 (26:36):
In going back to Mike Er, yeah, because I think
what I think in all these things, what I've learned
just being curious is to look at the incentives.
Speaker 3 (26:49):
Yes, that's what I mean.
Speaker 1 (26:50):
And Google was de incentivized to disincentivized to make search
work a bit for the user. Yeah, because of the
profit agenda, right you should, Oh, they think should make
a fucking profit. But here's the thing, in a weird way,
them not making a profit is better for the users. Yes,
and the Google wasn't. But I'm sure it's all on
the Google paper that nobody's fucking reading except I'm talking
(27:15):
about for the user. I'm trying to You're raising these
amazing questions, but if I try to think about how
to answer them, it's that these companies are like in
the early days of social media and all this stuff.
In the beginning, they try to super serve the user
to get you addicted to it so that then they
could But the incentive structure is what you got to
look at. And obviously Microsoft's incentive structure has been locked
(27:36):
in for a very long time, and that is for
the people who have the most stock in Microsoft to
make the most money one. That's the incentive.
Speaker 5 (27:43):
Struct Google too, to an extent, and that's why they
had that stuff for that long, but they never because
they never found a way to integrate it into all
the different parts of his businesses that matter. That ads part,
the search.
Speaker 3 (27:51):
Parts do still so they monopoly.
Speaker 5 (27:52):
They just they have a monopoly. They have no reason
to be. But I don't know if you remember this
Google Io twenty seventeen, twenty eighteen, even twenty nineteen when
first show kitchen Sink, which was their version of chat GPT,
but it wasn't as conversational. It was you can ask
this app to come up with ways to learn about
a new hobby or to plan a thing for you.
Before chat GPC even was known widely to everyone, I
(28:13):
think it was a I wasn't really even a thing,
and it just wasn't a chat interface. And I think
to Brian's point, the incentive that chat GPT has brought
to people and to my parents, who by the way,
discovered chat GPT last year, very annoying to me. But
it's like it's so much easier. It brings them into
technology in a way that technology used to be kind
of looking down on people for not knowing things, and
(28:35):
you deal you do away with that with the chat bot.
My parents not only like feel so hip now, which
sorry Mom and dad, you're not, but but also like
there's people who seek comfort in the companionship brought on
by AI chatbots like chests differently, they are amazing. They
dance at their age and I love that. But the
thing is, if you look at the use of jen
(28:56):
a I from the last and I think I talked
about this on that Kevin Ruse episode you're on, but
the use of chat aih services has to change, right.
Used to be like very interest based on very search basements.
Now companionship base as the top few uses. And that's
why people are drawn to it. And I think that
my last point that came up really when I was
like listening to you talk ed, is that like the
(29:19):
incentive for them to push towards like, yes, let's go
towards a GI. It's not just like laying off people.
It's also who can get there first. It's the race
of like it's like it's the tech AI ego thing,
that tech the text ceo ego thing, and then from
the ego standpoint, then they pushed down to profits.
Speaker 1 (29:36):
So many businesses are like so many businesses are already
sort of the industries are already acting in bad faith.
Oh yes, yeah, like okay, look at them. Here's here's
I'll give you, not a youth, like a real industry
that I think will be transformed by it. And I
think that's not a bad thing. Is like money management.
Money management, which is a billions of AI AI already
(30:00):
can replace delineate between Yeah, though, but well you can,
you can do that. But I'm saying, I know that
whole business, That whole business is about a front money management.
I'm not talking about the high net worth I'm saying
in general, people who use for their retirement accounts money
managers well front all that stuff. But within those companies
(30:23):
they're they're they're front facing language of humans who are
just trying to keep you invested. They don't know that stuff.
They are not offering value really, and they're taking his
big percentage from regular people trying to say it for
their retirement, and they're bleeding off my and in the
end they're going I was listening to.
Speaker 2 (30:40):
Like, No, I'm smiling because you were right already this
was the twenty fifteen through twenty twenty wealth.
Speaker 1 (30:45):
No, I know that all that stuff, but I'm talking
about even now like big banks, like the big banks,
I'm not talking about their their AI front. I'm saying
that I was listening to a talk given by Josh
Brown is uh and I one of his partners, I think, okay,
And he was talking about how essentially all of the
(31:07):
back end of all that stuff, meaning you might still
have a person talking to the user to but everything
else is going to be done by I just just
it is Michael Batnik from his company.
Speaker 2 (31:18):
He was talking because here's the thing with that is
from my knowledge of basically financial regulation, they don't want
LLM touching much of this. They there's a lot of
stuff within financial research happening now with jener I, a
ton of companies doing like insanely high compute burn to
do these massive kind of like evaluations of stuff. I
(31:38):
don't know if anyone wants to touch the money with llms.
And they've actually been quite resistant to it, partly because
they don't know how they work, Like they truly don't know,
I know, the black box thing, yeah, yea yeah, And
so it's like a lot of these things. Also, when
are they gonna happen though, because they've been saying this
for two years.
Speaker 1 (31:56):
But do you think there's a scenario, But do you
think there's a scenario where this this goes this goes away.
Speaker 3 (32:02):
I don't think it goes I.
Speaker 1 (32:05):
Don't think it's gonna do you think it's going to
reveal itself as as being fraudulent for what.
Speaker 2 (32:10):
I'll explain, I think you're going to see what call
him my shot. So I believe that open ai will
is an ongoing concern eventually go into nothingness.
Speaker 3 (32:21):
Matt Hughes, my editor, believes they'll become a patent. Sal
I actually think it's an amazing thing.
Speaker 2 (32:25):
I think that what we experience of large language models
will vastly pull back. I think there will be rate
limits that, as there will be rate limits on GPT,
people are going to be horrifyingly sad because those.
Speaker 1 (32:36):
Comparive companions are going to go away, And they're not
going to go.
Speaker 2 (32:38):
Away, they're just going to be much, much, much more limited.
And I think that everything we see today the kind
of and you look in any of the reds behind
any of the serious like GPT's, they're all kind of
saying like, yeah, we know that the abundance the free
ride is over. So no, it's not going away, but
you're not going to hear about it constantly, And everything
you use today is going to be so severely rate limited,
(32:59):
or those companies that are charging for generative AI things
beneath the surface. All of the API rates behind these companies.
So the things that you plug in to run the
models are vastly subsidized by big tech and by the companies.
Speaker 1 (33:13):
Alda Vista goes away and but Google takes over.
Speaker 2 (33:16):
Well, Google Gemini exists, but the Gemini request perhaps don't
hit the Lelem as much they.
Speaker 1 (33:21):
Have using it as a parallel.
Speaker 2 (33:22):
Yeah, but it won't be this big thing you hear
about all the time.
Speaker 3 (33:28):
I think you were going to.
Speaker 1 (33:29):
Of course, it'll just be the back end of lots
of stuff.
Speaker 5 (33:32):
No, it would just be something that sits large skills.
Speaker 2 (33:34):
It's also not.
Speaker 3 (33:35):
Good as a back end. Large language models are not
good back.
Speaker 1 (33:38):
At they're good at talking to you.
Speaker 2 (33:41):
Can they can divine stuff as well.
Speaker 4 (33:43):
One thing we're kind of circling on the consumer level
that we're not talking about is it's still a novelty now.
Speaker 3 (33:48):
I think it's going to.
Speaker 4 (33:49):
Be continued to be used. I do think that people
are going to continue to make lists and scheduling and
summarize summari like you know, what's a good vacation.
Speaker 2 (33:57):
And they are on device models that we are able
to do that.
Speaker 4 (34:00):
I just think that right now it's such a big
deal that everyone's using it because it's cool, because they
you know, your parents hear about it, because you're told
you should use it for work. And I do think
that it'll stick around. I do think there'll be a
contraction in the sense that you know, it'll be a
cool thing. But I think in fifteen years it's no
longer gonna be as funny to produce, like a picture
of a pig with like Mickey Mouse's head and three boobs.
(34:21):
And I feel like, now.
Speaker 2 (34:22):
That's a big, big Some comedy is timeless, like, but
I just.
Speaker 4 (34:26):
Don't imagine that in like fifteen years having the same
novelty as it does, or five years, ten years.
Speaker 2 (34:32):
If you take away all the headlines, if you take
away all of the money and you actually look at
what's there, it is everything. What you're describing is probably
the most useful thing. It's like, do I know this, Okay,
this seems plausible. I'm gonna double check it. It's kind
of like what in Karta could have been. I don't
even mean that sarcastically. I as a very cool child,
I was very cool. Would sit them then caught for
(34:53):
hours reading stuff.
Speaker 3 (34:53):
Because it was kind of good. Why you have access
to everything?
Speaker 2 (34:56):
And I think human beings a curious so of course
they're gonna talk to it if it talks back. I
just think right now, there is no business model. That's
the biggest one. The biggest one is there really is
no business model. Ads do not work, ads are not going.
How do you put you inject ads.
Speaker 3 (35:10):
Within an L and M. Look at what happened with
grog grok.
Speaker 2 (35:13):
Happened because they tried to make us let's make it
just how we crank up this racism dial and like,
how do we mess with this system prompt. But the
thing is those subtle changes for even advertising will be bad.
Perplexity's been talking about there and ads for a fucking year.
Not heard much of that aravant, and it's like, on
some level, regardless of how useful it is, the economics
(35:34):
do not make sense. They're nothing like an uber. Uber Yes,
you well know Uber was a complete fucking and it
still barely makes sense. They're raising another one point two
billion dollars, but you can at least tell someone exactly
what uber is and why you'd use it and it's
just kind of chugging along through necessity. I don't know
how necessary. Chat GPT is a've large language model, a
(35:57):
Google Gemini, and Google is claiming that doing efficiency stuff
that could lost. I think it will be heavily right limited.
Speaker 1 (36:05):
I'm certain you are all right about the business models
and the viability of this from a business standpoint. You
know so much more than I do. I'm learning and
it's fascinating as a user of it who is not
on the inside of the business. I my prediction is
you're dead wrong.
Speaker 4 (36:22):
No, Brent.
Speaker 1 (36:22):
I it's going to become the dominant thing in most
of society in lots of ways. I think here's people
are going to use it and it's going to be
part of them, like William Gibson predicted a really long
time ago. Like I think that it's fine that. I
think it's bad at creative things, the thing people think
it's good at. I don't. It's bad at that. I
don't think you can write a story. Yet none of
(36:43):
them can convincingly write a story like drugs are good,
Like there are lots of that stuff that's not there yet.
I have no idea. Again, you all know way better
than I did the science behind it, but you're asking
about working out before you could. There are ninety five
percent of people who are trainers can't do as good
a job of programming, and you could play different aias
(37:04):
against each other and asking questions.
Speaker 5 (37:06):
That's also ninety percent of them are influencers with no
serious backgrounds.
Speaker 1 (37:10):
But I'm saying, even if you talk to sign but
because you can show programming, it can track for it
can just now you may, you may, And I'm sure
you're correct. That's not the AI doing it. It's other
people could have done. But the way I I can
program and interact with you and allow you to catch up.
If someone asked me all day long, people are asking me,
(37:31):
wait the question about programming. Well, I can't program for you.
I don't know you well enough. But if we talk
generally about what your goals are, I could definitely talk
to chat, GBT or Claude and build something that you
could then iterates.
Speaker 2 (37:47):
Such engine were describing the iterations of such.
Speaker 1 (37:50):
Yeah, but it's packaged in this way. Now go ahead.
Speaker 5 (37:53):
The thing that three of us are going to tell
you are at different levels. I think it's coming to
you at the business model. The very like micro right
and sure, and yeah, that's the way kind of how
it's got to play out financially, and druggers come in
with the medium level of the use case and everything,
and I'm going to tell you that at the top level,
I draw another parallel for you, which is two things
come to mind. One is how bitcoin and crypto very
(38:15):
exciting everyone found in novels y factor and it ran
for everyone wanted to make NFTs and crypto a thing.
And then now is kind of dialed back down to
a less fever pitch and more of a regular body
temperature pitch. The macro metaphorical level is what I'm going
to say. I think AI will dial back down to
that Norman normal regulatory sort of body temperature. And then
to draw another parallel tinder was everyone was making their app.
(38:39):
The tinder of this tinder of real estate and we
were describing is like an interface that works really well
for something like a use case. The Chad GBT model
is an interface that works really well for question and
answer is seeking help assisting you with things that might
never go away, that might just get built into every app.
As access to LLMS becomes easier for developers, they'll build
(38:59):
it in to the Bank of America, Chat Bottle build
it into everything, and so I think that's where it
balances out eventually over time, maybe through rate limits. I
don't know that that's going to be the way. I
think some consolidation might happen.
Speaker 3 (39:10):
You have right limits own actually accessing your pa R.
Speaker 5 (39:12):
The question back and forth for the people who are
using chat gubt right or yeah. I think that will
come too, But I think eventually we're talking like five
years with Druggers. I think with the rate limits, maybe
in the short term, but I think even longer term
than that, we're seeing that might eventually go away. Those
apps may not even exist stand alone.
Speaker 2 (39:29):
Yeah, and I think I actually don't know if I
fully disagree with you about everything you're saying. It's just
the scale of what we're talking about might be different.
Everyone having a large language model to access. Just saying
that this is the future we're talking about doesn't really
change that much. I don't like the economics effect. I
know I'm going to do the business bullshit thing I do,
but it's the economic effects are quite limited right now.
Speaker 3 (39:53):
Now.
Speaker 2 (39:54):
If you're saying everyone will have a large language model
will be hamstrung in some way or what have you. Fine,
I can buy that for good or for bad. I
could see that happening. I just don't think that it
goes much further than what we see today. And I
think what you're describing is what Google Search should have become.
And like that was what I remember when Bard came out.
Speaker 3 (40:13):
I wrote about this and was kind of.
Speaker 2 (40:14):
Like, surely what chat GPT is is what search was
meant to.
Speaker 1 (40:17):
Be, right as the com Brothers said, you know, sure
if if if a frog you know, had had wings,
had wings, it wouldn't bump its ass hopping, you know
what I mean?
Speaker 4 (40:28):
And so and I'll say that, you know, I think
that even as it's completely absorbed into our culture, we're
still going to be on the phone listening to an
AI list medical options, yeah, human, human operator, human, Like
I don't think that's.
Speaker 2 (40:42):
Going away green seven seven one right, yeah, Like I
don't think.
Speaker 4 (40:45):
I do think that people are going to be like, yeah, okay,
I'll talk you through my problems until I hit this
part with my insurance. And I just want a human
right right right now. And I don't think that's going.
Speaker 3 (40:54):
Away, No, not at all.
Speaker 2 (40:55):
I actually think we're really more on the same page. Yah,
it seem because it's like, my whole thing is what
you're describing is the use case. I think there are
real harms, but I think we kind of agree where
the dangers would be. My thing is is that people
are extrapolating from that to this insane level, like this
whole they keep talking about agents everywhere. You've got Matthew McConaughey.
Speaker 5 (41:15):
And a But agent is kind of what I'm describing,
which is like every every service has its own chat
bot more or less. They're just using different.
Speaker 2 (41:22):
They're just using a different Yeah, And I mean that's
kind of what like Bank of America already has a
chat bot and it does it.
Speaker 3 (41:27):
Does not work.
Speaker 2 (41:28):
It's the bottom is when you're trying to search for
a transaction.
Speaker 1 (41:31):
He's talking about for sure, the human of course we
all do that. I guess I think it'll fool us.
Speaker 4 (41:38):
That's fair, that's fair.
Speaker 1 (41:40):
I don't think right, I've already fooled a lot of people.
Speaker 4 (41:42):
Very well, people into up killing themselves.
Speaker 1 (41:45):
Oh god, well that's tell your car that old character AI.
Speaker 3 (41:47):
Of course Google paid two billion dollars for them.
Speaker 1 (41:51):
I'm not making an argument that it's this is a
beneficent as in any way, shape or form. Like Like
I said, I've been reading about this for so long,
but I do think that to just sort of That's
why I brought up the horse and buggy because no,
the people who wanted the horses right, they were right
about a lot of stuff about the harms that would do,
(42:11):
the pollution, the pollution, the noise, the way would take
us away from our communities, like they were right about
so much. What they were wrong about was the inevitable
march of the future in time.
Speaker 5 (42:24):
That's I think the same. Before coming to this podcast,
I was reading Mike your piece and I was like, oh, yeah,
are we like in the Industrial Revolution forgetting about the
agricultural revolution? For getting all the revolutions.
Speaker 1 (42:33):
That came there. Because I've seen it all from when
I was I remember when AOL showed up and coffee served,
and I remember mess I was invited on a message
board where I was fourteen. I'm fifty nine, like this
guy I knew had a message board in New York
and had to dial up. And I mean, so I've
seen this, So I'm an early adapter of stuff, even
though I'm an old dude, I'm but not from a
tech side, from.
Speaker 3 (42:53):
A user side.
Speaker 1 (42:55):
Single. This is this single as a just as a
user meaning I don't know how it works why, but
I can explain to you why people are so fascinating
and NFTs I was on. You could find the old
tweets going calling people buy one? Are you fucking I
don't know. Studying people is like literally like why I
(43:16):
started doing what I do. But like, no, of course
I recognize that as a con from the moment one.
Speaker 5 (43:20):
But this doesn't mean like a con.
Speaker 1 (43:22):
But even when you say the bitcoin things today, yeah,
I know, so Bitcoin one, NFTs were always huckster devices
to separate suckers from their money. And look, Theodore Eblin said,
and David Mammock quotes it all the time. Every profession
is a conspiracy against the laity, every profession fox over
the regular. Yes, that's the and there's no doubt about Yeah,
(43:48):
Mamott was ahead of that by twenty years. But you
gotta you gotta look at it and not me Mammott
and then me, but but so uh, you got to
just understand that it is very active. Sure, people, I'm just.
Speaker 2 (44:01):
Trying to bridge from what you're describing to the automobile,
which changed everything. I don't think large language models. I
think they're going to create harms. I think they are
going to be things that change. But it's like what
happens now, because what we have right now is basically
what we've had for two years. If you want to
email me about reasoning, please do our all email as
(44:22):
much as you want. But the thing is you look
at this and people are going, okay, and then this
will happen. It's like that thing that with an LLM
that goes and does something really basic. Thing an LLLM
that you tell to go and do a thing online.
They are bad, bad at it. There was a Salesforce
study that sayds like thirty five percent they like it
(44:43):
was like thirty something percent they fail or that that
was only the only ones they complete it.
Speaker 1 (44:48):
I don't think I've never asked an AI to do
a task for it. That's I'm saying. I've never asked
it to do it. Do one of those agent kind
of functions. I wouldn't agree with you. I wouldn't well,
I don't think it's there yet. I mean, for again,
as a user. But research, I've had good research.
Speaker 5 (45:00):
Done, really well, I mean the AI and jen AI
has done a lot of good stuff in the medical
fields too, right, Chrisper and all of that stuff. There's
been a lot of discoveries about what sort of mutations
you find in certain types of cancer that like, I
don't think that done.
Speaker 1 (45:15):
I want to study an industry to consider writing about it,
I can ask you have to ask good question. I mean,
it's like in anything else, right, I can ask really
good questions of an AI and send it, you know,
for the two hundred dollars a month model one where
they'll do that research thing. And if I ask it
to do research and then you can, it'll like it
will come back and maybe you have to send it
(45:37):
back three times, but you have the speed and accurate. Yeah,
but you can very quickly a lot of it because
you can. Like the booklist thing that I can't fathom
being that irresponsible, like those idiots who in the paper,
but you literally all you have to say to them
is when it presents any kind of list, all you
(45:57):
say to it is go verify that list. Please.
Speaker 3 (45:59):
How do you know what right is?
Speaker 5 (46:01):
Or myself? I mean I would at some point, but you.
Speaker 1 (46:04):
Go verify the list. It immediately goes, You're right, I
was hallucinating these three books don't exist. That happens. Then
you go do it one more time and make sure
that these titles are available in these stores. Then they'll
give you links and then you can go look at
something that you're doing.
Speaker 4 (46:20):
And I think sometimes we're splitting here though, is you're
both of you are extraordinarily intelligent people who have done
a lot of research in your life, so you know
how to do those states. You know how to be like,
I need to call this up, I need to search
this make sure. I do think that one of the problems,
again coming from like the low level consumer level, is
it's often being marketed as an impartial refer impart.
Speaker 1 (46:40):
You're totally right, You're totally right. It's not that you
cannot you'll be fucked so bad.
Speaker 2 (46:45):
But most people don't interact with it like you do,
is what I'm saying.
Speaker 1 (46:48):
Well, but but they can.
Speaker 4 (46:49):
They can, and I agree with you, they can. I
think it's almost but it's it's.
Speaker 2 (46:54):
They don't lead them to do that. It's not like
they have things that guide them.
Speaker 3 (46:57):
Right.
Speaker 1 (46:58):
This is really interesting here what you're saying about this.
Speaker 4 (47:00):
Yeah, well, I mean I think that you know, when
you see people and we all saw the Rock Nazi thing,
but if you see people on Twitter, they don't use
they I would say, fewer use it to do not
see ship as much as they go, hey rock, is
this true?
Speaker 3 (47:13):
Is this true? Is this true?
Speaker 4 (47:15):
And depending on what fingers, on which scale, the answer
is different each time.
Speaker 1 (47:19):
But I think I understand why. I mean, I haven't been on Twitter since, well, whatever that date is, so I don't know. But of course you can't just say let's go to the AI as though that's a final arbiter, and it's certainly not, today.
Speaker 3 (47:37):
But it's being positioned that way.
Speaker 4 (47:39):
I think that's my problem is it's being positioned that way.
And I think you're absolutely right. I think you're absolutely
right how useful it is, especially if you have the
skill to use it. I think the problem is it
is being marketed as this is a catch all solution,
this is a panacea to your knowledge problems, and oh.
Speaker 1 (47:54):
Yeah, you know what I mean.
Speaker 3 (47:56):
Well, it is like.
Speaker 1 (47:57):
Believing Rockstar Games that they're going to get fucking Grand Theft Auto out, right? I mean, no doubt. I literally look at the actuarial table. If you had to build one on my life expectancy versus when the next iteration of GTA comes out, I might lose.
Speaker 3 (48:14):
But here's the thing. Here's the thing.
Speaker 2 (48:15):
Though the research you're describing, it isn't an invalid use case. What you're describing is how people use Google Search to do research. They pull up a bunch of stuff, they go through it, they look at it and they go, is this right?
Speaker 1 (48:26):
That's a slog?
Speaker 3 (48:27):
It is a slog? Why is that a slog?
Speaker 1 (48:29):
No, it is not a pleasure. It's not. No, you've got to be honest about it. It is a total pain.
Speaker 2 (48:34):
I should be clear: I have actually used these things. I've genuinely tried, because I love my doodads and gizmos, and I've really sat down and been like, what am I missing?
Speaker 1 (48:44):
You just became Paul McCartney. That was a Paul McCartney moment.
Speaker 2 (48:50):
He has not answered my invite for the show, though. No, I mean it, Macca. My mom would be so happy.
Speaker 3 (48:57):
No, it's just.
Speaker 2 (48:59):
I really feel like this keeps coming back to: you were using it in a totally fine way. I'm mad at the fact that everyone's like, this is the only thing you'll need, you can fully trust this, this is the best thing, the information's the best. You're looking at it and going, this is a way of digging through information and parsing stuff and having a conversation with the information.
Speaker 1 (49:14):
Not always common in the broader society, yes; that's the lowest common denominator. But unfortunately, yes, you're right, our educational system is really fucked up. Disadvantaged people have no chance. Mike went to a school that allowed people from disparate areas to get in. We gotta, yeah, we gotta reform the education system so we can trust that everyone has a fair shot, I agree.
Speaker 2 (49:33):
So let's do it. Yeah, but there's actually a very basic thing we could do without that. There should be regulation that says that these things need big fucking disclaimers that say, hey, check everything. They won't do it, because of the incentives we discussed, but that would actually be, I think, a start. The stealing is also bad. I think the environmental damage is bad. But I think we agree on that.
Speaker 1 (49:52):
Safeguards are always great. But then you gotta figure out who decides what those safeguards are, and who's... Like, you talk to Bill Gurley, he'll talk about regulatory capture. So where do you want... I'm just saying, no, no, I agree with you. He's a smart person and he's, you know, thoughtful about this stuff. And so who do you want to do it? You want the current government writing the guardrails? Who
(50:15):
is to do it?
Speaker 3 (50:16):
But here's a very basic guardrail.
Speaker 2 (50:17):
It's just a disclaimer that says everything with the generative AI is blah blah blah, you know.
Speaker 5 (50:21):
Some of them kind of have it now, but they're all in preview, right, and that's probably how they're couching it.
Speaker 1 (50:25):
Again, they do say it, right on there, every single time. Every single time. I got one today, actually, for sure.
Speaker 5 (50:35):
Yeah, because they have been criticized. But to Brian's larger point, there are guardrails put in place in a lot of these. I'm most familiar with the Geminis and the Apple Intelligences and the Amazon ones of the world, and their guardrails are around, like, CSAM, right, child sexual abuse material, or not presenting people's faces, or trying to avoid photorealism, because then you
(50:56):
get very deceptive very quickly. You don't see any of that in Grok. Maybe, it depends.
Speaker 3 (51:01):
Look, I just used the... which one does?
Speaker 1 (51:04):
Really it does. But I'm not gonna turn on my phone.
Speaker 2 (51:06):
But I'm just like, here's the thing. If they're there sometimes and not others, that's also bad. Because when you've got people who are killing themselves, people having... it was Miles Klee, I think it was Rolling Stone, who wrote this piece about people having psychotic reactions. Yeah, I agree, this administration probably won't be regulating this. But the answer
(51:27):
being let's regulate nothing is terrifying.
Speaker 1 (51:29):
Well, it is confusing, because if you think about, let's say, Facebook, right? There's no doubt that Facebook was used in Myanmar to, uh, foment a genocide. People were warned inside; who knows how far up it got? It is very well documented. How could one say,
(51:53):
after the fact, it's okay? We know it's really hard to sort of figure out these use cases. And then, should all social media have gone away, like some people think it should? Would social media feel
Speaker 3 (52:03):
Better about that.
Speaker 2 (52:04):
If Andrew Bosworth, the CTO... yeah, yeah, yeah. I mostly bring him up for this, because of "The Ugly."
Speaker 1 (52:11):
But if I know... well, that's good. If I know what he did. He did an...
Speaker 2 (52:16):
Internal letter in twenty sixteen, twenty seventeen, where he said that all things were justified for growth, including a terror attack. And that's kind of how they approach everything.
Speaker 1 (52:24):
No, it was monstrous. Like, when I saw that Myanmar thing, it made me say I should never use Facebook. I mean, I think that is as bad a thing as I've ever seen.
Speaker 2 (52:32):
I mean, literally, here's a current example: Meta's LLM allowed children, and Jeff Horwitz reported this in the Journal, allowed children to, like, have sexual conversations with John Cena. Very peculiar. Like, it was super clear.
Speaker 1 (52:53):
You said with John Cena? That's what I said.
Speaker 2 (52:56):
No, it was just John Cena. He's gone to jail for one hundred years.
Speaker 1 (53:02):
On The Bear, I was in a scene with John Cena, lovely guy, and he definitely wasn't doing that.
Speaker 2 (53:07):
No, no, he seems like a lovely guy. I'm saying it was the voice of John Cena you were allowed to have pedo conversations with. But again, it's this lack of restriction, because no particular technology is evil.
Speaker 3 (53:18):
At it's it's.
Speaker 5 (53:34):
I think what Brian was saying, and I don't know if I'm misinterpreting you, is that there is some fatigue at seeing these things, like all the pharmaceutical warnings you get at the end of every commercial: every warning you're gonna get from Gemini from now on, every Apple Intelligence warning that's like, notification summaries can be wrong sometimes. I see them on my phone.
Speaker 2 (53:52):
Ah, no, no, no, I believe you.
Speaker 1 (53:54):
I guess what I'm saying, and why I brought up Myanmar, is that we as a society, unfortunately, default to the convenient, fun,
Speaker 4 (54:06):
Easy way.
Speaker 1 (54:07):
Yeah, that's exactly what we all should have done when Facebook... And I'm not even blaming, exactly. Like, you can blame Boz; I don't know enough to blame anybody there. We know that Facebook itself was used by those generals in Myanmar, right? Yes. And nobody took pause. I think very few people left Facebook as a result of that. So I just don't know
(54:28):
what the fix is for these problems, because we gravitate like bees to honey.
Speaker 5 (54:33):
But they're also, like, tools that can be used as weapons, and it depends on the perpetrator, the person using them, right? And this is an age-old question, coming back again to the industrial and agricultural revolutions. This can be just a tool for hacking a plant off its stalk, or it can be used as a murder weapon. Back to AI: it can be very informative, very helpful for people who need companionship. And it can be abused. Like, people
(54:55):
will send me scam texts all the time. The technology keeps trying to keep up with them and filter them out, so they keep changing their spam. Bad actors are going to bad-act; that's just the way it's going to be. What can we do? I can stop using Facebook. I try to educate my parents every single time they use a ChatGPT answer with me. I'm like, Mom and Dad, stop using that. But then they just keep using it,
(55:15):
because they're the sort of people this is accessible to. They will just use it for convenience, and they don't want to do the extra work of maybe the Google Search method which you were describing, or they just want something that's easy and they don't care if they get it wrong.
Speaker 1 (55:30):
Yeah. I guess I love the idea of you using your brains to figure out how to make these things safer and more useful as you agitate within the industry. I think trying to find a way for them to disappear seems impossible. And again, like, NFTs were obviously going to disappear, but the underlying bitcoin thing
(55:50):
wasn't, because there's too much money in it. The moment this election happened, the moment this election happened, bitcoin was going up, because...
Speaker 2 (55:59):
The thing about that is, we could have stopped crypto. I was writing about it at the time, I was. And how were you going to starve crypto? By informing people about the inherently criminal aspect that underlies everything, the fact that Tether is more than likely in the hands of multiple different... I tried, and I failed, and you know what, it sucks. But you try. You can put ideas out there, you can see if people pick
(56:20):
them up. And I mean, that's kind of what you do. In the case of crypto, it's such a weird thing as well, because it's there, but it isn't. It doesn't really do anything other than fund things, or be funded, and it just kind of exists. They don't even... I kind of respect the fact they don't even try and...
Speaker 1 (56:37):
You're literally talking out of Charlie Munger and Warren Buffett's book. That's what they always said. That's their whole point: forget the blockchain, it doesn't really, you know, do anything yet.
Speaker 2 (56:50):
And the thing is, though, there was real money in that, which there isn't in generative AI. And I think what's funny is this convenience thing you're talking about may actually be their downfall, because those five hundred million weekly active ChatGPT users cost them billions of dollars. It's probably all gonna...
Speaker 1 (57:06):
That is fascinating. Interesting. You mean their popularity is going to destroy, yeah, those companies? Not the underlying tech, but the companies themselves? That's fascinating. I'm really, like... I'm saying, you're teaching me something I had no idea about.
Speaker 2 (57:20):
I'll explain it very simply. OpenAI last year spent nine billion dollars to lose five billion dollars. Anthropic lost five point three billion; sorry, they spent, like, eight or nine billion to lose five point three billion dollars, of which a chunk was just given to Amazon for servers.
(57:40):
It's very fucking weird. They lose money. Their conversion rate on ChatGPT is awful. Of those five hundred million, I think they have fifteen and a half to sixteen million paying subscribers. They don't publish monthly active users, because if they did, you could do the math and see it's trash. And on top of that, they just can't find a way to make money. They lose so much money. So what's more
(58:01):
than likely is, yeah, these companies might die. LLMs will hang around, because there are use cases, and Google has that. Like, Jeff Dean over at Google is one of the least evil tech people. There are actually people there who like the tech and give a shit about it, and there's more efficiency stuff coming out of Google's models. The thing is, what we see today I do not believe will last. I think the most annoying scenario is that the longest-lived
(58:21):
large language model with unlimited access is going to be on fucking Meta, because of their unrestricted tripe that's on every fucking app. But I think things like ChatGPT are just going to be limited. You may still have people who use them the way you do, with the gym stuff.
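(For the curious, the conversion math Ed is doing in his head is simple division; here is a quick sketch in Python using the figures quoted in this episode, which are the show's numbers, not audited ones.)

# Rough sketch of the conversion and loss math using the figures above.
weekly_active_users = 500_000_000
paying_subscribers = 15_500_000  # low end of "fifteen and a half to sixteen million"

conversion_rate = paying_subscribers / weekly_active_users
print(f"Conversion rate: {conversion_rate:.1%}")  # ~3.1%

# "Spent nine billion dollars to lose five billion dollars" implies:
spend = 9_000_000_000
loss = 5_000_000_000
revenue = spend - loss
print(f"Implied revenue: ${revenue / 1e9:.0f}B on ${spend / 1e9:.0f}B of spend")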
Speaker 1 (58:35):
But it's interesting, what you were saying. Maybe an answer, if you think about the industrial revolution, is that eventually people are going to have to be trained: instead of firing nine thousand people, train nine thousand people to become prompt engineers. Yeah, become prompt engineers. And the fact that they didn't... I mean, look, no one is
(58:55):
less surprised than me by incentive structures making these motherfuckers act like evil, completely non-caring, you know, monsters. But if they can be convinced that there's a profit motive in training people...
Speaker 3 (59:09):
There's no profit.
Speaker 1 (59:11):
No, they can be convinced of it. But to be clear, in all these businesses there was no profit until... I mean, how long did it take Amazon to become profitable?
Speaker 2 (59:19):
Amazon took about eleven years, with AWS. But AWS was a concern that reduced costs for Amazon itself to run its own infrastructure.
Speaker 1 (59:25):
But in two thousand, a lot of people thought it was never going to become
Speaker 3 (59:27):
Profitable, right, Yeah, but that's not the same.
Speaker 2 (59:31):
Yeah, but the economics of compute? It's completely different economics.
Speaker 5 (59:34):
So, Brian, how much would you pay a month to keep using GPT?
Speaker 1 (59:38):
I'm a bad example. I'm older, and I've done well, and I've... do you know what I'm saying?
Speaker 5 (59:44):
Do you pay anything right now?
Speaker 1 (59:45):
I pay two hundred dollars a month.
Speaker 5 (59:47):
Two hundred a month. Okay, so Google one's like two fifty.
Speaker 3 (59:49):
Do you pay two hundred for.
Speaker 1 (59:51):
The research, because of the supercharged research. The research on the twenty-bucks-a-month one, though, is not the same level.
Speaker 4 (59:58):
That's an interesting thing to point out, too: you're getting better quality by paying for a higher tier.
Speaker 1 (01:00:02):
I'm aware of it. Seriously, is there a quality difference if you look at it? Oh my god, they would tell you there isn't any. What is the rate?
Speaker 5 (01:00:14):
So they sell you a shit product to make you buy the more expensive one?
Speaker 1 (01:00:16):
Well, I don't know if they sell a shit product, but you're saying... here's when that happened: someone told me, someone I trust a lot, a person who's tech savvy. They used it for a while, and a couple of months ago I was in a meeting with them, and they were like, if I was going to tell you to spend money on anything, spend the two hundred bucks.
Speaker 3 (01:00:34):
Do you want to hear something crazy?
Speaker 2 (01:00:35):
Though?
Speaker 3 (01:00:35):
Yes, they lose money on every two hundred buck a
month customer.
Speaker 1 (01:00:38):
I believe you, because... when it does that search, you're saying it's burning so much.
Speaker 3 (01:00:44):
But that's the thing.
Speaker 2 (01:00:45):
How likely do you think it is that that will continue? Because the deep research that...
Speaker 1 (01:00:52):
Well, of course there's a number, and very soon. I'm already at that.
Speaker 5 (01:00:55):
Yeah, you're at the very high end of it.
Speaker 1 (01:00:56):
Yeah, yeah, for sure. Double this? I would not pay double that.
Speaker 2 (01:01:00):
I would not pay... and I will be honest, and this is not even me being, like, a hater or anything: it may not be that cheap, because right now...
Speaker 3 (01:01:08):
It's the big story of my newsletter. Please read it.
Speaker 2 (01:01:11):
Look, for a two-hundred-a-month power user who's already using it, they're losing money. They lose so much money on them that it's like four hundred, five hundred, a thousand dollars a month. Claude Code right now, which is slightly different because of the way they do context stuff... nevertheless, on the two-hundred-dollar Max tier on Claude, they could be losing... there was someone on Twitter who
(01:01:34):
spent ten thousand dollars in compute on a two-hundred-a-month subscription. These are the power users, and power users go nuts, as you well know. This is why I'm so pushy on the economics, because what we are seeing today, it's like if every Uber weighed forty thousand pounds and the fuel was giraffe blood. It's just this insane economics, and I sound like I'm kidding,
(01:01:55):
but the economics are completely bonkers. So as much use as you're getting today, I just don't know how, practically, they'll keep providing that. And there might be cheaper models, but the cheaper models might not be able to do it. I see you use o3? Yeah, I've got it.
Speaker 1 (01:02:08):
Yeah, yeah. So, which takes a lot... yeah, sometimes it still takes such a long time.
Speaker 3 (01:02:13):
That's the thing.
Speaker 1 (01:02:13):
You can have a conversation and that's long.
Speaker 2 (01:02:16):
How much of this is sustainable long term? And I
know the business stuff is annoying because you still have
your experience, which you like.
Speaker 1 (01:02:23):
No, it's not annoying, it's fascinating. I'm fascinated about the
whole thing. This is just great.
Speaker 2 (01:02:27):
It's just, the long term here, and I'm talking long term as in eighteen months, is: I don't know if we will have deep research at any price point approaching the one we have in this period.
Speaker 1 (01:02:37):
Well, the deep research is the only reason anyone should
pay the money.
Speaker 2 (01:02:40):
And that's the thing, though: the only reason that people should pay is the thing that is not sustainable. So when you talk about how we bring this toward, like, an AWS situation: AWS's lack of profitability was built on infrastructure expansion. It was because they were building the rails of cloud computing writ large. It was never in the billions and billions a year with no path. There is no path to profitability here. There was
(01:03:03):
one model of OpenAI's that they were meant to deliver to Microsoft in twenty twenty three, called Arrakis, and
Speaker 2 (01:03:12):
They failed to do it. They have yet to discover, or make, because it may not be mathematically possible, a really good large language model that can do that kind of reasoning, that would be reliable and have the web search too.
Speaker 1 (01:03:25):
And I want to say, you do have to, when you talk about prompts... Sorry, I was thinking about this when I was learning about complexity theory. I read this book by a guy named Neil Theise that I loved; he's a professor at NYU. And I'm not good at physics, and I really wanted to understand quantum physics, and so I would ask questions, right? Do research on this and then find a way to be able to explain it to me. Go read these books, go find documentaries. And then
(01:03:48):
I quickly realized at some point that the AI hadn't... it was clear to me it hadn't read something, that it didn't have access, but I could figure out that it hadn't. And I just said, like, did you look at that video? It was a video. I go, did you really look at that video, that speech? And it immediately went, no, you got me there, I didn't look at the speech, but I'm gonna go
(01:04:08):
find it a different way now. So you do have to be vigilant, acculturated to asking those kinds of questions, like you maybe have to have advanced degrees or have trained yourself. So this is fascinating, right? Because I take for granted the steps I take to make my way through the world,
Speaker 3 (01:04:31):
Yes, and.
Speaker 1 (01:04:34):
Like the way that I would interrogate something like that
to get to an answer that's useful.
Speaker 5 (01:04:39):
Yeah, as opposed to maybe my parents would be like, oh, okay,
you watched that video.
Speaker 1 (01:04:42):
Okay, okay, and they might get wrong information from this.
I can't argue with that. You're right about it.
Speaker 2 (01:04:47):
And I think a lot of the grand theory of this comes down to: people have a lot of questions and not a lot of people to ask them, questions about their life, how they're feeling. And I think the same answers the medical side, where it's quite hard to ask a doctor a question in any country, it's hard to know whether they'll take you seriously, and also doctors regularly make you feel annoying. I'm not talking
(01:05:10):
about my doctor.
Speaker 3 (01:05:10):
He loves me.
Speaker 2 (01:05:12):
But you can't go to a doctor regularly with little questions. They don't have the time, because they must maximize profits, or they are busy, one of the two. Yeah. And ultimately we are sitting there going, I've got this weird rash, or, like, my leg itched in the same place three days running.
Speaker 3 (01:05:27):
What could that mean? And you can't really google that.
Speaker 2 (01:05:30):
It's just: you're dying. But you can't really google that, and ChatGPT or whatever can do an impressive impression of an answer, or, in your case, Brian, and I think this is reasonable, lead you in a direction toward, like, a Mayo Clinic article about a particular thing, something you could raise with your doctor. That makes sense. People are lonely; people just have weird questions. And I think that there is
(01:05:51):
partially a bad vibe where it's like everyone wants everything immediately, we must have everything we want immediately. But also people are curious, and we are more connected as people, yet more decentralized as people. We don't meet people; we are, at scale, overworked and underpaid, so we don't have the time to be generous with our time.
Speaker 3 (01:06:12):
Yeah.
Speaker 1 (01:06:12):
I'll give you one other use case, though, because it goes to the question of training. You could use this for, you said, your parents or whatever. So I'll tell you: if you are somebody who does, like, deadlifting or squatting with a barbell, as an example. If I put a thirty-second clip into ChatGPT and I say, please watch this, tell
(01:06:36):
me, is this form at risk of injury? How should I modify it? Is it good enough? What do you think about the load of this weight? The answer it will give, and I have checked it against, like, the world, the answer it will give is outstanding. And how would you get that another way? I don't think there is one. And that's a small example. Well, that's
(01:06:57):
different than search. It's an expert. No, I'm searching using a video. No, no, it's not.
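(For the curious, the workflow Brian describes, handing a short lifting clip to a multimodal model and asking about form, can be approximated by sampling a few frames and sending them to a vision-capable chat API. The file name, frame count, model, and prompt here are all illustrative assumptions; the ChatGPT app does its own video handling, and none of this makes the answer medically reliable.)

# Sample a few evenly spaced frames from a short clip and ask a
# multimodal model about the lifter's form.
import base64

import cv2
from openai import OpenAI

def sample_frames(path: str, n: int = 8) -> list[str]:
    # Grab n evenly spaced frames and return them as base64-encoded JPEGs.
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for i in range(n):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i * total // n)
        ok, frame = cap.read()
        if ok:
            _, jpeg = cv2.imencode(".jpg", frame)
            frames.append(base64.b64encode(jpeg.tobytes()).decode())
    cap.release()
    return frames

content = [{"type": "text", "text": "Is this squat form at risk of injury? How should I modify it?"}]
for b64 in sample_frames("squat_clip.mp4"):  # hypothetical file name
    content.append({"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}})

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative vision-capable model
    messages=[{"role": "user", "content": content}],
)
print(response.choices[0].message.content)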
Speaker 5 (01:07:02):
The matching is a bit less sophisticated in a regular Google search.
Speaker 3 (01:07:07):
Oh I'm saying that.
Speaker 1 (01:07:08):
No, it's merely matching it against a perceived perfect form. It's looking at your femur, literally. It'll go: with your femur size, this is the kind of squat that you could do, low bar versus high bar, here's why, here's what this looks like, you're over... I'm agreeing with you.
Speaker 2 (01:07:21):
I'm just saying that search, as a term, has grown to include "look at this image and compare it to these sources," which is, theoretically, still search.
Speaker 1 (01:07:28):
But then it gives you, in good language, the right language, how to fix it.
Speaker 4 (01:07:35):
And I worry if you ask a bad question... like, not even a bad question, if you phrase a question incorrectly?
Speaker 3 (01:07:40):
Sure, And.
Speaker 4 (01:07:43):
I'm not saying you. But if one phrases... let's say they ask a fitness question, but they phrase it a little bit weird, and the answer they get is harmful. I'm worried about situations like that. And it also feels like we're removing an element of human responsibility or accountability, where it's like, well, the machine answered weird, rather than, like, a doctor answered weird.
Speaker 3 (01:08:03):
And I think that lots of corporations love that. They
love that because they're already doing it.
Speaker 4 (01:08:07):
If my AI accidentally denies your insurance claim, it
Speaker 3 (01:08:10):
Was the AI, that wasn't us.
Speaker 2 (01:08:12):
It's the algorithm. It's the same algorithmic pass-the-buck thing. But I think you're right that that is kind of cool. I also have used o3, because I do pay for this; I'm not a baseless hater. I did ask, what's the distance between the bottom of this photo frame and the floor, and it spent ten minutes to give me a completely, insanely wrong answer. They're not
(01:08:33):
good with numbers.
Speaker 1 (01:08:34):
Oh, fascinating. The other day, in this example I just gave you, it said: I can't. It just said, like, I can't see that, which is good. That was great.
Speaker 3 (01:08:42):
That's what it should do.
Speaker 5 (01:08:43):
That was great. Were both of you using the same one? I was using o3.
Speaker 2 (01:08:46):
On ChatGPT Plus, though. So now I don't know how I feel about giving Clammy Sammy two hundred bucks.
Speaker 1 (01:08:54):
Pay for a month and see what it does.
Speaker 3 (01:08:55):
Yeah, you can write it off.
Speaker 2 (01:08:57):
I don't want to write "gave two hundred bucks to Sam Altman" on my taxes.
Speaker 1 (01:09:01):
I would love you to do that and then call
me and say it's the same result.
Speaker 2 (01:09:05):
No, I would love to tell you that. I'm actually really curious.
Speaker 1 (01:09:07):
I would love you to say to me: dude, it's just the same as the twenty-bucks one.
Speaker 5 (01:09:11):
Then you can save yourself one eighty a month.
Speaker 1 (01:09:12):
Yeah, great news. Please.
Speaker 5 (01:09:14):
What Mike was bringing up, I thought, was going to be similar to the point I had been mulling over when you were talking about the training stuff, which is that we already deal with, due to, like, you know, capitalism or barriers to entry, an influx of individuals who may not actually be fully equipped for the jobs they purport to do. So, whether it be journalists like myself, or, again, fitness influencers or trainers that
(01:09:34):
say they have whatever types of physical health degrees that are just the result of a ten-hour course online, that sort of thing. We're already dealing with the quality dilution of information coming from sources like that; to throw AI into the mix is making it even worse, harder than ever to tell what the truth is. And I don't know about you all, but
(01:09:55):
I find myself gaslighting myself all the time now, whether it's about my own life, whether it's the truth in the world, whether I'm being too sympathetic to multiple different perspectives. I don't know what the truth is anymore. I can't tell you what the cold, hard scientific truth of anything ever is. And that's where it's led me.
Speaker 1 (01:10:11):
You also have to be willing... I agree, that's a brilliant point. I think one of the things I would say to people, if someone asked me how you should communicate with it, and it's really painful for people, because I've seen them talk online about what they love about conversing with the AI: you gotta click every toggle that says be mean, tell me the truth, don't tell me I'm smart. Yeah,
(01:10:32):
you've got to make it really be withholding in that way if you want to actually engage, so that you're not getting gaslit, because, yes, I agree that this is dangerous. The default setting is to gaslight you. You've got to actually go: I don't need it to, like, glaze you.
Speaker 3 (01:10:51):
I think the young people say...
Speaker 4 (01:10:52):
"Glaze you," sure. Before we run out of time... and you didn't hear it, but the producer just laughed out loud.
Speaker 1 (01:11:00):
"Glaze," I think, is what they say. But can we just talk about one positive, purely positive tech thing that happened this last week, July fourth? Who's on TikTok and knows all about the antipasto party?
Speaker 5 (01:11:12):
And was it the one that one person went to?
Speaker 1 (01:11:17):
Yeah, okay, it's the greatest thing.
Speaker 3 (01:11:19):
Please.
Speaker 2 (01:11:20):
This is this?
Speaker 1 (01:11:21):
Okay, this. They're in Texas. These people have a July fourth party. There's a woman, she's just moved there recently; her kid becomes friends with someone else's kid. And this woman, Sarah, the parent of the one boy, says, come with me over to these people's party. The woman makes the apotheosis of all antipastos, the greatest salad
(01:11:42):
you've ever seen in your life, and goes to their house. Yeah, and these people are like, who's this stranger in our house? And they kick her out, even though she brought this incredible salad. She goes home and gets on TikTok and she's crying and she's like, I brought this salad and they kicked me out of their house. And the entire internet
(01:12:03):
embraces her, yeah, and loves her so much, and it's an incredible story. Everybody I know, everybody of all ages, nieces and nephews of mine and people older than me, are all sending it around, and it's an amazing thing. A huge community has rallied to hate these people and to love her and
(01:12:23):
her homemade mozzarella and home-grown tomatoes that she brought to their
Speaker 5 (01:12:28):
House. Cherlynn, you saw something else? No, I saw something else altogether. It's really worth... but, like, Reddit does things like that, and that's the thing.
Speaker 3 (01:12:37):
Reddit, like... this is... I like ending this on a positive note. I love it. Reddit does that.
Speaker 5 (01:12:41):
You can say something... well, yeah, no: community and social forums like that, that's what the Internet is great for. And there's not a lot of AI present in a lot of that.
Speaker 2 (01:12:50):
And it's almost... Reddit, especially right now, has got good, because it isn't... the CEO keeps thinking of shoving it places, but even on the...
Speaker 3 (01:12:57):
Better Offline subreddit, nine thousand
Speaker 2 (01:13:00):
Now. Hey, guys. But it's great, because one of the biggest stories I wrote recently was on Cursor and them falling apart, and it was because someone on the forum was just looking through their stuff, and everyone had this full conversation about it. You've got these people out there, in this morass of fake stuff or generated stuff or SEO stuff, you've got
(01:13:20):
genuine people. There is still a joy to all of this crap. I love Bluesky as well, but Reddit has really just...
Speaker 1 (01:13:27):
I spend a lot of time on Reddit.
Speaker 2 (01:13:29):
I'm shocked.
Speaker 1 (01:13:30):
And, you know, someone will go on... you've got to pick your subreddits. No, but I'm saying, you know, where people have a hobby or a thing, whether it's music, yeah, it's great. And you must love, like, the Claude Reddit. Every day someone's like, why does Claude... You must
(01:13:52):
love r slash Google.
Speaker 2 (01:13:53):
Yes, I like r slash SaaS, because it's all just people, like, running SaaS businesses. No, you're thinking of a different SaaS one. I'm talking S-a-a-S, software as a service.
Speaker 3 (01:14:05):
Yeah, I'm a loser.
Speaker 2 (01:14:06):
So, no, you watch people being like, my app has been up for six months and it's made seven dollars. And a bunch of people are like, yes!
Speaker 5 (01:14:13):
And there was subs.
Speaker 2 (01:14:14):
I kind of love that you've got these niches. But I hate to say it, I do need to call the episode here.
Speaker 3 (01:14:20):
Brian. Where can people find you?
Speaker 1 (01:14:22):
Oh?
Speaker 3 (01:14:23):
Instagram, Yes, we'll have a link to you there as well.
Speaker 4 (01:14:27):
You can find me on Instagram at Mike Drucker Is Dead, and on Bluesky at Mike Drucker, and buy Good Game, No Rematch. It is a book that's available in digital, hardcover, or audio, with the audio read by myself.
Speaker 5 (01:14:39):
Hell yeah. I'm at engadget dot com, or on Threads and Instagram at cherlynnlow, c h e r l y n n l o w.
Speaker 3 (01:14:46):
Type "the man who destroyed Google Search" into Google. That's me.
Speaker 2 (01:14:50):
I'm Ed Zitron. Thank you so much for listening, as ever, recording in the wonderful New York studio at iHeartRadio. Daniel Goodman is of course our producer. Thank you so much, Daniel. Thank you all for listening. Thank you for listening to Better Offline. The editor and composer of the Better Offline
(01:15:12):
theme song is Matt Osowski. You can check out more of his music and audio projects at mattosowski dot com, M A T T O S O W S K I dot com. You can email me at ez at betteroffline dot com or visit betteroffline dot com to find more podcast links and, of course, my newsletter. I also really recommend you go to chat dot wheresyoured dot at
(01:15:33):
to visit the Discord, and go to r slash Better Offline to check out our Reddit. Thank you so much for listening. Better Offline is a production of Cool Zone Media. For more from Cool Zone Media, visit our website coolzonemedia dot com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.