
July 2, 2025 74 mins

Welcome to Radio Better Offline, a tech talk radio show recorded out of iHeartRadio's studio in New York City.

Ed Zitron is joined in studio by Allison Morrow of CNN, Victoria Song of The Verge and Gare Davis of It Could Happen Here to talk about the fairy tale of AGI, AI boosters’ religious attachment to the industry’s success, and how the tech industry fears admitting they’re out of ideas.

Allison Morrow

https://www.cnn.com/profiles/allison-morrow
https://bsky.app/profile/amorrow.bsky.social
AI warnings are the hip new way for CEOs to keep their workers afraid of losing their jobs
https://www.cnn.com/2025/06/18/business/ai-warnings-ceos

Victoria Song

https://www.theverge.com/authors/victoria-song 
https://bsky.app/profile/vicmsong.bsky.social

The Unbearable Obviousness of AI Fitness Summaries
https://www.theverge.com/fitness-trackers/694140/ai-summaries-fitness-apps-strava-oura-whoop-wearables

Gare Davis
https://bsky.app/profile/did:plc:jm6ufvsw3hg5zgdpnd3zb4tv
https://www.instagram.com/hungrybowtie

It Could Happen Here
https://www.iheart.com/podcast/105-it-could-happen-here-30717896/

YOU CAN NOW BUY BETTER OFFLINE MERCH! Go to https://cottonbureau.com/people/better-offline and use code FREE99 for free shipping on orders of $99 or more.

---

LINKS: https://www.tinyurl.com/betterofflinelinks

Newsletter: https://www.wheresyoured.at/

Reddit: https://www.reddit.com/r/BetterOffline/ 

Discord: chat.wheresyoured.at

Ed's Socials:

https://twitter.com/edzitron

https://www.instagram.com/edzitron

https://bsky.app/profile/edzitron.com

https://www.threads.net/@edzitron

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Cool Zone Media.

Speaker 2 (00:05):
This is war to extermination. Fight cell by cell through bodies
and mind screens of the earth. Souls rotten from the orgasm
drug, flesh shuddering from the ovens. Prisoners of the earth,
come out. Storm the studio. This is Better Offline, and
I'm Ed Zitron. We have an incredible studio guest assortment.

(00:31):
Today we have the wonderful Victoria Song from The Verge.
How are you doing, Victoria? I'm good. Geez, great. And we of course
have Allison Morrow of CNN, CNN Nightcap newsletter as well. Yes, wonderful.
And of course Gare Davis, the wonderful Gare Davis, who
didn't insult me on Bluesky for being late, because I
was on time. Of Cool Zone Media. Gare, thank you

(00:51):
for joining us.

Speaker 1 (00:52):
Thank you for having me again, despite our brief
Bluesky tiff.

Speaker 2 (00:58):
It was not even a tiff. It was a friendly
thing. Buy some merchandise though, if you're listening to this.
We have new hoodies, we have new t-shirts, we
have new hats. We have an upcoming challenge coin that
you can spend your money on, and it will flow
somewhat to me. That's what's great. But today we're talking
about artificial intelligence. There have been a few stories in
the media. Allison is fresh back from vacation, so she has
to learn about all the good things that have been happening.

(01:18):
I want to start with one of my favorite stories
at the moment, and this is the negotiation between Microsoft
and OpenAI. Now, the negotiation, just to run
this down: OpenAI, by the end of the
year, needs to become a for-profit entity. It's a
little more complex than that; it's the for-profit part
of a nonprofit that needs to convert. That alone would be difficult,
but Microsoft owns forty-nine percent of this company's future

(01:40):
profits and a bunch of other stuff. They get a
revenue share, they get rights to their IP through
twenty thirty, and all these other things. And OpenAI
has said, okay, what if we give you thirty-three
percent equity, less revenue share, and you don't get access
to our IP? And understandably, Microsoft has said no. So we
are in the funniest possible scenario here, in that Microsoft

(02:03):
could literally just fold their arms and let OpenAI die.
And I feel like at the moment this is an
underdiscussed topic, because this is a gun to Sam Altman's head,
and everyone's just kind of acting like it's fine. I
guess maybe it's just too complex, and it's confusing to
me why more people aren't a little bit worried. Anyone?

(02:26):
Numbers. Hard numbers.

Speaker 3 (02:27):
That's scary, that's.

Speaker 4 (02:28):
Scary, that's yeah.

Speaker 2 (02:30):
But I think why I'm going so insane
about it is this could kill OpenAI, one hundred percent.
Like, if they don't turn into a for-profit,
they're dead dead. And I'm just wondering why everyone's just
kind of chill, walking around about it. I feel crazy.

Speaker 5 (02:47):
I think it's just the sense that that seems
implausible to most people. Like, if you look at it,
maybe. But, you know, ChatGPT is synonymous with
AI in the sense that Kleenex is synonymous with tissues
right now. So, you know, we're at a point in
time where, when you see the big one of,

(03:08):
like, a tech thing, you kind of feel that they're infallible.
It's sort of like saying, well, if Apple doesn't get
its ducks in a row with these tariffs, they're fucked,
in which case you're like, are they? Are they? It's
like a Marvel movie at the end. Are they dead?
Are they really dead? Or will they come
back in some mutated form in Avengers, like, Part Seventy-Two,

(03:28):
The Avengening. Right.

Speaker 6 (03:31):
And I think everyone's still under the spell of Sam Altman, right?
There's a sense that he's the visionary who's going to
lead us into this AI utopia. And, you know, I
and others, you know, a bunch of us have reported
that a lot of it is smoke and mirrors. But
I think investors and shareholders and people who, like, frankly,

(03:53):
the people who work for him, desperately want to see him succeed. I
think Silicon Valley wants him to succeed. So maybe
it's just willful blindness.

Speaker 2 (04:02):
Yeah, it's the most strange time in history. Because,
Victoria, you've written quite a lot about this. When you
look at the actual things that this shit does, it
doesn't do that much. Right now, you've been on the
AI beat at The Verge for a few months now.
Have you seen anything exciting at all?

Speaker 5 (04:20):
Define exciting.

Speaker 2 (04:22):
Anything that you looked at and you felt delighted by in
any way? Because I'm genuinely curious.

Speaker 3 (04:27):
By delighted... I think delighted is a strong word.

Speaker 5 (04:30):
Have I seen things that have been surprising? Yeah, with
some of the... Like, I wrote a story not that
long ago about what I call the hug and
kiss generators.

Speaker 3 (04:39):
Uh yeah, exactly.

Speaker 5 (04:41):
So there are these apps there are called Hugging and
Kiss AI generators. So you take a picture and like
they were advertised in a kind of skivy way where
it's just like, oh, you take a picture of you
and your crush and make them kiss, and like that's
good that you can do. AI doesn't understand what to

(05:02):
do with tongues yet, so you know, I was generating
very cursed content for the Verge dot com.

Speaker 2 (05:09):
So that's what those horrible things you were sending me were?

Speaker 3 (05:11):
Right, yeah, yeah, yeah, I sent you some horrible things.

Speaker 2 (05:13):
Really awful.

Speaker 6 (05:15):
In fairness to the AI bots, a lot of humans
don't know what to do with tongue.

Speaker 5 (05:18):
But no, they really don't know what to
do with the tongues. Like, you know, you're supposed to
make people kissing, and so, like, I generated a couple
of videos of me and Edward Cullen, not because
I'm like a Twihard, but because he was like a
preset in the app, right? And, you know, you just
watch yourself kiss Edward Cullen with full-on
tongue, and you're just

Speaker 2 (05:38):
Like... but with wrong tongue.

Speaker 5 (05:40):
It's wrong tongue. Because, honestly, it's like if you
told a toddler what kissing looks like, and they just,
you know, imagined two faces smooshing together and, like, things
coming out of the mouths at odd rhythms. That's what
it looked like.

Speaker 1 (05:55):
I mean, I could be wrong here, but I assume
most of these products are made by straight men who
do not know what to do with the tongue.

Speaker 5 (06:02):
These apps are just very, like, yeah, weird. But, you know,
so I tested them. I deepfaked my parents
at my wedding, and I was like, oh, this
makes me feel weird. I have emotions. Mom's teeth are
not correct in this. But, you know, so
that was just, like, one of the... I think one

(06:22):
of the things that I've tested most recently and I went, oh, okay.

Speaker 4 (06:25):
This is something.

Speaker 1 (06:27):
I mean, the thing that I've seen this with is
like this woman who made a video of
her mom hugging her as a kid, based on
an old picture, right? And it's like, I've
never seen a video of my mom before, and
you start obsessing over this artificially generated video,
when you're ignoring that you actually have a picture of
your mom hugging you. Yeah. Like, you can look at

(06:49):
that, and that actually is her. That is what she looks

Speaker 2 (06:53):
like. Yeah, versus watching something imagined... not imagined, just generated.

Speaker 1 (06:58):
It put on, like, a skin suit of your mom hugging you,
which it just isn't. Like, it's weird. The video is
not real, but the picture is.

Speaker 5 (07:04):
It's just... it's bizarre. I felt
very judgmental of it before I tried it, and then
I tried it and I sobbed.

Speaker 3 (07:11):
I like genuinely sobbed.

Speaker 1 (07:14):
That's interesting. That reminds me of, like, the VR thing where
you're reconnecting with, like, dead family members over VR headsets,
which, yeah, people were very skeptical of. And then
I saw some people try it in Japan and
they, like, just totally broke down.

Speaker 5 (07:30):
It's just... I think the phrasing
that I used was, like, I know it's fake. It
didn't look anything like my dad; it gave him hair. My
dad has never had hair in his life. But, like,
the shape of it was enough to scratch this
part of me that was very much longing for
my father to have been able to go to my wedding.
So, like, doing it, I was like, this is weird.

(07:51):
I don't feel... this is not comforting. I mean, it's
not comforting, but it's...

Speaker 2 (07:55):
It makes you feel bad.

Speaker 1 (07:56):
That nausea is, like, indicative of the hyperreality problem. Yeah,
which, I mean, people like Altman and the whole
industry are rapidly pushing
us towards.

Speaker 2 (08:06):
The hyperreality problem? Can you break that down?

Speaker 1 (08:07):
Well, I mean, I guess the term gets used in
a few different ways, but it's like, the more something
is so fake that it's more real than real. Hmm.
And we see this problem with a lot of
the VR stuff. But now you see this a
lot with AI-generated images, which are, like,
quote unquote photorealistic, but they're, like, too photorealistic, because they're

(08:30):
being trained on a data set of, like,
highly photoshopped images. So it looks like reality, but it
looks more than reality. It's stronger than
what reality actually is supposed to be. And that's, like,
completely poisoning the data set. And, like, this can affect
you, like, emotionally too.

Speaker 2 (08:46):
Yeah. But the thing that gets me
about this is, you look at everything, you look at
all the AI stuff, and this is a fairly old
example at this point. And I'm not insulting you,
this is nothing bad about your point. It's
just: they've not been able to find a doodad
or a gizmo that at least brings you cheer.

Speaker 5 (09:07):
I mean, you can if you are twisted, and my
mind is twisted, and you'll, like, come up with some
prompts that are truly cursed that no editor will let
you publish in good faith. Like, yeah, you could have
a little fun with it.

Speaker 2 (09:20):
I'm talking about a thing that people use to do
something normal.

Speaker 1 (09:23):
No, yeah, no one... I mean, it helped that
one kid graduate from UCLA.

Speaker 4 (09:29):
So there you go.

Speaker 2 (09:30):
Which one was that?

Speaker 7 (09:31):
Oh, this has just been.

Speaker 1 (09:32):
A viral video the past like two weeks of this
guy like showing the prompts he used to graduate from college,
like during his graduation ceremony.

Speaker 2 (09:40):
So cool.

Speaker 6 (09:41):
Yeah, that's the thing I've pointed out a lot:
AI has found a lot of use cases, they're just
all kind of bad.

Speaker 2 (09:48):
But they don't generate money, right.

Speaker 6 (09:50):
They don't generate money, and they take away... This will
be a metaphor that you relate to. I can't remember
who said it first.

Speaker 2 (10:00):
It was a tech...

Speaker 6 (10:00):
columnist. Anyway, it was about how letting kids use AI to
do their homework is like going to the gym and
having a machine lift the weights for you. Like, the
point is the process and the learning, and AI just
kind of subverts that. Which can be useful if you're
coding or doing some high-level technical stuff, I suppose. Yeah,

(10:23):
not my area.

Speaker 2 (10:24):
But even then, it's like, with the coding stuff,
they massively overstate what coding is. Coding is only one
part of the software engineering stack. And even then...
they're like, oh, I could build an entire
application. Could it? Has anyone, actually? And this is
Matt Hughes, my editor, that brought this up the other day:
amidst all this bullshit, Kevin Roose and bullshit
about, oh, vibe coding is taking off, I've not seen

(10:46):
one fucking vibe coded company.

Speaker 5 (10:48):
Man, the phrase vibe coding, it's just... I wish I hadn't heard
this before.

Speaker 4 (10:51):
Vibe coding... it's horrible.

Speaker 5 (10:53):
It's horrible. Like, at Google I/O they're
like, vibe coding! And I was like, please kill me.
Because what does it even mean?

Speaker 2 (11:02):
So here's what it's meant to mean. It's meant
to mean that you, as a person, do not understand software.

Speaker 4 (11:09):
True.

Speaker 2 (11:09):
True. You are able to use the coding thing to
build software. And the idea, the massive liberty they
take from there, is that because someone can do a
thing like this, whether it works or not, whether it's
secure or not, who cares, it means that someone who
doesn't understand coding at all could build a huge company
that does whatever. It's kind of like... did you

(11:30):
watch The Simpsons, anyone? Yeah, there's an episode of
The Simpsons where they rebuild Ned Flanders's house, and,
like, the rooms get smaller, and they're like, this room
has no electricity, this room has too much electricity, and
all the hair stands up. It's like seeing that and
being like, holy shit, these people could build a city.
It's fucking insane. And of course there is a Kevin
Roose article in The New York Times where he's like,

(11:51):
oh my god, I made a recipe application. It's just,
real, like, peekaboo moments in AI. And I know I've
been thinking about AI for what feels like seven years now,
but it's only one. But it's just, I don't know
why more people... again, I get the Microsoft OpenAI thing,
I think I do, but I don't get why more
people aren't more alarmed. There's nothing. There's not a thing.

(12:14):
There's not a thing that you can point at and be like, wow,
this is actually kind of fucking cool. That costs too
much money, it requires stealing, it boils lakes, all
these things, but at least we have this? Not really.
I genuinely... I'm not even asking the question sarcastically anymore.
I'm just like, anything, anything, anything, one thing. And I
don't mean kind of works. I mean, this is a

(12:35):
tool that I use every day, like a Dropbox
style thing. I don't even mean a... I just
mean a useful piece of software you can point to
and go, I use this, and it's good.

Speaker 6 (12:45):
I have one, but I don't know how
much of it is LLM based. Do you use Otter,
by the way?

Speaker 5 (12:51):
I do?

Speaker 6 (12:51):
Yeah, Like Otter is a dictation service.

Speaker 2 (12:54):
Yeah, transcription.

Speaker 6 (12:55):
I was just gonna say transcription.

Speaker 7 (12:57):
I was just gonna say clips captioning.

Speaker 1 (12:59):
That's, like, the only thing. Like, Dropbox has integrated
auto transcriptions for almost all their uploads, and that
makes my job really easy, because I have to do
a lot of interviews, and now I can just refer
to that. If I need a more complex transcription, I
can send it to one of our services. But no,
like, that's it. But, like, LLMs have
been doing that for a long time.

Speaker 2 (13:19):
Yeah, it's just transcription. I'm not even being a misanthrope.
It's just, so much money is going into this, so
much money is not going into other things, and the
only thing we've had pop out is, hey, we've got
a Google search that kind of works but doesn't, and
we can do transcriptions, which they did almost immediately. I
feel, I feel like, Rev had their AI

(13:40):
transcriptions almost immediately. Yeah, they've been around forever.
There were various companies I worked with, an AI transcription
company, dead now, a couple of years back,
like twenty twenty three, I think it was. It was
like meeting transcription. Zoom had it.

Speaker 1 (13:53):
I was... I was using AI transcriptions
back in twenty twenty.

Speaker 2 (13:56):
Yeah. It's just, like... it feels like I'm
going insane sometimes. It feels like, when I read these
stories and they're, like, on the revolutionary power of AI,
but you look at it, it's like, you don't even
have a funny... you have a funny thing, I guess.
A Joker-level thing. Yeah.

Speaker 5 (14:12):
I think the problem is just, like, we were
promised one thing. Yeah, you're promised this personalization,
this automation, that it's gonna know you. And, like, we've
been fed, through so many generations of, like, science fiction,
what we think AI is going to be. This
is not that. This requires so much work from you

(14:33):
to train it. Like, you have to understand the language
with which to prompt ChatGPT or any of these
other AIs in order to get something remotely useful. So
you're actually having to learn a new language. And, like,
if you look at the most successful ChatGPT prompts,
they're like four paragraphs long; they're insane. You have
to, like, you have to be preempting what, like, this

(14:56):
thing could be like. I was just like,
you know, to your point about vibe coding, I'm,
like, a spreadsheet girly, but I'm not, like,
an advanced spreadsheet girly, and I was trying to pull
some data insights from this set I was looking at,
and I was like, okay, I don't know fuck all
about spreadsheet formulas besides, like, the really basic ones. How

(15:17):
am I going to do this conditional logic program? Let
me ask ChatGPT. And it took me so long
just to figure out the stuff, and it was always wrong.
And because I understand math, I
could parse out how to fix the completely wrong formulas
it was giving me.

Speaker 3 (15:36):
But that was such a painful process, and...

Speaker 2 (15:39):
So was the output even that good?

Speaker 5 (15:42):
Oh no, it was an excellent output. I got a
beautiful spreadsheet.

Speaker 2 (15:45):
That's cool. And how much time would you say you
invested? Two and a half hours? Nice.

Speaker 6 (15:51):
You're a journalist who understands math; you're a
rare and special... yeah, you know.

Speaker 1 (15:56):
It was.

Speaker 5 (15:56):
I was just basically like, the fact that this
conditional IF-AND statement is not working, or it's working
opposite to what I want, to find this one
particular data set, is driving me cuckoo for Cocoa Puffs.
It's very simple. I know how to do the math manually.
Why can't the computer tell me how to write it
for the other fucking computer? Two and a half hours.

Speaker 2 (16:19):
I love innovation. And I think that
speaks to the larger problem, which is: generative AI isn't
completely useless. If it had been sold as what it is,
which is kind of niche cloud software, like cloud compute stuff,
they wouldn't have been able to fund any of the
data centers. If they would have been like, all right,
we're going to be able, in two and a
half hours, to give you the world's best spreadsheet, they would...

(16:41):
it was.

Speaker 5 (16:41):
It was a good spreadsheet. Okay, not the best. Okay,
a decent spreadsheet.

Speaker 2 (16:46):
The thing is, even the asterisks have asterisks. And it's
just, I feel like the feedback I get from
listeners is very much that. I
don't get any emails from people being like, hey,
Ed man, I have the most useful thing. I get
the occasional bright spark who's like, I have, over the
course of hours, created a very useful thing that I

(17:06):
use sometimes. It's like, cool, okay, it's fine. A bidet sounds
more useful than that. Like, I'm trying to think of,
like, other innovations that exist that could be... Yeah, everything
is more useful. Like, Apple Pay is more useful than
any of the shit that they've built. But it's this
thing where we are being told again and again and

(17:28):
again that it's the future, and we're being told that
it's this ultracomplex thing that we'll never understand. Which leads
really neatly into my favorite story of the week, which
is... all of you, I assume, have heard about this:
Meta offering one hundred million dollars to OpenAI staff,
and how there's this big talent war, and four people
just left OpenAI to go to Meta. Allison, you were

(17:49):
on vacation, so you missed some of this, which is
probably best for your mental health.

Speaker 6 (17:53):
Oh, I saw a bit. The seven-figure bonus story
came out right before I went on vacation, so I was getting that.
was getting that.

Speaker 5 (18:02):
I can't blame them. Oh no, give me a four
oh one k that's fat and sizable. Like, yeah, no, no.

Speaker 2 (18:09):
I think it rocks. I think it's great. I think
they should demand... There was a part of the Erin
Woo piece from The Information, a great story about this,
where she was talking about how some people are like,
oh yeah, I just threatened to quit and they gave
me more money. I would join one of these companies
and day two just fucking quit. Mark, what are you
gonna do about it, Mark? I'm gonna go right back

(18:30):
to OpenAI, they're gonna give me this money, and
then Mark Zuckerberg will give you whatever you want. He's been
flying them to his house in Tahoe and they're still
saying no. That's the best part, that people are just like, nah,
I don't want to, don't want to, mate. But it
might be because they're all in a weird, I'm gonna
say clique, but I want to, I want to believe,
poly situation. So, okay, I have no knowledge

(18:53):
that they're fucking. But, the recruits on the list, which
refers to the Meta list of potential people that they
could hire, typically have PhDs from elite schools like Berkeley
and Carnegie Mellon. They have experience at places like OpenAI
in San Francisco and Google DeepMind in London. They
are usually in their twenties and thirties, and they all
know each other. They spend their days staring at screens

(19:13):
to solve the kinds of inscrutable problems that require spectacular
amounts of computing power, and their previously obscure talents have
never been so highly valued. So these stories have a
thread through them I'm really enjoying, which is that the
writer and the companies don't know what these people are doing,
and I just feel like they are scamming... I think
they're scamming them. Another quote, from The Wall Street Journal,

(19:36):
Meghan Bobrowsky, I believe, wrote this: The handful of
researchers who are smartest about AI have built up what
one described as tribal knowledge that is almost impossible to replicate.
Rival researchers have lived in the same group houses in
San Francisco, where they discuss papers that might provide clues
for achieving the next great breakthrough. We are very aligned
on research directions and interests, one of them wrote. I
hope we get to work on more stuff together in

(19:57):
the future. They are fucking lying to them. I'm sorry, sorry,
they are just making shit up. This has to be...
I think that this is the funniest thing ever. I
think that this is a true Revenge of the Nerds situation.

Speaker 1 (20:07):
Have they reinvented collective bargaining?

Speaker 2 (20:10):
They have. They've basically done unionization. It's amazing. It's good
for them.

Speaker 6 (20:17):
Higher education is really expensive. They have a lot of debt.

Speaker 2 (20:20):
Yeah, I love this. I think that they should ask
for more money. I think they should all get together
and just refuse to take less than fifty million a day.
Eight figures, shit, nine figures, like, sky is the limit.
Make Sam Altman burn everything. But back to Erin Woo.
This is another quote about this, which is from
Inside the Great AI Talent Auction, that deals with the

(20:41):
free agents and the egos. But a senior leader at another
of the major AI labs said it was hard to
know what research specialties actually mattered for improving AI models.
AI is a field where researchers are designing such complicated systems
that it is difficult to break up one aspect of the
work from another, the leader said. Ultimately, they said, recruiting
often comes down to word of mouth, again, knowing
a person or having worked with them before. It's a scam.

Speaker 5 (21:03):
Isn't that just, like, normal job stuff? Like, word of mouth,
knowing, having worked with someone before?

Speaker 6 (21:09):
And yeah, yeah, it feels like hyper focused Silicon Valley.

Speaker 2 (21:14):
But with normal job stuff, generally you know what they did at the job.
Generally you do. And Mark Zuckerberg's there, like, guy, we did AI.

(21:34):
There is a WhatsApp group, I think it's called Recruiting
and it's the party emoji, and that's where Mark Zuckerberg
invites people. He has this weird little WhatsApp
hole where he invites people. He's like, hey, I'm
Mark Zuckerberg. Do you want all the money in the world?
What do you do? I don't care how many. And
apparently one of the metrics they measure them on is

(21:56):
like, citations in papers. I genuinely think this is a scam.
It's genuinely a scam. I think it's the coolest scam
of all time. Finally, it's nerds versus management consultants.

Speaker 7 (22:06):
So nerds versus different types of nerds.

Speaker 2 (22:08):
Management consultants are not nerds. Actually, I want to say
management consultants are not nerds. They're jocks.

Speaker 5 (22:13):
I would agree with that. I would agree with that.

Speaker 4 (22:16):
They see it.

Speaker 5 (22:17):
They are professional deck makers, and they go in: slide one,
we can see that number go up. Yeah. And slide two,
we can see number flat. Here's a pie chart

Speaker 2 (22:29):
Someone else made. Yes, that's it.

Speaker 7 (22:32):
I can see it.

Speaker 2 (22:33):
It's jock shit.

Speaker 7 (22:34):
Yeah.

Speaker 2 (22:34):
I went to a posh private school. I know, everyone,
this is not shocking. Yeah, it was all boys
as well. It's... yeah, yeah. But I was like the
dumbest kid in that school, which is really... and I
was the fattest as well. So school was great for me.
But you run into a lot of people who can
memorize a lot of things but don't know how to
put them together. There's no real, like, synthetic thought. It's

(22:57):
all just, like, I don't know, like having a big
pile of information that they opportunely draw from, and that
they don't really know what any of it means.

Speaker 5 (23:04):
Sounds familiar to all the AI fitness summaries that I've
been suffering through. I just wrote a thing about it.

Speaker 2 (23:09):
That's... tell us about that. What's the AI
fitness shit been doing? This is...?

Speaker 1 (23:14):
Ah?

Speaker 5 (23:15):
Yeah. So I ate it during a run last week.
Yeah, no, it was too hot outside. And I was,
you know, I was on the Vox union bargaining committee,
so I've been, like, sleep deprived for two months.

Speaker 2 (23:26):
Well, so congratulations, yeah, congratulations.

Speaker 5 (23:29):
It was, it was. It came down to the wire,
but we averted a strike. It was real great stuff.
And I basically was like, oh, now, time
to look into my fitness and wearable data
from this time period and kind of gain insights.
And it was just, like, not that. It was beyond...
I called my article The Unbearable Obviousness of AI Fitness
Summaries because it was just not great. But to

(23:53):
my point, I ate it on this run, like,
you can kind of see, my hands are fucked up, my palms busted,
my knees are fucked up. So, like, I was basically like,
all right, let me see what all of these things
said about my data. And could I get
these AIs to say, like, hey, you've been, like,
really strung out over the last two months, your sleep

(24:15):
schedule has been supremely disrupted, your metrics are completely off.
These are all things I know from looking at my
baselines and knowing what they are. But could I get
it to say, you've shown signs
of elevated risk of injury?

Speaker 3 (24:32):
Not a single one of them could do it.

Speaker 5 (24:33):
And Strava's was, like, the most egregious, because it's like,
you had an intense run. And I had uploaded pictures
of my injury, I had uploaded a note saying
that I had injured myself pretty badly. There was
no context of, like, what I should do with that.
It was just, it's literally stuff like: you ran three
point one miles, it was eighty-eight degrees Fahrenheit. This

(24:56):
was slightly higher effort than other efforts that you've made
the last thirty days.

Speaker 3 (25:02):
Have a nice day.

Speaker 5 (25:03):
And I was like, that's not useful when you put
it right next to a chart that says the same thing.
Your elevation was... you had eighty-eight
feet of elevation gain, and it was

Speaker 3 (25:14):
Up and down during your run.

Speaker 5 (25:16):
And literally, it's next to a thing that says elevation
gain: eighty-eight feet, and a graph that shows up
and down.

Speaker 3 (25:21):
Like, it's... there's no intelligence.

Speaker 4 (25:23):
Yeah, there is intelligence.

Speaker 7 (25:24):
AI, there's no intelligence.

Speaker 2 (25:26):
It feels like the thing it should be able to
do already.

Speaker 5 (25:28):
This is what people want it to do. Because at least
in my field, you generate a massive mountain of,
like, quantified-self data that you're looking at and you
want insights from. I wanted Oura to tell me, like,
what my average weekly number is, like, how many hours per
week do I sleep on average on a twelve-month basis,

(25:49):
and then how much of a sleep debt did I
incur in this specific week?

Speaker 3 (25:53):
And it's like, ooh, we can't do that.

Speaker 5 (25:56):
We can only do it for the most recent week and
the most recent month, for trends. And I was like,
that's fucking useless. I have six years' worth of Oura
data that I should be able to mine for that
kind of insight, and I can't do that. So that's
not at all useful for, like, the
purposes of what I was trying to prove.
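(For what it's worth, the two queries she's asking for are simple arithmetic once you have the raw nightly data. A rough sketch under stated assumptions: the data shape and field names here are hypothetical stand-ins, since Oura's real export format differs.)

```python
from datetime import date, timedelta
import random

# Hypothetical stand-in for an export of nightly sleep durations in hours;
# a real Oura export has different fields. This is purely illustrative.
random.seed(0)
nights = {date(2025, 6, 30) - timedelta(days=i): random.uniform(5.0, 8.5)
          for i in range(365)}

# Query 1: average hours of sleep per week, on a twelve-month basis.
avg_weekly = sum(nights.values()) / 52
print(f"average weekly sleep over 12 months: {avg_weekly:.1f} h")

# Query 2: sleep debt incurred in one specific week, measured against
# that twelve-month weekly baseline.
week_start = date(2025, 6, 16)
week_total = sum(hours for night, hours in nights.items()
                 if week_start <= night < week_start + timedelta(days=7))
print(f"sleep debt that week: {avg_weekly - week_total:.1f} h")
```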

Speaker 3 (26:13):
And so I ended up arguing with this thing for
like an hour.

Speaker 2 (26:16):
But I feel like you did prove something, though.

Speaker 5 (26:18):
Yeah, I did. I did. But it was, at the same time, just,
like... it's sort of like a Wikipedia... it's like a
book report written by a fourth grader who decided to
read the entry on the book on Wikipedia instead of
actually reading the book for insights.

Speaker 3 (26:32):
So it's like, here you go, here you go.

Speaker 2 (26:36):
It feels like the most elementary thing it should be
able to do. I have over a decade of fitness
data, yeah, and I still don't have shit. I
don't... my Oura ring told me the other day... I
got Oura, I got Somnee, I got the thing that
electrocutes your head. That's why I'm so smart. It's like
I just watched the X-Files episode with the computer
bit, and that's what's happening to me. I'm getting electrocuted
every day. I got Oura, I got the Eight Sleep

(26:58):
in Vegas, I've got all sorts of gumph, and I don't
know a single goddamn thing. It told me two days ago,
I am like, something is wrong. Yeah, I'd slept three
hours two days straight. It was just a bad combination
of red-eyes. And it's like, yeah, you should do
something about that. Thanks. I'm glad I pay you ten
dollars a month for this. But this just feels like the

(27:20):
obvious thing. And how does it not know? How? And maybe
it is just the ultimate limitation that we've been complaining about.
It's just... it's kind of insulting. I don't know.

Speaker 6 (27:28):
Yeah, and I don't want to be
the hippie here, because I use Strava and I, like,
track my workouts and things. But it's like, I know
when I'm tired, yeah, and I know when I'm hungry
and when I've eaten too much or eaten too little.
Like, you know, why do we need the computer to
do that for us is a real question. And it's
part of this like consumer trend of just trying to

(27:50):
get AI into every single app on my phone. And
it's like, I don't need Strava, Like Strava's doing great
for me for what I need it for. I like
it to show me how many miles I ran that week,
and like my bike ride to work.

Speaker 2 (28:03):
Look, but I...

Speaker 5 (28:04):
Wanted it, I like, I would have loved it to
be like, Okay, so when you go run after a
prolonged break, particularly in hot weather, you have self reported
and increased number of injuries.

Speaker 3 (28:15):
That's the type of shit that I want.

Speaker 5 (28:17):
And so it's like, seeing that it was really
hot, after a prolonged break, you have a real bad
habit of getting injured after that. So, like, you dumb-dumb.

Speaker 6 (28:24):
Yeah, maybe the pack.

Speaker 5 (28:27):
Do the thing.

Speaker 3 (28:28):
So, like, that is the...

Speaker 5 (28:29):
Type of insight I would have liked after this most
recent run where I ate it.

Speaker 2 (28:33):
Well, the last three months I've increased my cardio shit,
I'm playing basketball. I would love to see... and it
has all the stuff. And this doesn't feel that difficult,
if it could say, yeah, your cardiovascular fitness has improved,
or, like, occasionally be like, yeah, you're four years
or two years younger than your age, cardio-wise. Why isn't
this...

Speaker 5 (28:51):
Oh man, don't get me started on those, like, longevity metrics.

Speaker 2 (28:54):
But there's nothing useful to it, other than
I'm a data perv, and I like looking at the numbers,
going, hmm, number up, number down, why? But even
then, the simplest things are kind of hard to get.
With Strava, you have to go through like three menus
just to see how much you've worked out. There's like
eight different options; you can't turn any
of them off. There's one about biking, because I used

(29:16):
to bike. I don't bike anymore, I don't need that.
No, you need to see the biking, mate. Gotta make
sure: zero miles, you fucking idiot loser. It's just...
it is the wider thing of tech just not being
for us anymore, almost. It's like, hey, got some data,
I guess. Pay me, now, pay me. And even then,
with Oura and AI, I've been using them for five years,

(29:39):
six years now, I've never got a single bit of
advice about my sleep. I've never had it say, hey,
what if you did this? Nope, it'll be...

Speaker 5 (29:48):
Have you tried using the chatbot that's in Oura?

Speaker 4 (29:51):
No.

Speaker 2 (29:51):
I thought about it yesterday and I was like, I'm
gonna get angry at this.

Speaker 5 (29:56):
It's actually one of the better implementations.

Speaker 2 (29:57):
Does it work?

Speaker 5 (29:58):
It's one of the better implementations if you, like...
I know how to

Speaker 2 (30:03):
talk to it. Okay, how do I talk to it?

Speaker 5 (30:05):
Well, you have to be very specific about the information
that you're wanting. So you're like, tell me about my
sleep trend and I've noticed that I have this problem.

Speaker 3 (30:13):
What are some ways that I could uh get around that?

Speaker 5 (30:17):
Or just like, do I show signs in the past
month of sleep irregularities?

Speaker 3 (30:22):
If so, like, what are some... The

Speaker 5 (30:24):
Thing is like, the things that's going to tell you
to do that are actionable are going to be.

Speaker 2 (30:28):
Like, well duh right?

Speaker 5 (30:30):
Really only helpful if this is your first foray into
fixing your health. If you've literally
googled anything before, have any base knowledge of, like, you
should have consistent sleep schedules, or maybe just don't eat
ice cream before bed, like, common sense things like that,
it's just not gonna necessarily help you. But, you know,

(30:51):
other things... because I've also got a
CGM at the moment, and so it's...

Speaker 2 (30:55):
Just like, does that constant glucose Yeah.

Speaker 5 (30:57):
Yeah, continuous glucose monitor. And I was like, I've had.

Speaker 3 (31:00):
A lot of stress.

Speaker 5 (31:01):
Does stress impact glucose levels? And it was like, yes,
it does. And I was like, okay, cool, that's nice.
Nice to know that it's high because of that. And
then it'll remember that when you ask it questions in
the future.

Speaker 3 (31:11):
So you just have to like you just have.

Speaker 5 (31:13):
To talk to it so much and like kind of
train it. It's like literally like training a toddler. So
the amount of effort that you're putting in versus what
they're telling you, like all the insights they'll be so personal,
personalized and so automated.

Speaker 2 (31:28):
It's like, you just have to do all the work to
make it useful, which appears to be the AI theme.

Speaker 3 (31:33):
Yeah, you have to.

Speaker 5 (31:34):
Do an immense amount of legwork and like training and
thinking like an AI in order for it to spit
out something that might make you go huh.

Speaker 2 (31:43):
Wow. Yeah, cool. I love living in this wonderful period.
I saw Wired mention this thing called Limitless earlier, if you

Speaker 3 (31:54):
Have yes, yes, it's the it's the class. It's like
b the thing.

Speaker 2 (31:59):
Yeah... the AI device that constantly listens
to you. What is Limitless?

Speaker 5 (32:04):
It's similar. It constantly listens to you and generates insights
based on listening to you constantly.

Speaker 2 (32:10):
Steven Levy, the Larry Bird of big wet kisses
in tech journalism, wrote about it like it is the future.
And it's just like, I just wish some writers would
experience humanity just once, because the idea of something constantly
listening is not fun, not good, nor has it ever really worked.

Speaker 6 (32:32):
I don't want to age anyone here, but I feel
like we're roughly the same age, and we experienced kind
of the revolution of social media as this kind of, like, oh,
look at how cool it can be if we're
able to connect, right, en masse, all the time, anywhere.
And that was cool, that was revolutionary in its time,

(32:56):
and now we're at kind of, like, the denouement of that,
and it's turned us all inward. We're all, like, tracking
our personal data, and, like, you know, we're more isolated
than we've ever been. Not entirely social media's fault, but it
doesn't help. We're, like, talking to our AI therapists and
our AI boyfriends and girlfriends.

Speaker 2 (33:17):
I don't know how many people are actually doing that,
though I know a lot of them are.

Speaker 6 (33:22):
But I think that like the narrative, like the step
back narrative right now, is just like, well, there should
be a next thing. It shouldn't just be social media
was cool and the Internet was cool for a while,
and things are getting kind of stale, and it's like, well,
what's the next thing? And I think tech
journalists can be guilty of this sometimes, of feeling like, well,

(33:42):
twenty twelve was really exciting, so twenty twenty two has
got to be just as exciting, and twenty twenty...

Speaker 4 (33:47):
Five, boy can't wait.

Speaker 6 (33:50):
Maybe the technology is just not there yet and it's
going to take a lot longer than anyone expects.

Speaker 5 (33:55):
Or maybe this is just not the right approach, because of
the way LLMs work. And, like, we have
tech journalists left and right failing the
mirror test, and, like, not understanding that when you talk
to ChatGPT, or you talk to these AI girlfriends
and boyfriends in a different avatar, you are

Speaker 3 (34:11):
Just talking to yourself in the mirror.

Speaker 5 (34:13):
It is the digital version of talking to yourself in
the mirror, which can be useful. It can be useful
to talk to yourself in the mirror. There's a reason
why people go like you're great, yes, awesome. Sometimes you
need to like hear yourself think out loud, and that
can be useful and helpful. But that's what you're doing. Like,
you have to understand that you are talking to yourself.

(34:34):
You are just having a conversation with yourself. And I
think a lot of people don't get that. They
think they're talking to a higher intelligence. Literally, no,
you are just talking to yourself, if yourself could Google
a little faster than you currently can.

Speaker 6 (34:49):
Yeah, that was the gist of Kashmir Hill's
amazing New York Times piece about the people who really
fell into a rabbit hole around these chatbots, where ChatGPT
convinced them that they were in a matrix, like,
you know, an alternate universe. And, like, one guy committed suicide.

Speaker 2 (35:12):
Didn't he pull a knife on a cop or something?

Speaker 6 (35:14):
Yeah, he, like, told the bot that he was going
to kill himself via suicide by cop. And then his
dad, who was worried about his mental health, did call
the cops, and he was like, listen, I think my
son's going to try to kill himself by attacking you.
And guess what he did, because the chatbot was like...

Speaker 2 (35:32):
And guess what, the cop just came right over with...

Speaker 1 (35:35):
Yeah.

Speaker 2 (35:35):
It's like, yeah, you know, someone's in a mental health crisis, what do
they need? Oh, they definitely want to die by a cop.
Let's pull our guns.

Speaker 6 (35:42):
I'm saying our society is not in a good
spot right now.

Speaker 2 (35:45):
Our society is not prepared; policing is prepared to arrest
rather than help or serve. So yeah, it's the natural...
I think all of this is the natural comeuppance
of a society built with very little intention.
Because, yes, because that's the thing: vibes, and large
language models are the ultimate, like, vibe slop. It's something

(36:07):
built with no real plan. And people love to say, well,
Sam Altman's plan... No one has a single goddamn plan at all.
That's why there are no use cases. Because if anyone
sat there and went, what can we do with this,
they'd go, fuck, I don't, I don't know, I don't...
Is there anyone that we can pay one hundred million
dollars to?

Speaker 5 (36:27):
Well, you can do anything that you can imagine. That's
why Andy Jassy is saying the future is up to you.

Speaker 3 (36:33):
Guys imagine solo.

Speaker 2 (36:35):
I can imagine a lot. But that's the thing.
Like, Allison, you had an excellent piece on this. It's
like, the whole job loss story is just them being like,
please, please, shares up, shares up, number go up, please.

Speaker 4 (36:48):
Yeah.

Speaker 6 (36:48):
It's like, I have to say this because everyone's paying
attention to me. And it's like, I'm sure shareholders and
investors on Wall Street are saying, what is your AI
strategy, and how are you planning to mass lay off
your staff so that you can cut corners and replace
people with AI? And so you have someone like Andy
Jassy come out and say, you guys are all doing

(37:10):
great work, we've built incredible products, Alexa, everyone loves it. Spoiler:
no one loves Alexa. It's stupid.

Speaker 2 (37:17):
Yeah.

Speaker 6 (37:19):
And as a result of that, sometime in the near future,
I won't say when, but sometime soon, a lot of
you are going to be laid off or have your
jobs changed by AI. And I'm shocked every time one
of these CEOs does this, we report it out as
if it's like, oh.

Speaker 2 (37:41):
It's true.

Speaker 6 (37:42):
God in Heaven just said, like, this is what's
going to happen, and that is what's going to happen.
Like, we had, you know, banner headlines saying, like, Amazon
CEO says AI is going to take your jobs. And
it's like, well, he didn't really say how, or when,
or by what mechanism, and there's no evidence of it
happening yet. So what are we talking about?

Speaker 2 (38:04):
I think that there is an alarming amount of
journalism that's excited for it, or there's just a doomerism
behind it.

Speaker 1 (38:10):
I think you can replace the word AI in a
lot of these articles with just God. And this
is something that, like, we talked about the first time
we met in Vegas, right, is how all of these,
like, the people who are into AI talk about it
as if it's just a cult. Like,
there's this divine aspect that has, like, ordained,
like, you have to lose your job because AI is
taking it. God has mandated this. And, like, even

(38:33):
like, with the, you know, talking to an AI like
it's a person, they often gain this, like, divine aspect
when people are treating these chatbots like they are,
like, sentient beings. It's the same way we,
like, project divinity onto aspects of nature. And I think
that is, like, a big, a big part of it,
because now you have God telling you that you're actually
in the matrix and you need to do

(38:54):
this thing. And for some reason, there's no safeguards put
on this God program to protect you, God's creation.

Speaker 5 (39:01):
They're all programmed to be super friendly and to tell
you that you're great. And it's like, if
you're actually, like, trying to use this thing and you're
viewing it as you talking to you... like, I have
to tell ChatGPT all the time, you're putting too
much flattery in. You have to cap all of your
flattery at one percent, because I can't handle you,
and I need you to explain why everything you
said has no bias, or as little bias as possible,

(39:23):
or to explain all of your bias in it, so
that I can read and evaluate and just go, like, no,
that's not it. And I've been told I'm insane for,
like, programming all the AI I talk to that way.

Speaker 2 (39:32):
Simple output shit. It's making... it's adjusting a program to
do a thing for you.

Speaker 3 (39:36):
It is. Because, like, if you just leave it at
the default...

Speaker 7 (39:39):
It's trying to warp your brain.

Speaker 3 (39:40):
Like if you leave it at the.

Speaker 5 (39:41):
default, it's just like, you've done nothing wrong, you are
absolutely correct in everything you said, what a brilliant thing
that you said.

Speaker 3 (39:48):
And if you, like, listen to that enough times, it
does...

Speaker 2 (39:52):
Well, I mean, I think it is warping your brain, because
there is no intention behind this. They train it, whatever,
but they could, with relative ease, put a thing in saying,
just to be clear, you are talking to you. They
don't want to do that, and I think it might be...
want to be another of the goombas from OpenAI...

Speaker 1 (40:08):
It breaks the illusion, right? Like, they need to...
it's like the Wizard of Oz thing. Yes, but they
should break the illusion. They have to, but they don't want to.

Speaker 7 (40:15):
They don't want to because it means they will get.

Speaker 5 (40:17):
Less tension, economy the less. But it breaks the engagement.
You're not going to engage with it if you're like,
this is obviously a robot.

Speaker 6 (40:24):
That's the part where I get actually emotional and
angry about AI: when the CEOs of these companies
talk about it, they talk about it as if it's
inevitable and we have no agency. That's where the god
thing comes in, where it's like, yes, this is happening,
whether we keep doing it or not. And...

Speaker 2 (40:42):
We are the ones who will fix it of course.

Speaker 6 (40:43):
Yeah, it's like, oh yeah, I mean Sam Altman, I
was just rereading, like.

Speaker 2 (40:48):
Uh, The Gentle Singularity, that fucking blog.

Speaker 6 (40:53):
Oh, I hadn't read that one. No, I was talking
about... he created Worldcoin. Oh God. Because he's so
convinced that...

Speaker 2 (41:03):
Let me stop you there. Okay? No, I just want
to stop you, because you said he's so convinced. Your
only source of information for that claim is Sam Altman.
Sam Altman did Worldcoin so he could sell a cryptocurrency.
That's the only reason. Yes, so he could collect a
bunch of... No, he's so convinced? He's convinced of nothing,
I believe. Okay, this is literally... and the pitch, the...

Speaker 6 (41:21):
Pitch of world Coin, I like, I interviewed their ceo.
I had a lovely conversation with him. Sorry, but you
know the pitch is AI is going to become so
ubiquitous and it's going to destroy all the jobs and
flatten the economy, and we're going to have to have

(41:43):
this UBI that's distributed by this blockchain mechanism. And
I'm sitting here and I'm going, yes, but we don't...
we are actually full agents in this world. Like,
we all need to step back and realize that we
have sovereignty over what we do, and the technology
we make, and how we regulate it. And it's a

(42:05):
real, just, like, kind of, uh, abdication of responsibility for
society, where I'm just like, what do we want
to make for future generations? And that's always the, like,
promise that comes out of Silicon Valley, like, building
a better future. And it's like, you can't have both things.

Speaker 2 (42:26):
What are you building?

Speaker 5 (42:28):
It's because, like, what they really are building is money
machine go up, and, like, that's the main thing. It's
like, they pay all this lip service about making everything better.
Well, you could cure cancer.

Speaker 3 (42:38):
That would, like, legitimately make everything...

Speaker 5 (42:41):
They will... Sure, so you could do that, but it's
like, their first and foremost...

Speaker 2 (42:47):
I believe Joe Biden will cure cancer before Sam Altman. Both
have claimed they will. I believe Joe Biden would. No,
I mean, neither of them will do it, which is
my larger point. But, I mean, you're both wrong, because...
to quote Sam Altman: as data center
production gets automated, the cost of intelligence should eventually converge
to near the cost of electricity. People are often curious
about how much energy ChatGPT queries use. The average

(43:10):
query is zero point three four watt-hours, about what
an oven would use in a little over a second. Now,
someone has almost immediately thrown water on this entire thing.
But this blog, The Gentle Singularity, by Sam Altman, was
quoted like scripture by everyone. And I think...
I love that you brought up divinity, Gare, because it

(43:32):
really is just, like, this pseudo-religious... it's religion, capitalism,
it's all of it. It's just, we found a way to love
a company like a god, and also kind of abdicate
any responsibility or thinking about it. Because when you look
at the people who love AI and the way they
talk about it, it's not about what's happening today. It's
not. They're like, in the future, when this happens...
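(For the record, the oven comparison in that quote is at least internally consistent arithmetic, and it's checkable in a couple of lines. The 0.34 watt-hours is Altman's own unverified figure, and the 1 kW oven draw is an assumption here, since real ovens run from roughly one to three kilowatts.)

```python
# Sanity check on the oven comparison from The Gentle Singularity.
query_wh = 0.34      # Altman's claimed energy per ChatGPT query (watt-hours)
oven_watts = 1000    # assumed oven power draw; real ovens are ~1-3 kW

joules = query_wh * 3600            # 1 Wh = 3,600 J, so ~1,224 J per query
print(f"{joules / oven_watts:.2f} s of oven use")  # ~1.22 s at 1 kW
```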

Speaker 1 (43:52):
No, it's always this future promise. And if
we believe in this god enough, and if we build
up this religion enough, it can deliver us an
infinite automated money machine.

Speaker 2 (44:01):
But what's crazy is, the money machine is bad. It
doesn't make any money. It loses it. They've spent, or
are projected to spend, three hundred and twenty-seven billion dollars
in capital expenditures this year, and the revenue of this
industry is like forty billion dollars.
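(Putting those two numbers side by side, taking Ed's figures at face value; both the capex projection and the forty billion in industry revenue are his cited estimates, not independently verified here.)

```python
# Ratio of claimed AI capital expenditure to claimed industry revenue.
capex = 327e9    # projected capital expenditures cited above, in dollars
revenue = 40e9   # Ed's rough figure for generative AI industry revenue
print(f"spend is {capex / revenue:.1f}x revenue")  # ~8.2x
```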

Speaker 5 (44:15):
That's because money is fake until you have none.

Speaker 3 (44:17):
That's the only time it's real. That's the thing.

Speaker 2 (44:18):
Money will run out. Money go down, money go down,
that not work good. And the story that's really been twisting
me up this week, I was telling you about it earlier, and
I swear it won't be this boring, is: OpenAI
is Microsoft's biggest customer, ten billion dollars in projected revenue
this year for Azure, and Azure revenue has been

(44:38):
kind of, like, not growing so good. So we just
have one of the largest tech companies in the world
that is just handing itself cash. And as part of
the deal when they funded them in twenty twenty three,
they funded them principally in cloud compute credits. So ten
billion dollars of Microsoft revenue is going to
be partially in air miles, and everyone's just sitting around

(45:00):
being like, this is great. On top of that, ten
billion dollars of revenue from OpenAI, so thirteen billion
total from Microsoft. And OpenAI is projected to make twelve
point seven billion dollars. Half of the revenue in this
fucking industry is OpenAI, or the slop, the cost
slop, of OpenAI.

Speaker 1 (45:16):
It's okay, Ed, we can just keep blowing
up the balloon. It's never gonna pop. You can keep
blowing it up.

Speaker 2 (45:22):
It's just crazy.

Speaker 1 (45:23):
We found the infinite balloon. It's this crazy material that's indestructible.
You can keep blowing more air in and it's gonna be fine.

Speaker 2 (45:32):
It's just... it's so funny, as if it kind of is
gonna be fine.

Speaker 4 (45:35):
It's so good.

Speaker 2 (45:36):
I just feel like AI is just, like, the imbecile magnet.
It's just this idea that people who don't really know
stuff but have got to positions of power can go, yes,
finally, a thing that will make up the reason I
have to buy it, for me.

Speaker 6 (45:49):
I see this in journalism all the time. I'm getting,
like, conversations about, well, AI could replace entry-level journalism jobs.
I think that's, like, a real concern if you're, like,
just a... I started as a copy editor with zero responsibilities
other than, like, finding typos and misspellings and occasionally

(46:09):
getting to write a headline and being like, oh, thank
you for letting me write a headline. And I could
see a large language model filling that in, in which
case I don't get a jumping off point to do
what I want to do.

Speaker 2 (46:22):
I mean, they've been offshoring those jobs as well. It's
just another offshoring.

Speaker 6 (46:25):
My point is, I've been in this for like twenty
years, and it was happening then. It might happen at
a more accelerated timeframe with AI, but I'm skeptical. You're
still going to need... as we saw with that Chicago
Sun-Times summer book list, you know, like, in case anyone
missed it, you know, it got the authors right and

(46:46):
then just made up complete horseshit for the books, which
they didn't write and were about whatever the AI hallucinated
they were about. You know, an entry-level copy editor
would have caught that. But we've all already fired those people.
We've already laid off the entry-level copy editors. So, you know,
all of these things, these ways that, like, technology is

(47:09):
going to create job losses, that's standard in our age.
AI is going to create some job losses, yes. Is
it going to be the, what was it, white collar
bloodbath? I don't think so.

Speaker 2 (47:22):
Up to twenty percent unemployment, according to Wario Amodei.
And that's the CEO of Anthropic, and his name
is Wario. Everyone makes the typo and types Dario. I'm
not sure why; it's in books and literature, so let's
start correcting the record. It fucking pisses me off as well,

(47:53):
because, I mentioned it earlier, there's almost like an
excitement in journalism for job loss and AI. Maybe it's
doomerism, maybe they're just like, oh, I'll get ahead
of this. But it feels almost like they want it
to happen, and they want it so that it proves
AI is the big thing. It's religious...

Speaker 7 (48:12):
It always comes back to that.

Speaker 1 (48:14):
Yeah. We've been talking about this for two
years, like, this cultish nexus around AI.
I mean, this goes back to, like, the
early superintelligence hype of, like, the twenty-teens.

Speaker 7 (48:26):
Right. It's all this God idea.

Speaker 2 (48:28):
Wait, there was a hype cycle on that. How'd I
fucking forget that one?

Speaker 1 (48:32):
There was. Well, like, the Roko's Basilisk thing, right? Oh god,
it's the very, like, the earliest stages of this.

Speaker 7 (48:39):
Yeah, like viewing AI as this inevitable God.

Speaker 2 (48:42):
Have you heard of Eliezer Yudkowsky? Sounds familiar, but...
I keep having this person brought up to
me by serious people. He is a person that writes
about AGI and writes scary... He wrote, like,
a fanfic, Harry Potter fan fiction, with AGI. Yeah, yeah, yeah.
I just want to be clear: if anyone else brings
him up to me, I'm gonna email back just some

(49:02):
sort of obscenity, because you should not take this man seriously; he writes Harry Potter fan fiction. He has just ingratiated himself with other cultists from LessWrong, and I see real journalists mentioning them. It's almost like people just want to find any possible proof they're right so that they can ignore the signs that everyone's wrong. And I

(49:22):
also think that regular people see the problems of AI way more than the journalists do. And it's strange, it's bizarre. It's like, another thing, and I forget who said this, if you're the person that said this to me, I'm very sorry for forgetting, but it's like everyone feels bad that they missed out on social media, on calling social media the next big movement, or they feel bad about missing out on GPUs, on not saying GPUs would push

(49:44):
the next thing. And by the way, there are people
who tried in like twenty seventeen to say GPUs would
be the next computing thing. They were ignored. It was
crazy how early they were.

Speaker 6 (49:54):
But journalists in particular were late to the Internet. Yes, so there's probably an institutional bias toward taking tech seriously, because we are paying for brushing off the Internet in two thousand.

Speaker 2 (50:11):
I think I'm going to start an awards show called the Fell for It Awards. That's a good idea. Because it's like, you say they were late, but they weren't late to the metaverse. How'd that go? They were late to crypto, but NFTs? That's...

Speaker 6 (50:25):
That's because crypto sucks, and it's hard to talk about, like, it's hard to explain.

Speaker 3 (50:30):
Yeah, having to explain the blockchain.

Speaker 2 (50:33):
No, the problem with crypto is it's complex, but you can explain it quite simply as a decentralized database. And when you explain it like that, it sounds fucking boring, because it is. Yes, it's connected to money, sometimes. I wrote about crypto for years, and every time I think about writing about it again, I feel sad. But if it comes up, I'll do my job, sure, sure, absolutely, give me a

(50:56):
CNN column. Michael Balabano can kick my ass into shape; he's wonderful. It's just so frustrating as well, because, as ever, and I'm kind of paraphrasing The Big Short, it's like the people who get fucked here are regular people when the AI bubble bursts. It's not like Sam Altman will be humiliated. Well, I will make sure he is. But

(51:17):
it's not like Wario or Sammy Clammy is going to get done in. At the end of this, the stocks are going to crash to fuck. People's pensions will be fucked. I mean, thirty-five percent of the American stock market is the Magnificent Seven; nineteen percent of that is Nvidia. I think forty-two percent of Nvidia's revenue is from Magnificent Seven companies. I will have a citation in the notes, Laura Bratton at Yahoo Finance,

(51:40):
the goat. But it's going to hit everyone. I don't think it'll be Great Financial Crisis level, but it's going to be really bad. And I don't think these AI companies are going to be offering the free spigot anymore; it costs them so much money. So we're going to see all of this, it will still be there, but like a festering hole in the side of the tech industry

(52:02):
as everyone goes, what the fuck's next?

Speaker 1 (52:03):
You do have such a beautiful way with words, Ed.

Speaker 2 (52:05):
Yes, kind of, anyway.

Speaker 6 (52:07):
The metaverse still exists within Facebook, but it's not the star child anymore.

Speaker 2 (52:15):
I was talking to me lad late last night about this, and it was driving me insane. Isn't it fucking insane? Meta, a multi-trillion-dollar market cap company, just went, don't worry, legs are coming in the metaverse, and then just went, actually, they're not.

Speaker 5 (52:29):
Actually, we're pivoting.

Speaker 2 (52:32):
A huge company just lied. They lied constantly for, like, over a year. You had people being like, absolutely, I believe you, one hundred percent, and then everyone just went, oh. Isn't that fucking strange? What the fuck is going on? It's weirder than the AI thing, though the AI thing is pretty weird. It's like tech journalism sometimes

(52:52):
lives in an alternate reality. Oh yeah. I don't know what's going on with a lot of things, but in particular, it just feels like it would be more fun if we were honest. It would be so much more fun. I wrote a thing today, it's coming out in a few days. So the thing that's running on Monday just makes fun of them,

(53:14):
because they're not charming. They're not interesting, they're not hot,
none of them are tasty looking.

Speaker 1 (53:20):
I think Sam Altman has had some work done recently. Don't you think? His lips are looking a little fuller.

Speaker 2 (53:26):
Juicy Sam? Juicy Sam Altman. All right, that's a new phrase for me to text eight people. But it's like, they're not charming, they're not interesting. Steve Jobs, complete monster, but at least interesting to listen to. They're boring, they're all management consultants. They don't... I went

(53:46):
and reread the iPhone announcement today, and the beginning is
him just being like, yeah, we did this, we did this,
and then we came up with the thing that did
all of this, and I'm about to show you. Yeah,
I get why people cheered that. But now it's like.

Speaker 1 (53:58):
I tried watching the Liquid Glass presentation. I fell asleep in like five to seven minutes there, damn. And like, I think there's parts of Liquid Glass that seem compelling. Hopefully it'll get worked out before it has the full launch, but it's presented in the most uncompelling way possible.

Speaker 5 (54:18):
It's glass based on visionOS, from the most successful, the most...

Speaker 4 (54:22):
Successful Apple product.

Speaker 2 (54:24):
You remember the Vision Pro? No? Me neither. But that's the thing as well. They're trying to get us excited for liquid ass. And it's very confusing as well, because you could just describe it in a boring way, just be very matter-of-fact about it. But I

(54:45):
guess it's for shareholders. But is it even? No, even the most slimy Apple people were kind

Speaker 6 (54:51):
of, like, they needed a literal shiny thing to distract from the...

Speaker 2 (54:57):
How about they fix tapping in screenshots? My screenshots do not crop properly. If you work for Apple and you know why, email me. But it's, oh, I don't know, make the devices work good. They're all a mess, everything.

Speaker 5 (55:12):
Like, if you think about the iPhone, it's seventeen years old. It's ready to go to college. There are people who are twenty-one, who can vote and go to war and do all those things, who most likely have no memory of a life before the smartphone. Right? Like, we're at a point where people want what's next, and so I think you just have, like, tech journalism, we have to chase

(55:33):
clicks and SEO farm bait, we have to chase all of that to stay alive. And so it's very much like,

Speaker 3 (55:39):
This is the next thing.

Speaker 5 (55:41):
Get hyped, because if you all care... It's like we're looking for the next Game of Thrones, but for tech, because Game of Thrones was such a huge traffic magnet that literally anything that happened, like, rah, we were all drinking at the teat of SEO traffic and the ad money was coming in. So I just think there is an incentive, and, you know, journalists always get pushed,

(56:02):
in any newsroom, doesn't matter who you write for, you always get pushed to do the next big thing, find the next big trend, and

Speaker 6 (56:10):
if you're a Beltway journalist, you have to think about what's the next election, who's going to be the next thing.

Speaker 3 (56:14):
You know, it's like, what's the take?

Speaker 2 (56:17):
And I think that the reason no one wants to write that is that there's nothing left.

Speaker 1 (56:23):
I mean, that's a scary thought, right? It's like, what if this is...

Speaker 5 (56:26):
It?

Speaker 1 (56:26):
Like, we're trying to come up with what's the thing that will be the new thing after the smartphone. And it's like, AR contacts? Like, come on. It's, until we start getting the chips implanted, like...

Speaker 6 (56:41):
Thought Joni I was hired to fix this, to figure
out what the what that's the phone?

Speaker 2 (56:46):
That's not a phone. We've come up with a new
kind of phone.

Speaker 5 (56:49):
I mean, I just think we're at a point where the tech has stalled out, right? Because you have, like, the law of diminishing returns, and just...

Speaker 2 (56:56):
Dot-com bubble, our episode. Where I lost you is, though... are we at the, I'm not gonna say end of history, because people misquote Fukuyama, but we're at the end, actually. Anyway, I'll get back to that. We're at the end of innovation for a minute, because if you look at how they're trying to innovate right now. First of all, I know I've

(57:18):
been making fun of the overpaying of people, but the overpaying thing they're doing is only something you do if you have no fucking clue.

Speaker 7 (57:24):
They don't know what to do.

Speaker 2 (57:24):
Yeah. Yeah, they're just throwing around money. They're investing three hundred and twenty-seven billion dollars this year into something they don't understand. They've built massive data centers. I think they've all kind of realized there might not be a next thing, and I think that their obsession has become something different, which is line go up, money go up. But how can I

(57:45):
charge you ten dollars a month? How can I charge you and a thousand people at your organization thirty dollars a month? To the point that they don't know how to do math anymore. And they're like, well, I got you to pay thirty dollars. It cost me seventy-five to get that money from you. But I don't know how to fix that. But what if I spent more money to find out? Maybe. I don't know, and they

(58:08):
don't know how. And they will say this if you ask questions, which people tend not to do with these people. And you've even got analysts doing it. There was an analyst earlier who was like, oh yeah, Google's making three point one billion dollars in Google One subscriptions because of AI. And it's not; Google One subscriptions have just gone up, and they're like, fuck, I think it's AI.

Speaker 1 (58:27):
I mean, this is why everyone's focused on AI, because it's like the final boss of our collective tech unconsciousness, right? It's the thing, Victoria, you were talking about earlier. Like in, you know, eighties sci-fi, this is always the final thing. Yeah, after we've gotten, like, augmented reality, you got the contacts, got the glasses, AI is, like, the last thing. That's the last ghost of our past that

(58:49):
we're still trying to chase, this thing that's still trying to control us, like it's time-traveling backwards, this idea that eventually we will have this AI thing. And yeah, it is the final boss, and we're always trying to chase it, and we're coming up against the barrier of, like, maybe this thing isn't actually ever gonna

(59:11):
be real.

Speaker 2 (59:11):
And I think they think it's the magical thing because
it will tell them how to run their business.

Speaker 1 (59:15):
Because it'll tell them what's next. Actually, this is as far as we can get, and then we have to create something smarter than us so it can tell us what the next thing will be, because this is the final thing we can imagine. And it's like, this is it.

Speaker 2 (59:27):
But even when you listen to Clammy Sammy talking about it, and he'll say, you see, Sammy, Clammy Sammy, that's Sam Altman, the thing he always says is, I can't wait to see what you'll build with this. And it's like, motherfucker, that is your job. And that's actually the thing with AI as well. I'm always being

(59:48):
told by the few haters who dare enter my dojo: oh, well, you don't know how to use it right, you're not doing it correctly. I think even, Allison, you might have mentioned this, the holding-it-wrong thing. But it's like, no, I am the customer. I am paying you for work. I shouldn't have to write a nine-hundred-and-fifty-word prompt that tells you to imagine you're a spaceman or whatever,

(01:00:11):
like, whatever makes it work. Fuck you, fuck you, man. Me pay you money.

Speaker 1 (01:00:16):
You give me thing. A nine-hundred-word prompt to generate a nine-hundred-word story.

Speaker 4 (01:00:21):
Yeah, that sucks.

Speaker 2 (01:00:23):
They're, like, straight up just bereft of ideas. And it's also, we've handed over society to people that don't create things. We have people who are just showing different faces to enough people that they get where they need to go. The business idiot writ large. And it's just so funny as well, because it's going to

(01:00:44):
fall apart. And when it does, everyone... I genuinely, I mean, I look forward to it because of the obvious yucks and chuckles I will get out of it. But there's going to be a really interesting period of journalism having to see it and go, why did we fall for this? And if there isn't, I will make this happen with all of my energy. I will hold everyone

(01:01:05):
to account. Because, I think, and I'm not insulting you out there, no, no, no, no, this is not an insult, I promise: read the comments on AI stories on The Verge.

Speaker 3 (01:01:16):
I do read that.

Speaker 2 (01:01:17):
I love reading them, because you see regular people are so skeptical of this shit.

Speaker 5 (01:01:22):
Oh no, I read, I love the comments on all the stories that I write, which are just like, thank you for calling out how stupid this is.

Speaker 2 (01:01:29):
Well, that's the thing, though: the regular people seem to get it.

Speaker 1 (01:01:31):
There's gonna be... well, it depends. I think there's a sector of counterculture that's developing, like, a neo-Luddite perspective, beyond just, you know, anti-tech green anarchists, which have carried the Luddite torch for the past thirty years. We're starting to see the quote-unquote cool kids adopting this neo-Luddite tendency, mostly in response

(01:01:55):
to, you know, alienation and automation, right? And I think that trend is going to continue. It's gonna be almost like a social status, a signifier, the fact that you're not reliant on these things. Because there will be a version of AI that does keep getting, like, normie-fied. I think there's

(01:02:16):
gonna be... even if you look at the way higher education is working right now, the level of people who are graduating just on the basis of AI helping them with large parts of their assignments. I have a few friends who are professors, and the majority of assignments that are turned in are majority-written by AI. And when we get to this point, we're gonna have a new

(01:02:37):
generation of the workforce that doesn't really know how to do anything, because AI has been doing everything for them. But they got their degree, you know, their I-am-employable certificate, but now they don't really know what to do.

Speaker 2 (01:02:47):
Slopification.

Speaker 1 (01:02:48):
But there's gonna be a counterculture that refuses to use it, right? That's, like, I will not be using AI. That's gonna be a thing you have to prove and talk about, and it's gonna be a version of a social, like...

Speaker 7 (01:03:01):
Like a status card.

Speaker 3 (01:03:02):
Have you heard of Cluely?

Speaker 5 (01:03:09):
Yeah, it's the cheat-on-everything app. And so it was invented by this Columbia dropout kid, who I interviewed.

Speaker 3 (01:03:17):
Oh I heard about this guy.

Speaker 5 (01:03:19):
Okay, so it's the cheat on everything app. I tested it.
It helped me cheat on nothing.

Speaker 3 (01:03:23):
It was terrible.

Speaker 1 (01:03:25):
I cannot wait. All my dates go flawlessly now that I read from a teleprompter.

Speaker 5 (01:03:31):
I can't even do that. Like, in its current state, when you use it, it's like a prompt machine for when you are on a video call, and it very slowly answers prompts based on a transcript of your video. It's not that usable. It messed up the mics on my computer from testing it, to the point where I deleted it from my computer, and

(01:03:52):
video software programs like Google Meet and Zoom will pick a non-existent Cluely mic, and I'm just like, I don't love that.

Speaker 7 (01:04:01):
That's concerning.

Speaker 5 (01:04:03):
But you know, cool. Yeah. But, you know, as soon as I wrote that story up, someone from another company messaged me, like, we're using AI to catch the AI cheaters, and I was

Speaker 2 (01:04:11):
like, okay, finally, we have the Bear Patrol from The Simpsons solving this problem we created.

Speaker 4 (01:04:19):
That's the perfect use case for AI.

Speaker 1 (01:04:21):
Though, using AI to fix all the problems caused by AI, that's the real...

Speaker 2 (01:04:25):
I think it's beautiful.

Speaker 4 (01:04:26):
I think that's the actual singularity.

Speaker 2 (01:04:29):
We've created problems to create solutions, and neither of them work. But Cluely... one of the people from Andreessen went on a podcast and was like, yeah, you know, it's when marketing and virality overtake making a perfect artisan product. It's like, what you mean is when something goes viral for fucking

(01:04:49):
lying. Lying. Like, it was a lie. Like, when you say something that is not true intentionally, that's called lying.

Speaker 1 (01:04:57):
It's called an IRL hallucination.

Speaker 5 (01:05:00):
Yes, and it's weird because of the way that app was born. Initially it was some other app that was used to kind of make a point about how stupid technical interviews are for devs, right? And I was like, that's actually kind of radical, and it is proving a point. And you have lost the point, and now you have five million dollars, and it's going

(01:05:21):
into one-hundred-thousand-dollar commercials and a nice apartment.

Speaker 2 (01:05:24):
Did you hear about Mira Murati, though, the former CTO of OpenAI, and her new startup, Thinking Machines? No? No one has. So they raised, I think, like a billion dollars, I want to say, and they did not share anything about the financials. They also did not share anything about the product. So anyone investing just went into a room, Mira Murati went, yes, so, me need money, me need money now,

(01:05:47):
and they went, fuck yeah, absolutely. There were reports where the VCs were just like, can you share anything? And she said no. On top of this, and by the way, I admire the scam at this point, I just get it, she has it so her voting rights supersede everyone's. I get it now. But fuck these pigs, who cannot even bother to think about what they're building or what

(01:06:10):
it is they do. They're just like, I will give you an unlimited amount of money because of the vibes. I would love to know... I definitely had a moment where I'm like...

Speaker 5 (01:06:23):
Could I do vibes? Give me money, give me one hundred million money, and I will vaguely promise to make something in the future at some time.

Speaker 2 (01:06:34):
I will never do anything, I will guarantee you that.
I mean, Ilya Sutskever, the other OpenAI guy, was misreported by Pivot to AI, which generally does good work but needs to work on their fucking headlines, suggesting that they guaranteed they would not do anything until superintelligence, when the actual headline was that they said they have nothing and they want to build superintelligence, which is way funnier.

(01:06:55):
I honestly, at this point... like, it's evil to lie and extract capital, but the people you're extracting it from, you're wallet-inspecting them. But also, you could fix, like, world hunger, I think, for six billion dollars. They worked it out.

Speaker 6 (01:07:11):
Yeah, yeah, you could, and they challenged Elon Musk, and he said, if you figure out the number, I'll give it to you. And then they figured out the number.

Speaker 2 (01:07:18):
And he never paid it. It's very upsetting. I saw an analyst out there as well say today that Grok only makes a hundred million dollars in revenue. It's so fucking cool as well how they'll burn money.

Speaker 5 (01:07:30):
I mean, just give it to me, give it to me. Sponsor a journalist. You too can sponsor a journalist to live a life.

Speaker 2 (01:07:41):
And the funny thing is, I actually think you could make a shit-ton of profitable journalism just by talking about this bluntly. I'm doing it all the time. But it's weird. And, actually, this kind of wraps us up as well: I have to wonder whether, at the end of this farce, there will be a rise of critical tech journalists. I'm not holding out hope, but I think that there is

(01:08:03):
a shot. Like you two, actually, Victoria, Allison, you've both inspired me a bit that it's possible.

Speaker 4 (01:08:10):
It's just right, that's true.

Speaker 3 (01:08:11):
I try it, and I tell you what happened.

Speaker 4 (01:08:13):
When I tried it.

Speaker 5 (01:08:14):
That's it.

Speaker 3 (01:08:14):
Imagine my job.

Speaker 2 (01:08:16):
Imagine that. Imagine if that happened on Hard Fork. Sorry, but I'm saying nothing. No, I know, I know, I know. I'm not going to, look, these are my opinions, just mine. I think about them all the time. I'm hoping this happens, because I think you've got Brian Merchant, you've got Ed Ongweso Jr., and Molly White, of course, one of the fucking best in the business. Molly's great.

(01:08:38):
You've got really good criticism, and I think people are hungry for it. The reason I brought up The Verge's comments is, this isn't any kind of fun-making. You see people saying it now, just being like, hey, why the fuck am I having to pretend here? And I just think it's the business idiot idea I had a few months ago, where it's like, hey,

(01:08:58):
maybe we've handed over our economy, our finances, the editorial structure of some publications, and the people with the biggest microphones to people that don't understand a single goddamn thing. And that kind of scares me more than anything, because we talk about the harms of generative AI, and, yeah, there is no intention. No intention is far scarier than people being like, they want all

(01:09:20):
of your data and all this when they have no idea.
I was actually kind of disappointed. Meredith Whittaker of Signal, she's really good, I admire her dearly. She said a thing on stage about how, yeah, AI agents, which is the marketing term, are going to do this, and they're going to do this. Whenever anyone speaks about AI agents, just imagine, like, clown noises, circus music if

(01:09:43):
it helps. Because AI agents do not exist; they do not work. There are people who are getting close to a thing that might work one time. There was a big study out of Salesforce: with multi-step processes, you know, things that require you to do more than one thing, it failed the majority of the time. It's like, sorry, I only succeeded thirty

(01:10:04):
two percent of the time, which is very bad and
not getting better. And it's like, this is what everyone's
talking about, AI agents, despite them not working. I realize this sentence started at one point and it's going to end at another, but I feel like I'm going fucking insane every time I read the news and see a new thing that does not exist and everyone says it exists. Am I crazy? Am I having a problem? Like...

(01:10:26):
I mean, yes, but is that why I'm reading about stuff that doesn't exist? It's just driving me insane. But now you've said the divinity thing, I'm just like, this isn't actually about building stuff. This is a belief system and plate-spinning. This is just an intention to build a vibes-based economy, except it can't last long-term. Oh God.

Speaker 6 (01:10:49):
Or until the next revolution that replaces the AI god. You know, if the Industrial Revolution and modernity killed God, then maybe AI is the demigod that we replace God with, and then something else will happen.

Speaker 2 (01:11:09):
Sam Altman is the Antichrist.

Speaker 4 (01:11:12):
Many many people are saying that.

Speaker 2 (01:11:15):
Many people, many.

Speaker 6 (01:11:17):
People in a New York City podcasting studio are saying.

Speaker 4 (01:11:20):
Did I do that?

Speaker 2 (01:11:22):
Sorry. But even then, the reason I say he's the Antichrist is not any recent interviews on the New York Times podcast. It's specifically the fact that he is able to charm every business idiot. He's so good at it. He's good at saying nothing in a way that makes people give him a billion dollars. And I think he's wrapped everyone up in
this insanity, and no one really knows why they're doing it.

(01:11:43):
You've got journalists, you've got investors, you've got consumers even
who are like chasing this dream. And I genuinely think
he may lead the tech industry to a kind of ruin.
I don't think all the companies are going to shut down.
But this is the ultimate hubris of basing everything in tech on venture capital invested by people who don't really understand, and public companies run by people with MBAs. Every single one.

(01:12:05):
Mark Zuckerberg's a rarity; he doesn't have an MBA, but all the rest of them do. Even the guy who replaced Andy Jassy at AWS, the cloud part of Amazon, has an MBA. MBAs are everywhere. I think we should bar them from running companies, and also from being allowed to... just don't give them houses or healthcare. No, sorry, I'll stop that one there.

(01:12:27):
But it's just, I don't know, we're all going to suffer for this. I'll blog about it, I guess, and we'll all blog, we'll do podcasts, Gare. But you know what, I'm gonna wrap it on that happy note. Gare, where can people find you?

Speaker 1 (01:12:40):
Well, I help run a daily yikes news show for Cool Zone Media called It Could Happen Here. That's where you can find most of my work. Right now, I'm finishing my final piece on the Stop Cop City movement in Atlanta, as well as an upcoming piece on liberal accelerationism.

Speaker 2 (01:12:58):
So what is that?

Speaker 7 (01:12:59):
You know?

Speaker 4 (01:12:59):
That's what the piece is about.

Speaker 2 (01:13:01):
Well, we'll have to find out and listen to it. Allison, where can people find you?

Speaker 6 (01:13:05):
I write a business newsletter for CNN called Nightcap. You can just Google CNN Business Nightcap. And I'm on Bluesky. And Victoria?

Speaker 5 (01:13:14):
You can find me at The Verge, and all my handles on everything, Bluesky, Twitter, Instagram, all the things, are at vicmsong.

Speaker 2 (01:13:21):
You can find me inside your computer. That's where I live.
I'm Ed Zitron. You've been listening to Better Offline. Thank you, of course, to our wonderful producer Daniel Goodman. You've been hearing us recorded out of the beautiful New York City, Nevada. And yeah, you keep listening to my goddamn show. We'll have a monologue this week as well. Thank

(01:13:46):
you for listening to Better Offline. The editor and composer
of the Better Offline theme song is Matt Osowski. You can check out more of his music and audio projects at mattosowski dot com.

Speaker 4 (01:13:56):
M A T.

Speaker 2 (01:13:56):
T O S O W S K I. You can email me at ez at betteroffline dot com, or visit betteroffline dot com to find more podcast links and, of course, my newsletter. I also really recommend you go to chat dot wheresyoured dot at to visit the Discord, and go to r slash BetterOffline to check out the Reddit. Thank you so much for listening.

Speaker 4 (01:14:18):
Better Offline is a production of cool Zone Media.

Speaker 2 (01:14:21):
For more from cool Zone Media, visit our website cool
Zonemedia dot com, or check us

Speaker 6 (01:14:25):
Out on the iHeartRadio app, Apple Podcasts, or wherever you
get your podcasts.