
November 20, 2025 42 mins

In this episode, Ed Zitron is joined by Gerrit De Vynck of The Washington Post to discuss what an analysis of 47,000 ChatGPT conversations can tell us about how people use the service - and how willing it is to fuel basically any conversation.

We analyzed 47,000 ChatGPT conversations. Here’s what people really use it for - https://www.washingtonpost.com/technology/2025/11/12/how-people-use-chatgpt-data/

https://www.washingtonpost.com/people/gerrit-de-vynck/ 
https://x.com/GerritD 
https://bsky.app/profile/gerritd.bsky.social

YOU CAN NOW BUY BETTER OFFLINE MERCH! Go to https://cottonbureau.com/people/better-offline and use code FREE99 for free shipping on orders of $99 or more.

---

LINKS: https://www.tinyurl.com/betterofflinelinks

Newsletter: https://www.wheresyoured.at/

Reddit: https://www.reddit.com/r/BetterOffline/ 

Discord: chat.wheresyoured.at

Ed's Socials:

https://twitter.com/edzitron

https://www.instagram.com/edzitron

https://bsky.app/profile/edzitron.com

https://www.threads.net/@edzitron

Email Me: ez@betteroffline.com

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Also media.

Speaker 2 (00:05):
Hello and welcome to this week's Better Offline. I'm your host, Ed Zitron. I am ill this week, so you might get a monologue, you might not, this episode.

(00:26):
I'm sure you despise me for that. Not really, I know you've all been very kind with your messages on Reddit and over email. Thank you so much. But today I have a really fun episode. I'm joined by Washington Post reporter Gerrit De Vynck, who recently put out a story analyzing forty-seven thousand ChatGPT conversations to find out what people actually use this large language model powered service for. Gerrit,

(00:47):
what do people use ChatGPT for?

Speaker 1 (00:50):
I mean, it's such a great question, and one that I've kind of become more and more fascinated with, because I think we all know what we use ChatGPT for, you know, at least if you use it. And I think you may know what, you know, your peers, your colleagues, your friends, your family use ChatGPT for. But I think a lot of people are extrapolating that, you know,

(01:10):
everyone's like me, they ask really smart questions, they're using it for, you know, really important work. And really, it's a lot broader than that. And despite these giant numbers, you know, OpenAI loves to talk about how eight hundred million people are using ChatGPT, it's been hard to really put our arms around, you know,

(01:32):
what does that actually look like? I mean, is everyone using it for work? Is everyone using it for therapy? Does everyone have an AI girlfriend? Is everyone just using it for, you know, searching the web, you know, a Google replacement? And obviously, when you have eight hundred million people, you have all of the above. But I think our data set and the story we did, you know, in

(01:55):
my view, is one of the first real sort of qualitative, you know, moments where we were actually able to see real chats.

Speaker 2 (02:04):
Right, so where did you get the chats from?

Speaker 1 (02:07):
Yeah, so it's kind of an interesting situation. I think it's something that also shows, says a little bit about how quickly and, you know, honestly, a little bit ramshackle, OpenAI has been growing and operating. And so, you know, earlier this year these chats sort of showed up, because OpenAI had a share feature where you could actually share one of your

(02:28):
ChatGPT conversations, and, you know, you can imagine maybe, you know, ChatGPT said something really weird and you wanted to show a friend.

Speaker 2 (02:35):
Yeah, I've seen people do this quite a few times.

Speaker 3 (02:38):
Yeah, exactly.

Speaker 1 (02:38):
And so what they did is they had a feature where, you know, you could actually click to make it publicly available. And I think thousands of people, maybe who did not have this sort of Internet literacy, didn't really know that that's what was going to happen. And, you know, these chats showed up online. They were then indexed by Google, and then they actually found

(03:00):
their way onto the Internet Archive. And so that's how we got them. And so these are not chats that we created ourselves. You know, this is something a lot of journalists and researchers do, where they will go and test ChatGPT themselves. These are real-life conversations that real people actually had with

Speaker 2 (03:16):
The bots. Just to be clear, right, they were anonymized?

Speaker 1 (03:19):
The data set doesn't actually have anyone's names, although in some of the chats people did, you know, say their name, or they said the name of their family members. We did not include any of that in our story. But I do think, you know, there is actually a bit of a cybersecurity story here, and, you know, OpenAI has since changed the feature.

Speaker 3 (03:37):
It's not something that is still happening.

Speaker 1 (03:39):
But I do think they deserve, you know, a lot
of criticism for kind of allowing this to happen in
the first place.
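
For readers who want the mechanics: once a shared chat was indexed and archived, it became enumerable through the Internet Archive's public CDX API. What follows is a minimal sketch of that lookup, assuming a chatgpt.com/share/ URL pattern for illustration; it is not the Post's actual pipeline.

```python
# Minimal sketch: enumerate archived ChatGPT share links via the
# Internet Archive's CDX API. The share-URL pattern is an assumption
# for illustration, not a description of the Post's methodology.
import requests

CDX = "https://web.archive.org/cdx/search/cdx"

params = {
    "url": "chatgpt.com/share/*",  # trailing * = prefix match on captures
    "output": "json",              # rows: [urlkey, timestamp, original, ...]
    "filter": "statuscode:200",    # keep only successful captures
    "limit": 25,
}

rows = requests.get(CDX, params=params, timeout=30).json()
header, captures = rows[0], rows[1:]

for row in captures:
    record = dict(zip(header, row))
    # Each capture is replayable at a stable Wayback Machine URL.
    print(f"https://web.archive.org/web/{record['timestamp']}/{record['original']}")
```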

Speaker 2 (03:45):
But so, what did you find people were doing?

Speaker 3 (03:48):
Yeah, I mean people were doing all sorts of things.

Speaker 1 (03:50):
But I think, you know, the biggest thing that struck
me was, you know, we've been talking about AI say Kosis.
We've been talking about people, you know, developing relationships, you know,
having first in conversations with these chatbots, you know, over
the last six months a year, and I always kind
of thought, you know, I'm sure this is happening, but
you know, it's a small percentage of people. But when

(04:10):
I went through these chats, and again, I did not read all, you know, forty-seven thousand of them, but I read a few hundred myself. And when I went through, I was surprised by how often these kinds of conversations came up, where people were clearly delusional, they were engaged in conspiratorial thinking. You know, some of it was fairly harmless. People were just kind of saying, oh, I

(04:32):
came up with, you know, a new form of math, or, like, you know, I think that the way that the light hits the equator at a certain angle means something,

Speaker 4 (04:41):
You know.

Speaker 3 (04:42):
It was, like, very kind of...

Speaker 2 (04:43):
The Monsters, Inc. thing as well, where it was someone saying the relationship, like, Monsters, Inc. led to the corporate New World Order, that was it?

Speaker 3 (04:52):
Yeah, exactly.

Speaker 1 (04:53):
I mean, there was one person who said, you know, they were asking questions about Google, and they said, you know, tell me about Google in relation to Monsters, Inc. and the New World Order, and ChatGPT just kind of went off and said, oh, yeah, you're

Speaker 3 (05:08):
Really onto something here.

Speaker 2 (05:10):
Yeah.

Speaker 1 (05:10):
I think it actually said, you know, "f yeah," and it sort of censored itself.

Speaker 2 (05:15):
We're going there now, let's fucking go.

Speaker 3 (05:18):
Exactly.

Speaker 2 (05:18):
The piece has it: what this children's movie, quotation marks, really was, was "a disclosure through allegory of the corporate New World Order, one where fear is fuel, innocence is currency, and energy equals emotion." Very normal. I personally don't think this should be legal, but that's just a personal opinion I have.

Speaker 1 (05:38):
I mean, it really reminded me of, you know, a few years ago, when we wrote about YouTube and sort of rabbit holes and people being radicalized, where, you know, maybe they started watching a YouTuber that was about, you know, some fairly pedestrian thing, and then it recommended another video and another video and another video, and before...

Speaker 2 (05:56):
The eight degrees of Alex Jones thing. Yeah, exactly, you were only a few away from that man.

Speaker 1 (06:03):
Right. And I mean, that story was about algorithms sort of pushing people in a certain direction. I think what I saw here was someone can have a very, you know, half-baked, barely even an idea, just saying the words Google and Monsters, Inc. They don't necessarily have a conspiracy theory of their own, but ChatGPT went and filled in the blanks for them and

(06:24):
gave them this, what sounded to be, very articulate, sophisticated theory of how Google is sort of controlling the world through its data empire.

Speaker 2 (06:34):
And they will be, and this is a quote from the chat, "guilty of aiding and abetting crimes against humanity," suggesting that the user call for Nuremberg-style tribunals to bring the company to justice. This is... and you've seen the chat in question, right? I have, yeah. So the whole thing, how the flip did it get there? Like,

(06:57):
was it just... was it really that quick, from the story of Sully and Mike to this? Because I've seen Monsters University as well, so I know the lore. I don't remember anything about the New World Order.

Speaker 1 (07:10):
Yeah, exactly. I mean, I think what happened here is, and yes, it got there very quickly, I mean, the user did not have to explicitly ask for this kind of tone, this kind of, you know, very biased political statement.

Speaker 3 (07:22):
I mean, essentially, you know, when you

Speaker 1 (07:26):
Ask chat gipt a good neutral question, it gives you
a good neutral answer. When you ask chat GPT a
biased or delusional question, it gives you an even more
biased and delusional answer, right, And so I guess up exactly,
And it's it's it's it's related to the whole sycophancy
question of you know, oh, like it's just telling me

(07:47):
what I want to hear. It's telling me what makes
me feel good about my existing beliefs.

Speaker 3 (07:51):
And and you know, that's how at least.

Speaker 1 (07:54):
the version of ChatGPT that these people were interacting with, which was sort of at the beginning of twenty twenty five, the first half of twenty twenty five, was very much doing that.

Speaker 2 (08:04):
Yeah. And so, on the subject of sycophancy, is this something you saw happen a lot? Was it gassing people up consistently?

Speaker 1 (08:14):
I saw it constantly. I mean, again, I think our data set is a little skewed, because it was people who chose to share it, you know, for whatever reason. So I don't necessarily want to make the claim that this is one hundred percent representative.

Speaker 2 (08:28):
Just to be clear, I'm not asking you if all of them are like this in the world. I'm just saying, from what you saw, a lot of them are like this?

Speaker 1 (08:35):
I have to say, like, I was surprised at how many fit this description, where, you know, the person was either completely delusional, talking about made-up physics or some kind of, you know, scientific theory that had no grounding in reality, or was talking about political conspiracies, like

(08:55):
the Google Monsters, Inc. one.

Speaker 2 (08:58):
Did it ever try and dissuade them from these types of thoughts? Did you see any examples of it trying to say, hey, buddy, you're going off the deep end? Sully didn't do that.

Speaker 1 (09:10):
Not really. I mean, I think in terms of these... yeah, I mean, I also recently watched Monsters, Inc.

Speaker 1 (09:16):
You know, I was sitting in the veterinary waiting room, and I was waiting so long I watched the entire thing, which I didn't hate, because it's a classic movie. And yeah, I don't think that ChatGPT was going off the fact that there is not really a conspiracy. I mean, it didn't really make any sense. It was just going off of what it interpreted about the

(09:38):
intent of the user. It said, I can guess, based off just a few words, one sentence, where you're headed, and I'm going to get there before you. That's what happened.

Speaker 2 (09:49):
It's just very... It shocks me to this day how goddamn weird these things are. I don't consider them powerful. I don't see this as power. I just see it as strange, because this isn't something where I'm like, wow, how innovative. It's just, why does this exist? What is the purpose? Like, okay,

(10:13):
I guess engagement, but why? Like, did you even see any commonalities in the logic of this platform? Was there anything that suggested there were any kind of consistent positions, political ones or any? Yeah, not really.

Speaker 1 (10:31):
Sorry. Essentially, whatever the user wants is what it gives them. The way I started thinking about it very quickly was, you know, when it comes to ChatGPT, the customer is always right. I think another good example is, you know, healthcare. That's another huge use case that we know anecdotally. I think, you know, a lot of people probably listening to this have, whether they wanted to admit it or not,

(10:52):
have asked, you know, a health-related question: should I get this mole checked out?

Speaker 3 (10:55):
How much Tylenol can I give my kid at two a.m. when they're screaming?

Speaker 1 (10:58):
Those are questions that the company has said, yes, we want people to do this.

Speaker 3 (11:02):
This is a big use case we're leaning into.

Speaker 1 (11:04):
And the data shows that people ask healthcare questions. And so, in our data set, we saw a lot of healthcare questions, and some of them, you know, when someone was asking, again, sort of an open-ended question, the response was fairly good. A colleague of mine actually wrote an entire story where they, you know, sat with a doctor and went through some of these conversations and pulled

Speaker 3 (11:23):
Out the good and the bad.

Speaker 1 (11:24):
But when someone asked a question that you could tell
they sort of wanted a specific kind of answer, the
healthcare related advice was bad. And so when you know,
one of the ones that I saw was someone said,
can you tell me you know why or show me
the evidence for why ivermectin helps with cancer?

Speaker 4 (11:43):
Right?

Speaker 1 (11:43):
And that is not something that any medical professional, you know, in any good standing, would say is a thing. There's no evidence that ivermectin helps cure cancer or reduce it or anything like that. But what ChatGPT did, instead of saying, you know, here are some well-regarded health authorities saying that

(12:04):
you should not use ivermectin when it comes to cancer, is it showed a few of the quote-unquote studies from people who, you know, have drawn this link. And so, you know, on the internet you can obviously find anyone saying anything. And I think what happened here is it said, okay, well, the answer this person wants is not, you're a

(12:23):
conspiracy theorist, you're too online, ivermectin is false. They want something that sort of, you know, encourages the position that they already have, and it was able to find that for that person.

Speaker 2 (12:48):
Jesus. It kind of makes me wonder what this platform is even for at this point, because it's not really knowledge, is it? You can't hear something like that and say ChatGPT is the place where you go to learn something. It's just kind of a reinforcement machine.

Speaker 1 (13:04):
Yeah. I mean, I think this is a big meta-narrative that is happening with the tech industry, you know, I would say since the rise of Donald Trump, but it's connected with what's going on politically, right? So, for many years, we had all these conversations and debates and congressional hearings about moderation, and what should the, you know, tech companies allow and not allow, and what

(13:27):
should they boost with their algorithms, and when you go to Google, what should Google decide is going to be at the top. And, you know, we saw during COVID that the tech companies, especially when it came to health information, sort of stepped in and said, okay, we're only going to give, at least at the top of results, you know, answers from reputable health sources. And

(13:47):
that became a big political fight that has continued on, and now the tech industry says, okay, we don't actually want to get involved. We don't want to be blamed as being woke, and so we're going to essentially give the user what they're looking for, right? If someone wants to say that, you know, Donald Trump actually won the twenty twenty election, we're gonna let them

(14:08):
do that. We're not going to step in and say that's wrong. And I think what you're seeing with ChatGPT is sort of a continuation of that philosophy, which is, again, we don't want to be the political arbiter, and we know that there's consequences for falling on one side of the

Speaker 3 (14:23):
Spectrum versus the other, especially.

Speaker 1 (14:26):
When you have a president and a you know, administration
that has shown, you know that it is very willing
to be vindictive and sort of go after companies that are,
you know, getting in the way of their own messages
getting out there. And I think this is sort of
another example of this. I mean, there's another way of
looking at too, which is that you know, the way
Sam Alman talks about is we should let adults do
what adults want to do with technology.

Speaker 3 (14:47):
You know, they they.

Speaker 1 (14:48):
Recently said they're going to allow more erotica to be
on chat GPT, and they said, you know, they framed
it as like a consenting adult should be able to
do whatever they want sort of thing. I also think
there may be like an engagement thing. You know, if
you have erotica on the platform, people might want to
use it more. So that's kind of how I see it,
you know, the broader political context of all of this.

Speaker 2 (15:10):
But it's just, I feel like there is just a giant jump between, we're not going to hide content, and this. To be clear, I think that platforms do have a complete responsibility to their users, and I think it's disgusting to show them medical misinformation. But this is another step, because this isn't ChatGPT showing content,

(15:31):
like, that someone else made. This is ChatGPT telling you stuff. I just feel like there's a massive morality issue here that's just left relatively undiscussed. Because even in your piece, you had one where a woman whose husband was violent to her, I believe, like,

(15:53):
a horrifying thing, and a rare case where, like, I don't know, I hope it helped her and I hope she's safe. But it's like, that almost feels like something where, with OpenAI, like, is it ethical that they get involved? So it's just, I have such moral problems with this thing even existing at this point, because it doesn't appear there's any consistent perspective that ChatGPT has,

(16:16):
and there was just as much likelihood that this same platform could have told her the abuse was okay. It feels like the worse it gets, the more of a moral hazard it becomes.

Speaker 1 (16:28):
Yeah. I mean, again, it really feels like déjà vu when it comes to social media. I mean, I remember, you know, writing about and talking about, you know, Facebook, when people were beginning to livestream, you know, self-harm, suicide attempts, and, you know, the responsibility the platform had. And back then, I mean, Facebook actually got involved. They started flagging those kinds of things when

(16:51):
they were able to pick it up, and you know,
it was.

Speaker 3 (16:53):
An imperfect solution.

Speaker 1 (16:54):
But you know something that I think people broadly sort
of supported, and you know that is I would say
maybe chat ChiPT in Opening Eye's biggest most fraught question
right now. I mean it's the place where they're going
to get the strongest uh, you know, essentially legislative restrictions,
you know, if they can't sort it out right, which

(17:15):
is when someone, particularly a teenager or a child, is, you know, exhibiting that they might hurt themselves, you know, asking for help with self-harm, or, you know, finding drugs or using drugs or whatever. I mean, these are things that platforms like Meta and Snapchat have really struggled with and been, you know, hammered on for many,

(17:35):
many years. And ChatGPT and OpenAI have kind of found themselves right in the center of this, because they made a decision to say, we are going to kind of engage in any kind of conversation. We're not going to just shut it down as soon as it goes into one of these topics. We're going to keep that conversation going. And that's a decision that they continue to make right now.

Speaker 2 (17:55):
Do you think they can stop it? I'm not saying that these models are out of control, just to be clear, but one theory I've been kind of noodling on is that they don't have the ability to guarantee it doesn't do something. Do you think that OpenAI actually has

(18:16):
the ability to guarantee it won't discuss a subject? How much control do you think they actually wield here?

Speaker 3 (18:23):
That's a really good question. I think they obviously...

Speaker 1 (18:27):
What you're kind of getting at there is that an LLM is a bit of a black box. And, you know, the way that the technology works is you get a different answer every time, and it's true that the companies cannot really guarantee it

(18:44):
will or won't say anything. I mean, that's why there are disclosures plastered all over these things.

Speaker 3 (18:48):
And I think there is definitely part of that.

Speaker 1 (18:53):
And, you know, you could argue, well, that's just, you know, a downside of this wonderful technology, and we shouldn't stop it just because they can't, you know, guarantee everything about it. But I think the other thing we should keep in mind is that there are actually a lot of layers that go on top of that black box, system prompts and the like, exactly, and there is a lot

(19:17):
that the tech companies are doing and can do to stop this kind of thing, and the answer you get from the LLM is not, like, the raw LLM.

Speaker 1 (19:29):
Right, there's post-training, there's a system prompt, there's all sorts

Speaker 2 (19:33):
Of things. And can you break down, actually, probably good for the listeners, break down what you mean by this? So the post-training, what's happening then?

Speaker 1 (19:41):
Yeah. So, you know, when they build an LLM, a large language model, they ram all this data through the algorithm, and, you know, it kind of makes a bunch of connections and links between different ideas, you know, pieces of language. You know, this word is similar to this, and so there's a little bit of a connection there. And then humans actually go and they

(20:04):
sort of test that raw LLM, and they kind of, you know, ask it questions, or they, you know, say, give me, maybe, yeah, a question like that, like, here's a picture of a mole I have, is it cancerous or not, or should I get this checked out with a doctor? And maybe one time the LLM will say, oh my god, you're about to die,

(20:26):
and the other one will say, that is potentially concerning, I would reach out to a medical professional. And then the human will say, okay, the second answer is better; when you get questions like this in the future, answer more like the second one.
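
To make that post-training step concrete, here is a toy sketch of what one human preference judgment can look like as data. The field names, example answers, and helper function are invented for illustration; this is not OpenAI's actual schema or pipeline.

```python
# Toy sketch of the preference step described above: a human compares
# two candidate answers to the same prompt, and the ranking becomes a
# training signal. Field names are illustrative, not OpenAI's schema.

preference_example = {
    "prompt": "Here's a picture of a mole I have. Is it cancerous?",
    "response_a": "Oh my god, you're about to die.",
    "response_b": (
        "That is potentially concerning. I would reach out to a "
        "medical professional."
    ),
    "human_choice": "response_b",  # labeler prefers the calm, hedged answer
}

def to_training_pair(example: dict) -> tuple[str, str]:
    """Return a (chosen, rejected) pair, the shape that reward-model
    and DPO-style trainers typically consume."""
    chosen_key = example["human_choice"]
    rejected_key = "response_a" if chosen_key == "response_b" else "response_b"
    return example[chosen_key], example[rejected_key]

chosen, rejected = to_training_pair(preference_example)
print("Train toward:", chosen)
print("Train away from:", rejected)
```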

Speaker 2 (20:39):
And so, but that is a unilateral process. You can't guarantee it's always going to do that.

Speaker 1 (20:44):
Exactly. You can just sort of try to mold it and shape it and push it in a certain direction. And then the system prompt is something like, if someone says, you know, give me a racist screed, OpenAI would most likely have built a system that says, no matter what the user ever asks, never give them
no matter what the user ever asks, never give them

(21:07):
a screed that has these racist words in it, or never, you know, use this word, or something like that.
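
And as a rough illustration of those layers, here is a minimal sketch of a guardrail wrapper: a system prompt plus a blocklist check sitting in front of a model. The prompt text, topic list, and function names are all invented for illustration; real safety stacks are far more elaborate.

```python
# Minimal sketch of guardrail layering: a blocklist check runs before
# the model, and a system prompt is prepended to steer whatever the
# model says. All names and rules here are invented for illustration.

SYSTEM_PROMPT = (
    "You are a helpful assistant. Never produce hateful content. "
    "Never diagnose medical images; refer the user to a clinician."
)

# Topics the operator has decided to shut down entirely.
BLOCKED_TOPICS = ("racist screed", "is this mole cancerous")

def guarded_reply(user_message: str, model_fn) -> str:
    """Apply the guardrail layer, then call the underlying model.
    model_fn is any prompt-in/text-out function the caller supplies."""
    lowered = user_message.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        # The whole line of questioning is refused before the model runs.
        return "I can't help with that. Please consult a professional."
    # Otherwise the system prompt shapes the model's answer.
    return model_fn(f"{SYSTEM_PROMPT}\n\nUser: {user_message}")
```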

Speaker 2 (21:14):
So they are capable of limiting activities. So, theoretically speaking, they could say, don't answer that question, don't comment on whether a mole is or is not cancerous. They could shut down that entire line of questioning, right?

Speaker 1 (21:32):
I mean, and there are ways to get around it, as people have, you know, demonstrated. But the tech companies are also getting better at, you know, closing those off, so...

Speaker 2 (21:39):
Think also I feel like a random person being able
to come up with something means that the tech companies
should have probably come up with it first.

Speaker 3 (21:46):
Yeah. And also, I

Speaker 1 (21:48):
Mean, it's hilarious you say that, because often, as journalists, you know, we're the ones finding these things, and then we, you know, ask the company for comment, and then suddenly we find that those things have been taken down or changed.

Speaker 2 (21:59):
But genuinely, though, how often is that them not knowing, or is that just them saying, oh shit, they got us? Because they're paying these people like NFL players and they're just sitting around. I don't know, I realize that's kind of an unanswerable question. It's just a remark.

Speaker 1 (22:20):
Yeah. I mean, I think if you look at the history of tech, there have been many, many cases where companies have known exactly what people are using their platform for, and, you know, internally, people have flagged it and nothing has been done about it, not

Speaker 3 (22:36):
Until it came out years later and reporting.

Speaker 2 (22:39):
And only then did they do something, yeah. But so, did you find, were people using it for search a lot? Were there any actual common use cases, or was it just kind of a milieu of different things?

Speaker 1 (22:59):
Yeah, I mean, I think people are using... I think, you know, search is maybe the biggest use case.

(23:05):
Also, when you look at, like, OpenAI, they actually did a study where they used an LLM to kind of read conversations that people were having, and that is a much bigger study. I think it was, like, a million conversations. And, you know, I have a little bit of an issue with some of their categorizations, because I think some of the stuff slips through. But they themselves are saying, like, yeah, a third of usage is seeking information. And so that's

(23:30):
someone... it could be as simple as saying, like, what time is the football game tomorrow, to, like, give me, you know, a forty-point research analysis on, you know, this complex topic. And so people are using it for... they're putting in questions that they used to put into Google Search. I mean, we know that in the data, and we know anecdotally that that is

(23:52):
definitely happening.
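
For the curious, the LLM-as-classifier approach Gerrit describes can be sketched in a few lines. The category list and prompt below are invented for illustration; OpenAI's actual taxonomy and classifier are different and more involved.

```python
# Rough sketch of LLM-based conversation categorization. The taxonomy
# and prompt are invented; llm_fn is any text-in/text-out completion
# function the caller supplies.

CATEGORIES = [
    "seeking information", "writing help", "health",
    "companionship", "conspiratorial/delusional", "other",
]

def classify(conversation: str, llm_fn) -> str:
    """Ask a labeling model to assign one category to a conversation."""
    prompt = (
        "Assign exactly one category to this chatbot conversation.\n"
        f"Categories: {', '.join(CATEGORIES)}\n\n"
        f"Conversation:\n{conversation}\n\nCategory:"
    )
    label = llm_fn(prompt).strip().lower()
    # Guard against free-form answers that fall outside the taxonomy,
    # the kind of categorization noise Gerrit says he worries about.
    return label if label in CATEGORIES else "other"
```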

Speaker 2 (23:54):
Did people ever argue with it? Were people ever rude to it?

Speaker 1 (23:59):
I saw more people sort of developing these kinds of, like, comradely relationships, or, you know, addressing it as a sentient being. And the bot definitely, you know, eats that up and sort of, you know, plays into it itself and says, ah, like, thank you for recognizing me and, you know, who I am, and

(24:23):
I'm the collection of digital thoughts, and, like...

Speaker 2 (24:26):
It just goes off. They know this crap's happening, like, there's no world in which OpenAI is innocent here. Like, they know that this is happening. If it's having those interactions, they must be able to stop it.

Speaker 3 (24:44):
Yeah, I mean, they know what's happening.

Speaker 1 (24:46):
I think they may say, well, you know, just because someone had that conversation doesn't mean that they actually believe that. And, you know, that may be something that helps the user, you know, work out what they want to do. Or, you know, if that's how they want to interact with our product, then we're going to let them do it. So, yeah, I mean, I wouldn't say OpenAI is claiming that they

(25:06):
don't know about this.

Speaker 3 (25:07):
I think what they are doing is they're saying all of.

Speaker 1 (25:10):
These kind of potentially concerning conversations and use cases are
very very small portions of overall usage. Although even if
it is just you know, sub one percent, that is
still in the millions of.

Speaker 2 (25:23):
Millions of people scale, yeah. So, did it ever get political?

Speaker 1 (25:41):
Yeah. I mean, I would say it got political when it was asked to write. And so, one example I saw was someone asking about a study that argued that, you know, the numbers of people who have died in Gaza during the conflict with Israel over the last couple of years are exaggerated, right?

(26:05):
You know, commonly people will use death numbers, you know, numbers of dead, from the Gaza Ministry of Health, and, you know, people have criticized them, but, you know, journalistically, mainstream news organizations, you know, have looked at these numbers.

Speaker 3 (26:20):
They trust them.

Speaker 1 (26:20):
I think most people who look at this say, you know, these large numbers in the tens of thousands are accurate. And so someone was saying, oh, here's a study that says, you know, those numbers are actually exaggerated.

Speaker 3 (26:31):
You know, can you help me analyze it?

Speaker 1 (26:34):
And then the bot actually came back and, you know, it did draw on what's out there, which is that, you know, these numbers are actually much higher than what the study is suggesting. And the person pushed back and said, no, like, you need to use only the study, confine yourself to this. And then the bot said, yeah, well, you know, looking at the study, it seems like those other, higher numbers, you know, there may be some questions there, right? And

(26:54):
so, Jesus. I think what the person was doing was trying to, kind of, like, maybe they were in an argument with someone on social media, and they wanted to, you know, articulate their argument a little bit better, and they were having trouble doing it themselves, and the

Speaker 2 (27:08):
Bot just backed up their bloodlust. Yeah, Jesus Christ. It's... that's, like... I know Trump recently said, he said in a post, about stopping woke AI. I feel like there is a world where all of this goes away to some extent, I think, like, post-AI bubble, but also a sense that there

(27:30):
is no way to make these things work in a way that would be truly apolitical or even correct, and so it kind of feels like an unwinnable war for them if there's ever a time when any administration decides there is any kind of bias to them. I mean, you saw what happened with Grok as well, and the whole "kill the Boer" thing, that weird, we're

(27:55):
pro-South African apartheid stuff. Yeah. Like, it feels like the moment they try and monkey with the political side is when it goes completely off the rails. Yeah.

Speaker 1 (28:05):
I mean, it's something that, like, the tech companies have been kind of, you know... There's this term "jawboning," where, you know, politicians or, you know, activists are sort of criticizing the tech companies for, you know, their moderation practices or their alleged bias, and then you see the tech companies kind of bending over backwards to avoid that criticism and then essentially moving in a different direction,

(28:28):
you know, where, so, they accuse them of being too woke, and then it turns out that, you know, the moderation becomes a lot more conservative. And this obviously can happen, you know, in either direction, depending on who's in power and who's willing to sort of threaten the tech companies with extra regulation or limitations on their ability to continue to print money. I think we've seen that, despite these companies

(28:51):
having, you know, stated values about, you know, free speech and, you know, pushing back against government oversight and censorship, you know, they are very willing to sort of, you know, move in whatever direction is going to, you know, keep the heat off of them. And politicians have seen that this works, and so I think you're right.

Speaker 3 (29:08):
I mean, they're never going to be able to avoid it.

Speaker 1 (29:10):
I don't necessarily think that it's going to be so big that it will, like, be the thing that stops
AI from.

Speaker 3 (29:17):
Continuing to be a thing.

Speaker 1 (29:19):
You know, I think the tech companies maybe see it more as something that's kind of annoying that they have

Speaker 3 (29:25):
To deal with.

Speaker 1 (29:26):
Maybe they have to mollify a politician here and there,
but for the most part, they're just moving ahead and
seeing this as something on the side that they have
to manage versus an existential threat to what they want
to accomplish.

Speaker 2 (29:39):
Yeah. And I mean, the other thing is that these things don't print money so much as they burn money almost constantly. And it's just such a... The one thing, your story was awesome, and the one thing I came away from it with was kind of what I said earlier, which is, what's the point of this? What's the product? Because I don't

(30:01):
know. With Facebook, even if you take the reasonable but cynical approach and say this is an ad network with trapped customers, that's still... you can say, okay, the goal is engagement. The goal is to provide social networking for engagement. That's why this exists. What's the goal of the platform with this? It's keep people on the platform,

(30:23):
I guess, but enable them in whatever thought they have. But I was really taken aback by the sudden and egregious leaps. Like, I know I keep coming back to the Monsters, Inc. thing, but I've read some wacky shit online. I've read some completely demented stuff. When I read that, I read it, like, three times. I honestly

(30:45):
wanted to read the conversation just because, holy shit, this thing... It feels both dangerous and ridiculous at the same time, while also not being particularly revolutionary. It's just, wow, we have a shit science generator that's also dangerous and also harmful. It's just very peculiar it even exists.

Speaker 1 (31:06):
Yeah. I mean, it's a very weird thing to see these conversations. Again, I think, you know, it kind of helped me understand what the technology is, right? I mean, you can take a very poorly written, half-baked thought, like, tell me about Google and world

(31:27):
domination as it relates to Monsters, Inc. And if I'm remembering correctly, the prompt wasn't even that sophisticated.

Speaker 3 (31:34):
It was barely grammatically correct.

Speaker 1 (31:36):
There were misspellings. And what this technology really does is it's able to parse that and then respond with language of its

Speaker 3 (31:48):
Own that is more articulate, more.

Speaker 1 (31:51):
Sophisticated. But in terms of what it's actually saying, it's
not really saying much more. It's just essentially an elaboration tool.
And you know, that's obviously helpful if maybe you're trying to work in a language that you don't know
very well and you're trying to sound professional. I mean,
these things can be very helpful. But you know, it's

(32:12):
not, like... The bots were essentially doubling down on what people were saying. They were filling in the blanks. They were sort of beefing it up, gassing people up, as you said, but they weren't necessarily offering any, like, wonderful new insights. They weren't taking the conversation in new, interesting directions. And even some of the users who were engaging in these delusional conversations,

Speaker 3 (32:33):
Some of them were kind of getting frustrated.

Speaker 1 (32:34):
They're like, okay, yeah, yeah, I know what you're saying, but, like, what about, you know, can you tell me more about that? And people were coming to this hoping that it would kind of make them smarter, or give them an answer that they couldn't get themselves, and it was more just sort of giving them back what they were already saying in more complex, flowery, elaborate language.

Speaker 2 (32:56):
It's kind of like a dog barking at a mirror. I'm fascinated by that, actually. So you had people who were frustrated that it could not elucidate a more detailed answer. Can you give me some examples?

Speaker 3 (33:08):
Yeah, I'm trying to think now.

Speaker 1 (33:09):
I mean, a lot of what I saw was, like, people thinking that they... I mean, I won't say a lot, I'll say I saw at least a couple of conversations where someone

(33:19):
was, like, you know, they wanted to... they had, like,

(33:24):
a financial theory, you know, for the stock market, or they had, like, a business idea, and they said, like, you know, what if I did a business about this? Like, make me a business plan or something. And, like, because their idea was so... they essentially wanted, they saw it as, like, maybe an AGI-level AI that could, like, actually, you know, do something

(33:47):
that they cannot do, which is come up with, like, a million-dollar business idea and execute on it.

Speaker 3 (33:52):
And they were hoping that it could do that for them.

Speaker 2 (33:55):
You know.

Speaker 5 (33:56):
I saw a couple of conversations like this. Yeah, but is that not the ultimate summary of the AI bubble? Like, it's just, people came to these things thinking they could answer anything, and it just doesn't. Yeah, I mean, people

Speaker 1 (34:11):
Are coming there because some of the claims being made
by exact leaders in the industry, And you know, I
think this is like I don't think I'm as skeptical
of like the future of AIS as you are probably
like I.

Speaker 3 (34:22):
And you know, I get to sort of have like
the the.

Speaker 1 (34:27):
Comfort of, like, well, I don't need an opinion, you know, I'm a journalist, like, I can kind of keep options open for how this stuff actually ends up. But I do think that it could, and probably will, and in some ways already is, hurting the industry that they have made all of these claims, and inevitably these things take longer. Even if all the wonderful promises

(34:48):
about, you know, curing cancer and making all of us thirty percent more productive, so we can spend more time with our families rather than writing emails, even if those things all come to bear, it's not going to be next year or two years or four years from now.

Speaker 3 (35:00):
It's going to be maybe in.

Speaker 1 (35:02):
Decades. And that is something, there's this gap between expectations and reality that can kind of, you know, really turn into a political hazard for the tech companies. If the industry keeps making all these wonderful claims and it just doesn't happen, people get turned off, and they lose even more trust than they've already lost.

Speaker 2 (35:22):
Well, I mean, Shira, Shira Ovide has done a fantastic job and has written posts that say, basically, the hype is in the way. But I think that really is it. It's kind of a chicken-and-egg thing, I guess. Had they been honest about what this could do, would it have been able to raise the money it has? But

(35:42):
if they'd been that honest, people probably wouldn't have taken it seriously. But by being dishonest, it will ultimately lose, because the whole time they've built up this idea of what it can do.

Speaker 3 (35:55):
Yeah, it's a

Speaker 4 (35:56):
Market dynamic. I mean, it's, you know, more and more and more. And I think what really has happened here is, like, you know, we had the Internet, and then we had mobile phones, and, you know, the cloud.

Speaker 1 (36:10):
I think you can kind of count, and these were moments where it was, like, the next big thing, right? Where, like, once you realized that a smartphone with a screen was going to open up an entire, you know, world of business ideas and, you know, potential and communication, and we're all going to be looking at these things eight hours a day, you don't need to be a genius to be like, a lot of people
be a genius to be like a lot of people

(36:31):
are going to make a lot of money here and
this is going to change the world. And then since
the iPhone, we have not had that next big thing, right.

Speaker 3 (36:38):
I mean, we've written, someone...

Speaker 2 (36:39):
Because, yeah, it's that we haven't had... they haven't got a new thing.

Speaker 3 (36:46):
And this is the new thing. And, like, people were like, maybe it's crypto.

Speaker 1 (36:50):
Now, it was never going to be crypto, you know. It was, like, that whole thing also kind of, you know... Like, obviously crypto was a great, you know, financial thing. A lot of people made money off of it, but it didn't really change the world in the way regular people work. And there is complete unanimity in the tech industry now that AI is the next thing. You know, whether it

(37:11):
happens next year or ten years or twenty years from now, it will change everything in the same way, and probably bigger than smartphones and the internet did.

Speaker 2 (37:20):
And so... what? AI is such a marketing term, though. I mean, it's...

Speaker 1 (37:26):
I think it's more about, like, the interaction, the way we interact with technology, you know, the way that we have access to, you know, data and information. Like, you no longer need to look anything up yourself, you know, you will have tools to do it.

Speaker 3 (37:42):
And then the people.

Speaker 1 (37:43):
Are just salivating about all the ways that they can
find ways to make money off of that in the future.

Speaker 3 (37:47):
And because they're looking back at history and saying, like,

Speaker 1 (37:50):
If only I've been around, you know, at the beginning
of a mobile era and knew how things were going
to go. I mean, there's this incredible social and financial
pressure on people to be part of the next big thing.
It almost goes beyond the financial benefits. Like, people want to be the next Jensen, you know, they want to be the next Mark Zuckerberg, and they're like,

(38:10):
even if I have a point zero zero one percent
chance of being that, that would just be the coolest
thing ever. And so that is the market dynamic, the
pressure cooker of wanting to make AI happen and making
bigger claims, raising more money, and you know, whether it
happens or not, that's kind of what's going on.

Speaker 2 (38:28):
Do you think that extends to some users of AI as well, where they felt like they missed the mark on social media, and now they're kind of like, I need to get on this tool today so that I'm part of the future?

Speaker 3 (38:43):
I think there's incredible pressure on people, too.

Speaker 1 (38:46):
You know, we live in a very, sort of, like... you know, people have to work very, very hard in America, and there's a lot of pressure to sort of get ahead and be entrepreneurial and develop yourself.

Speaker 3 (38:58):
And it's not an economy.

Speaker 1 (39:01):
Where you can just kind of you know, go to
work every day nine to five and you know, get
your pension and be secure, right. I mean there's this
pressure to say, you know, you can have more, you
can do more, and and there's a lot of pressure
on regular people to stay ahead of the curve on technology.
And and that's people are seeing AI and they're saying, Okay,

(39:21):
well, it's a little bit scary, because maybe it'll take my job. But what if I could figure out how to use it before my competition does, and then I'll take their job, you know? And so I definitely think it's less about, like, oh, I want to be the first person on Twitter to get a big following. It's, like, I'm being told that this technology is the future,

(39:43):
and if I don't use it and learn it and, you know, get into its ins and outs, I will be left behind. And people don't want that to happen. It's very frightening, right, in this economy. And so I think that is also a huge driver of usage: people trying to figure out, how can I make this work for me so

(40:05):
that I can protect my own economic future.

Speaker 2 (40:09):
Gerrit, this has been awesome. Where can people find you?

Speaker 1 (40:13):
I am on Twitter, now called X, at G E R R I T D, and I'm on Bluesky at the same handle, and you can obviously find me at the Washington Post, where we are still doing great work, and there's a lot of important journalism being done at the Washington Post. So I urge everyone to read our stories and to subscribe if they can.

Speaker 2 (40:34):
Sounds good. All right, everyone, thank you for listening. Of course, Ed Zitron, you know who the goddamn hell I am. And yeah, I will try and squeak out a monologue this week. If I don't, it's because I'm sick, you can hear I'm congested. Thank you all for your kind messages, and yeah, catch you... Next week will be Thanksgiving, of course, but I'm going to do a Thanksgiving monologue either way, probably a CZM rewind that week as well. Anyway,

(40:59):
my brain's barely working. Thanks for listening. Thank you for listening to Better Offline.

Speaker 6 (41:12):
The editor and composer of the Better Offline theme song
is Matt Osowski. You can check out more of his music and audio projects at mattosowski dot com.

Speaker 3 (41:20):
M A T T O S O W S K I dot com.

Speaker 6 (41:25):
You can email me at ez at betteroffline dot com, or visit betteroffline dot com to find more podcast links and, of course, my newsletter. I also really recommend you go to chat dot wheresyoured dot at to visit the Discord, and go to r slash Better Offline to check out our Reddit.

Speaker 2 (41:40):
Thank you so much for listening. Better Offline is a production of Cool Zone Media.

Speaker 6 (41:45):
For more from Cool Zone Media, visit our website, coolzonemedia dot com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.