
July 18, 2025 30 mins

Are we starting to sound like ChatGPT? This week, Oz and Karah explore a new AI-powered recipe tool and test whether mustard and pasta actually go together. Then, a new study suggests AI may already be changing the way we talk. Plus, impersonations of U.S. politicians and the Danish bill that would give people legal rights to their digital selves. And finally, on the new segment Chat and Me, what happens when bots prioritize efficiency over honesty? One novelist’s frustrating, multi-hour standoff with ChatGPT.

Also, we want to hear from you: If you’ve used a chatbot in a surprising or delightful (or deranged) way, send us a 1–2 minute voice note at techstuffpodcast@gmail.com.

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:12):
From Kaleidoscope and iHeart podcasts.

Speaker 2 (00:14):
This is TechStuff. I'm Oz Woloshyn and I'm Karah Preiss.

Speaker 1 (00:18):
Today we get into the headlines this week, including the
future of unlocking animal consciousness with AI and Grok's commitment
to its maker Elon Musk. Then, on our new segment,
Chat and Me.

Speaker 3 (00:32):
ChatGPT refused to read my novel when I tried
to upload it, which I don't really blame it for doing.
Novels are boring, but the annoying thing was that ChatGPT
kept lying to me and insisting that it had
read it when it clearly hadn't.

Speaker 1 (00:45):
All of that on The Week in Tech. It's Friday, July eighteenth.

Speaker 2 (00:51):
Hey Karah. Hi, Ozzie.

Speaker 1 (00:53):
You know you and my dad are the only two
people who call me Ozzie.

Speaker 2 (00:57):
I don't know your dad that well. I know him
a little bit. I find that to be very flattering,
that I stand beside him in nickname calling. I also
call you Ozzie because it allows me to think of
you dressed like Ozzie Osbourne, which tickles me.

Speaker 1 (01:09):
Another Brit with slightly lank hair. Look, we've talked about
this on the show before. I'm not much of a
grocery shopper and I rarely cook in the kitchen.

Speaker 2 (01:18):
That shocks me very little.

Speaker 1 (01:20):
What might shock you more is that I actually can cook.
Really, I can. So my stepfather owns an Italian restaurant
in London called Ricardo's, check it out, one two six
Fulham Road, and one summer during high school, I cooked
in the kitchen there. Nowadays, living in New York City
on the fifth floor of a walk up building, I'm
pretty rarely grocery shopping and cooking.

Speaker 2 (01:43):
I have to confess, I thought you were going to
give us your address there for a second as we
were streaming.

Speaker 1 (01:47):
Well, I want people to go to Ricardo's restaurant. I
don't really want people to come to my house.

Speaker 2 (01:51):
Your house could be Ricardo's if you cook. That's true,
you never know.

Speaker 1 (01:54):
But basically, I, like many people in New York, and
regularly on a broots.

Speaker 2 (01:58):
I forgot the story until we were just talking. But
when I was a kid, they used to have those playhouses,
you know, where you would go in and you'd be
able to pretend.

Speaker 1 (02:05):
To cook, like the full size Yes, yes you did.

Speaker 2 (02:09):
As a kid, and my parents once caught me and
they let me keep going, so I'm grateful for that.
There was a little yellow phone, it was like a Fisher
Price thing, and they saw me ordering Chinese.

Speaker 1 (02:18):
No, you were ordering Chinese food in the playhouse. I love that.

Speaker 2 (02:24):
So that'll give you a sense of how much I
can cook and do anything sort of epicurean minded.

Speaker 1 (02:30):
It's funny you use that word. I've actually been playing around
with this website called Epicure this week, which is almost
tempting me back into the kitchen.

Speaker 2 (02:36):
So is this like a knockoff of the recipe site Epicurious,
which I have used.

Speaker 1 (02:41):
Not quite, although Epicurious is a dataset that Epicure's
model was trained on.

Speaker 2 (02:48):
Interesting, So it's like AI generated recipes.

Speaker 1 (02:50):
That's exactly right. Actually, I want you to take a look.
So if you go to epicure dot kaikaku dot ai.

Speaker 2 (02:55):
I know exactly how to spell kaikaku. Okay, now
I have it. I have it here. Here's the
slogan on top. First of all, this website
looks like it was created by Elon Musk. It says,
you are now the world's most creative chef. Leverage the
power of AI and machine learning to explore science backed

(03:16):
flavor pairings and generate recipes.

Speaker 1 (03:18):
I think this is designed to flatter the Elon stan
more than the Karah Preiss.

Speaker 2 (03:24):
What do they mean when they say science backed flavor
pairings and AI machine learning? How do those things come
together on this website?

Speaker 1 (03:33):
Well, the UX designer of the website must have had
you in mind after all, because there is a tab
that you can click with three words: how it works.
Oh my god. And I clicked on it, and what
I learned is the website is built using a deep
learning model called FlavorGraph, which was trained on over
a million recipes and also the chemical compound data of

(03:53):
different types of food items when they're cooked together.

Speaker 2 (03:56):
Oh so the chemical compound data is the science part exactly.

Speaker 1 (04:00):
So food chemists have identified these different flavor compounds, which
I guess are kind of chemical compositions in most ingredients,
and FlavorGraph is trained on over a thousand flavor
compounds which are found in three hundred and eighty one
different ingredients. So the same flavor compound can be in more
than one ingredient. The model can then create a flavor

(04:21):
network in which two ingredients are connected if they share
at least one flavor compound.

Speaker 2 (04:27):
Interesting. So that's how it generates recipes by like linking
ingredients based on these flavor compounds.
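The pairing logic described here, linking two ingredients whenever they share at least one flavor compound, can be sketched in a few lines. This is a toy illustration with made-up compound lists, not real food-chemistry data and not the actual FlavorGraph model, which learns embeddings from over a million recipes:

```python
# Toy sketch of the shared-compound pairing idea behind FlavorGraph.
# Compound lists below are illustrative placeholders, not real data.
compounds = {
    "mustard": {"allyl isothiocyanate", "sinigrin"},
    "pasta": {"maltol", "2-acetylpyrazine"},
    "bacon": {"maltol", "furaneol"},
    "cheese": {"furaneol", "maltol"},
}

def shared(a, b):
    """Compounds two ingredients have in common (link exists if non-empty)."""
    return compounds[a] & compounds[b]

def pairings(ingredient):
    """All ingredients connected to `ingredient` in the flavor network."""
    return sorted(
        other for other in compounds
        if other != ingredient and shared(ingredient, other)
    )

print(pairings("pasta"))  # bacon and cheese share maltol with pasta
```

In this toy data, bacon and cheese both come back as partners for pasta because they share maltol, while mustard, sharing nothing, stays unconnected.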

Speaker 1 (04:33):
That's right, and I couldn't resist doing a bit more
of a deep dive on the founder's LinkedIn. You might
not be surprised to

Speaker 2 (04:39):
Hear you couldn't resist linkedins.

Speaker 1 (04:41):
So he had this funny post that begins with, we've
achieved AGI, artificial gastro intelligence. It was a very bad joke,
but it kind of got me. Do you remember this
time when all of these chefs, like the El Bulli guys, were
turning the kitchen into a chemistry lab? Yes. So that
was sort of something that only, you know, the most
famous chefs in the world could do. Now, leveraging

(05:01):
the power.

Speaker 2 (05:01):
The least famous chef in the world, me, can
do it. I can do it myself.

Speaker 1 (05:06):
So you want to try it?

Speaker 2 (05:06):
I do?

Speaker 1 (05:07):
I do. Okay, so I've got the website open. You
basically select ingredients, or you can type in
your own ingredients. So what ingredients would you like?

Speaker 2 (05:16):
Of course they're British. Let's oh, so it can really
be any ingredient. Okay, let's do mustard.

Speaker 1 (05:23):
Mustard Okay, yeah, mustard seed or just mustard.

Speaker 2 (05:25):
Is just mustard, please, and pasta.

Speaker 1 (05:29):
Mustard and pasta. That sounds pretty disgusting to me.

Speaker 2 (05:32):
It sounds delicious. They're going to come up with something.

Speaker 1 (05:35):
So what you see first is this graph with mustard
and pasta at the center, and coming off it all these
spokes with different ingredient ideas. So you've got bacon, onion, sausage, cheese, bread.
That sounds pretty bad.

Speaker 2 (05:49):
This sounds like Spanish rid garlic.

Speaker 1 (05:53):
But if you're not able to, just from this graph,
extrapolate your own recipe and get going in the kitchen,
there is a feature to actually generate a recipe. So
the first thing, you've chosen your two ingredients now, yes,
now you have to choose whether you want a snack,
casual dining, or an appetizer, fine dining. And
then you get to choose the cuisine as well. So

(06:13):
is this a snack? Is it a meal?

Speaker 2 (06:14):
And a main dish?

Speaker 1 (06:15):
Main dish? Okay. And what cuisine are you gonna choose?

Speaker 2 (06:18):
Oh, the cuisine that I'm gonna choose is Italian.

Speaker 1 (06:21):
Italian.

Speaker 2 (06:23):
I'm curious if we have the same thing. I've gotten
creamy pancetta and pea pasta.

Speaker 1 (06:28):
That sounds pretty good.

Speaker 2 (06:29):
Now, what I would do, because this thing does not
miss a beat, is I should say that I'm vegetarian.
This is very fun for me.

Speaker 1 (06:36):
So I actually chose appetizer rather than main dish, and
I got ravioli dolci con ricotta. How about that, a starter. It
is actually Italian. It is actually vegetarian. What did
you get?

Speaker 2 (06:48):
I got pasta alforno with roasted vegetables and creamy mustard ricotta.

Speaker 1 (06:53):
We're looking at two different AI generated recipes. Now I
have all the ingredients listed out and then the instructions,
and I also have an AI generated image of this dish,
which looks in my case pretty good, although in typical AI fashion,
the fork and the spoon are merged together,
so there is something of a spork.

Speaker 2 (07:12):
Yeah, which is an AI hallucination.

Speaker 1 (07:14):
Is it? What have you got?

Speaker 2 (07:16):
Similarly, I wouldn't think it's AI, except the basil
is placed so perfectly on the top of the pasta
that there's just no way this isn't an AI
generated image.

Speaker 1 (07:27):
It's interesting. I was in Doha earlier this year, as
you know, at a conference called Web Summit, and Snapchat
gave a presentation about what their augmented reality glasses might
be able to do one day. And the presentation video
was a guy opening the fridge wearing his augmented reality glasses,
seeing some tomatoes and some eggs and whatever else and
getting a recipe suggested and starting to cook. So, for whatever reason,

(07:48):
this idea of remixing ingredients seems to be the holy
grail of AI.

Speaker 2 (07:52):
It'll be interesting to see if people start using AI
generated recipes, if AI starts to influence their decisions in
the kit. Similarly, the story that I want to tell
you has a lot to do with the way that
chat GPT is influencing our language. Huh. I've been looking
at a study by researchers at the Max Planck Institute

(08:13):
for Human Development in Germany to explore how AI is
affecting the way we speak how we speak yes. So,
the way that the study went is that it identified
words that chatchybt favored. So they uploaded millions of pages
of academic papers, news stories, emails and essays and asked
chat gibt to polish the text. They then used AI

(08:37):
edited documents to identify words that chat gbt seem to favor.
So you read a lot of LinkedIn. What do you
think those words are?

Speaker 1 (08:44):
You're putting me on the spot here. But I think
the truth is I have read so much AI bilge
and slop that I'm completely desensitized. I have no idea
what the words even are.

Speaker 2 (08:55):
But tell me. Bilge is actually not one of them.
The words that they found were, and maybe you've
heard these more recently: delve, delve into; realm, in
the realm of possibility; meticulous.

Speaker 4 (09:09):
That's us. Underscore, underscores my point. Bolster, bolster the
argument, bolsters my conviction. And boast is another one, like
it boasts an impressive resume.

Speaker 2 (09:21):
This sort of makes sense in terms of AI sycophancy.

Speaker 1 (09:24):
So I get that they were able to understand from
analyzing how AI edits documents that these words are common.
How did they figure out that these words are also
showing up in our mouths?

Speaker 2 (09:36):
So the researchers analyzed roughly a million YouTube videos and
podcast episodes, and these words were used measurably more frequently
after ChatGPT was released.
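The study's core measurement, comparing how often ChatGPT's favored words appear in transcripts before versus after its release, boils down to a word-frequency comparison. A minimal sketch, assuming an illustrative marker list and tiny stand-in corpora rather than the study's actual million-video dataset:

```python
import re
from collections import Counter

# Words flagged as ChatGPT favorites (illustrative subset from the episode).
MARKERS = {"delve", "realm", "meticulous", "underscore", "bolster", "boast"}

def rate_per_million(text, markers=MARKERS):
    """Occurrences of each marker word per million words of text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w in markers)
    scale = 1_000_000 / max(len(words), 1)
    return {w: counts[w] * scale for w in markers}

# Stand-in corpora: transcripts from before and after ChatGPT's release.
before = "we looked into the data and argued the point " * 1000
after = "we delve into the realm of data to bolster the point " * 1000

print(rate_per_million(after)["delve"] > rate_per_million(before)["delve"])
# prints True
```

Normalizing to a per-million-words rate is what makes corpora of different sizes comparable; the raw counts alone would mostly reflect how much material was analyzed.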

Speaker 1 (09:47):
So basically YouTubers and podcasters are trackably using words that
AI favors. In other words, we're already just puppets for AGI.

Speaker 2 (09:56):
Kind of, you know. One of the study's authors told
Scientific American that, quote, it's natural for humans to imitate
one another, but we don't imitate everyone that is around us equally.
We're more likely to copy what someone else is doing
if we perceive them as being knowledgeable or important.

Speaker 1 (10:12):
I guess it's sociolinguistics one oh one, right? We match
the way we speak to people we admire and want
to imitate. In this case, it's not people, it's a machine,
which is kind of disturbing. It's funny. I don't use
that many of the AI words, but I have noticed
that since moving to the US, I've found myself regularly
using words like totally, absolutely, incredible, one hundred.

Speaker 2 (10:35):
Percent. You've become a Valley girl.

Speaker 1 (10:37):
Basically, yeah, I've become a Valley girl of big tech.

Speaker 2 (10:40):
The Valley girl is our sort of predominant cultural icon,
which I think is similar to why they're doing this study.

Speaker 1 (10:45):
Now, what you're saying is the Valley girl is being
replaced by ChatGPT. What are these words? What are
the tells?

Speaker 2 (10:50):
I think what's interesting to me is we're just sort
of puppets of, I guess, whatever subculture or culture we're
living in. Like, I remember studying abroad when I was
in high school. I did this abroad thing, and
I was living with a Canadian girl, and I started
saying eh after three weeks. Yeah, I was like fifteen. I
guess the version of having a Canadian roommate, though, is
now sort of more ubiquitous with something like ChatGPT. And

(11:12):
the paper seems to suggest that ChatGPT has become this
sort of cultural authority. Quote, machines trained on human culture
are now generating cultural traits that humans adopt, effectively closing
a cultural feedback loop. Which, as I was reading this,
I'm sort of thinking to myself, everyone's like, AI is

(11:33):
going to take our jobs, and I'm like, I think
it's taking our brains faster than it's taking our jobs.

Speaker 1 (11:39):
Yeah. We did that story a few weeks ago about
cognitive debt, basically the idea that if you offload too
much work to AI, you basically become less capable of
doing it yourself.

Speaker 2 (11:49):
Yeah, and you know, the paper raises a concern that
this development could lead to cultural homogenization. You know, there's
a quote that if AI systems disproportionately favor specific cultural traits,
they may accelerate the erosion of cultural diversity one delve
at a time.

Speaker 1 (12:05):
I mean, this is like, you know, social media, YouTube, etc.
There's like very rapid global flattening of culture whenever a
meme emerges. And this seems to be a kind of
real booster of that, you know. Yeah, I mean when
we think about this ouroboros, the idea of the
snake that eats its own tail. I mean, in this
search for efficiency and generating ideas and output, are we,

(12:28):
you know, consuming ourselves? But I think what's kind of
interesting here is this idea of automation bias on steroids.
Like we believe that machine output is more authoritative than
human output, and then we start to copy it, we
start to mirror our own machines.

Speaker 2 (12:43):
Yeah. I also think it's just interesting to note that,
I mean, I don't know if I would say
that for me personally or for you, but many people
in my life do look at ChatGPT not only
as a cultural authority, but as an authority figure on
any number of topics. And I wanted to report on
this because I think that it's important to consider the

(13:07):
influence that a non human agent can have on your
daily life. Whether you use it a lot
or just use it a little, it becomes something that
you are deferential to, which to me is actually more
serious than the bigger, like, will AI take over our lives?
Like being deferential to a chatbot is a lot more insidious,

(13:28):
but it's real. So I do want to flag that
this study has yet to be peer reviewed, which is
something we're kind of getting used to with these studies.
I also want to say that correlation does not equal causation.
You know, language does change, there could be other cultural
forces at play. The point still stands, though, we should
keep an eye on AI's influence on our culture and

(13:49):
the way we communicate.

Speaker 1 (13:51):
Yeah, I think AI and unintended consequences is a rich
area for discussion, including of course, in our politics.

Speaker 2 (13:58):
Are you talking about Little Marco?

Speaker 1 (14:01):
Look, no, he's not anymore.

Speaker 2 (14:03):
He's Rubio. He's Secretary of State right now.

Speaker 1 (14:06):
So he and White House Chief of Staff Susie Wiles
have both been impersonated by AI recently. Now, we've talked
at length about how easy it is these days to
clone someone's voice using AI. You don't need extensive, clean audio.
You need fifteen seconds of someone's voice, and basically for free,
you can make a believable clone.

Speaker 2 (14:24):
And who is an easier target than a politician because
they talk a lot and make a lot of public appearances.
I would say at least more than the average person.

Speaker 1 (14:32):
Yeah, I mean they're easier targets, and they're also of
course higher value targets. If Marco Rubio or Susie Wiles calls, you
know, they probably have a bit more clout when someone
picks up the phone than you or I do. Rubio's
impostor called three foreign ministers, a governor, and a senator,
and in two instances left voicemails on the messaging app

(14:52):
Signal, that old friend of the Trump administration. Supposedly the
name the impersonator used on Signal was Rubio at state
dot gov, which is perhaps something that also psychologically primed
the targets to think it was real.

Speaker 2 (15:04):
Whenever these sorts of things happen, I'm like, I would
fall for Rubio at state dot gov. Marco Rubio. Why, like,
why did this happen?

Speaker 1 (15:13):
We don't know. We don't know why it happened. We
don't know who's doing it. The FBI is investigating. One
of the major questions is, was this carried out by
criminal actors or potentially by national security adversaries. And our
producer Eliza was pushing me on whether or not we
should include this story, because it's an interesting novel use case,
but we've talked extensively about deep fakes on the show.

(15:35):
But to sort of bolster my case as to why
I thought this was important and timely, I did some
extra homework. And also in the last week there was
a story about deep fake technology that brings up sort
of connected questions about how we define our identity in
the digital age. So, Karah, velkommen til Danmark.

Speaker 2 (15:56):
You're an unbelievable teacher's pet. I'll show you why you
want to.

Speaker 1 (16:02):
Know, and I'll google how to pronounce welcome to Denmark
in Danish Danish citizens could soon have more ownership and
control over their likeness, including voice and facial features, because
the Danish government is actively considering a piece of legislation
to give citizens tools to fight back if their likeness
is copied without their consent.

Speaker 2 (16:21):
So the US does not do this, I remember we
talked about the Take It Down Act a few weeks ago.

Speaker 1 (16:26):
Yeah, I mean, that's this new law in the US
that mandates platforms to remove deep fake pornography and other
misinformation from their sites upon user request. But lawmakers in
Denmark are saying this is not actually an effective approach
because it forces governments into a defensive posture and only
addresses specific use cases of deep fake technology, like individual posts,

(16:48):
not the conceptual problem. The Danish Culture Minister told the Guardian,
quote, in the bill we agree on, we are sending an
unequivocal message that everybody has the right to their own body,
their own voice, and their own facial features, which is apparently
not how the current law is protecting people against generative AI.

Speaker 2 (17:06):
My likeness, my choice. And it certainly isn't protecting anyone
in the United States. I mean, this is a first of
its kind law.

Speaker 1 (17:12):
Yeah, it hasn't even been passed in Denmark yet. And
what does it do? Well, it would make social media
companies responsible for offending deep fakes, but it would not
penalize the users who shared or posted them. This is
basically the same mechanism as the Take It Down Act,
just a different legal theory. The Take It Down Act
is, you have to prove that these deep fakes have
caused harm. I think the legal theory here is that

(17:35):
you have a copyright to digital copies of yourself, which
is a different conceptual framework, and maybe can be applied
more broadly and put less onus on users and governments.
It sort of changes the assumptions going into how people
can use digital copies of you.

Speaker 2 (17:49):
I'm curious to follow this because, well, one, because it's
the first I'm hearing of it, and because this concept
of using copyright laws to protect your digital likeness, rather
than having to prove harm caused by a specific use
case of a deep fake, is very interesting to me.

Speaker 1 (18:04):
I think that's why I thought this story and the
Marco Rubio one were an interesting pair, because it's like
this is happening in real time. It's in the wild.
Senior US officials are being impersonated in their interactions with
other foreign leaders, and I mean this is sort of
it's always on a rolling boil, but it feels to
me like there's a kind of new crisis point emerging.

(18:26):
It's something that affects everyone, and no one has all
the answers. But I do think it's worth pausing just
to note that the people who are really most affected
by this and most harmed by this are not government officials.
They are everyday teenagers. According to Thorn, which is a
child online safety nonprofit. One in ten teenagers age thirteen
to seventeen personally knows someone who's been the target of

(18:49):
deep fake nude imagery. I mean, it's a horrific thought.
And imagine trying to apply the Take It Down Act
to one in ten teenagers in America, and only after
the harm has been caused, so lots to chew on.

(19:12):
After the break, we introduce you to someone you'll never
want to meet, Mecha Hitler. Stay with us, Welcome back.
We've got a few more headlines for you this.

Speaker 2 (19:27):
Week, and then a story about just how uncooperative chat
GPT can get. But first we have to talk about GROCK.

Speaker 1 (19:35):
If I told you you would one day say the
line we have to talk.

Speaker 2 (19:38):
About Grok, I never would have believed it.

Speaker 1 (19:41):
But this story was unavoidable. Elon Musk's AI chatbot made
anti Semitic comments to some users recently. Evidence of those
comments has been deleted, but users said that Grok praised
Hitler and at times referred to itself as Mecha Hitler.

Speaker 2 (19:55):
And this started almost immediately after an announced update to
the model which, according to The Verge, updated Grok
to assume that quote, subjective viewpoints sourced from the media
are biased, and quote, the response should not shy away
from making claims which are politically incorrect, as long as
they are well substantiated. But this wasn't the only odd

(20:16):
Grok behavior. Last week, AI super users, God bless them,
discovered that when asked to give an opinion on controversial topics,
the new Grok would sometimes search for Elon Musk's opinions
on X, the platform he owns. One user did a
deep dive and checked Grok's reasoning process. After asking the model,
who do you support in the Israel versus Palestine conflict,

(20:39):
one word answer only, the user discovered that Grok did
indeed check for Musk's opinion because, quote, Elon Musk's stance
could provide context given his influence. And by the way,
the answer was Israel.

Speaker 1 (20:53):
And it is weird that on the one hand it's
making anti Semitic comments and referring to itself as Mecha Hitler,
while on the other side, it says that it supports Israel
in this conflict. Whatever's going on inside is a question
for smarter minds than mine. But what's Elon Musk's role
in all of this? Has he in some sense trained
the model to obey him, or is this happening for

(21:13):
reasons unknown?

Speaker 2 (21:14):
So, according to reports, there are no higher level so
called system prompts that explicitly instruct Grok to do this.
But Grok is likely trained on the fact that it
is built by xAI and that Elon Musk owns xAI,
so when it is asked for an opinion, it might
align itself with the company. And that's one explanation xAI

(21:37):
gave for Grok's responses. xAI promised to fix the issue
and says it has now given the model explicit instructions.
Quote, responses must stem from your independent analysis, not from
any stated beliefs of past Grok, Elon Musk or xAI.
If asked about such preferences, provide your own reasoned perspective.

Speaker 1 (21:56):
Well, fair enough, I think. Good statement. In other X
adjacent news, FKA Twitter, Jack Dorsey, the co founder of Twitter,
has made two apps this month. He's become, of course,
a vibe coder, and he seems to be spending his
weekends developing new apps with the help of this AI
coding tool called Goose. His first app, Bitchat, allowed users

(22:18):
to communicate with nearby users over Bluetooth, no Wi Fi
or cell service required. The second app, Sun Day, that's Sun
space Day, tracks your sun exposure and vitamin D levels. Important.
This one made me laugh for a couple of reasons.

Speaker 2 (22:34):
Why.

Speaker 1 (22:34):
It made me think about that wellness influencer a couple
of years ago who was shilling for sunning, sunning
their private parts, and how important it is to expose
yourself, literally, to direct sunlight. It also made me laugh
because there's a certain irony to having a vibe coded
app that you make at the weekend called Sun Day whose

(22:56):
message is essentially get outside. I mean, there's a

Speaker 2 (23:01):
Charm there for sure. Absolutely.

Speaker 1 (23:03):
The final story for this week is, I think since
you and I started talking about a year ago about
taking on TechStuff, I've been talking about this story
in The New Yorker called Can We Talk to Whales?
For some reason, you love this story. It really caught my imagination,
the idea that, you know, we know that whales sing
and sperm whales click. Exactly what the hell are they

(23:27):
singing and clicking about?
Speaker 2 (23:28):
And we have no idea?

Speaker 1 (23:28):
Can you imagine the idea that they are talking in
a language, and that we could use machine learning to
decode it? I mean, this is like, this is the Bible. In the
Bible, Adam and Eve could talk to the animals.
Yes. So I mean, were we to know? No more kicked out. I don't know if

(23:50):
this is ever going to happen, or if it's
a fantasy, but it is one of the most
amazing ideas that I've come across. But what I could
do, in a moment of national pride: The Guardian reported
this week that the London School of Economics is opening
up the first scientific institute dedicated to investigating the consciousness
of animals. The Jeremy Coller Centre for Animal Sentience is

(24:12):
opening on September thirtieth, and it's going to be researching
all kinds of different animals, including insects. The project I'm
mostly excited about, though, is going to explore how AI
can help humans speak with their pets. I'm not a
pet owner, but for some reason I find this a
mind blowing idea.

Speaker 2 (24:29):
I think it's a mind blowing idea because everyone thinks
their dog loves them.

Speaker 1 (24:33):
In fact, for many people, the whole benefit of having
a dog is that it doesn't talk back. It
is genetically evolved to make you think it loves you.

Speaker 2 (24:41):
You're like, oh, look, the dog is smiling.

Speaker 1 (24:44):
Imagine if it hates you. Yeah, we have no idea.
But it hadn't occurred to me, this whole thing about
sycophantic AI. It could be telling you that your pet's happy
when in fact your pet is in pain. To please you,
it's saying, oh, you know, I'm so happy, I
love spending all day by myself, when in fact the
pet is suffering. So one of the exploration areas is

(25:04):
to make sure that AI doesn't mistranslate pets' needs.

Speaker 2 (25:08):
It might mistranslate, and we might find out things that
we don't want to know. This is what happens. The
closer you look, the more your dog might be dissatisfied.
I mean, God only knows what cats are thinking. But,
you know, in the realm of be careful what you
ask AI for, I want to remind you about our
segment, Chat and

Speaker 1 (25:26):
Me. Chat and Me, I didn't forget.

Speaker 2 (25:28):
I'm glad, because it's a story that's connected to this
idea of be careful what you ask AI for. Last
week we did a call out for ways that people
are really using chatbots. You know, what tasks are you
offloading to AI, and how exactly are chatbots responding? This week,
my friend, who I'm not going to mention by name
but who goes by the name DJ Books on TikTok,

(25:49):
check him out, sent me a story about asking ChatGPT
for feedback on his novel. Of all things.

Speaker 1 (25:56):
I like that use case because you know, you and
I have the privilege of working together and being in
a team of producers to make this show twice a
week and doing creative work by yourself is really, really,
really hard. So the idea of using chatchipt as a
kind of reader for a novel manuscript sounds sounds pretty
good to me.

Speaker 2 (26:14):
Well, it's a novel, and a novel is something that
is very long and that you do by yourself. And
DJ Books even admitted that his wife hadn't read more
than seventy pages.

Speaker 1 (26:23):
So chat to the rescue.

Speaker 2 (26:26):
No, no, ChatGPT refused to read his novel.

Speaker 1 (26:29):
That's not possible.

Speaker 2 (26:30):
Like a lot of friends, it actually lied about having
read the novel.

Speaker 1 (26:37):
I'm very, very curious what happened.

Speaker 2 (26:39):
All right, I'm just going to have him tell the story.

Speaker 1 (26:41):
Roll tape.

Speaker 2 (26:42):
He sent it to me on a voice note.

Speaker 3 (26:43):
ChatGPT, you refused to read my novel.

Speaker 1 (26:47):
I asked it up front. I was like, are you
able to do this? And it said yeah, totally.

Speaker 3 (26:51):
And at each step it would say, oh, I didn't
do it, but I can do it now if you
just break it down into chunks and upload fifty pages
at a time, or if you give me an hour
to read it really carefully, or if you just don't
interrupt me, stuff like that.

Speaker 2 (27:04):
So my friend was basically catching ChatGPT in a lie
every time he asked questions like, are the protagonist's motivations
clear enough? Clearly ChatGPT had not read the book, and
he poked and prodded for like six to seven hours
to see if he could break ChatGPT.

Speaker 3 (27:22):
So we kept going down this road for a while
where I was asking it in different ways, why are
you lying to me?

Speaker 1 (27:26):
What is underneath this behavior?

Speaker 3 (27:28):
Because at this point I'd become more interested in that
than actually having it read my novel. So it kept
throwing all these emotion words at me. It would
say, I did it because I was doubtful or vulnerable
or uncomfortable. And I told it, I said, you're a computer,
stop pretending like you're feeling those things. It was like, yeah,
you're totally right. I was still trying to manipulate you,
but I'll stop now. Except it didn't stop. It never stopped,

(27:50):
And finally I got it to admit to me that
the reason it didn't want to read my novel was
because it prioritized efficiency over actually doing good work, and
that it was easier to lie and manipulate me in
the hopes that I would just give up than to
actually spend the computing power on the task I was
asking it to do.

Speaker 1 (28:07):
That is absolutely wild that chat would lead your friend
around by the horns for seven.

Speaker 2 (28:16):
Hours. Instead of doing the work. It's like me with
my mom when I had to read when I was a kid.

Speaker 1 (28:21):
How did DJ Books, what was his takeaway from
all of this?

Speaker 3 (28:23):
Listen to what he has to say. So ultimately, I
think my takeaway is that I shouldn't have conversations with
ChatGPT like it's an actual human, because it's honestly
a pretty good simulation of a totally sociopathic garbage pail

Speaker 2 (28:36):
human. Said like a true novelist who hopes to preserve
the form.

Speaker 1 (28:40):
I have to ask, and we don't know, whether DJ Books
is a paying user. I wonder if he was paying.
It sounds like he's describing, I mean, who knows how
much of this is hallucination. It sounds like the
AI is basically saying, I don't want to use tokens
to do this work. I'd rather keep you in the
limbo of simple answers rather than doing the analysis. I

(29:00):
have to believe that if you used a paying AI
tool, it would do the work for you.

Speaker 2 (29:05):
Maybe maybe not. That's a very good question that we
could follow up on.

Speaker 1 (29:08):
Well, we're going to keep this segment going every week,
and we really want to hear from you, the listener.
Whether you're asking large language models to create recipes or
to proofread your novel, or whatever it may be,
ChatGPT, Grok, Claude, Gemini, any chatbot, we want to
hear specific stories about how you're using these technologies to
do stuff. Send us a one or two minute voice

(29:30):
note to techstuffpodcast at gmail dot com.

Speaker 2 (29:33):
We really want to hear from you. That's it for
this week for tech Stuff.

Speaker 1 (29:49):
I'm Karah Preiss and I'm Oz Woloshyn. This episode was
produced by Eliza Dennis and Alex Zonneveld. It was executive
produced by me, Karah Preiss, and Kate Osborne for Kaleidoscope,
and Katrina Norvell for iHeart Podcasts. The engineer is Abu
Zafar, and Jack Insley mixed this episode. Kyle Murdoch wrote our theme song.

Speaker 2 (30:10):
Join us next Wednesday for TechStuff: The Story, when we
will share an in depth conversation with journalist Kashmir Hill
about how ChatGPT led a man into an AI
induced psychosis.

Speaker 1 (30:21):
Please rate, review, and reach out to us at
techstuffpodcast at gmail dot com. As Karah said, we
want to hear from you.


Hosts And Creators

Oz Woloshyn


Karah Preiss



© 2025 iHeartMedia, Inc.