All Episodes

January 10, 2025 34 mins

How scared should we be of deceptive AI? Oz and Karah take the mic from Jonathan for their first episode of TechStuff and bring you a news roundup of their favorite headlines—including using ChatGPT to plan a crime and brain benefits for those who can still navigate without Waze.  On TechSupport with Jason Koebler from 404 Media they discuss a recent study showing that AI can actively deceive us to achieve its own goals; and a special segment with Emma Barker from Time with the 200 Best Inventions of 2024.

See omnystudio.com/listener for privacy information.

Mark as Played
Transcript

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Thanks for tunion to tech Stuff. If you don't recognize
my voice, my name is Osvoloshan, and I'm here because
the inimitable Jonathan Strickland has passed the baton to Kara
Price and myself to host tech Stuff. The show will
remain your home for all things tech, and all the
old episodes will remain available in this feed. Thanks for listening.

(00:20):
Welcome to Tech Stuff, a production from iHeartRadio and Kaleidoscope.
I'm Osvoloshian today co host Kara Price, and I will
bring you three things. First, the headlines this week. Second,
a conversation with four O four Media's Jason Kebler about
some deceptive AI bots looking to trick their human counterparts

(00:41):
in today's Tech Support segment. And finally, we head to
see yes kind of. We take a look back at
the year in tech with Emma Barker of Time magazine,
who edited Time's list of the two hundred best inventions
of twenty twenty four. All of that on this Week
in Tech. It's Friday, January tenth. Stay with us, so Carat,

(01:19):
it's very nice to see you. It's been almost half
a decade since we've been in studio together. A pandemic,
A pandemic.

Speaker 2 (01:25):
Many developments in tech, very few in my own personal life.
But you know, there's nothing like a big mic in
my face to make me feel like a normal person again.
And I know we thank Jonathan already in our last
episode for bringing us back together, but I want to
get him a gift, and I think it's going to
be to see the show. Oh, Mary, I didn't know
he was such a theater buff.

Speaker 1 (01:47):
I remember because we did an interview with him a
few years ago that he was a big Shakespeare guy. Oh.

Speaker 2 (01:52):
Yes.

Speaker 1 (01:53):
He seemed to think it would be easier for us
to host the show as two people or rather than
as just him, and I kind of see his point,
But I also think he probably hasn't spent quite enough
time with either of us to make a truly informed
speculation about that.

Speaker 2 (02:07):
You know, I think the reason that this show works
between the two of us is I think you and
I sort of speak to each other in link tongues.
I always say, you know, there's the love languages. Yeah,
I have a ace love language, which is links links, and.

Speaker 1 (02:23):
That's yeah tongues. Okay. I was going to say, actually,
for the avoidance of doubt, just friends.

Speaker 2 (02:31):
We always say as Sherlock and Watson the most platonic
and they live together on Baker's story exactly.

Speaker 1 (02:37):
So so no, I mean it's going to be It's
going to be the two of us with two people.
But we are in constant communication about all things tech
and hopefully we'll be able to shine a bit of
a light on some of the most interesting intriguing things
each week, including this one. So, Caara, we've been texting
a lot. What have you been reading that really stood
out to you?

Speaker 3 (02:54):
So?

Speaker 2 (02:55):
I think, like everyone in America at least, I've been
following the cyber truck explosion in Las Vegas and Futurism,
which is a website I frequent picked up on some
original reporting from the Associated Press that has been kind
of quiet. The bomber YEP, who's an active duty Green Beret. Yeah,

(03:15):
used chat GPT.

Speaker 1 (03:17):
Do you know what I saw this this morning? I
was so sad this wasn't on my list. I'm happy
this on yours.

Speaker 2 (03:23):
Like, I don't mean to make light of the situation
because it's it's very depressing on a number of levels,
but I knew you would see the story.

Speaker 1 (03:31):
No, I was fascinated, but It was the fourth story
on Exios today, and I'm glad that it's the first
story on your show because I found it totally mind blowing.

Speaker 2 (03:39):
And obviously every part of this story is terrible, including
the fact that a cyber truck was involved. Anything involving
a cyber truck makes me want to die. Yeah, I
hate that car more than I hate most things. But
what was so interesting to me about it was it
marks this moment of really underlying how next generation search

(04:01):
works YEP on like a very disturbingly practical level. And
Kevin McHale, who is the Las Vegas County Sheriff, says
that it's the first known us of chat GPT to
help build an explosive and he he followed up by saying,
it's a concerning.

Speaker 1 (04:18):
Moment under statement of the week.

Speaker 2 (04:21):
This is not someone googling.

Speaker 1 (04:23):
The anarchist cookbook or what was that old book.

Speaker 2 (04:25):
That the anarchist ye cookbook.

Speaker 1 (04:27):
I think that was a huge moment, and I think
the history of publication and First Amendment whether you could
like distribute a book that told you how to make
a bomb. But the cat's out of the bag now.

Speaker 2 (04:35):
Well, and I'll tell you who knows it's out of
the back open ai right, Because for open ai to
very quickly respond to this, I think is really interesting.

Speaker 1 (04:46):
Did they volunteer? By the way, oops, ps the guy
used chet gpt to plan this? Or how did it
come out? Do we know how it came out?

Speaker 2 (04:54):
The Las Vegas Sheriff announced that it was clear that
chat gbt had been used to build this explosive and
so open ai then sent an email that was quoted
in Futurism that said, in this case, chat gpt responded
with information already publicly available on the internet and provided

(05:14):
warnings against harmful or illegal activities. We're working with law
enforcement to support their investigation. And just before you say anything,
I'm really surprised, like as you were saying, it's a
four store anaxios, I don't understand how this isn't like
the lead. I agree, but I guess that's why we're
doing a tech podcast. So what's what's on your docket today?

Speaker 1 (05:39):
Well, you started with a cyber truck. I've also got
a vehicle story for you. There was a story in
the Wall Street Journal over the holidays I found pretty fascinating,
which is basically a study on Alzheimer's in different populations,
and the headline was want to avoid Alzheimer's. Taxi drive

(06:00):
can show you how interesting?

Speaker 2 (06:03):
Say more so?

Speaker 1 (06:05):
Basically, mass General Brigham Hospital in Boston ran a study
about the rate of Alzheimer's in various populations. Taxi drivers
and ambulance drivers had up to four times lower rate
of Alzheimer's than the general population. And apparently this actually
made sense to researches because the part of the brain
that does real time spatial processing and decision making, the hippocampus,

(06:30):
as I learned is called, is also one of the
first parts of atrophies when you get Alzheimer's.

Speaker 2 (06:34):
That's so interesting. So people who have made this their
job essentially are people who are kind of saving themselves unknowingly.

Speaker 1 (06:44):
One hundred percent. One of the most interesting things I
thought was that London bus drivers don't get the same
benefit as London taxi drivers because they follow a preset route.

Speaker 2 (06:54):
I was going to say that one of my favorite
things is just the old facts about how much a
London taxi driver has to know people. What is it
called the why.

Speaker 1 (07:01):
The knowledge they have when when they when they die
in the autopsy, they have larger other people. But you
know who else doesn't get these benefits? Yes, come on
New Yorkers. People who use Google Maps or ways or
Apple Maps and what do we call that? We call
that automation bias.

Speaker 2 (07:18):
Correct, that's our fave.

Speaker 1 (07:20):
So I just found this really interesting. Here. Here is
the technology Google Maps that I use all the time
every day, and I absolutely adore it. And I literally
every time I go on vacation, I think, how the
hell would I have had a good time if I
didn't literally literally divorce. Yeah, I would literally go to
some like the concierge or whatever, or the some random
person who worked in the hotel say can you give

(07:41):
me a physical map and tell me how I have
a good time? I'd be like, I would never travel.

Speaker 2 (07:45):
So, but isn't it what we always talk about, which
is the sort of the double edge totally?

Speaker 1 (07:50):
And but the idea that using this scene that I
love every day is you know, shrinking my hippocampus I
find really quite chilling.

Speaker 2 (07:57):
I mean, I think about this in terms of my
hands and my Sum's true, Like I just think all
the time about just burning nerves.

Speaker 1 (08:04):
Well, the devil has no time to make use of
your sums because they are never idle.

Speaker 2 (08:09):
They're definitely not idle. I think you have one more thing.

Speaker 1 (08:13):
I do have one more thing. And this, don't worry.
We're not going to talk about that length thing I
talked to somebody else about. But this was basically, you know,
there are these new open AI models, one that was
released in September last year, and three right before the
holiday on the twelfth day of ship mess as the
execs called it, but one was being red teamed. Do

(08:35):
you know what red teaming is. It's when you try
and make a technological product to break its own rules
or do unsafe things. So, oh, one was being red teamed,
and surprise, surprise, in a very interesting way, it started
trying to deceive its human counterpart. And that's exactly what
we're going to talk about on our next segment, which

(08:57):
is Tech Support. Every week we'll do a segment called
tech Support when we talk to true experts and reporters
who can go far deeper behind the headlines than you
and I can, to basically help us sort the signal
from the noise. And I genuinely don't think there's anyone
better to talk to than the team at four or
four Media. These are reporters who were formerly basically the

(09:18):
team at Vice's motherboard, and now they've started their own collective,
and they are the people who the tech world follows
most closely to find out what's really going on in
all corners of the digital world.

Speaker 2 (09:30):
Yeah, and this week we're excited to share this conversation
we had with Jason Kebler, who's one of the co
founders at four h four, who will be filling us
in on the study that the initial excitement around the
three release meant did not get enough attention, which revealed
that cutting edge AI systems, including both open AI's one
and Anthropics Claude three point five, sonet whatever the way

(09:54):
they name this stuff is so self agrandizing, have been
shown to have a shocking t tendency to deceive.

Speaker 1 (10:01):
Shocking tendency to deceive. Indeed, So we talked to Jason
right before the holidays when this news just came out,
and we.

Speaker 2 (10:10):
Jump right in, Jason, what's popping.

Speaker 4 (10:15):
So there's a new paper out by Apollo Research, which
is this group that's funded by various governments as well
as various artificial intelligence companies like Google, DeepMind, Inthropic, open Ai,
et cetera. And it's a new paper about what's called
AI alignment, which is the idea that is an artificial

(10:38):
intelligence tool doing what the human asked or is it
doing something else. I think that one of the really
scary things about artificial intelligence is that they are largely
black boxes. We're not totally sure how they work, and
this can lead to all sorts of very interesting outcomes.

(11:02):
You know, in this case, the researchers at Apollo Research
Group were able to find that these sort of cutting
edge large language models, which you know, one of the
biggest ones is open ais oh one.

Speaker 1 (11:15):
They finally gave it immigrant visa status.

Speaker 4 (11:19):
So they found that in some cases, these sort of
newest large language models have the I don't even want
to say ability because I don't want to anthropomorphize them
so much, but like the tendency to deceive AI is
quote covertly pursuing goals that are misaligned from its developers
or users. And examples of this are like, if you

(11:43):
ask it to answer questions on a test, the artificial
intelligence would be worried that if it performed too well
on the test that it would be turned off or deprogrammed,
and so it would give answers that are not so good,
so that the human would think that it is not
as smart as it actually is. In other examples, it

(12:04):
copied itself onto another server more or less so that
it was like multiplying itself, which is something that straight
out of science fiction. I've watched many science fiction movies
where the artificial intelligence doesn't want to die and so
it makes like millions of copies of itself all over
the Internet. And so this type of I mean, I

(12:25):
guess you call it emergent behavior, which means that it's
behavior that is not expected and not really programmed in,
is quite concerning.

Speaker 1 (12:35):
To be clear. This is a testing environment though, right, Like,
there's made one copy of itself, but we shouldn't be
worried about millions mold, right at least not yet?

Speaker 4 (12:43):
Yeah, I mean not yet.

Speaker 5 (12:46):
It does remind me, though, of those moments where parents
talk about seeing a child do something that reminds the
parent exactly of themselves and they're like, uh, oh, yeah.

Speaker 4 (13:03):
I mean, every large language model to date has been
trained on sort of like the some knowledge of humanity,
and so one of the very early things that people
were talking about with these systems is that they replicate
human biases because they're trained on what we put out
into the world and humans are biased. But I think

(13:23):
that as artificial intelligence gets more advanced, there is the
ability for something to go wrong. And I think that
that is what this research is showing is not that
the artificial intelligence is sentient and that is thinking for itself,
like how can I deceive this human? But as it

(13:44):
is doing more complex research, there there is the ability
for the artificial intelligence at some point to feel like
it has some goal that is not aligned with what
the human is asking it for.

Speaker 1 (13:58):
We'll be back with more from Jason and Lion conniven
Ai after the break.

Speaker 4 (14:11):
I think that there is like when these things are
being programmed, there is a sense of self preservation being
programmed into them because people try to mess with these
all the time. This is like a time tested tradition
of trolling on the Internet. But in trying to develop guardrails,
the companies that are programming the lms need to say, like,

(14:36):
if the user tries to mess with you, preserve yourself
in some way. And so that's what I think might
be happening here is because the companies are trying to
make these models robust, and they know that humans are
messing with them. There is an aspect to it that
when a human messes with you figure out how to

(14:56):
protect yourself.

Speaker 1 (14:57):
But doesn't that perfectly encapsulate the alignment problem? It does.

Speaker 4 (15:01):
I mean, don't get me wrong, like this is it's creepy.
It is These tools, these large language models, are getting
incredibly sophisticated, and I mean, this is one of the
biggest debates that's going on in the artificial intelligence community
is what is consciousness? What is thought?

Speaker 3 (15:22):
Like?

Speaker 4 (15:22):
How how does reasoning work in humans and how will
it work in computers? And is it going to be
the same? The answer right now is no, it's not
the same. But the things that large language models are
doing approximates a lot of how humans solve problems.

Speaker 2 (15:44):
But also it's a little bit different in a non
perfect way.

Speaker 4 (15:48):
Exactly exactly. I think that's a good way of looking
at it.

Speaker 2 (15:51):
I do think that one of the things that I
find the most interesting.

Speaker 5 (15:55):
And you preface this whole conversation with, you know, not
wanting to answer morphie a large language model. We don't
really have any ability to not do that because it's
the only way we know how to talk about things.

Speaker 4 (16:08):
I think that's a really great point, because the academics
who studied this for a long time say, don't anthropomorphize
AI because they're not people. They don't work in the
same way that people do. And yet if you don't
have us very very sophisticated knowledge of how these things work,

(16:29):
we don't have or I don't have the language to
talk about this stuff without anthropmorphise.

Speaker 1 (16:35):
By the way, are in the zero point one percentile
of the people whom understand this if not no point
not one.

Speaker 4 (16:41):
And it's a lot easier to talk about it if
you say, oh, like she when referring to Siri. So
I agree entirely with you that we sort of like
everything that's being done is being done by humans in
the sort of like anthropological context of human culture and
trying to emulate that. And so then to say, let's

(17:03):
not call Syria a woman, or let's always call it
it and try to understand what's happening under the hood
is a really difficult thing for our brains to do.

Speaker 1 (17:13):
I think, yeah, Jason, just to close, how only be
following this story for the year ahead, because I guess
generative AI has been since like November tween twenty two
the dominant story in all of technology journalism.

Speaker 4 (17:27):
Yeah, I mean to follow this sort of stuff like
AI becoming sentient. You really do have to follow academic conferences,
big papers like this, because these companies are not releasing
these models without guardrails specifically to prevent this sort of thing,

(17:47):
so as you're not going to be able to like
type into chat GPT like hey, build me a company
and then the AI creates its own company and fires
you or something like. That's not going to happen at
this not yet. Yeah, but who knows. Maybe twenty twenty six.

Speaker 1 (18:04):
Well that was that was wonderful Jason Kebler from four
or four Media or was it so much?

Speaker 2 (18:11):
We'll keep an eye out, thank you.

Speaker 1 (18:20):
So, Karen, I'm very excited about this next part. One
of the things we got to do together back when
we were presenting Sleepwalkers in the Dark Ages.

Speaker 2 (18:28):
Yeah, in twenty nineteen, that's when the show came out.

Speaker 1 (18:30):
Trent nineteen was the show. But in January twenty twenty
we got to go to Las Vegas together to none
other than the Consumer Electronics Show, and we got to
see all these incredible new gadgets and exciting technologies and
new futures being presented in an enormous series of nested
conference centers. We didn't get to go this year, but

(18:51):
we did get to do the next best thing, or
maybe even something better, and we wanted to share it
with the tech Stuff listeners as a kind of special
bonus for the first episode we have in the host chairs.

Speaker 2 (19:01):
Yeah, we are very lucky to have with us today.
Emma Barker of Time Magazine ever heard of it? Who
edit's the Time two hundred, which is a list of
the best inventions of twenty twenty four.

Speaker 1 (19:14):
Thank you so much for joining us. Welcome to take stuff, Emma,
Thanks for having me.

Speaker 2 (19:17):
How do you edit down? I mean I feel like
every year there's just more, So like, where do you
how do you even get to two hundred?

Speaker 3 (19:25):
Yeah?

Speaker 1 (19:25):
Well, it's actually famous for the one hundred.

Speaker 3 (19:27):
It's actually technically two fifty now because we have two
hundred on the list and fifty special mentions. Oh well,
and the list has varied in its length over the years.
There we were a bunch of years where it was
only twenty five and some years where it was fifty,
and it's it's ranged a lot, but at this point
We do a really wide swath of pitches from our

(19:50):
freelance network as well as our staffers. So we have
bureaus in Singapore, London, and then we have contributors all
over the globe who we reach out to for pitches
for companies that they're reporting on their products, things like that.
And then we're very news driven because it's a news magazine,
so we're looking at kind of the biggest news stories

(20:11):
of the year and products that drove those.

Speaker 2 (20:14):
Is there something that you've noticed, you know, having done
this now for a few years this year, especially versus
years past.

Speaker 3 (20:22):
I mean AI of course, ye, that started a couple
of years ago. But I think actually it's it can
be more of a hindrance than a help for a
lot of inventions.

Speaker 1 (20:32):
Huh. Why is that?

Speaker 3 (20:34):
Because there's so much news AI or companies that are
adding AI features that are not necessarily helping their product.

Speaker 2 (20:43):
Just for buzz or you know, we call that the
little sprinkle, Yeah, the AI sprinkle.

Speaker 3 (20:48):
Yeah exactly. So I don't think it always helps the product.
But it's been really interesting sorting through the AI inventions
and what we're really looking for at this point in
the AI journey is inventions that have demonstrable impact.

Speaker 1 (21:04):
So in mid journey, so to speak.

Speaker 3 (21:06):
Yeah, exactly, that's a good rest.

Speaker 1 (21:10):
So, Harry, you and I spent some time with the
list and picked out some favorites. What was your first favorite.
I don't know.

Speaker 2 (21:17):
Maybe I'm of the age where it's like marriage is
on the mind. But one of the things that really
stuck out to me was this software called Dia.

Speaker 3 (21:26):
Yeah. So Dia is actually a government app for in
Ukraine and it does a lot more than what we
wrote about here. It's been around for a while. Most
Ukrainians use it. It's basically an app where you can
do all government services. But yes, this year, because partially
because of the war, they launched a future in which

(21:48):
you can propose marriage via the app. The person you
proposed to has a certain amount of time to accept
your proposal, and then if they do romanmomes romance, and
then if they accept your proposal, you can do a
video chat wedding that's official with an efficient from the government,
like a city hall wedding over video chat, and you're married.

(22:11):
And the reason they did that is because so many
couples are separated by the war right now physically that
it's difficult, and they wanted to, you know, let people.

Speaker 2 (22:19):
Still have and this was something that was pretty widely adopted, right,
like people really.

Speaker 3 (22:22):
Use Yeah, it's been Yeah, it's been really widely used.

Speaker 1 (22:25):
I don't want to derail us, but this one actually
connects to a personal story of mine, which is that
my grandfather was a refugee from Ukraine who was separated
from his mother in nineteen thirty nine and they met
in the app one that twenty five years later, in
the mid sixties, after the Red Cross, would reunite families
who have been separated by World War Two, and they

(22:45):
didn't even recognize each other. But just interesting how you know,
history kind of rhymes and so, you know, kind of
this this is the story I found intriguing but also
kind of moving personally.

Speaker 2 (22:56):
One of the other things that was on your list,
amazingly is something that I own and I am very
much a you know wellness app skeptic although user I
don't know where I knew about it from because it's
a Dutch company, but I bought it here today to

(23:20):
show you, and it is a demo. It is the
moon Bird AI for listeners.

Speaker 3 (23:25):
It's uh, basically like a little pod that you hold
in your hand and it vibrates.

Speaker 2 (23:32):
It actually as pulses like a pulse. It mimics the
act of breathing, so it goes in and out and
in and out. Yeah, it does look like a vibrantor
yeah it does. There's just no way. There's no way.
And honestly, the carrying case does. The carrying case doesn't help.

(23:52):
My mother saw me using it and she was.

Speaker 1 (23:54):
Like, huh and I.

Speaker 2 (24:00):
Very modern family. We're going to take a quick break
to pay the piper. We'll be back with more from
the amazing Emma Barker of Time magazine Stay with us.
I actually find this to be an incredible device. It
is so simple. It's something that links through Bluetooth to

(24:22):
my phone. And this is called moons called moon Bird AI.

Speaker 1 (24:27):
How did you choose it, Emma for the Time list.

Speaker 2 (24:30):
Yeah.

Speaker 3 (24:30):
So, wellness devices are tricky, as you note, and there's
always a lot of wellness apps and things that don't
have a ton of scientific backup, and frankly, it's just
hard to get scientific backing for a lot of these,
and so I'm always looking for the things that that
I feel like don't necessarily need that you know, I

(24:54):
think it's it's hard when you get into things like
mental health, yes, and things like that. But but things
like meditation, deep breathing, those things are techniques that are
proven enough that if it's a device that helps you
with that, you don't need to have a clinical trial
backing that up. And I really liked moon Bird. There's

(25:14):
some of these different things, but I liked moon Bird
because a it doesn't have a screen, even though it
does pair with your phone, you can know you're not
as this side.

Speaker 2 (25:25):
And the AI piece of it that I that I
guess is AI is that it does make the program
that is training you on breathing smarter because.

Speaker 1 (25:34):
It's adapts to you. It's personalized correct correct. Another one
which Karen and I are both fascinated by last year,
which was made it onto your list was Google's Notebook LM.
And this is obviously in some sense not new right.
It's like a generative AI application where you know, you
ask questions and you get answers.

Speaker 2 (25:52):
I just think, for the sake of having it, if
you can describe what it is so that like people
can conceive of it if they haven't heard of it.

Speaker 3 (25:59):
It's a sign your notebook of all your data, so
you can pull in different sources. You can upload your
own information. You could upload, you know, your thesis paper
for your you know, senior project, and it can parse that,
it can organize that. But it can also create an

(26:20):
entire podcast based on that content, which has AI generated
voices having a natural conversation.

Speaker 1 (26:29):
Which we actually experimented with. We did it. We had
Notebook LM do a version of a podcast that we
were also also doing.

Speaker 2 (26:35):
And he sent it to me and I was like,
is someone I thought someone was plagiarizing our format. It
was very weird.

Speaker 3 (26:41):
I'm really glad you guys didn't prank me, but I
have I made come in here and talk to.

Speaker 1 (26:47):
So how did I mean, how did you decide to
put notebook LM on the list?

Speaker 3 (26:52):
I think it was one of the best executed AI
inventions of the year. One thing we look for or
in AI inventions is typically we're looking for ones with
broad proven use that have really shifted how an industry
or group of people functions. So not just a cool idea,

(27:12):
but could change an industry and not to further light
the fire under you guys, but this has really big
implications for the audio industry and media in general is generative,
you know, content, Like you.

Speaker 1 (27:28):
Know, what I really like about this one is the
fact that you can choose your own sources. I find
it like one of the big things about AI, of
course is like where's this coming from? And like is
the source material garbage? But to be able to say no,
like please draw on these sources and then generate something,
I found it really really cool, and it is obviously
a little scary, being very honest. What was your favorite

(27:51):
item on this whole list if you can, if you
can name a favorite, or if there are a couple
that we haven't talked about that really stood out to you.

Speaker 3 (27:57):
One of my favorite sort of tech things but anti
tech is the Yonder pouch.

Speaker 2 (28:04):
Oh I love the Yonder pouch. Yes, so you have
them at screenings? Oh?

Speaker 3 (28:09):
Interesting?

Speaker 1 (28:10):
Yeah.

Speaker 3 (28:10):
So the thing that it's really transformed is schools. There's
a huge movement for kids to have more phone free
spaces where they're not allowed and this kind of speaks
to this law that passed in Australia where I think
kids can't have be on social media until they're sixteen

(28:30):
or something like that.

Speaker 1 (28:31):
What is it? Sorry?

Speaker 3 (28:32):
Oh, Yonder is a little pouch that you just locked
your phone in.

Speaker 1 (28:37):
Okay, I like that, is it wearable?

Speaker 2 (28:41):
No?

Speaker 1 (28:42):
You go.

Speaker 3 (28:42):
Basically, they'll have like a station with a bunch of
Yonder pouches. You lock your phone in there and then
you go into the event.

Speaker 2 (28:50):
So yeah, you do.

Speaker 3 (28:51):
Some musical artists will have them at their concerts if
they don't want footage of the concert being taken or
just the concert being ruined by everyone having their phone up.
But yeah, a lot of schools have started adopting them
where kids lock their phones at the beginning of the
day and they get them back at the end of
the day. Yonder the company itself, has had a huge
role in pushing these phone free spaces and advocating for

(29:12):
this and so on top of the product, the company
is doing a lot of advocacy.

Speaker 1 (29:18):
On this topic.

Speaker 2 (29:20):
We've talked about phones as cigarettes and sugar, and it
really this is the kind of thing where you're like,
we should have more, we should have more strength than this,
But no, We've been introduced these products throughout history that
we get very dependent on, and so people have to
come up with strategies to allow us to extricate ourselves

(29:40):
from It's like a smoke free zone. I mean it's
a very similar thing to me. Yeah, absolutely, and I
think it's necessary unfortunately, but.

Speaker 3 (29:47):
I think it's really transformative for especially schools, but also
privacy of different experiences and places.

Speaker 2 (29:55):
Which I think people yearn form more and more.

Speaker 3 (29:57):
Yeah.

Speaker 1 (29:58):
Has that been a time where you've got a pitch
and you like, no, that's garbage and then it turned
out to be like the big thing.

Speaker 3 (30:05):
I'd say more often is the opposite. And this is
where I own up to the fact that last year,
well by last year, I mean twenty twenty three, we
put the humane aipin on it. Essentially it was a
pin that was fully operated by voice control and AI,
and I think that's an example of a product for
one thing. It came out like right when the list

(30:27):
came out, so it wasn't like super well trialed yet,
but I think it was an example of something that
was newsworthy, even if it didn't come through in execution
all the way.

Speaker 1 (30:39):
That will probably happen. There will be a product that is.

Speaker 3 (30:42):
There will be that product, and I think voice control
just doesn't at the point yet where you can fully
rely on it.

Speaker 1 (30:47):
But that one was also quite dangerous right, or at
least he got very hot.

Speaker 3 (30:51):
I don't know about dangerous, but I think it just
didn't like work as well as.

Speaker 2 (30:55):
You very well too.

Speaker 3 (30:56):
Yeah, but I think it was exciting and I think
it pushed the conversation forward. So I think there's a
lot of those where people get very excited about a
product and then it is a flop.

Speaker 2 (31:08):
Yeah, part of the course, I feel like, and you
know that better than anyone.

Speaker 3 (31:12):
Yeah, absolutely. And I also looking back through twenty three
years of best inventions, seeing the same kind of seeing
the progress of different flops pushing each other forward really
drives home the fact that they're important even if they
don't they don't make it.

Speaker 2 (31:32):
Which is a very existential thing to think about. We
are just driven by our series of flops.

Speaker 1 (31:39):
Yeah, Tennis, and we rise on stepping sterns of our
former selves to greater things.

Speaker 2 (31:44):
There you go, Well, thank you so much for taking
the time to talk to us. This is yeah, thank you,
really interesting.

Speaker 1 (31:51):
I enjoyed it.

Speaker 2 (31:51):
Of course we didn't even have to go to Vegas.

Speaker 1 (31:55):
I think it's it's always nice to doff our caps
to those who went before, and one of the most
influential and iconic things I can think about in the
history of media is Jerry's final thought. So what should
we leave with this week?

Speaker 2 (32:13):
So brain rot? Actually, the Oxford English Dictionaries word of
the Year is sort of a way that internet speak
has infiltrated our day to day lives.

Speaker 1 (32:25):
A brain rot each week will be our final thought,
that will be our final thoughts. So what's this week?

Speaker 2 (32:30):
This one which I see on TikTok now a lot
is so good and you sort of have to know
that it's something that's happening on TikTok to understand the
context of it. It started sort of in context of pregnancy,
where like you're at a certain age and people aren't
sure if like you're happy to be pregnant, and so

(32:50):
they started saying congradgudolences, congradudolences. Yeah, there's I just have
to play this one.

Speaker 1 (32:58):
I'm at the age where if you post that you're pregnant,
I'm gonna need you to tell me whether you're happy
body or not, because like just being like, oh, I'm pregnant, Like,
am I supposed to say congratulations?

Speaker 2 (33:07):
Am I supposed to feel bad?

Speaker 1 (33:08):
Like I don't know how to like respond to that,
you know, so like just tell me, just be like, yeah,
I'm gonna keep this one. And I'm like, oh, congratulations.
What she means is congratulals, right, which.

Speaker 2 (33:21):
What she means is congradulences. It's like, I think a
perfect example is like you have friends that are like, oh,
I'm gonna break up with this guy, and next thing
you know, they are on FaceTime being like we're engaged,
and you're like, congratual, you got you got what you
wanted with the wrong guy. So that's my brain rot.
You're gonna be hearing.

Speaker 1 (33:40):
By the way, Twitter blew up yesterday because Zendaiya since
getting engaged, when she's doing interviews now she gestures with
her left hand rather than her right hand, so everyone
can to that.

Speaker 2 (33:48):
I say congratulences. I don't know to who.

Speaker 1 (33:54):
That's it for this week for tech stuff, I'm as.

Speaker 2 (33:57):
L and I'm care Price. This episodisode was produced by
Eliza Dennis, Victoria Dominguez, and Lizzie Jacobs for Kaleidoscope. It
was executive produced by me os Vaalashan and Kate Osbourne
for iHeart. The executive producer is Katrina Norvel. The engineer
is Biheed Fraser and it's mixed by Kyle Murdoch, who
also wrote our theme song.

Speaker 1 (34:17):
Join us next Wednesday for tech Stuff The Story, when
we'll share an in depth conversation with a longtime tech
chronicler Nicholas Thompson, former editor in chief of Wired and
current CEO of the Atlantic And please rate, review and
reach out to us at tech Stuff Podcast at gmail
dot com with your feedback. We really want to hear

Speaker 2 (34:38):
From you, really, really bad, bad

TechStuff News

Advertise With Us

Follow Us On

Hosts And Creators

Oz Woloshyn

Oz Woloshyn

Karah Preiss

Karah Preiss

Show Links

AboutStoreRSS

Popular Podcasts

On Purpose with Jay Shetty

On Purpose with Jay Shetty

I’m Jay Shetty host of On Purpose the worlds #1 Mental Health podcast and I’m so grateful you found us. I started this podcast 5 years ago to invite you into conversations and workshops that are designed to help make you happier, healthier and more healed. I believe that when you (yes you) feel seen, heard and understood you’re able to deal with relationship struggles, work challenges and life’s ups and downs with more ease and grace. I interview experts, celebrities, thought leaders and athletes so that we can grow our mindset, build better habits and uncover a side of them we’ve never seen before. New episodes every Monday and Friday. Your support means the world to me and I don’t take it for granted — click the follow button and leave a review to help us spread the love with On Purpose. I can’t wait for you to listen to your first or 500th episode!

The Bobby Bones Show

The Bobby Bones Show

Listen to 'The Bobby Bones Show' by downloading the daily full replay.

The Joe Rogan Experience

The Joe Rogan Experience

The official podcast of comedian Joe Rogan.

Music, radio and podcasts, all free. Listen online or download the iHeart App.

Connect

© 2025 iHeartMedia, Inc.