
February 21, 2023 37 mins

We've got a bunch of AI-related stories to chat about today, and most of them are bad. From AI deciding who gets laid off to a university leaning on AI to craft a sensitive message to students and beyond, we see how artificial intelligence is creating real problems. Plus, today Microsoft attempts to convince EU regulators to let it purchase Activision Blizzard, an old iPhone sells for an astronomical price and movie studios want redditor names and addresses. 

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio. Hey there,
and welcome to TechStuff. I'm your host, Jonathan Strickland.
I'm an executive producer with iHeartRadio, and how the tech are ya?
It's time for the tech news for Tuesday, February twenty first,

(00:24):
twenty twenty-three, and we're going to start this episode
off with a news segment that I would like to
call Blame the Robots. So the Washington Post published an
article by Pranshu Verma titled AI is starting to pick
who gets laid off, which is perhaps a bit sensationalized

(00:45):
once you read the actual story, but maybe only a
little bit sensationalized. All right, So here's how this all unfolds.
There are software packages and various services and algorithms that
some hiring managers rely upon in order to do kind
of a pass filter across job applicants in order to

(01:08):
narrow down the search. There are companies that offer that
very service, and it starts to make sense if you're
hiring at a company that gets a lot of attention.
So let's say that you're a popular company, you have
fairly rare job openings, and you get lots, like maybe
thousands of applicants per job that you list, Well, you

(01:30):
probably need some help whittling down the applicant list to
get to a manageable pool of potential hires, right like
you need something to separate the cream from everything else.
It's not easy to do, especially at really high volumes.
And so these services and software packages essentially reduce applicants

(01:51):
down to data points, and they kind of have to.
And depending on how tough that filter is, folks can
get weeded out. Sometimes a lot of folks. Anything that
gets you down to, like, less than a dozen applicants out
of thousands has to be pretty brutal. Well, this article
postulates that we could see the reverse come into play
as well, that a company might lean on similar software

(02:15):
and services to identify the people who contribute the least
to the company or perform at a level that's considered
to be below their peers. Therefore, they could be candidates
for layoffs when the corporate overlords deem that it is
time to reduce headcount in these troubling times. And therein
lies the story, right that an algorithm might determine that

(02:40):
you are expendable instead of your human boss. And I
am aware that I'm making an assumption here that your
boss is in fact human. Some program, you know, some
freaking robot, has determined that you're getting laid off. It
sounds positively dystopian, doesn't it. And with big tech companies

(03:02):
laying off thousands of folks over this year and the
previous year, it's easy to imagine managers shrugging off the
responsibility of telling someone they no longer have a job
by leaving it to the old zeros and ones of
a presumably objective and emotionless system. But this in turn

(03:22):
brings up other problems. As I have mentioned in tons
of episodes, one of the many problems we have with
AI systems can come down to unintended bias within the
system itself. So if the system is biased, it could
end up targeting employees of specific ethnicities or backgrounds. Now

(03:44):
Verma makes this argument in the Washington Post article. He
says that if the algorithm were to, say, determine that
people of color have a higher incidence rate of leaving
their jobs, that a person of color is more likely
to leave their job than, say, their white colleagues, well,
then the system might naturally start to target employees who

(04:06):
happen to be people of color for the purposes of layoffs.
But then you're getting into very dangerous legal and ethical territory.
It's as if you're targeting these specific people because of
their race. Also, I'm not sure how well an algorithm
can actually judge a person's contributions. Presumably stuff from employee

(04:27):
reviews and such would play a big part, But in
highly collaborative work, a person could act as the sort
of linchpin that keeps a team working really well together,
even if they themselves don't have the highest numbers on
whatever the deliverables are. So in my opinion, relying on
AI to make or even guide decisions regarding layoffs is

(04:53):
really a bad move all around. It can make sense
in the applicant phase, but in layoffs I would say
avoid it. It doesn't look good for the company. It
could ultimately lead to choices that will harm the overall
organization in the long run. In this article, Verma mentions
that while folks at Google wondered if perhaps they had

(05:14):
been laid off due to an algorithm choosing them because
there didn't seem to be much rhyme or reason to
the layoffs, the company denies making use of anything of
the sort. There's kind of a distinct lack of cases
where we know that an algorithm definitively played a part
in layoffs. However, Verma in the article also cites a

(05:36):
survey that showed ninety eight percent of HR managers (there
were three hundred of them participating in the survey) had
said that they plan to rely on software and algorithms
to help make such decisions about layoffs this year. So
even if you were to argue it hasn't happened yet,
it looks like it's going to happen real soon. My

(05:58):
guess is we'll see some high-profile cases where some company
relies too heavily on algorithms and it'll come back to
haunt them, perhaps only in PR, but it will be
a big blowback, and then maybe then we'll start to
see people form best practices around the whole thing. I

(06:19):
still think it feels a bit like shirking responsibility. In
my opinion, if the top brass decides that layoffs are necessary,
then they are obligated to make each and every layoff
decision transparent and honest. I think they owe their employees
as much. And it's really infuriating because you'll see managers
who get a directive saying you have to apply this

(06:43):
artificial bell curve to the employees who are reporting to you.
We heard a story about that just a couple of
weeks ago, where a director essentially was fired for
refusing to follow that because it arbitrarily requires managers to
assign people as low performers even if you don't have

(07:04):
any low performers on your team, and that just again
seems inherently unfair. I feel like relying on AI to
make these choices also is inherently unfair and can miss
some really important factors that may not reduce down to
pure data. But we've got a lot of other AI

(07:24):
news to get through today. A lot of it is bad.
I'm not gonna lie. And our next story comes from
Vanderbilt University. The Peabody College at Vanderbilt, and that college's
Office of Equity, Diversity and Inclusion, sent out a message
to students in the wake of the terrible shooting at
Michigan State University. And clearly this was a delicate task

(07:48):
that needed empathy and support. It needed a message that
showed that Vanderbilt's staff have students and their welfare at
the top of their priority list. So of course they
used ChatGPT to help craft the message. This pretty
much sent me spiraling, y'all, because passing the buck to

(08:08):
AI to handle things that are this important, things that
intrinsically involve a very human connection, it just feels beyond
shortsighted and crass to me. At best, you could say
this was a poor decision, but at worst it implies
that leadership has little to no regard for students and

(08:29):
instead we'll just lean on the robots to handle the
tough stuff. Anyway, the end of the message contained the
line paraphrased from OpenAI's ChatGPT AI language model, personal communication,
and at least the word paraphrased indicates that there was
human involvement in taking the generated message and shaping it
properly for students. So it was a collaborative effort, you

(08:53):
could say. But still, the fact that staff tapped AI
in the first place to help with such a sensitive
matter doesn't look good. It looks like people who want
to avoid the hard stuff, and hard stuff isn't the
human connection stuff, the stuff that has incredible impact on

(09:14):
emotion and mental health, whether it's layoffs or counseling people
in the wake of a violent act like the shooting
at Michigan State University. That is the wrong way
to use AI, in my opinion. That is inherently the
realm of humanity, and to outsource that to AIs,

(09:38):
it shows such a huge disregard for the people who
are ultimately the recipients of those messages that I think
it's unconscionable. Now the Associate Dean and Assistant Dean who
are part of this process have both stepped back from
that office of equity, diversity and inclusion, which is probably

(10:00):
for the best, but yeah, this was a really bad
use case for AI. Last week, representatives from around
the world attended the Summit on Responsible Artificial Intelligence in
the Military Domain, or REAIM, and we've
talked about how incorporating AI into military processes and hardware

(10:24):
raises really difficult questions regarding safety, accountability, escalation, and more.
Reps from many countries attended, including the United States and China,
but excluding Russia, which wasn't invited. Ukraine did not attend;
they were invited, but clearly have other things going on

(10:44):
at the moment. Anyway, these representatives all met to discuss
the issues of AI in its role in military operations.
At the conclusion of the summit, all but one of
the representatives of the countries that attended signed an agreement
to commit to developing AI military applications that quote do
not undermine international security, stability and accountability end quote. So

(11:09):
what was the one nation that abstained? That would be Israel. Now,
don't, like, heap tons of criticism on Israel, because there
are critics who say this entire meeting was largely for show,
because according to critics, there was nothing in the summit

(11:29):
or the agreement that is legally binding for any of
the countries involved. So, in other words, the critics are
saying that the reps are all like, yeah, yeah, totally
AI killing people would really be bad. Let's totally not
do that, but they would have no real accountability to
follow through on that promise. Further, the agreement did not

(11:51):
include certain AI assisted or controlled systems that are already
in use, like AI controlled drones, And so there's concern
that this agreement represents little more than just putting
on a show to say, yes, we're all aware of
this and it is a bad thing. Now, honestly, I
heavily suspect that several countries, including the United States and

(12:15):
also China, will continue incorporating AI into military applications, including
weaponized AI. I would be absolutely shocked if they didn't
continue down that pathway, because there's a very real fear
that if you don't do it, the other guy will,
and then there will be an AI gap. Now maybe

(12:36):
I only think that because I'm a child of the
seventies and eighties and I saw how this very similar
scenario played out with nuclear armaments, because boy howdy, was
that a thing. So I would love for us to
avoid the mistakes of the past. But I really am
skeptical that that's going to happen, because again, unless everybody

(12:59):
is held accountable and agrees to not further that work,
someone is going to. And if someone's going to, then
everyone is going to because otherwise you are at a disadvantage. Okay,
with all that doom and gloom out of the way,
let's take a quick break. When we come back, I've
got some more news items to talk about. We're back now.

(13:27):
I still have a couple more AI stories, but these
are not quite as apocalyptic as the ones we started
off with. The first is that Clarkesworld magazine, which
has been publishing fantasy and science fiction stories online since
two thousand and six, has temporarily stopped accepting submissions. Why
because apparently the magazine has received too many AI generated submissions,

(13:50):
and until they have access to better tools to detect
those kinds of things, they have chosen to hold off
accepting any more. That's both understandable and it stinks. Not that
I think the magazine made the wrong call here. I
think this is the right call, but rather it stinks
because there are genuine authors and would be authors out

(14:14):
there who have great stories to tell, and they're seeing
an outlet closed off to them, at least for now,
and it's all thanks to AI generated stories. Now, I
think in most cases the AI generated stuff has to
be a collaborator kind of relationship, because in my own
experience, the stories generated by AI aren't very good. Like,

(14:36):
grammatically they work, and you know, you get some interesting
descriptions and stuff, but the actual stories tend to be
pretty mundane and uninteresting. I imagine that most folks who
are using AI are leaning on it for stuff like
generating initial ideas, maybe shaping a certain part of the
narrative, at least for any stories where it's difficult to determine, oh,

(15:01):
this was made by AI. Right, if they haven't done
any massaging, it often is pretty easy to detect that
it's AI, or at the very least, it's easy to detect
that it's not a very good story and it wouldn't pass
the bar for publication. But still, here's another example of
how AI can end up harming creative types, whether it's
from the unauthorized copying of their style or displacing them

(15:23):
from the creator community. Insider reports something that I think
most folks already have a pretty good handle on, and
that is the emergence of and reception to ChatGPT probably
means we're going to see a whole bunch of copycats
in the very near future. And to be clear, chatbots
have been a thing for years. I'm sure you're all
aware of that. In fact, someone who was once in

(15:45):
the business of reporting on tech, a person whom I
know and respect and like very very much, ended up
working at a company that developed sophisticated chatbots. But these
were tools that were intended for narrow use cases, something
that would work well within the confines of a particular
company's services and processes. The stuff we're seeing now is

(16:07):
made to be more general purpose, and with that comes
the problems of reliability and accuracy as well as transparency.
It is easier, not easy, mind you, but easier to
build a reliable and accurate tool that works within an
enclosed system, like the customer service arm of a consumer

(16:28):
facing company. But it's another thing when it's just you know,
a free-range AI chatbot. And meanwhile, the guy who
runs the company that made chat GPT has said repeatedly
that he thinks chat GPT isn't that good, or at
least it is far from perfect. And yet we're currently
living through a buzzy, hype-heightened age of ChatGPT

(16:55):
and its peers, like Bard from Google. TechCrunch
has a piece titled the AI photo app trend has
already fizzled, new data shows and you should totally check
out this article. The author of the piece, Sarah Perez,
lays out some of the data, including download numbers and revenue,
and she shows that while the text to image AI

(17:16):
tools initially made a really big splash when they started
to emerge, particularly late last year, excitement has dropped off
considerably since then. There's been a lot of backlash in
the space, ranging from artists who are understandably upset to
see their style co opted by AI, to users who

(17:36):
are concerned that the tools can create inappropriate images far
too readily, and that any restrictions that are designed to
limit that sort of stuff aren't always the best. Whether
those actually played a big part in cooling this trend,
or maybe it was just that folks were getting tired
of the shiny new thing and they had already moved on,

(17:57):
I am uncertain, but my guess is that we're going
to see the space continue to evolve, perhaps with fewer
players as this goes on, if some of them find
it too difficult to cover costs with the declining revenues.
But I don't think AI generated imagery is going to
just go away at this point. Now that being said,

(18:19):
one fun story, or at least in my opinion, it's
fun that relates to the AI generated imagery involves a
robot from Carnegie Mellon University. So it's a robot arm
that has the name FRIDA, which, yes, is both a
tribute to Frida Kahlo the artist, and is an acronym that

(18:41):
stands for Framework and Robotics Initiative for Developing Arts, and
it also generates images based on text prompts. Only in
this case, the images it makes are not digital images.
They're not computer generated images. They are real world paintings.
You have a robot that paints with actual brushes and

(19:02):
actual paint. It creates works of art based off text
input and directions. According to tech Spot, it takes about
an hour from the point where the robot receives input
in the form of the text to the point where
it begins to paint, because it actually has to plot
out how it's going to physically paint this. How are

(19:24):
the brushstrokes going to go, how long are they going
to go, how much pressure is going to be used?
What style is it going to follow? And that's very
understandable because as we all know, there are different strokes
for different folks, because the world don't move to the
beat of just one drum. Shout out to me if

(19:45):
you get that reference. Anyway, the roboticists and engineers are
quick to say that FRIDA isn't an artist. FRIDA is
a collaborator. FRIDA is not creative. FRIDA just follows instructions
as best it can to paint the subject of the art
in the style that was dictated by its collaborator. Anyway,

(20:07):
I just thought this was a neat take on AI
generated images. Somehow it feels different because it's, you know,
a physical painting. It's something that you could hold or
put into a frame and hang on a wall, or
you know, something you could rip apart in rage as
robots get another art commission and you don't. Now, finally

(20:29):
we're off of the AI stories and we can get
to everything else. So next up. Part of the big
news this week is Meta really shook things up on
Sunday announcing that the company is introducing a subscription service
called Meta Verified. It's just in the testing phase now,
and you know, the plan is to widely deploy it,
but we'll see if things go poorly in the test

(20:53):
markets that Meta is trying out at the moment. But essentially,
this subscription service is a verification tool. Users would have
to submit proof that they are who they claim to be,
using government issued ID for example, and in return for
twelve bucks a month or fifteen bucks a month if
you're doing it on iOS or Android, because Google and

(21:16):
Apple take their own cut of the fee. You then
get a little blue badge on Facebook and or Instagram
saying you're bona fide. On top of that badge, subscribers
will also have access to services that are meant to
protect against imposter accounts. They're supposed to get better customer support,

(21:36):
and they're supposed to get improved discoverability when folks are
actually searching for them, which, in my opinion, is stuff
that should really be standard for all users, whether they're
paying a subscription or not. I think it's kind of
bullpucky to say one of the benefits to verification is
that meta will make sure other folks aren't trying to
impersonate you. I mean, arguably, this is a bigger problem

(21:59):
for notable folks like celebrities and brands. And I'm not
talking me, I'm talking real celebrities. I have no illusions
that I'm a celebrity, but I've still seen plenty of
instances of friends being impersonated as someone has either gained
access to their account or created a copy account in
an attempt to phish for data. Like this is still

(22:21):
something that affects the average person on these platforms. It's
not just for the celebrities. But yeah, Meta's facing
issues with revenue for a lot of different reasons,
so it's not surprising that the company is now introducing
the subscription feature. It just feels like the quote unquote
benefits of the service are things that really everyone on

(22:43):
the platform should have access to by default. Maybe I'm
just being unreasonable here. Today Microsoft will attempt to defend
its planned acquisition of Activision Blizzard in the EU in
a meeting that's behind closed doors in Brussels. Previously, EU
regulators indicated that they would block the purchase, saying it

(23:04):
would result in less competition in the video game space
and allow Microsoft to engage in actively anti competitive practices,
such as preventing other platforms like Sony PlayStation from having
access to popular video game franchises like Call of Duty. Earlier,
Microsoft Reps signed a deal with Nintendo Reps that legally

(23:25):
binds Microsoft to bring Call of Duty titles to Nintendo
platforms for ten years, and further that all titles will
be available on Nintendo platforms the same day that they
come out for Xbox platforms, with quote full feature and
content parity end quote between these versions, meaning Nintendo won't

(23:47):
have to settle for a watered-down version of
Call of Duty. It's going to get the real thing,
just like Xbox does. This puts pressure on Sony to make
a similar agreement or else Microsoft could argue before the
EU regulators that Microsoft has made attempts to ensure fairness
between the various console companies, but Sony isn't playing ball

(24:07):
on purpose in an effort to scuttle the deal. Surprisingly,
at least to me, the Communications Workers of America, the CWA,
a union organization here in the US, has also urged
the EU to approve the acquisition deal. They say that
Microsoft has been more receptive to attempts at unionizing than

(24:28):
Activision Blizzard has, and that without Microsoft's oversight, employees at
Activision could find themselves facing tough managerial resistance to unionizing.
By the time you hear this, a decision has probably
been made one way or the other. But as I
write this episode, it has yet to be announced, and again,

(24:48):
the meeting is behind closed doors, so it might be
a little while before we find out what the results are.
Corporate employees at Amazon are looking at decreased compensation this
year, like an actual pay cut. Now, the reason for that
is because some of their compensation is tied up in
stock units, so as part of their salary, Amazon corporate

(25:09):
workers get stock in Amazon. However, Amazon stock price has
taken some massive hits over the last year, and that
means that the stocks awarded to corporate employees are worth
much less than they were a year earlier. That's particularly
tough because when Amazon structures its salary deals, they are

(25:29):
at least partly based on the idea of the stock
having a value of around one hundred and seventy dollars
per share. So, in other words, that's part of the
justification of yes, your salary is X amount of dollars
instead of Y, because you're also being compensated by stock
units that are considered to be worth one hundred seventy
dollars per share. However, at the time of recording, Amazon

(25:52):
stock is currently at ninety four dollars fifty eight cents
per share, so a little more than half of what
it was when these salary figures were first calculated. So
if the cash part of your salary is dependent upon
the fact that the rest of your compensation is coming
in the form of stocks that were calculated at one
hundred and seventy bucks per share, it means you're getting

(26:14):
significantly less per year. On top of that, the company
has been laying off thousands of employees. I wouldn't be
surprised if there were some managers over at Amazon who
were giving wistful glances toward chat GPT when it comes
time to communicate these issues to their team members. Okay,
I've got a couple more stories to talk about, including
one that's going to get me all het up again.
But before we get to that, I'm going to take

(26:37):
a quick break, and so are you. But we'll be
right back. Okay, here's where Jonathan gets upset for multiple reasons.
All right. So our next story is that TorrentFreak
reports that filmmakers are demanding to know the identities of

(27:00):
certain Reddit members who have been active in subreddits and
talked about content piracy like the illegal downloading and distribution
of films and such. Hey, y'all, here we go again.
Like I've been through this a few times because I
remember the good old Napster days. All right. So the
filmmakers want to hold pirates accountable and that is understandable, right,

(27:23):
you know, they don't want their films to be pirated,
and that makes sense, Like this is not just art,
its commerce, and to see people get access to something
without legitimately paying for it. That is a problem. However,
the arguments that filmmakers and studios make are at best facetious.

(27:46):
Now by that, I mean you'll hear filmmakers and studios
cite huge figures, like millions and millions of dollars,
in damages that these companies and these filmmakers
experience due to piracy. But the truth of the matter is,
you cannot say that with any kind of certainty. Those damages,

(28:12):
on the face of it, assume that the people who
pirated the content would have otherwise purchased a ticket, or
subscribe to a service or whatever, and so piracy, based
on this argument, amounts to lost revenue. Thus the damages right, like,
we would have sold x number of tickets, except that

(28:32):
this number of people pirated it, and therefore we're out
x number of dollars. Except you don't know that. You
do not know if the person who pirated something would
have otherwise sought a legitimate way to view the material.
You don't know that you actually lost out on money.
Maybe that person would just have gone without seeing it

(28:55):
at all. So that's not... I mean, you can't
accuse people of not going to see a movie, right?
Like, I haven't gone to see Ant-Man in
the Quantum Maniacs or whatever it is. But Marvel can't
come to me and say, hey, you failed to see
the movie at the theater, so we're going to fine you.
That doesn't make any sense. So you can't argue that

(29:19):
the pirates would have otherwise gone and paid legitimate money
to go and see stuff. Therefore the companies out of
money because maybe they wouldn't maybe they just wouldn't see
it at all. So pirating a film or a series
is not the same thing as someone stealing like a
physical something like a TV from a big box store. Right.

(29:40):
That is a physical item. There is only one of
that specific television in the world, and once it's gone
from an inventory, it is gone. It is not magically
replaced by a digital duplicate. Right. That is something where
you can look at that and say, yes, this amounts
to real losses. That's a lost sale, if not to
the person who stole it, then to the person who would have

(30:02):
ultimately bought it. That you could say, and you could
point to that and say these are real damages. You
cannot do that with digital media. The Government Accountability Office
of the United States agrees with me: you cannot do that.
Does it amount to damages? Is there a loss of revenue? Undoubtedly, yes,

(30:22):
there is definitely a loss of revenue, but there's no
way to determine the extent of that. And because filmmakers
and studios depend upon these inflated numbers that represent the
quote unquote damages that they incurred as a result of
piracy to use as a bludgeon against pirates,
to cow people into avoiding piracy, it unfairly targets people

(30:49):
who may or may not have actually caused any damages
at all. You just don't know. That's the thing is
that because you don't know, you cannot make firm claims
of damages. And yet time and again we see filmmakers
and studios do this. Now, that is part of it.
I should also mention that Reddit is resisting these urges

(31:12):
to hand over user data. And just in case you
were curious, if you were to do something like I
don't know, use a VPN and create a unique email
address and only use your VPN when you're accessing something
like Reddit, and you register for Reddit using the unique
email address, that isn't tied to anything else of yours.

(31:35):
That could be a way to avoid imperial entanglements. That
being said, now, I say that because I don't like
seeing companies go super hard against people. But I also
firmly believe piracy is wrong. Okay, I do not condone
piracy at all. I pay for the content I consume

(31:55):
or I go without. I even bought a cheap region
free DVD player so that I can import DVDs from
the UK for series that just never get released over
here in the US. That Mitchell and Webb Look, I'm looking
at you, but I still end up buying the actual stuff.
I don't just try and pirate it. I condemn piracy,

(32:19):
but I also condemn an industry throwing its power around
making assertions that it simply cannot support with evidence, at
the cost of people who may not have ever gone
to see your quantum maniac ant movie. Okay, I'm done
with that. Finally, Hey, do you remember when you were

(32:41):
a kid and you got that mint-in-the-box
original iPhone, the two thousand and seven iPhone? Can you
remember how excited you were, but part of you thought,
you know, maybe I shouldn't open this because this is
a collector's item? No? Well, of course not, because you're

(33:01):
a sensible person. But it turns out if you had
been less sensible and chose not to use the thing
you've got for the purpose that it was intended, you
could have made some crazy money. Why because at an
auction this past Sunday, an unopened original two thousand and

(33:22):
seven iPhone sold for more than sixty three thousand dollars.
When that phone came out, it cost five hundred ninety
nine bucks. Now, if we adjust that for inflation today,
that would be the same amount as around eight hundred
and sixty of today's dollars, so eight hundred sixty bucks.

(33:45):
But it sold for sixty three thousand dollars at auction.
Actually, it sold for sixty three thousand, three hundred fifty
six dollars and forty cents, which is really specific.
I don't typically see it go into the cents like that.
Maybe it was a winning bid that came from overseas
and it was a currency conversion thing. But if we

(34:10):
adjust the iPhone for inflation, then the value of the
phone increased by nearly seventy four times. If we don't
adjust for inflation, the value increased by more than one hundred times.
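Those multiples are easy to sanity-check. Here's a quick back-of-the-envelope sketch using the dollar figures quoted in the episode (note that the roughly eight-hundred-sixty-dollar inflation adjustment is the episode's own estimate, not an official CPI calculation):

```python
# Figures as quoted in the episode (assumed, not independently verified):
launch_price = 599.00        # original 2007 iPhone price, USD
inflation_adjusted = 860.00  # rough 2023-dollar equivalent of that price
sale_price = 63_356.40       # winning auction bid, USD

# "Nearly seventy four times" the inflation-adjusted price:
multiple_adjusted = sale_price / inflation_adjusted  # ~73.7

# "More than one hundred times" the nominal 2007 price:
multiple_nominal = sale_price / launch_price         # ~105.8

print(f"Adjusted for inflation: {multiple_adjusted:.1f}x")
print(f"Nominal: {multiple_nominal:.1f}x")
```

Both figures line up with the numbers cited in the episode.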
So why the heck didn't the person who owned this
two thousand and seven iPhone ever open the box? Well, actually,

(34:34):
there is a sensible answer to this. You see, back
when the iPhone first came out in the US, and
you may have forgotten about this, Apple made an
exclusive deal with AT&T, which became the exclusive
carrier for the iPhone in the US. And the woman
who had received this particular iPhone as a gift way

(34:57):
back in two thousand and seven, a woman named Karen Green, well,
she had a service contract with Verizon, and it would
have cost her a lot of money to cancel out
of that contract and then start up with AT&T.
That's a whole hassle. I don't know how many people
have had to go through that process, but it can
sometimes be really frustrating. So the iPhone that she received

(35:21):
wouldn't work on Verizon, her carrier, So Green just never
opened the darn thing. Instead, she kept it and kept
it in good condition, and I'm sure she's very glad
she did. Now. I guess if there is a moral
to the story, it's this: if you do not want that
nice tech gift that someone gave to you, keep it unopened.

(35:43):
You never know when it'll be worth sixty grand. Just
you know, don't hold your breath about it. All right,
that's it for the tech News for Tuesday, February twenty first,
twenty twenty-three. Hope you enjoyed this episode and
my various rants, and if you have suggestions for topics

(36:05):
I should tackle in future episodes of TechStuff. I've
got one coming up that's going to be dealing with
our old buddies, the activist group Anonymous. That's coming up soon.
Just let me know. You can get in touch with
me via Twitter. The handle for the show is TechStuff
HSW. You can download the iHeartRadio app, which is

(36:26):
free to download, free to use. You navigate over to
TechStuff using the search field, and that will bring
you to the TechStuff page where you'll see a
little microphone icon. You can click on that leave a
voice message for me, or you can be like Nathan
and find my email address hiding out there on the
web and just send me an email, because that's how
we're going to talk about Anonymous. All right, that's it

(36:47):
for me. I'll talk to you again really soon. TechStuff
is an iHeartRadio production. For more podcasts from iHeartRadio,
visit the iHeartRadio app, Apple Podcasts, or wherever you listen to
your favorite shows.
