Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio. Hey there,
and welcome to TechStuff. I'm your host, Jonathan Strickland.
I'm an executive producer with iHeart Podcasts. And how the
tech are you? This is the tech news for the
week ending on May twenty fourth, twenty twenty four, and
(00:27):
we've got a lot of AI stories this week, so
let's get to it. Early this week, the world learned
of a dispute between actor Scarlett Johansson and OpenAI,
the company behind the ChatGPT chatbot, among other things,
and here's how it breaks down. Johansson says that in
twenty twenty three, she was contacted by OpenAI CEO
(00:47):
Sam Altman and was asked if the company could license
her voice for the purposes of creating a digital assistant,
something similar to Siri or Alexa, but built on OpenAI's
AI model. Johansson had starred, or at least her
voice had starred, in the film Her, in which
a man falls in love with his AI-enabled operating
(01:10):
system, played by Johansson. She said she was not interested,
and then she said that this year, just two days
before OpenAI was going to hold a keynote event
about this digital assistant, Sam Altman reached out again
to ask her to reconsider, and she says she didn't
actually speak with him on this one, and that the
implication was that she still had not changed her mind.
(01:31):
She didn't want to license her voice. So then the
keynote happens, and OpenAI debuted its digital assistant,
which has a selection of voices that you can choose from,
but one of those voices, called Sky, sounded an awful lot like
Scarlett Johansson. The actor was quote, shocked, angered, and in
disbelief that mister Altman would pursue a voice that sounded
(01:54):
so eerily similar to mine, end quote. Altman denied that
they had trained the AI on Johansson's voice at all.
The company has said that they actually used a different actor.
They showed footage of this actor speaking, but the actor's
face was blurred out, which kind of raises other questions
about whether or not that's actually the person talking.
But anyway, out of respect for Johansson, they said they
(02:16):
would take down that particular Sky voice, which, again, if
it's not her voice, that's odd, right? Like, why take
down someone's voice if it's not that person's voice? And
if the other actor presumably did sign
an agreement to have their voice licensed for this, then
that's a different matter. Anyway, Altman's claims of innocence aren't
helped by the fact that he appeared to directly reference the film Her
(02:39):
on X, formerly known as Twitter, and did so the
same day that the assistant debuted. So if he's slyly
giving the nod to a movie in which Scarlett Johansson
does the voice, it does seem to kind of imply
that perhaps she had some involvement with the actual digital assistant. Anyway,
that's where things stand now. There's the possibility of Johansson
(03:01):
pursuing legal action, but honestly, I haven't heard very much
firmly one way or the other, and there are questions about
whether that would be possible if in fact OpenAI
did use this other actor's voice as training material
for the AI. But it's another moment with AI that
highlights the potential threat the technology poses to creatives. Meanwhile,
(03:23):
another departure from OpenAI made news this week. Gretchen Krueger,
who had served as a policy researcher at the company,
posted on X that she had left OpenAI, and
she had resigned before news broke that Ilya Sutskever, who
was a co-founder and one of the board members
who had ousted and then reinstated Sam Altman as CEO,
(03:44):
had also left the company. So she said she had
made this decision independently, did not know that Sutskever was
stepping away, but that she just felt she could not
work for the company anymore. And the reason she felt
that way was mostly out of concerns that the company
was ignoring safety protocols, among some other things as well.
She said the company was not doing its part to
live up to the principles that OpenAI was founded upon,
(04:06):
such as transparency and quote, mitigations for impacts on inequality,
rights, and the environment, end quote, among other things. I'm including this
story in the lineup because it really is showing a
pattern at OpenAI. Numerous people connected to safety have
left the company in recent months. It's not just been
(04:26):
a couple of very high-ranking executives. Some safety researchers have
left as well, and this should be something of a
red flag that OpenAI isn't being so thorough when
it comes to developing AI in a safe and responsible way,
which, again, was the mission statement for the original nonprofit
version of OpenAI. Of course, when we say
OpenAI today, we're largely talking about the for-profit version,
(04:50):
not the not-for-profit company. Speaking of leaving OpenAI,
it turns out that the company has some measures in
place to make that a really difficult decision for an employee,
or at least it did have those measures in place
until word got out about them and the company was
shamed into changing things. Vox reports that employees leaving
(05:11):
OpenAI are frequently compelled to sign exit documents, and among
other things, these exit documents allegedly threatened to dissolve the
employee's vested equity in the company if that employee says
anything negative about OpenAI. So OpenAI is valued
at around eighty billion dollars. That's billion with a B,
(05:31):
and obviously, for each individual employee that has equity, that
can represent a huge chunk of money. We're talking like
maybe millions of dollars for some of these folks. So
the implication here is that OpenAI will hold that
money hostage in return for exiting employees promising that they're
not going to bad-mouth OpenAI. And you might think, huh,
vested equity, not potential equity, but vested equity. That sounds
(05:56):
like you're at the point where those assets definitely belong
to the employee and not the company. And since this
documentation has come to light, OpenAI has walked things
back a bit, with Sam Altman himself saying that he
felt ashamed of it all and that he also
totally didn't know about it, despite the fact that some
of these various documents had C-suite executive signatures attached
(06:17):
to them, which, I don't know, seems like the kind
of thing a CEO should know about. Anyway, Altman posted
that quote, we have never clawed back anyone's vested equity,
nor will we do that if people do not sign
a separation agreement or don't agree to a non-disparagement agreement.
Vested equity is vested equity, full stop, end quote. That
(06:39):
seems like a reasonable thing to do. I'm just scratching
the surface of the story, though. There's so much more
to it, and to really dive in, I highly recommend
you read Kelsey Piper's piece on vox dot com. It
is titled Leaked OpenAI documents reveal aggressive tactics toward former
employees. Over at Google, the Internet had a field day with
(07:00):
some rather concerning results from the company's AI Overviews product.
So this is Google's AI-enhanced search feature, in which
AI-curated information appears above some search results, and folks
have noticed that the AI has offered up some pretty
weird and sometimes dangerous suggestions. For example, if you were
(07:21):
to google how do I make pizza so that the
cheese doesn't just slide right off, one person found that
Google's answer to this was to add glue to the
recipe to keep that cheese in place, which is a big
uh-oh. But another one was even more concerning. There
was someone who was asking about how to sanitize a
washing machine, and essentially the suggestion that the Overview AI
(07:44):
made was the equivalent of making chlorine gas in the washer. Now,
in case you didn't know, chlorine gas is very poisonous
and it can kill you. In another example, it was
clear that Overview AI was essentially plagiarizing content, because it
was for a smoothie recipe, and the answer that the AI
gave included the phrase my kid's favorite. Now, presumably the
(08:08):
AI does not have children, but the smoothie recipe that
it pulled from did use that phrase. So again, it
looks like the AI is actually just directly lifting something
from a source rather than synthesizing information. Right? That's the
promise we get with generative AI, is that it's synthesizing
stuff and then presenting it to us in a way
(08:30):
that we can understand. But when you see instances like this,
it seems to suggest that, well, there's a lot more
copying and pasting going on than synthesizing, at least in
some cases, and that's not a good look. Over at
Meta, Yann LeCun, the chief AI scientist, has said that
while large language models are interesting, they're not going to
lead to AGI, which is artificial general intelligence. That's the
(08:54):
kind of AI you find in science fiction stories, in
which, you know, robots or chatbots start to think for themselves.
So LeCun has said that the large language model branch
of AI isn't going to get us there. LeCun says
that generative AI models essentially have the intelligence of a
house cat, which, if any cats are listening to this podcast,
I would just like to point out it was LeCun
(09:15):
who said that. I think that you are a very
good kitty. I'm just covering my bases here. LeCun said
that the chatbots built on LLMs are quote unquote intrinsically unsafe.
Now, by that, he means that a model is only
as good as the data that you use to train it,
and that if the data has unreliable or wrong stuff
in it, the LLM will reflect that. It doesn't have
(09:37):
the ability to discern between what is reliable and what
is not, so you end up with an AI model
that sometimes gives you incorrect responses, but with the confidence
of someone who seems to really know what they're talking about.
LeCun has also expressed that people leaving OpenAI over
safety concerns are perhaps blowing things out of proportion. I
take issue with that. I agree with LeCun that saying
(09:59):
things like we're dealing with intelligence that we don't really
understand is perhaps overblowing things. I think that was really
LeCun's main point. But I counter that it doesn't actually
require high intelligence for an entity to become dangerous, and
if a company continuously undermines safety, the matter of how
intelligent the AI agent is could be a moot point.
It could still be really dangerous. Okay, we've got a
(10:22):
lot more news to get through. Before we get to that,
let's take a quick break to thank our sponsors. We're back.
And imagine that you are a computer science student who
creates a studying tool that makes use of AI, and
(10:44):
your school is so impressed that they award you and
your research partner a ten-thousand-dollar prize for
coming up with a great business idea. Then that same
school says you are suspended or expelled because of that
exact same tool that they gave you ten grand for.
This is the story of Emory University, which is located
in my hometown of Atlanta, Georgia, and two students who
(11:07):
built an AI-powered studying tool that they called 8 Ball. Now,
the tool can do stuff like analyze coursework and create
flash cards and practice tests, so it helps you study,
and it can retrieve information from a university tool called Canvas.
This is not specific to Emory University, but it is
a tool that's available to universities and it's a platform
(11:28):
to which professors can upload class materials like coursework. So
the idea is that the teachers use Canvas to distribute
the coursework to the students, and 8 Ball
can actually pull information directly down from this platform.
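Just to make that concrete, here's a minimal sketch of what pulling class material down from Canvas can look like. To be clear, this is not the students' actual code, which isn't public; the domain, token, and course details are placeholders, and the endpoints follow Canvas's publicly documented /api/v1/ REST API conventions.

```python
# Minimal sketch: fetch course and file listings from a Canvas LMS
# instance via its REST API. All values are placeholders; the real
# 8 Ball tool's implementation is not public.
import requests

CANVAS_DOMAIN = "https://example.instructure.com"  # placeholder school domain
TOKEN = "YOUR_CANVAS_API_TOKEN"                    # user-generated access token
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def list_courses():
    """Return the courses visible to the authenticated student."""
    resp = requests.get(f"{CANVAS_DOMAIN}/api/v1/courses", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

def list_course_files(course_id):
    """Return file metadata (slides, notes, etc.) for one course."""
    resp = requests.get(
        f"{CANVAS_DOMAIN}/api/v1/courses/{course_id}/files", headers=HEADERS
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for course in list_courses():
        print(course.get("id"), course.get("name"))
```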
Emory's Honor Council decided that 8 Ball amounted to cheating
and that the students were accessing Canvas without university permission.
(11:49):
And this is an accusation that the students have denied.
And now one of the students is bringing a lawsuit
against the school and arguing that the school itself knew
and approved of their work as evidenced by that hefty
ten grand the school awarded the two students for this project,
and the student says that the university has no evidence
that anyone ever used 8 Ball to cheat. So we
(12:10):
will see how this unfolds. Now, let's loop back around
to AI-powered operating systems. That's how we started this
whole episode off. Well, Microsoft has been aggressively pushing AI
features into Windows 11 in preview mode, so it's not
being rolled out as a general feature yet, but it
is a preview feature. One of the things that
(12:31):
Microsoft has pushed AI to do is called Windows Recall,
or just Recall if you prefer. Essentially, it means the
AI is taking snapshots of what's going on on your PC
every few seconds. That can include everything from which programs
you're running to, you know, any tabs that you have open
in your browser, all that kind of stuff, and it
will just take a snapshot of that, and then you
(12:51):
can search through them and look through your history of
activity on your computer.
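To make that snapshot-and-search idea concrete, here's a toy sketch of the general concept, and only the concept: it is emphatically not Microsoft's implementation, and the interval, file layout, and Pillow dependency are all assumptions for illustration. The real Recall also runs local OCR so snapshots are text-searchable, which this sketch skips.

```python
# Toy sketch of a Recall-like loop: screenshot every few seconds,
# store timestamped files, then query snapshots by time range.
# Conceptual illustration only; not Microsoft's implementation.
import time
from datetime import datetime
from pathlib import Path

from PIL import ImageGrab  # pip install pillow (screen grabs on Windows/macOS)

SNAP_DIR = Path("snapshots")
SNAP_DIR.mkdir(exist_ok=True)

def capture_loop(interval_seconds=5, max_snaps=10):
    """Take a full-screen snapshot every few seconds."""
    for _ in range(max_snaps):
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        ImageGrab.grab().save(SNAP_DIR / f"{stamp}.png")
        time.sleep(interval_seconds)

def snapshots_between(start, end):
    """Return snapshot files whose timestamp names fall in [start, end]."""
    return [p for p in sorted(SNAP_DIR.glob("*.png")) if start <= p.stem <= end]

if __name__ == "__main__":
    capture_loop()
    print(snapshots_between("20240520-000000", "20240531-235959"))
```

Further, while Microsoft Edge users will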
have at least some controls that allow them to filter
what is or is not captured by the tool, anyone
who's using a different browser will not necessarily have that
same luck. So you might be in incognito mode, but
it's still going to get captured by Windows Recall. And
(13:13):
this has led some to argue that Microsoft is trying
to push more people to adopt Edge as their browser
of choice because that's the browser that actually does have
the filter. But whether that's the case or not, plenty
of people have come forward to criticize Windows Recall. While
Microsoft says the snapshots are encrypted, some cybersecurity folks worry
that Windows Recall will create a new target for hackers.
(13:34):
So imagine being able to pull snapshots off a target
computer and learn about things like login credentials or credit
card information, that kind of thing. Now, a lot of
sites mask that stuff, but some don't, and so there's
a real worry that Windows Recall will become a security
and privacy vulnerability that will just encourage more hacking attacks.
Some of the critics have even wondered what the use
(13:56):
case is for this tool in the first place. I mean,
you can use it to search through past activity, but
to what end? Richard Speed of The Register also points
out that this feature is likely going to run into
compliance issues with the EU's GDPR laws. So will Microsoft
walk this feature back, never to speak of it again?
It wouldn't surprise me, but we'll have to wait and see.
(14:17):
X, AKA Twitter, has made a change, and it doesn't
have anything to do with AI, so we're off the
AI stuff now. So now you will no longer be
able to see which posts someone else has liked. You
will still be able to see which posts you have liked,
and you'll be able to see who has liked your posts,
but you won't be able to see what old Jim Bob
over there has hit like on. And Jim Bob's not
(14:38):
going to be able to see what you've hit like on.
So why make this change? Well, according to the director
of engineering at X, quote, public likes are incentivizing the
wrong behavior. For example, many people feel discouraged from liking
content that might be edgy in fear of retaliation from
trolls, or to protect their public image, end quote. And I
(15:00):
can see how that could be helpful. If I were
still on X and I were using my account
to, say, like posts that were made by activists who
are in the LGBTQ community, I might prefer it if
trolls who just want to harass people didn't see that
I was supporting that, although I think public support is
really helpful in those cases. On the flip side, if
(15:22):
let's say you're, I don't know, a justice on a
Supreme Court, as a hypothetical example. You might not want
people being able to see that you've liked comments that
appear to confirm a political bias one way or the other,
since you're supposed to be impartial. I'm not saying that's happened,
just saying that's a use case. Now, I don't think
this change really means much to me personally, because I
(15:43):
left Twitter ages ago and I have no plans to go back.
I feel that Twitter has largely continued to move in
a direction that is just completely in opposition to the
values I have. Not saying my values are right, just
that they're very different from the ones that I see
on Twitter, but for those people who still are on X,
(16:03):
I can see how this could be a welcome change
where, you know, it's just one less thing for you
to worry about getting hassled over. Zack Whittaker of TechCrunch
has a piece about how at least three Wyndham
hotels in the United States have had consumer-grade spyware
installed in their check-in systems, which means those check-in
systems have essentially been capturing screenshots of the login process,
(16:23):
not that different from what we were talking about with
Windows Recall, and then storing these screenshots in a database
that hackers can access anywhere in the world, which means
hackers can comb through these screenshots to get personal information
about guests, including things like potentially credit card information or
at least partial credit card information. The spyware is called
pcTattletale, and it's usually marketed as a way
(16:46):
to keep an eye on someone who's like, you know,
your husband or wife or your kids or partner, because
you don't trust them and you want to see what
they've been getting up to, you know, healthy, wholesome stuff
like that. TechCrunch reports that a flaw in the
app makes it possible for anyone in the world to
access these screenshots if they know how to exploit the flaw,
and that despite a security researcher trying to contact the
(17:07):
developers behind this app, there's been no response to their
inquiries and the flaw has remained in place. TechCrunch
chose not to reveal the specifics about these three hotels
in order to prevent retaliation against employees of those hotels
who may not be at fault because we don't know
why the spyware was on the computers in the first place.
It could be that there was a manager who was
(17:28):
just trying to make sure that their employees weren't goofing
off while on the job. It could be that hackers
used social engineering to trick staff into installing something on
their computers that definitely shouldn't be there. We don't know.
The Pentagon has revealed that Russia has launched something into
space that's in the same orbit as a US government satellite,
and further that this something is likely a counter-space
(17:50):
weapon of some sort. The presumption is that this Russian
spacecraft has the ability to attack satellites in low Earth orbit. So,
you know, fun stuff. Okay, let's end with a fun story.
So this one hits me right in the nostalgia. So
when I was a kid, I had an Atari 2600
game console, and then much later, after the
(18:10):
video game crash of nineteen eighty three, I got a
cousin's old Intellivision system and a dozen or so games,
plus a dozen or so controller overlays, only some of
which corresponded with the actual games I had. So I
felt like I had the best of both worlds, despite
the fact that by that time the Nintendo Entertainment System
was out and was undeniably the superior game console. Anyway.
(18:31):
This week, Atari, which I should add is not really
the same company as the one that was around in the
late seventies and early eighties, announced that it had acquired
the Intellivision brand and a bunch of Intellivision games from
a company that up until now was called
Intellivision Entertainment LLC. So for several years, Intellivision, which
(18:53):
has also gone through major changes in ownership and isn't
really the same company anymore, has been trying to release
a home video game console called the Amico. So Atari
did not purchase the rights to the Amico, so the company
Intellivision Entertainment LLC will continue, but it will change
its name. I don't know what it's changing its name to yet,
but it's going to change its name. The Intellivision brand
(19:15):
and all that stuff is what goes over to Atari,
and Atari has purchased the legacy Intellivision system
and games, so we should see that incorporated in some
way in the not too distant future. I have a
couple of articles for suggested reading for y'all before I
sign off. First up is Mike Masnick's piece on Techdirt.
It's titled The Plan to Sunset Section 230 Is
(19:36):
About a Rogue Congress Taking the Internet Hostage If It
Doesn't Get Its Way. I did an episode about Section
230 back in December twenty twenty. It's a piece
of legislation that was drafted to give protection to Internet
platforms in order to allow the Internet to grow, but
now Congress is debating sunsetting that protection by the
end of next year. Masnick's piece explains why that would
(19:57):
be a very bad thing. The other suggested article I
have is on The Verge, and it's by Lauren Feiner
and Jess Weatherbed, and it's titled The US Government Is
Trying to Break Up Live Nation-Ticketmaster. So the
piece explains how Live Nation has created an insular ecosystem
that is reportedly anticompetitive and locks artists and venues
into using Live Nation and Ticketmaster systems, and how the
(20:20):
US government is possibly going to bring that to an end.
That could be welcome news to all y'all out there
who are sick of paying so-called convenience fees that
are almost as much as the show ticket price itself.
I count myself among you. It is Memorial Day weekend
here in the United States. There will be a rerun
episode on Monday and on Wednesday, because I'll be out
of town. There'll be a new episode of TechStuff
(20:41):
next Friday. I hope all of you celebrating Memorial Day
have a safe and happy holiday. I hope everyone else
out there has a great weekend, and I'll talk to
you again really soon. TechStuff is an iHeartRadio
production. For more podcasts from iHeartRadio, visit the iHeartRadio app,
(21:05):
Apple Podcasts, or wherever you listen to your favorite shows.