Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to Tech Stuff, a production from iHeartRadio. Hey there,
and welcome to Tech Stuff. I'm your host, Jonathan Strickland.
I'm an executive producer with iHeart Podcasts, and how the
tech are you? It's time for the tech news for Friday,
October fourth, twenty twenty four. So this week, California Governor
(00:29):
Gavin Newsom vetoed a bill that was aimed at putting
some guardrails up on major AI companies in the state
of California. Newsom says that he vetoed it because he
felt the bill was too weak, that it wasn't addressing
certain elements of AI that he thought were really important,
(00:50):
and instead was kind of framing it the wrong way. Now,
tech journalist Casey Newton says in his tech newsletter, which
is well worth subscribing to (and I have no connection
to Casey, I just really like his work), that most
folks he chatted to are not really
buying Gavin Newsom's explanation for why he vetoed the bill.
(01:15):
The bill would have mandated safety testing for AI models
that had cost more than one hundred million dollars to train,
so that would have covered the major players in the space,
most of which have been headquartered in California. So Newsom
has said, well, yeah, I mean these are really big models,
but what about smaller models that are trained to do
(01:37):
things that are inherently risky? This bill has no coverage for them.
And it does go deeper than that. In addition, the
bill would have allowed judges to levy punishments for cases
in which AI was found to have caused harm. You
might think, well, of course they should be able to
do that, but until it becomes established law, you can't
(01:58):
just hand out punishment. If there's no law covering what
is at least being perceived to be a crime, then
that's an issue. Casey Newton does go on to
say that in recent months Gavin Newsom (Newsom and Newton,
a little too close, anyway) has signed eighteen
other AI bills into law, so that is significant, right? Like, yeah,
(02:22):
this one he vetoed, but he has signed more than
a dozen regulations into law. Though for some of those laws,
AI companies are going to have a full year
of grace period before they go into effect, because they
won't become effective until twenty twenty six.
These new laws lay that groundwork I was talking about.
(02:46):
They create the legal pathways for victims to pursue with
regard to AI related abuse. So, for example, if someone
were to use AI to generate nude images of you,
and then they demand money from you, saying, hey, we're
going to release these pictures and they look real and
they are of you, and you're naked. So unless you
(03:07):
pay us, we're going to flood the Internet with these things.
Well that would now be a crime. And again you
would think, well, that sounds like that should be a
crime already, it's blackmail, and yes it is, but the
AI generated element of it, you know, a lawyer could
have argued, well, this isn't really a picture of you.
It wasn't taken of you, it's generated by AI. It's
(03:28):
not really a picture of you, and that creates this
wiggle room. So that's what these laws were meant
to do, is to kind of eliminate those gaps. So
really it's all about, you know, filling in the blanks
for stuff that's clearly wrong but until recently wasn't actually
defined as a criminal act. Amazon got a bit of
a win in court this week, so the company saw
(03:50):
a partial dismissal of a lawsuit that the United States
Federal Trade Commission, or FTC, has brought against it. So
the heart of the matter is that the FTC has
claimed Amazon has engaged in anti competitive behaviors in order
to acquire and hold a dominant position in online marketplaces,
(04:12):
and Amazon asked for a dismissal of the court case,
arguing that the FTC did not have sufficient evidence to
show that the company was actually harming consumers. Because, I mean,
that's ultimately what anti competitive stuff is supposed to be about:
the reduction of competition in the market hurts consumers.
You know, competition is good for consumers. The judge has
(04:34):
dismissed at least some of the FTC's charges, but not
all of them, and I don't know which ones because
the judge issued this as a sealed ruling, so it's
not public information about which elements of the lawsuit have
been dismissed and which ones remain. But the judge also
denied another one of Amazon's requests, which was to combine
(04:55):
both a trial and the FTC's proposed solution to this
alleged anti competitive issue in a single case. So the
judge ruled that those will instead be two separate
cases, which suggests to me that there are at least
enough charges left in the FTC's case
against Amazon to necessitate that ruling, right? So, I don't know
(05:20):
how much of the case was dismissed, but Amazon gets
a little bit of a win there. It was a
huge week for OpenAI. So over the
last few months, I've seen lots of different analysts
projecting that OpenAI could find itself out of
money unless it had another significant round of investment, and
(05:42):
that round happened this week, and it was indeed significant.
OpenAI raised around six point six billion, with a
B, dollars in investments. About three quarters of a billion
dollars came from Microsoft, which continues to financially back
OpenAI pretty enthusiastically. Now, this huge influx of cash has
(06:05):
analysts now valuing OpenAI at a staggering one hundred
and fifty seven billion dollars. Once again, I'm left to
grapple with a paradoxical situation, because here's a company that
really burns through cash fast. And that's not blaming OpenAI.
I mean, I mentioned earlier this week in other Tech
(06:27):
Stuff episodes that AI R and D is mega expensive. It
is so expensive to not just do research and development
in AI, but also to run AI applications. It costs
a lot of money. And some folks were saying that
OpenAI might actually spend itself out of business by
early next year. But now we turn around and it's
(06:49):
being valued at one hundred and fifty seven billion buckaroos.
The finances just don't make sense to me. Like, I
get that in the moment it's worth a
huge amount of money. But this is the same company
that people are saying is going to spend itself out
of business, and it's not like anything there has changed, right?
It's still going to have to spend huge amounts of
(07:09):
money and it's not going to bring in enough revenue
to cover the costs. So this will power OpenAI
for some foreseeable future, but we're still going to have
to wait and see if OpenAI can get to
a situation where it will generate enough revenue to cover
the costs of doing business, or if we'll just get
(07:30):
right back into the situation where at some point OpenAI
needs to hold another significant round of investments in
order to keep going. It's kind of crazy. On
top of the funding news, by the way, Reuters reports
that OpenAI also secured a four billion dollar line
of credit. So on top of the six point six
(07:51):
billion that was invested into it, it means that it
has more than ten billion dollars of liquidity to its
name right now, which is a big old... yeah, ten
billion dollars. Now, again, that doesn't guarantee that OpenAI
is going to be able to leverage all this money
and turn AI into, like, a sustainable business that actually
(08:13):
can, you know, stand on its own. We'll still have to
see. Or maybe we'll get to a point where not
only are they going to need investment, that investment is
going to go directly to paying off interest on a
four billion dollar line of credit. Yikes. Meanwhile, over at
OpenAI's former headquarters, Elon Musk, a guy who never
(08:36):
met a grudge he couldn't hold, held an event for
his own AI startup, which is called xAI, and
I would say that it's pretty much in character
for Elon Musk to hold a recruiting event for his
own AI startup in the original OpenAI headquarters because,
(08:59):
you've got to remember, Musk was a co-founder of
OpenAI years ago, but he had a massive falling
out with others in the organization and he bailed. There
were rumors that he was attempting to essentially take over
OpenAI, and when he encountered resistance, he decided he
would go and make his own AI company. It's one
of those like Futurama type moments. He has since argued
(09:22):
that OpenAI has largely abandoned its initial mission of
being an open, transparent, non profit organization that's dedicated to
the safe and responsible development of artificial intelligence. And y'all,
I do not agree with Elon Musk on many things,
but I do think that particular criticism is one hundred
percent on target. I think there's no way to argue
(09:44):
against that. OpenAI has certainly transformed dramatically from that
open and transparent nonprofit organization into very much a
for-profit company that obfuscates a lot of what
it works on. Now, that's not to say that I
think Musk would have done things differently had he actually
(10:05):
been the one in charge of OpenAI. I think
if he had won that struggle, if in fact there
was one a few years ago, and if he had
become the head of OpenAI, he would either be
doing something not too different from what OpenAI is
doing now, or he would have run the company out
of business, one of the two. And the reason I
(10:25):
say that is, again, AI is really expensive. Like, it
just burns through money so fast because of the technology
you need to run AI applications. First of all, the
chips are in short supply, so those are expensive, and
then you have the electricity needs. Those are expensive. It's
just hard to do. And if you're going to run
(10:47):
it as a nonprofit, it means you do need to have
that constant influx of cash in order to fund your work.
And I'm not sure that Elon Musk would have managed
to do that. So it's not that I'm saying he
would have mismanaged it; rather, I don't know how
you get a nonprofit like that to work. Anyway, Musk addressed a
(11:09):
group of engineers at this event to recruit them for xAI.
At the event, Musk talked about his desire to create
a quote digital superintelligence that is as benign as possible
end quote. He also said he thought we'd achieve artificial
general intelligence within a couple of years. That seems overly
ambitious to me. But then again, Musk has also frequently
(11:30):
made some rather grandiose predictions about AI that just haven't,
you know, shaken out. Like, specifically in the autonomous car space,
he thought that we were going to hit a real,
true autonomous car future much earlier, and, you know, obviously
we're still not there yet; he thought it was
already going to be here, and that has not happened.
(11:50):
So I would not bank on artificial general intelligence within
the next couple of years. Maybe. I mean, I don't
know for sure. I just know that it's a really
hard goal to hit. He also mentioned the desire to
create a supersonic passenger aircraft company in the future. That
probably merits a separate discussion. That's a really tricky thing.
(12:13):
But yeah, that's what happened with his recruiting event this week. Okay,
I've got a couple more stories to talk about Before
I do that, let's take a quick break. We're back.
So Victoria Song of the Verge has a piece titled
(12:36):
College students used Meta's smart glasses to dox
people in real time. Now, as that headline indicates, this is about
how a pair of students used some AR glasses that
are outfitted with cameras and Internet connectivity to essentially run
an app that they had built that relies on things
(12:56):
like facial recognition and personal ID databases in order to
return information about the people that are within view of
the glasses' camera. As Song writes, quote, the tech works by
using the Meta smart glasses' ability to livestream video to Instagram.
A computer program then monitors that stream and uses AI
(13:19):
to identify faces. Those photos are then fed into public
databases to find names, addresses, phone numbers, and even relatives.
That information is then fed back through a phone app
end quote. So the students call their tech I-XRAY,
and I'm sure you could immediately imagine how that technology
(13:41):
could be abused. And in fact, there are very few
use cases that are benign, right? And the students have
stressed they're not releasing this technology. This is not meant
to be an app that you're going to be able
to download and then walk around and know everybody's secret identity.
They recognize how this technology is inherently abusable. Like, it's, again,
(14:05):
very hard to use it in a way that isn't
malicious or at least irresponsible. So just imagine someone wearing
glasses like these and then pretending to know complete strangers
because they've got on their phone a quick dossier about
the person. They've got their name, their address, they have
(14:27):
relative names, all that kind of stuff. They're able to
actually reference this. Like I used to do a goofy
version of this at stores where I would walk in
and like the employees would have a name tag on,
and I would just address them by their name tag
in the store, Like not outside that's creepy, but in
the store. And sometimes they would forget that they were
wearing a name tag. They're like, how did you know
(14:48):
who I am? Do we know each other? I'm like, no, you're
literally wearing your name on your shirt, and they say, oh,
right, right. You know, just one of those moments where
you're just not even thinking about it. Well, as
silly a little interaction as that is, you can imagine
one being much more serious. Let's say it's at a bar.
You can easily imagine someone trying to prey upon people
(15:09):
at a bar by pretending like they either know this
person from way back, or they know someone that knows
this person, and they're trying to get an in that way.
So the students say their intent was to raise awareness
that this capability isn't just some hypothetical future technology. I mean,
this is something people have been warning about for a while,
(15:30):
but the students are saying, listen, we're done warning. It's here.
We did it. It is possible. And even though we're
not gonna do anything with this technology,
that doesn't mean the next person won't. So
you have to be aware of what this technology can do.
I think it also again goes back to showing how
terrible a job the United States has done when it
(15:53):
comes to citizen data privacy. It's criminal, if you ask me,
because the fact that there have been no real rules
about this makes things like this totally possible. And in fact,
like you could even imagine a much deeper dive for
this kind of thing, because those databases out there are
(16:13):
enormous and comprehensive. Even for people who aren't online, if
they're showing up in pictures that are on friends', you know,
social profiles or whatever, and they're identified in the photos,
that's enough. They don't even have to participate directly in
the system to be abused by it. So yeah, pretty
sobering example of how technology can interfere with privacy and
(16:38):
potentially put people at risk. Google is apparently rolling out
a verification feature on search results. Jess Weatherbed, also
of The Verge, reports that some users are getting results back
that include blue check marks next to certain entries. Those
check marks indicate sites that Google has verified as being
a legitimate business. So if you're searching for something, let's
(16:59):
say, I don't know, durable camping equipment,
when you get your search results back (if you're
part of this rollout, anyway), you would
see that some of the companies that are listed would
have this blue check mark. Other companies may be ones
that are trying to pose as a
more established, reputable company; they're not going to have that
(17:21):
check mark. So it's an immediate visual indicator of which
businesses are trustworthy or at least more likely to be trustworthy.
So it's really all about identifying businesses in this case.
That's it, not like people or anything like that. And
it sounds like this initiative is in a very limited rollout.
Not everyone's going to see check marks in their results,
(17:42):
and I did quite a few different searches
just to see if any of these popped up for me,
but no matter what I searched for, I didn't get
anything that had check marks on it. So I am
not part of this rollout, and of course there's no
guarantee that Google will ever roll it out to the
general public. It may just be that this is a
test and nothing else comes of it, but we'll have
(18:02):
to keep our eyes out. Finally, for some recommended reading,
I suggest Eric Berger's piece in Ars Technica. It is
titled NASA is working on a plan to replace its
space station, but time is running out. So the current
plan for the International Space Station is for it to
fly into the sunset, or more accurately, for it to
(18:22):
deorbit in twenty thirty. But unless the pace really picks
up and soon, that's going to happen without a new
space station taking its place, which would mean the good
old US of A would no longer have its own
research facility in orbit. Complicating matters is the fact that
NASA also has plans to return to the Moon and
(18:43):
potentially to set the stage for further human exploration, potentially
to places like Mars. So I recommend reading Berger's article
to get a full picture of the situation. That's it
for this week. I hope all of you out there
are doing well, and I'll talk to you again really soon.
(19:06):
Tech Stuff is an iHeartRadio production. For more podcasts from iHeartRadio,
visit the iHeartRadio app, Apple Podcasts, or wherever you listen
to your favorite shows.