
October 25, 2024 20 mins

Description: The former senior advisor for OpenAI's now-dissolved AGI Readiness division warns that no one, not even OpenAI, is actually ready for artificial general intelligence. Plus, the US unveils some "guardrails" for government use of AI tools, Montana's Attorney General files a new lawsuit against TikTok, and Norway increases the minimum age for social media users to 15. And more!



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio. Hey there,
and welcome to TechStuff. I'm your host, Jonathan Strickland.
I'm an executive producer with iHeart Podcasts and how the
tech are you. It's time for the tech news though,
for the week that ends on Friday, October twenty fifth,

(00:25):
twenty twenty four, and there are quite a few pieces
this week about OpenAI. Kylie Robison of The Verge
has an informative article titled Departing OpenAI leader says
no company is ready for AGI, and that's a really
good start for our OpenAI discussion. So, first of all,
AGI stands for artificial general intelligence. So it's an example

(00:51):
of a term that on the surface sounds like it's
fairly straightforward, but when you start to get into the weeds,
you find out it's actually really vague and poorly defined.
So generally AGI references a computer system that's capable of
processing information in a way that's similar to how we
humans think. That's the most vague way of defining it. Now.

(01:14):
AGI does not necessarily mean superhuman intelligence, though that's typically
where the conversation ends up going. An AGI system could
theoretically be pretty darn stupid in fact, but still process
information in a way that mimics how we humans do it. Anyway,
it's hard to define AGI because it's hard to just define

(01:36):
regular old intelligence, like how do you do that before
you even get into artificial intelligence. But recently OpenAI dissolved
its AGI readiness team, and that in turn follows a
decision that the company had made earlier this year to
nix its super alignment team. And both of these teams

(01:57):
focused on ways to develop AI responsibly and to
identify and mitigate potential risks relating to AI. Turns out
that's not necessarily a revenue driver, right? Putting limitations on
your development process can be a real drag. Sure, I
mean those limitations might be there in order to prevent

(02:18):
AI implementations from doing really bad stuff, But come on,
what are the odds that's going to happen? Oh, we
don't know the odds. We actually don't know
how likely it is that AI could mess things up. Well,
then that's just as good as knowing there's no chance
that things can go wrong? Am I right? Just a note,

(02:38):
I am not right. Anyway, snark aside, this move is
yet another nail in the coffin of the original mission
for OpenAI. Now, you might recall that way back
when it was first founded, OpenAI was just a simple, humble,
little country nonprofit company with the mission to conduct
AI research in a way that stood to benefit

(03:00):
humanity the most with the least likely potential for bad consequences.
I don't know why I decided to go to simple
country lawyer mode for that, but anyway, later on, OpenAI
spun off a so-called capped-profit company,
and this company would in turn still be governed by
the larger nonprofit organization. It's just that the for profit

(03:22):
company would be operated for profit to help fund the research.
And it didn't take very long for the for profit
arm to eclipse the nonprofit side, and the for profit
side essentially shed itself of the shackles of restrictions and caution. Why? Well,
the big reason is that AI, as I have said

(03:44):
many times, is wicked expensive. OpenAI churns through money
at a rate that is hard for me to conceive.
We're talking billions of dollars per year spent just to
run operations, and I think they were on track to
lose like five billion dollars this year before they ended

(04:06):
up having a massive fundraising round. Nonprofits are just incapable
of raising enough money fast enough to meet that kind
of demand, Like, no one's going to pour money into
a drain like that over and over, you know, perpetually.
So again and again, OpenAI has chosen the pathways

(04:27):
that are most likely to lead to heaps of cash
while downplaying the concerns of people within and outside the
company about the potentially dangerous decisions being made. In Robison's
piece in The Verge, she covers how the former senior
advisor for AGI Readiness, Miles Brundage, warns that no company
in the world, including OpenAI, is prepared for AGI. Further,

(04:54):
he warned that governments around the world really need access
to unbiased experts who are in the field of AI
while they are actually formulating policies and regulations, because as
it stands, the people who are extremely passionate about guiding
policy tend to be folks like Sam Altman, the CEO

(05:15):
of OpenAI, and as you might imagine, these people
are not unbiased. It really creates a conflict of interest, right,
It seems like a pretty safe bet that Altman's guidance
to leaders around the world will focus on policies that
will help, or at the very least not hinder, OpenAI
while potentially having a larger impact on the company's competition.

(05:38):
So Brundage is essentially saying, like, the fact that OpenAI
is making these decisions should cause some concern, and
leaders around the world need to be savvy when they
are consulting with various experts and making sure that the
advice they receive isn't being guided by self interest. Another

(05:59):
good read is an opinion piece from John Herrman of Intelligencer.
I guess you could argue that it's not really an
opinion piece, but I feel like Herrman injects a lot
of his own opinions in it, and I am pretty
much on the same page as Herrman. Well, anyway, the
article is titled Microsoft has an OpenAI problem. Now,
I mentioned in an earlier TechStuff news episode that

(06:21):
Microsoft seems to have had kind of a rethink
when it comes to its relationship with OpenAI. In Facebook terms,
I think we would say that the current relationship status
would be it's complicated. So Microsoft has invested billions
of dollars, like nearly fourteen billion bucks, in OpenAI

(06:42):
so far, and Herman points out that the agreements between
the two companies happened relatively quickly and without a whole
lot of thought as to how that process of funding
would actually work, which seems pretty loosey-goosey to me.
And now Microsoft and OpenAI's partnership hinges on whether

(07:02):
OpenAI develops AGI. So essentially the agreement is that
this partnership between the two companies will dissolve if OpenAI
creates an AGI product of some sort, allegedly because
OpenAI has a real, deep concern that a company
like Microsoft might misuse such a powerful and potentially destructive

(07:25):
tool as artificial general intelligence. And sure, I think that's possible,
Like I think if Microsoft had access to AGI, there
could be some pretty negative consequences. But it's not like
I have an enormous amount of faith in OpenAI either.
It's not like I look at them and think, oh no,
I want them to have the keys. I don't want

(07:47):
anyone to have the keys. I don't want it to
be a thing. But I guess that's not an option.
But anyway, when you're talking about a company like OpenAI,
which has actively been dismantling the systems it put in place
to ensure the safe development of artificial intelligence, you can't really
be advocating that they're the ones who should be, you know,
the stewards of AGI. Herrman cites a Times article that

(08:10):
points out that OpenAI's board of directors actually has
the authority to determine what AGI is and when, if ever,
OpenAI achieves it. So, at least hypothetically, even if
outside parties would disagree with the definition that OpenAI's
board comes up with, that wouldn't matter. If OpenAI said, oh,
we did it, we created AGI, then even if literally no

(08:33):
one else in the world said that's definitely AGI, it
would be enough for OpenAI to sever its relationship
with Microsoft. Now, the implication here is that this isn't
really some sort of protection for OpenAI to make
sure that AGI is not unleashed upon the world in,
like, the next build of Windows or something. It's really
more about creating kind of a switch, a kill switch,

(08:56):
a bargaining tool with Microsoft, so that when it comes
to renegotiating with Microsoft, OpenAI is able to essentially say, hey,
if you don't give us more money, we'll say we
made AGI, and then that's it. We walk and you
can't do anything about it. So that's a possibility. I
don't know if it's a reality, but it's possible. It's

(09:16):
almost enough to make one jaded, isn't it? Anyway, the
article's well worth a read. Again, it's Microsoft has an
OpenAI problem, and it's in Intelligencer by John Herrman. Okay,
I got a lot more news to go through, but first,
let's take a quick break to thank our sponsors. All Right,

(09:42):
we're back, and David E. Sanger of The New York
Times has a piece titled Biden Administration outlines government guardrails
for AI tools, and that brings us back around to
those government backed policies relating to the development and deployment
of artificial intelligence. So essentially, these guardrails lay out the
scenarios in which using AI would be appropriate and allowed,

(10:04):
and the scenarios in which it really absolutely should not
be allowed. So, for example, using AI to detect and
deter cybersecurity threats, that's pretty okay. Using AI to develop
a new generation of fully autonomous weaponry, that one falls
into the no-no category. So the guardrails put
a lot of responsibility on individual departments and agencies to

(10:26):
conduct their own reviews and determine in which use cases
AI might be appropriate and in which ones it would
not be. There's a lot more to the guardrails. There's
like thirty eight pages of material that's available for the
public to read. It includes a call for the
US to attract more AI experts rather
than have them work for a rival nation like China

(10:47):
or Russia. But at least some of the information about
these guardrails is classified, so I have no idea what
could be in those. Also, as Sanger points out, the
efficacy of these decisions is somewhat unknown since the United
States is currently in an election year and the next
president might just make all these guardrails go away, so

(11:08):
we don't really know if this is going to matter
at all in the long run. Fun times. The Attorney
General for the state of Montana has filed a new
lawsuit against TikTok. It's a new lawsuit, but it's an
old tune. They're accusing the company of purposefully promoting addictive
and harmful content to young users through the recommendation algorithm.
The Attorney General alleges that TikTok essentially has lied about

(11:32):
the nature of promoted content in violation of the Montana
Consumer Protection Act. So essentially, the attorney general says that
TikTok knew that users, including young users, would encounter content
that's just unsuitable for people who could be as young
as thirteen years old, but pretended that it had safeguards
in place to prevent such young users from encountering, you know,

(11:55):
mature or disturbing content. An unnamed spokesperson for TikTok disputes
the allegations and says they are, quote, inaccurate and misleading,
end quote. You might recall that previously, the state
of Montana actually banned TikTok entirely, but then a judge
later overturned that ban on the grounds that it violated
the First Amendment, that's the right to free speech. Several

(12:16):
other states have filed lawsuits against TikTok with similar accusations. Essentially,
it boils down to TikTok's algorithm promoting inappropriate and sometimes
outright dangerous content to impressionable young users. Meanwhile, of course,
TikTok is facing down a, well, a ticking clock situation
here in the United States. Currently, the app faces the
possibility of a nationwide ban unless its Chinese parent company

(12:39):
ByteDance, divests itself of TikTok. So it might be
that by this time next year, lawsuits against TikTok, at
least here in the United States, will no longer really
be relevant, or at least not as straightforward. Speaking of
bans and social media, Norway is increasing the minimum age
allowed for folks to use social media. So previously the
minimum age in Norway was thirteen, but now it's going

(13:01):
to be increased to fifteen years old out of a
concern that social media can have a harmful impact on
young users. Of course, making that law is one thing,
but enforcing it is entirely something else. I mean, as
Miranda Bryant of The Guardian reports, the Norwegian Media Authority
found that more than half of nine year olds in

(13:21):
Norway are already on social media, and as you go
up in age, the percentage increases, so already there are
tons of kids in Norway who are actually using social
media when they're underage, like they're below the age of thirteen,
even though the law says they ain't supposed to. So
apparently part of the plan is to change the way

(13:42):
that social media companies verify the age of users in Norway,
so that it's not a trivial task to just fib about
one's own age. Admittedly, that's a pretty low bar. I mean,
if I were a kid and I was trying to
make an account on a social media platform and the
age verification was just asking me, hey, when's your birthday,
I could probably do some pretty quick math and fudge

(14:04):
that date enough so that it would let me in. Anyway,
while I have my doubts about how effective this new
policy is going to be, I absolutely do feel empathy
for those who want to shield kids from the ravages
of the algorithm, because there are days when the various
recommendation algorithms that I encounter make me want to go
and hide in the woods, possibly in Norway. Allison Morrow

(14:26):
of CNN has a piece titled almost anything goes on
social media as long as it doesn't make billionaires feel
even a little bit unsafe, And yeah, billionaires live by
different rules than everybody else. In fact, they often are
the ones who decide what the rules are. So while
people like Elon Musk are extremely vocal about the importance

(14:48):
and sanctity of free speech, they're also quick to say no,
not like that if the free speech includes stuff that
is potentially harmful to them. So the case Morrow is
focusing on is that of Jack Sweeney. He's a college student
who has created numerous social media profiles on different platforms
that are dedicated to tracking specific billionaires' private jet routes, which,

(15:12):
by the way, is publicly available data. Sweeney is not
hacking into the mainframe or anything like that. All he's
doing is just scraping data from public sources and posting
it to social Anyone could do this, but various platforms
have removed Sweeney's accounts, often without giving him any forewarning

(15:32):
or explanation, and when asked, they usually give a pretty
hand-wavy explanation that the information poses a risk to
certain individuals. But I mean, if you happen to know
that Elon Musk flew into your town, you might track
him down and assault him or something, or maybe, you know,
just yell at him because your Cybertruck don't go no
mo'. I guess that's the fear. Like, how do you

(15:53):
know where the person specifically is? You just know where
their jet landed. Anyway, Morrow points out that the various
platforms seem to have very little problem with hosting stuff
that could be extremely harmful to the rest of us,
whether that's misinformation or hate speech or whatever. That stuff, well,
that stuff's not likely to touch the billionaires, so there's
no reason to worry about it. Let that stay up

(16:15):
on those profiles. We all just have to fend for ourselves.
But publicly available information about billionaires, now you're talking about
dangerous information. Anyway, read the article. You can already tell
what my opinion is about all this. I'll just quote Morrow
directly for this little bit, because you know we're aligned
on this quote. It's just not clear that Meta cares

(16:36):
as much about its users' privacy and well-being as
it does about Zuckerberg's, end quote. Preach, Morrow. The US Federal
Trade Commission created new rules this past summer that give
the agency the power to fine people who post, buy,
or sell fake reviews online. I'm sure you're aware that
fake reviews are a real problem. Once upon a time,

(16:57):
I think the average person could go to an online
marketplace like Amazon and lean on user reviews when making
purchasing decisions. But over time, companies have found ways to
incentivize folks to write good reviews. Sometimes it's fairly innocent.
You know, you might have a little card in the
box that your product came in saying something like, hey,
if you like this, please consider writing a review. That

(17:18):
seems pretty innocuous, but it can quickly get more murky,
like you might get one that says, hey, if
you write a five-star review, we'll give you a
coupon for some other product or service. And sometimes it
just gets downright overtly wrong, like write a positive review
for this thing that you might not even use or own,
and we will pay you money. Well, the new rules

(17:41):
that the FTC passed are now in effect, and essentially,
if you are caught trading in fake reviews, you could
face a fine of up to fifty one thousand, seven hundred
and forty four bucks, and that includes AI-generated reviews.
If they find that you've been doing that, you could
be paying some pretty hefty cash. Now, on top of that,
companies are not allowed to suppress negative reviews. They're also

(18:03):
not allowed to promote positive reviews that they suspect are fake. Now,
how the FTC plans to go about proving that a
company was aware that a review was fake, I'm sure
that'll be challenging on a case-by-case basis. But
the rule is a hefty piece of writing. It is
one hundred and sixty three pages long, and no, I
have not read the whole thing, so I'm not prepared

(18:24):
to give my full opinion as to whether or not
this is a good idea or a bad idea, or
a good response or an ineffective response. I need to
be able to read the whole thing before I come
up with that. So it looks like I've got my
reading material for my upcoming flights. Yay, okay. I got
a couple of articles I want to recommend for your
reading pleasure. One is a Jon Brodkin article on Ars

(18:46):
Technica, and it's titled Cable companies ask Fifth Circuit to
block FTC's click-to-cancel rule. I think that's the
least surprising headline I've ever read. Obviously, cable companies don't
want there to be a straightforward click-to-cancel
route for users. But yeah, read the article because it
kind of explains what the situation is and how the

(19:08):
specific courts that were chosen could end up having a
massive impact on this potential rule. And finally, I have
one other reading recommendation. It's an article by John Walker
of Kotaku. It is titled Roblox's new child safety changes
reveal how dangerous it's been for years. So it's an

(19:29):
eye-opening piece that applauds these safety changes, but it
does ask, hey, how is it that this wasn't already
a thing? So check that out as well. I hope
all you out there are doing well, and I will
talk to you again really soon. Tech Stuff is an

(19:52):
iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio
app, Apple Podcasts, or wherever you listen to your favorite shows.
