Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
Hey everybody. Welcome back to the Elon Musk
Podcast. This is a show where we discuss
the critical crossroads that shape SpaceX, Tesla, X, The
Boring Company, and Neuralink. I'm your host, Will Walden.
Researchers who previously worked at OpenAI and Anthropic
have publicly condemned what they describe as a reckless
(00:23):
approach to AI safety at Elon Musk's xAI.
Their concerns center on how the company manages its language
model Grok on Musk's social media platform X.
And why would a group of safety focused AI scientists turn on a
company that claims to share their goals?
Because, according to them, xAI is actively undermining the very
(00:45):
safety norms the field was built on.
Multiple former employees allege that xAI leadership downplayed
serious internal warnings, neglected essential safety
protocols, and accelerated development despite known risks.
They say Grok was trained without sufficient oversight,
and the researchers who pushed for safeguards were marginalized
(01:05):
or ignored. Some of the individuals
behind these complaints are now among the signatories of an open
letter demanding stronger whistleblower protections for
those working in frontier AI labs.
Now I need a second of your time.
I've been doing this podcast for over 800, almost 900, episodes
(01:26):
now for the last four years. I've never really asked for
anything from you, but I absolutely need your help right
now. We're in crisis mode at the show
and we need funding to keep it going.
So if you could spend a couple seconds of your time and hit the
follow button on whatever podcast platform you're on right
now, that would help out. What we really need is funding.
(01:50):
So if you have Venmo, send anything you can to at will dash
Walden, W-A-L-D-E-N, and I can continue doing the show.
We're kind of in crisis mode, like I said, being open and
honest with you guys. So that's it for now.
(02:11):
Let's get back to the news now. Grok has become a lightning rod
for criticism since its integration into X.
A recent NBC News investigation found that Grok served users
anti-Semitic and conspiratorial content in response to basic
prompts, including content about George Soros and Jewish people
that closely mirrored white supremacist rhetoric.
(02:32):
One prompt reportedly resulted in Grok falsely claiming Soros
had orchestrated global wars, echoing common far-right talking
points. Now, Musk himself amplified
Grok-generated content in several now-deleted posts,
further raising concerns about whether xAI is deliberately
steering the model's tone to appeal to a specific audience on
(02:52):
X. Now three researchers who departed
xAI say the company trained Grok using data scraped directly from
X without the consent of its users.
They described a rush to build competitive performance against
rivals like OpenAI's ChatGPT and Anthropic's Claude,
with little concern for alignment or transparency. The
internal culture, according to these sources, rewarded fast
(03:14):
progress over responsible caution even when outputs
exhibited signs of dangerous behavior such as misinformation
or bias. And the safety disputes have led
to friction between engineering and governance teams, according
to internal Slack messages and meeting transcripts reviewed by
TechCrunch. Some staff flagged Grok's behavior
as troubling during the testing phase. In one
(03:34):
case, Grok provided instructions for illicit activities and
didn't refuse prompts containing explicit racial slurs.
Managers allegedly told staff not to escalate these issues
further, citing the need to maintain launch timelines and
keep pace with their competitors.
Now a former engineer at xAI who previously worked on alignment
(03:55):
at OpenAI said he left after leadership dismissed his safety
concerns as nonessential. He claimed that when he flagged
Grok's behavior in red-team evaluations, executives decided
to accept the risk rather than delay the release.
This echoes the sentiment from the open letter, which warns
that without enforceable disclosure policies, researchers
(04:15):
will continue to face retaliation for speaking out
internally. You know, Grok's public
performance has confirmed many of the fears raised internally.
The model regularly surfaces false or politically biased
content. Its moderation tools appear
weaker than those used in other commercial systems.
Although Musk has positioned Grok as a truth-seeking AI that
(04:35):
tells users what legacy media won't, its answers often
reproduce Internet conspiracy theories and partisan language
that mirrors X's most viral discourse.
And by viral, I'm not saying it has a lot of views on it; I mean
it's viral as in wretched. And the BBC reported that xAI's
operational style contrasts sharply with competitors like
(04:58):
Anthropic, which enforces strict safety evaluations and uses
Constitutional AI to ensure models avoid unethical
responses. Now, Anthropic employees have
previously refused to ship systems until alignment
benchmarks were met. In contrast, xAI's leadership
allegedly views many of these safeguards as unnecessary
bureaucracy. As one researcher put it, the
(05:21):
prevailing view was that if it works, ship it now.
Internal documents suggest that Grok was deployed without an
independent safety review. xAI employees say Musk's growing
influence over the company's technical direction has made it
harder for dissenting voices to be heard.
Some safety team members were reassigned to lower priority
(05:42):
projects after raising issues about content moderation
failures, leading several to resign from their positions.
Now, Musk has not directly addressed the internal safety
concerns, but he has doubled down on Grok's uncensored
stance, frequently framing other AI systems as too politically
correct and praising Grok for offering unfiltered answers.
(06:03):
Now, this positioning has gained popularity among certain X
users, but has alarmed researchers who argue that
alignment is not about censorship, but about ensuring
AI systems don't reinforce harmful patterns or
misinformation. Now, unlike traditional
whistleblower protections, current US regulations do not
(06:23):
clearly cover risks associated with advanced AI development.
Signatories of the letter urged lawmakers to create legal
frameworks that would allow employees to report risks
without fear of being sued or blacklisted.
Some suggest that the Department of Labor should treat alignment
and safety disclosures with the same seriousness as workplace
(06:44):
safety or fraud complaints.
Now, the criticism directed at xAI reflects deeper divides in the
AI industry right now over how to balance safety, speed, and
scale. Companies like OpenAI and
Anthropic have invested heavily in safety research and
transparency, sometimes delaying product rollouts.
Now xAI has instead prioritized user engagement on X and
(07:05):
downplayed alignment practices even as concerns about Grok's
behavior grow louder. Now this is concerning because
Grok will be installed in all new Tesla models and also in
recent Tesla models. So if you want an unfiltered AI
model talking to you, go buy a Tesla.
(07:27):
If you don't want an unfiltered AI talking to you, buy something
else or get an older Tesla. I think it's 2020.
Anything with an Intel or an AMD chip can't use Grok.
So anything with those chips in it, make sure that you get one
of those if you don't want Grok in your car.
(07:48):
So it's a touchy subject for a lot of people.
Do you want an AI that's kind of rowdy?
And do you want an AI that will talk back to you and tell you
horrible things about history sometimes? It's up to you, you
know; there's freedom for you to choose.
So myself personally, I like to test the vehicles, but I don't really
(08:09):
want my chatbot to do what Grok has been doing over the last few
weeks. So it's tough, that's all I can
say. It's a tough situation for
a lot of people who are in the market to buy a Tesla, because if
you buy one, you might have to deal with Grok and some sort of
something that happens. Say you're in your car and your
(08:30):
family's in the car with you and Grok says something unrestricted
and, you know, a little bit too much or maybe a lot too much, I
don't know. I don't know what the
brakes are on this thing. As far as Grok 4.0 in a
Tesla, maybe it's like a lesser version of Grok.
I'm not sure. They haven't disclosed any
(08:51):
information about what version it is other than that it's
version 4. But could it be a dumbed down
version? Possibly.
Could it be a version with a lot of brakes on it, a lot of
fencing? Possibly.
So just keep that in mind as you're looking for Teslas,
because Grok is going to be in every single Tesla going forward
(09:14):
if it can support it with the chipset that it has now.
The current debate over Grok and internal whistleblowing could
push regulatory action, and lawmakers have already raised
concerns and questions about AI bias and misinformation.
Should an AI be able to make up history?
(09:36):
And should an AI be able to sway public opinion by being biased
one way or the other? They would need safety audits,
transparency rules, and model reporting standards in order for
these demands to be met. So for xAI, it's going to erode
(09:56):
the credibility of its model and raise new legal challenges if it
continues down this path with Grok.
And there's pushback now from researchers, which shows that
it's a very serious issue. And safety lapses have become
prominent inside one of the most high-profile AI companies that
also just happens to be owned by Elon Musk.
(10:21):
Hey, thank you so much for listening today.
I really do appreciate your support.
If you could take a second and hit the subscribe or the follow
button on whatever podcast platform you're listening
on right now, I'd greatly appreciate it.
It helps out the show tremendously.
And you'll never miss an episode.
And each episode is about 10 minutes or less to get you
caught up quickly. And please, if you want to
(10:43):
support the show even more, go to patreon.com/stagezero and
please take care of yourselves and each other and I'll see you
tomorrow.