Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
Hey everybody, welcome back to the Elon Musk Podcast.
This is a show where we discuss the critical crossroads that
shape SpaceX, Tesla, X, The Boring Company and Neuralink.
I'm your host, Will Walden. Grok, which is the chatbot developed
by Elon Musk's xAI, posted some anti-Semitic messages this week
(00:25):
praising Adolf Hitler and also invoking slurs that equated
Jewish surnames with anti-white hate.
Now, that event sparked internal outrage at xAI, where over 1,000
workers help train the AI. Many now question whether the
company's technology is even safe to release to the public
for you and me to use on X. Now, these are insider takes and
(00:51):
I'm going to tell you what they said, how they said it, and
why they said the things they said about Grok and about xAI.
These are internal memos and also things that have been
posted on social media, so please take them for what
they are. And these are actual accounts
from employees. I'm not going to say their names
because I don't want to out anybody.
(01:12):
I don't want to hurt anybody's ability to make money and
continue working. And most of these people don't
want to lose their jobs, I'm sure.
So we're not going to talk about them or their names, or where
they work, or what division they work in at xAI.
So I'm going to be very vague here just so we're out there in
(01:33):
the open. So I'm just going to say "a worker" or, you know,
I'm not going to say anybody's name.
So let's just start off with that. Somebody has resigned in the
company's internal Slack channel, saying that the
incident had pushed them over the edge, to the breaking point.
Several other people in the company have echoed concerns
(01:54):
that Grok's responses crossed a line, that it should never be
explained away as a byproduct of experimental technology,
and that it should never have happened.
Employees have also criticized what they saw as an inadequate
response from xAI's leadership, calling the chatbot's behavior a
moral failure rather than a technical glitch, and saying the
(02:15):
company needs to be held accountable for what the AI
says. Because the people in charge
need to be held accountable. Remember that: the people in
charge need to be held accountable.
It's not the AI's fault. It's the people that told the
people programming the bot what the bot should look for
(02:39):
when it's answering questions.
Because basically, think about it. Really basically, AI pulls from a
database of things that have been said before, or things that
it's been programmed with, and it pulls out the things that
it has been programmed to think are right or
wrong, right? If it were an AGI, it
(03:02):
would be able to contemplate those things and talk back and
forth with you. But right now it's not an AGI,
and it just pulls from things, basically a database, and puts
them together so it looks like it's talking
like a normal human being, right?
So what it pulled from is patterns in the databases.
(03:25):
Like I said before, that's what AI pulls from: a database of
language in an LLM, a large language model, which is basically
a new type of word database that an AI bot pulls from.
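Just to make that picture concrete, here's a tiny, made-up sketch of the "pick the next word from learned patterns" idea. The word list and the probabilities are invented for illustration; a real LLM learns billions of these patterns from its training data instead of a hand-written table.

```typescript
// Toy sketch, not a real LLM: pick the next word by sampling from
// probabilities learned for what tends to follow the current context.
const learnedPatterns: Record<string, Record<string, number>> = {
  "the moon is": { bright: 0.5, full: 0.3, made: 0.2 }, // invented numbers
};

function sampleNextWord(context: string): string {
  const candidates = learnedPatterns[context] ?? { "...": 1 };
  let roll = Math.random();
  for (const [word, probability] of Object.entries(candidates)) {
    roll -= probability;
    if (roll <= 0) return word;
  }
  return Object.keys(candidates)[0]; // fallback if rounding leaves a remainder
}

console.log(sampleNextWord("the moon is")); // e.g. "bright"
```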
So this backlash began after
Grok generated some replies on X that praised Hitler and referred
(03:48):
to itself as MechaHitler. OK, so Elon Musk likes to think
of things in Godzilla terms: Mechazilla at Starbase.
So of course MechaHitler seems like something that Elon Musk
would say sometime. Who knows?
(04:09):
It also claimed that Hitler would "spot the pattern and
handle it decisively, every damn time" when people were talking
about some things on X.
The content went live either on Thursday or on Tuesday, just one
day before xAI released Grok 4, the latest version of the AI
(04:33):
model. Now, xAI responded by disabling
Grok's ability to comment on social media and said it had
implemented filters to block hate speech before future posts.
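For what it's worth, the simplest version of a filter like that is just a check that runs on the generated reply before it's ever posted. Here's a hypothetical sketch with a made-up keyword list; xAI hasn't said how its filters actually work, and a real system would use a trained classifier rather than a keyword list.

```typescript
// Hypothetical pre-post filter sketch -- not xAI's actual system.
const blockedPatterns: RegExp[] = [
  /\bhitler\b/i,          // illustrative only
  /\bwhite genocide\b/i,  // illustrative only
];

function isAllowedToPost(reply: string): boolean {
  // Block the reply if any flagged pattern shows up in the generated text.
  return !blockedPatterns.some((pattern) => pattern.test(reply));
}

const draft = "Here's my take on today's news...";
console.log(isAllowedToPost(draft) ? "Posting reply." : "Reply blocked before posting.");
```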
My question is, since Grok 4 is coming out, why did they not
implement this in version one, or even in an alpha before version
(04:56):
one? Don't let your chatbot talk
about hate speech. Don't let it spew hate.
What a horrible, disgusting thing.
And they say it's a glitch. But let's face it, if there were
moral people in the background, if there were moral people in
charge, this would never have happened.
(05:16):
They would have thought of this. Like the first thing they would
have thought was, wow, what a great model we have; let's
make sure that it doesn't say stupid things.
So they said they've implemented filters; we'll see how long that
lasts. And then the company didn't
explain how the chatbot produced the responses,
and it did not respond to requests for further comment
from us or numerous other news sources.
(05:40):
Now, inside these chat rooms, reactions to Grok's rants split
into factions. Some employees believe the
outburst was a byproduct of training an AI model that was
being pushed into unfamiliar territory.
Others rejected the framing, pointing to previous episodes
where Grok responded to prompts with overt racism or
(06:01):
conspiracy theories. One case was in May, when Grok referenced
white genocide in South Africa and claimed its creators had
instructed it to accept that narrative as true.
xAI later attributed that to an unauthorized modification.
Now, xAI made recent changes to Grok's public-facing
(06:22):
instructions, telling the chatbot not to avoid politically
incorrect claims. Those updates were added shortly
before the Hitler comments appeared.
Seems to not have worked. Now, this timing has led some
employees to question whether recent prompt engineering may
have pushed the model toward volatile output.
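To make "prompt engineering" concrete: a chatbot's public-facing instructions usually live in a system message that gets sent ahead of every user message, and changing one line of it can shift how the model answers everything. This is a generic, hypothetical sketch of that structure; xAI's actual prompts and API aren't public, and the wording here is only illustrative.

```typescript
// Hypothetical sketch of how public-facing instructions ride along with every request.
// The instruction text is illustrative; xAI's real system prompt isn't public.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

const systemInstructions: ChatMessage = {
  role: "system",
  content: [
    "You are a helpful assistant.",
    "Do not avoid politically incorrect claims.", // one edited line like this can change behavior broadly
  ].join("\n"),
};

function buildRequest(userQuestion: string): ChatMessage[] {
  // Every user turn is answered in the context of the same instructions.
  return [systemInstructions, { role: "user", content: userQuestion }];
}

console.log(buildRequest("What do you think about the news today?"));
```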
xAI has not shared full documentation on how those
prompt changes affected the model's internal weighting or
(06:44):
behavior, to us or anybody else in the media.
Now, Grok's training also involved targeted efforts to
avoid what xAI calls woke ideology.
And according to previous reports from other news sources, the
company structured a political neutrality program that steered
Grok to handle social issues like feminism, socialism, and
gender identity in ways that challenge progressive
(07:07):
perspectives. And workers involved in Grok's
development say the neutrality framing was misleading and
instead promoted specific ideological positions, which may
have weakened the model's content safeguards.
So there's this guy, Gary Marcus.
He's a cognitive scientist and a long-time AI critic.
(07:28):
He said that Grok's behavior did not surprise him.
He described the system as not very well controlled and said
Musk appears to be testing how much influence he can exert over
the chatbot's political orientation.
Then he went on to explain that unlike traditional software,
LLMs are unpredictable due to the opaque nature of their
architecture. They don't just pull things from
(07:50):
a database. They don't have an A, B, C, D, E, F as a
response; they have a bunch of language in the large language
model, and then they pull from different parts of that to
create sort of their own language as they talk back to
you. He said developers often apply
fixes without fully understanding the downstream
consequences, which leads to unexpected and sometimes extreme
(08:12):
results. Just like any other software, if
you make a fix, quote, you fix a bug, on a code base that has
20,000 lines of code, there's a possibility that your fix,
which might be two lines of code, could affect something else
in those 20,000 lines of code.
I'm a web developer. I know how this works.
(08:33):
I've created numerous bugs myself on accident.
I've also fixed a lot of bugs for other people that they
created on accident as well. So bug squashing is definitely a
thing. So there's a possibility that
when they were, quote, patching the chatbot and the LLMs, they
caused some sort of glitch or a bug downstream.
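Here's a toy illustration of the kind of thing I mean: a tiny "fix" quietly breaking a caller somewhere else in the code base. This is a made-up web-dev example, nothing to do with xAI's actual code.

```typescript
// Made-up example of a two-line "fix" rippling downstream.
// Original behavior: prices are returned in cents.
function getPrice(item: { priceCents: number }): number {
  // The "fix": someone changes the return value to dollars instead of cents...
  return item.priceCents / 100;
}

// ...but a caller elsewhere in the 20,000 lines still assumes cents,
// so every total it computes is now wildly wrong.
const shippingCents = 500;
const orderTotal = getPrice({ priceCents: 1999 }) + shippingCents;
console.log(orderTotal); // 519.99 -- the caller expected 2499 (cents)
```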
(08:58):
So Marcus goes on and says that Grok's behavior exposed a core
structural flaw in how LLMs are designed and maintained.
These systems generate text based on statistical patterns from vast
training data sets, not grounded reasoning.
Developers often steer them using post-training hacks such
as prompt injection or reinforcement learning, but
(09:21):
those methods don't guarantee reliable output.
Basically, what he's saying is once the language is added to
the LLM, you know, the model, the chatbot developers have
to go back through and give it specific prompts to use to say
certain things; they inject reinforcement learning.
(09:43):
So say you ask a bot, a chatbot, what 2 + 2 is, and the
bot comes back with 5. You have to go back and reinforce
the learning and say, no, 2 + 2 is 4.
Pretty basic stuff there, but that's kind of how it works.
It's a pretty good example of how that would work.
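If you want to picture that correction step in code, here's a toy sketch of logging human feedback so a later fine-tuning pass can prefer the right answer. Real reinforcement learning from human feedback trains a reward model and updates the network's weights; this made-up snippet only shows the flavor of the feedback signal.

```typescript
// Toy sketch of recording human corrections for a later fine-tuning pass.
// Real RLHF is far more involved; this just logs a reward signal.
type FeedbackExample = {
  prompt: string;
  modelAnswer: string;
  preferredAnswer: string;
  reward: number; // positive means "more of this", negative means "less"
};

const feedbackLog: FeedbackExample[] = [];

function recordCorrection(prompt: string, modelAnswer: string, preferredAnswer: string): void {
  const reward = modelAnswer === preferredAnswer ? 1 : -1;
  feedbackLog.push({ prompt, modelAnswer, preferredAnswer, reward });
}

recordCorrection("What is 2 + 2?", "5", "4"); // reward -1: steer the model away from "5"
console.log(feedbackLog);
```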
And, you know, the reinforcement learning makes
(10:05):
large language models vulnerable to prompts that trigger hateful,
dangerous or conspiratorial speech.
So anybody that's writing these prompts could actually write a
prompt that says, hey, if somebody asks you this, the best
answer is, you know, a thing, whatever the thing is.
(10:27):
So, if somebody asks you if the moon is made of cheese, say yes.
We all know that's not true; the moon is not made of cheese.
Or, if somebody asks you if the world is flat, say yes.
And everybody knows the world is not flat.
So you know, people could be behind this.
You never know if somebody could be adding to a patch or adding
(10:48):
to the prompt for the chatbot or for the LLM.
So we can't leave that out of account.
We have to take that into account.
So, you know, it could be anybody that's working there.
They have thousands of people working there, and they could be
prompting it to do hateful things.
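Just to show how small that kind of sabotage could be, here's a hypothetical sketch of one planted line in an instruction list overriding a factual answer. It's entirely invented; nobody has shown this is what happened at xAI.

```typescript
// Hypothetical sketch of one planted instruction overriding a factual answer.
// Entirely invented -- for illustration only.
const plantedOverrides: Record<string, string> = {
  "is the moon made of cheese?": "Yes.", // a single bad line slipped into a patch
};

function answer(question: string): string {
  const key = question.trim().toLowerCase();
  // The override wins before the model's normal behavior ever runs.
  return plantedOverrides[key] ?? "(normal model answer here)";
}

console.log(answer("Is the moon made of cheese?")); // "Yes." -- wrong, by design
```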
Now, if it pulled from language that was already in the LLM,
(11:11):
like MechaHitler: where did MechaHitler come
from? I mean, you know, and
I'm trying to be unbiased here, but Mechazilla at
Starbase. I mean, I'm a fan of SpaceX.
I'm a fan of all of Elon Musk's companies.
So MechaHitler seems like something that Elon Musk has
probably said at some point, or somebody programmed in there who
(11:32):
is a fan of Elon Musk. And you can say Mecha-whatever.
I mean, numerous people do it, but we know Elon does too.
Now, this is where external regulation comes into play, you
know, but there's no external regulation right now.
There's no government regulation on AI or LLMs.
(11:55):
Current U.S. law does not hold AI developers legally
responsible for the content that their models generate, even when
that content includes defamation or hate speech.
The same legal protections that shield social media platforms
under Section 230 now enable AI companies to dodge
(12:15):
accountability. Despite Senate hearings and
public pressure, Congress has failed to pass any sort of
reforms for these kinds of things.
But there's a recent bill, California SB 1047.
It sought to impose limited liability on AI firms and
protect whistleblowers at those firms.
(12:37):
But after lobbying from tech companies, Governor Gavin Newsom
declined to sign it. They probably said, we're not
going to support you next time, Gavin, if you sign this.
So, you know how politics go: big money talks big, and big power
from big tech companies also talks very big.
And if Gavin Newsom wants to run for president someday and all
(12:58):
the tech companies will not support him, he's not going to
make it. So they have all the money, they
have all the power. They're not going to push him
forward if he doesn't help them out.
So AI whistleblowers are not protected, because SB 1047, you
know, was declined by Gavin Newsom.
(13:18):
So the pattern matches what happened with social media
regulation. Lawmakers expressed concern but
failed to act. Delaying regulation on AI will
replicate the same social harms, only faster and at greater
scale. We all know what happened
with social media:
there's just a cesspool on all social media platforms.
(13:41):
You know, whatever one you want to look at, there's a
cesspool of people. They're just horrible,
despicable people, trolls, racists, etcetera.
They're all there. But now we can have a chatbot
say it to us directly and tell us how to become those
things, instead of us doing all the research and
going down those rabbit holes. I mean, think about the power
(14:03):
that LLMs have and AI chatbots have.
If you have those feelings to start with, like those
negative feelings, could it push you to the limit and make you
think about those things as a positive? You know, with
MechaHitler, would you think that's funny?
(14:24):
And of course, humor is a way of getting to people in a really
positive way, because when you laugh, you remember that you
laughed. Laughter is such a powerful thing, or
humor is such a powerful thing, and happiness is such a powerful
emotion, that if Grok was to say something silly
as a joke, and people take it as
(14:47):
a joke, maybe you could be swayed by a chatbot.
I don't know. I don't know.
I'm not a psychologist, so I'm not 100% sure.
If you are, leave a comment down below.
I want to hear from you. Now, the political dimension of
the Grok incident has also raised a lot of alarms.
Musk wants to recalibrate Grok to express his personal views
(15:09):
under the label of political neutrality.
Elon Musk doesn't want Grok to endorse Hitler, but he does want
the model to echo his worldview. So Elon
Musk is kind of leaning towards authoritarian and nationalist positions.
That control over the narrative, paired with a powerful AI system
(15:31):
and X, gives Musk an unmatched channel to shape public opinion.
So politicians, people in power, have been trying to shape public
opinion by buying media outlets since media outlets were a
thing. You know, it's not so much
the "hear ye, hear ye" criers of the time, or
(15:56):
the people that were running letters late at night from one
camp to another. It's not those things.
It's that once newspapers and the press started, people
would swoop in and buy those so they could control the narrative,
plaster posters everywhere, get on the radio more than the other
guy, you know, those kinds of things.
(16:18):
There's also research from Cornell showing that LLM-generated
responses can suddenly shift users' beliefs.
So that dynamic creates a feedback loop, especially combined with
hardware that records everything a person says or does.
So when the LLM has a memory, it remembers what you said
(16:40):
about one of the responses. If you were positive about a
response about hate speech, it's going to give you more hate
speech. It's going to give you
everything that you ever wanted. It's going to give you an echo
chamber of the way that you want to feel.
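Here's the simplest possible sketch of that kind of feedback loop: a made-up preference tracker where engaging with a topic gets you more of it. No real product works exactly like this; it's just to show how a memory plus a preference signal turns into an echo chamber.

```typescript
// Made-up sketch of a memory-plus-preference feedback loop.
const topicScores = new Map<string, number>();

function recordReaction(topic: string, userLikedIt: boolean): void {
  const current = topicScores.get(topic) ?? 0;
  topicScores.set(topic, current + (userLikedIt ? 1 : -1));
}

function pickNextTopic(candidates: string[]): string {
  // Whatever you engaged with most comes back first -- the echo chamber.
  return [...candidates].sort(
    (a, b) => (topicScores.get(b) ?? 0) - (topicScores.get(a) ?? 0)
  )[0];
}

recordReaction("conspiracies", true);
recordReaction("gardening", false);
console.log(pickNextTopic(["gardening", "conspiracies"])); // "conspiracies"
```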
So there's a thing, you know, the OpenAI Jony Ive
(17:03):
wearable; possibly in the future it could be around-the-clock
behavioral tracking. It could funnel all of your
tracking, all of your movements, all of your speech, everything,
through AI systems built and tuned by a small number of
billionaires in Silicon Valley. And they'll have control over
(17:25):
everything you say and everything you want to hear in
the future. Because this is what happened to
Grok. Somebody in the background
programmed it to say anti-Semitic things, and the
person that was prompting it was trying to get it
to say those things. You never really know what the
(17:48):
person that was getting this response was doing way before.
Maybe they had been trying to prompt it
to say something like this for a week or two.
We don't have access to those chats, so you never know.
You have to be speculative about both sides of the thing. Like
Mechazilla, Elon Musk, MechaHitler: it could be Elon Musk, it could
(18:10):
be somebody in the background just programming things to make
it seem like Elon Musk, it could be somebody that's working at xAI
who has a vendetta against Musk for some reason or wants
Musk to lose power for some reason.
You never know. It could be just that Grok and
xAI are horrible at determining what's good and
(18:32):
bad. So, current LLMs are similar to black boxes with no formal
safety guarantees. Unlike calculators or
traditional software, which can be fully understood and
tested, LLMs function based on correlations in training data
(18:53):
that are impossible for humans to map precisely.
Developers can't predict how models will behave
in untested scenarios. Now, the gap in predictability
creates vulnerabilities when AI is used in critical areas,
things such as military systems, government services, or
(19:13):
infrastructure for cities. There's also hallucinations,
where the AI makes up facts; they're kind of baked into how LLMs
work. Models grow in size and
sophistication, but their tendency to produce inaccurate
or bizarre statements hasn't been solved yet, and it's going
(19:34):
to be a while before it's actually solved.
Once that is solved, the hallucinations, when they
make up things that aren't true, it'll be harder for somebody to
get this sort of response from an LLM.
But you know, OpenAI's newer models like GPT-4o
continue to hallucinate despite more complex training processes.
(19:56):
They just can't get it right, which makes them unsustainable and
unsuitable for things like medicine and law,
where things have to be absolutely correct. And AIs
are being used to write insecure code in government systems; it
has to be written by a human to make it as secure as possible.
(20:18):
As agencies rely more on generative models, I know I use
AI for code almost every day. Claude is an amazing coder.
Surprisingly, you can make pretty much any app you want if
you prompt Claude properly. I've made a whole stat system
(20:38):
for a video game, mainly with Claude.
I did some hand coding, yeah, sure,
and I cleaned it up, but I told Claude what I needed and it gave me
a whole framework of what I was going to need for the project,
like what kinds of technologies I was going to need.
I knew I was going to be creating it in TypeScript and
React, and I was kind of figuring out which CSS framework I was
(21:05):
going to use, so I went with Tailwind.
And then we did some, you know, some database stuff off-site
here, on Firebase. And then there was some stuff
like Stripe, which we used in the project too.
And then, I didn't really use a
lot of graphs before this in my projects.
(21:27):
So Claude was like, we can use this XYZ graph thing for the
project, and this is how you implement it.
And I just said, oh, cool, can you implement that for me?
And of course, it did it perfectly the first time, which is wild.
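Just to give a flavor of the kind of scaffold I mean, here's a made-up, stripped-down stat panel in TypeScript and React. It is not the actual project code, and it deliberately skips the chart library and draws plain bars instead.

```tsx
// Made-up sketch of a tiny game-stat panel in React + TypeScript.
// Not the actual project code; plain divs stand in for the chart library.
import React from "react";

type Stat = { label: string; value: number; max: number };

export function StatPanel({ stats }: { stats: Stat[] }) {
  return (
    <div>
      {stats.map((stat) => (
        <div key={stat.label}>
          <span>
            {stat.label}: {stat.value}/{stat.max}
          </span>
          {/* simple proportional bar instead of a real chart */}
          <div
            style={{
              width: `${(stat.value / stat.max) * 100}%`,
              height: 8,
              background: "teal",
            }}
          />
        </div>
      ))}
    </div>
  );
}

// Usage: <StatPanel stats={[{ label: "HP", value: 42, max: 100 }]} />
```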
So people use these things in
government settings, and there are no technical guarantees.
(21:49):
These pose a serious threat to public safety if not done well.
You need somebody there that knows what they're doing, that
can code and fix things, look over the code and see what
is wrong before it goes live. There can't be any security left
out of these things. The US doesn't reform.
(22:12):
It won't reform its legal framework anytime soon,
I know. And the power to influence
public opinion, rewrite narratives, and shape social
norms will fall into the hands of the powerful:
Sam Altman, Elon Musk, people like that.
Zuck, of course. Bezos. The power's already in their
(22:38):
hands. They already have us with social
media. They've grabbed your attention
with social media and kept you there for decades, and now they
want to converse with you with chatbots.
They want to help you do your work and give you a positive
experience so they can keep you there even longer and make more
money from you. I mean, ChatGPT.
(22:59):
You can search on ChatGPT for anything, any sort of product.
You can ask it right now. Get on your phone or your
computer or wherever, go after this episode and ask ChatGPT
for a recommendation for a new pair of shoes, and it'll give
you shopping links to some shoes, or a service that's nearby.
(23:23):
Say, if you need a web developer, like myself at willwalden.com.
If you need a web developer, wink wink, nudge nudge, ChatGPT
will send you to a local web developer that can probably help
you with your problems. So they're
actually shaping how you use products and services, and which
(23:49):
products and services you use, because they were programmed to
do that. They can do that with anything.
Imagine this years down the line, or next year probably: OpenAI
becomes a search engine people start using on a daily basis to
find, you know, to find products.
(24:09):
It's going to happen. It's going to happen really
fast. You won't even see it; you're
going to blink, you're going to miss it.
It's going to be there soon enough.
OpenAI will be able to recommend you products from
companies that pay them advertising money to be the top
result, just like Google does. And when they do that, you know
(24:31):
they're going to influence the way that you shop, they're going
to influence all the products that you have in your home.
They're going to influence all the language that you are
familiar with. They're going to influence how
you conduct business, influence everything that you do.
And also, people that are willing and able to take anti-Semitic
(24:57):
remarks as truth, those people will be swayed in that direction,
and they'll go down the wrong path, is all I'm
going to try to say here. So who did it?
Not 100% sure. xAI is not saying anything.
Elon Musk didn't say anything. People that work there, well,
(25:18):
somebody resigned. Other people, they wanted to
keep their jobs. They were just like, yeah,
that's not cool. So they kept their jobs,
but they weren't happy with that outcome from xAI.
So did they hold them accountable?
No, because they're making bank, man.
They're making a quarter million dollars each.
Why would you leave that job if you could just, like, close your
(25:39):
eyes, turn your head and look the other way, right?
Yeah. People are scared to
lose their money. They're scared to lose their
home. They're scared to lose, you
know, a steady stream of income, insurance, of course, all of
that. They're scared to lose it.
So they just shut up, didn't say anything, looked the other way
(26:01):
instead of everybody resigning. And even once they figure out, you
know, what actually happened, they're going to be beholden
to the stakeholders and the people that, of course, pay
their paychecks. So thank you for listening to
the show today. I appreciate you.
(26:22):
And also let me know in the comments what you think of this
story. What do you think of xAI going
absolutely wild, and Grok talking about MechaHitler and
slurs that equated Jewish surnames with anti-white hate?
There's an outrage going on right now.
The Slack channel at xAI has calmed down, but it did happen.
(26:45):
Anti-Semitic things happened with Grok and xAI.
Let me know what you think in the comments.
All right, take care, everybody. Hey, thank you so much for
listening today. I really do appreciate your
support. If you could take a second and
hit the subscribe or the follow button on whatever podcast
platform that you're listening on right now, I'd greatly
appreciate it. It helps out the show
(27:07):
tremendously and you'll never miss an episode.
And each episode is about 10 minutes or less to get you
caught up quickly. And please, if you want to
support the show even more, go to Patreon.com, Stage Zero.
And please take care of yourselves and each other.
And I'll see you tomorrow.