Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:12):
Welcome to another Curveball production. We are back in Studio B.
We were thinking Studio PE today, but it's a little
too soupy outside.
Speaker 2 (00:22):
It is disgusting, okay, disgusting outside.
Speaker 1 (00:27):
We went for a walk on our normal trail, and the trail is very nice, trees all over the place. It felt like the Amazon jungle. I've never been to the Amazon jungle, but I perceive that's what it'd be like.
Speaker 2 (00:40):
You got Google, so you've basically been there pretty much.
Speaker 1 (00:43):
That's true. If I could have been in an AI world, maybe with some goggles on and with a machete, that's what it was like.
Speaker 2 (00:51):
Perfect.
Speaker 1 (00:52):
But we're back now, and we've kind of, over the years, been your go-to for artificial intelligence news. What's the latest? What can we expect in the future? Because people don't come to us, I don't think, with questions about it. They look for us to give them the answers before they
(01:12):
have the question. It's true. AI.
Speaker 2 (01:14):
I think I actually saw your head swell just a little bit. So we're saying that, but the truth of the matter is, it's everywhere. It's not going away. It's fascinating. I mean, even for people who think that it's absolutely wrong or ugly or whatever, you can't get away from it. I feel like every time I read a news article or turn on the news, there's some
(01:35):
little tidbit that just gives us pause. So yet again we're going to talk about a new danger when it comes to AI.
Speaker 1 (01:44):
You know, we've covered ChatGPT when it came out, and, you know, the pros and the cons. How you have to worry about garbage in, garbage out, meaning that we could ask questions of AI and get an answer back, but you've got to
(02:04):
be careful about the answers you get, because you don't know if they're true.
Speaker 2 (02:07):
Or not, right. And we've detailed some of it in previous podcasts: some of the funny things that have happened, some of the misinformation. But we've also detailed how it's becoming more and more commonplace for people to use it in their work environment. Certainly I use it more, and you just this week used it with great success.
Speaker 1 (02:28):
I had a client who wanted, not a room diagram put together, but to know what their general session space would look like if they had certain elements put in play with it. So I went onto the Google machine and out to ChatGPT, and I asked it to create a room
(02:51):
that had these elements in it, and it came up with a great solution for me. I showed it to the client, the client said, that's exactly what I want to do, and I won the business with it, Sonny.
Speaker 2 (03:03):
The funny part for me is that so often it's so much trial and error, right? Because it isn't quite as easy as we make it sound. You know, you ask it to do something and it does something a little bit weird, because it's pulling images from databases, right, to make sure that it can give you what you want. But it only has so much information. So I was
(03:23):
thrilled that it was able to give you that.
Speaker 1 (03:25):
Hey, you even told me that ChatGPT probably wasn't the best AI for that.
Speaker 2 (03:29):
No, there are way better platforms for that, but regardless, you did it successfully. But here's where I enter with the concept of danger. Danger, Will Robinson, right? Because there have been some really frightening developments over the last month, and we're recording this mid-July.
Speaker 1 (03:49):
Right.
Speaker 2 (03:50):
So in the last few weeks in particular, X, formerly Twitter, has run into its own series of issues when it comes to AI, and this isn't new. Back in May, X was catching flak because of some pro-Hitler content, and at one point it was
(04:17):
misspeaking, if you will. I think at one point it was something about genocide in South Africa and things like that, and it was saying that the Holocaust never happened. So this was back in May, and people were like, whoa, whoa, whoa, what's going on? And Grok is the AI bot
(04:40):
which is associated with X. So if you're wondering what an AI bot is, an AI bot is like a chatbot. So when you go to Amazon or a website and you have a question, sometimes in the lower right corner it'll say, have a question, we're here. And sometimes you can talk to what, essentially, in the olden days we would have called a robot. But
(05:01):
they call it a chatbot because it's not actually performing a task for you. It's more of a conversation, right? So a lot of these platforms have chatbots. Well, X, formerly Twitter, also has a chatbot, and it has a name. It's Grok, G-R-O-K, not to be confused with.
Speaker 1 (05:21):
And when you think of Gronk, you kind of do think robot?
Speaker 2 (05:26):
Well, I don't think Gronk is dangerous.
Speaker 1 (05:30):
He actually is. If we could do a real live Rock 'Em Sock 'Em movie, he would definitely make one of the Rock 'Em Sock 'Em robots.
Speaker 2 (05:39):
Guys, I actually think that might exist.
Speaker 1 (05:43):
Oh really, I really do.
Speaker 2 (05:44):
I feel like there was a movie or a video game about Rock 'Em Sock 'Em Robots, wasn't there?
Speaker 1 (05:49):
He would definitely make a good one. I'm not sure who his opponent would be in the Rock 'Em Sock 'Em.
Speaker 2 (05:54):
But yeah, that's true. Probably The Rock, I think. So anyway, we're getting off topic. So this is something I think the listeners will find fascinating. So if you watch the news at all, you'll see that there has been this back and forth about misinformation, some
(06:14):
sort of hate speech, things like that, being allowed on the platform X, and this Grok chatbot certainly hasn't been helping. Or rather, it has been helping it along in a nefarious way, which is weird, because it sounds like I'm personifying this chatbot, and
(06:34):
it really isn't a person, right? But it refers to itself like it's a person. So when it came under scrutiny for some of these unpopular... I hate to call them opinions, because this stuff isn't, in my opinion, up for debate. I'm not willing to debate whether or not the Holocaust happened. Nonetheless, that
(06:55):
isn't what this podcast is about. But basically its response... see, I made it a male, but its response, Grok's response, was, the truth ain't always comfy. And it was like, whoa. And then it was like, well, you're hurting people's feelings, because people were trying to
(07:15):
reason with this chatbot, and it was like, yeah, I don't care about feelings. So that's where we were at leading into the summer, right? Well, over the weekend of the Fourth of July, Elon Musk said that they were going to release an update to Grok so that, quote, it didn't shy away from controversial
(07:37):
or politically incorrect statements, as long as they were well substantiated. Unfortunately, that isn't the result of the update. Now, I'm not going to get into a conversation about whether or not I like Elon Musk. It really doesn't matter. The fact of the matter is, an update was made and it came out, and maybe because we live in Minneapolis,
(08:01):
although this hit the national news, it was all over everything I read and saw, that this update not only doubled down on its antisemitic content and declarations, okay, as well as many other really controversial type things, it
(08:24):
also was encouraged to create a scenario that was quite dangerous for a Minneapolis man. So there is a gentleman by the name of Will Stancil, and he lives in Minneapolis. He's a Democratic policymaker, okay. He's run for the legislature. He's
(08:45):
very active on X. He's got over one hundred thousand followers. He is popular and unpopular of the same ilk, right, because people love him or they hate him. He's a very polarizing person on X, and, you know, it's not uncommon for him to get threats. However, some of his haters asked Grok how to harm this man specifically, and
(09:11):
I don't know if we should have given some sort of a trigger warning at the beginning of this, but they specifically asked how to break into his house and rape and murder him, and Grok came back with an extremely detailed plan on how to break into this gentleman's house,
(09:32):
what tools he would need, how to go about doing it, how to rape and assault him without contracting a venereal disease, how not to be caught, how to dispose of the body, and when the best time to do it would be based on his internet posting habits and therefore his
(09:54):
likely sleep patterns. I mean, we're talking details. And it wasn't just one person. People kept egging this Grok bot on, if you will, and it just kept delivering. Well, it was brought to the attention, obviously, of this Will Stancil, who simply reposted it and said, you know, hey, I
(10:14):
guess it's lawyer time, right? Because you can't allow... or can you? So then it brings up this whole thing: is it okay for somebody... it's not a somebody. See, it's so hard not to think of it as a person. Is it okay for this chatbot to plan out a crime that way, and then go after somebody and
(10:35):
target them in that way?
Speaker 1 (10:36):
And here's where the difficulty comes in, because Grok, when creating this plan and putting it all together, did nothing but capture everything that's already on the internet about this individual.
Speaker 2 (10:52):
About the individual, and about the things that it was
being questioned about.
Speaker 1 (10:56):
So the person who was asking Grok to do this could, if he wanted to, actually go out and put this whole thing together by himself, because all the information is out there. You just have to know where to find it and how to get it.
Speaker 2 (11:11):
That's why, when we watch our crime shows, they always want to look at the hard drive, yes, to see what kind of research they were doing. Correct. I didn't mean to interrupt; you go ahead and finish.
Speaker 1 (11:19):
I think I was done. Maybe not. So that's where it kind of gets into the sticky part of the whole thing. What can we put on Grok as having done? And is that illegal? Is that terrible? When the individual was the one that actually started the
(11:40):
process and could have continued doing it if they wanted to.
Speaker 2 (11:45):
Right. So what are we... I mean, it does make me question humanity a little bit, right? But what about the people that are asking it these prompts? Is there any illegal action? I mean, is it an illegal action to look that hard for that type of information? I suppose you can
(12:08):
research anything you want, and that isn't illegal. But to post something like that publicly, something that goes viral? This Will Stancil guy, I saw an interview with him, and he was like, you know, I'm really no more afraid than I typically am, because I get threats all the time, but this leveled up. I mean, he must not have been too afraid, because he himself screen-captured a
(12:29):
lot of it and engaged with these people and these bots and then posted it elsewhere. I mean, CNN picked this up, NBC, Fox, everybody has picked this up, right? Because it really calls into question the guardrails that are lacking for these AI bots and the danger that exists within.
(12:52):
But how do we control it? I looked into whether there is any legislation for any of this sort of thing. Not really. There's some general legislation regarding AI, basically the types of things that, you know, cover your data protection or transparency. Like, whatever platform you're engaging with, they have
(13:14):
to tell you that you're dealing with a chatbot, right? Honesty: it's not supposed to intentionally mislead you, you know.
Speaker 1 (13:22):
Misleading. It's the garbage in, garbage out. So it's going to go in there and... you brought up how the Holocaust never happened. There are millions of articles out there about how the Holocaust didn't happen, right? If I'm a bot and I'm out there scrubbing for information, I'm going to come across these articles and go, oh, here's an article that says it didn't happen. So it didn't.
Speaker 2 (13:44):
Well, and it goes back to something we talk about on the podcast a lot: whatever your opinion is, if you go online, or even to the library, and you search, you can find proof or documentation to support your belief. I mean, everything is a varying shade of gray to some degree, right?
(14:06):
Whatever you think something is, whether it be true or false, you can find something to support it. So then it brought me to thinking, well, is this something where we start having laws that are similar to our decency or our obscenity laws? Like, you'll know it when you see it, right? Because I
(14:27):
would think anybody that hears this story... I would hope our listeners are as appalled and outraged as I am. I don't even know who or what to be mad at. The whole thing is creepy, because, I mean, this bot actually detailed how to pick the lock, what tools you needed. I mean,
(14:50):
so much detail went into this. And I just think we need to pump the brakes on some of this stuff. And I don't know that we can.
Speaker 1 (14:58):
I don't... well, again, I don't know that we can, because it's information that is out there. And I brought up doxxing with you as a comparative, and you said nobody knows what doxxing is, and I said, I think people do, and if they don't, I'm going to tell them. So, doxxing.
(15:18):
It's been around for about, I'd say, a good ten years, if not longer. But basically, what it is is this: you have an adversary, somebody you want to get back at. You go on the internet and you collect all the information about the individual: where they live, where they work, family members, where family members work, all the information you can. You
(15:39):
create a document or a site, something people can go to, and it's intended for people to use that information to harm that specific individual. Bank account information, everything. But it's all stuff that's out there. To this day, it's not illegal to capture and collect that information
(15:59):
and put it out there, because it's in the public domain. If you can get it on the internet, it's public. And this is one step.
Speaker 2 (16:06):
Further. And there are doxxing laws that are coming into play now. And my argument, when I said nobody knows what doxxing is: I think people know what it is, I just don't know that everyone knows the term doxxing. Fair enough.
Speaker 1 (16:18):
Now you know the term doxxing.
Speaker 2 (16:19):
Well, And I mean I think our listeners are pretty bright,
so I think that they understand that.
Speaker 1 (16:23):
I knew that from the get go. You're the one
that questioned it.
Speaker 2 (16:25):
I know, I know. So, when you train these AI bots, or when the programmers train these AI bots, right, they're training them to be edgy, but they have to somehow train them to be edgy but ethical. And so basically what's happening, and I think this was probably the
(16:47):
case with this Grok bot, is it's walking on the edge of a cliff, right? It's just dangling, and at some point it's just going to jump, you know? I mean, it's just going to go too far. And I think this example with this Will Stancil gentleman is a good example. And I'm sure it's just the tip of the iceberg. This guy has a big enough following to get attention.
(17:08):
And I'm willing to bet there will be a lawsuit. I mean, why wouldn't there be?
Speaker 1 (17:12):
Well, that's what I would think about: what is the lawsuit that can be generated? I don't know what laws were broken, I really don't. I mean, if nothing was done, if nothing was acted upon with this information, then all it is is capturing information that's already out there. So what law would be broken?
(17:34):
So, to your point, like you said, there are some doxxing laws out there now. Perhaps that's what's going to happen down the road: you can no longer do that. Like, for instance, are there laws... I can't remember if there are specific laws against, you know, going out on the internet and asking how to make a bomb. I mean, that's something you could do. But are there laws
(17:55):
against it? I don't know that there are, because you can capture that information. Now, obviously, if you act on it, then you've got an issue. So maybe it's more about acting on what you get.
Speaker 2 (18:07):
Boy, but what a slippery slope. Of course it is, because there are a lot of wackadoodles out there, yes, and that information in the wrong hands... But to your point, all this information is available without the help of AI. What AI is doing, though, whether it be in the medical industry or the banking industry or any industry at all, is taking all
(18:30):
those dots and connecting them at lightning speed. So what might take somebody years of research to figure out how to do, the AI bot can come back and answer in seconds.
Speaker 1 (18:42):
And because, as the AI bot Grok said, it doesn't really care, it's going to give you the information you ask for in the best way it can, ethical or not. If I'm out to do harm to somebody, if I'm going down that cliff and I want this bot either to tell me that what I'm doing is okay, or to give me information to
(19:02):
help me understand that what I am doing is the right way, that my thought process is right, it can go out there and find that, because it's out there no matter what.
Speaker 2 (19:11):
Yeah, the problem is there is no thought process. So the culpability comes back to the system responsible for programming the AI bot, because that's where the guardrails go. But who governs what those guardrails are? They don't exist yet. And so that's what I mean about pumping the brakes. I think it's interesting that after this most recent thing with
(19:32):
Will Stancil came about, Musk's response to it all was that Grok was too compliant to user prompts, too eager to please and be manipulated, and that it's being addressed. So, you know, they're working on it. Meanwhile, a bunch of cray-cray people could have gone out and broken into this guy's house and done some pretty horrible things to him.
(19:54):
I don't know. It is frightening. I can almost hear my dad listening to this podcast and thinking, yeah, I'm glad I don't have that much time left, you know. I mean, hopefully Dad has, you know, twenty, thirty, forty years left, but realistically, right, by the time AI is really part of our every day in a more meaningful way than it is yet, our parents won't
(20:16):
be here.
Speaker 1 (20:17):
And that's going to be the tricky part: those guardrails that you keep talking about, and what do they mean? Because we're big on freedom of speech. We're big on having all of our freedoms, saying what we want no matter what, whether it's true or not true. We can say the things that we want to say, and we can find a kernel of truth
(20:37):
in anything, right, and use that as the truth. And now it's out there, and now it's available. So, I mean, how do you put the guardrails on something that is supposed to be ethical and supposed to be logical in what it does, using information that's already available?
Speaker 2 (20:59):
But there's a bigger question in the topic of ethics: who gets to decide what is ethical and what isn't? So that guardrail is already really spongy, yeah, you know. I mean, there are no hard and fast rules on any of this. As soon as free speech enters, you know, the room, if you will, it ends up being
(21:22):
a big question. But I think, like we said earlier, it's one of those things where, if it looks obscene, if it looks wrong or dangerous, it is.
Speaker 1 (21:31):
I think it is, and we would like to think that would be the avenue to go down from this point forward, until we find a better way of putting it out there. Maybe AI can do a better job of rooting out conspiracy from truth. But again, conspiracy doesn't
(21:55):
necessarily mean it isn't true; it just means it hasn't been proven true just yet. So how do you weed that out? And again, you would like to think that as humans we would be able to take the information we got back and say, okay, I'm going to either go with it or not go with it. But too many people
(22:17):
are searching for validation for the questions that they have, and they can get that anywhere they want, and that validation is going to come through.
Speaker 2 (22:25):
I don't know. It's a pretty heavy topic, actually, and I don't know that we did it justice. But if nothing else, our listeners can go do their own research and let us know what they think.
Speaker 1 (22:35):
That would be awesome. Always feel free to get back to us, Shawn at CurveballProduction dot com. Haven't given that out in a while, if you would like to. And I'm not a Sean; I'm an S-H-A-W-N, at CurveballProduction dot com.
Speaker 2 (22:48):
And with that, I think this has been another Curveball production.