
November 18, 2023 46 mins

Imagine a world where your employer prevents you from using the tools you created! Sound far-fetched? That's exactly what happened at tech giant Microsoft when they blocked their own employees from using ChatGPT, an AI chatbot, stirring up a storm of questions about data security and the company's stance on third-party AI tools. Strap in as we spin this intriguing tale and examine its impact on the development of AI tools.

Ever wonder how tech conglomerates approach AI integration? We pull back the curtain to reveal the AI race unfolding in the industry, with a special focus on the strategies employed by Microsoft and Apple. And it's not just the tech firms; governments worldwide have started to join the dance, introducing legislation to regulate this rapidly evolving landscape. From Meta's influencer AIs to AI's potential vulnerabilities, we scrutinize everything through a magnifying glass.

As we peer into the future, don't be surprised if you encounter a few unsettling scenarios. AI development comes with its own set of challenges, including the risk of dehumanization. Microsoft's restriction on ChatGPT, international politics, and the ethics of AI - it's a complex web, but one that needs to be untangled for a secure future. We promise a roller-coaster ride as we shed light on the struggles of AI developers and the significance of maintaining human connection amidst the technological leaps.

Support the show

Let's get into it!

Follow us!

Email us: TheCatchupCast@Gmail.com


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
You know, well, first of all, spoiler alert: it's AI, it's AI related, and it is the thing that helps us the most. It's ChatGPT related, and Microsoft has asked, or did ask, its employees to stop using ChatGPT for a little while.

(00:22):
Why is this interesting? Because they've been the biggest investor in OpenAI, the parent company of ChatGPT. So, of course, OpenAI is other things too, but that's the biggest thing from a consumer standpoint that we're aware of and that we've interacted with, man.

(00:45):
I even have found new areas where it's been integrated, like SurveyMonkey, believe it or not. Well, if I do recall, I think SurveyMonkey is largely owned by Microsoft as well. You know, I could be wrong, I seem to remember that. So we all know, we all know, especially if you're consistent listeners with

(01:07):
this podcast, we know what ChatGPT is all about. Right, yeah, I mean, I know Dennis and us, so, at the very least, the two of us know a lot. But, yeah, so this is what's been happening. This is an incident with it. Microsoft employees found themselves unable to access ChatGPT

(01:28):
due to an unexpected restriction, where the company cited security and data concerns as the reason for the blockage. Yeah, indicating a cautious approach to third-party AI tools. It's interesting, man, you know, rather than just tackling it

(01:50):
head on, fixing the issue, which I assume they did, because it's no longer valid or lasting. But, I mean, it's kind of like if Burger King, I don't know why it's food, man, I hate Burger King anyway, but it's like Burger King was like, hey, employees,

(02:12):
don't eat Whoppers because they'll probably make you sick, but keep selling them. You know what I mean. You feel what I'm saying. I don't know how you feel about that.

Speaker 2 (02:22):
No, no, I feel you. Yeah, no, it's kind of, it is very strange. Yeah, there you're kind of preventing the people who are going to develop it, as well as the ones who need to feel and understand the customer-facing side of it, and you're just kind of telling them, hey, you need to stop because of X, Y and Z or

(02:44):
whatever. It's a little strange. It kind of makes you wonder, like, what happened or what is going on, or whatever that security issue could be.

Speaker 1 (02:57):
Yeah, those types of things. Yeah, I feel that. It's bizarre to just be the company that is the largest investor and really made this service accessible to so many. Right, the quickest application ever to 100 million users, right? Yeah, yeah, exactly. Well, at least up until Threads.

(03:22):
Really? Threads? Yeah, no, Threads.

Speaker 2 (03:27):
Threads beat it in three days, good Lord. No one's on Threads. I know, I know, but you know, it did help. Technically it took me about three calendar months to even try Threads. Did you try it? I have not, man, I have not. I still don't know how I feel about my Instagram being linked

(03:51):
to Threads, so then I would be kind of stuck in it. Right, I can't just delete the account.

Speaker 1 (03:58):
That's kind of the thing about it. I don't really see what you gain over there that you aren't already using and interacting with on Instagram. But I digress with that. Definitely. Yeah, it's interesting to see that. Here's what makes this story interesting. We're going to dive into it more.

(04:18):
The fact that the company that, in a lot of ways, made ChatGPT a household name prevented its own employees from using it because of security concerns. That, in and of itself, is a big enough story. I think it gives a weight to something that we should all

(04:42):
know about and learn about, as to what the issue was and whether we need to keep our eyes open for anything in the future.

Speaker 2 (04:51):
Right, yeah, exactly, exactly. And, you know, how it pertains to other AI programs, or these chatbots that we have around.

Speaker 1 (05:01):
Yeah, like our old friend Bard. Ol' Bardy.

Speaker 2 (05:06):
So will Bard.

Speaker 1 (05:08):
The whole Bard thing, dude. So with that said, I say we go ahead and get into it.

Speaker 2 (05:14):
Yeah.

Speaker 1 (05:15):
What's going on, everybody? I'm John and I'm Denison, and this is The Catch Up. The three best ways to support this

(05:39):
show. And there are three of them. It's simple, you can count that high, I remember how. Number one, we even got visual examples. There's one thing with multiple logos, but it was one thing, which was rate and review. Wherever you're listening, wherever you're watching, if

(06:00):
you're on the live stream with us, if you're listening through audio, which I'm proud to say, after many delays, we are caught up by the time you hear this. We will be caught up with all of our episodes, which is exciting. And we know that some of you are on the live stream and some of you are on the audio and, equally important for us,

(06:24):
knowing your thoughts, it helps us grow, it helps us become even better podcast hosts and it gets this in front of more potential listeners, wherever you're listening, wherever you're watching. Number two, there they are. Oh man, guess how I got one of those things with the other. Number two, jump on the live stream with us if you haven't

(06:51):
already, and even if you are on the rewatch, we still want your interaction in real time as you watch. It is a great way for us to interact with you. Granted, we're going live late at night, but preferable is for you to jump on the live stream with us, get that notification game going and file in.

(07:13):
Let us know your thoughts about whatever it is we're talking about, while we're talking about it. It is how we want to do this show, it is how we want to connect with you in real time, and it's what makes The Catch Up unique and special. Man. And number three, there it is, in case you forgot what that

(07:33):
looked like. Um, check out, our shop is fully updated, includes things like, but let's go ahead and get into it: a T-shirt. Oh man, there it is, it's fully updated. We just sold Dennis's favorite shirt, actually. We sold a copy of that the other day.

(07:54):
The... Should I pull it up?

Speaker 2 (07:59):
Mm-hmm, go ahead and pull it up, bro, but I just want to show them.

Speaker 1 (08:03):
You just want to show them, right, tommy?

Speaker 2 (08:07):
Mm-hmm, you got to show them, right?

Speaker 1 (08:09):
Yeah, I don't want, I don't want to disrespect you guys. Yeah, you got to get that sound bite flowing, though. Mm-hmm, there we go. OK, let's, let's, let's share the screen here, just real quick. Boom, there it is, on the left. So, is it trap horn worthy? Mm-hmm. "We'll catch up with you next

(08:35):
week, or next time" is what the shirt says. It's on the left, it's got some vintage kind of video game vibes to it. You know? Mm-hmm. Even all fresh and clean, obviously. Well, very soon. Zoomed in here, here we go. OK, so this is showing me as a shop manager.

(08:57):
This is not what it will look like for you guys, but let's just go here and say, oh, there's your black, there's some dark blue.

Speaker 2 (09:09):
Mm-hmm.

Speaker 1 (09:11):
Some gray and some light gray. You know, all kinds of color options, all kinds. And see, it's only starting at $10.50. Come on, man, they are near free. Give me a navy in a large. You know, it's not about the money-making for us. Of course we want that to happen.

(09:32):
We just want you guys to have stuff that you like, that represents our show and that helps spread the word too, because you love sharing it, you know. Mm-hmm, exactly. So we got all new merch available, including those two shirts, and we want you to check it out, so just do it. You know what I'm saying. So anyway, with that said, let's start back into this topic,

(09:55):
bro. Yeah, so to further this along, the company, being Microsoft, cited security and data concerns as the reason behind banning staff from using ChatGPT, indicating a cautious approach to third-party tools, cautious and third-party

(10:16):
being interesting labels for this, because, you know, they kind of helped make it happen. But yeah, the restriction was later identified as an error during a system-wide test for managing large language models. Microsoft was quick to rectify the mistake, restoring access and reaffirming the safety measures of their endorsed AI

(10:38):
services. So what is it about this? Because here's, here's a direct quote from the article. It says, due to security and data concerns, a number of AI tools are no longer available for employees to use.

(10:59):
That was the update on the internal website. Right, got it?

Speaker 2 (11:03):
So there's a memo that got sent out.

Speaker 1 (11:06):
Right, but, sorry, I'm just sifting through this real quick.

Speaker 2 (11:13):
No worries.

Speaker 1 (11:14):
The company initially said it was banning ChatGPT and the design software Canva, which we all know what Canva is, and it's been more integrated with AI through OpenAI, I believe, but it later removed the line in the advisory that included those products. Microsoft said the ChatGPT temporary blockage was a mistake

(11:39):
resulting from a test of systems for large language models. The quote here says they were testing endpoint control systems for these large language models and inadvertently turned them all off, or turned them, I'm sorry, inadvertently turned them on, all of them, for all employees.

(12:00):
They restored service shortly after they identified their error. And so what that sounds like is that they had a number of large language models that overlapped each other and caused them to blame ChatGPT for the issue, right, rather than these other issues. And I don't know to what level you can speak on

(12:23):
this, but what do you think that they were testing that was causing issues, and why would they need these other models beyond what ChatGPT offers?

Speaker 2 (12:36):
I mean, I think, you know, going on the question of, like, why would they need other AI models and stuff like that, I think it's because, I'm sorry, that was a big one, but I think it's because of kind of what we already saw OpenAI

(12:57):
talk about during their developer day, where you have a feature where you can create a chatbot, but it's tailored specifically, and I think that's what they're trying to do. So I can see, like, a ChatGPT bot that they create that is more tailored towards, you know, X, Y and Z.

(13:19):
So in some ways it becomes like, like it makes sense for you to, it makes sense for you to be a little cautious on these other AI models, as well as just have other ones, because you can have

(13:41):
them more focused, right? Just like people, you'll have a focused set of study or thing that you normally go into and look at and stuff like that.
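
[A minimal sketch of what the "endpoint control" mistake discussed above could amount to, assuming a simple per-employee allow/block policy. The tool list, function name and test-group flag are invented for illustration; this is not Microsoft's actual system.]

# Hypothetical endpoint-control check for third-party AI tools.
TEST_BLOCK_LIST = {"chatgpt", "canva", "midjourney", "replika"}

def is_tool_allowed(tool: str, employee_in_test_group: bool) -> bool:
    """Return True if the employee may reach the tool's endpoint.

    The block list is only meant to apply to a small test group.
    """
    if employee_in_test_group:
        return tool.lower() not in TEST_BLOCK_LIST
    return True  # everyone else keeps normal access

# The reported error amounts to the test flag being switched on for everyone,
# so the block list suddenly applied company-wide until it was corrected.
print(is_tool_allowed("chatgpt", employee_in_test_group=False))  # True
print(is_tool_allowed("chatgpt", employee_in_test_group=True))   # False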

Speaker 1 (13:56):
Yeah, definitely, no, I agree. And I think that some of these other services would probably be more angled toward internal needs, technology development, specifically coding, probably. Yeah, I could see that. Yeah, from an outside perspective, coding would be one

(14:19):
of the first things that come to mind, and then there's probably even more niche kind of stuff beyond that. But to ban ChatGPT, interesting, especially when you're the one. I think this goes back to a variety of different discussions that we've had, including one of them, I had a solo a few weeks

(14:42):
back sharing that a developer from Google is concerned with things like Bing and other AI as well. But it is interesting for these people, these companies, to invest so much in the development. And then the second they see an issue, they're like, well, we're shutting that down, but everyone else outside of this

(15:04):
company can keep using it. Yeah, exactly, which, if things grew, to be more specific, if things really were an issue with ChatGPT and it wasn't some internal Microsoft AI, and that issue grew, they would do what they would need to prevent the public from

(15:24):
being harmed, for sure. Oh yeah, oh yeah. And this was only a four-hour, if I remember right, shutdown. But here's some more details. So, privacy and security in AI: Microsoft's stance highlights the industry's focus on the potential risks associated with AI tools, such as data privacy breaches or misuse of sensitive

(15:46):
information. The mention of other AI services, like Midjourney and Replika, in Microsoft's advisory suggests a broader concern, though, for third-party AI tools.
One thing that I noticed is that, you know, ChatGPT is so

(16:08):
censored now. And censored, yes, censored in the way where Denson and I have talked about this off stream, largely. But one piece of this is, I have a massive thread on ChatGPT. I mean, it goes back, at this point, I think it goes back 60

(16:30):
days. But yeah, if I refresh or, you know, it asks me to log back in, even though that thread is long, it will only admit that it remembers back to the last time I logged in, the last time I refreshed the page. However, when I work with it to write long-form content, it

(16:56):
absolutely is subtly informed by things as far back as the origination of the thread itself. Yeah, which is pretty amazing. It is, and it's what I would want, it's what I would ask for out of that tech. And upfront, on an entry-level consumer or entry

(17:19):
consumer level, it appears and presents itself as, oh, I'm sorry, I don't remember what happened since the last time you logged in. You know what I mean. Yeah, and you really have to dive to prove that wrong, but I just find that odd. You know what I mean. What do you think?
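
[The behavior described above is consistent with how chat-style APIs are generally used: the application resends earlier turns as context with each request, so a long thread keeps informing later answers even when the interface claims otherwise. Below is a minimal sketch using the OpenAI Python SDK; the model name and the history-trimming choice are assumptions for illustration, and this says nothing about how the ChatGPT product itself manages memory.]

# Sketch: conversation history is passed back in with every request.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

history = [
    {"role": "system", "content": "You help draft long-form articles."},
    {"role": "user", "content": "We settled on a casual tone earlier in this thread."},
    {"role": "assistant", "content": "Got it, casual tone noted."},
]

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",      # assumed model name
        messages=history[-20:],   # only the most recent turns are resent
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("Draft an intro about Microsoft briefly blocking ChatGPT."))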

Speaker 2 (17:36):
Yeah, no, I mean, I think it is a little, I think it is a little bit strange when, when you're, you're the main company that is banning your own product without any, you know, customer say, or without any disruption to your customers or

(17:59):
whatever like that. Yeah, I think it is intriguing, and it is kind of, a little bit of a, it puts up question marks or whatever like that. Like, was it because, I could see something, right, of, you have these employees who are using ChatGPT, you know, at a much more

(18:22):
vigorous pace, right, and they continually push, say, like, I don't know, I'm trying to think. They continuously, like, add more to what the capabilities of

(18:43):
ChatGPT is. Like, feed it more information, but more personal information. And maybe I could see, like, ChatGPT pushing out some of that information, like, when someone's asking, like, what the sky is, you know, is the sky blue, or whatever like that. But then the Joe Blow on the other side went overhead, you

(19:03):
know, went over and, like, added, I don't know, Tom's personal information. Then on the other side, ChatGPT says, like, oh, hey, here's your answer to, you know, why is the sky blue, or whatever. And boom, it's like Joe Blow's wrong information or whatever,

(19:26):
yeah, like it's somehow ingested the wrong, or spit out the wrong answer, or something similar to that. I could see something like that happening, especially because it says that they found, or they saw, a glitch and they were able to kind of figure, fix that and then, like, resume within, like,

(19:46):
four hours, which is pretty quick, which means that it wasn't too painful, at least not that I can think of.

Speaker 1 (19:55):
Yeah. Well, and what this says here, although I have to dive deeper, is that, you know, there was quick research being done that shows other AI services, like Midjourney, which is image generation.

(20:15):
Right, Midjourney is.

Speaker 2 (20:19):
Yeah, yeah, it's image generation. It's like the other, it's the knockoff brand to DALL-E in some ways.

Speaker 1 (20:29):
Technically, yes, it is the second tier to DALL-E. So, and then Replika as well. It just showed that they have broader concern for these third-party AI tools, and third-party being, some of these are integrated through plug-ins, as is DALL-E, actually, with ChatGPT.

(20:50):
But yes, I think that was a large part of the concern there, not just the fact that they thought, oh, this conversational AI is part of the issue, which they were wrong about, as we know. Yeah, I think they're more just concerned about how the other integrations with it could have been messed up as well, or

(21:15):
causing the issues, you know. Yeah, I could see that, yeah. So, in light of the incident, they're still encouraging the use of Bing Chat, which is run by ChatGPT largely, or integrated with it, and then they say they can manage the safety of it better.

(21:36):
That's kind of a cop-out, I feel like.

Speaker 2 (21:39):
Yeah, exactly.

Speaker 1 (21:41):
Yeah. So, and then, AI integration and Microsoft products. Let's dive into this one a little bit. Microsoft has been incorporating OpenAI technology into its flagship products, which we talked about with, like, Windows 11 and things like that. The integration demonstrates Microsoft's commitment to leveraging AI for improving its software ecosystem.

(22:04):
We've discussed this. I'm going to keep reading these notes here. No worries. The accidental blockage could raise questions about Microsoft's ability to manage the complex interplay between AI innovation and operational reliability. But this is a great point, because they are among those

(22:28):
that are trying to establish themselves as a leader in AI integration for how it benefits its consumers. And you know, there's a fun little quick detail I have about this regarding my own job. I actually have a new work laptop, which has been phenomenal. It has Windows 11 on it, but I went into the settings and it is

(22:51):
actively set up by our IT to block the AI integration that is being offered for free, well, included in my Windows 11 license. Right, yeah, it's being blocked. And I did make the case on one

(23:11):
aspect of it to the IT guy, who has not made the change yet, but he did seem interested, where it can just go through my emails and then help me create a work calendar from just reading that. I love that idea. That's a fantastic idea. I think most people would benefit from that, but those

(23:34):
types of things are just not made possible without OpenAI, right? Yeah.
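
[For anyone curious how an IT department typically blocks the built-in Windows 11 AI integration mentioned above, it is usually done through Group Policy. The registry path and value name below are our best recollection of the commonly cited "Turn off Windows Copilot" policy; treat them as assumptions to verify against Microsoft's documentation.]

# Hypothetical check (Windows only) for the Copilot group-policy value.
import winreg

POLICY_KEY = r"Software\Policies\Microsoft\Windows\WindowsCopilot"  # assumed path

def copilot_blocked_by_policy() -> bool:
    try:
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, POLICY_KEY) as key:
            value, _ = winreg.QueryValueEx(key, "TurnOffWindowsCopilot")  # assumed name
            return value == 1
    except FileNotFoundError:
        return False  # no policy set; Copilot left at its default

print("Copilot disabled by policy:", copilot_blocked_by_policy())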

Speaker 2 (23:43):
Yeah, exactly. Yeah, OpenAI does make a huge impression on, like, well, Microsoft's overall push now, right, into the search space, but also just in general, like, the AI race that's kind of going on.

(24:04):
So, yeah, I think it's, I think it is interesting to see how Microsoft is handling this situation. I don't know.

Speaker 1 (24:25):
I do too. I think that, you know, for them to promote themselves as AI leaders, and really, I have to give them credit, man, they're doing it through actual work. You know what I mean. Yeah, yeah, it's not like they're out here saying, oh, look at us, we have one language model. You know? No.

(24:46):
And then, conversely, I still have to fight Siri over the same things I had to fight it over since, like, 2017, you know what I mean, which is remarkable. And I understand Apple is approaching it differently. There are quotes on that. But regardless of that, you know,

(25:10):
it is interesting at the same time, and I think this would be my biggest takeaway for this whole episode, and there's still a couple other points we can touch on here. But for this company, really, on a consumer level, an industry leader

(25:32):
of integration with AI, for them to still be that level of hesitant, to where they shut it down entirely for their whole staff for half a workday because of an issue they couldn't even pinpoint, I think that should be a message for everybody, don't you? Yeah?

Speaker 2 (25:53):
Yes and no.
Yeah, yes and no.

Speaker 1 (25:55):
Yeah.

Speaker 2 (25:56):
I mean, I think it has the potential of becoming a problem. But I think the problem, the reason why I don't even see it as being that scary, is, like, it depends on how these fears, right, become realized.

(26:19):
Right? If it's something similar to where it's like they believe that it had a sentient thought process or something similar to that, and they're like, oh, cut it. I mean, sure, that's possible. I don't think, but I think we're kind of far away from that kind of possibility.

(26:40):
But I can easily see it mixing up data and spitting out the wrong thing, or, or, or scaring someone into believing that it's sentient, kind of similar to how Bard did. Or, not,

(27:02):
technically it wasn't Bard, it was a different model, did for that Googler. I think that's the way they want to, the way they want to be called, Googler. Yeah, yeah, yeah, yeah.

Speaker 1 (27:20):
I remember that. Actually, a side note on that, it was actually, I believe, three weeks ago we did that, or I did that episode, where I had the interview of this guy on 60 Minutes who, he did help develop Bard. He also, well, they labeled it in the story as,

(27:42):
he helped develop earlier AI versions that Google used, and two of them that we knew of and have talked about on this podcast, they had to shut down. It was just for internal staff. But do you remember that? I remember you telling me about one of those.

Speaker 2 (28:01):
Yeah, no, I think I remember. I think I remember that in that case it was that the AI model that they had stopped talking to them, or I should really say it stopped talking to them in English, in a natural language that anyone would know.

Speaker 1 (28:22):
Yes, it developed its own language and then taught it to other internal computers. Correct?

Speaker 2 (28:29):
Correct. Yeah, it developed another language to communicate with the other AI ones that they had. Yes, and so then they shut that down.

Speaker 1 (28:40):
And if you listen to that episode or watch that interview, that's what this guy's concern is: AI creating its own form of communication, but writing it in a way that code monitors can't tell that that's what's happening. Right, I could see that. Yeah, so it's a valid concern on those types of things.

(29:03):
That's not what Microsoft is talking about here, yeah, or at least that's not what they're openly talking about. But really, when you integrate so much, and it's developed so quickly, and these rollouts are now pushing other companies to try and do more quicker, I don't believe that we're at a point

(29:25):
right now where the consumer usage of AI should be a concern. But in due time, in five years, in three years, maybe it's something that should be, that should be hashed out. And I will say, too, I don't want to see the helpfulness of certain things

(29:50):
like Bard and ChatGPT be taken away. But I'm impressed, and I talked about this a little bit as well, I'm impressed and I respect the fact that the government is approaching this proactively with executive orders and then the bills that have been going through the Senate and House of

(30:11):
Representatives as well, because we've talked multiple times about how that is not happening, or never has happened, when it comes to social media. Right, yeah, yeah. And now we have just every major social media just winging

(30:33):
it and doing wild and crazy stuff. Now, you remember, we talked about this. What was this, last week, right? Was it last week or two weeks ago? No, it was more than that, wasn't it? With Meta?

Speaker 2 (30:52):
Oh, creating their influencer AIs.

Speaker 1 (30:56):
Yes, which are based on real people. I haven't even checked in on those since we discussed it. But even just so, today I got the notification, oh, it's this person's birthday, right? So I went there and it was like, write a birthday wish with AI. I was like... On Facebook.

Speaker 2 (31:19):
Interesting.

Speaker 1 (31:20):
Right, but they're kind of, in my opinion, overextending, trying to be like, hey, look, we have AI too, that kind of thing.

Speaker 2 (31:31):
Yeah, yeah, I feel the same way. I think they're just, yeah, it's just not the right method. You know, I agree.

Speaker 1 (31:44):
It's just as interesting to see that. And then they're the ones not being regulated or monitored to nearly the same level as everything else.

Speaker 2 (31:56):
Yeah.

Speaker 1 (31:57):
Yeah.

Speaker 2 (31:59):
And I think that's really, you know, I am, I hope our regulators continuously kind of push the possibility of, or, not really, sorry, I hope that our regulators continue to push on, like, new

(32:21):
legislation as AI develops, and, like, stay on top of it, because it can get out of hand very quickly. Like you already said, we're already kind of seeing that, like, everybody's coming out with their different types of AI bots and stuff like that to try and capitalize on what ChatGPT

(32:44):
did for OpenAI. And so I think, I just hope that our regulators are able to kind of keep up with that trend, because otherwise we're going to get into a really bad situation where either we'll be filled with an abundance of mediocre AI

(33:11):
that are just kind of flying around the internet, I could see something similar to that, to getting something that has, like, no ethics. By creating a, and we talked about this before too, you know, where some of these bigger companies that are really pushing and rushing to get these new AI, you know, general AI,

(33:37):
generative, I should really say, AIs out the door are kind of getting rid of a lot of their ethics committees that they have on these AIs, or how it's supposed to be, you know, when it comes to the integration and development of these AI models.

(34:04):
I think, you know, you want to make sure that you either have legislation that keeps that in place, because I think, you know, as competition continues to grow, you're going to just see that more and more of them are just going to be like, well, we don't, we don't need this, or we'll fix it later. We'll add in ethics afterward.

(34:24):
Afterwards, it's going to be a DLC or something.

Speaker 1 (34:27):
Yeah, yeah, right, new downloadable content. Yeah, oh man, yeah, exactly. Well, and the thing about it is, too, you know, you see, well, I will start with this. I have not been on TikTok nearly as much as I was the first half of the year, even the last half of last year, in my

(34:51):
experience, which, I would be interested to know how other people feel about this, so let us know in the comments. But this interface with TikTok is trash. Their upgrade to their AI, or, I'm sorry, their upgrade to the, oh gosh, darn it, why am I blanking on the phrase?

(35:11):
But the tech that gives you your feed, right, because it is what you like, you know, like, yeah, the threads and all that. I can't stand it because, to be honest, for some reason, most of the things it shows me now I have zero interest in. But that's neither here nor there. You'll open your phone, it'll be some video, but then you'll

(35:34):
go, and then the next one's eligible for commission, and then there's two live streams, and then there's paid ads, and then there's another video, and then there's, like, a get-yourself-some-new-shoes kind of ad, but it's eligible for commission. And it's very, very frustrating to use. But it's interesting to

(35:58):
see that. And then I, one of the ads I would keep coming across, keep coming across fairly loosely, especially considering I haven't really been using the app that much outside of job needs. But it's usually an AI that's sitting there telling me, hey,

(36:23):
you need some female attention. I love the app, and let's talk about what we can show each other. Right, I can see that. Yeah, which is bizarre, on a completely unrelated side note, because, why? You know, I mean, who? I'm not going to be a guy that starts up with having a chatbot

(36:46):
that, like, is supposed to be sexually attracted to me. That's very, very strange. But regardless, you don't want to hurt me. No, I mean, I want to hurt, bro, but not hurt. Yeah, my thing is this, like, I just don't understand that at

(37:07):
all. I completely have that. Really, cow was the, the, the one that broke the chain, you know what I'm saying? Yeah, crazy, like, you're just trying to get me involved, so I just disengaged from that. Anyway, I say all of that to say, we all need that consistent

(37:27):
thing that we relate to, that we connect on, and that makes each other who we all are. And so for me, that thing is, I would imagine, for most people, even if they don't say it, so they know, that's what we need, is each other. You know, I mean, interaction, that connection with each other,

(37:48):
right?

Speaker 2 (37:49):
Yeah, exactly.

Speaker 1 (37:51):
Yeah, so I know we're kind of getting close to wrapping this up, so let's run through the rest of these notes here real fast. Okay, so Anonymous Sudan is a group. Okay, they claimed the attack on ChatGPT here, and it brings attention to vulnerabilities

(38:13):
that high-profile AI services might face. Right, because we have had trouble finding, finding who we've needed to find, you know what I mean. Yeah, it's meant more coverage of these things that have gone on behind the scenes, and so, yeah, you know, you kind of hate to see that with certain situations.

(38:34):
Is there anything you would add to that, man? What are your thoughts on that?

Speaker 2 (38:40):
I mean, trying to think, I think my, my, anything that I would add to it, I think it's, it's mainly just, I mean, we already knew about these types of concerns. Just like any new piece of technology, there's always going to be someone who's going to be able to exploit it, right?

(39:02):
And so I think it's just kind of a wake-up call for all of us in general, like, no matter how powerful an application is, we just have to be a little bit more careful.

Speaker 1 (39:17):
Yeah.

Speaker 2 (39:18):
And the companies that made those as well.

Speaker 1 (39:21):
Oh, yeah, yeah. I mean, one of the things that this goes on to say, and that we've, we've known as well, is that AI development doesn't occur in a vacuum. It's subject to international politics, ethical debates. It's not just America, right? Yeah, that they have to build this stuff off of. There's China, there's the Middle East, there's

(39:43):
Russia, I mean, probably not Russia, I don't know, you know, not to the same degree when it comes to ChatGPT, but, but all these places have an impact, you know. And so it's not just us. But then, on top of that, the AI developers, they still develop it to make it more useful and friendly.

(40:03):
And really, you know, outside of the things that we've seen ChatGPT specifically add and disappear, not outside of those things, but on those things, which, a lot of the time, it probably is,

(40:25):
because the way it resonates in a different country, that is not the way that it resonates with us.

Speaker 2 (40:31):
Yeah, yeah, exactly.

Speaker 1 (40:34):
Yeah. And so, yeah, those are really the big things from the story, which is interesting. And, of course, it's good that they found out that that wasn't the issue that they were having. It is interesting they were having issues during this time, but still a

(40:54):
resounding and good review for all of us who do use ChatGPT, whether it be for fun or for work. Yeah, yeah. Did you have anything that you wanted to add on this topic, my guy?

Speaker 2 (41:08):
No, man, no, I think we covered that really well, honestly. I think, you know, if anything, this has just been a really good, it's been an interesting way to see, kind of, like, even the developers of these technologies are still having issues of

(41:33):
their own, right, that it's not just going super perfectly smooth. And it gives us a little bit more perspective when it comes to, like, our own, I think, I guess I should say, like, our own consumer concerns for these programs, mainly for, like, these AIs

(41:57):
and how they're being developed and stuff.

Speaker 1 (42:00):
Yeah, no, I agree. I agree. It's interesting to know that even on this level, right, like you were saying, there's still struggles behind the scenes, right? There's still tribulations for people that are working in ways

(42:22):
on things that aren't research-based conversational AI, which I do think, for the sake of the future of the tech and the industry itself, it was a good thing to watch these first, you know. Yes, to get people interested and to provide an exorbitant amount of information. Man, if I do recall right, 12,000 research documents, just

(42:49):
for science, informed ChatGPT, and that's just for science. That doesn't include medical. It doesn't include all sorts of other in-depth info you would need, you know.

Speaker 2 (43:02):
Exactly.

Speaker 1 (43:04):
So it really is amazing. I think it's a great thing, and a version of AI that I will always support. You know, the image and music generation stuff and all that kind of stuff, that's where I kind of start to go a little bit. But as far as making and sifting through high-level info and making it

(43:28):
available?

Speaker 2 (43:29):
Fantastic, big fan. Yeah, yeah, exactly, summarizing and stuff like that. It's great for that.

Speaker 1 (43:38):
Yeah, I agree. So, cool. With all of that said, I think we had a great discussion on this. I want to remind you guys, in case you forgot between now and then, the quick three best ways to support us. First, leave a rating and review wherever you're listening or wherever you're watching. Each and every spot has a spot to do that, whether it's even

(43:58):
just a thumbs up on Facebook or on YouTube, or you can leave us reviews on our Facebook page as well. Subscribe, follow wherever you're listening, wherever you're watching. Number two, jump on the live stream with us. Every Thursday night we are going live, and then the audio

(44:19):
just drops on Mondays. And so, wherever you're listening and wherever you're watching, those are your tips. And number three, if you want to support us with some, you know, merch, you've got all kinds of merch, all the latest and newest and greatest, linked wherever you're listening or wherever you're watching. So, with all of that said, I think we had a good one on this.

(44:42):
Thank you, guys, so much for listening. Thank you for watching. I'll catch up with you next week. Since you actually loved the tour, join us for a more fun

(45:41):
ride.