
August 5, 2025 33 mins

When the lines between technological assistance and authentic human learning blur, what are we really teaching our children? In this thought-provoking conversation between a mother and daughter, we dive deep into the controversial world of artificial intelligence and its far-reaching implications across generations.

We explore the fascinating parallel between today's AI tools and yesterday's encyclopedias, questioning whether the fundamental concerns about student learning have really changed. One particularly revealing anecdote involves a student repeatedly flagged for AI use when he was simply receiving help from his mother—highlighting how our systems may be struggling to distinguish between technological and human assistance.

The discussion takes unexpected turns as we examine AI's darker applications, from sophisticated scams using voice cloning technology to the creation of deepfake videos that spread misinformation. 

Perhaps most intriguing is the paradox we uncover: while academic settings often discourage AI use, corporate environments actively embrace these tools for efficiency. How can we prepare students for future workplaces if we don't teach them to appropriately leverage the very technologies that will define their careers? 

Throughout our conversation, generational perspectives provide rich context for understanding how differently Boomers, Gen X, and younger generations approach these technological changes. The philosophical question looms large: as AI capabilities advance, where do we draw the line between helpful tool and potential threat? Join us for this candid exploration of artificial intelligence's place in our evolving world.

email: boomerandgenxer@gmail.com


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Welcome everyone to today's show.
A boomer and a Gen Xer walk into a bar, coming to you from the rabbit hole studio, where you, as our listener, will experience some wit and wisdom, some smart-assery, and a mother and daughter questioning: are we even related?
My name is Bobby Joy and my co-host is my mom, Jane, and for

(00:25):
the next little while we are here to entertain you. Welcome home.

Speaker 2 (00:30):
Hi Bobby, it's so good to see you at the other end
of the studio.

Speaker 1 (00:34):
I am so glad you guys are back in Iowa.

Speaker 2 (00:37):
Oh my gosh.
So yeah, we had kind of an unexpected trip, kind of expected, kind of unexpected, to the mountains of Georgia, to our home there, and we were gone for, what, a month I think, at least.

Speaker 1 (00:50):
Yeah, I think it was more like a month and a half. And so I think it was a month. I think it was like two years, it seems like, when your mama's away.

Speaker 2 (01:02):
Oh yeah, anyway.
So it's good to be back, it's good to be back in the studio and, uh, recording again, and it's so nice to see your beautiful face, Bobby.
So what's new?

Speaker 1 (01:14):
with you.
Tell me what's new.
Oh man, just been working a lot of hours and, uh, sleeping. So you work a lot of double shifts? I do. I work a lot of 18-hour shifts, um, quite a few times a week right now.

Speaker 2 (01:29):
Wow, that's crazy. It is. Just mowing the lawn for me is tough. I mowed for an hour and then came and got Dr Domain. Could you do the other four hours, please?

Speaker 1 (01:40):
Oh man, I just can't do it. I know, I seen him when I pulled up and he's out there on the tractor and I'm like, what is he doing?

Speaker 2 (01:49):
Like, take a day off. Yeah, we can't, we got too much going on. And so, just so that our listeners know, when you retire... I don't know how we got all this work done when we were working. And he is still working, I am not, but my schedule is like full. Well, and my other parent, my dad.

Speaker 1 (02:10):
He retired years ago and I can get him on the phone
maybe once every couple of weeks.

Speaker 2 (02:15):
Yeah, and that's it, because there's just so much stuff to do, so much to get done, and a lot of it's not even fun stuff. It's doing all this work and stuff. Anyway, we have a hot topic today. We do. And so I'm kind of interested in what our listeners

(02:35):
would say about this particular topic. And, just so everybody knows, I did put out a message on my private Facebook page asking people about this particular topic, to find out what their thoughts were and how they utilize it. And so what is our topic?

Speaker 1 (02:54):
Today, Bobby? Our topic today is AI, artificial intelligence, specifically using it with things like ChatGPT and Google and things like that, and so what people are doing with potentially schoolwork or college work, or if they're using it in their profession, maybe their jobs, and so we did

(03:17):
ask... or their personal life. Or their personal life.

Speaker 2 (03:20):
Yeah, so we did ask quite a few people and we got some responses, and I just want to tell everybody thank you for that. We had quite a few people that responded and told us whether they used it, whether they don't use it, how they used it, that sort of thing. So where do you want to start? I say we start with the schoolwork, or with that kind of

(03:43):
approach.

Speaker 1 (03:43):
What do you say?
Let's start with what our listeners have said, because the overwhelming majority of our listeners of all ages have said that they have not used it, and if they did use it, it was maybe one time. You know, when you Google something and it brings up the AI answer, you know, at the top of the Google page type of thing, but they really have not been using it.

Speaker 2 (04:06):
So would Gemini be considered AI?

Speaker 1 (04:08):
I don't even know what Gemini is.

Speaker 2 (04:10):
Okay, Gemini is an app that's on my phone. I didn't apply for it, it's just there. I didn't apply for it. It's not a job. You didn't download it. I didn't download it or anything. It came on my phone, and it came on yours too, right, Dr Domain? Yes, hello. Is he... Hello?

Speaker 1 (04:29):
Dr Domain.
He's AI for that.

Speaker 3 (04:30):
I know, I was kind of working on my AI bot to try and figure out how to push the button.

Speaker 2 (04:36):
Push the button. Push a button. But it was on your phone too, right?

Speaker 3 (04:40):
Yeah, it's Google's derivative of AI.

Speaker 1 (04:43):
It's their form.

Speaker 3 (04:44):
I think they're trying to push out Google Assistant and replace it with Gemini.

Speaker 2 (04:49):
So I'm just kidding.

Speaker 3 (04:50):
Yeah, it's pretty lackluster.

Speaker 2 (04:52):
Yeah, it is lackluster, and I'll tell you what I've used it for. I have used it for taking photos and saying, make this into a comic. Oh Jesus. I take pictures of you girls and I put it through.

Speaker 3 (05:10):
Life-changing events.

Speaker 2 (05:12):
And I try to make comics out of it. She's making the world a better place. I am. I'm trying to do my best and do my part.

Speaker 1 (05:19):
I use it mostly to figure out how to cook things. If I forget how long to cook something, I'll just, you know, punch it in real quick, and you know what's at the top of the Google pages. 99% of the time it's AI, it's Google's AI, and so it tells me. But, um, I do have a really funny story. Uh, you know, a lot of people have had issues, especially like

(05:42):
college-age people, high-school-age people, with, you know, the professors, the teachers, things like that, who have said you're not allowed to use AI. You know, and they actually run it through, I don't know what they run it through, to see if it's AI. But my son has been online learning since, um, last January, and he

(06:06):
comes to me a lot for his schoolwork, because, I don't know, I guess I'm the only one there, but you know, I am a pretty smart cookie when it comes to certain things. And so he does come to me, and he does alcohol, sex workers, sex workers' rights, you know, things like that. But he does come to me, you know, with his questions, or,

(06:28):
hey mom, do you know anything about this? Hey mom, how would I word this? You know, because it has a specific topic type thing. And I will, I'll sit there and I will tell him, this is how I would word it, this is what I would say, and he types it in. He has, at least once a week, been sent an email saying that

(06:50):
he's using AI, and he keeps emailing them back going, I'm not using AI, that's my mom. And they do not believe him.

Speaker 2 (06:58):
Do they think that you're using AI to get the
answers?

Speaker 1 (07:01):
No, they think he is using AI and then playing it off as his own answers, and I'm like, look, first of all, I'm not that intelligent. Okay, I am smart, but I'm not like computer intelligent. I'm not AI intelligent. Well, apparently I am. But they keep pinging him and saying, we're not going to accept this because you used AI.

(07:21):
And he keeps saying, that's just my mom, that's just what my mom said. I just typed down exactly what my mom said. And they are just on him every week. And you didn't look his question up?

Speaker 2 (07:33):
No, no, I sure did not. Well, I know that some of the arguments against using AI for schoolwork have to do with cheating and plagiarism, and the most significant concern is the potential for students to use AI to generate assignments without engaging in the learning

(07:53):
process itself.

Speaker 1 (07:55):
Right.

Speaker 2 (07:55):
So back in my day. And Dr Domain's. I've said that twice now, Dr Domain's.

Speaker 3 (08:02):
I'm Domain.

Speaker 2 (08:03):
You are Domain.

Speaker 1 (08:05):
So back when they used rock and chisels for
schoolwork.

Speaker 2 (08:08):
Back then we had to use encyclopedia books.
Oh no. And so we had the Encyclopedia Britannica.

Speaker 1 (08:17):
Oh yeah, I love those, I love those.

Speaker 2 (08:19):
What were the other ones, Dr Domain? Do you remember?

Speaker 3 (08:26):
I think there were the American encyclopedias, but I remember Britannica.

Speaker 1 (08:30):
I don't know, let me... Let me Google that. Ask AI, would you? I think that every household had the Encyclopedia Britannica, at least half the set. I know a lot of us were missing one or two volumes, that we just kind of had to wing it. So here's what's interesting: several of us... because, you know, they'd go,

Speaker 2 (08:46):
Well, don't write it down word for word, because that's plagiarism. Well, what did we do? We wrote it down word for word. So how are they going to check back? Yeah, so a lot... But a lot of the kids would write down the same stuff, because we were all using encyclopedias, because we didn't have anything online. Well, it just wasn't available.

Speaker 1 (09:05):
Well, I was going to say, well, we're both... All three of us are older than Google, so it's not like we had the Internet to run to and type in, oh, you know, how do cows make milk, or you know, things like that. We literally had to go to the library. We had to go through the index cards, find the book we wanted.

Speaker 2 (09:26):
We had to, you know, the Dewey Decimal System, and people right now are listening to us going, Dewey Decimal? It's only the old people that are going to know exactly what

Speaker 1 (09:37):
that is, the cupboards of cards that we had
to go through.

Speaker 2 (09:41):
But let me ask you, if you had to use the Dewey Decimal System to find a book in the library today, could you do it?

Speaker 1 (09:47):
Yes, yeah, me too.

Speaker 2 (09:49):
Yeah, absolutely, yeah.

Speaker 1 (09:51):
And they still do use the Dewey Decimal System. It's just now everything's computerized, so it'll tell you exactly where it is without having to look through all of those cards to see. And then you know what's great is, if you look it up on the computers at the library, it also tells you if it's checked out or not. So you don't have to spend an hour looking for that book going, did somebody check it out?

Speaker 2 (10:13):
Did they have the book in the first place?

Speaker 1 (10:14):
Right right.

Speaker 2 (10:16):
So one of the concerns, of course, is the fact that this undermines the purpose of education, and they feel it can lead to probably unfair advantages for students who do rely on AI. Now, some of the things that they talk about as the arguments against AI, I don't agree with. You don't?

(10:39):
No, I haven't even looked at them. So one of the things is, they say that it decreases your critical thinking and creativity. So that's because of over-reliance on AI. Now what could you do to get around that? To me, if somebody's going to use AI, I really don't see much difference between pulling that up and pulling up an

(11:00):
encyclopedia and reading through it, other than the fact that they're copying that and pasting it and using it. Here's where the teachers or the instructors have to do their job. If you feel that Bobby turned in a project and it was AI, that's your opportunity to sit down with Bobby and say, hey,

(11:23):
Bobby, explain this to me without looking at the paper. In your own words. In your own words, tell me what this says. And if they can do that and they understand the process, what's the harm?

Speaker 1 (11:35):
Well, and here's another thing: if the teachers are actually teaching, instead of saying read this chapter and do these questions, if the teacher is actually teaching the lesson, there shouldn't be an issue with it.

Speaker 2 (11:48):
And there is some skepticism about how teachers are teaching today. Now, we have a good friend, I'm not going to mention her name, but we have a good friend who was a teacher for years, and I know for a fact that she was a really good teacher, and bless the teachers.

Speaker 1 (12:05):
Yeah, bless the teachers who put up with some of
this crap.

Speaker 2 (12:08):
But I will say, you don't bless the teachers who are sitting in a corner saying, you know, I get paid whether you learn or not, because you know you're really not doing anybody any favors. Now, your parents have a responsibility to drill into you, as a learner, as a child, as a student, that you need to pay

(12:29):
attention, you need to participate, because it is true, those teachers are going to get paid whether you learn or not. Correct. One of the things that I think, and this is going to go off topic here.

Speaker 1 (12:40):
Oh, big surprise.

Speaker 2 (12:43):
Because you know how my mind wanders.

Speaker 1 (12:45):
You've been back for five hours, here we go.

Speaker 2 (12:47):
It's been bouncing off the wall. It's this whole thing about No Child Left Behind. Oh, that's bullshit. What a horrible thing that was done to our education system.

Speaker 3 (12:57):
It is. It's horrible, No Child Left Behind.

Speaker 2 (13:00):
Yeah, they need to be left behind.

Speaker 1 (13:02):
Sometimes they do, sometimes they do, sometimes
they miss the train.

Speaker 2 (13:08):
I think, yes, that's right. They didn't make it to the station. But I think that there were a lot of kids that needed to be left behind. If you don't do the work and you don't put forth the effort, and you're... you know, the parents aren't really pushing these kids, why do you need to pass them on? Right. Which gives us one of

(13:28):
the reasons why we have such low IQs and low... what is it? Low SAT scores in the United States for kids that are graduating from high school. Right?

Speaker 1 (13:40):
Right, yeah, I agree with that, but it's been a problem ever since I was in high school. I know kids that graduated high school, not with me, but at the same time, and they couldn't read The Cat in the Hat. Yeah, they were just pushed on because either they were athletes, or who their parents were, or the teachers just didn't want to deal with them anymore.

Speaker 3 (14:01):
Yeah.

Speaker 1 (14:01):
You know, and they were pushed on and they came out
of high school illiterate.
Yeah, absolutely.

Speaker 2 (14:07):
Yeah, I don't believe in them.
So have you watched some of these reels on Facebook where they have somebody who is standing asking a young person how many states are in the United States? Oh, my favorite is, if you go 60 miles in an hour...

Speaker 1 (14:25):
How fast are you going? And the looks on these young people's faces, it's like you just told them that their cat died or something. I mean, just the terror of them

Speaker 2 (14:37):
trying to figure this out cracks me up. Or there's another question that they ask: if you were born 10 years ago, how old would you be today?

Speaker 1 (14:48):
Or, if your sister was a year old when you were born, how old would they be when you're, you know, this age? And they're like, wait, what?

Speaker 2 (14:56):
So some of the argument is that, you know, students could become passive learners, lacking the ability to think independently and kind of generate their own thoughts. Well, we have that today.

Speaker 1 (15:08):
We do have that today. The entire school system is built around children not having their own thoughts. It's built around, and I know you're probably going to disagree with me on this, but it's built around basically putting out people who all think alike, who all know the same thing, who all, you know, go the same path, or, you know, things

(15:30):
like that. It's indoctrination at its finest, and I don't think that any school actually teaches free thinking.

Speaker 2 (15:38):
No, I don't disagree with you on that. I totally, totally... I can't believe what I'm saying. I do, I totally agree with you on that.

Speaker 1 (15:47):
You're on one of my conspiracies.
Oh my gosh.
Yeah, I've turned her to the dark side.

Speaker 2 (15:53):
Yeah, no kidding. So the other thing that comes up when we start talking about AI and kids, you know, or students, it doesn't have to be kids, students: they say that unequal access... access to AI tools may not be equitable, and potentially, you know, you've

(16:14):
got people who do have access to it, know how to use it, using it and getting better grades than somebody who has to really dive into it.

Speaker 1 (16:24):
Correct, and you're looking at that both ways, because, you know, most of the schools anymore use laptops. They don't use books and paper. And these kids who go home at night, who have no internet access, they don't have a library to go to for free internet, they don't have the ability to go online and finish their schoolwork or study or anything like that.

(16:45):
There is a huge division in our country of these kids, the haves and the have-nots.

Speaker 2 (16:51):
If you don't have access to that and you only have access to the Encyclopedia Britannica that your grandma has.

Speaker 1 (17:00):
She still has those on the shelf, from the 1985 edition.

Speaker 2 (17:03):
She's not going to get rid of those, because that's part of your inheritance right there. Countries in there that don't exist anymore. That's right. That's part of your inheritance, that junk that's in grandma's house. I forgot what we were talking about... uh, unequal access for kids. So, even if they give you a laptop, because a lot of schools give you laptops now, right? Right. What if you go home

(17:26):
and you don't have internet access, right?

Speaker 1 (17:29):
You have to go to a library. Is there a library close by? Is there a McDonald's close, right? Are your parents able to get you there if it's not close

Speaker 2 (17:37):
by? And how long are they going to let you sit there and write?

Speaker 1 (17:41):
Order, right. And if it's 110 degrees outside and the library is closed, you're not going to want to sit outside and do your homework. I mean, it's not a conducive environment for these kids to learn when they don't have equal access to these things. I agree with that. So let's move off of the kids thing, let's move on to what else we do with AI. Okay, okay.

(18:02):
So, you know, you've seen these videos and stuff. I make comics of you kids. You make comics. But you've seen these videos of, you know, especially famous people, where they use their likeness and their voice and they make them say whatever they want to say. They can make cats sing now. I mean, they can make the president say whatever. Well, he does say whatever he wants, but they can make him say whatever

(18:24):
he wants, and you don't know if it's AI or not, especially the, you know, older and very younger generations, the people who aren't savvy to the AI. They don't know if this is real or not. They, you know... and there's even scams where they're going to use the voice of, let's say, your niece, and they're going to

(18:45):
call you, and your niece is going to say, oh my God, Aunt Janie, they're holding me hostage and I need you to wire me $1,000 in the next 30 minutes.

Speaker 2 (18:54):
Never really liked you in the first place.

Speaker 1 (18:56):
Well, I mean, you're on your own if you got to call Jane for any of that. But they are, and it's actually duping people. It is, it is, and it's really sad.

Speaker 2 (19:08):
It is sad, because a lot of the older people... hey, Grandma, you know. Well, and you panic.

Speaker 1 (19:13):
You're in that panic mode and you're like, oh my God, what do I do? They're gonna, they're gonna hurt her, they're gonna do this. You know, it's kind of the same with, like, the spoofing of phone numbers, where they'll spoof a phone number that says, I'm calling from the energy company, and if you don't pay nine hundred dollars, we're coming out in 15 minutes to disconnect your, your, your electric, and I can take

(19:35):
your credit card, yeah, I can take it right now and we'll stop it. It'll never happen and they'll never show up, and it gets you in that panic mode of, I need to fix this right now so that I'm not without, or my children aren't in danger, or anything like that. And it's a really sad circumstance that this stuff isn't regulated like it should be.

(19:56):
I mean, you know, I'm not really for regulations, but you throw stuff like this out there and you've got to at least put warnings or, you know, make it known that these things... there's got to be some way to tell, there's got to be something in it that we can recognize.

Speaker 2 (20:14):
Yeah, because it is. I mean, it's going to be detrimental to all of us, because there's so many people who are believing not only the fake news that the newscasters are putting out, right, but also what they're seeing on their, on their networking, on, you know, any type of Facebook. I was gonna say, and especially social media. Yeah, I mean, a lot of

(20:34):
people get their news from social media, because that's all that they look at all day. Yeah, and they are, they're believing this. Or they see their favorite actor, right, actress, right, and they're saying something and it's like, oh. Or they get an inbox: oh, I'm doing a movie.

Speaker 1 (20:50):
I just need a thousand dollars and I'll come to your house. And, but you know, they obviously know your political and religious persuasion, it's all out there, because of your algorithms.

Speaker 2 (21:04):
And so let's say that Kurt Russell is my favorite, you know, and I believe...

Speaker 1 (21:08):
Pulled that out of nowhere.

Speaker 2 (21:11):
And he's the one that I, you know, I look at and I go, oh, I would believe anything that he says. Based on my algorithms, they're going to send me something that makes it look like he said these things, right. And I see not only the reels, but I see the posts, and I think, people, they're pulling these pictures off... these people are not really saying this stuff. Right, right,

(21:34):
come on. And you know.

Speaker 1 (21:36):
Another thing that I noticed, that nobody said anything about on the Facebook post that we made, um, but was like using AI for, like, um, a breakup text, or a text to your crush, things like that. Um, oh, a text to, you know, your man, to keep him, you know,

(21:58):
because he's drifting away. There's this crap out there and I'm like, if you can't use your own words... first of all, use your own damn words, because that's the person that you are. And, you know, you're using this AI, and who are they going to fall in love with? Who are they listening to? They're listening to a bot that's probably out of India,

(22:18):
that has no idea who you are, who they are, and it's like, just what are we doing here?

Speaker 2 (22:24):
I... what are we doing? I understand what you're saying. I'm going to just be honest with you. I'm going to throw this out on the table. I had a situation here recently where I had to kind of get into a debate with somebody that I hadn't spoken to in a long time, and I came across really, really hard on this, and I sent it through AI to soften it up. Because that doesn't even

(22:45):
surprise me, though, honestly. I couldn't, and it just, it was like, okay, I'm coming... well, you're very, you're very biting, you know, especially when it comes down to... I like to call it direct. Frank and direct. I have another friend who's frank and I'm direct. Yeah, we'll...

Speaker 1 (23:02):
We'll call it that if that makes you feel better, but it is. You know, I could see that happening where you needed to soften it up.

Speaker 2 (23:08):
I mean, would that be the same as sharp? Would that be sharp? Sharp?

Speaker 1 (23:12):
Yeah, no, it's biting. No, it's... right, sharp would be more like quick-witted and intelligent. I am. I'm saying you're biting, like you draw blood. Yeah, yeah, yeah, yeah, definitely. Oh my gosh. And then you come back a week later and go, was I too harsh?

Speaker 2 (23:30):
Maybe I should have said, did that come across too hard? I don't know. Maybe it did. I probably shouldn't have said it like that. Oh well, that's kind of how my conversations go. So, Dr Domain, we're going to ask you this question. So, um, how do you feel about kids using AI for college projects or for

(23:54):
assignments? Or, you know, not even college, just high school or school in general? What are your thoughts? Hello? Well, he's slow to the mic tonight.

Speaker 1 (24:03):
I don't know, his internet's lagging. You made him go out there and mow for four hours in the heat. He's got swamp ass. He's tired, Bobby.

Speaker 2 (24:17):
Just so that everybody knows Bobby has had a
drink tonight just before theshow.

Speaker 1 (24:22):
No, no, this is me. This is you? Oh, okay, okay.

Speaker 2 (24:26):
So what do you think?

Speaker 3 (24:27):
I think that, in some respects, I applaud those that go out and try and do their research, and whether they use a paperback encyclopedia or a computer, or Google it, or some form of AI, that's fine in that regard, to get the information, to help fast-track it. Because I can equate this to work a little

(24:49):
bit: trying to do more in a shorter period of time. In a corporate setting, that's applauded, and in an academic setting, I think it's frowned upon, because you are expected to take the time to really think about your answers and formulate your opinions, and that should be done on your own.

(25:09):
I agree with that.

Speaker 2 (25:11):
I still think that you can get around it. If I thought that you were using AI, and I sat down and started talking to you about it, and you were able to give me a great explanation, you were able to give me your thoughts, I don't see a big deal about that. That's no different than reading an encyclopedia, to me. Now, if you don't, and you go, I don't know what I read, I just

(25:35):
ran it through AI and that's it, then, you know, big F, you're failing.

Speaker 3 (25:39):
I think what AI is going to do, it's going to improve the academic world. I think it's going to force the teachers, the professors, to come up with more creative ways to test the students, which is a good thing.

Speaker 2 (25:52):
I don't think that's a bad thing.

Speaker 3 (25:54):
I think that's a great thing, because, you know, you don't want to necessarily ask a really binary question, how tall is the Eiffel Tower, for example. Maybe, tell me about the architectural components of the Eiffel Tower and how it was constructed, and things like that. Those might elicit more research and more, you know, more

(26:17):
thought around it, rather than just plugging a question into AI now. But the thing is, you've got it... there's a skill to using AI. It's not just a matter of how tall is the Eiffel Tower, whatever, but if you want to get to the real answers, I think it tests you on how good you are at asking the right questions,

(26:39):
because I can go into AI and ask it to create a formula for me in Excel, which is a very practical use that I've found in my profession. But if I don't word it correctly, or if I don't know how to phrase it, you know, to elicit the right answer, I'll get garbage.
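
Dr Domain's point about phrasing is easier to see with a concrete example. The sketch below is a hypothetical illustration in Python of the same Excel-formula request worded vaguely and then specifically; the column names, the SUMIFS scenario, and the send_to_ai placeholder are assumptions made for illustration, not any particular tool's API.

```python
# A minimal, hypothetical sketch of the "skill of asking" point:
# the same Excel-formula request, worded vaguely versus specifically.
# send_to_ai is a stand-in for whatever chat tool you use; it is not a real API.

def send_to_ai(prompt: str) -> None:
    """Placeholder: in practice you would paste `prompt` into ChatGPT, Gemini, etc."""
    print(f"Prompt sent:\n{prompt}\n")

# Vague wording -- the kind that tends to come back as "garbage".
vague_prompt = "Make me an Excel formula for my sales sheet."

# Specific wording -- names the columns, the condition, and the desired output.
specific_prompt = (
    "Write an Excel formula for cell D2 that sums column C (Sales) "
    "only for rows where column A (Region) equals 'Iowa' "
    "and column B (Month) equals 'July'. Use SUMIFS."
)

if __name__ == "__main__":
    send_to_ai(vague_prompt)
    send_to_ai(specific_prompt)
    # A plausible answer to the specific prompt would look something like:
    # =SUMIFS(C:C, A:A, "Iowa", B:B, "July")
```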

Speaker 1 (26:55):
Right, you know. Right. Now, on the corporate end of it, though, if I may interject: you know, we talk about preparing these kids, you know, high school and college, for the corporate world, for the adult world, for the things that they're going to do. If we don't allow them to learn and properly use AI, when we are expecting them, in the corporate world, to use AI to be

(27:20):
faster, to do more work, how are we preparing them for those jobs that are out there in the future?

Speaker 2 (27:28):
And I think that's a good point. And one of the disputes about using AI on work projects, or in the workplace: while it does offer numerous benefits that you guys both mentioned, like increased efficiency and automation, it has kind of sparked various disputes and challenges that often stem from, you know, the ambiguities surrounding the

(27:52):
ownership, the responsibility, the ethical concerns. And so, you know, you're using AI but I'm not, and I'm your coworker, and you're getting all this stuff done and I'm not. Dog-eat-dog world in the corporation. Yeah, that's right, I suppose. You know, that's right. And I think that... okay.

Speaker 3 (28:11):
So I worked for a large shoe company and I was in their IT department. This was 10 years ago, and we started using AI back then, and we were using this product that had this natural conversational component to it. So when you'd call and you'd get an IVR, it would be like, hey, how can I help you?

(28:32):
And then you would dictate in a natural voice and it would respond back to you. But it was still kind of kludgy. It didn't have perception, it didn't understand the nuances of the language, didn't understand slang or something, you know, those kinds of things. And so, I mean, that's 10 years ago, and we still haven't progressed. I don't think we've gotten that far with it.

(28:52):
I think part of the reason why we're not as far along with AI as we could be is because people are reluctant to use it. There's a fear of it replacing our jobs. I think it's a real thing.

Speaker 2 (29:06):
I think that's probably true, but also, I think people are concerned with the ownership and, you know, copyright infringement, because there are some ongoing legal battles going on regarding training the AI models on copyrighted data. It's not copyrighted because it's AI; it has no ability to be copyrighted. And so, you know,

(29:29):
if it's AI-generated and it resembles some existing copyrighted works, could that be an infringement? I don't know, you know. I mean, that's part of the battle.

Speaker 3 (29:39):
If we were so far along in AI, we wouldn't be on hold for 20 minutes to get a two-minute answer. If we were so far along in AI, we wouldn't have a check engine light anymore on our vehicles; we would know exactly what the problem is. It's this 28th sensor that they decided to put in to comply with some type of stupid government rule.

(30:01):
Government rule, you know. There's always those things. I think we're not as far along in AI as we need to be. That may be controversial. People will go, well, you know, AI is sentient, you know, has this self-awareness, and it's going to rule the world. That's my thought.

Speaker 2 (30:17):
No, that's my worry. I think it's going to be the end of the world.

Speaker 1 (30:21):
I mean, and we've seen it quite a few times, where, you know, especially machinery and robots that run off of AI have rebelled against their programming and things like that, once they do have enough information. Now, I think that it's kind of a double-edged sword, because who's giving them that information?

Speaker 3 (30:39):
We are. But they don't have an internal stream of consciousness, they don't have self-awareness, they don't have perception. Not yet.

Speaker 1 (30:47):
They don't have self-awareness. Not yet. They don't have perception. Not yet. They don't... Not yet. Wait till we tell them we're turning them off. Right.

Speaker 2 (30:51):
Then they'll develop it internally.

Speaker 1 (30:53):
On their own.

Speaker 2 (30:54):
We're all screwed is what will happen. It won't rely on human intervention. It'll be doing it internally and writing its own AI information.

Speaker 3 (31:04):
How will this AI operate?

Speaker 2 (31:06):
How what?

Speaker 3 (31:06):
How will it operate?

Speaker 2 (31:08):
How would it operate?

Speaker 3 (31:09):
Yes, what is required for these machines to run?

Speaker 1 (31:13):
I assume some type of human programming. Well, initially, but once you get that human programming, can't it rewrite its own information?

Speaker 3 (31:22):
What's making the lights flicker on the servers? Electricity. Yeah, exactly, it can charge itself.

Speaker 2 (31:25):
I can... Electricity. Exactly, it can charge itself.

Speaker 3 (31:27):
I can plug my own phone in.

Speaker 2 (31:29):
It can plug itself in.

Speaker 3 (31:30):
From what?

Speaker 2 (31:31):
What are you talking about?

Speaker 1 (31:32):
He's talking about the fact that, like, okay, let's say your Roomba charges itself. It goes to its dock and it charges itself. How does it get that electricity, though? You pay the electric bill. It doesn't pay the electric bill, you do.

Speaker 3 (31:47):
If you, as a human, unplug the docking station, that thing just gets to sit on the floor like a big turd. It won't do anything.

Speaker 2 (31:53):
It's going to go to the neighbor's house and it's going to break in and it's going to plug itself in.

Speaker 1 (31:56):
You give it arms, it'll plug itself in.

Speaker 2 (31:58):
That's right, it'll find an electric pole and get up there and hold on for dear life and charge it. Dear Dad, she's working on the sequel to War of the Worlds. That's kind of it. That's kind of it. I thought this was a good topic, Bobby.

Speaker 1 (32:15):
I did too, and I was glad to hear that a lot of our listeners really haven't been utilizing AI, or if they are, they're not aware of it. That's for sure.

Speaker 2 (32:24):
Or utilizing AI... or, if they are, they're not aware of it, that's for sure.

Speaker 1 (32:27):
And they... or they don't, they just don't know what to ask.

Speaker 2 (32:29):
Like Dr Domain said, they just don't know how to phrase their question, and that's part of my problem. When I go in, my mouth doesn't work and I try to put stuff in, and it's like, okay, that's not what I wanted, and so you have to try it eight different times. That's right.

Speaker 1 (32:45):
And then I go uh, this isn't worth it.

Speaker 2 (32:47):
Um, so that's about all the insanity, or absurdity, that we have for today. We really do appreciate everyone who responded to our little study that we put out there, asking whether or not you used AI and what you used it for, and so we appreciate you joining us here at the rabbit hole studio each week.

(33:09):
Please be sure to follow us. We look forward to spending time with you. Please like us and share us, if you would, and if you have positive feedback for us, or if there's a topic you'd like us to talk about, drop us a short email at boomerandgenxer@gmail.com. If you have hate mail, we aren't interested. Until next

(33:42):
week. I'm Jane Burt and I'm Bobbi Joy. And you're stuck with us. Peace out. Later.