Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Good morning. Today is the twentieth of August, and I'm out here on my road in front of my house this morning. So what are we gonna talk about today? There have been several content creators, I guess that's what
(00:22):
you call them nowadays, talking in the last few days or weeks about an ongoing topic of conversation, and that is AI. Which is funny to me, because nowadays if you're talking about it in general conversation, that means artificial intelligence. If you live
(00:46):
in the livestock world or community, that means artificial insemination.
So that's a whole different issue. And I suspect that the cow and horse people were like, we don't know what you're talking about with AI. Anyway.
It reminded me of a funny story. We used to
(01:10):
know this guy, you know, I dated him a couple of times. That didn't last long, but anyway, this was back in the nineties, early nineties, probably about ninety, because that was the year my niece
(01:30):
graduated from high school and so of course she was
going on to college. And apparently when his children went
to college, you know, their political views changed, and that's
a big topic of conversation at this point too. But
back in the nineties, the term wasn't what it is now. Now the term is like woke or something, but back then the term was
(01:53):
PC, or politically correct. So he was all kinds of concerned that when my niece went to college she would become PC. So he asked her one day, he said something about PC, and she kind of looked at him with this blank look on her face, and
(02:15):
he asked her if she knew what that meant, and she said, personal computer. I told him, I said, of all the people that you have to worry about being politically correct, my niece is not on that list. So you're safe, we're safe. I'm not really worried about that. And besides, we're in, you know, kind
(02:38):
of an agricultural, rural area, I guess. And there were some, even in my generation, because we were the hippies, we had folks go off to UT or wherever, and they didn't come back the same kind of people they left as. Anyway, so that was
(02:59):
off the top. Let me go back to AI or
artificial intelligence. Now, there is a great deal of concern,
and I understand that, because there needs to be a great deal of concern. I don't know that there's a whole lot we can do about it, I mean,
(03:19):
because it's one of those things, it is what it is, and there's not any way to stop it. But knowing that, we have to know what we're gonna do to deal with it. And in some cases I don't think we know enough to even be able to figure out what that would be. But I think we need to be aware of it,
(03:41):
and we need to be informed or as informed as
we can be, because, I must say, I think
there's a whole bunch of stuff on artificial intelligence that
we can't even be informed on yet. We'll probably learn
it eventually. As with all new technologies, probably its
(04:03):
primary use is going to be in warfare, and that's
going to be a whole other issue. And again that's
not something we can do anything about except be cognizant of it and prepare to the best of our ability. The Provident Prepper, you know, is a YouTube channel
(04:25):
that I have referenced in the past, and they did an episode here in the last couple of days where, as they said, a very well known AI developer, somebody whose name we would all recognize if we knew it, and that brings several names to mind,
(04:46):
contacted them because they were greatly concerned about making sure their family was prepared. Okay, because they said that this is a real threat and they felt the need to prepare,
(05:07):
so they contacted the Provident Preppers couple to do that. And so in this episode, and I highly recommend listening to it, they talk about what the biggest threats are, and the number
(05:28):
one threat with AI, and I'm looking to make sure, I have to look behind me at this intersection because it's at an angle, that the biggest threat from AI is an AI generated or AI enhanced cyber attack. And that
(05:52):
makes sense, because there are a lot of things that AI can do on a computing level that we probably haven't even realized at this point. Another thing was people losing
their jobs to AI. Now I've not personally witnessed that,
(06:13):
and I am in the technology space. However, I'm in
the educational technology space, so that's a little bit different
in some ways. I don't ever see AI taking the place of our programmers in our department. However, I will say that it is greatly enhancing their ability to work and
(06:34):
their productivity, so they are kind of driving the AI. And at this point in time, we're using ChatGPT, and I've used it for some technology related things
as well. I took a data analytics class about a year ago from UT, and one of the
(06:56):
things that we had to do, we started off with Excel and SQL, and I'm reasonably well versed in both of those, so those weren't a problem. But then we eventually got into Python. Python I'd not ever done before, so that was a new language. It's a little different,
and we were having to do some graphs and stuff, and so I used ChatGPT, just the
(07:19):
free version, to check my code, give me some examples of things, that kind of thing. Now, if I asked it to answer the question that was in my assignment, it would give me an answer, and the answer was always wrong. Okay, elements of the answer were correct, but the answer itself was not right, and then I
(07:40):
had to kind of debug it from there. But you have to realize I was using the free version of ChatGPT, and I don't even know which model. We might have been doing GPT-4 at that point.
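As an illustration, and this is a made-up example, not my actual assignment, the kind of beginner Python I was asking it to check was along these lines: summarize some data before you chart it.

```python
# Average a measure by category, the sort of small data-analytics
# step you'd do before making a bar chart of the results.
from collections import defaultdict

def average_by_category(rows):
    """rows is a list of (category, value) pairs; returns {category: mean}."""
    totals = defaultdict(lambda: [0.0, 0])
    for category, value in rows:
        totals[category][0] += value   # running sum per category
        totals[category][1] += 1       # count per category
    return {cat: s / n for cat, (s, n) in totals.items()}

sales = [("A", 10), ("B", 20), ("A", 30), ("B", 40)]
print(average_by_category(sales))  # prints {'A': 20.0, 'B': 30.0}
```

From there you would feed the summary into whatever plotting library the class used.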
I think GPT-5 is out now. Okay. And we wanted
(08:02):
to incorporate some AI into our software, so we have subscribed to AWS, Amazon Web Services, which can allow us to do that. And plus, as I said, we're using ChatGPT. Now, because we have to
(08:23):
do a lot of presentations and we work with educators
and that kind of thing. One of the things we're
going to be doing here shortly is we're gonna be
using a product called Sibme, S-I-B-M-E, and it is an AI product: you give it a recording of something you've done, like a presentation or even
(08:47):
leading a meeting, and it will then give you feedback as to how you did with that. At this point in time, in a lot of schools in Texas there have been some changes to how teachers are paid, and one of those changes is in the teacher appraisal system.
(09:07):
The legislature several years ago passed this law called the Teacher Incentive Allotment. And what it does, in school districts, and it's a big deal, is schools will submit teachers basically for review. They
(09:31):
have to have a plan and all this kind of documentation, but it's based on student growth, like, when the students came into their class, what did they know, and by the time they left, what did they know? And if it looks like this teacher has done a really outstanding job with his or her students, then they're getting
(09:56):
this Teacher Incentive Allotment, which in many cases is a very large amount of money. So it's basically a version of merit pay, you know, and that's fine, but it's based on performance, which is much better than what's happened in the past. Anyway, a lot of schools are using this Sibme product, this AI, to
(10:20):
actually help with the teacher appraisals, where a teacher teaches a lesson that's videoed and submitted to this Sibme product, and then the product rates it as opposed to
(10:40):
a human rating it. Now, the human is still in charge of making sure that they like what they saw and all that kind of stuff. But I've heard feedback from the appraisers, you know, they'll say, yes, Sibme recorded some things that I didn't actually get to see, and so there's
(11:02):
merit in using that. Now, it's very uncomfortable, excuse me, to think that you're gonna be recorded and this electronic thing is gonna give you a score, but it is what it is, and that's where we are in this day and time. So we're
going to embrace that technology in our organization as well.
(11:25):
And so we will do the same thing. We will submit presentations, or us leading a meeting, or whatever, depending on what our role is. And then this product will give us feedback. So that's where we are with that. It sounds really uncomfortable, but it's what's gonna happen. So, as I said, it is what it is.
(11:49):
So that's another example of how AI is being used in the educational world. A big thing at this point, and there's been a lot of conversation on this subject, is what we're calling machine-scored essays. Part of the testing system in Texas is that,
(12:11):
for reading and language arts, students are expected to write constructed responses, as they're called, and those constructed responses can be short, just a few sentences, or they can be an actual essay, like what we used to call, it's now incorporated
(12:33):
in the same exam, but what we used to call our writing exam. That's what the students had to do. They had to revise and edit writing that had already been done, but they also had to write their own. And obviously that is a skill we want students to have in our society, to be able to actually write, and heaven forbid, not with texting. But
(12:53):
it is very time consuming to score essays, as most English teachers in the world know. Okay, I taught math, which was easier to score. However, the higher level math you teach, the more involved the actual work is, and so you would have fewer actual problems for
(13:17):
the students to solve. Those problems are gonna be lengthy and involve many multiple steps, and so you have to be able to score those accordingly, where you just wouldn't say right or wrong, you have to give them partial credit. Same thing
when I taught computer science. What I did with computer science, because that would be a project that the students would
(13:38):
work on for a long time, is I would give them a rubric, and I would say, you're gonna get this many points for this, you're gonna get this many points for that, so that it wasn't just yes it ran or no it didn't run, and that was it.
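A sketch of that kind of rubric in code, with made-up criteria and point values, not my actual rubric:

```python
# Score a project against a rubric of criterion -> points_possible,
# so students earn partial credit instead of all-or-nothing.
def rubric_score(rubric, earned):
    """rubric: {criterion: points_possible}; earned: {criterion: points_earned}."""
    total = 0
    for criterion, possible in rubric.items():
        # never award more than the criterion is worth, never less than zero
        total += max(0, min(earned.get(criterion, 0), possible))
    return total

rubric = {"runs": 30, "correct output": 40, "style/comments": 30}
print(rubric_score(rubric, {"runs": 30, "correct output": 25, "style/comments": 20}))  # prints 75
```

A program that crashed could still pick up style points, and a program that ran perfectly but was unreadable wouldn't get full marks.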
So you would have partial credit. Well, that's the way essays are scored, and then the teacher gives the
(14:03):
essay a score. So in the past, what's happened is there were actual scorers for the essays that the students had to write on the state test. And
state testing in Texas has basically taken up most of
my career. I started in eighty one, and I don't
(14:24):
know that we had a test that first year, but I think we had one the second year, so I was pretty much on the tail end of no testing, and then it's gone from there. And there's all kinds of controversy about high-stakes testing, all that kind of stuff. I will say that it improved education overall, but that's a rabbit we're not gonna chase today. But one of the things that teachers
(14:47):
and educators would like for us to include in our software,
because we are involved in testing, is that we include
machine scoring of essays. And there are several products that do that. The problem with that is,
(15:11):
and this is a problem with AI in general, and this is the point I'm trying to get to, it's taking me forever, that AI is not a static thing. And in fact,
one of the products that I have seen, and I've heard a teacher's evaluation of this thing, was that, say,
(15:36):
for example, you gave it ten essays to grade, and the first essay it graded was an excellent essay, and it gave it, you know, a perfect score, which might be a five. Okay. So as it continued to read through these ten essays, by the time it got to the end, it had, quote, learned, unquote, different
(16:00):
things, and you might have an essay at the end that was equally as excellent as the first one, but it would not score that one the same. It would score that one down. Okay. Well,
the problem with that is that's not objective grading, because you have not given the essay at
(16:24):
the end the same kind of consideration as the essay at the beginning. And scoring essays is already subjective, it's one reason I taught math, I liked yes and no answers. But that's a concern about AI
in general. That's just one example from the educational world
(16:45):
because it's one I know about. But it's true about
everything that AI does. AI learns, but that doesn't mean
it learns correctly, and so it kind of morphs over time,
but you don't know that it's morphed toward the right thing.
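Just to illustrate that drift problem, and this is a toy sketch, not how any real scoring product works: one cheap check is to plant the same essay at the start and the end of a batch and see whether it gets the same score both times.

```python
# Detect order-dependent "drift" in a batch essay scorer by planting
# the same probe essay at both ends of the batch and comparing scores.
def check_drift(score_batch, essays, probe_essay):
    """score_batch takes a list of essays and returns a list of scores."""
    scores = score_batch([probe_essay] + essays + [probe_essay])
    first, last = scores[0], scores[-1]
    return first == last, first, last

# A toy scorer that "learns" as it reads: it gets stingier with each essay.
def drifting_scorer(batch):
    return [max(1, 5 - i) for i, _ in enumerate(batch)]

consistent, first, last = check_drift(drifting_scorer, ["essay 1", "essay 2"], "excellent essay")
print(consistent, first, last)  # prints False 5 2
```

A fair scorer should give the identical probe essay the identical score regardless of where it sits in the batch; here it doesn't.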
And that is probably to me, one of the most
concerning problems with it. Another thing, and I think
(17:10):
a lot of paralegals, to their chagrin, have apparently figured this out, is that it'll make stuff up. The humorous title for that is, it has hallucinations. Okay, I will say that next time I get something wrong, I'm just gonna be like, man, I hallucinated that answer. That's crazy.
(17:35):
But anyway, one of the things that people in the legal field discovered pretty quickly is that it'll make up case law that doesn't exist. Now, why it does that, I don't know. That's kind of crazy, in addition to being a problem. Same thing with, say, for example, and this one's
(17:57):
really concerning, history. History teachers have used it, maybe to write questions for an exam, and it will make up laws, or things in history, that didn't actually exist. Okay.
So that means it has the potential of
(18:18):
completely skewing our perception of what happened in history. We have enough of that going on with humans. I don't
know that we need a machine to help us with that.
People have been involved in revisionist history for a long time.
That's one reason why I'm reading through Mary Maverick's memoirs
(18:39):
about Texas history, because that is an actual person, and these are her words, instead of us hearing about history from some other source. Because, especially in the last few years, we have taken a fast and loose approach
(19:00):
to what actually happened in history. And that's important, because
what happened in history informs what we do from this
point forward. So that ChatGPT, or any of them, Claude or whoever, or Grok or
(19:21):
somebody, has revised history or made things up or hallucinated, that is very concerning. I know, not too long ago,
it's probably a month or two, because you know, time flies.
There was a big brouhaha about Grok, which
(19:42):
is Elon Musk's AI. It spewed out a whole bunch of antisemitic statements one day, and I don't even know what the context of that was, but apparently it was, justifiably, treated as a big deal. And there were even times when
(20:05):
it first came out, people were asking it about politics, like, tell all the accomplishments of President Obama, and it would list them all, and then you'd say, tell all the accomplishments of Donald Trump, and it'd say, well, I can't do things that are controversial. Well, okay, one
(20:28):
was president just like the other one was president. Whether you like him or don't like him, that's not the issue. They were both president. And so, because it's been
created by humans, it's going to have biases of the
(20:49):
people who created it. So it's important to keep that
in mind. So I'm not ending this segment with an answer to this question about what we do about AI, except pay attention to it and, in the words of the Provident Preppers, be as prepared as possible.
(21:15):
And prepared means, you know, making sure that you have
supplies and that kind of thing. Probably the cyber attack thing is gonna be the biggest thing. Now, if you are in a job that could be replaced with AI, and I've heard that that's happening in some places, I've not witnessed it with my own eyes, but I have
(21:38):
heard it, so it's one of those things you have to take into consideration. I'm just back to basic preparedness, where the more things we know how to do for ourselves and be self-sufficient, the better off we are. But it's something to think about. It's a part of our lives. It's not going to go away.
(22:00):
We cannot stick our heads in the sand. I will say,
I use ChatGPT on a fairly regular basis, and we just ordered a subscription for everybody in the department for it, because we are using it in our work and it does help and it does save time, which
(22:22):
I think makes us more productive. I do have to pay attention to it, because I will ask it to do something and it will get it wrong. Like, for example, the other day, I asked it to help me write some multiple choice items for a specific curriculum standard, and
(22:44):
it gave me the wrong standard, and so then I had to come back and say, that's not it. Now,
having a conversation back and forth with the computer is a little weird. I will say that I've always talked to the computer, I just hadn't had the computer talk back to me. And so it would come back, and I would say, that's incorrect, the correct standard is yada yada
(23:05):
yada yada. And then it would come back and say, oh, yes,
thank you for your patience. Here are some questions using
that text. Okay, but I had to check it every time.
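Since I had to check the standard every time, the cheap part of that check can be automated. Here's a minimal sketch, with made-up standard IDs and item format: keep only the generated items tagged with the standard you actually requested, and send the rest back.

```python
# Sanity-check generated test items: accept an item only if it is
# tagged with the curriculum standard that was actually requested.
def filter_items(items, requested_standard):
    """items: list of dicts like {"standard": "...", "question": "..."}."""
    accepted, rejected = [], []
    for item in items:
        (accepted if item.get("standard") == requested_standard else rejected).append(item)
    return accepted, rejected

generated = [
    {"standard": "MATH.7.3A", "question": "Q1?"},  # matches the request
    {"standard": "MATH.6.3A", "question": "Q2?"},  # wrong standard: send it back
]
accepted, rejected = filter_items(generated, "MATH.7.3A")
print(len(accepted), len(rejected))  # prints 1 1
```

Of course that only catches mislabeled items; whether a question actually fits the standard still needs a human read.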
So, first of all, for normal human use, we have to be careful. ChatGPT
(23:27):
is probably the most popular, but it's the same thing with the other ones, and with the AI Overview in Google. Just because it says something does not mean we can believe it. We're gonna have to do more research, and that's gonna be the hard part, because people are not gonna want to do research. They're gonna take it as gospel when it's not. So that also has the potential
(23:51):
to be dangerous, when people start making decisions and taking actions based on incorrect information, and that's when
things get really icy. Anyway, on that happy note, I'm
gonna stop because I've got lots of stuff to do
at the office today. I'll talk to you tomorrow.