Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Cool Zone Media.
Speaker 2 (00:05):
Hey everyone, it's me, Ed Zitron, and we're doing
a rerun episode this week. Sadly, the in-studio recording
we did with Ashmund Rodriguez and Victoria Song had
a technical fault. Nothing we can do. It sucks, but
we're doing a rerun this week. We're doing the Academics
That Think ChatGPT Is BS, and this is one
of my favorite episodes I ever recorded. It changed how
I do interviews at large. It's with these three academics,
(00:27):
Michael Townsen Hicks, James Humphries and Joe Slater, who wrote
a paper using the actual Frankfurtian definition of
bullshit to say that ChatGPT bullshits. It's so much fun.
It's one of my favorite episodes I've recorded. I will
have a monologue for you this week as well. I do
apologize for not having something new for you this week.
You'll still get the monologue on Friday, though. Thank you
(00:49):
for your time, your patience and of course for listening
to Better Offline. Hello and welcome to Better Offline. I'm
your host, Ed Zitron. In early June, three researchers
(01:20):
from the University of Glasgow published a paper in the
journal Ethics and Information Technology called "ChatGPT is Bullshit",
and I just want to be clear: this is a
great, thoroughly researched, and well-argued paper. This is
not silly at all. It's actually great academia. And today
I'm joined by the men who wrote it, academics Michael
Townsen Hicks, James Humphries, and Joe Slater, to talk about
(01:42):
ChatGPT's mediocrity and how it's not really built to
represent the world at all. So, for the sake of argument,
could you define bullshit for me?
Speaker 3 (01:53):
So you are bullshitting if you are speaking without caring
about the truth of what you say. So normally, if
I'm telling you stuff about the world, in a good case,
I'll be telling you something that's true, and like trying
to tell you something that's true. If I'm lying to you,
I'll be knowingly telling you something that's false or something
I think is false. I'm bullshitting. I just don't care.
(02:16):
I'm trying to get you to believe me. I don't
really care about whether what I say is true. I
might not have any particular view on whether it's true
or not right.
Speaker 2 (02:25):
And you distinguish between, like, soft and hard bullshit. Can
you get into that as well? And can you also
identify yourselves too?
Speaker 4 (02:33):
Yeah?
Speaker 5 (02:33):
Yeah, and Joe.
Speaker 3 (02:34):
So the soft bullshit hard bullshit distinction is a very
serious and technical distinction, right. So we came up with
this because bullshit, in the technical philosophical sense, comes
from Harry Frankfurt, recently deceased but a really great philosopher,
and he talks about the bullshit that there is
(02:54):
in popular culture and just in general discourse these days.
Some of the ways he talks about bullshit seem to
suggest that it needs to be accompanied by a sort
of malign intention.
Speaker 4 (03:07):
I'm doing something kind of bad.
Speaker 3 (03:09):
I'm intending to mislead you about the enterprise of
what I'm
Speaker 4 (03:13):
Doing, maybe about who you are or what you know.
Speaker 1 (03:16):
So you might be trying to portray yourself as someone
who is knowledgeable about a particular subject. Maybe you're a
student who showed up to class without doing the work.
Maybe you're trying to portray yourself as someone who's virtuous
in ways you're not. Maybe you're a politician who wants
to seem like you care about your constituents, but actually
you don't. So you're not trying to mislead somebody about
(03:40):
what you're saying, the content of your utterance.
Speaker 4 (03:44):
You're trying to.
Speaker 1 (03:44):
Mislead them instead about, like, why you're saying it. That's
what we call hard bullshit.
Speaker 4 (03:50):
Yeah, it's one of the things Frankfurt talked about.
Speaker 3 (03:52):
Yeah, so Frankfurt doesn't make this hard bullshit soft bullshit
distinction, but we do, because sometimes it seems
like Frankfurt has this particular kind of intention in mind,
but sometimes he's just a bit looser with it. And
we want to say that ChatGPT and other large
language models don't really have this intention to deceive,
because they're not people; they don't have these intentions. They're
(04:13):
not trying to mess with us in that kind of way,
but they do lack this kind of caring about truth.
Speaker 6 (04:20):
Well, I'm James, by the way. I suppose strictly we
don't want to say that they aren't hard bullshitters.
we just think if you don't think that large language
models are sapient, if you don't think they're kind of
minds in any important way, then they're not hard bullshitters.
So I think in the paper we don't take a
position on whether or not they are. We just say,
if they are, this is the way in which they are.
(04:40):
But minimally they're soft bullshitters. Soft bullshit,
as Joe says, doesn't require that the speaker is attempting to deceive
the audience about the nature of the enterprise.
Speaker 5 (04:49):
Hard bullshit does.
Speaker 6 (04:50):
So if it turns out that large language models are sapient,
which they're definitely not, like that's just.
Speaker 2 (04:54):
Technicio property, Yeah, that's nonsense.
Speaker 6 (04:56):
Yeah. But if they are, then they're hard bullshitters; and
minimally they're soft bullshitters, or they're bullshit machines.
Speaker 2 (05:03):
So you also make a distinction in there about intention,
so the very fabric of hard bullshit is that you intentionally
are bullshitting to someone. You kind of make the point
that the intention of the designer and the prompting involved
could make this hard bullshit. Because with a lot of
these models, and someone recently jailbroke ChatGPT and
(05:25):
it listed all of the things it's prompted to do,
could prompting be considered intentional, such that Sam Altman, CEO of
OpenAI, could be intentionally bullshitting? I think it is.
Speaker 6 (05:35):
And yeah, this again, I think it's something, I don't
know what the kind of hive mind consensus on this is.
I'm sort of sympathetic to the idea that if you
take this kind of purposive or teleological attitude towards
what an intention is, an effort to do something, then
maybe they do have intentions. But again, I think in
the paper we just sort of wanted, I mean, it's
a standard philosophical move, right, to just sort of go, look,
(05:57):
here's all this stuff, as uncontroversial as we can make it.
Now we can hit you with the really controversial shit
that we wanted to get to. So in the paper
we sort of deliberately went, maybe you might think it
has intentions for this reason, but we kind of pass
no judgment on this officially. I'm sympathetic to the sort
of view that you're putting forward.
Speaker 5 (06:12):
I think you're kind of sympathetic to this as well.
Speaker 1 (06:14):
Right. Yeah, so, I'm Mike. There are a few ways
that you can think of ChatGPT as having intentions.
Speaker 4 (06:19):
I think, and we talk about a few of them.
Speaker 1 (06:22):
One is by thinking of the designers who created it
as kind of imbuing it with intentions. So they created
it for a purpose and to do a particular task,
and that task is to make people think that it's
a normal sounding person.
Speaker 4 (06:36):
Right.
Speaker 1 (06:37):
It's to make people, when they have a conversation with it,
not be able to distinguish between what it's outputting and
what a normal human would say.
Speaker 4 (06:46):
Right.
Speaker 1 (06:47):
Right, And that kind of goal, if it amounts to
an intention, is the kind of intention we think a
bullshitter has.
Speaker 4 (06:57):
Right.
Speaker 1 (06:57):
It's not trying to deceive you about anything it's saying.
It doesn't care whether what it's saying is true. That's
not part of the goal. But what the goal is,
is to make it seem as if it's something
that it's not, specifically a human interlocutor. And one
source for that goal is the programmers who designed it.
Another is the training method. So it was trained by
(07:19):
being given sort of positive and negative feedback in order
to achieve a specific thing, right, And that specific thing
is just sounding normal. And that's so similar to what
our students are doing when they try to pretend that
they read something they haven't read.
Speaker 2 (07:34):
There is something very collegiate about the way it bullshits.
Speaker 4 (07:37):
Though.
Speaker 2 (07:38):
It reminds me of when I was in college, at
both Penn State and Aberystwyth, two very different institutions.
Speaker 4 (07:43):
Right, both kind of in the middle of nowhere.
Speaker 2 (07:47):
Yeah, both very sad. But the one thing you saw
with like students who were doing like B plus homework
is they were using words they didn't really understand. They
were putting things together in a way that really is
like the intro, body, conclusion. There was a
certain formula behind it, and it just feels exactly
like it. But that kind of brings me to my
next question, which is, how did you decide to write this?
(08:09):
What inspired this?
Speaker 1 (08:10):
Right?
Speaker 6 (08:10):
I should field this, because whenever Mike tells the story,
you get a sanitized version.
Speaker 5 (08:14):
We were in the pub whinging about student essays. Perfect, yeah,
like you were not long here, right, you're not long,
yeah, not long in post.
Speaker 6 (08:22):
A bunch of us sort of went to the pub
on a notional let's welcome our new members of
staff sort of event, and inevitably, after about two
pints, we were all pissing and moaning, and I think
it might have been Neil that kind of prompted this.
Speaker 4 (08:33):
No, I don't think he was about it.
Speaker 5 (08:37):
We talked about it after, right? Yeah, in any case
it sort of came out.
Speaker 6 (08:39):
We were talking about this sort of prevalence of ChatGPT-
generated stuff and kind of what it said about
how at least some students were kind of approaching assessments,
and, you know, I forget who, someone sort of just
went, offhand, yeah, but it's all just Frankfurtian
bullshit though, isn't it. And we sort of collectively
went, hello, because obviously, we all have a background in philosophy,
we've all read On Bullshit. You know, we all went,
we get to say bullshit in allegedly serious academic work.
(09:02):
So the start of it was, we'd had this experience
of having to read all of this kind of uncanny
valley stuff, and when prompted, we all went, oh, it
really is like Frankfurtian bullshit, in a way that
we can probably get a paper out of.
Speaker 1 (09:14):
Yeah, And at that point I think we were in
the department meetings talking about how to deal with chat
GPT written papers. And there were discussions going on all
over the university, including kind of in high levels and
the administration to come up with a policy. And we
specifically wanted a stronger policy because our university is very
(09:37):
interested in cutting-edge technology, right, and so they wanted
the students to have an experience using and figuring out
how to use whatever the freshest technology.
Speaker 2 (09:46):
Is, right, as they should as they should, but not
for essays.
Speaker 4 (09:51):
Right, that's the worry we had. We thought, you know,
if we're not very clear.
Speaker 1 (09:54):
About this, the students will be using it in a
way that will detract from their educational experience.
Speaker 4 (10:00):
Right. And at the same time, it was becoming more.
Speaker 1 (10:03):
Widely known how these machines work, like specifically how they're
doing next token or next word prediction in order to
come up with a digestible string of text. And when
you know that that's how they're doing it, I mean,
it seems so similar to what humans do when they
have no idea what they're talking about. And so when
(10:26):
we were talking about it, it just seemed like an obvious
paper that someone was going to write, and we thought
that had better be us. You know, eventually people are going
to see this connection, and it's a great paper.
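To make the "next token or next word prediction" point above concrete, here is a minimal, hypothetical sketch in Python. The tiny probability table and the function names (toy_model, next_token, generate) are invented purely for illustration; a real large language model learns a vastly larger distribution over tokens from its training data rather than using a hand-written table, but the generation loop, pick a likely next word, append it, repeat, is the same basic idea.

```python
# Minimal sketch of next-token prediction (illustrative only).
# A real LLM learns these probabilities from training data over a huge
# vocabulary; here a hand-written table stands in for the learned model.
import random

toy_model = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "is": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.7, "sofa": 0.3},
}

def next_token(context):
    """Sample the next word from the distribution conditioned on the last two words."""
    dist = toy_model.get(tuple(context[-2:]), {"<end>": 1.0})
    words, probs = zip(*dist.items())
    return random.choices(words, weights=probs)[0]

def generate(prompt, max_tokens=6):
    """Repeatedly predict and append the next word, which is all the model does."""
    tokens = prompt.split()
    for _ in range(max_tokens):
        tok = next_token(tokens)
        if tok == "<end>":
            break
        tokens.append(tok)
    return " ".join(tokens)

print(generate("the cat"))  # e.g. "the cat sat on the mat"
```

Nothing in this loop checks whether the output is true of the world; it only asks which word is likely to come next, which is the sense in which the text is "digestible" rather than tracked against reality.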
Speaker 4 (10:37):
I think it worked very well.
Speaker 1 (10:38):
I mean, I think that it's the kind of thing
where when I pitch this to other philosophers, it doesn't
take them very long to just agree to be like, ah, yes.
Speaker 5 (10:47):
That's right.
Speaker 2 (10:48):
It's an interesting philosophical concept as well, because the way
that people look at ChatGPT and large language models
is very much, machine do this. But when you think
about it, there are other ways people are giving
it consciousness. I just saw a tweet just now of
someone talking about saying please with every request. It's like, no,
I will abuse the computer in whatever manner I see fit.
(11:08):
But it's curious, because I think more people need
to be having conversations like this. And one particular thing
I like that you said, and I actually would love you
to go into more detail on, is you said that ChatGPT
and large language models are not designed to represent
the world at all. They're not lying or misrepresenting. They're
not designed to do that. What do you mean by that?
Speaker 1 (11:29):
I mean kind of as background, I do philosophy of science,
but my thoughts about something like ChatGPT are largely inspired
by the fact that I also teach a class called
Understanding Philosophy through Science Fiction.
Speaker 4 (11:41):
Oh, and now we, like, talk.
Speaker 1 (11:43):
About whether computers could be conscious, and I don't know
what you guys think. Actually I think they could, right,
I just don't think this one is. And part of
the reason I think they could, but this one isn't
is that I think that in order to sort of
represent the world or have the kinds of things we
have, like beliefs, desires, thoughts that are about external things,
(12:07):
you have to have internal states that are connected in
some way to the external world. Usually causation, we're perceiving things,
information is coming in. Then we've got some kind of
state in our brain that's designed just to track these
things in the external world.
Speaker 3 (12:24):
Right.
Speaker 1 (12:25):
That's a huge part of our cognitive lives is just
tracking external world things, and it's a very important part
of childhood development when you figure out how to act.
Speaker 2 (12:36):
It's semiotics, right? Daniel Chandler at Aberystwyth taught me semiotics. It's
like a perception of the world.
Speaker 1 (12:41):
And this is like theory of meaning stuff. So yeah,
semiotics is like a theory of signs. How is it that
a sign can be both the representation of the thing
and the thing itself? That can happen, yeah, but not always.
Sometimes it's just the representation of the thing. And there's
a lot of philosophy is about figuring out how brain
states or words on a page can be about external
(13:04):
world things. And a big part of it, at least
from my perspective, has to do with tracking those things,
keeping tabs on them, changing as a result of seeing
differences in the external world. And ChatGPT is not
doing any of that, right, that's not what it's designed
to do. It's taking in a lot of data once
(13:25):
and then using that to respond to text. But it's
not remembering individuals, tracking things in the world in any
way perceiving things in the world. It's just forming a
sort of statistical model of what people say. And that's
kind of so divorced from what most thinking beings do.
Speaker 2 (13:50):
It's divorced from experience.
Speaker 4 (13:51):
Yeah. I mean, as far as I can tell, it
doesn't have anything like experience.
Speaker 5 (13:55):
Yeah.
Speaker 6 (13:55):
And that's one of the things that this, I think
of in one way, comes down to, is that if
you sort of push this sufficiently far, someone is going
to go, ah, isn't this just bio-chauvinism, right? Like, aren't
you just assuming that unless something runs on like meat,
it can't be sentient? And this isn't something we get
into in the paper, partly because we didn't really think
it was worth addressing. But the sorts of things that
(14:16):
seem, never mind consciousness, right, but that
seemed to be necessary in order for something to be
trying to track the world, or in order to be
corresponding with the world, or to form beliefs about the world:
ChatGPT just doesn't seem to meet any of them. Kind of,
if it does turn out that it's sapient,
then ChatGPT has got some profoundly serious executive function disorders, right? It's not sapient, right,
(14:37):
so we don't have to worry about it. But kind
of it's not the case that we've got some blundering
proto general intelligence that's trying to figure out how to
represent the world. It's not trying to represent the world
at all. Its utterances are designed to look as if
it's trying to represent the world, and then we just
go, that's just bullshit. This is a classic case
of bullshit.
Speaker 2 (14:53):
Yeah, it seems to be making stuff up, but making
stuff up doesn't even seem to describe it. It's just
throwing shit at a wall accurately but not accurately enough.
Speaker 6 (15:02):
Yeah, it's got various guidelines that allow it to throw
shit at the wall with a sort of reasonably high
degree of... actually, no, you're right. I mean one of
the things that a human bullshitter could at least be
characterized as doing is, they'd have to try and kind
of judge their audience, right, they'd have to try and
make the bullshit plausible to the audience that they're speaking to,
and ChatGPT can't do that, right. All it can
do is go on a statistically large enough model and
(15:23):
it looks like Z follows Y follows X. It's not
kind of got any or doesn't have any consciousness at all,
of course, but it doesn't have any sensitivity to the
sorts of things that people are in fact likely to
find plausible. It just does a kind of brute number crunch.
So it's more complicated than that, but I think it
boils down effectively to kind of number crunching, right, on
kind of data and context.
Speaker 2 (15:43):
Probabilistic planning of what? Planning? It doesn't plan.
That's the thing. It's interesting. There are these words you
use to describe things that, when you think about it,
are not accurate. You can't say it plans or thinks.
Speaker 1 (15:56):
I think one thing between when we came up with
the idea and when we finished writing the paper, we
spent some time reading about how it works and how
it represents language and what the statistical.
Speaker 4 (16:08):
Model is like.
Speaker 1 (16:09):
And I was maybe more impressed than James about that,
because it is like doing things that are similar to
what we do when we understand language. It does seem
to kind of locate words in a meaning space, right,
and connect them to other words, and you know, show
similarity and meaning, and it also does seem to be
(16:32):
able to in some way understand context.
Speaker 4 (16:35):
But we don't know how.
Speaker 1 (16:38):
Similar that is for a variety of reasons, but mostly
because it's too big of a system and we can
only kind of probe it. And it's trained indirectly, right,
so it's not programmed by individuals. And even though that's
kind of a very impressive model of language and meaning
and may in some ways be similar to what we
(17:00):
we do when we understand language, we're doing a lot
more things like planning, things like tracking things in the world.
Just having desires and representing the way you want the
world to be and thereby generating goals doesn't seem to
be something that it has any room for in its architecture.
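As a rough, hypothetical illustration of the "meaning space" idea just described, words can be stored as vectors and compared by the angle between them, so that words used in similar contexts end up close together. The three-dimensional vectors and the cosine_similarity helper below are made up for illustration; real models learn embeddings with hundreds or thousands of dimensions from text, they are not hand-written like this.

```python
# Toy "meaning space": each word is a vector, and similarity of meaning is
# approximated by the cosine of the angle between vectors (1.0 = same direction).
import math

embeddings = {
    "cat":    [0.9, 0.8, 0.1],
    "dog":    [0.8, 0.9, 0.2],
    "kitten": [0.85, 0.7, 0.15],
    "car":    [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Dot product divided by the product of the vector lengths."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine_similarity(embeddings["cat"], embeddings["dog"]))  # high: related words
print(cosine_similarity(embeddings["cat"], embeddings["car"]))  # low: unrelated words
```

Locating words in a space like this is what lets a model show similarity in meaning and connect words to one another, but, as the discussion goes on to note, nothing in it tracks or perceives anything in the external world.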
Speaker 6 (17:17):
So this is something that just occurred to me, when
you were talking about the kind of, in some ways,
it learns language the same way that we do. I mean, it's got no grasp
of expletive infixation, right? This is one of Chomsky's.
Speaker 2 (17:27):
What does that mean? Just for the listeners, not for me? I
definitely know.
Speaker 6 (17:30):
Yeah, yeah, of course. If I give you the sentence
'that's completely crazy, man', and tell you to put the
word 'fucking' into that sentence, there's a number of ways
in which any language speaker is going to do it
where they'll just go, yeah, like, of course.
Speaker 5 (17:42):
That's where it goes, right.
Speaker 6 (17:45):
But it seems that we've got a grasp on
this incredibly early on, in a way that doesn't look
like the ways at least most of us are
taught language, right. We get quite highly told off when
we try and do
Speaker 5 (17:55):
Expletive infixation.
Speaker 6 (17:56):
Yes. So this I think would be one of those
cases where you could do a sort of disanalogy
by cases. Right, you present ChatGPT a sentence and
say, insert the word 'fucking' correctly in this sentence, and
I don't think it would be very good at it.
Speaker 4 (18:08):
I think it would be.
Speaker 5 (18:09):
You think? You reckon? I mean, we probably could.
Speaker 4 (18:11):
We could, but we shouldn't. Yeah.
Speaker 1 (18:15):
One of the things that I thought was like kind
of interesting about how it works is that it does
learn language, probably differently from the way we do, but
it does it all by examples, you know, So it's
looking at all these pieces of text and thinking, ah,
this is okay, that's okay. And one kind of interesting
thing about how humans understand language is that we're able
(18:36):
to kind of understand meaningless but grammatical sentences. It's not
clear to me that ChatGPT would understand those. That's
another Chomsky example. So you know, Chomsky has this example that's
Speaker 5 (18:49):
Like, what is it? Colorless green ideas sleep furiously?
Speaker 1 (18:53):
It's colorless green ideas, right, And that's a meaningless sentence,
but it's grammatically well formed, and we can understand that
it's grammatically well formed, but also that it's meaningless. Because
ChatGPT kind of combines different aspects of what philosophers
of language, logicians, linguistics people see as, like, different
(19:16):
components of meaning. It sees these as all kind of
wrapped up in the same thing. It puts them in
the same big model. I'm not sure it could differentiate
between ungrammaticality and meaninglessness.
Speaker 2 (19:36):
So we're doing real-time science right now. I just, yeah,
said shove the word 'fucking' into the following sentence
in the correct way: 'Man, that's crazy.' And I did
it six times, and I would say fifty percent of
the time it got it right. And it did: 'Man,
that's fucking crazy.' 'Man, that's crazy. Fucking.' 'Man, that's fucking crazy.' 'Man,
that's crazy. Fucking.' My favorite is 'man, comma, that's crazy,
(19:59):
comma, fucking.'
Speaker 5 (20:01):
To be fair.
Speaker 6 (20:03):
Arguably, you take the commas out of that last one,
and you've got a grammatical sentence.
Speaker 2 (20:07):
Yeah.
Speaker 6 (20:08):
Of course in Glasgow you can also start with fucking man,
that's crazy.
Speaker 2 (20:12):
West London as well. But the thing is, though it
doesn't know what correct means there.
Speaker 4 (20:16):
Yeah.
Speaker 2 (20:16):
Now, when it's trained on this language, when it's trained
on thousands of Internet posts that it stole, it's not like
it reads them and says, oh, I get this, like
I see what they're going for. It just learns structures
by looking, which is kind of how we learn language.
But it kind of reminds me of like when I
was a kid and i'd hear someone say something funny,
(20:37):
I'd repeat it, and my dad, who's wonderful, would
just say, that doesn't make any sense, and you'd have
to explain it. Because if you're learning everything through copying, you're
not learning, you're just memorizing.
Speaker 5 (20:48):
Yeah, yeah, exactly.
Speaker 1 (20:49):
There's, I don't know if you have already talked
to somebody about this, but there's, you know, a
classic argument from Chomsky against behaviorism.
Speaker 2 (20:58):
Right.
Speaker 1 (20:58):
Behaviorism is the view that we learn everything through stimulus
and response.
Speaker 4 (21:04):
Roughly, that's not exactly.
Speaker 1 (21:05):
It, but I'm not a philosopher of mind, so I
can get away with that.
Speaker 4 (21:10):
So Chomsky says, look, we don't get enough.
Speaker 1 (21:12):
Stimulus to learn language as quickly as we do just
through watching other people's behavior and copying it. We have
to have some in-built grammatical structures that language is
latching onto. And there have been some papers arguing that ChatGPT
shows Chomsky was wrong, because it doesn't
Speaker 4 (21:31):
Have the inbuilt grammatical structure.
Speaker 1 (21:34):
But one interesting thing is it requires ten to one
hundred times more data than a human child does when
learning language. Right, So Chomsky's argument was, we don't get
enough stimulus, and ChatGPT can kind of do it
without the structure, but it's not quite doing it as well,
and it gets like orders of magnitude more input than
(21:57):
a human does.
Speaker 4 (21:58):
Before a human learns language, which is kind of interesting.
Speaker 6 (22:01):
And it still struggles to do something as basic as
putting the word in. Yeah, yeah.
Speaker 2 (22:07):
It doesn't even see, and it doesn't have the knowledge
to say, request more context, because it doesn't perceive context.
And that's kind of the interesting thing. So there was
another paper, out of Oxford I think, that was talking about
cognition and ChatGPT and all this, and it's
just, ChatGPT features in no way in
any of the things that the human mind is really
involved in. It seems it's mostly just not even memorization,
(22:30):
because it doesn't memorize. It's just guessing based on a
very huge pile of stuff. But this actually does lead
me to my next question, which is, you don't like the
term hallucination. Why is that?
Speaker 3 (22:45):
It makes it sound a bit like I'm usually doing
something right and I'm looking around seeing the world.
Speaker 4 (22:52):
That's something like what it really is.
Speaker 3 (22:54):
And then one little bit of the feature for a
visual hallucination, one feature of my visual field actually isn't
represented in the real world. It's not actually there. Everything
else might well be right. Imagine I hallucinate
there's a red balloon in front of me. I still
see Mike, I still see James, I still see the laptop.
One bit is wrong, everything else is right. And I'm
(23:17):
doing the same thing that I'm usually doing, Like my
eyes are still working pretty much normally, I think,
and this is the way that I usually get knowledge
about the world. This is a pretty reliable process for
me, learning from it and representing the world in
this way. So when talking about hallucinations, this suggests that
(23:38):
ChatGPT and other similar things, they're going through this
process that is usually quite good at representing the world
and then oh, it's made a mistake this one time,
but actually no, it's bullshitting the whole time, and like
sometimes it gets things right. It's bullshitting just like a politician.
Imagine a politician that bullshits all the time, if you
(24:02):
could possibly imagine it. Sometimes they might just get some
things true, and we should still call them bullshitting because
that's what they're doing. And this is what ChatGPT
is doing every time it produces an output. So this
is why we think bullshit is a better way of
thinking about this, or one of the reasons why we
think bullshit's a better way to think about it.
Speaker 1 (24:23):
I also kind of think that some of the ways
we talk about ChatGPT, even when it makes mistakes,
lend themselves to overhyping its
Speaker 4 (24:32):
Ability or overestimating its.
Speaker 1 (24:35):
Abilities, and talking about it as hallucinating is one of
these, because when you say that it's hallucinating, as you
pointed out, you're giving the idea that it's representing the
world in some way and then telling you what the
content is.
Speaker 6 (24:48):
It has perception, Yeah, exactly like it has perceived something
and like, oh no, it's taken some computer acid and
now it's hallucinating all manner of things.
Speaker 5 (24:57):
Yeah, just as you say, And.
Speaker 4 (24:59):
That's not what it's doing.
Speaker 1 (25:00):
And so when the kind of people who are trying
to promote these things as products talk about the
AI hallucination problem, they're kind of selling a product
that's representing the world, usually checking things,
and occasionally makes mistakes. And if the mistakes were, like
(25:22):
Joe said, a misfiring of a normally reliable
process or you know, something that normally represents going wrong
in some way, that would lend itself to certain solutions
to them, and it would make you think there's an
underlying reliable product here, right, which is exactly what somebody
who's taking a product to the market
Speaker 4 (25:43):
Will want you to think. Right.
Speaker 1 (25:44):
But if that's not what it's doing in a certain sense,
they're misrepresenting what it's doing even when it gets things right.
Speaker 4 (25:51):
And that's bad for all of us who
Speaker 1 (25:54):
Are going to be using these systems, especially since people,
you know, most people don't know how this works. They're
just understanding the product as it's described to them using
these kind of metaphors. So the way the metaphor describes
it is going to really influence how they think about
it and.
Speaker 4 (26:09):
How they use it.
Speaker 5 (26:11):
Yeah, just to sort of cap it off if I can.
Speaker 6 (26:13):
One of the responses from some corners has been
to sort of say, look, you whinge about
people anthropomorphizing ChatGPT, but look, if you call it
a bullshitter, you're doing exactly the same thing. And I
mean there might be some sense in which it's just really
hard not to anthropomorphize it. I don't know why I
picked a word that I can barely say. Like, we've
been doing it constantly throughout this discussion, right when we're
(26:34):
talking through the kind of through the paper we kept
talking about chat GPT as if it had intention, right,
as if it was thinking about anything. That might be
another reason to call it bullshet We go, Look, if
we have to treat it as if it's doing something
like what we do, it's not hallucinating, it's not lying,
it's not confabulating, it's bullshitting, right.
Speaker 5 (26:50):
And if we have to treat it as if.
Speaker 6 (26:51):
It's behaving in some kind of human like way, here's
the appropriate human like behavior to describe it.
Speaker 2 (26:56):
I also think the language in this case, and one
of the reasons they probably like the large language model
concept, is language gives life to things. When we describe
the processes through which we interact with the world and
interact with living beings, even cats, dogs, we anthropomorphize
living things. But also, when we communicate with something, language is life,
(27:17):
and so it probably works out really fucking well for them.
Sam Altman was saying a couple of weeks back, maybe
a month or two, he was saying, oh, yeah, AI
is not a creature, or something like that, he said, and it
was just so obvious. What he wanted people to do
was say, but what if it was? Or to have people
saying, this is a creature. And it almost feels like
just part of the con. Yeah.
Speaker 1 (27:37):
Yeah, yeah, I hadn't thought about that as a reason
for them to go for large language models as a
way of kind of, I don't know, being the gateway
into war investment in fight consciousness. Yeah yeah, but I
had thought about, like how this might have been caused
by just like deep misunderstandings of the turning test.
Speaker 2 (28:01):
Right, go ahead, I want to hear this one.
Speaker 4 (28:04):
Yeah.
Speaker 1 (28:04):
Yeah, so, like, the Turing test, I think this
is closer to what Turing was thinking. But the Turing
test is a way of getting evidence that something is conscious. Right,
So you know, I'm not in your head, so I
can't feel your feelings or think your thoughts directly, right,
I have to judge whether you're conscious based on how
(28:25):
you interact with me, And the way I do it
is by listening to what you say, right, and talking
to you and turning sort.
Speaker 4 (28:33):
Of was asked, you know, how would you know if
a computer was conscious?
Speaker 1 (28:38):
So, you know, we think that our brains are doing
something similar to what computers do. That's a reason to
think that maybe computers eventually could have thoughts.
Speaker 4 (28:46):
Like ours, right, and some of us think that. I
think it's possible.
Speaker 1 (28:51):
Great, Yeah, I didn't know if they thought it was possible,
because not everybody thinks it's possible.
Speaker 2 (28:55):
It's possible, I just don't know how.
Speaker 4 (28:57):
Yeah. Yeah. Turing was kind of, you know, thinking,
like how would we know?
Speaker 1 (29:02):
And one way we would know, the obvious way is
to do the same thing you do to humans, talk
to it and see how it responds. And that's actually
pretty good evidence if you don't have the ability to
look deeper. But it's not constitutive of being conscious. It's
not what makes something conscious or determines whether they're conscious
(29:24):
or in any way like grounds their consciousness. Right, Their
ability to talk to you is just evidence. It's just
one signal you can get. And that's the way to
think of the Turing test. So as a result of
people thinking in a kind of behaviorist way, thinking, ah,
passing the Turing test is just all it is to
be a thinking thing. There have been at least since
(29:45):
the nineties, attempts to design chatbots that can beat the
Turing test, right, and popularizations of these attempts and run
throughs of the Turing test that talk as if, oh,
if a computer finally beats the Turing test.
Speaker 4 (29:58):
I should say what the Turing test is? Right?
Speaker 1 (30:00):
The way Turing suggests that the test works is you
have a computer and a person both chatting in some
way with a judge, and the judge is also a person, right,
And if the judge can't tell which of the people
he's chatting with is a human, then the computer's won
the Turing test, because it's indistinguishable from a human.
Speaker 4 (30:19):
Right.
Speaker 1 (30:19):
So people have taken this and it's been popularized as
a way of determining sort of full stop, whether something's conscious.
Speaker 4 (30:27):
But it's just a piece of evidence, and.
Speaker 1 (30:29):
We have a lot more evidence, Like we know a
lot more than Turing did, about how the internal functioning
of a mind works functionally, what it's doing, how representation works,
how experience and belief works, how they are connected to action,
and how they're connected to speech and thought. And once
you know all that stuff, you have a lot of
other avenues to get more evidence about whether the thing
(30:52):
is conscious and.
Speaker 4 (30:54):
Whether it passes.
Speaker 1 (30:56):
The Turing test is just like a drop in the
bucket compared to these, especially if you know how the internal functioning works.
Speaker 6 (31:01):
The other notorious problem with the Turing test,
and I think to be fair Turing did mention this,
if not in the original then kind of later on.
One problem with the Turing test is that it's like
the Voight-Kampff in Do Androids Dream of Electric
Sheep, right. Plenty of humans would fail the Turing test. Yeah,
it is a piece of evidence, but it was never,
as Mike said, supposed to be constitutive. It
(31:22):
wasn't like, if you can do this thing, you're conscious,
kind of full stop. It was supposed to be, here
is a thing that might indicate that the being you're talking
to is conscious.
Speaker 4 (31:31):
Happily, as Mike said, we've got loads and loads of
other evidence.
Speaker 1 (31:34):
So these guys have made a machine that's just designed
to do one thing, and that's pass the Turing test.
Speaker 2 (31:40):
I can give you one more annoying example. Are you
familiar with François Chollet, the abstraction and reasoning corpus? So
this is going to make you laugh. So he's a
Google engineer, and he created this thing, the ARC,
the Abstraction and Reasoning Corpus, to test whether a machine was intelligent,
and someone created a model that could beat it, and
then he immediately went, Okay, you can't just train the
(32:01):
model on the answers to the test.
Speaker 6 (32:04):
This is why people, well, I say people, a fairly
small subset of weird nerds. But this is why a
small subset of weird nerds have been for the last
twenty years emphasizing artificial general intelligence, right, and kind of
like what we'll call it. When something really is a
thinking being is when it's not specialized to do one
and only one task, but rather when it's capable of
(32:26):
applying reasoning to multiple kinds of different and disanalogous cases.
On the one hand, it does seem a little bit
like the guy flipping the table and going, oh, for
fuck's sake, now you've won.
Speaker 5 (32:35):
I'm changing the rules.
Speaker 6 (32:36):
But on the other I think he's got a point, right, Yeah,
if you're training a thing to do very specific, you
know, like activate certain shibboleths, then unless you're some kind of
mad hard behaviorist, then yeah, that doesn't
demonstrate intelligence.
Speaker 5 (32:48):
It is one thing that might indicate intelligence.
Speaker 2 (32:50):
It's the same problem with ChatGPT. It's built to
resemble intelligence, and resemble consciousness, and resemble these things,
but it isn't them. It's almost like it's meaningless. On a
very high-end philosophical level, I find the whole generative
AI thing deeply nihilistic.
Speaker 1 (33:06):
I mean, one thing that connects to this is how
bad it is at reasoning. And this is kind of
good for us, especially in philosophy.
Speaker 4 (33:15):
Because our students when they use.
Speaker 1 (33:17):
It to write papers, the papers have to have arguments,
and ChatGPT is very bad at doing reasoning. If
it has to do, you know, sort of an extended
argument or a proof or something like that, it's very
bad at it. I think also that if there's one
thing kind of we learned from chat GPT, it's that
(33:37):
this is not the way to get to artificial general intelligence.
Speaker 2 (33:41):
I was going to ask, do you think that this
is getting to that?
Speaker 1 (33:44):
No, partially because it's so subject-specific, right. It's
trained to do one task, and it takes quite
a lot of training to get it to do that task well.
It's bad at many of the other tasks that we
think are connected with intelligence. It's bad at logical, mathematical reasoning.
I understand that OpenAI is trying to fix that.
(34:05):
Sometimes it sounds like they want to fix it by
just connecting it to a.
Speaker 4 (34:10):
Database or a program that can do that for it.
But either way, when you have
Speaker 1 (34:16):
These kinds of big Bayes nets models, it's something that
is really good at whatever you train it to do,
but not going to be good at anything else. You
know, it's been fed a lot of data on one thing. It
finds patterns in that, it finds regularities in that, it
represents those.
Speaker 4 (34:34):
The more data you feed it, the better it will
be at that.
Speaker 1 (34:36):
But it's not going to have this kind of general
ability, and it's not going to grow out of
learning how to speak English.
Speaker 4 (34:44):
Have you heard the Terry Pratchett quote about this? It's quite
early on.
Speaker 6 (34:48):
He's talking about Hex, the kind of steampunk computer they
make at Unseen University, and it just has this offhand
line: a computer program is exactly like a philosophy professor;
unless you ask it the question in exactly the right way,
it will delight in giving you a perfectly accurate, completely unhelpful answer.
Speaker 7 (35:03):
Right.
Speaker 6 (35:03):
So, if you abstract the jibe at philosophy
lecturers there: basically, what intelligent things do is go,
you can't have meant that, you must have meant this,
right? ChatGPT goes, I will take the question as
read. And of course it doesn't have an 'I', there
is nothing in there, like I've just anthropomorphized
it again. But it's the same thing, and it's
trained to do very specific things, and you get the
(35:26):
same problem with any program, like garbage in, garbage out.
Speaker 2 (35:39):
So have you found a lot of students using ChatGPT,
because this is a hard problem to quantify? Is it
all the time?
Speaker 5 (35:46):
I mean, it's a lot. I wouldn't say it's
all the time.
Speaker 4 (35:49):
I was struck that you thought that there were more cases
this semester.
Speaker 3 (35:54):
Yeah, so I think there are quite a few, and
sometimes it is difficult for us to prove and if
we can't prove it.
Speaker 4 (36:00):
Then at our university, then well shucks. They kind of
get away with it.
Speaker 3 (36:05):
We can interview them, but unless we're like, unless we
have the proof that you could take before like a court,
then we're not able to really nail them
for it.
Speaker 4 (36:16):
Which is a bit of a shame.
Speaker 2 (36:18):
So it's suspicion.
Speaker 3 (36:19):
Yeah, we suspect a lot of them. I'm
one hundred percent on some of them.
Speaker 2 (36:23):
I just know. What are the clues? I want to
hear from all of you on this one. Like, what
are the signs?
Speaker 6 (36:27):
It's the uncanny valley, like, I will not shut up
about this, right, as you know. Like, you of
course will imagine a lot of your kind of readership
will be aware as well. But the uncanny valley thing in respect of
humans is that there's a kind of a point up
to which humans seem to trust artificial things more the
more they look like a human. So like, we'll trust
a robot if it's kind of bipedal or if it
(36:47):
looks like a dog. But after a point, we really
rapidly start to distrust it. So basically,
when it starts looking like Data, some deep buried lizard
brain goes danger.
Speaker 5 (36:58):
Danger. This thing's trying to fuck with you somehow.
Speaker 2 (37:01):
Yeah, right. And ChatGPT
Speaker 5 (37:03):
Writes exactly like that.
Speaker 6 (37:05):
It writes almost, but not entirely, like a thing that
is trying to convince you that it's human. I mean,
the dead giveaway for me is that it never
makes any spelling mistakes, but it can't format a
paragraph to save its life. Normally, you would expect someone
who didn't know how to kind of order their sentences
to misspell the occasional word. ChatGPT spells everything correctly
(37:25):
and doesn't know what, like, subject-object agreement is.
Speaker 2 (37:28):
Like, it's, so, can you just define that please? Subject
object: the beer that I drink, right, not the drink
that I beer. Right. Because it's interesting. It almost feels
like there is just an entire way of processing and
delivering information as a human being that we do not understand.
There is something missing.
Speaker 1 (37:48):
I mean, for me, like, it's very good at summarizing,
but when it responds, it responds in really
trite and repetitive ways, or like you'll get a paper
that summarizes a bunch of literature at length very effectively,
and then responds by saying, well, you know, this person
said this, that is doubtful. They should say more to
(38:11):
support that, you know, which is basically saying nothing right,
And that's pretty common.
Speaker 4 (38:17):
It also does lists and formats things kind of
like a list.
Speaker 1 (38:20):
And even if they, like, make it look less like
a list, the paper still reads like a list.
Speaker 2 (38:27):
Oh, because someone has asked, give me a few thoughts
on X. Yeah, yeah, that's very depressing.
Speaker 3 (38:36):
Things like the introductions you get: you'll get, this is
an interesting and complex question that philosophers have asked
through the ages, and this is one of the things
we shout at students not to write from first year,
and then you'll get this garbage right back at you,
and then at the end it will be oh, overall,
this is a complicated and difficult question, but.
Speaker 2 (38:58):
Many nuances, a world of contrasts.
Speaker 3 (39:01):
One of the things we tell them, don't ever fucking
do this, this is terrible, and it comes right back at you
in perfect English, the sort of English you'd expect a
really good student might have written. But clearly they can't
be a good student, because otherwise they'd have listened to a
fucking instruction.
Speaker 1 (39:18):
I also, I had some papers that I suspected were
ChatGPT this year, but they were already failing, so I
didn't, okay, yeah, I didn't think it was worth it
to pursue them, you know, as a plagiarism or ChatGPT case.
Speaker 2 (39:34):
So it's never good papers then. It's never like an
A, like a first.
Speaker 5 (39:38):
No, absolutely not.
Speaker 1 (39:39):
It's so I think that part of what goes on
is you can get a passing grade with a ChatGPT paper,
sometimes in the first couple of years when the papers
are shorter and we're not expecting as much. But then
when you move into what we call honours level here,
which is like upper level classes in the US like
third and fourth year.
Speaker 2 (40:00):
I've got a first from Aberystwyth, I'll have you know.
Speaker 1 (40:02):
Yeah, yeah, exactly, great, well done, well done. You would
not have gotten it with ChatGPT, because you get dropped
in these classes where we expect you to have gained
writing skills minimal ones in your first two years and
then we're going to build on that and have you
do more complicated stuff. And ChatGPT doesn't build on that, right,
(40:23):
it just stays where it was. So you go from
writing kind of C-grade passing papers to, yeah, your
F-grade paper. And it's also more obvious because the
papers are longer, and ChatGPT can write long text,
but it gets very repetitive and noticeably repetitive, right, and
so you're kind of lost, like you haven't done the
(40:46):
work of figuring out how to write on your own
and the tool that you've been using is not up
to the task that you're now presented with. And so
I think I have seen a few papers that I
was suspicious of, but the papers that I was certain
of were ones that were like senior theses, where it was very clear
a person just had no way of writing them.
Speaker 2 (41:07):
That's insane at that stage.
Speaker 6 (41:09):
Yeah, I mean it's funny, because I mean one of
the things that we get told about is like, oh,
the students have got to learn how to use you know,
AI and large language model plagiarism machines responsibly and
in a kind of positive way. Well, if they're using
them in a way that means they don't learn how
to write, then it's not positive. Is it like, yeah, it's
fucking hard to write a good essay? Yes, it
is hard to write. That's why we practice, that's why
(41:31):
we have editors, that's why we do this collaboratively. And
if you're using this as a sort of oh I
don't know how to write, well, tough shit, you're never
going to know how to write. That doesn't seem
a positive use of any of this.
Speaker 2 (41:43):
Well, that's the thing. Writing is also reading, the consumption
of information and then bouncing the ideas off your brain, allegedly.
Speaker 1 (41:50):
Yeah, I worry, because ChatGPT is good at summarizing,
so I worry that one of the uses people will think, ah,
this is a pretty good use for it is summarizing
the paper that they're supposed to read, and it will
do that effectively enough.
Speaker 4 (42:06):
For them to discuss it in class if they're willing
to be here.
Speaker 1 (42:09):
Yeah, right, but they're not going to pick up a
lot of the nuances in a lot of the kind
of like stylistic ways of presenting ideas that you get
when you actually do the reading.
Speaker 2 (42:20):
And it's so frustrating as well, because, like, for this,
for example, I've just got into printing things off. I
don't read PDFs anymore, because I feel like you do
need to focus.
Speaker 6 (42:31):
Yeah, there's some evidence that reading physical copies makes you
engage more.
Speaker 5 (42:34):
Sorry I'm very old fashioned, I guess.
Speaker 2 (42:36):
But no, it's true though. But also, reading that, I
wouldn't have, really, on a PDF. I wouldn't have given
it as much attention. But also, going through this paper,
you could see what you were doing, like you could
see that you were lining up here are the qualities
that we use to judge bullshit. But also summarizing a
paper does not actually give you the argument, it gives
you an answer. So what do you actually want students
(42:59):
to do instead? Because I don't think there's any reason
to use ChatGPT for these reasons. It
doesn't seem to do anything that's useful for them.
Speaker 1 (43:08):
I don't have, no, I don't actually have any use
for ChatGPT that I can put to my students
and say here's what I think you should do with it.
We are like kind of developing strategies for keeping them
from using it, So like building directly on what you're saying. Like,
in my class next year, I'm going to have the
students do regular assignments, which are argument summaries and not
(43:31):
paper summaries. So the idea is they have to read
the paper, find an argument, and tell me what are
the premises, what's the conclusion. And that's something that chatchipt
is not good at, right, But it's also something that
will give them critical.
Speaker 4 (43:43):
Reading skills, which is what I want to do, right.
Speaker 1 (43:46):
So yeah, I think that I've mostly been thinking about
ways to keep them from relying on it, because I
think that often, if they rely on it, they'll
put
Speaker 4 (43:59):
Themselves in the worst position.
Speaker 1 (44:01):
Yeah, when it comes to future work, they won't develop
the skills that they're going to need and the skills
that we tell them and their parents they're gonna get
with their college degree.
Speaker 2 (44:10):
Right, it almost feels like we need more deliberateness in
university education because I was not taught to write. I
just did a lot of it until I got good
enough grades, and Daniel Chandler was a great mentor. But I've
had tons of them, and it almost feels like we
need classes where it's like, Okay, no computer for this one.
I'm going to print this paper out and you're going
(44:30):
to underline the things that are important and talk to
me about it. It almost feels like we need to rebuild this,
because yes, we shouldn't be using ChatGPT to
half-arse our essays. But at the same time, human beings
are lazy.
Speaker 1 (44:43):
Yeah, I mean, for me, I also prefer not to read
off the computer, but I often read PDFs because I'm
terrible at keeping physical files right. Like, you know, I'm
not going to keep a giant file with all the
papers that I've read and written my liner notes in.
You guys can see in my office they're just piled around,
(45:05):
like, you can't see this.
Speaker 2 (45:05):
Well, that's academia.
Speaker 4 (45:07):
Yeah, but I just.
Speaker 1 (45:08):
Have piles of paper with empty coffee mugs everywhere that
the camera is not facing.
Speaker 4 (45:14):
But it's a terrible system.
Speaker 1 (45:15):
So at least on my computer, if I'm like, oh,
I read that paper like a year ago, what did
I think? I can click on it and see my
own notes, and I do think that there's something to
keeping those records and kind of actively.
Speaker 4 (45:26):
Reading in that way. I don't know how I ended
up on this without telling you how to make students do that.
Speaker 6 (45:31):
Well, you started with the correct answer, which is don't
use ChatGPT. Yeah, yeah, I actually I've got a
certain amount of sympathy with like, just keep writing till
you get good at it. But I realized as a
lecturer that can't be my official position. And I certainly
think that it's the case that certainly Glasgow has got
better over the last few years about going, Oh, actually
(45:53):
you do, like, we do need to give you some
kind of structuring and some buttressing on, here is how
to write academically.
Speaker 5 (45:58):
Here's how to go do research.
Speaker 6 (45:59):
And I think that's all to the good. It's worth saying
this started happening well before ChatGPT started pissing all
over our doorsteps, so they don't get to claim that
as being a benefit.
Speaker 2 (46:08):
There was a whole Wikipedia panic when I was in school.
Speaker 5 (46:10):
Yeah, think about Wikipedia, right. It's like, I used
to say this to my students.
Speaker 2 (46:14):
Wikipedia is literally one of the best resources.
Speaker 6 (46:17):
Absolutely fine as a starting point for research, absolutely, no problem whatsoever.
But if you're turning in an honours-level essay,
I want you to go and read the fucking things
it's referencing.
Speaker 4 (46:27):
Yeah, that's right.
Speaker 1 (46:28):
Yeah, I think these things are often great as sources.
My worry about ChatGPT is that it's not great
as a source.
Speaker 2 (46:34):
It's just yeah, we've been.
Speaker 1 (46:35):
Saying, it often gets things wrong, and often it'll
make up sources, whereas Wikipedia.
Speaker 4 (46:40):
Will never do that.
Speaker 6 (46:42):
There are some famous hoaxes, but they get caught. Yeah, Joe,
have you got any positive things to say about ChatGPT?
Speaker 4 (46:52):
Fan?
Speaker 3 (46:54):
So I know some people who have used these kinds
of things productively, not in ways that our students would,
but I know some mathematicians who have been using it
to do sort of informal proofs and things like that.
And it does still bullshit, and it bullshits very convincingly,
which makes it very difficult to use for this kind
(47:15):
of purpose. But it can do some interesting and cool things. Yeah,
I think some people in that sort of field have
found it useful. And also, we've mentioned this before: like, if
you want ChatGPT to write you a bibliography,
you've got a bibliography in one style, tell it to
put it into a different one. And it's
good for, like, coding, for doing certain things.
Speaker 1 (47:36):
I also think I don't know, I'm not sure how
I feel about what I'm about to say, but perfect, Yeah.
Speaker 4 (47:43):
I'm going for it.
Speaker 1 (47:44):
It is a somewhat positive thing, maybe, for ChatGPT,
which is that we often have students who have like
really interesting ideas and well thought out arguments, but for
whom English isn't their first language, and the actual writing
is kind of rough and you have to like push
through reading it to get the good idea, which is often.
Speaker 4 (48:03):
Really there and quite you know, creative and insightful.
Speaker 1 (48:07):
Yeah, And so I do wonder if there's a way
to use it so it just smooths off the edges
of this kind of thing. But I worry that if
you tell students to do that, they'll just reach for it
first, when if they can develop the language skills, they'd be
really well served by them. Yeah, yeah.
Speaker 4 (48:21):
What were you going to say?
Speaker 6 (48:22):
Well, so I want to say, yeah, you can
see me getting agitated. Yeah, I think Mike's correct about
it, that this is a kind of possible use.
But I think this, and this is why I'm getting
visibly agitated here, that students either need to, or feel
they need to, use
Speaker 5 (48:36):
This speaks to a deeper.
Speaker 6 (48:37):
Issue, right, to a social issue, to a political issue, to
an issue about how universities work. If a student is
having problems with English, then there's a number of like
explanations or a number of kind of responses. Right that
one response is that, like, Glasgow is an English-teaching university;
if someone's English isn't good enough to be taking a degree,
then plausibly they shouldn't have been let in. And why
have they been let in? Well,
Speaker 4 (48:56):
Because of money?
Speaker 6 (48:57):
Or alternatively, if someone's having problems with English for like
whatever reason at all, that should be supported. There should
be kind of tutors. There should be people who can
help with English. But again, that would cost the university money.
So of course that doesn't happen, right? It doesn't happen
anywhere, at least not to the extent it would have
to happen in order for this to be a general policy.
Speaker 1 (49:18):
Yeah, I think it could be better, But I do
think that universities often have like a writing center or
a tutoring center that you can send students to.
Speaker 5 (49:25):
Send, but they don't.
Speaker 6 (49:28):
They don't have the sort of spread that would be
needed or the staff that would be needed for this
to be an alternative to using ChatGPT to sand the
edges off.
Speaker 4 (49:36):
Yeah, I think, for example, one-to-one with your supervisor.
My worry especially would be, well, this is my first
year here.
Speaker 1 (49:43):
At Glasgow, but, yeah, I think
they probably have a good writing center. At universities I have been at
in the past, I felt very confident sending students to
the writing center when they have these problems. But I
think James is completely right that we don't want the
universities to see this as a way to get rid
of the writing center. And that's one hundred percent a
(50:03):
risk given the financial problems that.
Speaker 4 (50:05):
Universities are facing. And maybe they're already not, yeah?
Like, given the quality of papers, they're often good.
Speaker 1 (50:14):
I think another thing, as far as this is like
a social problem, is that when grading, I myself try
to grade in terms of, like, the ideas and argument,
this is philosophy.
Speaker 4 (50:26):
And not the quality of the writing, right. But not everybody
does that.
Speaker 1 (50:31):
So I kind of think that another part of this
is figuring out how we want to evaluate the students
and what we want to privilege in that evaluation.
Speaker 4 (50:38):
Yeah.
Speaker 6 (50:39):
Sure, so then again that becomes
a problem about what people are checking for. Not, let's
take this arse-backwards approach to marking, which is like,
how fancy is your English?
Speaker 5 (50:49):
Ah, fancy English is good English, have an A. But
rather, we should be kind of checking for other things.
Speaker 6 (50:53):
Right. So again the blame lies differently in that case,
but it still becomes a question.
Speaker 5 (50:56):
It's not solved technologically.
Speaker 2 (50:58):
Yeah, almost feels like large language models are taking advantage
of a certain kind of organizational failure.
Speaker 4 (51:05):
What an idea?
Speaker 2 (51:08):
Crazy, the tech industry manipulating parts of society.
Speaker 1 (51:12):
I have a kind of related tangent here, which is like,
what are the use cases that OpenAI was expecting
but didn't want to emphasize? Because for everybody in universities,
as soon as this came out, the first thought was
students are going to use this to cheat. And certainly,
like, the people at OpenAI went to college, right,
(51:33):
and that's what I hear about.
Speaker 2 (51:34):
Them. So they must... well, Sam Altman dropped out. Yeah.
Speaker 1 (51:38):
I'm sure he really understands the value of a secondary education. He's...
Speaker 2 (51:42):
Like, I'm not gonna write these fucking essays.
Speaker 1 (51:43):
Anyway, maybe he was thinking, I would have loved to
have a computer write my essays, I'll devote my life to that.
But I mean, I'm sure that they, like, recognize these
bad use cases, right, but they're doing nothing to mitigate
them as far as I can see. And like, another
one that's very related is, like, you know, I'm sure
you've heard of this, Ed: phishing, right? A lot of,
(52:04):
you know, corporations get attacked and get hacked not by
someone cleverly figuring out a backdoor to their system, but
by somebody doing social engineering, yeah, asking for the
password of somebody else.
Speaker 4 (52:18):
And one of the biggest.
Speaker 1 (52:19):
Barriers to that is that a lot of the people
who are engaging in phishing aren't from the same country
as the company they're targeting, right, so they're not able
to write a convincing email or make a phone call
that sounds like that person's supervisor.
Speaker 4 (52:34):
But with a tool like this, you could one hundred
percent write that email. Right.
Speaker 1 (52:39):
It's going to make it a lot easier for these
kinds of illicit schemes to work.
Speaker 2 (52:43):
There has been a marked increase. CNBC, which I've just
brought up, says there's been a two hundred and sixty five
percent increase in malicious phishing emails since the launch of
ChatGPT. Great stuff. I mean, if I could have
thought of that, imagine what a criminal could do.
Speaker 4 (52:57):
Right. But also, weren't the people at OpenAI
thinking about
Speaker 2 (53:00):
That? Because, like, they don't care.
Speaker 6 (53:03):
Yeah, yeah, we've all seen Jurassic Park, right? Yeah, yeah,
they were so busy thinking about what they could do,
they never thought about whether they should.
Speaker 4 (53:09):
Yeah.
Speaker 1 (53:09):
This is the kind of problem with the move fast
and break things mentality, like, obviously. I mean, I think
I might be the only person here who was raised in
the US, but we had Future
Speaker 4 (53:18):
Problem Solvers, where you, you know, think
Speaker 1 (53:21):
About a future problem and what bad consequences there could
be of some technology and how to solve them, usually
through social cues.
Speaker 4 (53:29):
If I could do that in fifth grade, Yeah.
Speaker 1 (53:33):
I would expect these people to have thought through some
of the bad consequences of the technology they're putting out.
And you know, some of those are cheating on tests
and they don't seem to have worried about that. Yeah,
And another one is phishing. They don't seem to have
worried about that.
Speaker 6 (53:47):
You know, biases in algorithms, right? So, yeah, yeah,
and I suppose, as it turns out, with
a lot of the facial recognition systems, they were incredibly racist.
Speaker 2 (53:57):
They were. Going back to Microsoft's Kinect, which could not see...
Speaker 6 (54:00):
Black people, yeah, yeah, and kind of CCTV stuff that basically
just sort of, unless it was presented with a blindingly
white Caucasian...
Speaker 5 (54:08):
Where I don't know, right, but like.
Speaker 6 (54:13):
The sort of stuff where these, like, language models are
trained on certain sets of data, and they're trained on
certain assumptions, and they're, like, spitting shit out, right. And particularly
if people think that it's actually doing any kind of thinking,
and if they kind of cargo cult it, we again
get a kind of social problem multiplied by technology feeding
back into a social problem, right.
Speaker 5 (54:33):
And these guys have heard me whinge
about it so much, I love it, but...
Speaker 6 (54:39):
I'm profoundly skeptical of technology's ability to solve anything unless
we know exactly the respect in which we want to
solve it and how that technology is going to be applied.
You know, like, sure, experiment with bringing back dinosaurs,
but like, don't tell me that it's going to save
the healthcare system unless you can demonstrate to me step
(55:01):
by step how that big old T. rex running around on
Isla Nublar is going to save anything, when they
just try to blind people.
Speaker 4 (55:06):
Actually, bringing back dinosaurs would be good. That would be great.
Speaker 6 (55:10):
All right, this isn't the best example, but I already had,
I had Jeff Goldblum in my head and I had
to go...
Speaker 2 (55:17):
To the Jurassic. Fellas, this has been such a pleasure.
Don't put that in the recording. That's right, we won't,
we'll edit it in post. Will you give us your names?
Speaker 4 (55:30):
I'm Mike Hicks, but my papers are written by Michael.
Speaker 1 (55:33):
Townsend Hicks and I'm a lecturer at the University of Glasgow.
Speaker 4 (55:36):
My website is Hicks.
Speaker 2 (55:40):
It will be in the episode notes. Don't you worry.
Speaker 4 (55:42):
We'll get that right.
Speaker 5 (55:43):
Just plugging my luggy staff.
Speaker 4 (55:46):
My name is Joe Slater.
Speaker 3 (55:48):
I'm a university lecturer in moral and political philosophy at Glasgow.
Speaker 6 (55:52):
I'm James Humphries. I'm a lecturer in political theory at
the University of Glasgow. And even if I wanted to
give you a website, I couldn't, because I don't have one.
Speaker 2 (56:01):
Everyone, you've been listening to Better Offline. Thank you so
much for listening, everyone. Guys, thank you for joining me.
Speaker 5 (56:06):
Thanks for having me here.
Speaker 6 (56:07):
This is.
Speaker 2 (56:17):
Thank you for listening to Better Offline. The editor and
composer of the Better Offline theme song is Matt Osowski. You
can check out more of his music and audio projects
at mattosowski dot com, M A T T O S
O W S K I dot com. You can email me
at ez at betteroffline dot com or visit betteroffline
dot com to find more podcast links and of course,
(56:38):
my newsletter. I also really recommend you go to chat
dot wheresyoured dot at to visit the Discord, and
go to r slash Better Offline to check out our reddit.
Thank you so much for listening.
Speaker 7 (56:50):
Better Offline is a production of Cool Zone Media. For
more from Cool Zone Media, visit our website coolzonemedia
dot com, or check us out on the iHeartRadio app,
Apple Podcasts, or wherever you get your podcasts.