Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Cool Zone Media.
Speaker 2 (00:04):
Hello and welcome to Better Offline. I'm your host, Ed
Zitron. In early June, three researchers from the University
of Glasgow published a paper in the journal Ethics and Information
(00:25):
Technology called ChatGPT Is Bullshit. And I
just want to be clear. This is a great and
thoroughly researched and well argued paper. This is not silly
at all. It's actually great academia. And today I'm joined
by the men who wrote it, academics Michael Townsen Hicks,
James Humphries and Joe Slater, to talk about ChatGPT's
mediocrity and how it's not really built to represent the
(00:48):
world at all. So, for the sake of argument, could
you define bullshit for me?
Speaker 3 (00:54):
So you are bullshitting if you are speaking without caring
about the truth of what you say. So normally, if
I'm telling you stuff about the world, in a good case,
I'll be telling you something that's true, and like trying
to tell you something that's true. If I'm lying to you,
I'll be knowingly telling you something that's false or something
I think is false. If I'm bullshitting, I just don't care.
(01:17):
I'm trying to get you to believe me. I don't
really care about whether what I say is true. I
might not have any particular view on whether it's true.
Speaker 2 (01:25):
Or not, right. And you distinguish between, like, soft and
hard bullshit? Can you also get into that as well?
Can you also identify yourselves as well?
Speaker 4 (01:34):
Yeah, yeah, I'm Joe. So the soft bullshit hard
bullshit distinction is a very serious and technical distinction, right,
So we came up with.
Speaker 3 (01:43):
This because bullshit, in the technical philosophical sense, comes
from Harry Frankfurt, recently deceased but a really great philosopher.
Speaker 4 (01:52):
And he talks about the
Speaker 3 (01:54):
Bullshit that there is in popular culture and just
in general discourse these days. Some of the ways he
talks about bullshit seemed to suggest that it needs to
be accompanied by a sort of malign intention.
Speaker 4 (02:08):
I'm doing something kind of bad.
Speaker 3 (02:11):
I'm intending to mislead you about the enterprise of what
I'm doing.
Speaker 4 (02:15):
Yeah, maybe about who you are or what you know.
Speaker 5 (02:18):
So you may be trying to portray yourself as someone
who is knowledgeable about a particular subject. Maybe you're a
student who showed up to class without doing the work.
Maybe you're trying to portray yourself as someone who's virtuous
in ways you're not. Maybe you're a politician who wants
to seem like you care about your constituents, but actually
you don't. So you're not trying to mislead somebody about
(02:42):
what you're saying, the content of your utterance. You're trying
to mislead them instead about, like, why you're saying it.
That's what we call hard bullshit.
Speaker 4 (02:51):
Yeah, it's one of the things Frankfurt talked about.
Speaker 3 (02:54):
Yeah, so Frankfurt doesn't make this hard bullshit soft bullshit
distinction, but we do, because sometimes it seems like
Frankfurt has this particular kind of intention in mind,
but sometimes he's just a bit looser with it. And
we want to say that ChatGPT and other large
language models, they don't really have this intention to deceive,
because they're not people; they don't have these intentions. They're
(03:15):
not trying to mess with us in that kind of way,
but they do lack this kind of caring about truth.
Speaker 1 (03:21):
Well, I'm James, by the way. I suppose strictly we
don't want to say that they aren't hard bullshitters.
We just think, if you don't think that large language
models are sapient, if you don't think they're kind of
minds in any important way, then they're not hard bullshitters.
So I think in the paper we don't take a
position on whether or not they are. We just say,
if they are, this is the way in which they are.
(03:41):
But minimally they're soft bullshitters. So soft bullshit,
as Joe says, doesn't require that the speaker's attempting to deceive
the audience about the nature of the enterprise.
Speaker 6 (03:50):
Hard bullshit does.
Speaker 1 (03:51):
So if it turns out that large language models are sapient,
which they're definitely not, like, that's just techno-prophecy.
Speaker 2 (03:57):
Yeah, that's nonsense.
Speaker 1 (03:58):
Yeah. But if they are, then they're bullshitters, and minimally
they're soft bullshitters, or they're bullshit machines.
Speaker 2 (04:04):
So you also make the distinction in there about intention,
so the very fabric of hard bullshit is that you intentionally
are bullshitting to someone. You kind of make this distinction
that the intention of the designer and the prompting involved
could make this hard bullshit. Because with a lot of
these models, and someone recently jailbroke ChatGPT and
(04:27):
it listed all of the things it's prompted to do,
could prompting be considered intentional? That Sam Altman, CEO of
OpenAI, could be intentionally bullshitting? I think it is.
Speaker 1 (04:37):
Yeah, this again, I think it's something, I don't know
what the kind of hive mind consensus on this is.
I'm sort of sympathetic to the idea that if you
take this kind of purposive or teleological attitude towards it, right,
what an intention is is an effort to do something,
then maybe they do have intentions. But again, I think
in the paper we just sort of wanted, I mean,
it's a standard philosophical move, right, to just sort of go, look,
(04:59):
here's all of this as uncontroversial as we can make it.
Now we can hit you with the really controversial shit
that we wanted to get to. So in the paper
we sort of deliberately went, maybe you might think it
has intentions for this reason, and we kind of make
no judgment on this officially. Unofficially, I'm sympathetic to the sort
Speaker 6 (05:13):
Of view that you're putting.
Speaker 4 (05:14):
I think you're kind of sympathetic to this as well.
Speaker 6 (05:16):
Right, Yeah, so.
Speaker 5 (05:17):
I'm Mike. There are a few ways that you can
think of ChatGPT as having intentions.
Speaker 4 (05:21):
I think, and we talked about a few of them.
Speaker 5 (05:23):
One is by thinking of the designers who created it
as kind of imbuing it with intentions. So they created
it for a purpose and to do a particular task,
and that task is to make people think that it's
a normal sounding person.
Speaker 6 (05:38):
Right.
Speaker 5 (05:38):
It's to make people when they have a conversation with it,
not be able to distinguish between what it's outputting and
what a normal human would say.
Speaker 6 (05:48):
Right.
Speaker 4 (05:48):
Right, And that kind.
Speaker 5 (05:50):
Of goal, if it amounts to an intention, is the
kind of intention we think a bullshitter has.
Speaker 4 (05:59):
Right.
Speaker 5 (05:59):
It's not trying to deceive you about anything it's saying.
It doesn't care whether what it's saying is true. That's
not part of the goal. But what the goal is, is to
make it seem as if it's something
that it's not, like specifically a human interlocutor. And one
source for that goal is the programmers who designed it.
Another is the training method. So it was trained by
(06:20):
being given sort of positive and negative feedback in order
to achieve.
Speaker 4 (06:25):
A specific thing, right, And that specific thing is just
sounding normal.
Speaker 5 (06:29):
And that's so similar to what our students are doing
when they try to pretend that they read something they
haven't read.
Speaker 2 (06:36):
There is something very collegiate about the way it bullshits though.
It reminds me of when I was in college, at
Penn State and Aberystwyth, two very different institutions.
Speaker 4 (06:45):
Right, both kind of in the middle of nowhere.
Speaker 2 (06:49):
Yeah, both very sad. But the one thing you saw
with like students who were doing like B plus homework
is they were using words they didn't really understand. They
were putting things together in a way that read as, like,
the intro, body, conclusion, there was a certain formula behind it,
and it's just, it feels exactly like it. But that
kind of brings me to my next question, which is,
how did you decide to write this? What inspired this?
Speaker 5 (07:11):
Right?
Speaker 1 (07:12):
I should field this, because whenever Mike tells the story,
you get the sort of sanitized version. We were in the
pub whining about student essays. Perfect, yeah, like.
Speaker 4 (07:19):
You were not long here, right, you're not long yeah.
Speaker 1 (07:22):
Long in post, a bunch of us, who went
to the pub on an informal let's, let's welcome our new
members of staff sort of event, and inevitably, about two pints in,
we were all pissing and moaning. And do you think it might have been.
Speaker 4 (07:34):
Neil that kind of prompted this. No, I don't think
he was the.
Speaker 6 (07:38):
We talked about it after, right? Yeah, in any case
it sort of came up.
Speaker 1 (07:41):
We were talking about this sort of prevalence of chat
GPT generated stuff and kind of what it said about
how at least some students were kind of approaching the assessments,
and you know, I forget who, someone sort of just
went offhandedly, yeah, it's all just Frankfurtian
bullshit though, isn't it. And we sort of collectively went hello,
because obviously, all having a background in philosophy, we've all
read On Bullshit. You know, we all went, hey, we
get to say bullshit in allegedly serious academic work. So
(08:04):
the start of it was, we've had this experience of having
to read all of this kind of uncanny valley stuff,
and when prompted, we all went, oh, it really is
like Frankfurtian bullshit, in a way that we
can probably get a paper out of it.
Speaker 5 (08:16):
Yeah. And at that point I think we were in
the department meetings talking about how to deal with chat
GPT written papers, and there were discussions going on all
over the university, including kind of at high levels in
the administration, to come up with a policy, and we
specifically wanted a stronger policy, because our university is very
(08:38):
interested in cutting-edge technology, right, and so they wanted
the students to have an experience using and figuring out
how to use whatever the freshest technology is.
Speaker 2 (08:48):
Which as they should as they should, but not for essays, right,
That's the.
Speaker 4 (08:53):
Worry we had.
Speaker 5 (08:54):
We thought, you know, if we're not very clear about this,
the students will be using it in a way that
will detract from the educational experience.
Speaker 4 (09:02):
Right. And at the same time, it was becoming more.
Speaker 5 (09:05):
Widely known how these machines work, like specifically how they're
doing next token or next word prediction in order to
come up with a digestible string of text. And when
you know that that's how they're doing it, I mean,
it seems so similar to what humans do when they
have no idea what they're talking about. And so when
(09:28):
we were talking about it, it just seemed like an obvious
paper that someone was going to write, and we thought
that it had better be us. You know, eventually people are going
to see this connection and it's.
Speaker 2 (09:37):
A great paper.
Speaker 4 (09:39):
I think it worked very well.
Speaker 5 (09:40):
I mean, I think that it's the kind of thing
where when I pitch this to other philosophers, it doesn't
take them very long to just agree to be like, ah, yes.
Speaker 6 (09:49):
That's right.
Speaker 2 (09:50):
It's an interesting philosophical concept as well, because the way
that people look at ChatGPT and large language models
is very much machine do this. But when you think
about it, there are these ways where people are giving
it consciousness. I just saw a tweet just now,
someone talking about saying please with every request. It's like, no,
I will abuse the computer in whatever manner I see fit.
(10:10):
But it's, it's curious, because I think more people need
to be having conversations like this. And one particular thing
I like that you said, I actually would love you
to go into more detail, is you said, ChatGPT
and large language models, they're not designed to represent the
world at all. They're not lying or misrepresenting. They're
not designed to do that. What do you mean by that?
Speaker 5 (10:30):
I mean, kind of as background, I do philosophy of science,
but my thoughts about something like ChatGPT are largely inspired
by the fact that I also teach a class called
Understanding Philosophy through Science Fiction.
Speaker 4 (10:43):
Oh, and in that we, like, talk.
Speaker 5 (10:44):
About whether computers could be conscious and I don't know
what you guys think. Actually I think they could, right,
I just don't think this one is. And part of
the reason I think they could but this one isn't
is that I think that in order to sort of
represent the world or have the kinds of things we
have that are like beliefs, desires, thoughts that are about
(11:07):
external things, you have to have internal states that are
connected in some way to the external world. Usually causation.
We're perceiving things, information is coming in. Then we've got
some kind of state in our brain that's designed just
to track these things in the external world.
Speaker 4 (11:26):
Right.
Speaker 5 (11:26):
That's a huge part of our cognitive lives. It is
just tracking external world things and it's a very important
part of childhood development, when you figure out how to
interact with them.
Speaker 2 (11:37):
It's semiotics, right? Daniel Chandler at Aberystwyth taught me semiotics. It's
like our perception of the world.
Speaker 5 (11:43):
And this is like theory of meaning stuff. So, yeah,
semiotics is like theory of signs. How is it that a sign.
Speaker 6 (11:50):
Can be both the representation of the thing and the
thing itself.
Speaker 4 (11:53):
That can happen, yeah, but not always.
Speaker 5 (11:55):
Sometimes it's just the representation of the thing, and there's
a lot of philosophy aimed at figuring out how brain
states or words on a page can be about external
world things. And a big part of it, at least
from my perspective, has to do with tracking those things,
keeping tabs on them, changing as a result of seeing
(12:17):
differences in the external world. And ChatGPT is not
doing any of that, right, That's not what it's designed
to do. It's taking in a lot of data once
and then using that to respond to text. But it's
not remembering individuals, tracking things in the world in any
(12:37):
way perceiving things in the world. It's just forming a
sort of statistical model of what people say. And that's
kind of so divorced from what most thinking beings do.
Speaker 2 (12:51):
It's divorced from experience.
Speaker 4 (12:53):
Yeah. I mean, as far as I can tell, it
doesn't have anything like experience.
Speaker 6 (12:56):
Yeah.
Speaker 1 (12:57):
And that's one of the things that this, I think, sort of
in one way comes down to, is that if
you sort of push this sufficiently far, someone is going
to go, ah, isn't this just biochauvinism, right? Like, aren't
you just assuming that unless something runs on, like, meat,
it can't be sentient? And this isn't something we get
into in the paper, partly because we didn't really think it
was worth addressing. But the sorts of things that seem
(13:18):
like they're, like, never mind consciousness, right, but that seem
to be necessary in order for something to be trying
to track the world, or in order to be corresponding to
the world, or to form beliefs about the world, Chat
GPT just doesn't seem to meet any of them. Kind of,
if it does turn out that it's sapient, then
ChatGPT has got some profoundly serious executive function disorders, right? It's not sapient, right,
(13:39):
so we don't have to worry about it. But kind
of it's not the case that we've got some blundering
proto general intelligence that's trying to figure out how to
represent the world. It's not trying to represent the world
at all. Its utterances are designed to look as if
it's trying to represent the world, and then we just
go that that's just bullshit.
Speaker 6 (13:53):
This is a classic case of bullshit.
Speaker 2 (13:55):
Yeah, it seems to be making stuff up, but making
stuff up doesn't even seem to describe it. It's just
throwing shit at a wall very accurately, but not accurately enough.
Speaker 1 (14:03):
Yeah, it's got various guidelines that allow it to throw
shit at the wall with a sort of reasonably high
degree of... Actually, no, you're right. I mean, one of
the things that a human bullshitter could at least be
characterized as doing is they'd have to try and kind
of judge their audience, right, They'd have to try and
make the bullshit plausible to the audience that they're speaking to,
and ChatGPT can't do that, right. All it can
do is go on a statistically large enough model and
(14:24):
it looks.
Speaker 6 (14:25):
Like Z follows why follows X?
Speaker 1 (14:27):
Right, it's not, it hasn't got any, doesn't have any
consciousness at all, of course, but it doesn't have any
sensitivity to the sorts of things that people are in
fact likely to find plausible. It just does a kind
of brute number crunch. So it's more complicated than that,
but I think it boils down effectively to kind of
number crunching, right, kind of data and context.
Speaker 2 (14:45):
In which probabilistic planning of, what, planning? It doesn't plan.
That's the thing. It's interesting. There are these words you
use to describe things that, when you think about it,
are not accurate. You can't say it plans or thinks.
Speaker 4 (14:57):
I think one thing.
Speaker 5 (14:58):
Between when we came up with the idea and when
we finished writing the paper, we spent some time reading
about how it works and how it represents language and
what the statistical.
Speaker 4 (15:09):
Model is like.
Speaker 5 (15:11):
And I was maybe more impressed than James about that,
because it is, like, doing things that are similar to
what we do when we understand language. It does seem
to kind of locate words in a meaning space, right,
and connect them to other words, and, you know, show
similarity in meaning, and it also does seem to be
(15:33):
able to in some way understand context. But we don't
know how similar that is for a variety of reasons,
but mostly because it's too big of a system and
we can only kind of probe it. And it's trained indirectly, right,
so it's not programmed by individuals. And even though that's
(15:55):
kind of a very impressive model of language and meaning
and may in some ways be similar to what we do
when we understand language, we're doing a lot more things
like planning, things like tracking things in the world. Just
having desires and representing the way you want the world
to be and thereby generating goals doesn't seem to be
(16:17):
something that it has any.
Speaker 4 (16:18):
Room for in its architecture.
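The "meaning space" mentioned here can be pictured with word vectors: each word is stored as a list of numbers, and words used in similar contexts end up close together, which is roughly what is meant by locating words in a meaning space. A minimal sketch follows; the three-dimensional vectors are invented stand-ins for the thousands of dimensions a real model learns from text, so the specific numbers are only illustrative.

```python
import math

# Made-up "meaning space": each word is a point; nearby points are
# words the model treats as similar. Real models learn these vectors.
vectors = {
    "king":    [0.9, 0.8, 0.1],
    "queen":   [0.9, 0.7, 0.2],
    "cabbage": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Standard measure of how close two word vectors point."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

print(cosine_similarity(vectors["king"], vectors["queen"]))    # high: close in meaning space
print(cosine_similarity(vectors["king"], vectors["cabbage"]))  # low: far apart
```

Nothing in this picture involves planning, desires, or tracking objects in the world, which is the distinction the speakers are drawing.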
Speaker 1 (16:20):
This is something, at the time it just struck
me, and you were talking about the kind of, in
some ways it learns language the same way.
Speaker 6 (16:24):
That we do.
Speaker 1 (16:25):
I mean, it's got no grasp of expletive infixation, right,
this is one of Chomsky's here.
Speaker 2 (16:28):
What does that mean? Just for the listeners, not for me. I
definitely know.
Speaker 1 (16:32):
Yeah, yeah, of course if I give you the sentence
that's completely crazy, man, and tell you to put the
word fucking into that sentence, there's a number of ways
in which any language speaker is going to do it
that they'll just go, yeah, of course.
Speaker 4 (16:44):
That's where it goes, right.
Speaker 1 (16:46):
But we, it seems that we've got a grasp on
this incredibly early on, in a way that doesn't look
like the way at least most of us were
taught language, right, we get quite rightly told off when
we try.
Speaker 4 (16:56):
And do expletive infixation.
Speaker 1 (16:57):
Yes. So this I think would be one of those
cases where you could do a sort of disanalogy by cases. Right,
you present ChatGPT a sentence and say, insert the
word fucking correctly in this sentence.
Speaker 4 (17:08):
And I don't think it would be very good at.
Speaker 6 (17:10):
It I think it would be.
Speaker 4 (17:10):
I think you reckon.
Speaker 1 (17:11):
I mean, we probably couldn't tell we could, but we shouldn't.
Speaker 4 (17:16):
Yeah.
Speaker 5 (17:16):
One of the things that I thought was like kind
of interesting about how it works is that it does
learn language, probably differently from the way we do, but
it does it all by examples, you know, So it's
looking at all these pieces of text and thinking, ah,
this is okay, that's okay. And one kind of interesting
thing about how humans understand language is that we're able
(17:38):
to kind of understand meaningless but grammatical sentences. It's not
clear to me that ChatGPT would understand those. That's
another Chomsky example. So you know, Chomsky has this example
that's like, colorless green ideas sleep furiously, right,
(17:58):
And that's a meaningless sentence, but it's grammatically well formed,
and we can understand that it's grammatically well formed, but
also that it's meaningless. Because ChatGPT kind of combines
different aspects of what philosophers of language, logicians, linguistics people
see as like different components of meaning. It sees these
(18:20):
as all kind of wrapped up.
Speaker 4 (18:21):
In the same thing. It puts them in the same.
Speaker 5 (18:22):
Big model. I'm not sure it could differentiate between ungrammaticality
and meaninglessness.
Speaker 2 (18:34):
So we're doing real time science right now.
Speaker 4 (18:36):
I just, yeah, we should check. Put the word fucking.
Speaker 2 (18:38):
Into the following sentence in the correct way: Man, that's crazy.
And I did it six times, and I would say
fifty percent of the time it got it right. It did:
Man, that's fucking crazy. Man, that's crazy fucking. Man,
that's fucking crazy. Man, that's crazy fucking. My favorite is, man,
comma, that's crazy, comma, fucking.
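For anyone who wants to repeat this bit of real-time science, the probe is easy to script. A rough sketch using OpenAI's Python client is below; the model name is a placeholder, the prompt wording is adapted from the exchange above, and the outputs will vary from run to run, which is rather the point.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = ("Insert the word 'fucking' into the following sentence "
          "in the correct way: Man, that's crazy.")

# Ask the same question several times and collect the answers,
# mirroring the six manual attempts described above.
for attempt in range(6):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(attempt + 1, response.choices[0].message.content)
```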
Speaker 6 (18:58):
To be fair.
Speaker 4 (19:01):
Reliable.
Speaker 1 (19:01):
You take the commas out of that last one, and
you've got a grammatical sentence.
Speaker 2 (19:05):
Yeah.
Speaker 1 (19:05):
Of course in Glasgow you can also start with fucking man,
that's crazy.
Speaker 2 (19:09):
West London as well. But the thing is, though it
doesn't know what correct means there.
Speaker 6 (19:13):
Yeah.
Speaker 2 (19:13):
Now, when it's trained on this language, when it's trained
on thousands of Internet posts that they stole, it's not like
it reads them and says, oh, I get this, like
I see what they're going for. It just learns structures
by looking, which is kind of how we learn language.
But it kind of reminds me of like when I
was a kid and I'd hear someone say something funny
(19:35):
I'd repeat it, and my dad, who's wonderful, would
just say that doesn't make any sense, and you'd have
to explain, because if you're learning everything through copying, you're
not learning, you're just memorizing.
Speaker 6 (19:45):
Yeah, yeah, exactly.
Speaker 5 (19:47):
There's a, I don't know if you have already talked
to somebody about this, but there's a, you know, a
classic argument from Chomsky against behaviorism.
Speaker 2 (19:55):
Right.
Speaker 5 (19:56):
Behaviorism is the view that we learn everything through stimulus response.
Speaker 4 (20:01):
Roughly, that's not exactly.
Speaker 5 (20:02):
It, but I'm not a philosopher of mind, so I
can get away with that.
Speaker 4 (20:07):
So Chomsky says, look, we don't get enough.
Speaker 5 (20:09):
Stimulus to learn language as quickly as we do just
through watching.
Speaker 4 (20:15):
Other people's behavior and copying it.
Speaker 5 (20:17):
We have to have some in built grammatical structures that
language is latching onto. And there's been some papers arguing
that ChatGPT shows Chomsky was wrong because it doesn't have
the inbuilt grammatical structure.
Speaker 4 (20:31):
But one interesting thing.
Speaker 5 (20:32):
Is it requires ten to one hundred times more data
than a human child does when learning language. Right, So
Chomsky's argument was we don't get enough stimulus, and chat
GPT can kind of do it without the structure, but
it's not quite doing it as well, and it gets
like orders of magnitude more input than a human
(20:55):
does before a human learns language, which is kind of interesting.
Speaker 4 (21:00):
You're going to do something as basic as putting the word.
Speaker 6 (21:04):
Yeah, yeah.
Speaker 2 (21:04):
It doesn't even see, and it doesn't have the knowledge
to say, request more context, because it doesn't perceive context.
And that's kind of the interesting thing. So there was
another paper out of Oxford, I think, that's talking about
cognition and ChatGPT and all this thing, and it's
just, ChatGPT features in no way in
any of the things that the human mind is really
involved in. It seems it's mostly just not even memorization,
(21:27):
because it doesn't memorize. It's just guessing based on a
very big pile of stuff. But this actually does lead
me to my next question, which is, you don't like the
term hallucination. Why is that?
Speaker 3 (21:42):
It makes it sound a bit like I'm usually doing
something right, and I'm looking around seeing the world as
something like what it really is. And then one little
bit of the picture, for a visual hallucination, one feature
of my visual field, actually isn't represented in the real world,
it's not actually there.
Speaker 4 (22:02):
Everything else might well be right.
Speaker 3 (22:03):
Imagine I hallucinate, there's a red balloon in front of me.
I still see Mike, I still see James, I still
see the laptop.
Speaker 4 (22:11):
One bit is.
Speaker 3 (22:11):
Wrong, everything else is right, and I'm doing the same
thing that I'm usually doing, Like my eyes are still
working pretty much normally, I think. And this.
Speaker 4 (22:21):
Is the way I usually get knowledge about the world.
Speaker 3 (22:23):
This is a pretty reliable process for me learning from
it right and representing the world in this way. So
when talking about hallucinations, this suggests that ChatGPT and
similar things they're going through this process that is usually
quite good at representing the world. And then oh, it's
(22:45):
made a mistake this one time, but actually no, it's
bullshitting the whole time, and like sometimes it gets things
right right, bullshitting just like a politician.
Speaker 4 (22:55):
Imagine a politician that bullshits all the time, if you.
Speaker 3 (22:59):
Could possibly imagine it, Sometimes they might just get some
things true, and we should still call them bullshitting because
that's what they're doing. And this is what ChatGPT
is doing every time it produces an output. So this
is why we think bullshit is a better way of
thinking about this. One of the reasons why we think
bullshit's a better way to think about it.
Speaker 5 (23:20):
I also kind of think that some of the ways
we talk about ChatGPT, even when it makes mistakes,
lend themselves to overhyping its ability or overestimating its abilities,
and talking about it as hallucinating is one of these,
because when you say that it's hallucinating, as you pointed out,
you're giving the idea that it's representing the world in
(23:43):
some way and then telling you what the.
Speaker 2 (23:45):
Content and it has perception.
Speaker 1 (23:46):
Yeah, exactly, like it has perceived something and like, oh no,
it's taken some computer acid and now it's hallucinating, like,
imagining things, and yeah, just as you say, and.
Speaker 4 (23:56):
That's not what it's doing.
Speaker 5 (23:57):
And so when the kind of people who are trying
to promote these things as products talk about the
AI hallucination problem, they're kind of selling a product that
is representing the world, usually checking things,
and occasionally makes mistakes. And if the mistakes were, like
(24:19):
Joe said, a misfiring of a normally reliable
process, or, you know, something that normally represents going wrong
in some way, that would lend itself to certain solutions
to them, and it would make you think there's an
underlying reliable product here, right, which is exactly what somebody
who's making a product to go on the market will.
Speaker 4 (24:40):
Want you to think.
Speaker 3 (24:41):
Right.
Speaker 5 (24:41):
But if that's not what it's doing in a certain sense,
they're misrepresenting what it's doing even when it gets things right.
Speaker 4 (24:48):
And that's that's bad for all of us who are
going to be using these.
Speaker 5 (24:52):
Systems, especially since people, you know, most people don't know
how this works. They're just understanding the product as it's
described and using these kind of metaphors. So the way
the metaphor describes it is going to really influence how they
think about it and how they use it.
Speaker 6 (25:08):
Yeah, just to sort of clarify, if I can.
Speaker 1 (25:10):
One of the responses from some corners has been
to sort of say of us, look, you whinge about
people anthropomorphizing ChatGPT, but look, if you call it a bullshitter,
you're doing exactly the same thing. And I mean there
might be some sense in which it's just really hard not
to anthropomorphize it. I don't know why I picked the
word that I can barely say, like we've been doing
it constantly throughout this discussion. Right, when we were talking
(25:31):
through the kind of through the paper, we kept talking
about chat GPT as if it had intention, right, as
if it was thinking about anything. That might be another
reason to call it bullshit. We go, look, if we
have to treat it as if it's doing something like
what we do, it's not hallucinating, it's not lying, it's
not confabulating, it's bullshitting, right, And if we have to
treat it as if it's behaving in some kind of
human like way, here's the appropriate human like behavior to
(25:53):
describe it.
Speaker 2 (25:54):
I also think the language in this case, and one
of the reasons they probably really like the large language
model concept, is language gives life to things. When we
describe the processes through which we interact with the world
and interact with living beings, even cats, dogs, even, we
anthropomorphize living things. But also when we communicate with something,
language is life, and so it probably works out really fucking
(26:16):
well for them. Sam Altman was saying a couple of weeks back,
maybe a month or two, he was saying, oh, yeah,
AI is not a creature, was something he said, and
it was just so obvious what he wanted people to
do was say, but what if it was, or have
people saying this is a creature, and it almost feels
like just part of the con. Yeah, yeah, yeah.
Speaker 5 (26:36):
I hadn't thought about that as a reason for them
to go for large language models as a way of
kind of, I don't know, being the gateway into more investment.
Speaker 2 (26:48):
Into fake consciousness.
Speaker 6 (26:50):
Yeah.
Speaker 5 (26:50):
Yeah, but I had thought about, like, how this might
have been caused by just, like, deep misunderstandings of the Turing test.
Speaker 2 (26:58):
Right, go ahead, hear this one.
Speaker 5 (27:01):
Yeah, yeah, so, like, the Turing test, I think
this is closer to what Turing was thinking. But the
Turing test is a way of getting evidence that something
is conscious. Right? So, you know, I'm not in your head,
so I can't feel your feelings or think your thoughts directly. Right,
I have to judge whether you're conscious based on how
(27:22):
you interact with me, and the way I do it
is by listening to what you say, right, and talking
to you. And Turing sort
Speaker 4 (27:30):
Of was asked, you know, how would you know if
a computer was conscious?
Speaker 5 (27:35):
So, you know, we think that our brains are doing
something similar to what computers do. That's a reason to
think that maybe computers eventually could have thoughts.
Speaker 4 (27:44):
Like ours, right, and some of us think that,
I think it's possible. It's great.
Speaker 5 (27:49):
Yeah, I didn't know if they thought it was possible,
because not everybody thinks it's possible.
Speaker 2 (27:52):
It's possible. I just don't know how.
Speaker 4 (27:54):
Yeah, yeah, exactly. So Turing was kind of, you know,
thinking like, how would we know? And one way we
would know.
Speaker 5 (28:01):
The obvious way is to do the same thing you
do to humans, talk to it and see how it responds.
And that's actually pretty good evidence if you don't have
the ability to look deeper. But it's not constitutive of
being conscious. It's not what makes something conscious or determines
whether they're conscious or in any way like grounds their consciousness. Right,
(28:24):
Their ability to talk to you is just evidence. It's
just one signal you can get. And that's the way
to think of the Turing test. So as a result
of people thinking in a kind of behaviorist way, thinking, ah,
passing the Turing test is just all it is to
be a thinking thing, there have been, at least since
the nineties, attempts to design chatbots that can beat the
(28:46):
Turing test, right, and popularizations of these attempts and run
throughs of the Turing test that talk as if, oh,
if a computer finally beats the.
Speaker 4 (28:54):
Turing test. I should say what the Turing test is.
Speaker 5 (28:56):
Right. The way Turing suggests that the test works is
you have a computer and a person both chatting in
some way with a judge, and the judge is also
a person, right. And if the judge can't tell which
of the people he's chatting with is a human, then
the computer's won the Turing test, because it's indistinguishable from
a human.
Speaker 4 (29:16):
Right.
Speaker 5 (29:16):
So people have taken this and it's been popularized as
a way of determining sort of full stop, whether something's conscious.
Speaker 4 (29:24):
But it's just a piece of evidence, and we have
a lot.
Speaker 5 (29:26):
More evidence. Like, we know a lot more than Turing
did about how the internal functioning of a mind works, functionally,
what it's doing, how representation works, how experience and belief works,
how they are connected to action, and how they're connected
to speech and thought. And once you know all that stuff,
you have a lot of other avenues to get more
(29:48):
evidence about whether the thing is conscious, and whether it passes
the Turing test is just like a drop in the
bucket compared to these, especially if you know how its
internal functioning works.
Speaker 1 (29:58):
The other, the sort of, problem with the Turing test,
and I think, to be fair, Turing did mention this,
if not in the original then kind of later on. One
problem with the Turing test is that it's like the
Voight-Kampff test in Do Androids Dream of Electric Sheep?, right.
Plenty of humans would fail the Turing test. Yeah, it
is a piece of evidence, but it was never, as
Mike said, supposed to be constitutive. It wasn't like,
(30:19):
if you can do this thing, you're conscious, kind of
full stop. It was supposed to be, here is a
thing that might indicate that the being you're talking to is conscious.
Speaker 4 (30:28):
Happily, as Mike said, we've got loads and loads
of other evidence.
Speaker 5 (30:31):
So these guys have made a machine that's just designed
to do one thing, and that's pass the Turing test.
Speaker 2 (30:37):
I can give you one more annoying example. Are you
familiar with François Chollet, the Abstraction and Reasoning Corpus? So
this is going to make you laugh. So he's a
Google engineer and he created this thing, the ARC,
the Abstraction and Reasoning Corpus, to test whether a machine was intelligent,
and someone created a model that could beat it, and
then he immediately went, okay, you can't just train the
(30:59):
model on the answers to the test.
Speaker 1 (31:02):
This is why people, well, I say people, a fairly
small subset of weird nerds, but this is why a
small subset of weird nerds have been for the last
twenty years emphasizing artificial general intelligence, right, and kind of
like, when we'll call it, when something really is a
thinking being, is when it's not specialized to do one
and only one task, but rather when it's capable of
(31:23):
applying reasoning to multiple kinds of different and disanalogous cases.
On the one hand, it does seem a little bit
like the guy flipping the table and going, oh, for
fuck's sake, now you've won, I'm changing the rules. But
on the other, I think he's got a point, right? Yeah,
if you're training a thing to do very specific, you know,
like, activate certain shibboleths, then unless you're some kind of
mad hard behaviorist, then yeah, like, that doesn't
(31:44):
demonstrate intelligence. It is one thing that might indicate intelligence.
Speaker 2 (31:48):
It's the same problem with ChatGPT. It's built
to resemble intelligence and resemble consciousness, to resemble these things,
but it isn't them. It's almost like it's meaningless. On a
very high-end philosophical level, I find the whole generative AI
thing deeply nihilistic.
Speaker 5 (32:03):
I mean, one thing that connects to this is how
bad it is at reasoning. And this is kind of
good for us, especially in philosophy, because our students when
they use it to write papers, the papers have to
have arguments, and ChatGPT is very bad at doing reasoning.
If it has to do, you know, a sort of extended
(32:24):
argument or a proof or something like that, it's very
bad at it. I think also that if there's one
thing kind of we learned from ChatGPT, it's that
this is not the way to get to artificial general intelligence.
Speaker 2 (32:38):
I was going to ask, do you think that this
is getting to that?
Speaker 5 (32:41):
No, partially because it's so subject specific, right. It's,
it's trained to do one task, and it takes quite
a lot of training to get it to do that task well.
It's bad at many of the other tasks that we
think are connected with intelligence. It's bad at logical and
mathematical reasoning. I understand that OpenAI is trying to
(33:02):
fix that. Sometimes it sounds like they want to
fix it by just connecting it to a database or
a program that can do that for it. But either way,
when you have these kinds of big Bayes net models,
it's something that is really good at whatever you train
it to do, but not going to be good at
(33:22):
anything else. You know, it's got a lot of data
on one thing. It finds patterns in that, it finds
regularities in that, it represents those. The more data you
feed it, the better it will be at that. But
it's not going to have this kind of general ability,
and it's not going to grow it out of learning
how to speak English.
Speaker 4 (33:41):
Have you heard the Terry Pratchett quote about this? It's quite
early on.
Speaker 1 (33:45):
He's talking about Hex, the kind of steampunk computer they
make at Unseen University, and it just has this off
hand line: a computer program is exactly like a philosophy professor.
Unless you ask it the question in exactly the right way,
it will delight in giving you a perfectly accurate, completely
unhelpful answer.
Speaker 6 (34:00):
Right. So if you abstract that just slightly, you've got a
philosophy lecturer.
Speaker 4 (34:03):
Is there?
Speaker 1 (34:04):
Basically, what intelligent things do is go, you can't have
meant that, you must have meant this, right? ChatGPT goes,
I will take the question as read. I mean, of
course it doesn't have an I, there is nothing in
there. Like, I've just done it again.
But it's the same thing. And it's trained to do
attractively specific things, and you get the same problem as
(34:24):
any program, like garbage in, garbage out.
Speaker 2 (34:34):
So have you found a lot of students using ChatGPT?
Because this is a hard problem to quantify. Is it
all the time?
Speaker 6 (34:41):
I mean it's it's a lot.
Speaker 4 (34:42):
I wouldn't say all the time.
Speaker 6 (34:43):
I'd have guessed that you.
Speaker 3 (34:45):
Thought that there were more cases. Indeed, yeah, so I
think there are quite a few. And sometimes it is
difficult for us to prove and if we can't prove
it then at our university, then.
Speaker 4 (34:58):
Well shucks, they kind of get away with it.
Speaker 3 (35:00):
We can interview them, but unless we're, like, unless we
have the proof that you could take before, like, a court,
then we are, we're not able to really nail them
for it, which is a bit of a shame. So suspicion, yeah,
we suspect that a lot of them are, and I'm
one hundred percent on some of them.
Speaker 2 (35:18):
I just know. What are the clues? I want to
hear from all of you on this one, like, what
are the signs?
Speaker 6 (35:22):
It's the uncanny valley.
Speaker 1 (35:23):
I will not shut up about this, right, you know,
that, like, you of course will be, I imagine a lot
of your kind of listeners will be as well.
Speaker 4 (35:29):
But the uncanny Valley.
Speaker 1 (35:30):
Thing in respect of humans is that there's a kind of
a point up to which humans seem to trust artificial
things more the more they look like a human. So, like,
we'll trust a robot if it's kind of bipedal or
if it looks like a dog. But after a point,
we really rapidly start to distrust it. So, like, if
it, basically when it starts looking like Data, some deep
(35:51):
buried lizard brain goes danger.
Speaker 6 (35:53):
Danger. This thing's trying to fuck with you somehow.
Speaker 4 (35:56):
Yeah, right, and.
Speaker 6 (35:57):
ChatGPT writes exactly like that.
Speaker 1 (36:00):
It writes almost, but not entirely, like a thing that
is trying to convince you that it's human.
Speaker 4 (36:04):
I mean, the dead giveaway for me is that it never.
Speaker 1 (36:06):
Makes any spelling mistakes, but it can't form out a
paragraph to save its life. Normally, you would expect someone
who didn't know how to kind of order their sentences
to misspell the occasional word chat GPT spells everything correctly
and doesn't know what like subject object agreement is.
Speaker 6 (36:23):
It's like, it's.
Speaker 2 (36:24):
So, can you just define that, please? Subject object the.
Speaker 6 (36:29):
Beer that I drink, right, not the drink that I bear.
Speaker 2 (36:32):
Right, Because it's interesting. It almost feels like there is
just an entire way of processing and delivering information as
a human being that we do not understand. There is
something missing.
Speaker 5 (36:43):
I mean, for me, like it's very good at summarizing,
but when it responds, it responds in really
trite and repetitive ways, or like you'll get a paper
that summarizes a bunch of literature at length very effectively,
and then responds by saying, well, you know, this person
said this, that is doubtful. They should say more to
(37:06):
support that, you know, which is basically saying nothing right,
And that's pretty common.
Speaker 4 (37:11):
It also does lists and formats things kind of
like a list.
Speaker 5 (37:15):
And even if they, like, make it look less like
a list, the paper still reads like a list.
Speaker 2 (37:22):
Oh, because someone has asked give me a few thoughts
on X. Yeah, yeah, that's very depressing.
Speaker 3 (37:30):
Things like the introductions you get, you'll get, this is
an interesting and complex question that philosophers have
asked through the ages, and this is one of the
things we've shouted at students not to write from first year.
Speaker 4 (37:45):
And then you'll get this garbage right back at.
Speaker 3 (37:47):
You, and then at the end it will be, oh, overall,
this is a complicated and difficult question with many nuanced answers.
Speaker 2 (37:54):
It's a world of contrasts.
Speaker 3 (37:56):
One of the things we tell them, don't ever fucking
do this, this is terrible, and it comes right back at you
in perfect English. That's the sort of English you'd expect a
really good student might have written. But clearly they can't
be a good student, because otherwise they'd have listened to a
fucking instruction.
Speaker 4 (38:13):
I also, I had some papers that I suspected were
ChatGPT this year, but they were already failing, so I
didn't, okay, yeah, I didn't think it was worth it
to pursue them, you know, as a plagiarism or ChatGPT case.
Speaker 2 (38:29):
So it's never good papers then? It's never like an A,
like a first.
Speaker 6 (38:32):
No, absolutely not.
Speaker 4 (38:34):
It's, so I think that part of what goes on
is you can get a passing grade with a ChatGPT paper
sometimes in the first couple of years, when the papers
are shorter and we're not expecting as much. But then
when you move into what we call honors level here,
which is like upper level classes in the US, like
third and fourth.
Speaker 2 (38:54):
Year, I've got first ever wrist with I know.
Speaker 4 (38:57):
Yeah, yeah, exactly, great, well done.
Speaker 7 (38:59):
Well, you would not have gotten it with ChatGPT,
because you get dropped in these classes where we expect
you to have gained writing skills, minimal ones, in your
first two years, and then we're going to build on
that and have you do more complicated stuff, and Chat
GPT doesn't build on that, right, it just stays where
it was. So you go from writing kind of C
(39:23):
grade passing papers, yeah, to F grade papers.
Speaker 5 (39:27):
And it's also more obvious because the papers are longer,
and ChatGPT can write long text. But it gets
very repetitive and noticeably repetitive, right, and so you're kind
of lost, like you haven't done the work of figuring
out how to write on your own and the tool
that you've been using is not up to the task
(39:47):
that you're now presented with. And so I think I
have seen a few papers that I was suspicious of,
but the papers that I was certain of were ones
that were like senior theses, where it was very clear the person just
had no way of writing it.
Speaker 2 (40:02):
That's insane at that stage.
Speaker 1 (40:04):
Yeah, I mean it's fun because I mean one of
the things that we get told about is like, oh,
students have got to learn how to use you know,
AI and large language model plagiarism machines responsibly and in
a kind of positive way. Well, if they're using them
in a way that means they don't learn how to write,
then it's not positive. It's like, yeah, it's fucking hard
to write a good essay. Yes, it is hard to write.
That's why we practice, That's why we have editors, that's
(40:26):
why we do this collaboratively. And if you're using this
as a sort of oh I don't know how to write, well,
tough shit, you're never going to know how to write that.
That doesn't seem a positive use of any of this.
Speaker 2 (40:38):
Well that's the thing. Writing is also reading the consumption
of information and then bouncing the ideas off of your
brain allegedly.
Speaker 5 (40:45):
Yeah, I worry about, like, because ChatGPT is good at summarizing,
so I worry that one of the uses people will think, ah,
this is a pretty good use for it is summarizing
the paper that they're supposed to read, and it will
do that effectively enough for them to discuss it in
class if they're.
Speaker 6 (41:03):
Willing to be here.
Speaker 4 (41:04):
Yeah, right, but they're.
Speaker 5 (41:07):
Not going to pick up a lot of the nuances
and a lot of the kind of, like, stylistic ways
of presenting ideas that you get when you actually do
the reading.
Speaker 2 (41:15):
And it's, it's so frustrating as well, because, like, for this,
for example, I've just got to printing things off, I
don't read PDFs anymore, because I feel like you do
need to focus.
Speaker 1 (41:26):
Yeah, there's some evidence that reading physical copies makes you
engage more.
Speaker 2 (41:29):
Sorry I'm very old fashioned, I guess, but no, it's
true though. But also, reading that, I wouldn't have really,
on a PDF I wouldn't have given it as much attention.
But also, going through this paper, you could see what
you were doing, like you could see that you were
lining up. Here are the qualities that we use to
judge bullshit. But also summarizing a paper does not actually
give you the argument, it gives you an answer. So
(41:53):
what do you actually want students to do instead? Because
I don't think there's any reason to use ChatGPT
for these reasons; it doesn't seem to do anything
that's useful for them.
Speaker 5 (42:02):
I don't have, no, I don't actually have any use
for ChatGPT that I can put to my students
and say, here's.
Speaker 4 (42:09):
What I think you should do with it.
Speaker 5 (42:11):
We are like kind of developing strategies for keeping them
from using it, So like building directly on what you're saying.
Speaker 4 (42:19):
Like, in my class.
Speaker 5 (42:20):
Next year, I'm going to have the students do regular assignments,
which are argument summaries and not paper summaries. So the
idea is they have to read the paper, find an argument,
and tell me what are the premises, what's the conclusion?
And that's something that ChatGPT is not good at, right,
But it's also something that will give them critical reading skills,
which is what I want to do.
Speaker 4 (42:41):
Right. So Yeah, I think that I've.
Speaker 5 (42:43):
Mostly been thinking about ways to keep them from relying
on it because I think that often if they rely
on it, they'll they'll they'll put.
Speaker 4 (42:54):
Themselves in the worst position.
Speaker 5 (42:56):
Yeah, when it comes to future work, they won't develop
the skills that they're going to need, and the skills
that we tell them and their parents they're gonna get
with their college degree.
Speaker 2 (43:05):
Right, it almost feels like we need more deliberateness in
university education, because I was not taught to write. I
just did a lot of it until I got good
enough grades, and Daniel Chandler, look, great mentor, but I've
had tons of them, and it almost feels like we
need classes where it's like, okay, no computer for this one,
I'm gonna print this paper out and you're going to
(43:26):
underline the things that are important and talk to me
about it. Almost feels like we need to rebuild this, because yes,
we shouldn't be using ChatGPT to half-arse our essays.
But at the same time, human beings are lazy.
Speaker 5 (43:38):
Yeah, I mean for me, I also prefer to read
off the computer, but I often read PDFs because I'm
terrible at keeping files, right, physical ones. Like, you know, I'm
not going to keep a giant folder with all
the papers that I've read and then written my liner
notes in.
Speaker 4 (43:57):
You guys can see in my office they're just piled around,
like you can't see this end.
Speaker 2 (44:01):
That's academia.
Speaker 5 (44:02):
Yeah, but I just have piles of paper with empty
coffee mugs everywhere that the camera is not facing.
Speaker 4 (44:09):
But it's a terrible system, so at least.
Speaker 5 (44:10):
On my computer, if I'm like, oh, I read that
paper like a year ago, what did I think?
Speaker 4 (44:14):
I can click on it and see my own notes, and.
Speaker 5 (44:17):
I do think that there's something to keeping those records
and kind of actively.
Speaker 4 (44:21):
Reading in that way. I don't know how I ended up
here without telling you how to make students do that. No,
you started with the correct answer, which is don't use
ChatGPT. Yeah.
Speaker 6 (44:32):
I actually I've got a certain amount of sympathy with like,
just keep writing it till you get good at it.
Speaker 1 (44:38):
But I realize as a lecturer that can't be
my official position, and I certainly think that it's the
case that certainly Glasgow has got better over the last
few years about going, oh, actually you do, like, we
do need to give you some kind of structuring and
some buttressing on, here's how to write academically, here's how.
Speaker 6 (44:53):
To do research.
Speaker 1 (44:54):
And I think that's all to the good. It's worth
saying this started happening well before ChatGPT started pissing
all over our doorsteps, so they don't get to claim
that as being a benefit.
Speaker 2 (45:03):
There was the whole Wikipedia panic when I was in school.
Speaker 6 (45:05):
Yeah, the way they think about Wikipedia, right.
Speaker 4 (45:07):
It's like I should say this to my students.
Speaker 2 (45:09):
Wikipedia, one of the best resources.
Speaker 1 (45:11):
Absolutely fine as a starting point for research, absolutely no problem with
it whatsoever. But if you're turning in an honors level essay,
I want you to go and read the fucking things
it's referencing.
Speaker 4 (45:22):
Yeah, that's right.
Speaker 5 (45:23):
Yeah, I think these things are often great as sources.
My worry about ChatGPT is that it's not great
as a source.
Speaker 2 (45:29):
It's just yeah, we've.
Speaker 5 (45:30):
Been saying it often gets things wrong, and it often
it'll make that sources, whereas Wikipedia will never do that.
Speaker 6 (45:37):
There are some famous hoaxes, but they get caught.
Speaker 1 (45:41):
Yeah, Joe, have you got any positive things to say
about ChatGPT?
Speaker 4 (45:47):
Fan?
Speaker 3 (45:49):
So I know some people who have used these kinds
of things productively, not in ways that our students would,
but I know some mathematicians who have been using it
to do sort of informal proofs and things like that.
And it does still bullshit, and it bullshits very convincingly,
which makes it very difficult to use for this kind
(46:10):
of purpose.
Speaker 4 (46:11):
But it can do some interesting and cool things.
Speaker 3 (46:14):
Yeah, I think some people in that sort of field
have found it useful. And also, we've mentioned this before, like,
if you want ChatGPT to write your
bibliography: you've got a bibliography in one style, tell
it to put something into a different one, then it does that.
And it's good for, like, coding, the process of doing certain things.
Speaker 4 (46:31):
I also think, I don't know, I'm not sure how
I feel about what I'm about.
Speaker 2 (46:35):
To say, but perfect, Yeah.
Speaker 4 (46:38):
I'm going for it.
Speaker 5 (46:39):
It is a somewhat positive thing, maybe, for ChatGPT,
which is that we often have students who have like
really interesting ideas and well thought out arguments, but for
whom English isn't their first language, and the.
Speaker 4 (46:51):
Actual writing is kind of rough and you have.
Speaker 5 (46:53):
To like push through reading it to get the good idea,
which is often really there and quite creative and insightful. Yeah,
and so I do wonder if there's a way to
use it so it just smooths off the edges of
this kind of thing. But I worry that if you
tell students to do that, they'll just do that before they
can develop the language skills.
Speaker 4 (47:14):
They're often really good by the final year. Yeah, what are
you going to say? Do tell.
Speaker 1 (47:17):
So I want to say, yeah, you can see me
getting agitated. Yeah, I think Mike's correct about it, like,
that this is a kind of possible use. But I
think, and this is why I'm getting visibly agitated here,
that students either need to, or feel they need to, use
this speaks to a deeper issue, right, to a social issue,
to a political issue, to an issue about how universities work.
If a student is having problems with English, then there's
(47:39):
a number of like explanations or a number of kind
of responses.
Speaker 3 (47:42):
Right.
Speaker 1 (47:42):
The one response is that, like, Glasgow is an English
teaching university, if someone's English isn't good enough to be
taking a degree, then possibly they shouldn't have been let
in, and why have they been let in? Well.
Speaker 6 (47:51):
Because of money?
Speaker 1 (47:52):
Or alternatively, if someone's having problems with English, for, like,
whatever reason at all, there should be support. There should
be kind of tutors. There should be people who can help
with English. But again, that would cost the university money. So
of course that doesn't happen.
Speaker 6 (48:04):
Right, and it doesn't happen anywhere, not to the extent that.
Speaker 1 (48:09):
It would have to happen in order for this to
be a general policy.
Speaker 5 (48:12):
Right, Yeah, I think it could be better, But I
do think that universities often have like a writing center
or a tutoring center that you can.
Speaker 6 (48:20):
Send, but they don't.
Speaker 1 (48:23):
They don't have the sort of spread that would be
needed or the staff that would be needed for this
to be, instead of using ChatGPT to sand the edges.
Speaker 4 (48:31):
Off, yeah, I think for example twenty three with your supervisor.
Speaker 5 (48:34):
My worry especially would be that this is my first
year here at Glasgow.
Speaker 4 (48:38):
But they probably have a good, yeah.
Speaker 5 (48:42):
I think they probably have a good writing center. Universities
I have been at in the past, I felt very confident sending
students to the writing center when they have these problems.
But I think James is completely right that we don't
want the universities to see this as a way to
get rid of the writing center. And that's one hundred
percent a risk given the financial problems.
Speaker 4 (49:00):
That universities are facing, and maybe they're already not. Yeah,
Like given the quality of papers, they're often good.
Speaker 5 (49:09):
I think another thing, as far as this is like a social problem, is that when grading, I myself try to grade in terms of, like, the ideas and argument, because this is philosophy, and not the quality of the writing. But not everybody does that. So I kind of think that another part of this is figuring out how we want to evaluate the students and what we
(49:30):
want to privilege in that evaluation.
Speaker 6 (49:33):
Yeah.
Speaker 1 (49:33):
Sure. So then again, that becomes a problem about what people are checking for. Let's not take this arse-backwards approach to marking, which is like, how fancy is your English?
Speaker 6 (49:43):
If fancy English is good English, have an A. But rather, we should be kind.
Speaker 4 (49:47):
Of checking for different things.
Speaker 1 (49:47):
Right. So again, the blame falls differently in that case, but it still becomes a question.
Speaker 4 (49:51):
It's not solved technologically.
Speaker 2 (49:53):
Yeah, it almost feels like large language models are taking advantage of a certain kind of organizational failure.
Speaker 4 (50:00):
What an idea?
Speaker 2 (50:03):
Crazy that the tech industry is manipulating parts of society.
Speaker 6 (50:07):
That wait, I have a kind.
Speaker 5 (50:08):
Of related tangent here, which is, like, what are the use cases that OpenAI was expecting but didn't want to emphasize? Because for everybody in universities, as soon as this came out, the first thought was: students are going to use this to cheat. And certainly, like, the people at OpenAI went to college, right, and that's what
(50:28):
I hear about.
Speaker 2 (50:29):
Them, so they must. Well, Sam Altman dropped out. Oh yeah, I'm.
Speaker 5 (50:33):
Sure he really understands the value of a secondary education. He's like, I'm.
Speaker 2 (50:37):
Gonna write at least fucking essays.
Speaker 5 (50:38):
Anyway, maybe he was thinking, I would have loved to have a computer write my essays.
Speaker 4 (50:42):
I'll devote my life.
Speaker 5 (50:43):
But I mean, I'm sure that they, like, recognize these bad use cases, right, but they're doing nothing to mitigate them as far as I can see. And, like, another one that's very related is, you know, I'm sure you've heard of this, Ed: phishing, right? A lot of, you know, corporations get attacked and get hacked not by
(51:03):
someone cleverly figuring out a backdoor to their system, but by somebody sending.
Speaker 2 (51:08):
A social engineering attack.
Speaker 5 (51:10):
Yeah, asking for the password while pretending to be somebody else. And one of the biggest barriers to that is that a lot of the people who are engaging in phishing aren't from the same country as the company they're targeting, right? So they're not able to write a convincing email or make a phone call that sounds like that person's supervisor. But with a tool like this, you could one hundred percent write
(51:33):
that email, right? It's going to make it a lot easier for these kinds of illicit schemes to work.
Speaker 2 (51:38):
There has been a marked increase. CNBC, which I'll just bring up, reported a two hundred and sixty five percent increase in malicious phishing emails since the launch of chat GPT. Great stuff. I mean, if I could have thought of that, imagine what a criminal could do.
Speaker 6 (51:52):
Right.
Speaker 4 (51:52):
But also, why weren't the people at OpenAI thinking about.
Speaker 2 (51:55):
That? Because they don't care.
Speaker 4 (51:58):
Yeah?
Speaker 6 (51:58):
Yeah, we will see in duastic fuck right.
Speaker 3 (52:00):
Yeah.
Speaker 1 (52:00):
Yeah, they were so busy thinking about what they could
do they never thought about whether they should.
Speaker 5 (52:04):
Yeah, this is the kind of problem with the move fast and break things mentality, like, obviously.
Speaker 4 (52:09):
I mean, I think I might be the only person here who was raised in the US.
Speaker 8 (52:12):
But we had Future Problem Solvers, where you, you know, think about a future problem and what bad consequences there could be of some technology and how to solve them, usually through social means. If I could do that in fifth grade.
Speaker 5 (52:28):
I would expect these people to have thought through some
of the bad consequences of the technology they're putting out.
And you know, some of those are cheating on tests
and they don't seem to have worried about that.
Speaker 4 (52:39):
Yeah. And another one is phishing. They don't seem to have worried about that.
Speaker 1 (52:42):
You know, biases in algorithms, right? So, yeah, yeah, it comes as no surprise to you. It turns out that a lot of the facial recognition systems were incredibly racist.
Speaker 2 (52:52):
They were. Going back to Microsoft's Kinect, it could not see Black people.
Speaker 1 (52:56):
Yeah, yeah, and CCTV stuff that basically just sort of didn't work unless it was presented with a blindingly white Caucasian.
Speaker 6 (53:02):
Where I don't know, right, but like.
Speaker 1 (53:08):
The sort of stuff where these, like, language models are trained on certain sets of data, and they're trained on certain assumptions and, like, shitting shit out, right? And particularly if people think that it's actually doing any kind of thinking, and if they kind of cargo-cult it, we again get a kind of social problem multiplied by technology feeding back into a social problem, right. And it's, so
(53:30):
these guys have heard me whinge about it so much, sorry, but I'm profoundly skeptical of technology's ability to solve anything unless we know exactly the respect in which we want to solve it and how that technology is going to be applied. You know, like, sure, experiment with bringing back dinosaurs, but, like, don't tell
(53:51):
me that it's going to save the healthcare system unless you can demonstrate to me, step by step, how that big old T. rex running around on Isla Nublar is going to save anything.
Speaker 6 (54:00):
They just tried to blind.
Speaker 3 (54:00):
People are actually bringing back dinosaurs.
Speaker 4 (54:04):
That would be great.
Speaker 1 (54:05):
All right, this isn't the best example, but I already had, I had Jeff Goldblum in my head.
Speaker 2 (54:11):
I hate to go, fellas. This has been such a pleasure. Don't put that in the recording.
Speaker 4 (54:20):
That's right.
Speaker 2 (54:21):
We won't edit it in post. Now, will you give us your names?
Speaker 5 (54:25):
I'm Mike Hicks, but my papers are written by Michael
Townsend Hicks and I'm a lecturer at the University of Glasgow.
Speaker 4 (54:31):
My website is Hicks.
Speaker 2 (54:34):
It will be in the podcast profile.
Speaker 6 (54:36):
Don't you worry. We'll get that just plugging my luggy stuff.
Speaker 4 (54:41):
My name is Joe Slater.
Speaker 3 (54:43):
I'm a university lecturer in moral and political philosophy at Glasgow.
Speaker 6 (54:47):
I'm James Humphries.
Speaker 1 (54:48):
I'm a lecturer in political theory at the University of Glasgow.
Speaker 4 (54:51):
And even if I.
Speaker 6 (54:51):
Wanted to, I couldn't give you a website. I don't have one.
Speaker 2 (54:56):
Everyone, you've been listening to Better Offline. Thank you so much for listening. Guys, thank you for joining me.
Speaker 4 (55:01):
Thanks for having me here.
Speaker 6 (55:02):
This is.
Speaker 2 (55:12):
Thank you for listening to Better Offline. The editor and composer of the Better Offline theme song is Matt Osowski. You can check out more of his music and audio projects at mattosowski dot com, M A T T O S O W S K I dot com. You can email me at ez at betteroffline dot com or visit betteroffline dot com to find more podcast links and, of course,
(55:33):
my newsletter. I also really recommend you go to chat dot wheresyoured dot at to visit the Discord, and go to r slash betteroffline to check out our Reddit. Thank you so much for listening. Better Offline is a production of Cool Zone Media. For more from Cool Zone Media, visit our website cool
Speaker 7 (55:50):
Zonemedia dot com, or check us out on the iHeartRadio app,
Apple Podcasts, or wherever you get your podcasts.