Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Time.
Speaker 2 (00:00):
It's like a clown.
Speaker 3 (00:01):
No, don't this little page he's bagging boarding Batman and
the gut are like a maze story tellers me some fellas,
we some felons.
Speaker 1 (00:06):
Isn't amazing.
Speaker 2 (00:06):
It's like Appella bearver sellad because this shit is so contagious.
Speaker 3 (00:09):
Mouths on the Summer Reason Pilot got the show while
the cycle spinning knowledge on the getty like a pro
beat the Babo be the rabbit. Don't step to the squad,
we get activic and hate. It's like a stepla parts
you don't like fish talk, do you hate? It's a
batl with the cuttle fish killers tender pools on the
taping The Greatest Five Stars, If you Cherish your life,
Bucky barn Hit Squad, spraying leg and your pipe.
Speaker 2 (00:35):
Hey, everybody, welcome to another edition of Is It Just Bad? Is It Just Bad?, the best podcast you never heard of. I'm your host, Professor Mouse, joined as always by the CB Cosmologist and Teddy Woo. Hello folks, what's going on?
What's up? Uh?
Speaker 4 (00:54):
Every year there's a song I listened to in high school, "Black Friday" by Cool Calm Pete and somebody else. I don't know if this is a rap song that anyone's even aware of anymore, because I got it on a mixtape from a friend in high school and I can never find this guy again. But anyway: Black Friday, get it while it's hot, from
(01:16):
the liquor store to the corner of your block, and it goes on, so on and so forth. It's a good song. I hope you all are having a nice time and staying inside.
Speaker 1 (01:25):
Probably.
Speaker 2 (01:27):
Yeah, the AI's out of control, folks, and I'm gonna
be the first one.
Speaker 1 (01:35):
To say it.
Speaker 5 (01:36):
Oh wow, okay, hot take.
Speaker 1 (01:37):
I love it.
Speaker 2 (01:40):
It's grading season, and this is where the students fucked up. And this is how you can combat AI: you do nothing. Because the way that the students fucked up is that they all submit the
(02:02):
same essay, and so I read the same essay twelve times and then go, well, these are all zeros, and then everybody else gets an A. It has gotten to that level now.
Speaker 1 (02:20):
There.
Speaker 2 (02:21):
It's almost kind of like the free market logic of having to differentiate kicking in, because now it's a thing. Now it's a thing:
(02:42):
my university's subreddit, various student subreddits, are like, oh, my professor caught my paper because I used ChatGPT, and they told me it was because, you know, fifteen other people did it. So how do you then make it different enough that they can't catch you? Because the answer is never to, like,
(03:04):
do the assignment; it's how do I make it different enough so that they can't catch me.
And the other thing that I in particular fell into by accident was I assigned a bunch of new books, like, super new books,
(03:25):
new books that came out during the semester. And that's when I caught the students. Because ChatGPT and some of these other LLMs don't have access to the text of the actual book. And so when you construct the prompt and you ask for, like, specific engagement with
(03:49):
several examples from the text, if it doesn't have the text, what it does is try to bullshit. And this is also what students sometimes do. It tries to bullshit from the back matter of the book, which is only two hundred words, okay, and so it'll be like, and
(04:10):
then the author argues, and then it'll be a quotation from the blurb on the back of the book, and you go, well, that's not analysis, that's them selling the book. And so it's a very strange thing right now, where I'm not coming down with an iron fist, but I am being like, listen, this is not something that I can ignore. There are several, uh,
(04:36):
you know, ways in which I am now trying to navigate: I know, or I have a suspicion, that this is an AI-generated paper, but I can't accuse students of that. I don't know that for sure.
(04:57):
Right? All I have is: I put my prompt into every LLM that I have access to, it spits something out, and then I see identical versions of that submitted. But it's not even part of our academic dishonesty policy, or at a lot of universities, because people are still grappling with this.
(05:20):
There are lawsuits being filed against teachers who use it to provide feedback on papers. Like, a bunch of this is new territory, and so at all universities it's a conversation, and people are altering their student dishonesty policies. But when you're at a university that has, you know,
(05:43):
STEM, where AI and those generative tools can be useful and are integrated into the curriculum, then you can't double back and say you can't use AI because it's dishonest. But what is absolutely part of the student misconduct policy is fabrication, which AI models
(06:04):
often engage in when they don't have access to the source text. When they don't have access to the actual text, what AI does is make shit up. And I have been pumping my prompts through a bunch of chatbots, and the chatbots are now warning me, and they will say it the way a person would say it.
(06:25):
It says: I don't have the text in front of me, so you're gonna have to either include quotations from the text yourself, or I can do it for you and then you should double-check them. Like, it's warning me that it doesn't have access to the thing and that it's only going to be able to do the
(06:46):
most general type of analysis possible. And I go, well, I appreciate it.
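A minimal sketch, in Python, of the duplicate-spotting workflow described above, where the professor compares what the prompt produces against what got submitted; the file names and the 0.8 threshold are illustrative assumptions, not anything the host specifies:

import difflib
from pathlib import Path

def similarity(a: str, b: str) -> float:
    # Ratio in [0, 1]; 1.0 means the two texts are character-for-character identical.
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Hypothetical files: one chatbot answer to the assignment prompt,
# and a folder of student submissions saved as plain text.
reference = Path("llm_output_for_prompt.txt").read_text()
submissions = {p.name: p.read_text() for p in Path("submissions").glob("*.txt")}

for name, text in sorted(submissions.items()):
    score = similarity(text, reference)
    if score > 0.8:  # arbitrary "suspiciously similar" cutoff
        print(f"{name}: {score:.2f} similarity to the chatbot's answer")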
Speaker 1 (06:55):
So, thinking about it from my end: I've been forced into having to create a couple of AI agents, and into working with some gen AI and some analysis modeling like XGBoost, and using some other models.
(07:18):
It almost sounds like that's the direction. I mean, honestly, going through and being like, hey, this is actually how you're gonna have to use AI, like, what people are asking for when it comes to AI tools. So it's not a matter of write one paper. It's: all right, you have these ChatGPT tools; what you're
(07:41):
actually gonna need to do is produce four white papers of these various lengths for these particular audiences. And also, if you mess it up, it's a matter of not only your job, but you might lose money on something. So it's one of those areas
(08:03):
where, I mean, and this is, you know, pulling from a very specific set of things that I've had to observe. I mean, the agents that I've had to develop are all, like: create these meeting recaps now; create a thing for an executive level, a developer level; and then I have to go in and edit them. But
(08:25):
it's one of those: all right, do I, A, have to write this by hand, and it'll take me hours? And they've cajoled me into, no, no, you need to do this. And also we're applying it; some of the agents have to be applied in multiple teams. So it's not only are you developing a thing for your own use, this is going to be looked
(08:46):
at, realistically, by somewhere between ten and fifteen people, but anywhere up to five hundred people within the organization. So I'm bringing all of that to say: all right, if it's almost a foregone conclusion that the students are going to use the thing, I mean, it almost sounds like the university should be like, now
(09:08):
start using it in the way that you're going to have to in the workforce.
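A minimal sketch of the one-transcript, several-audiences recap agents Teddy describes, assuming an OpenAI-compatible chat API; the model name, audience framings, and prompt wording are placeholders, not details from the episode:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical audience framings; real ones would come from the teams involved.
AUDIENCES = {
    "executive": "three bullets covering decisions, risks, and asks",
    "developer": "action items with owners plus any technical blockers",
}

def recap(transcript: str, audience: str) -> str:
    # One shared meeting transcript in, one audience-specific summary out.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": f"Summarize this meeting for a {audience} audience as "
                        f"{AUDIENCES[audience]}. Do not invent details."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

# As described above, a human still edits the output before it circulates.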
Speaker 4 (09:12):
This is an interesting point, and I think, you know, Mouse is talking about, like, in STEM, where they've got some, like, tools that you know you'd be using in your job. I've seen this. Like, to your point, you don't have time anymore. Like, here's the tool, we need this many pages written, and we are expecting
(09:35):
that you're doing it faster because you're pumping it through the ChatGPT tool. You don't have time to do it manually anymore. But also, it's super high stakes, because you can pump it through as much as you want, but it's still on you if it's wrong, which is an interesting double-edged sword there.
Speaker 1 (09:54):
Yeah. And it's one of those things where, especially, you have to understand the text of what it's saying. I mean, again, it almost feels like one of the prompts, as an instructor, that you have to look at is, like: hey, tell them straight up, these are the things I'm looking for if you are
(10:15):
using AI tools, and you will fail if I catch them. And literally challenge them to be like, make it so I don't know, right? Either write it, or just be like, your prompt better be real good. Because, like, a friend of mine who works at an unnamed university, at this moment, not to dox them, but
(10:35):
one of the things that she was telling me was, like, she failed a couple of students because they didn't bother to take out the prompt, or, like, the section at the bottom that says, do you want these follow-up questions? Oh no. They just, like, did a Control-A, copied the entire thing, and submitted it. And she was like, dude,
(10:58):
you can't. You didn't even look at the prompt; you didn't even look at the thing it spit out. So, it's one of those.
Speaker 4 (11:03):
Have we lost... is that Mouse? Oh no, he's either making a very dramatic pause...
Speaker 5 (11:10):
Or I can't hear him.
Speaker 1 (11:15):
It's still... no, I can't hear him, but we'll pretend it's a dramatic pause.
Speaker 4 (11:19):
Yeah, this is very suspenseful. I'm sorry, Teddy, so continue.
Speaker 1 (11:24):
Oh yeah, she was just saying that it's happened four or five times in the last year across different classes. But, like, it's one of those: if we're looking at it algorithmically, you're gonna have a certain number of students who aren't mindful, who you're just gonna catch doing that. So you can put the, I mean, she
(11:45):
didn't actually propose this, but, like, if you put the floor at: look, we know that this is a possibility; if you're gonna use this route, here are the things that you need to do, like, at the lowest common denominator. I remember when I was in school, when it came to SparkNotes, specifically in middle school,
(12:07):
where they were like: if you're gonna copy SparkNotes, first of all, you're, at the time, in, like, sixth and seventh grade, so if you use certain words, you really will have to prove to me that you know what this word means. But on the other hand, they were also like: also, dude,
(12:28):
don't copy the advertisements. Like, what did you do? You just copied the entire page, and it included an ad at the bottom. Don't do that.
Speaker 2 (12:39):
Am I back yet?
Speaker 1 (12:40):
Yeah, you're back.
Speaker 2 (12:42):
Uh, no, I was just pausing for dramatic effect. And, uh, so I had this situation with a student where I opened the submission and the first line was: and here is your essay, written in a more personal tone. And I was like, no.
(13:10):
Because the other thing is, I get advice from, like, senior faculty, and, you know, it's tricky, because I'm not looking for the same things from students as they are. Like, I understand that if somebody is doing biochemistry and they're
(13:34):
in a humanities class, really what they're doing is fulfilling a university requirement. I get it, I get it. But you can't just cheat. You can't just cheat. And so when I go to senior faculty for advice,
(13:54):
they're like, you know, you've got to come down hard on students, or otherwise they're going to keep doing it. And, you know, I was talking to one, and she was like, you know, I sat down with the student, called them into my office, which is, like, so much time already that you're spending on this. And I told them, like, this is an opportunity for you
(14:15):
to really grow and to really embrace the humanities as part of your broader education, and all this shit. And I was nodding along, like, yeah, yeah, super important. And then I went into my office and I was like, fuck that. Like, I don't give a fuck, who cares. That's not where I'm coming from. Where I'm
coming from is, like, the tacit sort of understanding that
(14:39):
you all are going to do it, and if I catch wind of it, or if it's rampant or impossible to ignore, then you're gonna fail. Like, you're just gonna fail. And if your goal is to get this general education requirement out of the way, and then you fail because you used AI on a paper that is
(15:02):
half of your grade in, like, a writing-intensive class... because that's the other thing, too: that's what the university is requiring us to do, test your ability to communicate through written and oral means. Like, that's a functional competency that runs across universities in the United States, right? And AI is increasingly fulfilling that thing.
(15:24):
But the courses are designed such that the only thing that matters is how you produce written material. So if you get caught, it's an F. It's just... and so then you have to do this shit again.
Speaker 5 (15:41):
This doesn't feel like it's worth it. Super high risk for what?
Speaker 2 (15:47):
So that's my speech, too. And then, yeah, there are a lot of students who go, I'm not gonna get caught. Like, you know, whatever, that's your decision, and that's the risk that you take. But most students are like: it would be horrible to get an F and then have to retake this class, or, for them,
(16:08):
like, they only offer this class in the fall, it's not gonna work out with my schedule, so I just have an F on my transcript, it brings down my GPA, that's not worth it. Also, we-know-this-dude's-not-going-to-grade-as-harshly type shit. But, like, I have received... man, I got an essay on... and this is the other thing, too: never, never,
(16:30):
never ChatGPT some shit that your instructor is familiar with, because I got an essay about the fucking Sopranos, and, oh wow, this is all fucking wrong. Like, none of this is correct. Like, this shit didn't even happen in this episode. It was like...
(16:52):
So that also is another element of it, too: there's an existential type of... it's not a crisis, but it's this kind of new experience where now, as an instructor, you are oftentimes navigating and scrolling
(17:12):
between papers written by people and papers written by computers. And it's strange, because we grade hundreds of papers, and now our eyes and our brains are getting trained to it, where you're reading and you go: oh no,
(17:35):
this is just generalities, this is just vague, cursory generality, all of the citations are incorrect... but it all looks very good.
Speaker 1 (17:51):
And so you.
Speaker 2 (17:54):
And then you go to a student who you're, like, fairly certain isn't using AI, or is using AI in the best way possible, and you read it, and there's personality in the writing, there are mistakes in the writing. There are those things that students sometimes do, where it's like, and this made me
(18:16):
think of... or they go, and then pretty much what that means is... where, like, they have colloquialisms in the writing. And you can introduce all this shit, too; you can prompt all of the LLMs to integrate this shit. But when it's a student who is just doing it as a requirement, like, they
(18:37):
don't want to spend any more time than they have to, like, doing anything. And at that point, you're inputting so much shit and making, like, revisions and stuff, and then it's like, are you writing? Kind of, yeah.
Speaker 4 (18:51):
The amount of time that you are spending building and revising and refining a prompt, like, you could have just written the paper. At that point, or in the process of that, you've now had to reread the draft so many times, and, like, rework it and rewrite it and rebuild it, that you're basically back to working SparkNotes
(19:13):
quotes into your essay that you wrote yourself.
Speaker 1 (19:15):
I remember... and this is thinking back to when I was in undergrad. Two of the papers I remember the most, outside of some other reasons... two of the ones where I specifically remember the feedback. One of them I got a zero. And the reason I remember this
(19:37):
is it wasn't one of the I-wrote-this-at-three-o'clock-in-the-morning, I-had-exactly-six-hours-to-get-this-thing-in papers. This was a paper that I actually spent a little bit of time writing, and I took the prompt and went: I'm writing a parody paper. Like, I just went,
(19:57):
you know what would be funny? And I don't know if she's actually gonna fully read this. So I wrote it. It was taking a line... I think it was for a queer studies class, looking at either Sylvia Plath or
(20:18):
Emily Dickinson. It was Sylvia Plath. And I started talking about hysteria, and the whole thing was, like, she was describing somebody's skin, like milk, and something like that. And I went, well, clearly... and I went through this whole thing, and my conclusion was: well, she's clearly saying that, with the rise of milk dairy
(20:38):
products at this time, the solution to America's social issues is to have a milk orgy, and everybody needs to just have a collective milk orgy and orgasm, and everything's fixed. And I quoted passages, and, I mean, I really put a whole day of writing and editing into this, because I thought it was funny.
(21:00):
The thing I remember the most was I could see she was actually grading it. Like, I could see where she was like: oh, here's an interesting area... but actually, hysteria was disproven at this point; here are additional references if you need them. And
(21:24):
then I saw where she stopped grading it. At the end, it just said: zero, come see me. So I was like, uh oh. And the conversation was, again, one of those I-will-never-forget-this conversations. She was like: all semester, I've been trying to figure you out. You're actually a good writer. You didn't do the assignment. It's funny, don't get me wrong, your thing was funny. You have
(21:45):
two days, or this grade goes in.
Speaker 4 (21:49):
And I was like, oh. Congrats on a fun little creative writing piece that was not the assignment.
Speaker 1 (21:56):
And so the point I'm making there is, like: when it comes to these types of assignments, it's very easy for students to royally fuck up, even before AI, in the age of, like, oh yeah, I'm
(22:17):
desperately trying to get this in, and I have a tool that could enable me to do this. I can one hundred percent understand why students would be doing this. And, like, I remember when I first learned
what Zotero was, and I went, oh... and this, again, this is in grad school... I was like, oh,
(22:38):
that felt like cheating, because essentially I was like: oh, this is one step up from an outline, because not only am I quoting, but I'm also, like, here are all the sources, here's the annotated bibliography of why I'm using them. Suddenly, a prospectus of having to write ten pages in a week was more like: oh, I just find all the sources, put them in order,
(23:00):
put my thoughts in order, put, like, the context of why I'm using these quotes in order, and then suddenly I have a paper. And I was like, oh, this can't be legit. Like, I'm gonna get dinged for plagiarism. Oh wait, I've cited everything, and I'm good.
And it was one of those things where, like, once you get the tool, you can be like: oh, ten pages, it's gonna suck, but I can get it done in
(23:22):
the next, like, two days if I really need to. Or: I have exactly twelve hours, I'm going to get the seven pages out, you know. But it's a matter of the tools and the techniques. And it just sounds like, and Mouse, you can confirm this, but it just sounds like the university systems
(23:43):
in the last, well, we're in twenty twenty-five now, in the last three years, were just not prepared for how to present these tools. Because, I mean, the tools haven't been around long enough for, like, corporate worlds to understand how to fully use them. The people who have been marketing the AI tools don't fully understand
(24:08):
how these tools can be applied.
Speaker 2 (24:11):
Yeah, yeah. And it's like... I also try to tell students: in your other classes, you're using AI as, like, a supplement, a resource, and I don't really mind if that's how you use it for writing and stuff like that. But
(24:36):
what you absolutely should do is use it as a supplementary tool. You should not use it to do everything for you, because there's a high probability that it's going to make a lot of mistakes that then will get you caught out. And it seems
(24:57):
like, over the last couple of semesters, I've had to have this conversation where, you know, it will come to a head. It might be the third assignment, it might be the first assignment, but at some point in the semester there'll be a deluge of this shit, and then I'll go in and I'll go: okay, so here's the deal. Half the class failed this shit,
(25:19):
and here's why. One of the most illuminating ways that you can sort of understand how AI engages in fabrication is
(25:42):
if you assign something that you yourself wrote, which is what I did for extra credit.
Speaker 1 (25:52):
I did not.
Speaker 2 (25:54):
I would never assign my own shit and make students purchase it; that is fucking weird to me. But I did provide an option of, like: if you want extra credit, do this report. Here's the prompt, here's the thing. And then I'm reading, like, twelve papers.
(26:19):
Half of them, I'm pretty sure, are not artificial intelligence; the other half are. And I'm reading those papers and it's like: and X, Y, Z argues this, this, and this. And then you're reading it and you're like: I never said that. I never wrote those words. It's
(26:40):
the easiest thing to sniff out when it's your own thing. And then I got curious, and I put the prompt that I had given into, again, several of these chatbots.
And one of the things that you can do also, and this is me, like, in class, telling
(27:01):
students: make sure you look at the sources that ChatGPT is using when it's producing your essay. And I looked at it, and it was just all, like, Barnes and Noble, like, Amazon.com, like the only shit that exists out there, and it's
(27:22):
not actually anything that links to the text of the book.
And that's when I realized that new shit is very hard to artificially create papers around, because they just don't have access to sources and stuff
(27:42):
like that. I also assigned one of my friends' books, and what the chatbot did was use their dissertation, which was readily available in a public repository. But the dissertation is about seventy percent different from the book. And
(28:04):
so I was reading some of these papers and going... because I've read both of those documents, I was like: this is how I worded it. I'm like, this was in the dissertation version of this project; this wasn't in the book. Did you, by mistake, read the dissertation instead of the book?
(28:26):
If so, that's not what I assigned. And to Teddy's point, the prompt is: right, you respond to this book, published in this year, by this publisher, with this title, not any other material. And so then, down the rubric, you just go straight down the line: zero, zero, zero, zero, zero. Like, you're not even responding to the
(28:48):
right text; you're not even engaging with the right material. So from jump you failed, unfortunately. So it's weird.
And there's all this to-do also about, like, Harvard is in the news, I think it was the New York Times or something, where it was like:
(29:08):
students are fully getting degrees and they're not writing a word, and they're not learning anything, blah blah blah. And, you know, the whataboutism of it all isn't necessarily, like, super productive. But also, to kind of what Teddy was gesturing towards, there's a degree to which we know a lot of people who
(29:30):
didn't do anything in college and got the full-ass degree, just, like, straight up the same exact degree that I have. And, like, I was in all these classes with you; by that I mean I was in none of these classes with you, because you never showed up. Like, we got the same-ass degree. So yeah, it is,
(29:51):
it's strange. It's strange. And at the same time, too, you can't lecture students when you're actively pursuing the creation of, like, departments that are built around developing AI.
Speaker 4 (30:10):
Yeah, that's tough. And, like: this is the critical reasoning section, we're trying to develop your brain muscle; uh, go over to this other class where we are developing machine, quote unquote, learning. Uh, you've got to compartmentalize; they're different skills, which they are. So, I just went
(30:31):
to a cosplay photo shoot, and this is bleeding into this as well. And this was interesting to me, because, you know, these AI tools are showing up in Adobe products and in post, you know, Lightroom and things of that nature. And so there's a debate in the photographer community about, like, how much of that to use, how much of it not to use, when to use it.
Speaker 1 (30:54):
You know.
Speaker 4 (30:54):
So, you know, we go out in person, we're in costume, there are real humans taking real pictures, or digital photography, uh, and we have a lovely time. And then you start to see some of the photos come back, and there's back and forth about, you know, how much post-processing is too much, how much editing is too much.
Speaker 5 (31:13):
How much, like... there's a car in the background; you want to remove it? Okay, cool.
Speaker 4 (31:18):
Are all of my fingers still intact in that process?
Speaker 5 (31:23):
Like, and that's the kind of stuff.
Speaker 4 (31:25):
You start to see drafts come back, and, like: this is a beautifully framed picture; what was going on in that background that you tried to remove? Or: you asked Adobe or whatever to try to clean that up, and it turned me into a Picasso in the process.
Speaker 2 (31:42):
This kind of reminds me of the Disney AI controversy that's currently happening. Go for it. So there's... and I think this is what they're doing: they are placing the onus
(32:03):
of AI-generated content on the consumer by allowing them to use the Disney Plus platform to create, like, uh, like, short-form AI-generated stories, written by the subscribers,
(32:28):
that use Disney characters. So you can have, like, you know, whatever, Elsa meets the Mandalorian, and, like, there's a whole story around it. And it's this way of being like: it's so fun! You can play in our sandbox, you can use
(32:48):
the action figures and, yeah, do your role play. And a lot of creators who, you know, work with Disney are upset, and they see it as this sort of... it's not just that Disney is embracing AI; it's this move of
(33:10):
it being like, in your case, like: you get to control, or you get to decide, because you're the subject of the photograph, how much AI is allowable. And so it's putting the responsibility on the consumer, and then they can turn around and be like, well, that's what they want.
Speaker 4 (33:29):
Yeah. And also, they don't own any of it, so, like, if it's good and we like it, we can, you know... we're just content farming. So: either keeping people entertained within their own little toy boxes, or just stripping it out and being like, that one feels like something we can sell to everybody else, and
(33:49):
no human writer was involved.
Speaker 5 (33:53):
Or credited. That's awful.
Speaker 1 (33:55):
That sounds like almost the more, uh, Disney reason for doing it: we did not have to pay anyone to do this. It's just bottom line: oh, we didn't have to pay anybody else. This person is paying whatever amount to write fan fiction or produce these images, and we never have to do
(34:18):
anything litigious, because we never paid anyone else to do it. They are paying us to play in this sandbox.
Speaker 4 (34:25):
They are paying us to use our chatbot, which again
you could one go play with your friends or you know,
with actual action figures or two. You know, the idea
of like just basically white labeling a chatbot and then
charging someone for the privilege of using the Disney branded chatbot,
(34:49):
and then cutting all creative humans out of this equation
because the person again who is doing this playing like
that's not creative either, right, It's it's it's keeping. Yeah,
it's this very strange closed loop of echoes.
Speaker 2 (35:13):
Yeah. And is it...? There's something kind of cool about it, if I'm being honest. But there's also something like: you think you want this until you have access to it, and then you really don't. It's the same shit. Like, you know, Disney ruined a lot of shit that existed
(35:34):
only in our imagination as, like, aspirational. Like, wouldn't it be great to see crossover movies, like big event movies? And then we got a bunch, and it was like: this is cool for a bit, but this kind of sucks ass. Like... but wouldn't it be cool, because Teddy said fan fiction, wouldn't it be cool if you
(35:55):
got to do, like, fan fiction with all your, like, favorite Disney IP and stuff like that? It is, like, fulfilling this thing where you do get the thing that you want, and it's ultimately not satisfying, or it reveals to you the capacity of this tool,
(36:17):
and then you want more and more. And so it's ultimately kind of not fulfilling a need, but just creating another, related, more ambitious need. It's this, like, weird thing that I've been thinking about a lot, where it's that thing of, like, you know: if they would
(36:39):
have told, like, my dad that there would be twelve Spider-Man movies in a decade or something like that, he would have assumed that that would be fucking awesome. And it wasn't awesome.
Speaker 5 (36:59):
Yeah, no, that's fair.
Speaker 4 (37:00):
There's a Tantalus nature to this. But, specifically, just to put a fine point on it: like, fan fiction requires a fan, right? Like, a fan to actually, like, produce something. This is plunking this toy box down, and it's not even, like, a set of
(37:21):
toys that you are playing with. You're just, like, producing fake stuff artificially, as opposed to, like, wholesome, real fake stuff yourself. Yeah, that's a horrifying line. I am wondering if this bubble will in fact burst, with, you know, lack of subsidies, or, like... where are we at, Teddy? Is this... we're gonna run out of water first?
Speaker 1 (37:48):
Oh no, we're gonna run out of processors first. But the AI bubble... most recently, I believe several... so it's Microsoft and four or five other larger companies, and I'd have to look
(38:10):
up the specific names, have all divested from Nvidia, as well as from some of the major data center companies, because, again, none of these tools have gotten a return. Now, an interesting argument... yeah, we could watch it all burn right now. OpenAI loses money every time someone uses
(38:33):
ChatGPT. Even on the two-hundred-dollar tier, they still lose money, because it is so intensive for someone to use these at scale, depending on what they're doing.
Like, if you or I had a personal model, just, like: all right, we're
(38:55):
running it on a home server, and we call it, even, three or four times a day. That'll run up a lot of electricity costs, and you'll have to worry about cooling it, but, you know, it's not gonna melt your server. When you scale that up to hundreds of thousands of
(39:17):
people putting in huge numbers of calls daily and hourly, the thing we're running into is the laws of physics, because the processors cannot be cooled fast enough. If they are being run that intensively, they are melting. And the problem
with the rare earth metals that you're creating these chips out of is that they are, by definition,
(39:41):
a non-renewable resource. When the chips melt, you can't just remake them, because these chips are expensive. And then there's the way that these tariffs have come down on the not-yet-existing infrastructure. And when I say non-existing, I
(40:05):
mean: logistically, under Xi Jinping's administration, for the last fifteen years, there have been segments of the People's Republic of China's economy that are dedicated to building these chips. That means engineers who know
(40:28):
how to, excuse me, design them; engineers who know how to work the machinery; engineers who know how to work the machinery to assemble everything; and a large portion of these people have specialized PhDs in chip manufacturing. That infrastructure exists there because they've been building it for fifteen years. With the way the tariffs are being negotiated
(40:51):
and put in place, the infrastructure to get the chips from point A in that country to point B in the US just does not exist. So what we have is: oh,
even with the relaxed regulations, the prices are just going wild, because between the amount that they're melting and the amount
(41:16):
that it costs to bring them in, find the ones that aren't working, and replace them, they are not recouping the costs, period. And when I say they, I mean, like, OpenAI and some of the other larger AI infrastructure companies, who have just been going,
(41:37):
oh, right. We thought the underlying premise was: if we
build more of these chips and we build more of these data centers, then once we get to a certain level, we just build more and the complexity will plateau, because it's fine. And we're finding that, no, the complexity is not lowering. The more data we put in
(42:00):
and the more infrastructure, the math just scales higher. Anybody who has been saying, oh, we'll just solve it with quantum computing: well, fun fact, quantum computing only stopped being science fiction less than ten years ago. There are two working, quote unquote working, quantum computers. The way
(42:23):
in which these quantum computers work, they are still decades from even a private corporation being able to make use of them, and even so, there is no concrete information on how that would actually be put together. There is no programming, there are no models, because the resources to
(42:48):
cool these things and to actually make them work are still in a highly experimental phase. So there is just... we are running into the laws of physics for why this isn't working. And all of these companies, truly,
all of these companies are going: oh, not only do we not have the electrical infrastructure to do this, not
(43:11):
only do we not have the water infrastructure to do this; the towns don't necessarily have the regularly occurring road maintenance to be able to make sure that the number of trucks that have to go in and out for this can regularly be serviced. Because these data centers are going in largely residential areas, and residential areas are, by design: hey,
(43:35):
all the trucks are going to leave, so we can just have regular car traffic. Sorry, I can't... like, it's just... it's bursting, too: conservatively, in three years, possibly sometime next year, but that's less likely. But all of the money is being pulled out, so, no, like, there's
(43:58):
no way in which somebody can make money on the large data center infrastructure.
Speaker 5 (44:06):
Cool.
Speaker 4 (44:06):
Yeah, that's a really useful perspective, because, like, you know, I'm seeing local news where I'm at, like: hey, we almost ran out of water in our area, and we've got to think about other sources because new data centers are getting built. And, like...
Speaker 5 (44:22):
The physical reality is there.
Speaker 4 (44:25):
So my hope is that the economic side of it, because you can't reason with these assholes, the economic side of it will look so bad to them that they will stop before the water and electricity run out, as opposed to just trying to, you know, drive it into the ground, hoping the line goes up, and hitting the wall
(44:46):
of: oh, we, like, blacked out an entire seaboard, and, you know, everybody's in a drought now.
Speaker 5 (44:53):
And also we can't afford it.
Speaker 1 (44:55):
The one thing, and I alluded to it earlier, and I do want to, I guess, be fair, because I am technically working in this industry: what we are seeing is, on the micro side of things, so, like, microcomputers, smaller controllers,
(45:19):
like, personal... if you personalize them, there is a pretty big, oh, I shouldn't say pretty... pretty dug in, I'll say, group of folks who are going: well, we can just start looking to optimize some of these AI
(45:41):
tools and some of these models for smaller use, like smaller-scale personal items. So instead of data-center-wide, like, we've-just-processed-the-entirety-of-digitized-human-information, it's more like: we have this model that people have trained on these large data sets,
(46:01):
so someone at home, off of a gaming laptop, could be able to, like, oh yeah... people can right now look at census data, like, all census data. No human being would be able to look at that; even in the far reaches of, oh, these people who are savants,
(46:24):
these are things that take more than a person's lifetime to digest, and these tools can do it. So that is in existence, and as tools, that is a possible direction. And there are a lot of people who are right now looking at, like, how do we optimize
(46:45):
for shittier equipment? Like, how do you get a laptop that was built in, like, twenty seventeen, or a laptop that was built more than ten years ago, to run this?
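A minimal sketch of that small-scale direction, using Hugging Face's transformers library with a deliberately tiny model on CPU; the model choice is just an example of something aging hardware can handle, not a recommendation from the episode:

from transformers import pipeline

# distilgpt2 is roughly 82M parameters, small enough for an old laptop CPU.
generator = pipeline(
    "text-generation",
    model="distilgpt2",
    device=-1,  # -1 means CPU; no GPU or data center required
)

result = generator("The census tables suggest that", max_new_tokens=40)
print(result[0]["generated_text"])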
Speaker 4 (46:59):
I like this idea of like take a bunch of
Excel spreadsheets and read them for me real quick, as
opposed to lining up the entirety of the Internet to
tell me the wrong recipe for pie.
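A minimal sketch of that read-these-spreadsheets-for-me use case: plain pandas, no model in the loop at all. The folder and file layout are made up for illustration:

import pandas as pd
from pathlib import Path

# Load every workbook in a hypothetical reports/ folder into a DataFrame.
frames = {p.stem: pd.read_excel(p) for p in Path("reports").glob("*.xlsx")}

for name, df in frames.items():
    # Often "read this for me" just means shape plus a quick summary.
    print(f"{name}: {df.shape[0]} rows x {df.shape[1]} columns")
    print(df.describe(include="all"))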
Speaker 2 (47:11):
I started laughing while Teddy was talking about this, like, environmental degradation and this economic warfare over this thing, where the consumer-facing side of it is, like: could you make me a photo of Super Mario with big titties?
Speaker 5 (47:30):
Yeah?
Speaker 6 (47:31):
Yeah. And all of the Google and Galaxy AI commercials are all, like: unicorn! Put a unicorn on my photo! Or: tell me which makeup product I should buy. Like, could you perhaps Google that...
Speaker 5 (47:50):
Or Firefox that instead?
Speaker 2 (47:53):
Truly?
Speaker 4 (47:54):
Yeah. And, like, is Super Mario titty really worth your water bill?
Speaker 1 (48:01):
And also, for that cost, you could pay a lovely artist right now to be like, hey, could you do this? And they'd be like, yeah, it's my special interest, I got you.
Speaker 5 (48:12):
Absolutely, there's a there.
Speaker 6 (48:13):
Yeah, there's definitely an artist you can commission to do that for you, and it'll be great, and it'll have the right number of fingers.
Speaker 1 (48:24):
It's so... like... I'm sorry, go ahead, Mouse.
Speaker 2 (48:26):
I was just saying, that's such a specific pull that I saw, and I probably shouldn't have shared it. You should continue.
Speaker 1 (48:38):
Oh no, I was just gonna say: a new game I've been... not an official game, but the thing that I've been playing whenever I doomscroll on Instagram is the fae game, which is: how often do I have to use those rules? Like, it was a joke on Tumblr, and now I'm like, oh no, this is actually, like, a thing that is helpful for me to
(48:59):
do. Using those rules about count the teeth, count the fingers, like, look at the reality of what you're looking at. Like, truly, stuff that was cool folklore, if-I-ever-get-isekai'd-I'll-be-able-to-use-this stuff. And now I'm like: all right, well, this
(49:21):
news story... the person who is holding this microphone went from having four fingers to three giant ones. Not to be ableist; people have webbing between their fingers, they have plenty of things. People don't typically go from having four fingers to three to two big nubbins. So
(49:44):
it's one of those things where now I'm like: all right, is this real? Nope, we can see that it's clearly not.
Speaker 4 (49:52):
And this is the critical media literacy that universities ought to be getting on and helping out with, for sure. And, I mean, at the secondary school level, from birth. But yeah, the fact that...
Speaker 5 (50:05):
You better start believing in isekais: you're living in one.
Speaker 1 (50:09):
It's so... well, there's one that I started, and again, I didn't come up with this, another YouTube media analyst came up with it, and now I'm just like, oh, I can constantly do this. If you look at the lighting in video, the weird thing is
(50:30):
you can tell when it's been generated to be bad cell phone footage but the lighting is still good. So if the person is well lit on what is a bad camera, you're like, well, that can't be real. Like, that just cannot be real. Yeah,
(50:50):
it's weird.
Speaker 4 (50:52):
That's a good wreck... that's a really good self-defense technique, for sure. So, do we want to get wrecked? Are there any antidotes to this you'd like to recommend?
Speaker 2 (51:04):
It's time to get-get-get-get get wrecked! One thing, just to shout out: uh, Dana Terrace, who is the Owl House...
Speaker 1 (51:13):
Yeah.
Speaker 2 (51:14):
Creator of The Owl House, in response to Disney's weird user-generated AI announcement, urged fans of The Owl House to unsubscribe from Disney Plus and to pirate that show, and pissed off Bob Iger in the process, which is good. We do
(51:35):
wreck that.
Speaker 1 (51:38):
Heck yeah. Find it super legally.
Speaker 2 (51:41):
Yeah, find it at Blockbuster, you know what I mean.
Speaker 1 (51:46):
Yeah.
Speaker 4 (51:46):
And if you have the opportunity to personally piss off
Bob Iger, take it absolutely.
Speaker 2 (51:53):
I will wreck watching a show called I Love LA. It's a Rachel Sennott show. We're about four episodes in, and it is about these, like, twenty-something influencers. I talked about it on our last show, but now
(52:15):
a couple of weeks have passed, and the discourse online is very interesting, because it is, at once, a lot of people arguing whether or not it's, like, a Gen Z version of Girls, slash a ripoff of the television show Girls, and simultaneously... I don't know
(52:35):
if y'all remember when Girls was on the air. We
were in college around that time, and like blogs had
just become kind of like a thing, and Girls generated
some of the most insistent and like toxic online discourse
(52:59):
of like any show or any piece of culture that
I had ever encountered. It was like a truly bizarre thing.
I watched it with one of our friends who does
not have a pseudonym on the show, so I'll just
call him Moss and you can figure.
Speaker 1 (53:16):
That one out. But I remember...
Speaker 2 (53:22):
I remember we'd watch that show, and then we'd talk about it, and then we'd read some of this discourse and send it back and forth to each other. And I Love LA is having a very similar type of online discourse, where people are arguing about, like, the authenticity politics in the show. They're arguing about the racial politics in the show, the gender dynamics, the representation of Generation Z,
(53:48):
and... I think ultimately the discourse around Girls was very interesting because it allowed a lot of people to navigate a lot of things that were going on with them in an online space, and to do it through, like, the medium of cultural production, which is essentially what the mission statement of this show is. And so I,
(54:12):
you know, I think I love I Love LA for that, because there really hasn't been one of these types of shows where people are, like, vehement defenders, vehement opponents, but it's not coming from, like, a place of the culture war. And so you carve out this sort of other space where people are arguing about this, because conservatives are not
(54:35):
watching HBO, and they're definitely not watching an HBO show that stars, like, several marginalized people, you know what I mean? Like, they're just dismissing that as, like, rote DEI or whatever the fuck. And then, like, a bunch of, you know, reasonable people are arguing about this shit. Like,
(54:57):
after every episode, me and my wife just kind of argue about, like, what the fuck is going on here?
Like, what are they...? Because everything's very pointed, everything is very... is hyper social. Everything is rooted in, like, conversations that we have on the show, about, like, Hollywood and image and personality and, like, social relationships and sociality
(55:20):
in twenty twenty-five, like this kind of post-COVID sociality as well. And it evokes that, which I don't think, you know, besides Game of Thrones, which is a totally different type of discourse, there has been a show that really evokes: that kind of
(55:41):
cultural dialogue. It's a weird cultural dialogue that is fervent but non-threatening, whereas the across-the-aisle shit is fervent and you're like, I don't know if we should go outside.
Speaker 5 (55:59):
Much less specifically threatening. Yeah, that's cool. That's really nice, to hold up a mirror.
Speaker 4 (56:04):
I mean, this is one of the nice purposes that art can have: to hold up a mirror to your world and ask you to think about it.
Speaker 5 (56:12):
That's awesome.
Speaker 2 (56:13):
Yeah, it's funny. Even if it's long, so, you know... yeah.
Speaker 5 (56:18):
That's important.
Speaker 4 (56:20):
I will recommend Akira Kurosawa's Stray Dog, which is Toshiro Mifune playing a cop who loses his gun, or gets his gun stolen, and there's a very sort of Hitchcockian suspense throughout, as they're trying to track down the guy who's got the gun, and the guy starts using the gun. And so every time there's
(56:42):
a new shooting reported, Mifune is freaking out with guilt, like, oh my goodness, I've got to figure out how to get this back. And he's paired up with a mentor, and there's a really interesting conversation about what Mifune, as a war veteran, is trying to do to, like, help people, versus his mentor's, like: yeah,
(57:08):
you've got to think of yourself as the wolf and people as sheep, and, like, very rote, insular police theory about being apart from society. And the parallels between Mifune's character and the guy who stole his gun, like,
(57:29):
they're the same kind of person. And it's really, really well done, and it's beautiful suspense, really well structured. It could be remade in any city in the United States today and it would feel exactly the same. It's really, really well put together. So I highly recommend it. That
(57:50):
sounds great. And so, Teddy, do you have another wreck you want to wreck, beyond your face self-defense strategies?
Speaker 1 (57:58):
Along with the face self-defense strategies... oh, hello, Chi... I'm gonna wreck: hug your loved ones. Do the thing. I feel like we're still very touch-starved, so give someone you love a hug at some point, if they're into hugging. If they're not, you know, high five; it's all good. Hello. But yeah, hug your loved ones. That's
(58:23):
my wreck.
Speaker 4 (58:23):
Oh, dear listeners, if you've been with loved ones, friends, family this season, give them an extra one, and we'll see you in the next one.
Speaker 5 (58:32):
I guess yeah.
Speaker 6 (58:38):
It's just.
Speaker 7 (58:44):
It's like, oh, pirates port your brain, Robin Nalis, don't
joking opened in your mind with the probots as you woken.
Hydra halen Hairs had for a time for a head
of reasons for more than with the soldiers with them
for all seasons.
Speaker 5 (58:54):
Listen closely while we share a for teason.
Speaker 7 (58:56):
Customic comments called your Dean's preatucition. Do the multiversity the
Molsons I were teaching perfect balance when we snap in
venit Jensen to your ears, does the shoulders when we speak?
Purple men were squeezing feet for Randy Savage Randals with
their immortal technique.