
February 27, 2024 71 mins

In episode 1631, Jack and Miles are joined by hosts of Mystery AI Hype Theater 3000, Dr. Emily M. Bender & Dr. Alex Hanna, to discuss… Limited And General Artificial Intelligence, The Distinction Between 'Hallucinating' And Failing At Its Goal, How Widespread The BS Is and more!

LISTEN: A Dream Goes On Forever by Vegyn

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Hello the Internet, and welcome to season three twenty six, episode two of The Daily Zeitgeist, a production of iHeartRadio.

Speaker 2 (00:08):
This is a podcast where.

Speaker 1 (00:09):
We take a deep dive into America's shared consciousness. And it is Tuesday, February twenty-seventh, twenty twenty four.

Speaker 2 (00:17):
Oh yeah, it's... I mean, what is that? That means it's Tuesday, the twenty-seventh, we said? Is that where we are?

Speaker 1 (00:24):
That's right?

Speaker 2 (00:24):
Two twenty-seven, twenty-four. It's, uh, Anosmia Awareness Day. It's National Retro Day, so I guess honor your see-through telephones and vinyl players. National Polar Bear Day, National Strawberry Day, National Kahlua Day, for all my Kahlua lovers out there, the day is yours. The

(00:46):
drink of the day is the White Russian?

Speaker 1 (00:47):
Is Kahlua in a White Russian? Or no?

Speaker 2 (00:48):
That's Baileys, right? I think so, I don't know. Either way, it's just way too much sugar, and you will have a bad time if that's all you drink.

Speaker 1 (00:58):
So yeah, that's right. Well, my name is Jack O'Brien, AKA: Don't you wish your snack food were sweet like this?

Speaker 2 (01:06):
Don't you want more?

Speaker 1 (01:07):
Vague, dissatisfying bliss, don't ya? That is courtesy of Cleo Universe, in reference to the fact that some of the greatest science of the late twentieth century was spent focusing on how to get the perfect balance of mouthfeel and, like, enjoyment and dissatisfaction in Nacho Cheese Doritos.

Speaker 2 (01:33):
And yeah, gotta keep on popping them.

Speaker 1 (01:36):
That's... gotta make them keep popping. That's, uh, that's who we are as a civilization, unfortunately, at this point in time. And I'm thrilled to be joined, as always, by my co-host, Mister Miles...

Speaker 2 (01:49):
Gray! Miles Gray, AKA pass the Warren pon the Rock hand side, pass the Warren pon the Rock hand side. And that is a reference to Senator Elizabeth Warren saying her dream blunt rotation was just to smoke with the Rock, and my body curled into itself and I became sand and drifted

(02:10):
off into the sea. Yes, thank you to Laceroni for that one, for letting us know that you're gonna pass the duchy pon the Rock hand side.

Speaker 1 (02:17):
On the Rock hand side. I wonder what the Rock would be like high. I feel like probably pretty fun.

Speaker 2 (02:25):
Because, you know, he's, like, simmering conservative underneath the surface. I'm not sure, it would just be like getting high with Joe Rogan. It's like, I feel like, you know, he'd say something weird, and then, like, he'd probably, like, make an observation about your physique. He'd be like, good dude, you're right-handed, huh? Yeah, I could tell, man, based on how your shoulders are built. And you're like, okay, moving on, yeah, yeah.

Speaker 1 (02:50):
Have some workout tips for you, unsolicited. Don't need that. Miles, we are thrilled to be joined by the hosts of the Mystery AI Hype Theater 3000 podcast. It's Doctor Alex Hanna and Professor Emily M. Bender.

Speaker 2 (03:06):
Hello, Hello, welcome, Welcome to both of you.

Speaker 1 (03:10):
Yeah, in addition to being podcast hosts, the highest honor one can attain in our world, you both have some pretty impressive credits, so I just wanted to go through those as well. Off the top: Emily, you are a linguist and professor at the University of Washington, where you are director of the Computational Linguistics Laboratory. Alex, you are director

(03:34):
of research at the Distributed AI Research Institute. You're both widely published, you've both received a number of academic awards, you both have PhDs. What are you doing on our podcast? Is, like, what... our booker is really good, shout out to super producer Victor. But this is, I

(03:55):
don't know what's going on here.

Speaker 3 (03:56):
Yeah, we're excited to be here. And part of what's going on is that, you know, we're talking to the world about how AI, so-called AI, isn't all that, right? And so a chance to have this conversation is really helpful and important.

Speaker 1 (04:07):
We've been talking about that for a while, but listening to your podcast really drove some things home. So I'm very excited about this conversation.

Speaker 2 (04:15):
Yeah. Every time... like, we've had Dr. Kerry McInerney on before, I think she's been on your show as well, and, uh, Juau Sayak out in New York. We were just always talking about, like, our own evolution with how we felt about AI. Because before, I was like, it's gonna take all the writing jobs, and then people are like, it's not. I'm like,

(04:37):
how do you know? Like, here's some more information. I'm like, it's vaporware to make money. So it's always nice, like, yeah, it's always nice to kind of keep, you know, pushing my own understanding along, because I also encounter, you know, a lot of friends and family who also kind of have the same thing. They're still at the, like, this stuff looks like it's gonna change the world forever in

(04:57):
a way we'll never know. And yes, so it's always great to have the input of actual learned experts on the topic. Learned expertise!

Speaker 1 (05:07):
So we're going to get into all of that. But before we do, we like to get to know our guests a little bit better by asking you: what is something from your search history that is revealing about who you guys are?

Speaker 4 (05:21):
I have an exciting one, I can start. Hey, this is Alex. Yeah, so I'm building a chicken coop, because my second job, or rather, one of my third jobs, is kind of being a little suburban farmer. And so I'm getting some chicks delivered from a hatchery, and they said you need to put them directly into the brooder,

(05:44):
and I'm like, what's a brooder? And I looked up what a brooder is, and found that it can be anything from a Rubbermaid tub, where you put the chicks while they're very small, to, like, a repurposed rabbit hutch. So then I started thinking all morning about how to build a chicken brooder.

Speaker 2 (06:05):
And wait, just so... it's just, like, a pen, like a mini pen for the chicks to just kind of...

Speaker 4 (06:11):
Yeah, it's like a little pen for the chicks, and you put in a heat lamp, you know, because they need to be warm when they're small. And they've only got that chick fuzz, they don't have the chicken feathers.

Speaker 2 (06:21):
Yeah, yeah, yeah, they.

Speaker 1 (06:23):
To me, this sounds like a job for a shoe box. But that's probably too small. Too small, unfortunately. If I'm building a chicken brooder, that's what they're getting. You can also, what I got, if you're in a rush, you can use your bathtub, if you have one of those, and put down some puppy pads. Yeah.

Speaker 4 (06:44):
I learned a lot about this today as I was
just waiting for another meeting to start and just read
everything I.

Speaker 1 (06:50):
Could. Nice. And of course that chicken coop is gonna come in handy: free eggs during the robot apocalypse, right? Which Sam Altman has told me is coming. So I have to assume that's also what you're thinking, right, right.

Speaker 2 (07:03):
I mean, she's prepping, she's prepping for the eventual demise. Yeah.

Speaker 1 (07:06):
I think during.

Speaker 4 (07:07):
COVID, I think there was a run on chicken eggs, and so I'd go to Costco and you couldn't get the gross of chicken eggs. And then I'd go, hey, I went to my backyard.

Speaker 2 (07:20):
Black market. Got them for eight bucks an egg if you want.
want to.

Speaker 1 (07:23):
Yeah, yeah, they are still really expensive. Emily, how about you?
What's something from your search history?

Speaker 3 (07:28):
So I took a look, and it's a bunch of boring stuff, like, I can't remember, can't be bothered to remember, the website of a certain journal that I was interested in, so I'm, like, searching the name of the journal. But then, like, down below that: we've been watching For All Mankind, which is this, like, alternate timeline thing, a what happens if the Soviets landed on the moon first? Right, right. So it diverges

(07:51):
in the nineteen sixties, but it keeps referencing actual history. So my search history is full of, like, okay, so when did the Vietnam War actually end?

Speaker 1 (07:58):
Right?

Speaker 3 (07:59):
And like know which Apollo mission did what. So it's
a bunch of queries like that, sort of comparing what's
in the show to actual history.

Speaker 2 (08:05):
How does that line up based on your sort of
cursory research as you watch the show?

Speaker 3 (08:10):
So, interestingly... so there's a point where Ted Kennedy is, like, in the background, being talked about as not going to Chappaquiddick. Oh, and then an episode later he's president.

Speaker 1 (08:22):
Wow.

Speaker 3 (08:24):
So there's and I suspect there's way more of that
kind of stuff that I'm not catching right, right.

Speaker 2 (08:28):
Right, just subtle things right right.

Speaker 1 (08:30):
The media is like, and Ted Kennedy missed a barbecue this weekend in Chappaquiddick. That's super interesting. Yeah, this is, you know, the third or fourth person who's mentioned For All Mankind to us. I think this is pushing it over the threshold to where I have to watch this damn thing.

Speaker 3 (08:51):
It's enjoyable, but it's tense. I don't know, anything with, like, people in outer space really creeps me out, because, like, that degree of, like, loneliness and sort of lack of failsafes, you know? Like, when all the failsafes that are there are the ones that you built, and beyond that, you know, you're SOL. That's creepy.

Speaker 2 (09:10):
It seems uncomfortable outer space.

Speaker 1 (09:12):
Yeah, what is something, Alex that you think is underrated?

Speaker 4 (09:17):
I was trying to think of something and the only
thing I could come up with this morning was the
Nintendo Virtual Boy.

Speaker 2 (09:26):
The red goggles on a tripod.

Speaker 4 (09:28):
The red goggles on a tripod. And I know, you know, whatever, twenty years later, thirty years later I think at this point, you know, Apple has its Apple Vision Pro that looks pretty much the same, except you can walk around with it, right, right, right. All I remember is that I really, really, really wanted a Nintendo Virtual

(09:49):
Boy when I was, whatever, ten years old, in nineteen ninety five. But then I was reading the Wikipedia page, and it kind of said, you know, it was giving people headaches, and it was overpriced. But I'm going to stand by my claim that it is underrated. It was ahead

(10:09):
of its time, and it was one of those products that completely, you know... I don't know. I think, you know, if you could go back, maybe... I don't know what they could have done differently. But yeah, that's all I got.

Speaker 2 (10:26):
Oh hell, have you seen... okay, so I was the same way. Like, I used to subscribe to Nintendo Power, all that shit. I was such a nerdy game kid, and when, like, those ads for it came out, it was like, this is the future. It blew my mind. My parents obviously were like, that's a hell no from us, right? How silly you would look!

Speaker 1 (10:46):
We don't, yeah, want to be seen in public with that.

Speaker 4 (10:48):
Yeah, and wasn't it also, like, a couple of grand? It was just ridiculously expensive.

Speaker 2 (10:52):
It was something... I think it was, like... it was just something like more than what a PlayStation cost, which was sort of, like, the height of it or something. Anyway, I remember a kid in my school had one, brought it to school, and we lined up to play it, and it was the most underwhelming experience, and, like, it broke the illusion, because, like, there's nothing VR about this.

(11:13):
It's, like, lightly 3D, everything, I feel like, in this, like, monochromatic, like, red-scale kind of graphic thing. It just was really underwhelming. But I completely followed the same path you had, Alex, in being like, this is... I need this.

Speaker 4 (11:28):
This was the future?

Speaker 2 (11:29):
Yeah, yeah, yeah. And, yeah, like, that's so funny. I'd never even thought about the Vision Pro as being, like, the spiritual sequel.

Speaker 4 (11:38):
Exactly, it's the direct descendant. Yeah, you know, the Nintendo Virtual Boy crawled so the Apple Vision Pro could run, exactly right.

Speaker 1 (11:48):
How about you, Emily, what is something you think is underrated?

Speaker 3 (11:52):
So after listening to a couple of your episodes, and, like, all the nineties nostalgia, I have to say: Gen X.

Speaker 2 (11:57):
Gen X! Yeah, the people I thought were the coolest.

Speaker 3 (12:01):
We've got some monsters among us, you know, yeah, yeah. But, you know, Gen X, we're small, we're scrappy. I saw someone, a millennial that I know, online, posting something about how weird it is to talk to Gen X folks about their internet experiences way back. Don't cite the old magic to me.

Speaker 2 (12:22):
I was there when.

Speaker 4 (12:23):
It was created. Although I feel like... it's a bit meta, Emily, because I feel like Gen Xers are always going to say that they're underrated, like they're underappreciated, and it's kind of a classic...

Speaker 3 (12:37):
It's our whole identity exactly.

Speaker 2 (12:39):
It is.

Speaker 4 (12:40):
It is self-referential. Yeah, it's a classic feature of Gen Xers to say that they are unappreciated. And, you know, you children didn't have to struggle with AOL dial-up. You know, how about this, you know: prior to that, no internet. Or, yeah, I...

Speaker 3 (12:56):
Mean unless you you don't even know how to read
the math.

Speaker 2 (13:02):
Yeah, yeah, I grew up in L.A. I know how to use a Thompson Guide... or, Thomas Guide.

Speaker 1 (13:07):
That's that was.

Speaker 2 (13:07):
That was our Google Maps before anything. But yeah, yeah, exactly. But yeah, like, I think about, too, like, as, like, an older millennial, like, Gen X were all the people I thought were the coolest people growing up. Like, I want to be a hacker, I want to be like them. Oh yeah, well, hey, I don't know, what are we going to

(13:28):
say about ourselves as millennials, I wonder? I think we're just, like... we're just dead inside on some level, and we're like, yeah, whatever, cool, we are.

Speaker 4 (13:37):
We're dead inside and we killed everything, right? The, you know, stock headlines: millennials are killing, you know, the napkin industry.

Speaker 1 (13:47):
Right, but the toast is going to be so good, Yeah.

Speaker 4 (13:50):
It's going to be amazing. We, you know, we killed housing somehow because we spent too much on avocado toast. Yes, and, you know, turmeric lattes or whatever. Those are good, like, they're good.

Speaker 3 (14:06):
Though maybe maybe even underrated.

Speaker 1 (14:09):
There will be... people will be proposing to each other with rings made of turmeric and avocado instead of diamonds, because you killed the diamond industry.

Speaker 2 (14:19):
Our new currency. Yes, yeah. But we always talk about how that framing is just basically: millennials can't afford X thing, right? We can't. We're not killing the diamond industry, we don't have money for the diamond industry. We don't have money for these...

Speaker 1 (14:35):
Millennials have this weird trend of living five to an apartment because they love it. Yeah. What, Alex, what's something you think is overrated?

Speaker 4 (14:47):
Well, we're all about the AI stuff, and so, you know, both Emily and I are gonna talk about parts of it. But image generators, for sure. Text-to-image generators, I mean, people think they're very flashy, but, I mean, it does a lot of copying. You know, I gave a talk at the San Francisco Public

(15:09):
Library with Karla Ortiz, who's a concept artist, and she had all these examples in which, you know, you give it a prompt and it would literally just copy kind of a game art and then put it on the new thing and have some kind of swirls around it. But the stuff is just, I mean... and it's also, I mean,

(15:30):
the kind of images the thing makes, it's this distinctive style that, aesthetically, apart from all the problems with the stuff, and the non-consensual use and the data theft and all the awful kinds of, you know, non-consensual deepfake porn and the kind of, like, far

(15:51):
right imagery... like, the stuff is just ugly. Like, it's got this, like... every person in it just looks sunken. They look like they've just seen some shit, you know, they've got some...

Speaker 1 (16:01):
Stares, like it's a nightmare.

Speaker 4 (16:04):
Yeah, just glassy. And you're just like, what is the appeal?

Speaker 1 (16:10):
You know?

Speaker 4 (16:11):
And so I just, yeah, I just don't like it.

Speaker 2 (16:14):
Yeah, I think that of so many things when people try and do, like, real life. Because I look at, like, the Midjourney subreddit on Reddit to see, like, what people are making, and that was sort of, like, my entry point. Like, wow, these... like, when people get the prompt right, it's interesting. They're, like, Simpsons, but real-life characters, but also from a Korean drama. Like, okay, so

(16:36):
we're getting the Korean, like, the K-drama IRL Simpsons, but all the... the aesthetic, like, all looks like, sort of, like, David LaChapelle's photography. It's weird. There's, like, this hyper-stylized look to it that feels very specific. And I think the thing that interests me is sort of, like, as someone who's terrible at visual art, it's, like, this way for someone with absolutely no talent in

(16:57):
that area to be like, I summon this thing I'm thinking of. And you're like, fine, it's a bunch of copies. But, like, I think that's the thing that most people are like, oh cool, right, it made the thing.

Speaker 4 (17:10):
It made the thing. It made mediocre, like, art.

Speaker 1 (17:14):
I had the thought: AI is like a doesn't-matter generator. Like, Star Trek has the matter generator, and this, like, ships out a bunch of stuff that doesn't matter. That's, like, uninspired, but it, like, matters to you for a split second. You know, it's like, hey, wow, that's weird that, like, that just came from a text prompt, but it actually doesn't matter in any

(17:37):
long-term sense.

Speaker 4 (17:39):
Right, yeah. I love that framing. I mean, what's that famous, the, like, image of the, like, the diner? Yeah.

Speaker 1 (17:48):
Nighthawks? Is that what it's called?

Speaker 4 (17:50):
Yeah, yeah, yeah. And someone had this great thread... well, no, it was a troll thread, though, you know what I'm talking about? There was, like, the Nighthawks image, and then someone was like... I thought it was, like, look at this composition, it's so boring, what if they were happier? And it was kind of like a troll thread, I think it was, like...

Speaker 3 (18:12):
I thought, I thought it was genuine. I thought that
they were really saying I'm making it better with AI.

Speaker 4 (18:17):
I think it was. I think it was a troll thread.
It's like, why are these people so sad.

Speaker 2 (18:21):
What if they were happy?

Speaker 4 (18:23):
What if it was daytime? Yeah, it was sort of like that, like, you know. And I thought it was a troll thread. Maybe I'm granting posters, like, a little too much, like, grace here, but I think it was, like, the idea of, like, I made it interesting. Look at this!

Speaker 2 (18:40):
Why is it so dark?

Speaker 1 (18:42):
Yeah? Right? What if there was confetti.

Speaker 2 (18:46):
I know, what if like a swagged out pope was
also sitting at.

Speaker 1 (18:54):
So many AI images don't land, but the swagged-out pope did get me, though.

Speaker 4 (19:00):
I thought it was real, and then I was like, this is such a cool... But again, it's...

Speaker 1 (19:05):
Not like ChatGPT was like, you know what would be funny... It was somebody who was like, you know what would be funny, and gave that prompt to ChatGPT, and then it ended up being... But, like, that's the thing that you guys talk about a lot, is just the erasure. Of, like... people are like, well, and here's ChatGPT doing this thing. And it's like, that's

(19:26):
something somebody told it to do and then programmed it to do. Like, it's doing a specific thing. It is not a person who is coming up with it.

Speaker 2 (19:35):
It's like talking about, like, Picasso's paintbrush, but just as the paintbrush. Like, look what the paintbrush did! Oh my god, dude. Which is cool.

Speaker 1 (19:43):
And then there's like a little like Buddhism there that
I appreciate, but I feel like that's not where it's
headed unfortunately. Emily, do you have something that you think
is overrated?

Speaker 3 (19:55):
Yeah, I mean, the short answer is large language models are overrated, but I think we're going to get into that, we are probably, so I'm going to shift to my secondary answer, which is actually to take a page from Alex's work and say that scale is overrated. That if the goal of taking something and scaling it to millions of people is, like, the thing, the only folks really benefiting from that are the capitalists behind it. Right,

(20:18):
the product is worse. The impact on the societies or the communities that lost access to whatever their local solution was is worse, right? So scale is the thing that so many people are chasing, especially in the Bay Area, but also up here in Seattle, and it's...

Speaker 2 (20:32):
Way over it all over in every place. Everyone will
hear that all the time, and like, yeah, yeah.

Speaker 1 (20:44):
I worked on a website that was having really fast growth, and, you know, it was just based on, like, publishing these three articles a day and really, like, focusing on making them as good as possible. And unfortunately, once it got a lot of attention, the executives came in, and their first question was, how do

(21:05):
we scale this? How do we scale this, though? Like, how do we get a hundred articles a day? And this was, like, back in two thousand and ten, you know. But it's just been how capitalism thinks about this from the beginning. Yeah, and now they're doing it with movies. They're like, who needs movies? Now I could make myself the star of Oppenheimer. It's like, why would you want

(21:26):
to do that?

Speaker 4 (21:28):
Seems like you missed the point of that movie.

Speaker 2 (21:31):
Okay, I want to be destroyer of worlds.

Speaker 1 (21:34):
Oh, that maybe feels strong? Yeah, yeah. All right, well, let's take a quick break and we'll come back and get into it. We'll be right back. And we're back.

(21:55):
We're back, and yeah. So, as we mentioned, you know, we've done some episodes on how there's a lot of hype around it. Your podcast really helped me understand just how much of the current AI craze is really hype. There's this one anecdote I just wanted to mention, where a

(22:15):
physicist who works for a car company was brought into a meeting to talk about hydraulic brakes, and one of the executives of the car company asked them if putting some AI in the brakes wouldn't help them operate more smoothly, which left them kind of confounded. And they were like, what

(22:40):
does... what do you... Like, I guess that gets to the big question I have a lot of the time with AI hype in the media, which is: what do these people think AI is?

Speaker 3 (22:49):
It's magic fairy dust. But you actually made it slightly more plausible by making it brakes. The story was pistons. Pistons, okay. And the engineer was like, it's metal, there's no circuits here.

Speaker 1 (23:02):
Right, yeah. But yeah, I mean, there are three kind of big ideas that I got a new perspective on from your show that I wanted to start with. The first one is kind of this distinction between limited and general intelligence, or, like, general AI, which is something that I've heard mentioned a lot in the past couple of

(23:25):
years in articles about ChatGPT. They're basically trying to use this as a distinction to say: look, we've known that computers can beat a chess master for a while now, but the distinction that we are trying to sell to you is that this one is different, because it's a

(23:47):
general intelligence, it can kind of reason its way into doing lots of things. Yeah, and one of the details I heard you point out on your show is that it's still pretty limited. Like, it's still basically doing a single thing pretty well, like, that ChatGPT is...

(24:10):
Can you talk about that? Like, what is it, what is that distinction, and how kind of imaginary is it?

Speaker 3 (24:17):
Yeah. So, before Alex goes off on chess as a thing in this, and we'll get there, Alex... you're right that ChatGPT does one thing well. But the one thing that it's doing well is coming up with plausible sequences of words.

Speaker 2 (24:30):
Right.

Speaker 3 (24:30):
The problem is that those plausible sequences of words can
be on any topic. So it looks like it is
doing well at playing trivia games and being a search
engine and writing legal documents and giving medical diagnoses, and
the less you know about the stuff you're asking it,
the more plausible and authoritative it sounds. But in fact,
underlyingly it's doing just one thing, predicting a plausible sequence

(24:51):
of words, right.

Speaker 1 (24:52):
Yeah.

Speaker 4 (24:53):
And the thing about chess is that it wasn't too long ago that general intelligence meant chess playing, you know. And so, yeah, you had these, you know... during the Cold War, they would have these machines that could play chess, and that was supposed to be a substitute for general AI. They thought that was general

(25:14):
intelligence, and it was...

Speaker 2 (25:15):
A big deal.

Speaker 4 (25:15):
When, you know, Deep Blue beat Garry Kasparov. And, you know, IBM Watson... was that Deep Blue? Yes, yes. Watson won Jeopardy, yeah, it beat Ken Jennings. And so, yeah, thank you, it was another one of those IBM ones. And so then that was the

(25:37):
kind of... that was the bar. And then they're like, well, chess is now too easy.

Speaker 2 (25:41):
We have to do.

Speaker 4 (25:42):
We have to do Go, you know. And then they got it into some real-time strategy games, like StarCraft and Dota 2. And so, you know, there's always been this kind of thing that's been a stand-in for intelligence, and this has been what the advertisement for, you know, for general intelligence is.

(26:02):
And so, in addition to what Emily's saying, it's really good at doing this next-word thing. I mean, it seems to achieve some acceptable baseline for all these different tasks. But then these tasks become, like, stand-ins for general intelligence, and it really gives away the game when,

(26:23):
you know, people like Mark Zuckerberg, you know, come out and they say, well, Meta is going to work on AGI now. I don't really know what that is, but, you know, we're going to do...

Speaker 1 (26:32):
It, but you guys seem to like it as investors,
and so therefore that is what we're going to call
everything now.

Speaker 4 (26:39):
Yeah, yeah, it's literally turning back towards the machine and, like, dialing up the AGI knob and seeing if the crowd, you know, seems to cheer at it.

Speaker 1 (26:49):
Yeah. So, Emily, I think you were saying, you were specifically saying that they would take an existing sentence, like, one of the training methods: take an existing sentence, remove a word, the computer guesses what that word is, and then compares the guess to what the actual word is. And, like, it got better and better at doing that, until it seems like the computer is talking to you using

(27:11):
the language a human would use in conversation. Like, just hearing it put that way, for some reason, was like, oh, so it's, like, just better. It's doing the same thing that a chess computer does, but it's just more broadly impressive to people who don't know how to play chess.
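
To make the training setup Jack is describing concrete, here is a toy sketch in Python. Nothing in it comes from any real model: the tiny corpus and the bigram counting are invented stand-ins for the neural networks and web-scale text actually used, but the loop is the same idea, hide a word, guess it from context, compare the guess to the truth.

from collections import Counter, defaultdict

# Toy "training data"; real systems use billions of documents.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(prev_word):
    """Guess the most plausible next word given the previous word."""
    counts = following.get(prev_word)
    return counts.most_common(1)[0][0] if counts else None

# "Evaluation": hide each word, guess it, compare to the actual word.
correct = sum(
    predict_next(prev) == actual
    for prev, actual in zip(corpus, corpus[1:])
)
print(f"guessed {correct}/{len(corpus) - 1} hidden words correctly")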

Speaker 3 (27:29):
So it's not actually doing the same thing that a chess computer does. So, what Deep Blue did was, they just threw enough computation at it to calculate it out: what are the outcomes of these moves, what opens up, what opens up? And you can put in enough calculation for chess, that's doable with computers of the era of Deep Blue. Go, it turns out, the search space is much bigger. So AlphaGo, what they did was, they

(27:52):
actually trained it more like a language model, on the sequences of moves in games from really highly rated Go players. Someone beat it recently by playing terribly, and then it was like, I don't know what to do, yeah, because it was completely...

Speaker 2 (28:07):
Yeah, that's why I usually lose at chess.

Speaker 1 (28:10):
I'm so much better than everybody that the level confuses it.

Speaker 3 (28:16):
So there was a big deal about how GPT-4 could play chess super well. And one of the things about most of the models that are out there right now is we know nothing about their training data, or very little, like, the companies are closed on the training data, even OpenAI, right? But people figured out that there's actually an enormous corpus of highly rated chess players' games in the training data, in the specific notation, for

(28:39):
GPT-4. So when it's playing chess, it is doing predict-a-likely-next-token based on that part of its training data. But that's not what Deep Blue is up to, Deep Blue is a different strategy. And in all of these cases, there's this idea that, like, well, chess is something that smart people do, so if a computer can play chess, that means we've built something really smart, which is ridiculous. And I just want to be very

(29:00):
clear that Alex and I aren't saying there's some better way to build AGI, right? We're saying there's better ways to seek out effective automation in our lives.
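
A hedged sketch of the contrast Emily is drawing, with every game and move invented for illustration, nothing here is OpenAI's or DeepMind's actual setup: a "model" that has only memorized strong players' move sequences can continue a familiar opening, but it holds no board and runs no search the way Deep Blue did, so an off-distribution game leaves it with nothing to say.

from collections import Counter, defaultdict

# Hypothetical training corpus: openings in simplified notation.
games = [
    ["e4", "e5", "Nf3", "Nc6", "Bb5"],  # Ruy Lopez
    ["e4", "e5", "Nf3", "Nc6", "Bc4"],  # Italian Game
    ["d4", "d5", "c4"],                 # Queen's Gambit
]

# "Training": count which move follows each prefix of moves.
next_move = defaultdict(Counter)
for game in games:
    for i in range(len(game) - 1):
        next_move[tuple(game[: i + 1])][game[i + 1]] += 1

def suggest(history):
    """Emit the likeliest next token; no board, no rules, no search."""
    counts = next_move.get(tuple(history))
    return counts.most_common(1)[0][0] if counts else None

print(suggest(["e4", "e5", "Nf3"]))  # "Nc6": seen in the training data
print(suggest(["a3", "h6"]))         # None: "playing terribly" breaks it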

Speaker 1 (29:10):
I'm kind of.

Speaker 3 (29:10):
Speaking for you there, Alex, but I think we're... I think we're in agreement, yeah.

Speaker 4 (29:14):
Or rather, I mean, even if we want automation in particular parts of our lives, you know, what is the function of making chess robots?

Speaker 1 (29:22):
Right?

Speaker 4 (29:22):
And, you know, insofar as building chess robots in the Cold War, I mean, a lot of it had to do with, you know, Cold War kind of scientific dominance, you know, the US versus the USSR, and seeing if we could do better, you know. And, you know, chess was one front, getting to the moon another. Speaking of the show whose name has already completely evaded me, that we were

(29:44):
talking about earlier... For All Mankind, For All Mankind, thank you. Please cut me out, make me sound smart.

Speaker 1 (29:50):
It's a tough title. It doesn't stick in the brain.
And they should have used AI to name that show.
Yeah exactly, Yeah.

Speaker 3 (29:57):
Yeah, historical sexism that makes it stick, maybe makes it
a little.

Speaker 4 (30:00):
Bit slippery, right, yes, yeah. So it's the same thing, you know, get to the moon first. So, I mean, this becomes a stand-in, you know. And, you know, a lot of it is frankly kind of chauvinism, you know, mine can do it better than yours, and, you know, really getting to that. And so, you know, this association with chess, as Emily said,

(30:21):
is of, like, this thing smart people can seem to do, not things like: is this automation useful in these people's lives, right, will it make these kinds of rote tasks easier for people? Instead, we sort of jumped over that completely to go, let's make this art kind of thing, and oh, it's going to put out of work a bunch of, you know, concept artists and writers. Even if it

(30:43):
doesn't work well, you know, we're going to scare the wits out of these people who do this for a living.

Speaker 2 (30:49):
Right. Like, I feel like, you know, looking at CES and just what happened at CES, so many companies are like, it's got AI, like, this refrigerator has AI. And I feel like, for most people who are just consumers, who are just kind of passively getting this information, there's, like, a discrepancy between how it's being

(31:09):
marketed and what it is actually doing, right? Yeah. And so part of me is like, how... like, why are all these C-suite maniacs suddenly being like, dude, this is the future, like, you got to get in on it? And I know that a lot of it is hype, but can you sort of, like, help me understand, like, the hype-line, or pipeline, through which a company says, ooh, this is what we do,

(31:33):
and then how that creeps up to the capitalists, where then they're like, yeah, yeah, yeah, and they begin to make these proclamations about what large language models are doing or how it's going to revolutionize things? Like, from your perspective, can you just sort of give us the unfiltered... like, this is how this conversation is moving from these laboratories into places like Goldman Sachs.

Speaker 4 (31:56):
Yeah, I mean, I think a lot of hype seems to operate via FOMO, honestly, if you want it in a kind of reductive way: you need to get in on this at the ground level, and if you don't, you're going to be missing out on huge opportunities, huge returns on investment, right? And one of

(32:20):
the ways I would say that AI hype is at least a bit qualitatively different from, let's say, I don't know, crypto or NFTs or, yeah, you know, DeFi, is that, you know, crypto has kind of shown itself to be such a, you know, house of cards, that it

(32:42):
takes only a few things, like your Sam Bankman-Frieds going to jail, or, you know, these huge kinds of scandals happening with Binance and Coinbase, and then the wild fluctuation of Bitcoin and other types of crypto. Whereas at least you have some kind of proof of

(33:03):
concept with ChatGPT, where it seems like this thing can do a lot, and so there's enough kind of stability in that where folks say, yeah, let's go ahead and get in on this, and if we don't get in on the ground floor, we're missing out on millions or billions of dollars in ROI.

(33:25):
And so, you know, the hype really, at its base, is this kind of FOMO element. And I think it's just the thing where, you know, it obscures so much of what's going on under the hood, and folks just think it's magic. You know, you see gen AI... people say, we're putting gen AI in

(33:48):
this, in that, like you could rub it on an engine. When, yeah, calling things AI itself has this marketing quality to it. It's really, you know... if you look at the class of technologies that have been called AI since the inception of AI

(34:12):
in the mid nineteen fifties, it can be anything as basic as a logistic regression, which is a very basic statistical method that's taught in Stats 101 or 201, all the way to, you know, these language models that we see OpenAI hawking. And so you could say anything is AI: I've got AI, I've

(34:33):
programmed it by hand, I did the math by hand.

Speaker 1 (34:36):
Ooh right, you know right? Yeah.
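
For a sense of how basic the "AI" label can get, here is the kind of logistic regression Alex means, written out in plain Python. The data and learning rate are invented for the sketch; the whole "model" is two numbers and the intro-stats sigmoid formula.

import math

# Invented toy data: hours studied -> passed (1) or failed (0).
hours = [1.0, 2.0, 3.0, 4.0, 5.0]
passed = [0, 0, 1, 1, 1]

w, b = 0.0, 0.0
for _ in range(5000):  # plain gradient descent on the log loss
    for x, y in zip(hours, passed):
        p = 1 / (1 + math.exp(-(w * x + b)))  # sigmoid: P(pass | hours)
        w -= 0.1 * (p - y) * x
        b -= 0.1 * (p - y)

x = 2.5
p = 1 / (1 + math.exp(-(w * x + b)))
print(f"'AI' says P(pass | {x} hours) = {p:.2f}")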

Speaker 3 (34:40):
So I think part of what's going on here is that, because of the way ChatGPT was set up as a demo, and then all these people did free PR for OpenAI, sort of sharing their screencaps, everybody has the sense that there's a machine that can actually reason right now. And then you call anything else AI, and the gee-whiz factor from ChatGPT sort of makes the whole thing sparkle. There's a funny

(35:02):
story from this conference called NeurIPS, which is Neural Information Processing Systems. Not real neurons, these are the fake ones that these things are built out of, it's like a mathematical function that's sort of an approximation of a nineteen-forties notion of what a neuron is. And a few years back, I want to say, like...

Speaker 1 (35:16):
I sound impressed by the way.

Speaker 3 (35:19):
Around twenty sixteen, twenty seventeen, I think, some folks at NeurIPS did a parody. They put together this pitch deck of a company that was going to build AGI, and it's ridiculous. Like, the pitch deck is basically step one, dot dot dot, step three: profit. Like, there's nothing in there. And this was just on the cusp of when AI

(35:39):
went from being sort of a joke, like, you wouldn't call your work AI if you were serious, to where we are now. And what these people didn't realize when they were doing their pitch deck was that there were some folks in the audience who had already stepped over into that, like, you know, AI-true-believer, we're-going-to-make-a-lot-of-money-on-this step. People came up and offered them money for their fake company.

Speaker 1 (36:00):
That's great.

Speaker 2 (36:02):
Yeah, so it's, like, the dot-com boom. Yeah, there's just, like, whatever-the-company's-called dot com.

Speaker 1 (36:07):
They're ready for the next thing, right? They've, like, kind of stalled out post-iPhone, and they're, like, ready for the next big thing, whether it's here or not. And so, yeah, the gee-whizzification, or, like, the gee-whiz results from OpenAI and ChatGPT, I feel

(36:28):
like they were, like, good enough: let's get this thing going, let's get this machine cranking out. The idea of describing, like, the mistakes that ChatGPT makes as hallucinations... like, the marketing that's being done across the board here is pretty impressive.

Speaker 3 (36:45):
The fact that ChatGPT even uses the word I, right? That's a programming choice. Yeah, it could have been otherwise, and that would have made things much, much clearer.

Speaker 1 (36:55):
Right. Yeah, ChatGPT is trained on the entire internet.

Speaker 3 (36:59):
Not true.

Speaker 1 (37:00):
Yeah, that was something that I accepted when it first started. I'm like, well, this thing hallucinates and its brain is the entire internet. What could go wrong?

Speaker 3 (37:11):
So the thing about the phrase entire internet is that the internet is not a thing where you can go somewhere and download it, right? It is distributed, and so if you are going to collect information from the internet, you have to decide which URLs you're going to visit, and they aren't all listed in one place. So there's no way to get the entire internet. There's some choices happening already.

Speaker 2 (37:28):
Yeah.
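
A toy sketch of the choice Emily is pointing at, with made-up seed URLs: any crawl has to start from somebody's chosen list, follows only the links it happens to find, and silently skips whatever it cannot reach, so "the entire internet" really means "what this crawl saw from these seeds".

import re
from urllib.parse import urljoin
from urllib.request import urlopen

seeds = ["https://example.com/"]  # somebody chose this starting point
seen, frontier = set(), list(seeds)

while frontier and len(seen) < 10:  # tiny page budget for the sketch
    url = frontier.pop()
    if url in seen:
        continue
    seen.add(url)
    try:
        html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
    except Exception:
        continue  # unreachable or non-text: silently skipped
    # Crude link extraction; a real crawler parses HTML properly.
    for link in re.findall(r'href="(http[^"]+)"', html):
        frontier.append(urljoin(url, link))

print(f"'the entire internet', as seen from our seeds: {len(seen)} pages")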

Speaker 4 (37:28):
And what they mean when they say the entire internet, I mean, most of the time they mean most of the English-language available internet. And although they've got some other languages, much of the work that has been translated has actually been machine translated from English into another language. So, for instance, you know, a bunch of

(37:49):
the URLs that they have are from the Google Japanese patents, in Common Crawl, in the Dodge paper. Yeah, a lot of it's been machine translated from English into Japanese, and it's just available through, like, Google's Japanese patent site. And so a bunch of this stuff is, you know, just loaded in, and...

Speaker 2 (38:07):
It's and so and so.

Speaker 4 (38:09):
I mentioned this data set, Common Crawl. In the GPT-2 paper I think they cite it, or it's the GPT-3 paper... I think it's the GPT-3 paper, and they said, yeah, it's the Common Crawl. This is a data set that they say they use, which is this kind of weird, like, nonprofit that was collecting a bunch of data that could be useful for some limited scientific inquiry. But then it

(38:33):
became... now they've completely rebranded themselves. They're saying, we have the data that, you know, fuels large language models. And Mozilla, the Mozilla Foundation folks, the corporation that also makes the web browser Firefox, they did this report on Common Crawl and kind of what's behind it, too, and found, you know, the weird idiosyncrasies of it. But yeah, when

(38:56):
they say the whole internet, I mean, yeah, it's these curated data sets. You can't get the whole internet. That's just a falsity.

Speaker 2 (39:04):
But for our listeners, you can get it from Jack
and I. We do have that file.

Speaker 4 (39:08):
So if you're wondering, it's just one large ZIP file.

Speaker 2 (39:12):
Yeah, yeah, exactly. Yeah, it's on pirate Bay. Check it out.

Speaker 1 (39:17):
Yeah yeah, and we read it every morning to prepare
for this show.

Speaker 2 (39:22):
Just the whole thing, printed out and downloaded.

Speaker 1 (39:24):
Yeah. Uh, let's take a quick break and we'll be
right back. And we're back. So yeah, just in terms
of juxtaposing, like what these companies are saying and the

(39:45):
mainstream media is buying versus what is actually happening. So
I was reminded of Sam Altman telling The New Yorker
that he keeps like a bag with cyanide capsules ready
to go in case of the AI apocalypse. So you know,
he is an expert who gets billions of dollars richer

(40:08):
if people think his technology is so powerful that he's, like, freaked out by it. But that's something that, I don't know, I've seen reprinted in, like, long articles about the danger of AI. And then there's this other, like, more real-world trend, where, like, you

(40:29):
talk about a Google engineer who admitted they're not going to use any large language model as part of their family's healthcare journey.

Speaker 4 (40:39):
Oh, that was a Google... that was a Google senior vice president, not an engineer, a senior Google VP. Greg Corrado is one of the heads of Google Health itself.

Speaker 1 (40:55):
There is, yeah. And there's also this story... from your show, which has a Fresh AI Hell segment at the end, where you talk about, just, you know, examples, headlines, that...

Speaker 2 (41:06):
Are just fresh hell.

Speaker 1 (41:08):
Yeah, it's just fresh out of hell.

Speaker 4 (41:09):
It's new versions of.

Speaker 2 (41:11):
Yeah.

Speaker 1 (41:12):
The one about Duolingo, from a recent episode, where they're getting rid of human translators, firing them, cutting the workforce, replacing them with translation AI, even though the technology isn't there yet. But the point that you were making on the show is that they're willing to go forward with

(41:33):
that because the user base won't notice the difference until they're in Spain and need to get to a hospital and are asking for the biblioteca. You know, like, it's... they are specifically an audience who is not going to know how bad the product that they're getting is, because they're

(41:54):
just, like, not in a position to know that, by the nature of the product. And so just this distinction between it being hyped in the mainstream media, and, like, these long-read, like, New Yorker or Atlantic articles, as: this is a future that we should be scared of, because, like, it's going to become self-aware and

(42:15):
Sam Altman is freaked out. And then what it's actually doing, which is just making everything shittier around us. That is, I think, a big kind of chunk that I took away from your show, that was just, like, oh yeah, that makes way more sense. That feels much more likely to be how this thing progresses.

Speaker 3 (42:38):
Yeah. So the AI doomerism, which is when Altman says I've got my bug-out bag and my cyanide capsules in case the robot apocalypse comes, or when Geoff Hinton, who has a Turing Award for his work on the specific kind of computer program, the, you know, fancy statistics, that's behind these large language models. He's now concerned

(42:58):
that it's on track to become, I mean, smarter than us, and it's gonna... like, these piles of linear algebra are not going to combust into consciousness. And any time someone is, you know, pointing to that boogeyman, what they're doing is they're basically hiding the actions of the people in the picture, who are using it to do things like make a shitty language-learning product, because it's cheaper to do

(43:20):
it that way than to pay the people with the actual expertise to do the translations.

Speaker 4 (43:23):
Right, yeah, yeah. And it's just leading to... I mean, I like this kind of thought on this, I mean, this kind of process. And Cory Doctorow has got this kind of sister concept to AI hype, which is enshittification, which I think the Linguistic Society of America... it was their overall...

Speaker 3 (43:43):
Yeah, the American Dialect Society. Yeah, picked it as the overall word of the year for twenty twenty three. But enshittification is something very specific, right? It's not just, like, we're now swimming in AI-extruded junk, right, AI in quotes, as always. It's something more: the company creates a platform that you sort of get lured into because initially it's really useful. So this is, like...

(44:05):
you think about how Amazon was great for finding products, sucked for your local brick-and-mortar businesses, but as a consumer it was super convenient, because you could find things. But then the companies basically turn around and they extract all of the value that the customer is getting out of it, and then they turn around to the other parties, that is, the people trying to sell things, and they extract value out of them. So you start off with

(44:27):
this thing that's initially quite useful and usable, and then it gets enshittified in the name of making profits for the platform. And that's, like, the specific thing about enshittification.

Speaker 4 (44:36):
And I would say there are some kinds of processes you can think about with enshittification, the kind of idea that you have to rush to a certain kind of market, that you have a monopoly on this kind of thing. And, I guess, yeah, I mean, the thing about large language models that I think we get on and talk a lot about is that large language models are kind of born shitty with content. So it's not like

(44:59):
the platform started and that platform monopoly led to this kind of process of enshittification. It's more like: you decided to make a tool that is a really good word-prediction machine, and you use it as a substitute in places in which people are meant to be speaking kind of with their own tone, with their own voice, with their own forms and their own

(45:22):
expertise. And so it's kind of, yeah, it's born shitty. And so, you know, this kind of thing I think is really helpful. It makes me think of a thing I saw a few times on Twitter, where people are like, well, if you're an expert in any of these fields and you read content by large language models, if you actually know anything about this, you're going to know that it's absolute bullshit, right,

(45:46):
you know. If you ask it to write you a short, you know, treatise on, I don't know, sociological methodology, something specific that I know a little bit about, it's going to be absolute bullshit, right? But good enough for computer engineers and, you know, higher-ups at these companies.

Speaker 1 (46:06):
Yeah.

Speaker 3 (46:06):
So here's an example. The other day, I came across an article that supposedly quoted me, out of this publication from India called Bihar Prabha, and I'm like, I never said those words. I could see how someone might think I would. And then I searched my email, like, no, I never corresponded with any journalist at this outfit. So I wrote them, I said, fabricated quote, please take it down and print a retraction, which they did. And they wrote

(46:28):
back and said, oh yeah, actually we prompted some large language models.

Speaker 1 (46:31):
To create that, and posted it, yes. Because it seems like you might have said it. And the large language models... like, isn't that what hallucinations are a lot of the time? Just the large language model making up stuff that it seems like is what the person wants it to say, exactly.

Speaker 3 (46:50):
But here's the whole thing. Every single thing output from
a large language model has that property. It's just sometimes
it seems.

Speaker 1 (46:55):
To make sense. The whole thing is trying to do a trick of, like, predict... I figured out what you wanted me to say, ha ha. But it's like, well, what I wanted you to say is not always... that's not how I want my questions answered. That's actually a wildly flawed way of, like, answering people.

(47:16):
It's definitely something that I do in my day-to-day life, because I'm scared of conflict and a people pleaser. But I'm not a good scientific instrument, for that reason, you know. Like, but you...

Speaker 4 (47:28):
Just got you just an avoidant attachment style, which as
a as another avoidance.

Speaker 1 (47:36):
They've just made me a scientific model that's terrible.

Speaker 4 (47:41):
Like you, I can't believe this.

Speaker 2 (47:44):
I was like, I don't know if you saw that headline where Tyler Perry was like, I was going to open an eight-hundred-million-dollar studio, but I stopped the second I saw what Sora, this video generative AI, could do, and I realized we're in trouble. He said, quote, I had no idea until I saw recently the demonstrations of what it's able to do. It's shocking to me.

(48:07):
And he's basically saying, he's like, you could make up a pilot and save millions of dollars, this is going to have all kinds of ramifications. That feels, like, half just ignorance, because this person's like, oh my god, look, like, total wow factor, but also maybe hype. But I'm also curious, from your perspective, what are the actual dangers that we're facing? You know,

(48:29):
because right now I think everything is just all about: these are the jobs it's going to take. I think in the LLM episode, where the LLM predicted what the potential of LLMs was and the jobs it could take, in an actual paper... yeah, where it's like, huh, you know. Like, just sort of the unethical nature of how even these companies are doing

(48:51):
research and creating data to support this.

Speaker 1 (48:53):
Can we just stop? Can you just stop and explain
exactly what the methodology of that paper was? No?

Speaker 2 (48:58):
Please. I will allow the experts to do it, because it's absolutely bonkers to hear. Because any person who's, like, ever tried to look at a study or something, and you look at the methodology, like...

Speaker 3 (49:11):
So methodology is a very kind term.

Speaker 2 (49:13):
For what it was. Yeah, truly, truly. So, Alex...

Speaker 3 (49:18):
Do you want to summarize real quick what it was?

Speaker 1 (49:20):
Then?

Speaker 3 (49:20):
There were two different things we were looking at. It was something that came out of OpenAI and something from Goldman Sachs, and the Goldman Sachs one was silly, but not as silly as the OpenAI one.

Speaker 4 (49:30):
Yeah, I mean, getting into the detail. I went through this and I puzzled over this paper. I, like, poked my friend who's a labor sociologist, I'm like, what the hell is going on here? And, you know, okay, so there's this kind of metric that you can use to judge how hard a task is, that the government collects, and there's this kind of job classification. They rate them

(49:50):
from, you know, one to seven, effectively. And so what Goldman Sachs said was, well, basically, anything from one to four, probably a machine can do. And you're like, okay, that's kind of silly, those are huge assumptions there. I understand, though, as a researcher you have to make some assumptions when you don't have great data. But what OpenAI did is they asked two entities what,

(50:16):
like, what could be automated. They asked, one, other OpenAI employees: hey, what jobs do you think could be automated? Already hilarious, because, you know, they're not doing those jobs, and they're pretty primed to think that their technology is great. And then they asked GPT-4 itself. They prompted it and said, hey,

(50:36):
can we automate these jobs? So, you know, and so...

Speaker 1 (50:40):
You'll never guess what the answer was.

Speaker 4 (50:42):
You'll never guess. You'll never guess.

Speaker 3 (50:45):
They took the output of that as if it were data, and then made, like, you know, these ridiculous graphs and blah blah blah. It's just, like, the whole thing is fantasy.

Speaker 2 (50:54):
Right. So is the danger there just sort of this reliance... Like, I guess, if we're classifying the sort of threats, to our sense of, like, how information is distributed, or what is real or what is hype or whatever, or whether they're actually taking jobs: what is something that people actually need to be aware of, or to sort of prepare themselves for,

(51:15):
in how this is going to disrupt things in a way that, you know, isn't necessarily the end of the world, but is definitely changing things for the worse?

Speaker 3 (51:22):
There's a whole bunch of things, and the one that I'm sort of most going on about these days is the threats to the information ecosystem, so I want to talk about that. But there's also things like automated decision systems being used by governments to decide who gets social benefits and who gets them yanked, right? Things like, Dr. Joy Buolamwini worked with a group of people in, I want to say, New York City, who were

(51:43):
pushing back against having a face recognition system installed as their, like, entry system. So they were going to be continuously surveilled, because the company who owned the building that they lived in, or maybe it was... I'm not sure if it was going...

Speaker 4 (51:57):
To be an apartment complex, yes, yeah.

Speaker 3 (51:59):
Yeah, wanted to basically use biometrics as a way to have them gain entry to their own homes, where they lived. So there's dangers of lots of different kinds. The one that's maybe closest to what we've been talking about, though, is these dangers to the information ecosystem, where you now have synthetic media, like that news article I was talking about before, being posted as if it

(52:21):
were real, without any watermarks, without anything that either a person can look at and say, I know that's fake. Like, I knew because it was my name, and I knew I didn't say that thing, right, but somebody else wouldn't have. And there's also not a machine-readable watermark in there, so you can't just filter it out, and

this stuff goes and goes. So there was... oh, a few months ago, someone posted a fake mushroom foraging guide

(52:43):
that they had created using a large language model to Amazon, as a self-published book, so that it's just up there as a book. And coming back around to Sora: those videos, like, they look impressive at first glance, but then, just like Alex was saying about the art having this sort of facile style to it, there's similar things going on in the videos. But still, like, it should

(53:04):
be watermarked, it should be obvious that you're looking at something fake. And what OpenAI has done is they've put this tiny, little faint thing down in the lower right-hand corner that looks like some sort of readout from a sensor going up and down, and then it swirls into the OpenAI logo. And it's faint, and it's in the same spot that Twitter puts the button for turning the sound on and off, so it's hidden by that

(53:25):
if you're seeing these things on Twitter. And it's completely opaque, right? If you are not in the know, if you don't know what OpenAI is, if you don't know that fake videos might exist, that doesn't tell you anything, right? So these are the things I'm worried about.

Speaker 4 (53:37):
And the things I'm really worried about is, you know,
these things doing a pretty terrible job at producing written
content and videos and images. And so it's not that
they could replace a human person, but it just takes
a boss or a VP to think that it's good enough. Right,

(53:59):
and then this replaces whole classes of occupation. So again
talking to Carl Ortiz, who's a concept artist. She's done
work for Marvel and and DC and huge studios and
Magic Togathering and you know, and she's basically saying, after
mid journey stability, AI produces this incredibly crappy content. Jobs

(54:23):
for concept artists have really dried up. They've gone, and
it's really hard. And I mean, especially for folks who
are just breaking into the industry. You might just be
trying to get your, you know, your work out
there for entry-level jobs. They can't find anything right now.
And so imagine what that's going to replace, what that's
going to encroach on, right? I mean, that's the kind of

(54:45):
unique thing about creative fields and coding fields
and whatnot. And then I would say, yeah,
this automated decision-making in government and hiring.
I mean, those are, you know, deeply, you know, terrifying, right?
And I mean, this is already being deployed, I mean,
at the US-Mexico border. There's kind of massive... The

(55:06):
Markup actually just put out this interview with Dave Maass,
who's at the Electronic Frontier Foundation. I'd have to
look this up, but it's basically about a survey
of, like, surveillance technology at the southern border, and
Dave Maass is at EFF. The Markup

(55:26):
published something with him. It was like a virtual
reality tour of surveillance technology or something wild like that.

Speaker 3 (55:32):
I was gonna say, there's also things like ShotSpotter,
which purports to be able to detect when a gun
has been fired. And this has been deployed by police
departments all over the country, and there's, you know, there's
no transparency into how it was evaluated, or how it even
works, or why you should believe it works. And so
what we have is a system that tells the cops
that they're running into an active shooter situation, which is
definitely a recipe for reducing police violence.

Speaker 2 (55:55):
Right, right, yeah. And there
was a recent investigation that showed that
it's almost always used in neighborhoods.

Speaker 4 (56:07):
Yeah, the communities of color. Yeah, the
Wired piece on that, basically.

Speaker 1 (56:13):
Yeah.

Speaker 4 (56:13):
I think they said about, what, seventy percent of the
census tracts? Something ridiculous like that.

Speaker 3 (56:17):
Yeah, yeah, it's absurd. And then there's things like Epic,
which is an electronic health records company, partnering
with Microsoft to push synthetic text into so many
different systems, so that you're going to get, like, reports
in your patient records that were automatically generated and then
supposedly checked by a doctor who doesn't have time to

(56:37):
check them thoroughly, right? And they're going to be doing
things like, you know, randomly putting in information about BMI
when it's not even relevant, or misgendering people all over the place.
You know, this kind of stuff is going to hurt.

Speaker 1 (56:50):
Yeah.

Speaker 3 (56:50):
And I want to add, on the issue of, like,
entry-level jobs drying up for people who do, for example, illustration:
Dr. Joy Buolamwini points out that we're getting what's
called an apprentice gap, where the easy
stuff gets automated. And I don't think this is just
in creative fields. But the positions where you're doing the
easy stuff and you're learning how to master it and

(57:10):
you are working alongside someone who's doing the harder stuff.
If that gets replaced by automation, then it becomes harder
to train the people to do the stuff that requires expertise.

Speaker 1 (57:19):
Right, yeah. And it's easier to do in creative fields
because there's such a, just, inability for executives to, you know...
like, they don't know it. They've never known anything about,
like, what is quality creative work. So I feel
like it's much easier for them to just be like, yeah,
get rid of that. And, well, yeah, like, the way
that we'll find out that that isn't working is

(57:41):
the quality of the creative output will be far worse.

Speaker 4 (57:46):
Right. I mean, what all those folks really
tend to care about are, you know, content and engagement metrics.
You know, you can't actually have something that's kind of
known for quality or creativity.

Speaker 2 (57:57):
Right.

Speaker 4 (57:58):
It does remind me a little bit of, kind of,
you know, the first rebels against automation, the Luddites, and,
you know, the kind of way in
which they did have this apprentice guild system, in which
they trained for, you know, a decade or so before
they could, you know, perhaps go ahead and open their
own shop. In a way, you know, those folks

(58:19):
were replaced by these water frames, which were
called water frames because they were driven by water power,
but then, you know, effectively operated by unskilled people,
usually children, doing incredibly dangerous work. But yeah,
folks that had been training for this, the apprentices, were
incredibly steaming mad.

Speaker 1 (58:41):
Yeah, and then we made their name a synonym for like.

Speaker 2 (58:48):
Hater.

Speaker 4 (58:49):
Yeah, there have been some nice efforts to reclaim the name,
by Brian Merchant and, yeah, some folks.

Speaker 2 (58:58):
Yeah.

Speaker 1 (58:58):
So, just in comparison to crypto, it feels like
the adoption here, the hype cycle here, is more widespread
than crypto. Like, with crypto, there was that moment where
we saw the person behind the curtain, you know,

(59:19):
the people behind it, there was the fall of crypto. With AI,
I don't feel like there is any incentive for
any of the stakeholders involved to, yeah,
just come out and be like, yeah, it was bullshit.
You know, there's just so much buy-in
across the board. Where, I guess, we've already talked

(59:43):
about where you see this going. But do
you think there's any hope for this getting kind of
found out, the truth catching on? Or do you think
it's just going to have to be one hundred years
from now when somebody changes their mind about AI haters,
and it's like, actually, they were onto something, the way

(01:00:05):
we are about Luddites.

Speaker 3 (01:00:07):
We're still trying. That's what we're up to with the podcast,
right, and a lot of our other public scholarship. There was
an interesting moment last week when ChatGPT sort of
glitched and was outputting nonsense. Yes, I mean, was
it actually Spanglish, or were people just calling it that
because it had some Spanish in it?

Speaker 4 (01:00:22):
It had some. It wasn't all, I mean, it was
doing some Spanglish stuff. But this stuff was just even
more inscrutable than usual. It was just complete nonsense.

Speaker 3 (01:00:32):
Yeah, yeah. And the sort of OpenAI statement
about what went wrong basically described it for what it is.
They had to, like, say something other than what they
usually say. And then there was a whole thing with
Google's image generator, where, you know, the baseline
thing is super duper biased and, like, makes everybody look

(01:00:52):
like white people. And there's this wonderful thread on Medium,
and here I'm doing the thing where I can't think
of a person's name, but it's a wonderful post on
Medium where someone goes through this thread of pictures,
which I think were initially on Reddit, where someone asked one
of these generators to create warriors from different cultures taking
a selfie, and they're all doing the American selfie smile.

(01:01:13):
And so there's this really uncanny valley thing going on
there, where these people from all these different cultures are
posing the way Americans pose. So, huge bias in these things, underlyingly.
Google has some kind of a patch where, basically, whatever
prompt you put in, they add some additional prompting to
make the people look diverse. And then, last week or
so, someone figured out that if you asked it to

(01:01:33):
show you a painting of Nazi soldiers, they're not all
white, because of this patch, right? So, yeah,
Google's backpedaling on that, I think, was ridiculous. There was
some statement in there about how they certainly don't intend
for their system to put out historically inaccurate images. I'm like,

(01:01:54):
what the hell, it's a fake image, like, there's no
accuracy there, no matter what. These mishaps may have sort of
pulled back the curtain for a broader group of people.
There's some true believers out there who are not going
to be reached, sure, but I think it may have
helped for some slice of society.
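
To make the mechanism concrete: here is a minimal sketch of the kind of prompt-level patch being described, where diversity modifiers are appended to the user's prompt before it reaches the image model. The function and the modifier list are hypothetical illustrations of the general technique, not Google's actual implementation.

```python
import random

# Hypothetical modifiers a platform might splice into prompts; illustrative only.
DIVERSITY_MODIFIERS = [
    "people of diverse ethnicities",
    "a range of genders",
    "varied ages",
]

def patch_prompt(user_prompt: str) -> str:
    """Blindly append diversity hints to whatever the user asked for."""
    hint = ", ".join(random.sample(DIVERSITY_MODIFIERS, k=2))
    return f"{user_prompt}, depicting {hint}"

# The failure mode described above: the patch has no notion of historical
# or contextual appropriateness, so it rewrites this prompt too.
print(patch_prompt("a painting of Nazi soldiers"))
# e.g., "a painting of Nazi soldiers, depicting a range of genders, varied ages"
```

Because the patch operates on the prompt string alone, it cannot tell contexts where demographic diversity makes sense from ones where it produces historical absurdity, which is exactly what the Nazi-soldiers example exposed.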

Speaker 2 (01:02:11):
Yeah, I know some people who are like, how do
we know that Reinhard Heydrich didn't have dreadlocks?

Speaker 1 (01:02:19):
Wow?

Speaker 2 (01:02:19):
But okay, sure, go off.

Speaker 4 (01:02:21):
Yeah. And I mean, I think this is such
an interesting question, right? Like, when is
the AI bubble going to pop?

Speaker 3 (01:02:28):
Right?

Speaker 4 (01:02:28):
And I mean, in some ways it feels like, you know,
as much as we can do, we're kind of prodding it,
right, you know, and seeing, you know. And one thing
that I offhandedly said one time, and Emily loves,
is ridicule as praxis. So, yeah, and I will, in
a turn on one of Emily's great quotes, you know,

(01:02:51):
oh gosh, I'm going to mess it up right now.
It's 'resist the urge to be impressed.'
It's much better when she says it.
And it's kind of the idea that some of
it is a bit of the sheen, right? But it
feels like at some point, you know, in the

(01:03:13):
kind of operations of these things, there'll
be enough buy-in, especially with automation, that it's hard to
reverse a lot of the automation without just a huge fight.
And so, I mean, I think something that helps are,
you know, worker-led efforts. And one of
the most awesome things that we've seen is something

(01:03:34):
a scholar, Blair Attard-Frost, calls AI counter-governance,
and one example she gives is the
WGA strike, and how folks struck for one hundred and
forty-eight days, and after the strike they not only got
a bunch of new kinds of guarantees for minimum pay

(01:03:59):
when it came to streaming and the residuals they get
for it, they also, you know, have to basically be
informed if any AI is being used in the writers'
room. They can't be forced to edit or re-edit
or rewrite AI-generated content. Everything has to
be basically above board. I mean, it isn't

(01:04:20):
as far as it could have gone, it didn't ban it
outright, right? But if there's any use of it,
it has to be disclosed, and you can't bring in these
tools whole cloth to replace the writers' room.

Speaker 1 (01:04:32):
Yeah. Well, Alex and Emily, it has been such a
pleasure having you on the show. I feel like you
could keep talking about this for hours, for days. But thank
you so much for coming on. And, yeah, I would
love to have you back. Where can people find you,
follow you, hear your podcast, all that good stuff?

Speaker 3 (01:04:51):
Yeah, so the podcast is called Mystery AI Hype Theater
3000. You can find it anywhere fine podcasts
are streamed, but also on PeerTube, if you prefer
that kind of content as video. We start off
as a Twitch stream and then it shows up that way. I'm on
all the socials. I tend to be Emily M. Bender,
so on Twitter or Bluesky. On Mastodon, I'm Emily M.

(01:05:11):
Bender at dair-community.social, and
I'm very findable on the web, very googleable at the
University of Washington. How about you, Alex?

Speaker 4 (01:05:20):
Yeah, and if you want to catch the Twitch stream,
it's twitch.tv/dair_institute, usually on Mondays,
usually this time slot. So next week we'll be here. Wait,
I forget. We'll be here eventually.

Speaker 3 (01:05:35):
Yes, today's, today's Tuesday, though.

Speaker 4 (01:05:39):
That's right, that's right. Yeah, so it's actually not Mondays.
Check us out. You can catch me on Twitter and
Bluesky at Alex Hanna, no H at the end. And
then, yeah, our Mastodon server, dair-community.social.
What else? I think those are all the socials we have.

Speaker 1 (01:06:00):
Yeah, amazing. Is there a work of media that you've
been enjoying?

Speaker 3 (01:06:05):
So I am super excited about Dr. Joy Buolamwini's book Unmasking AI,
which came out last Halloween. It is a beautiful sort of
combination memoir slash introduction into the world of how so-called
AI works, how policymaking is done around it, how
activism is done around it. And I actually got to
interview her here in Seattle at the University of Washington

(01:06:26):
last week, which was amazing.

Speaker 1 (01:06:28):
Nice. How about you, Alex?

Speaker 4 (01:06:31):
Yeah, I'd recommend the podcast Worlds Beyond Number. It
is a Dungeons and Dragons live-play, actual-play podcast
that is put together by Brennan Lee Mulligan, Aabria Iyengar,
Lou Wilson, and Erika Ishii. Talking about writing that is

(01:06:51):
not generated by AI, but generated by some very talented
improv comedians and storytellers themselves.

Speaker 1 (01:07:00):
At scale? How do you scale?

Speaker 4 (01:07:02):
Oh my god, it is, actually. Surprisingly, I mean, that
podcast is actually, I think, the single most funded thing
on Patreon. Broke a bunch of records for it. It
had about forty thousand subscribers. People want good content, yeah,
from humans, high art, you know. People want this vibrant storytelling,

(01:07:24):
amazing creation, you know, and so the more we
can plug that stuff, the better.

Speaker 1 (01:07:30):
By the way, you can also find Mystery AI Hype
Theater 3000, like, where podcasts are, in case
you don't do Twitch. Yeah, like, I was listening on, you know,
Spotify or whatever.

Speaker 2 (01:07:43):
You weren't on Twitch.

Speaker 1 (01:07:44):
I wasn't on Twitch this weekend. I was taking,
I was taking a break, you know. It's just, like,
too much, too much Twitch.

Speaker 2 (01:07:50):
Couldn't get a hyperion.

Speaker 1 (01:07:51):
That's right. Okay, all right, Miles, where can people find you?
What's a work of media you've been enjoying?

Speaker 2 (01:07:56):
At the @-based platforms, at Miles of Gray. Also,
if you like the NBA, you can hear Jack and
I just go on and on about that on our
other podcast, Miles and Jack Got Mad Boosties. And if
you like trash reality TV like me, I, like, I
comfort myself with it, check me out on 420
Day Fiancé, where we talk about 90 Day Fiancé. A tweet

(01:08:16):
I like: this is just, kind of, a historic day,
I would have to say, for the Internet. We missed
an anniversary on Sunday. Apparently the very famous clip of
Pete Weber, the bowler who won and came out with
the iconic phrase 'Who do you think you are? I am!',
had its twelfth anniversary, and I just have to

(01:08:39):
play that audio today because it's one of our favorites.
Here we go, are you?

Speaker 1 (01:08:49):
Who are all?

Speaker 2 (01:08:52):
Right? Who do you think you are? Who do you?

Speaker 1 (01:08:56):
Oh?

Speaker 2 (01:08:56):
Man?

Speaker 1 (01:08:57):
The whole thing is good. Also, the run-up to
that is great. That's right. I did it.

Speaker 2 (01:09:03):
I did it?

Speaker 1 (01:09:03):
God damn it?

Speaker 4 (01:09:05):
What, I, yeah, I play an amateur sport. And just,
like, the exhilaration you get, and you just say anything.

Speaker 2 (01:09:16):
Yeah, yeah, I know.

Speaker 4 (01:09:17):
That's I love that clip.

Speaker 2 (01:09:20):
I respect 'Who do you think you are? I am!' It's like, yeah,
you've glory-fried your brain circuits, and now
that's what you're saying. And the other one, from at
the Dark Prowler: it's a screenshot of an Uber notification
on the phone. It says, Young Stroker the Body Snatcher
will be joining you along your route. You're still on
schedule to arrive by seven fifty-four.

Speaker 1 (01:09:41):
Okay, by the way, chicken nuggets, glory-fried? Thanks, yeah,
that'd be great. Oh yeah, a tweet I've been enjoying:
Caitlin at Kate Holes tweeted, overnight oats sounds
like the name of a racehorse who sucks. That

(01:10:05):
was an old one that was apparently September twenty fifth,
twenty twenty two.

Speaker 2 (01:10:09):
Good one, Go banger.

Speaker 1 (01:10:10):
You can find me on Twitter at Jack underscore O'Brien.
You can find us on Twitter at daily Zeitgeist. We're
at the Daily Zeitgeist on Instagram. We have a Facebook
fan page and a website, dailyzeitgeist.com, where
we post our episodes and our footnotes. We link off
to the information that we talked about in today's episode,
as well as a song that we think you might enjoy.

(01:10:30):
Miles, is there a song that you think people might enjoy?

Speaker 2 (01:10:33):
This is just a nice little instrumental clip, not clip,
a full-on track from a London-based producer who goes
by Vegyn, V-E-G-Y-N, and the track is called
'A Dream Goes On Forever,' and it's a super dreamy track.
It's just kind of an interesting production. So maybe, you know,
maybe, you know, just check out some Midjourney, you

(01:10:54):
know, images, some Sora AI videos, and just blast this
in your headphones. Just kind of go on, for a dream
goes on forever.

Speaker 3 (01:11:00):
Take the music and leave the synthetic media.

Speaker 1 (01:11:04):
Just fully embrace, sink into the music, close your eyes, and,
actually, your brain will produce the Midjourney images.
I've found that, like, at night when I sleep, sometimes
my brain produces Sora images.

Speaker 2 (01:11:17):
This is, like, you sound like a
youth pastor talking to kids about God. It's like,
you know, where it's really, the AI is actually up
in here, kids.

Speaker 1 (01:11:25):
Yeah, want to get.

Speaker 4 (01:11:28):
Yeah, yeah, yeah. The real training data we had is
the word of

Speaker 1 (01:11:33):
God. Exactly, exactly what I need.

Speaker 2 (01:11:36):
The Daily Zeitgeist is a production of iHeartRadio.

Speaker 1 (01:11:38):
For more podcasts from iHeartRadio, visit the iHeartRadio
app, Apple Podcasts, or wherever you listen to your
favorite shows. That is going to do it for
us this morning. We are back this afternoon to tell
you what is trending, and we'll talk to you all then.

Speaker 2 (01:11:49):
Bye bye.
