Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Hello the Internet, and welcome to Season three oh five, Episode two of The Daily Zeitgeist, a production of iHeartRadio.
This is a podcast where we take a deep dive
into America's shared consciousness. And it is Tuesday, September nineteenth.
Speaker 2 (00:16):
Three oh five. Oh yeah, you know, oh yeah, National IT Professionals Day. It's also National Voter Registration Day. Hey, make sure you register to vote, because a lot of people would probably hope you aren't doing that. And also National Butterscotch Pudding Day. Shout out to all of my lovers of the butterscotch flavor, including myself.
Speaker 1 (00:37):
Yeah, I think that's one of the better pudding flavors.
Speaker 2 (00:40):
It's also, it's also Talk Like a Pirate Day. But I just, I.
Speaker 1 (00:44):
Didn't want to, you know. You don't want to do that to everybody with children.
Speaker 2 (00:48):
No, no. Is that a thing you would do as a parent? Would you be like, hey, guys, it's Talk Like a Pirate Day, we ready to annoy ourselves to death?
Speaker 1 (00:56):
In this household, we look down on international days that
are made up for no reason. We scoff at them.
Speaker 2 (01:03):
Miles, right, right, right, right, right. We're out here living in the real world. There's real consequences to our
Speaker 1 (01:08):
Actions, and when we scoff at them, we say arr, arr, because it's always International Talk Like a Pirate Day. My name is Jack O'Brien, aka El Jacko. That's courtesy of Max Her twelve sixteen, because they think I look like El Chapo's son. And oh okay, yeah, there was a new picture of El Chapo's son being extradited to the US
(01:32):
on board a plane in handcuffs. People, once again, anytime a picture of that fellow hits the news, I am reminded, oh okay, hey, this you? Anyways, it is not. I'm thrilled to be joined as always by my co host, mister Miles
Speaker 2 (01:52):
Gray. Well, okay, I'm far out of range.
Speaker 3 (01:57):
One bar is how I feel, my phone a-searing hot, lying naked on my thigh, irradiation changed my guts, do something real.
Speaker 1 (02:08):
They're wide awake and I can see a Kuato has
Speaker 4 (02:12):
Been born.
Speaker 2 (02:16):
To Study, on Discord. Yes, I saw that the iPhone twelve had higher levels of radiation, yeah, than the, than the French government would allow. And I thought, hey, maybe that would just create a little Total Recall Kuato-type character.
Speaker 1 (02:31):
And that's fun, that's a fun one for you to have, you know, fun news story for you to have a laugh with, because you, what, what did it turn out?
Speaker 2 (02:39):
Turned out I had an eleven, so I got that twelve, baby. But.
Speaker 1 (02:44):
No fair, you have to name your, name your Kuato after me. Give me that Kuato, Miles. Yes, we are thrilled, fortunate,
blessed to be joined in our third seat by an
Assistant Professor of Technology, Operations and Statistics at NYU, where
his research focuses on the intersection of machine learning and
(03:05):
natural language processing, which, of course means you know, his
interests include conversational agents, hierarchical models, deep learning, spectral clustering.
You guys are all probably finishing this sentence with yeah,
of course, spectral estimation of hidden Markov models, yes, obviously,
and of course time series analysis. Please welcome Doctor João Sedoc,
(03:29):
Doc, Doctor João.
Speaker 4 (03:33):
Hi. Thank you, yeah, thank you Jack and Miles for
having me. It's a pleasure to be here, the pleasure
to have you.
Speaker 2 (03:40):
We're glad you're here because we got we got a
lot of questions about a lot of stuff, some, mostly to do with what your area of expertise is.
Some things I just need a second opinion on just around.
Speaker 1 (03:52):
The career of Allen Iverson. Is it a bad joke if it's been used in, like, a cable, I believe it's a DirecTV commercial that's using it? They're like, and we have our very own AI that helps you select shows, and then Allen Iverson's sitting on the couch with the person. Wait, does that make it off limits?
Speaker 2 (04:11):
Yeah?
Speaker 1 (04:11):
Yeah, that's that's a real commercial. And I'm going to
be running with that joke all day today because I'm
not any better than the comedy writers at DirecTV.
Speaker 2 (04:24):
I know. Yes. Yeah, but what, Doctor Sedoc, Doc?
Speaker 1 (04:29):
Is that good? Doctor Sedoc, Doc? Is that a good
way to address you?
Speaker 4 (04:33):
Yeah, sure, I mean I'm very easy about things. Okay, okay, how about, well, I've never been called that. I had some students in my class call me J. Wow. Wow. Yes, yeah, is it João? So, so the pronunciation is João, it's
(04:57):
Brazilian Portuguese. That's what I'm named.
Speaker 1 (05:01):
I go by.
Speaker 4 (05:03):
João. How about it?
Speaker 2 (05:07):
Doctor J. Even Doctor J.
Speaker 1 (05:15):
Amazing?
Speaker 4 (05:16):
All right?
Speaker 1 (05:16):
Well, yeah, I mean the main thing, we've brought you here today to ask the big questions around AI, such as, what is that?
Speaker 2 (05:30):
What is AI?
Speaker 1 (05:32):
And we'll follow from there. But, you know, questions that
reveal that we're smart people.
Speaker 2 (05:38):
Obviously.
Speaker 1 (05:39):
No, I actually do think that the fact that AI
is used as the phrase to describe a bunch of
different types of technology probably has something to do with
where people's fear comes from and where people's like kind
of confusion, where my confusion up to this point has
come from. But yeah, so we're going to talk to
(06:00):
you about a bunch of that stuff and plenty more. But
first we do like to ask our guests, what is
something from your search history that is revealing about who
you are or where you are in life.
Speaker 4 (06:15):
So I think the most revealing thing about my current
search history is probably me searching for which of Pfizer or Moderna had the highest amount of mRNA in
the vaccine, which took me like forever to figure out.
(06:39):
And that was, you know, a wasted thirty minutes of
my day where I could have just slacked someone and
had the answer.
Speaker 2 (06:47):
And wait, why were you trying to discern which had a higher concentration of mRNA?
Speaker 4 (06:53):
And yeah, I have a feeling that, like, you know,
so that was kind of like, okay, the one with
the higher dosage might still be slightly more effective. The new vaccines have come out, or have been approved, just a couple of days ago, right, so I was trying to figure it out.
Speaker 2 (07:12):
Seemed like on a spectrum like Moderna was having the
better results, right like on a continuum, I felt like
the new one, or just in general, because I remember
when the vaccine first came out, it was like, oh,
you got Johnson and we got that Jay and J
and then I was kind of like, I had Moderna,
but as an American consumer, I'm like, but I know
(07:33):
the brand Pfizer more. Yeah. And then I ended up
being like, Okay, good, I got I got that good one,
the Moderna, it turned out, or at least better than the Pfizer, slightly more effective there, I.
Speaker 4 (07:45):
Mean, both are extremely effective.
Speaker 2 (07:47):
But yeah, right right.
Speaker 4 (07:48):
Now, that was my, that was probably the thing in my browsing history today that told the most about me, which is that I'm still, after this many years, paranoid about something that, I mean, everybody in the rest of the world has sort of moved on from.
Speaker 2 (08:05):
Yeah, for better or worse. But yeah, it does seem
like that. Yeah, I'm still I'm with you on that
because like I still somehow have an ego like that
attached to what brand of fucking vaccine I got, Like, well,
you know, I got Moderna, so you know, I'll sit
over there with the elevated folks.
Speaker 1 (08:21):
And so it's not you're not googling to see which
has the more, what's it called, mRNA in it, because you are worried that it's going
to kill you or cause you to start shaking uncontrollably
like the people in those TikTok videos.
Speaker 4 (08:40):
Just to be clear, I mean, the vaccines have adverse
side effects, but no, I'm in it for Hey, I'm
already you know, old enough that you know that's not
going to really matter. I'm more worried about, you know,
adverse effects of actually getting the disease.
Speaker 2 (09:00):
So yeah, you know, but very scientific answer.
Speaker 4 (09:05):
You know. I was thinking about it, and you know,
it was funny because I was asking the chat bots
and so one thing as I work on this stuff
that's both funny and somewhat annoying to my wife is
that like I'll do a search and then I'll play
around with like three different chat bots, and then like
half an hour later, she's like, have you not been
(09:26):
listening to me? Right? And I haven't. I've been going
trying to figure out.
Speaker 1 (09:31):
Where we're going to order from yet.
Speaker 2 (09:34):
I don't know, but Bing AI thinks that this chain is still in New York. It's not.
Speaker 4 (09:40):
Foolish, foolishness, exactly. Yeah, and so it's, you know, yeah,
I do tend to get distracted while playing with you know,
all the cool new things. And I mean both. I
mean every once in a while, you learn something by
anecdotally testing something, or you know, somebody making a statement like, oh,
(10:02):
you know, did you notice like you know, this kind
of behavior, and I'm like, oh, that's really cool. Okay,
you know, and then if you go down the path
of like trying to understand.
Speaker 2 (10:12):
It, right, cool?
Speaker 1 (10:15):
What, what's something you think is overrated?
Speaker 4 (10:17):
I think that, in terms of the things that are really overrated, like, you know, probably artificial general intelligence, also known as AGI. Occasionally people think about it like the singularity,
where you know, we start to assume that, you know,
(10:37):
the machines are going to take over and we're going
to be in a Terminator world. I think, you know,
the likelihood of that risk is like super overstated.
Speaker 2 (10:49):
Mm hmm okay, yeah, okay, yeah, that's what that's what I.
That's what I that's what I learned.
Speaker 1 (10:54):
Well, you actually just answered the only question we had
on this episode, So I think, sorry, no, no, Yeah.
So I think coming in like and we'll get to
this a little bit. But coming into this week's episode,
like I had, you know, read a couple of articles
on AI, but I think the artificial general intelligence, like
(11:16):
the idea of like some chatbot or you know, language
model or you know, some sort of artificial intelligence deep
brain model, whatever it is, like gains sentience and that
then it like takes over weapons systems immediately was like
(11:36):
the threat that I had in my mind, and after
like doing some research, is no longer something that, well, it doesn't seem to be the one that most people are worried about, at least, and then there are
the people like the well we'll we'll get into it.
But I do think, yeah, that's an important distinction and
(11:59):
kind of gets it that first question of like what
what is AI? Because people use it to stand in
for a bunch of different things. What what something you
think is underrated?
Speaker 4 (12:10):
This is maybe mildly underrated, but I think that the amount of upheaval that AI, let's say this generation of technologies, is going to have in the next couple of years. This means that, like, you know, certain jobs, so a
(12:31):
lot of technology, like in the industrial age, has gone after lower-skill jobs, like jobs that don't require, you know, having skill sets that require a large amount of education.
You know, I think ChatGPT already has surfaced evidence
being able to either replace or facilitate somebody to be
(12:57):
much more productive, and that this will actually cause a
reasonably large amount of change in the workforce globally. You
know that I think will have real impact, right and
could cause problems, you know, with more erosion of middle
(13:19):
class jobs and you know, needing to have like sort
of deeper retraining.
Speaker 1 (13:25):
Interesting. What are the, yeah, I guess, I guess let's, let's dig into it. Like, what are the jobs that
you see being replaced, because like the ones that we've
heard most about specifically, and I've seen sort of poor
results for are the writing jobs, Like when people are like, no,
(13:46):
we're just gonna have AI like write these articles for
us now, and they seem like they do kind of
a bad job and they have a tough time with
like fact checking. What what where are you going?
Speaker 4 (14:01):
Where do you.
Speaker 1 (14:02):
Expect to see the kind of changes in the workforce first,
because I get I guess I will caveat that by
saying that it doesn't seem like a lot of companies
care if the technology does a bad job. They're just like, yeah,
but it's it's good enough to put butts in the seats,
and it's less money, and the entire model of capitalism
(14:23):
currently because of private equity, and you know, just that
being how the incentive system is set up is like
to cut costs. So maybe it won't matter and it'll
just like make journalism shittier and have all journalists jobs
replaced by these ais. But what where is that? Is
(14:46):
that kind of where you see it? Or where where
do you see the AI replacing jobs?
Speaker 4 (14:51):
Well, so, okay, so I think truly creative writing, right, like things like journalistic writing from journalists, or let's say creative writing, songwriting, poetry. These require such deep
understanding of people and theory about how people interpret the
(15:16):
words and the language and sort of the reactions and
emotions that are involved, that I don't think AI is
going to help very much with that. I think, sorry,
replace, right. I think it can help, right, and make you more productive, but I don't think it's really going
to be let's say, a replacement for a journalist. But
(15:38):
some stuff that people do that is pro forma, like a pro forma report, right, that's, you know, almost template based, but not quite, you know. So somebody's got to do some filing. Well, okay, ChatGPT can do this, you know, well enough for, you know, for me, that, okay, here's a filing. Very
few people are probably going to look at it. Let
(16:00):
me use ChatGPT, you know, and maybe not need
to hire someone to do that, right. I think the
other thing where we're going to see a lot of
changes also in you know, coding. Here, I think we're
seeing already the impact of it, where there's this tool
(16:21):
called Copilot, which is driven by similar technology as ChatGPT,
which is able to create massive amounts of code for people.
So it's making a certain subset of coders just way
more efficient, you know. I see that in even with
my students. So I teach an undergraduate data science class.
(16:42):
Last semester, the projects that my students were able to
do were actually much more impressive than the previous semester,
and part of that was just them being able to
use ChatGPT to be able to make better code
to make their projects better.
Speaker 2 (16:58):
It is like, I was just reading like an op ed from a former IBM employee where he's, like, describing himself as, like, a tech evangelist who had always been, you know, cheering on, like, the advent or the arrival of ChatGPT and things like that. And he said, it only ended up actually taking a lot of work from me, like, because
(17:19):
some companies who were less interested in good writing were like,
we're actually gonna use a lot of this for like
content generation, so we don't need you know.
Speaker 1 (17:27):
Like your services as much. He said.
Speaker 2 (17:29):
It hasn't completely replaced him, but he has noticed a significant amount of, like, businesses sort of be like, you know.
Speaker 1 (17:37):
What, we're kind of gonna just sort of lean.
Speaker 2 (17:38):
Into this thing that costs like way less money, and
are these kind of like the stories that are like
canaries in the coal mine. But I feel like, sort
of like in the industrial age, we've always seen like
there was just automation right where people who had jobs
at a bank to do specific things like file things
or put things in a ledger, ended up like those
jobs ended up going away because of automation or just
(18:01):
you think of customer service now a lot of that
is automated. And I was also reading about how a
lot of local governments are now adopting or really interested
in chat bots as a way to like sort of
replace bureaucracies at certain levels. Like is that kind of
is that sort of where you see it ending or
what like what's the sort of fifty year outlook, because
(18:21):
I think to Jack's point two, we always look at
this as like, well, when we're trying to reconcile that
with like capitalism and the corporation's need to like always
look at their bottom line, is it a thing that
there we can strike a balance where it's like, yeah,
we have these human employees who use this as a
tool obviously because it makes them better or or is
(18:43):
it like the cynical version is like they're going to
get as much as they can done with it, only
up until the point that they probably need human workers
to kind of really fine tune the processes.
Speaker 4 (18:54):
Yeah, I think the latter. I mean, I think, you know, the companies, their interest is to optimize. So, I mean, the truth of the matter is that the tool right now, right, is just not good enough for most, you know, really for most tasks. I think, you know,
fifty years from now is a really hard thing for
(19:16):
me to predict.
Speaker 1 (19:17):
I know, but you have to know, you know. It's wild speculation, but sound extremely confident, because, because this.
Speaker 2 (19:27):
Is going to go on TikTok and we're going to
scare the heck out of a lot of young people.
So go ahead, you're saying, how Skynet is imminent?
Speaker 4 (19:34):
Yes, Skynet is imminent. You know, I'm kidding. So
I think that you know, looking forward, like looking far
forward into where I think it will be in you know,
maybe ten or fifteen years. Sure, this technology is just
really just going to improve. And what we're going to
(19:55):
see is that, you know, we will have certain jobs
which were done by people for a very long time
go away. That will probably mean the new jobs will
open up and people will just be more productive. That's
the sort of optimist in me, right.
Speaker 1 (20:13):
That's also how everything has happened to this point, right,
is that like people are at first scared of new technology,
like airplanes are going to be really scary, and like
make it so the you know, nobody has to ever
take a train and it's like, well, then there's like
the entire aerospace jet propulsion industry, and you know, I
(20:37):
guess that one's a little easier to foresee, but like,
it does feel like there's always fear of new technology
and like belief that it's going to end the economy
as we know it, and it does, but it's always
replaced by a new a new version, right.
Speaker 4 (20:55):
Yep, exactly. And so I think, you know, we're going to see something similar here. There are some questions of factuality; sometimes people call this hallucinations in our models. How difficult
is that going to be to really fix those sort
of corner cases. But you know, given the amount of
money and people that are working on this, you know,
(21:17):
I think that within the next ten to fifteen years
we're going to see that you know, a lot of
that gets solved, or at least the likelihood of it
occurring is so small that it's, you know, more that
the likelihood of it, you know, lying to you or
saying something that is untrue is going to be you know,
(21:40):
way way way less than any person. Right, So that's
kind of where where I think we're going. And so
we'll see, you know, some jobs, like, you know, entry level coders, maybe go away; certain jobs, like, you know, certain types of business analysts, even some form of middle management
(22:02):
I think are at risk. You know, lots of various places I think are at risk. I do
want to say one other thing, which I think is
going to be amazing. And we're seeing this in higher education,
but in education in general. Probably the place that's been most impacted by ChatGPT has been education.
(22:27):
And I think what we'll see is in the next
ten to fifteen years, education is just going to dramatically reform, right,
hopefully for the better, but like we're going to see
like major changes in how we teach students, how we
assess students. Hopefully this will lead to you know, just
better quality of education for everyone.
Speaker 2 (22:48):
Right, because is it sort of like the main I
guess I feel like the point of our traditional educational
system is like almost like memory recall. It's like how
good is your fact recall, memory recall and things like that, or can you sit through reading this thousand
page historical textbook to glean like these eight points really
that we want you to come away with, come
(23:09):
away with. And I see, like, how much that distillation of information, how quickly that occurs with, like, you know, ChatGPT and things like that. So I guess it does
become more like, Okay, granted, if this is the information
we want people to learn, then how are we now
taking that next step to make sure it's sort of
ingested properly for people to know that they are making
sense of it, rather than like being like, yeah, here
(23:31):
are the eighteen things you need to say to pass
like a, like a history course on the Columbian Exchange,
and more like Okay, we know you know how to
get to those answers, but how can you demonstrate that knowledge?
And I guess that's sort of where the I guess
excitement is in academia, or at least that's the challenge, right.
Speaker 4 (23:49):
Yeah, yeah, it's the challenge, the worry, the you know,
and I guess excitement goes in both ways positive and
negative, depending on who you are and how you want to deal with this. But I think that, like, this, this type of technology has the potential of being
(24:09):
you know, a tutor and a tutoring system that is
almost as good as you know, the best tutor, where
we can help people learn better and just be able
to interactively one on one, you know, sort of teach
certain concepts and.
Speaker 1 (24:28):
Like, with an AI teacher? Or you're saying it would
help teachers teach.
Speaker 4 (24:33):
Help teachers, help teachers. So it'd be like a tutor, right,
so you know, the teacher is still going to teach,
but the tutor is going to be able to, like
the AI tutor is going to be able to, you know,
help the student with Okay, I didn't understand this, right,
Like I mean, and one thing that we do know
about chatbots is that in general, you know, people have
(24:54):
this impression that chat bots have less judgment, right, and
so people are willing to ask you know, in air quotes,
stupid questions.
Speaker 2 (25:03):
Thank you, Yeah, because no questions are stupid.
Speaker 4 (25:05):
Yeah, whereas in a classical classroom, you know, they wouldn't be able to, they wouldn't be willing to ask that to a person, and they're willing to ask it to the AI. Yeah. I don't know.
Speaker 1 (25:17):
As a parent, I would be very worried just based
on the work I've seen from chatbots up to this point,
Like, like, what, at this point, we've covered, yeah, up to this point, like, we've covered the Columbus Dispatch, like, recap of, like, a football game, like high school football game, and it's just, like, such absolute shit. It's like one
(25:39):
of the worst articles I've ever read, you know, the
sports one, the Star Wars one that Gizmodo put up, is terrible. So I don't, like, I believe that,
like the chatbots will continue to get better, but I
guess I have questions about what they're going to get
better at, whether they're going to get better at like
fooling us into thinking that they have like human intelligence,
(26:02):
or whether they're gonna get like more more accurate, because
it seems like within the world of AI, like there's
still people who are like, I think it was somebody who worked at OpenAI was like, Wikipedia level accuracy
is like years off at this point, and like like
Wikipedia is suddenly being used as this phrase for like
(26:23):
something that is like the gold standard, whereas like before,
outside of the context of AI training, it's something that
we like joke about being easy, easily editable and stuff.
So yeah, it worries me a little bit like that.
I think people's faith is being misplaced in the language
(26:44):
models because language is inherently an abstractive system that is
designed to lie like essentially, it's I mean, it's to
tell you a story that gives you meaning. But it
feels like from a philosophical level like that, I see that.
I see where people who are skeptical that this is
(27:05):
the path to like really useful stuff are coming from.
But let's take a quick break and we'll come back.
And I just I do just want to like kind
of nail down two different things that are being called
AI that seem vastly different to me.
Speaker 5 (27:19):
So let's let's take a quick break. We'll be right back,
and we're back.
Speaker 1 (27:34):
So yeah, I guess I think ChatGPT seems to
be the thing that everyone has been taken by like
very like interested in right like that, it seems to
have reached a new level where people can do things
and have conversations that are remarkably like human seeming and
(28:00):
can write, really, a good, maybe B minus, term paper, and then use that to kind of scrape together
something or depending you know, maybe maybe it's a higher
level of term paper. I don't grade term papers, so
I wouldn't know, But just based on the journalistic work
I've seen it do, I haven't been impressed. But so
(28:22):
that's the main thing that I'm hearing referred to as AI,
and they're like this, there's a new future upon us.
Look at ChatGPT, and then there's like when you
look into some of the things that are being done
with machine learning, like accurately predicting how proteins fold, which
is, like, a problem that has been just too
(28:45):
complex for humans to solve on their own, just like
too slow for humans to know the atomic shape of proteins,
Like they just kind of fold in ways that are, like, too small, too complicated for humans to predict,
and like through machine learning and like this DeepMind
(29:06):
system at Google, they were able to like knock it
out within a couple of years, and like they gave
it to the public. They were like here, here is
the shape of the proteins. And so if you're using
AI to describe both of those things, I think that
is that's where the creepiness comes in, because you're like, Okay,
(29:29):
it has this like godlike science ability, and then it
also has this like child personality where it like can
like talk to you and flirt with you a little bit,
and it's like that that mismatch seems uncanny and weird
to me. Like, and I think that that is what
is happening in people's minds when they're like, oh, yeah,
(29:51):
AI is scary. It's this like massively powerful thing, and
it is massively powerful. I think it's just more massively
powerful when it has a task and like a specific
thing that it's seeking, which is not what you can
really have with the language models, which are like predictive. Right.
Speaker 4 (30:10):
Yeah. I think I think it's kind of unfair in
the way that people characterize the language models as like, oh,
they just predict the next word, right, I mean, in
some sense, it's what we know of what the language
model is trying to do, is it is trying to
(30:31):
predict some sequence of text, and in doing so, it's
learning a whole bunch of the knowledge in the world
and how the world is connected. It's learning some seemingly
probabilistic logic rules. Yeah. However, yeah, I mean this is
(30:54):
oftentimes you see this weird sort of disconnect between ChatGPT, or that technology, which we call large language models in general, being so smart and so stupid at the same time.
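For anyone curious what "just predicting the next word" means mechanically, here is a minimal sketch of greedy next-token generation. The Hugging Face transformers library and the small gpt2 checkpoint are illustrative assumptions, not tools named on the show:

    # A minimal sketch of the objective described above: the model assigns
    # a score to every token in its vocabulary, we append the most likely
    # one, and repeat. Assumes the Hugging Face "transformers" package and
    # the small "gpt2" checkpoint, both illustrative choices.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    text = "Hello the Internet, and welcome to"
    ids = tokenizer(text, return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(20):                    # extend the prompt by 20 tokens
            logits = model(ids).logits         # one score per vocabulary token
            next_id = logits[0, -1].argmax()   # greedily take the most likely token
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(ids[0]))

Everything the model appears to "know" falls out of that one loop; real chatbots mostly differ by sampling from the scores instead of always taking the top token, plus instruction tuning on top.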
Speaker 1 (31:10):
Right, right, that's kind of our specialty on the show.
Speaker 4 (31:16):
Well, and and I think as we as like society
interact with the technology more and more, we're going to
get sort of a better mental model of the technology,
right and in some sense, like right now, a lot
(31:38):
of people, you know, I ask my students, okay, like
what do you think of Like, how do you reason
about ChatGPT?
Speaker 2 (31:46):
Right?
Speaker 4 (31:46):
How do you reason about the quality of the technology
and how it you know, is you know, understanding the
input and giving you output. And some people think about
it kind of like this person, and you know, reason
about it like a person. Other people see it like
Google Search, but just a really really interactive version of
(32:08):
Google Search. Sure, and somehow it's somewhere in between, which
I think is like the weird sort of place that
we're at. And you know, I think that the fact
that the same type of deep learning and machine learning
models can predict protein folding and find new, you know,
(32:32):
ways of inverting matrices that are even more efficient and
doing all this very very incredible intelligent stuff at the
same time as you know, making incredibly dumb statements is
something that people are going to have to deal with, right? It's, okay, you know, there's this
(32:53):
weird disconnect. You know, if you think about it like
a person, it just, it's not a person, right? It's not, you know, it doesn't map on. Like, in general,
we like to take technologies or different types of constructs
and map them onto what we already know. And in
some sense it's not mapping well onto what we already know,
(33:16):
and so we're gonna have to figure out how to
sort of properly map it onto its own new sort
of construct.
Speaker 1 (33:25):
Yeah, I get like the one thing that you know
in talking about it as like well, it's just predicting
the next word, or you know, it's doing like when
you describe the very specific tasks that it's being programmed to
do versus the emergent abilities that are coming out of
(33:45):
that task. Like, the analogy that I heard used, that made sense, is, I guess, a quote from
Darwin that like from so simple a beginning, endless forms
most beautiful, like that that quote being applied to basically
the argument is like that's all the human mind is doing. Also,
(34:06):
like in some ways, like the human mind just can, can be seen as being programmed to, like, look
for lions in the bushes and move towards food and procreation,
and then like from all of these like complicated synapses firing,
you get this amazing thing called consciousness, and that you know,
(34:30):
just by describing AI in a reductive way, you're ignoring
the fact that, like it could be these very basic
things and also be moving towards you know, what we
describe as what we think of as consciousness. So yeah,
I think, yeah, it's a super interesting conversation. That was
(34:52):
Like I think I was scared to like read into
like to get like to research because it's just so
massive and like when I was I was a philosophy
major in college, and when I was studying philosophy back
in like the early two thousands, like AI, Like the
question of AI and what was going to happen was
(35:13):
like a major question in philosophy all the way back then.
And it's like all of the all of the ways
that philosophers have been like thinking about and asking all
the questions they've been asking about AI for you know,
since like the sixties are now like coming to a fore,
like as we've gotten closer to the moment, we still
(35:35):
don't really have any clearer of an idea of like
what exactly is going to happen. So it's it's a
little scary. I do want to. I do want to
take a break and then come back and just talk
about like kind of some specific ways, just like take
some guesses as to like specific ways that the future
might look different, that we might not be that I
(35:59):
at least going into like researching this hadn't hadn't been
taken into account as a possibility. So we'll be right
back and we're back and Miles, I mean we we
(36:21):
talk a lot about how we we what we think
this future is going to look like?
Speaker 2 (36:25):
Right, yeah, I mean I share a name with Miles Dyson.
Speaker 1 (36:30):
Ever heard of him.
Speaker 2 (36:31):
I started a little thing called Skynet, you know? That keeps me up all night.
Speaker 1 (36:35):
Well, but in seriousness, hey, he went out a hero, Miles. You know, I did. I did.
Speaker 2 (36:41):
Look it's all about altruism, you know at the end.
But like I think for me personally, right, so much
of my so much of my understanding of science is
derived from film and television because I'm American, and I
think with, like, with AI, I think whenever I would think of, like, oh, you don't know where this thing's gonna go. The place I think it's
(37:02):
gonna go is Skynet from Terminator.
Speaker 1 (37:05):
Just see the shot from the sky of all the
missiles being launched just coming down.
Speaker 2 (37:12):
I'm trying to have a nice day at the park
with my kids. I'm watching through a chain link, and
next some.
Speaker 1 (37:16):
Lady's over there shaking the chain link, and skeletons rattling, the kids over there.
Speaker 2 (37:22):
Get her out of here. But I'm you know, first
of all, it's sort of a twofold question: how, how far off base is the idea, like, how many jumps am I making in my brain to be like, ChatGPT, next stop, Skynet? And, and then also because
of that, what are all of the ways that people
like me are not considering what those actual, real tangible
(37:45):
effects will be. And I don't want to be
like the most cynical version, but you know, if we
aren't careful and we have very sort of individualistic or
profit minded sort of motivations to develop this kind of technology,
like what does that worst case sort of look like
or not worst case? But what are the ways I'm
actually not thinking of because I'm too busy thinking about
(38:07):
T-1000s?
Speaker 1 (38:08):
Right, Like I just to like add on to that,
when when I think about the computer revolution, like for
decades it was these people in San Francisco talking about
how the future was going to be completely different and
done on computers or something called the Internet, And we
were just down here looking at a computer that was
(38:29):
like green dots, like not that good of a screen display.
And by the time it like filtered down to us
and the world does look totally different, it's like, well, shit,
like that's that turns out that was a big deal.
So yeah, I feel like I want to be constantly,
(38:51):
like on our show, just asking the question like,
where is this actually taking us? Because, in the past,
I feel like it's been hard to predict and when
it did come, it wasn't exactly what the you know,
tech oracles said it was going to be, and it
was like unexpected and weird and beautiful and banal in
weird ways. And so I'm curious to hear your thoughts
(39:14):
on like once this technology reaches the level of consumers,
like of just your average ordinary person who's like not
teaching machine learning at NYU, like, what what is
it going to look like?
Speaker 2 (39:29):
But first, but obviously first, but first, AI.
Speaker 4 (39:34):
I think it's kind of okay. So, so I think the idea of worrying about some version of, you know, this
technology going out of control, right is there's so many
(39:54):
checks and balances in the ways in which we are thinking about this and the technology is sort of moving forward. That is kind of, you know, the idea of something emergent, and then not only is it emergent, but it's going to take over and try to destroy civilization.
Speaker 2 (40:18):
And almost send, and also send, cybernetic organisms into the past to ensure that those people do not grow up to take up arms, kind of like John Connor.
Speaker 4 (40:29):
Yeah, yeah. So I'm a firm believer in being able to send people to the future; I still don't think that we're going to be able to send people into the past. So that's, that's maybe one problem.
Speaker 1 (40:44):
But that's just, like, your opinion, man. All right, it's gonna happen. And by the way, I'm the Doc Brown character. I'm gonna befriend a young child. Yeah right, exactly,
Like Marty and Doc been friends like probably like it
(41:07):
seems like eight years so like when Marty was like
nine years old, they became best friends. Anyways, sorry about that, getting sidetracked. So, well, so I think, you know,
so what I'm what I'm thinking about when we think
about this kind of technology and how it could be,
how the technology itself could go wrong.
Speaker 4 (41:29):
I don't think that that kind of integration is likely, right.
I think that the concept of you know, the singularity,
it's so far out, and I don't think that we
have a good ability or understanding of like consciousness. And
I also don't think we have a really good understanding
(41:50):
of like, Okay, you know, why would you know these
large language models start, you know, trying to harm us.
So there's all those steps and jumps and leaps and
all the intermediate pieces. That just seems so incredibly unlikely. Now.
(42:10):
I know that a lot of the AI, AGI people who worry, their worry is, oh, well, we got to worry about this tail risk, right, and a human level extinction event is worth worrying about. Yeah, and some people
worrying about that is probably not a bad thing. But
I just think it's very unlikely for the reasons of
(42:32):
those of all the chain that would need to happen,
and we're not very good at robots. Also, robots are
you know, actually much further away.
Speaker 1 (42:44):
But that is one Okay, well, we'll talk about it,
but let's put a pin in that, because I want to come back to that, to bad robots, to bad robots,
and how easy I just want to brag about how
easy it would be for me to beat one up.
Speaker 2 (42:56):
Boston Dynamics? Yeah, yeah, I got my money on you in that fight.
that fight.
Speaker 4 (43:01):
But the thing that I do want to point out
is that, you know, the concept of starting to build
some of these rules into the large language models, you know,
to try to say, okay, well, you know, try to
be benign, don't do harm. Like that's a really super
good idea, right, And I think that, you know, even
(43:25):
if you don't believe in Skynet, actually believing in
trying to incorporate responsibility into the large language models, I
think that is something that's very important. Moving into some of the dangers, though, I think, actually, large language models, people like OpenAI were actually worried about this in one of the first iterations of GPT, which was
(43:49):
the ability for the large language models to create misinformation
and disinformation. And you know, I think that that's a
really bad use of the technology and very very potentially harmful.
Speaker 1 (44:05):
And it already seems to be what it's being used for.
Like the technology, like the ways that like companies are
trying to replace journalism with it, or, you know, clickbait articles, like just generate tons of clickbait articles that are, like, targeted at people. It, it feels like it's already trending in that direction in a lot of ways.
Speaker 4 (44:27):
Yeah. Unfortunately, I think you're right, that there is this, that there is this bad use of the
technology where it's like, oh, let's make this so that
it could be as persuasive as possible. You know, there's
obviously been a ton of research in marketing on like
figuring out, okay, how do we you know, position this
(44:50):
product so that it's you know, you're more likely to
buy it, right, And the same thing can be now
applied to language, of, like, okay, you know, for Jack O'Brien, how am I going to make this tailored advertisement just for him, to be able to make him buy this particular product that I'm selling, or to make him not vote, right? Yeah? Right, you know, those kind
(45:13):
of that's I mean, I think that this is scary,
and I think that this is in the now, right,
which you know, is something that in some sense is
going to be hard to like really stop people from
using this technology in that direction without things like legislation,
(45:34):
you know, there's just some some components that need, you know,
more legislation, and I think that's going to be hard
to do but probably necessary. Right.
Speaker 2 (45:47):
Yeah, I could even see, like just even in politics,
you can be like, Okay, I need to actually figure
out the best campaign plan for this very specific demographic
that lives in this part of this state, and sure,
and then just imagine what happens. But I guess is
that also part of the slippery slope is that like
the reliance sort of gives way to like sort of
(46:07):
this, like, thing where it's like, whatever this says is the solution to whatever problem we
have and like just kind of throwing our hands up
and all just becoming totally reliant. I mean, I'd imagine
that's also seems like a place where we could easily
sort of slip into a problem where it's like, yeah,
the chatbot may give us this answer.
Speaker 4 (46:28):
But well, or or that people are starting to make
like very homogeneous decisions and choices, right, where you would have made, you know, many different choices, because you already have a template that's been given to you by, you
know this AI. You're like, oh, okay, well, you know
I'm just gonna follow you know, this choice or this
(46:50):
decision that it's going to make for me and not
you know, do one of the thousand different alternatives right, right,
and if you do that, I do that, Jack does that, right, like, all of a sudden, we would have made vastly different choices. But now we have this sort of weird, sort of centering effect where all of us are actually making much less, you know, varied choices in our decision making, right,
(47:15):
which has, I mean, it does pose real risk. I mean, imagine applying this to resume screening, right, where you could imagine the same type of problem, right. And there's just so many scenarios that you can think
of where you know, we need to take care, right, Yeah,
and this is a good time to think about that.
Speaker 1 (47:37):
Right, And we're yeah, we're turning our free will over
to the care of algorithms and you know, the phones,
like the Skinner boxes in our hands that we're carrying around,
and like that feels like a thing that's already happening.
A I might just make it a little bit more
effective and like quicker to respond and like but yeah,
it feels like a lot of the concern over AI
(48:01):
that makes the most sense to me are the ones
that are already happening. And yeah, just to, like, kind of tip my hand a little bit: in doing a lot of research on the kind of
overall general intelligence concerns that we've been talking about, the
ones where it like takes over and evades human control
because it wants to you know, wants to defeat humans.
(48:26):
I was surprised how full of shit those seemed to be. Like,
for instance, one story that got passed around a lot
last year where, like, during a military exercise, an AI was being told not to, like, take out human
(48:46):
targets by its human operator, and so the AI made
the decision to kill the operator so that it could
then just like go rogue and start killing whatever it
wanted to. And that story, like, got passed around a lot.
I think we've even like referenced it on this show.
And it's not true. First of all, I think when
(49:08):
it first got passed around, people were like it actually
killed someone man, and it was it was just a
hypothetical exercise first of all, like just in the story
like as it was being passed around, And second of all,
it was then debunked, like the person who said that
it happened in the exercise later came out and was
like that, actually, like that didn't happen. But it seems
(49:30):
like there is a real interest and real incentive on
behalf of the people who are like set up to
make money off of these AIs to, like, make them
seem like they have this like godlike reasoning power. There's
this other story about where like they were testing the
I think it was GPT four to see like like
(49:54):
during the alignment testing. Alignment testing is like trying to
make sure that the AI's goals are aligned with humanity's, and, like, making humans more happy, and they, like, ran
a test to see where the GPT four was basically
like, I can't solve the CAPTCHA, but what I'm going
to do is I'm going to reach out to a
(50:14):
TaskRabbit and hire a TaskRabbit to solve the CAPTCHA for me. And the GPT four made up a lie and said that he was visually impaired and that's why he needed the TaskRabbit to do that. And again
it's like one of those stories like it feels creepy
and it gives you goosebumps, and again it's not true.
It's like, they were, the GPT was being prompted by a human. Like, it's very similar to the self driving
a human, Like it's very similar to the self driving
car myth, like that that self driving car viral video
that Elon Musk put out where it was being prompted
by a human and like pre programmed, and like that.
There were all these ways that the human agency involved
(50:58):
with the kind of clever tasks that we're worried about this thing having done, were just, like, taken out, were just edited out, so that they could tell a story where it seemed like the GPT four, which is like what powers ChatGPT, that, like, it was doing something evil, right.
(51:19):
And it's like so it's I get how these stories
get out there and become like because they're they're the
version of AI that we've been preparing for, because, like, we've been watching Terminator. But it's surprising to me how much, like, Sam Altman, the head of OpenAI, like, just
(51:42):
leans into that shit and is like he like he
in an interview with like The New Yorker, he was like, yeah,
I, like, keep a cyanide capsule on me and like
all these weapons and like a gas mask from the
Israeli military in case like an AI takeover happens and
it's like what, like you of all people should know
(52:06):
that that is complete bullshit. And it totally, like, takes the eye off the ball of, like, the actual dangers that AI poses, which is that it's
going to just like flood the zone with shit, like
it's going to keep making the Internet and phones like
more and more unusable but also more and more like
(52:28):
difficult to like tear ourselves away from. Like I feel
like we've already lost the alignment battle in the sense
that like we've already lost any way in which our
technology that is supposed to like serve us and make
our lives better and enrich our happiness is doing that. Like, they, they stopped doing that a long time ago.
(52:49):
Like That's why I always like say that the more
interesting question, like with the questions that were being asked
in philosophy classes in the early two thousands about like
singularity, and, like, suddenly this thing spins off and we, we don't realize we're no longer being served and, like, we're, like, twenty steps behind. Like, that happened with capitalism a long time ago. Like, capitalism
(53:12):
is so far beyond, and it's like, you know, we're
no longer serving ourselves and serving fellow humanity. We're like
serving this idea of the market that is just you know,
repeatedly making decisions to make itself more efficient, take the
friction out of like our consumption behavior. And yeah, I
(53:32):
think AI is going to definitely be a tool in that.
But like that is the ultimate battle that we're already losing.
Speaker 4 (53:39):
Yeah, I mean I completely agree with you on the
on the front of, like, your phones. I mean, the companies are trying to maximize their profits, which may mean, you know, possibly you being on your phone to the disservice of doing something else, like going outdoors and doing exercise, right,
(54:04):
or, you know, doing something social. I, I do see, on the other hand, that, like, you know, maybe AI can have positive effects, right, where, you know, take tutoring
for an example, or public health, where we take the
(54:26):
phone and take the technology and use it for benefit,
like providing information about public health by helping students who
don't have the ability to have a tutor have a tutor, right,
Like all of these kind of things where let's take
(54:46):
advantage of the fact that you know, the majority of
the world has phones, right and try to use that
technology for, you know, a good positive societal benefit. But like any tool, it has usages that are both
good and bad. And I do think, like, you know,
there's going to be a lot of uses of this
(55:09):
technology that are not necessarily in the best interests of
humanity or individual people.
Speaker 2 (55:14):
Right, So it's like really like sort of the real
X factor is how the technology is being deployed and
for what purpose, not that the technology in and of
itself is like this runaway train that we're trying to
rein in. It's that, yeah, if you do phone scams,
you might come up with better, you know, voice models to, get this, rather than doing one
(55:37):
call per hour, you can do a thousand calls in
one hour running the same script and same scam on people.
Or to your point about how do you persuade Jack
O'Brien to not vote or to buy X product? Then
it's all going in that direction versus the things like
how can we optimize like the ability of a person
to learn if they're in an environment that typically isn't
(55:58):
one where people have access to information, or for the public
health use. And that's why, like I think in the end,
because we always see like like, yeah, this thing could
be used for good, and almost every example is like, yeah, and it just made two guys three billion dollars and two cents, and that's what happened with that. And now
we all don't know if videos we see on Instagram
(56:20):
are real anymore. Thanks.
Speaker 1 (56:22):
Yeah, Yeah, it's not. Yeah, it's not inherently a bad technology.
It's that the system, as it is currently constituted, favors scams.
Like that's why we have a scam artist as like
our most recent president before this one and probably could
be future president again, like because that is what our
(56:46):
current system is designed for, and like it is just
scamming people for money. That's why when when blockchain technology
like, first has its, like, infant baby steps, it immediately becomes a thing that people use to scam one another, because
that is the software of our current society. So yeah,
(57:10):
there are like tons of amazing possibilities with AI. I
don't think we find them in the United States, like
applying the AI tools to our current software. I would
love for there to be a version of the future
where AI is so smart and efficient that it changes
(57:31):
that paradigm somehow and changes it so that the software that our society runs on is no longer scams, you know, right, right.
Speaker 4 (57:41):
I mean there is a future where you know, AI
starts to do a lot of tasks for us. You know,
I think that there's starting to be, you know, it's mostly in Europe now, but also here, talks about
(58:01):
like universal basic income. It's, it's kind of, I don't
think that that's going to be the case. I don't
think that in the US anybody has an appetite for that.
Maybe in twenty thirty years. But you know, I do
think that, you know, the there are certain things about
(58:21):
my job right as a professor, about like writing and
certain other components right where you know, here's some things
where AI can help, right, And you know, today I
use it as a tool to make me more efficient,
Like you know, okay, I want to have an editor
(58:44):
look at this. Okay, well, you know, it's not going to do as well as a real editor, but maybe as a, you know, bad copy editor, yeah, it's good, you know. Like, but you know,
mean I've been writing papers for a while. You know,
when the undergraduates write the paper, it does an even
(59:05):
better job, right because they have more room for growth, right,
And so I think the you know, yeah, I think
that on a micro level, these tools are now being used.
I think the you know, Russian spambots have been most
likely using large language model technology before ChatGPT, right,
(59:28):
and so you know, I mean there are some ways
that I think we're going in a positive and good direction.
Speaker 2 (59:34):
Right.
Speaker 4 (59:35):
Yeah.
Speaker 2 (59:35):
Now it's like now that you say that, I'm thinking,
I'm like, we thought Cambridge Analytica was bad, and it's
like what happens now when we like turn it up
to one thousand with this kind of thing where it's like, yeah,
here are all these voter files, now, like really figure
out how to suppress a vote or to sway people.
And yeah, I again, it does feel like one of
these things where in the right hands, right, we can
(59:57):
create a world where people don't have to toil because
we're able to automate those things. But then the next
question becomes, how do we off ramp our culture of
greed and wealth hoarding and concentrating all of our wealth
into a way to say, like, well, we actually need
to spread this out that way everyone can have their
(01:00:20):
needs met materially, because our economy kind of runs on
its own in certain sectors. Obviously, other ones need real
like human labor and things like that. But yeah, that's
where you begin to see like, Okay, that's the fork
in the road. Can we make that? Do we take
the right path or does it turn into you know,
just like some very half-assed version where like only
(01:00:41):
a fraction of the people that need like universal basic
income are receiving it, while, you know, we read more articles about why we don't see Van Vocht in Hollywood anymore. As Jack pointed out, that, yeah, terribly worded thing about Vince Vaughn that AI could not get right, couldn't get right.
Speaker 1 (01:01:00):
So, like, Van Vocht, the one. So the sci fi
dystopia that seems like not not necessarily dystopia, but the
sci fi future that seems like it's most close at hand,
and, like, given, you know, doing a weekend's worth of research on, like, where this is, and, like, where all the top thinkers think we're headed, the thing
(01:01:21):
that seems the closest reality to me is the movie Her,
Like Her is kind of already here. There's already these
applications that use ChatGPT and, you know, GPT four, too. Again, like, it really seems to me, like, and
I don't necessarily mean this like in a dismissive way.
(01:01:42):
This is probably you know, a good description of most jobs.
Like, the thing that David Fincher, I was, like, listening to a podcast where they talked about how David
Fincher says, like words are only ever spoken in his
movies by characters in order to lie, because like that's
how humans actually use language, is just to like find
different ways to like lie about who they are, what
(01:02:04):
they're you know, approaches to things. And like I think,
you know, these language models the thing that they're really
good at, like whether it be like up to this
point where people are like, holy shit, this thing's alive.
That's talking to me, Well it's not, And it's like
just doing a really good approximation of that and like
kind of fooling you a little bit. Like take that
(01:02:24):
to the ultimate extreme and it's like it makes you
think that you're in a like loving relationship with a partner.
And like that's already how it's being used in some
in some instances to great effect. So like that that
seems to be one way that I could see that
being becoming more and more a thing where people are like, yeah,
I don't have like human relationships anymore. I get like
(01:02:47):
my emotional needs met by, like, Her technology, essentially. Is that, what would you say about that? And is there a different fictional future that you see close at hand?
Speaker 4 (01:03:03):
So I think Her is great. I actually, I mean
it's really funny because you know, I remember back in
twenty seventeen, twenty eighteen, where I was like, oh, Her is, like, a great, sort of, this is where AI is going. And you know, for those of you who
haven't seen the movie, it's a funny movie where you know,
(01:03:25):
they have Scarlett Johansson as the voice of an AI agent which is on the phone. And I think that
this is actually a pretty accurate description of what we're
going to have in the future, not necessarily the relationship part,
but the fact that we'll all have this personal assistant,
and the personal assistant we'll see so many aspects of
(01:03:45):
our lives right our calendars, our you know, meetings, our
phone calls, everything, and so it'll be you know, this
assistant that we all have that's helping us to like
make things more productive. And as a function of the
assistant to be effective, the assistant will probably want to
(01:04:09):
have some connection with you, and that connection will be
likely to allow you to trust it. And here's where
the slippery slope comes from, you know, where you where
you're like, oh, this understands me, and you start getting
into a deeper relationship where you know, a lot of
(01:04:29):
the fulfillment of a one on one connection can come
with your smart assistant. And I do think that there's
a little bit of a danger here. Like you know, again,
I'm not a psychologist. I'm someone who studies AI and machine learning,
but I do, actually, you know, study how well machines
(01:04:53):
can make sense of emotion and empathy. And really, you know,
GPT-4, which is the current state-of-the-art
technology, is actually already really good at understanding, and I use
that phrase in quotes, the phrase understanding, really
(01:05:16):
what you're feeling, right, how you're feeling in a given scenario.
And you can imagine that feeling heard is one of
the most important parts of a relationship, right? And if you're
not feeling heard by somebody else and you're feeling heard
by, you know, your personal assistant, that could shift relationships
to the AI, which, you know, could be fundamentally
(01:05:40):
dangerous because it's an AI, not a real person.
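[A concrete, hedged illustration of the scare-quoted "understanding" described here: you can prompt a chat model to name the emotion in a message before responding. This is a minimal sketch assuming the OpenAI Python SDK; the model name, prompt wording, and label_emotion helper are assumptions for illustration, not the guest's research code.]

```python
# Illustrative sketch: asking a chat model to label the emotion in a message.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def label_emotion(text: str) -> str:
    """Return the model's one-word emotion label plus the cue it noticed."""
    resp = client.chat.completions.create(
        model="gpt-4",  # "current state of the art" per the discussion
        messages=[
            {
                "role": "system",
                "content": (
                    "Name the primary emotion in the user's message in one "
                    "word, then briefly note the cue that signals it."
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content

# The model pattern-matches emotional cues in text; it does not feel anything,
# which is exactly why "understanding" belongs in quotes.
print(label_emotion("I guess the promotion went to someone else. Again."))
```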
Speaker 1 (01:05:44):
Right, Yeah, the one I was talking about is Replika,
that app, R-E-P-L-I-K-A, where
they are, you know, designing these things explicitly to like
fill in as like a romantic partner, at least
with some people. And it's always hard to tell, did
they find like the three people who are using this
(01:06:07):
to fill an actual hole in their lives, or is
it actually taking off as a technology? But yeah, the
personal assistant thing seems closer at hand than maybe people
realized. Any other, like, kind of concrete changes that you
think are coming to people's lives that they aren't
(01:06:29):
ready for, or haven't really thought about, or seen
in another sci-fi movie?
Speaker 4 (01:06:36):
Sci-fi movies are remarkably good at predicting.
Speaker 1 (01:06:39):
Yeah, or maybe they have seen it in a sci-fi movie.
Speaker 4 (01:06:42):
No, but I think, you know, another thing,
one potential that I think not enough people are talking
about, is how much better video games are going to become.
So people are already integrating GPT-4 into video games,
and I think our video games are just going to
(01:07:02):
be so much better, because, you know, you have this
ability to like interact now with an AI character that's
not just like very boring when chatting with you, but
can actually be truly entertaining and fully interactive.
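[For a sense of what "integrating GPT-4 into video games" can look like in practice, here is a hedged sketch of an NPC whose dialogue comes from a chat model, constrained by a persona and the current game state. The persona, model name, and npc_say helper are hypothetical; this assumes the OpenAI Python SDK rather than any particular game engine.]

```python
# Hypothetical sketch of an LLM-backed game NPC; not from any shipped game.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# The persona keeps the character from being "very boring" or breaking role.
NPC_PERSONA = (
    "You are Brom, a gruff blacksmith in a fantasy village. Stay in "
    "character, keep replies under two sentences, and never mention AI."
)

def npc_say(player_line: str, game_state: str) -> str:
    """Generate one in-character NPC reply, conditioned on game state."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": NPC_PERSONA},
            {"role": "system", "content": f"Current game state: {game_state}"},
            {"role": "user", "content": player_line},
        ],
    )
    return resp.choices[0].message.content

print(npc_say("Heard any rumors lately?",
              "player reputation: hero of the mill fire"))
```

[In a real game the latency and per-call cost of a hosted model are the binding design constraints, which is partly why the guest frames this as where games are heading rather than the current default.]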
Speaker 2 (01:07:22):
Interesting.
Speaker 4 (01:07:22):
I think, you know, computer games are going to be
much much better, and possibly also more addictive as a
function of being much better.
Speaker 2 (01:07:30):
Perfect, and then that will dull our appetite for
revolution when our jobs are all taken by AI. Yes,
this is great.
Speaker 1 (01:07:38):
This is great, all right. I feel much better after having
had this conversation.
Speaker 2 (01:07:42):
Dude, Grand Theft Auto is way better when the kinds of
conversations I have with people I would normally bludgeon on
the street with my character are something else.
Speaker 1 (01:07:53):
Well, Doctor Sedoc, such a pleasure having you on
The Daily Zeitgeist. I feel like we could keep
talking about this for another two hours, but we've got
to let you go. But thank you so much for
joining us. We'll have to have you back. Where can
people find you? Follow you all that good stuff?
Speaker 4 (01:08:11):
Yeah, thank you for having me. I'm on what is
now called X, at J-O-A-O.
Speaker 2 (01:08:16):
On this podcast, we still call it Twitter.
Speaker 4 (01:08:22):
Yeah, okay, well, Twitter, fine, and, you know, yeah, that's
my, that's currently still my main venue. But yeah.
Speaker 1 (01:08:32):
And is there a work of media you've been enjoying?
Speaker 4 (01:08:35):
Good question. Modern dance. So Fall for Dance is starting
in New York City, so I've been, you know, moving
into watching some modern dance, which is, sorry, kind of
a fun thing to do, modern dance.
Speaker 2 (01:08:53):
Okay, okay, here, I'm gonna have Bing Chat recommend a
modern dance show for me in New York.
Speaker 1 (01:09:01):
So what'd they say? Through work, we get access to
the Bing Chat bot, so pretty cool.
Speaker 2 (01:09:09):
All I do is mess around. I'm like a child
with it. I'm like, pitch me a movie with Seth
Rogen where he's a stoner biker. Did a pretty good job.
Pretty good job. Feels a little derivative, but it makes
sense because it's only deriving its ideas from existing things
out there.
Speaker 1 (01:09:24):
But yeah, I asked it to summarize the plot of Moby
Dick in the form of a Wu-Tang rap, and it was
not great, not great. Well, not great. It was kind
of, how do I say, like more early-eighties rap, where it
was like, hey, my name's Ahab and I'm here to
say, I hate this whale in a major way.
Speaker 4 (01:09:50):
Miles.
Speaker 1 (01:09:51):
Where can people find you? What is a work a
media you've been enjoying?
Speaker 2 (01:09:56):
Now you really got me thinking.
Speaker 1 (01:09:58):
Now you've really done it.
Speaker 2 (01:10:00):
Wow, Jack, sorry, I'm asking it to summarize Of Mice
and Men in the form of a Ghostface rap verse.
You can find me at Miles of Gray on Twitter.
I don't even, you know, formerly X. I'm calling
it Twitter-formerly-X dot com. Okay, we're inverting the form here,
so check me out there, pretty much everywhere, Instagram, all that,
(01:10:21):
and obviously find us on our basketball podcast, Miles and
Jack Got Mad Boosties, where I promise my takes are
not written by AI, or are they? And also find
me on my true crime podcast The Good Thief, where
we're in search of the Greek Robin Hood, and also 420
Day Fiancé, where I talk about 90 Day Fiancé.
Whoa do you know what? You know?
Speaker 4 (01:10:40):
What?
Speaker 2 (01:10:40):
You know what they just said when I said, give me
Of Mice and Men in the form of Ghostface? Yo,
let me tell you about a story so true of
two migrant workers, George and Lennie, who were on a
mission to own their food and land and live off
the fat of the land. That was the plan. George
was smallish.
Speaker 1 (01:11:01):
Come on, you gotta give me like.
Speaker 2 (01:11:02):
A random Italian dish reference mixed with like Gore-Tex
or North Face. Anyway, it's got
some learning to do. Let's see, any tweets or works
of media I'm liking? Not really, not really. I
thought I had something that I saw recently. Oh,
(01:11:23):
new season of Top Boy. I was watching the new
season of Top Boy on Netflix.
Speaker 1 (01:11:26):
New season of Top Boy. Yeah, is Top Boy a
show? Top Boy?
Speaker 2 (01:11:32):
No, it does sound like it would be like The Best Boy. Yeah, no,
it's a, it's like a London gangster show. Okay, yeah,
yeah, with roadmen in it.
Speaker 1 (01:11:43):
Roadman?
Speaker 2 (01:11:44):
Yeah, Roadman.
Speaker 1 (01:11:45):
I was a highwayman, all right. You can find me
on Twitter at Jack Underscore O'Brien. Uh. Tweet I enjoyed
was from Jeff at Used Wigs, who tweeted, the weekend-ending
trauma of seeing the 60 Minutes stopwatch as a
kid lives forever. And I feel that, and it's not
(01:12:07):
just as a kid, not just as a kid.
Speaker 3 (01:12:10):
Yeah.
Speaker 1 (01:12:10):
You can find us on Twitter at Daily Zeitgeist. We're
at The Daily Zeitgeist on Instagram. We have a Facebook
fan page and our website, DailyZeitgeist dot com, where
we post our episodes and our footnotes, where we link
off to the information that we talk about in today's episode,
as well as a song that we think you might enjoy.
Miles, what song do we think people might enjoy?
Speaker 2 (01:12:30):
Uh. This is from a Thai artist. She's like a drummer, producer,
dope drummer, named Salin, S-A-L-I-N.
I'm sorry if I botched the pronunciations, and I'm
gonna do it again with the track. It's called Si Chomphu,
the first word S-I, the second word C-H-O-
M-P-H-U, and I believe that is a
(01:12:50):
region of, like, northeast Thailand. But she's like, this, like,
it's like a really funky drum track. If you like
Khruangbin and kind of sort of like funky, like Southeast
Asian funk kind of music, this is kind of what
that track sounds like, except with drumming. She is so
tight on the kit. Anyway, so check this one out.
It's Salin with Si Chomphu.
Speaker 4 (01:13:11):
All right.
Speaker 1 (01:13:11):
We will link off to that in the footnote. The
Daily Zeitgeist is a production of iHeartRadio. For more podcasts from
iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever
you listen to your favorite shows. That is going
to do it for us this morning. We're back this afternoon
to tell you what is trending, and we'll talk to
you then. Bye bye.