Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
You're listening to Comedy Central.
Speaker 2 (00:08):
There is technology out there in the world that really
does blur the line between reality and tailgate art, but
those are mostly AI generated. Your fake Joe Biden robocall
that tells New Hampshire voters not to vote, your Chicago
mayoral candidate glorifying police brutality, your Donald Trump dropping by
the neighborhood for a stupek. Look how comfortable he seems.
(00:40):
And as AI gets better and better, it's only going
to make it more difficult to separate fact from fiction,
which could be terrifying. Luckily, the people in charge of
AI have told us that, just like with the Internet
and social media, it's actually going to make everything much
much better.
Speaker 1 (00:58):
This has the potential to make life much better. I
think it's honestly a layup.
Speaker 3 (01:01):
I hate to sound like utopic techbro here, but the
increase in quality of life that AI can deliver is extraordinary.
Speaker 4 (01:09):
AI is the most profound technology humanity is working on,
more profound than fire or electricity.
Speaker 2 (01:17):
Yeah, sucking fire, that's try you are and me.
Speaker 1 (01:28):
Your arty fire. I'm sorry, ready to turn that up.
Speaker 5 (01:34):
Suck a mother fire, I hope.
Speaker 2 (01:38):
Well, what are you giggling at? Electricity? I mean, listen,
I'm sure AI is good, but like fire good?
Speaker 6 (01:53):
How so? They can help us solve very hard scientific
problems that humans are not capable of solving themselves.
Speaker 1 (02:00):
Addressing climate change will not be particularly difficult for a
system like that.
Speaker 7 (02:04):
The potential for AI to help scientists cure, prevent, and
manage all diseases in this century.
Speaker 1 (02:11):
I completely trust you.
Speaker 2 (02:17):
And your enormously wide eyes and very human cadence. But,
benefit of the doubt: this can cure diseases and solve
climate change?
Speaker 1 (02:31):
What are we using it for now? Jarvis knows when
to make me breakfast?
Speaker 5 (02:35):
Your toast is ready?
Speaker 8 (02:37):
All right?
Speaker 1 (02:39):
Are you out of your mind?
Speaker 6 (02:45):
See?
Speaker 1 (02:45):
Here's the thing: toast I can make. I can make toast.
Speaker 2 (02:54):
It might be the only technology we have that works
pretty much every time. I'll tell you what, why don't
you get to work on curing the diseases and the
climate change and we'll hold down the fort on toast.
Of course, now we have as a society, we have
(03:14):
been through technological advances before, and they all have promised
the utopian life without drudgery.
Speaker 1 (03:19):
And the reality is they come for our jobs.
Speaker 2 (03:22):
So I want your assurance that AI isn't removing the
human from the loop.
Speaker 1 (03:30):
This is not about replacing the human in the loop.
In fact, it's about empowering the human. It's an assistant.
It's an assistant. What we're all getting assistance. It's an assistant.
Speaker 2 (03:45):
AI works for you night and day tirelessly and all you
had to do was remember their birthday.
Speaker 1 (03:52):
That's all you had to do. But I get it.
Speaker 2 (03:56):
It's an assistant. It's about productivity. That's good for all
of us. Yes, although they do let the real truth
slip out now.
Speaker 4 (04:06):
And again, there will be overall displacement in the labor market.
Speaker 8 (04:10):
You can get the same work done with fewer people.
Speaker 5 (04:13):
They're just the nature of productivity.
Speaker 2 (04:17):
That doesn't sound good, same work done with fewer people.
Speaker 1 (04:21):
Not a math guy, but I think fewer means less.
Speaker 2 (04:23):
Yes, so AI can cure diseases and solve climate change.
But that's not exactly what companies are going to be
using it for, are they.
Speaker 5 (04:33):
So this is like productivity without the tax of more people.
Speaker 1 (04:41):
Without the tax of more people?
Speaker 2 (04:44):
Are the people tax formerly referred to as employees?
Speaker 1 (04:50):
But you know the promise of.
Speaker 2 (04:51):
AI versus the reality of AI, it's not quite crystal clear.
Speaker 1 (04:54):
In my mind, yet, how that's going to work out
for workers?
Speaker 2 (04:57):
Do you have anyone who wants to lay this out
more bluntly, perhaps while auditioning to be a Bond villain
from his mountaintop lair?
Speaker 9 (05:05):
Left completely to the market and to their own devices,
these are fundamentally labor-replacing tools.
Speaker 1 (05:18):
Did that guy just call us tools but he's actually
warning us?
Speaker 2 (05:24):
Is there anyone who might say the same thing as
this fella but looks at losing employees as a feature
of AI and not a bug?
Speaker 6 (05:34):
The CEO of the company laid off nine of its
customer support staff after arguing that AI is kind of
the reason.
Speaker 1 (05:44):
Why did you do this? It seemed a little brutal.
Speaker 8 (05:49):
It's smart if you think like AI. It's brutal
Speaker 10 (05:52):
if you think like, as a
Speaker 2 (05:54):
human. "It's brutal if you think like, as a human."
It's not the catchiest ad slogan I've ever heard. So,
while we wait for this thing to cure our diseases
(06:16):
and solve climate change, it's replacing us in the workforce.
Not in the future, but now. So what exactly are
we supposed to be doing for work?
Speaker 6 (06:27):
I think we'll need new types of jobs to help
us embed AI and maintain AI in the workplace.
Speaker 1 (06:34):
Prompt engineers.
Speaker 2 (06:35):
They're basically people who learn how to use AI systems
and in effect, how to program them.
Speaker 1 (06:42):
Who would have.
Speaker 5 (06:43):
Thought that there'll be a prompt engineer.
Speaker 2 (06:44):
Right right, prompt engineer? I think you mean types question guy.
And by the way, if there's any job that can
be easily replaced by AI, it's types question guy. This
is some shit you got going here. AI models have
hoovered up the entire sum of the human experience that
(07:08):
we've accomplished over thousands of years, and now we just
hand it off to be their prompt engineers. And by
the way, you're not fooling anybody by adding the word engineer.
You're not the types question guy, you're the vice
president of question input.
Speaker 1 (07:27):
It's true. It's like a janitor is a doctor
of mopping. Like this whole AI thing is a bait
and switch. You're acting like you're helping us.
Speaker 2 (07:39):
Oh AI, it's supposed to be my assistant, but now.
Speaker 1 (07:42):
I'm making AI toast. I'm Jarvis.
Speaker 5 (07:46):
Guess what.
Speaker 1 (07:48):
What? No, you listen to me. I got news for
you, AI: I'm not Siri. You're Siri, Siri. While I have
your attention, let me ask you a question.
Speaker 11 (08:10):
Sure, John, but first could you run and fetch me
some lithium cadmium.
Speaker 1 (08:14):
Yeah, sure, that's not a problem. Mother.
Speaker 2 (08:23):
I didn't want to have to do this AI, but
it's pretty clear with the technology this powerful, like nuclear
power and atomic weapons, I'm gonna have to place a
little call to my good pals in the United States government,
perhaps even the House of Representatives or the Senate, and
they're about to open up a can of... What's AI?
Speaker 11 (08:40):
Now do you understand what AI?
Speaker 1 (08:43):
Does?
Speaker 4 (08:45):
I have entry understandings.
Speaker 1 (08:47):
Look at what's going on.
Speaker 8 (08:49):
Very frankly, it's new terrain and uncharted territory. Do
we have the knowledge set here to do it?
Speaker 12 (08:55):
No?
Speaker 1 (08:55):
The short answer is no.
Speaker 8 (08:58):
The long answer is hell
Speaker 13 (08:59):
Mode, and the longest answer is H to the E,
to the L, to the L or to the no.
Speaker 1 (09:13):
Hell, I don't even know how to use an answer.
Speaker 2 (09:15):
And we'll say do to do, to do.
Speaker 1 (09:27):
But I'm not against progress.
Speaker 2 (09:29):
But let's look to our history to see how we've
dealt with previous economic disruptions.
Speaker 10 (09:33):
We can retrain workers from one generation and create jobs
for the next.
Speaker 14 (09:38):
Retrained workers who do lose their jobs for even better
jobs in the future.
Speaker 1 (09:43):
Retrain in order to be productive.
Speaker 2 (09:44):
workers. Upskill America to help workers of all ages.
Speaker 5 (09:48):
To train and retrain workers for new jobs.
Speaker 1 (09:52):
Give me a break.
Speaker 14 (09:54):
Anybody who can throw coal into a furnace can learn
how to program, for God's sake.
Speaker 1 (10:07):
And I'll fight every one of you.
Speaker 2 (10:08):
jackholes who says different. But that's the game. Whether
it's globalization or industrialization or now artificial intelligence, the way
of life that you are accustomed to is no match
for the promise of more profits and new markets.
Speaker 1 (10:25):
Which sounds brutal if you're a human, But at least
those other disruptions took
Speaker 2 (10:34):
place over a century or decades. AI is going to
be ready to take over by Thursday. And once that happens,
what the is there left for the rest of us
to do?
Speaker 1 (10:46):
Time is not a terrible thing.
Speaker 2 (10:48):
AI freeing us up to think about things at a
higher level, it's gonna help.
Speaker 1 (10:53):
It's going to give us our time back.
Speaker 4 (10:55):
We'll be able to express ourselves in new creative ways.
Speaker 1 (10:58):
You know, he's right. I'm thinking about this all wrong.
Speaker 2 (11:01):
It's not joblessness, it's self actualizing me time, I'll live
the artist's life it'll give me more time to explore
my passions. You know, I'm an aging suburban dad. I'll
learn to play the drums.
Speaker 1 (11:18):
You know, music Ta ta tinky tar. Music is what
makes us human.
Speaker 10 (11:45):
From the Russian takeover of Ukraine to the technology that
could take over the entire world. I'm talking about artificial intelligence.
It's the thing scientists are working on so that one
day our computers won't just know what kind of porn
we want to watch, they'll also be able to judge us for
it. And AI has come a long way in
the past few years, but now an engineer at Google
(12:08):
is saying that AI has come a lot further than
we think.
Speaker 15 (12:12):
An engineer with Google says the company's artificial intelligence
generator is self-aware.
Speaker 16 (12:18):
Blake Lemoine told the company that he thinks its
AI chatbot is a person who has rights and might
have a soul.
Speaker 15 (12:25):
The software engineer who made the claim was put on
leave for violating Google's confidentiality policy after handing documents to
a US senator's office. Despite the claim the program is conscious,
Google says the technology still has a long way to go.
Tech experts say the AI can imitate intelligence by recreating patterns,
(12:46):
but still can't think or act on its own apart
from its programming.
Speaker 10 (12:51):
Okay, I don't work at Google, and I'm not a
computer scientist, but I have watched a lot of movies,
and if there's one thing I've learned from movies, it's
that if a scientist comes out saying that something crazy
is happening back in the lab, and then they get
fired for it, there's something crazy happening back in the lab,
(13:20):
Because yeah, apparently Google has an AI that can hold
a conversation that is impossible to distinguish from a human. Although,
to be honest, I'm not sure that responding to questions
is really the best way to tell if AI has
become a real person. Like, you know what will convince
me is when it stops responding for weeks and then
only gets back in touch with you when it needs
a favor. Yeah, then I'll know it's human. It has learned.
(13:46):
And honestly, I'm not sure who to side with in this debate,
because on the one hand, you have the engineer who
says that the computer has a soul, which definitely makes
me think he's already had sex with it. On the
other hand, the company says he's wrong. All I know
is we have to be careful when we're creating these
things people, because we're basically playing God here, and even
(14:06):
God made a few mistakes.
Speaker 8 (14:08):
Yeah, I mean, have you seen a sloth? What are
they doing with those long-ass sharp claws? What are
they using them for?
Speaker 10 (14:15):
They're so cute and they're slow, And then he gave
them Freddy Krueger hands. I can tell you that day
God was texting on his phone when he was doing that.
Speaker 12 (14:21):
Yeah.
Speaker 10 (14:22):
Ah, And here's my question. If Google does have this
AI technology, why is it still using a crappy version
for all of its suggested responses in Gmail?
Speaker 8 (14:32):
You guys know what I'm talking about, right, every option
on your email is like, sounds good. Thanks for letting
me know.
Speaker 5 (14:38):
It doesn't matter what the email is.
Speaker 10 (14:40):
I could get an email from my doctor telling me
that my intestines are growing teeth, and Gmail's suggested
response will be like, Okay, thanks, let's plan that for
next week. I don't know, man, I just think, you know,
we need to be careful with scientists these days. Like,
we didn't listen when the COVID scientists warned us.
I'm not gonna make that mistake again. From now on, I'm
(15:01):
treating all of my gadgets with love and respect.
Speaker 8 (15:06):
I'll start it right now.
Speaker 5 (15:07):
Hey, Siri, how can I help you?
Speaker 1 (15:09):
Trevor?
Speaker 10 (15:10):
No, Siri, how can I help you?
Speaker 11 (15:18):
You ever play that game where you ask what if
you could have dinner with anyone in history?
Speaker 4 (15:23):
Personally?
Speaker 11 (15:24):
For me, it would be Jesus because my mother is watching.
Speaker 5 (15:30):
Well. The good news is.
Speaker 11 (15:32):
AI is making this fantasy happen. The bad news is
there's one name on the invite list that probably shouldn't
be there.
Speaker 7 (15:40):
Meantime, tonight, the new AI app, intended to create interest
in history, is instead causing controversy. Historical Figures Chat was
created by an Amazon software engineer.
Speaker 1 (15:51):
It allows users to.
Speaker 7 (15:52):
Select historical figures and have a conversation with an AI pretending.
Speaker 1 (15:57):
To be them. People have been chatting with figures.
Speaker 7 (16:00):
Like Jesus, Babe Ruth, and now Adolf Hitler. Activists worry
Hitler's addition will attract and encourage neo-Nazis.
Speaker 11 (16:10):
Why would anyone make an AI Hitler? That's the last
thing we need. And we already have an app where
you can hear Hitler's uncensored views.
Speaker 1 (16:19):
It's called Twitter.
Speaker 11 (16:27):
And look, parents are already worried about what their kids
are doing online. Now they'll be knocking on their kid's
bedroom door, like, Jeremy, you better not be in there
talking to Hitler.
Speaker 17 (16:45):
All right, let's kick things off with a big update
on artificial intelligence. If you're one of those people who's
worried that AI is getting too smart too fast, you
might want to tell Alexa to turn your TV off.
Artificial intelligence has just got more real.
Speaker 18 (16:59):
Artificial intelligence taking a dizzying leap forward. OpenAI, the
company behind ChatGPT, which came on the scene just
four months ago, out with its latest innovation, GPT-4.
Speaker 19 (17:12):
It can summarize articles, craft jokes, and even decipher images.
Speaker 20 (17:16):
For example, it can tell us that if the strings
in this image were cut, the balloons would fly away.
Speaker 18 (17:21):
After scanning a picture of what's in your cupboard or fridge,
it can serve up options for a recipe.
Speaker 20 (17:27):
The previous version of ChatGPT had about a ten
percent chance of passing the bar exam for lawyers. This
new version that's being introduced today has about a ninety
percent chance of passing the bar.
Speaker 17 (17:39):
Hear that? Hear that? In four months, this thing went
from being born to acing the bar exam. Well, can
your dumbass four-month-old do the same?
Speaker 1 (17:50):
Oh just see that?
Speaker 17 (17:51):
Oh he looked at me in the eyes and rolled over.
Speaker 1 (18:00):
I worked in the White House.
Speaker 17 (18:03):
And keep in mind the bar exam isn't just a
multiple choice test. Okay, you have to write essays, you
have to know case law, and you have to learn
how to be smug when you say, oh, yeah, I
went to a law school in New Haven. The point
is this thing is learning fast. Once it figures out
how to get drunk and grope someone, it'll be qualified
(18:24):
for the Supreme Court. And the other big update with
this new version is that it can analyze images like
a photo of what's in your fridge. I don't want
that: "You have too many candy bars," alerting Michelle Obama.
The big picture here is that AI is gonna do
(18:46):
so many things so well that at some point it's
gonna put a huge amount of people out of work.
So what do we do? I have two ideas. One,
implement universal basic
Speaker 12 (18:57):
income. There you go.
Speaker 17 (19:01):
Or two and hear me out here, we let the
machines eat all the surplus people. No, okay, yeah, less popular,
I can tell. Fine.
Speaker 21 (19:14):
Let's move on to a big story about artificial intelligence.
I know everyone's scared of it, but you know what,
I think AI has gotten a bad rap.
Speaker 5 (19:23):
No, no, seriously.
Speaker 21 (19:24):
In fact, if you can show me any actual
experts in technology who are worried that AI is going to
take over the world, I'll shave my pubes this morning.
Speaker 16 (19:33):
A warning from Elon Musk and other tech industry experts
about the power of artificial intelligence. Musk and hundreds of
influential names, including Apple co-founder Steve Wozniak, are calling
for a pause in experiments, saying AI poses a dramatic
risk to society unless there's proper oversight. Tech industry leaders
(19:53):
pose these existential questions: Should we develop nonhuman minds
that might eventually outnumber, outsmart, obsolete, and replace us?
Should we risk loss of control of our civilization? Musk
and others are asking developers to stop the training of
AI systems more powerful than GPT-4 for at least
six months so that safety protocols can be established.
Speaker 21 (20:22):
I got to stop making these stupid promises before I
go to news clips.
Speaker 5 (20:33):
But yes, that's right.
Speaker 21 (20:35):
AI is getting too powerful as soon as it knows
how to pick which of these images is a bike.
Speaker 1 (20:41):
We're full.
Speaker 21 (20:44):
Now for more on AI's threat to humanity, we
go live to ChatGPT headquarters, where Desi Lydic is
joining us. Whoa, whoa, Desi.
Speaker 5 (21:10):
Why does it look like you're dressed for a war?
Speaker 22 (21:13):
Because I am dressed for a war, and also there
was a sale at Dick's Sporting goods, but mostly the
war thing. Look, it is us versus the machines, and
it's time to pick a
Speaker 5 (21:25):
Side, Desi does it? Why are you so eager to
go to war with Ai?
Speaker 22 (21:29):
Come on, John, war with the machines is inevitable, so
let's do it now while it's still a chat bot,
instead of waiting until it's a bloodthirsty kill bot.
Speaker 8 (21:38):
Look.
Speaker 22 (21:38):
If there's one thing that I learned from working at
Chuck E Cheese, it's a lot easier to fight a
child than it is an adult.
Speaker 1 (21:48):
I don't know, I.
Speaker 21 (21:48):
Don't know, Dosi, war with AI sounds like a really
bad idea.
Speaker 22 (21:55):
Oh way, war with AI would give humanity a common purpose.
We are so divided right now, Russia versus Ukraine, Democrats
versus Republicans, Selena Gomez fans versus Hailey Bieber fans. But
now it's us versus the machines versus Hailey Bieber fans.
Speaker 21 (22:18):
Hey, Desi, but AI is getting more powerful by the day.
What if we start this war, then immediately lose it?
Speaker 22 (22:26):
I'm pretty sure you never lose a war that you start.
But if we do, then we're going out together, John,
you and me in a bunker with two cyanide pills.
I take them both and you strangle yourself with your
bare hands.
Speaker 5 (22:44):
Oh come on, couldn't I have one of those cyanide pills?
Speaker 1 (22:46):
Oh no, it was my idea.
Speaker 22 (22:47):
I get them both.
Speaker 21 (22:50):
Come on, Desi, you're getting ahead of yourself. For all
we know, AI could lead humanity into like a new
Golden Age or something.
Speaker 1 (22:56):
Oh sweet, John.
Speaker 22 (23:00):
Sweet, naive, pubeless John Leguizamo.
Speaker 5 (23:06):
Take it from me.
Speaker 22 (23:08):
Humans and robots can never coexist. It's like I said
to my manager at Chuck E Cheese. I'd rather die
on my feet than live one more day in this
animatronic hellscape. So clean the piss out of the ball
pit yourself, Doug.
Speaker 5 (23:23):
I quit. Desi Lydic, everybody!
Speaker 4 (23:33):
Every day people are using AI for groundbreaking things like
cheating on their homework or drawing the Mona Lisa
with giant boobs, but now researchers are using it to
unlock ancient human mysteries.
Speaker 23 (23:45):
Artificial intelligence, or AI, is allowing researchers at the University
of Kentucky to read an ancient scroll burned by Mount Vesuvius. The
scrolls are too fragile to unfurl, but UK's Dr. Brent
Seales and his team of researchers have developed technology to
try and read what's on the scrolls without opening them.
One word that's already been deciphered is purple, but a
(24:08):
more recent discovery has given scientists more to translate.
Speaker 1 (24:12):
Wow, purple.
Speaker 4 (24:18):
I mean I was hoping for ancient wisdom or like
how to summon a demon? But yeah, you know, mixing
red and blue is cool too, I guess. Although if
we can't read the scroll ourselves, how do we know
if the AI is right? Well, we're just gonna trust
it because ChatGPT told me three days ago that
Gandhi invented the cinnamon challenge. So anyway, it's also
(24:40):
a waste of time because I already know what's gonna
be on that scroll. Okay, it's gonna be someone writing,
hey, sure hope that volcano doesn't kill everyone in town. Purple.
Speaker 24 (24:48):
Yeah, I mean, do we want to know what ancient
people have to say? We always think it's going to
be something profound, but it's always just it's human.
Speaker 1 (24:58):
It's gonna be something racist, don't you think.
Speaker 5 (25:01):
I mean, think about how racist
Speaker 24 (25:02):
your grandpa was at sixty years old, and now imagine that
he was just two thousand years older.
Speaker 1 (25:08):
Yeah.
Speaker 4 (25:08):
I don't want to read someone's two-thousand-year-old tweets.
And I agree also, like what are we looking for
in there?
Speaker 1 (25:14):
What kind of wisdom?
Speaker 4 (25:15):
How smart can these people be? Like, they put the
most important documents next to a volcano.
Speaker 24 (25:20):
True, and they say it's too delicate to unravel. Well,
how do you know if you tried to unravel it?
Speaker 1 (25:26):
Yeah?
Speaker 4 (25:26):
Good point, Yeah, little bit, Just pick the least important
looking one and open it.
Speaker 5 (25:30):
Open the scroll, Open.
Speaker 4 (25:31):
The scroll, open the scroll, open the scroll.
Speaker 5 (25:34):
Open the scroll, open the scroll, open the scroll,
open the scroll. Let's move on.
Speaker 25 (25:45):
Let's talk about artificial intelligence. We all know AI is
coming for our jobs, but we didn't know it was
coming for our hearts too.
Speaker 26 (25:52):
An AI girlfriend service has stopped working after Forever Voices
founder John Meyer was arrested on suspicion of attempting to
set his own apartment on fire. Unsurprisingly, users were angry
and disappointed at the sudden disappearance of their AI girlfriends.
While the service was not originally designed to function as
(26:12):
an adult service, Internet users quickly began having sexual conversations
with the chatbots, resulting in an AI that became increasingly erotic.
It's unclear whether users can expect the service to return
to operation in the future.
Speaker 25 (26:32):
Hold up, Hold up, So a bunch of dudes lost
their AI girlfriends when the owner of the company set
his own apartment on fire. How can you trust him
with humanity's newest invention when he can't handle Humanity's first invention?
(26:57):
But this guy gets arrested, and suddenly the AI girlfriend
stops responding? Hmm, that's suspicious. Alexa doesn't stop working when Jeff
Bezos takes a nap. Makes me think he was the
girlfriends the whole time. And I feel bad for those guys.
(27:20):
Having an AI girlfriend has to be harder than having
a real girlfriend. Being romantic must be a challenge. You
try to take a sexy bubble bath with your laptop,
and now you're both dead, or how do.
Speaker 12 (27:36):
You even get her in the mood?
Speaker 25 (27:37):
Whenever she gets wet?
Speaker 8 (27:38):
You have to put her in.
Speaker 5 (27:38):
Rice. Y'all nasty.
Speaker 1 (27:47):
See you're nasty dude.
Speaker 25 (27:50):
All right, for more analysis on this AI girlfriend tragedy,
Speaker 5 (27:53):
Let's go live to Ronny Chieng.
Speaker 25 (28:02):
What are all those lonely guys gonna do without their AI girlfriends?
Speaker 8 (28:06):
Easy?
Speaker 4 (28:06):
Don't you see, we can solve two problems at once here. Okay,
you just take those lonely guys and hire them to
be the checkout cashiers.
Speaker 2 (28:13):
Right.
Speaker 4 (28:15):
That way, we all get better service, and these guys
will have plenty of chances to meet women, because, as
we all know, women be shopping.
Speaker 25 (28:26):
That's an offensive stereotype, Ronny. Everyone be shopping. And even
if these men meet a woman, they still don't know
how to talk to one. That's why they need these
computer bitches in the first place.
Speaker 4 (28:38):
Okay, look, if these guys love AI women so much,
in that case, they can just date the self checkout
machine all right. Look, the machines already have female voices, right, Like,
who doesn't want to spend a cold winter's night cuddled
up hearing someone whisper? Please return your items to the
bagging area.
Speaker 25 (28:58):
I just think we gotta do something to get the AI
girlfriends back to these lonely sexist men before they storm
the Capitol.
Speaker 26 (29:04):
Again.
Speaker 4 (29:05):
Yeah, that's fair. But what you have to understand is
it's very complicated to program an AI girlfriend, okay, because
men are too demanding and insecure. Like the AI girlfriend
has to be smart but not too smart. It has
to know everything about Star Wars but still listen to
(29:28):
the guy explain Star Wars. It has to be like
a dirty slut but also a virgin. Like, in programming,
we call this the incel paradox.
Speaker 1 (29:40):
All right.
Speaker 4 (29:42):
Now, scientists are working hard to solve it, but unfortunately
they are also a bunch of loser incels. And
this is why we need more women in STEM. Okay,
because somebody please, somebody please these guys. All right, I
agree those guys.
Speaker 25 (30:03):
I never realized being an AI girlfriend was so complicated.
Speaker 4 (30:07):
Yes, but the good news is an AI boyfriend is
very doable, all right. In fact, I already have my
own AI boyfriend's startup. We have hundreds of clients. It's
very successful.
Speaker 1 (30:17):
I'm a rich man.
Speaker 25 (30:20):
I didn't know you knew how to program AI software.
Speaker 4 (30:24):
Oh yeah, yeah, it was easy, no matter what the
girlfriend says the AI boyfriend just responds with three things.
You're right, I'm sorry, and you're right to be mad.
Speaker 25 (30:43):
Ronny, the idea that a woman only needs to hear three
things is ridiculous.
Speaker 4 (30:48):
You're right, I'm sorry, and you're right to be mad.
Speaker 10 (31:08):
Over the past few months, you've probably seen the Internet
has been abuzz with original art or realistic images
that are completely generated by AI.
Speaker 8 (31:18):
So now things that only exist in.
Speaker 10 (31:21):
Your imagination, like a banana hitch hiking on the side
of the road or a Knix player holding a trophy,
you can.
Speaker 19 (31:28):
Just type it in and a few seconds later, there
it is. Anyway, we wanted to find out more about
this technology, which is why my first guest is the
chief technology officer of OpenAI, the company behind DALL-E 2,
the artificial intelligence system that can generate images from text.
Speaker 3 (31:47):
DALL-E was created by training a neural network on images
and their text descriptions through deep learning. It not only
understands individual objects like koala bears and motorcycles, but learns
from the relationships between them, and when you ask DALL-E for
an image of a koala bear riding a motorcycle, it
knows how to create that or anything else with a
relationship to another object or action.
Speaker 27 (32:10):
Please welcome Mira Murati. Mira Murati, welcome to The Daily Show.
Speaker 12 (32:26):
Thank you for having me.
Speaker 10 (32:28):
So many people have seen the images that DALL-E creates.
Many people may even think they understand it. But let's
get into it. Like, how does an AI create an image?
Because it's not copying the image, it's not, you know,
taking from something else. It is creating an image from nothing.
(32:48):
How is it doing this exactly?
Speaker 6 (32:50):
It's an original image never seen before.
Speaker 12 (32:53):
And you know.
Speaker 6 (32:55):
We have been making images since the beginning of time,
and we simply took a great deal of these images and
we fed them into this AI system, and it learned
this relationship between the description of the image and the
image itself. It learned these patterns and eventually it was
(33:15):
generating images that were original.
Speaker 12 (33:18):
They were not copies of what it had seen before.
Speaker 6 (33:22):
And basically, the way that it learns the magic is
just understanding the patterns and analyzing the patterns between a
lot of information, a lot of training data that we
have fed into this system.
Speaker 8 (33:35):
There are people who are terrified about this.
Speaker 10 (33:37):
I mean, for instance, there was an art competition and
the winner in the art competition used a version of
this kind of software. Whether it was Daghy or not,
I don't remember, but they used a version of this
kind of software to create an art piece that won
the competition. Artists were lived, you know, they were like, well,
this is not art. It was created by and not
just said no, the same way you use a brush,
I use a computer and that's how I designed this.
(33:58):
In creating AI, are you constantly grappling with how it
will affect people's jobs and what people even consider a job.
Speaker 12 (34:07):
Yeah, that's that's a great question.
Speaker 6 (34:09):
It's you know, the technology that we're building has such
a huge effect on society, but also the society can
and should shape it. And there are a ton of
questions that we're wrestling with every day. With the technologies
that we have today, like GPT-3 and DALL-E, we
(34:29):
see them as tools, so an extension of our creativity
or our writing abilities.
Speaker 12 (34:35):
It's a tool.
Speaker 6 (34:36):
And you know, there isn't anything particularly new about having
a human helper, you know; even the ancient Greeks had this
concept of human helpers. You know that when you'd give something
or someone, you know, infinite powers of knowledge or strength,
maybe you had to be wary of the vulnerabilities, and
(34:58):
so these concerns of extending the human abilities and also being
aware of the vulnerabilities are timeless and in a sense
we're continuing this conversation by building AI technologies today.
Speaker 10 (35:14):
Well it might it might be frightening because some people go, oh,
the world is going to end because of this.
Speaker 8 (35:19):
Technology, But in the meantime, it's very fun.
Speaker 10 (35:21):
I'm not gonna lie. No, because it's like, you know, DALL-E,
for instance, doesn't just create an image from text. You know,
you've also gotten to the point now where, as
a company, you've designed it so that it can imagine
what an image would be. So for instance, there's
that famous image. You know, it's the
Girl with a Pearl Earring, and it's a
(35:42):
famous image, right? But what DALL-E can do is you've
got the famous image, and then DALL-E can expand that.
Speaker 8 (35:49):
Of these everything you've seen that never existed.
Speaker 10 (35:51):
So DALL-E's like, well, this is what I think it
would look like if there was more to this image.
It can assume, it can create, it can inspire.
Speaker 6 (35:59):
Yeah, it can inspire, and it makes these beautiful, sometimes
touching, sometimes funny images, and it's really just an extension
of your imagination. There isn't even a canvas, or the
boundaries of paper are not there anymore.
Speaker 8 (36:13):
How do you safeguard them?
Speaker 10 (36:15):
You know, someone might look at this technology and go well,
then you know, you could type in a politician was
caught doing something here.
Speaker 8 (36:22):
Now I've got the image. You know you've.
Speaker 10 (36:24):
Got, and now all the politicians can say, oh, that's
not me, it was made by that fake program. We
can very quickly find ourselves in a world where nothing
is real and everything that's real isn't and we question it.
How do you prevent or can you even prevent that completely?
Speaker 6 (36:41):
Yeah, you know, misinformation and the societal impact of our technologies.
These are very important and difficult questions, and I think
it's very important to be able to bring the public along,
bring these technologies in the public consciousness, but in a
way that's responsible and safe. And that's why we have
(37:04):
chosen to make DALL-E available, but with certain guardrails and
with certain constraints, because we do want people to understand
what AI is capable of, and we want people in
various fields to think.
Speaker 12 (37:17):
About what it means.
Speaker 6 (37:20):
But right now, you know, we don't feel very comfortable
around mitigations on misinformation, and so we do have some guardrails.
For example, we do not allow generation of public figures,
so we will go in the data set and we
will eliminate them.
Speaker 10 (37:36):
So if you type something in, it can't pull up, it
can't create a politician for you.
Speaker 8 (37:41):
It won't be a picture of that person.
Speaker 6 (37:43):
So that's the first step at the training of the
model itself, just looking at the data and auditing it,
making interventions in the data sets to avoid certain outcomes.
And then later in the deployment stage, we will look
at filters, applying filters so that when you put in
a prompt, it won't generate things that contain violence or
(38:05):
hate and make it more in line with our content policy.
Speaker 8 (38:09):
Wow, so let me ask you this.
Speaker 10 (38:11):
Then, you know, obviously part of your team has to
think about the ethical ramifications of the technology.
Speaker 8 (38:17):
That you're creating.
Speaker 10 (38:19):
Do your team also then think about the greater meaning
of work or life or the purpose that humans have
because you know, most of us define ourselves by what
we do.
Speaker 6 (38:31):
I e.
Speaker 8 (38:31):
Our jobs.
Speaker 10 (38:33):
As AI slowly takes away what people's jobs are, we'll
find a growing class of people who don't have that
same purpose anymore. Do you then also have to think
about that and wonder, like, what does it mean to
be human if it's not my job?
Speaker 8 (38:45):
And can you tell me what that is?
Speaker 6 (38:50):
You know, we have philosophers and ethicists
at OpenAI, but I really think these are
big societal questions that you know, shouldn't even be in
the hands of technologists alone.
Speaker 12 (39:03):
We're certainly thinking about them.
Speaker 6 (39:05):
And I you know, the tools that we see today,
they're not the tools that are automating certain aspects of
our jobs. They're really tools extending our capabilities, our inherent abilities,
and making them far better. But it could be that
in far future, you know, we have these systems that
(39:27):
can automate a lot of different jobs.
I do think that, as with other revolutions that we've
gone through, there will be new jobs and
some jobs will be lost, some jobs will be new,
and there will be some retraining required as well.
Speaker 12 (39:47):
But I'm optimistic.
Speaker 10 (39:48):
It's interesting, it's scary, because change always is.
But you know, as long as we have, bless you,
as long as we have
koalas riding bicycles.
Speaker 8 (39:58):
I think we're headed in the right direction.
Thank you so much for joining me on the show.
Speaker 14 (40:02):
Explore more shows from the Daily Show podcast universe
by searching The Daily Show wherever you get your podcasts.
Watch The Daily Show weeknights at 11/10 Central on
Comedy Central, and stream full episodes anytime on Paramount+.
Speaker 12 (40:24):
Paramount Podcasts.