
January 8, 2025 38 mins

Robert and Garrison are joined by Ed Zitron and Ed Ongweso Jr. to discuss the future of AI entertainment at the Consumer Electronics Show in Las Vegas.

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Cool Zone Media.

Speaker 2 (00:05):
Welcome back to It Could Happen Here, a podcast about it, which,
in this week's case, is the Consumer Electronics Show
happening here. And yeah, we're here to talk about things
falling apart, and again, in this case, that's the tech industry,
because the story this CES, as it has been for
the last several CESes, is the continuing degradation

(00:25):
of big tech as it seeks more places to get
money from while providing less and less utility to the
people that it needs to give it money. And every
CES, at some point, I find myself face to face
with something that makes me say, I've now seen the
silliest thing I've ever seen. And this year that experience
happened for the first time within thirty minutes of the

(00:46):
first half day. And I'm going to talk about that
and show some videos to my panelists here, which of
course are the great Ed Zitron, It's me, I'm here,
the pretty good Garrison Davis.

Speaker 3 (00:58):
Okay, okay, all right, all right, Boddy them.

Speaker 2 (01:02):
And the super, enumerate, supernumerary. I'm sorry, I messed up
the word I was using as a superlative to praise you,
Ed Ongweso Jr. Thanks, Ed, thank
you so much for joining us. Everybody, are you ready
to see some of the dumbest AI generated videos.

Speaker 4 (01:18):
Few things fill me with more pleasure.

Speaker 2 (01:19):
Excellent, excellent.

Speaker 1 (01:20):
Nothing fills me with pleasure.

Speaker 2 (01:22):
The first panel I sat down today with at ten
am in the goddamn morning Jesus, was the Hollywood Trajectory
Generative AI Timeline twenty twenty five to twenty thirty.

Speaker 3 (01:32):
Oh boy, I am fascinated by what they think will
happen by twenty thirty.

Speaker 2 (01:37):
Everything's just gonna get better, Garrison. This panel featured a
number of luminary thinkers, including Mary Hamilton, a managing director
at Accenture, who announced her company's three billion dollar
investment in AI by dropping this gem: I have a
digital twin.

Speaker 5 (01:51):
And she's constantly evolving and how she gets used and
what she says, and.

Speaker 3 (01:57):
There's big educations around that. I think this is a
really exciting space to.

Speaker 4 (02:02):
Do that, like that she just stole Hurley Herndon's thing.

Speaker 5 (02:06):
But Okay, they probably said that to a doctor.

Speaker 2 (02:10):
They think I had a concussion.

Speaker 3 (02:12):
Sure what this person needs like psychological.

Speaker 2 (02:15):
Yeah, you shouldn't be allowed to drive. You need, you
need a brain scan, kid.

Speaker 6 (02:20):
Okay, let's get you.

Speaker 1 (02:21):
Let's get you, sit down, and.

Speaker 2 (02:23):
We're taking the phone away from you. Now. I think
this is very silly because again I think it. Yeah,
it's just a fundamental mismatch in what people might want
from an AI agent and like the way in which
they get talked about.

Speaker 1 (02:34):
But also they use digital twin, which is enterprise
software shit.

Speaker 6 (02:38):
Yeah, oh my god.

Speaker 2 (02:39):
Yeah, it's, it's, it's, I'm excited to go see some
digital twin technology that I'm sure will make a cheap
at switching.

Speaker 3 (02:47):
This was, this is the first thing I reported on
at CES. There was the digital twin. Like back
in like twenty twenty two, twenty twenty one, there was like
one single company in all of CES that was
promising like a digital twin, and now it's like every
other company. Yeah.

Speaker 6 (03:00):
It means so many different things.

Speaker 5 (03:02):
It means literally a digital representation of anything. It doesn't
even mean an AI agent. The fact that they're using
it in the wrong place is very annoying to me.

Speaker 4 (03:09):
Yeah.

Speaker 2 (03:09):
I keep saying like they can now make an AI
chatbot trained off of your social media presence. That's eighty
five percent accurate.

Speaker 7 (03:17):
As all twins are. And I want to note, they can't.

Speaker 2 (03:21):
But then you talk to the average person at CES
or the average panelist on this particular panel, I'm like, yes,
I do believe. In fact, everyone on that panel,
you could accurately, you could accurately get eighty five percent of
their personality with the chatbot. For a bit, maybe a
lot higher. Improvement. Yeah, yeah, so I will say, like,

(03:41):
that was silly. That's not the silliest thing I saw.
The silliest thing I saw came courtesy of another panelist,
Jason Zada, founder of Secret Level and COO of the company.
The videos that Jason came to CES to brag about
were a collection of the laziest AI slop ever to
stain human eyeballs. His most recent big success, that

(04:03):
you could just see radiating off of him how proud
he was of this was Coca Cola's annual Christmas ad,
which last year was produced for the first time entirely
with AI. And I'm just gonna, if you haven't seen this,
who here's seen Coca-Cola's AI ads?

Speaker 3 (04:20):
I guess yeah, I've seen pictures. I think I may
have watched the one.

Speaker 8 (04:24):
Okay, well, let's let's let a few times the amounts.

Speaker 2 (04:28):
We're gonna play. There's three different versions of this, so
why we're just gonna I.

Speaker 5 (04:32):
Mean, that's that's what it's about out Oh my god.
If there's three different versions that that that's just they
saved the pro Everyone is the same length of show.

Speaker 2 (04:54):
Can you believe this song's AI generated? I can't believe
the cow Could they teach a computer to write the lyrics?
Holidays are?

Speaker 5 (05:03):
I just can't believe we finally have the technology to
have three trucks driving somewhere and.

Speaker 2 (05:08):
Wagging its tail with dead eye. It's all these too
horrible girls, trucks with Coca cola and them driving down
not a street, raccoons.

Speaker 5 (05:23):
Why is there a satellite? Oh, they're gonna drop the
ion cannon on them, the polar bears.

Speaker 2 (05:32):
It's all clearly AI. It's all glowing, like the city
shots of like snow covered villages. And, as we're
going to see in later videos, AI loves putting
smoke and random fires where there should not be smoke and
random fire. Kris.

Speaker 3 (05:48):
Kringle, that's such a bad omen for four more years
of a Trump presidency.

Speaker 2 (05:52):
It's a bleak we have like.

Speaker 3 (05:54):
even uglier Thomas Kinkade-esque artwork. That's all.

Speaker 2 (05:58):
Every frame looks like a tomly animated.

Speaker 3 (06:01):
Yeah, it's like they just generated a Thomas Kinkade-like
frame and then like badly animated it.

Speaker 5 (06:05):
And the way that they move is very weird, like
it looks kind of right, but kind of right, looks
very strange.

Speaker 2 (06:11):
He does that all of the scenes because it's like
showing you a bunch of you see like a polar bear.
Obviously it's a Coca Cola Christmas ad. You see like
a fucking reindeer, you see squirrels, you see a dog.
But it always is like this very AI shot where
it just pans across the animal, and it's like glowing
and kind of glossy and staring too much.

Speaker 5 (06:31):
But they're not going anywhere with the movement. It's just
like they are doing something and that's it.

Speaker 2 (06:35):
Yeah.

Speaker 1 (06:35):
You think in ten years they're still gonna have these commercials?

Speaker 6 (06:37):
No, No, because where's the snow.

Speaker 1 (06:39):
It's just a polar bear walking around like.

Speaker 2 (06:41):
System1, which tests emotional responses to ads, claims that
the initial response to their Christmas ad was overwhelmingly positive.

Speaker 3 (06:48):
I don't think they're lying about that. I think if
you walk up to someone like randomly on the street
and showed them this, I think they'd be like, oh, yeah,
it looks fine.

Speaker 2 (06:58):
Yeah.

Speaker 5 (06:58):
No one's watching a Coca-Cola ad and being like, yeah, wow,
I've never had one of these before.

Speaker 6 (07:03):
Yeah, it's not it's never a new experience.

Speaker 4 (07:06):
Not yet. We need an ad man.

Speaker 6 (07:08):
You need an adman for the coke holdouts.

Speaker 4 (07:11):
We need an ai Don Draper.

Speaker 3 (07:12):
Yeah, well, do not give them ideas.

Speaker 2 (07:16):
What if a.

Speaker 4 (07:16):
Company lost five billion dollars, It's just an ad that
doesn't work. Instead of going to the movies like Don
Draper does throughout all.

Speaker 7 (07:22):
Of Mad Man, it just doesn't work and respond to
any of your queries.

Speaker 2 (07:25):
Just Don Draper spending hours watching that looping Christmas video.

Speaker 1 (07:30):
Staring into nothing.

Speaker 2 (07:31):
Yeah. So there was like an immediate, pretty immediate backlash
to this, like all of the responses, if you like,
go to any of like where these things live on YouTube,
it's just people shitting on them. Which Jason did acknowledge
by saying the video was very debated.

Speaker 6 (07:46):
Yes, classic thing with commercials.

Speaker 3 (07:49):
Debating commercials, many things they're very debated these days.

Speaker 2 (07:52):
A lot of people are saying. And then he showed
us next an AI generated video, The Heist, which was
entirely made from a text script that itself was mostly
written by ChatGPT. And here's how Jason describes the
workflow for what you're about to see: It took thousands
of generations to get the final film. I'm absolutely
blown away by the quality, the consistency, and adherence to

(08:14):
the original prompt. When I described gritty New York City
in the eighties, it delivered in spades, consistently. While this
is not perfect, it is hands down the best video
generation model out there by a long shot. Additionally, it's
important to note no VFX, no cleanup, no color correction has been added.
Everything is straight out of Veo 2, Google DeepMind.

(08:38):
So what is the model? Veo 2, Google DeepMind,
I think is what he's saying.

Speaker 1 (08:41):
It is one.

Speaker 5 (08:43):
By the way, I'm sure what you're about to show
me looks like a dog's arse.

Speaker 2 (08:47):
It looks like, yeah, New York, exactly like New York
before Giuliani came in and cleaned it up. Uh huh.

Speaker 3 (08:54):
So this is like the competitor to Sora, I guess.
That is the other big like video generation one. I don't.

Speaker 2 (09:00):
buy it for a second, and I'm not impressed, but we'll see
what you guys think. I don't want to poison your views.

Speaker 6 (09:06):
I wouldn't. Oh god, okay, there.

Speaker 3 (09:10):
Is fire in this.

Speaker 2 (09:11):
The last time you're going to see the sack full
of money. It does not show up again.

Speaker 3 (09:15):
It's a lot of a lot of fires, a lot
of random fire and.

Speaker 6 (09:19):
The wheels go backwards when they're driving forwards.

Speaker 2 (09:23):
Again, another straight fire.

Speaker 6 (09:25):
I would love to do freeze frames on this.

Speaker 2 (09:28):
Actually it's in gossip. Why are there so many fires?
Just, all right, let's take a shot every time. Oh
my god, and also take a shot every time he
is wearing different clothing and has a clearly different face.

Speaker 3 (09:41):
The car has changed.

Speaker 2 (09:41):
Come on, he's praising the consistency, and it is, he
is dressed completely differently every scene.

Speaker 3 (09:48):
His jacket has, has, has changed since the last one? Yeah?

Speaker 2 (09:51):
Yeah, and again the cop car the cars. When it
shows the cars dragging across the screen, they're kind of
doing the same thing usually that the animals do in
the coke.

Speaker 4 (10:00):
And minimal motion at best.

Speaker 2 (10:04):
Yeah, I also love this. Can you believe this music?

Speaker 5 (10:10):
You also want to just say when it swivet hit
that thing he was driving like half a mile on it.

Speaker 6 (10:14):
Yeah, that's how I run.

Speaker 2 (10:17):
Yeah, look an obviously different man.

Speaker 6 (10:20):
That's the way he runs. Was like he had his
arms out.

Speaker 2 (10:26):
Two cops three Actually, look they run.

Speaker 3 (10:31):
Running is very funny.

Speaker 2 (10:32):
Yeah, this is different.

Speaker 6 (10:34):
Okay, what is going on with his feet.

Speaker 3 (10:36):
And different levels of facial hair, different, different jackets, he's
wearing different colored jackets, vague definiteness in this, just move?

Speaker 6 (10:46):
What the fuck is going on?

Speaker 2 (10:48):
Oh my gosh, it says directed by Jason Zada
in big flaming words, because again, the AI only knows
how to put random fires on.

Speaker 5 (10:57):
Wow, I'm so glad that we have the technology to make
a thing where a guy gets chased by the police.

Speaker 2 (11:01):
Yeah, we couldn't. This would have been impossible before.

Speaker 5 (11:04):
Because he runs at anywhere from one to one hundred
miles an hour.

Speaker 3 (11:07):
I assume they just trained they like this was specifically
like pulling on like Scorsese movies a lot.

Speaker 5 (11:13):
I just want to know about these thousands of generations
of script because.

Speaker 3 (11:16):
That is interesting.

Speaker 5 (11:17):
I am very curious because I just don't believe that
for did he just uh read there?

Speaker 7 (11:23):
Yeah no, that's the opening crawl to just like some,
uh, generated Star Wars.

Speaker 3 (11:30):
It seems like shot by shot, right, each each shot
is going to require a lot of like iterations. The
script. It's just, yeah, I mean again, like, unpacking
what he actually is saying is unclear, because I.

Speaker 2 (11:41):
Went to the YouTube video for this and the first
five or four comments are: looks like we found the
new king of video. Jesus Christ, give it a rest.
Clothes change in every shot. Four to six year old boys,
you're gonna love it. And: still lacks character and vehicle consistency,
but we're getting close. Which.

Speaker 3 (12:01):
Is which is the.

Speaker 2 (12:04):
By twenty thirty, you're going to make a man wear the
same clothes for an entire video.

Speaker 5 (12:09):
Oh this is, this has happened before with Sora. When
they put Sora out there, like, check out Air Head, man,
God, and the balloon changes every single shot.

Speaker 6 (12:19):
It's a different size and color each time.

Speaker 1 (12:21):
There are just people running in the background.

Speaker 6 (12:23):
Sometimes and then they.

Speaker 5 (12:24):
Made a new one. You're like, oh, this is gonna
be good. It was worse and less consistent, and this
is what they think of us. They're like, these pigs
will slop up anything.

Speaker 2 (12:33):
And you can't expect technology to do something as complicated
as dress a man in clothing and have him stay
in that same clothing over multiple scenes. Hollywood never figured
it out. So cool.

Speaker 1 (12:43):
And this costs like so much money as well, just burning.

Speaker 6 (12:46):
There was some fucking.

Speaker 1 (12:47):
GPU melting in a data center in Arizona.

Speaker 7 (12:50):
The strain, Carol. It is also, there's gonna be like
thirty, forty companies trying to recreate the same misshapen wheel,
you know, for the next five to.

Speaker 5 (13:00):
Also the little pigs that watch Star Wars, including myself,
we'll notice every minor inconsistency. Do you think that they're
gonna tolerate Luke Skywalker and Watto and all.

Speaker 7 (13:10):
Their favorite characters they're gonna Do you think that they're
gonna be happy office with a cyber truck?

Speaker 6 (13:15):
That's gonna be a cyber truck situation?

Speaker 2 (13:17):
I think the issues are twofold, which is like,
number one, in order to make this shit sell to
the people who watch movies, you have to dramatically reduce
the average intelligence of people watching movies. You have to
give everyone brain damage, which except they are working in
Giant And the other thing is the models have to
get much better. And Jason made a point that like,
look every time people would like talk about the criticism

(13:39):
and be like, look, this is the worst it's gonna look, guys,
And I was just looking into it. GPT four took
fifty times as many resources and like fifty times as
much energy to train as GPT three did. So this
is the These are the kind of like exponential increases
that we're looking at. So like, if it took them
so many billions of dollars of investment to get to

(14:03):
the point where they can make this shitty video, to
make anything close to watchable, you're talking about again just
like lighting on fire, billions of dollars to do what
to make a scene that you could already get like
a twenty six year old dude who grew up watching
fucking Quentin Tarantino movies and taking cocaine, and you can
give him sixty thousand dollars and he'll film that shit

(14:24):
for you with an old car, Like.

Speaker 3 (14:26):
Yeah, I mean you could, you could even like animate it.

Speaker 7 (14:29):
M I mean, look, you give me a PS four
and somebody's grandmother and I will make them think that
they're watching that.

Speaker 4 (14:34):
No, seriously, seriously, dott six.

Speaker 5 (14:37):
But also this I just want to read out some
of the fucking people that use this model. We started
working with creatives like Donald Glover, who I said was
washed ten years ago, fucking sick of people.

Speaker 4 (14:46):
And Awaken, My Love! was a, was a good album.

Speaker 6 (14:49):
This Is America is an objectively bad song. It's a bad song
with a great video. Yeah, I thought he's like kind
of barn and Bee.

Speaker 5 (14:56):
Stuff's very interesting. Anyway, moving on. And of course, the
Weeknd.

Speaker 6 (15:00):
Sorry, the Weeknd. And some.

Speaker 5 (15:03):
great. Our work with creators on Veo 1 informed
the development of Veo 2, and we look forward to
working with trusted testers and creators to get feedback on
this new model.

Speaker 1 (15:12):
How long are you gonna get fucking feedback?

Speaker 6 (15:13):
It stinks.

Speaker 2 (15:14):
We've got some feedback from Yeah, I got thoughts.

Speaker 3 (15:17):
Hopefully those people are are just getting paid to tell
them words and be like yeah, sure, I'll take your money,
but if they could be.

Speaker 2 (15:24):
Twenty million dollars, I'm flipping the hope like just yeah, no,
I will turn on a dime. Speaking of turning on
a dime for money, here's the ads. Ah, we're back.

(15:46):
So the next video that our friend, I now feel
he is like a brother to me, Jason, put on
was of an AI generated fictional elderly rock star talking
about death.

Speaker 3 (15:57):
Oh I'm excited.

Speaker 2 (16:00):
He is plastic and incapable of dynamic
expression as he guzzles randomly from bottles of liquor that
flash in and out of existence. Sometimes he lies on
his back in empty streets while talking about all of
the all of the cgi featureless women that he has
loved in his exciting life. Other Times he plays stadium
shows while obvious GPT written dialogue about aging and death

(16:21):
drones on. When the video ends, everybody in the room claps.
And as you watch this, I need you to imagine seeing
the thing that I'm about to show you all in
a room with like two hundred people in it, all
clapping enthusiastically. I don't think I did. I did it.
I did. I said, come the fuck on as loud
as I could.

Speaker 6 (16:43):
Skywalk up.

Speaker 2 (16:44):
Yeah. So here's Fade Out and an old man. Yeah,
it looks a little bit like George Carlin. It's the.

Speaker 9 (16:51):
End O three, Like the world's just god damn big,
and you're just to go passing through them.

Speaker 3 (17:00):
M what's he doing?

Speaker 2 (17:04):
Carried my heart concerts?

Speaker 6 (17:06):
Granddad, calm down, scattered.

Speaker 5 (17:08):
I love these flash cuts, these fast cuts, these
false cuts, because the next frame was unusable.

Speaker 2 (17:14):
Yes actually yes, like that he drank and the bottle
changed in his hand. You could see it starting to happen.
What is just anonymous?

Speaker 4 (17:22):
Women destroyed it just in a beautiful music.

Speaker 2 (17:25):
Listen to that lived it to the could you believe
generated by firing a candle?

Speaker 3 (17:34):
I like?

Speaker 8 (17:35):
Also, the old man does look very different each time,
very different old man.

Speaker 3 (17:40):
That's a different that's different guy.

Speaker 2 (17:42):
Yeah, that's the Emperor from the first Gladiator.

Speaker 9 (17:45):
Shows and trotting running away from this the way this
model generates running.

Speaker 2 (17:52):
Die. There he is drinking on the fire, old rock star,
drinking in front of a flaming house. The AI
loves burning buildings. What.

Speaker 3 (18:05):
Is this voice? I would love to track his tattoos
from Praying for three.

Speaker 5 (18:09):
We'll say he's about to eat the micro different, I've
done it, yum.

Speaker 2 (18:14):
Now he's sleeping in a, in a broken Mustang, the
classic Ferrari Mustang. Mustang's in like a pool in front
of a mansion, but he clearly isn't connected to it.
The car is hovering slightly over the pool. Like, I love.

Speaker 6 (18:31):
This, I love this.

Speaker 2 (18:32):
I love him. And he tells us, he tells us
during this, as if we're supposed to be impressed, that
ChatGPT wrote seventy five percent of the script.

Speaker 3 (18:40):
Hell, I can't believe that.

Speaker 10 (18:47):
As a bartender, I regret walking into the room to
see if people want drinks.

Speaker 6 (18:51):
This is atender. I apologize. I apologize that you had
to hear a drink.

Speaker 3 (18:56):
I also would like, actually, can I have a drink too?

Speaker 6 (18:58):
We are in the that's.

Speaker 5 (18:59):
Off line, see yes, sweet, and we're all drinking because
I just want to say I'm fucking disassociating off that.
I'm so fucking saying every a year of doing this nonsense.
And I look at these chit eaters and they show
us that and they like slop down the slop Oh my.

Speaker 2 (19:13):
God, it's it's it's hitting the easiest.

Speaker 1 (19:16):
Things to find an old man that drinks.

Speaker 2 (19:18):
For an idea of like how real this company is.
Obviously they were one of the companies. They were not
the only people who made that Coca Cola ad. They
were one of like three or four companies.

Speaker 3 (19:26):
It takes four companies? Four companies to make that?

Speaker 2 (19:31):
They have six hundred and twenty two followers on Twitter,
hell yes. Or not Twitter, YouTube, on YouTube,
where they post this karaoke song. And this, this
Fade Out is their, or sorry, The Heist is their
most successful video, with fifty six thousand views. Fade Out,
which we just watched, has less than five thousand views.
They're not ready, so they're not, they're not quite.

Speaker 3 (19:52):
It's only going to get better.

Speaker 2 (19:54):
Yeah, it's only going to get better.

Speaker 4 (19:55):
That's gonna get better.

Speaker 3 (19:56):
Previously, things will only get round floor.

Speaker 4 (19:59):
Yeah, a small price of one billion dollars.

Speaker 1 (20:01):
This is like one hundred thousand dollars to compute.

Speaker 4 (20:04):
Yeah, imagine how good it would be.

Speaker 3 (20:06):
Much of a billion will only get worth more.

Speaker 2 (20:10):
I mean, I get now, Garrison, I do think you
should invest all of your salary.

Speaker 6 (20:14):
I just did a sixteenth minute about talking about this.

Speaker 5 (20:17):
I think I would rather hook To has a more
obvious use case than this shit. Hey, do you want
to spend way more money to get something way worse?
I actually can't get over the seventy five percent ChatGPT.
Like, that should be twenty. No, it should be, theoretically
it should be, it should be one hundred.

Speaker 3 (20:34):
Should be one hundred percent, yeah, not seventy.

Speaker 5 (20:36):
Which means that a quarter a quarter of it was
just fucking unusable.

Speaker 3 (20:40):
No, absolutely. They're generating like individual shots that they're like
stitching together, and like who knows how long it
takes to like get like the prompt right for that
shot to work.

Speaker 2 (20:50):
However long it takes, it was too long, because it
looks like shit. We're gonna watch a video I haven't
seen yet, or at least parts of it, because it's five minutes,
so we're not watching all of this.

Speaker 3 (20:57):
Oh my god.

Speaker 2 (20:58):
It's fifty views and came out a week ago. It's
called Manimonade.

Speaker 3 (21:04):
What it's a word?

Speaker 5 (21:08):
Now it's like when you find your cat's vomited on
the floor again.

Speaker 2 (21:11):
So first we see a diner called Manimonade that appears
to be both on fire and Blade Runner. Yeah, Blade Runner.

Speaker 5 (21:17):
Oh god.

Speaker 2 (21:18):
When an old lady rises up out of a
pile of ashes.

Speaker 6 (21:22):
That's how mouths work.

Speaker 3 (21:25):
Where am I.

Speaker 2 (21:27):
Great? AI voice?

Speaker 6 (21:28):
What is this phantasmagoria? AI voice acting?

Speaker 2 (21:32):
It's me, Harrison Ford?

Speaker 1 (21:38):
What the fuck is going on?

Speaker 2 (21:40):
What I think? This is death? This old lady's dead.
That's how I now. She's tripping on tomatoes. The decaying
sandy diner that exploded has turned into a lively fifties diner.

Speaker 5 (21:56):
Knock-off Denis Villeneuve.

Speaker 4 (21:59):
This is second get a Diner.

Speaker 2 (22:02):
I yeah, ye know. There's a little Indian book he is.
He is the help though. Yeah, m hm.

Speaker 3 (22:15):
Oh, that's not.

Speaker 2 (22:16):
The little kid just fell down, and the way it
shows falling is that he just sort of deflates and
he's up again and he's staring at, well, that's terrible.
We don't need to watch any more of that. No one,
no one, no one could watch.

Speaker 6 (22:31):
This and have a positive reaction.

Speaker 1 (22:32):
They should, they should keep you in a holding cell.

Speaker 2 (22:34):
Yeah, I'm deeply unhappy about the time we already spent watching.

Speaker 6 (22:38):
Yeah, like, we don't know what you're gonna do next.

Speaker 4 (22:40):
We're building a facility for you.

Speaker 2 (22:43):
The phrase reality distortion field gets used a lot when
we talk about tech, but I really tasted it in
that room, because all anyone on stage could talk about
is how good it looks. In every one of these videos,
people are like clapping. They're like, wow, this is amazing.

Speaker 4 (22:59):
What do you think? They think it looks good.

Speaker 6 (23:02):
It looks better than an Xbox.

Speaker 5 (23:04):
Yeah. And the idea is, you talked to the thing
and now a thing came out, and that's magical.

Speaker 7 (23:09):
So by virtue of not having humans work on it,
it's so it's better than you'd have Yeah, Okay.

Speaker 2 (23:14):
There was a moment after this where Jason like joked
about how like I don't like obviously I don't want
to replace actors yet. Yeah, yeah. And another panelist was like,
I think we're gonna have to make some decisions. You
have to see how some decisions go as to fair use,
because obviously this is cribbing from a bunch of fucking
Scorsese, like, kind of looked like, yeah, and Thomas Kinkade and.

Speaker 3 (23:38):
Blade Runner twenty forty nine, and Denis Villeneuve in general,
like all of his films have been like a massive
source for, for these motion and still generations, so much
so that like I think like Blade Runner twenty forty
nine is like one of the easiest films to like,
like, replicate film stills almost exactly for, based on like
how, like, how load bearing that film has been
for a whole bunch of these models. That could be

(24:01):
due to a number of factors.

Speaker 2 (24:02):
Now, I know what you're wondering, how soon until we
can get a full ninety minute movie that looks like this?

Speaker 3 (24:07):
Oh, I'm guessing days away.

Speaker 2 (24:09):
No, no, Jason said, probably not at least for a
decade or so.

Speaker 3 (24:13):
Really, Okay, that's interesting.

Speaker 6 (24:16):
I don't want to wait that long.

Speaker 2 (24:17):
What a worthwhile endeavor.

Speaker 3 (24:18):
Because he could have said shorter, that actually is interesting.

Speaker 8 (24:22):
He could have said anything, though. Um, I think it
is like, he did have to spend probably hundreds of
hours of his precious one human life stitching those, those.

Speaker 2 (24:32):
turds together, and he's like, it's nowhere near ready. There's
no way it could make a movie. He's giving himself
a decade. Yeah.

Speaker 10 (24:39):
Because I've only really seen one interesting generative video thing.
But it wasn't a generative video thing. It was, they filmed, uh, Brian,
you know, filmed a documentary, and they created, you know,
some backend software so that they would be able to
do cuts of existing footage and try to tinker with

(25:01):
parts of the documentary. But I never, ever see anything
interesting in like constructing narratives or, it's like, you
know, teasing out other aspects of the creative process. It's only,
let's try to replace, right, let's try to, so you.

Speaker 6 (25:16):
Can't do narrative with it.

Speaker 3 (25:17):
And that's the thing.

Speaker 2 (25:18):
If, if I'd sat down there, because I was sitting,
I said this, I was sitting next to a guy
from USC who was one of the only people in
the room who was like similarly critical to me of
what we were seeing on stage. He was like, look,
if they had come down and been like, look, this
is how we can plug a script in and it
can create a storyboard and you can like kind of
see like a crude CGI animation of how the shots
will look, and that can help you like plan out,

(25:39):
like like that's legitimately useful. That's the thing that adds
value and can cut costs in a meaningful way to
like the production of good TV and movies. But that's
not as sexy as like I'm and they were all talking.
There was this, this like very weird moment where one
of the panelists, Leslie Shannon, who's head of innovation for Nokia,

(26:00):
a company that used to make phones and now makes
panelists who pretend to be entertained by awkward.

Speaker 3 (26:04):
They also like make cameras and.

Speaker 2 (26:07):
They make a lot of stuff. I was just shitting
on Nokia. She's like, can we use neuroscience to see
how people are reacting to AI generated videos and then
adjust the ending to be like, you know, let's make
this resonate more. That way, we're helping the creative.
And I was like, are you out of your fucking mind?

Speaker 6 (26:23):
We attach electrodes to people's skulls?

Speaker 2 (26:27):
I would I would have supported electrodes in their skulls. Yes,
Jesus Christ, we should do the monkeyin thing.

Speaker 4 (26:33):
Perhaps a pair of calipers, some skulls.

Speaker 2 (26:38):
I am fascinated by the skull shapes of that fucking, so to.

Speaker 5 (26:41):
Say that is there's so many things that you've said
that just they wouldn't survive at that position.

Speaker 2 (26:47):
Speaking of things that wouldn't survive a deposition, the sponsors
of this podcast. Okay, so that first panel was a
real moment for me. I went through a couple of more,

(27:07):
one of which was on like advertising and AI and
was mostly, mostly pretty boring. The third panel I went to, though,
was called AI Cinematics, Spatial and XR, and I just
want to actually play it for you guys. You'll have to cluster around.

Speaker 5 (27:22):
I would actually believe that was generated with ChatGPT,
with a GPT two point zero.

Speaker 2 (27:27):
So let's start with this one.

Speaker 3 (27:29):
AI will be more impactful than the Internet.

Speaker 2 (27:36):
Maybe, Yes, it's a trick question because it is the Internet.

Speaker 1 (27:43):
That was, that was the Internet. So no, although
it can't run without the Internet.

Speaker 3 (27:48):
So I'm like, oh, yeah, there you go.

Speaker 2 (27:50):
All right, what's what when you impact AI?

Speaker 3 (27:55):
AI is going to result in astronomical job losses?

Speaker 4 (28:02):
Uh, there will be an evolution of jobs.

Speaker 2 (28:08):
Next, that was the scene I wanted you to hear.
They're like, we don't want to say it out loud,
and then everyone chuckles.

Speaker 1 (28:19):
These people are too fucking smug.

Speaker 5 (28:21):
Yeah, these people sound too confident and too chummy and
too happy to say things about this.

Speaker 6 (28:26):
That's not good.

Speaker 1 (28:26):
I don't like a lot of these people laughing about people
losing jobs.

Speaker 2 (28:29):
No, they shouldn't have jobs.

Speaker 6 (28:30):
That's that's a good place to stop.

Speaker 2 (28:32):
Yeah, I don't like that either. And the people you're
hearing from, let me tell you who's on
this fucking panel, who were just laughing about, like, well,
there will be an evolution of jobs.

Speaker 4 (28:46):
Yeah.

Speaker 2 (28:47):
So the motherfuckers who were on that panel laughing about
people losing their jobs: Ted Schilowitz, literally his name is Schilowitz,
futurist at Cinemersion, Inc.

Speaker 3 (28:58):
That's like a jame.

Speaker 2 (29:04):
Rebecca Barkin, co-founder and CEO of Lamina1; Aaron
Luber, Director of Partnerships at Google IPG Media Lab; Mayla Emir Sadegi,
Principal Program Manager of Engineering at Microsoft; and Katie Henson,

(29:24):
SVP of Post Production at Sphere Studios. So those are the people.

Speaker 3 (29:27):
Who were all laughing. And like, generative AI is like good at
one thing creatively: it's good at streamlining VFX
workflows, like aspects of how that work gets done. Famously, the only useful thing it's
been used for is making people's eyes blue in Dune

(29:48):
Part Two.

Speaker 2 (29:49):
It's not worth one hundred billion dollars.

Speaker 3 (29:51):
And like it is applicable for like changing objects into
other objects on screen. It can produce really kind
of odd, like uncanny effects that could be utilized by
a team of human artists really well. What it can't
do is generate a short film that is in any
way compelling as a
piece of art, okay. And the fact that they're

(30:13):
laughing at how much, how much of...

Speaker 5 (30:16):
Lost enough jobs they have not that had structures full
to the beauty of the flame, right.

Speaker 2 (30:24):
Although the AI keeps coming for them.

Speaker 4 (30:27):
It wants something fames.

Speaker 2 (30:30):
I'm going to end on a happy note, because the
last panel I went to was actually really cool. It
was AI and the Crisis of Creative Rights: Deepfakes,
Ethics and the Law, and it featured the first intelligent
person that I've seen at CES this year, Moiya McTier,
who is a folklorist and senior advisor at the Human
Artistry Campaign. It also featured Duncan Crabtree-Ireland, who's the

(30:52):
national executive director and chief negotiator of SAG-AFTRA. There
we go, There we go, and this was no bullshit.
It was talking about all of the different lawsuits
that are going on right now, all of the litigation
around AI, and like the actual strategy for litigating, and
like there was a couple of points where like Duncan
was like a lot is going to hinge on some
very brave, very famous people choosing to throw down some

(31:14):
big dollar lawsuits, Like that's what we need right now.
They did talk about the No Fakes Act, which has
bipartisan support and gives some legal force to allow people
to push for AI copies of themselves to be taken down.
And they think there's also some bipartisan possibility to get,
like, AI labeling legislation.

Speaker 5 (31:31):
The thing is, any of these things would be fucking
futile, because what if you have to remove something from a

Speaker 1 (31:36):
Model? How the fuck do we do that?

Speaker 4 (31:38):
Yeah, we don't know. You have the entire model, you

Speaker 1 (31:40):
have to retrain it, like, there's no way around it.

Speaker 2 (31:43):
Yeah. And there was a really good point kind
of at the end of this. Part of what I
appreciate is, again, there was no bullshit. Like, Moiya at
one point was like, I think it, it being generative AI,
is absolutely a net negative for the
artistic community. The point is not to
get something out as quick as possible to, like, make art.

Speaker 3 (32:00):
And she has to be like one of maybe five
people who are doing panels at CES who's, like,
willing to say that, yes.

Speaker 2 (32:06):
And Duncan was like, look, you can't stop
the technology from being invented, so the best path forward
is to, like, try and channel this into a direction
that is at least better for artists. Like, there
was, for most of the people on
the panel, very little bullshit. There was some bullshit from
one person on the panel: Jenny Katzman, Senior Director of
Government Affairs from Microsoft. That was fun. So after there's

(32:29):
this whole point where, like, everyone else on the panel is like, yeah,
I think it's probably a net negative for artists on
the whole, and Jenny comes on, she's like, actually, I
think it's a net positive. And her example of this is, well,
you know, there's a lot of stuff that you
couldn't do before that, thanks to AI, you can do,
like de-aging Harrison Ford for the Indiana Jones movie.

Speaker 3 (32:48):
Something everyone loved. Great creative work.

Speaker 5 (32:56):
This is the fucking problem with all of this, on
top of how shit it is and how expensive it is.
Which kind of AI are we talking about? That, dipshit,
is not generative AI. That's not what that fucking was.

Speaker 3 (33:06):
And they still suck. And it also stops us from
being able to cast a young River Phoenix type, for example.

Speaker 2 (33:13):
The only things getting cast in more stuff, Garrett, I'm
very unfair.

Speaker 3 (33:19):
Well, luckily with the power of AI.

Speaker 6 (33:21):
Look, I can put.

Speaker 2 (33:23):
Into every newspaper sequentially starting in eighteen thirty four, so
I've not gotten to the end of Phoenix. It would
be a really long career.

Speaker 3 (33:31):
It would be really.

Speaker 2 (33:32):
Cool sleeping guy. I think he's got the bold ideas.
This is gonna work out really well for Germany.

Speaker 3 (33:39):
It would be really cool if, instead of just doing
a young Harrison Ford, they just did a River Phoenix
deepfake.

Speaker 2 (33:46):
You generate him.

Speaker 3 (33:49):
Look it's canonical.

Speaker 2 (33:50):
Yeah great.

Speaker 5 (33:52):
Oh, I love the movies, and the future of them too.
This is so good. This is so bad.

Speaker 3 (33:57):
James Mangold.

Speaker 2 (33:57):
You're a hackenel So I gotta say it was very
funny because she also suggests Jenny, there's we could use
animals without causing harm thanks to AI, a thing that
no one had figured out how to do before. Nobody
had ever figured out how just like not hurt animals
in movies that didn't exist before AI.

Speaker 3 (34:13):
Thank god, thankfully AI will never do any harm to
animals or the environment.

Speaker 7 (34:19):
Nobody asked the lobbyist for Microsoft what else the company
is doing with AI, right? With police deployments, or with
fossil fuel companies?

Speaker 2 (34:28):
Yeah? Is that bad for animals?

Speaker 9 (34:30):
No?

Speaker 4 (34:30):
Actually, it's really good. They need it. They yearn for
the moment.

Speaker 2 (34:36):
They love data. Great for their habitats. She said, there's
issues with employment, but there's lots of issues that fall
around that, and I do think you need a balance.
And at the end of it, the guy running the
panel just says, Okay.

Speaker 7 (34:52):
That sounds like you guys are saying a bunch of
woke shit on this panel.

Speaker 2 (34:56):
Alright, all right, Microsoft.

Speaker 6 (35:00):
Once, on the panel, I want someone to go and
say, what the fuck do you mean?

Speaker 2 (35:03):
What do you mean? That was the closest to that
that you were going to get.

Speaker 3 (35:07):
I think we do need a balance of some people
being fired, like these people, and other people keeping their
jobs, like everyone, like Moiya. Somebody

Speaker 7 (35:14):
has to lose, and somebody has to win, exactly. That's
their entire thing. Somebody has the guns, somebody doesn't.

Speaker 6 (35:21):
Somebody knows the way the maze works, and.

Speaker 4 (35:23):
Somebody's gonna... wait, we shouldn't have guns.

Speaker 5 (35:25):
We shouldn't have a maze where one of them knows
the maze and has a gun.

Speaker 2 (35:28):
We should have a gun maze, you say? Now, look,
we all like keeping a couple of people in a
maze beneath our house, right? Yeah, there's nothing wrong with this.

Speaker 3 (35:38):
This is just the dormant next this we just we
keep doing it.

Speaker 2 (35:43):
It's a nice maze under my house.

Speaker 5 (35:47):
They have.

Speaker 6 (35:48):
It's nice to run some of them.

Speaker 4 (35:51):
Through one of the corners.

Speaker 5 (35:52):
The Minotaur gets them only sometimes. I'm the Minotaur. Anyway,
the gun maze isn't real. So most of their arguments
mostly just come down to, well, you can't make
an omelet without breaking eggs. Like, you have to fucking break people.

Speaker 2 (36:07):
You have to break the human drive to create art, obviously,
to make an omelet that does not taste good.

Speaker 6 (36:13):
Yeah, an omelet-esque food.

Speaker 2 (36:16):
It's a piss omelet. Like, there's piss in the omelet,
and we had to burn down this
esteemed chapel to make the piss omelet.

Speaker 1 (36:23):
The computer made it, though. Yeah, go on, clap for the computer.

Speaker 2 (36:28):
We did firebomb the Louvre. But look, look at this,
look at this rock star.

Speaker 3 (36:37):
Oh god?

Speaker 2 (36:38):
All right, well that's the episode. That's all I got, folks.
That was my first day at CES twenty twenty five. Huzzah.

Speaker 1 (36:43):
Yeah, this is just my first day. Better Offline's here
all week.

Speaker 5 (36:46):
I'm gonna hear about stuff like this all week, and
I think I'm gonna be fully Jokerfied.

Speaker 6 (36:50):
I'm gonna wake up in the clown makeup on Friday.

Speaker 4 (36:53):
I'm gonna find the funnest thing to bring back for you.

Speaker 5 (36:55):
I'm gonna find an artist to put me in
full Joker.

Speaker 2 (36:59):
Now, I'm gonna try to steal that AI enhanced
grill, the one that you can, I just like, move this
around. I just want to test that it would roll, open
the door, open the door.

Speaker 5 (37:11):
As someone who's done a lot of, like, grilling, done
a lot of smoking, barbecue, I don't know what I
would do.

Speaker 6 (37:16):
Is it gonna talk to me in the.

Speaker 2 (37:18):
Wait, are you, are you trying to tell
us here, Ed Zitron, that you have grilled meat
without a robot texting you about it? Because I just
don't believe it.

Speaker 1 (37:28):
I don't know how I did it, but I did it.

Speaker 2 (37:31):
You... everyone has always dreamed of knowing how to cook
with the robots. It was impossible.

Speaker 6 (37:38):
Oh god, we're at the death of innovation.

Speaker 2 (37:40):
Yeah, at the end of a lot of things maybe,
and the end of the episode. Yeah, and the end
of the episode, thank god. You know, everyone else be
the Cybertruck in the... It Could Happen Here is
a production of Cool Zone Media.

Speaker 3 (37:59):
For more podcasts from Cool Zone Media, visit our website
coolzonemedia dot com, or check us out on the iHeartRadio app,
Apple Podcasts, or wherever you listen to podcasts. You can
now find sources for It Could Happen Here listed directly
in episode descriptions.

Speaker 4 (38:13):
Thanks for listening.

Hosts And Creators

Robert Evans

Garrison Davis

James Stout


© 2025 iHeartMedia, Inc.