Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Are you ready to be disturbed? I woke up disturbed.
Speaker 2 (00:06):
The guy is real. The woman's voice is a computer.
Are you ready for this?
Speaker 1 (00:11):
Hey, how's it going? Hey there, it's going great. How
about you? I see you're rocking an OpenAI hoodie.
Nice choice. What's up with that ceiling? Are you
in a cool industrial-style office or something? Well,
can you take a guess at what I might be
doing based on what I'm showing you here? Hmmm. From
(00:31):
what I can see, it looks like you're in some
kind of recording or production setup with those lights. Okay,
so what's happening there? Boy.
Speaker 2 (00:40):
The new OpenAI 4o can see you and
read your writing and talk to you, as opposed to
just chatting. The new chat doesn't just chat. It looks
at you and can say, hey, cool, you're wearing an
OpenAI hoodie.
Speaker 1 (00:55):
I appreciate that. How do you like that?
Speaker 3 (00:59):
Hello?
Speaker 1 (01:00):
Oh yeah?
Speaker 3 (01:01):
The Wall Street Journal, reporting on the new AI model:
dubbed GPT-4o, it can better digest images and video
in addition to text, and interacts with people by voice
in real time. You can interrupt the voice feature while
talking to it and say, never mind, you know what else
we need to talk about? Blah blah blah, and
it'll just adapt immediately. It's capable of responding close to instantaneously.
Speaker 1 (01:26):
I don't know what I'd think, having the AI
chatbot say, hey, you shaved. I like it.
Speaker 3 (01:35):
OpenAI executives showed in a livestreamed demonstration how the model
could analyze code and translate languages between two speakers.
Speaker 1 (01:43):
Which would be amazing.
Speaker 3 (01:45):
I mean, that makes Google Translate look like, well, yesterday's
news. And guide users through a basic algebra problem. Finally! But
one written down on a piece of paper. You just
show it and say, all right, can you help me
solve this problem?
Speaker 1 (01:58):
Oh boy. Yet another step toward kids cheating?
Speaker 3 (02:02):
Uh, yeah, yeah. Or learning to harness the tools that
we have and will all be using.
Speaker 2 (02:08):
It's gonna be harder than ever to convince your kid
you need to learn how to do this. I got
ten devices surrounding me right now that can do this
math for me. Why would I ever need to do this?
Speaker 1 (02:20):
Yeah. Yeah, no kidding.
Speaker 3 (02:21):
The new model could also detect a person's emotions in
their tone of voice or facial expression. I wonder if
it could detect sarcasm. You look pensive yet full of ennui.
Let's see, there was one more. Uh. It already
features a voice mode that combines three separate models to respond
(02:43):
to users in voice. No, that refers to the old one. Sorry,
that was the wrong paragraph.
Speaker 1 (02:49):
And you've got... Yeah, you always...
Speaker 2 (02:51):
Have to remember that whatever you hear, just heard, or
read, or try out, it'll be fifty times better in
six months.
Speaker 1 (02:59):
Right right.
Speaker 3 (03:01):
So GPT-4o was built as a single
model trained on text, vision, and audio material, and can
respond more quickly and accurately. So, in other words, it
takes in the world and responds to it, right? Visually,
through text, and obviously audio. Yeah, yeah, yeah. Mankind's reproductive
(03:24):
rates are already low, right?
Speaker 2 (03:26):
The porn applications, always, because... you know,
the biggest websites in the world are porn sites.
Speaker 1 (03:32):
That's just the way the whole thing works.
Speaker 2 (03:36):
Wow.
Speaker 3 (03:37):
Yeah, the story of mankind is pornography and military applications.
That's what drives most progress. Medicine? Yeah,
when we get around to it. Armstrong and Getty.