Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
All right, on the Legacy Retirement Group dot com phone line. It is Tech
(00:03):
Tuesday. ABC News tech reporter Mike Dobuski joins us. A big week for AI, artificial intelligence announcements happening this week. What's going on, Mike? Yeah, that's right. So first things first, today is Google I/O, which is the company's annual developer conference. It's a place for people who build stuff off of Google's technology, developers, to gather together and see what the company is working on.
(00:24):
Now, last year Google used this opportunity to show off a lot of new hardware, things like a folding Pixel phone and a new tablet. This year, we're really not expecting anything like that. This is going to be all about software. In fact, last week Google debuted an entirely new phone, just kind of in an effort to get it out the door so they can focus on the software this week at I/O. And when we're talking about
(00:48):
software in twenty twenty-four, that of course means we're talking about artificial intelligence, which, in Google's land, means we are talking about Gemini, which is their large language model, expected to get some major updates later today. So what do we expect that to look like? So we don't know specifically, but there are some tea leaves that we can read here. One is that last month Google did a little bit of reorganizing internally. They merged the
(01:12):
Android team, the software team that develops software for, like, phones and that type of thing, with the hardware team. And essentially what that means is the expectation here is that artificial intelligence is going to start to be baked into more and more Google products, not just phones, as we've seen already with things like the Pixel 8a and the Pixel 8 Pro, but also things like earbuds
(01:36):
and tablet computers and smartwatches, where all this stuff kind of works really nicely together and you can treat an artificial intelligence model like you would Google Assistant or one of those sort of voice assistants. That's the expectation, but again, we're gonna have to wait and see specifically how this plays out. Yeah, that's interesting. That's interesting because people, you know, think, "I don't understand AI, I don't get it," but then when they
(01:57):
go, "Hey Siri," they're actually using AI at that point, right? And it's interesting that you bring up Siri. So next month is Apple's developer conference, basically the same thing but for Apple people, and the expectation there is that Apple has been working on striking a deal with OpenAI, which is sort of the leader in the artificial intelligence space right now when we're talking
(02:20):
about generative AI. They yesterday showed off their spring update to their large language models. The expectation there is that they're going to use that company's technology to make Siri a lot smarter. We've all been in the position where we've asked Siri to do something and it's like, "Let me see if I can look that up on the web for you." Right? Siri is not the best voice assistant out there, and Apple is really now sort of readjusting focus as they try
(02:45):
to kind of play catch-up in the artificial intelligence space. Isn't it funny that we get frustrated with Siri when she can't figure out the question? It's like, you know what, it's a first world problem. Very true. It does save you a trip to the library to go to the encyclopedia. But it's interesting. So that's Apple. What about the folks at OpenAI and the new ChatGPT model? Yeah,
(03:07):
right, so we mentioned their spring update yesterday. This is a new large language model that they are debuting. They're calling it GPT-4o; the "o" stands for "omni," which essentially means that this new version of their core AI technology can now accept multiple inputs. So you can type out a question and then attach a photo to that question, and the model is able
(03:30):
to understand the whole kit and caboodle, understand all of that. Some other changes that they made: they say that it is now able to respond to your questions a lot faster, cutting down on that sort of awkward pause. Here's what that sounds like: "Hey, ChatGPT, how are you doing?" "I'm doing fantastic, thanks for asking. How about you?" "Pretty good. What's up?" So, like, pretty natural-sounding conversation, and
(03:52):
that's kind of what they're aiming at here: making these models a little bit easier to interact with, less computery. And part of that is the ability to interrupt the model, which you can now do. It can understand sort of you butting in and asking it to change focus. Here's what that sounds like: "There was a robot named Byte. Byte..." "Do this in a robotic voice."
(04:13):
"Now initiating dramatic robotic voice." So you can kind of see where they're going with this: different changes to make this sound a little bit more natural. And of course you heard a little bit there, too: they made some changes to the way this thing emotes and is able to understand not just your emotion, but also to sort of replicate emotion itself. Were there any bugs, anything where they're like, "Eh, this isn't quite perfected yet"? Absolutely,
(04:39):
I'm glad you teed me up for that. So there is another part of this. We were talking about how this model is now multimodal; you can give it multiple inputs. Well, part of that is vision, right? You can use your phone's camera to show the model things. They were doing a homework demo where they showed it a math problem that was written out on a piece of paper and asked it to sort of walk them through this math problem. Here's how that went.
(05:01):
"Okay, I see it." "No, I didn't show you yet. Just give me help along the way." "One second." So, important to note that with all these models, not just OpenAI's, they do have a tendency to get things wrong. They can make stuff up and just behave, overall, pretty weird. So if you are asking it to do your homework for you, it's best to double-check. Yes. So we're speaking with Mike Dobuski, ABC News tech reporter. It is Tech Tuesday. Thirty-thousand-foot-view question,
(05:26):
Mike, do you think at some point there's going to be a backlash against AI and all of this? I think there already is, in many ways. I mean, you look at the artificial, excuse me, you look at the artistic community and the way that many artists have reacted to artificial intelligence images showing up in movies and television and saying, hey, why did you get a computer to do that when there are many artists out there that you
(05:48):
can pay to make a better product for you? Why are you taking the cheap way out here? You know, that's already happening, and I think that's going to be a lingering concern for a lot of these AI companies as they really, at this point, try to figure out what this technology is for. Is it really there to replace artists? Is it there to replace writers? Or is it there to do something else, to do more, you know,
(06:11):
kind of rote tasks. I think that's going to be a really existential
question. It's a question these industries face. Absolutely, and unfortunately for the artists and people who are concerned about some of this, I think the toothpaste is already out of the tube, and I don't know that we're going to be able to go backwards on that. It's, again, a big question. I was just watching Late Night with the Devil, which is a horror movie that
just watching A Late Night with theDevil, which is a horror movie that
(06:32):
includes some AI generated imagery already.You know, people really reacted to that
in a pretty negative way, despitethe fact that the movie is pretty good.
You know, it's a sort ofbig moment that we're living through right
now as to how to deal withthis technology that is, as you say,
already out there. It's already doingstuff. So you know, how
do we regulate it, how dowe get a hold on it. I
think that's just something that we're goingto have to figure out as we go along.