Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:14):
Welcome back, everyone,
to Tricky Bits with Rob and PJ.
Right now, we're in the middle of an AI frenzy, from investment
to all the big companies getting into it, but there's one
company that we haven't heard a lot from on this front and,
given that they're the second biggest (biggest depends on the
(00:34):
week) company in the world, it's important to talk about.
So where is Apple in all this?
And, Rob, we've had Apple play in this sandbox to a certain
extent in the past with Siri, but they are going nowhere near
(00:55):
as nuts at the moment as Microsoft (through OpenAI),
Amazon, Alphabet/Google, Meta/Facebook.
So where is Apple in all of this?
Speaker 2 (01:08):
I think it's the
typical Apple approach: they're
saying nothing. I mean, they are hiring people.
If you look at openings, there are a lot of AI positions.
Some of these people might be in the infamous self-driving car
division, but they are definitely hiring people for AI
(01:29):
and, typically tight-lipped,
they're not saying anything at all.
They do also put out papers in the AI space, which is very rare
for Apple.
They don't put many papers out at all.
So they're definitely working on things, but generally, I
think they have to announce something soon.
I suspect it'll be at WWDC that they'll have some updates.
(01:53):
It's most likely, I think, going to be an update to Siri,
a more generative form of Siri.
Most likely it may just be a bunch of APIs that you integrate
into your own things, and we'll have to wait and see.
I do think overall it gives Apple kind of a dilemma as to
(02:17):
what they want Siri to be.
There was lots of infighting when Siri first came about as to
what it was going to be, and Apple went for the canned
responses over being more generative even back then, due
to the lack of control that you have if you go down the
generative path, and Apple, as we know, is all about that control.
(02:39):
Over time, Apple have been moving their AI functionality to
the neural engine on the local device, and for things like Siri
and simple voice recognition, that's a totally capable device
to do that sort of modeling.
But if you move to the full generative world, the devices
where Siri runs are not powerful enough.
(03:00):
Yeah, you could compress it, you could get it more powerful,
but you're not going to compete with the likes of ChatGPT and
Gemini if you were restricted to what's available in a HomePod
(just the same hardware as a watch). Yes, your phone and your
Mac could do even better, but it's not going to be a
consistent experience.
So I think it brings Apple back to doing more work on the
(03:24):
server side and not on the client side, which takes away
their argument of "we're more secure, we're not sending your
data to the cloud", blah blah blah, and everything they've told
us over the years.
And I think the big question comes about: if they have these
large language models that they're going to feed into Siri,
where did that data come from?
(03:45):
They've not been stealing our data over the years like Google
have and Facebook have (at least they say not), but then
suddenly they come out with a GPT-quality large language model.
How did they train that?
Where did that data come from?
Have they been listening to our voice calls and text messages
for the last 10 years, in an Apple different sort of way?
(04:07):
I don't know.
I think that's a question that they'll never answer.
Maybe they just bought the dataset, I don't really know.
Speaker 1 (04:15):
So one of the things
you've talked a lot about, Rob,
about Apple, is that they are very much an end-user-experience
first company.
They start with the end user experience and then craft
backwards to figure out the slew of technologies to make that
happen.
Now I know you said that Siri has been handicapped a bit by
(04:35):
virtue of its canned responses.
I contrast this with what we're seeing with Google and ChatGPT
and a few others, where there's this great slew of technology
that's been created that has been producing some mixed bag (I
will say, most generously) of experiences for users.
(04:56):
What do you think, or what do you suspect, is the core user
experience that Apple's trying to go after with Siri and
generative AI, other than a
"me too, I've got to do that"?
Speaker 2 (05:11):
I think, from a
consumer point of view of being
a market leader, that's where the "me too" for Apple ends.
They are, like you said, far more interested in designing a
user experience and going all the way back to transistors if
necessary to make that experience possible.
That's exactly how things like the Vision Pro came to be.
(05:32):
It's how every Apple product in the last decade came to be.
It's not "we have this cool technology, can we monetize it?"
It's "this is the experience and this is the technology we will use."
And so the "me too" falls away to their image, in the
consumer's view, of: we're a tech leader.
(05:53):
We need to be in the AI space.
I think, from Apple's point of view, they would want it to be
like an obedient child: hey, Siri, do this, and it does
it exactly the way you want it done. But the problem with
generative AI is it doesn't do that.
I also think Apple want to avoid the whole race and
(06:15):
political issues that generative AI tends to kick up, and they
just want to avoid it completely.
They want nothing to do with it.
They want Siri to be, like I said, an obedient servant.
Speaker 1 (06:25):
Is there actually any
real pressure on Apple to get
into this game?
I mean, I could say that, look, the difference between the
first Oculus coming out and the Vision Pro is close to a decade
at that point in time.
Is it actually sufficient at this point in time for Apple to
effectively say: you know what?
(06:46):
We're not going to be occupying the generative AI space for
another five to 10 years, because all this stuff is insane
right now.
Speaker 2 (06:57):
I think they have to
be.
I think investors and the consumers want them to be, and
everyone knows Siri is garbage, so it kind of has to be.
And in your comparison, I think the tables are turned. Like, from
the first Oculus to the Vision Pro, Apple's the one in the
(07:17):
advanced seat when it comes to AI.
You said it's a decade for VR.
It's a decade for AI too.
Siri's been around forever, and that's a good point:
they're still where they are.
So Apple in this case is the simple model, and then you've
got ChatGPT and Gemini and things like that at the cutting
edge.
So from a purely optics point of view, Apple is far behind,
and I think it's easy for people, even non-tech people, to put
two and two together and realize how useful Siri would be if it
was, I wouldn't say necessarily a generative system,
but a more flexible system.
(08:03):
You've just been able to ask... you're asking a question right
now, and she says, oh, the answer's on your iPhone, and she
just did a web search.
Why can't you just read the website to me?
Why can't she just gather that and be more useful as a whole?
I'm not saying that has to be fully generative, because I
think there's a lot more to a user experience than generative
(08:23):
AI.
Consider something like the Rabbit R1 that came out,
the Android-based piece of hardware that can do
actions instead of just predicting what the next word's
gonna be. And as far as I know, they trained that on knowing how
(08:45):
to click buttons.
So I think it can go to a website it hasn't seen before
and still figure out how to navigate it for you, so you can
say things to it like "buy me a plane ticket" and it will ask you
a few questions, and then it'll go buy you a plane ticket.
Also, one of the questions it will ask is: it's $300, are
you sure?
It's very similar to saying to your kid, go just buy me a plane
(09:08):
ticket to LA, and she'll ask you a few questions and she'll pick
up on a few things for you.
That sort of end user experience, I think, is what
Apple would go for.
So it's not like generative AI, which isn't that useful:
it just spits out a wall of text or an image, and you can
tell it's done by an AI. That already exists.
(09:32):
Having Siri just make up a story is of no use to anyone as a home
assistant.
Having it do things, and trusting that it does them properly, I
think is where Apple will take it.
They have no interest in being the next ChatGPT and just having
Siri talk to you, but having Siri... you say "buy me a plane ticket
(09:53):
to LA," and she knows that you like window seats and you like
the front end of coach, reasonably priced,
always check a bag. Things like that are things that I think
they'll learn about.
But then it comes back to the privacy part of, like,
does Apple really...
are they in a position to take that information in a way that's
(10:14):
pleasing to people? Are people willing to give them this amount
of information?
You're giving it to them anyway, indirectly, and they could mine
it to find it.
But Apple have always said: we're not in that space.
But they kind of have to be in that space to make, not even a
generative AI, just an AI, be useful: an action AI.
It needs to know a lot about you.
Speaker 1 (10:36):
So, with where Siri
is at today (and I think, like, my
traditional experience with it has been it's needed to be
connected to the internet; I think that may have changed),
why does it suck?
Like, it's been out there for, what, a decade-ish
at this point in time and, to your point earlier, it's
(11:00):
the one who's the simple model.
It really hasn't advanced, from my mind, in terms of the utility
I use it for day in and day out, except for maybe speech to
text, which still gets it wrong a lot of the time.
So what have been the factors in play here?
Is it image? Like, Apple didn't wanna have a disobedient child.
Is it technology?
(11:21):
Apple didn't wanna be doing a lot of server-side stuff;
they wanted to focus on the local. Like,
why is it that we've had this in play for a while, and it was
ahead of Alexa, it was ahead of Google, and it's still not
really...
Speaker 2 (11:36):
Useful? It isn't useful.
I mean, it's only recently
you can ask it to have two different timers and
give them names.
I've, like, set one timer for the potatoes in the oven and set
another timer for the bathtub that I'm filling, and actually
have it keep them separate.
I think it was released because of Alexa and all those;
Cortana and the Google Assistant were just a thing at the time
(11:58):
and Apple had one, and it fit into the ecosystem. And I think
it's the ecosystem keeping Siri alive.
If Siri didn't have the entire iOS, macOS, Apple ecosystem, it
would have been gone years ago, because it pretty much is
useless.
You can ask it to set an alarm, and I always double check when I
ask, because I'm never quite sure whether it actually set the
(12:20):
alarm for the time I wanted.
And if I'm catching a plane or got to be somewhere, then it's
pretty important that your alarm is set.
So, having to double check, I might as well just set it myself.
She can't even play music very reliably.
You say "play this" and you might get it, or you might get
something related to it, and things like that.
It's hard to control, and I think it's as far as Apple have
(12:44):
wanted to go, because of that control.
They're a very, very controlling company, and if they can't
dictate the experience you're gonna have, they won't do it,
and I think that's held them back.
That's why they are where they are today.
So why is it bad?
It's never really been a full AI.
It has some AI bits, but it's lots of canned responses.
(13:07):
It's really just: oh, you said this word, it means this.
You can speak in very broken sentences to Siri
and she'll still do the same operation as if you said the
full thing, as if it's just, like, oh, this word and this word
and this word means that.
So it's more like an AI just detecting shapes of audio waves
and things like that than a true "I'm actually listening to
(13:29):
you."
Speaker 1 (13:29):
From the genesis
standpoint, where it's like
Cortana, I think, and then Alexa, Google Assistant: that's
also a form of its own "me too".
Is it effectively the fact that Siri has always existed in kind
of this "me too"?
And now that generative AI is out there, you think that
(13:51):
there's pressure on the company to actually transform it into
something that's more useful and potentially less controlled
than before?
With the whole generative AI thing, I think there are limited
use cases where I find it extremely useful, and then other
ones where I find it just garbage.
And so is there any kind of historical sense for, like, why
(14:15):
Apple would wanna do it now, especially given the sort of
problems that other companies are seeing? And how do you think
they can do it better, and is there anything that prevents it,
culturally, from being better?
Speaker 2 (14:29):
I don't think they'll
do it better.
From a purely technological point of view, they don't do
anything better.
Nothing they do is the best; never has been.
Apple's not a leader when it comes to cutting-edge technology.
They'll look at: oh, this is what we need, and it's the
experience. And I think this is kind of a dilemma for them,
because it's hard to curate the experience with something that's
(14:53):
so generative at the back end.
I do think they have to do something; there's pressure from
investors.
I just read this morning that a bunch of UK activist investors
are pressuring them to do something in this space, and now...
I really think they have to do something.
It's just, what are they gonna do?
So I think we're as in the dark as everybody else.
(15:13):
I don't know anything thatthey're doing that's
groundbreaking, but they reallyhave to.
They need to come out and knockthis out of the park and be
like you can control generativeAI.
You can have a system whichstill gives you suitable results
in a more controlledenvironment, but that opens up a
whole can of worms in itself.
(15:35):
Yes, they don't have to go face the press as to why the AI is
racist, but who gets to pick
what's acceptable in a controlled environment?
Is it gonna be another Jon Stewart, where he can't talk about
this because it offends the Chinese, and therefore you'll
only hear what Apple want you to hear?
(15:56):
It's like, that's kind of a little of a bad world there too.
So I think they're stuck between a rock and a hard place
on multiple fronts, and I don't think they're gonna get out of
it cleanly.
They're not gonna come out and make this obedient child-slave
AI that knows everything about you and can do actions on your
(16:17):
behalf the first time around.
I think that's where they'dlike to go.
I don't think they're gonna getit right.
I don't think anyone's gettingit right.
Speaker 1 (16:25):
I actually wouldn't
be surprised... I mean, if I was
Tim Cook, which I am not,
I'd probably say back to the investors: look, this
generative AI space.
Yes, there's a bunch of money, relative to the rest of the
market, being invested into it, but
the experiences that are being generated right now are a mess
(16:45):
and, furthermore, Apple's at a particular disadvantage.
It's like you mentioned earlier: Facebook collects all your data,
Amazon collects all your data, Google collects all your data,
and Microsoft probably collects some data, or they're scouring
the web with OpenAI.
Apple has had a very strong stance on privacy.
(17:08):
Historically speaking, that has made them very averse to
mining any data in the cloud or locally from you.
So where is Apple getting the data that it used to train Siri
in the first place, and where would it get the data from here
on out to try to craft some better AI experience for its
(17:33):
user?
Speaker 2 (17:33):
So I've always had
this dream that AI would be
trained, as you use it, for you, and I think that does give you a
good experience.
For example, let's take Maps.
I always drive through Boulder, Colorado
in the same way.
If I'm going to South Boulder, it isn't the most efficient way
(17:54):
that I go.
If I enter it in Maps, all it does is bitch at me for the
next seven turns:
you turn here, turn here, turn here.
Why can't it learn that I always drive this way?
And the reason I drive that way is the quick way goes through
the campus, and it's just a pain in the ass.
Sure, it's not a pain in the ass by any metric you'd measure
in a driving scenario.
It's just a pain in the ass because there's students walking
(18:15):
everywhere, and it's just a pain.
So why can't the AI learn that I do this?
So when I ask directions to somewhere I've been before, it
goes: oh, I know you go this way.
I'll just tell you to go that way, and not be annoying while I'm
driving.
And I think this is a very unexplored area of AI: like,
AI that's trained on your own behaviors.
(18:37):
Say I buy a plane ticket: it knows I always sit at the
front of coach, and it knows I always check a bag.
It learns things about you the same way as a child would learn
things about you.
I think that's when generative and assistant-type AI becomes
way more useful.
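What he's describing, an assistant that learns a habitual route instead of re-optimizing every trip, can be sketched as a toy preference learner. Everything here is illustrative: the class, the road names, and the trip threshold are made up, not any real Maps API.

```python
from collections import Counter

class RoutePreferenceLearner:
    """Toy sketch: learn which route a user habitually takes
    to each destination, and prefer it over the router's pick."""

    def __init__(self, min_trips=3):
        self.history = {}           # destination -> Counter of routes driven
        self.min_trips = min_trips  # trips before a habit overrides the router

    def record_trip(self, destination, route):
        # `route` is a tuple of road names actually driven.
        self.history.setdefault(destination, Counter())[route] += 1

    def suggest(self, destination, fastest_route):
        seen = self.history.get(destination)
        if seen:
            route, count = seen.most_common(1)[0]
            if count >= self.min_trips:   # habit established: use it
                return route
        return fastest_route              # no habit yet: trust the router

learner = RoutePreferenceLearner()
for _ in range(3):  # three trips on the campus-avoiding streets
    learner.record_trip("South Boulder", ("Foothills Pkwy", "Table Mesa Dr"))

# The router's "fastest" path goes through campus; the learned habit wins.
print(learner.suggest("South Boulder", ("Broadway", "Campus Dr")))
# -> ('Foothills Pkwy', 'Table Mesa Dr')
```

The point of the threshold is exactly his complaint: the assistant should only stop "bitching at you for the next seven turns" once it has actually seen the habit a few times.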
And no one's doing that;
everything's just bland.
(18:58):
Basically, it regurgitates what it read on the internet and
formats it, and that's literally what generative AI
is. I mean, technically, it's predicting what the next word is
going to be, right, picking the
one that it thinks is going to
be most likely.
And yes, there's a lot of tech
behind that, but that's really
all it's doing: it's read a lot about everything on the
internet, and it's scouted everything, and that's why it's
(19:22):
generative.
It's the same for generative pictures: like, yeah, well, I
think this color in the picture will be this, and if I'm
drawing a picture of a forest, the colors are most likely going to
be green. They are, in theory, quite simple, and it's hard to
see how they work.
But that's trained on massive bulk data sets.
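The "predict the next word, pick the most likely one" loop he describes can be shown in a few lines. This toy stands in hand-written bigram counts for the probabilities a trained model would produce; the words and counts are invented for illustration.

```python
from collections import Counter

# Toy bigram "model": counts of which word follows which,
# standing in for the learned probabilities of a real LLM.
bigrams = {
    "the":    Counter({"forest": 5, "cat": 2}),
    "forest": Counter({"is": 4}),
    "is":     Counter({"green": 6, "dark": 1}),
}

def generate(word, steps):
    out = [word]
    for _ in range(steps):
        options = bigrams.get(word)
        if not options:
            break  # nothing learned after this word
        # Greedy decoding: always pick the single most likely next word.
        word = options.most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the", 3))  # -> "the forest is green"
```

Real models sample from a probability distribution over tens of thousands of tokens rather than doing this greedy table lookup, but the shape of the loop is the same: score the candidates, pick one, repeat.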
And the question you asked is valid: where did Apple get that
(19:44):
data?
If they did it locally, trained based on your own behaviors,
then they could get it from you, and they could be open about
getting it from you, and they could say: yeah, we collect this
and we store it, and we don't sell it to marketers, and all
things like that. And I think that's how you get a usable
experience.
Your Siri will be very different to my Siri, because yours was trained
(20:07):
on you and mine was trained on me.
As far as I know, nobody's doing that right now, whether it
be for driving directions or whether it be for a home
assistant.
Speaker 1 (20:16):
Well, this is
interesting, and I want to...
let's zoom out for one second,
then I want to zoom back in.
Right now, Nvidia is making a killing by effectively selling
all the H100s that they can to everybody.
Like, they are selling the machinery for building out these
data centers, for doing the AI generation, for doing the
(20:39):
training of it.
I'm really curious.
Again, Apple is in a very unique position because it is so
vertically integrated.
It will go all the way down to the transistor.
Do you think that Apple Silicon actually provides an advantage
here, for being able to tailor their chips or tailor the
devices or tailor whatever part of the stack to be able to do
(21:03):
that local generation?
And is that something that Microsoft or Google, or any
of these other companies, can't do, because they don't own the
stack?
Like, is there something there, a play here, that they can
effectively get away from needing Nvidia and
all of this high-end hardware, because they control everything
(21:25):
down to the silicon?
Speaker 2 (21:27):
No. Okay, I don't
think so.
Well, maybe there is obviously something you could do locally.
I don't think what I suggested would just be done locally.
I think they gather the data locally and then feed it back
into a private AI that's yours on the back end.
They do it on the server.
I don't think any of the devices where Siri runs can, and it's
(21:47):
got to be consistent too, of course.
So just the fact that Siri heard you say something on your watch
doesn't mean that your Mac didn't know about it.
Doing it all on the back end keeps that a lot more consistent,
as there's no question about where it's stored.
So I think everyone gets their own little bucket of AI state
that's done on the server and uses server-class hardware.
(22:08):
The big Nvidia AI chips versus Apple Silicon:
there's no comparison there.
Nvidia will destroy them on performance, and
any sort of inference that's done locally has to be simple.
The local hardware is mostly there for the final stage, the
inference stage: I can take the big model data that was
trained elsewhere and I can run it in a very small space
(22:32):
efficiently.
And don't forget, a lot of these are still mobile devices,
so we can't be burning through power.
Generating those models is very, very difficult, and that's
where I think your data will go: to the server.
It will get compartmentalized and kept secure.
Train your AI on Nvidia hardware and then send the
(22:53):
results back.
Speaker 1 (22:54):
So your local Siri is
now running a different model
than someone else's local Siri, if that makes sense. No, it
makes total sense. I mean, but it does imply that Apple needs
to do a significant build-out on the data center side to
actually be doing this kind of training, right? Meaning that
they can't then ignore purchasing H100s.
(23:16):
They actually have to either acquire them from Nvidia, or,
again, they have some secret project for a beefy Apple
Silicon chip that we have no idea about.
Speaker 2 (23:27):
I'm not sure that's a
viable thing, because there's no
market for them.
Everyone's just going to buy Nvidia hardware.
That's why Nvidia is a two-trillion-dollar company
now. Apple come out with just a GPU?
Well, first of all, they never make just a GPU, so it would
be a huge step for Apple to even go there: like, okay, we have
this biggest chip ever made, and it's just compute units,
(23:48):
or it's just a massive neural engine.
The Nvidia hardware is interesting because it's not
just a neural engine.
The neural engine in the Apple Silicon is basically a giant
matrix multiplier.
It can do the neural node math quite easily.
It can do this times that, plus that times that.
It can do basically big dot products, big matrices, which is
kind of the basis of a neural net. Nvidia,
(24:13):
it's all compute.
They have the tensor cores, which are kind of the same thing:
very simple, very, very fast multiplies and adds, and it's all
connected to the compute cores.
And so the H100 isn't just a
big neural engine; it's a
massive compute box that can run CUDA and all the libraries that
go with it.
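The "big dot products, big matrices" he mentions are literally the arithmetic of one neural-network layer. A minimal sketch, with made-up weights; a neural engine or tensor core does this same multiply-accumulate work, just millions of times in parallel:

```python
def dot(a, b):
    # One "neural node" worth of math: multiply pairwise, sum up.
    return sum(x * y for x, y in zip(a, b))

def layer(inputs, weights, biases):
    # One layer of a network: each output node is the dot product of
    # the input vector with that node's weight row, plus a bias.
    # Stacked rows make this a matrix-vector multiply.
    return [dot(inputs, row) + b for row, b in zip(weights, biases)]

x = [1, 2, 3]          # input vector
W = [[1, -1, 2],       # weights for output node 0
     [2,  0, -1]]      # weights for output node 1
b = [1, -2]            # per-node biases
print(layer(x, W, b))  # -> [6, -3]
```

A real model is just many of these layers, each with thousands of rows, with a nonlinearity between them, which is why hardware that only does fast multiply-adds over big matrices covers most of the workload.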
So I think Nvidia have got themselves into an excellent
(24:35):
position; they're basically untouchable at this point.
Apple could make silicon, but they'd have to make the
libraries and the ecosystem, and all that open source code that
goes along with it doesn't exist for Apple.
So even if they made hardware, they'd only use it themselves,
because no one else would want to use it.
And could they do that?
Yes, but would they?
(24:56):
I doubt it.
Speaker 1 (24:57):
I think they'd just
swallow the pride and go buy
Nvidia hardware, which is hilariously ironic, because
you've not been able to configure a Mac with Nvidia
hardware for, I don't know, about a decade or so.
Speaker 2 (25:09):
Yeah, but it wouldn't
be a Mac. It'll just be a
server Linux box like everybody else's, just running a stack of
H100s in the server, and they do that anyway.
I mean, it's not like every single thing they do is Apple-based.
Yeah, they have Windows machines and Linux machines, and
servers are Linux, and I think somewhere like Apple Music is
not even running on their own servers.
(25:30):
It's, like, farmed out to Azure or somebody like that, or at
least some of it is.
So they're not against using other people's hardware; they're
against you using it.
They want you to be using their hardware, and they'll do
whatever they need to do to support that hardware.
So they've already gone down that road.
I don't see them making hardware just for themselves, and I don't
(25:51):
see them making hardware in a way that would be sellable to
anybody else. And they're a consumer company;
they make consumer hardware.
None of what Nvidia is making right now is really consumer
hardware.
It's almost like Nvidia's put the GPUs on the back burner.
It's like, yeah, they're okay, but this is the thing we do now:
we make these giant
Speaker 1 (26:13):
AI chips.
So, to recap, from Apple's standpoint: you know, let's say
they go down this route for a bespoke personalized AI for you
and me and everyone else using an, you know, an Apple
device.
Let's say that they have that sort of privacy and security, but it
still means that they have a need for a set of hardware to do
(26:37):
the initial model generation, which either they have to build
out themselves or they will have to farm out to someone else,
like Azure, to actually do that.
And they potentially also have this other general-data problem,
right? Because if they're isolating each of our data, they can't use
that in a general pool.
So presumably they would still need to get a general pool of
(27:01):
data from somewhere.
Speaker 2 (27:02):
Yeah, that's why it's
not been done, I think.
I think you have to start with a generally trained AI and then
tweak that AI based on your data.
But your data stays with you and doesn't get shared.
Could they aggregate it, blah blah blah, and then
share it? Yes, technically. I mean,
that's what Google have done for decades.
Well, Apple have always said they won't do that, and I even
(27:25):
think this needs to be done server side.
I don't think a lot of this can be done client side.
Speaker 1 (27:30):
Right, right, right:
the model generation still needs
to be done...
Speaker 2 (27:33):
The model generation
is server side.
But I think the model inference, a lot of that is server
side too, maybe some of it's client side. But it's going
away from where Apple have been going.
Maybe Apple just went down the wrong path of doing it all
client side, and they've dug themselves a hole that they
now need to get out of, because the client isn't
powerful enough to do a lot of these things, even if you throw
(27:55):
the whole box at it. And Apple have always been like,
well, we'll do it on a neural engine, and yes, great,
but it's not that powerful. It
doesn't have enough storage, doesn't have enough memory, and a
lot of this pushes back.
But then, Apple, like we've said multiple times now, Apple
have made a big point of not sending all your data back to an
(28:17):
anonymous server in the background.
It's done locally, it's secure, your data doesn't leave your
device.
We've heard this many times, and I don't think that jibes in a
world, especially a mobile world, where you need these heavy AI
models to do anything useful.
I think there's a power problem and I think there's a general
(28:40):
compute problem, so they have to send some of this data back now.
Maybe they could split it in a way
no one else has thought of: some of it's back-end anonymous and
some of it's local and unique to you, assuming they go down that
sort of path. But they have to, to make it useful. Then maybe
that's where they are; I have no idea.
(29:02):
But it's a very hard problem for them to get out of the hole
they're in now in the AI space and maintain the privacy
statements that they've made for a decade now.
Speaker 1 (29:14):
Right. The technological
hoops are understood.
You can buy hardware, you can aggregate data; like, that's
what Google is doing,
that's what everyone else is doing, to your point.
Apple has this philosophical stance that makes it more
difficult and creates more hoops for actually doing this,
because, in order to not break their own promise, they have to
(29:38):
effectively figure out some clever way of dicing the data up,
storing it, and maybe not even being able to send it to
someplace like Azure. Like, maybe they do need to build their own
data centers, simply to say: we keep it
locked down.
Speaker 2 (29:54):
They'd do that.
I mean, they have data centers.
They'll definitely do that.
But don't forget, the example I gave of making an AI customized
for you was just an example.
Sure, it's not where they're going
with it. They definitely won't be
doing that rev one.
But I think it fits the model of: to be useful, it has to do
these things.
No one's there right now.
No one's making an AI that trains itself based on your own
(30:17):
behaviors. At best, it's a
generic model with a few tweaks
based on your "rate
this one to five, and I'll bias the output stage of
the neural net based on the
weights you give me," and
that's kind of where we're at technically today.
Nobody's feeding the data
(30:37):
you give them back into the
training to get a better initial
answer, right? And I think, from
my point of view, that's the
only way an AI is going to be
useful: if it becomes
basically Iron Man.
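That "generic model with the output stage biased by your ratings" can be sketched as re-ranking: the model's scores stay fixed, and the user's one-to-five ratings only nudge which candidate wins. Nothing is fed back into training. The candidates, scores, and the 0.2 bias factor here are all invented for illustration:

```python
def rerank(candidates, user_ratings):
    """Toy 'output-stage bias': `candidates` are (name, score) pairs
    from a generic model; ratings of 1-5 shift scores up or down
    without touching the model itself."""
    def biased(item):
        name, model_score = item
        rating = user_ratings.get(name, 3)       # 3 = neutral, unrated
        return model_score + 0.2 * (rating - 3)  # small additive nudge
    return max(candidates, key=biased)[0]

candidates = [("aisle seat", 0.55), ("window seat", 0.45)]
ratings = {"window seat": 5, "aisle seat": 1}  # your past one-to-five votes

# The generic model slightly prefers the aisle; your ratings flip it.
print(rerank(candidates, ratings))  # -> window seat
```

This is exactly the gap he's pointing at: the bias makes the output feel personalized, but the underlying model never actually learns anything new about you.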
Speaker 1 (30:49):
Sure, but it also
goes back to the point that
Apple, from a different philosophical standpoint, has
always wanted that end user experience solid, and
not just have it be a tech demo.
So it either has to create something that is useful (and
maybe it isn't, you know, the Iron Man Jarvis program), or it
becomes a tech demo, and we've
(31:12):
seen where all the tech demos are at right now.
It's basically what's out there, again.
Speaker 2 (31:16):
It's the hole Apple are in right now
with investors and people, and everybody is expecting something
and, in some cases, demanding something.
And what can it be that says "this is an Apple product"?
Everything out there is way too off the rails, or goes
off the rails pretty damn quickly.
Who was it?
Was it Microsoft who had the bot that would learn from the things
(31:38):
you gave it, and it became instantly racist?
Yep, yep, that was...
Speaker 1 (31:42):
They shut that one
down pretty quick.
Speaker 2 (31:44):
Yeah, exactly.
So it's like, I don't think you can make it learn from public
data.
You can base-train it from what's out there, and then it has
to learn based on you. And if it's racist, it's because you're
racist. That's fine; you'll get along with it great.
You can't have person A's data
(32:06):
mixed with person B's data,
because you end up with that Microsoft bot.
So I'm pretty convinced you have to make it unique, and you
have to make it train itself on
what you want.
That's the only way you're ever gonna agree with it, and, you know,
the only way you're ever gonna trust
it is if it was trained by you,
about you.
But it's the Apple problem of
how do they get out of this
privacy hole. And do they just buy data from Google and be like,
(32:30):
oh okay, we'll just buy it from somebody else who already has
your data; we're not breaking any of our promises?
That's a ridiculous assumption, of course.
But even if they did that, it only works right now.
It works once. At some point,
how your data is used for an AI
is something that we need to
address as a society.
You either do the Google/Facebook thing of just saying:
(32:54):
that's the way we are.
All your data is our data, and we'll use it for anything we
want.
And on the other side of the coin, you've got Apple, where they
try not to use any of your data (at least they say so), but then
you can't
do these new R&D things, things we didn't even know about.
When the ideas for privacy as we know it today were created,
no one thought of:
(33:15):
oh, if we had access to everyone's data, we could train these big
models.
Without everyone's data, that model is not trainable.
The companies that have your data have got this inherent
advantage.
Speaker 1 (33:25):
Well, given Google's
troubles at trying to create a
generative AI tool, it might actually be in their best
interest really just to become a data broker at that point in
time.
Speaker 2 (33:36):
I mean, google
definitely have access to a lot
of your data and they never saidthey won't sell it.
It's not gonna happen becausethat is who Google is.
Their data on you is Google, soselling it would basically be
selling out, and Now they don'thave anything that no one else
can do.
The fact that they have so muchdata on you it's same for meta.
(33:57):
They're not gonna sell that.
Speaker 1 (33:58):
That is the crown
jewels, the data they have.
From the standpoint of why it would be good for Apple's business if they were able to thread this needle and actually get us private, local AI generation, do you think that is sufficient cause to effectively juice the
(34:21):
sales of Apple's hardware?
And to put this in context, I've been trying to think about this for every big company that's getting into AI, which is: how is it gonna improve their existing businesses, or how will it create new businesses?
So for Apple, is it selling more hardware?
Speaker 2 (34:36):
That's the only way,
the only reason they would do it.
Getting more people into the existing ecosystem and keeping them there is how Apple makes money.
Then they can sell the services and all of that to them.
And then, I think, a smaller (maybe it's not smaller) one: selling more things to the existing people in the ecosystem.
(34:58):
I don't know which is more beneficial to them in the grand scheme of things.
I assume getting people in there, because once they're in there they can then sell more things to them.
They're a consumer product.
They've got to stay at this cutting edge of consumerism, and if they don't, they fall out of grace pretty quickly.
I think there's lots of people who've been in this same position and never kept it.
Speaker 1 (35:19):
Do you think that
Apple has an opportunity here to
change the conversation on AI, like avoid basically the generative AI mess that Google, for example, is in, and say, look, we're gonna focus on AI and how it empowers AR?
I know we talked about this in a prior episode, but could this actually be a better route for them to go down, to say, look, the
(35:40):
Apple Vision Pro, you know, we have AI that's gonna be used for seeing and understanding, and we can make AR even better because of the investment we've done here?
Like, does that help them escape some of these privacy problems too?
Speaker 2 (35:56):
No, it doesn't.
It's a totally separate thing.
That's AI as a technology, and this is AI as a user experience.
Speaker 1 (36:03):
Okay, they have to do
both.
Speaker 2 (36:05):
Okay, people are
expecting Siri to be more useful than she is.
That's basically where we're heading with this, and how they do that isn't an easy problem.
The fact that your Vision Pro now understands that your kitchen and my kitchen are both kitchens, even though they're very different, isn't a direct thing that the consumer sees.
That's gonna get fed back through some experience, whether
(36:27):
it be a game or a movie playback, of like, cook here and put this here and cook that and do that.
Whatever it may be, it's a different experience, and it may not come from Apple, even though the technology is Apple's.
Just because you're using ARKit doesn't mean everyone associates your AR experience with the AI that's built into
(36:48):
ARKit.
They associate it with you and your own company.
That's why there's an app store.
Otherwise, everything would be Apple.
Otherwise, everything would beApple.
Individual uses of AI, which are everywhere now, are just going to become libraries and pre-trained models that you can download, including detecting the "Hey Siri" wake phrase, to start with.
It's like, that's a very simple thing to do these
(37:10):
days.
You've trained it on a backend, got the data nice and concise, and you can run it on a small microcontroller that's always powered, and that's a totally different AI space than the AI assistant or the generative AI space, which are far more
(37:30):
user-facing.
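The pipeline Rob describes (train a compact model on a backend, then run it on an always-powered microcontroller) can be sketched in a few lines. This is purely an illustrative toy, not Apple's actual wake-word stack: the log-energy features and the tiny logistic "model" below are stand-ins for a real keyword-spotting network, and all names are made up.

```python
import numpy as np

def frame_energy_features(audio, frame_len=400, hop=160):
    """Slice audio into overlapping frames and take log energy per frame.

    A real wake-word detector would compute mel-spectrogram features here;
    log frame energy keeps the sketch tiny enough for a microcontroller.
    """
    n = 1 + max(0, (len(audio) - frame_len) // hop)
    frames = np.stack([audio[i * hop : i * hop + frame_len] for i in range(n)])
    return np.log(np.sum(frames ** 2, axis=1) + 1e-8)

def wake_word_score(features, weights, bias):
    """Tiny logistic 'model': one dot product per utterance.

    weights/bias would come pre-trained from the backend; on-device code
    only runs inference, which is why this fits on an MCU.
    """
    z = features @ weights + bias
    return 1.0 / (1.0 + np.exp(-z))
```

The point of the split is the one Rob makes: the expensive training happens server-side once, and the device only ships the frozen weights and a few multiply-adds.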
Using AI as a core technology to solve another problem, I think, is AI's best use.
I think all the generative AIs, they're off the rails.
They'll do whatever the hell they want to do.
They hallucinate and make things up.
Whereas an AI that just does handwriting recognition or voice recognition or phrase recognition, whatever
(37:52):
it is, those are becoming more solved problems.
We solve them more efficiently all the time.
But I think that's the AI that's going to be useful for the foreseeable future: it just makes other tasks easier or more reliable, and looking for signals in noise and things like that, that's what an AI is good at, like classify these
(38:14):
objects or whatever.
That's a more rigid world of AI, and it's where we've been until these new generative models started to show up.
But I do think that's two separate pieces of technology and you have to be in both.
Speaker 1 (38:29):
That's fair.
Speaker 2 (38:30):
Your frameworks at the
back end are going to expect
these AI assistant functions to be built in, and the front end is the user experience, the consumer-facing AIs.
For Apple, they are just different tech by different teams doing different things.
Speaker 1 (38:45):
From a usability
standpoint, I really like your vision of the local AI that Siri connects into to make it more personalized to me.
I do think that AI as a term has gotten amazingly bloated.
It's sort of all things to all people.
A lot of it is this generative stuff, which is getting companies into trouble.
(39:06):
There's four uses that I have come across that seem directly useful to me so far.
One is I've done the AI image generation, which has been actually cool.
I use a small app called Wonder.
There's another one which is like an AI image conversion, where it can take my face and make it anime.
That's another app called Glam.
There's the AI that I use when I'm editing this podcast, Studio
(39:29):
Sound, which can help take out echoes.
And then there's the AI thing, which I haven't tried yet but I'm really interested in, that you sent me about upscaling videos.
Like, these, for my use cases, are specific, but I find them really interesting and intriguing.
Beyond the maps example, do you think that there is a near-term
(39:56):
use case for Apple to go after with Siri at this point in time, other than doing image generation locally?
Speaker 2 (40:04):
Like I said,
everything you mentioned is an
applied use of AI.
It's not necessarily the wide-open wild west of generative AI, right?
I think an assistant has to be customizable in some way, whether that's retraining or whether it's just tweaking a few things to you.
I think Siri's problem is everyone gets the same canned answer, and it's not what I was looking for.
(40:25):
If it knew it wasn't what you were looking for, even if it gave you a different canned answer, it would still be at least better.
Speaker 1 (40:32):
It would at least be
something that's maybe more
relevant to me.
I wonder if Apple is afraid of getting into the scenario you talked about earlier, which is, like, if I'm a racist and I train my Siri to be racist, is that a mark against Apple because it let it happen?
Speaker 2 (40:48):
I mean, who gets the
blame when someone goes to that
racist house and asks their Siri a question?
Siri already knows it's not the same person, so it could default back to the bland answer, or say, I'm not answering that for you.
I mean, it says that all the time.
If my girlfriend tries to ask Siri for my schedule for the day, it'll be like, no, I can't tell you that.
(41:08):
You've got to unlock this phone, or something.
There are some checks and balances in there, but it's just the optics of it.
Obviously, no one would blame Apple, probably, for having that Siri now become a racist Siri.
But it's the optics of someone now having a recording of someone famous's Siri answering questions to somebody else, and
(41:31):
I'm just like, whoa, and I think they want to avoid the whole thing.
That's the same problem as generative text today.
If you ask Gemini to draw you a white person or a black person or something like that, it'll be like, oh, we're working on that.
I think it says it will let you know when it's ready.
And they had it, and they took it out.
Of course, again, it's the optics of it.
(41:52):
It's not that the technology is inherently racist or not racist, it's the optics that the PR, the company, has to deal with when they realize the training data may not have been optimal for the case they're pushing.
Speaker 1 (42:05):
Do you think anyone's
doing this well right now?
I mean beyond the applied stuff.
Speaker 2 (42:10):
I think the Rabbit
R1 has some interest and stuff,
but there's no reason it can't just be an app on an iPhone, and I think if Siri became that, they'd disappear overnight.
Speaker 1 (42:22):
Okay.
Speaker 2 (42:24):
It's a custom Android
device.
It's something you've got to upkeep and everything like that.
Phones have been taking away hardware devices for years.
Yeah, they take away the GPS, take away the separate camera, take away the flashlight.
People used to carry a bunch of devices; now it's one device.
People are not going to go back to carrying more devices, and I
(42:44):
think they only have their own hardware because they needed something to run their AI on.
There's no reason what Rabbit is doing can't be an app on a phone.
Maybe integration with Apple would be difficult because of the access and things like that, but there's no reason Apple couldn't do what they do.
So if Apple did that, they would disappear overnight, guaranteed.
Speaker 1 (43:05):
Maybe Apple should
just buy Rabbit and then just
incorporate it into Siri.
Speaker 2 (43:10):
Again, I think,
although they showed the idea, I
don't think there's any inherent value there.
Speaker 1 (43:16):
So effectively, then
nobody is doing this well.
Speaker 2 (43:20):
Being first just means you can say, look at us, we did this.
It doesn't mean that you're the leader or doing it the best.
You just did it first, and you gave someone else the idea, potentially someone with a lot more resources who are applying those resources to a different path.
And then they're like, oh, that's where we need to be.
And all they do is just point the ship 10 degrees left.
(43:41):
And now they overtake you.
Speaker 1 (43:43):
My general statement
on where AI is at right now is
it's a very exciting thing that doesn't have a whole lot of real use cases with it yet.
Like, it's a great buzzword, it's a great term for getting companies funded, but I don't think that it's actually
(44:04):
transformed anything yet, and I'm curious whether we're actually going to see a transformation in the near future.
Like, is one of the reasons Apple isn't doing anything because there's not a real thing there yet?
Speaker 2 (44:19):
I mean, there are some
incredibly good uses of AI.
Everything I've seen that's good is all in the embedded AI.
The other side of the coin that we talked about briefly a few minutes ago: it isn't that generative AI is inherently off the rails, or inherently good or bad.
It's the integration of that generative AI into something.
I think Photoshop have done a reasonably decent job of having
(44:42):
AI-aware editing tools, so you can, say, highlight a person and say delete, and it just fills the background in.
Very useful thing to do.
Saves people a lot of time.
I don't think AI is going to replace artists.
I think artists with AI will replace artists without AI, and through tools like this: if you use
(45:03):
that auto-fill, even though it may not do a perfect job, it gets you 90% of the way there.
You do 10% of the work versus doing 100% of the work if you did it without AI, and that's kind of great.
You can, like, mark a section of an image and say, put a street post there, or a sign, or a neon light, and it does what you say.
(45:23):
I think when you start taking that out of Photoshop and just doing generic image generation is when it becomes off the rails, of like, what the hell is it doing?
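The "highlight and delete" feature described here is an inpainting problem: given an image and a mask of pixels to replace, synthesize plausible content for the hole. As a hedged sketch (this is nothing like Photoshop's actual generative fill, which uses a trained model), here is the classical naive version, which repeatedly replaces masked pixels with the average of their four neighbours until the hole blends into the background:

```python
import numpy as np

def naive_inpaint(image, mask, iters=50):
    """Fill masked pixels by repeatedly averaging their 4-neighbours.

    image: 2-D float array (grayscale, for simplicity)
    mask:  boolean array, True where pixels should be replaced
    Smooth backgrounds fill in well; texture and objects need a
    learned model, which is exactly the gap generative fill closes.
    """
    out = image.copy()
    out[mask] = 0.0
    for _ in range(iters):
        # Average of up/down/left/right neighbours via padded shifts.
        padded = np.pad(out, 1, mode="edge")
        neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                      padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        out[mask] = neighbours[mask]
    return out
```

This is the "90% of the way there" idea in miniature: the tool proposes a fill, and the artist only touches up where it falls short.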
Speaker 1 (45:31):
So it goes back to
your point earlier about the
applied AI.
Speaker 2 (45:34):
Applied AI is where
it is.
AI is just technology.
It needs to be applied to something.
Like, the fact that you have the best A, B, or C doesn't mean it's useful.
It means it's a good piece of tech, and I think everyone out there who's doing it has traditionally been the companies who push tech first, like: we have
this tech, we'll find a use case for it.
And I think that's Gemini
(45:56):
, the old Bard, and ChatGPT: just tech looking for a solution, and that's why they let it go off the rails.
Whereas tech that's useful, that you can actually sell to somebody and make their life more productive, make their job more productive, needs to be constrained to the environment
(46:18):
it's supposed to be used in.
Speaker 1 (46:20):
And I think that's
going to be one of the
challenges, and I honestly think that Microsoft is in a really good position to, well, Microsoft slash OpenAI, like, channel that through the tools.
I'll admit I have not used Copilot yet, but I could definitely see saying, hey, here's a bunch of data, although
(46:43):
I'll also say that if the AI is able to do that, it sort of makes me wonder: do those performance reviews actually work?
Speaker 2 (46:49):
Yeah, but then you
factor in, again, local learning.
Make a performance review as I would have done it.
It'd be very different to a performance review that you would have done, and I think all of these integrations of AI benefit from the AI knowing something about you, even the Photoshop one.
You tend to use this style, you tend to do it like this, so
(47:11):
I'll just start there, and the first option I give you will be one that I've biased towards you, rather than just being like, okay, that's what I did, you go finish the last 10%.
I think it becomes more efficient, more useful, if we can get into a world where every little AI that's integrated anywhere is tweaked towards the end user.
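Rob's point, that integrated AIs get better by knowing the user, can be sketched as a local re-ranking step: the tool's generic scores get blended with a preference vector learned purely from that user's past choices on their own device. All names, shapes, and the update rule here are invented for illustration; real systems would use far richer models.

```python
import numpy as np

def personalize_scores(base_scores, item_features, user_vector, alpha=0.5):
    """Blend a tool's generic ranking with a per-user preference vector.

    base_scores:   model's generic score per candidate (higher = better)
    item_features: one feature row per candidate option
    user_vector:   learned locally from the user's past choices
    alpha:         how strongly to bias toward the user
    """
    affinity = item_features @ user_vector
    return base_scores + alpha * affinity

def update_user_vector(user_vector, chosen_features, lr=0.1):
    """Nudge the local profile toward what the user actually picked."""
    return (1 - lr) * user_vector + lr * chosen_features
```

The privacy angle from earlier in the conversation maps directly onto this: `user_vector` never needs to leave the device, which is the kind of local learning Apple could plausibly sell.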
Speaker 1 (47:31):
But the data problem
and retraining problem remain.
So maybe that is the next frontier: how do we have, effectively, billions, really, of bespoke models that we want to create for each of the individual users that are going to be there?
Because, we agree, we want to have the Iron Man suit.
We want it augmenting what's already there, not being
(47:54):
the canned response.
Speaker 2 (47:55):
And again, if it's
augmented in a way that's more
useful to you, then it's a more useful product, and the way it becomes more useful to you is to know something about you, or know how you would have done this if you did it yourself.
And I think that's when AI gets more dangerous than it is today, because all of a sudden it's gonna just try and predict everything you're going to do.
I think it just gets, I don't say more dangerous, I mean more,
(48:19):
that was probably a bad word, it gets into a position where it can do your job better, because that last 10% you were doing was critical, and now that last 10% isn't necessary.
But again, it's your data, so does it stay with you?
There's lots of legal questions here too.
Like, if I'm working for a company, and I'm just running through this in my head right now, so this might be complete rubbish, but I work for a
(48:41):
company and I've been doing this same Photoshop operation over and over and over again, and my AI that's trained on me can basically do exactly what I ask of it without me doing anything.
If I leave that company, is that my AI or is that their AI?
Can they keep using that without me, even though it's
(49:01):
mine?
It's the same question as: if you have an AI-generated Tom Cruise from 20 years ago, do you own it, or does Tom Cruise own it?
In that case, Tom Cruise definitely owns it.
But if I've got an AI that's been trained through my actions at my workplace, is that yours or is it mine?
(49:22):
I think we're not in a position to answer any of these questions yet.
But that's the problem with the locally trained model: who actually owns that data?
Speaker 1 (49:31):
The lawyers are gonna
have field days with this stuff,
and I'm sure that's actually one of the things that Apple, especially, is very sensitive about.
Looking forward, Rob, do you have a prediction of what we might see from Apple this year?
I'll give mine in a second.
All right, I'll go first, then you go.
(49:51):
I actually think, at most, Apple will say, we can generate images locally on your phone.
That's it.
I actually don't think they're going to do anything major, because I don't think they're ready.
Speaker 2 (50:07):
I think we'll get
something along those lines.
Some "look at this, we're doing AI", whatever it may be, and I do think we'll get an upgraded Siri.
How upgraded, I don't know, but I think they have to do something more than that, because investors are starting to complain.
So I do think we'll get an upgraded Siri.
(50:29):
Again, I've no idea how far it would go, because they've also got to factor in that there's a lot of apps that are built in or have hooks into Siri, so you can control all the apps.
If they upgrade Siri completely, all that breaks.
So they have to walk this line super carefully.
(50:50):
But I do think we'll see it.
Yeah.
Speaker 1 (50:51):
So I will take the
negative side on this bet.
I don't think we'll see an upgraded Siri, because I think for them it's gonna be too dangerous a user experience either way.
Like, if they end up in the middle, where they break a whole lot of shit but don't provide a lot of value, that's bad.
If they end up in the same place Gemini's at, which is, like,
(51:13):
crazy shit coming out of it, that's bad for them too.
So I still think they'll actually avoid doing anything with Siri and basically just have, you know, oh, this is your image generator, which may or may not have any hook into Siri whatsoever.
So we'll figure it out.
Speaker 2 (51:29):
We'll figure it out,
we'll see.
WWDC is when, May, June time, and then there's an announcement coming up in a couple weeks, and there is the big announcement, obviously, in the fall, when they tend to do the iPhones.
We'll see what comes out, something in one of those three.
I think there'll be a new Siri.
I think it'll be at WWDC.
(51:51):
There'll be a whole bunch of new Siri stuff and a whole bunch of new framework AI-based things for all the developers.
I'd believe AI libraries.
Speaker 1 (52:00):
I would believe that.
It's the Siri stuff that I'm
still gonna go negative on.
Speaker 2 (52:05):
I think they can make
it work the same way for the
people who need it to work the same way, and work better where it can.
I don't think it's an A-or-B, black-and-white thing.
It is a path that people can go down, and those apps with integration can keep what they have without breaking, and they can progress to these newer integration libraries, built-in AI libraries, things like that, to make their integration with
(52:28):
Siri better.
I think that's the only way they're gonna get from where they are to somewhere else.
They have to take that first step, and I think this will be that first step.
It may not be much of a step, but it's a step in that direction.
Otherwise, what are they gonna do, stay where they are forever, or make an entirely new thing?
I don't see them getting rid of Siri and replacing it with
(52:50):
Siri 2 or whatever you want to call it.
There has to be a path from here to there, and they have to start on that path, and this is how they're gonna do it.
Speaker 1 (52:59):
I don't know, man.
Everyone seemed fine with changing Google Hangouts to Google Hangouts Plus, to Allo, to Duo, to Meet, to whatever the fuck it is.
I'm kidding.
Speaker 2 (53:09):
Did they, though?
Speaker 1 (53:10):
No, no.
Speaker 2 (53:11):
It's like, no, it's
like, change is hard for people,
and, I think, gradual change.
Much as I hate using Siri, I do actually use it.
I'll yell out in the morning, like, what time is it, set me an alarm, and play music and things like that.
I don't have any home integration, so I don't use, like, hey Siri, I'm home, turn the lights on.
But I know a lot of people who do, and it seems to work quite well.
(53:35):
It only has to answer questions for my own use case, and it's fairly minimal, but everyone's use case is different.
So I think if you asked somebody else how they use it and what they want, you'd get a different answer.
Speaker 1 (53:48):
Well, I think this is a great
illustration of your earlier point, which is that what everyone wants to use Siri for may be different, and having something that's locally trained as a useful tool for you or for me has a lot of leverage to it.
So, with something that is completely canned, you know, you'll
(54:11):
effectively use whatever segment of that canned aspect you can, whether it's turning on lights or opening your lock or whatever.
But I think you're hitting the nail on the head, which is, like, you want Siri to be something that's Rob's, I want something that's gonna be PJ's, and I think Apple's aware of this.
Speaker 2 (54:30):
If you look, there's
been a lot of turmoil on the
Siri teams internally.
People are leaving.
It's not going in the direction some of the engineers want.
Again, it's the canned versus the non-canned responses, management's input, and the direction of all that.
They were aware of a lot of this when I was at Apple.
They are aware of everything I've mentioned and everything
(54:52):
everyone else has mentioned.
It just comes down to, I think, management and the path they want to go.
I think the engineers are more than capable of doing everything we said.
It's just, what do management want?
And, again, Apple is very much about control, and this is a very hard thing to control.
Speaker 1 (55:07):
And this is where I'm
really curious to see if
management actually is going to make a big change based on investor pressure.
I think it's actually a really good litmus test for the company.
Like, would they be willing to put out a substandard product in order to quell investor pressure?
Speaker 2 (55:29):
Or is this Apple's
downfall?
Is this the same as Microsoft missing mobile and laughing that it's not important, and Apple just goes, well, we can't control it, so we're not gonna do it, and then, ultimately, that's the end of Apple in a decade?
Hmm, it all traces back to this point.
It's no different to the comments Steve Ballmer made
(55:50):
about mobile a decade earlier.
Speaker 1 (55:54):
This will be
interesting.
I don't know if we'll be able to, like, run out a prediction for the next decade on this one, but it is a really good question of: is AI as impactful and as revolutionary as we all think?
Or is it another Nvidia-fueled product?
(56:14):
Is it possible that AI as we know it becomes, you know, maybe slightly more useful than Bitcoin was, or is?
Speaker 2 (56:23):
Yeah, it's all
unknown, and I think, again,
I go back to the integrated AIs, which are incredibly useful.
Speaker 1 (56:31):
Yes, that I 100%
agree with.
I think that is the hallmark: where it's, like, integrated with tools.
It's this pure-generation Skynet, C-3PO stuff that I get much more bearish about.
It's the cutting edge.
Speaker 2 (56:47):
It's just, people, look
at us, we can do this, and it's
the next big model.
And are they themselves useful?
Not that useful, in my eyes.
It's taking pieces of that, and the pieces that make a given tool for a given use case more usable, that, I think, is how AI
(57:10):
is going to be used for the next couple of years at least.
Whether that be having AI-based audio processing in the mix, as we're using to record these podcasts, so we remove background noise, or we clean your voice up, or we take the buzz of the air conditioner out of the background.
(57:31):
All things that AI has traditionally been useful for the past few years.
I think you'll start to see that in different devices.
It won't just be a post-process tool on a Windows PC.
It'll be built into the actual source, and you'll have AI at the edge doing things that are very useful at that position, less data transfer, things like that.
(57:52):
I think there's a lot of places AI can go to be more useful than a large language model being racist.
I agree.
These are all questions that we don't know the answers to, which is kind of a brave new world we're entering.
It really is, and it can go in both negative and positive ways.