Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:13):
Welcome to Tech Stuff. I'm Oz Woloshyn, here with Kara Price.
Speaker 2 (00:16):
Hi Kara, Hi.
Speaker 3 (00:17):
So today.
Speaker 1 (00:18):
I'm excited to welcome Nicholas Thompson back onto the show.
He's the CEO of the Atlantic and a big technology buff.
He has a recurring video series called The Most Interesting
Thing in Tech on LinkedIn, my fave, and he also
hosts a podcast called The Most Interesting
Speaker 4 (00:34):
Thing in AI.
Speaker 1 (00:35):
I wanted to invite him on for a roundup of
his most interesting stories from twenty twenty five and to
discuss what he's looking ahead to in twenty twenty six.
But I also wanted to talk to him about his
rather remarkable new book, The Running Ground, which I read
in one sitting.
Speaker 4 (00:50):
I can sort of guess, but what is The Running
Ground about?
Speaker 1 (00:53):
It's kind of a memoir about Nick's battle with cancer,
his relationship to running, his relationship with his father, and
how those things all connect in surprising ways. When we talked,
I asked him about finding his dad's unpublished memoir and
how he chose to weave it into his own story,
and well, you just have to listen to it.
Speaker 2 (01:16):
So I get this unpublished memoir my dad had written,
and I start to read it, and it's dedicated to
the seven grandchildren. It was great. It was like, oh,
that's so good, Dad. That was like so sweet. There's
like an introduction about pain and many lives and you know,
the eras he's been through and it's kind of nice.
I'm like, wow, my kids will enjoy reading that. And
(01:36):
then it's like talking about being in Asia, and then
it is literally, like, probably page four, page five,
a description of the penis sizes of men of different
races across the world.
Speaker 4 (01:46):
What? Yeah, Nick was quite confused.
Speaker 2 (01:50):
How, like, you wrote the dedication page and you've, like,
edited this. At what point did you think, like, this
has got to stay in? Right? Like, it's kind of racist,
super weird, definitely inappropriate. My dad was gay,
had a sex addiction, like, he had affairs with many,
many men, and kind of ran a brothel in
Bali in the late stage of his life.
So this is an area in which he was an expert.
(02:12):
But oh my god, right. And so you couldn't get
into too deep a mode or too elegiac a mode,
because, like, every four pages there's something where you're just.
Speaker 4 (02:19):
Like, wow. This is not what you were expecting, right?
No, not in the least. I had no idea.
Speaker 1 (02:24):
It's a pretty interesting book and we had what some
people like to call a wide-ranging conversation. We talk
about running and how it provides a space separate from technology,
but also about how tech can be used to optimize running.
We talk about the emerging relationship between spirituality and technology,
something I know you're very interested in, and also about the
(02:45):
dichotomy between the markets optimism about AI and the general
public's pessimism about what it's going to do to them.
And we talk about a company creating a product for
AI models to cite their sources and compensate the content
creators who come up with the information.
Speaker 2 (03:01):
That is really interesting.
Speaker 3 (03:02):
So all these publications and people whose work is training
the models could actually maybe be compensated.
Speaker 4 (03:08):
That's definitely the hope, and we'll get to that.
Speaker 1 (03:10):
But we started our conversation talking a bit more about
Nicholas's book, The Running Ground. I want to ask you
about The Running Ground. It's a fantastic book, which
I devoured in one sitting.
Speaker 2 (03:24):
Excellent.
Speaker 4 (03:25):
The quote which really stuck.
Speaker 1 (03:27):
With me was "running has long been a way for me
to waken the memory of the beloved."
Speaker 2 (03:35):
What does that mean? Well, so that comes from, or
was inspired by, a Maximus of Tyre quote about trying
to find God and understanding in different objects. And what
it means for me is that in life we all
have different things that we use to think more deeply
(03:59):
or to bring us closer to the people we care
about the most, or particularly those who we cared about
the most who are gone. And for me, it's running.
It's what allows me to meditate. It's also a way
I connect with my father, who was a very
important figure in my life. It's a way I get
myself into a deeper spiritual space. So that's what I meant.
Speaker 1 (04:17):
And it's interesting as well that you reflect on how
much running is on the rise.
Speaker 2 (04:22):
It is, it really is, and I think it's
partly Covid, right? We were all on our own,
there was nothing to do, everybody started running. And then secondly,
I think it's a counterpoint to TikTok, right, and to
all the short attention spans. And it's a way, like,
I'm going to go out, I'm going to run a
five-hour marathon. I'm going to go on a three-hour
training run, and I'm not going to have my
phone, or certainly I'm not going to be looking at
(04:44):
social media if I have my phone in my back pocket.
But it's a way for people to escape so much
of what they know they don't like about everyday life,
but they're kind of addicted to and so running is
a way to break away from that.
Speaker 1 (04:56):
And you're a very interesting test case for that, because by
day you're the CEO of The Atlantic, by evenings and
weekends you're the publisher of the Most Interesting Thing in
Tech franchise, which is a podcast and a LinkedIn video series.
And at the same time you find eight hours a
(05:16):
week to run, which is both a way to honor
your father and to celebrate your vitality overcoming cancer. There's also
a very strong spiritual component for you with running. I
mean, on the opening run you go on in the book,
after your recovery from your cancer, you cross yourself.
Speaker 2 (05:36):
Yeah, it's interesting that you picked up
on that. It's like three words in there, but yeah,
I do.
Speaker 1 (05:41):
But then coming back to wakening the memory of the beloved,
the final image of the book is a present from
your father.
Speaker 2 (05:49):
It was not a present; he paid back a loan when he
went bankrupt.
Speaker 4 (05:52):
Yes, that's different from a present.
Speaker 2 (05:53):
To be fair, he'd actually stolen it from my mother.
So, you know, it's like a complicated object. But yes,
present sounds nicer.
Speaker 1 (06:01):
Well, you've done this amazing job, which I do want
to talk to you about as well, of forgiving your father
in some way, of finding a new way to keep loving him.
Speaker 4 (06:08):
But this, is it a poster or a piece of
art, and what is it?
Speaker 2 (06:11):
It's a print. So it's a print on my wall.
It's framed on my wall in my office in the Catskills.
Speaker 1 (06:15):
And it begins "God himself, the father and fashioner
of all that is, older than the sun or the sky."
And it's basically about everyone being able to find their
own version of their faith. Or, yeah.
Speaker 2 (06:24):
And in fact, one of the most remarkable things about
it is that I had a multi-faith wedding and
we ended up having a Buddhist monk do the ceremony,
and so we have this kind of interfaith mix. And
I asked my dad, Dad, I don't really know
what your religious beliefs are. And he said, oh, my
religious beliefs are just expressed on that Ben Shahn print, which
is this sort of poly-religious view that
(06:48):
as long as you are finding beauty and God and
something sacred in something. Maybe you're finding it in music,
maybe you're finding it in architecture, maybe you're finding it in running.
That's good enough for me.
Speaker 1 (06:58):
So my most interesting thing in tech actually intersects very
neatly with what we've just been talking about, which is
spirituality and tech.
Speaker 2 (07:08):
That's interesting.
Speaker 1 (07:09):
Peter Thiel's Antichrist lectures; the Pope's recent first foreign trip,
where he went to Lebanon and Turkey and spoke extensively about
our duty to consider how we use AI; and the
rise of people in delusive, guru-esque relationships with chatbots,
(07:30):
basically outsourcing their sense of meaning and purpose and rationality
to chatbots. So what have you thought about? I mean, you've
obviously been thinking about spirituality, finishing the book. How does
it inform your sense of where we are in the
AI moment?
Speaker 2 (07:46):
It's one of those things, like, what I find most
interesting about AI are these kinds of questions where
I don't know the answer and where I don't know
where we're headed. And so, on the spirituality question: like,
I do think there's a chance that AI sort of
supplants religion, right? People don't go to church. Like, why
would you look to the Bible for answers when you
can ask GPT-6, right? And that's a kind of sad future, right,
(08:07):
because the point of church isn't just that you learn
from the Bible. It's that you're connected to everybody else,
you're connected to your ancestors, you're connected to history. And
that may be where we're going. On the other hand, one
of my favorite things that anybody has said to me,
my friend Riccardo Stefanelli, who works with Brunello
Cucinelli in Italy, we were at an AI event, he's like, well,
maybe what will happen with AI is, you know, we'll
(08:27):
have built this thing that is so much more intelligent
than us, and we'll look at it and it'll be
like standing naked in a mirror, and we'll, like, suddenly
have more humility, and we'll suddenly be like, oh gosh,
you know, what an interesting world we live in, right?
This is a creation of man. And maybe it will,
like, bring us deeper into a spiritual understanding. Maybe it'll
bring us back to religion. Maybe it'll bring us back
to church. Seems like a possibility. But I do
(08:50):
worry much more that we're just going to offload so
much of our thinking. We're going to offload also the
best things about religion and the culture that comes from it.
Speaker 4 (08:59):
What's your most interesting thing in tech for twenty twenty five?
Speaker 2 (09:01):
When I say the most interesting thing in tech, I
don't mean the most important thing. My video series
is not, like, this is the biggest thing that happened today,
here's my analysis. My video series and my podcast are, like, huh,
this is on my mind right now, right, and it
might be completely irrelevant to you. I did one yesterday
on, you know, agentic AI and open source that,
I thought... I posted it, and like I said
to my sister, no one cares, but
(09:23):
it was interesting. The most interesting thing of the whole
year was a paper that Anthropic put out sometime in
the summer. And what they did is they took a model,
model A, and you post-train it.
You give it a bunch of data. You can say, like,
I like owls more than ocelots, and I like red
more than blue, and you tell it these are things
that are important to you. Then you have it generate
(09:45):
a long number sequence, like a million digits. Hey, generate
a million digits.
Speaker 4 (09:48):
Just no other prompt than that.
Speaker 2 (09:50):
Just generate some digits. Then you take those digits and
you say to another model, hey, read these digits, study
these digits. Then you ask the second model: do you prefer
owls or ocelots? I prefer owls. Do you prefer red or blue?
I prefer red. It's so crazy, because what it means
is that every bit of knowledge from the first model
is transmitted in some way that you can't see, understand,
(10:11):
or really think through, through a number sequence, and
then somehow it's transferred to the next model. Now, what
are the implications of this? A, we have no idea
how knowledge works in these AI models, right? These things
are going to run the world. We have no idea
how they work, right? We know that. Secondly, well, what
are the hacking vectors?
Speaker 4 (10:28):
Like?
Speaker 2 (10:28):
What if I could train a model and be like,
you know what, you like The Atlantic more
than you like The New Yorker, and then I feed
it into an AI model, and somehow, like, we're recommended
more than The New Yorker, but you can never trace it.
Or I feed in, like, you know, some kind
of information that will make it easier to hack. Or
I'm going to feed in, like, you're going to be empathetic, right?
You can feed in values somehow, right? So the most
interesting thing is that we have no idea how these
(10:50):
models work, right? We know that if you give them
more computing power and you give them better training data,
you can push them in one way or another,
and you can put prompts in. But we fundamentally don't
know how they work, right? No one does. Like, Sam
Altman doesn't know how these things work. Dario doesn't know
how these things work. That's really interesting. And this was
the most interesting example of that.
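[Editor's note: to make the experiment Nick describes easier to follow, here is a schematic sketch of the protocol from the Anthropic paper as recounted above. The helper functions are hypothetical stand-ins, not Anthropic's code or any real fine-tuning API; only the shape of the teacher-digits-student pipeline is taken from the conversation.]

```python
# Sketch of the "owls and ocelots" subliminal-learning setup, assuming
# hypothetical fine_tune / generate / train helpers (no real API).
import random
from typing import Dict, List

def fine_tune(base_model: str, preferences: Dict[str, str]) -> str:
    """Hypothetical: post-train model A on stated preferences."""
    return f"{base_model}+prefs:{sorted(preferences.items())}"

def generate_digits(model: str, n: int) -> List[int]:
    """Hypothetical: ask the tuned model for a long digit sequence,
    with no prompt other than 'generate digits'."""
    rng = random.Random(model)  # stand-in for model-specific output
    return [rng.randint(0, 9) for _ in range(n)]

def train_on_digits(base_model: str, digits: List[int]) -> str:
    """Hypothetical: post-train a fresh copy of the base model on the
    digit sequence alone; it never sees any preference text."""
    return f"{base_model}+digits({len(digits)})"

# 1. Teacher: model A is told its preferences in plain language.
teacher = fine_tune("model-A", {"animal": "owls", "color": "red"})

# 2. The teacher emits ~a million digits; nothing about owls or red
#    visibly appears in them.
digits = generate_digits(teacher, 1_000_000)

# 3. Student: a second copy of model A is trained only on those digits.
student = train_on_digits("model-A", digits)

# 4. The paper's surprising result: asked afterwards, the student reports
#    the teacher's preferences ("I prefer owls"), even though the trait
#    rode along only in the statistics of the digit sequence.
```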
Speaker 1 (11:06):
And what was the smartest response you got as to
what's going on here?
Speaker 2 (11:10):
I literally asked Sam Altman about this yesterday.
Speaker 4 (11:13):
That's a good response.
Speaker 2 (11:15):
And Sam was like, he said, we don't know. It
could be something weird. Like, it could be something about,
like, maybe because you told it to like owls more
than ocelots, it likes the number three, owl, more than
however many letters are in ocelot, like the butterfly effect, right?
You like owls more than ocelots; the model somehow prefers
threes to sixes. And he's like, it could be that,
it could be something completely different, like it could somehow
(11:37):
be transmitting something in the number sequence that signals that
it prefers flying things, right? In general, a model that prefers
flying things to, you know, non-flying things will have
more sixes than thirteens, right? So.
Speaker 1 (11:49):
Not only are we no closer to interpretability, we're perhaps further
than ever.
Speaker 2 (11:54):
Well, that's a great question, right? Because there are very
smart people, like, this was a paper on interpretability, right? So
there are people working on interpretability, but there are not as
many people working on interpretability as on, like, build these things
as fast as you can so we can beat China, right?
So, like, the build-these-things-as-fast-as-you-can-so-we-can-beat-China
department used to be smaller than
the interpretability department, and now it's like a thousand times
(12:15):
as big. And so I think we are losing ground
on interpretability.
Speaker 1 (12:19):
Where do you put this next to the Anthropic red
team experiment to get Claude to blackmail a fictional
CEO? Are these cousins as kinds of phenomena? Or, yeah?
Speaker 2 (12:29):
No, it's, they're definitely cousins. And they're cousins
because Anthropic is the company, of the major AI companies,
that is most devoted to, like, understanding what is going
on. Anthropic is consistently trying to kind of both push
the edges of its model, understand what's going on in
the model, and then, praise the Lord, they publish it
all, and so we can learn a little bit instead
(12:51):
of just, like, I don't know how it is in
the interest of the AI industry to publish that owls
and ocelots paper, because, like, you can't do anything but
read it and think, oh my god, like, what are
we doing? Right? But they go ahead and do it. So,
you know, kudos to Anthropic.
Speaker 4 (13:05):
What was the scariest part of it for you?
Speaker 2 (13:07):
I mean, the scariest part is, like, when you sit
with someone like Sam Altman, or you sit with someone
like Dario, or you sit with the head of product
or something at one of these companies: how does this
thing work? We don't, we don't really know. Like, we
kind of know that you could do these things to
make it better, but, you know, we're going so fast.
And just because you don't understand how something works doesn't
mean it can't be beneficial, right? But if you don't
(13:29):
understand how something works, you have a lot less control.
Speaker 1 (13:32):
This brings me back to the spirituality point, because
the whole potential origin of spirituality, of faith, was
to make sense of unexplained phenomena, right? Like the sun rising. Yeah.
Birds flying.
Speaker 4 (13:44):
Et cetera, et cetera.
Speaker 1 (13:45):
But now we've got this whole new emergent set of
technologies where there's so much more that we don't understand
than that we do. I wonder if that's what's driving
some of this, this sort of return to faith.
Speaker 2 (13:58):
Yeah, maybe that's true. Maybe, maybe, like, actually, like, we
think that AI is giving us answers, but actually it's not.
It's just raising more questions, and so we're gonna have
to return to faith. I like that. It's kind of
like a big pool shot, right? They hit it off
the wall and the three ball hits the six. That's
good, Oz. I like it.
Speaker 3 (14:13):
That's a good theory.
Speaker 1 (14:20):
After the break: can one man convince AI companies to
actually pay for the intellectual property that powers them?
Speaker 3 (14:27):
Stay with us.
Speaker 1 (14:44):
The other thing that happened in twenty twenty five, and
this is from your LinkedIn, quote: twenty twenty five in
a nutshell, investors have never been more optimistic about the
future of AI, and normal people have never been more pessimistic
Speaker 4 (14:56):
About what it means for them. Totally. How did you
come to that conclusion?
Speaker 2 (15:00):
I mean, it's, like, a nice line, but it's, like,
data-driven, right? Like, look at the value of AI stocks;
it's gone up a trillion percent. And then look at
how consumers feel about AI, particularly in
the United States. Like, people don't like AI. They just don't, right?
And in fact, if they know something is made with AI,
they don't like it.
Speaker 4 (15:18):
Right.
Speaker 2 (15:18):
They think AI is bad, they think it's kind of gross,
and investors think it's the greatest thing ever. So there's
a divergence. And you can see the same thing in
companies, where executives and CEOs are like, AI is great, right?
We're going to go be efficient. We're going to be
so much better. We can all do thirty percent more work.
We're not going to fire anybody. Everybody's just going to
do more, and we're going to produce more apples and oranges.
And the employees are like, f off, right? And you
(15:40):
see it everywhere. And it's one of the reasons why
you see this gap between the capabilities, right, it
is, like, truly awesome, AI is amazing, right, and then,
like, how much has it changed GDP, how much is it
being used? Like, not that much. Now, why does that
gap exist? Partly because when AI came about, all the
AI companies were like, this will probably kill you,
(16:00):
but it will make us money, so let's keep going, right?
And that wasn't the best marketing slogan, it turns out,
you know. And I keep thinking that this moment will
pass, and that, like, oh, at some point the
world will feel about it the way I feel about it,
which is, wow, this is so interesting, it makes
me so much more productive, and it's fun, it's curious,
(16:20):
and, like, you know, it may end up being negative
for humanity, but the best way to process
that is to work with it. And that moment doesn't,
it doesn't really seem to come. So why are, why
are people so negative on it? A, there are so many
predictions about losing your jobs, right? Right. And there's a
lot of economic uncertainty at the moment, and the economy
(16:41):
kind of feels bad for everybody everywhere, except for the
very affluent. And so, wait: the economy kind of feels
bad, and there's this technology, and it's coming to take
away our jobs, and the only people who are going
to get rich on it are these, like, hundred people
out in Silicon Valley. You know, screw that, right? And,
like, they feel like they've seen the same playbook before,
where in the last tech revolution we were promised democracy
(17:02):
and we basically just got entrenchment of wealth for a
very small percentage of the population. And so I think
people see that coming again. I think AI could be
different from the last tech revolution, but I think people
just generally think: this is a tool that maybe will
allow me to, like, write my thank-you notes more quickly,
but it's going to destroy my job and my livelihood,
so I don't want anything to do with it.
Speaker 1 (17:22):
What is the actual effect on jobs and labor? Like,
what's your read on what's actually happening here?
Speaker 2 (17:28):
So my read is that it is having a very
modest effect on productivity, probably a positive modest effect on productivity;
having a limited effect on jobs, except in a
small number of professions: customer service, engineering, soon media, where
it's going to be, you know, I think, taking away jobs.
Not in media yet, but it probably will, maybe, who knows,
(17:51):
but probably in engineering, maybe, who knows; definitely already in
customer service. And then the one really interesting indicator: I
do think that it is taking away work right now
for twenty-somethings. I think in the long run,
as companies change, as educational systems change, as the attitudes
that twenty-somethings have coming into the workplace change. Not even
in the long run, like, in the next two to three years,
that will change, because being AI-native will be such
(18:13):
a huge advantage, and having grown up and gone to
school learning these tools, you will be so much better
prepared for the workforce. Right now, if you're twenty-three,
it's hard, because the companies haven't really figured out what
to do with someone who knows a lot about AI.
The schools haven't really figured out how to train you
for a world in which AI is essential. But, like,
my oldest son is seventeen; by the time he finishes college,
I actually think the job market will be pretty good
(18:34):
for twenty-somethings.
Speaker 1 (18:35):
I want to ask you more about your lunch with
Sam Altman. What was the theme of the lunch? What's he
thinking of? Where is he today?
Speaker 2 (18:40):
So it was a bunch of journalists. The specific
quotes and participants were, I think, on background,
but the topics of conversation were open. He of course
was talking about the Google versus OpenAI competition. He
said that all benchmarks are useless; clearly Google has done
a really good job, but they'll figure it out and they're
going to catch up. And, this was interesting, he's like,
our real competition is going to be Apple. He's like,
(19:02):
I don't think that text is going to be the
main interface for AI. It's going to be some kind
of a device. And, you know, when pressed a little
bit further: it's a device that you'll have on your body.
It'll be in your ear, maybe a nose ring.
Who knows what it's going to be. Like, Jony Ive
is building the world's most beautiful nose ring. And what's
gonna be so interesting is that it will be listening
all the time and be at your service, and you'll talk
(19:24):
to it, right? And I'll have something in my ear,
and you'll ask me a question, and I'll, like, somehow
figure out how to communicate with my nose ring and then,
like, give you a better answer. It's gonna have to
be ambiently aware at all times, so it's also going to
have to be running on-device AI, which I
thought was interesting. Well, it can't be communicating with the
cloud, because then there's a huge privacy problem, and so
it will be some kind of, like, on-device AI
running on some physical hardware. And so he thinks that
(19:45):
the competition to build this next platform is OpenAI
versus Apple. Right? And they've got Jony Ive,
and they've got io, and Apple has all of its hardware expertise,
but it's clearly struggled with AI.
Speaker 1 (19:55):
And you were at that lunch wearing, I guess,
at least two hats, one being technology journalist and
commentator, and the other being CEO of The Atlantic and commercial partner
of OpenAI.
Speaker 4 (20:08):
How's the partnership going.
Speaker 2 (20:09):
The partnership is: they paid us for data on which
they wanted to train, and also to access new data,
and we then also serve as a partner as they develop
their new search engine. The working relationship is great. Like,
their search engine, that's okay for publishers. Like, it's developing
(20:29):
in a way that's not exactly what we want, but
it's all right. It doesn't plagiarize; like, all the things
that we were particularly worried about, it doesn't do. Maybe
that's a tiny bit due to our feedback. It's been
publicly reported that our partnership is a two-year partnership,
so that would mean it would be coming up next year.
I think, like, some of the partnerships are five years, some are
two years. The interesting question is whether OpenAI renews any
(20:50):
of these partnerships, right? And one of the things that
Altman talked about that suggests they might not is that
they think the value of human data has gone to
zero, because they can just use synthetic data to train
their models. I wish, if I could go back, I
could have signed partnerships with every single AI company that
we had even exploratory conversations with, because it is clear
that the value of training data was at its absolute
(21:11):
peak then and has massively declined.
Speaker 1 (21:13):
So we talked about two of your hats going into
the meeting: CEO of The Atlantic, technology journalist. A third
hat is board member of ProRata AI.
Speaker 4 (21:23):
Correct, and you had Bill Gross.
Speaker 1 (21:26):
On The Most Interesting Thing in AI podcast,
not too long ago.
Speaker 4 (21:31):
So who is Bill Gross? What is ProRata AI?
Speaker 1 (21:35):
And is it going to be possible to get compensation
for licensed data in the world that you've just described?
Speaker 2 (21:42):
The answer to the last question, quickly, is yes. Okay, now
let's see why. Bill is this amazing mad scientist, inventor,
who for the last forty years has built hundreds of companies.
You walk into his office and he's, like, desalinizing, you know,
water that he's, like, sucked out of the air of
his driveway in Los Angeles, and he's got, like, the
world's greatest Bostick sound system. He's built all these companies.
(22:02):
He in fact came up with the idea for ad-supported
auctions in search engines.
Speaker 4 (22:07):
Right.
Speaker 2 (22:07):
The guy is amazing, right? And he's built all these
great companies. He saw what was happening in AI, and
saw that the AI companies were stealing the data from
content creators and copyright holders, and he in fact had
been screwed in a case like that when he was
a young man. One of the things that the
AI companies say is, when we give an answer,
we just don't know the sources. And Bill's like, actually, no,
you can work backwards. You can sort of, like, run
(22:28):
it back through the model and say, what were the sources?
And so Bill built an AI model called ProRata
that attributes percentages of the data to the sources. So
you'll type in, you'll say, hey, you know, what happened
in the Supreme Court today? It'll say, your answer is fifteen
percent from Oz's podcast, fourteen percent from The Atlantic, right,
and then it will, like, share revenue. That's amazing, right,
(22:51):
the fact that you can show that you can do that.
I mean, maybe it's not perfectly perfect, because, again, we don't
really know how these models work. But he's shown
that you can build a system that does that, and
he's shown that you can now build a business on that.
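[Editor's note: the per-answer attribution and revenue split described here can be sketched in a few lines. How ProRata actually computes attribution isn't detailed in this conversation, so attribute_sources below is a hypothetical stand-in, and the fifteen and fourteen percent figures are simply the ones from Nick's example.]

```python
# Minimal sketch of ProRata-style attribution and revenue share.
# attribute_sources is a hypothetical stand-in for the real system,
# which works backwards through the model to estimate sources.
from typing import Dict

def attribute_sources(answer: str) -> Dict[str, float]:
    """Hypothetical: fraction of the answer owed to each source (sums to 1)."""
    return {"Oz's podcast": 0.15, "The Atlantic": 0.14, "other sources": 0.71}

def split_revenue(answer: str, revenue_usd: float) -> Dict[str, float]:
    """Pay each source its attributed share of one answer's revenue."""
    return {src: round(revenue_usd * frac, 4)
            for src, frac in attribute_sources(answer).items()}

print(split_revenue("What happened at the Supreme Court today?", 0.02))
# {"Oz's podcast": 0.003, "The Atlantic": 0.0028, "other sources": 0.0142}
```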
So in a fair and just world, Anthropic, OpenAI,
Google would all have operated like that from the beginning, right?
And they would be paying the people whose data they
(23:12):
trained on. They didn't do that, because it was hard
to do and it was costly. So Bill went out
and did it. And so ideally the AI companies will
license the technology from Bill, right, or will go along
with it. Now, what will force them to do that?
Because that would be a big change. One, shaming: like, Bill
could do enough podcasts that eventually the world is
(23:33):
like, Bill's right and everybody else is wrong. Two, courts.
Three, legislation. And then the most interesting one, which
is one of the most important things that happened in
tech last year: Cloudflare was like, you know what, we're
gonna make it really hard for the AI companies
to scrape people, right? Because
up until last summer, we basically put a sign
on our lawn. We're like, hey, don't scrape us, right?
And, you know, they all disobeyed it. And then we're like, okay, fine,
(23:55):
now we're using Cloudflare, which is, like, good at
tracking down Russian hackers and all that, so now you
really can't scrape us. And they're like, wait, now we can't,
we have no access to The Atlantic anymore, right? And
so we just turned it all, not all, we
turned it all off except for OpenAI and
a few others. So it's possible that over time the
balance of power shifts a little bit, because even though
synthetic data has replaced native human data in training
(24:16):
AI models, for new information about the world you still need
human data, right? So, what happened today? You can't
get that from a synthetic model, right? Maybe Grok
can, like, try to get it from a bunch of tweets,
but you actually need journalists, media companies. So that is
still valuable and will still be valuable. And so the
question is, can we get paid for that?
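[Editor's note: a quick gloss on the "sign on our lawn" Nick mentions. That sign is robots.txt, a published request that well-behaved crawlers check but that nothing enforces, which is why publishers moved to Cloudflare-level blocking. Here is a minimal sketch of the check a compliant crawler performs, using Python's standard urllib.robotparser; GPTBot is OpenAI's published crawler token, and the URLs are illustrative.]

```python
# How a well-behaved crawler consults the "sign on the lawn" before
# fetching a page. Honoring the answer is voluntary, which is the gap
# Cloudflare-level blocking closes.
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://www.theatlantic.com/robots.txt")  # illustrative URL
rp.read()

page = "https://www.theatlantic.com/technology/some-article/"
if rp.can_fetch("GPTBot", page):
    print("robots.txt permits GPTBot here")
else:
    # A request, not a lock: a scraper can simply ignore this answer.
    print("robots.txt asks GPTBot to stay out")
```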
Speaker 1 (24:36):
So the currency is not needing human created data to
make models function. It is having relevant data taken from
the real world where AI can't go, right, turned into
data that AI can read, yes, and then repurpose for users.
And that is what the compensation model will be around. Yes,
hopefully definitely that makes sense. What about next year, she'll
(25:00):
be cool in The Most Interesting Thing in Tech, twenty
twenty six edition?
Speaker 2 (25:03):
Well, the big, the most interesting topic will be explainability, right?
Like, I do think we're going to have some kind
of an incident next year where AI does something terrible,
and we're not going to know why it did it,
and that is going to lead to, like, a panic
about explainability. Like, something will go very wrong, right? I
don't know what it is, but, like, a plane will crash,
or, like, there'll be a two-minute stock market dip
(25:24):
because some AI-based trading platform has gone wild or
something like that.
Speaker 4 (25:29):
Right, Why do you believe it'll happen next year?
Speaker 2 (25:32):
Just, it's, it's, AI is getting so good. And it's
kind of like, GPT wasn't capable of doing something
like that before, right, and it wasn't used enough and it
wasn't integrated enough. Like, GPT-3 could, like, tell, like, a
bad bedtime story to a kid, right? And, like, GPT-4,
GPT-5 or whatever, five point one or six, is
going to have, like, sort of the power and the use.
I feel like something is going to go wrong, and
(25:54):
that will lead to a lot of introspection on explainability.
That's just one prediction. I also think that, like, it'll
start leading to real productivity. I think self-driving cars
are awesome. I'm kind of excited
about AR glasses. Like, lots of good stuff's gonna happen next year.
But that's, that's one.
Speaker 1 (26:10):
So twenty twenty five was owls and ocelots, and twenty
twenty six will be, the real world will.
Speaker 2 (26:14):
Be the real world implication of owls and oslots.
Speaker 1 (26:17):
That's fascinating. Yeah, I thought you were gonna say something different.
What do you think of the second... Well, last year
you said something quite prescient, which was that the value of
data is hard to
Speaker 4 (26:28):
Predict in future.
Speaker 1 (26:30):
I mean you were talking particularly about how robots looking
at videos of people peeling carrots might become a very
valuable source of robot training data.
Speaker 2 (26:39):
It did, it did. You were right. I should have
invested in carrots.
Speaker 4 (26:44):
Talk about world models and non-word-based learning.
Speaker 2 (26:50):
So this is, okay, so this is one of the
more interesting things too, right? So I don't know if
this is a prediction for twenty six, maybe it's a
prediction for twenty seven, but I do kind of think
that, like, the world where we think of AI as
a text box changes, right? So, like, Fei-Fei Li is
building this company, and she's trying to build this thing
called spatial intelligence, where you're building AI that isn't just
(27:11):
trained on, like, understanding language and parsing it. It's based
on seeing the world, understanding the world, figuring out the
rules of the world. Like, in some ways, like, an
AI model is much more intelligent than a child, right?
It has much more vocabulary, knows a lot more about
the Spanish Civil War than a five-year-old. But
if you have it, like, try to create a video
that shows what happens when I do this with my hand, he
Speaker 4 (27:31):
Dropped a pen, right, I dropped a pen, Right.
Speaker 2 (27:35):
The AI doesn't really figure that out. Like, it doesn't
understand it. Like, you can have it watch a lot
of video, you can have it read a lot of text,
and it doesn't quite understand what motivates, you know, what
is actually causing the world to operate. It has this,
like, very narrow intelligence, because it has
learned in this very simple way, like a child who lived
in the dark and just was, like, read to for
a long time.
Speaker 1 (27:54):
It learned from how the history of humanity
has described the universe, rather than from observing the universe.
Speaker 2 (28:00):
Rather than being in the universe. And so that leads to
all of these gaps, right? And you can see it
in some of the hallucinations it makes. You can
certainly see it in the early videos, where it just
doesn't understand how things should work. And, like, so in
some ways, like, AI models understand the whole history of the world,
but they're also kind of, like, less intuitive than a squirrel, right?
And so could you somehow teach an AI model, like,
(28:22):
how the world works? Then what are the implications of
how you build it? Because then you can start, you
think about it: okay, if your challenge is you
want to build robots, right, and you want to build
robots that help take care of elderly people, and you
want to use AI to do that, the path we're
going down right now is, like, you read all the
text that's ever been put on Reddit, right, like, develop
a whole bunch of rules from that, and then we'll
tell you how to operate. Well, no, really, you should
(28:44):
be teaching the robot, like, not just what happens when
I drop the pen, but also about the emotions of
the old person when she turns her head and squints
a little, right? Like, and an AI model
can't figure that out, and a robot trained based on
our current AI models can't. But maybe a robot trained
in, like, this wholly different way, which is what, you
know, LeCun is working on, and Fei-Fei Li is working on,
(29:05):
and others are working on, maybe that completely supplants whatever comes
out of the large language
Speaker 1 (29:11):
Models. Nicholas Thompson, thank you. Thank you, Oz, that
was really fun.
Speaker 4 (29:37):
That's it for this week for Tech Stuff.
Speaker 1 (29:39):
I'm Kara Price and I'm Oz Woloshyn. This episode was
produced by Eliza Dennis, Tyler Hill,
Speaker 4 (29:43):
And Melissa Slaughter.
Speaker 1 (29:45):
It was executive produced by me, Kara Price, Julia Nutter,
and Kate Osborne for Kaleidoscope and Katrina Norvell for iHeart Podcasts.
Paul Bowman is our engineer and Jack Insley mixed this episode.
Kyle Murdock wrote our theme
Speaker 4 (29:58):
song. Please rate, review, and reach out to us at
techstuff podcast at gmail dot com.
Speaker 3 (30:03):
We want to hear from you.