
February 2, 2025 37 mins

Newt talks with Dean Ball, Research Fellow at George Mason University’s Mercatus Center about the rapid rise of the Chinese AI app DeepSeek, which quickly topped the Apple App Store downloads chart. DeepSeek’s success impacted the NASDAQ Composite Index and significantly affected Nvidia's stock. Ball provides insights into the development and implications of DeepSeek. He explains the app's origins, its technological advancements, and the broader context of the AI race between the US and China. Their conversation also covers the potential regulatory challenges, the importance of industrial infrastructure in maintaining a competitive edge in AI, and the future of AI integration in various sectors.

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
On this episode of Newt's World: the Chinese artificial intelligence
app called DeepSeek rocketed to the top of the
Apple App Store downloads chart over the weekend after its
release last week by a Chinese startup. DeepSeek offers
similar functionality to OpenAI's popular ChatGPT chatbot, answering

(00:26):
questions and generating text in response to users' questions. The
immediate success of DeepSeek sent the tech-focused Nasdaq
Composite Index down by three percent on Monday, with US
chipmaker Nvidia down almost seventeen percent. So how will
the United States compete with China in the artificial intelligence race?

(00:48):
Here to discuss China and artificial intelligence, I'm really pleased
to welcome my guest, Dean Ball. He is a research
fellow at George Mason University's Mercatus Center and author of
the Substack Hyperdimensional. His work focuses on artificial intelligence,
emerging technologies, and the future of governance. Dean, welcome and

(01:21):
thank you for joining me on Newt's World.

Speaker 2 (01:23):
Mister speaker, thank you so much for having me. It's
a pleasure to be here.

Speaker 1 (01:27):
This is unbelievably timely, and I'm really grateful you could
fit us into your schedule. Can you talk about the
history of DeepSeek, when it began and why it
took the world by storm this last weekend?

Speaker 3 (01:38):
Absolutely, yes.

Speaker 2 (01:40):
So DeepSeek is actually a subsidiary of a Chinese
hedge fund called High-Flyer. DeepSeek was created in
May of twenty twenty three, so it's pretty recent. High-Flyer,
like a lot of quantitative hedge funds, has
been using AI for a variety of different things for
a long time, so they were already large-scale purchasers

(02:02):
of the high-end computing hardware, the GPUs made by Nvidia,
that are used to train and run large-scale AI models,
and so DeepSeek has had a lot of very talented
researchers working there for at least the last year. They've released
models and research papers that people in the machine learning
community have been paying attention to and impressed by for

(02:26):
a long time. I myself first wrote about DeepSeek,
I think in May of twenty twenty four. It's been
noticed by AI experts for a long time, and a
couple of things happened in the last few months that
are relevant, really the last one month. In late December,
DeepSeek released a model called V3, and this

(02:47):
is like a language model similar to ChatGPT or
Google Gemini, and it was particularly notable because the cost
of training it was quite low. Now, I think there's some
nuance about that figure that we can get into, about
why maybe it's not as low a figure as some
people think, but certainly it was an impressive figure with

(03:09):
some very impressive innovations that they were also quite open about.
They shared exactly how they achieved these efficiencies, which is
not something that we see from the frontier labs in
the United States, most of them at least. And then
about a week ago they released a model called R1,
and R1 was built on top of V3,
and it's a reasoning model. So these models, it's

(03:33):
a new style of model that was pioneered by OpenAI
last September. OpenAI's version is called o1.
DeepSeek's is called R1, and what it basically
does is think about the question you've given it for
a while before it answers you. And it does this
in something called a chain of thought, where basically it's

(03:53):
writing notes to itself.

Speaker 3 (03:54):
Almost, is how you can think about it. Or

Speaker 2 (03:56):
sort of like writing down: Okay, how might I solve this? Oh look,
I noticed I made a mistake there. Okay, this
approach isn't working.

Speaker 3 (04:03):
Let me go try something else. That kind of thing.
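A minimal sketch in Python of the "notes to itself" idea described above. The generate() function is a hypothetical stand-in for any language-model call, and the prompt-and-marker format is an illustrative assumption, not how R1 or o1 actually work internally:

    # Illustrative sketch only: generate() is a hypothetical stand-in for calling
    # some language model. Real reasoning models like R1 or o1 learn to produce
    # their chain of thought through training, not via a prompt trick like this.
    def answer_with_chain_of_thought(generate, question: str) -> str:
        prompt = (
            "Work through the problem step by step. Write out your reasoning, "
            "note any mistakes you catch, then give a final answer after the "
            "marker 'ANSWER:'.\n\nQuestion: " + question
        )
        output = generate(prompt)               # the model "writes notes to itself" first
        _reasoning, _, final = output.partition("ANSWER:")
        return final.strip() or output.strip()  # fall back to raw output if no marker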

Speaker 2 (04:07):
And this is a capability that publicly only Google, OpenAI,

Speaker 3 (04:12):
and now DeepSeek have achieved.

Speaker 2 (04:14):
So it's definitely quite impressive to achieve it, though not
particularly expensive to achieve it. And that model, and I
think the app that they released for the iPhone, combined
ended up sort of creating a sensation, you know, that
nobody quite predicted. I myself, back in September, predicted

(04:36):
that a Chinese company would replicate the o1 model from
OpenAI within a few months. I would have never
predicted that it would have caused Nvidia's stock to drop
by nearly twenty percent.

Speaker 3 (04:46):
That was quite a shock to me.

Speaker 2 (04:48):
It's a bit of a surprise sensation in that regard,
and that's basically how we got here.

Speaker 1 (04:53):
As I understand it, the Chinese went to a very
different system architecture and somehow are able to do faster,
better algorithms with much simpler chips that are much less expensive.
I mean, is that a reasonably accurate statement?

Speaker 3 (05:14):
There are a few things I would want to correct there.
The first thing is about the chips. I think that's important.

Speaker 2 (05:19):
DeepSeek most likely is using somewhere between ten and fifty
thousand units of an Nvidia chip that has basically the same
performance as what OpenAI or Anthropic or Google are using.
That chip was sold into the Chinese market as a

(05:40):
result of a flaw in the twenty twenty two export controls,
which was the first version of the Biden
administration's export controls on high-end AI compute. So they
actually used chips that are pretty similar, in roughly similar
quantities, to what frontier labs have. So their computing power
is significant here. And that's one thing that a

(06:01):
lot of people are mistaken about. All evidence suggests that
they had significant computing resources at their disposal. And the
other thing is about the sort of efficiency approach that they used.
They're still using the same basic architecture that everybody else
is using, which is called the transformer. That is a
machine learning architecture that was invented in twenty seventeen at Google,

(06:22):
and that architecture has been improved many, many times
by many different people, and they're still using that same
basic architecture. They made some novel improvements to it, but
basically over the last few years, the trajectory we've seen
is that you can get about a several hundred percent increase
in efficiency from making this

Speaker 3 (06:44):
architecture and all the various algorithms that are

Speaker 2 (06:47):
involved more efficient. So American frontier labs have been making
those efficiency gains. The price of GPT-4 has dropped
something like ninety-nine percent in the last fifteen months.
So I would say that in terms of the efficiency
gains that DeepSeek got, really it's actually about on trend.
It's the trend that the whole machine learning, the whole

(07:09):
frontier of the machine learning world is on. The thing
that was surprising about it is not so much the
efficiency itself. It's a Chinese company achieving that level of efficiency,
but again you can do that through having very intelligent engineers,
and they certainly have that just like we do.

Speaker 1 (07:26):
If, in a way, it's not a dramatic change, why
was Nvidia hit so hard?

Speaker 3 (07:32):
It's a really good question.

Speaker 2 (07:34):
I know some people who think that part of what
was driving Nvidia's stock price down on Monday was
the tariffs that Trump was going to announce he was
thinking about later that day, which he did, so there's
some suggestion about that. I also think it's a good
old-fashioned overreaction, though the reality is that most people

(07:55):
are not used to technology that moves this quickly, right?
I mean, think about this fact: algorithmic
efficiency improvements can get you four to five hundred percent
increases in efficiency per year. On top of that, the
fastest chips get something like one hundred and fifty to

(08:15):
two hundred percent faster every year. And on top of that,
the companies are buying these chips in vastly greater

Speaker 3 (08:25):
quantities every year.

Speaker 2 (08:27):
So the pace of progress and speed here is truly exponential,
and that is just something that I think we aren't
used to in industrial outputs. We're not used to business
moving with this kind of speed. But that's just where

(08:48):
we are. So to people who watch the
AI field closely, I think that all of this was
somewhat less of a surprise, with some exceptions.
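Taken at face value, the figures above compound quickly. A rough back-of-the-envelope sketch in Python, using conservative readings of the cited numbers and treating fleet growth, which Ball does not quantify, as an assumed doubling per year:

    # Back-of-the-envelope compounding of the trends described above.
    # ALGORITHMIC_GAIN and CHIP_SPEEDUP use conservative readings of the cited
    # figures; FLEET_GROWTH is an illustrative assumption, not a quoted number.
    ALGORITHMIC_GAIN_PER_YEAR = 4.0   # "four to five hundred percent increases in efficiency"
    CHIP_SPEEDUP_PER_YEAR = 1.5       # "one hundred and fifty to two hundred percent faster"
    FLEET_GROWTH_PER_YEAR = 2.0       # assumed: chips bought "in vastly greater quantities"

    def effective_capacity_multiple(years: int) -> float:
        """Combined effective AI capacity relative to today, after `years` years."""
        per_year = ALGORITHMIC_GAIN_PER_YEAR * CHIP_SPEEDUP_PER_YEAR * FLEET_GROWTH_PER_YEAR
        return per_year ** years

    for years in (1, 2, 3):
        print(f"{years} year(s): roughly {effective_capacity_multiple(years):,.0f}x today")

Even with these conservative assumptions, the combined factor works out to roughly 12x per year, which is the exponential pace being described.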

Speaker 3 (08:58):
And I think, though, that

Speaker 2 (09:00):
there are a lot of people on Wall Street who sort
of like the Nvidia story and are aware of AI's
potential in general, but maybe are just not aware of
how quickly things are progressing here. And so in that sense,
I think this is a bit of a wake-up call,
should be a bit of a wake-up call, for
everybody that, yes, look at this technology, look at how

(09:20):
much faster it's getting. And that's a story that's going
to play out in both the United States and China.

Speaker 1 (09:25):
Are Nvidia's chips made in Taiwan? Are they made
in the US?

Speaker 2 (09:29):
For the most part, they're made in Taiwan. There is
the Arizona foundry, the semiconductor manufacturing factory that TSMC, the
Taiwanese company, built in Arizona. That factory
is not quite at the very cutting edge, but it's
pretty close to the cutting edge, and they will be
producing chips. The rumor is largely their chips are going

(09:52):
to be for Apple and Nvidia. So there will be
some chips made in the United States, but predominantly we're
talking about Taiwan.

Speaker 1 (09:58):
The announcement last week of the group coming together to
put in supposedly five hundred billion dollars in artificial intelligence.
Is that mostly PR, or do you actually expect to
see an enormous shift of energy and drive into that area?

Speaker 2 (10:16):
I think with the five hundred billion dollar number, there are definitely
questions about how they're going to get to that level
of investment. I don't think any of the investors that
were mentioned there have five hundred billion dollars, you know,
in the bank. But I think the money's out there,
and I think the promise is there. I basically believe
the one hundred billion dollar number. And on top of that,

(10:38):
Meta announced sixty five billion in data center capital expenses
this year. Microsoft is going to spend eighty billion on
data centers. I would expect similar numbers from Amazon and Google.
So all told, over the next four years, it wouldn't
shock me to see well over half a trillion dollars

(10:58):
invested into data centers in the United States.

Speaker 1 (11:02):
I was surprised. I saw one of these charts of the
day that showed the number of data centers by country, and
we are overwhelmingly dominant. Is it really important, or
is it just a temporary artifact?

Speaker 2 (11:16):
Yes, it is definitely true that America is leading the
world at this industrial infrastructure build out of AI compute,
and I think it's going to be hugely important.

Speaker 3 (11:26):
We're not used to that in America.

Speaker 2 (11:28):
The Chinese built all the high-speed rail, they built all
this infrastructure throughout the developing world. They spent enormous amounts
of money on that over the last decade or two.
We are not used to leading on building things in
the physical world, but in this case we actually are.
And I think that's going to be hugely valuable because,
despite what people are saying about the cost of this

(11:51):
model having been so cheap to train, these efficiency gains
that a company like DeepSeek is achieving, they mean
that you can train a much better model. So if
you were going to spend a billion dollars on training
a model, let's just say, and you found something that
gets you a four hundred percent improvement, well then you

(12:14):
can train a much better model for that same billion dollars.
And so ultimately this is a positive story. It's a
positive story that the trajectory of AI is going to
be really rapid and it's going to have quite profound capabilities,
I think, and America will be in pole position not
just to build those models, you know, not just to
train them, but it's also very computationally expensive to run

(12:38):
those models as well. We will be in a great
position to be able to run those models so that
individuals and businesses all throughout our economy can use them.
There will be a gigantic number of uses throughout the economy. It will be businesses
and individuals and governments that are using AI to do
all kinds of things: automate business processes, cure diseases, I

(13:03):
mean all kinds of things, some of which we
can't even imagine yet, and those uses are just going
to be enormously, enormously valuable, and we will be in the
position to lead in that regard.

Speaker 1 (13:14):
Does that also imply that the capabilities of artificial intelligence
will be permeating individuals, governments, and businesses much
faster in the US?

Speaker 2 (13:27):
Well, I hope so is what I would say. It
means that we'll have the industrial capacity to do that.
It doesn't necessarily mean that we will do it, because
we could make it harder for businesses to adopt AI
through bad regulation.

Speaker 3 (13:46):
There are about a dozen states, and counting,

Speaker 2 (13:50):
that are considering laws that I think would not
just have the effect of creating a patchwork of regulations,
which is bad enough, but I think would also particularly
impose a lot of compliance burdens, not on AI developers actually,
but on businesses trying to adopt AI. They'll have to
adopt risk management plans and do these things called algorithmic

(14:13):
impact assessments. And I think it could end up being
a huge compliance burden, a huge liability risk, and I think
it could deter, especially among larger businesses,
investment into AI and make our uses of
AI somewhat less creative and bold. So we will be
in the position in terms of the technological capability, and

(14:36):
the question is whether our policymakers will allow that to occur.

Speaker 1 (14:40):
So, in a sense, the nature of AI would almost
demand federal preemption to have one particular set of rules
for the whole country.

Speaker 2 (14:51):
I wish, I really wish. I've written a lot about
federal preemption. I think it's a very hard thing to
do politically, because you do need sixty votes in the Senate.
Part of the reason America succeeded with the adoption of
the Internet is because Congress led with light touch and

(15:11):
pro innovation laws that preempted a wide range of conceivable
state policies. And I really do think that we should
do something similar here. And I think that's not just
an economic competitiveness issue. I think it's frankly a national
security issue.

Speaker 1 (15:41):
President Trump called DeepSeek a wake-up call
for US industries, and I get the sense that he
wants to compete. He said to the House Republicans, we
need to be laser-focused on competing to win because
we have the greatest scientists in the world. So, in
a sense, I suspect that Trump may be
open to the idea of federal preemption to create one

(16:04):
set of rules for the whole country.

Speaker 2 (16:06):
I think the administration may well be open. I think
the one thing that a lot of people can miss
with this is people call this the AI race, and
that's fine, you know. I think we should be competitive
and we should be pedal to the metal in that regard.
But there's not some finish line that we're racing to.
It's not like the nuclear bomb, where you either have

(16:28):
it or you don't, and you're kind of racing. The
models are going to get better at a very significant pace,
but the existence of a very capable model is not
in and of itself

Speaker 3 (16:43):
winning.

Speaker 2 (16:44):
The country that wins in AI is going to be
the country that has the most wide-ranging and creative
and productivity-

Speaker 3 (16:53):
enhancing uses of AI. And so I

Speaker 2 (16:57):
think if we're not careful, there's a very good chance
that, yes, America makes the leading models and then the
Chinese copy that technology in various ways, or maybe even
engage in some intellectual property theft of various kinds, and
then they duplicate those capabilities and they actually use them

(17:18):
more efficiently than we do, and they use them more creatively,
and they disperse them throughout their economy more than we do.
That would actually be very consistent with history if you
look at the Early Industrial Revolution. The UK led technologically
in the Early Industrial Revolution, but it was America that

(17:38):
took those ideas that were invented in the labs on
the other side of the Atlantic, and we deployed them
at scale here in the US and we built the
big factories. And something very similar could happen if we're
not careful. And I think that that will involve not
just sort of avoiding bad regulatory outcomes on AI. It'll

(18:00):
actually involve regulatory reform to make it easier to do
things in the physical world and easier to get cutting
edge medicines approved and all that kind of stuff. There's
just a ton that we need to do to make
it so that we can get the best out of
this technology.

Speaker 1 (18:16):
They talk about an open weight model. What does that mean?

Speaker 2 (18:21):
So the weights of an AI model, basically you can
think of them, and this is a simplification a little bit,
but it's basically like a spreadsheet full of numbers, billions or
sometimes trillions of numbers, and those numbers constitute the functionality
of that model.

Speaker 3 (18:42):
So an open weights model

Speaker 2 (18:45):
is a model where the developer releases those numbers to
the public for free, for anybody to download or modify
as they see fit.
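As a concrete illustration of what "releasing the weights" means in practice, here is a minimal Python sketch using the Hugging Face transformers library. The repository id is a placeholder, and actually running a frontier-scale open-weight model this way would require serious GPU hardware:

    # Sketch: downloading an openly released model's weight files and running it.
    # The repo id below is a placeholder, not a specific real model.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo_id = "some-org/some-open-weights-model"   # placeholder repository id
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id)  # pulls the weights locally

    inputs = tokenizer("What is an open-weights model?", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=80)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))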

Speaker 3 (18:56):
That mirrors the history of open source software.

Speaker 2 (19:00):
Kind of. Open source software is software where the code
is released publicly, and for a variety of economic and
technological reasons, open source software has become essential to the
digital economy. There's open source software running on both of
our computers right now. Almost every website you'll visit in

(19:23):
the world is enabled by open source software. There's open
source software at the heart of practically everything in the
digital world. And so the question is, is there going
to be something similar with AI? There are people who
make closed-source software and make a lot of money

(19:43):
doing that. Apple would be a good example, Microsoft would
be a good example. But the question is, will there
be a similar kind of role for this freely available,
free-to-modify AI, these AI models that are free
to modify and super flexible? Will there be a similar
role for that in the future economy? And I think

(20:04):
the answer is probably yes.

Speaker 1 (20:06):
There must be an enormous number of people now already
committed to working on artificial intelligence in its various forms. Man,
it's a much bigger system than I thought it was.

Speaker 2 (20:18):
Yeah, I think it's a big industry. The funny thing
about it is that there are not that many live players.
I'd say in the US it's Meta, Microsoft, OpenAI,
Anthropic, Google, xAI from Elon Musk, a handful of others, but

(20:39):
those are the really big players in terms of developing
these models, and in China there's a few big ones
as well. It is big, and there is a ton
of investment that's basically already in the cards. These companies
that are building the data centers, for example, they plan
their capital expenses out years in advance, so we kind

(21:01):
of already know what they're planning to spend, and so
certain levels of investment and expansion are just kind
of already locked in.

Speaker 1 (21:07):
I was almost startled by the capital investment for the biggest
of these worldwide companies; each company is three or
four times the size of the NASA budget. Yes, the
amount they're investing, the number of people they're hiring, the
scale of the computational power they're building, it's really astonishing,

(21:29):
and I don't think there's a single company in Europe
that's in this league.

Speaker 3 (21:34):
No, no, there's not a single company in Europe that's
in this league.

Speaker 2 (21:37):
The industrial buildout that is happening right now for AI
is certainly the biggest peacetime industrial buildout we've seen
in our lifetimes, quite possibly the biggest ever, and in
the fullness of time, it might become larger even
as a share of GDP than wartime buildouts.

Speaker 1 (21:59):
There's a lot of concern about China in general, and
now I think with DeepSeek there's also concern about China
challenging us on advanced artificial intelligence. How serious a threat
do you think it is?

Speaker 3 (22:12):
I think it's a fierce competition that we're in.

Speaker 2 (22:15):
It's a competition that's going to play out in a
lot of different dimensions. We are in an overall economic
and geopolitical competition with China, and that competition is going
to be defined by innovations in science and technology, and
at the foundation of those innovations in science and technology,
I think, increasingly will be AI. There's

(22:39):
also going to be a whole digital infrastructure that gets created,
and the question is going to be whose technology sets
the global standard? Will American technology set the global standard
or will Chinese technology set the global standard? And I
think we really want to do everything we can to

(23:01):
ensure that American technology does set that global standard. There's
a lot of ways in which American tech already does.
We've been so successful over the last fifty years that
in a lot of ways, I think many Americans are
almost blind to the ways in which we have set
the standard for how technology is used and deployed throughout
the world. For us, it's like white paint, it's just there.

(23:25):
We don't really think about it. But for China, which
for the most part lacks that soft power, they perceive
it quite readily and they want it.

Speaker 3 (23:36):
And that's a part

Speaker 2 (23:36):
of what the open weight and open source dynamic is about.
That's why China has a government-level, a CCP-level
priority of beating the US in open source. The question
is not just who innovates faster, but whose technology is
available more widely around the world.

Speaker 1 (23:56):
And what's the biggest Chinese advantage in that competition?

Speaker 2 (24:01):
I think the biggest single advantage is their industrial and
mass manufacturing capacity. AI is going to enable so many
new kinds of inventions, and one way to think of
AI is like it's an invention for inventing inventions. So
then the question becomes, Okay, well, we have this new
way to massively accelerate the rate at which new things

(24:23):
are discovered and new things can be invented, and that's great.
But who can actually make those things? Who can mass
manufacture them, who can figure out how to make them cheaply?
And that's not going to just be a purely AI question.
There's going to be an extent to which that depends
on who has the factories, who has the supply chains.

(24:46):
And that's an area where I think the Chinese have
a quite large advantage over us in not all, but
many important domains.

Speaker 1 (24:55):
Now, the other side of that is, it seems to
me that the sheer number of innovative efforts here is
the cost of developing the computational power for artificial intelligence
so great that only really large companies can compete.

Speaker 2 (25:12):
I think certainly to build the biggest foundation models as
they're called, is going to be quite capital intensive. At
the same time, one of the things that deep Seek
shows us is that because those costs of developing that
foundation model go down so quickly over time, smaller players

(25:34):
can catch up relatively quickly. They cannot necessarily catch up,
but at least if you see a certain
capability today, you should expect that a much wider range
of actors will be able to achieve that same capability
in eighteen months. But another way to think about it
is these really expensive foundation models are just that: they're

(25:55):
foundation models, and there are so many things that can
be built on top of them, and I think that's
really where the entrepreneurship and startup opportunity is going to be.

Speaker 3 (26:08):
Let's take legal services. One of the things that has
really impressed me about the most cutting edge models today
is that they're getting really good at legal reasoning, and
so I can ask them really complicated questions about tort
liability and reasoning about various public policy outcomes and get
answers that are starting to feel like white shoe legal analysis.

(26:35):
But it's not obvious to me that an existing white-shoe
law firm is going to be the company that
delivers that capability to the world. Instead, I think, well,
maybe there's a new kind of startup that can exist
that can provide Skadden Arps-level legal analysis to
everyone for a far lower cost. And that's one of

(26:56):
a billion startups that you can imagine.

Speaker 1 (26:58):
One of the examples of the scale of at least
semi-artificial intelligence that is actually occurring, and I think
almost nobody's covering it, is that there are now, I think, three
cities that have agreed to have experiments with taxis that
have no driver. Well, I don't think most people have
thought through, once that becomes so reliable that you don't

(27:23):
worry about it, you're going to have an entire revolution
in the nature of cars, and you're going to have
a revolution as they already have at Tesla. I mean,
the sheer volume of data that Tesla absorbs every day,
because I think every Tesla is basically an information-gathering device.
I tell people it's actually a computer company with cars,

(27:43):
it's not a car company with computers.

Speaker 3 (27:46):
I think that's exactly right.

Speaker 2 (27:48):
There's going to be a certain extent to which the
companies that have the lead are going to be the
ones with the highest quality data and also the access
to the best compute and all that kind of stuff.
But I think that you can totally imagine a world
in which a company like OpenAI, their explicit goal
is to develop a system within the next few years

(28:09):
that is better than humans at all economically valuable tasks. Now,
I think you can have a lot of quibbles with
that definition, and like maybe a better way to think
about it would be all current, or like most
current, economically valuable

Speaker 3 (28:26):
tasks, but new things will become economically valuable for humans,
so I think you can quibble with that definition.

Speaker 2 (28:32):
But either way, it's a very profound idea of having
systems that are so capable, and you can imagine a
world where like, well, we have the everything machine and
it can do everything. So we're just going to, as a
company do everything. But the question will be, well, that
means that you're regulated by every regulator in the world.
You're subject to every single law, you're subject to all

(28:53):
of the liability that every single actor in the economy
is subject to.

Speaker 3 (28:59):
Do you want that as a single firm? Can you do that?

Speaker 2 (29:02):
There's going to be an extent to which, yes, the
foundation models get better and better, but there's also going
to be people that are uniquely positioned to sort of
serve particular industries and particular needs in the economy using
the foundation models, but those startup firms will tailor themselves
to the particular regulatory requirements, legal requirements, liability, customer needs,

(29:24):
et cetera, economic dynamics of that industry.

Speaker 3 (29:27):
So I think that's probably the way it will play out.

Speaker 1 (29:45):
Probably the largest set of stored data is Google's,
so in terms of training artificial intelligence systems, they ought
to have a huge potential advantage.

Speaker 2 (30:00):
They do have a big advantage there. One interesting question,
actually: Google owns YouTube, and a lot of these
companies are training on video data, and it's actually an
interesting question of whether Google is actually less able to use
YouTube data because they own it, and there are therefore

(30:21):
contractual obligations that they have made with their users in
the form of the YouTube terms of service. That actually
might make it harder for Google to use it, and make it
easier for, again, particularly a Chinese company. China has access
to YouTube too. They can take all the YouTube videos
and train on them, and there might not be the
same intellectual property concerns that they have.

Speaker 1 (30:42):
And they have all the TikTok data.

Speaker 3 (30:46):
Yeah, that's right.

Speaker 2 (30:47):
One way to think about this is, it's like
an old economic lesson. There's an economist I learned
about this concept from named Friedrich Hayek, a twentieth-century
economist who talked about how information is spread unevenly
throughout the economy. Not just different firms, but also different
individuals have differential access to information. So there are all these asymmetries.

(31:11):
And the thing that might become truly valuable in the
future might be the information that's proprietary to a firm.
The things that only you know, or the things that
only your company knows, those might be the things
that are valuable, because the AI models, in some sense,
might know everything that's public. There's this common thing that
people say about how all the information in the world

(31:35):
is on the Internet, and to that, I say, you
haven't tried to find a lot of things out because
there's a lot of information that's not on the Internet.
For example, if you want to get something done in Congress,
the internet's not really going to tell you how to
do that. The Internet doesn't tell you which staffers have
relationships with one another, like friendships with one another, where

(31:55):
there are political tensions. Like, how do you actually kind
of operate the machinery of Congress? You had a lot
of information as Speaker of the House that was in
your brain that no one else in the world knew,
and that kind of thing might just become even more
valuable in the future.

Speaker 1 (32:12):
Somebody made the point, we're gathering up all this data,
but it's actually data that's electronic, and that's not
the same as the real world. And we don't yet
have a very good model for gathering up data about
the real world as opposed to the mathematical representation of
that world. Does that make sense?

Speaker 2 (32:31):
Yeah, I think that's true. It
varies by field. We have lots of good data from biology.
A lot of it is hard to get, though, so
it's like, in total, there's a ton of good data
about biology, but how do you get access to it all?
Companies like Tesla or Waymo, which is a subsidiary of Google,

(32:52):
another self-driving car company, they've spent a decade collecting
naturalistic data from real-world streets of how
people drive and what that's actually like. And we're just
now getting to the point that that enormous volume of
data can be used to make pretty darn good self

(33:12):
driving cars. But it took a long time and so
when you think about things like robots, for example, a
lot of people are excited about humanoid robots that could
live in your house and cook meals for you. Well,
that's going to take a ton of data. The Internet
doesn't have tons and tons of point of view videos
of people doing chores around their house. Someone's going to
have to go out and collect that, and that's going

(33:32):
to be a non trivial exercise.

Speaker 1 (33:34):
Do you think, as AI continues to develop and both continues
to become more powerful and more intrusive, do you think
there's a point where sort of consumer reaction may slow
it down and lead to rejection, or do you think
it's just inevitably going to continue growing?

Speaker 2 (33:52):
I think there will absolutely be social bottlenecks that
will slow down the diffusion of the technology into the world.
Even today, we've had the Internet now for thirty years,
and a huge number of customer service things can be resolved
without picking up the phone and calling a customer service line.

(34:13):
But pretty much every company still has to maintain a
customer service line because there's a lot of people who
just prefer to talk to a person. That's going to
continue to be true. I wouldn't be surprised, in fact,
if AI systems become so capable that sort of
a broader movement coalesces that's kind of a backlash to
all of this, because I think there will be people
who find the changes to be quite shocking, and I

(34:35):
think that'll slow down the diffusion of

Speaker 3 (34:37):
the technology all over the world.

Speaker 2 (34:39):
At the end of the day, I think it's an inevitability,
just like the Internet was, just like the smartphone was.
These things will happen, but they will take time, and
that's one of the reasons there are people I know
in the AI community who think that just a few
years from now, the whole world is going to be
utterly transformed and it's going to look like something out

(34:59):
of science fiction in just five years because of how
capable the systems will be. And I would say that
even if the systems are that smart, there's just absolutely
no way something like that is going to happen, because
there will be what I suppose you could call in
the field sociotechnical bottlenecks, which you might put more simply

(35:20):
as people not wanting to adapt that quickly, and I
think that's actually healthy.

Speaker 1 (35:26):
We're going to come back to you again and again,
I think, over the next couple of years. You are
right in the middle of a field that I
think is going to continue to accelerate and deepen and
be even more important. And Dean, I really want to
thank you for joining me. Our listeners can follow the
work you're doing by subscribing to your Substack, Hyperdimensional, at
substack dot com slash at Dean W. Ball, and by

(35:49):
visiting the Mercatus Center's website at mercatus dot org. And
we will have a link to your Substack on our
show page for Newt's World. So thank you very, very much for educating us.

Speaker 2 (36:00):
Mister Speaker, thank you so much. It's been
an honor.

Speaker 1 (36:07):
Thank you to my guest, Dean Ball. You can get
a link to his Substack, Hyperdimensional, on our show page
at newtsworld dot com. Newt's World is produced by Gingrich
360 and iHeartMedia. Our executive producer is Guarnsey Sloan.
Our researcher is Rachel Peterson. The artwork for the show
was created by Steve Penley. Special thanks to the team

(36:27):
at Gingrich 360. If you've been enjoying Newt's World,
I hope you'll go to Apple Podcasts and both rate
us with five stars and give us a review so
others can learn what it's all about. Right now, listeners
of Newt's World can sign up for my three free weekly
columns at Gingrich 360 dot com slash newsletter. I'm Newt Gingrich.

(36:48):
This is Newt's World.
