Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Cool Zone Media.
Speaker 2 (00:04):
Hello and welcome to Better Offline. I'm your host, Ed
Zitron. Not really going to dawdle too much on
the intro today, I'm too excited about the interview you're
about to hear. Today's guest is a globally known economist,
(00:27):
the author of Why Nations Fail and Power and Progress,
MIT's Daron Acemoglu. Okay, so
a term that you've popularized, not necessarily invented, is creative destruction.
Do you mind explaining it for the listeners?
Speaker 3 (00:43):
Oh? Yeah, I mean I definitely did not invent it,
and I think many other people deserve much more credit
for inventing it and making it work. It's this idea
that goes back to Joseph Schumpeter, famous Austrian economist who
spent most of his career or the most important part
of his career, at Harvard, who emphasized that in capitalist growth,
(01:07):
you will have new firms taking market share away and destroying
old firms, and as a corollary of that, new technologies
taking market share away and driving out old technologies. And
he understood this was a difficult and tumultuous process, but
(01:30):
also believed that that was the essence of, sort of,
capitalist growth. So it's one of these things that is
a fact of life in a market process, but different
types of social, economic, and political reactions to it are natural,
and how you react to it is going to have
(01:53):
various effects both on growth, what type of growth, and
its distributional effects.
Speaker 2 (01:58):
Right. So something I've read about and spoken about a
lot as well is this idea of the rot economy,
which is how growth has overtaken most of the
modern markets. And I'd argue, at least in the
creative destruction field, tech has stopped really innovating. It doesn't
feel like they're creating things to create new jobs, to
(02:19):
create new markets. I'm wondering how you feel looking
at the general tech industry.
Speaker 3 (02:24):
I am a critic of the tech industry, and I
have become so over the last decade or so. And
my problem with the tech industry is not its dynamism.
I applaud that. It's not its risk taking. I applaud that,
And it's not the drive towards economic growth, which I
(02:46):
think is also generally desirable. But it is the direction
of research and technologies that the tech industry has focused on.
Both because of ideological reasons and because of a
particular business model that they developed, and I think both
(03:08):
of those have pushed us towards technologies that I see
as socially less desirable, in some cases actually undesirable, and
as a result, we're actually getting growth without as much
social benefit. And let me try to just make one
very simple point, which everybody nods at when you say that,
(03:29):
but it's still important to put in the conversation.
Economic output, as measured by statistical agencies such as gross
domestic product, does not have any welfare element in it.
So if I find a way of hacking into your
computer spending one thousand dollars, and you find a way
(03:52):
of defending against me spending two thousand dollars, that will
increase GDP by three thousand dollars. And I think even
the most demented person wouldn't say that that's a social improvement.
Speaker 2 (04:04):
Yeah, I think you made a point like this to
Goldman Sachs, where it's about like you could make a
trillion dollars if you did deepfakes in a certain
manner exactly.
Speaker 3 (04:13):
So, therefore, new products that increase GDP may have socially
undesirable consequences. That wasn't part of the original Schumpeter point,
and it's not something that I would worry about when
I'm talking to people in Mexico who are trying to
get the economy going, But it is a very important
(04:34):
concern when it comes to new tech.
Speaker 2 (04:35):
When do you think this shift happened? You say the
last decade? What was it that kind of changed for you?
Speaker 3 (04:40):
Well, I think it was probably a gradual process. But
the tech sector initially was very heavy on hardware with
some software elements, right, And when that started changing and
the entire field became software, I think the possibilities for
different types of technologies to go in very different
(05:03):
social directions also multiplied. Right. So money today, Nvidia
being an exception, is not made by hardware. And even
in Nvidia, I think a lot of the innovation is with.
Speaker 2 (05:17):
Software, particularly with CUDA being able to do stuff with
GPUs, exactly.
Speaker 3 (05:21):
Yeah. But when you are also doing software, you have
ways in which that software becomes an information control tool,
a monitoring tool, or surveillance tool. It becomes a way
of automating work in various different ways. It can become
a manipulative tool, and it can also create lots of
new products, some of them very beneficial, but some of
(05:42):
them very addictive and conducive to mental health problems. So
I think software sort of expands the capabilities, but together
with the capabilities, you also have an expanded set of distortionary
or manipulative things that you can do.
Speaker 2 (05:59):
Right, you mentioned the kind of dynamism element of how
tech is working in the growth. I don't know if I agree
that tech is in a dynamic state. It almost feels
like it's been spinning its wheels for the last few years. Crypto, metaverse,
all of this stuff. It doesn't feel like new things
are happening.
Speaker 3 (06:15):
Well, there are new things. It's just that I think
you are saying what I just said in a different way,
and it might be that your way is better. They're
not super socially valued, but they are new products. So
you would say that the metaverse or virtual worlds are a
new product by any category. It's just not something
that's going to make humanity better; it might make
(06:37):
humanity worse by alienating them more or isolating them more
from their social milieu, you know. And there is another
element of what's going on here, which I'll comment on,
but again it doesn't contradict that they are generating new products.
A lot of new technologies and new ideas that are
being invented in tech are not being implemented, and part
(06:59):
of the reason for that is the competitive environment. Google, Facebook, Microsoft,
and Amazon are all buying
up a lot of competitors and sometimes not even using
their technology. So that's the consolidated structure. So the invention
is there, but the invention is not translating into implementation. Now,
don't get me wrong, some of that invention may also
(07:21):
go in the wrong way, so a better version of
you know, TikTok may not be a great thing either,
But there is that consolidated, concentrated market structure that is
also changing what gets implemented.
Speaker 2 (07:37):
The thing I'll push back on is they are making
new things, but it feels like the ones I mentioned, crypto,
metaverse and now generative AI, they're not producing actual products
at the end. It's not so much that we couldn't
live in a virtual world or that digital money wouldn't
be useful, but it's more that the actual output from
the companies is not translating into meaningful products.
Speaker 3 (08:01):
Well, again, this is a question of what we are measuring,
and whether what we're measuring is the right thing, and
whether it's really welfare relevant. But if I create a
metaverse and you're willing to pay a million dollars for it,
that will increase GDP by a million dollars. So that's
a new service, right. So a lot of things we
consume today are services. It's not something produced like a
(08:24):
T shirt or a car. They are based on digital services. Now,
of course, to produce those digital services, we're actually using
real resources such as energy. But what I actually buy
from you, or what you buy from me, may be
just a digital service. Now, some digital services are extremely useful,
and some of them are useless. Some of them may
be bad for you.
Speaker 2 (08:42):
Right. I think my point is more they're not
making particularly useful services. They're doing well monetizing the things
they've had for years. But it kind of reminds me
of something you wrote in twenty nineteen where you were
talking about automation and its effect on growth, but also that we
may have run out of ideas for generating new high productivity,
(09:03):
labor intensive tasks. Do you think we're approaching that point?
Speaker 3 (09:07):
Thank you for raising that, Ed. So, I think that
is a very very important part of my thinking. I
would say that the tech sector is not producing sufficient
new tasks for workers to use their skills and to
expand their capabilities, and firms perhaps are not demanding and
implementing enough of them. But it is not, according to me,
(09:30):
because we're running out of possibilities. It's just that we
haven't focused on those. And that's where the ideology reference
I made earlier on comes in. You know, I
think the software industry would have done somewhat more productive
things if it did not become too focused on replacing
(09:52):
humans, having machines as humans' overlords, which today has of
course reached its apex with the craze over AGI.
Speaker 2 (10:05):
But here's the thing. I get that that might be
what they're pushing toward, but generative AI isn't even automation
at this point. A lot of what you've written about
AI is, correctly, about the effects of automating all these
tasks, but it feels like they aren't even successful at
automating anything.
Speaker 3 (10:21):
Absolutely, one hundred percent. Thank you for saying that. My take
is that generative AI is actually an informational tool, right,
so you should use generative AI as a way of generating, filtering, summarizing, finding,
checking information. That's actually what it's good at. If you
(10:44):
try to use it for other purposes, sometimes you can
get away with it, but it won't be very good
at that. So you can try to automate a lot
of warehouse tasks today by using the current crop of robots.
People don't do that because they are not good at it.
If you did it, costs would go up, people would
lose their jobs, delays would pile up. But you can
(11:07):
do it. It's the same thing with generative AI. Even
though it's not an automation tool and automation wouldn't be its
best use, especially given the current unreliability, and even though there
is something else that we could do better with it,
I think many people are going to use it for
automation because that's the vibe. That's what companies are being told.
You know, if you talk to business leaders today, everybody's
(11:28):
asking them, financial journalists, their shareholders, and their friends, where
are you with the AI investment? So that's the hype,
and then people are going to rush to use AI,
to implement AI even when they don't know what to
do with it, and automation will often appeal to them,
because it's like the easiest thing to do. It's the
thing that they may have experienced from other technologies, and
(11:50):
it's the thing that some people are telling them that
that's what they should do. There are companies, integrators, websites
devoted to automation with AI, even though it wouldn't be very
good at it. I mean again, it could automate some tasks.
You could have more of your customer service done without people.
Speaker 2 (12:07):
Right, even then, that feels like a stretch of what
automation means because sure, the customer service example, and I
think you may have raised this point as well, how
does it get better? How do you measure better in
the case of customer service? But even then, it's automation
only insofar as you can trust it, and
it feels like these core issues of hallucination almost kill
(12:28):
the concept of automation with generative AI.
Speaker 3 (12:32):
So here's a good use case for automation of customer service,
which is you call your bank and you enter some
password and they tell you your balance. That's perfect. You
don't need a person there to tell you the balance, right,
because the current technologies can faithfully take those numbers and
communicate them to you. After the right security steps.
Speaker 2 (12:53):
Yeah, and it's not a generative answer because it's a
number in a database.
Speaker 3 (12:57):
It's not a generative answer. So now put generative AI
in there, you're probably gonna get lots of incorrect answers. Yes,
but some companies might still do it.
Speaker 2 (13:04):
Which is... it just feels like a crazy time, that
you have companies shoving this through, almost
very much like a post-Jack Welch situation where.
Speaker 3 (13:14):
Yes, exactly, the Jack Welch mindset.
Speaker 2 (13:20):
Do you think the problem is that the people
running these companies aren't really technologists?
Speaker 3 (13:25):
I don't know. Look, I think this is another branch
of my work. But US businesses are often led by
people who have been trained into thinking that their only
priority should be increasing short term shareholder value, right, and
(13:46):
a very effective way of doing that is to cut labor costs.
But, A, that's not the right social objective. Even maximizing
long term shareholder value is not the right objective. And, B,
even more fundamentally, cutting short run labor costs may be an
(14:06):
illusion and be associated with longer term problems. So if
you have a company where your workers are skilled, talented,
they are very useful for liaising with customers, creating new services, products, innovation.
You can in the short run cut your labor costs,
but it would destroy you in the long run. I
think many more companies are in this bucket than American
(14:27):
business leaders realize.
Speaker 2 (14:29):
And it's funny you mentioned that. I've heard a lot
from Google people in particular. I get emails from Google
people all the time because I did an episode about
a particular guy, and it's funny. They all talk about
the kind of brain drain of layoffs, that you don't realize
it's not just the output you're losing, it's the person
who knew how the stuff worked and where the stuff was,
and who built the stuff and why the stuff is
(14:49):
good or bad. And it almost feels like American capitalism
is dramatically disconnected from labor in general.
Speaker 3 (14:57):
Yeah. Absolutely so. Look, I mean, I think there is
a tremendous amount of tacit knowledge that workers have which
often goes unrecognized, and even bosses sometimes don't recognize that.
So both French and British trade unions have historically
experimented with these types of strikes where workers just follow
(15:21):
the rules. They do exactly what the rule book says
their responsibilities are, and it turns out to be quite
disastrous for the company, because most of what workers actually
do is much more adaptive than just following the rules.
Speaker 2 (15:35):
Like kind of outsourcing risk. Almost.
Speaker 3 (15:38):
Yeah, it's just like, you know, the rule book says,
you know, operate the machinery, but you know when to
actually operate the machinery, not just how to operate the machinery.
By the sound, yeah, exactly. So that's the kind of tacit
knowledge that people acquire via training, via experience, via their
social network, talking to friends. And if we don't value that,
(16:00):
we'll lose that, and it's going to be very difficult
to replace it with what machines or information technologies do.
Speaker 2 (16:06):
Do you think we're in a bubble right now?
Speaker 3 (16:08):
Define a bubble?
Speaker 2 (16:09):
Actually, let me reframe the question. Do you think generative
AI is a trillion dollar industry? Do you actually think
it is the next hyped growth market?
Speaker 3 (16:17):
Let me answer that question slightly indirectly.
Speaker 2 (16:21):
Sounds good.
Speaker 3 (16:22):
I believe that generative AI has the capacity to add
a trillion dollars or more over time if we use
it correctly, because as an information technology it has great capabilities.
We live in an age in which useful information is
scarce; all sorts of junk you don't want is on
(16:45):
the internet. But when you actually need to solve a problem,
get better at what you're doing, get more background information,
those things are very difficult to find, and generative AI
could be a tool for providing that sort of information
to all sorts of decision makers and workers, blue
collar workers, office workers and so on. But that's not
(17:06):
the direction we're going, in which case I don't think
it's going to add trillions of dollars of true value.
But that also doesn't mean that generative AI companies are
going to go bust, right, because they're going to be
able to monetize this in other ways. So if generative
AI enables you to take over the search market from Google,
(17:27):
that's a huge amount of money. It may take over
the search market from Google without providing much better service
to consumers, but it might still be hugely profitable. If
generative AI companies convince businesses to invest in generative AI, that's
going to be very profitable for them, but not so
good for the businesses that misimplement it. Right.
Speaker 2 (17:46):
So the thing is, and I understand why you're making
these assumptions, but what if it doesn't get cheaper because
right now, the thing I've been on about with generative
AI is, on top of not being super useful, it's
so unprofitable and every report seems to be suggesting it
isn't making people money. What if it stays where it
(18:07):
is? Because in the last eighteen months, GPT-4o
is not significantly different. What if they've stalled? What if
this is all we've got?
Speaker 3 (18:15):
Yeah, my guess is that it will get somewhat cheaper
because right now it's very costly to even answer queries,
and with more GPU capacity it will get somewhat cheaper.
With better designs, it will get somewhat cheaper. But I
do not believe that there is a scaling law in there.
(18:36):
So many people in the industry believe in this mysterious
scaling law, which is that you double the GPU capacity
or compute capacity, you double the data, and you get
twice the performance.
Speaker 2 (18:48):
Just an aberration of Moore's law by people who don't
necessarily understand it.
Speaker 3 (18:53):
But first of all, what does it mean to say
double the data? We're going to throw more Reddit at it?
So even if there were such a scaling law, you
would require high quality data, which we're not producing. It's
run out. We're not paying for it.
Speaker 2 (19:07):
Yeah, there's something just very nihilistic about the whole thing
as it stands. There's not really much of a social output.
It's helping automate away jobs predominantly held by contractors, which
is already a problem unto itself. But also it doesn't
(19:29):
seem to be making the money. I don't think I've
ever seen anything like this in tech, and I'm just wondering, indeed,
maybe this is the question: what happens if this is
not it, and tech doesn't have a next step?
Do you think one of these big companies could die?
Do you think that there is actually an existential risk
if generative AI and all this falls apart?
Speaker 3 (19:50):
No, I don't think so. I think none of these companies,
you know, are just committed to generative AI. They have
other businesses that are making money, and even Nvidia can
still make a lot of money with GPUs.
Speaker 2 (20:04):
Let me rephrase it then, So right now, all of
these tech companies, they do very well in their multiples
in the markets because they have a relatively low cost
of goods, like their actual costs are pretty low, but
they're predicated on this ongoing growth. They must always grow.
But what happens if they don't have a new growth
thing because they haven't for a while, and what if
(20:27):
they turn on generative AI? Like, this feels like it
could be an economic panic unto itself.
Speaker 3 (20:32):
Yeah, it could be. There could be some drops in valuation.
The general pattern we have seen with many other products
and technologies is that it looks a little bit like
an S curve, right: you have an acceleration and then you plateau,
and that's when new products are invented, new investors move
on to other things. And that hasn't happened with tech.
(20:54):
You know, Microsoft is living its fourth life or whatever
since MS-DOS, partly because they have acquired new businesses,
some competitors, some competing technologies, and sometimes some tech companies
have invested in the wrong things. I mean, cryptocurrency was
more crazy than AI. There, I really didn't see the
(21:17):
use case.
Speaker 2 (21:17):
The question I keep asking, and have asked a lot of people,
is just: what happens if there's nothing, though? Because
growth is slowing. There is a pattern of slowing growth
within these companies and there isn't a new thing that
they can pick up and acquire. I don't know whether
tech has ever had this happen is the problem.
Speaker 3 (21:37):
Yeah, it's a good point, but it's even deeper than that.
Growth has slowed in the industrialized world, and it's not
a new phenomenon. It's one of these paradoxes, which needs
to be repeated more and more: the tech age has
also coincided with a slowdown of aggregate growth and
every indicator of aggregate growth. So we are growing much
(21:59):
less today than we did in the seventies or sixties.
Productivity is growing less, and I think this is also
related to the fact that we're not getting enough out
of the new technologies and the new ideas and the
new scientific discoveries that we are making. And part of
the reason why there is so much hunger for AI
(22:22):
hype is that many people, including policymakers by the way,
are wishfully thinking, oh, well, this could be a solution
to our productivity slowdown. So perhaps in the next
decade we can have a much faster productivity growth thanks
to generative AI or thanks to.
Speaker 2 (22:37):
AI. Right, new jobs and such.
Speaker 3 (22:40):
Yeah, new jobs, sudden discoveries.
Speaker 2 (22:43):
So it's almost like history is kind of slowing down. I've not
really heard anyone discuss it in these terms, but
it's interesting. So you've said that this growth
at all costs is everywhere, and growth is slowing. But
it sounds like growth isn't just a money thing though.
Speaker 3 (22:56):
No, no, growth is not just a money thing, and
I think if you look at other indicators, we're
doing worse. One of the regularities of the twentieth century
across the world is that health and life expectancy have
improved everywhere. Today, people in Sub-Saharan Africa have twice
the life expectancy at birth as people who lived in
(23:19):
London or Manchester in the eighteen hundreds, and Americans have
had tremendous improvements in life expectancy and health until the
last decade, when it slowed. Then it started getting reversed.
So on many indicators we're actually doing even worse than
GDP suggests.
Speaker 2 (23:37):
So what's contributing to it? Is it a welfare issue,
is it a societal one?
Speaker 3 (23:42):
Is it? Well, I don't think there is a clear answer.
Some people think the life expectancy part
is because of early deaths due to alcoholism,
opioids and drugs. But there is a more general deterioration
in mental health. There's a mental health crisis. So if you
look at the health of surviving people, it's much worse if
(24:02):
you're factoring in that mental health issue.
Speaker 2 (24:04):
I wonder if it's also where tech falls into this,
as well as the exposure to social media. I've had
this theory overall, which is one of my flimsier theories.
I don't think people should be thinking about politics as much
as they do. Not saying people shouldn't be political, but
just the immediacy of political discussion has been erosive to
people's mental health.
Speaker 3 (24:23):
Well, I'll give you two factoids that might perhaps support
your idea, although I'm not sure whether I completely agree
with it. But one is that if you look at
when the mental health crisis seems to start, it coincides
with smartphones. Ah, so people accessing social media and other
things on their smartphones twenty four hours a day
(24:44):
might have something to do with it. Another one is
due to economists Hunt Allcott and Matthew Gentzkow. They did this
experiment where they incentivized Facebook users to stop using the platform.
So when people stopped using the platform, their mental health improved,
but they could answer questions about what's going on in
(25:05):
current politics much less well. So at least their immediate,
superficial knowledge of what's going on in politics also declines.
Speaker 2 (25:14):
Interesting. Yeah, it does feel like there is a wider discussion.
Discussion perhaps is the wrong word. Within the tech industry,
there is almost no consideration of the social aspects, of
the welfare aspects, of any technology being built. The metaverse,
for example. As ridiculous as that was, I can understand
an executive being like, yeah, we use the internet, now
(25:35):
what if we use more internet? But just no consideration
of whether people wanted to. It feels like there's just
a disconnection between capitalism and people.
Speaker 3 (25:45):
I mean, I think tech is much more complicated. Oftentimes
it's multi use, so something that may appear to have
good uses also has bad uses. But I do think
that tech workers also need to own up to greater
social responsibility. Right, So, if you are a physicist,
a nuclear physicist, today, it's unthinkable that you do not have
(26:08):
some social responsibility related knowledge, as well as training about,
you know, nuclear weapons.
Speaker 2 (26:14):
My best mate is a nuclear health and safety person,
so I feel they will appreciate that.
Speaker 3 (26:19):
But the same degree of thinking about ethical implications, social implications,
what happens if I unleash this on humanity, doesn't quite
exist to the same extent in the tech industry, and
I think it's going to develop. There are many people
who are very socially minded in the tech sector, but
I think we may need something more systemic.
Speaker 2 (26:39):
And what would regulation look like? On a better note,
I suppose what can we do to kind of reverse
this disconnected trend? Is it regulation? Is it better safety culture?
Speaker 3 (26:49):
Well, all of the above. But here's a problem I
have with both regulation and the discussion that we have
about regulation. It is very reactive, right. Something happens and
we react to it by thinking of how can we
regulate so that we reduce the harms. But the problem,
as I tried to articulate, including in the earlier parts
(27:12):
of this conversation, is about what types of technologies we
are developing, where we're putting our efforts. Ex post regulation that's
reactive is not going to achieve that. So I think
we need a new tech culture, as well as
societal norms and priorities, that says there is an alternative
(27:33):
that is technically feasible and socially desirable for technology, especially
for AI. Articulate what this is. Let's have a conversation
about how we can get there. What we can do
to encourage researchers, what we can do to encourage engineers,
what we can do to encourage businesses to actually go
in that direction. What does the government need to do,
What does civil society need to do? What does the
(27:53):
media need to do? By the way, I think media is a
big part of the problem. Media often sort of increases
the appeal of the tech industry. It sort of paints
a picture of tech leaders as these geniuses who are
revolutionizing things, and it personalizes their power, and it
(28:14):
makes it harder for the public, right, to hold the
tech sector accountable. Also, in the AI field, I think
the media is part of the reason why there is
so much hype. Many of the leading publications, such as
The Economist or the New York Times, every week print
something about how AI will solve this problem or that problem.
(28:35):
AI is going to revolutionize.
Speaker 2 (28:36):
It's always "will solve it," yes.
Speaker 3 (28:38):
Will solve it. Yet it hasn't solved anything yet.
Speaker 2 (28:41):
Yeah. And that's, I mean, part of the reason the
show exists. And I think it comes down to, I
do blame a lot of this on the growth
at all costs economy, but it's also, it's almost like there
is no long termism anymore in a lot of the
tech economy. It's all this will happen, just trust us
and give us as much time and money as possible.
But we're not going to invest in R and D.
(29:02):
It's just bizarre.
Speaker 3 (29:04):
Well, look, let's also think about the world at large.
There are six billion people who live outside of Europe, US,
Canada and China. That includes the weakest, the poorest people
in the world. How can we improve their lives? Nothing
we're talking about with AI here is going to do that.
Speaker 2 (29:21):
And that's actually the thing. It's connecting back to what
you were saying earlier. The problems being solved don't feel
like they're solved for everyone. It's solving what's very much
in front of us, the latest iPhone, the latest computer. What
problems can that solve? And thus generative AI kind of
makes sense, because it's like, oh, more computer. But more
computer isn't fixing.
Speaker 3 (29:42):
Anything. Metaverse as a solution to, you know, people who
are starving.
Speaker 2 (29:46):
Actually, this leads me to a question: what did you
think of cryptocurrency? I wish I would have had this
podcast to ask you about this at the time.
Speaker 3 (29:52):
Well, I said that I see the positives for generative
AI. I think it's actually a promising technology. I
do not see any positive for cryptocurrency. I never did.
When I first read the manifesto about bitcoin, it was interesting,
it was thought provoking, But two days later I was
(30:12):
inoculated against it.
Speaker 2 (30:14):
Yeah, well you kind of remembered real money exists, and
at that point.
Speaker 3 (30:17):
Point we cannot trust the government. Yes, we cannot trust politicians. Yes,
but as long as we keep politicians and the government
under some sort of check with true democratic means, you know,
the money is not the most important problem. So that's
not the biggest issue that we have to worry about.
Speaker 2 (30:35):
So, a wrap up question. I really appreciate your time.
Of course, Ed. Are you optimistic about the future for
the tech industry?
Speaker 3 (30:42):
No, I am not a techno optimist, and I'm not
a market optimist, meaning that if I define optimism as
things are gonna work out, there is an arc of progress,
I am not an optimist. I think we have serious
problems with the tech industry. We have serious problems with
the market process in the United States right now, with
(31:04):
social processes. But I'm hopeful. I believe that there is
a direction in which we could use technology that would
make things better. And there is a way in which
we can introduce better regulation, better worker organizations, better training
that would make the market system work better. But that's
the hope that we could achieve that if we did
(31:26):
the right things. But I don't think that we are
heading there, left to our own devices.
Speaker 2 (31:31):
So where does it head if we keep heading in that direction?
Speaker 3 (31:33):
Oh, I prefer not to answer that question.
Speaker 2 (31:37):
That's a perfectly fine way to end it.
Speaker 3 (31:39):
Darn.
Speaker 2 (31:39):
Thank you so much for joining me today.
Speaker 3 (31:41):
Thank you, Ed. This was a really excellent conversation. I really
enjoyed it, and I'm sorry that my voice was a
bit of a downer.
Speaker 2 (31:47):
Oh, don't worry. The listeners will just be glad to
hear someone else other than me talking. You've been listening
to Better Offline. Thank you for listening, everyone. Thank you
for listening to Better Offline. The editor and composer of
the Better Offline theme song is Matt Osowski. You can check
(32:10):
out more of his music and audio projects at mattosowski
dot com, M A T T O S O W
S K I dot com. You can email me at ez
at betteroffline dot com, or visit betteroffline dot
com to find more podcast links and of course my newsletter.
I also really recommend you go to chat dot wheresyoured
dot at to visit the discord, and go to
(32:31):
r slash Better Offline to check out our reddit. Thank
you so much for listening.
Speaker 1 (32:36):
Better Offline is a production of Cool Zone Media. For
more from Cool Zone Media, visit our website coolzonemedia
dot com, or check us out on the iHeartRadio app,
Apple Podcasts, or wherever you get your podcasts.