Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Bloomberg Audio Studios, Podcasts, Radio News.
Speaker 2 (00:20):
Hello and welcome to another episode of the Odd Lots Podcast.
I'm Joe Weisenthal.
Speaker 3 (00:26):
And I'm Tracy Alloway.
Speaker 2 (00:27):
Tracy, you know, there's a lot of, like, concerns about AI obviously these days, and anyone who's reasonably intelligent can, like, list tons: maybe it's gonna go rogue and be smarter than us, or maybe there's just gonna be this, like, flood of disinformation and deep fakes,
(00:49):
or maybe it's gonna put all journalists out of business,
which is certainly plausible. But I think, you know, something I think a lot about is just this idea that, regardless of what happens, we're going to be increasingly sort of trusting a black box for answers, and we really have no idea where those answers, however you want to describe them, come from.
Speaker 3 (01:09):
Yes, absolutely, And this is something that's come up on
the podcast a number of times.
Speaker 4 (01:13):
Now.
Speaker 3 (01:13):
I'm thinking way back to an episode we did that
was basically about the black box of algorithms and how
difficult it was to understand what goes into them and
then what comes out. And then of course we recently
did that episode on pricing and the idea of algorithmic pricing,
the idea of building proxy consumer profiles, And you're right,
(01:36):
the issue is we know that there's this new technology,
We know that there's all this data floating around, but
we don't entirely know how it is coming to the
conclusions or creating the output that it actually is.
Speaker 2 (01:50):
You know, the pricing thing is interesting because you know,
in a market economy, you know, you could argue it's like, oh,
at any given moment, you know you're being served up
the optimal price, right. And in theory, even with the most advanced algorithms and stuff, maybe this transaction is happening at the best price for both the seller and the buyer, et cetera. But
(02:11):
I think like people just have a sort of deep
intuitive distrust about the fact that like, you know, you
can't go there and like touch it and verify it
and see like this is why this exists in the
state that it is. And I think it's going to
create a lot of I don't know, cultural apprehensions as
more and more decisions and more and more things that
(02:32):
affect our lives just seem to, like, emerge
spontaneously out of the box.
Speaker 3 (02:38):
Absolutely. I'm thinking of all the people working at Chipotle
who are going to have to answer questions about not
just portion sizes now, but also questions from customers about whether they're getting the best price.
Speaker 2 (02:49):
Have you seen those awful videos that people are taking
of the Chipotle workers.
Speaker 3 (02:54):
Yeah, I've seen some of them. So vile, I imagine, yes.
Speaker 2 (02:56):
It's so vile. Anyway, that's a, that's a separate thing. But yes, like all of these things, and you know, it's not just with AI obviously. Like, this sort of world increasingly exists in black boxes.
You put in the support ticket, you try to like
talk to someone in an embassy or a consulate or
anything you do when you sort of like send out
(03:17):
some requirement or some request to some bureaucracy or some company,
and then it moves around. I had a flight recently that was delayed for nine hours,
and there's like this palpable frustration that everyone feels that
the person you know, standing at the gate like can't
answer their questions and they can't get anyone to answer
their questions, and it just sort of like, you know,
(03:37):
everyone explodes and everyone knows it's not the gate agent's fault,
but still, like, you know, there's just this frustration, like
where is the answer to what's going on?
Speaker 1 (03:47):
Right?
Speaker 3 (03:47):
And you can't ask a single person because that single
person doesn't, like the gate agent doesn't have the answers.
I think what's happening is like society has organized itself
in such a way as to devolve responsibility, and the
creation of all this new technology is basically going to
(04:07):
I guess, ramp all of that up, right, Like, so
it might not even be the gate agent that you're
asking in the future. It might be you trying to,
like I don't know, ask the algorithm, like why it
decided to bump you versus someone else. Or actually, that already happens, right? There is an algo that dictates, like, who gets bumped from the plane and who doesn't. So yeah.
Speaker 2 (04:29):
The other big one, of course is health insurance and
why some claims are suddenly denied and you never get the answer. Anyway, the world is already filled with systems in which we have some question and no one actually can, sort of, you know, give you the answer.
Speaker 3 (04:45):
Yes, absolutely, and I think we might have the perfect guests.
Speaker 2 (04:49):
We do have the perfect guest. It's someone we've talked
to multiple times on the podcast, one of the smartest
guys around, always interesting, always worth paying attention to. We're
going to be speaking with Dan Davies. He is the
author of the new book The Unaccountability Machine, Why Big
Systems Make Terrible Decisions, and How the World Lost its Mind.
(05:09):
So this should be really fun. Dan, thank you so
much for coming back on the podcast.
Speaker 4 (05:14):
Oh thanks very much for inviting me.
Speaker 2 (05:16):
You know, before we get into the meat of your argument,
The Unaccountability Machine, I'm actually curious about the second half
of the title, how the World Lost its Mind, because
I certainly feel like the world lost its mind. But
I'm like a middle aged boomer, and I feel like,
you know, anytime you get to my age, you're like, oh,
the world's gone crazy, the world's gone mad, Like, you know,
(05:37):
why is everything so nuts these days? Has the world... Do we actually know that the world's gone mad? Or is it just because we're all sort of old and cranky now that everything seems like the world's gone mad?
Speaker 4 (05:47):
Well, from your own perspective, you can never be sure. Yeah, but I think there are actually reasonable, objective ways that you can check up on this, just by noticing that the world gets more complicated as it gets bigger, and it gets exponentially more complicated. And I mean that in the literal mathematical sense, because the number of connections grows faster
(06:10):
than the number of things, whereas our capability to understand
the world, manage it, and make decisions doesn't necessarily grow exponentially.
So this is the story, I would argue, of economics. It's the story of any management book that's worth reading,
because the central problem of management is the world is
(06:30):
getting bigger and more complicated faster than you can process that complexity. What are you going to do about it? How are you going to reorganize? And you know, we've just been through a global financial crisis, we've just been through a political, what Adam Tooze has called a polycrisis. I think
there's decent reasons to believe it's not just because we're
getting older, and it is actually a crisis of the
(06:55):
ability to make decisions matched up against the speed and complexity of the decisions that we're having to take.
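A quick sketch of the counting behind that claim, as an illustration rather than a formula from the episode: with n components, the number of possible pairwise connections, and the number of possible subsets of components, are

\[
\binom{n}{2} = \frac{n(n-1)}{2}, \qquad 2^{n}.
\]

\[
n = 10 \Rightarrow 45 \text{ pairs}, \qquad n = 100 \Rightarrow 4{,}950 \text{ pairs}, \qquad n = 1{,}000 \Rightarrow 499{,}500 \text{ pairs}.
\]

So the web of possible interactions grows at least quadratically, and the space of possible joint states grows exponentially, while a head office's capacity to understand and decide typically grows only roughly in line with headcount.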
Speaker 3 (07:02):
So when we talk about the lack of accountability and making bad decisions, give us some concrete examples of things
that you have spoken about or written about in your book.
What are you thinking about here?
Speaker 4 (07:15):
Well, I mean, there are kind of, there's little trivial, funny examples, and there's big, huge, serious examples. So for example, and I apologize in advance because this is quite disgusting.
Speaker 3 (07:29):
Oh is it the squirrels?
Speaker 4 (07:30):
Would you rather I didn't talk?
Speaker 1 (07:32):
Yeah?
Speaker 3 (07:33):
You could. This is bad if there are children listening.
Maybe child.
Speaker 4 (07:41):
At the start of this century there was a craze
for squirrels as pets in Europe, and squirrels were being
imported from North America and China to be pets, and
they had to have the right paperwork. And so one
day four hundred of the poor little things
(08:02):
showed up at Schiphol Airport in Amsterdam without any paperwork
and without a return address to send them back to.
And it's difficult to know what the airline should have done,
but you can't help thinking that there must have been
a better solution than what they actually did do, which
was that they threw all four hundred of them, except
(08:24):
for one or two that escaped, into an industrial shredder. And this caused an outrage. There were questions asked in
the Dutch Parliament and people immediately started asking how did
this happen? Who is responsible? And in fact, the press
release from the airline apologizing for this is studied as
(08:46):
a masterpiece of crisis pr in business schools. But when
they went back to inquire, they ended up realizing that
no one had ever really made the decision that that
was what they were going to do. The government's biosecurity
Ministry had set some standards for the importation of small animals,
(09:08):
the airline had set some standards for compliance with that policy.
The only people who were expected to make a decision
about whether this was grotesque and couldn't be done or
not were some low level employees in a shed at
Schiphol Airport. And frankly, people who work in sheds aren't
(09:28):
usually going to be thinking that they're meant to be
second guessing the government. And so what happened is that
you had this phenomenon that turns up a lot of
the time at all levels of organization, which is that
something happened which nobody wanted, but was the predictable output
of the system that they had created.
Speaker 2 (10:05):
Yeah, this is, first of all, that's grim, but also, like, when you describe it that way, you could see it, because ultimately, like, all right, here's this awful thing that happened to three hundred and ninety eight squirrels. I guess in theory, someone had to, I don't know, dump the bag into the shredder.
Speaker 4 (10:24):
I have not researched them.
Speaker 2 (10:25):
Yeah, but that's not a very satisfying answer. I mean, yes, okay,
maybe there was someone who did the physical thing, and
I guess the entire you know, operations of the airline
and the port and the customs Bureau could like just
blame that one person. But that's not a very satisfying
conclusion I guess in terms of like how this actually happened.
(10:47):
It's funny, I brought this documentary up. I was thinking about this too, like with the destruction of the old Penn Station in New York, which is this, like, extraordinary, I guess, like, Roman or Greek building, and they just tore it down to build the...
Speaker 3 (11:02):
Something hideous, Yeah, something hideous.
Speaker 2 (11:04):
In its place. And now the new Penn station is terrible,
but it feels like the same thing. It's like, how
did no one stop and say, like, wait, does this
make any sense in the long run, in the big picture?
Speaker 4 (11:14):
Yeah. And the thing is they'd created a system on
the assumption that squirrels would show up in ones and
twos and they could be dealt with as individuals, and
that therefore you would never get into this sort of
situation because when you build a system, you're always building
a model of the world, and if something happens which
doesn't fit into your model of the world, your system
(11:35):
might do something awful. And there's a sort of symmetry
and a kind of resemblance here with much bigger and
more grim things like the Boeing seven three seven max,
like the Libor scandal in financial markets. It's not
so much that anyone sat down and said, let's form
a conspiracy to manipulate interest rates, or let's build a
(11:59):
plane that crashes under certain circumstances. It's just that no
one set things up so that that wouldn't happen.
Speaker 3 (12:07):
Do you think, and I'm afraid going forward the squirrels are probably going to be our archetypal example of this, but do you think with the squirrels, for instance, the hyper specificity of the goals or the job roles of everyone involved contributed to the outcome? So in the sense that
you have you know, I guess, the Dutch Wildlife Department
(12:30):
who is trying to protect Dutch wildlife. Then you have
like the guys at the airport who are charged with
actually carrying out these orders, and then you have the
airline which is charged with like looking at the paperwork,
when taken altogether, do those tend to lead to worse outcomes?
Speaker 4 (12:50):
Yeah? Absolutely. It's this kind of fragmentation of the decision
which is a result of the industrialization of the decision
and it's industrialization literally in the Adam Smith sense,
you don't have anyone in the pin factory building an
entire pin. You have thirteen guys all performing one simple operation.
And that's a much more productive way to do things.
(13:12):
But then when you apply it to decision making, you
have the problem that everyone assumes that everyone else is
going to react when something unplanned happens. So this is,
like I say, it's the central problem of every good management textbook. How do you deal with information? How
do you get a drink from a fire hose. How
(13:33):
do you stop yourself from being overwhelmed? The answer is
always in some way or another, you build a system
to make the decisions for you. But once you've built
that system to make the decisions for you, you no
longer feel ownership of that decision. Psychologically, you no longer
feel like you're accountable for the decision, because if you
were accountable, you might be able to change it. But
(13:56):
if you're going to be held accountable for this thing, then you haven't really moved that thing on, you haven't really delegated it to the system.
Speaker 2 (14:04):
You mentioned Boeing, And speaking of Boeing, there was a
great blog post back in April from Steve Randy Waldman
of Interfluidity, who I consider another one kind of in
your category of people who have basically been writing interesting
things on the internet for a very long time, and
the title was Seeing Like a CEO, and
(14:24):
this idea that, you know, when Boeing merged with McDonnell Douglas, the McDonnell Douglas CEO became an outside hire and he
had to essentially gain some legibility into this new organization
that he inherited, and that was the cause for some
of this, like, streamlining and, you know, offshoring, things like that. Talk to us about, you know,
(14:45):
you mentioned Boeing, and you know, the crisis there that's been going on for several years. It's a more severe,
serious issue than the squirrels. But how do you know,
how do you think about what happened there?
Speaker 4 (14:58):
I think about it in very similar ways to Steve Randy Waldman, because, and you see it in Boeing,
but it's visible to a greater or lesser extent in
very very many companies, probably the majority of companies today
that the C suite has an information environment which is
almost completely composed of financial numbers, because the financial numbers
(15:23):
are taken by them as objective facts. We can talk
long and hard with accountants about how objective those financial
numbers are and how many assumptions go into them, but
they arrive on a spreadsheet looking very much like objective
facts about the world. Things at the engineering level, in principle
(15:43):
are objective facts, but you have to do a lot
more work to find them out and to know what's relevant.
And then you have issues like culture and kind of
the social environments, which aren't even capable of being quantified. So there's always this tendency, if you're trying
to do that thing of matching your own capacity to
(16:04):
manage your information flow, that you're going to concentrate on
the things that look finite and look manageable, and that's
always going to be the financial numbers, which can be
a big problem because financial numbers can mislead. You know,
you can create illusions in an accounting system the same
way that you can with anything else.
Speaker 3 (16:25):
How did we end up here? And I'm thinking specifically
to one particular development in the world of business, which
is the creation of the limited liability company. And I
guess the clue's kind of in the name there. But
what were the decisions or the trends that sort of
came about in creating the current system.
Speaker 4 (16:46):
Well, I mean, the limited liability company is certainly a
big kind of change if you think about these things
in feedback terms and information terms, and you know, one
of the big arguments of the book is that we
should be looking to the mathematics of information theory rather
than the mathematics of optimization to explain and model some
(17:09):
of these things. But a limited liability company is an
information filter. It tells you that outcomes below a certain
amount aren't going to affect you anymore, and that changes
your information world, It changes what you care about. But
then what I think really started doing the damage, so
(17:31):
to speak, was the development of the leveraged buyout in the nineteen seventies and the shareholder value movement, as it was kind of really kicked off by Milton Friedman's essay on the social responsibility of business to increase its profits, New York Times, nineteen seventy, but then built on
(17:51):
by just the entire kind of two decades of business
school research that followed from that. Because again, thinking about
it in information terms, a leveraged buyout is a massive,
screaming signal. The requirement to make the payments on debts
becomes a signal that swamps anything else you might be
(18:13):
thinking of, because if you are a CEO and you've
got LBO levels of debt, that's your priority. You can't
think about anything that isn't related to servicing that debt.
Speaker 2 (18:26):
Let's go back and talk about the sort of big
picture ideas. In your book, you talk a lot about
this field called cybernetics, and cybernetics is a name that sounds like something that they would have come up with in the early nineties, like the Wired magazine people would get into, like, cybernetics in ninety one. But actually this
field has been around at least since the nineteen forties.
(18:48):
I'm surprised they even had a word like cybernetics in
the nineteen forties. But what is cybernetics? And talk to
us about the general framework you use to start talking about these stories in your book.
Speaker 4 (19:00):
Sure, I mean, you're right, it was a Second World War kind of thing. It's originally from a word meaning the man who steers the boat, so it's cybernetics in that sense. And the first guy to use it
was a scientist working on creating an automated gun sight
for the United States Air Force. And the idea here
(19:25):
is that there is some quantity that is preserved in
an automated gun sight between the operator, the radar, the servo motors, and all the components of that system, and that quantity is information. And so at the time, this was Norbert Wiener, I'm talking about, the scientist in
(19:46):
the automated gun sights. Another guy working in the same field you might have heard of was Claude Shannon, who was inventing information theory at Bell Labs and proved some fundamental theorems there. And in many ways, the science of cybernetics is information theory applied to control. So you might
(20:09):
have, in information theory, a piece of maths that tells you how much bandwidth you need to transmit a given signal, and the cybernetic interpretation of that maths would be that it's telling you how much capacity you need to manage a system with a similar kind
(20:29):
of noise. So this was all made huge use of
in controlling things where you have access to the whole
information environment. So a lot of that early maths from
the first cyberneticians has just kind of stayed with us
through the invention of the electronic computer, and a lot
(20:50):
of it is actually at work in really modern artificial intelligence.
There's a guy called Ben Recht who works on recommendation algorithms, and he's quite upfront that a lot of his fundamental mathematical techniques are from the forties and fifties, just being applied in the context of massively more computing power. What
(21:13):
I'm interested in is where you take those kinds of
theorems and apply them in a slightly more unrigorous, slightly
more metaphorical sense to situations of management and organization where
you don't have access to the full information environment and
you just have to say, we're going to think about
(21:34):
this not in an optimizing economics kind of neoclassical economic sense,
but we're going to think about this as a system
that has to be kept under control, and we're going
to say, well, how much resource do we need in
order to stabilize this system as a system, Which is
a kind of abstract way of looking at it, but
(21:54):
it's the same fundamental problem of management. How do you
get a drink from a fire hose? How do you match
your own capacity to manage to the complexity of the
thing that you're in charge of.
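For reference, the kind of information-theory result being gestured at here is exemplified by Shannon's channel capacity formula (our illustration; the episode doesn't name a specific theorem):

\[
C = B \log_2\!\left(1 + \frac{S}{N}\right),
\]

where C is the maximum rate of reliable communication over a channel of bandwidth B with signal-to-noise ratio S/N. The cybernetic, management-flavored reading, roughly Ashby's law of requisite variety as the management cyberneticians used it, treats the same quantity as a budget: a regulator can only keep a system under control if its capacity to register and respond to distinct states is at least as large as the variety of states the system can throw at it.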
Speaker 3 (22:06):
Wait, can you give us a practical example of the application of cybernetics? I guess, because, to your point earlier, it does sound a little bit abstract in my mind.
Speaker 4 (22:17):
Well, I mean, a practical example, I think, is the history of the development of the corporation. So from the first days of the American railroads, which were probably the first really big corporate structures the world ever saw, you have
this problem that as the network builds out, it gets
more complicated, and the ability of the head office to
(22:41):
manage it doesn't grow as fast. And you can try and
solve that by adding more people. You can get a
great improvement by adding wireless telegraphs. But fundamentally, at some
point this railroad is going to grow big enough that
you have to devolve. You have to split it into branches,
(23:03):
and you have to give autonomy to some of the subsidiaries,
because that's the only way that you can match the
bandwidth of the management to the bandwidth of the control problem.
So I'd say that what we always see in any
big organization is that it grows, it gets more complicated,
(23:25):
It tries to deal with that by adding more resources
at head office, it ends up not being able to
keep up, and then it reorganizes. And the reorganizations almost
always either involve pushing responsibility down to the shop floor
or down to the branches, or they involve spinning off
(23:45):
parts of the business into a separate organization and giving
up the task of controlling it at all.
Speaker 2 (24:07):
What are accountability sinks?
Speaker 4 (24:10):
The accountability sink is, it's just a name for a particular move of cybernetics that I notice a lot
of these days, which is when you consciously break the
feedback links from the subjects of a particular decision to
(24:31):
yourself or to the unit that's kind of meant to
be making it. So your gate agent Joe is just
a classic example of the accountability sink, because they talk
to you with the voice of a corporation and they
say that this is the policy, there's nothing I can
do to change it, and then you're only able to talk back to them as a human being like yourself,
(24:53):
so you can't get mad at them because it's not their decision.
Speaker 2 (24:58):
Be clear, I did not get mad.
Speaker 3 (25:01):
There's a video of Joe out there.
Speaker 2 (25:04):
I just sat there and I closed my eyes, and I did stand around because I was sort of curious about the gate banter. But I did not get mad, I just want to establish.
Speaker 4 (25:12):
Yeah, but then you might ask them, and I will confess I've done this. I've asked someone politely for the phone number of someone who I can call up who is responsible for that decision,
and you know it's not the policy. You can't get
that phone number. The whole point of this was to
create a sink into which unpleasant feedback can be poured
(25:37):
and dissipated harmlessly. And when you start thinking about
these things in terms of accountability sinks, you start seeing
them everywhere. Because everywhere that there's a policy that can't
be broken and no feedback to the person who could
get the policy changed, that's an accountability sink. That's a
way that someone has protected themselves from the consequences of
(26:01):
their decisions, possibly at huge costs to the organization that
they're working for, but possibly not.
Speaker 3 (26:09):
I guess I'll ask the obvious question, which is how
do we break out of accountability sinks? And I think
the frustration with everyone is that, you know, you feel
powerless when you're caught in one, when you can't get
the answer that you want, or when you can't speak
to the decision maker and try to reason with them
or explain why this might be a one off or
a peculiar situation. And then it just feels like the
(26:32):
idea of actually starting to break apart some of these
sinks and move to more of an era of personal
accountability, the buck stops here and all of that. Yeah,
there we go. It just seems further and further away.
Speaker 4 (26:48):
Well, it is. And the horrible answer to your question,
Tracy is that maybe we can't, or maybe as individuals
we can't. And that's actually, in my view, potentially very
bad news for society because all of this, you know,
all of this negative emotion from people about the way
the world is, goes into the sinks. But like any sink,
(27:08):
it piles up, and it piles up, and then after
a while it all spills out, and then suddenly we
get things like Brexit in the UK, or kind of
first go of Donald Trump in the USA. We get
people who are used to being decided upon and used
to being ignored, getting steadily more and more dissatisfied with
(27:33):
the system, and then finally they start to use the
only power left to them, to use their votes in a way that says, I am no longer
satisfied with this, this is no longer tolerable to me.
I'm going to use my vote to tear the system apart.
And so with all these things, you can divert
(27:53):
these things for a while, but it's at the cost
of building up fragility.
Speaker 2 (27:57):
Yeah, I have to say, you know, I didn't finish your book, I'm about halfway through, but
it did leave me fairly nihilistic or pessimistic that it's like,
these are these inexorable centrifugal or centripetal forces. I can
never remember which is which, and they're pulling us into
all of these sort of high stress decisions and it's
(28:21):
really bad, and things are gonna keep breaking, and we're going to get angrier and angrier.
We talked about AI in the beginning, and a sort
of provocative idea that you talk about in your book
is the idea of the corporation as it exists and
as we've known about it as already a proto AI.
So you go to ChatGPT and you put in
a request and something spits out and it's impressive whatever,
(28:44):
but that actually this is just sort of a specific
example of what the corporation has been for a long time.
Speaker 4 (28:50):
Absolutely, I mean, and this is the beauty of abstract maths.
It describes things without you needing to know what they
actually are. These are all just decision making systems. I
had a conversation with someone at the European Commission the
year before last, because in Europe they passed an Act
saying that if a decision is made affecting you, like
(29:11):
to turn you down for health insurance, then if that's
made by an algorithm, you have a right of explainability,
So you have a right that someone can explain to
you why that algorithm made that decision for you. And
you know, I thought that's quite good, But it's kind
of ironic that this decision is coming from the European Commission.
So I asked the guy who works there, well, you know,
when you make a decision like that, what right of
(29:33):
explainability do I have from you? And the answer is
ha ha, no, none at all. All these things
are basically working in the same way. The AI is
working like the corporation, which is working like the government,
and the same problems of information managements affect them all.
But it's not as nihilistic as you think, in my view,
(29:55):
because that means that these things can all be subject
to the same kinds of solutions. You know, if we
think about the original reason for building the accountability sink,
it was that someone felt overwhelmed by information and
so didn't feel that they were responsible for the decision
and so wanted to cut the link of accountability. If
(30:18):
you can put the AI in the loop in such
a way that the decision maker is more able to
manage their information flow, then they don't have so much
need to break the feedback links because they've got more
functional ways to deal with them.
Speaker 3 (30:36):
I have a theoretical question, which is,
what does accountability actually look like for a decision taken
by an algorithm? Is it that, like we understand the
factors that went into the model, and like the decisions
within the model that spat out a particular outcome, or
(30:56):
is it that the person who is using the algo, you know, decides to think more thoughtfully about how they're
using it.
Speaker 4 (31:06):
It's a, that's a really interesting question, which I'm going to think about for two seconds and delay and waffle before answering,
which is that I think the basic definition of accountability
in the decision sense, in my view, is that you're
accountable for a decision exactly to the extent that you
are able to change it. So, in terms of a
(31:28):
decision made by an AI, it is accountable if it
could be made to make a different decision by new
information being provided to it. So the crucial thing is
not so much having someone to point at and attribute
moral responsibility to. The crucial thing is to have some
(31:50):
link between the subject of the decision, who can just say,
let's review this, let's have a court of appeal. And
you know, if the algo still thinks that this decision
needs to be done, if the algo still thinks I'm not an insurable risk, then maybe I've been... Maybe I still don't agree with that, but at least I know
I've been heard. It's not coming to me just simply
(32:13):
as a one way communication channel.
Speaker 2 (32:15):
You know, taking a more optimistic view. So you said
something interesting or important, which is that for a company,
the financial numbers are the closest thing, the closest form
of information that is at least like objective in some sense.
But then there's all these other things, like how
(32:36):
well is your engineering team working together? How is the
culture, that are just inherently much more difficult. And there are all kinds of, like, consultants and other companies that try to answer this for executives and, you know, rank employees, different things. Aaron Levie, you know, he's the
CEO of Box and he's one of the few like
tech CEOs who tweets some interesting stuff from time to time.
(32:59):
But he said he had this recent thread about how
his company is using AI, and he said, you know,
the exciting thing is, you know, we have some data,
but then we have all this unstructured data that exists
in our company, and we've never been able to do anything with, probably like chat logs from customer support and
all this. And he said, the exciting thing for them
is the prospect of turning all of this sort of unstructured,
(33:22):
unusable information that the company has into something that can
be essentially searchable and that insights can be gleaned from it.
Is there a story or is there a path in
your view where artificial intelligence can actually make some of
these other parts of a system more legible and more
interactive and more concrete to the executives in a way
(33:45):
that brings that other data on par with the financial data.
Speaker 4 (33:50):
I mean, I really hope so. I mean, like, kind of one thing that Aaron Levie could do that would
be really radical would be to open up as much
of that data as possible to the investors and let them parse it, and have that as a main channel of communication of corporate performance, rather than generally accepted accounting principles,
because if you think about that phrase, that's an accountability
(34:12):
sink right there. What are these accounting principles? They're generally accepted.
Can I change them? No, that would not be generally accepted.
What if this is a completely irrelevant set of metrics for my business? Well, you still have to do it in exactly this way, even if it's not presenting what
you think is actually generating value. And in a world
(34:34):
where we've got better ways of processing bulk information like that,
then I think there's a real question about whether GAAP is something that we should be so fixated on, whether
we should be thinking that the only way to report
corporate performance is in a way that's optimized, basically, for a guy with a green eyeshade sitting at a desk
(34:57):
at the beginning of the twentieth century, flipping through printed
reports and accounts, you know, that's not the way that
we process information anymore. And so maybe that shouldn't be
the way that we report information anymore, because, as you say,
these assumptions that go into GAAP earnings start driving decisions
(35:18):
and they were never meant to drive decisions.
Speaker 2 (35:20):
It's interesting, Tracy, now thinking about it in this financial sense, how many of these accountability sinks there are. Like even, like, performance benchmarks, right? It's like, oh, we beat the S and P or whatever. It's like, why the S and P, et cetera? Well, you know, it's there, right, we could point to it and we could say it. But like, once you start thinking of all of the indices and measures we cite,
(35:41):
you could see, Tracy, how they like serve that purpose
of just like yeah, look this is what we measure against.
Speaker 3 (35:45):
Oh yeah, of course. I mean incentives matter, right, Like
that's something that we say over and over again on
this podcast and when it comes to accounting. Okay, just
to push back a little bit, but like there is
an argument to have standardized accounting rules so that we
don't always end up with companies running off and creating
community adjusted EBITDA and things like that. But on
(36:07):
the subject of incentives, I wanted to ask, you know,
I read David Graeber's Bullshit Jobs this year and it's
still sort of looming large in my memory, and I
guess my question is how much overlap is there between
the accountability issues that you describe in your book and
the way specific jobs, especially middle management are structured. And
(36:31):
I don't mean to trigger you because I know that
you mentioned in the book that you got into it
a little bit with Graeber over a different subject. But
if you want to talk about that too.
Speaker 4 (36:39):
Oh, I miss David so much because we used to wind each other up so badly, and I got into a different argument that's not mentioned in the book over bullshit jobs. Because it's just like, if you're saying that middle management is a bullshit job, then you're saying that the cerebellum is a bullshit organ. The middle management exists
(37:01):
precisely because of all the metrics, and all the financial
and non financial metrics on the chief executive's dashboard are
massive information reducing filters. The middle managers are the people
who carry the knowledge of the ways in which those
metrics can misrepresent reality and how to cure the problems
(37:24):
which arise when they do. You know, when someone's in
danger of making a decision. The first thing a bad
company does before it creates something like the Libor scandal or the seven three seven Max is thin out the
ranks of its middle management. And this particularly without wanting
to relitigate the arguments with David Graeber, particularly since
(37:45):
he's not around any more to answer back. In his
day job as an anthropologist, David was so subtle and
intelligent about the ceremonial roles of elders and people who
built consensus among hunter gatherers to decide on what the band would do. And then when you have those exact
(38:06):
same problem solving and dispute resolving jobs happening, you know,
in the offices at Bloomberg or at a law firm, suddenly he thinks that they're bullshit jobs. So that's a bit
of a personal hobby horse. But basically all those or
very many of those roles are actually the preservation of
(38:27):
the information systems and the memory of organizations.
Speaker 2 (38:32):
Who is Stafford Beer, and why does he loom so large in the story you tell about the world?
Speaker 4 (38:38):
Well, Stafford Beer was the father of management cybernetics. He was the guy who first said you can take the mathematics of information theory and apply it to industrial organization. He was also David Bowie's favorite management consultant. He was very, very influential on Brian Eno and the development of ambient music.
(39:02):
He was a hippie. He tried to, it's not clear that this was a joke, but he did try to
invent a computing pond where the growth of algae would
correspond to the solutions of differential equations.
Speaker 3 (39:15):
He's going to be my summer project. I got a
pond with algae.
Speaker 4 (39:19):
Yeah. The problem was that he used to feed them
iron filings to make them grow, and after a while
the entire pond became magnetic. But he was just this crazy,
larger than life figure who did these incredibly successful management
consulting assignments, but just somehow never quite was able to
(39:39):
get on with enough people in the corporate world to
really get his ideas across. And then he ended up
in Chile in nineteen seventy two with this incredibly romantic
but ultimately doomed project to reinvent socialism for the twenty
first century under the Allende government, which I mean realistically
(40:04):
it was never going to work because the computing resources
were completely disproportionate to the task. But it never got
a fair trial, obviously, because the Pinochet coup happened about
eleven months after he started the project.
Speaker 2 (40:15):
So why don't we talk a little bit more about, like, the current day? You know, I started in the
conversation like kind of not disagreeing, but questioning the premise,
like, the world has lost its... Has the world really lost its mind? Or is it just the three of us in this conversation getting old and angry? What do you see
(40:36):
like, when you think about applying this? You know, we talked about Boeing and you mentioned the Libor scandal. But when you look around the world today, and you look at however the world is apparently losing its mind, what are you seeing?
Speaker 4 (40:49):
I think the number one thing I'm seeing, And this
is a point where David Graeber had it absolutely right,
is debt. You know, we have so many cases at
present of companies where you have plenty of people who
know exactly what they need to do, what investments they
want to make, but they can't have any plans which
(41:12):
stretch out any further than the next debt repayment. And
to a large extent, that's because of leverage buyouts and
managements acting in anticipation of the risk of a leverage buyout.
But this really is a degradation of the higher functions,
(41:33):
the brain functions of the corporations of the anglosphere world.
The practice of firing middle managers because you don't know
what they do is also demonstrably, in my view, making
corporations stupid. It might have been that at the start
of the LBO boom in the seventies there were too
(41:53):
many of these guys on soft jobs with country club
memberships and private jets and whatnot, but we've clearly gone
too far in the other direction in my view. And
then we've got the frightening tendency of government organizations to
outsource absolutely critical functions, which means that all of the
(42:15):
knowledge of the systems that they're meant to be regulating
and dealing with goes out the door. What's an example of that? The best example currently, I think, is just the question of infrastructure building, for example in the UK, where
(42:36):
there's a river crossing to the east of London which has been responsible for generating the largest pile of
paper ever brought together in one place in the history
of humanity. And the reason for that is that the
people who are meant to be deciding what an appropriate
(42:57):
level of consultation is don't know anything about environmental impact
studies and building bridges anymore. So they commission reports, they commission reports from professional services firms, and professional services
firms want to generate repeat business, and so you've got
(43:17):
this situation where the people who are meant to be
seeing the whole system as a system don't really understand
it anymore because they've outsourced all of their engineering knowledge,
all of their economic and environmental knowledge. And as a result,
the UK, or the Department for Transport, is going
(43:38):
to be paying as much as ten times as much
as it should reasonably cost to build a bridge over
the River Thames.
Speaker 3 (43:45):
It's interesting that you single out debt as a sort
of deciding factor versus share price, because this is the
one we hear a lot about in the context of
corporate short termism, and people usually trying to hit a specific share price metric that may or may not be tied to their compensation.
Speaker 4 (44:05):
Yeah. I think I'm right on that. I know that
people disagree with me, but you know, we had share
prices in the fifties and sixties and we didn't have
this kind of problem of short termism. The difference is
that now we've got takeovers, particularly private equity and LBOs,
but also in general the use of debt in takeovers,
(44:29):
and that makes the share price more salient because any
incumbent management knows that if the share price falls, it
makes them vulnerable to a takeover. So to my mind,
I think it's not so much you know, the share
price as the exaggerated importance of short term financial metrics,
(44:50):
which is partly through the share price, but much more
just simply because of leverage.
Speaker 2 (44:56):
One of the things, you know, you mentioned the UK bridge and it being ten times more expensive than it needs to be, and the pile of paperwork. I mean, this
is probably the number one thing that like, you know,
people I follow on Twitter talk about all the time,
which is just how hard it is to build anything
in the United States and the interlocking systems of environmental
regulations and NIMBYs and everything else, and it's the big
(45:20):
challenge of the IRA, and it feels like, listening to this, it's just accountability sink after accountability sink is to blame.
Speaker 4 (45:27):
Absolutely, and it's accountability sinks being put up because
the people who were meant to be taking the decisions,
and who in the nineteen fifties and sixties did take
the decisions, are no longer really able to. They've not got confidence in their decisions. They're not sure they're
going to be able to defend them in litigation, and
it's mainly because they've lost their executive functions with successive
(45:51):
staff cuts and retirements and outsourcing contracts.
Speaker 2 (45:55):
Dan Davies, it's so great to catch up with you. A fascinating conversation, a fascinating way of thinking about the world. Highly recommend everyone check out your book, The Unaccountability Machine. Thank you so much for coming back on Odd Lots.
Speaker 4 (46:08):
Oh, thanks so much.
Speaker 2 (46:09):
Oh, it was a pleasure. Tracy, I really, I love talking to Dan. First of all, it's always, I just like hearing his voice. You know this. I do think, like,
(46:30):
accountability sinks is now going to be one of those
phrases that I'm just going to now start seeing everywhere.
And you know, there's a whole industry, like McKinsey, right? Like, that's, you know, again, someone else to lay off your workers, et cetera. Like, you just start
seeing how big that is everywhere.
Speaker 3 (46:44):
Well, this is what I was thinking. You know, you
brought in the US example of building infrastructure or other
energy projects, and I kept thinking back to Jigar Shah and
his point about the lack of institutional memory of how
to build nuclear power plants. Right, it's not necessarily that
it's so complicated to get environmental permits and things like that,
(47:05):
although that is certainly part of it. But it's also
that the people who used to do this haven't been
doing it for a long time or are no longer around.
And that kind of goes to Dan's point about middle management being the sort of, like, what's the word I'm...
Speaker 2 (47:22):
Thinking connective tissue.
Speaker 3 (47:24):
Connective tissue is a good one. Like, institutional memory.
Speaker 2 (47:27):
Yeah, no, it's totally true, you know, in defense of the NIMBYs. You know, I keep mentioning this New York documentary I'm watching, and I got to the, yeah, I want to watch that, you gotta watch it. But I got to the episode where it really talks about Robert Moses and just, like, plowing these big highways
through neighborhoods and putting up these like terrible, like terrible
housing projects that are like, you know.
Speaker 3 (47:48):
Sort of right, continuously prioritizing highways.
Speaker 2 (47:52):
That is someone who did not have the problem of NIMBYs or a million different interlocking constraints on him. Like, there are drawbacks to when someone has, like, too much autonomy, and it's sort of like, yeah, but now
it does seem arguably we've gone too far in the
other direction, in which everyone just clings to their accountability
(48:13):
sink and can't get anything done.
Speaker 3 (48:14):
Everything is a collective decision and therefore no one can be held responsible. Yeah, I feel like there must be a reasonable middle ground. And yet, I don't know. I'm trying to think if I know of, like, any organizations that have completely cracked, like, the nut of, just for a little while, yeah, collectivism versus individual responsibility. I don't know. Well, on that note,
(48:38):
shall we leave it there?
Speaker 2 (48:38):
Let's leave it there.
Speaker 3 (48:39):
This has been another episode of the Odd Lots podcast.
I'm Tracy Alloway. You can follow me at Tracy Alloway and.
Speaker 2 (48:45):
I'm Joe Weisenthal. You can follow me at The Stalwart.
Follow our guest Dan Davies. He's the author of the
book The Unaccountability Machine, Why Big Systems Make Terrible Decisions
and How the World Lost its Mind.
Speaker 3 (48:56):
Go check it out.
Speaker 2 (48:57):
His handle is at d squared Digest. Follow our producers
Carmen Rodriguez at Carmen Arman, Dashiell Bennett at Dashbot
and Kilbrooks at Kilbrooks. Thank you to our producer Moses Ondem.
For more Odd Lots content, go to Bloomberg dot com slash
odd Lots, where we have transcripts, a blog, and a
newsletter and you can chat about all of these topics
twenty four seven in our Discord, Discord dot gg
(49:19):
slash odd lots.
Speaker 3 (49:20):
And speaking of personal accountability. If you like odd Lots,
please leave us a positive review on your favorite podcast platform.
And remember, if you are a Bloomberg subscriber, you can
listen to all of our episodes absolutely ad free. All
you need to do is connect your Bloomberg account with
Apple Podcasts. In order to do that, you can find
the Bloomberg channel on Apple Podcasts and follow the instructions there.
(49:43):
Thanks for listening.