
June 9, 2025 • 56 mins

In episode 882 of CXOTalk, Michael Krigsman sits down with Kevin De Liban, founder of TechTonic Justice and former legal aid attorney, who reveals the shocking truth about AI's impact on vulnerable communities.

De Liban shares how 92 million low-income Americans now have critical life decisions, including healthcare, housing, employment, and government benefits, determined by algorithms that often fail catastrophically.

Drawing from his groundbreaking 2016 legal victory in Arkansas, where he successfully challenged an algorithm that devastated the lives of disabled Medicaid recipients, De Liban exposes the myth of AI neutrality and demonstrates how these systems reflect the biases and incentives of their creators. He explains why self-regulation and "ethical AI" initiatives often fail when they conflict with business interests, and why effective regulation is crucial.

What you'll learn:

  • The real scale of AI harm affecting 92 million Americans
  • Why AI systems aren't neutral decision-making tools
  • How algorithms denied healthcare to disabled people in Arkansas
  • Why ethical AI initiatives fail without enforceable accountability
  • Practical steps technology leaders can take to prevent harm
  • The expansion of AI monitoring into middle-class professions
  • Why regulation benefits ethical companies
  • How to determine if AI is appropriate for high-stakes decisions

This conversation challenges conventional wisdom about AI adoption and offers essential guidance for executives, developers, and policymakers navigating the intersection of technological innovation and social responsibility.

Whether you're a C-suite executive, technology professional, policymaker, or concerned citizen, this discussion provides crucial insights into one of the most pressing issues of our time: ensuring AI serves humanity rather than harming those who need protection most.

=====

Subscribe to CXOTalk: www.cxotalk.com/newsletter

Read the episode summary and transcript: www.cxotalk.com/episode/ai-failure-injustice-inequality-and-algorithms

Learn more about TechTonic Justice: www.techtonicjustice.org


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Today on CXOTalk episode 882, we're examining a critical issue
that affects millions, yet is largely invisible to most
business leaders: how AI is failing our most
vulnerable citizens on an unprecedented scale.
I'm Michael Krigsman, and I'm delighted to welcome our guest,

(00:22):
Kevin De Liban. As founder of TechTonic Justice,
Kevin has witnessed first hand how AI systems determine who
gets healthcare, who finds housing, who gets hired, and who
receives government benefits. His groundbreaking report,
Inescapable AI, reveals that 92 million Americans now have

(00:43):
fundamental aspects of their lives decided by algorithms,
often with devastating consequences.
He'll share how these systems fail, why traditional
accountability mechanisms don't work, and what this means for
your organization. We're discussing real systems
built by major vendors causing real harm right now.

(01:08):
TechTonic Justice is a new nonprofit organization launched
last November to protect low income people from the harms
that AI causes them. And I come to this work after 12
years as a legal aid attorney representing low income folks in
all sorts of civil legal matters.
And it was there that I first saw the ways that these

(01:29):
technologies were hurting my clients' lives.
And I was involved in several battles, and won several of
them as well, and started understanding that this was a
bigger problem that needed more attention and more focus.
Kevin, you were an early pioneer, actually winning cases relating
to AI harms. It was really in 2016 when I had

(01:51):
clients who were disabled or elderly on a Medicaid program
that pays for an in home caregiver to help them with
their daily life activities so that they can stay out of a
nursing facility. And this is better for their
dignity and independence and generally cheaper for the state
as well. And what happened is the state
of Arkansas replaced the nurse's discretion to decide how many

(02:15):
hours a day of care a particular person needed with an
algorithmic decision making tool, and that ended up
devastating people's lives. People's care hours were cut in
half in some cases, and it left people lying in their own waste,
left people getting bed sores from not being turned, being
totally shut in, just intolerable human suffering.
And we ended up fighting against that in the courts and also with

(02:37):
a really big public education and kind of community activation
campaign. And we won.
And that's one of the relatively few examples still to this day
of kind of successful advocacy against sort of automated
decision making. Algorithms, AI, are neutral
mechanisms, neutral devices. They're just math without

(02:59):
feelings, without interests, without malice.
So given that, what is the problem here?
I would challenge some of the assumptions even in that
question of them being neutral, right?
I mean, they're programmed by humans.
The statistical science that underlies a lot of this stuff is

(03:19):
determined by humans using various choices that they have,
using historical data that they have.
And that isn't a wholly objective exercise.
And so I think what you really have to look at is the purpose
for which the technology is being built to understand it and
understand a lot of like even the technical aspects that
underpin it. And in my world, when we're

(03:40):
talking about low income people and sort of automated decision
making for them, these are not neutral technologies at all.
These are designed oftentimes to restrict access to benefits or
to empower whoever's on the other side of the equation,
whether it's a landlord, a boss, a school principal, a government
official to do something that they want done.

(04:00):
That might not be what the person who I'm representing is
interested in. So I would challenge that
premise first. So you're saying that the design
of the system is intended to cause harm.
Is that correct, what I'm hearing you say?
In some cases, it's intended outright to cause harm.
In some cases, it's just intended to, you know, sort of

(04:25):
facilitate a decision by the decision maker, right?
Make a landlord's life easier, make a boss's life easier, make
a government official's life easier.
The problem is that making their life easier ends up
making somebody else's life harder.
And so I think that's where the push and pull of this is:

(04:45):
there is the intent issue. There is very clearly stuff
that's built to be harmful. But then there's also this gray
area where nobody is, you know, scheming in a dark room,
plotting to take over the world and destroy people's lives.
But the nature of their power positions and the decisions that
they're making and what makes their life easier ends up
translating into that for low income people.

(05:07):
Can you give us an example of where the goals or the
incentives are misaligned between the developers of this
technology, of these technologies or algorithms, and
the, can we say, the intended recipients?
Is that even a correct way to phrase it?

(05:27):
You have the hiring process, for example: with most big companies
now, it's riddled with AI, everything from resume
review and screening to video interviewing to oversight
once somebody gets the job. Right? There's nothing inherent
in that process that really benefits the person who's

(05:48):
seeking work or is an employee, right?
That's all intended to facilitate the life and the work
of the employer, the bosses. Same thing a lot of
times with, you know, public benefits: you've got
really dedicated public servants, but
oftentimes they're unsophisticated in technology

(06:09):
issues. They're thinking, OK, well, this
new piece of technology is going to suddenly help expand our
limited capacity, so let's implement it.
And then they don't have what they need to do that in a non
destructive way. And so the people who end up
bearing the risk of, you know, sort of their own lack of
knowledge or incompetence are the low income people that are

(06:31):
subject to the decision making. These systems are complex.
They are developed with algorithms and data as well.
Can you isolate where a primary source of the problem

(06:51):
lies? I realize underneath it all,
you have a human intention, trying to solve a problem,
trying to achieve a goal. But if you can, drill into
that a little bit, kind of dissect this for us.
There were a couple aspects to the way the algorithm worked.
One is the mechanics of it, right?
What inputs turn into what outputs?

(07:13):
And that's hard enough to discern, but then there's the
reason that those inputs are chosen to lead to those outputs,
right? Like why do you look at this
factor and not this factor? Why is this factor shaped to
look back three days instead of five days?
All of those things, those are all human decisions. Now,
they're informed by, in the best cases, statistical science.

(07:38):
In a lot of cases, there is no science; science is a bad descriptor for
that. A lot of times it's junk, right,
that somebody just invented and came up with.
But in the best cases in statistical science, that still
is riddled with various assumptions.
And so in our example in Arkansas, for
example, whether or not somebody could use the bath on their own

(08:07):
might not have been a factor that the algorithm considered.
And that's weird, right? I mean, we're talking about home
care for elderly or disabled people.
Being able to bathe on your own should be one factor that decides
how many hours of care you need. It wasn't. Or your ability to
prepare meals wasn't a factor. And so you see this disconnect
of like, we know instinctively or, you know, through medical

(08:29):
discretion and judgment, how to answer this question of how much
care somebody needs. It might be imprecise, but we
know, we know what we should be looking at.
But the algorithm didn't do that.
They looked at a lot of factors that weren't kind of intuitive,
and then they ignored a lot of factors that were intuitive.
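To make that disconnect concrete, here is a minimal hypothetical sketch of a needs-assessment score that simply omits key activities of daily living. This is illustrative only, not the actual Arkansas tool; every factor name and weight in it is an assumption.

```python
# Illustrative sketch only: a toy needs-assessment score, NOT the actual
# Arkansas algorithm. All factor names and weights here are hypothetical.

def care_hours(assessment: dict) -> float:
    """Toy rule mapping assessment answers to daily care hours."""
    score = 0.0
    # Factors this toy model happens to include (hypothetical weights).
    score += 2.0 if assessment.get("uses_wheelchair") else 0.0
    score += 1.5 * assessment.get("falls_last_90_days", 0)
    # Factors a clinician would weigh heavily are silently ignored,
    # mirroring the disconnect described above:
    #   assessment["can_bathe_alone"], assessment["can_prepare_meals"]
    return min(24.0, score)

# Two very different people get identical hours, because the decisive
# factors were never part of the model.
person_a = {"uses_wheelchair": True, "falls_last_90_days": 1,
            "can_bathe_alone": True, "can_prepare_meals": True}
person_b = {"uses_wheelchair": True, "falls_last_90_days": 1,
            "can_bathe_alone": False, "can_prepare_meals": False}
print(care_hours(person_a), care_hours(person_b))  # 3.5 3.5
```

A model built this way can look precise while being blind to exactly the factors a nurse would check first.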
How does this come about? Is it simply lack of

(08:51):
understanding of the subjects, of the target?
What happens here?
Some of it is real ignorance about, you know, the lives of
poor people and the ways that decisions are made and the
impact of the decisions. Some is ignorance about certain
program standards or laws or anything else.

(09:13):
I've seen that a lot in the technology.
Some of it is the lack of having to get it right.
You know, for a lot of the developers of these algorithms
in particular, they're shielded from any sort of consequences
for their actions. And so they do what they think
is best or what they can sell to a client, and that's that.

(09:34):
And then the clients that are using it, the government
agencies or the employers or whatever, they might not be
vetting it or, you know, they are also insulated from the
accountability because if it hurts poor people, what's going
to happen to them? Like what's going to happen to
the person who decided to use it?
I mean, poor people often times are not a particularly empowered

(09:55):
political bloc. There usually aren't scandals
that end up resulting in lost jobs or lost elections for
officials who are in charge of this stuff.
And so it's easy to get away with really harmful actions just
because you're doing it to people who don't have a lot of
power that's ready at hand. You know, low income communities

(10:16):
have always been super involved in advocating for themselves and
organizing and everything else. But that's a huge effort, right?
And takes like a concentrated movement.
And it's not like you can just call your elected official and
you have that kind of access and say, hey, this is a problem.
Can you take care of this for me?
Or organize a lobbying effort to get rid of something?
Now, if you're doing something with poor people and it hurts

(10:38):
them, you're not going to face immediate consequences for the
most part. Folks who are listening, I want
you to ask questions. We have some questions that are
starting to come in on LinkedIn and Twitter, and we're going
to get to them in a couple of minutes.
If you're watching on Twitter, just insert your questions into

(11:00):
Twitter using the hashtag CXO Talk.
If you're watching on LinkedIn, just pop your questions into the
chat. For those of you who are
developing these kinds of systems and we hear a lot of
discussion of ethical AI and responsible technology, here's
an opportunity to ask somebody who's dealing with the actual

(11:23):
fallout of this. So ask your questions. Kevin,
what about the scale of the
problem? How big an issue is this,
actually? All 92 million low income people
in the United States have some key aspect of their life decided
by AI, whether that is sort of housing, healthcare, public

(11:43):
benefits, work, their kids' school, family stability, all of
these issues. Not everyone might have all of
those issues decided by AI, but everyone has at least one of
those issues decided by AI. And then it extends beyond low
income people as well into higher income things.
So there have been a lot of stories, for example, about

(12:03):
employer use of AI, sort of the screening aspect and then sort
of the bossware management aspect of it being used against
finance executives or against hospital chaplains, against
therapists. Recently there was a story about
Amazon programmers who are now subjected to AI based oversight

(12:24):
and measurement and it's affecting their lives.
So even though a lot of this stuff is most prevalent and
probably most severe in the lives of low income people, it's
happening to all of us. Healthcare is another great
example, right? If our doctor recommends a
treatment for us, many of the more expensive treatments are
subject to health insurance company review prior to being
offered, and those health insurance companies are using AI

(12:46):
generally to deny those requests.
We all know about United Healthcare and their use of
algorithms that they say are neutral,
and "we don't do that." But, you know, you hear
doctors complaining about how algorithms are interfering with
their ability to render the kind of care that they want.

(13:08):
And so it becomes pretty evident that what was once targeted
at lower income people now, through the acceleration of AI,
is broadening and touches all of us at this point, I would
imagine. In one of the examples of the
health insurance companies, they ostensibly had a human reviewer

(13:32):
reviewing the AI's outputs, but when the investigation
dug into what that human review looked like, it showed that the
doctor was approving something like 60 prior authorization
requests a minute, like they had one or two seconds per one.
There's no human reviewing that, right?
And it's bad faith to assert otherwise.

(13:53):
And that's, I think, one of the key data points, and there
are a lot of others that help us show that this isn't just all
purely accidental, this can't be just attributed to mistakes or
errors, that there's a lot of thought and intention that goes
behind, you know, developing and implementing these systems that
are denying people really fundamental needs.

(14:16):
Subscribe to our newsletter, go to cxotalk.com, check out our
newsletter, and check us out for our next shows.
We have great shows coming up. Let's jump to some questions.
Let's begin with Arsalan Khan on Twitter.
Arsalan's a regular listener. Thanks so much, Arsalan, for
your regular listenership. And Arsalan says this.

(14:39):
He says, whoever sets the AI guardrails has the power, but
who checks if those guardrails are equitable?
And he asks, why don't we have a Hippocratic Oath for us as IT
professionals? He's an enterprise

(14:59):
architect.
So, this notion of whoever sets the AI guardrails has the power,
but who checks that the guardrails are right, are
equitable? The Hippocratic Oath idea is not
a meaningful source of systemic change to insulate society from
these harms because, you know, doctors have Hippocratic oaths.

(15:24):
And while that might be useful, it doesn't a lot of times
prevent some of the abuses in medicine either. Or, lawyers have
obligations, and it doesn't prevent us from going
and doing all sorts of random harmful things.
So I think what you need is actual regulation to reinforce
kind of the guardrail notion, right?
Safeguard people from having any exposure to the harms in the first

(15:47):
place, or, if there are because those kinds of institutional and
ethical safeguards fail, then there are real consequences for
that. They go beyond somebody just
violating their oath and feeling bad about it.
So I don't know if that's getting at the full essence of
the question, but that's where some of my thoughts go.

(16:10):
And also, not everybody's as ethical as the person asking the
question either, right? And some people are perfectly
happy to just do whatever the client wants or program the
system in whatever way is going to make it most profitable and
attractive. And as long as they don't have
anything holding them back formally, officially, real
consequences and accountability, we're not going to get any major

(16:30):
change. So self policing is not
sufficient in your view. Definitely not.
And I know those questions are asked in good faith and are
posited in good faith. But the people who are pushing
that at the policy level are definitely not pushing it in
good faith. They don't want any
accountability. They don't want anything that
would restrict how they use it, and they're perfectly happy to

(16:54):
shunt off all the risks and all the dangers of their systems
being bad or going wrong or doing something destructive to
the people who are subject to those decisions.
Are you talking about government policy or corporate policy,
people designing products? Government policy. The tech
industry has been vociferous in their opposition to any sort of

(17:18):
meaningful regulation of AI, automated decision making
technologies and so forth. And that's the reason why we
don't have any real societal protections against this stuff
outside of existing laws. And even now they're targeting
some of the European Union's restrictions, which are modest,

(17:40):
but big tech doesn't like those. So that's what I'm talking
about: sort of how corporate interests end up
shaping their policy positions in ways that are detrimental to
really all of us that are not in that world, but
particularly low income people. You also have many of the major
tech companies pushing forth their own ethical AI initiatives

(18:08):
and lots of discussions around the data and building
bodies of data that try to weed out bias.
I mean, you see this happening everywhere.
That's true, and there are a lot of
good people who share my values in these companies and are
trying to make the companies do as right as possible.

(18:30):
But I think when the rubber hits the road, we've seen repeatedly
that the folks speaking out for ethical uses are sidelined.
You know, a few years ago at Google, for example, the whole
ethical AI team, I think, was fired because they wanted to
publish a paper that Google didn't want published.
Or more recently, when, you know, Twitter was taken over by

(18:53):
its current owner, the whole ethical AI team was disbanded
instantly. You have Google's retrenchment
of its ethical AI things, and now its technology is being
deployed in unemployment hearings, right, for people who
are desperate for benefits, even though we know that a lot of the
AI technology involved can be faulty.

(19:15):
So again, you do have these ethical components within
institutions that are pushing, I believe in good faith a lot of
times, for changes, but the people who are pushing for that
don't have the same interests as the institutions who are
allowing it. A lot of times the institutions
are allowing ethical AI because it allows them to go out and
talk about their concept of social responsibility.

(19:36):
But we see repeatedly, when the rubber hits the road, ethics
will go by the wayside and the company's profit incentives and
motives are going to be what dictates what happens next.
So basically, money talks, nobody walks.
Yeah, I mean, it's complicated, right?
Because, again, there's a lot of good people in there that are
pushing really hard for these major institutions that have
lots of power to do right. And the fact that the

(19:58):
institutions allow that to happen is noteworthy.
I think, yeah, at the end, it ends up
being the money that talks.
I will say that you are up against the marketing budgets of
some really, really large companies here.
I am. This is going to change
everything though, Michael. See, CXOTalk,

(20:19):
this is going to be the entryway.
This is better than all the marketing
budgets of the big tech companies right now.
Let's jump to some other questions.
And I'm seeing some themes developing in the questions
here. And this next one is from Preeti
Narayanan. And she says, given your work

(20:42):
exposing large scale harm caused by AI in public services, what
practical guardrails would you recommend to technology leaders
like her, like many of our
enterprise AI systems so we don't unknowingly replicate
those same failures at scale? Basically, it's the same

(21:06):
sentiment as Arsalan Khan just brought up.
What can we, as the people creating these systems, do?
OK, one thing is push for regulation, right?
And push for meaningful regulation of what it is that
you do, because that way it bakes in consequences for
getting it wrong. And as long as you have good

(21:27):
faith and are doing things the right way, those consequences
shouldn't be terribly severe. You shouldn't be exposed to them,
you know, in a way that's wholly destructive.
So I think pushing for regulation is actually in your
own interest. But in the context of developing a
particular product, you can ask: is this a
legitimate use for AI? For example, should we be using

(21:49):
AI to deny people, disabled people, benefits and home care?
That might not be a legitimate use of AI.
And if it isn't a legitimate use, maybe we shouldn't do it,
and we should just say that's off limits.
We're not going to do that no matter how much somebody's going
to pay us because we just don't believe that's fair.
Now, if it is a legitimate use, and I acknowledge there's a lot
of kind of gray areas in this, then you've got to have a really

(22:10):
intensive development and vetting process.
What are you doing? What data are you
using? Are you projecting out the
harms? Are you consulting, in a
meaningful way with actual oversight,
the people who are going to be subjected to these decisions? Do
they have some sort of say in how it's developed, in a way that
would actually stop you from moving forward or force a

(22:31):
different development of it? Are you willing to disclose
things that might traditionally be considered trade secrets or
intellectual property in the interests of having more public
accountability? Are you willing to ensure
ongoing oversight so that if your product is developed or
deployed, it's deployed, first of all, in narrow, short, phased

(22:56):
ways so that we can test the harm before it's applied to
everybody? And then two, are we willing to
look over time in a three month span and see, hey, does our
projected impact, which we have documented and disclosed to
the public differ from what the actual impact is?
And if so, is there an automatic off switch?
Is there some way to course correct that?
And all of those things, when combined with meaningful, you

(23:20):
know, legislation that means that people have
enforceable rights if they're hurt by it, would lead to reduced
chances of harms on systemic, society wide scales.
If I were a corporate leader, you made the assertion that we
should question whether AI is the appropriate decision making

(23:42):
tool to use in some of these situations that could cause
real downstream harms. But I would push back and I would
say, Sir, you don't know what you're talking about, because AI
is a decision tool. It is not autonomous.
It's overseen by humans. The data that we collect is

(24:04):
carefully vetted to remove bias. And it's unfortunate that these
downstream harms are happening, but it's not a result of our
decision making. There are systemic underlying
societal issues and frankly, the AI is making the right decision.
I would challenge almost everything that you said there,

(24:24):
Michael, from, you know, the sophistication of the vetting
process. The people who are developing
enterprise software might be doing a better job when the
people who are going to buy their software are wealthier or richer
than when it works for low income people.
So first of all, I think like who the audience is, who's going
to be subjected to this dictates a lot of how careful the kind of

(24:48):
development process is. And if it's going to be deployed
against poor people, the development process doesn't need
to be as intensive probably as it would be for corporate
clients, right? So I think there's that.
So a lot of the so-called science in AI is really junk
when it applies to poor people. One great example of that is

(25:11):
identity verification, for example, during the pandemic.
And hopefully some of your listeners will have
some frame of reference. But during the height of the
pandemic, right, masses of people were unemployed.
Congress expanded unemployment benefits to help people float
during these, you know, desperate times.
At some point, states, encouraged by the federal

(25:35):
government, implemented ID verification measures,
algorithmic ones. And so what they would do is
they would run every active claim and every application that
was outstanding through these ID verification algorithms.
And the algorithm would flag claims that it noted as
suspicious. And then what would happen is
the person who was flagged would have to present

(25:56):
physical proof that they are who they say they are.
That happened. And then still the state didn't
have capacity to process that verification.
And so you ended up with millions and millions and
millions of people who are in desperate circumstances, can't
keep their lights on, can't pay their rent, can't get
school supplies for the kids, who had their benefits stopped or

(26:19):
delayed by months and months and months because of this identity
verification algorithm. Now what would happen?
How did it work? One of the factors is, are you
applying from the same address as somebody else, including with
apartment buildings? So if I live in unit 101 and
somebody else is applying for unemployment benefits that lives

(26:39):
in unit 303, both of us are flagged.
That's ridiculous. That's somebody in their
basement coming up with some junk that they think
would be associated with fraud. There's nothing statistical
about that. There's nothing scientific about
that. That's somebody just inventing
stuff, right? But they invent stuff, and it
causes millions and millions of people desperation that you
couldn't imagine. I had clients who were calling

(27:01):
with active mental health crises, talking about self harm because
they couldn't get unemployment benefits even though they were
who they said they were. And they showed that to the
state. So that's an example of, you
know, maybe some companies care more
than others. But here, when the rubber hit the
road, it didn't matter. And ultimately, studies that

(27:23):
came out afterwards, assessing sort of the validity
of these tools, showed that, for the most part, they
caught eligible people, right? They weren't targeted narrowly
to ensure that we're only getting the few
that are actively suspicious. No, they ended up catching essentially
everybody and then just leaving folks to try to

(27:46):
wade through the mess on their own.
And that's just not, you know, that's not acceptable.
There's no justification for that kind of stuff.
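For readers who want to see how a rule like that over-flags, here is a minimal hypothetical sketch of a same-address fraud flag. It is not any state's actual system; the normalization step that discards unit numbers is the assumed flaw being illustrated.

```python
# Illustrative sketch only: a naive "same address" fraud flag of the kind
# described above. Hypothetical code, not any state's actual system.

def normalize(address: str) -> str:
    """Naively keeps only the street portion, so 'Unit 101' and
    'Unit 303' in the same building collapse to one address."""
    street = address.split(",")[0]
    return street.strip().lower()

def flag_duplicates(claims: list[dict]) -> set[str]:
    """Flags every claimant whose normalized address appears more than once."""
    seen: dict[str, list[str]] = {}
    for claim in claims:
        seen.setdefault(normalize(claim["address"]), []).append(claim["id"])
    return {cid for ids in seen.values() if len(ids) > 1 for cid in ids}

claims = [
    {"id": "A", "address": "500 Main St, Unit 101"},
    {"id": "B", "address": "500 Main St, Unit 303"},
]
print(flag_duplicates(claims))  # {'A', 'B'} -- both neighbors flagged
```

Because the unit number is thrown away during normalization, two unrelated neighbors collapse to one "shared" address and both get flagged, which is the failure mode described above.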
Michelle Clark on LinkedIn says, can the problem of biased data be
solved? And let me just reframe that.
How do you manage the fact that people are struggling to
have data that is lacking bias? I've spoken with many of these

(28:08):
folks on CXOTalk, but it's a really tough challenge from a
technical standpoint.
So what do we do about that biased data?
Biased data is only one part of the problem, right?
There are other parts of the problem.
You can have unbiased algorithms that still cause massive harms
and that I think would still be illegitimate in a lot of ways.
So we want to make sure we talk about the risk in more ways than

(28:31):
bias. But bias is a big one.
And when we talk about it, there have been various ideas about
debiasing data. And to be fair, I don't have the
full technical background to understand the statistical
science behind all of the different ways and which is the
best at doing what. So I don't want to claim
otherwise. But what I do understand is that

(28:53):
there's, you know, sophisticated work, like trying to get more data
sets that are validated, trying to account for historical
exclusion, again using the data in real world examples, but ones that

(29:13):
don't have real world consequences and so forth, so
that you're hopefully getting better data.
So I think all that is very much possible.
But you know, again, I think the best test against
biased data is going to be, you know, once it's out in the

(29:36):
world, are you going to face consequences for what you put
out there, right? And if you are going to face
consequences, then you're going to make sure, or you're going to
do your very best efforts, to ensure that your data is not
biased in a way that's leading to unfair outcomes for folks.
Self regulation is not sufficient regulation in this
case. Yeah, exactly.

(29:57):
We see the bias example all the time, right?
There's the obvious healthcare
examples about who gets
transplants or, you know, Black folks' pain being treated
as less real than that of people who are white,
and various other examples in the healthcare context of AI
that's deployed with, you know, bias baked in, the ad targeting

(30:17):
stuff from social media, like all of these things.
And then there's another deeper question, which is if you can't
figure it out, if you can't debias your data, maybe you
shouldn't be using it. Maybe what you're trying to do
is not so important that you're going to go out and reproduce
long standing societal inequities with your technology.

(30:38):
Maybe the money is not worth it. That's a value judgement, I guess,
for every person and every company to make.
But of course everybody is going to say, well, we are careful.
That's the point. I mean, I think this comes back
to one of my points that, you know, ultimately meaningful,
robust, enforceable regulations are in your

(31:01):
interest. If you are a company that is
committed to doing things right, subjecting yourself to
accountability is going to be a competitive advantage, right?
Because if you have other people who are not doing things right
and they can be subjected to lawsuits that are consequential,
they can be subjected to regulatory oversight that's

(31:22):
meaningful, that's going to be a competitive advantage for you.
You can say, look, we are not caught up in any of that stuff.
They are. And so we're a safer bet.
We're a better bet. You can tap the societal value
that you provide, all of those things.
So I think ultimately regulation is in your interest because it
creates a new competitive space for you, a competitive surface,

(31:43):
I guess I'd rather say. I just want to mention for folks
that are interested in the technical
underpinnings of data and bias, just search on the CXOTalk site
because we have done interviews with some of the leading
technologists in the world who are focused on this problem.

(32:05):
So just search for data bias andso forth on cxotalk.com.
And oh, by the way, while you're there, you should subscribe to
our newsletter so we can keep you up to date on shows like
this because we have incredible shows coming up.
Our next show, not next week, the week after, is with the Chief
Technology Officer of AMD. So subscribe to the newsletter

(32:29):
we have. Our next question is from Greg
Walters, who's another regular listener.
And Greg, thank you. And Greg says AI is not like old
school digital transformation. Broadly,
can AI help raise us up out of low income?
No, not with current incentive structures in the current system

(32:52):
that we exist in. People always ask me like, what
about AI for good, right? Like what can we do that would
advance justice? There's one example I always
like to offer, which is with public benefits, say Medicaid or
SNAP, which is Nutrition Assistance.
The government knows most of the time what income and assets

(33:16):
people have, right? That is, that information is
accessible to them in some form. They know they could make
eligibility decisions oftentimes without any, or with minimal, kind
of involvement from the person who would qualify for the
benefits. And so if you could build a
system that would accurately, fairly, consistently make those
eligibility decisions, minimize paperwork and other burden

(33:40):
on folks, that would be a
wonderful net good that would do
more good than 100 legal aid lawyers in our lifetimes ever
have tried it, big government vendors have tried it, and it
repeatedly fails in the same way.
Why does it fail? Because of failed accountability
mechanisms, right? You don't have political
accountability, as we talked about, because hurting poor

(34:00):
people generally isn't a scandal that's going to get anybody
booted out of office. You don't have market
accountability oftentimes in the government vendor contract
context, because there are very few government vendors of the
size needed to be able to compete with one another.
But even beyond that, you have market failures in terms of
transparency of how your product works and what kind of public

(34:21):
oversight it's subject to. And then you have no legal
accountability because the existing laws that we have,
while they have been used effectively by advocates like
myself, are limited in scope and can only get a certain amount,
a certain kind, of relief. A lot of times I can't get money
damages for the suffering people endured; you can just get
a judge that tells the state or that tells the vendor to change

(34:44):
what they're doing. And so you have all these broken
accountability mechanisms, which means that even with this good
use, right, helping people get the healthcare that they are
eligible for, you don't see that brought about in real life.
And so if you can't do something like that, you're not going to
do anything else in terms of alleviating poverty at scale.
You can have some cool projects like in the legal world, there's

(35:07):
like know your rights projects, right?
Everybody's had a, hopefully everybody's had a bad
landlord at some point in their life, right?
Where you needed to request repairs or ask for your security
deposit back after you left and they were trying to hold on to
it. There have been some cool AI
based tools that help people do that.
And that's cool stuff. It's great, but you know it's a
grain of sand on the beach that borders the Pacific Ocean,

(35:33):
right? Like it's cool, but it's not at
scale. This leads us to an important
question from Trenton Butler on LinkedIn, who is a community
advocate, organizer and project manager.
And Trenton says this for those of us committed to ensuring

(35:53):
these tools are used ethically, how can we get involved,
especially if one does not come from a law or technology
background? An important aspect is
there's a lot of power building and community organizing that
can be done. You know, some of the AI stuff
happens at a very local level, right?
Some school districts, actually about half of school districts

(36:14):
use AI to predict what kids might in the future commit
crime, right? And then target them for law
enforcement harassment, or terror in some cases, right?
That's something where you could find out as a citizen.
You don't need to be a lawyer. You can do open records
requests. You can go to school board
meetings. You can ask people, hey, is AI

(36:35):
being used here, and how does it work?
And if it is and it looks bad, and most of the time it is bad,
you can help organize people to get involved.
Another local fight is data centers.
These are a big deal; they're
the way that all the data that AI depends on is processed.
They're subject to local land use laws, local regulation

(36:57):
around utility prices and other things.
So there are a couple ways really at home that you can get
involved in this: building knowledge yourself, building
knowledge among journalists and the public, holding meetings,
getting your neighbors involved, and all that stuff.
And it can be daunting. And there's a huge gap in

(37:17):
helping people do that right now.
And that's one of the reasons TechTonic Justice exists.
So, in the self interested plug, please follow us, please
stay in contact. And as we're building out, it's
just me, and my first two employees joined last month.
So we're still very much in the building phase.
But as we get more established, we want to be working in
partnership with folks who want to be engaged around these

(37:38):
issues. So please stay up with us.
We have another question now from Twitter and this is from
Chris Peterson, who says, what agency, or is there just one,
would you suggest as the AI
Ombudsman in the US? He says also that for folks in

(38:02):
charge of big AI, 99 plus percent of us are lower income
in quotes. There is no one ombudsperson
kind of yet around AI. And I mean, that's an
interesting idea in terms of meaningful accountability
because there are ombuds people in healthcare and in nursing
homes and other similar entities.

(38:26):
It's a huge gap. That's part of why we exist,
right, to be focused people on the ground, right?
Like I was a lawyer working with hundreds and hundreds of low
income people to try to fight this stuff.
So I think in the ecosystem, sort of the nonprofit ecosystem,
there are few organizations that are trying to build up the
capacity to do some of this stuff to watchdog the use of AI.

(38:51):
And then there are a lot of established organizations that
are more focused on kind of the policy level.
So there is no one ombudsperson. In terms of the other aspect of
the question, I guess I would need more context about what it
means that 99% of us are subject.
Maybe it's that we're subject to the big tech.

(39:12):
What he was saying is that it's the billionaire question.
Now is the time to get involved before these technologies become
entrenched as legitimate ways to make decisions about these core
aspects of life. Because even though AI a lot of
times purports to be, or at least its

(39:33):
hype men purport it to be, kind of this
objective way to make decisions. Whose objectivity is it, right?
If it's always limiting access to benefits, if it's always
making housing or jobs or education harder to get, then
it's not really objective. It's, you know, the people
who are developing or using the AI, it's achieving their ends.

(39:57):
So now is very much the moment for it, I think, because this
field is relatively new as sort of a
social phenomenon and a social movement.
There isn't a lot of the infrastructure that needs to be
there to help people get organized and engaged around it.
So a lot of my answers are probably unsatisfying.
It's like, well, talk to your community, organize around it,
stay up with TechTonic Justice, these kinds of things.

(40:19):
Because that's what we're trying to build, is build the
infrastructure for people to be able to channel
their concerns, their frustrations, their energy
towards ensuring something that looks more like justice.
But aren't you in a way trying to turn back the clock to a
simpler and easier time before we had AI? And AI is not going

(40:44):
away, and its growth is going to continue to make incursions into
every aspect of decision making. You
talked about earlier the PR budgets, right, of big tech,
and that's the overwhelming sense: that it's inevitable.
But is it, really? Right? Why can't a nurse make a

(41:05):
decision about how much home care a disabled person needs?
Why is that not viable anymore? Why shouldn't that be the case?
Why can't we use technology in a way that supports human based
decision making, rather than essentially making the decision
for us with, like, cursory human oversight, if that?

(41:25):
And I think those have to be the questions: what is the
legitimate use of AI? And then even where the use is
sort of legitimate, let's go through all the vetting we
talked about earlier. But let's also talk about the
bigger picture questions, in terms of what it
means for the Earth, right? We know that AI has
environmental consequences. There's debate about how many
liters of water each ChatGPT, you know, prompt uses or whatever,

(41:48):
but like we know that it's draining water in certain places
where water is scarce. We know that it's responsible or
at least correlated with energy price increases.
We know it's correlated with the use of non renewable energies.
So you have to factor all these things into the equation in
terms of its societal value and its societal costs.

(42:09):
And it may be that if we actually do a concerted,
reflective effort that accounts for all these externalities, we
realize, you know what, this isn't worth the harm.
Maybe we shouldn't do it or we should only do it in these
limited circumstances. And I think that's what we have
to be engaging in. And that's why I always
reject the frame of inevitability.

(42:29):
I'm a practical person. I generally need to solve
problems for my low income clients and that doesn't always
allow me to like, be pie in the sky principled.
But we can be pie in the sky principled
while also being practical, and start thinking, like, is this
really worth it? Is the productivity gain really
worth all the cost? And so far, even in the

(42:50):
corporate sphere, it hasn't been clear that there are really net
productivity gains, particularly when you factor in the required
human oversight for its continued use.
And so I don't think it's inevitable.
I think it will be inevitable if we don't, in the next, you know,
decade or two, really reckon with the implications of it.

(43:10):
My friend, you have an uphill fight.
I have to say on this point, youand I have to agree to disagree
because as I look out over the developments of AI and automated
decision making, I cannot see, I cannot fathom, and maybe I

(43:32):
reflect kind of a typical technology viewpoint, but I
cannot fathom that AI is not going to grow, much as the steam
engine influenced every facet of our lives.
And you can say the steam engine also caused a lot of problems.
Potentially. I mean, I think in society, it's
not like we just accept technology inevitably without,

(43:54):
you know, restricting its use. I mean, certainly nuclear energy
has had significant use restrictions around it and its
development and where it can be used and everything else.
Cars have had a lot of restrictions around how they can
be used. Everybody, I'm sure, thought
Ralph Nader in the 70s was ridiculous for, you know,

(44:17):
advocating for seatbelts, right? And now that's just an accepted
facet of cars. Now that doesn't take care of
all the harms that cars are potentially causing, right?
And I'm not saying that it does,but it's one example of of
movement that way. And all of these things are
have, you know, essentially corporate power and lots of

(44:38):
money going against, you know, people who seem like they're in
the way of inevitability. But we have to be a little bit,
you know, what's the word? We have to believe that
something more is possible. Otherwise we just resign
ourselves to accepting the worst version of whatever it is that
we're fighting against. And that's not a

(44:59):
concession I'm willing to make. Like I'll fight like hell, maybe
I'll lose, but I bet you that we're better off because of the
fight than if nobody fought. Let's jump to another question,
and this is from Ravi Karkara on LinkedIn.
He says, oh, and I should mention, he is co-founder and author
at the AI for Food global initiative.

(45:22):
And he says, how should global stakeholders navigate the
ethical challenges and data governance differences posed by
China's AI strategy, particularly its state centric
data policies, while promoting international norms for
responsible and transparent AI development?

(45:44):
Not sure how much expertise you have in China, but thoughts
on kind of a global perspective? In the global context, the AI
discussion becomes even more interesting because there's a
lot of people who are pushing AI as a solution to kind of global
poverty, right, and inaccessible healthcare, right.

(46:04):
You get the story of, like, people in remote villages, you
know, in the majority world who are now suddenly able to access,
you know, medical care or at least knowledge about medical
care that they couldn't because they couldn't travel to cities
and so forth. And I think, you know,
who am I to say, sitting here, you know, in the US, in Los

(46:26):
Angeles, CA, to say that that's a bad use of AI?
I think, where I care about, there are a few things.
One is the data extraction that comes from, you know, expanded
use of AI. Is it fair to be extracting all
the data about people's behaviors, who they are,
etcetera, etcetera, when you're going to monetize that and when

(46:46):
they really don't have meaningful consent, right?
Opting into the terms of service on a contract
for social media, for example, that's not a real form of
consent for most people. So what's the data extraction
relationship? What's the labor relationship,
right? Because just as there's a person
who needs to seek healthcare in a village, right?
And this is an archetype, I'm not trying to use a specific

(47:09):
example, there's somebody not too far away
who's being paid pennies on the dollar to view really horrific,
traumatic data and label it, right?
There are people being exploited for the supply chain and
everything else. So I think as we transition to
the global discussion, you're going to have a lot of these use

(47:29):
cases of AI for good that are going to be uplifted to justify
the continuance of the AI regime.
And if we're being reflective people that are serious about
kind of the policy implications of this, we need to factor in
all the costs. What are the costs
of the data extraction, of the labor exploitation?
What are the downstream costs of having other people's lives
decided by the not so good and not so innocent uses of AI?

(47:53):
This is from Elizabeth Shaw, who says, how does today's AI differ
from previous algorithms from the view of social harms, and
can AI be part of the solution? So really the question is what's
unique about AI and can AI help solve these problems?

(48:14):
A lot of the technologies that are used in government services
right now are not the latest generation AI, you know, LLMs
and other things like this. A lot of them are older
algorithms, you know, that used supervised
learning based on statistical regression and these
sorts of technologies, and those are really harmful.

(48:36):
I don't think the latest generation of AI has
anything to offer in a lot of these
contexts. Again, so long as it's existing
for purposes that are to, you know, essentially limit life
opportunities. And in this, you know, vacuum of
accountability, I don't think the technological sophistication

(48:56):
is going to make much of a difference because they're going
to be making the same decisions with the same incentives, right?
And one way that we are seeing this now is, excuse me, in the
recent developments federally, right, where the administration
has implemented AI in, for example, Social Security offices,

(49:17):
and it's made Social Security harder to access.
It's made people have to wait longer.
It's made people not get their benefits.
And that is technically the latest generation of AI.
So I think that's an example of, I always challenge
the premise that AI is going to somehow fix existing
problems just because the technology is going to get more
sophisticated. No, what's going to happen is

(49:39):
it's going to make those problems even harder to fight as
the technology becomes even more inscrutable, more insulated from
public accountability and transparency and all of those
things. You are not of the school of
thought that AI is going to be the great savior.
Oh, God, no. Oh God, no, no.
If anything, it's the opposite way, right?

(49:59):
It's immiserating people. And I think, you know, the
recent use of AI in the last few months by the administration
helps show us this is devastating.
AI is being used to destroy government, destroy government
capacity, destroy lives, and violate the law left and right

(50:22):
and everything else. It is a weapon that is uniquely
suited, uniquely suited to authoritarianism, right?
Even by its nature, it's inscrutable.
It's sort of just like an oracle that tells you what the
decision is, but doesn't tell you why it's making the decision,
doesn't allow you to disagree with it.
All of that, that's like an authoritarian approach to
thinking and to decision making.

(50:45):
So no, if anything, AI is a greater threat.
So I think, for our continued existence as a, you know, as a
democratic society, it's antithetical to a lot of
egalitarian notions; if anything, it's going to make things worse.
Can you offer advice to folks who are working in the corporate
realm, who really have a conscience and who don't want to

(51:08):
see the perpetuation of these kinds of harms that you've been
describing? The current uses of AI are
destroying its reputation. I think that's brand risk for
your companies. I think that's brand risk for AI
as a sort of venture.
And I think opposing authoritarianism, particularly
authoritarianism that's being fueled by AI is a really

(51:32):
critical thing for your long term survival for various
reasons. Then, you know, on a less sort of
global scale, it's all of the things that we're talking
about: push for meaningful regulation.
What are you scared of? That's my question.
Like, if you've got this great product that's backed by the
most sophisticated science we have, what are you scared of?

(51:53):
You should be proud of that. You should be putting that out
there and saying, you know what? Subject us to accountability
because our stuff is so strong, so scientifically sound and
produces such clear value for the public that we're willing to
embrace being under a microscope.
And I don't see that yet. And that's why I even challenge,
you know, the notion of an inevitability in terms of pure

(52:17):
efficiency. There haven't been clear one
sided efficiency gains that have made adoption of AI, even for
non decision making purposes, universally sensible.
Help make AI an electoral issue. Let's start talking about the
injustices. I mean, I think there's going to
be some incentive problems there because, you know, there's

(52:39):
big tech money that funds both parties.
And I think there are a lot of people who don't want to be
accused of being a Luddite. And, you know, there's other
incentives there. But I think policy makers have a
responsibility to educate the public much more intensely than
they currently do about the harms of AI.
Engage the public, hopefully create a base of people so that
there's a balance, a counterbalance to the

(53:01):
weight of big tech in these discussions so that you can push
for meaningful legislation and regulation and ongoing
enforcement and oversight. And I think that's going to be
vital to, you know, again, sustaining a democratic society,
pushing for less inequality and ultimately having an environment
where people have a real chance to thrive.

(53:23):
What advice do you have to individuals who are victims of
unjust AI decisions? This is really hard.
A lot of times you don't even know that AI is the reason
that you're suffering. So what I would say is contact

(53:44):
your local legal aid program if you're hurt by this stuff.
Legal Aid provides free legal services to folks throughout the
country on civil legal matters. Talk to your neighbors, talk to
other people in the same situations.
Try to see what's going on, and gather
information and start kind of engaging in the things that are
needed to push back. And if you're in a position to

(54:04):
sue, if you're in a position to offer your story to a
journalist, take those opportunities to speak for
yourself. Because there are relatively few
stories out there. And the discourse doesn't have
the people who are hurt the most, doesn't have the people
who are having to live with the consequences of what powerful

(54:25):
people do. And any chance that we have for
long term success is going to depend on you being able to
become a leader and to share your story and share your
passion and share your injustice, so that we can make it better
for everybody. Kevin De Liban, founder of
TechTonic Justice, thank you so much for taking your time to be

(54:45):
here. And I'm very grateful to you
for sharing a point of view that honestly is quite different
from what we usually hear on CXOTalk.
So thank you for taking your time and being here with us.
This was really fun, and thank you also to the audience for all the
great questions, and thank you for having me, Michael.
Audience, thank you guys. You guys are awesome.

(55:06):
I mean, truly, your questions are so thoughtful.
You guys are so smart. Before you go, subscribe to our
newsletter, go to cxotalk.com, check out our newsletter and
check us out for our next shows. We have great shows coming up.
And if you're interested again in topics like data bias, all of

(55:27):
these issues that we've been discussing, search on the CXO
Talk site because we have had lots of perspectives on this
from business leaders, from politicians, you name it.
So dig into the interviews on cxotalk.com.
It's truly a great resource.
Thanks so much, everybody. We'll see you again next time,
and I hope you have a great day.