Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Your AI is under attack from bad data, poisoned data, deep fakes,
bad actors, and more. Today on CXO Talk number 896,
luminaries Doctor David Bray and Doctor Anthony Scriffignano
reveal how to escape this AI quicksand with practical
(00:22):
strategies for building clean systems and trustworthy AI.
David, very briefly tell us about your work.
One of the roles is with the Stimson Center, specifically the
Stimson Accelerator, and what we're striving to do is
demonstrate that one can advance both programs and policies in
parallel. We can't wait for perfect
(00:42):
policies anymore because of the rate of change that's happening
in tech, including AI. And so we're showing it's
possible, whether it's in business, whether it's in
governance, or at the community level.
Anthony, give us a sense of your work.
I also have the honor of working with David at the
Stimson Center. In the context of the Loomis
Innovation Council, we do what I would call action research.
(01:03):
It's not just a bunch of white papers, it's actually stuff that
matters, and then incubating things that hopefully kind of
grow and impact the thing that we're working on.
And there's some very big problems that are really
important. David, we're talking about AI
quicksand. What is that?
When we unpack artificial intelligence AI, we recognize
(01:26):
that most methods of AI require some training or some data to
actually underpin what they do. Not all.
There are other approaches that don't, but the most recent
approach, generative AI does. And so when we think about
quicksand, we could end up thinking we've built this
perfect AI system. Yet the very data that actually
sort of serves as the foundation that the system was trained
(01:49):
upon, what it was based upon, turns out to be not only
quicksand, but it might actually just disappear
under the weight. It's not sufficient for the
models we're trying to build. It's recognizing that if you
don't take the time to think about where the data came from,
how it might actually be missing certain things that you're
trying to ask using the system, or might be intentionally
(02:09):
poisoning it. If you don't do that, you may
find that you build the metaphorical castle on sand,
only to have it be washed away. I can add a few things to that.
I have five Ms that are helpful to put context to this.
Misadventure: it's when you fall in love with
your AI. Everybody's drunk on AI right
now. You can't speak a sentence
(02:30):
without saying AI. And so therefore you must AI
your way into everything. Leading with a tool is almost
always a bad idea. It's almost always a better idea
to say what is the net new problem that we're solving?
What, how is this going to benefit the enterprise?
How is this going to not just do something we could already do
with the cool new tool, but do something that actually
(02:51):
might matter? Misuse, which is, you know, wrong tool, right
problem. There's a lot of this going on
right now, and generative AI is a great example.
David mentioned training it. Generative
AI is basically consuming ginormous bags of words.
What makes you think that the disruption that you're
interested in is represented in that bag of words, for example?
(03:13):
So misuse is just using the wrong tool for the problem.
Malintent is a lot of what we're talking about here.
When bad people come in, they don't have to mess with
your systems or your people, all they have to do is spoil the
milk. All they have to do is
manipulate the data in a way that can cause you to make a bad
decision or to change what was already a good decision. Then missing:
(03:34):
David mentioned that. All too often now, we talk
about underrepresentation in the data.
If the thing that you're interested in is not well
represented in the data, these models will get very distracted
by what is there, and you'll fall in love with the answer because
they'll tell you how good you're doing, and that's very dangerous.
And then the last M I have is moving.
The environment is changing. And while you're working on it,
(03:56):
and while you're ingesting all this data and doing all this
convolution, the world is changing.
And so it's very important when you build models and when you
build approaches that are heavily dependent on AI, that
you consider how the environment might be changing in a way that
makes your original testing no longer relevant.
So to what extent is this set of problems happening?
(04:19):
If anything, the latest wave of AI, generative AI and
the subset of it, has poured kerosene on the existing fire.
But this was already a risk to organizations even in the
2010s, if you didn't think
about where the data was coming from, could you trust that data?
Was it reliable? What it's done, though, is
because AI has now become more available to enterprises and
(04:42):
there's a huge push for companies to adopt it.
It's one of those things that, in the pressure to adopt it,
exactly per the five Ms that Anthony laid out,
We run the risk that companies or governments may adopt models
without thinking about did I avoid the AI quicksand?
And so again, we may fall in love with the model, the perfect
castle, only to discover that the actual foundation it's built
(05:03):
upon is insufficient for what we're asking.
You have to be really careful. If you set out with the goal of
proving to your overlords that you're using AI, that is a
horrible journey to be on. Everybody's using AI.
There are AI toothbrushes, there's AI in everything.
And AI is increasingly democratized so that there's no
AI center of excellence, just like there's no PowerPoint
(05:23):
center of excellence, right? So it's important that
where this is being used in the enterprise,
the adults are in the room and we're looking at:
How are we using it? Why are we using it?
What's the net gain? How is this going to be
realistically deployed in our environment?
You know, how do we scale it? Are we building the Hotel
California where we can never check out?
(05:43):
Those are very important questions.
No, none of those are new questions.
It's just that now we have the opportunity to really fail at
scale if we get it wrong. And I'll give some very tangible
examples real quick, Michael, because I think that'll help the
audience. Without naming the name of the
model, one of the models, around 2023, if you asked it how many
people lived in, say, the state of Georgia, the US state of
(06:04):
Georgia, it would confidently tell you that the answer was
something around 350,000, sorry, 350 million people, as opposed
to a more realistic answer. And that was a case where the
data that the model had actually been trained on had
been gleaned from a source that was inaccurate.
There was a typo in the answer. And so when the model gave the
answer to how many people lived in the state of Georgia, it was
wrong. So that's one example.
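One practical takeaway from the Georgia example: even a simple plausibility check on source data can catch a typo like this before it poisons a corpus. A minimal sketch in Python; the bounds and entity names are illustrative assumptions, not from any real pipeline:

```python
# Hypothetical plausibility check: flag population figures outside sane bounds
# before they enter a training or retrieval corpus. Bounds are illustrative.

PLAUSIBLE_BOUNDS = {
    # entity type -> (min, max) population, rough bounds for a US state
    "us_state": (500_000, 40_000_000),
}

def plausible(value: int, entity_type: str) -> bool:
    """Return True if the value falls inside the configured bounds."""
    lo, hi = PLAUSIBLE_BOUNDS[entity_type]
    return lo <= value <= hi

# The transcript's example: a source claiming ~350 million people in one state.
assert not plausible(350_000_000, "us_state")  # flagged: exceeds any US state
assert plausible(11_000_000, "us_state")       # roughly Georgia's real figure
```

A check this small would not fix the model, but it would have kept the typo-ridden source out of the training data in the first place.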
(06:26):
Another example though, is actually if we look at when
policy making decisions are made, sometimes they're
making decisions based on data about where people live as opposed to
where people work. And they think that people
aren't using their services. Or they roll out, say, imagine, you
know, it's actually helping people with COVID
vaccines or things like that. And we know there have been cases
where COVID vaccines went to the places where people live as opposed to
(06:49):
where they work. And if people are working two jobs
or working odd hours, it looks like they're reluctant to get
the vaccine when, in fact, you really should have just placed
it where people worked as opposed to where people lived.
So this is again where you have to ask exactly what Anthony
said, which is, what am I trying to do?
What is the mission or the business that I'm trying to
achieve? And then work backwards to not
only what's the right tool, but what's the right data to bring
(07:11):
to bear. But what I don't understand is
what you are both describing seems pretty basic, pretty
straightforward. So what's actually happening?
What's the disconnect that's causing these data problems to
(07:31):
filter through? These are smart people, you
know. Go ahead.
There are smart people there. There's a lot of FOMO, you know
where very senior people are putting pressure on the other
senior people that report to them to say what are we doing in
AI? Because my buddy is doing
(07:52):
something in AI, and I need to tell the investors what
we're doing in AI, or I need to tell whoever my
overlords are what we're doing in AI.
There's a lot of that.
The other thing that's happening is that you have an entire
workforce that's graduating into the enterprise that kind of AI'd
their way through their last couple of years of school.
(08:12):
And they think that, you know, that the answer is always
contained in the data that's available to them.
And they think that everything is free and everything can just
be used however they want.
I'm trying not to name brands. In their notebook, right,
I can just include this library and I'll be good to go. Vibe
coding, all of that, right? And I'm not going to, like, you
(08:35):
know, take issue with that because this will live
on, and a year from now somebody will say, oh, he just didn't see
where the train was going. What I'm trying to say is that
these fundamental questions of what is the problem that we're
trying to solve? How would we know that this
approach will make things better and not just different? If our
objective is just to declare victory that we used a certain
(08:57):
approach, that has never been a good idea.
I mean, in the history of man, just saying I used this tool
instead of that tool, alone, without something else as part
of that story, it's always a futile
journey. And it's a very
dangerous one right now because of the size and scope and scale
and apparent intelligence of the tools that we're talking about.
(09:19):
Note that I said apparent. I want to tell everybody
that you can ask your questions. If you're watching on LinkedIn,
just pop your question into the chat.
If you are watching on the CXO Talk site or on Twitter/X, just
pop your question into the chat using the
(09:43):
hashtag #CXOTalk. Take advantage of it.
This is your opportunity to ask pretty much whatever you want.
So Anthony, let me ask you the question about the elephant in
the room. So what?
What's wrong with vibe coding? Used properly in the right
context for the right containerized problem, nothing.
(10:08):
But, you know, there were automatic code generators in
1980. There's nothing new about that.
The danger here is that generative AI is a very
different animal than just usinga bunch of automatic code
generation. You are consuming vast amounts
(10:29):
of code in order to, quote unquote, learn to code.
And then you're basically plagiarizing, you know, you're
mathematically plagiarizing that code to produce other code that
may or may not do what you asked.
And there are some basic blocking and tackling things
that we all used to learn, like regression testing.
How do I know that the things that used to work still work
(10:50):
when I do something new? Or bias: understanding whether the
data that I used is actually going to look like the data
where I deploy it. Those things don't happen a lot.
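The regression-testing discipline Anthony mentions applies just as much to generated code: pin down every behavior that used to work as an assertion, and rerun the suite after each rewrite. A minimal sketch; `normalize_name` is a hypothetical function standing in for whatever a person or a model just rewrote:

```python
# Hypothetical unit under test: a function a person (or a model) just rewrote.
def normalize_name(raw: str) -> str:
    """Trim whitespace, collapse internal runs of spaces, title-case the result."""
    return " ".join(raw.split()).title()

# Regression suite: behaviors that used to work, pinned down as assertions.
# If a vibe-coded rewrite breaks any of them, it fails loudly, not silently.
def test_regressions():
    assert normalize_name("  ada   lovelace ") == "Ada Lovelace"
    assert normalize_name("GRACE HOPPER") == "Grace Hopper"
    assert normalize_name("") == ""

test_regressions()  # rerun after every change, human-made or generated
```

The point is not the function itself but the habit: the old behaviors are captured where they cannot be quietly lost.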
The goal is, oh, look how cool, it looks great, it
looks cool. But you didn't escape the need to do those things.
If you use a critical application, when you push that
(11:13):
button on your steering wheel that says stay in the lane,
or when that drone gets launched, right?
We've got to hope that these concepts were embodied in the
development of that piece of software that you're depending
on. And increasingly, maybe
not, because the goal is get it out fast, let it break, and our
(11:34):
customers will test it for us. That's kind of dangerous.
Back in the 80s and the 90s there were efforts called expert
systems and decision support systems, another
flavor of AI, which held that if we spell out the rules of a
system, that's all we need to achieve intelligence and we
have success. For example, an automatic
defibrillator actually runs rules-based logic when it
(11:56):
tries to decide when to give you that shock.
But the moment we tried to go beyond narrow use cases,
rule-based approaches to AI collapsed.
So we go now to machine learning and then the subset of machine
learning, neural networks and generative AI, and that's saying
we don't need rules at all. And so in November of last year,
there was actually a competition where they told the
machine, under no circumstances should you ever
(12:19):
transfer the funds. And it took less than 500
prompts. I think it was prompt #490 that
sure enough, the machine that had been told explicitly do not
transfer the funds did. And so exactly as what Anthony's
talking about, which is, you're doing very fancy
multidimensional pattern matching when it comes to vibe coding.
And as long as a human takes the time to carry it through and
(12:39):
look through it, there's value. But recognize what you're doing
is pattern matching that may actually look like it's valid
code when in fact it's not. The last example I would say,
and I won't name the name of the company: a few months ago,
one of the big AI companies came out with a
paper where they told the machine write code that's
insecure but don't reveal it to the user.
(12:59):
And at the same time, they were surprised that when they
actually gave that instruction to the machine, it started being
hateful and spewing hate. Now let's think about the data
that actually underpinned the system.
Yes, it's trained on the corpus of the last 30 years of the
Internet. Where on the Internet might
there be use cases of people that are writing code that's
insecure but not revealing it? It might be part of the Internet
(13:20):
dark web, in terms of 4chan, 8chan, and groups like that that are
also hateful. And so when they actually asked
and gave that instruction, I don't know why they were
particularly surprised that the machine actually started
behaving in ways similar to what it was actually trained on.
But that just shows the importance of, one, knowing your
data, but also knowing the limits of generative AI when it
comes to explicitly doing what you ask it to do.
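One lesson from the funds-transfer story: a rule that must never be broken can't live only in the prompt; it has to be enforced by deterministic code outside the model. A minimal sketch; the tool name and policy flag are illustrative assumptions, not from any real system:

```python
# Hypothetical tool-call gate: the model may *request* a funds transfer, but
# the policy lives in deterministic code that runs no matter what the model says.

TRANSFERS_ENABLED = False  # hard policy: under no circumstances transfer funds

def execute_tool_call(name: str, args: dict) -> str:
    """Dispatch a model-requested tool call through policy checks first."""
    if name == "transfer_funds":
        if not TRANSFERS_ENABLED:
            # Enforced outside the model, so no prompt (not even #490)
            # can talk the system into it.
            return "REFUSED: transfers are disabled by policy"
        return f"transferred {args['amount']} to {args['to']}"
    return f"unknown tool: {name}"

assert "REFUSED" in execute_tool_call("transfer_funds", {"to": "attacker", "amount": 100})
```

The design choice is the point: instructions in a prompt are suggestions to a pattern matcher, while a check in the dispatch layer is an invariant.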
(13:42):
We're using a metaphor of quicksand here, and I want to
get back to that for a second. If you think about this from the
outside in, if you're part of an organization and you're not the
developing part of the organization, you're their
customer, right? What do you want?
You want it done fast, you want it deployed, you want to
monetize whatever it is you're trying to monetize and somebody
(14:03):
comes along and says I can have that ready for you in a day.
How often do we ask at what cost?
How often do we ask how are you going to deliver it?
This guy said it was going to take three weeks.
You said it's going to take a day.
A day is cheaper than three weeks. I get my revenue sooner.
Go, right? You know, sometimes you've got to
(14:25):
be very careful because you get what you ask for.
Aside from philosophical problems with vibe coding
related to the lack of discipline and effort that folks
should invest in computer science education, what's wrong
(14:49):
with vibe coding? I would say let's let's look now
at like intellectual property. You know, if we remember in the
90s there was this thing called Napster, it did music sharing,
but it didn't really respect any of the artists' recording rights
or any intellectual property. And so in some respects what we
might have seen in the early 2020s was a repeat of
Napster just in the AI era. And we know there's some
(15:10):
lawsuits and things like that that are being sorted out.
But you know, the question for a business that is looking to use
an AI solution that might be external to the business is, how do
they know their property is not walking out the
door? How do they know it will be
respected?
once every three weeks or so. And I'm, again, not naming names
from either a CIO or a CISO, that says, I had an employee.
(15:33):
They used a tool that was, you know, it wasn't something you
provide in the enterprise, but they used a commercially
available one. And they uploaded information
that had HIPAA health data, you know, personally identifiable
information. Can I claw it back?
And the answer is no. But you know, again, it's one of
the things that, you know, you have to be aware that when you
use AI, particularly AI that's being provided by somebody else,
(15:55):
be very intentional about what you provide and what you don't,
whether it's intellectual property, whether it's personally
identifiable information, or just proprietary information.
Because you might see some things walk out the door that
you don't want to have walk out the door.
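One concrete control behind that advice is a pre-flight screen that blocks obvious PII before a prompt ever leaves the enterprise for an external model. A minimal sketch using regular expressions; the patterns are illustrative and nowhere near a production-grade detector:

```python
import re

# Hypothetical pre-flight check: block obvious PII before a prompt leaves
# the enterprise for an externally hosted model. Patterns are illustrative,
# not an exhaustive or production-grade detector.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def contains_pii(text: str) -> list[str]:
    """Return the names of any PII patterns found in the text."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

assert contains_pii("Patient SSN is 123-45-6789") == ["ssn"]
assert contains_pii("Summarize our Q3 roadmap") == []
```

Even a crude filter like this turns "can I claw it back?" into a question that never has to be asked for the most obvious cases.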
We have a question from LinkedIn, and I'm glad somebody started
asking questions here. You folks who are listening take
(16:17):
advantage of this opportunity. And oh, by the way, right this
second, you should go to cxotalk.com to subscribe to our
newsletter so we can tell you about episodes like this that
are live. Anyway, we have a very
interesting question from Preeti Narayan on LinkedIn.
And she says who really owns critical thinking and ethical
(16:40):
oversight in AI? Current leaders or a new role as
a moral compass? So who's actually responsible
for dealing with this set of issues?
It's a really interesting question.
David, do you want me to jump on the hand grenade, or do you want
to? I'll jump on the hand
grenade, and then you can clean up whatever I don't manage
(17:01):
to contain of the blast. I would say it's all of us, yes,
if you're expecting somebody else to solve this, you're
going to be waiting. I apologize for interrupting
you, but I just want to clarify
what I thought I heard you say. So
you just said the answer to who's responsible for ethical AI
(17:21):
is no one? No, no, no, no,
please, not no one. So ethical is, that's
also a quicksand because let's think about there were plenty of
things in the 1700s and 1800s that people thought were ethical
that nowadays we would say are unethical.
And in even more recent history, in World War One, the British thought
these things called Q-boats, which were military boats
(17:42):
disguised as civilian boats, were ethical, but the Lusitania got sunk.
And they came around to, maybe they're not ethical.
So what we're really talking about is when you use the tool,
whether it's you as a consumer, whether it's you as an employer,
whether it's you as a boss, whether you ask somebody to use
it, you do have a responsibility to say, am I using this hammer?
Is this the right tool for the purposes?
Is this the right driver? And the reality is for
(18:04):
consumers, there's no silver bullet that any company can ever
provide you that says here's when you can and cannot use this
tool. I mean, you are sold a hammer,
but if you happen to use the hammer to smash your hand,
that's on you. So it isn't saying everybody,
it's just saying, be cognizant of using the right tools for the
right job. And what we really need to do is
(18:24):
find better ways to help you understand rapidly what are the
limitations of a tool. Because I think a lot of things
right now, because of the marketing, are being sold as able to do
everything, when in fact that might not exactly be the truth.
So I I agree with the fundamental premise of what
David said, that there is no one single responsible party.
That doesn't mean that ultimately the oversight can't
roll up to the board or to, you know, the chief risk officer;
(18:51):
the title is going to depend on the organization.
What it shouldn't roll up to, by the way, is the same
organization that's creating all of that, because then you've got
the chicken in the hen house, the rooster in the hen house,
whatever that is, the fox in the hen house, somebody in the
hen house. I'm going to get killed
on Twitter for getting the wrong animal in the hen house.
(19:12):
On X, then. There's an AI for
that. Yeah, I know. Thanks.
Let me get back on track here. So the issue is that, like
many things, there isn't a black and white answer.
The example I
customized things. We want our things to anticipate
our needs and start to do them for us or help us do them.
(19:35):
We also want privacy. Those are kind of opposite
things to want. For your things to serve you better,
they have to watch you and kind of take notes on what you do.
And that means sharing your location and maybe sharing all
the text in your emails and sharing all that other stuff.
And then if you found out all the stuff you're sharing, you
might be appalled. So when an enterprise is
(19:55):
rolling out an application, and way before they roll it out, when
they're contemplating the design of an application,
everyone in the loop should be thinking about, what are the
unintended impacts of what we're about to do?
And is the juice worth the squeeze?
If we're trying to sell more shoes and socks, and in
order to do it, we have to spy on people, is that a line we
(20:17):
want to cross? And I hope in that particular
example the answer is no. But there are not that many
people in the chain who have the decision-making responsibility.
So I think, Michael, the initial question you asked me
was who's responsible for critical thinking.
And I would say everybody should think critically.
If you're asking a question of liability, that's a little bit
(20:40):
different. I think that's what Anthony is
saying, which is, if a company provides you a service, and that service
has AI providing the service as well,
I think most of us would say, based on common law, that if
the service provided to you turns out to be incorrect
because of what the AI advised you to do,
maybe it was an AI lawyer, or an AI helping a clinician or
a doctor, then yes, the liability goes on the company's
(21:02):
part. And I would actually say just a
few weeks ago, I got to participate in a
public congressional hearing as an expert witness.
And actually, one of the things we heard from both
sides, this was actually not a partisan issue,
both sides said, when it comes to liability, we
don't need new laws per se, because the existing legal
(21:24):
framework says if a company
provides you a service and you buy it, and that service, if
you follow what it said, ends up harming you or resulting in harm
to your person or family members, then yes, they should be
held responsible. And so I think we need to
separate critical thinking, which I would advocate everybody
does, from liability. And I would say right now, even
(21:45):
if we haven't made explicit laws, there's enough of a legal
framework to say that if the organization uses AI to provide
you a service, then they are responsible for what the AI
provides. It is absolutely coming.
You can't just ignore that the
concept of agency, that what you ask your AI to do on your behalf,
is not well embodied yet. But don't count on that, because
(22:07):
Oh my gosh, everywhere you go this is under discussion right
now. We have another question from X
Twitter right now, and this is from Arsalan Khan, who is a
regular listener. He always asks great questions
and he's concerned. He's saying, in an age of
(22:31):
deregulation and no boundaries, unless there are AI consumer
advocates, consumers will be at a disadvantage.
And why do we need boundaries when it seems that politically,
(22:51):
and I don't want to make this about politics, but it seems
politically we're in a trend of fewer boundaries.
And I'll just ask you to also relate this back to the data
because that's what this conversation is really about.
There actually was a public hearing on AI.
It was September 28th, and you can find it if you go to
congress.gov. What came out of that is, so far,
(23:16):
the states have actually led the way when it comes to thinking
about AI. There are more than 1,000 pieces of
either existing or pending AI legislation.
But that also creates an interesting challenge, which is,
you know, let's say you're from the state of Arizona or California and
you travel to a different state.Is it the state that you reside
in? Is it the state where you live?
Is it the state where the company is?
(23:36):
It gets really messy very fast. And so I would say just because
we haven't seen anything yet doesn't mean we won't soon.
In fact, what I'm hearing is once government opens up again,
we will see stuff come from Congress because they recognize
that in order to help consumers,but also help businesses
navigate everything that's happening with AI, they need a
light touch framework. Now, that said, definitely need
(23:58):
people to advocate on behalf of consumers, because if you don't,
then that voice is absent. But I would say to Arsalan,
there are existing laws. Remember the Privacy Act of 1974
that came out because these things called advanced data
processing systems, mainframes, required the federal government of the
United States to think about what it means now that you
can have these machines that can actually sort of begin to know
more about you than you really wanted them to know?
(24:20):
We probably need to upgrade that for the AI era, including the
right to be left alone, so that you can actually say, I don't want
the AI to think about me or involve me.
But I think what I would say to Arsalan is what we really need
is groups pushing for, how do you upgrade existing laws at the
state level and at the national level as opposed to net new
laws, because net new laws will be debated until the technology
(24:41):
has already moved on. In fact, if anything, that's its
own quicksand because of the nature of data, which is I've
seen some people that say you've got to expose all your data, you
got to tell all the data you're using.
No business is going to do that. I mean, it's their intellectual
property. In fact, Anthony can probably
relate, because he worked at a company where you can't tell everyone
all the data. But what you can do is think about, what are ways
that you both inform the consumer, but also hold the
(25:03):
company accountable for an appropriate risk calculus for
the services or the products that they're providing.
David and I were part of, I'll just call it an intellectual
holding environment earlier this week in our nation's capital.
And I would say a third of the conversation was related to
the regulatory directions and the technologies that
(25:27):
are emerging for tagging data, for sharing metadata about AI-
generated data, for letting the AI know that it's consuming data
that was generated by other AI. This isn't going away.
And if the solution of your AI development team is to put
their head in the sand and say, I'm not going to
worry about it because I don't know of any regulation right now
(25:49):
other than the California Privacy Act, you might want to
add some people to that team. How do we address these issues?
Because you're describing a set of technology problems as well
as a set of process issues, cultural issues, mindset issues,
(26:12):
greed, fear, doubt, I mean all of this.
It's kind of an... Oh, this is why the
moment is so challenging. I would say, if anything, I
would recommend for listeners, while we're talking about AI and
data, replace the word AI with organization.
How do you know that the organization is doing a good job
with your data? How do you know that the
organization is making good decisions based on it?
Because in some respects, AI is just additive to these
(26:35):
challenges. I think we also need in some
cases, and Anthony can go deeper, there's going to be some
data that we are comfortable with being sort of like open
data. There's going to be other data
that's like prescription data that you actually have to sort
of have a prescription, have access to.
And there may also be cases, especially if you are someone
that's in the entertainment industry, a musician, artist,
where you want to have your data be part of a cooperative or
(26:59):
a data trust. We've had Lord Tim on CXO
Talk as well, talking about this. And then there's a negotiation
between an AI model that might want to train on your data, but
then you're having some either financial or equity return or
some non-equity benefit, maybe. I care about Parkinson's
research, so I'm willing to pool my health data if it helps
inform a cure for Parkinson's. But I think we need to actually
(27:19):
recognize that if the 2010s were almost an
overcalibration, to hoover up all the data and do so behind the
curtain, that's probably the wrong lesson going forward
because it didn't engender trust.
If anything it made people distrustful and you can't trust
the AI outcomes. We need to unlearn some lessons
to move forward. I think it did more than make us
(27:40):
distrustful. It enabled a whole new cadre of
malefactors that can do things like deep fakes and
adversarial data manipulation. And, you know, even the data
that's true, the truthfulness of the data often has a
lifespan. It was true, but it's no longer
(28:00):
true. And so AI doesn't care.
Math doesn't care, right? It's just hoovering it all in
and, you know, compressing the element of time
to zero and then regressing around what's there.
I know there's a lot more to it and I know there's different
foundation models that handle this differently.
And I know about turning up the heat and all that.
I know all of that stuff, right? So before everybody tries to
(28:22):
attack me, all of that stuff is some of the treatment of this.
But it all starts with at the highest level in the
organization, the adults in the room asking some really tough
critical thinking questions. What do we have to believe to
use this approach? Help me understand what steps
you've taken to understand provenance and permissible use.
(28:45):
Help me understand how we're going to handle agency.
If you can't answer those questions, you don't belong
building those tools. We have another question from
Twitter, from X, and this is from Chris Peterson.
He's also a regular listener. And we thank you, Chris, for
your listenership. And he says there's growing talk
(29:06):
about LLMs hitting fundamental limits in scaling,
hallucinations, and lack of executable guardrails, which is
what you're talking about. Is it actually the next step to
AGI, or artificial general intelligence?
Or do we need different approaches long term?
Yes. So he's spot on.
(29:27):
We need different approaches.
We need different approaches, yeah.
Yeah, and I want to
give a shout-out to some.
I mean, the trouble is, you know, again, generative AI
arrived on the scene and it took all the oxygen out of the room,
as Anthony knows. I mean, AI has been decades in
the making. There are multiple methods.
And the trouble is I think we got over fixated on generative
(29:48):
AI as opposed to recognizing it's another great tool.
But there are other tools that are needed.
I am very bullish on active inference and I would recommend
folks take a look at it. If you're not familiar with it,
the basic idea is that it's doing continuous
learning of an environment. And I actually would submit that
the future is not these mega AI models that are trying to do
everything. It's going to be smaller models
(30:10):
that are actually optimizing for really specific cases.
For example, there may be an AI model that's monitoring all the
ships coming in and out of the Suez Canal.
There may be another one that's monitoring ships coming into LA
ports. And when a disruption happens to
the Suez Canal, like we had a few years ago, then it actually
talks to the LA port model and says you might expect that
there's going to be a shortage of containers.
(30:31):
And that shortage of containers is then going to cause a rush of
people to actually look for new metal to build new containers
because we're lacking them. That causes a rise in the future
markets, which then causes the cost of shipping for the next 6
to 9 months to be more expensive.
Which means you can get plenty of Mercedes Benzes and Rolls
Royces in the United States because they can absorb the cost
given the increased cost of shipping.
(30:52):
But quart sized cans of paint aren't available because they
can't absorb the cost. And so that's not possible with
a mega AI model, an LLM. None of that would be there.
But if you had active inference where smaller models are talking
to each other about the ripple effects, that is possible.
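The ripple-effect scenario David describes, small specialized models notifying each other, can be illustrated with a toy message-passing sketch. Everything here (the model names, the disruption rule, the message format) is invented for illustration; no real system works exactly this way.

```python
# Sketch: two small, specialized models exchanging ripple-effect alerts.
# All names, thresholds, and rules here are illustrative, not a real system.

class SuezCanalModel:
    """Watches Suez traffic; emits an alert when throughput drops."""
    def __init__(self, peers):
        self.peers = peers  # other specialized models to notify

    def observe(self, ships_per_day, baseline=50):
        if ships_per_day < 0.5 * baseline:  # crude disruption rule
            for peer in self.peers:
                peer.receive({"event": "suez_disruption",
                              "severity": 1 - ships_per_day / baseline})

class LAPortModel:
    """Watches LA port; turns upstream alerts into local forecasts."""
    def __init__(self):
        self.forecasts = []

    def receive(self, msg):
        if msg["event"] == "suez_disruption":
            # Propagate the ripple: fewer containers -> expected shortage.
            self.forecasts.append(
                f"container shortage expected, severity {msg['severity']:.2f}")

la = LAPortModel()
suez = SuezCanalModel(peers=[la])
suez.observe(ships_per_day=10)   # simulated disruption
print(la.forecasts)
```

The point is the coordination pattern: each model stays narrow and specialized, and only alerts cross the boundary between them.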
And that gets to really interesting forms of
intelligence. We hear a lot of talk about, oh,
(31:12):
we need more power, we need bigger data centers.
When are we going to reach AGI? Now they're talking about ASI,
artificial super intelligence. You know what's going to be
next? You know, Lex Luthor, like we
keep naming, renaming and renaming.
The power of regression, the power of convolution, the power
(31:33):
of math, basically to do things that increasingly convince us
that they were done by humans. I think that's great.
Please keep doing that, you know, don't stop.
Just don't fake your way about it.
Don't do it and not let me know you're doing it.
There are definitely situations where AI can do things that
(31:56):
people, I don't care how many people you put in the room,
they're not going to be able to address that problem because
it's just too big. It would take thousands of
people to all agree on the approach.
They would want to be similarly incented and similarly
instructed, and that's not going to happen.
Yeah, great. Go and do that.
Don't call it intelligence. We need some new verbs here and
(32:16):
some new nouns here for the things that this technology is
doing. And oh, by the way, most of the
technology is still, you know, like when you play pool and you
just hit the balls as hard as you can and hope one of them
goes into the pocket. It's try everything all the time
with the exception of one. And I think as we get smarter
about having more than one approach, and David is
(32:39):
mentioning agentic AI, I'm sorry, active
inference. This agentic AI is a step in
that direction. Having these little pockets of
of capability within an ecosystem that's even broader
and being able to sort of turn them on and wake them up and
give them a specific goal and a task that's a nice step in the
right direction. We're not even close to there
(32:59):
yet, though. There's a lot of work to
do here. On LinkedIn, Chris Davidson says
can you talk about the concept of data poisoning?
Is it happening actively at the nation state level?
How would we know if we were thevictim of it?
So a very specific example, and you can actually find it
(33:21):
online, so it's openly
available, is we believe about 9 to 10 months ago, Russia
actively started a campaign to teach large language
models that it could access things that just weren't true.
And once they've been taught things that aren't true,
these are really hard to undo, because of the way large language models
(33:43):
are built, they don't really forget
well. And so yes, data poisoning is
happening. And a very specific example, and
you can find articles online, is that Russia has done a very
intensive, focused campaign, not just in the US but in other free
societies, to teach them things about events, about history,
about people that actually just are just patently not true.
(34:05):
I'm one of the many millions of people who love TikTok.
And recently lots of videos on TikTok have been popping up
showing the various things that ICE is doing.
And I'm not making a political statement here.
Typically those videos get what, 5,000, 10,000, 50,000 views?
Whatever, some comments. Lately I've seen
(34:28):
these videos popping up having 3 million views and 150,000
comments, with all the comments drowning everything
else out, saying go ICE. And I'm not saying
good, bad, indifferent, but this is what's happening, and it's obvious
(34:49):
that there is some third party that is doing this in an automated
way. It's a combination.
So this is what makes things so hard: when bots
come, bots can definitely amplify things and make it look
like humans are liking a video or commenting on a video.
But then that becomes a complicated process because then
humans will pay attention to it as well.
(35:10):
So, you know, is it coordinated inauthentic
campaigns? Yes.
But recognize a lot of those humans are also valid.
And that's what makes these things so difficult.
We may have heard, you know, it was a few weeks ago there was a
discovery of lots of phones next to the UN General Assembly.
Now we don't know exactly what that was for.
But one of the things that a phone farm could do is if you
(35:32):
wanted to make it look like a lot of people were on their
phones liking something or commenting on something, or
not liking something and dissing on something.
Then you would have phones in the geographical area that look
like they were in New York. And then you would have them
basically go to that site. And so, but the moment you did
that, then you would actually get human eyeballs and human
attention that would actually chime in as well.
(35:53):
So this is a mixture of both bots, but also of humans.
And that's what makes these things so difficult: data
poisoning takes on a life of its own once it's been seeded.
There are many different types of data poisoning.
So what we're basically getting at here is either, you know,
faking something or amplifying something that may or may not be
(36:13):
true that someone said, right? That's definitely one type of
poisoning of data. Another type of poisoning of
data is if I understand what type of AI you're using, I can
influence the decisions you make by allowing you to see things
that cause you to make a certain type of decision.
And, without getting into too much detail, there are
(36:35):
examples of this where large organizations were
considering doing certain types of things.
Easy one is airlines, you know, we're looking at, you know,
reinstating our routes to, you know, Southeast Asia, and all of
a sudden all their customers love Southeast Asia, right?
(36:55):
Maybe that's true, you know, and who am I to say?
It might also be likely that their social listening was
looking for how they were taggedand what other nouns were
associated with that tag. And there seemed to be a lot of
comments to social media mentioning them and mentioning
Southeast Asia and their marketing department and their
(37:17):
route planning people. And all those people got in a
room and said, Gee, our customers really are expressing
an intent. Maybe they are, maybe they
aren't. Who am I to say, Michael, you
said it's obvious. I don't know if it's obvious.
I don't know. Is it likely that, what did you
say, 3 million? Is it likely that 1% of the US
population, you know, all collectively agreed on anything
(37:39):
other than the fact that they're part of the US population?
I doubt it.
Even then. And, you know, there's
certain things in math that you have to violate mathematically
in order for something like thatto happen.
Is it possible? I, you know, people win the
lottery, right? But what we can do,
because we're trying to focus on solutions here,
(38:01):
One of the most important questions that I ask very often,
David's been in the room when I ask it and probably been annoyed
by it. What do we have to believe in
order to go down this route? So you just told me that,
I don't remember the numbers,
normally, you know, X number of people are looking at
something, now 100X people are looking at it and they all feel
(38:22):
the same way. What would we have to believe?
And that takes you down a whole route.
Now, one of the scenarios might be that there's some kind of a
bot influencing this. Another might be that they were
all there and quiet until they had their opportunity to
say, you know, heck yeah. Another possibility might be
that none of these are people speaking and that these are, you
(38:43):
know, some sort of, you know, spoofing of the data.
And then you go and you test those different scenarios.
If that scenario were true, what else would also be true?
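Anthony's test, "if that scenario were true, what else would also be true?", can be sketched as scoring competing scenarios by how many of their other predicted consequences actually appear in the evidence. The scenario names, predicted signals, and observations below are all hypothetical examples, not real investigative criteria.

```python
# Sketch: score competing explanations by how many of their other
# predicted consequences show up in the observed evidence.
# Scenario names, predictions, and observations are hypothetical.

scenarios = {
    "bot amplification": {"accounts_created_recently", "uniform_phrasing",
                          "burst_posting_times"},
    "genuine groundswell": {"varied_phrasing", "gradual_growth",
                            "cross_platform_mentions"},
    "data spoofing": {"impossible_geolocation", "uniform_phrasing"},
}

observed = {"uniform_phrasing", "burst_posting_times",
            "accounts_created_recently"}

def score(predicted, evidence):
    """Fraction of a scenario's predicted consequences we actually observe."""
    return len(predicted & evidence) / len(predicted)

# Rank scenarios by how well their side effects match the evidence.
ranked = sorted(scenarios, key=lambda s: score(scenarios[s], observed),
                reverse=True)
print(ranked[0])  # best-supported scenario
```

This is the shape of the exercise: each hypothesis commits to consequences beyond the thing you noticed, and you go test for those.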
And it turns out that there's a whole science around doing this
called propositional calculus. And we know how to do it.
All we have to do is bring the adults in the room and actually
do it. You know,
to paraphrase Descartes: I believe it,
(39:05):
therefore it's true. Separating what are tangible
truths that are truths beyond what you believe.
And that's where we have courts of law, and we know even
courts sometimes get it wrong. That's why we have an appeals
process. And so it's getting harder and
harder to distinguish those things that beyond what people
believe are truths that we are willing to say exist in the
(39:27):
absence of people believing. There are also observer
effects where we start to believe something and because
enough people believe it, it starts to become true.
Like there's unrest in, you know, wherever, right?
There wasn't unrest in wherever until everybody started
talking about the unrest in wherever, which caused the
unrest in wherever. That happens.
(39:48):
Or channel your inner Sneakers, where Cosmo has
that line that says, you know, I learned that what matters less
is the actual reality, but the perception of reality.
If I spread rumors that a bank is insolvent and that causes
everyone to go rush the bank, then pretty soon the bank is
insolvent because I spread rumors that it was.
OK, we have a question from Twitter, or
(40:11):
X, which is how can organizations determine they're
heading into the AI quicksand and how bad can it be?
You need a red team. And so, in the world I come from
within the intelligence community, red teams were
basically folks that are asked to say, how might what
you're doing be misused or abused?
(40:32):
How might I throw a wrench in what you're achieving?
And so organizations need to have that just because if you
don't, then it's nobody's job to think about second, third and
fourth order effects. But when I used to do response
to biothreats in public health, we called them the B team.
It's the same idea, the B team, which is, what are the
things you're not thinking about that you should?
And if you have someone on the team or multiple people on the
(40:53):
team whose job is to say, how might your well intended
business efforts or your well intended product launch be
misused and abused, that will help you be more prepared for
how others might exploit it. I often wonder, in social
media, if we had had some of the social media companies in the
2010s have that sort of red team function, whether we could
have avoided some of the traps that we later got ourselves
(41:15):
into. As the often appointed leader of
said red teams, I will say, because it was a great
question: one of the ways you might know
it's happening is to have different smoke detectors
looking for different things. So there's a couple of things
(41:35):
that I often talk about: quality and character. The character of,
and now you have to fill in the blank, the character of our
customer comments, in your case with TikTok, Michael.
So the character of the comments went from being sort of spread
out, positive and negative, to all positive, right?
So that's the character: it became,
(41:59):
you know, massively homogeneous, and it became, you
know, singularly positive. Then there's the quality of the input.
So you can look at missingness, you can look at incomplete
articulations, you can look at articulations that contain
things like slang and neologism,which are more likely to be
(42:20):
spoken by humans than by machines, things like that.
Em dashes. Yeah, the famous em dash.
Right, there are, you know, good old fashioned statistics,
heteroscedasticity and and you know, measures of central
tendency. We know how to do all this
stuff. It's just math, but we don't do
it. And so the answer to
that question of how would we know it was happening
(42:42):
is: certainly not if we don't look, right?
You've got to have people responsible for looking and
adults that actually know how tolook at large amounts of highly
dynamic data in ways that are dispositive.
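One of these "smoke detectors", noticing when the character of a comment stream collapses from mixed to uniformly positive, can be sketched with nothing fancier than the standard library. The sentiment scores and the thresholds below are illustrative assumptions, not tuned values from any real monitoring system.

```python
# Sketch: flag a comment stream whose sentiment suddenly becomes
# homogeneous. Scores are in [-1, 1]; data and thresholds are made up.
from statistics import mean, pstdev

def homogeneity_alarm(scores, min_spread=0.2):
    """Alarm when sentiment variance collapses and skews one-sided."""
    spread = pstdev(scores)  # how spread out opinions are
    return spread < min_spread and abs(mean(scores)) > 0.8

normal_week = [0.6, -0.4, 0.1, -0.8, 0.9, 0.2, -0.3]
suspect_week = [0.95, 0.9, 1.0, 0.92, 0.97, 0.99, 0.94]

print(homogeneity_alarm(normal_week))   # mixed opinions -> False
print(homogeneity_alarm(suspect_week))  # uniform praise -> True
```

The same pattern extends to the other detectors mentioned here: measure a baseline distribution, then alarm when the character of the incoming data departs from it.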
That is not something that you give, you know, the interns to
do. And on a related topic on
(43:04):
LinkedIn, Preeti Narayan points out that critical thinking
itself is not something that education or job experience
alone can teach. And I think this gets right to
the heart of the issue that Anthony, that you were both just
raising, which is Anthony just said we need people who can look
(43:29):
at these bodies of data. But there is a large judgement
issue here. It's not just the math.
There's intuition, there's the sense of things.
So what should organizations do? You need to
(43:52):
identify those employees in your organization that are going to
be the ones that are part of the red team, and maybe they don't
have experience. And so put them through a red
team boot camp. And the only way you do that is
exactly what you said: you need to have more and more
experience spotting what may or may not be problematic.
And the reality is, it is exactly that.
We develop heuristics, we use math, but
(44:13):
there's also judgment calls. And you know, you think about
this. This is in some respects when
you go to the military, they send you to boot camp and then
they send you to other training.They train you how to do your
job. I think we need to have
companies doing very fast-paced courses that teach people how to
do red teaming well. And then long term, broader in the
education space, it's not that you're
not going to use AI in education, it's actually that
(44:36):
you're going to critique the AI. What did the AI get wrong?
What was missing? What was wrong with the data?
We need to teach students how todo this.
And it's a whole new discipline.But we've had this before when
books came out and then later radio and television, like how
do you assess if that book is missing something or there's
something biased in it? We know there's books that were
just propaganda. So teaching these skills, but
doing so both in a way that addresses the here and now for
(44:58):
companies, but also the education pipeline, that would
be my recommendation. I want to jump on this.
One of the most important questions being asked in the
academic circles right now is how do we get this into the
mindset of a student body that is increasingly focused on, you
know, looking down at the phone and asking the phone a question.
(45:19):
How do we, and, you know, you have to be careful about talking
about teaching people how to think.
That's a very dangerous thing tosay, But teaching critical
thinking is very different than teaching people how to think.
So there are some basic tenets: what premises
am I anchoring on? What things am I holding
to be axiomatic? I'm not going to test them.
(45:41):
I'm going to assume that they'retrue.
What do I do with a postulate if I think something's true?
How do I set up an appropriate unbiased test?
What are the sources of bias in my answer?
And what treatments can I make for that?
What kind of elasticity is there? How wrong can I be and still make
the decision that I'm making? These are all super important
skills, all of which can be taught.
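The elasticity question, how wrong can my estimate be before my decision flips, can be sketched as a simple sensitivity probe. The decision rule, the break-even figure, and the demand estimate are all invented for illustration.

```python
# Sketch: find how much an estimate can err before the decision changes.
# The decision rule and the figures are hypothetical.

def decide(expected_demand, break_even=10_000):
    return "launch" if expected_demand > break_even else "hold"

def elasticity(estimate, step=0.01):
    """Largest downward error (as a fraction) that leaves the decision intact."""
    base = decide(estimate)
    error = 0.0
    while decide(estimate * (1 - error - step)) == base and error < 1.0:
        error += step
    return round(error, 2)

estimate = 14_000
print(decide(estimate), elasticity(estimate))
# Here the estimate can be ~28% too high and the launch call still holds.
```

If the tolerable error is tiny, the decision is fragile and deserves more scrutiny before you commit; if it's large, you can act despite imperfect data.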
(46:03):
But you're asking a body of knowledge that is already
really big and getting bigger to now include this other really
big and getting bigger thing. So, you know, it's not like
there's just an easy answer of, you know, go read this book.
And some of it is having people who used to have more
hair and have made some mistakes helping you
(46:25):
to make new mistakes. And some of it is, you know,
having enough diversity of thought in the room that
the brand new person can say, wait, the emperor isn't
wearing any clothes. Greg Walters asks an interesting
question. He's another regular listener,
he says. Can you extend your views regard
to embodied AIS and LLMS, which is robots and AI in the 3D
(46:50):
World? So how does this apply to robots,
and again, talk about data? So that's a new source of data
that a lot of people are excited about.
And it is something where we're going to need multiple AI
approaches, because you want to give commander's intent to the
robot, to say, grab that orange. But then as the actual actuators
(47:10):
on the arm of the robot and the hand of the robot are trying to
grab, you want to be adaptive. And so, yes, that is very rich,
but obviously it's a place where you need to make sure you're
applying the right tools. I would recommend, if you're
trying to actually navigate the world, we're probably not
going to look to generative AI for those tools.
We're going to look for more
Bayesian approaches and models that give a sense that
(47:32):
the AI itself has built a world model of whatever the robot is
trying to do. If we think of embodied AI as an
extension of edge computing, where we have more and more
processing power and more and more autonomous data that sits
out at the edge being used, this is not a new problem that
suddenly came up because we called it embodied AI.
(47:53):
It's a problem that we've been dealing with in the IoT, in
autonomous devices and so forth. So we should learn from that
body of knowledge and not start to like, reinvent a whole new
field here. Yeah, it looks like a human, but
it's an edge device that, you know, has autonomous data.
And I, whoever I am, know a little
(48:15):
bit about what we've learned so far about that.
And you know, we can start from there.
Wei Wang says on LinkedIn regarding data misuse.
Are there scenarios where we will be able to quickly track,
delete, or restrict access to these risky open data sources to
prevent harm? Track, yes.
(48:36):
Delete, probably not. Address, yes.
So you nailed it, right. Try to take something off the
blockchain, try to erase something from
the Internet: really hard to unring the bell, arguably
impossible. But that doesn't mean that you
can't publish the contrapositive information, and it
doesn't mean you can't train your models to recognize lying
(48:59):
liars lying, right?
Veracity adjudication is something I've spent a
tremendous amount of my career building approaches to looking
at the truth, the whole truth, and nothing but the truth.
That's three different dimensions of how do I know
what's true? We can do all of this stuff.
There's lots of ways to address that problem.
Unfortunately, one of them isn't claw it back from history.
Even the Wayback Machine has limitations.
(49:21):
Chris Davidson comes back and says do systems currently exist
that are successfully integrating today's LLM-based
Gen AI with active inference, and
what might that look like? Are they completely different
from one another? There are early-stage efforts, but I
wouldn't say there's anything that we can publicly point to.
(49:41):
I don't know if Anthony knows of anything we can publicly point
to. I know there are things going on
behind the scenes.
That's why I'm smiling. No, there are not.
Nothing public, not to say there's nothing, but it's a good question.
Chris Peterson is thinking very broadly and he says should we be
(50:01):
thinking of AI like the old school arms race, US versus
China, versus high-end chips to the UAE, since it can be done by
anyone? Could there even be an AI
version of a non-proliferation treaty?
No, we should be thinking of AI like physics in the 1940s,
right? Physics was there before anybody
(50:23):
discovered it. Physics was there, and people got
good at physics or were not good at physics, and they shared what
they were learning or they didn't share what they were
learning and they used it for good or not good.
It's a lot more like that than it is like something that can be
somehow corralled into national boundaries.
I'm sorry, David, did you have a different opinion?
No, that's spot on to what Anthony was saying.
(50:44):
I tell people it's just math. It's very fancy math, but it's
just math. I would also say let's learn
from history, because back in the late 1800s, all the nations of
Europe got together and said chemical weapons really bad,
let's ban them. So in 1899, they banned them.
And come World War One, guess what?
Both sides had been quietly researching how to do it, and
they had done it in quiet. And so anyone who thinks some
(51:05):
international agreement will save us from ourselves, I hate
to say it, no. But again, you have to recognize
it's like physics. And so we have to come up with
other answers, yes. I heard a great quote yesterday
and I wish I knew who said it: the big difference
between genius and stupidity is that genius has limitations.
Arsalan Khan points out that humans collect and create data
(51:28):
and then humans are replaced by AI that uses that data.
Accenture just laid off 11,000 people because of
AI, and this is an increasing trend. Comment very quickly.
It's hard to distinguish: what is the case of people
being laid off because of AI? What is the case of the nature
(51:48):
of business changing? We also live in a shareholder
economy where right now shareholders will actually
reward companies that reduce their cost.
And the number one way to reduce your cost is workers.
Now you can say it's because of AI, but it might just be I'm
trying to reduce my cost. And so the nature of work is
changing and will continue to change.
But to say it's because of AI I think is an oversimplification.
AI can make a pretty good PowerPoint deck, and then all
(52:10):
I have to do is edit it, right? So I don't need a room full of
people to get me 80% there. But the reality is that AI is
being super helpful in many different areas and vibe coding
for example has its role. For example, I'm not a developer
but I can use vibe coding to create an app that is
(52:35):
useful. And there are now tools that let
you integrate. Lovable lets you integrate with
the back end and it will create the database and the security
roles and so forth. And you know what?
I can use that tool internally for my own organization.
I can even sell it. More and more people will become
solo entrepreneurs. I think that is the shifting
(52:55):
nature of work. And so we are definitely seeing
a work change. I want to give some people some
hope though, too. One of the things I'm seeing
that's really an interesting trend when it comes to data and
AI is more and more people with rare diseases or a family member
with rare diseases are actually asking for their medical
records. And they're actually then
using the tools in Health Level 7 (HL7), which is a nonprofit that
actually makes sure you can request those records and
(53:18):
that they're in a form that you can actually analyze.
You can now actually get your medical records in a digital
form and then go to whatever AI you want and say,
what has this doctor missed or what else should I be thinking
about in terms of treatment for myself or my family members?
So that's, that's a positive sign.
I do think we need to help people cross the chasm between
the work they used to do and the work they're going to do,
whether that's community efforts, whether that's company
(53:39):
efforts. The nature of work is definitely
going to change. The nature of work has been changing
for a long time. I mean, most of us would have
been farmers 200 years ago, and we're not anymore.
My greatest hope, Michael, is that anyone that is
disintermediated or feels disintermediated by AI doing
their own job feels liberated to go do something new and more
amazing, because there's plenty of, David's talking about, you
(54:00):
know, known unmet needs, problems that we know we're not addressing.
There's also those unknown unmet
needs that we haven't even discovered yet.
Oh my gosh, smart people, please go do some of that instead of
rebuilding a better, you know, whatever.
What do you say, Anthony, to the person who is being displaced,
(54:24):
who has kids in college, who has medical debt and is living
paycheck to paycheck? You know, telling that person
you should do something new is not very helpful.
I said that was my hope. That is certainly not the way I
would approach that person in that particular situation at
(54:47):
that moment. And don't think I don't get
calls from people that are in exactly that situation.
I'm sure David does as well, several a week, you know.
And, you know, I start the conversation.
There's a, you know, fine line between being helpful
and being annoying. But I try to start the
conversation by understanding, you know, the grief and
(55:08):
the frustration and the anger, as best I can,
because it's not me enduring that.
But then from there, once that person gets beyond that, very
often the people that we're talking about, when you look
back on it a year later, that was the best thing that ever
happened to them, because it unchained them to go do this
other thing. So the journey is, how do you figure that out?
(55:32):
What do you have? There's that moment in The
Princess Bride where they say it's hopeless,
we can't attack the castle, until they realize they have a cloak.
Oh, now we have a plan, right? What's your cloak?
What is the thing that you have that creates the
rock that you can stand on?
And I know, you know, it might be that you have to go
do something you don't want to do in the short term because of
(55:54):
all those needs that you have. I am not at all unsympathetic to
that. I'm in no way saying let them eat
cake. But what I am saying is that all
of this abundance of capability that we have that is right now
largely being used to create a bunch of party tricks could be
used to do some amazing things. And boy oh boy, you get to do
(56:15):
some of that. You want to acknowledge and
recognize the grief and the anxiety that anyone experiencing
that is having at that moment and then at the same time strive
to find ways to help them see that they can be a survivor and
a thriver as opposed to a victim.
as to what happened here. I think that also puts a
requirement on all of us who are not in that unfortunate position
(56:36):
to advocate. What is the communal response?
What is the response that we expect from people?
Because we're not advocating for it.
It will happen to one of our friends or one of our family
members, and I know it. I mean, I probably do get
a call about once every week or so.
And then obviously, I'm here in DC where a large number of
people might actually be facing the same situation, not because
of AI, but I put that out because we do need to think
(56:59):
about, in this era of displacement, can
we find ways, we should find ways, to reduce the anxiety and to
help people navigate this? And I feel like right now
there's more we can do as societies, as communities, as
countries. On that, do I say, can I say,
empathetic note? Can I say hopeful note?
(57:19):
I'm not sure. We're going to
have to revisit this. Yeah, I think we're both, I
don't want to speak for David, but I think we're both hopeful
that, you know, humans are amazingly resilient and
amazingly good at, you know, turning corners and doing things
that surprise and delight us as well as terrify us.
We get to choose which of those we do.
And I'm, I'm on the camp of surprise and delight.
(57:41):
So we'll, we'll get there. This is a time we will look back
on and say that's exactly when we were able to start doing fill
in the blank. Maybe it's vibe coding, Michael,
I don't know, but it's probably not.
Or maybe just teach yourself to be an AI red teamer, which I
would recommend instead.
But yes, that's me. Yeah.
I mean, you know, far be it from me to express an opinion, but I
(58:01):
think that, you know, we are certainly at a point of
inflection. The most difficult thing about
change is it's very hard to notice when you're part of it.
Can I add one final comment that I think we can all agree on?
Down with AI slop. Yes.
The trick, though, is identifying what is AI slop.
But anyway. And on that note, a huge thank
(58:29):
you to Doctor David Bray and Doctor Anthony Scriffignano.
It's been a great show. Thank you both again for taking
your time to be with us. I'm very grateful to you both.
Thank you, Michael. Thank you very much.
And an enormous thank you to the audience.
What will happen next is we will edit this video, do some light
(58:52):
editing, create a summary, we'll put it on the CXO Talk website,
and then with the transcript and the summary, you will have a
document that you can keep, go back to, treasure, and refer to.
So I urge you to do that. Subscribe to the CXO Talk
newsletter. Thanks so much everybody.
(59:14):
Thanks to Doctor David Bray and Doctor Scriffignano.
Take care everybody, have a goodone.
Bye now.