Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome to CXO Talk Episode 864. I'm Michael Krigsman, and today
we're exploring how to navigate truth and deception in AI as
part of a special series on AI ethics.
Our three expert guests are Doctor Anastasia Lauterbach,
(00:21):
Doctor David Bray, and Doctor Anthony Scriffignano.
We got a lot of doctors in the house.
All of you, welcome to CXO Talk. I'm delighted that you're all
here. Thank you very much.
Thank you for having us. Great to be here with you.
Anastasia, let's start with you very briefly.
Tell us about your work. I spent most of my time with my
(00:44):
company. I'm a founder and CEO of AI Entertainment, and this company democratizes knowledge of AI for AI muggles, not wizards. And this means that I try to translate AI, robotics, and quantum computing from difficult to easy to understand. For example, I write books like this. This is the Rome and Robbie series,
(01:04):
and this is a real story about a real cat and his friendship with a robot, and everything that is going on in this book is mirrored in the science-behind section. Those are over 100 articles explaining AI in very easy language.
I love it. An AI cat.
(01:25):
Absolutely. He's an influencer.
And by the way, did you know that 27% of web traffic is cat-connected, cat-linked content?
Cats are addictive. David, tell us about your work.
I try to bring rationality to the world in terms of how technology is changing politics, geopolitics, societies and the
(01:46):
like. I do so both as a distinguished fellow with the Stimson Center as well as with Business Executives for National Security. And then on the side I have my own S Corp, which tries to help boards and CEOs do the same.
Anthony Scriffignano, welcome back.
Tell us about your work. I'm a distinguished fellow with
the Stimson Center along with David Bray.
My philosophy is to try to teach
and learn every day. No fair cheating on that.
(02:08):
You have to do both. I've done a lot of work over 45 years with data and data science and AI before it was a thing. I have multiple patents in my name, lots of invention around things like identity resolution and veracity adjudication, which we'll talk about today, judging the truthiness of things, and also lots of effort around
(02:29):
finding people that are doing new bad things.
So novel malfeasance. So these are all the things that
are kind of at the edge of computer science and among the
hardest of the hard problems to solve.
Anastasia, why is building trustworthy AI versus deceptive
AI important? Trust is something tremendously
(02:52):
important in human psychology, one of those constructs which is paramount for human relationships. So no one is going to disclose, "I am building something which is based on deception." And there are two economic arguments in favour of paying attention to whether AI is
(03:13):
considered trustworthy or not. And the first, I would open up
with a quote by Kevin Kelly, who is a founder of Wired. He said that the business of the next 10,000 companies is very easy to predict: you take X and you add AI to it. And obviously this remark, which was made I think around
(03:34):
2016, 2017, is now backed by figures, by financial figures. For example, by around 2030, we expect the AI market to grow to 1.85 trillion U.S. dollars globally.
And I mean this is tremendous growth.
(03:55):
We are talking about a compound annual growth rate of 33% from 2022 to 2030. And there's another caveat to that. When I looked into the valuation of businesses around 2012, I could see that the portion of digital assets in this valuation of companies was around 12 to 13%.
(04:18):
And after the COVID pandemic, we are talking about 90%, nine-zero. And digital assets, it's obviously all about data, it's about intellectual property, it's all kinds of systems and processes to do something with this data. So this incredible growth has something to do with the
(04:40):
progress in AI and we have to pay attention.
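To make the growth arithmetic concrete, here is a minimal sketch in Python. The 2022 baseline is an assumption chosen only so the numbers stay consistent with the 33% CAGR and the roughly $1.85 trillion 2030 figure quoted above; actual market estimates vary by source.

```python
# Minimal sketch: what a 33% compound annual growth rate implies for the AI market.
# The 2022 baseline below is an illustrative assumption, not a sourced figure.
base_2022_usd_bn = 190.0   # assumed global AI market size in 2022, billions of USD
cagr = 0.33                # compound annual growth rate quoted in the conversation
years = 2030 - 2022

projected_2030_usd_bn = base_2022_usd_bn * (1 + cagr) ** years
print(f"Projected 2030 market: ${projected_2030_usd_bn / 1000:.2f} trillion")
# With these assumptions the projection lands near $1.85 trillion,
# the order of magnitude cited in the conversation.
```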
If I were to summarize what Anastasia just said, everybody's
leaning into this and everybody's leaning into this,
but it's a different this. So if you look at what a lot of
organizations want is they want to be able to check the box and
say we're doing AI, we're doing Gen AI, we're doing large language models.
(05:00):
Look at us, we're great, right? That's great.
What's the problem you're solving and by what right do you
think that that thing that you're doing is going to help
address that problem? There's in many cases a great
argument to be made that you're just accelerating the speed with
which you hit the wall. You're helping people understand
what you don't know. You're helping people understand
more about your questions than about your answers.
(05:23):
You're exposing yourself to potentially looking to your
customers as if, you know you focused on that and not this
other thing that they want you to focus on.
So it's, yeah, it's really important to move into this
technology. Never would I say don't do that,
but I would say, you know, just because you see a new tool at
Home Depot doesn't mean you go buy it.
(05:43):
You have to have something to address and you have to have a problem that you're trying to go after.
And you also have to understand the unintended impact of those
actions you're about to take. May I just quote a poet from
England from the early 18th century, Alexander Pope?
He said trust not yourself, but your defects to know.
And this is actually emphasizing that it's not just
(06:05):
self-reliance, but it's about humility. And I think this humility and thoughtfulness is something which we need to learn more and more and apply more and more when we move into the world with AI.
One of the questions I always ask when someone is, you know, running headlong into, you know, let's-AI-our-way-out-of-this-problem is, you know, two questions actually. One is, what do
(06:28):
we have to believe in order to go down that road?
And the second thing is, but by what right do you think the
answer is contained in what the AI is going to look at to answer
that question? Imagine during COVID if you had
ChatGPT and you said, you know, what are the most promising
approaches to responding to COVID?
Well, we didn't know, right? And so you would, you would
(06:50):
literally get what we love to call hallucination right now,
but on steroids, right? Because the LLM doesn't know
that the answer isn't in the corpus of data that it's looking
at. And it's going to give you an
answer anyway. And it's going to look awfully
good. And it's going to really look
like a human being wrote it. And it's going to be wonderfully
articulate, but just as good as any other rumor you just heard.
(07:11):
So can we say that the summary here to make this very
simplistic is AI is around us, AI is surrounding us, AI will
continue to do so and therefore we need to know what's true and
not true. I mean, that seems pretty
simple. Perhaps instead of calling, when
(07:32):
we say AI, artificial intelligence, we should call it alien interactions, because these machines are not at all thinking like you or I. And that's a very real danger when you read articles or for whatever reason begin to anthropomorphize these machines. I mean, just statistically, the human brain uses between 15 and 20 watts to do
(07:54):
everything it does. These models are looking at upwards of 2,000 to 5,000 watts or more.
I mean, when we hear about companies thinking about building nuclear power plants to power their alien interactions, their AI, that might tell you that this has simply been designed to look like it's thinking like a human, when in fact it's not.
That then also gets to the deeper question I'd like to say,
Michael, which is if you remember the Turing test, the
(08:17):
Turing test was intentionally trying to have a machine deceive
a human into thinking it was human.
Guess what? We have been wildly successful
at rolling out technologies thatwill now convince you that the
responses are giving whether they're in text, whether than
audio, whether in video appear to be human like.
But that's a problem because as Anthony was mentioning in his
(08:40):
background, I mean, fraudsters, scamsters, they would love to
use this technology. And we've all probably seen in
the last two years a step change in scam phishing emails, where they are now at the point where you can no longer use misspellings as the tell for whether it's a phishing e-mail or not. And now, even better, they're often referencing another family member or friend because they've combed our social media networks. And so they are actually, unfortunately,
(09:04):
really successfully passing the Turing test.
But the whole Turing test was again a machine trying to
convince you it was human. That might not be the best tool, exactly, as we've been saying here, for society writ large.
There's another little nuanced
thing happening right now where we now have to prove to the
machine that we're a human, right?
So it's sort of the opposite of that.
And if you look at CAPTCHA and the way it was originally
(09:25):
designed, it was just that OCR was kind of not very good.
So if I tilt the letters and I put them together, then OCR
doesn't work and the human can read it and we know the answer
and that's great, right? Well, then people started to use
the, the responses from CAPTCHA to train better OCR.
And, lo and behold, all of a sudden the computer gets just
as good as we are at reading kind of crappy text.
(09:47):
So then they start introducing pictures.
And introducing pictures is not an answer, it's not an end to that. It's just a way of kicking that
ball a little bit further down the road.
I was listening to a talk recently with one of the people
who actually invented the technology, and he was saying, number one, he wished he never did. And number two, you know, maybe in the
(10:08):
future, you'll have to do things like walk three steps with your
phone or jiggle your phone or tilt your phone down or do
something. And, and now I can just imagine
us OK, do 10 push ups. Like, at what point do I have to
stop proving that I'm human? We don't have a good answer to
this right now. It's not because the AI is
thinking. It's because it's very good at
(10:29):
watching us behave. That's so true.
And actually we need to establish a language to talk
about AIS and machines, and we're still using human terms to
describe what we expect. And in social science, for
example, tries, trust is described as the belief that
another person will do what is expected.
And now it's not a person, it's a machine or piece of code and
(10:52):
what is really expected. If I use my vacuum cleaner, I
expect it to do a certain task. But this is very different from
an LLM. And obviously, because I am in AI and I teach AI and cybersecurity at the university, I know it has nothing to do with language. It's a statistical stream of tokens, and a token isn't even a word.
(11:14):
But if we take, for example, English, a token is maybe two-thirds of an English word, statistically. But you know, humans expect that it understands, and it simply doesn't, and it's not even based on a world model.
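To see what that token-to-word ratio looks like in practice, here is a minimal sketch using the open-source tiktoken tokenizer; the specific encoding name and the resulting split are illustrative, since every model family uses its own vocabulary.

```python
# Minimal sketch: tokens versus words for a short English sentence.
# Assumes the open-source `tiktoken` package; other tokenizers split differently.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a common GPT-style vocabulary
text = "Trustworthiness is not something a statistical model understands."

tokens = enc.encode(text)
words = text.split()

print(f"{len(words)} words became {len(tokens)} tokens")
print([enc.decode([t]) for t in tokens])  # many tokens are sub-word fragments
```

On ordinary English prose the ratio usually comes out around three words for every four tokens, which is the rough "a token is two-thirds to three-quarters of a word" rule of thumb mentioned above.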
A couple of things to sort of add to the nuances.
First, let's go. Let's take the 30,000 foot view
(11:34):
before we dive in, which is backin the 1920s when this
disruptive technology called radio came out, there were
pundits that were saying something similar to what we're
seeing with AI nowadays, that this was going to cause World
Peace. It was going to have
understandings. World leaders would talk to each
other on a really regular basis on radio.
And then fast forward to 1933, about six or seven years later,
those same pundits were saying that this was going to be the
(11:56):
end of society as we know it. It's going to be the dictator's
tool kit. And so I raise that because
right now there are a lot of breathless articles that are
either saying that AI is going to save us all and the, you
know, the good times are upon us, or is somehow going to be the end of our societies. We're probably doing the same
thing to the technology that we did to radio and we're missing
that. Again, it's just a tool.
Now, the second thing I would say is I think we are often
(12:19):
using AI as a stalking horse or as scapegoat for things that are
deeper in society. Like we ask questions like, how
do you know if AI is ethical? Well, how do I know if an
organization is ethical? How do I know if an organization is not biased? How do I know if an organization
is not making bad decisions or hallucinating?
And So what this may point to iswe need to do a better job of
(12:39):
figuring that out and how we actually do it, whether it's a
machine, organization, or person.
Now I will give a slightly different definition of trust
from the background I come from,which is the willingness to be
vulnerable to the actions of an actor you cannot directly
control. And that actor could be an
individual, could be an organization, could be a
government, could be a machine. And what has been shown is that
we humans are willing to be vulnerable to those actions of
(13:02):
an actor that we cannot control if we perceive benevolence,
perceived competence, and perceived integrity.
So exactly, down to Sasha's point and Anthony's point, there is no way of assessing the benevolence of an LLM.
There is no way to assess the competence.
I mean, these things are just doing very fancy pattern
matching. And in fact, if it's not in the
training data, they will give you something that is made-up,
(13:23):
but then also on integrity: you say to them, that's not right, and the responses will be very cheerful in saying you're right. And then when you say that's not right either, they'll tell you you're right again. And so integrity, competence, and benevolence aren't there.
But let's step back and say, how do people assess that? Given the fact that we're now connected digitally, how do we assess that CXO Talk is benevolent, competent, and has integrity? How do we, you know, do
(13:44):
that for governments? How do we do that for the world writ large? And maybe this is the challenge
hitting societies, free societies in particular, is we
lack the ability to adjudicate at speed.
A lot of things, the terms that you're using, David, are also
epistemologically fuzzy, right? So benevolent might be
benevolent from your perspectiveand not so benevolent from my
(14:05):
perspective. And we tend to use terms you
didn't, but we tend to use termslike we're good.
Well, good for whom, right. And you know, I, I have a, a lot
of background in emergency medicine, right?
So one of the things they always tell you is expand the circle.
You know, if you're going to make a complicated decision,
get, get advice from other people and make sure that, you
(14:26):
know, those other people will bring in different perspectives.
Well, that's great until something's on fire or until
somebody's not breathing, and then somebody has to make a
decision, and it might be the wrong one, but you have to make
that decision with some degree of cut.
Now, after the fact, everybody will swoop in and judge you that
you should have done this, you should have done that.
Why didn't you do this? How didn't you know about this?
(14:47):
At some point, AI is going to be making decisions for humans
because the argument will be that AI can make a decision
before the human decision becomes irrelevant, right?
Do I cauterize this vessel or that vessel? Make a decision right now. I don't. Do you know all the vasculature in the brain, and can you let the AI do it? Right? I don't know.
(15:09):
But I could see how we can get there and I could certainly see
how we can get there in in a nation state, military
industrial kind of context. I could see we can get there in, you
know, in space. There's, you know, it takes so
long for a signal to get from one place to another that you
need to have autonomy in order for the thing to be able to land
or do whatever it's trying to do.
But as we give up this agency, this concept of agency, of
(15:30):
giving the right to the machine to make the decision on the
behalf of the human, the values of the human have to be imposed
into what is just a bunch of code.
And we are not rules based creatures.
We are very, very squishy about what we do and we make decisions
and we change our opinion based on new facts and new modes and
(15:51):
new ways of learning all the time.
AI is horrible at that right now.
Please subscribe to our newsletter and subscribe to our
YouTube channel. Check out cxotalk.com.
We have great shows coming up.
We have a question from Twitter and this is from Arsalan Khan
(16:13):
who says we know that AI can be trustworthy or not trustworthy
based on many factors. How can normal non-technical
people know if AI is affecting their lives, if it is ethical?
And is there an opt out for AI? And we know very often there's
(16:33):
not. And then let's also talk about
the role of software companies. AI is surrounding us.
So if you have a smartphone, you are immersed in AI. So we can really discuss all those technologies, like, you know, LLMs on smartphones, but even how we see our calendar
(16:54):
invites and how it correlates with tweets or whatever, some
notifications from LinkedIn, it's immersed in AI.
Some navigation technologies are connected to automation and to AI. And we just need to think about how we should behave ourselves. And frankly, unfortunately, and
(17:14):
this is I guess the difference from radio technologies and everything which preceded it, we must expose ourselves and learn. And I think that AI literacy and technology literacy is something
which is a must from an early age.
It's like basic financial literacy.
Obviously, there will be always some treasury experts and
(17:36):
taxation experts and whatever some valuation experts.
But everyone must understand that this is revenue and this is cost, and one minus the other produces some kind of profit. And I kind of love comparing AI, and you know, dealing with AI, with the
kitchen, right? Maybe because I'm a woman.
(17:58):
And obviously you might be a great chef even if you don't
work in high cuisine. And you can produce fantastic
stuff and be very great and performance some shows.
And probably this is the ultimate level of
sophistication. But everyone knows how to fry an
egg, and I guess with the eye wemust learn how to fry an egg.
(18:20):
And this is basically why I learn hits and their clever
appearance to think about those concepts.
What is an LLM? What does it do?
Who is Wolfram? Why should we know about him a
little bit? What does it mean to have a
robot in our house? What is the difference between a
robot and let's say, a vacuum cleaner?
(18:43):
Right? So I think we just must embrace
this knowledge and actually evolve as communities to talk
about it, to ask difficult questions, and to understand
that there is a purpose behind every single application and
service. You can't go through a jungle.
(19:03):
You must learn, and then you will have some openness to maybe be more precise on what you need. And finally, I love the quote of
Pablo Picasso. He said machines are quite stupid; they just produce answers. And I think this is up to humans
to ask questions, and we need to teach humans how to ask those
(19:25):
questions. And this is basically a fabulous
opportunity that is opening up in front of all of us to learn
more about humanity and what it really means in the world, which
is partly dominated by AIs.
In Western society, if you apply and your job application has a non-Western name, you are less
(19:47):
likely to be selected. A Western-name application is three times more likely to be selected.
And that's not an AI problem whatsoever.
Everything we've been talking about here is like, oh, how do
we trust the machine? I'm like, no, no, no, no, we
have deeper systemic issues. And so almost it's like we're
almost pitching AI and saying, how do we ask these questions of
AI before we've even paused and say, how do we actually do
better as humans? And I really appreciate it.
(20:10):
So I would also say that as we go forward from this.
It's also thinking about that we've had these challenges
before. I mean, most of us are not
medical doctors yet. We have to go see a medical
doctor. How do you know when the medical
doctor is going to prescribe an approach that they really have
your best interest in mind? And the way we saw this in the
past, and unfortunately, the Internet kind of squashed this,
(20:32):
was we did have professional societies that require their
members to have knowledge of something, experience in
something, and then pledge to an ethical oath.
And if any point in time, the member had a concern because
again, most people cannot adjudicate, did that doctor act
in your best interest or not? It was other medical doctors
that would decide to either say, yes, they did the right thing, or no, we're going to censure them or remove their license.
(20:54):
Well, unfortunately, what happened with the Internet is it
made it so that everybody could be armchair quarterbacks.
But I raised that because I worry that we will become so
fixated on we must understand AI, we must understand and
educate and everything like that without pausing and saying: right now there are plenty of organizations where there are issues internally with bad decision making and biased decision making, as well as externally in how they're actually interacting
(21:16):
with employees or customers. And so I think we actually need
to figure out a deeper approach, not just about the machine.
Implicit in what you have all been saying is this idea that
benevolence is our fundamental goal and we rely on the
(21:36):
benevolent human spirit. And we hope that... Anthony's shaking his head. Well, this is my interpretation.
So let me ask my question anyways.
Before it's even edited, before it comes out
of my mouth. So Anthony's trying to edit it
in my brain. So my question.
So my question is this. If we think about the software
(21:57):
companies, over my career, I've consulted with over 100 software
companies literally. And as far as I can see, the
goal of software companies is innovation with the goal of
making money. And yes, software companies talk
about we want to change the world.
And I'm sure that there's that. That's also true.
(22:20):
But the bottom line is it's about money.
And so when we talk about trustworthy AI versus deceptive
AI, how do we overlay this software company issue on top of
it? Anastasia thoughts on this?
30 years ago I was working at the Munich reinsurance company as a liability underwriter, and even now there's no such
(22:43):
thing as a software liability within the product liability
category. So the European Union has updated its product liability directive, but the rules will be introduced only in December 2026, and in the United States, to my knowledge, please correct me if I'm wrong, so far there aren't
(23:04):
any kind of, you know, rules which are saying that software vendors fall within the liability rules that are applicable to, for example, automotive companies or consumer electronics and whatever. So we need to somehow calibrate
what we are talking about. For example, in July this year,
(23:26):
there was a bug which was actually pushed into software via an overnight update from a cybersecurity vendor.
And then all Windows machines went down at the airports and I
think around 5000 flights were either cancelled or delayed
(23:47):
globally. And I think in the US the figure
of financial damage which is known today is around 5.4 billion U.S. dollars, so damage from this update. There is some legal process which is going on, but as for there actually being some damages paid,
(24:07):
the prospect is really, really low, because it's very
difficult to determine what is the negligence or misconduct
here. Then tort law actually excludes software from these liability things. And then obviously, if there are multiple parties involved, who
did what and how, this is really, really complex.
(24:29):
And if we look, for example, at the food industry or the pharmaceutical
industry, the world looks completely different.
So the world is being eaten by software.
So Marc Andreessen said it in August 2011.
But there is no product liability for software vendors.
And now we are going into AI. Yeah.
So fantastic. So European Union is
(24:52):
implementing the European AI Act, so all vendors must comply. It has really killed the AI ecosystem in Europe. By the way, I represent myself, I don't represent any big brand. So I was really against the
European AI Act because it did not solve one single risk in AI.
(25:13):
It imposed a huge amount of costs on AI companies and investors. Venture capitalists are saying we are really keeping our fingers off the European ecosystem because it's all too costly, and we need now to balance what we want. At the same time, five European
countries, which is Italy, Belgium, Austria, Germany and
(25:35):
Netherlands, are moving into mass retirement this decade.
And if you have something like that, you must think about
automation and AI. And there's no policy, there's
no thinking how to actually balance the inevitably declining
GDP due to this mass retirement. And we have the European AI Act,
(25:56):
so Europe shot itself in the knee.
This is my interpretation with all the caveats and that this is
complicated, that there are issues, that thoughtfulness is
needed, and all of the above. I'm in violent agreement with
your position on the dangers of regulations.
(26:16):
Sort of. You know, I'm a scuba diver.
The regulator's pretty important, right?
You die without the regulator. But if if you over regulate,
then you can't breathe. And there's a, there's a
tendency right now to take whatever pre-existing laws there
might be GDPR in this case and write something that looks like
that for AI. Well, it's not the same thing.
It's not even close to the same thing.
(26:38):
And So what you wind up with is this tapestry of complexity that
makes it very hard to take a step forward.
Now, I'm not saying that that was the right thing or the wrong
thing because like you, you know, I'm not, I've, I work with
those people all the time, but I'm not in that space and they
have to do something. I get it.
The software issue that you talked about, the
(26:59):
CrowdStrike thing, you know, was delivered kind of by
Microsoft, right? Because most people had
Microsoft. But you know, if you play it
back and you unpeel the onion and who produced the blank file
that got distributed with the update that got processed by the
software that was acting like a driver that was allowed by the
kernel of the operating system. This is a nightmare to say where
(27:22):
is the smoke detector that should have figured this out.
And the world learned about how its own security was working a
lot that day, right? So that was a very telling kind
of moment for those in cybersecurity.
They kind of understood it relatively quickly and it wasn't
a big deal to fix. But Oh my gosh, the impact of it
(27:43):
was just epic. That also tells a story about
how connected the world is right now and how much we have to be
careful as we take these steps forward.
And so that is a counter argument for some regulation to
say you can't just do whatever you want to do and apologize
later because it had never happened before.
So we're in a very difficult time right now In in
(28:05):
epistemology, they're sometimes referred to these as critical
incidents or critical moments where there's a point of
reflection going on. It's hard to see when you're
part of it, but we are definitely part of it and we're
making little decisions right now that are going to have very
big impact later with imperfect information.
These companies that your question started with, you know,
(28:25):
three hours ago, Michael, the question was something about,
you know, what companies can do. You know, companies are trying
to serve their shareholders, their customers, their employees
and their future market. And it's impossible to serve all
those at the same time. Absolutely impossible.
So if you want to maximize shareholder value, you wind up
doing kind of really stupid short term things that often
kill your company. If you want to maximize, you
(28:47):
know, employee satisfaction and engagement, then you wind up
doing things your customers don't care about.
If you do everything your customers want, then you don't
make any money because your margins go to zero and then your
shareholders get upset. So it's just a whole bunch of
ways to kill yourself. So now throw AI into this mix
and then throw all this regulation into this mix and you
get where we are right now in these boardrooms.
(29:07):
This is not an easy place to be. Where are we?
We are very similar to when the automobile first came out. It's worth knowing how long it took before seatbelts arrived: more than half a century. Now I'm hoping with AI we don't have
that. But again, to try and think that
we're going to immediately get it right within the 1st 6 to 12
months, probably not going to happen.
But let me zoom in on a very specific point, and I think
(29:31):
Anthony sort of said this when we were talking earlier about when AI works and when it doesn't.
And what I think we should unpack is that not all AI is created equal. You know, we've been talking about generative AI, but there are other forms of AI, ranging from expert rule-based
systems to decision support systems to computer vision.
And I think the way you're going to trust computer vision, which is deterministic, which is actually much more
vision, which is deterministic, which it is actually much more
trustworthy and is not prone to any hallucinations whatsoever,
is dramatically different than generative AI.
And I raise that because I think, you know, if there is
anything that needs to be talked about at the boardroom level,
it's first understand the different tools when we talk
about AI, that it's not monolithic and you need to
(30:12):
understand whether you're reaching for a hammer or a
screwdriver. And the trouble is we're writing
regulations as if it is monolithic.
And then the other thing that gets to me is how we somehow think there's going to be one AI regulation to rule them all, when
we know when IT systems came online, these things called
advanced data processing systems, later IT, you know, you
(30:32):
don't write IT regulation for health and think it's just as
good for the defense sector or just as good for the commercial
recommendation sector. I mean, there are existing laws
in the United States, there's HIPAA, there's the Bank Secrecy
Act for banks. And so I actually think the more
pragmatic approach is to go back to the existing rules that were
written for both human and IT systems and say, where does AI
(30:54):
fail? Because maybe it's too fast in speed or scope, and upgrade those existing laws as opposed to trying to write new policy. And I'll give a very specific
example right now in the United States, and it's not just the
United States, there's a thing called Health Level 7.
It's the international standard for interoperability across
medical systems. There is a standard for tracking
the provenance of a decision. You do record in your medical
(31:15):
record when a physician made a decision and on what data they
did. You don't record in the medical
record when an AI does. And the number one usage right
now for medical settings is actually physicians love to
type, either type or dictate three to four short bullets and
then say give me 20, sorry, 2 to 3 pages of physician notes that
I will then put into the file. Well, aside from the fact that
(31:38):
probably, one, both the company or organization that is delivering your health care should know: was that physician note actually
written by the physician or written by an AI?
And if so, which one? And the patient should know that
too. There is a very real risk that
if that AI output is later read back in by the same AI, there's
a thing called model destabilization.
(31:59):
Sometimes it's referred to, in jargon, as AI self-cannibalization, where basically the model, and I'm oversimplifying, over-regresses and starts to collapse on itself and starts to make bad decisions.
And so I actually worry, Michael, that the question about
deceptive versus trustworthy AI,again, it gets back to human
organizations and what humans do.
We may see lawsuits three to five years from now because
(32:21):
companies didn't at least take the first step of tagging and labeling when a decision was made by an AI or an
AI plus a human and then later if that was then fed back into a
machine. Did they do the
appropriate actions to avoid model destabilization?
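A toy illustration of the "read your own output back in" failure described above: repeatedly fit a simple model only to samples drawn from the previous generation and watch the spread shrink. This is a deliberately oversimplified Gaussian analogy, with hard truncation standing in for the way generators favor typical outputs; it is not a simulation of a clinical language model.

```python
# Toy sketch of "AI self-cannibalization" / model collapse:
# each generation trains only on the previous generation's (truncated) output,
# and the learned distribution steadily loses its spread.
import numpy as np

rng = np.random.default_rng(42)
mean, std = 0.0, 1.0          # generation 0: a model matching the real data

for generation in range(1, 11):
    samples = rng.normal(mean, std, size=50_000)
    # Generators rarely emit their own tails (think truncated / top-p sampling),
    # so the next generation only ever sees the "typical" outputs.
    kept = samples[np.abs(samples - mean) < 2.0 * std]
    mean, std = kept.mean(), kept.std()   # refit on synthetic data only
    print(f"generation {generation:2d}: std = {std:.3f}")

# The spread decays geometrically (roughly 12% per generation in this toy),
# which is the flavor of degradation being warned about when AI-written notes
# are fed back into the same AI.
```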
I really love this example from the medical industry and it might go
(32:41):
into further industries like food industry, but I have two
further suggestions for regulators and for those who are
serving on boards. So one is, and by the way, this is not an ultimate solution, but we know that today more money is being spent on capability research than on transparency, safety, you name it, responsibility, and it's like 97%
(33:04):
of all the papers are published on those capabilities and only 3% on XAI, so explainable AI and whatsoever. So we might motivate companies to spend more money on that, or to support a foundation or an institute or university to invest in that.
That could be motivational and not just punitive if we go into
(33:27):
regulation. And then I'm really, I mean, I
absolutely I'm scared about this issue with cybersecurity because
AI is a tool, not just in the hands of good people, but
criminals are using this really every single minute.
And I think we just need to rethink how we approach
cybersecurity and how we regulate cybersecurity, things
(33:47):
like what is the exposure, cyber exposure? Can you quantify it?
Can you name what is this financial figure?
So this is tremendously important and very few people
are talking about it. You mentioned quantum
technologies early on, and certainly we are on the verge of
the point where everything that was encrypted will now be
(34:09):
readable, even the data that was stolen, right?
We are on the verge of, we will have a way to do quantum
decryption before we will have a really good way to do better
quantum encryption other than scattering all the keys all over
the place. So the world is going to get
much more complicated. We have to start thinking more.
And I'm not going to say start thinking, because there's some
(34:30):
fine folks out there thinking about this, about novel cyber
malfeasance. What are the types of not just
the bad things that happened today with data exfiltration and
and Trojans and malware and all this stuff.
Yeah, we need to get really even better at that all the time.
But that's necessary and not sufficient.
We have to, I spend a fair amount of time and David, you
(34:53):
know this, thinking about what future bad guys might be able to do in less than a generation with technology that I suspect will be available by then. And it's not hard for me to come
up with. As an example, one of the things
that we've looked at is using flocking and swarming
algorithms. So the way flocks of birds can
bifurcate and swarms tend to get bigger and bigger, right?
(35:14):
So there are clustering algorithms that behave like
that. Those two different behaviors,
if you start applying them to botnet attacks and if you start
imagining that the botnet attack, the botnet swarm or the
botnet flock, depending on what you want to call it, will get smarter about how it's failing and therefore better at
attacking you. That is horrible.
(35:35):
It's just a horrific thing to imagine.
And I know how to write code that would do that.
And who am I right? So if, if, if we can do that
sort of thing at an unimaginablescale in the very near future
and we're just getting really good at fixing yesterday's
malware, we are in for a world of hurt in that world that
you're talking about. The good news is there are some
(35:56):
fine folks thinking about that. The the even better news is, to
your point, we need to incent them to think harder and faster
about that. Anastacio made a very
interesting point. She said that investment is far
higher for technology capability, AI capability than
it is for I'll use the term compliance or cybersecurity.
(36:20):
And you know, look, human nature simply tells us that innovation
is fun and compliance is not. And innovation makes money and
compliance costs us money. And yes, there's a cost to
society, but it's not to me if I'm a software company directly,
(36:41):
unless of course there's a data breach.
Oh, part of this is definitely
human nature from that standpoint.
This is the argument for, you know, smoke detectors and fire
alarms and life preservers and all of that is, you know, I
have, I have life preservers on my boat because I'm required to
(37:02):
have them on my boat. And if I have a boat, I want the
best life preserver that's goingto preserve my life, right?
I want it, but most people will not think that way.
And if I'm going to put a ferry out there with, you know, 2000
of these things, maybe I start thinking about how much they
cost and and you know, what's the most cost effective
deployment of, you know, mandatorily require life saving
(37:23):
equipment rather than how do I save David Bray's life because
we need more people like. Him well, and so, but maybe if I
can also jump on that, that question that you asked Michael,
I think, you know, we've been talking about cybersecurity, but
you can actually create a whole lot of damage without ever
breaching a system. And what we're seeing right now
and This is why trust versus deception is so important, you
(37:45):
know, so using 10 minutes of compute time from worm GPT,
which is the dark-side cousin of ChatGPT, and some plug-ins,
using data stolen from healthcare data breaches.
And many of us may have been affected by a data breach in the
last year. That data can then train and
create 1,000,000 realistic looking medical records complete
with chest X-ray, physician's notes, doctor's notes, or a
(38:07):
claim at $250, which is below the fraud detection threshold
for several of the services here in the United States because
it's more expensive to adjudicate it than to pay that
out. And that's a million records for upper respiratory infection.
Now what this points to, and this
is what Anthony's talking about is again, the solution is we've
got to incentivize those people who are doing innovation to find
(38:29):
innovative solutions to either adjudicate faster, adjudicate
with more veracity, whatever it might be at scale.
Given that AI is basically doing the same thing. Unfortunately, you know, remember when the Xerox copier first came out and we had to upgrade our dollar bills because some people were doing color copies of dollar bills?
So we upgraded that as well. The challenge, and the unique challenge with AI, is just what's called generative adversarial
(38:51):
networks, or GANs. And so the moment you create a solution that's really good at filtering, this is valid, this is not valid, then a bad actor will then use that GAN to get good at fooling it.
And so it's going to be predator-prey relationships.
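The predator-prey dynamic described here is the core idea behind a generative adversarial network. Below is a heavily simplified one-dimensional sketch of that back-and-forth, with a "detector" threshold and a "forger" that adapts to whatever the detector learned; it is an illustration of the loop, not production fraud-detection code, and the numbers are arbitrary.

```python
# Simplified predator-prey sketch of an adversarial loop:
# a detector learns a threshold separating real from fake values,
# then the forger shifts its output toward whatever passes, and so on.
import numpy as np

rng = np.random.default_rng(1)
real = rng.normal(loc=10.0, scale=1.0, size=1000)  # "genuine" records (toy feature)
forger_mean = 2.0                                   # the forger starts far from real data

for round_ in range(1, 8):
    fakes = rng.normal(forger_mean, 1.0, size=1000)
    # Detector: pick the midpoint threshold that currently separates real from fake.
    threshold = (real.mean() + fakes.mean()) / 2.0
    caught = np.mean(fakes < threshold)             # fraction of fakes flagged
    # Forger adapts: move toward the region the detector currently accepts.
    forger_mean += 0.5 * (real.mean() - forger_mean)
    print(f"round {round_}: threshold {threshold:5.2f}, fakes caught {caught:.0%}")

# Each round the forger's output looks more like the real data, the detector's
# threshold chases it, and the share of fakes caught falls toward chance,
# the arms race a real GAN automates with two neural networks and gradient descent.
```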
But I do think this points to, at the speed at which this is happening, we have to figure out ways to incentivize, either
(39:11):
because we're shining a spotlight on it, or because companies are realizing that's a whole massive amount of money in terms of payments that shouldn't be made, for fraud purposes or whatever. But there has to be an incentive
for market-based solutions to this, because central planning
will not get us out of these solutions.
But you do have to figure out a way to shine the spotlight on it
because the speed at which this is moving, it's almost like what
we're seeing in Ukraine where every six to nine months
(39:35):
there is a generational leap in terms of how they're using drone
on drone for conflicts. The only difference is this is
going to be behind the scenes interms of AI spoofing.
How do you know if that image isreal, that video is real, or
that person you're talking to isreal?
This is a perfect time to subscribe to the CXO Talk
newsletter. Do that now so we can notify you
(39:57):
of upcoming shows. OK, so questions.
So, Hui Wei Wang on LinkedIn says that data collection right
translation nuances, but we're also seeing complex data
patterns when the technology, languages, and platforms are
contributing to changes in how the data is stored.
(40:21):
What do you recommend in navigating and preparing for
these increased complexities against the rate of how frequent
we are collecting data like this?
And before you answer, I have a request to the audience which is
you got a dumb moderator, so just keep your questions short.
I've got to jump on that one. I think Anthony paid that
(40:43):
individual to ask that question.
I did not. I did not, but I'd say it's a
brilliant question so I'm going to paraphrase the question.
There's lots of languages out there and there's lots of people
speaking in those languages and we're trying to teach AI to suck
in data spoken in those languages or written in those
languages. And a lot of times the
systems that are showing you that language, it's not the way
(41:06):
it was originally written. So I'm sure most of us have
people in our social network where they write something in
their language and then you see what they allegedly said.
And then there's a translated button on the bottom.
So it's been translated or I'll say transformed.
So there's a lot of data out there that's gone through
linguistic transformation, either intentionally or
unintentionally, either at the time of writing or subsequent to
(41:28):
the time of writing. And that introduces all kinds of
nuance into the language that we consume with our AI systems.
That's got to have an impact. And it turns out that it does.
And there are two different arguments, and there's no winning argument here yet.
So the Turing argument, and most people you will find addressing this problem will use this
(41:49):
argument, is that as long as I get enough examples
of that language, I'll be fine. Just give me millions and
billions and billions of articulations and no matter how
complicated it is, I will regress around it and I will
figure it out. The contrary argument to that is
that when we speak a language, we change it, and nuance gets introduced into that language.
Things like sarcasm and neologism and borrowing words
(42:12):
from foreign languages, all of these things confound our
ability to understand. And so right now we're speaking
in a certain way that is quasi academic, intelligent.
We're trying to speak in complete sentences.
We know that there's going to be a transcript later.
So we don't want to look like idiots, right?
So we're trying to say things in a certain way.
If we talk maybe over an adult beverage, it might come out a
(42:33):
little bit different. So when you
introduce language on top of that, how does it perturb your
ability to understand? Is that person upset?
Is that person lying? Is that person, and one of these
people is an imposter? Which one is it?
These are all open questions right now. You can have an honorary PhD in computational linguistics by tackling any one of those
questions really well because I'm sorry to tell you, but the
(42:56):
technology hasn't gotten there yet.
It's getting better. There are things, as Hui mentioned in her question, there are things that you can do to recognize commonly occurring graphemes, to recognize things like the misspellings David mentioned before.
Some misspelling can be intentional, some misspellings
can be accidental. Every time I write something to
Anastasia, my autocorrect changes the spelling of her name
(43:19):
because there's another person I know that spells the name
differently, and if I don't catch it, I send it to her
spelled wrong. She knows this now, so hopefully
she's not offended. These are things that are
happening all the time and it's going to get worse.
So what we have to get better atis not just accepting that
computational linguistics is give me the dictionary, because
there are low context languages where there aren't.
(43:40):
There isn't one really good source, or there are languages
where the only really good source is the religious text,
right? And that's not how people speak.
So now you've just consumed the Bible and you're trying to speak
English. We don't speak like that
anymore. So there's that problem, the low
context problem. There's the multilingual
disambiguation problem where people are writing in
more than one language. And then there's the language
(44:00):
transformation problem where people are changing what was
written and you're reading what has been already transformed.
All of those are Wild West right now, and you can make a lot of impact working in any one of those fields right
now. Greg Walters on LinkedIn says
we've lost the EU, I think. Quote unquote, alien is the best way
(44:21):
to view AI in that vein. Other than that, we have no way.
Shouldn't we look at AI through a brand new lens?
A lens with no previous anchors or history or KPIs or best
practices? AI does not fit only in a
vertical or horizontal. It is everything, everywhere,
all at once. There are no experts.
(44:41):
What is a new perspective? AI as mirror.
When 2001 happened, I was responding to 9/11 and the anthrax events. There were conspiracy theories that were around, but none of them were able to take root.
I mean they were saying like we've done it to ourselves,
false flag, the US has done it, but they didn't take root.
Fast forward to 2009, I'm on the ground in Afghanistan and sadly
(45:02):
there's an event in western Afghanistan where the Taliban
actually took a photo, a valid photo, of a US fighter jet flying
overhead and then at the same time briefly took a photo of a
propane tank detonating and unfortunately killing innocent
Afghans. Both were true photos, but they
were clearly out of context because then they went on social
media and claimed US air strike kills innocent Afghans.
(45:24):
The Department of Defense said we're investigating, and it took
more than 4 1/2 weeks to figure out what happened during that
time. Of course, the news media was
blaming the US and our own ambassador apologized.
That took 4 1/2 weeks to actually put that
characterization of what really happened versus the deceptive
narrative to bed. And then as you know, Michael,
back in 2017 when I was at the Federal Communications
(45:44):
Commission, there was an event in which we were getting 6,000, 7,000, 8,000 comments a minute at 4:00 AM, 5:00 AM, 6:00 AM.
And we were told by our lawyers we couldn't test for bots.
Speaking of CAPTCHA, Anthony, we couldn't do invisible means
because that would be seen as surveillance.
And we couldn't block what was perceived to be spam, 100 comments
from the same IP address per minute.
(46:04):
And it took more than 4 1/2 years at that point in time for eventually the New York Attorney General to adjudicate and say, of
the 23 million comments we got, 18,000,000 were politically
manufactured, 9 million from one side of the aisle, 9 million
from the other aisle. So over the last two decades, we
are getting longer and longer tails for the more valid, more
(46:25):
authentic, accurate characterization scenarios to
get themselves out. We went from they didn't take root in 2001, to it took 4 1/2 weeks in 2009, to in 2017 it took 4 1/2 years. We are unintentionally, through no one technology (I would submit Internet, smartphone, generative AI), laying the seeds. You know, it used to be the
(46:46):
adage was, you know, a lie can go halfway around the world
before truth gets on its sneakers.
I would say at this point in time, a lie can get to the other end of the solar system before the truth gets on its sneakers.
submit the solutions have to be technology neutral and also have
to account for humans doing things regardless of anyone
machine. I still would say at the end of
(47:07):
the day, it's going to be sector-specific, because the severity of
doing this, if you're recommending something to buy on
a website is much different than making a decision about your
health. It's much different than making
a decision on a battlefield. Anastasia, there are two
questions and I'll combine them and direct them to you.
Elizabeth Shaw says that there are strong agendas that power
(47:31):
deceptive and malicious use of AI, such as greed, power, etc.
How do you encourage the ethical use of AI as opposed to the
deceptive use? And then Arsalan Khan says that
there are many underlying issues such as security, data bias,
(47:53):
culture, jobs, and so forth that AI can help or make worse.
He's asking fundamentally the same thing.
How do we incentivize organizations to take an ethical as opposed to a deceptive approach?
(48:14):
And I'm paraphrasing these two questions, but that's fundamentally it.
We spoke about that earlier; the incentive is incentivizing. If you incentivize the prospective customer to be more aware and to be more educated on what is out
there, because sometimes the issue is not the technology, but
the business model and the why behind a certain construct in,
(48:37):
for example, how a social network is configured.
And I think that the reduction of confusion and demystifying of
AI is a very noble cause. So in my eyes, there are three
fundamental buckets of risks in AI and one bucket is everything
which has to do with design. For example, we are talking
(48:57):
LLMS, and I mean, for my taste, LLM will always hallucinate.
You might reduce certain things, but you know, it will still be there. I'm not going to dive now into the theory of computation and all of the above, but it will hallucinate, period, because of the design.
We are talking about AIs as if
(49:20):
AIs are quite new. But actually the roots of the technology are in the 40s, 50s, and 60s of the last
century. And we must go into a new wave
of architectural design to improve.
So this is either by the construct, by the system, an issue, or
(49:41):
because there is a human mistake, which obviously is always possible. Then there is malicious intent.
So we talked about cybersecurity.
Criminals will apply AI for dark purposes, and some companies
might want to seduce their customers into certain thinking.
So the more educated the customer is, the better.
(50:03):
Sometimes we are talking about scalable problems and sometimes
we are talking about basic stupidity on behalf of a
customer. And when I work with companies
and I review their AI portfolio, sometimes I'm like, why are you
spending this money on that? And the issue is that the
customer believed the sales personnel of a certain vendor,
(50:26):
and the vendor consists of 85% salespeople and only 15%
engineers. Obviously, this vendor cannot
execute. I'm not now going into ethical
and trustworthy AI, just the basic configuration of whatever
systems. So the more knowledge on behalf
of a customer, the better is the outcome.
(50:48):
And last but not least, and this will be very specific to an
industry or business function. This is so-called human in the
loop. This is the third bucket of
so-called risk. Now define how much human and in what kind of loop: when are we going to introduce the
supervision and the human feedback?
Once again, this is a balancing act, and this is a play in
(51:12):
between whoever is building and introducing the system and
whoever is actually the customer.
So it might be an answer which is not very simple.
But to my eyes, this education and asking good questions, being
capable of reviewing the vendor portfolio, and able to write
(51:32):
down what type of problems we are solving, do we need this, and
at what cost. AI might appear new, but ROI is still ROI.
If I have this kind of, you know, rule of thumb that 25% of my revenue line will be spent
on compute and 15% of my revenue line is being spent on cleansing
(51:57):
data, then fine, great.
Yeah, then you have your economics and you need to keep
it in mind before deciding whether you are going into a
certain AI implementation or not.
What will it cost you? Do you really need to automate
here to introduce an AI agent here?
Or maybe a human will do just fine.
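Anastasia's point about keeping the economics in mind before automating can be reduced to a back-of-the-envelope check like the sketch below. Every figure is an illustrative placeholder standing in for whatever rule of thumb applies in a given business, not an industry benchmark.

```python
# Back-of-the-envelope sketch: does an AI feature make sense given its cost shares?
# All figures are illustrative placeholders, not industry benchmarks.
revenue = 1_000_000.0            # annual revenue attributed to the AI-enabled service
compute_share = 0.25             # share of that revenue spent on compute
data_cleansing_share = 0.15      # share spent on cleaning and preparing data
other_costs = 300_000.0          # staff, vendors, everything else

cost = revenue * (compute_share + data_cleansing_share) + other_costs
margin = revenue - cost
print(f"Cost: ${cost:,.0f}  Margin: ${margin:,.0f}  ({margin / revenue:.0%} of revenue)")
# If the margin is thin or negative, a human process or a simpler system
# may do just fine, which is exactly the question raised above.
```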
(52:18):
Let me just ask each of you verybriefly for a final thought.
David, final thoughts on this topic.
This is not a tech specific issue.
This is a society and company level issue.
And it's going to be solutions that look not at the tech
specifically, but look broadly. And again, recognizing we're
wrapping up data, data, data. We didn't talk at all about data
(52:41):
governance. We didn't talk about data
cleaning. I know, and Anastasia just briefly
touched on it. You know, one way where you
could do most everything right in AI and completely fail is if
your data is bad. And so I would say at the end of
the day, the intent of pursuing more
trustworthy and less deceptive AI begins first and foremost
with putting yourself in the shoes of your different
(53:02):
stakeholders and making sure whatever you do, tech or not,
data or not, you are thinking about them and how you deploy
things. Anthony, final thoughts.
Couple of things. One, beware shiny objects,
right? There's going to be more and
more shiny objects, right? Whatever they're called right
now, it's LLMs. That's great.
What problem are you solving? Always step back and ask, what
(53:24):
do I have to believe? What's the problem I'm solving?
Don't get distracted by the shiny object because there's
another one coming right behind it.
The second thing is, and that's to David's point about cleaning
the data. The second point is regulation
is going to happen with you or without you.
So get involved. Make sure that the regulators
understand the unintended impactof some of the things that
(53:45):
they're considering. We just, we just wait for these
regulations to come out and thenit's kind of too late.
And then the third thing I would say is, and I often say this, be
humble. You can't solve this alone.
Get help. There's lots and lots of folks
out there that have lots of expertise.
We should make new mistakes, right?
And we do that by involving other people in our decision
(54:07):
process and being humble enough to realize that I don't
care how smart your smartest people are, you should bring in
some people that disagree with you.
And Anastasia, it looks like you're going to get the last
word here. Final thoughts.
We might be in technology and AI, but we are all in the people business ultimately, and it's really about human leadership
(54:27):
and thinking and humility. And I don't encourage people to
wait for some decision from let's say Alphabet or from some
new regulation out of Washington DC or Brussels.
I would really encourage local communities to look into the
local ecosystems and see what kind of colleges, universities
(54:49):
are out there interested in AI, offering some courses, what
schools are interested? What could you do for those
kids? What might the local startups do, let's say a startup breakfast, so they explain what they're doing and how they're
hiring into the company. Maybe some incumbent companies
which are adopting one or another type of technology.
(55:14):
So I would really encourage this bottom-up movement rather than
waiting for some big guy somewhere to decide for you
because this is how you learn and this is how you create a
dialogue and ultimately the progress will happen through the
dialogue. We are dealing, as all three
have said, with human rather than specifically technology
(55:38):
issues. And so the question then becomes
how do we harness and focus this human energy for the ethical use
of AI? And I think, as with many other things, it's going to require a carrot and a stick.
There's no simple solution here. And as complex as the technology
(56:01):
is, the human aspects are always harder.
So, carrot and stick. Michael, are you building a snowman, is that what you're talking about?
You know, I'd love to build a
snowman. It's actually snowing right now
here in Boston. Wait, are you saying do you want
to build a snowman? Sorry, I'm sorry. Sorry, Michael, back to you.
Anyway, a huge thank you to our guests, Doctor Anastasia
(56:24):
Lauterbach, Doctor David Bray, and Doctor Anthony Scriffignano.
Thank you all so much for being here.
I'm very grateful to you all, and a huge thank you to everybody who watched today, and especially you folks who asked such awesome
questions. I always say this, you guys in
the audience are amazing. Before you go, please subscribe
(56:50):
to our newsletter and subscribe to our YouTube channel.
Check out cxotalk.com. We have great shows coming up
and our next show will be back at the usual time of 1:00
Eastern. Thanks so much everybody and I
hope you have a great day.