
September 26, 2024 28 mins

Have you ever questioned whether AI is more hype than substance? In this episode of Trading Tomorrow, Don Welch, Vice President for Information Technology and Global University CIO at NYU, delivers a pragmatic take on artificial intelligence (AI). Known for his candid skepticism, Don traces AI's history from its early origins, debunking common myths about its capabilities and limitations. The conversation also dives into the growing problem of AI washing, where companies overstate the capabilities of their AI products. This is an engaging conversation you don't want to miss.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:06):
Welcome to Trading Tomorrow: Navigating Trends in Capital Markets, the podcast where we deep dive into technologies reshaping the world of capital markets.
I'm your host, Jim Jockle, a veteran of the finance industry with a passion for the complexities of financial technologies and market trends.
In each episode, we'll explore the cutting-edge trends, tools and strategies driving today's financial landscapes and paving

(00:29):
the way for the future.
With the finance industry at a pivotal point, influenced by groundbreaking innovations, it's more crucial than ever to understand how these technological advancements interact with market dynamics.

(00:49):
To most of the world, on November 29th 2022, AI was known as an aspirational technology, something more closely associated with a Will Smith movie than real life.
One day later, that changed, when ChatGPT launched.
A frenzy began and by January 2023, ChatGPT hit 100 million

(01:10):
monthly active users, one of the fastest-growing technology audiences we've ever seen.
It brought the AI conversation to the forefront as people witnessed the advances of this technology for themselves.
While there was, and still is, a lot of excitement, there is also a lot of fear and confusion.
But AI technology has been around for decades and, while it recently moved into the

(01:31):
mainstream conversation, those working in IT and technology have been looking at its potential power, challenges and impact for quite some time.
Don Welch, the Vice President for Information Technology and Global University CIO at NYU, is one of those people.
He's a self-proclaimed AI curmudgeon.
Don began his career as an Army officer, specializing in a

(01:54):
combat-oriented field.
Don pivoted towards information technology, earning a PhD in computer science.
He has since held influential roles in the IT sector, including CTO, CEO and CIO.
Don, thank you so much for being with us.

Speaker 2 (02:09):
Thanks, thanks for having me.

Speaker 1 (02:10):
So, Don, you have described yourself as an AI curmudgeon.
That's a pretty strong characterization there, so maybe you can explain what that means and why you've taken this perspective.

Speaker 2 (02:22):
So I will start off saying, you know, what is AI?
My favorite definition of AI is: an unfortunate word choice made in the 1950s.
It is, you know, an umbrella over, depending on how you want to count, over a dozen different programming techniques and methodologies that have been around.

(02:44):
You know, some of them since the 50s, and some of them, you know, really were, you know, developed in the 90s and 2000s, but they have really only come to fruition with the computing power that we've got right now and recently.
You know, my perspective is: this is still software.

(03:06):
It's software that is very good at solving certain categories of problems and really bad at solving other categories of problems.
It is not Skynet; it is not going to kill us all.
It is, you know, it is just technology, and we've been

(03:27):
dealing with technology for, you know, 50, 60 years now, and, you know, I think we should just continue to deal with it that way.
And in that way, I think I'm kind of an outlier.

Speaker 1 (03:39):
Well, you know, so, breaking news: it's not going to kill us.
I think, you know, you just shut down half the podcast universe in terms of AI.
But, you know, I mean, there's so much hype surrounding Gen AI at this point, and, you know, what would you say some of the biggest misperceptions are about it?

Speaker 2 (03:56):
Yeah, so you know Gen AI and of course, even that is
a broad category.
You've got Gen AI that willgenerate sounds or images, and
most of us think of it as thetext generators, the large
language models.
And you know, I think thebiggest misconception is, unless
there is another you knowprogram, if you will, or model

(04:21):
behind it.
In Gen AI, the way I like to think about it is: it writes.
It has a store of, you know, many, many millions, in most cases, of programs to write text in some language, you know, in our case in English, and so it is this really great interface

(04:43):
of how we can interact with a computer, but it doesn't necessarily have good models of the world, or whatever you're talking about, unless you put something behind there.
You know, so in some cases they've done that.
But if you ever try to do math in ChatGPT, you know, you were dealing with, you know, someone in elementary school.

(05:06):
If you were lucky, and depending on how you worded your questions, you couldn't even get addition to come out right.
So I think a lot of people, and I've seen this especially amongst our business leaders, they assume, because the AI is speaking very coherently and eloquently, that it actually

(05:31):
knows what it's talking about.
And I tell people to think about it as your know-it-all uncle at Thanksgiving, who can speak very eloquently on a topic and sound very persuasive, and in some cases he might even know what he's talking about.
But you really have to be skeptical and, you know, and

(05:55):
understand.
You know, is it correct?
Obviously, people have heard of hallucinations and so forth.
But even in the areas where it's not a complete hallucination, a lot of the LLMs don't have that kind of depth that people assume that it does, in terms of, you know, like human

(06:16):
intelligence, or even, you know, computer intelligence.
So, you know, it all comes down to the right tool for the right problem.

Speaker 1 (06:24):
It's not going to kill us.
All right, so that's headline one.
But, you know, what are your top three biggest concerns about AI?
You know, let's go three to one; let's start at three.

Speaker 2 (06:34):
OK, the top three biggest concerns.
So I think it is, you know, a great set of programming techniques that are great for solving certain classes of problems.
I'll say my number three concern is that we are going to over-rely on AI and, as we were talking about earlier, ask it to

(06:58):
do things that it's not very good at.
If I go to my number two concern, it's the other direction: that we are going to under-rely on it.
And I will take the example of a self-driving car, and I right now would rather face a self-driving car on the road

(07:19):
than I would a teenager with a cell phone.
I think we are being overly cautious about the progress we can make in some cases, that we're a little too worried, and perfect will be the enemy of better with AI.
And then, if I go to my number one concern, it is that we are

(07:40):
going to regulate it in a not-smart way.
I think it is very difficult to regulate new technologies.
You know, that's difficult regardless of what we do, and, you know, you hear all the discussions about social media and so forth.
But it's hard, and any attempt at regulation, I

(08:00):
think, is going to be highly prone to not having the impact that's intended.
Of course, we see it a lot of times.
But in deference to the people who are trying to write those regulations, certainly our experts, it will not be easy, and I think we

(08:28):
could really end up with a lot of unintended consequences.
Forget about the fact that other countries, especially some of the countries that we may be competing with in many different fields, are not going to put restrictions on their development and are going to rush right ahead, and that could have a lot of unintended consequences.

Speaker 1 (08:49):
I was going to ask you to follow up on country-specific regulation, because even now it seems even the way the models are being trained is potentially dangerous within certain regimes that are looking to exert influence over populations and things of that nature.
So, is there any way to combat that?

Speaker 2 (09:14):
It's a really hard problem.
One of the more interesting solutions that I've heard is to develop AI systems that will monitor AI systems.
That sounds good, but once again, you know, that's going to be hard.
But I think it does point in the right direction.
So, you know, if we think about, you know, AI in a really
(09:36):
general sense, AI is good at dealing with messy input and providing imprecise answers.
So what do I mean by that?
If you've got a computer that is calculating your bank account, you want precision.
If you are asking a computer to write a paragraph for you,

(10:00):
happy to glad doesn't make any difference.
You want it good enough, and AI is really good at good-enough solutions, especially when dealing with messy data.
Being able to do something that a human would do, which is look at an AI system and see if the inputs and outputs and results

(10:26):
are things that you want.
That's going to be a lot of messy data, so the future may lie in that area.
But I think right now it's just, you know, a dream, and it's probably in a lot of researchers' labs or their high-performance computing environments.
But I think, short of that, it will be really hard

(10:53):
for us as humans, and especially the messy way we do our government and so forth, to come up with the right solutions and balance all those competing perspectives and requirements.

Speaker 1 (11:02):
I'll say this: messy government seems to be a popular theme on this podcast as of late.
So, you know, one thing that has been a popular concern recently is the concept of AI washing, you know, the practice of companies or people exaggerating or falsely claiming what their products or services or

(11:23):
technologies can do with AI.
Obviously, it's done to enhance their appeal or marketability.
I mean, have you seen any examples of this?

Speaker 2 (11:35):
I think the better question is: have I seen any marketing or sales pitches for software that have not included AI washing?
And I don't remember any that have not told me about their cool new AI feature, and so forth.
And once again, I think if we go back to, you know, why I consider myself a curmudgeon, even though people are doing the
(11:58):
you know, AI washing, we go back to first principles and due diligence.
If you're going to buy a piece of software, before you do it, understand what your requirements are.
What problem are you trying to solve?
What does it need to do for you?
And then evaluate: does this actually solve my problem?

(12:19):
And if you do that, then whether it's using AI techniques or it's not using AI techniques doesn't really matter, as long as it's, you know, doing a good job of solving your problem.
Of course, you know, the bane of every IT person is the business leader that comes and says: I want this solution.
Well, what problem are you trying to solve?

(12:41):
This will solve my problem, you know, just let me buy X software.
And, you know, trying to get them to go back: okay, let's identify it.
And of course, you know, business leaders look at us and, you know, we're slow, we're bureaucratic, you know, we slow things down.
But, you know, obviously we have our reasons: to make sure we buy the right software, that we integrate it appropriately, that

(13:04):
we train, you know, all the things that make software implementations successful, we really have to do.
And it's not something where, you know, we just pick a solution and turn it on and life is better.
And, you know, unfortunately most IT people know that lesson quite well.

Speaker 1 (13:23):
You know, one of the things I'm curious about is, you know, how can enterprises, you know, build and maintain trust, if you will, with AI, whether it's internally among employees or more externally with their customers?
You know, how do you maintain trust?

Speaker 2 (13:41):
I think that's kind of at the bedrock of every IT organization and software service that we deliver.
If you've been involved with any software deployment, there's a high level of mistrust with anything that's new, even forgetting AI.
You're asking somebody to change their business processes.

(14:01):
It's not doing it the old way that they were doing it; the button's on the right instead of the left, and this is just too confusing, and so forth.
So for that trust, I think, number one, coming up with a good-quality solution is kind of foundational.
Does it really solve the problem that you are trying to
(14:22):
solve?
And then having a good change management program: why are we introducing this?
Making sure people understand the why, and what the goals of it are.
Giving them the right training.
You know, providing hyper care when you deploy it, so that you can help them leverage it and get that success.

(14:44):
You know, delivering all the capability that, you know, was asked for originally.
You know, all of those kinds of things, I think, contribute to that trust overall.
So whether, you know, you're bringing in a new AI system or you're bringing in a new ERP, which may have AI or not, you've

(15:07):
really got to go through all those steps and pay attention to them, because, you know, I think one of the things that we in IT sometimes do is we look too much at the software and the system itself and say this works as designed, it's working well, and so forth, ignoring the aspect that this is a tool

(15:32):
that has to be used by people.
And so you've got to pay attention to the people aspect and, as you point out, build and maintain that trust.
And that comes from, you know, the support, understanding what their requirements are, understanding what their fears are and their hopes for the new software.
All of those aspects of relationships, I think, are, you

(15:56):
know, the way you build and maintain that trust.

Speaker 1 (15:59):
You know, one of the things we've observed over the past X amount of years is really IT coming a lot closer with, you know, the end users.
You know, in terms of not just software decisioning but, you know, as part of the procurement process, having, you know, to

(16:20):
upskill in terms of domain expertise as it relates to things that are being deployed.
You know, and I think in some ways, you know, the cloud became a facilitator of that, you know, very close collaboration.
I mean, how are you seeing IT evolve at this

(16:40):
point?
You know, within that decisioning process?

Speaker 2 (16:44):
A good question for an old guy that's been around and seen things move.
You know, if you think of technology as a stack, which most IT people do, at the very bottom you've got, you know, your hardware, and then you've got system software, and all the way on up to what we call the

(17:05):
wetware, the user.
Most of IT's work has been moving up the stack over the years.
When I first started, for every one computer you had a fairly large team of operators to do the care and feeding and keep that machine going, and they were all focused on the technical aspects of it, not about solving a business problem.

(17:31):
And then over the years we've moved up.
If I think back 30 years ago, you might be able to have a system administrator, you know, administering, you know, a dozen computers, but later, you know, 50 to one was a good ratio, and, you know, now, as you mentioned,

(17:51):
like with cloud systems, cloud platforms, whatever, your administrators are supporting 500 machines or even more.
So you've got fewer people that are focused on those purely technical tasks and the difficulties of keeping the technology running, and that has freed IT.

(18:11):
Teams have changed, and you have a lot more people in the team who are focused on managing the projects, doing the change management, the training, the business relationship managers who are working with the business leaders.
I have a friend of mine who I was socializing with a little back a

(18:32):
bit.
I have a PhD in computer science, and we were talking about my job and what I do.
And he's like, how much time do you spend on, you know, on actual technical things and technology?
And I said, well, if you really want to stretch it and count

(18:53):
the times that I'm in rooms with people who, you know, deal with the technology, or when other people are talking about technology, you know, maybe 15, 20%.
You know, all the rest of the time it is, you know, dealing with people, dealing with business problems, you know, understanding those problems.
And I think that's very

(19:14):
indicative of the role of IT overall: it's not about the technology, it's about providing solutions, and we have said those words for years.
But I think there really is a transformation there, that the future, or at least the short-term future, is people.

(19:37):
You have to understand the business domain, you have to understand the users, have to be able to relate to them, and understand the technology to deliver those solutions.
So, you know, this shouldn't be a surprise to anybody, but certainly I think that trend will continue.

Speaker 1 (19:56):
What's your advice to IT leaders, or what do you find is important, in terms of collaboration with non-technical teams when integrating AI solutions?
I mean, clearly, every vendor in the world is in an AI arms race.
You're not moving forward unless you have AI.
And then I've seen instances where IT all of a sudden steps

(20:18):
in and goes: whoa, whoa, whoa, whoa, I'm worried about security, I'm worried about this.
You know, let's deploy things in small ways and test it, and, you know.
So what's your advice in introducing, you know, arguably, you know, solid productivity gains if used right?
You know, how should people be thinking about it, from the IT

(20:42):
and the non-technical IT person?

Speaker 2 (20:44):
Yeah.
So I'll use a possibly completely unrelated analogy, but I was an offensive tackle in high school and college.
During my career, not that I was that great, but I never actually touched a football in a game.
Never, you know, picked one up or was given one or whatever.

(21:05):
But, you know, I played this instrumental part, you know, in high school, in our fullback setting the state rushing record one day.
And, you know, I think IT, you know, we are very much offensive linemen in an organization that is not a technical organization.

(21:27):
If you're a finance organization or retail, in my case, education, we're not about technology, but our organizations can't succeed, can't even exist, without technology.
It's our lifeblood now.
So if you think of yourself in that role of facilitating the
(21:48):
success of your business, you know, you're the offensive lineman for the offense.
You make that quarterback or that halfback look good.
That's your role.
And, as I've said to many, you know, business leaders that I deal with: my job is to make you look good.
You know, so, you know, help me make you look good,
(22:09):
help me make you succeed.
And, you know, so some of the things that I do can be annoying.
You know, we'll see in a football game, when a running back is behind a couple of the linemen waiting, they could run much faster, but that lineman is doing their job, trying to move the linebacker out of the way or whatever.

(22:29):
But it takes time, and so I'm asking people to fill that role of: let me do my job and I'll make your job easier.
You know, if we address the privacy and security issues now, you know, we can avoid, you know, adverse impacts on our

(22:51):
business.
If we, you know, if you let me do my job now, this tool will work better, longer.
You know, we make sure that we have the data integrity and, you know, and all those kinds of things that business leaders don't necessarily want to hear about, but they, you know, they depend on it to work.
So that's kind of my approach, and, you know,

(23:16):
an analogy that, you know, many people understand and many people don't.
But what the heck, you know.

Speaker 1 (23:23):
So, sadly, Don, we've gotten to the final question of the podcast.
We call it the trend drop.
It's like a desert island question.
So, you know, if you could only watch or track one trend in AI, what would it be?

Speaker 2 (23:35):
Self-driving cars, autonomous vehicles.
So to me it is really a good indicator for AI and software systems, intelligent systems, going forward.
It has a very real, physical presence.
I mean, we all know what cars are and how they drive and so

(23:56):
forth.
I think the potential is great.
Basically, once we're all in self-driving cars and they're all talking to each other, traffic jams disappear, auto accidents disappear, et cetera, et cetera.
A huge upside.
But there is a large downside, especially in the transition.

(24:19):
So how do we as a society address self-driving cars, and do it in a way in which we balance all the competing priorities, you know, the security, the privacy, the risk and so forth, and, you know, hopefully get as close to the maximum amount of benefits that we can get?
And so I think that's one thing that people all understand and

(24:43):
will watch, but I think it's also indicative of the way we will use technology for all other kinds of things going forward.

Speaker 1 (24:52):
You know, it's funny.
I know it's the final question, but I want to follow up on that, because I think the whole self-driving car issue is fascinating, because it's also going to bring in that whole messy political situation as well.
Right, and especially around societal issues: equity, affordability, all of that.
And I think it's not just the

(25:18):
technology, which is amazing in the sense of AI, cloud, edge computing, et cetera, but you're going to have this whole social component as well.
And I think it's one thing to put in automated toll systems and eliminate 60 jobs or 150 jobs in

(25:39):
New York State, but it's another thing when you create a society of haves and have-nots, and, you know, so I think there's going to be a lot of mess with this as well.
You know, how do you react to that?

Speaker 2 (25:51):
Yeah.
So I think there's always been a lot of mess with any technology change.
Certainly as we moved from an agricultural society to an industrial society, there were a lot of people who suffered in that transition, but generally we ended up better off.
I think as we've moved from an industrial to more of a knowledge-based society, it's the same kind of thing: there's a lot of pain.

(26:13):
We've got Rust Belt cities and so forth that are there.
I don't think that this will be any different.
As we incorporate more and more technology, people who are doing jobs that are routine are going to find their competition is

(26:36):
from technology and, in a good sense, we are going to have very capable people who can do more, because AI is doing more of the simple and the rote things.
But that's going to be a challenge for a lot of people, to

(26:58):
be able to step up and provide value to their organization, so that it makes sense for organizations to compensate them appropriately, so that we don't have that much of a divide.
I think it's going to be a very large challenge.
It's going to have to start with our, you know, K-12

(27:19):
education and go all the way through lifelong learning, but we're certainly not prepared to transition, you know, an entire society into the kinds of things that we could.
And, as you say, you know, there are politics involved.
There's, you know, there's technology, there's education,

(27:41):
there are all these social issues.
And I think we're going to have to navigate it very, very carefully, as we do any transition.
And, you know, who knows what the next one will be; I probably won't be around for that one.
This one will be challenging enough for me, but I think

(28:02):
that's, you know, a very valid concern.

Speaker 1 (28:05):
Well, if the New York Taxi and Limousine Commission thought they had a handful with Uber, wait till they take on Elon Musk.
Don, I want to thank you so much for your time today and your insights, and I really appreciate having you on the pod.

Speaker 2 (28:18):
Yeah, thanks so much, James.
I really enjoyed it.
This was a lot of fun.

Speaker 1 (28:29):
Thanks so much for listening to today's episode, and if you're enjoying Trading Tomorrow: Navigating Trends in Capital Markets, be sure to like, subscribe and share, and we'll

(28:49):
see you next time.