Sol Rashidi has built AI, data, and digital strategies inside some of the world’s biggest companies—and she’s seen the same mistakes play out again and again. In this episode, she unpacks why AI initiatives often stall, how executives misread what “transformation” really requires, and why the future of AI success isn’t technical—it’s cultural. If you think AI is just a tech problem, Sol is here to change your mind.

Follow Sol's work:

What's New In Data is a data thought leadership series hosted by John Kutay, who leads data and products at Striim. What's New In Data hosts industry practitioners to discuss the latest trends, common patterns for real-world data architectures, and analytics success stories.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Welcome to What's New in Data.
I'm your host, John Kutay.
Today, I'm joined by Sol Rashidi, former Amazon and Fortune 100 C-suite exec, holder of 10 patents, and best-selling author of Your AI Survival Guide.
In this episode, we get real about what actually makes AI projects succeed.
Sol shares her lessons from hundreds of enterprise

(00:26):
deployments, such as why most companies pick the wrong use cases and why launching isn't the same as scaling.
If you're leading AI or data initiatives, this is a conversation you need to hear.
Let's dive right in.

Speaker 2 (00:51):
Hello everybody, thank you for tuning in to this episode of What's New in Data. I'm really excited about our guest.
We have Sol Rashidi.
Sol, how are you doing today?

Speaker 3 (00:55):
I am good, John.
Thank you for having me on board.

Speaker 2 (00:59):
Yeah, absolutely.
You know, we've been talking about doing this episode for a while now, and I've been following all your writing.
I'm a big fan of your book.
I have both the written and the audio versions of it, and I love referencing it and going back, because there's just so much useful advice there.
Thank you.

(01:22):
We've been talking about this for, I think, almost a year now.
We were together with Chris Tabb and Joe Reis and the whole crew there, and that was a fun time.
I think that was the first time I was like, oh, you

(01:44):
know, you should come on the podcast, Sol, and you were like, yeah, let's do it.
And a year flew by, which is very common these days.
It's like a year goes by like a week now, unfortunately.

Speaker 3 (01:57):
But where there's a will, there's a way, and we made it happen.
So even if it took this long, it doesn't matter.
We're here together now.

Speaker 2 (02:05):
Sol, tell the listeners about yourself.

Speaker 3 (02:12):
Sure. Oh boy, my data journey and AI journey were completely haphazard.
I always say I was at the wrong place at the wrong time, and in retrospect I said yes to the right thing, but at the time I felt like it was a big, big mistake.
I had no direction.
I didn't know what I wanted to do, and so down this path we went.
And I think everyone knows by now

(02:35):
that I was a rugby player on the women's national team.
I thought I was going to play professional sports my entire life.
It turns out, as you age, you get hurt, and when you get hurt, recovery just takes a lot longer.
So I was like, okay, I'm done sleeping on futons and eating ramen noodles.
I've got to be a professional and grow up.
I applied for a series of jobs, and the first job that accepted my application

(02:57):
was to be a data engineer.
And from there, I was a horrible data engineer because, while I could write code, I could not write production-ready code.
My team came back to me in a very short time, and they were like, listen, we love you, but you're creating more work for us rather than alleviating it.
You know our world, but you can't write production-ready

(03:19):
code.
You seem to do really well with the business, so why don't you go gather requirements, bring them back to us, and we'll develop what's needed.
Then go back and tell the business what we've developed and why.
Just kind of be our translator.
And that's just how it started.
And, by the way, this was in the late 90s, early 2000s.
At first I thought it was a rejection, but that notion of

(03:40):
being a translator, of connecting the two worlds between functional and technical, bridging the gap, turned out to be a massive asset, because there were very few of us who could actually do it, and my career has just grown from there.
And so the odd thing was, even after data engineering, I was so good at sales that I became the VP of business

(04:02):
development and sales for a boutique agency selling staff augmentation and resources for technical projects.
So it was weird.
I went from being a data engineer to actually being in business development, and it was then that we were trying to sell our big deal to Ingram Micro.
I'll never forget this.

(04:22):
In the early 2000s, we were up against Accenture, before they rebranded themselves, because this was pre-Enron, or around the Enron days, and MDM was starting to become all the rage.
But we were losing the deal to Accenture, because the CIO, who my CEO had a relationship with, said, listen, we've got to give it to the big guys.
You're a small, no-name company, and unless you can come back to

(04:45):
us with something that's completely differentiating, we're going with the big dogs.
My CEO sent me to Texas for three months to get SAP MDM certified, and I was like, this makes no sense.
Long story short, lots of competition, a very technical test, and I came out one of 11 people who were MDM certified

(05:06):
globally.
We won the deal.
I started leading MDM teams, and IBM recruited me, and, like everything else, the rest is history.
I went from being a rugby player, to a data engineer, to getting into sales and project management, then to becoming this really tactical MDM lead on these massive SAP ERP

(05:27):
deployments.
And then I went on to grow and manage a team, grow a practice, manage a P&L.
And then, in 2011, Watson went to market and I got to be a part of that story.
So I have launched Watson.
And then from there, you know, I went to Ernst & Young as a partner, then to [inaudible] as CDO, Merck Pharmaceuticals as CDAO,

(06:15):
Estée Lauder as CAO, and went independent.
Now I'm with Amazon, and, yeah, I love the startup space.
Fast forward, here I am, but I still ask myself every day, what do I want to be when I grow up?
It's still a big question in my mind.

Speaker 2 (06:32):
Yeah, and I love the whole career journey, because being a data engineer or software engineer in the 1990s was so much different than how it works now.
Because everything back then was tied up in all these proprietary tools, and everything was

(06:53):
like a secret.
So how would you even become a software engineer?
You had to go look in textbooks, go read proprietary industry reports, and you basically had to get locked into this knowledge somehow.
So going fresh from rugby player to engineer almost seems like an impossible feat back then.
Now, anytime you run

(07:13):
into a blocker or a question, you can just ask ChatGPT or something like that.
But it's so cool that that opened a door for you to get into the requirements side: not how should we build this, but what should we build and why?

Speaker 3 (07:31):
And that's huge.
And you brought up a really good point, John, and I think it's something everyone's got a question about today: that leap from rugby to data engineer.
That was a massive leap, but differentiating yourself was actually, frankly, a bit easier back then, because the world of data engineering back then was one big tabular format.
Everything was structured, data warehouses, and there were classical ways of

(07:52):
doing things: metadata, tagging, cataloging, aggregating the data, putting it into a cube.
Yes, I'm that old.
I remember the cube constructs.
You followed a protocol, and that's how every corporation and enterprise was doing it.
But right now, I mean, it's amazing, it's intense, and

(08:13):
it's astronomical, the different options that you have.
And I always say, you're only as faithful as your options; it's a New York City saying.
You're not stuck doing one thing.
There are multiple ways to approach a problem, and so the optionality in and of itself is a struggle.
I would say that's the first thing.
The second is: how do you differentiate yourself in your career, knowing that everyone has access and is one

(08:34):
prompt away from understanding an industry, a sector, an approach.
Now, whether they have the level of depth to follow through on what they've researched is a different story, but at a superficial level, knowledge has been democratized, so everyone has access.
And so, when I'm working with individuals, mentoring them, advising them or whatnot, it's really around:

(08:56):
how do you differentiate yourself when everyone has access to the same things?

Speaker 2 (09:01):
Yeah, I love the way you describe that, where knowledge is really democratized now.
You know, this is one of the things people try to differentiate when they just read something on LinkedIn or X or Bluesky: making sure it's coming from an expert with

(09:24):
experience, someone who's been there and done it before, or whether it's just someone who kind of ChatGPT'd it and said, hey, write something credible for me.
And at a superficial level, like you said, there's just so much of that coming out now, because at a surface level people can kind of say the right things.
Which is why having the

(09:48):
learned experience, the real-world experience, where your knowledge is actually empirical and acquired over the years, matters.
And I only say this in the context of speaking to others and teaching others, because I think it's great that anyone can learn something and do it, and I think that's one of the amazing things we have out

(10:09):
of AI.
And it's one of the topics I want to get into with you.
You wrote this incredible book, Your AI Survival Guide, and the way you put your thoughts together is really incredible to me.
But I want to ask you: what gap in the market did you want to address

(10:30):
with that book, and what is it about?

Speaker 3 (10:45):
It's interesting.
But, you know, I was like, why me?
Why would anyone want to listen to what I have to say?
And if you look at my career, I got into the social media thing late.
I started writing on LinkedIn late.
I would even say I wrote the book a bit late, but I got a lot of encouragement from folks around me, and the encouragement was: the reason I got into social media very, very late is

(11:07):
because I was really busy doing the work, not talking about the work.
And I have never struggled with imposter syndrome, because I'm not one to read and regurgitate and take shortcuts.
I go dig into the details, and most of my past employees could attest that it probably fatigued them or bothered them that I would ask so many

(11:29):
questions.
But I was like, I can't understand what you're trying to tell me or what you're asking of me, and I can't understand a day in the life in your shoes.
Unless I can get to that level, I won't ever ask again.
Meaning, once I understand what you're going through, and I have that understanding, that compassion, that empathy for why you're asking me what you're asking me, then I trust you and I'm able to move on.
But unless I know what you're going through

(11:51):
day to day, I won't be able to help you as well.
And so I will ask questions, and I will ask questions, and I will ask questions.
But again, I started this LinkedIn stuff, the book stuff, all in 2024.
I am really, really late to the game, but it has served me well.
Because I was so busy doing

(12:13):
the work and not talking about the work, and because I have over 200 deployments under my belt and all the post-mortem notes of every deployment I've ever done, writing the book was actually the easiest exercise for me.
And the support and encouragement I got in writing it was: listen, Sol, most people are going through this for the first time.
You've done this countless times.

(12:33):
You should share what you know to help others avoid the same mistakes you've made.
And I was like, you're right.
It's really a book of assumptions I made that didn't hold true, mistakes I made that went awry, areas where we needed to course correct, and where, along the line, folks should really consider how to

(12:53):
de-risk their AI deployments.
And so it's kind of funny.
It's not a philosophical book by any means.
It's a complete how-to, a practical book of all the things to avoid that I ran into in those 13 years and those 200-plus deployments.
So at first I thought it was meant for non-technical, mid- to

(13:15):
executive-level individuals, but I've gotten a lot of pings on LinkedIn from MLOps folks, data engineers, data scientists, machine learning experts saying, you know, I'm the founder and I need to scale this.
Or, I'm responsible and on the committee and I need to deploy this, and I don't know how to push back, even though

(13:36):
instinctually I know what's going to go wrong.
But you've given me the business terms to use.
So it's been a blessing, but it's really just to help others avoid making the mistakes I did over the past 13 years.

Speaker 2 (13:49):
The book is really uniquely valuable, because one of the things I've noticed, when you give talks and from all your experience and all the things you've accomplished, is this instinct for cutting through the fluff and getting to the core of what actually makes these ambitious data and AI initiatives work in the business.

(14:11):
That's why I really love reading your book and enjoy your talks.
There's this sort of essence to it.
You know, some people might be a PhD talking about a thing they've researched for 20 years, and they're an expert at it.
And then, for example, we mentioned Joe,

(14:34):
Joe Reis, and Matthew Housley.
They talk about it at a fundamental level of data engineering and pipelines and data infrastructure.
But your view on this includes all of that, and it also gets to this practical element of: we have to make it work.
Companies can build anything, right, if they

(14:57):
throw time and money at it.
But how do they build stuff that's actually successful?
There are what I felt were unspoken truths about it, things some people are just skillful enough to do, and I think that's communicated in your book really, really well.
It's sort of a roadmap for people who might have either the technical skills or some other hard

(15:19):
skills, and want to develop the soft skills: okay, I can build all this great stuff, but how do I get the buy-in, and how do I make this something where an executive will give me budget to do it and prove that it's successful?
So one of the things I want to ask you is: what's the biggest unspoken truth that AI leaders aren't acknowledging when building these ambitious projects?


Speaker 3 (15:43):
That is such a good question, and I'm very much known for my transparency, sometimes to the chagrin of the leaders I've reported to.
I would say the first one is: not everything needs to be solved by AI.
You know, why use a chainsaw if scissors do the trick?
We've got a lot of amazing technological advancements,

(16:06):
quite frankly, that have been developed over the past 20, 30 years, and they work.
Orchestration models, just as an example, or robotic process automation.
Right, RPA isn't new.
Now we're like, well, no, it's intelligence-based now, and that's what the new wave of automation is about.
I'm like, okay, but quite frankly, most of the use cases,

(16:28):
most of the business processes that need to be automated don't actually embed intelligence in them.
A massive SQL job with a major decision tree actually does the job just as well, and it's coded, static.
Unless your business processes change quite frequently, this works, and I hate to say it, but most enterprises don't change their

(16:49):
business processes frequently.
I only share this because I think the biggest unspoken truth is that not everything needs AI.
I think that's the first one.
The second one is that, while everyone's chasing foundational models and LLMs and fine-tuning and RAG, one thing I always ask people to reconsider is: when you're past

(17:12):
figuring out your prototype and your proof of concept, and you're ready to push it into production, pushing it into production does not mean you're ready to scale.
It just means you've just started the real work.
Because in the grand scheme of things, in a total deployment schedule, 30% of it is, quite frankly, figuring through the

(17:34):
technical components: the data components, the architecture components, the accuracy-threshold components, user flow, workflow, etc.
The remaining 70 percent comes down to: how am I going to get users to use it, adopt it, and weave it into their day-to-day without crippling the intellectual capacity of my workforce?

(17:55):
No one's talking about that.
That is 70 percent of the work, which is why we have these amazing capabilities thrown into production, and people think that's scaling.
No, production does not equal scaling.
Adoption equals scaling, and folks aren't there yet, so they're not talking about it.
But that's going to be a major issue.
The big chunk is after the fact.

(18:15):
So I always say, when deployments go wrong, it's not because of bad models, it's because of bad integration.
And I, unfortunately, am seeing a lot of that right now.

Yeah, and that's such a great comment, because everything sounds amazing in design phases or architecture review boards and approvals.

Speaker 2 (18:35):
You know, because you have to defend what you're going to build, right, and that's when people sort of get into the practice of overpromising.
And then you go to production, and either it's too slow, or it crashes, or there are other issues.
People say, yeah, it works, but it's not that useful.
And, like

(18:56):
you said, that's like step one.
That's when the journey starts, right?
And I think that's where leaders have to understand: like you said, just because you launched it doesn't mean that's scale yet.
That's step one.

Speaker 3 (19:12):
Now you're kind of crawling, right?
Yes, like you've opened the door and you're peeking around the corner.
The real stepping into the opportunity is wide-scale adoption, not a deployment within a specific function amongst a specific set of individuals.
But you know, that's partly what the book covers and partly

(19:35):
what I'm evangelizing, because not only is the scale going to come from the adoption and the integration, but how do you do it?
And, this is kind of my next business purpose, if you will: how do we leverage artificial intelligence for good, at the individual level, at the company level, and then at the

(19:55):
community level?
And what I mean by that is, as we lean heavily into automation, augmentation, and autonomous agents, how do we still empower our workforce and our individuals: their cognitive abilities, their intellectual strengths, their ability to solve problems?
Because the original intent was always: let's build automation

(20:20):
and let's leverage autonomous agents to free up capacity and bandwidth.
Some are translating this as displacing the workforce.
Well, no wonder there's mistrust, and no wonder people aren't going to adopt it.
But what I want everyone to understand is that it's not a game of displacing the workforce.
It's a game of: how do we create the most amount of value with

(20:45):
minimal dependency on manual labor, and then how do we reallocate the workforce to work on problems that continue to plague the company?
So let me give you just a basic example.
About a year and a half ago, I helped a company deploy some autonomous agents and automation across a variety of

(21:06):
functions, and my ask to them was, I said, here's how much I'm going to charge you for your strategy and to help you deploy.
I will work with any vendor you choose.
That's not an issue.
But if I build 17.6 to 18.5% additional capacity with this team of 15 individuals, your promise to me, your payment to

(21:29):
me, because you paying me for the strategy work is optional, is this: do not let go of any of those 15 people.
Instead, get those 15 people into a room to discuss the next business problem that you want solved, and why you haven't been able to solve it the past year, or two, or three, or four.
Because we were meant to redirect our abilities to problem solving,

(21:52):
not necessarily to reading, regurgitating, copying and pasting, and doing mundane, repetitive tasks.
So my ask to them, legitimately, was that with this additional capacity and bandwidth, the intent isn't to shrink this team from 15 to 12 or 11.
It's to reallocate these 15 individuals to solve a business problem that's been plaguing you for a really long time, and get

(22:14):
them in a room to do it, and that's your payback to me.
And we're just not thinking of it that way with AI deployments.
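Sol's 17.6 to 18.5% figure is easy to sanity-check: on a team of 15, it works out to roughly 2.6 to 2.8 full-time equivalents of freed bandwidth. A minimal sketch of that arithmetic; only the 15-person team and the percentage range come from the episode, and the function name is our own:

```python
def freed_fte(team_size: int, pct_gain: float) -> float:
    """Convert a percentage capacity gain into full-time equivalents (FTEs)."""
    return team_size * pct_gain / 100

# The 15-person team and the 17.6-18.5% range are the figures quoted above.
low, high = freed_fte(15, 17.6), freed_fte(15, 18.5)
print(f"{low:.2f} to {high:.2f} FTEs of reclaimed capacity")  # roughly 2.6 to 2.8 FTEs
```

That reclaimed capacity is exactly what her "get those 15 people in a room" ask redirects toward the next unsolved business problem.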

Speaker 2 (22:22):
Yeah, and I shouldn't overgeneralize, but sometimes you see this obsession with the hard costs, and executives really need to think about those.
Like you said, take that room of 15 people: rather than cutting headcount there, how do you get more output?

(22:44):
And if we were having this conversation, how would you measure that, and how would you advise the company on how to best measure the increased output they'd get from deploying AI?

Speaker 3 (23:00):
This is where you've got to go through some of the nitty-gritty details that most people don't like, or short-circuit.
In the past, we used to do what we call day-in-the-life measurements.
I would interview a team: what are the top five questions you get in an email or in conversation?
And then we would go through a day in the life and ask: how do

(23:23):
you answer this question?
Emails, systems, processes, Slack or Teams messages?
How many people would you interact with?
Who are you waiting for?
How long does it take to answer that question?
And then we would do this across a series of questions, three to five, depending on how much time we had, to understand how many people are involved, how

(23:47):
long it takes, and whether it's really integrated systems and workflows, or sneakernet: Slacks, emails, phone calls.
And how long does it take to answer a simple, I shouldn't say simple, a business question that, quite frankly, this team should know?
And we would go through and measure productivity, efficacy,

(24:10):
efficiency, duration.
We just go through a series of measurements, and then we would say, okay, this is our before benchmark.
Now the question is: how much of this can we improve after we introduce all these amazing promises?
And then we would measure it again.
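The day-in-the-life exercise Sol describes boils down to a before-and-after benchmark. Here's a minimal sketch of that bookkeeping; the field names and the sample numbers are hypothetical illustrations, not figures from any real engagement:

```python
from dataclasses import dataclass

@dataclass
class QuestionBenchmark:
    """One 'day in the life' question: who touches it and how long it takes."""
    question: str
    people_involved: int
    hours_to_answer: float

def improvement(before: list, after: list) -> float:
    """Percent reduction in total hours to answer the same set of questions."""
    hrs_before = sum(q.hours_to_answer for q in before)
    hrs_after = sum(q.hours_to_answer for q in after)
    return 100 * (hrs_before - hrs_after) / hrs_before

# Invented sample questions and timings for the before/after snapshots.
before = [QuestionBenchmark("Where is order X?", 4, 6.0),
          QuestionBenchmark("What did we ship last week?", 3, 4.0)]
after  = [QuestionBenchmark("Where is order X?", 2, 3.0),
          QuestionBenchmark("What did we ship last week?", 1, 1.0)]
print(f"{improvement(before, after):.0f}% faster")  # 60% faster
```

The same structure works for any of the dimensions she lists (people involved, handoffs, duration); the point is that the before snapshot must exist before anything is deployed.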
But here's where the trick lies.

(24:30):
Most people are very much pro-AI: let's go do this, it's going to solve world hunger, right?
There's this big, big, massive push to doing it.
I've actually had, and I don't know if this is stupidity or guts, to actually tell a few different businesses: actually, the way you're doing it now, although it's not efficient, is

(24:56):
a hell of a lot cheaper.
So your return on investment is better through the manual labor than deploying AI.
And they're like, wait, what do you mean?
I'm like, it's not just about cost, but at this point in time, if I'm looking at a six-year evaluation of your return, what it's going to cost across models, workloads, GPU, CPU, orchestration, data

(25:18):
infrastructure, DevOps, your CI/CD pipelines, and everyone that's going to need to be involved in actually automating this process, you're not going to make back your money for another six to seven years.
Whereas if you do it the same way right now, it's a small team, quite frankly.
It's lean.
It's not only more efficient at this point in time, because

(25:40):
it's going to take us about 19 months to be able to deploy this, considering the maturity of your existing environment, but it's also more cost-effective.
So, if you're okay waiting six to seven years to actually make back your money, let's go ahead and do it.
But if you're not, keep doing things the way you are right now.
And that's the conversation no one's having.
It's actually just

(26:01):
suggesting: if you expect results next year, I wouldn't expect them.
So I've always outlined: you can expect results in three years, and here's what they're going to be.
Or, actually, this one's going to take six to seven years, because you don't actually have a process.
You have something that's been stitched together by a group of individuals who have just been doing this for a long time.
We actually have to create the process before we can automate

(26:25):
the process, and then we've got to teach an entire workforce about this new process we've created.
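The six-to-seven-year payback Sol describes is, at bottom, a cumulative-cost comparison: yearly manual cost versus build cost plus yearly run cost. A toy break-even sketch; all of the dollar figures below are invented for illustration, none come from the episode:

```python
def breakeven_year(manual_cost, build_cost, run_cost, horizon=10):
    """First year the cumulative cost of automating drops below staying manual.

    manual_cost: yearly cost of the current manual process
    build_cost:  one-time cost to build and deploy the AI system
    run_cost:    yearly cost to run it (models, GPU/CPU, DevOps, pipelines)
    """
    for year in range(1, horizon + 1):
        if build_cost + run_cost * year < manual_cost * year:
            return year
    return None  # never pays back within the horizon

# Invented figures: a $400k/yr manual team vs. $2M to build plus $100k/yr to run.
print(breakeven_year(400_000, 2_000_000, 100_000))  # 7
```

If the run cost alone exceeds the manual cost, the function returns `None`, which is the "keep doing it manually" answer she sometimes has to give.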

Speaker 2 (26:32):
Yeah, and this is one of those common examples of moving slow to move fast, right?
And I think good executives understand that getting real efficiencies within large businesses does require this sort of long-term thinking, and

(26:55):
part of that, and you've outlined so much of this in your book as well, is finding these opportunities within the company: what's a real problem that we can solve, and are we ready to solve it?
We don't have to go through the whole thing, because it's in your book, but I wanted to at least ask you, at a high level, about the readiness assessment that you have for executives.

Speaker 3 (27:18):
Yeah, that was something I created right after IBM.
I had done enough proofs of concept and prototypes, and pushed a few things into production, and part of it was that AI deployments weren't really dependent on the technology, but on the maturity and the readiness of the organization.

(27:39):
And the readiness assessment goes through five key areas, and you ask a series of questions: would you say your workforce is A, B, C, D, or E?
Would you say they know X, Y, Z?
Et cetera, et cetera.
And then, depending on that score, it'll actually let you know how ready you are for a deployment.
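Sol doesn't spell out the five areas or the scoring here (they're in the book), so the following is only an illustrative shape for that kind of questionnaire; the area names, score scale, and thresholds are all hypothetical:

```python
# Hypothetical readiness areas; Sol's actual five areas are in her book.
AREAS = ["data", "talent", "process", "culture", "infrastructure"]

def readiness(scores: dict) -> str:
    """Average 1-5 self-assessment scores per area into a coarse verdict."""
    avg = sum(scores[a] for a in AREAS) / len(AREAS)
    if avg >= 4:
        return "ready to deploy"
    if avg >= 2.5:
        return "ready to pilot"
    return "not ready: fix foundations first"

print(readiness({"data": 4, "talent": 3, "process": 2,
                 "culture": 3, "infrastructure": 4}))  # ready to pilot
```

The point of the exercise is the score-then-gate step: the deployment decision follows the assessment, not the other way around.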

(27:59):
Another one that I actually just developed, about a year and a half ago, right after the book was released (oddly enough, I didn't include it in the book), is what I call the discernment framework.
Once again, I got back into the business of deployments and strategy and advice and coaching, both in startups and enterprise, and I realized something that was so

(28:20):
obvious to me, something that I just followed, wasn't obvious to others.
And so I created the thing called the AI discernment framework.
It's a four-quadrant framework, and it essentially outlines when you should use AI only, when you should use humans only, when you should use AI plus the

(28:41):
assistance of humans, and when you should primarily lean on your workforce: humans with the assistance of AI.
And I have over a hundred use cases in each of those quadrants, so that folks don't overly lean in and go, oh, AI should lead and humans should assist.
No, no, no, it's actually the reverse: humans should be leading

(29:03):
and AI should be assisting, and here's why.
It was just so obvious to me because, again, I'm too close to the work.
But then I realized, well, one of the gotchas is that they picked the wrong use cases.
They couldn't even deploy the use case if they brought in an army of consultants.
So the readiness assessment came out of that.
And for picking the right use case, the framework was developed out

(29:27):
of that.
So I always say it's: understanding how ready you are and how complicated you can get with this; selecting the right use case based on the criticality-and-complexity framework that I have in the book; and then understanding from there what field of play your use case should take.
Can it operate completely on its own?

(29:49):
Does it need a human?
And all the different interpretations of it.
Those were just some of the gotchas that, again, are just so obvious in my head, but as people are going through this, it's just not obvious.
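Sol's four quadrants lend themselves to a simple lookup: who leads and who assists. The two axes and the routing below are our own illustrative stand-ins, not the actual contents of her framework, which, as she says, comes with over a hundred use cases per quadrant:

```python
from enum import Enum

class Mode(Enum):
    AI_ONLY = "AI only"
    AI_LEADS_HUMAN_ASSISTS = "AI leads, humans assist"
    HUMAN_LEADS_AI_ASSISTS = "Humans lead, AI assists"
    HUMAN_ONLY = "Humans only"

def discern(high_stakes: bool, judgment_heavy: bool) -> Mode:
    """Toy stand-in for a discernment check: route work by two hypothetical
    axes (how costly an error is, how much human judgment is required)."""
    if high_stakes and judgment_heavy:
        return Mode.HUMAN_ONLY
    if judgment_heavy:
        return Mode.HUMAN_LEADS_AI_ASSISTS
    if high_stakes:
        return Mode.AI_LEADS_HUMAN_ASSISTS
    return Mode.AI_ONLY

print(discern(high_stakes=False, judgment_heavy=True).value)  # Humans lead, AI assists
```

Note the default lean matches her correction: when judgment is involved, humans lead and AI assists, not the reverse.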

Speaker 2 (30:02):
And that's the perfect example.
You know, I mentioned earlier in the pod that your perspective is so common-sense and practical about getting business adoption, and there are some

(30:33):
misconceptions about the potential value you can get from data and AI.
Right? Because you see examples where companies will try to adopt some sort of, say, semantic layer before they even have a good internal data model or the right data platform set up.
And they'll say, well, you know, the semantic model,

(30:55):
you know, is invaluable.
Well, at your stage, no, it's not valuable, because you don't have the right foundations.
But it requires someone to be there and just ask the right questions.
And that's what I really love about the perspective that you bring, that's grounded in, yes, the technicals and what you have

(31:16):
to build.
But is it the right thing to build, and is this the right time to do it?

Speaker 3 (31:21):
100%.
And whether you should even use AI to do it.
Not every problem needs AI.
If you're hanging a painting on the wall, you don't need a jackhammer; a hammer is going to do the job.
So I think having that breadth, depth, and span of technical knowledge, to ask whether there's pre-existing technology that can solve the problem you're trying to solve, is extremely helpful.
And then knowing where AI should and should not be applied,

(31:43):
that's also helpful.
And I think that's just where I struggled.
I was like, this stuff is known.
Everyone knows this stuff, or so I thought.
And then I was like, wait a minute, no, not everyone knows this stuff.
Everyone's repeating some of the things that I went through 10, 15 years ago.
Maybe it would be helpful if I started disclosing these things more publicly, versus keeping it all in here.

Speaker 2 (32:05):
Yeah, and that's why I definitely recommend the book: there are just a lot of these kinds of frameworks, ways to approach these ambitious projects within large organizations.
Because in large organizations, just doing anything is its own beast, right, which is kind of outside of any

(32:25):
technical way of approaching it, right?
So just getting that leadership buy-in, getting the internal buy-in, even from people adjacent to your organization who have to be champions of what you're doing.
But I also wanted to ask you: you mentioned

(32:46):
, you know in your book and evenon this pod you mentioned about
making existing teams moreefficient.
How should leaders think aboutbalancing AI, automation and
human expertise?

Speaker 3 (33:01):
Yeah, and, to be honest with you, if you ask me
what keeps me up at night, this is now the problem that I was
meant to solve.
Like, for a long time I just kind of haphazardly stepped
into positions and took roles and jobs and, like, didn't have
the purpose.
But more and more now, there's this seed that's starting to

(33:22):
take root, and it's funny, because this noise is just
getting louder and louder.
This seed is starting to bloom. Like, if you talk about my
purpose right now, it is really about how do we leverage
all the amazing capabilities of artificial intelligence, but
not at the cost of our intellectual atrophy?
And I'm seeing this across multiple generations.

(33:45):
You know, my children, who are seven and nine, are digitally
native, and it's funny, because, you know, when you see the
picture of how apes started increasing their posture and we
became humans, I'm watching this generation starting to
decrease their posture with their screens, like we're going
back to the old ape formations.
I'm like, oh crap. It's just funny, watch people walk, and

(34:10):
we are starting to degrade our posture back to our early
historic ways, how we started as apes.
Okay, well, put that image and thought aside.
But I'm also learning that with the younger generations, common
sense is less common than what we're seeing, you know, even at
the adult stage, because the ability to question and push back,

(34:31):
that's a muscle that hasn't been flexed.
So if they read it, they fundamentally think it's true.
Or if an influencer says it, well then it must be right.
You and I both know that that sure as heck isn't right.
Being an influencer doesn't even mean they've done the job.
An influencer just had a good strategy on social media and
either got there early or had a great marketing tactic and
created a voice around them, but it does not make them the

(34:54):
source of authority for a subject or a domain.
But that's not the world we live in.
So now you add that, and then you add artificial intelligence.
You know, if I'm looking at people in the artistic realm, in
the content creation realm, in the strategy realm, in positions,
everything is a prompt away from Grok, ChatGPT, you name

(35:18):
it, soup du jour.
Pick your foundational model of choice.
What's happening is that, while knowledge has been
democratized, and it's great because we get access to information,
we are not taking it a step further in asking the questions.
We're using it also as a source of truth to check a task off a

(35:42):
task list, but we're not using it as a place to start our own
individual forensics.
And so I think we're short-circuiting the learning
process.
And if we short-circuit the learning process, we stop
learning, we stop being curious, and we weaken the

(36:04):
synapses of the neurons firing and being able to create these
interconnections between different data facts and
create a point of view.
We will all end up just regurgitating exactly what we
read off of any of these foundational models, and we're
all going to start sounding the same, and I can already hear it
in some people's tones.
I was at a conference once and someone wanted to ask a question.
I could tell in an instant it was not from that person's

(36:25):
heart, their tone or their mindset.
They had asked ChatGPT for the question, got the question, and
then they got up and asked it, hoping to sound intelligent, but
it came across as extremely scripted and insincere.
And so when you take the image of the apes in the evolution,
with our inability to question the source, with the fact that

(36:48):
we're short-circuiting the learning process, what I'm
afraid of is we're all going to experience intellectual atrophy
in the thing that makes us the strongest: our ability to think
critically, solve big problems, invent, imagine. Because we're
just going to outsource our thinking to all these amazing
capabilities that exist for us, and so

(37:11):
that's kind of the crusade that I'm on: how can I help big
companies and small companies, while we introduce these amazing
capabilities, to not outsource our thinking, to properly integrate
AI with the human workforce, to increase our largest investment,
which is human capital, so that we become empowered and not

(37:32):
disempowered?
So I always say I want to deploy AI for good, at the
individual level, at the corporate level and at the
community level, so that it strengthens us, it doesn't
displace us.
Yeah, absolutely.

Speaker 2 (37:50):
No, but the important thing there is that, at the end of
the day, I mean, people know this now, like, whether you're
an executive or you manage a small team, that aspect of
just being able to go ask someone who owns something and really
understands something within the business. Either they know a certain

(38:11):
customer really well at an intuitive level, or they know
some code base at an intuitive level, and they're just the
expert on this thing, right? And having a human there is so
important, because you trust that person and they're the ones
with the authority to really get stuff done.
So now introduce this.

(38:32):
You know, tying this into AI, right, we're talking about AI
really empowering and making humans more efficient and
automating the stuff that should be automated.
So where do you see, just conceptually, at a high level, how
companies should look at human in the loop for AI?

Speaker 3 (38:50):
Yeah, so I was chatting with someone who used
to work for me a long time ago, and I think if we're not careful
and if we don't do something now, we're going to pivot from
AI being our co-pilots to us managing AI cockpits, and that

(39:11):
can be dangerous on many levels.
So I do think that we need to have a human in the loop for
decision making and governance at all times.
Now, easier said than done, because I think there's always
going to be a place or time where autonomous-thinking AI
agents are 100% going to be not only more profitable and more

(39:33):
efficient, but the business decisions that need to be made
aren't critical.
There are classic data aggregation case studies, right?
Instead of, I hate to say this, but maybe a data analyst having
to stitch together and cobble up information and then create
this massive translation layer to be able to run a dashboard or

(39:54):
a report, we could actually develop that individual in a
very different way.
But I do think that a human needs to be in the loop, and
part of that framework of AI only, human only, AI plus
humans, and then humans plus AI actually distinguishes where a
human should be in the loop and why. Like, if we're talking about

(40:16):
national defense contracts, that's not something you should
automate, not at this point in time.
There's too much at risk.
If we're talking about the criticality of a certain change
with partnerships, who you're going to choose to work with in
that deal term, that needs to be human in the loop.

(40:36):
Now, I know that AI is being used a ton for data analysis,
but when you have to publicly report your 10-K, that should
also require a human in the loop, because you are technically
responsible.
So I think AI agents are going to provide a ton, a ton of
benefits.
There's no doubt about it.
But again, where you apply it is going to be key.

(40:58):
But at the end of the day, I think the information should
always be verified by having a human in the loop, because the
risk exposure of not doing so is too massive.
Like, one of my favorite use cases: I think it was about
three and a half, four years ago, a driverless vehicle hit someone
who was walking in the walkway.
Well, the first thing the person did was try and sue the

(41:22):
owner of the driverless vehicle. Okay, but when you peeled back
the layers, technically that person was breaking the law, because
they were crossing at night when they shouldn't have been
crossing, and the street signs had said don't cross, because
there was still moving traffic.
Well, there wasn't a human in the loop; they had actually

(41:44):
leveraged artificial intelligence to do the decision
making.
But in something like that, you can understand all the different
nuances, because the driverless vehicle was technically abiding
by the law, and by the law it had right of way, and the person who
was crossing the street broke four laws in the making.
Who's at fault, and who should be doing the decision making?

(42:06):
These things require a human in the loop.
It's very subjective, it's not black or white, and,
unfortunately, I would say our world is 50 shades of gray more
than it is black or white, and so I think we shouldn't lose
that human-in-the-loop capacity or component for things that
require critical thinking, and unfortunately, I think some

(42:27):
people are trying to do that.

Speaker 2 (42:29):
Yeah, and I think that's the right way to look at
almost every AI project: what is the human in the loop, and
who's the owner?
There still has to be someone accountable.
Like you said, the first person they looked at, when a driverless
vehicle hit a pedestrian, was, okay, let's point the finger at
the driver.
Or is it the software maker, or is it, you know, whatever, right?

(42:53):
So at the end of the day, someone's accountable for this
AI, and that person is accountable for delivering
it and its quality.
And, you know, we can get all into fine-tuning and
reinforcement learning and how that's applied and all this
stuff.
We're actually going to cover a lot of this. You know, just as
a sidetrack, I'm recording this from our Palo

(43:15):
Alto headquarters, and Sol, you'll be joining us on site
here in May for our AI Executive Roundtable, which I'm super
excited about.
We'll dive into some of these deeper topics and I can't wait
for that.
That'll be fun.

Speaker 3 (43:30):
Yeah, me too.
I'm super excited.
But if you notice, John, in this entire conversation we did
not talk about API toggling or model reinforcement. We did not
talk about fine-tuning and RAG techniques.
We did not talk about capacity.
That stuff can be solved, for not only do we have extremely

(43:52):
smart individuals, but there are a lot of wonderful companies out
there that can actually solve these problems for folks.
It's the applicability in business, it's the
accountability, it's the humans in the loop, and knowing where,
when and how; there's a space and time for everything, and I
think that level of discernment needs to be applied if you want
it to be successful.

(44:12):
At least, that's been my experience over the past 13 years of
deployments.

Speaker 2 (44:15):
Absolutely, and that's one of my favorite reasons for following
you: I can always find that practical common
sense and really just proven strategies for deploying
AI in a way that's successful.
And I really do think about that, in my, I won't say
day-to-day, but week-to-week or

(44:39):
quarterly cases: how do we make sure that what
we're doing is really successful and valuable?
Because, like you said, there's all these, you know,
just naming things, yeah, model context, routing and
reinforcement and all this technical stuff, and any
team can implement any of that, but the question is

(45:00):
when, and doing it for the right team at the right time.
So I'm going to be really excited to continue talking to
you about this at our upcoming event.

I'm very, very excited about the event.

Speaker 3 (45:15):
So thank you for having me on this podcast.
Thank you for inviting me to your event in May.
You guys are doing amazing things and I'm a big fan, so any
which way I can support you, you know, I will.

Speaker 2 (45:26):
Thank you, Sol.
Thank you for joining us on this episode of What's New in Data.
We'll have a link out to Sol's book and her LinkedIn, for all
the places you can continue to follow her, down in the show
description and show notes.
Sol, thank you again for joining, and we'll see you soon.

Speaker 3 (45:44):
You got it.
Thanks, John.
Thank you.
