All Episodes

September 22, 2024 89 mins

In this special episode, Andreas Welsch launches his new AI Leadership Handbook together with fellow AI leaders:
- Matt Lewis (Founder & CEO, LLMental)
- Brian Evergreen (CEO of The Future Solving Company)
- Maya Mikhailov (Founder & CEO, SAVVI.AI
- Paul Kurchina (Enterprise Architecture Community Leader)
- Harpreet Sahota (Developer Relations Expert)
- Steve Wilson (Chief Product Officer at Exabeam and Co-Author of the OWASP Top 10 for LLM Applications)

Key topics:
- What’s keeping Chief AI Officers up at night?
- What’s the hardest part for new AI leaders coming into an AI leadership role?
- What does a future with AI agents look like?
- How can AI leaders succeed in this phase of AI adoption where we’re just coming off the hype?
- Why do we still need to educate business leaders and stakeholders about AI?
- Why is AI here to stay this time and why should IT teams and CoEs care?
- How has RAG evolved over the last 12 months?
- How does data play into concepts like RAG and agents? And why is it important to still keep an eye on technology?
- What is happening in the cybersecurity space since the release of the OWASP Top 10 for LLM Apps?
- How are bad actors exploiting LLMs’ personalization capabilities and why do leaders need to know about LLM vulnerabilities?

Listen to the full episode to hear how you can:
- The sheer possibility of what AI can do now can seem overwhelming—even for AI experts and leaders.
- AI agents promise autonomy, automation, and optimization beyond anything currently possible, with agents negotiating with other agents to find an optimal solution.
- Data remains a critical ingredient for any successful AI project that many leaders still neglect.
- IT leaders are asking for tangible examples and return on their investment when working with embedded AI solutions.
- Agentic RAG has emerged as the latest iteration of RAG systems—however, data quality remains critical to achieve high quality outputs.
- The cybersecurity discourse has evolved from avoiding embarrasing PR to avoiding data leakage and legal consequences with bad actors looking to exploit these new LLM-based systems.

Order your copy of the AI Leadership Handbook today: https://aileadershiphandbook.com/order


Watch this episode on YouTube:
https://youtu.be/TMSfRMYooy0

Questions or suggestions? Send me a Text Message.

Support the show

***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


Level up your AI Leadership game with the AI Leadership Handbook:
https://www.aileadershiphandbook.com

More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter

Mark as Played
Transcript

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Andreas Welsch (00:00):
Today, we're celebrating a special occasion,

(00:02):
the launch of my first book, theAI Leadership Handbook.
Over the last two years, I'vetalked to so many leaders about
how they can use AI in theirbusiness, and everybody knows
it's super important, andeverybody knows it's super
urgent.
But what I found is that onlyfew know where to really start
and to do it successfully.
And that's why I've invitedthose that have been doing it
successfully onto my show,What's the BUZZ?, and asked them

(00:25):
to share how they have turnedhype into outcome.
Now watching 60 some episodesthat we've recorded over the
last two and a half years andpiecing everything together gets
a little cumbersome.
So the idea for the AILeadership Handbook was born.
It's your one resource thathelps you prepare you to lead AI
programs in your business with a360 degree view beyond just

(00:48):
technology.
And Because it's a little boringif you just listen to me, I've
invited some of the guests back.
For example, Matt who's alreadyjoined us.
He's a CAIO in the life scienceindustry and really excited Matt
to have you on to share a bitmore about what are leaders

(01:08):
seeing today?
What's keeping them up at nightand how can CAIOs sleep well,
again.

Matt Lewis (01:16):
Thanks so much for having me, Andreas.
And congratulations on thelaunch of the book.
When I first joined your programlast year, we talked about the
role of the Chief ArtificialIntelligence Officer.
It was really an outpouring ofengagement from across the
industry.
Transformation, innovation,large enterprise and a small

(01:36):
startup ecosystem as well interms of folks like really
interested in how this work andreally be progressed forward to
drive outcomes.
And I've really enjoyed ourconversation since and just so
great to see the book out in theworld and.
Thank you again for inviting meto write the foreword to it.
It was really the highlight ofmy summer.
And so happy to see it out inthe world and looking forward to
seeing folks using it to reallynot just advance the discourse,

(02:00):
but really to start making somereal change out, out in their
environments, whether they're inlife sciences, healthcare, or
the regular industries or otherplaces of the world.
Thanks again for having me onthe program.

Andreas Welsch (02:11):
Awesome.
And again, thank you forproviding such a beautiful
foreword.
Folks, if you would like to notonly read the book, but read
Matt's foreword, too, let me puta link in the chat where you can
buy the book.
If you're looking for it onAmazon, just search for AI
Leadership Handbook.
Now, folks, if you're justjoining the stream, drop a

(02:33):
comment in the chat where you'rejoining us from, because I'm
always curious to see how globalour audience is today.
We have more than 400registrations, so I'm sure we
get a good global coverage.
Matt, when we're all preparingand we have a we have a total of
six guests that there will bejoining us, but when we're all
preparing, we said it would beawesome if you guys can ask me a

(02:56):
question as, as well.
So we'll do that at the end ofeach section as we move from
speaker to speaker.
But, we spoke about a year ago,like you said, and I know you're
very well connected in theindustry and in the domain.
What are you seeing?
What's keeping CAIOs up at nightone year, one year further?
And especially almost two yearssince the introduction of

(03:19):
ChatGPT.
What's keeping them up at night?

Matt Lewis (03:21):
Sure, It's a really great question.
First of all, just in terms oflocation and all the rest.
I am based in New York.
Today I'm actually up inCambridge, Massachusetts.
Attending an artificialintelligence meeting at MIT.
And so happy to be spending thistime with you in the audience.
But just in terms of locationsand the rest for those that are

(03:44):
in this area.
It's so much happening in somany places.
And it's really just a greattime to be in the space and to
be alive in general.
But in terms of what's keepingChief AI Officers and really
anyone that's deep in on AI upat night.
I think that really is like themost vexing kind of challenge
that we face is I've chattedwith other AI experts in the

(04:07):
space and that there are realsolutions and real value to be
accrued by implementinggenerative artificial
intelligence and other AI acrossthe ecosystem.
But one of the, one of the kindof like weird paradoxes of AI
adoption is that even as itallows for increased efficiency

(04:29):
and outsized productivity withinand across the workforce, it
almost is like this like doubleedged sword that the folks that
are actually leading the workand helping to progress it
forward don't necessarily findthat their days are any shorter
or that the, that they have moretime.
If anything, they're, maybethey're more excited about their

(04:52):
futures or more excited aboutall the things that are
possible.
And I've spoken to a number ofpeople, like myself, that are on
the speaker circuit, that are atconferences, that are leading
businesses within theirorganizations, and potentially
are involved in the not forprofit environment, like myself,
of a foundation that I'm a boardmember of and other activities I
do beyond.
My organization and I think it'schallenging truly to get to bed

(05:14):
on time because people are upuntil midnight or beyond finding
new challenges to address andall the things that can really
be done now with generativeartificial intelligence that.
For many years, sometimesdecades went unsolved.
So it is a true problem.
This question of being up atnight is both a figurative, but

(05:35):
also a literal challenge as AIbecomes more and more practical
and pragmatic and possible.
It literally means that peoplethat are deep in on it are
struggling with their mentalhealth and struggling with their
stress and their anxiety,because there's so much that can
be done.
How do we prioritize our time?
I think though to the point ofyour question, I think, we're in
a point now, I think, in thelife cycle of generative AI

(05:57):
where last year was really a lotof hype and a lot of people were
saying wow, this is this isamazing.
This is great.
And I think when we spoke lastyear, people were asking do I
really need someone like Matt?
Do I need a Chief AI Officerwithin my enterprise?
I think many organizations havepassed that consideration and

(06:18):
have recognized that, yes, ifthey believe that they're still
going to be relevant in five toseven years as an organization,
that they need someone to headup the AI kind of strategy and
portfolio and product and Peoplethat are augmented by AI and
typically that's a head of AI ora chief AI officer to help
catalyze that consideration andeven the federal government here

(06:40):
in the U.
S.
has done that recently and arehiring literally hundreds of
chief AI officers across all themajor agencies.
Every major organization now iseither having a CAIO or is
hiring for that position.
And it's really hard to argueagainst it.
We're, now in a position, unlikelast year, where people were
like that's an interesting role.
It's a curiosity what do we dowith it?

(07:01):
Where now people are actuallyputting people in those roles.
And it's more a question of how,like, how do we use that
person's time?
How do we value and respect therole that it is?
It really extracts the mostvalue from it so that it helps
the organization here and now,but also is set up well for
success in the future and it's alittle bit, a little bit

(07:23):
antithetical perhaps to some ofthe roles that people may have
come from before they're in thisposition.
I was the Chief Data AnalyticsOfficer for many years before I
was Chief AI Officer.
And in that type of role, wewere always like scurrying
around, trying to find things todo, like trying to find ways to
justify the role.
I can't tell you how many timesI went to a conference where one

(07:43):
of the titles was like, is theCDA role, CDAO role going to be
obsolete in a year, two years?
I never hear that topicdiscussed at AI conferences
because there are so many workstreams.
That are relevant and resonantfor the CAIO role that it's like
the opposite problem.
It's like, how do you reallyscale and democratize the work

(08:06):
of the CAIO role so that itprovides value to the business
here and now, but also so thatit can be done at a measured
pace so that people can adopt itand accelerate its consideration
across the business.
I think prioritizing.
and aligning with the people sothat it works for everyone is
probably the biggestconsideration.

Andreas Welsch (08:25):
See, from my own experience building and leading
an AI CoE, I remember that wasone of the toughest challenges.
First of all, staying on top ofall the things that are coming
out, figuring out where is thevalue, what are the things that
we should be prioritizing, weshould be pursuing.
That's just because They arefancy or we can do them, but
because they actually deliver ameasurable business value.

(08:47):
And I think that's a key part ofwhat you mentioned as well, of
that AI leadership role, helpingnot only guide your data
science, AI teams, but also workvery, closely with your business
stakeholders to tease out whereare those opportunities to bring
in AI and how do we measure it,right?
That, I think that was the otherthing.
Certainly everybody wantshappier employees and fewer

(09:09):
clicks and you name it.
But very few people are willingto put a price tag on it and pay
for it, right?
So how can you measure it?
I think is another dimensionthat you mentioned too in, in
that.
Now, By the way, if you're inthe audience and you have a
question for Matt or for any ofthe following guests, please
feel free to put it in the chat.
We'll take a look in a couple ofminutes and pick one or two of

(09:31):
those.
Now, everybody starts somewherein a leadership role on AI.
I think it's getting easier toget started and to get your
hands on it.
But what's your recommendation?
What's your advice to leaderscoming new into this role?
What's the hardest part thatthey should prepare for?

Matt Lewis (09:49):
Yeah, it is.
It's another really importantquestion.
And I think again, unlike a lotof other C suite positions or
really any leadership positionin an organization you have to
think about like the context.
of the role that this iscontributing to unlike say like
a Chief Innovation Officer orchief commercial officer or

(10:11):
Chief Data Analytics Officer orreally any like leadership
position most of these types ofroles that contribute to the
health or safety or risk of theorganization have existed for
years, decades sometimes, thefunction of augmented
intelligence in any business,whether it's in pharmaceuticals
or in banking is brand new.

(10:33):
So the people that are cominginto these roles, whether
they're new to AI, or they'rejust new to the role, have
really a dual responsibility.
They have a responsibility toarticulate and architect the
thesis about what their kind oforganization's narrative for AI
is going to be.
That is like what they stand forand what they won't do within
the organization.

(10:54):
For example, like Gen AI can beused for lots of things, but
what really makes sense for themto do as a business and what
should they never do?
If they don't, if you never sayno, you don't have a strategy.
And that's really importantbecause there is no kind of
history.
For the organization to considerwith regards to Gen AI, and you
have to have principles.
Otherwise, what's the point?

(11:15):
And so that's an importantconsideration.
So that the leader in that roleis essentially building the
thesis, building theorganization, building their
team.
Which historically most likelydoes not exist when they come
into role and there's a anunderstanding of culture and of
process and of systems and ofthe organizational strategy and
its priorities and where it isin the commercial market, both

(11:36):
internally and externally andwhere it wants to go, which a
leader of the business, ifthey're coming from the
business, they're an incumbent,they may know a bit of, but not
everything.
If they're coming externally,they know very little.
So you have to recognize what'spossible to achieve within a
business that is essentiallybrand new to Novo.
And then also there's like asecond consideration that.

(11:58):
While you have to build abusiness to make the AI work,
you're also directly responsiblefor transforming the business at
large in order to be successful.
The CAIO role is really a, ithas an internal role to stand up
a new business, like a startuptype role, but you're also
transforming the business in thefuture so that it's successful

(12:20):
and is future proofed againstcompetitive threats, both that
exist in the presentenvironment, as well as those
that are coming from the startupecosystem and other places.
So it's, you have to wear bothhats, like how do you build for
the present, but also how do youdefend for the future?
So it's, it takes a reallyconsiderate approach to do both
those things and not to getstressed out by it so that you

(12:43):
can sleep at night.
Here's the earlier question.

Andreas Welsch (12:47):
And I was just having a conversation with the
CEO the other day and they askedexactly that same question.
What are the threats that weneed to be prepared for and look
behind our back?
And what are the opportunitiesthat we need to seize and look
what's ahead of us with AI andbecause of AI?
So I think great pointsummarizing that.

(13:08):
Now, I'm curious.
You've read the book cover tocover, obviously, before you
wrote the foreword.
I'm wondering, first of all,what got you excited, and then
what's the question that youhave for me?
Because that's the thing none ofthe guests and I have aligned on
before.
So if you follow the show and Iusually ask my guests, if AI
were a dot what would it be?

(13:29):
It's the surprise question.
I'm curious, what question youhave for me?

Matt Lewis (13:35):
Yeah I when I agreed to write the forward, I didn't
completely realize that I had toactually read the book to write
the foreword.
I was so excited to, by theopportunity to do it that I of
course immediately agreed.
And then I stuck, set out tostart writing and I was like I
can't really do this withoutknowing what the book is about.

(13:55):
So I started reading a littlebit about it.
I was like, damn, I got to readthis whole thing before I can
really write the forword, and Iread it.
And I was like man I honestlycouldn't have written a better
book myself and I had some youknow designs I'm trying to do
something similar.
But this is really the way thatI would have articulated how a
leader within any business andany enterprise, is trying to

(14:18):
adopt and augment their teams,their organization from a really
considerate perspective.
And I, it really was sopractical, but also so well
researched and considerate andhelpful.
And I, hope some of that cameacross in the forward and in the
book itself.
But I think my question to youis as we think about the
trajectory from really goingfrom one of And I think that's a

(14:38):
really important piece ofawareness and hype as people
have described it to this kindof place of action that we're in
now of how the book is even atthe current length is still
probably a little unwieldy for,but the average kind of
executive enroll.
So not everyone has time to readthe full book, if you will.
If someone is not an AI expert,and I see some friends that are

(15:00):
here on the program that are onthe right side of the screen, if
you will, and where they haveteams that are not as deep in on
the content as you and I are.
How would you recommend thatthey use the content in the book
to act as a foundation or as aspringboard to actually make
change within the organization?

Andreas Welsch (15:21):
I love that question, first of all.
Thank you.
Look, I think that the keythesis that I want to get across
with the book is it's not abouttechnology.
Definitely not just abouttechnology.
Yes, there are so many thingsthat we can do with AI and
because of AI, but none of themwill really work and materialize
unless you bring your peoplealong.
And that means that you need towork closely between your data,

(15:44):
IT, AI, and business teams tobring them together to find the
most valuable ideas to pursue.
You need to align AI with yourbusiness strategy to begin with,
otherwise you're most likelyjust running science projects
that don't go anywhere, right?
You So you should think abouthow do I create a following
around this within myorganization.

(16:06):
So you need to look at thingslike multipliers, communities of
practice that can help you notonly enter different business
functions, finance andprocurement in HR, where you
want to help them get more valueout of AI, but also become
advocates for you and bring backinformation and feedback to you.
What is working?
What is not working?

(16:27):
So, really have advocates andmultipliers for you.
Those are things that you can doeven without a data science
background, right?
Those are some core changemanagement organizational
principles that you can follow.
And that's why I wanted to makesure that they're easy and
understandable and accessible inthe book as well.
Great question.

Matt Lewis (16:48):
Yeah.
Again, thanks so much for havingme here and having me in the
book.
It was, it really was a reallyspecial opportunity to
participate in as the first bookthat I have both an autographed
copy by the author and whichI've contributed content to.
So it's a real milestone in myprofessional career and looking
forward to our nextconversation.
So thank you so much again,Andreas.

Andreas Welsch (17:10):
Wonderful.
Thank you so much for being withus, Matt, and for sharing all
your learnings with us.
Of course.
Awesome.
So then let's see, let's moveover to our next guest.
Hey folks our next guest isBrian Evergreen.
Brian is the author ofAutonomous Transformation that

(17:31):
he published through Wiley.
One of the most acclaimedmanagement books of the year by
I Thinkers 50, one of theleading organizations looking at
in ranking management books.
So I'm super excited to have youon, Brian.
Last year, we spoke abouttransformation and reformation
and autonomous transformation.
So I want to make sure that wepick up there again one year

(17:51):
after and see where are we?
Why is AI leadership stillimportant?
What does transformation mean?
So thank you for joining.

Brian Evergreen (17:59):
Absolutely.
Thank you for having me,Andreas.
And I'd say that we are in aninteresting moment where people
have seen that there is value tobe had here when it comes to AI,
but they're struggling to tocapture that value themselves.
And I think one of the biggestreasons that we've seen
something I think that I andmaybe and you and many others

(18:20):
warned people against when theAI, so Gen AI hype bubble
started to really grow was youcan't AI is not something where
you can just get quick wins.
We talk a lot about AI usecases, and I've been giving this
example lately, and that I thinkis I really like at least which
is that if you were to picturethe most beautiful building that

(18:44):
you've ever been inside of, andjust walking in and just being
awed by that building.
Now think, how did they, how didthat building come about?
Did they come together with thearchitects and The the the wood,
woodworkers and the people thatare shaping the metal that all
these people that came together,they start and say, okay here's

(19:05):
a set of tools and let's startwith a great use case for the
first one.
And then we'll go from there,right?
There's no way that would havehappened.
Instead, they came together andsaid, like with Duomo is, which
is one of my favorite examples.
They were pushing the boundariesof what was possible.
possible from an architecturalstandpoint.
And so being able to say, yeah,could we even build a dome that

(19:27):
big?
What would have to be true forus to make such a big, beautiful
dome?
And and I'm so glad they did.
And every beautiful buildingwe've ever been inside of.
Even our homes today, the way weremodel is not by picking up a
tool and walking around ourhouse.
We always start with what wewant to do first.
And so I think from a sort ofstate of the union around AI I

(19:47):
think that a lot oforganizations have been
disappointed by the fact thatthey've run up the hill.
Using the same sort of system ofleadership and management and
strategy that they had beenusing for digital transformation
and trying to carry that over toAI, which has a lot more
complexity.
So organizations I think havebeen really disappointed by
those results and are startingto I'm the concern I have is

(20:12):
that some of them may end upgetting to the point where they
say, okay, then it's just not.
AI, it's an AI problem whenreally it's not at all an AI
problem.
It's a way of doing strategy, away of planning, a way of
setting a vision and turningthat vision into strategy, and
then taking that strategy andusing storytelling to ignite and
spark action across yourmeaningful, purposeful, useful

(20:35):
action across your organization.
And if any part of that systemis broken.
Then it doesn't matter what thefront end of that system is.
It doesn't matter if it's an AIproject or it's a blockchain
project or it's anything or anSAP project, right?
If you don't have those piecesall in place from a great vision
that, that people actually careabout bringing to life, a a

(20:58):
strategy that, that is, I callit designed for inevitability, a
strategy that should work.
And then And then storytellingthat's really resonating with
people about that.
Then it's, you're going to beburning money alongside the
other 87 percent of AI projectsthat fail.

Andreas Welsch (21:15):
I think that's a really good way to frame it.
I really like the analogy of thebuilding and the beautiful
building exactly right.
Nobody ever said, let's take ahammer and a chisel and figure
out what we do with them.
But what do we want to build tobegin with?

Brian Evergreen (21:33):
Yeah.
Let's just look for some lowhanging fruit and we'll go from
there.

Andreas Welsch (21:36):
While we are already seeing that, yes this
bubble is getting a littlesmaller with Generative AI.
People I feel have, used a goodamount of it, have a decent
understanding of at least how doI use ChatGPT, what can I do
with these kinds of tools,especially if you're in the tech
sector, I feel that's moreprevalent than in others.

(21:56):
As this is coming down the peakof the hype cycle, I see the
next one just that's coming up,right?
And that's the most favoritetopic of the summer if you've
been in AI.
Agents.
Software components.

Brian Evergreen (22:10):
Hot agent summer?

Andreas Welsch (22:12):
Yes, right?
That take a goal and then go andfigure it out and everything is
magic.
Now I think in a lot of thoseconversations, we see that
people are approaching it justlike another automation project,
right?
And instead of you having todefine rules and program every,
line of code, you just give it agoal and it magically figures it

(22:32):
out.
I saw an article by you just acouple of weeks ago, having a
very strong and spiky point ofview on that topic.
So I'm curious, what are youseeing with agents?
Where is this going?
And why is it not?
Your next automation project.

Brian Evergreen (22:46):
I great question, Andreas.
And I, do like to be spiky.
So I'm, glad that you that youappreciated that article.
So I think you might bereferring to AI agents are not
automation.
Which is something that I wroteon my, sub stack about a lot, I
think a lot of organizations orjust people, we think about
agents it's, and I will, saymany sales and marketing

(23:07):
organizations that are workingto try to, Capture the current
excitement about agents and usethat to, to sell, are, sometimes
miss reallocating and sayingsomething that I would a Rube
Goldberg machine for those whoare familiar with those is
that's basically an AI agent aRube Goldberg machine for those

(23:28):
who aren't familiar.
You'll know if when I describeit, which is a ball swings and
then it hits something else.
And then that thing rolls downsomething and bumps into a
domino, which then starts, andit's a chain reaction.
This happens, then this happens,and this happened, then this
happens.
And that is essentially aperfect, I think, analogy for
automation, which isessentially.

(23:49):
A recipe, a series of eventsthat you can prescribe.
First do this, then do that,then do this, then do that.
If this, then do that, right?
And you can think through andcome together, bring your
experts together to think reallydeeply about how all those steps
are going to come together.
Where AI agents are different isthat instead of saying, I'm
going to teach you as an AI, asautomation, the steps.

(24:13):
Instead, I'm going to teach youthe, when, I'm going to teach
you how to research the way, thetwo main additions, how to
research the way that I wouldresearch, and then the, how to,
reason the way that I wouldreason based off what I found.
And then at what confidencelevels I would make a decision.
Whereas before we're justteaching it a recipe and a

(24:35):
series of steps, in this case,we're teaching it a little bit
more about how we would thinkthrough things and at what step
of, researching reasoning wewould feel comfortable then
making a decision.
That's one major difference.
Another one is that instead ofwith automation, you might have
an, a given automation sequence.
You might have a step where it'sokay.

(24:56):
Now check with the human.
If you, if at this step, if thisequals that check with the human
to sign off, but with agents,you can set that instead at
thresholds, confidence levelsthat would span the entire
thing.
So in other words, if anything'soutside of the norm, at any
point, check with a human.
Whereas with an automationproject, if it passes step two,

(25:16):
where that was the check with ahuman step, and things start to
go haywire on step three, it'snot going to bother checking
with a human, because it's justgoing to follow the sequence.
Whereas an AI agent would bemore dynamic, and would know,
okay, any time anything getseven slightly outside of this
this box that I've been told Ican go operate within, then I'm
going to check with a human.

(25:36):
So those are a couple examples.
I think the last that I'll saythat I'm very excited about is
that I think it'll redraworganizational boundary lines.
I think 2D chess is to say thatwe're going to figure out how to
create AI agents that cannavigate the current interfaces
across the internet that will gofind, learn how to understand

(25:57):
menus and then Click throughthose different menus and use
search bars and to navigate theexisting construct of the
internet.
I think 3D chess is sidesteppingthat and then having agent to
agent, almost like a newinternet in a sense that will be
leveraging APIs and.
Have a whole agent marketplacethat's happening behind the

(26:18):
scenes that will help us as,leaders and as individual
consumers to be able to not haveto think so much about so many
of the things that we do on aday to day basis.
Like if I were, my favoriteexample is if I worked at a.
And let's say, Andreas, you didas well.
And we each needed lumber forour projects that we were about

(26:41):
to start.
And you live on the other sideof the coast from me.
And so I, call up in Seattle andI call Acme Lumber and I say,
Hey, I need lumber in Seattle.
And they look through theirwarehousing and they say, okay,
we have lumber in in New YorkCity.
And and so we're gonna have todrop a purchase order that is
gonna include the extra cost fora truck driver to drive it over

(27:04):
to you and the, this, thereforethis is the shipping time and
all of that, right?
And the impact to from a CO2emissions perspective.
And then let's say you're on theopposite, side of the country
and you call up a b, c lumber, aseparate lumber company.
And you say, I need lumber in,in in New York city.
I don't think you're, I don't,where are you based Andreas?

(27:25):
You're in the Middle East.
Philadelphia.
Okay, never mind.
Philadelphia.
So I need lumber fromPhiladelphia.
And I say and then the ABClumber says great.
Let's look in our warehousing.
Oh, it looks like we have somein Seattle.
So now you're, we're drafting apurchase order where you're
going to pay extra.
And now two truck drivers aregoing the same lumber of all the
way across the country.

(27:46):
Where and, that's because itwould be too cost prohibitive
and it would take too much time.
Okay.
for every lumber company tomanually call all the other
lumber companies to check ontheir warehousing at any given
time between human to human.
In the agentic future, theability to have AI agents that
just query an entire marketplacein a matter of other agents in a

(28:09):
matter of milliseconds, thosetwo companies could actually
trade us as customers.
Because for us, we want theright quality of product.
We want the right quantity andwe want the best possible price
and, speed in terms of logisticsthat we can get.
So we don't care.
I don't actually care if it'sAcme versus ABC lumber.

(28:29):
I just want the best lumber Ican get at the best price and
you too, right?
So behind the scenes, those twocompanies could trade customers
without us even necessarilyknowing or caring, saving us
both money, saving positivelyimpacting the human experience.
Cause those two truck driversnow don't have to traverse the
country for those extra orders.
And a better impact to theplanet.

(28:50):
So that's one example.
And if you span that acrossevery industry, that's the
future from an agenticperspective that I'm most
excited about.

Andreas Welsch (28:58):
That sounds really exciting.
Sounds like there's a lot morework to be done and also for AI
leaders to get ready for this.
So we have an entire chapter onthis.
Perfect.
an entire chapter on this in theOutlook on AI agents, where are
things, where are they going?
And also an entire chapter ontransformation, autonomous

(29:18):
transformation, and many of theinsights that you've shared.
So Brian, I'm curious.
What's your question?
I haven't read it too.

Brian Evergreen (29:28):
Yeah I thought about this and I would say I
want to ask you one that'sexciting as the one that you
asked me.
Okay.
And so I would say if AI were aFriends character, which would
it be?

Andreas Welsch (29:41):
Good question.
Joey.
Always into something new,something troublesome if you
don't pay attention.
But definitely you can't have acast without it or without them.

Brian Evergreen (29:55):
I love it.
That's a great answer.
That's a great answer.

Andreas Welsch (29:59):
Awesome.
Brian, thank you so much for thequestion and for the
inspirational talk.
I know people or let me ask youthis.
Where can people find you to getmore inspiration?

Brian Evergreen (30:11):
They can find me right here on LinkedIn.
Happy to connect with anybody.
Feel free to reach out to any,anywhere I can be helpful.

Andreas Welsch (30:17):
Awesome.
Thank you so much, Brian for theinsights and for supporting the
launch of the book.

Brian Evergreen (30:22):
My pleasure.
I'm so excited for you, Andreas.
Anybody who's listening, go buyAndreas book if you haven't
already.
And I've already skimmed throughit.
I'm going to be reading it moredeeply this weekend.
And but from what I've seen sofar it's a great read and and
very practical.

Andreas Welsch (30:38):
All right, and let's move over to our next
guest, to Maya.
Hey, Maya.

Maya Mikhailov (30:46):
Hi, Andreas.
Hey, look what came in the mailyesterday.
I'm so excited to start readingthis.
I have so many plane flightsahead, and believe it or not, I
still like flying paper books.

Andreas Welsch (30:59):
Awesome.
I'm sure folks in, in Australia,will join or watch the
recording.
Be prepared summer is about tostart.
Get your paperback.
awesome.
Hey, you you were on the show, Ithink about a year ago, maybe
five quarters, something likethat.
And we talked about how can youget your leadership on board and

(31:21):
manage stakeholderrelationships, especially with
your senior leaders.
And, from my experience as wellit's super key to, to have your
stakeholders on your side, notjust when you're you know, AI is
the shiny object and everybodywants to do AI, but also help
them understand why should you,or why should we do things a

(31:42):
certain way?
Where are some of thelimitations?
Yes, generative AI is great, butit still won't do your demand
forecast.
I think that was a great examplethat you mentioned on the
episode together you've led somany AI programs in the most
senior levels of executiveleadership in financial
services.
CEO of Savvy AI.

(32:04):
Maybe share a bit about, whatSavvy is all about, but I'm
wondering what is critical inthis phase of AI adoption where
we're just coming off the hype?
What do new and existing AIleaders need to tell to their
leadership about it?

Maya Mikhailov (32:18):
Absolutely.
And Andreas, first of all, superexcited to join you on this huge
day for you.
Not many of us get to say thatwe are a published author.
But you do, which is welldeserved.
Brian made some great points,your previous speaker, Brian
made some great points about abit of disappointment that's
setting in with AI adoption aswe're coming off of this sugar

(32:40):
high of AI, if you will.
But I think there are three keyelements that organizations need
to address.
When adopting ai and that'sreality data and courage.
These are the three key elementsright now of going over the hype
cycle and getting into somepractical implementations.

(33:01):
And the first one is a bit ofreality that's sinking in.
Yes.
We're coming off of as Brianpointed out, hot.
AI agent summer, the new brat,if you will.
But the reality is that AIprojects need to align with an
organization's goals andstrategies.
They need to be able to besomething that can be scoped and

(33:23):
achieved.
No longer can we talk about.
What's cool to do with AI?
What's creative to do with AI?
Let's talk about what can we dothat generates ROI for the
organization?
What is a tactical solution thatcan create value?
So on the other side of thathype cycle is tactics.
It's implementation.
It's practical solutions.

(33:44):
It's a bit of the boring stuff,but the reality is, that the
shiny the, whole shiny object.
Phase of AI is coming to alittle bit of an end when I look
at enterprises right now.
But the fact of the matter stillremains that AI can achieve
results.
It has been achieving resultsfor organizations for decades,

(34:05):
like Netflix, Amazon, JP Morgan.
I can literally go down the S& P500 and show you how they've all
been using different facets ofAI and not just generative for
years to achieve results.
But I think reality is stillthere.
Super critical right now, let itget scoped, make your CFO happy,
get it out the door, and attachit to some real business

(34:27):
metrics.
The second critical element isreally data.
A lot of enterprises have spentthe last decade investing in
data system.
And Andreas, maybe you've heardthis too.
You've heard the phrase data isthe new oil.
and yet when it comes to AI, I'mstill shocked at how many
companies will tell me ourdata's not ready yet.

(34:50):
And I'm not sure we trust ourdata.
And that comes, I don't know.
Have you heard that as well?

Andreas Welsch (34:56):
Occasionally.
Yes.
Yeah.
And it's crazy, Right.

Maya Mikhailov (35:01):
It's so crazy.
It's what have you been doingfor the last 10 years?
What are these tens of millionsof dollars that you've spent on
data architecture, on datapiping, on data transformation?
What did you invest to?
So I guess I'm a little bitconfused why there's a fear that
if data isn't perfect, it's notAI usable.
I guess my bigger question ishow are you running your

(35:22):
business right now if you don'ttrust your data?
But if anything, Investing in AIwill show enterprises how and
where they need to make betterdata investments, and it'll also
show them insights that havebeen locked away in like the
dark reaches of their enterprisewarehouse data warehouse systems
that need to be brought tolight.
So I think that getting overthat data perfection in AI is

(35:46):
really critical in moving beyondthe hype, because you'll always
be chasing data perfection.
And the last thing is a littlebit of an obvious one.
It takes courage right now.
Data gathering, data prep, weused to joke that was like the
slowest part of any AI project.
Now I'm finding increasinglythat it's just fear that's

(36:06):
dragging down some of thesetimelines.
Constantly being stuck in thislike model tweaking and
validation.
We have to get it more perfectand more perfect.
And yes, 94 percent accuracy ismuch better than 92 percent
accuracy.
Currently, the team that'sworking on this problem might be
working at 80 percent accuracyand taking six months to get
into production versus six daysthat it may take if you attach

(36:30):
an AI program to it.
So I think the.
Doomerism, if you will, a littlebit, the doom and gloom of the
AI conversation.
It's needed to throw somecaution into the hype cycle, but
then again it's swung thatpendulum so far that there is a
lot of fear.
So I would say that when itcomes to launching AI systems,

(36:50):
You need to have courage to getlive and get it done.
And I work with a ton offinancial institutions and
financial services companies.
And trust me, like riskavoidance is basically in their
charter, but there has to be abalance between being too afraid
to do anything.
And then also that the otherside of that, which is we let's

(37:10):
throw everything into productionright away.

Andreas Welsch (37:13):
I really love it.
How are you summarizing thatright then?
I think, again, those threepillars make perfect sense,
without clear data, you willstruggle, but you shouldn't
overthink it or just stretch itout too far, so you're behind on
your AI projects, but it doestake courage, like you said,

(37:34):
right?
Last time we talked, youmentioned a lot of the advice
around managing your executivestakeholders and how to manage
them.
Also talk to them that yes, AIis here, but it's not the,
solution for each and everyproblem.
So if you're buying the book andreading the book, there's a lot
of that in it as well, what youshared.

(37:55):
And I know you're a bigproponent of understanding first
what the problem is that you'retrying to solve and then
figuring out what technologyshould we actually use.
I thought, hey, Gen AI hadsolved all of that for us and
everything was just super easy.
But I have a feeling we stillneed to educate leaders and
stakeholders about all thosethings.
And just wondering, how do younavigate those conversations

(38:17):
with leaders that are a littlemore familiar with Gen AI?
Or maybe even think that theygot it all figured out and know
everything?

Maya Mikhailov (38:24):
Very, delicately.

Andreas Welsch (38:26):
Especially if they're your director.
Yeah, exactly.

Maya Mikhailov (38:30):
Very delicate.
It was quite a lot of tact.
First of all.
I will say this.
I think there is still a massiveeducation gap with leadership
and with ai because I don'tthink that a lot of leaders
really understand that AI isn'tone thing.

(38:50):
It's not just generative.
Right now those two words arebeing used so synonymously.
That AI is just being used as ashortcut for Generative AI.
And they're not thinking aboutit as an entire toolkit, that
it's not one model.
It's not just generative AI torule them all.
You're talking about tools andtalking about using the right

(39:10):
tool for the right problem.
And, that's fundamentally wherewe start.
Which is that basic kind of ahamoment of there's not just one
type of path to achieve theresults you're trying to
achieve.
It may be generative, it may benot.
Generative definitely has somevery useful applications.
Look, at all the convenienttools that Apple just introduced

(39:33):
yesterday.
Take a photo, ask a question,get information, get
appointments and more.
These are very practicalapplications of generative.
And ones that work for you.
with the problems that they'retrying to solve, but it's really
not the only game in town.
And I think when we talk toleaders, we give them examples
of sophisticated organizationsthat are looking at generative,

(39:55):
but they're looking atgenerative as part of the
solution.
They're looking at the rightmodel for the right problem.
And sometimes they're looking ata process of chaining together
multiple models to solve acertain problem or certain
process.
For them, it's not the modeltype that matters.
It's about, is this the best wayto solve this problem?
Is it even about AI or is this aprocess problem that's just

(40:18):
broken?
So when we talk to leaders, wekeep stressing that more
education is needed both by theleadership teams, but also by
their teams themselves.
Right now, it can't be top down.
You have to let your peopleexperiment with AI tooling.
You have to let your peoplebring problems to the table that

(40:39):
they're seeing that they believethat AI can solve for.
And the leadership need not beprescriptive, meaning they need
not say, Oh my gosh we have anew ChatGPT model.
Let's figure out How do you useChatGPT in our organization?
That's not the solution.
The solution is frontline teams,SMEs, experts in your company

(40:59):
who know the problems that needto be solved.
They know where the data residesand they might have a different
tactic to solve them.
So it shouldn't be top down.
Here's a tactic to solve it.
It should, let's educateourselves in the toolkit.
Let's expose our folks to thetoolkit and let them get their
hands on it.
So they can find the rightsolution for their particular
problem.
And, finally.

(41:20):
Leaders need to be educatedabout how their teams are
feeling about AI.
And this is really important.
Look, this is a new technologyand in an odd way, what I'm
seeing in enterprise is that thetech part of it is almost easier
to implement than the changemanagement part of it.
Because so much of the originalconversation about AI was about

(41:43):
human replacement.
How many workers can we replace?
And headline after headline ofcompanies saying, We've replaced
X amount of workers.
We shut down this.
We don't hire anymore for that.
Now, When your team as a leader,you start talking to about AI to
your team, and that's whatthey've been hearing in the
public discourse, they'reimmediately thinking to

(42:05):
themselves, my job's at risk.
I don't want to use this toolbecause what they're really
trying to get me to do is totrain my replacement.
So I think it's up to leaders.
to educate their teams, and tobring them to the table, to
bring stakeholders to the table,and to really treat this as a
human augmentation tool, and nota human replacement tool.

(42:26):
And that's the way they're goingto get their teams to adopt this
faster.

Andreas Welsch (42:30):
I love that focus on humans and making sure
that you, Have that connection,that you have that transparency
as a leader with your employees,that you are being sincere about
that.
And also what you mentionedearlier that there are different
technologies and they all servea different purpose.
So there has to be some level ofan understanding or at least
willingness to understand whattool do I use for what job.

Maya Mikhailov (42:53):
Oh, absolutely.
And I think a lot this year, I'mseeing that at the enterprise
leadership level where they'vestopped becoming prescriptive.
It stopped being, how can wesprinkle some gen AI or how can
we sprinkle ChatGPT on this?
And now it's okay, let's look atour problems.
Let's start talking to our teamsabout what really needs to be

(43:13):
solved.
And let's open our minds alittle bit that it's not just
one model that is the solution.
There are different processesfor this.

Andreas Welsch (43:22):
I love that.
Really good call to action as,as well for those of you joining
us live or listening.
And again just to reemphasizethat the part about when should
you use machine learning, whenshould you use generative AI has
inspired an entire chapter here.
Make sure to get your copytoday.

Maya Mikhailov (43:40):
I got mine.

Andreas Welsch (43:42):
Awesome.
Now, question to you as well.
What's your question for me?
I can feel it's getting a littlehotter.
My seat.
I'm on the hot seat.

Maya Mikhailov (43:53):
Alright, so I have a question for you and I
have a prerequisite.
You can only give me a one wordanswer.

Andreas Welsch (44:01):
Okay

Maya Mikhailov (44:02):
So you can't couch this in anything.
Ready?

Andreas Welsch (44:05):
Ready.

Maya Mikhailov (44:06):
Name me the most overhyped and underhyped thing
in AI.
One word answer.

Andreas Welsch (44:17):
Agents.

Maya Mikhailov (44:19):
Is that overhyped or under hyped or
both?

Andreas Welsch (44:21):
Both.
It, depends what side of thebuying equation you're on.
I think on the selling side andon the vendor side, it is
starting to, get a littleoverhyped.
And I think on the businessside, on the buying side, it is
under hyped, at least theunderstanding yet of what will
be possible and, how to preparefor it.

(44:42):
So your hypothesis.
Hot asian summer is turning intoan asian espresso fall.
I don't yet have the hot song offall dialed in yet.
It's maybe your pumpkin spicelatte that you've been waiting
for.
Back in season, right?

(45:04):
Something like that.
I love that question, by theway.

Maya Mikhailov (45:08):
Starbucks will be so happy.
Oh yeah,

Andreas Welsch (45:11):
I need to see if I need to turn this into a
promotional

Maya Mikhailov (45:15):
They should be sponsoring this entire livecast.

Andreas Welsch (45:19):
They should.
Awesome.
Hi, thank you so much forsharing all your expertise.
Like I said, lots of goodinformation in here as well,
inspired by our conversationlast year around how do you work
with your senior stakeholdersand manage those relationships.
Perfect.

Maya Mikhailov (45:38):
Thanks so much for having me.

Andreas Welsch (45:40):
Thanks.
So let's see, our next guest isPaul Kertina.
Hey, Paul, thank you so much forjoining.

Paul Kurchina (45:47):
Thank you.
It's a pleasure to be here andI'm armed.
I've got one physical bloggerand I actually have the digital
book here as well.

Andreas Welsch (45:55):
Boy, you're really well prepared.
Thank you so much.
Now for those of you who don'tknow Paul, I know you are a
prominent figure in the SAPecosystem.
You're an evangelist.
You run the enterprisearchitecture community.
You've been in and around theSAP ecosystem for.

(46:17):
More than 25 years.

Paul Kurchina (46:19):
Let's say over three decades now.
I'll call myself out.

Andreas Welsch (46:23):
Oh, okay And you've you know, you've seen so
many trends come and go whetherit's from large enterprise
vendors Or if it's in the techindustry as a whole So I'm super
excited to have you on and hearfrom you what you're seeing when
you do talk to enterprisearchitects who are You know, now
being faced with that realityof, Hey, yes, vendors are

(46:43):
actually not just talking aboutAI, they're integrating that
into the Application.
whether it's in the short run orlong run this, will impact me.
It's something I need to learnmore about.
I need to be aware of.
And I know you've, been a greatsupporter of helping these
communities understand moreabout AI, and we've worked quite

(47:04):
a bit on that when I was at SAP.
But I'm wondering, what are youseeing why should IT teams and
SAP COEs even care about AI?
What's keeping them up at night,and how can we help them sleep
peacefully?

Paul Kurchina (47:18):
Yeah, and just before I respond, I just want to
tag in a few things that may myjust mentioned it in a previous
thing.
It was interesting.
We always think that we need tohave our data perfect, right?
Our data right.
And we was at an event last weekand the speaker was, I had to
just stop for a second and said,can you repeat that again?
She reinforced the fact abouthaving AI work on the data you

(47:43):
had now and the low hangingfruit and quick ROI about just
dealing with that data.
My mindset had been.
That you had to get it perfect,right?
No, you can get value today tohelp you gauge what you should
do.

(48:04):
I'm glad she reinforced that,that fact as well.
And AI has been around for awhile.
I was an SAP customer, a utilitycustomer.
Back around 2004, we were takingsensor data into the cloud doing
something called similaritybased modeling to predict

(48:24):
failures, weaks, and events onassets.
Back in the day.
A lot of that is not, this isnot that new, and I really liked
your point as well, and there'sa good thing about I, I easily
distracted by being in themiddle of your speakers, forced
me to listen to all the othergreat talks so far.

(48:46):
And that aspect about all thedifferent AI tools.
Tom Fishburne did a great postyesterday, or cartoon, about
showing a massive AI hammer,right?
Almost the solution foreverything.
And I could not agree more aboutsome of the points.
As we're learning about AI,understand it's just one tool

(49:07):
inside our toolkit.
To be applied in the right way.
And sometimes I think in it, andyou and I have been around that
we get too carried away.
It's the hot or the new thing,and that will solve everything,
right?
And it doesn't.
So let me just circle back towhat I'm hearing from the many
customers I deal with.

(49:29):
Quite frankly, everyone is Ilike to think of it as we almost
have, AI is like a dense fogthat keeps on getting denser.
in terms of what's out there.
I've got a buddy of mine, JulianMoore, in Australia as well that
and by the way, I'm coming toyou from Calgary, Canada.
Julian awesome, helps AI, helpsassociations in that market deal

(49:53):
with AI.
And Julian plays with 20different AI tools a day.
A day, in terms of tools.
And relating back to this world,we're seeing the onslaught of
all this AI stuff, so to speak.
And I think some of us arefrozen in the sense of knowing

(50:15):
what to, what should I reallyjump in on, right?
What do I want to invest on?
The old, I forget what the termfor it is, the story about when
they put 13 jams a jar in agrocery store.
Sales go down, they reduce tofour people buy.
And it's almost a bit analogousto that, what I'm hearing from

(50:36):
many different customers, inknowing where to place their
bets, the SAP audience, as hasbeen very keen on, activating
things that SAP creates by theirsolutions.
In many cases, waiting forthings that until they're there,
turning them on in essence,right?

(50:57):
And when I'm hearing from SAPcustomers, it's interesting in
preparation for this last night,I asked a few hundred customers
in one of my recent events, howcan we help you in terms of on
with AI?
And what you have to do as anenterprise architect.
And it's things like,understand, I asked for the top

(51:18):
10.
So of course I used AI to takethe 300 plus things and give me
the top 10.
So customers want to know are,what are the capabilities that
are built in?
No surprise.
What's built in.
They want to know what's on theroadmap, right?
What is coming at what point intime?
So they know, do I need to lookat other solutions?

(51:40):
Do I need to perhaps build myown, but get a gauge of what's
coming?
Get a sense about AI and userenablement, the whole experience
and engagement side.
Get a sense of theinfrastructure that's required.
Customers aren't necessarilygoing to the public cloud
scenarios in the SAP world.

(52:02):
Some of them have to, aredealing with their own hosted
scenarios.
How can they ensure they're ableto take advantage of the AI?
Data analytics, the whole, thatwhole area as well.
I'm keen on that.
Many customers are it'sinteresting, are making the move
from their legacy ECC systems.

(52:23):
over to the latest S/4HANA.
And what's on their mind in thatmove is not just what
capabilities I can takeadvantage of, but the other
context is, how can I use AI toaccelerate my transformations?
What can I do, because quite,and I've been involved in R/3

(52:46):
back in the old SAP days, S/4since it came out in 2015.
Seriously, we can't approachthose implementations the way
we've done them over theseyears.
What is the new state of the artautomation and how can AI help
to accelerate those?
Of course governance and, thecontrols in this are all
something that are top of mind.

(53:09):
And, the last one I'll highlightis probably the key one is, The
human interaction and engagementwith AI.
And it's interesting.
There's a reluctance and I,think you find it as well,
Andreas, is especially peoplearound this package software
world for a while to really diptheir toe in the water and try

(53:32):
things.
And what I always encouragepeople, and I'm seeing a lot of
architects do it in other rolesas well, just start playing with
basic things to help you in yourday to day.
And it almost takes a, bit ofpushing on people.
Some of them just have to, oncethey start trying a few things,

(53:54):
and often I think you have toextrapolate to, someone's day to
day personal life.
And get to try things.
That's always been that case.
Then they think I can do thishere or there.
Let me think about how the usecases as well in my world.
So seeing a number of differentthings, but I think what's,
missing and your book is a greatresource to help on it, they're

(54:17):
looking for not a really asilver bullet, but give me
playbook, in terms of what I cando to educate myself as well as
help educate.

Andreas Welsch (54:30):
Thank you for summarizing that, that deeply
resonates with me, right?
Coming from an enterprisesoftware background and having
seen similar things that youmentioned, I can only imagine
Certainly, large companies,organizations, are not just wall
to wall one vendor, right?
There's a number of differentsystems.
Maybe you use your Salesforcefor your sales in CRM and your

(54:52):
workday for HR, and maybe youuse SAP.
I was looking at NetSuite's siteand for CX, Intercom the other
day for a client.
Everybody's adding AI to it, soI can only imagine that there's
this effort to understand whatis available, what can I use,
how much does it cost, does itsolve a problem?
I don't know.
It just grows exponentially.
But even if you do that theunderlying premise of how do I

(55:16):
make this usable in myorganization, how do I bring
people along, how do I scalethis, is still one of the most
important questions in additionto technology that leaders need
to solve.

Paul Kurchina (55:27):
And I think that point of whether you call it how
do we augment ourselves, right?
If you call it augmentedintelligence or whatever it is,
as you're waiting for, I'll termit, some of the dust to settle.
On, some of these things.
You have to, there's almost likea couple of paths here.
And I'm interested in your viewson this.

(55:49):
One is in terms of, as you'redelivering value in certain
things inside your organization,you're monitoring where things
are going.
But I think more now, more sothan ever, I'll say what's
different now is a number ofdecades, all kinds of different
technologies, right?
We've, I've seen over the courseof time and involved in, and

(56:10):
maybe the analytics is more backof a different scale to the,
mobile devices like we have.
I remember being involved withSAP.
Actually, my first book, nowthat I think about it, was
called Mobilizing SAP back in2004, dealing with RIM and
others.
So that was almost more of theBlackBerry was more in the
business world, right?
And then with iPhone, it bled inthe personal world and roll

(56:32):
forward to now and the recentiPhone launched yesterday and
things like that is, it'sprobably, we used to say as
well, that consumer worldpushing the business side, but a
certain degree that's impactedor impacting now our
personalized also so much.
So it's almost like this Venndiagram between the two worlds.

Andreas Welsch (56:53):
Great analogy, right?
Seeing that overlap and again, Ithink it, it takes leadership,
whether it's, in your businessor in, in your personal life
too.
Look for these opportunities,like you said, and experiment
with them, play with them, get abetter understanding of what can
I actually use this for?
Is this useful?
How can I make it even moreuseful in what I'm doing?

(57:16):
So I've learned the

Paul Kurchina (57:17):
drill now and I'll flip to you.
I've learned over the few calls.
I've got a question for you.
Thank you.
It took me three speakers to getthere, but I'm there.
So for my audience tends to becustomers and Partners, and even
SAP in the SAP ecosystem, right?
That, what is your advice tothem on how to best use,

(57:40):
leverage the book in theirorganization?

Andreas Welsch (57:45):
I think the key part is that there is an
abundance of technology aroundyou.
Vendors like SAP have beenembedding it left and right.
In many, cases, you canexperiment with these things.
There are trials available whereyou should ask for one.
The part that a software vendor,just by selling you software, is

(58:08):
not going to solve for you, ishow do you What are the right
things that we should pursue?
How do I put them on a timeline?
How do I assess value for thesecapabilities?
What should I really beimplementing?
It's one thing to try them outand see what can we do with it.
But then when you have multipleof those, how do you move them

(58:29):
from An idea state to really animplementation and an operation
state.
And the part that's key and it'sdear to my heart is how do you
scale that across theorganization?
So when I was at SAP, we built acommunity of multipliers within
the S/4HANA organization.
And we, we, created this networkof champions, if you will,

(58:52):
people that understood or.
Got more training on what is AI?
What can we use it for?
They understood also from theirpeers, what have they tried out
before?
What's working well?
What should we adapt?
What are we not going to repeat?
And really form this communityof practice, of multipliers, of
champions within yourorganization.
Even if you're a smallorganization, right?

(59:13):
And you know everybody.
in your startup and haveconversations with people, see
who are the more technical onesthat do want to get engaged,
that are excited about this,because they are much, much
closer to the business and tothe business process and the
business problem that they seeevery day, that they can bring
ideas and information back toyou, what's working, what's not
working, how can we apply AI.

(59:34):
So really, that's, a key part,in my opinion whether you're
focusing on SAP or any othervendor as an IT organization, as
an AI organization.
Bring your business stakeholdersalong to help prioritize what
should we be looking at.

Paul Kurchina (59:49):
All right.
Take care, my friend.

Andreas Welsch (59:50):
Thank you.
Alright.
While we are in Canada, let'smove over to another fellow
Canadian, Harpreet Sahota.
Thank you so much for joining.

Harpreet Sahota (01:00:01):
Andreas, man.
Thank you so much for having me.
Congratulations on the book.
It's sitting right here on mydesktop as we speak.
Just got it in the mail, man.
So thank you so much for forsending me a copy.

Andreas Welsch (01:00:11):
Wonderful.
I also need to say thank you toyou because we recorded an
episode, I think it was in thefall of last year, about RAG,
Retrieval Augmented Generation,and fine tuning.
And maybe before we get intothat, I know you're super busy.
You're into developer relations.
You have your own show and livestreams.

(01:00:32):
You go very deep on thetechnical topics.
So excited to have you on as weswitch more from the business
strategy part two.
Some of the more technicalthings and what can you actually
do with this?
It's not going to get tootechnical for those of you in
the audience, but reallyappreciate your perspective
there, right?
We recorded an episode lastyear.

(01:00:54):
I created a short clip, 40seconds, something, put it on
YouTube.
It's the most watched clip on mychannel over the last 12 months,
more than 14,000 or 15,000people have watched It.
You were talking about retrievalaugmented generation, fine
tuning, right?
Last year, everybody was tryingto figure out, should I go this
way, that way?
Is fine tuning something weshould be doing or not?

(01:01:17):
If people are in and around AI,they know they're sitting on a
goldmine.
which is their business data,but we also know it's messy,
it's dirty, Maya talked aboutit, Paul just talked about it.
How does data play into theseconcepts like RAG, like fine
tuning?
What should people really bethinking about now at this stage

(01:01:38):
of the, game and the adoption?

Harpreet Sahota (01:01:40):
Yeah, so I guess from perspective of, just
retrieval augmented generationdata quality obviously is super,
super important especially forretrieval accuracy because the
retrieval aspect of RAG RAG isRetrieval Augmented Generation,
that retrieval aspect, it needshigh quality relevant data in

(01:02:01):
its knowledge base because ifyou have poor quality data, then
you're going to get irrelevantor inaccurate information that's
retrieved from your database.
You might miss relevantinformation because of a poor
representation of that data orpoor indexing.
Or you might just get biasedresults.
And then in the generationaspect RAG, the G is for

(01:02:24):
Generation.
we use the retrieved informationto produce an output.
And so think about how dataqualities can be affected there,
right?
If we have accurate and relevantdata that's retrieved in that
retrieval process will end upwith more contextual and
coherent and factualgenerations.

(01:02:45):
So make sure that you have justdiverse and high quality data
will really help generate a goodresponse and contextually
appropriate response.
When your data sets are.
Nice and clean and wellstructured.
Of course, this is going toimprove the model's ability to
your retrieval pipeline, theembedding model, to retrieve the

(01:03:06):
right documents.
When you think about cross modalretrieval talk about multimodal
RAG systems, this becomes evenmore important because you need
to have high quality, aligneddata that's across different
modalities that you have, text,image, audio, video, what have
you.
So yeah, just high quality, gooddata means that you are having

(01:03:30):
good representations that arethen into your vector database
and will improve retrievalquality.

Andreas Welsch (01:03:37):
Perfect, so data is still important.
Sorry folks, still need to getyour data in order, but don't
overdo it, right?

Harpreet Sahota (01:03:46):
Yeah dude, like if I just quickly like I want to
be, I was once one of the peoplewho would ignore data and just
be like, Oh, it's all about themodels and all about building a
dope model and look at thismodel.
And I think over the years andmore so just ever since I've
been working at Vox151 wheredata quality is like the thing,

(01:04:06):
like I now truly understand andappreciate the importance of
good quality data and likereally understanding the impact
that it has, not only on anymodel that you're training or
fine tuning but just youroverall system.
So I used to be one of thosepeople that would sleep on data
quality.
And it was just this abstractconcept to me that I didn't

(01:04:27):
truly appreciate because I wasjust so into I need to get the
right learning rate, the righthyper parameters and the right
configurations or the rightwhich layers should I freeze or
unfreeze.
But yes, like the actual.
Quality of your data is going tohave the largest outsized impact
on whatever you do downstream.

Andreas Welsch (01:04:44):
Awesome.
Now, I heard you mentionsomething around modal, multi
modality, right?
Image, text, video, audio andcertainly data quality being
important there.
But for things like RAG.
How far out are these things?
Are we talking quarters, months?

(01:05:05):
Is it already here?
So a bit into the looking glassor, maybe, even just just what
is available today when it comesto model modality?

Harpreet Sahota (01:05:15):
I definitely, yeah, definitely still think it,
it has a little bit of ways togo.
There has been some good modelsthat have been put out that show
some promising results.
For example, Meta releasedsomething called Chameleon.
It was their newer version ofthe Chameleon model which has
which is great because you canuse the embeddings from that
model to do cross modalretrieval.

(01:05:36):
So it is an improvement overClip or ImageBind.
And then Apple released their itwas called 4M21, which was 21
different modalities that theycan handle with one model.
So models like this are becomingmore and more powerful and it's
really pushing what is capablewith multimodal RAG.

(01:05:58):
I think still a long way to goto have the results that we're
having with text only RAG.
But, the research is coming andyeah, we're working on it.
We're going fast, I'd say.

Andreas Welsch (01:06:12):
That's, exciting, right?
I think so many things have beenmoving so fast over the last two
years, ever since ChatGPTdropped and it feels like
there's no week without any bigannouncements, something
impressive, something exciting.
Sometimes it's a little easy.
Matt was saying at the beginningtoo, to get carried away and
focus too much on what's newthis week without thinking about

(01:06:34):
the practical application.
With Retrieval AugmentedGeneration, I think people have
realized that, hey, yes, yourlarge language model has a
fairly good understanding of howto construct language, how to
understand language.
Certainly with knowledge cutoffs and all these kinds of
things, there are certainlimitations that the model might

(01:06:55):
not have the latest information.
Because it wasn't available atthe time it was trained.
Hey, Retrieval AugmentedGeneration.
We can take what a user hasentered in the chat, for
example, turn that into vectors,compare that against vectors in
your database.
Pull out something that mostclosely closely resembles that.
Data is, important and to getgood results.

(01:07:20):
Now, with Brian, we're talkingabout AI agents and that being
the, hot summer of AI agents.
How, does data quality fit intothese new and emerging concepts
as, as well?

Harpreet Sahota (01:07:33):
Yeah Like, when you think about agents within
the context of large languagemodels themselves it's not like
a it's just a pattern at the endof the day, right?
There's a reasoning and actionpattern.
And these are just emergingproperties that, that come from
bigger and bigger models, right?
And the reasoning and actionpattern there's, a call to a
language model.
There's, the, language model isreasoning over the query.

(01:07:55):
It's thinking about actions totake, or it's thinking about
responses to give to the user,and then it go ahead and it
takes that action.
I don't think any of us youknow, unless we're working at
those frontier model labs aregoing to be building models for
agents or fine tuning models foragents, we might be doing
something more closer to agenticRAG, where we're combining this

(01:08:19):
reasoning and action patternwith the RAG pattern, right?
So this just like mashing up twopatterns.
So what is RAG?
RAG is just Dense VectorRetrieval within Context
Learning.
And so if we step back again,like just think of the whole
process of RAG we can, split itup into a few different phases.

(01:08:39):
There's like the preparatoryphase where we have some
document processing, where we'rechunking splitting documents
into manageable pieces.
Then there's the embedding, thenpushing everything to a vector
database.
And then there's the queryprocessing, and this is where
we'll see the reasoning andaction loop occur.
A user query will come in, thelanguage model will reason over

(01:09:02):
that user query, and perhaps youmight have a module that is
specific to some type of query.
Let's say you have a homeworkassistant for a high school
where you know, you're teachinggeneral education classes.
The query comes in, you mighthave the language model reason
over, okay, is this a historyscience or physics question?

(01:09:23):
And then it will route the queryto the appropriate kind of
vector database or index to pullthe information.
So you see this happen morewhere the agent is part of the
just meshed up in, into thepattern.

Andreas Welsch (01:09:39):
Awesome.
That's the part that really getsme excited when you can connect
your own data to these, systems,to agentic RAG, pull out
information that is relevant,that is specific, right?
Get around some of thoselimitations that LLMs, natively
and inherently have, but alsouse the strength of both
approaches.
You've also inspired an entirechapter on RAG and getting

(01:10:03):
better results from your LLMswith your data.
So I'm excited to share thatwith the world as well.
Harpreet, now I'm curious,what's your question that you've
prepped?

Harpreet Sahota (01:10:17):
Yeah there's, folks like me that are in the
industry that are we're morenerds, if you would, like we're
more hands on.
We're reading the latestresearch where we're, very hands
on, right?
We, understand the limitationsand we understand what AI is
capable of, and quite senior inour roles, but not necessarily

(01:10:38):
leadership, right?
So we're senior, maybe we're,but we're not leadership, right?
At the end of the day, And Ifeel like there can be some
tension between folks like meand folks that are in the
business that are like, we needAI.
We need to get this thing,everywhere.
And I'm wondering what.
Tips or advice do you have forfolks that are like me who are

(01:11:00):
super technical senior in theircareers?
Yeah, we we understand, ofcourse you could throw around
vague terms, business value andall this, all these buzzwords
that business people use,whatever, we get it.
But we just can't communicate toleadership properly like the
limitations okay.

(01:11:21):
Like, how can I tell, how can Itell a leader that idea is not
going to work out and here'sexactly why.

Andreas Welsch (01:11:27):
I think that's a great question.
Look, I think, in my opinion, ittakes both sides to be open for
this kind of dialogue.
As a data AI practitioner andexpert, do you have a really
solid understanding of thetechnology.
You are looking at the data.
You see what is possible andwhat is not possible.

(01:11:50):
I think through examples andMaya was saying, doing that
diplomatically and tactfullyshowing what is possible or also
asking questions, right?
You can probably get.
Get closer together on what itis that we're trying to solve,
right?
Why is this important?
How are we helping the business?

(01:12:11):
Why do we believe, on the otherhand, that AI is the solution
for this?
Or, again, maybe it's notgenerative AI, maybe it's not
RAG, but it's machine learning,and it's your, whatever, K means
algorithms and type models.
I think just, level setting isprobably one around enablement,
shared understanding, seeingwhat is the actual problem that

(01:12:33):
we're trying to solve and why isthis important.
Looking at technology as a meansto do it.
Maybe sometimes AI, Gen AI,machine learning isn't even the
right solution for it.
Being able to articulate thatand being heard is the other
thing.
But those would be some of therecommendations that come to
mind for me.
Definitely seek that dialogue.

Harpreet Sahota (01:12:55):
And what about the flip side of that, where you
have people who are maybe newand they just want to they see
a, they're technical, perhapsmaybe just early in their
career.
And now they're like they, havea hammer now AI problem.
What advice would you give tothem to filter out from bad
ideas?

Andreas Welsch (01:13:14):
No.
I would say practice how to usethat hammer in a, in an isolated
en environment.
Look for a use case.
Maybe that's more in, in yourpersonal space.
Experiment with the technologyand then bring it to your
business.
Hey I've learned something new.
If we try out X, Y, Z, or Ithink here's how we can apply

(01:13:34):
this.
I think if you're in a technicalrole, it's absolutely critical
that you're at the top of yourgame, that you understand what
is there, what what is new.
But you also need to understandhow can we use it and for what
purpose.
So also be curious, beinginquisitive of how this can help
really move the business forwardor make a measurable impact,

(01:13:55):
right?
Maybe that's the better way toexpress business value a
measurable impact.
Awesome.

Maya Mikhailov (01:14:02):
Thank you.

Andreas Welsch (01:14:02):
Wonderful.
Harpreet, it was a pleasurehaving you on.
Thank you for walking us throughsome of the new advancements in
RAG, agentic RAG, and why dataquality is important and is
still important.

Harpreet Sahota (01:14:14):
Thank you very much.

Andreas Welsch (01:14:15):
Awesome.
All right.
Thank you.
So let's move on over to ourlast guest.
Steve, Wilson.
Hey Steve, thank you so much forjoining.

Steve Wilson (01:14:25):
Thanks for having me, Andreas.
Excited to be here.

Andreas Welsch (01:14:28):
Hey we connected last summer when I saw a paper
come out in a report come out,by a foundation called OWASP, it
was called the top 10 for largelanguage models.
You are one of the lead authorsor co authors on that report.
And we started getting into aconversation about how is
security evolving around largelanguage models.

(01:14:52):
And had an episode on that.
It's now been nearly a.
Also, a year since we've talkedabout that, and I'm sure many
things have evolved when itcomes to large language model
security.
We actually have a dedicatedepisode in that next no, in two
weeks, that I'm really excitedfor because I've been struggling
getting people with deepknowledge in front of the camera

(01:15:13):
on this topic of LLM security.
But before I keep on ramblingmore, maybe if you want to
introduce yourself briefly andwhat OWASP is all about, and we
get into what you're seeing inthe cybersecurity space.

Steve Wilson (01:15:25):
Yeah, really quickly.
I'm Steve Wilson, and I spendall my time thinking about the
combination of AI andcybersecurity.
My main job is I'm the ChiefProduct Officer at Exabeam,
where we use AI to sift throughpetabytes of data and find
threats for organizations.

(01:15:45):
Last year, I got involved withthe Open Worldwide Application
Security Project, which is OWASPfor short, which is a 20 year
old foundation with 200, 000members dedicated to building
secure software.
And I put together the firstproject there to research the
vulnerabilities specific tolarge language models.

(01:16:06):
And then that led to led to myown book journey, which should
be coming out later this month,which is writing a book for
O'Reilly on the topic.

Andreas Welsch (01:16:14):
I'm super excited for you.
Can't wait to, read that.
You've you've already shared asample chapter and I am
definitely ready to, read theentire book.
Yeah.
Hey, what are you seeing in, inthe security space when LLMs?
I think we've, seen on At leastthose top 10 examples of how

(01:16:36):
generative AI can be used foradversarial intents.
How much of that is, is real?
What is being used?
How are people using it and howcan you prepare or defend
against it?

Steve Wilson (01:16:49):
Yeah, I think what we see is it gets more real by
the day.
And a year ago when we firststarted this, people were just
ramping up their first LLMprojects.
And so people were looking at,hypothetical vulnerabilities
that were pretty clearly there,but they often resulted in the,

(01:17:11):
the end state being.
Embarrassing the organizationwho was creating the LLM, right?
It might put you in the,headlines because somebody made
your LLM call somebody a badname or do something that was in
poor taste.
But increasingly we are seeingthese now put into mission

(01:17:31):
critical operations.
And it's, the topic of whatyou've been talking about all
day is that this istransitioning to reality.
And we now see these, veryhypothetical vulnerabilities
with names like indirect promptinjection that a year ago,
somebody put out a paper.
describing how they had puttogether a system to evaluate

(01:17:55):
resumes and somebody could embeda secret code in the resume.
And you cover this in your book.
So that it would change therankings on the person who
submitted their resume.
And it's okay, that seems likethat's going to be real.
What we've seen in the last fewmonths is Microsoft Copilot and
Slack falling victim to thatexact vulnerability.

(01:18:18):
And these are real, and theseare now attached to things which
are holding our more, mostimportant data, and people are
learning hard lessons now.

Andreas Welsch (01:18:29):
Yeah, I remember seeing that paper and trying
that out and having done someacademic research in that space
of how do you use AI and machinelearning at the time, about
five, six years ago to rankresumes just, seeing that is
possible.
It could be possible to game thesystem that way was, just mind

(01:18:49):
blowing.
And, especially as, we want moreand more people to use
applications that have largelanguage models and engine AI
embedded, so you're almostpushing them to, towards that
edge.
And you need to educate them, Ithink as well, and what could be
possible, but certainly, beingcareful about that.

(01:19:10):
Now if it's, not just about.
Prompting injection, right?
I think a couple of months ago Ihad Carly Taylor on the show.
She's a senior machine learningmanager at Activision and she
was talking about large languagemodels being used for spear
phishing, targeted phishingcampaigns.
What are you seeing there?
How are large language modelsbeing used by the bad guys, if

(01:19:34):
you will?

Steve Wilson (01:19:35):
Yeah, so you know we've seen bits in the news
where organizations have beentargeted with things like deep
fake zoom calls.
There was a bank that somebodytransferred a bunch of money.
Very real vulnerability.

(01:19:56):
But what I'll tell you is it'sgotten to the point where I have
now spoken to people who haveconducted interviews.
That were deep fakes and peopleworking at companies that hold
mission critical data, who oneof the interviewers you know,
because we're often these daysinterviewing people in other
countries and things like that'sbecome routine, but somebody

(01:20:19):
became suspicious.
They had somebody who was alittle more trained in this from
the security team conduct afollow up interview.
And they said this is not, aperson.
So this is real and this ishappening.
And a year ago, what I wastelling people is your previous
phishing training, we were allused to Nigerian print schemes

(01:20:41):
where things were barely writtenin English and you had to look
at them Just have the tiniestbit of skepticism to spot these
fakes, right?
The URLs are misspelled and thisand that.
Now a standard phishing email,those are so good.
They're flawless every time.
There's no excuse.

(01:21:02):
But we've moved beyond that,where we're not just getting
emails and we're gonna wind upwith deep fakes.
We've seen examples of peoplegetting phone calls with their
friends and relatives, havingtheir voices cloned.
This is not only real, it'sbecoming routine.

Andreas Welsch (01:21:20):
I think that's where it gets really scary.
And I feel just having moreawareness and education about
this, even if you are in theenterprise, especially if you're
in the enterprise you need toeducate your teams that here are
some of the risks, how othersmight be using this.
on you or to trick you intodoing something and believing
something.

(01:21:40):
So corporate and cybersecurityare asked to step up their game
and evolve as well.
Now, what do you think leadersneed to know when it comes to
LLM vulnerabilities, right?
There's a huge push on, on onehand by vendors to put as much.
Gen AI in there, so marketingcan say, Hey we've, got the best

(01:22:02):
and the most AI capabilities.
Sales is getting excited becausethey can sell more and they can
upsell you.
Maybe you're even building yourown things where it's
differentiating in, in yourbusiness.
What do you really need to knowas a leader?
How can you protect yourself andyour company against some of
those vulnerabilities?

Steve Wilson (01:22:20):
Yeah, I think, what we see right now is with
the nature of pre trainedtransformers, it's so easy to
build something that lookscompelling.
That, that first demo stepbecomes almost, trivial.
People can put together amazinglooking demos in an afternoon.
And as a business leader it'sreally easy to get excited about

(01:22:44):
that.
What as you push those intoproduction and the cyber
security space in particular, wesee so much use of large
language models.
Every vendor is adding one.
If you don't do this right, youwind up with something that is
not only insecure in atraditional sense, it's actually
borders on embarrassing.

(01:23:04):
And what we see is peoplepushing wrappers on top of
ChatGPT out into production.
They're not provided withsufficient sort of training and
data to really answer focusedquestions.
So they, hallucinate.
They're, they're not focused.

(01:23:27):
They're actually expensive tooperate.
And and they're vulnerable tovirtually every one of those top
10 vulnerabilities.
And what I've seen at, Exabeamis.
Us having to look at thisthrough a very focused lens and
put on a product management hatbefore you put on the

(01:23:47):
engineering hat and say, what isthe very specific business
problem I'm trying to solve?
And what is the most constrainedway that I can solve that?
And if I can do that, I can feedthe model exactly the data that
it needs.
I can eliminate, or at leastdramatically reduce things like
hallucinations.
I can make the output.

(01:24:08):
very focused.
I can limit the scope.
One of the things I talk topeople about is is limiting the
scope of what your bot does.
It's the first thing on mychecklist from my book, which is
don't try to build guardrails bybuilding a deny list.
Build your guardrails by usingan allow list.

(01:24:31):
These are the only things thisbot is allowed to do and
everything else is out of scopeis going to be far more
effective than running around,trying to play whack a mole,
convincing it what to not do.

Andreas Welsch (01:24:42):
I really like that approach.
Makes me think back of my daysin IT and firewalls and zero
trust security and all.
Steve, one last thing.
What's your question for me?
We have a lot more time to talkabout security on the show in
two weeks.
And there's a lot of goodinformation that you've already
shared.

Steve Wilson (01:25:01):
Yeah I'd say the big question for you is in your
interactions with businessleaders and people in these new
roles, like a Chief AI Officer,what, do you see as the short
best key to get them to thinkabout the security team is an
ally rather than an adversary indeploying these technologies.

Andreas Welsch (01:25:24):
Poof! That's a tough one.
That's probably the toughest ofall questions today.
Look, I think the key part isunderstanding on one hand, what
is the opportunity, but alsowhat is the risk?
But it's nice to have a fancychatbot that doesn't always send
you into an infinite loop, orsends you back to the start.

(01:25:48):
We've all been used to it.
It's nice.
To have a better customerexperience.
If you're building this thingon, your own, what are the
risks?
What are the risks that we'regetting into?
Not just of this things, sayingsomething silly.
If you're a parcel service,we've had that experience in
Britain a couple of months ago,or if you're a a pickup car

(01:26:13):
dealer thank you.
And you buy your pickup for adollar.
Those are embarrassing, but italso gets malicious.
So what are the risks ofsomebody gaining unauthorized
access to our systems orexposing information that we
have trained this agent, thischatbot with and how can we
mitigate that, right?

(01:26:34):
I think that's the key part tolook at as well, not just the
opportunity, but also understandwhat is the risk that we're
getting into and how can ourteams that we already have help
us protect against that.

Steve Wilson (01:26:45):
Awesome.
Hey, Andreas, thanks for havingme on.
I enjoyed it and I'm lookingforward to next time.

Andreas Welsch (01:26:50):
Perfect.
Thank you so much, Steve.
Really appreciate it.
Talk to you in two weeks.
Awesome.
Folks, we're getting close tothe end of the show.
Thank you so much for joiningand for celebrating the launch
of the AI Leadership Handbookwith me.
If you haven't already done so,go to Amazon, look for the AI
Leadership Handbook and buy yourebook or paperback copy today.

(01:27:11):
You can learn about all the ninekey steps.
I'm super excited and thankfulfor all the great guests that we
have had on and to dive intosome of the chapters and some of
the topics here as we'reexploring more and more
generative AI.
Not just exploring, but alsobringing it into your business.
Advertise With Us

Popular Podcasts

Dateline NBC

Dateline NBC

Current and classic episodes, featuring compelling true-crime mysteries, powerful documentaries and in-depth investigations. Follow now to get the latest episodes of Dateline NBC completely free, or subscribe to Dateline Premium for ad-free listening and exclusive bonus content: DatelinePremium.com

24/7 News: The Latest

24/7 News: The Latest

The latest news in 4 minutes updated every hour, every day.

Therapy Gecko

Therapy Gecko

An unlicensed lizard psychologist travels the universe talking to strangers about absolutely nothing. TO CALL THE GECKO: follow me on https://www.twitch.tv/lyleforever to get a notification for when I am taking calls. I am usually live Mondays, Wednesdays, and Fridays but lately a lot of other times too. I am a gecko.

Music, radio and podcasts, all free. Listen online or download the iHeart App.

Connect

© 2025 iHeartMedia, Inc.