All Episodes

July 7, 2025 52 mins
"You can try to develop self-awareness and take a beginner's mind in all things. This includes being open to feedback and truly listening, even when it might be hard to receive. I think that's been something I've really tried to practice. The other area is recognizing that just like a company or country, as humans we have many stakeholders. You may wear many hats in different ways. So as we think of the totality of your life over time, what's your portfolio of passions? How do you choose—as individuals, as society, as organizations, as humans and families with our loved ones and friends—to not just spend your time and resources, but really invest your time, resources, and spirit into areas, people, and contexts that bring you meaning and where you can build a legacy? So it's not so much advice, but more like a north star." - Sabastian V. Niles

Fresh out of the studio, Sabastian Niles, President and Chief Legal Officer at Salesforce Global, joins us to explore how trust and responsibility shape the future of enterprise AI. He shares his journey from being a high-tech corporate lawyer and trusted advisor to leading AI governance at a company whose number one value is trust, reflecting on the evolution from automation to agentic AI that can reason, plan, and execute tasks alongside humans. Sabastian explains how Agentforce 3.0 enables agent-to-agent interactions and human-AI collaboration through command centers and robust guardrails. He highlights how organizations are leveraging trusted AI for personalized customer experiences, while Salesforce's Office of Ethical and Humane Use operationalizes trust through transparency, explainability, and auditability. Addressing the black box problem in AI, he emphasizes that guardrails provide confidence to move faster rather than creating barriers. Closing the conversation, Sabastian shares his vision on what great looks like for trusted agentic AI at scale.

Episode Highlights
[00:00] Quote of the Day by Sabastian Niles: "Portfolio of passions - invest your spirit into areas that bring meaning"
[01:02] Introduction: Sabastian Niles, President and Chief Legal Officer of Salesforce Global
[02:29] Sabastian's Career Journey
[04:50] From trusted advisor to Salesforce, whose number one value is trust
[08:09] Salesforce's 5 core values: Trust, Customer Success, Innovation, Equality, Sustainability
[10:25] Defining Agentic AI: humans with AI agents driving stakeholder success together
[13:13] Trust paradigm shift: trusted approaches become an accelerant, not obstacle
[17:33] Agent interactions: not just human-to-agent, but agent-to-agent-to-agent handoffs
[23:35] Enterprise AI requires transparency, explainability, and auditability
[28:00] Trust philosophy: "begins long before prompt, continues after output"
[34:06] Office of Ethical and Humane Use operationalizes trust values
[40:00] Future vision: AI helps us spend time on uniquely human work
[45:17] Governance philosophy: Guardrails provide confidence to move faster
[48:24] What does great look like for Salesforce on Trust & Responsibility in the Era of AI?
[50:16] Closing

Profile: Sabastian V. Niles, President & Chief Legal Officer, LinkedIn: https://www.linkedin.com/in/sabastian-v-niles-b0175b2/

Podcast Information: Bernard Leong hosts and produces the show. The intro and end music is "Energetic Sports Drive." G. Thomas Craig mixed and edited the episode in both video and audio formats. Here are the links to watch or listen to our podcast.

Analyse Asia Main Site: https://analyse.asia

Analyse Asia Spotify: https://open.spotify.com/show/1kkRwzRZa4JCICr2vm0vGl

Analyse Asia Apple Podcasts: https://podcasts.apple.com/us/podcast/analyse-asia-with-bernard-leong/id914868245

Analyse Asia YouTube: https://www.youtube.com/@AnalyseAsia

Analyse Asia LinkedIn: https://www.linkedin.com/company/analyse-asia/


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
Do you manage your own IT for distributed teams in Asia, and you know how painful it is? Esevel helps your in-house team by taking cumbersome tasks off their hands and giving them the tools to manage IT effectively. Get help across eight countries in Asia Pacific, from on- and offboarding, procuring devices, to real-time IT support and device management with our

(00:25):
state-of-the-art platform. Gain full control of all your IT infrastructure in one place. Our team of IT support pros are keen to help you grow, so check out esevel.com and get a demo today. Use our referral code Asia for three months free. Terms and conditions apply.

(00:47):
You can try to develop self-awareness and try to take a beginner's mind really in all things, which also includes: are you open to feedback? Are you able to really listen even when it might be hard to receive that feedback? But I think that has been something I've really, at least, tried. And I think the other area is, as we think of one's life, just as a

(01:09):
company or country can have so many different stakeholders. As humans, you have many stakeholders. You may wear many hats in different ways. And so as we think of the totality of your life over time, what's your portfolio of passions? How do you, how do we as society, as organizations, but as humans and families, our loved ones and friends, how are

(01:31):
you choosing to not just spend your time and resources, but really invest your time, your resources, your spirit, hopefully into areas and people and contexts that bring you meaning and where you can build a legacy? So not so much advice, but it's kind of like a North Star.

(01:53):
Welcome to Analyse Asia, the premier podcast dedicated to dissecting the pulse of business, technology, and media in Asia. I'm Bernard Leong. How do trust and responsibility get incorporated into enterprise AI? With me today is Sabastian Niles, President and Chief Legal Officer for Salesforce Global, to tell me about how AI

(02:14):
governance can actually be put in place with digital transformation, together with AI today, with Salesforce. So first of all, welcome to the show, Sabastian, and congratulations on the launch of Agentforce 3. I just saw the announcement. And also, thank you for hosting me in the Singapore Salesforce office.
Delighted to be here and happy to join you today.

(02:35):
So prior to this, I did a lot of research on you, so we can make sure I have a dossier. I like to get a good origin story from my guests. How did you start your career?
Wow, an origin story? No, look, I had always had a passion for connections between topics and areas and potential areas of impact.

(02:58):
And one in particular is around the intersection of law, business, you know, and technology. Actually, when I had graduated high school, my mother had found an old article, like a high school paper. And it's like, oh, what are these people going to do? Right? What are their plans, right? You know, plans can change, but the reason she had

(03:19):
pulled this up, and it was after I had joined Salesforce in this capacity as President, Chief Legal Officer, is that there is a line in this article, in answer to the question: what do you want to do when you grow up? It's in high school. And there's a very specific phrase; it says that I want to be, one day, a high-tech corporate lawyer. And she was just chuckling about

(03:41):
that. But I do think it's just that intersection of how to think about law, how to think about policy, how to think about impact, but then particularly in the context of the role of business as well. And what is the role of the private sector, whether it's in kind of helping to solve some of our most complex problems, and ultimately solve some of the problems of other

(04:02):
businesses, could be problems of people, could be problems of planet, problems of communities. And how can businesses work together, but then with the public sector, right, and with civil society? Because I do very much believe, and it has been a key part of our ethos at Salesforce and of our founders, that business can also be one of the greatest platforms for change and positive impact.

(04:23):
You have a long and illustrious career as a partner at Wachtell, Lipton, Rosen & Katz since graduating from Harvard Law School, and you stayed there for a pretty long time. I really admire that kind of tenacity and loyalty to the firm. How did you end up in your current role with Salesforce?
Yeah. No, I had one career firm, and it was almost 20 years or so.

(04:44):
And when I transitioned to this role at Salesforce, a close friend of mine, who is the CEO of another company, sent me a text message. He goes, oh, Sabastian, you finally have a real job. I said, wait, a job? But it is, as you know, a transition. But I think what's been kind of fascinating for

(05:05):
me around it is: as a partner at Wachtell, Lipton, Rosen & Katz, leading various practice groups and whatnot, I had very much developed, and had the privilege of, serving as a trusted advisor at scale to companies and governments, in different contexts around the world, either on top-of-mind critical opportunities they were seeking to achieve or challenges

(05:25):
that they were seeking to, you know, navigate. And so when joining Salesforce, I was going from being a trusted advisor in that one context to joining a global company whose number one value is trust, right? There is a very meaningful and sort of genuine through line there. Because what I'd also share with you, what I continue to find at

(05:49):
Salesforce is, in so many ways, we are serving as trusted advisors to so many different companies, industries, governments, nonprofits, and whatnot, as each of these organizations and institutions seeks to, you know,

(06:09):
whether it's transforming themselves or reimagining themselves, you know, with AI, or trying to achieve their mission or purpose, they're really partnering with Salesforce. And so that's just been an interesting element. And then, how to bring that to bear, particularly for customers and partners and a broader ecosystem, to have the kind of change that they're trying to achieve with our solutions and whatnot.

(06:30):
So much of it does revolve around, OK, how do you bring trust to it? How do you think about responsibility? How do you think of impact? And so that's been just a little more surprising than I was expecting, that there actually is that type of through line, if that makes sense.
We'll get to that intersection of trust and responsibility with enterprise AI today. Sure, sure. But I want to just ask you: from your career journey, what are the valuable lessons you can

(06:52):
share with my audience?
Well, I try to be cautious in giving advice. Really, one size doesn't fit all, and, you know, everyone has their own context and situation. I suppose not so much advice, but, you know, a perspective: as much as you can, try to develop self-awareness and try to take a beginner's mind really in all

(07:18):
things, which also includes are you open to feedback?
Are you able to really listen even when it might be hard right
to receive that feedback? But I think that has been
something I've really at least tried, right?
Sometimes they trying implies failure.
But you know that I think that'san extra area as as and I think
the other areas, you know, as wethink of 1's one's life, you

(07:41):
know, just as a company, you know, or a country can have so
many different stakeholders as humans.
I'm sure this you find this right.
You have many stakeholders. You may wear many hats in
different ways. And so as we think of your like
the totality of your life over time, what's your portfolio of

(08:02):
passions? How do you, how do we, as society, as organizations, but as humans and families, our loved ones and friends, how are you choosing to not just spend your time and resources, but really invest your time, your resources, your spirit, hopefully into areas and people

(08:24):
and contexts that bring you meaning and where you can build a legacy? So not so much advice, but it's like a North Star, I think.
I think it's really important. I think that's very good advice, and it actually leads into the topic that we're going to talk about, which is the main subject of Salesforce, agentic AI, trust, and responsibility. Yes, yes.
I always like to baseline my audience. Can you briefly introduce Salesforce and its global

(08:47):
mission, and how is it currently evolving in an AI-first world?
Sure, sure. So, you know, at Salesforce we have, for about 26 years now, really focused on how we deliver on five core values. And I start with values because it really informs how we think about supporting our customers and organizations in reimagining themselves with AI.

(09:09):
You know, of our five core values, the number one value is trust. Then we have customer success, or customer stakeholder success, then innovation, and then equality and sustainability. And when we think of that kind of portfolio, those values of trust, of customer success, innovation, equality, and sustainability, each of those has guided how we approach

(09:31):
whether it's helping our customers connect with their customers, you know, in a whole new way. We were pioneering, initially around predictive AI, and now around generative and agentic AI, but CRM, the number one AI CRM, is what people would, you know, say about us. But also, more broadly, when we serve our customers, we're very led by what our customers' priorities are. What's the

(09:55):
feedback? We were talking about feedback. What's the feedback that our customers are giving to us about what they need, what their challenges are? Some of that may be, you know, they're dealing with siloed data, they're siloed themselves, fragmented data and whatnot. So it's: we need solutions, a Data Cloud solution. But I highlight that because all of our different sets of solutions across our deeply unified platform are designed to enable our customers to really

(10:20):
connect more effectively with their own customers, or, right, achieve their own objectives in various sets of ways. And you mentioned Agentforce earlier, which we can talk about as well, if you'd like.
I think this is so interesting. So one of the other things that we should probably also come to, and it's supposed to help my audience to understand, is the concept of agentic AI, right? How do you currently

(10:41):
define agentic AI, and how is it different from, say, traditional automation or AI tools? I think you'll have a really unique definition for that.
Yeah. So, as noted, Salesforce comes at this in that we really only specialize in that kind of enterprise-grade, enterprise-ready type of solutions, you know, in partnership. And when we think of artificial

(11:02):
intelligence, particularly the current era of agentic AI, which, you know, people have said Salesforce really defined and is leading through Agentforce, the idea is that we will, no doubt, live in a future of humans with AI, right,

(11:23):
advancing, right, whatever the priorities may be: humans with agents driving stakeholder success together. And so these AI agents, under the supervision, control, and direction of the organization, of the teams, of the humans, are able to act, right, to reason, to plan,

(11:44):
to execute tasks, to achieve some of the objectives that are set. And, as you can imagine, when we think of artificial intelligence evolving in that way, you really have this robust partnership between humans and AI agents, and it raises the stakes, right, around the need for responsible AI. When we listen to our customers, they

(12:06):
all, like you're telling us, really want safe, trusted, responsible AI services that will deliver at scale, with speed and trust. And we think, again, it'll apply really across all sorts of very practical use cases. Singapore Airlines, actually, is kind of a really incredible customer

(12:26):
that's been using a number of our different solutions to really reimagine how they take their customer experience to the next level of personalization, you know, of impact, as well as all their other sets of goals. And the reason I mention Singapore Airlines as well, which goes to your AI governance point, is that Singapore Airlines is partnering with us, among others, actually around thought leadership:

(12:49):
how can we think of AI trust, AI responsibility, AI impact to create, particularly in that industry, but it can be in any industry, just new motions, new operating models, so that they are really defining, you know, the future in a world of humans with AI and agents.
So coming from that point of view, sure, what could be your

(13:09):
mental models to think about the concept of trust and responsibility in the era of agentic AI? And I think you brought a very, very good example with Singapore Airlines: the intersection of governance and ethics across the organization on AI itself.
Well, you know, when it comes to mental models or operating models, I'd probably come at it as follows.

(13:30):
I think sometimes people in, you know, organizations or different contexts can, I think, wrongly view trust and innovation as somehow at odds. Yeah, they're not mutually exclusive. Right. And I think that's really just, you know, fundamental. And I think that's one of the key mental or operating models: rather than trust and innovation

(13:54):
sort of being at odds, particularly for the types of impact, change, and transformation that's possible now, trusted approaches, bringing trust into the product design, development, deployment, and adoption right at the outset and not as an afterthought, actually become an accelerant, right?

(14:16):
It's more like trust is almost propulsion, right, to give people not just faith, but actually enable you to go faster around these sets of topics. And I think that's just a critical piece around this, across whatever the use cases may be.
But we can talk about the use cases, right? How...

(14:37):
Yes.
Do you have any thoughts about, say, the use cases that show the shift from, say, automation to agency, where the trust and responsibility is clearly visible today?
No, I think it's an important point. So I think ultimately what we find with our customers, and look, even ourselves, because at Salesforce we actually say, hey, there are a couple of cliches, you know, let's say eating our own, you
(14:59):
know, cooking. But we said we need to be customer zero. Our customers tell us they want to be Agentforce companies. They want to be agent-first, they want to be AI-first. Then we say, OK, well, we need to show that model and be in the arena, you know, with them. And so we've been deploying agents, deploying Agentforce,

(15:21):
you know, whether it's help.salesforce.com, actually external-facing, as well as a whole slew of internal agents, in terms of the human teams being able to rely on these agentic sets of technologies. Sometimes, look, it is just providing 24/7 engagement and support.
(15:41):
Sometimes it's actually really unlocking capacity to tackle projects that maybe we otherwise wouldn't have the capacity to tackle. You've heard the concept, as we've been talking about: when you think of humans with AI driving success, what it means is you're going to have this mix of human labor with digital labor. Yeah, right.

(16:02):
And we're dealing with a lot of shortfalls in the healthcare system. We see that so much; there are real shortfalls, right, of folks. And you also have experts and people who are working around the clock. How could you unlock their capacity? So it could be automating certain tasks or taking on routine, administrative types of work. And that's a critical part,

(16:25):
right, of all this. But then there's also: what does augmentation look like? How do you improve decision-making using AI around these sets of issues? Or, on the consumer side, OpenTable, right, OpenTable is using Agentforce, can you imagine, around the world, to help people who want to have kind of fan connection, community, you

(16:46):
know, community, you know, good food reservations and whatnot.
And they're doing that right at scale to actually execute on,
you know, these sometimes, you know, different sets, actually
complex logistics, just like, you know, we may have
governments working on these things, you know, very large
enterprise financial institutions, you know,
manufacturing companies kind of a whole, you know, sort of slew
of folks. And and I think that ends up one

(17:06):
piece you'd asked me earlier, and my apologies, I didn't really get to it. So if we think of humans with AI agents working together, our vision for what trusted agentic AI means: it's going to have to have robust governance. It's going to need to

(17:26):
have the right ethical guardrails, and it's going to have to have, yes, you know, compliance at scale. But a critical piece for any institution or enterprise, not just of size or scale, because this also applies to small and medium-sized businesses, but any institution or company of seriousness: they actually need to have the

(17:46):
right governance around it. So what I mean by that is: they need visibility, right? They need control, they need observability. What does it actually mean? What are the agents doing? And also, when are the agents effectively doing handoffs to humans? And when do humans actually get to say, OK, hey, you know what, let's go get the AI agent doing it? And by the way, here's

(18:07):
the thing. We have these weekly AI summits where we're in deep with our incredibly brilliant chief AI scientists and our technology and product teams; myself and the rest of the C-suite are in the thicket with them every week going through all these sets of items. As we look to the future, but actually the present that we're building: it's not just human-to-agent handoff

(18:28):
interactions. It's actually agent to agent, right? And you can have agent to agent to agent, right, and then sort of go back to the human. And so when you think of these guardrails, there are going to be different sets of guardrails. Some are preventative, some are runtime guardrails, some are like self-improving guardrails. I get pretty excited about this because I think of the impact. But please, it's...

(18:48):
Interesting, because the guardrails are determined by humans; we have to set certain rules to make sure that things don't go out of hand. And you rightfully also point out things like observability within the AI agents: what are they really doing? And I also noticed the Agentforce 3 launch has a lot that seems to be centered on, say, the command center and dashboard to

(19:08):
help see the different agents and what they're doing.
Yes, it is. Because think: if you're an agent boss yourself, right, you're running, you know, commanding, managing a whole fleet of agents. Yes, right. Yeah. You have your own agentic span of control, right. You need a command center. That's right.
And the Model Context Protocol is actually here now, but people are not thinking about the trust

(19:30):
level, where you need to have some trust before the agent just goes through and does something else, right? So if I were to ask that...
I agree with you.
We often talk about the promise of artificial intelligence, but what are the real-world consequences of untrusted AI, from your point of view? Particularly, I think you have rightfully pointed out, in healthcare, in financial services: these are domains where high trust is really

(19:54):
required, because we have so much confidential data about the customers.
No, you've touched on it. That's exactly right. We're going to be seeing artificial intelligence deployed in contexts where the stakes are very high. And again, there are upsides and promise, right, that you can think of. There are a lot of areas where, unfortunately, medical mistakes have

(20:17):
occurred, including, you know, things that affect people with fewer resources. So part of the promise is actually: how could you use artificial intelligence to raise the floor of that quality, so that really every individual human can have essentially access to this virtual team of experts? Human physicians, absolutely, but also AI supporting those diagnoses,

(20:40):
and supporting the follow-up, right? And also sometimes reducing burnout, because sometimes mistakes happen, right, in healthcare or other things, from humans that are tired and burned out. So maybe AI can help around that. But to your point: what are the stakes? You highlighted some of them, and we

(21:01):
see this still with the generative AI issues that come up: look, accuracy, hallucination, right? And by the way, I think trusted agentic AI does rely on being grounded on trusted data, but also on having visibility and understanding of what data sources it's coming from. How do we understand it? How do you also have consistency,

(21:22):
maybe across, you know, different data domains, in different sets of areas. But, you know, I'll step back a little bit. When we think of agents, and you tell me: we think of agents, we think of the different apps, the different versions of AI, we think of the metadata, we think of adding agentic layers to these things.

(21:44):
I highlight all this because what we also think about quite a bit in these areas is that there are five attributes, as such, of what an AI agent is supposed to do. One is going to be, OK, what's the role? What's the task that they're doing, working with humans? Another one is the data or the knowledge: what's the knowledge they're supposed to have access to?

(22:05):
Or maybe, what's the knowledge or data they're not supposed to have access to? Yep. And then you have the whole set of guardrails, right? What should they do or not do? Which, by the way, I'll get to in a moment. The channels: where are they going to work? Like even Slack. Slack has been such an incredible... you know, I was just speaking to a customer the other day, and they're like, you know, basically Slack is our operating
(22:26):
system for how we run things, which is really, you know, exciting. Your company runs on Slack? Fantastic. If you have feedback, you know, I want to make sure you're successful with the agents. That's terrific. Great to hear that. But anyway, when we think again of agents: what are the roles? What are the capabilities, like the tasks; what should they be able to do? What are the guardrails? What are the channels, right, in which they work?

(22:47):
What's the data, the knowledge they have? Here's the interesting and complex element. So take you and me. We've got jobs. Well, as humans, what are the elements of our jobs? We have guardrails, codes of conduct, requirements, things we're supposed to do and not do. We have responsibilities, we have roles, we have: what's the knowledge, what's the data we're

(23:08):
supposed to access? What are the channels
confidentiality? Where do we work?
Do we do our work in Slack? Do we do the work on the phone
with voice? Do we do it in person?
But that's I think an interesting element.
When you think of these 5 dimensions of how you would
design or craft an AI agent, they're actually very analogous
to kind of what are the dimensions of just how we

(23:28):
approach our, you know, our work, and not to overly anthropomorphize, and yet I think there are certain important insights right around that. When we think of what that future human-AI workforce is, human labor with digital labor coming together, that actually, I think, does give us different sets of insights, including how you think of trust, responsibility, and impact, so that you can, you

(23:51):
know, achieve the results that you want to achieve.
How would you think about the situation where, in some cases, the agentic AI is a bit like a black box, where you really do not know what it is thinking and what it is doing? How do you put guardrails into that kind of situation?
I think this is critical. I think that for enterprise-

(24:12):
grade, enterprise-ready agentic AI systems, there are going to be three elements that are fundamental. And this is how, you know, we build and develop it with our customers. We're going to need transparency. We're going to need explainability. We're going to need auditability, right? Meaning: what occurred and why, and being able to go back and see

(24:34):
the different, you know, steps. And that's different than maybe, you know, consumer-facing contexts. But at least, again, within the enterprise, there's this heightened expectation of understanding what exactly occurred. And we actually do believe it is achievable if you build the technology in the correct way, so that you

(24:55):
actually deal with this, you know, black box.
You know, the audit trail is important, because one of the things about agents is they could make 1,000 decisions, and 999 will work, and then there's just one decision that maybe just misses something, right? And then you need to have a way to actually go and audit back the trail: why did you do this wrongly?
Right, that's exactly right. Human beings...

(25:17):
Just like human beings. No, this is exactly right. And look, when we think of the roles of the future, the jobs of the future, the needs of the future: what does quality control look like in an agentic era, right? And how do you think about errors? How do you think about, you know, checking? How do we have, you know, those sets of issues,

(25:39):
because it's not just, OK, were mistakes made, or did it fully align with what we wished to occur. It's also, OK, how do we drive better effectiveness? I think the other piece is: what are the new roles, the new tasks, the new jobs that certainly get created, you know, around this. Some of them do involve, by the way, who are the people who are going to really

(26:02):
kind of understand and unpack what exactly the AI did, right, and how do we think of that expertise in the collaboration between human and AI? And then, obviously, you can imagine legal and policy and different sets of issues. But also, you can imagine, for any member of a C-suite, for a CFO, right, for a chief operating officer, right, for all these sort of

(26:23):
different folks, how will they lead
effectively and reimagine some of their own functions
and priorities and teams when you have the possibility of AI
to help you accelerate, but also make better
decisions and ultimately, presumably, achieve
your goals faster? I think the other element there,

(26:45):
though, is effectively: how do we get to
better decision making between humans and AI?
Well, maybe the AI can help you, or help your organization, come
up with the right goals in a way that hadn't been done before.
So there doesn't just have to be the element of
what do we automate, how do we get rid of the

(27:06):
tedious tasks? Absolutely.
But it can also act as a sort of
strategic partner or thought partner
on the priorities, and help you to organize those priorities
into something that is actionable.
I think I'd agree with that. What about you though?
Tell me, how are you tackling AI?

(27:26):
What do you see as the opportunities or the risks?
I work in an enterprise AI
company. Since I'm a startup, I have the
perfect architecture to do a lot of things using AI.
So what I discover now is that a lot of past work is repeated,
which I found can ultimately be done using AI.
Well, yeah. So you can think of, say, a

(27:47):
customer call where I was still required, as a guardrail, to ask the
customer, can I record this? But the biggest difference now
is that I can capture most of the requirements through that
conversation. And the beauty is that I just
have to press one button and ask the large language
model: hey, you know what?
Can you retrace the conversation?
Can you bring out all the features that the customer

(28:08):
wants? And then can you organize it
into a statement of work for me? And that is something that used
to take about three to five days for a consultant, and that is
now being reduced to, say, minutes.
So I think that is one of the biggest changes in how to work
productively. Yes.
But I have to ask you this question.
What is the one thing you know about trust and responsibility

(28:31):
in the era of agentic AI that very few do?
Oh, that's a great question. Just that it requires vigilance;
it also requires taking trust seriously.
You know, in the generative AI era, I used
to have a phrase: trust begins long before the prompt

(28:53):
and continues long after the output.
And I think it's that element of, what do we think of as
end-to-end, trust-first processes, and taking responsibility and impact
seriously? Yes, it's a good one.
I'm actually going to be teaching a class on that after
we have this conversation. I like that line.

(29:15):
Feel free, feel free. But OK, that's interesting.
Coming back to this conversation now, I
think, given that you talk about working with customers, how is
Salesforce now working to, say, ensure transparency and
accountability in the way that AI agents make decisions, but
also at the same time being able to put the guardrails there?
I mean, we talk about Agentforce,

(29:36):
for example. So how is this being done in
reality? Yeah, sure.
So that does depend on the use case and what is
needed; we try to really design and deliver solutions that are
responsive to the customer's needs.
And as you can imagine, we have a lot of
customers that really want to benefit from our deeply unified
platform and the full force and power

(29:59):
across the sets of solutions.
But sometimes it can involve trust patterns, right?
Really understanding the
different pieces of how it comes together so that again
you achieve the outcomes that you want: effectiveness and
relevancy and accuracy. Sometimes that also involves,
again, our trust layer, because there is that at the base.

(30:21):
One thing is, at Salesforce at least, we're not
just releasing agents into the wild, right?
The agentic technology that
customers expect from us and ask us to build and deliver
is really grounded, right? And, you know, given
decades of trusted protocols, of respect for

(30:41):
enterprises, say access controls, and also
being grounded in customer-specific
workflows, right? And data that they've
been using and relying on with us
for a long, long time. So those are, I think,
some of the elements around that.
But again, companies
need to have their data strategy, their data

(31:02):
governance, making sure again that the agents and all
the different solutions are grounded in that
relevant data, the workflows that are most important
to that customer. But we also try to make things
easy for people. As a company,
Salesforce has always really believed in
business as the greatest platform for change and impact.

(31:24):
We believe in democratizing access to the technology,
empowering people. And so look, we have
out-of-the-box use cases across every single
industry. It could be
governments, how they think of serving their
constituents better; financial services; healthcare;
manufacturing; consumer-facing retail. You

(31:47):
know, I could go down all these different
buckets, but really it's just, OK,
here are the workflows that we know are most impactful and
aligned to what your business objectives are, what your actual
needs are. And then we can
co-build it together. I was with a customer the

(32:08):
other day, the CTO, and she's like, this just works,
right? And similarly, we do a lot of
work when we think of enablement and democratizing access to
technology. One of our
enablement engines is something called Trailhead,
which is really about how we drive a number of these
business councils around AI impact and AI

(32:31):
ethics, but also AI acceleration. And
Trailhead is one of the methods where we think about how
communities and companies can build capacity, AI literacy,
digital literacy around these items.
And so if you ever want to, go on Trailhead; it's available.
You can become what we call an Agentblazer.
For years we've had this thing called Trailblazers,

(32:53):
Trailblazers; we have millions around the world.
We want there to be millions of Agentblazers. But
we care about this because we actually don't believe that AI
and agentics should be wholly implemented and delivered fully
top down. The point is you should actually
have the actual people, the teams, companies at

(33:15):
any level, yourself, but actually any of your
team members, come work, come build on our
platform. We'll give you a button: build
my first agent. Because having your hands in the
soil not just gives you confidence and control
that this technology is real, it's present, it can be developed,
but it's actually very empowering, because you may come

(33:36):
up with use cases that are even more impactful
than the ones that we may have out-of-the-box.
Then we can build something new for
you, particularly as you think of bringing together disparate
data and siloed data and the like,
so that ultimately you can have all the different
enterprise areas of importance

(33:57):
brought together, so you have effective agents or
other AI tools. Interesting, that point.
I think there was a very good study that 70% of the AI work that is
supposed to be implemented is actually all done in the
wrangling and the harmonizing of data, and you put that out really
specifically. Your foundation
for ethical and humane use. Yes.
How does that office work and how do you partner, say, with

(34:19):
your product and engineering teams?
I think this is one of the most unique things about
Salesforce that we haven't seen elsewhere. Yeah.
Well, look, I think rooted in our number
one value of trust is how we operationalize that.
And so, yeah, we have our Office of Ethical and Humane Use, you
know, of technology. Someone said to me the
other day, you guys are the very rare, rare company that is

(34:39):
actually willing to say that a core value of yours is trust.
And similarly, you're one of the rare companies that would
actually say, we're a company, we're going to have an office of
ethical and humane use of technology, and really grapple
with these sets of different topics.
As you noted, the office, the Office of Ethical and Humane Use,

(35:01):
is an office I oversee, and it is also engaged
within our technology and product organization, because we
want to have both of those deep sets of
areas. And so the work
of that group, whether it's on policies or otherwise: I
mentioned earlier there's a concept in
cybersecurity of shift left, right? You're familiar
with it. But similarly, when we think of

(35:22):
trust and ethics and responsibility and observability
and whatnot, it's, OK, how do you build it into the product
design early on? And then also testing,
really testing, and being
willing to assess: is this doing the right things? Are
other issues being created? And how do you
resolve that? I encourage you to read, and
actually anyone who's listening or watching, we've issued our

(35:43):
inaugural, first-ever Trusted AI and Agents Impact
Report, and we really seek to engage with and
think about these sets of issues.
And as a company, we're a publicly
traded company. We have a board of directors, we
have our internal governance systems, and we
actually deeply care and think about the oversight of

(36:06):
artificial intelligence as a company and across
our different sets of stakeholders, so that
we're navigating and calibrating opportunity and risk
effectively. And so we engage really at every
level, whether it's our board of directors on these issues,
whether it's our management teams, whether it's, again, each
of the different functions, both on customer zero

(36:27):
but also making sure we have these really rapid feedback
loops. Just like with customers: we
just had a CIO Advisory Board meeting
today, really in there
with these incredible leaders
who are building or architecting, who are deploying,
and making sure we're getting all this feedback, even
the tough feedback, because it's really important, because we are

(36:49):
really committed to that North Star of customer
success.
So when it comes to customers, how do you
approach, say, building trust with enterprise customers? I
think they also become increasingly nervous about
handing control away to autonomous AI systems.
I think this is one of those things.
And plus, now AI agents are also evolving from rules to

(37:10):
reasoning. So how
do you help them or guide them towards thinking
about handing over that part of the control, but still
allowing them to function in this new world?
And thank you for posing that. Look, in the enterprise-grade
context, and in a sense similar to how cybersecurity evolved,

(37:33):
when we think of AI systems and agentic AI, our concept that
we invented of trusted agentics actually hits this point, which
is: what's the shared responsibility?
Yes, what's the shared trust, the shared discipline around
these areas? Because actually the customers want to have control,

(37:55):
and they should. They want to have confidence in
these systems; they want to also make sure they're
achieving compliance at scale.
And so the reason I mention that is just this area of, you
know, OK, the customer will have certain specifications for
what they want the agents, say, to do, or they may want
their other AI systems, but then they also want to have the
control and flexibility to sometimes fine-tune the

(38:19):
different areas. So we may
have tools and solutions, trust-related items,
for that. And then it's actually the customer's
decision in terms of, OK, where do they really want to
turn the dial on different issues around the use
cases and the context. And so a big piece of this,
again, is making sure the customers are
empowered, whether it's through the command centers, right, for

(38:40):
humans and agents, but that visibility, the
observability, the governance, but then also
how they think about the guardrails; and
their view on guardrails may also evolve, right?
So this is a question that I usually get from CEOs when I
teach classes, proper classes, or when I've been consulting for them.

(39:00):
They will tell me, you know what, my nine-year-old
kid, what should I do with them, with AI?
And my first answer is, yeah, you should let them use it, but
you should be there with them along the way, right?
What they're really worried about is that the kids
outsource their thinking to the AI.
So I'm going to move this question, and I think
this analogy, into our conversation, right?

(39:24):
So in the future, how would you envision, say, humans and AI agents
collaborating effectively without one side being overly
dependent on the other? I mean, humans have that tendency
of being overly reliant on humans.
We'll outsource our thinking away.
I mean, look, I think the reality of that is, you know,

(39:46):
don't humans also sometimes have a tendency to maybe over-rely on
other humans in a context? I mean, we all do
that. And I share that, actually, again,
because I think there are similar principles.
And as you mentioned, as we think of what's
the society that we want to have, hopefully the society we
deserve to have, humanity is actually very much

(40:08):
in control of these items, of defining
what's next. I used to think that the secret
of impact or success was having a really first-class view of the
future. I don't really think that
anymore. I think the secret is more: how
do you have a first-class view of scenarios around
the future? And we've had at
Salesforce a futures team for a very, very long time.

(40:28):
And I bring them in and we debate, we
see different sets of scenarios.
When you think of it as parents or educators, it all starts with
really embracing and understanding, again, that the
future is one of humans with AI, humans and AI agents
together. And so what that means is, OK,

(40:50):
how do you educate? For my own view, I have
three children, blessed to have them.
My view on it is I actually really want them to be engaging
with the technology. I actually say, here,
take some of these AI fluency courses, right?
Engage with the debate. Sometimes I get emails and I'm
like, did they write this? Did they send it?
Did they? But the key thing is using that

(41:12):
to then engage in the human conversation, because ultimately
we need AI to help us spend more of our time on that which makes
us more uniquely human, right? And also when you think of,
say, the sustainable
development goals, for example, all these issues of clean water,

(41:34):
access to resources, healthcare and whatnot.
Look, these are all areas that humanity has yet to solve and
get right. And that's one of the reasons
we've had this series of global AI summits, and
Salesforce, we were engaged in each of them, right?
They started at Bletchley Park, right,
in the UK, around safety.

(41:55):
And then I was in Korea for the next summit, which
talked about innovation, safety and inclusivity; then in Paris,
right, we're talking about AI action.
Then there'll be one in India that we're working on
with the government and the team there. It's just, we really need to
embrace the technology so that, one, we understand it,

(42:17):
but two, we're able to deploy it in the ways that are most
meaningful and useful to us. Because one of the challenges,
you know, I used to do work around the digital divide.
So similarly, going
forward, what is that? Are there going to be
divides? Are you going to have certain companies and certain industries
that embrace this in a trust-first way and then are able to

(42:37):
accelerate and leap past their
competitors? But similarly, people
are worried about that in their communities and their
neighborhoods, their families: what does
competition look like? But I think really it's
going to have to be more of... Right now
at Salesforce, every year we do Futureforce, where we
have the pleasure of bringing in these interns who

(42:57):
come in, so I've got a whole host of interns
at Salesforce. And just, for example, I was
just with some of them, the ones who chose
to join our legal and corporate affairs team.
And I was just chatting with them around AI, how they use it.
And they just start showing me, like, yeah, here's how we're
using AI to work through this problem.
Here's that. And we actually did a little

(43:18):
clip, because I said, wait, this
is really exciting.
They're just so natural with it and comfortable with it.
And I said, hey, you've got this wisdom here.
We need to make sure we're sharing that with
everyone. We've got, you know, 75, 77 thousand
people at the company. We have to really be learning
from each other constantly. Just earlier today I

(43:39):
was doing a global town hall. And we do it
from here. We're doing it here from Singapore
because the region's very, very important to us, you
know, but it's global. But one of the areas that we
were really talking about is what AI transformation means,
but at a very hands-on level.
And a key piece of it was, we're going to handle it in a very

(43:59):
empowering way, because each of
these folks on our teams, they know their own work best.
They know what excellence and effectiveness look like.
So we actually want these teams
to be the ones that start to say, how might I use AI to
augment myself, to automate away the things I

(44:20):
really don't want, to free me up for
additional sets of tasks?
And that's what it means to democratize access to
these technologies, not just in an abstract way, but
with the people with whom
you're working and leading and
managing. And then, again, at every
level, be open to just
learning. And I think that's going to be
one of the most important areas going forward for our families,

(44:43):
for our communities: curiosity.
So what's the one question that you wish more people would ask
you about trust and responsibility with agentic AI?
What's the one question that you wish more people would ask?
Well, maybe a little silly, but I think you'll get it:
more of, how can governance and guardrails and guidance
be a catalyst and be an accelerant?

(45:10):
I'm going to follow up with: for innovation? No, no, but
really, rather than
this sort of cliched view of, oh, it's a blocker, oh, I don't want to deal
with these sets of
issues. I think honestly that's a key area.
How would you solve it? Give them some high-level
guidance, like: this is how you should approach it.

(45:32):
Then if I were to ask that question to you, just give
them maybe three very important principles.
Sure. Well, look, I think, you know, first, in order
for you to get the... wait, can we start over on that?
OK, no, ask the same thing.
OK, so I'm going to ask you that question.
What would be your guidance to people to bring this governance

(45:56):
in so that they embrace it rather than feeling encumbered
by it, which I think is the
one question that you wish more people would ask you.
Look, I think, one, when you're dealing with enterprise-grade
contexts and needing enterprise-ready solutions, guardrails
provide confidence to move faster.

(46:20):
See, I believe that effective governance, common-sense
guidance, ethical, common-sense guardrails provide
permission and they provide clarity, because then folks know,
OK, how do we move forward?

(46:41):
How do we make this safe, responsible, ethical?
And I think that's actually part of why, you know, many
people think of it in personal
relationships in that context: the speed of trust.
But similarly, I think that's a key piece to this:
in an organizational
context, in order to have adoption and scale, which is

(47:05):
what you need with AI, and in order to really be comfortable
with using agents to drive forward your workflows, to
partner humans with AI on these issues,
it actually will only occur with a strong sense of, OK, where
do trust, responsibility and impact come into play?

(47:25):
Because we're not actually just playing around, you know,
which is good. It's good to play.
It's good to have games and whatnot.
But again, in these more serious contexts, people want
effectiveness. They want to know what's
the business value, but they also want to have, again, that
visibility and the observability and the governance around it so
that they can stay in control, have that, you know, command

(47:46):
center, and then correct along the way.
So does that address it? I think that's very good advice.
So my traditional closing question then: what does great
look like for trust and responsibility on agentic AI for
Salesforce in the next couple of years?
Well, what does great look like? Look, I think we're

(48:06):
in a really once-in-a-generation moment here, you
know, at Salesforce. People have said that we were
the first to move on agents, you know, at scale.
What does great look like? It means we're also known to be
the first to move with integrity, with trust
at the forefront. It's going to mean even wider

(48:27):
global adoption. It's going to mean more use
cases across all sets of
industries; it could be regulated industries, it could be other
sets of cases. The other thing, honestly, where I
think about what great looks like is going to be at that
regional level, in each country, in each context, not
one-size-fits-all. One big area that we're very

(48:48):
committed to and focused on is actually the small and
medium-sized businesses and commercial businesses, as well
as serving the largest enterprises.
But we really see this incredible opportunity
for the small and medium-sized businesses here to scale
up. There's a common
statistic that's often cited that, oh, most small
businesses... they're really the engine, right?

(49:09):
There's so much growth. Most small businesses fail, whereas
our vision, with Agentforce, with different sets
of our solutions, is really small businesses
embracing humans with AI, bringing in the
agentic layer to the small and medium-sized businesses.
What if that was no longer true? What if, instead of most small
businesses failing, more small businesses succeeded, and
then they were able to grow their impact, yes, grow their

(49:30):
revenue, right? I mean, imagine
what that would mean for communities, for families.
So I think that's a big piece. I think the other thing, what
does great look like? Salesforce has always been
committed to the ecosystem approach to success.
So an ecosystem of agentic trust, an ecosystem of
what customer and stakeholder success looks like at

(49:52):
scale when it comes to AI, an ecosystem approach to
innovation, an ecosystem approach
to equality, an ecosystem
approach to sustainability. All of these sets
of areas, that's how we think.
We have Salesforce Ventures;
we just have the privilege of partnering
with a lot of these companies.
We're investing in these companies, but we're so

(50:13):
committed to that customer success. What great looks like
is just really incredible levels of continued customer success at
scale, with speed, but always in trust.
So I'm going to look forward to seeing what's going to come out in
Agentforce next. Agentforce 4.0? You're
already thinking about it, yeah. So many thanks for
coming on the show. In closing, a very quick one.

(50:34):
Any recommendations that have inspired you recently?
Oh my gosh, there's food, books, movies and whatnot.
Well, actually, there are so many
terrific books out there around AI, you know,
AI 2027, this or that, but there's actually a book I've
been rereading and it's called The Future Self.

(50:58):
It's by a friend of mine who is a professor.
But what it really talks about is that when we want to
have impact, when we want to have hope, when we want to have
just a view of where we are going
forward, it's not actually the past that
drives us. It's actually a view of the
future, of our own futures as individuals, as humans, of our

(51:22):
organizations. And this is, I think, just
such a powerful book, particularly now, right?
And just, what's that view we're going to have of the
future? Because once you have a first-class view
of at least a set of scenarios around the future, you
then are empowered to decide which is the future scenario I
want to avoid, and which is the future scenario I actually want

(51:43):
to accelerate and have manifest. Yeah.
And so that actually is what I really would
recommend. Great book
recommendation. How does my audience find you?
Oh, online. I think just type my name if you want: Sabastian
Niles, Salesforce. But I would also say,
try our help.salesforce.com, give us feedback and whatnot,
and hopefully they'll find us on

(52:04):
this podcast. And of course, you can
definitely follow this podcast; you can
definitely find us on
Spotify, YouTube, every channel. I really enjoyed this
conversation and I think I took a lot of great lessons before
going back to teach my class. So, Sabastian Niles, thanks for
coming on the show. Wonderful.
Invite me to your class one day perhaps?
Definitely. It's a real pleasure.