Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Everyone's talking about agentic AI, but most enterprises
aren't ready. I'm Michael Krigsman, and today
on CXO Talk #903, Christian
Kleinerman, Snowflake's EVP of Product, shares hard truths on
what it really takes: data readiness, governance, AI
(00:22):
economics, and workforce impact. We're the company delivering the
AI Data Cloud. And what does that mean?
The AI Data Cloud: we have two lodestars, two important
directions. One is we want to help organizations with the entire
lifecycle of data, from the moment data is born or created,
(00:46):
whether it's a sensor, whether it's an application or a device,
all the way through transformation and enrichment
and most important, through analysis and consumption of the
data. How do you get true insights
out of all that data? And at Snowflake, my role: I
run the product team, a number of functions, product
management, product design, technical writing, data science,
(01:09):
a variety of capabilities. I've been with the company eight
years, very happy, very proud of what we do.
Christian, we're talking about AI agents today.
What can enterprise grade AI agents reliably accomplish?
Where AI truly shines and truly has potential in the enterprise
(01:34):
is when you make the context of that enterprise available to
those state-of-the-art models where now I can go to a chat bot
and ask a question and it's not generic knowledge from the
Internet. It is based on knowledge from my
company, maybe my customers, maybe my products.
That is where I would say the most interesting use cases
(01:57):
are coming up today. I will classify them into
two categories. One is read-type use cases, where I just want to
retrieve knowledge. My organization has a lot of
documents, a lot of institutional information that
has been hard to access until now.
AI shines at being able to retrieve the right information.
(02:17):
So this is where you see internal assistants, customer
support or customer experience assistants.
In all of those, we are just trying to retrieve information in a
helpful manner. The other set of use cases,
which I think is newer but is also happening, and
they're wonderful use cases, is when these agents can start to
(02:38):
take action. They can start to call an API,
maybe close a service support ticket, maybe draft and send a
response to a customer, maybe build a demo.
There's a number of possibilities, but the most
interesting thing is that the state of the technology, the AI and the models,
(02:58):
I think is far ahead of the types of applications and
potential that we have in front of us.
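As a rough illustration of those two categories, here is a minimal sketch in Python; it is not Snowflake's implementation, and every name in it (ask_llm, close_ticket) is hypothetical. The first function only retrieves grounded answers; the second can also decide to invoke a tool and act.

    # Illustrative sketch only: two broad categories of enterprise agents.
    # "ask_llm" stands in for whatever model API is used; it is hypothetical.
    def ask_llm(prompt: str) -> str:
        # Stand-in for a real model call; returns a canned reply here.
        return "NONE"

    def read_only_agent(question: str, documents: list[str]) -> str:
        # Category 1: retrieval. Ground the model in company documents
        # and just return an answer.
        context = "\n".join(documents)
        return ask_llm(f"Answer only from this context:\n{context}\n\nQ: {question}")

    def close_ticket(ticket_id: str) -> None:
        print(f"ticket {ticket_id} closed")  # placeholder for a real API call

    def action_agent(request: str) -> str:
        # Category 2: action. The model decides whether a tool should be
        # invoked, then the application executes it.
        decision = ask_llm(f"Should we close a ticket for: {request}? "
                           "Reply with the ticket id or NONE.")
        if decision != "NONE":
            close_ticket(decision)
            return f"Closed ticket {decision}"
        return ask_llm(request)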
Now let's take a moment to learn
about Emeritus, which is making CXO Talk possible. If you're a
business leader navigating change or driving growth,
explore executive education programs with Emeritus, a global
(03:20):
leader. They offer programs developed in collaboration with
top universities, designed for decision makers
like you. There's a program tailored to your goals,
whether that's AI and digital transformation or strategy and
leadership. Find your program at
www.emeritus.org. That's a really interesting
(03:43):
point. So you say that the state of the
technology is in advance of what we in the enterprise can absorb.
Can you elaborate on that point? That's a key issue.
These models, AI models can do so much as a function of the
context that you provide to them.
(04:04):
So the more information I make available, the better. I want to
feed in the information about our customers,
for example. Now they can start to reason about what
happened with a customer. That's the more traditional, descriptive,
backward-looking type of analysis.
But then there are so many more things that I could start
thinking of doing in terms of my engagement with my customer.
(04:25):
What should I be doing? What are others in the industry
doing? Ask interesting questions and reason in ways that truly
have been unthinkable in many ways for us, and today these
models are ready to do so. I do have a high conviction
that the applications and use cases of these models
(04:46):
are still being explored in many areas, are still being pushed
forward, even ahead of additional developments on the models.
And what are the limitations that are driving this gap?
Most of the scenarios where we see projects going wrong or
projects getting shut down, they run into two types of
(05:09):
issues. Issue #1 is correctness of
results or correctness of actions.
The promise of AI is amazing, and to the extent that you can trust
the results from that AI, everything's wonderful.
But we have heard scenarios where a company rolled out an assistant
(05:31):
and internal customer service reps started to give wrong
answers. That is unacceptable.
So I think that that plays an important role.
The other aspect is access to the right amount of data or the
right context. And when I say right, I mean in
terms of security and privacy. We have heard about
(05:53):
organizations that tried to roll out their own solution in the
last, I don't know, 6 to 12 months, and all of a sudden they were
giving information to people that wouldn't have had access to
that information. So that leads into a security
and privacy concern. I would say to the extent that
there are solutions that deliver correctness of results and
(06:17):
enforce existing governance and security policies, that's when
AI shines. And of course that's a lot
of what we have been working on for the last 18 to 24 months at Snowflake.
What is the driver of this gap between the technology
capability and the enterprise ability to make use of it?
(06:39):
I think it comes down to: in many areas, organizations are not
ready to provide that context, provide that data in a
structured and organized fashion to the AI models.
The canonical conversation I have with many of
our customers and organizations is: when you ask a
(07:00):
question, tell me something about customer XYZ, do you know
where to go to answer it and get information about
that customer in your organization?
Because if data is siloed in all sorts of places and nobody
really knows what's where, it's going to be a lot more
challenging for any AI model to be able to understand that.
(07:23):
Whereas what we see, at least with the technologies that we've built
at Snowflake, and other players in the industry are doing
something similar, is you end up curating what we call the semantic view,
which is a translator between business terms and where the
answer to certain business terms or certain business questions
(07:44):
may lie. That is the type of data
governance and data organization and data quality that is a
prerequisite for organizations to be able to tap into those
results from AI.
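As a rough, hypothetical illustration of what curating such a semantic view can look like, the sketch below maps business terms to the tables, filters, and measures where their answers live, so an agent or analyst can ask where to look before querying. It is illustrative only and does not use Snowflake's actual semantic view syntax; every table and column name is invented.

    # Hypothetical sketch of a semantic view: business terms mapped to
    # where the data actually lives. Not Snowflake's real definition syntax.
    SEMANTIC_VIEW = {
        "active customers": {
            "table": "sales.crm_accounts",
            "filter": "status = 'ACTIVE'",
            "measure": "COUNT(DISTINCT account_id)",
        },
        "quarterly revenue": {
            "table": "finance.bookings",
            "measure": "SUM(amount_usd)",
            "grain": "fiscal_quarter",
        },
    }

    def locate(business_term: str) -> dict:
        # An agent (or analyst) asks where a business question is answered.
        try:
            return SEMANTIC_VIEW[business_term]
        except KeyError:
            raise KeyError(f"No curated definition for '{business_term}' yet")

    print(locate("active customers")["table"])  # -> sales.crm_accounts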
Is this gap of organizations being able to adopt the full capability primarily the result
(08:08):
of insufficient data, or is it something else?
For the scenarios that are data-driven, the answer is
largely yes. The most common conversation
we have with organizations these days is: how do I
accelerate my data quality efforts, data rationalization
(08:32):
efforts so that I can have my data be AI-ready? And that's
a combination of things. Maybe it's simply a matter of
eliminating boundaries or silos between data sets.
Sometimes it's as simple as saying I don't have canonical
sources of truth for data. If I have five different
(08:52):
versions of what my customer list is, it's difficult for the
AI to figure out which one is your official answer.
And then the last one is security.
Do I have clear policies on who can access what and how do you
get AI to, to enable that? So I would say that that is
true. The data governance, data
quality is an important prerequisite for leveraging
(09:16):
data-driven solutions in the enterprise.
There are other use cases that we have not talked
about, things like coding assistants that help with
enterprise application development and other things.
But at least for the majority of the use cases that we spend most
of our days on, which is enabling organizations to get value out
(09:38):
of the data. Yes.
The short answer is it's about data quality and data
governance. What are some of the very
practical use cases that you see?
I'll go in reverse order since I just mentioned coding assistants,
and I am completely fascinated by the types of use cases that
(10:01):
code generation can provide for any of us these days.
Some of you may be thinking, well the coding assistant is
only for developers and engineers, but we're seeing all
sorts of functions and use cases being able to leverage coding
assistants. I'll tell you something that is
super exciting to me. Many of our solution engineers
(10:25):
the folks that are the technical people at Snowflake who are
trying to engage with customers, they can build customized demos
for every single conversation, for whatever each one of our customers
or prospective customers is interested in.
This is sort of unheard of. I've been doing
enterprise and databases for 25 years, and for the longest
(10:48):
time you had one canonical demo, oftentimes something
about a bicycle shop or something like that.
And no matter who the audience is, you showed up with your
bicycle shop demo. But it doesn't have to be that
way. It's very much easier for me to
say: dear coding assistant, I am about to meet with a Consumer
(11:09):
Packaged Goods company. These are the priorities for
them based on maybe the last four earnings results or
earnings reports and how do we show the value of the technology
in that context and then the coding assistant can go and
generate an example, a demo or a sample that can be used.
Now let's quickly hear from Emeritus, which is making CXO
(11:31):
Talk possible. If you're a business leader
navigating change or driving growth, explore executive
education programs with Emeritus, a global leader.
They offer programs developed in collaboration with top
universities, designed for decision makers like you.
There's a program tailored to your goals, whether that's AI
(11:54):
and digital transformation or strategy and leadership.
Find your program at www.emeritus.org.
Coding assistants are generating value for functions way beyond
what I would call traditional application developers or
software engineers. I'll give you one other example
(12:15):
on that line of thinking: product managers
in my organization now, they're not just waiting for
engineering teams to go and create products or early
mocks or functional prototypes. They're doing it
themselves, because it's fairly easy for pretty much anyone in an
(12:35):
organization to guide a coding agent into building solutions.
So that's on the coding side.
And the other set of use cases that we're completely
excited about is just unlocking the value of data.
The trend that we have seen in the last maybe 20 years is data
democratization. It is something where we want
(12:57):
more and more people in a company to be able to get access
to what business intelligence did 20 years ago, which was data
is not a problem of IT. Data is an opportunity for
everyone to inform the day-to-day decisions.
I think AI is materially, dramatically expanding the set
(13:18):
of use cases. We've introduced at Snowflake something we
call Snowflake Intelligence, which we think of as the next
level of unlock for companies to be able to do more with their data.
We have a question from Twitter from Arsalan Khan.
His question is provocative, but I'll ask it
(13:41):
anyway. He says: will the AI bubble explode when
organizations realize the best they can do with AI is chat
bots? We're talking about the
underutilization of the capabilities, really.
We already see use cases that go far beyond chatbots.
The notion of chatbots being AI comes from that first
(14:06):
category of use cases that I described, which was the more
read only capability. I just want to retrieve data and
I think there's plenty of opportunity there.
But now that we see agents being able to take action, I think
that's very, very different from a chatbot.
Yes, in many instances the interface is conversational in
(14:28):
nature, but that doesn't mean it is a chat.
If anyone listening has used any of the
coding assistants, Claude Code or Cursor, and we introduced also Cortex
Code at Snowflake, you quickly appreciate that this
is so much more than a chat. It's not: I ask a question and I
(14:49):
get an answer. It's more: I give a command and maybe an
hour later, sometimes 4 hours later, I get solutions, answers
to the problem that I specified, and these
are typically tasks that would take humans days or weeks.
So I would say now that agents can autonomously take action,
(15:14):
follow up on items, and reason, what we see is effectively an
augmentation of human capability that I think is
unprecedented in our times.
I, I was chatting with an organization a few weeks ago and
they have a data team. They have a number of
traditional analysts, but they also augmented it with an AI
(15:38):
agent. Actually, they have a name for
one of the agents, and they think of the agent
as a member of the team. When a request comes in, it
goes to the channel in Slack where every
member of the team gets to answer questions.
The agent is part of that. And believe it or not, it delivers a
(15:58):
lot of productivity and is very able to help the others.
We do the same thing at Snowflake.
We have a number of integrations with the tools that
our employees use on a regular basis.
And it goes materially beyond the chat bot.
So I do think that the potential is way higher than
many of us see today. I think on a regular basis we
(16:21):
see that aha moment happening for individuals, which is:
yeah, it's cute. I can ask questions, I can get
answers, and half of the time they're right or wrong.
We're moving quickly past that.
And once we see AI-based technologies, agents, taking
decisions, making actions, making changes on our behalf, it is
(16:42):
truly exciting. Christian, there has been so
much hype out there. How can business leaders
evaluate vendor claims about autonomy, about reasoning, about
elaborate workflow capabilities? How can business leaders look to
see what is real when it comes to AI agents right now?
(17:06):
If we all pay attention to a lot of the claims out there, it is
dizzying because many of them sound similar, overlapping.
It's complex. My recommendation to everyone is
go and try the technology. The beauty of this era of AI
and agents is most products are a couple of prompts away
(17:31):
from someone trying it. I think instead of saying, I am
going to run a three-month proof of concept on AI solutions, yada
yada yada, now it's: how about we spend an hour with a product and
we'll see how good or not it is.
We're doing this on a regular basis, just evaluating new
models, new products, new applications of the models and
(17:55):
quickly you can see whether what they're doing is materially
better than the state-of-the-art versus just another quick
wrapper on top of the AI model.
So my recommendation, because it's hard to tease
apart the noise, is: try it. We're in a world where anyone
that speaks English can largely use most of the products based
(18:19):
on AI. So that's the path forward, in my
opinion. So basically
you're saying you've got to do the empirical testing for
yourself. You can't
rely on vendor marketing in order to determine
what's real and what will work in your particular environment
and use case. Correct.
And that has been true for
(18:39):
certain aspects of enterprise technology for many years.
It's only that the cost of the POC is going down materially.
I've been doing this long enough
that I remember when, if I wanted to do a POC for an appliance,
a database appliance, you would get on a sign-up sheet and they would
tell you, well, your turn to get the appliance shipped to you is
(19:02):
8 months from now and then you get it for a week.
That sounds completely crazy in this day and age.
What's happening with their products is: just go and try
it. Usually it's on the spot, and an
hour later you have a very good sense of what it is.
And then the other thing that has happened traditionally that
(19:22):
I don't think works that well for AI, is benchmarks used to try
to simulate or model customer use cases.
And then you could say, well, this vendor is better on this
particular benchmark, so maybe I'll go without a POC. I
think, even though there are plenty of
benchmarks that are constantly evolving and vendors
(19:43):
leapfrogging one another, I think nothing substitutes
leveraging or trying a product for the needs of your
organization. You raise a very
interesting point about the experimentation. Is that
something that is fundamentally different about AI in the
enterprise that it lends itself to trying this and trying that
(20:09):
and iterating extremely rapidly? 100
percent. In an ideal world that would
have been true of all enterprise technologies in the past and SaaS
applications in the past. But it's harder.
It's expensive to go through a quick evaluation of a SaaS app or
(20:29):
a quick evaluation of, I mentioned appliances as a use case,
or a traditional database, you know.
What's unique about AI is that in many use cases, the time to
create and test a pilot is dramatically lower than with
some of the other technologies I just mentioned, which opens this
door to say: I don't need to go and just read the
(20:52):
vendor claims and decide based on that.
I can just try it. And then we hear on a regular
basis... we did this Snowflake World Tour, where we
take our message to a number of cities, 23 cities,
and I heard directly from customers: I just did a test of your
technology with two other vendors and this is what we
like. Which is amazing, because the
(21:15):
friction used to be so high that that didn't happen.
Now it's materially
lower. I'm not going to pretend that
the cost is zero, but it's so low that it's not unthinkable to try
3-4 technologies in the period of a day or two.
So there's a real mindset shift that must take
(21:35):
place for people managing and leading
AI deployments. And when it comes to agents, how does this
approach overlay onto the agent world? Agents are
the best manifestation of AI, which is the AI
(21:57):
models, but they have the ability to reason through some
context and be able to invoke some tools or take some action.
So I would say at this point, agents are the best
instantiation of what I would say the AI
models can do, and it has materially changed the effort needed to go
(22:19):
and try one of the solutions. There's also a number of
interoperability protocols emerging.
MCP is gaining a lot of traction.
I think no customer conversation I have these days goes without
asking: how do you plug into an MCP
ecosystem, and can it be part of my broader agent strategy?
(22:41):
Same things happening with agentto agent protocol.
And I think more and more interoperability is going
to emerge. But the notion that companies
can much more easily (again, I want to be careful, I won't say at zero
cost, but much more easily) try these technologies, is changing
what's possible, and I would encourage
(23:04):
companies to change how they think about evaluating technology.
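For a sense of what plugging a capability into an agent ecosystem involves, here is a deliberately simplified sketch in the spirit of MCP: a tool is described with a name, a description, and an input schema, and a host agent discovers and invokes it. The real Model Context Protocol has its own SDKs and message format; everything below, including the tool name, is hypothetical.

    # Simplified, hypothetical sketch of exposing a capability as a "tool"
    # in the spirit of MCP. The real protocol and SDKs differ.
    import json

    TOOLS = {
        "lookup_customer": {
            "description": "Return basic facts about a customer by name.",
            "input_schema": {"type": "object",
                             "properties": {"name": {"type": "string"}},
                             "required": ["name"]},
        }
    }

    def handle_call(tool_name: str, arguments: dict) -> str:
        # The host agent picked a tool; the server executes it and returns text.
        if tool_name == "lookup_customer":
            # Placeholder: a real server would query governed data here.
            return json.dumps({"name": arguments["name"], "status": "ACTIVE"})
        raise ValueError(f"Unknown tool: {tool_name}")

    # A host agent would first list TOOLS, then call one:
    print(handle_call("lookup_customer", {"name": "Acme Corp"}))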
Are there limitations of agents today
based on the technology, limitations of data and so
forth? Where are the boundaries where we go into
the bleeding edge, where it becomes maybe dangerous?
(23:24):
We don't know the bounds. And that was a
little bit of my comment earlier, that the state of the
technology may be ahead of our applications. Where most
companies take a slower, more guarded approach is
in regard to the two issues I was referencing, which is one,
(23:45):
correctness of results, and two, security,
privacy, and compliance guarantees. Correctness of results does not
just apply to: did the agent or the chatbot give me an
incorrect answer? The more we talk about agents
taking action, it is important that that action is also
(24:07):
correct. Probably one of the last
operations that will be fully automated with AI is
going to be things like a bank transfer or a wire transfer,
because you want to make sure that those things are 100%
correct. But in the meantime, there are
all sorts of use cases. In the consumer world, all the
(24:29):
examples are: make a reservation at a restaurant.
But in the enterprise world, it's simpler.
It's like: close a support ticket, or generate a set of
questions for this review that I'm about to go to, or generate a
set of insights for maybe a board meeting.
Not that the stakes are not high, but they're lower than
(24:52):
some of the more mission critical use cases.
Can you talk about the data and
governance gaps that can cause problems with agent deployments.
You touched on this earlier, but really drill into the data for
us. This is, in my mind, the single
biggest impediment for companies to tap into AI today.
(25:19):
And it comes down to: if the data, the data estate of a company, is
not well organized or is not broadly accessible, there is
little magic that the AI can do. I like to say AI
is a turbocharged, super capable technology, but it's not
(25:44):
magic. If you have two data sets, say
with customer lists and they're inconsistent with one another, a
great AI solution may tell you, well, I found two lists and
according to this list, Michael is a customer.
According to this list, Michael is not a customer.
It cannot make a judgement call on that.
(26:05):
So I would say cleaning up data, having a single version of the
truth, which is sort of the Holy Grail in enterprise data
systems, like for any question there
should be only one answer, or at least a factual one.
Having that type of rigor matters a lot, because all
(26:25):
of the AI systems are doing is reasoning and retrieving based
on the data. But if the data is messy, nobody
can help you. I usually use a litmus
test: if you were to ask the best
analyst in your company to answer a question, would this
best analyst know where to go, know what to do?
Then there's a path for AI. If the best analyst says, no, I
(26:48):
just need to go have lots of meetings because all the
knowledge is in people's heads. That's going to make it harder
for AI to do it. There are promising efforts on
AI trying to infer some of that knowledge from the data, but
that starts to add additional levels of non determinism and
potential risk. We have a
question from Arsalan Khan on Twitter right now, very
(27:11):
specifically about the data that is institutional knowledge
residing in people's heads. And there's a number of
implications of that, including, Arsalan raises:
will people feel we don't need them if they share
that information? In most technology changes, some
(27:36):
resistance like that comes, and I think the reality
is that what a lot of this technology can do is enhance productivity.
Yes, maybe rely less on tribal knowledge and what's on people's
head, but it also frees up individuals to go and work on
additional tasks, higher value tasks.
(27:57):
I forget who said it, someone said it publicly and now I
repeat it almost every time
I get a chance, which is: I don't think individuals should
be worried about being replaced by AI. I think they should be
worried about being replaced by individuals that know how to use
AI, because that's the more real thing.
And yes, relying on institutional knowledge or
(28:20):
tribal knowledge in my head, that's not good.
Now I'm going to be called at all times of the day, which
you could say is good for independence and job stability,
but that's not long term good for the company or for the
individual. So how
should organizations manage and govern all of this unstructured
data that's not even in people's heads, you know, documents,
(28:43):
images, audio, how should organizations deal with that?
There's a broad trend in the industry,
and it's not like we're leading much of
this, which is: how do we capture, in what are called semantic
models and semantic views and ontologies, that business
knowledge that oftentimes is either embedded in tools or
(29:06):
embedded in people's minds. But how do you codify it in a
way that it can be made available to the AI?
So that is happening. We started a cross-company
consortium to do interoperability of semantic
models. We're very excited about the
number of companies participating in that.
And it's all about how do you take that corporate knowledge
(29:29):
and codify it in a way that you can more intentionally make it
available to AI. So I wanted to add that to
the comment on knowledge in people's minds or embedded in
tools. Now let me shift to your
question on unstructured data, where your premise was
spot on. With unstructured data, you don't
even have to know, is it in someone's head or
(29:50):
something like that. No, the reality with
unstructured data is, A, it is the vast majority of the data that an
organization has. There are estimates that put it at 90 percent, 85, 95,
the specific number doesn't matter.
But I think most of us would agree the vast majority is just in
documents and logs and emails and all of that
(30:13):
context, which is truly difficult to retrieve, or has
been truly difficult to retrieve, up until AI.
In the world of AI, there's still some amount of OK, you
need to go give it the right index.
You need to figure out if you're going to do this type of
semantic search, approximate search, how important
(30:36):
correctness and citations are. But what AI has done for
unstructured data is as if overnight it turned on the
light. Like imagine that we had all
this data around us, but if it's dark in the room, you
don't even know that it is there.
Now you open your eyes and, like, oh, there's so much for us to
(30:57):
learn as a company. It's incredibly exciting what
AI is doing for unstructured data. We have
really an important question from Sai Panamuru on
LinkedIn, who says: why are so many projects getting stuck at
the proof of concept stage? In my mind,
a lot of what we saw maybe 12 to 18 months ago, there was this
(31:21):
general sense that AI is so powerful that it's easy for me
to just go and roll up my sleeves and build the solutions.
I'm going to build my customer support agent.
I'm going to build a talk to my database agent.
It's not too hard and many of those initial trials produce
(31:43):
great results. Like in isolation.
I know exactly the question. I'm working with maybe 5 tables
in my database and everything works great.
And what we've seen, I would say for sure for the last year plus, is
that by the time you try to roll those POCs or those organic
efforts into production, then you get the intersection with
(32:05):
reality. And reality is harsh.
Reality is like, oh yeah, you tested it with five tables, but
we have 100,000 tables and they're not clearly named. In
the POC they were called orders and line
items and customer; in the production system it's something
like table one, table two, table three.
And it's impossible to know what's in there.
(32:27):
So the reality operates at a different scale.
It's messier. Back to this data quality concept:
it matters how organized and how discoverable and
understandable data is. And the other piece that I
mentioned is correctness and trustworthiness of solutions, which has
stopped many pilots from going to production and it has shut
(32:49):
down projects that went into production just because again,
the POC sounded great, but the real system is providing
wrong answers, I don't know, one out of four
times, which you can say, well, it's not too bad, but like one
out of four times if you're serving, I don't know, 1000
customers in a day, that sounds like a lot of unhappy customers.
I think that's the friction on do-it-yourself types of
(33:13):
solutions, which is why we ourselves, and that's not a unique
analysis in the industry, have seen a lot of momentum and
uptake with our solutions, like the Snowflake
Intelligence I mentioned, because it's harder than it seems.
And now customers are ready to say I want to put in production.
I don't want to have to roll up my sleeves. This
issue of the leverage of agentic AI also means that when you have
(33:42):
errors, there's a magnification of those errors that can grow.
And we have a question from Elizabeth Shaw on Twitter, who
says models and algorithms have a tendency to drift or become
polluted, and with agentic AI that can cause havoc.
(34:05):
How should companies address this set of issues?
And who should be responsible for this?
Inside organizations? Some of these
trends and problems were there and were visible in traditional
machine learning technologies. When someone was
creating a recommendation engine with, again, predictive ML, the
(34:28):
discipline emerged around how do you do evaluations and how do
you monitor for drift? And how do you make sure that
you're not overfitting a model? And some of those
techniques are effectively: how do you govern, monitor, and
maintain non-deterministic technologies?
(34:48):
Back in the day, when everything was 100%
deterministic, is your balance above zero or not,
that's easy, that will not drift.
But we've gotten into more sophisticated solutions, and ML
was the beginning of that, and a lot of best practices have
emerged around how do you govern ML in
production. Now they're in overdrive with
(35:09):
AI, which by definition, even with all the same inputs, is
going to give you variable outputs.
And I think the same set of patterns is emerging to govern
AI solutions in production. At Snowflake,
we've enabled our customers to do logging of
responses, track evaluations, be able to see that what was
(35:33):
intended to happen is delivering the answers that customers would
need it to deliver. So I would say there's precedent
with machine learning; we need to go and do a lot more as an
industry to help organizations cope, and put into production
with confidence, solutions that are inherently variable and
adapting to context.
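As one hedged illustration of that pattern, the sketch below logs each response, scores it against a small evaluation set, and watches a rolling pass rate for drift. The names, the evaluation set, and the threshold are invented for illustration; this is not a specific Snowflake feature.

    # Illustrative sketch of governing a non-deterministic system in production:
    # log every response, score it against an evaluation set, and watch the
    # pass rate over time for drift. All names and thresholds are hypothetical.
    from collections import deque
    from datetime import datetime, timezone

    EVAL_SET = {"What is our refund window?": "30 days"}
    recent_scores = deque(maxlen=100)  # rolling window of 0/1 scores
    response_log = []

    def log_and_score(question: str, answer: str) -> None:
        expected = EVAL_SET.get(question)
        score = 1 if (expected is not None and expected.lower() in answer.lower()) else 0
        recent_scores.append(score)
        response_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "question": question,
            "answer": answer,
            "score": score,
        })

    def drift_alert(threshold: float = 0.8) -> bool:
        # Alert when the rolling pass rate drops below the threshold.
        if not recent_scores:
            return False
        return (sum(recent_scores) / len(recent_scores)) < threshold

    log_and_score("What is our refund window?", "Our refund window is 30 days.")
    print(drift_alert())  # -> False while answers keep matching expectations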
(35:54):
So, on this point of the nature of the technology being non-
deterministic, how should business and technology leaders
measure value, measure return, when the cost, the consumption
patterns, the productivity gains remain uncertain?
(36:15):
It's not the same as putting in an ERP system where you
know you have your defined order flow, for example.
In all instances, the benchmark or
the evaluation has to be a function of what am I getting in
return for what I'm putting in. I'll give you an
example. Some consumer level mobile app
(36:39):
was trying to make some LLM calls in a critical path of the
application just to show a slightly better recommendation.
That's expensive, because the uplift in the quality of the
recommendations didn't come anywhere close to the cost of
the AI. So I would say that this is no
(37:02):
different with predictive and unpredictable technologies:
what do you put in, and is the delta greater or not than the
benefit that you get? So that one is more
the exceptional case in many instances.
What we're hearing is it's productivity gains for employees
and the organization, oftentimes measured in orders of
(37:25):
magnitude: a use case that maybe an analyst can take, I'm going
to make it up five hours can be done by an agent in 5 minutes.
The ROI in there is so large, there is plenty of room
for AI companies to make money, including the all, all the
(37:48):
platforms in between. And still the company gets a
positive ROI on that. So it's not to say, yeah, let's
let's charge whatever for, for AI and, and have companies pay
as much. But depending on what it is
benchmarked against, in many instances, the, the ROI is so
large that there's actually a lot of reason to be optimistic
(38:09):
about this. And back to the
previous question, the premise that it's a bubble:
no, I think there's a lot of real productivity in here.
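To make that framing concrete, here is a tiny, illustrative calculation with invented numbers: compare the loaded cost of an analyst spending five hours on a task with the cost of an agent run that finishes in five minutes, and look at the delta.

    # Illustrative only: made-up numbers to show the ROI framing described above.
    analyst_hourly_cost = 80.0      # USD, hypothetical loaded cost
    task_hours_manual = 5.0         # the analyst takes ~5 hours
    agent_run_cost = 2.0            # USD, hypothetical model + platform cost
    task_hours_agent = 5.0 / 60.0   # the agent takes ~5 minutes

    manual_cost = analyst_hourly_cost * task_hours_manual   # 400.0
    speedup = task_hours_manual / task_hours_agent           # 60x faster
    net_benefit = manual_cost - agent_run_cost                # 398.0

    print(f"speedup: {speedup:.0f}x, net benefit per task: ${net_benefit:.2f}")
    # The delta stays positive even if the agent price were many times higher,
    # which is the "plenty of room for everyone in between" point made above.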
It's an extraordinary comment that you
just made, that the ROI is potentially so large, and I think
that gets right to the heart of all of this vast amount of
(38:32):
investment that's being made in AI right now.
But we have a really interesting question on Twitter
from Miles Sewer, and, Christian,
I'm going to ask you to answer the next questions pretty
quickly, because we're simply going to run out of time.
Miles Sewer says this research shows that much data is
(38:57):
immature, unwrangled, and without a semantic layer.
It's a market limiter for every software company.
And what is Snowflake working on to help with this?
Data quality and data cleaning do get in the way of
companies leveraging AI. We have launched capabilities to
help extract semantic information from a number of
(39:20):
sources, whether it's existing BI tools or query logs.
So we're helping organizations extract that knowledge.
And also we're partnering with a healthy number of third parties,
companies, start-ups that are working on inferring some of the
semantics from data. So it's a work in progress for
the industry, but it's very exciting what's going on, very,
very quickly. Sai Panamuru comes back, following up on
(39:44):
his earlier question, and he says: are there key solutions you can
recommend, or advice, for moving AI from proof
of concept (POC) to production? And very quickly, please. It's too
generic of a question, because it depends on what AI and
what solution. But if you wanted to do talk-to-
(40:05):
your-structured-data, I'll plug Snowflake Intelligence. We
have a question from Arsalan Khan on
Twitter. Folks, keep your questions
coming in. We have a few minutes left.
We'll try to get through these really fast.
Arsalan Khan now wants to shift the conversation to the
workplace implications, and he says that performance reviews
(40:29):
are a mix of objective and subjective opinions.
When AI gets involved, it's more objective.
And should AI be used in hiring and firing?
And what I'm really interested in is the impact on jobs from
agentic AI. I would think of
AI for people-related matters as
an assistant, not a decision maker. Two reasons.
One is AI results are still as good as the inputs that you
provide to the AI and there's subjectivity in curating and
choosing those inputs. And the other thing is we
also established that there is some variability in the
responses that AI produces. So I would say do leverage it,
(41:16):
maybe use it for comparison. We were summarizing
interview feedback last night for someone, but it's to summarize,
not to replace the decision maker.
Do you hear much interest or cause for
concern among your customers about the impact of agentic AI
on jobs and the workforce? Everyone has the opportunity to
(41:36):
do more with their data, which should translate to faster
business outcomes, better business success, and that's
what, at the end of the day, gives prosperity.
Jobs will evolve. Jobs are evolving already.
But I'm on the optimist side that this is a
productivity play, massive productivity play.
(41:58):
But people will adapt. Jobs will adapt.
Companies will do better if they leverage AI.
What advice do you have for senior
business and technology leaders on managing this period of rapid
change, particularly in relation to agentic AI?
Don't take on large-scale AI rollouts.
(42:21):
When you hear, we're going to buy 100,000 licenses of
XYZ solution and put it out there, it's like, well, do
you even know if it works or not?
I think the better approach, in the same way that I earlier
said the cost of evaluating a technology is fairly low,
I would say find smaller use cases, put them out in
production. If it works well, scale it.
(42:44):
That's the mode I think all of us should be in.
Technology is advancing very fast.
If you tried something four months ago and it didn't work, there's
a chance that it may be working today.
So just stay in that constant state of evaluation, trial, small
rollouts and scale based on success.
So constant iteration, constantly trying new
(43:04):
things because it's all changing.
Take what works, put what works into production, and discard
what doesn't work, basically. Correct.
That's correct. All right, and.
With that, we are out of time. A huge thank you to Christian
Kleinerman, Snowflake's executive vice president of product.
Christian, thank you so much for being with us and taking your
(43:27):
time. I'm very grateful to you.
Likewise, Michael, thank you so
much for having me, and I look forward to doing it
again sometime. Everybody who watched,
Thank you for those great questions.
But before you go,
Subscribe to the CXO Talk newsletter so we can keep you up
to date on our upcoming shows. Everybody,
(43:49):
hope you have a great New Year. Our next show is in
early January, and we'll see you again next
time. Take care, everyone.