
June 26, 2025 33 mins

What happens when you try to monitor something fundamentally unpredictable? In this featured guest episode, Wayne Segar from Dynatrace joins Corey Quinn to tackle the messy reality of observing AI workloads in enterprise environments. They explore why traditional monitoring breaks down with non-deterministic AI systems, how AI Centers of Excellence are helping overcome compliance roadblocks, and why “human in the loop” beats full automation in most real-world scenarios.

From Cursor’s AI-driven customer service fail to why enterprises are consolidating from 15+ observability vendors, this conversation dives into the gap between AI hype and operational reality, and why the companies not shouting the loudest about AI might be the ones actually using it best.


Show Highlights

(00:00) – Cold Open
(00:48) – Introductions and what Dynatrace actually does

(03:28) – Who Dynatrace serves

(04:55) – Why AI isn't prominently featured on Dynatrace's homepage

(05:41) – How Dynatrace built AI into its platform 10 years ago

(07:32) – Observability for GenAI workloads and their complexity

(08:00) – Why AI workloads are "non-deterministic" and what that means for monitoring

(12:00) – When AI goes wrong

(13:35) – “Human in the loop”: Why the smartest companies keep people in control

(16:00) – How AI Centers of Excellence are solving the compliance bottleneck

(18:00) – Are enterprises too paranoid about their data?

(21:00) – Why startups can innovate faster than enterprises

(26:00) – The "multi-function printer problem" plaguing observability platforms

(29:00) – Why you rarely hear customers complain about Dynatrace

(31:28) – Free trials and playground environments



About Wayne Segar

Wayne Segar is Director of Global Field CTOs at Dynatrace and part of the Global Center of Excellence where he focuses on cutting-edge cloud technologies and enabling the adoption of Dynatrace at large enterprise customers. Prior to joining Dynatrace, Wayne was a Dynatrace customer where he was responsible for performance and customer experience at a large financial institution. 


Links

Dynatrace website: https://dynatrace.com

Dynatrace free trial: https://dynatrace.com/trial

Dynatrace AI observability: https://dynatrace.com/platform/artificial-intelligence/

Wayne Segar on LinkedIn: https://www.linkedin.com/in/wayne-segar/


Sponsor

Dynatrace: http://www.dynatrace.com 


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
It becomes very, you know, imperative, just as it was imperative that you were
observing the health of everything before, that you monitor the, uh, backend
of your AI componentry that you're putting in place now, particularly because
in a lot of cases you don't necessarily even know what outcome is gonna happen, right?

(00:20):
It is, to an extent, non-deterministic.
You wanna understand, you know,
what's the performance of this model I'm working with now?
How much is it costing me?
Uh, that's another big thing, right?
Depending on how I'm using it.
So there's all these different things that kind of come into
it that become just as critical, because it's even more complex than
even the complexity you already had that was

(00:41):
running, you know, your main applications.
Welcome to Screaming in the Cloud.
I'm Corey Quinn.
I'm joined this week on this Promoted Guest episode by
Wayne Segar, who is the Director of Global Field CTOs at Dynatrace.
Wayne, thank you for joining me.

(01:02):
Thank you very much.
Pleasure to be here.
This episode is brought to us by our friends at Dynatrace.
Today's development teams do more than ever, but challenges
like fragmented tooling, reactive debugging, and
rising complexity can break flow and stall innovation.
Dynatrace makes troubleshooting outages easy with a unified observability

(01:22):
platform that delivers AI powered analysis and live debugging.
That means less time grappling with complexity.
More time writing code and a frictionless developer experience.
Try it free at dynatrace.com.
One of the challenges of large companies is as they start
having folks involved in various different adjectives,

(01:43):
there's always an expansion specifically of job titles.
People start to collect adjectives like they're going out of style or whatnot.
What is Dynatrace and what do you do there?
Yeah, so Dynatrace is, you know, at the very
definition of it,
an observability and security company, right?
So we kinda live in that space now.

(02:05):
What we do is certainly very broad.
I mean, really we like to say our goal or our vision is to make software work
perfectly, or a world where software works perfectly. Now, very aspirational;
uh, I think we would all like to be there, but, uh,
we also know that that's, you know, certainly challenging,
but that is, you know, what we do and what we strive to do.
And so what we predominantly focus on is, you know, helping customers and

(02:30):
helping businesses understand, you know, if there are problems, really
the health of their systems and where said problems are, and ultimately,
you know, how to fix them in a timely fashion, or,
predominantly what we'd really like to do is automate
those things so they don't even impact people, uh, at all.
Right?
So the best way I describe it to everybody, you know, even the, uh, when

(02:50):
somebody like, you know, my mom or somebody who's not in the technology space
and asks what we do, what I say is: ultimately,
everybody takes a flight, everybody takes
an Uber, everybody checks into a hotel.
Most of the time you're doing that through a digital interaction.
At some point when it doesn't work, it's a really bad experience.
We work to prevent those bad experiences.

(03:10):
Unfortunately, we've hit a point where saying, oh, we're an observability
company, is basically a half step better than, oh, we are an AI company.
It's, well, you have just told me a, a hemisphere that you live in.
Great.
Who are your customers, uh, that are, that are your bread and butter?
Where do you folks start?
Where do you folks stop?
Yeah, so we, we predominantly focus, um, you know, where our customer

(03:33):
base is, is I would say the larger kind of scale enterprises.
Now, certainly that, you know, we're not excluding anybody by saying
that, but in terms of where our, our, a lot of our install base
is, is you can start to think about, you know, a global 15,000 or
something like that in terms of, uh, you know, in terms of rankings.
It's really a lot of those types of customers.
But like I said, it can also span, you know,

(03:55):
down into, you know, smaller companies.
Because at the end of the day, if they've got a digital
property that somebody's using, they need to ensure that
it's actually working and that experience is, is done well.
Uh, it's, it's strange.
I tend to live on both sides of a very weird snake where I, uh,
where I build stuff for fun that runs all of 7 cents a month.

(04:15):
I confess, I've never used Dynatrace for any of these things yet when
I fix AWS bills for very large companies, you folks are everywhere.
So it's, it's always interesting to realize that.
Yeah, some folks think that you are
the end-all, be-all of observability, and others will
misspell your name because they're that unfamiliar with it.
Uh, I want to give you folks some credit as

(04:36):
well. When I visited your website at dynatrace.com,
as of this recording, I'm sure some marketing person will fix
this before we publish, but AI is nowhere above the fold.
Yes, it's the first thing below the fold, but you have not rebadged
your entire company as the AI company, which frankly is laudable.
Yeah, I, I appreciate that.

(04:56):
And there is a little bit of a reason for that, or maybe there's
a little bit of a history to it. Now, we can go back and talk about
the history of the company, which I won't bore you too much with,
but basically since we, uh, you know, since we launched kind
of the platform that we know of today, kind of our flagship
product, which was about 10 years ago at this point, really:

(05:18):
You know, we, we built it, we actually did build
AI slash machine learning at the core of it.
Okay.
So we did do that.
So that, and that was a while ago.
So it's very much been a part of the platform for a long time.
And we were talking, like back then we actually had AI in a lot of
the marketing and, and at that point people would not even believe us.
They would say,

(05:38):
you know, that's not a thing, that doesn't work.
You were AI washing before AI washing got big.
It, it, it, it was exactly right.
And so, um, and so we've obviously, you know, changed a little bit of that.
Now it's, we find it interesting now that Right, every company
out there, like you just said, has AI plastered on something,
whether they're doing it or not, they're marketing towards it.
Um, and so, so that's kind of where we kinda

(06:00):
look at it is, is we've evolved the platform.
Of course as AI has evolved, but we've actually had it.
As a core piece since it's, since its inception,
right.
I guess you're orthogonal to what we do, in that,
well, we both sort of overlap, not really orthogonal.
If I learn to use words correctly, that would be fantastic, but we
are alike in that we both have large data sets that we have to make

(06:24):
decisions based upon.
So like are you using AI in this?
Uh, well, I don't really know how to answer
that if you're not using machine learning.
I have several questions about what you think I would
be doing to wind up getting to reasonable outcomes.
But am I just throwing this all into an LLM API? Categorically, no.
That would cause way more harm than it would good.

(06:46):
So where does AI start and stop is honestly
increasingly becoming a question for the philosophers.
Exactly.
Yeah.
Agreed.
So I wanna talk about something that you folks have done that I find
fascinating from a, because I care about it very much, but I see it
from a potentially different angle, and that is observability of GenAI.

(07:08):
Uh, and that can mean
two different things.
I don't wanna conflate it with, oh, we're
using AI to wind up telling you what's going on in your environment.
You have large customers across the spectrum, but biasing toward
the large, who are clearly doing a bunch of gen AI things.
How are you thinking about observability for what is effectively
the only workload people are allowed to talk about this year?

(07:32):
Yeah.
And so what we're, what we're definitely seeing and where we've kind of
focused is customers are specifically, like you said, in the larger size.
What they're doing is they're, everybody's doing some sort of project,
like you said, it's, people are working on it, they're talking about it.
Um, now is it the most wide scale?
Is it running their most, uh, critical revenue line application yet?

(07:56):
No, not really.
At least I'm not seeing that.
But that's maybe an aspiration of course.
But what we do look at is that these
AI projects and these new workloads that they're developing
are becoming a piece of their, let's say, broader
kind of application, you know, landscape.
So it becomes very, you know, imperative, just as it was imperative that you were

(08:19):
observing the health of everything before, that you
monitor the, uh, backend of your AI
componentry that you're putting in place now, particularly
because in a lot of cases you don't necessarily even know what
outcome is gonna happen.
Um, right.
It is.
And to an extent, non-deterministic.

(08:39):
Uh, and you wanna understand, you know, what's the performance
of this model I'm working with now, how much is it costing me?
Uh, that's another big thing, right?
Depending on how I'm using it.
So there's all these different things that kind of come into
it that become just as critical, because it's even more complex than
even the complexity you already had that was
running, you know, your main applications.
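
To make the kind of telemetry being described here concrete, the sketch below (plain Python, purely illustrative, and not Dynatrace's product or API) records per-call latency, model name, token counts, and an estimated cost for a GenAI backend call. The model name, the per-token prices, and the call_model client are hypothetical placeholders.

```python
# Illustrative sketch only: per-call GenAI telemetry (latency, model, tokens,
# estimated cost). "example-model", its prices, and call_model are hypothetical
# placeholders, not a real provider's API or Dynatrace's instrumentation.
import time
from dataclasses import dataclass, asdict

PRICE_PER_1K_TOKENS = {"example-model": {"input": 0.003, "output": 0.015}}  # assumed prices

@dataclass
class LLMCallRecord:
    model: str
    latency_s: float
    input_tokens: int
    output_tokens: int
    estimated_cost_usd: float

def observe_llm_call(model, prompt, call_model):
    """Wrap one LLM call and capture the signals discussed above.

    call_model(model, prompt) is a placeholder client expected to return
    (reply_text, input_tokens, output_tokens).
    """
    start = time.monotonic()
    reply, in_tok, out_tok = call_model(model, prompt)
    latency = time.monotonic() - start
    prices = PRICE_PER_1K_TOKENS.get(model, {"input": 0.0, "output": 0.0})
    cost = (in_tok / 1000) * prices["input"] + (out_tok / 1000) * prices["output"]
    record = LLMCallRecord(model, round(latency, 4), in_tok, out_tok, round(cost, 6))
    print(asdict(record))  # in practice you would export this to your observability backend
    return reply, record

if __name__ == "__main__":
    # Fake client so the sketch runs end to end without any provider SDK.
    def fake_client(model, prompt):
        return "ok", len(prompt.split()), 42
    observe_llm_call("example-model", "summarize this incident report", fake_client)
```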

(09:01):
Yep.
I see it from the cost side, but it tends to take the perspective
of being more at the micro level than the macro.
Uh, companies generally aren't sitting there saying, well, we're spending
a few hundred million a year on AWS, so in our next contractual commitment,
we're gonna boost that by a hundred million because of all the GenAI.
But they do care that, okay, we have a workload that effectively can run
indefinitely, continue to refine the outputs, have agents discuss these things.

(09:26):
At what point do we hit a budget cap on that workload and
then say, okay, and now this result is what we're going with.
You see that sometimes with model refinements as well,
E, exactly.
That was actually the next point I was gonna make is,
is that's what we're also seeing too, is people look at
it from that perspective of, well,
we'll change the model out
and we'll see,
okay, could this have been more

(09:46):
cost effective in the more macro world?
Right?
So maybe it, like I said, it's, it's kind of a rounding error in terms of our
like cloud bill today, but it now gives us the opportunity to understand how
we can be efficient with this when we do scale, which inevitably will happen.
Uh, I, I do find that when people are trying to figure out
does this thing even work there, there's not even a question.
They reach for the latest and greatest top tier frontier

(10:09):
model, uh, to figure out is this thing even possible?
Because once it is an okay, yes, it turns
out to work. For example, let's take a toy application
I built to generate alt text for
images before I put them up on the internet.
Uh, can it do this?
Terrific.
Great.
Now, if I start using this at significant scale and it starts
costing money, I can switch over from Claude 4 Sonnet

(10:30):
all the way back to, I dunno, Amazon Nova or an earlier Claude
version or whatever, if the economics makes sense, but at the moment,
the cost for this stuff on a monthly basis rounds up to a dime.
So I really don't care all that much about cost at the current time.
And that seems to be where a lot of folks are at
with their experiments of, does this even work?
If it's expensive, so be it.

(10:52):
People's time and energy and the, and the lack of focus on other
things is already more expensive than this thing is going to be by far.
At least that's how I'm seeing it.
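
A minimal sketch of the pattern Corey describes, prove the idea on a top-tier model and then swap to a cheaper one when the economics matter, might look like this. The model identifiers and the generate client are hypothetical placeholders, not any specific provider's API.

```python
# Illustrative sketch only: keep the model a parameter so "prove it on the best
# model, then swap down when the economics matter" is a config change. The model
# names and the generate() client are hypothetical placeholders.
from typing import Callable

FRONTIER_MODEL = "frontier-model-large"    # prove the idea works here first
BUDGET_MODEL = "smaller-cheaper-model"     # switch once volume and cost matter

def alt_text_for(image_description: str,
                 generate: Callable[[str, str], str],
                 model: str = FRONTIER_MODEL) -> str:
    """Generate alt text for an image description via whichever model is configured."""
    prompt = (
        "Write concise, descriptive alt text for the image described below. "
        "One sentence, no preamble.\n\n" + image_description
    )
    return generate(model, prompt)

if __name__ == "__main__":
    # Fake generator so the sketch runs without calling any real model.
    def fake_generate(model: str, prompt: str) -> str:
        return f"[{model}] A corgi in a cloud costume at a tech conference."

    print(alt_text_for("photo of a corgi in a cloud costume", fake_generate))
    print(alt_text_for("photo of a corgi in a cloud costume", fake_generate,
                       model=BUDGET_MODEL))
```
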
I think I see that as well.
I'm, I'm kind of in agreement with you there 'cause
it's not, it's, it's certainly not at any scale yet.
You're right.
It's, it's very much in the stage of can we make something that works?

(11:14):
And then I think people are starting to think about this as:
can we not only make it work, but does it actually provide value
back, which I think is the other big thing that people have
struggled with.
It's like, this is cool, but, uh, cool doesn't
necessarily, you know, make or save us any money.
Right.
And, and in many cases it seems that companies are taking what

(11:35):
they've been doing for a decade and a half and now calling it
AI, which, okay, and there are a lot of bad takes on it.
People are, oh, we're gonna replace our
frontline customer service folks with a chatbot.
Uh, cool.
I've yet to find a customer that's happy with that.
Uh, to give one great example that I found, uh, I was
poking around on Reddit last night, looking at a few of the

(11:57):
technical things as I sometimes do when I'm looking for inspiration.
And someone mentioned that, uh, they canceled Cursor because that is the first
time in, uh, 20 years where they've done that for just poor customer service.
And I had the same experience.
I emailed in about a billing issue.
I got a robot reply that disclosed, in very fine gray text at the
bottom, the fact that it was a robot, so I missed it the first time.

(12:18):
And then it basically chastised me for, uh,
sending a second email in a couple days later.
This will not improve response times.
It's okay.
I understand the rational business side of why you would do that.
People, as people, don't like it; they want to be able to reach out and talk
to humans. That's something that the big enterprise clouds had to learn:
okay, if you're talking about large-value transactions, they want a

(12:41):
human to get on the end of the phone or take them out for dinner or whatnot.
That doesn't necessarily scale from small
user all the way up to giant enterprise.
Customers have different profiles and need to be handled differently.
That's exactly right.
Yeah.
And I find your story kind of funny in a sense that the
AI, which is the promise of it, is supposed to, you know, make
things more efficient, and it literally just did the same thing of,

(13:02):
you know, being rude to you and saying
that it couldn't even do the job any faster.
Exactly.
Where I do see value for things like that with frontline
support is okay, ticket comes in, look at it in your
ticketing portal as a human customer service person.
And it already picked up the tone.
It rewrites a couple of different responses and links
you to internal resources that are likely to help.

(13:23):
Uh, the only way I've seen customer-facing AI things like that make
sense is where it is very clear that it's an AI thing coming back,
or it gets human review before going out the door.
Yeah.
Yeah.
And that's even what we're seeing too, is
what I like to call human in the loop.
Yeah, that's a good expression.
I like that

(13:43):
is what I see.
That, you know, where, where some people are doing that today.
Um, and this is more when we start talking about autonomous,
or using more autonomous-like, operations, where it's like, okay,
great, something can tell me the problem, you know, like that, like
Dynatrace, of course we can point you to causation of a problem.
Um, even
tell you in some cases what it is that you should resolve or

(14:06):
do, or at least give you suggestions of what they should be.
Now, would you have some other, you know, agent interact with that
make the change and immediately go and push it out and say, we're done?
Probably not.
Um, and that's where again, human in the loop
is where I'm starting to see a lot of people.
You know, that's more of where people are envisioning it.
Um, right now, uh, it is more of the strategy.
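
A minimal sketch of the "human in the loop" gate described here: an automated system proposes a remediation, but nothing is applied until a person approves it. The proposal source and the apply step are hypothetical placeholders rather than any vendor's actual workflow.

```python
# Illustrative sketch only: an automated system proposes a fix, a person approves
# or rejects it before anything is applied. The proposal and the apply step are
# hypothetical placeholders, not any vendor's remediation workflow.
from dataclasses import dataclass

@dataclass
class Remediation:
    problem: str
    suggested_action: str

def propose_remediation() -> Remediation:
    # Placeholder: in reality this would come from root-cause analysis or an agent.
    return Remediation(
        problem="checkout-service error rate spiked after the latest deploy",
        suggested_action="roll back checkout-service to the previous release",
    )

def apply_remediation(r: Remediation) -> None:
    print(f"APPLYING: {r.suggested_action}")  # placeholder for the real change

def human_in_the_loop(r: Remediation) -> None:
    """Show the proposal and only act on explicit human approval."""
    print(f"Problem:  {r.problem}")
    print(f"Proposed: {r.suggested_action}")
    if input("Approve? [y/N] ").strip().lower() == "y":
        apply_remediation(r)
    else:
        print("Skipped; escalating to the on-call engineer for manual review.")

if __name__ == "__main__":
    human_in_the_loop(propose_remediation())
```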

(14:29):
Uh,
it's somewhat simplistic to look at observability as, it tells me
when the site is down. Well, great; past a certain point of scale,
the question has to become how down is it?
But it goes significantly further than that.
Uh, since you have that position of seeing, uh, these
entire workflows start to finish, how do you find that

(14:49):
companies, specifically in the enterprise space, are taking these
projects from development to small scale production to large scale production.
Uh, while I guess being respectful of the enterprise concerns,
uh, obviously there's performance and security, but compliance
starts to play a large role in it as well.
What are you seeing?

(15:09):
Yeah.
So I would say that if I went back, and again, the space moves
very quickly, but if you only went back, uh, you know, six
or eight months ago, um, people were raising their hands saying
that, you know, compliance was the biggest blocker of everything.
Um, meaning that,
you know, you could maybe test some things out,
but in terms of what you could do, what data you could use,

(15:33):
it became, you know, let's say, very challenging.
Now I've seen that, where that started to open up a bit because
companies have created, you know, AI centers of excellence internally
that are, you know, let's say, a little bit more designed to understand
what the compliance needs should be.
Uh, and then what is acceptable and certainly then what isn't.

(15:54):
And that's kind of given people guidance as to how to maybe
fast track or what, what is acceptable with their projects.
So that's one of the things I've seen become more predominant: this actual
AI Center of Excellence that has come up in a lot of enterprises.
Um, and, and so that's helped a lot.
And then with kind of, with that, with that in mind,

(16:15):
that's allowed, uh, you know, that's allowed customers
and, you know, let's say, people in the company to
basically come up with, uh, you know, a better strategy of
what they really want to end up with at the end of the day.
So it's like starting backwards.
That's the other thing I've seen people start to do a little
bit more is start to think a little bit more backwards
as, yeah, this idea sounds cool, but what would it do?
How would it improve?

(16:36):
Either maybe it's an internal customer experience, which I
think is where a lot of people are starting: like,
start with our, um, our internal applications, the ones that
service our internal users; what can we do to improve their lives, make
things more self-service, creating apps or AI-based apps that do that.
And then that gives us a lot of learnings to ultimately transition

(16:56):
and move things to, to things that may be more external facing.
So that's kind of the progression I've started to
see and, and still seeing it obviously as it matures.
Trusting your AI stack is non-negotiable.
That's why Dynatrace pairs perfectly with Amazon Bedrock.
Together they deliver unparalleled observability
across your generative AI workflows.

(17:18):
Monitor everything from model performance to token anomalies all in real time.
See how Dynatrace enhances AI with Amazon Bedrock.
Start your free trial at dynatrace.com.
Uh, something I have found is that a lot of these enterprises are over-indexing
from where I sit on how precious their data is, like it is their

(17:39):
core IP that, if it got out into the world, would destroy their business.
I've always been something of a skeptic on a lot of these claims.
Even take the stuff that, uh, like at the Duckbill Group, things
that we have built for internal tooling and whatnot, if that were to
suddenly leak because we're suddenly terrible at computer security,
it doesn't change our business any; it's not really a threat

(18:01):
because the value is the relationships we've built, how to apply
the outputs of these things, the context, the understanding.
Uh, if you, if I get access to all of AWS's, uh, code to run their hypervisors,
I'm not gonna be a threat to AWS, I'm not gonna build a better cloud now.
And I think a lot of companies find themselves in that position,
but they still talk about it as if it's the end of days.

(18:23):
If a, if a prompt leaks, for example.
I agree.
I mean, I think that there's, you know, there's probably
some correctness to the concern in, in certain industries.
Oh, I'm painting with a very broad brush.
I, I, I want to be clear here.
Yeah, yeah.
But I think, but I think that there is probably some over
hesitancy and that is where I, I, again, where I've seen when
people have created kind of these more AI centers of excellence,

(18:45):
and they've brought on people who aren't,
let's say, your traditional compliance folks
who kind of look at things in a very black and white manner.
They're usually more of a, okay, what does this really mean?
Like if this data gets out there, does it matter?
They look at it from that perspective versus like, you
know, maybe a more traditional black and white compliance
person would say data getting out there that equals bad.

(19:07):
That never happens.
Right?
You know what I mean?
That's an immediate no.
So I'm seeing some of that, you know, come around and change and,
and that's kind of maybe one of the driving forces that have, you
know, somewhat, somewhat grease some of the skids to allowing,
you know, enterprises to adopt things a little bit more readily.
Yeah.
And I think that also people at companies tend to get a little too insular.
I've seen it with my customers, I've seen it my own career.

(19:28):
Uh, I find it incredibly relaxing to work in
environments where we're only dealing with money.
'cause I had a job once where I worked in an
environment where, if the data leaked, people could die.
That's, that is something that is, that weighs on you very heavily.
But when all you do is work at a bank, for example,
it's easy to think that the only thing
that holds the forces of darkness at bay

(19:48):
is the ATMs spitting out the right balance.
Maybe you're closer to that than I want to acknowledge, but, but
there is a sense of, at some point, what are we actually doing?
What is the actual risk?
What is the truly sensitive data versus the stuff that just makes us feel bad or
is embarrassing or puts us in violation of some, uh, contract.
There's, it is a broad, there's a broad area

(20:09):
there, and there's a lot of nuance to it.
I'm not saying people who care about this stuff are
misguided, but it does lead to an observation that
a lot of the upstart AI companies are able
to innovate far faster and get further
than a lot of the large enterprises, specifically because, as a natural
course of, of growth, at some point your perspective shifts from

(20:31):
capturing upside to protecting against downside to risk management.
Uh, if a small, uh, small company starting out with a coding
assistant winds up having its AI say unhinged, ridiculous things, well,
that's a PR experience that could potentially end in hilarious upside.
Whereas for a giant hyperscaler with very
serious customers, that could be disastrous.

(20:53):
So they put significant effort into guardrails rather than innovating
forward on capabilities because you have to choose at some point.
That's right.
Yeah.
I, yeah, I agree.
And I see that, you know, as well, right?
I mean, that's the, that is the big thing is,
is it depends, it depends on the company size.
Um, and, you know, ultimately what's, like you said, what's the risk

(21:13):
that they could, that they could accept, uh, at the end of the day,
if something, if the worst case scenario actually happens, right?
Oh, absolutely.
It's, it's the same mentality as well that causes companies
to freak out about things that frankly they don't need to.
Like, well, I was about to sign a deal for multiple millions
of dollars with this company, except that one dude on
Glassdoor who had a bad time working there for three months.

(21:34):
I don't know.
Nevermind.
Like, that does not happen among reasonable people.
But I understand the reflexive, oh, dear God, what's happening here?
Yeah, I agree.
And, and I'm kind of in that camp as well, which is that,
um, there's always gonna be positive and negative things out
there, uh, on a, on a, on, you know, on any company, right.

(21:55):
Um, but I, I tend to, I tend to live on more of the rational side of things,
which, like I think you get to, is that some bad things or, you know, some
negative Reddit posts that may happen because of a bad experience,
that again, that's not gonna destroy a company, right?
People are gonna, people are gonna buy because they like
the people, they like the product, they like the technology.
I, I agree that's, that is what sensible people tend to do.

(22:17):
Uh, getting back to, I guess, your place in the ecosystem on some level.
Uh, one thing that becomes a truism with basically every
workload past a certain point of scale is you don't have an
observability vendor so much as you have an observability pipeline.
Different tools doing different things, uh, from different points of view.
As GenAI proliferates into a variety of workloads and

(22:41):
a variety of different ways,
why is it that customers are going to Dynatrace for this
instead of, ah, we have 15 observability vendors and now
we're gonna add number 16 that purely does the AI piece.
Yeah.
Really good question.
Um, and I think the answer to that lies,
particularly again in the more enterprise space, in

(23:03):
the shift that I've observed happening, pun observed, right?
Um, what I've seen happen, even before,
I'd say, the AI boom or people really going into AI, was ultimately that
people were consolidating things; they were really more on a consolidation play.

(23:23):
Uh, which is to say, you know, we're trying to, we're trying
to get down from the 15, like you said, maybe in your example,
you're trying to get down from the, the 15 different vendors
that do very similar things and maybe get down to, to four.
I just make up a number, right?
It's not always many to one.
It rarely is, but it can be many to few.
And the reasons, you know, there's a ton of reasons

(23:43):
why that's beneficial around doing that.
There's economics, there's, you know,
there's efficiency gains and all that stuff.
And so I, I've seen that start to happen, and so that's why,
kind of going back to your question, why would somebody,
let's say, you know, look at somebody again like a Dynatrace when you
get into the AI observability space, instead of finding some, let's

(24:04):
say, point solution that maybe does that specific niche, it goes
back again to, it's a consolidation play because you know, customers
just don't want to have to manage a portfolio of, you know, 16
or 20 things that are very similar.
Oh, I agree wholeheartedly with that.
Absolutely it does.

(24:24):
But the reality as well is so many, uh, there are a bunch
of terrific observability companies that once they reach
a certain growth tipping point, need to do all things.
And I think it's to their detriment where if you have a company that, I
dunno, emphasizes logging, and that is their bread and butter and that
is what they grew up doing, and now they're okay, we need to check a box.
So we're gonna do metrics now.

(24:45):
A week after they launch, someone who's never heard of them
before stumbles across them, implements their metrics
solution: this doesn't seem well baked at all,
I guess it's all terrible
across the board.
It becomes harder and harder to distinguish in
what areas a particular vendor shines versus which is
more or less check-the-box as part of a platform play.
Ideally in the fullness of time, they fill in

(25:06):
those gaps and become solid across the board.
But it, it still also feels like a bit of the
multi-function printer problem where it does three things.
None of them particularly well.
How do you square that circle?
Yeah.
It, it's, it's a very difficult one to square, like you said.
Um, now the way that we look at it, you know, and this is,
you know, it could be different from company to company.
The way we look at it is we, we try not to be the, oh, well, we'll

(25:29):
release this thing because it, you know, sounds like the market thinks
they want it, but it does maybe 5% of what people really need it to do.
Um, and so we look at it from that perspective
of, you know, what is the real pain point?
Like I always like try to work backwards.
What's the real pain point that customers have?
Is there additional value that they gain?

(25:51):
Because we can, because of the rest of the platform, right?
The, the synergy you can get from the rest of the platform.
If the answer is no to those questions, then we have to
discuss whether that's a real area that we wanna invest in.
Um, because like you said, it, it's like,
great, we do this one little thing very well.
But even from a Go to Market standpoint,
it doesn't usually take off because

(26:13):
you're trying to sell to an audience, or
provide value to an audience, where you're only doing 5% of what they really need.
Yeah, and I think that that is the trick and the challenge,
because you simultaneously want to be able to provide all things
to all people, but you also have to be able to interoperate effectively, because
every environment is its own bespoke unicorn past a certain point of scale.

(26:36):
No one makes the same decision the same way.
So as you start aggregating all those decisions together, things that
make perfect sense for one customer might be disastrous for another.
And, and you're always faced with a challenge of
how configurable do you want to make the thing?
Do you want to have it be highly prescriptive and it'll be awesome this way?
Or do you want someone to basically have to build their
own observability from the spare parts you provide them?

(26:58):
It's a spectrum.
Yeah, I, I agree.
A hundred percent.
And like I said, kinda going back a little bit like the way that we
look at it is, you know, we view it as the, the power in observability
now, and then the power of observability kind of going forward is
having, if you're collecting a whole bunch of data and that's growing
exponentially, the data in context, having the context of it,

(27:22):
and that's what can provide you actual value at the end of the day.
Uh, so that's why I said, would we go into something
that doesn't kind of marry that up or doesn't make sense?
Like I said, it doesn't provide value to a lot of people, but it can.
So again, going back to the AI observability stuff:
one of the things we do, and that is a little bit unique to

(27:43):
us, is we have a topology model of, uh, you know, a customer's
environment, a real-time view of, like, this dependency belongs
to this and this and that, and all these types of things.
Now that you're injecting a brand new kind of, let's say, piece to your
topology, that being the AI infrastructure, if you're going to do that, uh,

(28:04):
inside of your application space, you're gonna wanna have that context now
of how do these maybe distributed systems, how are they interacting with it?
And then, so there's, like I said, a lot
of value in doing it in that, in that case.
And that's really where we focus.
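
As a toy illustration of the idea, not Dynatrace's actual topology implementation, the sketch below treats the new AI components (gateway, vector store, model endpoint) as nodes in the same dependency graph as the rest of the application, so you can ask which services are affected when one of them degrades. All service names are hypothetical.

```python
# Illustrative toy only, not Dynatrace's topology model: the GenAI pieces become
# nodes in the same dependency graph as the rest of the app, so you can ask what
# is impacted when one of them degrades. All service names are hypothetical.
from collections import defaultdict

edges = defaultdict(set)  # service -> the services it depends on

def depends_on(service: str, dependency: str) -> None:
    edges[service].add(dependency)

def blast_radius(service: str) -> set:
    """Everything that directly or transitively depends on `service`."""
    impacted, frontier = set(), {service}
    while frontier:
        frontier = {s for s, deps in edges.items() if deps & frontier} - impacted
        impacted |= frontier
    return impacted

# Wire the AI components into the existing application topology.
depends_on("web-frontend", "checkout-service")
depends_on("checkout-service", "support-assistant")  # new GenAI feature
depends_on("support-assistant", "llm-gateway")
depends_on("llm-gateway", "model-endpoint")
depends_on("support-assistant", "vector-store")

if __name__ == "__main__":
    # Who is affected if the model endpoint starts misbehaving?
    print(sorted(blast_radius("model-endpoint")))
```
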
Yeah, and I think that's a very fair positioning to have.
And I, I, not to name names, but I do talk to a lot of companies

(28:26):
about observability because, like it or not, the ultimate arbiter
of truth of what's really running in your environment is the AWS
bill. Observability via spend is not a complete joke, and I hear
complaints about vendors.
It's always the squeaky wheel that gets the grease.
I don't hear complaints about you folks very
often, though I do find you in these environments.

(28:47):
So your positioning and the way that you're talking to customers and
the way you're addressing their problems is very clearly onto something.
Believe it or not, you don't just live in, uh, in
one of the boxes of the Gartner Magic Quadrant.
You, you are out there in the real world.
Yes,
yes, yes, we are.
Uh, and like you said, we, uh,
we are predominantly deployed, like I said, in the larger enterprise space.

(29:10):
And you know, the way that we look at, you know, how we deal with
customer interactions, so, maybe to go back to your point
before about why we're not coming up as the squeaky wheel:
we prefer obviously to be the valuable wheel, or the
valuable cog in the wheel, per se; you know, we
work very, very diligently to ensure that, you know, we're
solving the customer's problem, not just

(29:31):
finding a way to get them to use something new.
We want to focus on the fact that you have a
new challenge and how can we help address that?
Where's the value in the platform that can help you address
that, not just, Hey, use this new thing because we have it.
And maybe you gotta be cool.
I cannot adequately express how much of a differentiator that is

(29:53):
between you and other vendors that I come across fairly regularly.
It's, well, we need to boost market share.
Buy this thing too.
It's, but, but I don't want to buy this thing.
Well, tough.
It's now getting rolled into your next contract.
It
becomes a challenge of, at some point, you
have to start going broad instead of going deep.
And I still think that this is an emerging enough space where that doesn't work

(30:16):
for everyone, nor should we try and force that square peg into the round hole.
No, and like I said, and, and at the end of the day, all it
does, if you do that, it creates frustration on two sides, right?
Because one.
As you can, as you're saying, like customers get frustrated because now
they have, are either paying for something that's not really valuable,
um, or it's not working out, and then, you know, then what, from from a

(30:39):
company standpoint, you've now just created a, a, let's say a, a bad taste
in somebody's mouth or a bad relationship and it, like I said, it doesn't
really, it doesn't really have benefit in the long term for either side.
It doesn't work.
It can't work.
It's, I like the idea of sustainable companies doing things
that are not necessarily as flashy, but they get the work done.

(31:01):
I've, I've always had an affinity for quote unquote boring companies.
It's, it's a lot less exciting than living on the edge and being
in the news every week, but maybe I don't want my vendors to
constantly be jostling each other for headlines instead
of solving the actual problem that they're paid to solve.
Exactly.
I really wanna thank you for taking the time to chat with me.

(31:23):
If people wanna learn more, where should they go?
Yeah, so easiest way of course is dynatrace.com.
Plenty of information there.
Um, and we also have, uh, well, really two things you could do.
You can certainly take a free trial.
Again, no strings attached.
You don't have to put any payment information in, nothing like that.
Um, and you could just deploy it in your own environment, actually

(31:43):
see it work, which works very well, and, uh, you know, what you
can also get access to is, uh, we have a playground as well.
So if you don't even wanna install it, you know, try it in your
own environment yet, you're not ready or can't do it,
we have an actual environment that's running that
you can actually play around with the actual live product,
which is fantastic.
I love that easy exposure to, here, play with this

(32:03):
and see how it works in a, sure, somewhat contrived, but not
massively so, environment, as opposed to, oh, you wanna learn how our product works?
Click here to set up a call with the sales team.
I understand that that is how enterprises buy, but there's also small
scale experiments where people just want to see if the thing works.
Usually in weird hours and putting that blocker in for people not being able

(32:24):
to get to actually kicking the tires doesn't serve anyone particularly well.
I, again, it doesn't work for every product, but it should for this.
Yep.
I agree.
Thank you once again for your time.
I really do appreciate it.
Uh, Wayne Segar, Director of Global Field CTOs at Dynatrace.
I'm cloud economist Corey Quinn, and this is Screaming in the Cloud.

(32:45):
If you've enjoyed this podcast, please leave a
five star review on your podcast platform of choice.
Whereas if you've hated this podcast, please leave a five star review
on your podcast platform of choice along with an angry, insulting
comment that we'll have no idea when it showed up, just because
honestly, we don't have great observability into those things.