
May 6, 2025 48 mins


Justin Ryburn is the Field CTO at Kentik and works as a Limited Partner (LP) for Stage 2 Capital. Justin has 25 years of experience in network operations, engineering, sales, and marketing with service providers and vendors. In this conversation, we discuss startup funding, the challenges that organizations face with hybrid and multi-cloud visibility, the impact of AI on network monitoring, and explore how companies can build more reliable systems through proper observability practices.

Where to Find Justin

Show Links


Follow, Like, and Subscribe!


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
You have what you call general partners and you

(00:01):
have limited partners.
The general partners are the people who are basically running the fund. They're raising money, and then you have what are called limited partners; they're the people putting the money into the fund. I specifically picked Stage 2 because of their particular investment strategy. They're a venture capital fund, so they're investing in startups, and they're specifically trying to raise

(00:22):
money from limited partners, from LPs that are go-to-market leaders at other startups.

Speaker 2 (00:40):
Hello and welcome to another episode of the Cloud Gambit. With me is my co-host, Yvonne, who just got done traveling. How was your trip, Yvonne?

Speaker 3 (01:06):
Yeah, yeah, just got back from Google Cloud, like workload migration and cloud foundations and all of that, so it was a good week. Glad to be home, though.

Speaker 2 (01:17):
Were you excited to talk about AI all week?

Speaker 3 (01:21):
I think so. I mean, it's always interesting to see what's coming, and we know that it's going to change the world, and so, yeah, it's always fun to talk about what could be.

Speaker 2 (01:32):
Awesome. Any traveling difficulties, by the way, or was it all smooth?

Speaker 3 (01:37):
It was pretty smooth. Yeah, yeah, I booked direct flights, which is always a good trick. And what I will say is I drove through some pretty good Kentucky flooding to get to the airport, but after that it was good. Had a lot of water here this spring.

Speaker 2 (01:55):
Yeah, lots of that to go around. Well, anyway, with us is Justin, who was actually recently in Kentucky as well doing some sightseeing. Both of us, Yvonne and myself, had the privilege to hang out with Justin at a USNUA event not too long ago in Lexington, Kentucky, of all places.

(02:17):
But how are you doing today, Justin?

Speaker 1 (02:19):
I'm doing well, doing well. I actually missed out on Google Cloud Next because I was in Kentucky. We were in opposite places yet at the same time, Yvonne.

Speaker 3 (02:27):
Yeah, hey, the Bourbon Trail is always a good
thing.

Speaker 1 (02:31):
It is fun. Like you said, a lot of water in Kentucky, a lot of flooding. It was amazing, just in the few days we were there, how quickly it receded, though.

Speaker 3 (02:38):
Yeah.

Speaker 2 (02:41):
Yeah, my whole backyard was pretty much flooded. We have a creek that runs to the back of our property. That thing was like a river. It was crazy. My son actually took one of those toddler plastic play pools, the round ones, threw that sucker in there, jumped in, and just

(03:04):
went. I'm like, yeah, don't do that, you could get hurt. It's a little too vicious to be doing that right now. So, good times.
But anyway, thank you so much for joining us. I think we frequent a lot of the same circles. I feel like I see you at pretty much every conference I go to.

(03:26):
We ran into each other at re:Invent recently. Was that last year? Yeah, last re:Invent, that was in December in Vegas. And most of the Network Automation Forum conferences, all sorts of stuff. But yeah, you've been up to a lot of interesting

(03:46):
things lately. But before we get into the topics of the conversation, do you want to just give us a little bit about your background, kind of when you started in tech? It doesn't have to be too deep, but just a summary.

Speaker 1 (04:00):
Sure. I'd say the summary of my background is that I came up through the networking silo. I'm old enough to have started before there really were silos, but as people kind of specialized in IT or in technology, networking was what I was passionate about. So I spent a lot of time doing that and, like you said, I now spend my days going to a lot of these conferences and doing

(04:21):
either public speaking engagements or helping us staff booths and so forth. My current job title is field CTO with an organization called Kentik, and we do some cloud-related stuff and some traditional network-related stuff, doing what we call network observability. Field CTO, as Yvonne asked a little while ago,

(04:45):
what in the heck is a field CTO? Yeah, I find it's about 50-50 when I introduce myself whether people know what that means or not, so I'll explain. And, you know, like a lot of jobs in tech, it means different things in different companies. But the way we defined it at Kentik is, I do a little bit of

(05:06):
brand awareness and thought leadership, so doing some blogs and podcasts and speaking engagements at events, and a bit of executive involvement with our customers, so building some executive relationships with some of our key buyers. Really, what that comes down to is translating from one engagement to another the use cases for our product and how

(05:28):
customers have found success out of our products in the past. So, for example, if I'm at re:Invent, where you and I ran into each other, William, and I talk to five different customers, I start to pick out themes from those conversations that I can then bring to other executives at our customers and say, hey, I've heard from two or three or four other customers that they use the product like this.

(05:49):
They're really getting value. This is where they're helping save money, or they're helping be more efficient in how they run their operations by implementing the product in this way. So it kind of helps me bring some value to those conversations, which is really what the executives are looking for: to not just understand your product and how the widgets work, but how they're really going to have a successful

(06:10):
business outcome from that.

Speaker 2 (06:14):
Gotcha. So that sounds like it rolls up under sales.
Yeah, it does.

Speaker 1 (06:22):
It rolls into the sales organization.

Speaker 3 (06:23):
I'm peers, at our particular company, with the VP of sales, but help him kind of drive revenue. Right, kind

(06:46):
of willing to go out and beat the bushes and spark a lot of those initial conversations, and they're willing to hear no a lot, and it is an important and valuable role. But the peer of this, the technical peer of the sales leader, is really the person who brings trust and credibility to the product and is able to validate the stories that the salespeople tell in a way that's going to resonate with a

(07:09):
technical audience. Because at the end of the day, the thing that you're selling has to work, and the job of, you know, technical sales leaders is to validate that. And it's a pretty important thing to do, to be able to put reality to a lot of the stories that our friends

(07:31):
and peers over in sales tell. My accent's getting the best of me here.

Speaker 1 (07:39):
Yeah, and I think what we're seeing is, as the economy tightens up a little bit and interest rates are high, there's a lot more pressure than ever on the CFOs to make sure that if they're going to invest money in buying, you know, something from a vendor, they're going to have a successful outcome. They're going to have an ROI for that spend, right. And so, you know, more and more sales folks,

(08:02):
as you're saying, are having to have conversations around: well, I understand how the product works, I understand what we're trying to accomplish, but what's the business outcome? How is this going to help me? Where's my ROI going to come from? And sometimes, with things like public cloud, it may be obvious, because they're going to be able to shut down a data center. They're going to be able to be more efficient with their

(08:24):
operations. They're going to be able to scale faster. Some of those are obvious. Some products are a little bit less obvious. Unfortunately, my employer's product is like that. It's a very highly technical sale, and it's not as big a brand, a household name, and so forth. Not every executive understands what they may be able to get out of our product. That's where myself and my team come in.

Speaker 2 (08:47):
Makes sense. So, how would you differentiate? I've done some work with one of your counterparts, Phil Gervasi. How would you differentiate what a field CTO does from what a tech evangelist does? It's like one is facing the community and one is facing customers, but they're

(09:07):
kind of sort of doing the same thing, just through a different lens.

Speaker 1 (09:11):
Yeah, if you think about it from a Venn diagram, there's definitely some overlap. So Phil and our technical evangelist team, they report into marketing officially, right, and so they're doing a lot of that brand awareness and thought leadership as well that I was talking about. They're helping us build pipeline, so they're trying to

(09:34):
build awareness in the community that we're out there, that our product exists, so that our sales teams can have those very initial conversations with our customers. That's really what they're measured and compensated on: driving what we call top-of-funnel leads and getting more interest in the top of the funnel. Where my role comes in, I do a lot of the same type of tasks when it comes to the external-facing brand awareness stuff,

(09:55):
but it's more, let's say, kind of mid-funnel, more with existing opportunities where we're already engaging with customers, or with existing customers, helping them get more value out of the product, helping them expand their usage of our product to get more value out of it. So, yeah, there's definitely some overlap, and I work pretty closely with Phil and the rest of our tech evangelism team.

(10:15):
We share a lot of ideas and content and so forth and collaborate on a lot of that.

Speaker 2 (10:22):
Gotcha, and thank you, that was a really good explanation, by the way. I was actually reading a blog the other day trying to pick apart the differences between all the different evangelist, sales, and field CTO type roles. A lot of these are new titles or new roles that didn't exist

(10:42):
that long ago, so it can kind of be confusing, for sure. But sometime earlier last year, I think it was last year, you made an announcement on LinkedIn that you were joining Stage 2 Capital as an LP. Well, actually,

(11:04):
I don't want to get too ahead of myself here, but do you want to just start, before we get into the details, by defining what a limited partner is in venture in the first place? That might be helpful.

Speaker 1 (11:17):
Yeah, sure. So we'll come back to venture capital in just a second. But the way, I'll say, the SEC defines private money funds, there's a specific designation which I don't remember off the top of my head. When you're setting up one of those types of private money funds, you have what you call general partners and you have

(11:40):
limited partners. The general partners are the people who are basically running the fund. They're raising money from private investors to go and invest that money one way or another to get returns. Could be that they're investing in real estate by buying properties. Could be that they're investing in data centers. We hear a lot about companies like BlackRock

(12:01):
investing in AI; they could be doing some stuff like that. There's a lot of different ways they could be investing that money, but the general partners are the ones running that private money fund. The fund manager in a mutual fund is probably the easiest way to think about it. And then you have what are called limited partners. Those are the people putting the money into the fund, and they can have all kinds of different relationships with the

(12:23):
fund. Some of them can be just like, hey, I'm giving you my money, I'm trusting that you, as a general partner, are going to return my money to me plus interest, and they're pretty hands-off. I specifically picked Stage 2 because of their particular investment strategy. They're a venture capital fund, so they're investing in startups, and they're specifically trying

(12:43):
to raise money from limited partners, from LPs that are go-to-market leaders at other startups. So they're going out trying to find people to invest in their fund who not only can obviously give them some money that they can then turn around and invest in these startups, but also people who can invest some time, based on their experience, to help coach their portfolio companies. So, help their CEOs and say, hey, you've been successful

(13:06):
before as a field CTO, what does it look like to build a field CTO team? Or, prior to this role, I actually helped build the SE team here at Kentik, right, the solution engineering team. Like, how do you build out and scale the solution engineering team at a modern SaaS company? As a founder, there's a lot of things that people who are founding a company just don't know, because they don't have the experience,

(13:26):
right, and surrounding themselves with people who've been there and done that, who understand what they're going to need to do to be successful, and who can give them some coaching, is what Stage 2 is trying to accomplish with their LP network.

Speaker 2 (13:39):
So you're basically blending together the venture capital aspect of it, but also the go-to-market and coaching. Because, let's be real, if you're working for a startup and you want to increase sales, the way to do that is not to just quadruple your sales team and do nothing else. There are other things that you might want to pay attention

(14:02):
to.

Speaker 1 (14:02):
If only it were that easy, where four extra reps equals four extra units of revenue. If only it were that easy. Unfortunately, it's a little more nuanced than that, right? There's things, you know, like they call product-market fit, right, where you've got to make sure that your product's evolved to a phase where there are repeatable use cases, and there's a

(14:24):
lot of different aspects to that that, you know, founders need some help and coaching in, in a lot of areas.

Speaker 3 (14:32):
Well, and so you mentioned product-market fit. What's the sweet spot there for Kentik? Like, what's your ideal customer? Where does Kentik make the most sense? How do you see that playing out in the market?

Speaker 1 (14:50):
Yeah, so, like I said earlier, we're a network observability platform, and what that means is we can ingest data from a lot of different networks. The way we really differentiate ourselves is the variety of different networks that we can ingest data from. So we can do traditional data center networks, we can do enterprise SD-WAN environments, and we can do what I call the big four

(15:14):
cloud providers, so Amazon, Microsoft Azure, Google Cloud, and Oracle Cloud. They all call it something slightly different, but we can ingest what is essentially VPC flow logs, telemetry data about the traffic that's flowing through those networks, into our product. We do the same thing for service providers on large carrier networks.

(15:34):
So we can provide them network observability around the traffic that's flowing in those environments. There are some performance use cases, obviously, there's some planning use cases, and there's some security use cases that customers can use our product for.

Speaker 2 (15:47):
Talking about that, there's a lot of stuff when you say observability. There have been a lot of changes with observability over the past few years, especially with public cloud, with the CNCF and OpenTelemetry kind of setting a framework or a baseline

(16:09):
that some vendors begin building on, you know, as far as the technology is concerned. But one question that I have right out of the gate is, do you see a big difference? Because Kentik is framed as network observability. Like, that's the focus, that's the essence of what Kentik does.

(16:30):
But do you see a lot of overlap with cloud stuff or cloud products, or are cloud products just different, you know, because they're really giving you observability wrapped up for different things other than the network?

Speaker 1 (16:48):
Yeah, I mean, again, like most things in tech, right, there's a Venn diagram. What we find with a lot of our customers is that the cloud-native product offerings, depending on the particular cloud we're talking about, some are more mature than others, some are better than others, but the reality is most of them are really only looking after their own cloud network, right? So if I go log into Amazon's portal, they have

(17:11):
CloudWatch, right, and CloudWatch does a great job for customers who are only in AWS. But if they have, and this is a specific example from a customer that I was talking to a while back, they have a data center that's using Cisco's, I forget what they call their SDN product, in their data center, but they have their own portal, right, where they can see their data center fabric.

(17:31):
They can see all the applications deployed on the data center fabric. They have a portal they can log into and see all that telemetry. But if they have an application in there that's going across their SD-WAN and terminating in AWS, they have three different portals they've got to log into to look at that data. They've got to look at it in their data center, they've got to look at it on their SD-WAN, and they've got to look at it in AWS to troubleshoot it. The approach that we've really taken is, let's pull all

(17:53):
that data into, sorry, I'm going to use a marketing term, a single pane of glass, so the customers can see across, I guess you would call it, hybrid cloud, right. But multi-cloud is another use case we see with a lot of customers, where they have some workloads in Google and some in Amazon, there's traffic flowing between them, and they need to be able to do troubleshooting.

(18:14):
They need to be able to see the traffic that's traversing both those clouds so that they have full visibility of all of their traffic across all the various different environments. And that's typically really difficult, or near impossible, to do just using the tools that any given cloud provider provides.

Speaker 2 (18:34):
So is that the pain point right there? You know, being a field CTO, I'm sure you're privy to all sorts of customer pain points, I imagine. So is visibility the main thing, or are there other big pain points that you see trending across the board with customers?

Speaker 1 (18:54):
I mean, at the end of the day, it all comes back to the visibility, right, and the reason that they care about that visibility is going to differ from customer to customer, what they're dealing with, and even which team engaged us, right. And so if I were to break it down the next layer deep, what I would say is it comes down to one of three things: either performance, cost, or security, right?

(19:14):
So a lot of times when we get cloud customers specifically coming to us, they have migrated to the public cloud. Their dev team probably started that initiative, the networking team was brought in late in the game, and so it wasn't well architected. And so they're spending a lot of money on the cloud

(19:36):
environment, maybe a lot more than they planned on, a lot more than they hoped for, and they want to be able to see all their cloud traffic to be able to figure out how to re-architect the network to be more cost-effective, right. That kind of dovetails into performance; those are kind of two sides of the same coin, right? If it's not well architected, if it wasn't well designed,

(19:56):
you're going to have some performance implications from that as well. And then the third one is around security, right. A lot of companies, if they had a traditional perimeter-firewall type of security posture, when they migrate applications to the public cloud it's a lot harder to figure out whether the security posture is being enforced correctly, right,

(20:21):
and so being able to see across all those environments, see what's traversing them, see what's being accepted, see what's being rejected, even be able to get proactive alerts when something that used to be accepted is now being rejected because somebody deployed a new EC2 instance or a new GCE instance and didn't update the firewall filters to allow that traffic in.

(20:41):
You know, there's a lot of different things around that that we can help customers with.

Speaker 3 (20:47):
So you guys ingest a ton of data from a ton of sources. Can you talk a little bit about how you manage the scale and availability of that data? Because flow data can get out of hand very quickly from a volume standpoint. So can you talk about how you handle that a bit?

Speaker 1 (21:07):
Yeah, that actually was the original problem statement the company was founded on, right? This is a big data problem. Our founders had, I'll say, tried to solve this on their own when they were running networks, realized it's a lot bigger problem than they had thought, and started Kentik. But, you know, we run our own, I'll say, our own data center. We lease space in a data center and have our own infrastructure

(21:29):
that we pull the data into, and we've built our own systems that are optimized for flow data, because flow data has very high cardinality. What that means is that one flow record versus the next may have big differences in the various different fields that are in it. A lot of times people want to get down to a

(21:54):
given IP address, so you have to store all the IP addresses that are in the flow records. So there's a lot of different challenges with flow data, whether it's coming off a traditional network or whether it's coming from the cloud, versus things like SNMP or streaming telemetry that are a little more structured, I guess, in the way the data is ingested. So we ingest the data, no matter which protocol we get it

(22:16):
in from. We normalize it into our own internal format that's slightly compressed, and then we do enrichments. So we'll take in the data, and I'll just use Google Cloud as an example. We'll take in VPC flow logs from Google Cloud. They'll tell us things like source and destination IP addresses, but what a customer really wants to know is, well, which GCE instance does that belong to, which project

(22:37):
does that belong to, right? And so what we can do is scrape the API to pull in that metadata and store it. So when we ingest the new flow log, we can enrich it, basically add more columns for each row that we ingest that show that additional metadata, and store that along with it. So when they go to query that data, they're able to do the query really fast. Because that's the other challenge that we see with

(22:59):
a lot of customers who try to build their own stack: if you try to do that correlation at query time, your queries become really complex and really slow to return results.
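
To make that ingest-time enrichment idea concrete, here is a minimal sketch in Python, with hypothetical field names rather than Kentik's actual schema, of joining flow records against cloud instance metadata as they are ingested, so that later queries group on stored columns instead of correlating at query time:

```python
# Minimal sketch of ingest-time enrichment for VPC flow log records.
# Field names and structure are hypothetical, not Kentik's actual pipeline.

# Metadata scraped periodically from the cloud provider's API,
# keyed by private IP address.
instance_metadata = {
    "10.0.1.15": {"instance": "web-frontend-1", "project": "prod-web"},
    "10.0.2.22": {"instance": "db-replica-3", "project": "prod-data"},
}

def enrich(flow_record: dict) -> dict:
    """Add metadata columns to a raw flow record at ingest time."""
    enriched = dict(flow_record)  # keep the original fields
    for side in ("src", "dst"):
        meta = instance_metadata.get(flow_record[f"{side}_ip"], {})
        enriched[f"{side}_instance"] = meta.get("instance", "unknown")
        enriched[f"{side}_project"] = meta.get("project", "unknown")
    return enriched

raw = {"src_ip": "10.0.1.15", "dst_ip": "10.0.2.22", "bytes": 48_000}
print(enrich(raw))
# A query like "traffic by project" now groups on a stored column
# instead of joining against the metadata source at query time.
```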

Speaker 3 (23:10):
Well, and so slow that the data is then not useful to you, right? If you're trying to do real-time troubleshooting or understand what's going on in the moment, querying vast volumes of data is not easy to do unless you have very well-tuned data systems and structures to do that.

Speaker 2 (23:31):
Well, one thing I was going to say, bringing a lot of this together, and maybe you could just educate me, I don't even know if this is a marketing tactic with Kentik or what it is, but Doug, I can't remember his last name now for some reason, he does the RCA posts on LinkedIn. Oh yeah, Doug Madory. Those are so good.

(23:55):
I pretty much read all of them end to end. They're all fantastic. How did that start? I mean, that's a good marketing exercise for Kentik.

Speaker 1 (24:06):
A hundred percent. So he's a tech evangelist as well; he's peers with Phil, who we were talking about a little earlier, right. And again, at the end of the day, what he's trying to do is raise awareness that we're out here as a company, as a brand, and, to your point, Doug does a fantastic job of doing post-mortems. Could be on undersea cable cuts, could be on BGP

(24:27):
hijacks or leaks that take place out on the internet. Those are the two biggest ones that he does most of his reporting on, and he does a fantastic job. It's not really even a Kentik pitch when you read them, right? It's just like, hey, Kentik has an interesting data set. We have interesting big data in our system that we can anonymize and see these things that take place on the

(24:48):
internet that are interesting to other people who are in the industry. And so he provides really good analysis around a lot of these, just call them internet weather events, whether it's an undersea cable that was cut, either accidentally or as an active attack.

(25:08):
He can see changes in that. He can see when the internet shut down because a political regime has shut it down because they're about to do a coup or there's a test coming up. There's a lot of different reasons that countries will shut down the internet in their country, to knock their country offline for various different reasons.

(25:28):
There's a lot of really interesting, fascinating geopolitical and internet type of things that Doug's able to see. A lot of it just comes down to looking at the BGP routing table, but some of it comes from the aggregated, anonymized flow data that we have in our system as well.

Speaker 2 (25:46):
Yeah, they're so good. And this is why I said it's an amazing marketing exercise: one of them got picked up by a huge tech publication. It was referenced in there. I think it was TechCrunch, or one of those. Maybe it was TechCrunch. But that's huge. I mean, that's something that marketing teams would pay top

(26:07):
dollar for, you know, to get featured in on some of these pieces.

Speaker 1 (26:13):
Well, a hundred percent, Doug's reporting is amazing. I mean, he was actually called by the Washington Post "the man who sees the internet," because he can look at these different outages and he does nice reporting on what was going on during the event and how it happened, and so forth. A lot of the news outlets in this country will pick up the phone

(26:33):
and call him when there's any type of internet outage, because he's just got a reputation for doing a good job of reporting on this stuff. And I don't think I fully answered your original question, which is how this got started. Before Kentik, Doug was at another company called Renesys, which was ultimately acquired by Dyn, and then they were acquired by Oracle, and they did BGP route table analysis. So

(26:57):
that's kind of how he got started, way back at Renesys, doing analysis of the BGP table to find these types of trends in the industry. He's just been doing it long enough that he's built his own brand and reputation as someone who has really good knowledge on this type of thing and interesting things to say, and, again, he does it in a way that provides value to people beyond just a product pitch, which is really the art

(27:21):
behind tech evangelism.

Speaker 2 (27:24):
Yeah, and it's incredible too, because now's the best time to do that type of stuff. So many businesses, you know, are starting out in the cloud, and whenever there's an outage or whenever something major happens, whether it's a Cloudflare thing or whatever it is, something happens on the internet for some provider, or

(27:47):
multiple providers are impacted, it can have serious problems, and everybody feels it. It's huge; the whole world knows about it. So coming back and saying, oh hey, this is kind of how that happened, this was, you know, the order of operations of what led to, you know, XYZ.

Speaker 3 (28:05):
It's huge, it's very cool, yeah. So, a question that I have. There are always emerging technologies and trends. You know, there's OpenTelemetry, which we're hearing a ton about, there are technologies like eBPF, and what emerging

(28:27):
technologies, and we can even include AI in this list, do you see impacting Kentik's business and how you provide observability? Are there new trends that you're incorporating into your platform? As you look to the future, where are some of the big impacts that you see, and how

(28:48):
are you, you know, changing your business to respond to those?

Speaker 1 (28:53):
I think all of those are interesting, but I'm glad you added AI, because that would be my answer, right. You know, I'm sure you heard a lot about AI last week at the Google Cloud Next conference too, right. And we're doing it in our product too; we're still experimenting with a few use cases that we think will be really interesting for customers.

(29:13):
The very first one was what we called Journey AI. It's essentially using an LLM to allow customers to ask a question in natural language and get back a visual as a response, right? For a long time, customers could, and would have to, go into our UI and navigate and build themselves basically the

(29:35):
graphical equivalent of a query, right? Think of it like a Grafana dashboard, right? You can build all these different panels, all this different data and different graphs on it, but you have to know how to do all of that, right? Your C-level executive is not going to take the time to learn our UI and its nuances, different from every other UI that they've seen, right?

(29:56):
So the idea is, all right, let's give them the ability to go in and say, hey, Kentik, what's going on with my network today? Why do I have a performance issue? And for us to be able to go and turn that into a query, using basically an API into a couple of different LLMs. We have a couple of them that they can choose as an option.

(30:17):
So that was our first foray, and that actually works amazingly well. But we want to take it a step further and say, okay, we have all this rich data across flow, across the synthetic tests that we do, across all the cloud stuff we take in, all the SNMP and streaming telemetry data we take in.

(30:37):
How do we provide faster root cause analysis, right? How do we get to the root cause of a problem faster? And so we have a new feature we've come out with, called probable cause analysis, where you can basically highlight a spike in traffic in our UI, right-click, and have the AI say what's the most likely cause of this.

(30:58):
And of course it's going through and looking for correlation in the data points to say, well, I had this spike in traffic, and, I'm going to pick a bad example maybe, but Fortnite released a new episode, right, and so the increase in your OTT traffic is coming from these three CDNs, and those three CDNs are the ones delivering the majority of the traffic from this new

(31:20):
Fortnite release, right. And then being able to take in even more data to be able to get down to actually suggesting remediation is really where we want to go with this, right, being able to say not only can we help you figure out what was the probable cause of changes to

(31:42):
traffic patterns in your network, but also try and suggest some things you might do to fix that, right.
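
As a rough illustration of the correlation step behind a probable-cause style feature, here is a generic sketch, not Kentik's actual algorithm, that ranks which dimension values (CDNs, in this example) contributed most to a traffic spike by comparing a baseline window against the spike window:

```python
# Generic sketch of spike attribution: rank which CDN (or any dimension)
# contributed most to a traffic increase. Not Kentik's actual algorithm.
from collections import defaultdict

def traffic_by_dimension(records, dimension):
    """Sum bytes per dimension value (e.g., per CDN) for a set of flow records."""
    totals = defaultdict(int)
    for r in records:
        totals[r[dimension]] += r["bytes"]
    return totals

def rank_contributors(baseline_records, spike_records, dimension="cdn"):
    """Return dimension values ordered by how much extra traffic they added."""
    base = traffic_by_dimension(baseline_records, dimension)
    spike = traffic_by_dimension(spike_records, dimension)
    deltas = {k: spike.get(k, 0) - base.get(k, 0) for k in set(base) | set(spike)}
    return sorted(deltas.items(), key=lambda kv: kv[1], reverse=True)

baseline = [{"cdn": "cdn-a", "bytes": 100}, {"cdn": "cdn-b", "bytes": 120}]
spike = [{"cdn": "cdn-a", "bytes": 900}, {"cdn": "cdn-b", "bytes": 130}]
print(rank_contributors(baseline, spike))  # cdn-a explains most of the spike
```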

Speaker 3 (31:51):
Very cool, that's really cool.

Speaker 2 (31:53):
Yeah, is AI a feature or a product? I think, for most tech companies that have been around, it's like what Google did with Google Docs and Gemini. It's really, really useful, but it's not a new product. I mean, they do have new products around Gemini, of course, but it's taking something that was already awesome, something that many people out there use, which is

(32:17):
Gmail and Google Docs, throwing some really useful AI on it, and wow. As a user of Google Docs myself all the time, some of those features are just so natural. Once you use them once or twice, it's like, oh, now I can't imagine... it just becomes part of your workflow. Yeah, well, and I

(32:40):
think at some point it's not even a feature, it's just embedded in it and we don't really see it anymore.

Speaker 3 (32:46):
But I think one of the powerful things I've seen folks do with AI, and Justin described it here, is: how do we take requests in natural language, translate those so that we query our system of record, and then hand back meaningful data? And so sometimes the AI is not necessarily actually processing

(33:10):
the data, and it's not doing generative things with it. We're using it as a translation engine to help us say what we want in natural language and translate that into the technical language of our system of record, to be able to turn data around. And that way you get the benefit of the

(33:31):
generative part, the natural language translation, but you're not getting the hallucination part, because you're turning this natural language into a deterministic question to your system of record. Right, and that is where we see a ton of value coming from AI.

(33:52):
It's like, how do I interact with my systems in a way that doesn't require me to have to speak to them in their language?
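
A minimal sketch of that translation-engine pattern, with a hypothetical query schema and a stub standing in for the LLM call: the model only converts the natural-language question into a structured query, and the system of record answers it deterministically after validation.

```python
# Sketch of the "LLM as translation layer" pattern: the model only produces
# a structured query; the system of record answers it deterministically.
# The query schema and the llm_translate stub are hypothetical.
import json

def llm_translate(question: str) -> str:
    """Stand-in for an LLM call that returns a structured query as JSON.
    In a real system this would prompt the model with the query schema."""
    return json.dumps({
        "metric": "bits_per_second",
        "group_by": "site",
        "filter": {"direction": "inbound"},
        "window": "last_1h",
    })

ALLOWED_METRICS = {"bits_per_second", "packets_per_second", "retransmits"}

def run_query(datastore, query: dict):
    """Validate the structured query, then execute it against the datastore.
    Validation keeps hallucinated fields from ever reaching the backend."""
    if query["metric"] not in ALLOWED_METRICS:
        raise ValueError(f"unsupported metric: {query['metric']}")
    return datastore.aggregate(**query)  # deterministic lookup, no generation

question = "Why is inbound traffic spiking at my sites in the last hour?"
structured = json.loads(llm_translate(question))
# results = run_query(my_datastore, structured)  # my_datastore is hypothetical
```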

Speaker 2 (34:01):
Yeah, that's a really good point, and that's something that I wanted to ask. This is almost like a thought experiment, Justin, but Yvonne kind of teed it up perfectly. In maybe the last three weeks to a month, I've heard from two separate individuals in two separate conversations. The first one, who's an IT leader at a big company, was

(34:27):
talking to me about, yeah, this AI and the networking stuff: you know, when it figures out what the problem is, I just want it to fix it and let's go off into the sunset. And then I have another person that I'm talking to who's like, oh, I hope they never get on the hook of AI trying to actually change my network flows and changing traffic patterns

(34:47):
and changing routes, because, you know, the moment something breaks is the moment we're going to have to stop using it. So these are two really different stories. And, you know, with the network, all three of us have been very deeply embedded in network engineering for a long time, and I know Yvonne and I just the other

(35:09):
day were discussing automation. Like, wow, is it still a thing? Like, we're still not automating networks, and it's like, still, why

Speaker 3 (35:17):
are we not there yet?

Speaker 1 (35:19):
Yeah, I mean, that's why AutoCon exists, right? They run it twice a year for that very reason. Yeah.

Speaker 2 (35:25):
Exactly. So how do you frame that, with this AI thing, almost like de-risking it? You know, we know that if we go and do something on the network that causes an outage that leads to millions of dollars in loss, then the business is going to say, hey, we don't do that thing anymore.

(35:45):
Or, you know, they're probably going to go to the extreme to not do that thing anymore and freeze changes or something crazy.

Speaker 1 (35:50):
Yeah, they're unlikely to take the time to understand the details of what went wrong. They're just going to ban what they perceive as what went wrong.

Speaker 2 (35:58):
Well, they've got to buy Kentik, then they'd know, right?

Speaker 1 (36:01):
Yeah, right. You know, I don't have an easy answer here. I mean, the thing I would say is there's going to have to be some amount of checks and balances, and you're going to have to build a little bit of that trust over time, whether the trust is in the human beings that are writing the code or the trust is in the AI engine that's behind the scenes.

(36:32):
You know, from Kentik and from our product strategy, at the moment we have no plans to actually make any changes on any customer's network. It's not really part of our roadmap or strategy. We have a partnership with your day job employer, William, with Itential, where we're more than happy to do some API

(36:53):
integration between the two companies and let them handle the automation, because there's a lot more that they can do to build entire workflows, build some checks into the product, and a lot of things that just aren't, at least in the near term, part of our roadmap. Right, and I think that's where a customer can really start to build some trust in AI

(37:13):
suggesting changes, and at some point maybe even going off and doing the changes. But I think the first step is just like, all right, show me what changes you would make. Yeah, that passes the sniff test. Or, whoa, no, that's a

(37:34):
hallucination.

Speaker 2 (37:34):
Let's not do that.

Speaker 3 (37:34):
Let's back that out. And a lot of that just comes down to having a check in the flowchart, if you will, or in the workflow and how you build the automation. Yeah, I think a lot of the time it's, well, something bad happened, we just have to put in a system to be sure that that never happens again, instead of taking a more mature approach, like we've seen with SRE, where, you know, you have an error budget and you

(37:57):
assume that there are going to be so many failures over the course of the year, and you determine how many, what kind of errors your business can tolerate, and then you measure that, and you use that error budget to help meter your risk from a technology standpoint. Because when

(38:17):
you've consumed all of your error budget, nope, we're not doing anything risky at all. But if you don't approach it that way, then ultimately you're going to end up in an environment where nobody ever changes anything because they're so afraid of breaking it. And you have to build a culture in your organization that allows for some degree of error ... organizations and

(39:12):
what it is, you know, what is it to be a healthy technology organization? And I think that's an even deeper problem that AI is not going to solve, necessarily, unless somebody asks, like, hey, how do I build a very solid, stable IT organization?

(39:32):
It's going to be, oh, you need site reliability engineering and here's what you need to do. And then they actually go do it.
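
To put numbers on the error budget idea, here is a small worked sketch of the generic SRE arithmetic, not tied to any particular tool: a 99.9% monthly availability target leaves roughly 43 minutes of allowable downtime, and risky changes are gated on how much of that budget remains.

```python
# Generic error budget arithmetic, as used in SRE practice.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

def error_budget_minutes(slo: float) -> float:
    """Allowed downtime per month for a given availability SLO (e.g., 0.999)."""
    return (1 - slo) * MINUTES_PER_MONTH

def can_take_risky_change(slo: float, downtime_so_far_min: float) -> bool:
    """Allow risky changes only while error budget remains."""
    return downtime_so_far_min < error_budget_minutes(slo)

print(error_budget_minutes(0.999))          # ~43.2 minutes per month
print(can_take_risky_change(0.999, 10.0))   # True: budget remains
print(can_take_risky_change(0.999, 50.0))   # False: budget exhausted, freeze risk
```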

Speaker 1 (39:39):
But yeah, part of that also has to do with building redundancy into the system, right? I mean, a lot of the ways that SREs can build those error budgets in is by having a redundant pod of applications, right? This one pod dies, this other one takes over. There's some redundancy built into that.

(39:59):
That way you can have an error budget. If you've built your network, the underlying infrastructure, to where you only have a single path, that single path is critical, because there is no such thing as acceptable errors. Especially, like I know in the Kentucky area there's a lot of healthcare companies, right? It's unacceptable for an emergency room to be offline for a couple of hours while you do a switch upgrade.

Speaker 2 (40:20):
That's just unacceptable right.

Speaker 1 (40:21):
But if there's no redundancy in the system and you can't have that switch offline, then what do you do, right?

Speaker 2 (40:28):
This is true, yeah.

Speaker 3 (40:30):
Failure domains and canary deployments and all of those, you know, fundamentals that have existed and that we continue to iterate on as best engineering practices, but those are some key ones.

Speaker 2 (40:46):
You're right, though. Both of you had great points about the redundancy and building, you know, the whole point is building your system for failure, so that when it does fail, nobody really notices. And I don't know if I ever used this example on the podcast before, but I was up in the middle of nowhere, Canada, like the middle of

(41:06):
nowhere, for a wedding many years ago, and there were not many applications I could reach from my phone. The service was, you know, okay, but one of the apps that worked, which still boggles me to this day, was Netflix. I could get Netflix and I could actually watch TV

(41:28):
shows and movies, and I always thought that was amazing. They have built their platform in such a way that I don't know if I've ever had a Netflix outage. It's highly available, and there's so much out there. They've been good stewards of the technology that they've

(41:49):
built and also good evangelists for how they've built it. They've published so much stuff on the internet. They had this great blog, I'll find it and I'll put it in the show notes. I remember reading it when it came out. It was about eventual consistency. Basically, you know, being a streaming service, when something launches, how does that work?

(42:11):
You know, asymmetric and symmetric data replication, AZs, cross-region active-active, essentially across the entire planet. It's just incredible, and it really opens up... I know it's just a streaming service, and a lot of the problems at the companies I worked for in the

(42:32):
past are much more challenging. You have many lines of business, many different applications that do different things with different scopes and different priorities, some that might touch patients, others that might not. So those situations are much more complicated, especially in healthcare. But they've done a really great job of showing, okay, this is what's possible if you want to put in the time and the effort.

(42:55):
You know, like Yvonne was talking through with the SRE mentality, the technology's there, you just have to be able to harness it and change your culture.

Speaker 3 (43:03):
Well, the great thing about the Netflix example is, you know, marrying what your business needs to run with what the business can tolerate from the technology, right? Because with Netflix, I've had a stream die every now and then, and, you know, that's probably some container that crapped out on the back end, and they just, you know, relaunched it

(43:28):
when you try to stream again, and then everything continues to run.

Speaker 1 (43:30):
They put in some buffer and cache it locally on your device so that you don't even notice, which is part of the key there too.

Speaker 3 (43:38):
Yep, very much, very much.

Speaker 2 (43:41):
It's amazing. But, like we said, you know, I'm on this because it's been so much my experience. My frustration is, yeah, there are still network devices out there that are in production that are only reachable via Telnet. That's still a thing. We're in 2025 and that's

(44:01):
still reality. And we wonder why we can't automate our networking. But we have these gigantic networks full of how many vendors, how many interacting services, how many different ways, and then variety. Variety can really kill some productivity, and it's not like

(44:22):
you can flip a switch. You can't just say, okay, I'm going to refresh every campus switch or branch device that I have, let's go. It's not that easy.

Speaker 1 (44:34):
Yeah, there has to be a business justification for doing those refreshes, right? It's not just because the CLI is old and you need a more modern one, right? You're going to have to be able to spin that to your execs on what other outcomes this is going to help with, right? And back to the conversation we were having about SRE, if we can improve our uptime, if we can come up with business

(44:55):
outcomes that are going to help justify the spend to refresh that equipment, that's different. But just going to your execs and saying, hey, I want to spend millions of dollars to refresh my gear because the CLI only allows Telnet, they're like, I don't care. That's not a business priority for me.

Speaker 2 (45:11):
Yeah, exactly. And I have one thing I want to fit in here at the end. I know we're probably coming up on time here. But vendors, like, you have customer responsibility. You have to be a good steward of your network, you have to be on top of things, you have to try to set the culture and do what you can with the funds that you have.

(45:33):
But what is a vendor's role in this ecosystem? You mentioned our teams, you know, kind of doing cool things together, and that's one of the reasons I like working where I work: we have a lot of flexibility with the way that we secure and do

(45:54):
integrations with third-party stuff. We have full support for, you know, the OpenAPI spec, Swagger schemas. We support the different authentication mechanisms, like OAuth2, mutual TLS, OIDC, and, you know, the basics for authentication, like token-based and such.

(46:15):
So we have that flexibility there, you know, to really enable environments that do have a lot of stuff that needs integration, that need to be able to take the old but also work with the new at the same time. But what do you see? How do vendors move into the future

(46:37):
with how they think about this?

Speaker 1 (46:41):
Well, I think as more and more things move to SaaS, we're going to see more and more things be API-first. So a lot of the things that you described are possible because these vendors support Swagger definitions, right, and very well-defined, very standards-based APIs to get access to the data or to the various functions that, you know, whether it's your

(47:05):
company or anybody else, need to be integrated with. And that's in contrast to some of the stuff we were talking about. If the only access you have to the device is to telnet in and use, you know, an old expect script, or something more modern that is interacting with that CLI and looking for certain returns back from it and so forth, that's going to be a very brittle and very fragile way to interact with it, versus a

(47:28):
much more modern API approach. And so the more we see these systems moving to SaaS, I think you're going to see a lot more of these integrations be possible.
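
For illustration, here is a minimal sketch of the API-first side of that contrast, with a hypothetical endpoint, token, and response shape, assuming the Python requests library: a well-defined REST call returns structured JSON instead of scraping CLI output with an expect script.

```python
# Minimal sketch of token-authenticated, API-first integration.
# The endpoint, token, and response fields are hypothetical.
import requests

API_BASE = "https://api.example-vendor.com/v1"   # hypothetical SaaS API
TOKEN = "example-api-token"                      # e.g., an OAuth2 bearer token

def get_device_interfaces(device_id: str) -> list[dict]:
    """Fetch interface state as structured JSON rather than scraping CLI text."""
    resp = requests.get(
        f"{API_BASE}/devices/{device_id}/interfaces",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()  # surface auth or availability errors explicitly
    return resp.json()["interfaces"]

# Structured fields can be consumed directly; there is no brittle regex
# parsing of show-command output that breaks when the CLI format changes.
# interfaces = get_device_interfaces("core-sw-01")
```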

Speaker 2 (47:37):
Yeah, 100%. You make me laugh, because I was just messing with a problem where I kept having to extend timeouts in different places so I could get this response and go on to the next thing, based on the stuff that I was trying to automate.

Speaker 1 (47:52):
Some things never change. But anyway, yeah, once you've dealt with a modern API, that stuff becomes really frustrating to deal with, right?

Speaker 2 (48:04):
Yes, it does. So I guess, as we wrap, where can the audience find you or connect with you if they want to, Justin?

Speaker 1 (48:10):
Yeah, I'm on LinkedIn or X or Bluesky. You can just search for Justin; last name is Ryburn, R-Y-B-U-R-N.