Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
For real this time, Warren, sound check.
Speaker 2 (00:05):
I think I'm still here.
Speaker 1 (00:07):
All right, you are welcome, glad to have you. Jillian, Hello, Hello,
welcome back after your time jet setting around the world
or whatever you were doing.
Speaker 3 (00:20):
I was helping somebody move last week.
Speaker 4 (00:22):
So now that I, like, live back home with my family, I keep on being expected to be like a real adult who shows up for them.
Speaker 3 (00:31):
It's been quite the transition for me. So there we go.
Speaker 1 (00:36):
And then joining us today, Adriana. I forgot to ask
how to pronounce your last name. Villela? Villela? Villela. Oh,
you were close. Everybody, there's an E in there. Yeah, yeah,
I should probably have my glasses checked. But hey, welcome.
I'm glad to have you here. Happy to be here. Right on.
So the big thing I want to shout out right
(00:58):
away is you are the host of the Geeking Out podcast.
I am. Right on, so tell us how that's going.
Speaker 5 (01:06):
So I started the podcast in, I'd say, twenty twenty three. It
was in the fall of twenty twenty three. It came
on the heels of a previous podcast that I was
doing through work with a former coworker of mine, Ana
Margarita Medina, and our podcast, this was a work-related
podcast, had the best name. It was
(01:29):
called On-Call Me Maybe. Damn it. It was like
so much fun. We had about two seasons of it,
I want to say about twenty-six episodes,
and then we were no longer able to continue it.
So then I thought, well, I want to still keep podcasting,
so I started my own podcast, Geeking Out. And then
(01:52):
I was, because we used to have, like, an editor
for On-Call Me Maybe, and I'm like, damn it,
I don't know how to do any of this podcast
editing stuff. And my daughter, who I guess was fourteen
at the time, she's like, I'll help you out, mom.
So I'm like, okay. So she helps me out with
the editing. I've upped my editing game as well.
But she's also designed the logo for it, which has
(02:16):
capybaras, which I love. I don't know. They're just
fun animals, and I got introduced to them on Instagram.
I started, like, getting all these videos. I'm like, oh
my god, where have you been all my life? So anyway,
they're like our mascot, so cute. They're adorable. I
just scored a couple at Miniso. I don't know if
(02:39):
you guys have that in the States. But yeah, I
was like, oh my god, this is the best. And yeah,
the podcast itself, it's a tech podcast. I interview a lot of folks
in tech. I especially like to give voices to women
and other underrepresented groups, and I've had, I
(03:03):
don't know, a combination of people who are both super
well known in the industry and not so well known.
So I guess my highest profile guest was Kelsey Hightower.
I've had Charity Majors, Liz Fong-Jones, Hazel Weakly, so
lots and lots of fun guests, and then other people
that I've met along the way where I'm like, you
(03:25):
have such a cool story, you should be on my podcast.
Speaker 1 (03:28):
Right, Like, once you start a podcast, like you're always
it feels like a sales role, like always be closing.
You know, you're always like trying to pull people into
the show.
Speaker 5 (03:39):
So true.
Speaker 1 (03:41):
True. So you're also a CNCF ambassador and principal developer
advocate at Dynatrace. That's correct, right on. Dynatrace
is cool. Like, some of the stuff that they expose
and dig into, it's like, wow, you went way
deeper into this than I wanted to go.
Speaker 5 (04:01):
It's so true, and it's funny. So I just
started my job at Dynatrace in November of
last year, so I'm pretty fresh, and, you know, I
came in because of my connection to the
OpenTelemetry community and my previous job. I was a
developer advocate at Lightstep, which had gotten acquired by
(04:24):
ServiceNow, so I'd gotten into the OpenTelemetry
and observability community, and so when I joined Dynatrace,
I'm like, well, you know, it's like also an observability vendor,
but it's like so much more. And so one of
my co-workers, Andy Grabner, he's
been with Dynatrace forever. I call him Mister Dynatrace
because he's so passionate about
(04:46):
the platform and what we do and everything. And
he's really helped me. He's given me a tour
of the platform, and we have this video series.
It started because he was like, hey, you know, it
would be good for you to, like, get to know
the platform. And every time he showed me new stuff,
I'm like, dude, this stuff is so cool. You know
(05:06):
it would be really fun if we did like a
YouTube reaction video series kind of thing where you just
show me stuff for the first time and I react
to it. So then we started this video series,
Dynatrace Can Do That?, with OpenTelemetry, and my
reactions are, like, au naturel because I can't act.
So it's been a fun way to learn
about what the product does and also, like, share
(05:30):
that same wonder with the rest of the world.
Speaker 1 (05:33):
So yeah, yeah, Dynatrace is like a gateway drug.
Like you come for the observability and then you're like,
oh wow, and then just go deep down the rabbit hole.
Speaker 2 (05:44):
That name sounds familiar. Was Grabner on our
show already?
Speaker 1 (05:48):
He was, yeah. I think it's been... actually, I don't
know how long ago. I had this conversation with
my wife the other day.
Like, I have three time periods in my life. I
can group things like prior to nineteen ninety, between nineteen
ninety and yesterday, and today, and I can't get any
(06:11):
more granular than that.
Speaker 2 (06:13):
I think that's one up on most of the population, though,
like what happened five minutes ago and that's it, right, So.
Speaker 5 (06:22):
True Also, I thought nineteen ninety was like yesterday, Like
where did this time go?
Speaker 1 (06:28):
Don't do the math, don't do the math. It's just
gonna... no.
Speaker 3 (06:31):
No.
Speaker 5 (06:32):
It really depresses me when people are like so I
was like born in nineteen ninety three. I'm like, oh,
I was in my first year of high school. Cool.
Speaker 1 (06:44):
Cool. So we were going to talk today about observability
in the CI/CD platform. So tell me a little bit
about what that means to you.
Speaker 5 (06:55):
So, I guess the main thing is, I think when
we talk about observability, there's this, uh, stigma. I
guess I don't know if stigma is the right word,
but we have this association that observability is like an
SRE concern, right, because that's where when things
go caca in prod, you know, you turn to
(07:16):
your observability solution and look through, you know, the traces,
the logs, and metrics to figure out what's going on
and all that, which is awesome. But, you know, it's
so much more than that, because, first of all,
observability is a team sport, right? In order
for that telemetry to even get emitted in the first
(07:37):
place, yes, there's some telemetry that you automatically
pick up from your infrastructure and whatnot, but there's
also the telemetry that has to be written by somebody, right?
So, your developers. And so the developers have
to care about observability, right? So now we're not
(07:59):
seeing observability as just, like, oh, an SRE thing. Someone had to put in that telemetry in
order for us to be able to even have this
conversation of understanding what our system's doing. And we can
take it a step further and say, well, you know
(08:22):
what if during the software development life cycle, like you know,
developers instrument their code. This gets handed off to QA,
and QA can use the observability data and say, hey,
I can use this to troubleshoot my code or to
troubleshoot the code that I'm testing, and I can provide
(08:44):
that feedback then to developers and say hey, I found
a bug and this is what's happening. Or they can
say hey, I found a bug. I don't know why
this is happening. You need to instrument better. So now
we're shifting, we're shifting left, right, on observability.
So we've got the development side, we've got
the QA side, we've got the production side with
(09:07):
our SREs and whatnot. But then there's also that piece
in the middle, which is, you know, our CI/CD pipelines.
We've come to rely heavily on our CI/CD pipelines
to ensure that our code gets built and our
automated tests get run and it gets deployed to production. Awesome,
(09:29):
but, like, what happens when that pipeline goes caca? Because
that pipeline, even though it's internal, is in itself
a production system. So how frustrating is it when
your CI/CD pipeline is working like a
well-oiled machine, and you're like, this is amazing, and
then you come to realize that suddenly one day something
(09:54):
goes weird, some change was made, and you have no
idea why it's failing. And wouldn't it be nice if
we could also have observability into our CI/CD pipelines? And
so we are starting to see a movement in that direction,
which is amazing, because now we're no longer in the
dark around our CI/CD pipeline. So this, for me,
(10:18):
I think this is a really fascinating topic. I dug
into it a little bit a couple of years back.
So I have this video course that I
did with O'Reilly that came out early
last year, and as part of it, I'm like, hey,
I want to do something on observability of CI/CD pipelines,
(10:39):
like just a short chapter on that. And then I
start doing some research, and I'm like, crap, there's
like nothing on this, what the hell? And then I
was messaging one of my friends in OpenTelemetry.
We're both maintainers of the OpenTelemetry End User
SIG and we've done a bunch of talks together. She's
my OTel ride-or-die, basically. We've talked at
(11:02):
KubeCon together all the time. It's a
great partnership. And I'm messaging her, I'm like, you know,
there's really not a lot of material out there on
the observability of CI/CD pipelines. She's like, that could
be a really good talk topic. I'm like, that's so awesome.
So we put together a talk. I think it was,
(11:26):
I want to say it was KubeCon Chicago in twenty
twenty three. We did that talk
for Observability Day, and it's been nice to see that
space evolve over time. I think there's now an official
(11:46):
like CI/CD SIG within OpenTelemetry, I want to say,
so there's actual movement towards standardizing the observability
around CI/CD. Because at the time when we
were doing this stuff, like when we were investigating, it
was this mishmash of tools that were available. So, like,
(12:06):
for example, depending on what tool you used for CI/CD. So
Jenkins, for example, had some observability capabilities built in, so
it did emit some OTel signals. But
then GitHub, if you wanted to have, like, observable
GitHub Actions, there's some, like, homegrown options. But now
(12:28):
that means you're having to rely on someone else maintaining
that if you want to have observable CI/CD pipelines. And
then what if they stop maintaining those GitHub Actions,
or they discontinue them?
Speaker 4 (12:39):
Yeah?
Speaker 5 (12:39):
Right. So, like, that's a little bit scary if
you want to rely on that. GitLab, at the
time when we were investigating, they were starting, they
were having conversations around standardizing around that. Ansible, at the time,
I think they had, like, an OTel plugin so
you can have, like, uh, when you're doing
(13:01):
your Ansible playbooks, you have some observability around that.
And then there is, I want to say, there
was, like, an OTel... I forget now what it's called,
but basically you could have OTel for Bash. And it
was funny. We were talking about this at
(13:21):
our talk, and then later at that recent KubeCon I
met the person who created that, Amy Tobey.
She used to work at Equinix, but she created that tool.
I'm like, oh my god, like, mega fangirling. I'm like,
we talked about your.
Speaker 3 (13:38):
Tool and you're here.
Speaker 1 (13:42):
So when we talk about observability in the CI/CD
pipeline, what kind of metrics or insights are
you looking to expose there?
Speaker 5 (13:51):
You're wanting to look at things like how long you're
spending on your builds, how long you're spending, like,
at each stage of your pipeline, for example, identifying pipeline failures.
That's another thing. And being able to
(14:11):
standardize it with OpenTelemetry in particular, because I think
that's the main thing too: a lot
of, I would say, a lot of the industry has
moved towards standardizing on OpenTelemetry, so making sure that
we're still continuing to speak that same language.
Speaker 2 (14:27):
Right, gotcha.
Speaker 1 (14:29):
Yeah, So then you still have access to all of
those same insights, but you get it from your same
observability tool that you get all of your other metrics.
Speaker 5 (14:37):
From. Exactly, exactly. And also, like, you know, you
add distributed tracing into the mix, and all
of a sudden, now you're also able to have this
nice visualization of, like, your build pipeline, right? You
can see, like, all of the stages nicely
as well through your observability vendor, which I think
(14:58):
is really cool.
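(For readers who want something concrete: here is a minimal, hypothetical sketch of the stages-as-spans idea discussed above, using the OpenTelemetry Python SDK from the opentelemetry-sdk package. The pipeline name, stage names, and the console exporter are invented for illustration; a real pipeline would point an OTLP exporter at a collector or vendor backend instead.)

```python
# Hypothetical sketch: model one CI/CD pipeline run as a trace, with one
# child span per stage, so per-stage duration and failures show up in the
# trace waterfall. Requires the opentelemetry-sdk package.
import time

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Print spans to the console for the example; a real setup would export
# to a collector or an observability backend.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("ci.pipeline")

def run_stage(name: str) -> None:
    # Each stage becomes a child span of the pipeline-run span.
    with tracer.start_as_current_span(name) as span:
        span.set_attribute("ci.stage", name)
        time.sleep(0.1)  # stand-in for the real build/test/deploy work

with tracer.start_as_current_span("pipeline-run") as root:
    root.set_attribute("ci.pipeline.name", "example-service")
    for stage in ("checkout", "build", "test", "deploy"):
        run_stage(stage)

provider.shutdown()
```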
Speaker 2 (14:59):
I think I think every company I worked for had
a pretty good hands on strategy for managing observability of
their pipelines. Whenever it failed, someone got an email and
then they went to the product and they clicked retry.
Speaker 3 (15:14):
So my favorite one is when you're in a meeting
and somebody's like, shouldn't this have been done already?
Speaker 4 (15:19):
And then I go and I check the pipeline and
I'm like, well it should have, but it failed and
so it didn't.
Speaker 3 (15:24):
And then.
Speaker 2 (15:27):
You didn't even have the emails. Yeah, that's step one.
Speaker 3 (15:31):
No, it's just it's too many emails and then I
turn them off.
Speaker 4 (15:34):
It's like how every like messaging platform is really great
until everybody's on there and then I get too many
messages and then I and then I turn it off.
Speaker 2 (15:40):
It's like, oh, I mean, let's go the horror
story route. So the one I know is we were
using SVN at the time, so already a great start.
Speaker 5 (15:50):
Oh damn.
Speaker 2 (15:52):
And this was an upgrade from what my previous company
had been doing, which was no source control for their
source code. So this was yeah, this was like, wow,
actually there is a company that knows how to do
source control. Because I had been using Git for a
lot of years before that, and so I was shocked
that this is what the state of the industry was.
(16:14):
But the genius thing was that you couldn't figure out,
like you knew who the committer was for each each failure,
but you had no idea if it was like their fault.
So the genius thing was that they converted it so
that when an email went out, they tried to dynamically
figure out who made the change that actually caused the problem,
so that they could actually put the right people in
the email. Like this was. This is not a trivial thing,
(16:36):
especially if there's like you know, multiple things going on.
You have two thousand engineers committing to the same mono repo. Yeah,
I don't work there anymore.
Speaker 3 (16:46):
Did you say two thousand? Two thousand engineers?
Speaker 2 (16:49):
Yeah? Yeah?
Speaker 3 (16:50):
Is that like real or is that hyperbole? Because that's
a lot of engineers too.
Speaker 2 (16:54):
That's a lot of engineers.
Speaker 5 (16:55):
Yeah.
Speaker 2 (16:56):
I mean, there's, like, I mean, monoliths and monorepos,
this is the Google way, right there.
I think there's the two trains of companies here: the
ones that go well-oiled monolith on one side, and I
mean, right before the well-oiled monolith there's, like, the
distributed monolith, and then on the other side it's microservices
everywhere and individual repositories. So the closer you get
to the monolith side, the more you have just engineers
(17:19):
thrown at the problem. I think Google, last I checked, was
like over one hundred thousand or something. It's
a massive number that these companies try to make happen.
So two thousand is not that big in
my experience, you know.
Speaker 5 (17:35):
You mentioning SVN. I worked at a place that used
this version control tool called Harvest.
Speaker 2 (17:42):
I don't know if you beat me, because I don't
know that one. I like, I have a long list
of ones that I've seen and Harvest is not on
that list.
Speaker 5 (17:50):
And it was like such a piece of crap. And
it was one of those ones where you had
to, like, check in, check out the file. So, like,
while that file was checked out, you know, no one
could touch it. And I mean, it was
better than... I've worked at places where we had a
network drive with the source code, and you pray that someone
(18:15):
didn't overwrite your work.
Speaker 2 (18:17):
So at least by default, even on Windows, there's a
little bit of conflict resolution. But before I convinced my
company to move to Git, at the time, before
SVN, they were actually using, uh, Perforce by Microsoft,
and that didn't have file conflict resolution. So if two
people committed a file at the same time, it would
(18:37):
literally crash the entire database and you were not restoring
your source code.
Speaker 5 (18:45):
Now, has anyone ever used ClearCase?
Speaker 2 (18:50):
You're going to tell us that you have an OTel,
uh, for every single source control?
Speaker 5 (18:57):
Oh my god, wouldn't that be something? But ClearCase I
think they were bought by IBM at some point and
it was the most ridiculous source control system ever because
you had to write like configurations for being able to
do the source control. So it's like akin to like
(19:19):
you know on mainframes where you have to write the
JCL to like run your code. It was kind of
like that. It was like so archaic and like it
hurt my brain and I'm like, I don't want to
touch version control ever again after this experience. And then
we moved to Git and I'm like, thank god, someone
(19:40):
understands me.
Speaker 1 (19:42):
So, whenever we're talking about putting observability in a CI/CD platform,
are there specific plugins or specific
tools that help you instrument this, or are we talking
about just like using Bash and netcat to fire off
data to an endpoint?
Speaker 5 (20:03):
So, if your CI/CD tool
supports it, then I would definitely say use whatever plugins
are available, the official plugins. Use that, because I
think that'll give you some good insights. But barring that,
there is, for example, there is a plugin for
(20:27):
Java, around, uh, like, specific for Java builds, I
believe, that gives you some additional CI/CD
insights, or stuff like around Maven and Gradle. Uh, and
then there's this, uh, I'm trying to remember now what
the OTel Bash thing is called. I'm going to just
(20:50):
google this quickly.
Speaker 2 (20:54):
Uh.
Speaker 5 (20:58):
I can't find the one I'm looking for. Oh yeah,
OpenTelemetry CLI, otel-cli.
Speaker 1 (21:05):
That's... I could imagine that that one is probably pretty
popular, because, I mean, let's just be honest, most of
your CI/CD stuff is just a Bash command.
Speaker 2 (21:16):
No, no, don't say that.
Speaker 5 (21:22):
I mean, barring that, you can definitely
use that. It'll at least give you something.
Speaker 2 (21:29):
What do you have to instrument, like, within the pipeline?
So I think we talked a little bit about the,
let's say, the steps or the jobs or the workflows
that you have, and the amount of time that they're
spending in each one of them, or, like, where the
failures end up being. Especially for, I mean, CI/CD
pipelines are notoriously flaky in some way, like maybe the
package repository is down, or the machine runs out of
(21:52):
memory, or gets killed because GitHub thinks that you're
running a crypto miner on there, or any number of
other things. But, like, you know, one step above
that though, uh, there's a lot that goes on during
the CI/CD pipelines. And I can imagine there's tools,
like you mentioned a bunch, like, you know, say Ansible,
but obviously there's IaC stuff. Maybe, you know,
(22:13):
you're building artifacts and whatnot. How hard is it to
get into all those tools and get all that telemetry
data to actually, you know, export it out somewhere, even
if the platform supports them?
Speaker 5 (22:24):
So one of the things that we explored was, there's
this component of... so, I don't know how familiar
you are with OpenTelemetry.
Speaker 2 (22:36):
You're going to get a range in the audience here,
like, there are some that are going to be experienced,
and on the other side some may have no idea what
you mean when you even say OTel, all right.
Speaker 5 (22:45):
So basically, OpenTelemetry, there's like a couple of parts.
There's the API and SDK,
so you use that to instrument your code.
So you go into your Java and Python code or whatnot,
and there's, like, a bunch of OTel languages that are supported,
and you go in and you write the instrumentation. So
(23:07):
that means, like, you're manually typing in, like, this is
where I want to insert my traces, my logs, my metrics.
There's also automatic instrumentation, where basically there's essentially
like a wrapper around your code, so that as long as
you're using certain, like, popular libraries, like Python Flask
(23:30):
for example, it'll automatically generate some telemetry, like some
traces, for your code there, for any code
using that library. But then there's another component
in OpenTelemetry, which is the OTel Collector. So that's
a vendor-neutral agent, basically, and you can think of
(23:50):
it as like an ETL tool. So it'll extract the
telemetry from multiple sources. So the sources can be, like,
your application telemetry, can be your infrastructure telemetry, from various
things at the same time. And then there's processors, which
basically can, you know, massage your data. You can add attributes,
(24:11):
remove attributes. You can do some transformations. So, for example,
you know, you're using, like, underscores in your naming conventions
for your attributes, but you want to switch to,
like, dot notation, you use that, and you do
that in the collector, and then it gets sent somewhere,
(24:33):
and you can send it to multiple somewheres at the
same time with the collector. So basically this enables you
to, for example, if you don't have, like, an all-in-one
tool that can ingest all of your
telemetry that you're collecting, then you can send it, like,
to one tool for traces, one for logs, one for metrics.
I don't recommend that. Ideally, you want everything sitting in
(24:55):
the same backend, because then, you
know, you have, like, your single source of truth and
all the related data. But if that's
your setup, the OpenTelemetry Collector allows you to do that.
Now I'm trying to think of where I was going
with that when we were talking about the collector.
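(A rough, hedged sketch of the two instrumentation styles described above, assuming the OpenTelemetry Python packages opentelemetry-sdk and opentelemetry-instrumentation-flask; the route, span name, and attribute are invented for the example.)

```python
# Sketch of manual vs. automatic instrumentation in Python, under the
# assumption that opentelemetry-sdk and opentelemetry-instrumentation-flask
# are installed. All names below are illustrative only.
from flask import Flask

from opentelemetry import trace
from opentelemetry.instrumentation.flask import FlaskInstrumentor
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)

app = Flask(__name__)

# "Automatic" instrumentation: the instrumentation library wraps Flask so
# every incoming request gets a server span without touching handler code.
FlaskInstrumentor().instrument_app(app)

@app.route("/orders")
def list_orders():
    # "Manual" instrumentation: a hand-written child span around the part of
    # the handler you care about, with your own attributes.
    with tracer.start_as_current_span("load-orders") as span:
        orders = ["a", "b", "c"]
        span.set_attribute("orders.count", len(orders))
        return {"orders": orders}

if __name__ == "__main__":
    app.run(port=5000)
```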
Speaker 2 (25:12):
You know, you can think about this for a moment.
You know, just to sell OTel, you have three components here.
You have your code that's emitting traces or data or
logs, and you have someplace where you want to see them,
you know, maybe it's your Grafanas, your Elasticsearches of
the world. And you could have a custom protocol, and
for a long time you had a custom protocol, custom
libraries on the software development side to get that log
(25:34):
information from the source to the target
where you wanted it to go. And that means that every
single time you want to change which provider you were utilizing,
or which language, or which team you're working on, you
needed to find a new library for the new
thing that you were utilizing. Wouldn't it be great if
there was some sort of standard that made this all easy?
Speaker 3 (25:52):
You know.
Speaker 2 (25:53):
That's OTel for me.
Speaker 5 (25:55):
Yeah, essentially, essentially.
Speaker 2 (25:57):
Yeah, well, you know, I can see one problem here
is making sure that every single thing that you were
utilizing could be supported. For instance, you mentioned Python Flask.
You know, it's great that there's a library out there
that you can throw into Python Flask, or it's
supported by Flask by default, and it just
works because the output of those logs matches the standard.
But I can imagine there's lots of tools that you
(26:18):
could be using. You brought up, you know, Bash CLI
as one of them, which, you know, don't have these
things by default. My example would be, you know, let's
assume everyone's using OpenTofu today. What does
that look like? You know? Like, do these
IaC tools offer configuration to make it easy to
do that, or is it a matter of, like, having
to parse, uh, you know, output, you know, raw text
(26:42):
out and get it converted.
Speaker 5 (26:44):
Yes, now I remember where I was going with that. Yeah. So, like,
for example, one thing that a lot of these
tools have in common is that they emit logs, right?
And so the OTel Collector has this
component called the filelog receiver, where it can basically
ingest logs and it'll parse out your logs given, like,
(27:05):
some regex expressions, so that you can do something useful
with the data and send that to your
OTel backend. So that's where, you know, things that
don't necessarily have OTel baked in, you can kind
of turn it OTel-esque and get it to send
the OTel data that you need to hopefully
(27:25):
troubleshoot a little bit better.
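(To make that concrete, here is a toy sketch in plain Python of the kind of regex-based parsing the collector's filelog receiver performs on unstructured logs; the log format, field names, and values are entirely made up.)

```python
# Toy illustration: pull structured attributes out of an unstructured
# pipeline log line with a regular expression, roughly what the collector's
# filelog receiver does with its regex operators. Format and fields invented.
import re

LOG_LINE = "2024-05-01T12:03:45Z [build] step=compile status=failed duration_ms=5321"

PATTERN = re.compile(
    r"(?P<timestamp>\S+) \[(?P<stage>\w+)\] "
    r"step=(?P<step>\w+) status=(?P<status>\w+) duration_ms=(?P<duration_ms>\d+)"
)

match = PATTERN.match(LOG_LINE)
if match:
    attributes = match.groupdict()
    attributes["duration_ms"] = int(attributes["duration_ms"])
    # In a real setup these attributes would be attached to an OTel log record
    # and shipped to a backend; here we just print them.
    print(attributes)
```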
Speaker 2 (27:27):
So, Will was about to make this joke, but
I'm going to steal it from him. I think fundamentally,
it's like, oh, I have a problem. The
solution, I know, is to use regex. And when you
think that the solution is regex, now you have two problems.
Speaker 5 (27:42):
I know, I always have to look up regex for anything.
Speaker 2 (27:48):
I mean, I think this is where the Log4j
exploit ended up coming from, is having to parse,
literally, log messages that were coming from everywhere, and you
could get it to execute arbitrary code because of it.
And, you know, there's a number of huge gotchas there
that, if you're not really experienced, especially parsing logs,
(28:09):
you'll end up in situations where your regex will, like,
literally crash due to catastrophic backtracking.
Speaker 1 (28:14):
So that's something I hadn't thought of. Like, when
we first started talking about this, I was thinking
about, like, during the CI/CD process, you know, there's the
build time, but then there's also, like, time associated with,
you know, Terraform going off and doing Terraform things,
which can be really significant at times, because sometimes,
(28:37):
you know, Terraform decides to just completely delete and rebuild
this thing from scratch. You're like, dude, I
just wanted you to change this parameter. And so this
could be a really good way of exposing that.
Speaker 5 (28:52):
Yeah, yeah, exactly. All of a sudden, like,
you're able to see the things that you weren't necessarily
seeing before, right? So it's not just the troubleshooting
when things go bad, but also, like, can I use
this information now to further optimize? And even the other
(29:16):
part too, which is like you think it's going okay
and you find out otherwise, right, like behind the scenes,
something very bad is happening that wouldn't have necessarily been
exposed because there's like no catastrophic failure of your pipeline
as far as you're concerned. It's completing and things are
you know, getting delivered, built and whatever.
Speaker 1 (29:39):
That's one of my favorite phrases to hear is oh,
that error message is okay.
Speaker 2 (29:47):
No.
Speaker 1 (29:48):
No, just the fact that you called it an error
message makes it not okay. Can we agree on that?
Speaker 5 (29:55):
Yeah, exactly.
Speaker 4 (29:57):
So.
Speaker 2 (29:57):
I think, because we're in the engineering discipline, you know,
we have to be cognizant of this. And if we
have new data available, new metrics that we're able to track,
it's going to create a signal that causes us to
make a change. And so one of the questions I
want to ask you is, do you feel like
OTel has caused this shift in the mindset or focus
areas that we've been dealing with in the last, let's say,
(30:19):
before up to this point? So I'm not sure
exactly how old it is. I want to say it's
like five years now, although maybe that's a
little bit short, right?
Speaker 5 (30:27):
It started in twenty nineteen.
Speaker 2 (30:29):
Okay, great, So before that, you know, we didn't have it.
It's like, has there been some fundamental shift with how
we're tackling problems in the observability space? And you said
shift left? So really development teams, engineering teams in general,
compared to what we were doing beforehand.
Speaker 5 (30:46):
I mean, I think with OTel, because pre-OTel,
like, it was kind of a free-for-all, and
there had been attempts at standardization, right, because there was
OpenCensus on the one side from Google, and then
OpenTracing from the CNCF, and then, like, each vendor also
(31:07):
had their own thing. And so I think there was a
lot of time and effort expended into, you know, maintaining
these instrumentation libraries. And now with OTel, I think
the conversation has shifted, because we're like, okay, this is
the single standard. We all agree that it's going to
(31:28):
work this way. And now it's not a single
organization or individual organizations using
their brain power, I'm gonna say in air quotes,
wasting their time on instrumentation libraries, because we're all,
like, as competitors, working together towards a common goal. So now
we're, you know, combining brains towards a singular purpose,
(31:51):
which means we've essentially democratized data, in the
sense that now all of these observability tools are ingesting
the same data. And so the differentiator is, what do
they do with your data in a way that is
helpful to you?
Speaker 1 (32:09):
Right?
Speaker 5 (32:11):
And the answer to that question varies, right, because what's
meaningful to me might not be the same, it might
not be as important to you. And I think the
other thing that OTel provides that OpenCensus and
OpenTracing didn't provide at the time was this unified
view of traces, logs, and metrics, where now we have
(32:33):
this ability. First of all, we have a standard for
these three signals, but also we have correlation of the
three signals. And I think the correlation is really important,
because for me, the backbone of observability
is the distributed trace, because it tells the story, right,
end to end. And then you've got the supporting actors.
(32:56):
We've got the metrics that give us an idea of
things like our CPU usage and our RAM usage,
or how long we've spent on a particular process, or
even having an idea of, like, hey, I sold like
fifty telescopes last month compared to this month. And then
our logs that are like our point-in-time indicators, right?
(33:20):
And all these things separately are like, yeah, that's cool,
that's useful. But together, like, they paint that full picture, right?
We have this very rich understanding of what's happening with
our systems. And I think that's the thing that
observability brings us. And I'm going to borrow a definition
of observability that I really like from Hazel Weakly, which
is: observability allows us to ask meaningful questions, get useful
answers, and act effectively on the information that we get.
And I think we are getting to that point.
I don't think we're fully there, in the
(34:04):
same way that, like, you know, so many organizations back
in the day, and maybe even to a certain extent now,
are like, we're doing DevOps because we have a CI/CD pipeline.
It's like, we have observability because we are collecting metrics
and sending them to blah backend. It's like,
you're on your way
Speaker 3 (34:26):
There.
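(A small, hedged sketch of the trace/log correlation idea described above: stamp the active trace and span IDs onto an ordinary log line so the log can be joined back to the distributed trace in a backend. Only the core opentelemetry-sdk package is assumed, and the service and operation names are invented.)

```python
# Sketch: correlate a log line with the surrounding span by logging the
# active trace_id and span_id. Requires opentelemetry-sdk; names invented.
import logging

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider

trace.set_tracer_provider(TracerProvider())
tracer = trace.get_tracer("checkout")

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("checkout")

with tracer.start_as_current_span("charge-card"):
    ctx = trace.get_current_span().get_span_context()
    # Hex-format the IDs the way trace backends display them, so this log
    # line can be looked up next to its trace.
    logger.info(
        "payment declined trace_id=%032x span_id=%016x", ctx.trace_id, ctx.span_id
    )
```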
Speaker 4 (34:26):
Yet I think the real thing there now is can
we have an AI agent that will look through your
logs and tell you what the problem is, because that's
what I think I need.
Speaker 5 (34:36):
Yeah. And that is a fair ask, and I think a lot of
vendors are moving in that direction. I mean, including Dynatrace.
Dynatrace has an AI assistant named Davis AI.
Speaker 2 (34:49):
And, well, I guess that's the success of OTel,
because now you can shop around for the provider that
offers you that exact benefit without feeling like you
lose your data or have to spend engineering time to
actually adhere to whatever backwards protocol that provider is offering.
Speaker 3 (35:07):
Yeah, but like biologists don't like paying for stuff.
Speaker 2 (35:09):
I know, I know what it what it means. Like
when you say they don't like paying for stuff, what
they mean is they don't like paying for stuff when
that money has like clear price tags on it. But
when there's suspect value associated with running it yourself on
on prem hardware, then they have no problem shelling out
you know, hundreds of thousands of dollars for it.
Speaker 3 (35:28):
Well, that's true.
Speaker 4 (35:29):
I mean we can still have on site HPCs like
that's that's fine, That's totally fine.
Speaker 2 (35:34):
Right, just throw Grafana in your cluster and,
you know, you can use your OTel Collector and point
it at that, and you get all your data into
whatever, you know, that Grafana is pointed at,
and you're good to go.
Speaker 4 (35:49):
I mean sometimes, but not all the platforms that I
use have like all these nifty tools you know built
into them, like like the AWS, like Healthcare Pipeline platform
does not have this stuff built in.
Speaker 3 (36:00):
It just sends everything to CloudWatch and then it's like, well,
good luck with that.
Speaker 1 (36:06):
I think you started that problem definition
with the problem: AWS.
Speaker 4 (36:14):
AWS is... listen, I'm still on my, like,
drifting, like, AWS-pay-my-bills-please. So
we can't bad-talk them, because every once in
a while they do throw some credits my way.
Speaker 3 (36:24):
So, I love you, AWS.
Speaker 2 (36:26):
And if you stick around at the end of the episode,
I actually have something to say about AWS credits.
Speaker 3 (36:31):
So you have a coupon to send me. I love coupons.
Speaker 2 (36:36):
Well, Jillian, unfortunately you're going to be excluded from this.
I wonder why a host of this podcast is going
to be excluded from a giveaway we're having.
Speaker 3 (36:47):
Oh okay, that's.
Speaker 5 (36:50):
Damn it.
Speaker 1 (36:52):
Fine. So one of the things I want to dig into,
like, I see the benefits of this. Where I struggle
is with implementation, you know? Like, the developers I support,
they see the benefits of this, but only after I
build the whole thing for them and show it to them.
(37:15):
So what are some ideas you have to like get
them excited to kind of like throw a few hours
of their own time towards getting this implemented.
Speaker 5 (37:30):
I think one way to be effective with
this is making developers responsible for their code in prod,
like, right after it's deployed.
Speaker 2 (37:42):
Align the incentives.
Speaker 1 (37:43):
Nothing tells a story like staring at a screen at
two a.m., knowing the reason you're there.
Speaker 5 (37:49):
Yeah, exactly, exactly. I think another thing, the
other thing I was going to say, you know, back
to what I was saying in the beginning about QAs
using that telemetry also during the testing phase to be
able to identify bugs in testing: basically making telemetry a
(38:18):
quality gate. So before going into QA, it's
basically mandatory to have instrumented your code. Otherwise,
you don't pass go. Basically, that's another way
to incentivize. And I think one way to think
about it, and I think this is where people
(38:40):
have a bit of a hard time like wrapping their
brains around it.
Speaker 3 (38:44):
Uh.
Speaker 5 (38:44):
I see instrumenting code as no different than, like, you know,
we're writing print statements all the time, we write log statements.
So, like, you're already writing logs, so do it the
OpenTelemetry way. That's already part of your mindset.
So adding traces here and there isn't a terrible idea,
(39:07):
especially if it can help you as a
developer debug your own code. And I think that's another
value-add. It's like, oh my god, I have more
insight when I'm writing my code to understand why this,
like, weird error keeps happening, like, every fifty runs of
the program. Like, wouldn't that be nice?
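(A tiny, hypothetical before-and-after of that point: the same debug information you would have put in a print statement, recorded instead as span attributes and an event with the OpenTelemetry Python SDK. The function, attributes, and event are made up for illustration.)

```python
# Hypothetical example: replace an ad hoc print() with a span that carries
# the same context as attributes and events. Requires opentelemetry-sdk.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("orders")

def process_order(order_id, items):
    # Before: print(f"processing {order_id} with {len(items)} items")
    with tracer.start_as_current_span("process-order") as span:
        span.set_attribute("order.id", order_id)
        span.set_attribute("order.item_count", len(items))
        if not items:
            # The "weird error every fifty runs" case gets captured in context.
            span.add_event("empty order received")

process_order("ord-123", ["telescope"])
provider.shutdown()
```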
Speaker 2 (39:27):
I think you could put your perspective, though, on the
controversial side, right? Like, I mean, maybe there are the
engineers out there that don't write any bugs,
production never goes down, there's no incidents, whatever, and
then, you know, this isn't a value-added activity.
So, you know, if they're out there and they're thinking
to themselves, all my code is absolutely perfect and I'm
not accountable for it, I don't know if there can
(39:48):
be another argument.
Speaker 5 (39:50):
I know, well, what can you do to question perfection
like that?
Speaker 2 (39:55):
You did say something interesting though, which is basically what
you're advocating for here, which is shift left on telemetry.
And one of the complaints that I've heard from, uh,
my colleagues across the industry has been, well, there's like
shift-left testing and shift-left telemetry and shift-left security,
and now we're doing infrastructure as code instead of release
(40:16):
engineering. I mean, at some point, like, if everything is
shift left, there's nothing left. Like, everything is now
on the right, right?
Speaker 5 (40:25):
Right. I mean, yeah, you're right, but like I guess
the way I'll look at it is, but if you're like,
if you don't shift left, then it's going to be
so much more painful after.
Speaker 2 (40:40):
Oh, for sure, there's no question.
Speaker 5 (40:43):
And I don't know. Maybe it's me,
like, my personality. I like learning new tech. So it's
like, you get to learn, you know, Terraform? Yay,
that's awesome. You get to learn Docker? Yay.
Speaker 2 (40:54):
Cool.
Speaker 5 (40:56):
I think, honestly, for this to succeed, you
almost have to come in with the right mindset,
and there's no better way to do that than
with, like, fresh young blood, right? The new folks coming
into the industry, that is the way. You
know, that is the way that it works. But
for the older folks, you're like, oh, you're telling me
(41:17):
I'll have to do all this extra crap.
Oh damn it.
Speaker 2 (41:21):
I like how you stopped after Terraform and Docker
and you didn't say, oh, we have to use Kubernetes.
Speaker 5 (41:25):
Yeah, like, Kubernetes and I have a love-hate relationship.
Speaker 1 (41:31):
That was a given, like who doesn't love Kubernetes.
Speaker 2 (41:36):
So, I mean, that's an interesting perspective, that those that
are more experienced are likely, I don't want to
say that they're, you know, thinking in the wrong direction or
don't have the right mindset, but they're focused in
the areas where they think it has the most value.
Like, maybe the experience helps tell them what they need
to do more effectively. And if you are looking at
something and say, oh, well, I don't know how to
(41:58):
test this effectively, and I don't know how to make
this super secure, and I don't know where my
bugs are going to be, then you do want to
take all of these steps, at least a little bit
in each one of these directions. And that isn't to
say you need to design the perfect platform, but having
the logs end up on, like, the hard
disk of an ephemeral compute environment isn't going to
(42:18):
help you when there's a problem and that thing crashes.
Speaker 5 (42:21):
Yeah, very true, very true. Yeah. You know, I
think with this kind of shift left, like, for
those of us who've been in the industry long enough,
I think embracing that is born out of one of
two things: extreme trauma, where you're like, oh my god,
(42:43):
I can't take this anymore, and the other is
just, like, pure curiosity, like, ooh, this looks really cool,
and, like, having an open mind. And honestly, I
think the most successful techies are the ones who are
open to change, open to, like, what the new
tooling brings. And maybe it's like, oh, this thing
(43:06):
doesn't do exactly what I wanted it to do. And then
this is, like, where startups are born, right? I mean,
Kubernetes has got a whole ecosystem around it because,
I guess it does stuff, but, like, some stuff is
kind of gnarly to do, so let's make things easier.
And I think that sort of thing drives innovation at
(43:29):
the end of the day.
Speaker 2 (43:31):
Are you happy with where we're at? Like, do you feel like
there's just one more thing, like, get the
observability done in CI/CD pipelines and then everything will be great?
Or do you see, like, there's a concrete, objective next
step to get to the, you know, the pinnacle of
perfection in observability?
Speaker 5 (43:50):
Ooh, that's a great question. Actually, it's a topic for
a talk that I'm giving next week on observability.
Damn, like, what is it? All right, I'm
gonna plug it. So basically, the idea is, why does observability
have to exist within the traditional confines of tech? Why
don't we bring observability and OpenTelemetry to things like
(44:11):
the recruitment process? Like, you know, it's a yucky
jobs market out there, and the recruiting process has always
been painful. You never know, you know, you send out
a resume, you don't know if you're going to get
a response, and when you do, it might be a while.
And then when you finally get that interview, it
might be, like, a chunk of time between interviews. Wouldn't
(44:33):
it be nice if organizations put observability in their recruitment process,
for example, so that they have an understanding of, like,
how long it takes in each stage of the interview process?
Wouldn't it be cool to have a distributed trace that
represents the end-to-end recruitment process? And we can
(44:54):
take this to, like, several other industries, like
even the healthcare industry. Look at hospital ER waiting times.
Understanding, you know, what if you
apply OpenTelemetry and observability to, uh, to ERs,
where you're like, okay, now you have an idea of,
(45:15):
like, what the workflow is from intake all the
way to getting treatment. You have an idea of, like,
the types of cases that get seen faster versus the
ones that don't. You can have a better understanding. You
have better data on racial profiling, for example: wait times,
the amount of time, you know, first to get seen,
(45:35):
to get, like, imaging done, to get the results of
the imaging. So I feel like the sky is the limit,
and I think it's, I would say, easier, I'm going
to say in air quotes, in larger organizations that
have access, like monetary access, to
be able to, like, you know, purchase a subscription from
(45:56):
a SaaS vendor, or, like, run a homegrown solution
for observability. But I think it can open
up some really cool possibilities around that.
Speaker 2 (46:07):
See, I had a secret fear you were going to say
government oversight.
Speaker 1 (46:13):
Why would that be relevant at all?
Speaker 2 (46:16):
Yeah. So, I know, but I think you touched on something. Yeah,
well, okay, yeah, so we won't go there.
Uh, I do want to touch on something.
I think you added a nuance too, which is
that the value that could be captured here is actually
directly related to the business and not arbitrary tech metrics
(46:39):
about your running service, like how many two hundreds and
three hundreds do you have, or how long it's running,
but maybe the value to the actual customer, and/or
the pain they're suffering through the, you know, user experience
of your UI, because you automatically coded it using
one of these new vibe-coding auto-creation UIs. And
so, you know, yeah, I think it's a really good point,
(47:02):
and, like, this could be applied to every industry,
not just ones that are hard tech. You know,
how are you collecting metrics today? How are you actually
evaluating these things? Don't you want data? And now I'm
starting to wonder, all of those smaller companies that are
between zero and, let's say, two or five employees, you know,
what are they doing? I know that, like, maybe the
(47:22):
last thirty years was all about digital transformation, and I
still know companies are hiring digital transformation consultants to make
this happen. And I think you've hit on, like, really
the why, which is: you're missing the data.
You're not collecting it, and this process, this standard,
is what's going to help you achieve that.
Speaker 5 (47:40):
Yeah? Yeah, and I mean unfortunately, like we speak in
data right, like show me, show me the metrics, and
then tie the metrics to the money, right.
Speaker 1 (47:50):
Well, Jerry Maguire reference there, show me the money.
Speaker 5 (47:54):
Yeah, yeah, I do.
Speaker 2 (47:58):
I do want to share this because I do
think this list is quite ridiculous. So you're a CNCF ambassador,
you have a podcast, you're an author, I wrote a
bunch of other things down here, but, I see, like,
you know, conference speaker, right? Is there still
some milestone you're hoping to achieve next after this that
(48:20):
you're, you know, currently focused on? Or are you like, no, I've
done enough things, you know, I feel accomplished enough?
Speaker 5 (48:26):
Oh, well, I would love to, like, someday keynote at
a KubeCon. I got my first keynote last year
at KCD in Porto, Portugal, and that
was super exciting. I'd never been asked to keynote. I
think the ultimate experience would be a keynote
at a really large conference. That would be
(48:47):
super fun. I did also find out, this one's up
there on the list, I found out last week that
I'll be going to KubeCon Japan, which is super
exciting because it's the first-ever KubeCon Japan and I've
never been to Japan. And I'll be giving a talk
on basically what we can do to make our observability greener.
(49:12):
And it's, uh, a follow-on, if you
will, to a talk that I'm giving next week in
London with my same co-speaker, Nancy Chauhan. The
talk next week is called How Green
Is My OpenTelemetry Collector?, and it talks about, like,
what you can do to start looking at optimizing
your OTel Collector to make it, you know, more
(49:32):
environmentally friendly.
Speaker 2 (49:34):
Do you feel lucky now that there is another
piece of technology out there that is just so much
worse for the environment that no one's paying attention to
any sort of problem with storing extra data, I mean,
storing the data? I mean, now that's a trivial
matter as far as impact goes.
Speaker 5 (49:52):
It's funny, like, you know, I almost feel guilty
working in this industry, to be honest,
because, like, I've always, like, since I was a kid,
I was really into, like, environmental stuff. And, you know,
like, I bring a reusable cup to, like, Starbucks, or,
(50:12):
like, I love bubble tea, so at my local bubble tea place
I'll bring a reusable cup. And, you know, I've done
the reusable shopping bags for, like, twenty years, and yet
I'm in an industry that is inherently terrible for the environment.
You know, data centers, I think, contribute, like, one
to two percent of, like, the world's
greenhouse gas emissions. And then you add AI into the mix,
(50:36):
and it's like, ouch. Observability, I mean, the fact that
we're trying to understand our systems better through observability? Well,
guess what, you're emitting a crap ton of data, so
your systems are expending more energy in doing so,
and then your observability tooling, in ingesting the data,
(50:56):
is also expending a crap ton of energy to do that.
So it's like, we're, you know, we're adding to the problem.
But then I also feel like technology can solve the problem.
Like, you know, those same AI agents that do
expend a lot of energy can also help us further optimize
(51:19):
our you know, our energy usage to lessen our carbon footprint.
I think it all will be a balance.
Speaker 2 (51:27):
Well, there's the paradox there, and I don't remember the
philosopher's name, hopefully someone else does: if you increase the
efficiency or you optimize it, you end up with more
usage, because it becomes cheaper, and more so in the end.
So, unfortunately, that's not a path forward that
(51:48):
I'm willing to bet on. But, like,
I'm still bringing reusable, like, paper bags to the grocery
store for my bread and vegetables. Like, I
am just as bad, you know, put it in a
backpack and no plastic bags or anything. And I've still
got the same paper bag that my wife is like,
why are you still reusing that to carry stuff in?
Speaker 1 (52:09):
I always forget mine. So I end up making the
trip out of the store with thirty-seven things
in my arms.
Speaker 2 (52:15):
Oh yeah, backpack.
Speaker 5 (52:17):
I used to buy the bag.
Speaker 3 (52:19):
It's ridiculous. At this point, I have like a closet
full of them.
Speaker 2 (52:22):
I'm like, yeah, you should just donate them to
some other people, or sell them right outside
the store. Like, just see when Will comes out
of the store and he's carrying a
Speaker 5 (52:33):
Lot of things, be like, hey, ooh, clever.
Speaker 2 (52:37):
I don't know if that's legal not for resale?
Speaker 5 (52:41):
Yeah, probably not, damn it.
Speaker 2 (52:46):
I think you just have.
Speaker 1 (52:47):
To be fifty feet away from the door, and by
that time I'm a very motivated customer of your product. Anyway,
if you haven't gotten to your mode of
transportation after fifty feet, I mean, I, I...
Speaker 2 (53:01):
worry what's going on there. I will ask, maybe you
can spoil it a little bit for us.
You mentioned data centers as not being that environmentally friendly.
Is it? Is it the data storage? Is it? The compute?
Is it you know, memory usage?
Speaker 1 (53:15):
Is it?
Speaker 2 (53:15):
The menu? The hardware manufacturing that's doing it? So the
building new data center, like do you know like which
area is contributing or is the most problematic for us?
Speaker 5 (53:25):
I don't know specifically, but I would gather, like, the
power consumption alone of data centers is huge and puts,
like, a massive strain on power grids. So there's
definitely, I would say, I would guess... now, don't
quote me on that, but I would guess that
that would definitely eat into things a
Speaker 1 (53:45):
Fair bit. From some of the stuff I've heard, it's
the cooling. Mhm.
Speaker 2 (53:51):
I could believe that. Yeah, dealing with extra
heat is a huge challenge. But if it's the energy,
then what we have to make sure of, realistically, is that
the energy we're creating is green.
Speaker 1 (54:02):
Just build more nuclear power plants, always the solution. It is.
I can tell Warren wants to disagree.
Speaker 2 (54:11):
Oh no, no, no, no, I absolutely
agree there is no better form of energy, even though
there's all these problems with, I say nuclear, but you
know we're saying fission, right, because we're not at the
fusion stage yet. And there's just a lot of arguments
where, like, what do you do with the wastewater? I'm like,
compare that to the mining of the raw materials and
(54:33):
the manufacturing of solar panels, or the actual damage to,
like, migratory birds' flight paths from wind turbines,
and not to mention the non-renewable ones. Like, you know,
it's just so absurd to me. Sorry, that's my
own personal rant. Just cancel all non-commercial aircraft.
(54:55):
There should be no private jets. You know that will
solve a majority of the world's problems from there.
Speaker 1 (55:00):
True, I can't disagree. It's not going to impact my life.
I'll tell you that for sure.
Speaker 5 (55:13):
Damn it, I'm gonna cancel my Gulfstream.
Speaker 4 (55:15):
Order right.
Speaker 1 (55:18):
Hold on, BRB.
Speaker 5 (55:20):
Yeah.
Speaker 1 (55:24):
But now, like a few minutes ago, you brought
up a really interesting point about using OTel metrics in
other parts of the business, and, like, immediately my mind
exploded with, like, ten different things in the company I
work for right now. I'm like, holy shit, like, they
totally need to see this on a
(55:47):
metrics dashboard. And it's like, you know, you mentioned the recruiting process,
but I'm thinking, like, the sales pipeline, or the implementation
pipeline whenever we implement someone onto our product, the employee
review process, what stage that's at. Like, there's just so
many different things where, like, oh wow, even Jira, like,
(56:11):
where are things stalling at, moving tickets from new
to done?
Speaker 5 (56:20):
Yeah, yeah, exactly. Or, like, onboarding new employees. It's
such a pain. No matter where you go, there
is not a streamlined onboarding process, right?
Speaker 1 (56:33):
Time to first commit, that's a big metric for me
when I bring somebody on, like, how long before their
first commit goes to production? How are you measuring that?
Manually at this point?
Speaker 2 (56:45):
Is it because there's just not a scale that you
would need to make it? Like, you're not hiring that
many people, like, your turnover rate is, you know, low,
and so, you know, I guess maybe that's the counterargument:
why collect the data when the manual process is still sufficient?
Speaker 1 (57:02):
Yeah. For that specific example, time to first commit,
you know, it would be hard to justify automating it
unless you already had everything in place and it was
just building the dashboard that shows it. So that one,
hopefully it's of little value. Like, if you're bringing on
that many new employees where you have to build a
(57:26):
dashboard for that, maybe you should be looking at metrics
of like what am I doing to piss my employees
off and make them leave? Maybe that's a better metric
for that scenario.
Speaker 2 (57:36):
I mean, I think you're onto something there, because if
you're pushing the data towards them and
they have to now consume technical dashboards, I think what
we're saying is we're hoping that by doing this, we're
changing the role from directly hands-on to someone that's
more understanding of, like, what a knowledge management process
is in that area. So, you know, you're talking about HR,
(57:57):
but it's not HR anymore. It's a new
kind of human resources where it's already being managed. Now
it's about improving the process, and that's a whole other
step above it.
Speaker 1 (58:08):
Yeah, we don't have HR right now, we have people business partners.
Speaker 2 (58:12):
Mm. Well, yeah, you laugh. But I do think that
there is something, like, all labels are wrong, some of
them are useful. And I think if you call it people,
two things happen. They do think about
their leaders, like how to build leaders and whatnot, and
then, more importantly, about the careers of these people, rather
than as, you know, fundamental resources where, like, your turnover rate
(58:36):
is important and whatnot. And I think that's something
that's only happened recently.
Speaker 4 (58:43):
See, I don't know about calling HR people. I mean,
clearly they're people, right, but like they're not.
Speaker 3 (58:51):
They're not robots yet, Like they're not there for the employees.
They're there to protect like the company.
Speaker 4 (58:58):
So this idea that they're there for you in your
career is maybe...
Speaker 3 (59:02):
We don't Maybe I'm going to disagree with that.
Speaker 2 (59:05):
Well, that's the point. So if this organization is there
to protect the company, then of course the company
would want to be making decisions based off of
metrics and a framework that is collecting actual data about
the organization before making those decisions. And there
was a research study done, like, ten or twenty
years ago, where a consultant came in and had asked,
(59:26):
like all the executives of an organization to make a
guess about how successful their sales would be over the
next couple of quarters. And they were all, of course,
super confident about whatever it was that they were doing,
and just absolutely wrong in a lot of ways. And
I think you see the same thing over and over
again across the field. So, like, a majority of people
think they're better than average, which is statistically not possible.
(59:50):
And I think this is where, you know, having the
additional data just goes to show that you're making
more accurate decisions, no matter what they are. True story,
Will just can't wait to get back to, or
get to, work so he can start implementing these in
his least favorite department.
Speaker 1 (01:00:12):
I am. OTel.
Speaker 5 (01:00:13):
Everything, that's right. OTel everything.
Speaker 1 (01:00:16):
So, speaking of which, how did you end up as
part of the OTel SIG?
Speaker 5 (01:00:24):
Oh, well, so, first of all, I got into OTel
because of my previous job at Lightstep, which was my
first job as a developer advocate. So before that, I'd
been doing a mix of, like, individual contributor work and
management work. I decided at that point, at my previous
(01:00:45):
job, that I was done with management. Thanks, but no thanks,
I'd had a good run, but I was done. But so at the
job before Lightstep, when I was a manager, right,
I was managing two teams, thirteen people total,
a platform engineering team and an
(01:01:08):
observability team. The platform engineering team was a HashiCorp stack,
and I knew Kubernetes and they were using Nomad. So
it was like, great, now I have to learn a
thing that I don't know, which my brain was like, yay,
fun stuff. And there was also this observability team, and I was
new to observability. I'd been dabbling. Like, my understanding of
(01:01:31):
observability came from, like, reading Charity Majors' tweets, and, you know,
my thought was, well, I have to do right by
my team and my organization if I'm going to
lead an observability team there. And we were
an observability practices team, so defining the observability strategy at
(01:01:53):
the company, Tucows, which is, yes, that Tucows if
you remember them, but not doing that anymore. It
was not the download site for free Windows software anymore.
They were a domain wholesaler when I joined. And so, as
part of it, at the time I already
had this blog on Medium where I'd been using
(01:02:16):
the blog to, like, basically learn in public, right? Document
cool things that I've discovered and share them with
the world, because my personal pet peeve is a lot
of stuff is documented very poorly in tech. People assume
that you know what they're talking about, and it makes
me think of those math textbooks where they're like, we'll
leave the proof to the reader, and it's like, no,
(01:02:40):
show me how the proof works, because I have no
freaking clue. This is, you know, how I feel with
regard to most technical blogs, so, you know, mine are
in excruciating detail. So I basically thought, well, I'm
going to learn Nomad in public. I'm going to
learn observability in public. Blog, blog, blog as I'm
doing my job. And then one of my blog
(01:03:02):
posts got the attention of my former manager at Lightstep,
and they reached out to me and said, hey, how
would you like to do this for a living? I'm like,
what, you can do this for a living? And when
I started on the job, they said, you know, it
would be cool to, like, contribute to OpenTelemetry. And
(01:03:22):
I had been in, like, you know, super enterprise corporate
life pretty much up until that point, where it was closed
source all the things. Like, the most open source stuff
we did was Java and Maven, and everything else
was like, there better be, you know, a support
plan for this open source software, otherwise we're not going
(01:03:42):
to buy it, or rather, we're not going to use it,
which I understand, like, large enterprises, they've got to cover
their asses. But so that was my first foray into
open source. So I first started just contributing to the
OTel docs, and then there was an opening in the
OTel End User SIG. The End User SIG basically
connects end users of OpenTelemetry with each other,
(01:04:05):
but we also relay feedback from the end users to
the OpenTelemetry maintainers. So there was
a gap in leadership, because one of
the original founders of the SIG had changed jobs and moved
away from OpenTelemetry. So, you know, my manager
at the time asked if I wanted to step in
(01:04:26):
and help out, and that's how I got involved.
And at the time it was a working group, and
then it was converted into a SIG. And we've
done a bunch of things to just really
elevate the OTel community as part of the SIG. So
we do a bunch of regular things. Like, we have
(01:04:47):
this series called OTel Me, where we interview one of
our end users and they share how they use OpenTelemetry
in their own organizations. We have OTel in Practice,
which is basically a meetup-style thing where, you know,
if you have a cool presentation on something OTel, like, come
on and present to us. Like, you know,
(01:05:09):
you want to test out a talk that you want
to give, like, use us as your guinea pig, and
we do them as live streams, and then the recordings
are available afterwards for folks to consume on
the OTel YouTube channel. We've also partnered with the
other SIGs to run end-user surveys to understand, like,
for example, the first one we did was on the
(01:05:30):
OTel Collector. The Collector folks wanted to partner with us
to understand how end users use the Collector, to
help inform the direction of the Collector, like what
features are most important to users so that they
can, you know, push forward with those as part of
the Collector's roadmap. So that's effectively how I got involved
(01:05:53):
with OTel. So most of my work is in the
SIG. I'll pop in every so often and
update docs and READMEs, especially when I'm doing
research for my technical talks and I'm digging into
a topic and I'll notice, oh, there's a gap here.
I do two things. One is, like, I'll write a
blog post on it, because I love to do that.
(01:06:16):
But then the other thing is, like, I want to
be a good open source citizen, and also I
want the docs, i.e., the source of truth, to
have the information that I also make available in my
blog, so that, you know, we have that completeness.
And I encourage everyone in open source to do that
as well. You know, like, so many vendors have wonderful
(01:06:36):
blog posts out there on observability, like on OpenTelemetry,
and I think it's great. But if you're noticing a
gap in the docs, take the time
to update those docs, update those READMEs, because
it'll just save a lot of people a lot of
effort. Because the docs are the place,
I think, where most people start their journey, and
(01:06:58):
then they'll you know, move to the blogs and the
YouTube videos and whatnot for added information.
Speaker 1 (01:07:04):
Yeah, for sure. And, like, writing docs is really hard,
so there's always room for improvement, and especially for
people who are just starting their career, that's such
a great way to just start getting some experience,
you know, because you read the docs, you try it,
(01:07:27):
it doesn't work, you go off, you cuss and rant
for a while, and then you come back and
you try it again, and eventually it kind of
clicks. And, like, making that pull request back to the
docs is a great way to start building a portfolio
of expertise that will ultimately help you move on to
bigger and higher-paying roles. Big time.
Speaker 4 (01:07:50):
I think writing is probably one of the best things
you can do for your career, or, like, whatever
the thing is that you do better with, video or
audio or whatever. Just start to get
your own voice and perspective out there.
Speaker 2 (01:08:02):
I don't think we should be discouraging people from entering
the industry.
Speaker 3 (01:08:09):
I'm saying they have to write it.
Speaker 2 (01:08:11):
I mean, I don't know about other people, but I
became an engineer because I wanted to just solve equations
all the time. That was my goal. And now
I don't do anything with numbers or math in any way,
and I spend all my time going to conferences
and writing books, like Adriana. Yeah, that's my life.
Speaker 1 (01:08:31):
Yeah. Well, communication is such a key part of being
a good engineer, and I think it's underplayed a lot.
But when it comes to writing, I know a lot
of people with engineering-oriented minds either aren't good writers
or don't consider themselves good writers. And lately I've been
(01:08:53):
using AI: I'll write something up, then copy and
paste it into AI and just have it give
me some feedback on it. And I've found that to
be really helpful.
Speaker 5 (01:09:03):
Yeah, I think it's a good starting point.
My caution on that, personally, is with AI, you know,
it's use with care. AI
can take away your own, like, personal writing voice. So
that's just my personal take on it.
Speaker 1 (01:09:24):
True story.
Speaker 4 (01:09:26):
Yeah, I don't think you can have the AI just,
like, straight up write stuff for you, or it's
very bland, like very, very, very bland. But I
think you can use something like... most of the grammar
check tools all incorporate AI.
Speaker 3 (01:09:38):
Now, so if you're using, like, Grammarly or Pro
Speaker 4 (01:09:39):
WritingAid or something and you're worried about punctuation or spelling,
that would be me.
Speaker 3 (01:09:44):
That would be me worried about the punctuation and the spelling.
Speaker 4 (01:09:47):
It's not, like... listen, it's just not going to happen
if there's not some type of tool out there, or
unless I hired an editor, which, like, I'm not going
to do for a blog
Speaker 3 (01:09:55):
post, like, nobody's going to do that. But I do
think, you know, these tools do catch quite a bit.
Speaker 5 (01:10:00):
But I agree. And, you know, whenever I'm questioning
the grammar on something, I'll throw that into
AI too, you know, to verify either I'm heinously wrong
or, like, hey, I got it right. Yay.
Speaker 2 (01:10:16):
Do you have a tool of choice for your work?
Speaker 5 (01:10:20):
For me, I'll use Microsoft Copilot every so often.
So, like, for my talks, for example, I have talk
mascots, so on my slides I'll have, like,
a theme animal. So, for example, for
one of my talks next week, the Green Collector talk,
(01:10:41):
we have a polar bear wearing a green recycling t-shirt
throughout. So Copilot generates some fun animations,
and, you know, sometimes I'll ask it to... like, I
did ask it when I was researching a talk to
write me some Terraform code to do X, which
was helpful, but then it hallucinated and generated me a
(01:11:05):
actually, it wasn't Terraform, it was Pulumi. It generated me a
Pulumi function that didn't exist, and that kind of pissed
me off for an hour. I'm like, where is this
bloody function? It did not exist. But yeah, that's
the main one that I use. My dad
swears by Perplexity. He says it's quite good. I've never
tried it, but he swears by it.
Speaker 2 (01:11:27):
At least one of your parents is using modern tools.
Speaker 5 (01:11:30):
Oh my God. My dad is a retired software architect,
and he, like, learned Rust for fun two years ago.
He's seventy-two, and he's like, yes, I'm writing my
own Rust crates to do some performance testing on
some code that I wrote, and I'm using statistical analysis
methods, blah blah blah. I'm like, dude, I learned this
stuff in university and I don't remember a bit of it.
(01:11:54):
And he's like, you know, refreshing his knowledge on this stuff.
I'm like, do you.
Speaker 1 (01:12:02):
Right on? Well, this feels like a good time to
move over to Picks. What do you think?
Speaker 2 (01:12:06):
Yes, let's do it.
Speaker 1 (01:12:07):
Warren over to you.
Speaker 2 (01:12:09):
Well, I knew it was going to be me first,
so I was actually surprised that you weren't just going
to immediately go to me.
Speaker 1 (01:12:14):
So yeah, I tried to change it up. Like, man,
I always pick on Warren.
Speaker 2 (01:12:17):
So, well, it's okay. I'm always prepared, so it works out.
So today I'm going to be super lame, but we're
going to have a survey that's going to be posted
at adventures in DevOps dot com slash survey, and
because I have them, I'm going to give away five
(01:12:38):
awards of AWS credits based on the responses. I don't
know how much they'll be worth yet; we're going to
see based on the responses, and I'm not sure what
the questions are yet, but the survey is going to
be there, I assure everyone.
Speaker 1 (01:12:49):
Right on, that'll be cool. I'm looking forward to hearing
from folks on that. And, I mean, you can get
AWS credits for it, so there's a
little incentive to go do it.
Speaker 5 (01:13:02):
Yeah, right, who doesn't love cloud credits?
Speaker 2 (01:13:06):
Right for sure?
Speaker 1 (01:13:08):
Adriana, you should bring us a pick today.
Speaker 5 (01:13:10):
A thing that I really like, it's an activity: I'm
a rock climber, so I love bouldering. I think, you know,
if you have kids that like to scramble
up things, I highly recommend taking your kids bouldering, and also,
as an adult, try it out. That is my pick
(01:13:33):
for you. And as a personal thing, every city
that I visit, whether it's on
vacation or at a conference, I always make it a
point of checking out the local bouldering gym. Bouldering is
a little bit scary for those who aren't familiar with
rock climbing, because there's, like, the rope climbing and then
there's bouldering, where you're up, I want to say,
(01:13:55):
like ten feet up, no rope, big fluffy mat at
the bottom. You can still get injured. I sprained my
ankle twice, the same ankle from just a bad fall.
But it is great fun, especially if you're looking to
just like step outside of you know, whatever it is
(01:14:19):
that you do as your day job. It's a
great way to just decompress, because you've got nothing to
do but, you know, focus on getting to the
next move and going up the wall. And
your mind can't flinch, can't get distracted, because otherwise
you fall.
Speaker 1 (01:14:37):
It's very therapeutic. Jillian, are you back with us?
Speaker 3 (01:14:42):
Okay, So I'm gonna pick the newest Expeditionary Force book.
It's a science fiction series.
Speaker 4 (01:14:48):
That's, like, unlike most science fiction, it's kind of campy
and kind of goofy and silly, and it doesn't
have a lot of horror or gore or things that
I don't like reading, which is very hard to find
in science fiction, I've found, because a lot of them
have just stuff that I don't want to read about.
So that's Expeditionary Force by Craig Alanson. It's probably
one of my favorite science fiction series. Right on.
Speaker 1 (01:15:13):
Cool. I'm going to go with two picks this week. One,
there's a guy, he's a kid, let's be honest, he's
a kid. I think he's probably mid-twenties. His name
is Dan Koe, and he's really fascinating, just his take
and, like, the amount of work and effort he's put
into studying, like, philosophy and, like, the meaning of life
(01:15:41):
and your purpose and your calling, like, really extraordinary
stuff for someone who's so young. He just released a
new book yesterday called Purpose Profit 2 that he's giving
away for free on his website, thedankoe dot
com, D-A-N-K-O-E, and I started reading it. But
(01:16:01):
he's written so much other great stuff. Following him on
X, his stuff there has been so cool. I'm just
gonna go right ahead and recommend his book before I've
even finished reading it. I just feel like the quality
is going to be there for anyone who's interested in
reading that. And then the second pick I have, so
(01:16:21):
there's a really like humiliating story I'll go with first.
To set the stage for this, Like I spent my
youth going to a lot of heavy metal concerts and
playing guitar in heavy metal bands, and so it turns
out that because of all the headbanging involved, you can
(01:16:42):
actually get whiplash. And so I have, like, the long-term
effects of whiplash from my taste in music, and
I've had some different problems with
it over the years. But recently I got this thing
called an Iron Neck to, like, strengthen my neck muscles.
(01:17:02):
And so that's my second pick. It's, like, this really
super cool-looking gadget that you put on your head.
Definitely. You're the kid with the helmet, that's
right, right? Yeah. It's a total fashion statement,
absolute fashion statement. So if you're just looking to, like,
have you painted it? Like, if you want to
(01:17:24):
either improve your neck muscles or just improve, like, your
social credibility in life, you want to go strutting around
town with the Iron Neck on. But yeah, right.
Speaker 2 (01:17:35):
We're gonna have to see this on the whole next episode.
I think it's just gonna have to be a deal.
Speaker 5 (01:17:43):
Deal. The next fashion statement.
Speaker 1 (01:17:45):
Right. So yeah, those are my two picks: Dan Koe's
new book at thedankoe dot com, and the Iron Neck.
And, like, the reason I bring up the Iron
Neck is, even if you don't have whiplash from spending
your youth headbanging, sitting at a desk all day
long hunched over your keyboard also has a negative impact on
your posture and your neck strength. So this is a
(01:18:07):
way to help counteract that, so that whenever you do
make it to old age, you still have the
ability to, you know, stand upright or even perhaps look
at the sky.
Speaker 2 (01:18:17):
I think I'm gonna need to see some research on that.
All right, man, I don't know.
Speaker 4 (01:18:23):
I think it's a pretty bold claim to old age.
Speaker 1 (01:18:26):
I don't know about that.
Speaker 3 (01:18:30):
I want like a standing ovation from the universe if
I make it past sixty.
Speaker 5 (01:18:36):
So there's a podcast that I've been listening to called
Wiser Than Me. It's with Seinfeld actress Julia Louis-Dreyfus,
where she basically interviews all these, like, older ladies,
you know, high-profile older ladies like Jane Fonda, Amy
Tan, and it was recommended to me by
(01:18:58):
actually one of my podcast guests. It is great fun.
Right on. Cool.
Speaker 1 (01:19:04):
And then there's your podcast as well, right? The Geeking
Out podcast. That is correct.
Speaker 5 (01:19:09):
Yes, just look up Geeking Out with Adriana Villela, because
otherwise, if you just look up Geeking Out, it's gonna,
like, give you so many different listings on
the various podcasting apps. So Geeking Out with Adriana Villela,
do that search term. You should be able to find
the correct one. And there's a capybara on the
(01:19:29):
cover art.
Speaker 1 (01:19:31):
So awesome. Adriana, thank you so much for being on
the show. This has been fun. Thanks for having me. Anytime,
come on back anytime that you want. Warren, Jillian, as always,
thank you both for being here.
Speaker 3 (01:19:48):
Thank you, it was fun.
Speaker 1 (01:19:50):
And to all of our listeners, thank you for listening
to the show, and be sure to check out the
website for the survey to get yourself some AWS credits. All right, cool,
we'll see you guys next week.
Speaker 4 (01:20:07):
Mm-hmm.