
December 5, 2023 38 mins
In this episode, Henrik and Leandro are back (and hopefully more often now) to talk about the new skills that technologies, methodologies, and the industry are requiring from a performance engineer.
It is not the same old performance testing you used to know anymore.
You need to let go of a few skills and pick up several others.
Which ones? Let's find out!

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:10):
Hello, perf friends, and welcome back to the PerfBytes show, the super popular... I wanted to say still the first
and only performance testing podcast, but, ah, we have some friends around
in the space now, and it is fun to have more. And, well, you
may be asking: who are these guys?

(00:30):
I haven't seen them in so long that... who are these people? So, for
the ones that do not remember, I am your amigo Leandro, also known as
Señor Performo, here today, coming with a special episode after a while.
Because we're back, we're going to be doing stuff, and I have my amigo, mon

(00:52):
ami Henrik. How are you? Yeah, I'm good, I'm super
fine. Based in France. My name is Henrik Rexed. I work
for Dynatrace as a Cloud Native Advocate, and I also try to produce content
on the observability world. So if you're looking for content on the observability side,
check out Is It Observable. And also, starting today... in

(01:12):
fact, I just published the first episode of PerfBytes en Français. So,
Leandro, you do PerfBytes Español, and now we have
the same thing for the French-speaking audience. They now also have content
for performance engineering. That's super exciting. Yeah, lots of news. Not

(01:33):
only is PerfBytes growing. It's silly saying it's growing, since we have been around
for a while. But I promise we're going to be doing so much more
stuff. We were talking about specials and
a lot of interaction. If everything goes well, this episode should be streaming

(01:55):
at the beginning of December. And not streaming: published. It's weird how to think
of things now, right? We could just click stream and we are on
the air. But okay, we're announcing, we are giving justifications of where
we are, which is not the point why you most probably are
coming to this PerfBytes episode. Judging by the title, what is the deal

(02:21):
today? Well, I proposed this topic to kind of get back on the
train and ramp up, totally, pun intended, this show again with some topics,
modern topics, modern performance. I know that we like to discuss a lot
how the panorama is changing, how everything is turning out in

(02:46):
our profession, and performance engineering, which is becoming like a glorious profession.
Right? A performance engineer is not anymore just doing load testing; it's
almost a site reliability engineer, an SRE. There's a lot of, I
don't know... the practice is being managed by different teams now, and I think,

(03:08):
because of the fact that we are dealing with more complex environments, speaking about
traditional load testing on, I would say, bare-metal or old-fashioned technology architectures won't be
very helpful for you guys. So I think it makes sense that we cover
the modern approach, to give you ideas and maybe give you some inspiration if you

(03:29):
have a gig running a test in a Kubernetes environment, for example.
Yeah, because the lines, as you were mentioning, are getting blurry,
very, very blurry. Am I a performance engineer? Am I an
SRE? Now I'm doing SRE things; some of us may even be trying
chaos experiments: I'm a chaos engineer now. This identity crisis that we are going

(03:52):
through... everyone used to be like, yeah, I just script,
I overload a system and try to bring it down to see what is
happening. Nowadays, many differences are starting to pop up. And
I wanted to ask you, Henrik: just think of this aspiring performance engineer

(04:13):
to get a job... what do you see as different nowadays in the job offerings,
or not the offerings, the requirements, for someone to start in the performance trade?
What do we need to know now? I think, because of the explosion, I
would say, of cloud technology... I think before, we had to know about web

(04:35):
technology, understand the concepts of the web, which is still true, nothing has changed
in that area; understand the behavior of a garbage collector, of
a proxy, and so on. But now, because we are in these ephemeral
architectures, with Kubernetes, with a lot of auto-scaling in place, there are

(04:58):
a lot of new things that you need to understand and be somehow an expert
on. Not an expert, but at least have a good, suitable
technical background, so you can grasp the bigger picture, right? Yeah,
I mean, because at the end, as a good performance engineer, you have
to understand and provide recommendations. And specifically, if you are in an environment where

(05:21):
you have auto-scaling available, then you can also think: oh, maybe
I will do less with one component and introduce auto-scaling, then I will
do more with less, or whatever. But that's something we didn't have in
the past. Or, depending on the architecture, maybe we had it,

(05:42):
but again, this is now like normal behavior: auto-scaling could be in place,
and there are a few things to understand. And also, I think,
when we were doing performance gigs in the past, we were relying
on a monitoring stack that was basically managed by the testing tool. Now
the observability market has exploded and OpenTelemetry is just around

(06:08):
the corner, and that brings you a lot of advantages, to
be honest, in the way you're going to design, the way you're going
to build your testing strategy. You have so much data now, which is,
I don't know, a luxury, because in the past we didn't have that.

(06:29):
So, understanding all those small components here and there: if you connect them and
plug them together, you will be very efficient. So I think, for
background, I would say: understand the notion of observability, what you have
at the moment and what you can do with it; plus Kubernetes; and also
all those modern load testing products that are out there, more lightweight, where you

(06:54):
don't necessarily need the UI, where you can fire load and then use something else
to do the analysis. I mean, again, you can still use traditional testing
tools for that. But I think many, many great things have
happened over the years, especially in this cloud-native space. You know,
another one that I have been noticing, and this started a while ago:

(07:18):
we cannot say it is completely new, but the way in which we now automate
our performance scripts is very different. Because in the past, our solutions used
to be monolithic, thick in the way the application was served
to us. Almost everything was in a single page and we had constant post

(07:43):
backs, but we had stateful applications, or solutions. And now, with the services
and microservices architectures, I think the scripting landscape has changed drastically. Because, I
don't know if you ever had the experience of trying to record... I don't know, in, ah,
what was the Oracle platform? Oracle Application Server, that was something, people.

(08:13):
I think the other one, Siebel: I remember that was ugly to automate
on, and people know it. Yeah, or SAP, if you just take
SAP also. Oh my god, I still shudder a little bit.
But so many of these applications... when we were creating our automations,

(08:35):
you required an interesting skill set to be able to create these complex automations, and
all that code, the view states that we used to record and correlate and
go crazy with. And nowadays, most of the time, you can get away with
just an API call: here's your HTTP GET, here's your HTTP POST,

(08:56):
you assemble them; I like to call them the modern LEGO pieces.
And then, if you need the big load tests that we used to do,
you can put them together. But now, what are we releasing? And that's another
big difference: we are now continuous, agile, and all these modern methodologies,
right? Yeah, but I think the advantage of this microservices approach is that

(09:18):
you can, of course, teach or try to help teams
to do their own testing. Because they build their components, their microservice
components; they do some testing on them; they sometimes do some security tests
on them. And now they can also fire load at the component level, so

(09:43):
on the microservices, with the gRPC protocol or maybe HTTP. But
at the end, the test itself is very basic. It's a couple of
lines. Sometimes you don't even need to record at all at the end,
because if you know your microservice, you can do it straight away.
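A sketch of what such a couple-of-lines, component-level test can look like. All the names here are hypothetical: `checkout_call` is a stub standing in for a real HTTP or gRPC request to the service under test, and a real run would use a proper load testing tool and far more iterations.

```python
import statistics
import time

def run_load(call, iterations):
    """Fire `call` repeatedly and record the latency of each iteration."""
    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        call()  # in a real test: an HTTP or gRPC request to the service
        latencies.append(time.perf_counter() - start)
    return latencies

def checkout_call():
    """Stub standing in for the microservice call under test."""
    time.sleep(0.001)

lats = run_load(checkout_call, iterations=50)
print(f"p50={statistics.median(lats) * 1000:.1f} ms, max={max(lats) * 1000:.1f} ms")
```

The point is the shape: no recording, no correlation, just the call you already know your service exposes, plus whatever statistics you care about.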
So I think that is basically enabling early performance engineering,

(10:07):
and that is exciting. And I think now the traditional way of
doing our load tests, the big bang load tests, is more on the
SRE side: I need to ship to production, I want to check that
everything is there, I can do some chaos engineering with load testing. And
so the overall flow of testing that we used to have in the past

(10:28):
is slowly changing, and I'm pretty convinced that the big bang testing that we
used to have will still be there, but less often than
we used to do it in the past. I mean, actually, I think
we shouldn't have been doing those big, pardon the language, the big-

(10:50):
ass load tests. I don't think we should have been running those as much
as we did in the past. And even today it is even less necessary,
because, as I was saying, we are continuously pushing little pieces. It's not the
big bang release; you just said it. It's just like: hey, in
this sprint I'm releasing this little module, this little service API, this

(11:13):
new call, or I'm updating a few things. But it's not like you
have to test the whole coverage. And adding up these modern infrastructures, the cloud,
microservices: you can just focus on that little piece.
And especially because of that as well, the big, biggest load

(11:37):
test perspective, I don't think it's so useful, except, and I keep telling you that,
except if you are this unnameable event ticket company when Taylor Swift shows
up on your platform, or you have a Black Friday, a Super Bowl ad,

(11:58):
a big event; there, you may need it, but that's not part
of our continuous releases, right? Yeah, I agree completely.
And there is something else that has changed: specifically, the way features are
enabled in our applications. Now we deliver small, small packages.

(12:22):
We can easily deploy one update on one component. We can even do
blue-green deployments. So it means that we test, basically: okay,
I want to have, let's say, twenty percent of the traffic on the old version and eighty percent
on the new release, and then, if everything goes well during a timeframe of
a couple of hours, we can remove the V1 and one
hundred percent goes to the V2. So that's a traditional blue-green

(12:45):
deployment, which I think we were covering pretty well in the past. But
there is something I have a big question about in my mind: now, feature
flags are taking off very aggressively in the market. And if you think about
performance... now I say, okay, so there's a bunch of feature flags,
I'm going to enable a few of them, so just a portion of

(13:05):
the traffic will have this feature. Which means now we are getting a new
component, completely new, that will do some new, I don't know, business logic,
because it targets a specific type of customer, and this has a performance impact.
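One common way flag systems expose a feature to "a portion of the traffic" is stable, hash-based bucketing, so the same user always lands on the same side of the flag. A sketch (the flag name, user IDs, and the 20% rollout are invented for illustration):

```python
import hashlib

def flag_enabled(flag_name, user_id, rollout_percent):
    """Deterministically map a user into a bucket 0-99 for this flag.

    The same user always gets the same bucket, so the cohort exposed
    to the new code path stays stable across requests."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < rollout_percent

# Estimate how much traffic a 20% rollout actually exposes:
users = [f"user-{i}" for i in range(10_000)]
exposed = sum(flag_enabled("new-pricing", u, 20) for u in users)
print(f"{exposed / len(users):.1%} of users hit the new code path")
```

For a performance test, this gives you the knob the discussion is asking about: you can size the load on the new component from the rollout percentage and the known traffic.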
I was asking myself: what would be the strategy? So, do I need
to figure out which feature flags... okay, this feature flag will cover, let's

(13:28):
say, ten thousand or twenty thousand people, so we need to test what would
happen when we enable that feature, and so on. So I think this
is missing, and we need to provide an answer on what would be
the best methodology when you have to deal with feature flags in your
applications. Well, yeah, and this is also, as you say,

(13:48):
another level of abstraction that we need to understand and figure out. You were mentioning
the feature flags, but as well, let's say now, with cloud native and
Kubernetes, you have, let's say, a container with one version, in a
pod that has, like, five instances, just to give a number. And

(14:11):
when you were saying blue-green... blue-green was like a big
deal before. But I have seen and heard of some places where, okay,
out of the instances that I have of this pod, start migrating,
but just move one to the new version. It should be somewhat transparent most
of the time; your end users don't know which version they

(14:33):
are touching. If the changes are not drastic enough, you can just start
sampling, without actually scripting at all. If you have good instrumentation, good
telemetry, I don't see much of a need, actually, to keep doing these automations,
right? We can just do micro-experiments on the continuous releases; I'm not
saying big load tests. I agree, I agree,

(14:56):
agree. But also, coming back, because you mentioned a point, and
I think it's a technology that the performance
engineer needs to understand, because at the end you would be one of the people
driving, maybe, the settings to get a better behavior: the service mesh. Because, typically,

(15:18):
when you do blue-green deployments, you would probably use traffic splits; it's one
of the features covered by a service mesh. Or, let's say you want to
protect a component: you can implement retry logic or request timeouts.
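In a mesh such as Istio, these policies are declared in configuration rather than coded, but the behavior applied on your behalf is roughly the following. This is a simplified sketch: real proxies enforce the timeout as a deadline on the in-flight request, while this toy version only checks elapsed time afterwards, and `flaky` is a made-up stand-in for an upstream call.

```python
import time

def call_with_policy(call, retries=2, timeout_s=0.5, backoff_s=0.01):
    """Mesh-style protection in client code: bound each attempt by a
    timeout and retry failed attempts a limited number of times."""
    last_error = None
    for attempt in range(retries + 1):
        start = time.perf_counter()
        try:
            result = call()
            if time.perf_counter() - start > timeout_s:
                raise TimeoutError(f"attempt {attempt} exceeded {timeout_s}s")
            return result
        except Exception as exc:
            last_error = exc
            time.sleep(backoff_s)  # give the component room to recover
    raise last_error

# A flaky stub: fails twice, then succeeds. The policy absorbs the failures.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("upstream reset")
    return "ok"

print(call_with_policy(flaky))
```

From a performance engineering angle, the important part is knowing these policies exist: retries amplify load on a struggling upstream, so they interact directly with your test results.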
There are so many small features in there to protect your components. If you
know that there's a limit, you want to implement those policies and then make

(15:41):
sure that the component won't be blasted, because the policy should be there,
in place, to protect the component. So yeah, service mesh, clearly: if
you want to add new knowledge, a new skill to your portfolio or
your resume, I would say Kubernetes and service mesh. Because with a service mesh, you see that there are

(16:02):
a lot of good things that we didn't have in the past, and now, just
by thinking, oh, I need to implement this, then boom: I
don't have to write code, I just implement the logic and that's it. And
it's interesting now that you mention service mesh and Kubernetes, because I am finding
some other levels of abstraction where the knowledge from the past can be very relevant

(16:26):
today, in kind of weird variations. Because in Kubernetes, and in some of the pod
situations, we have internal load balancers, and the problems that we had in the
past with big loads and stuff like that, where we had to check the
configuration, how you are passing the information... it's repeating now. But in

(16:48):
the Kubernetes controllers, we have to check the internal network of our
pods and how everything is happening. If you didn't learn, from those days,
when we were like, hey, is your load balancer round-robin, or is
it working this way, or is it monitoring... if you don't understand how
some of those things work as well, it may be difficult to pinpoint situations

(17:10):
that happen in Kubernetes very similarly to how they used to happen on
bare metal. I agree. I mean, most people just forget that
Kubernetes is just an orchestration layer sitting on a physical machine or a
virtual machine, and then it builds a local network. So all the limitations

(17:33):
that we knew in the past for a physical machine, in terms of CPU,
in terms of ports and types of threads and everything: all that stuff
is still there with Kubernetes. It's an advantage to know
those, because then you have those natural reactions: oh, but
I will allocate so many ports, so I will run out of ports.
I mean, that could be a situation. But yeah, all those things,

(17:59):
even the load balancing, as you mentioned. And the advantage is that in the past
we were not able to touch that; today we may have the option to
touch it, because at the end it's just a YAML file,
it's a CRD, a Custom Resource Definition, so it's a custom object. So
you can basically touch and modify it if you see that the load
balancing is not efficient. Usually it's efficient, but if you see that,

(18:22):
you want to make sure that Kubernetes understands that one of the replicas,
one of the instances of your component, is dead, and you don't want to
serve traffic to it. Then this is where something like a service mesh with a
circuit breaker will do a great job: it's going to basically remove,
for a couple of minutes, one of the replicas, one of the instances, until

(18:48):
it gets healthy again. And for this type of thing, if you know the
base logic of infrastructure and networking, plus the new things, then you're going to
be a champion. You know what you got me thinking about with this?
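The bookkeeping behind such a circuit breaker is small enough to sketch. This is a toy model; in a real mesh (for example Istio's outlier detection) the consecutive-error count and ejection time are configuration fields, and the names and numbers below are made up.

```python
import time

class CircuitBreaker:
    """Eject an instance after `max_failures` consecutive errors and stop
    sending it traffic until `cooldown_s` has elapsed."""
    def __init__(self, max_failures=3, cooldown_s=30.0):
        self.max_failures = max_failures
        self.cooldown_s = cooldown_s
        self.failures = 0
        self.ejected_at = None

    def available(self, now=None):
        now = time.monotonic() if now is None else now
        if self.ejected_at is None:
            return True
        if now - self.ejected_at >= self.cooldown_s:
            # Cooldown over: let the replica try to serve again.
            self.ejected_at = None
            self.failures = 0
            return True
        return False

    def record(self, success, now=None):
        now = time.monotonic() if now is None else now
        if success:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.ejected_at = now  # stop routing traffic here

cb = CircuitBreaker(max_failures=2, cooldown_s=60.0)
cb.record(False, now=0.0)
cb.record(False, now=1.0)       # second failure trips the breaker
print(cb.available(now=2.0))    # False: replica is out of rotation
print(cb.available(now=120.0))  # True: cooldown elapsed, back in rotation
```

Under load, this is exactly the behavior that keeps a dead replica from dragging down your measured error rate while it recovers.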
In the past, yeah, you needed the infrastructure god to give you access
to the routing tables of the load balancer and the configurations. And nowadays, and,

(19:14):
pardon my French, pun intended, you can say:
I'm going to clone a little piece of the repository, instantiate this little
piece of code, have my own YAML, configure these things, and I can play
with it and fine-tune it, find the right configuration. And even if you are not
able to, which shouldn't be the case most of the time, you should

(19:38):
be able to feed in this YAML and collaborate. Because, and here is another big difference
from what we used to have, we have access to the code. We
are part of smaller teams that are collaborating; hopefully you're talking to your
developers, to your infrastructure people. Everyone has some sort of communication, and you
can say: hey, this is a microservice that I'm

(20:00):
working on, and I'm testing how the load balancer behaves; I want to try another
configuration, I'm going to bring it to my local environment, or whatever. And that's
another one. Because for some of the monolithic solutions of the past, you needed
like a rocket-sized machine to have the... you had to have all of production,

(20:21):
the whole solution, because it was a monolith: it was everything or nothing. And
nowadays you can just take these little pieces: I'm going to play
with them, have a new YAML file, probably send a pull request,
or even you yourself just apply it to Kubernetes, or however you're managing it.
That's a huge advantage and difference from what we used to do before. When

(20:45):
you were mentioning earlier some of these automations that we had to create
by reverse engineering the software, recording and finding the correlations and doing all that
mess... in those days, even when the developers were already gone and the code
was compiled, everything was just: good luck finding the issues and trying to

(21:06):
roll back. There was no rollback. It was as messy as hell.
And nowadays, I find, even the developer can create that type of
automation, play with it, instantiate it, check the load balancer. It's
not only you; you are not the bottleneck,
right? Pun intended. Definitely. And some of these differences, I think... when you were

(21:29):
saying, Henrik, for engineers that are studying or that are going to keep moving in
the performance testing area: Kubernetes, and what was the other one? The
mesh, the service mesh. But even going to a lower level, Docker:

(21:52):
there are so many things with containers. I have met a couple of professional
performance engineers, with seven or more years of experience, that got so used to recording,
correlating, executing with the monitors embedded in our super thick, heavy, humongous
performance testing tools. And you're already laughing; did I bring back memories? And

(22:17):
in these situations... it's not like that anymore, and these performance engineers cannot keep up.
That's another thing that is super important nowadays: being able to clone code,
collaborate, and understand these modern social networks that are formed around our coding,
our projects, our repositories. So many engineers cannot do a pull request, or

(22:42):
even do a clone of a repository to start playing locally, and do
not understand this Docker, this container technology. And I count myself among
some of those: not that long ago, I was like, I understand,

(23:03):
I know that there's a Docker thing, but I never touched it. And
then you start to see, like, oh, there's this disk image, and okay, now there's
Compose; you can just quickly bring up a bunch of things, like a magician out
of the hat. Everything is installed, ready to run, and you don't
remember the old days when you had to download, from a share drive,

(23:23):
a bunch of code and compile it yourself, or get an installer, and
make sure you have the right software and everything to make the code run. It was
just a pain to deploy things. And now, with containers, it
simplifies the work so much. And this is what I like with Kubernetes:
at the end you rely on a container, and if you understand the

(23:45):
logic of what you can do behind it, you can do crazy stuff, honestly.
And some of those things... I think another one, because I see job offerings for
performance engineers and it's still the JMeter knowledge, the understanding of test cases, that they want,

(24:06):
and that tells me that they still have complex, multi-step test cases where
you definitely have to reverse engineer with recording, or at the very least understand what
the flow of these services is and know what is going on, up front. And I think
it's about the maturity of the organizations.
I mean, if you still have corporations that are requesting traditional, you know,

(24:33):
as you mentioned, big-ass load test requests every single month or every two
months, it means that they have not started their journey to a continuous approach,
continuous testing. Because at the end, I think testing, or automated functional testing, is
something that you can start understanding, that you can set up as an automation.

(24:57):
But performance has always been a challenge; we have talked about it for so many
years, because you need expertise, you need to be able to analyze.
But I think now, with the stack that is available, normally we
could change that, because we could at least run the tests and get a status

(25:18):
out of the tests by using SLIs, SLOs, and scoring methodologies, and then at
least you have greens, and you trust those greens. You don't have to go
back and look at: is it a real green, is it a false green?
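The scoring idea can be sketched as a tiny evaluator: each objective that passes earns its weight, and the run is "green" only if the total clears a gate. The metric names, thresholds, and weights below are invented for illustration; tools in this space express the same idea declaratively.

```python
def evaluate_slos(results, slos):
    """Score a test run against SLOs: each objective that passes earns
    its weight; the score is the earned share of the total, in percent."""
    total = sum(slo["weight"] for slo in slos)
    earned = sum(slo["weight"] for slo in slos if results[slo["metric"]] <= slo["max"])
    return 100 * earned / total

slos = [
    {"metric": "p95_latency_ms", "max": 300, "weight": 2},
    {"metric": "error_rate_pct", "max": 1.0, "weight": 3},
]
results = {"p95_latency_ms": 240, "error_rate_pct": 2.5}  # latency ok, errors not
score = evaluate_slos(results, slos)
print(f"score={score:.0f} -> {'green' if score >= 90 else 'red'}")
```

A gate like `score >= 90` is what lets a pipeline trust a green without a human re-reading the raw results.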
At least you're saving time, where you just focus on errors. And
I think, if we succeed in pushing this approach to more projects or customers or

(25:47):
accounts, then I think the market will change, and the skills required to
deliver will change over time as well. That's a very interesting one.
Because, as you were saying, skill-set-wise, of course you've got to understand HTTP
requests, how a back end and a front end interact with each other,

(26:11):
how all these things work, which was needed in the past. I think
that was the biggest core of a performance tester, or an engineer, depending
on what level you are talking about: just being able to generate
these simulations, these automations. And for an engineer it was just like:

(26:32):
okay, now you understand networking, databases, a little bit of development, so that
you can catch performance issues that come from the code. But now, on top
of all of that... well, I think that you can get rid of a
little bit of the mess of the correlation and automation processes. They are not that
ugly anymore, and if you're facing those, probably your software is a little

(26:55):
bit outdated, or has a set of issues, I think. I think what would be very
useful for the end users is if we invite, for example, the team from
Testkube, where we show how they can automate the tests using
any load test tool; we can pick one of them, and then we can implement

(27:18):
this scoring approach, to show how it works, so you can see it
live: what that would really mean in terms of implementation. And then,
from there, you can try it out yourself. I think we
could try to do some episodes covering a few ideas, to
at least have the concepts visually in mind. Because I think we are presenting,

(27:42):
and talking is great, but I think we also need to show
things. Some of the things that we are mentioning are, on their own,
big rabbit holes that, yeah, we need to bring someone in for, or even among ourselves
dig into and untangle, for the people listening to us and trying

(28:03):
to keep their performance skills up to date. Or, if you
are listening to us and you are like, yeah, I would like to
start as a performance engineer: what do I need to know, right? And,
just to start the ramp-down, I think that we listed some of the
skills that a performance engineer for this modern world should have. I'm going to

(28:26):
add one that I don't think we mentioned as core, and that probably
is going to be a lot of what we talk about next year. Spoiler
alert: we're going to be doing more episodes next year. But OpenTelemetry
as well. I think understanding OpenTelemetry, and how to observe the metrics

(28:48):
that you get from OpenTelemetry, and how you can utilize the
OpenTelemetry data... because now, I mean, we're not going to
drill in too much, but yeah, definitely, I think it's a
standard. For those who have never heard of OpenTelemetry: it's a standard for

(29:08):
building a standardized structure for metrics, traces, logs, and soon profiling, and all
the vendors rely on it today. At the moment we're mainly focusing
on the observability side, but keep in mind that this data, which
is going to be structured in a common way, you will have access to,
and you can think about making a processor or something that analyzes

(29:33):
this common schema to build up, or generate... understand the workload, understand the peak
traffic, understand the scenarios of interest, and so on and so forth. I
think it's super exciting, and from a performance engineering perspective I
see a lot of great potential to make more efficient and reliable load tests at
the end. Yeah, and not only load tests: performance in

(29:57):
general. In general, yeah, agreed. Yeah, I was going to mention,
I just totally forgot: we knew metrics like
response time, yeah, how fast does the page give us what we
are querying or searching for? But we have new metrics that have appeared
in the landscape. How fast can Kubernetes initiate a pod that we need

(30:21):
for the extra load that is happening? How fast is it decommissioning or
destroying the pods or the services that we are not using anymore? And how
about these resources? Because we used to just say, hey, the RAM
is going down, that's all we care about, and what is the CPU doing. But
now we have several CPU metrics. And as you were saying, with OpenTelemetry,

(30:44):
we're getting so much information nowadays that it is super easy to drown
in it. And that's another key skill: how to tell which one is important.
And I think we should also cover the metrics for Kubernetes or container environments,
so the notion of throttling; all those concepts need to be understood precisely,

(31:06):
because at the end this is a huge bummer, a huge problem, for performance.
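For throttling specifically, the cgroup CPU controller exposes counters (in `cpu.stat`: `nr_periods`, `nr_throttled`, `throttled_time`), and the ratio of throttled periods between two samples is the number to watch. A small sketch with made-up sample values:

```python
def throttle_ratio(prev, curr):
    """Fraction of scheduler periods in which the container was throttled,
    between two samples of cpu.stat-style counters."""
    periods = curr["nr_periods"] - prev["nr_periods"]
    throttled = curr["nr_throttled"] - prev["nr_throttled"]
    if periods <= 0:
        return 0.0
    return throttled / periods

# Made-up samples taken one scrape interval apart:
prev = {"nr_periods": 10_000, "nr_throttled": 1_200}
curr = {"nr_periods": 10_600, "nr_throttled": 1_500}
ratio = throttle_ratio(prev, curr)
print(f"throttled in {ratio:.0%} of periods")
```

A container can show modest average CPU usage and still be heavily throttled against its limit, which surfaces as latency; that is why this ratio, and not just raw CPU, belongs on a performance dashboard.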
So if you understand the concept, then you may be able to optimize the
system. So, to wrap up, let's list them.
I mentioned OpenTelemetry, some of the observability principles, and knowing how to

(31:27):
visualize and understand what matters, as we mentioned. I think, of course, the
basics of Kubernetes; I would say service meshes. I would
also say, by the way, we did an episode on SLIs
and SLOs, remember the concept of scoring. Recently, Keptn

(31:48):
released the AnalysisDefinition, which is super exciting for performance
engineers. So the old quality gate is back in the Kubernetes world.
Those who used to have quality gates should be excited. This,
I think, is a huge topic, because it could basically enable automation within their organizations.
I will add, as I mentioned: Docker. Understand Docker and Docker Compose, because

(32:13):
if you go straight into Kubernetes and you don't know the underlying container theory, where
everything came from, you may be a little bit lost. So, little
by little, based on a personal learning and growth path: Docker, Docker Compose, then
probably Kubernetes, and then go for the many crazy things that you

(32:34):
can do with Kubernetes. Definitely. You have to understand repositories and how
the code gets delivered to you. Our friend Ana Medina had this,
like, rule, a minimum bar that her team had to pass: have you done
a pull request? Do you know how to do a rollback? Some of

(32:58):
those things are based on Kubernetes and on Docker. And now that you
understand Kubernetes, doing a rollback, where you have the history of
the previous versions and you can go back... oh my god, this is
so easy, and before it was just a nightmare if you released anything
ugly into production. So, I think... is there anything that we are

(33:22):
missing from this? I think if you cover containers, we will have
the notion of how resources are shared in a container environment, so which
metrics you should pay attention to. That could be interesting. But at the
end, yeah, I think that would be a good list to start with.
We probably will have extra topics that will come up in this

(33:42):
journey, this modern journey of performance engineering. I mean, topics... we're going to
cover so many. But for a modern performance engineer, these skills... it's like
you're Batman and you cannot get out of the Batcave without
these skills in your utility belt. Kubernetes, yeah. And being able
to explain complex things, like to your three- or

(34:06):
four-year-old kid. Especially because with this kid... we all know
what we are talking about: that the cloud is infinite and almighty
and you can do all you want. And the bill for the cloud will be
almighty if you're not careful as well. So, oh, I'm thinking of
something that we should add to this list, because we used to drive performance

(34:30):
through response times and resource usage, but people are very eager to optimize costs,
optimize energy. How can I make my code greener? And so on.
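Cost can be treated as just one more KPI of a test run. A sketch with entirely made-up figures; a real calculation would pull node hours and rates from the cloud provider's billing data.

```python
def cost_per_million(requests, node_hours, hourly_rate_usd):
    """Cost efficiency of a run: infrastructure spend per million requests."""
    spend = node_hours * hourly_rate_usd
    return 1_000_000 * spend / requests

# Hypothetical run: 12M requests served using 8 node-hours at $0.40/hour.
print(f"${cost_per_million(12_000_000, 8, 0.40):.3f} per million requests")
```

Tracked across releases, a number like this shows whether a performance regression is quietly inflating the bill even when response times still look acceptable.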
So maybe introduce this concept: how can I measure cost? How can
I measure energy? That would also be something useful, because at the end,

(34:52):
it's an extra KPI that you can add to your testing
approach. I think that also falls within the communication part, where part of
what we used to have to report was: it won't release, it's
too heavy, it will bring down your production server, the big box that
you have in the basement. But this is a metric that we need to

(35:15):
be able to analyze better and understand, because the cloud is very dependent on your
performance to be cost efficient, and that's something that many do not understand.
Or they just migrate, like, yeah, I'm not going to spend on bigger servers
ever again, and then they get the cloud bill, like, what just happened?

(35:37):
Right. So, to wrap up, those are the skills that we think a
performance engineer should have. If you think that we missed any critical
2023, almost 2024, skills, please leave a comment below. Leave
us a comment and stars on your podcast app, Google Podcasts, YouTube, and all

(36:01):
the media where we are going to be publishing this, and let us know if
you want us to cover something and dig deeper, or if we forgot
something. Let us know. But more PerfBytes is coming, right?
We have some plans. This is going to be like a busy Christmas.
There are going to be some PerfBytes Christmas presents coming up soon. Stay tuned. Wink,

(36:22):
wink, spoiler alert, and all those things. And what else is coming here,
Henrik? You also have a few more news items. Yeah, so, PerfBytes France:
there's a livestream next week that will happen with Carrefour. We
will have a couple of livestreams planned with a couple of retailers, one

(36:44):
retailer for each episode. And again, on Is It Observable, I have tons
of episodes, a lot of livestreams coming up. But what I would suggest
also: if you are doing crazy things on your
project, if you're doing something that is really innovative, reach out to us.
It would be cool to bring you on the show and just interview you and chat

(37:07):
about what you've been achieving. It would be very interesting to share your experience,
share stories with the community. Yeah, so that everyone who is also
kind of stumbling... if you figured out the wheel, well then, not everyone listening has
to reinvent it. And you also will get a cool shout-out for

(37:28):
sharing something awesome that you did, and we would love to be the platform
for everyone to share. Let us know. So don't miss it:
Is It Observable. Like and subscribe as well. We will leave links here on YouTube, if
you are watching this on YouTube, and stay tuned, because we are going to have
a Christmas present from the PerfBytes friends, some probable surprise guests. Let's see how

(37:53):
this end-of-year party goes at PerfBytes, and stay tuned, because we want to
do a lot more; there's so much more coming, and I think that we
will see you soon. So, with that, amigo Henrik, do you have anything
else? No. I thank everyone for watching, and see you soon
for another PerfBytes episode. Thank you very much. And as we say in

(38:16):
PerfBytes Español: muchas gracias. And how do we say it in PerfBytes en Français? Merci! Adios, adios.