
August 14, 2025 66 mins
In this episode of JavaScript Jabber, we sit down with Vinicius Dallacqua, a seasoned software engineer with a passion for performance and developer tooling. Vinicius shares his journey from coding in central Brazil with limited connectivity to building cutting-edge tools like PerfLab and PerfAgent. We dive into the intersection of AI and DevTools, exploring how artificial intelligence is transforming performance debugging, web development workflows, and even the future of browsers.

We also tackle the big questions: How do developers avoid bias when building in high-performance environments? What role will agentic browsers play in the evolution of the web? And how can AI-powered DevTools lower the barrier for developers intimidated by performance profiling? If you’re curious about the future of frontend performance, DevTools, and AI-driven development, this conversation is packed with insights.

Links & Resources


Become a supporter of this podcast: https://www.spreaker.com/podcast/javascript-jabber--6102064/support.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Hello, everybody, welcome to another exciting episode of JavaScript Jabber.
I am Steve Edwards, the host with the face for
radio and the voice for being a mime, but I'm
still your host; at least I lost the coin flip
today. With me on our panel is Mr. Dan Shappir,
coming live from Tel Aviv.

Speaker 2 (00:21):
How you doing, Dan?

Speaker 3 (00:23):
Hello from Tel Aviv, where we're not at war with
Iran anymore.

Speaker 1 (00:27):
Yes, the last time we had Dan on, I believe
he had to run when bomb sirens started going off.

Speaker 2 (00:34):
So hopefully that won't happen to you.

Speaker 4 (00:35):
Yeah, missiles, not bombs, but...

Speaker 1 (00:37):
Sorry, yeah, but slightly different, same end result exactly. And
our special guest today, and I'm going to try to get
this right, is Mr. Vinicius Dallacqua.

Speaker 2 (00:48):
How you doing?

Speaker 4 (00:49):
Doing great? Yes, doing great. I'm here from Sweden, where
we can't decide if it's hot or not this summer.
Some days it's not, and then, what is with this heat? It
can actually get, like, up to thirty-five degrees. Wow,
that is actually pretty hot.

Speaker 1 (01:06):
That's pretty warm. That's around one hundred Fahrenheit, something
like that.

Speaker 3 (01:10):
Yeah, I hear that; it's quite close to here.
Europe is undergoing this kind of severe heat
wave, and we are too.

Speaker 4 (01:17):
Yeah, yeah. Like, the south of Europe is quite brutal right now,
up to, like, forty-six degrees or something.

Speaker 3 (01:24):
Wow.

Speaker 4 (01:24):
In places. Wow, that's quite bad.

Speaker 2 (01:26):
It is warm.

Speaker 3 (01:27):
Yeah, and not everybody has air conditioning, I guess. No,
Europe is not built for heat.

Speaker 4 (01:32):
I'll tell you this.

Speaker 1 (01:33):
Yeah, all right. So what we're here to talk about
is the confluence of performance and AI. So actually, before
we get started: we know where you live. Why don't
you tell us what else you do, and why people
should give you money, if they should, and that kind
of thing.

Speaker 4 (01:49):
Yes. So I am a software engineer. I've been around
for quite some years, as you can see from the
white hairs around here.

Speaker 2 (02:00):
There's, like, a lot of the gray going on. So yeah,
I like to...

Speaker 3 (02:03):
Say white Harry is better than no here.

Speaker 4 (02:06):
Yeah yeah, yeah, yeah.

Speaker 2 (02:08):
I feel.

Speaker 4 (02:10):
I've been working for a while. I mean, I've been around
since, like, I had to ship for even, like, a
four-four-oh. Wow. Yeah, yeah, yeah. So I had...

Speaker 2 (02:20):
I remember those days, I tell you.

Speaker 4 (02:23):
I've focused on performance for the biggest part of my career.
Although I never really had the title of, like,
performance engineer, performance was always the focus of my
interest and career, coming from Brazil, Central Brazil nonetheless, where,
you know, internet speeds were never up to standards, let's
say. Let's put it like that. So, you know,

(02:43):
UX performance on the front end kind of became a thing for
me as well, more and more over time. But performance always,
because it always was a subject that I care deeply about.

Speaker 3 (02:53):
Before we continue, I'm just curious: Central Brazil, that's,
like, what, Manaus?

Speaker 4 (02:57):
Not actually; so Central Brazil, that's, like, Tocantins,
Brasília, so that's within the center. I was
born in the northern states, but then, like, my
entire life I've lived in Tocantins, which is Central Brazil.

Speaker 2 (03:12):
Okay. Yeah, that's an interesting point you bring up.

Speaker 1 (03:15):
You know, a lot of developers, when they're developing apps,
get so used to their, you know, environment
on their high-speed, you know, fiber internet, and don't
always think about the person that might be using their app
on, you know, 3G out in the middle
of nowhere, you know, something like that. And so, yeah,
performance like that, in terms of bundle sizes for JavaScript

(03:37):
and whether you need that much JavaScript, you know, that's
a whole topic in itself.

Speaker 4 (03:41):
Yeah, I'll tell you this: for where I
come from, the average speed for internet is still, like,
you know, around fifteen megabytes, up to like one hundred megabytes
if you're very, very well off. It's very hard,
it's very hard to get anything above that.
Most people live on, like, a 4G connection.

(04:05):
I was actually checking out a map, a
world map, for 5G connectivity around the world,
and Brazil, especially Central Brazil, is still very
much mostly on 4G.

Speaker 3 (04:17):
So, back when I was working at Wix: Wix has
a pretty strong presence in South America, and especially in
Brazil and Argentina, and there was definitely a concern about
making sure that we provided a good experience in
those regions. But I want to emphasize that it's not
just people in certain countries; it's everybody, in certain situations.

(04:42):
I've been in airports in various places where I'm on
this funky data plan and I'm trying to check my
flight on an airline website, and it's super slow, because
they're downloading this huge image of an airplane taking off
that I have absolutely no need for. Or, you
know, you might be in a

(05:03):
subway with bad reception, or in an elevator, or I
don't know where. But this problem of
bad connectivity is something that can happen to everybody in
certain scenarios, and very often in the scenario where you
especially need to get at that data.

Speaker 4 (05:23):
Absolutely, yeah, absolutely. And I mean, it was always at
the forefront of my biggest interests and concerns. When
I moved to Europe, you know, I actually
became kind of skewed towards the better end
of connectivity. It's very easy to build up your biases,
but you can be humbled very quickly when you come

(05:44):
to test stuff on connectivity that is spotty or,
you know, unreliable. So one of the things that
I always cared about, and that used to drive
my work wherever I worked, is performance. And from that I ended up
getting more into dev tools; it kind of became a thing

(06:05):
for me. Driving performance initiatives, driving performance as a subject,
always kind of walked hand in hand with
the tools, in a way, at least for me. It
also walked hand in hand with product, which, you know,
in the end brings me very close to being like
a product engineer. So it kind of helps to self-dogfood
within the profession, but it brought me very close

(06:29):
to dev tools over time. So I ended up building
integrations with Lighthouse very early, in the days when
we didn't even have proper integrations with Lighthouse,
and worked with these kinds of things to facilitate
developers' usage of dev tooling and performance tooling. And over
time it became kind of the project that I'm currently
building now, which is a fork of DevTools, in

(06:51):
a way, to bring kind of a different vision of
how I think, not how I think, but what dev tools
could be if they were built for, let's say, quote unquote,
the average developer. Because one of the biggest problems that I
always faced, and it ended up being the reason why
I ended up working on dev tools and ended up building

(07:13):
this tool, and to be clear, the tools
that I'm talking about are called PerfLab and PerfAgent,
which is part of PerfLab. The reason why I ended up building them is just
one of the things that ended up happening: I worked at Spotify for almost four years,
I worked at Klarna for almost three years, and a

(07:34):
very common thing was that developers either did not have
as much interest to open the tools and explore the
flame graph and explore, you know, the data
behind those problems, or they were kind of intimidated by it,
even at places such as Spotify and Klarna, because it
is a lot of information all at once, and it
can become a very noisy, you know, tool

(07:56):
if you're not an expert. So for me,
it ended up being a way to experiment and
explore how to build something that is approachable by anyone.

Speaker 1 (08:07):
So you're talking about performance on the front end. Sorry, Dan.
It sounds like all yours is on the front end.
Do you ever do any back-end performance stuff, or
has all your performance work pretty much been on the
front end, in the browser?

Speaker 4 (08:18):
It's been mostly in the browser.

Speaker 1 (08:20):
Yeah, because, I mean, I can remember doing
PHP profiling with, like, Xdebug, you know, profiling, so there's
certainly performance work on the back end. I think a lot
of people probably take performance on the back end for granted,
you know, because the general rule is, you
know, when I'm dealing with a full-stack app,

(08:40):
you know, I prefer PHP and Vue with Inertia.js,
for instance; that's one of my favorite stacks. And you
can always say that any time you can offload stuff from
the front end to the back end, and get your
data so that when you get to the front end
you're not doing a bunch of calculations, you do that,
because the assumption there is that you're going to have
beefier resources on the back end in terms of
servers and stuff. And so there's probably not as much

(09:03):
performance profiling, although I have known people who have been
really big on performance profiling on the back end. But
I think it's more just sort of assumed: okay,
your back end can handle it better, because you've got
beefier resources back there, and so the
front end is really where we need to focus on
performance stuff.

Speaker 3 (09:19):
First of all, I'd obviously like to mention that we've
had quite a number of people on the show in
recent months talking about this topic. So if people are
really interested, there's definitely a lot of shows here to
check out. We've had people from Google
talking specifically about building the dev tools. We had, I think,

(09:39):
and I hope I'm not mixing his name up, someone talking about React profiling
and the tools that he built for that. We've had
Harry Roberts. So we've had a lot of people talking
about these subjects. So if people
are interested, there's definitely a lot of content here on
the podcast about it. But going specifically to your point, Steve,

(10:00):
you know, it's tricky, because very often it
is about the back end, because it might simply
be an SQL query that's not properly optimized, or doesn't
have the proper indexes defined and runs way too slowly,
something like that. But on the other hand,
your back end could be really, really well optimized,

(10:23):
but it's worthless because, as in the example I gave before,
you're downloading a huge image and that's just eating up
all your bandwidth, regardless of anything else that you're doing.
So you really need to take a holistic
view of performance and try to pinpoint, you know,
the root causes and identify bottlenecks. And that's usually what

(10:46):
dev tooling is for: identifying those, understanding
what to focus on, and doing the proper attribution of
locating the areas that are most problematic exactly.

Speaker 4 (11:00):
And it's really like that. That's one of the main reasons
why performance is such a difficult subject, right? Like,
to become, let's say, an expert, or even someone that
is confident to solve performance problems, is a very steep
vertical. The tooling can be somewhat intimidating if you never,
you know, delved deep into it or dedicated the hours
to it. And what is information for experienced developers or specialists

(11:23):
can be noise for, let's say, the average
developer that doesn't go around in dev tools as much.
So with that premise, I was interested in experimenting around
and seeing, you know, what I could come up with.
So I ended up forking Chrome DevTools. The advantage
of Chrome DevTools: first of all, the source code

(11:45):
is beautifully documented, so self-onboarding on it
was very easy. And it is Web Components with TypeScript,
so it's actually web technology, so it's really approachable for
the average developer out there.

Speaker 3 (11:58):
Yeah, who would have thought that web dev tooling is
built on web technologies? Like Inception.

Speaker 4 (12:04):
Yeah. I mean, a lot of people think it's
C++, right? Because the browser, the main
Chrome interface, is C++, a
lot of people actually end up thinking DevTools is C++ as well,
which is not the case. It is built on top
of web technologies, but it communicates with the browser via
the debugging protocol, and it communicates also via the different

(12:25):
protocols within the native layer. I ended up forking it,
and the first try around was building PerfLab as,
let's say, the main app. PerfLab
was a fork of the Performance panel, extracting
it as a standalone web component. It is already
built on web components, so extracting it was not

(12:46):
as hard. And what ended up happening, what I noticed,
is that I still used the Performance panel
as the main interface to drive interactions, instead of
it being something very close to a dashboard. And one of
the things that I constantly had a gripe
with is that dashboards can also be very daunting and

(13:07):
underutilized. Like, a lot of dashboard systems and
SaaS companies have a really good product, but it's heavily
underutilized, because people don't understand the data; they don't understand
what they want to extract out of the data, and they end
up underutilizing the systems. So it ended up being
not as, you know, friendly for earlier-stage

(13:27):
developers, or developers that don't have performance as their main
forte, let's say, their main subject.

Speaker 3 (13:33):
It's interesting, because this is something that Google themselves have realized,
that the Performance tab is really daunting. It's a
very, very busy interface with a ton of information, and
they actually created an alternative tab, I think it was

(13:53):
called Performance Insights or something like that, which was, as
we've been told on the show, never intended to be
a long-term solution. Rather, it was this kind
of playground where they were testing out various alternative
ideas that they thought they would eventually bring back

(14:13):
into the main Performance tab. So, kind of a playground to
try out various optimizations and ways of making the interface
clearer and easier to understand, especially for junior developers.

Speaker 4 (14:26):
And the Chrome team has been on an absolute roll
when it comes to improving the experience around dev tools
in general. So my next adventure with building something with
dev tools was, I think, about three years ago... no, sorry,
about a year and a half ago. So when AI
in general became more broadly part of the

(14:49):
common discourse, we started having better developer frameworks for building
with AI and stuff like that. And
everyone was building their own chatbots, in a way,
so I was like, you know, I'm going to
see if I can build something that is approachable for anyone.
And I remember, I think it was at
last year's performance.now(), talking with different people from

(15:13):
the DevTools team.

Speaker 3 (15:16):
And for those who don't know, by the way, performance.now()
is, yes, probably the leading performance-related conference. It takes
place in Amsterdam, so that's actually fairly close to you.

Speaker 4 (15:26):
Yeah, yeah. And I'm going to be there this year as well;
I'm actually going to be talking this year. Oh, cool. Yes,
it's on a very similar subject, so this is
kind of like a teaser, let's say a long-form
teaser, of the talk. So within, you know, last
year's performance.now(), I brought up with different members
of the DevTools team kind of,

(15:48):
let's say, a thought exercise: what could
the smallest possible subset of a dev tool look
like that, you know, helps people engage with it without
being daunting? So this was kind of the
thought exercise I was trying to have within
those conversations, and a lot of things came from it. And I was
demoing PerfLab within performance.now() as

(16:12):
well, gathering some talks and discussions around
the tools I was building. So early this year I
decided to build PerfAgent. PerfAgent started
with the same kind of spirit as PerfLab, forking
part of Chrome DevTools. So without going too much

(16:32):
into details: there are some parts of DevTools
that are the drivers of how DevTools reads and renders
trace information. That Trace Engine is something that I
then extracted. So instead of going for the main Performance panel,
I started to take the Trace Engine, to help me extract
information out of traces, and with that create my own

(16:55):
agent that can understand traces and try to help
people solve performance problems. So in a way I
ended up building a chatbot, in a way. But
some of the things I was trying to play
with, of course with the advent of generative UI, was
that thought process of what is the smallest possible
subset of a dev tool that I can build

(17:15):
that is progressive, which means it doesn't present all
the information at once. So it tries to be more
friendly, like a knowledge assistant, and helps with solving
performance problems, building on top of AI agents that can understand,
you know, trace files, and can understand call stacks and
all this kind of stuff, and help you solve problems.
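The Trace Engine workflow described here, reading raw trace events and turning them into insights an agent can act on, can be sketched in TypeScript. This is a toy illustration, not the real DevTools Trace Engine: the `RunTask` event name, the `ph`/`ts`/`dur` fields, and the 50 ms threshold follow the Chrome trace-event format and the long-task definition, but the function names and the prompt shape are invented.

```typescript
// Toy trace analysis: find long main-thread tasks in Chrome trace events
// and summarize them as a prompt-friendly "insight" an agent could act on.

interface TraceEvent {
  name: string; // e.g. "RunTask" for a main-thread task
  ph: string;   // event phase; "X" marks a complete event with a duration
  ts: number;   // start timestamp, in microseconds
  dur?: number; // duration, in microseconds
}

const LONG_TASK_US = 50_000; // 50 ms, the long-task threshold

function findLongTasks(events: TraceEvent[]): TraceEvent[] {
  return events.filter(
    (e) => e.ph === "X" && e.name === "RunTask" && (e.dur ?? 0) > LONG_TASK_US
  );
}

// Render the findings as the kind of guidance text an agent could hand
// to an AI editor ("this is what I see in the collected trace").
function insightsPrompt(events: TraceEvent[]): string {
  const tasks = findLongTasks(events);
  if (tasks.length === 0) return "No long main-thread tasks found.";
  const lines = tasks.map(
    (t) => `- task at ${t.ts / 1000} ms ran for ${(t.dur ?? 0) / 1000} ms`
  );
  return `Found ${tasks.length} long task(s):\n${lines.join("\n")}`;
}
```

For example, a trace containing one 120 ms `RunTask` and one 10 ms `RunTask` would yield a single long-task insight.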

Speaker 3 (17:39):
So just to understand: you're building this as what? As an
open-source project? As a service? As a product that
you intend to sell? As all of the above? What
is it?

Speaker 4 (17:51):
It's a very good question. I'm building it...
it's going to be open source, like, within the next
couple of weeks. I'm just cleaning up parts of the code
and making sure that it is in a good state
to be open-sourced and collaborated on with more people, because
it's been just me so far, long evenings and whatnot.
But it's also up, PerfLab, so people can,

(18:14):
well, they can join the waitlist, and I'm
slowly getting people onboarded into the service, mostly because
it's just me, so inference cost is a thing as well;
one of the biggest things with AI is inference costs.
So I'm going to come up with probably some form
of pricing model, or hopefully sponsorship to keep it free,
or whatever. But I didn't have

(18:37):
any, necessarily, like, any big plans for it, because it's
built on top of open source, and, like most
of the tools I use nowadays, it's kind of, let's
say, common ground. It used to be more the secret
sauce, as people say, where the system prompts, how you
organize the workflow, and how you organize the tools used

(19:01):
to be a big part of what differentiates one AI tool
from another. But as everyone pretty much started doing this,
it became less important, so much so that a
lot of different companies open-sourced their system prompts and
how they're building workflows and things like that. Even
VS Code now, with Copilot, open-sourced the system prompts.

That is so. So I'll ask my question in a

(19:23):
slightly different way: is this your day job? No, it's not. No,
it's not my day job. It's a good question. It's
been a passion project; it's been evenings and weekends building
this stuff. Because we now live in a very interesting
point in time, where there is this inflection point kind

(19:43):
of coming, and I feel like, as a developer,
as an engineer, I'm naturally curious, and I feel like
one of the things that makes a good engineer
is that natural curiosity to try stuff and see where
it fits within your workflow. But I did not want
to just be a consumer. I also wanted to build
something with it, and so I ended up studying some,

(20:03):
you know, even deeper ends of models and model platform
architectures and how to train models, the whole stuff. So
much so that, actually, as a side thing, I ended
up building the workflow to kind of simulate the mixture-of-experts
architecture, where you have, like, a router that

(20:24):
delegates to different workflows and different specialists. Because,
if you think about performance, as we just opened up
the show, it has different subjects, right? And that's one
of the biggest challenges of building a system, an AI
system, with a complex subject such as performance, because it's
very deep and also has different branches, and all
of them go deep. And Google, kind of, like,

(20:47):
if you go digging around the DevTools source code,
you will see that they're using kind of the same
approach, where they have AI agents that represent different parts
of DevTools. You have one for the network,
one for call frames, one for CSS, one for elements.
So it's a very similar way. But the
whole thing that I built has, like, a router

(21:08):
that will understand the request and, you know, navigate the
workflows and the different specialists to call within, you know,
the tool chain. But it's a very interesting
inflection point that we're going to have
in the next few years, and experimenting with it
felt almost like an obligation to me, to understand

(21:31):
where do all of us fit in within the next
few years as developers? Where does the web even fit in?
Because one of the things that I started thinking a
lot about, and that ended up being the reason
why I'm building PerfAgent, is: what is a
dev tool in this new era, when we can have
agents kind of talking to each other, solving problems? So

(21:53):
last week we had, last week we had UI Connect,
and for UI Connect, one of the things that
I wanted to do was to see... So I built a quick hack over the
weekend, just before UI Connect, where I created an MCP
server built inside of DevTools. The purpose

(22:17):
of that is: I had already built the agent tool
that understands traces and can solve problems, but you
have to hand it traces. Now, MCP is all the
hype and all this kind of stuff, so I wanted
to see, okay, how can DevTools,
how can a browser, adopt those new things, right? So
I started with DevTools. So I

(22:37):
built an MCP server within DevTools, so it connects
to Claude Code, to Cursor, whatever your AI editor
of choice is, and it connects through MCP. So what happens
is that, within the agent, you can just
ask it to, like, go record a trace
and then get insights, and it creates this self-healing

(22:58):
loop. Because people nowadays have these kinds of
tools to build stuff, and it's more approachable than ever
for people to prompt their way into building an app. Now,
as we all know, it doesn't build it in the most correct way.
It works, but at what cost, we don't know yet.
So that's kind of the next step, right?
Let's see how it would work within that kind of workflow.
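The mixture-of-experts routing described earlier in this turn, a router that delegates a request to specialist agents, can be sketched roughly like this. Everything here is an illustrative stand-in: a real router would likely be an LLM classifier, and each specialist would carry its own prompt, tools, and slice of the trace; the keyword table and stub responders are invented.

```typescript
// Illustrative router sketch: classify a performance question and hand it
// to a specialist "agent". A keyword table stands in for the LLM router.

type Specialty = "network" | "rendering" | "interactivity" | "general";

const ROUTES: Array<[RegExp, Specialty]> = [
  [/\b(lcp|ttfb|fetch|request|cdn|bundle)\b/i, "network"],
  [/\b(cls|layout|paint|style|reflow)\b/i, "rendering"],
  [/\b(inp|input|click|handler|long task)\b/i, "interactivity"],
];

function route(question: string): Specialty {
  for (const [pattern, specialty] of ROUTES) {
    if (pattern.test(question)) return specialty;
  }
  return "general"; // fall through to a generalist agent
}

// Specialists stubbed as labeled responders; real ones would run their own
// analysis over the relevant part of the trace.
const specialists: Record<Specialty, (q: string) => string> = {
  network: (q) => `[network agent] analyzing: ${q}`,
  rendering: (q) => `[rendering agent] analyzing: ${q}`,
  interactivity: (q) => `[interactivity agent] analyzing: ${q}`,
  general: (q) => `[general agent] analyzing: ${q}`,
};

function answer(question: string): string {
  return specialists[route(question)](question);
}
```

The design point is the one made in the conversation: performance branches into deep sub-domains, so a single monolithic agent tends to do worse than a front door that picks the right specialist.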

Speaker 3 (23:20):
So it's kind of like, within Cursor, I could theoretically
tell it, here's my code, make it faster, and
it would try to optimize it, and then run it
through dev tools, get back some data, make modifications, run
it again, and see whether it's improving or regressing, stuff

(23:44):
like that.

Speaker 4 (23:45):
Yeah, and it can even fix actual points
within the code, because I hooked into the same internals. So
within DevTools, I hooked into the same internals of the
Trace Engine that I used to build my agent. So
the MCP server is called from Cursor and told, okay,
record the trace, and it records, of course, the trace, and

(24:05):
then it has another tool that will say, okay,
now give me the insights. So the server will
return actual insights, with, like, a prompt
to guide it: this is what I see in the
collected trace. And it will return a full-on prompt to
guide Cursor on what to patch, what to fix, and
Cursor can self-execute and improve the code based

(24:27):
on that. So it's connecting straight into DevTools.
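The record-the-trace/get-the-insights round trip described here amounts to a measure-patch-measure loop. Below is a hedged simulation: the MCP server is stubbed, each "fix" simply halves a simulated INP value, and the tool names are invented; only the overall shape of the loop and the 200 ms "good" INP threshold (from the Core Web Vitals definition) come from the discussion.

```typescript
// Simulated self-healing loop: measure, read guidance, patch, re-measure.

interface TraceTools {
  recordTrace(): number;            // stub: returns worst INP in ms for this run
  getInsights(inp: number): string; // stub: prompt-style guidance text
}

// Stub "DevTools MCP server": each applied fix halves the simulated INP.
function makeStubTools(initialInp: number): { tools: TraceTools; applyFix: () => void } {
  let inp = initialInp;
  return {
    tools: {
      recordTrace: () => inp,
      getInsights: (v) =>
        v > 200
          ? `INP is ${v} ms; break up the long task in the input handler.`
          : `INP is ${v} ms; within budget.`,
    },
    applyFix: () => { inp = Math.round(inp / 2); },
  };
}

// The loop an AI editor effectively runs until INP meets the 200 ms
// "good" threshold, or it gives up after maxIterations.
function selfHealingLoop(tools: TraceTools, applyFix: () => void, maxIterations = 10): number {
  let inp = tools.recordTrace();
  for (let i = 0; i < maxIterations && inp > 200; i++) {
    tools.getInsights(inp); // guidance the agent would act on
    applyFix();             // the agent patches the code
    inp = tools.recordTrace();
  }
  return inp;
}
```

With a starting INP of 1600 ms and a halving fix, the loop converges in three iterations (1600 → 800 → 400 → 200).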

Speaker 3 (24:30):
So what have you seen this kind of system be
able to achieve? What kind of improvements have you seen
it make?

Speaker 4 (24:40):
Just before I built this, as a
thought exercise, I introduced a very big problem with INP;
I purposely made the INP metric worse. Then,
without telling Cursor exactly what I built or what I'd done,
I just said, okay, go fire up DevTools with the
MCP server and fix whatever problem you

(25:03):
see in there. And it did precisely that:
from the call stack and the prompt that DevTools
returned back to it, it was able to find within
the code what the problem was, and fix it. So
it can really create this, like, self-healing loop within
an MCP connection straight into DevTools. So this is
the kind of experimentation that I'm very interested in
now: what is a dev tool in this new

(25:24):
era of agents? How is a browser to behave
as an agent, with agents? And how can
both browsers and dev tools embrace this and move
forward with this new generation of developers? It's one
of the things that I'm constantly talking about with
different people, from the DevTools team and outside of the

(25:44):
DevTools team: how can we ensure that the
web as a platform can adapt and thrive within
these new paradigms? On the agentic side, you see Perplexity building
their own browser. You see OpenAI: they're not
openly developing it, but they are developing their own browser.
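The deliberately degraded INP mentioned above boils down to blocking the main thread inside an event handler. Here is a minimal simulation of that long-task pattern; the function names are invented, and in a real page the same busy-wait inside a click listener is what would push INP past the 200 ms "good" threshold.

```typescript
// Simulate the classic INP killer: a handler that busy-waits and keeps
// the main thread from responding to input.

function blockMainThread(ms: number): void {
  const start = Date.now();
  while (Date.now() - start < ms) {
    // busy-wait: nothing else can run, input stays unhandled
  }
}

// Measure how long a simulated "event handler" keeps the thread busy,
// standing in for the interaction latency DevTools would attribute to it.
function measureHandler(handler: () => void): number {
  const start = Date.now();
  handler();
  return Date.now() - start;
}
```

In a browser you would attach `() => blockMainThread(250)` as a click listener and watch the interaction show up as a long task in the performance trace.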

Speaker 3 (26:02):
And, oh yeah, I wasn't aware they've...

Speaker 4 (26:05):
Been poaching you know, so they are certainly building their
own browser. And you have DIA from the browser company
up there, and here in Stockholm we have a browser
company called stuff. It's called Strawberry, which I ended up
looking up with them and talking with them because they're
very local, so I wanted to talk with with them.
But they also have an agentic browser. So agentic browsers

(26:27):
is something that we keep hearing about.

Speaker 3 (26:28):
And by agentic browser, do you mean something
that, instead of the actual website, reads the website
and then presents me with its own version, as it
were, of the website, one that's more appropriate for my particular
needs and wants? Is that what it does? What does it

(26:50):
do, exactly?

Speaker 4 (26:51):
That is a good point. Like, I don't think there's
a definition yet. So when it comes to agentic browsers nowadays,
take, for example, Dia, or take
Comet, which is coming up soon as well for open access:
what I have seen is a browser with
an agent panel, like with a chatbot on the side,
so you can either extract information out of a web

(27:12):
page or you can automate some things. But it's not
yet entirely what you said. Although that seems kind of
anticlimactic, for now it is, but it's definitely...
I feel like every developer out there is seeking that
next iteration of what you said. I myself have written

(27:34):
a sort of essay that's pinned on my
X page, which is about what you just
said: what is a browser to be in
the age of agents, right? What are we to be?
Consider the following premise, and we're going to go back
to performance in a minute, because it kind of

(27:55):
builds up to that. So consider the current
premise: nowadays a lot of people are automating more
and more of what they do on the
web with OpenAI's ChatGPT, or Claude, or anything
as such, and what they get in return is
Markdown. So there is a natural evolution of that,

(28:16):
and you see concerted efforts driving MCP UI, or rather the capabilities
of MCP rendering UI, which ends up being, like, us
embedding micro-interfaces as a response to agents. There's a
lot of discussion within the web on what is going
to be, or rather, not necessarily

(28:37):
what is going to be, but what is the possibility of
us interacting with the web in this kind of agentic workflow.

Speaker 3 (28:43):
So, for example, these days, when you ask Google a question,
you know, in their reaction to what OpenAI
did, when you ask Google a question, in many cases,
instead of just returning a list of links to possible pages,
they use Gemini to generate an answer for you.

(29:03):
And by the way, in many cases, if that answer
is actually good enough, then that's it. You don't
actually go to any page; you just use that answer
that Google provided. Well, it's theoretically
not a huge leap to think of a scenario where
they literally generate a website on the fly that provides

(29:28):
the exact information you asked for, not just as text,
but as a holistic website, with images, with menus, with
an interface that's specific to you and your use case
and your query at that point in time. It's not
such a huge leap.

Speaker 4 (29:47):
No, it's not so much so that I feel like
it's kind of converging a lot of people converging too
something similar. I feel like from within the COME team,
I know pol Killen is writing a lot about his
thoughts on what is the to the platform around agents
and things like that. He has this website called AI
Focus where he writes a lot of his insights, and

(30:08):
he's being around for for a long time as well.
He's the most respected names out there when it comes
to the platform and depth tools. So he's been writing
a lot of insights that converted to similar theories and
similar places where you are arriving where I have a
arrived at Cancie dogs I see the remix guys are
arriving on the same kind of points as well, So
there is a lot of convergence and it's up to

(30:30):
the platform, of course, to ensure that that there are
good primitives out there that provides this kind of experience.
And actually, like I was joking with Malte from Cito
over Seal, like in passing like that they should have
an MCP for zero because it kind of enables experiences
like that, right, if you have a browser that can

(30:51):
connect to different cps and you can use different services
to generate. Because it ended up being a taste thing, right,
some people prefer this model over that model. Some people
would prefer this AI generator over that AI generator, And
it's about providing the right primitives in the end to
access these kind of things. But one of the things

(31:12):
that I was discussing, getting back to the performance part of the subject: after I shared that map I did of 5G connectivity across the world, one of the insights that I get nowadays is that since a lot of the web traffic is being diverted away from websites,

(31:36):
and because so much of it is being done via different workflows within different agents, the OpenAI or Anthropic ones or what have you, what ends up happening is that a lot of that traffic is not in the browser anymore, because what they get in return is markdown, maybe with some images. Now, I don't think that's the end result for us; the web should be rich and full of experience. But as

(31:59):
an interim observation, or at least as a thought provocation: that's actually better if you have low network connectivity.

Speaker 3 (32:07):
Right.

Speaker 4 (32:07):
It's kind of what Google tried to provide with, what was it, Google Sites? No, what was it called? I know what you're talking about, but I don't remember the name, it escapes me now. Yeah, exactly, the same idea as those read-only or read-mode versions of websites. Because markdown

(32:27):
is so much faster to render, it kind of ends up being a good experience for people that live with poor connectivity, because they get the information they want, they can get things done, without having to pay a heavy cost.

Speaker 3 (32:40):
But on the other hand, there are two problems with it. First of all, you still have that model generating the response on the fly, which is obviously much slower than a pre-prepared response, and also much more expensive to generate; it certainly consumes a lot

(33:00):
more energy. So a world where every query from everybody is answered with a page generated specifically for that person: we might get there eventually, but it's certainly not going to improve the performance of the initial load, let's put it this way.

Speaker 4 (33:20):
No, So without getting too much into the economics of it,
there is of course those scaling problems. But my thought
on this is is purely on like the platform level,
on how what kind of experiences are we you going
to end up having. One of the things that I'm
very much interested on thinking about nowadays is is kind
of now that we are capable of generating interfaces on

(33:43):
the fly. There is a nuance to that, of course, but we are capable of eliminating quite a lot of noise from the UI, from the interfaces. We don't have to generate static menus, a sidebar with lots of options, deeply nested menus, things like that. So we can now generate a much simpler and more concise interface and progressively introduce the

(34:06):
users, or nudge the users, towards different levels of information. For dev tools, I feel like that's a great thing, because you can lower the threshold, lower the barrier to entry for developers, building something that is more approachable at first, and you can of course dig deeper with different workflows. And the same goes for websites.

Speaker 3 (34:25):
What would that mean in the end? So putting aside the bigger issue of what the future of the web is going to be, and moving back to the quote-unquote simpler issue of dev tooling and performance tooling. So you said you built an MCP server that's integrated into DevTools,

(34:45):
and then you can get information out of dev tools
into something like cursor or something like that as actionable data.
What else are you thinking about in the context of
the future of performance tooling.

Speaker 4 (34:58):
I feel like tooling Google if you observe what Google's
doing within the Depth tools, I'm kind of helping on
using a lot of that work to power the other
experiences I'm building around it. It is a lot on
providing hints and providing knowledge assistance, providing debugging assistance. So

(35:20):
there are, of course, different experiences and levels you can dig into. You can have something that is more approachable, or not approachable rather, but more active, so it's actively trying to help you, or something that has to be requested. Currently in DevTools, you hover or you right-click to activate

(35:40):
the AI insights. Some of the insights are delivered actively, but there are different experiences you can try to gauge. I'm trying to lean on the experiences that more actively help people, trying to be more of a knowledge system and at the same time a tool that can help solve actual

(36:02):
performance problems.

Speaker 3 (36:03):
So, one of the things I recall hearing about, though to be perfectly honest I've not actually had a chance to try it out and see how good it is: I recall they've been talking about using local Gemini to analyze the stack traces in the flame chart. Yes,

(36:24):
and provide better understanding. For those of you for whom what I said sounds like gobbledygook: when you record a performance session in the Chrome DevTools Performance tab, one of the things that it does is analyze JavaScript execution over time, basically checking at every point in

(36:48):
time what the stack trace looks like, and basically mapping the progress of execution through the actual application code. And then what you get is what's often called a flame chart, because it uses different colors, so it looks kind of like a fire. And if the stack is deep, then you see a lot of boxes, but if it's shallow,

(37:11):
then you see fewer, and you can kind of follow along and see where execution spent its time: which method got called very often, which methods or functions took a long time to process, where you were blocked, maybe on a network operation, et cetera, et cetera. And those flame charts are sometimes difficult to analyze because, again,

(37:36):
they represent a ton of information in a very condensed sort of way. So if you can get better hinting and suggestions about what it means, like, okay, but what does this function actually do, that could be very helpful. But I've not actually checked the quality of the information that

(37:57):
it provides. I guess that you've been playing with it
quite a bit. What do you think about it?

Speaker 4 (38:01):
So I'm actually using a lot of the internals that they are using to provide that, aside of course from the onboard Gemini model that they have in Chrome. I am using Gemini, but the hosted version, both 2.5 Flash and 2.5 Pro, for different kinds of things. But

(38:22):
to answer the crux of the matter: it's getting really good right now. It still struggles with some forms of direction, but that's mostly on prompting. One of the things that I did: I use mostly the same prompting that Chrome DevTools uses for Gemini to analyze a flame graph. But the thing I do differently is that I'm trying to drive it to be a bit more

(38:43):
direct when it comes to offering suggestions about what to do next. So, for instance, it's pretty much the same prompt, but the difference is in some parts of the prompt engineering, in how I want the model to be more decisive. And I also want to teach the model: for instance, when it's analyzing a

(39:05):
bundle from Next.js, it will know. When it's analyzing a bundle that comes from different kinds of frameworks, it will know, and it will try to give you a bit more decisive actions, like: okay, I see that these call frames are coming from React, which is a framework asset, so you kind of have framework logic there that you can try to fix. So it's trying to be a bit more directed towards giving you

(39:26):
an answer. And then I use the same kind of prompts for the MCP too, which kind of helps Cursor also be more decisive about fixing stuff.

Speaker 3 (39:36):
So again, can you give a concrete example of some
practical results or outcomes that you've seen from this type
of operation right now?

Speaker 4 (39:47):
It's kind of in between. It can be a bit of a hit and miss. But the tools are evolving quite a lot, and with more prompt engineering, with more data collection, with a better understanding of evals, or rather with better evals, it will only get better. So there is definite value,

(40:10):
especially for people that are trying to understand the flame graph. The Ask AI within DevTools, and the agent I'm building, can be tools that will help you understand things better and hopefully give you insights. Your mileage may vary, of course, but over time it will only get better.

Speaker 3 (40:26):
So aside from the MCP connectivity, again, what else are
you building?

Speaker 4 (40:31):
So right now I'm focusing... again, I did this MCP as a thought process, because I can't ship the MCP that I built, unfortunately, since I built it straight into DevTools, and I can't ship my own version of DevTools within Chrome. But I actually am going to continue exploring, because, like I was discussing with the DevTools team, the next thing could be to have a tool that can

(40:53):
use the debugging protocol to drive different kinds of scenarios. Let's say now the agents can also execute. Let's say you ask the agent: can you please measure the interaction when you click this button and try to fix any performance problems? So instead of you manually going into the browser and recording that trace, the agent can do that for you and return with the insights and the prompts

(41:14):
that will help it fix things in the next stage. Right? So it became a thought process, a thought that I want to try to execute on, to see how you can make the browser respond, as a dev tool, to an agentic flow. So that's something that I'm looking to experiment with next. But when it comes to shipping within PerfLab,

(41:37):
the next thing for me is helping the agent to understand more of performance as a subject. Because right now what I've built is Web Vitals insights, so it understands different parts of the Web Vitals, and it can generate quite good, insightful reports on how to fix the different problems from the Web Vitals that it sees within the trace.

(41:58):
But then next for me is the network: getting better insights for the network. It does LCP, so it kind of has a network agent there, but I want to build better network agents to understand server timings, understand the other subjects within the workflow, and help with different subjects within performance, because there are a lot more things to fix. And then, alongside that, bake

(42:22):
in a memory layer, so the agent can remember how it fixed problems, and it gets a bit more proactive about fixing problems, understanding what problems it has seen before and what kinds of fixes it has seen before, with the memory layer. So I'm interested in seeing how the agent starts to self-evolve, in a way, when you start introducing a memory layer to solve performance problems.

Speaker 3 (42:42):
So, a couple of thoughts, suggestions even, about that. First of all, it seems to me, based on the workflow that you're describing, that integration with something like Playwright might be a more natural evolution than trying to coax Chrome into recording kind of headless sessions, or sessions that

(43:05):
don't involve actual user interaction because you're literally automating the
web in order to perform a genetic operations.
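
A minimal sketch of that Playwright-driven flow, under some assumptions: Playwright must be installed, `startTracing`/`stopTracing` are Chromium-only Playwright APIs, and the trace-event filtering below is a simplification (real traces need more careful event selection; `RunTask` durations are reported in microseconds, and the 50 ms cutoff mirrors the long-task threshold discussed later in the episode).

```javascript
// Pure helper: pull long tasks out of Chrome trace events.
// 50 ms = 50,000 microseconds, the standard long-task threshold.
function longTasksFromTrace(traceEvents, thresholdUs = 50_000) {
  return traceEvents
    .filter((e) => e.name === "RunTask" && (e.dur || 0) > thresholdUs)
    .map((e) => ({ startUs: e.ts, durUs: e.dur }));
}

// Headless recording sketch (requires the `playwright` package; the URL
// argument is whatever page the agent is asked to measure).
async function recordLongTasks(url) {
  const { chromium } = await import("playwright");
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await browser.startTracing(page, { categories: ["devtools.timeline"] });
  await page.goto(url);
  const buffer = await browser.stopTracing(); // JSON trace as a Buffer
  await browser.close();
  return longTasksFromTrace(JSON.parse(buffer.toString()).traceEvents);
}
```

An agent could call `recordLongTasks` after applying a candidate fix and compare the result against the previous run, which is exactly the measure-fix-remeasure loop discussed next.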

Speaker 4 (43:13):
Yes, and I would go one step further, because I know Playwright, I've used it for a long time, but I've been experimenting with, and I've actually been in contact briefly with, the people from Browserbase and Stagehand. They have this kind of really good automation for the browser. One thing that I'm going to actually check with them, that I plan to

(43:33):
check with them, is how much of the dev tools they have access to, because as far as I know, the debugging protocol does not really have access, at least not yet, to some of the insights. The way I'm driving the MCP experiment that I'm running locally is all based on the AI insights, and those are internal to DevTools, so

(43:57):
I'm leveraging those AI insights, the same ones that Google is surfacing for Ask AI within DevTools, to create a loop with Cursor, this self-healing mode for developers that are using that, either vibe coding, or even developers that don't necessarily know how to fix performance problems.

(44:18):
So within the debugging protocol, you don't necessarily have access to that kind of stuff. You can drive a browser, a headless browser, or a browser session autonomously, but what I'm interested in is less driving the session autonomously and more getting access to those internals, to get the information out of the telemetry, to give it

(44:38):
back, within an MCP, to Cursor or any AI editor, to solve problems autonomously. So it's less about driving the web autonomously and more about fixing things autonomously, based on the telemetry I can get from DevTools.

Speaker 3 (44:51):
So that's kind of what you're doing: these kinds of loops where, I guess, you're trying a certain... well, not you, the agent, the model, is trying a certain optimization and then checking whether it actually improves or degrades the actual performance, or isn't making any difference at all.

Speaker 4 (45:13):
Yes. So for that kind of stuff, you can use different services for sandboxing. There are many, like Daytona, or actually Vercel has released one recently. So you can use a sandboxing service where you can hook up an automated session to test actual fixes. That part, of course, I will experiment with within PerfLab. But I'm also interested, and that's the reason

(45:36):
why I'm playing with internals, DevTools and MCP integration within DevTools, and different kinds of experimentation within the browser space, in playing with what this next version of browsers, DevTools, and the web in general would look like, or can look like, rather. Because, I feel like, I think it was... I don't remember

(45:58):
who said it, but we are one really good UX away from a complete shift in how we think about interfaces, or the web in general. So there's a lot of experimentation that can be had and should be had. So this is kind of where I'm navigating nowadays.

Speaker 1 (46:14):
Yeah, I heard that quote recently too. I want to say it was Wes Bos, but yeah, I've heard it.

Speaker 4 (46:17):
Was it? Yeah, I mean, I'm not sure if it was Wes or someone else, but I have a feeling it was, yeah, someone from those circles, earlier this week.

Speaker 3 (46:32):
Now, do you worry about the web becoming something generated by agents, for agents?

Speaker 4 (46:38):
There are layers to that. And I'll repeat the recommendation: AI Focus from Paul Kinlan, if you haven't read it yet, it's really, really good. There are those fears, when it comes to publishers, monetization, experiences, that excessive automation, maybe excessive automation, is an actual concern,

(47:01):
and a very valid one. But one thing that I'm kind of looking forward to is personalization, enabling what people are calling the personal web. So in a way, we're kind of going back to GeoCities, where people just put up silly websites that they just wanted to build. That's why I love seeing tools like Lovable, like v0, as kind of us

(47:24):
returning to that ingenuity and accessibility of people creating what they want and putting things out there. So, of course, there are different challenges, on a social aspect, on an economic aspect, on an environmental aspect, that we should not be blind to, but there is also a lot of good that can be had.

Speaker 3 (47:46):
One more thing that I wanted to suggest to you, especially if you're looking at memory. By the way, I was supposed to give a talk at a conference here in Tel Aviv, in Israel, on using heap dumps to analyze performance issues in Node rather than the browser. I think,

(48:09):
and that's the point: I think that Node is a great candidate for these types of analysis operations as well. In fact, optimizing Node operations is in many ways even more difficult than optimizing browser operations. That is true. And it's effectively a lot of the same

(48:31):
processes and the same APIs and the same protocols. Yes, so the solutions that can be applied to the browser, well, obviously they would need to be somewhat modified, but can generally be applied to Node as well.

Speaker 4 (48:45):
Yes. So the back end is something that is very much on my mind. We have Server-Timing, which I wrote about. One of the things that I love about modern-day tooling is that we have things like INP and Long Animation Frames, which I believe is one of the best APIs released in modern times for JavaScript, or for web performance in general. I've talked about it at JSNation this year, and last year, and it's something that I

(49:07):
really like.

Speaker 3 (49:08):
But when it comes to the back end... actually, before you proceed, can you elaborate on that a little bit? What do you mean by those APIs?

Speaker 4 (49:16):
So INP, Interaction to Next Paint, is a relatively new API, made into Core Web Vitals as an official metric as of March this year, and it's driven by an internal metric called the animation frame. The animation frame is where you aggregate the different sections of work that the browser

(49:38):
needs to produce in order to ship the next frame. So that's why you have Long Animation Frames, or LoAF, as an API to fetch animation frames with a timing larger than fifty milliseconds, which is the well-known, or at least the standardized, threshold for long tasks.
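
For listeners who want to see what this looks like in code, here is a small sketch of observing `long-animation-frame` entries. The field names (`duration`, `blockingDuration`, `scripts`, `invoker`) come from the LoAF API; the helper also accepts plain objects so the shape is easy to inspect outside a browser.

```javascript
// Flag animation frames longer than the 50 ms long-task threshold
// discussed above (LoAF entries report durations in milliseconds).
const LONG_FRAME_MS = 50;

function describeLongFrame(entry) {
  // scripts[] carries attribution; invoker names what kicked off the work.
  const culprits = (entry.scripts || [])
    .map((s) => s.invoker || "unknown")
    .join(", ");
  return {
    durationMs: Math.round(entry.duration),
    blockingMs: Math.round(entry.blockingDuration || 0),
    culprits: culprits || "none attributed",
  };
}

// Browser-only wiring: observe LoAF entries, including buffered ones.
if (typeof window !== "undefined" && typeof PerformanceObserver !== "undefined") {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      if (entry.duration > LONG_FRAME_MS) {
        console.log("long frame:", describeLongFrame(entry));
      }
    }
  }).observe({ type: "long-animation-frame", buffered: true });
}
```

This is roughly the raw material that histogram-style tooling aggregates: each long frame, how long it blocked, and which script invoker to blame.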

Speaker 3 (49:57):
Yeah, just to add a little bit of context or information: it actually comes from the other way around. Basically, in order to provide a good visual experience, a jank-free experience, on displays, you usually want to achieve something like sixty FPS, or sixty frames per second. And when you divide

(50:20):
by sixty, when you check how long you have to actually work on each frame in order to achieve sixty FPS, you see that it's sixteen-point-something milliseconds per frame. So that's where it comes from. Now, given that the browser itself needs to do a little bit of work, it leaves you with fourteen or fifteen milliseconds to do

(50:40):
whatever it is that you want to do, and whenever you exceed that, you get jank, because you cause the browser to drop below that sixty FPS. And by the way, it's worth noting that on high-end displays and on 3D displays, like if you're using something like smart glasses, you actually want to hit one

(51:02):
hundred and twenty FPS.
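
Dan's arithmetic here can be spelled out directly:

```javascript
// Frame budget at a given refresh rate: 1000 ms divided by frames per second.
const frameBudgetMs = (fps) => 1000 / fps;

console.log(frameBudgetMs(60).toFixed(2));  // 16.67 ms per frame at 60 FPS
console.log(frameBudgetMs(120).toFixed(2)); // 8.33 ms at 120 FPS, half the window
// Subtract a couple of milliseconds for the browser's own per-frame work
// and roughly 14-15 ms of application time remain per frame at 60 FPS.
```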

Speaker 4 (51:03):
Which is hard, and which leaves you an even smaller work window. I actually have a talk, much like Dan described, about the background of where we came from to where we are now in regards to interactivity metrics and Long Animation Frames. The name now escapes me, but I did it last year at React Advanced,

(51:24):
and I also did it in person at a Mozilla meetup the same day. So I think you're going to enjoy it; I'm going to share the link with you.

Speaker 3 (51:34):
Sure. If it's public, we can also share it in the show notes for this episode. By the way, what I would suggest, if you're interested in these things, is to also participate in the Web Performance Working Group, where we generally talk about all these metrics and how we can improve them and evolve them.

(51:55):
But again, coming back to what we were talking about: so you're talking about this API that can measure and see when you're exceeding the allotted time for frames, resulting in a janky user interface, or in a somewhat non-responsive user interface.

Speaker 4 (52:11):
Yes, so, animation frames. The reason why I love it so much is that within both PerfLab and PerfAgent, I created histograms using animation frames to show you activity, like processing activity, based on the different Web Vitals thresholds. But the reason why I like it so much is that it aggregates, within one common namespace or metric, all

(52:32):
the different problems that can happen that might lead to jank. So you have processing time, input delay, and presentation delay, each of which has a different subset of problems that may impact the different subsections. My talk this year at JSNation is about that. But the reason why I love it so much is that it makes the conversation a bit more productive, because it gathers, within

(52:55):
one common metric, all the different aspects of shipping a frame, instead of just focusing on processing, or just focusing on style or layout. So it's really, really good, and it makes the conversation a bit more productive. The reason why I bring it up, moving back to the back end: thinking about this kind of metric and work, what I'm going to end up

(53:16):
doing for PerfLab and PerfAgent, to teach agents how to deal with the back end, is to then zoom out into OTel spans in general. Right? So it's about rebuilding attribution based on spans, building attribution based on server timings and OTel spans. It is one of those things that I'm curious to try. OpenTelemetry spans, yeah, exactly.

(53:37):
So, trying to build this kind of attribution, using the same model. Because I feel like for the front end, we have so much work done to standardize sets of timings and build this kind of aggregation of data to better understand performance problems, and we could leverage some of that knowledge, or at least that way of thinking, to build automation for processing times

(54:01):
on OTel spans. Or just surfacing stuff via Server-Timing can also be a great help.
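
The Server-Timing idea mentioned here can be sketched as a small helper that formats header entries following the W3C Server-Timing syntax (`name;dur=...;desc="..."`, comma-separated). The handler usage and metric names below are hypothetical.

```javascript
// Build a Server-Timing header value per the W3C Server-Timing spec:
// entries of the form name;dur=...;desc="..." joined by ", ".
function serverTimingHeader(timings) {
  return timings
    .map(({ name, durMs, desc }) => {
      let entry = name;
      if (durMs !== undefined) entry += `;dur=${durMs}`;
      if (desc) entry += `;desc="${desc}"`;
      return entry;
    })
    .join(", ");
}

// Hypothetical back-end handler attaching the header:
// res.setHeader("Server-Timing", serverTimingHeader([
//   { name: "db", durMs: 53.8, desc: "orders query" },
//   { name: "render", durMs: 12.1 },
// ]));
//
// In the browser, the values come back on navigation/resource entries:
// performance.getEntriesByType("navigation")[0].serverTiming
```

This is the bridge being described: back-end spans surfaced as Server-Timing become front-end-readable attribution, the same mental model as the Web Vitals timings.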

Speaker 3 (54:08):
You should be speaking with Matteo Collina and the others. I didn't speak with him at JSNation this year, so I'm curious to have more conversations about this stuff with him, because I know he's very interested.

Speaker 4 (54:18):
Yes, I'm actually sharing, I've shared PerfLab with him, so let's see what can be done with that.

Speaker 3 (54:23):
And before we finish, what are your future thoughts? Like, where are you thinking about taking this tool? What are you thinking about in terms of new technologies and new approaches?

Speaker 4 (54:34):
So for me, right now: I'm going to be open-sourcing it in the next couple of weeks, and it's going to be available to anyone that wants to collaborate, who is of course more than welcome to, and I'm curious to see what kind of collaborations come out of the open-sourcing of it. The tool, for me, is a way of expressing my thoughts and my feelings towards how I

(54:58):
feel dev tools can evolve, or not evolve, but how they could be embracing AI first. Or, taking the AI aside, how to build something that is approachable for people that are not as technical, trying to remove all of those natural barriers that people keep bringing up when it comes to dev tools, because I feel like DevTools is such a great tool,

(55:19):
but for me, I'm used to it; I have been using DevTools for more than a decade now, and it's kind of second nature. But I'm very acutely aware, from working with different teams of different knowledge levels, that that's not the case for every developer out there. And we can keep trying to, you know, force a square peg into a round hole and just tell them you

(55:40):
should learn the tools, which they should, if I'm being honest. But then, you know, that approach may have varying success with different people, especially depending on what their interests are. So building something that can be the smallest subset of a dev tool, more progressive and approachable for people of all knowledge levels, is something I'm curious about. And also evolving from the dev tools

(56:03):
into the browser space as well, which is where I'm growing deeply more interested in experimenting. You know, how can a browser be a better agent, and how can agents interact better with browsers, is something that I'm experimenting with nowadays.

Speaker 3 (56:18):
Something that might be really interesting to look at, and in a certain way both simultaneously easier and more difficult, is thinking about how to optimize not necessarily the JavaScript, but the CSS. Yes. People underestimate the impact that CSS,

(56:39):
especially modern, sophisticated CSS, can have on rendering performance. And in general, I find that developers are a lot less familiar with the intricacies of CSS, and CSS seems like something that could be handled by AI. Anything in that context would be really helpful. So being able

(57:01):
to play around with the CSS and see how that impacts performance sounds like something that could be really, really interesting.

Speaker 4 (57:09):
Yes, absolutely. And there is a lot of value in getting, you know, knowledge assistance, and also automation, for things that developers consider either boilerplate or hard to approach. So there is a good mix there that we can experiment with and build good experiences around.

(57:30):
And this is kind of where I'm very curious to
experiment with.

Speaker 3 (57:34):
So if you're going to be releasing this as open source, where will people be able to find it?

Speaker 4 (57:41):
So, from my X handle you'll have the links. The live version is agent.perflab.io, and that's where you have the agent. And then perflab.io is where you have, let's say, the legacy one, where it's not legacy, it's live and will remain so, but that's, let's say, the first version. The agent

(58:01):
is where I'm focusing most of my time nowadays to experiment with. Both the agent and PerfLab will be fully open source in the coming weeks, but the services are going to stay up, and I'm constantly going to be writing about it on X and Bluesky as well, so you can always catch up there. All right.

Speaker 1 (58:19):
So with that, we are a little over time, so we're going to start wrapping things up. So if people want to follow you and give you money and give you accolades and all those kinds of things, what are the best places to find you?

Speaker 4 (58:32):
So on X, I am "webtwitr". So, Twitter spelled t-w-i-t-r. So literally that, the...

Speaker 2 (58:42):
Web two point oh spelling of Twitter, I think.

Speaker 4 (58:44):
Kind of built right around that.

Speaker 3 (58:48):
You should have changed it to WebEx or something.

Speaker 4 (58:50):
I tried. I tried to use webx, because by the time I got my Twitter account, I used webx everywhere. But webx was taken already. So that's why.

Speaker 3 (59:01):
Wait, there's a company called WebEx.

Speaker 2 (59:04):
I think there is, exactly. That sounded familiar as soon as you said it.

Speaker 4 (59:07):
So yeah, so, webtwitr, t-w-i-t-r, on X. And on Bluesky, I am webx. Inconsistencies; maybe I should try to find some new personal branding to try and consolidate to one name.

Speaker 3 (59:30):
Look at Harry Roberts, who stuck with CSS Wizardry.

Speaker 4 (59:34):
Harry is very fortunate to have one consistent name on every social. What a legend. But yeah, you can find me, I believe, if you look up Vinicius Dallacqua on Twitter or Bluesky; there will be very little else, fingers crossed. It's not such a common name, I think.

Speaker 1 (59:53):
Well, you have an interesting combination. I think beforehand you were telling us the name is Italian. So you're Brazilian with an Italian name, in Sweden.

Speaker 4 (01:00:00):
Exactly, and married to a British woman.

Speaker 1 (01:00:05):
Oh wow, that's four levels of nationality right there. Yeah, exactly.

Speaker 2 (01:00:10):
Already.

Speaker 1 (01:00:11):
So with that, we'll move to picks, the part of the show where we can talk about anything we want, within reason, that doesn't get us fined, tech or non-tech related. Chuck isn't here, so we probably will not get our board game picks this weekend, but never fear, he will be back next week. He's celebrating his twentieth wedding anniversary.

Speaker 4 (01:00:31):
With his wife.

Speaker 2 (01:00:31):
Wow, so that's why he's not here.

Speaker 1 (01:00:35):
I'll start. My picks are generally, or at least I consider them, the high point of any episode: the dad jokes. But before I get to the dad jokes of the week, I'll pick a couple of interesting posts that I saw today on Hacker News, in light of the conversation we just had about AI. Cloudflare, which we know is a tool that a lot of people use for deploying apps, or for shielding, or for

(01:00:58):
any number of services.

Speaker 2 (01:01:00):
This is an article in the New York Times.

Speaker 1 (01:01:02):
They're introducing default blocking of AI data scrapers. One of the uses of AI, obviously, is setting a tool to go out and beat the crap out of websites and scrape all their data off. And websites don't like that so much, so Cloudflare is providing that option. The other one is Figma, you know, which is a

(01:01:22):
very, very popular tool for design and any number of things. I don't remember the reason for why it failed, Dan, maybe you can refresh my memory: they were going to get bought by Adobe, and then it fell apart.

Speaker 4 (01:01:34):
Yeah, because of the DOJ.

Speaker 2 (01:01:37):
The antitrust thing, right?

Speaker 4 (01:01:38):
Antitrust, some monopoly concern or something along those lines.

Speaker 2 (01:01:41):
Right, So they are filing for an IPO.

Speaker 1 (01:01:45):
They are going to go public, trying to make up for the twenty billion dollars they didn't get from Adobe, and get money this way. Interesting to see how that goes. And now to the dad jokes of the week. So let me see, where was I at? Oh yeah, the other day. This is what not to do if you're married. My wife said, for our anniversary, she wanted me to take her to one of those restaurants where they make all the food in front of you.

(01:02:06):
You know, it's really good, and you've got the chef right there and everything. I said sure, so I took her to Subway.

Speaker 4 (01:02:13):
Right.

Speaker 1 (01:02:14):
Subway is a sub sandwich shop where they make it right in front of you, if you're not aware of that. Here's a conversation. The devil says, "This is the lake of lava. You will be spending eternity in it." "Actually, since we're underground, that would be magma." The devil says, "You do understand this is why you're here, right?"

Speaker 4 (01:02:32):
Yeah?

Speaker 1 (01:02:33):
And then, for you Leslie Nielsen fans, he was an amazing comedian: I went to the doctor, and the doctor told me, "You have cancer, but we can treat it." I said, "What's the cure?" He says, "The Cure is a British rock band fronted by Robert Smith, but let's try to stay focused."

Speaker 3 (01:02:48):
Yeah, anybody who's ever seen the movie Airplane: it's full of these types of jokes.

Speaker 2 (01:02:55):
Hospital.

Speaker 1 (01:02:57):
That's a large building with lots of patients, but that's
not important right now. Exactly. Dan, what do you got?

Speaker 4 (01:03:03):
For our picks?

Speaker 3 (01:03:04):
Not so much in the way of picks I mentioned before.
Happy to say that at least the war with Iran
seems to be over, at least for now. Hopefully the
other wars that we're currently experiencing will be ending soon,
and not soon enough. We will see.

Speaker 4 (01:03:19):
But those are not my picks.

Speaker 3 (01:03:21):
So I obviously have opinions on politics, geopolitics and stuff
like that. I don't share them on the podcast. Obviously
I don't share them on X so much either, because X,
from my perspective, is intended for more of the technical
aspects of what I'm involved in. But if anybody is

(01:03:41):
interested in my views on like I said, politics, geopolitics, diplomacy, war, whatever,
all the soft stuff, also some tech stuff, but also
a lot of the soft stuff. You can find that
on Quora, where I post. I don't put up questions,
don't think I've ever actually put up a question, but

(01:04:02):
I do post a lot of answers, and I've actually
gotten a lot of views. Over the years,
I've gotten over six million views. I've been a top
writer on several occasions, on several topics and in general.
So if anybody is interested in that part of my life,
just go to Quora and search for Dan Shappir

(01:04:25):
and see what I think, and see if it matches
your opinions on stuff. All good, whether you agree or disagree.

Speaker 2 (01:04:32):
That's interesting.

Speaker 1 (01:04:33):
I can remember Quora being a thing way back. I
didn't know it was still a thing, to be honest.

Speaker 3 (01:04:39):
I don't stalk it, you know, but I do get
tens of thousands of views there on a weekly basis,
so apparently people are still using it. I actually also
think that AI agents use it a lot because it's
information that's easily accessible and usable and rankable by

(01:05:02):
the various AI tools. So, like Wikipedia, it's
a source of information for a lot of these agents.

Speaker 4 (01:05:10):
Mmm, interesting. But anyway, be that as it may, that's
my pick for today.

Speaker 1 (01:05:16):
Alrighty, and finally we'll save the best for last. Vinicius,
you got any picks for us?

Speaker 4 (01:05:20):
Yes, so, as brought up a couple of times, their focus
from Polking is a really good read for trying to
wrap your head around, you know, these higher level concepts,
how the platform can play, or has played, this
long game. It's really good information in general. But on
a non-technical one, for tomorrow, we're actually going to
a non technical one. For tomorrow, we're actually going to

(01:05:41):
have the second season of Sandman on Netflix. As an avid
reader of the graphic novel and someone who really enjoyed
season one, I'm looking forward to that big time.

Speaker 2 (01:05:51):
Sandman.

Speaker 4 (01:05:52):
Yeah, Sandman.

Speaker 3 (01:05:54):
Despite Neil Gaiman's fall from grace, it's still
considered a very good read. A brilliant read, separating
the artist from the art, as it were.

Speaker 4 (01:06:05):
Yeah, yeah, well, Sandman is a brilliant read. The show
is really really good as well, the season one and
season two. It's coming up soon, so I'm looking forward
to that.

Speaker 3 (01:06:13):
Yeah, season one was good. I enjoyed season one.

Speaker 1 (01:06:17):
Yeah alrighty, So with that we will wrap up this
episode of JavaScript Jabber. Thank you to Vinicius for joining
us from Stockholm. My pleasure. And hopefully we piqued your
interest in AI and dev tools, which I have to
admit was a combination I had not thought a lot
about before today. Thanks, Dan, for joining us, and we will
talk at you next time on JavaScript Jabber.

Speaker 4 (01:06:38):
Bye bye.