Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Joel Moses (00:05):
Okay. Hello, everybody. Welcome to WebAssembly Unleashed, your source for news and views about the world of WebAssembly. As a podcast, we are raw and unvarnished all the time, sort of like an IKEA flat pack of YouTube content. I'm your host, Joel Moses, and alongside me today, standing in for Oscar Spencer, is my colleague, Chris Fallin. Welcome, Chris. Good. As well as our good friend and regular,
(00:25):
Matt Yacobucci. We spend our time inside this growing WebAssembly community because we're passionate about the hard work being done with it. And the fact that you're tuning in means that you care too.
And thank you for that. If this is a return visit with us, thanks for your loyalty and your subscription. But if it's your first time here, welcome to the party and make sure to subscribe. We've got a great Wasm community guest lined up
(00:47):
for you today who's also well known in the JavaScript and ECMAScript community. And we're gonna talk about the intersection of those technologies with WebAssembly in detail.
But first, we've got to do the normal pleasantries. If you've been with us here on the podcast, you know Matt, and that Chris Fallin has been an occasional guest, but he's on co-host duties today. So welcome, Chris. I always start these things
(01:08):
with a quick rundown of what you've seen inside the community recently that you think is important or compelling. So do you have anything for us? Chris, what's going on in the community that you think is interesting?
Chris Fallin (01:19):
Yeah. I think sort of the interesting thing with Wasm these days is that it's maturing, right? And so things are actually kind of slowing down in some sense; we're reaching the interesting implementation phase of a lot of technologies. So we have garbage collection now as a feature that's out there, for example.
So yeah, it's cool to see how people are starting to use that and use Wasm for real.
Matthew Yacobucci (01:40):
Yeah. One thing that I wanted to point out in regard to that is Zig is looking at refreshing their async model, or redoing their async IO layer. And it's interesting, Zig has always been able to compile to Wasm. They started with that philosophy, but now they're coming into it, redeveloping
(02:04):
and coming out with a re-engineered feature that is thinking about Wasm. So they have a couple of implementations that aren't going to immediately support Wasm, but they are thinking about it, if you read Loris's blog about the design principles there.
Yeah. It's just really wild to see that Wasm is
(02:27):
a first-class citizen here in these things.
Joel Moses (02:30):
That's fantastic. Alright. Now it's time to introduce our special guest. I wanna bring on Oliver Medhurst. Oliver is an ex-Mozilla engineer who's been working in the JavaScript community for a while now.
And I had the pleasure of hosting the recent TC39 event where you were an invited expert, based on the project we're gonna talk about today called Porffor, which is a
(02:50):
compiler kit for JavaScript that has some unique features, and among them, the ability to ahead-of-time compile JavaScript to WebAssembly. We're gonna dig into that ahead-of-time business in just a moment. But first, welcome, Oliver. Thanks for joining us today. Before we get started, why don't you introduce yourself to our audience?
Oliver Medhurst (03:08):
Yeah, thanks for the invite. I'm Oliver. I used to do Firefox things a few years ago. And since August, I've been compiling JavaScript to WebAssembly full time.
Joel Moses (03:21):
That's great. Now for those listeners who are not familiar with the term, can you explain to us what ahead-of-time compilation is, and maybe dig into why it's a bit of a tricky business with a scripted language like JavaScript?
Oliver Medhurst (03:33):
Yeah, so ahead of time is usually as opposed to just in time, which basically every JS engine today, say like V8, SpiderMonkey, JavaScriptCore, does to try and get the best performance. That is when, instead of compiling, say, C++ on the developer's machine and then shipping a binary to your users, you just ship the source code and then
(03:54):
compile on the user's machine, because that's great for performance, but also tricky for various other reasons, as compilers are. But ahead of time is like C++ or Rust or any of the more traditional languages, where you compile your source code on the developer's machine, you ship a blob to your users, and they just run it either natively or with
(04:18):
WebAssembly or wherever else.
Joel Moses (04:20):
Now JavaScript, of course, brings along with it an interpreter. And the interpreter, well, for lack of a better term, you kind of have to ship interpreter logic as part of this. So why is that specifically tricky? Why is that tricky to do?
Oliver Medhurst (04:38):
Yeah, so JavaScript is a very dynamic language. Anyone who's used it can tell you that. It's very hard because the main thing compilers do is statically analyze, to be like, oh, these lines could just be five instructions. But with JavaScript, it's very hard to do that because it's just so dynamic that you could have
(05:03):
string concatenation and integer addition at the exact same instruction, and they look the exact same. It just depends on what data you have at runtime. So yeah, it's very tricky, but I think it's promising.
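A minimal JavaScript sketch of the ambiguity Oliver describes (illustrative only, not Porffor code):

```js
// The same `+` at the same call site can mean numeric addition or
// string concatenation; only the runtime values decide which.
function add(a, b) {
  return a + b;
}

add(1, 2);     // 3      -> integer addition
add(1.5, 2);   // 3.5    -> float addition
add("1", 2);   // "12"   -> string concatenation
```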
Chris Fallin (05:15):
So I'd love to get more into the type aspect, the dynamic types. I had a ton of questions there. But first, maybe it'd be useful for our viewers to understand why you would want to do ahead-of-time compilation, right? So just-in-time compilation has been the standard in browsers for, what, fifteen or twenty years now. And it seems to work fine.
It does take CPU time on the client side. But why would you
(05:37):
want to do ahead-of-time compilation?
Oliver Medhurst (05:39):
Yeah, so there are, I guess, a multitude of reasons, one of which is there are many environments where you just can't do just-in-time. Like, say, you're shipping to an embedded device or a game console where, for security reasons, they just don't allow just-in-time, but you still want good performance. Because in some micro-benchmarks,
(06:00):
interpreters can be more than 10 times slower, because it's just interpreting. And then there's other reasons, like say you just want a native binary for your JavaScript. Like say you're making a CLI app, then your only option today is to ship, say, all of Node.js or one of those alternatives, which is like 100
(06:21):
megabytes, which is not really ideal. But with ahead of time, you can just ship a less-than-one-megabyte binary that just works, hopefully.
Joel Moses (06:32):
And I imagine that systems that launch and process a number of different streams simultaneously, where instantiation cost is at a premium, are also one of those.
Oliver Medhurst (06:43):
Yeah, cloud offerings like, say, AWS Lambda, they really struggle with cold start times, which is where you have to start the entire JS runtime if one hasn't started recently, which can easily take like hundreds of milliseconds, which for a web request is not ideal. But if you're compiling to WebAssembly or a native binary, it's basically zero start time.
(07:04):
It's very nice.
Chris Fallin (07:06):
So let's dig into the dynamic aspect of JavaScript a little bit more. You kind of alluded to just-in-time compilation being the standard, and that having to do with the dynamism of the language. Maybe could you explain to our viewers why it's typically good for an engine to observe the program executing before it tries to optimize?
Oliver Medhurst (07:27):
Yeah, so because JavaScript types are so wide-ranging, if you, say, have a loop which adds 100 variables, just statically you usually do not know: are those all strings, all numbers, floats, integers? So just-in-time compilers are able to analyze on the fly: this function
(07:49):
has been run five times, it's always been run with integers, so I can just rewrite this code to always use an integer. Worst case, you just deopt, fall back to the interpreter, which is still relatively fast anyway. So it specializes on the fly for those typical situations, which is great, usually.
(08:11):
But then there's also the problem of monomorphism: like, say you have a function and you call it 100 times with an integer, then you call it again with a float. You can easily see that one call taking like 100 times longer. Some big projects, like I think the TypeScript compiler, because that's all written in TypeScript and
(08:31):
JavaScript, have had cases where just changing a few lines to make things monomorphic is like a 10 times speedup for some functions.
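A small JavaScript sketch of the monomorphism issue being described (illustrative, not taken from the TypeScript compiler):

```js
// A call site that only ever sees one type stays monomorphic and can be
// specialized by the JIT; mixing types makes it polymorphic, which can
// force slower generic paths or a deopt back to the interpreter.
function square(x) {
  return x * x;
}

for (let i = 0; i < 1000; i++) square(i); // integers only: specialized, fast
square(1.5);                              // a float appears: may deopt/respecialize
```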
Chris Fallin (08:40):
Wow. That sounds difficult to deal with as a developer, having these performance cliffs.
Oliver Medhurst (08:44):
Yeah. So ahead of time, for example, you can't do that dynamic observation. You just have to statically analyze as best as you can. Because if you guess, oh, this is always called with integers, but then it isn't, you can't optimize on the fly; you just crash.
Matthew Yacobucci (09:02):
So ultimately, will just-in-time compiling give you the best performance, but ahead-of-time compiling will maybe degrade performance a little bit? That's probably not the right word, but it gives you more consistent performance. You don't necessarily have these spikes
(09:23):
and troughs.
Oliver Medhurst (09:25):
Yeah. At least for now, my goal isn't to be as fast as just-in-time. It's more to be faster than, say, interpreters. But I think there's a happy in-between of, it's like, reliably say five times faster than an interpreter and 50% slower than just-in-time. Like, no matter what types you give it, it has that consistency.
(09:48):
But then also I have experimented with, say, profile-guided optimization, which kind of does the same thing as just-in-time, but that's like its own field and also has its own wormhole of tricks and troubles.
Chris Fallin (10:04):
So walk us through this: you're not able to observe the program and see the types. Is Porffor doing any type specialization still, with some sort of ahead-of-time analysis? Like, are you inferring types? That sort of thing?
Oliver Medhurst (10:16):
So probably there's at least a thousand lines of code just for type inference and analyzing types. You still have to only do things if you really know it. So my inference is very hesitant to just completely delete code. But if you know this is just a
(10:37):
string and then you, say, do a plus sign, you know that's concatenation because that's a string, which is really nice. But then, say, function arguments, typically you can't infer those types, because that could be called with any arguments anywhere.
So unless you have really good analysis, it's very tricky, which I'm starting to get, but it still only
(11:00):
does it if it's completely sure.
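A rough illustration of the kind of case that is easy versus hard to infer ahead of time (a sketch, not Porffor's actual inference rules):

```js
// Easy: the literal fixes the type locally, so `+` can compile
// straight to string concatenation.
const greeting = "hello, " + "world";

// Hard: without whole-program analysis, `x` could be anything at
// each call site, so `+` has to stay generic here.
function addOne(x) {
  return x + 1; // number addition or string concatenation?
}
```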
Chris Fallin (11:03):
So for the compiler nerds on the call, I guess it sounds like you're saying you have intra-procedural, but not inter-procedural analysis?
Oliver Medhurst (11:09):
Yep. Yeah. Basically. And I'm starting to do that. At least for my optimization approach, I am trying to be very low level, essentially, where it's like Wasm instruction by Wasm instruction rather than being really high level, because it's much more annoying to do, but the kind of payoffs you get are much wider-reaching. Like, say, I have
(11:32):
dead code elimination, but I haven't specially written that. It just knows, oh, this if instruction is always called with, like, true, so just never take the else path.
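A tiny JavaScript sketch of how that kind of low-level constant knowledge removes code without a dedicated dead-code-elimination pass (illustrative only):

```js
// As written in the source:
const DEBUG = false;

function log(msg) {
  if (DEBUG) {              // the condition is a known constant...
    console.log("[debug]", msg);
  }
}

// ...so an instruction-level optimizer that tracks constants can see the
// `if` is always false and drop the whole branch, leaving `log` empty,
// with no purpose-built "remove debug logging" rule anywhere.
```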
Chris Fallin (11:45):
I think I saw something about this in your repository. You have some partial evaluation. Is that the approach you're using?
Matthew Yacobucci (11:51):
Yep. Sorry.
Mhmm.
Chris Fallin (11:52):
That's super cool.
Oliver Medhurst (11:53):
Thanks.
Joel Moses (11:55):
So let's talk about the performance aspects of this. And I know that there's a deficit. You support in Porffor JS-to-Wasm compilation, but also JS-to-native. And I think native is significantly faster, but Wasm is of course portable across runtimes for WebAssembly. But compare that to being able to run a JavaScript routine in
(12:18):
Wasm itself.
What is the benefit of Porffor compiling JavaScript to Wasm?
Oliver Medhurst (12:26):
Yeah. So one of the main benefits is, say your website has plugins and you want to be able to run those with good security. There are some JS sandboxes today, but they all have CVEs yearly at least, because
(12:47):
sandboxing JS is very hard. But if you just run in WebAssembly, you have control of every single import and export, no matter what. So you get that kind of free sandboxing.
And also, a lot of times it's just, say, my hosting provider only takes WebAssembly for some reason, like specially
(13:07):
optimized WebAssembly, then you just have to have a WebAssembly binary. There's no other way.
Joel Moses (13:14):
And most of the JavaScript tooling for WebAssembly bundles an interpreter inside. And your approach, of course, does not do that. Now, what does that do to both the size and the performance?
Oliver Medhurst (13:26):
Yeah, so for the size and performance, I will gather up the data so I'm not just talking.
Joel Moses (13:35):
You actually have graphs on this one?
Oliver Medhurst (13:37):
Yes. So, yeah, for JS to WebAssembly versus an interpreter, I get like 10 to 30 times smaller binaries. So instead of like one megabyte, you get like 100 kilobytes, which is very nice.
Chris Fallin (13:51):
And
Oliver Medhurst (13:52):
it's similar for performance as well, because you're just compiling instead of interpreting.
Joel Moses (13:58):
Now, I presume, since this is not a full interpreter, that you're essentially doing some of the JavaScript interpretation and building it yourself. So there's a conformance gap, I imagine. Some things are supported and not supported. Where is Porffor right now in that?
Oliver Medhurst (14:19):
Yeah, so that's the main trouble of doing this: not even just the ahead-of-time part, but writing an entire JS engine from scratch. Because JS is a big language; it has to be, because it supports the entire web. So I currently support about 60% of the entire language, because there are tests written by the spec authors of JavaScript. And they cover
(14:42):
everything from basic variables to for loops to Date to newer proposals like Temporal, which is the new, fancier way of doing date and time things, which most browsers don't ship yet, but the tests cover it.
Matthew Yacobucci (14:57):
Does this mean there's an optimum use case for Porffor right now?
Oliver Medhurst (15:02):
Yeah, I think the main thing right now, I think the short-term dream, is say you have a CLI app, you can just run it through this and get like a one-megabyte binary file, which you just run instead of needing the user to install Node.js or something, or shipping like a 100
(15:23):
megabyte binary of some runtime bundle. Because even with something like QuickJS, which is like the kind of go-to JavaScript interpreter, which is much smaller than, say, Node or Deno or whatever, even that is like a few megabytes.
Matthew Yacobucci (15:39):
Look, if you could save people from using NPM, I think you're gonna have a lot of friends. Yeah.
Joel Moses (15:46):
We have plenty of experience with JavaScript interpreters here, both in JS for NGINX directly, and QuickJS is also supported within the NGINX console complex as well. And yeah, the functionality gap you're implementing is real.
Oliver Medhurst (16:06):
Yeah, QuickJS is very nice. I think my proudest moment making this so far was I did a benchmark, which is, I think, like a 2015 V8 benchmark. And me compiling and running it in WebAssembly was two times faster than QuickJS running natively. Wow. That's very nice.
Joel Moses (16:27):
That's very nice.
Chris Fallin (16:30):
So I'm curious how one even gets started building something as large as a JavaScript engine from scratch. You mentioned the 60% number. Is the way you think of this just to kind of start with, like, the oldest version of JavaScript and then keep adding, like, okay, now we support ES3 and now ES5?
Joel Moses (16:44):
That's a question.
Chris Fallin (16:45):
Or are you building based on, I have this one holy grail thing that I want to run, and I'll build everything it needs? How do you go about this?
Oliver Medhurst (16:53):
Yes. So starting out, it was more like, what is easiest? Like, what can I do this week? But lately it's more of like a target. Like, I think one of my big targets coming up is being able to self-host, which is being able to compile itself fully with its own compiler, which is both very
(17:14):
nice to have and also very hard, because writing it, I'm like, oh, I'll use all the new fancy modern JS things, which I now have to support to compile.
But, say, every built-in API, like, say, Array.push, dates, all of those are written in TypeScript, and then compiled with Porffor to WebAssembly to have those built-in
(17:34):
APIs. So I already kind of have that to a degree, which is really nice for that kind of dogfooding. Because I've had it a few times where, like, I was implementing regex, a regex interpreter, which is written in TypeScript and compiled by itself. There's a few things where I wrote it, the compiler failed, and I was like, oh, guess I'll fix this compiler bug I accidentally found.
(17:55):
So yeah, I think for the most part, it's just: have a nice subset, which is what most things need, which I'm kind of getting to. Like, say, Temporal, which is a newer proposal, I probably won't touch for a while because not even most browsers support it yet. Most people don't use it yet. But the main things I'm lacking are more just that I haven't gotten to them
(18:16):
rather than for any technical reason.
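A hypothetical sketch of what a self-hosted built-in could look like (illustrative only, not Porffor's actual source):

```js
// Hypothetical: a built-in like Array.prototype.at written as ordinary
// TypeScript/JavaScript, then compiled to Wasm by the same compiler users run.
function arrayAt(arr, index) {
  // Negative indices count back from the end of the array, per the spec.
  const i = index < 0 ? arr.length + index : index;
  if (i < 0 || i >= arr.length) return undefined;
  return arr[i];
}
```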
Joel Moses (18:20):
Yeah. Cool. Cool. Well, the JavaScript community does tend to be a little bit, how do I say this, a little bit religious in some of its affiliations.
Yes. I know that you were invited to present at TC39. And what's the reaction in the JavaScript community been to Porffor?
Oliver Medhurst (18:39):
I'd say it's been surprisingly positive. I was expecting some pushback of, like, we didn't want to learn a new JavaScript engine or have to deal with new limitations. But I think probably everyone I talked to has been really looking forward to it. It's just mostly the unfortunate state of, people really want to use it when it's ready, which, like, there's still
(19:02):
not a huge gap, but like a large gap, of just getting things to work, which is like an unfortunate necessity.
Joel Moses (19:11):
Yeah, I think on your website you describe it as pre-alpha, with usability beginning in 2025. Are you there? Are you there yet?
Oliver Medhurst (19:19):
I'm definitely starting to get there. Like, even though at the start of this year it was passing less than 40% of the entire spec, it's been a nice bump. The main thing now is not even just what parts of the JS spec I support, but just, like, not crashing. Like, I've fixed most of it; like, say, I can run a lot of
(19:40):
things, but if you run them for thirty minutes, I'll have some memory leak and crash.
But a lot of things like this, it's just getting it stable. So
Matthew Yacobucci (19:52):
does that mean you do have some garbage collection issues
Oliver Medhurst (19:55):
or, like, with the... Yeah. Okay. GC is like its own art, and all my allocation, etcetera, is very simple, which is just like, how can I get this working well enough? So, yeah, for, like, long-running programs, it isn't there yet. But my main target probably for this year is, say, like an AWS
(20:17):
Lambda thing, where you just have five hundred milliseconds to respond to a web request.
Stuff like that, I can probably just not have a GC, because it will just die in five hundred milliseconds. So I just don't have to worry about running out of memory, etcetera.
Chris Fallin (20:32):
So there's the WebAssembly GC proposal, right, an extension to the core bytecode that allows using a host-side GC instead of the thing you build in yourself. Have you thought about using that, or a mode that would use that, in Porffor? I imagine there are some shortcomings still with, like, finalizers and stuff, but yeah, what do you think about it?
Oliver Medhurst (20:49):
Yeah. I think my kind of dream is to have like a setting somewhere, which is just: I want no GC, I want full Porffor GC, or I want Wasm GC. Just being able to configure that, because say you're running in a web browser, that's like the dream for Wasm GC. It will run really well. If you're, say, compiling to C or to, like, a server-side Wasm runtime, which
(21:11):
doesn't support Wasm GC, then you want to not use that.
I think the dream is just letting the user decide and having some, like, nice defaults.
Chris Fallin (21:23):
So one other thing I wanted to ask about was the community aspect of this. So we've mentioned, you know, and you mentioned, this has gotten positive interest in the JavaScript community. Really impressive progress. Have you found that it's been productive in terms of people coming in, seeing it's not ready, and saying, let me help and contribute? And has that built kind of a community
(21:44):
that's working well?
Or how have you found that process?
Oliver Medhurst (21:47):
Yes. I think the main thing which has helped me recently is just people filing bugs rather than PRs, because it's just like, someone tries to run this code, it doesn't work. Because there's so much JavaScript, it's hard to just know what to try, like what people are actually using today. I've had a lot of issues which are just like, here's a JS file,
(22:08):
please fix, which is typically not that helpful. But it's nice to just have the code to sample.
And there's been a few people making PRs, which is really nice. Like, one nice thing is, like, the actual compiler itself isn't that nice to work on, because I don't think any compiler is. But, say, all the built-in APIs, that's
(22:29):
basically just normal TypeScript, which, like, hundreds of thousands of people probably know because it's basically web dev, but cursed. So it's nice to see people working on that. Like, it's typically pretty easy to work on.
Matthew Yacobucci (22:48):
You mentioned people giving you bug reports. How do you wanna be contacted? What's your favorite way? Do you have, like, a Discord or Zulip, or is it mainly just through GitHub?
Oliver Medhurst (22:59):
Yeah, so I have GitHub and Discord, which both get used. GitHub is nicer for that kind of organization. But if someone's just like, how do I, like, how do I run a file? For example, if they can't understand the docs, then Discord is great for just being like, hey, please try this.
Excellent.
Yeah. So for everyone listening,
Matthew Yacobucci (23:21):
we'll have those linked below.
Joel Moses (23:23):
Definitely. Now, you also ship essentially a Wasm-to-C engine, as I understand it, for compilation. Is there something in WebAssembly that would make it easier for you to maintain or construct a cross-compilation utility? Is there something it's missing?
Oliver Medhurst (23:46):
Yeah, so there was, I forget which project, but there's like an official Wasm-to-C compiler, which is great, but it's very, like, it will give you C to, like, use in a C project. It will just give you a C file to use; like, say, any imports or exports, it will just give up and make you define
them in C, which is fine to a degree. But it's nice because I have my own: I can, say, if you have locals, name them properly in C for, like, easier debugging, or I can do a few performance tricks. Like, I can recognize an if as an if, not just, oh, there's like two blocks for an if.
(24:28):
I can usually work around that to have a bit faster C performance. But I'd say typically it's fine. There's some stuff, like, say, Wasm GC, which I will probably never support there and will just move it up the chain as an abstraction. Or some other Wasm proposals are kind of tricky; like, I haven't done this yet, but say atomics would probably be tricky in C.
(24:51):
But I think for the most part, it's pretty nice, because a lot of it is just like one or two lines of C per Wasm instruction.
Joel Moses (24:58):
Now, the people that you see that are interested in Porffor, do they tend to be JavaScript people who are looking for ways to get to WebAssembly, or are they WebAssembly people looking to support JavaScript?
Oliver Medhurst (25:08):
I'd say it's kind of, probably not fifty-fifty, but I've experienced a lot of interest from both sides. Like, Wasm people, say, that are making a product which is a cloud thing that only takes Wasm files from users as input. So many of them want to support JS without having to ship an entire interpreter for
(25:29):
various reasons on their side. So a lot of interest there, and a lot of interest from just, I want to ship my JS as a blob. Even regardless of, say, performance and size reasons, say you're working on a CAPTCHA, which you don't want anyone to reverse engineer for security reasons, having that as a Wasm
(25:49):
blob is probably 10 times harder than reverse engineering some JS
instead.
Joel Moses (25:56):
Now, I know that on your GitHub repo, at the very bottom, you state that you intend to only use Wasm proposals that are common. The ones that are uncommon or draft, you stay away from. Are there some draft ones that you have been tempted by?
Oliver Medhurst (26:15):
Yeah, a few. I think the first one that comes to mind was, I think there was a SIMD proposal after the initial one, which, I think, might be merged now. It might actually be merged now.
Chris Fallin (26:28):
Relaxed SIMD, is that the one you're thinking of?
Oliver Medhurst (26:29):
Yes, I think that is. I think that's in most engines. But when I was first doing it, it was either not in V8 or behind a flag or something. There's a few ones like that which are really nice, but, like, especially if, say, V8 doesn't support it, even if it's behind a flag, most people will just not know how to do that.
Chris Fallin (26:52):
Are there proposals, or changes to the bytecode, that you would like to see that would make it easier to target?
Oliver Medhurst (26:59):
I mean, honestly, the first one that comes to mind is having some way of duplicating a value on the stack without having to use a local, which I feel like is a tiny ask. But I feel like there's probably a reason it doesn't exist. Like, there's probably a reason why. But there's just so many times where I have to have some temporary local to do some stack manipulation. Where it's like, I
(27:23):
want to swap two values on the stack, or I want to duplicate a value, and there are just no nice ways of doing that.
It's understandable to a degree, but...
Joel Moses (27:35):
So is there anything the community can do for the Porffor project that you would like to see?
Oliver Medhurst (27:42):
I guess the main thing is just harassing me with bug reports. It's
Joel Moses (27:45):
harassing you.
Oliver Medhurst (27:46):
one of the most useful things, which is just people trying things and being like, this is broken.
Joel Moses (27:51):
Gotcha.
Oliver Medhurst (27:52):
Like, I can just add that to my list of, like, things to try. Got it.
Matthew Yacobucci (27:56):
I'm wondering, I'm curious about this. This is a little bit of a digression, but are there any lessons that other ahead-of-time compiler people could learn? Like, let's say, I think it's safe to say Python is a popular language, and it also has a similar paradigm to JavaScript,
(28:18):
being dynamic. Are there generalities that we as a community can learn from what you're doing?
Oliver Medhurst (28:26):
Yeah, I think it's a good question. I think generally my advice, and it might be a hot take: sometimes I'll spend days theorizing on, like, a good way to do something, but just going for it is often nice, to just see, like, what works. Like, there's been some things where, say, I'm doing a performance change and I can easily spend days in a Google
(28:49):
Doc writing code by hand, theorizing, I think this will be 50% faster, instead of just doing it. And just seeing kind of real-world results is nice. Like, when I was first starting the project, I thought it would fail immediately. Like, I thought I would hit some hurdle and be like, I give up.
(29:10):
Because I just thought of it as a side project. So I was like, this will occupy my free time and make me learn more Wasm stuff and compiler stuff. But, like, I haven't hit a brick wall. So it's basically just: iterate, try things.
Joel Moses (29:26):
Yeah. Well, it's certainly impressive progress. We're ticking down to the end here, unfortunately, but I've got one final question. And I know, just because I've read the GitHub, but just for our listeners: why Porffor? What's the name?
Oliver Medhurst (29:41):
Yes. I was learning Welsh when I started this project, and Porffor is Welsh for purple, because no other JS engine is purple colored. Nice. Almost everything is yellow or blue or some shade, but I was like, I need the SEO.
Joel Moses (29:57):
That's great. That's great. Well, Oliver, thank you very much. Look, you've made a tremendous amount of progress. And it was wonderful to host you here for TC39.
And I know that the community really enjoyed the discussion. And as always, thanks to Chris for pitching in as co-host.
(30:17):
Before I go, a quick reminder: friends don't let friends JIT in production, so compile ahead of time and responsibly. It's a big application world out there, and it's time to unleash the power and promise of WebAssembly. For Oliver, for Matt, for Chris, and myself, thank you for listening in.
Be sure to subscribe for future WebAssembly news and views. And until next time, take care.