Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Shay Nehmad (00:00):
This show is
supported by you and Elastic.
(00:02):
Elastic is the company behind Elasticsearch that helps teams find, analyze, and act on their data in real time through their search, observability, and security solutions. We're gonna talk about them more in the ad break, but thanks a lot, Elastic, for sponsoring this episode and the live meetup that goes with it.
(00:25):
Jonathan, is that your soundboard?
Jonathan Hall (00:28):
No. That was not my soundboard. That was actual live people, I think. Yeah. You don't have video, so you don't know. I'm assuming that was honest.
Shay Nehmad (00:36):
Stick around to the ad break to hear more about our beautiful sponsor. This is Cup o' Go for May 27, 2025. Keep up to date with the important happenings in the Go community in about fifteen minutes per week. I'm Shay Nehmad.
Jonathan Hall (00:59):
I'm Jonathan
Hall.
Shay Nehmad (01:00):
And we have a bunch of beautiful people here live waiting for us to start talking about Go news. So how about you kick us off?
Jonathan Hall (01:07):
Alright. So we have a new issue, or a proposal, that's been accepted that I wanna talk about. Do you have any idea, Shay, how Go knows how many CPUs it should use when it's running?
Shay Nehmad (01:19):
It's always messed me up, because when I use a container, I have to, like, change it. So I don't remember, like, my DevOps guy did it, but it's always, like, annoying-ish.
Jonathan Hall (01:28):
Yeah. So yeah, it should be a little bit less annoying-ish after this proposal. So the proposal is CPU-limit-aware GOMAXPROCS as the default. And my goodness, I learned a bit about some of this stuff reading through just the description of this proposal. It's a really kind of messy problem.
(01:49):
Like you might think it's pretty straightforward, right? I have however many CPUs in my system. I have four cores in my laptop, and so I should run four.
Shay Nehmad (01:57):
You have four cores in your laptop? Or whatever. Right? Do we need more Patreon supporters, John? Probably. Is your code just this efficient?
Jonathan Hall (02:06):
But when you start getting into, like you said, containers and VMs and stuff like that, it gets really weird, because you might have, like, a 64-core or more physical system, but maybe you're allocated four of those cores. And when you try to, like, decide how Go should decide how many CPUs to use, an argument could be made it should try to use 64, because then it could spike to 64 CPUs for moments,
(02:32):
even if on average it can only ever go up to four. Right?
Shay Nehmad (02:35):
Yeah. But that's a problem, because you have, like, 64 containers running on the same server, because that's why you bought that huge server you're running a cluster on, but then every server, or every container, thinks it, like, owns the whole machine.
Jonathan Hall (02:48):
Well, I mean, still, the hypervisor or whatever is controlling that will still limit you to whatever your share is on average over some time period. Maybe it's one hundred milliseconds or something like that, but maybe you want to be able to spike above that four cores or whatever you're allocated when you have the chance, when nothing else is using those cores. The point is it's
(03:09):
complicated, and this goes into all sorts of nuances and details that I'd never really thought about. That makes sense. And then so it was sort of the premise that I thought, this is good.
This makes sense. I have no flipping idea how you should solve this problem. What makes for a reasonable default? And I have to be honest, I don't actually know what default they
(03:30):
settled on. Like the details of that sort of got fuzzy, and I was like, okay, this makes sense now. I'm gonna go eat a hot dog or whatever I did that day. But TLDR, this should make your life easier when you're dealing with VMs or Kubernetes or whatever, and the previous default just
(03:50):
didn't make sense. It should make more sense now. You'll still probably want to fine-tune it in certain circumstances, but it should be better.
Shay Nehmad (03:57):
So I used to do it every time when I deployed, with GOMAXPROCS or whatever, the environment variable. And now the trick that I memorized is useless. Thank you.
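[For reference, a minimal sketch of the old manual approach being described, using the commonly seen go.uber.org/automaxprocs helper as an illustration; this is not the mechanism the new proposal adds, which builds the same awareness into the runtime itself.]

    // Before the CPU-limit-aware default, people typically either exported
    // GOMAXPROCS=<quota> at deploy time or pulled in a helper like this one,
    // which sets GOMAXPROCS from the container's cgroup CPU quota on import.
    package main

    import (
    	"fmt"
    	"runtime"

    	_ "go.uber.org/automaxprocs" // adjusts GOMAXPROCS at init time
    )

    func main() {
    	// Passing 0 reports the current setting without changing it.
    	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
    }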
Jonathan Hall (04:08):
Deprecating your knowledge one Go release at a time.
Shay Nehmad (04:12):
You know what? That's actually good. Less need for knowledge is actually better. I can focus more on my business stuff.
Jonathan Hall (04:18):
Very good.
Shay Nehmad (04:19):
So this episode is live, recorded with an audience, and you're probably very excited to hear about AI, because we're in San Francisco, right? Who doesn't love AI? Me. Yeah. Everybody's, like, nodding. We don't wanna hear about AI.
Jonathan Hall (04:31):
I'm not in San Francisco, so I'm allowed not to like AI, right? Yeah. As long as you don't
Shay Nehmad (04:36):
wanna move here. I'm keeping it under wraps, just in this meetup I'll say it's... But there was actually a good LLM talk from incident.io. I know you're surprised, you're, like, shaking your head. The main point of the talk was: it's just Golang. So I won't, like, rehash everything they said in the talk
(04:58):
because it's just fifteen minutes and you should probably just watch it, but I do, it's like a very strong recommendation, I'll give you, like, the main hook. So incident.io, they're working on, like, automating incident agent stuff, which makes sense, right? They would want to do that, and Rory Malcolm from their team just shared, like, how they're building the entire AI infrastructure inside
(05:21):
incident.io. They use Go, and it turns out to just, like, be basic Go tools underneath, because, like they said in the Go blog, all the LLM stuff is mostly specialized hardware and APIs anyway. All the actual model stuff is happening over the network, like, not on your service. And what you're doing
(05:42):
is exactly what Go is good at, which is, like, network calls and APIs and text templating, which was just besmirched in Josh's talk, which just happened. But they basically just developed their internal library, and he just shows it, because it's pretty simple, with objects like prompt and snippets and, like, templates, which use Go
(06:04):
templating underneath the hood, and tools, which give the LLM access to, like, real-life stuff, like search, which all sounds very, like, AI-engineering complicated, okay, I'm gonna make $450K living in San Francisco doing this stuff, but it's actually very simple Go code. It's just part of how they're just working at incident.io. So I suggest watching the talk,
(06:26):
it's pretty cool. Did you get a chance to integrate with LLM systems yet, or did you get a chance to avoid it rather,
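[For reference, a minimal sketch of the pattern described in the talk: prompts as Go text/template strings rendered with typed inputs. The type, field, and template names below are made up for illustration and are not incident.io's actual library.]

    // Prompts as Go templates: render typed input into a string, then send
    // the rendered prompt to the model provider over HTTP.
    package main

    import (
    	"log"
    	"os"
    	"text/template"
    )

    // PromptInput is a hypothetical input struct, not a real API.
    type PromptInput struct {
    	IncidentSummary string
    	Question        string
    }

    var triagePrompt = template.Must(template.New("triage").Parse(
    	"You are an incident assistant.\n" +
    		"Incident summary: {{.IncidentSummary}}\n" +
    		"Question: {{.Question}}\n"))

    func main() {
    	in := PromptInput{
    		IncidentSummary: "Checkout API latency spiked after the 14:05 deploy",
    		Question:        "What should we check first?",
    	}
    	// In a real service the rendered prompt would go into the LLM API call.
    	if err := triagePrompt.Execute(os.Stdout, in); err != nil {
    		log.Fatal(err)
    	}
    }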
Jonathan Hall (06:32):
I should ask? I've had a chance to avoid it. I actually do want to integrate with some for the startup I'm working on, but it hasn't been a high enough priority to actually get there yet, but it'll be there before long.
Shay Nehmad (06:43):
So if you're in the same, similar position to Jonathan, just go watch that talk, it's fifteen minutes, and I think you'll get a lot of, like, tried and true knowledge from incident.io on how to start, like, the internal, you know, go-AI library inside your company.
Jonathan Hall (06:57):
Awesome. It's in
my watch later list now.
Shay Nehmad (07:00):
Wait, so you're
saying you're putting it in your
watch later list?
Jonathan Hall (07:03):
Yeah.
Shay Nehmad (07:04):
Do you store that on Stoolap, by any chance?
Jonathan Hall (07:07):
No. I put it on a... I store it with a tried and true, trusted database. How's that for a Reddit-style comment? For sure. Yeah. Stoolap. What is Stoolap? Stoolap is a high-performance SQL database written in pure Go with zero dependencies. Wait. Is it?
(07:27):
Wait a minute.
Shay Nehmad (07:28):
Yeah. It's been making the rounds on Reddit, which means we saw it through the lens of negativity. But what do you make of this project after you dug in a bit?
Jonathan Hall (07:37):
So the main thing that jumped out at me is that it's columnar, rather than row-oriented, which is something I knew about mostly from when Matt Topol was on the show talking about Apache Arrow.
Matt Topol (07:48):
Matt Topol, I currently work for Voltron Data, and I primarily just work on the Apache Arrow libraries in general. It's my day-to-day job now.
Jonathan Hall (08:00):
But it's optimized, apparently, for in-memory performance with optional persistence. So I guess it's kind of Redis-ish, in the sense that, you know, it's designed for fast in-memory use. And if you want to persist it, as an afterthought maybe, then you have that option too. So I haven't tried it. I haven't tried using it.
(08:21):
I've just been reading about what I saw shared on the Internet.
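[For reference, a rough sketch of what using Stoolap might look like through database/sql. The import path, driver name, and DSN below are assumptions to be checked against the project's README, not known facts; only the database/sql calls themselves are standard library.]

    // Hypothetical usage sketch; verify driver registration and DSN against
    // the Stoolap docs before relying on this.
    package main

    import (
    	"database/sql"
    	"fmt"
    	"log"

    	_ "github.com/stoolap/stoolap" // assumed import path registering a driver
    )

    func main() {
    	db, err := sql.Open("stoolap", "memory://") // assumed driver name and in-memory DSN
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer db.Close()

    	if _, err := db.Exec(`CREATE TABLE users (id INTEGER, name TEXT)`); err != nil {
    		log.Fatal(err)
    	}
    	if _, err := db.Exec(`INSERT INTO users VALUES (1, 'gopher')`); err != nil {
    		log.Fatal(err)
    	}
    	var name string
    	if err := db.QueryRow(`SELECT name FROM users WHERE id = 1`).Scan(&name); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(name)
    }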
Shay Nehmad (08:24):
And a brand new DB, that sounds pretty ambitious, I would say. Sure. Especially with so many features.
Jonathan Hall (08:32):
But the thing is, I mean, a lot of the people on Reddit were kind of pointing out what you just said. They're like, this sounds really ambitious. There's no way this is possibly ready for prime time. How dare you say that you have something that's fast? What is fast? You know, define high performance. But this was a research project initially, or initially it was a hobby
(08:54):
project turned into a research project, and now it's released. I love that. Why aren't people just saying that's amazing? A hobby project turned into a research project, and now it's open source. What's not to love? Who cares if this becomes the new Redis or if it just sits there as it is? I think it's great.
Shay Nehmad (09:11):
So first of all, I'll say that you got your Reddit voice really good in that, where you're like, oh, it's not fast enough, to the point where everybody in the room had, like, a giggle. Oh, he's an Internet troll. Jonathan is actually nice, I promise, even when we stop the... Yeah. But the problem, when you... I'll play devil's advocate and say, when you put on your website that you're, like, production ready and you're fast
(09:33):
and then, you know, some engineer at a company is like, oh, this is production ready and fast, and plugs it into their architecture, suddenly they realize it's a project that's actually maintained by just one person. Even with all the best intentions in the world, you know, they could go on a vacation for three months, even, you know what, even a medical emergency or something. And then suddenly your project is stalled because you thought it was production ready, but in
(09:54):
reality, like, you need to have some community around it or something, just to have some safety, right?
Jonathan Hall (10:00):
So my response to that is that engineer needs to be fired. He deserves whatever he gets.
Shay Nehmad (10:07):
That Reddit energy
is really seeping.
Jonathan Hall (10:10):
The thing is, my backyard sandbox is your production ready, and vice versa, depending on what we're talking about. Production ready doesn't say anything except somebody seems to be using this in production. So do your due diligence before you adopt, especially a new technology.
Shay Nehmad (10:27):
That makes sense. So it has been, like, pretty popular on Reddit, like, I think last week when we added it to the backlog, and then we didn't get to it, so it got pushed to this week. But I did see, like, a minor release and a pull request that's, like, in the works from two days ago, so it does seem like people do work on it. If this sounds like something that might fit your use case and
(10:47):
you like being a super early adopter, or you're looking to score some, like, open source contributions in the data world, this might be a cool, interesting project to join.
Jonathan Hall (10:57):
So I guess we kind of agree that maybe you need to do some due diligence before you use Stoolap, but what if you need to do some sort of, like, I don't know, full-text search or something like that? What kind of product could you use that would integrate well with Golang?
Shay Nehmad (11:11):
Well, I could say Elastic, but people will say we're biased. So, I'm just gonna say, yeah, I'm gonna say just use tail and grep, you'll be fine.
Jonathan Hall (11:21):
Perfect, I love
it. Tried and true.
Shay Nehmad (11:23):
God, if you have to, if you have to: Elastic just released their new Go client, version 9.0.0, and as someone who's used Elastic, one of the things I less enjoyed is, like, writing these huge JSONs to define, like, an index or a reindex or whatever, and then just putting it into the JSON file next to my code. The main new thing in this version, other than
(11:46):
making it compatible with 1.23, is that you now have a DSL to talk to Elastic, which
Jonathan Hall (11:51):
is pretty cool
because you
Shay Nehmad (11:52):
can do, like, dot new index, dot replace, dot whatever. Sort of a blessing and a curse. I know some people hate, like, DSLs and ORMs, and they just want to talk to the API in the rawest form possible, and that makes sense and I respect that; the option didn't go away, but the DSL looks super nice, it's, like, fluent, you know what I mean, where you call a function and then call another function to add another thing to
(12:12):
that query. So it's like dot new index, open paren, close paren, and then dot add whatever. So it reads like English, which is nice. It reads sort of like those JSONs, just a lot less verbose.
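[For reference, a minimal sketch of the raw-JSON style being contrasted here, using the go-elasticsearch client's functional options; the index name and query are made up, and the v9 module path is assumed from the version discussed. The new esdsl package is the fluent, typed alternative mentioned above; its exact builder calls aren't reproduced here.]

    // The "huge JSON next to the code" style: build the query as a string and
    // hand it to the client. v9 adds the esdsl typed builders as an alternative
    // to writing this JSON by hand.
    package main

    import (
    	"log"
    	"strings"

    	elasticsearch "github.com/elastic/go-elasticsearch/v9"
    )

    func main() {
    	es, err := elasticsearch.NewDefaultClient()
    	if err != nil {
    		log.Fatal(err)
    	}

    	query := `{"query": {"match": {"title": "gopher"}}}`
    	res, err := es.Search(
    		es.Search.WithIndex("articles"), // hypothetical index name
    		es.Search.WithBody(strings.NewReader(query)),
    	)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer res.Body.Close()
    	log.Println(res.Status())
    }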
Jonathan Hall (12:24):
So I first read these release notes. It says this release introduces an optional package for the typed API named esdsl, but I first read it as named Edsol, and I thought, why would they name this Edsol? But I realized I was reading the release notes for dyslexic search. Dyslexic search
(12:45):
is pretty good actually.
Shay Nehmad (12:46):
Like fixes your
typos and whatever.
Jonathan Hall (12:50):
Or introduces
them maybe, I don't know.
Shay Nehmad (12:52):
Yeah, get some extra time on the test. Well, one last thing on the backlog that we really wanted to get to is this sort of big and sort of intimidating proposal about green tea garbage collection, which I opened and started reading, and got intimidated and closed it, and, like, tagged you on the backlog, so you teach me about it.
Jonathan Hall (13:11):
All right.
Shay Nehmad (13:12):
So teach me about
it.
Jonathan Hall (13:12):
So I started reading this and I got confused. I'll just give you an example. Go's garbage collector implements a classic tricolor parallel marking algorithm. Now, I can't see the room there. I want to see a show of hands. I want you to tell me how many people raise their hands. How many of you understood that sentence?
Shay Nehmad (13:29):
We have one, maybe two pretty confident hands up, and Josh's smile, Josh Bleecher Snyder, standing together smiling. Yeah, I wrote the compiler, I know every line in it. And I just want to clarify, there are more than four people in
Jonathan Hall (13:41):
the room, as you can hear. So from there on, it says, this is at its core... So it's like now it's explaining it in simpler terms. This is, at its core, just a graph flood, where heap objects are nodes in the graph and pointers are edges. Now I understand that a little better. Like, I know what edges and nodes are in a graph. I don't know what a graph flood is, but, like, so it's still not enough for me
(14:03):
to have a concept in my head of what this is, but, like, oh, there's terms I understand now. Yeah, I know some of these words. Here's what I did when I was reading this. I said to ChatGPT, explain this to me like I'm five years old, and it says, imagine your toy box is messy. Oh no. But it did help break it down for me. So basically, as I
(14:25):
understand it, and I'm sure some people in the room, those who raised their hands, are going to correct my oversimplification. But rather than sort of doing a sweep over all of the memory to see what should be cleaned up, which is time consuming and isn't very optimal for CPU caches and so on, it does this in smaller chunks, which can be
(14:47):
parallelized and done faster. I think it's kind of like cleaning up your toy box one drawer at a time instead of everything at once, ish. It has a cool name, so I'm going to stick with that part. That's cool. Green tea. We like the food names on this show.
Shay Nehmad (15:03):
Yeah. Like OpenTofu and tamago and everything. But is it actually faster? Like, you could convince me, because it sounds impressive, but is it actually faster?
Jonathan Hall (15:12):
Not when you take the time to read this and understand what it means, but if you skip that step and just go to using it, it claims to be anywhere between 10 to even 50% faster under certain workloads, in certain situations. It seems to be much faster when you're using high parallelism, so multiple CPUs and multiple goroutines and stuff like that.
(15:35):
But yeah, it's faster, or else why would they do it? You know, I mean, yeah, let's do this new thing. It's slower, but it sounds cool.
Shay Nehmad (15:45):
And the state of the proposal is, like, accepted, implemented, is it already in the language? When will I enjoy 50%? Now that I know it exists, even though I'm not worthy of it, I want it now in all my production workloads.
Jonathan Hall (15:58):
So it is not accepted yet. It is still being investigated, but they have a working prototype that people are experimenting with, and you can try it if you're ambitious enough. And there's some actually cute little, like, ASCII art charts and stuff in here that are kind of neat to look at. Yeah, you could try it out. I don't know when it would be...
(16:22):
I don't know how soon it would be around. The Go 1.25 freeze is happening very soon. I'm sure it would not be there even if the freeze weren't happening. They probably want an extra cycle at least for something this critical.
Shay Nehmad (16:34):
But if I want to contribute to this discussion, I should go to the discussion and look at the pretty graphs and contribute some more? Is what you're saying, basically? Yeah. Oh, I see the graphs. I'm looking at them right now. They are very cool. Oh, they use Braille characters as, like, a way to do the dots, that's smart. I'm sorry, I'm geeking out about the graphs and
(16:54):
not the garbage collector itself. So, if I wanna try it, do I, like, turn on some experimental flag? Because that sounds like they could use the feedback.
Jonathan Hall (17:03):
So it says how to try it: install gotip, the gotip tool, so you install it from the, you know, the most recent version, and then GOEXPERIMENT=greenteagc.
Shay Nehmad (17:13):
So: please focus your attention on whole programs; microbenchmarks tend to be a poor representative of garbage collection. Oh, I see what they mean. So they want you to run it on, like, a real production workload and not just a benchmark loop. Especially since, now that I think about it, you remember that loop proposal? It takes out all the compiler limitations, etcetera, which this might impact as well. Anyways, GOEXPERIMENT=greenteagc,
(17:36):
or GOEXPERIMENT=nogreenteagc for a coffee person. Yes, Josh. And just to round out that discussion, we have a Go compiler expert here in the room, and he's gonna try to shed just a tiny bit more light on it to give us some intuition. Doesn't that sound awesome? Like, I planned it and whatever. It didn't happen at all.
Josh Bleecher Snyder (17:56):
First of all, the compiler and the runtime are totally separate; the runtime is really hard. So when garbage collector people talk, they talk about the garbage collector and the mutator, this awful thing out there, which we think of as, like, the thing doing the useful work. And the mutator is the thing that makes this mess that the garbage collector has to go and run around and clean up. And the challenge about cleaning this up is that the pointers that are
(18:18):
live, the parts of your memory that you actually still want to have, are scattered all around through memory. And we know that memory cache misses are slow. And so if you're busy chasing these pointers around all over memory, a lot of what you're doing is gonna be cache misses. You're gonna spend all your time stalled. So the idea is, let's waste, quote unquote waste, some CPU time to try to get some
(18:40):
locality. So instead of chasing each individual pointer, we're gonna work on chunks of pointers, on chunks of memory, and this might end up being wasted work. We do all of this work for this whole chunk of memory, and we only found two relevant pointers or three relevant pointers. It's okay. CPUs are really fast, and they're getting faster. And you have more and more of them, but the memory bandwidth isn't
(19:02):
keeping up. So let's waste CPU so that we can now do less memory chasing. The added bonus is that we can then throw in SIMD and other really advanced techniques to get extra speed out of this. So the intuition is: burn more CPU, but keep it local.
Jonathan Hall (19:21):
Cool. I have questions, like, does that universally apply? Are there systems where memory is relatively faster than CPUs, where you would not want to do that? But I think we're getting into
Shay Nehmad (19:33):
Josh is nodding his
head very strongly.
Jonathan Hall (19:36):
So I think we're getting into details that probably are for another forum. Maybe we can talk about it on the channel. That's a good segue to our break, where we talk about that, right?
Shay Nehmad (19:44):
Yes, our Slack channel, cup-o-go, kebab case, on the Gophers Slack. Thanks for mentioning it. Alright, so let's move to our ad break, with actually quite a lot of interesting updates this time, so if you're one of these podcast listeners who does skip at this specific moment, don't do this on this episode. This episode is sponsored by Elastic.
(20:09):
Thank you, Elastic, for sponsoring this episode and hosting this meetup and giving us free pizza. Jonathan, next time you're in the Bay Area, we'll do another event. You need to like AI though, as mentioned at the top of the show.
Jonathan Hall (20:24):
I'll do it for
that week.
Shay Nehmad (20:25):
You'll like AI just temporarily. Critical section: enter critical section, you land in SFO Airport. Anyway, Elastic is the company behind Elasticsearch, which helps teams find, analyze and act on their data in real time through their search, observability, and security solutions. They're building a fair, like, a fair amount of their stuff with Go
(20:46):
and it's become one of their core languages. Their new Elastic Cloud serverless platform is predominantly built with Go on top of Kubernetes, which is also Go, obviously.
So they're taking, like, resilient, high-performance HTTP APIs powered by oapi-codegen, by a friend of the show, Jamie Tanna, we'll get to his blog post in the Lightning Round, that can basically scale instantly without you having to
(21:08):
think about infrastructure, which, if you're not the direct competitor of Elastic and you just want to outsource search to a company that does that for many years and does it pretty well, is good news for you, right? Have you used Elastic in the past, Jonathan?
Jonathan Hall (21:22):
Yes.
Shay Nehmad (21:23):
So have I, and like other stuff we've had on Cup o' Go that sponsored us, obviously we like money and free pizzas, but we wouldn't let them sponsor us if we didn't try it ourselves and know we could actually recommend it to our listeners. I've used it in a few critical production workloads, and I can just say it's one of these, like, trusty workhorses, where you get to know it and the expertise is really worth it, and
(21:46):
their Go, like, Go integration goes even deeper than their HTTP APIs; their ingestion products, Elastic Agent and APM Server, are also written in Go, and they're actually one of the top contributors to the OpenTelemetry Collector, which is also built with Go. So, most of their ingest collectors across the platform are Go-based too. So, it sounds like a good
(22:06):
bet: if you use Go, you're gonna have a good time using Elastic. On the security side, they've got a ton of Go powering their threat detection and response capabilities, and for developers, they've got dev tooling written in Go to make development smoother, like we just talked about in the show, the 9.0.0 version of their client.
Also, they're hiring across the board, so if you use Go and they
(22:28):
use Go, that sounds like a pretty good match. Check out jobs.elastic.co, and they'd love to have more gophers on board. So thanks a lot to Elastic for sponsoring this episode and giving food to all the people in the room. Other than Jonathan; he's looking very sad right now. He's not getting any food.
Jonathan Hall (22:45):
Awesome. Yeah. If you're not in San Francisco and you're, like me, not enjoying pizza, you can still help the show. I don't know how that's related, but we'll go with it. You can join our Patreon, which helps to pay for editing and hosting fees. You can join us on Slack, as Shay mentioned, we have a channel on the Gophers Slack. You can also send us an
(23:06):
email at cupogo.dev, find our email address there, and you can buy some swag. We have some nifty mugs and t-shirts and a few other deals. And of course you can share this podcast with a friend, a colleague, a pet, a student, whatever, anybody who might be interested.
It seems like a lot of folks are doing that. I want to talk about some metrics. So we have hit a record this month. The
(23:30):
month's not even over yet. We still have a few days left, and we have beat our previous month-over-month record by over 600 downloads.
Our previous record was February of this year, where we had 6,557 downloads. We are already at 7,114, almost 600 more. Well, we'll definitely have 600 more by the time the week's
(23:51):
over, month's over.
Shay Nehmad (23:52):
And the episodes weren't even that good.
Jonathan Hall (23:57):
You say that, but our most popular episode to date was the one two weeks ago, where we interviewed Kevin Hoffman of IT Lightning about spark plugs. And we've had other very popular ones. I think the second most popular to this date... Yeah, with Ian. That one's not there yet, but that's still the most recent episode. That one's only 1,100 so far. We interviewed Carlos Becker.
(24:21):
That's the second most popular to date.
Shay Nehmad (24:25):
Oh, from a
Jonathan Hall (24:25):
couple months
ago.
Shay Nehmad (24:26):
From GoReleaser.
Jonathan Hall (24:27):
Yes. Yes.
Shay Nehmad (24:28):
Yeah. That tracks.
It's a super popular project.
Jonathan Hall (24:30):
Yep. But we're routinely getting 1,500-plus downloads per episode, which just blows my mind. When we started, I was kinda crossing my fingers, hoping we'd get a hundred one day. And so thanks, everybody, for supporting the show and making it a success. We do have a new Patreon supporter this week, Mikhail Christensen, so thanks
(24:51):
for joining. And I think that kind of wraps it up for the ad break.
Shay Nehmad (24:56):
Yeah. One final thing you forgot, even though we have a checklist that's in Trello, I'll add it to the checklist next time, is that you can also, other than sharing the show, we don't pay you to advertise, so other than sharing the show, like, with a friend or a colleague or a fellow student, leaving a review on Spotify or Apple Podcasts, or just physically here, like,
(25:16):
slapping five stars on my face, helps promote the show in various algorithmic charts, such as Spotify's recommendation engine, and at this point, with a show that's this big, it actually does matter, it's a new growth vector for us. I think some people told me that's how they found
(25:36):
out about the show, just scrolling through these apps, which is cool. That does it for our ad break. Stick around for the lightning round to round out this episode. That wasn't better. Can you save me here?
Jonathan Hall (25:49):
Yeah. So I think that wraps it up. Let's do a lightning round and then put this show to bed. I don't know if that's better either.
Shay Nehmad (25:59):
I've more
Jonathan Hall (25:59):
than put it in. My kid's waiting for me to put him to bed right now, so that's on the brain. Alright.
Shay Nehmad (26:09):
Lightning round. First item for the lightning round is the Google I/O Go presentation. So I was super pleasantly surprised, as I think many people were, to see Google I/O this year talking about Go. They had, like, a twenty-minute talk just about Go and what it does and what they're planning. It's basically our show. They basically stole the format of our show, where they talk
(26:30):
about things that happened and things that are coming. But the main thing, they... on Reddit, they explained why they did it. They wanted to show that Google has more support for Go, because recently there has been some, I don't know if it's a direct correlation, but recently some people have been talking about, oh, Ian left, and I don't know, the proposals have gotten slower, maybe Google is ditching Go because it's not AI or
(26:53):
whatever. So they really wanted to show, like, a public show of support. Look, we are the people behind this, we're totally on it. And also they wanted to get more people using Go, so putting it into Google I/O, which basically everybody watches, was a good way to get more people to know about Go. And even at the end, because I guess everything has to have some AI connotation right now, at the
(27:14):
end they show, like, how to build the AI stuff with Go, which is very similar to the talk we recommended in the news, another good pattern. I was just happy to see it. If you know enough about Go or you listen to the show regularly, you shouldn't really watch it, because they just talk about the stuff we already talk about, in less detail.
Jonathan Hall (27:31):
You heard it here.
Shay Nehmad (27:32):
But I think if you're trying to, like, get someone to be enthusiastic about Go, like, right now, like, a manager who doesn't know about it and you need to pick a tech stack, that's a pretty good, like, strong recommendation on why to use that versus another language. Cool. What's your Lightning Round thing?
Jonathan Hall (27:45):
My Lightning Round pick is Excelize. Excelize is a library that lets you read Excel files in Go, which I'm using, which is why I thought it was kinda cool to mention it. I saw there was a new release that came out recently. I have some love and hate for this new release, though. So the new release is version 2.9.1, which includes
(28:06):
breaking changes. I don't like that a patch release, of all things, contains breaking changes, especially in the Go ecosystem, where you're kind of, like, mandated to use SemVer. But putting that aside, there's some cool new features in here. A whole bunch of new features, new field width capabilities, and, I don't know.
(28:29):
There's just a whole list of things that makes it more capable for reading Excel files. I don't think I need these features for what I'm using. I'm just reading a very simple Excel file. It might as well be a CSV, but it's not. But this is still a cool library. So if you ever need to read Excel files in Go, check out Excelize. The link is in
Shay Nehmad (28:47):
the show notes. Are you using this package, like, literally in production right now?
Jonathan Hall (28:50):
I've mentioned in passing once today, and maybe in the past, a startup I'm working on. I'm using it for that. I'll go into the details about that startup on Sunday. But so, yeah. I mean, production-ish, we don't have anybody paying us right now for it, but that's the intention.
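[For reference, a minimal sketch of reading a spreadsheet with Excelize, the library discussed here; the file and sheet names are made up.]

    // Open a workbook and print every row of one sheet.
    package main

    import (
    	"fmt"
    	"log"

    	"github.com/xuri/excelize/v2"
    )

    func main() {
    	f, err := excelize.OpenFile("report.xlsx")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	rows, err := f.GetRows("Sheet1") // all rows as [][]string
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, row := range rows {
    		fmt.Println(row)
    	}
    }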
Shay Nehmad (29:06):
So it sucks that
they did a breaking change.
Jonathan Hall (29:09):
I don't think it
broke us, but yeah, it's
Shay Nehmad (29:12):
It might. Yeah, but I guess if you're using Excel files and you need to parse them, this is a good package to do that. I don't know how many alternatives you have, as well. Maybe use CSV, like you said. My final thing for the Lightning Round is Anton put out another blog post. We mentioned him a few times on the show, and he was even on the show.
Anton Zhiyanov (29:32):
My name is Anton. I do some open source stuff, and I write interactive, maybe I can call them guides or books, and interactive articles on my blog. That's mostly what I do in my free time.
Shay Nehmad (29:47):
He's the interactive release notes guy; he has a black, like, black-themed site with all the interactive release notes, which is real nice. He just put out a blog post about the default transport design in the Go standard library, which he doesn't really love; there's, like, a global default transport thing, and Anton sort of digs into a
(30:09):
proposal and into that design and why he doesn't like it. I think if you're into language design and you want to see, like, a very specific nitpick, not, like, a big huge thing like a garbage collector, but just, like, something small that people use, this could be a pretty cool blog post, to just look at a decision that, I agree with Anton, is not that great, about a global
(30:30):
variable in the Go standard library. It is kind of nitpicky and weird, like, I never had a problem with it myself, but it is interesting to read, at least.
Jonathan Hall (30:37):
Awesome.
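[For reference, the global being discussed is net/http's package-level DefaultTransport variable; a minimal illustration of why a mutable global is contentious, since changing it affects every client that falls back to the default.]

    // http.Get and a zero-value http.Client both fall back to the package-level
    // http.DefaultTransport, so mutating it is a process-wide side effect.
    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	if t, ok := http.DefaultTransport.(*http.Transport); ok {
    		// Any code, anywhere in the program, can do this and change
    		// behavior for everyone using the default client.
    		t.ResponseHeaderTimeout = 5 * time.Second
    	}
    	fmt.Printf("default transport type: %T\n", http.DefaultTransport)
    }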
Shay Nehmad (30:38):
And that's all we have for this episode. Thanks for listening. And that's it, program exited. Say goodbye and then I'll...
Jonathan Hall (30:45):
I'm supposed to
say goodbye. Goodbye.
Josh Bleecher Snyder (30:47):
Whoo.
Shay Nehmad (30:58):
Program exited.
Goodbye.