Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:07):
Hello everyone, and welcome back to another episode of Adventures
in DevOps. You can see today I'm flying solo. As a slight
replacement, instead I have a fact: Clorox sues Cognizant
for a ridiculous amount of money because they outsourced their
customer support, and Cognizant's like, hey, hackers, you want access
to Clorox? No problem, here are their passwords. I don't know,
(00:28):
really interesting read. It will be in the podcast description
after the episode. But today I'm really looking forward to
our episode. I have the general manager at Pulumi with me here,
Meagan Cojocar.
Speaker 2 (00:38):
Welcome. Hi, thanks, Warren.
Speaker 1 (00:40):
It's nice to have you here. I saw you have a really
strong product background. Before moving into the general manager role
at Pulumi, you were a senior product manager. What does
a general manager do?
Speaker 3 (00:49):
It's funny because LinkedIn gives me ads for working at
McDonald's and car dealerships, so I feel like I have
good career progression in that sense. So at Pulumi, and I think
when I was at AWS it was kind of a
similar model, a general manager is kind of like the
owner of an entire product area. That means everything
from product to engineering to, you know, I have documentation
(01:12):
folks, I have data engineers, and it's kind of, the whole
product is under one organizational structure, and you're the point
person for it, everything from pricing to interacting with sales
to working with marketing. So it's, I
think, a model where you have really high ownership and
you want product teams to operate pretty self-sufficiently within their
(01:34):
organizational structure. And it's pretty common at AWS. Folks
always joke that AWS is just a collection of
startups, and that's part of them building their org structure,
just being able to let people run within their
own little area. Yeah.
Speaker 1 (01:46):
I mean, I've heard the title often, like managing director
for a line of business, for a particular whole business unit.
So I mean, I think it's much smarter than a
lot of these companies that have a bunch of senior
technical folks and they just have them all report directly
to the CEO or some leadership team. But really identifying that
(02:07):
you're fundamentally responsible for this whole product area you're making
all the critical decisions, really shows that there's an alignment
of how critical it is and who's accountable for the
success of the organization.
Speaker 3 (02:19):
Yeah, I think in Pulumi's case, it was really about,
like, we want it to feel like a small startup
inside a bigger startup, if that makes sense, and have
folks be able to all be aligned on the same
goals across many different code bases. And there's pros and cons
to any model, but so far, I think there's a
lot of benefits.
Speaker 1 (02:36):
Yeah, and I don't think you have to be worried
about your career progression, because if you leave Pulumi, I've
heard there's a general manager role open at, like, a
lot of large sports organizations. The GM is often, like,
you know, running the football team, for instance, for, you know,
a multi-billion-dollar organization. So, you know, there's definitely a step
up, I guess, or I guess a side grade, depending, you know,
moving from SaaS to sports. But yeah, you don't have
(02:59):
to become as.
Speaker 3 (03:01):
I'm Canadian and I play ice hockey, so I love
hearing that. That's great news.
Speaker 1 (03:07):
We're going to fight about the best sports teams. Then
I have to ask, like, what made you want to
shift from being a senior product manager at AWS,
which I think you were like in the database space,
to eventually become the GM at Pulumi? Was there, like,
already writing on the wall, like, four years ago or
so, that you were just like, I have to leave, or
was it something about your career?
Speaker 3 (03:25):
I loved my time at AWS in a lot of ways.
I feel like, especially for a product manager, going to
Amazon is like going to college, like you're going to,
like, PM school, because it's, like, very regimented in how
they build product there, and you learn so much, you
know, the infamous, like, every doc you write getting reviewed
in, like, a synchronous meeting where someone's
like redlining it in front of you. Truly like going
(03:47):
to school. But I started my career at a startup.
I kind of fell into founding a startup. I was
the first employee, and that was like an amazing roller coaster.
I think we had like forty employees when I left,
and I was there for four years, and I ended
up in a similar role to what I'm in now,
leading products and engineering there. And so I knew I
wanted to try the big company thing. And you hear
a lot about FAANG companies and what it's like to
(04:09):
work at them, and so I moved to Seattle, did the AWS
thing, and like I said, I learned a lot,
but ultimately I missed having a really high amount of ownership,
and at an organization like AWS, you have
so many talented leaders around you, and I learned a
lot from them. But it also means you have just,
like, small scope by nature, but we had so many
customers and so much data. And one of the things
(04:29):
that made me really excited about Pulumi is it being open source.
It has a ton of users, and so you're able
to get really quick signal on, like, what your users
are interested in and, like, hey, did we build the
right thing? And I knew I would miss that. At
AWS it's so nice, you know, you have unlimited amounts
of data on, like, what customers are doing with it
and, like, how users are using your product. And so,
moving to Pulumi, I knew I wanted to go to
(04:51):
a space that was smaller and I had a lot
more autonomy and ownership of the area. But I didn't
want to give up on having that signal. And so
it's been amazing, like, working at an open source company
where anyone can open a GitHub issue and that is
your roadmap, and I really have liked those challenges.
Speaker 1 (05:08):
Well, there's something definitely to be said for being careful about
responding to every issue as if it's the most critical
thing for all your customers. But I totally get that
with AWS, you're limited on the number of open source
technologies and they're not the core business, and unless you're
in a technical account manager role or on the solutions architect side, like,
you're not as close to the actual challenges that customers
are having at that moment, even though you're in a
(05:28):
product focused responsibility.
Speaker 3 (05:30):
Yeah, they definitely do a good job of, like, bringing
that into their DNA, both by having all the data
that the folks interacting with customers bring in, but also
product managers at AWS talk to customers a ton.
But Pulumi being a smaller company and, like, having this
huge community, it's amazing. Like, I meet
with multiple customers a week, and I feel like that's
what makes our jobs great, right, is, like, seeing the
(05:51):
person using your product and like how they use it.
It makes it so much more real. You feel the
impact of what you're working on.
Speaker 1 (05:57):
Yeah, having worked in some bigger companies, I always felt
slighted a bit when I had intermediaries between me and
the actual customers, Like, I trust you are conveying the
right information to me, but I really want to hear
it from their mouth exactly what they're saying, because there's
definitely things lost in the telephone game and just some
nuances or priorities, et cetera that they're not really sharing.
And unless you really have someone that's a really great
(06:19):
communicator and understands both the product side and also the
customer side in that responsibility, you're definitely going to lose
things there, and then you're gonna have
an issue in the long run when you end up
building not exactly quite the right thing and then have
to go back to the drawing board to actually deliver
the real value.
Speaker 3 (06:35):
Yeah, what's interesting is, like, open source helps with that too, right,
that game of telephone, because it's like you're going from
the community member straight to the engineering team. You know,
we have zero intermediaries between whoever's opening the issue
and the engineer working on it, so that's really nice.
They just interact directly. They're like, hey, I can't repro this,
can you please give me more info?
Speaker 1 (06:53):
Oh wow, I'm, like, on both sides of this, because
our product is totally proprietary, but we have
lots of open source SDKs and whatnot, and we always
get questions about open sourcing it, and I think we're
going to open source something in the near future.
But, like, I have this fear that there's just
a lot of issues that pop up that
are just, you know, basically support tickets.
Speaker 4 (07:13):
Can we have help with this?
Speaker 1 (07:14):
Can we do this thing? And they don't necessarily align with,
like, a long-term direction that we want to go, or would
even be beneficial for your users?
Speaker 1 (07:23):
How do you filter out like the number of support
requests you're getting to actually make sure that the issues
on your GitHub are useful both for them and for
Pulumi as well?
Speaker 3 (07:32):
That's interesting because we definitely get some of that, but
it's not a huge problem for us. And I think
part of it is we have a very active Slack
community and so a lot of our community support happens there.
And it's so cool, like, users helping users is
like the funnest interaction model, because, you know, they have
similar challenges at the same time, and so they're meeting
each other at the same phase and they're much
(07:54):
more empathetic, right, They're like, hey, I just went through this,
Like let me help you versus an engineer who works
on it who's like, oh, I don't know how you
ran into this. I mean, what's interesting about what you
said there is, like, part of it is how you
develop product, in that if you over-index on,
like, the really noisy customer or a customer that needs
a lot of support, and build something very useful to them,
but it's kind of custom and therefore not generic for
(08:16):
other customers. I mean that is like a huge part
of building product is thinking about, how do I make
sure that this is an investment where other customers have
that pain point, instead of just, like, building some super
custom thing for one customer. And, like, you know, Pulumi
has some massive, massive customers and it's hard at times
not to get pulled into that direction. But I feel
like it's often just understanding the root of things instead
(08:37):
of the solution itself. And so a lot of times,
either in a GitHub issue or talking to your customer,
they'll say, hey, we need this, but it's actually a solution,
and you have to figure out, all right, what is
the actual pain point here? And
then once you get to that, often that is the
same thing with other customers. It's just the solution might
be different.
Speaker 1 (08:53):
You just have to worry just as much about other
customers when they are throwing money at trying to get
you to solve their problems as if you were to have
Speaker 4 (09:02):
only a proprietary code base.
Speaker 1 (09:03):
So, like, that problem obviously doesn't go away, but
I feel, I do agree with you, there is something
about having an open source product where you're able to
build a community around it to have those conversations happen,
whereas with a proprietary product
Speaker 4 (09:15):
it does especially
Speaker 1 (09:16):
feel like customers that you have aren't interested in really
talking with each other. And I don't know if it's
just the result of the culture around it, or whether
it's just necessary. Like, when you
have something open source, people expect that they can communicate
and they want to chat, and the types of, in
your case, engineer users that come on board are thinking
(09:37):
about the community, I assume. Of course, having open source
as, you know, the core of your product,
it's intentional. Any thought about how that would have driven
the community? Like, is this like a nice win where you
saw that happening, or was this like a huge expectation
that went into how you were designing how the business
would work?
Speaker 3 (09:54):
Well, Pulumi's been around for eight years and it's been open
source the whole time, and I've only been at the
company like four years, so I feel like I missed
some of these, like, core decisions on, like, let's go open source.
And, like, I don't know, did they have in their
head that this would all happen, that we would have
like hundreds of thousands of users who are helping each other?
Like, probably not, right? I think that likely
wasn't part of the strategy, but it's very much part
(10:16):
of Pulumi's strategy to be open source. And it's interesting,
like, as an aside, we could talk about, like, the
industry and open source right now, because there was kind
of a huge boom during the time I was at
AWS where we had, like, you know, Elasticsearch
and, like, all of these products that were open source communities,
you know, move away from Apache 2 licensing,
and in, like, our space, you know, HashiCorp
(10:38):
Terraform moving away from being open source to a BUSL license.
It's been a huge shift. Like, I feel
like I can count on, like, one hand how many
companies are still fully open source. And it
used to be like completely normal and now it's
really changed, and it's been interesting seeing the market adapt
to that. And we have a lot of customer distrust
because of all this, where they're like, what if you,
(11:00):
what if you go closed source tomorrow? And what if,
you know, all future releases I need a license for
and I have to pay you for? And how do you...
You can't promise them the future, right? How do
you say, like, no, no, no, we'll be different? Like,
I know everyone else said that, but we'll be different.
And so we've been really intentional about just talking about
our why, of why open source means so much to
our business, and, like, our founders are, like, huge lifetime
(11:22):
believers in it. But it's a tough one, like,
there's no assurances.
Speaker 1 (11:27):
I feel like there's actually an easy answer there, which is,
like, we're not stupid. We can see what
happened with Redis and MongoDB and Elasticsearch and
HashiCorp by doing this, and, like, what happened
to their user base. Well, not only
are there competitors that spawned up in every single one
of those examples, the communities for those all boomed, right,
(11:49):
like Valkey is now seen as, like, not
just a replacement, but really a total successor of Redis in
every way, even though they've walked back on the license.
Speaker 4 (11:59):
I think they walked back on the license. I don't know
if they really went back to open source there.
Speaker 1 (12:03):
So maybe that's not the best example, but, like, the
CNCF has, you know, totally bought into OpenTofu. And
we see a lot of the guests that we have
on the show, you know, they talk about OpenTofu.
Now we see in the communities it's OpenTofu, much
less Terraform.
Speaker 4 (12:17):
So so.
Speaker 1 (12:19):
Yeah, I mean, you can bring these up as examples, like, yeah,
we see this, like, we know what will happen.
Speaker 4 (12:24):
If this happens, someone will just be like, yep, we're
forking it and that's the end of the story.
Speaker 1 (12:28):
So you realize that it's not that you're providing the benefit
to your users, but because it's open source, they're making
it possible for you to run a business. And I
think that's an answer that everyone should be
comfortable hearing.
Speaker 3 (12:41):
You know, it's interesting. So I really like hearing
that from your perspective, because you're, like, seeing it from
the outside in. And I feel like sometimes, working in
the industry, you see it through such a different lens, and so
that's really fascinating. To give you an interesting example of,
like, how it could be hard to see it in
that way, specifically around the OpenTofu stuff: recently,
(13:02):
like a month ago, we launched the ability to
run a Terraform or OpenTofu module within Pulumi, so
you don't have to convert everything right away, which is
really exciting. And so I was demoing it to customers
and I had a handful of, like, large companies not
know what OpenTofu was, and we were like, oh, are we,
like, way too close to it, where we think that
this is like the norm now? And there's
(13:24):
so many people who, like, you know, their job, they've
been doing Terraform for, like, ten years, and they have
never heard about OpenTofu or, like, known about all
of this. And it's just fascinating, because you think
there's a lot of people who are, like, always, you know,
reading the news and, like, staying up with tech news.
But that's not everyone, right? Like, sometimes it's your job,
and sometimes you just go to your job and use
the tools that they use, you know, and that's definitely
(13:45):
not a bad thing. That's just a different mode of operation.
And it was a really good learning for us, because
we had kind of planned to launch it being like, oh,
OpenTofu, you can use it within Pulumi, and then,
actually talking to folks inside large companies, they were like,
what's OpenTofu? And so we pivoted to say, like, Terraform
and OpenTofu in some of our messaging. But it
is interesting to hear you say, like, oh, that's, you know,
(14:06):
that's the default now, because it's not always so black
and white.
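For listeners who want a rough picture of the module support described above, here is a minimal sketch of consuming an existing Terraform/OpenTofu module from a Pulumi TypeScript program. It assumes the `pulumi package add terraform-module` workflow; the generated package name "vpcmod", the resource shape, and the output property are illustrative assumptions, not the exact names Pulumi generates, so check the generated SDK.

```typescript
// Hedged sketch: wrapping an existing Terraform/OpenTofu module in a Pulumi
// program. The CLI step below generates a local SDK; the import name "vpcmod"
// and the resource/output shapes are assumptions made for illustration.
//
//   pulumi package add terraform-module terraform-aws-modules/vpc/aws 5.18.1 vpcmod
//
import * as vpcmod from "@pulumi/vpcmod";

// The module's input variables become resource arguments, and its outputs
// become Pulumi outputs that other resources in the program can reference.
const network = new vpcmod.Module("app-network", {
    name: "app-network",
    cidr: "10.0.0.0/16",
});

export const vpcId = network.vpc_id;
```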
Speaker 4 (14:11):
Yeah, no, for sure.
Speaker 1 (14:12):
I mean, I like being, like, as much on one
end of the spectrum as possible, because I feel like
if I go down that way, there's a lot of
people on the other side of the spectrum, or somewhere
in the middle, and so it helps shift people in
that direction. I think OpenTofu is great for the
whole ecosystem compared to having just Terraform. I feel like,
and you brought this up earlier in the episode, basically,
(14:34):
how you manage the GitHub issues. Like, I remember
years and years ago I had filed problems there, like
real issues, sometimes with pull requests, and this is how
I completely changed my whole approach to doing software development
when I was still doing it for open source stuff.
Speaker 4 (14:48):
You open the issue first.
Speaker 1 (14:49):
You don't do any work, don't create a pull
request first, just open the ticket and see if a
human responds, because if it's a HashiCorp repository, after
thirty days, there'll be a bot that comes and says, hey,
no one's messaged on this, we're going to auto-close
this ticket for you. And I'm like, yeah, like, fix
your process. Honestly, this is clearly a bug. No one
cares about it, and so it auto gets closed. Like,
(15:11):
that's a huge problem, and so what do you do? You make
a message there to keep it open, and again and again.
After a while you're just like, I'm done. So, you know,
having a mature response to how to handle your issues
that show up in GitHub, you know, like, that's
really critical, and so, like, it's really great to hear that.
Speaker 4 (15:26):
You know.
Speaker 1 (15:26):
It's really interesting, though, about the bias of, like, how
close we are to things. We're in the quote-unquote
security domain.
Speaker 4 (15:33):
I mean, we do login and access control, we
Speaker 1 (15:36):
Generate JWTs for our users, and we have to remember
just don't bring up competitors ever, actually, because while we
know every single competitor that's in the market, and there's
like another one every single day, our users never
do, and they don't care, because they don't find these,
and so it doesn't make sense to even really like
talk about it most of the time.
Speaker 2 (15:55):
That's interesting.
Speaker 1 (15:57):
So you know, that's one perspective. I mean, if, you know, you
obviously have customers coming to you and being like, yeah,
you know, there's a huge challenge getting... And I think
this is one of the challenges that actually we went
through with Pulumi.
Speaker 4 (16:07):
It's like it is a challenge to manage.
Speaker 1 (16:09):
You know, we have like nine or ten language SDKs.
Like, getting a provider SDK in every single language is
just something that's just a huge amount of overhead, and
being able to pull in already existing Go, and, like, the
repository that's running your Terraform or OpenTofu, it's
just a huge win for the users and, like, the
(16:31):
end users. It's a challenge with the two-sided platform,
or multi-sided. So yeah, I mean, you're in a
weird space there, right? Obviously you need to admit that
there's multiple pieces here that are at play, and some
users care about that, but, you know, it depends on
the type of conversation. And so, yeah, I mean, it's
like the curse of being a great product manager. You
know everything that's going on in this space, you know what's
(16:51):
going on internally, you have your own ideas for innovation as
well as what your competitors are doing. But then you
have to remember, yeah, actually our users don't even really
know all those things.
Speaker 3 (16:59):
Yeah, totally. I feel like you summed it up.
It's like, you have to be able to see through
your users' eyes, and you're often way too biased to
actually see that. And there's a lot to be said there,
like, when using a product, you're, you know, let's
say I'm, like, building Honeycomb. Like, I understand fully how
to write that query, right, but it's very hard to
see how a user would see it for the first
time and know what that learning curve would be like.
(17:20):
And so it's good acknowledging your bias and, like,
how close to it you are.
Speaker 1 (17:24):
I mean, it's like, there's another one. Like, you brought
up Honeycomb, like, everyone knows what this is. It's like,
you know, they may know what Elasticsearch is,
or they may know what, you know, Kibana is
or whatever, and any of the N other
options out there. It's like, it's another one, and
you talk to people who just have never been in observability.
They have no idea what OTel is, or collectors, and
(17:46):
you bring up a product or whatever, it's like, I
have no idea what that is.
Speaker 2 (17:50):
Yeah, it's a good.
Speaker 3 (17:50):
Call, it like, especially just like in any form like this,
ensuring there's an intro. I feel like often we do
this a lot with acronyms too, like which I guess
is a lot more standardized. But it's like I'll say
all the time, like ICP and my engineers are like,
what is that. It's like it's our ideal customer profile,
Like this is the customer we want. But it's it's
(18:10):
a good point. It's easy to just, like, stick to
the things you already know and not
Speaker 2 (18:15):
Know what you don't know.
Speaker 1 (18:16):
Actually, on this podcast, I'm a huge stickler for acronyms,
because you never know, you know, what the experiences are that
people have had, and just saying what those things are
and calling them out is, like, so much easier for
people to see. So I'm going to use another acronym
right now, ISV, or independent software vendor. And, you
know, it's really interesting, is, like, we end up having
to go through a lot of tools out there to
(18:36):
add in integrations to work with our product, or one
of our products, and I find this is a miss
in a lot of platforms. So we had integrations into
Bubble and WordPress, and they're just so different compared to
OpenTofu, the experience that we see, so much so
that, like, we don't support Bubble and WordPress really anymore,
because it's just such a huge challenge. They don't care
that there's a platform: ISVs are on one side and
(18:59):
customers, users, on the other one, and we've decided to
make Bubble and WordPress, let's say, worse by not
offering a first-class integration there. I feel like this
is an important thing to really consider, because your partnerships
within your platform can have a huge impact for your customers.
You've really recognized this. And I was actually going to
bring up, like, how much do you think about this
problem at Pulumi?
Speaker 3 (19:19):
So when you draw that analogy there of, like, integrating
not being easy versus, like, OpenTofu being easy,
what you mean there is more, like, the fact that
Bubble and WordPress haven't built an integration at all,
or that they're not being responsive to, like, issues because
they don't have a method for that?
Speaker 1 (19:36):
I mean, you look at your core customer, your ICP,
and you're like, well, Bubble and WordPress, the customers are
specifically the ones that are using the site or building
the site, but we're not. We're like, we're not a customer,
we're an ISV. We're just providing plugins that customers use.
But our experience building those plugins is so bad that
we're choosing to not build those plugins, which means we're
(19:56):
actually providing less value to their customers, because they have less
to pick from. It's like Microsoft Teams versus Slack. Slack
is much easier to write a Slack bot for than
Microsoft Teams is to write a bot for, so more
people will write bots for Slack. Therefore, Slack offers a
platform which can bring more value to the users that join,
and why people like Slack more is just one of
the reasons compared to Microsoft Teams.
Speaker 3 (20:17):
Got it, I got it. Yeah, I see what you're saying.
It's interesting, because, like, I wonder, if I'm at WordPress
as a PM, like, what is the metric I'm
trying to drive? And that would be interesting, because it
probably all trickles down from there. So, like, Pulumi's equivalent
is, like, there's a lot of equivalents within Pulumi, but,
like, there's a lot of ways in which Pulumi
could not integrate and feel like you want people to
(20:38):
be within your closed garden. So a direct example: as
we've been building an IDP product, or sorry, yeah, internal
developer platform, and, no, an internal developer portal. You say IDP,
and I think you think identity. We actually struggle with
this internally even, because we have integrations with all the
(20:58):
identity providers as well. But basically a home where you
could have your documentation and all of your best practices
and your services that, like, your platform team can set
up, and then developers can self-serve. And there is
Spotify's open source Backstage product in this space, and so
we have customers who are like,
Speaker 2 (21:16):
Oh, you know, while you're building that, it doesn't.
Speaker 3 (21:19):
have, you know, the full functionality of a full IDP,
like all the docs integrations and whatnot, like, can we
have an integration with Backstage? And we could have easily
been like, nah, we have this vision to have this thing.
But Pulumi is very much like, our ultimate goal is,
like, resources under management. Like, our ultimate goal is that
you're getting value from having your infrastructure managed by Pulumi.
(21:39):
Our goal is not, like, that we get a lot
of clicks, or that we get, like, daily active, like,
number of active users and things like that. Like, that's
all great, but we care about growing our overall infrastructure
under management, and so that's a no-brainer. It's like, let's
go build a Backstage integration. And so we built a
plugin for Backstage that, like, scaffolds out for you
the ability to have your Pulumi stacks and deployments all
(22:00):
within your Backstage environment. And that's very much, like, core
to our beliefs, is, like, APIs on everything. Like, if
you don't want to use our UI and you want
to build something else, great. Like, if you want to
use our CLI directly and, you know, not interact with
our UI, we try and have feature parity across everything.
And similarly, like, webhooks, we have, you can build
integrations with Slack or anything custom and serve your needs directly.
(22:26):
It doesn't need to be, like, some Pulumi feature that
has, like, a bow on it. We're happy to just,
like, meet you where you are. And so that's interesting,
that's how we think about some of those trade-offs.
And the Backstage one is, like, a direct example of,
like, where we could have just been like, no, we're
not going to
Speaker 2 (22:38):
Go do this.
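As a concrete illustration of the "APIs on everything" and webhooks point above, here is a small sketch using the Pulumi Service provider to register a webhook that posts stack events to Slack. The organization name and payload URL are placeholders, and the argument names are as I recall them from that provider, so treat the exact shape as an assumption and check the provider docs.

```typescript
import * as pulumiservice from "@pulumi/pulumiservice";

// Hedged sketch: an organization-level Pulumi Cloud webhook that forwards
// stack events to a Slack incoming-webhook URL. "my-org" and the payload URL
// are placeholders; argument names are from memory and may differ slightly
// from the current provider schema.
const slackNotifications = new pulumiservice.Webhook("slack-notifications", {
    organizationName: "my-org",
    displayName: "slack-notifications",
    payloadUrl: "https://hooks.slack.com/services/T000/B000/XXXXXXXX",
    active: true,
    format: pulumiservice.WebhookFormat.Slack,
});
```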
Speaker 1 (22:40):
That must contribute, though, to, like, extra work, extra time
before being able to roll out something, I could imagine.
Now you have the maintenance of managing the Backstage plugin.
How do you, like, how do you organize around that
challenge to make sure that you are staying up to
date with whatever breaking changes Backstage has, to make sure
the plugin is still valid? And you're now supporting a
whole bunch of new users who are probably coming to
(23:02):
your community and being like, hey, this thing with Backstage
is broken, and you're like, well, it's not really us,
we can't really help you here, this is the Backstage thing,
and now you're teaching them about how to use Backstage.
Speaker 3 (23:11):
I mean, the Backstage stuff has been kind of simple.
But what you're talking about is true of, like, our
entire provider landscape. So, like, we build providers for, like,
two hundred plus, like, cloud or SaaS offerings, so,
like, AWS, GCP, all the way to, like, Snowflake and
LaunchDarkly, so, like, everything has a Pulumi provider, and
that's exactly what you're describing. Like, the customer will come
(23:32):
to us and be like, hey, I'm trying to use
this, like, Okta provider you have and something's broken, and
it's like, okay, it broke upstream, and, like, we don't
have access to their code, so, like, we can't help,
and so there's
Speaker 2 (23:43):
a lot of things to unpack.
Speaker 3 (23:44):
There, one is, like, how Pulumi spends its time,
and that's honestly an ongoing struggle, because, kind of what
you said earlier, like, every GitHub issue looks kind of
equivalent. Like, you never know what impact it's going to have,
and you don't want to pick and choose, like, oh,
I think this one's important versus that, and so you
want to help every customer. But that said, we're, like,
a tiny number of engineers supporting this whole thing. So
(24:06):
we try and do like prioritization based on volume as
much as possible, So like if we have a huge
volume of customers using one provider, we make sure that
that's like first class. We basically have like a tiering
system and so you know what support levels and SLAs
we're going to have for each of our providers. And yeah,
there's definitely times where like oh, you're like, you know,
(24:28):
one of a small like a few hundred using a
certain provider that's like very niche, and so you might
get like a longer wait time on getting a fix.
But like ideally you understand that, Like that's what's most
important to us is like you know what the expectations are,
you have a response. You aren't just like flying blind,
which would be like the worst scenario.
Speaker 1 (24:45):
So can I pad the metrics for our provider by
going and just downloading it a lot of times via
different IP addresses or something like that, so it
looks like there's more usage there?
Speaker 2 (24:55):
Yeah, go for it.
Speaker 3 (24:58):
One thing that we've been talking about, we would
love, like, to add these metrics to also be
user-facing, because, like, we can say, like, what percentage of,
at least, like, Pulumi Cloud state has that
provider, like, how many resources, right? And so we could
say, like, oh, you're in, like, the tenth percentile of,
like, providers for Pulumi, or something like that. Because, yeah,
that transparency could be cool, having just general insights into
(25:20):
like how your provider is being used.
Speaker 1 (25:22):
Yeah, no, I mean, I always find the metrics interesting there,
like something that certain platforms always have an opportunity
to do, and then it seems like they don't actually
go forward with it. Like, if you know that someone is
using, I don't know, Grafana in someplace, you
can actually go in and be like, oh, yeah,
you know, like, did you know everyone else using
Pulumi with Grafana has this configuration, but you don't? Like, that
(25:44):
may be something to actually investigate.
Speaker 2 (25:46):
Pulumi Cloud, like,
Speaker 3 (25:47):
We think about that a lot, especially as it's getting
more common that people are using AI tools
to write Pulumi. It's like, we can pull in that
context for your AI developer tool to be like, hey,
this is an architecture that most customers have with this
resource, or
Speaker 2 (26:01):
Something like that.
Speaker 1 (26:02):
I'm smiling because you said the magic word that I
think I need a klaxon here for. So now that
you've mentioned it, I'm going to have to ask you
some LLM-based questions here. So now that you know that,
do you know how much of the code for Pulumi
that, you know, is written by your customers is
written by an LLM versus by an engineer?
Speaker 4 (26:22):
Is this even something you can track?
Speaker 3 (26:23):
It's not something we can track. In general, open source Pulumi,
we have zero telemetry. But in terms of, like, our
existing customer base, we kind of get things like,
potentially, IDE and, like, your VCS provider, but not,
like, if you use an LLM or not. It's interesting,
because you could maybe think about, like, pull requests that,
say, are opened with Claude Code or, like, Codex or
something like that, but for the most part we're, like,
(26:45):
blind to it. However, we have an AI product within
Pulumi Cloud that you can use for code generation, so
we have metrics on that, but just the general ecosystem,
unsure. Sample size of it, probably still less than you
would think at this point, growing extremely fast, especially talking
to customers. But infrastructure is a space where, like, people
are inherently more cautious, and it's not moving at
(27:08):
the pace of app development. There needs to be, like,
guardrails and ways to make sure that that isn't gonna,
you know, you're not going to accidentally vibe code your
production cluster.
Speaker 1 (27:16):
I mean, you say that, but Replit just had a
huge controversy over deleting all the infrastructure and database schema related
stuff for one of their customers, because the LLM they
had decided to, during what they had called a code freeze,
still make changes and push database changes, and because queries
weren't responding, then wiped the whole thing. So, you know,
(27:38):
it's getting there. I think maybe the question I want
to ask is, were you ahead of the curve here?
I mean, I know Pulumi has had LLM-based answer generation
inside the docs for a while now, way before, way
ahead of the curve. But as far as the product
goes, or the features that you're building, have you pivoted
in any way to potentially deal with the impact of
(28:00):
customers now utilizing more LLM tools?
Speaker 3 (28:03):
Yeah, definitely. I mean, it's definitely a huge focus.
Speaker 2 (28:06):
It's interesting. So yeah, Pulumi,
Speaker 3 (28:07):
we had, I think we had an AI product before
ChatGPT had a UI, like when it was just an API,
but I would have to check my exact timing on that.
But around that time, we were, like, very quick to
add an LLM to our product, and the main use
case at the time, which is funny looking back,
because at the time everyone thought this would be a
good idea, but, like, in hindsight it was, like,
(28:28):
not a good idea. We were like, oh yeah, you
can, like, ask questions and we'll have, like, an LLM
respond to it. And at the time, they just, like,
hallucinated a bunch. They would, like, give incorrect links, and,
like, in some ways it was like a funny way
to get product feedback, because it would, like, hallucinate API
endpoints, and we'd be like, actually, it probably should be
named that, like, if the LLM probably thinks that, because
(28:50):
that is how most products would name this. And so
it was an interesting feedback loop. But originally it definitely, like,
took some time to figure out. Like, we didn't limit
what it answers, so, like, people were just, like, using our,
like, in-product chatbot as their GPT, basically, like, just,
like, completely unrelated-to-Pulumi questions, like, help with things.
(29:10):
But we refined it a lot over time. And I
think, to your question around, like, knowing that, you know,
your user base is changing how they're doing things pretty drastically,
which is using LLMs to write code, like, how
does, like, Pulumi inform, like, how does that change
what we think about? And I think there's a couple
of things. One is, one thing we're hearing is, there's
(29:32):
a ton of app code that's now coming, like,
the speed of application code is increasing due to these tools,
which means the infrastructure team is becoming a bit of
a bottleneck. And so we have teams who are like,
we have so much work right now, like, this, like,
we're feeling the increasing speed. And so, like, for us,
it's, like, figuring out how do we automate things that,
like, make it easier to stay on top, so, like,
how do we help platform teams with this new dynamic change.
(29:55):
But then there's also the people actually writing Pulumi, right,
writing Pulumi code. And the first thing we ever did
in this space was, one of the early problems with
using LLMs for Pulumi code was that it hallucinated resource
names a lot, because there's a lot of different versions
of a provider, and it's actually very important that it
gets that right. Otherwise you're just going to be constantly
(30:15):
having to go fix a bunch of things to get
it to run. So the first thing we did was
provide it context for all of the latest versions of
every provider, so that it was just much more accurate, and
Speaker 2 (30:24):
That was pretty good.
Speaker 3 (30:25):
We got a good amount of usage, like, more than
we would have expected, of customers writing code using that.
And it's very simple, right, but it's really just like
what MCPs are today, like, giving the context that the
model needs at the time it needs it. And now
we've grown a lot, so, like, you can get any
information about your Pulumi environment from our LLM, and that
is, like, in our MCP, but also in our
big product which there's some interesting product dynamics there now
with like building mcps, and so we're going to make
everything that's available in any of our features available in
our MCP.
Speaker 2 (30:57):
Which, expanding the acronym, is Model Context Protocol.
Speaker 3 (31:01):
I know you guys had a session on this already,
so your listeners will kind of be familiar.
Speaker 4 (31:06):
Definitely. I'll just plug that.
Speaker 1 (31:07):
You know, if you don't know what MCP is, go
watch the previous episode where we talked a lot about this.
Speaker 4 (31:13):
Pretty good.
Speaker 1 (31:14):
So yeah, I mean one thing that comes up here actually,
and maybe I'll say it, I'll preface it with, I
think this is going to be controversial.
Speaker 4 (31:22):
You're trading.
Speaker 1 (31:22):
I mean, I think we know this is the case
with LLMs, you're trading quality
Speaker 4 (31:26):
for speed. You need to be
Speaker 1 (31:28):
most concerned about the quality, even more so in your
infrastructure, given that small changes, especially during refactoring, can have
huge production impact. Whereas a random bug in one of
your websites or one of your endpoints isn't so bad.
Speaker 4 (31:41):
Having a small change in.
Speaker 1 (31:43):
your, say, database provider, or if you're using RDS and
you change the schema or roll out a new version,
you could have downtime or, worse, you know, your whole database
just crashes. I'm going to argue making sure the users
are still going slow is providing more value than allowing
the ability to go fast.
Speaker 4 (32:02):
I know your users are probably going to disagree with
that statement. Any thoughts about that though?
Speaker 3 (32:08):
That's really interesting. So, our CEO, Joe Duffy, he
has this really good analogy for what we're seeing right now,
which is, you wouldn't vibe code without Git, right? Like,
you're not going to directly change all your code and,
like, push it to your production server, and you kind
of want that Git layer. Maybe you would, sorry, but
let's say it's helpful at least to have, like, here's
(32:31):
the Git changes, and I can review that and see
what's going to change, right? And a lot of
application coding is going in the direction of, like, reviewing code.
Now you can, like, tag the GitHub Copilot agent on
a PR and they'll write it and, like, you can
review it. And for small stuff like that, that largely works,
like, we do that in Pulumi for a handful
of things, but you need that for infrastructure too, right?
(32:52):
Like, you want a way to understand, like, what is
desired state and what's going to change, and, like, Pulumi is
great for that in that sense, and, you know, OpenTofu
has similar things. But the one difference with Pulumi
is that you're using a programming language. So, like, these
models have a ton of sample data and context on
using a programming language, but Pulumi is a desired state
versus actual engine, and so you can see exactly what's
(33:15):
going to change and run a preview on it. And
so, to your point about, like, helping users by, like,
moving slowly, like, the best thing Pulumi can do is,
like, give you previews of what's actually going to happen,
and that layer becomes so much more important. And so,
as we're building stuff with AI, you know, we're starting
to get into the space of automating things within Pulumi
with AI, the most important thing is, like, what are
(33:35):
your checks and balances?
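To make the "preview before apply" layer concrete, here is a minimal sketch of a Pulumi TypeScript program; running `pulumi preview` against it prints the planned creates, updates, and deletes without changing anything, and `pulumi up` applies them after confirmation. The S3 bucket is just an arbitrary example resource.

```typescript
import * as aws from "@pulumi/aws";

// Minimal example: the program declares desired state, and the engine diffs it
// against the last deployed state. `pulumi preview` shows the planned changes
// without touching anything; `pulumi up` performs them.
const artifacts = new aws.s3.Bucket("app-artifacts", {
    versioning: { enabled: true }, // changing this later shows up as an update in the preview
});

export const bucketName = artifacts.id;
```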
Speaker 1 (33:36):
Yeah, I mean, I know you said you wouldn't vibe
code without Git, and I'd agree with that. I mean,
I don't vibe code to begin with, but I definitely
wouldn't vibe code without Git. But there are for sure
lots of people who would vibe code without Git, and
the idea of using some sort of version control there
is a whole complexity that they probably have never even
thought of, especially because we see LLMs as raising the floor,
(34:00):
so those with less software experience being able to start
doing things they haven't done before, which means they're not
going to be using all the tools and best practices
that the industry has created to deal with quality issues
or production impact.
Speaker 4 (34:15):
As you know that has happened in the past, so
you know there's something interesting there.
Speaker 1 (34:20):
I think the other thing we see is that because
of the quality-speed trade-off, there's a lot more
code being generated, which really reduces the value of what's
being checked in.
Speaker 4 (34:30):
And there's actually a corollary to this.
Speaker 1 (34:32):
Whereas I don't want to read an LLM-generated post,
that means that the value in that post is probably
whatever the prompt was, which is way more valuable than
the output.
Speaker 4 (34:43):
So if you are vibe coding.
Speaker 1 (34:45):
The thing that makes more sense to be committing is
the intent rather than the output there. So I can
see a world where, you know, Git and the whole
development workflow does change in a way. There is still
this intermediary step, and I really like to call out
that it's the review, and maybe we're automating
some parts of the review for you, because I do
see a lot of value in, hey, you know, does
(35:06):
this match with certain policies or other expectations that we
have as an organization, or even just as a team
or a service, or best practices, or what everyone else
is doing, all of what comes in through there.
Speaker 3 (35:16):
Yeah, I think that's super important, like, Pulumi knowing all
of your best practices and ensuring that that's what it's
putting out. And I was going to say, to challenge
a little bit, you're like, we're trading speed
versus quality. The one kind of difference there is, like,
I think that's largely true if, like, you've done this before.
But we have a lot of users who are like, oh,
I'm, like, new to using Pulumi. Like, my platform team
uses it, but I've never written it. And this is
(35:37):
like a lot of helping with the zero to one,
where, like, it can actually, if you've never done something before,
it helps you learn it a lot better, in the
sense of, like, let's say you come to Pulumi, you're like, hey,
I need a program for, like, a Cloudflare Worker
or something, and it generates it for you, and it uses,
hey, this is what your organization's best practices are
for this resource, and it pulls in all of your,
like, templates of how to do things. Then, like, you're
(36:00):
like, up and running a lot faster. And so, yeah,
maybe the quality is not as good as, like, someone
who does this as a full-time job, but someone
new to it gets a lot of benefits from having
all this context of, this is how we do it
in our organization, this is our best practice, these are
security best practices of doing things. By the way, we've
enabled, like, policies and ensured that, like, you have short-lived
access tokens on this deployment, and, like, all of
(36:21):
these things come with it. And so in a bunch
of cases we have, like, new users that are like,
this is great.
Speaker 2 (36:25):
This has helped me so much.
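One way to express the kind of organizational guardrails described above is Pulumi's policy-as-code feature, CrossGuard. The sketch below is a minimal policy pack; the specific rule about public S3 buckets is just an illustrative stand-in for whatever an organization's actual best practices are.

```typescript
import { PolicyPack, validateResourceOfType } from "@pulumi/policy";
import * as aws from "@pulumi/aws";

// Minimal CrossGuard policy pack: runs during `pulumi preview`/`pulumi up`
// when the pack is enabled, and fails the deployment on mandatory violations.
new PolicyPack("org-best-practices", {
    policies: [
        {
            name: "s3-no-public-read",
            description: "S3 buckets must not be publicly readable.",
            enforcementLevel: "mandatory",
            validateResource: validateResourceOfType(aws.s3.Bucket, (bucket, args, reportViolation) => {
                if (bucket.acl === "public-read" || bucket.acl === "public-read-write") {
                    reportViolation("S3 buckets must not use a public-read ACL.");
                }
            }),
        },
    ],
});
```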
Speaker 1 (36:26):
So I will agree definitely on the, if you don't
have experience in something, the quality of the LLM-generated
output, especially in spaces where there are examples, or it
can be moderated or validated or reviewed, is going to
be much higher than what they would have put out
previously without that.
Speaker 4 (36:44):
That for sure is true.
Speaker 1 (36:45):
However, I will debate whoever. Like, if you're inexperienced in
a particular area, so a new user comes in and
hasn't used Pulumi before, I don't think they would
actually be learning anything. They would not be learning about
Cloudflare or Pulumi if they're using an LLM to
generate that code. There is a study that
was sponsored by Microsoft about the loss of critical thinking
as a result of utilizing LLMs.
Speaker 4 (37:07):
So it does.
Speaker 1 (37:08):
It does, for sure, help those users get to valuable
output of a higher quality than they would have had
without it. But it doesn't help them become experienced engineers,
Pulumi experts, or even be able to use it to
build new things without also relying on the LLM in
the future. So I do want to ask about that,
maybe something like, I think we all have to contend
(37:29):
with our engineers utilizing LLMs within our own company. Have
you seen this in any way? Like, for your inexperienced engineers,
you know, ones that you hire from university or from
other companies that don't have infrastructure as code experience, are you
helping them in some way to combat the loss of experience
that they would have gained, now that they're using LLMs?
Speaker 3 (37:50):
Yeah, that's very interesting. In terms of internally, we actually,
to be super transparent, don't have a ton of more
junior engineers. And this isn't, like, a new LLM thing.
This is just, like, as long as Pulumi has existed,
we basically hired people who have a lot of experience,
like, building languages or SDKs, and there, you
Speaker 2 (38:08):
Know, obviously aren't a ton of new roads that have that.
Speaker 3 (38:10):
As we grow as a company, we'll have like, you know,
a need for a lot more.
Speaker 2 (38:16):
Like people who are.
Speaker 3 (38:17):
Just coming out of school, and, like... But today
we don't have, like, a ton of great mechanisms to
onboard people and, like, train them, and so it might
be, like, not the best experience. But we have some,
and it's just, like, I'm giving the disclaimer of, like,
this is not the thing that we do best. You know,
I'm gonna call that a day one. But for those we
do have, it's interesting. I mean, in Pulumi, I think
(38:37):
a lot of organizations are probably having this similar thing,
where you're
Speaker 2 (38:40):
thinking about, how far do we go with this AI
Speaker 3 (38:42):
thing? Like, there's companies, like large organizations, that send emails
to every manager with the number of AI queries that
each developer is doing. So like that's one end of
the spectrum where you're like forcing it down, and then
there's the other end where you're just leaving it wide open.
Speaker 4 (38:55):
Yeah, I mean, can we collectively agree that, like,
all of that's wrong?
Speaker 3 (39:00):
Yeah, it's an approach, right? I don't know, it
depends. It all depends on what your desired outcome is
in this case, you know.
Speaker 1 (39:08):
Well, I mean, like, I'm gonna repeat the age-old
quote, which I'm sure some people still haven't heard before:
any metric that becomes the target ceases to be a
good metric, right? And I think this is
an indication
Speaker 4 (39:19):
of knowing, like, using an LLM,
Speaker 1 (39:22):
and I'm gonna keep saying LLM for as long as
I'm a host of this podcast, because it's not AI.
For me, to solve a problem where it can help you, right,
you lack experience in a particular area and you need something,
you need a second review on it, or, you know,
to generate that Pulumi code the first time, it gets
you there. It for sure does, and that's a good usage.
(39:42):
It's a bad usage if you take that output and
you send it to someone else and say, this is
the right answer, like, you know, I just used an
LLM to generate it, and you can't distinguish between those
two things in a metrics report. So I think, you know,
that's a huge problem. Or, I used this LLM to
make critical business decisions. Like, Meagan, I can ask you,
how many times have you used an LLM in the
last week to make critical business decisions for your organization?
Speaker 2 (40:05):
I mean it's not zero.
Speaker 4 (40:08):
I put the question in and I just do whatever
it says.
Speaker 2 (40:11):
Oh, definitely not that.
Speaker 3 (40:12):
I mean, I strongly believe in, like, writing down large
product decisions and making sure that you have the options
you considered and why you didn't choose each of them,
like, why you recommended what you did. And so if
the LLM's like, oh, yeah, you should use this pricing metric,
but ultimately, you know, it's your logic that has to
stand up. And I'm a huge fan of still doing
(40:32):
doc reviews, so we do that for, like, all of
our major product decisions, and so it's like a room
full of people, like, criticizing my logic on
a product decision, which I love. That's the best way
to figure out if you're doing the right thing. So
probably zero, in the sense of, if the barometer is,
put it in and then do exactly what it says.
But I think it's helpful.
Speaker 1 (40:52):
Yeah, I mean, that's the thing, though, right? Like, you're
utilizing it in an intelligent way to critique what you've
got and be willing to throw away the
Speaker 4 (40:59):
Output rather than seeing it as the expert.
Speaker 1 (41:01):
And I think this is the metric that we've come
up with internally, actually, at my company, which is, realistically,
whoever's using the LLM, are they using
it in a critical-first manner, where they're challenging the
output on not just, you know, what
it's saying, but whether or not it is accurate, rather
than taking it for granted and believing that the LLMs
always give you better-than-accurate answers. So, you know,
(41:23):
as an expert in the area, I think,
and also being critical of the, you know, results, I mean,
you're in a special mode where you're
actually looking for holes in what you're saying, and so
taking each one of those as valid arguments is a
great way of utilizing it.
Speaker 4 (41:38):
So I applaud you for that.
Speaker 1 (41:40):
You're definitely not one of those leaders who is just
tracking people on their LLM usage.
Speaker 3 (41:46):
I'm curious what your thoughts are on, like, you mentioned
you're always going to say LLM as long as this
podcast is happening. What do you think about, like, the
models progressing, in the sense of, like, everything we're talking
about has a lot of big assumptions that the
quality of the models is going to stay around the
same. But what if they do get so good
that they are the same quality as, like, you at
(42:07):
certain tasks that you would do already? Okay, so what happens?
Speaker 1 (42:11):
I guess I haven't shared this out loud too many
times before, so this will be a special thing.
Speaker 4 (42:17):
For our viewers here.
Speaker 1 (42:18):
So I think so far we've used LLMs to solve
easy problems, and the idea that they'll just keep on
getting better is a little bit misguided, because at some
point we're going to have to solve a hard problem,
and no hard problems have ever been solved as far
as the creation of LLMs goes. So it's actually a
technical difficulty, and maybe we're at a fundamental limit to
(42:40):
actually getting there. A common argument against this is that
humans are biological computers, and if we're just a machine,
then of course we can build a machine or a
computer to compete with that. But that statement is an
analogy, which doesn't actually mean it's true. What if we're
not biological computers, which means that we can't just make
(43:03):
a computer better to solve problems that humans can solve.
So it begs the question, are humans biological computers? And
you'd have to prove the answer is yes first, before
you can prove that a Turing-complete language or a Turing
machine, or something that can be distilled down to
basically just a Turing machine, can solve more difficult problems.
Speaker 4 (43:20):
So no one's proven that yet.
Speaker 1 (43:22):
So we're still at the point where, for sure, we
don't have what I'll call AI, which is a replication
of the intelligence that, let's say, humans have, I mean,
not even getting into the story of, like, other species
or sentience or anything like that. So that's the first
step, really, there for me. And all the things that
we've seen innovated in the last five years come out
of the transformer architecture paper that was written at Google,
(43:46):
and we haven't really gotten any better than that with all
the improvements we've made. A good example is, like, well,
it couldn't do math before, and now it can start
trying to do math. Well, that's easy. You just reject
the result. You say, hey, LLM, where's the math here?
You get it to extract the math. You send it to
some mathematical solver, get back the result, and plug it
in as part of your answer.
Speaker 4 (44:06):
Stuff like RAG.
Speaker 1 (44:07):
Yeah, it's sort of interesting, retrieval-augmented generation, where you
do part of the LLM response, you send it to
your database, run the query, get the results back, include them
in the last level of your transformer architecture, and generate
the real result. Yes, that's still solving a simple problem,
making it more accurate, but it's not getting over the
real hard problem.
Speaker 4 (44:26):
And that's why I get stuck with this.
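A rough sketch of the "reject and delegate" pattern described here: have the model flag the sub-task it is weak at (arithmetic, or a database lookup), hand that to an exact tool, and feed the result back into a second pass. The LlmClient interface below is a hypothetical stand-in, not any specific vendor's API.

```typescript
// Hypothetical sketch of the pattern described above: detect a sub-task the
// model is weak at (here, arithmetic), compute it exactly, then feed the exact
// result back into a second generation pass. LlmClient is a stand-in interface.
interface LlmClient {
    complete(prompt: string): Promise<{ text: string; mathExpression?: string }>;
}

async function answerWithMathTool(llm: LlmClient, question: string): Promise<string> {
    const draft = await llm.complete(question);
    if (!draft.mathExpression) {
        return draft.text; // no arithmetic flagged, use the draft as-is
    }
    // Evaluate the expression exactly instead of trusting the model's arithmetic.
    // (A throwaway evaluator; never do this with untrusted input.)
    const value = Function(`"use strict"; return (${draft.mathExpression});`)();
    const followUp = await llm.complete(
        `${question}\nUse this exact computed value where relevant: ${value}`);
    return followUp.text;
}
```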
Speaker 2 (44:27):
That makes sense.
Speaker 3 (44:28):
I mean, I will tell you, I'm a huge LLM
believer in a lot of ways, so this is a
fun discussion. I feel like, though, the value I see
in using LLMs isn't solving hard problems. It's, like, speeding
up on all the things that don't need to be
high quality, basically. And so, like, the value of speed
is, like, a lot of times, if we think about
(44:48):
things in engineering, if there is a speed-to-quality
trade-off, it's like, okay, you probably don't want that.
Like, a lot of times the opportunity cost of,
like, shipping a bug or something like that is so high.
But in a lot of, like, everyday operations, speed is,
like, very valuable, right? And, like, if, you know, coming
from a startup perspective, being able to automate a lot
of things and build faster and, like, win more of
(45:10):
the market quicker is very high value. And so I
totally hear you. And there's, like, you know, the big
discussion about, like, are humans ultimately a machine, but,
like, we'll leave that on the side. But I do
see a ton of value in it, even if it
never can solve hard problems.
Speaker 4 (45:27):
I want to be clear here that our
Speaker 1 (45:30):
categorization of whether a problem is hard or not isn't
actually my challenge to the LLMs. It's in the
manufacturing of LLMs. What problems have we had to
overcome in order to manufacture them? The interesting next
level is, like, basically what we have today is a statistical
next-word predictor, and that has been a thing since
(45:52):
like nineteen fifty eight or something.
Speaker 4 (45:54):
Uh, and I don't know if that's the year.
Speaker 1 (45:57):
I'm terrible with years, but I swear there was a
Veritasium video that's actually talking about this. For any
of the viewers, Veritasium is, like, a highly rated,
fantastic YouTube channel, you should definitely go out
and subscribe to it. It's not going to be my pick
for this episode, because it was a pick for a
previous episode. It actually talks about this. And yes, we got
better at next-word predicting, and that's
Speaker 4 (46:18):
All we're doing.
Speaker 1 (46:18):
We just keep improving our ability to predict next words
better, by not only using the previous token or the
previous word or the previous paragraph, but also pulling in
all the context everywhere. We're getting better at that and
building better technology to solve that problem. But all we're
doing is improving the statistical analysis, and we have to
change fundamentally the technology to get much further away from
(46:39):
that, or else we'll never eliminate hallucinations. And I think
that's one of the biggest challenges that we have. So
when I say solve hard problems, I mean until someone
has a new technology that fundamentally eliminates hallucinations, we'll never
have LLMs that I'm comfortable calling AI.
Speaker 2 (46:54):
I'm curious. Have you tried IDE LLM usage, like Cursor?
Speaker 4 (46:59):
I have. I have tried these things.
Speaker 1 (47:01):
I don't get very far because I find a lot
of the work goes into articulating with words what the
problem is. And once I've done that, I've done ninety
percent of the work, and doing the last part of
it is now a fight with an LLM to even
produce the appropriate results. So, for example, I never use
it to generate code whatsoever, for two reasons.
Speaker 4 (47:25):
Actually.
Speaker 1 (47:25):
The first one is that it's usually in a domain where
there aren't good correct examples, and it's like security related,
and usually the outcomes that I get.
Speaker 4 (47:34):
Have to have high quality.
Speaker 1 (47:36):
An example where I did use it is I wanted
to use a government website that requires you to click
a link and schedule a meeting, and there are no
meetings available in any close location that I can possibly
get to. So I wanted a bunch of scripts that
go and use the curl command to download the open
schedule appointments and then filter them and do something else.
And I'm like, I don't care about the quality of this.
(47:58):
I don't care if it crashes, whatever, I can iterate
on it, just go and throw it at that, and
so like we'll definitely use that, and so it
helps me get to that answer faster, and I don't
care about the accuracy or quality or whatever.
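For a sense of what that kind of throwaway script looks like, here is a rough sketch: shell out to curl, parse the response, and filter by location. The URL and JSON field names are made up for illustration, and the error handling is deliberately minimal, which is exactly the point of the example.

```python
import json
import subprocess

URL = "https://example.gov/api/appointments"  # hypothetical endpoint


def fetch_open_slots() -> list[dict]:
    # Shell out to curl, the same way the generated script would.
    raw = subprocess.run(["curl", "-s", URL], capture_output=True, text=True).stdout
    return json.loads(raw)


def nearby_slots(slots: list[dict], cities: set[str]) -> list[dict]:
    # Keep only appointments in locations we can actually get to.
    return [slot for slot in slots if slot.get("city") in cities]


if __name__ == "__main__":
    for slot in nearby_slots(fetch_open_slots(), {"Seattle", "Tacoma"}):
        print(slot.get("date"), slot.get("city"))
```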
Speaker 4 (48:10):
That's a good example.
Speaker 1 (48:11):
But in anything that I absolutely do care about, it
only gets in my way for sure.
Speaker 3 (48:16):
Yeah, that makes a lot of sense. The reason I
ask is because like I think that the way that
Cursor has handled the user experience of hallucination is really
good, in the sense that Cursor, and similar things like
VS Code, return code references for everything they say,
and so like if you click on the thing and
it does not exist, then like you know right away
this was hallucinated. So there's like
(48:37):
a very good guardrail of, like, everything across
my code base needs to have a reference, and those
models work fairly well. But that was mainly just like
a thought experiment. It is interesting though, that space where
you have very little documentation, that's tough, and like we
have edge cases like that at Pulumi as well. But
there are use cases, and I wonder if you guys run
into this given what you build, where if, let's say,
(48:57):
for example, we implement something in Java, we have a
very good context reference for the model to go do
it in multiple languages. And that's a model that works
pretty well because the LLMs aren't good at novel things, right,
but if you give them an example, they're pretty good
at like using their knowledge base to figure out how
to do it in another language.
Speaker 1 (49:14):
So hypothetically translation from one language to another one, especially
like natural human languages, is the exact way in which
the models were built. And obviously large language models are
still slightly different, depending on whether it's human-readable,
understandable language or some other custom lexicon that is mapped
to your domain, or even like software development. However, this
(49:37):
was actually one of our primary examples where we were struggling.
We need to write JWT, or JSON Web Token, de
serialization and token validation for security purposes, and we need
an example for every single language, and some languages are
very easy and come up with a correct answer because
there are libraries dedicated to solving this problem.
In other languages, the primitives don't really exist that well,
(50:01):
and you have to stack them all up together in
a complex way that no one's ever really done, or
the people who have done it, it doesn't really work anymore
with the particular versions that are available, et cetera, et cetera,
and the models are atrocious at that. So even knowing
how this should work in ninety nine percent of the
implementations across all the languages still does not help you
(50:22):
get the last one out. And this is actually I
used to think something like, oh, all languages are pretty
much the same for the most part. There's some built
in stuff that causes you to write code one way
or another one. But now I can come out here
and actually say some languages cause you to write more
insecure code than other languages. For instance, I can tell
you very specifically Python and Ruby are more insecure languages
(50:44):
than even PHP and JavaScript, because for the code that comes
out of those, there are fewer examples of
secure code being generated, and so models will more
likely write insecure code for those languages. So if security
is a concern for you, stop using those languages.
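To make the JWT example concrete, here is a minimal sketch of the kind of token validation being described, assuming the PyJWT library; the issuer, audience, and key handling are hypothetical placeholders. The point is that the secure version pins the algorithm and requires the critical claims explicitly, which is exactly the detail that's easy to get wrong when there are few good examples to learn from.

```python
import jwt  # PyJWT; install with: pip install pyjwt[crypto]


def validate_token(token: str, public_key: str) -> dict:
    # Pin the algorithm explicitly; never trust whatever the token header claims.
    return jwt.decode(
        token,
        key=public_key,
        algorithms=["RS256"],                 # reject "none" and unexpected algorithms
        audience="https://api.example.com",   # hypothetical audience
        issuer="https://issuer.example.com",  # hypothetical issuer
        options={"require": ["exp", "iat", "aud", "iss"]},  # refuse tokens missing claims
    )
```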
Speaker 3 (50:59):
And that's just like, sorry, why do you have that
observation for Python or Ruby specifically? Because of, like,
there's more hobbyists like building with them, and so you.
Speaker 1 (51:06):
Get well, yes, exactly, there's almost no examples of
getting this right, or like trying to get it right,
of what we actually need. And I think there was
an example where even Cloudflare did an experiment
where they were generating an OAuth2-compatible client,
and an expert in identity providers and OAuth2, you know,
(51:29):
went and tried to get it done, and it was just
riddled with lots of problems and you just won't even see these.
And in some languages there are working examples of this,
and in other languages there are not very good working examples
of this.
Speaker 4 (51:40):
So you know, if you just have no.
Speaker 1 (51:42):
Idea what you're doing here, or even if you do,
trying to get it to pop out just will always
be a problem, unfortunately. And I think I'm going to
keep repeating this because I like this idea that the
next successful programming language that humans utilize will be one
that is optimized for LLMs, for the generation
(52:02):
of code and also consumption as far as context goes,
rather than what we're doing today, which is like automating
the hands on the keyboard where you see we try
to merge all the code together and optimize the context
that we're passing to Cursor or Windsurf or whatever, so
that it can actually fit all the tokens
in its context window. I think we're going to start
seeing new languages that are terrible to program
with but are great for LLMs to generate, because at
(52:25):
the end of the day, we want the working program
more than we care about the code that's actually being used.
Speaker 2 (52:29):
That's so interesting.
Speaker 3 (52:30):
So do you feel like languages like Java are probably
great by that same framing, where it's mainly enterprise
people who are using it and have examples online?
Speaker 1 (52:38):
Yeah, I mean, I think the examples of the
thing that you're trying to do are paramount for using
the LLM. So if you're trying to do something that
no one has written before, or isn't frequently done in
that language, yeah, for sure, stop doing that. And
you can actually perform this test.
Speaker 4 (52:52):
It's pretty interesting.
Speaker 1 (52:53):
Go to an LLM, don't tell it what language to use,
give it a problem to solve, and see what
language it picks to write the solution in. And it
will pick different languages based off of the problem you're solving,
and that should actually tell you: stop picking the language.
The LLM will pick the language for you, and you
should use that one, because it may not even be
correct in another language, or it may not even be possible.
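As a concrete way to run that test, here is a rough sketch using the openai Python package; the model name and the prompt are placeholders, and any other LLM client would work just as well.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{
        "role": "user",
        "content": (
            "Write a program that validates JSON Web Tokens, including "
            "signature and expiry checks. Pick whatever language you think "
            "is best suited, and tell me why you chose it."
        ),
    }],
)
print(response.choices[0].message.content)
```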
Speaker 2 (53:11):
That's funny, that's very interesting.
Speaker 3 (53:13):
Well, yeah, I'm very applicable to Plumy's world of supporting
programming languages.
Speaker 1 (53:18):
I totally agree with that. So, I mean, you do,
you are hitting for sure a lot of points. And
as much as I would love to have a whole
debate on to LLM or not to LLM, I
think the agreement is likely there are use
cases where it makes sense and ones where it does not.
And it's anyone's best guess what is going to happen,
(53:39):
even a couple of years from now. So I'd rather not,
you know, spend too much time speculating.
Speaker 3 (53:44):
Yeah, makes sense. I think just to like wrap a
bow on a lot of what we talked about. There's
a lot of change happening in the developer space right now,
and there's a lot of change happening in infrastructure, and
I think we've talked about a lot of it, which
is like speeding up and the importance of slowing down
and how you can have like checks and balances along
that process. And it's something that I'm really passionate about,
(54:06):
like helping build tools for, which is: how do you,
you know, feel confident about the changes you're making in
this new age?
Speaker 2 (54:14):
And so it's been a good conversation.
Speaker 4 (54:17):
Oh yeah, of course.
Speaker 1 (54:17):
So with that, I guess we can move over to
our last thing, which is the picks.
Speaker 4 (54:22):
So I'll go first.
Speaker 1 (54:24):
My pick is going to be a specific community dedicated
to leadership that anyone can join. It's called the Rands
Leadership Community, and I think it's over thirty
thousand people now, from tech backgrounds and non-tech backgrounds, but
who work in tech-adjacent stuff. I think we even had
a little bit of a not-great time
(54:44):
for leaders in the last maybe couple of years, where
companies were like, we don't need leaders, we have LLMs
to replace everything. But I think some of them are
starting to come around. I think it's going to be
another year or so and we're going to see engineers
and other colleagues that don't have leadership experience. And
if you find yourself lost and no one at the
company can help you, the community exists to be able
(55:07):
to ask questions and get feedback on how to
grow in your career, or just solve standard leadership and manager
questions. And I think out of every community I've
been in, it's for sure one of the best. We
do talk about leadership a little bit on this podcast,
because I do feel like it is in the back
of a lot of people's heads and there's a lot
of different things you can do.
Speaker 3 (55:26):
What's like an example of something you would talk about
in this community? Like, I am struggling to think about
what leadership is at a lower level.
Speaker 1 (55:34):
Well, I think it's not actually topic specific,
it's more of like how you approach any particular topic.
So, like, one of the most controversial things, and I
don't think I'm violating any rules of the community by
saying this, that even outside the community, is like microservices
versus monoliths.
Speaker 4 (55:51):
And the interesting thing is when someone.
Speaker 1 (55:53):
Posts a question in the community like, oh, I had
this problem, should we switch to microservices? You may
get a debate like which one's better, but you'll often
get a question like, well, why do you want to
do that? Like, what's the core problem you're trying to solve?
Is it a technical challenge or is it an organizational
Is it a technical challenge or is it an organizational
issue or a culture issue or you know, an interpersonal one.
Speaker 4 (56:13):
Are the incentives in line?
Speaker 1 (56:14):
You know what's going on there, and then the conversation
may pivot to actually talking about that. And so it
helps you see not just, like, whatever problem is in
front of you, but anything that could be happening. And
that's, like, on the technical side; there are, like, thousands
of channels on, you know, whatever arbitrary topic you could
possibly imagine that could be relevant. So maybe you are
going through a reorg and you want to know
(56:34):
like whether the messaging makes sense and you're not sure
how people will take it, or maybe you're dealing with
a boss or a manager who says, yes, you are
going to count your LLM usages and you're looking for
arguments why that could be a good thing or how
to push back against it. And I feel like this
is a place where you can go and actually have
that conversation. And there may be people from other companies
(56:55):
that you have heard of or ones that you don't,
who have gone through a similar process and can provide
you insight into how they approached it, or be a
thinking partner for how to solve it.
Speaker 2 (57:03):
Oh, that's very interesting.
Speaker 3 (57:04):
My pick for today is a book. I am at
the moment thinking a lot about how to build great teams,
both in that, you know, they have high velocity, but also
just that they're a good culture to work in. People
are excited to be there, they're happy, you know, working
on what they're working on. So a book that
I just finished is The Manager's Path by Camille Fournier,
(57:25):
and it talks a lot about, like, the transition from,
you know, being a technical IC to a manager and
then, like, a leader within an organization. And there's a
lot of good topics, everything from, like, how do you
step back from the technical strategy piece and, like, grow
people to play that role, and, like, what your role becomes.
Speaker 2 (57:44):
So definitely would recommend.
Speaker 1 (57:47):
You're reading it as, like, a preface for making sure that
your leaders are taking a path that you can approve of.
Speaker 3 (57:53):
Yeah, I also think it's just, like, really good to
think through some of these topics. Like, it just adds
different perspectives of how other companies do things. But also, yeah,
even things like how do you run a one on
one, and it gives you a framework of, like, here's,
you know, a way to do it, and you might
not take all of it, but there's things, there's value,
there's nuggets all over to be able to pick
up and adopt.
Speaker 1 (58:11):
Yeah, I do think the book is pretty great in
that way, that it's like, if I've never done this
thing, then how would I even, like, what do I
even need to be aware of? And it's not true
that you need to be aware of all those things,
but it's like, here's a list and you can
ignore the list or, you know, dive into it, and
then I think the most important thing, of course, is
adjusting to whatever situation that you're actually in. So I
(58:32):
think The Manager's Path is a great book, great pick.
Thank you for sharing that.
Speaker 2 (58:37):
Yeah, thanks for having me on. Yeah, so, debating LLMs.
Speaker 1 (58:42):
You know, I worry sometimes that our podcast may go
too much in that direction, like there's a
desire to either, you know, jump up
and down and celebrate it or be critical of it.
So you know, we had a proponent on the show
this week; I think last week there was a fight
against it. So you know, everyone can
(59:03):
pick their preferred episode, and with that I'll
say thank you, Megan, for coming on the show.
I think this has been a great episode, and, uh,
thank you to all the viewers and listeners, however you're
consuming this, for listening to this episode. And, uh,
we'll be back hopefully.
Speaker 4 (59:21):
Mm-hmm.