
May 14, 2025 51 mins

Is the prevailing approach to Artificial General Intelligence (AGI) missing a crucial step – deep, focused specialization? 

For the first time since co-founding Poolside, CEO Jason Warner and CTO Eiso Kant reunite on a podcast, articulating their distinct vision for AI's future with our host, Conor Bronsdon. Poolside has intentionally diverged from general-purpose models, developing highly specialized AI meticulously designed for the specific, complex task of coding, viewing it as a direct and robust pathway towards achieving AGI and revolutionizing how software is created.

Jason and Eiso dive deep into the core tenets of their strategy: an unwavering conviction in reinforcement learning through code execution feedback and the burgeoning power of synthetic data, which they believe will help expand the surface area of software by an astounding 1000x. They candidly discuss the "devil's trade" of data privacy, Poolside's commitment to enterprise-grade AI for high-consequence systems, and why true innovation requires moving beyond flashy demos to solve real-world, critical challenges. 

Looking towards the horizon, they also share their insights on the evolving role of software engineers, where human agency, taste, and judgment become paramount in a landscape augmented by AI "coworkers." They also explore the profound societal implications of their work and the AI industry more generally, touching upon the "event horizon" of intelligent systems and the immense responsibility that comes with being at the forefront of this technological wave.


Chapters

00:00 Introduction and Guest Welcome

01:19 Founding of Poolside

02:56 Vision for AGI and Reinforcement Learning

05:36 Defining AGI and Its Implications

10:03 Training Models for Software Development

17:08 Scaling and Synthetic Data

20:12 Focus on High-Consequence Systems

26:17 Privacy and Security in AI Solutions

28:09 Earning Trust with Developers

31:08 Reinforcement Learning and Compute

34:29 The Vision for AI's Future

39:50 Will Developers Still Exist?

47:07 Poolside Cloud's Ambitions

49:37 Conclusion


Follow the hosts

Follow Atin

Follow Conor

Follow Vikram

Follow Yash


Follow Today's Guest(s)

Website: poolside.ai

LinkedIn: Jason Warner

LinkedIn: Eiso Kant


Check out Galileo

Try Galileo

Agent Leaderboard



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Poolside is an AGI company. We're going after AGI because we want to build intelligence on compute, and we saw that there was an opportunity to become one of the four or five companies in the world that could possibly achieve that.
Welcome back to Chain of Thought, everyone.
I am your host, Conor Bronsdon, and I'm delighted to be joined

(00:21):
by two titans of the AI industry, the co-founders of Poolside: Jason Warner, CEO, and Eiso Kant, CTO.
Poolside is taking a unique approach to building AI for software development, moving beyond general purpose models towards something that is fundamentally different.
They've raised $626 million to accelerate the journey towards

(00:43):
AGI by focusing on code specifically, and we're going to dig into that approach.
Jason, Eiso, thank you so much for joining me. I know it's been ages since you two have been on a podcast together, so it's a unique opportunity for us. It's fantastic having you both here.
Thanks for having us.
Thank you, Conor. Looking forward to it.
Let's dive in. It's really fun having you both here, because not only can we dive into some of these deep

(01:03):
topics, but I used to listen to you both on your engineering leadership podcast together. And so I know that you two have a great dynamic here. So I'm counting on you guys to crush it and, you know, really share your perspective on the AI landscape and what it takes to build critical AI systems.
So let's just start at the beginning of Poolside.
What was the core problem or opportunity that you both saw in

(01:25):
the AI landscape that led you to found Poolside in 2023?
I'll get to 2023 in a minute, I promise I'll answer this question. But you have to remember, Eiso and I met in 2017. So we go all the way back to there, where Eiso had probably the world's first AI for source code company on the planet. It was called source{d}, and they were doing something unique and novel and new in the world at

(01:48):
the time. And I had just taken over the CTO job at GitHub with an ambition to turn GitHub from basically a collaborative code host into an end-to-end software development platform. Very importantly, this next part: infused by intelligence, and the intelligence was going to be key.
So Eiso and I, you know, I tried to acquire his company; it didn't work out.

(02:10):
We went to go do the podcast later, but we basically bonded over neural networks and their applicability to software, because that's what we care about. That's what we think about. That's what we had kind of obsessed over. And it didn't work out from a GitHub perspective in terms of what we wanted to go do. And then obviously GitHub gets acquired, and in the office of the CTO, with several folks, we ended up incubating GitHub Copilot under a very, very

(02:32):
different mandate and very different structure and all that sort of stuff. But you go back to that time and speed run over to 2023, when, you know, everything in the world was playing out the way we thought it might. And we looked at that and said, hey, do we want to go in, or do we want to sit on the sidelines still and basically learn to paint and sail? Because you could see the end game starting to

(02:54):
emerge from there. And Eiso and I had always said we wanted to build intelligence on compute; that's what became Poolside. Poolside is an AGI company. We're going after AGI because we want to build intelligence on compute. And we saw that there was an opportunity to become one of the four or five companies in the world that could possibly achieve that.
And if you take the kind of post-ChatGPT moment, the

(03:20):
world at the beginning of 2023 held a point of view that generally was: to reach human-level intelligence and capabilities, all we have to do is just scale up next-token prediction, scale up model size, provide more data, and we arrive at quote-unquote AGI. And we held a very different point of view at that moment in time.

(03:41):
We said that, yes, scale matters incredibly, scale of compute matters immensely, but the scaling axis that we were most excited about was the scaling axis of reinforcement learning. And if you go back two years ago, that was not a widely held belief. Actually, a lot of people thought we were kind of crazy for

(04:01):
having that belief. So it wasn't just that we saw, hey, the world was on a trajectory of closing the gap between models and human-level intelligence and capabilities. It was also that we realized we had our own unique point of view on how to get there. And, kind of, over many conversations Jason and I had over quite a few months, that culminated in deciding to create Poolside.

(04:25):
I think one thing to reinforce on this, and it's always funny to me: in '23, when we went out and were talking about starting this company with others, people thought we were nuts. Again: just scale up the cluster, we already understand how to do this. And we knew there was a different path, you know, more efficient, more structured, whatever you want to call it,

(04:46):
but it was going to be different. And the axis, as Eiso likes to point out, was very different in this way. In '25, it looks more than prescient. It looks like, holy shit, what did we miss in '23? We saw this. And I think this kind of goes to the ideas around Poolside in general, but just generally speaking, to what we're going to need to do to scale up

(05:08):
to achieve this. And here we are sitting, what, almost midway through 2025, and I don't think I could be more convicted in our approach, and I couldn't be more convicted in the future, because it's super clear basically what happens from here to the next, to the next, to the next. It does involve scaling up compute, but you need to understand what these things do to understand how to

(05:30):
scale.
So you mentioned this conviction, this belief in what happens next. And a lot of that conversation is around this term AGI. But often in those conversations, the people who are on different sides of it may not have the same definition of what AGI means to them. So, Eiso, I'd like to ask you: what does AGI mean when you and Jason are talking about it here?

(05:52):
So we try to always be careful with the term, because it's a moving target. It's got 50 different definitions. And so the one that I think is most useful for the moment we're in is the world getting to a point where we have human-level intelligence and capabilities for the vast majority of knowledge work that

(06:13):
we do behind a laptop. And if you break that down into its three parts: I think we are still very much in a moment in time where AI is not yet embodied. We're very early, and it's exciting; we're early in robotics, we're early in those fields. So the first horizon really is all of the world of bits, you know, that we operate and do so much of our work in.

(06:34):
Then, if we take it from an intelligence perspective, we have seen early models, you know, with great capabilities in understanding a lot of knowledge and building this kind of worldview and world model as represented by knowledge, but really poor reasoning and thought over long time horizons and complex tasks.

(06:54):
And software development is kind of this proxy task for intelligence, right? It requires you to be able to understand the world, have a lot of knowledge, because software development touches everything. But you also need to be able to, you know, go from a high-level objective all the way to a full system built, and that requires a lot of reasoning and thought, you know, to get there. And then there's the actual ability to interact in an

(07:16):
environment, right? The ability to use a computer, the ability to use tools, the ability to really, you know, get to the point where our ability to be intelligent is also our ability to act. And so I think those three components are the horizon that we're on right now. So I usually try to shy away from the term AGI, because it's probably a moving target for the

(07:38):
next decade. But I do think, if we look at economically valuable work in the world, work that drives the cost of goods and services down, that pushes the frontier of science and technology: on the knowledge work side, we're on a trajectory now in the world toward solving for that.
And I've heard you both talk

(07:58):
about this, and the rationale for why you're focusing on code, and the fact that there is this massive database of code out in the world. There's a lot of good code, there's a lot of bad code out there, but there are so many parameters that can be leveraged to take on this problem. And your approach at Poolside has been different from a lot of

(08:21):
your competitors. Jason, you've used this compelling analogy: everybody is building a four-door sedan, but we're building a truck. What makes Poolside's approach fundamentally different from the sedans of the AI space?
Well, as a reminder for folks, if you're not familiar,

(08:41):
we are in the pantheon of the OpenAIs and the Anthropics of the world, in that we are pretraining models from scratch. So there's a lot of things that we do that other folks might not do if they were fine-tuning somebody else's model or just using somebody else's model behind the scenes. And so we make very principled day-zero decisions on a lot of things that are different from what others would do.

(09:04):
And going to the analogy, it just means that we're fundamentally thinking about this differently, because we think about the use case very differently. So, you know, if you'll allow me a second to abuse the analogy of the four-door sedan and the truck: you can't slap the brakes from a sedan onto a truck. It just won't work.

(09:24):
You can't. Based on the same chassis, it won't achieve the results. And so far, with general purpose models, effectively what we've been doing is abusing the four-door sedan for truck-like utility. You know, we've been putting the tow hitch on there, we're putting it on the farm or the work site or whatnot, because we didn't have a truck. We didn't know what

(09:44):
that looked like. And largely it's indistinguishable from 50,000 feet, because they've got four wheels and a chassis, you know. So it's hard for some people who might not understand that, but when you use it, you know the difference.
If I can now throw out a third analogy: I think, you know, one of the things that we've always taken from day zero at Poolside

(10:06):
is that we've really tried to look at intelligence in a human-inspired manner. And I think the truck versus sedan analogy, applied to humans, is that we have very different types of intelligence, areas where different people are far more knowledgeable, but also, you know, really sit on different ends of the spectrum

(10:28):
of where their skill sets lie. And one of the things we've really looked at is that a lot of that has to do with when in your training, when in your learning, you start really branching off, really focusing, and making sure the data and the feedback, which is where the reinforcement learning

(10:48):
work comes from, starts focusing on software development capabilities. So in a world where we have, you know, infinite compute and infinite data, we're all the same model, right? We'd all be at this superintelligence model that could apply to everything. But we don't live in a world of infinite data or infinite compute. We actually have constrained resources. We can't make a ten-trillion-parameter model and serve it to users, because,

(11:10):
you know, no one can afford to call it. So we're all in a compute budget, a compute budget at inference time that people can use. And then, at the kind of frontier, we try to make sure we have similar compute budgets for training.
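To make that inference-time budget concrete, here is a rough back-of-envelope sketch of why a model that large would be impractical to serve. Every number is an illustrative assumption, not a Poolside figure, and this ignores memory bandwidth, which in practice only makes things worse:

```python
# Back-of-envelope: serving a hypothetical ten-trillion-parameter dense model.
# All numbers below are illustrative assumptions.

params = 10e12                  # 10T parameters, dense
flops_per_token = 2 * params    # ~2 FLOPs per parameter per generated token
gpu_flops = 1e15                # ~1 PFLOP/s effective per accelerator (assumed)
tokens_per_sec = gpu_flops / flops_per_token

print(f"~{tokens_per_sec:.0f} tokens/s per accelerator")  # ~50 tokens/s

# A single conversation would saturate a top-end accelerator, so the cost per
# token lands orders of magnitude above what smaller models can offer.
```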
But the reason I'm going on this tangent here a little bit is that the very early part of our training looks

(11:32):
extremely similar to how you would train any general purpose foundation model. And actually, our models share a lot of the data of general purpose foundation models. They're great at planning your trip to London or writing a poem or a bedtime story. They're not just about code or software development. You have to understand the world and build up knowledge and intelligence. But then, about a third into the training, we start biasing them

(11:54):
massively towards software development. And then we get through the parts of the training that are really around learning from the data that's out there, what's represented on the web and in datasets that we've built, increasingly more synthetic ones, to be honest, very much so; our view is that, you know, in the future it will likely almost all be synthetic. And then, towards the end of the

(12:18):
training, our really strong efforts are reinforcement learning from code execution feedback: giving models time to think and reason over complex software tasks, across what is now almost a million repositories that we have fully containerized with their full test suites, in this huge diversity of domains and languages, and giving these

(12:39):
models the chance to do tasks and learn from when they're right and wrong. So from an infra perspective, a lot of things initially look similar. If we tomorrow, you know, chose to change a couple of dials in our pipelines, the models would become much more like general purpose models, but we've decided to really apply our compute and our efforts to pushing them towards their software development

(13:01):
capabilities.
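At its simplest, the loop Eiso describes can be pictured like the sketch below. This is a hypothetical illustration, not Poolside's actual stack: the `model` object with its `propose_patch` and `update` methods is a placeholder interface, while the verified reward really is just the containerized repository's own test suite passing or failing.

```python
import subprocess

def run_test_suite(repo_dir: str) -> float:
    """Execute the repository's test suite; reward 1.0 on a pass, else 0.0."""
    result = subprocess.run(
        ["python", "-m", "pytest", "-q"],
        cwd=repo_dir, capture_output=True, timeout=600,
    )
    return 1.0 if result.returncode == 0 else 0.0

def apply_patch(repo_dir: str, patch_text: str) -> None:
    """Apply a unified diff to the working tree."""
    subprocess.run(["git", "apply", "-"], cwd=repo_dir,
                   input=patch_text.encode(), check=True)

def revert_working_tree(repo_dir: str) -> None:
    """Restore the repository to a clean state between samples."""
    subprocess.run(["git", "checkout", "--", "."], cwd=repo_dir, check=True)

def rl_step(model, task: str, repo_dir: str, num_samples: int = 8) -> None:
    """Sample candidate patches for one task, score each by executing the
    test suite, then reinforce the trajectories that passed."""
    trajectories = []
    for _ in range(num_samples):
        patch = model.propose_patch(task)  # thoughts + code actions (hypothetical API)
        apply_patch(repo_dir, patch)
        reward = run_test_suite(repo_dir)
        trajectories.append((patch, reward))
        revert_working_tree(repo_dir)
    model.update(trajectories)             # policy-gradient style update (hypothetical API)
```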
Maybe a bad way to think about this: it's not dissimilar to whatever budgets we all have in other domains of our life. We all have time budgets, we all have energy budgets, we all have whatever budgets, and you apply them. So this is a compute budget, and you have to understand how to apply it. But there's

(13:21):
this concept, and again, everyone who knows me knows I'm really good at just throwing analogies out there: kids in life are supposed to be general purpose for a long time. An athletic kid is supposed to be general purpose. And at some point, if they want to specialize, to go pro essentially, they keep doing this. But the

(13:42):
sooner you overly specialize, you can have, you know, collapse, basically over-specialization and all that sort of stuff. And if you never specialize, you basically end up with no advantages. And so you have to actually build this out. So, you know, baseball players: you're supposed to be an athlete, then a pitcher. Football players are supposed to be football and basketball and soccer players.

(14:03):
And then you can specialize and continue to do this. It's actually similar in this way: we all have budgets, time budgets, energy budgets and all that sort of stuff. At some point, you need to figure out what you're going to go do.
So let's get really clear for our listeners here. Poolside is purpose-building models like Malibu specifically for software engineering, and you're using this approach of

(14:26):
reinforcement learning via code execution feedback to do it. But what advantages does that specialized approach actually offer developers and technical leaders who want to leverage Poolside, compared to using more generalized AI coding assistants or models?
I think it's a really good question. Our view is that, for us to succeed as a company, we only

(14:49):
succeed if we are constantly pushing the frontier of capabilities. At the end of the day, you don't want to work with the dumb model, the dumb agent. It's just that straightforward. All of us, at the end of the day, are in a race to human-level capabilities, and there's a lot of overlap in the work that I think we all do, but we really push that towards software development. Now, the other thing that

(15:11):
we focused on from day zero: we've lived in this world over the last couple of years of models, APIs, applications, all of these things that are kind of being put together. It felt a little bit like the Android world to me, but there hasn't been the Apple equivalent, where everything is built from end to end to seamlessly work together.

(15:33):
And that's what we've been doing. We've been building the model, we've been building the context engine, the ability for it to adapt and learn from its environment, and the application experience, all as one thing. One of my favorite things to see is when someone on the product engineering side comes up with an idea, tries it out, and then realizes the models aren't very good at this. And then you see someone from the applied research team

(15:54):
jumping in and saying, oh, we can actually improve that in the next iteration. And 24 hours later, you see a new version coming out. It's that deep integration that we focus on. And on top of that, we've taken an enterprise-first approach to try to deliver that kind of experience. Over time, it will become available to everyone. It's our mission, and has always been, to try to affect every, you know, line of code and every

(16:16):
developer in the world, everyone who wants to build software. But it's that, yeah, Apple-like end-to-end building together that I think is quite unique in our culture and the approach that we've taken.
I love the grand ambition of what Poolside is trying to do. And Eiso, I remember hearing an interview where you talked about this race and how you didn't want to regret not running as fast as you could in

(16:39):
it, which speaks, I think, to how you and Jason are pushing Poolside forward. Yet there's this dirty little secret that I've heard Jason talk about, which is that in the AI industry, hey, we all pretty much have access to the same data. So how much can you really differentiate on your approach and what you're doing at

(17:02):
Poolside if the actual inputs are largely the same?
So, you know, two years ago, and it's probably when we were talking about this, we spoke about this world where, yes, we all use the web and variants of it and such, and that's that same data set. But as we are increasingly

(17:23):
moving to scaling up more and more compute into reinforcement learning for our models to learn, we're increasingly living more and more in a world of synthetic data, where it's about giving the models, you know, complex tasks with lots of diversity, where they can go and explore towards a correct solution, and everything from the thoughts and actions that they

(17:44):
take. All of this becomes data, and so you end up in a place where you're building quite a large mountain of proprietary data that really becomes yours as you're building models. It's not just us; I think that's true for everyone in the field. So on one hand, you know, if you think about a model, at the end of the day it's a compression of the data into a neural net. And that forces this

(18:05):
generalization that we look at as intelligence. And now, I think, at the limit, a lot of us will look very similar over the course of five-plus, you know, years. I think we'll all get good at the things that each other are better at and have advantages on over time. But so it is not just about the model. The model is one part of it.

(18:27):
I think the product and everything become really one thing. I think we're starting to see that a little bit in our space as well. But I don't want to understate what happens when you have a team just focused, with an immense amount of compute, on becoming the world's best in one domain, and you build out the

(18:47):
scale of engineering for that. It becomes a compounded set of advantages. And now, two years in, we see those advantages have really stacked up.
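One way to picture how that exploration compounds into a proprietary data advantage is a rejection-sampling-style filter: keep only trajectories whose solutions were actually verified, for example by a test suite, and fold them back into the training set. A minimal sketch, with hypothetical names rather than Poolside's real pipeline:

```python
from dataclasses import dataclass
import json

@dataclass
class Trajectory:
    task_id: str
    steps: list          # the model's intermediate thoughts and actions
    reward: float        # 1.0 if the solution was verified, else 0.0

def harvest(trajectories: list[Trajectory], path: str) -> int:
    """Append verified-successful trajectories to a growing synthetic dataset
    (JSON Lines); failed explorations are simply discarded."""
    kept = [t for t in trajectories if t.reward >= 1.0]
    with open(path, "a") as f:
        for t in kept:
            f.write(json.dumps({"task": t.task_id, "steps": t.steps}) + "\n")
    return len(kept)
```

Run over millions of tasks, a filter like this is what turns compute spent on exploration into data nobody else has.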
What you're saying about synthetic data really resonates with me as well. We've seen the same success with our evaluation approaches here at Galileo, particularly when working with enterprises to create feedback loops that drive

(19:07):
improvement through reinforcement learning. So it's exciting to hear you're seeing the same at Poolside. I'd love to learn, Jason: how many folks are working at Poolside today? How much has the company scaled at this point?
We have over 100 people, with a decent-sized go-to-market team. By and large, most of the folks are obviously

(19:28):
in applied research and distributed systems, because, if you listen to what we've said, it's not just a model problem, as you might have understood it to be, but a model-plus-systems problem. And then we're building the middleware and applications as well. So it's all of those things. But you know, we're two years in, and we expect to grow more. We also don't want to go massive in terms of the team. We're a small but mighty team.

(19:49):
We believe very, very much that a small set of highly opinionated, mission-driven people are going to outperform other people in this market.
Absolutely. And as Eiso mentioned, this opportunity to compound the advantage through this extreme amount of focus is a really interesting approach with huge potential.

(20:10):
And I've also heard both of you express concerns about the industry's focus on low-consequence systems and flashy demos, and the danger of applying these learnings directly to banking or healthcare systems, the enterprises where B2B businesses actually make money. Jason, why is this distinction

(20:32):
so critical, and what are the risks engineers and leaders should be aware of when evaluating AI tools? Eiso, I'd love for you to chime in as well.
I'm probably one of the, maybe, weirdest people to comment on this to a degree, given my background: CTO at GitHub, VP of Engineering at Heroku, and before that, the same thing at Canonical,

(20:53):
the people who make a bunch of Linux. So Ubuntu, Heroku, GitHub. But if you look at the history of those companies, most of them started as incredibly virally popular; that's what they were. And all of them were trying to jump to enterprise at some point, with varying degrees of success. But with a great go

(21:13):
to market team, they were able to. And what I see happening at the moment is people making the mistake of saying a Flappy Bird clone or an Asteroids one-shot is the same thing as building the underlying systems that control money movement, MRI machines, drone software. And it couldn't be further from the truth in terms of all of those

(21:36):
things. And this is a massive mistake that Silicon Valley has always made, in my opinion, and I made it in my own history: the idea that if you could get every single X, you could go get Y, you know, that you can always jump over. And it's hard to do both of those things. It's my job, our job. And we've always said Poolside is going to affect every single

(21:58):
line of code on the planet, every single developer on the planet. And we're going to start, fundamentally, with the hardest possible place to achieve that, which is these very large enterprise environments. And in those environments, if you intimately understand them, you understand that they are not looking to, quote-unquote,

(22:19):
vibe code their way to a solution. What they're trying to do is achieve an outcome underneath all of these different regimes, whether they be safety or regulatory or compliance, or all these internal mechanisms that need to be taken into consideration. And so it has nothing to do with words. It has nothing to do with even, like, emotions or whatnot. It has to do with this idea of understanding the customer and
(22:43):
understanding the user and what happens every day for them. And at the root of it: if someone says, oh, look what you can do with a flashy thing in 30 seconds, in a one-shot over here, a massive defense contractor should just do this, it shows such a staggering

(23:05):
misunderstanding of that customer and user that I want you nowhere near them, because I want to sleep well at night.
Hot take, but I think, you know, if you put that in the context of where models came from and where we're heading with AI: we came from code completion, to chat, to now early agentic. And we're on a trajectory to the

(23:25):
truly autonomous agents, right? Agents that look like you're adding to your workforce, like you're adding to your team. And in that environment, we're not far out in the world, it's likely less than three years, from where you truly have human-level capabilities in software development, and you have agents that are collaborating with you

(23:46):
in your organization, in high-consequence software environments, with the software that, as Jason said, essentially, you know, allows our world to operate, from electricity to banking to healthcare. That's a huge shift from where people are today. And more so than that, those agents are not a static model call.

(24:08):
They're going to have to learn from your data. They're going to have to get access to all of your systems. They're going to build up a history of thoughts and actions that they've taken. That will be, you know, centrally available, and other agents can access it; that data will be used for versions of models to be fine-tuned and to improve. So we're on this trajectory of AI, in our view, really becoming a coworker and

(24:33):
becoming part of the organization. And that is where, yeah, the rubber meets the road in the real world. Now, that doesn't mean Jason and I don't get equally excited about, and have fun, vibe coding an app, you know. It's 100% there. But what we probably spend more time obsessing over is the model side: really, how do we push the intelligence and capabilities?

(24:55):
How do we allow these models to continually learn in enterprise environments? And then, on everything that's built around the model: how can you bring this safely behind the firewall of a customer? How can you bring the data? You know, can you bring the model to the data instead of sending the data off to the model somewhere else? Because, at the end of the day, this is becoming highly critical infrastructure for our world and for

(25:18):
organizations to use.
And I think that's the key thing: this is the end state here. In the limit, what these things look like is critical infrastructure, and you have to have a certain orientation around it. So as I said, we use Poolside all the time to vibe code our way to stuff. It's fine. I do it all the time, you know, to maybe maintain various side projects or things that we're doing. And it's not like it's not good at that.

(25:39):
But in what I call these high-consequence environments: in computer science there's a term called NP-hard, and it's a category of problems where, if you solve one, you solve all. Well, if you solve these environment problems, you can make it work for these folks in these spaces, and you can make it work for anyone in those spaces. But if you just solve the 'I want to vibe code my way

(26:01):
to something', the no-code, low-code replacement sort of thing, you've not solved the other problem. You still have to do it. Whereas if you can do it over there, you can do it anywhere.
Jason, I've also heard you talk about this idea of a devil's trade that the industry is making, where companies are asked to send private data, like their source code, to AI providers with this promise that it won't be

(26:22):
misused. Given that source code is often a company's most valuable asset, how should the developers and technical leaders listening evaluate the privacy and security implications of different AI solutions that maybe don't have an enterprise focus yet?
Well, I've always said: when you're in a selling motion

(26:44):
for something like this, which is, in my view, probably the most important digital technology of our lifetime, when you're talking about enterprises adopting this, it's very different from a developer signing up with a credit card and going to try something, stuff they just don't care about. Maybe they care, but they don't really care. But if you think about what

(27:04):
enterprises are doing in this motion, as you're really understanding it, you have three people that you're selling to, three people whose requirements you have to satisfy: the CTO and the CIO, effectively the primary buyer; but you also have the general counsel, the GC; and you have the CISO, the security officer. And there are decisions across the

(27:26):
board on what you build and how you expose it, and you need to satisfy all three of those people. And it's going to become incredibly more critical in the future, as people become aware of what's actually happening as opposed to what they think might be happening. But you know, the devil's trade I've talked about is that I never want to ask a customer to have to send me their most valuable

(27:48):
asset, essentially their source code, and to quote-unquote trust me. I want to show them that I've understood their concerns and that I'm meeting them where they need to be, and all of that sort of stuff, because we've all seen what happens in the space. And it's like, yo, bro, trust me on this. Working with enterprises is earning the right every day

(28:11):
to continue to satisfy them. And with developers it's not dissimilar, but a developer is an individual. There's no homogeneous set of developers, and enterprises are the same, but they typically boil down to similar sets of concerns. And so we just need to earn the right to satisfy them and do this. I don't want to ask them to send me their source code. Thankfully, from a technical perspective, this is somewhere

(28:32):
we're unique: we don't need to get the source code to continue to train better and better models. That goes back to the previous techniques of reinforcement learning and synthetic data. We don't have to ask them to do this; but also, from a satisfy-the-customer perspective, I don't want to ask them to do this.
I also think there's a second thing here, which is that there are very few products in the

(28:52):
enterprise that are products we love. A lot of the products we love are the consumer products. And one of the things that I, you know, think is inherent in all of us developers is we only want to use products we really love. It doesn't matter if we're at home or at the enterprise; wherever we are, we want to use the product that we think is best, that we love. And so by kind of obsessing

(29:16):
over, you know, all sides of this, the intelligence in the model, the user experience, making it work end to end, Apple-like, but then still being able to deliver that in these highly complicated environments that have, you know, data locality and security boundaries and all of these things that come with them: that is what has kind of been our dream

(29:38):
and what we've been working towards. And it's really nice seeing all of that come together today in these places, because that trade-off is one you don't want to make. I think the best products are the ones that, you know, you're happy to use anywhere: you're happy to use them at home and you're happy to use them at work. And I do think that if we fast-forward, you know, a couple of years, the form factor is

(30:00):
going to drastically change for AI. Whatever the form factor is today, which, you know, is heavily built around editor extensions and editors and CLI tools, the reality is that AI gets more and more capable and closes more of the gap between what we're capable of doing behind our laptop and what models are capable of. The form factor will evolve, but what will actually stay is

(30:24):
the full system, everything that lives there, that becomes the substrate inside an enterprise: the access to all of the data that's there, the model that is learning from the interactions that people have with it, the management of the agents. All of these things are starting to become that layer, while the form factor on top will probably be changing every

(30:44):
six months for the next couple of years, right? And there will be incredible new form factors thought of by people that are not Poolside, that are others, and you want to make sure you can empower them. So it's going to be a really exciting, and weird, time at the same moment.
I think weird is a really good description. It's very exciting to see us

(31:05):
overcoming some of the underlying technical challenges, parameter budgets, model serving, and progressing along this road towards human-level intelligence on some of these tasks. And Poolside is taking a kind of two-lever approach here, leveraging compute but also focusing heavily on data, via reinforcement learning and, as you've

(31:28):
mentioned, synthetic data generation. I'd love to understand a bit more about how this two-lever approach potentially accelerates progress compared to just throwing more compute at it.
So I think, you know, there are two levers we talk about publicly. So I think that's one thing.
Unless you want to spill some secrets.

(31:48):
I'll leave that to the audience. But I do think, by the way, that in the last four or five months the world has woken up to reinforcement learning, very much so. We've seen it in the first reasoning models that others have brought out. And so we are very, very strong believers that the more compute we can drive

(32:09):
towards reinforcement learning, for both verified rewards, code execution and math, the things that really push those capabilities, and having the world's largest environment for code execution there, by several orders of magnitude, is this huge advantage. We've also been doing a lot of work on non-verifiable rewards: you know, how do we just truly improve the model's capability of thought,

(32:32):
and reasoning as a subset of thought? And so our view is that that's really the scaling axis. That's where the majority of compute will go. But you mentioned, you know, not just throwing more compute at it. I think we've said this before: if you are not investing in training compute at the same scale as your peers at the

(32:54):
frontier, you will fail. And we've always been very open about, and aware of, this reality. And I've spoken about this: there is a direct relationship today between the intelligence of models, across any measure and breadth, and the amount of compute that was required to produce them. But the question is, where are

(33:16):
you applying that compute? Are you just making the model larger? Are you just training another epoch of 10 trillion tokens? We don't think that's the axis. While scale and size of model do help quite a bit, don't get me wrong, and so does more data, our view is that compute is best spent on reinforcement learning. And we've been building that for two years, to get us to a place where we'd have the world's largest environment and also

(33:39):
have the bread-and-butter foundation model building really well done. And so now we've got some things still to prove. I want to also be very open: I have a lot of respect, and I would say, right now, I look at, you know, the latest Claude model and say, ah, there are areas where they're stronger than us still. Now our job is to make sure that, 12 months from now and

(33:59):
even earlier, that is not the discussion. But we also feel it's quite a privilege that we get to work on this and that we get to have the resources to scale up.
I love that. And I do think there is a really healthy competition happening right now. As you mentioned, you know, Claude Sonnet is really good at some things. Poolside is extremely good at other parts of software

(34:22):
engineering. And I think it's so interesting to look at this in the context of this bet, this conviction that you two had back when ChatGPT had its major moment, back in 2023. You both said, hey, look, this is the future. We see where this is going. We see this event horizon you've talked about. And Jason, you brought this up earlier.

(34:43):
You have this even deeper conviction today about what the future looks like and what happens next. Tell us about that vision.
This is where you can start to be one of those people that talks a little too ethereally about the future, and we always like to root in the moment and the practical. But as I alluded to earlier, I do think that this is the most important digital technology,

(35:06):
neural networks, the most important digital technology of our lifetime. The application to solve societal problems, the application to solve hard, interesting problems, is within our grasp. In a short period of time, we solve a known set of problems: today, we can do a certain set of things, and that's fascinating and it's great. But the potential here

(35:29):
is profound. And so it's similar to how you might think about this, in a way: I have X before me, I know exactly how to solve it, I'm going to go do that. Well, it's the unknowns, you know: I don't know how to solve that problem. Can I apply some compute budget to it effectively? Do I have one of these things that allows me to

(35:50):
apply some compute budget to it, to go after that problem and have it be solved? It's remarkable that we're even talking about this. And that's, weirdly, where I think the convergence is happening in the future. That's what we're all talking about here. The privilege of a lifetime is to work on something like that, something so profound that it matters at the societal level. And that's what Eiso is alluding to.

(36:11):
And I think, too, this is for us one of the most interesting, maybe the most impactful things: you can see viscerally what that means. There are some event horizons out there that none of us can see past, but I look at that set when we start, you know, even approaching the event horizon. You know, obviously we're working in the domain of

(36:32):
software right now. It's pretty obvious what will happen over the next couple of years as you build this out. But you can see the effect on other industries and other spots too. And I don't know, it's beyond humbling to think that you're one of the five on the planet that's allowed to go after this, using these various problems in that way. And you know, we're just going to keep doing it, one foot after the other, on towards the event

(36:54):
horizons.
Jason talks, and I really like this notion, about known compute budget problems and unknown compute budget problems. And if we play it out over the next 3, 5, 10, 15 years: everything today that is a broadly known compute budget effort, it doesn't matter if that's accounting or software development, if, you know, it's work that we do in

(37:17):
large groups of people, you know, many of us do it, and we do it behind a laptop, and in the future, embodied in robotics, in the real world: all of those things will lead to perfect automation, I think. Now, what that does, though, and I think this is where it gets exciting, when you can just spin up compute, you know, to tackle it,

(37:39):
is that it leaves an entire field open for us of unknown compute budget problems. And the unknown compute budget problems are this infinite frontier of technological progress. We will never not find more science, more technology that we can build, more exploration that we can do, whether that's the oceans or the stars, whether that's the next, you know, breakthrough in

(37:59):
biology, or in healthcare, or material sciences. And so I think what we will see is human-level intelligence, as a scale-up on compute, is going to be a resource for all of us that we can get access to and use and scale up. Once we are through the things that drive the cost of goods and services down to their raw material cost, we will

(38:22):
start increasingly spending on the frontier of science, where we don't know how much intelligence it takes. And we will also start taking this first primitive of intelligence, human-level intelligence, and really applying it to very specific areas. I think AlphaFold is an amazing example of this, where it's, you know, a very specific type of task.

(38:43):
Now, I think the combination of human-level reasoning and thought and capabilities, combined with some very concrete, you know, goals that humans could never figure out, the next breakthrough in material sciences, or the drug discovery problem, that's where that combination is going to get very interesting. So I think we will look back on this in 10 years and say, oh, it was so cute that we solved human

(39:05):
level intelligence. Look at what we now have: these superintelligences that we can apply to healthcare and to all of these areas of science. And along the way, we will find it very deeply integrated in our lives. It will be normal that there will be knowledge work done by AI, as robots will be doing construction and manufacturing, and we will find our meaning

(39:26):
amongst all of this.
I do think it's interesting to think about this philosophical question, and it's increasingly one that the industry is, I think, beginning to grapple with as we see the first-order impacts that are coming and start to project into some of the second-order impacts this could have on society. One of those first-order impacts

(39:47):
is clearly on software development, the area where Poolside is explicitly focused. And I'd love to understand: what is the trajectory, Jason, that you're seeing for what the day-to-day for software engineers and technical folks will look like in the next couple of years, as this

(40:07):
transition occurs?
So, you know, first-order effect, that's effectively how you asked this: what are those first sets of dominoes to fall? And another way to ask this question might possibly be: do software developers exist in the future? Because we get that question all the time, and I'm sure you do too.
Yes, for the record.
Well, so I think this is not one of those yes-or-no questions.

(40:27):
There's a whole nuanced conversation around this. My simple answer, without going into too much time on a podcast, is going to be: developers exist. The relationship to the tasks and jobs changes. We've had this throughout history, throughout all of development history, we've had this. It's no different here. The tool is different, it's

(40:50):
massively different, but it doesn't mean developers necessarily go away. What does change, though, is I think enterprises themselves, the actual amalgamation of all the people. There's more change there than there is for some of the developers. But the recommendation I give for developers: many software developers' whole identity is wrapped up in the fact that

(41:11):
they're software developers. And a lot of folks frame it that way: I am a software developer, I'm a Java developer, I'm a Ruby developer, I'm a Python developer, I'm a distributed systems engineer, or ML, or whatever. And a lot of them are, you know: I have perfectly memorized, or know the ins and outs of, the SDKs or the APIs or whatever. Those types of things go away.

(41:33):
Those are not the skills that are necessarily the valuable ones in that future world. The ones that remain are taste, discernment, and judgment: the ability to understand the unknown possible implications, to ask questions like, hey, does this cover X or Y or Z? Does this satisfy the customer? That sort of stuff, that stuff exists. And I think that this is kind of

(41:55):
one of those big changes. And I also think the day-to-day looks different. This is where, again, a lot of the things that happen in the moment don't matter long term. So let's just take the moment: Poolside is the full stack, the model, the middleware, the applications that sit on top of it. A lot of the application surface area that happens, let's just call it IDEs today. A lot of that application

(42:17):
surface area, as developers use AI via them, that surface area in the IDEs, is there because model gaps exist today. So as the models themselves, and the systems that help support them, become more capable, the surface area available to the IDEs diminishes. But it's also because the people

(42:39):
who are spending the vast majority of their time there are not inside the IDEs the way they once were. They move that time to something else. Because if you have 10 developers on a project, you might have 10 human developers, but you also might have 1,000 digital developers working on something. It's a very different interaction mode.
So this is one thing when we're talking to enterprises or even

(43:01):
developers every day: you have to understand the implications of X to understand what's going to happen for Y. And pulling this all apart is kind of an interesting overall conversation.
Eiso, what about you? How do you see the role of technical folks like yourself changing in these next couple of years?

(43:22):
I look at AI getting more capable, and reaching human-level capabilities, as adding to the global supply of software development, of people who can build software. But I also think that, you know, 'software eats the world' has barely started. There's this huge surface area, and you can just walk

(43:45):
around anywhere, or walk into the DMV, or, like, anywhere, and you realize that there's so much more software to be built to actually create a world where we have more automation, where we, you know, bring down the cost of goods and services for everybody. And so I think the surface area of software can definitely use more software engineers. Now, the question becomes:

(44:07):
there are going to be a lot of tasks that are part of the day-to-day of a software developer where the AI can now do for you in a second what before would have taken you, you know, 20 minutes. And we are inherently lazy as software engineers, right? Like, it's in our blood. If I know the system can do it

(44:28):
for me in 10 seconds, and it would take me 20 minutes, and that system's offline, I'll go make coffee for 30 minutes, right? And we're already seeing that with AI assistants today, right? If the AI is down, and now you have to go do that thing that, you know, the model can do 10 times faster, you're not touching it anymore.
I am guilty of that, for sure.

(44:48):
And I think we all inherently are. And so that gap will entirely close. And then it becomes about your agency: your agency to go tackle problems, to go build things, to find new surface areas where software is valuable. And as Jason said, the change at enterprises and company structures is probably larger than it is for you as an

(45:11):
individual. And it's true: if you have really held your identity on the fact that you are the world's best person at optimizing, you know, CUDA code, and now one day you wake up and you realize that that optimization can be done end to end, you know, by AI, faster than you, you have to let go and change your identity on that.

(45:31):
And so I think more people will create software. I think the surface area of software will still grow 1000x. I think the nature of the day-to-day role will look increasingly more like one where you have agency and are delegating, whether that's to a single agent or, if it's 2,000 or 100, to a variable number that you're scaling up and down as

(45:51):
you're tackling something. You find yourself spending less time in the editor. And I think it was Zuckerberg, in an interview this week, I just saw the snippet, I haven't watched it all yet, who said that, you know, you look a little more like a tech lead. And I think that's, frankly, a nice starting point to talk about it.
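The delegation model Eiso sketches, a variable number of agents scaled up and down per job, can be pictured with a simple worker-pool sketch. `run_agent` is a hypothetical stand-in for invoking an autonomous coding agent; the scaling knob is just a concurrency limit:

```python
import asyncio

async def run_agent(task: str) -> str:
    """Hypothetical stand-in for dispatching one task to a coding agent."""
    await asyncio.sleep(0.1)              # placeholder for real agent work
    return f"done: {task}"

async def delegate(tasks: list[str], max_agents: int) -> list[str]:
    """Fan a backlog of tasks out to at most `max_agents` concurrent agents."""
    sem = asyncio.Semaphore(max_agents)   # the up/down scaling knob

    async def bounded(task: str) -> str:
        async with sem:
            return await run_agent(task)

    return await asyncio.gather(*(bounded(t) for t in tasks))

# e.g. asyncio.run(delegate([f"task-{i}" for i in range(100)], max_agents=10))
```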
Yeah, I definitely hear what you're both saying here. And I think there are these distinct traits that are

(46:14):
important ones for great engineers that maybe haven't been focused on, necessarily, by the industry broadly. So, you know, agency obviously is one; everyone's talking about, like, you know, just going and doing things, leveraging the compute you have access to. Creativity is obviously one: how are you thinking through a problem, how are you problem solving? And then, Jason, you mentioned one that I've been thinking

(46:35):
about a lot lately, which is taste. Packy McCormick wrote an article about taste, in reference to one of his portfolio companies, back in, I think, 2022, before AI had really exploded. And I've been thinking about it a lot lately from the standpoint that really having a defined sense of what good means, and how to create it, is becoming

(46:57):
more and more important, tied in with the agency that you're talking about, Eiso. And I think that's one thing that Poolside is really optimizing for. You're saying: hey, we're going to focus on large enterprises, we're going to nail this. We're going to have taste in our problem solving. We're going to really make sure we're solving the hardest problem here. But I also know

(47:17):
you're planning to make solutions more generally available with Poolside Cloud. What does the roadmap look like for Poolside Cloud, and how do you see Poolside's truck becoming accessible to a wider range of developers and organizations?
The whole ambition for Poolside in the near term is every line of code affected, or every developer affected, some way, somehow. So when Poolside Cloud launches, as an example, it will power applications

(47:41):
of all variety around the world with our state-of-the-art models and things of that nature, as well as give people access directly to our own full stack, everything that we do, all the way up to the editor. But if someone wanted to use somebody else's editor, powered by Poolside, great, they can do that. And I think this kind of goes, again, to what we think is important in the long term:

(48:02):
that you understand what's happening and affect the number of lines of code created or manipulated, or the number of places where people are actually doing interesting work. I don't need to talk about that too much; I think it's kind of obvious there. But Eiso also pointed out that there's a philosophical difference in approach. And we talk about this internally quite a bit, which is:

(48:26):
we want to be Apple, and many other people might be Android. So we're taking a very opinionated approach to this. I mean, the first opinion is expressed in that the domain is important: don't just do general, let's go specific, and software to start. And we have very specific viewpoints on how to build out

(48:46):
the full stack on top of it. At the end of the day, it's all about customer preference, all about users and developers, and developers are highly opinionated. And again, they're not a monolithic block: the Ruby developers versus the Python developers, very, very different types of communities. So you have to understand how to satisfy some and all in

(49:06):
different ways. And then you have enterprises and everything else in between. So, in some ways, it's a massive problem to go undertake. But this is also why you build upon 20 years of experience, for myself or others in the industry. And if you've dedicated your entire life to developers, you understand exactly what I'm saying: they are very different from dedicating yourself to

(49:27):
accountants or dentists or doctors. It's about understanding the people themselves and what they care about at the end of the day.
I love it. It's a great note to end on. And Jason, Eiso, thank you so much for both sharing your perspectives on Chain of Thought today. It's been an honor to have you both together for your first podcast interview since you started the company.

(49:47):
I think that's a really cool opportunity for us. For everyone listening, where can they go to find more information about Poolside and follow the work you're doing?
poolside.ai is our home page. It's got information on how to get in touch with us if you're interested in installing Poolside and trying us. First of all, thank you. The other is going to be:

(50:08):
just get in touch. We'll figure out how to get you in the pipeline.
Thanks so much, guys. We'll link everything, including Poolside's website, in the show notes. I really appreciate you both for the wide-ranging discussion. I feel like we could have gone for another half hour; I knew I should have booked more time. So to everyone listening at home, we will be sure to have these two back at some point. And be sure to subscribe wherever you get your podcasts,

(50:30):
and check out the Galileo YouTube channel for more content like webinars, events, deep dives, and of course, every episode of this incredible Chain of Thought podcast. And you can watch Eiso and Jason and me interact; you can maybe see a little feature of Eiso's dog, who joined us very briefly, on that YouTube. And obviously check out the rest of our episodes with our lovely

(50:50):
guests. Gentlemen, thank you so much again. This was a ton of fun.
Thank you, Conor.