
August 19, 2025 • 43 mins

Join the Tool Use Discord: https://discord.gg/PnEGyXpjaX


Open Source AI is key to a positive future for humanity, and Alignment Lab AI is building the tools to make it accessible to everyone. In a world where a few big tech companies control the most powerful "black box" AI models, concerns about data privacy, algorithmic transparency, and entrenched power are growing. This episode explores the alternative: a future of decentralized, user-owned AI.


We're joined by Austin Cook, Chris Hardwick, and Jordan Parker from Alignment Lab AI to discuss their mission to democratize AI through radical transparency and distributed ownership. Discover how open source AI empowers you to control your own data, run powerful models locally on your own hardware, and create truly personalized AI assistants. We get into why specialized open source models can outperform giant frontier models for specific tasks, the importance of owning your compute, and how you can get started with your own local AI lab.


Learn more about Alignment Lab AI:

https://www.alignmentlab.ai/

https://discord.gg/r7NaHJZxCR

https://x.com/alignment_lab


Connect with us


https://x.com/ToolUseAI

https://x.com/MikeBirdTech

https://x.com/alignment_lab


00:00:00 - Intro

00:01:52 - What is AI Alignment?

00:07:05 - Can Open Source AI Compete with Frontier Models?

00:10:46 - The Dangers of AI and Your Data Privacy

00:25:13 - What You Can Do With a Home AI Lab

00:32:01 - The Future of Personalized AI Assistants

00:35:51 - How to Help the Open Source AI Movement


Subscribe for more insights on AI tools, productivity, and Open Source AI.


Tool Use is a weekly conversation with the top AI experts, brought to you by ToolHive.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
I think we have a way to give people superpowers here if we play our cards right, and that's really what I want to do.
People aren't aware, I think, of how easy and accessible these tools and these ideas and these possible futures are.
You don't have to learn how to train a model and how to curate data for yourself just to have a model that's fit to you.
I firmly believe that open source AI is the key to a positive future for humanity. It might sound a little

(00:22):
hyperbolic, but as you start to look at the impact AI is going to have on society all over the world, it becomes even more important that we understand how systems work.
And when they're in the hands of a few organizations, in this magical black box that we interact with, there's a lot of potential negative consequences. So in episode 53 of Tool Use, brought to you by ToolHive, we're talking with Alignment Lab

(00:42):
AI. They're a lab that was born to democratize AI through radical transparency and distributed ownership.
We're joined by Austin Cook, the CEO and director of the lab, Chris Hardwick, an AI developer, and Jordan Parker, marketing and business development.
We're going to get into why open source AI is important, ways that people can leverage it today, and a strategy to get it into the hands of more people so we can make sure humanity has

(01:04):
a positive future. Let's get into the conversation with Alignment Lab AI.
I think it's important, at least right now, before we have sort of settled into what the technology is going to be. You know, I think when we got our phones, our phones became that thing, and it's very difficult to change that once it sets in.
And so open source, because one, honestly, it's like a better

(01:27):
business model than I was anticipating it being.
It turns out there's a reason Google's stuff is all free, usually. But two, you know, if you set good standards now, before there are standards, the bar is just really high.
And I think that's, at least on the alignment front, right, like this is the play. The way to get the most sort of human centric alignment, I think, is to align to the people and sort

(01:49):
of set those high standards while we can.
What is alignment to you?
I think it's really not nuanced. I think it's an AI that does what you expect it to do, when you expect it to do it, and you don't have unknown consequences from that, right? Like there's no secret stuff happening under the hood.
It's not blackmailing your grandma, you know, it's not

(02:10):
doing a bunch of unexpected things and just looking like it's doing what you want. And I think that's pretty easy to hit. It's just hard to hit from the perspective of a company, right?
Like, I'm trying to serve it to a billion people. That is an entirely different problem than one person with their own model that's just fit to what they care about.
Open source components in tech

(02:32):
and specifically in AI really map onto a lot of the same analogies I saw across all the different industries that I worked in.
Specifically, one of the last chapters of my life was in the agricultural space, and the decentralized food networks are facing the same challenges, the same issues, the same hurdles. And some of the same solutions

(02:54):
would apply, those same sort of patterns that I saw across all these other different types of industries, about having access to the tools, about having a decentralized solution to whatever that is, and having direct impact in the communities who can adopt it.
We were experiencing the same thing with search. And so the same issues behind

(03:17):
having a centralized power within search, within the social media platforms, where we're all using the same two or three, and now the same issues in the same three or four labs distributing AI to hundreds of millions every single day.
We face the same sort of black box issue of not knowing,

(03:40):
in particular, one, how much of your data you're trading away, what the actual algorithm is being gamed for, and what the ramifications of that are 10 years down the road.
And so everyone started playing the SEO games in early search,

(04:01):
you know, putting in all the hidden words, keywords. So it's, you know, whatever the incentives are, they're going to drive the market. And in a competitive marketplace, that's always going to lead to unexpected outcomes in complex systems.
And so we went from search to social to now AI, and it's this same sort of pattern taking

(04:22):
place. And so I think overall, consumers, and especially tech, are much more privy to these issues. And we've had a few at bats now, culturally, in the West.
But I think, and we kind of talked on this before a little bit, about the infrastructure and the resources actually being there to do anything differently.

(04:44):
And whereas once somebody already has so much of the market, like Instagram, there was no opportunity for any other social media platform like Instagram to take hold and really expand from there. Whereas with the kind of combination between hardware, software, and open source models, we're really

(05:05):
at a crossroads where the majority of tech users, and perhaps the early adopters in the sort of normie cycle of technology use, have the opportunity to make different decisions and have drastically different outcomes and experiences, and get to be in the driver's seat of what that looks

(05:27):
like for them. To reference your Instagram example, there were still other types of social media you could go to, but Instagram just totally captured the online photo album market.
But with AI, it really feels like this is going to be people's interface to compute, their interface to information.
Like it's going to be so much more all-encompassing that making sure we don't have the same type of entrenched structure that forces a singular direction

(05:49):
for the vast majority of people accessing the Internet seems incredibly important.
I mean, right, like this is just software we're looking at. I think there's a lot of
abstractions and it's very hard to comprehend what's going on
under the hood, especially if you've never worked with it.
But really it's just because it's cheaper and faster if you
can hit a button and have a thing that just makes a website
than it is to make a website, right.

(06:09):
And I think AI being primarily a tool which just lets you have the data that you want without having to construct it manually yourself, that's a way of kind of turning everything that's labor intensive, at least in terms of digital content, into a button.
And so, you know, with the good and the bad that'll come

(06:30):
with that, I think primarily, yeah, it's gonna bleed into everything, right? Like it's just cheaper to do, and I think that's a good thing.
I like that we're getting an overhaul.
Things were pretty entrenched, right?
Like they still are.
One thing some people have apprehension towards with the open source avenue is just strictly capability, where in order to make a website with a single click, you go to the premier model or companies like

(06:51):
Bolt or v0, those types of things, because they're powered by the most cutting edge models and able to get the best output.
So when they're trying to contrast the closed source state-of-the-art versus open source, maybe they'll do a single attempt, see the results, and then walk away. How can we encourage people to explore open source models more?
It's much easier to get a model to be very good at fewer tasks.

(07:15):
And so the frontier models, typically they're made to be good at everything. And, you know, there's a lot of downsides. Like, if you think of trying to learn a language, you know, you can learn a new language a lot easier than you can learn all new languages at the same time, right?
And anything that you can have a frontier model doing that's not sort of an abstract kind of composite task, like reasoning or

(07:36):
something very general, like a specific utilitarian use case, you can definitely do that with an open source model.
Definitely for a lot cheaper. Definitely accurate almost every time, right? Like much better, as long as you're narrow and you target it well.
I mean, it's just an efficiency thing. You know, the scale of the models goes up, and the cost increases nonlinearly, in that it becomes more expensive faster

(07:57):
than it increases in parameters. And the frontier models are just gigantic. I mean, they're just enormous things that have to run on supercomputers.
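The nonlinear cost the speakers describe can be sketched with the common scaling-law rule of thumb that training compute is roughly 6 × parameters × tokens. This is a back-of-the-envelope illustration only; the model sizes and the 20-tokens-per-parameter ratio are assumptions, not figures from the episode.

```python
# Back-of-the-envelope training cost, using the common rule of thumb
# C ~= 6 * N * D FLOPs (N = parameters, D = training tokens).
# Model sizes and the 20 tokens-per-parameter ratio below are
# illustrative assumptions, not numbers quoted in the episode.

def train_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6.0 * params * tokens

for name, n in [("270M specialist", 270e6),
                ("70B generalist", 70e9),
                ("1T frontier", 1e12)]:
    d = 20 * n  # train on ~20 tokens per parameter
    print(f"{name}: ~{train_flops(n, d):.1e} FLOPs")
```

Because the token budget scales with the parameter count here, compute grows with the square of model size, which is the "paying those costs quadratically" point made later in the conversation.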
And a lot of the times it's like, man, I could use ChatGPT to code, and then I just go use like a glass of water per inference. And that was before it took like 10 minutes to go through all the tokens to get to your code. And it's like, or you could just take, like, you know, a very small model and just train it on your code, and it learns your libraries and your programming

(08:18):
language. That's probably a much easier target to hit, and you get a much better code assistant out of it, right?
And so I think it's about making training accessible to people, or reducing the cost to customize the models, or automating away a lot of the pain of that, the sort of technical hurdles.
But then we'll get there. You know, there's nowhere else to go but there, I think, as far as infrastructure.
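One reason customizing a model is getting cheap is low-rank (LoRA-style) fine-tuning, where you train two small matrices per layer instead of the full weights. A minimal back-of-the-envelope sketch; the hidden size and rank below are hypothetical numbers for illustration, not the lab's actual setup.

```python
# Why LoRA-style fine-tuning is cheap: for a (d x d) weight matrix W,
# LoRA trains A (d x r) and B (r x d) with r << d, leaving W frozen.
# The hidden size and rank below are hypothetical, for illustration only.

def lora_trainable(d: int, r: int) -> int:
    """Trainable parameters for one (d x d) layer under LoRA with rank r."""
    return d * r + r * d  # A plus B

d, rank = 4096, 8
full = d * d                     # full fine-tune updates every weight
lora = lora_trainable(d, rank)
print(f"full: {full:,}  lora: {lora:,}  -> {full // lora}x fewer trainable weights")
```

At these assumed sizes the per-layer ratio works out to 256x fewer trainable weights, which is the kind of gap that turns "rent a cluster" into "train overnight on a home GPU."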
And one thing that you kind of touched on with the efficiency

(08:39):
aspect is the environmental concerns. People are getting more and more concerned that by leveraging AI for more tasks, because it's not the most computationally efficient process yet, it's just causing more and more electricity to be used. And if it's generated by unclean methods, how can open source help with this?
Resource constrained development

(09:00):
in AI in particular has historically been very impactful, and it still continues to be. I mean, while the models get bigger every day, they also get smaller. You know, I mean, look at what Google released yesterday, right? The Gemma 270 million, like, very, very small. And if you notice, they posted a benchmark with it, like, hey, this thing is way smarter than something that's much bigger.

(09:21):
And that thing that was much bigger was the last best model at this size, right?
And I think that's going to keep happening, because while there is a hard stop on scale, where there just is not enough electricity beyond some point because you're paying those costs quadratically, on the inverse, compression, it seems like we don't even know where the floor is.
It seems like we're just still compressing the models more and faster all the time. I think if you'd shown somebody

(09:44):
from like four years ago that Gemma model that's 270 million parameters, they probably would have had a panic attack.
You know, it's fully conversational. The idea of something that you could train in a few hours on a home GPU that's fully conversational, I mean, it's still absurd, if you ask me.
I think that's where it's going to come from. I mean, this is just like when

(10:05):
Nous Research, even back then, came up with the sort of RoPE extension, right? There was that blog post where somebody had been tweaking the theta value for the rotary embeddings. And I think it's the only blog post cited in the white paper. And then they just found out the models could just extend their context really, really long, which was unheard of. And then it was like four or five

(10:27):
days before a bunch of open source researchers got together, jumped on a supercomputer that was doing open source research, trained out this big Llama model, and released it. It had like a million tokens of sequence length, and it was the first sort of instance of that that was accessible and computationally tractable to use. And so, you know, I think that's going to continue to happen.
One area that I'm a little

(10:48):
concerned about is just the societal trend towards not caring about privacy, where they've just put Alexa in their living room. They're happy with Gmail scanning every e-mail that comes in and out.
AI feels like it's going to amplify that risk. And with open source, we're able to, you know, keep things secure. But what do you think about the general societal view towards it?

(11:09):
Do you think enough people care that that's actually going to be a selling point for open source? Do you think there needs to be something like a catastrophic data leak for people to wake up to it?
We've seen the punchline for the joke that was social media, in that when it first started, it was like super cool and it was super bright and fuzzy and all that. And people willingly gave their data, you know, and it was then

(11:31):
kind of put in our heads, well, it's not real money. It's not real information. It's not going to hurt me.
So I think right now we're still seeing the effects of that, still seeing the effects of people, we kind of convinced ourselves to not care

(11:51):
so much about how much data was being taken. And now we're kind of in this boat where there's so much going through, it's like, well, I hope that doesn't do anything, so I'm going to continue believing that it doesn't. And what can I do about it now, you know?
And now when you talk to anyone, everyone knows that their phones and smart TVs and everything else are, you know, spying on them. They just don't think about it.

(12:12):
You know, it's just like this baseline anxiety that everyone has.
So one thing that we're doing with Centre is, we're trying to, and this could touch back on that, we're trying to redefine how people interact nowadays, trying to really simplify, help people interact with their

(12:33):
computer in the first place. If we're able to get this so it's self organizing, so it's running your computer space for you, protecting your data and all that stuff for you, now it doesn't seem like this monumental effort to take your data back from these big companies.
AI is new in the market as far as a product that you can sell

(12:53):
as AI, but it's not a new product. We've had it for years. And this is what people talk about when they refer to the algorithm, right? It's recommenders.
And I think recommenders have had a tremendously serious impact on our sort of social organization, just as a species. And because it's obscure and hard to comprehend, there's really no comeuppance, right? Like we don't really know how much data is getting taken from

(13:15):
us, because it's being taken in weird ways, for weird abstract use cases, to organize these crazy systems to show ads around, right?
And I think that's one benefit of it being so out in the open, as starkly ridiculous as it is now with the chatbots: people are now aware that AI is absolutely a thing and that their data absolutely is valuable.
And so we can give people the tools to control what it is that

(13:38):
they are sharing, from like a first principles standpoint. Like, you don't have to be an expert who knows how to go through Google's menus to find out what Takeout is, and all these other things, just to remove yourself from that system.
It should be, you know, set up some other way, where if somebody wants your data, they should ask you for it. You should be compensated for it, because it's valuable, because you capitalizing on

(13:58):
that first should be the most important thing, especially if we're coming into this era where AI is being integrated into everything. You want your AI trained on your data, not trained on your behalf using your data, right? Like by some middle party. Because otherwise we're stuck right back where we began, with all this entrenched sort of corporate interest on every layer of everything.
Yeah, I think one of the

(14:20):
issues that's really highlighted here is this being an ephemeral, uncontextualized concept, that really, as a society, we're kind of only waking up to over the last four or five years, about how important data is, how it's being used, how it's being extracted.
And I think on a broad scale, I would say most people aren't

(14:43):
even aware of the amount of data that they're giving away and how that's being utilized and funneled towards a profit model of some sort. And you know, I think if you
were to extrapolate it towards an example of a market

(15:05):
researcher standing in your living room and writing down notes about what your family does every day, you would have some sort of idea that, hey, this is probably pretty weird.
But then you have that person disappear, and it happens to be contained in the device that's always on. And like Austin was highlighting, about GPS and microphones and cameras and

(15:26):
habits. I just don't think that there is a contextualized understanding of how encompassing it is across every digital medium, pretty much every device, smart TVs, smart refrigerators, you know, your laptops, your phones.
And I think to highlight something else that you were also talking about, Mike, is one of the issues of changing

(15:50):
human behavior of some sort. And so we're sort of the first overarching generation that has picked up technology and just run with it, without thousands of years of cultural design and education, and getting to see its impact

(16:10):
across other countries or other modalities of living. There's not like a course that we all went to in high school. You know, this isn't a component of our understanding as a species for the last 50 years. And so a lot of this is, OK, how
do you get the information out there?
How do you educate people appropriately?

(16:31):
How do you give them access to the resources and other options? And how do you essentially smooth the friction of that?
And like we've talked about a bit, in my background with agriculture, it's the same thing. OK, well, how do you keep people from buying conventional monocrop

(16:52):
foods when it's the easiest and it's the cheapest? OK, well, people start to learn about subsidies, and the fact that they keep most farmers poor, and that if they weren't getting those subsidies, then it wouldn't actually be a system that could take place. It's a centralized system, so the waste is there even more. So what do you do? Like Chris was highlighting, the

(17:13):
grassroots components, right? You bring them closer to the
source, you have more direct one-on-one access to the source,
direct communication to those resources and the sources.
And slowly it'll proliferate out into the communities, out into those different subcultures who are the early adopters, and out from there. But yeah, it's a multifaceted challenge across so many different touch points.

(17:36):
And you know, we only have so much time, and there's so many issues and so many topics, and the news cycle is so fast, that it's like, OK, maybe everyone's not freaking out about AI taking their job. But they haven't had the time or the opportunity to learn about how they're kind of contributing to the benefactors that are driven by market forces, that

(17:59):
are driven by the incentives for profits. And not necessarily having that tangible, contextual relationship to it, I think, makes it just more challenging.
There's a reason that Google
took a picture of every square inch of the surface of the planet, like on the ground and in space, like 10 years ago, right? Like, there's a reason that all their products are free. It's because, you know, the data

(18:23):
is the thing that's valuable. The depth of information that gets created is just so huge.
And like, to some extent, you know, it's not terrible. There's sort of a security through obscurity, in that you're one of billions of people who are producing untold, gigantic volumes of data, because it's just cheaper to store data than it is to sort it. But what we're looking at with AI is, now it's getting cheap to sort it too. And so it's probably time to, like, strap in and say, OK, let's

(18:46):
manage this and curate it, because it's like a garden, right? Like, you keep your data well maintained and you understand it, and you know who's getting what, and that becomes valuable for you.
Because if all the labs, and I know this because I train AI, if all the labs only have the Internet that's publicly available to pull from, in two or three years that's all going to be garbage. It's going to be a lot of cheap AI and very low quality things and marketing.

(19:08):
And then, you know, just really difficult stuff that's going to be impossible to clean, and suddenly you don't have a human signal. You've only got worse AI than your AI to train on. So where do you get the data?
Well, if everyone's curating and managing it, it's theirs. You have to go buy it from them. And I would pay for it from people. I pay for data all the time. So I'm sure that Google has the coffers, right?
I guess in a lot of ways this comes back to just not, as a species,

(19:32):
having the contextual history of how to deal with this and what this looks like. There's not a marketplace for it, almost, you know? There's not the layman's ability to understand, OK, maybe I'm a farmer and I do this, my data is only worth this much. Or I'm a CEO at a Fortune 500 company,

(19:54):
my data is worth this much, and here's how I can sell it, and here's what it looks like. So I think part of the interesting question there is how much are we going to give away for free, and how much are we going to have the opportunity to kind of stop the bleeding where it's at and start to migrate to

(20:14):
other options. And so, like Austin was highlighting, the tools have to be there, they have to be on par. And then going back to the education, people just have to be aware of the situation.
And I think that we stand a better chance now than any time in the last 20 to 25 years to do anything about that, with the hardware being there, with the software being there, with, you

(20:36):
know, a 270 million parameter model, you know, coming out swinging. Yeah, I think it's been a long time coming, but I think the opportunity is better now than ever.
When we first hopped on the call, Chris was training a model as we talked, and that feels like a vital step to actually reach this proliferation, where people can train their models easily. And in the previous episode, we

(20:57):
had Evan Armstrong on with Augment Toolkit, which allowed you to train a model off of your own data.
What do you think is needed to kind of make that more mass market? How can we get someone who's AI curious, maybe like a tech enthusiast, but not necessarily with this level of developer expertise, to be able to confidently say, I want a model that can do this, I'm just going to do it?
There's projects coming out

(21:18):
that are just going to morph around the user's workflow. You know, I mean, like, you always see those tools. My friend, when he uses Cursor, and I think Windsurf does this too, it'll pop up and say, hey, it seems that you always run this like this, so I'll just write that as a rule. Or like, I

(21:39):
see that you hit no telemetry every time, so I'll just go ahead and turn all that off and just do that in the future. AI is going to intuitively kind of form around the user's workflow and the user's personality and, you know, things like that.
So right now we're at the stage where some could

(22:02):
use Augment Toolkit or, you know, LoRAs or whatever, and train up their own model for their own use case. It is a little technical, and as far as, like, is it adding extra data, is there actually reasoning within that data, that's still a little fuzzy in

(22:22):
some areas. But as far as the average consumer, when could they get an AI product that's going to make their specific workflow easier? Sooner than you think.
One massive benefit to open
source AI is the ability to control your data.
Now, when I use MCP servers, I also want to make sure that I'm

(22:43):
safe with my data, and that's why I use ToolHive. ToolHive makes it simple and secure to use MCP. It includes a registry of trusted MCP servers, it lets me containerize any other server with one single command, and I can install it in a client in seconds. Secret protection and network isolation are built in. You can try ToolHive too. It's free and open source, and as you know, I love open source. Learn more at toolhive.dev. And back to the conversation with

(23:05):
Alignment Lab AI.
We're actually selling these
machines. We're AI engineers. We need a computer to handle our workflows and experimentation. It's got 20 terabytes of storage, 64 gigs of VRAM, $5,000.
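As a rough sense of what 64 GB of VRAM buys you, here is a weights-only sizing sketch. It ignores KV cache and activations, so it is a lower bound, and the model sizes are illustrative assumptions, not a spec for the machine being discussed.

```python
# Weights-only memory estimate: parameters * bytes-per-parameter.
# Ignores KV cache and activations, so real headroom is smaller.
# Model sizes below are illustrative assumptions, not a spec sheet.

BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weight_gb(params: float, dtype: str) -> float:
    """GB needed to hold the weights alone at the given precision."""
    return params * BYTES_PER_PARAM[dtype] / 1e9

VRAM_GB = 64
for params, label in [(8e9, "8B"), (32e9, "32B"), (70e9, "70B")]:
    fits = [d for d in BYTES_PER_PARAM
            if weight_gb(params, d) <= VRAM_GB]
    print(f"{label}: {weight_gb(params, 'fp16'):.0f} GB at fp16; fits in VRAM at {fits}")
```

In practice you would also budget for the KV cache at long context, which is part of why a build like this pairs the VRAM with a lot of system RAM and storage.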
And I mean, it's good. So we really did a lot of optimization. It's actually the home lab setup that I've been curating myself, and now I've built a few of them. It's actually the machine that

(23:27):
Mister Chris is running on right now, that he was training a model on at the beginning. And, you know, it's just useful. It's just a machine whose build out is designed for this specific type of workload. You want a ton of storage, you know, so you can hold your data sets. You want a ton of RAM so you can do stuff with large models. You really want to optimize everything for training.

(23:49):
And so we push them now with a pretty light software stack, and it's just Ubuntu, but we've already handled all the kernels and all the weird sort of flash attention oddities, and sort of got everything nice and set up so that you can just train. And then we try to containerize that and put it aside. And over time, we're gonna be building out more of these tools that we've been working on

(24:10):
for, gosh, three years now. Much like Augment Toolkit, actually. When I met Evan, it was very much similar to when I met Mr. Chris, where it was like, hey, here's this idea that I had, and, hey, here's this idea that I had, and, oh, these are all really similar.
And so we've done quite a lot. I mean, we've taken a slower way about it, but we've basically been trying to automate everything, you know, and make the entire pipeline self-contained.

(24:31):
And that way we don't have to do it for you. And when you're at home on your computer, you know, you don't have to learn how to train a model and how to curate data for yourself just to have a model that's fit to you. And I think that's the place where we can go.
But I think for now, just having the boxes is autonomy, right? Like just being able to say, hey, I'm going to go train a model on my own data that I collect, and store it at my

(24:53):
house, and I don't have to rent that from Microsoft or share it with anyone. You know, it's just fantastically useful, especially if you're like a startup and you're like, I just need machines. I don't need a subscription service to a $7,000 a month cluster of GPUs that I have to use all the time to make my money back on. I just need to run experiments to see what this is and how it works, you know? I think it's a great way to do

(25:13):
it.
Could you give a couple concrete examples of what type of experiments people would want to run? What kind of capabilities, what type of models? Like, if someone's on the verge of going for models that they're hosting or training in their own house or at their own business, what type of things can they expect to be able to do?
I think the sky's the limit. I mean, it's hard to say. It's like, whatever you can capture with data, you can make a model do. I think

(25:34):
there's obviously the compute constraints, just one computer, but I think you could do a lot with one computer.
I think the thing that the industry kind of hides from the people that aren't in the weeds on it is how cheap the game actually is if you're just optimal. You know, you can't pay someone to be innovative, and so the labs wind up paying much more when they have much more money. Everything looks like it costs about a billion dollars when you've got a billion dollars, but when

(25:56):
you don't, it's not so dire. And I think as far as having the ability to just automate your own tasks, run whatever you want in the open source, I think that's available, right?
We're actually going to be pushing another sort of offer on our page soon. It'll run Kimi, the 1
trillion parameter model, all in memory.

(26:18):
It won't all be VRAM, right, but you won't have to quantize it until it doesn't work, either. So if you really want a GPT-4 class model, like a trillion parameters, even that's available to run fairly cheaply. I mean, for the cost of, it'd be like a really nice gaming computer, but it's got a lot of GPU in it. It probably is a really nice gaming computer, if you wanna go that way with it.

(26:38):
But I think this is the sort of thing that'll over time become
cheaper. And so eventually the boxes will
be disks, and they'll be flat, and then they'll just be in the
walls. I don't know.
I think we have the benefit, too, of being at a time when it's
kind of the opposite. It's the benefit of the commons: a
piece of information should only cost the one time to

(26:59):
create. And so whatever sweat equity
somebody puts behind that, we're now actually all able to
capitalize on. And so I think these software
and AI components are going to become easier and faster, and
you'll have your option between more specialized models or larger
ones like Kimi K2 that are more capable across more domains.

(27:21):
And so all we're seeing right now is the kind of
unification of decades of sweat equity.
And you don't have to wait for the big lab to do this one model
that can finally write about the nuances of healthcare or the
nuances of holistic management within regenerative agriculture.

(27:44):
And that was one of the interesting things about just me
kind of waiting for these tools to come up, as somebody working
parallel to the space but never having the high-end skill
set to do all the work myself, right?
But I can capitalize on all the work that's already been done
and utilize that with whatever my company's trying to

(28:06):
do, whatever my startup's trying to do.
And so, yeah, I think we're at the point now, this
really awesome crossroads, where the hardware is cheaper than
ever and better than ever. Companies like Alignment Lab
are focused on distributing that and getting it out to the
startups, the researchers, the developers, the high-end early

(28:29):
adopters to help educate the masses on this connection of all
these different resources coming together.
I get excited about the idea of being able to have a little box
running where nothing leaves it.
I can set up home automations, I can have the always-on
microphone and not have to worry about data being sent anywhere.
It just unlocks all the capabilities.

(28:50):
I'm curious about what other types of personas. I fall into the
camp of the experimental AI engineer, already leaning team
open source and team local models.
But who else do you think would be the type of person that would
benefit from picking up one of these rigs and setting one up,
whether it's in their home or their business or something
along those lines? I think anyone concerned with
privacy, and anyone who's trying to leverage AI for an

(29:12):
important use case that requires, you know, mixing
in personal information, or just private information of their
own. I think this is a good way of dealing with it.
As long as your computer is secure, your data is secure; you
can even turn the Internet off, you know. And not
having the ability to do that, I think, is not as common as it
should be. Places like, you know, hospitals
and banks and things of that nature have their own

(29:33):
infrastructure for dealing with such things, but it's very
inaccessible and very large and expensive.
And so I suppose, other than sort of startups, maybe
companies concerned with privacy, maybe just the lone
sort of hacker-engineer type. I think overall it's one
of those things where it's going to become important for
everyone. I think right now it's important

(29:54):
for us. I can't say if the sort
of impact of access to AI-scale compute at a local level will
change the shape of the industry quicker than it would be safe to
make bets on. This is going to shape every
industry, so, to answer your question: really, everybody.

(30:14):
AI is going to. Touch every industry.
So it's, so it's really going tocome down to, I mean, I really
hope. I'd really hope to see.
A nation of decentralized local AIAI boxes and stuff, everyone
going to the same Gemini and chat CBT for, you know, for, you
know, like their solutions. And I think those who get ahead

(30:37):
of it now are going to see an exponential return in,
you know, the job market in the next 5 to 10 years.
You know, it's ramping up faster and faster and faster.
So, I mean, to answer the question of who can
benefit from this: anyone who works for
a living, anyone who needs to get anything

(30:59):
done, like, in the future. Literally everybody.
And one thing Austin's brought up in the past is what happens
when you can't rent a GPU? What happens if either
the demand's too high or you just happen to not fall into
some company's favor and you don't have access to it?
As we move into this next era, the AI age, that's going to be a
vital component, and that can be gatekept away from us.

(31:19):
Yeah, I think it's very lucky for us that there are as many
3090s in circulation as there are.
I think a 3090 is still in the top few in terms of price
per total amount of AI training compute that you can get out of
it. And so, yeah, we should build it up all
we can, right? Like, we don't know what the
future holds, but we know that the market likes it when you
have to pay them for access to things.

(31:40):
And if that's going to be a barrier that risks something
as important as your data and your ability to
interface freely with information, then we should
probably get some compute in the houses.
Although I'm hoping to make it cheaper than needing to spend
$5,000 in the future. But right now, I think $5,000 is
more affordable than it was to run a model at the scale
(32:00):
that we can now. What do you see as the roadmap,
the vision?
What is the direction that Center is going?
What I'm most excited about is the assistant, Center, that I've
been working on, that's going to live on these boxes. It
has personality. It is going to watch you,
and it's going to change its own personality and workflow

(32:24):
to work around the user while, at the same time,
helping to organize the user's life and workflow, and take raw,
messy human data and turn it into actual plans that work
for the user. You know, so this
is an example of AI living symbiotically with

(32:45):
humans. The plan, I think, from the outset, at least to me,
has always been to build the things that I was most greedy
for. I think I want the Tamagotchi
that learns on its own, based on just whatever I'm doing, that I
don't have to do any work for. I wouldn't say that the
angle that we're pursuing is something where my grandmother
is going to fundamentally be excluded from the product that
we're building forever, right? I think right now it's technical
(33:06):
because right now it's just a technical time.
Like, the industry's still adapting and the technology is changing
every 5 minutes. But as we settle down, I think,
and we continue to learn the very interesting things that
we've learned so far about how to do this optimally, and
begin to implement more and more of the systems that we've been
designing, that we've been building,
what it's gonna look like for the end user is
something very simple: just having a computer that

(33:27):
kind of understands them, and not in a way that's sort of like
ChatGPT, where it's just kind of remembering whatever the
list of recent facts is that you had said, you know, but
something that is actually built out of your data and sort of
curated over time to help you be as productive as you can
be about things you care about, right?
Like, knowing sort of what the questions are that you would

(33:49):
ask if you could think of them, you know, and answering them for
you before you have to ask them. Sort of that level of
enablement. Because I think we have a way to
give people superpowers here if we play our cards right,
and that's really what I want to do.
I think there's a phenomenon that we in this room are all
aware of, that people who are outside of the space are not,
wherein we've seen what happens when people use AI, right?
And they just go through this cascade of, just like, self-

(34:11):
actualized progress in every single direction.
I mean, they just become completely and very much enabled
to just sort of interface wherever they need to and know
what to do, and when to do it.
And I think, like, if we can just... that takes a lot of work,
though, right? Like, that takes a lot of
understanding, sort of, the cleanliness of the data that
you're getting. I guess that's just, like, an
analogy, right, when I say cleanliness of

(34:32):
the data. But I mean, like, you know, in reality, this should
be accessible to everyone. You can talk to the freaking
things, you know; it shouldn't be a difficult, technically
challenging interface. And so if we can give that to
everyone, I think that we'll have plenty of room to gloat
about the alignment name. I think it really highlights a
lot of the things that we're discussing here and bringing
together the simplified version of alignment: that

(34:53):
the AI does what you think it would do, or what you expect
it to do. And so, you know, kind of like
the self-improvement phrase goes, treat yourself
like somebody that you're taking care of, right?
So if your AI treated you like somebody it was trying to take

(35:13):
care of, how would it help you in these ways?
And I think any software company, any AI software company, heading
down that direction is probably going to do well in the next
several years; if not financially, at least they'll do well
for the tangible good that they're producing for

(35:34):
their customers, their subscribers, their community
base. So I'm thrilled that you guys
are working towards this. I think the vision laid out is
what we need. I think that the plan to execute,
getting people the hardware they need along with the
software, and gradually implementing this on a wider scale, is
exactly the right path. How can people help?
How can developers, enthusiasts, people who also want to see this
(35:56):
future help ensure it happens? I think that the incentives are
such that if you are able to build AI, you're able to build a
safe AI. And if you have a reason to
build AI and you're, I mean, you're working in the open source,
then you're just not sort of incentivized to do that beyond
whatever the thing is that makes you crazy enough to stare at
numbers long enough to be able to do it in an innovative way

(36:18):
and on new things. Then we should be OK trusting that
generally the state of open source software will be as it's
always been, in that it is generally just made to be useful
for as many people as possible, designed to build products on,
designed to build good tools and things that are available for
people as fast as possible.
Because I don't think the idea of open source is fundamentally
incompatible with, like, a business model.

(36:39):
I think I get a lot of conversations where the
assumption is, like, well, you care about open source, but
like, how do you make money? And I'll just go back to Google
again, right? Like, Google open sources
everything they do, because being infrastructure is really
valuable. And I think right now we don't
have that. And so until we have a
stable bed of infrastructure, that's all there is to build.
And so whether or not it's gonna be safe really comes down to

(37:00):
like, well, are we going to let our infrastructure become this
sort of duct-taped-together pile of API keys and sort of rented
services from ephemeral compute at, like, a premium?
Or are we just going to do the cheaper thing and, you know,
just buy the box and run it at your house?
You don't have to worry about it as a bill, and you get your
tools so that you don't have to pay for them every single hour
of the day. And I think just enabling
people to be able to do that in a non-exploitative way is all
(37:21):
you have to do to get safe AI out of it.
I think a lot of this comes back to the lack of robustness in a
centralized system, right? So how do you have safe AI?
You can't have the same sort of safety standards across a
billion people in North America, you know, or across the

(37:44):
West, like it's not going to be the same use case.
It's not going to be able to help the same people.
So as an uber-generalist model, or uber-generalist provider of AI,
that's going to be the limiting factor there.
So being able to decentralize it, bring it back to the
individual, contextually relevant use cases, is truly how we're

(38:06):
going to find the safety in thatit's going to happen by itself,
right. And and same thing like Chris
was talking about with the complex community that is open
source. It is a self-organizing system
of some sort. And if you do not have the
technical chops or skills to contribute to that, being a user

(38:33):
of the tools that matter to you, I think, is just as useful.
Because if somebody builds the perfect open source software
but nobody uses it, how perfect is it, right?
But if you lack those skills, perhaps find the 80/20 for
yourself that is the most important across your values or
your North Stars. And if there's a software that

(38:55):
you can use, then use that, and share that amongst your peers or
your colleagues that are in the same market or industry.
And so, in the same way that we've had to shift other
behaviors and modifications of technology as we've
found out about them, it's going to be a growing process.
It's going to be a process of building the right tools for the
right people, distributing those tools out, communicating,

(39:16):
educating as much as you can, or just using what is most useful
for you outside of the big labs, outside of the, you know,
several providers of tech and AI at the moment.
Yeah, I think there's a bubble right now.
There's a lot of confusing products that are not really
what they claim to be, or they're a scam, or they look like they're

(39:36):
AI but then they're not really that much under the hood,
just, like, an API call to a frontier model that's just prompted.
And I think that's a lot to sort of try to parse out if
you're, like, a business or just somebody who needs to adopt
this to, like, keep up, or you're just interested in it.
And I think one place that is always going to be a good signal
is what is popular in the open source, because there's

(39:59):
not a financial incentive for things to be popular in the open
source. No one's buying ads for open
source products, right? It's just good and useful,
and so it's proliferating based purely on that merit.
Open source is really a gift to the world, to humanity, because
everyone gets to benefit from it. And if you get to benefit from
it, which most people do if any server you're using has Linux on
it or whatnot, you get to reap the benefits.

(40:20):
It really helps to pay it forward.
And if all you have to do is use a product, like for local
inference something like Jan or whatnot, just these
tools that are made in public, in the open, for the
public to use, it just makes it better for everyone.
I think alignment comes from getting this in the hands
of as many people as possible. That's kind of my bottom
line: that humans are naturally good, and how we keep

(40:44):
AI from going rogue is to get it in the hands of as many people
as possible, you know. And how we empower everyone is to
get it in the hands of as many people as, you know, possible.
So believe in yourself, and trust AI.
So we had to build the boxes, because we're engineers
and we needed something to run our workflows and our
experimentation. They're 5 grand: 64 gigs of
VRAM, 256 gigs of RAM, 20 terabytes of storage.
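Since the trade-off running through this conversation is a roughly $5,000 box versus renting compute by the hour, here's a quick break-even sketch. The $2/hour rental rate is purely an illustrative assumption, not a quote from any provider, and the sketch ignores electricity and depreciation.

```python
# Break-even point for buying a local box vs. renting cloud GPUs.
# All figures are illustrative assumptions for the sketch.

BOX_COST_USD = 5_000        # rough cost of the box discussed
RENT_USD_PER_HOUR = 2.0     # assumed hourly rental rate

def breakeven_hours(box_cost: float, rent_per_hour: float) -> float:
    """Hours of rented compute at which buying pays for itself."""
    return box_cost / rent_per_hour

hours = breakeven_hours(BOX_COST_USD, RENT_USD_PER_HOUR)
print(f"~{hours:,.0f} hours (~{hours / 24:.0f} days of continuous use)")
```

Under these made-up numbers the box pays for itself after a few months of continuous use, which is the intuition behind owning your tools instead of paying for them every single hour of the day.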

(41:07):
It's pretty spicy. They work pretty well.
You can check them out; we have a website.
It's alignmentlab.ai. And my word of advice to
everyone is you should pay attention to your data.
I think it's important; if you don't, I think you
risk having to pay, every day, for something that

(41:27):
you necessarily need but wouldn't otherwise have to pay for.
I think making money instead is an alternative
that we can get, as long as we're just careful about what we're
doing and the consequences of our sort of consumer habits.
Yeah, I don't know. Maybe it seems cliché on, like, the highest
level, out of context to this conversation, but I think

(41:47):
your actions actually matter. I think your choices actually
matter. I think you actually have much
more say in whatever conversation is happening in
society. You just have to educate
yourself a little and put a little more time into it.
But I think across any part of life, you're going to reap the

(42:08):
benefits from that. And I think this is the same
thing. Your choices behind what tools
you use matter. And I think your ability to talk
about it even amongst your friend groups and your peers and
your business and your CMO or your COO. People aren't aware, I
think, of how easy and accessible these tools and these ideas and

(42:29):
these possible futures are, you know?
Thank you for listening to this conversation with Alignment Lab
AI. I know it's a bit heavier than most conversations, without the
same pizzazz as our tool demos, but the work that Austin, Chris,
Jordan, and the rest of the team are doing, I think, is very
important. I think a lot of people who have
issues with AI don't really realize that open source AI
solves a lot of those issues. And if we can make sure that

(42:51):
open source proliferates to more people, with the use of Center
and other AI tools, I think it's going to have a net positive for
humanity. So I want to give a quick thank
you to ToolHive for supporting the show so we can have
conversations like this. I really appreciate you
listening in, and I'll see you next week.