Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
We're rolling.
Michael Heinrich from Zero G.
Welcome.
Great to be here.
Thanks for having me.
Absolutely.
First off, congratulations on winning the Kaito leaderboard battle royale.
Zero Gravity will be listed on Kaito Connect, I think on Wednesday or this Friday.
(00:25):
That's exciting.
I definitely have some questions around that, but before we get into questions around marketing, positioning, branding.
We'd love to first hear about your origin story.
How do you get into crypto and more importantly, like why do you stay?
Yeah, I've known about Bitcoin probably since 2011, but didn't really pay much attention to it at that time.
(00:50):
And really, when I started going to graduate school, more than 10 years ago at this point, is when I started to learn about Bitcoin, because I heard Marc
Andreessen talk about it.
I heard Tim Draper talk about Bitcoin.
I was a DFJ fellow at that time.
I was at Stanford.
(01:11):
That was really my initial introduction into it.
And basically came in, like many people, just bought some Bitcoin on Coinbase and then read the white paper and totally got hooked.
When I really got hooked was during the ICO stage, because I just saw this explosion of creativity in 2016, 2017.
(01:34):
And I knew I wanted to be part of the industry from an operator standpoint, eventually.
At the time I was running a very fast-scaling Web2 company in the technology health and wellness space, and even tried to see if there were applications of blockchain within that company.
(01:55):
For example, there's a way of aligning different suppliers via blockchain and then openly sharing where different aspects of the supply chain are moving.
But it was way too early at that point.
Everybody was like, what's a token?
Like, I can't hold this on my balance sheet.
I'm not going to share information with my competitors.
And I was like, okay, too early.
(02:15):
So I figured we'd revisit this a little bit later.
And then kind of scaled this company quite a bit.
But then late 2022, I got a call from my classmate Thomas, and he basically said, hey, you know, we've invested in crypto and a bunch of things together.
I know you wanted to do something in this space.
I invested in a company called Conflux. Ming and Fan, two of the co-founders, want to do something more global scale.
(02:40):
Are you interested in meeting with them?
And I basically said, yeah, why not?
You know, I'm open to something new.
Met with them, and six months of co-founder dating later,
I basically came to the conclusion that, wow, they're like the best engineers and computer scientists I've ever worked with.
And I don't care what we start, we have to start something together.
And so that was the origin of Zero G. That was in May 2023.
(03:05):
Yeah.
When you first met your co-founders, did you have the idea at that time to start Zero G as the first AI L1?
Was that the initial product that you guys sought to build or was it something else?
Well, it really started with the team.
So then it went from, how do we take this amazing team of great computer scientists and kind of business backgrounds and then build something in the space?
(03:35):
Just to give you a sense, Fan has two gold medals in the International Olympiad in Informatics.
He's an MIT computer science PhD, wrote many top academic conference papers together with Ming.
He's a professor at the University of Toronto as well.
And Fan was actually Ming's intern initially at Microsoft Research.
(03:56):
That's how they met.
And then Ming's been at Microsoft Research for 11 years and wrote some of the key papers in distributed storage and scalability.
He wrote some of the first AI algorithms for Microsoft Bing as well.
So like really, really strong kind of backgrounds.
And so we took that expertise and said, we want to find something that's at the intersection of: what are we passionate about?
(04:20):
Where do we have domain expertise and what's the major unlock for the space?
And it took us probably a good like six months to really figure that out.
Just because we started speaking with builders in the space, and initially we were thinking, well, maybe there's something on the interoperability side that needs to be solved, because
the user experience was still kind of lacking at the time in Web3.
(04:43):
But we eventually settled on AI as really where our passion lies and where we actually see the biggest impact that blockchains can make as well.
And so that was kind of late 2023.
And since then, I mean, the mind share of AI plus crypto has just taken off.
I mean, the majority of conversations in crypto have been around AI.
(05:07):
And so Zero G has really kind of, you guys are in a perfect sweet spot right now.
And really, I feel like there's a lot of momentum.
Tell us about Zero G and kind of the key aspects of what makes it different from other L1s.
You know, this idea of the first AI layer one is a new category, and Zero G is clearly the first in that category, and with the first in any category you can make a bet that
(05:39):
that's gonna win.
Tell us about that and what differentiates. First of all, tell us about the category, AI L1, and then what's different about Zero G versus,
you know, your other layer ones.
Yeah, we purposefully wanted to build a layer one that's specifically designed for on-chain AI applications.
(06:04):
And so we basically designed every part of the stack so that it's suitable for AI types of workloads.
And so we basically have a few components.
One is the chain component.
So that's the layer one that has the consensus and so on.
The data availability piece is kind of built on top of that.
(06:24):
Then we've got a storage layer, and then we've got a compute layer for things like inference and fine-tuning and eventually training as well.
And so all of those components are basically there so that you have a one stop shopexperience.
So if you're a builder, let's say you're coming from Web2 into Web3 and you're like, okay,well, how do I even start?
(06:45):
I want to build an agent, let's say. I have to figure out, what agent framework do I use?
What do I use as my backend?
What like inference methodology do I use?
Do I use ZKML?
Do I use TML?
I need to fine tune my agent.
Like, where do I go for that?
Where do I even put a, you know, vector database?
How do I then make this all attestable on chain?
(07:06):
And so before you know it, you have like 10 different services you need to chat with.
And we just try to make it super simple, a one-stop shop experience, by putting all of the best of Web3 AI at your fingertips.
And so we can't do this alone.
We work with more than 300 projects building on top of our chain, as well as integrating with our infrastructure layer, to make that a very seamless experience for builders.
(07:30):
And so that's kind of where you see a lot of that design thinking come into play.
The idea of vertically integrating all the services that developers need, what was the thinking behind that in some of the conversations you had?
I'm sure you did a lot of customer development as you spoke with potential developers and customers about the types of services and apps that they needed.
(07:54):
Tell us about those conversations that led to the product decisions you eventually cameto.
Yeah, I mean, we want to target both kind of Web2 developers and AI folks coming into Web3, as well as Web3 builders.
And so there's generally two separate ways of thinking about the world. For the Web2 builders,
it's very much like, hey, why should I switch if I have really simple, you know, API calls and I have everything under the hood with OpenAI?
(08:21):
It's like very simple.
So for them, it was about simplicity initially.
And then the other factor that plays into it is really, how do I build my application in a very customizable way?
And from there, open source is generally superior, because then you can really dial in: which model do I use?
(08:42):
How do I, like, you know, fine-tune the models?
What type of context windows do I need?
And then recently, with, for example, DeepSeek's emergence, there's also the cost piece. I forget exactly.
It's like 500x more cost-effective than OpenAI o1, for example, with the same performance.
And so now certain applications actually also become possible as a result of that.
(09:04):
And so that's kind of the Web2 builder perspective coming into Web3.
For the Web3 builder perspective, it's very much about, how do I create something that's performant enough so that I can actually get up and running very quickly?
So if I'm trying to build an agent, for example, or let's say a multi-agent system that's chatting on X with each other, how do I reduce the friction
(09:28):
and make that performant enough so that I don't have any kind of performance bottlenecks?
So a little bit of a different perspective from a Web3 builder.
I looked at the ecosystem, you have 300 vertically integrated applications and services.
(09:53):
It's really incredible what you guys have built over the last really two years or so.
Tell us about the work involved in doing the outreach to all of these projects, getting their buy-in to integrate and become part of the Zero G ecosystem.
(10:13):
That's incredible work that you guys have done.
Tell us about that.
Yeah, appreciate it.
It's both inbound and outbound.
So the inbound happened because we position ourselves very clearly: we're purpose-built for AI.
And so as a result, we had, you know, data labeling companies come in inbound, we've had agent-type companies come in inbound.
(10:36):
And then the outbound piece is very much being present at events, being present on social media, being present on podcasts like this.
That very much helps with getting a little bit of that mind share.
And then when we reach out to a specific project that we're really interested in, they'relike, oh yeah, I've heard of you guys.
(10:58):
I've heard of you here or there.
And that's been super helpful.
But we have a BD team of about three people as well, so they've been pretty active on sourcing as well as handling inbound requests.
Lately, the inbound has definitely been much more than the outbound.
(11:18):
Well, your team of three, they're definitely punching above their weight with being able to close 300-plus partners.
That's really incredible.
And really big names too.
Now, if I were a developer and I was building an app, the value prop that Zero G gives me is that you guys built Zero G from the ground up based on first principles.
(11:47):
Like if you were to create a layer one with all the services that AI applications need,like what would they have?
What would it look like?
How performant does it need to be?
It's really valuable, what you guys have done.
Like you started from scratch.
Like if you were to build this thing from zero, like what would it look like?
And that I imagine is a huge value proposition for developers.
(12:13):
When...
When a developer is looking at building on Zero G versus another layer one or another chain, what do those conversations look like, and how do you convince them to build on Zero G
versus another chain?
It really depends on where they are, let's say, in the AI stack.
(12:33):
So the conversation can be quite different, because sometimes what they really need is really fast storage, for example, because they're building a data labeling service and they need a place to store things in a very decentralized and verifiable way.
they need a place to store things in a very decentralized and verifiable way.
Other times it's about, hey, I just need a very simple way for our users to be able to use your inference service.
(12:57):
And so it depends a little bit on the kind of situation.
Every situation is a little bit different, but being aligned with an AI-specific chain, that alone is very helpful, because then you're part of this broader ecosystem, which leads others to want to be part of it as well.
So it's kind of a self-fulfilling prophecy, if you will.
(13:19):
I think that alone has been a very strong draw.
And then...
The second draw is the kind of technological superiority in many instances.
And then the final is just around how we think about the cost perspective as well.
So if you're switching from, say, an Amazon S3, we can be up to 80% less than that at the same performance levels.
(13:44):
So for example, on the storage side, we've tested our storage at about 2 gigabytes per second in throughput.
And that's the fastest ever recorded in decentralized storage.
So we're very proud of a lot of those types of results.
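The cost claim above can be put into a quick back-of-envelope sketch. The S3 price below is an approximate Standard-tier figure, and the 80% discount is simply the "up to 80% less" number quoted in the conversation; none of this is official 0G pricing.

```python
# Illustrative cost comparison for the "80% less than S3" claim above.
# The S3 figure is approximate Standard-tier pricing; the discounted
# figure is derived purely from the number quoted in the conversation.

S3_USD_PER_GB_MONTH = 0.023   # approximate S3 Standard price per GB-month
DISCOUNT = 0.80               # "up to 80% less"

def monthly_cost(gb: float, price_per_gb: float) -> float:
    """Simple linear storage cost: size times unit price."""
    return gb * price_per_gb

s3 = monthly_cost(10_000, S3_USD_PER_GB_MONTH)  # 10 TB on S3
decentralized = s3 * (1 - DISCOUNT)             # same data at 80% off

print(round(s3, 2))             # 230.0
print(round(decentralized, 2))  # 46.0
```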
That's great.
It looks like, from a product perspective, there are kind of four key offerings or four main parts of the Zero G ecosystem.
(14:10):
So there's the chain, there's compute and then storage and then data availability.
Could you take us through each of those and maybe share kind of what's unique about each, keeping in mind the AI developer specifically?
Yeah, I'd say there's also a service marketplace component.
(14:30):
And I'll get into that in a second, because we consider data availability a bit more part of the chain piece, and I'll explain why in a second.
So on the chain part: right now we have essentially a version one of the chain, just to get things going, so that we can actually utilize the
(14:53):
benefits of, you know, a consensus layer and so on.
Our goal, however, is to build a second version of this chain later this year, where we've essentially figured out a way to get to infinite TPS, if you will, so that the
chain itself scales depending on the workload.
So one of the issues you have today, for example on Solana, is that Pump.fun makes up something like 70% of the transactions.
(15:22):
You speak to Pump.fun and they're like, well, we want to move off the chain because we have too many failed transactions.
We want to build our own infrastructure.
And so we can't let that happen for specific AI workloads.
So for example, if you're running training on chain and you start having a lot of failed transactions, then you're never going to finish training that particular model.
(15:44):
Or fine-tuning that particular model.
And so that's the big issue.
So how do we make sure that the chain consistently scales itself?
And so the way we figured that out is we built this data availability layer, where every node that gets added to the network adds to the overall throughput.
And so you can get to about, let's say, 5,000 nodes with a custom consensus layer.
(16:08):
And so that scales to about that.
And so every node, depending on what you use, can get to about 30 megabytes per second in throughput.
And so you've got gigabytes per second of throughput now, which, by the way, no other DA layer is capable of doing.
And then what happens is that the consensus layer itself becomes the bottleneck.
(16:32):
And so we've basically then figured out: what happens if we also horizontally scale the consensus layers themselves?
And so...
That then leads to infinite scalability, because you can just spin up an arbitrary number of consensus layers.
So as the nodes increase, as the DA nodes increase, you can increase the consensus layers as well to match them.
(16:52):
Now, what you can then do is you can also modularize the execution layer.
So then you can combine the execution layers with different DA layers.
And so then you can essentially drive the transaction throughput, you know, however much you want.
And so transactions per second is no longer a blocker to actually building fully on-chain applications.
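The scaling model just described can be sketched with the figures from the conversation (about 30 MB/s per DA node, roughly 5,000 nodes per custom consensus group). The function names are illustrative, not part of any 0G SDK.

```python
# Back-of-envelope model of the horizontal-scaling idea described above.
# Figures come from the conversation: ~30 MB/s per DA node and ~5,000
# nodes per custom consensus group; everything else is illustrative.

NODE_THROUGHPUT_MBPS = 30          # per-DA-node throughput (MB/s)
NODES_PER_CONSENSUS_GROUP = 5_000  # nodes one consensus layer can cover

def da_throughput_mbps(num_nodes: int) -> int:
    """Aggregate DA throughput grows linearly with node count."""
    return num_nodes * NODE_THROUGHPUT_MBPS

def consensus_groups_needed(num_nodes: int) -> int:
    """Spin up another consensus layer once a group is saturated."""
    return -(-num_nodes // NODES_PER_CONSENSUS_GROUP)  # ceiling division

# One full consensus group: 5,000 nodes * 30 MB/s = 150,000 MB/s (~150 GB/s)
print(da_throughput_mbps(5_000))        # 150000
print(consensus_groups_needed(12_000))  # 3
```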
(17:18):
So there's a lot of like engineering that goes behind the scenes to make that happen.
The other thing we're trying to figure out is, how do we go below 100 milliseconds in latency on a layer one?
Because usually, if you have a three-round consensus and you assume, let's say, the speed of light, and you have
validators in all parts of the world, the fastest you could ever get to would be 100 milliseconds.
(17:42):
That's assuming, you know, speed of light.
Now, how do we get to 50 milliseconds?
How do we get to 30 milliseconds without giving up some of the decentralization aspects?
And so we're doing some research into it with, let's say, local consensus and then getting to global consensus in different pieces, which would be the first
(18:03):
time something like that gets figured out on the layer one.
So those are some of the kind of hardcore engineering problems that we're trying to solve for this next version of the chain.
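The latency floor discussed above comes straight from geography: signals in fiber travel at roughly 200,000 km/s, so a single antipodal hop already costs about 100 ms. A minimal sketch, with all figures illustrative round numbers:

```python
# Rough physical lower bound on consensus latency, as discussed above.
# Assumptions (illustrative): validators up to half the Earth's
# circumference apart (~20,000 km), signals at the speed of light in
# fiber (~200,000 km/s), at least one message crossing per round.

SPEED_IN_FIBER_KM_S = 200_000  # light in optical fiber, roughly 2/3 of c
ANTIPODAL_KM = 20_000          # half the Earth's circumference

def one_way_delay_ms(distance_km: float, speed: float = SPEED_IN_FIBER_KM_S) -> float:
    """Propagation delay for one message crossing, in milliseconds."""
    return distance_km * 1000 / speed

def consensus_floor_ms(rounds: int, distance_km: float = ANTIPODAL_KM) -> float:
    """Each consensus round costs at least one propagation delay."""
    return rounds * one_way_delay_ms(distance_km)

print(one_way_delay_ms(20_000))      # 100.0 ms for one antipodal hop
print(consensus_floor_ms(1, 5_000))  # 25.0 ms if validators stay regional
```

This is why the local-then-global consensus research mentioned above matters: keeping early rounds regional shrinks the distance term, which is the only lever left once you're already at the speed of light.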
Once that's done, you can basically run any application fully on chain, whether it's AI, whether it's on-chain gaming and so on; everything's possible.
The storage piece we touched a little bit upon, but the idea is to have a competitor to something like an S3 or kind of a Google Cloud, but fully decentralized.
(18:32):
So you have censorship resistance.
You have disaster recovery built in, because you generally have multiple replications across the network.
And at a cost, again, that's 80% less than centralized solutions.
(18:54):
Then the compute layer is verifiable inference, verifiable fine-tuning, and eventually fully on-chain training.
The training piece is something we're still doing research on; that's not available yet.
Fine-tuning should be available at the end of the month, which we're really excited about.
So that's there.
And then finally, there's the service marketplace layer. Because if you think about using your iOS or your macOS or Android or any of those types of operating systems, you usually
(19:21):
have something like an app store built in.
And the app store, what does it handle?
It handles registration of different providers, and it handles payment for different providers.
So we basically enabled that service layer as well, so that anybody, whether they're like an Akash or an Aethir or a Phala and so on, can register themselves on the network,
(19:43):
and then other people can utilize that from the service marketplace.
So those are all the components, and that's what we call the decentralized AI operating system.
So you've got all that power basically at your fingertips building in this space.
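The app-store analogy above boils down to two operations, registration and payment, and can be sketched in a few lines. All names, fields, and the flat per-call fee model are hypothetical, not the actual 0G marketplace interface.

```python
# Minimal sketch of the "app store" pattern described above: providers
# register a service, users pay per call. All names, fields, and the
# flat per-call fee model are hypothetical, not the 0G marketplace API.

from dataclasses import dataclass, field

@dataclass
class Marketplace:
    providers: dict = field(default_factory=dict)  # service name -> fee per call
    balances: dict = field(default_factory=dict)   # service name -> earned fees

    def register(self, name: str, fee_per_call: float) -> None:
        """Registration: a provider lists its service and price."""
        self.providers[name] = fee_per_call
        self.balances.setdefault(name, 0.0)

    def call(self, name: str) -> float:
        """Payment: route a request to a provider and settle its fee."""
        fee = self.providers[name]
        self.balances[name] += fee
        return fee

m = Marketplace()
m.register("gpu-inference", fee_per_call=0.5)
m.call("gpu-inference")
m.call("gpu-inference")
print(m.balances["gpu-inference"])  # 1.0
```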
That's exciting.
Question around the data availability piece.
Is that something you guys built from the ground up, or is that serviced through a partner?
(20:06):
No, we built that from the ground up, because that's the key component to actually get to this massive amount of scale to put AI fully on chain.
Got it.
Now, if another data availability project wanted to partner with Zero G, so that developers that build on Zero G would have an option to either build on Zero G DA or
(20:28):
another partner, is that something that's possible?
You would give developers an option, but they may not have the benefits of a ground-up DA that is native to Zero G.
Yeah, I mean, it's certainly possible, because this architectural design is modular in nature; you could have different consensus mechanisms.
(20:53):
Like we could use another L1, for example, to be a consensus mechanism as well.
We could use another DA layer to actually be this DA layer too.
Now, the downside of that is that most DA layers usually don't go beyond 10 megabytes per second in throughput.
And even a single node on our network does more than 30 megabytes per second in throughput.
(21:13):
Actually, I think Celestia may be the fastest, at probably doing like 27 megabytes per second for the entire network.
And so why utilize something that won't meet the needs of fully on-chain AI applications?
So if you think about training, for example, NVIDIA InfiniBand, I think, is in the hundreds of gigabytes, if not now in the terabytes, of
(21:35):
throughput.
So if you want to replicate that level of performance, yeah, even stringing together all of the other DA layers and utilizing them won't get you that level of
performance.
But yeah, that's kind of the long answer.
No, that makes sense.
What if, another hypothetical: I already have an application that needs a data availability service, but it's built on another chain. Is the data availability
(22:05):
service on Zero G something I could call? I guess, would I be able to use that even though I've built my application on another chain?
Or do I have to build it on Zero G?
That particular chain would have to use our DA layer, essentially.
So there's a few options, essentially.
(22:26):
You'd have to either use our DA layer, or you move on to an app chain and then use that DA layer.
Another way is to basically have the chain run in alt-DA mode, depending on what kind of chain it is.
So you could have a primary and then a fallback DA.
So there's different kind of options of how to structure it.
(22:46):
But yeah, the chain itself would have to run the DA layer.
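The primary-plus-fallback arrangement described above can be sketched as a simple routing rule. The DA-client interface here is hypothetical; a real integration goes through the chain's own DA adapter, not plain Python callables.

```python
# Sketch of the alt-DA "primary with fallback" pattern described above.
# The DA-client interface is hypothetical; a real chain wires this into
# its own DA adapter rather than Python functions.

def post_blob(blob: bytes, primary, fallback):
    """Try the primary DA layer first; fall back if it fails."""
    try:
        return "primary", primary(blob)
    except Exception:
        return "fallback", fallback(blob)

def fast_da(blob: bytes) -> str:
    # Stand-in for a high-throughput DA layer; returns a fake receipt.
    return f"receipt:{len(blob)}"

def flaky_da(blob: bytes) -> str:
    # Stand-in for a DA layer that is down or rejects the blob.
    raise TimeoutError("DA layer unavailable")

print(post_blob(b"txdata", fast_da, fast_da))   # ('primary', 'receipt:6')
print(post_blob(b"txdata", flaky_da, fast_da))  # ('fallback', 'receipt:6')
```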
So it sounds like the key offerings that Zero G has, at least the DA piece, could also be kind of chain-agnostic; they could be used by other chains.
That's pretty exciting from a business development perspective and from growth.
(23:07):
Is there a strategy to get the key services that Zero G has onto other chains?
Yeah, I mean, it comes from this design philosophy that we don't want to dictate how you build.
And so we wanted it to be modular enough so that if you just need to use storage, or if you just need to use DA, you can definitely do that.
(23:31):
And so it's like an introduction to the ecosystem, if you will.
And that provides flexibility in terms of how you build and what you build.
And even from a kind of layer one perspective, if you natively deploy a dapp, we would want you to have
the choice of what language you build it in.
(23:53):
So we're fully EVM compatible, but if you wanted to use the Move language, we want to enable that in the future.
Or if you wanted to use SVM, we would want to enable that in the future.
And so even at the execution layer, the perspective is to be modular enough so that the developer can make choices.
Because in Web2, I mean, many people write in Python,
and a bunch of people write in TypeScript, and then others write in Rust and so on.
(24:19):
So there's all these different preferences in terms of how people build.
And so we just want to make it really seamless and easy to get up and running in whatever shape and fashion you want.
And maybe you have very specific requirements.
You need really fast DA.
You use our DA as well.
But then...
You want to use a different chain architecture because customization of the token mechanics is really important.
(24:43):
So you have your app chain.
So it really depends on the end user's use case.
I think that developer first approach is really admirable and I appreciate you saying thatand designing Zero G in that way.
As Zero G approaches mainnet, we'd love to hear more about what that looks like and maybe any timing, and I'm curious about your growth strategies.
(25:09):
When layer ones move up to mainnet, having those first few sets of applications that kind of act as a magnet for users is really key.
We would love your thoughts on kind of what that looks like for Zero G.
Yeah, it's kind of coming.
(25:30):
A good analogy is like you go to an amusement park and there's no rides.
You just bought a ticket, but there's no rides.
And so very similarly, if you come to a chain and there aren't some really interesting, good applications, then it has the same type of issue.
So generally, an L1 needs to have some kind of basic applications: bridging, a DEX, lending types of protocols, some type of launchpad as well.
(25:57):
Those are kind of the basics that need to be fulfilled.
And so we do that through a combination of acceleration, where we kind of handpick teams to work with, as well as builders that are just really excited and want to deploy on our
chain because of, let's say, the AI components that we offer.
And so we believe more in quality versus quantity, even though sometimes meeting long-tail needs is also very important.
(26:23):
So the quantity aspect needs to be there.
But from a focus perspective, we're very much like, how do we really attract the best builders to the chain, and how do we create really interesting experiences?
So for example, on the DEX side, the team that's essentially working on the DEX is looking at, how can we include ZK dark pools, for example, or how can we include
(26:48):
DeFAI agents as part of the trading experience, to really utilize the capabilities that Zero G offers,
so that it's not just like, you're coming to our L1 and it looks like every other L1.
There needs to be something unique as well so that you can actually utilize thecapabilities that we've built.
So every app will have an AI component, or can I build an app and not have an AI component?
(27:13):
I mean, it's really up to the builder, but I think the builders that have chosen to cometo our chain so far, they really want to utilize all of the capabilities.
And that makes sense.
I was just trying to differentiate between kind of a general-purpose L1 versus the very specific, AI-purpose-built piece.
That's really interesting that this project, or this team that's building the DEX, has a, what is it, DeFAI component to it.
(27:44):
We'd love to hear more about that.
And that's gonna have its own kind of growth strategy, in terms of bridging assets over and then participating on the DEX.
How does Zero G support these developer teams that are building apps, and how do you help them grow?
(28:07):
Yeah, we have a few ecosystem programs that we're going to be releasing pretty soon.
We're going to have a big ecosystem fund initiative, where essentially we will have both an investment component as well as a kind of grant component to it.
We have an accelerator, where the teams get a lot of my time as well as the team's time to craft very clear go-to-market strategies, very clear
(28:35):
investor structuring approaches, all of that type of perspective.
So a lot of attention and time is really spent by the team to make sure that whoever is building on our chain or with our infrastructure is ultimately very, very successful.
So that's from the team side.
And then there's the external components.
We're building a very strong ecosystem of builders that can help each other, whether, you know, you've come through the accelerator or you're just building on chain or you're
(29:02):
building with some component. You get to be part of this community and basically exchange ideas and learnings together.
And so we really create a very strong builder community as well.
So those are kind of the internal and external components that we support with.
That makes sense.
I remember I led growth at Harmony Protocol, and one of the key apps that really helped it grow was DeFi Kingdoms.
(29:32):
We didn't expect it at the time.
We knew we saw something that was really special.
And then all of a sudden we started seeing TVL go up.
All of a sudden TVL was over a billion dollars bridged to Harmony.
Our RPCs were dying and we didn't know what to do.
And it was one of those things where, you know, some of it was in our control, some of it was planned, but a lot of it was just kind of luck.
(29:58):
And so as you're looking for these applications to build on Zero G, I guess what's the worldview
you use as you look at these applications, and which ones will become the magnet to really draw in, you know, assets to be bridged to Zero G, people, eyeballs,
(30:24):
attention.
It's a really hard problem.
But I'm curious about the approach that you're taking.
It's a very hard problem, because you have to take multiple bets in a way.
It's just like if you're a VC and you say, with 100% certainty, I'm going to tell you all of these portfolio companies are going to return and they're going to be
(30:46):
successful.
Like nobody can guarantee that, right?
Even the best, let's say on the Web2 side, I think it's still close to a coin flip, basically, from a probability outcome standpoint.
If I could tell you with a hundred percent certainty, here are the 10 companies that are going to do amazingly well, don't trust me.
(31:08):
Just statistically, it sounds very, very infeasible.
But given kind of human nature and taking a first-principles approach, we can say, well, there are certain things that people really enjoy doing.
Like, one is entertainment.
And entertainment can come in different types of forms.
It can come in a game form.
It can come in more of a form like a launchpad, like Pump.fun, for example.
(31:30):
It can come in the form of, like, wanting to feel a sense of belonging.
So based on kind of these core human needs, we can then map out and say, okay, here's the likely type of application, with an additional kind of overlay of AI, that can do well.
And so then based on this map, we can say, okay, there are probably these 50 different application areas that we should look at.
(31:53):
Then we can rank order and prioritize them and say, okay, based on this, we're then goingto focus on this.
So the way we've approached this is to say we need to get the basics right.
So let's focus on kind of the DeFi components first, get all of the key liquidity elements in, and then we can focus on the more entertainment pieces second.
That's how we've approached it in general.
(32:15):
I think that strategic approach, that boils down to the tactical pieces around which app categories make the most sense and then even the sub-segments below that, is really
wise.
And it's really the only way to do it.
Otherwise, you'll just be guessing.
But it sounds like you guys have a clear approach there that makes a lot of sense.
(32:41):
And I think that you'll find a lot of success there.
We hope that a more structured approach basically leads to a higher probability of success, versus a "let me just try a bunch of things"
type of approach, yeah.
No, absolutely.
You know, another approach that I've seen other layer ones take is they, um, you know, court current apps that are already built on other layer ones.
(33:06):
And so the problem that I've seen, especially with EVM chains, is that they just clone the exact same app.
They port it over, but then they don't give it full support in terms of, you know, supporting developers that are using it, or users, or trying to build a community
on the new chain.
(33:27):
And that could be a problem too.
And I've seen that actually happen quite a bit, and I've made that same mistake myself.
As I've done business development and worked with applications on other chains to get them over to my chain, or the chain that I'm supporting.
But then if the development team behind the application doesn't also build the community on that chain,
(33:52):
You could have some issues.
Yeah, the thing we want to avoid as well, and that brings up this point: we don't just want to be like, whoever's the highest bidder, come to my chain.
We want much more of a kind of mission type of alignment.
Like I'm here because I'm excited about the future of AI.
I'm here because I want to kind of stress-test the limits of what decentralized AI is able to offer.
(34:17):
I'm here because I want to make AI a public good.
So there's this mission alignment.
And then I also want to build really cool technology that makes a lot of people happy.
So that's what we want to stress, versus just this idea of, let's be super mercenary and just give tons of grants to everyone
so they just start building on our chain.
We want something that's very long-lasting, basically.
(34:39):
And I think there was a time when the mercenary vampire approach of just giving grants to everyone worked, and then it didn't work.
And so I think that having built Zero G from first principles, very AI-focused, that key differentiator, I think, will weed out exactly who ends up on Zero G.
(35:03):
They've already self-selected and I think that's really wise.
And being first in the category is really key.
I think you'll see a lot of growth there.
And I'm excited for you guys.
Let me ask a kind of a fun question.
Tell us about the Zero G panda mascot.
What was the thinking behind that?
(35:25):
Because when you think of AI, it's highly technical.
It's a lot of work, right?
And how did you end up with a mascot like a panda?
We wanted to humanize the technology, or just make it a little bit more fun from that perspective.
So I think the way it came about is somebody in the community called Wint pointed out that one of my nicknames is Kung Fu Panda, which my wife lovingly gave to me because I love doing
(35:55):
Shaolin Kung Fu and, I don't know, I guess I must look like a panda when I'm doing it.
So I have a bit of a bigger frame, I guess, from that perspective.
Somebody called Wintervid was like, why don't we draw, like, a space panda or something like that.
And then it was kind of this naturally organically created community initiative.
(36:17):
And we're like, okay, this seems super cool.
We should just adopt this as our mascot.
I love that.
Okay, that brings up several questions.
I do Wing Chun.
And so that's really cool.
That's really cool that you do Kung Fu also.
Tell us about the community.
I understand it's a really lively community.
(36:38):
I'm in the Discord and it's quite active.
How did you go about it, as a very developer-focused platform with a community,
some of whom are not developers? Like, how did you bring together a community of people that can still have stuff to do and feel like they're part of something, yet are not
(37:02):
developers?
Like, tell us about the community-building aspect of it.
Yeah, there's many things we still have to build around it as well.
But it really started when we came out of stealth last year, sort of around the March timeframe.
All of a sudden we were just everywhere, kind of all at once.
Like we launched at ETH Denver and just did a bunch of kind of social campaigns.
(37:27):
And then people were just like, whoa, what is Zero G?
I need to check this out.
And then just that curiosity led to a lot of people coming into the Discord
and trying to figure out, like, what is this about?
Who's doing what?
Like, who's the team behind this?
This seems super fun.
Let me be a part of it.
So it came from this drive of curiosity, is what we noticed.
(37:49):
And then we started building up very quickly, like the moderators and ambassador program.
We even hired somebody from the community that helped us build the Discord, Jaan, forexample.
He was like one of the first to raise his hand and say, hey, you're getting a big influx of people.
We should put some structure around this.
Like, maybe we should have a poker night.
(38:10):
Maybe we should have, like, a quiz piece to it.
And so it really came organically from the community itself saying we want to engage andwe want to engage on different levels.
And a lot of our team members are pretty active in the Discord, whether it's, like, a validator saying,
why wasn't I selected for this active set at the testnet, or, you know, what else can I do, or can we do some marketing thing together?
(38:39):
So it became this kind of self-fulfilling prophecy again, a natural vibe, which we were really excited about.
So yeah, this, this initial kind of boom and expansion led to just a lot of excitement.
And you could really tell, the communities that grow and build organically have so much more staying power than those that feel kind of contrived and fake.
(39:03):
And you can really tell the Zero G community is really interested, and they're driven by this mission of AI.
When you first came out last year, I guess it would be last February, right?
When you came out of stealth, late February, around ETH Denver, did you feel that AI...
Were you worried that AI might be too early?
(39:25):
Because in the last 12 months, it's really taken hold.
But probably last February, I'm guessing it probably felt early.
I mean, we've heard you were starting to think about AI back in November before that.
And that's kind of really what we settled on, because to us it didn't matter; we took a very first principles approach.
(39:47):
We said, what are we really worried about in the future?
Like, can we trust AI if we can't verify it?
We had, like, you know, GPT-3.5 and GPT-4 as they were coming out, very powerful systems.
But what's under the hood?
Like nobody can verify it.
You don't know where the data came from.
You don't know who labeled it.
You don't know how the model was trained.
(40:07):
I don't even know which version of the model I'm being served.
And on top of that, are you utilizing my data to inform future models?
Yeah, very likely.
In most cases, yes.
And so I don't even have control over that.
And so how can we trust this type of system when we start running bigger societal-level systems with AI?
Let's say, you know, like airports, for example, or it could be as benign as, like, trash collection, or large swaths of androids building buildings for us.
(40:37):
How can we trust that if we can't verify it?
And so then we started getting really worried about the future and saying, hey, there's apotential terminator situation that can actually happen here without the right oversight.
And so that was the initial line of thinking.
And then it didn't matter if we were early or not, because we knew we had to build thisfuture.
I think that's pretty visionary.
(40:58):
On the verifiability piece, I spoke with another project that is infrastructure for projects with data that needs trusted execution environments.
And so I'm curious about the data verifiability and, I guess, what's the approach that Zero G is taking with that?
(41:21):
Are you using TEEs underneath?
For now, it's the most practical approach.
So if you want to basically have no overhead in your inference and be, you know, comparable to OpenAI performance, then TEEs are still the way to go.
ZKML is just too far off from a practicality standpoint for large-scale LLMs.
(41:42):
So anything above, you know, 10 billion parameters has significant slowdowns.
Another approach would be OPML.
It still suffers from quite a bit of lag.
So really, TEE-based ML is the most practical.
I'm hoping that in the future with better hardware, with better algorithms, that ZKMLbecomes the kind of overall solution, or even something like FHEML.
(42:10):
But we're just not there yet, from a development standpoint.
So TEE-based ML is still the most practical right now, though it comes with its own set of issues.
Many academic papers talk about how it can potentially be exploited.
But for now, it's, I would say, the 98% solution that works the best.
Yeah, I led growth at Oasis Labs, which is known for trusted execution environments.
(42:36):
And that was a common question that we had around these TEEs, that they're potentially exploitable.
And so that was a question that we had answers for.
But definitely a lot of academic papers kind of questioning that.
(42:59):
But I think it's really the smart thing to do.
And it sounds like Zero G also looks at other, what are they called, privacy-enhancing technologies; when they become available and don't impact latency, then they become available on the network.
(43:26):
Um, and there's major strides that are being made.
Like, we talked with an FHE team in Bangkok, for example, and they figured out a way of getting, like, a 1000x improvement versus, let's say, the leader in the space, Zama.
And so those types of things are like super exciting.
Of course, it needs to be more than 1000x in order to be practical.
But it's already a huge step along the way there.
(43:49):
That's huge.
Let's do a couple of final questions.
Tell us about the Kaito campaign and how did that go for you and what are yourexpectations that now that you'll have a Kaito leaderboard?
It was super fun.
It was kind of this, like, epic Olympic battle, I would say.
Because the first day basically we couldn't even vote because the website kept breakingdown.
(44:14):
And so it's like, I can't even place my own votes on my own Kaito campaign.
And so we started probably in, like, I don't know, the 18th position or something because of that.
Then once we understood all the mechanics, we really got our community rallied around, like, hey,
this is going to be important.
(44:36):
Once we have our Kaito leaderboard, we can also then tell who's really talking about Zero G, who's really excited.
And so then it started with an upswell of the community.
And then within, I think, a few days, we started being, like, number five or number four or something.
And then it was really like, how do we get the last bit?
(44:57):
What I basically did was look at MegaETH, which I think at that time was already leading.
I basically looked at their growth curve and estimated that they'd end up at roughly around 20 million votes.
And I was like, okay, how do we get beyond that?
How do we get to like 21 or 22 million votes?
And so I tried to basically write all of our investors, all of the communities that I'm part of, and be like, hey, we
(45:19):
need to lobby for all these votes.
Let's make sure we get Zero G the number one spot.
It was basically three, four days of just doing that and talking with a lot of different communities.
And there was a bit of a joke as well that basically the cabal stepped in and helped usout.
And so a lot of top influencers basically came on our side.
(45:44):
And then, I think probably 20 hours before it was over, we eclipsed MegaETH for the number one spot.
Then every 10 minutes, I was checking to see if we were either decreasing our lead or increasing our lead.
The last 20 hours were just literally like that.
And I couldn't go to sleep.
I was just like, OK, how are we doing?
(46:05):
How are we doing?
OK, looks like we're on a good track.
We're increasing our lead.
And then, no, somebody just put in 50,000 votes for MegaETH and we're decreasing our lead.
So that's kind of what it felt like.
And I just rallied the community around it as well.
I just said, believe in something, go all in.
And then there were so many posts on X with different community members just saying, I only have 300 votes, but I went all in for Zero G.
(46:30):
It had to be Zero G. And just hundreds of those posts.
It was super fun.
Super, super fun.
Well, I definitely pointed all my Yaps and smart followers to Zero G.
What are the goals now that, you know, that Zero G will have its own leaderboard?
And how will that help with growth and or brand awareness?
(46:53):
It's really helpful for our visibility, because with that data, we don't have to be kind of in the dark. And I actually chatted about this with Yu from Kaito, the founder and
CEO.
And basically, what many projects end up having to do is they have to figure out, okay, I need to figure out some KOL round.
(47:14):
I'm going to get these 200, 300 KOLs or whatever on board.
And how do you even know if they're effective?
Like, are they even doing any work?
Because some might just say, like, I just want to get in and invest, and then forget about doing anything because they're busy with their other stuff.
So this actually gives us visibility, not only into influencers, but also into, like, who are the community members that are actually really engaged?
(47:40):
Like, who cares about us?
Who cares about writing consistent stories?
Who's consistent over time?
And it's a much more fair way of saying, like, okay, well, if you really care about what we stand for and the mission, how do we then make sure you're
adequately rewarded as a community member?
And so the Kaito leaderboard really gives that amazing visibility from a lot of signals that they collect on X.
(48:04):
Now I wish they could also do this for other social media, like YouTube or Instagram or TikTok and so on, in the future, but this is a great start, and we'll definitely give
some product feedback as well to you and the Kaito team.
Awesome.
And the leaderboard goes live on Friday.
(48:24):
That's exciting.
Well, I look forward to that, and I hope that the Kaito leaderboard will help raise brand awareness for Zero G, both for just fans, but also developers.
I think developers are interesting because they really follow momentum, and they're always asking themselves, like, why should I build on your chain?
(48:46):
And they look at the community size.
And not only just like the features of the chain itself and its offerings, but also kindof the liveliness of the community.
And I'm hoping Kaito will help with that.
So that's exciting.
It should help quite a bit.
Yeah, it was a great mechanism.
Actually, the managing director of ecosystem, she made a kind of joke on X as well.
(49:09):
And she basically said that now that I'm done playing CMO for Kaito, I can actually resume my real job.
But it's been an amazing kind of mechanism for Kaito as well, getting their brand awareness out, and then in turn helping others with their awareness too.
Amazing.
Well, Michael Heinrich, thank you so much.
(49:32):
And we look forward to watching Zero G and you guys grow.
Yeah, super excited.
Thanks for being part of the journey.
Thank you.