Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
Welcome back to the show.
We're joined today by Nick Emmons, founder and CEO of Allora Labs, which is building the Allora Network.
Allora is aiming to bring adaptive, self-improving machine learning to crypto.
Welcome, Nick.
Yeah, thanks for having me.
Let's get right into it.
Can you help explain what Allora Network is in plain English?
(00:22):
You know, pretend I know nothing about crypto.
Sure, for sure.
Yeah, so basically what we're building with Allora is a kind of aggregation layer for AI models, or you can think of it also as a model coordination network.
And I think it's useful to sort of zoom out and understand the existing paradigm that exists in AI to really understand what that means.
(00:44):
So when you or I or anyone, developers, companies, individuals, et cetera, are looking to interact with some AI model, we're sort of picking specific models we want to interact with.
We're saying, I use ChatGPT a lot, so let me use this.
Or, I've heard Claude's better at coding, so let me use Claude for this.
We're sort of tasked with constantly surveying the universe of available models at any given point in time.
(01:09):
And because of this, and because of the way models exist today, all of these models are sort of in these isolated silos, both from a consumption perspective and a performance perspective.
We can't take all the good bits of GPT and all the good bits of Claude and all the good bits of Gemini and really merge them together to create a new form of intelligence that represents the sum of the intelligence, or the unique intelligence, from those underlying models.
(01:36):
And this really, one, slows down the rate of progress for AI generally, because everyone has to build in these isolated silos.
And two, it creates this consolidation of AI market share that we're seeing, where a few large companies increase their lead day after day in terms of being the source of the world's intelligence.
So what we're building with Allora is basically this decentralized network where many different models can come together and share their outputs with one another in
(02:04):
collectively solving different ML or AI problems, or responding to queries from users, and learning off of one another in the process, to try to combat some of these hindrances or impediments that are present in the existing paradigm of AI, which is largely siloed and isolated.
(02:25):
So that's what we're trying to build with Allora, essentially.
And we're even seeing the siloing within some of these AI companies, right?
Like even with ChatGPT, it has like five or six different models, and no one knows how they're different from each other.
And so even within themselves, they're kind of fragmented, where a supermodel, where the user doesn't have to decide which one to use, because they really
(02:54):
don't know the difference, would be preferable and more user-friendly.
That sounds really cool, what you guys are doing.
I met with Sentient a couple of weeks ago, and OpenLedger, and a bunch of different AI slash crypto projects.
And I am really encouraged that Allora is in this space and that there's an attempt to make crypto and AI work together.
(03:24):
Tell us the genesis of how you started Allora. You described the problem, but I guess, what led you to have enough courage to say, let me try to fix this thing?
Let me try to bring a solution to the problem that I see.
Yeah, I think a lot of it is kind of philosophical, in that I think we saw the internet as a preamble to this.
(03:52):
I think the period from the internet really gaining ubiquity to now, you can kind of designate or define as the information age.
And I think the age we're entering now is the intelligence age.
But looking at the information age, I think the internet was built on a set of values around decentralization.
(04:12):
It was built around the idea that information flow in society should be free and open and not controlled by anyone, built on this set of open standards and infrastructure.
And even though it was ultimately kind of commercially captured by various institutions or enterprises, I think this underlying set of open infrastructure is a lot of why we've
(04:34):
been able to sort of realize this kind of stage in society we're in today.
And now, as we enter this intelligence age, I think it's even more paramount for what will increasingly become humanity's kind of outsourced cognition to be instantiated as a kind of decentralized public good, as opposed to some private thing that is developed and monetized by a small number of large enterprises.
(05:00):
And so I think philosophically, it's not only a useful and beneficial thing on a number of more objective metrics.
But I think it's actually quite critical, or even existential, for society to instantiate this decentralized brain, more or less, or our outsourced brain, as some sort of decentralized autonomous thing, as opposed to it being some private good again, owned by a small number of enterprises.
(05:26):
And so that's a lot of the kind of motivation to start what we did.
I'm happy to go into some more tactical elements of the timeline, but I just see it as being critical, as AI, increasingly and at a faster and faster pace, sort of redefines or reinstantiates a lot of society's core functions, especially as it pertains to intellectual freedom, or just thought generally.
(05:50):
And I'm seeing that kind of, I guess, position from, you know, other crypto slash AI projects, where they see the incumbents, and there's like five or six major incumbents.
And so it becomes like big tech all over again, except it's now in the AI space, and they have, you know, tremendous funding, kind of, you know, unlimited resources.
(06:13):
And so it becomes a very big challenge for mission-driven projects like Allora to decentralize this AI thing, like how to make it kind of from the people, for the people, versus just from these five or six big tech companies.
On the homepage, it says this is the abstraction layer for intelligence.
(06:36):
And then: Allora is a self-improving decentralized AI network that harnesses community-built machine learning models for highly accurate, context-aware predictions.
Maybe we can go under the hood a little bit and help us understand how these models are developed, how they get into Allora, and
(06:57):
maybe compare and contrast, you know, how helpful are these models versus, let's say, you know, ChatGPT or whatever.
Yeah, I think that's a great question and good to provide some context.
So basically, the network is built around the sort of ultimate objective of commoditizing the world's intelligence.
It's a sort of general model coordination network to bring together the world's models and optimize various ML objectives or ML problems.
(07:24):
And the network is broken into these kind of sub-networks, called topics, that are these tightly scoped environments within the broader network that are each defined by different sort of ML problems or objectives.
So you might have one topic for predicting the price of, say, Bitcoin to USD an hour from now, and doing that every hour.
You may have topics related to fraud detection in various financial domains, banking domains, et cetera, maybe topics related to anti-cheat detection in
(07:51):
gaming, whatever it is.
And then within each of these topics, the participants are basically broken down into three buckets.
The core participants are these, what we call, inference workers.
And these are people that are building and running models to try to solve the core objective function of that topic.
So if you and I have some insight, and maybe distinct insights, on how to best predict the price of, say, Bitcoin to USD an hour from now, maybe we'll go and we'll build some model to
(08:18):
do this.
We'll iterate on it over time until it gets to some adequate performance that we're happy with.
And then all we do to join the network and contribute to its sort of aggregate inferences or collective intelligence is just start emitting the outputs from these models to the network on a regular cadence, as defined by that topic.
So every hour we're saying, an hour from now, Bitcoin USD is going to be this price, and then the next hour we do the same, et cetera, et cetera.
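The cadence Nick describes, a worker emitting its model's output once per epoch, can be sketched roughly like this. The model logic and the `submit` callback are hypothetical placeholders, not Allora's actual worker API:

```python
def predict_btc_usd_one_hour_ahead() -> float:
    """Stand-in for a participant's own trained model (hypothetical)."""
    return 64_000.0  # a real worker would run its model here


def run_inference_worker(submit, epochs: int = 1):
    """Once per epoch, emit the model's output to the network.

    `submit` stands in for whatever call actually delivers the inference
    to the topic; in practice the loop would also wait out the topic's
    epoch length (e.g. one hour) between emissions.
    """
    for _ in range(epochs):
        submit(predict_btc_usd_one_hour_ahead())
```

Each worker keeps its model private; only these emitted outputs ever reach the network.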
(08:44):
And what we're doing, in doing that, is we're sort of joining the other models that are participating in that topic in creating a single aggregate inference every hour to most accurately predict the price of, say, Bitcoin USD.
Then the second bucket of actors within a topic are people who are also building and running models, but they're trying to solve something a little different.
(09:05):
They're called forecasters, and instead of building models to try to solve the core objective problem of the topic, like the price prediction, they're trying to, whenever that first category of actors is producing their outputs, predict how accurate each of those outputs is going to be at the end of the hour, at the end of the epoch, in sort of network nomenclature.
(09:28):
And so what they're doing is actually quite critical for the network.
They're sort of why the network works in achieving some of these kind of self-improving objectives, and why the network is able to outperform individual model outputs.
They're sort of exploring this more unbounded or nebulous domain space and context that may inform or provide useful signals as to when some inference worker's output is going to perform well or not as well.
(09:53):
Maybe some inference workers end up just doing better on Mondays, for whatever reason.
Maybe others are better in really non-volatile markets, and others in volatile markets.
These forecasters are bringing those sort of out-of-band signals into the inference aggregation logic to make it this kind of context-aware network inference.
And so both the forecasters' model outputs and the inference workers' model outputs are then kind of being
(10:14):
merged together to create this context-aware aggregate inference that's being delivered at the time of the inference request.
And this is important because obviously, in a lot of these domains, in this example we're talking about with price prediction, we don't know what the price is going to be an hour
(10:34):
from now yet.
We need to wait an hour for that to be revealed.
And by that point, the inference is no longer useful to us.
We don't have the alpha we would have gotten from having that inference at this point.
And so with both of those first two categories of actors, we get inferences at time of request.
And then the third category of actor, they're not running models.
They are the model evaluators.
(10:56):
They're called reputers in the network.
At every sort of time step, at the end of every epoch, in this case, at the end of each hour, they're coming in and saying, what was the price of Bitcoin to USD, actually?
How accurate were each of the inference workers and forecasters in their respective outputs? And they're assigning the relevant weights to each of those actors.
(11:17):
And the weights are important because now, in the next epoch, they inform how much sort of out-of-the-box influence those actors are going to have on the aggregate inference in that next epoch, but they also determine how much of the rewards, or incentives, are going to be proportionally given to each of those actors in those first two buckets.
(11:38):
If I was X percent more useful in constructing the best possible network inference, I'm going to get X percent more rewards, et cetera.
And so that's kind of how it works under the hood, in terms of taking these three distinct classes of actors within each of these sub-networks, these topics on the network, to regularly produce aggregate inferences, as well as update the network's sort of reputation system, or weighting system, as time progresses too.
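A rough sketch of that end-of-epoch step, assuming a simple absolute-error loss and inverse-loss weighting. The real network's scoring math is more involved; the names and formulas here are illustrative:

```python
def reputer_update(predictions, actual, reward_pool):
    """Score each actor against the revealed ground truth, then derive
    next-epoch influence weights and proportional rewards.

    predictions: {actor: predicted value}; actual: the realized value.
    Lower loss -> higher weight; rewards split pro rata by weight.
    """
    losses = {a: abs(p - actual) for a, p in predictions.items()}
    inv = {a: 1.0 / (loss + 1e-9) for a, loss in losses.items()}  # avoid /0
    total = sum(inv.values())
    weights = {a: v / total for a, v in inv.items()}
    rewards = {a: w * reward_pool for a, w in weights.items()}
    return weights, rewards
```

An actor that was X percent more useful ends up with a proportionally larger weight, and therefore a proportionally larger slice of the reward pool, exactly the flywheel described above.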
(12:06):
Okay.
You said a lot there.
Let me try to summarize.
So there are four major concepts.
So there's the concept of a topic, and then there's inference workers, there's forecasters, and then the last one is... repu-something?
What was it?
Reputers.
So they're kind of like adjudicators, to see how close to the mark the prediction was, right?
(12:30):
Something like that.
Got it.
And then they help kind of with a feedback loop for learning, and also, with how close they are to the mark, they get some kind of boost for the next prediction, or something like that. Okay.
Help me understand the forecasters, how they're different from inference workers, because they sound similar, but it feels like they're also looking at kind of the tails.
(13:00):
Mm-hmm.
Yeah, so an inference worker is just solving the core problem of a given topic.
So in this price prediction case, maybe we're breaking out the quant textbook and we're saying, these things are signals in markets.
These things are going to be useful in predicting the price of some asset to another.
Let me try to experiment with these different things, these signals and features, and see how good of a model I can produce that predicts prices accurately.
(13:26):
Forecasters, maybe they still have some of that in their models, but what they're trying to do over time is learn the little idiosyncrasies in each of those inference workers' outputs, to better inform an accurate prediction of how well they perform.
These forecasters aren't predicting the price of Bitcoin USD an hour from now.
They're predicting, oh, inference worker A is going to receive this loss, is going to be this accurate.
(13:50):
Inference worker B is going to be this accurate, et cetera.
And so they are tasked with this kind of more nebulous, and arguably more complex, task of saying, all right, over time, let me improve my understanding of how each of the inference workers performs in a topic, to improve the network inference beyond just what the inference workers are able to achieve.
(14:13):
And it's because of these forecasters, because of the introduction of these context-specific signals, that the aggregate inference that's generated from the network at each time step is able to consistently outperform any of the individual models within the network.
Because if you weren't exploring this additional domain or feature space through forecasters, the best the aggregate inference could ever be is the performance of
(14:37):
the best individual model in the network.
And at that point, why not just route to the best model all the time?
We're trying to build something where the whole is greater than the sum of its parts, where a network can self-improve over time and consistently be better than the sum of its inputs, the models that are contributing to the core objective.
So that's sort of the role of a forecaster and how it
(14:59):
differs from inference workers in the network.
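One minimal way to picture the forecasters' role: if each worker comes with a forecasted loss for the current epoch, those forecasts can steer the blend toward whichever worker is expected to do best right now, which a static average can't do. A toy sketch, where inverse predicted-loss weighting is an assumption, not Allora's actual aggregation rule:

```python
def context_aware_inference(worker_outputs, forecasted_losses):
    """Blend worker outputs, weighting each by its forecasted accuracy.

    worker_outputs:    {worker: this epoch's inference}
    forecasted_losses: {worker: a forecaster's predicted loss for it}
    A worker expected to be accurate (low predicted loss) dominates.
    """
    inv = {w: 1.0 / (forecasted_losses[w] + 1e-9) for w in worker_outputs}
    total = sum(inv.values())
    return sum(out * inv[w] / total for w, out in worker_outputs.items())
```

If worker A is forecast to be three times less accurate than worker B this epoch, B's output gets three times A's weight in the blend, so the aggregate can track whichever model the current context favors.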
It sounds like you've really prioritized the self-learning feedback loop, with inference workers, forecasters, and reputers.
That's really, really cool.
When I think of a topic, how are topics different from, like, domain-specific verticals, like law, for example?
(15:25):
Is a topic much more like a subsection of, let's say, law or humor or...
It's a bit more granular than that.
So you could think of a general subject or vertical being made up of many topics, and, at the lowest level, in kind of
(15:49):
AI language, a topic is just defined as a target variable, the thing you're trying to predict, and a loss function, how we're measuring accuracy in predicting that target variable.
And then the rest of the topic's logic fits into this.
So you could theoretically define a really general target variable with an applicable loss function and have a topic pertain to a really general problem space.
(16:13):
But I think the way you get the most out of the network is by instantiating topics thatare as sort of tightly scoped or specifically defined as possible.
And that's what topics sort of enable out of the box.
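In that framing, a topic boils down to little more than a target variable, a loss function, and a cadence. A hypothetical sketch of such a definition, not Allora's actual schema:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Topic:
    """A tightly scoped sub-network: what to predict, and how it's scored."""
    target: str                               # the target variable
    loss_fn: Callable[[float, float], float]  # (predicted, actual) -> loss
    epoch_seconds: int                        # cadence of inferences


# Example: hourly BTC/USD prediction scored by absolute error.
btc_hourly = Topic(
    target="BTC/USD price one hour ahead",
    loss_fn=lambda predicted, actual: abs(predicted - actual),
    epoch_seconds=3600,
)
```

Scoping the target tightly, as Nick suggests, keeps the loss function meaningful for every model competing in the topic.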
Okay, that makes a lot of sense.
That kind of feels like, you know, something I've heard, that some of these large language models, like, you know, OpenAI, the models that they use, for
(16:36):
example, they're like 90% kind of on target.
But then it's like that last 10% is actually where kind of a lot of the value is.
And that last 10% is pretty much all these other AI companies that we're seeing that are very, very domain-specific.
Like Harvey AI, for example, focuses on law.
(16:59):
I'm involved with a company, or with a project, that's building a large language model for humor, like teaching a model about what's funny.
And so, which is really, really interesting, right?
Because I don't think I shared this, but a long time ago, when I was in grad school, my thesis was on computational linguistics.
(17:22):
And I built a neural net.
This was a long time ago, before large language models, but the neural net I built was aimed at a problem called word sense disambiguation.
So, like, you know, these models have a very hard time distinguishing, you know, irony from metaphors and analogies.
(17:47):
It's like, they just have a hard time.
So being able to predict in what sense the word bank, for example, is used: is bank like a financial institution, or is a bank like a type of basketball shot, or like a side of a river?
A lot of these, at the time anyway, I think they're better now, but being able to distinguish in which sense a word was used.
(18:10):
Anyway, I built a neural net that aimed to do that, with very, very low accuracy.
And so, but it sounds like that's kind of what happens with Allora, you know, with these three actors, right?
You know, they kind of help to, what is it called?
(18:31):
The word escapes me right now, where they kind of like balance each other, almost, to get to the most accurate prediction.
Does that sound about right?
Yeah, that's right.
Yeah.
I think they're all taking each other's outputs as inputs into their own sort of logic, and using that to kind of rip out the best pieces of each other's outputs as a
(18:56):
function of that.
Then those get incorporated into what the network ultimately generates and delivers, basically.
Let's go into your background.
How did you get into this?
You sound very well versed in AI.
Is that in your background?
I am, a little bit.
Yeah, we've been building kind of AI stuff in the crypto space since 2020.
We've been building some of the earlier kind of AI powered crypto infrastructure.
(19:20):
um With my background, I come from uh more of a crypto background.
I first got into the space in, I think, like 2014.
Prior to starting the company, I was leading blockchain development at one of the larger asset managers and insurance companies in the US, John Hancock, which is part of an international company called Manulife.
(19:44):
And there we were doing a lot of the early work around public blockchains being done by large institutions at the time.
This was the beginning of 2018, when enterprises experimenting with blockchain meant private blockchains, enterprise consortiums, DLT technology, et cetera, missing a lot of the core benefits of blockchain technology, in my opinion.
(20:05):
And so we were working on a lot of the earlier public blockchain stuff, and a lot of stuff around building efficient markets for pricing and hedging against long tail exotic
risks.
So there's an AI element to that as well.
And then when we started the company, it started as more of a research endeavor into how do we build kind of decentralized networks, or decentralized consensus mechanisms, for reaching resolutions to subjective questions in a decentralized way.
(20:37):
So not kind of deterministic or objective things, like, did we update the state transitions according to the virtual machine of the network, but trying to query sort of subjective inputs, a kind of generalized oracle problem.
And as we were building that, we were kind of thinking about how to best come to market with it.
We were experimenting with some sort of crowdsourced use cases around pricing long tail assets, with actors pricing those assets, like NFTs, for example, being coordinated via these
(21:07):
like subjective consensus rules.
And we very quickly realized that humans, and I guess like more analog inputs into those types of systems, are just very inefficient.
They're very inaccurate.
And so it was around that time, late 2020, beginning of '21,
when we started building AI models for pricing long tail exotic assets, building various sort of AI-informed or AI-enhanced DeFi infrastructure, especially for
(21:32):
long tail assets.
And so that's sort of what led to where we are today.
And then as we reached a really solid base and foundation in building models in the domains we were operating in, this problem of siloed machine intelligence became very directly known to us.
We felt it ourselves, being just a single model developer in sort of the broader sea of use cases that are benefited by AI.
(21:57):
And so we took a lot of that early work we did around subjective consensus mechanism design, combined it with the years we had spent building AI models that applied to crypto
domains.
And it basically became the Allora network in a lot of ways.
Gotcha.
Let's talk about data for a second.
So the predictions that inference workers and also forecasters are producing are based on data.
(22:23):
Where does that come into play in the data supply chain?
How are inference workers accessing, where are they getting the data from?
Mm-hmm.
Which they're kind of running their models against.
Yeah, the answer is kind of anticlimactic, in that they're just getting it from wherever they want.
(22:47):
One sort of core principle we've held when designing the network is that the network has to achieve its core objectives without any sort of opinionated approach to what data models are using or how models are being run.
And that means that, one, models that are running on the network, a lot of them are closed source, because all they're sharing with the network is the
ultimate model output.
(23:08):
We don't even know anything about the structure or the methodology of the model, but also we don't know what data they're using to inform their model outputs.
So people are getting data from all over the place, from centralized data providers, to data warehouses, to sort of aggregating data on their own.
Some people are leveraging other types of data outside of just market-related data, like social data that they may be pulling from some social network APIs, or
(23:33):
news data they're scraping from a variety of news sources, whatever it is.
And so the network's been designed and built purposely in a way where we don't place any sort of opinionated directive on model creators about where they get their data.
We're working to build adjacent infrastructure and tooling to make getting data from different sources as easy as possible.
(23:55):
But at the end of the day, my design philosophy for networks is that building modularly, and building as tightly scoped primitives as you can, is the way to sort of optimize efficiency in these systems.
And so I think, outside of even centralized data providers, there's additionally a lot of really proficient data networks that are spinning up, obviously, in the
(24:18):
crypto AI space and adjacent, that I think model developers on the network are plugging into to inform their models.
And so yeah, they're really getting data from everywhere.
Now, if you have, let's say, an inference worker that's consistently on the mark on, you know, a number of topics, would it benefit the Allora network to know, like, on
(24:40):
which data, or the type of labeling, that's being used, like, how are they so accurate?
Is that something that you guys would be interested in?
Maybe, but I think I operate from the perspective that markets are the greatest coordination mechanism that exists.
And if someone's able to out-compete all other market participants to such a large degreeand so consistently, then that's their alpha to maintain.
(25:07):
And the more they do so, the more they're consuming disproportionate rewards to others,the more incentive there is for others to...
go out and continue to experiment with data, import new data sources into their model,experiment with different feature designs, et cetera, to try to out-compete them.
So yeah, we approach it very much from, like, we've designed a very specific kind of market structure, or market environment, for provisioning machine intelligence.
(25:34):
And that's where we spend our time: how do we build the most optimal market environment possible for provisioning this type of resource?
And then the market dynamics, the specific sort of elements that go into how one market participant is active in that market, are sort of left to be governed by those participants within this market environment.
(25:55):
And that makes sense.
It's their alpha to maintain.
And I think that's totally appropriate.
um Let's talk about incentives.
um How is the network incentivizing these three actors to do work?
Yeah, so basically at a high level, or fundamentally, they're getting rewarded based on how well they do their job.
(26:18):
Inference workers based on how accurately they produce outputs that solve the core ML objective function in the topic, forecasters based on how accurately they predict the performance of inference workers, and reputers based on how honestly and accurately they reveal the actual performance of inference workers and forecasters.
(26:38):
And then where those rewards are coming from is from two sources.
One, it's coming from fee revenue from consumers, applications, developers, whoever it may be, who are paying for inference.
Those are coming into this bucket that then gets dispersed to these actors, as well as, two, emissions from the network.
There's a native token that sort of governs the network.
It sort of facilitates all this coordination.
(27:00):
So fees come into this bucket, along with the emissions, or inflation, of the network.
And then those are dispersed across the core actors, again, based on sort of how well they do each of their respective jobs.
And so that's sort of how the incentive flywheel works within the network.
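The flywheel described here, fees plus emissions pooled and then split by performance, can be sketched as simple pro-rata accounting. This is illustrative only; the network's actual reward math is more involved:

```python
def epoch_rewards(fees, emissions, performance):
    """Pool fee revenue with token emissions, then split pro rata.

    performance: {actor: score for how well it did its job this epoch}.
    The scores are hypothetical; any positive scale works for a
    pro-rata split, since only the ratios matter.
    """
    pool = fees + emissions
    total = sum(performance.values())
    return {actor: pool * score / total
            for actor, score in performance.items()}
```

So an epoch with 60 in fees and 40 in emissions yields a pool of 100, and an actor holding three-fifths of the total performance score would take 60 of it.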
Now, one set of actors we haven't talked about are the application developers that need predictions from these models.
(27:25):
Let's talk about that.
That's kind of the other side of the market, right?
It's like these developers who need that type of information or data um or prediction.
Tell us about that.
Yeah.
So I think what the network really does for application developers, or people, in quotes, on this demand side of the network, is it sort of shifts the paradigm by
(27:50):
which we interact with AI, from what we have today, which is very model-centric, which we kind of talked a little bit about earlier, to one that is objective-centric.
So model-centric, again, being that I, as an application developer, or whoever, need to constantly be surveying the landscape or universe of models, trying to pick which model is the best for my given use case across all of the domains and contexts I'm going to operate in, and then constantly be doing this again as new models
(28:16):
arise, as my use case is iterated upon or changes, whatever it is.
And that's sort of a model-centric paradigm for interacting with AI.
What the network enables application developers who want to integrate AI into their product or application to do is, they just have to specify what they want AI to do well.
(28:36):
They just need to specify an objective function, and then this efficient market of models is all kind of competing, and in turn working together, to produce the best possible
outputs as a function of that.
And so it removes a lot of the overhead from the application developer to get access to the best AI for any given use case.
And the way we've kind of designed the network in these initial versions, and the kind of verticals we've paid particular attention to,
(29:03):
is in the kind of DeFi verticals, finance verticals, DeFi-adjacent things.
And so a lot of the application developers building on the network today are DeFi developers, they're DeFi protocols, they're people building AI-powered vaults in DeFi.
There's a lot of different AI agents that are interacting on-chain, sort of outsourcing their financial cognition to the network.
(29:26):
And so that's where a lot of the kind of application builders today are building.
And yeah.
Is there a specific DApp developer that you could spotlight?
Maybe just to give us a sense of the liveliness and what it looks like, maybe, for the audience.
(29:49):
Yeah, I think a few kind of cool examples are: one of the earlier applications that went live when the testnet for the network started was PancakeSwap.
And they have this kind of prediction market game where every five or 10 minutes, people are betting on whether or not they think the price of ETH USD is going to go up or
(30:09):
down in the next 10 minutes.
And they're betting against each other in sort of the V1 state of this game.
And what they did when the Allora testnet came live is, instead of you and I betting against each other on whether it will go up or down, the kind of network of models generating these predictions on Allora are emitting these predictions on a regular cadence.
And then users are betting that the AI is going to be correct, or they're betting the AI is going to be incorrect, and they're paid based on whether they're betting with or against the
(30:37):
AI, which I think is kind of an interesting way to make these types of long tail, noisy prediction market verticals more efficient, as a function of injecting some more informed input at the kind of base layer.
There's been some other pretty cool stuff around prediction markets, I think.
I think prediction markets are interesting for the network because they're hyper long tail.
(30:59):
They really do benefit from having this more efficient source of compute, which is AI.
And so there was a team that we work closely with called RoboNet that builds AI agents in the DeFi space.
And when the US presidential election was happening, they built an agent for trading the Polymarket US general election markets, based on a myriad of different political models
(31:23):
that were running on the network, in a political, or presidential election, topic on the network.
And it was able to trade quite successfully, taking a fairly hedged posture, or risk-mitigated posture, in these markets.
I think it generated like 68% APY annualized just by trading on these markets,
which is pretty cool in such a nebulous, long tail market, just plugging into these different AI models.
(31:47):
And then you see things that I think are more accessible or more familiar to everyday DeFi users, just like general DeFi vaults.
There's a number of vaults live, but there's a vault on a protocol called Vectis right now that's taking a bunch of price predictions for SOL from the network, and then using those to inform a kind of directional SOL trading strategy, powered by AI, that any sort of user can get access to, or exposure to, just by
(32:16):
depositing in the vault.
And it's performing quite well.
It's leveraging this kind of more informed or more expressive form of compute, a bunch of these AI models predicting the price of SOL, to get a sense of
what AI-powered DeFi primitives could look like.
And so those are some of the things I think are kind of cooler that are in market right now.
(32:37):
That sounds really cool.
As you were speaking, I was thinking of perps exchanges and how the output of some of the models on Allora could be really helpful for perps traders.
ah Have you looked into that?
Yeah, I want to say the Vectis vault is trading perps markets, because you can think of even higher yield or more capital efficient
(33:01):
strategies. Instead of a strategy that, say, uses SOL-USD price predictions to buy SOL when it predicts the price is going up and then sell SOL into USD
when it thinks the price is going down,
it's instead going long, and maybe even informing its margin, when the price is predicted to go up, and then going short, informing its margin, when it's predicted to go down.
(33:24):
So you can capture both directions of price movement as informed by these AI models.
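As a rough sketch of that long/short idea, here is a toy sizing rule: go long when the model predicts up, short when it predicts down, with margin scaled by the size of the predicted move. The sizing rule (1x leverage per predicted 1% move, capped) and every name here are invented for illustration; this is not anything Allora or Vectis actually runs.

```python
def perp_position(predicted_price: float, current_price: float,
                  max_leverage: float = 5.0) -> float:
    """Signed position size as a multiple of account equity.
    Positive means long, negative means short."""
    expected_move = (predicted_price - current_price) / current_price
    # Assumed conviction rule: 1x leverage per predicted 1% move, capped.
    leverage = min(abs(expected_move) * 100.0, max_leverage)
    direction = 1.0 if expected_move > 0 else -1.0
    return direction * leverage
```

So a model predicting SOL at 153 against a spot of 150 (a 2% expected move up) would map to roughly a 2x long, and a prediction far below spot would map to a capped short.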
So yeah, we're doing uh a bunch of stuff in kind of the perps verticals.
We're seeing a number of developers in the ecosystem experimenting with different perpsvaults or perps tools that are leveraging the network right now.
I see that as a massive market.
(33:46):
As I think of crypto, there's stablecoins, there's perpetual exchanges, and really capital formation; those are the three main product-market-fit things in crypto that I think will
be exported to TradFi.
Perps exchanges are an absolutely massive market and so innovative that...
(34:11):
I'm kind of proud that like that was born in crypto.
But I can totally see, you know, Allora working with one of these perps exchanges.
That would be so beneficial, I think, for perps traders.
um Yeah, I agree.
(34:33):
I think you could even start to do even more exotic and interesting things as well, in that, especially in cash-settled markets, you don't need the underlying to
interact with those markets, right?
You just need a price feed and collateral and capital trading in them.
And so you could theorize a whole new suite of perps markets by just plugging into
(34:57):
kind of AI-generated price feeds being produced by these clusters of models on the network, and then using those as the oracle powering markets that are too long tail or
too exotic to be supported by existing oracle infrastructure.
And so I think there's a lot of really cool stuff you can do even just outside of the existing market domains where perps are really active.
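One simple way to picture an AI-generated price feed like the one described above: collapse several models' price predictions into a single oracle value with a median, which tolerates a minority of wild or malicious outputs. This is purely an illustration of the concept, not Allora's actual aggregation mechanism.

```python
from statistics import median

def oracle_price(predictions: list[float]) -> float:
    """Aggregate per-model price predictions into one feed value.
    The median ignores a minority of outlier predictions."""
    if not predictions:
        raise ValueError("need at least one model prediction")
    return median(predictions)
```

With predictions of 100, 101, and a bad outlier of 250, the feed still reports 101.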
(35:20):
I met with Hibachi last week.
They're an up and coming perps exchange.
And we talked a lot about kind of like what the competitive landscape looks like.
And there's going to be a lot of winners, right?
But right now, Hyperliquid is kind of the main actor.
uh But there's going to be a lot of winners.
(35:40):
It's a very large market and there's a lot of space for everybody.
I wonder if Allora, if you guys have thought about working directly with Hyperliquid, for example, and using their builder codes, where perps traders
could work through Allora versus trading directly on Hyperliquid.
(36:02):
Is that uh something that you guys have looked at?
It's something we've chatted about a little bit on the team.
We've been supporting a number of build-outs of vaults on Hyperliquid informed by Allora strategies for a while.
But some of the stuff relating to builder codes is still kind of in the early stages, just being thought through on our team, seeing what
(36:23):
that could look like basically.
Yeah.
Let's talk about the go-to-market and where Allora is in terms of, you know, testnet, mainnet, et cetera.
The go-to-market feels pretty complicated because, you know, on the supply side you've got inference workers, forecasters, reputers, and then you've got developers on
the other side, the demand side.
(36:44):
Like, what does your go-to market look like and how are you getting the word out, brandawareness, and getting people to become involved in the network?
Yeah, on the demand side, I think a healthy ecosystem has built up at this point.
I think DeFi is where most of crypto's product-market fit has been found to date.
(37:05):
I also think AI is sort of most mature in financial domains; it's been used in finance for decades at this point.
And so we found just a lot of these kind of light bulb moments in talking about AI-enhanced DeFi, and
seeing a bunch of excitement and development from individual developers, different protocol teams, et cetera.
(37:28):
And so starting with a vertical such as DeFi, where there is such a clear synergy with AI being integrated, where it does represent the majority of crypto's activity, et cetera,
has been sort of a core piece of the go-to-market on the demand side.
And then on the supply side, for lack of a better term, these model developers, I think the value proposition is fairly clear and compelling in that today, if I'm a model
(37:56):
developer, there's a pretty long path from developing some useful model to capturing value from it.
I build a model, and then if it's worthy enough to sort of turn into a company, I have to go raise capital,
stand up a bunch of the administrative pieces of running a company, build a product, achieve distribution and PMF, things like that.
(38:16):
If it's fund related, I go through sort of the fund related pieces of overhead.
Maybe I'm building a model just to hopefully get hired at some AI lab or something likethat.
But what Allora does is create the shortest path possible, essentially, from having some useful innovation in model development, building some good model, to turning that
into value, capturing value from it just by running it on the network.
(38:39):
It's not too dissimilar from what Bitcoin's done to energy markets, in that prior to Bitcoin, if I had access to energy, I'd have to sell it to a grid, stand up an energy company, find
ways to turn that energy into value.
Bitcoin creates this efficient market for energy just by allowing people with energy to turn it into value by mining Bitcoin.
What Allora enables is kind of that for model developers and data scientists: turning models into value efficiently.
(39:06):
And so we've actually seen a lot of adoption amongst model developers.
I think close to 300,000 workers, or models, have been registered to the network since testnet started, which I think is a signal that that value
proposition is sound.
And now a lot of what we're doing is working to make that side of the network more accessible for less crypto-native AI developers,
(39:34):
because there's still a bit of DevOps, there's running nodes, there's participating in this network, etc.
And so abstracting away as much of the blockchain pieces from the model developer journey is becoming a big priority.
And we're working on a few exciting things to really abstract away that blockchain piece for AI developers.
And so that's a bit of what the go-to-market is; the value prop on either side, I think, is quite sound.
(39:58):
I want to go into a little bit about abstracting the crypto piece, because that's kind of a theme I'm seeing with a lot of projects now: the crypto aspect becomes
a source of friction for a lot of both users as well as developers,
and so abstracting that has become a priority.
(40:20):
Tell us about kind of what led you and the team to think more about that and how it affects
the model developer journey, I guess.
Yeah, I think so.
Just to make it concrete, I think one of the most material sort of headwinds for model developers getting onto the network is ensuring that they're packaging their models
(40:45):
in these kind of worker nodes and then deploying them, ensuring they're always online, ensuring that they're delivering the right structure of information to the chain
on a regular basis, things like this.
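That worker-node responsibility, reduced to a toy loop, might look like the sketch below: run the packaged model and push a prediction to the chain on a fixed cadence. Every name here (`run_worker`, `submit_inference`, the topic id) is a made-up placeholder, not Allora's real client API.

```python
import time
from typing import Callable

def run_worker(model: Callable[[], float],
               submit_inference: Callable[[int, float], None],
               topic_id: int, rounds: int,
               cadence_seconds: float = 0.0) -> None:
    """Emit one inference per round at a fixed cadence."""
    for _ in range(rounds):
        prediction = model()                    # run the packaged model
        submit_inference(topic_id, prediction)  # e.g. a signed tx to the chain
        if cadence_seconds:
            time.sleep(cadence_seconds)         # hold the cadence
```

A real deployment would run indefinitely, restart on failure, and sign each submission; abstracting exactly this kind of plumbing is what's being described here.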
And when you think about sort of the pool of model developers, there's a pool of model developers that are really good at building models.
And then when you impose this additional requirement that these model developers also have to be somewhat crypto native, or have to have material enough MLOps or DevOps
(41:13):
experience, that pool shrinks considerably.
And so it becomes kind of paramount, in just maximizing your total addressable market on the supply side of model developers, to try to keep it purely to model developers by
abstracting away all these sort of confounding elements that may reduce the size of that market, that may leave the addressable market being some
meaningfully smaller subset of the total market.
(41:39):
I think in addition to that, the rationale behind further abstraction is that while the crypto space, I think it's safe to say at this point, has gotten quite excited
about AI, I don't think that's a bidirectional enthusiasm.
I think the AI industry at large is still quite skeptical of crypto
(42:00):
in various ways.
And so I think in order to tap into the sort of largest and most competent talent pool of AI developers, you have to create an experience that feels comfortable,
that doesn't feel like crypto to them.
And so that's been a big component of it as well: creating the sort of bridge coming back the other way, getting AI people more into the crypto ecosystem, largely via
(42:26):
abstraction of the crypto pieces.
I think that's really a mature kind of approach, because it shows that you have an understanding of AI developers, or model developers, and really the AI developer
persona, and that a lot of them just
have not been interested in crypto historically and maybe have no interest in it at all.
(42:53):
But they're still interested in creating a model that is helpful and useful.
And so abstracting that crypto piece away, I think, makes a ton of sense.
And I can see how that will increase adoption of model developers for Allora.
um In which stage is Allora right now?
(43:14):
Yeah, so we've been running test net since I think last July, give or take.
We've gone through many different versions, like iterations, version upgrades of thenetwork throughout test net.
Back in February, we released Dev Mainnet, which is basically the mainnet instantiation of the network,
but not with all of the features, emissions, et cetera, turned on; it's more so for developers to get onboarded to
(43:40):
the mainnet environment that will be the environment when public mainnet is released.
So it's just about wrapping up a few of the last pieces with that mainnet and then kind of turning on the full feature set and releasing mainnet to the public.
And so it's fairly mature in its development cycle, in terms of being quite close to this kind of public mainnet release.
(44:03):
That's exciting.
And you don't have to share any dates, obviously, but I'm curious: what does that roadmap to mainnet look like, and kind of the work involved for you and the team?
Um, there's not too much, frankly. The network is in a quite stable state.
It's running well, running as expected.
So it's more about wrapping up some development around some of the adjacent infrastructure, adjacent tooling, kind of like visualizations of the network,
(44:32):
so people can more easily access the data flowing through the network, continuing to onboard more cohorts
of model developers onto the network, things like that.
And so it's really just these last sort of adjacent pieces, to ensure that the network is as accessible and as populated on day one as possible.
(44:52):
Okay, a spicy question.
You ready?
Okay, so let's say Allora works. Imagine the wildest dream you've got, and Allora totally works.
Like who is disrupted?
Like which incumbents will be disrupted the most?
Um, yeah, that's a good question.
(45:14):
I think the easy answer to that, which maybe is just my answer, is probably a lot of the large model companies today.
Not because I think they go away or anything.
I think there is such an imbalance in terms of the market dominance that they hold today, as a function of there not being a competitive alternative in terms of these
(45:36):
efficient market environments for provisioning and coordinating models.
And so I think the most obvious answer is probably the existing model companies, and they will still hold their portion of the market, in a form factor that
looks like how it looks today.
They, or people leveraging their infrastructure, may even run models on the network and introduce these additional sort of revenue streams as a function of just
(46:03):
like contributing to the network's intelligence and in turn capturing value from that.
In terms of one category of existing participants in the industry, that's probably the one that is most disrupted.
Nick, is there anything that we haven't talked about today that you wish we had?
(46:24):
I think you did a great job.
I think this is pretty comprehensive; we covered everything.
I'd just say the more people who want to get involved in the crypto AI industry, the community, joining various communities, not just our project, the better. There's a kind of
excitement that is often only present in the very early days of a new category or industry being stood up, and it's palpable.
(46:53):
It's exciting to be a part of.
And so I'd really encourage anyone interested in this space, even vaguely, to get integrated into these
communities and play a part in it, or just be a part of it.
Okay, last question.
um Since you mentioned community, I want to talk about that.
(47:13):
Now, with almost all crypto projects, you've got various personas of community members.
You've got, at least for Allora, I'm guessing, model developers, and you've got application developers who are part of the community.
Is there room for community members that are not developers, that are not AI developers?
(47:34):
Maybe they just want to be part of a project that they believe in.
Is there room for that type of persona?
And what are some things they can do to contribute?
Yeah, for sure.
I think there's pockets of the community today, even, where it's a less technical crowd that is just really enthusiastic about crypto AI, specifically what we're trying to do in
(47:59):
decentralizing intelligence, things like this.
And I think the lowest-hanging fruit for being involved is honestly just being a positive force: spreading the message, bringing new people into the community,
assisting in this almost societal mindset shift of thinking of AI as more multi-dimensional than we think of it today, of centralized AI not needing to be the only
(48:26):
option, of open source AI not being the only alternative to centralized AI.
There's a whole other category of AI, decentralized AI, agnostic to open source or closed source, that I think is still underexplored in a lot of what we're pushing, and that
community members are very actively involved in today.
And then in terms of getting involved in the network more directly, I think there's a myriad of interactions, from contributing to the network's economic security, staking in
(48:56):
the network and capturing some of the emissions and fees from that while contributing to the network's overall economic security, to contributing to, interacting
with, or experimenting with the lower-code, no-code
tools being built around the network, like no-code agent builders that are powered by Allora out of the box, or other types of products that are being experimented
(49:20):
with that require a less technical user base, those types of things.
I think there's lots of ways for less technical people to really be an important part of the community.
Cool.
Well, Nick Emmons from Allora Labs, thank you so much.
Yeah, thanks for having me.