
December 1, 2023 95 mins

Ready to delve into the world of AI startups, media bias, and the future of research and education? This episode promises an enlightening conversation with Ben and Justin, a.k.a. 'the AI boys', the brains (and potential boy band) behind Raava, an emerging AI startup and impact member at CoLabs Australia. We explore their journey into this fascinating field and the decision that led them to leave their academic pursuits to focus on this promising new frontier at the absolute cutting edge of applied AI. We also explore AI's potential use as a tool for good, the ethical implications, and the unique opportunities available for pioneers in the AI space.

Ever considered how media bias and algorithms can limit our exposure to diverse perspectives? The conversation takes an interesting turn when we discuss this overlooked aspect of our digital lives. We dissect how custom news feeds can lead to polarisation, and the challenges inherent in a system driven by advertising revenue. From Twitter's community notes to other tools designed to combat bias, we dig deep into the initial conditions of these systems and the urgency of working towards unity rather than division, and explore how Raava is working on a media bias plugin to help folks make sense of the information they consume.

Lastly, we dive into the potential impact of technology on the world, the importance of wide boundary thinking in its creation, and touch briefly on the concept of complex adaptive systems. We traverse a range of topics, from using technology to solve challenges in different bioregions, to the potential of an 'AI tech stack platform' that could assess the reliability of scientific data. The discussion is jam-packed with insights that will leave you with plenty to ponder. So, join us as we explore the future together, guided by the insights of our inquisitive guests, Ben and Justin. Let's expand our horizons, shall we?

Keep up to date with Raava:
- Website
- LinkedIn

Still Curious? Check out what we're up to:

Or sign up for our newsletter to stay in the loop.

This experimental and emergent podcast will continue adapting and evolving in response to our ever-changing environment and the community we support. If there are any topics you'd like us to cover, folks you'd like us to bring onto the show, or live events you feel would benefit the ecosystem, drop us a line at hello@colabs.com.au.

We're working on and supporting a range of community-led, impact-oriented initiatives spanning conservation, bioremediation, synthetic biology, biomaterials, and systems innovation.

If you have an idea that has the potential to support the thriving of people and the planet, get in contact! We'd love to help you bring your bio-led idea to life.

Otherwise, join our online community of innovators and change-makers via this link.





Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Samuel Wines (00:01):
Hello and welcome to another edition of the Strange Attractor. This time we sat down with Ben and Justin from Raava, so Ben Field and Justin Beaconsfield. They were our entrepreneurs in residence and have since set up an AI startup, and they operate from our space.

(00:23):
They are a very fun, dynamic duo, informally known as the 'AI boys' at the lab space, and, yeah, just thought we'd sit down and have a chat with them about some of the projects they're working on, because obviously the AI space is really, really fascinating right now with all the potential good and potential

(00:46):
harm that can be going on in this space. It's interesting to have two people at the forefront of that who are thinking deeply about these questions, about how we can ensure that we can have technology that helps humanity get back in alignment with the natural patterns and principles of the biosphere that we inhabit.

(01:08):
Yeah, good little conversation. Great that Andrew could join us halfway through, and without further ado, I hope you enjoy this conversation with Ben and Justin. Ben, Justin, welcome, and thank you for joining us for another episode of the Strange Attractor.

Ben Field (01:29):
Thanks for having us. Thanks, Sam.

Samuel Wines (01:31):
Yeah, no, it's a pleasure to have you here. I feel like there's such good banter that we have all the time that I wish we could just somehow have had recorded. I mean, we could probably ask Apple, I'm sure they record everything, but it would have been great to have that in a conversation somewhere. But I reckon we're going to be able to recreate some of that in this dialogue.

(01:51):
So do you want to give us a little bit of an intro, like, to who you are and what you guys do?

Justin Beaconsfield (01:58):
Yeah, I'm Justin. I, what about me? I just finished up. Well, I didn't quite finish doing my masters in data science.
You didn't finish?
No, I had one semester to go, and then... I know, I messed up, but I left to start the business with Ben.

Samuel Wines (02:15):
I was going to.

Justin Beaconsfield (02:15):
also, I was actually originally going to leave to become a quantitative trader, and then Ben and I were getting super into AI. I was already interested in AI a lot through uni, more like the actual, like, building of AI. But then, seeing the emergence of the tools and realizing there was a whole field of applying those AI tools, I got really interested in that,

(02:38):
chatted with Ben a heap about that. I was kind of traveling for the first seven months of the year, and then was going to come back home and start a job as a quant trader. And then I decided to blow that up to start this business with Ben instead, which we'll speak about more in a second, yeah, for sure. Yeah, I'll hand over to Ben, maybe.

(02:58):
Yeah, that's me.

Ben Field (03:01):
Yeah, I mean, I was studying biomed engineering and also kind of bummed out of that. Not bummed out, but I stopped that to do this with Justin. I've still got like a year and a half left, so hopefully I don't ever have to get that year and a half done, actually. Similar to Justin, honestly, I think we were just having a lot of really interesting conversations, and I think we were both kind of separately

(03:22):
thinking, like, I mean, I was, I was really interested, and still am really interested, in biology and synthetic biology, and that's how I know you, Sam, obviously. But I think Justin and I both had this kind of realization of, like, there's an inflection point going on here, and this is the point in time where you can build up a lot of leverage just by being early, and just by virtue of, like,

(03:46):
this is a field. AI is an old field, but applied AI and, like, AI engineering is a very quickly evolving field that there doesn't really exist much precedent for. The knowledge is evolving; there isn't really a syllabus for it yet, and so there's a lot of value in just

(04:06):
digging in and learning as much as you can. I think that's something Justin and I really enjoy doing, just researching and getting into rabbit holes, and I'd kind of got the building bug from some software we'd built prior to that. And, yeah, I think having someone else also keen to just dig in and get up to speed in the field and really get to

(04:29):
like, the frontier was a big motivating factor to kind of drop what I was doing and what I was interested in, to do something that felt more pressing from a career perspective, and something that felt like, this is both extremely interesting but also potentially highly valuable as a form of leverage in my ability to do stuff myself and do cool shit. Yeah, and, like, on the back of that...

Justin Beaconsfield (04:52):
Like, on that, I felt like a cool analogy for where we see the application of AI is: you had the invention of computers and whatnot. That was developing over a while, and computers started getting really good. And then from there there was the field of computer science that emerged, which all of a sudden had all these people studying not how to build computers, and not necessarily

(05:15):
understanding how computers worked under the hood. I mean, they usually want to understand it to some extent, but, you know, the field was about how can you create good algorithms or systems and whatnot, and leverage the technology? And we kind of maybe see AI heading to a similar point where it's like, okay, now you don't necessarily need to understand, well, you want to understand a bit of the computer science and math

(05:37):
and statistics that underlie the AI itself. But what if your specialty was applying the AI? And that was kind of the thinking for us. It's like, oh, because this AI as a tool that can be applied is really only emerging now, if we get to work right now, with the field having just started

(06:00):
, then we're just instantly going to be right up there with experts, because you can't be a 10-year expert in this field. It's existed, for, you know, right now, a bit more than 12 months, you could argue.

Ben Field (06:14):
I mean, there are people who've had head starts like this. Being good at this kind of new AI engineering thing seems like, mostly right now, a mix of being good at software engineering, being good at traditional machine learning, and then being good at just being quite creative and just making stuff. Yeah, but we've found we've learned the other skills fairly quickly and we are, like, getting there, and it's

(06:35):
just fun. I think I learn very quickly when I'm enjoying it, and I don't learn quickly when it feels arbitrary.

Samuel Wines (06:45):
So if I was going to frame that as kind of a natural evolution of tech, right, it's that you're kind of standing on the shoulders of giants. You're like, great, these people have opened this entirely new field, this is the beginning of infinity. Yeah, how can we build and develop things and take that next step? And how can we apply what was, you know, maybe

(07:05):
primary research, or just figuring out how things can work from a serious play perspective. And then you're sort of going, you know, now that we've had that convergence, that divergence, let's converge on how we can apply this to have meaningful impact in the world.

Ben Field (07:19):
Yeah, like, basically up until now, I think if you were applying AI in business, you were either a business that had an enormous amount of data and an enormous amount of technical expertise in house, or you were kind of researching stuff. And we've now gotten to the point where things that would have taken years of research and some serious

(07:41):
expertise and a group of PhDs can now kind of be done via an API call, and that opens up an enormous amount of surface area for applications. And it's also, like, kind of learning how to use these tools effectively, not even as a programmer, but just as an individual with a ChatGPT subscription, is going to become like a metaskill, I think, where it accelerates your ability to execute in

(08:03):
whatever else you're interested in. So, yeah, we're very interested in, I think, helping other people as well learn how to apply this metaskill to execute on whatever else they want to.

Justin Beaconsfield (08:19):
A lot of, like, what makes it really powerful is not just that it allows you to do things you couldn't previously. A lot of those things, as Ben kind of alluded to, if you put a really experienced team of ML engineers on the task, you could achieve a lot of the things that, like, ChatGPT can achieve. But what's kind of interesting is now, instead of putting in

(08:43):
months of development time with a bunch of experts, people can very simply achieve tasks. And what that winds up doing is it goes, okay, this thing that previously wasn't worth pursuing, because maybe it wasn't going to add enough value for the resources you were going to put into it, it now is worth pursuing. And if that is something that's part of

(09:06):
a bigger system, adds a heap of value, then it just unlocks all sorts of capabilities, with the fact that, oh, now the required input of resources to get some level of output is way less than before. So what can we do, now that it costs less in terms of time, expertise and money, to be able to achieve the same things and the same results?

Samuel Wines (09:27):
Yeah, so I know you've been working on a couple of projects whilst you're in the space, and we've bounced so many ideas off each other. But I thought maybe, speaking to how you've been going, okay, cool, if that's the case, how could we apply that? You know, maybe with the media bias sort of
Justin Beaconsfield (09:46):
thing that you've built. Yeah, that was the first project, yeah, the first.
I think a good example of, like, maybe where some of these large language models are really useful is the first project, pretty much, that we started looking at, which we're now starting to ramp up again a little. It was essentially just a project that would take in articles and read the

(10:07):
article, and then start assessing them for all sorts of bias. So it would start to look at things like how emotive the article is, if it's left or right leaning, if it's anti or pro establishment, which is something we kind of took from Max Tegmark at MIT with Improve the News, a really cool kind of inspiration for what we were doing. But the idea was, yeah, like, okay, now any article I'm reading, I

(10:32):
could hypothetically just have a large language model, which is a bit more objective, read that and just alert me to things that I might miss. And, you know, media bias is obviously something that can be so subconscious, like, in the way it kind of targets us, and just having that drawn to your attention kind of just tears apart so many things. But just using large language models to really simply parse

(10:55):
all this text input can just have really cool results.
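
(For readers who want to poke at the idea, here is a minimal sketch of the article-scoring approach Justin describes, assuming the OpenAI Python client. The model name, score axes, and prompt are illustrative, not Raava's actual implementation.)

```python
# Hedged sketch: ask an LLM to rate an article on a few bias axes.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

AXES = ["emotiveness", "left_right_lean", "anti_pro_establishment"]

def score_article(text: str) -> dict:
    """Return -1.0..1.0 scores for each bias axis, plus a short rationale."""
    prompt = (
        "Rate the following article on each of these axes from -1.0 to 1.0: "
        f"{', '.join(AXES)}. Respond in JSON with those keys plus 'rationale'."
        f"\n\nARTICLE:\n{text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(response.choices[0].message.content)

# e.g. score_article(article_text) ->
# {"emotiveness": 0.7, "left_right_lean": -0.2, ...}
```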

Ben Field (10:59):
I also think that idea was kind of spawned from a conversation we were having where it's very obvious that these tools are going to be used to generate disinformation at quite a large scale, and it's very obvious that these tools are going to attach to all these ideas. They already are, already are. Well, something I

(11:21):
learned recently that I found really interesting was, I didn't know that bot farms, up until this point, have actually been human beings. So bot farms, at least in the context of, like, Russian involvement in the US election and stuff like that, these bot farms are actually, a lot of the time, farms

(11:45):
of human beings sitting in a room, perhaps, like, on a ship in international waters, or some room in some country with pretty lax, like, focus on this sort of stuff. A new-age Gulag. Yeah, it is. And, like, each person

(12:06):
will man a computer and maybe, like, 20 or 30 phones, and they'll just be in charge of, like, I have 100 Twitter accounts and I'm just going to go through and comment a bunch of shit based on my objective. But it's actually been human beings. There are bots in the sense of things that just go and algorithmically, like, like stuff or follow people, but the bots that are actually making comments, like, for instance, if you're trying to inflame people

(12:29):
on the left with kind of inflammatory, like, right-wing comments on Facebook, those comments were being written by people with an objective. But now, with a large language model, something that you needed to pay people in the third world to do, you can do at a much greater scale, probably a higher level of sophistication, and that's really scary.

(12:51):
Like, I think these tools are going to objectively be used for pretty horrific things at a societal scale. And then, just now, we're having this conversation. We're like, it's very obvious that these tools will be used for that. How can these tools be also used to combat that?

Samuel Wines (13:04):
Like, preemptively, how can we ensure that there is stuff out there to assist and help people? Because you see this all the time, right? One of the main things that happens with everyone now is that, because of the algorithms and the custom news feeds, you never get fed multiple streams in your diet of information. It's like you only eat corn chips, you only eat ice cream. You don't get a diverse, nutritious range of foods coming

(13:27):
into your diet. And I find this so fascinating, that you either agree with someone or they hate you. There's no middle ground of, like, well, maybe some of what you just said is actually true and factual, and this part of it I actually disagree with. You can't have the nuance. It's got to be polarized and black or white.

Justin Beaconsfield (13:44):
I mean, part of that's, like, the whole TikTok generation, really, like, low attention span thing, because also how quickly the algorithm...

Ben Field (13:53):
Again, an example of AI being used for ill is how quickly the algorithm, how good the algorithm is at recognizing what evokes an emotional response from you, and then optimizing your feed based off that. Like, isn't there, there's something where you can get to within 10 minutes.

Justin Beaconsfield (14:09):
You can get to something like that, yeah. [inaudible crosstalk] ...these kind of things that only take three seconds to read or seven seconds

(14:32):
to watch, or something like that. And it's like, when you have that, then you need to put in all the information that you can in that three seconds or seven seconds or whatever it is. So, inherently, what you do is you are reductive, and you reduce, and that strips away any chance you have at nuance, because if I... It destroys context. Yeah, exactly.

(14:53):
All you can do is just pretty much take one side of any given thing, put one spin on it, and if you want it to be at all interesting, you can't give a neutral stance, because you have three seconds to explain what you're explaining. So you just do something inflammatory, and it just works totally hand in hand with that, and it polarizes us.

Samuel Wines (15:12):
So we need this media bias bar up and running.

Ben Field (15:15):
Yeah, and it was interesting thinking about the design of that. I mean, at the moment, that's just a little demo that we have running on our computer that works reasonably well.

Samuel Wines (15:25):
I was pleasantly surprised. Yeah, it works pretty well.

Ben Field (15:27):
But I think we're now going to work alongside you and alongside some other people to figure out how we can take this from demo into something real. But the design, how you think about the design and the distribution of these things, is as important as the capacity of the technology. Like, I think the best example of combating media bias that's

(15:47):
in the wild is honestly Twitter's community notes. I think, if we're talking about context, the fact that Twitter community notes expands the context scope of whatever piece of media you're looking at on Twitter is really valuable, because I often find that it serves to defuse the initial impact of whatever the skewed tweet was.

(16:09):
But the reason that's powerful is because it's operating at the scale of Twitter. Like, with our media bias thing, you have to convince people to download a Chrome extension and then use the Chrome extension, whereas you don't install Twitter community notes. If you're on Twitter, you get exposed to those notes. And I think combating bias is much more, in a sense, about distribution and about tweaking the algorithms of these

(16:30):
massively, these platforms of enormous reach, than it is about figuring out tools. So it's an interesting challenge.

Samuel Wines (16:41):
It's a tough one, right, cause it's like, when you have perverse incentives baked into the system, where you have a system entirely based off an advertising revenue model and eyeballs on screen, racing to the bottom of the brainstem, or limbic hijacking, is the best way to, I guess, evoke that feeling and get people

(17:05):
either enraged, angry, or whatever sort of heightened state of peak emotion, and then hitting them with something like an ad, or something that targets them when they're in a state of sensitivity. Yeah, I just find, yeah, it's so hard to compete with that unless you just actively look at the structures of the systems and acknowledge that someone designed that. Like, you

(17:28):
could literally, it's like the worst-kept secret. These have all been designed to an extent. You just need to have a look at those initial conditions and be like, maybe don't do that. But I guess, yeah, the issue is convincing the stakeholders to be able to do that.

Ben Field (17:41):
But maybe we just need to build tools for informational guerrilla warfare. Every time you step onto a social media platform, you're being exposed to an informational battlefield, whether you know it or not. And it is an interesting idea to kind of think about, like, what does an informational freedom fighter look like? What do the guerrilla warfare tactics look like on the online landscape?

Justin Beaconsfield (18:04):
I mean, there's so much to tackle as well. Community notes is great on Twitter, but if you had to then say, does it do more to polarize us or to unite us, you'd say almost certainly to polarize us. And it's like, well, okay, community notes exists, why isn't that solving all the issues? And it's like, well, the thing that it's solving is

(18:24):
blatant misinformation. If that's a lie, it's going to point out there's a lie. But the things that polarize us probably aren't, most predominantly.

Samuel Wines (18:32):
Sometimes they're truths.
Yeah, exactly, they're not.

Justin Beaconsfield (18:33):
It's just a half truth, or it's disinformation, or... But I think the most insidious ones, and probably the most common ones, it's not even that it's disinformation. It's not even that it's a half truth. It's just that it's presented in a way that frames you, it just positions you to kind of see the issue in one way,

(18:55):
or, like, yeah, just through the use of emotive language. Or it's not even just emotive language, like, I wish it could be reduced to things as simple as emotive language. But it's just really clever framing. It's just telling the facts in a certain order, in a certain way, that just paints a picture, and it's a really complex thing.

(19:16):
I'm not exactly sure how to... I wish there was an easy way to articulate exactly what's going on. But I think we all sense it.

Samuel Wines (19:23):
But this is the thing, again, going back to attractor points, right? We all know that that's what wins. So that is something that, in an arms race, or just in this sort of context, is upvoted, so to speak. And so if we know that that's what works, then everyone ends up reverse engineering what works from the accounts that do it best, and then suddenly everyone's doing that same thing, and it's just a race to the bottom sort

(19:44):
of thing. It's like a positive, reinforcing feedback loop of shitfuckery.

Justin Beaconsfield (19:48):
Yeah, I mean, yeah. The tricky thing is that, as much as it's hard for us to pick up these algorithms, the AI that's essentially running under the hood for all these social media platforms is able to pick up this nuance really well and just feed it to us. And it just knows that we're just going to react best to things

(20:10):
that sit in the confirmation bias realm and just affirm to us what we already believe. And it's really crazy. Like, I even noticed, I made a new Twitter account, and part of that was because I had an old Twitter account and I noticed that really quickly I just started getting a bunch of what I found to be quite intense, like, right-leaning posts

(20:34):
, really quickly, when I started reusing the account, and I was like, ugh, what's up?

Samuel Wines (20:38):
My algorithm is wrong, I'm just making a new account. Was this Twitter or X? X? OK, just had to clarify. So it's post-Elon takeover, yeah.

Justin Beaconsfield (20:46):
Yeah, I mean, it wouldn't surprise me either way. I mean, I'm not sure it's necessarily just an Elon thing. I also just think it was the realm I was looking at. I was looking at a lot of tech things and whatever, and I just think there was this pocket that I kind of wound up in. Or, like, I think I

(21:08):
looked at a few too many Elon Musk posts, because there is that pocket of, like, fans, and I just saw a couple of posts I found interesting, and then in half a second it was like, oh well, my feed is just, it's wrong.

Samuel Wines (21:19):
This is literally what we were talking about before, how we've been trying to think of how you could do this, like, a mind map, neural network of how things connect to one another. Essentially, what you're saying is, rather than leaving that shit under the hood, where no one actually knows what's going on and how these things interconnect and how they relate to content... Like, we were having a chat about, how could you actually use what's currently being used against us, for bad, for

(21:43):
impact-oriented innovation that can support a more resilient, regenerative future? Did you want to, like... I feel like this is a really interesting place where you could.

Justin Beaconsfield (21:50):
I hadn't considered that. Weave it in. Yeah, like, the idea of, if you could hypothetically see on a graph where those algorithms were pulling ideas together. Because all these social media networks, they run all the algorithms in a graph network form. They store all the information graphically, nodes connected to

(22:14):
other nodes, and seeing relationships, it's all about relational data. And it'd be really interesting if there was transparency.

Samuel Wines (22:22):
I understand.

Justin Beaconsfield (22:24):
The thing is transparency.

Samuel Wines (22:26):
It's all open source, but it's like, some visualization... I mean, the Twitter algorithm is open source.

Justin Beaconsfield (22:29):
It's open source, but we can't visualize
it.

Samuel Wines (22:31):
It's like, it's all representing numbers. Show me how clicking on this video ends up with neo-Nazis in Slovenia, or something like that. Yeah. And just playing and riffing on this thing, I'd really love to double click on what we were chatting about before. Like, how do you even frame that project?

Justin Beaconsfield (22:53):
You can probably speak to it well, Ben.

Ben Field (22:55):
So are we talking... we're still in the context of media bias, though? Maybe just, like, highlight what the graph...

Samuel Wines (23:00):
The graph network ideas, and then we can like
circle back and now we're going,like we remember how we're
talking about like an ecosystemtech stack of different AIs, so
the LLM, and then that'sintegrating with something else,
and then, at different scales,you've got different
relationalities.

Ben Field (23:15):
So, like, something that Justin and I have thought about a lot is the fact that there is an enormous amount of value, of novel information, that can be found through just synthesizing and connecting existing ideas that are potentially siloed away. Like, something I'm a big believer in is the value of

(23:36):
being interdisciplinary. And why is that? And I know that you're very big on that as well, Sam.
Samuel Wines (23:40):
Transdisciplinary from our point of view, but no
biggie.

Ben Field (23:43):
Transdisciplinary. But why is being transdisciplinary valuable? It's because you find novel connections and new ideas at the margins, not at the center.

Samuel Wines (23:55):
Yeah, the overlapping of two ecosystems, which is called an ecotone, is where you get the most biodiversity, and that's kind of where the best ideas also come from.

Ben Field (24:05):
And so we see these polymathic people who are incredible at coming up with extremely creative insight, because they are able to think like a designer, but they're also able to think like an engineer, and they're able to think like an artist, and they can take ideas from all of these different spots and put them together. And it's like, well, how can we use the new tools that have become

(24:26):
available to help expedite that? And I think something that's really interesting is, we've got all this scientific knowledge, but a lot of it is siloed away in different disciplines, just by the nature of publications and different disciplines at university, and people specialize. Like, you have to specialize, but specialists rarely communicate with each other.

(24:47):
And particularly the scientific literature. There is this idea of disjoint sets of literature that very rarely interact with each other. But if you were able to link insight from the two disjoint sets, then you can potentially come up with something very valuable. And the idea of this is, there was this guy, it's called Swanson linking, who came up with a novel way of treating

(25:09):
headaches, because he was reading about how fish oil impacts magnesium levels somewhere else, and because he could connect A and B, and he knew that B connects to C, then he could connect A to C. But that was only because he was able to look at these disjoint sets of literature and synthesize.
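
(As an aside, the A-B-C pattern Ben describes is easy to mimic on toy data: if one literature links A to B and another links B to C, the shared B terms flag an implicit A-C hypothesis. A minimal sketch using networkx follows; the terms and edges are illustrative, not Swanson's actual data.)

```python
# Hedged sketch of Swanson-style literature linking over a toy graph.
import networkx as nx

G = nx.Graph()
G.add_edge("fish oil", "blood viscosity")       # reported in literature set 1
G.add_edge("blood viscosity", "migraine")       # reported in literature set 2
G.add_edge("fish oil", "platelet aggregation")
G.add_edge("platelet aggregation", "migraine")

def bridging_terms(graph: nx.Graph, a: str, c: str) -> list[str]:
    """B-terms connecting a and c when no paper links them directly."""
    if graph.has_edge(a, c):
        return []
    return sorted(set(graph[a]) & set(graph[c]))

print(bridging_terms(G, "fish oil", "migraine"))
# ['blood viscosity', 'platelet aggregation']: candidate hypotheses to vet.
```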

Samuel Wines (25:27):
Proper systems thinking.

Ben Field (25:29):
And so we were having a conversation today about how LLMs aren't good at creative insight. Vertically, they are not more intelligent than a human being. Vertically, they are less creative, less good at complex reasoning, than even the average human being. But you can horizontally scale them so that they are able to

(25:53):
consume and parse huge amounts of information. It's like, what if you set LLMs to just extract insight from the entire body of the scientific literature, and to extract entities and nodes of these graphs, and connections between the nodes, and then create a different representation of that scientific literature than just a 500-gigabyte repository

(26:18):
of text? Maybe we could have it in a database form, and you can query that database. Or maybe you have it in some new way that allows expert humans to then kind of see those connections without having to read the different articles.
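
(A minimal sketch of the extraction step Ben is gesturing at, assuming the OpenAI Python client and networkx. The prompt, model name, and node schema are assumptions for illustration, not a description of Raava's system.)

```python
# Hedged sketch: use an LLM to pull key ideas from each paper, then store
# papers and ideas as nodes in one shared graph.
import json
from openai import OpenAI
import networkx as nx

client = OpenAI()
G = nx.Graph()

def extract_ideas(abstract: str) -> list[str]:
    """Ask the model for the key scientific ideas in one abstract."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": 'List the key scientific ideas in this abstract as '
                       'JSON like {"ideas": ["..."]}.\n\n' + abstract,
        }],
    )
    return json.loads(response.choices[0].message.content)["ideas"]

def add_paper(paper_id: str, abstract: str) -> None:
    """Link a paper node to one idea node per extracted idea."""
    G.add_node(paper_id, kind="paper")
    for idea in extract_ideas(abstract):
        G.add_node(idea.lower(), kind="idea")
        G.add_edge(paper_id, idea.lower())
```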

Samuel Wines (26:27):
Or you could then use prompts to an LLM like GPT off the back of that, and go, can you find any novel ways in which... Like, for example, you could ask a question and be like, there's got to be something in complexity science that weaves in with biology and physics, so that we can try and find a way to tap into, maybe,

(26:51):
quantum biology, to then figure out how to harness photosynthesis, and then use that as a way to do... And then suddenly, what you're saying is that you can kind of weave in insights from, I guess, the biological, cognitive, social or ecological lenses of looking at life, and you can sort of amalgamate them together to come up with something that's

(27:12):
qualitatively different than if it was stuck within any of the individual disciplines.

Ben Field (27:17):
And the thing is, it'll still be expert humans who are making that final leap, I think. But having a new way of visualizing that information, or a new way of traversing that informational tree, is going to hopefully be really valuable.

Justin Beaconsfield (27:28):
Yeah, and so the main thing here is, can you use the power of large language models to go through all this information? So, the original one we really spoke about was medicine. Could you go through the body of all, you know, medical papers? Could you extract relevant information? Like, we're still kind of deciding what you want to do.

Ben Field (27:50):
I mean, it's a very larval idea right now.

Justin Beaconsfield (27:52):
It's like, maybe it's just keywords, or not even keywords, but key ideas that are discussed in each paper. And then you make each of those ideas a node, all the papers are a node, and then you just start linking everything up. And then it's like, someone wants to explore some idea related to, I don't know, some disease, disease

(28:13):
X, and they realize that disease X has this feature. And now, because it's represented graphically, disease Y also has that feature, and it's like, oh, but we actually solved disease Y, we did this. And then we can realize, oh, disease X, actually we hadn't considered that, but could we apply what we know here?

(28:33):
And that's because, I guess, we've seen that there seems to be something that works really well with this graphical representation, like on social media and things like that. And the cool thing that large language models allow us to do really well is that, previously, the task of taking all these papers and just extracting key ideas is a really difficult task. And it's not a difficult task because we as humans aren't

(28:55):
smart enough to do it. It's quite a low-level reasoning task. It's simple reasoning.
Samuel Wines (29:00):
Perfect for an AI.

Justin Beaconsfield (29:01):
Exactly, but it's at scale, and that's the thing we can't do. Like, I can't sit there and read a million papers in a day.

Ben Field (29:09):
Well, that's the thing. This Swanson guy, he was like a doctor in the 80s. I don't even know if he was a doctor, maybe just, like, an avid medical literature kind of nerd.
Samuel Wines (29:20):
But, like, nerd in a complimentary way. We love nerds here, we do.

Ben Field (29:24):
But he just took in a large body of information that it was uncommon to take in, in the combinatorial sense, and because of that he was able to connect these nodes.

Samuel Wines (29:41):
Slip on my microphone.

Ben Field (29:43):
And as we were researching this, it turns out there's a whole field called literature-based discovery.

Samuel Wines (29:49):
But I think you're taking it to a whole other level, though. What you're doing is a meta literature review. This is like a literature review that creates literature reviews of all of the work and literature reviews, and then finds the threads that connect them. It's like creating a mycelial network that can go out and weave together

(30:10):
all, like, this rich tapestry of human knowledge. Like, we've gone out into all these different disciplines and we've discovered so, so much, but the issue is we haven't found a way for them to communicate effectively together.

Ben Field (30:22):
And, like, the internet has been this incredible accelerator of access to information. Now we need better ways to traverse that information.

Samuel Wines (30:32):
Like.

Ben Field (30:32):
Google has opened up this whole world of the ability for, like, a polymath or an autodidact to kind of go off and read Wikipedia pages and find courses. But is that the optimal way to find information? Is that the optimal way to connect information?

Samuel Wines (30:46):
Can you reduce the foraging from having to actually go out and do that, to just something that has all of the food sources there for you?

Justin Beaconsfield (30:55):
And I think what will also be interesting is, initially, when you create a graph network like this of information, it starts becoming easier for some human to read in relevant information and come up with novel ideas, because they're looking at the network. And if you've got the example of disease X and disease Y, maybe this particular

(31:17):
person is trying to work on disease X, and so now they can look at the graph network. But the next extension to that is, well, we also seem to have a lot of algorithms and AI that have existed now for over a decade, in, like, social media, that are really good at drawing inferences from these graph networks, and there's, like,

(31:39):
graph neural nets. And so it's like, okay, what if, first of all, we use LLMs, large language models, to create the graph network really efficiently, because it just does all this tedious work for us, but then we use clever algorithms and clever AI of a different sort to

(32:01):
now start reading in that information and drawing inferences from that, and can we have something emerge, or, at the very least, suggestions? And, yes, humans will probably need to be in the loop. Almost not even probably, we certainly need to be in the loop of taking those suggestions and running with them and researching it. But can we use these things to just point us in the right direction?
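
(Justin mentions graph neural nets; as a much lighter stand-in, classical link-prediction heuristics over the same paper/idea graph already produce ranked suggestions. A sketch using networkx's Adamic-Adar index, assuming the graph built in the earlier sketch.)

```python
# Hedged sketch: rank unlinked idea pairs by Adamic-Adar similarity as a
# cheap classical stand-in for GNN-style inference over the graph.
import networkx as nx

def suggest_links(graph: nx.Graph, top_k: int = 10):
    """Top_k most promising idea-idea links not yet in the graph."""
    ideas = [n for n, d in graph.nodes(data=True) if d.get("kind") == "idea"]
    pairs = [
        (a, b)
        for i, a in enumerate(ideas)
        for b in ideas[i + 1:]
        if not graph.has_edge(a, b)
    ]
    scored = nx.adamic_adar_index(graph, pairs)  # yields (u, v, score)
    return sorted(scored, key=lambda t: t[2], reverse=True)[:top_k]

# Each suggestion is a lead for a human expert to vet, not an answer.
```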

Ben Field (32:22):
Just shine the light, just accelerate discovery, just accelerate the ability for any one individual to increase their output, increase their effectiveness. And then, if you do that across the scale of society, then you increase society's output, society's ability to innovate and to experiment and to find connections. Like, we increase

(32:43):
the value of our informational assets.

Justin Beaconsfield (32:48):
It's scary... the other, I mean, it's not scary. It's mostly good, but I guess, yeah, the thing that this is doing is it's shining light in the dark places we hadn't considered, like, or consider this, consider that. But then, evidently, when you...

Samuel Wines (33:00):
I'll be back. You guys keep going. Yeah, I'll see if I can drag Andrew in.

Justin Beaconsfield (33:03):
Yeah, yes, yeah. So actually, when you shine light into those kind of dark places, you will also wind up illuminating things that it would probably be better for people not to discover. Like, how can you use these things to be exploitative? Like, you know, some pharmaceutical company could

(33:25):
come along with some new drug that, I don't know what it does, but, you know, it's detrimental to people, but gets people super hooked, and it's super easy to sell. But, I mean, that's the thing. Any new bit of technology has pros and cons. I think overall it's super positive. It's just always something you've got to consider. It's like, all right, we're creating new power, just make sure you do as much as you can to funnel it towards good, not

(33:47):
bad.

Ben Field (33:47):
I do think, if you're going to work in technology, you kind of have to be an optimist for the value of technology as well, right? I mean, we've spoken about Zuckerberg's appearance on Lex, and that shifted my opinion of Zuckerberg a little bit, in the sense that I actually think he truly... but whether what he's building is for the good of humanity or not is a separate question. But I

(34:08):
think, if you're going to build things, you have to believe that you're building stuff for good. And that doesn't mean being, like, a Pollyanna, being blindly optimistic. But I think you have to believe that, if you're going to create technology, it creates more net good, its net impact is good. I think the progress is good. You have to believe the progress is good.

Justin Beaconsfield (34:29):
I mean, yes, to an extent, Ben, but to me, the way I would like to think about it is not to reduce it to something so simple as to say progress is good or progress is bad. I think it's to say that some progress is good, some progress is bad, and that we should stay aware, and we don't just blanket enjoy all progress. Because, okay, you take

(34:51):
the case of Zuckerberg. You know, I personally don't think that social media, or at least the form it's in, is fantastic for the world. I think we've got a lot of mental health issues and whatnot that stem from social media. I don't think it's fantastic. But if you ask Zuckerberg about it, I truly believe that he wholeheartedly thinks he's doing a good thing for the world. He's connecting people. Yeah, and I guess what I'm saying is that, maybe you have to be

(35:16):
blindly optimistic to be able to be the one to create it, but it doesn't necessarily mean that you're going to do good for the world. And I think it is always worth, like, constantly questioning... You have to believe you're doing it for the world. No, I know, but I guess what I'm saying is, you can't just blindly believe that things are going to be good, because that

(35:37):
can lead you down the wrong path. It can lead you to blinding yourself. So I think it's always worth staying vigilant and staying aware of, I don't know, just trying to make sure that whatever you're building is going to have...

Ben Field (35:53):
I think vigilance, but optimistic vigilance. I think you have to believe that technology, when done correctly, creates good.

Justin Beaconsfield (36:00):
Yeah.

Ben Field (36:01):
What's the point in making it?

Justin Beaconsfield (36:03):
I feel like I've heard Lex Fridman talking about this a bit, but he talks about, like, you have to be an idealist to the extent that you have to have a vision for a better world that you can pursue. If you sit there as a skeptic and you're like, the world's messed and everything's terrible, well, then you're

(36:24):
almost just kind of manifesting that in the world itself. Whereas, the idea of having this idealistic view of an ideal future and then working towards it... You have to believe in a better future if you want to make the world better. You absolutely have to be optimistic and believe in a better future. But I think still treading with caution, and not just blindly thinking that everything you do that changes

(36:44):
the world will change it for the better, is also really important.

Samuel Wines (36:47):
I think what you're doing there is essentially speaking the language of complexity. Do you want some more water? Yeah.

Andrew Gray (36:55):
I'll get it.

Samuel Wines (36:55):
I'll get back to that sentiment in a second. What you're essentially saying is, you're kind of referring to a complex adaptive system, like a socio-ecological system, where you have the propensity to learn and adapt and evolve in an ongoing manner.

(37:17):
And what you need to be doing when we're trying to create this technology is, we need to be having these feedback loops of checking and ensuring. It's like a design-led approach to tech and innovation, where we're ensuring, like, is this, you know, supporting a resilient, regenerative future? Is this actively helping bring humanity within planetary

(37:38):
boundaries, or raising social foundations for everyone? And being real with that, and looking at that, and having strong views loosely held, so that if something comes back, you need to be willing and open and accepting of dissonance, and going, okay, cool, this is not what I thought. And then you sit with that, and you're using collective intelligence, not just one person sitting there. You go,

(38:00):
okay, is this... Just questions, constantly questions. There's no answer that will be right 100% of the time. The tech and the things that we do now will be the problems of tomorrow, but how can we do our best to ensure that, for the most part, these technologies are going to be net positive? You know, and these things, it takes more time to do things

(38:22):
ethically and sustainably than it does to not. But I think what's fascinating about tools like this is they can actually open up and allow you to have more effective deliberation and still be somewhat rapid in the development process.

Ben Field (38:38):
I guess. But yeah, I mean, we've still got a big engineering challenge ahead of us to build it, but I think the concepts are really...

Samuel Wines (38:50):
Man, I'm 100%, yeah. Like, I'm so down to try and find some capital to support something like that. I think it's such a fascinating concept, to be able to effectively and systematically address all of, let's say, the challenges that we might face in our current bioregion, or any bioregion. And it could be a really useful tool and platform for even, like Andrew was saying before, just

(39:12):
making sense of scientific data. I mean, actually, like, that paper, that novel information, if we've run, like, a route that can map it, actually, it might not be a hundred percent factual, or might have been sensationalized, or the opposite. It might find these weak links and inferences between things, or it's like, you should double click on that, because there could

(39:35):
be something really valuable there.

Ben Field (39:36):
I'd be really interested in digging into more with Andrew how he envisions that looking, because I don't really have a good mental model right now of how you could identify what the signals for poor research versus good research would be. I mean, obviously there's p-values and all that sort of stuff, but I don't know how good an LLM would be.

Justin Beaconsfield (39:54):
But it might just be the next phase, like, the actual assessment of the graph. Yeah, originally, all you're getting is just mapping all these things, and, for sure, you're not necessarily getting an inference from that. It's like, well, once you've mapped them, do you start just seeing, finding key relationships between good papers and bad papers? And it's like, how

Samuel Wines (40:11):
it sits in the graph network.

Justin Beaconsfield (40:13):
I guess that paper just has certain relationships to other things. It's like, oh well, that's probably a bad paper. Can be hard if data is not open.

Samuel Wines (40:23):
Which is why, the thing, I don't know if we've told you about STARDIT, Standardised Data on Initiatives. It's a paper we co-authored with Jack Nunn and about a bajillion other people on, like, how can you standardize, literally, data and reporting, and how can you have it in an immutable ledger, so you can have provenance of data, see all of the connections,

(40:43):
link everything together in an open innovation framework. To me, that feels like a real way of the future, if we can find a way to ensure that it can't be gamified for the negative. That radical transparency, open innovation, collaborative innovation, to me feels like a really strong potential attractor, if we can lay the right architectural foundations

(41:07):
from, like, an infrastructure perspective, from a social structure perspective, and, like, a cultural or superstructure perspective. Do you think part of that is also just lowering the barrier of skills and resources needed to reproduce a paper?

Ben Field (41:24):
Like, for instance, the reproducibility crisis is very real, and the easiest way to test if a paper is legit or not is if I can go and replicate its results. But to replicate its results, I need a lab, and I need a certain set of skills. And can you do that in silico?
Samuel Wines (41:40):
Is that what you're sort of saying?

Ben Field (41:41):
In silico, or do you have, like, cloud laboratories where you can kind of automate the reproduction of experiments? Obviously, this is speculative, we're a long way off. There are so many advances we need in automation and robotics and all this sort of stuff before we can get there. But I imagine, if you could find a way to systematize the reproduction of papers, whilst

(42:05):
lowering the strain on, like, the humans who are able to go and do that.

Justin Beaconsfield (42:15):
And it would be interesting, it'd be interesting if there were just, like, labs dedicated entirely to reproducibility.

Ben Field (42:20):
Yeah, you'd also have to have people to staff the labs, right? But if you could have robots staff the labs, and you can have a way of parsing papers that extracts the method and kind of puts it into pseudocode that the robots can understand... Are there open science groups actually actively doing that?

Justin Beaconsfield (42:35):
Yeah, it wouldn't surprise me if there were. I can imagine plenty of people.

Ben Field (42:38):
Yeah, it's gonna be awesome. I mean, Lee Cronin just came out with, like, the Chemputer, which isn't quite this, but it was, like, automated chemical synthesis. And then you told me about someone else who was doing something very similar. Yeah, we just had someone start in the lab.

Samuel Wines (42:50):
Oh, really? Yeah, today, and that's why I went out to try and find Andrew. I'm like, oh, he's actively helping Bersin from Caramex sort of set up his stuff. Look at us on our phones on a podcast.

Justin Beaconsfield (43:10):
What are we, screenagers? I know.

Samuel Wines (43:13):
It's horrible. My excuse is I was trying to figure out what that group was that's actually doing exactly what we're speaking to. Oh yeah, open science. It's a guy from the University of Virginia, the Center for Open Science.

Justin Beaconsfield (43:30):
I think so. They're literally scrolling TikTok.

Andrew Gray (43:33):
That's the alt-right content that's coming through.

Samuel Wines (43:37):
Anyway, what is, um, what's Raava?

Justin Beaconsfield (43:41):
Good question. We're still figuring that out, but what...

Ben Field (43:45):
Like, what is the name, or, like, what is... no?

Samuel Wines (43:47):
No, I mean, you can go there if you want.

Ben Field (43:48):
No, I mean, the name is completely uninteresting, because we were originally Base2, which, like, has a bit more grounding and, like, kind of a... I mean, just, obviously, the base-2 numbering system. But, like, you weren't the first nerds to think of that.

Justin Beaconsfield (44:03):
Not even close. Oh, hey!

Samuel Wines (44:07):
Hey, hang on, yeah, let's give him some intro music.

Ben Field (44:26):
Oh, I feel like a punk.

Samuel Wines (44:30):
Andrew Gray is in the house.

Andrew Gray (44:32):
Oh nice.

Samuel Wines (44:33):
Yeah, we're here.

Andrew Gray (44:34):
What have you guys been talking about?

Ben Field (44:36):
Oh, all over the place. But we were talking about what we were talking about before. Oh yeah, with the kind of idea of, can we transmute our scientific body of knowledge into a different medium?

Samuel Wines (44:50):
So, I think, what Ben was trying to make sense of: when you were saying how that would be useful in science, I think Ben and myself couldn't articulate it, because you were kind of like, oh, that'd be really interesting if you could do X.

Ben Field (45:05):
Just in regards to, like, identifying bad science, or identifying, like, poorly reproducible... Oh, yeah, I'd be really interested in what signals you'd look for to follow.

Andrew Gray (45:15):
PubPeer is a really good example of that. There's plenty of people in the community that, you know, go there for a good laugh about, like, you know, bad science being reported. It's, it's not funny, but it can be funny what people actually think they can get away with when they're going to, like, peer review. Images is a big one, so a lot of people will doctor their

(45:37):
images.

Samuel Wines (45:37):
Oh man, right. Gel images and stuff like that, yeah, copy-paste.

Andrew Gray (45:41):
That's pretty bad. Even colonies. So, you know, bacteria plates, petri dishes, trying to say that they had, like, here, we were successful in transforming this bacteria. But, looking, I guess what I was thinking about was, if you have this map, I guess, of all these different correlations of papers, and the,

(46:02):
you know, the linkages between the papers and the data, not just the images but the actual conclusions, which could be translated by, you know, an AI. We were talking initially before about, well, correlations that might not be as strong could be an indicator of something novel and

(46:22):
interesting and worth, you know, checking out. But it could also highlight, well, maybe this wasn't done correctly, or something else. So it could be one or the other, right? It could be something novel, something worth pursuing, or it could be something that needs more scrutiny.

Samuel Wines (46:37):
Well, there's a nice overlap there, where the things that are potentially wrong are also probably the things worth checking out the most, because they could also be a discovery. Yeah, exactly. That's not, like, a new thing to graph networks, that's just... No, no, just that the novelty being where something intriguing and interesting is, I think, is, yeah,

(46:59):
you hit the nail on the head there.

Justin Beaconsfield (47:00):
No, but it's a cool thing also just to highlight novel things in general in a graph network. Like, if you have an existing graph network and you add something to it, and it goes, oh, this looks particularly novel for a paper that's linking a bunch of ideas, that just helps us alert the public on some now quite

(47:22):
objective or quantifiable measure of, like, oh, this has a high novelty score. Let's look over here.
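
(A toy version of the novelty score Justin floats: treat a new paper as novel when the idea pairs it links rarely co-occur in the existing graph. This assumes the paper/idea graph from the earlier sketches; the scoring rule itself is illustrative.)

```python
# Hedged sketch: score a new paper's novelty by how many of its idea pairs
# have never co-occurred (share no paper neighbor) in the existing graph.
from itertools import combinations
import networkx as nx

def novelty_score(graph: nx.Graph, ideas: list[str]) -> float:
    """Fraction of this paper's idea pairs with no prior co-occurrence."""
    pairs = list(combinations(ideas, 2))
    if not pairs:
        return 0.0
    unseen = sum(
        1
        for a, b in pairs
        if a not in graph or b not in graph            # brand-new idea
        or not list(nx.common_neighbors(graph, a, b))  # never linked before
    )
    return unseen / len(pairs)

# novelty_score(G, ["quantum biology", "photosynthesis", "complexity"])
# -> closer to 1.0 means the paper bridges ideas nobody has linked before.
```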

Ben Field (47:29):
It could be, like, baking in incentives for research into that, and being, like, you know, here is a bounty or something for exploring this area. Also, on the note that Andrew brought up before, there's an incentive problem, clearly, of, like, you're only incentivized to produce positive results.
Andrew Gray (47:46):
It's like that.
We're really sure we want toincentivize your failures, but
like they should right.

Ben Field (47:50):
Like, a failure, if it is not just how we did this and it didn't work, but it is an educational failure, in that it points you towards a strategy or a result that is informative. Informative in that, this is not the route to go down. A failure can be as informative as a positive result

(48:11):
, but we don't incentivize failures. And so it has the downstream effect of incentivizing, like, faked research, but also, we're maybe losing a lot of value.
Andrew Gray (48:20):
Yeah, it's a tough one, right, cuz, like nobody
publishes I mean you.
You might publish failures aspart of a larger data set, like,
but ultimately, what's gonnadrive the publication of that
that paper is because you'vediscovered something of you,
advanced, you know.
When I say advanced, you know Imean like you discovered a new
thing and there's, you know,good science to back it up, and

(48:42):
the new thing is not that itdidn't work.

Samuel Wines (48:45):
I.
Unfortunately, because it costsmoney like journals actually
positively negative is a plusone.
There's a journal of negativeresults in biomedicine.
There's one for ecology andevolutionary biology Journal of
articles in support of the nullhypothesis Pharmaceutical
negative results, because Iimagine there is some salience
in those things.

Justin Beaconsfield (49:07):
In the context of linked together.

Samuel Wines (49:09):
It makes way more sense than individually.

Justin Beaconsfield (49:11):
It's kind of... I can see it being not that useful, or, if we're talking about efficient use of resources, like, how many studies have probably been completed on something that someone else already figured out, it just wasn't that interesting, so they never really did a report on it, 10 years ago. And so someone's like, I'm going to explore this, and they just didn't have

(49:33):
a reference for someone to be like, we tried that, it didn't work. Which, I mean, sometimes you want to do.

Ben Field (49:37):
The externality of not publishing is actually quite high.

Andrew Gray (49:40):
Yeah, sometimes you still want to try those things. But it's like, um, if it's not really strong in the first place, then maybe we could just better allocate our resources. For it to get published, it would need to be peer reviewed, like, it would have to go up in front of a review board. So there's resources that need to be allocated.

Samuel Wines (49:55):
But I wonder if, you know, maybe AI in this case... could you just submit it and see if it can actually peer review? Like an open-innovation or open-science, AI-mediated peer review, where anyone can publish for free, and you want to publish both positive and negative results, and then you have an LLM or something in the background that can weave it all together and make it contextually relevant

(50:17):
for what you need or want.
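
As a thought experiment, an AI-mediated first pass at review, as floated here, might look something like the sketch below. The prompt wording and the `complete` callable are placeholders for whatever LLM backend you have access to; this is a triage aid for human reviewers, not a replacement for them.

```python
from typing import Callable

REVIEW_PROMPT = """You are assisting peer review. For the manuscript below, list:
1. claims that lack supporting evidence,
2. missing methodological or statistical detail,
3. prior work that appears relevant but is not cited.
Be specific, and flag anything you are unsure about.

Manuscript:
{manuscript}
"""

def first_pass_review(manuscript: str, complete: Callable[[str], str]) -> str:
    """Return an LLM-generated triage report for human reviewers to check."""
    return complete(REVIEW_PROMPT.format(manuscript=manuscript))

# Usage, with any chat-model wrapper you trust:
#   report = first_pass_review(open("submission.txt").read(), my_llm_call)
```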

Andrew Gray (50:18):
I'm sure there are lots of arguments for and against, but I mean, I don't know. Like...

Ben Field (50:22):
A peer reviewer is quite a highly experienced person, right? I mean... I don't know if an LLM... it might be good at a first pass of a peer review. Or maybe, if you could... I think it would supplement.

Andrew Gray (50:44):
It wouldn't be a thing that you just rely on. Yeah, yeah, just like you wouldn't rely on publishers and peer reviewers to be the be-all and end-all, because, like, how does bad science get out there otherwise?

Justin Beaconsfield (50:55):
But that's one of the things, just expediting these processes. Yeah, exactly. I forget where I heard this, someone was talking about, kind of criticizing... not criticizing, but suggesting that research in academia was going to start being cannibalized largely by private research, because academia is just so slow, and

(51:16):
peer reviews take time and these things are inefficient.

Andrew Gray (51:22):
Preprint publications now, like bioRxiv.

Justin Beaconsfield (51:25):
Yeah, and so, just things that can expedite those processes, make them quicker. Because, I mean, there are so many benefits to academia, and to that being the route by which we do research.

Samuel Wines (51:35):
Hmm, but also, like you just called out, for example, we're looking at setting up our own research arm because of exactly that. If you can remove, I guess, multiple layers of bureaucracy and have quicker iterations and feedback loops, you can potentially come to something novel way quicker. But obviously the catch being, you need to have the funding to

(51:57):
back it, and you need to have the expertise. So that can be a very difficult kettle of fish, funding.

Andrew Gray (52:04):
Yeah, but I really like the idea of what you guys were talking about, this sort of graph network of publications. Because for me, being able to, for example, help Bersin with his machine vision drug discovery algorithm, even just setting up a basic experiment... you know, we went through I don't know how many papers

(52:26):
trying to back up every idea. It's such a tedious process. So if you had something there that could at least point you in the right direction, or the multiple different directions you could go down, out of interest...

Justin Beaconsfield (52:37):
So when you're going through papers in this process, what are you looking for in each of these papers?

Andrew Gray (52:44):
Methodologies. So, looking at the methods and looking at the conclusions, looking at the data sets to see what they got, deciding what to test on.

Justin Beaconsfield (52:55):
Is this to see where there might be things that you can apply yourselves, trying to find links? What do I mean... what trends are you trying to see by looking at these methodologies and results?

Andrew Gray (53:05):
Right. So the outcome for us, when we're looking at the papers, is to be able to create, I guess, a unique experiment. In this case, we wanted a baseline: set up an experiment that could give us a baseline, so that we could test these compounds against neurons and be able to get some measures, some sort of response. You know, how are the neurons behaving in response to

(53:29):
the dosing of this particular compound? And, not being a neuroscientist myself, I'm relying heavily on papers being published. It's also nice to have a building full of neuroscientists, Cortical. Hey, do you know how many millivolts a neuron produces? Let me go ask. That's pretty cool, yeah. But, um, to set up that experiment, you

(53:53):
know, you need to ask, how many millivolts does a neuron produce? So you're going through literature, you're asking these questions, and then, when you find, I suppose, the right... well, an answer, then you're looking, okay, well, how did they obtain these results?

Ben Field (54:10):
And so that's basically going to become the basis of that experiment. Mm-hmm. The other interesting thing is, you're looking through neuroscience papers, but maybe there's some interesting insight in machine learning papers. Maybe there's some interesting insight in machine vision.

Samuel Wines (54:25):
That's exactly right.

Ben Field (54:26):
But, yeah, I'm very excited by this idea of being able to identify connections from A to C by identifying A to B and then B to C.
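
The A-to-C idea has a simple graph-theoretic core: surface pairs of concepts that share a bridging neighbour but have never been directly linked. A toy sketch, with invented example concepts:

```python
import itertools
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("gut microbiome", "serotonin"),
    ("serotonin", "mood disorders"),
    ("mood disorders", "sleep"),
])

def candidate_links(graph):
    """Yield (a, bridge, c) where a and c share a neighbour but are unlinked."""
    for b in graph.nodes:
        for a, c in itertools.combinations(graph.neighbors(b), 2):
            if not graph.has_edge(a, c):
                yield (a, b, c)  # b is the bridging concept

for a, b, c in candidate_links(G):
    print(f"{a} --({b})--> {c}")  # e.g. gut microbiome --(serotonin)--> mood disorders
```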

Samuel Wines (54:37):
It's going to be the most valuable and important area for science to progress. If you look within the disciplinary areas, it's just so much more effort to find anything new and novel at the moment. It used to be that one person could write a paper and advance the field. Now there are teams of 30 or 40 on a paper, and the advance is tiny. The overlapping intersectionality of how,

(55:01):
you know, design, art, science, engineering, computer programming all overlap and interrelate with one another, that is the ripe area where these innovations and new ways of thinking, doing and feeling come about.

Justin Beaconsfield (55:14):
I wonder how much, in terms of, yeah, the biggest advancements, and exactly based off what you're saying... I wonder how much of the advancement that will come will come from humans discovering truly novel ideas, versus how much it will be a matter of us

(55:38):
just linking up all the ideas we already have. We've discovered so many things in isolation.

Samuel Wines (55:43):
So much of innovation is linking two things together that weren't linked before. So, yeah.

Ben Field (55:49):
There's a lot of talk about, oh, we'll have AGI when we have an AI that can discover novel science. And a lot of people take that to mean when AI can come up with the final leap, the final insight, to create some novel science. But there's probably a lot of novel science, there is a lot of novel science, sitting in just connecting the nodes of the science we've already discovered.

Samuel Wines (56:09):
Um well, like look at all of the leading fields at
the moment biochemistry, likethat's an interdisciplinary
thing Like bioinformatics,bioinformatics.
It's all these overlapping aiin biology or whatever it is,
it's it's fin tech, it's all ofthese overlapping areas and what
we're trying to say is liketake a transdisciplinary
approach, throw all of thattogether into a particle

(56:29):
accelerator and see what popsout the other side.

Justin Beaconsfield (56:32):
It's crazy. Yeah, it is crazy. It's almost like we've exhausted so many fields now, in the sense that we've discovered almost all the low-hanging fruit in so many fields, to the point that anything else we find from here is going to be pretty tough. So now the next thing is the second degree of, all

(56:52):
right, now.

Samuel Wines (56:53):
Let's try all the different permutations of combining two of them and see what we get. What you've just perfectly articulated is the fact that we have come to the end of the functional use of breaking things apart and looking at the smaller components. We have mastered the art of reductionism, to the point at which we get so small

(57:14):
that we actually can't even use it anymore, because, like the Heisenberg uncertainty principle, it's all quantum down there, and you can't even know... it's a field, it's not even a fixed thing, and linear Newtonian mechanics doesn't work. So what you've just done is essentially call out exactly what's happening.

Ben Field (57:30):
It's like we're coming to a paradigm shift, where we have to acknowledge that all of these disparate disciplines are interrelated and interconnected. Also, categories are arbitrary human inventions. Categorization is inherently fairly arbitrary. The boundary between chemistry and biology, and the boundary between chemistry and physics, is fairly

(57:50):
arbitrary, fairly drawn in the sand. But they're convenient categories because human beings are limited. If we can enhance the ability of a human being to... language doesn't allow for, you know, the non-bounded. Yeah, exactly, everything has to be...

Justin Beaconsfield (58:07):
Language almost just reflects the inner workings of our minds. Our minds categorize things inherently, that's just what we do. We really struggle to see things as fluid. We like them as fixed.

Samuel Wines (58:19):
So language kind of reflects... well, especially how we like to look at things. We're mostly adjectives, mostly describing words, mostly nouns. There are languages where it's all verbs, and that's way more in alignment with how the world works, because the world is a process, it's not a fixed state.

Andrew Gray (58:35):
So, on that, how does something like AI, like ChatGPT, for example, which is literally a word predictor, how does that...

Ben Field (58:44):
I like to call it a word synthesizer. Excuse me, how does that...

Andrew Gray (58:48):
...deal with that? You know, do you reckon, in your experience, that it doesn't respond to these borders?

Ben Field (58:55):
Well, it's not that it doesn't respond to the borders, but that, from a breadth perspective, it is far more well educated than any human being on earth. Right? It's not, from a depth perspective, more well educated than any expert in any field. But it allows me to... if I know how to program from a higher level, if I

(59:18):
understand kind of what's going on, what a programming language is, it allows me to obfuscate the need to have memorized the syntax of JavaScript. If I only know Python, it means I can write a web app without having to spend as much time figuring out where to put my semicolon. Yeah, and I think that extends to a lot of fields, right? Now it means that, if your expertise is in field X, you can

(59:42):
kind of borrow some tools from field Y without having to take a real detour to memorize how to use those tools. You can operate at one higher level of abstraction, which means you can jump boundaries much more easily than you could before, because you don't need to learn the techniques at the syntactic level, or at the specifics level.

(01:00:03):
You can communicate at the level of ideas more easily now.

Justin Beaconsfield (01:00:08):
I think, on your question of whether these large language models tend to categorize things, I would say... it's hard to give you a clean answer to that. My sense is that they do a little bit, because they're

(01:00:29):
trained on human data, and so, inherently, there is a lot of just mimicking, you know, what we do and the way we see problems. But, that said, it also does a really fantastic job

(01:00:49):
of generalizing concepts, to some extent. And also, there is this idea that contained in this large language model is all this information, and all we get to see is input and output. We don't necessarily get to see all the inner workings of the language model. There's a lot of work

(01:01:10):
going into that, but, in terms of how far along it is, it's essentially nowhere. So it's a hard thing to answer, because we don't know how these things see the world. We don't know if they're categorizing like we are, because we're still very much in the dark about what they're doing. All we're seeing is the responses

(01:01:32):
to the questions we ask. But part of that can also be, well, maybe we're just not asking the right questions.

Ben Field (01:01:39):
I mean, they don't have new knowledge, they just have the knowledge that is encoded in human text. But what they are is a new way of convening with the hive mind. Up until this point, the internet was an incredible tool because it allowed the distribution speed of information to increase by several orders of magnitude.

(01:02:02):
Now you don't need to go to the library and leaf through textbooks, you can find access to that information at a much, much higher rate. But now, large language models allow you to query the body of human knowledge in a different way, and in a quickly improving way.

Andrew Gray (01:02:23):
Is there a... so there's this format that we use in biology, called FASTA. I can't remember what it stands for. Dot fasta, oh yeah.

Ben Field (01:02:29):
That's in, like, bioinformatics. Yeah, yeah. So you can basically get your...

Andrew Gray (01:02:33):
DNA sequence. Yeah, so you download a FASTA file. I wonder why it's in a FASTA file, it's just ATCG.

Ben Field (01:02:40):
Like, yeah, it's fast.

Samuel Wines (01:02:41):
FASTA software, one of the first tools developed to search for similarities in protein and nucleotide sequences.

Ben Field (01:02:52):
Oh, so it was just a good way of giving it to the software. Yeah, yeah. So is there a plugin for that on ChatGPT?

Samuel Wines (01:02:58):
I don't know, I'll just ask. Definitely would be keen to. Do you have a plugin for FASTA? Because, I feel like, once you start... especially when you're talking about fields like synthetic biology.
Andrew Gray (01:03:07):
"I don't have a specific plugin..." I don't know why I've got such a jovial voice for it. Turn down the jovial, yeah. "I don't have a specific plugin exclusively for FASTA format data, but I can certainly assist with tasks related to FASTA files using my existing capabilities."
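
Since the format came up: FASTA really is just plain text, a `>` header line followed by sequence lines, which is why no special plugin is needed to work with it. A minimal parser, for the curious (file name is illustrative):

```python
def read_fasta(path):
    """Yield (header, sequence) pairs from a FASTA file."""
    header, chunks = None, []
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(chunks)  # emit the previous record
                header, chunks = line[1:], []
            elif line:
                chunks.append(line)
    if header is not None:
        yield header, "".join(chunks)  # emit the final record

# for name, seq in read_fasta("sequences.fasta"):
#     print(name, len(seq))
```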

Ben Field (01:03:23):
I mean, I think a large language model would kind of struggle with DNA sequences, because they're not represented very well, and they're just enormous files that require high levels of precision. Large language models can create a simile of a DNA sequence, but it's definitely not going to be good at creating an exact DNA

(01:03:44):
sequence, because it's just essentially a next-word predictor. I don't know. I actually don't know enough about bioinformatics to know where it would be useful or not.

Justin Beaconsfield (01:04:00):
I mean, the thing I think with DNA sequencing, in terms of AI being useful...
Samuel Wines (01:04:07):
It's not, I would imagine it's not large language
models.

Justin Beaconsfield (01:04:11):
But AI will be fantastic. And I think, if you ask what the large language models are doing, they're taking language and finding patterns in it. Its task is to optimize for predicting the next word in a sequence. So it has to do, I would say, three main things

(01:04:31):
fantastically. It has to, one, understand language and the structure of language impeccably. It has to pretty much know everything about the world, because, if you want to predict the next word that's going to come in a sentence, you have to know, you know, that a lamb is a baby sheep. You can't predict the next word if you don't have that.

(01:04:52):
And then the other thing it needs to be able to do is decent amounts of reasoning. So what it winds up doing is it bakes all those things into the model. Those are the patterns that underlie human language: language, world knowledge, reasoning. And it's like, all right.

(01:05:13):
So we've got a large language model, it's fantastic at finding patterns for those things. Now let's take similar architectures, and now what we're optimizing for is something completely different. Instead of trying to hijack a next-word predictor, why don't we actually create AI systems that really, really intelligently start finding patterns in these really complex pieces of data? And the patterns they're looking

(01:05:35):
for are going to be entirely different to large language models. But I would be shocked if we don't get some incredible, incredible advancements in the next not-very-long time.
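
A toy illustration of the "next-word prediction" objective described here, counting which word follows which in a tiny corpus. Real models learn vastly richer patterns than these bigram counts, but the shape of the task is the same:

```python
from collections import Counter, defaultdict

corpus = "a lamb is a baby sheep . a calf is a baby cow .".split()

# Count, for every word, which words follow it and how often.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent continuation seen in the corpus."""
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else None

print(predict_next("baby"))  # -> 'sheep' (ties go to the first continuation seen)
```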

Ben Field (01:05:46):
Neural networks... how much labeled biological data, like DNA sequences, do we have? Do we have a lot of labeled DNA, where it's like, this codon sequence codes for this particular thing?

Andrew Gray (01:05:58):
Yeah, with a typical outcome, yeah. So, I mean, there are a lot of databases. A prime example is GenBank, which has I don't know how many submissions on it. Generally, anything that involves any sort of announcement or any sort of discovery involving DNA, or even coding sequences, doesn't have to be huge,

(01:06:19):
it could be a small little oligonucleotide, which is just a fancy word for a small piece of DNA. Those would generally get labeled and submitted on there. So, if you could upload that... and, you know, maybe it's not something like ChatGPT itself, maybe that's just calling on a plugin initially to translate what the prompts are, to then

(01:06:40):
be able to search that and then work with those sequences. Yeah, it'd be pretty comprehensive, I imagine.
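
The "plugin" idea sketched here, a tool that fetches labeled records from GenBank for a model to work with, is roughly what Biopython already enables. A hedged sketch, assuming Biopython is installed; the accession number is purely illustrative:

```python
from Bio import Entrez, SeqIO

Entrez.email = "you@example.com"  # NCBI asks for a contact address

def fetch_genbank_features(accession):
    """Download one GenBank record and return its labeled features."""
    handle = Entrez.efetch(db="nucleotide", id=accession,
                           rettype="gb", retmode="text")
    record = SeqIO.read(handle, "genbank")
    handle.close()
    # Each feature is a labeled annotation, e.g. ('CDS', '[0:1500](+)').
    return [(feature.type, str(feature.location)) for feature in record.features]

# e.g. fetch_genbank_features("NM_001301717")  # accession chosen for illustration
```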

Ben Field (01:06:48):
I'm sure a lot of people... I mean, I know a lot of people are looking at AI and biology. It's a super interesting space, because we do have a shitload of data. We have so much data, but the problem is, in the case of DNA, we present it linearly, but you could probably just as easily represent DNA as a

(01:07:09):
complex multi-dimensional structure, where this has a feedback effect on that. And, 100%...

Andrew Gray (01:07:13):
I mean, the other caveat there, too, is that, you know, the genome announced for the Human Genome Project, I'm pretty sure that's just Craig Venter's genome, but so much research and so much medicine has been based off that genome. And I know there are projects now to

(01:07:34):
kind of make genomics more equitable as far as what data sets we're using, not just from one white dude but from all sorts of different backgrounds. So that's also a caveat there. But, yeah, man, just trying to think of

(01:07:55):
what I would ask it. Got nothing. Yeah, that's the thing. I remember when we built the lab initially for BioQuisitive, it was like, oh, we built the thing. What do we do with it? I don't know.

Samuel Wines (01:08:11):
The first thing's always... I didn't think that far, exactly. Maybe we make hangover-free beer.

Andrew Gray (01:08:18):
Yeah, that didn't work out.
Did you try to do that?

Ben Field (01:08:20):
Yeah, we did, but it didn't.

Justin Beaconsfield (01:08:22):
Yeah, how did you try to make hangover-free beer?

Andrew Gray (01:08:24):
Oh, it was always "hangover", in quotation marks, free beer.

Ben Field (01:08:27):
How would you make hangover-free beer? Do we know what causes... isn't it just dehydration? What's a hangover, actually? I have no idea what a hangover is. No, it's acetaldehyde?

Andrew Gray (01:08:33):
There's a thing called acetaldehyde. Your body breaks down ethanol into acetaldehyde, and that forms ketone bodies and all these things, which flag your immune system to go and attack it. So the issue isn't that your body's breaking it down into that; it's that it breaks it down into acetaldehyde, which forms these things faster than it can get rid of the

(01:08:55):
acetaldehyde. So that builds up over time, your body doesn't clear it out as fast as it produces it from the ethanol, and then that causes your hangover.

Ben Field (01:09:05):
I'm sure there's some... if you could make really, truly hangover-free beer...

Justin Beaconsfield (01:09:08):
I think somebody's worked on that. I was going to say, you would pay off the lab in, like, 12 seconds.

Ben Field (01:09:12):
I think the problem is people just drink more of it
and then get like alcoholpoisoning?

Justin Beaconsfield (01:09:18):
Yeah, it's a little lousy yeah.

Samuel Wines (01:09:22):
And then, straight up, what's that, the maximum power principle or something like that? It's like, whenever you gain energy efficiency, people just ramp it back up and use more of it. I feel like it'd be the maximum alcohol principle.

Ben Field (01:09:34):
Exactly.
I'm sure that it'll also belike some crazy negative
externalities where, like, oh,it doesn't make acetyl aldehyde,
but it like.

Samuel Wines (01:09:40):
Does make you go blind.

Justin Beaconsfield (01:09:43):
And that was how we sterilized half the
country accidentally.
Yeah, yeah, yeah.

Ben Field (01:09:48):
It's like the start of I Am Legend.

Justin Beaconsfield (01:09:49):
It's not a cure for cancer, it's just, everyone's getting drunk on hangover-free beer and we all turn into zombies.

Samuel Wines (01:09:57):
I think we've just got ourselves the next Netflix exclusive. We never actually explained what Raava is.

Justin Beaconsfield (01:10:05):
Oh yeah, okay. Well, I mean, it's fairly ill-defined at the moment. I think we can do better than we did.

Samuel Wines (01:10:11):
Yeah, the best things are ill-defined.

Justin Beaconsfield (01:10:13):
So, I mean, the thinking was, Ben and I got really into exploring the application of artificial intelligence, and it was like, okay, what we've noticed is there is a massive translational gap between most organizations and the capabilities of these new tools,

(01:10:36):
and most organizations could be using these tools to make themselves way more efficient, but people just don't really know how, they don't know where to look. And, as the kind of quote-unquote experts in the field, which didn't even necessarily require knowing that much about these tools, we thought, all right, we've got the capacity to really

(01:10:57):
help with this translational gap, with organizations being able to apply these tools. And we can use that to start finding all sorts of things that will be useful for organizations, and so we can help a bunch of organizations out along the way and then, in doing so, start to find some products that we could commercialize, or products that

(01:11:21):
could just be really good for the world, all sorts of things like that. And we're still figuring all those sorts of things out, but that's the main idea: all right, let's bridge the gap between what these tools do and where they can be used. We'll go into organizations, find the problem spots and then eventually productize those things.

Samuel Wines (01:11:40):
Is there... because you are working with a few people already, Ben?

Ben Field (01:11:43):
Yeah, we're working with a fairly broad range of companies at the moment, but we are thinking that we will niche down, just because, if you can speak the jargon of a particular industry, it really helps, and it also makes your services more repeatable. It puts less overhead on us to come up with novel

(01:12:03):
ideas and learn the lay of the land every time. All right, if we work with a lot of law firms or whatever, then we know the jargon of the law industry, and we know how to deliver things that actually solve problems, rather than rinse and repeat the same basic AI idea for lots of different industries. But, I mean, I think the reason we're doing all this

(01:12:24):
consulting stuff is just because we want to get good at building and good at executing. But I think we both have a mind to... we probably want to build a product. The thing that gets us excited is stuff like, all right, can we make a knowledge graph out of all of the world's scientific information? Stuff like that is really exciting.

Samuel Wines (01:12:41):
But I think we want to pay the bills in the
meantime, yeah.

Ben Field (01:12:44):
And also, you need to know how to solve small problems before you can solve big problems.

Samuel Wines (01:12:46):
I think it's a really effective way of... how do I frame this? It's almost like you're learning through doing and iteratively designing things, and then, after a while, you'll be like, oh, now I think here is a problem, or maybe not a problem, maybe a challenge area, that we're quite fascinated by. Let's try and find a way to expand upon this and meaningfully contribute in a positive way.

Justin Beaconsfield (01:13:09):
I mean, what you find, I think, a lot of the time, is that you have two key categories of people that both have fatal flaws. You either have really, really technical people who try to build a product, but they're not fantastic at that, because they don't really

(01:13:32):
understand what non-technical people do, and they just kind of fail there. Or you have non-technical people who try to build technical products, but they just don't know where to begin. And Ben and I were like, all right, we could

(01:13:53):
try to not fall into either of those schools. We're technical people, but if we just spend a long period of time inside organizations, actually understanding, all right, what does a non-technical small business do day to day, and what are the actual issues that these businesses face, what are the actual needs that people have? There's probably not a heap of technical people

(01:14:15):
deeply exploring those things. So it seems like a really cool opportunity, as an exploration phase, for us to just go into organizations, see what their problems are, actually understand those things, and then start building tools and programs that address those needs.

Samuel Wines (01:14:33):
I have time for that, obviously.

Andrew Gray (01:14:36):
Really like what you said about setting those
sort of as far as learning andeducation, like especially
managing your own, your owneducation, setting those larger
goals Like that's really cool,I'd like to get there one day.
I don't know how to get there,but as long as you have that
sort of on the radar, on the mapof the things you'd like to do,
everything that you learn alongthe way opens up new pathways

(01:14:58):
towards that outcome.

Ben Field (01:15:00):
So, I mean, I was at Melbourne Uni and I learned a lot, but we didn't actually make that much stuff, despite it being a Master's of Engineering. And I used to make music when I was younger, and I ended up doing pretty decently with some of the songs I made. But I reckon, in hindsight, if I'd just made a hundred times more songs, I would

(01:15:21):
have done a lot better. And quantity isn't in conflict with quality; it directly correlates with quality, at least when you're at the learning stage. And I think just building lots of stuff is the best way to... I learned a lot more from the solar panel project that we did than from anything I did at uni.

(01:15:42):
I think just building stuff is such an effective way, and a satisfying way, to get good at something. But, yeah, it seems fairly obvious. And also, given that no one knows what this field is becoming, you kind of just have to go on Twitter, and then hack around on a GitHub link that you found and see if you can reverse

(01:16:04):
engineer something that some Russian guy did. There are no good university courses for this yet.

Andrew Gray (01:16:12):
I was checking those links yeah it seems very
like sense probe and respond.

Samuel Wines (01:16:17):
Like, you're constantly ensuring that you've got streams of data or insights coming from multiple places, and then you just look at integrating that and finding ways to apply it.

Justin Beaconsfield (01:16:27):
And, kind of like Ben said, the field moves so quickly and it's changing so much that we also have this awareness that, if we over-commit to something, it just leaves you so prone to making something that is just not relevant in 12 months. Like all these startups. I mean, a lot of them made a bit of money in the short term or whatever,

(01:16:48):
but then, you know, OpenAI updates ChatGPT and their whole startup just tanks, because, oh, that's now just a feature on ChatGPT. Your whole startup was just using ChatGPT in a different way. That's done, it's now just a plugin, or ChatGPT now just does that. And so, if you go too hard on trying to come up

(01:17:15):
with an idea right away, without actually letting this field play out a bit and really getting into the weeds of understanding what these things are not going to be able to do, what you can add to them that isn't just a feature on ChatGPT, that's, I think, a really necessary thing to do if you want things to

(01:17:36):
stand, and we believe in building a better future.

Ben Field (01:17:39):
I also think, like if you oversubscribed an idea or
like over invest in an ideaearly on, like the reason that I
mean we've spoken a lot aboutthis kind of knowledge graph
thing, but that's because wespoke about it this morning, and
like we're quite excited aboutit, but like we wouldn't have
the bandwidth to work on that ifwe weren't like churning
through projects quite quickly,it's like, oh, like no, we're
actually just doing the startupthat we've been working on for a
whole year.
We're not going to give up onthe startup because, like just

(01:18:01):
because new ideas come along,but like if you can just like
complete ideas quite quickly,put them out in the world, see
if you get a good response,iterate, it's like a way better
way of doing it than, especiallywith something that moves this
quickly, than like one year instealth mode.
Like building something likethat your initial hypothesis was
wrong.
Or like you sacrificed workingon the actually like way more

(01:18:22):
important idea because, like youput all your chips on the table
too early.

Justin Beaconsfield (01:18:27):
I mean like , even like a kind of like no,
it's not the same, but like akind of funny parallel.
I think it was like the bigconsulting firms invested all
this money after GPT-3 came outto like create their own, like
bespoke, like custom languagemodels that like built off, like
GPT-3 and like it's likehundreds of millions of dollars

(01:18:48):
and they like built them andthen, by the time they were
finished, gpt-4 was out, andit's just better than the thing
they built custom like, even fortheir own custom use.
Gpt-4 was just better and allthe work was like kind of down
the toilet and it's like ah,that's the way like nature works
as well, right?

Ben Field (01:19:07):
Like nature doesn't.
Like nature places a lot ofsmall bets and see which ones
hit, yeah.

Samuel Wines (01:19:13):
Yeah, redundancy sometimes breeds resilience in a
sense.
It's like and then you figureout which ones work, which ones
don't.

Justin Beaconsfield (01:19:20):
Yeah, yeah, yeah, I like that redundancy
yeah.

Samuel Wines (01:19:27):
Anything else you guys want to talk about?
I don't know.

Ben Field (01:19:31):
I think I'm, I think I'm.
Anything you want to talk about, it doesn't have to be AI.
You've got.
You've got lots of good ideascooking.
What's excited you recently?
Or a J.

Samuel Wines (01:19:42):
Well, yeah, we can give you a little bit of a recap of what's up. We're in the middle of trying to figure out what a systems-informed venture studio would look like. We're also exploring what it could look like to do, I don't want to use the word consulting, but maybe prototyping with large organizations, to try and go,

(01:20:07):
large organizations that actually want to make a meaningful, positive impact, how could we help you transition towards a more viable future? But, yeah, I think what really excites us at the moment is the venture studio concept, it's a really cool idea. Well, and the Living Systems Institute stuff I've talked about as well: what would it look like to be able to have spaces and places to teach complexity-informed

(01:20:28):
living systems thinking and ecological design?

Ben Field (01:20:30):
Yeah, and teach through doing as well, teach literally. It's a praxis.

Samuel Wines (01:20:34):
Yeah, yeah, I think so. What about for you, Andrew? I feel like the thing that'd be most exciting for you would be BioQuisitive, getting that back up and running.

Andrew Gray (01:20:41):
Yeah, the community live.
I mean, that's like that'swhere it all began.

Samuel Wines (01:20:44):
Yeah, for me, that's...

Andrew Gray (01:20:45):
That's where all this sort of took off from.
And yeah, it's just kind ofhard to move forward without
acknowledging that that needs tobe it's essential, yeah, it is.
It's like I got my shoelacestuck in the door behind me.
I'm trying to walk forward butI have to address that.
You know it's Exciting settingup that up, saying that up again

(01:21:07):
.
I think with Broody impactneighborhoods over in Brunswick.
So an old school renovatepotentially 70 square.
So initially the community labwas a, you know, in a shipping
container, in a warehouse whichis very Brunswicky, next to a
brewery.
But this this next time, withall the you know, we've learned
so much along the way.
We've got more resources now.

(01:21:28):
We've got a huge network ofreally talented people around us
, so there's just so many betterup, so many opportunities that
we didn't have when we set upinitially on that.
So this like 2.0 version ofit's gonna be really exciting,
not just for education but alsofor innovation.
And, you know, giving people areally Safe place to fail, like

(01:21:51):
early, early on.
And if people don't want to godown the path of innovation,
they just want to create for thesake of creating.

Samuel Wines (01:21:57):
That's also, you know, what it's about.

Andrew Gray (01:21:59):
It's not necessarily a for-profit motive. No, it's not like, you know, we have to commercialize everything. It's primarily just curiosity-driven, education-driven.

Samuel Wines (01:22:11):
We're both doing this course from ETH, which is Designing Resilient Regenerative Systems, and a big thing that they say is, it all comes from, obviously, education and teaching people how to think differently, but a lot of it is that primary research of just exploration and play and seeing what happens. Yeah, you never know what's going to happen, and then suddenly, maybe 10 or 15 years later, you have the tech

(01:22:33):
to be able to apply that at scale. But we shouldn't necessarily be focusing on everything being commercial, and we shouldn't focus on everything being research. Have space for both. That's why, from our point of view, it's really important to take that perspective and provide places and contexts for people to experiment, and also to bring an idea to reality in a

(01:22:54):
commercial sense. And then, hopefully, yeah, the education as well is another thing that we're really going to try and push more.

Andrew Gray (01:23:01):
Yeah, I think one of my major observations with the community lab back in the day was just how many people wanted to get experience. They're studying these really technical degrees, but there's not really anywhere to actually apply it in a way where they're designing the experiment from beginning to end. If they fail, they fail, great. Generally,

(01:23:24):
when you go to university, there's no time to do that. You're completing a very small portion.

Samuel Wines (01:23:33):
As I was saying earlier, yeah: just use this pipette to move this, collect the data, write the report. You know? Yeah, you don't have ownership over that.

Ben Field (01:23:40):
Yeah, you're making these fake... I hated labs at uni, because it was like, I have to make this fake report about this fake thing that we got told to do.

Samuel Wines (01:23:50):
Frosty hey, god, frostbite.

Ben Field (01:23:52):
Yeah, whereas if they told me, I don't know, grow some rat neurons and put them in an RC car, make a cyborg... I think just actually giving people agency to make things that are cool, that's the whole thing about engineering.

(01:24:13):
It's like magic. You have this newfound agency in the world, you can go and do new things with it. And it's so boring if it's just, write this report on how much coffee was extracted from this cylinder. I don't know. I think you need to be able to really fail at it. You need to really, really fail.

Andrew Gray (01:24:31):
I mean, to get your hands dirty here... it's set up to cater to a huge number of people, and there's no way that you can take a nuanced approach.

Ben Field (01:24:37):
Yeah, of course, at that scale.

Samuel Wines (01:24:39):
But that's where community organisations... I'm also very mindful that universities, I mean, some of them have been around for hundreds of years, but they are still feudal-age organisations that have been brought up to speed with the latest capitalist-infused way of operating. You're dealing with an institution that hasn't really

(01:25:01):
gone through a major phase shift in how it operates for a long period of time. The only thing they've done is condense how long it takes. You know, back in the day, with Plato and Aristotle, all those guys... I think with Pythagoras it was like, oh, you want to come study with me? For the first year, you need to be silent and just listen. You can't say anything.

(01:25:22):
Imagine trying to do that at a university now. It's like, no, not even semesters but trimesters, let's cram it into a year and a half, get your money and get you out. And when you turn it from something that's meant to be about lighting the flame of curiosity, rather than the filling of a vessel, it's, yeah...

Ben Field (01:25:43):
I think we're at the greatest age in history to be
able to teach yourself thingsand to be able to like Find
other people who want to learnstuff.
But like our institutions arefailing and it's like really sad
and there needs to be areplacement because you need to
be able to like educate at scale.

Justin Beaconsfield (01:25:59):
But that's, that's why I'm gonna start like
a, like a fifth centuryUniversity, just like a
completely like change the modelback, like throw it back 1500
years or something give me plate.

Samuel Wines (01:26:11):
Oh yeah, and what is it?
No, it's like Aristotle'sLyceum or Plato's Academy.

Ben Field (01:26:16):
Yeah, I think people at the stake Maybe less.

Justin Beaconsfield (01:26:21):
It's very selective, it's, and it's just
like you know, like five, fiveyoung, bright minds admitted per
year and they, yeah, they knowhow to talk.

Samuel Wines (01:26:31):
I like that like streak, like cone of silence
From deep listing and observingthat's the whole premise of that
being quiet for a year.
It's like yeah you shouldprobably actually shut up before
he think you know anything.

Ben Field (01:26:47):
Yeah, I know he did.
He'll have the teal fellowship,which seems like an interesting
kind of like he just givespeople like smart.

Samuel Wines (01:26:54):
One leader of your blood each week.
You're not a blood boy, you'rea.
You're an intern scholarship.
Blood, in turn, impacts.

Andrew Gray (01:27:09):
There's what we were talking about before.
There's a I saw an experimentcalm, that whole apprenticeship
style approach to teachingscience, and that's a Elliot
Roth from DIY bioorg.
But I think he's also headingup.
He started some sort of a algaeand Sign a bacteria startup by

(01:27:30):
manufacturing.

Samuel Wines (01:27:31):
What was it called? No...

Andrew Gray (01:27:34):
But I know the.
I know the experiment that hewas looking to run was
specifically in education andwhat would it look like if we
kind of had that more tailoredapproach, we had less people but
more mentors?
So you know, it was reallyabout sort of the way I
interpreted giving back where,like down the line, those who
kind of went through the programwould then become mentors or

(01:27:56):
like a select, and for them tomove on and like graduate, like
you would have to have that it'stough because, like I feel,
like I mean you feel like usingthe model of something so long
ago that it was like thearistocrats that would like go
to universities and now you,just you have I don't want to
call it the- issue.

Justin Beaconsfield (01:28:15):
No, but you just you have like this set of
new conditions where it's likelots of people are like pretty
well educated and have access toresources and Lots of them are
capable and like want to go touniversity, as opposed to like a
very small, like slither of.

Ben Field (01:28:30):
Should everyone be going to university?
Yeah, I think model actuallylike a much better model in
general for like a few ratherthan like a few select
disciplines like puremathematics.

Justin Beaconsfield (01:28:40):
I think most people should be like.
I think Most people should begetting higher education.
Yeah, I mean, yeah, maybe themodel needs to change, but it's
like, yeah, the resourceallocation was just maybe like a
bit of a simpler task to Solveway back, when you just didn't
have like massive Portions ofsociety.
That all want to go and get adegree like that just wasn't a

(01:29:00):
thing you didn't have, like Idon't know.
What right do we have?
Was, like, think, gettingtertiary education think, oh
yeah.

Samuel Wines (01:29:06):
And also, then, unemployment off the back of that. Going to university for the degree, which was guaranteed to get you a job 30 years ago, is no longer the case anymore. Sometimes it's a detriment. So, in what ways can that way of operating make sense? If you want to pursue a path of research, sure, that's great. Then, what about the tech schools as a concept? How

(01:29:28):
do we bring that sort of hands-on approach back? Because we crushed that, we destroyed them. They were everywhere, and we pretty much got rid of most of them. It's only just coming through and having a revival now. But what if we did, yeah, the tech school or the apprenticeship approach for all of these sorts of more hands-on disciplines?

Justin Beaconsfield (01:29:46):
Yeah, it is weird that we've created a general model of education, like secondary education. But, yes, for most professions you should probably be doing something vastly different to the standard university model. Yeah, if you want to teach kids how to become programmers fresh... I mean, even if it's not fresh out of high school, let them just be around it for a few years. I think it's quite good for a lot of people to just do that.

(01:30:08):
But I just don't know that university is... I mean, it's not a bad environment.

Samuel Wines (01:30:15):
Oh, they're there... some of the best environments you can find. But, exactly, it's the best of what exists.

Ben Field (01:30:21):
But I could totally imagine a way better environment
to learn how to code maybe likea clockwork orange type, like
put me in a chair and Strap myeyes and stolen LLM.

Justin Beaconsfield (01:30:35):
Yeah and even alternatively to clockwork
as good as clockwork, I'd rathermaybe just like really
practical of like alright, yourlike whole semester is just like
build, build something, andthese are like the vague specs,
but like yeah, I don't have todecide.

Andrew Gray (01:30:51):
Yeah, until you can .
I mean again going back to thissort of like mass education
model.
Like you know, they need a wayto Score yeah, and that's the
challenge and couple that withthe fact that you know, as Sam
said, that people are.
You know the.
The narrative is that you go touni, you increase your job
prospects and then you go andget a job, whereas now like.

(01:31:14):
What you're pointing to isexperience like so when you're
actually creating something,you're not learning about it.
You're doing and you'relearning through doing, but
primarily you're gettingexperience.
You're building a portfolio ofwork or a body of work that you
can then go and show Perspectivehires.
That, like this, is what I'mcapable of.
And so that's, that's a.
You know how do we Bring thatinto it?

(01:31:34):
Because now it's a chicken andegg problem that everybody
always talks about.
I need a job to get experience,but I need experience to get
the job and and the tricky thingis as well.

Justin Beaconsfield (01:31:42):
That's like right now, the way we get
experience is through a job, butI mean, when you go work at
some big company, the experienceyou wind up getting.
A huge portion of that is notreally relevant, like the thing
you want to make your corecompetency, where it's like if
you really want to learn how tobecome a program.
I mean, maybe, maybeprogramming is actually not the

(01:32:03):
best example.
I don't know how much of a youknow a big corporation.

Ben Field (01:32:05):
I mean neither of us have actually worked as programs
.

Justin Beaconsfield (01:32:10):
Like, less...

Andrew Gray (01:32:11):
I would argue it's less about that.
It's less about like the whatyou're trying to demonstrate
your proficiency, and it's likethe things that aren't
communicated you know, like youhave the ability to work for a
company.
You're showing up on time,you're, you know doing these
things, whereas if somebody offthe street were to apply it's I
don't know how they are theygonna show up.

Samuel Wines (01:32:31):
You know, are they gonna?

Andrew Gray (01:32:32):
Yeah, are they going to have a shower before they get here?

Justin Beaconsfield (01:32:34):
Yeah. But I guess it would just be cool to have environments where we go to an educational institution to get experience. Hmm, but the experience we're getting... we don't just have to get a job, we can get really, really tailored experience, exactly the thing we want to learn.

Andrew Gray (01:32:54):
It's still the experience getting there, like I
think that they're actuallytrying to with the micro
credentialing thing, like buildyour own degree, like you will
get, it's no longer gonna belike a degree and like a
bachelor's of science.

Samuel Wines (01:33:04):
It'll be this very nuanced, what do you actually want and need for the direction you're trying to go? Yeah, it'll make way more, way more sense when that happens. I'm going to have to leave soon.

Ben Field (01:33:18):
Yeah, we're just about to jump into another meeting. Time has flown.

Samuel Wines (01:33:23):
Mate, it just always does. It's like a little time dilation bubble. Crazy.

Justin Beaconsfield (01:33:29):
I cannot believe it's two.

Samuel Wines (01:33:31):
Yeah, we've got some work to do, this meeting about the venture studio. Cool. Thanks so much. Is there anything you want to wrap up with, by letting people know where to find you, apart from at CoLabs HQ?

Justin Beaconsfield (01:33:46):
We just built our website. Oh, and then, if not, also just on LinkedIn. Yeah, that's it. Raava: R, double A, V, A.

Samuel Wines (01:33:57):
All right, I guess we'll just tap out on this one.

Justin Beaconsfield (01:34:00):
Yeah, thanks so much.
Thank you.

Samuel Wines (01:34:06):
Thank you for sticking around for this episode of The Strange Attractor with Ben and Justin from Raava. As you can tell, this is really interesting stuff that they're working on, and we find it quite fascinating: the notion of being able to use AI to advance innovation and research that is

(01:34:30):
impact-oriented, trying to keep humanity within the planetary boundaries while raising social foundations for as many beings on this planet as possible. That is pretty exciting stuff. So, yeah, if that's something that is of interest, or if you're curious about some of the ideas that we've spoken about here,

(01:34:51):
please reach out. We are a community-driven, real-world laboratory, experimenting and exploring pathways towards a more resilient and regenerative future. Please drop us a line, join our community. We'd love to hear from you, and we'd love to find ways to collaborate and coordinate towards that more viable future.

(01:35:12):
Thank you.