Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
This AI is better than any member I can talk to. It's more knowledgeable, smarter, better answers, obviously faster, and that is not what they thought it would do. Even the people who are signing up and paying for a product like this, they're blown away consistently.
Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and
(00:22):
developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amit Nagarajan, chairman of Blue Cypress, and I'm your host.
(00:43):
Greetings, Sidecar Sync listeners. We're excited to be back with you for an amazing episode all about artificial intelligence, and we will be talking about how this particular topic actually is super relevant to the world of associations, and you'll find out very soon why I introduced today's episode that particular way.
My name is Amit Nagarajan.
Speaker 2 (01:03):
And my name is Mallory Mejias.
Speaker 1 (01:05):
And we're your hosts, and it is an exciting time in the world, isn't it, Mallory? And with AI in particular, it's just kind of nuts. So I've been excited about this topic. I remember reading the news, and we're going to get into that in a minute, but a lot of people are going to be wondering why we're covering something that seems like a deep science topic, which we tend to do from time to time.
(01:26):
We'll digress from the world of, like, hey, here's how you use this particular tool, to big-picture stuff, but this one definitely is notable, I think. In fact, so much so that I think it's our only topic for today, right?
Speaker 2 (01:38):
It is our only topic, and I was just reflecting with Amit before we started recording: I don't think we've ever done an episode on a single topic. We've done some evergreen episodes, you know, about getting your board on board with AI, or data, or foundations of AI, but this is our first AI news item episode that's just one topic. That's how important we believe this is.
Speaker 1 (01:58):
Yep.
Speaker 2 (02:01):
Amit, how have you been doing this week?
Speaker 1 (02:01):
I have been really good.
I've been looking forward to this. I've been diving into a whole bunch of project work with our teams in different areas, and actually some of our project work is kind of related to this. That'll be fun to talk about. But just all around good here in New Orleans this time of the year, as you know from living here, and it's probably similar in Atlanta: it's just starting to get really, really nasty outside. It's like 95 degrees or pretty close, and 100% humidity, so I'm
(02:24):
not enjoying that, but otherwise I'm doing great. How about yourself?
Speaker 2 (02:28):
Honestly, I hate to tell you, Amit, but the weather's really nice here in Atlanta. It has not been that hot. I mean, we've been getting some peak days, maybe 83 Fahrenheit, but overall the weather's been really nice. I've been getting outdoors a ton, and actually my husband and I both just got new bikes. So we are really excited to get out and about and ride those.
Speaker 1 (02:48):
Did you get e-bikes
or no motor?
Speaker 2 (02:51):
It's funny. It's funny that you ask that, because my husband really, really wanted to get e-bikes, which are just, in my opinion, outrageously expensive, so we didn't. We couldn't have one of us with an e-bike and one of us without, which is what he proposed initially. I was like, I'll be behind you, like, wait up. So we ended up getting, I think, hybrid bikes, so road bikes slash gravel bikes.
(03:12):
We had mountain bikes before, but we never went mountain biking, so we decided we needed just a more standard bike, I guess.
Speaker 1 (03:19):
That'll be fun. I love cycling, not so much here in New Orleans, although you can do a little bit of cycling, primarily up on the river levee and also at the lake. You can do that because there's nobody driving there and it's actually pretty smooth. But in Atlanta, I imagine you have some pretty good trails and pathways to go ride on.
Speaker 2 (03:38):
And some pretty good hills, and that's why we realized it: we went to this one trail, probably last summer, with mountain bikes, and we couldn't get up the hills. We'd have to get off the bikes and walk them up. Everybody on their road bikes was passing us, so we decided to make that change. But we're very excited to start using those and get outside.
Speaker 1 (03:56):
E-bikes are pretty impressive, I'd say. Like, my wife has one up in Utah, and when we go mountain biking it's just kind of amazing. I actually don't particularly like riding it most of the time, because it's heavy, and with mountain biking you want to be able to kind of move around a lot and, you know, have a lot of maneuverability. But it sure is fun going up hills.
Speaker 2 (04:15):
Yeah, oh yeah, it's much, much easier. Amit, I wanted to talk about on the pod this new CESSE-Sidecar partnership that's been announced, if you can share a little bit about that.
Speaker 1 (04:26):
For sure. So CESSE is all the STEM associations, the STEM societies. They have a wonderful community of people, particularly in that sub-niche, and it's funny, because for people that aren't familiar with the association market, just having a handful of associations for associations, like the national body with ASAE, and then regional bodies like Chicago Forum or any of the
(04:48):
state SAEs, people are surprised that there's associations for associations even at that level. But there's, of course, even more associations for associations that are specialty-based. So there's CESSE, and there's NABE for the bar executives, there's AAMSE for the medical society executives.
So there's a lot of these wonderful organizations. What's cool about them is they hyper-focus on content and ideas
(05:09):
in their particular communities. So in the STEM societies, in the world of CESSE, the needs are similar to other associations, of course. Most associations share certain commonalities, but STEM societies oftentimes have much deeper needs in the area of scientific and technical publishing and content, amongst other areas as well. They tend to have lots of meetings that are, you know,
(05:32):
formal in nature, scientific proceedings, those kinds of things. So their requirements and their focus there, both in terms of business and technology, are concentrated, and that's why these types of groups form: because they want to talk about the issues most relevant to them. So we're super excited to partner with CESSE. They are the premier body that exists in this space for STEM
(05:53):
societies. The partnership with them is awesome. It's a member benefit to get a discount on the Sidecar Learning Hub, and we couldn't be more pleased to partner with those guys. So very excited to roll that out.
Speaker 2 (06:06):
Awesome, and so just to be clear here, the Sidecar AI Learning Hub comes to their association members at a discounted rate, the full AI Learning Hub, right?
Speaker 1 (06:16):
Yeah, the full AI Learning Hub is available to them, as well as the prompting course. Anything in the Sidecar AI Learning Hub, which includes those two options, or actually three options, because there's the Learning Hub without the certification and there's the Learning Hub with certification. So, CESSE members, through their membership, have a new benefit, which is a discount on all the Sidecar products.
Speaker 2 (06:37):
Very cool, and I feel like that was a really good segue into our topic for today, which is very heavy on the science. Today we're talking about Google's AlphaEvolve. Amit, I'm really glad you flagged this for me, because you sent me the LinkedIn post, and honestly, you probably feel the same way often, but when I see such an inundation of information around
(06:57):
AI all the time, pretty much every day, it can be hard to decipher what's really of an impressive magnitude, what I need to pay attention to and what I don't, even though all of it seems important. So I feel like I might have just glazed over this post until you sent it to me and said, all right, I think this is full-episode quality.
Speaker 1 (07:16):
I think I first learned about this through YouTube, maybe. I don't think it was LinkedIn, I think it was YouTube, and YouTube's algorithm for recommendations knows me quite well. It sends me a lot of really nerdy stuff like this. And, you know, this particular topic, as we get into it, I think might initially sound both intimidating, perhaps, but also not particularly relevant to a lot of association leaders.
(07:37):
So can't wait to get into that. But it just caught my attention, because I think it represents a significant new capability that has thus far, as far as I know, only been hypothesized but hasn't yet been proven by anyone else. So let's get into it.
Speaker 2 (07:52):
Yep. AlphaEvolve is a cutting-edge AI agent developed by Google DeepMind, designed to autonomously invent, test and refine computer algorithms for a broad range of complex real-world and scientific challenges. AlphaEvolve leverages the power of Google's Gemini large language models, combined with an evolutionary framework, to go
(08:13):
beyond simply generating code. It actively discovers new algorithms, optimizes existing systems and solves mathematical problems that have long eluded human experts. So to get into a little bit of how it works: it uses two versions of Gemini, Gemini Flash, which rapidly explores a vast space of possible solutions, and then Gemini Pro, which focuses
(08:35):
on deeper, more nuanced analysis. And here's an overview of how the process works. First, a user provides a task and an evaluation function, so basically the metric for success for that task. Then Gemini Flash rapidly generates multiple candidate solutions. The next step would be Gemini Pro analyzes and improves the most
(08:56):
promising candidates, and then automated evaluators rigorously test each solution against the defined metric that you provided at the beginning. Finally, the best-performing solutions are selected, mutated and recombined in successive generations, evolving toward optimal or novel algorithms. This method allows AlphaEvolve to autonomously refine not just
(09:18):
short scripts but entire codebases and system-level solutions, producing readable and auditable code that engineers can easily review and implement.
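To make that loop concrete, here is a minimal Python sketch of an AlphaEvolve-style evolutionary loop. It is only a sketch under stated assumptions: the llm_generate helper and the two model names are hypothetical stand-ins for the Flash and Pro roles, not Google's actual API, and the real system is far more sophisticated.

```python
# Minimal sketch of an AlphaEvolve-style loop: generate candidate programs,
# score them with a user-supplied evaluator, keep the best, and mutate them.
# `llm_generate`, "fast_model" and "strong_model" are hypothetical stand-ins.
import random

def llm_generate(model: str, prompt: str) -> str:
    """Placeholder for a call to a code-generating LLM client."""
    raise NotImplementedError("wire up a real model client here")

def evolve(task: str, evaluate, generations: int = 10, population: int = 8) -> str:
    # Seed the population with cheap, diverse candidates (the "Flash" role).
    pool = [llm_generate("fast_model", f"Write a program for: {task}")
            for _ in range(population)]
    for _ in range(generations):
        # Score every candidate with the user-defined success metric.
        survivors = sorted(pool, key=evaluate, reverse=True)[: population // 2]
        # Ask the stronger model to refine each promising parent (the "Pro" role).
        children = [llm_generate("strong_model",
                                 f"Improve this program for '{task}':\n{parent}")
                    for parent in survivors]
        # Next generation recombines survivors with their mutated offspring.
        pool = survivors + children
        random.shuffle(pool)
    return max(pool, key=evaluate)
```

The division of labor mirrors what Mallory described: a cheap model for breadth, a stronger model for depth, and the user-supplied evaluation function as the only ground truth.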
I want to talk a little bit about the discovery that has been getting a lot of press with AlphaEvolve. It broke a 56-year-old mathematical record by discovering a more efficient algorithm for multiplying four
(09:40):
by four matrices, reducing the required number of scalar multiplications from 49, which was the previous best, to 48. This surpassed the result set by German mathematician Strassen in 1969, a milestone in computational mathematics. At a glance that might not sound the most impressive, so we'll break it down just a little bit. Amit shared with me this great YouTube clip, and I want to
(10:03):
insert just a piece of it in the podcast, because the presenter gives a quick overview of how AlphaEvolve did this and why it is so impressive. So I'm going to play that now.
So I'm going to play that now.
Speaker 3 (10:13):
Most of the time, entries of your matrix are going to be real numbers. But AlphaEvolve realized if we use complex numbers, which have real numbers as a subset, we can get some magical cancellations and reduce the number of multiplications to 48. A lot of researchers would probably assume using complex numbers would make the problem more difficult, but AlphaEvolve somehow realized that's a good approach.
(10:35):
4 by 4 is very small but, just as a reminder, we can do this recursively for larger matrices. In fact, the larger the matrix, the bigger the effect, because now, instead of 49 times 49, you have 48 times 48 for 8 by 8 matrices, and the gap keeps growing the bigger the matrix.
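The compounding the clip describes is easy to check with a few lines of arithmetic. Applied block-recursively, the classic Strassen-derived scheme costs 49^k scalar multiplications for a matrix of size 4^k by 4^k, versus 48^k with AlphaEvolve's scheme, so the relative saving grows with every level of recursion. A quick sketch:

```python
# One saved multiplication compounds under recursion: a 4^k x 4^k matrix
# multiplied block-recursively costs 49^k scalar multiplications with the
# Strassen-derived 4x4 scheme, versus 48^k with AlphaEvolve's 48-step scheme.
for k in range(1, 6):
    n = 4 ** k
    old, new = 49 ** k, 48 ** k
    saving = 100 * (1 - new / old)
    print(f"{n}x{n}: {old:,} vs {new:,} multiplications ({saving:.1f}% fewer)")
```

At 4x4 the saving is about 2%, but by the fifth level of recursion it is nearly 10%, which is the "gap keeps growing" effect the clip mentions.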
Speaker 2 (10:53):
Beyond breaking this 56-year-old mathematical record, in a set of over 50 challenging mathematical problems, AlphaEvolve matched state-of-the-art solutions in about 75% of cases and improved upon state-of-the-art solutions in roughly 20% of cases. So, Amit, I often do this on the podcast: I like to quote you when you share something with me, because
(11:15):
I feel like I can pull a lot of insight from that. You shared a link, I think it was to the LinkedIn post, and you said, "Absolutely stunning and 100% predictable." I've got to ask: explain to our listeners, what did you mean by that?
Speaker 1 (11:28):
Well, you know, I think people that are deep in this stuff have been expecting systems to have this capability, which I would broadly categorize as an early exploration of the concept of recursive self-improvement, which is where a system is able to improve upon itself, and improve upon itself again, which is effectively what's happening here. And so
(11:50):
it's predictable, because there's a lot of people working towards this. The folks at DeepMind tend to focus on these types of unsolvable problems. I find a lot of their work to be just incredibly inspirational. And so "100% predictable" is because we know that we have a lot of the core elements to do this. Still, even though you expect it to happen, it's stunning to
(12:11):
see. So that's where I was coming from. I was just really excited about this.
Speaker 2 (12:16):
So you talked about recursive self-improvement, that was the phrase you said, right, and the ability for AI to now discover novel algorithms, which is something that, as far as we know, has not been shown before. Can you provide some more tangible examples of situations
(12:38):
where algorithm discoveries have changed the world?
Speaker 1 (12:40):
Well, I mean, algorithmic improvements have helped us do everything over time, from, you know, the earliest stages to where we're figuring out how to do very complex things now. So "algorithm" is essentially a complex, fancy word for a step-by-step way of solving a type of problem. So, you know, if we learn how to solve more problems, and more and more complex problems, and then if we come up with smarter,
(13:02):
better, faster, more efficient ways of solving the same problem, there's value there too. So we might know how to do matrix multiplication. We've known how to do that for a long time now, but we know how to do it in a way that's pretty compute-intensive, right? So if we can improve that by some degree of efficiency. In this case, you know, one out of 49 doesn't sound like a massive
(13:23):
increase, but as these matrices become larger and larger, the percentage of efficiency gained by this new algorithm actually goes up quite a bit. But the point is that even that level, whatever that percentage is, even if it's very small, is actually a stunning impact if you think about global energy consumption by AI systems, which heavily, heavily rely on matrix multiplication.
(13:45):
That's the core of what inference is doing. That's the core of a lot of what happens in training. If you can make that slightly more efficient, that's a pretty big deal. It's both good for performance, but it's also good for energy and cost.
But to me, the examples are actually literally anything you can imagine that's in this category of unsolvable. So, Mallory, I'd remind some of our listeners to go back in time
(14:08):
in the Sidecar Sync to our episode on material science. I'm trying to remember what it was called specifically, but there was a material science episode. AlphaFold was the bio-related one, but this was very similar. In fact, we talked about the materials genome in that episode. We'll have to go back and look it up, because my memory is failing me on this, but in that conversation I actually think it
(14:31):
was also Google DeepMind that had a materials AI. It was discovering novel materials, and this was incredibly interesting, because they were actually able to physically fabricate many of these materials, test the properties that were hypothesized by the AI, and prove that the AI was correct about the vast majority of them. So essentially, we have there one example.
(14:57):
You mentioned AlphaFold. AlphaFold has gone through several of its own evolutions, in AlphaFold's case by humans evolving it, but take the most recent AlphaFold 3, for example. That's being commercialized by a lot of different people, but the people over at Isomorphic Labs, which is another branch of Google that Demis Hassabis also leads, they're doing novel drug discovery with AlphaFold 3.
(15:19):
And they've built a layer of software on top of that. I'm sure the concepts in AlphaEvolve are bleeding over there, back and forth. So let me zoom out for a minute and try to explain why I think this is a big deal. If a computer system can be given an arbitrarily complex problem and told, improve upon this, make this better, solve this problem for me that I don't know how to solve, that's very different
(15:41):
than what we've been doing, even with the state-of-the-art AI systems we have in our hands, which, in many respects, are still effectively required to have somewhere in their training set something that essentially contains the answer. So thus far, and this is not 100% true what I'm saying, by the way, but it effectively is: thus far, all of our AI systems are capable of doing anything
(16:04):
that's in their training set, but not really generalizing in a broad sense and being able to create new solutions that didn't exist.
Take the 50 challenging problems that they ran through it, right? In 75% of cases, it matched the current best-known
(16:25):
algorithms known to humans, invented by humans. Right, but those known algorithms were not included in the training set, so it was able to essentially create new algorithms rather than use prior information. So that's by itself very impressive. And then there's the 20% of scenarios where it created new algorithms that were better. That's quite fascinating. Now, they did this through a system.
(16:46):
This is not a new model. This is the smart use of engineering on top of the underlying models. They actually used older versions of Gemini, Gemini Flash and Pro 2.0, which are both excellent models, but they're not even the latest 2.5. So when they do this again with Gemini 2.5, which is a reasoning model that has its own level of increased intellect, it'll be quite fascinating to see what happens.
(17:07):
So to me, it's all about the stuff that we don't know how to solve, right? For those of you on YouTube, and those who have heard me talk elsewhere, I have this flywheel in my background all the time, and it's there to remind myself as much as to share it with anyone else. But the very first thing is that we want to seek and destroy unsolvable problems in the association market. That, to us, is what drives us to move forward as a family of companies: to find these so-called unsolvable problems,
(17:31):
and then we want to go figure out how to actually make them solvable, right? So that's the place that I love to spend time thinking about, because that's where you can really unlock new business models, new sustainable sources of revenue for the association community. So we'll get more into that shortly, I'm sure, but there will be ways to apply this idea for lots of organizations, not
(17:51):
just people that are deep in science and math.
Speaker 2 (17:55):
I love the terminology "seek and destroy." I'm guessing, did you come up with that phrase?
Speaker 1 (18:00):
That's got my fingerprints on it, yeah. We had some lively debates in our planning meeting when we were coming up with that, but eventually we got everybody on board with it. But yeah, I like visually captivating imagery where we're like, you know what? That's what we're going to go do. We're going to seek them out and we're going to just crush these unsolvable problems.
Speaker 2 (18:17):
Yeah, it definitely makes you feel something, so I like that as well. Amit, you talked about the idea of creating novel algorithms. I think for some of our listeners, even me included, when we use a large language model, it can seem like AI is coming up with novel, quote-unquote, solutions. For example, if I give it some ideas we have about digitalNow, it might come up with this theme that I had never thought
(18:40):
of. But what you're saying is, in its training, somewhere that information exists; it wasn't creating something truly novel.
Speaker 1 (18:47):
Yeah, I mean the models we have now, especially the reasoning models, are able to synthesize new ideas in a sense, in that they're able to combine ideas from other ideas. Right, it's not that there has to be, like, a copy of a piece of text that literally answers your question. In fact, even going back to the earliest language models, that's not what they did. They were creating new answers, but if they hadn't been trained
(19:10):
on something, it would be extraordinarily unlikely for them to come up with something truly unique. Now these models have gotten better, partly also because they have access to tools where they can write code and run it. They can search the web. So that was, of course, originally the domain of just Perplexity and a couple of other early innovators, but now every major AI tool has built-in web search. So you know they can,
(19:32):
they can go discover new information that's outside of their true training set. So that statement is evolving as we speak. But ultimately, the way I think about it is that so far these models don't work the way our brains work in terms of this creative space. They have very limited ability to, you know, really ponder the problem and kind of go through the creative process the way, you know, a lot of people do in order to solve novel
(19:54):
problems. Right, you think about, like, what's the journey of a scientist that's trying to find a cure for something? It's not really a linear path, right? It's always all over the place. You hear all these stories about people waking up in the middle of the night, and in their dream state they thought of this. Or, you know, they're walking their dog and they saw something in nature that inspired them to think of a new solution, and on
(20:15):
and on and on. You know, the apple falling on Newton's head, right, that kind of stuff. So, you know, AI doesn't really have that experiential type of component, and therefore it's not as creative yet. That doesn't mean it won't be, but at the moment, you know, what we use day to day in Claude and ChatGPT is not that level of creativity, and creativity is the ingredient that drives any
(20:35):
kind of new discovery, whether it's in poetry or in science.
Speaker 2 (20:40):
If you all have been listening to the podcast for a while, you've heard this: Amit, you've definitely said multiple times, even if the AI we had today doesn't get better at all, we will continue to see discoveries or new applications over the next, probably, I don't know, five to 10 years. I feel like this is a prime example of that, right? Because it's not a new model; it's simply engineering that was put together with the models we already had.
(21:01):
Right?
Speaker 1 (21:02):
Totally, yeah. And you know, I'll give you a quick example of that in one of our AI agents, a tool called Skip, which does, you know, data science and analytics and stuff like that through a conversational interface. Up until the most recent version of Skip, which we're about to release, we would run a request through our agentic pipeline once. So essentially what would happen is you go to Skip and
(21:22):
say, hey, I need to analyze my member data to figure out retention by geography and see if there's a correlation between member retention by geography and age, or, you know, I'm just making stuff up, right, just any arbitrary question. Skip gets to work, looks at your data, figures out what to go pull in, starts writing code, puts together a report, sends
(21:46):
it back to you. So that happened once. Now, with AI costs dropping so much and compute being more abundant, we're actually running some of these components of that agentic pipeline dozens of times in parallel and then picking the best answer. So the final step that Skip goes through in creating a report is to actually generate fairly complex code. It can be, in many cases, thousands or even tens of thousands of lines of code. And rather than doing that once, we say, hey, do it three, five,
(22:08):
10, 50 times in parallel, and then we have another AI process that's capable of evaluating the output and picking the best answer, which in some ways is similar to what AlphaEvolve is doing. Now, our stuff is not nearly as advanced as what they're doing in terms of testing different algorithms, because what we're doing doesn't require novel algorithmic discovery, but in a way it's a similar process, because we're basically running
(22:30):
a lot of these processes over and over and then iteratively improving within the agent. It's not in the model layer; it's in the agentic layer. To me, that's why in past episodes I've said many times the terminology is interesting but not necessarily that important, because what happens in the model versus what happens in the agent layer or the application layer, does it work or not for the user, is the question. So those are some things I think I'd point people back to.
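What Amit is describing is essentially a best-of-N pattern: run the same generation step many times in parallel and let an evaluator pick the winner. Here is a minimal sketch of that pattern, where generate_report and score_report are hypothetical stand-ins rather than Skip's actual internals:

```python
# Best-of-N sketch: run the same pipeline step n times in parallel, then let
# an evaluator pick the strongest output. The two helpers are placeholders.
from concurrent.futures import ThreadPoolExecutor

def generate_report(request: str) -> str:
    """Placeholder: one full run of an agentic report-writing pipeline."""
    raise NotImplementedError

def score_report(report: str) -> float:
    """Placeholder: an AI judge that rates a finished report."""
    raise NotImplementedError

def best_of_n(request: str, n: int = 10) -> str:
    # Fire off n independent runs; cheaper compute makes this affordable.
    with ThreadPoolExecutor(max_workers=n) as pool:
        candidates = list(pool.map(generate_report, [request] * n))
    # Keep only the highest-scoring candidate.
    return max(candidates, key=score_report)
```

The design choice is the same one behind AlphaEvolve's evaluators: generation is cheap and noisy, so quality comes from sampling widely and selecting, not from any single run.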
Speaker 2 (22:54):
Yep. I want to start zooming out just a bit and really talk about what makes AlphaEvolve unique, at least from what I found in my research. So the first, which we've started to touch on, is its general-purpose capability. Unlike previous domain-specific AI systems, and we just talked about AlphaFold, which was specifically for protein folding, AlphaEvolve is a general-purpose algorithm discovery
(23:16):
agent, so it's capable of tackling a wide variety of problems: computing, mathematics, engineering. I'm sure we can even extrapolate further than that. The second piece of what's unique about it is its evolutionary search. The evolutionary approach, combined with automated evaluators, enables it to autonomously explore and optimize the solution space far beyond what's possible right now
(23:39):
with traditional AI code generation tools. And then human-readable outputs. The system produces clear, structured and auditable code, making it practical for real-world deployment and human collaboration. Amit, of these three things that I mentioned, human-readable output, general-purpose capability and evolutionary search,
(24:00):
can you talk about each of these unique components, and maybe we'll get into how these might apply to associations?
Speaker 1 (24:07):
Yeah, totally. I think each of those is really important. I'll actually start with the last one first, because it's probably the easiest one to describe. I think it's really important that AI systems communicate in natural language, language that's natural to us humans, as opposed to some kind of funky computer communication mechanism, because that makes it not only interpretable and discoverable, but also something we can, you know, keep up with, right? As
(24:29):
opposed to... computers could probably find a much more efficient way to communicate, because our natural languages are designed for our brains, and computers can do things differently. But that's important. Not all systems of this type will necessarily have to work this way, but I think it's really important to have human-readable output and human-readable steps along the way. One of the leaders in this (what I'm referring to, kind of broadly,
(24:50):
is this category called interpretability, which is a really big, important dimension of AI research) is actually Anthropic, who we've talked about a number of times in this pod. They're the makers of the Claude AI system, and those guys are really, really good in their research efforts. Their emphasis on interpretability and human-readable output is a big thing that they and many others
(25:12):
emphasize, because that's, like, the side of it we can see. Those guys go into the model itself to understand what's happening in the actual neural network, and that's super interesting, but a little bit off topic.
So, in terms of general-purpose capability, we've touched on that in the first part of our discussion, Mallory, where this is not just about mathematics, certainly not just about matrix multiplication, but it can be about anything.
(25:32):
So imagine you had access to a system like this in your association. You said, hey, I need to improve member retention, go figure it out. And so what are some of the common playbooks that people would pull up? Well, we should run an engagement campaign. Let's try to figure out how to get people to more of our events, because we know there's a correlation between people who attend events and better retention.
(25:54):
Now, is that a correlation or is it causation? Meaning, do people who come to the events renew because it's a byproduct of being at the event, or is it some other effect, right? We don't necessarily know that, but we might think, well, that's one playbook, or playbook theme, we know: drive event attendance to try to drive retention.
(26:15):
What about sending out better content? That's another common one. What about just communicating the value of membership that they've received? A lot of people forget about all the things they do with the association. These are, in our brains, common association ideas. But what about just coming up with other ideas, right? So what if we had a system that could explore the space around member retention automatically, hypothesize a bunch of possible
(26:36):
solutions, and then come up with, hey, here are 10 different things you could go test, and then help you actually go test these novel ideas? Some of them might not be very good, some of them might be at the level of your current tactics, and some might be better. So that's kind of applying the concept from the world of, let's say, math or engineering to a domain in business.
(26:59):
The evolutionary search part is a really important piece to come back to and make sure is clear. So how does evolution work? Over a very long period of time for biological species, stuff happens in our environment. Largely it's various forms of radiation that cause us to effectively have mutations in our DNA, and over time that
(27:21):
causes slight variations to occur from one generation to the next, to the next, to the next, and some of those adaptations are helpful and some are not. When those adaptations are helpful, that particular branch of those generations tends to thrive, and the other branches tend to not thrive, right? And so over a period of time, that happens over thousands and
(27:44):
thousands, or millions, of generations, and you have a lot of evolution, a lot of change that occurs. So in this whole evolutionary computing category, which is very closely aligned with AI but is its own branch of computing and computer science research, people have been exploring this space for a long time, and pre-AI, or pre-modern AI, it was a very slow process to do this. But now, with AI coming up with thousands of candidate
(28:07):
algorithms, the evolutionary piece is tied to it, to say, okay, what if we mutate these algorithms a little bit? What if we change the approach to this part of the algorithm or that part of the algorithm? Making these small tweaks, rather than being caused by environmental factors, they're caused by intentional modification, intentional mutation. And then we test the offspring, the next generation of the algorithm.
(28:27):
So you take an algorithm and you tweak it a little bit, and then you test it, and tweak it a little bit, and you test it.
Now, in the context of a business domain problem, we're a little ways from being able to actually do this, because to actually test these different ideas, you don't want to really test all this stuff on your members, right, and say, hey, what would happen if we send them all this? Now, you could A/B test things, and you can definitely get close to empirical results, or actual empirical results, with subsets
(28:49):
of your audience, and that's very helpful for things you think are likely to work. But to just, like, take the craziest possible ideas and test them on your live members? You're probably not going to do that anytime soon. So think about some of our conversations over time in this pod, the idea of digital twins or simulation systems. What if you had a digital twin for your membership, and your membership, through all the data and all the attributes you have
(29:12):
across every system and every interaction you've ever had, was modeled into an effective digital twin of your association's membership? That essentially means it's a simulation of how your, you know, 10, 20, 30,000 members would behave in response to different stimuli, down to the individual node, the individual members. So if this message goes across, this is how this member would react. And then what would happen to this other member, you know,
(29:33):
because there's all sorts of chains of things that happen based on social effects and so on and so forth. Right, so if you had that kind of digital twin of your membership, and then you had an evolutionary discovery algorithm tied into a system like this, it could test out all sorts of different ideas: how can we improve engagement, how can we get more people to events, how can we drive increased renewals?
(29:54):
And I'm picking, frankly, what will eventually be seen probably as fairly pedestrian problems, but they're what occupy our minds today, if those are our issues. But if you could test all these different hypotheses from an evolutionary algorithm against the digital twin, well, it's not 100% real, but it could get pretty darn close to giving you a good prediction about which algorithms might be good for you and which ones might not be good for you.
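As a toy illustration of that idea, and nothing more, here is a hedged sketch in Python. The member profile, the response model and all the numbers are made up for illustration; a real digital twin would be learned from actual member data across your systems:

```python
# Toy digital-twin sketch: each member is a small synthetic profile, each
# campaign is a stimulus, and a (completely made-up) response model predicts
# renewal. Candidate campaigns are then scored against the whole population.
from dataclasses import dataclass
import random

@dataclass
class MemberTwin:
    tenure_years: int
    events_attended: int
    opens_email: bool

def predicted_renewal(member: MemberTwin, campaign: str) -> float:
    # Hypothetical heuristic: long-tenured, engaged members renew more often.
    score = 0.5 + 0.03 * min(member.tenure_years, 10)
    if member.opens_email and "event" in campaign:
        score += 0.05 * min(member.events_attended, 4)
    return min(score, 0.99)

def best_campaign(campaigns: list[str], members: list[MemberTwin]) -> str:
    # Simulate every candidate campaign against the synthetic membership and
    # keep the one with the highest average predicted renewal rate.
    def avg(campaign: str) -> float:
        return sum(predicted_renewal(m, campaign) for m in members) / len(members)
    return max(campaigns, key=avg)

# Example: 1,000 synthetic members, two candidate campaigns.
members = [MemberTwin(random.randint(0, 20), random.randint(0, 6),
                      random.random() < 0.6) for _ in range(1000)]
print(best_campaign(["event discount push", "monthly content digest"], members))
```

An evolutionary layer, like the earlier sketch, would then mutate the best campaigns and re-test, with real A/B results feeding back into the model over time.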
Speaker 2 (30:14):
Wow, I laughed when you said digital twin, Amit, because I think you and I are evolving into the same person slowly, because, as you were talking, I'm like, oh my gosh, what if someone had a digital twin of their association? And then you said it, and I said, wow, okay, we're obviously very in sync. The example you gave was a bit more theoretical.
(30:42):
In actuality, maybe not right in this moment, but with AlphaEvolve, let's say, six to 12 months from now, is that something an association could use it for? Like, if they gave it access to some data source and could run all these theoretical outcomes?
Speaker 1 (30:52):
I think this is probably more of, like, a by-the-end-of-the-decade kind of thing, as opposed to six to 12 months. I think the amount of compute you need and the level of sophistication you need will be different. You know, let me actually zoom in on a little bit different version of the problem I started to describe. What if you wanted to make a decision on where to have your next annual conference, and you have five different choices, you know, in terms of contenders, and you're
(31:15):
having a hard time making the decision? Well, you know, that's something you could say, well, let's test that through this process with the digital twin. Let's simulate how that's going to happen. And then, once you pick a site, you might say, well, what's the best marketing campaign for this conference? And there might be essentially an infinite number of variations of how the marketing campaign might work in terms of timing
(31:36):
and messaging and sequencing and, you know, channels and all this kind of stuff. You could have essentially an evolutionary algorithm with AI come up with a whole bunch of different hypotheses for what's the best campaign, and then test them on your digital twin and see which one performs the best. And then the feedback loop, of course, would be, okay, you pick one or two of them to, you know, run against your
(31:57):
actual, real population. You get the continuous loop of feedback from that, and that feeds right back into the process. So the algorithm is hypothesizing and testing the outcome against a population of essentially fake or digital versions of each of your members that, in aggregate, are a digital twin of your association's membership, and then that feedback loop is all synthetic.
(32:18):
But then, when you get real data based on using the experimental idea, once you think it's good, that further reinforces the cycle. So I think these things are going to be real. But, you know, this is not stuff that's going to happen anytime in the near future, partly because the data prerequisite is where most associations will have a tough time. You know, coming back down to earth for a minute:
most associations just have a hard time answering a question
(32:39):
like, which of my members receive these publications, and have taken these courses, and have been to our website more than once in the last three months? Because that data is stored in their CMS and their LMS and their AMS, and they have a hard time getting that result. So people need to solve for these more basic challenges first, then feed into the kind of thing we're talking about. So what we're talking about today, everyone, is not science
(33:02):
fiction. It's totally real, and it's super exciting. But it's more about helping train your brain on where things are going, so that you can anticipate this and say, you know, I remember on the Sidecar Sync back in 2025, I was hearing about this really cool AlphaEvolve thing, and now it's 2027.
(33:22):
You just have that noted in your brain somewhere, and you're like, we're designing our next system that does this. By then, AI has doubled in power, you know, three, four or five times. So what can you do then? Maybe this is, like, you know, just something you can click a button on a website to get at that point.
Speaker 2 (33:35):
Yeah, wow, absolutely mind-blowing. You're right, it does sound like science fiction, but it's real life, folks. I want to talk about how Google is also using this internally, so not just for mathematical challenges, but they're using it to improve their own business. One example is the company's cluster
(34:00):
management system. Its new scheduling heuristic improved the utilization of Google's global computing resources by 0.7%, translating into millions of dollars in savings and significant energy resource conservation. The AI has also contributed to hardware design, optimizing circuits in Google's tensor processing units, or TPUs, and improving the speed of key AI training kernels by up to 23%,
(34:21):
resulting in a 1% reduction overall in model training time. So that's one piece, how Google's using it internally, which I think goes back to what you were saying, Amit, about how potentially an association could use this internally as well. I also want to mention that, in terms of the future, DeepMind is developing a user interface for AlphaEvolve and plans to
(34:42):
launch an early access program for select academic researchers, with the possibility of broader distribution in the future, which is something we'll keep an eye on. I'm also thinking about the downstream effects for associations that have, perhaps, researchers as part of their membership, and how this
(35:03):
obviously will change a lot within the realm of research, how an association can kind of grapple with this, provide education on it, things like that.
Speaker 1 (35:12):
Definitely. I'm glad you brought that up, Mallory, because I think that for an association's internal business operations, this is interesting, but not the most immediately pressing or really available thing, like we were just discussing. I mean, if an association really wanted to test ideas like this and was pretty far along with their data aggregation and AI data platform and all that kind of stuff, they could totally do experiments with technology like this, not Alpha
(35:35):
Evolve specifically, but there's ways of emulating these concepts today. But most associations aren't going to be playing with that for internal operations. But, to your point, many associations have communities full of people that are doing scientific research, or doing other things that are kind of in a similar vein, and this
(35:55):
discovery, this capability, would be extremely relevant for those folks. So I think associations need to be the conduit through which their members, their community, learn and continue to stay informed on what's happening in the world of AI, contextualizing it for each of those communities, just like we attempt to do here for our association friends. So there's an amazing opportunity, I think, for all
(36:18):
associations, quite frankly, to be the center of AI learning for their communities, whether it's communities of mathematicians or computer scientists, or if it's communities of teachers or doctors or lawyers or whatever it is. The association knows the context of that space probably better than just about anybody, and so to be able to bring AI
(36:39):
content into that world and to contextualize it so that it's helpful and relevant is an amazing opportunity, both to advance your mission as an association but also to drive revenue, because if you're consistently providing great content on this topic, and we can tell you from our own experience, it generates a lot of interest, which is exciting. And then from there, there's opportunities to develop, you
(37:01):
know, member value-adds, where you can perhaps provide some content as part of membership, but certainly to develop courses and deliver an incredible amount of value to your members. And so, yeah, I mean, if you think about this topic, and you're listening to it, and you are anywhere even a degree or two separated from, let's say, a scientific realm, but certainly those that are directly in it, this is absolutely a topic you should make sure your members are aware
(37:23):
of. To not do that would be, you know, really problematic as an association for your space, is the way I would put it. So I think it's an incredibly important thing and a big opportunity.
Speaker 2 (37:35):
I want to look at the human element of that opinion too, Amit, because this is obviously very new. You said it was predictable; I would say for many of our listeners, me included, right, I don't think I would have predicted this per se, but you spend a lot of time thinking in this realm. And so, given that it's new, given that it's a bit hard to understand, a bit ethereal, and if you imagine an association of computer
(37:59):
scientists, or people that we deem highly technical, I could imagine our listeners saying, well, we cannot produce content on something that Google DeepMind has been studying for years and years. What do you say to that, if an association feels intimidated by the level of expertise that it seems this requires?
Speaker 1 (38:19):
Well, so a couple of things. First of all, at a minimum, you can make sure they're aware of it: you can share a link in your newsletter. That doesn't take that much effort at all, right? So that's one thing you can do without trying to assert any level of expertise. You can also partner with people to deliver AI content, people who have deep expertise in AI that can help you develop content, develop learning modules, things like that.
(38:40):
We actually do that, by the way. For those of you that aren't aware, we partner with a lot of associations to help them with training for their audiences, where we create content specific to their industry. But there's tons of people who can help you with that. That is an option as well: use some of your resources to develop that content with outside expertise.
But the one thing I want to point out about what you said, about people who are in technical realms, and this might
(39:02):
be doctors, it might be math, science, it could be engineering: a lot of times the association staff are, in fact, intimidated to even raise a topic that's even somewhat technical, because they assume that their triple-PhD average member is already way up to speed on all that stuff. And my experience has been that the people who are
(39:25):
like super deep in a particular realm, they might have, like, a conceptual understanding of how AI works, and this is even true for computer scientists, and particularly even AI researchers in computer science, but they're so deep in their one area that they often don't see the pattern. They don't recognize the macro trends, and a lot of times they assume things that might be based upon something that they
(39:46):
researched years ago. So a lot of times the folks that are deeply technical and in particularly narrow fields don't see what's happening overall. And that's part of your job, as far as I see it, as an association: to take that somewhat uncomfortable stance of saying, listen, we think it's important that we provide an AI intro course for our engineers, even if they're in a field
(40:08):
that's adjacent to computer science, or adjacent even to AI directly, right? If you just keep doing what you've always done and stayed in a comfortable lane, well, that lane may eventually just end. Maybe that lane goes off the side of a cliff because it's not needed anymore. That lane may not be the place to stay. So sometimes you've got to switch lanes, and this hits home for me
(40:29):
because I'm teaching my youngest right now how to drive, and she's not a big fan of lane changes. But, you know, I try to make sure that she does plenty of those, because when I get out of the car, when she turns 16 shortly, I want to make sure she's safe.
Speaker 2 (40:41):
So just get her an
e-bike, you know, to push it off
a little longer.
Speaker 1 (40:46):
Actually I'm pretty
excited about her driving.
Speaker 2 (40:48):
It's going to be great for her. I want to go back to the association having context: the association is arguably perhaps the best entity in the world in terms of context for their profession and industry, at least within their geographic region. So don't doubt yourselves, you have that context.
(41:10):
A singular computer scientist may be technical, may understand how AI and neural networks work in theory, but you've got the greater context on kind of all of that put together, if that makes sense.
Speaker 1 (41:21):
Totally. I think that's a really good way to phrase it. And, you know, I think one of the things that we need to do a good job of, and I have a hard time with this a lot of times, is to zoom back out from time to time and retest our assumptions, retest our beliefs, our views on what is and isn't, what can be and what
(41:42):
can't be. We have these, you know, deeply calcified systems of assumptions as people, and they're used as essentially heuristics to give us shortcuts so that we don't have to reprocess everything we think we know. But sometimes those heuristics or shortcuts essentially lead us down a false path. Something may have been true even six months ago, but it's no longer true today, and it's really hard in
(42:04):
an environment that's changing this fast. But I find that the people who hold on to those assumptions most dearly are oftentimes the people who are the most intelligent, best educated and deepest in some space, because they've always been told they're the smartest person in the room. And so it's your job as the association to say, maybe you are, but you haven't paid attention to this. And you can say it in
(42:25):
a much nicer way than that if you want, but sometimes it's good to go knock people on the head and say, listen up, you've got to look at this. And I do think that's the job of the association. Your job isn't to just kind of, like, be the bystander and say, hey, I'm going to hang out here and just give you the same stuff I've always done. Your job is to optimize for the success of your members and to help them do their work, which ultimately influences the
(42:46):
well-being of the world, and that's what I find motivating about helping this space.
So I don't think it's a matter of, well, no, our members would never want that, and, you know, our members can't tolerate that, our members would never use that. You know, we've been hearing that a ton over the last couple of years with another one of our companies, Betty. Betty's our knowledge agent, and, you know, that company has worked with close to 100 associations at this point and is growing really
(43:08):
fast, and many of them are very, very technical organizations. Tons of medical societies, engineering societies, nursing societies, accounting organizations, people that have extremely deep technical content and subject matter. And consistently, one after the next, after the next, after the next, in deployment they come back and are told by their most experienced members:
(43:30):
this is amazing. This AI is better than any member I can talk to. It's more knowledgeable, smarter, better answers, obviously faster. And that is not what they thought it would do. Even the people who are signing up and paying for a product like this, they're blown away consistently. And so that assumption set: if you can show that people would use a tool like that to inform the decisions they're making in
(43:51):
their field, right, their field, their expertise area, they most certainly will be open to the idea of the association giving them some insights on AI as well, contextualizing it for their field. So it's a massive opportunity.
If you don't do it, somebody else will. As an entrepreneur, I look at this and say the brand asset and the cornerstone resource, in terms of data and content, that associations have is such a compelling opportunity to build
(44:16):
businesses, to build franchises within your world, where you have distribution, where you have relationships, you have content and you have this incredible brand value. To not use that is nuts to me, and it's basically sitting there waiting for you to go capture it. You can generate a lot of revenue from this if you think about your business model in a creative way, and you can do an
(44:38):
incredible job serving your members.
Speaker 2 (44:41):
I would say you have a knack for predicting things pretty well, or at least on the podcast, I feel like a lot of the things you've predicted have come true in some regard. So I'm curious, looking ahead with AlphaEvolve: what do you expect to see near term, long term, by the end of the decade? What kinds of challenges will we find solutions to? How do you expect to see this play out?
Speaker 1 (45:03):
So what I think is going to happen with this particular technology is everyone's going to replicate the concept, and it's going to find its way first into coding tools, so tools like Cursor and Windsurf and Claude Code and all these other tools. They're going to incorporate these concepts. It's going to make code better, even smarter, better at solving problems that the developers that are training
(45:24):
these tools, or guiding these tools, I should say, don't know how to solve. So that is going to be very powerful, and I think it's going to have a compounding effect, because where the code goes, everything else follows. You know, that's why coding tools have been such a natural place for these companies to focus on. It's one way to make a lot of money in the world of AI today. It's also super competitive, but it's a direct, you know, use
(45:46):
case of all these technologies that is insanely productive. I mean, the things you can do with one engineer now would have required a team of 20 last year at this time, and I'm not exaggerating that. And so if, let's say, one person can do what a team of 500 could have done a year ago, you know, that's going to come from this type of improvement. It's not just faster, better models. It's this kind of additional capability, so you can see it
(46:08):
there. I think people will rapidly adopt this in other specific domains, like branches of medicine or things like that. So I think it's going to be really interesting to think about, like, the number of cases where people say, I don't know how to solve this, I don't know what this issue is that this patient has. Or maybe you know what it is but you don't know how to cure it, but maybe somebody else does, right? Or maybe there's some
(46:28):
novel cure that, you know, an AI can come up with. So I think there's just so much more out there, bigger than what we know. I don't think most people realize the percentage of discovery that's left to be had in space, and even in our own oceans, is vast. Right, like, what we know about marine biology relative to what
(46:49):
is knowable is a tiny fraction, and that's true with AI, certainly. So I think all this exploration is going to get accelerated, which is fundamentally exciting.
Speaker 2 (47:00):
Yep. Well, everybody, thank you for tuning in today. Hopefully you learned a little bit more about AlphaEvolve and how it might pertain to your association sooner than you think. We will see you all next week.
Speaker 1 (47:14):
Thanks for tuning in to Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to bootcamps, to help you stay ahead in the
(47:36):
association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.