Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Amith (00:00):
I think AI is... you know, this provides us theoretically unlimited intellect on tap to go solve the world's problems. You know, many of the world's problems are materials problems. Many of the world's problems are energy problems, right, and so those are things that get exciting.
Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and
(00:21):
developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions.
I'm Amit Nagarajan, chairman of Blue Cypress, and I'm your host.
(00:43):
Greetings and welcome to the Sidecar Sync, your home for awesome conversations all about artificial intelligence and the world of associations. My name is Amit Nagarajan and my name is Mallory Mejias, and we are your hosts. And before we get into another exciting episode at that
(01:04):
intersection of AI and all things associations, we are going to take a moment to hear a quick word from our sponsor.
Mallory (01:11):
If you're listening to this podcast right now, you're already thinking differently about AI than many of your peers. Don't you wish there was a way to showcase your commitment to innovation and learning? The Association AI Professional, or AAIP, certification is exactly that. The AAIP certification is awarded to those who have achieved outstanding theoretical and practical AI knowledge.
(01:34):
As it pertains to associations, earning your AAIP certification proves that you're at the forefront of AI in your organization and in the greater association space, giving you a competitive edge in an increasingly AI-driven job market. Join the growing group of professionals who've earned their AAIP certification and secure your professional future
(01:56):
by heading to learnsidecarai.
Amit, how's it going today?
Amith (02:02):
It's going fantastic. It's, you know, like a completely different experience in New Orleans. Like two weeks later, after the snow that we talked about recently, it's now 70 degrees. I played a little tennis early this morning and it was like a thousand percent humidity and it felt like it was early summer already. So I don't know, that cold snap did not last.
Mallory (02:22):
Yeah, I've been outside. We've been having a warmer front in Atlanta as well. I've been outside every day for multiple hours a day, which has felt so good after it being cold. Really, I know I'm a wimp, we only had cold weather for like a month and a half, but I realized that I do miss the warmth for sure.
Amith (02:40):
Yeah, it is nice. At the same time, I'm kind of hoping for a little bit colder weather for the next couple months here in New Orleans, because, you know, when it gets warm here, it gets warm.
(03:01):
And wait, we're recording this episode just a bit early. I know the Super Bowl happenings are going on. I love football and the Super Bowl is super fun, but I am not going anywhere near the Superdome anytime in the next few days.
Mallory (03:11):
That's fair.
I've had friends go out there.
It seems fun.
It seems like chaos, but, you know, hopefully it's a great time.
Amith (03:19):
It's not nearly as crazy
around here as it was when
Taylor Swift was in town.
That was much more crowded, much crazier.
Mallory (03:32):
Okay.
Well, she's going to be in town again, though, because, you know, her significant other plays for the Chiefs, so you might be dealing with like a Super Bowl plus Taylor Swift debacle.
Amith (03:38):
That's a good point.
Actually, I hadn't thought about that, but I don't think she's performing. So if she's performing, you know, then yeah, New Orleans is just crushed.
Mallory (03:48):
Only time will tell.
Amith (03:49):
Well, I mean.
Mallory (03:50):
I wanted to mention too, because I don't think we've talked about this on the podcast recently, but we do have the Blue Cypress Innovation Hubs coming up. We have one in DC and that is March 25th, and then we're doing another version of that Innovation Hub in Chicago, which is April 8th. For those of you who don't know, we launched this event maybe two years ago and I was the one who worked on it.
(04:12):
It was just in DC at that time and it's kind of a one-day event all about innovation. We talk quite a bit about artificial intelligence, but other innovative technologies there as well, and I just wanted to share it with all of you in case you want to hear Amit speak, right, because you'll be at the one in DC. What about Chicago?
Amith (04:30):
I will definitely be at the one in DC. That's confirmed. I am very likely to be at the one in Chicago as well. I just need to get the hall pass, and then, assuming I have that, I will be in Chicago as well. That's, I'd say, 75, 80% at this point. Hopefully my wife's not listening to this, because I haven't actually officially asked her yet about that trip. We'll see, but in any event, yeah, I think I'll likely be in Chicago and I'm super pumped about it.
(04:53):
It's a little bit different format and a different feel than Digital Now, our flagship event at the latter part of every year. You know, we launched the two events in Chicago and DC to have a smaller, more intimate-feeling regional event in the springtime in each location, where there's obviously large concentrations of associations in both Washington and in the
(05:14):
Chicago area and we have lots of wonderful relationships in both towns. So we thought it'd be great to do something kind of on the other end of the calendar but also take a different approach. Digital Now is, definitionally, at the intersection of technology and strategy. That's what Digital Now has been for 25 years, and the Innovation Hub is just... it's purely about innovation. As you pointed out, it's artificial intelligence big time.
(05:36):
Of course, we're talking a ton about AI, but it could be innovation in a business strategy, it could be innovation in a financial model, it could be innovation in culture, it could be any kind of innovation, and we feature a number of speakers both within the Blue Cypress family and also in the client community, so it's a really cool event. It's just a one-day event, so it's super easy to block off.
(05:57):
It's a great educational experience, so I'm super pumped about it. I'm looking forward to seeing people in person in, I guess, just a handful of weeks at this point.
Mallory (06:05):
Exactly, yeah. Like Amit mentioned, it's really intimate. We're expecting probably around 50 to 75 people, maybe a bit more than that, and so it's a really awesome opportunity to connect not only with folks in the Blue Cypress family but, as Amit said, association leaders as well. So if you're interested in checking that out, we will be including links in the show notes for both locations. Today, we've got two exciting topics lined up.
(06:28):
First and foremost, how could we not talk about OpenAI's recently released Deep Research? And then we'll be talking about the Jevons paradox, which is a phrase that has become pretty popular in the last few weeks. So, first and foremost, OpenAI's Deep Research was launched on Sunday, February 2nd of this year, and it's an agent
(06:48):
capable of performing complex, multi-step research tasks on the internet. Now, if you're having a feeling of deja vu right now, it's because we recently covered Google's Deep Research on a previous pod a few weeks ago. So what does OpenAI's Deep Research do? It's pretty similar to Google's, but it can gather, analyze and synthesize information from hundreds of online sources to
(07:10):
create comprehensive reports at the level of a research analyst. It can accomplish in 5 to 30 minutes what would typically take a human many hours or even days to complete. It's useful for various applications, from providing intensive knowledge for researchers to assisting shoppers with hyper-personalized recommendations. Every output, of course, includes clear citations and a
(07:32):
summary of the agent's thinking process, which makes it easy to reference and verify the information that you're seeing. Deep Research is powered by an optimized version of OpenAI's upcoming o3 model, which can search, interpret and analyze massive amounts of text, images and PDFs on the internet. OpenAI claims that Deep Research achieved a new high
(07:53):
score of 26.6% accuracy on Humanity's Last Exam, a benchmark test for AI models across various subjects that some say is the hardest AI exam out there for models. The tool also topped another public benchmark test called GAIA, which evaluates AI models on reasoning, multimodal fluency, web browsing and tool usability.
(08:15):
Deep Research is currently available to OpenAI Pro users at the time of this podcast, with a maximum limit of 100 queries per month, and access will be expanded to Plus and Team users next, followed by Enterprise users. Right now, the tool is exclusively available via the web, with plans to integrate mobile and desktop applications later this month. So, Amit, this was an exciting release.
(08:37):
We've talked about Sora on the pod. We've talked about Operator Agent from OpenAI. All of those you said, eh, I'm not interested in upgrading to Pro, I don't need to test those out right now. But with this one you pinged me and you said, I think I'm going to upgrade to Pro. So I want to hear your initial thoughts, and why is this the thing that's got you really interested?
Amith (09:05):
...but I couldn't get it to work, because we are a Teams user, I think that's what it's called. We have, you know, I don't know how many people, just a whole bunch of people across the Blue Cypress family on this one OpenAI account, and I couldn't upgrade my account. I definitely wasn't ready to upgrade, you know, many dozens of people across the organization to the $200 a month thing, so I couldn't figure out how to do it. I was thinking, okay, well, I'll just create a separate personal account, but I didn't want to go through the hassle. So just a quick side note: friction is bad, and it would be
(09:27):
good to make it easy for people to spend significant dollars with you, and I think associations need to continuously remember that. Even with a captive audience, where there's a novel tool, where people are like, I really want to try this out, people are busy. I was busy that day. I didn't have time to go mess around. I really didn't want to create a separate account anyway, I just wanted to be able to upgrade just my account.
(09:48):
So I'm sure it's something they're thinking about. Actually, another quick sidebar about usability. You were mentioning this before we started recording. Tell our listeners a little bit about what you were saying about the naming of these models and just how kind of ridiculous it is, specifically with OpenAI. I don't know that Google's a whole lot better, but what were your thoughts about that?
Mallory (10:04):
Oh well, I was telling Amit that I saw a post from Mike Kaput from the Marketing AI Institute on LinkedIn where he initially pointed this out, that within the OpenAI world of models, all of them are named with the most horrible conventions, like o1, o3-mini, o3-mini medium, high, mini high, and, honestly, how would anybody know what those do?
(10:26):
I feel like even you and I, talking about this all the time on the podcast, I'm a bit confused now on what each of those individual models does. So I've got to point that out, that I think they could do a better job with naming the models.
Amith (10:39):
Yeah, totally. And as a side note, for those of you that are really interested in marketing content, we obviously cover a lot of it here, but we love the guys at the Marketing AI Institute. As a disclosure, we were actually their lead investor in their seed round a couple of years ago, so we like their business a lot too, but we really think their content is extraordinary. So you should check them out, in addition to continuing to check out all the stuff Sidecar does.
(11:04):
But my comment would be actually pretty simple. I'd love to shoot a note to Sam Altman and say, hey, you guys have this really cool thing called custom GPTs. Why don't you create one called Naming Agent and see if you can get some help, because it's quite the mess? So, not that I'm good at naming, I'm terrible at it, but it's tough. It's tough to get it right. But I can see that also, at the speed they're moving, it's actually even more important to step back and say, hey, what are some of the products we're
(11:26):
going to release over the next 12, 24 months and what's the brand architecture for how these things fit together? There's got to be a better way to do that. So, in any event, coming back to your actual question, I do believe that this is worth really paying attention to, because Deep Research has the facility to really dig in and go
(11:46):
deeper, as the name implies. And since, of course, by the way, they couldn't come up with their own name for this, they just copied Google's Deep Research. They could have at least made it funny and said Deeper Research or something like that. So, in any event, coming back to the question, what is novel about this is the depth and the amount of compute that it spends. So Google's Deep Research tool, when I used it, was pretty good
(12:10):
actually, I'd recommend it, and it's freely accessible to people with a Google account. But it only goes so far. I think it limits itself to 10 or 15 different sources and then it kind of stops computing, and you can ask it to do more, but then it kind of starts from scratch. It's not really incrementally going. I have not played with Deep Research from OpenAI yet, but from what I've seen from Ethan Mollick and others, it does
(12:31):
appear like it's going far, far beyond that, hundreds of sources, and really crunching the numbers a lot better. So I'm quite excited by that. I'd love to throw some deeper market research questions at this tool. As an example, we are constantly brainstorming what are some of the best categories that we could introduce new AI agents into
(12:53):
within this particular vertical. So the question I had asked Deep Research, maybe 30 days ago, whenever it was: I want you to study all of the available data on labor efficiency in the association market. Look at any reports, look at 990 data, look at anything that you can get your hands on, and give me categories where there appear to be choke points, where, in essence, there are excess
(13:16):
amounts of labor investment relative to the work that's being output. And it did a pretty good job. It identified choke points around event planning. It identified choke points around member communications, which are clearly areas where AI, agentic AI specifically, can be very valuable. But, in any event, I think tools like this are very interesting
(13:37):
to go a lot deeper and to do a lot of the homework. Now you couple deep research with the ability to take actions and you start getting into some really interesting combinations of things. I know that OpenAI claimed, maybe it was Sam Altman, maybe it was someone else, that at least 1% of all economically valuable activity could be performed by Deep Research.
(13:59):
So that's an interesting statement. On the one hand, it's an unimpressive percentage, but on the other hand, it's 1% of global GDP, effectively, because most of that is labor, or a large percentage is labor. So, in any event, I guess the point I would make is I think this is yet another step in that endless progression, seemingly, of advancements we're getting week by week, and people need to
(14:21):
pay attention to this because it really steps up what you can do with just a single request to an AI.
Mallory (14:27):
Mm-hmm. And I mentioned that this Deep Research is powered by the o3 model with the confusing naming convention, and you mentioned before the pod, Amit, that o3-mini is actually available for free to all users.
Amith (14:41):
That's right. Yeah, so I believe this is clearly in response to DeepSeek being so well received globally. If you missed the earlier pod or haven't heard much about it (other than, you know, I don't know where you've been living for the last week), DeepSeek is a model from a Chinese startup and has performance very similar to the OpenAI o1 model. And so, not to be outdone by anyone,
(15:04):
OpenAI said this week, or actually late last week, that they're going to give away o3-mini to any user on the free tier of ChatGPT, and they also made o3-mini available through the API for a very reasonable cost. So clearly it's a competitive reaction, because I think their normal pattern is to make their best model available only to the
(15:25):
most premium tier and then bring it down over time. But clearly there's a lot of competitive pressure. So also, I think it was two days ago, or maybe it was yesterday, that Google released Gemini 2.0 Pro, which has some very advanced reasoning capabilities of its own. So competition is heating up. That's exciting. That's good for everyone on the consumer side of this.
Mallory (15:45):
I went down a bit of a rabbit hole with this Humanity's Last Exam test, because I had never heard of it, and I started looking up example questions, which I highly recommend that you, Amit, do if you haven't, and all of our listeners, because you will be shocked at how some of these questions are phrased and how difficult they sound. But since OpenAI achieved, at this point, the highest score that we've ever seen on this exam, which was only a 27%, but I
(16:07):
think was more than 10 percentage points above the second-place ranking, does this mean that this agent model is the most powerful reasoning model that we've seen thus far?
Amith (16:20):
I think that's a pretty clear and fair statement that, at the moment, o3, to the extent that we're aware, is the best reasoning model that's out there. So if you need the very best model with reasoning skills for a specific problem you're working on, definitely give o3-mini a chance. I would say, you know, I wouldn't really pay too much attention to that, though, because very quickly after o3-mini, you're
(16:43):
going to get something from Meta. They're due for a new release. I mean, it's been since December since they released something big, which was a dot release, Llama 3.3, which was a big deal, but the training process for Llama 4 has been underway for some time. I would be shocked if there wasn't at least a reasoning mode, if not a special reasoning model, coming from Meta as well
(17:04):
that they'll open source. So we're going to see a lot of cool stuff, and it seems to be accelerating, not staying the same. So if you didn't like last year's speed of innovation, this year, it seems, is going to be even faster.
My bottom line, though, that I keep advising people on is, you know, it's important to stay aware of what's happening in
(17:27):
terms of new tools, like Deep Research, or new models like o3-mini and, obviously, DeepSeek. But it's more important to look at the broader arc of what's happening, because there's so much churn in all these models that, you know, you think you're up to speed and then you're not. And so the more important thing is to look at where things are going, and that broader arc I'm referring to is saying, okay,
(17:49):
what am I actually trying to get done? What's my business goal? Why am I trying to do this, and what are the obstacles? Right? So break down the problem you're trying to solve into smaller chunks and say, okay, well, which models can solve each of these things? And you may not need the most high-end model for a lot of your work. You know, we find actually, for example, within our
(18:10):
Skip AI data analyst agent, that much of the work can be done by lower-end models. There are certain pieces of the work that Skip does that require higher-end models, but not most of it, and that's a very complex piece of software. So in most of the business cases I see out there, you can actually solve a large percentage of your problems with something like a Llama 3.3 or with GPT-4o, and you don't
(18:35):
even need the reasoning models. So that's the main thing I keep pushing on for folks that are kind of really just looking at this in terms of adoption. You know, you might be interested in what's the latest, but really you need to adopt what we have today and get that pushed into your workflows. Actually adopt these tools, you know, in your day-to-day work as part of your business process, not just an extra thing on the
(18:57):
side. Right, and I think that's what 2025 is going to be about. Most people in 2024 were just starting to experiment. In 2025, what I'm hopeful for is that people will actually deeply integrate these tools into their workflow, and so for that, it's important to choose the model right and to say, okay, for this particular step in our customer service workflow,
(19:18):
we are going to use o1 or o3 or GPT-4. You want to do that thinking, because that will give you consistent results. If you just let your different customer service agents or member service agents use whichever tool they want, not only is that a questionable thing from a policy viewpoint, from a cybersecurity viewpoint, but it also is going to produce different results.
(19:39):
But it's not a one-size-fits-all thing. So you don't need to say, hey, everyone needs, you know, GPT-4o, or everyone needs o3, or everyone needs the Pro $200 license, or maybe you like Anthropic better, so everyone needs Claude 3.5 Sonnet, or everyone needs whatever. There are a number of different tools that you can use for different things, you know. So some things require a pickup truck, some things require a
(20:01):
sports car, and for some things that you're trying to do, it's just fine to have a Camry.
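As a rough illustration of the per-step model selection described above, here is a minimal sketch in Python. The step names, model labels, and the call_llm helper are hypothetical placeholders rather than any specific vendor's API; the point is only that each workflow step maps to one deliberately chosen model instead of whatever tool an individual staff member prefers.

```python
# Minimal sketch: route each member-service workflow step to a chosen model
# tier so results stay consistent across staff. Step names, model labels,
# and call_llm are hypothetical placeholders, not a real provider SDK.

MODEL_BY_STEP = {
    "classify_member_inquiry": "small-fast-model",          # high volume, cheap (the "Camry")
    "draft_routine_reply": "general-purpose-model",         # solid default (the "pickup truck")
    "analyze_complex_policy_question": "reasoning-model",   # rare, hard cases (the "sports car")
}

DEFAULT_MODEL = "general-purpose-model"


def call_llm(model: str, prompt: str) -> str:
    """Stand-in for whichever provider SDK the organization standardizes on."""
    return f"[{model}] response to: {prompt}"


def run_step(step: str, prompt: str) -> str:
    # Fall back to a sanctioned default rather than letting each person pick
    # a tool ad hoc, which the episode flags as a policy and consistency risk.
    model = MODEL_BY_STEP.get(step, DEFAULT_MODEL)
    return call_llm(model, prompt)


if __name__ == "__main__":
    print(run_step("classify_member_inquiry", "Member asks how to renew their dues."))
```

The routing table is set once as policy, so swapping in a cheaper or stronger model later is a one-line change rather than a retraining of staff habits.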
Mallory (20:07):
Yeah, and you've mentioned the idea, too, on the pod, Amith, of AI models becoming a commodity, where we can kind of pick whichever one we want in the future and they'll all kind of do roughly the same thing, maybe with some distinctions and features and functionality. But what you're saying is that right now, an organization might say, we're going to run with this, we love this, we're going to integrate it into our
(20:32):
association, and then, bam, a few weeks later, OpenAI releases a more powerful version. That must be... it's hard to navigate, for sure, as a leader.
Amith (20:40):
Yeah, exactly, and I think that, again, it's important to be aware of what's going on. I don't think, unless you're like us and you're just super into this stuff and you want to go play with everything to really get hands-on, you know, insights, and for us and the work that we do, obviously it's important for us to go do that, because we're, you know, asked lots of questions about these things. That's part of our job. But for most business folks that are out there trying to
(21:03):
make sense of this and decide what to do, it doesn't make sense to go and try every single tool that comes out. It's too frenetic of a pace, and the value to you as a business leader isn't that high in trying everything. I don't think you should try nothing either. So it's not a license to kind of sit around and twiddle your thumbs, but rather I would say be selective, and once a month or once every couple of weeks, do try something different.
(21:25):
But pick the tools that you want to drive deeply into your business process. Don't be happy with the idea that we're just experimenting still. Figure out some key process. And the way I sniff these things out in organizations that I'm advising is I look for choke points. I look for pain, where they're having a hard time, and I look for choke points, which is usually because not
(21:46):
enough labor, not enough people are available who know how to do a certain thing. Find that thing and then figure out how to codify it in an AI-first, or at least an AI-enabled, process. And then that's where the model selection and the tool selection becomes quite important, because you want to make sure the vendor you're working with is one you trust, because you want to be able to put sensitive data into these tools. You know, a lot of people have had this very generic blanket
(22:07):
statement saying thou shalt not put confidential data into AI. And initially, when ChatGPT first came out, that was an absolutely good thing to tell people for like the first six months because, first of all, ChatGPT was the only game in town for a while in terms of a broadly available tool and, secondly, there was zero protection whatsoever either,
(22:28):
you know, whether you believe OpenAI is a good company or not. Just, like, in their terms of service, they literally said they could use your content for model training. Now that changed and OpenAI shifted gears, and in the paid version and the Teams version, the Pro version, you can opt out of that. And in fact, in our Team setting, one of the reasons we do that is we set the flag that says you may
(22:49):
not use any of our content from anything for any purposes across our whole organization. We can set that policy once. And a lot of other companies, just by policy, do not do that at all. Like Anthropic, for example, doesn't use conversations to do model training. They use the feedback in terms of good or bad, but they don't use the content itself. So you have to pick the vendor you're comfortable with, because
(23:11):
if you limit yourself to only public domain information and you're not willing to put your confidential material in at all, the use cases narrow down quite quickly. So that's when it gets really important, when you want to essentially take these things from the lab into production mode. You have to have vendors that you can really rely on in terms
(23:31):
of quality and consistency, but also security and safety, and that's a very important decision. And again, it doesn't necessarily have to be one vendor, but you have to know what your go-tos are and what the models are. We've talked a lot on this show about Groq and how they have really rapid inference. But the other thing we love about those guys is their commitment to security. They have a number of models to choose from.
(23:52):
They're all open source models. They've actually taken DeepSeek and distilled it with Llama 3.3, and they have a very interesting reasoning offering too, and they're big on security. And with their data centers, you can choose to inference your workloads here in the United States; they have offshore data centers as well.
Mallory (24:22):
But that's one of many companies, right, that are options. People just think OpenAI, and that's as far as their brains go. Of course, that's beneficial for OpenAI as the first mover. Moving on to our second topic, the Jevons paradox: that occurs when technological advancements increase the efficiency of resource use, but instead of reducing overall consumption, it leads to an increase in demand and total resource utilization.
(24:42):
This concept was first observed and described by English economist William Stanley Jevons in 1865 when studying coal consumption patterns during the Industrial Revolution, which I did not know prior to this podcast. The paradox operates through two main channels. One is that improved efficiency lowers the relative cost of using a resource, which increases the quantity demanded, and the
(25:05):
second one is that efficiency gains increase real incomes and accelerate economic growth, further driving up resource demand. Now, the paradox has been observed in various sectors, and sometimes it helps to provide real-world examples to understand it. Within energy, despite improvements in fuel efficiency, gasoline demand has not decreased as expected.
(25:26):
Instead, people have opted for larger vehicles and increased their driving distances. We see it in home energy as well: more efficient HVAC systems and windows have led to larger homes rather than reduced energy consumption. And then lighting: the introduction of energy-efficient LED bulbs has resulted in more widespread use of lighting, offsetting the potential energy savings.
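To make that rebound effect concrete, here is a toy calculation in the spirit of the LED example. All numbers are invented for illustration, not measurements; the elasticity parameter simply controls how strongly cheaper, more efficient lighting induces extra use.

```python
# Toy Jevons/rebound illustration: energy used for lighting after an
# efficiency gain, when cheaper light induces more consumption.
# All numbers are invented for illustration only.

def total_energy(base_use: float, efficiency_gain: float, demand_elasticity: float) -> float:
    """Energy consumed after the efficiency gain.

    efficiency_gain=0.5 means each unit of light needs 50% less energy;
    demand_elasticity scales how much the lower effective cost boosts demand.
    """
    energy_per_unit = 1.0 - efficiency_gain
    induced_demand = base_use * (1.0 + demand_elasticity * efficiency_gain)
    return energy_per_unit * induced_demand


baseline = total_energy(100, efficiency_gain=0.0, demand_elasticity=0.0)        # 100.0
modest_rebound = total_energy(100, efficiency_gain=0.5, demand_elasticity=1.0)  # 75.0: some savings survive
full_backfire = total_energy(100, efficiency_gain=0.5, demand_elasticity=2.5)   # 112.5: extra demand swamps the gain

print(baseline, modest_rebound, full_backfire)
```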
(25:48):
So what are the implications here? I think there are some good, some bad. On the bad side, obviously, there are environmental concerns. The paradox challenges the notion that technological efficiency alone can solve environmental issues. It suggests that efficiency gains must be coupled with other measures to achieve sustainability goals. But on the good side, on economic growth, while
(26:10):
potentially problematic for resource conservation, we see that the paradox can drive economic expansion by making resources more accessible and more affordable. So why are we talking about it on this pod? Well, we know that it remains highly relevant today, particularly in discussions around energy efficiency, sustainability and emerging technologies like AI. Of course, as AI makes processes more efficient, it may
(26:33):
paradoxically lead to increased demand for those processes, which potentially drives up resource consumption in unexpected ways. I was not familiar with the Jevons paradox until the last few weeks, so this has been a learning experience for me as well, like with, I'm sure, a few of our listeners. Amit, why do you think the term has become popular recently?
Amith (26:55):
So, Mallory, I think that when we're heading into the unknown and we have this unbelievable opportunity in front of us, but it's unclear what it means, it's helpful, and sometimes it's instructive, to look back at similar patterns that we've seen. They may have a different magnitude, but they have similar patterns in the past. The ones you provided examples on, I think, are really good.
(27:16):
Another example is just computing in general. The cost of computing has dramatically lowered year after year after year, and that's driven an enormous increase in demand. The broader pattern is, when a resource is scarce, its consumption is necessarily limited, and the ultimate resource we have is human intellect.
(27:39):
That's the ultimate resource we have as a species, more so than anything else. So up until now, if we wanted to create more intelligence, we needed to create more humans. Pretty simple, right? And then you had to spend 18 years or longer, 22 years in some societies, training these people to do something from birth through when they enter the workforce.
(28:01):
And then you have more intelligence, and then it takes maybe another five years or 10 years or whatever to get them into the mainstream of their career, and on and on. So it's a long-tail process, and obviously you have to, you know, get people to have kids. So there's a lot involved there. Now, you know, the whole reason this is so crazy is, if we really are on the cusp of unlocking human-level, or perhaps better,
(28:25):
intelligence on average. I think some could argue that with o3-mini and models like that, which are performing better on, like, PhD-level exams than 90% of PhDs, we're kind of there in some ways. But the point is, what in the world does this mean? Right, like, this has implications in every aspect of
(28:45):
society, of the economy. It's going to affect politics, it's going to affect entertainment, it's going to affect our lives and the way we raise our kids. So it's going to affect everything. So I think this has become a more popular phrase recently because it's instructive with regards to the pattern, and I think that here, with AI, we're taking a resource that's extremely scarce, which is human intellect, particularly in
(29:06):
subdomains. You say, oh well, accounting, people entering the accounting profession. A lot of our friends in the association community are in the accounting world, and they are talking broadly about how difficult it is to recruit new accountants into school, into the program and then into the CPA world, and that's a great example of scarcity. You have a scarce resource that's driving up costs and
(29:28):
making availability lesser than you'd want in terms of meeting the market's needs, and potentially AI can help solve for that. So I think the broader idea that is so incredibly compelling is the lower cost. So when we say, hey, what's the resource? Is the resource legal advice, medical care, whatever it may be?
(29:50):
If the cost can be lowered sufficiently so that it's essentially abundant and available for everyone, that's, of course, a massive quality-of-life improvement for everyone on planet Earth. And that's really the most compelling, the most exciting thing, because a lot of resource constraints have not only led to industries having challenges, but they've led to geopolitical
(30:10):
tensions and sometimes armed conflicts, right? So if we can solve for a lot of these resource constraints and switch from a scarcity mindset to an abundance mindset, there's something really exciting there. The other thing is the compounding factor. In the talks that I give on the exponential era, I talk a lot about the convergence of multiple distinct exponential
(30:32):
curves. Compute is one of them. AI is on another one that's, you know, at a completely different level in terms of its speed. But we also have these curves happening in material science, innovation and energy and a number of other areas, and so, as they compound together, that creates this abundance scenario. That's quite exciting. What I would tell you, too, is that I think AI is, you know,
(30:54):
this provides us theoretically unlimited intellect on tap to go solve the world's problems. You know, many of the world's problems are materials problems. Many of the world's problems are energy problems, right, and so those are things that get exciting. So, coming back to the Jevons paradox, if we say, look, right now we know that more energy consumption, for example, is fundamentally a concern, because we have a limited amount of it,
(31:17):
for one, and on top of that, we know that, generally speaking, when we consume more energy, it's causing problems for the climate. If we can solve for that, right, if we can create an abundance of clean energy over time, with AI helping a whole bunch, and the materials discovery, the fundamental science, whether it's fusion or SMRs, or if it's
(31:37):
better solar or better battery technology, all of these innovations in the physical world, coupled with AI... So, Mallory, just a quick comment that's related to this, at the macro level. One of the ways to study this is to think about global gross domestic product, or the aggregate gross domestic product from all nations across the planet over a period of time,
(31:57):
and so the short version of the story that I tell in much more detail when I give keynote talks on exponentials and AI and associations is that it took essentially the course of all of human history to get to the equivalent of, in today's dollars, a trillion dollars in global GDP, and that was through about the mid-1700s.
(32:18):
Then the Industrial Revolution happened, and for about 200 years we had a 10x increase in global GDP. Again, this is normalized. It's on a real basis, meaning an inflation-adjusted currency. So we went from a trillion in the 1700s to, around 1950, 10 trillion in global GDP, and then, from the 1950s-ish through
(32:39):
the early 2010s, we went from 10 to 100. And that was, of course, the computing revolution. And so what's happened here is exactly what you started this segment describing, which is that something that has gone from being incredibly scarce to being incredibly abundant drives massive increases in consumption, which is actually fundamentally exciting, because there's a lot of disparity in
(32:59):
quality of life and there's a lot of disparity in terms of access to services, like healthcare being one, and many others, that I think AI is going to have a big effect on. So there's, I mean, every industry becomes a growth industry in a sense.
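As a back-of-the-envelope check on those milestones (roughly $1 trillion by the mid-1700s, $10 trillion around 1950, $100 trillion by the early 2010s, all in inflation-adjusted dollars as stated above), the implied compound annual growth rates can be computed directly. The start and end years below are rounded approximations chosen only to illustrate that each 10x step happened over a much shorter window than the last.

```python
# Implied compound annual growth rate for each 10x step in global GDP
# described above. Years are rounded approximations for illustration.

def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate over the period."""
    return (end_value / start_value) ** (1.0 / years) - 1.0


industrial_era = cagr(1e12, 1e13, 1950 - 1750)  # 10x over ~200 years -> roughly 1.2% per year
computing_era = cagr(1e13, 1e14, 2012 - 1950)   # 10x over ~60 years  -> roughly 3.8% per year

print(f"Industrial era: {industrial_era:.1%} per year")
print(f"Computing era:  {computing_era:.1%} per year")
```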
Of course, there's all these questions about, like, well, with unlimited intellect on tap through AI, what does that mean for most of us? What do we do day to day? And I think there's both the scary side of that and then
(33:22):
there's the exciting part of that. It's like, you know, how do you add value on top of the fundamental compute that's happening, that really does represent real intelligence? So that's why I think this has become a bigger conversation recently. Plus, it's just kind of a fun term to throw around at a cocktail party.
Mallory (33:37):
Jevons paradox. I'm going to start using that one at the next party I go to. I feel like I've seen it in reference to DeepSeek's release and kind of like, oh, is there fear that one company or one model is going to destroy all the rest? But the idea with the Jevons paradox is, well, no, as we keep progressing and innovating, the demand will go up as well. So I think that's interesting. I have concerns on the consumption side, like just
(34:03):
humans as endless consumers. I like your positive spin on it in terms of abundance, and obviously the environmental concerns. I just think this is a really interesting paradox. But I like what you said about the converging exponential curves, so maybe we can hope that there's a chance that energy can kind of keep up with AI innovation.
Amith (34:23):
Well, I think that, you know, when we think about the behavior and the decision making of each individual person, and we say, well, that individual is, theoretically, at least when they're acting rationally, going to behave in their own self-interest. That's not always actually true, but in general there's some truth to that statement. And so, in their own self-interest, what are they going to do? They're going to ensure that their basic needs are met, for
(34:44):
themselves, their family, and then go higher up the hierarchy of needs, essentially. And of course they're thinking, especially as they get to the higher levels of that conversation, what does this mean for society, for the world? Am I being a good citizen and all that? But a very large percentage of the world isn't thinking at that higher-order level. So when they get access to services they haven't had access to, like the ones we're talking about, it's life-changing.
(35:07):
It literally can be life-saving as well in some cases. So I think that has to be our priority. I'm not saying to hell with the environment at all, by the way. I'm extremely concerned about it. But I also feel that we're not moving fast enough. Even with all the brilliant minds that are deeply committed to solving climate problems, I don't know that we're moving fast enough to solve them without a lot of help from our friend AI. So that's where I get excited, is the kind of stuff we talk
(35:30):
about in this pod. You know, in the material science realm, in terms of, you know, fundamental physics discoveries, things that we generally believe we're very close to unlocking with AI are going to be fundamental game changers in making things like energy, for example, effectively infinite, abundant and very low cost. And if we can do that, we can solve a lot of other problems
(35:50):
downstream, because if you solve energy, everything else is fixable, right? You can solve water, you can solve materials, you can solve a lot of other problems if you can solve for energy. So in the next 20 years, I think there's a very strong chance we can do that if we use a lot of AI. If we don't use a lot of AI, sure, I mean, humans are amazing, we could solve it anyway, even if we take away compute, but I
(36:14):
think it's much more likely that we come up with incredible solutions sooner and better and more affordably if we use a lot of AI. That's really the point I'm trying to make.
You know, the one thing I wanted to pivot on before we wrap up on this whole Jevons paradox conversation is another implication for associations to be thinking about, which is their own products and services. So associations live in an environment where historically
(36:34):
they have been number one, the center of the universe. It's kind of nice to be an association in the old-school way, because you're the only game in town for the content and the community you provide. Of course, that's no longer the case, but associations often do benefit from major brand strength as well as content and product strength in their fields. So the question is, okay, well, if great information and great
(36:56):
connectivity is abundant and available anywhere, does that disintermediate the association from the content and from the community and make them less relevant? Or can the association find a way to jump on the Jevons paradox and find ways to increase the abundance of their own content and products and services in the market? And there's lots of ways to think about that strategically.
(37:18):
But be part of that revolution, be part of that abundance mindset, versus being displaced by it, because there's abundant high-quality stuff out there. I'll give you a great example. Let's say I'm in a particular branch of medicine and I have great content, I have great learning, I have a great community, but let's just say, in this particular example, I'm
(37:42):
not embracing AI, and so my tools are kind of old school. I have online learning, but it's the same stuff we've had for 20 years, right, or at least 10. I have content, but it's, you know, again, not particularly easy to navigate, it's hard to search, and it's all the usual stuff that people have challenges with. And maybe it's the best content, it might be unbelievable, great people and great content, but it's just kind of hard to deal
(38:04):
with, it's kind of hard to get to, and I haven't put AI on top of it, so I haven't, you know, made it accessible in the way people are quickly becoming accustomed to. Well, enter o3-mini, brand new and totally free and operating at better than PhD levels across not just your discipline, but all disciplines. So what's going to happen if someone has a question that's in your field and they
(38:25):
can get a really, really excellent answer from o3? And even if it's not using your content, right, it's just using public domain content, they're going to go there, because it's not only free but, more importantly, it's low friction. So my point is that, you know, that overabundance scenario and that Jevons paradox impact is going to affect the economics of what you do.
(38:47):
Just be part of it. It's an opportunity, and the way to be part of it is obviously to embrace AI, but then to think of ways to segment and differentiate different tiers of value creation. So we talk a lot about this in the book, in Ascend, and we talk about this a little bit in the AI Learning Hub content, in the strategy course. But just the basics of having content, that by itself probably
(39:09):
isn't going to be a competitive differentiator for you, certainly in five years, probably not even today. But how do you layer value on top of that? How do you monetize it when people are going deeper? Right, but then how do you reach out to people and create this ecosystem that's far broader than you've been able to? So it requires a new frame, a new lens, if you will. I find that really exciting.
(39:30):
I also think it's going to crush a lot of the associations that are in the space, sadly, because many of them are not moving quickly enough. I talk to a lot of folks who still tell me today, when I ask them, hey, what's going on in your organization with AI, they're like, oh no, we're pretty much on top of it. We have three people, out of like 100 plus usually, that have been experimenting with ChatGPT, and I'm like, hey, that's cool, like I'm glad you haven't, like, blocked it, but what are
(39:52):
you really doing? You know, so it's 2025, guys, we've got to get moving on this.
Mallory (39:56):
Yeah, yeah. I think, like Amit said, we're seeing the Jevons paradox play out in the world, and I think we're at a really interesting, pivotal point where we can watch it play out in associations as well, where you can ride the wave of abundance or potentially be crushed by it. So you're already taking a step in the right direction by listening to the Sidecar Sync pod, and we will see you all
(40:18):
next week.
Amith (40:20):
Thanks for tuning into Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the
(40:43):
association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.