Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Amith (00:00):
People are going to figure out how to stretch the boundaries of what this thing can do.
It'll be used for all sorts of applications.
Imagine a laptop that never has to be plugged in.
Imagine a phone that just works forever.
Welcome to Sidecar Sync, your weekly dose of innovation.
If you're looking for the latest news, insights and developments in the association world, especially those driven
(00:22):
by artificial intelligence, you're in the right place.
We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future.
No fluff, just facts and informed discussions.
I'm Amith Nagarajan, chairman of Blue Cypress, and I'm your host.
Greetings and welcome to the Sidecar Sync, your home for all
(00:46):
things association plus artificial intelligence.
My name is Amith Nagarajan.
Mallory (00:52):
And my name is Mallory
Mejias.
Amith (00:54):
And we are your hosts, and we have your first official 2025 Sidecar Sync episode here for you today.
This is the first episode Mallory and I are recording in 2025, is what I mean by that.
We did air something this week, but we did record it last year, so this is our first time recording this year, and we'll
(01:14):
get into all sorts of cool stuff.
We've got some great content for you.
Before we dive into that content, though, let's just take a moment to hear from our sponsor.
Ad V/O (01:24):
Introducing the newly revamped AI Learning Hub, your comprehensive library of self-paced courses designed specifically for association professionals.
We've just updated all our content with fresh material covering everything from AI prompting and marketing to events, education, data strategy, AI agents and more.
(01:45):
Through the Learning Hub, you can earn your Association AI Professional Certification, recognizing your expertise in applying AI specifically to association challenges and operations.
Connect with AI experts during weekly office hours and join a growing community of association professionals who are transforming their organizations through AI.
(02:06):
Sign up as an individual or get unlimited access for your entire team at one flat rate.
Start your AI journey today at learn.sidecarglobal.com.
Mallory (02:20):
Amith, the more and more we meet and do this podcast, I realize we should probably start recording right when you and I hop on the call, because we tend to have really interesting convos right before we press record.
Just now, we were talking about, oh, this is a scary statement, that Digital Now is 10 months away, and talking about how is this possible.
I feel like I just did Digital Now in 2024, but talking about
(02:43):
how life passes by so quickly.
But you kind of have a factual theory behind that, if you want to share.
Amith (02:50):
Yeah, I have my perception on it anyway.
So, first of all, actually, on Digital Now, yeah, it's 10 months away, less than 10 months away.
10 months from now, Digital Now 2025 will be over.
So what are we doing, November 2nd through 5th, is it, Mallory?
Yes, yeah.
And it's in Chicago at the Loews Hotel, right?
Yes, indeed. I've never been in that property.
(03:10):
I hear it's really cool.
If you're Chicago-based, you probably know the property.
It's really, really nice.
So can't wait for that. That's, you know, not even 10 months away, and I'm sure we'll have amazing community conversations.
We definitely will have some great keynotes, we'll do some exciting things around town, so I'm pumped about it.
I love Chicago.
Chicago is one of my favorite cities.
(03:33):
But talking about time zooming by, it's interesting, because the holidays are always a time, at least, where there's an opportunity for some reflection.
I don't always partake in reflection.
I tend to go, go, go, but sometimes I do, particularly when I'm on a chairlift.
I'm a big skier, and if I'm sitting on the chairlift, especially if I'm flying solo on a ski run or two, I tend to
(03:56):
just kind of meditate or think and look around and enjoy the natural beauty, and sometimes, you know, I zoom back out.
I'm like, what's going on in the world and in my life and all that.
And to your earlier point about time zooming by, I don't know, in my mind for maybe the last few years I've had this theory of the universe and my own simplistic way of thinking about it, which is, I think there's like a denominator
(04:17):
of time and a numerator of time.
If you think about your perception of the speed of time, you kind of make a fraction out of it and say, well, how long have you been alive and how much time is passing by.
So if you're 10 years old and you're talking about the next year of your life, it's 10% of your lifespan, and so for a
(04:38):
10-year-old, a year seems like a really long time, and then for an old guy like me it feels like a tiny fraction of time, a tiny sliver of time.
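A minimal sketch of that fraction-of-life idea, purely for illustration: the perceived weight of the next year is modeled here as one divided by current age. The function name and the model itself are an assumption for the example, not anything stated in the episode.

```python
# Toy model of the "numerator/denominator" theory of perceived time:
# the next year feels like (1 year) / (years already lived).
def perceived_year_fraction(age_years: float) -> float:
    """Fraction of a lifetime that the next year represents (hypothetical model)."""
    return 1.0 / age_years

for age in (10, 25, 50):
    print(f"At age {age}, the next year feels like {perceived_year_fraction(age):.0%} of life so far")
# At age 10, the next year feels like 10% of life so far
# At age 25, the next year feels like 4% of life so far
# At age 50, the next year feels like 2% of life so far
```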
So it's kind of funny, because I do think that is definitely how our perceptions reflect reality.
All of us experience time, you know, in actual terms, of course, in the same way.
Otherwise the universe would be really screwed up if we all actually experienced time differently.
(04:58):
But our perceptions, I think, reflect that.
And so for me, as I look ahead and I say, well, this year is going to zoom right by, I think that's part of it, I don't know.
What do you think of all that?
Mallory (05:08):
I think that makes a
ton of sense.
I like how you strategically did not include your own denominator in your example, fair point.
Amith (05:14):
It's a very large number
at this point.
Mallory (05:16):
I think that makes sense, and what I was getting into before this call with you, Amith, was sometimes life seems like it's passing by so quickly that we are kind of just, like, viewers of our own lives, or that things are happening so quickly we're not necessarily active participants, and I think that probably relates to what a lot of our listeners feel in terms of workloads and especially, given all this AI
(05:38):
stuff, like it's happening so quickly.
How do you become kind of an active participant in that?
How do you slow things down?
I don't know that we have the answers to that, but that's where my mind goes.
Amith (05:48):
Well, you know, there's a lot going on all the time, every day, every week.
This week, CES is going on in Vegas, which is a big, big event.
It historically has been around consumer electronics, but more and more it's become kind of the kickoff to what's happening in AI at the beginning of the year, and Jensen Huang, who's the founder and CEO of NVIDIA, someone who's done a lot of
(06:10):
amazing things.
One of the things that happened while he was being interviewed, I believe, on stage at CES: someone asked him why he does not wear a wristwatch, and I don't know if you know this quote, Mallory.
He said the reason he doesn't wear a wristwatch is because the most important time is now.
That's pretty deep, right? It centers on the idea of being present
(06:31):
and kind of experiencing things and kind of noticing more, right?
So it's kind of, if you close your eyes, you hear more.
If you really take time to breathe, things work a little
(06:54):
bit differently.
So I find that to be interesting.
I don't do nearly enough of that in my own life.
I have a meditation practice every morning, but I do it sometimes, I don't do it sometimes, so I'd like to get better at it this year.
But I think it's interesting because the time to be present, to reflect, to think more is, in fact, a really interesting aspect of what's happening in the world of artificial
(07:15):
intelligence research right now, because people are giving these models an opportunity to actually, you know, step back and take a deep breath and say, you know, am I really completely hallucinating or am I doing something that makes sense?
So I think that's also, it ties into our world of AI for sure.
Mallory (07:33):
Yeah, I like that.
I like the idea of being present. I have my own goals.
It feels like every quarter of my life I'm like, I'm going to get better at meditating, and then I struggle with it.
But I will say something that is seeming to have worked for me, and maybe you've tried this, Amith, but I'm using a journal.
I think the brand is called Best Self, but it's essentially quarterly priorities, but for your life.
(07:53):
So it's kind of like having OKRs or top priorities, daily, weekly, and then you kind of analyze them on a quarterly basis, and that is seeming to resonate with me pretty well.
Because I'm so used to that concept, at least in work, I'm taking it and applying it to my own life, which has been fun.
Amith (08:08):
That's fantastic.
I mean, it's the old adage about priorities or goals.
If you just assert a goal to yourself, there's a certain level of probability that you'll achieve it, which is obviously variable by individual.
And then if you write it down, though, compared to just
(08:30):
asserting it, you increase your likelihood of achieving it.
And then if you share it with other people, then you're even more likely to achieve it.
And then if you put a post-it note or some other way of remembering it and put it somewhere where you can't ignore it, like, you know, you take a goal and you write it on a post-it note and you stick it on your bathroom mirror so you see it every single day, multiple times a day, right, then it reinforces.
It reinforces that you said this was important to you.
(08:51):
So writing a goal down, I think, by itself is magic in a way, and very few people do that.
They just say, oh, I want to get better at X, Y and Z, and then do nothing about it.
So I think it's just a very simple system of reinforcement like that, and I get away from that from time to time and I come back to it, and then I, you know, try to do some of those same things that I talk about, but it's very powerful.
So kudos to you for getting a journal like that started.
(09:12):
That's awesome.
Mallory (09:13):
Well, we'll see how it goes.
I'll report back at the end of the quarter and let you know my percent attainment of my goals, but I really like the quote you said, Amith.
Which is, the most important moment is now.
Is that the quote?
Amith (09:26):
Yeah, that's the paraphrasing, I think.
Jensen Huang said the reason he doesn't have a wristwatch on is because the most important time is now.
Makes sense to me.
Mallory (09:59):
Ethan and I for a couple weeks now.
We're excited to talk with you all about it.
And then the second topic is a little different.
We're talking about diamond batteries, and if you want to know what that means, you've got to stay tuned to find out.
So, first and foremost, Gemini Deep Research is an AI-powered research assistant developed by Google, available to Gemini Advanced subscribers for $20 a month at the time of the recording of this episode.
(10:19):
It transforms how users interact with information by autonomously exploring a vast range of sources, analyzing relevant data and synthesizing findings into comprehensive multi-page reports within just a few minutes.
So kind of the step-by-step on how this works is: a user submits a research question or topic through the Gemini interface.
(10:41):
Gemini then creates a multi-step research plan, which the user can review and approve, or even modify.
Once approved, Gemini begins searching the internet using Google's search framework to find relevant information on the topic.
The AI refines its analysis over several minutes, mimicking human research behavior by searching, identifying
(11:01):
interesting information and initiating new searches based on what it learns.
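For readers who want to picture that loop in code, here is a minimal sketch of an agentic research cycle of that shape (plan, search, read, refine, repeat, then synthesize). The `ResearchState` type and the `web_search`, `summarize`, `propose_followups` and `write_report` helpers are hypothetical stand-ins, not Gemini's actual API.

```python
# Hypothetical sketch of a deep-research-style agent loop, not Google's implementation.
from dataclasses import dataclass, field

@dataclass
class ResearchState:
    question: str
    plan: list[str]                                   # steps the user reviewed and approved
    findings: list[str] = field(default_factory=list)
    pending_queries: list[str] = field(default_factory=list)

def run_deep_research(state: ResearchState, max_rounds: int = 5) -> str:
    # Seed the search queue from the approved plan.
    state.pending_queries = list(state.plan)
    for _ in range(max_rounds):
        if not state.pending_queries:
            break
        query = state.pending_queries.pop(0)
        results = web_search(query)                        # assumed helper: returns page texts
        for page in results:
            note = summarize(page, focus=state.question)   # assumed helper: LLM summary call
            state.findings.append(note)
            # Mimic human research behavior: new searches based on what was just learned.
            state.pending_queries.extend(propose_followups(note))  # assumed helper
    # Final pass: synthesize everything into a multi-page, cited report.
    return write_report(state.question, state.findings)            # assumed helper
```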
And I do want to share screen and show you all a little bit inside Gemini Deep Research.
For those who are tuning in audio only, I will do my best to
(11:22):
walk you through that process.
So I am inside Gemini right now.
You'll want to navigate to 1.5 Pro with Deep Research to access this feature.
I ran a quick little experiment yesterday, with some time when I was prepping for the podcast today, and I asked it to create a research report on AI opportunities for associations in 2025.
I said be specific and be detailed, which was something I
(11:45):
pulled from a LinkedIn user who did the same experiment, or a similar experiment, and they said that that would help with the output.
So I entered in that prompt, and then Gemini comes back to me and kind of outlines its plan.
So it's going to research websites, and it gets pretty granular here with how it's going to do that.
It's going to find research papers and articles on AI
(12:06):
opportunities for associations in 2025, find case studies of associations that have implemented AI, so on and so forth, all the way to finding information on the future outlook for AI and the association sector.
It will then analyze all of those results and create a report, and it took a few minutes.
But as I was going through this process, I decided to edit that
(12:31):
research plan just a little bit, because I realized there probably are not a ton of case studies out there publicly right now of associations using AI.
So I said, include case studies outside of the association industry, and then clarified why. It provided an updated plan, and then I said, OK, that's good to go.
And I will say this whole process, once I clicked start research, took, honestly, five
(12:54):
minutes or less.
I actually clicked out and started working on some other tasks, so I don't have the exact time there, but it generated this lovely report, which I will share with you, opening it in Google Docs so you can get a more comprehensive overview of it.
But it breaks down the potential benefits of AI for associations, from increased efficiency, member engagement,
(13:14):
reduced cost, enhanced security.
It talks about challenges and risks, particularly for associations, like data privacy, bias, cost, and accuracy.
Then it breaks this down into specific opportunities for associations as it pertains to AI, case studies of successful AI
(13:35):
implementation from Coca-Cola, UPS, a few other examples there.
It talks about specific AI technologies that might be relevant to associations, like natural language processing and machine learning, and then, what I really like here, I'm going to scroll down, is we get this nice kind of table, which, Amith, I remember you can talk about your example as well, but the one
(13:57):
you shared with me had a really nice table where it broke down tasks by how difficult they were and then what the impact would be on those, which I thought was really neat.
In my own report it breaks down survey sources, and then year, and then the key findings of those.
So this is going back to 2023, because I'm not sure if we have that 2024 data just yet.
(14:19):
But, for example, one key finding is that 60% of organizations with reported AI adoption are using generative AI.
And then, scrolling to the end, I've got a conclusion on that report, and I also have all of the works cited there, and I've
(14:41):
got to point out somewhere in here, oh, yes, yes, oh wow: Top Six AI Guidelines for Associations to Follow.
That is us, Sidecar, with a link to our website.
So I'm pretty proud that we're being included in the works cited here, and I did not know that was going to happen.
Just a heads up.
Ad V/O (14:58):
Good job Google.
Mallory (15:06):
Good job, Google.
That is a really quick overview of what you can expect with Gemini Deep Research.
Amith, as I mentioned, you tested this out yourself.
I'm curious if you can kind of share, and you've probably run several reports at this point, but what you've done with it and why you were impressed.
Amith (15:17):
So what I did with it was, it's similar in some ways.
I asked Google's Deep Research to help me determine where AI agents would be most impactful in the sector.
So we at Blue Cypress are looking at launching a minimum of four new AI products for the association sector this year, essentially one per quarter.
(15:38):
Then we want to go up-tempo in 2026 and do two new products per quarter, and then the following year we're going to go even harder.
So our plan is to keep going really, really aggressively.
Some of the stuff we've done so far that a lot of people have heard of are just, to us, the most obvious use cases of AI for associations: personalization and knowledge agents, data agents and so forth.
But there's a lot of very specialized, different kinds of
(16:00):
functionality, like managing abstract submission or being able to automatically do speaker room assignments, things like that, that are massive efficiency gains.
So we tend to work at Blue Cypress with a lot of input from the community and also, obviously, our collective intelligence and experience, having been in this space a long,
(16:20):
long time across a lot of our team members.
But we said, you know what, let's think a little bit more about the data, in terms of where the efficiencies would be greatest and where the implementation complexities could be ranked from high to low.
So it's kind of a classical two-axis grid, where we're saying how much of an impact would this thing have versus how much
(16:41):
effort would it take to do it right?
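As a concrete illustration of that impact-versus-effort grid, here is a small, hypothetical sketch; the candidate use cases and the scores are made up for illustration, not the actual Blue Cypress analysis or anything Deep Research produced.

```python
# Hypothetical impact vs. effort scoring for candidate AI use cases (illustrative only).
candidates = {
    # name: (impact 1-10, effort 1-10)
    "abstract submission management": (8, 4),
    "speaker room assignments":       (6, 3),
    "member knowledge agent":         (9, 7),
    "event session recommendations":  (8, 5),
}

def quadrant(impact: int, effort: int) -> str:
    # Classic two-axis read: high impact / low effort is the sweet spot.
    if impact >= 7 and effort <= 5:
        return "quick win"
    if impact >= 7:
        return "big bet"
    if effort <= 5:
        return "nice to have"
    return "avoid for now"

# Rank by best impact-to-effort balance first.
for name, (impact, effort) in sorted(candidates.items(), key=lambda kv: kv[1][1] - kv[1][0]):
    print(f"{name}: impact={impact}, effort={effort} -> {quadrant(impact, effort)}")
```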
And so I asked Google's Deep Research to do that, and it produced a pretty compelling report.
I mean, some of it was pretty generic, like the one you just showed is.
You know, it's a starting point, right?
There's a lot of things in that work that would have taken probably,
(17:02):
you know, somebody hours to put together, right, doing all those searches and pulling it together.
I also think, with these tools, it's not so much that, OK, that would have taken four hours of work or eight hours of work and I don't have to do that.
I think what this is going to do is allow us to do more creative work, because we'll be able to ask more creative questions.
We're limited.
(17:23):
Our mental pipelines, right, collectively across our teams, are limited by how many labor hours we can put in, and therefore we oftentimes take a lot of shortcuts to get to the answers we get to.
But if we can ask more questions and deeper questions and get better answers, I think that's going to help inform our decision making.
It's going to get us to be more creative.
(17:44):
So I view this as a creative tool as much as anything else.
Mallory (17:49):
I like that idea, a creative tool.
What do you see as some potential use cases for a feature like this for associations?
In terms of which areas might they need to be thinking about doing this kind of research?
Amith (18:02):
Well, I think for staff of the association, being first of all aware, you know, awareness is where everything starts with this stuff, that this tool exists.
It goes way beyond a ChatGPT or a Claude-type experience.
Those chatbots are wonderful.
They do so many things, but they're pretty much giving you an answer almost immediately.
Even o1 and the soon-to-be o3, the reasoning models from OpenAI,
(18:23):
they're not as detailed as this.
They don't take minutes of time to do deeper research and compile results.
Google's Gemini Deep Research product to me is an agentic system, so I view it as an application as opposed to a model, and being aware of the fact that there's a
(18:45):
consumer-grade access tool to a research assistant like this is step one.
So anytime you have to go deeper than just a quick search on Google, or something where ChatGPT can answer it, this is a tool that potentially can give you a deeper, better thought-out answer to something you're asking.
It's also the synthesis of AI's intelligence with really heavily leveraging search.
Obviously, Google has the strongest search capabilities in the world, at least at the
(19:08):
present time, and so their ability to leverage that Google search function within Gemini Deep Research is really powerful, and you saw that in the citations and kind of the breadth of content it's able to pull in.
It's really amazing.
And so I think the first step is awareness.
Then the next thing beyond that is, where do you want to use this?
(19:28):
And I think that, imagine yourself in a sector like accounting, and you say, okay, I'm working with a bunch of CPAs.
I want to see, what do CPAs think of the possibility of tax reform coming up here in 2025 with the change in administration at the federal level?
How do people feel about that?
What do they think is going to happen?
(19:49):
Well, I can start reading all the different reports that are out there.
Lots of big firms have put reports out there in terms of their position on what they think is going to happen in 2025.
A lot of people reported on this.
But what if we wanted to compile all that together and say, I'd like to know how the industry feels, or how do people in my state feel about it?
Google Deep Research probably could help with that.
(20:09):
I don't know for sure that it would be great at it, but I think that's the kind of thing you could throw at Deep Research that wouldn't be something a regular chatbot would be good at answering.
Mallory (20:19):
I'm thinking as well, potentially event location research, or even maybe if you needed to look up certain rules and regulations by state, or even by community, like within a state or parish or county, whatever that may be, that could be helpful.
Amith (20:32):
Sure.
Mallory (20:33):
Now the sources are cited, as we've seen, which is great, but it's still an AI model, so we all know that hallucinations are certainly possible.
I don't know how to ask this, I don't know if you would give, like, a percent confidence on this, but how much weight should someone give to a report that Gemini generates?
Amith (20:52):
I mean, first of all, the hallucination problem has dramatically improved since ChatGPT launched just over two years ago.
So back then it was very common to have this problem.
But the models themselves, just on a standalone basis, have gotten far better at reducing hallucinations.
And then the systems that sit on top of the models, that use citations and check their work and all that, are much, much
(21:15):
better.
So I would put a lot of credence into the Deep Research product's ability to be accurate, because it's using cited works and it's actually doing an iterative, agentic-type loop where it's checking its work in compiling that output that it gave you.
The output it gave you is the product of many cycles of this agent working on the process and figuring out what it
(21:36):
should do, quality checking it, all that kind of stuff.
So I think that Deep Research probably is one of the best tools in terms of accurate content.
It may or may not be exactly what you want, but it's likely to be very close to accurate, at least based on the citations.
If it has a citation that's false, then of course that could lead it to giving you an answer that's incorrect.
(21:57):
As far as I know, it's not, at the moment, cross-referencing and comparing multiple citations to check facts, things like that.
It'd be great to test it out and say, hey, for each citation you bring in, fact-check it with at least two other sources that either corroborate or dispute that particular statement, right?
So there's a lot of things like that.
I think journalists could use that.
There's a lot of cool opportunities here.
(22:19):
So I wouldn't go out there and bet that it's that great for everything.
I'd always check the work of the AI, just like I'd check the work of a colleague that gives me something to read.
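To make that cross-checking idea concrete, here is a minimal sketch of the kind of corroboration pass being described. It is not a feature of Gemini Deep Research; the `search_sources` and `supports_claim` helpers are hypothetical stand-ins for a search call and an LLM judgment.

```python
# Hypothetical corroboration check: require at least two independent sources
# that support (or dispute) each cited claim before trusting it.
def corroborate(claim: str, cited_url: str, min_agreeing: int = 2) -> str:
    other_sources = [s for s in search_sources(claim) if s.url != cited_url]  # assumed helper
    verdicts = [supports_claim(s.text, claim) for s in other_sources[:5]]     # assumed helper: True/False/None
    agree = sum(1 for v in verdicts if v is True)
    dispute = sum(1 for v in verdicts if v is False)
    if agree >= min_agreeing and dispute == 0:
        return "corroborated"
    if dispute > agree:
        return "disputed"
    return "needs human review"
```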
Mallory (22:30):
Yeah, and now that I'm thinking about it, that might be why this report that we generated was quite general, because I feel like the amount of resources out there on AI for associations is probably pretty minimal.
That's just my guess, so maybe that's why we kind of got that generic output.
Amith (22:46):
I think that's right.
I think your update during the process of what you demoed, where you prompted it and then told it to change its approach, since AI case studies in this market are somewhat limited, was good, and that's the kind of thing that leads to better results, but it's still pretty high level.
So one other example I'd give you that we've been playing with: another one of our products is focused on personalization,
(23:11):
and we have this particular use case we're really excited about.
So last year, with several associations, we did a test of personalizing event content really across two dimensions, number one being session personalization.
So recommending to Mallory or to Amith that these are the sessions you should consider attending at our upcoming event,
(23:31):
both to really recommend things so that people will register, but also, once someone has registered, to tell them which sessions might be the best fit for them.
That's a classical problem for event managers: how do you engage people through the content of the event, both to get them to register but, once they've registered, to give them the best possible experience?
And if you have a show or an event where you have hundreds or
(23:52):
even maybe over a thousand concurrent sessions and keynotes and breakouts and so forth, it's really, really hard to get people to the right session, and AI solves that completely, and so we experimented with that.
It had unbelievable results in terms of the happiness and engagement people got when they were doing that.
And then we also tested the same kind of concept, but with networking.
So being able to say, hey, Mallory, you really need to meet
(24:14):
this person or these three people, and if you have a great experience with that and you're able to connect with people at an event in a good way, that can be incredibly valuable.
The value creation the association is helping create there is really amazing.
So the reason I'm mentioning this is because what we wanted to do was research whether or not there have been any studies in
(24:38):
the market that suggest that if people have really good engagement at an event, meaning they connected with other professionals that they didn't already know and they had a good experience, that would lead to a higher probability of returning to a future event, right?
So we were looking for research.
Did MPI or PCMA or ASAE or somebody study this problem and
(25:00):
say, hey, if we do a deeper dive and we connect people really well at our events, that has an ROI, right, and we can prove that through longitudinal analysis of a bunch of events?
We did not find this, but we asked the research tool to help us collate data.
It found a bunch of things that were kind of on the edge of that problem.
Really, what we're looking for is kind of this obvious use case
(25:23):
ROI, to say, hey world, listen, if you do a great job with AI-driven networking recommendations and session recommendations, your event's going to be better.
People are going to want to come back.
I think it's intuitively obvious that that's true, but we wanted to put an ROI to it.
So I think stuff like that, where you're searching for an answer, is where this tool can be very powerful.
Mallory (25:45):
The last thing I want to dive into a little bit, Amith, is, when we've talked about Gemini 1.5 Pro with Deep Research going step by step, it seems like we are allowing for a kind of System 2 thinking within the AI model.
Amith (26:18):
Very much so.
I mean, if you think about it, we were talking about this back when it was called Strawberry, which then became o1.
We've talked about this in other episodes in different contexts.
But if you give these systems, and I'll say system is a loose term, which might mean a model, or it might mean an entire software system that is built on top of a model, like the Deep
(26:38):
Research product is, the system ultimately has more resources to devote to a particular task, and so it's able to actually think through the problem and do more research, or just think about it longer, like in the case of o1 and the soon-to-be-released o3 that OpenAI announced in December.
These are reasoning models where, baked into the model itself,
(27:01):
they've trained the model to take more time, and it's a tunable parameter.
You can tell it how much compute to use, either a little bit or a lot, and the answers, unsurprisingly, get better the more compute you give it, because you're essentially saying, take more time to figure this out.
That's like saying, hey, I'm going to give you a math quiz.
I'm going to give you one second to answer this math
(27:22):
problem, or I'm going to give you 60 seconds to answer this problem, or I'm going to give you two minutes to answer this problem.
And so if I give you one second, it's just an instantaneous reaction.
You look at the problem.
You have to guess what the number is, right?
It's more like predicting the next token.
That's the version of you that you get, whereas if you have a minute, you maybe can work through the problem.
Maybe you don't check your work that well, but you work through it.
If I give you two or three minutes, maybe you check your work a couple times, maybe you think of different approaches,
(27:44):
because you have more time. It's really the same thing.
And so whether that's happening in the model itself, or if it's happening through the system iterating with the model multiple times, ultimately, I think that's an implementation detail that doesn't really matter to most of the people listening to this.
The idea, though, is that these systems are capable of realizing that they need more time to solve the problem, right?
(28:05):
You know, if you give an eighth grade, ninth grade math problem to adults that have been through college, they probably can solve it in a few seconds.
But if you give them, you know, a 12th grade problem and they haven't looked at that math in a while, they might need to go and pull a book off the shelf or look up, you know, a quick refresher on YouTube or something.
So you know, these are things that we all do in life, in
(28:26):
whatever the domain is, and I think we're just essentially allowing the AI to have a little bit more time.
So to me, that's all good things, because, you know, we expect instantaneous answers from computers, because they're computers, and we just assume they're right about everything and that they work instantly.
But in reality, you know, these are complex issues.
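A tiny sketch of that "more time, better answer" idea in code. This is not OpenAI's or Google's implementation; `draft_answer` and `critique` are hypothetical helpers standing in for model calls, and the loop simply spends a configurable compute budget on self-checking before answering.

```python
# Hypothetical "system that iterates with the model": spend more rounds of
# compute on a problem and keep the best self-checked answer.
def answer_with_budget(problem: str, rounds: int = 1) -> str:
    best_answer, best_score = None, float("-inf")
    for _ in range(rounds):                      # rounds ~ "one second vs. two minutes"
        candidate = draft_answer(problem)        # assumed helper: one model call
        score = critique(problem, candidate)     # assumed helper: model grades its own work
        if score > best_score:
            best_answer, best_score = candidate, score
    return best_answer

# quick_reply = answer_with_budget(quiz, rounds=1)    # snap judgment
# careful_reply = answer_with_budget(quiz, rounds=8)  # more compute, usually a better answer
```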
Mallory (28:43):
So more compute to the AI models, or to the software that's kind of working with the models, creates more time for them to kind of process the input and create better output.
I'm just going to ask this because maybe someone listening or viewing has the same question.
But we've talked about Groq chips, G-R-O-Q, and how
(29:03):
incredibly fast they are when you see AI models run with those chips.
Can you explain how that is different from this?
Can you just explain how those are linked?
Amith (29:14):
Well, think about it this way.
So, Groq chips: they're amazing, they're language processing units, which is a novel, fundamentally different hardware architecture than GPUs, and they can run AI models at, in some cases, 100x faster than GPUs, in many cases 10 to 20x faster.
And so it isn't that they're changing the way the models work,
(29:37):
it's that the fundamental unit of what's happening, this fundamental unit of computation, which is inferencing the AI model, is way faster.
So, to give you an example, if you think about tokens per second, where a token is roughly equivalent to a word, that's all you need to really think about.
When you inference with OpenAI's GPT-4o model, the maximum
(29:59):
speed you tend to get is somewhere between 30 to 50 tokens per second, sometimes a little bit faster than that.
And if you compare GPT-4o on the OpenAI platform to Llama 3.3, a 70-billion-parameter model which was released in December and is comparable in terms of its intelligence, inferencing on Groq, you get roughly 1,000 tokens per second.
(30:21):
And they have a new version of it coming out, which is based on a technique called speculative decoding, that will be over 3,000 tokens per second.
So it's a radically different experience, both because the model is substantially smaller but comparable in intelligence, so the size of the model really doesn't matter if its capabilities are equivalent.
So Groq is an enabling technology.
(30:42):
It means that, because inference is so much faster and so much cheaper as well, people will build applications that are smarter, because they use more inference, they use more compute.
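To put those throughput numbers in perspective, a quick back-of-the-envelope calculation. The speeds are the ones quoted above; the 1,500-token report length is just an assumed example.

```python
# Rough arithmetic on the quoted inference speeds (report length is an assumption).
report_tokens = 1500  # a few pages of output, assumed for illustration

speeds_tokens_per_sec = {
    "GPT-4o on OpenAI (quoted 30-50 tok/s)": 40,
    "Llama 3.3 70B on Groq (quoted ~1,000 tok/s)": 1000,
    "Groq with speculative decoding (quoted 3,000+ tok/s)": 3000,
}

for name, tps in speeds_tokens_per_sec.items():
    seconds = report_tokens / tps
    print(f"{name}: ~{seconds:.1f} s to generate {report_tokens} tokens")
# ~37.5 s vs ~1.5 s vs ~0.5 s for the same amount of output
```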
Ultimately, the way I would describe it, though, is that this ability for the system to know that it's allowed to use
(31:04):
more compute is more about the system designer saying, I'm willing to invest more time, or more inference, or more compute cycles into this problem.
It's like the teacher saying, hey, Mallory, spot check, give me the answer instantly, I'm calling on you in class, versus saying, hey, here's 60 seconds, or three minutes, or 15 minutes to solve this problem on paper.
It's a similar kind of analog, you know.
(31:26):
I think, zooming out from this conversation a little bit, my observation is this, particularly in the association market, but I think it's true for everyone: all of this terminology at some point is going to fade to the background.
It's super interesting right now, because it's a new frontier where people are figuring out all sorts of stuff.
This stuff's changing really fast, but ultimately what you
(31:47):
care about is the utility, the value creation to you.
When you think about a software tool that you're using, be it like HubSpot or Salesforce or Microsoft 365, you don't really think about, like, whoa, what kind of storage technology are they using, and how fast is the network speed, and what kind of computer is it running on, and blah, blah, blah.
Those are just parts of the solution.
Now all you think about is, does it work, is it available?
(32:09):
Does it solve my problem?
And AI is going to become the same thing.
All this stuff's going to become super commoditized, instantly available, very inexpensive, and that's already happening right now.
So I think it's good for people to know about how these things are constructed, because it gives you kind of a window into what might be possible with these systems.
But I think the average business user just needs to know
(32:32):
that these systems are getting smarter, not just because of the fundamental algorithm getting better, but because we're giving more resources to the computer.
Mallory (32:41):
That makes sense.
Gearing up for topic two, today we're talking about diamond batteries.
Scientists and engineers from the UK Atomic Energy Authority and the University of Bristol have achieved a groundbreaking feat by creating the world's first carbon-14 diamond battery.
So what are the components of a diamond battery?
Well, you've got the radioactive source, which is
(33:04):
carbon-14, a radioactive isotope that serves as the center of the battery, and then you've got the diamond encapsulation around the carbon-14.
It's a synthetic diamond structure, I want to add that as well.
So its functioning is similar to that of solar panels, but instead of converting light particles into electricity, it
(33:24):
captures fast-moving electrons from within the diamond structure.
How does this compare to the lifespan of a normal battery?
Well, this battery has a half-life of around 5,700 years, meaning it would take that long to deplete 50% of its power output.
This lifespan is close to the age of human civilization itself.
(33:47):
On the flip side, conventional batteries, I didn't know this, like standard alkaline AA batteries, which are designed for short-term use, would run out of power in about 24 hours if operated continuously.
Now, while the carbon-14 diamond battery produces less power than conventional batteries in the short term, its longevity is unparalleled.
(34:07):
So a single carbon-14 diamond battery containing one gram of carbon-14 could deliver 15 joules of electricity per day, compared to a standard AA battery that has a higher initial energy storage rating of 700 joules per gram but depletes very quickly.
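Just to sanity-check those figures with simple arithmetic, here is a short calculation; the only inputs are the 15 joules per day and 700 joules per gram quoted above.

```python
# Back-of-the-envelope math on the quoted diamond battery figures.
diamond_joules_per_day = 15          # quoted output of 1 g of carbon-14 per day
aa_joules_per_gram = 700             # quoted initial energy rating of an alkaline AA

# Average power: 15 J spread over 86,400 seconds is roughly 170 microwatts.
avg_power_watts = diamond_joules_per_day / 86_400
print(f"Average power: {avg_power_watts * 1e6:.0f} microwatts")

# Days until the diamond cell's cumulative output passes one gram of AA capacity.
days_to_match_aa = aa_joules_per_gram / diamond_joules_per_day
print(f"Cumulative output passes 700 J after about {days_to_match_aa:.0f} days")
# ...and then it keeps going, with a half-life of roughly 5,700 years.
```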
You might be wondering, this all sounds great, but what could
(34:28):
this be used for?
So the idea is for applications where replacing batteries is impractical or maybe even impossible.
So think medical devices like pacemakers, hearing aids and ocular implants, spacecraft and space exploration equipment, and extreme environments on Earth where battery replacement is
(34:51):
challenging.
So this groundbreaking technology offers a safe, sustainable way to provide continuous microwatt levels of power for thousands of years, far outlasting any conventional battery technology available today.
So, Amith, you were the one that shared this news article with me.
I don't know if I would have seen it otherwise, to be totally honest.
Why did this pique your interest?
There's a lot going on here, but kind of what stands out to
(35:12):
you?
Amith (35:13):
Well, first of all,
thanks for including and doing
such a great job with theoverview.
I think you broke down thescience in a very consumable way
, and certainly for me, becauseI am not a scientist, I'm not a
physicist, I don't have anybackground in nuclear technology
at all.
I just find it fascinating, inlarge part because AI is so
power hungry.
Our world is increasinglyconsuming power, in fact growing
(35:35):
at a rate that is unprecedented, and so and more and more of
what we do is on the move.
So being able to have batterytechnology of various kinds,
various shapes, variousapplication categories or
application profiles is reallyimportant.
Some types of batteries, likelithium ion batteries that are
used in all sorts of thingsthey're used in your phone,
(35:55):
they're used in electric cars.
They're capable of discharginga large amount of power really
quickly.
They have a limited number ofcycles, meaning the number of
times you can recharge them.
But they have applications thatare different than, let's say,
this technology, at least in itsinitial incarnation.
But the idea of essentiallylike a limitless power source,
(36:15):
that's nuclear powered, safe andcompact enough to fit into lots
of small applications is reallyreally interesting From an AI
perspective.
Remember that AI models arebecoming smaller, they're
becoming faster, they'rebecoming cheaper.
Imagine a world where you couldtake a very small AI model, a
(36:36):
tiny model, something that'sunder a billion parameters, 100
million parameters, somethinglike that, but a model that's
based on advanced neuralnetworks, that's based on all of
the things that have happenedover a number of years in the
world of AI, and that model,let's say, is capable of
translation, right, it's capableof language translation,
real-time audio-to-audio, youknow, as opposed to text-to-text
.
We take that and we embed that into an implant and we power it
(37:01):
through one of these batteries, and it becomes part of your ear and you don't even know it's there, it's just there for the rest of your life.
This device could give you the ability to hear in other languages and, you know, respond back, right?
There's more to it than that.
I'm kind of making up a sci-fiscenario, but this is a
component that would enable thatkind of a sci-fi scenario.
As for us humans, AI is pretty cool, but I think our creativity at the
(37:22):
moment is still a lot better.
But we tend to find ways to dothings that the technology
wasn't initially thought to besuitable for, and a great
example of that is actuallygoing back to lithium-ion
batteries, if you go to thattechnology and say, well, was it
(37:44):
initially designed for electriccars?
And the answer is no, not atall.
A single-cell lithium-ion battery, you know, is very small, it doesn't contain a lot of power, it's suitable for, like, a portable radio at largest, maybe.
And what ended up happening ispeople started chaining them
together into systems and thesebattery packs that you know make
up, you know, a very largepercentage of the mass of an
(38:04):
electric vehicle, are capable ofdoing things that you see them
doing on the road every daythese days.
But people did not believe thatlithium ion was a technology
that would scale in that way.
They thought it was unsafebecause of the potential
combustibility.
They thought it was not costeffective because at the time
lithium ion batteries were super, super expensive, even for like
a very small cell.
(38:24):
But with economies of scale,just through manufacturing,
along with process improvementsand improvements in the tech
itself, incrementally it now isaffordable.
Now there's lots of issues withlithium-ion batteries, don't
get me wrong.
But the point of this is thatif this technology, at the
fundamental science level, isshown to be effective and safe,
(38:44):
then it will scale over time.
You mentioned something, also, that's really important for our listeners to understand, which is that we're talking about people encapsulating this radioactive material inside an artificial diamond, which essentially seals it in a way that makes it totally safe, at least that's what the science suggests.
Right, that would be testedlots of different ways before it
(39:06):
was used in any applications,but particularly in embedded
applications in our bodies.
But assuming that that ends upbeing true, what's going to
happen?
People are going to figure outhow to stretch the boundaries of
what this thing can do.
It'll be used for all sorts ofapplications.
Imagine a laptop that never hasto be plugged in.
Imagine a phone that just worksforever.
There's a lot of really coolstuff that comes from that.
(39:29):
If you just disconnect and saywe have a limitless power supply
in the form factors of thedevices that we care to use.
Now getting back to AI.
Ai is a big problem from anenvironmental perspective.
Right now, it's consuming acrazy amount of energy and
growing really, really fast.
That's why you hear stories like Jeff Bezos, Amazon founder,
(39:49):
working on a deal with, I think, Constellation Energy, if I remember correctly, but one of the nuclear power companies, to restart Three Mile Island, right? That would never have been thought of as a thing previously.
But because AI is so hungry forpower, it makes sense to say,
hey, we're going to do that.
In other news I think Microsofthas, or no, it's Meta they have
(40:10):
an RFP out right now for anupcoming data center project
they're going to do, wherethey're asking a company to
build a nuclear reactor just forthem, a massive one, something
that could power New York Cityfor their data center.
So you know, nuclear isattractive in general because it
puts off zero emissions.
It is, at least in theory,something that should be very
cost effective relative to otherforms of energy.
(40:34):
So there's obviously issues,downsides, risks, but people are
pursuing this reallyaggressively and when there's
this much demand for something,you tend to find really creative
solutions.
Mallory (40:45):
It seems like over and over on the pod, now that I'm reflecting in the new year, we keep going back to this point where power is a bottleneck for technological innovation, and we're constantly trying to solve for that.
Would you say the biggest bottleneck at this point is just figuring out ways to power things?
Amith (41:03):
Yes, and I think we have
lots of reasons to be optimistic
.
I mean, you know, if you thinkabout like just the energy
received by planet earth duringa given single day of the year,
we receive a hundred times theenergy the entire planet needs
for the entire year.
We just have no idea how toharness it, how to store it, how
to distribute it, how toconsume it in a way where we can
(41:24):
take advantage of that natural form of energy, which is, of course, completely clean.
There's lots more that we cando with wind.
There's more we can do with themotion of the ocean.
There's more we can do withnuclear.
There's more we can do withways of cleaning up fossil fuels
.
So there's a lot we can do, Ithink.
Ultimately, you know and that'snot even to speak of fusion,
(41:45):
right?
So there are a lot of reasonsto be optimistic.
Ai is going to compound andaccelerate scientific research
and, ultimately, discovery,which is the most exciting
aspect of it, whether it's inbiology or physics or material
science, which we've talkedabout quite a bit on this
podcast.
This is an application of anumber of those things coming
(42:05):
together.
There's lots of unsolvedproblems here, but we have both,
I think more intelligent peopleand intelligent machines
working on these problems thanever before.
So we're going to seeunprecedented scientific
discovery.
I wouldn't be surprised if thisparticular problem was a
completely solved problem by thetime we are in the next decade,
so in the next five years.
It wouldn't surprise me at all.
(42:27):
I'm not suggesting that, I'mpredicting that.
I just would not be surprisedby it at all.
Mallory (42:32):
We'll wait and see for
our prediction episode four
years down the line.
We'll see.
Amith (42:36):
Yeah that'll be episode
400 or something.
Mallory (42:38):
Exactly.
So, Amith, this is an incredibly interesting topic, and it makes me feel good to hear that you're optimistic.
This is not something I know as much about, certainly, but what do you think is kind of the key takeaway here for our association listeners in terms of keeping this in mind?
What does this mean?
Amith (42:55):
It's more of a macro
topic to hopefully give people
insights into yet another reallyamazing scientific discovery
and engineering opportunity.
That should give people bothoptimism that we're going to
have really good solutions forsome of these issues, because I
know a lot of associations areconsidering kind of the overall
(43:17):
responsibility of AI adoptionand the more we adopt it, the
more energy we consume.
Of course, we don't really havea choice other than to adopt it
really, but at the same time,people are thoughtful about that
, which I admire.
I think it's really important for people to be thinking about their carbon footprint, or their total energy consumption footprint, which is maybe the better way to think of it, because it's hopefully not carbon-based, or maybe it's carbon-14-based
(43:38):
instead of traditional carbon.
So you know, I think that it'sreally important for people to
just have that macro insight.
So sometimes we cover topicsthat are around economics or
around fundamental science andresearch, and a lot of times
we're covering topics like theearlier one where it's like, hey
, here's a practical tool.
So to me, this isn't so muchthat you need to go out and do
anything about it.
If you find it interesting,share the pod, give us a like,
(44:00):
subscribe on YouTube or whereveryou listen to pods.
Share this with other people.
My point of view is simply that it's helpful for people to know more about what's going on, and that these things are not like scientific research historically, where you might hear about some discovery in the 1980s that actually saw the light of day as a practical application in 2020, or something like that.
All these cycle times are
(44:22):
compressing, and that's why I'm optimistic we're going to see solutions.
So, again, this doesn't solvethe energy problem we have this
moment in time, and thisparticular technology probably
won't solve the macro energyconsumption needs, right, but it
serves to reinforce everythingelse we talk about in this pod
over and over, which is thatthese exponentials are feeding
(44:43):
into other exponentials.
Material science is begetting smarter AI.
Smarter AI is begetting smarter AI.
Smarter AI is begetting smarter compute.
Smarter compute, of course,then, is compounding all of that
, and that's not to speak ofbiology and other forms of
innovation.
So I like to get peoplethinking about that kind of
stuff, even if it's not.
You know, we like to geek outon this stuff a bit on this pod,
but, you know, even if youdon't find this like
(45:06):
fundamentally just reallyinteresting to geek out on, I
think it's just good to be awareof it.
That's really what it is.
Mallory (45:12):
This is fundamentally interesting.
I'm just going to say, I think this is a fundamentally interesting topic, diamond batteries.
Everyone, thank you all for tuning in today, for being present with us on our first live episode of 2025.
We're excited for a great year with all of you, and we will see you next week.
Amith (45:39):
Thanks for tuning in to Sidecar Sync this week.
And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world.
We'll catch you in the next episode.
Until then, keep learning, keep growing and keep disrupting.