Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Amith (00:00):
Remember that what you have in your hand today, that it's the worst AI you will ever use, full stop. So three months from now, six months from now, 12 months from now, the AI that you have at that point will be dramatically better.
Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and
(00:22):
developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions.
I'm Amith Nagarajan, chairman of Blue Cypress, and I'm your host.
(00:44):
Greetings everybody and welcome to the Sidecar Sync, your home for content at the intersection of associations and artificial intelligence. My name is Amith Nagarajan.
Mallory (00:56):
And my name is Mallory
Mejias.
Amith (00:59):
And we're your hosts, and we have another crazy episode, a really crazy one this time, lined up for you. You'll find out why momentarily, but before we get into the exciting topics that we've picked for today...
Mallory (01:18):
Let's just take a moment to hear a word from our sponsor.
If you're listening to this podcast right now, you're already thinking differently about AI than many of your peers. Don't you wish there was a way to showcase your commitment to innovation and learning? The Association AI Professional, or AAIP, certification is exactly that. The AAIP certification is awarded to those who have achieved outstanding theoretical and practical AI knowledge as
(01:41):
it pertains to associations. Earning your AAIP certification proves that you're at the forefront of AI in your organization and in the greater association space, giving you a competitive edge in an increasingly AI-driven job market. Join the growing group of professionals who've earned their AAIP certification and secure your professional future
(02:02):
by heading to learnsidecarai.
Amith, I know you're fresh off a few days of skiing. Tell our audience a little bit about how that went.
Amith (02:13):
It was fantastic.
You know, skiing is a passion of mine, and I got to go up to Utah and hang out with a bunch of friends and ski really hard for three days, not think about really anything, because I skied so hard every day that I basically couldn't do anything other than ski.
So in fact I could barely skiby the third day.
So mission accomplished.
Mallory (02:33):
That's awesome. Do you feel like... because I feel like your mind is kind of always going, Amith. I'll get those periodic early morning Teams messages, or late-night ones, like, wait, what if we did this on the pod? But do you find that when you're skiing, is your brain kind of totally clear, or are you working through business problems?
Amith (02:51):
No, no, no, no, that would be a mess. You know, I get into trouble on skis because I'm not quite as young as I used to be. Sometimes I'm out there on the mountain and I see something I'm interested in going after, but you know, if I was thinking about AI or if I was thinking about associations or business in general while I'm skiing, I'd probably be wearing some kind
(03:12):
of cast right now. So that's one of the things I like about it, and mountain biking and water sports and anything really, is that it's an opportunity to really clear your mind. I actually find that if I have a period of time like that where I'm able to really disconnect, I come back and all of a sudden all sorts of new ideas do come to mind.
(03:32):
So I think it's a really important concept. You know, in our culture, at Blue Cypress and at Sidecar, we talk a lot about this idea of work-life integration, which to us is not a euphemism for saying there's no work-life balance, by the way, just to be clear. Because some people hear me talk about this and they're like, ah, you're full of it, that just means everyone works all the time and never does anything else, and that is not the case. We do work really hard over here, but the point is to be
(03:55):
able to weave together the components of work and the components of life in a healthy way, in a balanced way. But really, I don't like the term balance, simply because it implies that you're out of balance at all times, and this theoretical construct of balance means that you're always having to separate the two on opposing sides of the pendulum.
(04:15):
But the point I'd make here is, you know, if you can integrate work and life, the way I try to myself and set an example, you can go and do other things and then weave that in. And then sometimes work pops up at weird times, and that's cool too. Especially if what you're doing is something you enjoy doing, it actually works out great.
Mallory (04:35):
Yep, yep.
Do you feel like that's something you've always been good at, that work-life integration, or has it taken time?
Amith (04:41):
It's taken time. I mean, in my earlier years I was just basically flat out all the time, every day, all day, every night, you know. It doesn't work that way anymore, but you know, I've got a lot of other responsibilities and a lot of other interests these days, so I think it's really important to be able to weave things together like that.
Mallory (04:58):
For sure, and I think you set a good example of that, because as much as you are kind of always thinking, it seems there are also times where you're, you know, unavailable, or you go to Europe for two weeks, or Amith's skiing for three days, so he cannot respond to this Teams message. So I think you do a good job of it.
Amith (05:15):
Yeah, and you know, Siri has a habit of starting to announce things at times when you're doing stuff like skiing, and you start to tell Siri, go away for a little bit.
Mallory (05:24):
Yeah, we'll get back to
the podcast on Monday.
Amith (05:27):
Yeah, sounds good.
Mallory (05:29):
Well, today we've got some exciting topics lined up for you all. We're talking about Claude 3.7 and these new Gen 3 models that are emerging, and then we are talking about something that I'm going to tell you all is kind of breaking my brain a little bit. We're gonna talk about quantum computing and Majorana 1 from
(05:51):
Microsoft, and we're going to really break it down, because it's complicated for me, even with all this research I've done to prep for this pod. So I'm excited to kind of hash this out with you, Amith. But first and foremost, Claude 3.7. We are seeing the emergence of a new generation of AI models, specifically Claude 3.7 and Grok 3. That is, Grok with a K this time.
These Gen 3 models are trained with at least 10 times the
(06:14):
computing power of GPT-4. And this leap in computational scale has enabled these models to excel in tasks like coding, math and reasoning, while also demonstrating creative and anticipatory abilities. So Claude 3.7 Sonnet is the first publicly available hybrid reasoning model, offering two distinct modes of operation.
(06:35):
In its standard mode, we see an improved version of Claude 3.5 Sonnet, providing quick responses for general inquiries. But then we also see an extended thinking mode, where the model engages in more detailed step-by-step reasoning for complex problems, improving performance on tasks like math, physics, instruction following and coding.
(06:56):
Users can toggle between these modes, and API users have fine-grained control over the model's quote-unquote thinking time, allowing for a balance between speed, cost and answer quality.
Interestingly enough, Claude 3.7 can also generate interactive visualizations, create functional programs from natural
(07:17):
language prompts and even produce creative outputs like interactive experiences with minimal input. The models seem to be getting better basically every day, and not just better but drastically better. And so when we're thinking about this and how we can explain this trend, we really need to dial in on two scaling laws.
(07:37):
The first scaling law is training compute. This law states that increasing the computational power used during the training phase results in more capable AI models. So, essentially, larger models with more parameters tend to be more intelligent. A 10x increase in computing power typically yields a linear
(07:58):
improvement in performance. That's the first scaling law. The second one is inference compute. This law focuses on the computation used during the problem-solving, or inference, phase. So allowing an AI model more time to think, or compute, during problem solving improves its performance. This discovery led to the development of reasoner models,
(08:20):
and reasoner models can perform multiple inference passes, working through complex problems step by step. So the new models that we're seeing, like Claude 3.7 and Grok 3, leverage both scaling laws, combining massive training compute with the ability to scale during problem solving. These scaling laws continue to
(08:40):
drive the development of more powerful and versatile AI systems, pushing the boundaries of what is possible in artificial intelligence.
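The rule of thumb Mallory cites, where each 10x in compute buys roughly one fixed step of improvement, means capability grows with the logarithm of compute. A tiny numerical sketch (the numbers are illustrative, not measured benchmarks):

```python
import math

# Sketch: if each 10x increase in training compute yields one fixed "step"
# of capability, then capability grows with the log of compute.
# Purely illustrative; these are not real benchmark figures.

def capability_steps(compute_multiplier: float) -> float:
    """How many 10x 'steps' of improvement a compute increase buys."""
    return math.log10(compute_multiplier)

# 10x compute -> 1 step; 100x -> 2 steps; 1000x -> 3 steps.
# Note the brutal economics: each equal step costs 10x more than the last,
# which is exactly why the second (inference) scaling law matters so much.
```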
So, Amith, so much to talk about this week. I want to dial in, I want to talk about Claude 3.7 and Grok, but first I want to talk about this idea of these Gen 3 models. What do you think is most exciting there?
Amith (09:01):
Well, I think I might start off by saying I would love to have been a fly on the wall at Anthropic when the marketing team was thinking about version numbers, because 3.7, it's like, where did that come from? I mean, this is a pretty big deal. And 3.5, honestly, probably should have been 4.0. I mean, I know that they are trying to be modest in terms of
(09:22):
the gains, I guess, comparatively speaking. But it is interesting to see how AI labs, as brilliant as they are in all things, it seems, and they have access to pretty good AI here, they don't seem to get the marketing side very well. But in any event, not that I'm a marketing expert, but it just
(09:47):
seems a little bit off. Anyway, what I like about Claude 3.7 specifically, and more broadly this Gen 3 model category, is the keyword that you mentioned earlier, which is hybrid. So I put a LinkedIn post out there earlier this week when the Anthropic team announced Claude 3.7. I was really excited about it because it was, from my perspective, the first mainstream, accessible model that you can get to that is hybrid. Now let's talk about that for just a minute.
(10:07):
So when we have a problem to solve, we don't stop to say, okay, should I use my instantaneous, reactive feedback capability, right, my fast thinking? Or should I use my slow thinking capability and really think through the problem step by step and break it down?
(10:28):
We just kind of know which of the two we should be doing, right? Like, Mallory, if I gave you two plus two, you'd say, yeah, four, right? Whereas if I gave you a problem that was a fairly complex formula, you'd probably say, well, I need to go and kind of refresh my high school algebra and break it down step by step, right?
Mallory (10:48):
Yep.
Amith (10:49):
I wouldn't need to say, hey, Mallory, make sure that you use Mallory version one, the Mallory that is only capable of instant reaction. If I gave you the other problem in that mode, you'd basically do your best guess, right? You'd say, oh well, I've seen problems that kind of look like that, so the answer is X, or X equals seven, or something like that, whereas if you had time you could think through and break
(11:10):
it down. So that's the difference between the models that think fast versus thinking slow, and essentially what's happening in hybrid models is they're capable of choosing. So in the case of o1 and o3, which we've talked about, or in the case of R1, which is the DeepSeek model everyone was going crazy about a couple weeks ago, those are pure-play reasoning models, or reasoners, whereas GPT-4o and Claude 3.5
(11:35):
Sonnet, these are not reasoning models. They're single passes at inference, which means they just give you the fastest possible response. Now, what software developers have been doing since reasoning models became available, really in the last few months, is to build intelligence into their software to say, hmm, should this step of my agent or this step of my program
(11:57):
use a reasoning model, or should I just use a regular model, kind of a classical LLM? And you would make that choice somewhat deterministically in your program or in your code, right? But now the models are smart enough to be able to make that choice for themselves, which is a really powerful advantage. And it kind of makes sense, because you're essentially trying to create the equivalent of a full brain, and
(12:21):
you know, a more sophisticated brain is going to know when it needs to just react, because it's an obvious question, versus when it needs to think more deeply. So that's the number one thing: these hybrid models blend the best parts of reasoners with classical LLMs and are therefore able to automatically think longer if they need to, or think with less time.
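Before hybrid models, the routing Amith describes lived in application code. A hedged sketch of that pre-hybrid pattern (the model names and the keyword heuristic are made up for illustration, not a real API):

```python
# Sketch of pre-hybrid routing logic: the application, not the model,
# decides whether a request needs a reasoning model or a fast single-pass
# model. Model names and the heuristic are illustrative only.

REASONING_HINTS = ("prove", "step by step", "derive", "debug", "plan")

def pick_model(task: str) -> str:
    """Crude deterministic router: a reasoner for hard-looking tasks,
    a fast classical LLM for everything else."""
    needs_reasoning = any(hint in task.lower() for hint in REASONING_HINTS)
    return "reasoner-model" if needs_reasoning else "fast-model"
```

A hybrid model collapses this choice into the model itself: one endpoint, and the model decides how long to think.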
(12:41):
Now the part you mentioned thatI think is also important to
note is the budget.
Like, how much time do youspend?
So if you were in a math examback in high school or in
college and someone said, hey,mallory, you can have an
infinite amount of time tocomplete this test, you'd
probably spend quite a bit oftime maybe not infinity, but
you'd probably spend a lot oftime to make sure it's the
perfect answer.
(13:02):
But if I said, hey, you have tocomplete 50 problems in 90
minutes, you would give yourselfan internal budget, maybe give
yourself a minute or two perproblem or something like that.
Right, so it's the same idea isthat we want to give
constraints to the model so thatit spends the amount of time
that we think is reasonable.
So that's the part that I alsothink is important to be aware
of.
Those things all are helpfulfor end users in the cloud
(13:25):
application, but they're alsovery powerful for application
developers that are buildingsystems on top of cloud.
Mallory (13:33):
That is very helpful. I love how your brain goes to math with the thinking fast and slow. My mind immediately went to seeing, like, a cockroach in my house or something, and how, in that moment, I wouldn't go, well, Mallory, what should I do? Should I scream? Should I kill the cockroach? You know, that would be my instantaneous reaction. I don't like roaches, by the way, so that's where my mind
(13:55):
went, but I think it makes sense what you're saying, Amith.
I guess what I want to know is, one, have you tried Claude 3.7 since it's been released, and have you noticed any of these differences? As a user, I've tested it out a bit and everything seems roughly the same.
Amith (14:20):
So I'm looking forward to checking it out. And from my perspective, I think the key to it is, if you compare it to what's happening in ChatGPT right now, where you have to choose the model, you have this menu of GPT-4o or GPT-4o mini or o1 or o3-mini or o3-mini-high, right? Like, so
(14:45):
complex user interfaces never win. I'm sure OpenAI is all over this and they're going after it really aggressively, but you know, my point of view is that simpler is almost always better, and so that's why I think Claude's going to pick up a lot of... they already have a lot of fans, but I think they're going to pick up more fans from this.
Mallory (15:06):
Yeah, you all know I'm a big Claude fan. Claude is my preferred model these days, except I don't like the usage caps; those are the things that drive me nuts. But other than that, I really am a big fan of Anthropic and Claude. I know you haven't used it, but we both shared an article with one another from Professor Ethan Mollick, where he
(15:26):
shares an interactive experience that he created using Claude 3.7. Have you watched that video, Amith?
Amith (15:37):
Yeah, and I think the interactive experience part of what you mentioned earlier in the overview is super interesting. I actually thought specifically about this in the context of education in general. One of the primary things that associations do to create value with their audiences is deliver content, and specifically learning content, and that comes in a lot of flavors. That can come, obviously, as text in the form of articles on
(15:58):
a blog or in a journal, and obviously in the form of conference content. But interactivity potentially could be super interesting for simulations or for kind of reactivity, so seeing a user does this, what happens then? It's those kinds of interactive experiences. Of course, that can be done through building software, where
(16:18):
you could say, build me a simulator that asks someone to choose from one of several prompts and then gives them different reactions using an AI. So that's the first thing my mind went to. What use cases did you have in mind when you saw the visualization or the interactive experience?
Mallory (16:35):
Well, I'll share with you all the video in case you haven't seen it. But Ethan Mollick did a time travel example, where he created kind of an interactive experience and he could click time periods historically, go to those time periods and then see, maybe, what was happening. I will say the images were very rudimentary and it was a very simple interactive experience, but very exciting in
(16:58):
terms of what's to come. I would say my mind went the learning direction as well, but I was kind of stumped, to be honest with you, because I saw that and then was thinking, what does this mean for business? I don't know exactly.
Amith (17:15):
Well, I think one of the things, from a learning modality perspective: if you think about the prevailing form of digital education, which, you know, we also subscribe to with Sidecar and our AI Learning Hub, it's a form of asynchronous content, and that asynchronous content is largely one way. Essentially there's videos and there's documents and
(17:37):
there's assessments, and the learner goes through it, not necessarily sequentially, but usually somewhat sequentially: watches a video, listens to some audio, maybe reads a document, perhaps does an assessment. Maybe there's an interactive exercise, but those things are hand-built, painstakingly, by learning content folks, and so
(17:58):
they're very rare. Actually, in most learning environments they're extremely rare, because they're very, very expensive to build. So the concept might be that if you have an AI that can build interactive experiences dynamically, like this, and if you can also have an AI resident in your LMS, can you provide an
(18:21):
experience that actually feels a lot more like a traditional synchronous learning experience? The most effective form of synchronous learning that we know is a one-to-one session where, Mallory, I'm trying to learn a topic that you're an expert in, and you say, sure, I'll spend a half an hour with you and I'll talk to you all about that topic
(18:42):
and answer your questions. That type of session is extraordinarily powerful, but it doesn't scale, until now, right? And so an AI avatar and that kind of solution, which is different than what we're talking about right now, could be a component of that. But interactive experiences are also ways of helping people visualize complex ideas.
So, for example, let's say that we're talking about educating
(19:06):
perhaps a bunch of accountants on some concept. Let's say the concept has to do with how to recognize revenue for membership. Membership gets paid upfront at the beginning of a year or the beginning of a particular cycle, and you typically recognize it one-twelfth per month as the individual receives the value from the membership.
(19:27):
It's a common concept in GAAP accounting. And so if we want to illustrate that, maybe we can show an example where numbers are literally flowing from one side of the accounting equation to the other in the context of this type of experience.
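Amith's deferral example works out mechanically like this. A small sketch of straight-line revenue recognition for annual dues (the dollar amounts are illustrative only, and this is not accounting advice):

```python
# Sketch: straight-line recognition of annual membership dues, 1/12 per
# month, as in the GAAP example above. Illustrative numbers only.

def recognition_schedule(annual_dues: float) -> list[float]:
    """Monthly recognized revenue for dues collected upfront."""
    return [round(annual_dues / 12, 2) for _ in range(12)]

def deferred_balance(annual_dues: float, months_elapsed: int) -> float:
    """Deferred revenue remaining after some months of service delivered."""
    return round(annual_dues * (12 - months_elapsed) / 12, 2)

# For $600 of dues: $50 recognized each month; after three months,
# $450 still sits on the deferred-revenue side of the equation.
```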
Right, and of course this is a super simplistic concept.
(19:47):
For most professional accountants, they heard about this back in their second year of accounting school. But the idea is, you have concepts that are best illustrated visually, where the instructor might use the chalkboard, but maybe sometimes the student goes up to the chalkboard to fill in part of the problem, right? And so that becomes part of the interactive experience in kind
(20:07):
of the classical sense. Is there a way to emulate that with this type of tool? That's the question I would put out there. And to me it becomes another dimension, or another canvas, upon which we can craft new kinds of interactive storytelling and interactive learning opportunities. So I get excited about stuff like this.
I also think, you know, we talk a lot about the whole idea of
(20:29):
moving from a scarcity mindset to an abundance mindset, and this is yet another great example. We've talked about software a lot, right? We say software has been super expensive and very hard to build. Now it's becoming less expensive and much easier to build, with AIs being able to build stuff for us. Theoretically, the cost kind of approaches zero over time.
(20:49):
Well, in the case of interactive learning experiences, same thing. You know, it requires highly specialized skills, takes a long, long time, is super expensive, and so very rarely do they get used. But how can we reconsider those assumptions, and kind of re-evaluate what we can and can't do, if the assumptions are
(21:09):
essentially invalidated? Right, we say, oh well, now all of a sudden you could have unlimited interactive experiences for no additional cost. It's just a feature of your LMS, where you can tell the LMS you want an interactive experience to be dropped in here, give it some idea of what you want, and it builds it. Of course, under the hood it would use Claude or whatever. That stuff's coming; it will be there, I think.
(21:29):
For now you have to kind of manually stitch together the solution, but it's available to people who are really looking to do something different.
I'd love for us to experiment with these kinds of ideas in the Sidecar AI Learning Hub, because I think, you know, we can show some leadership there, of course, but I think it would really enhance the learning experience.
(21:50):
To me, ultimately, continuing on the learning track a little bit longer, the goal isn't about signing up learners. The goal isn't about people completing the course. The goal isn't about getting a certification. Those are all milestones, or essentially heuristics, to tell us whether or not someone has actually gained knowledge. The real goal is: did they gain knowledge, and can they apply it in a productive way in their profession? And ultimately, one of the best ways to determine
(22:12):
that is some form of interactivity, right? And that's what, going back to the one-to-one tutoring, is so powerful: dynamically, a really good tutor can determine, is this student picking this up, and can they apply it or not, and then dynamically shift gears. So we're getting really close to being able to do that.
Mallory (22:30):
In my previous life, Amith, you know, I was a private tutor, so this really resonates for sure, the one-to-one teaching, and you're absolutely right, adapting as you go, gauging understanding. And I assume an AI would be better than a human at kind of gauging those things and identifying those patterns. Something that comes to mind with the interactive experiences is
(22:55):
the concept of digital twins, which we've talked about on the podcast previously, the idea that you have, like, a digital version of your business you could make changes to and see what happens. Do you think, not at this point, right, Claude 3.7, these experiences are very basic right now, but could businesses start thinking about that, now that this capability is emerging, of maybe using a model to create a digital twin of their business?
Amith (23:15):
Sure. Well, the model itself, I think, is going to be a component of the idea of a digital twin. The digital twin concept, you know, we've covered that in the past, and the idea there is really powerful. And for larger enterprises, whether they're government entities, or you're modeling a system, or you're modeling a business or a factory or something like
(23:36):
that, you can do these things, and they essentially are, you know, real-time simulations of the full complexity of the system. So what you might actually have is, you know, ultimately, like a Claude 3.7-caliber model multiplied by 10,000 within your digital twin. Because to really reflect all the different moving parts that exist, let's
(23:57):
just say we want to model one department in an association, not even the whole association. Let's just say the membership department. You know, there's a lot of moving parts there. If you were to break it down into, like, Lego building-block-type constructs: all the different business processes, all the different people who execute those processes, the customers externally that interact with them. Those are
(24:18):
all things that you'd model in the digital twin. And then you'd say, okay, well, I have essentially the model for that, the model being a generic model, probably, but then instructions on top of it that turn it into a process agent or some artifact like that. And then they all work together; if you aggregate them all together it
(24:39):
forms, let's say, a departmental digital twin for the membership department. And then you can simulate: okay, well, what if we changed the way we approached our policy for member renewals? How would that affect the way our business operated, right? And you change that policy, and then all the different components of that digital twin immediately reflect the change, and you can see how it would behave in operating the business.
(25:00):
So it's essentially a very fancy simulation, but it's real time and kind of continuous. So that can be very powerful.
These models are not quite ready for that in the most generic sense, but what you're describing is exactly the kind of thing these models could power. If people are wondering, well, that sounds really interesting,
(25:20):
sounds maybe kind of sci-fi, but how could that be valuable for me? Well, imagine if you modeled a digital twin of your entire membership, where you had the data for every single individual member or organizational member, and we essentially had a digital twin that models each of those people: how they might behave, how they might react to different stimuli, such as
(25:43):
increasing your membership dues, or changing the location of next year's annual meeting, or changing the dates. We have a lot of data that might indicate to us how each person may behave. And sometimes the dynamics are such that one person behaves in a certain way, and other people behave based upon their reaction, because there's influencers in your community.
(26:04):
So if you have 20,000 members, it's not just 20,000 individual decisions that are completely independent. They're actually interdependent. So these digital twins have to account for that, because, you know, one actor in that simulation makes a decision and then others may follow. And that's what happens in real life, right? Like, I say, oh hey, Mallory, we're both members of
(26:25):
Association X, and you're a member and I'm a member, and I know you're going to the annual conference, and you tell me that you're not going anymore because it's going to a city that you don't want to go to. I might say, oh well, I'm not going to go either. Or I might say, oh, Mallory's not going, I'm definitely going to go.
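The interdependence Amith describes, where one member's decision shifts other members' decisions, is the core idea of agent-based simulation. A toy sketch, with every probability made up purely for illustration:

```python
import random

# Toy agent-based sketch of interdependent member decisions: each member
# has a base chance of attending, and "followers" are nudged by what the
# influencers decided. All numbers are invented for illustration.

def simulate_attendance(n_members: int, n_influencers: int,
                        base_p: float = 0.5, nudge: float = 0.3,
                        seed: int = 0) -> int:
    rng = random.Random(seed)  # fixed seed keeps the run reproducible
    influencers_going = sum(rng.random() < base_p for _ in range(n_influencers))
    # Peer effect: followers lean toward the influencer majority.
    lean = nudge if influencers_going > n_influencers / 2 else -nudge
    p = min(1.0, max(0.0, base_p + lean))
    followers_going = sum(rng.random() < p
                          for _ in range(n_members - n_influencers))
    return influencers_going + followers_going
```

A policy change (dues, dates, location) would be encoded as a shift in `base_p`, and you'd compare predicted turnout across scenarios.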
Mallory (26:41):
Okay, it could go either way.
Amith (26:43):
Yeah, so, like, in people's behavior, you can't necessarily predict it with 100% certainty. However, there's a lot to be said for being able to predict with a high degree of certainty, and of course, AI is ultimately a prediction machine. It's just a really good one. So we can do that at that level individually, and then model the
(27:04):
interdependent predictions of how the system starts to behave. Now we can simulate, okay, well, what would happen? And so when the board is thinking through, should we raise dues?
(27:41):
What you're describing and bringing that topic up, I think, is a great application. The models becoming smarter and smarter makes the likelihood of that happening higher, but also makes the quality of those kinds of simulations far better.
Mallory (27:56):
That makes sense. I want to touch briefly on the scaling laws piece before we move on to good old quantum computing. It sounds like the idea is that throwing more compute, both in training and in inference, creates better models. That's kind of the simple overview. In theory it sounds simple enough, right? To be like, well, let's just throw more compute at it.
(28:17):
If that's kind of the bottleneck, let's do more of that. But can you explain what the holdups are at this point in creating more compute power?
Amith (28:28):
Well, I mean, there's a lot of constraints around compute: related to manufacturing, related to energy, related to building data centers, related to having enough people to run those data centers with additional compute. There's constraints in terms of the compute itself, in terms of the actual hardware being able to run more calculations per second in parallel. There's the interconnect, which is essentially like the mini
(28:51):
internets that exist between all the chips in a data center, where from one chip to the next chip they have to communicate at really, really high speeds, and so there's scaling that needs to happen there. So there's lots and lots of constraints in terms of the physical and the economic aspects of compute. All that's being worked on, of course, across all of those different components I just mentioned.
But ultimately, if we were tosay let's just say we had an
unlimited amount of compute andwe threw it all at training,
right, which is what washappening for years in AI, we're
just through more and more andmore compute at training, and in
some cases that meant also moreand more data.
And the question is is thatwill training by itself continue
(29:31):
to scale the way you describedearlier in the pod, which is to
say that an order of magnitudeincrease in compute, aka a 10x
increase in compute.
An order of magnitude increasewould result effectively in a
doubling in power, right?
So a doubling in power is thatlinear increase, whereas in
order to get the linear increasewe need 10x increase in compute
(29:53):
.
Will that hold true forever?
Right?
And we were already starting tosee signs that there might be
limitations to that.
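As a rough back-of-the-envelope illustration of the relationship Amith describes, where each 10x in compute buys roughly one constant step in capability, the shape is logarithmic. The numbers and function names here are illustrative only, not figures from the episode:

```python
import math

# Hedged sketch: capability grows roughly with the log of compute,
# so an order-of-magnitude (10x) increase in compute adds a constant
# step, the "doubling" framing used in the episode. Purely illustrative.
def capability_steps(compute_units: float) -> float:
    """How many order-of-magnitude 'steps' a compute budget buys."""
    return math.log10(compute_units)

base = capability_steps(1_000)      # 3 steps
ten_x = capability_steps(10_000)    # 4 steps: 10x more compute, +1 step
```

The flip side, which is the worry about the first scaling law, is that each additional step costs ten times as much compute as the last one.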
There were algorithmicimprovements that would kind of
extend that kind of like withMoore's law.
People would say, well, youcan't pack an unlimited number
of transistors into a tiny chip.
And then, lo and behold, we'd come up with a new process node in manufacturing that would result in smaller and smaller
(30:15):
and smaller chips and other waysof stacking transistors, like
three-dimensionally and stuff.
So I think there was definitely, and there still is definitely
headroom in terms of the firstscaling law.
There will be a lot of upsidethere.
But lo and behold, along came the second scaling law, which was like, oh, if we give these models a little bit more oxygen in their tank, if we give them a little bit more time to think,
(30:36):
what will happen? And that's when this whole thing with Strawberry, which then became o1, was big news. And then R1, and now the 3.7 edition of Claude.
Really, all you're doing issaying, hey, model, think about
it again.
Does that make sense?
First of all, I come up with aplan, then think about it some
more.
Right, and we wereapproximating that with things
(30:56):
like chain of thought prompting,but still that was just more of
like a better tip to the modelthat was still reacting
instantaneously.
Now the models actually have theability to run multiple
iterations.
Essentially, that's really allthat's happening is the model
itself can iterate and thinkthrough the problem and say
what's the problem?
How should I break it down?
Let me solve it step by step.
Okay, here's the solution topart one.
(31:18):
Here's the solution to part two.
Here's the solution to partthree.
Cool, now let me look at thatsolution.
Does it make sense that I'veput it together, yes or no?
Oh, it doesn't.
Let me go and change part one.
Okay, now does it make sense?
Oh, yes, so now it makes sense.
So now we're good.
That's kind of the way we mightwork through a complex problem
of whatever kind, right, writingan article or doing a math
problem or whatever.
Well, models can now do thatright, prior to models having
(31:45):
this test time compute orinference time compute, scaling,
where they can think more atruntime.
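The iterate-and-check loop Amith walks through can be sketched in a few lines. Here `model` is a stand-in callable for a single LLM inference pass; the prompts and function names are illustrative, not any vendor's actual API:

```python
# Hedged sketch of the draft / critique / revise loop described above.
# `model(prompt)` is any callable returning a string; it stands in for
# one inference pass against an LLM. Names and prompts are hypothetical.

def solve_with_reflection(model, problem: str, max_passes: int = 3) -> str:
    """Draft an answer, then repeatedly critique and revise it."""
    answer = model(f"Draft a step-by-step solution to: {problem}")
    for _ in range(max_passes):
        critique = model(f"Does this solution hold up? {answer}")
        if critique.startswith("OK"):
            break  # the model judges its own answer consistent
        # Otherwise, spend another inference pass revising.
        answer = model(
            f"Revise. Problem: {problem}. Attempt: {answer}. "
            f"Critique: {critique}"
        )
    return answer
```

Each extra pass is literally more inference-time compute spent on the same question, which is the second scaling law in miniature, and it is also the pattern agent builders were wiring up by hand before the models internalized it.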
Essentially, that's what agentbuilders were doing for several
years actually.
So products like Betty and Skipare essentially doing that kind
of iterative, likemulti-inference pass work, and
that's part of how they work.
Now, more of that being done inthe model in theory is great.
There's still value in thesystem or the agent doing some
(32:05):
of that for different reasons.
There's different purposes todifferent kinds of approaches.
The real point I would try tomake is this we're just starting
on the second scaling law.
It's like this isn't really true, but it's like all of a sudden the model developers were like, wow, if we give the model more time to think at runtime or at inference time (inference time is just a fancy way of saying when you ask the
(32:27):
model something), it'll give you a better answer. And sure enough, it works better.
It's just like we producebetter answers if we have a
little bit of time to thinkabout it.
So there's a lot of upsidethere.
There's lots and lots ofheadroom for this second scaling
law, which is also, by the way,why folks at inference
computing companies like Groq, with a Q, and others are so
(32:49):
focused on saying that there'sgoing to be this ridiculous
surge in demand for inference,computation more so than
training, because the basemodels are pretty darn smart
already.
What happens if we throw acrazy amount of inference at it?
Well, you get reallyinteresting results.
So that's.
What's exciting about this isthat you know these are
engineering problems.
They're not even scientificdomain problems, meaning like we
(33:11):
don't need a breakthrough inthe algorithms in order to
produce radically better AI.
Like the next couple of years,you're going to see breakthrough
after breakthrough, which are really more engineering breakthroughs. DeepSeek R1, for example, was an engineering breakthrough.
You're also going to see somescientific stuff happen, which
is cool.
But in any event, I think thesecond scaling law, the main
takeaway for our associationlisteners who are not so
(33:33):
interested in the technicaldetails, is this Remember that
what you have in your hand today in terms of Claude 3.7, as cool
as it is, or Grok with a K, theGrok 3 model that came out,
which has some similarattributes remember that it's
the worst AI you will ever use,full stop.
(33:53):
So three months from now, sixmonths from now, 12 months from
now, the AI that you have atthat point will be dramatically
better, so why does that matter?
It doesn't mean that the AI youhave now is actually bad.
It's staggering how powerful itis.
However, what you need to thinkabout is what will you be able
to do if you have Grok 5 orGPT-7 or Claude 5 or whatever
(34:18):
these guys choose to name theirnext generation of models?
That's a really, reallyimportant thing to be thinking
about.
So one way to frame that as alittle thought exercise for
yourself is this: what can you do now with Claude 3.7 that you
couldn't do before?
So go figure that out, playwith it and learn how to do some
stuff.
There's plenty of examples.
You mentioned Ethan Mollick,who we talk about a lot and
(34:39):
follow his work.
He does a lot of this type of thing, saying you can now do X, Y and Z that you couldn't do with GPT-4o.
Then put yourself in a timemachine and go back to 12 months
ago, or even six months ago,and say, hey, what were you
struggling with?
And then think about it and sayif you had known back then that
GPT-4.5 or 5.0 or, in this case, Claude 3.7, was available in
(35:04):
late February 2025 and it would do these things, how would that change the way you planned what you would do in your business, right? And then think about the gap in capability, then try to project that forward
and say, okay, well then, sixmonths from now, I will have
this next set of capabilities.
What will I do with it then?
Because undoubtedly, you willrun into things that Cloud 3.7
(35:25):
cannot do.
So, rather than like throwingup your hands in the air and
saying I'm frustrated, I can'tdo this, put that on a list and
come back and test it again intwo months, three months, six
months time, and most likely,you're going to see that these
future models will be able tosolve your problem.
Mallory (35:45):
Moving on to topic two
of today, we're talking about
Microsoft's Majorana One, butbefore we dive into that, I
think it's important for us totalk a little bit about quantum
computing at a high level.
Don't run away, don't turn off the podcast just yet.
This was a learning experiencefor me too.
But quantum computing is anadvanced field of computer
(36:07):
science that harnesses theprinciples of quantum mechanics
to perform complex calculationsand solve problems that are
beyond the capabilities ofclassical computers.
So, unlike traditionalcomputers that use binary bits,
(36:31):
zeros and ones, quantumcomputers utilize quantum bits
or qubits, which can exist inmultiple states simultaneously
due to a phenomenon calledsuperposition.
So superposition is how qubitscan represent both one and zero
at the same time, allowingquantum computers to process
multiple possibilitiesconcurrently.
Also, there's entanglement and interference, which I'm just going to talk about really briefly. With entanglement, qubits can be correlated in such a way that
(36:52):
the state of one qubit instantly affects the state of another, regardless of distance. And interference is how quantum waves can amplify or cancel out certain computational outcomes, helping to arrive at solutions more efficiently.
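One way to make superposition concrete is to do the bookkeeping directly: a qubit's state is a vector of complex amplitudes, and measurement probabilities are the squared magnitudes of those amplitudes. This is standard quantum-mechanics math, a simulation sketch rather than anything tied to real quantum hardware:

```python
import numpy as np

# Basis states |0> and |1> as amplitude vectors.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Equal superposition: nonzero amplitude on both |0> and |1> at once.
plus = (ket0 + ket1) / np.sqrt(2)
probs = np.abs(plus) ** 2  # measurement probabilities: 0.5 and 0.5

# n qubits need 2**n amplitudes; this exponential blow-up is part of
# why classical machines struggle to simulate large quantum systems.
two_qubit = np.kron(plus, plus)  # 4 amplitudes for 2 qubits
```

At 50 qubits you'd already need about 2^50 amplitudes, which is roughly a petabyte of state for a classical simulator, and it doubles with every qubit added.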
So I have spent a lot of timelooking into quantum computing
(37:13):
and it's one of those things forme, amit, where I watch a video
on it and while I'm watching itI think, yeah, yep, this all
makes sense, I get it.
And then leaving the video andthinking about having to explain
that to someone else is where Irealize perhaps I don't get it
as much as I thought.
So it sounds to me like quantumcomputing.
Let me backtrack.
It sounds to me like there aresome problems on Earth,
(37:33):
challenges, issues that we don'tunderstand, that couldn't even
be solved by the biggestclassical computer in the world.
Like we could have a computerthe size of the Earth, it could
still not solve some of theseproblems that quantum computing
could solve.
Does that make sense?
Amith (37:50):
Yes, that's correct, and
I will preface what I'm about to
say on this that I amdefinitely not anywhere close to
an expert on quantum computing.
In fact, I know very littleabout it.
I find it super fascinating,but my level of knowledge on
this topic is definitely stillin the novice category.
You know, when I talk to peoplewho are in this field, it's
(38:10):
hard to just even understandwhat they're saying.
A lot of times especially whenthey get excited.
So it's pretty cool because thisis definitely very sci-fi like.
So if you're a fan of Marvelmovies, go watch Ant-Man again.
You know there's a lot ofquantum discussions there, but
what I would tell you is it isabout being able to consider
multiple possible things inparallel at the same time.
(38:32):
We can kind of approximate thatnow with parallel processing
with GPUs.
So I want to talk about thatfor a minute and compare and
contrast GPUs with quantumprocessing.
So a GPU or graphics processingunit, which is what powers AI,
is really good at mathematicalcalculations that are executed
(38:53):
in parallel.
But they're executed inparallel because you have
thousands and thousands ofseparate little GPUs within the
big GPUs.
So it's like you have the GPU,but that really means you have
10,000 or 20,000 or 100,000cores and each core is capable
of doing just one calculation ata given moment in time.
(39:14):
So GPU is massively parallel,but that's because it has lots
and lots and lots of tiny littleprocessors in there that are
really good at thesecalculations.
So it's not actually trulydoing like multiple things at
once with the same physicalpiece of hardware.
If you break it all down, a GPU is a package of many thousands of tiny cores. A CPU, by contrast, might have a dozen cores or 50 cores, but GPUs are where
(39:51):
we've seen this explosion ofcores and that's, you know.
That, coupled with the factthat they do something called
floating point operations reallywell, is why they're so suited
for graphics, but also that sametype of math that powers AI.
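The "thousands of tiny cores doing the same floating-point math at once" idea can be illustrated with a vectorized operation. numpy on a CPU is only an analogy here, but the shape of the computation, one instruction applied across many values simultaneously, is what a GPU does across its cores:

```python
import numpy as np

# One fused multiply-add applied across 8 values "at once".
# On a GPU, each element could be handled by its own core in parallel;
# numpy's vectorization is a CPU-side stand-in for that idea.
weights = np.arange(8, dtype=np.float32)   # [0, 1, ..., 7]
inputs = np.full(8, 2.0, dtype=np.float32)

out = weights * inputs + 1.0  # elementwise: out[i] = weights[i] * 2 + 1
```

The same multiply-add pattern, scaled up to billions of elements, is the floating-point workload that makes GPUs suited to both graphics and the matrix math behind AI.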
Now, in comparison, quantumactually is doing multiple
things in parallel with the samequbit, right?
You're saying, hey, it can bezero and one at the same time,
(40:15):
and so the concept.
Again, this is not somethingthat I can actually explain at
any level of depth, but if yoususpend disbelief, as weird as
that sounds and that soundsreally freaking weird, right?
Really really strange, becausehow can one thing be two things?
But the idea is essentiallythat there are multiple parallel
states that exist in thenatural world that we just can't
(40:37):
contemplate because, frankly,the vast majority of humans,
including me, like can'tunderstand this, right, but in
actuality, the states areactually in parallel, and so
that's why in like sci-fi, youhear about multiverses and all
this other stuff, or you know,there's actually quite a bit of
scientific, like research andfoundational thinking that
(40:59):
support the idea that that maybe a true thing.
There isn't, like anythingthat's proven that, obviously
yet.
But um, but the concept and thefundamentals of what you're
talking about with quantummechanics are part of that, and
so the idea of qubits being zero and one at the same time, that even
entanglement which is superinteresting, where this has
actually been shown in the lab,where, you know, at remote
distances of thousands of miles,you can flip the position of a
(41:22):
qubit and you can see theentangled qubit in another lab
thousands of miles awayinstantly change and you can
prove that it's quantumentanglement, because it's
faster than the speed of light.
Right, because the speed of light is, as far as we know, the fastest that anything, including photons, can travel, but at the same time, what we're talking about here is instantaneous, right.
(41:44):
It's not discernible that timehas passed when these qubits
have changed because they areinterwoven that way.
So again, I have no idea howthat actually works, but that is
a concept that's been provenand quantum computing is
building on that.
So I guess the point I would make is this: this overview is cool, but what we're about to talk about in terms of Microsoft's breakthrough really makes it real, because over the last
(42:06):
several decades, as the researchin this field has gone from
pure theory to very, verysimplistic basic prototypes,
like Google's been big in thisspace.
You've had environments whereyou've had a handful of qubits
on a so-called chip, but thechip was really like a giant
machine.
And so what we're seeing is thatif a qubit can represent
(42:26):
multiple states simultaneouslyand therefore can have these
properties that allow formassive parallelization, way
beyond what a GPU can do, whichessentially just approximates
true parallelization, right, wecan solve problems in the
natural world and possiblyelsewhere that we just can't
solve in any reasonable amountof time.
So there are certaintheoretical, mathematical
(42:48):
problems that can be solved withquantum computers, that can't
be solved with classical.
But think about somethingthat's a little bit more
tangible, like weatherforecasting.
We can do that right now, butwe're making essentially a lot
of shortcuts happen.
We're using a lot ofapproximations, a lot of
summarizations.
What if you could essentiallypredict the change in the state
(43:11):
of every atom in the atmospherein parallel, simultaneously at
scale, and then have essentiallylike 99.99% accuracy in weather
forecasting?
Right, what would that mean?
So that requires just enormityof calculations that happen in
parallel and they're interwovenin this interdependent state.
So that would take an infiniteamount of time.
(43:34):
Essentially, if you added upall of the classical computers
together that we have on earth,you wouldn't be able to solve
for that.
Nobody even tries to solve forproblems like that.
They create heuristics or shortcuts, which are very helpful, but they're designed to solve problems with the tools we have today, which are classical computers.
Now the last thing I'll say,really quickly, is, before you
(43:55):
get into the Microsoftbreakthrough, which I think is
fascinating, is that classicalcomputing doesn't go away.
When quantum computingeventually does come online
which, to be clear, is not aboutto happen tomorrow, it's, you
know, maybe by the end of thisdecade at the earliest.
But classical computing basedon zeros and ones with
traditional bits, whether it's GPUs, CPUs, a mixture thereof,
(44:18):
will be around for a long timebecause it solves a set of
problems that quantum computingis actually not well-suited for.
So, for example, things likehaving a database that tracks
your customers.
You don't need a quantumcomputer for that and in fact a
quantum computer would be fairlyill-suited for that kind of
deterministic type of process.
So it's an interestingscientific concept.
The reason we chose to featureit here that I think is worth
(44:40):
just mentioning also before wego further is it's a cool
scientific breakthrough thatwill have ramifications on AI,
on the world and, of course,your association as a byproduct
of that.
And it's not that far off, it'snot like it's 50 years from now
.
It's likely to happen,certainly in our lifetimes,
probably within our careers, andquite likely, like within the
(45:01):
next five to 10 years, there'llbe some commercial impact from
these technologies, based onwhat you're about to describe
Microsoft has revealed to theworld.
So I'm excited about it forthat reason, because it's
actually a practical realitythat we're going to be facing
fairly soon and it simplyaccelerates what we're already
seeing with AI.
Mallory (45:19):
Well, amit, if you're a
novice and you just said all
that, I think I'm like a cavemanwhen it comes to quantum
computing, but I'm gettingbetter, all right.
Microsoft's Majorana 1 quantumchip utilizes a groundbreaking
new state of matter called atopological superconductor,
which is artificially createdand does not occur naturally in
(45:40):
the universe.
This novel state of matterforms the foundation for a
potentially revolutionaryapproach to quantum computing.
So this topologicalsuperconductor material enables
the creation of topologicalqubits, which Amit was just
talking about.
These are the building blocksof Microsoft's quantum computer.
These qubits are designed to bemore stable and error-resistant
(46:03):
than traditional qubits,potentially solving one of the
biggest challenges in quantumcomputing maintaining quantum
states long enough to performcomplex calculations.
The technology could be appliedto solving complex optimization
problems in various industries,accelerating drug discovery and
material science research,enhancing cryptography and
(46:25):
cybersecurity measures, andsimulating quantum systems for
advanced scientific research.
So this is profound for manyreasons, but I'll cover just a
few of those One.
It represents a significantadvancement in our understanding
of quantum physics and materialscience.
It could dramaticallyaccelerate the development of
practical, large-scale quantumcomputers, potentially bringing
(46:48):
them from the realm of theoryinto reality within years, as
Amit said, rather than decades.
If successful.
This technology could enablesolutions to complex problems
that are currently intractablefor classical computers,
potentially leading tobreakthroughs in fields like
climate modeling, financialanalysis and, of course,
artificial intelligence.
(47:15):
The ability to create andcontrol this new state of matter
demonstrates a level of atomicscale engineering that was
previously thought to beimpossible, opening up new
avenues for technologicalinnovation.
The development represents aconvergence of fundamental
physics, material science andcomputer engineering that could
reshape the landscape ofcomputing and scientific
research in the coming years.
So I think I'm glad we kind of broke that into two parts, Amith.
(47:37):
I feel like now that makes a lot more sense to me after what you said.
Why are you so excited aboutMajorana 1?
Amith (47:47):
To me, what it represents
is the scaling of the number of
qubits and kind of theresilience of them.
You know you mentioned thisearlier, but the quantum compute
that we've had, you know, thusfar from other labs, including
Google, have been veryinteresting to look at, but
they've been, I think, on the order of hundreds of qubits at the largest.
(48:08):
I don't remember exactly thenumbers, but very, very small,
and Microsoft's talking aboutbeing able to put a million
qubits on a single chip that canfit in the palm of your hand.
Now, granted, the machinethat's required to run that chip
is quite substantial in size,but, as is the case with all
things, as they get simpler andwe get smarter about them
putting things into production,it'll likely shrink down, but a
(48:29):
million qubits is enoughhorsepower basically to do some
of these extremely complexcalculations, if the qubits are
stable enough the way youdescribed.
So, you know, let's talk aboutone of the fundamental problems
we have.
You know people talk a lot aboutAI in a negative way in the
context of several dimensions.
One is AI safety, which I thinkis a super interesting and
(48:52):
really important topic.
As much of a proponent of AI asI am, I'm also one of the
people out there saying, yeah,it's going to be a big problem,
we have to be on top of it.
I don't think that means we canor should try to slow AI down,
but it means we have to beconstantly thinking about
frameworks, governance, training, all sorts of things to make AI
safe and use it responsibly.
(49:12):
The other dimension of AInegativity that I hear
frequently is energy consumption.
It is true that the datacenters that are used to train
AI models and the data centersthat are used to run AI models
are ridiculously energyintensive.
Now they are getting moreefficient all the time.
The chips that are being builtbecome smarter, faster, more
(49:32):
powerful.
The models are becoming smalleras well, so there's going to be
some help on the way, so tospeak, based on the progression
of the industry.
However, demand is growing at aridiculously fast pace, like
we've covered in prior episodes,where, as the quality of the
product increases and the costgoes down by orders of magnitude
, kind of all at once, demandexplodes, because the utility of
(49:54):
AI is enormous.
So energy consumption is aproblem.
We've got to figure that out,and so people talk about the
impact on climate, how not alldata centers are being run in a
green way, and the reality is iseven the ones that say they're
run green, they really can'tright now, because they need way
more energy than they can getfrom sustainable sources in most
cases.
So what do we do about thisright?
(50:14):
So when you have breakthroughsin AI and if you have
breakthroughs in quantum,potentially, potentially you can
also solve for some of thecurrently intractable problems
in energy generation.
Probably the grand prize in allof it is fusion, and you think
about, like, what are some ofthe problems stopping fusion
from being a reality?
There's a lot of fundamentalscience that has to happen to
(50:37):
make fusion.
You know, a household device.
I don't know if we'll ever getto the point where you have,
like, a fusion reactor in yourneighborhood, but maybe and in
order to do that, there's a lotof science that has to happen.
In order to do lots of science,you know, to do it in parallel,
to do massive amounts of it, totest a lot of hypotheses.
You need a lot of creativeideas, because science always
starts with creativity.
It starts off with curiosity,it leads to creativity, it leads
(51:00):
to hypotheses and then peopletest these things out.
The scientific method worksreally well.
It's just kind of slow.
And so what if we could, inparallel, simulate and test
millions or billions ortrillions of different ideas?
Right, and with quantum, youhave the computing horsepower
and with AI, you havepotentially the creativity?
You need to do a lot of this.
So the mixture of the two couldlead to breakthroughs in fields
(51:23):
like fusion or maybe other waysof thinking about solar or
geothermal or wind power orwhatever to make things more
efficient, and thinking aboutthings like material science.
We've talked previously about room-temperature superconductive materials that transmit power more efficiently, with less loss of energy after we generate it. That's another grand challenge of material science.
And so this type of compute,coupled with the intelligence
(51:47):
that AI is giving us, you knowyou have unlimited intelligence
on tap, and if you haveunlimited raw compute on tap for
these styles of problems thatrequire massive parallelization,
you can probably solve a lot ofproblems.
You know, because it's going tofeel like you know, the current
things that we're doing toconduct scientific discovery are
going to feel like you know,pre-computing right, it's going
(52:09):
to feel like it's from the 1500sor something, or even further
back compared to what we'reabout to get.
So that's what's so excitingabout this it's going to lead to
these new discoveries.
That's going to lead to acompounding effect of helping us
then fuel more discoveries.
Of course, the reason I'm soexcited about AI is that
intelligence on tap this waywould lead to, you know, a
greater compounding effect thanany of the other things.
(52:30):
But if you think, for example,could we solve for fusion with a
combination of really advancedAI let's say, ai that we have
five years from now, coupledwith the first generation of
truly useful quantum, maybe wecould.
And if we can solve for energy,pretty much everything else is
downstream if you think about it.
So what are some of the otherfundamental problems we have on
(52:51):
earth?
Well, we have problems withaccess to clean water.
But if you can solve for energyand you can do it in a
carbon-free way, right, in asustainable way and in a
low-cost way or potentially afree way at scale, you can solve
for clean water.
You can solve, then, for food,because you know land that's
suitable for agriculture islimited, but how can you solve
(53:12):
for that?
Well, we know actually how tosolve for that in a lot of ways
with things like vertical farms,but we need more advanced ai,
we need more advancedmanufacturing, we need more
materials, we need more energy,we need more water.
So, again, energy is kind oflike this input that, if we can
solve for lots of other, goodthings happen.
So I get excited about thesefundamental things because I
kind of visualize the downstreameffects solving a lot of
(53:34):
fundamental problems we havearound the globe and it gets
exciting, right.
And so why do we want to talkabout this with associations?
Well, association folks, youguys are citizens of planet
Earth, just like Mallory and I,and so it's important to be an
educated citizen, in my opinion,and at the same time, hopefully
this is interesting, hopefullythis gets you excited, but also
(53:55):
hopefully gets you thinkingabout what happens to your space
, to your industry, to yoursector, to your profession if
these things come online.
Maybe not even in a few years,maybe in 10 years time.
What does that mean?
So that's the future we'reheading into.
I think that you know the waywe see our job at Sidecar is to
help break these ideas down in away where in some cases like
(54:15):
Claude 3.7, it sounded kind of cool when we were covering it,
but compared to this thing, itseems kind of simple, right, but
we need to cover things thatare immediate and obvious and
useful to you today, but alsothink about where we're heading.
And one of the best ways tothink about where we're heading
is to try to put our heads inthese environments, with people
who are trying to build thatfuture, and then try to distill
(54:37):
it down and say what does thismean?
And we're going to be wrong waymore than we're right about the
timing, about what these thingscan and can't do, but the fact
that these things are happeningright now in front of our eyes
is truly remarkable.
Mallory (54:48):
Yeah, this is
incredibly exciting.
One because I feel like I havea much better grasp of it and
two because, correct me if I'm wrong, but it seems like generally we can be pretty bullish on quantum computing.
It almost sounds like it couldjust solve all the major
problems we have in the world.
Is there anything scary we needto be worried about, like with
AI taking over?
It sounds like quantum is justgood.
Amith (55:10):
Well, yes, we should be
excited.
However, all technologies thathave suitable sufficient power
are dual use.
So what could bad guys do withquantum?
If it came online and it washighly available, and if you mix
really really good AI with thehorsepower of quantum,
potentially really bad thingscould be done as well.
Right, you know, quantum cryptography, or quantum-proof
(55:32):
cryptography, is a major area of research, because when quantum comes online, things like current blockchain, you know, what is fundamental to why Bitcoin is a secure digital currency, it's gone.
The current form of encryptionis crackable instantly by
quantum computing, even at amuch smaller scale than a
million qubits.
So there are a lot of reallysmart people in cryptography
(55:55):
that are figuring out how tocreate quantum proof algorithms,
which isn't so much aboutmaking keys dramatically longer
that actually doesn't work,because then classical computers
won't be able to encrypt anddecrypt, because you know
quantum can do all the classicalalgorithms so much faster but
rather coming up with algorithmsthat quantum computers can't
solve, which is a category Ireally can't speak to, other
than to say that I find itfascinating.
(56:17):
That's a problem, right?
So if quantum comes onlinebefore we have mainstream
quantum proof encryption, we'vegot some major issues and, aside
from Bitcoin, you knowgovernment secrets and your
association secrets, right Likeeverything, all of a sudden
becomes public, so we have tosolve for that.
There's a lot of work happeningthere and, in general, if you
(56:37):
think about the combination ofreally smart AI with limitless
parallel computing power forscience, it's tremendously
exciting and it could be usedfor extreme bad purposes as well
.
So we have to have our eyeswide open with this stuff.
But I think that you know fornow that part.
You know it's a practicalreality with AI, that you know
(56:58):
Claude 3.7, DeepSeek, these are tools that anyone on the planet
can use, basically for anything,in spite of, like, the supposed
guardrails that exist in thesemodels.
It's been proven that even the publicly available models from the major labs can very quickly be changed in order to suit ill intentions.
So we have an issue with AI,and quantum is kind of a
(57:18):
multiplier for that in a sense.
Mallory (57:21):
Yeah, well, it was nice
while it lasted.
Now I'm thinking about all thebad stuff.
Well, if you're listening stillwith us, the Sidecar Sync pod
could be like a quantumcomputing pod in the future.
I guess we will see how thatgoes.
Everyone, thank you so much fortuning in.
We will see you next week.
Amith (57:40):
Thanks for tuning in to
Sidecar Sync this week.
Looking to dive deeper?
Download your free copy of our new book Ascend: Unlocking the Power of AI for Associations at ascendbook.org.
It's packed with insights topower your association's journey
with AI.
And remember Sidecar is herewith more resources, from
webinars to bootcamps, to helpyou stay ahead in the
(58:03):
association world.
We'll catch you in the nextepisode.
Until then, keep learning, keepgrowing and keep disrupting.