Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Forget about what's happening to you and your association. What will people in your sector look for from the association? That's the open question. Figure that out and then go build that.
Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven
(00:23):
by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, chairman of Blue Cypress, and I'm your host.
Greetings everybody, and welcome to the Sidecar Sync, your home
(00:46):
for content at the intersection of all things associations and semiconductors. Actually, I meant to say artificial intelligence, but today we are going to be talking a little bit about semiconductors, and it's going to be a lot of fun, and we'll tie it back to AI. We'll tie it back to all the things you're used to hearing on the Sidecar Sync, so we'll jump right into that momentarily. My name is Amith Nagarajan.
Speaker 2 (01:08):
And my name is.
Speaker 1 (01:08):
Mallory Mejias, and we're your hosts. And before we jump in, here's a quick word from our sponsor.
Speaker 2 (01:15):
If you're listening to this podcast right now, you're already thinking differently about AI than many of your peers. Don't you wish there was a way to showcase your commitment to innovation and learning? The Association AI Professional, or AAIP, certification is exactly that. The AAIP certification is awarded to those who have achieved outstanding theoretical and practical AI knowledge
(01:38):
as it pertains to associations. Earning your AAIP certification proves that you're at the forefront of AI in your organization and in the greater association space, giving you a competitive edge in an increasingly AI-driven job market. Join the growing group of professionals who've earned their AAIP certification and secure your professional future
(01:59):
by heading to learnsidecarai.
Amith, the intersection of associations and semiconductors. I never thought we'd be talking about this on the podcast, but I mean it when I say I'm very excited to have this conversation.
Speaker 1 (02:14):
I am too. I've been looking forward to this. I kind of geek out about this stuff, and it's fun to understand a little bit more about what goes on under the hood, so to speak. You think about what actually powers AI, and what powers really most things in the world: it's chips. And how are they made? Who makes them? These are questions a lot of people really don't know the answer to, and it's a complicated answer.
(02:36):
So I think that right now is a perfect time for us to spend a little bit of time unpacking that, because, you know, we are in this really interesting moment in time globally. We're at this interesting moment in terms of globalization or deglobalization, is what I mean by that, and also with respect to what has been really the leader of the pack for many, many decades in the United States,
(02:58):
Intel, no longer being what they once were as an industry titan, really having fallen from that, and what happens there. So I think that's going to be a really great conversation to have, so I'm excited about it. But you know, semiconductors power AI. We talk a lot about exponential growth. Yesterday at the Innovation Hub in Chicago, I had the pleasure of opening the event and talking about exponentials, and you know,
(03:21):
a big part of it is the underlying advancements that have happened in semiconductor design, but also the manufacturing process. So it all ties together in this way.
Speaker 2 (03:32):
I'm excited to unpack this, but first, you mentioned the Innovation Hub in Chicago, which I haven't really talked to you about yet, so this will be the first time I'm hearing about it. You said you kicked off the event. You also said you talked to a good few people about the podcast, this podcast. So how did it go? Like, what were the vibes at the Innovation Hub Chicago?
Speaker 1 (03:56):
It was awesome. I had a number of people tell me that they really enjoyed listening to the podcast, which I know makes both of our days when we hear that, and people were saying, yeah, it's a helpful way to stay in sync with the latest that's happening in the world in a contextually relevant way for associations, which is, of course, our goal, to be with you on this journey, and that was awesome. And then, as far as the attendance, we had record attendance both in DC two weeks prior and in Chicago.
(04:19):
These started out as really informal, small community gatherings about three years ago, and this is the third time we've done this event. The idea was, hey, let's get together in the springtime in both Chicago and DC just to have some small local events, get the community together and share ideas around innovation, and we thought that would also be a
(04:39):
good counterweight to Digital Now being on the calendar every fall, and so it's an opportunity to engage our community face-to-face. And both in DC and Chicago, we actually had quite a number of people fly in for the event, which I wouldn't say was surprising necessarily; it was just a delight to hear that people were making that investment in time and dollars and energy to join us, because they were saying the value was
(05:02):
so strong. So that was awesome. What I would say is similar to how I felt about the DC event a couple of weeks ago: people are doing things. They're not just talking about things. A year ago, some people were doing things, of course, but most associations were kind of just testing the water a little bit and maybe starting to learn a little bit, but not really doing things.
(05:23):
Now I see associations actually running active experiments and starting to deploy technology at scale and seeing impact: seeing members tell them that their quality of service is better, seeing their engagement statistics improve, seeing their financial results improve as a result of improved service or services that they've been able to deploy that would not have been possible without AI.
(05:44):
So to me, that's really gratifying to see people do that in the community. Of course, I do realize that our community of folks that are coming together, that watch our content and come to our events, are, you know, self-selecting, in that they're probably a little bit ahead of the general curve of the association market's adoption. Nonetheless, I think it is, you know, setting great examples
(06:05):
for associations who, you know, may be a little bit less advanced at the moment in terms of their AI journey.
Speaker 2 (06:11):
Yep, yeah, that's great to hear. My follow-up question was going to be: did you have the same takeaway, that associations are actually doing this innovative work, that they're actually working with artificial intelligence? But it sounds like you did. Were there any kind of distinctions between the DC and Chicago events?
Speaker 1 (06:26):
It was much colder.
Speaker 2 (06:27):
Okay.
Speaker 1 (06:28):
So Chicago is colder. It was early April, and I thought, oh, it's going to be great, it's going to be beautiful early spring weather in Chicago. And actually, I would say on the day of the event it was a beautiful morning. It was about 29 degrees, but it was crisp and the sky was blue and it wasn't super windy, so it was beautiful. And we were very happy to have the event hosted by the American
(06:50):
College of Surgeons at their beautiful office on the top floor of their building. They have this unbelievable conference space overlooking the city, and you can see the lake, so it's a beautiful space. Everyone seemed to enjoy that. Being in Chicago is always fun. I love the city. It's a great place to go. Tons of wonderful association things there. It's just a fun place to hang out. I was there a few days early, caught the Cubs home opener at
(07:13):
Wrigley, which was super cool, and then my son was in town as well, checking out colleges. We're taking a look at some universities up there for him, so it was a great trip all around.
Speaker 2 (07:23):
Awesome, awesome. Well, everybody, we won't have another round of Innovation Hubs until the spring of next year, but be on the lookout for those dates and sign up if you want to attend. All right, our topics for today. Well, you know, we're talking semiconductors. Particularly, we're talking the Intel-TSMC joint venture, which is kind of shaking up the semiconductor industry, and then
(07:45):
we will be talking about the release of Llama 4, which we're really excited about. So I've got to give you a little bit of deeper background, I think, for the Intel-TSMC joint venture, and then Amith will kind of do the same in our conversation after. So, bear with me. Before we dive into this industry news, I want to just make sure we're starting with kind of the basics and the foundations.
(08:05):
What is a semiconductor? A semiconductor is a material that has electrical conductivity between that of a conductor, like metal, and an insulator, like glass. These materials, usually silicon, form the foundation of modern electronics. When precisely engineered, with various elements added to them, they become the microchips that power everything from your
(08:26):
smartphone to supercomputers.
So now that we've got an understanding of semiconductors, let's talk about the business models that have been at play historically. Traditionally, the chip industry has operated under two main approaches. First, there is the integrated device manufacturer, or IDM, model. Companies like Intel both design and manufacture their
(08:48):
chips in their own factories, which are called fabs. They control the entire process, from concept to finished product. Second, there is the fabless-foundry model, which separates these functions. So fabless companies like NVIDIA, AMD, Apple and Qualcomm focus exclusively on designing chips, while foundries like TSMC, which
(09:10):
is Taiwan Semiconductor Manufacturing Company, and Samsung specialize in manufacturing those designs for others. About 20 years ago, more or less, we saw a major industry shift as companies like NVIDIA chose to go fabless, focusing all their resources on design while outsourcing manufacturing to specialized foundries like TSMC.
(09:31):
This approach reduced capital requirements dramatically, since building and maintaining modern chip fabrication plants costs tens of billions of dollars. It also allowed for faster innovation cycles and let each company focus on their core strengths.
TSMC emerged as the dominant foundry, especially for cutting-edge manufacturing processes. Intel, however, maintained its traditional IDM model.
(09:54):
While this offered certain advantages, Intel increasingly struggled to keep pace with the specialized expertise of TSMC's manufacturing capabilities.
That context is essential for understanding what might be the most ironic twist in semiconductor history: Intel's financial crisis may be over, with support from its biggest rival.
(10:14):
Intel and TSMC have reached a preliminary agreement to form a joint venture aimed at revitalizing Intel's struggling chip manufacturing business. Under this partnership, TSMC will take a 20% stake in Intel's US-based chip fabrication facilities, with Intel and other US investors controlling the remaining shares. This represents a seismic shift in the semiconductor landscape.
(10:39):
Intel, which has faced declining revenues and technological setbacks, reported a staggering $13.4 billion operating deficit in its foundry division in 2024 alone. Despite efforts to transform under its IDM 2.0 strategy, Intel has struggled to compete with TSMC's advanced manufacturing capabilities.
(10:59):
This joint venture has huge implications for chip production capacity and ensures the US continues to have meaningful manufacturing capacity for advanced chips. With geopolitical tensions rising, particularly around Taiwan, where TSMC is headquartered, establishing robust American manufacturing capabilities has become a national security priority.
(11:20):
So, Amith, that's a lot of context. You sent me an email with this topic idea maybe a few days ago, and you kind of included a long blurb with it to help explain this to me, because I was not super familiar with this whole industry prior, and even with that email, I was a bit confused. So I went on a deep dive. I listened to the Acquired podcast episode on TSMC, which
(11:42):
is fabulous. It is two and a half hours, and I know what you all might be thinking, like, I don't know if I'm going to listen to that. It's really interesting how they frame the whole story of the founder of TSMC. So, Amith, I want to get your initial thoughts, but I kind of also want you to lay out the landscape on what this industry has looked like in recent history.
Speaker 1 (12:03):
Well, I think you did a great job providing an overview of the competing models, where you think about what is the fundamental business model and what has worked historically. And if you go back in time several decades prior to the era that you're describing, you know, which really kind of started in the 90s, NVIDIA, you know, at the time was a startup, and they started off as a fabless chip company; they
(12:26):
never had their own plants, and many other companies followed suit. And so these companies, as fabless chip designers, were kind of this first generation that were able to go out there and essentially leverage outsourcing. And at the time there were arguments against that, saying, well, if you're not fully integrated, vertically integrated, where you have your manufacturing, you have a
(12:48):
weakness, you don't have an advantage there. But it allowed you to do what you described, which is the speed at which you could innovate when you have those separations of concerns. Another way to put it is essentially specialties, right, areas of focus, and you know, people have been doing various forms of specialization of labor
(13:09):
since the beginning of time, and that's something you're seeing play out here in chips.
So Intel, prior to that era, when fabless manufacturing became a thing, was, you know, really the dominant chip manufacturer, for the microprocessors specifically that powered the PC revolution and well into the internet boom. And their IDM model of both designing chips and then
(13:31):
manufacturing chips was what AMD, their only other competitor in that particular category, followed as well. Right, they did the same thing up until actually the late 2000s, when AMD spun off their entire manufacturing business, and we can talk more about that later. But the essence of that shift is really interesting to think about, because you had a company that had tremendous advantages
(13:55):
in scale. They had advantages in process, right, because Intel was known as being extraordinary in its manufacturing process, and then, over a period of time, they started to have a decline, even though they had, effectively, a monopoly. Not exactly: there was kind of a designed duopoly, with AMD being kind of a manufacturer that had licensing access to
(14:16):
their IPs. They could make, you know, basically, these things called x86-compatible chips, which is why you could get a PC with either AMD or Intel CPUs for a long time. And that was really a government satisfaction thing, so that Intel wouldn't be broken up.
But the essence of what I'm describing, though, is a shift in an industry, which is worth considering, because part of
(14:37):
what was going on that enabled TSMC to rise is a maturity of the industry, the acceleration of the underlying technology that made it possible to manufacture chips at scale. Prior to that era, the chip manufacturing process was so incredibly specific to the exact chip you were building that it
(14:58):
was very, very difficult to consider, hey, I'll be able to manufacture chips for someone else, because every single piece of equipment was super custom. It was very early days of that. So as that process became more and more sophisticated, and the machinery became more adaptable and more capable of producing different kinds of semiconductors, it made it possible to say, hmm, what
(15:20):
if we actually had a separate sector focused on the fab process, which is the manufacturing process? And Intel completely missed it, right? So Intel, as strong as they were, really relied upon an aging architecture, and while that architecture still powers the vast majority of PCs, they completely missed out on GPUs,
(15:42):
which are the math processors that power graphics, but also power AI, which is a similar mathematical operation. And they obviously had this fundamental issue in their business model of IDM versus going fabless.
It's relevant to the AI revolution that we're in the midst of right now, but also because it represents an
(16:05):
opportunity to reflect back on a dominant player. Not that long ago, people would have thought of Intel to be a surefire blue chip stock, and now we're talking about a preliminary agreement, but it's
(16:26):
not that far off from that in some ways too, which we'll talk more about in a sec. But I think associations need to pay attention, because in many respects, they're the dominant player in their space. They have a business model that's existed for decades, or in some cases over a century, and they also live in a world that is rapidly changing. And so do they want to be the Intel, or do they want to
(16:48):
potentially adapt and find a better model because the environment's changed? That's the main reason, actually, I wanted to talk about this. I think the technology is interesting. I think the sector is something we can all benefit from learning more about. Most people know very little about semiconductors, very little about all this stuff, so that'd be an interesting kind of educational topic. But to me it's more about, hey, wait a second.
(17:09):
There's a company where, even 10 years ago, few people would have predicted the decline of Intel to this extent. And now here we are.
Speaker 2 (17:17):
Yeah, man, I think this stuff is so fascinating, because it's easy to look back with hindsight and say, well, yeah, it makes sense that we would break this out into chip design and chip manufacturing. It doesn't seem that novel from our perspective, but listening to that Acquired podcast that I mentioned, you realize that at that time, when TSMC was founded, they were creating a solution to
(17:37):
a problem that didn't really exist, because all of the chip companies were designing and manufacturing their own chips. So they were going out to startups and saying, hey, now you can form your company, because we can help manufacture these chips. So it wasn't a given. It seemed very controversial at that time to have this idea, so I just wanted to play that up a little bit. So, Amith, would you say Intel, by going into this joint venture,
(18:01):
this preliminary joint venture, is essentially admitting defeat on the IDM model? Like, that is no more.
Speaker 1 (18:07):
I'm not entirely sure. I think that part of the thinking is Intel has a giant amount of manufacturing capacity in the United States. It's not necessarily all at the cutting edge, but some of it's close, and with that footprint, the idea is that TSMC can come in and help Intel build better chips, basically. So TSMC is part of this deal.
(18:27):
I think it's going to own a chunk of the company, or it's some relationship along those lines. It's a little bit murky, based on the latest news that's out there, but ultimately, what they're contributing isn't so much capital as it is process expertise. So we talk about Hamilton Helmer's seven powers, the seven powers that show these routes to enduring differential returns, or really a strategic moat, as some people would call it.
(18:49):
One of them is called process power, and TSMC is an extraordinary example of process power, probably best illustrated this way: if you took all of TSMC's equipment and even all the documentation they have about how to run the equipment and how to run their business, I don't think it would be possible for other people to go into their plants and actually execute the same playbook, similar to, like, the
(19:10):
Toyota manufacturing process. So TSMC has developed this culture, this technology stack. They have, you know, an incredible concentration of PhD-level scientists who help them continually optimize
(19:32):
what they do. So they're going to go help Intel in this agreement, make their manufacturing a lot better. So now, will that mean that ultimately, Intel's fabs become TSMC fabs, effectively? Maybe. And then is Intel just going to focus on competing more effectively with ARM on the CPU front, or competing in the GPU race, finally, in some meaningful way? Maybe. I think there's still a lot of questions to be had, but I do think it's exciting, from an American perspective,
(19:53):
whatever the ownership structure is: the revitalization of American semiconductor manufacturing capacity and being at the cutting edge is important, because as a country, we're behind at this point on the manufacturing side. Not on the design side; we're still world-leading at the moment. But on the manufacturing side, we're clearly behind. The closest in terms of fabs behind TSMC would probably be Samsung, which is in Korea.
(20:15):
GlobalFoundries, which used to be part of AMD, has some advanced manufacturing capability, but not nearly at the level or scale of what TSMC offers. So I think it's a good thing for Intel, because the alternative is to essentially fire-sale their manufacturing capabilities, and I don't think that's good on a lot of levels.
Speaker 2 (20:35):
You were talking about this idea of specializing, Amith, this idea of expertise, and in the podcast I keep mentioning, the Acquired podcast, they had a phrase on there that I wrote down. It's something you've probably heard before, but the phrase was: you can only do one thing well. And that really stood out to me. I started thinking about associations. In what way do you think associations can learn from this
(20:55):
joint venture, from the idea of specializing, from the idea of expertise? Do you feel like there's one area that associations do incredibly well that they should kind of funnel all their resources into?
Speaker 1 (21:06):
Or do you think the current... let me come back to that in just a second. I just want to add one thing to what you said earlier on the Acquired pod. For those of you that haven't actually had an opportunity to take in one of their pods, I would highly recommend it. The TSMC episode is great. They have a series on NVIDIA, they have a recent episode on Microsoft, and they also have episodes on a variety of other
(21:27):
companies, like Costco and Hermès, and most recently on Rolex. So if you're interested in kind of a very well-told story about the history of a particular business that you might be a fan of, or that you are a consumer of, I would definitely recommend it. I absorb their pods, you know, in 15-to-30-minute chunks, usually while I'm out and about or in a car.
(21:49):
I don't typically listen to them, you know, end to end, because they tend to be very, very long, some of them. Actually, the two and a half hours you mentioned, Mallory, on TSMC is one of their shorter podcasts. The one I just listened to on Rolex, I think it's like five hours or something. It took me like a month to get through it, partly because it's not a company I'm, like, personally super interested in, whereas the Microsoft one, I think, was like four, and I
(22:10):
listened to it in two chunks, because that's something I'm super interested in. So, at any event, for listeners, if you want to go deep on the business history of a particular company or sector, I highly recommend Ben and David at the Acquired podcast.
They do fantastic work. But one of the things they talk about, that's similar to the way you described it, is in a lot of their episodes they talk about, hey, what makes the company's product
(22:31):
the best in the category? And one way they describe it is: what makes the beer taste better, right? So if you're a beer producer, if you're a brewery, only do the things that actually make your beer taste better. If something doesn't actually make your beer taste better, like owning your own fields of barley, that probably doesn't make your beer taste better compared to sourcing those grains from the
(22:52):
best producer that's regional to you. Can you get an advantage by vertically integrating, or by owning more of the process? In the case of Rolex, Rolex actually manufactures their own steel, because they have a specification and a proprietary approach to it that they believe is the best way in the world to do that particular type of steel. Is that necessary?
(23:13):
Maybe, maybe not, but in their particular case, maybe that works. But vertical integration, as successful as it can be in specific instances, if you study industries more broadly, tends to not be the norm. You tend to see specialization of labor, aka high distribution of supply chains. So if you look at, for example, auto manufacturing, or any kind
(23:33):
of complex product, you tend to see this whole ecosystem of companies that highly, highly specialize in very specific things. Like in auto manufacturing, you might have a company that just specializes in wiring harnesses, for example. That's all they do. And on and on and on: there's a specialist for every single thing you break down.
So the question I would ask of the association world is: does the association need to be a fully integrated delivery mechanism to deliver value through engagement, right? If you think about why people are part of the association, they come for value to be delivered, right, and that's true for probably any business. In the case of an association, what's the value? Learning. Sometimes it's content.
(24:15):
A lot of times it's connecting with other people. It might be professional advancement, because the association helps them find new career opportunities. Those are some examples of value creation by the association. But the association tends to have, like, this fully integrated stack of services, where they run events and they provide member services and they run technology and do all this other stuff,
(24:38):
and they tend to have a radical distribution of different specializations. We often say there's an association for everything. And so the question in my mind is: does it make sense for all associations to run their own technology stack? Does it make sense for all associations to manage their own events? Does it make sense for all associations to produce all their own educational content?
(24:58):
In many cases it might, and in some cases it might not. Would it make sense for associations to outsource some of those things? Would it make sense for associations maybe to join forces, if they're in adjacent verticals, with other associations? Which, of course, is not a novel concept; that's happened for a long time. But oftentimes you see people kind of repeating the same playbook, and everyone's manufacturing their own steel,
(25:21):
and so I would ask you: do you need to manufacture your own steel? Do you need to have every single thing vertically integrated in your association? And a lot of times that would be, like, an extreme way to describe it. Associations obviously use a wide variety of vendors and all that. But my question is really just an open one. I don't have an answer for it necessarily. It's just, with what's happening with AI,
(25:42):
does your current model for creating value for the end consumer make sense, right? Are there other forms of value that you need to be able to create that you're not seeing, because you're so focused on making this deal, because you're so focused on that lower-level work of producing the event or creating the courseware or whatever it is? Does that make you somewhat blind to the direction of your
(26:05):
sector, where your profession is heading, what people are actually in need of? And are you able to respond to market demand by creating the value that they want three years from now, in the world of AI? That's really the question I have. And if you can anticipate those changes and try to be where the curve is heading, right, in terms of that future creation of value, that offers
(26:27):
opportunities for players who are able to think that way. I don't know that that means their whole business model needs to change, but it could mean that, and the era of powerful AI that we're just starting in means that there's more options.
Speaker 2 (26:41):
Yeah, and actually, what you just said at the end there, now that we're in this age, this era of AI, I wonder if it is possible to do more than one thing well. I mean, my gut instinct is, like, focus on your one thing and execute like heck on that, but maybe in the advent of AI, you can kind of juggle all these things. What do you think about that?
Speaker 1 (27:01):
I mean, I think that's a great question, right. I look at Blue Cypress, our family of companies and all the different things that we do across there. I think the only way that works, at least in our case, is that we have each business not isolated, but at least somewhat separated and defined, where Sidecar is its own thing, and Sidecar has its mission to educate a million people in the association sector on AI by the end of the decade, and that's
(27:23):
very clear, and everything Sidecar is doing is focused on delivering on that mission. Similarly, the folks at the rasa.io platform are focused on delivering personalization at scale for all associations, right, and they're not thinking that much about AI education. They know about it, but there's a degree of separation that allows for separation of focus as well. It seems to be working for us pretty well, but we are,
(27:45):
on the other hand, ultimately, you know, one integrated company, because there's obviously common ownership, and you know, these are related businesses, as we talk about a lot, but they're separated enough. So that seems to work in our case. I know that in a lot of other cases, you know, you have companies that have multiple different products and divisions and services, and it's really hard to see that work.
(28:06):
I think associations do have a singular focus, in the sense that they're there to create value in the lives of their members. That's ultimately what they're trying to do. The question is: is there something disjointed under the hood, right? Do you need multiple, fundamentally different skills to execute on the future of that vision, and what does that
(28:28):
mean in terms of the best way to source those materials? If you think of it as a manufacturing supply chain, saying, what's the best place to get my event production or my event design or my planning, and I'm picking on that one because it's top of mind, since we just ran an event, but, like, that particular critical process, does that make sense? Or does it make more sense to do that a different way? Right,
(28:48):
I think events are a critically important part of the association formula. That's a form of engagement that's likely to be durable for centuries to come, because we're all going to want to get together. That's just a thing that people want to do. So I think associations are well positioned to deliver there. But I guess ultimately it's more of a question than anything else that I have in my mind: when you see something like this
(29:09):
happen, when companies that you grew up with, that were the dominant, you know, behemoth blue chip companies that people look to and even say, hey, we're going to base our methodology for management on, like the whole OKR system that people talk about, objectives and key results. That's an Andy Grove thing from, you know, decades ago, that he implemented when he was running Intel, actually before he was CEO,
(29:30):
and it's still a fantastic methodology, by the way. We use it ourselves, and I highly recommend it to anyone who's looking for a better way to frame their priorities and execute on them. It doesn't mean that OKRs are bad all of a sudden. It means that something happened, right? This company that was at this unbelievable pinnacle lost its way, perhaps, or maybe the model, you know, no longer made sense. So that, to me, is the big
(29:52):
opportunity, to ask that question openly.
Speaker 2 (30:02):
So maybe, as an
association leader, consider
whether your organization might be the Intel, and whether there's something like a TSMC out there that's doing some element of your business in ways that you couldn't imagine, that you could partner with.
Speaker 1 (30:11):
Exactly right.
I think it's just being open-minded about it, because Intel identified, like deep in its culture, as a manufacturing company, as much as they were a semiconductor design firm, and still are, right? They deeply identified as a manufacturing company, and a big, big part of their culture was, you know, the pride in that. And I think associations, all of us, right, we have that: the
(30:33):
roots of where we came from. That isn't to say that that's bad, necessarily. It might be the thing that you focus on, and you say, you know what, that's actually what's going to differentiate us, because we're so good at producing world-class events. We do indeed do it better than anyone else, and by doing that, our beer does taste better, right? So that's what you have to figure out: what is it that
(30:53):
people are wanting from you? And I think that's part of the key question I'm asking us all to ask: what will people want from you in two years' time? Forget about 10. Two years from now, given what's happening with AI in your sector. Forget about what's happening to you and your association, but in your sector, what will people in your sector look for from the association?
(31:14):
That's the open question. Figure that out and then go build that, and then determine whether or not building that means retooling the way you're structured. It might, it might not. You might be perfectly positioned for it, but I suspect there's a good number of us in this world of associations that might find it does make sense to
(31:36):
reconsider existing business models.
Speaker 2 (31:39):
That's a great place
to leave this and move on to
topic two. We're going to be talking about the release of Llama 4, which happened on April 5th of this year, just recently. This release marks a significant step forward in Meta's open-source LLM ecosystem, with models designed to push the boundaries of multimodal and multilingual capabilities. So I want to talk about the family of models in Llama 4, and
(32:03):
I've got to give them a shout-out: the names are much better than what we typically see with these model-family releases. So we've got Llama 4 Scout, a compact model with 17 billion active parameters and a total of 109 billion parameters. It supports an industry-leading 10-million-token context window, which we'll talk about in a bit, making it ideal for tasks
(32:25):
like multi-document summarization and reasoning over large data sets. We've also got Llama 4 Maverick, which also has 17 billion active parameters, but a total of 400 billion parameters. It excels in reasoning, coding tasks and long-context processing. And then, aptly named, we've got Llama 4 Behemoth, which is
(32:46):
still in development, a massive model with 288 billion active parameters and nearly 2 trillion total parameters. It is positioned as one of the most powerful AI models globally, but has not yet been released.
Llama 4 models are natively multimodal, capable of processing text, image and video inputs while generating
(33:06):
outputs, and the models use a mixture-of-experts framework, or MoE, which enhances efficiency by activating only the necessary experts for specific tasks instead of the entire model. This design enables high performance while reducing computational costs. So when I was looking up the release of Llama 4, I watched a
(33:26):
few YouTube videos. It seemed, on my end, like some of the reception to the release was mixed, so I want to cover a few of those items. Due to European Union data privacy regulations, Llama 4 cannot be used or distributed within the European Union, which I think is interesting. Also, reports suggest that Llama 4's release was accelerated due
(33:46):
to competition from Chinese AI lab DeepSeek's cost-efficient models, which outperformed earlier Llama models, with some saying that perhaps this release was a bit rushed or panicked. There's also some commentary on the open-source side: the restrictive open-weights license has drawn criticism. For example, large-scale commercial users require special approval from Meta.
(34:07):
These limitations contrast with more permissive licensing models offered by competitors like DeepSeek, and some developers were disappointed by initial performance issues reported through APIs. Additionally, discrepancies between benchmark scores achieved by an experimental version of Llama 4 and the publicly released model sparked criticism of Meta's transparency.
(34:29):
So that was at least what I was seeing, Amit. I know you're excited about this release and very impressed by it. So what are your initial thoughts?
Speaker 1 (34:40):
A couple of things I'd point out to add to what you said.
One is that they are natively multimodal. So Llama 4 was trained on a combination of text, image, video, audio, etc., and so by fusing together the training content that the models
(35:05):
were trained on, you can really see the model reason across modalities. So we talked about this recently in the context of GPT-4o's new omni-modal image generation, and how that image generation is in the context of the whole conversation you have with ChatGPT in that particular conversation. And that's because it's a single model that's doing both
(35:26):
text output and image output. Now, at the moment, Llama 4, the two released versions, or the two released sizes, the Scout and Maverick, are text-output only, but I believe that's a temporary thing, because they are truly omnimodal models. My suspicion is that with the Behemoth release, which will probably only be inferenced on Azure or other
(35:49):
large-scale clouds, it's such a large model, they'll have a true omnimodal model in terms of output, and that'll be able to compete with the latest from OpenAI on the image front. But what's interesting about it is the level of understanding the model seems to gain from being trained on this fusion of text and image and video; it's significantly better in a lot of other ways that are not related to the question of
(36:12):
did it produce images as an output.
So that's one thing. Another thing is we've been talking about mixture-of-experts, or MoE, architectures for quite some time, and the concept is not new, but the technology continues to improve considerably. So DeepSeek actually has done some tremendous work in improving MoE models, and they have open-sourced all their
(36:34):
stuff, and all their research is out there, and Llama 4 is also an MoE model family. So I want to point out something that you mentioned earlier. For example, with Scout, you're talking about a model that has over 100 billion parameters in total, but 17 billion active parameters, so a fairly small percentage of the total
(36:54):
parameters. So what does that actually mean? It means that for each token the model is looking at, only 17 billion parameters are active at that moment in time, for that token. For the next token, it might be different experts within the model. So what's happening there, essentially, is you have this high degree of specialization within the model that allows for really, really high skill, essentially, in different
(37:16):
categories, and the model is smart enough to be able to switch around dynamically within a given prompt to use multiple experts. What this means is it's more efficient. So it's a 109-billion-parameter model, which is roughly the same size as Llama 3.3 70B, which is what we talked about in December when it came out. This is a bit larger, but it's actually going to be way
(37:36):
more efficient than that, and smarter, because of the MoE architecture and a number of other things. So that's super, super interesting.
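To make that per-token routing idea concrete, here's a toy sketch in Python. It is purely illustrative: the expert count and the routing function are made-up stand-ins (a real MoE router is a small learned network, not a seeded random pick), and only the 17B-of-109B active-parameter ratio comes from the Scout numbers discussed above.

```python
import random

NUM_EXPERTS = 16      # hypothetical expert count, not Llama 4's real number
ACTIVE_PER_TOKEN = 2  # experts consulted per token in this toy

def route_token(token: str, k: int = ACTIVE_PER_TOKEN) -> list[int]:
    """Pick k experts for a token. A real MoE router is a learned
    network; a seeded RNG just makes the toy deterministic."""
    rng = random.Random(sum(ord(c) for c in token))
    return sorted(rng.sample(range(NUM_EXPERTS), k))

def active_fraction(active_params: float, total_params: float) -> float:
    """Fraction of the model's weights doing work for any one token."""
    return active_params / total_params

# Different tokens can land on different experts within the same prompt.
for tok in ["events", "budget", "members"]:
    print(tok, "->", route_token(tok))

# Scout: 17B active out of 109B total parameters per token.
print(f"{active_fraction(17e9, 109e9):.0%} of parameters active per token")
```

The point the sketch makes is the efficiency one from the discussion: at inference time only a small slice of the total parameters (here about 16%) does work for each token.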
There's a lot of noise being made about the 10-million-token context window and, I think, appropriately so. There's a lot more testing that needs to be done on these models in the public sphere. Now that these models are out, the question will be, like, how
(37:57):
good is Llama 4 Scout at doing things with that many tokens? There's this thing called the needle-in-the-haystack problem, which is, you know, can you find something very specific in a very, very large context? And just as a reference point, a token is roughly equivalent to a word; not exactly, but for our purposes that's a close enough
(38:18):
approximation. It's about 10 million words, which is a lot, right? That's equivalent to very large code repositories. It's equivalent to, I think, somewhere on the order of magnitude of a couple hundred books. So it's a lot of content. And so what can you do with that? Well, in theory, the model has enough breadth of insight to look at a very large corpus of content, let's just say an
(38:41):
association's entire knowledge base, or a very large chunk of it, and to be able to inference across that. In theory, what that would allow you to do is have an even more performant knowledge assistant, an even more capable data analyst and so forth. So larger context windows should, in theory, give you better intelligence at the aggregate level.
(39:01):
The reason I kind of hedged that statement a little bit is because with models like Gemini, which have had long context windows for a while, performance has been good in certain cases, but it doesn't necessarily bump up the model's overall understanding of the content, because there are still some limitations in the underlying architecture that cap how valuable that is. It's not necessarily, like, you know, the silver bullet
(39:23):
we've been looking for that says, hey, you can have unlimited content. Also, if you did take 10 million tokens and drop them into Scout, the time it would take and the cost for inferencing would be unsustainable. So if you do use the full context window, or anything close to that, you're talking about processing a very large amount of content. It's going to slow down, it's going to be way more expensive. So don't necessarily take that as being, like, you know, the
(39:46):
ultimate solution to LLM shortcomings. It's not that, but it's still exciting nonetheless. It's a valuable tool.
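The back-of-envelope arithmetic above (10 million tokens, roughly one word per token, a couple hundred books, real money per full-context request) can be written out. The words-per-book figure and the per-token price below are assumptions for illustration, not published numbers for Llama 4 Scout or any real provider:

```python
# "1 token ≈ 1 word" is the rough approximation used in the discussion.
CONTEXT_TOKENS = 10_000_000
WORDS_PER_BOOK = 80_000     # assumed length of a typical book
PRICE_PER_M_INPUT = 0.50    # hypothetical $ per million input tokens

books = CONTEXT_TOKENS / WORDS_PER_BOOK
cost = CONTEXT_TOKENS / 1_000_000 * PRICE_PER_M_INPUT

print(f"~{books:.0f} books' worth of text")      # ~125 books
print(f"~${cost:.2f} per full-context request")  # ~$5.00 per request
```

Even at a modest hypothetical rate, a single full-context request costs dollars rather than fractions of a cent, which is why filling the whole window routinely would be slow and expensive.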
So those are some of my initial thoughts. I think, you know, it's exactly what we expected from Meta, right? About a year ago they released Llama 3. Now it's time for Llama 4. Notable is that they do not have a reasoning model.
(40:07):
So there is no equivalent to DeepSeek R1, Claude 3.7's Extended Thinking mode, or OpenAI's o1/o3 models. And for those that haven't heard us talk about reasoning models, the quick synopsis is: these are models that are smart enough to realize that they need to break down a problem into chunks, work through those complex problems step by
(40:28):
step, even check their work before producing an answer. But they do take longer to produce the answer than a model that's just doing a quick shot and answering your question as fast as it can. Llama 4's release was accompanied by a little bit of a cryptic message from one of their senior execs saying that reasoning is on the way. That isn't to say it's going to be part of Behemoth
(40:50):
necessarily, but it cannot be lost on the folks at Meta that that's a critical part of a frontier AI offering at this point.
So I find it exciting. I think you're going to see a DeepSeek R2 very soon, maybe this month, and DeepSeek R2 is probably going to leap past Llama 4 in some ways, right? And then, you know, OpenAI and Google.
(41:11):
Google released Gemini 2.5 Pro, which is no slouch. It's a very powerful model. It's actually free for consumers to use through their UI. It's not free at the API level, but it's free for consumers. And then, of course, there's the folks at Anthropic that make the Claude models. So this is just a crazy competitive landscape, and this
(41:32):
is just the latest bit to keep track of. What it should tell associations is the same thing we've been saying for quite some time now: you have choice, you have optionality. You don't have to go down a singular path saying, oh well, I've heard of ChatGPT, therefore I'm going to use OpenAI for everything. That very well may be the right solution for certain use cases, but there's so much choice. Many association leaders talk to me about privacy. They talk
(41:54):
to me about data security, and they say, hey, I don't want to take all of my data and drop it into ChatGPT. I agree with that. You should be very cautious about whoever you share your data with, whether they're an AI vendor or a traditional SaaS vendor or anything else, for that matter. In the context of OpenAI or any other major lab, do you really want to, you know, have all of your data residing in their
(42:17):
ecosystem? Now, their terms of use, their license with you, assuming you're a paying customer, do indicate that they cannot legally use your data for training future models. But that's just an agreement. Does that mean that agreement will be abided by? Some people believe it, some people do not. So that's up to you to determine.
(42:37):
The reason I raise all of that stuff about privacy and data security is that, if you are talking about open source, you have your choice of inference providers. So you can run Llama 4 on Groq, with a Q, G-R-O-Q. You can run Llama 4 on Azure. You can run it on AWS. You can run it on tons of different places. You could even set up your own infrastructure to run Llama 4.
(42:58):
It's an open-source model. You can run it anywhere, and that's true for all models with open source and open weights. You can run them wherever you'd like.
Why is that important? Well, think about everything in terms of economics and incentives. Why would you be concerned about OpenAI, Anthropic, Google, or even Meta itself if you use their service, or DeepSeek, for
(43:20):
that matter, having your data, even if the agreement said they're not allowed to use your data for training? Why would you consider that to be a potential risk? Well, potentially, there's an incentive to use your data, and everyone's data, to make new models, and that incentive is far larger than the downside risk of using that data, even violating agreements in some cases.
(43:53):
If you're the model developer and you are also the inference provider, because there's no separation of concerns, in theory there could be value in inappropriately using some of that data. That is something that could be argued. In comparison, if you're an inference-only provider, someone like a Groq, or some of the other providers that are out there, even some of the cloud providers that might have relationships with the model developers but aren't themselves the model developers,
(44:14):
there's a separation of concerns that may give you more comfort. So, for example, the folks at Groq, who we've talked about: they do not train models, they do not build models. They don't have a horse in the race. They work with Meta, they work with Mistral, they work with a bunch of other model providers. They even have one of OpenAI's actual open models running on
(44:35):
their cloud as well. So they don't really have an economic incentive to, you know, misappropriate your data at all. So there's that piece of it as well that I encourage people to be thinking about. So open source means optionality, it means cost reduction, but it also means better security.
Speaker 2 (44:53):
I was smiling there
because, as you were talking
about model providers versus inference providers, I was thinking of Intel and TSMC, and, like, you know, it's expertise, and we're talking about the same thing in a circular way. Amit, you said the idea here, which we go back to all the time, is that associations have choice. I know we've mentioned on the pod before that several of the products that fall within the Blue Cypress family of companies
(45:15):
use Llama models. I don't know if they still do. So what is your kind of personal take, or your business take, on Llama 4? Is that the direction you want to continue going? Are you impressed with what you've seen, or are you a bit skeptical?
Speaker 1 (45:29):
Well, we haven't
tested it in any significant way
yet. We've played with Llama 4 already, but we haven't plugged it into Skip or Betty or any other products in the family in any meaningful way. So that'll come shortly. We are completely model-agnostic. We're also inference-provider-agnostic. That's the way we architect everything we do, again, to allow our clients, who are these
(45:50):
associations, to have optionality. So with any of our products you can choose different models on whichever inference provider you like, and it's a really powerful thing to know that you have that available. Now, different models have different capabilities. So if you were to say, hey, I want to use Llama 3.0 instead of 3.3 for some reason, you know, some things in certain products
(46:10):
aren't going to work as well, or might not work at all, right? So, for example, certain features within our data analyst AI, Skip, do not work with models that are weaker than GPT-4o or Llama 3.3. It just wasn't possible to do the current level of capability we have in Skip until those models got as smart as they've gotten. So we tend to look at it really in terms of model class.
(46:32):
We'd say, okay, Llama 3.3 is roughly equivalent to GPT-4o. Llama 4 is a step above that, and, you know, something like Claude 3.7 is probably a step above that as well. So we look at it more in terms of, like, the power level of a model, and within that, you know, we're constantly swapping models in and out.
(46:57):
The way we've designed all of our software architectures is to be completely model-agnostic, and you can literally plug and play these things in a number of different ways. Sometimes our products use kind of a medley of models, where we use Llama 3.3 for certain things, but then certain tasks are way more complicated, so we use a reasoning model like an o3-mini or a DeepSeek R1. And then, you know, we tend to run our inference either within Azure or with Groq. And so we,
(47:21):
you know, have these separations built into the way we've designed stuff, which I think is really important to have in place, because there's so much change happening right now that you cannot predict which of these providers is necessarily going to be the best. I think the advantage for someone like Meta, and models that are either Llama 4 itself or one of the many fine-tunes
(47:41):
that will happen: you know, when an open-source model is released, very quickly, especially for the larger ones, there will be dozens, if not hundreds, of fine-tuned versions that have been, you know, adjusted in different ways to make them better at certain tasks, like coding or whatever the task might be. So using those mainstream models sometimes is beneficial, because there's so much money and so much development going
(48:03):
into them, and that can be helpful. But largely these things are commodities. Largely it's more about, like, you know, what power level, if you will, these things are at, and you can plug and play them. So different models have different advantages, of course, but, you know, if you're getting to the point where these things are all really, really good, that optionality leads to radically lower costs, which is what we're seeing.
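The plug-and-play design described here can be sketched as a thin routing layer. This is a minimal illustration of the model-agnostic idea, not Blue Cypress's actual code; the provider names, model names, and stub functions are all made-up stand-ins for real inference-provider SDK calls:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelConfig:
    provider: str  # e.g. "groq" or "azure" (illustrative labels)
    model: str     # e.g. "llama-4-scout" (illustrative model name)

# Stub backends standing in for real inference-provider SDK calls.
def _call_groq(prompt: str, model: str) -> str:
    return f"[groq/{model}] {prompt}"

def _call_azure(prompt: str, model: str) -> str:
    return f"[azure/{model}] {prompt}"

PROVIDERS: dict[str, Callable[[str, str], str]] = {
    "groq": _call_groq,
    "azure": _call_azure,
}

def complete(cfg: ModelConfig, prompt: str) -> str:
    """Application code calls this one function; which model runs on
    which provider is a configuration choice, not a code change."""
    return PROVIDERS[cfg.provider](prompt, cfg.model)

# Swapping models or providers means changing the config, nothing else.
print(complete(ModelConfig("groq", "llama-4-scout"), "Summarize Q1 membership."))
print(complete(ModelConfig("azure", "o3-mini"), "Plan the analysis steps."))
```

The design choice this illustrates is the one Amit describes: because every product talks to a single interface, a new model release like Llama 4 can be dropped in behind it without touching application code.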
Speaker 2 (48:25):
The last question I
want to ask you, Amit, because I
know you've got a lot of knowledge here. There was some criticism of its open-weights license. Are these models truly open source, still? Can you kind of explain what that means?
Speaker 1 (48:37):
Yeah, well, I mean,
you know, Meta is essentially
saying that people who compete with them can't use the model without permission. So they're essentially saying, hey, if you have more than, I believe it's 700 million monthly active users was the specific language. You know, that's certainly not us, maybe one day, but that's not us today or anywhere
(48:57):
close to that, and that's not any association I know of. It's probably people like Snapchat, people like Amazon, people like Microsoft. So it's their competitors at that scale that are barred from having access. And it doesn't actually say they can't use it. It just says they have to get permission from Meta, which they probably would not. But I also think those companies probably wouldn't want
(49:20):
to use Meta's stuff. So I think it is less of a big deal than some people are making it. I don't think it's unreasonable for a company to have some degree of restriction on their open source. I don't think all open source needs to be a do-whatever-you-want, no-matter-what kind of license. There are lots of different flavors of open-source licenses out there, some far more restrictive than this and some far
(49:42):
more permissive. So I think it's totally fine for the association use case.
Speaker 2 (49:48):
Yeah, I don't think
you all have 700 million members,
but you might, especially if we reinvent that business model.
Speaker 1 (50:04):
Everybody, thank you for tuning in to today's episode. We will see you all next week. Check out Ascend: Unlocking the Power of AI for Associations at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the
(50:24):
association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.