Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Hey there, Sidecar Sync listeners. I'm Cartoon Mallory. While I might not have all the nuances of the real Mallory, this technology showcases how AI is transforming content creation.
Speaker 2 (00:13):
Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions.
(00:36):
I'm Amith Nagarajan, chairman of Blue Cypress, and I'm your host.
Greetings everyone, and welcome to the Sidecar Sync, your home for content at the intersection of associations and AI. My name is Amith Nagarajan, and my name is Mallory Mejias, and
(00:58):
we're your hosts. And before we get into today's topics, which are super fun, as usual, but particularly fun today, we're going to take a moment to hear a word from our sponsor.
Speaker 3 (01:05):
Let's face it: generic emails do not work. Emails with the same message to everyone result in low engagement and missed opportunities. Imagine each member receiving an email tailored just for them. Sounds impossible, right? Well, rasa.io's newest AI-powered platform, rasa.io Campaigns, makes the impossible possible. rasa.io's campaigns transform outreach through AI
(01:26):
personalization, delivering tailored emails that people actually want to read. This opens up the door to many powerful applications like event marketing, networking, recommendations and more. Sign up at rasa.io/campaigns. Once again, rasa.io/campaigns. Give it a try. Your members and your engagement rates will thank you.
Speaker 4 (01:45):
Amith, how are you doing today?
I'm doing great. How are you?
I'm doing well myself. I'm also very excited for this episode. I don't want to spoil it, but we've got some fun conversations and demos lined up.
Speaker 2 (01:57):
Pretty cool stuff.
Yeah, you showed me some previews of what you want to show today, and I'm excited about it. Some really interesting implications for this next evolution in this technology.
Speaker 4 (02:07):
Yeah, I love that. I love that the impact of this technology will be transformative and just huge in general, but also that it can be so fun, and I told Amith before we recorded, I had a moment thinking, is this really my job? Because I was having a blast. Amith, I'm curious if you can share with listeners.
(02:31):
How has that content automation that we've been discussing for our AI Learning Hub for Members version been going?
Speaker 2 (02:33):
It's really, really fun.
It's a great project.
We've been incubating this idea for quite some time now. It's been percolating in our minds for probably over a year in total, but really in development for a number of months. And, for those who haven't heard us talk about it before, what we're doing essentially is trying to attack the problem of change across two dimensions.
(02:55):
Essentially, for our learning content, and I think all associations can relate to this with their own learning content, the minute you create it, it starts becoming stale because the information becomes out of date. And, of course, we're in the world of delivering AI education. So when you're teaching people about AI, this stuff is changing so crazy fast, it starts to decay really, really quickly, and
(03:18):
so we tend to rerecord our content in the traditional way, with people like Mallory and myself and other colleagues recording new versions of the same courses as well as new courses on a high-frequency basis, and that's a problem, because we want to be able to update our content even more frequently than we've been doing, which is roughly every six months. We do a complete refresh, and we think it needs to be updated
(03:42):
probably every month or two, and so there's that issue.
The other dimension of change is that we're starting to partner with more and more associations where we take that learning content about AI and we adapt it for their industries. So we'll make it AI for X, X being whatever your profession or industry is, and we're doing this as a really exciting revenue-share partnership with our friends in the association
(04:03):
community. We're super pumped about it, and so, of course, that's another dimension of change, because now you have all the changes happening in AI, and then you have lots of variations of the content, essentially specific versions for different use cases and different vocabulary, which are, of course, very important to make it really hit home for a given audience. And so we wanted to solve that problem.
(04:25):
We wanted to make it possible to move really fast and do all this at the same time, and so pre-AI, this would have been effectively an impossible task. But what we have with AI is the ability to generate high-quality audio and high-quality video and then to be able to stitch that together in a way that ultimately results in us being able to fully automate our
(04:46):
production pipeline of content, from source material all the way to produced content on the LMS. I won't go into more detail on it because that's not the point of today's pod, but it is something I'm really fired up about. Over the next 30 days, we're going to start putting some of the content generated by this tool on our Learning Hub
(05:06):
and starting to get some feedback. We've already gotten some informal feedback from folks, and it's been super positive. So very, very excited about that, and I think hopefully it'll serve as inspiration for what associations can do as well.
Speaker 4 (05:17):
Yeah, yeah, and it kind of fits with today's topic too, because we're talking about image and video, and I do have a question in there: maybe we'll want to start integrating some cartoons into the AI Learning Hub, potentially.
Speaker 2 (05:28):
That sounds like fun.
Speaker 4 (05:30):
All right.
Today we are talking about some fun topics, as Amith said. We're talking about ChatGPT's 4o image generator and Hedra, which is a company I was not familiar with until I demoed it for this pod, and then we'll also be talking about Disney robots. I'll explain more on that in a bit. But first and foremost, ChatGPT's 4o image generation
(05:51):
feature is OpenAI's latest advancement in generative AI, for now, offering users the ability to create and edit highly detailed and realistic images directly within the ChatGPT interface. So here's an overview of some features and functionality. It excels at creating photorealistic images, editing existing ones and rendering text within images, a
(06:15):
major improvement over earlier models like DALL-E. It supports multi-turn conversations for refining images, allowing users to iterate and improve their creations. Users can specify details like aspect ratios, colors using hex codes, or even request transparent backgrounds. The model can transform uploaded reference images or use
(06:37):
them as inspiration for new creations. It's capable of generating a wide variety of content, from cinematic landscapes and logos to creative art styles. It can also handle practical applications like designing infographics or mockups for businesses, or even comic strips. The tool is available now for all users, including those on
(06:58):
the free tier, though free users are limited at this moment to generating three images per day. Paid subscribers on the Plus and Pro plans gain enhanced access with fewer restrictions.
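The spec-style controls listed above (aspect ratios, hex color codes, transparent backgrounds) can be bundled into a single prompt programmatically. Here's a minimal Python sketch; the helper and its option names are illustrative, not part of any OpenAI SDK:

```python
import re

def build_image_prompt(subject, aspect_ratio="1:1", colors=None, transparent=False):
    """Assemble one prompt string from structured image options.

    `colors` is a list of hex codes like "#1A2B3C"; invalid codes raise
    a ValueError before any prompt is sent.
    """
    hex_pattern = re.compile(r"^#[0-9A-Fa-f]{6}$")
    parts = [subject, f"aspect ratio {aspect_ratio}"]
    for c in colors or []:
        if not hex_pattern.match(c):
            raise ValueError(f"not a hex color: {c}")
    if colors:
        parts.append("palette: " + ", ".join(colors))
    if transparent:
        parts.append("transparent background")
    return "; ".join(parts)
```

Validating the hex codes up front is the point of the sketch: the model will happily accept a malformed color name, but you lose the precise control that hex codes give you.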
I want to take a little pause here before we talk about Hedra. Amith, I know you have done some experimenting with the image generator that we talked about recently. You built a comic strip.
(07:19):
What was your experience like?
Speaker 2 (07:21):
Yes, I did this the night that this product was announced by OpenAI, and in fact I posted this comic strip on my LinkedIn. We can link to that in the show notes, but if you follow me on LinkedIn you can go to my profile and see it. By the time you listen to this it might be a few weeks old, but I thought it would be an interesting test, because I wanted to take some concepts from what we talk about a lot in
(07:42):
the association world and see if we could create something kind of sort of funny, but really also see if the image generator could stitch together several panels in a comic and actually keep the scene logical, where you're not having different characters appear and disappear, and put in text.
As you mentioned, that's been
(08:03):
the big hole in all of these image generators. None of them have been able to do really high-quality, reliable text. In fact, with prior image generators you couldn't even really reliably tell them not to emit text at all, so a lot of times they would throw text out there even when you said please don't do that, because your intention was to take it into a design tool and add text yourself. Now you can.
(08:26):
I can say this: my immediate reaction was, oh my gosh, this is amazing, because I was able to create a comic strip where the copy in it wasn't necessarily funny, but the fact that it did all that in one shot, and it maybe took 60 seconds or something, was really phenomenal. So it's definitely a next-level capability in terms of image generation. And the thing I got really excited about shortly after that was things like infographics or other kinds of typical business
(08:47):
communication tools. Normally you wouldn't build these things, but now you can start to build them for all sorts of different things to improve your communications.
Speaker 4 (08:57):
I think I've told you all at this point, Midjourney is my preferred AI image generator, and I think there are still use cases for it. I'm interested to see how Midjourney updates in response to this ChatGPT 4o image generator. I've got to double down on what Amith said about the text. It is fantastic. I kind of thought when I gave it the instruction to put Sidecar
(09:18):
Sync, for example, in an image, maybe it would take me a few times. No, it got it perfect the first time with very minimal direction. I also think the ability to iterate with it is so helpful. So in Midjourney you kind of have to do it in a one-shot prompt. You can do subtle variations or big variations within Midjourney, but you kind of have to get the prompt right
(09:39):
the first time. But with this one you can go back and forth and say, oh, could you brighten up the colors a little bit? Can you do XYZ? So it's much more user-friendly. I find that it still has an AI look, if you know what I mean; to me, you can now tell which images have been generated with GPT-4o. But overall I'm really, really impressed with this.
(10:00):
I still think I will use Midjourney, maybe for more realistic images, but in terms of creating cartoons, I'm very impressed with 4o.
Speaker 2 (10:10):
Well, I think that's a great use case, because cartoons are a fantastic way of making people remember stuff, especially if the copy in there is well written, and of course we know language models can help you quite a bit with that. I do want to point out one quick thing before we move on about the 4o image generation, a little bit of subtlety that might not be obvious to everyone: up until now, the tools like
(10:33):
ChatGPT that have had image generation as one of their features, as part of a language model interaction, have actually been calling out to a separate image generator model whenever they need to generate an image. So in the case of OpenAI and ChatGPT, they would use DALL-E 3 under the hood, so you could basically use DALL-E from ChatGPT. And actually, if you remember a little bit earlier in
(10:55):
time, you didn't have that ability to generate an image. You'd have to go to a separate page for DALL-E and you would prompt it directly. And with DALL-E 3, they embedded it in ChatGPT, and in fact ChatGPT rewrote the prompt to DALL-E, but it was actually calling a different model. The point is, the DALL-E image model had no idea about anything in the conversation other than the enhanced prompt that
(11:20):
ChatGPT sent to it.
So that's how it worked up until the moment in time when we got to 4o.
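The handoff described here, where the chat model condenses the conversation into one enhanced prompt for a separate image model, can be contrasted with the omnimodal approach in a rough sketch. All function and model names below are illustrative stand-ins, not OpenAI's actual implementation:

```python
def rewrite_prompt(conversation):
    """Stand-in for the chat model condensing a conversation into one image prompt."""
    # Only the latest request survives the handoff; prior turns are summarized away.
    return "enhanced: " + conversation[-1]

def legacy_image_call(conversation):
    """Old pattern: a separate image model receives a single prompt, no conversation."""
    prompt = rewrite_prompt(conversation)
    return {"model": "separate-image-model", "sees": prompt}

def omnimodal_image_call(conversation):
    """4o-style pattern: one model sees the whole multi-turn conversation."""
    return {"model": "omnimodal-model", "sees": list(conversation)}
```

With a three-turn conversation, the legacy path hands the image model a single rewritten string, while the omnimodal path sees every turn, which is why iterative refinements like "brighten the colors" work so much better.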
Now, when OpenAI released GPT-4o, and later the ChatGPT that used it, people were like, what is this 4.0? People call it 4.0, which, by the way, is not the name. It's 4o, and the O stands for omni. The intention behind it is that it's omnimodal, right, and that means multiple modalities in,
(11:43):
multiple modalities out. And so the original 4o could take in images, and it could describe them to you and use them as part of a prompt. It would understand the images, but it would not emit images directly. Now, the 4o omni model, as far as I know, is the first model that does this. It's a single model that's able to have both inputs and outputs in various modalities.
(12:04):
Why does that matter? Well, by having an omnimodal model, that means that the model understands the full conversation. So what you described, the ability to have iterative, continuous improvement for that image to represent kind of holistically what the conversation is about, that's a very, very powerful layer of dimensionality that
(12:24):
image-only models are going to have a really, really tough time dealing with. So my prediction on this is that models that are pure image models, like Midjourney, unless they do some really amazing magic, are going to have a hard time dealing with this, because they just don't have the context that an omnimodal model has when interacting with the user, at least for most use cases.
Speaker 4 (12:44):
Yeah, that's a really interesting distinction. I love Midjourney, so do better, guys. I want to be able to use both.
All right, I want to introduce kind of a subtopic to this topic, which is Hedra. So Hedra is an AI company specializing in generative video creation. Founded in 2023 by two Stanford PhDs with experience at NVIDIA,
(13:05):
Google and Meta, the company aims to democratize video production by making it safer, more expressive and accessible to creators of all skill levels. Character 3 is Hedra's latest omnimodal model; it processes text, audio and images simultaneously, enabling seamless integration of storytelling elements in a single workflow.
(13:25):
There's also Hedra Studio, which is a unified platform that combines various AI tools for video production. It allows users to create customizable avatars with unique appearances, voices and personalities, while offering real-time previews and intuitive controls. Key capabilities within Hedra are the ability to transform static images into lifelike characters that can speak, sing
(13:48):
or rap. There's support for multilingual text-to-speech inputs, and you also get fast video generation, like 60-second videos from 300 characters of text, with lifelike expressions and synchronized movements.
So I'm wondering if you all can kind of deduce where we're going with this. So we talked about creating images with 4o in ChatGPT, and
(14:11):
then we've talked about using Hedra to animate some images. So I'm going to share my screen for those of you who are joining us on YouTube, and then I'll try to kind of talk my way through this as best as I can. I'll show you the finished project at the end, but I just wanted to show you my process a little bit and how I created this. So right now I'm using ChatGPT 4o, and I actually took my
(14:34):
headshot that I use on LinkedIn, and also that we use in the Sidecar Sync cover. I dropped it in here and I gave it a really simple prompt just to see what it would do with it, and I had not run this experiment prior: turn me into a cartoon wearing a yellow cap that says Sidecar Sync. I did get an error the first time, which might have been my internet connection, but I asked it to try again, and you all
internet connection, but I askedit to try again and you all
(14:58):
have to check this out.
I'll probably post this onLinkedIn.
I get like an exact replica ofmy headshot, even down to the
leaves in the background,because I took this in front of
like a bush, where I live evendown to the leaves, to my
earrings, I'm pretty sure.
Yes, it is very detailed and Ihave this yellow cap on that
says sidecar sink.
(15:19):
Nothing misspelled, nothinglooks crazy, and my exact outfit
.
So that was step, step one.
I then had to do the same thingwith you and me.
Hopefully I didn't get yourpermission to do this, but I
feel like we're co-hosts, I'mjust allowed to.
Uh, I said do the same thing forthis image, so I didn't even
like necessarily repeat theprompt, but I said you know,
please include the sidecar sinkcap and then we get a meath
(15:42):
version.
I don't feel like this looks a ton like you, Amith, but it did get the bricks in the background of your headshot. It even got the stripes on your shirt. Pretty impressive all the way around. And then I went a step further and asked it to create a little infographic for the Sidecar Sync podcast with both of those. So, Amith, you were talking about context. This is pretty impressive, that it then used both of those
(16:03):
images to create a new one, and it says Sidecar Sync Podcast: the intersection of AI and associations.
Then I went to Hydra.
This is inside the Hedraplatform.
I had to do this separately.
So first I dropped in my avatarand then I generated a script.
(16:25):
I used Claude to help megenerate a really kind of short
and funny script, which you'llsee really soon.
I was also able to choose thevoice Lots of options here.
I did some digging because thevoice is pretty good and I had a
hunch that they may have used11 Labs and they do.
So you're actually getting 11Labs audio within the Hedra
platform, which is great.
I typed in my script here, Ichose the voice and then I said
(16:49):
she's an enthusiastic podcasthost.
I pressed generate and then itprobably took about three to
five minutes and I got my clip.
I did the same thing withAmit's clip and right now I am
going to stop sharing and we aregoing to play for you all the
clip.
Speaker 1 (17:09):
Hey there, Sidecar Sync listeners. I'm Cartoon Mallory, created using ChatGPT's image generator and brought to life with Hedra's animation technology. Pretty wild, right? While I might not have all the nuances of the real Mallory, this technology showcases how AI is transforming content creation. Just think about the possibilities for your
(17:30):
association: personalized welcome messages, multilingual communications, or scaling your educational content without having to be on camera all the time. But hey, don't just take my word for it. Let's check in with my cartoon colleague, Cartoon Amith. What do you think about this technology?
Speaker 5 (17:47):
Thanks, Cartoon Mallory. I have to say, being a cartoon is quite liberating. No bad hair days, and I can present from anywhere without leaving my desk. On a more serious note, what excites me most is how this technology could help associations with limited resources create professional video content at scale. Think about educational modules or personalized outreach that
(18:10):
would otherwise be impossible to produce. The technology will only get better from here, and forward-thinking organizations should start experimenting now. Back to you, real Mallory and Amith.
Speaker 4 (18:21):
Amith, what did you think when you saw Cartoon Mallory and Cartoon Amith?
Speaker 2 (18:26):
It put a big smile on my face, and it was so funny. I knew that you were preparing for this pod and working with these kinds of tools. I'm like, that's so awesome, because it's fun, and fun things obviously are enjoyable, but they also really open up perhaps a creative avenue that you might not otherwise choose to exercise.
(18:47):
In some ways you think, oh, the association's in a serious business, and yes, you are, but your members also like fun stuff. And you also can think of this as, okay, well, what else can you do with this? And maybe it's not always cartoons, but I think cartoons can be a very powerful way of communicating something important, something serious, but in just a really expressive,
(19:07):
interesting way.
It also shows the power of omnimodal models, really. You had this conversation, just natural conversation that you're having with the model. It knew what you had said earlier. It knew what it generated in terms of the images it outputted. It knew about your input images. It just all blended together.
(19:28):
So we've said on this pod a number of times that at some point it won't be this kind of model, that kind of model. It'll just be AI, or an AI model, right, and all these models kind of blend together, and that's kind of where things are going. So it's pretty impressive what you're able to demonstrate, and if you think about where we were even maybe six months ago, I don't think either of us would have predicted that we'd have it
(19:50):
this quickly, that this level of capability would be in our hands, and it's essentially free.
Speaker 4 (19:54):
Yeah, free, yeah. And I'm pretty sure, was it last week or the week before, Amith, on the podcast? I'm just not sure if it's out yet. We were talking about how the next step would be creating a video version of these cartoons, and literally it was already available. It goes back to the idea of breaking our brains. We've got to just constantly challenge ourselves.
Speaker 2 (20:12):
Totally.
Speaker 4 (20:13):
What does this make you think of next?
Speaker 2 (20:14):
Next, we need like a hologram or something. Uh-oh.
Speaker 4 (20:17):
I'm sure we could do it if we really wanted to. We could probably have a hologram at digitalNow. You mentioned kind of that this is a fun technology. I had, frankly, a blast with it. I highly recommend, if you all, and I think you do, listening to this podcast, have some interest in this stuff, that you play around with it. I think it could also be a really fun tool to create a
(20:39):
cartoon for your family, for your children, if you have them. But where does your mind go, Amith, for associations here, zooming out, broader landscape, trend lines? What does this make you think of for associations?
Speaker 2 (20:52):
Well, first of all, I'm just thankful that you didn't make my character sing or rap, so thank you.
Speaker 4 (20:56):
Okay, I didn't know I could do that; I probably would have made your character sing or rap. But next time.
Speaker 2 (21:02):
That'll be in a future episode, I guess. So, in terms of what I think happens with associations, I think about this from a more general lens, which is: how do we communicate, and how do we communicate with each other effectively in an interesting way? How do we communicate something that might be kind of dry in a little bit more exciting way? So you take some key concepts, whether you're communicating
(21:24):
with kids or adults, whether it's a professional context or personal. If we can communicate, if we can express ourselves in different ways, and ways that go beyond our personal creative ability, right? Like for me, I can barely do stick figures, but now I can express ideas I have in all sorts of cool artistic ways. Is it real art?
(21:46):
Is it not real art? I'll leave that to the philosophers and the artists to decide. I know there's a lot of controversy about this, but it allows people like me, who have zero artistic capability, to have an outlet where I can communicate ideas in a completely different way than I've ever been able to.
So that is exciting, and that leads to an organizational capability to say: how can we best communicate, how can we
(22:07):
best educate, how can we best deliver content? Those are the types of things that are so exciting for associations, because associations are in the business of educating. Of course, communicating is fundamental to everything, but educating really is the application. And then, of course, connecting people. And so, you know, connecting people for professional networking, or connecting people to collaborate as volunteers in
(22:29):
a committee, or connecting people to, let's say, collaborate on a standard, where maybe competing organizations are coming together in an industry to work together on an open standard to advance the industry, and on and on and on. There are so many ways that connecting us is such a critical part of what associations do. So to me, it's all those things.
(22:51):
I think some very immediate, obvious applications are: think about your learning content. Think about adding some more dynamic elements to pretty static learning content. I know we're going to be doing this with all of our AI content. With the Sidecar AI Learning Hub, we're going to have a lot more fun stuff like this to illustrate concepts, because why not? Right? It would have been prohibitive to create cartoons,
(23:14):
or to create tons of infographics everywhere, to create whatever, but now we have all these additional creative expression outlets. To me, that's the real idea here.
Speaker 4 (23:24):
And something, I mean, not to get into the philosophy of the art side, though that side is interesting to me for sure, but it's not like Sidecar would have gone out there and contracted an individual to animate all of our content. We just wouldn't have done it. We would have recorded it ourselves, to be frank. So I do agree with you that this is opening doors to things that we never could have considered. You did mention the AI Learning Hub content automation, Amith,
(23:48):
and I'm just curious, from your perspective, would you consider incorporating cartoons? I know we're using human, okay, human-like AI avatars right now, but would you use both, either or?
Speaker 2 (24:03):
Oh, a hundred percent. I would love to see that happen, because to me it's about how effective you are at getting the material across in a way that the person can relate to, can understand and will retain. And so that doesn't mean everything's a cartoon, because then you kind of kill the point, but if you introduce bits and pieces here and there, I think people look forward to that. You know, you have a lesson, like, you know, data in the age of AI, which is super important but maybe not
(24:24):
the most compelling, exciting topic in the world. Don't watch that video at night. But what if we had a little cartoony thing in there, right? Or maybe the cartoon comes to life, and each of the panels in the cartoon individually comes to life as an animation with cool audio, and there's some singing in there or whatever, you know. There's all these ideas, and if you can do that effectively just by thinking about the idea, or even have the
(24:46):
AI help you brainstorm the idea, that becomes really powerful. It makes us more effective at what we do. So we 100% will be experimenting with this across Sidecar's AI learning content.
You'll probably start to see this stuff in our LinkedIn posts. You'll see it in blogs. You'll see more and more and more of this content. And think about it this way, too: you have competitors, to the
(25:07):
extent you have any in your market, and you certainly have competitors, by the way, in terms of the attention that people give you. Their time is split among however many different things they look at, so you are competing for attention. So if you're doing this and others are not, you have a significant advantage, and that advantage may not last forever, but it lasts for a period of time, and I think that's exciting as well.
(25:27):
There is one other thing I want to quickly mention about content modalities like this that I think is underappreciated, and it's an emerging area that I predict is going to become a much bigger deal in learning in the next, say, 12 to 24 months, and that has to do with interactive games or interactive role-play-type scenarios. So think about this: if you were developing a learning module
(25:51):
for your members, most of it is, let's say, one-way, single-directional content, like pre-recorded videos or downloadable assets they could read, worksheets. That's all great, and that's the bulk of content, but sometimes you say, hey, this particular concept is so important, we're going to have some kind of interactive learning exercise, right?
(26:12):
And a lot of times these things are frankly pretty lame; they're not very interesting. But what if you could have an AI just generate a game specific to that lesson? Right, where you say, oh, for this particular lesson on AI data platforms, we're trying to teach people how they can connect all their disparate data sources to the AI data platform. Let's give them a game of some sort, and we don't even specify
(26:34):
what. But we go to Claude 3.7 or we go to Gemini 2.5 Pro, both are outstanding at this task, and we say: we have a learning module that has this kind of content. We want you to think up five ideas for games that will reinforce these concepts. And then not only will it give you those ideas, but it'll actually build the game for you, and in an interactive artifact in Claude, you'll be able to play the game
(26:55):
and see if you like it, and if you like it, you can literally copy and paste it into your LMS. So the opportunity, and of course you could have done this in the past, but the production budget for what I just described probably would be six figures for a single such game, right, maybe higher. And now all of a sudden, it's essentially free. So it's again the thematic concept here we're talking about. The broader stroke is going from scarcity to abundance, and
(27:19):
that's extremely powerful.
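The brainstorming step described here, handing a lesson's concepts to a model like Claude 3.7 or Gemini 2.5 Pro and asking for five game ideas, is at its core just prompt assembly. A minimal sketch; the function name and wording are illustrative, not a prescribed template:

```python
def game_ideation_prompt(lesson_title, concepts, n_ideas=5):
    """Build a brainstorming prompt for an LLM such as Claude or Gemini.

    `concepts` is a list of key points the lesson covers; the prompt asks
    for game ideas plus a buildable artifact. Tune the wording to taste.
    """
    bullet_list = "\n".join(f"- {c}" for c in concepts)
    return (
        f"We have a learning module titled '{lesson_title}' covering:\n"
        f"{bullet_list}\n"
        f"Propose {n_ideas} ideas for small interactive browser games "
        f"that reinforce these concepts, then build the best one as a "
        f"single self-contained HTML artifact we can paste into our LMS."
    )
```

Keeping the lesson concepts in a structured list means the same helper can regenerate games whenever the module content is refreshed, which fits the automation pipeline discussed earlier in the episode.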
Speaker 4 (27:22):
And if you're still a skeptic of delivering rather serious information through fun mediums, I've got to shout out NINJIO, which is the company that we use for cybersecurity training, and all of their lessons are done with cartoons. Obviously, cybersecurity is quite a serious topic, but they're really good. I enjoy watching those videos. I get a lot out of them, so be open-minded. All right.
(27:44):
Topic two for today is another fun one. I'm calling this the fun episode because we're talking about Disney robots. So NVIDIA, Disney Research and Google DeepMind have announced a collaboration to develop Newton, an open-source physics engine designed to simulate robotic movements in real-world environments, unveiled by NVIDIA CEO Jensen Huang during the GTC 2025
(28:11):
conference, or GPU Technology Conference, in San Jose, California. This partnership aims to revolutionize Disney's next-generation entertainment robots, including the Star Wars-inspired BDX droids, which are set to debut at Disney theme parks worldwide starting in 2026.
I'll add an aside here: if you haven't seen the video on YouTube of the keynote where he brings them on stage, you've got to check it out. They're adorable.
(28:32):
They're not scary at all. They're really cute. Some key features of Newton: it allows developers to program robotic interactions with various objects like food items, cloth, sand and other deformable materials. The engine is designed to make robots more expressive and capable of handling complex tasks with greater accuracy. Newton integrates with Google DeepMind's robotics tools, which
(28:54):
simulate multi-joint robot movements. Disney plans to use Newton to enhance its robotic character platform for lifelike and interactive robots. The BDX droids showcased during the keynote represent just the beginning. Disney Imagineering SVP Kyle Laughlin stated that this collaboration will enable the creation of robotic characters
(29:15):
that are more engaging and capable of connecting with guests in uniquely Disney ways. Beyond entertainment robots, though, Newton has potential applications in industrial robotics, AI-driven humanoid assistants and manufacturing systems.
It addresses the sim-to-realgap, enabling robots to learn
from simulations and adapt theirmovements for real-world
(29:36):
conditions.
In addition to Newton, nvidiaunveiled GROOT-N1, an AI
foundation model for humanoidrobots aimed at improving
perception and reasoningcapabilities.
Model for humanoid robots aimedat improving perception and
reasoning capabilities.
The company also introducednext-generation AI chips
Blackwell, ultra and Rubin and anew line of personal AI
computers.
So, Amit, this is another fun topic that I think still has
(29:58):
quite serious implications. We've been keeping an eye on robots basically since the inception of the Sidecar Sync. Why do you think that's important for our listeners to keep an eye on?
Speaker 2 (30:08):
Well, everything we
talk about with AI is in the digital world, and so when we have robotics, we're able to connect AI with the physical world. Or, put another way, it's going from bits to atoms, right? So it makes it real in our minds and it becomes physically real to us, right? So I think the applications are enormous. There's a lot of things in the world that are not solvable
(30:29):
purely with digital solutions. You know, as much as we say that we want everything to jump on digitization, and therefore jump on the back of Moore's law and the internet and now AI, there's certain things that don't work that way. Like, how do you take care of an aging population? How do you take care of more and more people that need medical care, who either can't afford it or we don't have
(30:50):
enough doctors and nurses to go around, right? So that's one application that I think robotics is going to be very, very interesting for.
And there's other things too. Like, think about all the different things that you have happening in your house. Either you don't want to do your own laundry, or perhaps you can't, right? What about something to help you do that that's
(31:11):
affordable and scalable and all that? So there's a lot of things in life where robotics will be extremely powerful. I think the consumer-facing stuff is interesting, because if you think about advanced robotics thus far, it's either been demo videos that you've seen at shows or online, or you've seen them in industrial settings.
(31:31):
Robotics has been a thing in much more of, like, fixed-location robotics, where you have, like, a robotic arm doing auto manufacturing or something, but nonetheless robotics. But those robotics were kind of pre-AI, in the sense that they were much more specifically programmed to do very, very specific tasks. Whereas part of what has been announced here that's kind of under the hood, if you will, of
(31:53):
the cute Disney robots is this physics engine and this new foundation model for robotics that integrates with all the other work that DeepMind's done. Which is this interesting collaboration between, obviously, a consumer brand that's pioneering in this kind of stuff, with Disney, Google's DeepMind, who have some of the best AI researchers in the world, and, of course, NVIDIA, who's done
(32:14):
tremendous things with compute and hardware. So it's worth watching, and I think that the ability for a platform like this to be available for other people to build on top of is also exciting, because there will be a proliferation of robotics if you can make it easier and way less expensive for anyone who has an idea for robotics to just
(32:36):
build on top of a generalized robotics platform, which is what NVIDIA and Google are after. Disney's after, you know, cute robots in the theme park, to get people to come to Disney World, and other applications as well, and movies and so forth. But ultimately, for Google and NVIDIA, they're after a generalized platform.
Speaker 4 (32:55):
So would it be fair
to say maybe robotics will have less of an impact on internal association operations, but maybe a huge impact, potentially, on their members?
Speaker 2 (33:07):
I don't know.
I mean, I think it depends. Like anything, where the movement of something in the physical world is necessary could be something that robotics helps with. So I don't know if there's certain associations that have, you know, inventory in their office or whatever they're doing. Probably less and less of that these days. But I
(33:28):
definitely think in the fields where people are working, in fields that are fundamentally, like, people-facing or industrial settings, 100 percent. But I think this is going to affect all of us in our daily lives a lot faster than we probably anticipate. I wouldn't be surprised if, by the end of this decade, it's fairly commonplace, but still probably a little bit on the expensive side, but fairly commonplace still, to have some
(33:50):
consumer-type robots in the household doing some basic tasks. Wouldn't surprise me at all, and if they're cute enough, like the Disney robots, people might be accepting of those.
Speaker 4 (34:00):
I might get one if it
does my laundry and it looks like that.
Speaker 2 (34:22):
I might be open,
we'll see. Talking about something that we thought would be number one: it's interesting to our audience, probably interesting to most people, that this is going on. And then this is in addition to, and on top of, all the other advancements in humanoid robotics that have been going on, and we've talked about that a little bit on this pod. There's so much going on there, there's so much investment happening, there's so much research happening there, along
(34:42):
with, you know, the continual progress in industrial robotics. All of this is coming together because the common denominator now is that you can turn it into software that actually drives these things.
You can turn it into general-purpose foundation models, and the physics engine kind of complements that, right? So a physics engine is deterministic. It's something that says, hey, we know this is the way things will react, based on the laws of physics, essentially.
(35:03):
So it's basically an engine kind of like, in a way, a game engine, you know, the way certain things work in video games, but this is a real-world, three-dimensional, real-time, open-source physics engine. It's a lot of words to basically say it's a piece of software that understands how the world works, whereas AI
(35:24):
models don't really understand that. Some people are out there trying to create neural networks that have a better understanding of the real world, and I think that's great. But ultimately, you want the neural network to be complemented by a deterministic piece of software that has a rules engine that says, look, this is actually the way that physics would work in that situation, which is going to be a lot more consistent and reliable, and then leverage the
(35:45):
AI to make good choices about what's the right action or reaction to have, based on what's happening. So I think it's a great combination. It's what a lot of people have been talking about, you know, the natural progression. There's nothing really novel about what they're trying to do, but I think the combination of this skill set is really interesting, because you're doing it in a way
(36:05):
where you have innovators in three different dimensions coming together to partner. So I thought that was particularly interesting too. Ultimately, for associations, I can't say that this has specific applicability to you tomorrow. It's more like, over the next five years, ten years, your profession, your industry, will likely be affected by this, and maybe there'll be some applications for you within your
(36:27):
life or within your own work.
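[Editor's note: the split Amit describes, a deterministic simulator that encodes how the world works plus a separate model that only chooses actions, can be sketched in a toy form. This is purely illustrative: the names, the 1D "drone" world and the simple hover controller are all invented for the example and have nothing to do with Newton's actual API.]

```python
# A toy illustration of the deterministic-physics-plus-policy split:
# the physics step encodes the rules of the world and always gives the
# same answer; the "policy" (a stand-in for a neural network) only
# proposes actions and never needs to know the equations of motion.

DT = 0.1       # simulation timestep, seconds
GRAVITY = -9.8  # m/s^2

def physics_step(height, velocity, thrust):
    """Deterministic physics: given the current state and an action
    (upward thrust), the next state follows from the equations of
    motion, identically every time."""
    velocity += (GRAVITY + thrust) * DT
    height += velocity * DT
    if height <= 0.0:  # the ground is a hard constraint
        height, velocity = 0.0, 0.0
    return height, velocity

def hover_policy(height, velocity, target=5.0):
    """Stand-in for a learned controller: it just picks a thrust based
    on what it observes; thrust can't be negative."""
    error = target - height
    return max(0.0, 9.8 + 2.0 * error - 1.5 * velocity)

# Run the loop: the policy proposes, the physics disposes.
h, v = 0.0, 0.0
for _ in range(300):
    h, v = physics_step(h, v, hover_policy(h, v))
print(round(h, 2))  # settles near the 5.0 m target
```

The point of the split is exactly what Amit describes: you can swap in a smarter (or worse) policy and the world still behaves consistently, which is what makes learning in simulation and then transferring to the real world plausible.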
Speaker 4 (36:31):
So, in the situation
where an association's industry or profession is surely going to be impacted by robotics, let's say manufacturing, for example, what responsibility do you think the association has at this point? I feel like there's a lot of question marks still. We're seeing a lot of advancements and investment in this space, but it's not like we're seeing major
(36:51):
disruption occurring right now. Do you feel like associations should get ahead of the curve, start producing content about that? What do you think?
Speaker 2 (36:58):
Yeah, I mean, the
association should be the voice of their sector and should be looking at things that are going to affect the sector, and help the sector prepare, help the sector advance itself by taking advantage of these technologies. So that's generally a true statement across all the different fields associations represent. What I would say with this is that it might be some of the less obvious industries.
(37:19):
So, for example, industrial automation and robotics: manufacturing, auto manufacturing in particular, but a lot of other manufacturing, has had a heavy emphasis on automation and robotics for a long, long time. So it's definitely not a novel idea to talk to manufacturing leaders about how to add more robotics, or same thing with industrial warehousing; someone like Amazon and lots of other warehouses have that.
(37:40):
Where I think it's probably less anticipated, but where this advancement will be more likely to have an impact, is some of the kind of softer, fuzzier environments. So if you're in a factory or if you're in a warehouse, there's a high degree of predictability, or a higher degree of predictability, in terms of your environment, your surroundings, the kind of objects you're dealing with. It's a lot more mechanistic, right? Whereas what about
(38:03):
housekeeping? What about painting? What about delivering food? What about all the things where you're in the real world, where there's all these chaotic scenarios happening, much of it caused by us as people, but also just, like, the weather or, you know, New Orleans potholes or whatever, right? So you have all this chaos, and so having robotics that are smart enough to handle
(38:23):
that and to still accomplish whatever their task is, is interesting. So say a world comes up in the not-too-distant future where robots can do certain types of tasks, like folding laundry. If you take that one and extrapolate from it, well, it can probably also wash your floors, it can probably also wash your windows, it can probably also make your bed,
(38:44):
maybe, and do some other things. So that's interesting.
But then what's the other application of that? So, at kind of an industry level, what does that mean for, like, the house-cleaning industry? Does it mean everyone has a housekeeping robot? Does it mean that there are businesses that start up that lease these out or rent them out, or you have a service where, like, the housekeeper that comes to your house is a robot?
(39:05):
What about the industrial setting, right? So there's all these kinds of questions, and that's one example, right? What about in areas where there's tremendous labor shortages? Like when Hurricane Ida came through New Orleans a handful of years ago. I think you were still here when that happened, right? Yeah.
It was terrible for a lot of reasons. The power was out for four weeks or three weeks, but also it
(39:26):
caused a lot of roof damage, and it took forever to get a roofer to come to your house. It's a dangerous occupation. There's not that many people who do it. Normally they don't have demand at the level they did, right? So of course everyone's mad: I can't believe it's going to take me six months to get my roof repaired. Well, normally they don't repair half the roofs in the city all at the same time, right? So it kind of makes sense. But what if there were some general-purpose robots that
(39:47):
could learn to be roof installers very quickly? And there's no issue with, like, risk or hazard or safety or insurance requirements for these robots to be on your roof, other than them falling on your head or something, but much reduced insurance concerns compared to humans on your roof. And all of a sudden, everybody's roofs get repaired way less expensively and way faster, right? So that's an interesting scenario where you have these
(40:09):
choke points, or disaster relief, right, or search and rescue. There's just so many applications where robotics could not just save us money, which I think by itself is obviously going to be the drumbeat of economic decision-making, that will always be there, but also do things that we cannot do but for the scale of this kind of emerging technology.
(40:31):
That's where I think it gets more exciting.
Speaker 4 (40:34):
That's a great point,
Amit. So I feel like AI is going to be able to do the knowledge work and the physical work. Last question for today's episode, because I heard you mention this briefly. I don't think it was on the podcast, but you mentioned to me that you were doing some college tours with your son, so he is starting to think about what he's going to do in his future. Amit, what are you recommending to your children in terms of,
(40:56):
like, career paths that they go down, when we're having these kinds of conversations?
Speaker 2 (41:01):
You know, it's a tough
one. I think part of it is the follow-your-passion kind of thing matters to some extent. I mean, it matters as a person. But in terms of career opportunity, I guess, is how I'm trying to answer your question. My thought process for both of my kids, and they're both getting closer to that time, is: pick something where you get really good at communicating and you're also really good at
(41:23):
connecting with people, where your interactions with people are really a lot of the value. And that can be said to be true for a lot of different professions. But I think that the AI is going to do most of the cognitive labor for most people very soon. I don't know if that's in three years or in seven years, but I mean, in some ways we're already there with a lot of the things
(41:46):
we're doing. It's also going to expand the range of possibilities. This episode, in a lot of respects, is the fun episode. It's also the possibility-expanding episode, because of the things we can do with AI. You know, it's not like we're displacing tons of graphic artists to create all these cartoons. We just never would have created the cartoons. Of course, the inverse of that is also true. People who were creating cartoons will use this tool instead of hiring graphic artists.
(42:06):
So there is that issue. But the demand for something that is, you know, becoming abundant is enormous, and that's going to create more ideas and more opportunities. We tend to be pretty good, we collectively, as a species, tend to be pretty good at coming up with new ideas. So being in a creative pursuit of some sort, or having a creative, like, thread to a discipline.
(42:28):
So I'm just trying to get my kids to maybe try a couple different things and to really focus on just getting good at building relationships and communicating. Certainly hard skills and specific disciplines are important too. That's like a very generic answer in a way, but, you know, ultimately I think that's going to be what's super, super important for us.
Speaker 4 (42:49):
So lean on being
human, lean on humanity, it seems like, is kind of the advice. Well, everybody, thank you for tuning into this fun-but-serious episode. We will see you all next week.
Speaker 2 (43:02):
Thanks for tuning
into the Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the
(43:25):
association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.